Pitching for legal business as a law firm once meant understanding the client’s needs and demonstrating how the firm might meet them. At Ashurst, however, a recent request for proposal came with a new demand: that the firm show how it would combine generative artificial intelligence with human expertise to handle the client’s legal projects.
Whether for making pitches or training junior associates, AI is becoming a dominant presence in legal workplaces, requiring both law firms and companies’ in-house legal teams to navigate complex new working relationships between human experts and digital tools.
For Ashurst, the pitch involved going head-to-head with another firm. Both were given 10 matters to work on over the course of two weeks, to show how they would use generative AI.
Ashurst won the business. “The reason we were successful, the client told us, was because of how we augmented the technology with the expertise,” says Hilary Goodier, a partner and global head of Ashurst Advance, the firm’s technology-enabled legal services division.
Blending AI with human expertise is not always easy, however. Goodier says it takes planning to design working processes that accommodate the strengths and weaknesses of both humans and digital tools.
“We’re seeing a lot more work upfront to prompt and test the AI and trial the process,” she says. “And that means a multidisciplinary team of lawyers, project managers and technologists working together before we jump into delivering the matter.”
Use of AI in the corporate world means in-house lawyers are also starting to embrace a multidisciplinary approach, says Pamela Salling, managing director of in-house counsel recruiting at legal recruitment firm Major, Lindsey & Africa.
Companies, she says, now want their in-house lawyers to be translators who can bridge law, strategy and technology. And if they cannot, “they’re faltering at the finish line,” she says.
As generative AI starts to permeate the corporate workplace, lawyers must prepare to collaborate with senior data executives, says Leigh Dance, founder and president of ELD International, an adviser to global in-house legal teams. “They are often on committees with the person who heads AI or the person who heads information security,” she says. “That means they need to learn about what those other functions do.”
Meanwhile, generative AI is transforming the legal learning experience. This is partly because the technology can tailor content and pedagogy to individual learning styles and partly because it offers new forms of training, such as simulations and immersive learning.
For example, global arbitration law firm Three Crowns and Stanford University’s CodeX project, a legal tech innovation hub, are using generative AI to create lifelike simulations that students and legal professionals can use to develop cross-examination skills.
How junior lawyers develop legal expertise is also changing. Tasks that were once part of learning on the job — such as contracting, legal research and document drafting — can now be handled by AI technologies.
This could be a good thing, says Winston Weinberg, chief executive and co-founder of legal AI start-up Harvey. “The premise of a career in law was always apprenticeship — you would learn the craft from someone with experience and work your way up with their mentorship,” he says.
But in more recent years, this approach has got “lost in a sea of administrative tasks,” says Weinberg. With AI assuming responsibility for this more mundane work, he adds, junior lawyers are free to spend more time with experienced colleagues, helping to revive the original principles of the apprenticeship.
“There used to be this fiction that by doing the grunt work, you were learning how to be a lawyer,” says Danny Tobey, chair of DLA Piper’s AI and data analytics practice for the Americas. Now, “there’s more opportunity for mentorship in the things humans are suited to”.
Tobey experienced this evolution at first hand. As an associate, he says he would spend 15 hours a day reviewing paper documents. “A couple of years later, it was all e-discovery,” he adds. “The only thing I lost was hours spent alone in rooms with boxes — and that was not high-value training time.”
However, as the shift from paper to digital files enables AI to classify, analyse and extract new insights from legal documents, lawyers face a new challenge: to use AI aggressively to meet new business objectives while ensuring the data remains secure.
“That is one of the paramount tensions,” says Michael Pastor, law professor and dean for New York Law School’s technology law programmes.
The dilemma for in-house lawyers, he says, is that their corporate bosses and business development teams are pushing for rapid implementation of AI, to get ahead of the competition. Yet they must also apply caution to prevent data being misused, lost or stolen.
“As an in-house lawyer, you need to help your client navigate those tensions while keeping an eye on the business objective,” says Pastor. “That is where lawyers are going to earn their money.”
Law firms face similar tensions: as guardians of their clients’ privileged information, they can only deliver the right solutions if the integrity of that information is preserved.
This, says Tobey, means having conversations with senior executives to ensure they have AI data governance policies in place. “I talk to boards of directors and CEOs all the time and tell them this is fundamental to the accuracy of information throughout their organisations.”
Using data irresponsibly, he says, exposes clients to risks that could lead to litigation, regulatory scrutiny and reputational crises — which will end up on the desks of their legal advisers. “We’ll pick up the pieces,” says Tobey. “But I’d rather keep the vase intact.”