How to Safely Use AI and Automation in Legal

Tonkean
December 5, 2023
20 min read

In AI, there’s a riff on Newton’s Third Law (“for every action, there’s an equal and opposite reaction”): for every overhyped take on the latest in AI, someone will voice an equal and opposite fearful one. Too often, the conversation around AI is pulled taut between the two extremes of overhype and doomsday.

Both of those takes are overly simplistic. And the resulting attitudes towards AI—turn it loose versus ban it entirely—aren’t especially productive. The benefits and dangers of AI are far more nuanced—and significantly less dramatic. 

Rather than let ourselves be pulled back and forth between these extremes, we need to seek out and understand that nuance if we're going to take advantage of the benefits AI has to offer while mitigating the risks. 

For example, as tempting as it is for, say, legal professionals to outright ban the use of large language models (LLMs) for their organizations because of the obvious liability risks, there’s a middle ground where they can apply AI and automation safely and practically to improve some of their processes. 

That’s not to say there isn’t a lot of genuine, informed, and reasoned optimism and concern out there. But the extreme voices can make it all hard to parse.

The key is understanding that AI is fundamentally just a tool (or a set of tools), just like computers, the internet, cloud computing, and automation. As with any tool, you have to learn what it’s for, how it works, and how to use it safely and effectively. 

And it’s critical to understand how that knowledge will be different depending on the field you’re in. For example, the concerns HR professionals have (accuracy, credibility, diversity) will be different from the ones creative media professionals have (provenance, copyright, distribution). 

Those concerns need to be addressed in every field—and fast—because AI is already fueling the next giant wave of digital transformation. 

The legal field is an ideal canvas upon which to paint a picture of how to understand and apply AI safely and intelligently, because it contains both rote, manual work—like legal matter intake and sending documents—and highly specialized, creative expertise. And there are very real risks involved that lawyers need to mitigate carefully.

We’re going to look at how you should understand and apply AI and automation both conceptually and practically in Legal.    

AI is not a magic box

“Any sufficiently advanced technology is indistinguishable from magic.” That’s one of science fiction writer Arthur C. Clarke’s “three laws”. He wrote it way back in 1962, but it’s perhaps even more salient today. The idea is that when we don’t understand how a technology works (at least at a high level), we allow it to become something both greater and less accurate than what it actually is. 

Imagine trying to explain a fighter jet to a Civil War soldier, or video chat to a telephone switch operator in the 1950s, or the modern internet to a computing enthusiast in the 1970s. It would boggle the mind so fully that they’d wonder what supernatural powers were at play.

More than 100 million people have played with ChatGPT at this point. It feels like a little magic box, where we ask it things and marvel at its sometimes shockingly human-sounding (and sometimes hilariously not human) responses. Too often, we treat it like a curious toy that we poke at but don’t understand. 

The danger of allowing ourselves to view a compelling new piece of technology in that way—to, as Jaron Lanier recently wrote in The New Yorker, “mythologize” it—is that we’ll view it as a supernatural good or evil to be embraced or avoided, rather than learning how to use it and harness its potential to do things and solve problems.

The latest in generative AI is no different. Generative AI is the area of artificial intelligence focused on generating content—text, art, code, and so on. Popular generative tools out there right now include ChatGPT and DALL-E. Generative AI is neither good nor evil, neither utopian nor dystopian. And it’s not magic. It’s just the latest cutting-edge technology, and we’re still learning how to wrap our minds around it. That’s the broad task in front of the tech industry right now.

Generative AI has become astonishing in what it can do. But it’s instructive to take a step back and understand what AI really is and how it works—and what its limitations are.

How does AI work?

To put it exceptionally colloquially, AI is more or less just very fancy pattern matching. It’s based on AI models, which are algorithms that are trained on datasets to uncover patterns. These models are designed to do something you need—sort numbers in a list, identify what’s in photographs, analyze text, and so on. 

Data is fed into the models to “train” them. AI training simply means teaching a particular model to interpret data correctly, often by asking it to finish an incomplete pattern and then confirming or correcting the model’s guess, which improves its accuracy over millions of these guesses. After the model is sufficiently trained, its makers can put it into production. Then, you can have the model do what it’s designed to do for your purposes—sort numbers in your list, and so on.     

The more data that gets fed into the training set, the more it “learns,” and ostensibly the better the model performs. 
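To make that “guess, then confirm or correct” loop concrete, here’s a toy sketch in Python. It’s purely illustrative (the “model” here is a single number and the hidden pattern is y = 3x, while real models learn billions of parameters), but the shape of the training loop is the same:

```python
import random

# Toy "model": a single adjustable number. Real models have billions of
# parameters, but the training loop has the same basic shape.
weight = 0.0

# Training data: the hidden pattern is y = 3x.
examples = [(x, 3 * x) for x in range(1, 20)]

learning_rate = 0.001
for step in range(10_000):
    x, y_true = random.choice(examples)
    y_guess = weight * x                 # the model "finishes the pattern"
    error = y_guess - y_true             # confirm or correct the guess
    weight -= learning_rate * error * x  # nudge the model toward the answer

print(f"learned weight: {weight:.3f}")   # converges toward 3.0
```

Every guess that gets corrected moves the model a little closer to the pattern hidden in the data, which is all “training” really is.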

Notably, you can use a model on your “something” without allowing your data to become part of the training data. (There’s a huge and important concern about data; a lot of data is sensitive for a variety of reasons, so ensuring that sensitive data doesn’t end up somewhere it shouldn’t is a key guardrail that AI tools need.)

And yes, the quality of the data has everything to do with the quality of the model—classic garbage in, garbage out.   

If an AI works well, it certainly can feel like some kind of magic. For example, an AI can accurately identify your face and tag you across the thousands of photos you have stored somewhere, with hardly more than the press of a button.

Amazing? Yes. That’s because that AI has been given a very, very narrow task. But if you asked it to find your face in those photos, tag you in them, and then email them to your contacts, it wouldn’t begin to know how to do that. 

Even the most powerful and intricate AI can’t do more than one essential task. Consider fully autonomous driving, which is one of the highest-level applications of AI (and which has proven extremely difficult to execute successfully). Although autonomous driving comprises multiple types of AI (and other complementary technologies) and the system is perceiving and adjusting to many, many inputs on the fly…that’s all it can do. That AI can’t tell a joke. It can’t brew a cup of coffee. It can’t teach you how to tie your shoes. 

Large language models

Large language models (LLMs) like ChatGPT feel different than a lot of the AI tools out there. Partially, this is because of how readily available many of them are to regular consumers, as opposed to engineers or computer scientists who are working in labs and academic settings. LLMs still need narrowly defined tasks to perform well, but they have a critical additional trick, which is that they “understand” context. That’s why you can use common language prompts with them and get results you actually want. 

Here’s an example: “You're a social media manager for a company that provides AI-powered business automation software. Write an engaging LinkedIn post about why every organization should be leveraging AI in the automations they build and deploy internally.”

There are so many details here that the LLM “understands,” like what a social media manager is, what business automation software is, what LinkedIn is, what a good LinkedIn post might entail, and an argument for why an organization should do X.

That was a real prompt, by the way. Here’s how the LLM responded to it, including the hashtags:

Embrace the Future: AI-Powered Business Automation 

Businesses, it's time to level up your game. AI-powered automation software isn't just a fancy trend; it's an essential tool for enhancing efficiency, reducing costs, and driving growth.

Here's why your organization should jump aboard the AI train:

  • Streamlined Workflows: Automate repetitive tasks, helping employees focus on higher-value activities
  • Improved Accuracy: Eliminate human errors and maintain consistency across processes
  • Enhanced Decision-Making: Gain actionable insights from data, making better and faster decisions
  • Increased Agility: Empower your organization to adapt rapidly to market changes and customer demands
  • Competitive Edge: Stay ahead of competitors by leveraging cutting-edge AI technology

Invest in AI-powered business automation today and propel your organization to greater heights. Don't get left behind! #AI #automation #BusinessGrowth
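If you wanted to send that same prompt programmatically, it would look something like the sketch below, which assumes the OpenAI Python SDK and an API key in your environment; the model name is illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

prompt = (
    "You're a social media manager for a company that provides AI-powered "
    "business automation software. Write an engaging LinkedIn post about "
    "why every organization should be leveraging AI in the automations "
    "they build and deploy internally."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```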

Indeed, LLMs have begun to feel more and more like artificial general intelligence (AGI). Simply put, AGI would be essentially sentient. But, again, understand what’s happening even in the very impressive example above: The LLM isn’t creating new things. It’s scouring what humans have already made, then rearranging that material into something that resembles what humans have already made.

Impressive? Yes. 

Magic? No. 

But that’s great news for you as you work to bring such powerful technology to bear on your organization’s digital transformation. Because it’s not magic—it’s just a tool—which means you can learn how to use it. 

Apply AI and automation strategically, at the process level 

At Tonkean, we’ve spelled out ways you can bring tools like GPT-4 to bear on solving problems or making processes easier, faster, or better. That’s what we’ve done with our AI Front Door, and now we have an entire product suite called LegalWorks that’s dedicated to business process automation in Legal.

Conceptually, all of that rolls up to the belief that the best way to leverage AI and automation to facilitate digital transformation in your organization safely is to be strategic about it, at the process level. 

Operations of all kinds—especially Legal Ops—are about people more than they are about processes. Successful operations are about enabling people to do better, more meaningful work. You have to optimize the experience around your processes for your users. That’s what Tonkean’s business process automation is all about. Great implementations of technology are about augmenting what humans can do, not replacing them.

In-house legal teams work with employees from every corner of an organization, and often the requests are urgent or sensitive. Further, legal teams need to be able to contribute to the organization’s larger strategic business objectives, whether that’s M&A, reining in costs, managing risk, or something else.

In Legal—as in other internal service departments—your processes are the means by which you meet the critical mandates of today’s legal departments, including: 

  • Scaling efforts to improve operational performance
  • Systemically mitigating risk and ensuring compliance
  • Providing more value to the business
  • Increasing efficiency

But you can’t reliably begin to achieve any of those goals if people don’t follow your processes. That’s the first and highest-level issue to address, because process adoption remains shockingly low. At least 67% of employees routinely skip legal processes altogether. 

How do you increase process adoption? You have to make following your processes easier than not following them. That is, you have to remember that your process is about people and build those processes to meet people where they are.

That’s why we built LegalWorks—to enable legal teams to harness the potential of AI-powered process automation technology safely and effectively. 

Many common processes and tasks in Legal can be automated (and empowered by AI), like legal matter intake and inbound request categorization and resolution. But you need structure to make it work—structure that enables legal teams to create the flexible, powerful, personalized legal processes employees will actually follow, and structure around the raw firepower of innovative technologies like LLMs.

That’s what LegalWorks does. It includes the Tonkean AI Front Door, an AI-powered intake experience that’s accessible to employees via email, Slack/Teams, or a customized web portal. It lets employees ask for what they need using common language prompts, like “What NDA form do I need for the ACME account?” The Front Door will “understand” the context of the question and reply with the necessary form, for example.

Other components of the LegalWorks platform let you create processes that automatically triage and classify unstructured inbound business requests—including by using NLP to gauge the urgency of each request—automatically handle simple requests, and automatically route more complex tasks to the right person or group in the organization. It wraps around whatever tools, policies, and apps your organization already uses, which avoids the need for new software. And for the end user or “customer,” it all happens in whatever apps they’re already using, like email or Slack, so there’s no change management creating friction for them. For example, if they need to know which form to grab for a client, they can ask for (and receive) what they need within a Slack channel without ever having to leave the app.

And it’s 100% no-code, so you don’t have to rely on developers to implement process improvements or iterate on workflows.
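To give a feel for what that triage logic does conceptually, here’s a hypothetical sketch. It is not Tonkean’s implementation (in LegalWorks you configure this behavior without writing any code), and a production system would use NLP or an LLM rather than simple keyword matching, but the routing decision at the end is the same:

```python
# Hypothetical triage sketch (not Tonkean's code): classify an inbound
# request's urgency and route it accordingly.
URGENT_TERMS = ("litigation", "subpoena", "deadline", "breach")
SIMPLE_REQUESTS = ("nda", "w-9", "certificate of insurance")

def triage(request: str) -> str:
    text = request.lower()
    if any(term in text for term in URGENT_TERMS):
        return "escalate: route to senior counsel immediately"
    if any(term in text for term in SIMPLE_REQUESTS):
        return "auto-resolve: reply with the standard form"
    return "queue: assign to the legal ops team for review"

print(triage("Need the NDA template for a new vendor"))
# -> auto-resolve: reply with the standard form
```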

LegalWorks gives legal teams a powerful way to create business value, save money, reduce risk, increase process adoption, accelerate time-to-resolution for internal requests, and drastically reduce matter cycles.

But a huge question that lingers over exciting AI tools like this—especially for legal professionals—is whether this kind of AI is safe to use.

Is GPT safe to use for legal processes?

The short answer is yes, LLMs like GPT are safe to use for legal processes—if you use them correctly, apply them to the right tasks, and ensure that any necessary guardrails are in place. 

What do we mean when we talk about “safety” for Legal? It’s about data privacy and security, client confidentiality, accuracy in automation, auditability, risk reduction, and overall best practices for using any technology tool.

A lot of the safety comes from how you apply these technologies. In other words, you can make a system safe by design, rather than building something that then needs to be fixed and gatekept while you hope your guardrails hold.

Legal professionals are right to blanch at the idea of relying on ChatGPT to dispense legal advice. First of all, the liability for a company that allows an LLM to give out advice of that nature is sky-high, and one of the core tasks of legal professionals inside of organizations is to reduce risk. In that sense, using an LLM in this way is antithetical to their mission.

And even if an LLM like ChatGPT returns what appears to be solid legal advice, you have to validate that result with an attorney to be safe anyway, which mostly defeats the purpose of using it in the first place. 

That’s why rote work like matter management—rather than legal advice—is an ideal application for automation. And, importantly, the way Tonkean is designed, there’s built-in transparency and tracking, so you can always self-audit to ensure every message returns the right answer or ends up in the hands of the right lawyer.

The entire design of LegalWorks and the AI Front Door ensures safety and efficacy. You, the user, create and define all the processes, tasks, documentation, and policies within Tonkean. You supply your data, and that’s what the AI looks at—not all the collected data across the world that may be related to your request. That way, when an employee makes a request, it’s far easier for the AI to resolve it correctly.

Consider again the above example, asking for an NDA form for a client. There are only so many NDA forms in your system, and there are only so many clients, and that particular client has a Salesforce entry in your organization’s database.
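Conceptually, scoping the AI to your own records works like retrieval: search your own documents first, then answer only from what was found. Here’s a simplified, hypothetical sketch in Python (the clients, form names, and paths are invented for illustration):

```python
# Hypothetical sketch (not Tonkean's code): resolve a request against your
# own records only. Form names and paths are invented for illustration.
NDA_FORMS = {
    "ACME": "forms/nda-acme-mutual.docx",
    "Globex": "forms/nda-globex-oneway.docx",
}

def find_nda_form(request: str) -> str:
    for client_name, path in NDA_FORMS.items():
        if client_name.lower() in request.lower():
            return f"Use {path} for the {client_name} account."
    return "No matching form found. Routing your request to the legal team."

print(find_nda_form("What NDA form do I need for the ACME account?"))
# -> Use forms/nda-acme-mutual.docx for the ACME account.
```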

And with a platform like Tonkean, your data doesn’t need to be fed into an LLM’s training data for you to get results. (But you should definitely check the Terms of Service for any AI tool you use or that your data touches, and adjust the data controls accordingly.) 

Tonkean adds another layer of protection: when you craft rules in Tonkean for handling incoming legal matter emails, any message containing words like “privileged,” “internal use only,” or “confidential” won’t be sent through GPT. Instead, the platform will route it to a person in your organization for manual review.
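Conceptually, that guardrail is a simple pre-check that runs before anything touches the model. Here’s an illustrative sketch (not Tonkean’s actual code):

```python
# Illustrative guardrail sketch (not Tonkean's actual code): check for
# sensitive terms before anything is sent to an LLM.
SENSITIVE_TERMS = ("privileged", "internal use only", "confidential")

def route_message(message: str) -> str:
    """Decide whether a message may be processed by the LLM."""
    lowered = message.lower()
    if any(term in lowered for term in SENSITIVE_TERMS):
        return "manual_review"   # route to a person, never to GPT
    return "llm_pipeline"        # safe to process automatically

print(route_message("Draft attached. Privileged and confidential."))
# -> manual_review
```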

How to leverage AI safely while mitigating risks

These layers of protection and the ability to triage and coordinate requests are critically important. It’s this ability that enables teams to leverage the incredible power of AI tools like GPT-4 while still maintaining the appropriate controls and security.

There are some general best practices you can follow when you’re vetting an AI tool:

  • Ensure human-in-the-loop design, which creates transparency and enables people to see how an AI is working and step in if need be.
  • Use it for narrowly defined applications (even with powerful generative AI like GPT).
  • Understand the concerns germane to your vertical (e.g., HR, Procurement, and Legal each have different things to worry about).
  • Make sure it’s purpose-built for what you need; that is, use a tool that’s empowered by GPT, like the AI Front Door, rather than just letting everyone use GPT with no parameters or clear goals.
  • Educate people in your organization on how to use your tools properly. Some of that will be specific to your organization, and some will be part of the larger culture—just as we all had to learn how to use an internet search engine, we need to learn how to give LLMs prompts that will deliver the resolution we need. (That also makes it easier for your employees to follow your procedures and policies, which reduces risk.)
  • If necessary, gate the tool so people in your organization can’t accidentally use it destructively, similar to the way we manage software licenses and permissions. (See the sketch after this list.)
  • Maintain data oversight, so you know where your data is going and how it will be used—which you can check in the Terms of Service of the tools you use, like OpenAI’s—and whether you can opt in or out of data collection.
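As referenced in the list above, gating can be as simple as a permissions check in front of the tool. Here’s a hypothetical sketch (the role names are invented for illustration):

```python
# Hypothetical gating sketch: only approved roles may invoke the AI tool.
ALLOWED_ROLES = {"legal_ops", "counsel"}

def can_use_ai_tool(user_role: str) -> bool:
    """Return True only for roles explicitly approved to use the tool."""
    return user_role in ALLOWED_ROLES

for role in ("counsel", "contractor"):
    status = "allowed" if can_use_ai_tool(role) else "blocked"
    print(f"{role}: {status}")
```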

AI is just a tool. It happens to be an extremely powerful tool. But just like any tool, it’s not something to be feared, nor is it something you can be careless with. It’s something to respect. All users have a responsibility to learn what it’s for, how it works, and how to use it safely and effectively. That takes some effort, but it’s more than worth it to bring AI and automation into your organization and usher in this next great wave of digital transformation.

Get started with Tonkean’s LegalWorks platform and test Tonkean’s AI Front Door now.
