How Law Firms Can Adopt AI Safely and Strategically

The only constant in AI is change. New models and tools are released continually, promising game-changing efficiency or existential risk, depending on whom you choose to listen to. In regulated professions, many are taking a wait-and-see approach as debate continues over whether AI is just hype or a fundamental shift.

With so much uncertainty, how can law firm leaders make an informed, strategic choice that is both pragmatic and future-proof?

AI introduces distinctive challenges. Although current IT policies address many of the risks through data security measures, a dedicated AI Statement establishes clear direction and parameters for adopting AI best practices.

An often-shared example: when asked how many ‘R’s are in the word ‘strawberry’, many AI models answer two, despite the correct answer being three. This kind of mistake might seem surprising, but it reveals something important about how these systems work.

This happens because AI breaks words into chunks (tokens), not individual letters. It doesn’t "see" the word as we do. Sceptics cited this as proof that AI was useless, failing at a seemingly simple task. In reality, AI models are, at their core, very clever autocomplete systems, and they simply weren’t designed to work at the level of individual letters.
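The letter-counting point is easy to see in plain Python, which operates on individual characters rather than tokens. (The token split shown in the comment is purely illustrative, not the output of any specific model's tokeniser.)

```python
# Ordinary code sees every character, so the "hard" question is trivial.
word = "strawberry"
print(word.count("r"))  # prints 3

# A language model, by contrast, sees something closer to chunks such as
# ["straw", "berry"] -- an illustrative split, not a real tokeniser's output --
# so individual letters are largely invisible to it.
```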

Failing to understand how AI works, and the guardrails you need to put in place to use it effectively, can lead to situations like the £89m damages case against the Qatar National Bank, in which the claimants made 45 case-law citations, 18 of which turned out to be fictitious*.

The AI was working correctly. It predicted the most likely next combination of words, like an autocomplete, for the task it was given. When it had no real citations to draw on from its training data, it produced plausible-looking ones anyway.

Using specialist legal tools that cite only real case law, or putting basic accuracy checks in place, could easily have prevented this.
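As a sketch of what one such "basic check" might look like, the snippet below flags any cited case that does not appear in a trusted source. The case names and the `verified_cases` set are hypothetical placeholders; a real workflow would query an authoritative legal database rather than a hard-coded list.

```python
# Hypothetical sketch: flag citations that can't be verified against a
# trusted source before anything is filed. All data here is made up.
verified_cases = {
    "Smith v Jones [2019]",   # placeholder entry
    "R v Brown [2021]",       # placeholder entry
}

draft_citations = [
    "Smith v Jones [2019]",
    "Doe v Acme Ltd [2023]",  # placeholder: not in the trusted set
]

unverified = [c for c in draft_citations if c not in verified_cases]
for citation in unverified:
    print(f"Check before filing: {citation}")
```

Even a check this simple would have surfaced the 18 fictitious citations long before they reached a courtroom.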

That’s one reason why we’re firm believers that the starting point for adopting AI in an organisation is an agreed AI Statement: a guiding document that sets out how you will, and will not, use AI. It acts as a foundation to ensure your business uses AI effectively, legally and ethically.

As models continue to improve, an organisation’s use of AI is as much a brand decision as a technology one. Transparency in how we use AI is essential for building client trust, and we’re starting to see questions about AI tool usage appear in procurement processes across a range of industries.

A first step for AI adoption in an organisation must be setting clear guidance for your team, helping them use these new tools safely, legally and effectively while clearly communicating your values to suppliers and partners.

How your firm uses AI sends a message. What do you want it to say?

Join me to explore this topic further at the upcoming FREE webinar (non-members £15):

AI in Business – Risk, Opportunities and Impact
Thursday 25 September 2025, 12:00–13:00
Book your place here: https://www.bournemouthlaw.com/bournemouth-district-law-society-lectures

Luke Williams

Group Head of AI, Intergage Group

*source: https://www.theguardian.com/technology/2025/jun/06/high-court-tells-uk-lawyers-to-urgently-stop-misuse-of-ai-in-legal-work
