LPM: Why a ‘watch and wait’ approach to Generative AI may be judicious
Laying the groundwork for generative AI now, as the technology evolves, puts SME firms at an advantage to adopt it when it matures, says Andrew Lindsay, General Manager at LexisNexis Enterprise Solutions
It’s an understatement to say that generative AI (genAI) dominated 2023. While the trend will no doubt continue in 2024, genAI is still in its infancy, both as a technology and in the business potential it offers. To devise an AI strategy that holds up over the long term, it pays in these early stages to watch the technology’s evolution closely. Doing so gives firms a prime opportunity to lay the groundwork for future development, so that when the time is right they have a solid foundation for adoption.
Here are areas to think about:
Cloud computing
The cost of operating generative AI makes cloud computing a necessity. Running it requires specialised hardware, as large language models are extremely GPU intensive, and the cost of purchasing that hardware rules out on-premises deployment even for the largest organisations. Firms also need to consider the impact that the energy required to run and maintain such equipment would have on their ESG policies.
To make genAI affordable, sustainable and green, a cloud computing model focused on addressing these barriers to adoption is key. It reduces the need for upfront capital expenditure and helps manage the ongoing cost of usage, while limiting the carbon footprint. It also ensures consistent performance and availability, and provides flexible, scalable compute power.
Many technology providers are working fast and furiously to embed and integrate genAI into their business applications. Until those integrations arrive, though, firms should be cautious about using free, public genAI tools such as ChatGPT, to avoid potential breaches of the General Data Protection Regulation, the Solicitors Regulation Authority’s policies, agreed client data uses and service-level agreements.
Furthermore, there is a significant cost associated with the questions that lawyers ask of the data set. Queries are measured in tokens, with each question consuming tokens per query and per data source, and tokens cost money. Potentially this could dictate how many people can be granted access to the genAI tool and the budget allocated to individual users. The cost of searching also has a bearing on the type and length of questions individuals can ask of the technology. Such restrictions can limit the value of genAI and throttle potential productivity gains, so think about how you would manage and control user spend once the tools are in the hands of your team.
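To get a feel for how query habits translate into spend, a rough back-of-the-envelope model can help. The sketch below is illustrative only: the per-token prices, the four-characters-per-token heuristic and the usage figures are assumptions for the sake of the example, not any provider’s actual pricing.

```python
# A minimal cost sketch. The per-token prices and the four-characters-per-token
# heuristic are illustrative assumptions, not the pricing of any particular provider.

ILLUSTRATIVE_PRICE_PER_1K_INPUT_TOKENS = 0.01   # assumed, in GBP
ILLUSTRATIVE_PRICE_PER_1K_OUTPUT_TOKENS = 0.03  # assumed, in GBP


def estimate_query_cost(question: str, retrieved_context_chars: int,
                        expected_answer_tokens: int = 500) -> float:
    """Estimate the cost of one genAI query, including retrieved context."""
    # Rough heuristic: about four characters of English text per token.
    input_tokens = (len(question) + retrieved_context_chars) / 4
    cost = (input_tokens / 1000) * ILLUSTRATIVE_PRICE_PER_1K_INPUT_TOKENS
    cost += (expected_answer_tokens / 1000) * ILLUSTRATIVE_PRICE_PER_1K_OUTPUT_TOKENS
    return cost


if __name__ == "__main__":
    # A longer question over several data sources pulls in more context,
    # so the per-query cost (and the monthly per-user budget) climbs quickly.
    single_query = estimate_query_cost(
        "Summarise the limitation clauses across these three precedents.",
        retrieved_context_chars=60_000,
    )
    monthly_per_user = single_query * 40 * 22  # 40 queries a day, 22 working days
    print(f"Estimated cost per query: £{single_query:.3f}")
    print(f"Estimated monthly spend per user: £{monthly_per_user:.2f}")
```

Even with modest assumed prices, the per-user monthly figure adds up quickly once context from multiple data sources is included, which is why budgeting for user spend deserves attention before the tools are rolled out.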
Data quality
Regardless of the specific genAI tools firms ultimately invest in, the benefit the technology delivers will rest almost entirely on the quality and integrity of the data the large language models are trained on and given access to. To illustrate, for the firm’s AI tool to accurately respond to a lawyer’s request to create a new precedent based on previous precedents X, Y and Z, the chatbot will need access to the latest, authorised versions of those examples. This is a good time for firms to get their data house in order and create a single data set, or digital file, that comprehensively captures information on cases across matter files, documents and emails, as well as the knowledge of the firm’s lawyers.
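The ‘latest authorised version only’ principle is simple to express in code. The sketch below is a hypothetical illustration: the document fields and sample records are assumptions, and in practice this information would come from the firm’s document and matter management systems.

```python
# A minimal sketch of selecting only the latest authorised version of each
# precedent before it is exposed to a genAI tool. The schema and sample
# records are hypothetical.

from dataclasses import dataclass


@dataclass
class PrecedentVersion:
    precedent_id: str
    version: int
    authorised: bool   # signed off by knowledge management
    text: str


def latest_authorised(versions: list[PrecedentVersion]) -> dict[str, PrecedentVersion]:
    """Keep only the highest authorised version of each precedent."""
    selected: dict[str, PrecedentVersion] = {}
    for v in versions:
        if not v.authorised:
            continue
        current = selected.get(v.precedent_id)
        if current is None or v.version > current.version:
            selected[v.precedent_id] = v
    return selected


if __name__ == "__main__":
    corpus = [
        PrecedentVersion("X", 3, True, "..."),
        PrecedentVersion("X", 4, False, "..."),   # draft, not yet authorised
        PrecedentVersion("Y", 2, True, "..."),
    ]
    # Only X version 3 and Y version 2 would be made available to the tool.
    for doc in latest_authorised(corpus).values():
        print(doc.precedent_id, doc.version)
```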
In the interest of innovation, and within reason, it’s safe to say that large law firms can afford to place bets at the roulette table of change and see where their chips land as they experiment with the technology. In fact, the industry needs such vanguards to help develop solutions that exploit the value of AI and deliver a tangible return on investment, paving the way for wider adoption across the industry.
For growing and medium-sized firms, adopting a ‘watch and wait’ brief for genAI could be a judicious way forward. In the meantime, investing time in staying abreast of the technology and becoming AI-ready means that when a solution is truly deployable, the preliminary work is already in place to facilitate speedy adoption and a faster return on investment.