
Today, would-be investors are faced with a new and unfamiliar challenge – artificial intelligence (AI). How does an IP due diligence exercise differ where the target’s primary asset is an AI system or where the target is a company that uses third-party AI systems in its business?
Below we outline five key considerations for investors when conducting IP due diligence where AI systems are involved, whether the target company is using third-party AI systems or has developed its own. The crucial message is that a traditional IP/IT due diligence exercise might not identify all of the new risks raised by the use of AI and that a more tailored approach is recommended.
1. Regulatory compliance
The highly anticipated EU AI Act officially came into force on 1 August 2024. Whilst none of the Act’s requirements apply immediately, they will begin to apply over time throughout EU Member States. The EU AI Act is the first comprehensive regulation of AI and adopts a risk-based approach – i.e. the greater the perceived risk of the AI system, the more stringent the regulations that are imposed upon those using or providing those systems.
Many of the new obligations fall on providers of so-called “high-risk” AI systems, while a separate set of obligations applies to providers of general-purpose AI (GPAI) models, such as the GPT models that underlie OpenAI’s ChatGPT. Obligations for GPAI providers include supplying, to customers who intend to incorporate the GPAI model into their own AI systems, technical documentation and instructions for use, as well as putting in place a policy to comply with the Copyright Directive and publishing a summary of the training data used.
Meanwhile in the UK, the 2024 King’s Speech indicated that the new Labour Government has plans to “establish appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” The scope and content of the proposed legislation remain to be seen, but it is expected to target only a handful of companies developing the most powerful AI models.
In undertaking due diligence on a company that provides AI systems, key regulatory questions for investors include: (a) what regulatory compliance obligations currently apply or are likely to attach to the AI system over the next 18 months (e.g. under the EU AI Act), and (b) does the target company monitor its compliance obligations (i.e. is it aware of, and able to meet, those incoming compliance obligations)?
2. AI code generation and OSS
AI software code generators trained on, or that incorporate, open-source software (OSS) are now a key part of the software development process. However, as with all OSS, the use of such tools raises questions of IP ownership and liability which diligent investors should consider.
OSS is free to use, but it generally comes with licensing terms that can have significant knock-on implications for any software system that incorporates it. Because of the automated nature of AI-assisted software development, OSS licensing issues are easily overlooked: developers may not even be aware that the generated code reproduces OSS.
Key points for investors to consider in this area are: (a) whether AI software code generators trained on or incorporating OSS were used to create the target company’s AI system, (b) what OSS licence terms apply to the generated code (particular care should be taken if the licences include “copyleft” conditions restricting the use of the AI system or requiring that it be made free to use), and (c) whether AI-generated code could be substituted with alternative, non-AI-generated code.
3. Data protection and company-trained AI models
The target company may use third-party AI systems in its business and/or may have worked with a third-party developer to help create a bespoke AI system. In either case, the AI system in question may have been trained on the target company’s customer data. This exposes the target company to data protection and privacy risks.
Key due diligence questions for investors to ask include:
- What training data was used and did it include any personal data?
- How did the company develop/obtain the training data?
- What rights does the company own or control in respect of the training data?
- What rights were granted to the AI developer to use that training data?
- Can the AI developer use the target company’s training data with other clients? If so, are they under any obligation to keep that training data confidential?
We explore how to protect training data in further detail in our article here.
4. IP ownership and use of AI-generated outputs
AI-generated outputs (for example, software code, imagery, data analysis) may constitute valuable IP for the target company. If the outputs have been created using a third party AI system, what rights do the target company own in respect of such outputs? Crucially, are the outputs protected by any IP rights? Investors should consider how these outputs can be protected and who owns the relevant IP rights.
The English courts have made it clear that an AI system cannot be an “inventor”, meaning that unless there is also a human inventor, an invention made by AI may not be patentable for the purposes of the UK Patents Act. Similarly, it is not straightforward for works created by an AI system to gain copyright protection. Under English law, for an artistic or literary work to be protected there must either be an identifiable human “author” of the work, or the work must fall within the scope of “computer-generated works”. In the latter case, the owner of the copyright will be “the person by whom the arrangements necessary for the creation of the work are undertaken”. Where patent or copyright protection is not available for an AI-generated output, the output might still be protected as a trade secret.
This is an evolving area of law and it is possible that the same AI-generated outputs will not receive the same IP protection in all jurisdictions. It will therefore be important for investors to understand: (a) what outputs have been generated using AI, (b) how important those outputs are to the target business, (c) where those outputs were created and how they were generated (i.e. to what extent a human was involved in creating the output), (d) if a human was involved, whether that human was an employee of the target company, and (e) what, if any, contractual provisions regulate the ownership of the AI-generated output.
5. AI and liability
In addition to the AI Act, the EU has proposed legislation to respond to other challenges arising from the use of AI. These include amendments to the Product Liability Directive that widen the definition of a “product” to include software and AI systems, meaning that manufacturers (e.g. suppliers of AI systems) may be liable for damage to a customer’s systems caused by a defect in their AI system, including a defect arising from a failure to supply software and/or security updates or upgrades.
Liabilities relating to AI systems can arise from multiple sources of law (in this case EU law). Given this layered and complex regulatory landscape, it is important to understand what steps a target company has taken to monitor AI regulation and its compliance with it. In particular, investors should check that the target’s monitoring has been wide-ranging enough to identify all relevant compliance requirements.
Key takeaways
While the use of AI offers significant opportunities to businesses, it also brings with it a new set of concerns and potential liabilities. In the short term, those looking to invest in or buy companies with significant AI assets would be wise not to assume that the traditional software or IT related due diligence questions will be sufficient to flush out all AI related issues. Instead, take specialist advice and consider:
(a) the risks relating to the data on which the relevant AI systems were trained,
(b) ownership issues relating to the outputs of AI systems used by the company, and
(c) infringement risk relating to the use of those outputs.
This is an evolving area of law and, while the EU’s AI Act may be the first comprehensive legislative regime to take effect, we expect further regional regulation which may add further complexity to any investments relating to AI systems. If you have questions on AI in transactional matters or IP due diligence more generally, please contact the authors or your usual Carpmaels’ contact.