Patenting AI: Three reasons why patent applications for applied AI inventions do not succeed at the EPO
9 March 2022
Technicality, sufficiency and plausibility

AI is being adopted as a tool for solving the most challenging of problems in a wide range of technical fields, including healthcare and life sciences.

At the EPO, AI inventions applied to all fields of technology are considered to be computer-implemented inventions [1]. As is true for computer-implemented inventions generally, the way a patent application for an applied AI invention is drafted has a considerable impact on its success in examination before the EPO.

In our last article in this AI series, we considered inventorship and the mechanics of ownership. In this article we will focus on three pitfalls which can lead to patent applications for applied AI inventions not succeeding at the EPO, and how to avoid them. Specifically, reflecting on recent case law and changes to the EPO’s Guidelines for Examination, we will consider the EPO’s technicality and sufficiency requirements. Then, looking forward, we will consider whether the EPO’s plausibility requirement might start to play more of a role in examination of applied AI inventions.

Technicality requirement

The technicality requirement is perhaps the best-known (and most notorious) requirement for computer-implemented inventions at the EPO, and the way it is assessed for applied AI inventions is no different from the way it is assessed for other types of computer-implemented invention [2].

As summarised by the Enlarged Board of Appeal in G 1/19, the technicality requirement is made up of two distinct hurdles. The first hurdle asks whether the invention falls within the definition of ‘invention’ in Article 52 EPC; to clear it, the claim must include at least one feature which is technical when considered in isolation. Once the first hurdle is overcome, the second is encountered during the assessment of inventive step under Article 56 EPC, where the features supporting inventive step (i.e. those distinguishing the invention over the closest prior art) must contribute towards a technical effect serving a technical purpose.

The immediate question arising from the two hurdles is: what does ‘technical’ mean? Unfortunately, what the EPO considers to be ‘technical’ is not rigidly defined, not even in case law, which can present a challenge for certain (though far from all) applications of AI. That said, any subject-matter listed in Article 52(2) EPC, which includes programs for computers and mathematical methods, is not deemed to be technical.

Despite being a combination of a program for a computer and a mathematical method, applied AI inventions seldom fall at the first hurdle. Their implementation on a computer implies the use of a processor, which is sufficient to overcome the first hurdle.

The second hurdle is much less clear-cut for applied AI inventions, and this is where a well-drafted patent application can be a considerable advantage. Whilst the full assessment of inventive step is fact-dependent, in general the second hurdle may be overcome where the AI is used to solve a technical problem in a field of technology. As examples of technical applications of AI, the EPO gives the use of a neural network in a heart-monitoring apparatus for the purpose of identifying irregular heartbeats, and the classification of digital images, videos, audio or speech signals based on low-level features (e.g. edges or pixel attributes for images).
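To make the idea of a technical application concrete, here is a minimal sketch along the lines of the heart-monitoring example. Everything in it (the architecture, window length and class labels) is our own illustrative assumption, not something taken from the EPO's materials or any decided case:

```python
# Minimal illustrative sketch: a small neural network that classifies
# fixed-length ECG windows as regular or irregular heartbeats.
# Architecture, window length and labels are assumptions for illustration.
import torch
import torch.nn as nn

class HeartbeatClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3),  # low-level signal features
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(8, 2),  # logits for [regular, irregular]
        )

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        # ecg: (batch, 1, samples) - a measurement taken from physical reality
        return self.net(ecg)

model = HeartbeatClassifier()
window = torch.randn(1, 1, 256)      # stand-in for a real ECG window
label = model(window).argmax(dim=1)  # the output is put to a technical use
```

On the EPO's approach, the technical character here comes not from the network itself but from its application: the input is a measurement of a physical system, and the output serves a technical purpose (identifying irregular heartbeats in a monitoring apparatus).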

In view of the second hurdle, it is best practice when drafting to state prominently in the patent application the technical problem solved by the applied AI invention, and to avoid statements relating to problems which are not technical. Any considerations for the applied AI invention arising from the specific field of technology or science are also very useful to include.

The claims of applied AI inventions should be drafted in a way that either explicitly states or at least directly implies the particular technical application. Often this involves defining the technical use of the output of a trained AI model. Without this, the EPO may consider that a claim does not contribute towards a technical effect serving a technical purpose across its entire scope, which would cause the claim to fail the second hurdle. The March 2022 version of the EPO Guidelines for Examination does refer to a notable exception: where a measurement from external physical reality is used as the input and the application of AI results in an indirect measurement that calculates or predicts the physical state of an existing real object, a technical effect is considered to arise regardless of what use is made of the indirect measurement [3]. However, this scenario will not be applicable to all applied AI inventions.


Sufficiency requirement

The sufficiency requirement, which arises from Article 83 EPC, is where the considerations for applied AI inventions begin to deviate from those for computer-implemented inventions more generally.

Article 83 EPC stipulates that a patent application must disclose the invention in a manner sufficiently clear and complete for it to be carried out by a person skilled in the art. For computer-implemented inventions, this requirement is generally considered to be satisfied by a functional description of the invention. This means that, rather than defining how the steps of a computer-implemented invention are programmed in a computer, it is usually sufficient for the patent application to define each step in terms of its function. However, for applied AI inventions, particularly those that rely on a trained AI model, the Board of Appeal showed in T 161/18 that a functional definition is not necessarily enough to meet the sufficiency requirement.

To make sense of this distinction, it is useful to consider what the person skilled in the art is required to do in order to carry out the invention beyond that which is disclosed in the patent application.

For computer-implemented inventions more broadly, the person skilled in the art must program the defined functionality on a computer before the invention can be carried out. As the act of programming itself is not considered by the EPO to be a technical endeavour [4], it is perhaps no surprise that the EPO deems a computer-implemented invention to be sufficiently disclosed without mention in the patent application of how the functionality is programmed on a computer.

For applied AI inventions, whilst an element of programming is still required to turn defined functions into a practical reality, this alone is usually not enough. Although a trained AI model might be thought of as a ‘black box’ having certain functionality, typically more than an act of programming is needed to recreate the functionality achieved by the black box. The architecture of the model being trained and the algorithm used for training may play important roles in defining the functionality, but perhaps most significant are the trained parameters of the model or, more commonly, the training data used to arrive at these parameters. This is because the nature of the training data has a direct impact on the functionality learnt by the black box. Accordingly, for patent applications relating to applied AI inventions, it is usually necessary to disclose the training data (or trained parameters) in the patent application to meet the sufficiency requirement.
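The point can be seen in a very small example. In the following sketch (the toy datasets and the library choice are our own assumptions, purely for illustration), an identical architecture and training algorithm learn opposite behaviour depending on the data they are trained on:

```python
# Illustrative sketch: the same architecture and training algorithm yield
# different learnt functionality depending on the training data.
# The datasets below are toy assumptions, not from any real application.
from sklearn.neural_network import MLPClassifier

def train(inputs, labels):
    # Identical 'architecture' and 'training algorithm' in both cases
    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(inputs, labels)
    return model

# Two training sets of the same shape but with different content...
model_a = train([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
model_b = train([[0.0], [1.0], [2.0], [3.0]], [1, 1, 0, 0])

# ...produce black boxes with opposite behaviour on the same input.
print(model_a.predict([[0.5]]), model_b.predict([[0.5]]))  # expected: [0] [1]
```

A purely functional description ("a model that classifies the input") would not tell the skilled person which of these two black boxes to build; it is the training data, or the resulting trained parameters, that pins the functionality down.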

Of course, this need to disclose training data in the patent application immediately raises the question of how detailed the disclosure needs to be. The EPO’s view is that it depends on the nature of the invention [5]. Meeting the sufficiency requirement in some cases might require the disclosure of the specific training data, but in others describing the type of data used for training (e.g. “images of human faces”) might be sufficient. When drafting patent applications for applied AI inventions, then, the nature of the training data required to solve the technical problem to which AI is being applied is likely to be the deciding factor in how much detail about the training data the patent application must include.

Still on the subject of training data: where training data is difficult for the person skilled in the art to obtain, as was the case in T 161/18 of the Board of Appeal, the patent application should disclose how the training data can be obtained in a quantity adequate for training. Such details help to show that the patent application is not speculative and can be put into effect.

Plausibility requirement?

Plausibility relates to whether a technical effect of the invention is made credible (plausible) by the patent application as filed, and it often arises for chemical inventions. Unlike the technicality and sufficiency requirements, there is no statutory basis in the EPC requiring that an invention be plausible; nevertheless, plausibility has featured in numerous Board of Appeal decisions under either inventive step (Article 56 EPC) or sufficiency (Article 83 EPC). Although there are ongoing questions, in view of G 2/21 of the Enlarged Board of Appeal, about the extent to which a technical effect of the invention must be made plausible from the patent application as filed, plausibility remains an important consideration when drafting patent applications for chemical and biological inventions.

Whilst plausibility is not commonly raised as an issue outside of chemical and biological inventions, we have seen the rumblings of plausibility being applied to computer-implemented inventions. For example, in T 2147/16 the Board of Appeal took the view that the claims lacked inventive step because the alleged technical effect was not specifically and sufficiently documented in the patent application as filed (although in that case the emphasis was more on the technicality requirement). Moreover, G-II, 3.3.2 of the Guidelines for Examination, which has recently received an update, states that where there is an alleged improvement that is not achieved because a computer-implemented simulation is not accurate enough for its intended technical application, this may be taken into account in the assessment of inventive step or sufficiency.

Plausibility has not yet arisen for applied AI inventions specifically. However, this could be a logical extension given the nature of applied AI inventions, particularly those that use a trained AI model. As the trained AI model may be thought of as a ‘black box’ having certain functionality, it may be difficult to determine without further information in the patent application whether and how the black box solves the technical problem that it purports to solve. Therefore, to mitigate potential future plausibility issues, it may be helpful to include in the patent application data showing the applied AI invention achieving its intended technical effect, or at the very least an explanation as to why the technical effect is achieved by the invention.

You may also be interested in the other articles in our AI series.

References

[1] EPO, Artificial intelligence: https://www.epo.org/news-events/in-focus/ict/artificial-intelligence.html

[2] Guidelines for Examination in the EPO, G‑II, 3.3.1 Artificial intelligence and machine learning (epo.org)

[3] Guidelines for Examination in the EPO, G‑II, 3.3.2 Simulation, design or modelling (epo.org)

[4] Guidelines for Examination in the EPO, G‑II, 3.6.2 Information modelling, activity of programming and programming languages (epo.org)

[5] EPO submission to the WIPO Conversation on IP and AI: https://www.wipo.int/export/sites/www/about-ip/en/artificial_intelligence/conversation_ip_ai/pdf/igo_epo.pdf