AI (part one): how does AI interact with UK excluded subject matter provisions?
Recently the UK Intellectual Property Office ran a consultation on artificial intelligence and intellectual property, to which D Young & Co responded with a discussion of the issues raised. In this first article reporting on the consultation, we highlight our view of the definition of “AI” and how this interacts with excluded subject matter provisions in UK (and European) patent law.
UKIPO AI consultation outcome
The UKIPO published the outcome of its "artificial intelligence call for views: patents" on 23 March 2021.
The UKIPO’s stated motivation for the consultation was to look ahead to the challenges that new technologies bring, and to ensure that “the UK’s IP environment is adapted to accommodate them”. In this regard, AI is particularly significant because it is so broadly applicable to other areas of technology, yet also sits close to subject matter that has been problematic in the past, such as computer programs and mental acts.
Defining artificial intelligence
Because of this, an important preliminary step is to define what AI is (pending any paradigm shift in the capabilities of modern AI systems), as the definition adopted may significantly influence any response to the consultation.
Previously the UK Government has defined AI as:
technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation.
Whilst this might be a good general-purpose definition, it is ill-suited to patent law as it defines AI by its technical effect. This risks fragmenting the definition of AI according to the field of endeavour, or automatically excluding AIs in some fields irrespective of any other technical merits.
Furthermore by defining AI only by what it can do, it does not capture what AI is. We therefore propose that a better definition of AI for the purposes of patents is:
Any technology whose output or functionality is at least in part a consequence of training rather than programming.
This captures the unique aspect of (modern) AI: that it attempts to replicate the natural processes by which intelligence is achieved, in particular through the use of training or experience rather than programming or hard-coded rules. A notable consequence for such AIs is their ability to respond to novel inputs for which no rule has been coded, a property referred to as “generalisation”.
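The distinction between programmed rules and trained behaviour can be shown with a deliberately simple sketch (purely our own illustration, not drawn from the consultation; all names are hypothetical). A hand-coded classifier behaves only as its author dictated, whereas a trained classifier derives its decision boundary from labelled examples and can then respond to an input it has never seen:

```python
# Illustrative sketch only: contrasting a programmed rule with trained behaviour.

def rule_based_is_positive(x):
    # Behaviour fixed entirely by the programmer's explicit rule.
    return x > 0

class TrainedThresholdClassifier:
    """Learns a decision threshold from labelled examples (training),
    rather than embodying a rule written by a programmer."""

    def __init__(self):
        self.threshold = 0.0

    def train(self, examples):
        # examples: list of (value, label) pairs, label being True/False.
        positives = [v for v, lab in examples if lab]
        negatives = [v for v, lab in examples if not lab]
        # Place the threshold midway between the classes seen in training.
        self.threshold = (min(positives) + max(negatives)) / 2
        # The examples themselves are then discarded: only the learned
        # threshold (the model's "structure") is retained.

    def predict(self, x):
        return x > self.threshold

model = TrainedThresholdClassifier()
model.train([(5.0, True), (7.0, True), (-3.0, False), (-1.0, False)])
print(model.predict(3.0))  # → True: a novel input, never seen in training
```

The trained classifier "generalises" in the limited sense described above: its response to 3.0 follows from a structure shaped by the examples, not from any rule coded for that input.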
Our definition may also help to differentiate AIs from computer programs “as such”. The punch cards and listings of the 1970s computer era were considered by the legislators of the time to be literary works protectable by copyright, and thus excluded from patent protection – a decision that has made life interesting for those in the patent profession ever since.
By contrast, a machine-learning based AI system is qualitatively different. Such systems learn by training on examples. Different AI architectures use such examples in different ways, but share the property that the examples contribute to a modification of the AI’s functional structure, for example in the strength of connections within the system. Another shared property is that the training examples themselves are not preserved within the AI system in a manner that permits independent access; the trained AI system is also not simply a database.
Instead, an AI system is different from both, being neither an authored rule-based program that consumes data in operation, nor a database that stores data for independent access; an AI learns an output or function by internalising an abstraction of such data within its own structure. It then functions on the basis of this abstraction, either in response to new inputs or spontaneously (depending on the architecture).
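Both shared properties described above can be seen in a minimal sketch (our own hypothetical example, not any real AI architecture): a one-parameter “network” whose single connection strength is nudged by each training example, so that after training only the learned abstraction remains and none of the examples can be retrieved from the model.

```python
# Illustrative sketch only: training modifies a connection strength;
# the examples are internalised as an abstraction, not stored.

def train(examples, epochs=200, lr=0.05):
    w = 0.0  # the model's single "connection strength"
    for _ in range(epochs):
        for x, y in examples:
            prediction = w * x
            # Each example nudges the weight; no example is stored verbatim.
            w += lr * (y - prediction) * x
    return w

# Examples drawn from the underlying relationship y = 2x.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(w, 2))         # ~2.0: the abstraction learned from the data
print(round(w * 10.0, 1))  # responds to a novel input (x = 10)
```

After training, the model consists of the single learned weight, an abstraction of the relationship in the data; the training pairs themselves are gone, which is the sense in which a trained AI is neither a program encoding its author’s rules nor a database of its inputs.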
With our pragmatic definition of AI, which is likely to hold over the near to medium term envisaged by the consultation, we can now look at some of the questions the consultation poses.
AI patentable subject matter
Probably the most basic question relates to the patentability of AI inventions and their technical character.
We think that the application of AIs to solve technical problems (for example, machine vision), or the solving of technical problems to facilitate AIs (for example, parallelisation on graphics cards) can both provide patentable subject matter. However for this consultation the more fundamental question is whether and to what extent AIs naturally fall, as a matter of policy, under the existing exclusions of “a scheme, rule or method for performing a mental act, … or a program for a computer … as such”, as found in Section 1(2)(c) of the UK Patents Act (and corresponding Article 52(2)(c) EPC).
There is a clear temptation to suggest that the technical effect of an AI is the performance of a mental act, as is perhaps reflected in the UK Government’s own definition of AI. However, the purpose of the mental act exclusion is to ensure, as a matter of policy, that people are free to conduct their thoughts, in much the same way that the purpose of the treatment and diagnosis exclusions of section 4A UKPA is to ensure, as a matter of policy, that people are free to practise medicine.
Nevertheless, just as section 4A UKPA does not prevent the patenting of tools that may be essential to such medical practice, section 1(2)(c) UKPA should not prevent the patenting of tools that produce similar effects to mental acts, without being such acts themselves.
Hence we would assert, for example, that a claim to an AI that performs text recognition does not have as its sole effect a mental act of text recognition as such, precisely because it is first and foremost a claim to an AI. Pragmatically, it is only by using an AI that performs text recognition that a third party could infringe the claim, and hence the claim scope does not encompass a mental act conducted independently of an AI. More philosophically, the operation of the AI should not be construed as having the technical effect of a mental act, not just because it typically operates in a different way to a true mental act, but fundamentally because its broader technical effect is to imbue a non-human system with this capability. No mental act does this, by definition.
This is important because one area where AIs may have their greatest use is in replicating human activities that are seen as prima facie non-technical mental acts. Whether this is reading, speaking, driving, walking, sorting, or any other field, the presence of an AI in the claim should mean that the technical effect encompasses producing a non-human system that performs this activity, and not merely the activity itself.
Meanwhile, for computer programs, our definition specifically distinguishes AI from programming as such. In particular, a trained AI embodies an abstraction of external data within its structure to become a bespoke computing system with a beneficial ability to generalise its behaviour in response to new inputs, unlike a standard programmed computer. Support for patentability on this basis can be drawn from the decision in Macrossan, in which the court rejected the application on the grounds that its system had contributed no new form of hardware, unlike in the parallel Aerotel case (see paragraphs 53 & 63 of Aerotel Ltd v Telco Holdings Ltd & Ors Rev 1 [2006] EWCA Civ 1371).
In hardware terms, an AI that defines itself through its internal connectivity is, as a computing system, similar to a field programmable gate array, where functionality is encoded physically in the architecture of the system. Indeed, it will be appreciated that there are already dedicated neural chips with similar properties for a similar purpose. In this sense, a software implementation of an AI is an emulation of a computer, not a program for a computer.
Hence there are plausible grounds for arguing that a trained AI is not a computer program as such. The current UK case law does not yet reflect this, with the current five Symbian / AT&T signposts for software patentability begging the question by assuming that the subject matter is a computer program in the first place. Yet even if forced into this constraint there is scope to overcome the exclusion (see paragraph 40 of AT&T Knowledge Ventures LP, Re [2009] EWHC 343 (Pat), and section 1.37 of the UK Manual of Patent Practice).
In many cases, the application of the AI will result in a technical effect external to a computer (the first signpost). Even where the effect may be superficially non-technical (such as a mental act like driving or walking), there is the argument that the appropriate effect is imbuing the system itself with this capability. The other signposts relate to improving the computer itself in the absence of external technical effects, and generally assume that the benefits are independent of any one program. Whilst AIs tend to be task-specific, if this task can in principle be accessed by any program that needs it then the AI could in these cases be treated as a beneficial emulated co-processor.
In conclusion, it is clear that an appropriate definition of AI may be critical to the patentability of some cases, and it is also clear that there is scope for patent law to evolve so as to distinguish AIs from computer programs and mental acts as such.
The UK Government’s recent response to the consultation is to propose to “publish enhanced IPO guidelines on patent exclusion practice for AI inventions and engage AI interested sectors, including SMEs, and the patent attorney profession to enhance understanding of UK patent exclusion practice and AI inventions”. We intend to fully engage with this process and any discussions that follow.
In our next (June) patent newsletter we will look at inventorship and ownership for inventions arising from AI, in particular whether patent law should allow an AI to be identified as a sole or joint inventor.
If you would like to receive your newsletter by email as soon as it is published, please send your contact details to us at subscriptions@dyoung.com.