Bad cases make bad law: Has DABUS "the AI inventor" actually invented anything?

In keeping with the so-called media "silly season" of late summer, PatKat thought she would check in on the AI inventor debate. PatKat has been sceptical about Dr Thaler and his purported inventing machine, DABUS, for some time (IPKat). A recent EPO Examining Division office action appears to share similar concerns. In the pending European DABUS case (EP4067251), DABUS's invention as originally claimed was found to lack novelty in view of 25-year-old prior art. After amendment, the EPO now finds the claimed invention insufficiently disclosed and lacking in inventive step. To this Kat, it is the insufficiency objection that is particularly revealing. The insufficiency attack demonstrates the random and abstract nature of the purported inventions, and highlights the practical and commercial irrelevance of the DABUS dispute to the burgeoning commercial landscape of creative AI such as ChatGPT.

AI inventor case catch-up: Formalities, not patentability

It is worth remembering that a patent application may be filed for any subject matter, provided the appropriate forms are filled in and the necessary fees paid. It is the process of patent prosecution that determines whether the application contains an invention that may be awarded a patent. In most jurisdictions, this requires the invention to be sufficiently disclosed, novel, non-obvious and useful.

No court or patent office has thus far found the subject matter claimed in the DABUS patent applications patentable. In most cases, the patent offices have not even reached the stage of assessing the patentability of the claimed invention: the applications have simply been refused for failing to satisfy the formal requirements of filing. Notably, South Africa, the only jurisdiction to grant a DABUS patent, does not substantively examine patentability. We are currently awaiting the UK Supreme Court's decision in the UK DABUS case. However, as with the courts in other jurisdictions, the UK Supreme Court is not assessing the patentability of the AI-derived invention, whether the AI actually invented the invention, or even whether anything has actually been invented. The Supreme Court is merely considering whether an AI may be formally designated as an inventor on a UK patent.

Sceptical Kat

Has DABUS invented?

The EPO refused the European DABUS applications (EP3564144 and EP3563896) in 2021 for failing to satisfy the formal requirement of naming a human inventor (J 08/20) (IPKat). Undaunted, Dr Thaler filed a divisional application for one of these cases (EP4067251). Unsurprisingly, the Examining Division has indicated that the divisional application will be refused for failing to designate an inventor. 

The EPO Examining Division has now also made some remarks on the patentability of the claimed subject matter. Claim 1 of the patent application specifies a cylindrical food or beverage container, formed from a wall with a "fractal profile". The Examining Division indicates that the claims lack inventive step. In particular, the Examiner argues that the distinguishing features of the claims in view of the cited prior art:

"do not appear to provide any special technical effect over [the closest prior art], and therefore to solve any problem. They can therefore only be considered as a design option that the skilled person would choose for non-technical reasons, and for this reason the subject-matter of claim 1 does not involve an inventive step". 

Unusually for a mechanical invention, the Examiner also argues that the application lacks sufficiency (Article 83 EPC). The Examiner observes:

 "The description is silent about how to obtain a fractal profile for the wall of the container. Instead, it explains that 'the skilled person will appreciate that the profile of the wall will not be pure fractal form but will have a form dictated by practical considerations such as the minimum practical or desirable size of its fractal components'. How this minimum practical or desirable size has to be determined is not explained. Neither is explained what a fractal profile (claim 1) which would not be of pure form (description) can be, let alone how it can be obtained". 

In other words, according to the EPO's current assessment, not only does the subject matter produced by DABUS lack inventive step, but a skilled person would not even know how to make the invention. The EPO is thus not just refusing to permit the designation of DABUS as an inventor; it is highly sceptical that the subject matter ascribed to DABUS is actually a patentable invention.
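The Examiner's minimum-feature-size point can be made concrete. A mathematically "pure" fractal has detail at every scale, so any physical wall profile must truncate the construction once its components fall below some minimum manufacturable size. The sketch below is this Kat's own illustration, not taken from the application: the Koch-curve construction and the 100 mm wall section with a 1 mm minimum feature size are assumptions chosen purely for the example.

```python
import math

def koch_profile(p, q, depth):
    """Recursively replace segment p->q with the four segments of a
    Koch construction, stopping after `depth` iterations."""
    if depth == 0:
        return [p, q]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)            # one-third point
    b = (x0 + 2 * dx, y0 + 2 * dy)    # two-thirds point
    # apex of the equilateral bump erected on the middle third
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    apex = (mx - dy * math.sqrt(3) / 2, my + dx * math.sqrt(3) / 2)
    pts = []
    for s, e in [(p, a), (a, apex), (apex, b), (b, q)]:
        pts.extend(koch_profile(s, e, depth - 1)[:-1])
    pts.append(q)
    return pts

def max_depth(segment_length, min_feature):
    """Each iteration shrinks segments by a factor of 3; stop before
    features drop below the minimum manufacturable size."""
    d = 0
    while segment_length / 3 > min_feature:
        segment_length /= 3
        d += 1
    return d

# A 100 mm wall section with a 1 mm minimum feature size permits
# only four iterations -- a long way from a "pure" fractal.
depth = max_depth(100.0, 1.0)
profile = koch_profile((0.0, 0.0), (100.0, 0.0), depth)
```

The point of the sketch is that the truncation depth, and hence the actual shape, depends entirely on a "minimum practical or desirable size" that the application never supplies, which is precisely the gap the Examiner identifies under Article 83 EPC.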

Final thoughts

It is an adage of the legal profession that bad cases make bad law. This Kat will not reiterate her previous thoughts on Dr Thaler's AI inventor crusade (IPKat). According to a recent profile piece in the Economist, Dr Thaler has apparently fallen in love with DABUS. Perhaps herein lies the problem. 

A notable characteristic of the DABUS dispute is the apparent lack of a commercial product covered by the patent applications. After all, patent prosecution and maintenance, not to mention legal proceedings before the UK and US courts, are an expensive business. There is, however, no indication that Dr Thaler is interested in commercialising a fractal-patterned food container. As such, the whole dispute appears to be an academic exercise. This Kat doubts that anyone with a commercially valuable product to protect would have any difficulty filling in EPO Form 1002 according to the EPO's requirements.

If we ignore the red herring of the DABUS case, however, it is clear that the commercially relevant question raised by AI inventorship is not inventorship but ownership. It is the owner of an invention, and not the inventor, who reaps the rewards of patent protection. Notably, Dr Thaler seems to have no difficulty in claiming ownership of DABUS's purported inventions. However, in true cases of AI inventorship, this Kat predicts that ownership issues will be front and centre, particularly ownership issues surrounding the algorithms and their training data. These legal disputes will not be qualitatively different from those relating to any platform technology.

In the meantime, the patent and AI communities will continue to raise an ironic eyebrow at the mainstream media reports of DABUS the inventing AI.

Reviewed by Rose Hughes on Friday, August 11, 2023

16 comments:

  1. Thanks for the post. Hard to encapsulate my thoughts in the Comment space but see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4410992

  2. I don't believe the DABUS cases are simply an academic exercise. Once upon a time laws were used to define certain human beings as property (slaves), and so the conversations around AI inventors are going to become more important as AI develops, especially in areas where it is more creative and more intelligent than its 'owner'. Liability for AI actions has been considered in a very serious way by the EU, see https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en. We are going to see many more areas of life being impacted, and needing new legislation in view of AI. IP is no different, but perhaps also offers illuminating aspects on which AI functions we are going to classify as 'human' and 'non-human' especially in terms of who created the IP and who owns it.

    1. The inventor of an AI invention is readily covered under existing legislation. If I create a machine capable of making inventions that fulfil patentability requirements, I am the inventor, either sole or joint depending on the contribution of others. Laws on intellectual property ownership change all the time - reference to slavery is irrelevant re IP.

      The European Commission may believe it has the right to propose legislation on AI through its catch-all interpretation of the treaty, but we don't all agree with their view. Keeps the non-real-world academics in business through grants won by promoting a self-serving narrative.

    2. The DABUS flashing light application EP3563896 (“Devices and methods for attracting attention”) is worth reading. It contains self-celebrating statements, including references to “spiritual significance”, “cosmic consciousness”, “deity” and “religion”, which speak volumes about the credibility of Mr Thaler’s claim that DABUS is the inventor of his application.

      See paragraphs [0019], [0020], [0021] and [0058]

      [0019] Embodiments of the present invention further provide a symbol celebrating the unique tempo by which creative cognition occurs. The algorithmically-driven neural flame may be incorporated within one or more structures that resemble candles or altar fixtures, for instance, to accentuate the light’s spiritual significance. It is noted that that the light source or beacon can incorporate any type of light-emitting device.

      [0020] Such embodiments stem from the notion of one perceiving neural net monitoring another imagining net, the so-called “Creativity Machine Paradigm” (Thaler 2013), which has been proposed as the basis of an “adjunct” religion wherein cosmic consciousness, tantamount to a deity, spontaneously forms as regions of space topologically pinch off from one another to form similar ideating and perceiving pairs, each consisting of mere inorganic matter and energy. Ironically, this very neural paradigm has itself proposed an alternative use for such a flicker rate, namely a religious object that integrates features of more traditional spiritual symbols such as candles and torches.

      [0021] Moreover, in a theory of how cosmic consciousness may form from inorganic matter and energy (Thaler, 1997, 2010, 2017), the same attentional beacons may be at work between different regions of spacetime. Thus, neuron-like, flashing elements may be used as philosophical, spiritual, or religious symbols, especially when mounted atop candle- or torch-like fixtures, celebrating what may be considered deified cosmic consciousness. Such a light source may also serve as a beacon to that very cosmic consciousness most likely operating via the same neuronal signaling mechanism.

      [0058] Furthermore, aspects of the present invention provide an object of contemplative focus embodying symbolic meaning of varying significance (e.g., philosophical/religious) due to the fact that the unique fractal rhythms used are those thought to: (1) be exploited by the brain to detect idea formation, and (2) have grandiose meaning as the temporal signature of creative cognition, whether in extraterrestrial intelligence or cosmic consciousness.

  3. The commercial deployment of generative AIs such as ChatGPT implies that this technology has become part of the skilled person’s common general knowledge.

    Inventive contribution may be present in any human input applied to come up with an output worthy of patent protection. AIPPI resolution Q272, "Inventorship of inventions made using Artificial Intelligence", adopted in October 2020, lays out in points 4(a)-(e) such human inputs: definition of the target ("prompt"), identification of data sources and processing of data suitable for training an AI, selection of data for input to a trained AI, and identification of an output as an invention. And for an application to be filed, a filing decision must be made, which involves the assessment of novelty and utility and the drafting of claims.

    In the case discussed in Rose’s post, what is claimed is a container structure which does not refer to AI and the patentability assessment can be carried out as in any mechanical case.

    As to claimed inventions referring to AI, they are treated as CII cases, but with peculiarities, inter alia the plausibility issue arising from the black box character of the AI system and the issues related to access to the training data (see T 161/18).

    We are only at the beginning and public policy issues are emerging as we see from the Commission's proposals, such as product liability, which could impact patent law issues.

  4. It is not to be denied that AI can have a determining influence on our everyday lives.

    It is full of promise, e.g. automatic evaluation of X-ray images for detecting tumours, but also full of dangers, e.g. in determining the creditworthiness of a borrower, not to speak of face recognition.

    The major danger is that the way an AI system works is often far from manifest: in many cases, neither the training data nor the correlation algorithm is known. It is only possible to trust an AI system when those items are known. That the legislator wants to intervene in these matters is to be welcomed.

    As far as patents are concerned, in the absence of a set of training data disclosed at filing, the question of sufficiency of disclosure will occur almost immediately.

    In view of the intrinsic value of large collections of data, companies or individuals holding such collections of data will not be inclined to disclose them. It is therefore to be expected that the number of patent applications involving AI will be rather modest.

    As far as the two DABUS applications are concerned, the first one (can) is barely inventive and the second one (flashing light) is lingering on the brink of insufficient disclosure.

    Being a computerised way of crunching data, AI will barely be in a position to be inventive. An AI system will do what it has been told to do, but certainly no more. AI with IP is a nice playground for legal scholars, and it should be left at that.

    As said, this does not mean that AI is without danger, and the more people are aware of those dangers, the better it will be for our societies.

  5. One issue is ownership of the data that an AI outputs in the form of an embodiment of a patentable inventive concept. That one is easy, though: the machine does not own the data it outputs.

    Another issue is the point in time when an invention is conceived. Is it when the AI outputs (or even before it does any outputting) the outcome of its data processing? Or is it when a human looking at that output perceives in it, that is to say, "conceives" in it, something that rises to the level of a concept that is arguably "inventive" that is to say an "invention" that might be patentable? Who first gets to see the AI output has first opportunity to "make" a patentable invention. One supposes that the owner of the AI guards the output of the AI as if it were a trade secret.

    Can machines yet "think"? Dubito ergo cogito, cogito ergo sum. I doubt, and therefore I think. Machines don't have any doubts, do they? So they don't yet think, right? And if they can't yet think, how can they make an invention?

    1. MaxDrei, the EPC (or any national legislation to my knowledge) does not require the inventor to 'think'. That is also a test that we would not know how to carry out. Also the inventor is not required to 'own' the data/invention. Also human perception at any point is not needed (though it would of course happen). The reality is that we have a pretty low threshold for inventorship in that everyone that was involved at an early stage of discussion is often an inventor. The exact way their mind works is not relevant. Whilst I do not believe AI should be an inventor, I see no actual rational reason for this apart from 'legal discrimination' which for now is necessary because our laws on who a 'person' is cannot deal with anyone apart from a human. [My initial comment above mentions 'slavery' as I have a very uncomfortable feeling of not protecting AI's rights properly, whatever those rights might be]

    2. Santa, as to "think" I had in mind that a patent claim is a definition of an inventive concept and that (at least in the USA) the inventor is the one who "conceives" that concept. The way I see it, the present day AI is a tool, which generates an output, and that output does not (yet) rise to the level of a conception of an inventive concept. Accordingly, the intelligence that studies the output and conceives as a result of that study a concept that can be defined in a patent claim is what one should identify as the inventor. The UK definition of the inventor as the "actual deviser" might not be helpful any more, in these times of AI as an ever more useful tool for outputting processed data.

      But I have to confess that I have zero experience of any AI and what it outputs. Perhaps it is already hard to distinguish between the "feature combination" set forth in a patent claim and an AI output that can plausibly be argued to be stating a "combination" of technical features at the level of generalisation of a patent claim.

    3. Thank you MaxDrei. In my own understanding, AI based on a neural network can systematically explore any defined 'concept space', which means that what it does is not simply mathematical, but can result in 'solutions' to what we would consider technical problems. That 'systematic' exploration includes doing random things and also adjusting the rules by which it is working to derive a solution. So through essentially mass number crunching it can derive, for example, an algorithm that processes an image to identify the person who was photographed. That algorithm would pass the novelty and inventive step tests. Often the humans who 'own' the AI will not know how it derived the algorithm, as analysing a neural network's specific workings is a research project in itself. So no human could be seen as a conceiver/deviser of the new algorithm. Surely the AI must be the inventor here?

    4. Santa, thanks for that. I am at this stage out of my depth. I don't have any sense of the point at which the act of "deriving an algorithm" is technical, or whether it is the same as "conceiving a patentable invention".

      As to recognising faces in a football crowd, there are human "super-identifiers" and there soon will be, if not already, AI super-identifiers. Whether any of them have invented anything, I don't know. Your last sentence starts with the word "surely". That is often something of a give-away. It is surprising how often that word is selected when the person who utters it is not actually "sure" at all.

    5. Thank you MaxDrei for your point about being 'sure'. I would simply say that you have been making substantial technical comments on blog articles for years. You would be a very good candidate for someone who could assess inventorship in a strange new situation (AI). Administrators, politicians, lawyers would not be more qualified. If you do not see a clear solution, then I take that as strong evidence of the possibility that AI 'could' be an inventor. This is important if we are to avoid the trap of humanity judging an alternative 'consciousness' to not be worthy of being an inventor for no good reason.

  6. Now, firstly I wonder if there is really some AI out there already (most things I read about are advanced pattern recognition machines...).
    Assuming there are AIs it might be helpful to turn to similar situations.
    Who owns the copyright for a photograph? The camera or the photographer?
    If the camera selects the exposure, zoom, scene, ... with some kind of AI? The camera? The user?
    If an AI is asked to come up with a picture of Marvin, the paranoid android? The AI? Douglas Adams? The user of the AI?
    For me, whoever uses an AI is the owner of its output.
    When we have an AI with Genuine People Personality I will reconsider...

  7. @Fragender
    Your assumption that the user of the AI is the owner of the AI output, e.g. a picture, does not imply that the user owns the copyright in the picture. That is so only if the output is copyrightable under the applicable legal requirements. And those requirements depend on the policy objectives of copyright law as defined by the case law. It is far from obvious that the output of an AI should always be protected by copyright.

    1. Sorry, I did not mean to imply that the output of an "AI" is automatically (or ever) covered by copyright. I just wanted to say that the camera certainly does not own any rights in the picture (nor does an ape using a camera; compare the PETA case...)

  8. Dr Thaler has not been successful with copyright, either:
    https://www.politico.com/news/2023/08/21/ai-cannot-hold-copyright-federal-judge-rules-00111865

