Invention

Trends in Artificial Intelligence Invention Patent Protection

In recent years, AI patent activity has increased exponentially. The figure below shows the volume of public AI patent applications in the United States from 1990 to 2018, broken down by AI component. The eight AI components in Fig. 1 are defined in an article published by the USPTO in 2020. Most AI components have experienced explosive growth over the past decade, especially planning/control and knowledge processing (e.g., the use of big data in automated systems).

AI technology is complex and spans many components across different fields. Inventors and patent attorneys often face the challenge of effectively protecting new AI technologies. The rule of thumb is to focus patent protection on how the invention improves upon conventional technology. However, inventors often need to improve several aspects of an existing AI system to adapt it to their applications. The following sections discuss an illustrative list of areas that may offer patentable AI inventions.

Figure 1. AI patenting activities by year

(1) Training stage

The training phase of an AI system includes most of the exciting technical aspects of machine learning, in which algorithms explore the latent patterns embedded in the training data. A typical training process includes preparing the training data, transforming the training data to facilitate the training process, feeding the training data into a machine learning model, fitting (training) the model to the training data, and testing the trained model. Different machine learning models can require different training processes, such as supervised training based on labeled training data, unsupervised training that infers a function describing hidden structure from unlabeled training data, semi-supervised training based on partially labeled training data, and reinforcement learning (RL). Common areas of the training phase that may give rise to patentable ideas include:

  • Preparation of training data: collecting meaningful training data, balancing positive/negative samples in training data, labeling training data, normalizing training data, encoding or integrating training data, and generating synthetic training data.

  • New machine learning architectures: new neural network architectures, hybrid models (for example, a group of homogeneous neural networks working collectively, or a neural network pre-trained on general-domain training data and then refined by training on domain-specific training data), and hierarchical models (e.g., federated learning).

  • Loss function: A new loss function that improves training efficiency.

  • Sparsification/pruning of neural networks: reduction in the number of active neurons in neural networks, reduction in the number of channels/layers in neural networks.

  • Output post-processing: converting hard predictions into probabilities where a single definitive output would be detrimental.
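As a concrete illustration of the training pipeline described above, the sketch below walks through the prepare/normalize/fit/test steps in miniature. The synthetic two-class data and the logistic-regression model are illustrative assumptions, not drawn from any particular patent or system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prepare training data: two synthetic classes stand in for collected samples.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Transform/normalize: give each feature zero mean and unit variance.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Fit the model: gradient descent on the cross-entropy loss of a
# logistic-regression classifier (the simplest supervised "model").
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of the loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Test the trained model (in practice a held-out test set would be used).
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Note that the final step converts raw scores into probabilities before thresholding, mirroring the output post-processing bullet above.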
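The sparsification/pruning idea from the list above can likewise be sketched in a few lines. Magnitude-based pruning is just one common technique, chosen here for illustration; `prune_by_magnitude` and its parameters are hypothetical names, not an established API:

```python
import numpy as np

rng = np.random.default_rng(1)

# A dense weight matrix from a hypothetical trained layer.
W = rng.normal(size=(8, 8))

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights so that a `sparsity`
    fraction of them become inactive."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    mask = np.abs(weights) >= threshold   # keep only the largest weights
    return weights * mask, mask

W_pruned, mask = prune_by_magnitude(W, sparsity=0.75)
print("active weights:", int(mask.sum()), "of", W.size)
```

Reducing active neurons or channels in a full network follows the same principle, applied per-neuron or per-channel rather than per-weight.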

(2) Application (Inference) Phase

The application phase of an AI system includes applying the trained models to make predictions, inferences, classifications, etc. This phase typically covers the actual deployment of the AI system, which can make infringement easier to detect and therefore provide valuable patent protection. In this digital age, AI systems can be applied to almost every aspect of our lives. For example, an AI patent may claim or describe how the AI system helps the user make better decisions or perform previously impossible tasks. Such practical applications are powerful in overcoming potential "abstract idea" rejections during prosecution of AI patents.

On the other hand, simply claiming an AI system as a magic black box that generates accurate predictions based on input data will likely trigger rejections during prosecution, such as patent-eligible subject matter rejections (for example, a simple black-box application may be characterized as a method of organizing human activity). There are several ways to reduce the chances of receiving such rejections. For example, adding a brief description of the training process or the structure of the machine learning model can help overcome rejections under 35 U.S.C. §101.

(3) Between software and hardware

Another flavor of AI patents relates to accelerators: pieces of hardware with embedded software logic that speed up the training and/or inference process. These AI patents can be claimed from a software or a hardware perspective. Some examples include hardware specifically designed to improve training efficiency on GPUs/TPUs/NPUs/xPUs (e.g., reducing data migrations between different components/units), memory layout changes that improve the computational efficiency of compute-intensive steps, arrangements of processing units for easy data sharing, efficient parallel training (e.g., tensor partitioning to evenly distribute workloads across processors), and architectures that fully exploit the sparsity of tensors to improve computational efficiency.
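The tensor-partitioning idea mentioned above can be sketched as follows. The four-device setup is hypothetical and is simulated here with NumPy array shards rather than real accelerator hardware:

```python
import numpy as np

# Split a weight tensor into near-equal shards, one per processing unit,
# so that parallel workers receive balanced workloads (hypothetical 4-device setup).
num_devices = 4
W = np.arange(24.0).reshape(6, 4)

shards = np.array_split(W, num_devices, axis=0)  # row-wise partition

# Each "device" computes its partial matrix-vector product;
# concatenating the partial results recovers the full computation.
x = np.ones(4)
partial = [shard @ x for shard in shards]
full = np.concatenate(partial)

assert np.allclose(full, W @ x)
print([s.shape for s in shards])
```

Real accelerator designs add the hard parts this sketch omits: minimizing data movement between units and overlapping communication with computation.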

(4) AI Model Data Robustness, Security, Reliability and Privacy

Cutting-edge AI systems are far from perfect. Robustness, security, reliability, and data privacy are just some of the most notable weak points in training and deploying AI systems. For example, an AI model trained in a first domain may have near-perfect accuracy for inference in that domain but generate disastrous inferences when deployed in a second domain, even if the domains share some similarities. How to train an AI model efficiently and adaptively so that it is robust when deployed across all domains of interest is therefore both challenging and intriguing.

As another example, AI systems trained on a fixed training database can be easily fooled by adversarial attacks. For instance, a second deep neural network can be designed to compete with the first and identify its weaknesses. The safety and reliability of these AI systems will be essential in the years to come and could constitute important patentable subject matter.
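The fragility to adversarial inputs can be illustrated with a toy linear classifier: a small perturbation in the gradient-sign direction (the idea behind FGSM-style attacks) flips the model's decision. The weights, input, and step size below are purely illustrative:

```python
import numpy as np

# A toy linear classifier with illustrative weights and bias.
w, b = np.array([1.0, -2.0]), 0.1
x = np.array([0.5, -0.2])

# The clean input is classified positive: w @ x + b = 1.0.
assert w @ x + b > 0

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping against the sign of w pushes the score toward the other class.
eps = 0.6
x_adv = x - eps * np.sign(w)

print("clean score:", w @ x + b, "adversarial score:", w @ x_adv + b)
```

Deep networks are vulnerable to the same mechanism, with the gradient computed by backpropagation instead of read off directly.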

As another example, training data may in many cases include sensitive data (e.g., customer data), and direct use of such training data may result in serious data privacy breaches. This problem becomes more alarming when multiple entities collectively train a model using their own training data. As a result, researchers and engineers have explored differential privacy protection and federated learning to address these issues.
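One widely studied mitigation, the differentially private gradient update at the core of DP-SGD, can be sketched as follows. The function name and the `clip_norm`/`noise_multiplier` parameters are illustrative, not a real library API:

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_noisy_gradient(grad, clip_norm=1.0, noise_multiplier=1.0):
    """Clip a per-example gradient to bound its influence, then add
    Gaussian noise calibrated to that bound (DP-SGD-style sketch)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([3.0, 4.0])          # norm 5.0, so it gets scaled down to norm 1.0
noisy = dp_noisy_gradient(g)
print(noisy)
```

Clipping bounds any single record's contribution, and the added noise masks what remains, which is the combination that yields a formal privacy guarantee; federated learning addresses the complementary problem of training without pooling the raw data at all.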

Copyright © 2022, Sheppard Mullin Richter & Hampton LLP. National Law Review, Volume XI, Number 242