This article is the second in a five-part series on the state of machine-learning patentability in the United States during 2019. Each article in the series describes one case in which the PTAB reversed an Examiner’s Section 101 rejection of the claims of a machine-learning-based patent application. The first article of this series described the USPTO’s 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), which was issued on January 7, 2019. The 2019 PEG changed the analysis applied by Examiners in rejecting claims under Section 101[1] of the patent laws, and by the PTAB in reviewing appeals from those rejections. The first article also discussed a case that illustrates the effect of reciting AI components in the claims of a patent application. The following section describes another case in which the PTAB applied the 2019 PEG to a machine-learning-based patent application and concluded that the Examiner was wrong.

Case 2: Appeal 2018-004459[2] (Decided June 21, 2019)

In this case, the PTAB reversed the Examiner’s Section 101 rejections of claims of U.S. Patent Application No. 14/316,186. The application relates to “a probabilistic programming compiler that generates data-parallel inference code.” The Examiner contended that “the claims are directed to the abstract idea of ‘mathematical relationships,’ which the Examiner appears to conclude are [also] mental processes i.e., identifying a particular inference algorithm and producing inference code.”

The PTAB quickly dismissed the “mathematical concept” category of abstract ideas. The PTAB stated: “the specific mathematical algorithm or formula is not explicitly recited in the claims. As such, under the recent [2019 PEG], the claims do not recite a mathematical concept.” This is the same reasoning the PTAB provided in the decision discussed in the previous article, once again requiring that a mathematical algorithm be “explicitly recited.” As explained before, the 2019 PEG does not use the language “explicitly recited,” so the PTAB’s reasoning is not perfectly aligned with the language of the 2019 PEG; however, the PTAB’s ultimate conclusion is consistent with it.

Next, the PTAB addressed and dismissed the “organizing human activity” category of abstract ideas just as quickly. Then, the PTAB moved on to the third category of abstract ideas: “mental processes.” The PTAB noted the following relevant language from the specification of the patent application:

There are many different inference algorithms, most of which are conceptually complicated and difficult to implement at scale.
. . .
Probabilistic programming is a way to simplify the application of machine learning based on Bayesian inference.
. . .
Doing inference on probabilistic programs is computationally intensive and challenging. Most of the algorithms developed to perform inference are conceptually complicated.

The PTAB opined that the method is complicated, based at least partially on the specification explicitly stating that the method is complicated. Then, in determining whether the method of the claims is able to be performed in the human mind, the PTAB found that this language from the specification was sufficient evidence to prove the truth of the matter it asserted (i.e., that the method is complicated). The PTAB did not seem to find the self-serving nature of the statements in the specification to be an issue.

The PTAB then stated:

In other words, when read in light of the Specification, the claimed ‘identifying a particular inference algorithm’ is difficult and challenging for non-experts due to their computational complexity. . . .  Additionally, Appellant’s Specification explicitly states that ‘the compiler then generates inference code’ not an individual using his/her mind or pen and paper.

First, as explained above, it seems that the PTAB used the assertions of “complexity” made in the specification to conclude that the method is complex and therefore cannot be a mental process. Second, the PTAB seems to have used the fact that the algorithm is not actually performed in the human mind as evidence that it cannot practically be performed in the human mind. Footnote 14 of the 2019 PEG states:

If a claim, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components, then it is still in the mental processes category unless the claim cannot practically be performed in the mind.

Accordingly, the fact that the patent application provides that the method is performed on a computer, and not performed in a human mind, should not be the sole reason for determining that it is not a mental process. However, as the PTAB demonstrated in this opinion, the fact that a method is performed on a computer may be used as corroborative evidence for the argument that the method is not a mental process.


This case illustrates:

(1) a claimed probabilistic programming compiler that generates data-parallel inference code was held, in this context, not to be an abstract idea;
(2) reciting in the specification that the method is “complicated” did not seem to hurt the argument that the method is in fact complicated, and is therefore not an abstract idea;
(3) reciting that a method is performed on a computer, though not alone sufficient to overcome the “mental processes” category of abstract ideas, may be useful for corroborating other evidence; and
(4) the PTAB might not always use the exact language of the 2019 PEG in its reasoning (e.g., the “explicitly recited” requirement), but seems to come to the same overall conclusion as the 2019 PEG.

The next three articles will build on this background and provide further examples of how the PTAB approaches reversing Examiners’ Section 101 rejections of machine-learning patent applications under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming Section 101 rejections where the PTAB has found that an abstract idea is “recited,” and which focuses on Step 2A, Prong Two.


[1] 35 U.S.C. § 101.