George Mason University Assistant Professor of Systems Engineering & Operations Yifan Liu and J.D. Candidate Ayal Sharon reviewed the backlog problem at the PTO and applied a "queueing theory" analysis to filings to determine where the problems may lie.
Queueing theory enables mathematical analysis of several related processes: arriving at the back of a queue, waiting in the queue (essentially a storage process), and being served by the server(s) at the front of the queue. Under a "priority queue" discipline, the queue supports three operations:
- add an element to the queue with an associated priority;
- remove the element from the queue that has the highest priority, and return it; and
- (optionally) peek at the element with highest priority without removing it.
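The three operations above map directly onto a binary heap. A minimal sketch (my own illustration, not code from the study) using Python's standard-library `heapq`, with the convention that a lower number means higher priority:

```python
import heapq

class PriorityQueue:
    """Minimal priority queue: lower number = higher priority."""

    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker preserves FIFO order within a priority

    def add(self, item, priority):
        """Add an element with an associated priority."""
        heapq.heappush(self._heap, (priority, self._count, item))
        self._count += 1

    def pop(self):
        """Remove and return the highest-priority element."""
        return heapq.heappop(self._heap)[-1]

    def peek(self):
        """Look at the highest-priority element without removing it."""
        return self._heap[0][-1]

q = PriorityQueue()
q.add("new application", priority=2)
q.add("RCE", priority=1)  # RCEs jump ahead of regular new applications
assert q.peek() == "RCE"
assert q.pop() == "RCE"
assert q.pop() == "new application"
```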
(1) A priority queue processes higher-priority items first, giving them a shorter queuing delay – but only at the expense of longer queuing delays for lower-priority arrivals. Since the USPTO gives higher priority to RCEs and continuing applications than to regular new applications, large numbers of RCEs and continuing applications should result in disproportionate waiting times for regular new applications.
(2) At the very least, a limit on the number of continuing applications would be desirable, because continuing applications can produce multiple generations of unlimited numbers of high-priority offspring. RCEs and multiple non-final office actions both contribute to the risk of starvation, but they pose less risk than continuing applications because they do not produce parallel offspring (branching feedback).
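The starvation risk can be seen in a toy simulation (my own construction, not the authors' model): a single examiner always takes the highest-priority case, and each served high-priority case spawns another, so a regular application waits until the feedback loop runs dry.

```python
import heapq

# Toy model: one examiner, one unit of service time per case.
# Priority 0 = RCE/continuation-style work; priority 1 = regular application.
queue = []
heapq.heappush(queue, (1, 0, "regular application"))  # arrives at t = 0
for i in range(2):  # seed two high-priority cases
    heapq.heappush(queue, (0, i, "high-priority case"))

t = 0
spawn_id = 2
while queue:
    prio, _, item = heapq.heappop(queue)
    t += 1  # examiner spends one unit of time on this case
    if item == "regular application":
        break
    if spawn_id < 10:  # each high-priority case generates one offspring (feedback)
        heapq.heappush(queue, (0, spawn_id, "high-priority case"))
        spawn_id += 1

print(f"regular application finally served at t={t}")
```

With the spawn cap at 10 the regular application is eventually served; with unbounded branching feedback it would never be reached at all, which is the starvation the note describes.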
Did the study validate the PTO's assertion that pendency would be reduced? Erm . . . not quite:
. . . the [current] USPTO model is saturated and is overwhelmed. Some sort of policy change is needed to lower the total traffic intensity.
. . . based on the current statistics, the results from limiting continuing applications seem similar to the results from limiting RCE applications. Prohibiting second and later generations of continuation applications, as the only policy change, will not have much impact. It may help somewhat to prohibit all continuations, but the system will remain saturated, and as with eliminating RCEs, it is unfair to the applicants.
This is not to say that the USPTO does not need safeguards to prevent an excessive use of continuation applications from becoming a workload problem in the future. A policy limiting continuation applications would be similar to the “admission control policies” used to limit the number of arriving items in computer operating system priority queues. What this analysis shows, however, is that based on current statistics, such a policy will not currently have much of an effect.
. . . we find that the excessive number of non-final rejections per application is the main cause of the system’s saturation. The sensitivity analysis performed in appendix IV supports this conclusion. This number of excessive non-final rejections in each round of prosecution dwarfs the number of second and later RCEs and continuation applications. Reducing the number of non-final rejections per application is the most effective way to improve the throughput of the USPTO. This will help reduce the primary burden on the system — the large number of regular amended cases that remain alive in the system due to repeated non-final rejections.
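The leverage of non-final rejections follows from a simple geometric-series argument (my illustration; the probabilities below are made up, not taken from the study): if each office action is non-final with probability p, keeping the case alive for another round, an application generates 1/(1−p) actions on average, so examiner workload scales with that multiplier.

```python
# Illustrative only: the p values are hypothetical, not the study's data.
def expected_actions(p_nonfinal: float) -> float:
    """Expected office actions per application if each action is
    non-final (triggering another round) with probability p_nonfinal."""
    return 1.0 / (1.0 - p_nonfinal)

for p in (0.6, 0.4, 0.2):
    print(f"p = {p:.1f}  ->  {expected_actions(p):.2f} actions per application")
```

Halving a hypothetical non-final rate from 0.6 to 0.3 would cut expected actions per application from 2.5 to about 1.43, which is why the study finds this lever so much more effective than capping RCEs or continuations.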
. . . if we keep the current policies on RCEs, continuations, and non-final rejections, we would need to increase the number of examiners by a factor of 1.2230 in order to decrease the total intensity from 1.2230 to 1. This would mean hiring 4215 x 0.2230 = 940 new examiners. (The USPTO goal for fiscal years 2005 and 2006 was to do exactly that.)
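The hiring arithmetic in the quote can be checked back-of-the-envelope with the figures given in the text:

```python
# Figures from the note's quote; the calculation is just a sanity check.
rho = 1.2230       # reported total traffic intensity
examiners = 4215   # current examining corps

# To bring intensity from rho down to 1, capacity must grow by a factor
# of rho, i.e. the corps needs (rho - 1) * examiners additional examiners.
new_hires = examiners * (rho - 1)
print(round(new_hires))  # matches the quoted 940
```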
Hiring additional examiners is a problematic solution. Our hiring calculations are based on a "steady state" assumption that the growth rate of applications will remain proportional to the growth rate of the examining corps. However, for many years the incoming rate of applications has been rising, and it is expected to continue to rise in the future. This means that the number of patent examiners would need to grow at least at a rate proportional to the growth rate of new applications, ad infinitum. Exponential growth in hiring is not a sustainable solution. The USPTO has recognized that hiring is not a viable long-term solution to the problem.
We were surprised by the results of the simulation. We expected the results to show starvation of the priority queue, caused by the RCEs and continuations. Instead, we got the unexpected result that the large number of non-final rejections per round of prosecution is the major cause of the backlog of applications.
Given the limitations of time and scarcity of data, our model was very simple in terms of mathematical complexity. A more detailed analysis was beyond the scope of this modest student note. The authors hope that this note will encourage further research along these lines, both inside and outside the USPTO.
View/download the study here (link)
Hat tip: I/P Updates