PatRe Benchmark Models Full Patent Examination Lifecycle
Researchers have unveiled PatRe, the first benchmark to simulate the complete patent examination process, including the generation of Office Actions and applicants' responses. Patent examination is an intricate, multi-stage process that demands both technical knowledge and legal reasoning, and it faces mounting pressure from the growing volume of applications. Previous benchmarks treated examination as discriminative classification or static extraction, overlooking its interactive, iterative nature, which resembles the peer review and rebuttal process in academic publishing. PatRe instead reframes examination as a dynamic, multi-turn dialogue of justification and counterargument. It comprises 480 authentic cases and supports both oracle and retrieval-simulated evaluation settings. Comprehensive experiments across a range of large language models (LLMs) yield insights into their performance and highlight differences between proprietary and open-source models.
Key facts
- PatRe is the first benchmark modeling the full patent examination lifecycle.
- It includes Office Action generation and applicant rebuttal.
- The benchmark comprises 480 real-world cases.
- It supports oracle and retrieval-simulated evaluation settings.
- Patent examination is compared to peer review and rebuttal in academic publishing.
- Experiments across various LLMs reveal insights into model performance.
- Differences between proprietary and open-source models were observed.
- The benchmark reframes patent examination as a dynamic, multi-turn process.
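The multi-turn examiner/applicant exchange described above can be sketched as a simple dialogue loop. This is a minimal illustration only: the function and class names (`run_examination`, `examiner_stub`, `applicant_stub`, `Turn`) are hypothetical stand-ins, not PatRe's actual API, and the stub functions replace real LLM calls.

```python
# Hypothetical sketch of a PatRe-style multi-turn examination loop.
# Names and signatures are illustrative, not the benchmark's real interface.
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "examiner" or "applicant"
    text: str

def examiner_stub(claims, prior_art, history):
    # Stand-in for an LLM issuing an Office Action citing prior art.
    return f"Office Action: claim 1 rejected over {prior_art[0]}"

def applicant_stub(claims, history):
    # Stand-in for an LLM drafting a rebuttal to the last Office Action.
    return "Rebuttal: claim 1 distinguishes the cited reference"

def run_examination(claims, prior_art, max_rounds=2):
    # In the oracle setting, gold prior art is given directly; in the
    # retrieval-simulated setting, a retriever would supply prior_art.
    history = []
    for _ in range(max_rounds):
        history.append(Turn("examiner", examiner_stub(claims, prior_art, history)))
        history.append(Turn("applicant", applicant_stub(claims, history)))
    return history

dialogue = run_examination(["claim 1: ..."], ["US1234567"], max_rounds=1)
```

Each round alternates one examiner turn (justification) with one applicant turn (counterargument), mirroring the review-and-rebuttal framing the benchmark adopts.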