GenAI-accelerated TLA+ Challenge

šŸ† Announcement: Winners of the 2025 TLAi+ Challenge

The TLA+ Foundation, in collaboration with NVIDIA, is pleased to announce the winners of the first GenAI-accelerated TLA+ Challenge—an open call for submissions showcasing creative and technically impressive work at the intersection of TLA+, formal methods, and AI-assisted development.

🥇 First Place — Specula (Code → Spec)

Specula, developed by Qian Cheng, Dr. Tianyin Xu, and Dr. Yu Huang, is an open-source framework that automatically derives TLA+ specifications from source code and checks them against the implementation. It combines an LLM-based generator with a Control Flow Analyzer to ensure syntactic and structural correctness, then uses trace validation to semantically align the spec and the implementation. Demonstrated on etcd’s Raft (Go) and Asterinas’s SpinLock (Rust), Specula offers a reproducible path toward scaling automated specification to broader codebases and abstracting algorithmic intent.
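
To make the code-to-spec direction concrete, here is a minimal, hand-written TLA+ sketch of the kind of lock specification such a pipeline might derive from a spinlock implementation. The module, names, and invariant are illustrative assumptions, not Specula's actual output.

    ---------------------------- MODULE SpinLock ----------------------------
    \* Illustrative sketch only; not Specula's actual output.
    CONSTANTS Threads, NoThread   \* NoThread is a model value meaning "free"
    ASSUME NoThread \notin Threads
    VARIABLE owner                \* thread currently holding the lock

    Init == owner = NoThread

    \* Abstracts a successful compare-and-swap in the implementation.
    Acquire(t) == owner = NoThread /\ owner' = t

    \* Only the current holder may release.
    Release(t) == owner = t /\ owner' = NoThread

    Next == \E t \in Threads : Acquire(t) \/ Release(t)
    Spec == Init /\ [][Next]_owner

    TypeOK == owner \in Threads \cup {NoThread}
    ==========================================================================

Trace validation, as described above, would then replay logged executions of the Go or Rust code and check that each one is a behavior allowed by Spec.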

Award: NVIDIA GeForce RTX 5090 (sponsored by NVIDIA)

🥈 Second Place — Andrew Helwer (LLM Token Restriction)

Andrew tested whether local LLMs can be constrained to produce valid TLA+ by restricting token generation. Using LlamaCpp’s GBNF syntax (context-free) and the Guidance framework (context-sensitive), he showed that grammar-based constraints can reliably enforce syntax and, in some cases, symbol definition. Adding TLA+ documentation to prompts improved results, and the language’s explicit symbol declarations make it well-suited to such constraints. This approach could complement or even surpass fine-tuning for niche languages.
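
As a rough illustration of why the language suits this approach, consider the hand-written fragment below (not output from the constrained models): a context-free grammar such as GBNF can enforce the shape of declarations and expressions, while a context-sensitive framework like Guidance can additionally restrict identifier tokens to symbols that have already been declared.

    \* Hand-written fragment for illustration.
    VARIABLES x, y          \* a constrainer can record the declared names...
    Init == x = 0 /\ y = 0
    Next == /\ x' = x + 1   \* ...and permit only x and y (plus primed forms)
            /\ y' = y       \*    wherever an identifier token may appear.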

Award: One-year, single-seat individual subscription to GitHub Copilot Pro+ (sponsored by the TLA+ Foundation)

🥉 Third Place — Gregory Terzian (Spec → Code)

Gregory explored using TLA+ as a blueprint for generating idiomatic, multithreaded Rust code. By applying TLA+’s refinement process in stages, the LLM is guided toward correct and efficient implementations, avoiding the ambiguity of natural language instructions. If this approach were combined with new TLA+ MCP server integrations, it could pave the way to fully verified, spec-driven code generation.
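
To sketch what staging might look like (a hypothetical fragment, not Gregory's actual prompts or specs): a coarse atomic action such as the Acquire step in the sketch above can first be refined into finer actions whose structure already mirrors the target code, so the LLM translates small, unambiguous actions instead of prose.

    \* Illustrative refinement: Acquire(t) becomes a CAS retry loop whose
    \* shape maps onto Rust's AtomicBool::compare_exchange in a loop.
    VARIABLES owner, pc              \* pc[t]: program counter of thread t
    TryAcquire(t) ==
        /\ pc[t] = "spin"
        /\ IF owner = NoThread
           THEN /\ owner' = t                            \* CAS succeeded
                /\ pc' = [pc EXCEPT ![t] = "critical"]
           ELSE /\ UNCHANGED owner                       \* CAS failed: retry
                /\ pc' = pc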

Award: One-year, single-seat individual subscription to GitHub Copilot Pro (sponsored by the TLA+ Foundation)

šŸ™ Acknowledgements

We warmly thank all participants for their submissions. The challenge was made possible thanks to NVIDIA and the TLA+ Foundation, whose generous sponsorship funded the prizes. We also invite other companies to consider donating prizes for future challenges to help grow the TLA+ ecosystem. For those with larger, longer-term projects in mind, remember that the TLA+ Grants Program is open for proposals year-round.


The rest of this page is retained in its original form, preserved for posterity.

Example Project Areas

Participants may submit work including, but not limited to:

  • Intelligent refactoring of TLA+ specifications (e.g., managing UNCHANGED correctly when adding variables; see the sketch after this list)
  • LLM-enhanced linters, formatters, or other development tools
  • LLM-driven tools for automated grading in education
  • Visualizations of specifications or execution traces
  • Generation of type annotations for tools like Apalache
  • Synthesis of inductive invariant candidates, validated via TLC or Apalache
  • Synthesis of TLAPS proofs
  • Synthesis of entire specifications from source code and design documents
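
To make the first bullet concrete: adding a variable to a spec means every action that does not modify it must now say so explicitly, which is tedious and error-prone by hand. A hypothetical before/after:

    \* Before: a single-variable spec.
    VARIABLE x
    Inc == x' = x + 1

    \* After adding y, the action no longer fully specifies the next state;
    \* a refactoring assistant should insert the UNCHANGED conjunct.
    VARIABLES x, y
    Inc == x' = x + 1 /\ UNCHANGED y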

Evaluation

Submissions will be judged by the TLA+ Specification Language Committee (SLC).

The Jury will evaluate submissions based on their functionality, relevance to the TLA+ ecosystem, and the thoughtful use of AI. Submissions must be reproducible by the Jury. Passive formats, such as videos alone, are not sufficient. However, the Jury does not require a fully polished product—a prototype is sufficient. All submissions must be released under the MIT license, and any underlying AI models must be publicly available.

The use of GenAI/LLMs is explicitly encouraged, provided that any AI-generated content—such as specs, invariants, or visualizations—is checked using some form of verification, such as the TLA+ tools.
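
For instance, a generated module like the SpinLock sketch above could be model-checked with TLC using a small configuration file (the file name and model values below are illustrative):

    \* SpinLock.cfg
    SPECIFICATION Spec
    CONSTANTS Threads = {t1, t2}
              NoThread = NoThread
    INVARIANT TypeOK

Running java -jar tla2tools.jar -config SpinLock.cfg SpinLock.tla then explores all reachable states and reports any violation of TypeOK, giving the Jury a reproducible check of the AI-generated artifact.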

Details

Participation Criteria

Eligible participants must meet the following:

Submission Timeline & Announcement

Submissions will open alongside this announcement. The deadline to submit entries for the challenge is July 3, 2025. Submissions must be sent to genai@tlapl.us. The jury will select the winner one month after the submission period closes. We welcome innovative, technically robust, and practically valuable contributions that explore and expand the potential of GenAI within the context of TLA+.

For longer-term or foundational engineering and research efforts related to TLA+, we encourage you to explore the TLA+ Grants Program.