Highly Incremental: A Simple Programmatic Approach for Many Objectives
Our new paper "Highly Incremental: A Simple Programmatic Approach for Many Objectives" by Philipp Schröer and Joost-Pieter Katoen presents a one-fits-all approach for quantitative objectives on probabilistic programs, applicable to verification with Caesar.
The paper will be presented at the 27th International Symposium on Formal Methods (FM 2026) in Tokyo, Japan.
A preprint with a detailed appendix of proofs and case studies is available on arXiv: "Highly Incremental: A Simple Programmatic Approach for Many Objectives (Extended Version)".
In this post, we:
- show how reward statements model runtime and related quantities.
- explain why verification of variance is difficult for runtime objectives.
- introduce our sound program transformation for these objectives, including higher moments, tail probabilities, CDFs, expected excess, and moment-generating functions.
- work through a Caesar example for the second moment of runtime.
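As a rough illustration of why second moments matter here (a hypothetical Monte Carlo sketch in Python, not from the paper and not Caesar's symbolic approach): the variance of a runtime T satisfies Var[T] = E[T^2] - E[T]^2, so reasoning about variance reduces to reasoning about the first two moments. Below, T is the number of iterations of a fair-coin loop.

```python
import random

def loop_runtime(rng: random.Random, p: float = 0.5) -> int:
    """Number of iterations of `while flip(p): skip` -- a toy
    probabilistic program whose runtime T is geometrically distributed."""
    t = 0
    while rng.random() < p:  # continue with probability p
        t += 1               # one unit of runtime per iteration
    return t

rng = random.Random(42)
samples = [loop_runtime(rng) for _ in range(100_000)]
n = len(samples)
mean = sum(samples) / n                          # estimates E[T]    (= 1 for p = 1/2)
second_moment = sum(t * t for t in samples) / n  # estimates E[T^2]  (= 3 for p = 1/2)
variance = second_moment - mean ** 2             # Var[T] = E[T^2] - E[T]^2  (= 2)
```

The simulation only estimates these quantities; the point of the paper's transformation is to verify such moment bounds exactly, without sampling.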
Programmatic Reward Modeling
The paper explores what can be encoded into probabilistic programs with reward statements, which accumulate a quantity of interest, the reward, during execution.
We then consider expected rewards, i.e., the expected value of the reward collected over the distribution of program executions.
This approach enables more applications of Caesar's weakest pre-expectation reasoning out of the box.
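To make this concrete (an illustrative Python simulation; Caesar reasons symbolically via weakest pre-expectations, and the syntax below is not Caesar's): a reward statement can be thought of as a counter that the program increments during execution, and the expected reward is the mean of that counter over the distribution of executions. Here the reward counts occurrences of a probabilistic event rather than runtime.

```python
import random

def toy_program(rng: random.Random) -> int:
    """Toy probabilistic program: ten steps, each of which triggers a
    rewarded event (say, a cache miss) with probability 0.3.
    `reward += 1` plays the role of a reward statement."""
    reward = 0
    for _ in range(10):
        if rng.random() < 0.3:  # event occurs with probability 0.3
            reward += 1         # reward statement: collect one unit
    return reward

rng = random.Random(7)
n = 100_000
# Monte Carlo estimate of the expected reward; the true value is 10 * 0.3 = 3.
expected_reward = sum(toy_program(rng) for _ in range(n)) / n
```

Since the reward counter is just part of the program state, the same weakest pre-expectation machinery that handles probabilities also handles expected rewards.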
