In this position paper, we argue that human baselines in foundation model evaluations must be more rigorous and more transparent to enable meaningful comparisons of human vs. AI performance, and we provide recommendations and a reporting checklist toward this end. Human performance baselines are vital for the machine learning community, downstream users, and policymakers to interpret AI evaluations. Models are often claimed to achieve “super-human” performance, but existing baselining methods are neither sufficiently rigorous nor sufficiently well-documented to robustly measure and assess performance differences. Based on a meta-review of the measurement theory and AI evaluation literatures, we derive a framework with recommendations for designing, executing, and reporting human baselines. We synthesize our recommendations into a checklist that we use to systematically review 115 human baseline studies in foundation model evaluations, thereby identifying shortcomings in existing baselining methods; our checklist can also assist researchers in conducting human baselines and reporting results. We hope our work can advance more rigorous AI evaluation practices that better serve both the research community and policymakers.