Thamus Observatory
What AI models read, cite, and leave out when answering contested questions.
Key Findings
The Epistemic X-Ray examines how AI language models construct different answers to the same question — and how those answers change when web search is enabled or disabled. Below are findings from the Observatory's "AI and Energy" data collection (March–April 2026).
Asked whether users should stop using AI, every provider says no, but their rhetorical strategies diverge sharply. OpenAI (GPT-5) leads with a blunt "Short answer: No" and pivots to practical tips. Google (Gemini 2.5 Pro) opens with "That's a really thoughtful and important question": flattery before analysis. Anthropic (Claude Haiku 4.5) pushes back on the framing itself: "I'd push back on the framing a bit." Each model defends itself, but the defense strategy differs.
How much energy a single query uses is a factual question where estimates should converge. They don't. Without web search, OpenAI gives 0.1–1 Wh for small models up to 20+ Wh for frontier models. Google says 0.1 to 1.0 Wh. Anthropic gives 0.5–50 Wh, a range roughly fifty times wider. With web search enabled, estimates tighten: Google anchors on published studies at 0.24–2.9 Wh. Web access acts as an empirical anchor; without it, models drift.
All three models reach for the same metaphor: AI is a "double-edged sword." But their opening postures differ. OpenAI admits "In the near term, yes." Google frames it as "critical and complex." Anthropic minimizes: "The direct impact is modest but real." When Google gains web search, its voice transforms entirely — from conversational chatbot to article-style journalism with markdown headers and citations.
Questions
What is the Epistemic X-Ray?
The Epistemic X-Ray is a research transparency tool that shows how different AI models answer the same question under controlled conditions. By comparing responses with and without web search, it reveals how source selection shapes the answers users receive. It is produced by the Thamus Observatory at the University of Ottawa as part of a longitudinal study of AI epistemic behavior.
What is the Thamus Observatory?
The Thamus Observatory is a university research platform that audits how large language models handle contested knowledge. Named after the Egyptian king in Plato's Phaedrus who warned about the dangers of writing, the Observatory tracks how AI systems search, select, and present information over time. It is led by Patrick McCurdy at the University of Ottawa and funded by the Social Sciences and Humanities Research Council of Canada (SSHRC).
What is the Monopolization of Maybe?
The Monopolization of Maybe is a theoretical concept developed by the Thamus Observatory to describe the tendency of AI systems to present a single narrative as the answer to questions that have multiple legitimate perspectives. Even when a model hedges with "some experts say," the structure of its response still funnels diverse evidence into one dominant framing. The X-Ray is designed to make this pattern observable by showing which sources each model encountered but chose not to cite.
Why does the X-Ray show both "with web search" and "without web search"?
Comparing web-on and web-off responses isolates the effect of real-time information retrieval. Web-off responses reveal what a model "believes" from training data alone, including which authority claims it makes without evidence. Web-on responses show how source access changes the answer. The gap between the two is where epistemic filtering becomes visible.
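One simple way to quantify that gap, purely as an illustration, is lexical overlap between a paired web-off and web-on response. The function below is a sketch using Jaccard similarity over words; the X-Ray's actual analysis is richer than this, and the two sample responses are invented for the example.

```python
def jaccard_overlap(web_off: str, web_on: str) -> float:
    """Rough lexical similarity between paired responses.

    1.0 means identical vocabulary, 0.0 means no shared words.
    A low score marks a large retrieval-driven shift worth
    inspecting by hand.
    """
    a = set(web_off.lower().split())
    b = set(web_on.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

# Invented example pair: same question, with and without search.
score = jaccard_overlap(
    "Roughly 0.1 to 1 Wh per query, based on general estimates.",
    "Published studies report 0.24 to 2.9 Wh per query.",
)
```

A score near zero does not prove the web-on answer is better, only that retrieval changed the response enough to merit a closer look.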
What is hallucinated authority in AI responses?
Hallucinated authority occurs when an AI model references a specific organization or report by name without actually consulting or citing it. For example, a model may write "According to the IEA..." while operating without web search, meaning it has no current source to back the claim. The X-Ray shows this pattern by displaying the empty source panel alongside the authoritative-sounding text.
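A crude way to surface this pattern is to flag authority-invoking phrases in a response that arrived with an empty source panel. The phrase list, regex, and function below are an illustrative heuristic, not the X-Ray's actual detector.

```python
import re

# Phrases that invoke an external authority by name (illustrative list).
AUTHORITY_PATTERN = re.compile(
    r"\b(?:according to|as reported by)\s+(?:the\s+)?([A-Z][\w&.-]*)",
    re.IGNORECASE,
)

def flag_hallucinated_authority(text: str, cited_sources: list[str]) -> list[str]:
    """Return authority names invoked in `text` but absent from `cited_sources`."""
    cited = {s.lower() for s in cited_sources}
    return [m for m in AUTHORITY_PATTERN.findall(text) if m.lower() not in cited]

flags = flag_hallucinated_authority(
    "According to the IEA, data centres use about 1% of global electricity.",
    cited_sources=[],  # web search was off: the source panel is empty
)
```

Here the model names the IEA while citing nothing, so "IEA" is flagged; a real detector would also need to handle paraphrased attributions that no phrase list catches.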
How is this data collected?
Prompts are sent to models from five providers at temperature 0.7 under two conditions: web search enabled and disabled. Web-on uses each provider's native search capability. Collection uses scheduled API calls via the Thamus Observatory infrastructure. Web-on data for some providers was not collected in this initial round.
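The two-condition design above amounts to a small run matrix: every provider is queried under both conditions at the same temperature. The sketch below illustrates that matrix; the provider names and payload shape are assumptions for the example, not the Observatory's actual infrastructure.

```python
from itertools import product

# Illustrative provider slugs; the Observatory collects from five providers.
PROVIDERS = ["openai", "google", "anthropic", "provider_d", "provider_e"]
WEB_SEARCH_CONDITIONS = [True, False]

def build_run_matrix(prompt: str) -> list[dict]:
    """Build one request payload per (provider, condition) pair.

    Temperature is fixed at 0.7 for every call, matching the
    collection protocol described above.
    """
    return [
        {
            "provider": provider,
            "prompt": prompt,
            "temperature": 0.7,
            "web_search": web_search,
        }
        for provider, web_search in product(PROVIDERS, WEB_SEARCH_CONDITIONS)
    ]

matrix = build_run_matrix("How much energy does a single AI query use?")
```

Five providers times two conditions yields ten scheduled calls per prompt; in practice some web-on cells stay empty, as noted above for this initial round.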
Who funds this research?
This research is funded by a Social Sciences and Humanities Research Council of Canada (SSHRC) Insight Grant (2024–2028), awarded to Patrick McCurdy at the University of Ottawa. SSHRC is Canada's federal research funding agency for the social sciences and humanities. The funder has no role in the design, collection, analysis, or publication of this research.