In Plato’s Phaedrus, King Thamus warns that the invention of writing will create the appearance of wisdom in those who possess none — offering knowledge without understanding.
You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant.
Plato, Phaedrus, 275a–b
Twenty-four centuries later, large language models (LLMs) present a strikingly similar challenge. They can instantly generate fluent, seemingly authoritative responses to almost any question. Yet how their knowledge is shaped (by training data, by the companies that build these systems, and by those who seek to influence what LLMs retrieve through Generative Engine Optimization) raises urgent questions about how AI systems represent contested topics and how such representations may change over time.
Team
Dr. Patrick McCurdy · Principal Investigator · University of Ottawa
Dr. Chris Russill · Co-Investigator · Carleton University
Jeff Phillips · Lead Developer