Why AI Needs Human Editors in Newsrooms
The Poynter Institute for Media Studies has issued a stark warning against the unchecked use of artificial intelligence in journalism, urging that all AI-generated content undergo human editorial review to safeguard accuracy and credibility.
According to Lisa McLendon, a journalism professor at the University of Kansas, AI tools are rapidly transforming workflows in newsrooms and classrooms, yet they remain prone to “hallucinations”: confidently presenting fabricated facts, citing nonexistent sources, and describing events that never occurred.
McLendon noted that these errors have worsened in the latest iterations of some AI tools, including recent OpenAI releases, raising concerns about the reliability of generative systems marketed as productivity boosters.
She emphasized that without careful human oversight, misinformation can quickly spread to massive audiences, eroding public trust in media institutions.
The warning comes as several embarrassing examples of AI-generated content published without human review have surfaced in recent months, underscoring the reputational risks of cutting human editors out of the process.
Some media organizations, recognizing that their audiences perceive AI-generated stories as low-quality or even misleading, have started rehiring writers and editors to restore credibility.
McLendon concluded that the guiding principle for responsible AI use should be: “Human first, human last — and that last human must be an editor,” reaffirming the irreplaceable role of trained professionals in protecting journalistic integrity.