Recent reporting by Nieman Lab describes how some major websites—including the news organizations The Guardian and The New York Times, as well as Reddit—are limiting or blocking access to their content in the Internet Archive’s Wayback Machine. According to the article, these organizations are blocking access largely out of concern that generative AI companies are using the Wayback Machine as a backdoor for large-scale scraping.
These concerns are understandable, but unfounded. The Wayback Machine is not intended to be a backdoor for large-scale commercial scraping, and, like many others on the web today, we expend significant time and effort preventing such abuse. Whatever legitimate concerns people may have about generative AI, libraries are not the problem, and blocking access to web archives is not the solution; doing so risks serious harm to the public record.
Knowledge rot has been a problem for years: you try to follow a link only to find it dead, or the author has deleted the content. Think of the classic anecdote of finding an old thread describing your exact problem, where the only reply is “Never mind, I figured it out.” Archiving won’t fix that specific case, but the principle stands: we lose an enormous amount of information.
It would be nice if we had a government that worked for We the People and made information archival mandatory, as the Library of Congress already does with printed materials.
Precisely that one, yes :)