The Next Web reports that the Internet Archive has vastly increased its historical database of the web:
The Internet Archive has updated its Wayback Machine with a significant bump in coverage: the service has gone from 150,000,000,000 URLs to having 240,000,000,000 URLs, a total of about 5 petabytes of data. More specifically, the Wayback Machine now covers the Web from late 1996 to December 9, 2012.
Web archiving is a topic of great interest to me and the subject of an article I’m writing. Part of the paper addresses the Bush administration’s questionable conduct regarding the content of the White House website. For example, the White House website’s robots exclusion file — a mechanism that can be used to ask search engine and web archive spiders to stay away — is nearly 2300 lines long. 2300 lines? Simply absurd. (Click here for a copy of the White House robots file that I downloaded on Nov. 25, 2008.)
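To see how the mechanism works in practice, here is a minimal sketch using Python’s standard `urllib.robotparser`. The `Disallow` rules below are hypothetical examples of mine, not lines from the actual White House file:

```python
# Illustrative sketch: how a well-behaved archive spider honors a
# robots exclusion file. The rules here are hypothetical, NOT taken
# from the actual White House robots.txt.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /iraq/
Disallow: /president/text/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks each URL before fetching it.
# "/iraq/..." matches a Disallow rule, so it is off-limits:
print(parser.can_fetch("ia_archiver", "https://www.whitehouse.gov/iraq/coalition.html"))
# "/news/..." matches no rule, so it may be fetched:
print(parser.can_fetch("ia_archiver", "https://www.whitehouse.gov/news/releases.html"))
```

Every excluded path is a page that spiders are asked to skip, which is why a 2300-line exclusion file on a government site raises eyebrows.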
Today, researchers at the University of Illinois released a study showing how the White House has deleted or modified portions of its website. Their findings are, sadly, unsurprising:
Legacies are in the air as President Bush prepares to leave the White House. How future historians will judge the president remains to be seen, but one thing is certain: future historians won’t have all the facts needed to make that judgment. One legacy at risk of being forgotten is the way the Bush White House has quietly deleted or modified key documents in the public record that are maintained under its direct control.
Remember the “Coalition of the Willing” that sided with the United States during the 2003 invasion of Iraq? If you search the White House web site today you’ll find a press release dated March 27, 2003 listing 49 countries forming the coalition. A key piece of evidence in the historical record, but also a troubling one. It is an impostor.
And although there were only 45 coalition members on the eve of the Iraq invasion, later deletions and revisions to key documents make it seem that there were always 49.
The study is a disturbing read. Rightly or not, a primary source of history for many researchers is the web. And any effort by the government to modify or delete historical records is appalling. As the authors note:
Updating lists to keep up with the times is one thing. Deleting original documents from the White House archives is another. Back-dating later documents and using them to replace the originals goes beyond irresponsible stewardship of the public record. It is rewriting history.
Although the Internet Archive’s Wayback Machine is a great research tool, its utility is hampered by a lack of basic search mechanisms. One can search by URL and browse archived links, but basic Google-style Boolean searching isn’t available. The Archive once offered a beta Boolean search tool, but it never worked and was later withdrawn.
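URL-based lookup works because each capture lives at a predictable address of the form `/web/<YYYYMMDDhhmmss>/<original URL>`. A minimal sketch of that addressing scheme (the helper function is mine, for illustration):

```python
# Minimal sketch of the Wayback Machine's snapshot addressing scheme:
# each capture is reachable at /web/<YYYYMMDDhhmmss>/<original URL>.
# The helper below simply constructs such an address.
from datetime import datetime

def wayback_url(original_url: str, when: datetime) -> str:
    """Build the address of the snapshot at (or nearest to) `when`."""
    timestamp = when.strftime("%Y%m%d%H%M%S")
    return f"https://web.archive.org/web/{timestamp}/{original_url}"

# The capture of the White House site nearest to Nov. 25, 2008:
print(wayback_url("http://www.whitehouse.gov/", datetime(2008, 11, 25)))
```

This is exactly why the Wayback Machine is easy to use when you already know the URL and date you want, and hard to use when you only know the words you are looking for.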
However, a new application may significantly expand our ability to data-mine archived webdata. Reports give a sneak peek at Zoetrope, an application being developed by researchers at Adobe and the University of Washington. As put by the researchers:
The Web is ephemeral. Pages change frequently, and it is nearly impossible to find data or follow a link after the underlying page evolves. We present Zoetrope, a system that enables interaction with the historical Web (pages, links, and embedded data) that would otherwise be lost to time. Using a number of novel interactions, the temporal Web can be manipulated, queried, and analyzed from the context of familar [sic] pages. Zoetrope is based on a set of operators for manipulating content streams. We describe these primitives and the associated indexing strategies for handling temporal Web data. They form the basis of Zoetrope and enable our construction of new temporal interactions and visualizations.
The demo video shows how historical webdata could be manipulated and compared, as the authors note, in a variety of “novel” ways. Even more significantly, researcher Eytan Adar “hopes to eventually incorporate information from the Internet Archive’s nearly 14 years of records.” Such a combination would massively increase the utility of web archives, but would also — as discussed in a paper I’m writing — exacerbate concerns over informational autonomy.
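To make the idea of “operators for manipulating content streams” concrete, here is a toy sketch in that spirit. The stream model and the operator names are my own illustration, not the researchers’ actual API; the data echoes the coalition-count revisions discussed above:

```python
# Illustrative sketch only: Zoetrope is described as a set of operators
# over temporal "content streams." Here a stream is modeled as a list of
# (timestamp, value) pairs, with two toy operators in that spirit.
# The operator names and model are mine, not the Zoetrope API.
from datetime import datetime

# A tiny content stream: captures of one page element over time.
stream = [
    (datetime(2003, 3, 20), "45 coalition members"),
    (datetime(2003, 3, 27), "49 coalition members"),
    (datetime(2004, 1, 15), "49 coalition members"),
]

def time_slice(stream, start, end):
    """Restrict a stream to captures with timestamps in [start, end)."""
    return [(t, v) for t, v in stream if start <= t < end]

def changes(stream):
    """Keep only captures whose content differs from the previous capture."""
    out, prev = [], object()
    for t, v in stream:
        if v != prev:
            out.append((t, v))
            prev = v
    return out

# Surface exactly the moments when the page's content was revised:
for t, v in changes(stream):
    print(t.date(), v)
```

Composing such operators over archived captures is what would let a researcher ask, for instance, “when did this number change?” — precisely the kind of temporal query the Wayback Machine alone cannot answer.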
Just a few days ago, I wrote about the Library of Congress’ new report on digital preservation (which itself followed the report of the Section 108 Study Group issued last March). Now, the Commission of the European Communities has released a green paper entitled Copyright in the Knowledge Economy, which discusses, among other things, digital preservation, the making available of digitized works, and orphan works.
A joint report on the problems of copyright and digital preservation — International Study on the Impact of Copyright Law on Digital Preservation — was released this month by the Library of Congress National Digital Information Infrastructure and Preservation Program (“NDIIPP”), the Joint Information Systems Committee, the Open Access to Knowledge (OAK) Law Project, and the SURFfoundation.
The report studies problems of digital preservation by looking at the copyright laws of four countries, including the United States. It finds:
Digital preservation is vital to ensure that works created and distributed in digital form will continue to be available over time to researchers, scholars and other users. Digital works are ephemeral, and unless preservation efforts are begun soon after such works are created, they will be lost to future generations. Although copyright and related laws are not the only obstacle to digital preservation activities, there is no question that those laws present significant challenges.