The edge of the known
Will serious scholars always require libraries and books or will they one day be supplanted by online documents? The World Wide Web is the closest thing in history to a universal mind dump, a download of the sum total of our knowledge. But can we ever map its extent? Or will our knowledge always grow faster than our ability to catalogue it?
The Library of Congress, which is larger than the New York Public Library, contains about 11 terabytes of information. That is a huge amount, yet it is dwarfed by the roughly 167 terabytes already accessible online through search engines, about fifteen times as much as the Library of Congress, a figure which even Grafton admits is impressive. But the information available through search engines like Google in turn shrinks to a dot compared to the material for which no ready directory exists: the so-called Deep Web, the part of the Internet for which there is no street map. The University of California, Berkeley estimates the Deep Web at 91,000 terabytes, 545 times larger than all the material indexed by search engines and 8,150 times larger than the holdings of the Library of Congress.
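The multiples above follow directly from the terabyte figures; here is a quick sanity check in Python (the calculations are mine, using the article's numbers; the quoted 8,150 evidently assumes a Library of Congress baseline slightly above 11 terabytes):

```python
# Figures quoted above, in terabytes.
library_of_congress = 11       # Library of Congress holdings
search_engines = 167           # material indexed by search engines
deep_web = 91_000              # UC Berkeley estimate of the Deep Web

print(search_engines / library_of_congress)   # about 15
print(deep_web / search_engines)              # about 545
print(deep_web / library_of_congress)         # about 8,273; the quoted 8,150
                                              # implies an LoC figure nearer
                                              # to 11.2 terabytes
```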
Read the rest of my article at Pajamas Media. Nothing follows.