The Death of Print, Xanadu and Other Nightmares, or, Brother,
Can You Paradigm: Additional Notes and Comments
Walt Crawford
Copyright © 1992 by Walt Crawford
Originally written summer 1992
Given the glorious opportunity to offer a slightly curmudgeonly view of futuristic librarianship,
I prepared a talk that was at least twice as long as the time allotted—and almost certainly longer than anyone
would stay to hear!
This handout includes some of the rest of the talk. Thus, it’s not notes for the presentation
itself; it’s an extension of the presentation. The same caveats apply as for the presentation: I’m speaking only
for myself, not for RLG or for LITA; I don’t have any better credentials as a prophet than anyone else (or any
worse, for that matter); and the only thing I’m totally sure of is that change will continue in frequently-unpredictable
ways.
Topics appear here in an order directly related to the talk, but with even less connectivity
than the talk itself.
Most Innovations Fail
Here’s an unnerving thought, when you’re deciding how far out in front you should be on new media
and other innovations. Most innovations fail. Sometimes before really penetrating the market; sometimes after a
short blaze of glory. There’s no sure way to predict which will fail and which will succeed—and, much as I hate
to say it, you can’t wait out all of the likely failures.
If there’s anything with a worse record for failure than technological innovation, it’s predictions
about the future of technology and civilization—but that’s another story. Meanwhile, the technology situation is
bad enough. Remember eight-track tapes? That was one of those blaze-of-glory situations (or, if you know how eight-track
tapes actually worked, blaze of infamy: the technology was fatally flawed from the beginning). Then there were
the half-dozen or more videocassette systems that were introduced, and failed, before Sony’s Beta made videorecording
popular. Then again, Beta’s pretty much gone now except for professional use—but it had more than a decade of reasonable
success. Still, I bet some libraries got involved with Cartrivision, or SelectaVision, or V-Cord—and lots of libraries
(and others) still use U-matic tapes.
Then there are videodiscs, only now beginning to succeed in the consumer market, and then
only thanks to CD players. The number of failed videodisc systems is astonishing, dating back to 1928 and pretty
much ending in 1984, when RCA finally abandoned its wretched CED system.
The list goes on. Libraries have had more than their share of failed micromedia, including ultrafiche,
aperture cards and various micro-opaque systems. Would anybody care to guess how many incompatible personal computer
systems came and went over the past two decades—and how many semi-compatible systems are still out there? I wouldn’t,
but the number is depressingly large.
New electronic publishing media? Well, of course, CD-ROM is an instant success. Which is to say
that the standards were established in 1983, the first products came out in 1984, and predictions of massive marketplace
success have been common since 1987 or 1988. They’re still predictions. Meanwhile, libraries may still be the largest
CD-ROM market. But compared to some other techniques, CD-ROM is doing great. Ever hear of OROM, Optical Read Only
Memory? 3M announced it in the early 1980s; IBM has been involved; it offers much faster access than CD-ROM, with
similar capacity; and in 1988, I thought it might be nearing the marketplace. Similarly, Sony’s DataROM—which may,
for all I know, have mutated into Sony’s miniature CD-ROM for the Data Discman. How about Cauzin Data Strips: a
big deal for a year or two, with PC World and Library Hi Tech News actively publishing the strips—but long since
disappeared.
Compact Disc Interactive? It’s been in the works since 1986; it’s on the market now, with what
appears to be tepid success at best. Compact Disc Video: also around since 1987 or so, but basically a dead duck.
Digital Video Interactive, announced by RCA in 1987; unclear what’s happening. Drexler’s LaserCard, in use for niche
applications for several years, with no breakout apparent. And we can’t forget "digital paper"—the hot
new medium that’s been coming any day now for at least half a decade.
The Library as Museum of Failed Technology
Why mention all these failures, only a few of the many? Because librarians have been urged to
use almost every one of these media, before it’s too late and their libraries become irrelevant—and some have.
What does the library do after the technology disappears? The easy answer is that you reformat the materials onto
a current digital medium such as write-once optical discs or digital audio tape (which has gone from "next
big thing" to niche product in a matter of months). But who has the money or time to do that?
In practice, one of two things happens: either the materials (some of which may be unique) become
inaccessible, or the library—some library—becomes a museum of failed technology, all of it lovingly maintained
so that the resources are available. We will continue to need some such museums, but it would be good to avoid
adding too many new systems to their collections. That’s particularly true because the new media require players
that aren’t quite as easy to maintain as some of the older media. Any good craftsman with the proper lenses could
build a new ultrafiche reader; will that be true for CD-ROM, if it ever leaves the marketplace? Even now, I wonder
how many historically important wire recordings sit in archives, museums and libraries that no longer have working
wire recorders...
It’s fine for a thousand experiments to bloom—but most libraries can’t afford to adopt the technologies
prematurely. When is the time right? There’s no easy answer.
The New Complements the Old
With relatively few exceptions, new technologies complement older ones, displacing them over
time and to the extent that the new technologies offer clear advantages. When it comes to communications, that’s
particularly true. Print did not destroy the oral tradition, although it extended its reach. Radio news did not
destroy newspapers. Even though television has apparently hurt newspaper circulation to some extent, there are
still many profitable newspapers. Neither did television destroy radio, which is more popular now than ever—although
it did change radio’s direction. Television and home video surely changed the motion picture business—but in complex
ways still not fully understood, and ways that have not destroyed the motion picture industry by any means.
Some will bring up Compact Discs as a case showing that new technology can totally displace an
older one quite rapidly. This is an exception, and the premise is somewhat faulty. Vinyl discs were already being
displaced by audiocassettes; they were a minority sound medium before CDs took over. More to the point, vinyl discs
represented a fundamentally flawed technology. Every use of a vinyl disc tends to destroy it, and you need exceptional
care to make vinyl discs work well in the first place. People moved to audiocassettes not because they were higher
quality (they’re significantly lower quality, in fact) but because cassettes are more portable, don’t require such
agonies of cleaning, anti-static treatment, etc., and don’t deteriorate sonically as rapidly or dramatically as
vinyl discs. CDs combine the convenience of cassettes with sound quality as good as or better than vinyl discs;
they sound as good on the twentieth playing as on the first; and you don’t need to be a tweak to get them set up
properly.
If books, magazines and newspapers were as hard to use as vinyl discs, they would be ripe for
the trashing—particularly if CD-ROM and electronic access were as straightforward as CDs. Neither is true; far
from it.
Limitations of Electronic Publications: Further Notes
Readability: Lighting and Clarity
Right now, no electronic medium can even begin to compare with ink on paper for readability,
particularly for sustained reading. In practice, the only reasonable way to read anything longer than five or six
screens is to print it out—which undermines the supposed ecological virtues of electronic publishing, and doesn’t
restore the much higher readability of well-designed printed material.
Some of the problems are fairly straightforward, and not nearly as easy to solve as death-of-the-book
advocates would have you believe. Virtually every readable electronic display uses transmitted light—light shining
in your face as you read—which is inherently more tiring than the reflected light used for book-reading. Displays
using only reflected light appear to be at a dead end; for one thing, the contrast isn’t high enough.
Resolution is another factor, and not a minor one. Most high-quality electronic displays resolve
at 72 dots per inch—some a little higher, cheap ones a little lower. So far, it appears remarkably difficult to
build a non-CRT display that can resolve much more finely; while you can do better with CRTs, the only 120-dot-per-inch
displays are frightfully expensive and seem likely to stay that way, given what’s involved in manufacturing.
But the lowest-quality printed text that you’re likely to encounter is 300 dots per inch. That’s
four times as much linear resolution but sixteen times as many dots per square inch. Sixteen times: that’s a huge
difference, and it’s the difference between squinting at 10-point type and reading it easily. That may also be
why, when I use Ventura Publisher in "normal" mode, what I see is actually about 33% larger than the
text size—so that, while continuous reading is still tiresome, I can at least read the screen without getting an
immediate headache.
Note that 300 dots per inch really isn’t particularly great quality for print, although it’s
good enough so that it won’t annoy most people. Realistically, good print is always effectively smoother than that.
For one thing, most modern laser printers use resolution enhancement to achieve something closer to 600 dots per inch apparent resolution—four times the density of plain 300-dpi output, or 64 times that of a screen. For another, when laser output is photographed
to make a plate, then printed onto uncoated paper, the imperfections tend to be smoothed out in the process. Finally,
much of the highest-quality print (including most glossy magazines) uses much higher-resolution imagesetting, typically
1,200 to 2,500 lines per inch. At the low end of that scale, that’s 256 times the density of a screen; at the high
end, roughly 1200 times.
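As a quick check on the comparisons above, the dot-density arithmetic can be sketched in a few lines. (The essay’s 16×, 64× and 256× figures round 300/72 down to an even 4× in linear resolution; the exact ratios run slightly higher.)

```python
# Dot density (dots per square inch) relative to a 72-dpi screen.
# Density scales as the square of linear resolution, so a modest linear
# advantage becomes a large one per square inch.
def density_ratio(dpi: float, baseline: float = 72) -> float:
    """How many times denser than the baseline display, per square inch."""
    return (dpi / baseline) ** 2

for dpi in (300, 600, 1200, 2500):
    print(f"{dpi:>4} dpi is about {density_ratio(dpi):.0f} times "
          f"the dot density of a 72-dpi screen")
```

Run as-is, this prints ratios a little above the rounded 16, 64, 256 and 1,200 used in the text—the point stands either way.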
For most text and graphics on uncoated paper, once you get above an effective 400-600 dots per
inch, there’s usually little visible difference in legibility and readability. But while the gap between enhanced
laser output and imagesetting is reasonably small (and shrinking), the gap between enhanced laser output and screen
display is more of a chasm.
When you’re reading text on a screen, your mind is seeing characters and words—but the dots are
visible, and your mind is doing extra work to resolve the dots into characters and the characters into words. That’s
a strain, and it’s one reason that you just don’t read as effectively from a screen as from a page. Think about
a really entertaining novel or short story you’ve read recently—a real page-turner, as it were. You may very well
have read two pages a minute. Now, imagine reading the same thing from the best display you’ve ever seen. Hard
to imagine, isn’t it? How fast do you estimate you would read it—and how long would it be before you’d turn away
for aspirin or just a break?
The point isn’t that we should just wait a few years and all these problems will be solved. Some
of them will be; others won’t. The point is that we have a first-rate medium for extended reading; it’s called
ink on paper. Until electronic media perform at least as well, there’s really no reason to discuss displacement
of non-reference, widely-circulated print material: why give up something that works for something that doesn’t?
Another way of putting this: you can’t get a "good read" from a display screen as easily
as you can from ink on paper. That’s simply not likely to change—particularly because the death-of-print folks
don’t appear to recognize the concept of "a good read."
Where’s the Money?
Most grandiose schemes of all-encompassing electronic publishing assume a computer in every home,
omnipresent and omnipotent text-searching, book-quality displays on every library terminal and every home computer,
and so on.
Once in a while, some technophile admits that this will all cost an incredibly large amount of
money. Where does that money come from?
Many libraries can’t even provide as many dumb terminals for their catalogs as they need. Many
more can’t dream of replacing those dumb terminals with PCs to provide better access. Now we come along and say,
well, you actually need super-high-quality graphics displays. Where’s the money?
The simple answer, one frequently given, is that we get it by scrapping print publications. As
we’ll see later, that’s a rather narrow answer even if print didn’t have its place, since only a small fraction
of publishing costs actually relates to putting ink on paper and distributing the bound products.
There seems to be a sense that currently-printed materials would somehow be free to libraries
if only they weren’t on paper. If some of you are getting free CD-ROM indexes, then maybe there’s a case to be
made—and I’d like to hear about your suppliers! Otherwise, it may be quite the opposite. Electronic publishing
doesn’t eliminate copyright—but it could make pay-per-use much easier to enforce. How, exactly, does that help
libraries? In practice, it not only doesn’t help them, it could destroy them.
Right now, electronic media can impose extra costs on libraries. If a library wishes to make
an electronic journal readily available to its patrons, it must not only catalog the journal, it must also assure
that the journal is archived, make reading stations readily available, and so on. That’s a burden. It’s one of
several reasons that the Public-Access Computer Systems Review, the electronic journal I’m associated with (one
that has no visible costs or charges), is also available (on a delayed basis) as an annual cumulated paperback
through LITA. The modest price ($20, so far) brings a nicely-formatted 6x9 paperback, with significant value added
in terms of better readability. For the first couple of volumes, there’s even an index. For a typical library,
the $20 paperback makes more sense than the free e-journal—although one doesn’t invalidate the other.
Economics of Print vs. Electronic Publishing for Journals and Books
We’re told that electronic publishing is much cheaper than print publishing. Perhaps, but the
case isn’t all that clear. Quite apart from the real (if frequently hidden) costs of electronic storage and networking
mechanisms themselves, it’s not the case that all or most of the costs of print materials come from printing and
distributing materials.
Print publishing involves several costs, which vary depending on the kind of print publishing.
For books, there are salary costs for acquisitions editors, copy editors, production editors, layout people, artists,
indexers and proofreaders; other costs include typesetting or imagesetting, platemaking and printing, binding,
distribution, and publicity. There’s also a little matter of profit. For journals, there may or may not be acquisitions
editors, but there will be editors and quite possibly reviewers; all of the other costs will also exist.
For many publications, the typesetting/imagesetting costs have already been reduced or eliminated,
as have some proofreading and layout costs. For example, my last five books have gone directly from my laser printer
to the platemaker. My contract calls for a production payment, partially offsetting the savings to the publisher;
that isn’t a big part of a typical book’s cost in any case.
Electronic publishing only eliminates imagesetting, printing, binding and some portion of distribution.
It has no effect on the need to acquire, edit, design (for best reading), index (or create hypertext links), and
publicize things. There’s still a distribution cost, even if it’s hidden; for CD-ROM, there’s both a "printing"
and a distribution cost. Some people are now arguing that you need better indexes for electronic publications,
since they’re much harder to "flip through." Hypertext links almost certainly require more care and intelligence
to prepare than do print indexes.
Publishers add value to the raw text. They also serve as gatekeepers—implying a level of quality
through their imprint, just as the better print journals imply a level of quality through refereeing. The gatekeeping
process is never free, although frequently subsidized in journals. Now that PACS Review is publishing some refereed
articles, I’d be willing to assert that some thousands of dollars in donated time for refereeing, spread out across
the editorial board, is being added to the thousands in donated time and computer resources that the University
of Houston and a few devoted librarians have provided in the past.
If you’re dealing with an all-volunteer publication, and if the electronic costs are subsidized
by institutions, you can achieve a "free" publication such as PACS Review—but, frankly, if you’re dealing
entirely with volunteers, you can do a print publication for a surprisingly low cost as well. How low? That depends—but
probably a lot lower than most "death of print" advocates realize. Say $10/year for a 6x9 quarterly,
120 pages per issue, 6,000 subscribers, permanent paper, including all printing and mailing costs. I’m absolutely
certain that’s feasible—with room to spare.
Narrow-Circulation STM Journals: The Most Severe Problem
The most obvious cause of economic stress in academic libraries comes from scientific, technical
and medical (STM) journals—more specifically, the ever-growing number of ever-more-expensive STM journals published
by a few large international publishers. It would be lovely to see some of these journals (particularly the overspecialized
journals with no appreciable non-library circulation) disappear from print, to be replaced by electronic journals
or print-on-demand operations.
Such journals have the fastest-growing subscription rates, and those rates have little to do
with actual production costs, except that it is admittedly more expensive per subscriber to produce very short-run
journals. The publishers market to the professors who publish in these journals, but libraries pay the bills: it’s
a sweet system for the publishers, but it’s killing the libraries.
Will such displacement really happen? I hope so, but I’m not terribly sanguine. Publishers won’t
give up those easy profits without a fight. Doubts will be cast on the quality of electronic journals. Any concerted
effort to establish a consortial nonprofit publishing arrangement, whether in print or electronic form, will face
any number of difficulties. But this is one area where libraries and their parent institutions need to work toward
a solution.
I don’t believe the solution is to leave the journals in the hands of their present publishers,
but convert to print-on-demand. In that case, the demand price will be exceedingly high. For libraries to realize
savings, the journals need to be in nonprofit hands, in institutions where the small level of demand doesn’t pose
a problem.
Serious as this problem is, it primarily affects universities and large colleges, and it primarily
affects the STM aspects of their libraries. It’s less serious for liberal arts colleges and almost irrelevant for
most public libraries.
Short-Lived Reference Works
I will shed no tears if Social Science Citation Index, Chemical Abstracts and other massive A&I
volumes disappear from the shelves, replaced by CD-ROM or online access—assuming that prior years are still available
as needed. Far be it from me to criticize a library that scraps a variety of print reference sources that can clearly
be improved by electronic access. That’s all to the good.
Replacing a print product with a CD-ROM does not, in any real way, take us toward the "virtual
library." CD-ROMs are, fundamentally, just big, hard-to-read books in little packages; they are published
items that libraries buy, house and provide access to.
Print abstracts and indexes may be the best examples of books that never made much sense as books.
They’re too bulky; distribution and printing represent unreasonable overhead; and they’re inherently slow to produce (although I suspect imagesetting is improving that)—and so on. I say "bring on the CD-ROM"—but
libraries must be watchful, so that back issues of abstracts and indexes continue to be available. Libraries purchase
books and get to keep them or give them away; why is it that they don’t actually own the CD-ROMs that they pay
good money for?
Garden-Variety Books: The 7-to-1 Ratio
Let’s move on to regular books—novels, non-fiction books, the stuff of public libraries and humanities
collections. We already know there are good reasons why people prefer these in book form to electronic form, and
that this preference is likely to continue for a while. What about the economics?
It may help to understand what portion of a book represents production and distribution costs.
It varies, of course, depending on the print run, the form of binding, nature of illustrations, etc., etc. But
one round number I’ve heard for typical medium-run hardbound books is seven to one: that is, the cost of production
and distribution is roughly one-seventh of the price.
There’s your potential savings—except, of course, that electronic distribution or CD-ROM publishing
isn’t free either. One-seventh—and, I stress again, that’s a round number. It’s also a number that typically includes
typesetting, and a growing number of books don’t involve typesetting charges.
Let’s take a short-run paperback example. Say that I’m going to produce a 208-page paperback,
6x9, on permanent paper, in an edition of 5,000 copies. Leave out typesetting costs, since the pages will be produced
using desktop publishing techniques. Would anyone care to guess the per-copy price for printing and binding, including
platemaking charges? $6 a copy? $5? $4? $3? How does $1.97 sound?
I think that’s on the high side—but I’ll guarantee you it can be done for that, including a two-color
cover. On a short schedule, and with high quality. Two bucks a copy. If it were a 300-page hardbound book, with
only 2,000 copies produced, on really handsome permanent book paper? It might be as much as $5 or $6 a copy; certainly
not much more than that. So if you’re going to publish the book on CD-ROM or some other electronic medium, you
can save anywhere from $2 to $4 per copy. That’s not really a lot, is it?
But let’s consider true electronic distribution, where you download the book and print it out—and
you will print it out, if you really want to read it. Now what are the economics?
Right now, you’re faced with an ecological and economic nightmare. Your typical laser printer
prints on one side of 8.5x11 paper, rather than the two sides of 6x9 paper that the book is on—thus, you’re using
19,448 square inches of paper instead of 5,616 square inches: nearly three and a half times as much paper. Figure about 2/3 of a
cent per sheet for the paper, and about 2 cents per page for the toner, assuming you get good discounts—that’s
$5.55 worth of materials, and you’re left with an unbound book twice as thick as it should be. Some savings! You’ve
just spent $5.55 to save $1.97 plus the shipping cost of $0.33 to $0.67.
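The paper-and-toner arithmetic above works out as follows—a minimal sketch using the rough 1992 per-sheet and per-page figures quoted in the text, not current prices:

```python
# Materials cost of printing a downloaded 208-page, 6x9 book on a typical
# one-sided 8.5x11 laser printer, using the rough figures from the text.
pages = 208

book_area = (pages / 2) * 6 * 9   # two-sided 6x9 leaves  -> 5,616 sq in
laser_area = pages * 8.5 * 11     # one-sided letter sheets -> 19,448 sq in

paper = pages * (2 / 3) / 100     # about 2/3 cent per sheet
toner = pages * 2 / 100           # about 2 cents per page of toner
materials = paper + toner         # about $5.55 in all

print(f"paper used: {laser_area / book_area:.2f}x the book's own paper")
print(f"materials:  ${materials:.2f} to avoid a $1.97 printing-and-binding cost")
```

The per-copy comparison is the whole argument: the home printout costs nearly three times what the publisher would have spent to print and bind the book properly.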
Those costs will change. At some point, for some books, it will be perfectly reasonable for publishers
to use CD-ROMs, optical discs or direct transmission to distribute specialty texts to local bookstores or agencies,
which can then print out specific books on demand, using high-speed duplexing laser printers that use properly-sized
paper. I don’t see how toner can ever be as cheap as ink, but for specialized books, this might be reasonable.
The bottom line is this: for run-of-the-mill books, anything with at least 1,000-1,500 probable
sales, traditional publishing makes economic sense—and electronic publishing doesn’t seem to offer major savings.
Mass-Market Paperbacks: The True Revolution
For mass-market paperbacks, we’re talking very different numbers. Using paper that’s not much
better than newsprint and high-speed presses, mass-market paperbacks are so cheap to produce that publishers can’t
be bothered to get back returns. Instead, the merchants just tear off the covers and ship them back for credit;
with luck, they recycle the innards of the book. I have no idea what the production cost per copy of a typical
mass-market paperback might be, but I’d be surprised if it exceeded fifty cents.
Indeed, as Michael Gorman has said, mass-market paperbacks represent the true information revolution.
They may not be beautiful, but they are readable—and they’re cheap, they’re everywhere, and they keep people reading.
Every supermarket and most corner stores have racks of paperbacks, at least a few dozen books; it may not be great
art, but it means that people are reading. That counts for a lot—particularly if there are good public libraries
nearby to allow them to go beyond the genre novels and big-name nonfiction paperbacks.
The Curious Economics of Mass-Market Magazines
Speaking of mass-market, what about mass-market magazines—by which I mean anything you’re likely
to find in a supermarket magazine rack? Such magazines may have as little as 50,000 circulation, but quite a few
have circulations in the millions. Are they candidates for displacement by electronic means? Not in any future
that makes sense to me, they aren’t.
Subscribers really don’t pay most of the cost of most mass-market magazines; advertisers do.
They pay those costs because they know who buys the magazines and know that print ads work, partly because you
keep the ad as long as you keep the magazine. That’s true for many magazines that don’t appear on newsstands as
well, including the extreme cases, the many "controlled-circulation" magazines and newspapers whose subscribers
don’t pay anything at all.
I pay $30 a year or so for PC Magazine, which gets me 22 issues averaging about 600 pages each,
clearly representing the efforts of a large and expert editorial staff. That’s about $1.40 an issue. It probably
costs half of that just to mail the monster. Most of the editorial and production money (including Ziff-Davis’s
profit) is coming from the advertisers, I suspect. Would it make sense to convert PC Magazine to electronic form?
I can’t imagine how—and I can’t imagine how you could make an electronic version as readable and handsome, or how
you would get me to look at the ads. You can get portions of PC Magazine online, using Ziff-Davis’s Ziffnet on
CompuServe, more specifically PC Magnet. You will, of course, pay CompuServe’s rates. After all, information isn’t
free.
I get another Ziff-Davis publication every week: PC Week, around 120 pages, tabloid, nicely produced.
It doesn’t cost me a cent, because I’m identified as a sufficiently significant buyer of PC-related products. Advertisers
pay the entire costs. I get a local computer fortnightly and computer monthly on the same basis: free. How will
you do that electronically?
Then there’s Consumer Reports. No advertising. Not cheap, either: about $1.50 a month, for a
relatively slender magazine (about 80 pages). But that’s all editorial material, it represents millions of dollars
worth of testing, and it includes loads of photographs and tables. Can you do it as effectively, more cheaply,
using electronic methods? I doubt it. I could be wrong. But I’m pretty certain I wouldn’t go through an entire
issue of Consumer Reports on Prodigy.
Personalized Newspapers and Big-City Daily Newspapers
Newspapers. There’s an area where the futurists tell us we can get something better through electronic
publishing. Specifically, we can have personalized newspapers: newspapers that contain all the stories that we
care about, and only those stories—and that are much more up-to-date than big city newspapers. Sounds great, huh?
I’m sure to some people it does—and, if they have enough money, they may eventually be able to
get something like this. To me, it sounds disastrous, not only from an economic perspective but from a social perspective.
The economics seem pretty straightforward. If it’s a personalized paper, it can’t be paid for by ad revenues, so
there will be an hourly charge or charge per item. Reading online is slow—and if you’re downloading, there will
be a charge for that. Even if the charge is as low as $12/hour, which seems unlikely, how much news can you really
get for the price of a daily newspaper? The equivalent of a front page? What will 35 cents buy you in personalized
daily news?
Assuming you want to keep the daily price of your personalized paper to less than three times that of the print paper, I’ll guess that you’ll wind up with little more than the headlines for your selected stories—unless,
of course, this is all value-added service effectively paid for by all those print subscribers—but they’ve disappeared,
because everything’s online.
About your interest profile: do you really want a daily newspaper that deals only with the interests
you state up front? How do you broaden your horizons? Or do you prefer, as some clearly do, to become more and
more narrow—more and more of a specialist, until (as someone said) you know everything there is to know about nothing at all?
The second economic problem with personalized daily papers is the special role of local newspaper
print advertising—the stuff that actually pays for most of the daily paper. Most local newspaper print advertising
isn’t like television advertising; it isn’t just trying to make brand names stand out. A great deal of it comes
from local merchants, telling you who they are, what they are doing right now, what’s on sale this week, and why
you’d want to visit them. Take away the newspaper audience, and many of these businesses disappear—and your knowledge
of when and where to shop diminishes. This is not a good thing.
Now, what about the social aspect? Just this: part of what you get from a newspaper comes from
all the headlines you glance over—and from the stories that you wind up reading, even though they would never be
in your interest profile. This is also one big advantage of the daily local paper over TV news: it has many times
as much room for text, and it covers lots of big and little items. Scores of them, in fact.
I don’t think that’s a trivial fact. You’re subconsciously aware of all those other headlines,
and they help to keep you in touch with the complexities of the real world. Some of those complexities are unpleasant,
to be sure. That’s part of non-virtual reality. Being aware, at least mildly, of what’s happening around you is
critical to being part of society. The personal newspaper simply won’t give you that—which, I suspect, is one reason
some people are so enamored of it, as they are with virtual reality. Who in this room would have Bosnia in his
or her personal profile? Virtual reality is what you want it to be—it doesn’t have all those nasty real-world aspects
to it.
Just for fun, I checked one fairly typical issue of the San Francisco Chronicle—neither the best
big-city daily nor one of the worst. It was a fairly slow day: Thursday, August 13. Not including things like stock
market tables, comics, ads and sports scores, I counted 168 different stories, 168 headlines that you would notice,
one way or another. Those weren’t all hard news—and I’m not sure that matters. To break it down: 21 were local
and regional stories, some fairly important; 7 were about California, but not the Bay Area; 19 were national news;
28 were international. Another 11 dealt with health, science and technology, and another 20 were on business, several
of them affecting other aspects of life. That’s 106 different aspects of daily life that you’d be aware of—perhaps
not read fully, but be aware of—skimming through the paper. There were also 7 "people stories," 15 features and editorials, 8 columns, 12 reviews and entertainment stories, and 1 fashion story. Finally, there
were 19 stories in the Sports section; with the Olympics over, it was an exceptionally slow period there.
How many of those 106 news stories and 62 other stories would fit in my personal profile, if
I believed in a personalized newspaper? Maybe a dozen; probably not that many. There would be some others, more
specialized items that don’t make the daily newspaper—and mostly things I really don’t need to know about right
away.
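The tally above can be checked with a quick sketch; the category labels are my own shorthand, but the counts are the ones given in the text:

```python
# Story counts from one issue of the San Francisco Chronicle
# (Thursday, August 13), as tallied in the text above.
news = {
    "local/regional": 21,
    "California, non-Bay Area": 7,
    "national": 19,
    "international": 28,
    "health/science/technology": 11,
    "business": 20,
}
other = {
    "people stories": 7,
    "features and editorials": 15,
    "columns": 8,
    "reviews and entertainment": 12,
    "fashion": 1,
    "sports": 19,
}

news_total = sum(news.values())
other_total = sum(other.values())
print(news_total, other_total, news_total + other_total)  # 106 62 168
```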
Passing Comments on Project Xanadu
If you’ve never heard of Project Xanadu or Ted Nelson, well, you’re better off. If you want to
know more about them, the information is readily available. Ted Nelson’s books are certainly available. Fundamentally,
Project Xanadu is hypertext gone global: a worldwide network containing everything ever written and everything
that ever will be written, all linked from paragraph to paragraph, idea to idea, in any way you could imagine.
You go to an information kiosk, slide your credit card through the slot, and use the most wonderful navigation
tools to find all the information you could ever want and make all the intellectual links and leaps that will make
all that information more worthwhile. Oh, and authors are protected—not in terms of the intellectual integrity
of their work, since that tends to disappear in this global hypertext universe, but in terms of royalties: every
time you touch a new paragraph, some payment is credited to the author’s account.
Why, it’s all so wonderful, with ideas building on ideas, paragraphs leading to paragraphs, making
connections from here to there, finding everything you never knew existed—how could boring old books and print
possibly survive in the face of such competition?
And it’s coming to your town, any day now. That’s been true for a decade or more. Autodesk is
actually putting money into it—that proves that it’s serious. Doesn’t it?
No, it doesn’t. Autodesk is a quirky company with one big success and enough money to play with
lots of interesting ideas. They produce Chaos: The Software and Rudy Rucker’s Cellular Automata Lab, both of which
are wonderfully interesting graphics systems of questionable usefulness or commercial potential. Their sponsorship
of something means that someone there thinks it’s neat. That’s about it. Incidentally, the company is named after
its originally intended flagship product, Autodesk—which never reached the market. A secondary product, AutoCAD,
is what pays for all this wonderfulness.
Oh, and by the way, Autodesk announced in late August that they’re spinning off Xanadu, that
is, getting rid of the operation. I believe they have finally recognized the economic realities of the project,
although there’s no way to know for sure.
My original outline includes subheadings on the economics of Project Xanadu and the intellectual
realities of universal hypertext. You can figure out the economics for yourself, I think—particularly since, barring
some disastrous legislative action, most contemporary authors (that is, people who actually organize data and information
into useful linear text) can refuse to have their material entered into the global hyperverse. Or can demand real
payment when the stuff gets read in isolated paragraphs...which means, say, paying a dime to the author every
time you touch a paragraph, quite apart from what you pay for system overhead. Sound good?
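What dime-a-paragraph royalties imply is easy to work out. The paragraph count here is an invented but plausible figure for illustration:

```python
# Royalty arithmetic for the dime-a-paragraph scheme described above.
# paragraphs_per_book is an assumption: a modest book might hold
# roughly a thousand paragraphs.
royalty_cents = 10            # the "dime" per paragraph touched
paragraphs_per_book = 1_000   # assumption for illustration

total_dollars = royalty_cents * paragraphs_per_book / 100
print(total_dollars)  # 100.0 dollars in royalties alone for one book-length read
```

A hundred dollars to read one book's worth of paragraphs, before any system overhead, makes the economics of the scheme fairly clear.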
Enough about Xanadu. It’s just not worth the attention. Some dystopian futures bear within them
the seeds of their own destruction; I believe that’s true in this case, and certainly hope so.
Full-Text Retrieval
Whether as part of a global nightmare such as Xanadu or in more local contexts, full text requires
retrieval techniques. Advocates of online everything can be surprisingly casual about such techniques.
Some simply say, "Use string searching to find what you want." There are three problems
with that approach. First, it assumes that the words you look for will lead to the proper paragraphs. Second, it
assumes that the searching will be fast enough to be feasible. Third, and most significant, it assumes that you
can handle the results. But you can readily predict that, given any reasonably large full-text database, almost
any single word will yield far too many paragraphs for the average user—to say nothing of what happens when a "smart"
search system automatically searches for synonyms as well.
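The first problem is easy to demonstrate. Here is a minimal sketch of the string-searching approach the advocates describe; the tiny corpus and the function are invented for illustration:

```python
# A minimal sketch of "use string searching to find what you want":
# every paragraph containing the term is a hit, with no ranking and
# no sense of which sense of the word was meant.
paragraphs = [
    "The bank approved the loan on Tuesday.",
    "She sat on the bank of the river.",
    "Bank mergers dominated the business pages.",
    "The pilot put the plane into a steep bank.",
]

def string_search(term, paras):
    """Return every paragraph containing the term, case-insensitively."""
    return [p for p in paras if term.lower() in p.lower()]

hits = string_search("bank", paragraphs)
print(len(hits))  # 4 -- every sense of the word, relevant or not
```

Even on four paragraphs, one common word matches everything; scale the corpus up to millions of paragraphs and the result set becomes unusable.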
A good book index is both much more and much less than a concordance—and full-text searching is, at bottom, a concordance approach. Should e-texts have book-like indexes? Perhaps—and some would argue that the
indexes need to be even more carefully created. That can only work for an individual e-text, however; once you
start combining tens of gigabytes of text into single searchable chunks, even the indexes will become useless.
I have seen no realistic solutions to this rather overwhelming problem of large full-text systems.
The biggest name these days seems to be WAIS, Wide Area Information Servers—"free" software created by a company that sells highly parallel supercomputers. And, incidentally, you’d need such supercomputers to use WAIS
effectively on a large textbase.
WAIS seems to work in test cases, but such test cases are almost always fairly modest, and the
reports are really rather sketchy. With WAIS, you feed in some terms, get back some "likely candidates,"
choose one or two that look good and say "get me more like that." You don’t really know what’s going
on, but supposedly you’ll get just what you need. If you don’t, of course, you won’t know. Those who advocate WAIS
assert that it will work effectively—on a textbase of up to 30 gigabytes. Impressive, right?
Not really. If we’re talking substantial access to large amounts of information, 30 gigabytes
is just a start. By comparison, the RLIN database—which represents only bibliographic information, not full-text—includes
something like twice that much. Yes, 30 gigabytes is quite a few books, but not all that many: perhaps 30-50 thousand,
or the contents of a medium-sized branch library. And when it comes to the kind of stuff you’d really load into
a WAIS—news feeds, journal articles, magazine articles, etc.—you can use up 30 gigabytes very quickly. Not a universal
solution by any means, even with the best technology.
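The rough arithmetic behind "perhaps 30-50 thousand" books is worth spelling out. The per-book figure is an assumption: the plain text of a typical book runs very roughly 0.6 to 1 megabyte.

```python
# How many books fit in a 30-gigabyte textbase, under an assumed
# range of 0.6-1 megabyte of plain text per book.
GIGABYTE = 10**9
textbase = 30 * GIGABYTE

for bytes_per_book in (600_000, 1_000_000):
    print(textbase // bytes_per_book)
# 50000 books at 0.6 MB each; 30000 books at 1 MB each
```

Either way, the result is on the order of a medium-sized branch library, not a universal store of text.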
I don’t believe there is a universal solution for truly massive full-text bases, one that allows
you to treat the entire collection of material as a single searchable entity and hope to make effective use of
the results. Large textbases are very effective ways of losing information and knowledge in vast swamps of data.
This doesn’t argue that full texts of some items shouldn’t be available electronically. Textual
analysis is a significant scholarly tool; I’m not enough of a scholar to understand its strengths and weaknesses,
but the computer can certainly simplify the process. Lots of things really do work at a paragraph-by-paragraph
level, and it’s great to make them available that way.
But when someone talks about "streamlining copyright to meet the needs of the information
age," I have this paranoid feeling that they want to change the law so that I can’t object to my publications
being made available electronically—and I do object to losing that power. So will, and so should, many of the best fiction writers and other authors.
False Pasts and Doomed Futures
When you think about library futures, it doesn’t hurt to be realistic about the past and present
roles of libraries. I’m struck by the extent to which the digital futurists glamorize the past of the library as
part of their call to wreck its future on behalf of their own visions. I sometimes feel that these futurists really
don’t want us to think about the future—they just want us to agree with them, and to follow their advice. Here
are a few examples, broadly adapted from what I’ve seen in the past few months and years.
"When Libraries were the Key Sources for Up-To-Date Information"
We’re sometimes told that, if we don’t take drastic action, libraries will no longer be the key
sources for the most up-to-date information; people will bypass libraries for other, more current, sources.
But just when has the library ever been the key source for the most up-to-the-minute information?
How many patrons call their public libraries to check on traffic conditions? What percentage of daily newspaper
readership takes place at the public library? Have businesspeople trying to keep up with an industry ever relied
on the library for the latest information—or have they subscribed to the industry weeklies, specialized newsletters
and, lately, online services?
I don’t know about your patrons, but I use the library to fill in the pieces, to provide resources
that I don’t need every day—and I think that’s pretty typical, and always has been. I don’t expect those resources
to be today’s news or even this week’s news, in most cases; nor, by and large, do I need them to be that current.
Libraries need to provide the cultural record, and to provide a range of information, enlightenment
and entertainment to those who wouldn’t have ready access to it otherwise. Libraries typically deal more in digested
data—that is, information that someone has organized with some thought—than in late-breaking news and raw data.
That’s always been their primary role. It should continue to be. It’s not the most glamorous role—but it’s important
and realistic.
More to the point, it’s a role that libraries can do well. To abandon that role, in order to
shift all resources over to up-to-the-minute electronic data, is a hopeless quest for centrality for the privileged
few. People will not abandon their other means of acquiring the current information that they need (or want) the
most. There’s no good reason for them to suddenly turn to the library for what they can do better on their own.
On the other hand, those who rely on the library for the cultural record and for useful (if less absolutely current)
material on subjects of secondary interest, will be disappointed and will turn against the library. We’d better
be sure that we can succeed in what is fundamentally a non-traditional role, before we abandon the traditional
role that has made libraries central to campuses and important to communities.
That new role is, in any case, one that only works well within academic and special libraries—and,
primarily, within the scientific, technical and medical spheres. I shudder at the thought that a library at a liberal
arts college or a community college would focus on an all-electronic, up-to-the-minute future; it would betray
its users and destroy itself in the process.
"When Everyone Was Literate, and All Adults Were Book-Buyers"
Some library futurists—or, in this case, no-futurists—say that libraries should abandon books
for virtual reality and multimedia, because we’re moving away from the era in which everyone was literate and all
adults purchased and read books. Now, book buyers are a minority and literacy is a problem.
Ah yes, those wonderful decades when every adult American read books as a primary means of leisure—and
when they all had the leisure to read books. Can anyone place those decades in history? Back before book and magazine
publishing began their dramatic decline? Back when every American took the daily newspaper? Just when were those
decades, and why did they end?
The decades I’m describing don’t exist, as far as I can tell—just as total book and magazine
publishing really hasn’t declined. We have a wonderful, heartwarming tendency to romanticize the past—why, Abraham
Lincoln, your typical nineteenth century kind of guy, was so desperate to read that he did it by the light of the
fireplace. Perhaps; but how many others did the same?
The remarkable thing is that surveys indicate that two-thirds of Americans do use their public
libraries—and most studies show that public library use is increasing, in some cases fairly dramatically. These
people are reading. They may not be buying books—but I’m not sure there’s ever been a period when the majority
of people in a society were actually book-buyers and book-readers. As with universal literacy, universal book-reading
is a wonderful idea and one that we should work towards—but it isn’t a description that fits any period of American
history. Maybe all the "people who count" used to read books—but that’s never been a large percentage
of the population.
One big difference these days is that we can no longer pretend that the rest of the population
is somehow subhuman, or that they don’t exist. It’s also true that non-readers have increasingly serious handicaps
in today’s and tomorrow’s society; the only way to have an equitable society is to have universal literacy.
The answer is not to say, "Well, nobody reads any more, so we’ll stop dealing with books."
The answer is for libraries to be involved in adult literacy work, for libraries to keep expanding those wonderful
children’s book-reading programs, for libraries to keep on keeping on. When children learn to love books, they
grow up as readers. When adults dismiss books as irrelevant, their children are less likely to be readers—and they
will suffer for it, in many ways.
"When Most Information Is Electronic, Public Libraries Will Be Obsolete"
The final, and perhaps most disheartening, ploy in the futurist’s routine is the simple assertion
that most information will be electronic in the future, with direct access by the user, thus making public libraries
obsolete.
Let’s look at that more carefully. First, substitute data for information, since many enthusiasts
don’t seem able to distinguish the two. In that case, the game’s already over: most data already exists in electronic
rather than print form. I can’t prove that, but it’s a reasonably safe bet, if only because of the absurdly large
quantity of satellite-generated data received every day.
But let’s say that we really mean information: that is, processed data that has some useful meaning
to people. Most is not all, of course, and that’s an important distinction. Saying that most information will be
electronic is, in and of itself, essentially meaningless for libraries, particularly for public libraries. So what?
The average public library doesn’t acquire most books that are published, and it doesn’t subscribe to most journals
(indeed, it probably subscribes to a tiny percentage of journals, as opposed to magazines). Does that make it worthless?
Not according to the two-thirds of Americans who use public libraries—they can’t use all the information there
is, in any case.
Finally, remember that the word "obsolete" means different things to different people.
In the computer industry, "obsolete" is more-or-less synonymous with either "on the market for at
least a week" or "will be on the market within the next six months," depending on how current you
want to be. In other words, if it’s useful, it’s obsolete.
I can live with that reading. An all-electronic public library will be useless for most patrons;
if being useful means being obsolete, so be it. This conference is certainly obsolete; we should be sitting in
front of our video terminals teleconferencing. What? You mean your library doesn’t support full-color, full-motion
teleconferencing? Too bad; you’re clearly obsolete.
Disintermediation: The Death of the Librarian
Disintermediation: there’s a word to conjure with. What it means, I think, is putting the end-user
in complete control of all transactions—in other words, getting rid of the librarian. When you see a non-librarian
use the word, it pays to find out whether the person has the vaguest idea what librarians actually do. The same
goes when you see a so-called librarian spout this nonsense.
What new computer tools will make librarians obsolete? Well, you can hand-wave about artificial
intelligence and expert systems; the first is one of the great long-running hypes of our century, and the second
appears to be applicable primarily to narrow realms. But there will be something—at least if we devalue the librarian’s
role enough to believe that a computer can do as well.
Equally absurd is the "online consultant" as the future role of librarians, a notion
spouted by a leading proponent of e-texts who really should know better. He says that libraries will go away—but
that librarians can set themselves up as telephone consultants at a $5 per minute fee. They can get rich that way.
And pigs can fly.
Librarianship will die if librarians kill it off. If you think you’re doomed and act as though
you’re irrelevant—well, then, you are. More’s the pity for the poor everyday user.
The Best Paradigm: Librarians and Libraries, Serving Users and Preserving the Culture
Here’s a paradigm that libraries and librarians can live with, now and for the future. It’s fairly
simple to state, although the true futurists will find it seriously off the mark. Here goes:
- Libraries and librarians should serve their users and preserve the culture.
Serve and preserve. A library must know who its current and potential users are—fundamentally,
who pays the bills, or who is its target audience. Different libraries have different audiences. A library serves
its users by providing them with information, enlightenment and entertainment in ways that the users and librarians
agree fall within the library’s sphere. That almost never means that the library serves all the information, enlightenment
and entertainment needs of its users; it never has, and never will. It does mean that the mix of services and mix
of media varies depending on the library and its users.
Show me a physics library that spends more on books than on journals and that has no plans to
shift significant levels of budgeting to electronic and other new media, and I’ll show you a physics library that’s
in serious trouble. Show me a neighborhood branch library that spends much more on books than on magazines (and
almost nothing on journals) and that, while considering some new media and electronic access, doesn’t plan any
radical changes, and I’ll show you a library that’s probably serving its community well. Most libraries fall somewhere
in the middle.
Preserving the culture is a little different. Every library has some part to play in this overall
societal need, but the parts differ. Part of preserving the culture is providing some core of materials that, while
relevant to the user population, isn’t their current highest priority. Every branch public library should have
the works of William Shakespeare, the novels of Mark Twain and a reasonable collection of other classic literature,
even though they won’t circulate as well as genre novels and how-to books. Another part is seeing to it that cultural
artifacts—not only the best books of a generation, but the popular books, magazines, and other material—are available
in enough locations, and cared for well enough, that future historians and users can see how the culture has evolved.
Electronic media, particularly CD-ROM and laserdisc, have enormous potential roles to play in this area, making
it feasible for smaller libraries to provide access to the raw materials of the culture. That availability should
not eliminate the original items. For some purposes, only the artifact will do.
If this hardly sounds like a jeremiad against all new media, there’s a reason for that. LC’s
American Memory Project is an important use of new technology to expand services and enrich our cultural memory;
it should prosper. It doesn’t threaten existing collections and services; it expands them. And it most assuredly
doesn’t threaten the existence of libraries. The average user really doesn’t need to have these materials at home,
and has little need for a high-speed digital link to be able to download them as needed—but what a wonderful thing
to have available at the neighborhood library when the interest arises.
Text unchanged since 1992. Loaded to Web May 23, 1999; layout modified July 18, 1999