Visions of the Media Age:
Taming the Information Monster


Eli M. Noam

Introduction

Response: fight new information media

Response: increased heat

Response: closing and specialization

Response: reorganization

Response: automatization

Response: multimedia technology

Response: using economics as a screen

Conclusion



Introduction

Exposed as we are to a torrent of victory bulletins from the front lines of R & D labs and marketers, it is easy to believe that the information revolution is being won. Computers get faster, smaller, and cheaper. Telecommunications have more capacity, diversity, and mobility. Television becomes sharper, smarter and more global. Fax, VCRs, PCs and CD-ROMs reach distant cottages. Thus, humankind appears to be on the verge of achieving mastery over information, turning a scarce resource, knowledge, into an abundant one.

But sometimes the worst that can happen is to get what one wants. And perhaps this is happening to us with the revolution in information and communications. While this revolution is progressing quite successfully, success, just like failure, has a way of creating its own problems.

We live in the information age, work in the information economy, and are surrounded by an information technology of astonishing performance and price. And yet -- why is it that with all these technological marvels we feel less than ever on top of information, a resource that does not exist (outside of DNA) except by our own creation? Why do we feel, as individuals and organizations, less in control, and always behind in what we should know?

The reason may be called the Paradox of Information Technology: the more information technology we have and the more knowledge we produce, the further behind we are in coping with information. We invent and build new technologies to help us, but they set us back still more. Today's new model, multimedia technology, is another such effort to catch up with information and to manage it. As with previous technological solutions, this effort will not be successful in gaining mastery over information flows.

Why do we have such a problem? The reason is that we have created a systemic imbalance in the information environment that leads to new bottlenecks. A communications process, to simplify considerably, consists of three major stages: the production of information, its distribution, and its absorption. These three elements have to exist in some relation to each other. Let us define information as "raw data subjected to organization"; i.e. data enhanced by the application of some selectivity and logical connections. As a refinement of new data, information does not occur by itself, but needs to be produced, distributed and used, just like an axe or a meal. In recent decades, technology has made giant strides in the distribution end of information.

We are near the point, historically speaking, when the cost of information distribution becomes both negligible and distance-insensitive. Distribution has contributed, in an interrelated fashion, to the production of information, which has been spurred by the evolution of advanced economies to services and knowledge-based manufacturing. One of the characteristics of postindustrial society is the systematic acquisition and application of information, which has replaced labor and capital as the source of value, productivity, and profits. The weak link in the chain is the processing of the produced and distributed information.

These bottlenecks are both human and organizational - the limited ability of individuals and their collectives to mentally process, evaluate, and use information. The real issue for future technology therefore does not appear to be production of information, and certainly not transmission, but rather processing. Almost anybody can add information. The difficult question is how to reduce it.

There is a reinforcing relationship among the stages of information: production, distribution, processing. If I produce a piece of information, it will stimulate distribution and use. Similarly, an increase in distribution stimulates information production and processing. And information production creates demand for still more such production. The relationships of the stages of information with each other and with themselves can be summarized in an input-output matrix, in the same way as has been done in the past for the interaction of industrial production such as steel, coal, electricity, etc. Where bottlenecks in growth occur, they are likely to have ripple effects throughout the other stages and beyond.
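The input-output view can be sketched numerically. The coefficients below are purely illustrative assumptions, not measured values; the point is only that a stimulus in one stage ripples through the others until the system settles:

```python
# Illustrative sketch of the input-output matrix of information stages.
# All coefficients are hypothetical; they only show how growth in one
# stage stimulates the others, Leontief-style.

stages = ["production", "distribution", "processing"]

# stimulus[i][j]: extra growth induced in stage j per unit of growth in stage i
stimulus = [
    [0.2, 0.3, 0.3],  # production stimulates itself, distribution, processing
    [0.3, 0.1, 0.2],  # distribution stimulates production and processing
    [0.1, 0.1, 0.0],  # processing feeds back more weakly
]

growth = [1.0, 1.0, 1.0]  # initial growth impulse in each stage
for _ in range(20):       # let the ripple effects propagate to equilibrium
    induced = [sum(growth[i] * stimulus[i][j] for i in range(3))
               for j in range(3)]
    growth = [1.0 + induced[j] for j in range(3)]

for name, g in zip(stages, growth):
    print(f"{name}: {g:.2f}")
```

With these made-up coefficients, a unit impulse in every stage more than doubles through mutual stimulation, which is the ripple effect the matrix analogy is meant to capture.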

In the past, the three stages of information grew slowly and more or less in tandem. Information institutions started about 5,000-8,000 years ago when at different places around the world specialized preservers and producers of information emerged in the form of priests. Recording methods emerged. When production was low, such as in Europe during the Dark Ages, distribution was also fairly rudimentary. Processing was under little pressure.

When printing and later the Industrial Revolution increased distribution technologies, information production grew and processing increased in parallel. Literacy rose dramatically. Organizational structures were formed to handle the increased information load, and they grew rapidly.

By sometime following World War II, the parallel trends diverged, and things have never been the same. The driving technologies were advanced by that war: computers (from code-breaking efforts); microwave transmission (from radar technology); satellites (from missile development); and television (from superior electronics). The production of information in the U.S. economy increases at a rate of about 6% annually, and the growth rate is itself increasing. The distribution rate is increasing even faster, by an estimated 10% and more.

The rate of increase in processing capacity needs to keep up with that. To reach a similar growth rate is very hard, and is not being achieved. It is hard, because of the limited capacity of processing channels of individuals and organizations, and the difficulty of increasing it.

This has serious implications. Virtually all aspects of society are changing due to that imbalance, and in the ensuing attempts to adjust the individual and social processing rates of information to the demands that growth in the other stages have put on them.

We all know that the quantity of information and of information producers has grown prodigiously. It has been said that 90 percent of all scientists who ever lived are alive today. The same holds for other information professions such as lawyers, journalists, or engineers. The number of scientists and engineers in the U.S. grew from 557,000 in 1950 to 4,372,000 in 1986, nearly an eightfold increase. By the late 1980s, their numbers roughly equaled the entire information workforce of 1900.

Most branches of science show an exponential growth of about 4-8 percent annually, with a doubling period of 10-15 years. To get a sense of the trend: Chemical Abstracts took 32 years (1907 to 1938) to reach one million abstracts. The second million took 18 years; the third, 8; the fourth, 4 years 8 months; and the fifth, 3 years and 4 months. If we assume that fewer than a full million chemistry articles had been produced before 1907, this means that in the past 2-3 years more articles on chemistry have been published than in humankind's entire history before the 20th century.
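The doubling periods follow directly from the growth rates by standard compound-growth arithmetic. A quick check (the 4-8 percent figures are the text's; the formula is the usual one):

```python
# At a steady annual growth rate r, the stock of literature doubles
# every ln(2)/ln(1+r) years.
import math

def doubling_period(annual_growth):
    """Years needed for cumulative output to double at a fixed rate."""
    return math.log(2) / math.log(1 + annual_growth)

for rate in (0.04, 0.06, 0.08):
    print(f"{rate:.0%} growth -> doubling every {doubling_period(rate):.1f} years")
```

Growth of 4-8 percent yields doubling periods of roughly 9 to 18 years, bracketing the 10-15 years cited above.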

A weekday edition of The New York Times contains more information than the average seventeenth-century Englishman came across in a lifetime. The Sunday edition far exceeds that. Some indicators: (For the US, unless otherwise noted)


A critical point is that information is always accompanied by "noise." In technical terms, noise is the interference, in a channel, with the primary signal. Noise also includes unwanted information that must be filtered out. The more information we produce, the more noise we produce, too. Conversely, as noise (including unwanted information) increases, filtering must increase and the information signal must gain in strength. Both activities require substantial resources. Thus, the noise created by information degrades information itself, and this is a serious matter, because information is itself one of the ways to counter entropy.

Shannon and Weaver (1949), pioneers of information theory, identified noise in communication, that is, what is opposed to information in signals, with entropy. This obscure mathematical point gave noise a central role in social analysis. Entropy is the essence of the second law of thermodynamics. That law is deeply pessimistic in that it sees the world eventually and irreversibly losing its energy potential and becoming, in Boulding's words, a "lukewarm pea soup." Accordingly, the world would not go out with a bang but with a whimper.
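Shannon's identification can be made concrete with his entropy measure, H = -sum(p * log2(p)). A small sketch (the example distributions are invented for illustration): an organized, predictable source has low entropy, while pure noise, a uniform source, has the maximum.

```python
# Shannon entropy of a discrete source, in bits per symbol.
# Organized information is predictable (low entropy); pure noise is
# maximally uncertain (maximum entropy) - the link Shannon and Weaver
# drew between noise and entropy.
import math

def entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

organized = [0.7, 0.1, 0.1, 0.1]      # a structured, predictable source
noise = [0.25, 0.25, 0.25, 0.25]      # uniform: maximally uncertain

print(f"organized source: {entropy(organized):.2f} bits/symbol")
print(f"pure noise:       {entropy(noise):.2f} bits/symbol")
```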

Entropy uses up the potential of energy and of life. But life's ability to create information and organize itself can counter entropy. Thus, information is perhaps the one major counterforce to entropy. Society's inability to manage its information resources therefore means that noise increases more rapidly than information, and this has many individual, organizational, and social implications.

There are a variety of social responses to the problem of noise and inadequate processing. They will now be discussed.

Response: fight new information media

One classic line of response to an expansion of information is to blame the new information medium that creates the expansion. Complaints against new media have been with us forever. In the city of Mainz, where Western printing was invented in 1456, a censorship decree was issued already in 1485. Berthold von Henneberg, archbishop of Mainz, otherwise a reformer, argued against the misuse made by printers, due to their greed in seeking money and glory.

In the 16th century, Erasmus wrote "Printers fill the world with useless, stupid, calumnious, libellous, violent, impious and seditious books, thwarting also the good effects of good books."

When movies were invented, they did not show Shakespeare's plays, but instead exhibited vaudeville dancers and even bare ankles. Traditionalists were outraged and sought a ban. Later, when sound was introduced into motion pictures, musicians' unions agitated that "sound movies are economic and cultural murder." When the radio arrived, researchers noted that "Parents have become aware of a puzzling change in the behavior of their children ..." The telephone was no exception to the dismissal of a new medium. Soon after its introduction, it was accused by a noted psychiatrist of driving people permanently insane.

When television emerged in the late '40s, the dominant media saw themselves affected negatively and tried to suppress it, using a variety of arguments on behalf of creativity. Hollywood went to war against TV. It's us or them, they said. Ronald Reagan went to work for TV and never made a Hollywood movie again. He had to look for another line of work. In addition, print, the dominant medium of intellectual culture, went to war against video, as its hold over culture slipped. The proponents of print culture have long attacked TV as a medium, not just its particular programs, channels, or industry structure. To their audience, being anti-TV is a self-identification as a cultured person.

Later, when cable TV emerged, it was the same story. The TV broadcasters, now the new media establishment, fought cable TV tooth and nail. The new arguments were the loss of national cohesion, and the absence of public interest standards. Broadcasters in the U.S. enlisted the government, as others have traditionally done, to try to control the new medium.

Today, with computer media in ascendance, the question is how they are treated. This is important, because multimedia are fundamentally the convergence of video technology - yesterday's villain - with computer processing, storage, and routing.

In the 1950s and 1960s, many believed that computers would surely create a 1984-like state, and computers had a negative image. Data protection laws, based on that "Big Brother" image of the technology, were passed just as computers became "distributed" rather than mainframe. When the real 1984 rolled around, the fear had become that 14-year-olds would use computers to start a nuclear war on their own.

In 1960, there were about 9,000 computers in the entire world, of which 55 percent were in the United States, 20 percent in Western Europe, and 1 percent in Latin America. By the mid '80s, there were about 50 million computers, and the relative distribution remained roughly the same. In 1995, the number of computers had increased to about 110 million.

Today, when computer usage is becoming democratic and when computers are becoming a communications device, the Cassandra industry is in full force, and an avalanche of neo-luddite literature is rolling in. Today's fears are the usual suspects in new garb:

Impressionable children. Sex. Violence. Crime. Games.
Idleness. Bad manners. Alienation. Anti-authority. Bad grammar.
Extremist potential. Isolation. Information poverty.
Commercialization. Poor countries.

This is not to belittle these concerns, or to give credence to the Pollyannas of the computer industry, but rather to observe that the new media kid on the block seems to be held responsible for the social sins of his elder media, often in inconsistent ways.

Where once too much elite control was decried for television, now there seems to be too little of it over anti-social tendencies on the net. Where once lowest common denominator programming was decried, we now mourn the loss of the national dialogue and common hearth. Where once youngsters did not communicate enough, they now communicate excessively, obsessively, and sloppily. Where once the old series were ridiculed as chewing gum for the eye, the same programs are now romanticized as golden oldies, and bathed in nostalgia.

Response: increased heat

More information, more noise, and more clutter lead to a need to amplify and/or repeat a signal message. This can be seen best in advertising. Between 1930 and 1990, advertising expenditures per capita in the U.S. increased by over 2,200%, whereas the population increase was 200%. A quarter century ago, the average American was targeted by at least 560 daily advertising messages, of which only 76 were noticed. In 1991, the average American received 3000 daily marketing messages.

Viewer retention (part of processing) of television commercials dropped. In 1986, 64% of those surveyed could name a TV commercial they had seen in the previous four weeks. By 1990, only 48% could. This leads to an increase in the "heat" of messages, whether in advertising, politics, or the general culture. It also affects media programs, which also must be more intense. It favors visual themes, simple stories, and pseudo-facts. In politics, it has led to the emergence of the pseudo-event and the 15-second sound bite.

Increasing heat and frequency, however, do not solve the problem of the processing bottleneck, because everyone resorts to the same methods of amplification. Thus, like onlookers at a parade who are all standing on their toes, we end up less comfortable, with more noise, and with even less processing relative to information.

Response: closing and specialization

One way people protect their processing channel is to shield it from too much information by selective attention, stereotype, even prejudice. People tend to notice communications favorable to their dispositions. Voters do not want information but confirmation (Lazarsfeld, 1944; Kriesberg, 1949). Leon Festinger introduced the concept of cognitive dissonance to describe such coping mechanisms.

John Locke in his Essay Concerning Human Understanding wrote: "Wherein the mind does these three things: first, it chooses a certain number (of specific ideas); secondly, it gives them connexion, and makes them into one idea; thirdly, it ties them together by a name." This is done "... for the convenience of communication."

Another form of closing is specialization. As the volume of information rises relative to any individual's ability to handle it, specialization takes place. There is nothing new about this.

Tasks were divided from the earliest days. Long before Adam Smith wrote his famous description of the pin factory, the sons of the original Adam already specialized, the Bible tells us. As the body of knowledge grew, the evolution of fields of expertise continued into ever-narrower slices. German has an apt term for the result: the "Fachidiot" (specialty moron).

Nietzsche mocked it a century ago: a scientist was examining the leeches in a marsh when Zarathustra, the prophet, approached him and asked if he was a specialist in the ways of the leech. "O, Zarathustra, ... that would be something immense; how could I presume to do so! ... That, however, of which I am master and knower, is the brain of the leech; that is my world! ... For the sake of this did I cast everything else aside, for the sake of this did everything else become indifferent to me ..."

The result: the inexorable specialization of scholars means that universities cannot maintain coverage of all subject areas in the face of the expanding universe of knowledge, unless their research staff grows more or less at the same rate as scholarly output, about 4-8 percent a year. This is not sustainable economically. The result is that universities no longer cover the full range of scholarship. They might still have most academic disciplines represented -- whatever that means -- but only a limited set of the numerous subspecialties.

Many specialized scholars find fewer similarly specialized colleagues on their own campus for purposes of complementarity of work. In other words, the collaborative advantages of physical proximity in universities decline. Instead, scholarly interaction increasingly takes place with similarly interested but distant specialists, i.e., in the professional rather than the physical realm.

This is not new, of course. Diana Crane's classic Invisible Colleges (1972) traced the interaction among distant scientists. But the information-induced pressures of specialization have increased, as did the means to make the invisible college the main affiliation. Air transport established the jet-setting professoriate. Even more so, electronic communications are creating new scholarly communities which respond to the elementary need for intellectual collaboration, through electronic dialogues, computer conferencing, and (soon) video calls, strengthened by the occasional beer at a conference for human bonding. Thus, while more information should help our understanding, it also narrows our focus, breaks up established patterns, and increases transaction costs.

Response: reorganization

An organization transforms inputs - resources, messages - into outputs. Groups, like individuals, have channel processing capacity and points of overload. James G. Miller (1960, 1971) studied group information overload experimentally. Overload is the point at which additional information does not increase performance but rather involves a leveling or falling off of performance. At over-capacity, the system mostly needs to take care of exceptional circumstances (crises).

Even without congestion, more information is not necessarily better for decision-making purposes. According to studies, when people recognize the absence of relevant information, they tend toward less extreme evaluations (cf. Yates, Jagacinski & Faber, 1978). Judgments are adjusted to compensate for the uncertainty due to incomplete information, and this means more moderate positions (Cialdini, Levy, Herman & Evenbeck, 1973; Jaccard & Wood, 1988). Laboratory studies also show that decision makers seek more information than they can effectively use (O'Reilly, 1980).

Management studies show (Raymond, 1962) that the typical executive can receive and absorb only 1/100 to 1/1000 of the available information that is relevant to his or her decisions. Additional information may actually reduce performance even as it increases the decision maker's confidence (Oskamp, 1965).

There were hardly any middle managers in the United States before the mid-nineteenth century. But by 1940, managers and clerks accounted for almost 17 percent of the U.S. work force. The number of clerks grew by 45% between 1900 and 1910 alone, far outpacing the growth in the general work force. In the same decade, the number of stenographers, typists, and secretaries, the staff workers for middle management, increased by 189 percent (Beniger, 1986). The function of these employees was essentially to carry information up to decision makers and to implement their decisions back down.

One way for organizations to increase information processing capacity is simply to grow. But as information increases, control mechanisms require still more information, leading to excess load and even potentially to general breakdown.

An organization's response to informational complexity is usually to increase organizational complexity - management layers, procedures, and controls. The results are organizational pathologies: tensions between the field and the center; depersonalized leadership; fragmented understanding; a takeover by rigid procedures.

Just like individuals, a group has upper limits for information processing. The larger the group, the more specialization and task-sharing can be accomplished, but the greater internal information flows become. For Peter Drucker, the First Law of information theory is that "every relay doubles the noise and cuts the message in half." As the group grows, reciprocal relations become impossible to maintain. Once the number of nodes in a group grows beyond six, small group structures break down (Davis, 1969).
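Drucker's "First Law" can be taken literally for illustration: if each relay halves the message and doubles the noise, the signal-to-noise ratio falls fourfold per management layer. A toy sketch:

```python
# Drucker's quip taken literally: every relay halves the message and
# doubles the noise, so the signal-to-noise ratio drops by a factor of
# four per relay.

def signal_to_noise(layers, message=1.0, noise=1.0):
    """S/N ratio after passing a message through `layers` relays."""
    for _ in range(layers):
        message /= 2
        noise *= 2
    return message / noise

for layers in range(5):
    print(f"{layers} relays: S/N = {signal_to_noise(layers):.4f}")
```

After only four layers of hierarchy, the ratio has fallen to 1/256 of its original value, which is the arithmetic behind the organizational pathologies described above.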

One alleged new tool to enhance productivity in organizations is "groupware," such as Lotus Notes, which permits many people to communicate among themselves, both within and among companies. IBM just paid 3.6 billion dollars for Lotus, largely based on the potential of Notes software. Yet does such technology improve performance? One study (J.G. Miller, 1960) found that teams of four participants actually had a lower channel capacity than single individuals at the same task.

In these experiments four people were required to cooperate in coordinating information that appeared on a screen. The performance of the two teams leveled off at about three bits of input per second, showing the point at which overload occurred. The teams' channel capacity was found to be between 2 and 2.5 bits of output per second, lower than that of individuals. With overload, group behavior patterns included: (1) dropping information; (2) processing erroneous information; (3) queuing, i.e., delaying action during rush periods in the hope of catching up during a lull; (4) filtering, i.e., selecting some types of information and ignoring others; (5) creating multiple channels by decentralization.

Various network patterns have different channel capacities; for example, a "wheel" has a better capacity than a "chain." But studies also show that strain on the central hub of a "wheel" increases when information increases, i.e., the executive suffers overload (Gilchrist, 1955).

Furthermore, as one introduces new technology, ceteris do not remain paribus. Naturally the workplace will be transformed in time. In the past, jobs and work were arranged in a way that assured physical access to the physical object of work and to the necessary information. Within an organization, that meant substantial stationariness. But now the need for physical presence declines, because information distribution becomes cheap and powerful. In consequence, offices and even companies themselves become "virtual" organizations, i.e., network relationships.

Indeed, one may work for multiple such virtual organizations at the same time, and the classic employer-employee relationship will be superseded by freelance-type arrangements, in which the organization bids at any time for the particular skills it needs at that moment. This means that much of the information processing capacity of the organization lies outside of it.

Indeed, a major form of information processing is to delegate it to professionals. Society is full of institutions and professions whose major function is to select important information out of the babble. Examples are:


Editing creates tight, condensed, and less redundant information. On an individual level, it leads to a substitution of "edited" reality for direct experience. People go less to sports events, lectures, or political events. Instead of eyewitnessing raw data, they get the executive summary.

As one recent President proved, one can boil down any issue under the sun onto one index card. It helps, of course, to have three million people working for you. What is likely is that there will be increasing formal and informal rules (social norms) on keeping memos short, and executive summaries will become the main event. Briefs may be brief again.

Response: automatization

Information screening is the key technological challenge for the information sector. The super pipe requires the super screen. But as everyone who has used a database can attest, the tricky part of any existing search system is how to suppress repetitive or unimportant information. That is, one needs screening by quality. Expert systems and artificial intelligence applications will be useful here, but the technology is not even close at hand, if it can ever be achieved.

Some such systems are "intelligent agents," autonomous and adaptive computer programs within software environments such as operating systems, databases or computer networks. Typical tasks performed by intelligent agents could include filtering electronic mail, scheduling appointments, locating information, alerting to investment opportunities and making travel arrangements. A learning agent acquires its competence by continuously watching the user's performance and examples, by direct and indirect user feedback, and by asking for advice from other agents that assist other users with the same task.

One example of an intelligent agent is Telescript, General Magic's communications-oriented programming language. Telescript messages know what to do and where to go. They can navigate wide-area networks on their own. But all agent technology is rudimentary. The so-called intelligent agents are mainly mail filters. Technology can do only the most formalistic information selection. Humans can infer concepts from the words of a document.

Computers are bad at that task. They have great difficulties determining what is important. Contextual analysis will have to advance to the point that machines can comprehend the context of information and its meaning. Technological screening is, at present, quite high in its ratio of hype to reality.
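The "mail filter" level of intelligence described here amounts to purely formalistic keyword matching, with no grasp of context or meaning. A minimal sketch (the scoring scheme, messages, and interest list are all invented for illustration):

```python
# A minimal sketch of formalistic mail filtering: selection by keyword
# counts, with no understanding of context or meaning. Messages and
# interest keywords are invented examples.

def score(message, interests):
    """Count how many of the user's interest keywords a message contains."""
    words = message.lower().split()
    return sum(words.count(kw) for kw in interests)

def filter_mail(messages, interests, threshold=1):
    """Keep only messages whose keyword score meets the threshold."""
    return [m for m in messages if score(m, interests) >= threshold]

inbox = [
    "Conference on broadband networks next month",
    "Win a free vacation now",
    "Draft paper on video server networks attached",
]
kept = filter_mail(inbox, interests=["networks", "video", "broadband"])
for m in kept:
    print(m)
```

Such a filter drops the obvious junk but cannot tell an important message from a trivial one that happens to use the right words, which is exactly the limitation the text describes.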

Response: multimedia technology

Today's technological effort at managing information is multimedia. Of course, multimedia has been around since the dances of cavemen. What we call, vaguely and imprecisely, "multimedia" is a collection of attributes based on the convergence of technologies. These attributes are:


Two-way Interactivity

Interactivity and return channels permit the establishment of user control of the kind a reader has who can flip, scan, and select among different books. Selectivity, in turn, permits a customization of the product, i.e., it leads to individualization.

For television consumption, for example, it is tempting to believe that, as the trend in TV continues, we will move from multi-channel to mega-channel television. But this would be an incorrect extrapolation. Actually, the opposite will happen: We will move into distributed television. The key technologies here are video servers, broadband switching, and navigational agents. Fiber lines are important but not essential. Video servers are large computer-like storage devices, storing thousands of films, documentaries, and other kinds of programs.

Many companies will operate these video servers, charging a varying mix of usage fees, subscription charges, transaction fees, advertising charges, and sales commissions. There will be customized ads, based on customer demographics and on customer transaction data. These servers will be interconnected through phone and cable in the way that the Internet today links computers and their databases.

This means an extraordinary choice of program options. When given an abundance of choices, how do people react? They seek simplification and convenience. In the U.S., for example, few people go through the trouble of ordering films by pay-per-view.

In the future, they will simplify the selection task through "navigators" and personalized menus. In that world, channels will disappear, or rather become "virtual" channels. This leads to the emergence of an individualized "me-TV" ("canal moi," "Kanal Ich") based on a viewer's expressed interests, his past viewing habits, recommendations from critics he trusts, delegated selection agents, and a bit of built-in randomness.

This is why the future will not be one of 50, 500 or 5000 channels, the TV-critics' nightmare. Much worse. It will be a future of only one channel, a personalized channel for each individual. The simultaneous mass medium experience will be replaced by individualized experience. This is not just narrow-casting. It is custom-casting.

In telecommunications, similarly, the evolution of networks leads to customization. As networks proliferate, a new class of 'systems integrators' is about to emerge, whose role is to provide the end user with access to a variety of services, in a one-stop fashion.

Today, systems integrators exist for large customers. But tomorrow things may be quite different. The additional step will be for systems integrators to emerge that put together individualized networks for personal use, or personal networks, providing a whole range of communications and content options.

Multitracking

With rising information inflows, two coping strategies exist to increase processing rates: either raise the channel capacity by technology and organization, or use channels in a parallel fashion. Electronic information systems can increase channel capacity, especially in transmission. But biological and social systems of humans cannot increase their channel flow equally dramatically. This suggests the multi-channelling of information.

Media have different rates of display and absorption, for different types of information and different senses. One strategy for information processing therefore is to affect the way information gets presented. Eyes can take in visual information at a broadband megabit rate. In fact, if the TV action is too slow, one gets bored. On the other hand, written information gets absorbed at the much slower rate of about 300 words/min., or 200 bits per second. Ears are even slower, about 200 words/min., or about 150 bits per second. And the tactile sense can handle up to perhaps 20 words/min., or about 15 bps, using Braille.
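The words-per-minute and bits-per-second figures are consistent if one assumes roughly 40-45 bits of raw text per English word (five to six characters at seven to eight bits each). That per-word figure is an assumption for the sake of the arithmetic, not a measured constant:

```python
# Back-of-the-envelope conversion behind the absorption rates:
# assume ~42 bits per English word (about 5-6 characters at 7-8 bits
# each) - an illustrative assumption, not a measured constant.

BITS_PER_WORD = 42  # assumed midpoint

def words_per_min_to_bits_per_sec(wpm, bits_per_word=BITS_PER_WORD):
    """Convert an absorption rate in words/min to bits/second."""
    return wpm / 60 * bits_per_word

channels = {"reading": 300, "listening": 200, "Braille": 20}
for name, wpm in channels.items():
    bps = words_per_min_to_bits_per_sec(wpm)
    print(f"{name}: {wpm} words/min ~ {bps:.0f} bits/s")
```

The conversion reproduces the cited figures to within rounding: about 210, 140, and 14 bits per second for reading, listening, and Braille respectively.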

Thus, visual information is far and away the fastest. Print takes up only a tiny fraction of our absorptive capacity. We are using hopelessly outmoded Phoenician and Latin communications protocols. But we are stuck with them. The form of written language has hardly changed in centuries, and we have a big social investment in this particular form of standardization.

Society needs compatibility in its infrastructure of exchange symbols, and the social and cultural fabric revolves around it. Therefore, even streamlining the needlessly complicated spelling of the English language would be a culturally traumatic event, and is unlikely to happen outside a tiny circle of professionally eccentric poets. So instead of junking the Latin alphabet and traditional forms of written language, what is more likely to happen is a shift to a multimedia form of communications with more visual and symbolic information, each channel carrying the type of information that can be processed most effectively on it.

Visuals are good for conveying emotions; print is better for abstract facts. This means simultaneous attention to several information streams. Multimedia thus melds several inflows, such as vision, hearing, and smell.

Children already engage in informational multitasking. One psychological study concluded that children, while watching TV, fight, flip baseball cards, play jacks, play with pets, look after brothers and sisters, play board games, make and build things, play with toys, jump and dance, read, do homework, fight and talk.

Television advertisements are a simple example of multiple information streams. They pack a lot into 30 seconds of picture, voice, music, and written language, all superimposed on each other and very tightly edited. Another example is the sales presentation with its increasingly elaborate audiovisual aids.

Such multi-channel communications will lead to new forms of communications language. Many more symbols will be used, because symbols speed up processing, combining the abstraction of written language with the speed of the visual message. Even the sense of smell can, in theory, be used as a channel. Artificial smells are becoming production items: "corporate identity" smells are now on offer, and no doubt smells can be reproduced over distance. Touch-and-feel communication is also in development, first for sex applications.

"Virtual reality" technology is today's most sophisticated multitracking medium, filling up much of the user's sensory capacity by creating a simulation that permits the user to "enter" three-dimensional space and interact in it.

In Don Quixote (I, Ch. 20), Sancho Panza tells stories discursively. Don Quixote complains: "If that is the way you tell your tale, Sancho, repeating everything you are going to say twice, you will not finish it in two days. Go straight on with it, and tell it like a reasonable man, or else say nothing." Don Quixote is the archetypal man of letters. He wants Sancho Panza to conform to written style, linear and clear. But Sancho Panza retorts indignantly, "Tales are always told in my part of the country in the very way I am telling this, and I cannot tell it in any other, nor is it right of your worship to ask me to adopt new customs."

Will video push print out to a secondary role? Not really. Print works well for abstractions, whereas for images, video is superior. According to Nobel laureate Herbert Simon, the "least cost-efficient thing you can do" is to read daily newspapers. He recommends instead reading The World Almanac once a year.

Thus, each information stream and presentation has some advantages. For me, the medium of the future is the comic strip. Or rather, the 'hyper' comic strip: panels of text with still pictures, some of them moving like film when you touch the screen. There will be sound, and even smell. The text will go into deeper detail and connect with other text, like hypertext. One can skim this hyper comic strip or navigate in it. It will appear on flat, light display panels held like a book, on which one can write notes, store them, and send them to other locations.

Storage and Retrieval

Multimedia is often primarily a storage and retrieval technology. It serves one of the major responses to information overload -- the substitution of storage for processing, with retrieval as the key link. Instead of "learning" and "knowing," we develop skills and technologies of "finding."

Response: Using economics as a screen

To an economist, the main problem is the absence of economic mechanisms for allocating processing capacity. If our individual and organizational attention is a limited resource, it should be allocated like other scarce commodities. That, at least, is the question to raise. An example: we are being inundated by junk e-mail, each piece imposing some time cost on us. Prices are an excellent form of information about information: they provide relative values.

This could be applied to an e-mail, voice-mail, or fax system, with the sender assessing the content's value by attaching "urgent," "standard," or "junk" levels of "electronic postage" to an outgoing message. The postage would be charged against the sender's budget and credited to the recipient. This would cut down on excessive group lists and junk mail.
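The electronic-postage idea can be sketched as a simple ledger. The postage levels, prices, and account names below are illustrative assumptions, not part of any real mail protocol:

```python
# A minimal sketch of "electronic postage": the sender picks a
# priority level, the price is debited from the sender's budget
# and credited to the recipient. Prices are illustrative.
POSTAGE = {"urgent": 1.00, "standard": 0.10, "junk": 0.01}

class PostageLedger:
    def __init__(self):
        self.balances = {}  # account name -> monetary balance

    def open(self, account, budget=0.0):
        self.balances[account] = budget

    def send(self, sender, recipient, level):
        """Charge the sender's budget and credit the recipient."""
        price = POSTAGE[level]
        if self.balances[sender] < price:
            raise ValueError("insufficient postage budget")
        self.balances[sender] -= price
        self.balances[recipient] += price
        return price
```

Because every message costs the sender something, a mass mailing to a large group list carries a real price, while a recipient accumulates compensation for the attention consumed.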

Telemarketing is a similar case: because access is of value, exchange transactions would create rational markets in place of today's disruptive calls followed by hang-ups.

How could this happen? Telecommunications equipment and service providers are likely to offer the capability for customers to select among their incoming calls electronically only those calls they want, and to assess an access charge for those calls they don't normally want to accept. Such a service might be described as Personal-900 Service, analogous to 900-service in which the caller pays a fee to the called party. Such a service would, for example, block incoming telephone calls to a consumer with an electronic message and a series of options. The caller would be informed that the customer "charges" telemarketers for the privilege of speaking to them.

Individual customers could set different price schedules for themselves based on their privacy value, and even the time of day. They would establish a "personal access charge" account with their phone or an enhanced services provider, or a credit card company. By proceeding, the telemarketer enters into a contractual agreement. The billing service provider would then automatically credit and debit the accounts in question. Thus, markets in information access will develop.
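A personal access charge account of this kind might look like the following sketch, where the class name, the prices, and the day/night schedule are all hypothetical illustrations of the mechanism, not a description of any deployed service:

```python
# A sketch of the hypothetical "Personal-900" access charge. The
# consumer sets a price schedule (here, a simple day/night split);
# a telemarketer's call goes through only if the caller accepts
# the charge, which is then credited to the consumer's account.
class PersonalAccessAccount:
    def __init__(self, day_price, night_price):
        self.day_price = day_price      # charge during business hours
        self.night_price = night_price  # higher charge at dinner time
        self.balance = 0.0              # credits accumulated by consumer

    def access_charge(self, hour):
        """Price of reaching this consumer at a given hour (0-23)."""
        return self.night_price if hour >= 17 else self.day_price

    def accept_call(self, caller_willingness_to_pay, hour):
        """Connect the call only if the caller accepts the charge."""
        price = self.access_charge(hour)
        if caller_willingness_to_pay >= price:
            self.balance += price       # billing provider credits consumer
            return True
        return False
```

The schedule plays the role of the "personal access charge" described above: callers who value the contact more than the posted price get through and pay; the rest are screened out automatically.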

For example, consumers will adjust the payment they demand in response to the number of telemarketer calls competing for their limited attention span. If a consumer charges more than telemarketers are willing to pay, he can either lower his access charge or simply not be called anymore.

Indeed, why is our time a free good for anyone who wants access to our mailbox or telephone receiver? Let them pay for access. In the upper reaches of power and prestige, access has always been paid for indirectly.

In advertising, marketers will increasingly pay consumers for their attention. Already, some follow an ad with a little quiz; viewers who respond get a free on-demand movie or a merchandise coupon as a reward. These payments can also be indirect, through a price premium charged for watching a program without further advertising interruptions.

Conclusion

Information technology, in its present expression as multimedia technology, will not rectify the imbalance between information production and distribution on the one hand and processing on the other. It will not solve the problem of limited processing capacity and of noisy channels.

We may talk about emerging information technology as if it were just about getting entertainment and study help into the home and stock-market data into the office. But it is naive to think that it will stop there and not affect us much more deeply. When the automobile was introduced, it was thought of as a horseless carriage. But it did not stop there: our cities, our houses, our family structures, our work, and our neighborhoods have all been changed by it. Why should a revolution in the transport of information not have an impact similar to that of the earlier revolution in the transport of physical goods?

As of: June 1995

