Francis HEYLIGHEN[*] & Johan BOLLEN
PO, Free University of Brussels, Pleinlaan 2, B-1050 Brussels, Belgium
E-mail: fheyligh@vnet3.vub.ac.be
It is an old idea that the whole of humanity, the system formed by all individuals together with their patterns of communication, exchange and cooperation, can be viewed as a single organism, the "super-being" (Turchin, 1977) or "metaman" (Stock, 1993). When one considers the conflicts, misunderstandings, or simply the lack of communication between individuals, it becomes clear, though, that the integration of individuals in human society is much less advanced than the integration of cells in a multicellular organism. Moreover, analysis of the evolutionary mechanisms underlying individual and group selfishness, competition and cooperation shows that there is no easy way to overcome the hurdles to further integration (Heylighen & Campbell, 1995).
However, there is at least one domain where integration seems to be moving full speed ahead: the development of ever more powerful communication channels between individuals and groups. After the advent of telegraph and telephone in the 19th century, and the mass media of radio and TV in the first half of the 20th century, the last decade in particular has been characterized by the explosive development of world-wide, digital communication networks. In the "society as super-organism" metaphor, the communication channels play the roles of nerves, transmitting signals between the different organs and muscles (cf. Turchin, 1977).
In more advanced organisms, the nerves tend to develop a complex mesh of interconnections, the brain, where sets of incoming signals are processed and converted to further signals sent to organs and muscles. Whereas the traditional communication media link sender and receiver directly, networked media have multiple cross-connections between the different channels, allowing complex sets of data from different sources to be integrated before being delivered to the receiver. (For example, a keyword search over the Internet will gather a list of documents residing in different places and produced by different authors.) The fact that the different "nodes" of the digital network are controlled by computers, capable of sophisticated information processing, reinforces the similarity between the network and the brain, viewed as a network of interconnected neurons. This has led to the metaphor of the world-wide computer network as a "global brain" (Mayer-Kress & Barczys, 1995; Russell, 1983).
In organisms, the evolution of the brain is characterized by several metasystem transitions, each producing a new level of complexity. The level where neural pathways and the signals they carry are interconnected according to a fixed program was called the level of "complex reflexes" by Turchin (1977; cf. Heylighen, 1995). This is to be contrasted with the previous level of "simple reflexes", where there are no interconnections between pathways or reflex arcs (and thus no "brain"), and the subsequent level of "learning" or "associating", where interconnections can adapt to experience. This paper will argue that the present global computer network is on the verge of undergoing similar transitions to the levels of learning, thought, and possibly even metarationality. These transitions can be facilitated by taking the "network as brain" metaphor more seriously, turning it into a model of what a future global network may look like, and thus helping us to better design and control that future. In reference to the super-organism metaphor for society, this model will be called the "super-brain".
WWW in particular has become so popular that it not only is by far the most used Internet system, but that most other systems are increasingly accessed via the WWW interface. In addition to the factors above, this popularity is due to WWW's extremely simple but powerful way of representing networked information: distributed hypermedia. It is this architecture that turns WWW into a prime candidate for the substrate of a global brain.
The distributed hypermedia paradigm is a synthesis of three ideas (Heylighen, 1994). 1) Hypertext refers to the fact that WWW documents are cross-referenced by "hotlinks": highlighted sections or phrases in the text, which can be selected by the user, calling up an associated document with more information about the phrase's subject. Linked documents ("nodes") form a network of associations or "web", in a sense similar to the associative memory characterizing the brain. 2) Multimedia means that documents can present their information in any modality or format available: formatted text, drawings, sound, photographs, movies, 3-D "virtual reality" scenes, ..., or any combination of these. This makes it possible to choose the presentation best suited for conveying an intuitive grasp of the document's contents to the user, if desired bypassing abstract, textual or symbolic representations for more concrete, sensory equivalents. 3) Distribution means that linked documents can reside on different computers, maintained by different people, in different parts of the world. With good network connections, the time needed to transfer a document from another continent is of the order of seconds, not noticeably different from the time it takes to transfer a document from the neighbouring office. This makes it possible to transparently integrate information on a global scale.
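To make these three ideas concrete, the following minimal sketch (in Python, with invented URLs and field names; it illustrates the paradigm, not the actual WWW protocols) represents a document as a node carrying multimedia content and hypertext links, and a distributed web as nothing more than a collection of such nodes that may live on different hosts:

```python
# Illustrative sketch only: hypothetical URLs and field names, not the actual WWW data model.
from dataclasses import dataclass, field

@dataclass
class Document:
    url: str                                    # the node may reside on any host ("distribution")
    media: dict = field(default_factory=dict)   # text, images, sound, ... ("multimedia")
    links: list = field(default_factory=list)   # outgoing "hotlinks" to other URLs ("hypertext")

a = Document("http://host-one.example/pets.html",
             media={"text": "Common pet diseases ..."},
             links=["http://host-two.example/dogs.html"])
b = Document("http://host-two.example/dogs.html",
             media={"text": "Dog behaviour ..."})

web = {doc.url: doc for doc in (a, b)}
next_doc = web[a.links[0]]   # following a hotlink is a simple lookup, wherever the node resides
```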
Initially the Web was used for passive browsing through existing documents. The addition of "electronic forms", however, made it possible for users to actively enter information, allowing them to create or edit documents and query or interact with specialized computer programs anywhere on the net, through the same intuitive "point and click" interface.
At present the World-Wide Web can be likened to a huge external memory, where stored information can be retrieved either by following associative links, or by explicitly entering looked-for terms in a search engine. What it lacks, though, is the capacity to autonomously learn new information. In practice, all documents and links are added by people, who use their own judgment about what is worthwhile and about which documents should be linked to which others. However, the cognitive capacity of an individual is much too limited to get any grasp of a huge network consisting of millions of documents. Personal experience or intuition is a rather poor guide for efficiently organizing the Web. The result is that the Web is mostly labyrinthine, and it is quite difficult to find the information one is looking for.
A first step to make the "Web as memory" more efficient is to let the Web itself discover the best possible organization. In the human brain knowledge and meaning develop through a process of associative learning: concepts that are regularly encountered together become more strongly connected (Hebb's rule for neural networks). At present, such learning in the Web only takes place through the intermediary of the user: when a maintainer of a web site about a particular subject finds other Web documents related to that subject, he or she will normally add links to those documents on the site. When many site maintainers are continuously scanning the Web for related material, and creating new links when they discover something interesting, the net effect is that the Web as a whole effectively undergoes some kind of associative learning.
However, this process would be much more efficient if it could work automatically. It is possible to implement simple algorithms that create associations on the basis of the paths of linked documents followed by the users. The principle is simply that links followed by many users become "stronger", while links that are rarely used become "weaker". Some simple heuristics can then propose likely candidates for new links: if a user moves from A to B to C, it is likely that there exists not only an association between A and B, but also between A and C (transitivity), and between B and A (symmetry). In this manner, potential new links are continuously generated, while only the ones that gather sufficient "strength" are retained and made visible to the user. We tested this process in an adaptive hypertext experiment, in which a web of randomly connected words self-organized into a semantic network by learning from the link selections made by its users (see a companion paper, Bollen & Heylighen, 1996, for more details about learning algorithms and experimental results).
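As a purely illustrative sketch of such an algorithm (in Python, with reward values and a visibility threshold chosen arbitrarily for the example, not taken from the experiment reported in Bollen & Heylighen, 1996), the three heuristics of frequency, transitivity and symmetry could be implemented as follows:

```python
# Toy implementation of the frequency/transitivity/symmetry heuristics; all parameters illustrative.
from collections import defaultdict

strength = defaultdict(float)   # (from_doc, to_doc) -> link strength
VISIBLE = 1.0                   # candidate links above this strength are shown to users

def learn_from_path(path, reward=1.0):
    """Update link strengths from one user's browsing path, e.g. ["A", "B", "C"]."""
    for prev, cur in zip(path, path[1:]):
        strength[(prev, cur)] += reward           # frequency: followed links get stronger
        strength[(cur, prev)] += 0.3 * reward     # symmetry: B -> A becomes a candidate
    for first, _, third in zip(path, path[1:], path[2:]):
        strength[(first, third)] += 0.5 * reward  # transitivity: A -> C becomes a candidate

def visible_links(doc):
    """Links from `doc` that have gathered enough strength to be displayed."""
    return sorted((target, s) for (source, target), s in strength.items()
                  if source == doc and s >= VISIBLE)

# Three users follow the same path; the candidate link A -> C gradually becomes visible.
for _ in range(3):
    learn_from_path(["A", "B", "C"])
print(visible_links("A"))   # [('B', 3.0), ('C', 1.5)]
```

The weakening of unused links could be added in the same spirit, for instance by periodically multiplying all strengths by a factor smaller than one.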
The strength of such associative learning mechanisms is that they work locally (they only need to store information about documents at most two steps away), but the self-organization they produce is global: given enough time, documents which are an arbitrary number of steps away from each other can become directly connected if a sufficient number of users follow the connecting path. We could imagine extending this method by more sophisticated techniques, which e.g. compute a degree of similarity between documents on the basis of the words they contain, and use this to suggest similar documents as candidate links from a given document. The expected result of such associative learning processes is that documents that are likely to be used together will also be situated near to each other in the topology of "cyberspace".
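One simple way to implement the document-similarity extension just mentioned would be to compare the word content of documents, for instance with a cosine measure over raw word counts (a standard information-retrieval technique; the documents, threshold and function names below are purely illustrative):

```python
# Hedged sketch: candidate links from word overlap, using plain cosine similarity on word counts.
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

docs = {
    "pets.html":    "pet disease symptoms and treatment of pet illness",
    "dogs.html":    "dog disease and dog illness symptoms",
    "gardens.html": "growing roses in a small garden",
}

def suggest_links(url, threshold=0.3):
    """Documents similar enough to `url` to be proposed as candidate links."""
    return [other for other in docs
            if other != url and cosine(docs[url], docs[other]) >= threshold]

print(suggest_links("pets.html"))   # ['dogs.html']
```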
If such learning algorithms could be generalized to the Web as a whole, the knowledge existing in the Web could become structured into a giant associative network which continuously learns from its users. Each time a new document is introduced, the links to and from it would immediately start to adapt to the pattern of its usage, and new links would appear which the author of the document could never have foreseen. Since this mechanism in a way "absorbs" the collective wisdom of all people consulting the Web, we can expect the result to be much more useful, extensive and reliable than any indexing system generated by individuals or groups.
A first such mechanism can be found in WAIS-style search engines (e.g. Lycos, http://lycos.cs.cmu.edu/). Here the user enters a combination of keywords that best reflects his or her query. The engine searches through its index of web documents for documents containing those keywords, and scores the "hits" for how well the corresponding documents match the search criteria. The best matches (e.g. those containing the highest density of desired words) are proposed to the user. For example, the input of the words "pet" and "disease" might bring up documents that have to do with veterinary science. This only works if the document one is looking for actually contains the words used as input. However, there might be other documents on the same subject using different words (e.g. "animal" and "illness") to discuss the same issue.
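Such keyword matching amounts to little more than the following sketch (an illustration of the general principle, not of Lycos's actual ranking algorithm; the index and scoring function are invented):

```python
# Illustrative keyword search: documents scored by the density of query words they contain.
def score(document_text: str, query_words) -> float:
    words = document_text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in query_words)
    return hits / len(words)          # "density" of desired words

index = {
    "vet.html": "pet disease prevention a guide to pet disease for owners",
    "zoo.html": "the zoo opens a new enclosure for elephants",
}

def search(query, top=5):
    q = {w.lower() for w in query}
    ranked = sorted(index, key=lambda url: score(index[url], q), reverse=True)
    return [(url, round(score(index[url], q), 2)) for url in ranked[:top]]

print(search(["pet", "disease"]))   # [('vet.html', 0.4), ('zoo.html', 0.0)]
```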
This problem may be overcome by a direct extension from the associative memory metaphor, the mechanism of "spreading activation" (cf. Jones, 1986; Salton & Buckley, 1988): activating one concept in memory activates its adjacent concepts which in turn activate their adjacent concepts. Documents about pets in an associative network are normally directly or indirectly linked to documents about animals, and so a spread of the activation received by "pet" to "animal" may be sufficient to select all documents about the issue. This can be implemented as follows. Nodes or concepts get an initial "activation" proportional to an estimate of their relevance for the query. This activation spreads to linked nodes, with a strength proportional to the strength of the link. The total activation of a newly reached node is calculated as the sum of activations entering through different links, weighted by the links' strength. This process is repeated, with the activation diffusing in parallel over different links, until a satisfactory solution is activated (or the activation becomes too weak).
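The procedure described above can be summarized in a few lines of code (a minimal sketch; the example network, the decay factor and the fixed number of steps are our own simplifying assumptions):

```python
# Minimal spreading-activation sketch; link strengths, decay and network are illustrative.
def spread(initial, links, steps=2, decay=0.5):
    """initial: {node: activation}; links: {node: [(neighbour, strength), ...]}."""
    activation = dict(initial)
    for _ in range(steps):
        incoming = {}
        for node, act in activation.items():
            for neighbour, strength in links.get(node, []):
                # activation entering a node through different links is summed,
                # weighted by the strength of each link
                incoming[neighbour] = incoming.get(neighbour, 0.0) + decay * act * strength
        for node, act in incoming.items():
            activation[node] = activation.get(node, 0.0) + act
    return activation

links = {
    "pet":     [("animal", 0.9)],
    "disease": [("illness", 0.8)],
    "animal":  [("animal-illness.html", 0.7)],
    "illness": [("animal-illness.html", 0.7)],
}
# The query activates "pet" and "disease"; after two steps the document on animal
# illnesses receives activation even though it contains neither of the query words.
print(spread({"pet": 1.0, "disease": 1.0}, links))
```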
Until now, both search engines and spreading activation have typically been implemented on single computers, which carry an index of Web documents. To extend these mechanisms to the Web as a whole, we may turn to the new technology of software "agents" (cf. Maes, 1994). A software agent is a (typically small) program or script, which can travel to different places and make decisions autonomously, while representing the interests of its user.
A simple way to conceptualize the function of an agent is through the concept of vicarious selector, which was introduced by Campbell (1974) to model the multilevel evolution of knowledge. A vicarious selector is a delegate mechanism, which explores a variety of situations and selects the ones that are most "fit", in anticipation of, or as a substitute for, the selection that would eventually be carried out by a more direct mechanism. For example, echo-location in bats and dolphins functions through the broadcast of an acoustic signal, which is emitted blindly in all directions, but which is selectively reflected by objects. The reflections allow the bat to localize these distant objects (e.g. prey or obstacles) in the dark, without need for direct contact. Similarly, an agent may be "broadcast" over the Web, exploring different documents without a priori knowledge of where the documents it is looking for will be located. The documents that fulfil the selection criteria embodied in the agent can then be "reflected" back to the user. In that way, the user, like the bat, does not need to personally explore all potentially important locations, while still getting a good picture of where the interesting things are.
A Web agent might contain a set of possibly weighted keywords (e.g. "pet" and "disease") which represents its user's interest, and then evaluate the documents it encounters with respect to how well they satisfy that interest profile. The highest scoring documents could then be returned to the user, similar to the way a search engine works. Agents can moreover implement spreading activation: an agent encountering different potentially interesting directions (links) for further exploration could replicate or divide itself into different copies, each with a fraction of the initial "activation", depending on the strengths of the links and the degree to which the starting document fulfils the search criteria. When different copies again arrive at the same document, their activations are added in order to calculate the overall score of the document. In order to avoid virus-like epidemics of agents spreading all over the network, there should be a built-in cut-off mechanism, so that no further copies are made below a given threshold activation, and so that the initial activation supply of an agent is limited, perhaps in proportion to the amount of resources (credits, computer time, money, ...) the user is willing to invest in the query.
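The following fragment sketches how such a replicating agent might behave (a toy, single-machine simulation under invented parameters: the web, the interest profile, the decay factor and the cut-off threshold are all illustrative assumptions, and the substring keyword test stands in for real document scoring):

```python
# Toy simulation of a replicating search agent with an activation budget and cut-off.
from collections import defaultdict

CUTOFF = 0.05   # below this activation a copy stops replicating
DECAY = 0.8     # activation lost per hop, so the spread always dies out

def agent_search(web, start, profile, activation=1.0, scores=None):
    """web: {url: (text, [(linked_url, link_strength), ...])}; profile: {keyword: weight}."""
    if scores is None:
        scores = defaultdict(float)
    if activation < CUTOFF:
        return scores
    text, links = web[start]
    match = sum(weight for kw, weight in profile.items() if kw in text)
    scores[start] += activation * match            # copies arriving at the same document add up
    total = sum(s for _, s in links) or 1.0
    for url, strength in links:                    # replicate: one copy per outgoing link,
        agent_search(web, url, profile,            # carrying a fraction of the activation
                     activation * DECAY * strength / total, scores)
    return scores

web = {
    "start.html": ("pet care portal", [("vet.html", 0.7), ("toys.html", 0.3)]),
    "vet.html":   ("pet disease and treatment", []),
    "toys.html":  ("toys for your pet", []),
}
print(dict(agent_search(web, "start.html", {"pet": 1.0, "disease": 2.0})))
# approximately {'start.html': 1.0, 'vet.html': 1.68, 'toys.html': 0.24}
```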
The selection criteria an agent uses to reach its goal may be explicitly introduced by the user, but they may also be autonomously "learned" by the agent (Maes, 1994). An agent may monitor its user's actions and try to abstract general rules from observed instances. For example, if the agent notes that many of the documents consulted by the user contain the word "pet", it may add that word to its search criteria and propose to the user to go and collect more documents about that topic. "Learning" agents and the "learning" Web can reinforce each other's effectiveness. An agent that has gathered documents that are related according to its built-in or learned selection criteria can signal this to the Web, allowing the Web to create or strengthen links between these documents. Reciprocally, by creating better associations, the learning Web will facilitate the agents' search, by guiding the spread of activation or by suggesting related keywords (e.g. "animal" in addition to "pet"). Through their interaction with a shared associative web, agents can thus indirectly learn from each other, rather than having to exchange experiences directly (Maes, 1994).
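A learning agent of the kind just described could be as simple as the following sketch (the stop-word list, the frequency threshold and the class name are invented for the example; real interface agents such as those discussed by Maes, 1994, use more sophisticated learning techniques):

```python
# Toy "learning agent": it watches which documents the user reads and promotes
# recurring words to search criteria.  All thresholds are illustrative.
from collections import Counter

class LearningAgent:
    STOP_WORDS = {"the", "a", "and", "of", "for", "to", "in"}

    def __init__(self, min_fraction=0.6):
        self.min_fraction = min_fraction   # word must occur in this fraction of consulted docs
        self.consulted = []                # word sets of documents the user has read
        self.criteria = set()              # learned search keywords

    def observe(self, document_text: str):
        self.consulted.append(set(document_text.lower().split()) - self.STOP_WORDS)
        counts = Counter(w for doc in self.consulted for w in doc)
        self.criteria = {w for w, n in counts.items()
                         if n / len(self.consulted) >= self.min_fraction}

agent = LearningAgent()
for text in ["pet food and pet toys", "common pet diseases", "holiday planning tips"]:
    agent.observe(text)
print(agent.criteria)   # {'pet'} -- the agent proposes to collect more documents about pets
```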
Answers to specific queries may be further facilitated if the Web is not just associatively, but also semantically structured, i.e. if the links can belong to distinct types corresponding to semantic categories with specific properties (e.g. "is a", "has part", "has property", etc.). That would further guide searches, by restricting the number of links that need to be explored for a specific query. The resulting Web would more closely resemble a semantic network or knowledge-based system capable of "intelligent" inferences. Yet, it is important to maintain free associations without a determined type, in order not to constrain the type of information that can be found in the network (cf. Heylighen, 1991).
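In code, the only change with respect to the purely associative case is that links carry a type label, which a query may use as a filter (the network and the type names below are again purely illustrative):

```python
# Sketch of typed links: a query can restrict which semantic categories are followed,
# while untyped free associations remain possible.  Example data is invented.
links = [
    ("dog",    "is a",       "animal"),
    ("dog",    "has part",   "tail"),
    ("mirror", "is made of", "glass"),
    ("dog",    None,         "leash"),    # untyped free association
]

def neighbours(node, allowed_types=None):
    """Follow only links whose type is in allowed_types; None means no restriction."""
    return [target for source, ltype, target in links
            if source == node and (allowed_types is None or ltype in allowed_types)]

print(neighbours("dog", {"is a"}))   # ['animal']  -- a generalisation query explores fewer links
print(neighbours("dog"))             # ['animal', 'tail', 'leash']  -- unrestricted associative search
```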
We can safely assume that in the following years virtually the whole of human knowledge will be made electronically available on the Web. If that knowledge is then organized as an associative or semantic network, "spreading" agents should be capable of finding the answer to practically any question for which an answer exists somewhere. The spreading activation mechanism allows questions that are very vague, ambiguous or ill-structured: you may have a problem, but not be able to clearly formulate what it is you are looking for, and just have some ideas about the things it has to do with.
As an example, imagine the following situation: your dog is regularly licking the mirror in your home. You do not know whether you should worry about that, whether it is just normal behavior, or perhaps a symptom of a disease. So, you try to find more information by entering the keywords "dog", "licking" and "mirror" into a Web search agent. If a "mirror-licking" syndrome were described in the literature on dog diseases, such a search would immediately find the relevant documents. However, the phenomenon may just be an instance of the more general observation that certain animals like to touch glass surfaces. A traditional search on the above keywords would never find a description of that phenomenon, but the spread of activation in a semantically structured Web would reach "animal" from "dog", "glass" from "mirror" and "touching" from "licking", thus activating documents that contain all three concepts. Moreover, a smart agent would assume that documents discussing possible diseases are more important to you than those that merely describe observed behavior, and would retrieve the former documents with a higher priority. This example can easily be generalized to the most diverse and bizarre problems. Whether it has to do with how you decorate your house, how you reach a particular place, how you remove stains of a particular chemical, or what the natural history of the Yellowstone region is: whatever the problem you have, if some knowledge about the issue exists somewhere, spreading agents should be able to find it, and draw the relevant conclusions from it.
For the more ill-structured problems, the answer may not come immediately, but be reached after a number of steps. Formulating part of the problem brings up certain associations, which may then call up others that make you or the agent reformulate the problem in a better way, which in turn makes it easier to select relevant documents, and so on, until you get a satisfactory answer. The Web will thus not only provide straight answers, but also general feedback that directs your efforts to get closer to the answer.
Coming back to our brain metaphor, consider the agents travelling over the Web: they search for information that satisfies the user's requirements, divide themselves into copies exploring different regions, create new associations by the paths they follow and the selections they make, and finally combine the information found into a synthesis or overview, which either solves the problem or provides a starting point for a further round of reflection. In this they seem quite similar to thoughts spreading and recombining over the network of associations in the brain. This would bring the Web to the metasystem level of thinking (Turchin, 1977). This should not surprise us, since the emergence of a vicarious selector can be interpreted as a metasystem transition (Heylighen, 1995).
In order to most effectively use the cognitive power offered by an intelligent Web, the distance or barrier between the user's wishes and the sending out of Web-borne agents should be minimized. At present, we are still defining queries by typing keywords into specifically chosen search engines, or by "programming" search rules into an agent's script. This is rather slow and awkward when compared to the speed and flexibility with which our own brain processes thoughts. Several mechanisms can be conceived to accelerate that process. We already mentioned the learning agents, which try to anticipate the user's desires by analysing his or her actions. We also mentioned the multimedia interface, which attempts to harness the full bandwidth of 3-dimensional audio, visual and tactile perception in order to communicate information to the user's brain. The complementary technologies of speech or gesture recognition make the input of information by the user much easier. But even more direct communication channels between the human brain and the Web are conceivable.
There have already been experiments in which people managed to steer a cursor on a computer screen simply by thinking about it: their brain waves associated with specific thoughts (such as "up", "down", "left" or "right") are registered by sensors, interpreted by neural network software, translated into commands, and executed by the computer. Such set-ups make use of a two-way learning process: the neural network learns the correct interpretation of the registered brain-wave patterns, while the user, through bio-feedback, learns to focus his or her thoughts so that they become more understandable to the computer interface. An even more direct approach can be found in neural interfaces, where electronic chips are designed that can be directly implanted in the human body and connected to nerves, so that they can register neural signals (see e.g. Kovacs et al., 1994).
Once these technologies have become more sophisticated, we could imagine the following scenario: a thought would initially form in your own brain, then be translated automatically via a neural interface into an agent or thought in the external brain, continue its development by spreading activation, and come back to your own brain in a much enriched form. With a good enough interface, there should not really be a boundary between "internal" and "external" thought processes: the one would flow over naturally and immediately into the other. It would really suffice to think about your dog licking mirrors to see an explanation of that behavior pop up before your mind's eye.
Interaction between the internal and the external brain does not always need to go in the same direction. Just as the external brain can learn from your pattern of browsing, it could also learn from you by directly asking you questions. A smart Web would continuously check the coherence and completeness of the knowledge it contains. If it finds contradictions or gaps, it would try to locate the persons most likely to understand the issue (probably the authors or active users of a document), and direct their attention to the problem. In many cases, an explicit formulation of the problem will be sufficient for an expert to quickly fill in the gap, using implicit (associative) knowledge that had not yet been entered into the Web (Heylighen, 1991). Many "knowledge acquisition" and "knowledge elicitation" techniques exist for stimulating experts to formulate their intuitive knowledge in such a way that it can be implemented on a computer. In that way, the Web would learn implicitly and explicitly from its users, while the users would learn from the Web. Similarly, the Web would mediate between users exchanging information, answering each other's questions. In a way, the brains of the users themselves would become nodes in the Web: stores of knowledge directly linked to the rest of the Web, which can be consulted by other users or by the Web itself.
Though individual people might refuse to answer requests received through the super-brain, no one would want to miss the opportunity to use the unlimited knowledge and intelligence of the super-brain for answering one's own questions. However, one cannot normally keep receiving a service without giving anything in return. People will stop answering your requests if you never answer theirs. Similarly, one could imagine that the intelligent Web would be based on the simple condition that you can use it only if you provide some knowledge in return.
More generally, there are the economic constraints of the "knowledge market", which ensure that people will not only use services, but also provide services in order to earn the resources they need to sustain their own usage. Presently, there is a rush of commercial organizations moving to the Web in order to attract customers. The best way to convince prospective clients to consult their documents will be to make these documents as interesting and useful as possible. Similarly, the members of the academic community are motivated by the "publish or perish" rule: they should try to make their results as widely available as possible, and are most likely to succeed if these results are rated highly by their peers (perhaps in collaboration with the future intelligent Web). Thus, we might anticipate a process where the users of the Web are maximally motivated both to make use of the Web's existing resources and to add new resources to it. This will make the Web-user interaction wholly two-way, the one helping the other to become more competent.
Eventually, the different brains of users may become so strongly integrated with the Web that the Web would literally become a "brain of brains": a super-brain. Thoughts would run from one user via the Web to another user, from there back to the Web, and so on. Thus, billions of thoughts would run in parallel over the super-brain, creating ever more knowledge in the process.
The creation of a super-brain is not yet sufficient for a metasystem transition beyond the level of thinking (Turchin, 1977; Heylighen, 1995): what we need is a higher level of control which somehow steers and coordinates the actions of the level below. To become a metasystem, thinking in the super-brain must not be just quantitatively, but qualitatively different from human thinking. The continuous reorganization and improvement of the super-brain's knowledge by analysing and synthesising knowledge from individuals, and eliciting more knowledge from those individuals in order to fill gaps or inconsistencies is a metalevel process: it not only uses existing, individual knowledge but actively creates new knowledge, which is more fit for tackling different problems.
Even without consulting the users, an intelligent Web can extend its own knowledge by the process known as "knowledge discovery" or "data mining" (Fayyad & Uthurusamy, 1995). The principle is the same as the one underlying scientific discovery: a more abstract rule or model is generated which summarizes the available observations or data, and which, by induction, makes it possible to make predictions for situations not yet observed or entered in the database. Many different techniques are available to support this discovery of general principles: different forms of statistical analysis, genetic algorithms, machine learning, neural networks, etc., but these still lack integration. The controlled development of knowledge requires a unified metamodel: a model of how new models are created and evolve. A possible approach to developing such a metamodel starts with an analysis of the building blocks of knowledge, of the mechanisms that combine and recombine building blocks to generate new knowledge systems, and of a list of values or selection criteria which distinguish "good" or "fit" knowledge from "unfit" knowledge (Heylighen, 1993).
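In its simplest form, such induction of a more abstract rule from stored observations could look like the following sketch (a "one-rule" style learner over an invented table of observations; the attribute names and data are purely illustrative and do not represent any real data-mining system):

```python
# Toy knowledge-discovery step: induce the single attribute rule that best predicts the outcome.
from collections import Counter, defaultdict

observations = [
    {"species": "dog", "indoor": "yes", "licks_glass": "yes"},
    {"species": "dog", "indoor": "no",  "licks_glass": "no"},
    {"species": "cat", "indoor": "yes", "licks_glass": "yes"},
    {"species": "cat", "indoor": "no",  "licks_glass": "no"},
]

def one_rule(data, target):
    """For each attribute, predict the most common target value per attribute value;
    return the attribute whose rule classifies the most observations correctly."""
    best = None
    for attr in data[0]:
        if attr == target:
            continue
        table = defaultdict(Counter)
        for row in data:
            table[row[attr]][row[target]] += 1
        correct = sum(counts.most_common(1)[0][1] for counts in table.values())
        rule = {value: counts.most_common(1)[0][0] for value, counts in table.items()}
        if best is None or correct > best[1]:
            best = (attr, correct, rule)
    return best

attr, correct, rule = one_rule(observations, "licks_glass")
print(attr, rule)   # indoor {'yes': 'yes', 'no': 'no'} -- an abstract rule summarizing the data
```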
A remaining problem is whether an integrated "super-brain" will lead to an integrated social system or "super-organism". In other words, to what extent can the conflicting interests of the different individuals and groups using the Web be reconciled and merged into a "global good" for the whole of humanity? The emergence of the super-brain might facilitate such an integrative process, since it is in everybody's interest to add to the knowledge stored in the super-brain: there does not seem to be a part-whole competition (Heylighen & Campbell, 1995) between individual and super-brain. This is due to the peculiar nature of information: unlike limited, material resources, information or knowledge does not diminish in value if it is distributed or shared among different people. Thus, there is no a priori benefit in keeping a piece of information to oneself (unless this information controls access to a limited resource!).
However, there remains the problem of intellectual property (e.g. copyright or patents): though it might be in the interest of society to immediately make all new knowledge publicly available, it is generally in the interest of the developer of that new knowledge to restrict access to it, and demand compensation for the effort that went into developing it. An advantage of the global network is that it will minimize all costs having to do with the development, storage, access and transaction of knowledge, and will foster competition between knowledge providers, so that the price of using a piece of knowledge developed by someone else might become so low as to make it practically free. A very large number of users paying a very small sum may still provide the developer with a sufficient reward for the effort.
This does not yet solve the problems of the sharing of material resources, though it must be noted that the value of the material resources (as contrasted with intellectual resources) is steadily decreasing as a fraction of the total value of products or services. As to the fair distribution of resources over the world population, the super-brain may facilitate the creation of a universal ethical and political system, by promoting the development of shared ideologies (Heylighen & Campbell, 1995) and by minimizing the distance between "government" and individuals. However, these questions remain very subtle and difficult, and huge obstacles remain, so that at this stage it seems impossible to make any concrete predictions.
Yet, we also have the many unfulfilled promises from the 40-year history of artificial intelligence to remind us that problems may turn out to be much more serious than they initially appeared. It is our impression that the main obstacles hindering AI are absent in the present view of the super-brain. First, AI was dogged by the fact that everyday intelligent behavior requires an inordinate amount of knowledge of common-sense facts and rules, which is very difficult to gather in a single system. The fact that millions of users in parallel add new knowledge to the super-brain eliminates this bottleneck. The traditional symbolic AI paradigm moreover demanded that all knowledge be formulated and entered in an explicit, precise, formal way, something which is impossible to achieve in most real-world circumstances. Our view of the super-brain instead emphasizes the self-organizing, adaptive, fuzzy character of associative networks, and is in that respect more reminiscent of the connectionist paradigm than of the symbolic one.
Whether such changes in approach will be sufficient to overcome the technical hurdles is something that will only become clear in the coming years. At this stage, we can only conclude that extensive research will be needed in order to develop, test and implement the present model for a future network.
Campbell D.T. (1974): "Evolutionary Epistemology", in: The Philosophy of Karl Popper, Schilpp P.A. (ed.), (Open Court Publish., La Salle, Ill.), p. 413-463.
Fayyad U.M. & Uthurusamy R. (eds.) (1995): Proceedings 1st Int. Conference on Knowledge Discovery and Data Mining (AAAI Press, Menlo Park, CA).
Heylighen F. (1991): "Design of a Hypermedia Interface Translating between Associative and Formal Representations", International Journal of Man-Machine Studies 35, p. 491-515.
Heylighen F. (1993): "Selection Criteria for the Evolution of Knowledge", in: Proc. 13th Int. Congress on Cybernetics (Association Internat. de Cybernétique, Namur), p. 524-528.
Heylighen F. (1994): "World-Wide Web: a distributed hypermedia paradigm for global networking", in: Proceedings of SHARE Europe, Spring 1994, "The Changing Role of IT in Business" (SHARE Europe, Geneva), p. 355-368.
Heylighen F. (1995): "(Meta)systems as constraints on variation", World Futures: the Journal of General Evolution 45, p. 59-85.
Heylighen F. & Campbell D.T. (1995): "Selection of Organization at the Social Level: obstacles and facilitators of metasystem transitions", World Futures: the Journal of General Evolution 45, p. 181-212.
Jones W. P. (1986): "On the Applied Use of Human Memory Models: The Memory Extender Personal Filing System", International Journal of Man-Machine Studies 25: 2, p. 191-228.
Kovacs G.T.A., Storment C.W. & Halks-Miller M. (1994): "Silicon-Substrate Microelectrode Arrays for Parallel Recording of Neural Activity in Peripheral and Cranial Nerves", IEEE Transactions on Biomedical Engineering 41: 6, p. 567.
Krol E. (1993): The Whole Internet: user's guide and catalog, (O'Reilly, Sebastopol, CA).
Maes P. (1994): "Agents that Reduce Work and Information Overload", Communications of the ACM 37: 7 (July 1994).
Mayer-Kress G. & Barczys C. (1995): "The Global Brain as an Emergent Structure from the Worldwide Computing Network, and its Implications for Modelling", The Information Society 11: 1.
Russell P. (1983): The Global Brain: speculations on the evolutionary leap to planetary consciousness, (Houghton Mifflin, Boston, MA).
Salton G. & Buckley C. (1988): "On the Use of Spreading Activation Methods in Automatic Information Retrieval", in: Proc. 11th Ann. Intern. ACM SIGIR Conference on Research and Development in Information Retrieval, (Association for Computing Machinery), p. 147-160.
Stock G. (1993): Metaman: the merging of humans and machines into a global superorganism, (Simon & Schuster, New York).
Turchin V. (1977): The Phenomenon of Science. A Cybernetic Approach to Human Evolution, (Columbia University Press, New York).