Wild Computing -- copyright Ben Goertzel, 1999
My subject here is the philosophy of Internet AI engineering: the general ideas
that must be understood in order to create intelligent computer programs in the
Internet medium. Only a few of these ideas are of my invention; this is a body
of knowledge that has emerged from the community of computer scientists,
engineers, complexity scientists and associated thinkers over the last few
decades. But I believe I have filled in a few crucial gaps.
This is not a traditional "AI philosophy" book. I take it for granted here that
computer programs can have minds, if they are designed correctly. And I take it
for granted that computer programs can be conscious -- ignoring the "problem" of
computer consciousness just as I ignore the "problem" of human consciousness
every day, as I interact with other humans. The reader who is interested to know
my views on these matters can consult my previous publications.
I firmly believe that, by fleshing out and implementing the ideas given here,
over the next decade, we will be able to create Internet-based AI programs that
surpass human beings in general intelligence. Some of our expectations about AI
will remain unfulfilled -- for example, the Turing test, which requires a
computer to simulate a human in conversation, is unlikely to be passed until we
can engineer human-like bodies for our AI programs. But in other areas, our AI
programs will exceed our expectations dramatically. Our understanding of the
collective intelligence of the human race will be greatly enhanced by
interaction with AI systems linked in to the collective data resources of the
Internet.
Of course, I am aware that AI optimism has been proved wrong before -- AI
pioneers of the 1950's and 60's predicted that true machine intelligence was
right around the corner. But like optimism about human flight 150 years ago, AI
optimism is bound to be proved right eventually. The AI theorists of the 50's
and 60's lacked many of the ideas needed to realize machine intelligence, but they
also lacked adequate hardware. If they had had the needed machinery, they
probably would have corrected their ideas in the iterative manner typical of
empirical science. Now we have PC's with 4 gigabytes of RAM, mainframes with
100 gigabytes; and we have high-bandwidth network cable that can link an arbitrary number
of computers together into a single computational process. Just barely, we have
the firepower needed to implement a mind. The ideas in this book provide the key
to making use of this firepower.
Part of the key to thinking about Internet AI correctly is placing AI in the
correct context. Exciting things are happening in computing -- the Internet is
more than just a way to send naked pictures of Pamela Anderson to your buddies
in Chechnya. There is a new kind of computer science and computer technology
brewing, one that focuses on self-organizing networks and emergent dynamics
rather than algorithms and data structures (though algorithms and data
structures will still be there, to be sure, just as the bit-level constructs of
assembly-language were not vanquished by the advent of structured programming).
Artificial intelligence is essential here, because humans don't have the time or
the ability to deal with the glut of rapidly shifting information that modern
networks bring. And the network computing revolution is essential, because mind
and brain are necessarily networks. A non-network-based computing paradigm can
never adequately support AI.
Java, the premier language of the new paradigm, is maturing into a central role
in server-side network apps. Exciting new Java-derivative technologies like
Objectspace Voyager are beginning to take off, and yet others like Jini and
Javaspaces are waiting in the wings. Electronic commerce is beginning to become
real, especially in the realm of business-to-business commerce; and online AI is
finally picking up speed -- embedded in websites like Excite and
barnesandnoble.com are sophisticated systems for learning and guessing user
profiles. We are just a few years away from the situation where the various
intelligent systems on the Net, like the ones inside Excite and
barnesandnoble.com, are learning from each other rather than existing as islands
of intelligence in a sea of inert text and data. The Internet, and the computing
environment in general, is poised for the advent of real artificial
intelligence. What is needed to make it happen is understanding on the part of
the technical community -- understanding of how the network of mind emerges from
the underlying network of computers.
For the last two years, since I founded the Silicon Alley start-up company
Intelligenesis, I have been involved in building an internet-based AI system
called Webmind, which exemplifies the methodologies of AI design to be discussed
in the following pages. This project has been challenging and exciting, but has
left me fairly little time to reflect on the general lessons and principles
underlying the work I'm doing, and even less time to write down these lessons
and principles systematically. This book synthesizes some of the ideas that I
have had time to write about. Most of the chapters originated as informal
articles, written to clarify ideas to myself and distributed informally by
e-mail to co-workers and acquaintances; several pertain specifically to our
work at Intelligenesis.
Chapter 1 reviews the network computing paradigm and its relevance to
intelligence; Chapters 2 and 3 follow up on this by exploring the "mind as
network" theme as it relates to the structure of the brain and the abstract
structure of cognition. Chapter 4 presents a general philosophical framework for
understanding Internet information space, beginning from philosopher Kent
Palmer's four ontological categories: Pure Being (data), Process Being
(programs), Hyper Being (network computing) and Wild Being (the next stage,
emergent Internet intelligence; the origin of the term Wild Computing). Chapters
5 and 6 discuss Internet AI from two very different perspectives: economics and
transpersonal consciousness (the World Wide Brain, encapsulating the essential
patterns of the human "collective unconscious"). Chapters 7 and 8 introduce and
discuss the Webmind network AI architecture that we are currently building at
Intelligenesis Corp., a specific example of the new network AI paradigm. The
full details of Webmind are not entered into here, but the architecture is
painted in broad strokes, and it is indicated how Webmind has the potential to
fulfill the promise of next-phase, emergent AI. Finally, Chapter 9 describes
some ideas as to how a new agent communication protocol could allow a diverse
population of intelligent network entities, including Webminds, to
synergetically interact -- and hence form the practical infrastructure for a
global brain.
This is a diverse body of thinking, and I have not attempted to force a false
unity upon it. There is a real underlying coherence here, and I am too aware of
its depth and profundity to want to obscure it under a glib, premature
systematization. (As Nietzsche said, "The will to a system is a lack of
integrity.") Instead, , the book mirrors its subject matter, in that the topics
presented are not locked together into a rigid linear series, but represent
rather nodes in a network, interrelating with each other in numerous complex
ways. As the new computing develops, the pattern of interrelation between its
various aspects will become clearer and more aesthetic. We are all part of this
clarifying process.
Chapter 1: The Network is the Computer is the Mind
1. Past and Future History of the Internet
The evolution of the Internet up till now can be divided, roughly speaking, into
three phases:
Pre-Web. Direct, immediate interchange of small bits of text, via e-mail and
Usenet. Indirect, delayed interchange of large amounts of text, visual
images and computer programs, via ftp.
Web. Direct, immediate interchange of images, sounds, and large amounts of
text. Online publishing of articles, books, art and music. Interchange of
computer programs, via ftp, is still delayed, indirect, and
architecture-dependent.
Network Computing. Direct, immediate interchange of animations and computer
programs as well as large texts, images and sounds. Enabled by languages
such as Java, the Internet becomes a real-time software resource.
Intelligent agents traverse the web carrying out specialized intelligent
operations for their owners.
The third phase, the Network Computing phase, is still in a relatively early
stage of development, driven by the dissemination and development of the Java
programming language. However, there is an emerging consensus across the
computer industry as to what the ultimate outcome of this phase will be. For
many applications, people will be able to run small software "applets" from Web
pages, instead of running large, multipurpose programs based in their own
computers' hard disk drives. The general-purpose search engines of the Web phase
will evolve into more specialized and intelligent individualized Web exploration
agents. In short, the Web will be transformed from a "global book" into a
massively parallel, self-organizing software program of unprecedented size and
complexity.
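To make this concrete, here is a minimal sketch of the kind of "applet" being described: a small Java program that lives on a Web server and is downloaded and run inside the browser when the page is viewed, rather than installed on the user's hard disk. The class name and message below are illustrative inventions, not part of any real system.

    // A small program meant to be fetched from a Web page and run in the
    // browser, rather than installed from the local hard disk.
    import java.applet.Applet;
    import java.awt.Graphics;

    public class HelloNetApplet extends Applet {
        // The browser calls paint() whenever the applet's display area needs drawing.
        public void paint(Graphics g) {
            g.drawString("This code arrived over the network, not from your disk.", 20, 20);
        }
    }

A page author references such a class with an <applet> tag in the page's HTML; the browser then pulls the compiled class over the network and executes it locally -- exactly the shift from "software on your disk" to "software on the Web" described above.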
But, exciting as the Network Computing phase is, it should not be considered as
the end-point of Web evolution. I believe it is important to look at least one
step further. What comes after Network Computing, I propose, is the autopoietic,
emergently structured and emergently intelligent Web -- or, to put it
metaphorically, the World Wide Brain. The Network Computing environment is a
community of programs, texts, images, sounds and intelligent agents, interacting
and serving their own ends. The World Wide Brain is what happens when the
diverse population making up the Active Web locks into a global attractor,
displaying emergent memory and thought-oriented structures and dynamics not
programmed into any particular part of the Web. Traditional ideas from
psychology or computer science are of only peripheral use in understanding and
engineering a World Wide Brain. As we shall see in the following chapters,
however, ideas from complex systems science are considerably more relevant.
It may seem hasty to be talking about a fourth phase of Web evolution -- a World
Wide Brain -- when the third phase, network computing, has only just begun, and
even the second phase, the Web itself, has not yet reached maturity. But if any
one quality characterizes the rise of the Web, it is rapidity. The Web took only
a few years to dominate the Internet; and Java, piggybacking on the spread of
the Web, has spread more quickly than any programming language in history. Thus,
it seems reasonable to expect the fourth phase of Internet evolution to come
upon us rather rapidly, certainly within a matter of years rather than decades.
2. Network Computing
The history of networks as a computing paradigm is at first glance simple. There
were mainframes and terminals, then PC's and local-area networks ... and now
large-scale, integrated network computing environments, providing the benefits
of both mainframes and PC's and new benefits besides. This is a true story, but
far from the whole story. This view underestimates the radical nature of the
network computing paradigm. What network computing truly represents is a return
to the cybernetic, self-organization-oriented origins of computer science. It
goes a long way toward correcting the fundamental error committed in the 1940's
and 1950's, when the world decided to go with a serial, von Neumann-style
computer architecture, to the almost total exclusion of more parallel,
distributed, brain-like architectures.
The move to network computing is not merely a matter of evolving engineering
solutions, it is also a matter of changing visions of computational
intelligence. Mainframes and PC's mesh naturally with the symbolic, logic-based
approach to intelligence; network computing environments, on the other hand,
mesh with a view of the mind as a network of intercommunicating, intercreating
processes. The important point is that the latter view of intelligence is the
correct one. From computing frameworks supporting simplistic and fundamentally
inadequate models of intelligence, one is suddenly moving to a computing
framework supporting the real structures and dynamics of mind.
For mind and brain are fundamentally network-based. The mind, viewed
system-theoretically, is far more like a network computing system than like a
mainframe-based or PC-based system. It is not based on a central system that
services dumb peripheral client systems, nor is it based on a huge host of
small, independent, barely communicating systems. Instead it is a large,
heterogeneous collection of systems, some of which service smart peripheral
systems, all of which are intensely involved in inter-communication. In short,
by moving to a network computing framework, we are automatically supplying our
computer systems with many elements of the structure and dynamics of mind.
This does not mean that network computer systems will necessarily be
intelligent. But it suggests that they will inherently be more intelligent than
their mainframe-based or PC counterparts. And it suggests that researchers and
developers concerned with implementing AI systems will do far better if they
work with the network computing environment in mind. The network computing
environment, supplied with an appropriate operating system, can do half their
job for them -- allowing them to focus on the other half, which is
inter-networking intelligent agents in such a way as to give rise to the
large-scale emergent structures of mind. Far more than any previous development
in hardware, network computing gives us real hope that the dream of artificial
intelligence might be turned into a reality.
3. The Network Becomes the Computer
The history of computing in the late 20th century is familiar, but bears
repeating and re-interpretation from a cybernetic, network-centric point of
view.
As everyone knows, over the past twenty years, we have seen the mainframes and
terminals of early computing replaced with personal computers. With as much
memory and processing power as early mainframes, and far better interfaces, PC's
have opened up computing to small businesses, homes and schools. But now, in the
late 90's, we are seeing a move away from PC's. The PC, many computer pundits
say, is on its way out, to be replaced with smart terminals called "network
computers" or NC's, which work by downloading their programs from central
servers.
An NC lacks a floppy drive; word processing and spreadsheet files, like
software, are stored centrally. Instead of everyone having their own copy of,
say, a huge word processor like MicroSoft Word, the word processor can reside on
the central server, and individuals can download those parts of the word
processor that they need. Routine computing is done by the NC; particularly
intensive tasks can be taken over by the central computer.
At the present time NC's are not a viable solution for home computer users, as
the bandwidth of modem or even ISDN connections is not sufficient for rapid
transmission of complex computer programs. For businesses, however, there are
many advantages to the network computing approach. Internal bandwidth, as
exploited by local-area networks, is often already high, and by providing
employees with NC's instead of PC's, an employer saves on maintenance and gains
in control.
And once cable modem or equivalently powerful technology becomes common, NC's
will be equally viable for the home user. The individual PC, with its limited
array of software, will quickly come to seem sterile and limiting. Why pay a lot
of money for a new game which you may not even like, when you can simply
download a game in half a minute, and pay for it on an as-you-play basis? When
it is almost as fast to download software from the Net as it is to extract it
from the hard drive of a PC, the advantages of PC's over NC's will quickly
become negative. The entire network becomes a kind of virtual hard drive, from
which programs or pieces thereof can be extracted on command.
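As a rough illustration of this "virtual hard drive" idea, consider how a Java program can already load its parts over the network rather than from a local disk. The server address and class name below are hypothetical placeholders, invented for the example.

    import java.net.URL;
    import java.net.URLClassLoader;

    public class NetworkDiskSketch {
        public static void main(String[] args) throws Exception {
            // Point a class loader at a remote code base instead of the local disk.
            // (The URL and class name are invented for the example.)
            URL remoteCodeBase = new URL("http://server.example.com/wordprocessor/");
            URLClassLoader loader = URLClassLoader.newInstance(new URL[] { remoteCodeBase });

            // Pull just the piece of the program that is needed right now,
            // as if reading it from one enormous shared drive.
            Class spellChecker = loader.loadClass("com.example.SpellChecker");
            System.out.println("Fetched over the network: " + spellChecker.getName());
        }
    }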
These developments are widely perceived as ironic. After all, the move to PC's
was greeted with relief by almost everyone -- everyone but IBM and the other
large corporations who made their money from the mainframe market. What a relief
it was to be able to get a print-out on your desk, instead of waiting two hours
and then running down to the basement printer room. How wonderful to be able to
individualize one's computing environment. How exciting to compute at home. What
is ironic is that, in an NC context, these mainframes -- or newer machines of
even greater "grunt power" -- are becoming fashionable again.
The catch is, of course, that the context is different. In a modern network
computing environment, consisting of a central server and a large group of
peripheral NC's, no one has to wait two hours for their print-out to come out in
the basement. No one has to use a line printer interface, or even a text
interface on a screen. High-bandwidth communications conspire with advances in
hardware, operating system and graphical interface design to make the modern
network computing environment something entirely new: something incorporating
the advantages of the old mainframe approach and the advantages of PC's, along
with other advantages that have no precedent.
The grunt is there, the raw processing power of the central server; and so is
the efficiency of storing only one central copy of each program. But the
convenience of PC's is there too, provided by the modern possibility of spinning
off new copies of centrally-stored programs on the fly, and downloading them
into super-smart terminals -- terminals so smart that they are no longer merely
terminals but network computers.
From the point of view of the NC user, the network is like a PC with a giant
hard drive. The fact that this hard drive happens to be shared by other people
is barely even relevant. From the point of view of the central server itself, on
the other hand, the network is almost like an old-style mainframe system, the
only difference lying in the semantics of the input/output messages sent and
received, and in the speed with which these messages have to be sent. Instead of
just exchanging verbal or numerical messages with dumb terminals operated by
humans, the central server is now exchanging active programs with less powerful
but still quite capable satellite computers.
Finally, what is entirely new in modern network computing is the interconnection
of various central servers into their own large-scale network. This occurs on
the Internet, and it also occurs internally within large organizations. This is
important because it means that no single server has to hold all the information
potentially required by its client NC's. The server is responsible for feeding
information to its NC's, but it may in some cases derive this information from
elsewhere, and simply serve as a "middleman." In other words, a higher-level
network of information is overlaid on a lower-level network of control. This
opens up the scope of information and applications available to computer users,
to a degree never before seen.
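A sketch of that "middleman" role might look like the following, in Java. When a client asks for something the server does not hold locally, the server fetches it from another machine on the higher-level information network and passes it along; the cache, upstream address, and resource names here are hypothetical.

    import java.io.InputStream;
    import java.net.URL;
    import java.util.HashMap;
    import java.util.Map;

    public class MiddlemanServerSketch {
        // This server's own local store of information.
        private final Map<String, byte[]> localStore = new HashMap<>();

        byte[] serve(String resource) throws Exception {
            byte[] local = localStore.get(resource);
            if (local != null) return local;          // answered from local knowledge

            // Not held here: act as a middleman and fetch it from elsewhere
            // on the network (the upstream address is an invented placeholder).
            URL upstream = new URL("http://another-server.example.com/" + resource);
            try (InputStream in = upstream.openStream()) {
                byte[] fetched = in.readAllBytes();
                localStore.put(resource, fetched);    // remember it for the next client
                return fetched;
            }
        }
    }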
All this is exciting, tremendously exciting, but to computer industry insiders,
it is becoming almost hackneyed before it has even come to pass. "The Network is
the Computer" is a late 90's mantra and marketing slogan. And it is accurate;
the network should be, and increasingly really is, the computer. What is now
inside the PC -- memory, processing, information -- will in the network
computing environment be stored all over the place. The overall computation
process is distributed rather than centralized, even for basic operations like
word processing or spreadsheeting.
What is not observed very often, however, is the relation between network
computing and the mind. As it turns out, the mainframe approach to computing and
the PC approach to computing embody two different incorrect theories of
intelligence. The network computing approach embodies a more sophisticated
approach to intelligence, which, although still simplistic compared to the
structure of the human brain, is fundamentally correct according to principles
of cybernetics and cognitive science. As we move toward a worldwide network
computing environment, we are automatically moving toward computational
intelligence, merely by virtue of the structure of our computer systems, of the
logic by which our computer systems exchange and process information. In other
words, not only is the network the computer, but it is the mind as well. This
leads to the new, improved slogan with which I have titled this chapter: The
network is the computer is the mind.
What does this slogan mean? It stands for a series of statements, of increasing
daring. At the most conservative end, one may say that the network computing
environment provides an ideal context within which to program true artificial
intelligence. At the most radical end, on the other hand, one might plausibly
argue that the widespread adoption of network computing will lead automatically
to the emergence of computational intelligence. The truth almost certainly lies
somewhere between these two extremes. Network computing provides a certain
degree of intelligence on its own, and it provides a natural substrate within
which to implement programs that display yet greater degrees of intelligence.
The network is the computer is the mind.
4. AI and the History of Hardware
It's a pleasant, familiar story: mainframes to PC's to network computer systems.
The whole thing has the inevitability of history about it. In fact, though,
there is more to these developments than is commonly discussed. Network
computing is, I believe, an inevitable occurrence -- but only in the abstract
sense of "computing with self-organizing networks of intercommunicating
processes." Networks are the best way to do artificial intelligence, and they
are also the best way to solve a huge variety of other problems in computing.
The particular form that network computing is assuming at the present time,
however -- computing with networks of interconnected serial digital computers --
is merely a consequence of the evolutionary path that computer hardware has
taken over the past half-century. This evolutionary path toward network
computing has in fact been highly ironic. Of all the directions it could
possibly have taken, computer technology has taken the one most antithetical to
artificial intelligence and self-organizing network dynamics. Even so, however,
the "network" archetype has proved irresistable, and is emerging in an
unforeseen way, out of its arch-enemy, the standard serial, digital computer
architecture.
During the last century, as we have seen, two sharply divergent approaches to
the problem of artificial intelligence have emerged: the cybernetic approach,
based on emulating the brain and its complex processes of self-organization, and
the symbolic approach, based on emulating the logical and linguistic operations
of the conscious mind.
Neither of these approaches is perfect. It has become increasingly clear in
recent years, however, that whereas the symbolic approach is essentially
sterile, the cybernetic approach is fertile. The reason for this is quite
simple: the mind/brain really is a large, specially-structured, self-organizing
network of processes; it really is not a rule-based logical reasoning system.
We do, of course, carry out behaviors that can be described as logical reasoning
-- but what we are doing in these cases, while it can be described in terms of
rule-following to some degree of accuracy, is not really rule-following.
Ignoring the self-organizing dynamics underlying apparent rule-following --
taking logical reasoning out of the context of intelligently perceived
real-world situations, and out of the context of illogical, unconscious mental
dynamics -- results in an endless variety of philosophical paradoxes and
practical obstacles. The result is that symbolic AI systems are inflexible and
uncreative -- not much more "intelligent," really, than a program like
Mathematica which does difficult algebra and calculus problems by applying
subtle mathematical methods.
The cybernetic approach to AI, for all its flaws, has a far clearer path to
success. Building larger and larger, more and more intelligently structured
self-organizing networks, we can gradually build up to more and more mind-like
structures. Neural networks are one of the more popular implementations of the
cybernetic approach, but not the only one. One can also think in terms of
genetic algorithms, or more abstract systems of "software agents" -- the point
is the emphasis on creative, complex network dynamics, rather than deterministic
interactions between systems of logical, rational rules.
In the beginning, back in the 1940's and 50's, these two different approaches to
AI were tied in with different approaches to computer hardware design: parallel,
distributed analog design versus serial, digital design. Each theory of
"computational mind" matched up naturally with a certain approach to
"computational brain."
In the parallel, distributed, analog computer, many things happen at each point
in time, at many different points in space. Memory is stored all over, and
problem-solving is done all over. Memory is dynamic and is not fundamentally
distinct from input/output and processing. Furthermore, the basic units of
information are not all-or-nothing switches, but rather continuous signals, with
values that range over some interval of real numbers. This is how things happen
in the brain: the brain is an analog system, with billions of things occurring
simultaneously, and with all its different processes occurring in an intricately
interconnected way.
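The contrast can be made concrete with a toy serial simulation of this style of computation (the kind of simulation discussed a little later in this chapter): every unit holds a continuous value, and at each tick every unit is updated simultaneously on the basis of all the others. The sizes and weights below are arbitrary illustrative choices.

    import java.util.Random;

    public class AnalogNetworkSketch {
        public static void main(String[] args) {
            int n = 8;                        // number of processing units
            double[] state = new double[n];   // continuous-valued signals, not bits
            double[][] weight = new double[n][n];

            Random rng = new Random(42);
            for (int i = 0; i < n; i++) {
                state[i] = rng.nextDouble();
                for (int j = 0; j < n; j++) weight[i][j] = rng.nextGaussian() * 0.3;
            }

            // Synchronous update: memory, processing and "output" are all just
            // the evolving pattern of activation across the whole network.
            for (int t = 0; t < 10; t++) {
                double[] next = new double[n];
                for (int i = 0; i < n; i++) {
                    double sum = 0.0;
                    for (int j = 0; j < n; j++) sum += weight[i][j] * state[j];
                    next[i] = Math.tanh(sum); // keep each signal in a continuous range
                }
                state = next;
            }
            for (int i = 0; i < n; i++) System.out.printf("unit %d: %.3f%n", i, state[i]);
        }
    }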
On the other hand, in the serial, digital computer, commonly known as the von
Neumann architecture, there is a central processor which does one thing at each
time, and there is a separate, inert memory to which the central processor
refers. On a hardware level, the von Neumann design won for practical
engineering reasons (and not for philosophical reasons: von Neumann himself was
a champion of neural-net-like models of the mind/brain). By now it is ingrained
in the hardware and software industries, just as thoroughly as, say, internal
combustion engines are ingrained in the automobile industry. Most likely, every
computer you have ever seen or heard of has been made according to the von
Neumann methodology.
The cybernetic approach to artificial intelligence, however, has survived the
dominance of the von Neumann architecture, moving to a methodology based
primarily on serial digital simulations of parallel distributed analog systems.
The fact that cybernetic AI has survived even in a totally hostile computing
hardware environment is a tribute to the fundamental soundness of its underlying
ideas. One doubts very much, on the other hand, whether symbolic AI would have
ever become dominant or even significant in a computing environment dominated by
neural net hardware. In such an environment, there would have been strong
pressure to ground symbolic representation in underlying network dynamics. The
whole project of computing with logic, symbolism and language in a formal,
disembodied way, might never have gotten started.
Today, many of us feel that the choice of the von Neumann architecture may have
been a mistake -- that computing would be far better off if we had settled on a
more brain-like, cybernetics-inspired hardware model back in the 1940's. The
initial engineering problems might have been greater, but they could have been
overcome with moderate effort, and the billions of dollars spent on
computer R&D in the past decades would have been spent on brainlike computers
rather than on the relatively sterile, digital serial machines we have today. In
practice, however, no alternate approach to computer hardware has yet come close
to the success of the von Neumann design. All attempts to break the von Neumann
hegemony have met with embarrassing defeat.
Numerous parallel-processing digital computers have been constructed, from the
restricted and not very brainlike "vector processors" inside Cray
supercomputers, to the more flexible and AI-oriented "massively parallel"
Connection Machines manufactured by Thinking Machines, Inc. The Cray machines
can do many things at each time step, but they all must be of the same nature.
This approach is called SIMD, "single instruction, multiple data": it is
efficient for scientific computation, and some simple neural network models, but
not for sophisticated AI applications. The Thinking Machines computers, on the
other hand, consist of truly independent processors, each of which can do its
own thing at each time, using its own memory and exchanging information with
other processors at its leisure. This is MIMD, "multiple instruction, multiple
data"; it is far more reminiscent of brain structure. The brain, at each
time, has billions of "instructions" and billions of "data sets"!
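The distinction is easy to see in software terms. In the sketch below (with trivial, invented operations standing in for real workloads), the SIMD flavor applies one instruction across a whole data set in lockstep, while the MIMD flavor runs independent instruction streams, each with its own data.

    public class SimdVsMimdSketch {
        public static void main(String[] args) throws InterruptedException {
            // SIMD flavor: a single instruction ("double it") applied, element
            // by element, to every member of one data set.
            double[] data = { 1.0, 2.0, 3.0, 4.0 };
            for (int i = 0; i < data.length; i++) data[i] = data[i] * 2.0;

            // MIMD flavor: independent processors, each with its own program
            // and its own data, running at the same time.
            Thread multiplier = new Thread(() -> {
                double product = 1.0;
                for (double x : new double[] { 1, 2, 3, 4 }) product *= x;
                System.out.println("processor A, product: " + product);
            });
            Thread reverser = new Thread(() ->
                System.out.println("processor B, reversed: "
                        + new StringBuilder("dataset").reverse()));

            multiplier.start();
            reverser.start();
            multiplier.join();
            reverser.join();
        }
    }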
These parallel digital machines are exciting, but, for a combination of
technical and economic reasons, they have not proved as cost-effective as
networks of von Neumann computers. They are used almost exclusively for
academic, military and financial research, and even their value in these domains
has been dubious. Thinking Machines Inc. has gone bankrupt, and is trying to
re-invent itself as a software company; their flagship product, GlobalWorks, is
a piece of low-level software that allows networks of Sun workstations to behave
as if they were Connection Machines (Sun workstations are high-end engineering
computers, running the Unix operating system and implementing, like all other
standard contemporary machines, the serial von Neumann model).
With GlobalWorks, all the software tools developed for use with the Connection
Machines can now be used in a network computing environment instead. There is a
serious loss of efficiency here: instead of a network of processors hard-wired
together inside a single machine, one is dealing with a network of processors
wired together by long cables, communicating through complex software protocols.
However, the economies of scale involved in manufacturing engineering
workstations mean that it is actually more cost-effective to use the network
approach rather than the parallel-machine approach, even though the latter is
better from a pure engineering point of view.
Of even greater interest than the massively parallel digital Connection Machine
is the analog, neural net based hardware being produced by several contemporary
firms -- radical, non-binary computer hardware that is parallel and distributed
in nature, mixing up multiple streams of memory, input/output and processing at
every step of time. For instance, the Australian company Formulab Neuronetics,
founded in the mid-80's by industrial psychologist Tony Richter, manufactures
analog neural network hardware modeled fairly closely on brain structure. The
Neuronetics design makes the Connection Machine seem positively conservative.
Eschewing traditional computer engineering altogether, it is a hexagonal
lattice of "neuronal cells," each one exchanging information with its neighbors.
There are perceptual neurons, action neurons, and cognitive neurons, each with
their own particular properties, and with a connection structure loosely
modeled on brain structure. This technology has proved itself in a variety of
process control applications, such as voice mail systems and internal automotive
computers, but it has not yet made a splash in the mainstream computer industry.
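To give a flavor of the lattice idea (and nothing more: the sketch below reflects none of the actual Neuronetics hardware design, and its grid size, update rule, and coordinates are invented for illustration), one can simulate a hexagonal grid of cells, each repeatedly blending its continuous activation with that of its six neighbors.

    public class HexLatticeSketch {
        // Axial-coordinate offsets of the six neighbors on a hexagonal grid.
        static final int[][] NEIGHBORS = { {1,0}, {-1,0}, {0,1}, {0,-1}, {1,-1}, {-1,1} };

        public static void main(String[] args) {
            int size = 10;
            double[][] cell = new double[size][size];
            cell[size / 2][size / 2] = 1.0;   // one initially stimulated cell

            for (int t = 0; t < 20; t++) {
                double[][] next = new double[size][size];
                for (int q = 0; q < size; q++) {
                    for (int r = 0; r < size; r++) {
                        double sum = cell[q][r];
                        for (int[] d : NEIGHBORS) {
                            int nq = q + d[0], nr = r + d[1];
                            if (nq >= 0 && nq < size && nr >= 0 && nr < size) sum += cell[nq][nr];
                        }
                        // Each cell mixes its own signal with its neighbors' signals.
                        next[q][r] = sum / 7.0;
                    }
                }
                cell = next;
            }
            System.out.printf("center activation after 20 steps: %.4f%n",
                    cell[size / 2][size / 2]);
        }
    }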
By relying on process control applications for their bread and butter,
Neuronetics will hopefully avoid the fate of Thinking Machines. But the ultimate
ambition of the company is the same: to build an ultra-high-end supercomputer
that, by virtue of its size and its brainlike structure, will achieve
unprecedented feats of intelligence.
As of now, this kind of neural net hardware is merely a specialty product. But I
suspect that, as PC's fade into history, these analog machines will come to play
a larger and larger role in the world. In the short run, we might see
special-purpose analog hardware used in the central servers of computer
networks, to help deal with the task of distributing information amongst various
elements of a network computing environment. In the long run, one might see
neurocomputers joining digital computers in the worldwide computer network, each
contributing their own particular talents to the overall knowledge and
processing pool.
The history of AI and computer hardware up till now, then, is a somewhat sad
one, with an ironic and optimistic twist at the end. The dominant von Neumann
architecture is patently ill-suited for artificial intelligence. Whether it is
truly superior from the point of view of practical engineering is difficult to
say, because of the vast amount of intelligence and resources that has been
devoted to it, as compared to its competitors. But it has incredible momentum --
it has economies of scale on its side, and it has whole industries, with massive
collective brainpower, devoted to making it work better and better. The result
of this momentum is that alternate, more cybernetically sophisticated and
AI-friendly visions of computing are systematically squelched. The Connection
Machine was abandoned, and the Neuronetics hardware is being forced to earn its
keep in process control. This is the sad part. As usual in engineering, science,
politics, and other human endeavors, once a certain point of view has achieved
dominance, it is terribly difficult for anything else to gain a foothold.
The ironic and possibly optimistic part, however, comes now and in the near
future. Until now, brainlike parallel architectures have been squelched by
serial von Neumann machines -- but the trend toward network computing is an
unexpected and unintentional reversal of this pattern. Network computing is
boldly cybernetic -- it is brainlike computer architecture emerging out of von
Neumann computer architecture. It embodies a basic principle of Oriental martial
arts: when your enemy swings at you, don't block him, but rather position
yourself in such a way that his own force causes him to flip over.
The real lesson, on a philosophical level, may be that the structure of brain
and intelligence is irresistable for computing. We took a turn away from it way
back in the 1940's, rightly or wrongly, but now we are returning to it in a
subtle and unforeseen way. The way to do artificial intelligence and other
sophisticated computing tasks is with self-organizing networks of
intercommunicating processes -- and so, having settled on computer hardware
solutions that do not embody self-organization and intercommunication, we are
impelled to link our computers together into networks that do.
5. Issues of Scale in Artificial Intelligence
Another way to look at these issues is to observe that, historically, the
largest obstacle to progress in AI has always been scale. Put simply, our best
computers are nowhere near as powerful as a chicken's brain, let alone a human
brain. One is always implementing AI programs on computers that, in spite of
special-purpose competencies, are overall far less computationally able than one
really needs them to be. As a consequence, one is always presenting one's AI
systems with problems that are far, far simpler than those confronting human
beings in the course of ordinary life. When an AI project succeeds, there is always
the question of whether the methods used will "scale up" to problems of more
realistic scope. And when an AI project fails, there is always the question of
whether it would have succeeded, if only implemented on a more realistic scale.
In fact, one may argue on solid mathematical