The Proceedings of the 1989 NanoCon Northwest regional
nanotechnology conference, with K. Eric Drexler as Guest of Honor.
[Cover graphic: Van der Waals cylinder-and-sleeve bearing]
February 17-19, 1989
Guest of Honor
K. Eric Drexler, Visiting Scholar, Stanford University, and
author of Engines of Creation
Guests (in alphabetical order)
Greg Bear, author of BLOOD MUSIC
Dr. Gregory Benford, University of California
Dr. John Cramer, University of Washington
G. Louis Roberts, Boeing Computer Services
Dr. Bruce Robinson, University of Washington
Dr. Nadrian Seeman, New York University
Marc Stiegler, Xanadu
Mike Thomas, Boeing Computer Services
NANOCON Chair: John L. Quel
Vice-Chair: Dr. Jim Lewis
This document was transcribed and edited by Jim Lewis and John L. Quel
between March and May of 1989. This WWW version was created by Jim Lewis
between March and June of 1996.
© Copyright 1989, by NANOCON
Published by NANOCON. All rights reserved under international and Pan-American
conventions. However, we permit copying for educational purposes only provided
that the source of the document, NANOCON, is acknowledged at all times.
Cover, diamond cube, and rod logic graphics © copyright 1989, by K. Eric Drexler.
The theme of NANOCON was nanotechnology, the predicted development of the
technological capability to manipulate matter at the atomic scale and to
build complex devices to precisely atomic specification. Current technologies
that are leading in this direction include protein engineering and scanning
tunneling microscopy. However, this conference made no pretensions to covering
either of these areas, each of which has been the focus of several recent
conferences. Instead it featured brief glimpses of three very disparate
themes. The conference focused on a diverse set of ideas advanced by K. Eric
Drexler in the book Engines of Creation, published by Anchor Press/Doubleday
in 1986.
- proposals for molecular design and engineering, particularly relating
to nanotechnology.
- the management of very complex information systems.
- the effects that these advanced technologies might have on society
over the next 30-50 years.
The idea of atomic-scale manufacturing is an outgrowth of earlier suggestions
[K. Eric Drexler. 1981. "Molecular
engineering: An approach to the development of general capabilities for
molecular manipulation" Proc. Natl. Acad. Sci. USA
78:5275-5278]. Drexler discussed his recent work on design considerations
for molecular-scale mechanical computers, which are expected to produce
the equivalent of a current main-frame computer the size of a bacterial cell.
Other work on molecular engineering by Bruce Robinson and Nadrian
Seeman, researchers at the UW and at NYU, used a different approach:
base pairing of specially-designed DNA segments to provide a self-assembling
3-dimensional matrix to which conducting organic polymers could be attached.
Such constructions could provide nanomanipulators or an electronic computer,
somewhat larger than Drexler's proposed mechanical computer, but still molecular
in size and faster because it would be electronic rather than mechanical
in operation. [Bruce H. Robinson & Nadrian C. Seeman. 1987. The design
of a biochip: a self-assembling molecular-scale memory device. Protein
Engineering 1:295-300.]
The second major theme of NanoCon was the use of hypermedia to manage information.
G. Louis Roberts of Boeing Computer Services discussed a hypertext system
developed at Boeing to access a huge library of visual information. Marc
Stiegler described the Xanadu Hypertext Project based on the proposal of
Theodor Nelson in Literary Machines. Xanadu
will constitute a vast library of published works, interconnected by two-way links.
The third major theme encompasses the effects of such technological revolutions.
Long term effects upon society were discussed by a panel that included two
prominent science fiction authors, Greg Bear and Gregory Benford. An additional
panel considered possible developmental paths toward these technologies.
James B. Lewis, PhD
Table of Contents
I wish to give special thanks to my partner, Dr. Jim Lewis, and the four
people who worked with me from the beginning to make NANOCON a success:
Dr. John Cramer, Mike Thomas, Rick Burton and Steve Salkovics. They were
the primary contacts with the business and academic communities, giving
it the credibility it needed to succeed. Without their efforts, organizing
the conference would have been out of the question.
Special thanks also needs to be given the guests, especially K. Eric Drexler.
Without their support, this conference would not have been possible.
Jane Hawkins, Kathleen Critchett, Marcie Malinowycz, and Eileen Gunn provided
invaluable assistance with the registration process.
Finally, Grant Fjermedal and Steve Salkovics are to be commended for their
work in recording the entire conference, and David Gagliano and Rick Burton
for help with printing and copying this document, making the proceedings
available to all who are interested in these extraordinary ideas.
A Note on the Proceedings
These proceedings have been edited for conciseness and clarity. People tend
to be more repetitious when speaking than when writing. In editing this
document, we saw little reason to transcribe verbatim what was said in all
cases. We have, however, struggled to keep the meaning and emphasis of what
the speakers said intact. Possibly offensive comments have been edited.
The speakers used slides or other visual materials to illustrate their talks.
Only a very few of these could be included either with the talk or in the
Appendices. We have tried to explain the references to the slides whenever
possible, but this compromise will inevitably prove inadequate. In the case
of Eric Drexler's rod-logic talk, additional material is available from
the Foresight Institute, as noted below.
NANOCON was a hybrid and an experiment. Neither a fan gathering (as the
use of the suffix "con" would imply), nor a true scientific conference,
it shamelessly borrowed elements of both. I doubt that anything quite like
it had been tried before, at least with any regularity. NANOCON sought to
bring together people who rarely interact: scientists, technicians (by that
term, I mean those who apply science to business), and writers, to see what
would happen. NANOCON attempted to please everyone, and I think did not
do such a bad job of it.
The three day conference was a forced march through a lot of material. The
goal was to achieve by the close of the conference a state in which the
seeds of future thought would be planted. It is our hope and intent that
in the coming months and years those seeds will grow.
I initiated the events that led to this conference because I believe the
history of human progress is the history of evolving ideas and institutions.
In addition, I believe that while society cannot be reduced to economics,
there is an undeniable underpinning of economics to social action. The
effect of "nanotechnologies" (I prefer the plural form because
the singular, which I reserve for the final achievable goal, has already
become overladen with emotionalism and utopianism) on both will be profound
and lasting; indeed, in the most fundamental social sense of all -- upon
our whole understanding of what it means to be human.
Can such all encompassing changes be understood? Can we, as individuals,
prepare for them? Can the ideas be explained and communicated responsibly,
or are they doomed to remain the staple of bad science fiction stories until
it is much too late?
I had been concerned about these questions ever since I read Engines
of Creation on Christmas Eve, 1986. NANOCON grew out of my frustrations
that little progress was being made in enlarging upon the vision described in that
book. NANOCON grew out of my despair over the enormous difficulties involved
in communicating that vision, even as we understood it today; difficulties
that threatened to drown the ideas under waves of foolishness and fear.
I felt that something different had to be tried, something that would make
these extraordinary possibilities more accessible to a broader audience
than the traditional "futurephiles." Some way had to be found
to get beyond the "Giggle" stage -- and that would require more
than the usual intellectual tilt-a-whirl "cons" traditionally
supply. The problem as I saw it was not one of intelligence or knowledge,
but of thought. And by thought, in this context, I mean the willingness
to engage in conceptual exploration.
Since that Christmas Eve, I have had many talks regarding nanotechnology
with people differing widely in career, income, religion, age, and education.
While I expected few to embrace the ideas, it was disheartening that the
vast majority of responses could be classified into but three simple groupings:
1. Nanotechnology will never happen (or so far in the future
it might as well be "never").
2. Nanotechnology is wrong, if "it" does happen -- that was a
very common response.
3. Nanotechnology is nothing special. I attributed that response to "end
of the world" burn out, an attitude for which I have a lot of sympathy.
These depressingly unimaginative attitudes seemed to me to all sum to the
same underlying feeling -- "Don't bother me: I don't want to think about it."
Why were all these good, honest, intelligent people, so loath to distance
themselves from rash moral assessments where understanding was so obviously
lacking? Why so little motivation to examine the assumptions which underlie
their feelings? Why the eagerness to make all-encompassing pronouncements
where not a single argument or fact could be given to sustain their statements?
It seemed to me that they could understand the words, but the logic or syntax,
if you wish, of their thinking did not allow for consideration of anything
beyond their own highly restricted domains of comfort.
I concluded that understanding the implications of nanotechnology is most
emphatically not a matter of education or intelligence. It is a matter of
how we use our minds and our emotions, and most people clearly use them
badly, whatever their innate mental capabilities.
In structuring this conference, we tried to give the participants as much
of the fundamentals of these profound future changes as time would permit
and put off as long as possible normative considerations. We also attempted
to make these prospects as concrete as possible. That was why economic considerations
were so vital. Economics does bring a refreshing earthiness to speculation.
By requiring us to ask detailed questions flavored with strong dosages of
reality and by inserting the "I" into the equations of social
action, it can enable us to avoid the pitfalls of seeing ourselves as "cosmic
spokespersons" for the whole of existence. Economic motives can be
an excellent incentive for focusing thinking, something that sermonizing never achieves.
By exploring alternatives and possibilities, one thinks not to be "right",
either in the sense of being correct or moral, but instead engages in a
process that cannot be measured against arbitrary abscissas of "right"
and "wrong." Such thought exploration simply "is." It
is unquestionably difficult by being both a rational function and a creative
process, and it is certainly uncomfortable, as judgments and proofs while
not being excluded, are postponed (that is a crucial point, because I am
not arguing for moral nihilism). It is tragically unfortunate that such
thinking is seldom attempted.
These notions of exploration are difficult to communicate. The tendency
is always to assume one's prejudices are the laws of existence, and to denounce
and insult anyone who dares disagree. It seems shallow and somehow lacking
to look upon thinking as a skill. That would cause us to humble ourselves
before the future; to admit we are struggling to manage these awesome speculations
with very limited tools; to realize that our vision of the future will always
be a vague simulation, until that future is upon us. But, I am convinced
such unavoidable vagueness may be sufficient and, in any event, is certainly
better than nothing.
Consider the following: if an asteroid were heading towards earth, we would
have at our disposal Newtonian mechanics and a good understanding of the
effects of high energy/momentum impacts to argue effectively for the diversion
of all necessary resources to avert global catastrophe. With little imagination,
detailed scenarios could be constructed on the basis of the time left --
if the impact were a year, a decade, or a century away. We would feel confident
that our audience would be extremely attentive, as our language would possess
graphic precision. For once, nonsense would be drained from our discussions.
For once, our thinking would be clear and unencumbered. But this speculative
future, not the happiest, is a poor guide when attempting to understand
the massive social changes resulting from the nanotechnologies.
One obvious difficulty is that we have no social science corresponding to
Newtonian physics to communicate effectively what is to come, nor is such
a "science of the future" terribly likely. Since there is no analogous
asteroid to point to, we are stuck with "vague simulations", where
equations are few and untrustworthy. Abstractions, limited knowledge, and
extreme uncertainty are implicit in every statement we make. And that makes
the process of communication, at any level, very difficult indeed -- unless,
and this is a crucial distinction, our intent is to instill fear, our goal
power over the minds of others. The cautions of Dr. Gregory Benford during
the social issues panel need to be pondered in depth. Let no one doubt for
a moment that the ideas implied by nanotechnology promise a global field
day for the ignorant, the incompetent, and the irresponsible, but the option
of closing all discussion on the matter is much worse, even if it were possible
to do so.
Despite the problems and the misgivings, NANOCON courageously aimed to put
those future thoughts and communications on as firm a basis as possible,
given our present knowledge. Now that NANOCON is over, it is our hope that
those who met during the conference will remain in contact, and that the
cross fertilization of ideas concerning these technologies will continue
and spread. It is our hope that the legacy of Engines
of Creation will be built upon, so that when the future arrives,
not so many decades from now, we will be in some measure prepared for it.
That would be a first in human history, and it is certainly doubtful, but
that is no reason not to try, and try we did.
In retrospect, I believe NANOCON will be looked upon as only an early, crude
attempt at understanding and communicating what is to come. And judging
from the interest that grows more evident every day, it will certainly,
and quite properly, not be the last such conference.
On behalf of the people who made it possible, I thank you for your support.
John L. Quel
I. K. ERIC DREXLER:
An Introduction by Dr. John Cramer
We can think of ourselves as standing in the trough just before the tidal
wave hits; the only question is just how far away that tidal wave is. It's
perhaps a unique circumstance in human history: a revolution that is going
to have a profound effect on our society, and the way we do things, and
the way we build things, has been anticipated in the way that it has in
this particular circumstance. I can't think of another example of an instance
in which a monstrous societal impact of a technology was seen coming far
enough in advance that one could do advance thinking and planning. To some
extent, one could say that people like Norbert Wiener and John von Neumann
thought about computers before they were upon us, but while there was plenty
of time, there was very little advance planning as a result of their visions.
With Eric Drexler, however, I believe the situation is different.
When I read Engines
of Creation, my reaction was "Of course!" Its ideas
were something I had been thinking about for a long time -- in a rather
vague way. Suddenly, they came into focus. The focus is that, as Scientific
American said, there is an "air of inevitability" about
these ideas. They are coming. The best we can do is try to understand and
digest them. And towards that end, I hope that we can accomplish something
at this conference other than deciding "how many nanotechnologists
it takes to screw in a light bulb."
(Answer: None -- that's a problem for conventional technology. With nanotechnology,
you can make a light bulb that can fix itself!)
Eric was an undergraduate at M.I.T. studying interdisciplinary science.
He was all over the map studying physics, mathematics, and various aspects
of engineering. He wandered into the engineering program and received a
master's degree, but before he received a PhD, he wandered out again, because
his interest in nanotechnology took him in a direction for which there is no degree program.
I think Eric's experience is in a way a comment upon our education system.
Nanotechnology is an interdisciplinary field where so many different elements
are being brought into play that no one department is willing to grant degrees
in such a subject. It's a comment upon the specialization of our educational
system -- "you can't do things like that!"
But their loss is our gain. It's a real pleasure to hear Eric tell us about
the shape of the future.
II. THE CHALLENGE OF NANOTECHNOLOGY
K. Eric Drexler
I might add to the previous remarks that nanotechnology fit in especially
poorly in an aeronautics and astronautics graduate program. "You want
to take a course in molecular morphogenesis?" they said. "That's
not part of your field. You can't."
A. The Four Challenges
Having given my introductory nontechnical talk this evening the label "The
Challenge of Nanotechnology", I found myself thinking: what is the
challenge of nanotechnology? I decided that there are at least four categories
the challenge could be divided into:
1) The Challenge of Technological Development
The challenge of going from the technology base we have today towards greater
and greater control over the structure of matter to the point where one
is able to build complex (and increasingly complex) things atom by atom,
including molecular machines and assemblers, which will then enable a very
general control over the structure of matter.
2) The Challenge of Technological Foresight
Trying to understand what lies, not necessarily at the end, but well along
this path of technological development; trying to get some sense of the
lower bounds of the future possibilities. Not exploring all the things that
would be possible, since that would be foolish to undertake, but trying
to get a sense of a few of the key capabilities we will be able to equal or exceed.
3) The Challenge of Credibility and Understanding
Imagine that we had gone on a safari into the conceptual world of the future
and bagged some big, strange looking technological "animals" and
dragged them back. How would one package this information? How does one
present it? How does one make things that are true sound credible and distinguish
them from things that are not true (indeed nonsensical, yet sound superficially
similar), and thus give people a clearer understanding of what these technological
possibilities are? That understanding is the necessary foundation for dealing
with the fourth challenge.
4) The Challenge of Formulating Public Policy
How do we formulate public policy based on that understanding, so that when
the "tidal wave" hits, we are, in fact, as ready as we can be
to deal with it?
B. Discipline in an Interdisciplinary Field
I usually start off my talks, various technical colloquia and such, with
questions for the audience. These questions are intended to address the
point of credibility and understanding. How many people have backgrounds
in physics, chemistry, biology, engineering, computer science? I always
get to comment: "My, there are a lot of people in computer science here."
My explanation for that is that people in computer science are used to the
notion that making things very small and controlled and fast can be valuable.
When they hear that more is coming they say, "Oh, yes. Tell me more."
The reason I ask this question is that nanotechnology is very interdisciplinary.
It cuts across all the fields mentioned and more. But, unfortunately, interdisciplinary
subjects have a way of escaping from any discipline whatever. If you don't
watch out, you end up with the equivalent of Velikovsky with his book "Worlds
in Collision." He wrote in that book how ancient writings "explained"
how the solar system formed and that Venus was a comet coughed up out of
Jupiter. And he made substantial headway in the scientific community with
these theories of the past and the solar system. The historians thought
his astronomy was quite interesting, but, of course, his history was bunk.
The astronomers thought that the history was fascinating though, of course,
the astronomy was bunk. Similarly, experts in one of the above fields tend
in general to be harshly critical of any ideas that fall into their own
fields, but less critical about ideas in other fields. I believe that it
is extremely important for meaningful discussion of nanotechnology that
ideas be subject to demanding criticism. Accordingly I strongly encourage
this audience to be harshly critical of any ideas labeled "nanotechnology",
starting with mine - with what I say now, and extending to anything you
may hear in the future.
I will briefly outline some of the content of nanotechnology and how it
relates to where we are in technology today. I will discuss paths towards
nanotechnology -- the challenge of technological development. I will show
some pictures illustrating things found in exploring where this technology
will be in the long term. I will close with what I believe are some crucial
points in understanding the challenge of public policy.
C. Nanotechnology is Engineering, Not Science
In thinking about nanotechnology, it is of vital importance to distinguish
engineering from science. If you believe the media, you would conclude that
when people are out on the launch pad working on the space shuttle main
engines, that those people are "scientists." We are told they
are "scientists", but I don't believe they are out there studying
space shuttle main engines as natural phenomena, or taking samples of the
metal to study the precipitation or hardening of these metals, or something
like that. Instead, they are doubtless engineers, or perhaps technicians.
Here is the difference. If you ask a scientist to make a prediction about
the future of a field: "What will you discover 10 years from now, sir?",
and if that scientist responds that "In 10 years, I will be discovering
X" then that is obviously bunk. If you already know it, it cannot possibly
be a discovery. It is a contradiction.
If you asked an engineer, on the other hand, "what if we give you enough
time and money, what will you be able to build in five or ten years?",
you would expect a more reasonable answer. That question was posed to some
aerospace people in the early 60's and they replied, "We think we can
land a man on the moon and return him safely, Mr. President." They
gave a cost figure (and then they doubled it), and the budget was submitted
to congress. In fact, it was done. The reason being that people understood
the fundamental scientific principles, such as Newton's laws, they understood
how to build fuel tanks and engines and so forth, and they had confidence
that systems like this could be built and debugged.
I will argue today for a similar position with respect to nanotechnology
-- that we understand fundamental scientific principles well enough to see
much of what is possible, though a lot of work remains to be done. A lot
of that work will have the flavor of science in finding out details and
sorting out what works. But the lack of effort in what I call exploratory
engineering has left us, as a society, with a huge blind spot. Scientists
don't look too far ahead, because you can't in science. Engineers don't
look too far ahead, because they are not paid to. If you were an engineer
and went to your boss and said, "Just give me a year to think about
what we will be able to build in another 20 years." The response would
be: "No.", at least in this country. However, I sometimes get
the impression you would get a different answer in Japan, or at least a
somewhat greater time horizon.
D. Building with Atoms
I have been trying to fill a little bit of that blind spot in the area of
nanotechnology by trying to get some understanding of what could be built
with tools we don't have as yet. If you look at present day technology,
a modern research and development laboratory would look something like this:
people manipulating atoms -- huge, thundering herds of atoms, statistical
populations of them, stirring them around, heating them, reshaping them
by pounding, whirling, and so on. By these techniques, we make all sorts
of impressive things. We make sophisticated devices like transistors. We
are advancing in semiconductor technology and are getting very good at miniaturization.
Nowadays, we make very fine features on chips by processes that include,
for example, oven baking, and from these techniques we get some impressive,
extraordinarily useful devices, such as microprocessors.
I will argue that it is possible to put entire mainframes, with memory and
disk drives, in a cubic micron. Yet, the smallest features on a contemporary
chip are several microns across.
Here is an example of an intermediate stage on the conventional path of
miniaturization, that is, trying to use large technologies to "build
down." This example is near the smallest scale that process has achieved
in recent years; while not quite at the limit, it is close.
It is the surface of a salt crystal that has had lines drawn on it by a
tightly focused electron beam. This was done by the Naval
In the upper left-hand corner, you see an 18 nanometer scale bar. In an
etymological sense, you could call this "nanotechnology" because
it is on a nanometer scale. The next figure shows the contrast between that
and the results of the kind of "bottom-up" nanotechnology that
I discuss in Engines of Creation: the technology that I think
is the cause of the kind of excitement that has resulted in this conference,
as opposed to the kind of excitement that motivates further miniaturization
in the computer industry. Both are important, but they are on very different
time scales and scales of consequence.
Consider one cubic nanometer of diamond and imagine what it looks like to
a "nanotechnologist." If the planes in the figure below are cutting
cleanly between the planes of carbon, it is slightly less than one cubic
nanometer. Diamond has an extraordinarily high number density of atoms,
but most materials have on the order of 100 atoms/cubic nanometer. If each
of those atoms is something you can think of as a building block, then it
becomes clear that one can build relatively complex things in a single cubic
nanometer. Now a cubic micron, which is currently considered fairly small
in microtechnology, is a billion cubic nanometers. To a nanotechnologist,
therefore, a cubic micron is a vast amount of space to work in.
One cubic nanometer of diamond, containing 176 atoms.
A cube 100 nm on a side would contain 176 million atoms.
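The arithmetic behind these counts is easy to verify; a minimal sketch, taking the 176 atoms per cubic nanometer figure for diamond from the caption above:

```python
# Back-of-the-envelope check of the atom counts quoted above.
atoms_per_nm3 = 176              # diamond, from the figure caption

# A cube 100 nm on a side: 100**3 = one million cubic nanometers.
cube_side_nm = 100
atoms_in_cube = atoms_per_nm3 * cube_side_nm ** 3
print(atoms_in_cube)             # 176000000 -- "176 million atoms"

# A cubic micron is (1000 nm)**3 cubic nanometers.
nm3_per_micron3 = 1000 ** 3
print(nm3_per_micron3)           # 1000000000 -- "a billion cubic nanometers"
```

Even at the roughly 100 atoms per cubic nanometer typical of most materials, a cubic micron holds on the order of a hundred billion atomic building blocks.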
To give a sense of the kind of structures one thinks about in looking
at advanced nanotechnology, things based on assemblers able to build general
structures atom by atom, one might build a path for transmitting force,
or part of a lever to transmit torque, using a structure composed of carbon
atoms. In the example, hydrogen atoms are left out for simplicity. The structure
is a nanometer long and contains a countable number of atoms, each one of
which is in a precise place. If you remove one of them, it would no longer be the same structure.
Another example of an atomically precise structure, again not showing all
the atoms, is a roller bearing -- nothing a chemist would think about making
today, but something one can think about making with assemblers [see
cover]. It illustrates the principle that one can get smooth rotary
motion despite atomic "bumpiness", if you have the atomic "bumps"
on the two surfaces mesh in gear-like fashion. This answers one concern
that one might have about friction in molecular mechanical devices of this sort.
E. The Paths to Nanotechnology
Where are we today on the road to nanotechnology? DNA can be considered
as an "engineering material" and certainly has a possible role
in the construction of molecular objects. However, the interest most people
have had in it is not in its engineering properties, but in its informational properties.
The overwhelming reason people have done genetic engineering is that you
can take a piece of DNA and insert it into bacteria, where the DNA is transcribed
to make RNA molecules that contain the same information. The RNA molecules
in turn bind to ribosomes and the ribosomes read the RNA. Each RNA base carries
two bits of information, and the ribosome reads three bases at a time, thus
six bits at a time; that is, it reads a series of six-bit words. One word
might say: "start" (... making a protein with a particular amino
acid); another might say: "add" (... another particular amino
acid); and so on. The result is a growing polypeptide chain. Finally, the
six-bit word "stop" (... release the chain and, perhaps, start
over again) is reached.
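This picture of the ribosome as a reader of six-bit words can be caricatured in code. The sketch below is a toy model only: the two-bit encoding of the four RNA bases is real, but the instruction table (a single invented stop word, everything else treated as "add") stands in for the actual genetic code of 64 codons:

```python
# A toy model of the ribosome reading RNA as six-bit words.
BITS = {"A": "00", "C": "01", "G": "10", "U": "11"}  # 2 bits per RNA base

def codon_to_word(codon):
    """Three bases -> one six-bit word."""
    return "".join(BITS[base] for base in codon)

def translate(rna, code):
    """Walk an RNA string three bases at a time, building a chain."""
    chain = []
    for i in range(0, len(rna) - 2, 3):
        word = codon_to_word(rna[i:i + 3])
        action = code.get(word, "add")
        if action == "stop":           # release the chain
            break
        chain.append(word)             # "add": extend the growing chain
    return chain

toy_code = {codon_to_word("UAA"): "stop"}   # one invented stop word
print(codon_to_word("AUG"))                 # 001110 -- six bits per codon
print(translate("AUGGCCUAA", toy_code))     # ['001110', '100101'] then stop
```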
The reason people are interested in programming ribosomes to produce proteins
is that proteins can serve as components of molecular machines. A protein
doesn't remain as just a loose, floppy chain, but instead folds up into
a three dimensional configuration, in which the interior is a closely packed
arrangement of side chain atoms. Proteins look like random, haphazard things,
but every protein of that sort will roll up in the same way to make an object
which, at least in the case of some proteins that serve structural roles
in bacteria, has about the stiffness of a piece of wood, or a piece of epoxy
engineering resin. Proteins are molecular objects. They are pieces that
fold up and then go together to make more complex objects. An example is
a collection of two protein molecules and a strand of DNA. They are like
this because of Brownian motion. The random motion of things suspended in
solution under thermal agitation banged these molecules together in all
possible positions and orientations. Eventually they bumped together in
the right orientation, because these molecules had complementary surfaces
at that point: bumps matching hollows, patterns of electric charge matching,
and so forth. This resulted in selective stickiness and self-assembly.
A very powerful principle that will be used, I believe, in developing
nanotechnology, is the principle that if complex molecules
are made with complementary surfaces, they will self-assemble to make complex structures.
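This selective stickiness can be illustrated with a toy simulation: two parts bump together in random orientations, as under Brownian motion, and bind only when every bump meets a hollow. The surface patterns and sizes here are invented purely for illustration:

```python
import random

random.seed(1)

PART_A = (1, 0, 1, 1, 0, 0)              # bumps (1) and hollows (0)
PART_B = tuple(1 - x for x in PART_A)    # the complementary surface

def binds(a, b):
    """Selective stickiness: every bump must meet a hollow."""
    return all(x != y for x, y in zip(a, b))

def random_orientation(surface):
    """Thermal agitation: the part arrives rotated by a random amount."""
    k = random.randrange(len(surface))
    return surface[k:] + surface[:k]

def collisions_until_bound():
    """Count random collisions until the parts meet in the right orientation."""
    tries = 0
    while True:
        tries += 1
        if binds(PART_A, random_orientation(PART_B)):
            return tries

trials = [collisions_until_bound() for _ in range(2000)]
print(sum(trials) / len(trials))   # averages near 6: 1 orientation in 6 fits
```

Only one of the six rotations of PART_B is complementary to PART_A, so binding takes about six random collisions on average; real molecules explore far more orientations, but the same blind search eventually finds the complementary fit.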
We see that in nature: here is an example of a complex assemblage of protein
molecules. It looks like something out of a Grade B science fiction movie,
or an industrial small parts catalog (an analogy more promising from a nanotechnology
perspective). It is a collection of protein molecules stuck together with
some DNA in the head. In certain conditions it falls to pieces, pieces which
can be made to reassemble by putting them in the right conditions of temperature
and solution composition.
The assemblage is a T4 bacteriophage, a bacterial virus. You can take it
apart to individual protein molecules which have been assigned numbers by
the people working in the field and again they will assemble in the right
conditions to form a complete device. I call it a device because it will
selectively stick to the wall of the bacterium and act like a spring-loaded
hypodermic syringe: the base plate helps to make a hole in the bacterium
cell wall, the sheath contracts and drives the core down and injects the
DNA. The DNA is then copied to make more DNA and is transcribed to make
RNA. The RNA re-programs the ribosomes of the bacterium to make more of
the T4 proteins. The proteins and the DNA spontaneously assemble inside
the bacterium to make more viruses. This process finally leads to the production
of proteins that break up the bacterial cell and completely destroy it,
releasing the viruses.
The above example is depressing if one is bothered by the presence of parasitism
in the world on all known size scales. It is also an example of a molecular
machine. If you look at nature you find a variety of molecules, you find
a variety of components that are very tempting to think about from a mechanical
engineering point of view.
There are bearings: a molecule that is held together by a single sigma bond
and does not have any interference between the two parts of the molecule
will allow one part to rotate freely with respect to the other. That bond
can serve as a bearing with a load strength of a number of nanonewtons.
That is a healthy load for things on this scale, though perhaps not all
that one would want, which is why other bearings are being looked at.
There are rotary motors: some bacteria can swim, and while advanced cells
can swim using whip-like flagella, bacteria have simple helical rods of protein,
rods that rotate: where the rod attaches to the cell wall there is a variable
speed, reversible motor. This motor has been described in the literature
as a "proton turbine".
There are also linear motors, like the molecular fibers that drive muscle.
We already have an example of more complex machines as well, a numerically
controlled machine tool called the ribosome.
What this suggests is that there are paths leading from engineering folding
polymers, such as proteins, or things like proteins whose folding might be
easier to design, such as perhaps properly configured DNA molecules.
These paths lead from those sorts of systems, as we improve our ability
to design them, to building molecular machines.
Now, a lot of people have said that designing a protein from scratch is an
extraordinarily difficult problem -- yet it has been done in the last year ["Characterization
of a helical protein designed from first principles" L. Regan &
W.F. DeGrado. 1988. Science 241:976-978]. In Engines
of Creation, I waffled on how long that might take. Now the milestone
has been passed.
Along that path there is still a lot of improvement to be made in design
techniques. When they are improved one could build machines, not just things
that fold, but things that fold to form objects that do something, and use
those machines to build better machines. We know by looking at nature that
molecular machines can, by holding reactive molecules at particular positions
and orientations, perform chemical operations to build up complex structures
in specific ways. That is the function of enzymes at one end of a spectrum
of machines. If you have more flexible, programmable machines, they start
to look more and more like general purpose assemblers. One can use low end
machines to build better machines, and better machines until one has reached
the kind of assemblers that form the bulk of the subject matter of Engines
of Creation.
There are other paths. One could work in non-biological chemistry, such
as supramolecular chemistry, which is the chemistry of the assemblage of
molecules. Three people shared a Nobel prize recently for their work in
that field. Again, that happened since this slide was prepared.
And one can, perhaps, extend the technology of the scanning tunneling microscope
(STM) or its relative, the Atomic Force Microscope (AFM), for molecular
manipulation. The STM is a device that can position a tip to atomic precision
near a surface and can move it around. Since this slide was done, people
have demonstrated ["Atomic-scale surface modifications using a tunnelling
microscope" R.S. Becker, J.A. Golovchenko, and B.S. Swartzentruber.
29 January, 1987. Nature 325:419-421] the ability to
get atoms on a tip by touching it near a surface at one place, evaporating
them off the tip at another, and creating a new "glob" on the surface
that seems to be a single atom. Unfortunately, the last I heard it only
worked on germanium, and the Bell Laboratory workers were unable to "call
their shot", i.e., they could not say in advance where the atom would go. The STM
is not something at this time that can build nanomechanisms.
Also, since this slide was done, at IBM Almaden ["Molecular manipulation
using a tunnelling microscope" J.S. Foster, J.E. Frommer, P.C. Arnett.
1988. Nature 331:324-327] people have scanned a surface
in an organic liquid with an STM tip, then placed a voltage pulse on the
tip and apparently electrically excited these molecules, made them reactive,
and bonded them to the surface. This resulted in nanometer scale "blobs"
that were visible when the surface was later scanned. This may be very useful
for building computer memory. Again, however, they were not able to make
what a chemist would consider a specific modification. To do that, one may
need to create a hybrid technology: develop molecular tools through one
of the previous methods, and bind them to the tip, for example, of an Atomic
Force Microscope, to give it greater specificity of action than these metallic
or ceramic tips do today.
F. Building with Assemblers
A key point in thinking about these enabling technologies is that from a
longer perspective, from the point of view not of the challenge of technological
development, but from the points of view of technological foresight and
public policy, it doesn't matter what path is followed. All paths lead to
the same place.
When you have assemblers, you build assemblers. And what kind of assemblers
you build does not depend upon the things that led up to the assemblers.
My standard metaphor for this is, look at aircraft today, such as a Boeing
747. We note a certain shape to the wings, and a certain composition to
the metal, which has no necessary connection to the shape of the wings and
the composition of the cloth in the Wright brothers' original aircraft.
The Wright brothers brought us into the domain of a new technology, but
what we do there depends upon the tools that we have today and the design
ability that we have today, not how it all started. It will be the same
with full-blown nanotechnology.
As I discussed in Engines of Creation, there are very strong
reasons for thinking that assemblers can be made to work based on pointing
to things a lot like them that already do work. Chemistry shows us a wide
range of reactions that can be made to occur when molecules come together
in the right positions and orientations. Enzymes show that if you hold reactive
molecules together, in a particular position and orientation, you can get
a particular reaction to occur. What is needed to build complex structures
is systematic positioning of molecules to make reactions occur in very specific
and very complicated patterns. That is the core of nanotechnology. That
is what assemblers will accomplish by using the kinds of tools we are already
familiar with. The important addition is that, instead of being a specific
jig that can only catalyze one reaction, as an enzyme is, we are talking
about things that can do programmable positioning; something that is a general
purpose, flexible tool for construction. And, as icing on the cake, it will
then be possible to drive a lot of these reactions using external sources
of energy, such as voltage, or even mechanical force transmitted by molecular machinery.
And that leads to this slide, which is really intended to summarize assemblers
and what is important about the case for them. Here is the summary of what
it means. What assemblers will give us is thorough and, as I will argue,
inexpensive control over the structure of matter. That means that in contrast
to today, where technology is very strongly limited by fabrication, which
conditions everything that we do, there is a tremendous range of things
that one can design that one would not normally think of trying to design,
because it would be ridiculous to think of being able to build them. A large
part of what I have done is simply to ask: if we can build almost anything
that makes physical sense, what then becomes possible? And then to explore
the very elementary possibilities that are opened up by that fabrication
capability.
It appears that assemblers can build anything that makes chemical sense,
and at that point the main limits will be physical law: what does natural
law actually predict to exist and function and what will be the design capabilities
-- what are we clever enough to design? I have been trying to stay well
within the limits of physical law, indeed well away from those limits, so
as to have things that are easily defensible and will clearly work. That
involves things that are very far from the limits when you have advanced
design capabilities, more people working in the field, the ability to test
questionable ideas against nature to see whether they work or not, and so
forth. All these things will enable people to push much closer to the frontiers
of the possible than one can do with any safety in exploratory engineering
today with limited resources and without the possibility of experimental
feedback. Therefore, the things that I design are "stupid." They
are clunky. In a moment, I will discuss one of the stupidest and clunkiest
devices of all -- a mechanical nanocomputer.
G. Nano-Scale Mechanical Computers
Ordinarily, in thinking about making small things, such as small computers,
one says "Well, if computers are electronic devices, and if we are
going to build molecular computers, that means molecular electronic devices."
So, we have conferences on molecular electronic devices, of which there
have been a number. I very strongly suspect that some of the designs that
have already been presented at these conferences or will be presented in
coming years, will work. They will be vastly superior to the kinds of computers
that I am designing. I would guess, off hand, that these molecular electronic
computers will be three orders of magnitude faster than my molecular mechanical
computers.
The problem with designing molecular electronic systems, however, is that
one must deal with the quantum mechanical properties of electrons and very
small, irregular structures. If you look at the current work in trying to
understand the structure of high-temperature superconductors, where we know
where all the atoms are and all the relevant fundamental laws of physics, there
is still a Nobel prize for any theoretician who can figure out how they
work. Despite all those favorable factors, no one has done so and defended
his theories in a fully credible fashion. Here we are talking about systems
that are, again, complex and electronic, and while you may have something
that will work, as the superconductors work, you may not be able to
argue that it will. Therefore, I don't try to argue that, for any given
design, they will. I simply say "probably one of them", and I
wander off and instead design "stupid" things like a mechanical computer.
Computers did not start with electronics. They started with machines, though
they didn't quite get off the ground with that technological medium. The
illustration is a picture of part of a machine designed by Charles Babbage
back in the middle of the 19th century, the analytical engine. If Babbage
had had more time, more money, and perhaps better machinists, he would have
built the world's first programmable computer back around 1860.
And then, in subsequent years, as systems were refined and the Swiss got into
the business and displaced the English, and there was a national hue and
cry about the loss of the computer industry to Switzerland (where they are
better at miniaturization of mechanical devices), we would eventually have
had computer science departments emerging out of mechanical engineering
departments. Everyone would then have thought that software was fundamentally
a branch of mechanical engineering, instead of being confused, as we are
today, that it is a branch of electrical engineering.
The Babbage machine would have been quite slow. However, it turns out that
if you scale a mechanical system down by a factor of 10, it becomes 10 times
faster in the frequency of operation. If you scale a mechanical computer
down by a factor of a million, it becomes a million times faster. If you
then make it out of stronger, lighter, stiffer materials, that helps as
well. Another nice side effect is that you reduce the volume by a factor
of 10^18.
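The scaling rule described above can be put in a line of arithmetic. This is a sketch with illustrative numbers, not a design calculation; the function name and inputs are mine.

```python
# A minimal numeric sketch of classical mechanical scaling: shrink every
# linear dimension of a system by a factor s and, at constant material
# properties, characteristic frequencies rise by s while volume falls
# by s**3.

def scaled(frequency_hz, volume_m3, s):
    """Return (frequency, volume) after shrinking all lengths by factor s."""
    return frequency_hz * s, volume_m3 / s**3

f, v = scaled(frequency_hz=1.0, volume_m3=1.0, s=1e6)  # million-fold shrink
print(f)  # 1000000.0 -- a million times faster
print(v)  # 1e-18 -- volume reduced by a factor of 10^18
```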
One would transmit signals in such a device by moving rods: the rods would
have knobs on them that mechanically interact, blocking and unblocking each
other. This turns out to give one the ability to build things that are analogous
to transistors. An example would be two rods, one of which would move if,
and only if, the other rod was out of the way. This is like a transistor
in which current will flow if, and only if, the right voltage is on another
conduction path. One can look at complex systems built out of these. You
can analyze them using Newtonian, instead of quantum, mechanics. It may
be a "stupid" design, but undoubtedly it could be made to work,
which is my ambition. Not to make it work as such, but to give convincing
arguments that it could work and, therefore, one could do at least this
well or better. Now you have a conceptual building block that can be used
for thinking about what nanotechnology can do.
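The blocking-and-unblocking interlock just described can be caricatured in Boolean terms. The sketch below is a toy model with illustrative names, not anything from the actual rod-logic designs; it only shows how "moves if and only if the other rod is out of the way" yields transistor-like logic.

```python
# Toy model of a rod-logic interlock: a "probe" rod completes its
# stroke if and only if a "gate" rod is out of the way.

def probe_moves(gate_retracted: bool) -> bool:
    """The probe rod advances only when the gate rod is withdrawn."""
    return gate_retracted

# Inverting the sense of blocking (blocked only when a gate rod is
# extended) gives NAND, which suffices in principle to build any
# digital circuit.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

print(probe_moves(True), probe_moves(False))  # True False
print(nand(True, True), nand(True, False))    # False True
```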
H. Cell Repair Machines
In Engines of Creation, there is a discussion of cellular repair systems:
medicine based on extremely small computational devices hooked up to molecular
scale sensors and devices that can do operations on molecules -- taking
them apart, synthesizing them, and so on. To do that, one is interested
in how small one can make a general purpose computer, given that one could
put all the atoms where one wanted. To estimate that, take the scale of rod-logic
devices, and look at the number of devices and conducting paths and so on,
in a simple, ancient, bottom-of-the-line Intel 4004 4-bit microprocessor,
the first processor that saw any substantial commercial use. If one goes
through that exercise and asks how large a block is required to hold the
equivalent of an Intel 4004, something on the order of a few tens of nanometers
on a side will about do it. If you estimate the volume of memory devices, one would
conclude that a roughly comparable volume will hold one kilobyte of RAM.
Comparable volumes will hold roughly a hundred kilobytes of tape memory,
and a lot of molecular sensors and molecular machinery suitable to characterize
and manufacture a wide variety of macromolecules.
That whole package of "stuff" combined with unspecified software,
which may be a greater challenge than the hardware in the long run, is something
that might be described as a "repair device." If you take that
collection of objects, it will look very small in comparison to the diameter
of a 20 micron size cell. Yet, a cubic-micron computer is on the order of
a contemporary main-frame: tens of megabytes of random access memory and
some hundreds of megabytes of fast tape memory. It turns out that this gives
one a database with more information than was used to construct the cell
in the first place, along with a number of mainframe computers, and a (very)
Local Area Network, sufficient to connect to 100,000 similar repair devices.
If you can come up with the software, which is another question I am not
addressing, all this would function in less than one percent of the volume
of the cell. If the software problems can be handled and if one can use
this same technology to figure out what needs to be done by doing a very
thorough job of characterizing biological materials, then one should be
able to bring surgical control to the molecular level and begin to repair
tissue at a level that medicine cannot begin to deal with today.
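The volume comparison in the passage above can be checked with rough arithmetic. The figures below are assumed round numbers of my own (a one-cubic-micron computer, a cell modeled as a 20-micron sphere), not exact values from the talk.

```python
import math

# Rough volume check for the repair-device estimate: how much of a
# 20-micron-diameter cell would a cubic-micron computer occupy?
cell_diameter_um = 20.0
cell_volume_um3 = (4.0 / 3.0) * math.pi * (cell_diameter_um / 2) ** 3

computer_volume_um3 = 1.0
fraction = computer_volume_um3 / cell_volume_um3
print(round(cell_volume_um3))  # ~4189 cubic microns
print(fraction)                # ~0.00024 -- well under one percent
```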
Today, one mode of therapy is to throw drug molecules into the body: they
diffuse around and selectively stick to things and perturb the behavior
of the biological structures. The other major mode of therapy is to take
an enormous piece of metal and hack through tissue, ignoring entirely where
the cells are. The result is that the body abandons its dead and self-heals
-- if things go well. Technology like this, however, would bring surgical
control to the molecular level, which means tissue could be either healed
or reconstructed -- again, if you have the software to handle the task,
and there are arguments that such is achievable, though the arguments are
in the software domain.
These ideas led to a column in Scientific American in January
1988, which had an illustration of a possible repair device. These repair
"submarines" were drawn rather larger than I told them to, so
the devices pictured could easily hold a gigabyte of memory, and thus think
rather deep thoughts about the fat they were chewing.
I. "Mega-Brain" Computers
Now, what happens when you have a lot of computers? Suppose you can make a mainframe
in a micron -- and I might add that the clock rate estimate for these mechanical
computers is moderately faster than a contemporary CRAY (about a gigahertz
clock rate), though technological progress will shortly give us faster computers.
Take a cubic centimeter volume, allocate half of it to cubic-micron mainframe
devices and the other half to cooling and communication channels, and you find
you have room for about 0.5 trillion computers, possessing far more computational
capacity than has been built in the world to date.
Cooling is a problem, but it is possible. If you have these devices executing
one "gram-mole" of gate operations per second, which is a chemist's idea
of a round number (6x10^23), the result is about four kilowatts
of power, which can be dissipated by pouring in a couple of liters of cool
water per minute and getting a couple of liters of warm water out.
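The cooling figure above can be sanity-checked. The per-operation energy falls out of the stated numbers; the water heat capacity and flow rate below are my assumed round values, not figures from the talk.

```python
# Back-of-the-envelope check of the four-kilowatt cooling figure:
# one mole of gate operations per second at roughly 6.7e-21 J each,
# carried away by two liters (2 kg) of water per minute.
power_w = 4000.0
ops_per_s = 6.0e23            # one mole of gate operations per second
energy_per_op_j = power_w / ops_per_s
print(energy_per_op_j)        # ~6.7e-21 J per gate operation

flow_kg_per_s = 2.0 / 60.0    # two liters per minute
c_water = 4186.0              # J/(kg K), specific heat of water
delta_t = power_w / (flow_kg_per_s * c_water)
print(round(delta_t, 1))      # 28.7 -- cool water in, warm water out
```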
Another result from this exercise is that one can make a very crude estimate
of the computational capacity of the human brain. This crude estimate, which
is argued to be grossly generous to the brain, results from the idea of
considering a synapse operating for a millisecond to be equivalent to a
gate operation during a clock cycle. This says nothing about software, however.
A computer with all the computational power one could want can do nothing
without the proper software. Comparing only raw computational capacity,
however, shows this device to be in the mega-brain range. There are all
sorts of conceptual problems involved in artificial intelligence, artificial
neural networks, and trying to make machines think. There are obvious software
difficulties involved, but I would like to point out that almost all the
work done to date has been on machines that are not even in the monobrain
range, but instead in the microbrain range. It is remarkable that anything
has been accomplished at all. It is entirely possible that sliding up several
orders of magnitude in raw computational power might make the task of AI
considerably easier.
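The "mega-brain" comparison above can be sketched numerically. The brain figures below are assumed round numbers of my own (roughly 1e14 synapses, each treated as one gate operation per millisecond), in the spirit of the deliberately generous estimate described in the talk.

```python
# Sketch of the "mega-brain" arithmetic: treat one synapse event per
# millisecond as one gate operation per clock cycle, then compare raw
# capacity with the mole-of-gate-ops-per-second device above.
synapses = 1e14
brain_ops_per_s = synapses * 1000.0  # one event per ms per synapse
device_ops_per_s = 6e23              # one mole of gate ops per second

ratio = device_ops_per_s / brain_ops_per_s
print(ratio)  # ~6e6 -- millions of brain-equivalents of raw capacity
```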
J. Assemblers and Industrial Production
This whole cubic centimeter is something "big" in the field of
nanotechnology, and this raises the question of making large things. An
example of a lot of small things is a "paste" of E. coli
bacteria, a paste that started out as a single genetically engineered E.
coli some modest number of days previously. Exponential growth can take
you from the scale of a single nanomachine (a bacterium) to planetary masses
(if you could supply the necessary raw material and energy and get rid of
the waste heat) in a matter of days. This is in terms of raw reproductive
capacity. If you have devices that can reproduce themselves, you have a
very powerful industrial technology base for making things. In fact, if
you look at assemblers, they are very well positioned to do just that.
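The "planetary masses in a matter of days" remark above is simple doubling arithmetic. The starting mass, doubling time, and target mass below are my assumed round numbers (a bacterium-scale machine, the roughly-1000-second replication time mentioned later in the talk, and an Earth-scale mass).

```python
import math

# How quickly unconstrained doubling covers the size range described:
# from one bacterium-sized machine (~1e-15 kg) to a planetary mass
# (~6e24 kg) at one doubling per ~1000 seconds.
start_kg = 1e-15
target_kg = 6e24
doubling_s = 1000.0

doublings = math.log2(target_kg / start_kg)
days = doublings * doubling_s / 86400.0
print(round(doublings))  # ~132 doublings
print(round(days, 1))    # 1.5 -- a matter of days indeed
```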
Previously we have shown bearings and the like that would make up the parts
of an assembler, and discussed some general aspects of the software problem.
Let us consider assemblers proper. The kinds of advanced nanotechnology
assemblers I talk about will be very much like industrial robots: they will
be special purpose machines. How will they be built? Think of a rigid, jointed,
programmable "thing", like an industrial robot, but with parts
roughly a million-fold smaller, thus roughly a million fold faster in characteristic
frequencies, able to do a million operations per second. Now, take that
technology which is 10^18 times more compact and a million times
faster, and let it work not with prefabricated pieces, but with the building
blocks of matter, atoms. One can build anything that can be built with atoms,
if one is careful about unit operations and chemical reactions.
The devices would operate in a world that we often forget is rich in identical,
prefabricated parts. In Japan today, there is at least one factory where
robots assemble parts into more robots of the same type. This is not a process
that gives a tremendous economic advantage because one still has to make
the parts and most of the expense is in making the parts rather than in
the assembly. But instead of having to have a world of factories and mines
and so on to make the parts, what if the parts are abundant molecules or
can be had at the price of industrial chemicals? Under those conditions
it is much easier to imagine a device making a copy of itself, as we have
already seen the robot factory doing as a proof of concept. Rough calculations that
I went over in my class at Stanford (Spring 88) indicate that a device
like this can be made that can build a copy of itself in something like
a thousand seconds. That is about the time it takes a bacterium to replicate.
Actually, the original figure was 100 seconds, but some conservative factors
were added to the calculation.
If you can take raw materials, and an assembler, and end up with a lot of
assemblers, and have the assemblers work in parallel, then nanotechnology
should be able to make big things.
The Eiffel tower was once the tallest structure that human beings had built
on the face of the planet. In later years, this steel structure had to be
retrofitted with warning lights for aircraft. If you were to build an analogous
structure, not out of steel, but out of well bonded carbon structures like
diamond, and ask how tall you could make that tower, the result is impressive.
Aircraft warning lights would again be needed, but they would be around
the base. Around the sides and top, you would need a traffic control system
for dealing with possible satellite collisions. Such a tower would extend
well beyond the atmosphere.
This structure is rather larger than a redwood tree, which is already built
by molecular machines, but on a log scale, it is not that much larger. That
is probably not the best way to get into space, however. It gets you out
of the air where you can see a lot, and there are advantages to that, but
it doesn't get you going anywhere. To do that, spacecraft such as the shuttle
have been used, with mixed success.
K. Nanotechnology in Space
What are the implications of nanotechnology for things like spaceflight?
Today, spacecraft are a fairly marginal technology. We are pushing the limits
of the strength of the materials that we can fabricate reliably into structures
of this sort. We are pushing the limits of the reliability of operations,
because we require vast numbers of people making things with small margins
of safety, so that a small flaw can destroy the entire spacecraft. The
amount of labor required is incredible. By comparison, the input of raw
materials is trivial. The energy required, by the present standards of launch
cost, is essentially negligible.
If you are able to make complex structures, atom by atom, you are not going
to be sticking human hands into the process. There is no point in sticking
your hands into a bunch of assemblers. Therefore, there is not much role
for human labor, so the labor cost is very small. An analogy is the production
of wood. In wood production, one takes solar energy (and nanomechanisms
can certainly build effective solar collectors), and abundant raw materials.
One can build things out of carbon (which is already too abundant in the
atmosphere) possessing something like 50 times the strength-to-weight ratio
of what the space shuttle was built from, and produce those things in intricate
shapes for a cost per pound on the order of, perhaps, cordwood. If that
can be done, then spacecraft can be built that can fly much higher, faster,
and further than anything that can be built today. In addition, costs will
be vastly lower, margins of safety will be substantially higher, and reliability
much greater. I emphasize such things because there is nothing "small"
about nanotechnology. Some of its consequences will be far removed from
the domain of small things.
One of the consequences will be that the space frontier will be opened.
If there is routine, inexpensive access to space, materials among the asteroids
can be used. Out there are enough raw materials to bury all of earth's continents
kilometers deep. That means that what is out there is an awful lot compared
to what we are using down here. Space is rich in raw materials even if you
just use the rubble left over from the formation of the planets.
In space, there is also the sun -- our very own nuclear furnace. The sun
puts out every second a substantial fraction of a kilogram of energy per
capita for everyone in the human race. That means there is a lot of energy
out there, most of which plunges past the planets into interstellar space.
If there is access to materials in space, and you have already amortized
your R&D costs, then you can cheaply produce hardware that produces
more hardware at a very high rate.
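The "fraction of a kilogram of energy per capita per second" remark above can be checked with E = mc^2. The solar luminosity and the roughly-1989 world population below are my assumed round numbers.

```python
# Checking the sun's mass-energy output per person per second:
# convert solar luminosity to a mass rate via E = m c**2, then
# divide among about five billion people.
luminosity_w = 3.8e26
c = 3.0e8                                   # m/s
mass_rate_kg_per_s = luminosity_w / c**2    # E = m c**2
per_capita = mass_rate_kg_per_s / 5e9
print(round(mass_rate_kg_per_s / 1e9, 1))   # 4.2 -- billions of kg/s total
print(round(per_capita, 2))                 # 0.84 -- kg/s per person
```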
Today, NASA's idea of an ambitious thing to build in space is a few tin
cans in orbit. A more ambitious idea, that was discussed in the 70's, and
in the light of this production capability becomes modest, is that of building
very large, inhabitable structures in space: cylinders kilometers across,
with sun-light brought in by mirrors through large windows, air, and the
feel of gravity underfoot, resulting in a pleasant environment inside.
There is a book coming out in a few months, part of the Time-Life series
on computers, that is going to have a picture essay on nanotechnology. It
pictures a space settlement being constructed by assemblers from asteroidal
materials. The size of the settlement is a thousand kilometers in diameter.
That is the sort of thing one can do with superior structural materials.
The marginal cost of building such structures using nanotechnology, with
respect to human labor and terrestrial resources, will be essentially zero.
The greatest issue will be R&D cost. Further, if you have a general
way of applying assemblers to making things, then you can specify the size
and shape of those things, and thus even that cost may not be so very high.
That makes the "world" look like a very different place, if suddenly
the "world" is larger than the earth, because most of the "world"
isn't the earth, by many orders of magnitude.
L. Potential for Abuse
There is another side to these technologies, also discussed in Engines
of Creation. In the illustration, you can see trees silhouetted
against the early stages of the expansion of a nuclear fireball. Nanotechnology
has nothing to do with nuclear technology. There is no transmuting of
nuclei as the alchemists tried to do, and as is done by nuclear technologists.
Nanotechnology only does what chemists do: rearrange molecules. Nonetheless,
it is a technology where the principle of exponentiation can be brought
to bear: nuclear explosions come from an exponential proliferation of neutrons
in a critical mass of fissile material. Here, we are talking not about an
exponential growth of destroying things and releasing energy, but instead
a potential exponential growth of constructing complex artifacts. In its
way, that is a far more powerful capability. Powerful not only for medicine,
not only in a higher standard of living for everyone on the planet than
we have in this country today, but it can also be used to produce very high
performance weapon systems in vast quantities and virtually overnight --
once you have worked out the prototype and know how to build one. It could
also be used to make computers that are smaller than bacteria, and thus
make programmable germs for germ warfare. That is an ugly possibility.
The reasons for looking forward in time to this technology are not just
for the "Gee whiz, it will be wonderful" aspects but, also, that
there are uses we would like to prevent and/or control. That is the core
of the challenge to public policy.
Returning to the example of the sun, again, a nuclear fireball. In the foreground
of this picture, we have self-replicating, solar-powered, molecular machinery
systems -- plants. It is clear these things can come together to make a
pleasant world. What is at stake is this: a very large collection of atoms
in space known as Earth. It is the difference between having a world that
is polluted and having a world that is clean; the difference between dying
of cancer and having a healthy body; the difference between having a biosphere
and not having a biosphere. Nanotechnology will enable us to achieve the
cleanup of toxic wastes by taking molecules apart and doing things with
them. It will also enable us, if the wrong kind of replicator or weapon
system is built, to destroy the biosphere. It is something that can be used
to extend human life or destroy it.
M. The Emergence of Nanotechnology
Why should one expect something as outrageous as nanotechnology to emerge
in the real world and expect people to have to deal with it, and possibly
not in a time frame that is generations away, as I think some people would
very much like to believe?
It is clear that the basic principles of nanotechnology work, because they
are demonstrated by biology and we are alive. We know that, whatever the
paths to "there", there is a "there" there -- the
possibility of nanotechnology.
There are many paths to nanotechnology. There is no one problem that can
block progress in this direction because there are so many ways of making
that progress. I have outlined several; there are hybrids among them; there
are many variations on those themes.
The goal of nanotechnology does not require the perception of a vast payoff
in the future to entice people to pour research and development money into
some new direction to make this happen. Payoffs along the way, things like
better pharmaceuticals, scientific understanding, enzymes for industrial
processes, and so forth are already leading people to learn how to build
complex molecular structures and build proteins and so on. Today, researchers
in those fields are increasingly seeing that what they are doing is leading
towards nanotechnology. The next time they pick a research direction, it
is likely to be biased towards research that leads in this direction. Even
without that, one would still get there, though perhaps with less warning.
Towards the end of the development paths, there are potentials for tremendous
medical, commercial, and military applications. When you think about the
decision makers in the technological nations of this world, it is very hard
to conceive of even a single decision maker, let alone a majority, that
is not motivated by one or more of the goals of greater wealth, longer,
healthier lives, and either defensive or offensive capability. In a world
that holds many competing companies and governments, it is very hard to
imagine anything short of a global catastrophe that would stop people from
continuing along one or more of these many paths with the short term payoffs,
to finally lead to the kinds of capabilities that have been described.
Today, we are trying to learn how to design improvements of molecules that
we already know how to build. Several of these paths begin with designing
polymer molecules that fold up as proteins do. This process is underway.
I believe that is an area that will see increasing commercial activity.
Today, there is a need for software tools of greater capability and/or lower
cost for doing modeling of complex molecular structures in a way that is
useful for computer aided design in a molecular world. I think that the
combination of improved design software, along with improved methodologies
for designing, improving, and characterizing these molecules will lead to
an increasing range of short term applications. Enzymes, pharmaceuticals,
and molecules that do interesting things from the point of view of getting
information about what is happening in the molecular world, will be some
of the products.
Activity will increase over the years and will blend bit by bit into programs
to build complex molecular machines that can build better molecular machines.
At some point the result will be nanotechnology. And I hope we are ready.
N. Questions and Answers
AUDIENCE: You spoke of the benefits of nanotechnology, the development of
materials, and examples of nanomachines that occur in nature: it would appear
that energy conversion is one of the most critical technological challenges
posed by nanotechnology, along with the software problem. Do you plan to
address those problems in detail in your next book?
E. DREXLER: Certainly in more detail than I did in Engines
of Creation. In my own thinking on these matters, in an exploratory
engineering vein, where the goal is to come up with relatively simple, understandable
things that are still general enough that they support arguments for a wide
variety of capabilities, I found myself thinking in terms of DC power, as
the basic "energy currency" for running these systems. You can
get DC power, literally, by plugging into a wall circuit, and from there
you can run a motor. The electrostatic motor that I showed was designed
to run on five volts. You can also get DC power by converting chemical energy
into electrical energy, as is done in fuel cells -- which on a very small
scale would have a very high power density, since this conversion is a surface
effect and nanostructures would have a high surface to volume ratio. Finally,
one can imitate plants. Plants convert sunlight into chemical energy; they
could convert it equally well into electrical energy with some modest modifications
of the kind of processes that go on in these molecular structures. They
do that with an efficiency that is now typically around a couple of percent.
We already know how to do that with 30% efficiency, even without nanotechnology,
in artificial structures. We should be able to do even better with molecular
devices. Nanotechnology will also make energy cheap.
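The surface-to-volume point behind the fuel-cell remark can be checked with a few lines of arithmetic. This is an editor's illustrative sketch, not a calculation from the talk: it assumes a cube-shaped converter whose power output is proportional to surface area, so power density grows as the device shrinks.

```python
# Surface-to-volume scaling for a surface-limited energy converter.
# Illustrative sketch: the cube geometry and the specific sizes are
# assumptions for illustration, not figures from the talk.

def surface_to_volume(side_m):
    """Surface area / volume for a cube of the given side length, in 1/m."""
    return 6.0 * side_m**2 / side_m**3  # simplifies to 6 / side

for side in (1e-2, 1e-6, 1e-9):  # 1 cm, 1 micron, 1 nm
    print(f"side = {side:.0e} m  ->  S/V = {surface_to_volume(side):.1e} 1/m")

# Shrinking from 1 cm to 1 nm raises S/V by a factor of 1e7, which is why
# a surface-effect conversion like a fuel cell's can reach a very high
# power density at the nanoscale.
```

The factor of ten million in surface-to-volume ratio between centimeter-scale and nanometer-scale devices is the whole argument in one number.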
AUDIENCE: Your book stated that you expected the breakthrough to take place
within 10 to 50 years, approximately. Near the end of your lecture you stated
that short of a global catastrophe, you expected this breakthrough to happen.
It looks like there has to be an enormous amount of work done before we
cross this threshold. Is it conceivable to you that the knowledge and motivation
would continue if a number of world governments became so powerful and oppressive
that they were able to halt this research?
E. DREXLER: I said that one of the things that could block nanotechnology
was a global catastrophe. That was one of the catastrophes I had in mind.
I think, however, that if you look at the time frame we are talking about,
which is measured in small numbers of decades, a number which I can argue
for as being reasonable, if one's sense of this is at all correct, and if
you look over the history of the past few decades, what has happened in
this century is that people have figured out how to do organized research
and development, and it has spread. Now we have more and more countries
that have R&D labs. Korea has a goal of becoming a power in biotechnology
in the 21st century and they are actively working on it. Their educational
system is superior to ours. If you ask what governments are going to be
the dominant world powers a few decades out, I think it will be the governments,
and supporting culture and ideology, that are effective in developing technology.
Any group, or country, that says, in effect, "we are going to pull
back", unless everybody else by some miracle did it simultaneously,
simply pulls themselves out of the race. Soon, they would not be effective
anymore, and one would shift one's attention to those that are ahead in the race.
AUDIENCE: I was not thinking that governments would intentionally pull back
from this. Suppose we envision the governments controlling and consuming
more and more resources of their respective economies, creeping up on the
goose that is laying the golden egg. It might not require a huge catastrophic
event. It might gradually happen.
E. DREXLER: This gets into speculations about future social systems and
clearly a wide range of things are possible, but current trends actually
seem to be away from that. I personally hope those trends continue.
AUDIENCE: In response to the previous question, if we look at history in
this century, research and development have been enormously accelerated
by the competition between nations. If you have nations rising to achieve
military power, the urge to create weapons to stop them and technologies
to beat them becomes irresistible. Every nation is running scared of every
other that it thinks is going to beat them. My analysis is that research
and development are accelerating, not declining.
E. DREXLER: That matches my evaluation.
AUDIENCE: You said you expect the progression to be from protein machines
to non-protein machines. Do you think it possible to shorten or skip that
intermediary step? Could you design machines directly from say m-RNA to
fold together to make something useful? Instead of having the t-RNA attach
to an amino acid, modify it to attach to a different set of molecules to
be directly used as a part in building a machine?
E. DREXLER: What you are talking about is essentially re-engineered ribosomes
and t-RNA so that you could use this programmable machine tool that we already
have to make a different kind of polymer. While that is a fundamentally
sound notion, in practice it would be enormously difficult, because the
existing systems are adapted so well to doing just what they do. It is plausible,
though, that one might build systems that do the sort of thing that you
are talking about, but probably designing them from scratch, by "looking
over the shoulder" at the way ribosomes do it. That kind of intermediate
technology, where you have molecular machines that are producing not generalized
structures, but superior polymers for making molecular machines, I think
is a very important intermediate step. One way of getting the information
to those machines, instead of programming them by "tape", which
requires some fairly complex mechanisms to read, might be to have a molecule
that might be stepped through a series of operations by changing the composition
of the chemical bath. Biology can't do that; chemistry does do that a lot.
Think of it as a halfway house between solid phase synthesis, which is how
proteins and nucleic acids are made, in a more or less automatic way, and
assemblers. It would be a simple molecular machine for making the chain.
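The "halfway house" idea above, a machine stepped through its operations by changing the chemical bath rather than by reading an internal tape, can be caricatured as a state machine whose program is the sequence of baths. A toy sketch; the bath names and operations are purely hypothetical, chosen only to show the control scheme:

```python
# Toy model of a molecular machine "programmed" by its environment:
# each bath composition triggers exactly one operation, so the sequence
# of bath changes *is* the program. All names here are hypothetical.

OPERATIONS = {
    "bath_A": "add monomer X",
    "bath_B": "add monomer Y",
    "bath_C": "deprotect chain end",
}

def run_baths(bath_sequence):
    """Return the list of operations performed for a sequence of baths."""
    steps = []
    for bath in bath_sequence:
        steps.append(OPERATIONS[bath])
    return steps

program = ["bath_A", "bath_C", "bath_B", "bath_C", "bath_A"]
for step in run_baths(program):
    print(step)
```

The point of the scheme is that no tape-reading mechanism has to exist inside the machine; the experimenter's pump schedule carries the information, much as in solid phase peptide synthesis.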
AUDIENCE: I am interested in what might be called "analyzers",
machines able to look at a material and tell what the composition is and
make a tape for the assembler to replicate. One could in principle create
the "tape" and read it at another location to regenerate the organism
or material or person.
E. DREXLER: That has also been suggested as a means of transportation. Those
things are not discussed in Engines of Creation, partly because
discussion of them in my experience generates more heat than light. However,
the general class of capabilities that you are pointing to is going to be
an important one for people to be concerned with.
AUDIENCE: You discussed in your book the "germ" theory of information,
memes. "Nanotechnology" is essentially a meme. One of the things
I have noticed is that when I mention your book and the concepts in it to
rather intelligent people, the first approach is one of fear, very definitely:
won't this make human beings obsolete?
E. DREXLER: If human beings are in some fashion in charge and don't consider
themselves to be obsolete, then the answer is 'no'. If that condition is
not met, then the answer is 'yes', and things are either very awful or very
strange, depending on whether it's involuntary or voluntary.
In considering the implications of nanotechnology, I would like to distinguish
two phases in the development of nanotechnology. Phase I involves
the ability to make very small computers and assemblers, things that are
no more complex than we already know how to make on a macroscopic level
but are simply implemented on a molecular scale. Phase II involves
design and software capabilities far beyond what we can do today.
Nanocomputers, assemblers, and even replicators are in the first phase.
I believe a replicator is about as complex as a modern automated factory
even though it has the advantage of working in an environment rich in pre-fabricated
parts. Relatively simple cellular repair machines are also part of the first phase.
Things like very ambitious cell repair, AI (which is what I think of when
you speak of making people obsolete), and very ambitious re-working of the
human body are part of the second phase. The things you are pointing to
are part of Phase II Nanotechnology -- nanotechnology combined with very
powerful design capabilities, probably in a world that has real artificial intelligence.
AUDIENCE: How quickly would this happen? Phase I could very rapidly move
to Phase II.
E. DREXLER: As I discuss in Engines
of Creation, if you can build genuine AI, there are reasons to
believe that you can build things like neurons that are a million times
faster. That leads to the conclusion that you can make systems that think
a million times faster than a person. With AI, these systems could do engineering
design. Combining this with the capability of a system to build something
that is better than it, you have the possibility for a very abrupt transition.
This situation may be more difficult to deal with even than nanotechnology,
but it is much more difficult to think about it constructively at this point.
Thus, it hasn't been the focus of things that I discuss, although I periodically
point to it and say: 'That's important too.'
AUDIENCE: One of my big concerns is not that human beings will become obsolete,
but that a lot of human institutions will become obsolete. I have tried
to conceive of major social institutions that could deal with full-blown
nanotechnology. I don't see any.
E. DREXLER: The Foresight Institute
is intended to encourage people to think about these matters. I expect that
most of the high quality debate will eventually be in media with fast publication
of little bits of ideas that can be tied together and criticized, i.e., hypertext publishing.
I think that there are some basic principles of checks and balances that
work fairly well in some of the democracies. I think that something in the
direction of these principles can be applied to the very important problems
that you have been thinking about, and I encourage people to keep on staring
at these problems to see what can be done. The way that huge problems can
be made manageable is to try to whittle them down by finding partial solutions
here and there.
AUDIENCE: A lot of what you have been discussing is based on very small
computers to be made possible by nanotechnology. How are you going to transfer
this information to and from these very small computers?
E. DREXLER: You can take the "I/O problem" and separate it into
the "I problem" and the "O problem."
If you can build a nanocomputer, you can certainly build a wire that is
thin at one end to bond to the nanocomputer and fat at the other to bond
to a microchip. If a wire ends in a plate that is 10 nm across and separated
from another plate by a few nm, and the other plate is kept at ground potential,
and we move the voltage on the first plate from ground up to a few volts
and back down again, an electrostatic field appears and disappears between
the plates of this little capacitor. The numbers suggest that you should be able to
generate the force necessary to yank a rod in the nanocomputer in a way
mechanically compatible with the operation of the system. Thus you can go
from a standard 5 volt electronic signal to a logic state on a nanocomputer.
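The input scheme can be sanity-checked with the ideal parallel-plate force formula, F = ε₀AV²/(2d²). The 10 nm plate and 5 V drive follow the talk; treating the plates as an ideal capacitor and taking the gap as exactly 2 nm ("a few nm") are simplifying assumptions:

```python
# Electrostatic force between the plates of the input capacitor, using
# the ideal parallel-plate formula F = eps0 * A * V^2 / (2 * d^2).
# Plate size (10 nm) and drive (5 V) follow the talk; the ideal-capacitor
# treatment and the exact 2 nm gap are simplifying assumptions.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

side = 10e-9      # plate edge length, m
gap = 2e-9        # plate separation, m ("a few nm")
volts = 5.0       # standard logic level

area = side**2
force = EPS0 * area * volts**2 / (2 * gap**2)
print(f"force on plate: {force:.2e} N")  # on the order of nanonewtons
```

A few nanonewtons is ample on this scale, which is what makes yanking a logic rod with a capacitor plate plausible.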
For output, you can have two parallel plates and look at the current flowing
between them, which varies as a function of distance. We know from the scanning
tunneling microscope that conventional electronics can detect changes in
the conductivity of a circuit that result from the interaction between a
surface and the single atom at the end of the STM needle. Two plates 10
atoms on a side give you a signal roughly a factor of 100 larger than what the STM
can detect now. The plate separation can be changed by moving a rod, and
now we have an output channel.
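The output scheme leans on the exponential sensitivity of tunneling current to gap, the same effect the STM exploits. A minimal sketch using the standard rule-of-thumb form I = I₀·exp(−2κd); the decay constant κ ≈ 10¹⁰ m⁻¹ is a typical textbook value for a few-eV barrier, and I₀ is an arbitrary normalization, not a figure from the talk:

```python
# Exponential dependence of tunneling current on electrode separation,
# the effect the proposed output channel reads out. kappa ~ 1e10 1/m is
# a typical value for a few-eV barrier (an assumption, not a figure from
# the talk); I0 is an arbitrary normalization.

import math

KAPPA = 1.0e10  # inverse decay length of the tunneling wavefunction, 1/m

def tunnel_current(gap_m, i0=1.0):
    """Tunneling current (in units of i0) across a gap of gap_m meters."""
    return i0 * math.exp(-2.0 * KAPPA * gap_m)

# Moving a logic rod so the gap changes by 1 angstrom changes the current
# by a factor of e^2, i.e. about 7x -- easily detectable electronically.
ratio = tunnel_current(1.0e-9) / tunnel_current(1.1e-9)
print(f"current ratio for a 1 angstrom gap change: {ratio:.1f}")
```

A sevenfold current swing per angstrom of rod motion is why a mechanical logic state can be read out by conventional electronics.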
AUDIENCE: What about organic luminescence?
E. DREXLER: You can also use light for both input and output. A limitation
is that the focal spot of a light beam is large compared to a single device.
You could still get multiple channels by going to multiple frequencies.
I haven't looked at this approach quantitatively.
AUDIENCE: What about the accidentally destructive aspects of nanotechnology
as well as the beneficent and the malicious applications of nanotechnology?
As a software writer, I've noticed that I rarely write a program that doesn't
have bugs the first time.
E. DREXLER: The problem you are raising is of accidents in design that lead
to devices that run amuck in a destructive way. I feel that I didn't address
this problem as well as I might have in Engines of Creation.
Since writing that, I've come to the conclusion that if people are really
concerned about such a thing happening, they will try to avoid a situation
in which a small accident can produce a run-away self-replicating machine
that gobbles up the world.
Here's an example of how a very little bit of care could eliminate that
problem. Never build a replicator that's anything like a replicator
that could survive in nature. In biotechnology, people are tinkering with
cells that have evolved to live in nature. That has the flavor of a dangerous
thing to do. The danger, nevertheless, was very over-rated in the early
days due to a lack of understanding of bacterial ecology. If instead you
are working with devices that are no more like something that could live
freely in nature than is a piece of machinery, the danger of run-away growth
is non-existent. Here is a metaphor: Imagine that you design a replicator
that works in a vat of industrial chemicals, that requires for its oxygen
source hydrogen peroxide, and for its carbon source, some petroleum derivative.
Such a thing would have an obligatory requirement for those things in the
same way that an automobile has a requirement for gasoline and transmission
fluid. To have something like that accidentally be able to live in nature
would be like having your mechanic slip up when working on your car with
the result that the car could go into the woods and suck sap from trees.
This realization has made me feel much better about accidents, but very
scared about abuse.
AUDIENCE: One thing that might help is if for every team working to build
something, you had another team working to figure out every way in which
it could get loose.
E. DREXLER: That might be useful in some cases; in others it shouldn't be necessary.
AUDIENCE: There is a problem with your last argument. The nature of the
human spirit is to create organisms that can go and live by themselves.
They're normally called 'children' but also are called 'computer viruses'.
E. DREXLER: Yes. Again, I believe that people deliberately doing these sorts
of things is what we have to watch out for.
AUDIENCE: I feel that the competitive spirit tends to drive a lot of things.
I was wondering what your estimate is of the competition among companies
and among countries for nanotechnology in the near term.
E. DREXLER: At present, the state of competition with respect to nanotechnology
per se is essentially nonexistent. There are a few companies that have expressed
an interest in putting nanotechnology on their research agendas. I have
heard essentially nothing from the government. Nanotechnology will come
out of other areas that are the focus of intense competition because of
short term pay-offs. I expect that in coming years, as we see the transition
from people reacting to nanotechnology as a wild, unworkable idea to people
saying (and this is already starting to happen) that it's obvious and not worth
talking about, companies and countries will recognize nanotechnology
as one of a very few key research priorities. You may well see something
like the Manhattan project.
If history is any guide, it is likely that such programs will be competitive
programs. I would like to see them be cooperative programs across as wide
a range as possible of decent governments, which hopefully will embrace
This page is part of Jim's Molecular Nanotechnology Web, copyright ©1996
James B. Lewis Enterprises. All rights reserved.
Last updated 24June96.
The URL of this document is: http://www.halcyon.com/nanojbl/NanoConProc/nanocon1.html