Nanotechnology, arguably one of the most significant breakthroughs in modern history, holds a fundamental purpose: to produce efficient, cost-effective products. Over the years, nanotechnology has grown into a highly distinguished scientific field. However, the capability of producing machines at the molecular scale, and with high precision, does not come cheap. In fact, such production can carry a sky-high price tag. So, how efficient could this kind of production be, compared with current manufacturing technologies that are, to some extent, already doing their jobs rather well?
Nanotechnology encompasses several different processes that offer relatively high precision and quality compared to macro-scale processes. Most of these follow a top-down approach: a bulk substance is at hand, and what we generally do is a form of sculpting and carving that material into the desired product. These processes have given us many important and valuable products, among them miniature transistors, microprocessors, hydrophobic and hydrophilic materials, and protective coatings of high strength and resistance. The technology has evolved rapidly, is still evolving, and holds remarkable promise. Nonetheless, it still requires time, effort and money – a great deal of each.

The other family of processes follows a bottom-up methodology: as the name implies, objects and artefacts are assembled from the molecular and atomic level up to the bulk level of the material. This can be done using positional assembly of molecular components to build relatively large constructions. But since the assembler would be building a macro-scale structure molecule by molecule, the process would take an extremely long time. The quickest thought that comes to mind is to increase the number of assemblers; but given the scale difference between a molecule and a macro-scale object, we would need a massive number of assemblers – which is not feasible to manufacture directly.
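To put rough numbers on that, here is a minimal sketch (the figures are purely illustrative) comparing the two strategies: a factory turning out assemblers one at a time, versus assemblers that each build one copy of themselves per generation.

```python
# Illustrative arithmetic (hypothetical numbers): how long does it take to
# field a mole-scale workforce of assemblers?

TARGET = 6.02e23          # roughly one mole of assemblers (hypothetical goal)

# Exponential route: every assembler builds one copy of itself per
# generation, so the population doubles each generation.
generations = 0
population = 1
while population < TARGET:
    population *= 2
    generations += 1

# Linear route: one external machine builds assemblers one by one,
# so it needs TARGET build cycles.
linear_cycles = TARGET

print(generations)        # 79 doublings already suffice
print(linear_cycles)      # versus ~6e23 sequential build cycles
```

Seventy-nine doublings versus six hundred sextillion build cycles: this is the exponential advantage that makes self-replication, rather than sheer parallelism, the natural answer.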
This gives rise to an alternative notion and a new perspective on such manufacturing processes: the realization of a concept called self-assembly, and more particularly self-replication. Self-replication is, in general, the production of an identical copy of a system as a result of that system's own behaviour. The best way to comprehend and appreciate the concept is to look at how nature does it. One simple example is the potato. Put a potato in damp soil, give it some air and sunlight, and you get more potatoes. This looks very straightforward, and it makes it easy to understand why potatoes are cheap: potatoes are self-replicating. Potatoes contain sophisticated molecular “machines”, built from a great many proteinaceous and genetic units, that drive this whole process of self-replication; yet their cost is so low that we do not even think about this marvellous natural phenomenon while eating French fries, for example. Consider another example: many microorganisms are self-replicating. Bacteria replicate, and they do it very efficiently; some can even carry out selective or preferential replication.
All we have to do is draw inspiration from nature as we design and develop molecular fabrication systems. Inspiration does not mean copying nature, but rather grasping the natural replication concept and applying it to our manufacturing systems, bearing in mind the limits and boundary conditions of our design. Self-replication is but one of many abilities that living systems exhibit; copying that one ability in an artificial system will be challenging enough without attempting to emulate their many other remarkable abilities.
In his report “Self-Replication and Pathways to Molecular Nanotechnology”, J. Storrs Hall, a research fellow at the Institute for Molecular Manufacturing, states that complete molecular manufacturing relies on two important capabilities: (1) positional assembly and manipulation at the molecular level, and (2) self-replicating construction. A self-replicating technology does not necessarily mean a specific self-reproducing machine. More generally, it means a set of manufacturing capital equipment that includes, as a subset of its output, everything necessary to make more of itself.
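Hall's broader definition lends itself to a simple check. The sketch below (the machine names and production map are hypothetical) treats a set of equipment as self-replicating when its combined output covers every member of the set – note that no single machine has to copy itself.

```python
# A sketch of Hall's set-based definition: a set of capital equipment is
# self-replicating if its combined output includes every member of the set.

def is_self_replicating(equipment, can_make):
    """can_make maps each machine to the set of things it can produce."""
    output = set()
    for machine in equipment:
        output |= can_make.get(machine, set())
    return equipment <= output  # is every machine in the set also an output?

# Hypothetical shop: neither machine can copy itself, but each can build
# the other, so the *set* reproduces.
can_make = {
    "mill":  {"lathe"},
    "lathe": {"mill"},
}
print(is_self_replicating({"mill", "lathe"}, can_make))  # True
print(is_self_replicating({"mill"}, can_make))           # False
```

The point of the toy example is exactly Hall's: self-replication is a property of the whole equipment set, not of any individual machine in it.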
The study of artificial self-replicating systems was first pursued by the Hungarian-American mathematician and scientist John von Neumann in the 1940s. Von Neumann set himself the goal of showing what the logical organization of a self-replicating machine might be. He had in mind a full range of self-replicating machine models he intended to explore: the kinematic machine, the cellular machine, the neuron-type machine, the continuous machine, and the probabilistic machine. He concluded that the following characteristics and capabilities were sufficient for machines to replicate without degeneracy of complexity:
Logical universality: the ability to function as a general-purpose computing machine able to simulate a universal Turing machine (an abstract representation of a computing device, which itself is able to simulate any other Turing machine). This was deemed necessary because a replicating machine must be able to read instructions to carry out complex computations.
Construction capability: to self-replicate, a machine must be capable of manipulating information, energy, and materials of the same sort of which it itself is composed.
Constructional universality: in parallel to logical universality, constructional universality implies the ability to manufacture any of the finitely sized machines which can be formed from specific kinds of parts, given a finite number of different kinds of parts but an indefinitely large supply of parts of each kind.
Self-replication follows immediately from the above, since the universal constructor must itself be constructible from the set of manufacturable parts. Von Neumann’s deductions on the logic of replication in the late 1940s and early 1950s were highly abstract and known only to the few specialists who had heard his lectures. In 1956, the mathematician and scientist Edward F. Moore offered the first known suggestion for a real-world application of kinematic self-replicating machines, the first in a long line of commentators who would expound upon the powerful economic potential of exponential growth among populations of such devices. Moore called his proposed machines “Artificial Living Plants”.
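Von Neumann's key logical move was that the machine's stored description is used twice: once interpreted, as instructions for building, and once copied blindly, as data – which sidesteps the infinite regress of a blueprint needing to contain a blueprint of itself. A classic Python quine (offered here purely as a software analogue, not von Neumann's construction itself) shows the same trick in two lines:

```python
# The string s plays the role of the stored description. In `s % s` it is
# used once *interpreted* (as the template for the program text) and once
# *copied* verbatim as data (%r embeds the string, quotes and all).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the two statements prints exactly those two statements: the program outputs its own source, just as the universal constructor outputs a machine carrying the same description it was built from.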
The first known physical implementation of simple machine replication was reported by the British geneticist Lionel S. Penrose at University College, London, and his son, physicist Sir Roger Penrose at Bedford College, London, in 1957. They started by defining some ground rules for such efforts: “A structure may be said to be self-replicating if it causes the formation of two or more new structures similar to itself in every detail of shape and also the same size, after having been placed in a suitable environment. One of the new structures may be identical with the original one; alternatively, the original structure may be destroyed in the process of forming two new replicas. Certain conditions are added which exclude all well-known types of physical or chemical chain reactions. First, the replicating structure must be built by assembling simpler units present in the environment. Secondly, more than one design can be built from the same set of units though the only replicating structure that can be automatically assembled will be one exactly copying a previously existing structure. The pre-existing structure is known as a seed.”
By the late 1970s, the idea of self-replicating robots had been suggested in various popular writings and was beginning to be mentioned, however briefly, in more technical publications. Recognizing the tremendous potential for advanced automation in future space mission planning and development, a NASA study group chaired by the noted astronomer Carl Sagan was organized in 1977 and presented a report on Machine Intelligence and Robotics. Much later, around the beginning of the 21st century, Lipson and Pollack applied 3D printing to the automated fabrication of simple robots. They pointed out, however, that they were still a long way from a 3D printer that can print another 3D printer, as complete self-replication would require.
On a further note, at the molecular scale, natural phenomena that undergo self-assembly and self-replication are numerous; they include self-assembling peptides, nucleotides, DNA, viruses, eukaryotic cells, mitochondria and plasmids. On the artificial side, however, the first person to motivate the consideration of very tiny machines, and to elaborate the idea of nanomachines, was the Nobel laureate physicist Richard P. Feynman. In his famous December 1959 talk, “There’s Plenty of Room at the Bottom”, he argued that manipulating matter at the atomic scale was feasible. The claim proved controversial, yet it drew more supporters than detractors. He went on to propose a prototypical top-down strategy for constructing complex nanomachinery. Inspired by Feynman’s vision, Eric Drexler introduced the concept of mechanosynthesis in 1981. He wrote, “By analogy with macroscopic devices, feasible molecular machines presumably include manipulators able to wield a variety of tools…thus, such tools can be positioned with atomic precision.” By 1985, Drexler was widely addressing the theory of replicating assemblers. One year later, he arrived at the notion of the molecular assembler: a device resembling an industrial robot that would be capable of holding and positioning reactive molecules in order to control the precise time and location at which chemical reactions take place. Drexler proposed the first generic design for a molecular assembler in his popular 1986 book Engines of Creation.
An important classification was attempted by the famous biologist Richard Dawkins in his 1976 work The Selfish Gene, where he identified three qualities that affect a replicator’s success: fecundity, fidelity, and longevity – in other words, the replicator’s productivity, reliability and durability. Dawkins wrote: “What is of evolutionary importance is that it produces copies of itself so that it is potentially immortal in the form of copies. Of course, everything else being equal, the more copies that are replicated (fecundity) and the more accurately a replicator produces them (fidelity), the greater its longevity and evolutionary success.”
He also added two design distinctions to the three qualities: replicators may be active or passive, and germ-line or dead-end. An active replicator is an entity that influences its own probability of being replicated – for example, any DNA molecule which, either through protein synthesis or the regulation of protein synthesis, has some phenotypic effect. A passive replicator, on the other hand, has no influence on its probability of being copied, such as a section of DNA that is never transcribed. As for the second distinction, a replicator is germ-line or dead-end depending on whether it is a potential ancestor of an indefinitely long line of descendant replicators.
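Dawkins’ first two qualities can be caricatured numerically. The toy model below (all numbers are hypothetical, and longevity is ignored for brevity) tracks the expected count of faithful copies per generation: a lineage that copies less prolifically but more accurately can still outgrow a sloppier, more fecund one.

```python
# Toy model of two of Dawkins' replicator qualities. Each generation, every
# replicator makes `fecundity` copies, of which a fraction `fidelity` are
# faithful; unfaithful copies are simply discarded. Deterministic expected
# counts only -- no mutation, selection, or longevity.

def grow(pop, fecundity, fidelity, generations):
    """Expected number of faithful replicators after some generations."""
    for _ in range(generations):
        pop = pop * fecundity * fidelity
    return pop

# Hypothetical lineages: "sloppy" grows 3 * 0.5 = 1.5x per generation,
# "faithful" grows 2 * 0.9 = 1.8x per generation.
sloppy   = grow(1.0, fecundity=3, fidelity=0.5, generations=10)
faithful = grow(1.0, fecundity=2, fidelity=0.9, generations=10)
print(faithful > sloppy)   # True: higher fidelity wins despite fewer copies
```

What matters in this caricature is the product fecundity × fidelity, echoing Dawkins’ point that the qualities trade off jointly toward evolutionary success.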
This has been a fairly detailed overview of the concept of self-replication, its emergence, its relation to micro- and nanotechnology, and some of the related studies carried out by the scientific community. Please share your thoughts on the topic, and on whether you think self-replication will play a big role in the future.
Born in 1992 in El Chouf, Lebanon, Samir grew up dreaming of becoming a scientist and an explorer. Today, he is a former bioengineering and nanotechnology research engineer who has contributed to award-winning projects on cancer diagnosis and silicon-based nanofabrication. He is currently a science communicator and content writer, and is influenced by scientists such as Carl Sagan, Richard Feynman, Richard Dawkins and Stephen Hawking.
Living between Lebanon and Germany, he aims to inform, inspire, educate and entertain readers in various areas of science and engineering by simplifying complex topics, triggering curiosity, provoking thoughts about science and the natural world, and as he says, “gradually bridging the information gap.”