A technological singularity is a conjectured point in the development of a civilization at which technological progress accelerates beyond the ability of human beings to understand and predict it. More specifically, the singularity can refer to the advent of an intelligence superior to the human one (possibly artificial), and to the cascade of technological advances presumed to follow from such an event. Whether a singularity could ever happen is a matter of debate.
Although it is commonly believed that the concept of the singularity was born in the last two decades of the twentieth century, it actually arose in the 1950s:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. (Stanislaw Ulam, May 1958, referring to a conversation with John von Neumann)
In 1965, the statistician I. J. Good described a concept even closer to the contemporary meaning of singularity, in which he included the advent of superhuman intelligence:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. (I. J. Good, 1965)
Even earlier, in 1954, the science fiction writer Fredric Brown anticipated the concept of technological singularity in the very short story Answer, imagining the construction of a “galactic supercomputer” which, upon being switched on, is first asked whether God exists; the supercomputer replies, “There is now!”
The concept of technological singularity as it is known today is credited to the mathematician and novelist Vernor Vinge. Vinge began talking about the singularity in the 1980s and gathered his thoughts in the first article on the subject, the 1993 essay Technological Singularity. The singularity has since become the subject of many futuristic and science fiction stories and writings.
Vinge’s essay contains the often-quoted statement that “within thirty years, we will have the technological means to create superhuman intelligence. Soon thereafter, the era of human beings will end.“
Vinge’s singularity is commonly, and incorrectly, interpreted as the claim that technological progress will grow without bound, as happens at a mathematical singularity. In reality, the term was chosen as a metaphor from physics, not mathematics: as one approaches the singularity, models for forecasting the future become less reliable, just as physical models become ineffective near, for example, a gravitational singularity.
The singularity is often seen as the end of human civilization and the birth of a new one. In his essay, Vinge asks why the human era should end, and argues that humans will be transformed during the singularity into a higher form of intelligence. After the creation of a superhuman intelligence, according to Vinge, people will be, by comparison, an inferior life form.
It is not entirely clear what Vinge means by new forms of civilization. Vinge once tried to write a story about a machine more intelligent than man and how it conceives the world. His editor told him that no one could write such a tale, because understanding how an observer smarter than man sees the world is impossible for man himself. The moment human beings build a machine able to become smarter than they are, a new era will begin, one lived from a different point of view: an era that cannot be qualified a priori as good or bad, but that will simply be different.
Historical analysis of technological progress shows that the evolution of technology follows an exponential process, not a linear one as one might be led to think. In his essay The Law of Accelerating Returns, Ray Kurzweil proposes a generalization of Moore’s law that forms the basis of many people’s beliefs about the singularity. Moore’s law describes an exponential growth pattern in the complexity of semiconductor integrated circuits, a pattern that typically has an asymptote (a “ceiling”): indeed, the exponential increase in performance seems to have slowed drastically after 2000.
Kurzweil extends this trend by including technologies much earlier than integrated circuits and extending it to the future. He believes the exponential growth of Moore’s Law will continue beyond the use of integrated circuits, with the use of technologies that will drive the singularity. He envisions the tendency for advances to feed on themselves, increasing the rate of further advance, and pushing well past what one might sensibly project by linear extrapolation of current progress.
The law described by Ray Kurzweil has in many ways altered the public’s perception of Moore’s law. It is a common belief that Moore’s law makes predictions that apply to all forms of technology, when in reality it concerns only semiconductor circuits. Many futurologists still use the term “Moore’s law” to describe ideas such as those presented by Kurzweil. Making even partially correct long-term predictions about where technology will go is practically impossible. Still, too often technological progress is thought of as proceeding linearly, along what seems the intuitive line of progress.
A technology can be likened to a product, with an evolutionary trend, a life cycle, and a useful life: it is born, grows, and dies. Each technology has its own diffusion and cost-benefit curves over a certain time (its useful life), after which it is abandoned and replaced by a new one. One thus obtains a cost-time and a performance-time graph for each technology. Being defined on a closed and bounded time interval (the useful life), each curve has an absolute maximum and minimum (Weierstrass theorem).
With the adoption of a new technology (a so-called “radical change”, comparable with the notion of a paradigm shift), the count restarts from a new zero, with a new exponential curve of cost reduction and performance improvement over time; this zero, however, is placed not at the origin of the axes but at the cost or performance value already achieved by the previous technology. If a technology follows a bell curve (or a U curve), once it has reached the “theoretical” extreme it can provide (minimum production cost, maximum performance), it is abandoned before its decline phase, provided, of course, that better alternatives exist.
The adoption of a new technology thus creates a point of discontinuity (a step), from which a new exponential curve starts. Summing the graphs of the individual technologies vertically, to construct a single cost-time or performance-time graph, yields n successive, “rising” exponential curves: where one ends and the next begins, there is a step.
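This stacked-curve picture can be sketched numerically. In the sketch below, every value is hypothetical: each technology generation contributes a saturating (logistic) share of performance, and summing the curves vertically, as described above, produces a rising staircase.

```python
import math

def contribution(t, gain, midpoint, steepness=1.0):
    """Performance contributed by one technology generation at time t.
    A logistic curve: near zero before its era, saturating at `gain` after."""
    return gain / (1 + math.exp(-steepness * (t - midpoint)))

def overall_performance(t, generations):
    """Sum the generations' curves vertically, as in the text."""
    return sum(contribution(t, gain, mid) for gain, mid in generations)

# Three hypothetical generations (gain, midpoint); each adds a larger step.
generations = [(10.0, 5.0), (20.0, 15.0), (40.0, 25.0)]
for t in range(0, 31, 5):
    print(f"t={t:2d}  performance={overall_performance(t, generations):6.1f}")
```

Each generation saturates near its own ceiling before the next takes over, so the overall curve climbs in ever larger steps rather than along a single exponential.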
To express this speed in mathematical terms, one might think of representing it with an exponential curve. This is still not quite correct: an exponential trend is accurate, but only over short time intervals; over a period of, say, 100 years, a single exponential curve will not fit. Journalists typically make their predictions about the future by extrapolating the current rate of evolution to determine what can be expected over the next ten or a hundred years. This is what Ray Kurzweil calls the intuitive linear view.
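The gap between the intuitive linear view and exponential growth can be made concrete with a small numerical sketch; the doubling period here is illustrative, not a measured value.

```python
import math

DOUBLING_PERIOD = 2.0  # years; illustrative, in the spirit of Moore's law

def exponential_growth(t):
    """Capability at time t (years) under steady doubling."""
    return 2 ** (t / DOUBLING_PERIOD)

def linear_view(t):
    """The 'intuitive linear view': extrapolate the growth rate seen at t = 0."""
    slope = math.log(2) / DOUBLING_PERIOD  # slope of 2**(t/T) at t = 0
    return 1 + slope * t

for years in (2, 10, 20, 100):
    print(f"{years:3d} years: exponential {exponential_growth(years):.3g}, "
          f"linear view {linear_view(years):.3g}")
```

Over two years the two projections barely differ; over decades the linear extrapolation undershoots by orders of magnitude, which is exactly the error Kurzweil attributes to the intuitive view.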
What Kurzweil calls the law of accelerating returns can be summarized in the following points:
- development incorporates positive feedback: the best methods derived from one stage of progress form the basis for further development;
- consequently, the rate of progress of an evolutionary process increases exponentially over time, and over time the order of magnitude of the information included in the development process increases;
- consequently, the gains delivered by technology increase exponentially;
- in a further cycle of positive feedback, the results of a particular evolutionary process are used as a springboard for further progress; this causes a second level of exponential development: the rate of exponential development itself grows exponentially;
- biological evolution is itself one such process;
- technological development is part of this evolutionary process: the species that created the first technology formed the basis for the development of subsequent technology, so technological development is a consequence and continuation of biological development;
- a given paradigm (i.e., a method, such as increasing the number of transistors on integrated circuits to make computers more powerful) guarantees exponential growth until its potential is exhausted; then a paradigm shift occurs that allows the exponential development to continue.
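The “second level” of exponential growth in the points above can be illustrated with a toy simulation, in which every rate is invented for illustration: capability compounds at some rate, and in the accelerated case the rate itself compounds in turn.

```python
def grow(years, rate=0.05, rate_growth=0.0):
    """Compound capability for `years` steps; the growth rate itself
    compounds by `rate_growth` each step (positive feedback)."""
    capability = 1.0
    for _ in range(years):
        capability *= 1 + rate
        rate *= 1 + rate_growth  # the feedback loop: progress speeds up progress
    return capability

plain = grow(50)                           # ordinary exponential growth
accelerated = grow(50, rate_growth=0.02)   # the rate itself grows
print(f"plain exponential:  {plain:8.1f}")
print(f"accelerated growth: {accelerated:8.1f}")
```

Even a modest 2% compounding of the rate leaves the accelerated process several times ahead of the plain exponential after fifty steps, which is the qualitative claim of the law of accelerating returns.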
If these principles are applied to the evolution of the Earth, one can see how closely they match the development process that has actually taken place:
- Epoch 1: physics and chemistry; information in atomic structures. Comparable to the creation of the cell: combinations of amino acids in proteins and nucleic acids (RNA, later DNA) introduced the paradigm of biology.
- Epoch 2: biology; information in DNA. DNA provided a “digital” method for recording the results of evolutionary experiments.
- Epoch 3: brains; information in neural configurations. The evolution of species brought with it rational thought.
- Epoch 4: technology; information in hardware and software designs. This decisively shifted the paradigm from biology to technology.
- Epoch 5: the fusion of technology with human intelligence; technology masters the methods of biology (including those of human intelligence). What is to come is the shift from biological intelligence to a combination of biological and non-biological intelligence (see also the research of Prof. Hiroshi Ishiguro).
- Epoch 6: the universe awakens; enormously expanded human intelligence (mostly non-biological) spreads throughout the universe.
Examining the timing of these steps shows that the process continually accelerates. The evolution of life forms, for example, took several million years for the first step (e.g., primitive cells), but each subsequent step came faster. Now “biological technology” has become too slow compared to man-made technology, which uses its own results to move forward significantly faster than nature can.
It is assumed that ever-faster technological growth will come with the development of superhuman intelligence, either by directly empowering human minds (perhaps with cybernetics) or by building artificial intelligence. This superhuman intelligence would presumably be able to invent ways of empowering itself even faster, producing a feedback effect that would surpass pre-existing intelligence.
Others assume that simply having artificial intelligence on the same level as human intelligence could produce the same effect, if Kurzweil’s law continues indefinitely. At first, such an intelligence would be equal to that of humans. Eighteen months later it would be twice as fast; three years later, four times as fast, and so on. But since the accelerated AI would now be designing the computers, each subsequent step would take about eighteen subjective months but proportionately less real time. If Kurzweil’s law continued to apply unchanged, each step would take half the real time of the previous one, and in three years (36 months = 18 + 9 + 4.5 + 2.25 + …) computer speed would theoretically reach infinity. The example is only illustrative; in any case, many futurologists agree that Kurzweil’s law cannot be assumed to hold even during the singularity, let alone to remain literally true forever, as would be required to produce a truly infinite intelligence.
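The 36-month figure is simply the sum of a geometric series: each generation takes half the real time of the previous one, so 18 + 9 + 4.5 + … converges to 2 × 18 = 36 months. A minimal check of the partial sums:

```python
def elapsed_real_time(generations, first_step=18.0):
    """Total real time (months) after a number of doubling generations,
    each taking half the real time of the previous one."""
    return sum(first_step / 2**k for k in range(generations))

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} generations: {elapsed_real_time(n):9.5f} months")
```

The partial sums approach, but never exceed, 36 months: infinitely many doubling generations fit inside a finite span of real time, which is the precise sense in which speed would “reach infinity” within three years.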
A possible argument against such a technological singularity, concerning the limits on the development of a sustainable intelligence, came some time ago from the well-known English cosmologist Martin Rees. He argued that the current organic size of the human brain is in reality already the upper limit of the animal’s ability to develop intelligent concepts: beyond it, information would take too long to travel between the synapses to sustain comprehensive constructive and evolutionary reasoning, and would be limited to accumulating concepts as long-term memory. To illustrate: a brain twice the size of a human one would store twice as much data, but at twice the time and half the learning speed (and so on as size increases), resulting, in effect, in negative progress. If the same argument were applied to advanced computer systems, there would be an upper physical limit here as well: the physical distance between processors would constrain the resulting processing speeds to limits which, although high, would culminate at most in a super-mass memory rather than in a super-computer of infinite intelligence.
Singularity and Society
According to many researchers, the technological development leading to the singularity will have important repercussions on many aspects of our daily life, both individual and social. At the political level, the advancement of technology allows the creation of forms of aggregation alternative to the modern state: even without considering the colonization of other planets (a prospect analyzed by many transhumanists, including Ray Kurzweil), concrete projects already exist for the creation of libertarian autonomous communities in international waters, such as the Seasteading Institute of Patri Friedman (grandson of Nobel laureate Milton Friedman).
Other scholars, including Ben Goertzel, founder of the Artificial General Intelligence Research Institute, and Gabriele Rossi, co-founder of iLabs, explicitly contemplate the possibility of a “society within society”, one that lives “virtually” in the relations between its members and is not confined to a specific physical space (an extreme extension of what is already happening today, in a primitive way, with Second Life and social networks).
The approach of the singularity will ultimately change the economy and justice as well. The availability of artificial intelligence will revolutionize the labor market, introducing new production challenges in addition to the often-mentioned ethical problems; the radical extension of life will require changes in every sector involved, from health to insurance. Finally, realizing a dream that Norbert Wiener, the father of cybernetics, already had, the exponential growth in the knowledge of reality and the improvement of expert systems will allow a gradual replacement of human justice with a more efficient and objective automatic justice.
There are two main types of criticism of the prediction of the singularity: those that question whether the singularity is probable or even possible, and those that question whether it is desirable, or whether it should instead be regarded as dangerous and opposed as far as possible.
Some, like Federico Faggin, developer of the MOS technology that allowed the manufacture of the first microprocessors, doubt that a technological singularity is at all likely to occur. Detractors ironically refer to it as “the rapture for nerds”.
Most speculation about the singularity assumes the possibility of an artificial intelligence superior to that of humans. Whether creating such an artificial intelligence is possible is controversial; many believe that practical advances in artificial intelligence have not yet demonstrated it empirically.
Some dispute that the speed of technological progress is increasing: its exponential growth could become linear, or inflect, or begin to flatten into a curve that allows only limited growth.
The writer Roberto Vacca focuses in particular on Kurzweil’s claim of “exponential” progress in “every aspect of information”, countering that no progress follows an exponential law.
There has often been speculation, in science fiction and elsewhere, that an advanced AI could have goals that do not coincide with those of humanity and could threaten its existence. It is conceivable, if not probable, that a superintelligent AI would simply eliminate the intellectually inferior human race, and humans would be unable to stop it. This is one of the biggest issues worrying singularity advocates and critics alike, and it was the subject of an article by Bill Joy in Wired Magazine entitled Why the Future Doesn’t Need Us.
Some critics argue that advanced technologies are simply too dangerous for us to allow a singularity to occur, and propose that work be done to prevent it. Perhaps the most famous proponent of this view is Theodore Kaczynski, the Unabomber, who wrote in his “manifesto” that AI could empower the upper classes of society to “simply decide to exterminate the mass of humanity”. Alternatively, if AI is not created, Kaczynski argues that humans “will be reduced to the level of pets” after sufficient technological progress has been made. Parts of Kaczynski’s writings have been included both in Bill Joy’s article and in a recent book by Ray Kurzweil. It should be noted that Kaczynski is not merely opposed to the singularity: he is a Luddite, and many people oppose the singularity without opposing modern technology as the Luddites do.
Of course, scenarios such as those described by Kaczynski are regarded as undesirable even by singularity advocates. Many proponents of the singularity, however, do not believe such scenarios are so likely and are more optimistic about the future of the technology. Others believe that, regardless of the dangers the singularity poses, it is simply inevitable: we must advance technologically because we have no other way forward.
Proponents of friendly AI, and especially the SIAI, recognize that the singularity is potentially very dangerous and work to make it safer by creating seed AI that will act benevolently toward humans and eliminate existential risks. The idea is also inherent in Asimov’s Three Laws of Robotics, which are intended to prevent the logic of an AI robot from acting maliciously toward humans. However, in one of Asimov’s tales, despite these laws, robots end up causing harm to a single human being as a result of the formulation of the Zeroth Law. The theoretical framework of Friendly Artificial Intelligence is currently being designed by the singularitarian Eliezer Yudkowsky.
Another view, albeit less common, is that AI will eventually dominate or destroy the human race, and that this scenario is desirable. Hugo de Garis is the best-known supporter of this opinion.
In the view of techno-individualists, each individual must be able to artificially augment every possible faculty (intellectual and otherwise), in order to face both the singularity and the elites in power. For them, investment, research, and development in technology should empower individuals, not authorities, and the ruling caste is unaware of these prospects. Fundamental, in their view, is complete freedom of communication between individuals (inter-individualism, transhumanism, swarm intelligence).
The Singularity Institute for Artificial Intelligence (SIAI), a nonprofit research and education institution, was created to work on safe cognitive enhancement (i.e., a beneficial singularity). It emphasizes friendly AI, as it believes that a general AI is more likely to boost cognitive ability substantially before human intelligence can be significantly enhanced by neurotechnology or somatic gene therapy.
Singularity and Modern Culture
Traces of the idea of a time when a new species of machines will replace man can be found already in the writings of Samuel Butler:
What if technology continues to evolve so much more rapidly than the plant and animal kingdoms? Would it replace us in the supremacy of the planet? Just as the vegetable kingdom slowly developed from the mineral, and the animal kingdom in turn succeeded the vegetable, so in recent times an entirely new kingdom has arisen, of which we have seen, so far, only what will one day be considered the antediluvian prototype of a new race… We are entrusting machines, day after day, with more and more power, and supplying them, through the most disparate and ingenious mechanisms, with those capacities of self-regulation and autonomy of action that will be for them what the intellect has been for the human race. (Samuel Butler, 1863)
as well as the concept of men enslaved to machines:
Machines serve man only on condition that they are served, and themselves set the conditions for this mutual agreement […]. How many men today live in a state of slavery to machines? How many spend their entire lives, from cradle to death, looking after machines night and day? Think of the ever-increasing number of men they have enslaved, or who are dedicated body and soul to the advancement of the mechanical realm: isn’t it evident that machines are taking over us? […] Aren’t there more men engaged in looking after machines than in looking after their fellow men?(Samuel Butler, 1871, Erewhon)
Concepts similar to those related to post-singularity development can be found in the description of the Omega Point, a supposed future when everything in the universe spirals toward a final point of unification, as formulated by Pierre Teilhard de Chardin.
In addition to the original stories of Vernor Vinge, who pioneered the first ideas about the singularity, several other science fiction writers have written stories with the singularity as a central theme. Notable authors include Charles Stross, Greg Egan, Rudy Rucker, and of course Isaac Asimov with his short story The Last Question.
The concept of singularity is also present in various computer games. Among others, Sid Meier’s Alpha Centauri features something like a singularity when players pursue the “Ascent to Transcendence”, in which humans join their brains with a hive organism in its metamorphosis to godhood; and Cell to Singularity is an idle game in which the player starts as a single-celled organism and upgrades its biology, intellect, and technology until an entire planet is engulfed by a civilization on the brink of technological singularity.
References to the concept of singularity can also be found in various films, including the 2014 Transcendence with Johnny Depp, Lucy with Scarlett Johansson, Ex Machina with Alicia Vikander, and the 2015 Humandroid with Dev Patel and Hugh Jackman.