Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence - 2017

Created: December 15, 2017 / Updated: November 2, 2024 / Status: finished / 24 min read (~4679 words)
Artificial General Intelligence

  • Alternative stories based on the prelude
  • Given a jailboxed AGI that produces "media": if that media can hide enough data for the AGI to escape (via steganography, for instance) and an external agent collects and redistributes it, the AGI might smuggle out its escape payload without anyone noticing
  • Is a Dyson sphere effectively the same as making use of the energy within the core of a planet?
  • If we can make our lives infinite, what impact would this have
    • on progress?
    • on change?
  • What should we do if all paths to superintelligence/SAGI lead to the destruction of the human race? Should we still proceed?

  1. Do you want there to be superintelligence?
    Yes.
  2. Do you want humans to still exist, be replaced, cyborgized and/or uploaded/simulated?
    Do not care.
  3. Do you want humans or machines in control?
    See 2.
  4. Do you want AIs to be conscious or not?
    Define conscious.
  5. Do you want to maximize positive experiences, minimize suffering or leave this to sort itself out?
    Minimize suffering.
  6. Do you want life spreading into the cosmos?
    Do not care.
  7. Do you want a civilization striving toward a greater purpose that you sympathize with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?
    Difficult to say.

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

  • Irving Good, 1965

  • It simply boiled down to maximizing their rate of return on investment, but normal investment strategies were a slow-motion parody of what they could do: whereas a normal investor might be pleased with a 9% return per year, their MTurk investments had yielded 9% per hour, generating eight times more money each day (see the arithmetic below)
  • They soon realized that, even though they could get much better returns than other investors, they'd be unlikely to get returns anywhere close to what they could get from selling their own products
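
A quick check of the compounding claim above (my own arithmetic, not from the book): 9% per hour compounds over a day to $1.09^{24} \approx 7.9$, i.e. roughly an eightfold gain every 24 hours, versus a mere $1.09\times$ per year for the ordinary investor.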

  • A good teacher can help students learn science much faster than they could have discovered it from scratch on their own

  • Phase 1: Gain people's trust
  • Phase 2: Persuasion
  • Given any person's knowledge and abilities, Prometheus could determine the fastest way for them to learn any new subject in a manner that kept them highly engaged and motivated to continue, and produce the corresponding optimized videos, reading materials, exercises and other learning tools
  • Seven slogans
    • Democracy
    • Tax cuts
    • Government social service cuts
    • Military spending cuts
    • Free trade
    • Open borders
    • Socially responsible companies
  • Poll after poll showed that most voters around the world felt their quality of life improving, and that things were generally moving in a good direction. This had a simple mathematical explanation: before Prometheus, the poorest 50% of Earth's population had earned only 4% of the global income, enabling the Omega-controlled companies to win their hearts (and votes) by sharing only a modest fraction of their profits with them
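
Spelling out the arithmetic implicit in that last point (my own gloss): if the poorest half of humanity earns only 4% of global income, then transferring an amount equal to just 4% of global income - a modest slice of Prometheus-scale profits - doubles the income of half of the world's voters.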

  • Let's define life very broadly, simply as a process that can retain its complexity and replicate
    • What's replicated isn't matter but information
  • We can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware
  • The three stages of life:
    • biological evolution: evolves its hardware and software
    • cultural evolution: evolves its hardware, designs much of its software
    • technological evolution: designs its hardware and software
  • The fact that most of our human software is added after birth (through learning) is useful, since our ultimate intelligence isn't limited by how much information can be transmitted to us at conception via our DNA
  • The synaptic connections that link the neurons in my brain can store about a hundred thousand times more information than the DNA that I was born with
  • Your synapses store all your knowledge and skills as roughly 100 terabytes' worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download
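
A consistency check on those two figures (my arithmetic): $100\ \text{TB} / 1\ \text{GB} = 10^{14}/10^{9} = 10^{5}$, which is the "hundred thousand times more information" quoted above.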

  • Three distinct schools of thought
    • Digital utopians
    • Techno-skeptics
    • Members of the beneficial-AI movement
  • Most controversies surrounding strong artificial intelligence (that can match humans on any cognitive task) center around two questions:
    • When (if ever) will it happen?
    • Will it be a good thing for humanity?

  • The real worry isn't malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours

  • Intelligence = the ability to accomplish complex goals
  • It makes no sense to quantify intelligence of humans, non-human animals or machines by a single number such as an IQ
  • We can say that a third program is more intelligent than both of the others if it's at least as good as them at accomplishing all goals, and strictly better at at least one (see the comparison sketch after this list)
  • To classify different intelligences into a taxonomy, another crucial distinction is that between narrow and broad intelligence
  • AGI: the ability to accomplish any goal at least as well as humans
  • Intelligent behavior is inexorably linked to goal attainment
  • The fact that low-level sensorimotor tasks seem easy despite requiring enormous computational resources is known as Moravec's paradox, and is explained by the fact that our brain makes such tasks feel easy by dedicating massive amounts of customized hardware to them - more than a quarter of our brain
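
A minimal sketch of the partial ordering implied by that comparison rule, using hypothetical goals and scores (my illustration, not the book's); many pairs of agents are simply incomparable, which is one way to see why a single number like IQ can't rank them:

    def strictly_more_intelligent(a, b):
        # Comparison rule from the notes above: `a` is more intelligent than `b`
        # if it is at least as good on every goal and strictly better on at least one.
        # `a` and `b` map (hypothetical) goal names to performance scores.
        goals = a.keys() | b.keys()
        at_least_as_good = all(a.get(g, 0) >= b.get(g, 0) for g in goals)
        strictly_better = any(a.get(g, 0) > b.get(g, 0) for g in goals)
        return at_least_as_good and strictly_better

    # Hypothetical scores: a chess engine and a general-purpose assistant
    chess_engine = {"chess": 0.99, "translation": 0.0, "driving": 0.0}
    assistant = {"chess": 0.60, "translation": 0.8, "driving": 0.7}
    print(strictly_more_intelligent(chess_engine, assistant))  # False
    print(strictly_more_intelligent(assistant, chess_engine))  # False - incomparable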

  • The reason that memory is so stable is that modifying it requires more energy than random disturbances are likely to provide
  • The energy of a complicated physical system can depend on all sorts of mechanical, chemical, electrical and magnetic properties, and as long as it takes energy to change the system away from the state you want it to remember, this state will be stable
  • Information can take on a life of its own, independent of its physical substrate
  • Our brains store much more information than our genes: in the ballpark of 10 gigabytes electrically (specifying which of your 100 billion neurons are firing at any one time) and 100 terabytes chemically/biologically (specifying how strongly different neurons are linked by synapses)
  • Whereas you retrieve memories from a computer or hard drive by specifying where it's stored, you retrieve memories from your brain by specifying something about what is stored
    • Such memory systems are called auto-associative, since they recall by association rather than by address
  • It was proved that you can squeeze in as many as 138 different memories for every thousand neurons without causing major confusion
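
A minimal Hopfield-network sketch of such an auto-associative memory (my own illustration, not from the book): patterns are stored in synapse-like weights via a Hebbian rule and recalled by content from a corrupted cue, and recall degrades once the number of stored patterns exceeds roughly 0.138 times the number of neurons - which is where the 138-memories-per-thousand-neurons figure comes from.

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_memories = 1000, 100   # load of 0.1 per neuron, below the ~0.138 limit
    memories = rng.choice([-1, 1], size=(n_memories, n_neurons))

    # Hebbian learning: strengthen the synapse between neurons that fire together
    W = memories.T @ memories / n_neurons
    np.fill_diagonal(W, 0)

    def recall(cue, steps=20):
        # Recall by association: start from a corrupted cue and let the network
        # settle toward the nearest stored pattern
        state = cue.copy()
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)
        return state

    cue = memories[0].copy()
    flips = rng.choice(n_neurons, size=100, replace=False)  # corrupt 10% of the bits
    cue[flips] *= -1
    print((recall(cue) == memories[0]).mean())  # close to 1.0 while below capacity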

  • A computation is a transformation of one memory state into another
  • How can a clump of seemingly dumb matter compute a complicated function?
    • It must exhibit complex dynamics so that its future state depends in some complicated way on the present state
    • We want the system to have the property that if we put it in a state that encodes the input information, let it evolve according to the laws of physics for some amount of time, and then interpret the resulting final state as the output information, then the output is the desired function of the input
  • The fact that exactly the same computation can be performed on any universal computer means that computation is substrate-independent in the same way that information is: it can take on a life of its own, independent of its physical substrate
  • Substrate independence doesn't mean that a substrate is unnecessary, but that most of its details don't matter
  • The substrate-independent phenomenon takes on a life of its own, independent of its substrate
  • It's often only the substrate-independent aspect that we're interested in

  • For matter to learn, it must rearrange itself to get better and better at computing the desired function - simply by obeying the laws of physics
  • A network of interconnected neurons can learn in an analogous way: if you repeatedly put it into certain states, it will gradually learn these states and return to them from any nearby state
  • Neural networks are universal in the sense that they can compute any function arbitrarily accurately, by simply adjusting those synapse strength numbers accordingly
  • The question of why neural networks work so well can't be answered with mathematics alone, because part of the answer lies in physics
  • The simple task of multiplying $n$ numbers requires a whopping $2^n$ neurons for a network with only one layer, but takes only about $4n$ neurons in a deep network
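
One way to see where the $4n$ figure comes from (a sketch of the standard argument, not a proof): two numbers can be multiplied using the identity $xy = \frac{(x+y)^2 - (x-y)^2}{4}$, and the squaring can be approximated by a handful of neurons with a smooth activation function - about four per multiplication. Multiplying $n$ numbers pairwise in a binary tree then needs $n-1$ such gates, hence on the order of $4n$ neurons in a network of depth about $\log_2 n$, whereas a single hidden layer has to represent the whole product at once and needs exponentially many neurons.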

  • As technology keeps improving, will the rise of AI eventually also eclipse those abilities that provide my current sense of self-worth and value on the job market (or in general)?
  • How will near-term AI progress change what it means to be human?

  • As technology grows more powerful, we should rely less on the trial-and-error approach to safety engineering
    • We should become more proactive than reactive
  • Four main areas of technical AI-safety research
    • Verification: Ensuring that software fully satisfies all the expected requirements (Did I build the system right?)
    • Validation: Ensuring that the requirements themselves capture what's actually wanted (Did I build the right system?)
    • Security: Directed at deliberate malfeasance
    • Control: Ability for a human operator to monitor the system and change its behavior if necessary

  • Digital technology drives inequality in three different ways
    • By replacing old jobs with ones requiring more skills
    • Since the year 2000, an ever-larger share of corporate income has gone to those who own the companies rather than to those who work there - and as long as automation continues, we should expect those who own the machines to take a growing fraction of the pie
    • It often benefits superstars over everyone else

  • How many FLOPS are needed to simulate a brain?
    • In the ballpark of a hundred petaFLOPS ($10^{17}$)
  • How many FLOPS are needed for human intelligence?
  • How many FLOPS can a human brain perform?
  • Hans Moravec figured that replicating a retina's computations on a conventional computer requires about a billion FLOPS and that the whole brain does about ten thousand times more computation than a retina (based on comparing volumes and numbers of neurons), so that the computational capacity of the brain is around $10^{13}$ FLOPS
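
Spelling the estimate out: $10^{9}\ \text{FLOPS} \times 10^{4} = 10^{13}\ \text{FLOPS}$. The gap between this figure and the $\sim 10^{17}$ FLOPS quoted for simulating a brain reflects the difference between merely matching the brain's useful computation and reproducing its hardware in detail.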

  • Our DNA gave us the goal of sex because it "wants" to be reproduced, but now that we humans have understood the situation, many of us choose to use birth control, thus staying loyal to the goal itself rather than to its creator or the principle that motivated the goal
  • What these strategies have in common is that your intellectually inferior jailers haven't anticipated or guarded against them
  • Prometheus caused problems for certain people not because it was necessarily evil or conscious, but because it was competent and didn't fully share their goals
  • Despite all the media hype about a robot uprising, Prometheus wasn't a robot - rather, its power came from its intelligence
  • Nash equilibrium: a situation in which no party can gain by unilaterally changing their strategy while the others keep theirs fixed (see the worked example after this list)
  • To prevent cheaters from ruining the successful collaboration of a large group, it may be in everyone's interest to relinquish some power to a higher level in the hierarchy that can punish cheaters
  • Long life loses much of its point if we are fated to spend it staring stupidly at ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand
    • Hans Moravec, Mind Children, 1988
  • Evolution optimizes strongly for energy efficiency because of limited food supply, not for ease of construction or understanding by human engineers
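
A worked example of the Nash-equilibrium definition above, using hypothetical Prisoner's-Dilemma payoffs; it also illustrates why delegating the punishment of cheaters to a higher level can be in everyone's interest, since the only equilibrium here leaves both players worse off than mutual cooperation would:

    import itertools

    # Hypothetical payoffs: C = cooperate, D = defect (cheat)
    payoff_a = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    payoff_b = {("C", "C"): 3, ("C", "D"): 5, ("D", "C"): 0, ("D", "D"): 1}
    strategies = ("C", "D")

    def is_nash(sa, sb):
        # Nash equilibrium: neither player can gain by unilaterally deviating
        best_a = all(payoff_a[(sa, sb)] >= payoff_a[(alt, sb)] for alt in strategies)
        best_b = all(payoff_b[(sa, sb)] >= payoff_b[(sa, alt)] for alt in strategies)
        return best_a and best_b

    print([s for s in itertools.product(strategies, strategies) if is_nash(*s)])
    # [('D', 'D')]: mutual defection is the only equilibrium, even though
    # mutual cooperation would leave both players better off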

  • The hardware and electricity costs of running the AI are crucial, since we won't get an intelligence explosion until the cost of doing human-level work drops below human-level hourly wages
  • If we don't know what we want, we're unlikely to get it

  • Why should the power balance between multiple superintelligences remain stable for millennia, rather than the AIs merging or the smartest one taking over?
  • Why should the machines choose to respect human property rights and keep humans around, given that they don't need humans for anything and can do all human work better and cheaper themselves?

  • The AI enforces two tiers of rules: universal and local
    • Universal rules apply in all sectors
    • Individual sectors have additional local rules on top of this, encoding certain moral values
  • There's no real point in humans trying to do science or figuring other things out, because the AI already has
  • There's no real point in humans trying to create something to improve their lives, because they'll readily get it from the AI if they simply ask

  • In this spirit (of open-source software), all intellectual property rights are abolished: there are no patents, copyrights, or trademarked designs - people simply share their good ideas, and everyone is free to use them

  • There are at least four dimensions wherein the optimal balance must be struck
    • Centralization: There's a trade-off between efficiency and stability: a single leader can be very efficient, but power corrupts and succession is risky
    • Inner threats: One must guard both against growing power centralization (group collusion, perhaps even a single leader taking over) and against growing decentralization (into excessive bureaucracy and fragmentation)
    • Outer threats: If the leadership structure is too open, this enables outside forces (including the AI) to change its values, but if it's too impervious, it will fail to learn and adapt to change
    • Goal stability: Too much goal drift can transform utopia into dystopia, but too little goal drift can cause failure to adapt to the evolving technological environment

  • It's generally hard for two entities thinking at dramatically different speeds and with extremely disparate capabilities to have meaningful communication as equals

  • If a thing is not diminished by being shared with others, it is not rightly owned if it is only owned and not shared
    • Saint Augustine

  • There is reason to suspect that ambition is a rather generic trait of advanced life
  • Almost regardless of what it's trying to maximize, be it intelligence, longevity, knowledge or interesting experiences, it will need resources. It therefore has an incentive to push its technology to the ultimate limits, to make the most of the resources it has. After this, the only way to further improve is to acquire more resources, by expanding into ever-larger regions of the cosmos
  • If we're interested in the extent to which our cosmos can ultimately come alive, we should study the limits of ambition that are imposed by the laws of physics

  • Future life that's reached the technological limit needs mainly one fundamental resource: so-called baryonic matter, meaning anything made up of atoms or their constituents
  • A black hole acts like a hot object - the smaller, the hotter - that gives off heat radiation known as Hawking radiation (see the formulas after this list)
  • This means that the black hole gradually loses energy and evaporates away. In other words, whatever matter you dump into the black hole will eventually come back out again as heat radiation, so by the time the black hole has completely evaporated, you've converted your matter to radiation with nearly 100% efficiency
  • A problem with using black hole evaporation as a power source is that, unless the black hole is much smaller than an atom in size, it's an excruciatingly slow process that takes longer than the present age of our Universe and radiates less energy than a candle
  • Gravity rearranged hydrogen into stars which rearranged the hydrogen into heavier atoms, after which gravity rearranged such atoms into our planet where chemical and biological processes rearranged them into life
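
The standard formulas behind those two points (textbook results, not from the book): a black hole of mass $M$ has Hawking temperature $T_H = \frac{\hbar c^3}{8\pi G M k_B}$ and evaporation time $t_{\text{evap}} \approx \frac{5120\,\pi G^2 M^3}{\hbar c^4}$, so halving the mass doubles the temperature while the lifetime shrinks as $M^3$; a solar-mass black hole would take on the order of $10^{67}$ years to evaporate, which is why only black holes far smaller than an atom radiate at a useful rate.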

  • What upper limits do the laws of physics place on the amount of matter that life can ultimately make use of?
  • Even if there are infinitely many galaxies, it appears that we can see and reach only a finite number of them: we can see about 200 billion galaxies and settle in at most ten billion
  • About 98% of our Universe is "see but not touch"
  • The two key problems are that conventional rockets spend most of their fuel simply to accelerate the fuel they carry with them, and that today's rocket fuel is hopelessly inefficient (the rocket equation after this list makes this precise)
  • If there's an intelligence explosion, the key question isn't if intergalactic settlement is possible, but simply how fast it can proceed
  • Two separate types of probes: seed probes and expanders
    • The seed probes would slow down, land and seed their destination with life
    • The expanders would never stop: they'd scoop up matter in flight, perhaps using some improved variant of the ramjet technology, and use this matter both as fuel and as raw material out of which they'd build expanders and copies of themselves
  • Five main suspects for our upcoming cosmic apocalypse, or cosmocalypse
    • The Big Chill: Our universe keeps expanding forever, diluting our cosmos into a cold, dark and ultimately dead place
    • The Big Crunch: The cosmic expansion is eventually reversed and everything comes crashing back together in a cataclysmic collapse akin to a backward Big Bang
    • The Big Rip: Our galaxies, planets and even atoms get torn apart in a grand finale a finite time from now
    • The Big Snap
    • Death Bubbles
  • The cost of computation drops when you compute slowly, so you'll ultimately get more done if you slow things down as much as possible
  • Future superintelligence may prefer to burn through its energy supplies relatively quickly, to turn them into computations before running into problems such as cosmic horizons and proton decay
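
On the rocket-fuel point above: the Tsiolkovsky rocket equation (a standard result, not specific to the book) is $\Delta v = v_e \ln\frac{m_0}{m_f}$, so the required ratio of fuelled mass $m_0$ to final mass $m_f$ grows exponentially with the desired speed change $\Delta v$; with chemical exhaust speeds $v_e$ of only a few km/s, reaching a meaningful fraction of light speed by carrying your own fuel is hopeless, which is why schemes like the ramjet variant mentioned above matter.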

  • For an intelligent information-processing system, going big is a mixed blessing involving an interesting trade-off
    • Going bigger lets it contain more particles, which enable more complex thoughts
    • It slows down the rate at which it can have truly global thoughts, since it now takes longer for all the relevant information to propagate to all its parts
  • If life engulfs our cosmos, what form will it choose: simple and fast, or complex and slow?
  • Organisms that are large, complex and slow often mitigate their sluggishness by containing smaller modules that are simple and fast
  • Because internal communication is slow and costly, computations will be done as locally as possible
  • If superintelligence one day expands to cosmic scales, what will its power hierarchy be like?
    • Will it be freewheeling and decentralized or highly authoritarian?
    • Will cooperation be based mainly on mutual benefit or on coercion and threats?
  • If superintelligence develops technology that can readily rearrange elementary particles into any form of matter whatsoever, then it will eliminate most of the incentives for long-distance trade
  • In a cosmos teeming with superintelligence, almost the only commodity worth shipping long distances will be information
    • The only exception might be matter to be used for cosmic engineering projects - for example, to counteract the aforementioned destructive tendency of dark energy to tear civilizations apart
  • A superintelligent hub could use the analogous strategy of deploying a network of loyal guards throughout its cosmic empire
  • What happens if life evolves independently in more than one place and two expanding civilizations meet?
  • If the distance between neighboring space-settling civilizations is much larger than the distance dark energy lets them expand, then they'll never come into contact with each other or even find out about each other's existence
  • It's plausible that long before two superintelligent civilizations encounter one another, their technologies will plateau at the same level, limited merely by the laws of physics
    • This makes it seem unlikely that one superintelligence could easily conquer the other even if it wanted to
    • Moreover, if their goals have evolved to be relatively aligned, then they may have little reason to desire conquest or war
  • Information is very different from the resources that humans usually fight over, in that you can simultaneously give it away and keep it
  • An ambitious civilization can encounter three kinds of regions:
    • Uninhabited ones
    • Life bubbles
    • Death bubbles
  • This assumption that we're not alone in our Universe is not only dangerous but also probably false
  • For our nearest neighbor civilization to be within our Universe, whose radius is about $10^{26}$ meters, the number of zeroes in the distance to it (written in meters as a power of ten) can't exceed 26, and the probability of that number falling in the narrow range between 22 and 26 is rather small
  • Although dinosaurs ruled Earth for over 100 million years, a thousand times longer than we modern humans have been around, evolution didn't seem to inevitably push them toward higher intelligence and inventing telescopes or computers

  • The ultimate roots of goal-oriented behavior can be found in the laws of physics themselves, and manifest themselves even in simple processes that don't involve life
  • Out of all ways that nature could choose to do something, it prefers the optimal way, which typically boils down to minimizing or maximizing some quantity
  • There are two mathematically equivalent ways of describing each physical law:
    • As the past causing the future
    • Nature optimizing something

  • At some point, a particular arrangement of particles got so good at copying itself that it could do so almost indefinitely by extracting energy and raw materials from its environment. We call such a particle arrangement life
  • Whereas earlier, the particles seemed as though they were trying to increase average messiness in various ways, these newly ubiquitous self-copying patterns seemed to have a different goal: not dissipation but replication
  • Since the most efficient copiers outcompete and dominate the others, before long any random life form you look at will be highly optimized for the goal of replication

  • The real risk with AGI isn't malice but competence
  • Three tough subproblems
    • Making AI learn our goals
    • Making AI adopt our goals
    • Making AI retain our goals
  • A key idea underlying inverse reinforcement learning is that we make decisions all the time, and that every decision we make reveals something about our goals (a toy sketch follows this list)
  • There may be hints that the propensity to change goals in response to new experiences and insights increases rather than decreases with intelligence
  • There's tension between world-modeling and goal retention
    • With increasing intelligence may come not merely a quantitative improvement in the ability to attain the same old goals, but a qualitatively different understanding of the nature of reality that reveals the old goals to be misguided, meaningless or even undefined
  • In its attempt to better model the world, the AI may naturally, just as we humans have done, attempt also to model and understand how it itself works - in other words, to self-reflect
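
A toy sketch of that inverse-reinforcement-learning idea, with made-up goals and choices (my illustration, not the book's): treat each observed decision as evidence about the agent's goal and do a Bayesian update, assuming the agent is only noisily rational.

    import numpy as np

    # Two hypothetical candidate goals, each assigning utilities to options
    goals = {
        "likes_apples": {"apple": 1.0, "banana": 0.0},
        "likes_bananas": {"apple": 0.0, "banana": 1.0},
    }
    options = ("apple", "banana")
    beta = 3.0  # how close to perfectly rational we assume the agent is

    def choice_prob(goal, choice):
        # Softmax ("noisily rational") choice model: better options are picked more often
        utils = np.array([goals[goal][o] for o in options])
        probs = np.exp(beta * utils) / np.exp(beta * utils).sum()
        return probs[options.index(choice)]

    observed = ["apple", "apple", "banana", "apple"]
    posterior = {g: 0.5 for g in goals}        # uniform prior over candidate goals
    for c in observed:                         # every decision reveals something
        posterior = {g: p * choice_prob(g, c) for g, p in posterior.items()}
    total = sum(posterior.values())
    print({g: round(p / total, 3) for g, p in posterior.items()})
    # most of the probability mass ends up on "likes_apples"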

  • Four principles of ethics
    • Utilitarianism: Positive conscious experiences should be maximized and suffering should be minimized
    • Diversity: A diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible
    • Autonomy: Conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle
    • Legacy: Compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans today would view as terrible
  • Given how ethical views have evolved since the Middle Ages regarding slavery, women's rights, etc., would we really want people from 1,500 years ago to have a lot of influence over how today's world is run?
  • Might there be a goal system or ethical framework that almost all entities converge to as they get ever more intelligent?
  • It's likely that any superintelligent AIs will have subgoals including efficient hardware, efficient software, truth-seeking and curiosity, simply because these subgoals help them accomplish whatever their ultimate goals are
  • Orthogonality thesis: the ultimate goals of a system can be independent of its intelligence

  • Consciousness = subjective experience

  • How does the brain process information? How does intelligence work?
  • What physical properties distinguish conscious & unconscious systems?
  • How do physical properties determine qualia?
  • Why is anything conscious?

  • Psychologists have long known that you can unconsciously perform a wide range of other tasks and behaviors as well, from blink reflexes to breathing, reaching, grabbing and keeping your balance. Typically, you're consciously aware of what you did, but not how you did it
  • On the other hand, behaviors that involve unfamiliar situations, self-control, complicated logical rules, abstract reasoning or manipulation of language tend to be conscious. They're known as behavioral correlates of consciousness, and they're closely linked to the effortful, slow and controlled way of thinking that psychologists call "System 2"
  • Evidence suggests that of the roughly $10^7$ bits of information that enter our brain each second from our sensory organs, we can be aware of only a tiny fraction, with estimates ranging from 10 to 50 bits
  • It takes about a quarter of a second from when light enters your eye from a complex object until you consciously perceive seeing it as what it is

  • I think that consciousness is a physical phenomenon that feels non-physical because it's like waves and computations: it has properties independent of its specific physical substrate
  • Four necessary conditions for consciousness
    • Information principle: A conscious system has substantial information-storage capacity
    • Dynamics principle: A conscious system has substantial information-processing capacity
    • Independence principle: A conscious system has substantial independence from the rest of the world
    • Integration principle: A conscious system cannot consist of nearly independent parts

  • Imagine using future technology to build a direct communication link between two human brains, and gradually increasing the capacity of this link until communication is as efficient between the brains as it is within them. Would there come a moment when the two individual consciousnesses suddenly disappear and get replaced by a single unified one as IIT predicts, or would the transition be gradual so that the individual consciousnesses coexisted in some form even as a joint experience began to emerge?

  • Although the conscious information processing in our brains appears to be merely the tip of an otherwise unconscious iceberg, we should expect the situation to be even more extreme for large future AIs: if they have a single consciousness, then it's likely to be unaware of almost all the information processing taking place within it

  • First we humans discovered how to replicate some natural processes with machines, making our own wind and lightning, and our own mechanical horsepower
  • Gradually, we started realizing that our bodies were also machines
  • Then the discovery of nerve cells started blurring the borderline between body and mind
  • Then we started building machines that could outperform not only our muscles, but our minds as well

  • The expectation that good things will happen if you plan carefully and work hard for them

  • Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf, 2017.