Shared posts

21 Apr 19:06

Light On Dark Matter

by Robin Hanson

I posted recently on the question of what makes up the “dark matter” intangible assets that today are most of firm assets. Someone pointed me to a 2009 paper with answers:

[Chart: shares of intangible investment by category]

[C.I. = ] Computerized information is largely composed of the NIPA series for business investment in computer software. …

[Scientific R&D] is designed to capture innovative activity built on a scientific base of knowledge. … Non-scientific R&D includes the revenues of the non-scientific commercial R&D industry … the costs of developing new motion picture films and other forms of entertainment, investments in new designs, and a crude estimate of the spending for new product development by financial services and insurance firms. …

[Brand equity] includes spending on strategic planning, spending on redesigning or reconfiguring existing products in existing markets, investments to retain or gain market share, and investments in brand names. Expenditures for advertising are a large part of the investments in brand equity, but … we estimated that only about 60 percent of total advertising expenditures were for ads that had long-lasting effects. …

Investment in firm-specific human and structural resources … includes the costs of employer-provided worker training and an estimate of management time devoted to enhancing the productivity of the firm. … business investments in firm-specific human and structural resources through strategic planning, adaptation, reorganization, and employee-skill building. (more; HT Brandon Pizzola)

According to this paper, more firm-specific resources is the biggest story, but more product development is also important. More software is third in importance.

Added 15Apr: On reflection, this seems to suggest that the main story is our vast increase in product variety. That explains the huge increase in investments in product development and firm-specific resources, relative to more generic development and resources.

11 Apr 01:18

The Real Reason College Tuition Costs So Much - NYTimes.com

by blog
Bjorno

This is a must read.

10 Apr 18:27

‘About’ Isn’t About You

by Robin Hanson

Imagine you told people:

  1. What looks like the sky above is actually the roof of a cave, and trees hold it up.
  2. The food we eat doesn’t give us nutrition; we get nutrition by rubbing rocks.
  3. The reason we wear clothes isn’t for modesty or protection from weather, but instead to keep cave frogs from jumping on our skin.

Imagine that you offered plausible evidence for these claims. But imagine further that people mostly took your claims as personal accusations, and responded defensively:

“Don’t look at me. I’ve always been a big supporter of trees, I’ve always warned against the dangers of frogs, and I make sure to rub rocks regularly.”

Other than being defensive, however, people showed little interest in these revelations. How would that make you feel?

That is how I feel about typical responses to my saying politics isn’t about policy, medicine isn’t about health, charity isn’t about helping, etc. People usually focus on proving that even if I’m right about others, they are the rare exceptions. They offer specific evidence on their personal behavior to prove that for them politics is about policy, medicine is about health, charity is about helping, etc. But aside from that, they show little interest in what such hypotheses might imply about the world in which they live. (They are, however, often eager to point out that I may have illicit motivations for pointing all this out.)

To which I respond: really, “X is not about Y” is not about you. Yes, your forager ancestors were hyper-sensitive to being singled out by public accusations of norm violations, and in fact much of our reasoning and story abilities may have evolved to help us defend against such accusations, and to make such accusations against others. So yes your instincts naturally push you to react this way.

But I’m talking about ways that we all violate the norms to which we all give lip service. I’m not trying to shame some of us, or even all of us, into trying harder to live up to our professed ideals. I’m focused first and foremost on making sense of our world. If I really believed that the sky might really be the roof of a cave held up by trees, or that we wear clothes to protect against frogs, I wouldn’t focus first on making sure that I was very publicly pro-tree and anti-frog; I’d instead ask what else I must rethink, given such revelations.

Once we better understand the basics of what we are doing in areas like policy, medicine, charity, etc. then we might start to ask if we should be doing more or less of those things, and if invoking norms, and shaming norm violators, will help or hurt on net. But first someone needs to figure out the basics of what we are doing in these areas of life. I implore some of you to join me in this noble quest.

06 Apr 00:49

Firms Now 5/6 Dark Matter!

by Robin Hanson

Scott Sumner:

We all know that the capital-intensive businesses of yesteryear like GM and US steel are an increasingly small share of the US economy. But until I saw this post by Justin Fox I had no idea how dramatic the transformation had been since 1975:

[Chart: tangible vs. intangible share of S&P 500 market value since 1975]

Wow. I had no idea either. As someone who teaches graduate industrial organization, I can tell you this is HUGE. And I’ve been pondering it for the week since Scott posted the above.

Let me restate the key fact. The S&P 500 are five hundred big public firms listed on US exchanges. Imagine that you wanted to create a new firm to compete with one of these big established firms. So you wanted to duplicate that firm’s products, employees, buildings, machines, land, trucks, etc. You’d hire away some key employees and copy their business process, at least as much as you could see and were legally allowed to copy.

Forty years ago the cost to copy such a firm was about 5/6 of the total stock price of that firm. So 1/6 of that stock price represented the value of things you couldn’t easily copy, like patents, customer goodwill, employee goodwill, regulator favoritism, and hard to see features of company methods and culture. Today it costs only 1/6 of the stock price to copy all a firm’s visible items and features that you can legally copy. So today the other 5/6 of the stock price represents the value of all those things you can’t copy.

So in forty years we’ve gone from a world where it was easy to see most of what made the biggest public firms valuable, to a world where most of that value is invisible. From 1/6 dark matter to 5/6 dark matter. What can possibly have changed so much in less than four decades? Some possibilities:

Error – Anytime you focus on the most surprising number you’ve seen in a long time, you gotta wonder if you’ve selected for an error. Maybe they’ve really screwed up this calculation.

Selection – Maybe big firms used to own factories, trucks etc., but now they hire smaller and foreign firms that own those things. So if we looked at all the firms we’d see a much smaller change in intangibles. One check: over half of Wilshire 5000 firm value is also intangible.

Methods – Maybe firms previously used simple generic methods that were easy for outsiders to copy, but today firms are full of specialized methods and culture that outsiders can’t copy because insiders don’t even see or understand them very well. Maybe, but forty years ago firm methods sure seemed plenty varied and complex.

Innovation – Maybe firms are today far more innovative, with products and services that embody more special local insights, and that change faster, preventing others from profiting by copying. But this should increase growth rates, which we don’t see. And product cycles don’t seem to be faster. Total US R&D spending hasn’t changed much as a GDP fraction, though private spending is up by less than a factor of two, and public spending is down.

Patents – Maybe innovation isn’t up, but patent law now favors patent holders more, helping incumbents to better keep out competitors. Patents granted per year in the US have risen from 77K in 1975 to 326K in 2014. But patent law isn’t obviously so much more favorable. Some even say it has weakened a lot in the last fifteen years.

Regulation – Maybe regulation favoring incumbents is far stronger today. But 1975 wasn’t exactly a low-regulation nirvana. Could regulation really have changed so much?

Employees – Maybe employees used to jump easily from firm to firm, but are now stuck at firms because of health benefits, etc. So firms gain from being able to pay stuck employees less, due to less competition for them. But in fact average and median employee tenure is down since 1975.

Advertising – Maybe more ads have created more customer loyalty. But ad spending hasn’t changed much as a fraction of GDP. Could ads really be that much more effective? And if they were, wouldn’t firms be spending more on them?

Brands – Maybe when we are richer we care more about the identity that products project, and so are willing to pay more for brands with favorable images. And maybe it takes a long time to make a new favorable brand image. But does it really take that long? And brand loyalty seems to actually be down.

Monopoly – Maybe product variety has increased so much that firm products are worse substitutes, giving firms more market power. But I’m not aware that any standard measures of market concentration (such as HHI) have increased a lot over this period.

Alas, I don’t see a clear answer here. The effect that we are trying to explain is so big that we’ll need a huge cause to drive it. Yes it might have several causes, but each will then have to be big. So something really big is going on. And whatever it is, it is big enough to drive many other trends that people have been puzzling over.

Added 5p: This graph gives the figure for every year from ’73 to ’07.

Added 8p: This post shows debt/equity of S&P 500 firms increasing from ~28% to ~42% from ’75 to ’15. This can explain only a small part of the increase in intangible assets. Adding debt to tangibles in the numerator and denominator gives intangibles going from 13% in ’75 to 59% in ’15.
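
For concreteness, here is a minimal sketch of one reading of that adjustment that reproduces the post’s numbers: normalize equity (market cap) to 1, treat tangibles as the copyable fraction of equity value, and add debt to both the tangibles and total firm value.

    # Rough check of the debt adjustment described above (illustrative arithmetic only).
    def intangible_share_with_debt(intangible_share_of_equity, debt_to_equity):
        """Add debt to both tangible assets (numerator) and firm value (denominator)."""
        tangibles = 1.0 - intangible_share_of_equity   # tangible share, with equity = 1.0
        firm_value = 1.0 + debt_to_equity              # equity + debt
        return 1.0 - (tangibles + debt_to_equity) / firm_value

    print(intangible_share_with_debt(1/6, 0.28))  # 1975: ~0.13
    print(intangible_share_with_debt(5/6, 0.42))  # 2015: ~0.59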

Added 8a 6Apr: Tyler Cowen emphasizes that accountants underestimate the market value of ordinary capital like equipment, but he neither gives nor points to an estimate of the typical size of that effect.

04 Apr 14:54

"One of history’s few iron laws is that luxuries tend to become necessities and to spawn new obligations."

by Joe Koster
Bjorno

I can't remember the last email where I carefully considered what I wanted to say and how to phrase it. A few business ones... but no personal ones.

From Sapiens: A Brief History of Humankind:
One of history’s few iron laws is that luxuries tend to become necessities and to spawn new obligations. Once people get used to a certain luxury, they take it for granted. Then they begin to count on it. Finally they reach a point where they can’t live without it. Let’s take another familiar example from our own time. Over the last few decades, we have invented countless time-saving devices that are supposed to make life more relaxed – washing machines, vacuum cleaners, dishwashers, telephones, mobile phones, computers, email. Previously it took a lot of work to write a letter, address and stamp an envelope, and take it to the mailbox. It took days or weeks, maybe even months, to get a reply. Nowadays I can dash off an email, send it halfway around the globe, and (if my addressee is online) receive a reply a minute later. I’ve saved all that trouble and time, but do I live a more relaxed life? 
Sadly not. Back in the snail-mail era, people usually only wrote letters when they had something important to relate. Rather than writing the first thing that came into their heads, they considered carefully what they wanted to say and how to phrase it. They expected to receive a similarly considered answer. Most people wrote and received no more than a handful of letters a month and seldom felt compelled to reply immediately. Today I receive dozens of emails each day, all from people who expect a prompt reply. We thought we were saving time; instead we revved up the treadmill of life to ten times its former speed and made our days more anxious and agitated. 

04 Apr 14:52

MLB's new commissioner hopes baseball's scoring drought will fix itself

by Cork Gaines

In 2014, scoring in Major League Baseball fell to 8.1 runs per game, a 20.8% drop in just 15 years. At the same time, MLB games averaged a record 3 hours, 2 minutes in 2014, a 15-minute increase in just the last ten years.

That combination creates the scary chart below in which scoring and the length of games are moving in opposite directions.

[Chart: MLB runs per game vs. average game length]

However, rookie Commissioner Rob Manfred says he is only worried about one of the lines and is hopeful the other will take care of itself.

MLB has already addressed the time of games by implementing new "pace of play" rules for the 2015 season, which will include a requirement for batters to remain in the batter's box in between pitches if there was no swing, and a new clock limiting the time between innings.

A more extensive version of these pace of play rules shortened games in the Arizona Fall League by an average of ten minutes.

However, Manfred is less concerned about offense, noting that there is not the same "outcry for changes to promote offense" as there was for the pace-of-play changes, according to Tyler Kepner of the New York Times. Manfred also says the preferred outcome is to see hitters adjust and increase offense naturally.

"[Texas Rangers slugger] Prince Fielder actually laid down a bunt down the third-base line in a spring training game the other day," said Manfred, referring to Fielder's attempt to beat a defensive shift. "That kind of epitomizes the question in our minds: Are these great players going to adjust in a way that we don’t have to do anything? That’s the preferred outcome, from our perspective."

Manfred's point is a valid one. The chart above is only scary if the scoring is going down while games are getting longer. It is the combination of those two factors that creates what some feel are boring games. But as most baseball fans will tell you, a 2-1 game can be a very exciting game as long as it doesn't take 3.5 hours, and a 3.5-hour game can be fun if the score is 10-9.

If baseball can fix one line (the time), it can afford to be patient and wait for the other line to fix itself.


02 Apr 00:37

Why you should take notes by hand — not on a laptop

by Alex Tabarrok

Vox reports on a study comparing taking notes by hand versus using a laptop:

For the first study, the students watched a 15-minute TED talk and took notes on it, then took a test on it half an hour afterward. Some of the test questions were straightforward, asking for a particular figure or fact, while others were conceptual, and asked students to compare or analyze ideas.

The two groups of students — laptop users and hand-writers — did pretty similarly on the factual questions. But the laptop users did significantly worse on the conceptual ones:

[Chart: performance on factual vs. conceptual questions, laptop vs. longhand note-takers]

The problem appears to be that the laptop turns students into stenographers, people who write down everything they hear as quickly as they can. Students who take handwritten notes, however, try to process the material as they are writing it down so that they only have to write down the key ideas. Forcing the brain to extract the most vital information is actually when the learning happens.

The laptops resulted in worse learning even under the study conditions when they were actually used to take notes. In the real world, the laptops are a tempting distraction. I am reminded of the day my son came to my class. He sat in the back and afterwards he said “Dad, I can see why you are so interested in online education. Half of your students are online during your class already.”

31 Mar 21:07

How a genre of music affects life expectancy of famous musicians in that genre

by Tyler Cowen

[Chart: average age at death of musicians, by genre]

That is from Dianna Theodora Kenny, via Ted Gioia.  Kenny notes:

For male musicians across all genres, accidental death (including all vehicular incidents and accidental overdose) accounted for almost 20% of all deaths. But accidental death for rock musicians was higher than this (24.4%) and for metal musicians higher still (36.2%).

Suicide accounted for almost 7% of all deaths in the total sample. However, for punk musicians, suicide accounted for 11% of deaths; for metal musicians, a staggering 19.3%. At just 0.9%, gospel musicians had the lowest suicide rate of all the genres studied.

Murder accounted for 6.0% of deaths across the sample, but was the cause of 51% of deaths in rap musicians and 51.5% of deaths for hip hop musicians, to date.

Beware selection, because of course most rap musicians aren’t dead yet.  This problem will be more extreme, the younger is the genre.  Another selection effect may be that getting killed, or dying in an unusual way, contributes to your fame.

25 Mar 20:25

The reallocation of talent in the American economy

by Tyler Cowen

At the Massachusetts Institute of Technology, a premier source of young recruits, only 9.9 percent of undergraduates went into finance in 2013, compared with the 31 percent that took jobs on Wall Street in 2006, before the financial crisis. Software companies, meanwhile, hired 28.1 percent of M.I.T. graduates in 2013, compared with 10.5 percent in 2006.

That is from Popper and Dougherty in the NYT, via Binyamin Appelbaum.

24 Mar 20:16

Let it go. (photo via mattryd7)

20 Mar 14:56

Kara Tippetts, a woman who wrote an open letter to Brittany Maynard, is about to die


Kara Tippetts, an author and mother of four, has terminal breast cancer at the age of 38. (Photo by Jay “Napoleon” Lyons)

A Christian author and blogger with terminal cancer who tried to convince Brittany Maynard to reconsider her November decision to die through doctor-assisted suicide is facing her own death.

Maynard made headlines as the 29-year-old who chose to die on Nov. 1 by taking a legal lethal prescription as she faced an aggressive cancerous brain tumor.


Tippetts, a Colorado Springs wife of a pastor and 38-year-old mother of four who was diagnosed two years ago with stage four breast cancer, has become the poster face of an opposite view. Her book publicist confirmed on Thursday that her family believes she is close to death.

Tippetts’s open letter to Maynard on Ann Voskamp’s popular blog went viral in many Christian circles. “Dear heart, we simply disagree,” Tippetts wrote. “Suffering is not the absence of goodness, it is not the absence of beauty, but perhaps it can be the place where true beauty can be known. In your choosing your own death, you are robbing those that love you with the such tenderness, the opportunity of meeting you in your last moments and extending you love in your last breaths.”

Tippetts argued in her post that hastening death is not what God intended.

I get to partner with my doctor in my dying, and it’s going to be a beautiful and painful journey for us all.

But, hear me —  it is not a mistake —

beauty will meet us in that last breath.

Her story was picked up by Ross Douthat, who wrote about the debate represented by Maynard and Tippetts.

“The future of the assisted suicide debate may depend, in part, on whether Tippetts’s case for the worth of what can seem like pointless suffering can be made either without her theological perspective, or by a liberalism more open to metaphysical arguments than the left is today,” Douthat wrote.

Kara Tippetts, a mother of four with terminal breast cancer, tried to convince cancer patient Brittany Maynard to reconsider her November decision to die through doctor-assisted suicide. Now Tippetts is in hospice care. (Jay “Napoleon” Lyons)

Tippetts was admitted into hospice care in December. On Friday, her husband Jason Tippetts wrote about his wife’s final days.

“I have an us that cannot be lost,” Jason Tippetts wrote. “And I still get small moments where we are us. But I grieve as I watch her fade. The peace that is in our house is amazing, peace in the midst of tears, peace in the midst of impending loss, but it is peace.”

Jay Lyons, a producer who is a friend of the Tippetts, has raised more than $15,000, surpassing his goal of $13,750, to create a documentary.

Before her death in November, Maynard became an advocate for legal protections for terminally ill patients who want to die with medical assistance, a practice legal in five U.S. states.

“Goodbye to all my dear friends and family that I love. Today is the day I have chosen to pass away with dignity in the face of my terminal illness, this terrible brain cancer that has taken so much from me … but would have taken so much more,” she wrote on Facebook before her death.

Brittany Maynard, a 29-year-old with terminal brain cancer, tells her story and explains why she plans to ingest a prescription that will end her life on Nov. 1 in this video from advocacy group Compassion & Choices. (Compassion & Choices via YouTube)

NPR host Diane Rehm has emerged as a key force in the end-of-life debates. Americans are divided on the role of medicine in the issue, according to recent Pew Research surveys. When asked about end-of-life decisions for other people, two-thirds of Americans say there are at least some situations in which a patient should be allowed to die, while nearly a third say that medical professionals always should do everything possible to save a patient’s life. Of those polled, 47 percent approved and 49 percent disapproved of laws that would allow a physician to prescribe lethal doses of drugs for a terminally ill patient.


27 Feb 12:56

Does using Facebook make you happier?

by Tyler Cowen

I’ve long suggested that those worried about inequality, envy, and relative deprivation should tax Facebook rather than the private fortune of Bill Gates.  Most envy is local, and connected to people you know and whose lives you are in touch with.  Along these lines, here is some recent research by Verduyn et al.:

Prior research indicates that Facebook usage predicts declines in subjective well-being over time. How does this come about? We examined this issue in 2 studies using experimental and field methods. In Study 1, cueing people in the laboratory to use Facebook passively (rather than actively) led to declines in affective well-being over time. Study 2 replicated these findings in the field using experience-sampling techniques. It also demonstrated how passive Facebook usage leads to declines in affective well-being: by increasing envy. Critically, the relationship between passive Facebook usage and changes in affective well-being remained significant when controlling for active Facebook use, non-Facebook online social network usage, and direct social interactions, highlighting the specificity of this result. These findings demonstrate that passive Facebook usage undermines affective well-being.

The pointer is from Robin Hanson on Twitter.

21 Feb 05:55

The Rise of Opaque Intelligence

by Alex Tabarrok

Many years ago I had a job picking up and delivering packages in Toronto. Once the boss told me to deliver package A then C then B when A and B were closer together and delivering ACB would lengthen the trip. I delivered ABC and when the boss found out he wasn’t happy because C needed their package a lot sooner than B and distance wasn’t the only variable to be optimized. I recall (probably inaccurately) the boss yelling:

Listen college boy, I’m not paying you to think. I’m paying you to do what I tell you to do.

It isn’t easy suppressing my judgment in favor of someone else’s judgment even if the other person has better judgment (ask my wife), but once it was explained to me I at least understood why my boss’s judgment made sense. More and more, however, we are being asked to suppress our judgment in favor of that of an artificial intelligence, a theme in Tyler’s Average is Over. As Tyler notes:

…there will be Luddites of a sort. “Here are all these new devices telling me what to do—but screw them; I’m a human being! I’m still going to buy bread every week and throw two-thirds of it out all the time.” It will be alienating in some ways. We won’t feel that comfortable with it. We’ll get a lot of better results, but it won’t feel like utopia.

I put this slightly differently: the problem isn’t artificial intelligence but opaque intelligence. Algorithms have now become so sophisticated that we humans can’t really understand why they are telling us what they are telling us. The WSJ writes about drivers using UPS’s super algorithm, Orion, to plan their delivery route:

Driver reaction to Orion is mixed. The experience can be frustrating for some who might not want to give up a degree of autonomy, or who might not follow Orion’s logic. For example, some drivers don’t understand why it makes sense to deliver a package in one neighborhood in the morning, and come back to the same area later in the day for another delivery. But Orion often can see a payoff, measured in small amounts of time and money that the average person might not see.

One driver, who declined to speak for attribution, said he has been on Orion since mid-2014 and dislikes it, because it strikes him as illogical.

Human drivers think Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic is indistinguishable from stupidity.
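
To see how a route that looks illogical on distance alone can still win, here is a toy sketch (not UPS’s actual algorithm) with made-up travel times and a hypothetical urgency weight per stop, echoing the ACB story above:

    # Toy comparison of two delivery orders; all numbers are invented for illustration.
    travel = {("depot", "A"): 10, ("A", "B"): 5, ("B", "C"): 15,
              ("A", "C"): 15, ("C", "B"): 15}   # minutes between stops
    urgency = {"A": 1.0, "B": 0.5, "C": 3.0}    # C needs its package much sooner

    def cost(route):
        """Sum of (urgency * arrival time) over the stops of a route."""
        clock, total, prev = 0, 0.0, "depot"
        for stop in route:
            clock += travel[(prev, stop)]
            total += urgency[stop] * clock
            prev = stop
        return total

    print("ABC:", cost("ABC"))   # shorter drive (30 min), but urgent stop C waits longest
    print("ACB:", cost("ACB"))   # longer drive (40 min), yet lower weighted waiting cost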

Hat tip: Robin Hanson for discussion.

20 Feb 15:24

Investing and speculation...

by Joe Koster
Bjorno

“An investment operation is one which, upon thorough analysis, promises safety of principal and an adequate return. Operations not meeting these requirements are speculative.”
Everyone should have this memorized.

From Brandes on Value:
Benjamin Graham addressed the differences between investing and speculation on the very first page of his book The Intelligent Investor: “An investment operation is one which, upon thorough analysis, promises safety of principal and an adequate return. Operations not meeting these requirements are speculative.” 
This still rings true today. Yet, with my contemporary perspective, I add two more criteria that define speculation:   
  • Any contemplated holding period shorter than a normal business cycle (typically three to five years)  
  • Any purchase based solely on anticipated market movements
..................

Related previous posts:

Graham and Dodd on the ‘Relation of the Future to Investment and Speculation’

Warren Buffett on investment, speculation, and gambling

James Grant quote

12 Feb 16:01

‘So the Opposite of Addiction Is Not Sobriety. It Is Human Connection.’

by John Gruber

Compelling piece by Johann Hari, author of a new book on the war against drugs:

The experiment is simple. Put a rat in a cage, alone, with two water bottles. One is just water. The other is water laced with heroin or cocaine. Almost every time you run this experiment, the rat will become obsessed with the drugged water, and keep coming back for more and more, until it kills itself.

The advert explains: “Only one drug is so addictive, nine out of ten laboratory rats will use it. And use it. And use it. Until dead. It’s called cocaine. And it can do the same thing to you.”

But in the 1970s, a professor of Psychology in Vancouver called Bruce Alexander noticed something odd about this experiment. The rat is put in the cage all alone. It has nothing to do but take the drugs. What would happen, he wondered, if we tried this differently? So Professor Alexander built Rat Park. It is a lush cage where the rats would have colored balls and the best rat-food and tunnels to scamper down and plenty of friends: everything a rat about town could want. What, Alexander wanted to know, will happen then?

In Rat Park, all the rats obviously tried both water bottles, because they didn’t know what was in them. But what happened next was startling.

The rats with good lives didn’t like the drugged water. They mostly shunned it, consuming less than a quarter of the drugs the isolated rats used. None of them died. While all the rats who were alone and unhappy became heavy users, none of the rats who had a happy environment did.

23 Jan 22:10

Digit Finds Extra Money in Your Budget and Automatically Saves It

by Kristin Wong


If you want to sock away more money, automating your savings is the way to go. Digit takes this a step further by calculating how much you can afford to save every few days. As your income and spending change, it automatically adjusts your savings.

23 Jan 21:44

The AI Revolution: The Road to Superintelligence

by Tim Urban

Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 is here.

_______________

We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge

 

What does it feel like to stand here?

[Graph: human progress over time, with a stick figure standing at the present-day edge of the curve]

It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:

[Graph: the same spot on the curve, with everything to the figure’s right hidden from view]

Which probably feels pretty normal…

_______________

The Far Future—Coming Soon

Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die.

But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn’t make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.

And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they’d experience, they have to go enough years ahead that a “die level of progress,” or a Die Progress Unit (DPU) has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.

This works on smaller scales too. The movie Back to the Future came out in 1985, and “the past” took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes—but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones—today’s Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie’s Marty McFly was in 1955.

This is for the same reason we just discussed—the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the prior 30.

So—advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.2
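
A rough numerical sketch of that arithmetic (my own illustrative model, not Kurzweil’s actual one): start the rate of progress at five times the 20th-century average, as the paragraph above says for the year 2000, and assume the rate itself doubles every 10 years.

    # Illustrative-only model: how long each "20th-century unit" of progress takes
    # if the rate of progress starts at 5x the 20th-century average in 2000 and
    # doubles every 10 years (the doubling time is an assumption for illustration).
    rate_2000 = 5 / 100      # "20th-century units" of progress per year in 2000
    doubling_years = 10

    year, progress, unit, step = 2000.0, 0.0, 1, 0.01
    while unit <= 4:
        progress += rate_2000 * 2 ** ((year - 2000) / doubling_years) * step
        year += step
        if progress >= unit:
            print(f"unit #{unit} of 20th-century-scale progress reached around {year:.0f}")
            unit += 1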

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015—i.e. the next DPU might only take a couple decades—and the world in 2050 might be so vastly different than today’s world that we would barely recognize it.

This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.

So then why, when you hear me say something like “the world 35 years from now might be totally unrecognizable,” are you thinking, “Cool….but nahhhhhhh”? Three reasons we’re skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It’s most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.

[Graph: linear vs. exponential projections of future progress]
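
A tiny numerical sketch of the three habits of projection described above, using invented per-decade “progress” numbers that happen to double each decade:

    # Toy numbers only: per-decade "progress" over the last 30 years.
    recent_decades = [10, 20, 40]

    linear       = sum(recent_decades)            # "the next 30 years will look like the last 30"
    current_rate = recent_decades[-1] * 3         # "today's rate simply continues"
    exponential  = sum(recent_decades[-1] * 2 ** k for k in (1, 2, 3))  # the rate keeps doubling

    print(linear, current_rate, exponential)      # 70 vs. 120 vs. 560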

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn’t totally smooth and uniform. Kurzweil explains that progress happens in “S-curves”:

[Graph: successive S-curves of technological progress]

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures3

If you look only at very recent history, the part of the S-curve you’re on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that’s missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.
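
As a minimal sketch, one paradigm’s S-curve can be stood in for by a logistic function (the logistic form is my illustrative choice, not something from the post):

    # One paradigm's S-curve, modeled with a logistic function (illustrative choice only).
    import math

    def s_curve(t, midpoint=0.0, steepness=1.0):
        """Slow growth, then explosive growth, then a leveling off."""
        return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

    for t in range(-6, 7, 2):
        print(f"t={t:+d}  progress={s_curve(t):.2f}")
    # Far below the midpoint the curve barely moves (phase 1), around the midpoint it
    # shoots up (phase 2), and well past it the curve flattens near its ceiling (phase 3).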

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as “the way things happen.” We’re also limited by our imagination, which takes our experience and uses it to conjure future predictions—but often, what we know simply doesn’t give us the tools to think accurately about the future.2 When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while nahhhhh might feel right as you read this post, it’s probably actually wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what’s going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that’s coming next.

_______________

The Road to Superintelligence

What Is AI?

If you’re like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you’ve been hearing it mentioned by serious people, and you don’t really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don’t realize it’s AI. John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.”4 Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to “insisting that the Internet died in the dot-com bust of the early 2000s.”5

So let’s clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.

Secondly, you’ve probably heard the term “singularity” or “technological singularity.” This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world. I found that many of today’s AI thinkers have stopped using the term, and it’s confusing anyway, so I won’t use it much here (even though we’ll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words immortality and extinction will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.

Let’s take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Where We Are Currently—A World Running on ANI

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

  • Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection systems. Google’s self-driving car, which is being tested now, will contain robust ANI systems that allow it to perceive and react to the world around it.
  • Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from Pandora, check tomorrow’s weather, talk to Siri, or dozens of other everyday activities, you’re using ANI.
  • Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what’s spam and what’s not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences (a bare-bones sketch of this kind of learning appears just after this list). The Nest Thermostat does the same thing as it starts to figure out your typical routine and act accordingly.
  • You know the whole creepy thing that goes on when you search for a product on Amazon and then you see that as a “recommended for you” product on a different site, or when Facebook somehow knows who it makes sense for you to add as a friend? That’s a network of ANI systems, working together to inform each other about who you are and what you like and then using that information to decide what to show you. Same goes for Amazon’s “People who bought this also bought…” thing—that’s an ANI system whose job it is to gather info from the behavior of millions of customers and synthesize that info to cleverly upsell you so you’ll buy more things.
  • Google Translate is another classic ANI system—impressively good at one narrow task. Voice recognition is another, and there are a bunch of apps that use those two ANIs as a tag team, allowing you to speak a sentence in one language and have the phone spit out the same sentence in another.
  • When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
  • The world’s best Checkers, Chess, Scrabble, Backgammon, and Othello players are now all ANI systems.
  • Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s Newsfeed.
  • And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on US markets6), and in expert systems like those that help doctors make diagnoses and, most famously, IBM’s Watson, who contained enough facts and understood coy Trebek-speak well enough to soundly beat the most prolific Jeopardy champions.
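
Picking up the spam-filter example from the list above, here is a bare-bones sketch of that learn-from-your-mail loop (a toy scoring model, not any real filter’s algorithm):

    # Toy spam filter: ship with some prior word scores, then tailor them to the user.
    from collections import defaultdict

    word_spam_score = defaultdict(float, {"winner": 2.0, "meeting": -1.5})

    def classify(message, threshold=1.0):
        return sum(word_spam_score[w] for w in message.lower().split()) > threshold

    def learn(message, is_spam):
        """Nudge the scores toward this user's own judgments, one message at a time."""
        for w in message.lower().split():
            word_spam_score[w] += 0.5 if is_spam else -0.5

    learn("free cruise winner", True)         # the user marks this one as spam
    learn("project meeting at noon", False)   # and keeps this one
    print(classify("free winner cruise"))     # True: the learned scores cross the threshold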

ANI systems as they are now aren’t especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.

The Road From ANI to AGI

Why It’s So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down—all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What’s interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you’d think they are. Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat—spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it. Or, as computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.'”7

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it’s not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us. Think about it—which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?

One fun example—when you look at this, you and a computer both can figure out that it’s a rectangle with two distinct shades, alternating:

[Image: a partly covered picture showing a rectangle with two alternating shades]

Tied so far. But if you pick up the black and reveal the whole image…

[Image: the uncovered picture of opaque and translucent cylinders, slats, and 3-D corners]

…you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees—a variety of two-dimensional shapes in several different shades—which is actually what’s there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray.8 And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is—a photo of an entirely-black, 3-D rock:

[Photo: an entirely black, 3-D rock]

Credit: Matthew Lloyd

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark—around 10^16, or 10 quadrillion cps.
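
A minimal sketch of that scaling step (the per-region numbers below are hypothetical placeholders; only the 10^16 ballpark comes from the text):

    # Kurzweil-style shortcut: scale one region's estimated cps by its share of brain mass.
    # The region figures here are invented placeholders, not published estimates.
    def whole_brain_cps(region_cps, region_fraction_of_brain_mass):
        """Extrapolate from one structure to the whole brain, proportionally by mass."""
        return region_cps / region_fraction_of_brain_mass

    # e.g. a hypothetical structure estimated at 1e14 cps that is ~1% of brain mass
    print(f"{whole_brain_cps(1e14, 0.01):.0e}")   # 1e+16 cps, the 10 quadrillion ballpark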

Currently, the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human-level—10 quadrillion cps—then that’ll mean AGI could become a very real part of life.

Moore’s Law is a historically-reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10 trillion cps/$1,000, right on pace with this graph’s predicted trajectory:9

[Graph: exponential growth of computing, calculations per second per $1,000 over time]

So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level. This doesn’t sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
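
If you want to play with that trajectory yourself, here’s a minimal back-of-the-envelope extrapolation. It assumes the roughly 1,000x-per-decade growth in cps per $1,000 implied by the 1985/1995/2005/2015 milestones above, which is a rough assumption rather than a forecast.

# A back-of-the-envelope extrapolation of cps per $1,000, assuming the roughly
# 1,000x-per-decade growth implied above (trillionth of human level in 1985,
# billionth in 1995, millionth in 2005, thousandth in 2015).
BRAIN_CPS = 1e16                # the ~10 quadrillion cps human-brain figure
fraction_of_human = 1e-3        # where $1,000 of hardware sits in 2015
year = 2015
while fraction_of_human < 1.0:
    year += 10
    fraction_of_human *= 1000   # another decade of ~1,000x growth
print(year, "%.0e cps per $1,000" % (fraction_of_human * BRAIN_CPS))   # -> 2025 1e+16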

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we’ll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn’t make a computer generally intelligent—the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making it Smart

This is the icky part. The truth is, no one really knows how to make it smart—we’re still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most common strategies I came across:

1) Plagiarize the brain.

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.” It makes sense—we’re stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing—optimistic estimates say we can do this by 2030. Once we do that, we’ll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor “neurons,” connected to each other with inputs and outputs, and it knows nothing—like an infant brain. The way it “learns” is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it’s told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it’s told it was wrong, those pathways’ connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we’re discovering ingenious new ways to take advantage of neural circuitry.
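
For the curious, here’s a stripped-down sketch of that strengthen-the-right-pathways loop: a single artificial “neuron” learning to tell apart two made-up letter patterns. It’s a toy illustration of the feedback idea, nothing like a real handwriting recognizer.

# A minimal "strengthen what worked, weaken what didn't" loop: one artificial neuron
# learning to separate two made-up 3x3 pixel patterns (1 = ink, 0 = blank).
import random

PATTERNS = [
    ([1, 1, 1, 1, 0, 1, 1, 1, 1], 0),   # pretend this is an "O"
    ([0, 1, 0, 0, 1, 0, 0, 1, 0], 1),   # pretend this is an "I"
]

weights = [random.uniform(-1, 1) for _ in range(9)]   # starts out knowing nothing
bias = 0.0
LEARNING_RATE = 0.2

for _ in range(200):                                  # trial and feedback, over and over
    pixels, target = random.choice(PATTERNS)
    activation = sum(w * p for w, p in zip(weights, pixels)) + bias
    guess = 1 if activation > 0 else 0
    error = target - guess                            # the "right/wrong" feedback signal
    # strengthen or weaken each connection according to its role in the guess
    weights = [w + LEARNING_RATE * error * p for w, p in zip(weights, pixels)]
    bias += LEARNING_RATE * error

for pixels, target in PATTERNS:
    activation = sum(w * p for w, p in zip(weights, pixels)) + bias
    print(target, 1 if activation > 0 else 0)         # the two columns should now agree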

More extreme plagiarism involves a strategy called “whole brain emulation,” where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We’d then have a computer officially capable of everything the brain is capable of—it would just need to learn and gather information. If engineers get really good, they’d be able to emulate a real brain with such exact accuracy that the brain’s full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he’d probably be really excited about.

How far are we from achieving whole brain emulation? Well so far, we’ve only just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress—now that we’ve conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

2) Try to make evolution do what it did before but for us this time.

So if we decide the smart kid’s test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here’s something we know. Building a computer as powerful as the brain is possible—our own brain’s evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird’s wing-flapping motions—often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called “genetic algorithms,” would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures “perform” by living life and are “evaluated” by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
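
A toy version of that loop might look like the sketch below, with bit strings standing in for the competing “computers” and a made-up fitness score standing in for the evaluation step. Illustrative only; real genetic algorithms and the tasks they chew on are far more elaborate.

# A toy genetic-algorithm loop: candidate "programs" are bit strings, "performance"
# is a made-up fitness score, the best half breed by splicing halves together, and
# the rest are eliminated.
import random

GENOME_LENGTH = 20
POPULATION = 30
GENERATIONS = 50

def fitness(genome):
    # stand-in for "how well this computer performed the task"
    return sum(genome)

def breed(a, b):
    half = GENOME_LENGTH // 2
    child = a[:half] + b[half:]              # half of each parent's "programming"
    if random.random() < 0.1:                # an occasional mutation / targeted tweak
        i = random.randrange(GENOME_LENGTH)
        child[i] = 1 - child[i]
    return child

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]             # the less successful are eliminated
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(POPULATION - len(survivors))]
    population = survivors + children

print(fitness(max(population, key=fitness)), "out of a possible", GENOME_LENGTH)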

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Second, evolution doesn’t aim for anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—when we can remove those extra burdens and use things like electricity. There’s no doubt we’d be much, much faster than evolution—but it’s still not clear whether we’ll be able to improve upon evolution enough to make this a viable strategy.

3) Make this whole thing the computer’s problem, not ours.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we’d build a computer whose two major skills would be doing research on AI and coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job—figuring out how to make themselves smarter. More on this later.

All of This Could Happen Soon

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense and what seems like a snail’s pace of advancement can quickly race upwards—this GIF illustrates this concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

The Road From AGI to ASI

At some point, we’ll have achieved AGI—computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

Hardware:

  • Speed. The brain’s neurons max out at around 200 Hz, while today’s microprocessors (which are much slower than they will be when we reach AGI) run at 2 GHz, or 10 million times faster than our neurons. And the brain’s internal communications, which can move at about 120 m/s, are horribly outmatched by a computer’s ability to communicate optically at the speed of light.
  • Size and storage. The brain is locked into its size by the shape of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal communications would take too long to get from one brain structure to another. Computers can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a longterm memory (hard drive storage) that has both far greater capacity and precision than our own.
  • Reliability and durability. It’s not only the memories of a computer that would be more precise. Computer transistors are more accurate than biological neurons, and they’re less likely to deteriorate (and can be repaired or replaced if they do). Human brains also get fatigued easily, while computers can run nonstop, at peak performance, 24/7.

Software:

  • Editability, upgradability, and a wider breadth of possibility. Unlike the human brain, computer software can receive updates and fixes and can be easily experimented on. The upgrades could also span to areas where human brains are weak. Human vision software is superbly advanced, while its complex engineering capability is pretty low-grade. Computers could match the human on vision software but could also become equally optimized in engineering and any other area.
  • Collective capability. Humans crush all other species at building a vast collective intelligence. Beginning with the development of language and the forming of large, dense communities, advancing through the inventions of writing and printing, and now intensified through tools like the internet, humanity’s collective intelligence is one of the major reasons we’ve been able to get so far ahead of all other species. And computers will be way better at it than we are. A worldwide network of AI running a particular program could regularly sync with itself so that anything any one computer learned would be instantly uploaded to all other computers. The group could also take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.10

AI, which will likely get to AGI by being programmed to self-improve, wouldn’t see “human-level intelligence” as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to “stop” at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we’re aware of about any animal’s intelligence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

[Chart: the intelligence spectrum as we tend to imagine it, with animals far below humans and a wide gap between the dumbest and smartest humans]

So as AI zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity—Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot-level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us:

[Chart: the same spectrum drawn to scale, with all humans, from the village idiot to Einstein, sitting within a tiny range]

And what happens…after that?

An Intelligence Explosion

I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it’s gonna stay that way from here forward. I want to pause here to remind you that every single thing I’m going to say is real—real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.

Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to.3

And here’s where we get to an intense concept: recursive self-improvement. It works like this—

An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion,11 and it’s the ultimate example of The Law of Accelerating Returns.
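
Here’s a crude numerical toy of that loop, with totally made-up units and an arbitrary improvement rate, just to show the shape of the curve: flat-looking for a while, then suddenly vertical.

# A toy model of recursive self-improvement: each round's gain is proportional to how
# intelligent the system already is. The parameters are invented; only the shape matters.
intelligence = 1.0          # say 1.0 = village idiot and 2.0 = Einstein (arbitrary units)
IMPROVEMENT_RATE = 0.5      # hypothetical: each round of self-improvement adds 50%

for step in range(1, 21):
    intelligence *= (1 + IMPROVEMENT_RATE)
    print(step, round(intelligence, 1))
# it passes "Einstein" after two steps and is roughly 3,300x the starting level by step 20,
# which is why the curve looks almost flat for a while and then goes vertical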

There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 204012—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly. Like—this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.

What we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.

If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes into being, there is now an omnipotent God on Earth—and the all-important question for us is:

 

Will it be a nice God?

 

That’s the topic of Part 2 of this post.

___________

Sources at the bottom of Part 2.

Related Wait But Why Posts

The Fermi Paradox – Why don’t we see any signs of alien life?

How (and Why) SpaceX Will Colonize Mars – A post I got to work on with Elon Musk and one that reframed my mental picture of the future.

Or for something totally different and yet somehow related, Why Procrastinators Procrastinate

And here’s Year 1 of Wait But Why on an ebook.


  1. Okay so there are two different kinds of notes now. The blue circles are the fun/interesting ones you should read. They’re for extra info or thoughts that I didn’t want to put in the main text because either it’s just tangential thoughts on something or because I want to say something a notch too weird to just be there in the normal text.

  2. Kurzweil points out that his phone is about a millionth the size of, a millionth the price of, and a thousand times more powerful than his MIT computer was 40 years ago. Good luck trying to figure out where a comparable future advancement in computing would leave us, let alone one far, far more extreme, since the progress grows exponentially.

  3. Much more on what it means for a computer to “want” to do something in the Part 2 post.


  1. Gray squares are boring objects and when you click on a gray square, you’ll end up bored. These are for sources and citations only.

  2. Kurzweil, The Singularity is Near, 39.

  3. Kurzweil, The Singularity is Near, 84.

  4. Vardi, Artificial Intelligence: Past and Future, 5.

  5. Kurzweil, The Singularity is Near, 392.

  6. Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 597.

  7. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements, 318.

  8. Pinker, How the Mind Works, 36.

  9. Kurzweil, The Singularity is Near, 118.

  10. Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 1500-1576.

  11. This term was first used by one of history’s great AI thinkers, Irving John Good, in 1965.

  12. Bostrom, Superintelligence: Paths, Dangers, Strategies, loc. 660.

The post The AI Revolution: The Road to Superintelligence appeared first on Wait But Why.

16 Jan 04:47

Technology doesn’t always make you more stressed out

by blog
16 Jan 01:58

How much do subsidies to community college attendance matter?

by Tyler Cowen

That is a new NBER Working Paper by Angrist, Autor, Hudson, and Pallais.  Here is the sentence of interest for the recent community college initiative:

Awards offered to prospective community college students had little effect on college enrollment or the type of college attended.

Do note that some other kinds of awards appeared to be more effective, so this is not an anti-subsidy result per se.  And here is a new Bulman and Hoxby paper on federal tax credits and the demand for higher education (not just community colleges):

We assess several explanations why the credits appear to have negligible causal effects.

Making these programs work is not so easy.  Reihan Salam offers good points, so does Arnold Kling.

11 Jan 15:02

Obama’s Community-College Plan: A Reading List - NYTimes.com

by blog
08 Jan 21:59

Sentences from Kevin Drum

by Tyler Cowen

…”the internet is now a major driver of the growth of cognitive inequality.” Or in simpler terms, “the internet makes dumb people dumber and smart people smarter.”

The post is here, Kevin’s earlier post on that theme is here.

31 Dec 21:43

★ Powerful Lessons From Multi-Millionaire Tim Ferriss

by J. Money

“Ferriss firmly believes in turning the traditional American retirement plan upside down. And rather than deferring all your happiness to the future (65+ years old), he presents the case for us to streamline our lives, build our own business, find joy, travel the world and be passionate today.”

30 Dec 22:09

Fed Up (2014)

Bjorno

looks good.


Everything we’ve been told about food and exercise for the past 30 years is dead wrong. FED UP is the film the food industry doesn’t want you to see. From Katie Couric, Laurie David (producer of the Oscar-winning AN INCONVENIENT TRUTH) and director Stephanie Soechtig, FED UP will change the way you eat forever.

23 Dec 03:03

The Dominant Life Form in the Cosmos Is Probably Superintelligent Robots | Motherboard

by blog
22 Dec 15:24

Vehicle pollution - Cleaner than what?


Why an electric car may be much dirtier than a petrol one
18 Dec 05:02

★ Yahoo’s Decline

by John Gruber

From a New York Times Magazine excerpt of Nicholas Carlson’s upcoming book, Marissa Mayer and the Fight to Save Yahoo:

In many ways, Yahoo’s decline from a $128 billion company to one worth virtually nothing is entirely natural. Yahoo grew into a colossus by solving a problem that no longer exists. And while Yahoo’s products have undeniably improved, and its culture has become more innovative, it’s unlikely that Mayer can reverse an inevitability unless she creates the next iPod. All breakthrough companies, after all, will eventually plateau and then decline. U.S. Steel was the first billion-dollar company in 1901, but it was worth about the same in 1991. Kodak, which once employed nearly 80,000 people, now has a market value below $1 billion. Packard and Hudson ruled the roads for more than 40 years before disappearing. These companies matured and receded over the course of generations, in some cases even a century. Yahoo went through the process in 20 years. In the technology industry, things move fast.

Carlson’s take is pretty brutal, and paints a bleak picture for Yahoo’s prospects as an independent company. (Activist investors are pushing for a merger with AOL.) And it doesn’t seem like Mayer is going to get much more time.

I would argue that Yahoo lost its way early. Yahoo was an amazing, awesome resource when it first appeared, as a directory to cool websites. Arguably, the directory to cool websites. It was hard to find the good stuff on the early web, and Yahoo created a map. Their whole reason for being was to serve as a starting point that sent you elsewhere.

Then came portals. The portal strategy was the opposite of the directory strategy — it was about keeping people on Yahoo’s site, instead of sending them elsewhere. It was lucrative for a while, but ran its course. And it turned out that the web quickly became too large, far too large, for a human-curated directory to map more than a fraction of it. The only way to index the web was algorithmically, as a search engine. And one search engine stood head and shoulders above all others: Google.

Yahoo reportedly had an opportunity to buy Google in 2002 for $5 billion. Yahoo, under the leadership of CEO Terry Semel, declined. And that was the end of Yahoo.1 We all know hindsight is 20/20. There are all sorts of acquisitions that could have been made. But I would argue that acquiring Google in 2002 (if not earlier) was something Yahoo absolutely should have known they needed to do. The portal strategy had played itself out. All they were left with was their original purpose, serving as a starting page for finding what you were looking for on the web.

Buying Google in 2002, at whatever cost, was the only way for Yahoo to return to those roots. Google wasn’t just something shiny and new — it was the best solution to date (even now) to the problem Yahoo was originally created to solve. In a broad sense, buying Google would have been to Yahoo what buying NeXT was to Apple in 1997: an acquisition that returned the parent company to its roots, with superior industry-leading technology and outstanding talent.2

In short, Yahoo’s early-2000s leadership had no understanding whatsoever of why Yahoo had gotten popular and profitable in the first place: that serving as the leading homepage for the entire web was important and profitable, and that the only way to maintain that leadership was to acquire Google.

Google, on the other hand, learned an important lesson from Yahoo. The basic gist of portals never really died: Google has gone on to build all sorts of properties like Gmail, Google News, Maps, and Google Plus, all of which are designed to keep users on Google-owned sites. But Google never conflated these things with web search. The google.com home page remains to this day as spartan as when it first appeared, and they fully understand that the point of it is to send users to other sites.

Yahoo’s loss of focus on indexing the web was a mistake in the late ’90s. They had a chance to completely correct that mistake by acquiring Google in the early 00’s. They blew that chance, and it’s been all downhill for them ever since.


  1. You could argue that the mistake wasn’t declining to acquire Google, but rather the earlier decision to hire Semel as CEO and an executive staff with a Hollywood/media company background. Two sides of the same bad coin, I say. 

  2. Among the many problems with this analogy: Apple and NeXT needed each other. Both companies were deeply adrift in 1996. NeXT had talent and great software, but their prospects were even bleaker than Apple’s. Google, obviously, did not need Yahoo, and in fact was almost certainly better served by staying independent and declining any offers to acquire it. 

16 Dec 04:08

Robin Hanson’s TEDx talk

by Tyler Cowen

“Something out there is killing everything, and you’re probably next.”

You can view the talk here.  It is called “The Great Filter.”

12 Dec 00:31

Case Study: Average Everyday Complainypants Seeks Redemption

by Mr. Money Mustache
Bjorno

This is the greatest thing I've ever read.

Average consumer’s daily commute vehicles

Today’s case study is a classic, because it addresses a problem suffered by tens of millions of families: the chronic time shortage caused by a double income, double commute, kid-raising lifestyle. While some practitioners of this game do it by choice, many others would rather have more free time … if only they could afford it.

 

 

Dear Mr. Money Mustache,

I am new to your blog but have been seriously enjoying this new found financial porn on a daily basis. I think I have the basic principles down. Bike good; car bad. Mindful spending good; mindless consumer orgy bad. Early retirement good; endless wage-slavery bad.

Instead of sitting in my beige 8×12 government cubicle daydreaming about how cute I would look with a new red Guess bag and tall leather boots from the mall across the street…I am now in my beige cubicle fantasizing about a simpler life with a smaller home, more time at home with my tiny humans and more time to read.

At the risk of being labelled a complainypants, I genuinely do not understand how to move from this wageslavery to being a Mustachian. It seems to me to be a bit of a chicken and egg conundrum. How do I live on 50% or less of my income while still being stuck in said cubicle with all the expenses that it incurs?

The Basic Stats:

  • I am a fellow Canadian and as such am exceedingly polite
  • I live in one of the coldest winter cities in the world (temperatures in January and February routinely dip to -40 degrees)
  • Aside from the extreme temperatures in which I live, I am otherwise average in virtually every way.
  • Average height, average weight, average number of kids (2)
  • Average home (1200 sq feet), average mortgage (260K, worth about 420K in today’s market)
  • Average income (75K/year, 165K/year household…although according to you…I have already made it big)
  • Average cars (2 – one 2006 Honda Odyssey mini van and one…wait for it…2011 Ford F-150 Eco-boost Extended cab truck…you saw that coming from a mile away, didn’t you?…but amazingly both are paid off)
  • Average commute time (20 minutes direct, 45 minutes if you include the kids daycare/school drop time. My husband works 15 km in the opposite direction so we can’t even car pool.)
  • And last but not least, average amount of consumer debt ($12000 on a line of credit).
  • We have an average amount of savings ($120,000 in RRSPs and $12,000 in a few different savings places)
  • And best of all, I am 15 years into a 30-year sentence with Her Majesty the Queen, to be given my golden handshake at the age of 55 (i.e. 70% of my income for the rest of my life…or, if I cashed it in today, $280K)…which, as you might guess, I am starting to think isn’t worth the next 18 years of my life.

 

A basic sampling of our current overall monthly budget is below:

 

Take-Home Pay: $7,500

Savings:
Retirement accounts, emergency fund, etc.: $500
Debt paydowns: $500

Spending:
Mortgage: $1,400
Property tax: $325
Home improvement/maintenance: $300
Utilities: $325
Daycare: $1,200
Groceries and personal care: $1,200
Insurance (home, life, van, truck): $475
Gasoline: $500
Parking: $95
Charity: $150
Kids' sports (hockey/swimming): $100 (we're Canadian - hockey is a fixed expense)
TV/phones/Internet: $100
Miscellaneous (birthday parties, lunches out, hair cuts, gifts, golf, hobbies, entertaining): $330

Total Spending: $6,500

My days and nights consist of rushing around like a chicken with its head cut off.  How do I get from here to retirement and more time enjoying life with tiny humans?

Interestingly, my husband is a structural engineer who does carpentry and custom woodworking on the side; it’s his passion and he would like to make it his career. He is not interested in ‘retirement’, he would just like a career change.

Sincerely,
Whiny in Winnipeg

Mr. Money Mustache Responds:

Dear WW,

While your situation sounds horrific to me, it is of course the standard situation for most two-jobs-plus-kids families. Let’s begin with the end in mind: getting you some freedom ASAP.

Right now, you earn $75,000 before tax or 45% of your family’s gross pay. Since you listed take-home pay at $7500, let’s assume you are bringing in $3400 of it.

Out of that, the following monthly costs might be byproducts of your job:

  • Gas and direct/indirect car costs for almost 2000km/month of driving around in a van: $1,000
  • Parking: $95
  • Daycare: $1200
  • Convenience foods and services that show up in your grocery and miscellaneous bills: $200

    Total: $2495

This leaves only about $1,000 per month of “profit” from your job. So, including commuting and shuttling kids around to child care, you are spending about 250 hours a month to earn $1,000 – or four bucks an hour. If you can think of better things to do than working for well under half of Manitoba’s minimum wage, you should quit immediately. Since this is what you wanted anyway, congratulations!!!
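
For anyone who wants to check the math, here’s the back-of-the-envelope version, using the figures from the letter and the estimates above (the 250 hours a month of combined work, commuting, and shuttling is the estimate, not a measured number).

# The "real hourly wage" arithmetic behind the paragraph above.
take_home = 3400                        # her assumed share of the monthly take-home pay
job_costs = 1000 + 95 + 1200 + 200      # van costs, parking, daycare, convenience spending
hours = 250                             # estimated monthly hours of work + commuting + shuttling
profit = take_home - job_costs
print(profit, round(profit / hours, 2)) # -> 905  3.62, i.e. roughly "four bucks an hour"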

But it gets even better than that. Since it sounds like properties increase in price as you move towards your job downtown, they might well decrease as you move towards your husband’s job. If so, you could find a new place close to his work, and eliminate his commute as well – potentially saving the $600 per month he is currently burning up commuting in the opposite direction.

The savings from owning a less expensive house might free up an additional $200 per month in interest, since the equity from your current house would easily wipe your debts and you’d also have a lower mortgage payment.

So far we have only addressed basic strategy – the simple choice of where to live and work. There’s even more wealth on tap as soon as you activate a bit of Mustachian frugality.

For starters, since this is the MMM blog we’ll need to fix your insane choice of vehicles.


You have two kids, and yet you drive around in a BRAND NEW GAS GUZZLING LUXURY RACING BUS. The 2006 Honda Odyssey is not a vehicle for an indebted mother to use to drop the kids off and then head downtown. It is something a hopelessly spendy multimillionaire might use to shuttle around six pampered passengers on a cross-country roadtrip while hauling a giant trailer full of supplies. For two kids, you use a Toyota Yaris or similar. That will cut your gas bill down by 50%.

Your husband appears to be driving alone and not even a multimillionaire himself, and yet he has a TWIN-TURBO SIX PASSENGER RACING FARM TRUCK!!! Holy shit, brother, how many heads of cattle and pigs are you hauling on that roundtrip, while simultaneously carrying international heads of state in the stately cabin? That is a fucking ridiculous vehicle for ANYONE to drive except the rarest breed of Farmer/Diplomat, and I’m betting none of them also hold jobs as Structural Engineers.

So you’ll be selling that, and walking to work. For those rare times you drive, you can ask to borrow the wife’s manual transmission Yaris hatchback. You are also permitted to buy a used mountain bike, and if you’re REALLY getting serious with the carpentry, a 2001 Ford Ranger pickup, 2 wheel drive 4 cylinder manual longbed. You may weld a 12-foot lumber rack to it in order to outperform your current clown truck.

The savings on depreciation, fuel, and insurance will compound an additional $86,000 per decade into your family’s wealth.
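
One rough way to arrive at a figure like that: roughly $500 per month of avoided vehicle spending, invested as you go. The 7% annual return in the sketch below is my assumption about the rate behind the number, not a quoted figure.

# Sketch of how ~$500/month of avoided vehicle costs compounds to ~$86,000 over a decade,
# assuming an invested 7% annual return (assumption for illustration).
monthly_saving = 500
monthly_rate = 0.07 / 12
balance = 0.0
for month in range(10 * 12):
    balance = balance * (1 + monthly_rate) + monthly_saving
print(round(balance))                   # -> roughly 86,500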

Once you have these big wins in place, you’ll have much more time and energy to go after the medium-sized ones: your grocery bill can easily be cut in half, according to most Canadian Mustachian 4-person families. Restaurants and other takeout frivolities may drop as well, depending on your priorities.  Another $1000 per month is possible in this area, which will go directly to your financial independence fund.

When you add in Mrs. WW’s outstanding windfall of a $280,000 early pension payout, all my calculations indicate that you will be further ahead than you are today, even after ditching the government job. In fact, after a year of making these changes, Mr. WW may even start getting the itch to scale down his own job and do exactly as he sees fit as well. And that would be nothing to whine about at all.

Best of luck!

Do YOU see any parallels to your own life? It is almost always possible to avoid the two-commute family with kids if you make it a priority.

 

09 Dec 15:53

Bill Gates: the best books I read in 2014

by Joe Koster
26 Nov 15:53

Best Books for Investors: A Short Shelf - by Jason Zweig

by Joe Koster