I Want to Live Forever

Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set

It's the end of the world as we know it, and they feel fine.

(Illustration: David Saracino)

The situation on Alyssa Vance’s couch would have been best described as a cuddle puddle—a tangle of hair-petting and belly-stroking and neck-nuzzling, seven people deep. It was Friday night in late June in the living room of her one-bedroom apartment at The Caroline, a “white-glove service” building in Chelsea. Ms. Vance, a transgender former Google intern with the lips of a Renaissance statue, sat somewhere near the middle next to her girlfriend, Alice. Snuggling up on either end were a neuroscience Ph.D. from Columbia, a Yale grad student in applied mathematics, and a redhead in from Berkeley who “sells drugs on the Internet.” Across the room, a row of white chairs laid out expressly for Ms. Vance’s 21st birthday party stood abandoned in favor of the handsy human octopus.

The Observer hovered near the drinks table. Next to us, a ponytailed programmer from Morgan Stanley nibbled on a family-sized Trader Joe’s chocolate bar as we both stole glances at the pile-on.

The partygoers had a more solemn connection than their youthful PDA might suggest. They were all disciples of the blog Less Wrong, so named because “We try to be as least wrong as possible,” as one guest later explained. Despite describing itself as a forum on “the art of human rationality,” the New York Less Wrong group, which holds weekly Tuesday meetups and boasts almost 300 people signed up for its mailing list, is fixated on a branch of futurism that would seem more at home in a 3D multiplex than a graduate seminar: the dire existential threat—or, with any luck, utopian promise—known as the technological Singularity.

The Singularity, a term coined by mathematician and science fiction writer Vernor Vinge, refers to the point in time when man will create a machine capable of superhuman intelligence, shortly after which, he told a crowd of NASA scientists in 1993, “the human era will be ended.” The concept was subsequently popularized by best-selling author and entrepreneur Ray Kurzweil, known for his embrace of cryonics, or freezing one’s body with the hope of thawing it out, no worse for wear, when conditions seem ripe. Mr. Kurzweil adopted the term to describe his predictions about the exponential growth of computer technology and its eventual merger with mankind. “By the 2030s and the 2040s, we will begin to augment our neocortex directly,” he told The Observer. “You’ll be talking to a hybrid. You won’t easily be able to separate, Oh, that came from my non-biological side.”

As unsettling as that sounds, Mr. Kurzweil sees it as a potential solution to the planet’s woes. “People look around and take a linear perspective and think we’re gonna run out of energy and water,” he said, suggesting that reverse-engineering the human brain could unleash unfathomable powers of intelligence to address what ails us. (He discusses the subject in more depth in his upcoming book How to Create a Mind: The Secret of Human Thought Revealed, out in November.) “The reality is that we’re going to become those machines,” he added. “That’s why we create them, to compensate for our own limitations.”

Such pronouncements have made Mr. Kurzweil a magnet for AI enthusiasts. In June, the tech blog Gizmodo hosted a party in his honor on the roof of Gawker’s Soho headquarters. It was a swanky affair for such a sweltering night. Rectangles of pizza and cross-sections of pork belly were passed around as Mr. Kurzweil assured the crowd that “optimism is a self-fulfilling prophecy.”

No one from the Less Wrong meetup group was invited.

Mr. Yudkowsky, left, and Mr. Thiel

Considerably more radical than Mr. Kurzweil, Less Wrong is affiliated with the Singularity Institute for Artificial Intelligence (SIAI) in Berkeley. Both were cofounded by 32-year-old Eliezer Yudkowsky, an eighth-grade dropout with an IQ of 143 (though he claims that might be a lowball figure). The messianic Mr. Yudkowsky also helped attract funding from his friend Peter Thiel, an early Facebook investor and noted libertarian billionaire whom Forbes pegs as the 303rd richest person in America. The Thiel Foundation, Mr. Thiel’s philanthropic group, has donated at least $1.1 million to SIAI, more than four times the amount given by its next largest donor. (The nonprofit’s Form 990 from 2010 shows assets of $462,470.)

While Mr. Kurzweil has generally been viewed as the Singularity’s chief standard-bearer, on the geekier fringe, that distinction belongs to Mr. Yudkowsky. “I have been seriously and not in a joking way accused of trying to take over the world,” he humble-brags on his OKCupid profile.

The institute, also known as SingInst, was founded in 2000 to further research on “technological forecasting, human rationality, and architecting safe artificial intelligence.” Although the contemporary futurist movement has largely been a Bay Area phenomenon, the New York Less Wrong meetup group—a motley crew of libertine twenty- and thirty-somethings with impressive jobs and developing social skills—represents something of an East Coast bureau, and is the largest and fastest-growing group in the Less Wrong community. The New York group’s upcoming plans include a Humanist open mic night on the Lower East Side and a trip to a co-ed Russian sauna in the Financial District.

SIAI is not to be confused with the more commercially minded Singularity University, which counts Google, Cisco, and Nokia as corporate backers and has spun out dozens of startups. That organization, which plans to go for-profit this year, is focused on giving a boost to emerging technologies that are market-ready now and on spurring thinking about how we can harness them for the future, whereas SingInst’s organizing principle has a more apocalyptic cast. Technological advancements in machine learning, it argues, are hurtling toward the creation of a self-improving artificial intelligence, one that can program itself to be smarter and smarter still. If the world doesn’t work to ensure the emerging AI is “human-friendly” and shares our values, it will destroy us all.

If Singularity University is the Mitt Romney of futurist advocacy groups—sleek and corporate—then the Singularity Institute is Ron Paul, scruffy and unhinged (and, incidentally, another beneficiary of Mr. Thiel’s largess).

“The AI is smarter than we are, so it would kill everyone. Or it wants all our resources, so of course it’s going to kill everyone,” Zvi Mowshowitz explained as the assembled rose from the couch to whoop it up to show tunes and eighties pop hits. Mr. Mowshowitz, who lives a couple floors up at The Caroline with his girlfriend (the neuroscientist), has jet black hair and an easy, childlike grin. He was wearing electric blue gym shorts and a homemade T-shirt commemorating his reign as a professional champion of the Magic: The Gathering fantasy card game. Mr. Mowshowitz is currently working with Ms. Vance and Jaan Tallinn, the renowned Estonian programmer behind Skype and Kazaa, on a personalized medicine startup. “People come up with really bad arguments for why the AI wouldn’t kill everyone,” he continued. “‘Well, killing everyone—that’s like Terminator, so John Connor will stop it, right?’ The answer is no, John Connor will die! John Connor is dead!”

The Judgment Day narrative makes it easy to see why the Singularity has been satirized as “the Rapture for nerds.” Mitch Kapor, cofounder of Lotus Development, also drew a religious parallel, calling it “intelligent design for the IQ 140 people.”

“I’ve made my peace with the fact that, you know, this is not going to last,” Mr. Mowshowitz said, looking out the window at weekend traffic on Sixth Avenue as though it would all disappear. “We have a very dysfunctional civilization right now. There are better things that could be done.” By the drinks table, his girlfriend sang along with The Lion King’s “I Just Can’t Wait to Be King.”

The people behind SIAI know that the end of the world as we know it sounds like a downer, so they are actively engaged in reframing Armageddon. On the webpage “Why Work Toward the Singularity,” SingInst offers a gloriously transcendent vision of AI as mankind’s salvation. If we are able to develop a “friendly” superhuman intelligence, then it could do everything from curing cancer to accelerating scientific research to eradicating hunger. Meanwhile, cohorts focused on anti-aging, nanotechnology, longevity, and transhumanism are at work on genetic therapies and body-hacks that will extend our lifespans beyond those of the vampire population of True Blood.

Mr. Mowshowitz calls it “escape velocity.” “That’s where medicine is advancing so fast that I can’t age fast enough to die,” he explained. “I can’t live to 1,000 now, but by the time I’m 150, the technology will be that much better that I’ll live to 300. And by the time I’m 300, I’ll live to 600 and so on,” he said, a bit breathlessly. “So I can just . . . escape, right? And now I can watch the stars burn out in the Milky Way and do whatever I want to do.”
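The arithmetic behind that hope is simple enough to sketch. Below is a minimal illustration in Python of the escape-velocity logic; the starting age, projected lifespan, and rate of progress are assumptions chosen purely for illustration, not Mr. Mowshowitz’s figures. The point is only that if each year of medical progress adds more than one year of remaining lifespan, the projected end keeps receding.

    # A minimal sketch of longevity "escape velocity." All numbers are
    # illustrative assumptions, not Mr. Mowshowitz's: the claim is just
    # that if projected lifespan grows faster than you age, death recedes.

    age = 30.0        # starting age (assumption)
    lifespan = 150.0  # projected lifespan today (assumption)
    gain = 1.5        # years of lifespan added per year of progress (assumption)

    for _ in range(1000):
        age += 1.0        # one year passes
        lifespan += gain  # medicine advances in the meantime
        if age >= lifespan:
            print(f"Caught by mortality at age {age:.0f}")
            break
    else:
        # With gain > 1, age never catches lifespan: the gap widens by
        # (gain - 1) years every year, so you have "escaped."
        print(f"Age {age:.0f}, projected lifespan {lifespan:.0f}: escaped")

Whether medicine can sustain a gain above one year per year is, of course, the entire question.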

Many members of the Less Wrong meetup group are hopeful enough to have invested in cryonics; some are even cryonics counselors. At the party, Ms. Vance, who glided around the room with the head-bob and muffled laugh of a very polite alien, interrupted Mr. Mowshowitz to share the business card of a “cryo life insurance guy.” Not necessary; he was already covered.

Convincing people that the world is about to be thoroughly upended has never been an easy or rewarding task, and the Singularity cadres have adopted some canny marketing techniques to help the medicine go down. Branding themselves as “rationalists,” as the Less Wrong crew has done, makes it a lot harder to dismiss them as a “doomsday cult.” The Singularity Institute itself is making a similar leap, spinning off what it’s calling the Center for Applied Rationality, which hosts summer camps for math olympians and rationality “mini camps” in San Francisco.

Michael Vassar, the former president of the Singularity Institute, who stepped down in January to pursue his idea for a personalized medicine startup (later bringing on Mr. Mowshowitz and Ms. Vance), admitted the nonprofit had learned to hide some of its more radical ideas, emphasizing rationality instead.

As Mr. Yudkowsky put it, “There are plenty of people out there who would be interested in cognitive science-based thinking skills who wouldn’t necessarily buy into the whole ‘save humanity’ thing.”

Mr. Yudkowsky’s most successful stab at attracting young cadres to the cause was a 1,000-page fan fiction project called Harry Potter and the Methods of Rationality, which substitutes the scientific method for magic and has received some 5 million hits at last count.

So eager are Singularity adherents to keep the discussion upbeat that Mr. Yudkowsky banished from the Less Wrong forums a particularly insidious discussion thread, ominously nicknamed “the Basilisk,” after science fiction writer David Langford’s notion of images that crash the mind. In the initial post, a prominent Less Wrong contributor mused about whether a friendly AI—one hell-bent on saving the world—would punish even true believers who had failed to do everything they could to bring about its existence, including donating their disposable income to SIAI. It seemed like little more than a harmless thought experiment, but rumor has it that the discussion thread was deemed a danger to susceptible minds and exorcised from the blog after a reader had a nervous breakdown.

The Observer tried to ask the Less Wrong members at Ms. Vance’s party about it, but Mr. Mowshowitz quickly intervened. “You’ve said enough,” he said, squirming. “Stop. Stop.”

Indeed, the last thing Less Wrong wants to do is freak anyone out. On the contrary, one of the group’s missions seems to be to empower its less socially well-adjusted members and teach them to cope with the various challenges presented by the here and now. In this sense, the whole movement owes a little something to its Bay Area forebears, the New Age and self-actualization movements of the 1960s, ’70s, and ’80s.

“Our primary source of value is helping young nerds become engaging and extroverted—and occasionally more muscular,” Raymond Arnold, a Less Wrong member and 3D animator with an advertising firm in Manhattan, explained a few weeks after Ms. Vance’s party. The Observer had stopped by Mr. Mowshowitz’s apartment, a few floors up at The Caroline, for the group’s weekly Tuesday session. As we knew from the mailing list—a constant stream of emails extolling the Paleo diet, recounting post-hike “massage puddles” and offering extra tickets to The Dark Knight Rises—this three-hour meeting would be “Rationalist therapy” for one of the members, a chance to get guidance on productivity, dating, and work.

“You’re playing on hard mode,” one of the members assured the evening’s fretful subject, borrowing a video game analogy.

“Really?” he asked.

“Really.”
