Jamais Cascio

Headshot photo by Bart Nagel

Interviews and Talks

My Name is Jamais Cascio, and I'm a Futurologist interview for pinITALY
(video)          July 2014

Everything Will Be Alright* interview for documentary series.
(video)          February 2014

Crime and Punishment discussion at Fast Company's Innovation Uncensored
(video)          April 2013

Bots, Bacteria, and Carbon talk at the University of Minnesota
(video)          March 2013

Visions of a Sustainable Future interview
(text)          March 2013
Talking about apocalypse gets dull...all apocalypses are the same, but all successful scenarios are different in their own way.

The Future and You! interview
(video)          December 2012

Bad Futurism talk in San Francisco
(video)          December 2012

Inc. magazine interview
(text)          December 2012
Any real breakthrough in AI is going to come from gaming.

Singularity 1 on 1 interview
(video)          November 2012

Momentum Interview
(text)          September 2012
One hope for the future: That we get it right.

Doomsday talk in San Francisco
(video)          June 2012

Polluting the Data Stream talk in San Francisco
(video)          April 2012

Peak Humanity talk at BIL2012 in Long Beach
(video)          February 2012

Acceler8or Interview
(text)          January 2012
Our tools don't make us who we are. We make tools because of who we are.

Hacking the Earth talk in London
(video)          November 2011

Cosmoetica Interview
(text)          May 2011
The fears over eugenics come from fears over the abuse of power. And we have seen, time and again, century after century, that such fears are well-placed.

Future of Facebook project interviews
(video)          April 2011

Geoengineering and the Future interview for Hearsay Culture
(audio)          March 2011

Los Angeles and the Green Future interview for VPRO Backlight
(video)          November 2010

Surviving the Future excerpts on CBC
(video)          October 2010

Future of Media interview for BNN
(video)          September 2010

Hacking the Earth Without Voiding the Warranty talk at NEXT 2010
(video)          September 2010

Map of the Future 2010 at Futuro e Sostenibilità 2010 (Part 2, Part 3)
(video)          July 2010

We++ talk at Guardian Activate 2010
(video)          July 2010

Wired for Anticipation talk at Lift 10
(video)          May 2010

Soylent Twitter talk at Social Business Edge 2010
(video)          April 2010

Hacking the Earth without Voiding the Warranty talk at State of Green Business Forum 2010
(video)          February 2010

Manipulating the Climate interview on "Living on Earth" (public radio)
(audio)          January 2010

Bloggingheads.TV interview
(video)          January 2010

Homesteading the Uncanny Valley talk at the Biopolitics of Popular Culture conference
(audio)          December 2009

Sixth Sense interview for NPR On the Media
(audio)          November 2009

If I Can't Dance, I Don't Want to be Part of Your Singularity talk for New York Future Salon
(video)          October 2009

Future of Money interview for /Message
(video)          October 2009

Cognitive Drugs interview for "Q" on CBC radio
(audio)          September 2009

How the World Could (Almost) End interview for Slate
(video)          July 2009

Geoengineering interview for Kathleen Dunn Show, Wisconsin Public Radio
(audio)          July 2009

Augmented Reality interview at Tactical Transparency podcast
(audio)          July 2009

ReMaking Tomorrow talk at Amplify09
(video)          June 2009

Mobile Intelligence talk for Mobile Monday
(video)          June 2009

Amplify09 Pre-Event Interview for Amplify09 Podcast
(audio)          May 2009

How to Prepare for the Unexpected Interview for New Hampshire Public Radio
(audio)          April 2009

Cascio's Laws of Robotics presentation for Bay Area AI Meet-Up
(video)          March 2009

How We Relate to Robots Interview for CBC "Spark"
(audio)          March 2009

Looking Forward Interview for National Public Radio
(audio)          March 2009

Future: To Go talk for Art Center Summit
(video)          February 2009

Brains, Bots, Bodies, and Bugs Closing Keynote at Singularity Summit Emerging Technologies Workshop
(video)          November 2008

Building Civilizational Resilience Talk at Global Catastrophic Risks conference
(video)          November 2008

Future of Education Talk at Moodle Moot
(video)          June 2008

G-Think Interview
(text)          May 2008
"In the best scenario, the next ten years for green is the story of its disappearance."

A Greener Tomorrow talk at Bay Area Futures Salon
(video)          April 2008

Geoengineering Offensive and Defensive interview, Changesurfer Radio
(audio)          March 2008

Wired interview
(text)           March 2008
"The road to hell is paved with short-term distractions. "

The Future Is Now interview, "Ryan is Hungry"
(video)          March 2008

G'Day World interview
(audio)          March 2008

UK Education Drivers commentary
(video)          February 2008

Futurism and its Discontents presentation at UC Berkeley School of Information
(audio)          February 2008

Opportunity Green talk at Opportunity Green conference
(video)          January 2008

Metaverse: Your Life, Live and in 3D talk
(video)          December 2007

Singularity Summit Talk
(audio)          September 2007

Political Relationships and Technological Futures interview
(video)          September 2007

NPR interview
(audio)          September 2007
"Science Fiction is a really nice way of uncovering the tacit desires for tomorrow...."

Spark Radio, CBC interview
(audio)          August 2007
Spark Radio, part 2 CBC interview
(audio)          August 2007

True Mutations Live! roundtable Part 1
(audio)          July 2007
True Mutations Live! roundtable Part 2
(audio)          July 2007

G'Day World interview
(audio)          June 2007

NeoFiles interview
(audio)          June 2007

Take-Away Festival talk
(video)          May 2007

NeoFiles interview
(audio)          May 2007

Changesurfer Radio interview
(audio)          April 2007

NeoFiles interview
(audio)          July 2006

FutureGrinder: Participatory Panopticon interview
(audio)          March 2006

TED 2006 talk
(video)          February 2006

Commonwealth Club roundtable on blogging
(audio)          February 2006

Personal Memory Assistants Accelerating Change 2005 talk
(audio)          October 2005

Participatory Panopticon MeshForum 2005 talk
(audio)          May 2005

Reminder: Open the Future is on a temporary hiatus while I work on a book. I will post now and again, but may go for a few weeks at a time without updating. If you're new to the site, check out the "Start Here" links to the right. Thanks.

A World in Which

(This is the full text of a talk I gave at the Institute for the Future on 21 October 2015, as part of the "New Body Language" workshop on wearable/"body area network" technologies for the Technology Horizons program.)

Why do we think about the future?

This may seem an odd setting in which to ask this question. We’re all here tonight because we’re interested in big changes that seem to be thundering ahead in technology, in politics, in the human experience. But there has to be more than “interest.” An organization like the Institute for the Future wouldn’t be around for nearly a half-century if it was really just the Institute for Idle Curiosity About Tomorrow.

No. We think about the future because we believe two fundamental things: 1), that the future matters; and 2), that we still have a say in the future we get. The shape of tomorrow arises from the choices we make today. Or, to twist that around, we can make better decisions now if we consider the different ways in which those decisions could play out. The scenarios I will present tonight are examples of one tool we can use to undertake that consideration of consequences. Scenarios are stories that offer us a lens through which we might see our lives in a new world.

We’re not accustomed to thinking about longer-term futures. We evolved to reach quick, reasonably accurate conclusions about near-term risks and outcomes—is there a saber-toothed tiger in that cave? Will that plant poison me? There’s even some evidence that the part of the brain that lights up when we think about the future is the same part active in ballistics, that is, hitting a moving target with something. So when Wayne Gretzky talked about skating to where the puck will be, he was actually offering up a bit of futurist wisdom.

One important rule for thinking about the future is remembering that what we may imagine as a massively disruptive, distant horizon is an everyday, boring present for those who live there. They aren’t entirely different people in an alien environment, they’re us, a generation from now. They’ve gone through—we’ve gone through—all of the upheaval and have adapted. Their lives then may not be the same as our lives now, but they are the descendants of our lives.

It’s because of this clarity of connection that I believe it to be important to think about the future in generational terms, not just as a count of years. If, as LP Hartley claims, “the past is a foreign country,” so too is the future—but it’s a foreign country that we’ll never quite get to. Our vision of the future is a destination, but our lived experience of it is as a journey. We walk an unbroken pathway from today to tomorrow.

Continue reading "A World in Which" »

Gun Control's MP3 Moment

Reading the continued, ongoing arguments about gun regulations ("reasonable" or otherwise) is frustrating. Not only for the usual reasons (absolutist positions, inability to recognize multi-causal phenomena, relentless hostility towards different opinions, etc.), but because of how incredibly irrelevant it is becoming. 3D-printable firearms are already here, and becoming increasingly reliable. Every gun control law in the world is obsolete.

With a 3D printer costing a thousand dollars or less, it's possible to produce a usable firearm. The first generation of these printed guns had a tendency to blow up when used, but the newer models can work just fine. Single-shot, magazine-fed, automatic or semi-automatic, there's now a variety of weapon designs available, ready to be downloaded and printed out.

Controlling this won't ever be easy, and is currently impossible. The design files are digital and easily spread around the Internet. 3D printers are general purpose systems, meaning that they can ostensibly be instructed to print out anything possible (given their size and material resource limits). Printers may be programmed to recognize a specific 3D gun file, but aren't smart enough to identify any random file that will produce a weapon. Open sourced 3D printer designs would make it possible to avoid the use of devices programmed with ORM ("object rights management") restrictions. You're not going to arm a militia with one of these, at least not quickly, but it wouldn't be hard to print out a small arsenal for personal enjoyment.

Again, this is stuff that's happening now. It's not easy, quick, or cheap at the moment, but it's heading in that direction. I'd be surprised if we didn't see someone killed with a 3D printed weapon by the end of the decade. Continuing to fight over gun control laws is painfully close to the music industry continuing to demand "home taping" restrictions and taxes on cassette tapes, even as MP3 files proliferated.

One final caveat: A 3D-printable firearm still needs ammunition, and bullets will be hard to 3D print for a while yet. It may be another decade or more before it's possible to easily print bullets. If we really want to continue the debate and hostility, we may have a few years left.

Links:

  • http://www.wired.com/2014/05/3d-printed-guns/
  • http://3dprint.com/89919/shuty-hybrid-3d-printed-pistol/
  • http://gizmodo.com/3d-printed-guns-are-only-getting-better-and-scarier-1677747439
  • https://defdist.org

Uncertainty, Complexity, and Taking Action (revisited)

    I stumbled across this transcript of a talk I gave way back in late 2008, at the "Global Catastrophic Risks" conference. I was asked to provide some closing thoughts, based on what had gone before in the meeting, so it's more off-the-cuff than a prepared statement. The site hosting the transcript seems to have gone dark, though, so I wanted to make sure that it was preserved. There was some pretty decent thinking there -- apparently, I had a functioning brain back then.

    Uncertainty, Complexity, and Taking Action

    Jamais Cascio gave the closing talk at GCR08, a Mountain View conference on  Global Catastrophic Risks. Titled “Uncertainty, Complexity and Taking Action,” the discussion focused on the challenges inherent in planning to prevent future disasters emerging as the result of global-scale change.

    The following transcript of Jamais Cascio’s GCR08 presentation “Uncertainty, Complexity, and Taking Action” has been corrected and approved by the speaker. Video and audio are also available.


    Anders Sandberg: Did you know that Nick [Bostrom] usually says that there have been more papers about the reproductive habits of dung beetles than about human extinction?  I checked the number for him, and it's about two orders of magnitude more papers.

    Jamais Cascio:  There is an interesting question there—why is that?  Is it because human extinction is just too depressing?  Is it because human extinction is unimaginable?  There is so much uncertainty around these issues that we are encapsulating under “global catastrophic risk.”

    There is an underlying question in all of this.  Can we afford a catastrophe? I think the consensus answer and the reason we are here is that we can’t.  If we can’t afford a catastrophe, or a series of catastrophes, the question then is, what do we do that won’t increase the likelihood of catastrophe?  That actually is a hard question to answer.  We have heard a number of different potential solutions—everything from global governance in some confederated form to very active businesses.  We didn’t quite get the hardcore libertarian position today—that’s not a surprise at an IEET meeting—and I’m not complaining.  We have a variety of answers that haven’t satisfied.

    I think it really comes down to unintended consequences. We recognize that these are complex fucking systems.  Pardon my language about using “complex,” but these are incredibly difficult, twisty passages all leading to being devoured by a grue.  This is a global environment in which simple answers are not just limited, they are usually dangerous.  Yet, simple answers are what our current institutions tend to come up with—that’s a problem.

    One way this problem manifests is with silo thinking.  This notion of “I’m going to focus on this particular kind of risk, this particular kind of technology, and don’t talk to me about anything else.”  That is a dangerous thought, not in the politically incorrect sense, but in the sense that the kinds of solutions that you might develop in response to that kind of silo thinking are likely to be counterproductive when applied to the real world, which is, as you recall, a complex fucking system.

    There is also, you’ve noticed here, an assumption of efficiency.  I mean by that an assumption that all of these things work.  That is not necessarily a good assumption to make.  We are going to have a lot of dead ends with these technologies.  Those dead ends, in and of themselves, may be dangerous, but the assumption that all the pieces work together and that we can get the global weather system up and running in less than a week…

    With a sufficiently advanced, tested, reliable system, no doubt.  If we are in that kind of world of global competition where I have to get this up before the Chinese do, we're not going to spend a lot of time testing the system.  I'm not going to be doing all the various kinds of safety checks, longitudinal testing to make sure the whole thing is going to work as a complex fucking system.  There is an assumption that all of these things are going to work just fine, when in actuality: one, they may not—they may just fall flat.  Two, the kinds of failure states that emerge may end up being even worse, or at least nastier in a previously unpredictable way, than what you thought you were confronting with this new system/technology/behavior, etc.

    This is where I come back to this notion of unintended consequences—uncertainty.  Everything that we need to do when looking at global catastrophic risks has to come back to developing a capacity to respond effectively to global complex uncertainty.  That’s not an easy thing.  I’m not standing up here and saying all we need is to get a grant request going and we’ll be fine.

    This may end up being, contrary to what George was saying about the catastrophes being the focus—it’s the uncertainty that may end up being the defining focus of politics in the 21st century.  I wrote recently on the difference between long-run and long-lag. We are kind of used to thinking about long-run problems: we know this thing is going to hit us in fifty years, and we’ll wait a bit because we will have developed better systems by the time it hits.  We are not so good at thinking about long-lag systems: it’s going to hit us in fifty years, but the cause and proximate sources are actually right now, and if we don’t make a change right now, that fifty years out is going to hit us regardless.

    Climate is kind of the big example of that.  Things like ocean thermal inertia, carbon commitment, all of these kinds of fiddly forces that make it so that the big impacts of climate change may not hit us for another thirty years, but we’d damn well better do something now because we can’t wait thirty years. There is actually with ocean thermal inertia two decades of warming guaranteed, no matter what we do. We could stop putting out any carbon right this very second and we would still have two more decades of warming, probably another good degree to degree and a half centigrade.

    That’s scary, because we are already close to a tipping point.  We’re not really good at thinking about long-lag problems.  We are not really good at thinking about some of these complex systems, so we need to develop better institutions for doing that.  That institution may be narrow—the transnational coordinating institutions focusing on asteroids or geoengineering.  This may end up being a good initial step, the training wheels, for the bigger picture transnational cooperation.

    We might start thinking about the transnational cooperation not in terms of states, but in terms of communities.  I mentioned in response to George earlier about a lot of the super-powered angry individuals, terrorist groups, etc. that in the modern world actually tend to come not from anarchic states or economically dislocated areas but in fact from community dislocated areas.  Rethinking the notion of non-geographic community—“translocal community” is a term we are starting to use at the Institute for the Future—that ends up requiring a different model of governance.

    You talk about getting away from wars and thinking about police actions, but police actions are 20th century… so very twen-cen. Thomas Barnett, a military thinker, has a concept that I think works reasonably well as a jumping off point.  He talks about combined military intervention civilian groups as sys admin forces—system administration forces.  I’m kind of a geek at heart, so I appreciate it from that regard, but also the notion that these kinds of groups go in, not to police or enforce, but to administrate the complex fucking system.

    Hughes:  I’M IN UR CAPITAL, REBOOTING UR GOVERNMENT?

    Cascio:  Exactly.

    One last question that I think plays into all of this popped into my mind during Alan's talk.  I'm not asking this because I know the answer ahead of time—I'm actually curious.  When have we managed to construct speculative regulation?  That is, regulatory rules that are aimed at developments that have not yet manifested.  We know this technology is coming, so let's make the rules now and get them all working before the problem hits.  Have we managed to do that?  Because if so, that then becomes a really useful model for dealing with some of these big catastrophic risks.

    Goldstein:  The first Asilomar Conference on Recombinant DNA.

    Cascio:  Were the proposals coming out of Asilomar ever actually turned into regulatory rules?

    Hughes:  No, they were voluntary.

    Cascio:  I’m not trying to dismiss that.  What would be a Bretton Woods, not around the economy but around technology?  Technology is political behavior.  Technology is social.  We can talk about all of the wonderful gadgets, all of the wonderful prizes and powers, but ultimately the choices that we make around those technologies (what to create, what to deploy, how those deployments manifest, what kinds of capacities we add to the technologies) are political decisions.

    The more that we try to divorce technology from politics, the more we try to say that technology is neutral, the more we run the risk of falling into the trap of unintended consequences.  No one here did today, but it’s not hard to find people who talk about technology as neutral.  I think that is a common response in the broader Western discourse.

    I want to finish my observations here by saying that ultimately the choices that we make in thinking about these technologies, these choices matter.  We can’t let ourselves slip into the pretense that we are just playing with ourselves socially.  We are actually making choices that could decide the fate of billions of people.  That’s a heavy responsibility, but this is a pretty good group of people to start on that.

    High-Frequency Combat

    Science and technology luminaries Stephen Hawking, Elon Musk, and Steve Wozniak count among the hundreds of researchers pledging support for a proposed ban on the use of artificial intelligence technologies in warfare. In "Autonomous Weapons: an Open Letter from AI & Robotics Researchers", the researchers (along with thousands of citizens not directly involved with AI research) call on the global community to ban "offensive autonomous weapons beyond meaningful human control." They argue that the ability to deploy fully-autonomous weapons is imminent, and the potential dangers of a "military AI arms race" are enormous. Not just in the "blow everything up" sense -- we've been able to do that quite nicely for decades -- but in the "cause havoc" sense. They call out:

    Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

    They don't specify in the open letter (which is surprisingly brief), but the likely rationale as to why autonomous weapons would be particularly useful for assassinations, population control, and genocide is that they wouldn't say "no." Despite the ease with which human beings can be goaded into perpetrating atrocities, there are lines that some of us will never cross, no matter the provocation. During World War II, only 15-20 percent of U.S. soldiers in combat actually fired upon enemy troops, at least according to Brigadier General S.L.A. Marshall; while some debate his numbers, it's clear that a significant fraction of soldiers will say "no" even to lawful orders. Certainly a higher percentage of troops will refuse to carry out unlawful and inhumane orders.

    Autonomous weapons wouldn't say no.

    There's another problematic aspect, alluded to in the title of this piece: autonomous military systems will make decisions far faster than the human mind can follow, sometimes for reasons that will elude researchers studying the aftermath. The parallel here is to "high-frequency trading" systems, operating in the stock market at a speed and with a sophistication that human traders simply can't match. The problem here is manifold:

  • High-speed decision-making will push against any attempt by human leaders to think through consequences -- not by making that consideration impossible, but by making it inefficient or even dangerous. If your opponent is using "high-frequency" military AI (HFMAI), a slow response may be detrimental to your future.
  • HFMAI can make opaque decisions, again with the result of potentially undermining longer-term strategic thinking. Note that "autonomous weapons" and "high frequency military AI" do not mean fully-self-aware, Singularity-style super-intelligent machines able to consider long-term possible consequences. HFMAI in the near term will be complex software designed to make specific kinds of on-the-spot decisions. If you've ever experienced a game AI doing something that gains a quick benefit but weakens its long-term position, or is simply utterly inscrutable, you'll understand what I mean (see the sketch after this list).
  • Worst of all is that, just like high-frequency trading systems, opponents will be able to figure out how to spoof, confuse, or otherwise game the HFMAI software. Think about zero-day exploits tricking your weapons into making bad decisions.
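
    To make that game-AI point concrete, here is a minimal, hypothetical sketch in Python. The states, actions, and payoffs are invented purely for illustration (nothing here comes from any real weapons or trading system): a "greedy" agent that optimizes only the immediate payoff ends up in a worse long-run position than one that looks ahead, even though every single decision the greedy agent makes looks locally optimal.

        # Hypothetical toy example: greedy (one-step) decisions vs. lookahead.
        # All states, actions, and rewards below are invented for illustration.
        from functools import lru_cache

        # Each state maps actions to (immediate_reward, next_state).
        GAME = {
            "start":   {"strike_now": (5, "exposed"), "hold": (1, "covered")},
            "exposed": {"retreat": (0, "end")},
            "covered": {"strike_later": (8, "end")},
            "end":     {},
        }

        def greedy(state):
            # Pick whatever pays off most right now -- fast, but short-sighted.
            return max(GAME[state], key=lambda a: GAME[state][a][0])

        @lru_cache(maxsize=None)
        def best_future(state):
            # Total reward achievable from this state onward, with full lookahead.
            if not GAME[state]:
                return 0
            return max(r + best_future(nxt) for r, nxt in GAME[state].values())

        def lookahead(state):
            # Pick the action that maximizes immediate plus future reward.
            return max(GAME[state],
                       key=lambda a: GAME[state][a][0] + best_future(GAME[state][a][1]))

        def play(policy):
            state, total = "start", 0
            while GAME[state]:
                reward, state = GAME[state][policy(state)]
                total += reward
            return total

        print("greedy total:   ", play(greedy))     # 5 -- grabs the quick strike
        print("lookahead total:", play(lookahead))  # 9 -- holds, then does better

    The toy numbers don't matter; what matters is that a system optimized for speed over deliberation can be locally rational and strategically blind at the same time, and an adversary who understands its scoring can deliberately dangle the quick payoff.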

    Although I signed the open letter, I do think that fully-autonomous weapon systems aren't quite as likely as some fear. I'm frankly more concerned about semi-autonomous weapon systems, technologies that give human operators the illusion of control while restricting options to pre-programmed limits. If your software is picking out bombing targets for you, tapping the "bomb now" on-screen button may technically give you the final say, but ultimately the computer code is deciding what to attack. Or, conversely, computer systems that decide when to fire after you pull the trigger -- giving even untrained shooters uncanny accuracy -- distance the human action from the violent result.

    With semi-autonomous weapons, the human bears responsibility for the outcome, but retains less and less agency to actually control it -- whether or not he or she recognizes this. That's a more subtle, but potentially more dangerous, problem. One that's already here.

In the Press

    Yep, pretty busy lately. I hope to have a book announcement soon, though.

    I was asked to write a short opinion piece for New Scientist on the problem of filtering our reality, based on the success of the "Here Active Listening" system on Kickstarter. The piece is online, but sadly for now behind a paywall. This excerpt should give you a taste:

    Critics have also noted an implicit class element in paying for the ability to block out other people's lives. This ambivalence will only grow as the technology improves. Political protests, styles of music, and even specific voices or words could be blocked or altered as digital processing becomes more powerful.

    The desire to filter out what we find disturbing or unwelcome isn't new. In the online environment, it is possible like never before to repel opinions, ideas, or even facts that don't match our world views. The "real world" has been the stubborn holdout, confronting us with things that we may find insulting, offensive or blasphemous. That's about to change.

    For those of you who have followed my exploration of the "Participatory Panopticon" concept, these conclusions are unsurprising. What is a bit more startling -- at least to me -- is that the first real reality filtering technology will affect what we hear rather than what we see.

    This week also sees the publication of an article in Business Insider about the impact of self-driving vehicles, consisting entirely of an interview with me. This came as a bit of a surprise; the interview was actually done as a discussion about the Elon Musk Hyperloop concept, but it looks like the author/editor decided to shift the focus. No complaints here, except that the author quoted me accurately in a stream-of-words bit which included both a mis-statement and the immediate correction. If my main complaint is being quoted too accurately, I'll let it slide.

    The full article is online (and only online, I believe), but here's an excerpt:

    "It is going to be a more cultural shift even more than a technological shift because we have this romantic culture around cars and we are going to look back at that in the same kind of wistful way that we looked back at the relationship people had with horses," Cascio said.

    "You will probably have school girls with all kinds of model cars around the room instead of model horses. You will have people who really enjoy personally owned cars, but for the same reason people own horses today. It's not a utility; it's something that is a romantic hobby."

    That school girls with model cars around the room bit was a joke; I should really stop trying to make tongue-in-cheek references in interviews.

    Usefully Wrong

    It's a line I've used quite a bit in my talks: "The point of futurism [foresight, scenarios] isn't to make accurate predictions. We know that in details large and small, our forecasts will usually be wrong. The goal is to be usefully wrong." I'm not just pre-apologizing for my own errors (although I do hope that it leaves people less annoyed by them). I'm trying to get at a larger point -- forecasts and futurism can still be powerful tools even without being 100% on-target.

    Forecasts, especially of the multiple-future scenario style, force you (the reader or recipient of said futurism) to re-examine the assumptions you make about where things go from here. If your response to a given forecast is "that's bullshit!" you need to be able to ask why you think so. Even if the futurist behind the scenarios leaves out something important, she or he may just as easily have included something that you had ignored. To push this thinking, it's often productive to ask:

    • What would have to happen to make this forecast plausible?
    • What would have to happen to make this forecast impossible (not simply unlikely)?
    • What in this forecast feels both surprising and uncomfortably true?

    Thinking deeply about forecasts and futurism can change your perception. Events and developments that you might once have ignored or reflexively categorized take on new meanings and (critically) new implications. You start to think in terms of consequences, not just results. Here you ask:

    • Did I expect that event or development? Why or why not?
    • What should I now be prepared to see happen next?
    • What expected consequences or results did we manage to avoid?

    Unfortunately, if you really embrace this kind of thinking, you begin to see on a daily basis just how close we as a planet keep coming to disaster. "Dodging bullets" is the top characteristic of human civilization, apparently. Welcome to my world.

    Not Very Uplifting

    What responsibility do we have for the things we make?

    At its root, this is a fairly straightforward science story. Neuroscience researchers at the University of Rochester and the University of Copenhagen successfully transplanted human glial progenitor cells (hGPCs) into a newborn mouse (here's the technical article in The Journal of Neuroscience, and the lay-friendly version in New Scientist). While glial cells are generally considered a support cell in the brain, positioning, feeding, insulating, and protecting neurons, they also help neurons make synaptic connections. The hGPCs out-competed the mouse glial cells, basically taking over that function in the mouse brain, and -- as had been found in similar research (with adult glial cells) -- the mice demonstrated greater intelligence than their unaltered fellows.

    So, mice with grafted human brain support cells are smarter than regular mice. The next phase is testing with rats, which start out even smarter. The researchers insist that there's nothing especially human about these altered mice:

    "This does not provide the animals with additional capabilities that could in any way be ascribed or perceived as specifically human," he says. "Rather, the human cells are simply improving the efficiency of the mouse's own neural networks. It's still a mouse."

    However, the team decided not to try putting human cells into monkeys. "We briefly considered it but decided not to because of all the potential ethical issues," Goldman says.

    (...A statement that somewhat undermines his whole "it's still a mouse" argument -- after all, wouldn't it still be a monkey?)

    As always, I'm mostly interested in the "what happens next?" question. It's likely that rats with hGPC will show increased intelligence; same with dogs. And just because this set of researchers won't add the hGPC special sauce to monkeys doesn't mean that somebody else won't do it. And maybe even throw in a few neuron precursors for flavor.

    But even sticking with hGPCs, the fact remains: we're making these non-human animals demonstrably smarter. We are, in a very limited fashion, uplifting them (to use David Brin's terminology). They will be able to understand the world a bit (or even a lot) better than others of their kind. And at some point, we may well even end up with test subjects significantly smarter than typical and able to demonstrate behaviors unsettlingly close to our own.

    What rights should any of these types of uplifted animals have? Do we need to spell out a greater set of rights for the human chimera mice in the news report? Or as we create increasingly more-intelligent-than-typical animals, will there be a point at which they could no longer be limited to the rights given to all scientific research animals? At what point would it become a crime to kill them, no matter how humanely or in accordance with ethical standards? It would be easy to draw the line if the uplifted animals exhibit human-like behavior -- complex communication, for example, or the creation of art -- but what about intelligence-boosted animals that exhibit forms of higher intelligence that don't readily map to human-specific behavior but are clearly beyond what a typical animal of that species could do? When do we give them a say in their own lives?

    This connects in fairly obvious ways to the ongoing efforts to provide more expansive rights to the Great Apes or Cetaceans, but it's equally an issue for the Magna Cortica project. What it's not is a science fiction question for our distant descendants. This is happening now, and these issues need to be addressed now.

    The Inevitable Future

    Film student Taylor Baldschun invited me to participate in a project of his, a short documentary on the end of humanity. His final (for the moment) version can be seen here:

    The Inevitable Future from Taylor Baldschun on Vimeo.

    On my first viewing, I started counting off the various mannerisms and habits that I find annoying in my own speaking style. But I was caught off-guard by my own final statement, which Taylor uses to close the movie.

    If humanity were to go extinct, obviously, our life goes away. Over time, our artifacts go away. So what really would be lost in that existential sense is potential. Because we know that we could do so much more than what we’ve done by now. That we could be better stewards of the planet. That we could develop tools to let us learn new things and go new places. That we could make a better world. And that goes away. That potential, that possibility… it would be an enormous loss of a future.

    And that, to me, is the hardest thing to envision — not because it’s difficult to imagine but because it’s painful to imagine.

    We have, as a civilization, as human beings, such incredible potential. Potential that has not yet been made manifest. And I hope that we have enough time to show the value of that potential.

    It's not perfect, could use a bit of editing to clean it up, but it's not too bad for something made up on the spot. The video as a whole is thoughtful, quiet, and well worth watching. It's not a bad way to spend ten minutes of your day.

    Magna Cortica talk at TEDx Marin

    (brushes away cobwebs, wipes dust off of screen, sits quietly for a moment and wonders what happened...)


    The video of my TEDx talk on the ethics of cognitive augmentation is now up, and you can view it at the TEDx Marin website.

    (It's also on YouTube directly, but for the time being I'm doing as asked and pointing people to the TEDx Marin website.)

    A few notes:

    Most importantly: This talk is based on the work I did for the Institute for the Future's 2014 Ten-Year Forecast. Of all of the things I would like to change about this talk, calling this out explicitly is at the top of the list.

    I don't actually speak as fast as I seem to at the outset of the talk; I believe that the editor elided some early "um"/"ah"/word repetitions, resulting in what sounds like I was going WAY too fast.

    Most of my usual gestures are on display, but I do think I managed to tone them down a bit.

    Unfortunately, I'm still pacing back and forth like a caged carnivore.

    There's one thing I do repeatedly throughout the talk, and I don't know why. I'm not going to tell you what it is, because I may just be hypersensitive to it.

    So there.
