On Holographic Telepresence

Alex Kipman, the head of the HoloLens project at Microsoft, gave a talk on the TED stage where he showed off some fantastic new HoloLens demos and talked about the future of the technology.

Source: Check out Alex Kipman’s mesmerizing HoloLens TED talk [I’ll replace the low-quality video with the official TED video when it’s out]

I am almost-beyond-words excited to finally see Microsoft showing off its holographic telepresence. Immersive games in your living room are amazing. Enterprise apps will be empowering. But Telepresence is the killer app for AR, hands down. It’ll still be some time before this replaces video chat and phone calls for everyone, but that time will pass quickly.

Holographic Telepresence is the main reason I pushed so hard for our small Analog Labs team to pursue head-worn displays (Screen Zero, as it was originally called). The original prototyping work behind the very first patent on this started a few months before I joined Alex’s team, with a simple demo of me as a video-quality “Star Wars” hologram, showing a better way to communicate.

Over in Craig Mundie’s org (SBG), we’d been trying to develop avatar-based telepresence up to then, culminating in Avatar Kinect. I learned just how far we are from making avatars good enough for everyday use. [Sorry Faceblock, it’s going to be a long wait.]

Just as I joined the amazing team of PMs, designers and engineers in Analog Labs, we were fortunate to be asked by upper management to re-imagine the next Xbox console. We were still in the last nine months of shipping Kinect. Alex, Ryan, Mark, Johnny and others were heads down, working super hard on finishing Kinect.

For our top-secret Screen Zero/Fortaleza, it was actually perfect timing, inside a company notorious for in-fighting and killing inspired ideas — unless of course they could show billion-dollar value early on or directly support the mothership (Windows/Office). AR was not nearly at that point of confidence. Far from it: AR was more likely to cost a billion than earn one anytime soon. (I’d estimate $1–2 billion spent thus far.)

Needless to say, at the time, AR was a giant ball of risk. Few people understood the potential or the actual time to market. In fact, Alex was originally going to pursue autostereo 3D televisions as the “next big thing,” a kind of follow-up to Kinect. But after seeing six months of concepting, prototyping, and demos for Screen Zero, even Alex was convinced this stuff would be the future. It was just a matter of time and money, both of which Microsoft had.

I give Alex so much credit for getting the project to this point. In my opinion, he was the only person I’d met in the entire company with the political skills needed to keep HoloLens alive long enough to see daylight. It’s difficult to even describe the kinds of “House of Cards” maneuvers he had to do to remain in control, gather more resources (people & tech) and prevent disruption from executives and even former executives who either didn’t get it or had other ideas.

I may get around to telling the rest of the story some other time. But I just wanted to say how proud I am of the team and the vision to show the world a glimpse of our collective future.

Paris attacks: Silicon Valley in crosshairs over encryption – BBC News

In the wake of the Paris attacks, Silicon Valley is braced for an onslaught against security, privacy and encryption.

source: Paris attacks: Silicon Valley in crosshairs over encryption – BBC News

Around the time of the Snowden revelations, I remember sitting in a board meeting at some startup I won’t name, trying to convince (prominent VC and C-level) folks that the right thing for customers was for the company to encrypt their private data, give the customers the keys, and retain no ability to decrypt without permission. We could do this technically and still pull off the company mission.

Fast forward a few years, and Apple and others are doing exactly what I proposed, on a much larger scale. Was I prescient? Not really. It was about understanding what customers want. Companies will inevitably need to build what customers want, or lose out to someone who will. And anyone really listening would have come to the same conclusion.
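
For the curious, the scheme I was pitching is easy to sketch. Here’s a minimal toy illustration in Python using the off-the-shelf cryptography library (my own example, not the startup’s actual design): the customer generates and holds the key, and the service stores only ciphertext it cannot read.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The customer generates and keeps the key; the service never sees it.
customer_key = Fernet.generate_key()

def encrypt_client_side(plaintext: bytes, key: bytes) -> bytes:
    """Runs on the customer's device, before anything is uploaded."""
    return Fernet(key).encrypt(plaintext)

def decrypt_with_permission(ciphertext: bytes, key: bytes) -> bytes:
    """Only someone the customer hands the key to can read the data."""
    return Fernet(key).decrypt(ciphertext)

blob = encrypt_client_side(b"private customer data", customer_key)
# The service stores `blob` but, holding no key, cannot decrypt it --
# and neither can anyone who compels the service to hand it over.
assert decrypt_with_permission(blob, customer_key) == b"private customer data"
```

A real system needs key backup, rotation, and sharing on top of this, which is where most of the product work lives, but none of that requires the service to hold a skeleton key.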

I remember the board members at the time looking at me like I was nuts. One called me a dumb hippy. Another smart and well-respected fellow said there was no way the government would let that happen, i.e., if they wanted a back door, they’d get it.

Coming back to the present, the government now has fewer back doors than it did back then, in part thanks to Snowden and more so due to an active pro-security community rooting these things out. Many security flaws (intentional or otherwise) have been exposed, serving to protect private information and private citizens.

But given an attack like the unconscionable events in Paris, which reportedly involved an encrypted cell phone, some people think we’ll all shift back to the 9/11 mindset. The opportunists, at least, hope this will be their chance to throw in some more PATRIOT Act-like nonsense.

I don’t think it will. There is zero public evidence of any terrorist plot ever being foiled by decrypting communications. And more importantly, there is no customer value to be gained from doing so.

On the first point, we can look back to WWII, when Turing and company broke Enigma. That’s a great example of government cracking encryption for social good, right?

Had the Germans known the British and US forces had a backdoor into their favorite encrypted chat app (of the day), they would not have relied on said app, or would have layered more and different encryption on top. Bletchley Park had to painstakingly withhold critical intel from Allied commanders, costing many lives in the process, just to ensure the Germans had no proof their communications were vulnerable.

So, by extension, if Western governments succeed in getting back doors put back into popular chat apps, it will do little good in solving or preventing terrorism. The terrorists will just do something else, including something low-tech.

As with DRM, the only people harmed by inane government policies on encryption, guns, or whatnot are ordinary law-abiding folks.


[By the way, compare and contrast this excellent linked article today with the crappy one on TerraServer I blogged about yesterday to see the vast range in journalistic quality we have online.]


Anatomy of a Poorly Sourced Article

Source: Microsoft Invented Google Earth in the 90s Then Totally Blew It | Motherboard

I don’t often comment on disappointing news articles, but I couldn’t pass on this one. The premise of the article is that Microsoft invented Google Earth in 1998 and then blew it by not exploiting their invention.

Sounds easy to believe, but it’s nonsense. And a small amount of fact-checking would have saved us from it. Most importantly:

  1. TerraServer is not the same as Google Earth, unless you’re just talking about the idea of “viewing aerial imagery on a computer in some form.” But that idea existed well before TerraServer in desktop applications; TerraServer’s novelty was offering it for free in a web browser for the first time.
  2. Microsoft didn’t abandon maps. It launched a competitor to Google Earth called Virtual Earth and invested heavily in sourcing aerial imagery. It also maintains (to this day!) Bing Maps as a viable, if less popular, competitor to Google Maps.
  3. FWIW, post TerraServer, MS invented the car-mounted cameras that NAVTEQ drove around our streets for MS and Google, before Google unleashed their own Street View cars to remove that dependency. MS invented orthorectified “bird’s-eye” imagery (45-degree views) to show the tops and sides of buildings. I could list a dozen other MS innovations in this space that were productized and even successful in their time. It’s only in the last year (or so) that Microsoft cratered its internally-sourced mapping efforts, selling some assets and people to Uber.

Let’s break it down with some easily discovered information:

  • TerraServer, released in 1998, was most interesting for hosting an unprecedented 1TB database of satellite imagery publicly on the internet for free, withstanding thousands of simultaneous users (millions per day). That deserves ample credit as a feat of engineering alone.
  • The user experience for TS, however, was crude: jumping from one whole tile to the next was not adequate for end users. In other words, it was never an actual product, just a tech demo. The eventual productization of that demo came with Virtual Earth and Bing Maps (though the Bing brand came later).
  • Where2 significantly improved on the basic map web interface in 2004 by allowing smooth scrolling of image tiles (and later smooth zooming) while mixing in roads and labels. That became Google Maps, which was actually useful and not just interesting.
  • It’s reasonable to say that Google Maps was an improvement on TerraServer’s UI with the same basic concept. Of course, in that vein we could say that Bing Maps was an even closer improvement on TerraServer. But that undermines the whole premise of the article, since Bing Maps still exists…
  • We can trace some of the actual lineage of Google Earth separately back to work at SGI around 1996, called “Space to Face” — a virtual zoom in from 100 miles up down to a single spot on the ground. This demo inspired Chris Tanner (et al.) to invent a cheaper way to render the right portion of a planet-sized image in real time without the expensive custom hardware. That was the first key innovation of Google Earth, coming around 1997/98.
  • The only similarity between the way GE handles imagery and TS is the idea of chopping a multi-terabyte virtual image into multi-resolution tiles (see the sketch after this list). This is such a basic idea that it’s not even patentable (another company already lost on that claim). To be viable for delivering real-time imagery, Keyhole had to innovate significantly in how it batches tiles for efficient network transport. And unlike TS and VE, the rendering in GE is not actually done in tiles.
  • The most important difference is in the user experience. TerraServer is a simple 2D click-through sequential aerial image-browser. Google Earth is a seamless virtual planet rendered in real-time 3D, letting you fly anywhere and see any view. It renders 3D roads and labels in real-time, optimizing for the user’s view. It lets you literally change your perspective, from way out in space down to standing on the ground. It’s basically AR/VR (without the HMD) while TS is basically zoomable maps.
  • Google Earth was the circa-2005 re-branding of Keyhole, which existed from 1999 on, with code dating even earlier. It’s hard to claim MS invented Google Earth via TerraServer when the two were roughly contemporaneous at best.
  • And, as stated before, the premise that MS abandoned maps is just plain wrong. There are plenty of things we can fault MS for doing or not doing. But let’s not make stuff up. 🙂
  • There’s a much more interesting story in the relationship of maps to the future of Augmented Reality for Microsoft and others. But let’s leave that for another day.
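
The tile idea mentioned in the list above is simple enough to sketch in a few lines. This assumes the now-standard Web Mercator tiling purely for illustration (TerraServer’s own scheme differed in its details): each zoom level doubles the resolution, so level z covers the world in 2^z × 2^z fixed-size tiles, and a viewer only ever fetches the handful of tiles in view.

```python
import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple:
    """Return the (x, y) index of the tile containing a point at a zoom level."""
    n = 2 ** zoom                           # 2^z tiles across the world
    x = int((lon_deg + 180.0) / 360.0 * n)  # longitude maps linearly
    lat = math.radians(lat_deg)             # latitude gets the Mercator stretch
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

# Only the tile under the viewer (plus its neighbors) needs fetching,
# never the whole multi-terabyte image:
print(latlon_to_tile(47.6, -122.33, zoom=10))
```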


Amazon Update

Back in 2014, I published a post on why I rejoined Amazon, after spending a year at a startup. I’ll say again that Amazon is the best run company I’ve ever worked for.

I say that after close to 25 years spent with many big and small companies. I could be getting a startup funded right now. I could probably make more money doing something else. I’m at Amazon because I want to be, because it’s an amazing place to work.

The thing I tell everyone who may be thinking of coming here, perhaps after reading slanted commentaries, is to read the leadership principles for yourself. They really do represent the company and what it’s like to work here. And these LPs evolve with time and input as well.

The bigger news is that I helped spin up an exciting new project in my free time and now we’re growing. We need one great back-end developer and one great UX designer to join an amazing team. You can email me, or apply at the links below. I can’t say what it is, but it’s fun and it’s new and it’s like nothing you’ve ever seen before.

http://www.amazon.jobs/jobs/341935/senior-software-development-engineer

http://www.amazon.jobs/jobs/326804/interaction-designer

Nintendo Fires Employee For Speaking On Podcast

Last week, Nintendo localization editor Chris Pranger made an appearance on a small podcast called Part-Time Gamers. This week, Nintendo fired him.

Source: Nintendo Fires Employee For Speaking On Podcast

As companies acquire more and more (reportedly constitutional) rights to free speech, their employees ironically lose theirs.

When did this happen and who said it was OK?

Sure, “at-will” employment anticipates companies will fire and employees will quit for no apparent reason. On the other hand, one has to wonder what’s wrong with companies that treat their employees like worthless cogs vs. human beings whose passion and creativity are what make any (technically lifeless) entity seem alive in the first place.

You would think these companies would encourage that passion, but perhaps spend a few bucks up front to train everyone in media-savvy techniques: when and what to say vs. when to say nothing at all.

I have personal experience with this, having almost been fired by Microsoft (Ballmer-era, to be fair) for blogging. Later on, I apparently had a promotion temporarily held hostage over it.

Granted, writing about the company you work for is never to be done without support or counsel. Writing about what they should do in the future is extra-ballsy, perhaps even nuts. In my case, I actually had prior permission from two more senior employees in my org (one being my boss, both of whom defended me) to blog about the topic. And yet that was almost not enough to stem the tide of an angry mob escalating the issue to the top.

“To the cannon!”

Some may say that what I said was going against company direction. Others know that I was being extremely diplomatic, even generous, in my public comments compared to reality.

The VP who most frothed at the mouth over my post is now out of his. His product is now radically changed for the better. And what I’d said was such basic common sense, like “level duh,” that it eventually took hold and became overall company policy, no thanks to me.

What I know for sure is that I was ridiculously naive about the reaction of the press. Sites like “The Register” were so eager to bash Microsoft that they were willing to use me as the tool, instead of actually reporting. Because reporting would have required effort, at least a little, to figure out the really newsworthy story under the hood.

So the moral of this story is not that “one can’t talk” — we do in fact have free speech as long as we’re not afraid to use it — but that one can’t talk in such a way that it gets significant media attention without buy-in from suitably higher and higher levels of the company in advance.

Because the more attention you get, the more powerful people in a company will rise up to protect their own vested interests. And that, in the end, is why people get fired for talking.

Animating with Voice

I’ve been wanting stuff like this for 25 years. To me, this represents a much bigger step towards creating a real Holodeck than most current display technologies.

Still, it isn’t just the keying of animations that needs to be voice controlled. Directing actors and directing AIs are still worlds apart in the quality, subtlety, and expressiveness of emotion. In this case, they get some extra realism from using motion capture, which helps. Nice work overall.

Source: This Natural Language Interface Aims To Let Anyone Make Animations Jump

What’s it like to be a thousand years old?

Ever notice how as we get older, time passes more quickly? Lately, I feel like a week passes in what used to be a day.

I found an interesting page from the late 90s (its formatting ironically stuck in time) that tries to explain our perception of time as a function of age, mainly our memory of the past and expectation for the future.

Well before that, I’d based a whole science fiction story on a similar idea, extrapolating what it might be like to be perfectly healthy at a thousand years old. At that scale, we might be able to see and communicate with things that are just too slow for us today, like genetically enhanced grape vines, for example…

What kinds of relationships could we have between old and young, once the gap in ages gets into hundreds or thousands of years? AI Singularity or not, might we find it just too hard to communicate and relate among mere humans across such a wide gap in perceptual speed?

Logtime: the hypothesis that our age is our basis for estimating time intervals, resulting in a perceived logarithmic shrinking of our years as we age

Source: Logtime: Logarithmic Time Perception With Aging
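
To make the hypothesis concrete (my own worked example, not from the linked page): if a moment at age t feels proportional to 1/t, then the felt length of any interval of life is the integral of dt/t, i.e., the logarithm of the ratio of the two ages.

```python
import math

def felt_length(age_from: float, age_to: float) -> float:
    """Logtime: perceived duration of living from age_from to age_to."""
    return math.log(age_to / age_from)   # integral of dt/t

# The twenty years from 20 to 40 feel as long as the forty years from 40 to 80:
print(felt_length(20, 40), felt_length(40, 80))  # both ~0.693
# By the same math, a thousand-year-old's second 500 years would feel
# like our span from age 20 to 40.
```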


Bonus: see the movie “The Man From Earth” for an even more interesting slant.

Disney VR: Redux

A few years ago, I documented some of the cool experiences I worked on at Disney Imagineering starting in 1994. Now, inspired by John Carmack exploring Scheme as the language of VR for Oculus, I figured it would be helpful to talk about the software stack a bit. And I’ll finish with a few thoughts on Scheme for VR in the future.

First, as always, I suck at taking credit in the company of such amazing co-workers. So for the real kudos, please thank Scott Watson (now CTO of Disney R&D) and JWalt Adamczyk (Oscar winner and amazing solo VR artist/hacker) and our whole team for building much of this system before I even arrived. Thant Tessman especially deserves credit for the Scheme bindings and interop layer.

This Disney gig was my first “big company” job after college, not counting my internships at Bell Labs. My one previous startup, Worldesign, tried to be a cutting-edge VR concept studio about 20 years too early. But Peter Wong and I managed to scrape together a pretty killer CAVE experience (a hot air balloon time travel ride) for only $30,000, which represented exactly all of the money in the world to us. The startup went broke before we even started that work. But because we’d borrowed ample SGI equipment, it did get me noticed by this secret and amazing Disney*Vision Aladdin VR project I knew nothing about.

I had to join on faith.

I quickly learned that Disney was using multiple SGI “Onyx” supercomputers, each costing about a million dollars to render VR scenes for just one person each. Each “rack” (think refrigerator-sized computer case) had about the same rendering power as an Xbox, using the equivalent of today’s “SLI” to couple three RealityEngine 3D graphics cards (each card holding dozens of i860 CPUs) in series to render just 20fps each for a total of 60fps for each VR participant. In theory, anyway.

Disney was really buying themselves a peek ahead of Moore’s Law, roughly 10 years, and they knew it. This was a research project, for sure, but using hundreds of thousands of live “guests” in the park to tell us if we were onto something. (Guests are what Disney calls humans who don’t work there…)

I talked previously about the optically-excellent-but-quite-heavy HMD (driven by Eric Haseltine and others). Remember this was an ultra-low-latency system, using monochrome CRTs to avoid any hint of pixels or screen doors. So let’s dive into the software environment that inspired me for another 20 years.

Even with supercomputers with 4-8 beefy CPUs each (yes, sounds like nothing today), it took a while to re-compile the C++ core of the ride. “SGI Doom” and “Tron 3D lightcycles” filled some of those lapses in productivity…

This code was built on top of the excellent SGI Performer 3D engine/library written by Michael Jones, Remi Arnaud, John Rohlf, Chris Tanner and others, with customizations to handle that 3-frame latency introduced by the “TriClops” (SLI) approach. The SGI folks were early masters of multi-core asynchronous programming, and we later went on to build Intrinsic Graphics games-middleware and then Google Earth. But let’s focus on the Scheme part here.

Above the C++ performance layer, Scott, Thant, JWalt and team had built a nice “show programming” layer with C++ bindings to send data back and forth. Using Scheme, the entire show could be programmed, with functions prototyped and later ported to C++ as needed. But the coolest thing about it was that the show never stopped (you know the old saying…) unless you wanted to recompile the low-level code. The VR experience continued to run at 60fps while you interactively defined Scheme functions or commands to change any aspect of the show.

So imagine using Emacs (or your favorite editor), writing a cool real-time particle system function to match the scarab’s comet-like tail from the Aladdin movie, and hitting two keys to send that function into the world. Voilà, the particle system I wrote was running instantly on my screen or HMD. When I wanted to tweak it, I just sent the new definition down and I’d see it just as fast. Debugging was similar: I could write code to inspect values and get the result back in my Emacs session, or visually depict it with objects in-world. I prototyped new control filters in Scheme and ported them to C++ when performance became an issue, getting the best of both worlds.
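
For flavor, here’s a toy sketch of that live-coding pattern in Python (illustrative only; the real system spoke Scheme to a C++ core): the render loop never stops, and new function definitions are exec’d into the running show as they arrive.

```python
import queue
import time

code_inbox = queue.Queue()            # editor -> running show
show = {"tick": lambda frame: None}   # live, redefinable show functions

def render_loop(frames: int = 180, fps: float = 60.0) -> None:
    for frame in range(frames):
        while not code_inbox.empty():     # hot-swap any new definitions
            exec(code_inbox.get(), show)  # redefine without stopping the show
        show["tick"](frame)               # run the current show logic
        time.sleep(1.0 / fps)             # stand-in for actual rendering

# "Send" a new definition of tick while the show runs, like hitting
# two keys in Emacs to push a function into the world:
code_inbox.put("def tick(frame):\n    if frame % 60 == 0: print('beat', frame)")
render_loop()
```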

The Scheme layer was nearly incapable of crashing the C++ side (you could manage it with much effort, to be honest). So for me, this kind of system became the gold standard for rapid prototyping on all future projects. Thant even managed to get multi-threading working in Scheme using continuations, so we were able to escape the single-threaded nature of the thing.

Thant and I also worked a bit on a hierarchical control structure for code and data to serve as a real-time “registry” for all show contents — something to hang an entire virtual world off so everyone can reference the same data in an organized fashion. That work later led me to build what became KML at Keyhole, now a geographic standard (but forget the XML part — our original JSON-like syntax is superior).
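
The gist of that registry, as a toy sketch (mine, not the actual system): every piece of show state lives at a hierarchical path, so every subsystem, and every programmer, references the same data the same way.

```python
registry = {}

def put(path: str, value) -> None:
    """Store a value at a slash-separated path, creating parents as needed."""
    node = registry
    *parents, leaf = path.strip("/").split("/")
    for name in parents:
        node = node.setdefault(name, {})
    node[leaf] = value

def get(path: str):
    """Look up a value by its path."""
    node = registry
    for name in path.strip("/").split("/"):
        node = node[name]
    return node

put("/show/scarab/tail/particle_count", 512)
print(get("/show/scarab/tail/particle_count"))  # 512
```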

BTW, apart from programming the actual Aladdin show, my first real contribution was getting it all to run at 60fps. That required inventing some custom occlusion culling, because the million-dollar hardware was severely constrained in terms of pixel fill complexity. We went from 20fps to 60fps in about two weeks with some cute hacks, though the Scheme part always stayed at 24fps, as I recall. Similarly, animating complex 3D characters was also too slow for 60fps, so I rewrote that system to beef it up and eventually separated those three graphics cards so each could run its own show, about a 10x performance improvement in six months.

The original three-frame latency increased the nausea factor, not surprisingly. So we worked extra hard to make something not far from Carmack’s “time warp” method, sans programmable shaders. We rendered a wider FOV than needed and set the head angle at the very last millisecond in the pipeline, thanks to some SGI hacks done for us. That, plus a bunch of smoothing and prediction on the 60fps portions of the show, made for a very smooth ride, all told.
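
The core of that late-latching trick is easy to sketch for one axis (yaw only; this is my own simplification, not the actual SGI code): render a wider FOV than the display needs, then crop at scan-out using the freshest head reading.

```python
def late_latch_crop(yaw_render: float, yaw_latest: float,
                    rendered_fov: float = 100.0, display_fov: float = 80.0,
                    width_px: int = 2000) -> tuple:
    """Pick the horizontal crop of an over-rendered frame to match the
    latest head yaw (degrees), hiding a few frames of pipeline latency."""
    px_per_deg = width_px / rendered_fov
    margin = (rendered_fov - display_fov) / 2.0
    # Clamp the correction to the extra FOV we actually rendered.
    delta = max(-margin, min(margin, yaw_latest - yaw_render))
    center = width_px / 2.0 + delta * px_per_deg
    half = (display_fov / 2.0) * px_per_deg
    return int(center - half), int(center + half)

# Head turned 3 degrees right between render start and scan-out:
print(late_latch_crop(yaw_render=0.0, yaw_latest=3.0))  # (260, 1860)
```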

(I do recall getting the then-Senate-majority leader visibly nauseated under the collar for one demo in particular, but only because we broke the ride controls that day and I used my mouse to mirror his steering motions, with 2-3 seconds of human-induced latency as a result).

This Disney project attracted and redistributed some amazing people also worth mentioning. I got to work with Dr. Randy Pausch. Jesse Schell (in his first real gig, as a junior show programmer) went on to great fame in the gaming world, and Aaron Pulkka went on to an excellent career as well. I’m barely even mentioning the people on the art and creative leadership side, who helped produce a VR demo that is still better than at least half of what I see today.

Further Thoughts

So can Scheme help Carmack and company make it easier to build VR worlds? Absolutely. A dynamic language is exactly what VR needs, esp. one strong in the ways of Functional Reactive Programming, closures, monads, and more.

Is it the right language? If you asked my wise friend Brian Beckman, he’d probably recommend Clojure for any lisp-derived syntax today, since it benefits from the JVM for easy interoperability with Java, Scala and more. Brian is the one who got me turned onto Functional Reactive Programming in the first place, and with Scott Isaacs, helped inspire Read/Write World at Microsoft, which was solving a similar problem to John’s, but for the real world…

If you asked me, today I’d have to go with Javascript as the scripting language for VR. It’s come a long way from the 90s, esp. with ES6. And, like Thant 20 years ago with Scheme, I can now make JS look like anything I want, with very little performance penalty and lots of flexibility. But the single biggest benefit is that there is just so much MIT-licensed code for NodeJS and browsers; in the end, the community matters most. For rapid prototyping, nothing saves time like the code you don’t need to write.

Syntactically, lisp-derivatives aren’t that hard to learn IMO, but it does take some brain warping to get right. I worked with CS legend Danny Hillis for a time and he tried to get me to write the next VR system in Lisp directly. He told me he could write lisp that outperformed C++, and I believed him. But I balked at the learning curve for doing that myself. If other young devs balk at Scheme due to simple inertia, that’s a downside, unfortunately.

Erik Meijer once taught me that Javascript is the assembly language of the internet. With asm.js and WebAssembly, that’s become literally true. There really isn’t anything more appropriate right now as a language for building Cyberspace.


Palmer Luckey Wants to Build the Matrix

It’s worth remembering that virtual reality has never only been about gaming. Any real virtual reality enthusiast can look back at VR science fiction. It’s not about playing games … “The Matrix,” “Snow Crash,” all this fiction was not about sitting in a room playing video games. It’s about being in a parallel digital world that exists alongside our own, communicating with other people, playing with other people.

Source: Oculus Rift Inventor Palmer Luckey

Palmer Luckey wants to build The Matrix. I can totally relate. I wanted to do something similar back in my 20s, when it was called “Cyberspace,” the “Metaverse,” “the Other Plane,” or, for me, “Reality Prime.”

He’s a bit off base on a few things, though. It’s not so much that VR died in the 90s, around the time he was born. The hype certainly died down, so there weren’t many media artifacts to review later. But VR thrived in many forms, including making billions in MMO gaming, sans HMDs. Immersive VR also survived at the very high end (e.g., CAVEs) for oil exploration, simulation, and more.

What really happened, apart from the hardware remaining too expensive for mass adoption until cell phone demand drove component prices down, is that a lot of people working in VR realized there were better ways to serve the world. In other words, we moved on to bigger and better things.

It’s nice, btw, that he gives a shout-out to his time at ICT. World-class inventor Mark Bolas’s open-sourced HMD design was apparently instrumental in defining the first Oculus Rifts. Palmer may be aware of more design differences between his and Mark’s inventions than I am. The Rift has certainly come a long way since then. It’s quite nice, if not quite done.

But what about this “Matrix” thing?

In the film, people are plugged into the global AI network, their realities (and bodies) controlled by mysterious AI entities with varying motives, all centered on control. We’re a long way from that in real life. But still, the analogy may hold for what Facebook, Oculus’ new benefactor, is already doing.

In the movie, there was a weak (IMO) plot device where the AIs were secretly exploiting humans as batteries. It’s weak because: thermodynamics. People are relatively poor transformers of food into energy. What about alternatives in geothermal, nuclear or fusion power, you ask? You have to just accept this bit of superficial fiction on faith. Fair enough.

However, if you replaced the idea of “people as batteries” with “people as wallets” connected to the grid, now you’re onto something, allegorically speaking.

It’s not energy that people collectively produce to benefit the AIs, but rather new/monetizable value, which can be dollars, attention or even new ideas and intellectual property, all fungible.

To many people working in big internet technologies, customers are already fairly abstract entities, never seen directly, but more like “wallets” and “personal data” plugged in at arbitrary endpoints. These customers somehow make things (real or virtual, doesn’t matter), make money, and charge up their bank accounts, almost like batteries.

That’s not Facebook’s concern, for the most part, as it’s beyond their corporate capability to create so much original content and value at this scale. They can collect and connect it very well though.

But affecting how people spend their money and time is Facebook’s core business model. That is: influencing your “brand thinking,” consumption, and spending habits with targeted and personalized content, especially ads, or even selling your data to third parties who will do the same. That’s their bread and butter.

To get these “internet attached wallets” to open up for advertisers for the maximum return on investment, Facebook needs your “personal data” to get to know you better. For that, they provide a socially compelling service that gets you to share your life freely without worrying your pretty little head about who owns that data you created or where it goes next.

So yes, in a strong sense, Facebook is a lot like those AIs who provide an immersive world for the humans to blithely live their lives while unwittingly producing a commodity the AIs need. The main difference is that unlike the Matrix, we don’t spend all of our time in Facebook — yet. But Facebook would very much like to improve that metric, using VR and companion mobile devices (chat, text, voice) as the medium.

In the near future, Facebook will know what you like (and want) simply by how you look at things or how you react emotionally, with no manual “like” button needed. They could continue to experiment on you, as they’ve done before, to mine your personality, and potentially even control you, most subtly, by conveniently filtering and mediating your interactions, social and otherwise.

If that doesn’t seem plausible, read my previous post about how it works. This isn’t science fiction. And it doesn’t require anyone with “bad intentions” either, just bad business models, and it will happen. The result is inevitable without adequate constraints, given the push to always make more money with a bounded set of people, roughly 7 billion. It will take much longer until Facebook gets into the business of making more customers. (I’m kidding, I hope).

To be clear, I think very highly of Facebook and Oculus’ engineering talent, product designers, and leadership. I have a healthy respect for their achievements and capabilities, which only adds to these concerns — if they succeed. They don’t seem to want (or believe) this outcome as conscientious individuals, and yet they’re already building it collectively, brick by brick.

So when people openly throw around that they’re inspired by and building towards “The Matrix,” then I think we need to ask even more emphatically about social impact and ethics and demand to know who will ultimately control this new power we’re unleashing.

Palmer is right. This is not about games. The stakes are so much higher than that.

What do you think?