I am almost-beyond-words excited to finally see Microsoft showing off its holographic telepresence. Immersive games in your living room are amazing. Enterprise apps will be empowering. But Telepresence is the killer app for AR, hands down. It’ll still be some time before this replaces video chat and phone calls for everyone, but that time will pass quickly.
Holographic Telepresence is the main reason I pushed so hard for our small Analog Labs team to pursue head-worn displays (Screen Zero, as it was originally called). The original prototyping work behind the very first patent on this started a few months before I joined Alex’s team, with a simple demo of me as a video-quality “Star Wars” hologram, showing a better way to communicate.
Over in Craig Mundie’s org (SBG), we’d been trying to develop avatar-based telepresence up to then, culminating in Avatar Kinect. I learned just how far we are from making avatars good enough for everyday use. [Sorry Faceblock, it’s going to be a long wait.]
Just as I joined the amazing team of PMs, designers and engineers in Analog Labs, we were fortunate to be asked by upper management to re-imagine the next Xbox console. We were still in the last nine months of shipping Kinect. Alex, Ryan, Mark, Johnny and others were heads down, working super hard on finishing Kinect.
For our top secret Screen Zero/Fortaleza, it was actually perfect timing, inside a company notorious for in-fighting and killing inspired ideas — unless of course they could show billion dollar values early on or directly support the mothership (Windows/Office). AR was not nearly at that point of confidence. Far from it, it was more likely to cost a billion than earn it anytime soon. (I’d estimate 1-2 billion spent thus far).
Needless to say, at the time, AR was a giant ball of risk. Few people understood the potential or the actual time to market. In fact, Alex was originally going to pursue autostereo 3D televisions as the “next big thing,” a kind of follow-up to Kinect. But after seeing six months of concepting, prototyping, and demos for Screen Zero, even Alex was convinced this stuff would be the future. It was just a matter of time and money, both of which Microsoft had.
I give Alex so much credit for getting the project to this point. In my opinion, he was the only person I’d met in the entire company with the political skills needed to keep HoloLens alive long enough to see daylight. It’s difficult to even describe the kinds of “House of Cards” maneuvers he had to do to remain in control, gather more resources (people & tech) and prevent disruption from executives and even former executives who either didn’t get it or had other ideas.
I may get around to telling the rest of the story some other time. But I just wanted to say how proud I am of the team and the vision to show the world a glimpse of our collective future.
I don’t often comment on disappointing news articles, but I couldn’t pass on this one. The premise of the article is that Microsoft invented Google Earth in 1998 and then blew it by not exploiting their invention.
Sounds easy to believe, but it’s nonsense. And a small amount of fact-checking would have saved us from it. Most importantly:
TerraServer is not the same as Google Earth, unless you’re just talking about the idea of viewing aerial imagery on a computer in some form. But that idea existed well before TerraServer in the form of desktop applications; TerraServer’s real novelty was putting it in a web browser, for free, for the first time.
Microsoft didn’t abandon maps. It launched a competitor to Google Earth called Virtual Earth and invested heavily in sourcing aerial imagery. It also maintains (to this day!) Bing Maps as a viable, if less popular, competitor to Google Maps.
FWIW, post TerraServer, MS invented the car-mounted cameras that NAVTEQ drove around our streets for MS and Google, before Google unleashed their own Street View cars to remove that dependency. MS invented orthorectified “bird’s-eye” imagery, 45-degree views that show the tops and sides of buildings. I could list a dozen other MS innovations in this space that were productized and even successful in their time. It’s only last year (or so) that Microsoft cratered its internally-sourced mapping efforts, selling some assets and people to Uber.
Let’s break it down with some easily discovered information:
TerraServer, released in 1998, was most interesting for hosting an unprecedented 1TB satellite imagery database publicly on the internet for free, withstanding thousands of simultaneous users (millions daily). That deserves ample credit as a feat of engineering alone.
The user-experience for TS, however, was crude. Jumping from one whole tile to the next was not an adequate end-user experience. In other words, it was never an actual product, just a tech demo. The eventual productization of that demo came in Virtual Earth and Bing Maps (though the Bing brand came later).
Where2 improved on the basic map-web-interface significantly in 2004 by allowing smooth scrolling of image tiles, later smooth zooming, while mixing roads and labels. That became Google Maps, which was actually useful and not just interesting.
It’s reasonable to say that Google Maps was an improvement on TerraServer’s UI with the same basic concept. Of course, in that vein we could say that Bing Maps was an even closer improvement on TerraServer. But that undermines the whole premise of the article, since Bing Maps still exists…
We can trace some of the actual lineage of Google Earth separately back to work at SGI around 1996, called “Space to Face” — a virtual zoom in from 100 miles up down to a single spot on the ground. This demo inspired Chris Tanner (et al) to invent a cheaper way to render the right portion of a planet-sized image in real-time without the expensive custom hardware. That was the first key innovation of Google Earth, coming around 1997/98.
The only similarity between the way GE handles imagery and TS is the idea of chopping a multi-terabyte virtual image up into multi-resolution tiles. This is such a basic idea that it’s not even patentable (another company already lost on that claim). To be viable delivering real-time imagery, Keyhole had to innovate significantly in terms of how it batches tiles for efficient network transport. And unlike TS and VE, the rendering in GE is not actually done in tiles.
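The tiling idea itself is simple enough to sketch. Here’s a minimal illustration (my own toy example, not TerraServer’s or Keyhole’s actual scheme) of addressing a huge image as a pyramid of fixed-size tiles:

```python
# Toy quadtree tile addressing (illustrative only, not any product's
# real scheme). Level 0 is full resolution; each level up halves the
# resolution, so one tile covers twice the ground distance per axis.
TILE = 256  # pixels per tile edge

def tile_for(x, y, level):
    """Return the (col, row) of the tile containing pixel (x, y)."""
    scale = 2 ** level
    return (x // (TILE * scale), y // (TILE * scale))

def parent(col, row):
    """The coarser tile one level up that covers this tile."""
    return (col // 2, row // 2)

print(tile_for(70000, 3000, 0))  # full-res tile for that pixel
print(tile_for(70000, 3000, 4))  # same spot, 16x coarser level
print(parent(273, 11))           # quadtree parent of a tile
```

The interesting engineering, as noted above, was never this arithmetic; it was serving and batching those tiles to thousands of simultaneous users.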
The most important difference is in the user experience. TerraServer is a simple 2D click-through sequential aerial image-browser. Google Earth is a seamless virtual planet rendered in real-time 3D, letting you fly anywhere and see any view. It renders 3D roads and labels in real-time, optimizing for the user’s view. It lets you literally change your perspective, from way out in space down to standing on the ground. It’s basically AR/VR (without the HMD) while TS is basically zoomable maps.
Google Earth was re-branded around 2005. Keyhole existed from 1999 on, and the code from even earlier. It’s hard to claim MS invented Google Earth as TerraServer when this was fairly contemporaneous at best.
And, as stated before, the premise that MS abandoned maps is just plain wrong. There are plenty of things we can fault MS for doing or not doing. But let’s not make stuff up. 🙂
There’s a much more interesting story in the relationship of maps to the future of Augmented Reality for Microsoft and others. But let’s leave that for another day.
A few years ago, I documented some of the cool experiences I worked on at Disney Imagineering starting in 1994. Now, inspired by John Carmack exploring Scheme as the language of VR for Oculus, I figured it would be helpful to talk about the software stack a bit. And I’ll finish with a few thoughts on Scheme for VR in the future.
First, as always, I suck at taking credit, in the company of such amazing co-workers. So for the real kudos, please thank Scott Watson (now CTO of Disney R&D) and JWalt Adamczyk (Oscar Winner and amazing solo VR artist/hacker) and our whole team for building much of this system before I even arrived. Thant Tessman esp. deserves credit for the Scheme bindings and interop layer.
This Disney gig was my first “big company” job after college, not counting my internships at Bell Labs. My one previous startup, Worldesign, tried to be a cutting edge VR concept studio about 20 years too early. But Peter Wong and I managed to scrape together a pretty killer CAVE experience (a hot air balloon time travel ride) for only $30,000, which represented exactly all of the money in the world to us. The startup went broke before we even started that work. But because we’d borrowed ample SGI equipment, it did get me noticed by this secret and amazing Disney*Vision Aladdin VR project I knew nothing about.
I had to join on faith.
I quickly learned that Disney was using multiple SGI “Onyx” supercomputers, each costing about a million dollars to render VR scenes for just one person each. Each “rack” (think refrigerator-sized computer case) had about the same rendering power as an Xbox, using the equivalent of today’s “SLI” to couple three RealityEngine 3D graphics cards (each card holding dozens of i860 CPUs) in series to render just 20fps each for a total of 60fps for each VR participant. In theory, anyway.
Disney was really buying themselves a peek ahead of Moore’s Law, roughly 10 years, and they knew it. This was a research project, for sure, but using hundreds of thousands of live “guests” in the park to tell us if we were onto something. (Guests are what Disney calls humans who don’t work there…)
I talked previously about the optically-excellent-but-quite-heavy HMD (driven by Eric Haseltine and others). Remember this was an ultra-low-latency system, using monochrome CRTs to avoid any hint of pixels or screen doors. So let’s dive into the software environment that inspired me for another 20 years.
Even with supercomputers with 4-8 beefy CPUs each (yes, sounds like nothing today), it took a while to re-compile the C++ core of the ride. “SGI Doom” and “Tron 3D lightcycles” filled some of those lapses in productivity…
This code was built on top of the excellent SGI Performer 3D engine/library written by Michael Jones, Remi Arnaud, John Rohlf, Chris Tanner and others, with customizations to handle that 3-frame latency introduced by the “TriClops” (SLI) approach. The SGI folks were early masters of multi-core asynchronous programming, and we later went on to build Intrinsic Graphics games-middleware and then Google Earth. But let’s focus on the Scheme part here.
Above the C++ performance layer, Scott, Thant, JWalt and team had built a nice “show programming” layer with C++ bindings to send data back and forth. Using Scheme, the entire show could be programmed, with functions prototyped and later ported to C++ as needed. But the coolest thing about it was that the show never stopped (you know the old saying…) unless you wanted to recompile the low-level. The VR experience continued to run at 60fps while you could interactively define Scheme functions or commands to change any aspect of the show.
So imagine using Emacs (or your favorite editor), writing a cool real-time particle system function to match the scarab’s comet-like tail from the Aladdin movie, and hitting two keys to send that function into the world. Voilà, the particle system I wrote was running instantly on my screen or HMD. When I wanted to tweak it, I just sent the new definition down and I’d see it just as fast. Debugging was similar. I could write code to inspect values and get the result back to my Emacs session, or visually depict it with objects in-world. I prototyped new control filters in Scheme and ported them to C++ when performance became an issue, getting the best of both worlds.
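For readers who haven’t felt this workflow, here’s the essence of it in a few lines of Python (a stand-in for the Scheme layer; the names are mine, not from the Disney codebase). The key is that the running loop looks behaviors up by name every frame, so redefining one takes effect instantly without stopping the show:

```python
# Minimal sketch of live function redefinition in a running loop
# (illustrative stand-in for the Scheme show layer).
show_functions = {}

def define(name, fn):
    """What the editor's 'send to world' keystroke would do:
    (re)bind a named behavior in the running show."""
    show_functions[name] = fn

def run_frames(n, start=0):
    frames = []
    for t in range(start, start + n):
        # Indirection through the registry is the whole trick: the
        # loop never holds a stale reference to old code.
        frames.append(show_functions["particles"](t))
    return frames

define("particles", lambda t: t * 1)    # first version of the effect
out = run_frames(2)                     # loop runs with version 1
define("particles", lambda t: t * 10)   # tweak "sent from the editor"
out += run_frames(2)                    # same loop, new behavior
print(out)  # [0, 1, 0, 10]
```

In the real system, of course, the loop was C++ running at 60fps and the definitions arrived from an Emacs session, but the indirection trick is the same.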
The Scheme layer was fairly incapable of crashing the C++ side (though achieving that took much effort, to be honest). So for me, this kind of system became the gold standard for rapid prototyping on all future projects. Thant even managed to get multi-threading working in Scheme using continuations, so we were able to escape the single-threaded nature of the thing.
Thant and I also worked a bit on a hierarchical control structure for code and data to serve as a real-time “registry” for all show contents — something to hang an entire virtual world off so everyone can reference the same data in an organized fashion. That work later led me to build what became KML at Keyhole, now a geographic standard (but forget the XML part — our original JSON-like syntax is superior).
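To make that concrete, here’s a toy version of such a registry (my own illustration, not Thant’s actual design): every piece of show content lives at a slash-separated path in one shared tree, so every subsystem references the same data by name:

```python
# Toy hierarchical show registry (illustrative names and paths).
# All world state hangs off one tree, addressed by path.
world = {}

def put(path, value):
    """Store a value at a slash-separated path, creating parents."""
    node = world
    *parents, leaf = path.strip("/").split("/")
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value

def get(path):
    """Look a value up by the same path any subsystem would use."""
    node = world
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

put("show/scarab/tail/particle_count", 500)
put("show/scarab/position", (12.0, 3.5, -40.0))
print(get("show/scarab/tail/particle_count"))  # 500
```

Squint at those nested paths and values and you can see the shape of what later became KML’s folders and placemarks.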
BTW, apart from programming the actual Aladdin show, my first real contribution to this work was getting it all to run at 60fps. That required inventing some custom occlusion culling, because the million dollar hardware was severely constrained in terms of the pixel fill complexity. We went from 20fps to 60fps in about two weeks with some cute hacks, though the Scheme part always stayed at 24fps, as I recall. Similarly, animating complex 3D characters was also too slow for 60fps, so I rewrote that system to beef it up and eventually separated those 3 graphics cards so each could run its own show, about a 10x performance improvement in six months.
The original three-frame latency increased the nausea factor, not surprisingly. So we worked extra hard to make something not far from Carmack’s “time warp” method, sans programmable shaders. We rendered a wider FOV than needed and set the head angle at the very last millisecond in the pipeline, thanks to some hacks SGI made for us. That, plus a bunch of smoothing and prediction on the 60fps portions of the show, made for a very smooth ride, all told.
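The technique is easy to sketch with toy numbers (all values illustrative; the real pipeline did this in SGI hardware, not software): render more field of view than the display shows, then pick which window of that wide render to scan out using the freshest head reading instead of the stale one the frame was rendered with:

```python
# Toy sketch of "render wide, latch the head angle late"
# (illustrative numbers, not the actual Disney/SGI values).
RENDER_FOV = 100.0   # degrees actually rendered
DISPLAY_FOV = 80.0   # degrees the HMD displays
PIXELS_PER_DEG = 10  # horizontal resolution of the rendered strip

def display_offset_px(render_yaw, latest_yaw):
    """Horizontal pixel offset of the display window inside the wide
    render, correcting for head motion since the frame started."""
    margin = (RENDER_FOV - DISPLAY_FOV) / 2      # spare degrees per side
    delta = latest_yaw - render_yaw              # how far the head moved
    delta = max(-margin, min(margin, delta))     # clamp to rendered edge
    return int((margin + delta) * PIXELS_PER_DEG)

print(display_offset_px(0.0, 0.0))   # head still: window centered
print(display_offset_px(0.0, 4.0))   # head turned right: window shifts
print(display_offset_px(0.0, 30.0))  # big turn: clamped at the edge
```

The clamp is why you render a generous margin: turn your head faster than the margin allows and the correction runs out of pixels.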
(I do recall getting the then-Senate-majority leader visibly nauseated under the collar for one demo in particular, but only because we broke the ride controls that day and I used my mouse to mirror his steering motions, with 2-3 seconds of human-induced latency as a result).
This Disney project attracted and redistributed some amazing people worth mentioning. I got to work with Dr. Randy Pausch. Jesse Schell (also in his first real gig, as a junior show programmer) went on to great fame in the gaming world, and Aaron Pulkka went on to an excellent career as well. I’m barely even mentioning the people on the art and creative leadership side, who helped produce a VR demo that is still better than at least half of what I see today.
So can Scheme help Carmack and company make it easier to build VR worlds? Absolutely. A dynamic language is exactly what VR needs, esp. one strong in the ways of Functional Reactive Programming, closures, monads, and more.
Is it the right language? If you asked my wise friend Brian Beckman, he’d probably recommend Clojure for any lisp-derived syntax today, since it benefits from the JVM for easy interoperability with Java, Scala and more. Brian is the one who got me turned onto Functional Reactive Programming in the first place, and with Scott Isaacs, helped inspire Read/Write World at Microsoft, which was solving a similar problem to John’s, but for the real world…
Syntactically, lisp-derivatives aren’t that hard to learn IMO, but it does take some brain warping to get right. I worked with CS legend Danny Hillis for a time and he tried to get me to write the next VR system in Lisp directly. He told me he could write lisp that outperformed C++, and I believed him. But I balked at the learning curve for doing that myself. If other young devs balk at Scheme due to simple inertia, that’s a downside, unfortunately.
Here are the slides from my AWE 2015 talk and a link to the video on youtube. See below for the original speech in prose form. Thanks again to Ori and Dave for inviting me. And I was totally humbled to be sharing the stage with some of my heroes that day.
Funny story. I’d practiced the whole speech for a week or more. I was totally relaxed back stage. But I somehow got nervous in the moment and the speech escaped my brain about 20 seconds in. Embarrassing!
Without a teleprompter or any notes, I had to wing the whole thing. So a big thank you to Travis and the A/V team for giving me a new clicker to buy time and cover for my fumble. Totally cool move.
Let me know which version you like better: “as written” or “as delivered.”
Lesson: next time I’m going to just do it more spontaneously, since that’s how it may wind up anyway.
The Original Speech:
 In the last 23 years, I’ve worked for some really big companies and some really small ones. I’m not here to represent any of them. I’m here with the simple job title of Person. And I’m here to hopefully inspire some of you to take action, and others to at least understand what needs to be done.
We’re all here today because we recognize the game-changing potential of AR/VR. This technology brings magic into the world. It gives us superpowers. How can that not be game changing? But this new magic is so powerful, and the potential is so big, that some of the biggest companies are already vying for control.
 So what happens when big companies – with a variety of business models – bring what we might call “big magic” into the world?
I was a little worried about using such bold words until I heard David Brin talk so eloquently this morning. I’ll sum up: the danger zone of any big new technology is when it’s still unevenly distributed. We saw this with everything from fire to radio to books to TNT. There is no such thing as a purely good technology. It’s all in how you use it.
The good news is we get to decide how this goes down. We’re the creators, but also the customers. We can shape the world we want.
 I gave a talk here two years ago equating AR/VR to a host of new human superpowers. I’m pleased to see the theme of the conference this year.
That talk is on-line if you’re interested. But even then, these ideas had been percolating for a long time and I was just dying to talk about it.
 In 2010, I’d joined a secret project inside Microsoft to reboot the next-gen Xbox…
Leadership had concluded that cramming 10x more of everything wasn’t enough. They wanted something fundamentally more game changing, something where they could spend, say, a billion dollars to buy a strong lead. They wanted something that would normally scare them (and everyone else) from even trying.
 I had a few ideas…
I’ve been very lucky in my career to work with amazing people on amazing opportunities.
I got to work on Disney’s $20M Aladdin VR ride, helped craft Google Earth and Second Life. I was recruited to Microsoft in 2008 to help build social AR-like experiences into Bing. We called the project “First Life.” Alas, some folks didn’t think mobile was going to be a big deal and it stalled. So I switched tracks to work on communications, social avatars, and then interactive video holography.
That lead me to join XBox Incubations, with perfect timing, to propose and build the very first HoloLens prototypes and concept definitions, and invent about 20 new ideas in the first six months.
 Just to clarify:
TP is Telepresence. Holographic toilet paper == worst idea ever. The use (some might say abuse) of the word Hologram came from popular fiction, like Star Wars.
Hundreds, if not thousands of people worked on HoloLens after me, solving some very hard problems. Many of the original team have moved on. They ALL deserve credit.
 So AR is really coming. It’s only taken 47 years since Ivan Sutherland built the first prototype.
 But all of a sudden VR is exploding again. Yes. I want my holodeck too. But since my Disney VR days, I’ve come to realize that early VR is going to be mostly “Dark Rides.” Think Pirates of the Caribbean. You’ll sit in a chair and experience an exhilarating, magical, evocative but not-very-relevant journey.
On the whole, VR is:
✓ High Presence and Immersion
✓ Low Relevance to Your Daily Life
Not that there’s anything wrong with a little escapism, from time to time.
 The fundamental difference between AR and VR is not hardware. Same tech will eventually do both easily. The fundamental difference is that AR builds on Context. In other words, it’s about You and Your World. And context goes to one kind of monetization.
Mixed Reality, as a reminder, is that whole spectrum from AR to VR. You could look at it as a spectrum of reality vs. fantasy, but it’s more instructive to see it as a “Spectrum of Relevance.”
 Why are highly relevant experiences worth an order of magnitude more?
1) Because we spend so much more time and money in the real world
2) Because we care so much more about the real world
All good so far. AR is a goldmine of reality. VR is a goldmine of creativity.
 But, Beware the Dark Side
 You knew there had to be a dark side somewhere, right?
Fact: the more you can be swayed by a given ad, the more that ad is worth. Companies want to track your desires, your purchasing intent, and your ultimate transactions to (as they say) “close the loop.” The world is moving from analyzing your clickstreams (on the web), to analyzing your communication-streams (as in chat, voice, email), and eventually to studying your thought-streams.
How do they obtain your thought streams and mine your personality without literally reading your mind?
It’s not like people would ever treat other people like lab rats…
 Oops. And Facebook is not alone, not by a long shot.
Note: scientific experiments are often very positive. They rely on this thing called “informed consent.”
And no, EULAs and privacy notices don’t count. Let’s stop pretending people read those. Informed consent means informed consent.
 In 1995, I had the honor of working with Dr. Randy Pausch at Disney Imagineering to help study, with informed consent, how people experienced VR… We continuously recorded people’s gaze vectors – hundreds of thousands of people — as they flew their magic carpets through the world of Aladdin, to study which parts of our storytelling worked best.
BTW, we found that while men averaged a head angle of “straight ahead,” women, on the whole, looked 15 degrees to the left. What?
We figured out that the motorcycle-like seat of our physical VR rig forced people wearing skirts to sit side-saddle. So, statistically speaking and unintentionally, the data told us if you were wearing a skirt.
 More recently, VR helped reveal dangerous sex offenders before their release, even where the offender believes he’s been cured. They were shown risky scenes. I won’t elaborate on how their responses were measured…
But with coming face capture, eye tracking, EEGs, muscle sensors, skin conduction, pulse and more built into new HMDs, imagine what kind of latent inclinations can be teased out of you. Companies like Facebook and Google, betting on VR, will be able to show you something and tell instantly how you feel about it, no Like Button necessary.
 Did you look at the woman in the red dress? We know you did.
The thing about the Matrix is: the whole humans as batteries trope is kind of silly. But if you imagine people as wallets and credit cards connected to the internet, that seems to be exactly how some companies look at their customers.
But for the record, I don’t think we’re in danger of being grown in vats anytime soon.
 Tobii is a leader in using eye tracking to help understand user behavior.
The picture on the left is of a woman wearing glasses that track her gaze as she shops. The person with the tablet is studying her behavior.
Another study on the upper right tracked men and women’s gaze over various photos. Conclusion: men have no idea what they’re staring at most of the time. These are involuntary reactions. Stimulus and response.
To the extent AR or other devices track what we see and do, companies will be able to monitor our sensory inputs and emotions as we pursue our day. The thing about AR is it now gives us a compelling reason to wear it all day long.
 The point of all this is not to get scared, feel powerless and withdraw.
The point is that we have control. We always did.
Nothing in the world is free. You’re going to pay for stuff one way or another.
Companies that sell things can and should be the most customer-focused, protecting privacy and curbing abuses. That’s in their core business interest.
Companies that sell user data, sell ads, sell you, well, they have every incentive to keep pushing the envelope on this front and keep you ignorant of it.
It’s all about their business models, not you personally. You can steer this by simply choosing who you do business with.
 Case in point, Apple lately has one of the better takes on user privacy, responding to latent fears over just how much data they’re collecting. They’re a product company, and even their iAd product is more privacy-friendly than most.
But can Apple bring it home? The next thing I want to hear from Apple is: “You OWN your data. You made it. It’s about you. Can we help put it to work for you, please?”
HealthKit is the closest thing to that so far, with opt-in studies. And it’s great to see them trying to figure this out.
I’d also give Cortana kudos for the notebook feature, letting you easily see and edit what Microsoft knows about you. That comes from consumer demand.
 Recapping so far:
Big Companies are bringing “Big Magic” to the world
Big Magic can either Liberate or Enslave us
We get to pick. Here’s how…
 Basically, we need to build the AR equivalent of the World Wide Web. And I don’t mean just boxes in space.
You own your content, your little part of the graph.
You create the world you want to live in.
 All of these statements may be true, to some extent. But they don’t have to be true. We’ve also let developers of web technologies largely off the hook. We can demand parity between browsers and native experiences. Apple and Microsoft have for years let their browsers, especially on mobile, lag the native side.
Now, it’s true that having a free and open web today doesn’t guarantee privacy or lack of exploitation. Just look at web bugs and cookies and Facebook. And security is the primary reason cited for the lack of features in web tech.
But having a free and open web does at least make it very hard for any one big company (or government) to eliminate your choice unilaterally. You get more options the more open the field is. And you get more voice. That’s the point. Just look at the fight over net neutrality. Could that have happened if AT&T provided everyone’s internet service? No way.
 So consider what made the WWW a winner. Why didn’t the web take off as a series of native “apps” and walled gardens when they’re clearly much more safe and capable?
✓ Content is device independent
✓ Content is dynamically and neutrally served
✓ Content is viewable, copyable, mashable
 Same for the next phase of evolution.
 Content is going to need to adapt based on the chosen device, its resolution, perf, field of view, depth of field. And for AR it’s going to also need to adapt to real-world location, people and activity.
Baking this all into native code and statically packaged data is problematic. It has to be adaptable, reactive at its core.
There are millions of self-taught web developers out there who live and die by View Source and Stack Exchange. It will take an army of AR/VR enthusiasts to likewise capture the real world and build new worlds that we want to see.
Or it could follow TV, Movies, Games and big Media down a content-controlled narrow mind-numbing path. I hope not.
 In AR, content has to adapt to the user’s environment, including other people in view.
Here we see just the furniture playing a role. That’s pretty cool to see in action.
Mapping the world is far less invasive than mapping our brains.
 Business instincts will naturally drive companies to have app stores, to protect all IP and mediate access from the irrational mob, i.e., you.
Resist the urge. It’s not good for them and it’s not good for you.
The value of copying and remixing content far outweighs the loss of control. Look at YouTube vs. the App Store.
I look at App Stores and see more clones and less inspiration. DRM doesn’t prevent copying. It just makes everything suck.
 Most Importantly: We need a way to link people, places & things into a truly open meta graph.
Here, I’ll praise Facebook for Open Graph and Microsoft for following with their own kind of graph. What we need next is the meta-version of these that spans companies to build a secure graph of all things or GOAT.
Open experiences need to understand the dynamic relationships among people, places, and things. But information about people should be considered private, privileged, and protected. Those links can’t even be seen without authentication, authorization and auditing, aka user’s informed consent.
Users will live in a world where they subscribe to layers of AR based on levels of trust. Do I like Facebook’s view of the world? If yes, then I can see it. Do I like Microsoft’s? OK, then that’s visible too. Do I trust Facebook with my data? If yes, then they can see me too.
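That subscription-and-consent model can be sketched in a few lines (hypothetical names and structures, not a real protocol). The point is that visibility must be opt-in in both directions:

```python
# Toy model of mutual, opt-in AR visibility (all names hypothetical).
# A user sees a provider's layer only by subscribing; a provider sees
# the user's data only by explicit grant. Neither implies the other.
subscriptions = {"alice": {"facebook", "microsoft"}}  # layers Alice views
grants = {"alice": {"microsoft"}}                     # who may see Alice

def layer_visible(user, provider):
    """Can this user see the provider's AR layer? (user opted in)"""
    return provider in subscriptions.get(user, set())

def user_visible_to(user, provider):
    """Can the provider see this user's data? (user granted access)"""
    return provider in grants.get(user, set())

print(layer_visible("alice", "facebook"))    # True: she subscribed
print(user_visible_to("alice", "facebook"))  # False: no data grant
print(user_visible_to("alice", "microsoft")) # True: mutual trust
```

Note the asymmetry in the middle case: Alice can consume Facebook’s layer without Facebook getting to consume her.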
We can build this. We built the web. It need not be owned by any one company. And we have just enough time to get it right.
 This is the key. You already own the content. Copyright is implicit in the US and beyond. If you published it, you own it.
If you express yourself on Facebook, right now, they own it, or at least can use it any way they want. That’s because you clicked a EULA. But that’s not the natural state of affairs.
We need a markup language for reality, letting us describe what IS in a semantically rich way.
We also need an expression language for content, that lets this content adapt to the environment.
There are some great starts in open standards. We can build the next steps on top of those.
 Ask yourselves: why are we doing this AR/VR stuff? For the technology itself? For the money?
It’s not an internet of things or a web of sites or a graph of places. It’s about people.
We do this because we ARE those people, building amazing things for ourselves and others to enjoy. And the things we build next are going to knock their socks off.
So our focus must always be on the people, our customers, and how to help and not hurt them. Because, even if we’re selfish, they and we are one and the same and our choices matter.
 We live in and make up an internet of people.
Key findings on American consumers include:
— 91% disagree (77% of them strongly) that “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”
— 71% disagree (53% of them strongly) that “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”
— 55% disagree (38% of them strongly) that “It’s okay if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
Source: The Online Privacy Lie Is Unraveling | TechCrunch
I’ve had this same argument for years.
The “smart money” says that people no longer care about privacy. They point to millennials who post tons of embarrassing crap about themselves on Facebook. They say it’s a cultural shift from my generation to the next. Privacy is dead or dying.
I say that teenagers are generally reckless, nonchalant about their own futures, as an almost rite of passage. However, teenagers, for the most part, grow up, become responsible, and have concerns like the rest of us. So I figured the pendulum would swing back towards privacy as soon as these kids got older, saw the pitfalls. The new kids would become the reckless ones.
This study shows that people do actually care about privacy. But cynicism about how much power we have to protect it is a third factor to consider. If people are resigned to lose their privacy, it becomes less vital. It doesn’t mean they care less or are any less harmed. If people felt more empowered, they might even fight for their rights.
For me, this is pretty simple. If I create data by my activities, it’s the same as creating a work of art. It doesn’t matter that my phone is the tool vs. a paint brush or keyboard. This data would not exist except for my actions. I made it and I own it, unless I choose to sell it.
It’s perfectly fine for any adult to trade or sell their own data, as long as there is informed consent and people are in control of their own information.
I just got outed on Techcrunch. So I’ll come clean. 🙂
I’ve recently (April 2014) rejoined Amazon as a manager and developer on the Prime Air team.
We’ve set up a new team in downtown SF to focus on some interesting aspects of the project. We’re growing rapidly. If you’re interested in the project and love the Bay Area, feel free to reach out or apply directly via the Amazon website (here or here).
So why did I re-join Amazon?
The simplest answer is that I really admire this team, this project, and this company. I’m not one to gush or blush — if anything I excel at finding fault. But this job is really fun. We have trained professionals who love to do the stuff I don’t.
The project doesn’t need any more hype from me. JeffB already talked about it on 60 Minutes. You may have heard me talk about various superpowers in another context… This is a similar level of game-changer, IMO.
Speaking personally, this project meets a number of important requirements for me:
First, it needs to be fairly green-field. I did early AR/VR in the 90s. We built an entire Earth in 2000. I worked on massive multiplayer worlds and avatars after that. I moved onto robotic parachutes in 2004, designed geo-social-mobile apps in 2008, then telepresence and more stuff I can’t talk about after that.
I like to learn fast, often by making mistakes, with a whole lot of guessing and path-finding until the way is clear. By the time 100,000 people are working on something, there are up to 100,000 people who are potentially way smarter than me, plus ample documentation on the right and wrong ways to do anything.
Second, I want to work on projects that use new technology in the most positive ways, sometimes as an antidote to the other negative ones out there. I’ve left companies on that principle alone…
I’ve both given and received some criticism over this – even been called a “hippie.” But I didn’t inhale that sort of sentiment. I just moved on. At the end of the day, I always try to do the right thing and help people wherever I can.
That’s based on what I like to think of as “principles.” Many of the reasons I like Amazon as a company are due to its principles.
At Amazon, I saw these principles come up almost every day on the job and I was suitably impressed. Naturally, they’re used as a kind of lens for job candidates, esp. as a way to efficiently discuss their leadership skills. But these concepts are used and reinforced almost daily for things like professional feedback and taking responsibility, above and beyond our job specs.
I’ve seen senior leaders uphold the “vocally self critical” principle in meetings, where at other companies such behavior might be called a “career-limiting” move. This principle alone meant that even in my earliest interviews, I could be blunt about learning from my past mistakes without worrying if I should say things like “my biggest fault is that I work too hard.” What a relief.
The first Amazon value on the list is, of course, “customer obsession.” There’s no other value that rises above this, not expedience or profit. And in my opinion it shows.
Companies that stick to their principles tend to be consistent and well-trusted. Having clear and understandable principles, reinforcing them and even working through when they seem to be in internal conflict leads to making better decisions overall and avoiding really bad ones.
That’s especially true when you don’t have the luxury of seeing the full repercussions of your choices in advance. These principles are there for when the choices are hard or unclear, not just when they’re easy.
I believe that companies that get this, and especially those that put their customers first, are the ones that will succeed.
BTW, there’s still some perception out there that “the FAA nixed Prime Air.” Here are a few articles that addressed that question directly.
This could be catastrophic for companies like Unity3D, which solve cross-platform 3D by making you work inside their time-tested little sandbox. The only thing propping that model up is the poor state of WebGL support on mobile, the last remaining bottleneck to real Web 3D.
Apple officially supports WebGL only inside iAds, which at least proves the barrier isn’t technical. Android support is variable, but within reach. These conditions are, IMO, mostly a function of the current lucrative business model for apps, not any lingering hardware or security limits. Consider: if mobile browsers improve, then cool 3D apps are once again free, unchained from the “app” and “play” stores and their up-to-30% markups.
On the other hand, the web is what built the digital economy that’s fueled mobile growth. Mobile phones have gone back to the pre-browser era to make some money, but it’s inevitable that we’ll all return to a more open ecosystem, esp. on Android. Closed ecosystems like AOL only lasted until people found the door.
Here’s a fun Verge article from last spring that mentioned me nicely. I don’t seek out much press, but it’s nice to get a good review.
If you haven’t seen the video, you can watch me battle the stage lights below.
In an inspirational speech, Avi Bar-Zeev of Syntertainment, a startup in stealth mode, suggests that [Augmented Reality] could change the world.
“Every game-changing technology can be recast as a human superpower,” he suggests, likening the television to primitive clairvoyance, the telephone to telepathy, and the wheel to telekinesis. “If I decide I want that rock to move, I have the power to make it move with much less effort,” he says. But if you could reshape your reality at will, could “teleport” elsewhere, he asks, what would it mean to be in jail? Bar-Zeev also points out that the difference between augmented reality and virtual reality is purely semantic if you imagine screens built into contact lenses. “What’s the difference between AR and VR? Open and close your eyes. That’s it.”