SEEING ISN'T BELIEVING

We’ll soon be able to experience reality as we want it. But is that a good thing?

Seeing the world through a whole new type of rose-tinted glasses. Image: Chris McRobbie

The human eye is an extraordinary organ. Packed with over 120 million photoreceptor cells, it can discern 10 million different colors, and its muscles are among the fastest in the body—the average blink lasts just 100 milliseconds.

For all of its wonders, however, the human eye hasn’t significantly evolved in millennia. Although we’ve invented glasses and telescopes to improve our vision and cameras to record it, our ancestors perceived the everyday world much as we do.

But that’s about to change in a radical way.

Imagine this: You’re strolling down Lafayette Street in New York on a gloomy December morning wearing your Apple augmented glasses. These stylish spectacles have built-in GPUs (graphics processing units) that customize or “curate” everything in your field of view in real time. It’s no fun seeing trash or dog poop on the street, so these glasses detect and delete them from your sight, replacing them with images you prefer: green bushes, red geraniums, and Hollywood-style sidewalk stars with the names of your fondest mentors and professors (or is that only my wish?).

Sensing your depressed mood, your glasses compensate by brightening up the bleak winter sky with hints of summer sun. Instead of street advertising, you see videos of friends speaking to you in Italian to help you get a handle on the language for an upcoming trip. Thanks to the augmented-reality feature in Tinder, recent dates that didn’t work out are automatically “edited out” of your line of sight—like that guy who dumped you last week (jerk!). If you happen to pass him on the street, you only see a ghostlike figure; it’s the literal version of “ghosting”. You said you never wanted to see him again—these glasses make it so.

Because you recently downloaded a Barneys fashion app, your glasses detect and highlight on-trend outfits others are wearing. You bookmark them with a quick eyebrow-raise gesture. Meanwhile, ratings for stores you pass pop into view, along with avatars of smiling friends who recently made purchases there. When you get hungry, gluten-free restaurants are highlighted while burger joints are blurred, thanks to the healthy-food filter superimposed by your health-insurance company. Hey, they paid for the glasses, so what can you expect? You just wish you could turn off the small pack of zombies they’ve programmed to nearly catch you each time you go out for a jog—that’s one way to stay motivated.

Don’t mistake this scenario for some far-future sci-fi fantasy. In research labs today, every smartphone maker, social media giant, game studio, and wireless carrier is racing to “own” and personalize the photons that hit your eyes; one research analyst pegs computer vision as a $30 billion market by 2025. For the first time in history, we’re about to have a super-human ability to experience reality as we want it (or as the companies who are subsidizing our glasses want it).

A whole ecosystem of spatial software companies is emerging to create compelling applications and new business models for the next giant wave of computing. One company, Magic Leap, has raised a stratospheric $2.3 billion to offer “mixed reality” experiences using sophisticated light-field display spectacles. Augmented and mixed reality already exist in the form of computer vision-enabled product packaging from LEGO, and playing cards that trigger animated characters when viewed through a phone app. The Air Force used to pay tens of millions for a heads-up display helmet worn by pilots. Today, Microsoft sells one for $3,500. In the next decade, consumers won’t even have to pay that—they will get magic glasses for free, if they’re willing to suffer persistent reality-based advertising or pro-health behavioral nudges.

And it’s not just what we see: Everyday objects are gaining the ability to see, too. An incredible range of new products embedded with AI cameras is starting to hit the market. Mirrors will analyze biometrics and health and offer daily fashion advice. Salt shakers will monitor our diets and encourage us to put less of the white stuff on our kale chips. Bookshelf-size farms in our homes will use computer vision to optimize growing conditions, so that we enjoy a constant supply of fresh produce with minimal work.

Computer-powered eyes will also help with our social interactions. Today’s politicians have handlers who whisper tips to them about the people they meet at a party so that they can remember names of spouses, kids, dogs, and so on. A version of these assistive agents is coming soon to augment our glasses. Imagine you’re walking toward me, about to say hello, and an icon I’ve designated as my “agent” pops up in your field of vision and speaks to you. “Before you interrupt David,” this agent says, “please know that he has a book deadline coming up in two days, and he hasn’t had a lot of sleep recently. He might be irritable. Do him a favor, and please don’t ask him about it. On the other hand, his daughter is about to graduate from high school with a 4.0 GPA. Ask him about that.”

As wearables mature and become more fashionable—and as investors and entrepreneurs deliver specific commercial applications—super-sight will penetrate deep into our everyday lives, changing the way we think, shop, work, and live. It will make many jobs easier and solve many problems in our lives.

But is that really such a good thing?

The downside of double vision

Almost every computer-vision application raises concerns that are, at turns, fascinating and frightening. In China, for example, governments are equipping city buses with AI-powered cameras that allow bosses to spot reckless driving and drowsiness by tracking drivers’ bodily movements and facial expressions.

Safer transportation sounds great, but we can only guess at the potential effects that 24/7 surveillance and monitoring will have on our psyches. Workers might no longer trust their managers, knowing that bosses are comparing their performance to that of their peers, posting the analytics for all to see. Even more disturbingly, individuals might find it difficult to limit access to the data that on-the-job monitoring generates. Employers might share detailed information about staffers with insurance companies, making it harder for people to get policies. Or they might share it with universities, making it harder for employees (or ex-employees) to gain admission.

The deeper we dig into these technologies and their secondary effects, the more unsettling the possibilities become. What will happen to relationships and our sense of community when we’re all walking around every day in our own, self-contained, personalized realities? Some will choose to erase entire subsets of the population from their view: seniors, transgender people, homeless people. This would prove catastrophic for social unity and general empathy. We might even choose to have our trusty computers superimpose “hazard ratings” over the heads of everyone we encounter—flashing red light alerts showing a history of felony convictions or simply low Airbnb or Uber ratings. In that scenario, the guilty among us may never manage to live down past misdeeds. 

But super-sight isn’t inherently good or bad—it’s both, like other monumental inventions such as plastic and the internal-combustion engine. At this juncture, it’s critical that we understand these emerging technologies as best we can, charting their likely effects across key areas of our lives. The more completely we can imagine our collective augmented future, the more inclined we’ll be to integrate the technology sensibly, regulate away its excesses, and use it as a force for good.

There are six profound risks that we must confront in a world saturated with magic glasses:

  1. Social insulation. Super-sight provides each of us with a tailored personal view of the world, impeding our ability to connect with, understand, and empathize with others. Product designers must design ways for people to synchronize views and see the same world.
  2. State surveillance. Pervasive cameras already installed in cities, workplaces, and homes allow governments to amass unprecedented data. Our civic leaders must put safeguards and policies in place to protect privacy. We have the right to be invisible and to be forgotten.
  3. Pervasive persuasion. People are accustomed to trading personal data for free digital services (hello, Google). In the age of super-sight, companies and brands will know our wishes and influence our behavior and purchases as never before. Financial-services brands need to protect these personalization profiles and broker access to them.
  4. Super-sight for some. Historic levels of inequality are already entrenched in society, powered in part by digitization. We need to break out of this digital caste system and help everyone gain access to super-sight’s educational and connecting powers.
  5. Cognitive crutches. Assistive technologies like GPS, autopilot, and CAD lead people to lose skills like map-reading, flying, and architectural drafting. In the future, we will rely on augmented services for more and more social and technical tasks. To combat cognitive atrophy, we need to embed challenges and skill-building experiences in smart glasses, so that people don’t lose their ability to remember colleagues’ names and make decisions for themselves.
  6. Hallucination side effects. Spending all day in mixed reality will lead to disturbingly weird ways of living. Some will choose to redecorate their environment in ways that the rest of us find incomprehensible. We need to decide to remove our glasses for certain activities and times of day to decompress from the trippy effects and recalibrate.

Powerful tools inevitably cut both ways. As a society, we must decide how far we will permit government or private companies to go in applying computer vision, and we must implement laws or policies to prevent abuses of privacy, dignity, and equality.

How much control should we reserve for people in designing augmented-reality applications, and how much for computers? Companies must also wrestle with how much privacy protection to design into product architectures, how to correct social biases (racial, ethnic, and so on) in the data we use to program computers, and how to explain the inner workings of artificial intelligence so that people understand and trust super-sight products and services.

If companies handle these issues well, they might help us contain many of the technology’s more disturbing implications. If they don’t, the technology will prove more destabilizing, and society will have to move more aggressively to regulate it.