Five Questions for Jaron Lanier

The ‘father of virtual reality’ reflects on his rambunctious offspring — its promise, and its potential to foster bias and manipulate its users.

“I am called the father of VR,” Jaron Lanier writes in “Dawn of the New Everything: Encounters With Reality and Virtual Reality.” But “VR was actually birthed by a long parade of scientists and entrepreneurs,” he adds in this latest work, a memoir mixed with digressions on the history and current state of VR.
Questions of parenthood aside, Lanier’s pivotal role in the field of VR is impossible to deny. As a computer scientist, author, and musician, he is best known for developing and popularizing VR — he founded the first commercial VR startup, VPL Research, Inc., in 1984. Now 57, he runs a lab at Microsoft that focuses on developing augmented reality.

Fundamentally, however, Lanier is a humanist, skeptical of the power of large tech companies and avoidant of all social media. He is the author of the international best-sellers “Who Owns the Future?” and “You Are Not a Gadget: A Manifesto” — which are both critical of tech’s influence on society. Although he thinks VR can improve human interactions and relationships, he is also wary of how it could be used to manipulate people.

For this installment of the Undark Five, I spoke with Lanier about VR’s role in society, how the VR experience is different from other human interactions with technology, and why he doesn’t believe in artificial intelligence. Here is our conversation, edited for length and clarity.

UNDARK — Some researchers have shown links between using devices with screens — like TV, video games, and smartphones — and detrimental effects on brain development. Are people engaged in VR having a different experience compared with those in front of other screens?

JARON LANIER — I’ve noticed a few things. The use pattern for virtual reality is intrinsically more physical and kids get tired after a while, so it doesn’t foster the kind of zombie-like addiction that you see with devices where you can just space out totally. If you watch kids either watching videos on a tablet or playing with conventional gaming interfaces, they can just sort of sit there. When they’re using VR, they’re really active because otherwise the VR goes away. It just can’t be sustained. It’s more like going on a trampoline or something. They’ll get on it for a while, but it just doesn’t last. I think it’s exactly the way things should be.

UD — The way some AI systems are built often underrepresents women and minorities. For instance, Pokémon Go originally included very few Pokémon locations in black neighborhoods. Can VR also be vulnerable to this type of bias? How could it be prevented?

JL — That’s a huge topic. One place I’d start is that there have been some claims from some researchers that it’s more of a male thing. But if you dig down, you’ll see that the particular examples for VR that were being tested were created by remarkably non-diverse teams. If a non-diverse team creates a technology, it shouldn’t be surprising that people who fall outside of whatever that narrow demographic was will find the technology less attuned to them.

This is a gigantic problem right now, because what you tend to see in both the fundamental VR development groups at the different companies and startups, and then in the studios that are creating content, is a demographic that’s similar to the gaming demographic — which is to say young men who tend to be kind of technical and nerdy. There is a lack of diversity, which is a really serious problem because you see it reflected in all kinds of little details down the line, in everything from hardware (how things work with your eyes) to software (how activity is recognized). It holds us back.

UD — One of the biggest problems you see with VR is its potential to manipulate people. Should there be regulators? Who could be involved in preventing this?

JL — It is this gigantic mistake with the information economy where we said to entrepreneurs, “You’re gonna work within the capitalist system. We love entrepreneurs. We love Silicon Valley startups.” We love all that, but we also want everything to be free. We want our music free. We want our social networking services for free. We want our messaging free. Everything must be free, free, free.

[If] everything must be free, but you also want them to be businesses, you limit their options, right?

We initially backed entrepreneurs into a corner and said your only option is advertising. Then the problem is [that] as information systems become more advanced, advertising gradually transforms. It has transformed from just sending people persuasive content to behavior modification, where you are measuring what people do in a tight loop and then customizing the feedback they get with an algorithm in order to manipulate them like a Skinner box.

When you move to behavior modification, you make society absurd. You pull people apart because people are no longer seeing the same thing — because everything has to be customized. Everything falls apart. You just lay out this huge red carpet for whoever is the creepiest manipulator. They’re the ones that will get the most out of the system.

Let’s suppose that after Gutenberg, there was this movement to say all books must be free. Nobody can charge for a book. But it’s okay for books to have advertisements. What we would have ended up with is advertisers determining what books there were. We would have ended up with this totally screwed up world.

UD — Back in the early VR days, many people compared VR to taking mind-altering drugs. Why do you see these experiences as distinct?

JL — I have to start with a disclaimer that I haven’t used psychedelic drugs — although there are some assumptions, because of my hair or something. I think the obvious thing to say is that virtual reality is primarily, perhaps solely, involved with the edge of your perceptual organs. We can create this stimulus that you perceive, but we are not directly intervening in how you perceive it within the brain.

So it’s a totally different process, and therefore it’s something that can be an expressive or controlled process rather than a somewhat random and unpredictable one. I don’t know if we can be as vivid as drugs or not. It clearly has an entirely different dynamic.

UD — You write that you do not believe in AI. What do you mean?

JL — So I have to call out my most important mentor, a man I really love, Marvin Minsky, who was an MIT professor. Needless to say, we argued about AI all the time.

Here’s my problem with AI. At Microsoft, we have one of the three major interlanguage translation services running in the world. You can enter something in English and it will come out in German or Chinese or something. What really bothers me about this way of doing things is that in order to keep it working, we have to go out and gather millions of example phrases from people who are real, human translators every single day, just to keep up with current events and whatnot. So I think the service that we provide makes the world better. I’m glad it’s there. But here’s the thing: We’ve sent out this message that all these other people, these translators, are no longer needed because we built this electronic brain to replace them. But we do need them. We need their examples, or the brain doesn’t work.

It’s more accurate to say that we’ve created a new, more useful, better channel from their work for people who can benefit from it than to say we’re replacing them.

In order to keep this idea of AI as a way to understand what your program is doing, you have to pretend you don’t need people as much, because that ruins the whole fantasy that you’re creating this being that stands on its own. I find the whole thing a lie; it’s ugly and I just really dislike it.

The problem is not the code. It’s not the engineering. It’s not the math. It’s the storytelling.


Hope Reese is a writer and editor in Louisville, Kentucky. Her writing has appeared in Undark, The Atlantic, The Boston Globe, The Chicago Tribune, Playboy, Vox, and other publications.
