I read for the times a book transports me the way Ready Player One did. For me, this book was seamless and filmic — and synchronous, because my real and made-up worlds have a habit of colliding.
Mere weeks before reading, a midnight Google search for “What did Roberta Williams do after Sierra?” led me back to Adventure and Dig Dug, both of which happen to be mentioned by Wade, the protagonist.
Naturally, I was compelled to seek out these old videogames and play them (along with Digger, Bubble Bobble, Zelda, etc.). And in doing so, I discovered that the desperate hours I spent in these games as a child are just as fervour-filled now that I play as an adult.
The experience makes me want to reach into the pages of this book and tell Wade that Halliday recreated the 1980s in the OASIS not to relive his troubled childhood, but to preserve his access to the fantasy, the familiarity, the escape.
The wonder of Halliday’s OASIS is built on users’ abilities to experience created worlds in virtual reality. This VR experience includes the ability to choose and change one’s own avatar, which can be completely different from one’s real-world self — and this technology really is close to hand.
But what if you could change the way other people look and sound to you?
The ability to change the way we view others — or, rather, their avatars — is a principle I’m reading about in The Singularity Is Near. Ray Kurzweil poses all sorts of wondrous ideas in this book, but it is this one that has hooked me for the time being.
I can see its potential for good: if we can choose to see others differently, this may make it easier for us to interact without exterior distractions, and it could also remove barriers related to physical ability and mental health.
But I think the more likely outcome is that we would end up either demeaning other people or remodelling them after some standard we have built up in our own minds, whether that standard exists in its entirety (e.g. an actor, a historical figure, a comic or storybook character) or as a partial representation of cultural ideals (e.g. high fashion, larger eyes, an athletic figure, a deeper voice).
Without your real-life partner’s consent or knowledge, you could morph them into Jessica Rabbit. While you might not have a problem with that, would they? Should they? And what if it was someone else perceiving your partner like this — or making changes to your avatar?
I have many questions about how this concept would function, including:
- Would our entire sensory experience be open to change, to include taste, touch, and smell, as well as sight and sound? How would we perceive space?
- If we lived predominantly in a virtual world, what would this mean for our interactions with real-world places and items, and flesh-and-blood people? Would we even continue to interact in real life?
- What happens to empathy when we see only what we want to see?
Even without VR, we humans are pretty good at seeing and hearing what we want, so cloaking our perceptions in new skins might represent only a small leap from what goes on inside most minds anyway. This, however, has the potential to go much further. Not only could we change our environment’s aesthetics and functions in a hyperrealistic way, we could also perform acts in our virtual worlds that we would never ordinarily consider, with no natural consequences.
Part of human growth is about challenging our ways of seeing. How we deal with, incorporate, or rise above these challenges shapes our character. If our lives are never tested beyond experiences we enable, without feedback tethered to the world around us, we narrow our potential for emotional and mental growth.
And if our real experiences cease, what does this mean for our values and priorities? What does this mean for the future of not just VR, but AI — and humanity?
[Feature image: ‘Reality’ by Eran Fowler, used with permission. Find more of Eran’s incredible work at http://eranfolio.com/]