Will humanity become a hive mind of co-thinking, non-independent individuals, or will we remain the independent individuals we have always been?
In this SDXD workshop, our guest speaker Tyson McDowell addressed this question and challenged the group to take responsibility for shaping the answer, both for themselves and for their workplaces.
Tyson McDowell is a serial tech entrepreneur focused on positively merging humans with AI-driven technologies. As a venture capitalist, public speaker (including TEDx), and product designer, he currently works to empower technology entrepreneurs to be exponential by leveraging next-generation technologies to improve the quality of life of all humans in their workplaces, in their own bodies, and in their key relationships.
What is AI?
Like many emerging trend buzzwords, Artificial Intelligence (AI) means many different things to different people. Attendees defined it this way:
- Predictive analytics
- How to make machines behave like humans
- Leveraging the system to solve people’s problems
- How I’m going to lose my job
Tyson defined AI simply as "any technology that's invisible, but changes everything." More technically, "it's tons of data exhaust analyzed by computers that tell humans and computers what to do."
Why Does UX Matter in AI?
Every user experience involves two parties: a user and a product, system, or service. Until recently, there has always been a distinct difference between humans as users and the products, systems, or services we interacted with. As a result of technological advancements and artificial intelligence, that line is beginning to blur. Technology is not only more prevalent in all of our daily lives, it is being specifically programmed to mimic the qualities that make us human.
“When you’re designing a technical system, the human and the computer are equal-part stakeholders. The human is part of the system.”
AI is the outsourcing of human knowledge to a computer. As designers, we define that computer, and we provide humans (who are the subject of our UX) with the service of outsourcing their knowledge to it.
Tyson offered us the following formula for determining how humanized an AI experience is:
- R (Presence of Regressive Feedback Loops)
- P (Risk of Persuading Individual Thinking)
- D (Potential Impact and Frequency of Interaction)
- H (Human-centric Morality Architecture)
He shared two examples that exemplified this:
Wikipedia.org scores well in this model: it has no regressive loops (it is not persuasive), you pull information from it by searching, and it is used only occasionally, at arm's length. Its neutral presentation of valid information preserves pure individual judgement.
Facebook.com scores poorly: it has multiple regressive loops (it is persuasive), it pushes content and social validation to you (many post views per day), it is used by many people at high frequency, and "likes" drive content promotion, with like counts visible to all, polluting individual judgement.
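To make the rubric concrete, here is a minimal sketch in Python of how such a scoring model could be expressed. The talk gave no numeric formula, only the four factors, so the 0–5 scales, the weighting, and the `AIExperience` class below are all illustrative assumptions, not Tyson's actual method.

```python
from dataclasses import dataclass

@dataclass
class AIExperience:
    """Each factor is rated 0 (none) to 5 (extreme). Scales are hypothetical."""
    regressive_loops: int       # R: presence of regressive feedback loops
    persuasion_risk: int        # P: risk of persuading individual thinking
    impact_frequency: int       # D: potential impact and frequency of interaction
    morality_architecture: int  # H: human-centric morality architecture (higher is better)

    def humanity_score(self) -> int:
        """Higher scores mean a more humanized experience.

        R, P, and D count against the product; H counts in its favor.
        The equal weighting here is an assumption for illustration only.
        """
        max_factor = 5
        penalty = self.regressive_loops + self.persuasion_risk + self.impact_frequency
        return (3 * max_factor - penalty) + self.morality_architecture

# A Wikipedia-like product: no loops, not persuasive, occasional arm's-length use.
encyclopedia = AIExperience(0, 0, 1, 4)
# A Facebook-like product: many loops, persuasive, constant high-impact use.
social_feed = AIExperience(5, 5, 5, 1)

print(encyclopedia.humanity_score())  # 18
print(social_feed.humanity_score())   # 1
```

Even with made-up numbers, the sketch captures the shape of the argument: products that pull (search) rather than push, and that are used occasionally rather than constantly, score far higher on the humanity axis.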
Ultimately, Tyson showed us that this leads to the moral dilemma of AI: just information vs. always right.
We want to have our artificially intelligent cake and eat it too. As Tyson explained, though, "you can't exponentially accelerate humanity's progress, turn work into play, and have freedom across the board without an environment that frees us from our physical limitations to communicate and share, to be human." This environment includes technologies that are becoming dirt cheap, several thousand times more powerful, and increasingly persuasive (making objective truth harder to distinguish).
“Truth isn’t about what’s real. It’s about what’s popular.”
Information flow is valuable.
Tyson spoke in depth about how social engagement tools are redefining cultural norms and constraining the flow of information. He pointed out that regressive algorithms are driving thought patterns that are largely hidden from users.
Today, social media is notorious for using correlated data to aggressively drive unwitting individuals into thought patterns that get absorbed over time.
We’re constantly reminded that “if you can dominate the information flow, you can shape individuals.”
Ok, we know that artificial intelligence is fast becoming an integral part of our human lives.
Tyson posed this question: "Do we encode morality into the system, or do we encode the system to get its morality from its user?"
Changing the Trajectory
Tyson closed out the night with this encouragement: "As designers, and more importantly, as humans, we have the power to not only make AI safe for humanity, but actually amplify humanity through proper implementation."
We took a first stab at this by breaking into groups with the challenge of breaking regressive feedback loops and ensuring that critical thinking stays intact.
So what did I take away from this talk? For one, I now have a much better understanding of what artificial intelligence is. But more importantly, I was reminded that being human means having an independent sense of self. Technology threatens to erode that, but it doesn't have to. On the contrary, it can actually drive me to be even more independent and individual. I get to play an integral part in how humanity understands and adapts to a future filled with artificial intelligence. To do this, I must commit to staying educated on how this technology is evolving, and to creating experiences that protect human choice.
Tyson’s Recommended Resources
- Pandora’s Star
- The Book of Why: The New Science of Cause and Effect
- Thinking, Fast and Slow
- The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity
- Tyson’s TEDx talk on Humanity and AI
- thehuman.ai: a place for product people to share examples of how they’ve implemented AI and the positive and negative effects
James is a San Diego-based User Experience Researcher passionate about understanding users to craft memorable, impactful experiences.