John C. Havens on Being Pro-Human in an Age of AIML

Marsha Dunn
November 16th, 2016


Below is the fourth installment in our series, AIML: Big Changes, Big Conversations. Taking AIML (Artificial Intelligence and Machine Learning) as our example, we will explore methods for accelerating and amplifying conversations around major changes in ways that enable us, as individuals and as a collective, to understand and direct their impact.

We recently had the pleasure of working with John C. Havens at Innotribe at Sibos 2016. John is an author and speaker focused on emerging technology and human well-being. His latest book, Heartificial Intelligence, argues that the “technological pinnacle” reached through the creation of intelligent machines requires us to “elevate the quest to honor humanity and to best define how AI can evolve it.” I had the opportunity to speak with John shortly after he presented at Sibos.

Marsha Dunn: In the introduction to your book Heartificial Intelligence you describe yourself as “not anti-AI, but rather pro-human”. Why is that distinction important?

John C. Havens: People tend to view artificial intelligence (AI) as overwhelmingly scary—something to be avoided. Or else they see it as the next big thing and rush to embrace it. What I am advocating is a third approach. I believe we have an opportunity—a responsibility—to define the path forward with AI and machine learning (ML). In order to do this we need to program these machines with the values that matter to us as humans. We need to determine the ethics around how machines will help us in the future.

Marsha: Which means we need to think carefully about what our ethics are.

John: Yes, and while our sense of self is deeply rooted in our personal values, we are not used to being asked to list or, better yet, quantify them. In fact, people often see an attempt to quantify values as trivializing them. However, machines are constantly gathering data on our values via our preferences and behaviors. Machines are attempting to codify our ethics even if we are not and they are likely doing so out of context. Your choice to buy a Rolls Royce might mean you value luxury items or it might mean something else entirely.    

Marsha: And what is the risk of not codifying our ethics? How does this relate to the use of algorithms in AI?

John: Today, data from our personal preferences feeds into algorithms used to influence our purchasing decisions via ads or a user experience intended to attract and retain us. And this can be great. Why would I want to see every sneaker on the market when I am only interested in a particular style from Nike? However, in the future, distinguishing between unfiltered experience and “reality” as determined by external algorithms will become much harder. Time spent in the digital arena is going to surpass time spent in the “real” world. If you don’t have a tool to project your subjectivity in the digital arena, it either won’t exist or someone else is going to create it.

Marsha: Break that down for us.

John: In ten to twenty years, wearing augmented reality glasses will be the norm. This sounds extreme, but when you have a player like Apple moving into this space its adoption is not far off. And once we are wearing augmented reality lenses, we will literally be looking through the lens of Facebook or Apple. This view will be part of our physicality, making it much harder to demarcate external and subjective perspectives.

As a result, the opportunity, and I would say the mandate, is to codify our own subjective values in advance of this. We want to make sure the lens we look through has been programmed with our ethical perspective, not with an interpretation of our preferences or the preferences of a programmer. We need to determine whether we want the self-driving car to swerve to avoid a child in the road even if it risks the lives of our passengers.

Marsha: Heartificial Intelligence provides very practical guidance around how individuals can begin to codify their ethics. Can you share some of this advice here?

John: I provide 12 core values (e.g., family, work, education, health) as a starting point and advise that you rank each one on a scale of 1-10. I then instruct people to spend the next few days observing whether or not they have lived in accordance with their values in order to gain further insight into what contributes most to their sense of well-being.
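As a rough illustration only (not from the book), the exercise John describes could be sketched in code. The value names, rankings, and daily logs below are invented examples; the book lists 12 core values, of which four are shown here:

```python
# Hypothetical sketch of the values exercise: rank core values 1-10,
# log over several days whether each value was honored, then compare
# stated importance with lived behavior.

CORE_VALUES = ["family", "work", "education", "health"]  # 4 of the 12, as examples

# Step 1: rank each value on a scale of 1-10 (sample scores).
rankings = {"family": 9, "work": 6, "education": 7, "health": 8}

# Step 2: over the next few days, record whether each value was honored.
daily_log = [
    {"family": True, "work": True, "education": False, "health": False},
    {"family": True, "work": True, "education": False, "health": True},
]

# Step 3: compare how highly you ranked a value with how often you lived it.
for value in CORE_VALUES:
    honored = sum(day[value] for day in daily_log)
    rate = honored / len(daily_log)
    print(f"{value}: ranked {rankings[value]}/10, honored {rate:.0%} of days")
```

The gap between a high ranking and a low “honored” rate is the insight the exercise is after: it points to where stated values and actual behavior diverge.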

Marsha: Then what is done with this values-data?

John: We need to give people tools to upload this data and say, “Here are the preferences I would like to be honored.” This would be a paradigm shift.

Marsha: So organizations and organizational leaders can play an important role in the pro-human approach by providing these tools?

John: Providing people with tools to codify and share their values is one way. And it represents a huge opportunity for companies as well as customers.

Unless you are a giant like Facebook or Google, trying to compete by using tracking technology to get unique user information is a dying game.


People are constant generators of data, and I believe that if companies give them more ways to talk about and protect their data, they will earn their trust, thus benefiting everyone involved. I believe this is the new Green, and companies have an opportunity to align their brand with this approach. It will be a competitive advantage.

Marsha: When you talk about using our data in ways that will empower us as humans in the future, you also focus on how we can use technology to enhance our well-being.

John: With the Internet of Things, we can begin to use apps and devices to gain a new level of understanding about what activities foster our sense of well-being. I focus on this in my prior book, Hacking Happiness: Why Your Personal Data Counts and Why Tracking it Can Change the World. For example, biometric information generated through these tools can increasingly tell us how we respond physiologically to taking part in particular activities. These tools will enable us to learn what reduces our stress levels or increases our sense of purpose. Organizational leaders can support their employees’ efforts to improve their well-being through these methods. Bob might learn that although he sits in IT, his biometrics show his mood improving every time he engages in accounting activities. And not only does Bob learn what he enjoys, but he begins to learn the art of trained introspection that will benefit him in the future.

To summarize: as we think about codifying our values and “hacking our happiness” through AI, ML, and the Internet of Things, we need to keep in mind that we have an opportunity not only to preserve but to enhance all of the beauty that is inherent to who we are as humans. This is an opportunity, not a guarantee.
