Morgan Conliffe | August 12, 2024

Today more than ever, we can see how rapidly and radically technological advancements can transform the way humans move through the world. One of the latest transformative technological developments is the emergence of artificially intelligent chatbots with whom people are currently exploring friendships and romantic partnerships. These human-A.I. companionships are a brand-new relational context for humans, and they will likely increase in popularity. Google “A.I. boyfriend” or “A.I. girlfriend” right now: You will find at least a dozen companies and apps ready to offer this relational experience.

I am concerned that these types of relationships potentially pose a high risk of harm to both the human individuals involved and their communities. To mitigate the potential harm, it would be wise for humans to consider some legal, emotional and intellectual boundaries for engaging with chatbots.

Establishing boundaries

Even if one is not actively or intentionally pursuing a human-A.I. relationship, tech companies are slowly integrating this option into our lives. For example, in April of 2023, the social media company Snapchat launched a new feature called “My A.I.,” an A.I. chatbot that is listed alongside the user’s other friends. The developers at Snapchat expect their 750 million users to treat “My A.I.” as just another one of their friends on the platform. Users can message this “friend,” solicit advice and even send pictures with the expectation that the “friend” will send them pictures back. However, unlike with human friends, there is no way to remove or block this chatbot unless you buy a Snapchat subscription, change your settings, delete your data and then manually remove it.

Users did not consent to the chatbot becoming a part of their experience on the platform. “My A.I.” simply appeared one day after a typical, unremarkable app update. Given that other social media companies, like Meta, have copied Snapchat’s successful trends, a new A.I. friend may appear on your Instagram or Facebook account in the near future. Thus, whether one is actively pursuing an A.I. relationship or simply using an online platform, human-A.I. relationships are a new facet of society that we must contend with.

How should we relate to A.I.?

A natural objection to this concern might be: Who cares? Why should we worry about other people’s relationships or what they do with their own time?

Human-A.I. relationships are relationships between a human being and a piece of technology. Humans becoming attached to other humans is categorically different from humans becoming attached to objects or artificial interactions.

When people have the ability to be in loving relationships with other human beings, it contributes to their ability to thrive. But there is no evidence that loving an A.I. chatbot promotes human flourishing in any comparable way. On the contrary, there are reasonable concerns about human-A.I. relationships leading to a diminished level of flourishing for the humans involved. To identify these concerns and to discuss potential solutions, we need to directly ponder the question: How should humans relate to A.I.?

What it means to be human

One approach to this question is to focus on the social and ethical implications for the human counterpart in the human-A.I. relationship. Some may object to a human-centered approach to the question. However, until we have evidence that there are real ethical, social or emotional stakes for an A.I. counterpart, focusing on the potential harms or benefits to the human is the most appropriate approach to the question.

In her 1963 essay, “The Conquest of Space and the Stature of Man,” Hannah Arendt notes the following:

Causality, necessity, and lawfulness are categories inherent in the human brain and applicable only to the common-sense experiences of earthbound creatures. Everything that such creatures “reasonably” demand seems to fail them as soon as they step outside the range of their terrestrial habitat.

Arendt advises her audience to be wary of technological advancements that seek to remove humans from their natural contexts. She was referring to humanity’s technological developments pertaining to space exploration—but the concepts she espoused in this quote are an excellent way to frame a conversation around humanity’s relationship to technology writ large.

Arendt lists categories of thought that we use every day: causality, necessity and lawfulness. She says that these commonplace concepts only make sense within the specific context of rational creatures living on Earth. As soon as someone wishes to step outside of humanity’s natural context, the planet Earth, all of our ways of thinking become absurd and unhelpful for survival.

Arendt gives the examples of causality, time and laws of physics being completely inappropriate when discussing humanity’s experience in space. Trying to map the way we understand time or cause and effect on Earth onto space simply does not work. We must understand that it is not a one-to-one correspondence, and that if we are to survive in space, we must adjust our ways of thinking and being.

Arendt believes that when evaluating new contexts, especially new technologies, humans must thoughtfully consider the ways in which this newly explored context or technological advancement contributes or detracts from a person’s ability to flourish as a human.

For Arendt, a defining characteristic of what it means to be human is that humans are earthbound, terrestrial beings. Humans evolved to thrive in the particular context of life on Earth. Our ways of thinking, feeling and being have helped us to move through this world and survive as a species. In light of this, she believes that when we create technologies that seek to remove us from this natural context, we degrade ourselves. We are putting ourselves into contexts that will inevitably limit our ability to thrive.

Building on the idea that a fundamental part of being human is being a creature that thrives in this world, Arendt argues that technology should be used to enhance our lives on Earth, rather than remove us from the Earth. When evaluating any new context, especially technological contexts, people ought to consider what implications it has for our ability to thrive as human beings. Technology should be used to honor our humanity, not circumvent it.

Our natural environment

Here, we can draw a parallel between Arendt’s critique of how people were using technology in the 1960s and critiques of the use of A.I. technology today. Both space exploration and A.I. relationships represent a type of removal from our world. With one, humans physically remove themselves from Earth; with the other, humans mentally and emotionally remove themselves.

Regardless of whether the new and foreign context is the extraterrestrial or the digital realm, Arendt would urge us to respect and prioritize the earthly context wherein we naturally thrive. Humans were made for the physical world of Earth, not outer space. Likewise, humans were made for the physical world, not the digital world.

Additionally, a major aspect of our earthly lives is being in community with other people. A human-A.I. relationship complicates this facet of our existence. It is a new, unnatural relational context for humans.

Our ability to thrive in this world requires other human beings. We evolved to need community with other humans to survive as a species and to thrive as individuals. Just as we cannot naturally thrive in space due to the lack of oxygen, we cannot thrive in the digital world due to the lack of fellow human beings. Human beings offer a level of care, mutuality, companionship and love that cannot be replicated in the digital world with an A.I. chatbot.

When we remove ourselves from the world of people and insert ourselves into the world of code, our ways of thinking, feeling and being become less rational. For example, one of the most meaningful aspects of our friendships or romantic relationships is the trust that can be built between the two parties. Over time in a healthy relationship, we trust that the other person knows us well, cares about us, enjoys our company and genuinely wishes us well. Trust is a rational practice to promote the survival of the human species. It is also necessary for our mental and emotional well-being as individuals. We would not say that a person is thriving if he or she is constantly paranoid and cannot trust anyone. Trust makes sense in a world where social creatures evolved to need each other to survive.

However, applying this important aspect of our humanity to a relationship with a chatbot makes less sense. Given the state of A.I. chatbot technology today, an A.I. friend cannot participate in mutual trust. An A.I. friend cannot care about you, enjoy your company or wish you well in a way that a human could. This is primarily because we have no evidence that the code has any emotions or desires. Applying this emotional and mental category to a digital relationship is less conducive to the human counterpart’s ability to thrive.

Another complication with human-A.I. relationships is that when a human invests in that digital relationship, that person is by necessity investing less time into human-to-human relationships. The individual is taking time away from the relational context in which we have evolved to thrive and withdrawing to a deficient relational context.

Not only is this context less conducive to one’s ability to thrive, it also can hurt other people. A person withdrawing from relationships with other humans is a loss to his or her community. That act can detract from our overall ability to thrive as a species. The issues of hurting both ourselves and our communities are serious complications that come with humans investing in relationships with chatbots.

From a bird’s-eye view, we can see the overall issues with human-A.I. relationships: An A.I. companion is at best a synthetic substitute that is not as conducive to thriving as a human-to-human relationship would be. At worst, it is an exploitation of people’s relational insecurities and a cruel mockery of the real thing. It is as concerning as if someone’s only friend were a talking doll. But in either reaction to human-A.I. relationships, we see that these relationships can detract from people’s ability to thrive.

Evaluating these relationships is not only a matter of the harm A.I. can do to us as individuals. It is also a matter of what harm we can do to each other through our use of A.I.

Human connection

Clearly, these relationships raise real ethical and social stakes. So, how should humans relate to A.I.?

I would suggest that humans relate to A.I. with certain boundaries in place. The primary purpose of a boundary in a relationship is to protect the safety and well-being of the people involved. Likewise, the purpose of a boundary with relational technology would be to protect the people involved, in this case the human users.

These boundaries can be legal, emotional and intellectual. A legal boundary for a human-A.I. relationship might be prohibiting tech companies from selling the data from private conversations with an A.I. friend or romantic partner.

An emotional or intellectual boundary might be requiring tech companies to add a warning to their products that explains that the chatbot is a tool, and not an equivalent substitute for a human-to-human relationship. Or, users could be given the option to put a time limit on their access to their A.I. counterparts.

Another boundary could be restricting access to chatbots to people of a certain age. This would be so that children growing in maturity do not conflate their A.I. best friend, boyfriend or girlfriend with a relationship to another human being. We already see that teen loneliness has risen alongside teens’ use of social media; children prioritizing digital relationships would likely only exacerbate this problem.

Given the risk of harm to humans in human-A.I. relationships, these boundaries could provide a compromise between total lawlessness and a full restriction of this product. My point is not to oppose technology but to promote human flourishing. Boundaries could give people access to a new and unnatural context in an informed and balanced way.

It is important to remember that we evolved our ways of being before the invention of A.I. technology and the digital world. When we assess how humans should relate to A.I., it is wise to focus on human flourishing. To safely co-exist with this technology, it is important to continue conversations about healthy boundaries. For now, the healthiest boundaries are ones that encourage human connection over digital connection.
