Artificial Intelligence Needs Real Privacy

With great power comes the responsibility to discern when and how to use such power. As we enter an age where the stuff of speculative science fiction—specifically Artificial Intelligence (AI)—becomes reality, climbing the mountain “because it’s there” is no longer a sufficient motivational and ethical framework for our future.

Artificial Intelligence and Ethics

In order to create a future where essential human values such as autonomy and personal privacy remain vital, we must act with awareness and forethought. Fortunately, the same tech giants who are dedicated to developing sophisticated AI recognize the potential for profound societal shifts the technology will produce.

A recent article in The New York Times highlights how thought leaders from Alphabet, Amazon, Facebook, IBM, and Microsoft want “to ensure that A.I. research is focused on benefiting people, not hurting them.” The development of “real ethics” for AI is essential because, as a Stanford report on AI states,

“[A]ttempts to regulate A.I. in general would be misguided, since there is no clear definition of A.I. (it isn’t any one thing), and the risks and considerations are very different in different domains.”

AI is an enormously complex and diverse field. Wikipedia defines AI this way:

“In computer science, an ideal ‘intelligent’ machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term ‘artificial intelligence’ is applied when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving’.”

When we discuss ethics in the context of AI, we refer to “the ethics of technology specific to robots and other artificially intelligent beings” including concerns with “the moral behavior of humans as they design, construct, use and treat artificially intelligent beings.”

Some of our greatest concerns about the ethics of AI center on threats to privacy and human dignity, and on the potential for catastrophic unintended consequences.

The Relationship Between AI, Big Data, and Privacy

Almost everything we do these days generates data. What we search for online, share, and transmit, either intentionally or unintentionally, creates vast amounts of information stored in the cloud. And what we generate today is but a fraction of what we’ll generate in the years to come. As the Internet of Things pushes the network deeper into our physical environment, satellites become even more powerful, and we rely more heavily on digital assistants to answer our questions, we’ll produce (and reveal) radically more.

Gathering and storing this information is only one part of the picture. In order to make it useful, we require the power to make sense of it. This is where AI comes in. As James Niccolai writes for Computerworld,

“I have seen the future, and it is a world of unparalleled convenience, untold marketing opportunities, and zero privacy.”

AI systems like IBM’s Watson project “are needed to uncover patterns in mountains of information and make decisions we can no longer arrive at through traditional programming.”

In the article, Peter Diamandis, founder of the X Prize Foundation, describes a world of “near perfect data” and rather optimistically sketches scenarios in which it will benefit each of us: accidents where the chain of events is perfectly clear, pickpockets nabbed by ubiquitous surveillance. “If you thought your privacy wasn’t dead yet, think again,” Diamandis says.

Mike Elgan, in a different article for Computerworld, argues that in the future our concerns about privacy will seem quaint. In fact, Elgan views the upside of AI as so transformative and essential that he seems to feel we have a moral obligation to feed AI the data it needs to help us all.

“Artificial intelligence can do amazing things, if given massive amounts of data,” he concludes. “Whether we’re motivated by naked self-interest or the spirit of the greater good, we’ll willingly give up our data. All of it.”

But must we entertain the death of privacy in order to reap the benefits of AI?

Real Privacy in the AI Age

Utopian visions should never obscure a healthy awareness of totalitarian potential. Though the developers at AI’s bleeding edge may have the best intentions at heart, human nature and the potential for unintended consequences demand that we explore solutions that balance our right to privacy with the benefits of AI.

Inventor and entrepreneur Foteini Agrafioti believes Privacy by Design (PbD) is the key to this future. She writes in her piece for the Huffington Post:

“Under privacy by design, technology companies must account for human values when creating their systems and ensure they have engineered for maximum individual privacy in every step of their process. It’s a costly and time-consuming measure, but it’s one of the only measures standing in the way of a digital Wild West.”
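
In engineering terms, building privacy into “every step of the process” often comes down to concrete habits such as data minimization and pseudonymization: collect only the fields a feature actually needs, and replace direct identifiers with one-way tokens before anything is stored or fed to a model. The Python sketch below is a minimal, hypothetical illustration of that idea; the field names and salting scheme are assumptions for the example, not a prescription from Agrafioti’s piece.

import hashlib
import os

# Hypothetical privacy-by-design habits: keep only the fields a feature
# needs, and pseudonymize direct identifiers before storage or training.

ALLOWED_FIELDS = {"age_range", "region", "query_category"}  # data minimization

def pseudonymize(identifier: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

def minimize_record(raw: dict, salt: bytes) -> dict:
    """Drop everything except whitelisted fields; tokenize the user id."""
    record = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    record["user_token"] = pseudonymize(raw["user_id"], salt)
    return record

if __name__ == "__main__":
    salt = os.urandom(16)  # in practice, managed and rotated per policy
    raw_event = {
        "user_id": "alice@example.com",
        "age_range": "25-34",
        "region": "CA",
        "query_category": "health",
        "precise_location": "37.7749,-122.4194",  # never stored
    }
    print(minimize_record(raw_event, salt))

The design choice here is that the raw identifier and the precise location never leave the first function call, so downstream analytics or AI systems only ever see the minimized record.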

Douglas Rushkoff also believes in a collaborative, humanist approach to retaining our values in the face of enormous technological power. For Rushkoff, dialogue is the key to grappling with the real challenges of building a world we want rather than one that simply happens to us. As a Fast Company profile describes it, his new podcast, Team Human,

wants to talk about how we can design a more human-friendly future—reshaping technological and economic systems—at a time when businesses prioritize algorithms and profits over people. “This is really trying to address what I see as a widespread need for solidarity around the issues that can help humans make it through the next century or so,” he says.

The thrilling potential of AI moves closer to reality day by day, but we should not let a technology so dazzling blind us to what allows us to remain human and to maintain our privacy.
