How to train your AI

On 25 Oct, 2019
By: theprivacyguru


The Viking village of Berk, a small town in the fictional world of the animated film How to Train Your Dragon, is frequently the target of attacks by—gasp—dragons!

The son of the Viking chief, a young boy named Hiccup, is seen by his father and others as unfit to fight against the onslaught of winged menaces who snatch away valuable livestock and destroy property in their town. To make up for his lack of physical strength and fighting prowess, Hiccup tinkers with gizmos and gadgets that augment his abilities—many of which fail spectacularly.

Yet one day, Hiccup’s fortunes take a turn. One of his inventions, a kind of net launcher, injures a dragon flying over the island, bringing it crashing to the ground. Hiccup tracks down the dragon in the forest. Finally, the time has come for him to join the ranks of Berk’s dragon slayers! But when Hiccup finds the dragon, downed and unable to fly from injuries sustained in the crash, he simply can’t bring himself to kill it. Instead, he uses his tinkering smarts to fashion a kind of prosthesis that allows the dragon to fly again, under Hiccup’s control.

Hiccup interacts with a number of other captive dragons, finding them all to be benevolent and trainable. In fact, the only reason they attack Berk is because one particularly ornery Red Death dragon threatens to gobble them up whole if they don’t. Ultimately, Hiccup’s “pet” dragons help turn the tide against the mean old beast at the center of the circle of violence. The death of the Red Death dragon ushers in a new epoch—one where dragons and the people of Berk can peacefully coexist.

It’s a cute movie, one that can teach us a valuable lesson about peaceful coexistence and jumping to conclusions, particularly when those conclusions are based only on partial information. It’s a lesson we might do well to heed, particularly when it comes to “monsters” of our own creation. I’m talking about Artificial Intelligence (AI).

AI is a kind of intelligence that machines demonstrate. It can involve elements from machine learning, robotics, and a number of other applications in computer science. So far, computers using artificial intelligence have been able to compose their own original pieces of music, create their own language, dominate strategy games, and autonomously operate motor vehicles. And yet, they’re not necessarily benevolent. They’ve also gone on racist tirades, and been called “humanity’s greatest existential threat” and compared to “summoning the demon” by Elon Musk.

But perhaps AI is neither friend nor foe, at least not inherently. What’s far more likely is that, like the dragons of Berk, AI becomes what we treat and train it to be. It has spewed hatred only when it has been taught hatred, and created art when it has been taught art. It stands to reason that, with the right training, AI can be a force for good.

As much as How to Train Your Dragon, in its title and substance, seems to be about training dragons, it’s also about training people—about getting people to see beyond their initial impressions, to overcome the fear of the unknown, and to work cooperatively with powers perhaps greater than our own. With AI, a similar attitude is equally important, if not more so, particularly when it comes to retaining control of the great power AI represents—when it concerns our privacy, our security protections, and our ethical behavior.

To “learn,” so to speak, AI needs to be fed data. Once solely the domain of computer scientists, AI is being democratized by tools like Microsoft’s Cognitive Toolkit, which make it possible for regular people to develop their own artificially intelligent programs and applications with the power of a laptop.
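The idea that an AI system “learns” by being fed data can be sketched in a few lines. What follows is a toy illustration only (a minimal perceptron written from scratch, not anything from the Cognitive Toolkit): the model starts out knowing nothing, and every training example it sees nudges its internal weights until its answers match the labels it was given. Whatever is in the data is what it learns.

```python
# A toy perceptron: the simplest possible "trainable" model.
# It learns to separate two clusters of points in the plane
# purely from the examples it is fed.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights w and bias b whenever a prediction is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else -1
            if pred != y:  # only learn from mistakes
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, point):
    return 1 if (w[0] * point[0] + w[1] * point[1] + b) > 0 else -1

# Toy training data: two clusters, labeled +1 and -1.
samples = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),
           (-1.0, -1.1), (-0.9, -1.3), (-1.2, -0.8)]
labels = [1, 1, 1, -1, -1, -1]

w, b = train_perceptron(samples, labels)
```

The point of the sketch is the moral of the post in miniature: the model has no opinion about the world beyond what its training data gives it.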

While such democratization is no doubt exciting, it means training humans—ourselves—to use these powerful tools legally. We need to set limitations, particularly around privacy. For instance, because of the vast amounts of data it takes to train AI systems, these programs almost inevitably run up against the strict limitations that the GDPR and other privacy regulations place on using personal data.
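One concrete habit those limitations encourage is minimizing and pseudonymizing personal data before it ever reaches a training pipeline. The sketch below is hypothetical (the field names and the salt are invented for illustration, not taken from any real system): direct identifiers are replaced with a salted token, sensitive values are coarsened, and fields the model doesn’t need are simply dropped.

```python
# A hypothetical pre-training step: pseudonymize and minimize a
# user record so the model never sees direct identifiers.
import hashlib

SALT = b"rotate-me-regularly"  # stored separately from the data

def pseudonymize(record):
    """Return a training-safe copy of a raw user record."""
    token = hashlib.sha256(SALT + record["email"].encode()).hexdigest()[:16]
    return {
        "user_token": token,                   # stable pseudonym, not the email
        "age_band": record["age"] // 10 * 10,  # coarsened, not exact
        "clicks": record["clicks"],            # the one feature we actually need
    }

raw = {"email": "hiccup@berk.example", "age": 17, "clicks": 42}
safe = pseudonymize(raw)
```

Under the GDPR, pseudonymized data is still personal data, so this kind of step reduces risk rather than eliminating legal obligations, but it is the sort of deliberate limitation the regulation pushes practitioners toward.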

There is also a wide range of ethical concerns when it comes to utilizing AI, particularly as it relates to privacy, surveillance, and the potential for misuse and discrimination. For instance, a number of companies have begun using facial recognition technology to scan interviewees’ faces during job interviews. While proponents of the tech have praised it for allowing employers to interview a greater number of applicants, and to move beyond CV-related constraints in the hiring process, others have warned that it stands to reinforce existing bias and to weed out candidates who don’t fit a narrow definition of who would likely excel in a given position. In law enforcement, others have expressed concern over using AI to flag individuals as “criminals” before they actually commit a crime—an application that poses a serious potential for discrimination.

What seems most important—for privacy, ethics, and for people and society—is training, both of ourselves and of our AI. One hopes that the AI institutes popping up at universities like MIT will tackle not only technology but also ethics and data protection, and that institutions will look to design thinking to operationalize AI ethics. The EU advocates a human-centric approach, one that

…strives to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights…all of which are united by reference to a common foundation rooted in respect for human dignity, in which the human being enjoys a unique and inalienable moral status. This also entails consideration of the natural environment and of other living beings that are part of the human ecosystem, as well as a sustainable approach enabling the flourishing of future generations to come.

At the end of the day, AI has the potential to be an incredibly powerful tool. With the right training—of our tools and ourselves—it can even be a powerful tool for good.






