Data Ethics for Humans

On 07 Jun, 2019
By: theprivacyguru

Way back in 2014, when I first started my blog, I wrote about Privacy for Humans, a movement toward the human-centered use of technology. I expanded on this theme in my 2016 e-book, Privacy for Humans, which provides tools for mindfully cultivating privacy awareness—tools just as applicable today as they were then.

Well, here we are in the future. It’s 2019, and we’ve seen the good, the bad, and the ugly in terms of privacy wins and fails. The bad and ugly came in the form of high-profile data breaches and a general techlash—a backlash against ad tech, social media, and other tech companies that traffic in data. But there is also good. Data privacy and security are now more out in the open as topics for discussion, analysis, and scrutiny than perhaps they have ever been. And the attitude about privacy has become less “ignorance is bliss” and more “information is power.” Companies that may have treated privacy as something of an afterthought are having to reprioritize, and that’s a good thing for everyone.

What’s more, I’m noticing a shift beyond foundational privacy and security protections toward technology ethics, particularly the ethical use of data. The discussion has expanded from how users and companies can practice mindful use of technology to the influence we might have on how technology builds ethics into its bones. And this isn’t a conversation happening on the fringes. Whole conferences have sprung up around data ethics, and some big companies, like Salesforce, have formed entire departments around the ethical and humane use of technology. But what is data ethics? And why should we care about it?

What is Data Ethics?
According to Brian Patrick Green, director of Technology Ethics at Santa Clara University’s Markkula Center for Applied Ethics, technology ethics is the application of ethical thinking to the practical concerns of technology. Swap out the words “technology ethics” for “data ethics,” and you’ll get our working definition of the same. Technology ethics—and, by association, data ethics—falls into two main categories.

First, these fields concern themselves with the ethics involved in developing new technology and new ways to use, manage, and otherwise interface with data. In other words, instead of asking whether we can do something, technology ethics asks whether we should—always, never, or depending on the context. For example, in a warming climate, is it ethical to develop tools that drain resources or exacerbate global warming? Consider, for instance, recent reports on AI’s carbon footprint. And while automation in manufacturing might help speed up production, is it ethical if it leaves fewer workers able to make a living?

And the answers aren’t always clear cut. Some technological innovations fall squarely in the gray area. For instance, the Tor browser allows individuals to browse the web anonymously and untraceably. While it’s a useful tool for protecting individual privacy, it has also allowed some people to circumvent the law online. And what are the unintended consequences of facial recognition databases, especially when used by police or authoritarian regimes? Those are just two examples; plenty of technologies can be similarly right or wrong depending on context.

Second, technology ethics is interested in the ethical questions around the ways technology has made us powerful. We didn’t always have the power to edit our own genetic code. We didn’t always have the ability to post our private thoughts for the world to see. And we didn’t always have AI personal assistants. Now we do. Technology ethics asks what we, as individuals and organizations, are to do with that power. And because with great power comes great responsibility (thank you, Voltaire, or Spider-Man’s uncle, depending on who you ask), technology ethics is more important than ever.

Ethics by Design
Today, tech moves faster than the legal system, meaning innovations and developments take place a lot faster than regulators can, well, regulate. But technology ethics may be able to fill in some of the gaps not covered by existing laws, regulations, or best practices. Implementing Ethics by Design—adopting ethical obligations in the development of new technologies—can be part of the solution. By essentially building an ethical component into the development process, companies and individuals can wrestle with ethical conundrums in the abstract long before they become real life ethical problems.
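
To make the idea concrete, here is a minimal sketch of what an ethics checkpoint in a development workflow could look like. It’s purely illustrative Python—names like FeatureProposal and ethics_gate are my own inventions, not drawn from any real framework—showing the simple idea that a proposal doesn’t move forward until every flagged ethical concern has a documented mitigation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an Ethics by Design checkpoint. None of these
# names come from a real framework; they just illustrate the idea of
# wrestling with ethical conundrums before development proceeds.

@dataclass
class FeatureProposal:
    name: str
    description: str
    concerns: list = field(default_factory=list)     # flagged ethical concerns
    mitigations: dict = field(default_factory=dict)  # concern -> documented mitigation

def ethics_gate(proposal: FeatureProposal) -> bool:
    """Approve a proposal only if every flagged concern has a mitigation."""
    unresolved = [c for c in proposal.concerns if c not in proposal.mitigations]
    if unresolved:
        print(f"{proposal.name!r} blocked; unresolved concerns: {unresolved}")
        return False
    print(f"{proposal.name!r} cleared ethics review.")
    return True

proposal = FeatureProposal(
    name="face-match photo search",
    description="Let users search uploaded photos by face",
    concerns=["could re-identify strangers from public photos"],
)
ethics_gate(proposal)  # blocked until the team documents a mitigation
```

In practice this “gate” might be a design review meeting or a required section in a product spec rather than literal code—the point is that the question gets asked before the feature ships.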

One organization, the Omidyar Network—a group of entrepreneurs, investors, innovators, and activists in the tech space—has gone so far as to develop what it calls The Ethical Operating System (Ethical OS). Ethical OS is “a practical framework designed to help makers of tech…anticipate risks or scenarios before they happen…Ethical OS…outlines emerging risks and scenarios…to help teams better future-proof their tech.” Other companies, like Microsoft, have implemented a set of guiding principles to make AI safer—principles that include fairness, reliability and safety, inclusiveness, accountability, transparency, and privacy. In the same spirit, as many as 40 countries have adopted comparable principles.

For companies, adopting a policy of Ethics by Design isn’t just the right thing to do, it’s also good for business. Acting ethically is in the best interest of customers—it builds trust and increases value. It can even become a selling point, as it recently has for Apple.

How, practically, can companies and privacy professionals leverage existing privacy and security programs, governance, and stakeholders to include ethical principles? According to Ethical OS, the questions below are a good place to start (a toy checklist encoding them follows the list):

  • If today’s technology might someday be used in unexpected ways, how can you prepare?
  • What risk categories should you pay special attention to now?
  • Which design, team, or business model choices can actively safeguard users, communities, society, and organizations from future risk?
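
As a thought experiment, those three questions could even be tracked as a lightweight review checklist. The sketch below is illustrative Python only—the CHECKLIST structure and run_review helper are hypothetical names I’ve invented, not part of the actual Ethical OS framework—showing how a team might surface which questions still lack a written answer.

```python
# Illustrative only: the three Ethical OS questions encoded as a simple
# checklist. The structure and names here are assumptions for this
# sketch, not part of the Ethical OS framework itself.

CHECKLIST = {
    "unexpected uses": "If today's technology might someday be used in "
                       "unexpected ways, how can you prepare?",
    "risk categories": "What risk categories should you pay special "
                       "attention to now?",
    "safeguards": "Which design, team, or business model choices can "
                  "actively safeguard users, communities, society, and "
                  "organizations from future risk?",
}

def run_review(answers: dict) -> list:
    """Return the checklist topics that still lack a written answer."""
    return [topic for topic in CHECKLIST if not answers.get(topic)]

# Example: only one question has been answered so far.
answers = {"unexpected uses": "Red-team new features for misuse quarterly."}
for topic in run_review(answers):
    print(f"Still unanswered: {CHECKLIST[topic]}")
```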

It may sound obvious, but most technology isn’t inherently good or bad—what matters is how it’s implemented. One helpful concept is “design fiction”: playing out the worst-case, Black Mirror-style dystopian scenario for how a technology might be used or abused. Asking the hard questions up front and building ethical decision-making into the process can certainly help nudge tech in the right direction.

Tags: Black Mirror, data ethics, ethics, ethics by design, Omidyar Network, privacy, technology ethics


Comments (1)

  • Jose Belo, Aug 01, 2019 at 12:41 am:

    Very interesting article. Certainly, food for thought.

