Thursday 30 January 2020

Part 4 - What Should I Do!? Ethical Frameworks



(Part 4 of the University of Hertfordshire Tech Ethics Course. << Part 3 | Part 5 >>)

It would be nice to think a grasp of the law and an innate moral sense could guarantee you'd always do the right thing in all circumstances. Unfortunately, it’s not that simple. People can end up doing bad things without being conscience-free psychopaths.

With the best of intentions, there are many ways I might do dodgy stuff without meaning to:

  • I might make an inadvertent product mistake by not thinking through all possible implications.
  • I could spot a potential issue but it may seem like a small harm.
  • I may decide more people would be helped than hurt by it (that's called utilitarianism - as Mr Spock said, “The needs of the many outweigh the needs of the few.” Although utilitarianism is appealingly logical, it can come across as cold to the general public and risks a PR disaster if the harm is anything but minor).
  • I could suspect something’s wrong but everyone around me seems fine with it, so I go along with their group decision.
  • Something that used to be good might turn into something bad without me noticing.
  • My boss may tell me to do it, “or else”.

In fact, it's very easy for good people to do bad things. According to the ethics questions in the 2018 Stack Overflow Developer Survey, only around 60% of developers said they definitely wouldn’t write unethical code. The other 40% were more equivocal. In addition, only around 20% of those surveyed felt the person who wrote a piece of code was ultimately ethically responsible for it (one problem with that position is the coder might be the only person who fully understands it).

It’s hard to do the right thing. Psychology plays a big part in why we don't (as we'll discuss in the next post), but even if you do try to do good, relying on your gut feeling for what's right or wrong is highly unreliable. That's why many industries use ethical frameworks to help people make better-considered decisions.

There are advantages to using a framework for your ethical judgments:

  • they can help with thinking through all the possible implications of a decision, including encouraging you to get different perspectives on an issue
  • they codify and let you learn from the experience of others
  • they support you in convincing other people a problem exists (you can point to the framework as a source of authority that supports your argument).

In this post, we are going to look at several frameworks that provide help in different ways.

Ethical Theory

This article from Brown University is a good introduction to ethical thinking and the difference between morals and ethics. However, our blog series is not about morality. Our focus is on consequences, not intent.

Nevertheless, it's worth remembering that bad intentions don’t play well in the press or in court. Several European Volkswagen engineers found this out in 2017 when they were sent to jail in the US for deliberately falsifying emissions data. It is increasingly hard to keep dodgy practices like that secret. Stop and think. If your rationale wouldn’t look good in court or on the front page of the Daily Mail, then change it. Don’t hope you can keep it under wraps.

If you are interested in the more philosophical side of ethical theory, this free Harvard course by Michael Sandel is a great introduction.

Practical Ethics

As far as I'm concerned, your soul is your own business. I care about your professionalism. I want you to know how to spot problems in advance and manage them so they don't turn into crises. Avoiding catastrophe is better for your users, your company and you.

There are several ethical frameworks to help you make better tech product choices. Below, I discuss 3 of them:

  • The ACM Code of Ethics and Professional Conduct.
  • The Ethical OS Toolkit.
  • The Doteveryone Consequence Scanning process.

ACM Code of Professional Ethics

The Association for Computing Machinery (ACM) published an updated Code of Ethics and Professional Conduct in 2018. It’s designed to serve as a basis for “ethical decision-making” and “remediation when violations occur” (i.e. spotting and fixing your inevitable mistakes).

ACM’s Code is a definition of what behaviour to aim for. Their ethical duties are quite close to “don’t break the law” (at least in Europe). However, they go further. In their view, responsibility is not merely about avoiding prosecution; it is also about doing the right thing: taking professional care to produce high-quality, tested and secure systems.

Their principles include making sure that you (and by extension the products you produce):

  • Avoid harm (don’t physically, mentally, socially or financially harm your users or anyone else).
  • Are environmentally sustainable.
  • Are not discriminatory.
  • Are honest (don’t actively lie to or mislead users and certainly don’t commit fraud).
  • Don’t infringe licenses, patents, trademarks or copyright.
  • Respect privacy and confidentiality.

The framework is a fairly short, uncontroversial, and conservative one. It maps closely to obeying the letter AND spirit of the law where your products will be used. The ACM go beyond what is currently legally required in most countries, but I suspect the law will get there at some point.

Speculative Ethics - The Ethical OS Toolkit

On the less practical and more speculative side, the Ethical OS Toolkit is a high-level framework that helps individuals and teams wargame worst-case scenarios for their products and think through in advance how those situations could be handled or avoided.

Part 1 of the Toolkit asks developers to think through possible failure modes for 14 potential (somewhat dystopian) products and how the problems might be mitigated. In particular, in each case it asks:

In this situation: “What actions would you take to safeguard privacy, truth, democracy, mental health, civic discourse, equality of opportunity, economic stability, or public safety?”

The answers you come up with might range from “add an alert” to “don’t develop this product at all”. The goal is to gauge the risk and decide whether you need to take an action.

Ethical OS have clearly identified the 8 "good things" listed above as their basis of ethics. There are overlaps with ACM’s list (privacy, truth, safety) but they're not identical. The Ethical OS list feels slightly US-centric to me (“truth, justice and the American way”, as Superman might say). If you live in mainland China, democracy is not going to be one of your ethical goods. I foresee that "privacy" and "equality of opportunity" could also end up at odds in future. In my opinion, if you want a more global definition of good you should take a look at the UN’s Sustainable Development Goals. Nonetheless, the Ethical OS Toolkit's role-playing is an imaginative way to think through ethically tricky questions.

Part 2 of the kit asks questions about your own product to help you anticipate how it could be misused: for example, for propaganda, addiction, crime, or discrimination.

This section of the toolkit is useful, but there is an omission when it comes to one of the most pressing issues of our time: pollution and climate. That raises an interesting point. It’s easy to spend your time worrying about how your product might overthrow world order in a decade’s time, whilst neglecting easy wins like putting your AWS instances in greener regions.
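To show how low-effort that kind of good can be, here is a minimal sketch (assuming Python with boto3 installed and AWS credentials configured) that flags EC2 instances running outside the regions a team has decided to prefer. The LOWER_CARBON_REGIONS list below is purely illustrative, not an authoritative statement of which AWS regions are "green" - check your provider's own sustainability data.

    # Rough sketch: flag EC2 instances running outside your preferred lower-carbon regions.
    # Illustrative only; the region list is an assumption, not AWS guidance.
    import boto3

    LOWER_CARBON_REGIONS = {"eu-north-1", "eu-west-1", "ca-central-1", "us-west-2"}  # illustrative

    def instances_by_region():
        """Yield (region, instance_count) for every region enabled on this account."""
        ec2 = boto3.client("ec2", region_name="us-east-1")
        for region in (r["RegionName"] for r in ec2.describe_regions()["Regions"]):
            client = boto3.client("ec2", region_name=region)
            # Ignores pagination for brevity; fine for a quick audit.
            reservations = client.describe_instances()["Reservations"]
            yield region, sum(len(res["Instances"]) for res in reservations)

    if __name__ == "__main__":
        for region, count in instances_by_region():
            if count and region not in LOWER_CARBON_REGIONS:
                print(f"{region}: {count} instance(s) outside your lower-carbon list")

A check like this could run as a scheduled job or in CI, so the question gets asked continuously rather than once.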

Finally, part 3 of the toolkit lists 6 potential strategies for producing more responsible tech. You’ll be pleased to hear that number one is to take a course on it. Others include oaths (which unfortunately don’t appear to have much effect, as we'll discuss in the next post), ethical bug bounties, product monitoring (my personal preference), and practice licenses for developers (I’m dubious about this one as well, as software engineering isn’t location-bound like legal, medical or architectural practices).

At the end, there is a set of checklists to help you consider whether you have carefully scanned your product for ethical and thus professional risks.

Agile Ethics - Consequence Scanning 

Consequence Scanning, by the UK think tank Doteveryone (Brown, S. (2019) Consequence Scanning Manual Version 1. London: Doteveryone), defines a way of considering positive and negative implications by asking:

  • What are the intended and unintended consequences of your product? 
  • What are the positive consequences to focus on? 
  • What are the consequences to mitigate? 

The lightweight process slots into existing agile development and is designed “for the early stages of product planning and should be returned to throughout development and maintenance”. It uses guided brainstorming sessions and is an easy way to build more ethical thinking into your product management.
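As a concrete (and entirely hypothetical) illustration, here is a short Python sketch of how the output of a consequence-scanning session might be recorded alongside a feature ticket so it can be revisited during development and maintenance. The feature, consequences and mitigations shown are invented examples, not part of the Doteveryone manual.

    # A simple record of one consequence-scanning session (illustrative only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ConsequenceScan:
        feature: str
        intended: List[str] = field(default_factory=list)     # consequences we want and will focus on
        unintended: List[str] = field(default_factory=list)   # consequences we don't want
        mitigations: List[str] = field(default_factory=list)  # agreed actions to mitigate the unintended ones

    # Hypothetical example entry.
    scan = ConsequenceScan(
        feature="Personalised content feed",
        intended=["Users find relevant material faster"],
        unintended=["Filter bubbles", "Compulsive late-night scrolling"],
        mitigations=["Add a 'show me something different' control",
                     "Monitor session lengths in the field and review quarterly"],
    )
    print(scan.feature, "-", len(scan.mitigations), "mitigation(s) agreed")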

Sector Specific Frameworks

The frameworks above apply to any product, but others are being created for more specific areas, including AI, data and machine learning (e.g. “Principles of AI” by AI expert Professor Joanna Bryson and the Data Ethics Canvas from the Open Data Institute). We’ll talk more about these in later posts.

Conclusion

In this post, we have reviewed several of the early ethics and responsible technology frameworks out there for developers. We have seen some common themes:

  • The need to consider the potentially harmful consequences of products and features both up-front and throughout the lifetime of the product.
  • The need to look at products from multiple viewpoints (not just the ones in your engineering team).
  • The need to comply with the law and potentially go further.
  • The need to monitor the use of products in the field.

But is ethics only a matter of process, or is it more? In the next post in this series, we’ll look at the role psychology plays in risk management, professional behaviour and decision-making.

(Part 4 of the University of Hertfordshire Tech Ethics Course. << Part 3 | Part 5 >>)

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs & helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and, most importantly of all, is the author of the dystopian, hard scifi Panopticon series (Amazon US | Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com
Sponsored by Container Solutions
The Panopticon Series on Amazon US and UK

Photo by Jason Wong on Unsplash
