Thursday 30 January 2020

Part 4 - What Should I Do!? Ethical Frameworks



(Part 4 of the University of Hertfordshire Tech Ethics Course. << Part 3 | Part 5 >>)

It would be nice to think a grasp of the law and an innate moral sense could guarantee you'd always do the right thing in all circumstances. Unfortunately, it’s not that simple. People can end up doing bad things without being conscience-free psychopaths.

With the best of intentions, there are many ways I might do dodgy stuff without meaning to:

  • I might make an inadvertent product mistake by not thinking through all possible implications.
  • I could spot a potential issue but it may seem like a small harm.
  • I may decide more people would be helped than hurt by it (that's called utilitarianism - as Mr Spock said, “The needs of the many outweigh the needs of the few.” Although utilitarianism is appealingly logical, it can come across as cold to the general public and risks a PR disaster if the harm is anything but minor).
  • I could suspect something’s wrong but everyone around me seems fine with it, so I go along with their group decision.
  • Something that used to be good might turn into something bad without me noticing.
  • My boss may tell me to do it, “or else”.

In fact, it's very easy for good people to do bad things. According to the ethics questions in the 2018 Stack Overflow developer survey, only around 60% of developers said they definitely wouldn’t write unethical code. The other 40% were more equivocal. In addition, only about 20% of those surveyed felt the person who wrote a piece of code was ultimately ethically responsible for it (one problem with that position is the coder might be the only person who fully understands it).

It’s hard to do the right thing. Psychology plays a big part in why we don't (as we'll discuss in the next post) but even if you do try to do good, relying on your gut feel for what's right or wrong is highly unreliable. That's why many industries use ethical frameworks to help people make better considered decisions.

There are advantages to using a framework for your ethical judgments:

  • they can help with thinking through all the possible implications of a decision, including encouraging you to get different perspectives on an issue
  • they codify and let you learn from the experience of others
  • they support you in convincing other people a problem exists (you can point to the framework as a source of authority that supports your argument).

In this blog, we are going to look at several frameworks that provide help in different ways.

Ethical Theory

This article from Brown University is a good introduction to ethical thinking and the difference between morals and ethics. However, our blog series is not about morality. Our focus is on consequences, not intent.

Nevertheless, it's worth remembering that bad intentions don’t play well in the press or in court. Several European Volkswagen engineers found this out in 2017 when they were sent to jail in the US for deliberately falsifying emissions data. It is increasingly hard to keep dodgy practices like that a secret. Stop and think. If your rationale wouldn’t look good in court or on the front page of the Daily Mail then change it. Don’t hope you can keep it under wraps.

If you are interested in the more philosophical side of ethical theory, this free Harvard course by Michael Sandel is a great introduction.

Practical Ethics

As far as I'm concerned, your soul is your own business. I care about your professionalism. I want you to know how to spot problems in advance and manage them so they don't turn into crises. Avoiding catastrophe is better for your users, your company and you.

There are several ethical frameworks to help you make better tech product choices. Below, I discuss 3 of them:

  • The ACM code of ethics.
  • The EthicalOS toolkit.
  • The Doteveryone Consequence Scanning process.

ACM Code of Professional Ethics

The Association for Computing Machinery (ACM) published an updated code of ethical and professional conduct in 2018. It’s designed to serve as a basis for “ethical decision-making” and “remediation when violations occur” (i.e. spotting and fixing your inevitable mistakes).

ACM’s Code is a definition of what behaviour to aim for. Their ethical duties are quite close to “don’t break the law” (at least in Europe). However, they go further. In their view, responsibility is not merely about avoiding prosecution, it is also about doing the right thing: taking professional care to produce high quality, tested and secure systems.

Their principles include making sure that you (and by extension the products you produce):

  • Avoid harm (don’t physically, mentally, socially or financially harm your users or anyone else).
  • Are environmentally sustainable.
  • Are not discriminatory.
  • Are honest (don’t actively lie to or mislead users and certainly don’t commit fraud).
  • Don’t infringe licenses, patents, trademarks or copyright.
  • Respect privacy and confidentiality.

The framework is a fairly short, uncontroversial, and conservative one. It maps closely to obeying the letter AND spirit of the law where your products will be used. The ACM go beyond what is currently legally required in most countries, but I suspect the law will get there at some point.

Speculative Ethics - The Ethical OS Toolkit

On the less practical and more speculative side, the EthicalOS Toolkit is a high-level framework that helps individuals and teams wargame worst-case scenarios for products and think through in advance how those situations could be handled or avoided.

Part 1 of the Toolkit asks developers to think through possible failure modes for 14 potential (somewhat dystopian) products and how the problems might be mitigated. In particular, in each case it asks:

In this situation: "What actions would you take to safeguard privacy, truth, democracy, mental health, civic discourse, equality of opportunity, economic stability, or public safety?”

The answers you come up with might range from “add an alert” to “don’t develop this product at all”. The goal is to gauge the risk and decide whether you need to take an action.

EthicalOS have clearly identified the 8 "good things" listed above as their basis of ethics. There are overlaps with ACM’s list (privacy, truth, safety) but they're not identical. The EthicalOS list feels slightly US-centric to me (“truth, justice and the American way” as Superman might say). If you live in mainland China, democracy is not going to be one of your ethical goods. I foresee "privacy" and "equality of opportunity" could also be at odds in future. In my opinion, if you want a more global definition of good you should take a look at the UN’s global sustainable development goals. Nonetheless, the EthicalOS toolkit's role-playing is an imaginative way to think through ethically tricky questions.

Part 2 of the kit asks questions about your own product to help you anticipate how it could be misused: for example, for propaganda, addiction, crime, or discrimination.

This section of the toolkit is useful, but there is an omission when it comes to one of the most pressing issues of our time: pollution and climate. That raises an interesting point. It’s easy to spend your time worrying about how your product might overthrow world order in a decade’s time, whilst omitting to do easy good like putting your AWS instances in green regions.
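To show how small that "easy good" can be, here is a minimal sketch using boto3. It assumes an AWS setup; the region name is only an often-cited lower-carbon example (check your provider's own sustainability data) and the AMI ID is a placeholder, not a real image.

```python
# A minimal sketch: choosing a lower-carbon AWS region is usually a one-line
# decision. "eu-north-1" is often cited as a greener region, but verify against
# your cloud provider's current sustainability information.
import boto3

GREEN_REGION = "eu-north-1"  # assumption: pick a region with low carbon intensity

ec2 = boto3.client("ec2", region_name=GREEN_REGION)
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI, not a real image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

The point is not the specific API call: it's that the ethical choice here costs one parameter, which makes it an odd thing to skip while worrying about distant dystopias.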

Finally, part 3 of the toolkit lists 6 potential strategies for producing more responsible tech. You’ll be pleased to hear that number one is to take a course on it. Others include oaths (which unfortunately don’t appear to have much effect, as we'll discuss in the next post), ethical bug bounties, product monitoring (this is my personal preference), and practice licenses for developers (I’m dubious about this one as well, as software engineering isn’t location-bound like legal, medical or architectural practice).

At the end, there is a set of checklists to help you consider whether you have carefully scanned your product for ethical and thus professional risks.

Agile Ethics - Consequence Scanning 

Consequence Scanning by the UK think tank Doteveryone (Brown S. (2019) Consequence Scanning Manual Version 1. London: Doteveryone) defines a way of considering positive and negative implications by asking:

  • What are the intended and unintended consequences of your product? 
  • What are the positive consequences to focus on? 
  • What are the consequences to mitigate? 

The lightweight process slots into existing agile development and is designed “for the early stages of product planning and should be returned to throughout development and maintenance”. It uses guided brainstorming sessions and is an easy way to add more ethical thought into your product management.

Sector Specific Frameworks

The frameworks above are general to any product but there are others being created that are aimed at more specific areas including: AI, data and machine learning (e.g. “Principles of AI” by AI expert Professor Joanna Bryson and the data ethics canvas by the Open Data Institute). We’ll talk more about these in later posts.

Conclusion

In this post, we have reviewed several of the early ethics and responsible technology frameworks out there for developers. We have seen some common themes:

  • The need to consider the potentially harmful consequences of products and features both up-front and throughout the lifetime of the product.
  • The need to look at products from multiple viewpoints (not just the ones in your engineering team).
  • The need to comply with the law and potentially go further.
  • The need to monitor the use of products in the field.

But is ethics only a matter of process, or is it more? In the next post in this series, we’ll look at the role psychology plays in risk management, professional behaviour and decision-making.

(Part 4 of the University of Hertfordshire Tech Ethics Course. << Part 3 | Part 5 >>)

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs & helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and, most importantly of all, is the author of the dystopian, hard sci-fi Panopticon series (Amazon US, Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com
Sponsored by Container Solutions
The Panopticon Series on Amazon US and UK

Photo by Jason Wong on Unsplash

Friday 24 January 2020

Part 3: Are you a Goodie or a Baddie? What Does Being Ethical Mean?


(Part 3 of the University of Hertfordshire Tech Ethics Course. << Part 2 | Part 4 >>)

Tech ethics isn’t philosophy, it's professional behaviour.

In fact, I'd define ethical behaviour for a technologist as just taking reasonable care to avoid doing harm, which is also the foundation of doing your job professionally. What does that mean in practice?
  • Thinking upfront about how harm might come to a group or individual from your product.
  • Taking reasonable steps to avoid it. 
  • Monitoring your system in production, including user issues, to spot problems you missed.
  • Fixing them.  

Isn’t That Just the Law?

Not necessarily. The law lags behind progress in software (we move very fast). Some harmful behaviour is against the law (as I described in part 2) and some of it isn't.

Or at least, it isn't yet.

Causing foreseeable physical harm is often a crime. Causing reputational, economic, or emotional damage; inconvenience, reduction in quality of life, or environmental problems, for example, may not be. However, even if you aren't breaking criminal law you might be in breach of civil or contract law.

Are you Responsible?

Most developers think they are not legally or morally responsible for the code they write, but the courts may disagree. “I was just following orders” is not a legal defence. Neither is “I didn’t know that was against the law.” It is your responsibility to check.

Is it Unprofessional to be Ethical?

It may be that the company you work for is a nefarious organisation with a plan for world domination. If you work in a secret lair under an extinct volcano, that might apply to you. If your CEO is a super-villain, bad behaviour may well be part of his business strategy and Dr Evil might consider it unprofessional of you to raise a concern.

However, most businesses are not run by baddies. As well as staying inside the law, they do care about keeping customers happy, retaining staff, and avoiding newspaper scandals. For most of them, ethical breaches are mistakes. The error might be caused by failure to test, lack of awareness, shortsightedness, misunderstandings, or miscommunications. In fact, ethical problems should be considered alarm bells for poor cognition in an organisation.

Your processes to avoid ethical breaches should be the same as your processes to avoid any potentially costly mistake. They are about managing risk to avoid it turning into a crisis. The processes are not there to ensure no error ever happens (that would be impossible) but to make sure issues are spotted and corrected before they do irreparable harm.

Don't Assume that Because You're Paid, You are a Baddie!

Never assume your CEO is an evil genius and you're only paid a salary to avert your eyes from his misdeeds. Where tech is concerned, he's more likely to be an idiot. History is rife with soldiers who followed orders that were never given. Don’t be that person. If you're asked to do something dangerous or harmful, it’s probably a mistake. Raise your concerns immediately. That is your job. Never assume your job is to help with a cover-up.

Why raising issues is good for business:
  • It is a sign that you're being careful about your work. 
  • You might be about to break the law (or be breaking it in some countries) and all businesses want to avoid that.
  • You might get sued.
  • You might get bad PR: people may boycott your products or you might struggle to hire. 
  • Ethical breaches are often a warning sign of dodgy decision-making that needs to be fixed. 
Personally, I don’t want to find myself yelling, "I was just following orders!" in court, on the front page of the Daily Mail, or anywhere else. So how do I avoid it?

Avoiding Harm

It’s not unethical to have something go wrong. This is software - things go wrong. It’s only unethical (or unprofessional) if you don’t make reasonable efforts to:
  • avoid it going harmfully wrong
  • spot when bad stuff is happening
  • resolve serious problems when you encounter them.

Think up Front

In your product's design phase, set time aside to do “consequence scanning”. Think through:
  • harms that could result from your product, including by misuse
  • how you would spot if that happened
  • how bad it would be and how to mitigate that if necessary. 
In the next post we’ll talk about some frameworks that exist to help with this.

Follow Best Practices

Where they exist, follow best practices unless there is a very good reason not to. For new stuff like machine learning, best practices are still being formed. If best practices are not set in stone in your area:

  • follow what you can
  • be very careful when you stray off that path
  • document your thinking processes and decisions, at least in your issue tracking system, so that other engineers, auditors, and your future self can see why you made the decision you did (there is usually good reason but, trust me, you'll forget what it was).

Report Problems

What should you do if you see something potentially harmful like unpatched systems?

Here are 3 things you probably shouldn’t do:
  • Ignore it.
  • Quit.
  • Immediately become a whistleblower, phone up the Daily Mail then escape on the first plane to Moscow.
Here’s what you should do:
  • Raise it in your issue tracking system with an appropriate severity (there's a minimal sketch of this after the list).
  • Be prepared to argue the case for that severity level.
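As promised, here is a minimal sketch that files such an issue via the GitHub REST API. The repository name, label scheme and helper function are hypothetical; the point is simply that raising a tracked, severity-tagged issue is a few lines of work, whatever tracker you actually use.

```python
# A minimal sketch, assuming GitHub is your issue tracker. The repository,
# label scheme, and helper name are hypothetical; adapt to your own tracker.
import os
import requests

def raise_concern(title: str, body: str, severity: str) -> int:
    """File an issue flagging a potential harm, tagged with a severity label."""
    response = requests.post(
        "https://api.github.com/repos/example-org/example-repo/issues",  # placeholder repo
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={"title": title, "body": body, "labels": [f"severity:{severity}"]},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["number"]

# Example: an unpatched system spotted during routine work.
number = raise_concern(
    title="Unpatched OpenSSL on the payment servers",
    body="Known CVEs outstanding; customer card data could be exposed.",
    severity="high",
)
print(f"Raised issue #{number}")
```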

Test!

The most ethical thing you can ever do is thorough testing. Watch out for edge and missing test cases. A classic mistake is to only test your product on the people in your IT team - they almost certainly don’t reflect all of humanity. If they do, you might be over-staffed on that project.
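To make that concrete, here is a minimal sketch of an edge-case test in pytest. The format_greeting function is a hypothetical stand-in for your own code; the interesting part is the list of inputs, which deliberately includes cases an in-house team might never think to try.

```python
# A minimal sketch of edge-case testing with pytest. format_greeting() is a
# hypothetical stand-in for real product code; the diverse inputs are the point.
import pytest

def format_greeting(name: str) -> str:
    if not name.strip():
        raise ValueError("name must not be blank")
    return f"Hello, {name.strip()}!"

@pytest.mark.parametrize("name", [
    "Ada",                   # the easy case everyone tests
    "José",                  # accented characters
    "王小明",                 # non-Latin script
    "Nguyễn Thị Minh Khai",  # multi-word name with diacritics
    "a" * 300,               # very long input
])
def test_diverse_names_are_greeted(name):
    assert format_greeting(name).startswith("Hello, ")

def test_blank_names_are_rejected():
    with pytest.raises(ValueError):
        format_greeting("   ")
```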

Field testing is a good idea generally and sometimes unavoidable. Plan for errors to be spotted and handled without harming the user, which takes us to the next section...

Monitor

Industries like aviation, cars or oil and gas have something called a safety culture. They actively search out problems and examine them carefully. They do thorough postmortems and try to make sure any harmful issue only happens once. But don't just track actual failures, track near ones too...

Track Near Misses

The most successful businesses don’t only track active failure, they also monitor “near misses”: problems that never actually materialise and often resolve themselves, but are a warning sign of something bad in future.

In aviation, a rise in plane near misses indicates that a situation is becoming dangerous and there is more risk of a collision. Getting early warning from your near miss or near collision reporting lets you take action and avoid a catastrophe!
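In software, a near miss might be a payment that only succeeded after retries or a queue that almost overflowed. Here is a minimal sketch using the Python prometheus_client library; the metric names, the charge() call, and the simulated failure rate are hypothetical, but they show how cheap it is to count near misses alongside outright failures.

```python
# A minimal sketch of near-miss tracking with prometheus_client. Metric names,
# charge(), and the simulated failure rate are hypothetical illustrations.
import random
from prometheus_client import Counter, start_http_server

FAILURES = Counter("payment_failures_total", "Payments that failed outright")
NEAR_MISSES = Counter("payment_near_misses_total",
                      "Payments that only succeeded after retrying")

class TransientError(Exception):
    """Stand-in for a retryable error from a payment provider."""

def charge(order_id):
    # Hypothetical flaky dependency: fails transiently about half the time.
    if random.random() < 0.5:
        raise TransientError("provider timeout")

def process_payment(order_id, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            charge(order_id)
        except TransientError:
            continue
        if attempt > 1:
            NEAR_MISSES.inc()  # it worked, but only just - record the near miss
        return True
    FAILURES.inc()             # an outright failure
    return False

if __name__ == "__main__":
    start_http_server(8000)    # expose /metrics so alerting can watch the trend
    for order_id in range(100):
        process_payment(order_id)
```

A rising near-miss count, with no change in outright failures, is exactly the kind of early warning the aviation analogy describes.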

Be Accountable and Auditable

Finally, keep records. Record the decisions you made and why. This can just be in your code management and issue tracking systems. If you are working on machine learning, you need to keep detailed information about your test data and models. 

The reason for this is two-fold:
  • you'll need it for any post-mortems
  • it gives you another chance to spot anything dodgy and act on it.
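For the machine learning case, the record doesn't need to be elaborate. A minimal sketch, assuming a model file and a training data file on disk (all paths, metrics, and field names below are hypothetical), might look like this:

```python
# A minimal sketch of keeping an auditable record for an ML model.
# File paths, metrics, and field names are hypothetical; adapt to your pipeline.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Hash the file so you can prove later exactly which data/model was used."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

record = {
    "model_file": "models/churn_v3.pkl",
    "model_sha256": sha256_of_file("models/churn_v3.pkl"),
    "training_data": "data/customers_2020_01.csv",
    "training_data_sha256": sha256_of_file("data/customers_2020_01.csv"),
    "test_accuracy": 0.91,  # example metric, record whatever you actually measure
    "known_limitations": "Under-represents customers aged over 70.",
    "decision_rationale": "See issue tracker ticket PROJ-123.",
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

with open("models/churn_v3.audit.json", "w") as f:
    json.dump(record, f, indent=2)
```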

Trust Yourself and Speak up

If something feels wrong, it probably is and maybe you're the only person who has spotted that. Perhaps you are worrying unnecessarily but ask anyway. The worst that'll happen is you'll learn something!

(Part 3 of the University of Hertfordshire Tech Ethics Course. << Part 2 | Part 4 >>)





Hero image by the great JD Hancock at jdhancock.com

Friday 17 January 2020

Part 2: Tech Ethics: The Law's The Floor


(Part 2 of the University of Hertfordshire Tech Ethics Course << Part 1 | Part 3 >>)

The 101 of ethical behaviour is: don’t break the law. That might seem obvious. It's not sufficient, but it is necessary. The laws of each country codify a subset of its ethical rules. If you're breaking them, you're probably acting unethically, so the foundation of responsible tech development is to obey the laws that apply to you.

In this post, we're going to cover some of the regulations you need to follow as a techie.

I'm not a lawyer and I'm not giving legal advice. I am commenting as a layman who has experienced most of these rules in my engineering career. If you need expert advice on any of this stuff talk to an actual lawyer. You may have one in your firm but if you don't your insurer can often help.

Be Warned! I Haven't Covered Everything

Every country and sector has its own rules you need to stick to when building a new tech product or extending an existing one. That's a lot of laws. I couldn't list them even if I knew them all, which I don't (and I'm not in prison yet). The good news is, most of them will never apply to you. However, there are some we run across a lot and which you're likely to encounter.

(Note: for every new project someone usually needs to do a bit of legal research: at least do some searching online and talk to veteran techies in that area).

1. Privacy, Transparency and Security (GDPR & Others)

Many countries have laws about digital privacy, but perhaps the most extensive is the European Union's General Data Protection Regulation (GDPR). It limits what a company or individual can do with the personal information of EU citizens. It also dictates how, and for how long, such data can be stored.

The US state of California has similar privacy rules and there may also be regulations on specific industries or groups that apply to your application. One example is HIPAA, which covers US healthcare information. Another is the COPPA rule on children's data.

As well as privacy, GDPR includes regulation around security. It requires the use of encryption and anonymisation for storing some sensitive personal data. Again, other countries have their own rules, for example the US government's FedRAMP regs.
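To give a flavour of what encryption and pseudonymisation look like in code, here is a minimal sketch using the Python cryptography library and a salted hash. It's an illustration only, not security or legal advice; in particular, key and salt management are glossed over.

```python
# A minimal sketch of encrypting and pseudonymising personal data.
# Illustration only: real systems need proper key management and expert review.
import hashlib
import os
from cryptography.fernet import Fernet

# Encryption: store the ciphertext, keep the key somewhere safer than the data.
key = Fernet.generate_key()          # in practice, load from a secrets manager
fernet = Fernet(key)
ciphertext = fernet.encrypt(b"jane.doe@example.com")
assert fernet.decrypt(ciphertext) == b"jane.doe@example.com"

# Pseudonymisation: replace the identifier with a salted hash so records can
# still be linked for analytics without exposing who the person is.
salt = os.urandom(16)                # keep the salt secret and stable per dataset

def pseudonymise(email: str) -> str:
    return hashlib.sha256(salt + email.lower().encode()).hexdigest()

print(pseudonymise("jane.doe@example.com"))
```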

GDPR also imposes transparency requirements on uses of data. For example, a "right to explanation" for some algorithmic decisions, particularly if they have significant ramifications for the individual like prison sentence recommendations or credit scores.

The transparency aspect of the GDPR is widely expected to cause legal wrangling in future because deep neural networks defy explanation. The UK government's current advice on handling this kind of decision-making is sensible (there's a small code sketch after this list):
  • give individuals information about the processing you do
  • introduce simple ways for them to request human intervention or challenge a decision
  • carry out regular checks to make sure that your systems are working as intended.
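A minimal sketch of what that advice could look like in a system is below. Everything here (the record fields, the in-memory log, the review function) is a hypothetical illustration, not a compliance recipe.

```python
# A minimal sketch of recording automated decisions and letting individuals
# challenge them. All names and fields are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str
    decision: str            # e.g. "credit_declined"
    model_version: str
    inputs_summary: dict     # what the processing was based on
    explanation: str         # plain-language reason given to the individual
    human_review_requested: bool = False
    made_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

AUDIT_LOG = []  # in practice, durable and queryable storage

def record_decision(record):
    AUDIT_LOG.append(record)

def request_human_review(subject_id):
    """A simple route for an individual to challenge an automated decision."""
    for record in AUDIT_LOG:
        if record.subject_id == subject_id:
            record.human_review_requested = True
            # in practice: notify a human reviewer and show them the full record

record_decision(DecisionRecord(
    subject_id="user-42",
    decision="credit_declined",
    model_version="scorer-1.3",
    inputs_summary={"income_band": "B", "missed_payments": 2},
    explanation="Declined because of two missed payments in the last year.",
))
request_human_review("user-42")
```

The "regular checks" then become a matter of querying the log: how many decisions were made, how many were challenged, and whether the explanations still make sense.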

2. Patents, Trademarks, Copyright, & Licensing (IP Law)

IP law applies to everything that you didn't produce from scratch yourself. That might be code samples or libraries, written text, music, or images you downloaded from the internet.

Even if you did produce something yourself, you could break IP law by accidentally infringing a patent or trademark. Accidental infringement usually doesn't come with huge penalties but the IP owner could stop you using your materials from then on.

Whenever you use anything you didn't create from scratch yourself, legally you need to confirm your right to do so. That might include licensing either the patent or the copyright. Licenses and trademarks tell you what you can and cannot do with materials and legally you must comply. Even if you have a license, you can't do anything you want. For example, you can't tell people you are the author if you aren't.

All open source code comes with a copyright license that tells you how you can use it. Some licenses, for example Apache 2, are permissive and let you use the code for whatever you like. Some licenses are more restrictive, e.g. GPL, and only let you legally use the code in certain ways.
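If you want a quick sanity check of which licenses you have actually pulled into a Python project, a minimal sketch using the standard library's importlib.metadata looks like the code below. The keyword list is a crude heuristic I've made up for illustration, not legal analysis; treat anything flagged or UNKNOWN as "go and read the license".

```python
# A minimal sketch: list installed packages and flag licenses that may need a
# closer look. The keyword check is a crude heuristic, not legal advice.
from importlib.metadata import distributions

REVIEW_HINTS = ("GPL", "AGPL", "LGPL")  # copyleft keywords worth a manual check

for dist in distributions():
    name = dist.metadata["Name"]
    license_text = dist.metadata["License"] or "UNKNOWN"
    flag = "  <- review" if any(hint in license_text for hint in REVIEW_HINTS) else ""
    print(f"{name}: {license_text}{flag}")
```

Note that many packages record their license only in trove classifiers, so UNKNOWN here just means you need to look it up, not that there is no license.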

Even if you did write all your code yourself, you still need to be careful not to deliberately infringe someone else's IP because the penalties for that can be steep. Avoid casually discussing patents or trade secrets with anyone outside your company so you don't learn things you shouldn't know and might put in future products. If a conversation like that starts, subtly excuse yourself ASAP.

3. Contracts 

Contracts are two-sided commitments that describe the work one company or individual does for another. Contracts ensure the buyer gets what they want and the supplier gets paid for it. They might be between an individual user and the company behind a website, for example, or between a contractor building a custom application and the company who hired them.

A contract is legally binding. If either side fails to do what they agreed, the courts can force them to. Before they sign a contract, most companies ensure they have insurance to cover the cost of either suing the other party for a breach or being sued. That might happen even if you didn't do anything wrong. This is called liability insurance. 

4. Confidentiality

Engineers are frequently asked to keep secret what they are working on or what they learn through work.

Many contracts contain confidentiality clauses or you might be asked to sign a non-disclosure agreement (NDA). Confidentiality clauses and NDAs are enforced the same way as any contract: through the courts. If you blab, you can be sued.

5. Duty of Care

Duty of care legislation applies to products that may do foreseeable harm. This means physical or psychological harm rather than "pure economic loss" and therefore hasn't generally been applied to software products in the past. However, where software is incorporated into a physical device (a robot, an IoT gadget, etc.) then liability may apply because it could physically hurt someone.

6. Accessibility

Many countries have laws about access to websites for disabled users. If your site or product is inaccessible, there is a potential risk of being sued, particularly if you have users in the United States. Some government bids require compliance with accessibility standards like Section 508 in the US or the EU's web accessibility standards. Being accessible also helps with Google's SEO scoring.

7. Other Stuff to Comply With

Although GDPR, IP law, and contracts are probably the rules you'll encounter most often in the tech industry, they aren't the only ones.
- In every country, there are regulations on tax (VAT or other sales taxes, customs duties, etc.). Those rules affect product reporting.
- Your product might come under export laws (for example, the US rules on exporting so-called dual-use technology - items that are classified as potentially military. Some of those laws apply to fairly innocuous-seeming stuff like publicly available SSL libraries. For cryptographic libraries in particular, double-check before including them in your products. Don't panic - even if you end up using dual-use tech in your application it normally just involves some extra paperwork).
- There are sectors where the software is additionally regulated, for example, finance products and strict anti-money laundering (AML) rules.

Cybercrime or the Computer Misuse Act

I've talked about rules that affect how you write products, but there are also ones about how you use them. The laws around computer misuse are fairly draconian. For example, gaining unauthorised access to a computer, even if you do no harm, is a criminal offense in the UK with a penalty of up to 2 years in prison!

Other forms of cybercrime include online trolling, bullying and stalking, which are quite common. You may spot them being committed using your company's computer equipment. It's often an inside job: one of your employees or someone who's leaving, so you might have to act to stop or report it. 

Laws to Come?

The EU has plans to add new laws around AI and public surveillance, which will probably appear over the next few years. Or then again, perhaps not.

There are a Lot of Laws

You always need to do some reading around and checking in your field. What's legal and what's not changes all the time and it's not necessarily obvious. Research is your friend!

In the next post, we'll look at why complying with the law is not always enough - it's just the minimum.

<< Read Part 1 | Read Part 3 >>



Photo by King's Church International on Unsplash



Monday 13 January 2020

Part 1: What Next for Tech Ethics? A New Course


(Part 1 (Intro) to the University of Hertfordshire Tech Ethics Course. Part 2 >>)

In 2018, a group of keen techies ran a conference on technical ethics in London. The first spin-off from that event was the sustainable servers 2024 petition. We are now happy to announce the next. In 2020, we will be combining academic and industrial work on tech ethics to create practical resources to help developers make more informed choices about what to build; how to build it; and how to operate it safely for users and non-users alike (aka the rest of society).

We’ll be writing and delivering an open source “Responsible Technology” module for the University of Hertfordshire’s Computer Science MSc. The project is supported by the University and sponsored by Container Solutions.

What’s Coming?

We’re searching for the best work out there on practical tech ethics for the course and we'll build an open source repo of all our written and gathered materials.

We’ll also publish a series of blog posts on:
  • What is tech ethics and why is it a big deal?
  • “The law’s the floor.” But what’s legal and what’s not?
  • What does society want? How to keep your eye on current and up-and-coming priorities: the climate and ecosystem, privacy, fairness, equality, and health.
  • What’s out there to help? Resources from industry and academia.
  • The psychology of responsibility. Why do people do bad things? (Including Milgram, Asch, psychological safety, and whether codes of ethics really work).
  • Testing, monitoring and reporting.
  • Deeper dives into areas like: energy use, AI, Big Data, data bias, cyberwarfare, and the digital Geneva convention.
  • The history of accessibility and the UI, which is a fascinating example of an ethical success story that stopped working.
In the next post in the series, we'll look at why "the law's the floor".

(Part 1 (Intro) to the University of Hertfordshire Tech Ethics Course. Part 2 >>)
