Friday 24 January 2020

Part 3: Are you a Goodie or a Baddie? What Does Being Ethical Mean?


(Part 3 of the University of Hertfordshire Tech Ethics Course. << Part 2 | Part 4 >>)

Tech ethics isn’t philosophy; it’s professional behaviour.

In fact, I'd define ethical behaviour for a technologist as just taking reasonable care to avoid doing harm, which is also the foundation of doing your job professionally. What does that mean in practice?
  • Thinking upfront about how harm might come to a group or individual from your product.
  • Taking reasonable steps to avoid it. 
  • Monitoring your system in production, including user issues, to spot problems you missed.
  • Fixing them.  

Isn’t That Just the Law?

Not necessarily. The law lags behind progress in software (we move very fast). Some of this is against the law (as I described in part 2) and some of it isn't.

Or at least, it isn't yet.

Causing foreseeable physical harm is often a crime. Causing reputational, economic, or emotional damage; inconvenience, reduction in quality of life, or environmental problems, for example, may not be. However, even if you aren't breaking criminal law you might be in breach of civil or contract law.

Are you Responsible?

Most developers think they are not legally or morally responsible for the code they write, but the courts may disagree. “I was just following orders” is not a legal defence. Neither is “I didn’t know that was against the law.” It is your responsibility to check.

Is it Unprofessional to be Ethical?

It may be that the company you work for is a nefarious organisation with a plan for world domination. If you work in a secret lair under an extinct volcano, that might apply to you. If your CEO is a super-villain, bad behaviour may well be part of his business strategy and Dr Evil might consider it unprofessional of you to raise a concern.

However, most businesses are not run by baddies. As well as staying inside the law, they do care about keeping customers happy, retaining staff, and avoiding newspaper scandals. For most of them, ethical breaches are mistakes. The error might be caused by failure to test, lack of awareness, shortsightedness, misunderstandings, or miscommunications. In fact, ethical problems should be treated as alarm bells for poor decision-making in an organisation.

Your processes to avoid ethical breaches should be the same as your processes to avoid any potentially costly mistake. They are about managing risk to avoid it turning into a crisis. The processes are not there to ensure no error ever happens (that would be impossible) but to make sure issues are spotted and corrected before they do irreparable harm.

Don't Assume that Because You're Paid, You are a Baddie!

Never assume your CEO is an evil genius and you're only paid a salary to avert your eyes from his misdeeds. Where tech is concerned, he's more likely to be an idiot. History is rife with soldiers who followed orders that were never given. Don’t be that person. If you're asked to do something dangerous or harmful, it’s probably a mistake. Raise your concerns immediately. That is your job. Never assume your job is to help with a cover-up.

Why raising issues is good for business:
  • It is a sign that you're being careful about your work. 
  • You might be about to break the law (or be breaking it in some countries) and all businesses want to avoid that.
  • You might get sued.
  • You might get bad PR: people may boycott your products or you might struggle to hire. 
  • Ethical breaches are often a warning sign of dodgy decision-making that needs to be fixed.

Personally, I don’t want to find myself yelling, "I was just following orders!" in court, on the front page of the Daily Mail, or anywhere else. So how do I avoid it?

Avoiding Harm

It’s not unethical to have something go wrong. This is software - things go wrong. It’s only unethical (or unprofessional) if you don’t make reasonable efforts to:
  • avoid it going harmfully wrong
  • spot when bad stuff is happening
  • resolve serious problems when you encounter them.

Think up Front

In your product's design phase, set time aside to do “consequence scanning”. Think through:
  • harms that could result from your product, including by misuse
  • how you would spot if that happened
  • how bad it would be and how to mitigate that if necessary. 
In the next post we’ll talk about some frameworks that exist to help with this.

Follow Best Practices

Where they exist, follow best practices unless there is a very good reason not to. For new areas like machine learning, best practices are still being formed. If best practices are not set in stone in your area:

  • follow what you can
  • be very careful when you stray off that path
  • document your thinking processes and decisions, at least in your issue tracking system, so that other engineers, auditors, and your future self can see why you made the decision you did (there is usually good reason but, trust me, you'll forget what it was).

Report Problems

What should you do if you see something potentially harmful like unpatched systems?

Here are three things you probably shouldn’t do:
  • Ignore it.
  • Quit.
  • Immediately become a whistleblower, phone up the Daily Mail then escape on the first plane to Moscow.
Here’s what you should do:
  • Raise it in your issue tracking system with an appropriate severity.
  • Be prepared to argue the case for that severity level.

Test!

The most ethical thing you can ever do is thorough testing. Watch out for edge cases and missing test cases. A classic mistake is to test your product only on the people in your IT team - they almost certainly don’t reflect all of humanity. If they do, you might be over-staffed on that project.
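To make that concrete, here is a minimal sketch of the kind of edge case a test pass limited to your own team can miss. Both validator functions and the sample names are hypothetical, purely for illustration:

```python
import string

def validate_name_naive(name):
    """Naive validator, written and tested only against the dev team."""
    allowed = set(string.ascii_letters + " -'")
    return bool(name) and all(c in allowed for c in name)

def validate_name(name):
    """Broader validator: any non-empty name of sensible length."""
    return 0 < len(name.strip()) <= 100

# Real names the naive version wrongly rejects:
real_names = ["Nguyễn Thị Minh", "李小龙", "Zoë Müller"]
for name in real_names:
    assert not validate_name_naive(name), "naive validator passed unexpectedly"
    assert validate_name(name), f"rejected a real user's name: {name}"
```

The naive version passes every test an anglophone team is likely to write for itself, which is exactly why it ships - and exactly why your test data needs to be broader than your office.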

Field testing is a good idea generally and sometimes unavoidable. Plan for errors to be spotted and handled without harming the user, which takes us to the next section...

Monitor

Industries like aviation, automotive, and oil and gas have something called a safety culture. They actively search out problems and examine them carefully. They do thorough postmortems and try to make sure any harmful issue only happens once. But don't just track actual failures, track near ones too...

Track Near Misses

The most successful businesses don’t only track active failure, they also monitor “near misses”: problems that never actually materialise and often resolve themselves, but are a warning sign of something bad in future.

In aviation, a rise in plane near misses indicates that a situation is becoming dangerous and there is more risk of a collision. Getting early warning from your near miss or near collision reporting lets you take action and avoid a catastrophe!
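In software, near-miss monitoring can be as simple as counting warning events and alerting when their rate rises. Here is a minimal sketch; the class name, window, and thresholds are all illustrative, not a recommendation:

```python
import time
from collections import deque

class NearMissTracker:
    """Record near misses and flag when their rate rises.

    Tune the window and threshold to your own system; the values
    below are purely illustrative.
    """

    def __init__(self, window_seconds=3600, alert_threshold=5):
        self.window = window_seconds
        self.threshold = alert_threshold
        self.events = deque()  # timestamps of recorded near misses

    def record(self, now=None):
        """Log a near miss; return True if the recent rate warrants an alert."""
        now = time.time() if now is None else now
        self.events.append(now)
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold

# Three near misses within a minute trips the alert:
tracker = NearMissTracker(window_seconds=60, alert_threshold=3)
assert not tracker.record(now=0)
assert not tracker.record(now=10)
assert tracker.record(now=20)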

Be Accountable and Auditable

Finally, keep records. Record the decisions you made and why. This can just be in your code management and issue tracking systems. If you are working on machine learning, you need to keep detailed information about your test data and models. 

The reason for this is two-fold:
  • you'll need it for any post-mortems
  • it gives you another chance to spot anything dodgy and act on it.
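For machine-learning work, this record-keeping can be as lightweight as writing a small metadata file next to each trained model, with a ticket ID linking back to the decision in your issue tracker. A minimal sketch - the function and field names are illustrative, not a standard:

```python
import hashlib
import json

def write_model_record(record_path, model_name, train_data_path, params, ticket_id):
    """Write an audit record: what was trained, on which data, and why.

    `ticket_id` points at the issue-tracker entry explaining the decision,
    so the "why" is findable later in post-mortems.
    """
    with open(train_data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "model": model_name,
        "training_data": train_data_path,
        "training_data_sha256": data_hash,  # proves exactly which data was used
        "hyperparameters": params,
        "decision_ticket": ticket_id,
    }
    with open(record_path, "w") as f:
        json.dump(record, f, indent=2)
    return record
```

A record like this costs minutes to write and makes both post-mortems and audits far quicker.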

Trust Yourself and Speak up

If something feels wrong, it probably is and maybe you're the only person who has spotted that. Perhaps you are worrying unnecessarily but ask anyway. The worst that'll happen is you'll learn something!

(Part 3 of the University of Hertfordshire Tech Ethics Course. << Part 2 | Part 4 >>)

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer, and speaker for 25 years. She runs and helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire, and, most importantly of all, is the author of the dystopian, hard sci-fi Panopticon series (Amazon US | Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com

Sponsored by Container Solutions




Hero image by the great JD Hancock at jdhancock.com
