Thursday, 20 February 2020

Part 6 - The Tech Ethical Issues To Talk About


(Part 6 of the University of Hertfordshire Tech Ethics Course. << Part 5 | Part 7 >>)

In our last few posts, we have discussed tech ethics and responsible technology in general terms and left it to the practitioner (you) to decide how to apply it to your own future products. In the next posts, we are going to briefly look at some specific areas that get a great deal of press coverage. Some get more than others - ethics shouldn’t be subject to fashion, but inevitably it is.

In each case, I am not going to define the rights and wrongs - other than pointing out where the law already does so - but I will try to outline some of the main arguments on both sides, and where the ethical floor might be, no matter which side of the argument you come down on.

Over the next few blog posts I’ll cover:

  • Energy use in the tech sector.
  • AI and Big Data.
  • Cyberwarfare, propaganda and killer robots.
  • Surveillance.
  • Anthropomorphism.
  • Attention.
  • Social Media influence.
  • Open v closed code and data.
  • The role of social scoring and civil order (mass surveillance apps).
  • Accessibility.
  • Exclusion.
  • Privacy.
  • Security.
  • Future of trust.
  • Changing behaviours and social norms.

I’ll also discuss different definitions of social good (e.g. individualistic vs community) which vary from country to country and person to person.

That’s a lot! Inevitably, it will only be an overview.

Energy Use

Tech is one of the most successful industries worldwide. It is also one of the fastest-growing consumers of electricity, much of which is still generated by burning fossil fuels.

The UK’s electricity was just over 40% hydrocarbon-generated in 2019. Globally, fossil fuels generated ~65% of electricity in 2017. Data centres alone are currently estimated to use 2% of the world’s electricity, and if you add in devices the percentage gets much higher (10-20%, according to Greenpeace). Machine Learning is particularly energy intensive.

Does this make the tech industry a force for good or evil when it comes to climate change?

On one hand, we could assert that communications tech cuts down on travel (very green); technology often increases efficiency (green again); and there are societal benefits to tech that make it worth some climate cost.

On the other, we might say tech is electricity-powered and there are better, more sustainable ways to generate that than by burning fossil fuels. Many people argue that the tech industry is rich and powerful, and therefore the right thing is for the industry to lead the way in using clean electricity.

Those views are not conflicting.

As engineers, our responsibility is to stay informed; make active choices; and pay attention to our energy usage. For example, it is vastly cleaner to host instances in AWS’s Dublin region (100% renewable or offset) than in US East (only 50% offset). Google Cloud and Azure are both 100% offset everywhere. This information is usually available on cloud providers’ sustainability web pages. Read them. If the data isn’t available for your provider, that is not a good sign. Ask for it.
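One way to act on this is to make region choice an explicit, reviewable decision in code rather than a default. The sketch below does exactly that; the region names and figures in it are illustrative assumptions based on the numbers above, not live provider data, so check your provider's sustainability pages for current values.

```python
# A minimal sketch of making region choice an explicit decision.
# CAUTION: the figures below are illustrative assumptions only - they are
# not live provider data. Check your cloud provider's sustainability pages.
RENEWABLE_OR_OFFSET_SHARE = {
    "aws-eu-west-1": 1.00,  # Dublin: 100% renewable or offset (per the text)
    "aws-us-east-1": 0.50,  # US East: ~50% offset (per the text)
}

def greenest_region(candidates):
    """Pick the candidate region with the highest renewable/offset share.

    Unknown regions score 0.0, so "no data" never wins the comparison -
    which is itself a policy: missing sustainability data is a bad sign.
    """
    return max(candidates, key=lambda r: RENEWABLE_OR_OFFSET_SHARE.get(r, 0.0))

print(greenest_region(["aws-us-east-1", "aws-eu-west-1"]))  # aws-eu-west-1
```

Even a trivial table like this forces the conversation to happen in code review instead of never.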

AI, Machine Learning and Big Data

I’m not talking about general AI and whether to build Skynet here. That’s a rather long way off. What I am focusing on is data analytics, and the automation of physical and intellectual tasks.

Data Analysis 

When we discuss Machine Learning (ML), what we're usually talking about is machine-enhanced statistical analysis of existing data sets. Sometimes that analysis uses something like deep learning, but surprisingly often it is still just a SQL query!
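To make that concrete, here is a deliberately mundane sketch using Python's built-in sqlite3 module. The table, columns and values are invented for illustration, but this is the shape of analysis that often sits behind a "data analytics" headline: a plain aggregate query with a threshold, no neural network in sight.

```python
import sqlite3

# Hypothetical example: the table name, columns and values below are
# made up purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (patient_id INTEGER, glucose REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [(1, 5.2), (1, 6.1), (2, 9.8), (2, 10.4)],
)

# The "analysis": flag patients whose average reading exceeds a threshold.
rows = conn.execute(
    "SELECT patient_id, AVG(glucose) FROM readings "
    "GROUP BY patient_id HAVING AVG(glucose) > 7.0"
).fetchall()
print(rows)
```

Note that every ethical question in the list below still applies to a query this simple: where the readings came from, whether consent was given, and what happens to the patients it flags.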

ML is an astonishingly useful tool for humanity. Using digital techniques, huge pools of high quality, often high density, data can be analysed. That information couldn’t be processed manually in any realistic timeframe.

For example, medical, astronomical, agricultural or other scientific photographs can be scanned and automatically studied, potentially unearthing radical new hypotheses in those fields. Similarly, huge quantities of public domain text data are already being analysed, leading to breakthroughs in automatic translation. ML is also driving major leaps in medicine.

But it's not all good news. There are also significant ethical concerns about ML:

  • Is the source data accurate, or does it contain false information? Historic data may include beliefs that are incorrect but may have been, or continue to be, widely held. Products based on such biased data might be unfair or cause unlawful discrimination.
  • Has the data been sourced in a responsible manner or have people inadvertently given away information that will harm themselves, others or society in general?
  • Is the analysis bugged in a way that is hard to detect? Has it been sufficiently tested?
  • How do we handle false positives or negatives? These will happen even if there are no bugs because that is how statistics works. 
  • Does restricted access to the source data lead to monopolies that are not in the public interest?

The EU’s GDPR legislation attempts to address some of these concerns via its transparency and right of challenge rules.

As engineers, our responsibility is to obey the law; check the provenance of our data; be aware that data quality impacts every conclusion we reach; test thoroughly; and rigorously document and account for all our decisions.

We also need to understand that error is baked into statistics: data = model + error, i.e. some error will always occur. The inevitable mistakes therefore need to be handled compassionately and thoroughly accounted for.
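A toy simulation (with invented numbers, purely for illustration) shows why: even with bug-free code and an entirely healthy population, measurement noise alone guarantees some false positives.

```python
import random

random.seed(42)

# data = model + error, simulated. Every simulated person is healthy
# (true score 0.0); the measurement adds Gaussian noise on top.
THRESHOLD = 2.0   # flag anyone who measures above this
trials = 10_000

false_positives = 0
for _ in range(trials):
    measurement = 0.0 + random.gauss(0.0, 1.0)  # model (0) + error (noise)
    if measurement > THRESHOLD:
        false_positives += 1  # flagged despite being perfectly healthy

# Roughly 2% of a standard normal distribution lies above 2.0, so we
# should expect a couple of hundred wrongly flagged people - no bugs needed.
print(false_positives, "false positives out of", trials, "healthy people")
```

The question for the product team is not "how do we get this to zero?" (you can't) but "what happens to the people we flag wrongly?"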

Automation

One of the common uses of AI or machine learning is in job automation. Again, this is neither good nor bad but has benefits and risks.

Task automation means many dull, dangerous, or rote tasks don't need to be performed by humans any more. That increases productivity and accuracy, and reduces cost. One example is self-driving cars, which are anticipated to reduce congestion and accidents. That’s all good.

So, what are the arguments against automation?

One concern is that where humans are directly controlled by algorithms, the result can be heartless. For example, where gig economy workers are given automatically calculated and distributed work schedules, those schedules sometimes allow no time for family life or illness. Take the real-life case of the Kronos scheduling software. In 2014, The New York Times revealed how its efficiency-first optimisation screwed up the lives of employees. Presumably the developers hadn’t intended that; they just hadn’t foreseen the problems and had no existing guidelines to work from.

Of course, humans can also be cruel, but here the responsibility for mercy lies with the software developers. They have to code it in. The risk is, they might not (and probably won't) have a good understanding of the human factors in the situation. If they mess it up, they could make people's lives hell.

Automation can also lead to significant labour disruption. For example, Uber's stated business plan is to replace all its human drivers with those lovely self-driving cars. The oil industry has also introduced significant automation to oil wells, leading to a huge fall in the number of human employees.

Finally, automation directly links business productivity to capital (money to spend on robots or software), which advantages those who already have capital. That can lead to wealth concentration, which may not be in the public interest.

As engineers, the last two points are probably beyond our scope, but we do need to consider how our code is written and tested so it doesn’t harm people who are controlled by it and provides mechanisms for problems to be detected and reported. You may also want to consider whether you are happy with the existence of the product you’re building. That is your personal choice.

The Future of Warfare

Even if you don’t go into the defence industry, you need to be aware of the direction warfare is headed in, because the wars of the future will not only be fought with physical weaponry - they may be fought through your software.

Killer Robots

The most obvious ethical dilemma around tech and war today is the use of killer robots.

Remotely controlled, unmanned aerial vehicles (UAVs), aka drones, are already widely used on the battlefield, particularly in the Middle East. Also under development are machines that can target and shoot without human intervention: so-called killer robots (an accurate, if literally loaded, term).

The main argument in favour of these is they could be more accurate and reduce battlefield loss of life.

The primary argument against them is that the technology is still not good enough to use anywhere near civilian populations. They often kill the wrong person and targeting mistakes are frequently swept under the carpet by the military, rather than properly addressed. Note that the same arguments are also applied to UAVs with human triggers, where the automatically-generated hit list may contain mistakes (see AI and Machine learning above).

More philosophical arguments against autonomous weapons centre on whether humans should ever be killed automatically. You might point out that landmines already do this, but killing that way is against the Geneva and UN conventions - most of us have already decided it’s wrong.

Another argument is that killer robots make warfare cheaper (once the tech has been created) and therefore more of it can be waged. Whether that is a "for" or "against" depends on your viewpoint and current context.

Cyberwarfare

Cyberwarfare is a new weapons frontier. In Ukraine in 2015, the power grid was hacked and brought down by a sophisticated cyber attack. From 2005 to 2010, the US's Stuxnet virus attacked Iran's uranium enrichment program. In the first case, the target was the control code for a power facility. In the second, it was possibly every Windows machine in the world (it only triggered if the PC was in an Iranian nuclear plant).

In future, the target could be your system. It is hard to write code or support systems that are proof against a state actor, but as an engineer it is vital to make your systems as resilient as you can. Don’t get taken down, and cause the deaths of thousands, because you didn’t apply a security patch.

Propaganda and civil disorder

“Destabilizing an adversary society by creating conflict in it and creating doubt, uncertainty, distrust in institutions” - Keir Giles, senior consulting fellow on Russia at Chatham House.

Creating killer robots is expensive up-front. A lower Capex alternative is propaganda: eroding trust in a government using targeted advertising, misinformation, deep fakes or just fake news. The US may even be using popular games.

As an engineer, it is your responsibility to consider if your new social media platform, or product (e.g. a game), or tool (like a video editor) could be used as a weapon of destabilisation and how you would detect that and stop it.

What Else?

In this post, we have very briefly covered some of the ethical issues around climate change (energy use), AI and Machine Learning, and cyberwarfare. It is part of being a professional to weigh up these benefits and risks.

In the next post in this series, we’ll look at surveillance, anthropomorphism and attention…

(Part 6 of the University of Hertfordshire Tech Ethics Course. << Part 5 | Part 7 >>)


Wednesday, 12 February 2020

Part 5 - Why do Humans do Bad Things?


(Part 5 of the University of Hertfordshire Tech Ethics Course. << Part 4 | Part 6 >>)

People do bad things because they’re evil. If you’re a good person, you’ll never do anything wrong.

Hurray!! You can stop reading here.

Hang on a Minute!

Unfortunately, as we discussed in the last article, humans don’t appear to work like that. The study of social psychology suggests our behaviour is highly influenced by our environment. Your individual (usually good) nature is less critical than you might hope.

Most of us want to be ethical. This post is about what psychology tells us stands in our way, and what we can do about that. 

I’m a technologist not a psychologist, so these are mostly the judgments and investigations of my colleague and co-author, the registered psychologist Andrea Dobson. Many thanks Andrea!

Obedience

“More hideous crimes have been committed in the name of obedience than in the name of rebellion.” - C.P. Snow

After the second world war, psychologists started looking at why seemingly-normal people could do very bad things. The trigger was the Nuremberg trials. The world was stunned as, over and over, individuals justified mass murder on the grounds that “Befehl ist Befehl” - an order is an order.

In 1963, Yale psychologist Stanley Milgram decided to investigate further. He wanted to know how powerful the desire to be obedient was and how far it could change people’s behaviour. He devised a set of infamous electric shock experiments, and what he found was extraordinarily disturbing: 65% of ordinary Americans would administer what they believed were dangerous electric shocks to a stranger, provided the order came from an authority figure.

Some of the studies that followed have reported obedience rates of over 80% (from Italy, Germany, Austria, Spain, and Holland). It is now well accepted that obedience is a powerful driver in human behaviour.

Is that all? Do we merely follow orders or is there anything else as powerful that affects us?

Conformity

Would you contradict your colleagues? I’d like to think I would, but the evidence suggests I’m kidding myself. Most of us go along with the group consensus, whatever it might be. In fact, psychology tells me I’m more likely to deny the facts than risk being the odd one out.

In the 1950s, the Polish-American psychologist Solomon Asch ran a series of experiments to investigate how much an individual’s judgments were affected by those of the people around them. He discovered most of us (nearly 75% in his tests) conform: we will lie or deceive ourselves, at least some of the time, to publicly fit in with an overwhelming majority.

We’re Doomed!

Does this mean we’re the slaves of our environment? Fortunately not. Or not completely.

  • 35% of Milgram’s experimental subjects disobeyed orders and wouldn’t “electrocute” their victim, even under extreme social pressure. 
  • 95% of Asch’s subjects went against the group at least once, even if they mostly complied. Rebellion was more common if they had an ally or if voting was secret. 

Obedience and conformity are not insurmountable; they are merely strong influences that we should be aware of.

Riven with Guilt?

Experiments suggest most of us want to be good but we will often act badly if either those around us are, or we’re told to.

Does that mean we all live in a constant state of guilt and remorse? The answer is kind-of. We’re very good at ignoring our own guilt, or at least rationalising it away, using a process called Moral Disengagement.

Moral Disengagement is the process of convincing ourselves normal ethical standards don’t apply to us in the situation we’re in. We thus avoid the “self-sanction” that would normally stop us doing something wrong.

According to Albert Bandura of Stanford University: “Moral disengagement functions [..] through moral justification, euphemistic labelling, advantageous comparison, displacing or diffusing responsibility, disregarding or misrepresenting injurious consequences, and dehumanising the victim.”

A common way to diffuse moral responsibility, for example, is through group decision-making:

“People act more cruelly under group responsibility than when they hold themselves personally accountable for their actions” - Bandura

Again, it is something we need to be aware of. Moral disengagement doesn’t work in every case but it does appear to work. Remember that any action you take is an action you are personally ethically and legally responsible for, no matter what moral disengagement may tell you.

Unethical Amnesia

If you can’t quite explain away what you did, psychology suggests you have another option: forget all about it.

Psychologists Francesca Gino and Maryam Kouchaki, of Harvard and Northwestern Universities respectively, conducted a series of experiments on whether people remembered themselves doing good things better than they recalled doing bad ones. Their studies of over 2,100 participants demonstrated that people recall times they acted ethically, like playing a game fairly, more clearly than times they cheated. Again, this is something to watch out for - we appear to be hardwired to believe we are better behaved than we are. When we behave less well, we literally forget it.

We Seem to be Good at Doing Bad Things. How do we Fix That? 

If we want everyone to act more ethically, there are several approaches we could take.

Top-down change of behaviour throughout an entire organisation. 

The trouble is, top-down change is hard. Even if the CEO really means it, folk probably won’t believe it - at least not for a long time. Top-down changes can take years to permeate, and any authority-based approach can also lead to moral disengagement, which is risky in an ethically unclear situation (“The disappearance of a sense of responsibility is the most far-reaching consequence of submission to authority” - Stanley Milgram).

Bottom-up, individual-driven change. 

Bottom-up change could be quicker - people have a strong desire to see themselves as the goodies and will generally act well if left alone. However, people’s desire to do good is easily derailed by Obedience, Conformity and Moral Disengagement. As Bandura puts it: “Given the many psychological devices for disengaging moral control, societies cannot rely entirely on individuals”.

So what could we do?

Some researchers have suggested bad behaviour in companies often comes from bad incentives. For example:

  • Too many business transformation programs can warp a company’s own ethical climate by pushing too much change from the top, too quickly and too frequently. People who are rushed or flustered are more likely to become morally disengaged and act unethically.
  • Incentives and pressure to inflate achievement of targets can also cause issues. People do what they are rewarded to do, and most are rewarded for hitting KPIs, not following their principles. Again, this leads to moral disengagement.

The best way to combat disengagement is with engagement. So consider:
  • What are people paid and promoted for? Does it incentivise dodgy behaviour?
  • Are people punished for speaking up and questioning a decision or the accepted way of doing things?
  • Do people feel like they work for an amoral company? If they do, they’ll behave that way too.
  • Do leaders acknowledge dilemmas or sweep them under the carpet? Are problems discussed openly and frankly? Are diverse or conflicting views heard? 

Speak up!

“In a true learning organisation, employees are able to speak up, express concern and make mistakes without fearing negative consequences like punishment or ridicule.” - Andrea Dobson

Psychological Safety is a management concept that has become popular in the past few years. The idea is to create a team culture that promotes learning by making any question safe to ask, from “I don’t understand, how does that work?” to “isn’t that going to get someone killed?”

It’s a way of working that makes asking difficult, potentially ethical, questions part of your job (obedient) and expected (compliant) and has been suggested as a bulwark against moral disengagement. It is therefore one possible way to promote a more ethical work environment.

“Life in society requires consensus as an indispensable condition. But consensus, to be productive, requires that each individual contribute independently out of his experience and insight.” - Solomon Asch

Psychological safety is just one aspect of a learning organisation, and tools now exist to help companies implement it (which, according to Google’s Project Aristotle, has productivity advantages beyond just ethics).

The previous posts in this series talked about why you should act ethically in order to do your job professionally and legally. In this post, we discussed the psychological reasons why you, or your colleagues, might not do so even if you want to. The processes and behavioural norms around us can drive us via obedience, conformity, and moral disengagement. In the next post, we will look at some specific sectors of the industry and examine their ethical pros and cons.

(Part 5 of the University of Hertfordshire Tech Ethics Course. << Part 4 | Part 6 >>)

Authors 

Andrea Dobson-Kock is a Registered Psychologist (HCPC) and a Cognitive Behavioural Therapist. As a practising psychologist, she specialised in depression and anxiety disorders and complex grief, and worked for over a decade in mental health.

Anne Currie is an engineer of 25 years, a speaker, writer and science fiction author. She also teaches Tech Ethics at the University of Hertfordshire.

References

Asch, S.E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1-70.
McLaverty, T.C. (2016). The influence of culture on senior leaders as they seek to resolve ethical dilemmas at work.
Klass, E.T. (1978). Psychological effects of immoral actions: The experimental evidence. Psychological Bulletin, 85(4), 756.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.
Kouchaki, M. & Gino, F. (2016). Memories of unethical actions become obfuscated over time. PNAS, 113(22), 6166-6171.
Hofmann, W., Wisneski, D.C., Brandt, M.J. & Skitka, L.J. (2014). Morality in everyday life. Science, 345(6202), 1340-1343.
Goodwin, G.P., Piazza, J. & Rozin, P. (2014). Moral character predominates in person perception and evaluation. Journal of Personality and Social Psychology, 106(1), 148-168.
Festinger, L. & Carlsmith, J.M. (1959). Cognitive consequences of forced compliance. Journal of Abnormal and Social Psychology, 58, 203-210.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371-378.
Weiten, W. (2010). Psychology: Themes and Variations.
Sponsored by Container Solutions
The Panopticon Series on Amazon US and UK


Thursday, 30 January 2020

Part 4 - What Should I Do!? Ethical Frameworks



(Part 4 of the University of Hertfordshire Tech Ethics Course. << Part 3 | Part 5 >>)

It would be nice to think a grasp of the law and an innate moral sense could guarantee you'd always do the right thing in all circumstances. Unfortunately, it’s not that simple. People can end up doing bad things without being a conscience-free psychopath.

With the best of intentions, there are many ways I might do dodgy stuff without meaning to:

  • I might make an inadvertent product mistake by not thinking through all possible implications.
  • I could spot a potential issue but it may seem like a small harm.
  • I may decide more people would be helped than hurt by it (that's called utilitarianism - as Mr Spock said, “The needs of the many outweigh the needs of the few.” Although utilitarianism is appealingly logical, it can come across as cold to the general public and risks a PR disaster if the harm is anything but minor).
  • I could suspect something’s wrong but everyone around me seems fine with it, so I go along with their group decision.
  • Something that used to be good might turn into something bad without me noticing.
  • My boss may tell me to do it, “or else”.

In fact, it's very easy for good people to do bad things. In the ethics questions of the 2018 Stack Overflow developer survey, only 60% of developers said they definitely wouldn’t write unethical code; the other 40% were more equivocal. In addition, only 20% of those surveyed felt the person who wrote a piece of code was ultimately ethically responsible for it (one problem with that position is the coder might be the only person who fully understands it).

It’s hard to do the right thing. Psychology plays a big part in why we don't (as we'll discuss in the next post) but even if you do try to do good, relying on your gut feel for what's right or wrong is highly unreliable. That's why many industries use ethical frameworks to help people make better considered decisions.

There are advantages to using a framework for your ethical judgments:

  • they can help with thinking through all the possible implications of a decision, including encouraging you to get different perspectives on an issue
  • they codify and let you learn from the experience of others
  • they support you in convincing other people a problem exists (you can point to the framework as a source of authority that supports your argument).

In this blog, we are going to look at several frameworks that provide help in different ways.

Ethical Theory

This article from Brown University is a good introduction to ethical thinking and the difference between morals and ethics. However, our blog series is not about morality. Our focus is on consequences, not intent.

Nevertheless, it's worth remembering that bad intentions don’t play well in the press or in court. Several European Volkswagen engineers found this out in 2017, when they were sent to jail in the US for deliberately falsifying emissions data. It is increasingly hard to keep dodgy practices like that secret. Stop and think. If your rationale wouldn’t look good in court or on the front page of the Daily Mail, then change it. Don’t hope you can keep it under wraps.

If you are interested in the more philosophical side of ethical theory, this free Harvard course by Michael Sandel is a great introduction.

Practical Ethics

As far as I'm concerned, your soul is your own business. I care about your professionalism. I want you to know how to spot problems in advance and manage them so they don't turn into crises. Avoiding catastrophe is better for your users, your company and you.

There are several ethical frameworks to help you make better tech product choices. Below, I discuss 3 of them:

  • The ACM code of ethics.
  • The EthicalOS toolkit.
  • The Doteveryone Consequence Scanning process.

ACM Code of Professional Ethics

The Association for Computing Machinery (ACM) published an updated code of ethical and professional conduct in 2018. It’s designed to serve as a basis for “ethical decision-making” and “remediation when violations occur” (i.e. spotting and fixing your inevitable mistakes).

ACM’s Code is a definition of what behaviour to aim for. Their ethical duties are quite close to “don’t break the law” (at least in Europe). However, they go further. In their view, responsibility is not merely about avoiding prosecution, it is also about doing the right thing: taking professional care to produce high quality, tested and secure systems.

Their principles include making sure that you (and by extension the products you produce):

  • Avoid harm (don’t physically, mentally, socially or financially harm your users or anyone else).
  • Are environmentally sustainable.
  • Are not discriminatory.
  • Are honest (don’t actively lie to or mislead users and certainly don’t commit fraud).
  • Don’t infringe licenses, patents, trademarks or copyright.
  • Respect privacy and confidentiality.

The framework is a fairly short, uncontroversial, and conservative one. It maps closely to obeying the letter AND spirit of the law where your products will be used. The ACM go beyond what is currently legally required in most countries, but I suspect the law will get there at some point.

Speculative Ethics - The Ethical OS Toolkit

On the less practical and more speculative side, the EthicalOS Toolkit is a high-level framework that helps individuals and teams wargame worst-case scenarios for products and think through in advance how those situations could be handled or avoided.

Part 1 of the Toolkit asks developers to think through possible failure modes for 14 potential (somewhat dystopian) products and how the problems might be mitigated. In particular, in each case it asks:

In this situation: "What actions would you take to safeguard privacy, truth, democracy, mental health, civic discourse, equality of opportunity, economic stability, or public safety?”

The answers you come up with might range from “add an alert” to “don’t develop this product at all”. The goal is to gauge the risk and decide whether you need to take an action.

EthicalOS have clearly identified the 8 "good things" listed above as their basis of ethics. There are overlaps with ACM’s list (privacy, truth, safety) but they're not identical. The EthicalOS list feels slightly US-centric to me (“truth, justice and the American way” as Superman might say). If you live in mainland China, democracy is not going to be one of your ethical goods. I foresee "privacy" and "equality of opportunity" could also be at odds in future. In my opinion, if you want a more global definition of good you should take a look at the UN’s global sustainable development goals. Nonetheless, the EthicalOS toolkit's role-playing is an imaginative way to think through ethically tricky questions.

Part 2 of the kit asks questions about your own product to help you anticipate how it could be misused: for example, for propaganda, addiction, crime, or discrimination.

This section of the toolkit is useful, but there is an omission when it comes to one of the most pressing issues of our time: pollution and climate. That raises an interesting point. It’s easy to spend your time worrying about how your product might overthrow world order in a decade’s time, whilst omitting to do easy good like putting your AWS instances in green regions.

Finally, part 3 of the toolkit lists 6 potential strategies for producing more responsible tech. You’ll be pleased to hear that number one is to take a course on it. Others include oaths (which unfortunately don’t appear to have much effect, as we'll discuss in the next post), ethical bug bounties, product monitoring (my personal preference), and practice licenses for developers (I’m dubious about this last one, as software engineering isn’t location-bound like legal, medical or architectural practice).

At the end, there is a set of checklists to help you consider whether you have carefully scanned your product for ethical and thus professional risks.

Agile Ethics - Consequence Scanning 

Consequence Scanning, by the UK think tank Doteveryone (Brown, S. (2019). Consequence Scanning Manual Version 1. London: Doteveryone), defines a way of considering positive and negative implications by asking:

  • What are the intended and unintended consequences of your product? 
  • What are the positive consequences to focus on? 
  • What are the consequences to mitigate? 

The lightweight process slots into existing agile development and is designed “for the early stages of product planning and should be returned to throughout development and maintenance”. It uses guided brainstorming sessions and is an easy way to add more ethical thought into your product management.

Sector Specific Frameworks

The frameworks above are general to any product but there are others being created that are aimed at more specific areas including: AI, data and machine learning (e.g. “Principles of AI” by AI expert Professor Joanna Bryson and the data ethics canvas by the Open Data Institute). We’ll talk more about these in later posts.

Conclusion

In this post, we have reviewed several of the early ethics and responsible technology frameworks out there for developers. We have seen some common themes:

  • The need to consider the potentially harmful consequences of products and features both up-front and throughout the lifetime of the product.
  • The need to look at products from multiple viewpoints (not just the ones in your engineering team).
  • The need to comply with the law and potentially go further.
  • The need to monitor the use of products in the field.

But is ethics only a matter of process, or is it more? In the next post in this series, we’ll look at the role psychology plays in risk management, professional behaviour and decision-making.

(Part 4 of the University of Hertfordshire Tech Ethics Course. << Part 3 | Part 5 >>)

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs & helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and most importantly of all, is the author of the dystopian, hard scifi Panopticon series (Amazon US, Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com
Sponsored by Container Solutions

Photo by Jason Wong on Unsplash

Friday, 24 January 2020

Part 3: Are you a Goodie or a Baddie? What Does Being Ethical Mean?


(Part 3 of the University of Hertfordshire Tech Ethics Course. << Part 2 | Part 4 >>)

Tech ethics isn’t philosophy, it's professional behaviour.

In fact, I'd define ethical behaviour for a technologist as just taking reasonable care to avoid doing harm, which is also the foundation of doing your job professionally. What does that mean in practice?
  • Thinking upfront about how harm might come to a group or individual from your product.
  • Taking reasonable steps to avoid it. 
  • Monitoring your system in production, including user issues, to spot problems you missed.
  • Fixing them.  

Isn’t That Just the Law?

Not necessarily. The law lags behind progress in software (we move very fast). Some of this is against the law (as I described in part 2) and some of it isn't.

Or at least, it isn't yet.

Causing foreseeable physical harm is often a crime. Causing reputational, economic, or emotional damage; inconvenience, reduction in quality of life, or environmental problems, for example, may not be. However, even if you aren't breaking criminal law you might be in breach of civil or contract law.

Are you Responsible?

Most developers think they are not legally or morally responsible for the code they write, but the courts may disagree. “I was just following orders” is not a legal defence. Neither is “I didn’t know that was against the law.” It is your responsibility to check.

Is it Unprofessional to be Ethical?

It may be that the company you work for is a nefarious organisation with a plan for world domination. If you work in a secret lair under an extinct volcano, that might apply to you. If your CEO is a super-villain, bad behaviour may well be part of his business strategy and Dr Evil might consider it unprofessional of you to raise a concern.

However, most businesses are not run by baddies. As well as staying inside the law, they do care about keeping customers happy, retaining staff, and avoiding newspaper scandals. For most of them, ethical breaches are mistakes. The error might be caused by a failure to test, lack of awareness, shortsightedness, misunderstanding, or miscommunication. In fact, ethical problems should be treated as alarm bells for poor decision-making in an organisation.

Your processes to avoid ethical breaches should be the same as your processes to avoid any potentially costly mistake. They are about managing risk to avoid it turning into a crisis. The processes are not there to ensure no error ever happens (that would be impossible) but to make sure issues are spotted and corrected before they do irreparable harm.

Don't Assume that Because You're Paid, You are a Baddie!

Never assume your CEO is an evil genius and you're only paid a salary to avert your eyes from his misdeeds. Where tech is concerned, he's more likely to be an idiot. History is rife with soldiers who followed orders that were never given. Don’t be that person. If you're asked to do something dangerous or harmful, it’s probably a mistake. Raise your concerns immediately. That is your job. Never assume your job is to help with a cover-up.

Why raising issues is good for business:
  • It is a sign that you're being careful about your work. 
  • You might be about to break the law (or be breaking it in some countries) and all businesses want to avoid that.
  • You might get sued.
  • You might get bad PR: people may boycott your products or you might struggle to hire. 
  • Ethical breaches are often a warning sign of dodgy decision-making that needs to be fixed.

Personally, I don’t want to find myself yelling, "I was just following orders!" in court, on the front page of the Daily Mail, or anywhere else. So how do I avoid it?

Avoiding Harm

It’s not unethical to have something go wrong. This is software - things go wrong. It’s only unethical (or unprofessional) if you don’t make reasonable efforts to:
  • avoid it going harmfully wrong
  • spot when bad stuff is happening
  • resolve serious problems when you encounter them.

Think up Front

In your product's design phase, set time aside to do “consequence scanning”. Think through:
  • harms that could result from your product, including by misuse
  • how you would spot if that happened
  • how bad it would be and how to mitigate that if necessary. 
In the next post we’ll talk about some frameworks that exist to help with this.
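As an illustration only (the names and structure here are my own invention, not part of any formal framework), the output of a consequence-scanning session could be captured as structured data that lives in your repo alongside the code, so it can be revisited throughout development:

```python
from dataclasses import dataclass, field

@dataclass
class Consequence:
    """One potential harm identified during a consequence-scanning session."""
    description: str          # what could go wrong, including through misuse
    detection: str            # how we would spot it happening in production
    severity: str             # e.g. "low", "medium", "high"
    mitigation: str = "none"  # what we plan to do about it

@dataclass
class ConsequenceScan:
    feature: str
    consequences: list[Consequence] = field(default_factory=list)

    def high_severity(self) -> list[Consequence]:
        """Items that must be mitigated before release."""
        return [c for c in self.consequences if c.severity == "high"]

# A hypothetical scan for a hypothetical feature.
scan = ConsequenceScan(
    feature="photo auto-tagging",
    consequences=[
        Consequence(
            description="misidentifies people, causing distress",
            detection="user reports and regular spot-check audits",
            severity="high",
            mitigation="let users correct or remove tags",
        ),
    ],
)
assert len(scan.high_severity()) == 1
```

The point is not the particular fields but that the answers are written down, reviewable, and checkable in the same tools you already use for code.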

Follow Best Practices

Where they exist, follow best practices unless there is a very good reason not to. For new stuff like machine learning, best practices are still being formed. If best practices are not yet set in stone in your area:

  • follow what you can
  • be very careful when you stray off that path
  • document your thinking processes and decisions, at least in your issue tracking system, so that other engineers, auditors, and your future self can see why you made the decision you did (there is usually good reason but, trust me, you'll forget what it was).

Report Problems

What should you do if you see something potentially harmful like unpatched systems?

Here are 3 things you probably shouldn’t do:
  • Ignore it.
  • Quit.
  • Immediately become a whistleblower, phone up the Daily Mail then escape on the first plane to Moscow.
Here’s what you should do:
  • Raise it in your issue tracking system with an appropriate severity.
  • Be prepared to argue the case for that severity level.

Test!

The most ethical thing you can ever do is thorough testing. Watch out for edge and missing test cases. A classic mistake is to only test your product on the people in your IT team - they almost certainly don’t reflect all of humanity. If they do, you might be over-staffed on that project.

Field testing is a good idea generally and sometimes unavoidable. Plan for errors to be spotted and handled without harming the user, which takes us to the next section...

Monitor

Industries like aviation, cars or oil and gas have something called a safety culture. They actively search out problems and examine them carefully. They do thorough postmortems and try to make sure any harmful issue only happens once. But don't just track actual failures, track near ones too...

Track Near Misses

The most successful businesses don’t only track active failure, they also monitor “near misses”: problems that never actually materialise and often resolve themselves, but are a warning sign of something bad in future.

In aviation, a rise in near misses between aircraft indicates that a situation is becoming dangerous and the risk of a collision is growing. Early warning from your near-miss reporting lets you take action and avoid a catastrophe!
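A minimal sketch of the idea (the class, window size, and threshold are invented for illustration, not taken from any real monitoring system): count near misses in a sliding time window and flag for investigation when they cluster, rather than waiting for an actual failure:

```python
import time
from collections import deque
from typing import Optional

class NearMissTracker:
    """Flag when near misses cluster, before an actual failure occurs."""

    def __init__(self, window_seconds: float = 3600, threshold: int = 3):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()  # timestamps of recorded near misses

    def record(self, now: Optional[float] = None) -> bool:
        """Record a near miss; return True if the rate warrants investigation."""
        now = time.time() if now is None else now
        self.events.append(now)
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold

tracker = NearMissTracker(window_seconds=3600, threshold=3)
assert tracker.record(now=0) is False    # one near miss: just log it
assert tracker.record(now=100) is False  # two: still below threshold
assert tracker.record(now=200) is True   # three in an hour: investigate
```

In a real system the "record" calls would be wired to whatever counts as a near miss for your product - a caught exception, a retried payment, a moderation escalation - and the alert would go to a human.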

Be Accountable and Auditable

Finally, keep records. Record the decisions you made and why. This can just be in your code management and issue tracking systems. If you are working on machine learning, you need to keep detailed information about your test data and models. 

The reason for this is two-fold:
  • you'll need it for any post-mortems
  • it gives you another chance to spot anything dodgy and act on it.

Trust Yourself and Speak up

If something feels wrong, it probably is and maybe you're the only person who has spotted that. Perhaps you are worrying unnecessarily but ask anyway. The worst that'll happen is you'll learn something!

(Part 3 of the University of Hertfordshire Tech Ethics Course. << Part 2 | Part 4 >>)

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs & helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and most importantly of all, is the author of the dystopian, hard scifi Panopticon series (Amazon US, Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com

Sponsored by Container Solutions




Hero image by the great JD Hancock at jdhancock.com

Friday, 17 January 2020

Part 2: Tech Ethics: The Law's The Floor


(Part 2 of the University of Hertfordshire Tech Ethics Course << Part 1 | Part 3 >>)

The 101 of ethical behaviour is: don’t break the law. That might seem obvious. It's not sufficient, but it is necessary. The laws of each country codify a subset of its ethical rules. If you're breaking them, you're probably acting unethically, so the foundation of responsible tech development is to obey the laws that apply to you.

In this post, we're going to cover some of the regulations you need to follow as a techie.

I'm not a lawyer and I'm not giving legal advice. I am commenting as a layman who has experienced most of these rules in my engineering career. If you need expert advice on any of this stuff, talk to an actual lawyer. You may have one in your firm, but if you don't, your insurer can often help.

Be Warned! I Haven't Covered Everything

Every country and sector has its own rules you need to stick to when building a new tech product or extending an existing one. That's a lot of laws. I couldn't list them even if I knew them all, which I don't (and I'm not in prison yet). The good news is that most of them will never apply to you. However, there are some that come up a lot and that you're likely to encounter.

(Note: for every new project someone usually needs to do a bit of legal research: at least do some searching online and talk to veteran techies in that area).

1. Privacy, Transparency and Security (GDPR & Others)

Many countries have laws about digital privacy, but perhaps the most extensive is the European Union's General Data Protection Regulation (GDPR). It limits what a company or individual can do with the personal information of people in the EU. It also dictates how, and for how long, such data can be stored.

The US state of California has similar privacy rules and there may also be regulations on specific industries or groups that apply to your application. One example is HIPAA, which covers US healthcare information. Another is the COPPA rule on children's data.

As well as privacy, GDPR includes regulation around security. It requires the use of encryption and anonymisation for storing some sensitive personal data. Again, other countries have their own rules, for example the US government's FedRAMP regs.
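As a sketch of what this can look like in code (not legal advice, and the key name and function are invented for illustration): one common building block is pseudonymising direct identifiers with a keyed hash before they go into analytics storage. Note that under GDPR this is pseudonymisation, not anonymisation - with the key, the mapping can be recreated, so the regulation still applies to the output:

```python
import hashlib
import hmac

# A secret key, stored separately from the data (e.g. in a secrets manager).
PSEUDONYM_KEY = b"replace-with-a-real-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).

    This is pseudonymisation, NOT anonymisation: anyone holding the key
    can reproduce the mapping, so GDPR still covers the output.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The same input always maps to the same pseudonym, so records can still
# be joined for analytics without storing the raw email address.
a = pseudonymise("alice@example.com")
assert a == pseudonymise("alice@example.com")
assert a != pseudonymise("bob@example.com")
assert "alice" not in a
```

Whether a technique like this satisfies a particular regulation for your data is exactly the kind of question to put to an actual lawyer.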

GDPR also imposes transparency requirements on uses of data: for example, a "right to explanation" for some algorithmic decisions, particularly those with significant ramifications for the individual, like prison sentence recommendations or credit scores.

The transparency aspect of the GDPR is widely expected to cause legal wrangling in future, because decisions made by deep neural networks are notoriously hard to explain. The UK government's current advice on handling this kind of decision-making is sensible:
  • give individuals information about the processing you do
  • introduce simple ways for them to request human intervention or challenge a decision
  • carry out regular checks to make sure that your systems are working as intended.

2. Patents, Trademarks, Copyright, & Licensing (IP Law)

IP law applies to everything that you didn't produce from scratch yourself. That might be code samples or libraries, written text, music, or images you downloaded from the internet.

Even if you did produce something yourself, you could break IP law by accidentally infringing a patent or trademark. Accidental infringement usually doesn't come with huge penalties but the IP owner could stop you using your materials from then on.

Whenever you use anything you didn't create from scratch yourself, legally you need to confirm your right to do so. That might include licensing either the patent or the copyright. Licenses and trademarks tell you what you can and cannot do with materials and legally you must comply. Even if you have a license, you can't do anything you want. For example, you can't tell people you are the author if you aren't.

All open source code comes with a copyright license that tells you how you can use it. Some licenses, for example Apache 2, are permissive and let you use the code for almost anything, subject to a few conditions such as preserving notices. Others are more restrictive, e.g. GPL, and only let you legally use the code in certain ways.

Even if you did write all your code yourself, you still need to be careful not to deliberately infringe someone else's IP because the penalties for that can be steep. Avoid casually discussing patents or trade secrets with anyone outside your company so you don't learn things you shouldn't know and might put in future products. If a conversation like that starts, subtly excuse yourself ASAP.
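To make the license point concrete, here is a toy sketch (the allowlist, dependency names, and function are invented; real projects feed this from package metadata or an SBOM, and license-scanning tools exist for the job) of checking your dependencies' declared licenses against a policy your legal review has approved:

```python
# Licenses a hypothetical legal/policy review has approved for this product.
ALLOWED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

# In practice this mapping would come from package metadata or an SBOM;
# it is hardcoded here purely for illustration.
dependencies = {
    "useful-lib": "MIT",
    "fast-parser": "Apache-2.0",
    "mystery-widget": "GPL-3.0-only",
}

def check_licenses(deps: dict) -> list:
    """Return the names of dependencies whose license needs human review."""
    return sorted(name for name, lic in deps.items()
                  if lic not in ALLOWED_LICENSES)

flagged = check_licenses(dependencies)
# GPL isn't "bad" - it just obliges you to do things a proprietary
# product may not want to do, so it needs a conscious decision.
assert flagged == ["mystery-widget"]
```

A check like this in CI turns "did anyone read the license?" from a hope into a gate.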

3. Contracts 

Contracts are two-sided commitments that describe the work one company or individual does for another. Contracts ensure the buyer gets what they want and the supplier gets paid for it. They might be between an individual user and the company behind a website, for example, or between a contractor building a custom application and the company who hired them.

A contract is legally binding. If either side fails to do what they agreed, the courts can force them to. Before they sign a contract, most companies ensure they have insurance to cover the cost of either suing the other party for a breach or being sued. That might happen even if you didn't do anything wrong. This is called liability insurance. 

4. Confidentiality

Engineers are frequently asked to keep secret what they are working on or what they learn through work.

Many contracts contain confidentiality clauses, or you might be asked to sign a non-disclosure agreement (NDA). Confidentiality clauses and NDAs are enforced the same way as any contract: through the courts. If you blab, you can be sued.

5. Duty of Care

Duty of care legislation applies to products that may do foreseeable harm. This means physical or psychological harm rather than "pure economic loss", and therefore hasn't generally been applied to software products in the past. However, where software is incorporated into a physical device (a robot, an IoT device, etc.) then liability may apply, because it could physically hurt someone.

6. Accessibility

Many countries have laws about access to websites for disabled users. If your site or product is inaccessible, there is a risk of being sued, particularly if you have users in the United States. Some government bids require compliance with accessibility standards like Section 508 in the US or the EU's web accessibility standards. Being accessible also helps with your Google SEO ranking.

7. Other Stuff to Comply With

Although GDPR, IP law, and contracts are probably the rules you'll encounter most often in the tech industry, they aren't the only ones.
- In every country, there are regulations on tax (VAT or other sales taxes, customs duties, etc.). Those rules affect product reporting.
- Your product might come under export laws, for example the US rules on exporting so-called dual-use technology: items classified as potentially military. Some of those laws apply to fairly innocuous-seeming stuff like publicly available SSL libraries. For cryptographic libraries in particular, double-check before including them in your products. Don't panic - even if you end up using dual-use tech in your application, it normally just means some extra paperwork.
- There are sectors where the software is additionally regulated, for example, finance products and strict anti-money laundering (AML) rules.

Cybercrime or the Computer Misuse Act

I've talked about rules that affect how you write products, but there are also ones about how you use them. The laws around computer misuse are fairly draconian. For example, gaining unauthorised access to a computer, even if you do no harm, is a criminal offence in the UK with a penalty of up to 2 years in prison!

Other forms of cybercrime include online trolling, bullying and stalking, which are quite common. You may spot them being committed using your company's computer equipment. It's often an inside job: one of your employees or someone who's leaving, so you might have to act to stop or report it. 

Laws to Come?

The EU has plans to add new laws around AI and public surveillance, which will probably appear over the next few years. Or then again, perhaps not.

There are a Lot of Laws

You always need to do some reading around and checking in your field. What's legal and what's not changes all the time and it's not necessarily obvious. Research is your friend!

In the next post, we'll look at why complying with the law is not always enough - it's just the minimum.

<< Read Part 1 | Read Part 3 >>

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs & helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and most importantly of all, is the author of the dystopian, hard scifi Panopticon series (Amazon US, Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com


Photo by King's Church International on Unsplash



Monday, 13 January 2020

Part 1: What Next for Tech Ethics? A New Course


(Part 1 (Intro) to the University of Hertfordshire Tech Ethics Course. Part 2 >>)

In 2018, a group of keen techies ran a conference on technical ethics in London. The first spin-off from that event was the sustainable servers 2024 petition. We are now happy to announce the next. In 2020, we will be combining academic and industrial work on tech ethics to create practical resources to help developers make more informed choices about what to build, how to build it, and how to operate it safely for users and non-users alike (aka the rest of society).

We’ll be writing and delivering an open source “Responsible Technology” module for the University of Hertfordshire’s Computer Science MSc. The project is supported by the University and sponsored by Container Solutions.

What’s Coming?

We’re searching for the best work out there on practical tech ethics for the course and we'll build an open source repo of all our written and gathered materials.

We’ll also publish a series of blog posts on:
  • What is tech ethics and why is it a big deal?
  • “The law’s the floor.” But what’s legal and what’s not?
  • What does society want? How to keep your eye on current and up-and-coming priorities: the climate and ecosystem, privacy, fairness, equality, and health.
  • What’s out there to help? Resources from industry and academia.
  • The psychology of responsibility. Why do people do bad things? (Including Milgram, Asch, psychological safety, and whether codes of ethics really work).
  • Testing, monitoring and reporting.
  • Deeper dives into areas like: energy use, AI, Big Data, data bias, cyberwarfare, and the digital Geneva convention.
  • The history of accessibility and the UI, which is a fascinating example of an ethical success story that stopped working.
In the first blog post in the series, we'll look at why "the law's the floor".

(Part 1 (Intro) to the University of Hertfordshire Tech Ethics Course. Part 2 >>)

About the Author

Anne Currie is tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs & helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and most importantly of all, is the author of the dystopian, hard scifi Panopticon series (Amazon US, Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com. 



Saturday, 27 July 2019

Why We Quit SquareSpace & Moved to Blogger

When we set up the Coed:Ethics conference back in 2018, we built a shiny new website on SquareSpace and it looked fantastic. Unfortunately, great as the site was, we realised that ethically we couldn't stay there.

The tech industry is one of the fastest-growing climate polluters because servers require a lot of power to run and we run a lot of servers.

So, we decided to switch our hosting to someone who had a stated position on the sustainability of their servers. SquareSpace had nothing about a sustainability commitment on their website and didn't respond to help queries on the topic. Perhaps they are sustainable? Who knows? In the absence of any data we felt ethically compelled to move.

As a result, we have relocated to Blogger on the Google cloud, which is currently the most sustainable large-scale hosting option. Does the site look quite as shiny? No. Is that additional shine worth the planet? In our opinion, it isn't.