Wednesday 9 December 2020

Part 9 - The Art of Changing the World - Persuasion



(Part 9 of the University of Hertfordshire Tech Ethics Course. << Part 8)


Lean In

One of my favourite bosses over the years only ever had one piece of advice for me, “Put a tin hat on!” What he meant was I had to defend what I knew to be right even in the face of opposition. 

This doesn’t mean shouting. Although to be fair, I think he meant shouting - he was of a different generation. In fact, yelling is counterproductive particularly if you’re a woman or junior person. It makes you look unstable. What fighting your corner often means in practice is you need to be persistent and try some persuasion.

The soft skills to ensure the right thing happens are a fundamental part of being a professional. Realising something is against the law or might harm your users, and feeling bad about it, counts for nothing if you go ahead and do it anyway. Perhaps because you couldn’t convince your team not to or, worse, you assumed you wouldn’t be able to persuade them, so didn’t try.

How Can You Convince People?

There is plenty of material online about how to make a potentially tough sell. In her book “Lean In”, Sheryl Sandberg, the COO of Facebook, describes her method for getting things done as being “relentlessly pleasant”. Note there are two parts to this: being relentless (i.e. persistent), and being nice about it. Her advice is for women, but I'd direct it at anyone who isn't built like a Marvel superhero.

UK psychology company MindGym points out you need to keep your goal in mind at all times and, first and foremost, set out your rationale. Your emotions need to be held back until after that.

What does that mean in practice? Keeping calm and giving yourself time to think and prepare.

When it comes to defending an ethical or professional issue, step back. Take a breath. Make yourself a cup of tea. What is it you want to achieve?

  • To stop your product breaking the law?
  • To stop your product from potentially harming users?
  • To stop your product breaking norms for best practice, such as being insufficiently secure or tested?
  • To stop your products and your company looking bad on TV or the front page of the Daily Mail?

Keep that in mind throughout! 

Communication is 90% preparation. Consider your argument for why a bad thing might happen because of what your team is currently doing. Write your rationale down as bullet points. Leave it a few hours then reread it. Ideally, ask someone else to read it and give you feedback. 

If possible, identify a professional authority that backs up your argument. 
  • The best is the law. Is there a risk that what you are doing breaks it? Then say so in your argument. This includes duty of care to employees, customers, and users. What about GDPR, the Equality Act, or HR rules?
  • The next most effective authority is your business’ contracts. Are you at risk of breaking contractual service level agreements (SLAs)?
  • The third best is your marketing team. Would they agree there’s a risk of looking bad and losing business?
  • The fourth best is an ethical and professional framework such as the ACM code of professional ethics. Point out that infringing professional standards looks bad from a customer or recruitment perspective.

How to Present

Ideally, you will get the key people you need to persuade to individually agree to your argument in advance. Convincing a whole group to change their collective mind is far harder than convincing a set of individuals. Tackling them one by one will take longer, but it should pay off. 

Try to schedule a 1-1 meeting or, even better, an informal chat/coffee with each to discuss the issue. What should you say? See below for tips on persuasion techniques.  

What If You Are Not Completely Sure if It’s Wrong?

It might seem the only way to convince someone to change course is to give them a cast-iron, reasoned argument that you are 100% certain of.

However, that doesn’t always help as much as you’d think. Often, even if you are personally certain, it’s useful to suggest you aren’t sure something is wrong. That can be an excellent way to deploy one of the most effective persuasive techniques: flattery + asking for help.

Ask!

One of the most effective forms of persuasion is to genuinely ask for help.
  • “I’m worried this might break the law in this way. What do you think?”
  • “I think there’s a risk there might be a security hole here, but I’m not sure what to do about it. What would you do?”
  • “I can see downsides to this but perhaps I don’t fully understand it. What do you think?” 
Playing the dummy card can be a helpful way to get folk involved in the argument on your side. Of course, their knee-jerk response may be, “It's all fine!” You need them to put a bit more thinking into it than that. You might need to be persistent in pointing out why you think there’s an issue. Keep calm, keep smiling and keep on.

For a junior person, it is more effective to combine a light touch with persistence than to go in all guns blazing. Avoid an explosive denouement because you’ll probably lose. Let your idea sink in for a range of people. Don’t put them on the defensive by being too aggressive. Highlight where you agree with them. 

Make sure you thank them profusely even if they just tell you something you already know. Remember: relentless and pleasant.

A Bit of Flattery Seldom Goes Amiss

Don’t overdo it, but do it. A subtle compliment is a good precursor to asking.
  • “I know you’re an expert in this area, and I was wondering if I could ask your opinion?”
  • “You’ve had more experience with this kind of thing than me, so...”

This can often be combined with inspiring them with a version of themselves they then want to live up to. Don't lie. Be truthful and positive.
  • “I know you are a highly professional person and I trust your judgment. What do you think we should do?”
  • “You’re someone who takes ethical stands. What do you think we should do here?”

Sacrifice the Credit

It’s amazing what you can achieve if you don’t take the credit. Do you want to be right, be seen to be right, or want the right thing to happen? 

You may need to sacrifice the limelight to the other people you roped in to help. The payoff for them is they’ll feel like good people and they get the plaudits. The payoff to you is you get the result.

Buy Yourself Time

Everything above takes time. Persuasion isn’t quick and you need to make sure you have enough time to do it.

In the heat of the moment, it’s very easy to say a panicked “yes” to something you suspect is wrong. You need a way to avoid reflexive agreement. Give yourself at least a few hours to think things through.

An exercise I used to use with engineers new to consulting with customers was to prepare a list of “stalling lines” with them. They were instructed to say them if a customer (or their boss!) asked them something they didn’t know the answer to or to do something they felt might be wrong. The idea was to give them time to think it through or consult with someone else before they committed to something that might be a mistake. For example:
  • “That is an excellent question. Hopefully, I can do that but I have something I need to check first. Can I get back to you tomorrow?”
  • “Great idea! Let me look into that and I’ll let you have a definitive answer by close of business.”

Prepare some default “stall lines” for yourself. Make them positive. They are incredibly useful. Once you’ve bought yourself time you can think through your argument or talk it over with someone. 

Make sure you always have something to say rather than just squeaking, “Yes!”

Courageous Conversations

It’s a useful rule of thumb that if there is a conversation you don’t want to have, it’s the conversation you should be having. Be courageous. It’s usually a heck of a lot easier than you expected and a huge weight off your shoulders. 

And remember Sheryl Sandberg’s advice. Don’t yell. Be relentlessly pleasant.

Is it Easier When You’re the Boss?

Yes and no. If you order people to do things “or else” you’ll usually get a shoddy outcome or none at all. 

As you progress in your career, persistence, reason, asking for help, judicious flattery, and not hogging all the credit are often still the best tools in your toolbox.

But That’s Not How My Boss Does It!

Some people don’t persuade. They order. That might be from a position of organisational authority (subtext “do it or you're fired”). Tall or physically powerful people often issue commands as well. Some folk use their natural aggression to do it. If you only give orders, you’ve probably been doing so since childhood and teachers may even have described you as a leader. You probably are. A bad one.

Orders don’t require or encourage thought, reason, or collaboration and they're not the only way to influence events. If you want to change the status quo, but you don’t have the power to issue orders, don’t panic. You can do it. It’ll take longer and be more effort, but the result will be far, far better. 


(Part 9 of the University of Hertfordshire Tech Ethics Course. << Part 8)

Friday 6 March 2020

Part 8 - Open Data, Social Credit Scoring and Accessibility


(Part 8 of the University of Hertfordshire Tech Ethics Course. << Part 7 | Part 9 >>)

Let’s continue our brief look at some of the arguments around contentious areas in tech. In this post, we'll consider open data and code, social scoring, and accessibility.

Open Code

When code is released by the copyright holder under a license that means it can be looked at, updated, or tried by anyone, we call that Open Source Software (OSS) or Free Software. Some licenses impose restrictions on the code's actual use (e.g. copyleft licenses) but some don't (permissive licenses).

However, recently the seemingly unimpeachable concept of open code has been ethically challenged:

  • Several OSS communities have complained their software is being used by commercial businesses in a way that negatively impacts them. The subject of the accusation is often AWS, and the claim is that AWS takes OSS projects and changes them, drawing users away from the original community. AWS counter-argues they couldn’t use the code commercially without changing it, and they contribute back to the codebase. There are strong arguments (and feelings) on both sides.
  • In another ethical OSS debate, some project contributors are getting angry that the code they write for free is being used by organisations they disagree with (for example, ICE). This is a particularly difficult argument. How do you feel about personal responsibility? Does it lie with the creator of a product, or the user? Should you impose your own ethical standards on your code's users?

The takeaway for engineers working on open source projects is to read the license your code will be released under and think about it. Some licenses mean anyone can use what you produce and you might have little say about it. How you feel about that might depend on whether you reckon the creator is responsible for their code's use, or the user. That is something philosophers are still arguing over.

Open Data

The concept of Open Data is similar to that of open code. Open Data enthusiasts believe data should be available to all without copyrights or patents getting in the way of use. The idea is that if data is freely available, more scientific and social progress will be made and that will help everyone.

When we’re discussing government data, opening it is fairly uncontroversial. When the data was gathered by private companies, things get more difficult. They argue the data they record is a valuable commercial asset and should only be available to them. One counter-argument might be that the public should, by rights, have access to data gathered in public (e.g. a camera on a public street). Another is that, as a private individual, you should have access to data gathered from you (e.g. recordings made of your heartbeat by a medical device).

Further complicating matters is another argument made against releasing data, which is that it might contain information individuals want to keep private.

Again, there are arguments to be made on either side. For me personally, the value of open data outweighs the reasons against it.

Keeping Score

Social scores are ratings based on group feedback, like thumbs ups on Facebook or likes on Twitter. Commercial sites allow you to rate everyone from tradesmen to AirBnB guests. Social scoring is generally well-liked by society - although there are always some concerns over fake reviews, which aim to distort the scores and make them less valid or useful.

More controversial, however, are scoring applications that are either devised by or heavily linked to governments, and may use surveillance data rather than voluntary ratings. The most advanced of these apps make up China's social credit system: a form of scoring that affects citizens’ lives in a variety of ways, from public shaming of jaywalkers to whether you can use public transport.

There isn't one single social credit score in China. There are a whole range: some are generated by businesses, some by state government, and most are tied to the government in some way. According to Samantha Hoffman, fellow at the Australian Strategic Policy Institute, the basis of credit scores varies by region. "It's according to which place you're in, because they have their own catalogs," she says. “[Bad behaviour] can range from not paying fines when you're deemed fully able to, misbehaving on a train, standing up a taxi, or driving through a red light.”

In one example, residents of the Chinese city of Rongcheng start with 1,000 civic points and the authorities can deduct them for antisocial activity, while points are added for social goods like donating to charity.

In the West, the idea that your government is constantly watching you and might even publicly shame you or stop you taking the train seems dystopian. Many Hong Kong residents feel that way too and fear of Chinese government surveillance may have played a part in 2019’s riots. However, in mainland China the social credit system is reasonably popular and it has certainly helped them in their response to the coronavirus outbreak.

Accessibility

As techies, we generally accept that the right thing to do when writing a product is to make it accessible to users who have poor eyesight, hearing, dexterity or any other challenge. As we covered in part 2 (“The Law’s the Floor”), this is a legal requirement for some software, but not all.

Until a few years ago, accessibility was uncontroversial and the W3C HTML standards ensured that, by default, most well-engineered websites could be read by accessibility tools like screenreaders. However, things have changed. HTML is well defined and therefore accessible, but some CSS frameworks are not. That makes it very easy to write modern websites that look great but are unusable by a lot of users.

The takeaway for engineers is to consider website accessibility when choosing a UI framework.
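
As a rough illustration of the kind of check you can automate, here is a minimal sketch in Python (using the requests and BeautifulSoup libraries) that flags two common accessibility failures: images without alt text and form inputs without an associated label. It is nowhere near a full WCAG audit - treat it as a starting point, and the example URL as a placeholder.

```python
# A minimal accessibility smoke test - illustrative only, not a WCAG audit.
import requests
from bs4 import BeautifulSoup

def basic_accessibility_check(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    issues = []
    # Images should carry alt text for screen readers.
    for img in soup.find_all("img"):
        if not img.get("alt"):
            issues.append(f"image {img.get('src')} has no alt text")
    # Form inputs should be tied to a <label> element.
    for inp in soup.find_all("input"):
        input_id = inp.get("id")
        if not input_id or not soup.find("label", attrs={"for": input_id}):
            issues.append("an <input> has no associated <label>")
    return issues

if __name__ == "__main__":
    for issue in basic_accessibility_check("https://example.com"):
        print(issue)
```

Even a crude script like this, run in CI, catches regressions that are easy to miss when a CSS framework generates the markup for you.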

What next?

In this article we have looked at the ethics of open data and code, social scoring, and accessibility. Most have arguments on both sides - very few ethical debates are black and white. The important thing is to carefully consider both sides and make your own judgment on what you think is right - it might not be your instant reaction. 

(Part 8 of the University of Hertfordshire Tech Ethics Course. << Part 7 | Part 9 >>)

Image by https://unsplash.com/@shivelycreative


Friday 28 February 2020

Part 7 - More Tech Ethics Issues: Surveillance, Anthropomorphism, Attention


(Part 7 of the University of Hertfordshire Tech Ethics Course. << Part 6 | Part 8 >>)

Surveillance

“Have people inadvertently given away data that will harm themselves, others or society in general?”

A 2019 report from tech research company Comparitech ranked the most surveilled cities in the world. Unsurprisingly, China won eight of the top 10 spots but coming in at number six, and sporting around 630,000 cameras, was London. The British capital has approximately one recording device for every 14 inhabitants. According to cctv.co.uk: “Anyone going about their business in London will be caught on camera around 300 times per day.”

In the past decade, China’s internal security spending (much of which is widely believed to be spent on surveillance tech) increased tenfold. It now significantly outstrips their external defence budget. Closer to home, by 2025 cctv.co.uk expects the number of cameras in London will top one million. Pair all of this data with the growing sophistication of facial recognition and we have a new world.

The arguments for automated surveillance using facial recognition are usually made by governments or multinationals:

  • Improved security (a common argument in the UK).
  • Better social cohesion, control, and trust; especially when coupled with social scoring (China).
  • More convenient grocery shopping (Amazon in the US).

The last might seem a rather trivial benefit, but it appears popular.


The arguments against tend to be made by academics and the EU:

  • Facial recognition technology isn’t good enough for what it's being used for yet, which will lead to miscarriages of justice.
  • It is an invasion of privacy.
  • It is a threat to personal liberty.
  • Although the data is recorded in public, it is not publicly available. When it remains in the hands of MNCs, we're all working for them for nothing.

As engineers, we should be aware that the EU considered banning public surveillance in 2020. It has not yet done so, but may do in future. Even without a ban, the legal floor remains that recorded video data may fall under a country's rules for data protection.

Anthropomorphism

“Who is human and who only appears (masquerades) as human? Unless we can individually and collectively be certain of the answer to this question, we face what is, in my view, the most serious problem possible” - Philip K Dick

Anthropomorphism is defined as “the attribution of human characteristics or behaviour to a god, animal, or object.” Humans are very good at it.

We don’t yet have chatbots that can pass the Turing test (which is about the imitation of intelligence, not AGI. Essentially, it's anthropomorphism). In most cases, however, they don’t need to be all that convincing to fool us.

We aren’t expecting the “person” we’re talking to over webchat support to be anything other than a human, and the last thing we're going to ask is, “Can you eat a chair?” (which, according to Steve Worswick, winner of the 2019 Loebner prize for the world's most convincing human emulation, is the kind of question often asked to trip up chatbots).

To be fooled into believing a chatbot is a human is only inadvertent anthropomorphism - it's being deliberately misled. True anthropomorphism requires us to be convinced something we know is a machine is as intelligent, caring, and compassionate as us. That is sadly easy to do when it has a human-looking form.

Ben Goertzel, the creator of the performance chatbot “Sophia”, actively aims to achieve this. “For most of my career as a researcher,” he said, “people believed that it was hopeless, that we’d never achieve human-level AI. Now, half the public thinks we’re already there.” That's hardly surprising, given that's what he tells them.

In reality, we are a long way off artificial general intelligence (which we can't currently even define). If we were closer, we'd have some ethical discussions on that point but for the moment it is largely an irrelevant distraction. We'll instead concentrate on anthropomorphism as it currently applies in tech.

The main arguments in favour are:

  • Anthropomorphic robots are a cheap way to provide (the imitation of) care, love, and companionship to people. 
  • It’s a cheaper way to provide customer service (and recent research suggests that’s more effective if people believe they are talking to a human rather than merely a chatbot).

The arguments against human emulation are:

  • It’s a lie.
  • It undermines real relationships (which are more challenging than talking to a robot that has no reciprocal needs).
  • Most dangerously, anthropomorphism leads to a misleading impression of competence and predictability. It can cause users to over-attribute accuracy and safety to “AI” algorithms.

As an engineer, it isn’t against the law for products to pretend to be human, but you must consider the ethical and safety implications.

Attention

Where does your attention go? In our attention economy, that matters. Marketing firms don’t only pay for clicks, they often pay for impressions (the number of people who see an ad on their screen). That means old school media and new social media firms want to keep you online, with ads in front of you, for as long as possible. Some of them are very good at that. Is it a problem?

On the positive side, media is entertainment. If people want to look at it, surely that’s a good thing - it's the whole point.

It’s your choice how you spend your time. Arguments about “amusing ourselves to death” (the phrase is media critic Neil Postman's) were made about TV in the last century and novels before that, both of which may have done society good by encouraging empathy and social cohesion. Modern social media causes people to become more personally and viscerally involved in the issues of the day and that has upsides and downsides, but should lead to a more engaged populace in the longer term. Surely that's a good thing?

The counter-argument is that modern media is more deliberately compulsive than TV or books and is significantly more superficial. That it is reducing our attention spans and removing nuance, leading to anxiety (or possibly just correlated with it), increasing polarization, and reducing the quality of social discourse and day-to-day family life. For employers, there is also a fear it may be reducing productivity by distracting the workforce with trivial Facebook updates and pointless clickbait when they should be getting on with their jobs.

Ten years ago, 7% of the US population used one or more social networking sites. Now that figure has increased to 65%, and the global average time spent daily on social media is 2 hours 23 minutes. Note that's still less than TV (3 hours 35 minutes in the US).

To a certain extent, social media usage is a matter of choice. Unlike smoking, it hasn’t been proven to cause harm. However, time is a precious commodity and social media is designed to be addictive. One potential ethical approach might be to allow users to audit themselves (easily find out how much time they have spent on your app), enabling them to make informed choices.
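
As a sketch of that “audit yourself” idea, the snippet below simply records session lengths so an app can show each user their own daily total. The names here (Session, show_daily_total) are illustrative, not from any real library.

```python
# Illustrative sketch: track time spent in the app and surface it to the user.
import time
from collections import defaultdict
from datetime import date

daily_usage = defaultdict(float)  # maps a date to seconds spent in the app

class Session:
    """Context manager that adds the session's length to today's total."""
    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, *exc):
        daily_usage[date.today()] += time.monotonic() - self.start

def show_daily_total():
    minutes = daily_usage[date.today()] / 60
    print(f"You have spent {minutes:.0f} minutes here today.")

# Usage: wrap each user session, then surface the total in a settings screen.
with Session():
    time.sleep(1)  # stands in for real user activity
show_daily_total()
```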

In this post we have looked at surveillance, anthropomorphism, and attention. In the next one we'll consider: open v closed code and data; social scoring; accessibility and exclusion.

(Part 7 of the University of Hertfordshire Tech Ethics Course. << Part 6 | Part 8 >>)

Thursday 20 February 2020

Part 6 - The Tech Ethical Issues To Talk About


(Part 6 of the University of Hertfordshire Tech Ethics Course. << Part 5 | Part 7 >>)

In our last few posts, we have discussed tech ethics and responsible technology in general terms and left it to the practitioner (you) to decide how to apply it to your own future products. In the next posts, we are going to briefly look at some specific areas that get a great deal of press coverage. Some get more than others - ethics shouldn’t be subject to fashion, but inevitably it is.

In each case, I am not going to define the rights and wrongs - other than pointing out where the law already does so - but I will try to outline some of the main arguments on both sides, and where the ethical floor might be, no matter which side of the argument you come down on.

Over the next few blog posts I’ll cover:

  • Energy use in the tech sector.
  • AI and Big Data.
  • Cyberwarfare, propaganda and killer robots.
  • Surveillance.
  • Anthropomorphism.
  • Attention.
  • Social Media influence.
  • Open v closed code and data.
  • The role of social scoring and civil order (mass surveillance apps).
  • Accessibility.
  • Exclusion.
  • Privacy.
  • Security.
  • Future of trust.
  • Changing behaviours and social norms.

I’ll also discuss different definitions of social good (e.g. individualistic vs community) which vary from country to country and person to person.

That’s a lot! Inevitably, it will only be an overview.

Energy Use

Tech is one of the most successful and fastest-growing industries worldwide, and it runs on electricity. That makes it one of the fastest-growing users of fossil fuels.

The UK’s electricity was just over 40% hydrocarbon-generated in 2019. Globally, fossil fuels generated ~65% of electricity in 2017. Data centres alone are currently estimated to use 2% of the world’s electricity, and if you add in devices, the percentage gets much higher (according to Greenpeace, 10-20%). Machine Learning is particularly energy intensive.

Does this make the tech industry a force for good or evil when it comes to climate change?

On one hand, we could assert that communications tech cuts down on travel (very green); technology often increases efficiency (green again); and there are societal benefits to tech that make it worth some climate cost.

On the other, we might say tech is electricity-powered and there are better, more sustainable ways to generate that than by burning fossil fuels. Many people argue that the tech industry is rich and powerful, and therefore the right thing is for the industry to lead the way in using clean electricity.

Those views are not conflicting.

As engineers, our responsibility is to stay informed; make active choices; and pay attention to our energy usage. For example, it is vastly cleaner to host instances in AWS’s Dublin region (100% renewable or offset) than in US East (only 50% offset). Google Cloud and Azure are both 100% offset everywhere. This information is usually available on cloud providers’ sustainability web pages. Read it. If data isn’t available for your provider, that is not a good sign. Ask for it.
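
To make the point that region is an explicit, engineer-controlled choice, here is a minimal sketch assuming the AWS SDK for Python (boto3) and configured credentials. The specific regions are taken from the example above; check your provider's sustainability pages for current figures.

```python
# Illustrative sketch: pinning workloads to a greener region is a one-line choice.
import boto3

# eu-west-1 is AWS's Dublin region; us-east-1 is Northern Virginia (US East).
ec2 = boto3.client("ec2", region_name="eu-west-1")

# Any resource created or queried through this client stays in the chosen region.
response = ec2.describe_instances()
instance_count = sum(len(r["Instances"]) for r in response["Reservations"])
print(f"Instances running in eu-west-1: {instance_count}")
```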

AI, Machine Learning and Big Data

I’m not talking about general AI and whether to build Skynet here. That’s a rather long way off. What I am focusing on is data analytics, and the automation of physical and intellectual tasks.

Data Analysis 

When we discuss Machine Learning (ML), what we're usually talking about is machine-enhanced statistical analysis of existing data sets. Sometimes that analysis uses something like deep learning, but surprisingly often it is still just a SQL query!

ML is an astonishingly useful tool for humanity. Using digital techniques, huge pools of high quality, often high density, data can be analysed. That information couldn’t be processed manually in any realistic timeframe.

For example, medical, astronomical, agricultural or other scientific photographs can be scanned and automatically studied, potentially unearthing radical new hypotheses in those fields. Similarly, huge quantities of public domain text data are already being analysed, leading to breakthroughs in automatic translation. Major leaps are being made in medicine by ML.

But it's not all good news. There are also significant ethical concerns about ML:

  • Is the source data accurate, or does it contain false information? Historic data may include beliefs that are incorrect but may have been, or continue to be, widely held. Products based on such biased data might be unfair or cause unlawful discrimination.
  • Has the data been sourced in a responsible manner or have people inadvertently given away information that will harm themselves, others or society in general?
  • Is the analysis bugged in a way that is hard to detect? Has it been sufficiently tested?
  • How do we handle false positives or negatives? These will happen even if there are no bugs because that is how statistics works. 
  • Does restricted access to the source data lead to monopolies that are not in the public interest?

The EU’s GDPR legislation attempts to address some of these concerns via its transparency and right of challenge rules.

As engineers, our responsibility is to obey the law; check the provenance of our data; be aware that data quality impacts every conclusion we reach; test thoroughly; and rigorously document and account for all our decisions.

We also need to understand that error is baked into statistics: data = model + error, i.e. some error will always happen. The inevitable mistakes therefore need to be handled compassionately and thoroughly accounted for.
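
To make that concrete, here is a minimal sketch of quantifying those inevitable errors: measure false positive and false negative rates on a held-out test set, so the team can decide up front how each kind of mistake will be handled. The labels are toy data and scikit-learn is assumed.

```python
# Illustrative sketch: error is part of the model, so measure it and plan for it.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 1]   # ground-truth labels (toy data)
y_pred = [0, 1, 1, 0, 0, 1, 0, 1]   # model predictions (toy data)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"False positive rate: {fp / (fp + tn):.2f}")  # people wrongly flagged
print(f"False negative rate: {fn / (fn + tp):.2f}")  # people wrongly cleared
```

Knowing these rates before launch forces the conversation about appeal processes and human review, rather than leaving the people affected to discover the error rate for you.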

Automation

One of the common uses of AI or machine learning is in job automation. Again, this is neither good nor bad but has benefits and risks.

Task automation means many dull, dangerous, or rote tasks don't need to be performed by humans any more. That increases productivity and accuracy, and reduces cost. One example is self-driving cars, which are anticipated to reduce congestion and accidents. That’s all good.

So, what are the arguments against automation?

One concern is that where humans are directly controlled by algorithms, the result can be heartless. For example, where gig economy workers are given automatically calculated and distributed work schedules, they sometimes allow no time for family life or illness. Let’s take the real life case of the Kronos scheduling software. In 2014, The New York Times revealed the way it optimised for efficiency screwed up the lives of employees. Presumably the developers hadn’t intended that, they just hadn’t foreseen the problems and had no existing guidelines to work from.

Of course, humans can also be cruel, but here the responsibility for mercy lies with the software developer. He or she has to code it in. The risk is, they might not (probably won't?) have a good understanding of the human factors in the situation. If they mess it up, they could make people's lives hell.

Automation can also lead to significant labour disruption. For example, Uber's stated business plan is to replace all its human drivers with those lovely self-driving cars. The oil industry has also introduced significant automation to oil wells, leading to a huge fall in the number of human employees.

Finally, automation directly links business productivity to capital (money to spend on robots or software), which advantages those who already have capital. That can lead to wealth concentration, which may not be in the public interest.

As engineers, the last two points are probably beyond our scope, but we do need to consider how our code is written and tested so it doesn’t harm people who are controlled by it and provides mechanisms for problems to be detected and reported. You may also want to consider whether you are happy with the existence of the product you’re building. That is your personal choice.

The Future of Warfare

Even if you don’t go into the defence industry, you need to be aware of the direction warfare is headed in, because the wars of the future will not only be fought with physical weaponry - they may be fought with, and through, your software.

Killer Robots

The most obvious ethical dilemma around tech and war today is the use of killer robots.

Remotely controlled, unmanned aerial vehicles (UAVs), aka drones, are already widely used on the battlefield, particularly in the Middle East. Also under development are machines that can target and shoot without human intervention: so-called killer robots (an accurate, if literally loaded, term).

The main argument in favour of these is they could be more accurate and reduce battlefield loss of life.

The primary argument against them is that the technology is still not good enough to use anywhere near civilian populations. They often kill the wrong person and targeting mistakes are frequently swept under the carpet by the military, rather than properly addressed. Note that the same arguments are also applied to UAVs with human triggers, where the automatically-generated hit list may contain mistakes (see AI and Machine learning above).

More philosophical reasons against autonomous weapons centre around whether humans should ever be killed automatically. You might argue landmines do it, but to kill that way with a landmine is against the Geneva and UN conventions - most of us have already decided that’s wrong.

Another argument is that killer robots make warfare cheaper (once the tech has been created) and therefore more of it can be waged. Whether that is a "for" or "against" depends on your viewpoint and current context.

Cyberwarfare

Cyberwarfare is a new weapons frontier. In Ukraine in 2015, the power grid was hacked and brought down by a sophisticated cyber attack. From 2005 to 2010, the US's Stuxnet virus attacked Iran's uranium enrichment program. In the first case, the target was the control code for a power facility. In the second, it was possibly every Windows machine in the world (it only triggered if the PC was in an Iranian nuclear plant).

In future, the target could be your system. It is hard to write code or support systems that are proof against a state actor, but as an engineer it is vital your systems are as resilient as you can reasonably make them. Don’t get taken down and cause the deaths of thousands because you didn’t apply a security patch.
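
One small, concrete defence is to make patch levels visible and enforced. The sketch below fails a CI job if any dependency is older than its minimum patched version; the package names and version numbers are illustrative placeholders, not real advisories, and in practice you would also run a proper vulnerability scanner.

```python
# Illustrative sketch: block the build if a dependency is below a patched version.
import sys
from importlib.metadata import version, PackageNotFoundError

MINIMUM_PATCHED = {
    "requests": "2.31.0",   # illustrative minimum versions, not real advisories
    "urllib3": "2.0.7",
}

def parse(v):
    # Good enough for the simple "major.minor.patch" strings used in this sketch.
    return tuple(int(part) for part in v.split(".")[:3])

failed = False
for package, minimum in MINIMUM_PATCHED.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        continue  # not installed, nothing to patch
    if parse(installed) < parse(minimum):
        print(f"{package} {installed} is below minimum patched version {minimum}")
        failed = True

sys.exit(1 if failed else 0)
```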

Propaganda and civil disorder

“Destabilizing an adversary society by creating conflict in it and creating doubt, uncertainty, distrust in institutions” - Keir Giles, senior consulting fellow on Russia at Chatham House.

Creating killer robots is expensive up-front. A lower Capex alternative is propaganda: eroding trust in a government using targeted advertising, misinformation, deep fakes or just fake news. The US may even be using popular games.

As an engineer, it is your responsibility to consider if your new social media platform, or product (e.g. a game), or tool (like a video editor) could be used as a weapon of destabilisation and how you would detect that and stop it.

What Else?

In this post, we have very briefly covered some of the ethical issues around climate change (energy use), AI and Machine Learning, and cyberwarfare. It is part of being a professional to weigh up these benefits and risks.

In the next post in this series, we’ll look at surveillance, anthropomorphism and attention…

(Part 6 of the University of Hertfordshire Tech Ethics Course. << Part 5 | Part 7 >>)


Wednesday 12 February 2020

Part 5 - Why do Humans do Bad Things?


(Part 5 of the University of Hertfordshire Tech Ethics Course. << Part 4 | Part 6 >>)

People do bad things because they’re evil. If you’re a good person, you’ll never do anything wrong.

Hurray!! You can stop reading here.

Hang on a Minute!

Unfortunately, as we discussed in the last article, humans don’t appear to work like that. The study of social psychology suggests our behaviour is highly influenced by our environment. Your individual (usually good) nature is less critical than you might hope.

Most of us want to be ethical. This post is about what psychology tells us stands in our way, and what we can do about that. 

I’m a technologist not a psychologist, so these are mostly the judgments and investigations of my colleague and co-author, the registered psychologist Andrea Dobson. Many thanks Andrea!

Obedience

“More hideous crimes have been committed in the name of obedience than in the name of rebellion.” - C.P. Snow

After the second world war, psychologists started looking at why seemingly-normal people could do very bad things. The trigger was the Nuremberg trials. The world was stunned as, over and over, individuals justified mass murder on the grounds that “Befehl ist Befehl” - an order is an order.

In 1963, Yale psychologist Stanley Milgram decided to investigate further. He wanted to know how powerful the desire to be obedient was and how far it could change people’s behaviour. He devised a set of infamous electric shock experiments and what he found was extraordinarily disturbing. 65% of ordinary Americans would electrocute a stranger, provided the order came from an authority figure.

Some of the studies that followed have reported obedience rates of over 80% (from Italy, Germany, Austria, Spain, and Holland). It is now well accepted that obedience is a powerful driver in human behaviour.

Is that all? Do we merely follow orders or is there anything else as powerful that affects us?

Conformity

Would you contradict your colleagues? I’d like to think I would, but the evidence suggests I’m kidding myself. Most of us go along with the group consensus, whatever it might be. In fact, psychology tells me I’m more likely to deny the facts than risk being the odd one out.

In the 1950s, Polish-American psychologist Solomon Asch ran a series of experiments to investigate how much an individual’s judgments were affected by those of the folk around them. He discovered most of us (nearly 75% in his tests) conform: we will lie or deceive ourselves, at least some of the time, to publicly fit in with an overwhelming majority.

We’re Doomed!

Does this mean we’re the slaves of our environment? Fortunately not. Or not completely.

  • 35% of Milgram’s experimental subjects disobeyed orders and wouldn’t “electrocute” their victim, even under extreme social pressure. 
  • 95% of Asch’s subjects went against the group at least once, even if they mostly complied. Rebellion was more common if they had an ally or if voting was secret. 

Obedience and Conformity are not insurmountable; they are merely strong influences we should be aware of.

Riven with Guilt?

Experiments suggest most of us want to be good but we will often act badly if either those around us are, or we’re told to.

Does that mean we all live in a constant state of guilt and remorse? The answer is kind-of. We’re very good at ignoring our own guilt, or at least rationalising it away, using a process called Moral Disengagement.

Moral Disengagement is the process of convincing ourselves normal ethical standards don’t apply to us in the situation we’re in. We thus avoid the “self-sanction” that would normally stop us doing something wrong.

According to Albert Bandura of Stanford University: “Moral disengagement functions [..] through moral justification, euphemistic labelling, advantageous comparison, displacing or diffusing responsibility, disregarding or misrepresenting injurious consequences, and dehumanising the victim.”

A common way to diffuse moral responsibility, for example, is through group decision-making:

“People act more cruelly under group responsibility than when they hold themselves personally accountable for their actions” - Bandura

Again, it is something we need to be aware of. Moral disengagement doesn’t work in every case but it does appear to work. Remember that any action you take is an action you are personally ethically and legally responsible for, no matter what moral disengagement may tell you.

Unethical Amnesia

If you can’t quite explain away what you did, psychology suggests you have another option: forget all about it.

Psychologists Francesca Gino and Maryam Kouchaki, from Harvard and Northwestern Universities respectively, conducted a series of experiments on whether people remembered themselves doing good things better than they recalled doing bad ones. Their studies of over 2,100 participants demonstrated people recall times they acted ethically, like playing a game fairly, more clearly than times they cheated. Again, this is something to watch out for - we appear to be hardwired to believe we are better behaved than we are. When we behave less well, we literally forget it.

We Seem to be Good at Doing Bad Things. How do we Fix That? 

If we want everyone to act more ethically, there are several approaches we could take.

Top-down change of behaviour throughout an entire organisation. 

The trouble is, top down change is hard. Even if the CEO really means it, folk probably won’t believe it - at least not for a long time. Top down changes can take years to permeate, and any authority-based approach can also lead to moral disengagement, which is risky in an ethically unclear situation (“The disappearance of a sense of responsibility is the most far-reaching consequence of submission to authority” - Stanley Milgram).

Bottom up, individual-driven change. 

Bottom-up change could be quicker - people have a strong desire to see themselves as the goodies and will generally act well if left alone. However, people’s desire to do good is easily derailed by Obedience, Conformity and Moral Disengagement. As Bandura puts it: “Given the many psychological devices for disengaging moral control, societies cannot rely entirely on individuals”.

So what could we do?

Some researchers have suggested bad behaviour in companies often comes from bad incentives. For example:

  • Too many business transformation programs can warp a company’s own ethical climate by pushing too much change from the top, too quickly and too frequently. People who are rushed or flustered are more likely to become morally disengaged and act unethically.
  • Incentives and pressure to inflate achievement of targets can also cause issues. People do what they are rewarded to do, and most are rewarded for hitting KPIs, not following their principles. Again, this leads to moral disengagement.

The best way to combat disengagement is with engagement. So consider:
  • What are people paid and promoted for? Does it incentivise dodgy behaviour?
  • Are people punished for speaking up and questioning a decision or the accepted way of doing things?
  • Do people feel like they work for an amoral company? If they do, they’ll behave that way too.
  • Do leaders acknowledge dilemmas or sweep them under the carpet? Are problems discussed openly and frankly? Are diverse or conflicting views heard? 

Speak up!

“In a true learning organisation, employees are able to speak up, express concern and make mistakes without fearing negative consequences like punishment or ridicule.” - Andrea Dobson

Psychological Safety is a management concept that has become popular in the past few years. The idea is to create a team culture that promotes learning by making any question safe to ask, from “I don’t understand, how does that work?” to “isn’t that going to get someone killed?”

It’s a way of working that makes asking difficult, potentially ethical, questions part of your job (obedient) and expected (compliant) and has been suggested as a bulwark against moral disengagement. It is therefore one possible way to promote a more ethical work environment.

“Life in society requires consensus as an indispensable condition. But consensus, to be productive, requires that each individual contribute independently out of his experience and insight.” - Solomon Asch

Psychological safety is just one aspect of a learning organisation and tools are now around to help companies implement it (which, according to Google’s Aristotle project, has productivity advantages beyond just ethics).

The previous posts in this series talked about why you should act ethically in order to do your job professionally and legally. In this post, we discussed the psychological reasons why you, or your colleagues, might not do so even if you want to. The processes and behavioural norms around us can drive us via obedience, conformity, and moral disengagement. In the next post, we will look at some specific sectors of the industry and examine their ethical pros and cons.

(Part 5 of the University of Hertfordshire Tech Ethics Course. << Part 4 | Part 6 >>)

Authors 

Andrea Dobson-Kock is a Registered Psychologist (HCPC) and a Cognitive Behavioural Therapist. As a practising psychologist, Dobson-Kock specialised in depression, anxiety disorders and complex grief, and worked for over a decade in mental health.

Anne Currie is an engineer of 25 years, a speaker, writer and science fiction author. She also teaches Tech Ethics at the University of Hertfordshire.

References

Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1-70.
McLaverty, T. C. (2016). The influence of culture on senior leaders as they seek to resolve ethical dilemmas at work.
Klass, E. T. (1978). Psychological effects of immoral actions: The experimental evidence. Psychological Bulletin, 85(4), 756.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press.
Kouchaki, M., & Gino, F. (2016). Memories of unethical actions become obfuscated over time. PNAS, 113(22), 6166-6171.
Hofmann, W., Wisneski, D. C., Brandt, M. J., & Skitka, L. J. (2014). Morality in everyday life. Science, 345(6202), 1340-1343.
Goodwin, G. P., Piazza, J., & Rozin, P. (2014). Moral character predominates in person perception and evaluation. Journal of Personality and Social Psychology, 106(1), 148-168.
Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. Journal of Abnormal and Social Psychology, 58, 203-210.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371-378.
Weiten, W. (2010). Psychology: Themes and Variations.

Photo by Stefano Pollio on Unsplash

Thursday 30 January 2020

Part 4 - What Should I Do!? Ethical Frameworks



(Part 4 of the University of Hertfordshire Tech Ethics Course. << Part 3 | Part 5 >>)

It would be nice to think a grasp of the law and an innate moral sense could guarantee you'd always do the right thing in all circumstances. Unfortunately, it’s not that simple. People can end up doing bad things without being conscience-free psychopaths.

With the best of intentions, there are many ways I might do dodgy stuff without meaning to:

  • I might make an inadvertent product mistake by not thinking through all possible implications.
  • I could spot a potential issue but it may seem like a small harm.
  • I may decide more people would be helped than hurt by it (that's called utilitarianism - as Mr Spock said, “The needs of the many outweigh the needs of the few.” Although utilitarianism is appealingly logical, it can come across as cold to the general public and risks a PR disaster if the harm is anything but minor).
  • I could suspect something’s wrong but everyone around me seems fine with it, so I go along with their group decision.
  • Something that used to be good might turn into something bad without me noticing.
  • My boss may tell me to do it, “or else”.

In fact, it's very easy for good people to do bad things. According to the ethics questions in the 2018 Stack Overflow Developer Survey, only 60% of developers said they definitely wouldn’t write unethical code. The other 40% were more equivocal. In addition, only 20% of those surveyed felt the person who wrote a piece of code was ultimately ethically responsible for it (one problem with that position is the coder might be the only person who fully understands it).

It’s hard to do the right thing. Psychology plays a big part in why we don't (as we'll discuss in the next post) but even if you do try to do good, relying on your gut feel for what's right or wrong is highly unreliable. That's why many industries use ethical frameworks to help people make better considered decisions.

There are advantages to using a framework for your ethical judgments:

  • they can help with thinking through all the possible implications of a decision, including encouraging you to get different perspectives on an issue
  • they codify and let you learn from the experience of others
  • they support you in convincing other people a problem exists (you can point to the framework as a source of authority that supports your argument).

In this blog, we are going to look at several frameworks that provide help in different ways.

Ethical Theory

This article from Brown University is a good introduction to ethical thinking and the difference between morals and ethics. However, our blog series is not about morality. Our focus is on consequences, not intent.

Nevertheless, it's worth remembering that bad intentions don’t play well in the press or in court. Several European Volkswagen engineers found this out in 2017 when they were sent to jail in the US for deliberately falsifying emissions data. It is increasingly hard to keep dodgy practices like that a secret. Stop and think. If your rationale wouldn’t look good in court or on the front page of the Daily Mail, then change it. Don’t hope you can keep it under wraps.

If you are interested in the more philosophical side of ethical theory, this free Harvard course by Michael Sandel is a great introduction.

Practical Ethics

As far as I'm concerned, your soul is your own business. I care about your professionalism. I want you to know how to spot problems in advance and manage them so they don't turn into crises. Avoiding catastrophe is better for your users, your company and you.

There are several ethical frameworks to help you make better tech product choices. Below, I discuss 3 of them:

  • The ACM code of ethics.
  • The EthicalOS toolkit.
  • The Doteveryone Consequence Scanning process.

ACM Code of Professional Ethics

The Association for Computing Machinery (ACM) published an updated code of ethical and professional conduct in 2018. It’s designed to serve as a basis for “ethical decision-making” and “remediation when violations occur” (i.e. spotting and fixing your inevitable mistakes).

ACM’s Code is a definition of what behaviour to aim for. Their ethical duties are quite close to “don’t break the law” (at least in Europe). However, they go further. In their view, responsibility is not merely about avoiding prosecution, it is also about doing the right thing: taking professional care to produce high quality, tested and secure systems.

Their principles include making sure that you (and by extension the products you produce):

  • Avoid harm (don’t physically, mentally, socially or financially harm your users or anyone else).
  • Are environmentally sustainable.
  • Are not discriminatory.
  • Are honest (don’t actively lie to or mislead users and certainly don’t commit fraud).
  • Don’t infringe licenses, patents, trademarks or copyright.
  • Respect privacy and confidentiality.

The framework is a fairly short, uncontroversial, and conservative one. It maps closely to obeying the letter AND spirit of the law where your products will be used. The ACM go beyond what is currently legally required in most countries, but I suspect the law will get there at some point.

Speculative Ethics - The Ethical OS Toolkit

On the less practical and more speculative side, the EthicalOS Toolkit is a high-level framework for helping individuals and teams to wargame worst-case scenarios for products and think through in advance how those situations could be handled or avoided.

Part 1 of the Toolkit asks developers to think through possible failure modes for 14 potential (somewhat dystopian) products and how the problems might be mitigated. In particular, in each case it asks:

In this situation: "What actions would you take to safeguard privacy, truth, democracy, mental health, civic discourse, equality of opportunity, economic stability, or public safety?”

The answers you come up with might range from “add an alert” to “don’t develop this product at all”. The goal is to gauge the risk and decide whether you need to take an action.

EthicalOS have clearly identified the 8 "good things" listed above as their basis of ethics. There are overlaps with ACM’s list (privacy, truth, safety) but they're not identical. The EthicalOS list feels slightly US-centric to me (“truth, justice and the American way” as Superman might say). If you live in mainland China, democracy is not going to be one of your ethical goods. I foresee "privacy" and "equality of opportunity" could also be at odds in future. In my opinion, if you want a more global definition of good you should take a look at the UN’s global sustainable development goals. Nonetheless, the EthicalOS toolkit's role-playing is an imaginative way to think through ethically tricky questions.

Part 2 of the kit asks questions about your own product to help you anticipate how it could be misused: for example, for propaganda, addiction, crime, or discrimination.

This section of the toolkit is useful, but there is an omission when it comes to one of the most pressing issues of our time: pollution and climate. That raises an interesting point. It’s easy to spend your time worrying about how your product might overthrow world order in a decade’s time, whilst omitting to do easy good like putting your AWS instances in green regions.

Finally, part 3 of the toolkit lists 6 potential strategies for producing more responsible tech. You’ll be pleased to hear that number one is taking a course on it. Others include oaths (which unfortunately don’t appear to have much effect, as we'll discuss in the next post), ethical bug bounties, product monitoring (this is my personal preference), and practice licenses for developers (I’m dubious about this one as well, as software engineering isn’t location-bound like legal, medical or architectural practices).

At the end, there is a set of checklists to help you consider whether you have carefully scanned your product for ethical and thus professional risks.

Agile Ethics - Consequence Scanning 

Consequence Scanning, by the UK think tank Doteveryone (Brown S. (2019) Consequence Scanning Manual Version 1. London: Doteveryone), defines a way of considering positive and negative implications by asking:

  • What are the intended and unintended consequences of your product? 
  • What are the positive consequences to focus on? 
  • What are the consequences to mitigate? 

The lightweight process slots into existing agile development and is designed “for the early stages of product planning and should be returned to throughout development and maintenance”. It uses guided brainstorming sessions and is an easy way to add more ethical thought into your product management.

Sector Specific Frameworks

The frameworks above are general to any product but there are others being created that are aimed at more specific areas including: AI, data and machine learning (e.g. “Principles of AI” by AI expert Professor Joanna Bryson and the data ethics canvas by the Open Data Institute). We’ll talk more about these in later posts.

Conclusion

In this post, we have reviewed several of the early ethics and responsible technology frameworks out there for developers. We have seen some common themes:

  • The need to consider the potentially harmful consequences of products and features both up-front and throughout the lifetime of the product.
  • The need to look at products from multiple viewpoints (not just the ones in your engineering team).
  • The need to comply with the law and potentially go further.
  • The need to monitor the use of products in the field.

But is ethics only a matter of process, or is it more? In the next post in this series, we’ll look at the role psychology plays in risk management, professional behaviour and decision-making.

(Part 4 of the University of Hertfordshire Tech Ethics Course. << Part 3 | Part 5 >>)

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs and helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and, most importantly of all, is the author of the dystopian, hard scifi Panopticon series (Amazon US | Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com

Photo by Jason Wong on Unsplash

Friday 24 January 2020

Part 3: Are you a Goodie or a Baddie? What Does Being Ethical Mean?


(Part 3 of the University of Hertfordshire Tech Ethics Course. << Part 2 | Part 4 >>)

Tech ethics isn’t philosophy, it's professional behaviour.

In fact, I'd define ethical behaviour for a technologist as just taking reasonable care to avoid doing harm, which is also the foundation of doing your job professionally. What does that mean in practice?
  • Thinking upfront about how harm might come to a group or individual from your product.
  • Taking reasonable steps to avoid it. 
  • Monitoring your system in production, including user issues, to spot problems you missed.
  • Fixing them.  

Isn’t That Just the Law?

Not necessarily. The law lags behind progress in software (we move very fast). Some of this is against the law (as I described in part 2) and some of it isn't.

Or at least, it isn't yet.

Causing foreseeable physical harm is often a crime. Causing reputational, economic, or emotional damage; inconvenience, reduction in quality of life, or environmental problems, for example, may not be. However, even if you aren't breaking criminal law you might be in breach of civil or contract law.

Are you Responsible?

Most developers think they are not legally or morally responsible for the code they write, but the courts may disagree. “I was just following orders” is not a legal defence. Neither is “I didn't know that was against the law.” It is your responsibility to check.

Is it Unprofessional to be Ethical?

It may be that the company you work for is a nefarious organisation with a plan for world domination. If you work in a secret lair under an extinct volcano, that might apply to you. If your CEO is a super-villain, bad behaviour may well be part of his business strategy, and Dr Evil might consider it unprofessional of you to raise a concern.

However, most businesses are not run by baddies. As well as staying inside the law, they do care about keeping customers happy, retaining staff, and avoiding newspaper scandals. For most of them, ethical breaches are mistakes. The error might be caused by a failure to test, lack of awareness, shortsightedness, misunderstandings, or miscommunications. In fact, ethical problems should be treated as alarm bells for poor decision-making in an organisation.

Your processes to avoid ethical breaches should be the same as your processes to avoid any potentially costly mistake. They are about managing risk to avoid it turning into a crisis. The processes are not there to ensure no error ever happens (that would be impossible) but to make sure issues are spotted and corrected before they do irreparable harm.

Don't Assume that Because You're Paid, You are a Baddie!

Never assume your CEO is an evil genius and you're only paid a salary to avert your eyes from his misdeeds. Where tech is concerned, he's more likely to be an idiot. History is rife with soldiers who followed orders that were never given. Don't be that person. If you're asked to do something dangerous or harmful, it's probably a mistake. Raise your concerns immediately. That is your job. Never assume your job is to help with a cover-up.

Why raising issues is good for business:
  • It is a sign that you're being careful about your work. 
  • You might be about to break the law (or be breaking it in some countries) and all businesses want to avoid that.
  • You might get sued.
  • You might get bad PR: people may boycott your products or you might struggle to hire. 
  • Ethical breaches are often a warning sign of dodgy decision-making that needs to be fixed. 
Personally, I don't want to find myself yelling, “I was just following orders!” in court, on the front page of the Daily Mail, or anywhere else. So how do I avoid it?

Avoiding Harm

It’s not unethical to have something go wrong. This is software - things go wrong. It’s only unethical (or unprofessional) if you don’t make reasonable efforts to:
  • avoid it going harmfully wrong
  • spot when bad stuff is happening
  • resolve serious problems when you encounter them.

Think up Front

In your product's design phase, set time aside to do “consequence scanning”. Think through:
  • harms that could result from your product, including by misuse
  • how you would spot if that happened
  • how bad it would be and how to mitigate that if necessary. 
In the next post we’ll talk about some frameworks that exist to help with this.

Follow Best Practices

Where they exist, follow best practices unless there is a very good reason not to. For new stuff like machine learning, best practices are still being formed. If best practices are not set in stone in your area:

  • follow what you can
  • be very careful when you stray off that path
  • document your thinking processes and decisions, at least in your issue tracking system, so that other engineers, auditors, and your future self can see why you made the decision you did (there is usually good reason but, trust me, you'll forget what it was).

Report Problems

What should you do if you see something potentially harmful like unpatched systems?

Here are 3 things you probably shouldn’t do:
  • Ignore it.
  • Quit.
  • Immediately become a whistleblower, phone up the Daily Mail, then escape on the first plane to Moscow.
Here’s what you should do:
  • Raise it in your issue tracking system with an appropriate severity.
  • Be prepared to argue the case for that severity level.

Test!

The most ethical thing you can ever do is thorough testing. Watch out for edge cases and missing test cases. A classic mistake is to only test your product on the people in your IT team - they almost certainly don't reflect all of humanity. If they do, you might be over-staffed on that project.
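As a concrete illustration, here's a minimal test sketch. The validate_display_name function and its rules are invented for this example; the point is the parametrised edge cases that rarely appear when a product is only tried out on the people who built it.

```python
# Illustrative sketch only: a made-up validation function plus the kind of
# real-world edge cases that a team testing on itself tends to miss.
import pytest


def validate_display_name(name: str) -> bool:
    """Accept any non-empty display name up to 100 characters, in any script."""
    return 0 < len(name.strip()) <= 100


@pytest.mark.parametrize("name", [
    "Alice",                 # the case your team will test anyway
    "Nguyễn Thị Minh Khai",  # diacritics
    "张伟",                  # non-Latin script
    "O'Connor-Smith",        # punctuation that naive validation often rejects
])
def test_real_world_names_are_accepted(name):
    assert validate_display_name(name)


@pytest.mark.parametrize("name", ["", "   ", "a" * 101])
def test_empty_or_oversized_names_are_rejected(name):
    assert not validate_display_name(name)
```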

Field testing is a good idea generally and sometimes unavoidable. Plan for errors to be spotted and handled without harming the user, which takes us to the next section...

Monitor

Industries like aviation, automotive, and oil and gas have something called a safety culture. They actively search out problems and examine them carefully. They do thorough postmortems and try to make sure any harmful issue only happens once. But don't just track actual failures, track near ones too...

Track Near Misses

The most successful businesses don't only track actual failures, they also monitor “near misses”: problems that never quite materialise and often resolve themselves, but are a warning sign of something bad to come.

In aviation, a rise in plane near misses indicates that a situation is becoming dangerous and there is more risk of a collision. Getting early warning from your near miss or near collision reporting lets you take action and avoid a catastrophe!
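In software terms, a near miss might be a retry that eventually succeeded, a request that only just beat its timeout, or a disk that almost filled up. Here's a minimal, illustrative sketch of the idea; the threshold, window, and alert_oncall function are invented placeholders for your own monitoring and paging setup.

```python
# Illustrative sketch only: count events that almost failed but recovered,
# and raise the alarm when they start trending upwards - before the outage.
from collections import deque
from datetime import datetime, timedelta, timezone

NEAR_MISS_THRESHOLD = 10     # invented numbers; tune for your own system
WINDOW = timedelta(hours=1)

_recent = deque()            # timestamps of recent near misses


def alert_oncall(message):
    """Stand-in for your real paging or ticketing system."""
    print("ALERT:", message)


def record_near_miss(now=None):
    """Call this whenever something almost failed but recovered by itself."""
    now = now or datetime.now(timezone.utc)
    _recent.append(now)
    while _recent and now - _recent[0] > WINDOW:   # slide the one-hour window
        _recent.popleft()
    if len(_recent) >= NEAR_MISS_THRESHOLD:
        alert_oncall(f"{len(_recent)} near misses in the last hour - investigate")
```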

Be Accountable and Auditable

Finally, keep records. Record the decisions you made and why. This can just be in your code management and issue tracking systems. If you are working on machine learning, you need to keep detailed information about your test data and models. 

The reason for this is two-fold:
  • you'll need it for any post-mortems
  • it gives you another chance to spot anything dodgy and act on it.
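For machine learning in particular, even a tiny append-only log of what was trained, on which data, with which settings, and why, is far better than nothing. A minimal sketch follows; the file name, fields, and example values are invented for illustration.

```python
# Illustrative sketch only: append one JSON line per training run so that
# post-mortems and audits can see what data and settings produced which model.
import json
from datetime import datetime, timezone


def log_training_run(model_name, dataset_version, params, decided_by, rationale,
                     path="model_audit_log.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "dataset_version": dataset_version,   # e.g. a dataset hash or snapshot tag
        "params": params,
        "decided_by": decided_by,
        "rationale": rationale,               # why this data and these settings
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Hypothetical example entry:
log_training_run(
    model_name="loan-risk-v3",
    dataset_version="2020-01-snapshot",
    params={"max_depth": 6, "class_weight": "balanced"},
    decided_by="j.bloggs",
    rationale="Rebalanced classes after a bias near-miss raised in the issue tracker",
)
```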

Trust Yourself and Speak up

If something feels wrong, it probably is, and maybe you're the only person who has spotted it. Perhaps you're worrying unnecessarily, but ask anyway. The worst that'll happen is you'll learn something!

(Part 3 of the University of Hertfordshire Tech Ethics Course. << Part 2 | Part 4 >>)

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs & helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and, most importantly of all, is the author of the dystopian, hard scifi Panopticon series (Amazon US | Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com 

Sponsored by Container Solutions




Hero image by the great JD Hancock at jdhancock.com

Friday 17 January 2020

Part 2: Tech Ethics: The Law's The Floor


(Part 2 of the University of Hertfordshire Tech Ethics Course << Part 1 | Part 3 >>)

The 101 of ethical behaviour is: don't break the law. That might seem obvious, and it's not sufficient, but it is necessary. The laws of each country codify a subset of its ethical rules. If you're breaking them, you're probably acting unethically, so the foundation of responsible tech development is to obey the laws that apply to you.

In this post, we're going to cover some of the regulations you need to follow as a techie.

I'm not a lawyer and I'm not giving legal advice. I am commenting as a layman who has encountered most of these rules in my engineering career. If you need expert advice on any of this stuff, talk to an actual lawyer. You may have one in your firm, but if you don't, your insurer can often help.

Be Warned! I Haven't Covered Everything

Every country and sector has its own rules you need to stick to when building a new tech product or extending an existing one. That's a lot of laws. I couldn't list them all even if I knew them all, which I don't (and I'm not in prison yet). The good news is that most of them will never apply to you. However, some crop up a lot and you're likely to encounter them.

(Note: for every new project, someone usually needs to do a bit of legal research. At the very least, do some searching online and talk to veteran techies in that area.)

1. Privacy, Transparency and Security (GDPR & Others)

Many countries have laws about digital privacy, but perhaps the most extensive is the European Union's General Data Protection Regulation (GDPR). It limits what a company or individual can do with the personal information of people in the EU. It also dictates how, and for how long, such data can be stored.

The US state of California has similar privacy rules (the CCPA), and there may also be regulations on specific industries or groups that apply to your application. One example is HIPAA, which covers US healthcare information. Another is the COPPA rule on children's data.

As well as privacy, GDPR includes regulation around security. It expects appropriate measures, such as encryption and pseudonymisation, when storing sensitive personal data. Again, other countries have their own rules, for example the US government's FedRAMP regs.
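To give a flavour of what that can look like in code, here's a minimal sketch using Python's cryptography library: a direct identifier is replaced with a salted hash (pseudonymisation) and the record is encrypted before storage. The key handling, salt, and field names here are invented for illustration; a real system would keep keys in a proper secrets manager and get the overall design reviewed.

```python
# Illustrative sketch only: pseudonymise a direct identifier and encrypt the
# record before it is stored. Keys and salts are generated inline here purely
# for demonstration; never do that in production.
import hashlib
import json

from cryptography.fernet import Fernet

KEY = Fernet.generate_key()        # in reality: fetched from a secrets manager
SALT = b"example-salt-only"        # in reality: secret and per-deployment
fernet = Fernet(KEY)


def pseudonymise_id(email):
    """Replace a direct identifier with a salted hash."""
    return hashlib.sha256(SALT + email.encode("utf-8")).hexdigest()


def encrypt_record(record):
    """Encrypt a whole record before writing it to storage."""
    return fernet.encrypt(json.dumps(record).encode("utf-8"))


record = {
    "user": pseudonymise_id("alice@example.com"),   # no raw email stored
    "notes": "example sensitive field",
}
token = encrypt_record(record)                      # store this, not the record
original = json.loads(fernet.decrypt(token))        # decrypt only when needed
```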

GDPR also imposes transparency requirements on uses of data. For example, it provides a “right to explanation” for some algorithmic decisions, particularly those with significant ramifications for the individual, like prison sentence recommendations or credit scores.

The transparency aspect of the GDPR is widely expected to cause legal wrangling in future because the decisions of deep neural networks are notoriously hard to explain. The UK government's current advice on handling this kind of decision-making is sensible:
  • give individuals information about the processing you do
  • introduce simple ways for them to request human intervention or challenge a decision
  • carry out regular checks to make sure that your systems are working as intended.

2. Patents, Trademarks, Copyright, & Licensing (IP Law)

IP law applies to everything that you didn't produce from scratch yourself. That might be code samples or libraries, written text, music, or images you downloaded from the internet.

Even if you did produce something yourself, you could break IP law by accidentally infringing a patent or trademark. Accidental infringement usually doesn't come with huge penalties, but the IP owner could stop you from using the infringing material from then on.

Whenever you use anything you didn't create from scratch yourself, legally you need to confirm your right to do so. That might include licensing either the patent or the copyright. Licenses and trademarks tell you what you can and cannot do with materials and legally you must comply. Even if you have a license, you can't do anything you want. For example, you can't tell people you are the author if you aren't.

All open source code comes with a copyright license that tells you how you can use it. Some licenses, for example Apache 2, are permissive and let you use the code for whatever you like. Some licenses are more restrictive, e.g. the GPL, and only let you legally use the code in certain ways.
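As a rough first pass, you can at least see what licenses your installed Python dependencies declare about themselves. Here's a minimal sketch using the standard library's importlib.metadata; the declared metadata is only as good as what each package author filled in, so this is a starting point, not a substitute for a proper license review.

```python
# Illustrative sketch only: list the licenses your installed Python packages
# declare in their metadata. Missing or vague entries still need manual review.
from importlib.metadata import distributions

for dist in sorted(distributions(), key=lambda d: (d.metadata.get("Name") or "")):
    name = dist.metadata.get("Name") or "unknown"
    declared = dist.metadata.get("License") or "not declared"
    print(f"{name}: {declared}")
```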

Even if you did write all your code yourself, you still need to be careful not to deliberately infringe someone else's IP because the penalties for that can be steep. Avoid casually discussing patents or trade secrets with anyone outside your company so you don't learn things you shouldn't know and might put in future products. If a conversation like that starts, subtly excuse yourself ASAP.

3. Contracts 

Contracts are two-sided commitments that describe the work one company or individual does for another. Contracts ensure the buyer gets what they want and the supplier gets paid for it. They might be between an individual user and the company behind a website, for example, or between a contractor building a custom application and the company who hired them.

A contract is legally binding. If either side fails to do what they agreed, the courts can hold them to it or make them pay damages. Before they sign a contract, most companies ensure they have insurance to cover the cost of either suing the other party for a breach or being sued themselves, which can happen even if you didn't do anything wrong. This is called liability insurance. 

4. Confidentiality

Engineers are frequently asked to keep secret what they are working on or what they learn through work.

Many contracts contain confidentiality clauses, or you might be asked to sign a non-disclosure agreement (NDA). Confidentiality clauses and NDAs are enforced the same way as any contract: through the courts. If you blab, you can be sued.

5. Duty of Care

Duty of care legislation applies to products that may do foreseeable harm. This means physical or psychological harm rather than "pure economic loss", and so it hasn't generally been applied to software products in the past. However, where software is incorporated into a physical device (a robot, an IoT gadget, etc.), liability may apply because it could physically hurt someone.

6. Accessibility

Many countries have laws about access to websites for disabled users. If your site or product is inaccessible, there is a potential risk of being sued, particularly if you have users in the United States. Some government bids require compliance with accessibility standards like Section 508 in the US or the EU's web accessibility standards. Being accessible also helps with your SEO ranking on Google. 

7. Other Stuff to Comply With

Although GDPR, IP law, and contracts are probably the rules you'll encounter most often in the tech industry, they aren't the only ones.
  • In every country, there are regulations on tax (VAT or other sales taxes, customs duties, etc.). Those rules affect product reporting.
  • Your product might come under export laws, for example the US rules on exporting so-called dual-use technology: items classified as having potential military uses. Some of those laws apply to fairly innocuous-seeming stuff like publicly available SSL libraries. For cryptographic libraries in particular, double-check before including them in your products. Don't panic: even if you do end up using dual-use tech in your application, it normally just means some extra paperwork.
  • There are sectors where the software is additionally regulated; for example, finance products must comply with strict anti-money laundering (AML) rules.

Cybercrime or the Computer Misuse Act

I've talked about rules that affect how you write products, but there are also ones about how you use them. The laws around computer misuse are fairly draconian. For example, gaining unauthorised access to a computer, even if you do no harm, is a criminal offence in the UK with a penalty of up to 2 years in prison!

Other forms of cybercrime include online trolling, bullying, and stalking, which are quite common. You may spot them being committed using your company's computer equipment. It's often an inside job: a current employee or someone on their way out, so you might have to act to stop or report it. 

Laws to Come?

The EU has plans to add new laws around AI and public surveillance, which will probably appear over the next few years. Or then again, perhaps not.

There are a Lot of Laws

You always need to do some reading around and checking in your field. What's legal and what's not changes all the time and it's not necessarily obvious. Research is your friend!

In the next post, we'll look at why complying with the law is not always enough - it's just the minimum.

<< Read Part 1 | Read Part 3 >>

About the Author

Anne Currie is a tech greybeard (ahem) who has been in the sector as an engineer, writer and speaker for 25 years. She runs & helps organise conferences in hard tech and in ethics, is a visiting lecturer at the University of Hertfordshire and, most importantly of all, is the author of the dystopian, hard scifi Panopticon series (Amazon US | Amazon UK). Contact her on Twitter @anne_e_currie or at www.annecurrie.com


Photo by King's Church International on Unsplash