Thursday 20 February 2020

Part 6 - The Tech Ethical Issues To Talk About


(Part 6 of the University of Hertfordshire Tech Ethics Course. << Part 5 | Part 7 >>)

In our last few posts, we have discussed tech ethics and responsible technology in general terms and left it to the practitioner (you) to decide how to apply it to your own future products. In the next few posts, we are going to take a brief look at some specific areas that get a great deal of press coverage. Some get more attention than others - ethics shouldn’t be subject to fashion, but inevitably it is.

In each case, I am not going to define the rights and wrongs - other than pointing out where the law already does so - but I will try to outline some of the main arguments on both sides, and where the ethical floor might be, no matter which side of the argument you come down on.

Over the next few blog posts I’ll cover:

  • Energy use in the tech sector.
  • AI and Big Data.
  • Cyberwarfare, propaganda and killer robots.
  • Surveillance.
  • Anthropomorphism.
  • Attention.
  • Social Media influence.
  • Open v closed code and data.
  • The role of social scoring and civil order (mass surveillance apps).
  • Accessibility.
  • Exclusion.
  • Privacy.
  • Security.
  • Future of trust.
  • Changing behaviours and social norms.

I’ll also discuss different definitions of social good (e.g. individualistic vs community) which vary from country to country and person to person.

That’s a lot! Inevitably, it will only be an overview.

Energy Use

Tech is one of the most successful and fastest-growing industries worldwide. Because almost everything it does runs on electricity, that also makes it one of the fastest-growing consumers of fossil fuels.

Just over 40% of the UK’s electricity was generated from fossil fuels in 2019, and globally fossil fuels generated roughly 65% of electricity in 2017. Data centres alone are currently estimated to use 2% of the world’s electricity, and if you add in devices the percentage gets much higher (10-20%, according to Greenpeace). Machine Learning is particularly energy-intensive.

Does this make the tech industry a force for good or evil when it comes to climate change?

On the one hand, we could assert that communications tech cuts down on travel (very green); that technology often increases efficiency (green again); and that there are societal benefits to tech that make it worth some climate cost.

On the other, we might point out that tech runs on electricity, and there are better, more sustainable ways to generate that electricity than burning fossil fuels. Many people argue that the tech industry is rich and powerful, and therefore the right thing is for the industry to lead the way in using clean electricity.

These views are not mutually exclusive.

As engineers, our responsibility is to stay informed; make active choices; and pay attention to our energy usage. For example, it is vastly cleaner to host instances in AWS’s Dublin region (100% renewable or offset) than in US East (only 50% offset). Google Cloud and Azure are both 100% offset everywhere. This information is usually available on cloud providers’ sustainability web pages. Read it. If data isn’t available for your provider, that is not a good sign. Ask for it.
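
To make that concrete, here is a minimal sketch in Python using boto3 (the AWS SDK). The point is simply to make the region an explicit, reviewable decision rather than whatever the default configuration happens to be. The AMI ID is hypothetical, and treating eu-west-1 (Dublin) as the "cleaner" choice is an assumption you should check against your provider’s own sustainability pages.

```python
# A minimal sketch: make the hosting region an explicit, documented choice.
# Which region is genuinely lowest-carbon is an assumption to verify yourself.
import boto3

# eu-west-1 is AWS's Dublin region; us-east-1 is Northern Virginia.
PREFERRED_LOW_CARBON_REGION = "eu-west-1"

ec2 = boto3.client("ec2", region_name=PREFERRED_LOW_CARBON_REGION)

# Launch a small instance in the chosen region (parameters are illustrative).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID - substitute your own
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

The code itself is trivial; the ethical work is in choosing, documenting and being able to justify that region constant.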

AI, Machine Learning and Big Data

I’m not talking about general AI and whether to build Skynet here. That’s a rather long way off. What I am focusing on is data analytics, and the automation of physical and intellectual tasks.

Data Analysis 

When we discuss Machine Learning (ML), what we're usually talking about is machine-enhanced statistical analysis of existing data sets. Sometimes that analysis uses something like deep learning, but surprisingly often it is still just a SQL query!
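
To illustrate that point, here is a tiny sketch using Python’s built-in sqlite3 module. The table and data are invented, but the shape is realistic: a plain SQL aggregation answering a statistical question, with no neural network in sight.

```python
# A minimal sketch: much "data analysis" is plain SQL aggregation.
# Table and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, delivery_days REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 2.0), ("north", 3.5), ("south", 1.5), ("south", 2.5)],
)

# "Which regions are slowest?" - a statistical question answered by GROUP BY.
for region, avg_days in conn.execute(
    "SELECT region, AVG(delivery_days) FROM orders GROUP BY region ORDER BY 2 DESC"
):
    print(f"{region}: {avg_days:.1f} days on average")
```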

ML is an astonishingly useful tool for humanity. Using digital techniques, huge pools of high-quality, often high-density data can be analysed - information that could never be processed manually in any realistic timeframe.

For example, medical, astronomical, agricultural or other scientific photographs can be scanned and automatically studied, potentially unearthing radical new hypotheses in those fields. Similarly, huge quantities of public domain text data are already being analysed, leading to breakthroughs in automatic translation. ML is also driving major leaps in medicine.

But it's not all good news. There are also significant ethical concerns about ML:

  • Is the source data accurate, or does it contain false information? Historical data may encode beliefs that are incorrect but were, or still are, widely held. Products based on such biased data might be unfair or cause unlawful discrimination.
  • Has the data been sourced in a responsible manner or have people inadvertently given away information that will harm themselves, others or society in general?
  • Is the analysis bugged in a way that is hard to detect? Has it been sufficiently tested?
  • How do we handle false positives or negatives? These will happen even if there are no bugs because that is how statistics works. 
  • Does restricted access to the source data lead to monopolies that are not in the public interest?

The EU’s GDPR legislation attempts to address some of these concerns via its transparency and right of challenge rules.

As engineers, our responsibility is to obey the law; check the provenance of our data; be aware that data quality impacts every conclusion we reach; test thoroughly; and rigorously document and account for all our decisions.

We also need to understand that error is baked into statistics: data = model + error, i.e. some error will always occur. The inevitable mistakes therefore need to be handled compassionately and thoroughly accounted for.
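
Here is a hedged back-of-the-envelope sketch (all the numbers are invented) of why those mistakes are inevitable at scale, even with a model that sounds very accurate:

```python
# A minimal sketch of why false positives and negatives are unavoidable:
# even a seemingly accurate classifier produces a steady stream of mistakes
# when applied to enough people. All numbers here are invented.
population = 1_000_000        # people the automated decision is applied to
prevalence = 0.01             # 1% genuinely "positive" cases
sensitivity = 0.95            # true positive rate of the model
specificity = 0.98            # true negative rate of the model

positives = population * prevalence
negatives = population - positives

false_negatives = positives * (1 - sensitivity)   # real cases the model misses
false_positives = negatives * (1 - specificity)   # people wrongly flagged

print(f"False negatives: {false_negatives:,.0f}")
print(f"False positives: {false_positives:,.0f}")
```

Each of those numbers is a person, which is why appeal and redress mechanisms have to be designed in from the start.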

Automation

One of the common uses of AI or machine learning is in job automation. Again, this is neither good nor bad but has benefits and risks.

Task automation means many dull, dangerous, or rote tasks don't need to be performed by humans any more. That increases productivity and accuracy, and reduces cost. One example is self-driving cars, which are anticipated to reduce congestion and accidents. That’s all good.

So, what are the arguments against automation?

One concern is that where humans are directly controlled by algorithms, the result can be heartless. For example, where gig economy workers are given automatically calculated and distributed work schedules, those schedules sometimes allow no time for family life or illness. Let’s take the real-life case of the Kronos scheduling software. In 2014, The New York Times revealed how the way it optimised for efficiency screwed up the lives of employees. Presumably the developers hadn’t intended that; they just hadn’t foreseen the problems and had no existing guidelines to work from.

Of course, humans can also be cruel, but here the responsibility for mercy lies with the software developer. He or she has to code it in. The risk is, they might not (probably won't?) have a good understanding of the human factors in the situation. If they mess it up, they could make people's lives hell.

Automation can also lead to significant labour disruption. For example, Uber's stated business plan is to replace all its human drivers with those lovely self-driving cars. The oil industry has also introduced significant automation to oil wells, leading to a huge fall in the number of human employees.

Finally, automation directly links business productivity to capital (money to spend on robots or software), which advantages those who already have capital. That can lead to wealth concentration, which may not be in the public interest.

As engineers, the last two points are probably beyond our scope, but we do need to consider how our code is written and tested so that it doesn’t harm the people it controls, and so that it provides mechanisms for problems to be detected and reported. You may also want to consider whether you are happy with the existence of the product you’re building. That is your personal choice.
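
As a purely illustrative sketch - not how any real scheduling product works - here is one way a guard rail could check machine-generated rotas before they reach workers. The 11-hour rest threshold and the data shapes are assumptions made up for the example.

```python
# A minimal sketch: a guard rail that checks machine-generated schedules
# before publishing them. Threshold and data shapes are assumptions.
from datetime import datetime, timedelta

MIN_REST = timedelta(hours=11)  # assumed minimum rest period between shifts

def rest_violations(shifts):
    """Return (shift_end, next_shift_start) pairs with too little rest between them."""
    ordered = sorted(shifts, key=lambda s: s[0])
    violations = []
    for (_, end), (next_start, _) in zip(ordered, ordered[1:]):
        if next_start - end < MIN_REST:
            violations.append((end, next_start))
    return violations

shifts = [  # (start, end) pairs for one worker - illustrative data
    (datetime(2020, 2, 20, 14, 0), datetime(2020, 2, 20, 23, 0)),
    (datetime(2020, 2, 21, 6, 0), datetime(2020, 2, 21, 14, 0)),  # only 7h rest
]

for end, next_start in rest_violations(shifts):
    # Flag for human review rather than silently publishing the rota.
    print(f"Insufficient rest: shift ends {end}, next starts {next_start}")
```

The point is not this specific rule, but that the check exists, is visible, and routes problems to a human rather than hiding them.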

The Future of Warfare

Even if you don’t go into the defence industry, you need to be aware of the direction warfare is headed in because the wars of the future will not only be fought with physical weaponry - they may be fought on your software.

Killer Robots

The most obvious ethical dilemma around tech and war today is the use of killer robots.

Remotely controlled, unmanned aerial vehicles (UAVs), aka drones, are already widely used on the battlefield, particularly in the Middle East. Also under development are machines that can target and shoot without human intervention: so-called killer robots (an accurate, if literally loaded, term).

The main argument in favour of these weapons is that they could be more accurate and reduce battlefield loss of life.

The primary argument against them is that the technology is still not good enough to use anywhere near civilian populations. They often kill the wrong person, and targeting mistakes are frequently swept under the carpet by the military rather than properly addressed. Note that the same arguments also apply to UAVs with human triggers, where the automatically generated hit list may contain mistakes (see AI and Machine Learning above).

More philosophical arguments against autonomous weapons centre on whether humans should ever be killed automatically. You might argue that landmines already do exactly that, but killing that way is against the Geneva and UN conventions - most of us have already decided it is wrong.

Another argument is that killer robots make warfare cheaper (once the tech has been created) and therefore more of it can be waged. Whether that is a "for" or "against" depends on your viewpoint and current context.

Cyberwarfare

Cyberwarfare is a new weapons frontier. In Ukraine in 2015, the power grid was hacked and brought down by a sophisticated cyber attack. From 2005 to 2010, the US's Stuxnet virus attacked Iran's uranium enrichment programme. In the first case, the target was the control code for a power facility. In the second, it was potentially every Windows machine in the world (the payload only triggered if the PC was inside an Iranian nuclear plant).

In future, the target could be your system. It is hard to write code or support systems that are proof against a state actor, but as an engineer it is vital that your systems are as resilient as you can make them. Don’t get taken down - and cause the deaths of thousands - because you didn’t apply a security patch.

Propaganda and Civil Disorder

“Destabilizing an adversary society by creating conflict in it and creating doubt, uncertainty, distrust in institutions” - Keir Giles, senior consulting fellow on Russia at Chatham House.

Creating killer robots is expensive up-front. A lower-capex alternative is propaganda: eroding trust in a government using targeted advertising, misinformation, deep fakes or just plain fake news. The US may even be using popular games.

As an engineer, it is your responsibility to consider whether your new social media platform, product (e.g. a game), or tool (like a video editor) could be used as a weapon of destabilisation, and how you would detect and stop that.

What Else?

In this post, we have very briefly covered some of the ethical issues around climate change (energy use), AI and Machine Learning, and cyberwarfare. Weighing up these benefits and risks is part of being a professional.

In the next post in this series, we’ll look at surveillance, anthropomorphism and attention…

(Part 6 of the University of Hertfordshire Tech Ethics Course. << Part 5 | Part 7 >>)

