(Part 7 of the University of Hertfordshire Tech Ethics Course. << Part 6 | Part 8 >>)
Surveillance
“Have people inadvertently given away data that will harm themselves, others or society in general?”

A 2019 report from tech research company Comparitech ranked the most surveilled cities in the world. Unsurprisingly, China took eight of the top 10 spots, but coming in at number six, and sporting around 630,000 cameras, was London. The British capital has approximately one recording device for every 14 inhabitants. According to cctv.co.uk: “Anyone going about their business in London will be caught on camera around 300 times per day.”
In the past decade, China’s internal security spending (much of which is widely believed to go on surveillance tech) increased tenfold. It now significantly outstrips China's external defence budget. Closer to home, cctv.co.uk expects the number of cameras in London to top one million by 2025. Pair all of this data with the growing sophistication of facial recognition and we have a new world.
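The one-camera-per-14-inhabitants figure is simple arithmetic. A quick sketch, assuming a London population of roughly 9 million (a figure not given in the report itself):

```python
# Rough camera-density arithmetic for London.
# Camera count is from the Comparitech report; the population
# figure is an assumed approximation, not from the source.
cameras = 630_000
population = 9_000_000  # assumed approximate London population

people_per_camera = population / cameras
print(round(people_per_camera))  # roughly 14 people per camera
```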
The arguments for automated surveillance using facial recognition are usually made by governments or multinationals:
- Improved security (a common argument in the UK).
- Better social cohesion, control, and trust; especially when coupled with social scoring (China).
- More convenient grocery shopping (Amazon in the US).
The last might seem a rather trivial benefit, but it appears popular.
The arguments against tend to be made by academics and the EU:
- Facial recognition technology isn't yet good enough for what it's being used for, which will lead to miscarriages of justice.
- It is an invasion of privacy.
- It is a threat to personal liberty.
- Although the data is recorded in public, it is not publicly available. When it remains in the hands of multinationals, we're all working for them for nothing.
As engineers, we should be aware that the EU considered banning public facial recognition surveillance in 2020. It has not yet done so, but may in future. Even without a ban, recorded video data may still fall under a country's data protection rules.
Anthropomorphism
“Who is human and who only appears (masquerades) as human? Unless we can individually and collectively be certain of the answer to this question, we face what is, in my view, the most serious problem possible” - Philip K. Dick

Anthropomorphism is defined as “the attribution of human characteristics or behaviour to a god, animal, or object.” Humans are very good at it.
We don’t yet have chatbots that can pass the Turing test (which is about the imitation of intelligence, not AGI. Essentially, it's anthropomorphism). In most cases, however, they don’t need to be all that convincing to fool us.
We aren’t expecting the “person” we’re talking to over webchat support to be anything other than a human, and the last thing we're going to ask is, “Can you eat a chair?” (which, according to Steve Worswick, winner of the 2019 Loebner prize for the world's most convincing human emulation, is the kind of question often asked to trip up chatbots).
To be fooled into believing a chatbot is human is only inadvertent anthropomorphism - it's being deliberately misled. True anthropomorphism requires us to be convinced that something we know is a machine is as intelligent, caring, and compassionate as we are. That is sadly easy to do when it has a human-looking form.
Ben Goertzel, the creator of the performance chatbot “Sophia”, actively aims to achieve this. “For most of my career as a researcher," he said, "people believed that it was hopeless, that we’d never achieve human-level AI. Now, half the public thinks we’re already there.” That's hardly surprising, given that's what he tells them.
In reality, we are a long way off artificial general intelligence (which we can't currently even define). If we were closer, we'd have some ethical discussions on that point but for the moment it is largely an irrelevant distraction. We'll instead concentrate on anthropomorphism as it currently applies in tech.
The main arguments in favour are:
- Anthropomorphic robots are a cheap way to provide (the imitation of) care, love, and companionship to people.
- It’s a cheaper way to provide customer service (and recent research suggests that’s more effective if people believe they are talking to a human rather than merely a chatbot).
The arguments against human emulation are:
- It’s a lie.
- It undermines real relationships (which are more challenging than talking to a robot that has no reciprocal needs).
- Most dangerously, anthropomorphism leads to a misleading impression of competence and predictability. It can cause users to over-attribute accuracy and safety to “AI” algorithms.
As engineers, we should note that it isn't against the law for products to pretend to be human, but we must consider the ethical and safety implications.
Attention
Where does your attention go? In our attention economy, that matters. Marketing firms don’t only pay for clicks; they often pay for impressions (the number of people who see an ad on their screen). That means old-school media and new social media firms want to keep you online, with ads in front of you, for as long as possible. Some of them are very good at it. Is that a problem?

On the positive side, media is entertainment. If people want to look at it, surely that’s a good thing - it's the whole point.
It’s your choice how you spend your time. Arguments about “amusing ourselves to death” (in the words of media critic Neil Postman) were made about TV in the last century and novels before that, both of which may have done society good by encouraging empathy and social cohesion. Modern social media causes people to become more personally and viscerally involved in the issues of the day. That has upsides and downsides, but should lead to a more engaged populace in the longer term. Surely that's a good thing?
The counter argument is that modern media is more deliberately compulsive than TV or books and is significantly more superficial. That it is reducing our attention spans and removing nuance, leading to anxiety (or possibly just correlated with it), increasing polarization, and reducing the quality of social discourse and day-to-day family life. For employers, there is also a fear it may be reducing productivity by distracting the workforce with trivial Facebook updates and pointless clickbait when they should be getting on with their jobs.
Ten years ago, 7% of the US population used one or more social networking sites. Now that figure is 65%, and the global average time spent on social media is 2 hours 23 minutes per day. Note that's still less than TV (3 hours 35 minutes in the US).
To a certain extent, social media usage is a matter of choice. Unlike smoking, it hasn't been proven to cause harm. However, time is a precious commodity and social media is designed to be addictive. One potential ethical approach is to let users audit themselves (easily find out how much time they have spent in your app), enabling them to make informed choices.
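The self-audit idea above needs very little machinery: record when a session starts and ends, and sum the durations. A minimal sketch, with all names illustrative rather than taken from any real product:

```python
from datetime import datetime, timedelta

class UsageAudit:
    """Illustrative self-audit helper: tracks time spent in an app."""

    def __init__(self):
        self.sessions = []          # completed sessions as (start, end) pairs
        self._current_start = None  # start time of the session in progress

    def start_session(self, now=None):
        self._current_start = now or datetime.now()

    def end_session(self, now=None):
        if self._current_start is not None:
            self.sessions.append((self._current_start, now or datetime.now()))
            self._current_start = None

    def total_time(self):
        # Sum the durations of all completed sessions.
        return sum((end - start for start, end in self.sessions), timedelta())

# Example: one 30-minute session, using fixed timestamps for clarity.
audit = UsageAudit()
t0 = datetime(2024, 1, 1, 12, 0)
audit.start_session(t0)
audit.end_session(t0 + timedelta(minutes=30))
print(audit.total_time())  # 0:30:00
```

Surfacing `total_time()` per day or per week in the app's settings screen would be enough to give users the informed choice the text describes.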
In this post we have looked at surveillance, anthropomorphism, and attention. In the next one we'll consider: open v closed code and data; social scoring; accessibility and exclusion.