AI Ethics
January 31, 2024

AI Ethics - Why is it Important and What are the Pain Points?

No one can deny that 2023 was the year of AI, which has become a necessary and accepted tool for businesses. According to Forbes, the global artificial intelligence market is projected to expand at a CAGR of 37.3% from 2023 to 2030, reaching $1.8 trillion.

However, with this sudden influx of AI tools, only a select few are investigating the ethical impact AI will have in the coming years. It can seem like an alien topic, especially when people are still trying to wrap their heads around how the technology works. That is exactly what makes it a pertinent topic to focus on.

What is AI ethics?

As IBM puts it - “Ethics is a set of moral principles which help us discern between right and wrong.”

AI ethics is a multifaceted and evolving field that addresses the moral implications and societal impacts of artificial intelligence. At its core, AI ethics is concerned with ensuring that AI technologies are developed and deployed in a manner that is beneficial, fair, and respectful of human rights and dignity. 

Ethics is an essential and ongoing dialogue in the field of artificial intelligence. It seeks to guide AI technologies in a way that maximizes their benefits while minimizing harm and respecting human values. As AI continues to advance and integrate into various aspects of life, the importance of ethical considerations in AI will only grow more prominent.

And as new scandals come to light, leading companies are springing into action to create ethical guidelines for the use of AI. Poor design, inadequate research, and biased datasets can do a world of harm to a company's reputation and its bottom line.

And there are multiple pain points that a company must look at. These include:

  • Bias
  • Transparency
  • Accountability 
  • Environmental Impact
  • Misuse and Malice
  • Unemployment

Bias

We’ve all come across moments of bias around the world - some subtle, some not so. If every AI system is built on data, what happens if that data is biased?

Bias in AI systems is a critical concern within AI ethics, stemming from the fact that AI algorithms can inadvertently perpetuate and amplify existing societal biases. Models trained on skewed or unrepresentative data are likely to inherit those distortions, leading to unfair or prejudiced outcomes.

For example, facial recognition technologies have faced scrutiny for their disproportionate inaccuracies in identifying individuals of certain racial and ethnic backgrounds. Another well-known case is Amazon’s automated recruitment system, which favoured male candidates over female candidates for technical roles.

One of the clearest examples of biased training data was Microsoft’s chatbot, Tay, which quickly started responding with offensive and discriminatory language after learning from its interactions with users on Twitter.

Algorithmic bias has been identified in criminal sentencing, healthcare, recruitment, media, and more. Bias in AI is more than embarrassing for companies: it can erode trust, cause real harm, and even result in legal action, underscoring the need for a rigorous and ethical approach to AI development.

Transparency

Do you know how ChatGPT arrived at the conclusion it did and why it gave you that particular response? Is it fabricating conclusions or giving you facts?

Companies can clearly determine when their AI models and algorithms are working well. What is less clear is how the AI is working - why is it drawing certain conclusions?

Transparency in AI ethics refers to the clarity and openness with which AI systems and their decision-making processes are designed, implemented, and operated. It's a crucial aspect because AI systems, particularly those based on complex algorithms like deep learning, can often be "black boxes," with decision-making processes that are opaque even to their creators. This lack of transparency can lead to issues in accountability, trust, and fairness. 

Financial services is one area where transparency is vital, especially in credit scoring. A notable case is the controversy surrounding the Apple Card, where allegations of gender bias in credit limit decisions emerged. There were calls for investigations to understand how the AI algorithms determined credit limits. This incident underscores the need for transparent AI systems that can be audited and understood, not only by their developers but also by users and regulators.

Accountability

Who should be held accountable when AI fails or misbehaves? Many scholars in the field believe that accountability is the key to governing AI. Accountability can cover different aspects:

  • The Face of Accountability: Have companies drawn clear lines of responsibility and accountability in the event of a scandal?
  • Legal Compliance: Have companies adhered to existing laws and regulations to avoid disputes and penalties?
  • Ethical Development: Were ethical considerations built into the AI system from the beginning?
  • Copyright: In the case of generative AI, have issues of copyright, ownership and authenticity been addressed? Who owns the rights to the content that was created by AI - the developer of the AI system, the user, or the AI itself?

Ignoring or failing to recognise the importance of accountability could have serious consequences for companies.

A clear example of accountability can be seen in the case of autonomous vehicles (AVs), where questions around accountability are particularly complex and critical. For instance, consider a scenario where an autonomous vehicle is involved in a traffic accident. Determining accountability in such a case involves a multitude of factors: Was the accident due to a flaw in the vehicle's AI algorithms? Did it arise from a failure in the vehicle's sensors or hardware? Or was it a result of unforeseen circumstances that the AI couldn't reasonably be expected to handle?

Environmental Impact

The connection between AI and the environment seems tenuous until you look at how the technology is developed and deployed: training machine learning models requires large amounts of energy. How do we account for this, especially with such scrutiny on a company’s environmental impact (think ESG)?

To start with, if this energy comes from fossil fuels, it creates a net negative impact on the climate. OpenAI researchers found that the amount of computing power used to train the largest AI models has doubled roughly every 3.4 months since 2012. It is estimated that by 2040, emissions from the Information and Communications Technology industry will reach 14% of global emissions.

Another concern is the e-waste produced by AI hardware, which contains hazardous substances such as lead, cadmium, and mercury. These can contaminate soil and water, causing serious health problems. According to the World Economic Forum, the amount of e-waste generated annually will surpass 120 million metric tonnes by 2050.

Misuse and Malice

Stephen Hawking addressed the issue of malice, stating: “The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble."

Although we’re hopefully far away from AI “bad guys”, we can still find them in human form.

The capacity for misuse and malice in AI has long worried experts. Without regulation and accountability, powerful AI systems could be used for nefarious purposes, including privacy breaches, autonomous weapons, economic inequality, social manipulation, the spread of misinformation, and even stock market manipulation. Deepfakes are one such application of AI that has received a lot of attention. These digitally altered videos have been used to spread false information with the goal of harassing, intimidating, or undermining someone.

A UK-based energy company fell prey to this technology when scammers used a deepfake to mimic a CEO’s voice, tricking an executive into authorizing the transfer of roughly £200,000 to a Hungarian bank account.

There is also a variety of AI-powered security attacks, including password cracking, AI-assisted hacking, business email compromise, ransomware, DDoS attacks, intellectual property theft, and more. One of the most discussed AI-assisted attacks targeted TaskRabbit: the website was hacked in 2018, affecting over 3.75 million users and exposing social security numbers and bank account details. The hackers reportedly used a huge botnet controlled by AI.

Unemployment

This ethical dilemma seems most pressing for decision-makers, as many employees currently fear losing their jobs to AI. Automation has already led to job losses in various industries, and AI will continue this trend, taking over roles in healthcare, law, and education. We’re already seeing this in customer service, where AI-powered chatbots and automated phone systems increasingly handle customer complaints. One Indian entrepreneur laid off 90% of his support staff after shifting to chatbots.

So, what can be done to make employees feel more secure in the face of potential job loss? One encouraging sign is job creation: AI is also generating jobs across many industries. Companies can also set up programs to reskill employees so that they can find new opportunities and thrive in an AI-driven environment.

These are some of the big ethical issues companies have to look at when leveraging AI. But the question remains:

How do these ethical issues manifest?


1. The Case of Swedish Police and Privacy

Clearview AI, an American firm, has been plagued by scandal since the beginning of 2019, when it expanded its reach to 26 countries and started working with law enforcement. Clearview’s software gives organizations access to a database of billions of images to be used for facial recognition.

The problem lies in the fact that the company had scraped these photos from various social media platforms and websites without informing users, infringing upon their right to privacy.

When this news broke, the Swedish Authority for Privacy Protection launched an investigation into the Swedish Police and discovered that:

  • Clearview AI had been used by the police on many occasions and, with few exceptions, employees had used it without prior authorization.
  • The police failed to set up and implement sufficient measures to demonstrate that the use of Clearview AI complied with the Criminal Data Act.
  • The police unlawfully processed biometric data for facial recognition.

The Swedish Authority for Privacy Protection imposed an administrative fine of SEK 2,500,000 (approximately EUR 250,000) on the police for infringing on the Criminal Data Act. 

But what about the data shared with the company? Do they still retain it? Is the ethical question around privacy answered? 

2. The Case of Instagram and Bias

In 2020, a study of Instagram’s algorithm revealed that it prioritized pictures of scantily clad women and men. The investigation looked at over 2,400 pictures and found that 362 of the images posted by content creators showed bare-chested men or women in bikinis. The study also found that photos of women in bikinis or underwear were 54% more likely, and photos of bare-chested men 28% more likely, to appear in users’ feeds.

These biases baked into the algorithm have real consequences for content creators on the platform. Pictures that did not reveal skin had a smaller organic reach on Instagram, which could pressure content creators to present themselves in ways that make them uncomfortable.

3. Eating Disorder Helpline and Job Loss

An AI-powered chatbot was mired in controversy when it provided harmful advice to those looking for help. The scandal began when the National Eating Disorders Association (NEDA) announced that it would shut down its human-run helpline and replace it with a chatbot called “Tessa”. However, Tessa’s time at the helpline was brief, as it ended up offering harmful or dubiously sourced information to those using the service.

The advice included telling people to aim for a deficit of up to 1,000 calories per day to lose weight. Users of the helpline said that advice like Tessa’s had contributed to their eating disorders in the first place.

The NEDA helpline workers who were let go in this move expressed their disappointment in a tweet, stating that “a chatbot is no substitute for human empathy.”

So, what should companies do?

AI ethics is not a fixed or static process - as the technology develops, so will the ethics. 

Therefore, many experts believe that companies need to be proactive rather than reactive. This means keeping up to date with changes in regulations and policies, while bringing all stakeholders into open dialogue. Doing so will allow you to build a comprehensive and robust AI ethics architecture within the organization.

You can read our article on creating an AI ethics framework here.

What are your thoughts on AI ethics?

Let us know!
