AI Ethics
March 12, 2024

How to Create an AI Ethics Framework for Your Company

This is part 3 of our series on AI ethics. You can check out our articles on The Pain Points and Problems that Led to AI Ethics and The Case for Transparency and Regulation around AI.

You can’t deny that Artificial Intelligence (AI) is slowly becoming a powerful decision-making force in business. However, AI that remains unchecked and impenetrable presents deep ethical challenges and real-world consequences for organizations. 

A survey by IBM and Oxford Economics surfaced some interesting data:

  • 85% of consumers stated that it was important for companies to factor in ethics if they were using AI to tackle societal issues.
  • In 2021, 75% of executives ranked AI ethics as important.

And as our use of AI evolves, so do the challenges. A recent report by Stanford University’s Institute for Human-Centered Artificial Intelligence found that the number of AI incidents and controversies has risen drastically: the 2021 figure was 26 times higher than in 2012.

Figure: Number of AI incidents and controversies, 2012–2021

This has forced companies and organizations to reorient their approach to AI and the ethical framework required for optimal use.

The W’s of AI ethics

At the heart of AI ethics is one word - impact. 

What kind of impact will an opaque and unregulated AI have on your organization? In order to determine that, companies must establish the basic principles of AI ethics by asking a few important questions:

  • Where do we use AI?  (Context and purpose)
  • When something goes wrong, who is held responsible? (Accountability)
  • How does the AI arrive at a conclusion or decision? (Transparency)
  • What role should AI play in making judgments? (Fairness)
  • Who is impacted by the decisions AI makes? (Bias)

So, what should companies do? 

While the broader implications of AI ethics are important to keep in mind, the framework that you create for your company should be customized, scalable, and long-term. These are the steps that decision-makers could take:

1. Talk to Experts - AI Ethicists 

AI ethics is a wide and varied field, with many factors to consider. To kick-start the process, talk to experts such as AI ethicists: professionals who work with companies on the ethical aspects of developing and implementing AI.

Their responsibilities include:

  • Developing comprehensive guidelines and policies for the ethical development of AI.
  • Conducting an ethics review of all AI projects before they go live - identifying risks and suggesting remedies.
  • Assessing risks and ensuring compliance with existing regulations.
  • Collaborating with teams and employees on all levels to prioritize ethics at each stage of AI development. 
  • Educating team members on ethical practices and creating an environment of ethical consciousness.

Ethicists and AI experts can help you understand what factors to look out for when building your company’s ethical compass.

2. Review your existing AI ethics architecture

Companies aren’t entirely in the dark when it comes to AI ethics. Some have already set up data governance boards that oversee compliance, security, privacy, and other risks. However, given the pace of change in the AI space, it’s important to check the efficacy of your existing policies, guidelines, and documents.

  • Does the existing infrastructure align with regulations and compliance requirements?
  • Does it align with the company’s core values and mission?
  • Have all factors and stakeholders been considered? 
  • Are there redundancies and fail-safes? Do you have a system to monitor violations?

Identifying the gaps in your existing architecture can provide a blueprint and timeline for the implementation of a robust ethics policy. 

3. Identify the risks you can run into with AI

It all starts with risk. Understanding the various points of risk associated with AI use will set the foundation for your framework. While an AI expert can offer you broad guidance in terms of risk, you need to determine the specific pain points that are relevant to your company. Can the use of AI products or services lead to concerns around:

  • Data responsibility and privacy
  • Fairness
  • Explainability
  • Transparency
  • Environmental sustainability
  • Inclusion
  • Accountability
  • Trust
  • Technology misuse

Where do the risks lie? What potential scandals or worst-case scenarios can you envision? This is a vital starting point for determining the scope of your AI ethics policies.

4. Identify the building blocks of AI ethics policies for your industry

To build a comprehensive AI ethics architecture, you should:

  • Determine the external and internal stakeholders who would be affected.
  • Set up a governing structure and guidelines for how that structure will continue through changes in personnel and circumstances.
  • Establish performance standards.
  • Set up a quality assurance program to ensure that the ethical standards you have chosen remain effective.
  • Research the existing regulations and compliance requirements in your industry.

Most importantly, these measures should be customized to suit your industry. For instance, in healthcare, you should look closely at privacy measures and methods to eradicate potential bias.

5. Make sure that your AI policies are useful for employees at all levels

The effectiveness of AI ethics policies hinges on their applicability at every level, from C-suite executives to front-line staff. A robust AI ethics framework should account for the employees to whom it will apply. Here's how companies can achieve this:

  • Conduct a Needs Assessment: Understand the specific requirements, knowledge gaps, and challenges that different employee groups may face when dealing with AI technologies. This requires engaging with various departments, from technical teams to non-technical staff. This will also help identify employees who require upskilling.
  • Tailored Training: Implement AI ethics training programs that cater to employees' different roles and responsibilities. High-level executives may require training on strategic decision-making with AI, while developers and data scientists may need guidance on the technical aspects of ethical AI development. Make use of real-world case studies and use cases.
  • Simplicity: Strive for simplicity and clarity in your policy language. Avoid complex technical terminology or legal jargon. Ensure that employees, regardless of their technical expertise, can easily comprehend and apply the guidelines.
  • Feedback Mechanisms: Establish feedback channels that allow employees to voice concerns, seek clarification, or report potential AI ethics violations.
  • Accountability: Clearly outline roles and responsibilities for AI ethics compliance across the organization, potentially through an accountability framework. This clarity ensures that all employees understand their role in maintaining ethical AI practices.

6. Monitor for any potential impact 

AI products can be developed with goodwill and high ethical standards, but there is a risk of people using the same product for unethical purposes. That’s why it’s important to monitor the impact of the product in the market through qualitative and quantitative research:

  • Stakeholder Engagement: Include stakeholders early in the product development process. This helps them understand the product's capabilities and limitations and contributes to responsible use.
  • User-Centered Approach: Ensure that your product aligns with user expectations by setting clear boundaries. This minimizes the risk of unethical use. 
  • Iterative Improvement: Regularly assess the product's ethical impact, gather feedback, and make iterative improvements to stay aligned with evolving ethical concerns.

This approach will help your company create AI ethics policies that proactively address ethical risks and promote responsible AI use.

7. Access resources and guidance from organizations that promote AI ethics

Sometimes, you don’t need to reinvent the wheel. Several organizations have put together resources for implementing AI ethics. Refer to them and see if they can accelerate your efforts:

  • Algorithm Watch - A non-profit that advocates for traceable and explainable algorithmic decision-making.
  • CHAI - The Center for Human-Compatible Artificial Intelligence (CHAI), a multi-institution research center that promotes the development of trustworthy AI.
  • AI Now Institute - A non-profit research institute at New York University that examines the social implications of AI.

But where do you draw the line?

Ultimately, the question of ethics comes down to the decisions you want to make for your company. These are broad guidelines for setting up an ethical framework; questions like “What exactly constitutes fairness in business?” or “Who could this technology harm?” will shape your choices. It’s important to safeguard your business from industry-specific vulnerabilities and ethical risks.

Have you had any ethical dilemmas when creating a framework?

Let us know.
