How to design ethics into your organization

Sue Trombley
March 1, 2022 | 7 mins

During a recent Iron Mountain Executive Exchange event, we had the privilege of hearing from Cansu Canca, Ph.D., Founder and Director of AI Ethics Lab, on the topic of responsible AI and ethics by design. Dr. Canca shared her perspective on how organizations should be building ethics into their AI systems. There was so much to discuss that she later sat down with me to carry on the conversation.

Sue: It would seem that we are at the beginning stages of how businesses, governments, and society will employ AI. What are the basic ethical considerations for determining use cases and developing algorithms?

Cansu: Let's take a step back and think about the role of ethics in AI. Ethics and ethical decision-making are, and have always been, part of the development and use of AI, whether we explicitly paid attention to them or not. As soon as AI systems engage with human decision-making, human well-being, and the distribution of benefits and burdens within society, we are in the realm of ethics. For example, an AI system that determines flight frequency and pricing has ethical judgments embedded within it, even if the developers or users never think about them. Is it fair to offer different prices to different customers according to their login region or their interest in a particular flight (as traced via cookies)? What about the ethics of building AI systems to predict customers' emotional vulnerability and offer them higher flight prices when they are most likely to make impulse purchases?

Not taking ethics into consideration explicitly only increases the risk of ethical errors in AI systems and in their use. All stages of AI development and use, from determining the goal of the AI system to the type of data we use, the user interface we design, and the purpose for which we use the AI, involve ethically loaded decisions. Once we understand that, we can build comprehensive AI ethics tools, with use cases and systems for decision-making. These tools will not be one-size-fits-all but rather customized to each sector and organization.

Sue: What does it take for an organization to "bake" AI ethics into its business ethics training, awareness, and policy? In effect, to have an "ethics by design" mindset?

Cansu: Effective integration of AI ethics into the innovation cycle and into business operations requires a comprehensive approach, which includes (1) an organizational AI ethics strategy, (2) embedded AI ethics analyses for products and projects, and (3) AI ethics training and skill building at all levels of the organization. These three components also constitute the main structure of our PiE (Puzzle-solving in Ethics) Model, developed at AI Ethics Lab. When implementing the PiE Model, organizational culture is crucial. We first assess the organization's awareness of, and readiness for, the ethical aspects of its AI development and use. We also want to break the impression of "ethics as policing" and introduce the collaborative, problem-solving, and design-thinking aspects of ethics practice. Only after we build this awareness and interest across all levels of the organization, from the C-suite to the development team, do we implement the full cycle of the PiE Model.

Sue: Who in an organization's leadership typically is accountable for decisions made about the ethical use of AI?

Cansu: We can think of ethics questions in two categories: simple questions, where the ethical choice is clear, and complex questions, where we do not yet know what the ethically right decision or action is. When the ethical choice is clear, leadership has the capacity and the responsibility to implement the ethical use of AI. When faced with complex ethics questions, ethics experts should step in to define the risks, lay out the choices, and help determine the right action. Depending on the level of complexity and risk involved, these decisions might be guided by AI ethics experts or an AI ethics advisory board, who might also share the responsibility.

Sue: What do you see as the biggest challenge for the ethical use of AI, especially as it relates to the broader ESG (environmental, social, governance) initiatives?

Cansu: It's difficult to single out the biggest challenge for the ethical use of AI, but one of the major ones is human-AI interaction. To ensure the ethical use of complex tools, users need to understand the limitations of those tools so that they can still exercise their own judgment. AI, in its current state, is far from perfect, and its development requires human input. If the human user cannot intelligently engage with AI, then we risk having imperfect AI systems dictate erroneous decisions to human decision-makers. For any "good" use of AI systems, including their use for and within ESG initiatives, adequate human-AI interaction is crucial.

Sue: Can you envision AI tools actually helping to identify potential areas where ethics are being compromised?

Cansu: Of course. AI can provide us with insights that help us uncover, or devise solutions for, existing ethical problems. For example, AI can and does help us detect and expose racial or gender discrimination within various risk assessment methods and tools that are widely used in areas ranging from healthcare to finance. Laying out these biases also helps us devise solutions to overcome them. But AI also has the potential to adopt these biases, which constitutes one of its major ethics risks. If we are unaware of the biases embedded within the AI systems that we use, then we risk perpetuating and amplifying these ethical errors. A related example is AI systems that help us gain insights about, and provide better services to, groups that are historically disadvantaged. These could be financial insights about small businesses, women, and minorities (who are historically underserved) that help provide them with better financial opportunities, or medical insights about women, children, and people of color (who are historically underrepresented in clinical trials) that help provide them with better medical care.
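To make this point concrete, here is a minimal, hypothetical sketch of one common kind of bias check: comparing a model's favorable-outcome rates across demographic groups. Everything below (the group names, decisions, and the 0.8 threshold) is illustrative, and this is just one of many fairness metrics an audit might use, not a method from AI Ethics Lab.

```python
# Hypothetical bias check: compare favorable-outcome rates by group.
from collections import defaultdict

def selection_rates(outcomes):
    """Favorable-outcome rate per group, from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Toy audit log of model decisions: (group label, favorable outcome?).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
# Disparate impact ratio: worst-off group's rate over best-off group's.
# A rough heuristic (the "four-fifths rule") flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio = {ratio:.2f}")
```

On this toy data the ratio comes out to 0.50, well under the 0.8 heuristic, so the model would be flagged for exactly the kind of closer human review Dr. Canca describes.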

For more about design, listen to our pre-recorded webinar The Resiliency of Privacy by Design

Or, register here for our upcoming March 24 event that will explore the metaverse and what happens as innovation continues to outpace regulation.  
