
Building Trust in AI: Governance and Transparency in Data Management


Introduction

Cities and local governments around the world continue to seek ways to incorporate AI into their citizen services. In Buenos Aires, a chatbot named Boti evolved alongside Generative AI, reaching a record 11 million conversations in January 2022 and becoming a preferred channel for citizens. Singapore has more than 100 Generative AI solutions actively at work across the city-state. Amsterdam is taking natural language processing to a whole new level, swapping out words for molecules to create new, sustainable materials. And deep in the heart of Texas, a startup is training autonomous vehicles exclusively on generative AI.

However, as AI continues to transform the business landscape, it also brings challenges, particularly around trust and transparency. How can businesses ensure that their AI systems are ethical, responsible, and transparent enough to gain the trust of customers, regulators, and other stakeholders? Case in point: nine out of ten mayors in cities around the world want to engage with Generative AI, yet only 2% are actually doing so.

In this blog, we will explore some of the key questions and considerations for building trustworthy AI, based on the insights from the PwC report Sizing the prize: What's the real value of AI for your business and how can you capitalise?

We will also outline some of the governance mechanisms and best practices that can help businesses to manage and control their AI applications, and to foster a culture of trust and transparency in their data management.

Key Questions for Building Trustworthy AI

According to the PwC report, AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. But to realize this potential, businesses must address the societal and ethical implications of AI and build stakeholder trust in their AI solutions.

The study outlines some of the key questions that organizations should ask themselves, such as:

  • What are the objectives and values that guide the design and development of AI systems?
  • What are the potential impacts and risks of AI systems on customers, employees, suppliers, partners, regulators, and society at large?
  • How can AI systems be aligned with the legal, regulatory, and ethical standards and norms?
  • How can AI systems be made transparent, explainable, and accountable so that humans can understand and challenge their decisions and actions?
  • How can AI systems be monitored, audited, and updated to ensure their performance, reliability, and security?
  • How can AI systems be integrated with human oversight and collaboration to ensure human dignity, autonomy, and agency?

These questions are relevant to the technical aspects of AI and to its strategic, organizational, and cultural aspects. Businesses need to adopt a holistic and human-centric approach to AI, one that considers the needs and expectations of all the stakeholders involved and balances the benefits and risks of AI in a responsible and ethical way.

In a recent #shifthappens podcast episode, Peter Voss, the CEO and Founder of Aigo.AI and one of the original group of technologists who coined the term Artificial General Intelligence (AGI) in 2002, warned that, as more organizations seek to achieve AGI, it is essential that companies fully understand their limitations. However, if companies can get it right, he believes the impact can be profound: “To me, that is the future I see with AGI: the phenomenal improvement of the human condition.”

Peter Voss - Episode 77: A Look into the Future with Artificial General Intelligence

Governance Mechanisms

To address the questions PwC raises and to build trust and transparency in AI, as Peter Voss suggests, businesses need to establish effective governance mechanisms that can oversee and control the development and deployment of AI systems.

AI TRiSM (AI Trust, Risk, and Security Management)

Specifically, organizations should:

  • Source, cleanse, and control key data inputs: Data is the fuel of AI, and its quality, accuracy, and relevance are essential for the performance and reliability of AI systems. Businesses need robust data governance processes and policies that can source, cleanse, and control the data inputs for their AI systems and prevent data bias, errors, and breaches (a minimal validation sketch follows this list).
  • Integrate data and AI management: Data and AI are closely interrelated, and their management should be integrated and aligned. Businesses need to ensure that they have clear roles and responsibilities for data and AI management, and that they have effective coordination and communication between the data and AI teams and functions.
  • Adopt ethical principles and frameworks: Ethical principles and frameworks can provide guidance and direction for the design and development of AI systems and can help to align AI with the values and objectives of the business and its stakeholders. Businesses need to adopt ethical principles and frameworks that are relevant and appropriate for their domain and jurisdiction and that address the specific challenges and dilemmas of AI.
  • Implement transparency and explainability tools and methods: Transparency and explainability are key for building trust and accountability in AI, and for ensuring that AI systems can be understood and challenged by humans. Businesses need to implement tools and methods that can make their AI systems transparent and explainable, such as documentation, visualization, testing, verification, validation and certification.
  • Establish monitoring and auditing mechanisms: Monitoring and auditing mechanisms can help ensure the performance, reliability, and security of AI systems and identify and correct any issues or problems that may arise. Businesses need to establish monitoring and auditing mechanisms that can track and measure the inputs, outputs, and outcomes of their AI systems and provide feedback and improvement suggestions (see the monitoring sketch below the list).
  • Create human oversight and collaboration mechanisms: Human oversight and collaboration mechanisms can help to ensure human dignity, autonomy and agency in the use of AI and to leverage the complementary strengths of humans and machines. Businesses need to create human oversight and collaboration mechanisms that can involve humans in the design, development and deployment of AI systems, and that can enable humans to intervene, override or correct AI decisions and actions.
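
To make the first of these practices concrete, here is a minimal sketch of automated data-input checks in Python. It assumes a hypothetical customer dataset with columns such as customer_id, age, income, and gender, and it uses pandas purely for illustration; a real data governance pipeline would add lineage tracking, access controls, and remediation workflows on top of checks like these.

```python
# A minimal sketch of automated data-input checks (illustrative assumptions:
# a customer table with "customer_id", "age", "income", and "gender" columns).
import pandas as pd


def check_completeness(df: pd.DataFrame, max_missing_ratio: float = 0.05) -> list[str]:
    """Flag columns whose share of missing values exceeds the threshold."""
    missing = df.isna().mean()
    return [col for col, ratio in missing.items() if ratio > max_missing_ratio]


def check_duplicates(df: pd.DataFrame, key_columns: list[str]) -> int:
    """Count duplicate records on the business key (e.g. a customer ID)."""
    return int(df.duplicated(subset=key_columns).sum())


def check_group_balance(df: pd.DataFrame, group_column: str, min_share: float = 0.1) -> list[str]:
    """Flag demographic groups that are underrepresented in the data."""
    shares = df[group_column].value_counts(normalize=True)
    return [str(group) for group, share in shares.items() if share < min_share]


if __name__ == "__main__":
    data = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "age": [34, None, 29, 51],
        "income": [52_000, 61_000, 61_000, None],
        "gender": ["F", "M", "M", "M"],
    })
    print("Columns with too many missing values:", check_completeness(data))
    print("Duplicate records:", check_duplicates(data, ["customer_id"]))
    print("Underrepresented groups:", check_group_balance(data, "gender", min_share=0.3))
```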

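In the same spirit, the sketch below illustrates one way the monitoring and auditing mechanism might look in code: a small, hypothetical monitor that logs predictions alongside real-world outcomes and raises alerts for a human reviewer when the prediction distribution drifts from its validation baseline or accuracy falls. The class name, thresholds, and alert messages are illustrative assumptions rather than a prescribed implementation.

```python
# A minimal sketch of model monitoring and auditing (illustrative assumptions:
# a binary classifier whose predictions and eventual outcomes are logged here).
from dataclasses import dataclass, field


@dataclass
class ModelMonitor:
    expected_positive_rate: float               # rate observed during validation
    tolerance: float = 0.10                     # allowed absolute deviation
    min_accuracy: float = 0.80                  # floor before an alert is raised
    predictions: list[int] = field(default_factory=list)
    outcomes: list[int] = field(default_factory=list)

    def record(self, prediction: int, outcome: int) -> None:
        """Log one prediction together with the real-world outcome."""
        self.predictions.append(prediction)
        self.outcomes.append(outcome)

    def positive_rate(self) -> float:
        return sum(self.predictions) / len(self.predictions)

    def accuracy(self) -> float:
        matches = sum(p == o for p, o in zip(self.predictions, self.outcomes))
        return matches / len(self.predictions)

    def audit(self) -> list[str]:
        """Return alerts a human reviewer should investigate."""
        if not self.predictions:
            return ["No predictions have been recorded yet."]
        alerts = []
        if abs(self.positive_rate() - self.expected_positive_rate) > self.tolerance:
            alerts.append("Prediction distribution drifted from the validation baseline.")
        if self.accuracy() < self.min_accuracy:
            alerts.append("Accuracy on observed outcomes fell below the agreed floor.")
        return alerts


if __name__ == "__main__":
    monitor = ModelMonitor(expected_positive_rate=0.30)
    for prediction, outcome in [(1, 1), (1, 0), (1, 1), (0, 0), (1, 1)]:
        monitor.record(prediction, outcome)
    for alert in monitor.audit():
        print("ALERT:", alert)
```
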
Bringing It All Together

AI is a transformative force offering immense value and competitive advantage. However, to harness its full potential, businesses must ensure their AI systems are ethical, responsible, and transparent. By adopting a holistic and human-centric approach and establishing effective governance mechanisms, businesses can foster a culture of trust and transparency, ensuring AI's responsible integration into our future.

By embedding these principles into their AI strategies, businesses can not only capitalize on AI's benefits but also navigate its challenges responsibly, building a future where AI works for everyone.
