
AI for Board Members: A Practical Guide to Governance

Written by Rob Farrell | Mar 24, 2025 9:15:00 AM

Artificial Intelligence (AI) is transforming businesses across industries, offering numerous opportunities while presenting significant challenges. Here's an overview of the key opportunities and risks, along with real-world examples and practical guidance for boards.

AI: Past, Present and Possible Future

Artificial Intelligence (AI) is a rapidly evolving field of computer science that enables machines to perform tasks that typically require human intelligence. These tasks include visual perception, speech recognition, decision-making, and language translation. AI encompasses various subfields, including machine learning, deep learning, and natural language processing, which collectively allow computers to learn from data, recognize patterns, and make intelligent decisions.

In recent years, the AI industry has experienced explosive growth. The global AI market, already valued in the hundreds of billions of dollars, is projected to reach trillions within the next decade, driven by increased investment, technological advances, and widespread adoption across diverse sectors such as healthcare, finance, manufacturing, and retail.

That momentum is expected to continue over the next five years as AI is integrated into more aspects of business and daily life, from advanced web search engines and recommendation systems to autonomous vehicles and creative tools. Nor is the growth confined to particular regions or industries: major markets around the world are forecast to expand substantially over the coming decade.

As AI continues to evolve and mature, it promises to reshape industries, drive innovation, and create new opportunities for businesses and individuals alike. However, this rapid growth also brings challenges, including ethical considerations, data privacy concerns, and the need for responsible AI deployment. For board members and business leaders, understanding the current state and future trajectory of AI is crucial to making informed decisions and navigating an AI-driven future effectively.

AI 101

Machine Learning Systems – Systems built from one or more machine learning models that learn patterns from existing data and apply them to produce outputs on new, unseen data.
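To make that concrete, here is a minimal sketch of the learn-then-predict loop, assuming scikit-learn and an entirely illustrative churn dataset; real systems differ in scale, not in shape:

```python
# Minimal sketch of a machine learning system: learn from existing
# (historical) data, then generate outputs for new, unseen data.
# Assumes scikit-learn is installed; the data here is illustrative only.
from sklearn.ensemble import RandomForestClassifier

# Historical data: [monthly_spend, support_tickets] -> churned (1) or not (0)
X_history = [[20, 5], [90, 0], [15, 7], [110, 1], [30, 4], [95, 2]]
y_history = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_history, y_history)   # learn patterns from existing data

# New records the system has never seen
X_new = [[25, 6], [100, 1]]
print(model.predict(X_new))       # outputs for the new data
```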

Expert Systems – Computer-based decision-making systems capable of solving complex problems in specific domains or areas of expertise. Expert systems can advise, diagnose, instruct and assist humans in decision-making, predict results, interpret input, and suggest alternatives, amongst other capabilities.

Natural Language Systems – Systems that perform natural language processing (NLP). Organisations use NLP to read text, transcribe speech (voice to text), interpret and analyse language-based data, measure sentiment, and determine which parts are important.
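As a toy illustration of the sentiment-measurement use case, here is a pure-Python sketch using a hand-written word list; production NLP systems use trained language models rather than fixed lexicons:

```python
# Toy lexicon-based sentiment scorer: counts positive vs negative words.
# The word lists are illustrative assumptions, not a production lexicon.
POSITIVE = {"great", "helpful", "fast", "love", "excellent"}
NEGATIVE = {"slow", "broken", "terrible", "hate", "poor"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The support team was great and fast"))    # positive
print(sentiment("Checkout is slow and the app is broken")) # negative
```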


Automated Decision-making Systems – Systems capable of making decisions by automated means, without human involvement. These systems can process and analyse large-scale data from various sources to reach a decision. They are becoming widely used in public administration e.g. by governments, and in business, health, education, law and other sectors, with varying degrees of human intervention or oversight.

Virtual Agents and Chatbots – Chatbots are rule-based software designed to recognise and respond to selected keywords or phrases. Virtual agents extend chatbot functionality, using AI, including natural language processing, to understand free-form human language.
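A minimal sketch of the rule-based approach, with entirely illustrative rules; a virtual agent would layer NLP on top so users are not limited to exact keywords:

```python
# Minimal rule-based chatbot of the kind described above: it matches
# keywords in the user's message against canned responses. The rules
# are illustrative assumptions.
RULES = {
    "refund": "You can request a refund from the Orders page.",
    "hours": "We are open 9am to 5pm, Monday to Friday.",
    "human": "Connecting you to a live agent now.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand. Try asking about refunds or hours."

print(reply("What are your opening hours?"))
```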

Recommendation Systems – Systems that suggest products, services, or information to users based on analysis of data, patterns and trends.
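The sketch below shows the core idea behind one common approach, collaborative filtering: find the most similar user and suggest items they rated that you haven't seen. The ratings are illustrative assumptions:

```python
# Sketch of a collaborative-filtering recommender: suggest items that
# the most similar user rated. Ratings below are illustrative.
from math import sqrt

ratings = {
    "alice": {"laptop": 5, "mouse": 4, "desk": 1},
    "bob":   {"laptop": 5, "mouse": 5, "lamp": 4},
    "carol": {"desk": 5, "lamp": 2},
}

def similarity(a, b):
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in shared)
    na = sqrt(sum(v * v for v in ratings[a].values()))
    nb = sqrt(sum(v * v for v in ratings[b].values()))
    return dot / (na * nb)   # cosine similarity between users

def recommend(user):
    peers = sorted((similarity(user, u), u) for u in ratings if u != user)
    _, best_peer = peers[-1]                     # most similar user
    return [i for i in ratings[best_peer] if i not in ratings[user]]

print(recommend("alice"))   # items Alice's nearest neighbour rated
```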

AI-powered Robotics – ‘Robots’ or physical systems equipped with various sensors, e.g. proximity sensors and computer vision, that allow them to move and execute tasks in dynamic environments.

FRT (Facial Recognition Technologies) – Any system or device that is capable of determining whether an image contains a face. Often FRT uses biometric data to verify someone’s identity, to identify an individual or to analyse characteristics about a person. 

 

What Directors Need to Know

In light of the staggering increase in AI use, directors are under mounting pressure to ensure their organisations are prepared to use AI in a responsible manner. That pressure sits alongside directors' existing statutory duties, which include the obligations to exercise due care, skill and diligence and to act in good faith and in the best interests of the company.

Directors must be aware of the legal and regulatory implications of AI use, even in the absence of explicit AI laws. Existing legislation on privacy, human rights, and anti-discrimination may apply to AI applications. It's crucial to understand potential legal violations and implement mitigating processes to ensure compliance.

Implementing appropriate AI governance is essential for directors, as AI impacts all aspects of their oversight duties. This includes managing data, models, and people involved in AI implementation. Effective risk assessment in today's organizations necessitates addressing the impact of AI.

Directors must carefully consider the risks associated with AI use, evaluating its impact on society, people, and the organization. Special attention should be given to key stakeholders such as employees and customers. It's equally important to weigh the risks of not adopting AI solutions against the potential benefits.

Ongoing assurance of AI systems is critical. AI risk management is not a one-time task but requires regular monitoring and evaluation. This includes routine assurance of both the AI systems and the governance framework to ensure continued compliance with regulations and adherence to best practices.

Did You Know... Directors may be personally exposed to legal liability if they fail to uphold their statutory duties when overseeing the use of AI in their organisation. Ultimately, they remain responsible for any final decision-making, even where AI tools were used to reach those decisions.

What Directors Need to Know (PwC)

 

AI Governance And Regulation

Artificial intelligence (AI) is revolutionizing industries across the globe, with leaders recognizing its potential to enhance productivity, creativity, quality, and innovation. This recognition has led to a surge in demand for AI technologies, particularly generative AI. The transformative power of generative AI is evident in various sectors, from inspiring new designs in furniture to personalizing marketing strategies and accelerating drug discovery in pharmaceuticals. However, the adoption of AI also brings concerns about risks such as bias, safety, security, and potential reputational damage. Many leaders acknowledge that effectively mitigating these risks can provide a competitive advantage and contribute significantly to organizational success. As a result, the ethical and responsible adoption of AI has become a key consideration, driving the rapid emergence and implementation of AI governance practices.

AI is already subject to a range of regulations that extend beyond the technology itself, encompassing areas such as privacy, anti-discrimination, liability, and product safety. Moreover, AI-specific regulatory activity is expanding rapidly. A significant milestone was reached in 2024 with the passing of the European Union's Artificial Intelligence Act (EU AI Act), which is expected to influence similar legislation globally, much like the General Data Protection Regulation (GDPR) did for data privacy. The increasing focus on AI in policy discussions is evident, with mentions in legislative proceedings doubling in 2023 compared to the previous year. Some regions, like China, have introduced measures specifically targeting generative AI, such as the Interim Administrative Measures for Generative Artificial Intelligence Services.

The AI regulatory landscape is further shaped by an increase in standards activities and cross-jurisdictional collaboration. Notable initiatives in this space are being driven by organizations such as the Organisation for Economic Co-operation and Development (OECD), the US National Institute of Standards and Technology (NIST), the United Nations Educational, Scientific and Cultural Organization (UNESCO), the International Organization for Standardization (ISO), and the Group of Seven (G7). These collaborative efforts aim to establish comprehensive frameworks and guidelines for the responsible development and deployment of AI technologies, ensuring that as AI continues to advance, it does so in a manner that is ethical, safe, and beneficial to society as a whole.

Many organizations are adopting self-governance approaches for AI to align with their values and build credibility. This often involves meeting ethical standards that go beyond regulatory requirements. To achieve this, organizations are leveraging voluntary frameworks such as the US NIST AI Risk Management Framework, Singapore's AI Verify framework and toolkit, and the UK AI Safety Institute's open-source Inspect AI safety testing platform.

AI Governance Trends To Watch (WEF)

Effective AI self-governance combines organizational management systems and automated technical controls. The ISO/IEC 42001 international standard provides guidance on implementing strong organizational controls for AI systems.

There are a number of existing compliance obligations which directors will need to address. These fall under the following headings:
● Privacy & cyber security
● Discrimination in recruitment
● Intellectual property
● Employee relations and health and safety considerations
● Consumer protection
● Competition considerations
● Duty of care and negligence

 

A Possible Responsible AI Framework

Artificial Intelligence (AI) is transforming industries, but with great power comes great responsibility. PwC’s Responsible AI Framework provides a structured approach to ensuring AI is used ethically, securely, and effectively. This framework is built on four key pillars: Strategy, Control, Responsible Practices, and Core Practices.

Strategy: Aligning AI with Organizational Values

AI should reflect an organization’s values and ethical considerations. This involves two crucial aspects:

  • Data & AI Ethics: Organizations must consider the moral implications of AI, ensuring responsible data use.

  • Policy & Regulation: Staying ahead of public policy and regulatory trends is vital to compliant AI deployment.

Control: Governance and Compliance

Ensuring AI operates within legal and ethical boundaries requires strong control mechanisms:

  • Governance: Oversight is necessary for AI implementation across different levels of the organization.

  • Compliance: Organizations must adhere to industry regulations and internal policies.

  • Risk Management: Identifying and mitigating AI-related risks helps prevent unintended consequences.

Responsible Practices: Enhancing AI Performance and Fairness

To build reliable and fair AI, responsible practices are essential:

  • Interpretability & Explainability: AI decision-making should be transparent and understandable.

  • Sustainability: AI should minimize negative environmental and social impacts.

  • Robustness: Ensuring AI systems are resilient and perform consistently.

  • Bias & Fairness: AI must be tested against fairness standards to prevent discrimination (a minimal example of such a test follows this list).

  • Security, Privacy, and Safety: AI must be designed with cybersecurity measures, data privacy protections, and safeguards to prevent harm.
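As one concrete example of the fairness testing mentioned above, the sketch below computes the demographic-parity ratio between two groups and applies the common 80% rule of thumb; the decisions, group labels, and threshold are illustrative assumptions:

```python
# Sketch of one common fairness test: compare the rate of favourable
# outcomes across groups (demographic parity / the "80% rule").
# Decisions and group labels below are illustrative assumptions.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = approved
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def approval_rate(group):
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("a"), approval_rate("b")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group a: {rate_a:.0%}, group b: {rate_b:.0%}, ratio: {ratio:.2f}")
if ratio < 0.8:   # common rule-of-thumb threshold
    print("Potential disparate impact - investigate before deployment.")
```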

Core Practices: Establishing a Strong AI Foundation

To maintain AI effectiveness, organizations should focus on:

  • Problem Formulation: Clearly identifying the problem AI is solving to ensure its necessity and impact.

  • Standards: Following industry best practices for AI deployment.

  • Validation: Continuously improving AI models through iteration and evaluation.

  • Monitoring: Ongoing assessment to detect and mitigate risks.

Responsible AI Framework (created in Napkin AI)

Opportunities of AI in Business

Enhanced Decision-Making: AI-powered analytics process vast amounts of data to inform strategic decisions. For instance, a major bank uses AI to analyze customer data and provide personalized financial advice, helping customers make smarter financial decisions.

Operational Efficiency: AI automates routine tasks, reducing costs and freeing up human resources. A large telecommunications company created a virtual assistant, resulting in a leap in customer self-service interactions and a significant drop in call volume.

Personalized Customer Experiences: AI enables hyper-personalization of products and services. A leading e-commerce platform uses AI to suggest products to customers based on their previous purchases and browsing behavior, increasing sales and customer satisfaction.

Innovation Acceleration: AI speeds up research and development processes. A large pharmaceutical company uses AI to boost research productivity, with the potential to accelerate drug development and improve its business processes.

Predictive Maintenance: AI predicts maintenance needs in industries with physical assets. A prominent automobile manufacturer uses AI-driven analytics to improve production efficiency and predict maintenance needs, resulting in higher-quality vehicles and reduced time to market.

Supply Chain Optimization: AI enhances forecasting accuracy and streamlines logistics. A major retailer uses AI for demand forecasting and to reduce food waste across its supply chain.

Fraud Detection and Security: Advanced AI algorithms detect patterns indicative of fraud or security threats. In the finance industry, AI is enhancing fraud detection and risk management.
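One way such pattern detection can work is unsupervised anomaly detection: a model learns what normal transactions look like and flags outliers for human review. A minimal sketch, assuming scikit-learn and illustrative transaction data:

```python
# Sketch of anomaly-based fraud detection: an unsupervised model learns
# what "normal" transactions look like and flags outliers for review.
# Assumes scikit-learn; the transactions are illustrative.
from sklearn.ensemble import IsolationForest

# Features: [amount, hour_of_day] - mostly small daytime purchases
normal = [[25, 12], [40, 14], [18, 9], [33, 16], [27, 11], [45, 15],
          [22, 13], [38, 10], [30, 17], [26, 12]]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal)

candidates = [[31, 13], [9500, 3]]   # second one: very large, at 3am
for tx, flag in zip(candidates, detector.predict(candidates)):
    status = "FLAG FOR REVIEW" if flag == -1 else "ok"
    print(tx, status)
```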

 

Challenges and Risks of AI Implementation

Ethical concerns surrounding AI systems perpetuating or amplifying biases raise questions about fairness and equity. This challenge requires organizations to implement robust ethical guidelines and conduct regular audits of AI systems to identify and address potential biases, ensuring that AI decision-making processes are fair and unbiased across all user groups.

Data privacy issues arise from the vast amounts of data required for AI systems, raising concerns about data protection and privacy. To address this, organizations must develop stringent data governance policies and ensure compliance with relevant data protection regulations, implementing measures such as data anonymization and encryption techniques where appropriate.
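As a flavour of what such measures look like in practice, here is a minimal pseudonymisation and masking sketch; the hard-coded salt is purely illustrative, and a real deployment would manage secrets and keys properly:

```python
# Sketch of simple data-protection measures mentioned above:
# pseudonymise a customer ID with a salted hash, and mask an email.
# The hard-coded salt is an illustrative assumption; in practice it
# would come from a secrets manager with proper key management.
import hashlib

SALT = b"example-salt-rotate-me"

def pseudonymise(customer_id: str) -> str:
    return hashlib.sha256(SALT + customer_id.encode()).hexdigest()[:16]

def mask_email(email: str) -> str:
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

record = {"id": "CUST-10042", "email": "jane.doe@example.com"}
safe = {"id": pseudonymise(record["id"]), "email": mask_email(record["email"])}
print(safe)
```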

Job displacement due to automation may lead to workforce restructuring, necessitating careful change management. Organizations should invest in reskilling and upskilling programs to help employees transition to new roles created by AI adoption, fostering a culture of continuous learning and adaptation within the workforce.

Algorithmic transparency can be challenging due to the "black box" nature of some AI systems, making it difficult to explain decisions. To mitigate this, organizations should prioritize the development of explainable AI models and maintain clear documentation of AI decision-making processes, enabling stakeholders to understand and trust AI-driven decisions.
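One practical alternative to the black box, at least for lower-stakes decisions, is an inherently interpretable model whose per-feature contributions can be reported with every decision. The weights, features, and threshold below are illustrative assumptions, not a real credit policy:

```python
# Sketch of an inherently interpretable model: a linear score whose
# per-feature contributions explain each decision. Weights and the
# applicant record are illustrative assumptions.
WEIGHTS = {"income_k": 0.4, "years_employed": 1.2, "missed_payments": -2.5}
THRESHOLD = 30.0

def decide(applicant: dict):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    return decision, score, contributions

decision, score, why = decide(
    {"income_k": 65, "years_employed": 4, "missed_payments": 1}
)
print(decision, round(score, 1))
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.1f}")   # human-readable rationale
```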

Regulatory compliance with evolving AI regulations requires vigilant monitoring and adaptation. Establishing a dedicated team to monitor AI-related regulations and ensure ongoing compliance is crucial for organizations to navigate the complex and rapidly changing regulatory landscape surrounding AI technologies.

Cybersecurity risks arise as AI systems can be vulnerable to adversarial attacks or manipulation. Implementing robust cybersecurity measures specifically designed for AI systems and conducting regular security audits are essential to protect AI infrastructure and data from potential threats.

Implementation challenges in integrating AI into existing systems and processes can be complex and resource-intensive. Developing a phased implementation plan, starting with pilot projects to gain experience and demonstrate value, can help organizations overcome these challenges and successfully integrate AI into their operations.

New Policies to Consider

AI Ethics and Responsible Use: Develop comprehensive guidelines for ethical AI development and deployment, addressing issues such as bias, fairness, and transparency. This policy should outline principles for responsible AI use, including respect for human rights, non-discrimination, and accountability. It should also establish processes for ethical review of AI projects, ongoing monitoring of AI systems for potential ethical issues, and mechanisms for addressing ethical concerns raised by employees or stakeholders. The policy should be regularly updated to reflect evolving ethical standards and technological advancements in AI.

AI Risk Management and Mitigation: Create a comprehensive framework for identifying, assessing, and mitigating AI-specific risks. This policy should detail procedures for conducting AI risk assessments, including potential impacts on privacy, security, and business operations. It should establish risk tolerance levels for different types of AI applications and outline strategies for risk mitigation, such as implementing fail-safes, conducting regular audits, and maintaining human oversight of critical AI systems. The policy should also include protocols for incident response in case of AI system failures or unintended consequences.

AI Transparency: Establish standards for making AI decision-making processes as transparent and explainable as possible. This policy should define requirements for documenting AI algorithms, data sources, and decision-making processes. It should outline procedures for generating explanations of AI decisions that are understandable to both technical and non-technical stakeholders. The policy should also address how to handle situations where full transparency might compromise intellectual property or security, balancing the need for explainability with other business considerations.

AI Talent Acquisition and Development: Outline strategies for attracting, retaining, and developing AI talent within the organization. This policy should detail plans for building an AI-skilled workforce, including recruitment strategies, partnerships with educational institutions, and internal training programs. It should address the creation of career paths for AI professionals within the organization and establish guidelines for ongoing skill development and knowledge sharing. The policy should also consider strategies for fostering a diverse and inclusive AI team to mitigate bias in AI development.

AI Vendor Management and Partnerships: Set criteria for evaluating and managing relationships with AI vendors and partners. This policy should establish standards for assessing the technical capabilities, ethical practices, and security measures of potential AI vendors. It should outline requirements for data sharing agreements, intellectual property rights, and liability in AI partnerships. The policy should also include procedures for ongoing monitoring of vendor performance and compliance with organizational AI standards, as well as protocols for terminating partnerships if necessary.

Steps for Implementation

Form a Cross-Functional AI Governance Committee: This step involves assembling a diverse team of experts from various departments, including IT, legal, ethics, HR, and relevant business units. The committee should have clearly defined roles and responsibilities, with regular meeting schedules and reporting mechanisms established. It's crucial to empower this committee with decision-making authority and the ability to allocate resources for AI initiatives. The committee should also be responsible for developing and enforcing AI policies across the organization.

Develop a Comprehensive AI Strategy: This involves aligning AI initiatives with overall business goals and establishing clear objectives. Conduct a thorough analysis of the organization's AI readiness, considering factors such as existing infrastructure, data availability, and staff capabilities. Identify priority areas for AI implementation based on potential impact and feasibility. Create a detailed roadmap for AI adoption, including both short-term quick wins and long-term transformative projects. This strategy should be flexible enough to adapt to changing business needs and technological advancements.

Conduct Thorough Risk Assessments: Identify potential risks associated with each AI project, including ethical, legal, and operational risks. Use scenario planning to anticipate potential future risks and develop a risk matrix to prioritize and address identified risks. Regularly update risk assessments as projects progress and new information becomes available. This process should involve input from various stakeholders and consider both direct and indirect impacts of AI implementation.
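A risk matrix can be as simple as scoring each identified risk by likelihood and impact and sorting by the product. The sketch below uses illustrative 1-to-5 scales, banding thresholds, and register entries:

```python
# Sketch of a simple AI risk matrix: likelihood x impact on 1-5 scales,
# sorted so the highest-priority risks surface first. The register
# entries, scales, and banding thresholds are illustrative assumptions.
risks = [
    {"risk": "Training data bias",        "likelihood": 4, "impact": 4},
    {"risk": "Privacy breach",            "likelihood": 2, "impact": 5},
    {"risk": "Model drift in production", "likelihood": 4, "impact": 3},
    {"risk": "Vendor lock-in",            "likelihood": 3, "impact": 2},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]
    r["band"] = "high" if r["score"] >= 12 else "medium" if r["score"] >= 6 else "low"

for r in sorted(risks, key=lambda r: -r["score"]):
    print(f'{r["score"]:>2} {r["band"]:<6} {r["risk"]}')
```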

Establish Ethical Guidelines: Develop and enforce clear ethical principles for AI development and use. Create an AI ethics review board to evaluate proposed AI projects against these principles. Implement mechanisms for ongoing ethical monitoring of AI systems, including regular audits and impact assessments. Provide comprehensive ethics training for all employees involved in AI development and deployment, ensuring a culture of ethical awareness throughout the organization.

Implement Robust Data Governance: Ensure data quality, security, and compliance with privacy regulations. Develop clear policies for data collection, storage, and usage, with particular attention to sensitive or personal data. Implement data anonymization and encryption techniques where appropriate to protect individual privacy. Regularly audit data practices to ensure ongoing compliance and quality, and establish processes for data lifecycle management, including data retention and deletion policies.

Prioritize Transparency: Implement processes to make AI decision-making as transparent as possible. Develop explainable AI models wherever feasible and create clear documentation of AI decision-making processes. Establish mechanisms for stakeholders to request explanations of AI decisions, ensuring that the rationale behind AI-driven outcomes can be understood and scrutinized when necessary. This transparency is crucial for building trust in AI systems among users, customers, and regulatory bodies.

Continuous Monitoring and Evaluation: Regularly assess AI system performance, impacts, and potential biases. Implement real-time monitoring tools for critical AI systems to detect anomalies or unexpected behaviors promptly. Conduct periodic audits of AI systems by independent third parties to ensure objectivity in evaluation. Establish feedback loops to incorporate learnings into future AI development, continuously improving the performance and reliability of AI systems.
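The sketch below illustrates the real-time monitoring idea: track a rolling window of a live metric, such as an approval rate, against a validated baseline and alert when it drifts beyond tolerance. The baseline, window size, and tolerance are illustrative assumptions:

```python
# Sketch of simple production monitoring: compare a rolling window of a
# live metric against a baseline and alert on drift. Baseline, window
# size, and tolerance are illustrative assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, tolerance: float, window: int):
        self.baseline, self.tolerance = baseline, tolerance
        self.recent = deque(maxlen=window)

    def record(self, approved: bool):
        self.recent.append(1 if approved else 0)
        if len(self.recent) < self.recent.maxlen:
            return None                      # window not yet full
        rate = sum(self.recent) / len(self.recent)
        if abs(rate - self.baseline) > self.tolerance:
            return f"ALERT: rate {rate:.2f} vs baseline {self.baseline:.2f}"
        return None

monitor = DriftMonitor(baseline=0.62, tolerance=0.10, window=100)
for i in range(150):                         # simulated decision stream
    alert = monitor.record(approved=(i % 10 != 0))   # ~90% approvals
    if alert:
        print(alert)
        break   # in practice: page the on-call team, open an incident
```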

Invest in AI Education: Provide ongoing training for employees at all levels to foster AI literacy throughout the organization. Develop specialized training programs tailored to different roles within the organization, from basic AI awareness for general staff to advanced technical training for AI developers. Partner with educational institutions to stay current on AI developments and potentially contribute to AI research. Create internal knowledge-sharing platforms to disseminate AI best practices and lessons learned across the organization.

Engage Stakeholders: Maintain open communication with customers, employees, and other stakeholders about AI use within the organization. Conduct regular surveys to gauge stakeholder perceptions and concerns about AI implementation. Establish channels for stakeholders to provide feedback on AI systems and their impacts. Regularly report on AI initiatives and their outcomes to build trust and transparency. This engagement should be an ongoing process, adapting to changing stakeholder needs and expectations.

Plan for Scalability: Design AI systems with the ability to scale as needs evolve and demand increases. Implement modular AI architectures that can be easily expanded or modified to accommodate new requirements or technological advancements. Regularly assess infrastructure needs to support AI growth, including computing resources, data storage, and network capabilities. Develop contingency plans for rapid scaling in response to unexpected demands or opportunities, ensuring that the organization can quickly capitalize on AI-driven innovations.

 

Conclusion

AI presents a transformative opportunity for organizations willing to embrace its potential while navigating the inherent challenges responsibly. By understanding the opportunities and risks, establishing comprehensive policies, and implementing AI initiatives with careful consideration, organizations can unlock significant value while maintaining ethical standards and ensuring long-term sustainability.

Here are three key takeaways for board members to keep in mind as you guide your organizations through the AI revolution:

  1. Prioritize Ethical AI: Embed ethical considerations into every stage of AI development and deployment, ensuring fairness, transparency, and accountability.

  2. Embrace Continuous Learning: Stay informed about the latest AI trends and best practices, and invest in AI literacy throughout the organization.

  3. Foster Collaboration: Encourage cross-functional collaboration and stakeholder engagement to ensure AI initiatives align with business goals and societal values.

By embracing these takeaways, you can lead your organization toward a future where AI drives innovation, creates value, and contributes to a better world.