Advising UK’s Office for AI on AI Regulatory Framework

Providing advice and expertise to the UK Government in its efforts to establish a UK-wide framework for responsible AI deployment and regulation.

Key Services

  • Regulatory Review

  • Responsible AI

  • Data Ethics

  • AI Strategy

  • Modernisation

  • Consumer Behaviour

Key Markets

  • Government

  • Regulatory

Nebuli's Response to the UK Government's Policy Paper on Establishing a Pro-innovation Approach to Regulating AI

In July 2022, the UK Government published its proposals for the future regulation of AI, setting out a less centralised approach. The government stated that the proposal was designed to develop consistent rules that promote innovation in this groundbreaking technology while protecting the public.

Nebuli’s team was invited to review these proposals, following the involvement of our CEO, Tim El-Sheikh, in a panel discussion on whether Britain can achieve its ambition to be a global superpower in artificial intelligence (AI), featuring the Minister responsible for the National AI Strategy.

Nebuli, alongside other industry experts, academics and civil society organisations focused on this technology, was invited to share its views on putting this approach into practice through a call for evidence.

In March 2023, following the above policy paper review, the government launched its AI white paper to guide the use of artificial intelligence in the UK, aiming to drive responsible innovation and maintain public trust in this revolutionary technology.

Nebuli was invited to review the key policies proposed in the white paper and to support the government’s plans for the next phase.

Nebuli’s Review Process

The white paper outlined five principles that the government advised regulators to consider in order to best facilitate the safe and innovative use of AI in the industries they monitor.

Nebuli provided detailed reviews of these five principles, suggesting further principles for the government to consider based on the company’s research on human-centric augmented intelligence and its experience in building responsible AI solutions across diverse markets.

Below is a summary of the recommendations provided to the government’s Office for AI:

Promoting Transparency

Transparency is the bedrock of responsible AI. We fully endorsed the white paper’s proposition of requiring organisations to be clear about their use of AI. Transparency cultivates trust among users and stakeholders, aligning with our human-centric approach that values open communication and informed decision-making.

Envisioning Explainable AI

We endorsed the white paper’s emphasis on the adoption of explainable AI models. These models enable users to understand the decision-making processes behind AI systems, instilling confidence in their outcomes. Our philosophy strongly advocates “Human-in-the-Loop” approaches, steering clear of opaque “black-box” models that can perpetuate bias and propagate harmful content. Encouraging organisations to adopt explainable AI models reinforces ethical considerations and ensures responsible AI deployment.

Empowering Redress and Accountability

The white paper’s insights regarding the need to improve current routes for contesting AI-related harms were welcomed. Our human-centric augmented intelligence approach resonates with the government’s commitment to providing effective mechanisms for reporting inaction by regulated entities. We suggested going a step further by establishing a clear process for users to report inaction, creating an additional layer of accountability. This strengthens the framework’s responsiveness and ensures prompt resolution of reported issues, fostering trust in the system.

Cross-Sectoral Principles

The white paper outlined the government’s revised cross-sectoral principles, an important step. By encompassing safety, security, fairness, accountability, and contestability, the framework demonstrated a comprehensive understanding of the diverse risks associated with AI technologies. We supported the application of context-specific approaches by regulators, as it aligns with our belief in tailoring AI practices to different industries and sectors. To enhance the framework, however, we recommended incorporating sector-related AI expertise by establishing sector-specific regulatory bodies. This would enable regulators to address unique challenges effectively and facilitate responsible AI development within each domain.

Skill Development for a Sustainable Future and Reducing Digital Inequality

The white paper highlighted the current problem concerning skill gaps in the AI sector. We advocated for significant investments in AI education and training programs, particularly in sectors where AI applications require domain-specific expertise. This collaborative approach between the government, educational institutions, and industry stakeholders ensures a skilled workforce capable of responsible AI development and deployment. By closing the skill gap, we can create a sustainable ecosystem that embraces ethical considerations and safeguards against potential risks.

Strengthening the Framework

The government aims to introduce a statutory duty for regulators to have due regard to the principles. Our team agreed with and supported reinforcing accountability within the AI landscape. However, we recommended an incremental approach to introducing this statutory duty, allowing regulators to gradually adapt and strengthen their mandates. We also recommended establishing a certification mechanism or labelling system to recognise AI systems that meet specific standards, incentivise responsible AI practices and foster greater transparency.

Educating and Empowering through Public Awareness Initiatives

The white paper’s focus on educating the public about AI aligns with our philosophy, and we welcomed it. We advocate government-led national campaigns that employ jargon-free language and user-friendly interfaces to empower individuals and foster a deeper understanding of AI’s benefits, risks, and responsible usage. Collaborations with educational institutions and schools can further drive public awareness and ensure inclusivity in AI adoption.

Harmonising Innovation and Regulation

The white paper’s call for effective coordination mechanisms and stakeholder collaboration is essential for avoiding overlapping and contradictory guidance. We emphasised the importance of nurturing a collaborative ecosystem where regulators work transparently and share information. Continual engagement with industry experts and organisations will enable regulators to adapt to the rapidly evolving AI market while staying attuned to emerging technologies.