In July 2022, the UK Government proposed the future regulation of AI, intending to take a less centralised approach. The government stated that this proposal was designed to help develop consistent rules to promote innovation in this groundbreaking technology and protect the public.
Nebuli’s team was invited to review these proposals, following my involvement in the panel discussion on whether Britain can achieve its ambitions to be a global superpower in Artificial Intelligence (AI) – featuring the Minister responsible for the National AI Strategy.
Nebuli, alongside other industry experts, academics and civil society organisations focusing on this technology, was invited to share its views on putting this approach into practice through a call for evidence.
In March 2023, following the above policy paper review, the government launched its AI white paper to guide the use of artificial intelligence in the UK, to drive responsible innovation and maintain public trust in this revolutionary technology.
Our team of experts at Nebuli was invited to review the key policies proposed in the white paper and support the government’s plans for the next phase.
The UK Government’s Priorities
The white paper set out what the government describes as a new approach to regulating artificial intelligence to build public trust in cutting-edge technologies and make it easier for businesses to innovate, grow and create jobs.
The plan also highlighted the government’s ambition to establish a new expert task force to build the UK’s capabilities in foundation models, including large language models like ChatGPT, and £2 million for a sandbox trial to help businesses test AI rules before going to market.
The government emphasised that it will avoid heavy-handed legislation that could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with tailored, context-specific approaches that suit the way AI is being used in their sectors.
The white paper outlined 5 principles that these regulators should consider to best facilitate the safe and innovative use of AI in the industries they monitor. The principles are:
- Safety, security and robustness
Applications of AI should function in a secure, safe and robust way where risks are carefully managed
- Transparency and explainability
Organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI
- Fairness
AI should be used in a way that complies with the UK’s existing laws, for example, the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
- Accountability and governance
Measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
- Contestability and redress
People need to have clear routes to dispute harmful outcomes or decisions generated by AI
We provided detailed reviews of these 5 principles, suggesting further principles to consider based on our research on human-centric augmented intelligence and our experience in building responsible AI solutions in diverse markets. For context, I would remind readers that we are an independent, self-financed research studio, fully committed to AI ethics as defined by the European Commission’s Ethics Guidelines for Trustworthy AI and through our membership in the European AI Alliance.
I also wish to highlight that it is becoming increasingly apparent to us that the current AI landscape is being driven very aggressively by the investment community, rather than by a business community led by science and technology innovators, ethicists and business leaders. This is a concern because we are seeing AI models rushed into the market with inadequate privacy and safety standards. It is one of the key reasons Nebuli was formed in 2019: we strongly oppose this frantic AI rush, which harms not only humanity but also the quality of AI systems themselves.
So, in general terms, we welcome the fact that the UK government is taking proactive action to ensure that the AI market does not morph into an out-of-control wild west. However, the efficacy of these proposals will not materialise anytime soon, as you will see later in this review. First, below is a summary of the recommendations Nebuli’s experts provided to the government’s Office for AI:
Promoting Transparency and Building Trust through Clarity
The government emphasised in the white paper its resolute focus on transparency, describing it as an essential component of building trust in AI technologies. By requiring organisations to disclose their use of AI, the government addresses concerns about AI’s black-box nature, promoting greater transparency and accountability.
This focus aligns well with our call for more human-centric principles in AI deployments, fostering open communication and empowering users to make informed decisions about AI interactions. We endorsed the government’s recommendations, in line with Nebuli’s safety standards, which our customers have adopted over the years.
Bridging the Gap Between Humans and Machines via Explainability
The government’s emphasis on explainable AI models marks a crucial step forward in addressing the interpretability challenge in AI systems. We endorsed the white paper’s emphasis on adopting explainable AI models, which enable users to understand the decision-making processes behind AI systems, fostering trust and instilling confidence in their outcomes.
Our philosophy strongly advocates for the use of “Human-in-the-Loop” approaches, steering clear of opaque “black-boxed” models that can perpetuate bias and propagate harmful content. Encouraging organisations to adopt explainable AI models reinforces ethical considerations and ensures responsible AI deployments.
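To make this distinction concrete, below is a minimal, hypothetical sketch in Python (our own illustration, not drawn from the white paper): an "explainable by construction" decision function that returns not just a verdict but the per-factor contributions behind it, so a human in the loop can audit or contest the outcome. The factors, weights and threshold are invented for illustration only.

```python
# Hypothetical illustration: alongside its verdict, the model reports each
# factor's contribution to the score, which is exactly what a purely
# black-box model omits.

WEIGHTS = {"income": 0.5, "credit_history": 0.3, "existing_debt": -0.4}
THRESHOLD = 0.6

def explainable_decision(applicant: dict) -> dict:
    """Score an applicant and return the decision with a full breakdown."""
    contributions = {
        factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS
    }
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        # Per-factor breakdown a reviewer can inspect, audit or contest.
        "explanation": {k: round(v, 3) for k, v in contributions.items()},
    }

result = explainable_decision(
    {"income": 0.9, "credit_history": 0.8, "existing_debt": 0.2}
)
print(result)
```

Real systems use far richer interpretability techniques, of course, but the design principle is the same: the explanation is produced with the decision, not reconstructed after the fact.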
Empowering Redress, Accountability and Safeguarding Fairness
The government highlighted its strong commitment to improving routes for contesting AI-related harms and acknowledged the need for accountability and redress mechanisms. This, again, resonates with our call for a human-centric approach, in which accountability forms a cornerstone of responsible AI deployments.
However, we advised the government to go a step further by establishing a clear process for users to report inaction, creating an additional layer of accountability. This strengthens the framework’s responsiveness and ensures prompt resolution of reported issues, fostering better trust in the system.
The government emphasised cross-sectoral principles, effectively applying a holistic shield against AI risks, covering safety, security, fairness, accountability, and contestability. This suggests its understanding of the multifaceted risks associated with AI technologies across different verticals, which we strongly welcome. Indeed, it is possible to emphasise context-specific strategies by customising AI practices for different sectors. We regularly advise our enterprise customers that AI should never be treated as an “out-of-the-box product” or a “one-size-fits-all” solution to various problems.
Hence, we supported the application of context-specific approaches by regulators across different industries and sectors. To enhance the framework, however, we recommended incorporating sector-related AI expertise by establishing sector-specific regulatory bodies. This would enable regulators to effectively address the unique challenges emerging within specific verticals and facilitate responsible AI development within each domain with greater relevance.
Skill Development for a Sustainable Future and Nurturing AI Talent
The white paper highlighted the current problem concerning skill gaps in the AI sector. We advocated for significant investments in AI education and training programs, particularly in sectors where AI applications require domain-specific expertise. Indeed, investing in AI education and training programs is vital to building a competent workforce capable of driving responsible AI deployments.
By establishing a collaborative approach between the government, educational institutions and industry stakeholders to build a skilled workforce and narrow the skills gap, we can create a sustainable ecosystem that embraces ethical considerations and safeguards against potential risks.
Strengthening the Framework
According to the white paper, the government is contemplating the introduction of a statutory duty for regulators to uphold AI principles and further enhance accountability. Our team supported this move to reinforce accountability within the AI landscape.
However, we recommended an incremental approach to introducing this statutory duty, allowing regulators to gradually adapt and strengthen their mandates. We also recommended establishing a certification mechanism or labelling system to recognise AI systems that meet specific standards, incentivise responsible AI practices and foster greater transparency.
Educating, Empowering and Cultivating AI Awareness
The white paper’s focus on educating the public about AI aligns with our philosophy and, thus, we welcomed it. We advocated for government-led national campaigns that employ jargon-free language and user-friendly interfaces to empower individuals and foster a deeper understanding of AI’s benefits, risks, and responsible usage. Collaborations with educational institutions and schools can further drive public awareness and ensure inclusivity in AI adoption.
However, we recommend a significantly higher investment in skill development and gradual implementation of statutory duties to reinforce accountability. By fostering public awareness and a collaborative ecosystem, the government can ensure an AI landscape that evolves responsibly while prioritising ethics, trust, and societal well-being.
Harmonising Innovation and Regulation
The white paper’s call for effective coordination mechanisms and stakeholder collaboration is essential for avoiding overlapping and contradictory guidance. We reemphasised the importance of nurturing a collaborative ecosystem where regulators work transparently and share information. Continual engagement with industry experts and organisations will enable regulators to adapt to the rapidly evolving AI market while staying attuned to emerging technologies.
Timeline – When will we see action?
First and foremost, as dedicated advocates of the human-centric augmented intelligence space, we generally welcome the UK government’s white paper on AI regulation for its emphasis on responsible AI development and application. The proposed measures promote transparency, explainable AI, accountability, fairness, and sector-specific approaches, which is, indeed, a very positive start.
According to the government’s public statements, their proposed approach will mean the UK’s rules can adapt as this fast-moving technology develops, ensuring protections for the public without holding businesses back from using AI technology to deliver stronger economic growth, better jobs, and bold discoveries that radically improve people’s lives.
But the government also highlighted that, over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors. When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.
In other words, this whole exercise will take the government significant time to implement, and with a general election looming within the next 12-16 months, there is a very high possibility these proposals will be kicked into the long grass. This is a concern because the AI market is moving at such a rapid rate that, by the time these regulations emerge, they will most likely be less relevant.
We will keep you updated on this important development. In the meantime, organisations can adopt some of these ideas immediately and benefit from building trust between their users and their AI deployments. Through Nebuli’s solutions and services, we can provide your teams with the support, advice and AI deployment models needed to prepare for AI regulatory frameworks. Contact our team and we will be happy to support you.