
Nebuli’s Response to the UK Government’s Policy Paper on Establishing a Pro-innovation Approach to Regulating AI


In July 2022, the UK Government published proposals for the future regulation of AI, intending to take a less centralised approach. The government stated that these proposals were designed to develop consistent rules that promote innovation in this groundbreaking technology while protecting the public.

Nebuli’s team was invited to review these proposals, following the involvement of our CEO, Tim El-Sheikh, in a panel discussion, featuring the Minister responsible for the National AI Strategy, on whether Britain can achieve its ambition to be a global superpower in Artificial Intelligence (AI). The proposals focus on supporting growth and avoiding unnecessary barriers being placed on businesses. According to the government’s press release, this could see businesses sharing information about how they test their AI’s reliability, as well as following guidance set by UK regulators to ensure AI is safe and avoids unfair bias.

Industry experts, academics and civil society organisations focusing on this technology were invited to share their views on putting this approach into practice through a call for evidence, and below is Nebuli’s submitted response to the call’s questions.

We are sharing our response to help teams and enterprises apply these principles to their AI and broader digital strategies and the regulatory framework they should consider. If you require further support from our team, you can contact us with additional questions here.

Nebuli’s Responses to the Consultation Questions

Q1 – What are the most important challenges with our existing approach to regulating AI? Do you have views on the most important gaps, overlaps or contradictions?

At Nebuli, we strongly advocate a “People First” approach, in which understanding people via psychographic analysis, not technology alone, is the core driver of digital success. The most critical problem we have witnessed in the AI market for over a decade is technology companies’ overall disregard for the negative impacts AI can have on users, such as severe mental health problems in young children.

While the UK government’s approach suggests establishing more robust human intervention, which we welcome, there needs to be a much stronger emphasis on human supervision (i.e. human-in-the-loop) and AI explainability (i.e. avoiding black-box models entirely). Furthermore, with the deeper integration of psychographic methodologies in modern AI algorithms (as is the case with Nebuli’s BehaviorLink Framework: https://nebuli.com/behaviorlink), it is critically important that regulators assess the potential harms AI-powered systems may pose to end users, such as context-graph recommendation systems inadvertently pushing harmful content to users on social media platforms.
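To make the human-in-the-loop point concrete, below is a minimal sketch of a supervision gate for a recommendation system. All names here (Recommendation, risk_score, deliver) and the 0.2 threshold are our own illustrative assumptions, not part of BehaviorLink or any regulatory specification: outputs that carry a high estimated risk of harm, or that lack an explanation, are escalated to a human reviewer rather than shown to the user automatically.

```python
from dataclasses import dataclass

# Minimal illustrative sketch of a human-in-the-loop (HITL) gate.
# All names (Recommendation, risk_score, deliver) and the 0.2 threshold
# are hypothetical assumptions, not a Nebuli or regulatory specification.

@dataclass
class Recommendation:
    item_id: str
    risk_score: float  # model-estimated probability of harm, 0.0-1.0
    explanation: str   # human-readable rationale (avoids a black-box output)

RISK_THRESHOLD = 0.2   # assumed policy threshold, set by the operator/regulator

def deliver(rec: Recommendation, review_queue: list[Recommendation]) -> bool:
    """Auto-deliver only low-risk, explained outputs; escalate the rest."""
    if rec.risk_score >= RISK_THRESHOLD or not rec.explanation:
        review_queue.append(rec)  # human supervision: a person decides
        return False
    return True  # safe to show to the end user automatically

# Usage: a high-risk item is held for human review, not pushed to the user.
queue: list[Recommendation] = []
held = deliver(Recommendation("video-42", 0.7, "similar-watch history"), queue)
assert held is False and len(queue) == 1
```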

Thus, AI providers would need to answer the question of why their AI solution is necessary to tackle a given challenge, ensuring they frame their solutions from a people-first perspective rather than being driven purely by technological endeavours and innovations.

The second prominent issue in the AI market is the gross misconception of the meaning of “AI” and of the technical terminologies associated with AI-based innovations. We have observed that many failures in AI projects, as well as in ethical and regulatory frameworks, stem from the lack of a common standard for what key AI terminologies, and their related algorithmic functions or outcomes, entail. It is therefore critically important that regulators establish a consensus on standard terminology that would support more precise rules and guidelines for the market. This is particularly important for the higher education sector, startups, investors and newcomers in this ever-growing market.

Q2 – Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper? What do you see as the benefits of this approach? What are the disadvantages?

We generally agree with the context-driven approach, since AI technologies are historically context-specific and tend to operate more successfully in well-defined use cases within targeted verticals. However, the policy paper alluded to the view that AI is a general-purpose technology, with which we disagree for two reasons:

  1. Several studies have consistently demonstrated that AI algorithms tend to struggle in general-purpose applications; and
  2. There are social implications of deploying general-purpose AI systems that do not correlate to specific contexts. According to a January 2022 survey from Ipsos (https://www.ipsos.com/en/global-opinions-about-ai-january-2022), on average across the 28 countries surveyed, almost two-thirds of respondents (64%) claim they have a good understanding of what AI is, but only half (50%) know which types of products and services use AI.

Therefore, it is imperative to consider the social context in close correlation with any given sector-specific context, and not to turn this regulatory framework into algorithm-focused policing, which does not work well in any context. As highlighted above, psychographic frameworks should play a key role in assessing potential social outcomes, combined with enforcing transparency and AI explainability.

Nonetheless, with the current advancements in context-specific algorithms, we are moving closer to establishing better cross-sector AI ecosystems, which will eventually lead to more reliable and safer general-purpose AI solutions for the wider economy.

Q3 – Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach? Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?

In general terms, we agree with the principle of establishing cross-sectoral guidelines. However, there must be a stronger emphasis on updating existing regulations and laws within individual sectors before looking to establish new regulations that target AI applications. For example, broader legal frameworks, such as data privacy and anti-discrimination laws, already apply to AI to some extent, though they were not explicitly designed for AI-powered systems.

We would recommend establishing sector-specific regulatory groups with relevant expertise in the AI use cases within their verticals, which can present their recommendations to a national cross-sector panel. The aim is to establish a dynamic and relevant regulatory ecosystem able to cover both shared legal issues, such as personal data collection and privacy concerns, and sector-specific issues, such as military data security.

In other words, sector-specific AI regulatory guidelines can establish more comprehensive and, indeed, pro-innovation regulations that could lead to the future deployment of safer and more trustworthy AI systems. We also believe this approach will create a unique opportunity to modernise existing regulatory frameworks that have thus far been too slow to adapt and respond to the rapidly changing digital and data-driven world.

Q4 – Do you have any early views on how we best implement our approach? In your view, what are some of the key practical considerations? What will the regulatory system need to deliver on our approach? How can we best streamline and coordinate guidance on AI from regulators?

First, as highlighted above, we strongly believe in a collaborative cross-sectoral approach to establishing a more comprehensive and dynamic regulatory framework that serves specific verticals while remaining integrated with broader economic and social contexts. Collaboration between government, academic institutions and businesses is therefore critical.

Second, we would apply Nebuli’s “3Ps” AI development model, which focuses on understanding the components of a given challenge. In this case, the challenge is implementing a pro-innovation and contextual legal framework. The 3Ps entail the following:

  • People – the social context, the potential harms that must be considered, and an understanding of the interactions between the AI algorithms in question and the end users: what impact the system may have on their behavioural and mental well-being, safety, privacy, overall productivity, etc.
  • Processes – the sources of the data, data validity, data security, the systems involved, where the data is stored (i.e. on-premises servers or public cloud services) and the tasks carried out.
  • Platforms – the technology and interfaces that are used to deliver the AI output. Who are the stakeholders?

Each of the above will require a form of contextual regulatory assessment, as sketched below.
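As an illustration of how such an assessment could be captured in practice, here is a minimal sketch of a 3Ps assessment record. The class and field names are hypothetical, invented for this example rather than a formal Nebuli or regulatory schema; the point is simply that each of the three dimensions gets an explicit, checkable entry.

```python
from dataclasses import dataclass, field

# Illustrative "3Ps" assessment record; field names are hypothetical,
# a sketch rather than a formal Nebuli or regulatory schema.

@dataclass
class ThreePsAssessment:
    # People: social context and potential harms to end users
    affected_users: str                        # e.g. "children under 16"
    potential_harms: list[str] = field(default_factory=list)
    # Processes: data provenance, validity, security and storage
    data_sources: list[str] = field(default_factory=list)
    storage_location: str = ""                 # e.g. "public cloud (UK region)"
    # Platforms: delivery technology and stakeholders
    delivery_interface: str = ""               # e.g. "mobile recommendation feed"
    stakeholders: list[str] = field(default_factory=list)

    def open_questions(self) -> list[str]:
        """Flag any of the 3Ps still lacking a contextual assessment."""
        gaps = []
        if not self.potential_harms:
            gaps.append("People: potential harms not yet assessed")
        if not self.data_sources or not self.storage_location:
            gaps.append("Processes: data provenance/storage unspecified")
        if not self.delivery_interface or not self.stakeholders:
            gaps.append("Platforms: delivery interface/stakeholders unknown")
        return gaps

# Usage: a new assessment is flagged as incomplete on all three dimensions.
a = ThreePsAssessment(affected_users="teenagers on a recommendation feed")
print(a.open_questions())  # three gaps until each dimension is filled in
```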

Q5 – Do you anticipate any challenges for businesses operating across multiple jurisdictions? Do you have any early views on how our approach could help support cross-border trade and international cooperation in the most effective way?

From our experience serving customers in different jurisdictions, there are three critical challenges that UK businesses may face:

  1. Local regulations may conflict with the UK’s regulations. Businesses will therefore need to assess their deployment strategies in such locations.
  2. Cloud services play a key role in data storage and processing, as well as in hosting AI algorithms that are accessible internationally. However, these services are not governed by any specific international regulation. Instead, the regulatory landscape is made up of a mixture of different rules, depending on the geolocations of the cloud hosting companies. This, again, may cause legal conflicts.
  3. Cultural attitudes toward AI applications. From our point of view at Nebuli, AI technology has now become closely integrated with the five most fundamental aspects of our personal lives: identity, finance, education, healthcare and sustainability. Thus, cultural, ethical and biased attitudes toward AI applications should be expected to differ significantly. According to the Ipsos survey referenced above, the likelihood of trusting companies that use AI is highest among business decision-makers (62%), business owners (61%), the more affluent (57%) and those with a higher-education degree (56%), and lowest among those who are 50 and older (44%), those with no higher education (45%) and those who are not employed (45%). Furthermore, the survey also highlighted a wider divide between emerging and higher-income economies. For example, an overwhelming majority of respondents in nearly all emerging economies trust companies that use AI, most of all in China (76%), Saudi Arabia (73%) and India (68%). In contrast, only about one-third of respondents in many high-income economies are as trusting of AI-powered companies, including Canada (34%), France (34%), the United States (35%), Great Britain (35%) and Australia (36%).

The third point above is particularly poignant and was the key factor behind Nebuli’s people-first model, which is the critical foundation for ensuring adequate applications of AI systems in the appropriate context. We would strongly recommend that similar models be followed.

Q6 – Are you aware of any robust data sources to support monitoring the effectiveness of our approach, both at an individual regulator and system level?

AI regulation is still a novel endeavour around the world; thus, it is too early to identify comprehensive resources that assess implemented or upcoming regulations. However, a study (https://link.springer.com/article/10.1007/s11023-022-09612-y) published in August 2022 in Minds and Machines (the journal for artificial intelligence, philosophy and cognitive science) provides a constructive critical analysis of the US Algorithmic Accountability Act of 2022 versus the EU Artificial Intelligence Act, and what each can learn from the other. This study is a good reference for UK regulators at the time of writing. We anticipate further such studies will be published in the coming months, which we will monitor through our knowledge discovery partners and report on accordingly.