Our market research and discussions with customers and the wider business community found that the vast majority of what they described as “AI hype” (i.e. artificial intelligence) has been focused on automation and robotics. Of course, as we discussed previously, automation and robotics are not AI.
Since 2012, users have frequently exposed the weaknesses and overhyped promises of legacy AI applications. The overuse of the acronym “AI” can be further witnessed in companies applying almost no “real” artificial intelligence, or only minuscule automation, within their workforce.
From our point of view, this has a negative impact on this exciting market, as we witness a further drop in confidence in AI amongst business users. This is partly due to common AI failures, which, from my experience working with AI since the late 1990s, tend to sit around the 90% mark, though COVID-19 seems to have put a stop to this (I will be reporting on this interesting change in my upcoming post). But even with the systems that do work, end-users are, at best, not sufficiently confident in the AI’s decision-making outcomes.
An interesting analysis published in the journal Computers in Human Behavior this month highlighted that people’s inappropriate decisions to accept or reject suggestions from AI applications can lead to “severe consequences” in high-stakes AI-assisted decision-making scenarios. Indeed, many of our customers have major concerns regarding the safety and efficacy of any AI-based system, which is why they rely on us to guide them through such challenges. I would like to share with you the key aspect of our philosophy: you must focus on understanding human intelligence in a given scenario, rather than on the mathematical algorithms that merely depict our limited understanding of human intelligence.
Since we embarked on our augmented intelligence journey in the early 2010s, we have seen limited to no market consensus over exactly what counts as an AI technology or service. Not to mention, several VCs have indicated to me that startups are raising massive sums of money off the AI buzzword alone, rather than on the back of genuine AI innovation.
We believe the core reason for this issue is the unintended restrictions that the AI market imposed on itself. Focusing only on mathematical models, confusing intelligence with automation, mixing it up with robotics, and then trying to encapsulate all of these into a SaaS product creates an obscure picture of what AI is actually about.
If we look at history, classical philosophers attempted to describe human thinking as a symbolic system, which influenced the emergence of artificial intelligence in the early 1950s. It was all about mimicking human intellectual capabilities. SaaS is not how our brain works, and, in our opinion, this highlights a problem: the AI market is not studying the meaning of intelligence effectively enough, if at all.
For this reason, we believe very strongly that it is essential to understand the true meaning of intelligence. Otherwise, how can we artificially mimic something without understanding its mechanisms first, or at least attempt to analyse it more deeply?
Hence, my co-founders and I (who are ex-biomedical scientists) applied a medical, rather than mathematical, view of intelligence to architect Nebuli’s intelligence and data processing models. We were particularly interested in the neuropsychological theory of Working Memory, aiming to achieve the following principle:
Minimal Data Input Must Generate Maximum Intelligence Output.
The Working Memory is responsible for “flexible” manipulation of crucial information that is available for specific processing and is critical for many domains of cognition, including:
One caveat, however, is that Working Memory only temporarily stores the crucial information that it needs. Hence, we developed our Augmented Working Memory Theory where we focus on building a long-term “Augmented Working Memory” that is forever learning, expanding, self-manipulating and serving. This is one of the core foundations of our Datastack framework, powered by our BehaviorLink analysis.
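As an illustration only, and not Nebuli’s actual implementation, the retention-and-reinforcement idea behind a long-term Augmented Working Memory could be sketched as follows. The names (`MemoryItem`, `relevance`, the threshold value) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    key: str
    content: str
    relevance: float  # how useful this item has proven so far (0..1)

@dataclass
class AugmentedWorkingMemory:
    """Toy long-term working memory: keeps only information judged
    crucial, and reinforces items each time they are recalled, rather
    than letting them decay as classical working memory would."""
    threshold: float = 0.5
    items: dict = field(default_factory=dict)

    def learn(self, key, content, relevance):
        # Store only information deemed crucial; discard the rest.
        if relevance >= self.threshold:
            self.items[key] = MemoryItem(key, content, relevance)

    def recall(self, key):
        item = self.items.get(key)
        if item is None:
            return None
        # Recall reinforces the item, making it more durable over time.
        item.relevance = min(1.0, item.relevance + 0.1)
        return item.content
```

The point of the sketch is the contrast with short-term storage: nothing is evicted on a timer, but nothing below the relevance bar is admitted in the first place.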
With our founding team’s experience with AI since the late 1990s, we compared the AI systems applied today to the brain’s short and long-term memory. Unlike the Working Memory, both short and long-term memories only refer to the short and long-term storage of all sorts of information, respectively, whether they are useful or not – i.e. like big data.
Successful AI systems rely heavily on large-scale data inputs with exhaustive learning models, which tend to work well mostly for large corporations with the resources to generate and manage massive amounts of data.
However, most organisations do not possess or generate such enormous quantities of data to make off-the-shelf AI applications relevant, or even functional. Not to mention the tedious integration work, the replacement of existing software, and the lengthy staff training that come with installing redundant and expensive generalist, off-the-shelf AI applications.
Hence, our Augmented Working Memory Theory is predicated on the principle of generating a maximum level of long-term intelligence output from a minimal input of usable information only. That is, a customer’s internal, deep data.
But, crucially, this approach does not involve, or require, users’ private or personally identifiable information. So the obsession of many of our counterparts with collecting as much information about you as possible is barking up the wrong tree. Not to mention, it is unethical and, at least from Nebuli’s point of view, useless for achieving any form of AI-powered hyper-personalisation. We describe this as lazy AI.
We identified the key stages needed for our systems to generate a human-like Working Memory, which is at the core of the company’s ongoing research and development. You can find out more about these stages via our Technology page.
Nebuli’s core systems generate Augmented Working Memory from newly given task scenarios based on a customer’s most needed and relevant datasets, which tend to be relatively small.
Being a long-term Augmented Working Memory, our systems also store and utilise previously acquired “smaller” knowledge from similar scenarios that might be applicable to the newly given task. This makes the process lighter, yet more relevant and compelling, for specialist applications across teams big and small.
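One way to picture reusing previously acquired “smaller” knowledge is similarity-based retrieval: match a new task against fingerprints of past scenarios and pull in whatever is close enough. The sketch below is a generic illustration under that assumption; the vector fingerprints and the 0.8 threshold are hypothetical, not Nebuli’s method:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def reuse_prior_knowledge(task_vector, prior_scenarios, threshold=0.8):
    """Return previously acquired knowledge whose scenario fingerprint
    is similar enough to the new task to be worth reusing."""
    return [knowledge
            for fingerprint, knowledge in prior_scenarios
            if cosine(task_vector, fingerprint) >= threshold]
```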
Like the human brain, Nebuli’s systems do not demand constant data input and ever-increasing data storage in order to generate intelligence.
Instead, our models involve a more passive memory approach by storing only key data elements we need for a given task and discarding the rest. We call these data elements Memory Blocks.
To understand how our systems generate and store Memory Blocks, it is essential to emphasise that Nebuli’s Working Memory is human-centric. In other words, Nebuli’s entire operation is fuelled by a customer’s (“client”) internal datasets, without relying on external or third-party data references. That said, where it adds value, external supporting datasets can be introduced, such as reference data, research citations and public datasets.
In essence, Nebuli’s core Augmented Intelligence creates a Data-Driven World (DDW) for each customer based on their dataset collections. The generated DDW then indexes and visualises the critical elements needed for this customer’s workflows and decision-making processes. This DDW is what we describe as a Memory Block.
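To make the “index the critical elements only” idea concrete, here is a minimal, purely illustrative sketch: build an inverted index over a customer’s internal documents, keeping only each document’s most frequent terms and discarding everything else. The stopword list, `top_n` cutoff, and function name are assumptions for the example, not Nebuli’s actual pipeline:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def build_memory_block(documents, top_n=5):
    """Toy 'Data-Driven World': index each document by its most
    frequent non-stopword terms, keeping only those critical elements
    and discarding the rest of the text."""
    index = {}
    for doc_id, text in documents.items():
        terms = [t for t in re.findall(r"[a-z]+", text.lower())
                 if t not in STOPWORDS]
        # Keep only the top_n most frequent terms per document.
        key_terms = [term for term, _ in Counter(terms).most_common(top_n)]
        for term in key_terms:
            index.setdefault(term, set()).add(doc_id)
    return index
```

Documents that share key terms end up linked under the same index entries, which is the kind of cross-document connection an interdisciplinary search would surface.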
The key objectives of each memory block are the following:
Below are sample images of Nebuli’s Memory Blocks generated through our work with the University of Leicester’s (UoL) Library. The key objective is to visualise the hidden world of the UoL’s internal research papers, to help them facilitate new interdisciplinary and interdepartmental R&D collaborations, thus realising innovation and funding opportunities:
The above images show 2D and 3D SOM-based (self-organising map) visualisations of segmented datasets, according to specific parameters set by the UoL library team. The most relevant interdisciplinary opportunities are likely to be found where the dots are most densely clustered.
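For readers unfamiliar with SOMs, the sketch below is a generic, textbook self-organising map in plain Python, not Nebuli’s implementation: inputs are mapped onto a small 2D grid so that similar inputs land on nearby cells, which is what produces the dense clusters in visualisations like those above. Grid size, learning rate, and epoch count are arbitrary example values:

```python
import math
import random

def best_matching_unit(weights, v):
    """Grid cell whose weight vector is closest to the input vector."""
    return min(weights, key=lambda cell: math.dist(weights[cell], v))

def train_som(data, grid_w=4, grid_h=4, epochs=50, lr0=0.5, seed=0):
    """Tiny self-organising map: one weight vector per grid cell,
    pulled towards each input, with a shrinking neighbourhood."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = {(x, y): [rng.random() for _ in range(dim)]
               for x in range(grid_w) for y in range(grid_h)}
    radius0 = max(grid_w, grid_h) / 2.0
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1 - frac)                     # decaying learning rate
        radius = max(radius0 * (1 - frac), 0.5)   # shrinking neighbourhood
        for v in data:
            bmu = best_matching_unit(weights, v)
            for cell, w in weights.items():
                d = math.dist(cell, bmu)
                if d <= radius:
                    influence = math.exp(-(d * d) / (2 * radius * radius))
                    for i in range(dim):
                        w[i] += lr * influence * (v[i] - w[i])
    return weights
```

After training, feeding each dataset segment through `best_matching_unit` places it on the grid; segments with similar parameters condense onto neighbouring cells.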
In this scenario, Nebuli only needed 13,000 research papers to generate the most insightful opportunities for the UoL, which was not otherwise possible with services such as Google Scholar, ResearchGate, Academia.edu, and many other tools that offer research discovery with artificial intelligence.
While these tools promote their ability to mine tens of millions of research papers, this level of data overload was neither sufficiently beneficial nor insightful for the UoL. Hence, this scenario was more about the quality and relevancy of the available data than mere quantity.
The above summary of our research is the foundation of our human-centric augmented intelligence approach. It involves not only investigating and developing mathematical models, but also studying the true meaning and workings of human intelligence.
This ongoing research allows us to focus closely on our customers’ current experiences with AI and those just starting their augmented intelligence journey with Nebuli.
Our principal intention is to facilitate real and meaningful collaboration between humans and technology. We do this by helping our customers uncover new methods that replace today’s “command and response” models with more personalised, interactive, exploratory and versatile user experiences. This is at the heart of Nebuli’s human centricity and the robotic co-worker functionality.