From our market research and discussions with customers and the wider business community, we found that the vast majority of what they described as “AI” (artificial intelligence) hype has been focused on automation and robotics. Of course, as we discussed previously, automation and robotics are not AI.
Since 2012, we have witnessed customers frequently exposing the weaknesses and overhyped promises of legacy AI applications. The overuse of the acronym “AI” is further evident in companies that apply almost no “real” artificial intelligence beyond some minuscule automation within their workforce.
As highlighted by The Economist’s report “An understanding of AI’s limitations is starting to sink in” (published on June 11th, 2020), despite some AI success stories, many of the grandest claims made about AI have once again failed to become reality.
The report also highlighted that, as a result of such failures to deliver effective AI solutions, confidence is wavering as researchers start to wonder whether the technology has hit a wall.
Indeed, since we embarked on our augmented intelligence journey in the early 2010s, we have seen little to no market consensus over exactly what counts as an AI technology or service. Several VCs have also told me directly that startups are raising massive sums of money on the AI buzzword alone, rather than on genuine AI innovation.
We believe the core reason for this is the unintended restrictions that the AI market has imposed on itself. Focusing only on mathematical models, confusing intelligence with automation, mixing it up with robotics, and then trying to encapsulate all of this in a SaaS product creates an obscure picture of what AI is actually about.
If we look at history, classical philosophers attempted to describe human thinking as a symbolic system, which influenced the emergence of the field of artificial intelligence in the early 1950s. It was all about mimicking humans’ intellectual capabilities. SaaS is not how our brain works, and, in our opinion, this highlights a problem: the AI market is not studying the meaning of intelligence deeply enough, if at all.
For this reason, we believe very strongly that it is essential to understand the true meaning of intelligence. Otherwise, how can we artificially mimic something without first understanding its mechanisms, or at least attempting to analyse them more deeply?
Hence, my co-founders and I (who are ex-biomedical scientists) applied a medical, rather than mathematical, view of intelligence to architect Nebuli’s intelligence and data processing models. We were particularly interested in the neuropsychological theory of Working Memory, which led us to the following principle:
Minimal Data Input Must Generate Maximum Intelligence Output.
Working Memory is responsible for the “flexible” manipulation of crucial information that is available for specific processing, and it is critical for many domains of cognition, including reasoning, comprehension and learning.
One caveat, however, is that Working Memory stores the crucial information it needs only temporarily. Hence, we developed our Augmented Working Memory Theory, in which Nebuli builds a long-term “Augmented Working Memory” that is forever learning, expanding, self-manipulating and serving.
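The distinction above, a memory that keeps only crucial information but keeps it for the long term, can be sketched in code. The toy model below is purely illustrative and is not Nebuli’s implementation; the `MemoryItem` and `AugmentedWorkingMemory` names, the relevance scores and the threshold are all our own assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    content: str
    relevance: float  # assumed task-specific relevance score in [0.0, 1.0]

@dataclass
class AugmentedWorkingMemory:
    """Toy sketch: retain only information deemed crucial, but retain it long-term."""
    threshold: float = 0.5
    store: list = field(default_factory=list)

    def observe(self, items):
        # Keep only items whose relevance clears the threshold;
        # everything else is discarded rather than archived (unlike "big data").
        kept = [i for i in items if i.relevance >= self.threshold]
        self.store.extend(kept)
        return kept

    def recall(self, min_relevance=0.0):
        # Previously acquired knowledge stays available for new tasks.
        return [i for i in self.store if i.relevance >= min_relevance]
```

The key design choice mirrored here is that relevance is judged at ingestion time, so the store grows only with information that earned its place, rather than with everything ever seen.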
Drawing on our founding team’s experience with AI since the late 1990s, we compared the AI systems applied today to the brain’s short-term and long-term memory. Unlike Working Memory, short-term and long-term memory refer simply to the short- and long-term storage of all sorts of information, useful or not – much like big data.
Successful AI systems rely heavily on large-scale data inputs with exhaustive learning models, which tend to work well mostly for large corporations with the resources to generate and manage massive amounts of data.
However, most organisations do not possess or generate such enormous quantities of data to make off-the-shelf AI applications relevant or even functional. Add to that the tedious integration work, the replacement of existing software, and the lengthy staff training that come with installing redundant and expensive generalist, off-the-shelf AI applications.
Hence, our Augmented Working Memory Theory is predicated on the principle of generating a maximum level of long-term intelligence output from a minimal input of usable information only – that is, a customer’s internal, deep data.
We identified key stages needed for Nebuli to generate human-like Working Memory (shown in the diagram below), which is at the core of the company’s ongoing research and development.
Nebuli generates its Augmented Working Memory from newly given task scenarios based on a customer’s most needed datasets, which tend to be small.
Being a long-term Augmented Working Memory, Nebuli also stores and utilises other previously acquired “smaller” knowledge from different but similar scenarios that might apply to the newly given task. This makes the process lighter yet more relevant and compelling for specialist applications, for teams big and small.
Like the human brain, Nebuli does not demand constant data input and ever-increasing data storage in order to generate intelligence.
Instead, it applies a more passive memory approach, storing only the key data elements it needs for a given task and discarding the rest. We call these data units Memory Blocks.
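As a rough illustration of the “keep the key elements, discard the rest” idea, consider distilling a document down to a handful of salient terms. This is a deliberately simple word-frequency sketch, not Nebuli’s actual Memory Block extraction; the function name, stopword list and `top_k` parameter are all assumptions for the example:

```python
from collections import Counter
import re

# Minimal stopword list for the example only; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "that"}

def make_memory_block(text, top_k=5):
    """Reduce a document to its key elements (here: its most frequent
    content words) and discard everything else."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_k)]
```

However the key elements are actually chosen, the point is the same: the stored unit is a small distillation of the source data, not a copy of it.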
In order to understand Nebuli’s process of generating and storing Memory Blocks, it is essential to emphasise that Nebuli’s Working Memory is human-centric. In other words, Nebuli’s entire operation is fuelled by a customer’s (“client’s”) internal datasets, without relying on any other external or third-party data references.
In essence, Nebuli creates a Data-Driven World (DDW) for each customer based on the customer’s data collection, as a way of indexing and visualising the critical elements needed for that customer’s workflow. This DDW is what we describe as a Memory Block.
The key objectives of each Memory Block are the following:
Below are sample images of Nebuli’s Memory Blocks generated through our work with the University of Leicester’s (UoL) Library. The aim here is to visualise the hidden world of the UoL’s internal research papers, to help them facilitate new interdisciplinary and interdepartmental R&D collaborations:
The above images show 2D and 3D visualisations, based on self-organising maps (SOMs), of datasets segmented according to specific parameters set by the UoL library team. Where the dots condense most densely is where the most relevant interdisciplinary opportunities are likely to be found.
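For readers unfamiliar with the technique, a self-organising map places items with similar feature vectors at nearby cells of a 2D grid, so dense regions of the map reveal clusters, such as potential interdisciplinary overlaps between papers. The NumPy sketch below is a generic, minimal SOM for illustration only, not Nebuli’s pipeline; the grid size, learning-rate schedule and feature vectors are all assumptions:

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=500, seed=0):
    """Train a minimal self-organising map on row vectors in `data`."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)                            # decaying learning rate
        sigma = max(1.0, (max(h, w) / 2) * (1 - t / iters))   # shrinking neighbourhood
        x = data[rng.integers(len(data))]
        # Best-matching unit: the grid cell whose weight vector is closest to x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        influence = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * influence * (x - weights)             # pull BMU region towards x
    return weights

def project(data, weights):
    """Map each sample to the 2D grid coordinates of its best-matching cell."""
    h, w = weights.shape[:2]
    flat = weights.reshape(-1, weights.shape[-1])
    idx = np.argmin(((data[:, None, :] - flat[None]) ** 2).sum(-1), axis=1)
    return np.stack(np.unravel_index(idx, (h, w)), axis=1)
```

Plotting the projected coordinates (e.g. one dot per paper) would produce the kind of density map described above, where condensed regions mark groups of closely related items.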
In this scenario, Nebuli needed only 13,000 research papers to generate the most insightful opportunities for the UoL – something that was not otherwise possible with services such as Google Scholar, ResearchGate, Academia.edu and many other tools that offer “research discovery with artificial intelligence.”
While these tools promote their ability to mine tens of millions of research papers, this level of data overload was neither sufficiently beneficial nor insightful for the UoL. Hence, this scenario was about the quality and relevancy of the available data, rather than just its quantity.
The above summary of our research is the foundation of our human-centric augmented intelligence approach. It involves investigating and developing not only mathematical models but also the meaning and workings of human intelligence.
This ongoing research allows us to focus our work closely around our customers’ current experiences with AI, as well as those who are just starting their augmented intelligence journey with Nebuli.
Our principal intention here is to facilitate real and meaningful collaboration between humans and technology. We do this by helping our customers uncover new methods that replace today’s “command and response” models with more personalised, interactive, exploratory and versatile user experiences. This is at the heart of Nebuli’s co-worker functionality.