A Trust-Building AI for Accelerating Innovation and Organizational Transformation

Tom Kehler

Creating a Trusted Generative Collective Intelligence for Innovation and Transformation

Learning the collective mind — image generated by Midjourney

Trust is required for generative AI to impact organizational change.

With all the hope and hype, the strategy for how AI will impact innovation and transformation in companies and communities is, at best, a handwave. Task automation and cost reduction may improve the bottom line, but they do little to build long-term value. AI evangelists with misty-eyed views of a hyper-productive AI future sweep aside what has always been the true asset of corporations: the creativity of their workforce. It is time to think differently.

To bring human creativity into the loop, we need to address what Generative AI does not address: trust. Today’s Generative AI is adept at creating plausible solutions but falls short in learning and creating alignment on collective action and focus. This limitation has presented significant hurdles for organizations and individuals, compelling us to reevaluate our approach.

Today, most organizations are asking: how will AI impact our future? How should it inform our strategy? The best AI models today can only suggest possibilities. They cannot lead to confident action because they cannot be trusted. For trust and confidence, we turn to collaboration and deliberative engagement with the teams that created the success we have seen to date. We need an AI architecture that oversees the collaborative work of humans and AI agents.

A new multi-agent AI architecture guides the co-creative process and generates knowledge. This architecture automates a manual trust-building process rooted in collective intelligence: the same process the scientific method uses to build trust in shared knowledge.

Learning shared priorities and preferences is the established manual process of building trust and alignment.

Design thinking workshops have emerged as a common practice for finding solutions of mutual value. Design thinking contains elements of the solution, but it doesn't scale. At its core, design thinking is about understanding the problems and needs of customers or communities, a shared understanding crucial for our collective advancement.

Getting aligned on priorities takes work, even with small groups. Alignment is built on shared understanding and perspective. The only way is through deliberation, bouncing ideas off each other, and working together to find a shared view.

Workshops and off-sites are typically used to prioritize objectives, consider alternatives, and create alignment on prioritized decisions and plans. So, the design of an AI Facilitator/Guide follows three guiding principles:

  1. Stimulate creative thinking (often done with yellow stickies)
  2. Group alternatives, organize them, and learn shared preferences
  3. Prioritize and decide on next steps.

This is the ‘inner loop’ of all manual processes. Its most fundamental weakness is scaling. It is limited to small groups. There is generally a severe break in momentum when operationalizing and scaling these creative initiatives.

Scaling has been hampered by the communication complexity problem known as Brooks's Law. While it is hard but possible to learn the preferences and alignment of seven people in depth, it becomes impossible with larger groups. A human facilitator has to monitor 21 lines of communication for seven people. For 20 people, the complexity grows to 190 lines of communication, far beyond human capability. In the AI model presented here, we can scale to groups of any size.
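The combinatorics behind those numbers is the familiar pairwise-channel formula, n(n-1)/2. A minimal sketch (not part of any platform) makes the growth explicit:

```python
def communication_channels(n: int) -> int:
    """Pairwise lines of communication in a group of n people: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in (7, 20, 100):
    print(n, communication_channels(n))  # 7 -> 21, 20 -> 190, 100 -> 4950
```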

An AI Guide for instrumenting and accelerating collaborative action

Adaptive knowledge acquisition of humans and AI agents working together requires thinking differently about AI methods. We can no longer use the ‘batch’ method of extensive training followed by deployment. We must learn as the brain learns: adaptively and continuously exploring cause-and-effect relationships. This adaptive learning method operates as a ‘guide,’ a ‘facilitator,’ and a ‘meta-agent,’ overseeing the collaborative work of humans and AI agents.

As Judea Pearl, one of the most accomplished founding figures of AI, wisely stated, “You are smarter than your data. Data do not understand causes and effects; humans do.” To achieve sustainable value from AI, we must include human creativity. Corporations, communities, and governing organizations are more than the data they leave behind.

A new generation of AI builds on past successes in generative AI by creating a multi-agent architecture with a ‘meta-agent’ that guides the collective intelligence of humans and AI agents working together. It is based on two principles deeply rooted in physics and the natural sciences: the principle of emergent ordering from local interactions, and the principle of least action (nature constantly seeks an efficient path forward).

The AI meta-agent guides teams of humans and AI agents through a step-by-step process, learning the collective preferences that lead the team to the desired outcome. It does so by learning cause-and-effect relationships. The meta-agent then delivers a generative language model of the collective reasoning process and a persistent causal model of the decision process. This creates a generative collective intelligence of human teams and AI agents working together, enabling a new era of dynamic learning organizations.

Generative Collective Intelligence produces measures linked directly to organizational productivity: focus (an accurate measure of group preference) and alignment (an accurate measure of the ‘collective uncertainty’ concerning achieving an expected result). The technology has been awarded four registered patents. The system learns by guiding and observing the collaboration process.
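To illustrate what such measures could look like (a minimal sketch under my own assumptions, not CrowdSmart's actual formulas): if the learned group preference over ideas is expressed as a probability distribution, focus can be read as the weight concentrated on the top ideas, and alignment as how far the distribution is from maximum uncertainty.

```python
import math

def focus(preferences: dict[str, float], top_k: int = 3) -> float:
    """Share of total preference weight carried by the top_k ideas."""
    weights = sorted(preferences.values(), reverse=True)
    return sum(weights[:top_k]) / sum(weights)

def alignment(preferences: dict[str, float]) -> float:
    """1 minus normalized entropy: 1.0 when the group concentrates on one idea,
    0.0 when preference is spread uniformly (maximum collective uncertainty)."""
    total = sum(preferences.values())
    probs = [w / total for w in preferences.values() if w > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(probs))
    return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0

prefs = {"idea A": 0.55, "idea B": 0.25, "idea C": 0.15, "idea D": 0.05}
print(round(focus(prefs), 2), round(alignment(prefs), 2))  # 0.95 0.2
```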

The figure below captures the need for AI models beyond statistical learning. It is from a DARPA Perspective on AI published over six years ago.

Figure 1

As a participant in the first wave of AI, I saw that a key feature of handcrafted knowledge models was their ability to explain. I call that out because statistical learning is the only form that has struggled with the problem of trust. The ability of an AI to explain its thinking is critical to building trust and working with humans. The middle wave, statistical learning, is where we are now, and that is why we struggle to trust it. Large language models will tell you themselves that they are not to be trusted and that you should use critical thinking. An example of this is shown in the query and response below:

Question to ChatGPT: Do Large Language models provide a way to validate the source of knowledge?

ChatGPT: Large language models by themselves do not provide a way to validate the source of knowledge. These models are trained on vast amounts of text data and can generate text that is similar to the text they were trained on, but they do not have the ability to verify the accuracy or source of the information they provide. It’s important for people to use critical thinking skills and verify information from multiple sources in order to determine its accuracy and credibility.

In contrast, the Meta-Agent creates a complete audit trail of how it constructed the knowledge model of human collaboration so that the knowledge model it produces can explain how it gets its answers. We also require it to keep track of the provenance of that information.

This AI does not care whether the agents are human or artificial. We can then quickly see that this ‘Meta-Agent’ can empower humans to curate the generative outputs of AI agents based on LLMs or other AI technologies. This latter property of curation enables human participants to apply collective critical thinking to the generative outputs of LLMs.

Building trust at scale: Multi-agent knowledge acquisition

Manual methods build trust by learning alignment. The ‘inner loop’ learns and confirms shared preferences about a desired outcome. Preference learning is critical to any group agreement and trust. Getting on the same page is about alignment in our judgment: learning the shared cause-and-effect relationships that determine our collective confidence in an outcome.

Consider the general manual ‘inner loop’ of collaboration. With AI, this can be done scale-free. It cannot be done with the old methods of long training on massive data sets; it requires continuous adaptive learning. The AI tools for continuous adaptive learning are Bayes’ rule and Markov modeling. With these tools, we can create a meta-agent that orchestrates a continuous adaptive learning cycle that feeds on the data of collaborative work, like the inner loop of design thinking. It asks for responses to a question or prompt, then stimulates active engagement by having individuals review others’ ideas and signal their shared priorities. As it does this work, it builds a language model of the collaboration and constructs a knowledge model of the group’s interactions.
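To make the Bayes’ rule step concrete, here is a minimal sketch of my own (the platform's models are surely richer): treat each idea's relevance to the group as a Beta-distributed belief and update it every time a participant endorses or passes over the idea during review.

```python
# Minimal sketch of continuous adaptive preference learning with Bayes' rule.
# Each idea's group relevance is a Beta(alpha, beta) belief, updated one
# observation at a time as participants review ideas.
from dataclasses import dataclass

@dataclass
class IdeaBelief:
    alpha: float = 1.0  # prior pseudo-count of endorsements
    beta: float = 1.0   # prior pseudo-count of passes

    def update(self, endorsed: bool) -> None:
        # Conjugate Bayesian update for a Bernoulli observation with a Beta prior
        if endorsed:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def relevance(self) -> float:
        # Posterior mean probability that the group finds the idea relevant
        return self.alpha / (self.alpha + self.beta)

beliefs = {"idea A": IdeaBelief(), "idea B": IdeaBelief()}
for idea, endorsed in [("idea A", True), ("idea A", True), ("idea B", False), ("idea A", True)]:
    beliefs[idea].update(endorsed)

ranked = sorted(beliefs.items(), key=lambda kv: kv[1].relevance, reverse=True)
print([(name, round(b.relevance, 2)) for name, b in ranked])  # idea A rises to the top
```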

We first look at learning shared preferences. The holy grail of preference learning is the A/B test, or the law of comparative judgment. The law of comparative judgment underpins everything from eye tests to economic models predicting future choice. The meta-agent uses probabilistic adaptive learning methods to do this at scale. The figure below shows an example output, taken from a case in which 100 people shared 3,000 ideas in response to a prompt. Without a means of learning shared priorities, all 3,000 ideas are of equal importance; there is no clear signal forward.

Figure 2
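A standard way to turn pairwise (A/B) judgments into a ranking is a Bradley-Terry-style model; the sketch below is a generic illustration of that idea, not the platform's algorithm.

```python
def bradley_terry(comparisons, iterations=100):
    """comparisons: list of (winner, loser) pairs from A/B preference judgments.
    Returns a normalized strength per idea; higher means more preferred."""
    ideas = {idea for pair in comparisons for idea in pair}
    strength = {idea: 1.0 for idea in ideas}
    for _ in range(iterations):
        updated = {}
        for i in ideas:
            wins = sum(1 for winner, _ in comparisons if winner == i)
            # Sum 1 / (p_i + p_opponent) over every comparison involving idea i
            denom = sum(1.0 / (strength[i] + strength[l if i == w else w])
                        for w, l in comparisons if i in (w, l))
            updated[i] = wins / denom if denom else strength[i]
        total = sum(updated.values())
        strength = {idea: s / total for idea, s in updated.items()}
    return strength

comparisons = [("idea A", "idea B"), ("idea A", "idea C"),
               ("idea B", "idea C"), ("idea A", "idea B")]
print(bradley_terry(comparisons))  # idea A ranks highest
```

In practice, systems of this kind typically sample which pairs each participant sees adaptively, so the burden on any one person stays small even as the group grows.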

The system rapidly learns the top items of shared importance or relevance to the topic. Shared beliefs about an outcome build trust, even if the idea came from generative AI. Group alignment on ‘what is true’ or ‘evidence’ is fundamental to the scientific method of knowledge discovery. The system also surfaces ideas that may have emerged only because of the collaborative conversation.

The combination of human creativity working with generative AI can foster an epiphany: a perception of a new path forward, a new way of seeing things, a way to break out toward a new, innovative solution.

Various reporting tools are available to analyze all inputs. The system delivers the ability to learn top shared priorities and drive focused collaboration. It drives a large-scale deliberative conversation focused on shared goals and priorities, building trust as in the manual case. The output from one conversation drives the next. In addition, the system provides topic-level analysis and a generative query interface for the results.

This data was generated from a collaboration composed mostly of AI agents, each given a different persona; a few humans participated in the conversation. Each narrative summary is supported by a ‘collective voice’ analysis, which shows the prioritized importance of each comment that led to the summary.

The system tracks who said what, when, and how relevant or important the statement was through the peer-review process described previously. The 25 shown in the figure is the probability estimate that the statement is relevant to the entire group. Any participant can also query the source (whether an AI agent or a human) and drill down on the evidence.

Thinking Together with Generative Collective Intelligence

For organizations to build trust in AI, we must add the word ‘collective’ to Generative Intelligence or Generative AI. Generative Collective Intelligence takes inspiration from the intersection of design thinking and collective intelligence. Collective intelligence is the shared intelligence that emerges when people work together, often with the help of technology, to solve complex problems. “Collective intelligence is believed to underlie the remarkable success of human society.”¹ Applying AI techniques to make collective intelligence work at scale requires an AI capability of collaborative adaptive learning. It is an entirely different approach to how we think about AI: a fundamental shift from learning from the data trails of human intelligence to becoming an integral part of the collaborative prediction, planning, and problem-solving of humans and AI agents.

A new model derived from nature and brain science

AI for Generative Collective Intelligence follows a model derived from the Free Energy Principle (FEP).² The FEP is a theoretical framework explaining how the brain generates its goals and desires based on sensory input. In its most straightforward formulation, the FEP and the accompanying process of Active Inference describe an agent that ‘does science’ with its environment to learn the best path forward, increasing the likelihood of maintaining its existence. Active Inference is based on the idea that an agent should actively explore its environment to reduce uncertainty about its internal state and the external world. Generative Collective Intelligence is modeled as a collection of agents (humans initially) working collectively to plan a path forward with the highest probability of achieving a specific outcome.

Research in brain imaging led to the development of the Free Energy Principle. The theory rests on a fundamental principle of physics: all physical systems seek a place of rest (equilibrium). It is closely related to the principle of least action: nature seeks the most efficient way to get things done. Energy and information follow the same mathematical rules, and living systems stay alive by reducing uncertainty in their choices. Minimizing free energy is the same as minimizing uncertainty in future choices. A model of how the FEP works in the brain is shown in Figure 3 below. In its simplest form, the brain generates expectations about what it will observe, senses input, and compares expectations to observations. If sensory input is ‘surprising,’ the uncertainty of that surprise is called “free energy,” which is mathematically identical to free energy in thermodynamics. Free energy is energy available to do work; viewed as uncertainty, it implies that work must be done to resolve it. Active Inference is a process that seeks to reduce uncertainty by taking action.
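To give a flavor of the math (a toy illustration of my own, not the full variational treatment): for a simple discrete generative model, surprise is the negative log probability of an observation, and variational free energy is an upper bound on surprise that an agent can tighten by improving its beliefs.

```python
import math

# Toy generative model: hidden state s in {rainy, sunny}; observation o in {wet, dry}.
prior = {"rainy": 0.5, "sunny": 0.5}                         # p(s): beliefs before sensing
likelihood = {("wet", "rainy"): 0.9, ("wet", "sunny"): 0.2,  # p(o | s)
              ("dry", "rainy"): 0.1, ("dry", "sunny"): 0.8}

def surprise(o):
    """Negative log evidence, -log p(o): how unexpected the observation is."""
    evidence = sum(likelihood[(o, s)] * prior[s] for s in prior)
    return -math.log(evidence)

def free_energy(o, q):
    """Variational free energy F = E_q[log q(s) - log p(o, s)].
    F >= surprise(o), with equality when q(s) is the true posterior p(s | o)."""
    return sum(q[s] * (math.log(q[s]) - math.log(likelihood[(o, s)] * prior[s])) for s in q)

o = "wet"
weights = {s: likelihood[(o, s)] * prior[s] for s in prior}
z = sum(weights.values())
posterior = {s: w / z for s, w in weights.items()}           # exact p(s | o)

print(round(surprise(o), 3))                                 # baseline surprise
print(round(free_energy(o, prior), 3))                       # loose bound: beliefs not updated
print(round(free_energy(o, posterior), 3))                   # equals surprise: beliefs updated
```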

Hopefully, at this point, one can see why this approach to AI is critical for organizations as learning communities seeking to forge a path toward agreed-upon goals and objectives. We are making decisions about our future, whether for corporate or public good.

Figure 3

Expectations that drive perception (checking that what we see matches what we expected to see) are generated from our knowledge of the world, our beliefs about what we expect to see. This process is generative. Current AI is based on statistical predictions from the data products of our past. This new AI is a generative model of how humans (or other living systems) learn from their interactions with the environment, with the flow of events that define our life experiences. Fundamentally, it is an unfolding model of how we live in a complex world, how we learn and gain experience through a generative process of testing our beliefs (hypotheses) against the realities of life. The principal difference is model generation based on cause and effect versus plausible statistical patterns learned from data.

Causal generative models define the scientific process. My first introduction to generative modeling was in applied physics, learning how causal models of magnetism in thin films could be inferred from observations of how they reflected electromagnetic waves.

The method I used was machine learning from data. It was 1969, and methods similar to those used now (searching for solutions by minimizing error) were available on far less powerful machines. The inferred properties of magnetic behavior were compared with theoretical models. In the physical sciences, if expected results do not match experimental results, the implied action is either to verify that the experimental evidence is correct or to change the theory responsible for generating the prediction. Generative modeling is foundational to the discipline of practicing science.

The scientific method is the foundation of this continuous adaptive learning AI architecture. Generative models of this form create knowledge. They provide a true path to learning tacit knowledge that is predictively accurate. Causal generative AI will generate scientifically reliable knowledge when coupled with the collective intelligence of the appropriate collection of humans. By appropriate, I mean the diverse perspectives of peers with similar expertise and training. That is why this approach to AI is based on first principles and automates a process that has led to all scientific discoveries to date.

Note that this approach significantly diverges from current statistical generative AI. Today’s AI relies on extensive model training before use. The models are always generated from past results with a lag bounded by the training period. Note also that there is no foundation for determining the timing, context, and quality of the information used to generate results. For this reason, concerns about the trustworthiness of current generative AI are warranted.

A Multi-Agent AI Architecture for Generative Collective Intelligence

Artificial Intelligence that adaptively learns with each step, using what it learns to inform the next step, is closer to how we learn. Imagine the AI as a Meta-Agent, a Facilitator that listens, learns, and guides based on the activities of the collaborating group. This Meta-Agent concept is instrumental in guiding the generation of new knowledge from humans co-creating with AI Agents to design solutions to our most challenging problems. Pre-trained transformer models (the PT of GPT) are vital in the Meta Agent’s process of guiding the group’s collaborative learning. Transformer models made it possible to calculate with concepts, creating a content-addressable memory to remember the deliberation process as humans (and AI Agents) work together to create a desired outcome.
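One way to picture the 'content-addressable memory' idea (a generic sketch with made-up embedding vectors, not CrowdSmart's implementation): contributions are stored as vectors produced by an embedding model and recalled by semantic similarity rather than by keyword.

```python
import numpy as np

# Generic sketch of content-addressable memory over contributions.
# A real system would use a transformer embedding model; these vectors are made up.
memory = {
    "Shorten onboarding to two weeks": np.array([0.9, 0.1, 0.2]),
    "Automate invoice matching":       np.array([0.1, 0.8, 0.3]),
    "Pair new hires with mentors":     np.array([0.8, 0.2, 0.1]),
}

def recall(query_vec: np.ndarray, top_k: int = 2):
    """Return the stored contributions most similar to the query embedding."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted(memory.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [(text, round(cosine(query_vec, vec), 2)) for text, vec in scored[:top_k]]

# A query about onboarding retrieves the two onboarding-related contributions.
print(recall(np.array([0.85, 0.15, 0.15])))
```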

For the sake of simplicity, we call the Meta-Agent an AI Learning Facilitator, or AI Facilitator for short. The architecture is pictured below.

Figure 4

The Facilitator orchestrates an interaction among agents, building a language model of the collaboration while simultaneously building a causal model of the group’s deliberation process.

The Facilitator masks the identity of participants, mimicking the blind review process used in the scientific literature. Masking identity significantly reduces bias and opens the door to finding common ground.

The CrowdSmart Platform implements the AIE architecture.

Over nine years, the continuous adaptive learning approach described in this paper was implemented and tested in innovation and business transformation. The principal focus was using AI to leverage the power of human collective intelligence.

Given a script (a set of prompts/questions), the AI Facilitator learns the collective voice (resonant themes in order of group preference) for qualitative prompts, and it learns a causal knowledge model for multi-criteria decisions.
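For illustration only, a facilitation script might look something like the sketch below; the structure and field names are hypothetical, not the platform's actual API.

```python
# Purely illustrative structure for a facilitation script; field names are hypothetical.
script = [
    {
        "prompt": "What is the biggest obstacle to adopting AI in our customer-support workflow?",
        "type": "qualitative",      # Facilitator learns the collective voice (ranked themes)
        "review_rounds": 2,         # participants review and rate peers' anonymized responses
    },
    {
        "prompt": "Rate each proposed initiative against cost, risk, and expected impact.",
        "type": "multi_criteria",   # Facilitator learns a causal knowledge/decision model
        "criteria": ["cost", "risk", "expected_impact"],
    },
]
```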

Applications

The first use cases we employed for training were investments. The system accurately predicted the survivability of startups based on the collective intelligence of teams of cognitively diverse experts (in investing, technology, etc.). It built trust, and it built models that offer an audit trail on every decision made.

Trust and confidence are needed at each step of business transformation. The system supports this by tracking the collective voice (representing prioritized preferences) and the group’s alignment (their collective uncertainty, or free energy). Shared priorities and shared confidence are the principal drivers of productivity.

Processes associated with design thinking workshops can be directly implemented with the platform.

Open API

CrowdSmart intends to open the API soon, enabling integration with various applications and collaboration platforms.

The path forward is Generative Collective Intelligence based on first principles.

Tapping into the power of emergent correlations in data has created great advances. Recently, Hopfield and Hinton were awarded the Nobel Prize in Physics for their significant work on neural networks and deep learning. Hinton and Bengio, pioneers of deep learning, have made clear that while deep learning is a substantial move in the right direction, it is not the complete answer to AI.

“In terms of how much progress we’ve made in this work over the last two decades: I don’t think we’re anywhere close today to the level of intelligence of a 2-year-old child. But maybe we have algorithms that are equivalent to lower animals for perception.”³

We are on a new road forward when we embrace Generative Collective Intelligence, because it embraces the scientific discovery process. It offers the ability to build trust and to harness our creative intelligence.

We have over-indexed on learning statistical patterns from data as the future of AI. It is a false hope to believe that what we see today will lead to a positive future for AI. There is reason for caution and concern because we are not there yet. The answer is not to constrain or regulate but to see the bright future ahead by investing our energies in much richer models based on the first principles we can learn from physics and the natural sciences. As Richard Feynman said:

“Although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial… The imagination of nature is far, far greater than the imagination of man.” ~Richard Feynman

I invite you to explore application use cases. For information on corporate use cases, contact info@crowdsmart.ai.

The platform has clear opportunities for solving social challenges and creating greater efficiency and effectiveness in the non-profit world. We have formed a non-profit for that purpose. Common Good AI is actively creating use cases showing social impact. We also intend it to be a learning community and research initiative for continuous research and development in AI technologies based on first principles. If you are interested in this area, contact info@commongoodai.org.

Summary (TL;DR)

Trust in AI for decision guidance will only happen with the ability to apply collective critical thinking to results. Trust building has historically happened through various manual processes (design thinking workshops, standards committees, and scientific literature review, to name a few), but these processes do not scale well. A new AI methodology, based on a supervisory agent that automates a process of collective reasoning and trust building among humans and between humans and machines, opens the door to trust-building decision intelligence at scale. See CrowdSmart.ai for more information.

[1] Peter Krafft, Julia Zheng, Wei Pan, Nicolás Della Penna, Yaniv Altshuler, Erez Shmueli, Joshua B. Tenenbaum, and Alex Pentland. (2016). Human collective intelligence as distributed Bayesian inference. arXiv:1608.01987.

[2] Friston, K. The free-energy principle: a unified brain theory?. Nat Rev Neurosci 11, 127–138 (2010). https://doi.org/10.1038/nrn2787

[3] Yoshua Bengio, founder and scientific director of Mila-Quebec AI Institute


Tom Kehler

I love pioneering transformative technologies based on solid science. Co-founder and Chief Scientist at CrowdSmart.