Applying collective reasoning to investing

Tom Kehler
11 min read · Jul 1, 2020


“… we balance probabilities and choose the most likely. It is the scientific use of the imagination …” Sherlock Holmes, in “The Hound of the Baskervilles” by A.C. Doyle, 1901

The first half of 2020 has been marked by a call for imagination.[1] We are dealing with a problem unique in human history. Statistical learning from data, the current driver of AI technology development, falls short in many areas when what happened before no longer guides what to do next. Imagination is a uniquely human capacity, and it is why we are resilient. In this article, the focus is on combining human and artificial intelligence to create a collective expert system that brings the best expertise together into a predictive system.

Results in the science of collective intelligence point to a solution: a diverse group of knowledgeable individuals will outperform any single expert.[2],[3],[4] I believe augmenting collective intelligence with AI technologies holds substantial promise for leveraging the human capacity to imagine and invest in a better future.

Aligning diverse points of view, vital in our current moment, is difficult in normal times, let alone in times of extreme uncertainty like the pandemic we currently face. Much is at risk if you get it wrong, and the cost of inaction grows steeper with each passing day. In this piece, I will outline an approach that has a history of predictive performance.

Prior to the onset of modern collective intelligence as we know it today, I was CEO of Recipio (later known as Informative), a company that pioneered a scalable adaptive learning technology for accurately predicting customer preferences by asking simple open-ended questions that consumers could answer in their own words. The system adaptively sampled answers and encouraged consumers to rate their peers’ input. The process converged to a list of consumer responses ranked by relevance, which supported a predictive model of consumer preferences. This allowed major brands like NBC, P&G, GM, and LEGO to predict future market performance for products through partnership with their customers. After scaling and proving the applicability of our product, we sold the company to Satmetrix, the NPS company.

In 2008, a friend, John Seely Brown (JSB), introduced me to “The Difference” by Scott Page. JSB was an advisor to Recipio, and he nudged me to read Page and to think about integrating Page’s findings with my prior 12 years’ work applying adaptive learning to consumer groups. In late 2013, I began working on a new platform for augmented collective intelligence with a small team. This work laid the foundation for CrowdSmart, a company I founded with Kim Polese, Markus Guehrs, and Fred Campbell in 2015.

Distinct from crowd or swarm intelligence, augmented collective intelligence seeks to build a model of the collective knowledge of a cognitively diverse group of expert and intelligent contributors. The collective knowledge model is built through an asynchronous, guided process that stimulates divergent thinking but drives to convergent conclusions. The process can be applied to evaluations as simple as an open-ended question (“What do you believe are the best policies for balancing safety and economic recovery in response to the COVID-19 pandemic?”) or as complex as an instrumented distributed investment committee.

For the past five years, we at CrowdSmart have pursued a single use case for augmented collective intelligence. We started with the hard problem: seed investing, where there is little to no information and success is determined by the insight of human predictors. We also recognized that there is a randomizing element. Even in this case, no one can foresee the future, so what we really focused on was: “Can augmented collective intelligence scale and improve best practices?” The answer, as you will see, is a resounding yes.

Best practices were based on extensive qualitative research available in the literature.[6] Key factors are:

1. The more diligence, the better the performance.

2. A compelling business opportunity that addresses product-market fit in a large market.

3. A team that is right for the opportunity.

4. Early investors and advisors that bring a network effect (connectedness).

5. A high level of conviction and commitment from early investors and the team.

From these five factors, we designed a process that measures a startup’s market potential, team, network, and investor conviction. For each startup, we first build an investment evaluation team using the principles of collective intelligence to ensure cognitive diversity. Each team member has access to a data room with the typical materials available to a VC for investment diligence. Second, the evaluation team is guided through a multi-day asynchronous process of interaction and scoring. The process starts with a live Q&A discussion with the founding team. This session is recorded and transcribed. The team is then invited to review and provide their assessment and feedback along four discussion areas: the overall business (e.g., market, product-market fit, competition), the team, the network value of early advisors and investors, and a discussion of “conviction” (e.g., would you yourself invest or recommend the investment to a friend or colleague). Each area is scored on a 1-to-10 scale. Evaluators are encouraged to provide free-form text descriptions of the reasons for their scores. The group then interacts by rating and ranking contributions in an adaptive learning process. Once the process begins, the identity of the investor/expert is not visible to the startup team or to other investors.

The system effectively creates an “idea competition” in which peers’ ideas are sampled and rated in priority order to learn shared priorities and the reasons why the startup under consideration will or will not be a successful investment. The process is single-blind, so the competition is based on the quality of an idea, not who generated it. Data stripped of identities is made available to the startup team and the investors, encouraging discussion and resolution of questions. The system records each transaction, allowing a review of evidence (as represented in the discussion) to be connected to changes in belief about the startup’s success trajectory.

The figure below summarizes the process:

Knowledge Acquisition Method

Knowledge acquisition from a diverse group reduces bias. Research in collective intelligence shows that a cognitively diverse group of well-informed individuals will outperform any individual expert in predictive accuracy. This is captured in the Diversity Prediction Theorem, which states that the diversity of a group reduces the group’s error in making a prediction.[4]

Diversity Prediction Theorem
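The theorem can be checked numerically. The sketch below uses four hypothetical evaluator predictions (the specific values are invented, but the identity holds for any numbers):

```python
import numpy as np

# Diversity Prediction Theorem:
#   collective error = average individual error - prediction diversity
# The predictions below are invented for illustration.
truth = 10.0
predictions = np.array([7.0, 9.5, 12.0, 13.5])  # four diverse evaluators

collective = predictions.mean()                       # the group's prediction
collective_error = (collective - truth) ** 2
avg_individual_error = np.mean((predictions - truth) ** 2)
diversity = np.mean((predictions - collective) ** 2)  # variance of predictions

# The identity holds exactly: the group's squared error equals the average
# individual squared error minus the diversity of the predictions.
assert np.isclose(collective_error, avg_individual_error - diversity)
```

Because diversity is subtracted, the group can never be less accurate than its average member, which is why balancing the group for cognitive diversity matters.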

Given that we have properly balanced the group for diversity based on collective intelligence principles (done algorithmically), the challenge remains as to how best to extract a knowledge model from their inputs and interactions.

We learn from a group of evaluators a statistically valid representation of their collective view by moderating and measuring the results of their discussion. To do this, we use a method that is a close cousin of how search works.

1. Ask an open-ended question with a focus (e.g., “What must this management team do to be successful at executing their strategy?”).

2. Allow free-form submission of thoughts (before seeing the opinions of others).

3. Sample the universe of opinions submitted by all evaluators and provide the sampled list to each reviewer to prioritize based on relevance to their views.

Submitted priorities in step 3 result in a scoring event that then informs the sampling algorithm. The process is modeled as a Markov process that converges to a stable representation of the collective shared view. It has been shown to produce a statistically valid rank ordering of shared opinion, turning bottom-up qualitative free-form statements into a list rank-ordered by shared relevance. The figures below are from a simulation of the sampling and scoring process.

Learning a statistically accurate preference ranking from a group ideation session

The diagram shows the correlation between the “true rank” and the “simulated” (learned) ranking. The system starts with statements having no shared relevance (i.e., random) but rapidly learns the high-priority rankings first. We can thus learn, with statistical accuracy, the statements or propositions that reflect the opinion of the group.
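The sample-and-rate loop can be illustrated with a toy simulation (a sketch only: the statement relevances, noise level, and sample size are invented, and this is not the production algorithm):

```python
import random

random.seed(1)

# Hidden "true" shared relevance for 20 hypothetical statements.
true_relevance = {s: s * 0.1 for s in range(20)}
totals = {s: 0.0 for s in true_relevance}
counts = {s: 0 for s in true_relevance}

# Each rating event shows a small random sample of statements to one
# evaluator, who rates them noisily around the true relevance.
for _ in range(2000):
    for s in random.sample(list(true_relevance), 5):
        totals[s] += true_relevance[s] + random.gauss(0, 0.3)
        counts[s] += 1

# Rank statements by their running average rating.
learned_rank = sorted(true_relevance, key=lambda s: totals[s] / counts[s], reverse=True)
true_rank = sorted(true_relevance, key=true_relevance.get, reverse=True)
print(learned_rank[:5], true_rank[:5])  # the top statements are recovered first
```

Even with individually noisy ratings, the averaged ranking converges toward the true shared ordering, with the highest-relevance statements stabilizing first.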

This process has been deployed in predicting consumer preferences from collective conversations in media and consumer packaged goods. The process was tested in parallel studies and certified as convergent and reproducible.

This process is core to collective knowledge acquisition. From it, we can use natural language processing techniques to learn the topics or themes that are driving a prediction or decision. For those familiar with Markov processes and related algorithms such as PageRank, this should seem reasonable: once you have established a basic means of recognizing prioritizing interactions, the ability to model them as a Markov process becomes clear.
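As a sketch of that machinery, power iteration on a small made-up transition matrix (the same computation that underlies PageRank) converges to a stationary distribution that can serve as a stable ranking:

```python
import numpy as np

# A made-up 3-statement transition matrix: entry P[i, j] is the probability
# that attention moves from statement i to statement j during prioritization.
P = np.array([
    [0.1, 0.6, 0.3],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
])  # each row sums to 1

rank = np.full(3, 1 / 3)  # start from a uniform distribution
for _ in range(100):
    rank = rank @ P       # one Markov step

print(rank)  # stationary distribution: the stable shared ranking
```

The resulting vector no longer changes under further steps, so it plays the role of the converged collective priority over statements.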

From this framework we can now build a knowledge model. Each quantitative score (e.g., the Team Score) has an associated set of statements describing the reasons for the score. These statements are independently ranked by the collective group of evaluators. Since evaluators do not know the quantitative score associated with a particular statement, each ranked statement ends up with an associated distribution of scores.

Specifically:

1. Each statement made by any evaluator is scored by peers as to whether it is relevant to the discussion.

2. Each quantitative score has a collection of statements ranked by their relevance to the peer group.

3. Themes or topics learned through NLP have a quantitative distribution derived from the scores associated with their statements.
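The three properties above can be represented with a simple data structure. The sketch below (all statements, relevance values, and scores are invented) groups statements into themes and derives each theme’s relevance and score distributions:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (statement, peer relevance, evaluator's 1-10 score,
# NLP-assigned theme). All values are invented for illustration.
records = [
    ("strong founding team",       0.9, 8, "team"),
    ("CEO lacks sales experience", 0.7, 5, "team"),
    ("large addressable market",   0.8, 9, "market"),
    ("crowded competitive field",  0.6, 6, "market"),
]

# Each theme accumulates a distribution of relevance values and of scores.
themes = defaultdict(lambda: {"relevance": [], "scores": []})
for statement, relevance, score, theme in records:
    themes[theme]["relevance"].append(relevance)
    themes[theme]["scores"].append(score)

for theme, dists in themes.items():
    print(theme, round(mean(dists["relevance"]), 2), round(mean(dists["scores"]), 2))
# prints: team 0.8 6.5, then market 0.7 7.5
```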

Given that the evaluation process can extend over a couple of weeks, there is an opportunity to gather rich data on any individual investment opportunity, typically yielding hundreds of quantitative data points and tens of thousands of words.

We now have the basis for constructing a knowledge model in the form of a Bayesian Belief Network (BBN). Statements and topics are linked through conditional probability tables. For example, a specific expert (Expert 38) made a comment as a reason for their score. That comment was classified into a theme or topic by an NLP theme classifier. The theme consists of a collection of comments, each with a relevancy score. Therefore, the theme has a relevancy distribution; it also contains a distribution of quantitative scores. From this we can infer that certain comments imply certain scoring patterns, which in turn inform the overall assessment of the probability of success.

Bayesian Belief Network of an Individual Investment Evaluation

The BBN model for each startup is produced by the system during the course of the evaluation described above. The model is an executable knowledge model that yields estimates of the probability of being a successful company. Specifically, the probability pROI, in the center of the figure, is an estimate of the likelihood of the company establishing funding momentum (e.g., getting to Series A/B). The BBN, in conjunction with the score, provides an explanation for the score. The initial feature set used to frame the evaluation process was derived from research on best practices in early-stage investing. Thus, the Bayesian prior (the beliefs about what constitutes a great startup investment) is based on a feature set derived from the heuristic that startups with a strong market opportunity, a strong team, a set of early advisors/investors with strong network connections, and a highly convinced set of investors are most likely to succeed. Building the BBN models the investor market’s view of that proposition, as represented by the evaluation team, in the context of a specific startup. Thus the BBN is a collective knowledge model that predicts the startup’s ability to gain traction with investors, i.e., whether there is an investor market for this startup.
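As a minimal illustration of the idea, here is a hand-rolled Bayesian update, far simpler than the real network: the prior and all conditional probabilities below are invented, and conditional independence between themes is assumed for brevity.

```python
# Start from a prior belief in success, then update it as theme-level
# evidence arrives. Assumes conditional independence between theme signals,
# a simplification a full BBN does not need to make.
prior_success = 0.35

# Hypothetical likelihoods per observed theme signal:
#   (P(signal | success), P(signal | failure))
evidence = {
    "strong_team_signal":  (0.8, 0.4),
    "large_market_signal": (0.7, 0.5),
}

p_success = prior_success
p_failure = 1 - prior_success
for p_given_s, p_given_f in evidence.values():
    p_success *= p_given_s
    p_failure *= p_given_f

# Posterior probability of success given both signals (a toy "pROI").
p_roi = p_success / (p_success + p_failure)
print(round(p_roi, 3))  # prints 0.601
```

Here two favorable theme signals lift the belief in success from 0.35 to about 0.60, and the contributing signals themselves form the explanation for the change.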

In traditional investing, an investment committee executes a process of collecting information, deliberating, and making a decision. The evaluation process described in the previous paragraphs is an automated form of an investment committee. The resulting BBN for a particular investment is a persistent knowledge model of the collective human knowledge driving an invest-or-pass decision. For our project, we constructed nearly 100 BBN models. These models provide a memory, or audit trail, for an investment portfolio, empowering a learning platform for improving investment accuracy.

In addition to the BBN, a parallel and interoperable model was employed to learn the weighting of various parameters by applying a measure against “ground truth.” The ground truth we used for this project was conversion to a growth round of funding (evidence that there actually is an investor market for the startup). This single variable of sustained funding has the highest correlation with profitable return on investment. Each startup is scored and then tracked over subsequent months and years to determine whether high scores in fact predict a high probability of follow-on funding. In this specific case, we used a logistic regression classifier as the machine learning model.
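A sketch of that calibration step using scikit-learn (the data here is synthetic; real inputs would be the evaluation scores and the observed follow-on outcomes):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Synthetic evaluation data: four 1-10 scores per startup (market, team,
# network, conviction), with follow-on funding more likely at high scores.
X = rng.uniform(1, 10, size=(n, 4))
y = (X.mean(axis=1) + rng.normal(0, 1.5, n) > 6.0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Probability of follow-on funding for a hypothetical high-scoring startup.
p_followon = clf.predict_proba([[8.5, 9.0, 7.5, 8.0]])[0, 1]
print(f"predicted follow-on probability: {p_followon:.2f}")
```

The fitted coefficients also show which of the four scores carries the most weight against the ground truth, which is exactly the parameter weighting described above.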

While we continue to work on improving the model’s accuracy, early results are very promising. The model performs at >80% accuracy, meaning that if a startup scored above 72% (the probability of being in the “invest” class), there is more than an 80% likelihood that the startup will go on to a follow-on round. This compares to roughly 10% for the general population, ~20% for professional VCs, and ~35% for top-tier VCs.

There is then substantial promise that interoperable models based on augmented collective intelligence can link collective intelligence predictions (predictions from a cognitively diverse set of well-informed humans) to real-world performance. Furthermore, representing predictions and decisions in this form promises to lay a foundation for an entirely new field of applications linking the strongest potential of human intelligence (e.g., imagination, heuristic judgment, learning from small data sets) to the strongest potential of current AI models based on learning from data.

The integration of human cognitive processes with the systematic precision of data-driven learning has massive implications. To get the greatest potential from developments in AI, the link between humans and machines must be transparent and interoperable. The result is prediction with explanation.

Given the promising results, we have now extended the platform to support all stages of investment. This approach is of particular importance to impact investing and ESG-type investments. It is also highly important to corporate investments, where issues like strategic impact go beyond simply modeling financial returns.

Augmented collective intelligence leads to a form of intelligence unmatched by artificial or human intelligence alone. With a human-empowered artificial intelligence, I believe we will be able to innovate new economies that are impactful and more inclusive.

[1] https://hbr.org/2020/04/we-need-imagination-now-more-than-ever

[2] Tetlock, Philip, and Gardner, Dan. “Superforecasting.” Crown, 2015.

[3] Malone, Thomas W. “Superminds.” Little, Brown, 2018.

[4] Page, Scott E. “The Difference.” Princeton University Press, 2007.

[5] Reichheld, Fred. “The Ultimate Question.” HBS Press, 2007.

[6] https://www.angelcapitalassociation.org/data/Documents/Resources/AngelGroupResarch/1d%20-%20Resources%20-%20Research/ACEF%20Angel%20Performance%20Project%2004.28.09.pdf

Written by Tom Kehler

I love pioneering transformative technologies based on solid science. Co-founder and Chief Scientist at CrowdSmart.
