Verticalized LLM Solutions for Private Equity
Businesses are starting to figure out how they can put an LLM-powered application in the hands of their employees to more easily traverse proprietary data.
The release of large language models (LLMs) and the subsequent explosion in their use has led to interesting extensions of the technology. A simple search for “interesting GPT projects” will return use cases ranging from simulating conversations with William Shakespeare to generating user interface mock-ups directly from text input. Despite these niche use cases, most users’ first interactions with LLMs are very similar. Apps like ChatGPT can answer basic questions such as, “Can you give me a recipe for apple pie?” or “What actor plays Logan Roy in the TV series Succession?” You were likely impressed by how natural the interface felt and by the fact that the model could “find” the answer you were looking for.
Interest in the technology has evolved beyond these day-to-day question and answer use cases, and businesses are starting to figure out how they can put an LLM-powered application in the hands of their employees to more easily traverse proprietary data. At Arctic, we refer to this concept as “verticalized LLM”. Businesses are ultimately looking for a system that understands the nuances of their company and industry far beyond what the publicly available applications can provide.
We will be publishing our thoughts and expertise as a series of Perspectives on Verticalized LLMs. We will start with an overview of LLM use cases in the private equity industry, but we will explore the topic in more depth and breadth in the coming weeks.
Want to learn about LLM solutions for your vertical? Get in touch
A solution at the right time
Corporate leaders across industries have noticed that LLMs have officially arrived and that they have great potential to generate value. Private equity leaders are no exception. In fact, these technologies have arrived at a uniquely challenging time in the private equity market. In our first Perspective on buy and build strategies, we highlighted the intensifying competition for deals as dry powder accumulates. As a result, there is increasing pressure on private equity executives to accelerate deal flow and expand their research efforts.
At the same time, however, firms are facing human capital challenges. A study* by AlixPartners found that 41% of private equity leaders are concerned about the war for top talent, 42% are concerned about employee burnout, and 41% are concerned about their ability to retain high-performing team members. Firms are at a point where asking investment teams to expand their scope without growing headcount is unlikely to be sustainable. Many are looking at LLM technologies as a lever to drive efficiency in their teams by automating research tasks, allowing professionals to focus on higher-value work, and ultimately enabling the firm to cover a larger universe of investment opportunities.
Some valuable use cases
It does not take long to identify a number of use cases for a GPT-style application built specifically for private equity:
Analysts working on a specific deal could type due diligence questions into a chat interface and get direct answers instead of searching through the data room for the right document, slide, or Excel model.
Investment professionals could ask questions in plain language about the firm’s previous deals such as, “What was the revenue multiple on the Company X deal back in 2006?”
Legal diligence teams could ask questions about a target company’s typical contract duration instead of having to click through and review each one individually.
Investment teams could use it to compare details from the CIMs or teasers of two similar investment opportunities instead of doing a side-by-side comparison of the two documents manually.
Investment teams could type an investment hypothesis in plain language and generate a ranked list of companies that align with it.
Having dug into LLM research and worked on implementations of these systems, we can say that today’s LLMs are not without their limitations. For example:
LLMs do not “search” for information. They are models that have been trained on enormous amounts of historical data and essentially predict (albeit quite accurately) what sequence of words and phrases best addresses the user’s prompt.
LLMs do not respond well to prompts that are unlike anything they were trained on.
LLMs are known to hallucinate (i.e., generate information that is not present in or supported by the data they were trained on), which can produce misleading and outright incorrect information.
LLMs can be fine-tuned on proprietary data so that they are able to answer questions that are specific to the firm. However, the fine-tuning process is not simple.
Fine-tuning trade-offs
To illustrate the considerations when building a Verticalized LLM for private equity, we will walk through the thought process of someone who is building a system that allows investment professionals to ask questions about the firm’s previous deals.
This verticalized LLM solution needs to not only understand language in the private equity domain, but also have knowledge of the deals that the firm has worked on in the past. To accomplish this, the firm needs to fine-tune an existing model. After some further research, two options typically emerge: fine-tune an open-source LLM internally, or use a SaaS / PaaS provider that offers fine-tuning capabilities. We have summarized the trade-offs between the two options below.
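To make the first option more concrete, below is a minimal sketch of what in-house fine-tuning of an open-source model on the firm’s own deal material might look like, using the Hugging Face transformers, peft, and datasets libraries. The model name, data file, and hyperparameters are illustrative assumptions, not recommendations.

```python
# Minimal sketch: parameter-efficient fine-tuning (LoRA) of an open-source LLM
# on the firm's own infrastructure. All names and settings are illustrative.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "meta-llama/Llama-2-7b-hf"        # any open-source causal LM would do
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters so only a small fraction of weights are trained;
# the firm's data never leaves its own environment.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Hypothetical dataset: one JSON line per historical deal summary or Q&A pair,
# each with a "text" field.
data = load_dataset("json", data_files="deal_history.jsonl")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=1024))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pe-llm", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even this simplified version hints at the engineering effort involved: data collection and formatting, infrastructure to run training, and evaluation of the resulting model all sit with the firm.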
Ensuring trust and credibility are top of mind
LLMs are certainly not the only technology that firms have attempted to implement in recent years. From our experience, the common theme amongst failed implementations is that users lost trust in the technology early on. Investment teams have jobs that rely on having the right information to make critical decisions. You cannot simply put a fine-tuned LLM in the hands of the investment team and move on. The first time the LLM hallucinates will be the last time the solution is used at the firm.
To combat the risk of hallucinations, the solution could give users additional information along with the LLM’s response, such as the source of the information or even a link to the document where the response can be verified. However, LLMs do not inherently “show their work” or cite the information they provide. There is a prompting technique called “chain of thought” whereby the model can be asked to demonstrate the reasoning behind a certain answer or conclusion, but having an investment researcher probe the LLM to understand where the information came from is an inefficient solution to the problem. This is precisely why firms need to explore building a system that “wraps” a fine-tuned LLM with other software components that bring additional value to the use case.
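As a rough sketch of what such a “wrapper” might look like, the snippet below pairs every answer with the document it was drawn from. The document index and LLM client are hypothetical placeholders; the point is that citation is handled by the surrounding system rather than by the LLM itself.

```python
from dataclasses import dataclass

@dataclass
class SourcedAnswer:
    answer: str           # text generated by the LLM
    source_document: str  # file the supporting passage came from
    source_page: int      # page number, so the user can verify quickly

def answer_with_sources(question: str, document_index, llm) -> SourcedAnswer:
    """Wrap the LLM so every response carries a verifiable citation.

    `document_index` and `llm` are hypothetical components: the index returns
    the most relevant passage together with its origin, and the LLM is asked
    to answer only from that passage, which narrows the room for hallucination.
    """
    passage = document_index.lookup(question)   # e.g. a slide from a CIM
    prompt = (
        "Answer the question using only the excerpt below. "
        "If the excerpt does not contain the answer, say so.\n\n"
        f"Excerpt:\n{passage.text}\n\nQuestion: {question}"
    )
    return SourcedAnswer(
        answer=llm.generate(prompt),
        source_document=passage.document_name,
        source_page=passage.page_number,
    )
```

The investment professional sees not only the answer but also exactly where to click to confirm it, which is what preserves trust over time.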
A proprietary Dewey decimal system
The first value-added feature that the system needs is the ability to search through internal documents to find information and provide the source document to the user along with a text response generated by the LLM. Although this is not a typical problem solved by LLM technology, applying the software engineering concept of “indexing” provides a path forward.
Indexing is a method used to make a large amount of data or documents easier to sort through by attaching tags that make them easier to search. A useful analogy is how libraries organize books by topic using the Dewey decimal system. A system without indexing is akin to walking into a library with no signs on the shelves telling you the topic of the books in each aisle. You might get lucky and find the book you are looking for quickly, but odds are you will spend a long time looking at irrelevant books before you find the right one.
The same idea applies to the LLM-based system we are trying to design. The LLM could be used to understand what information the user is asking for, but without indexing the data, the solution would have to search through every single document to find a proper match.
Let us return to the library analogy for a moment. Libraries index their books in multiple ways, such as grouping them by genre and arranging them alphabetically by title or by author’s last name. These indexing decisions were made based on how users (i.e., people looking for a book) typically search. If they are looking for a specific book, they know the genre, title, and author and can navigate the library layout and the shelves quickly. If they are browsing a particular genre and author but do not have a book in mind, they will use the signs in the library to walk to the right section, find the author’s books on the shelf, and then read the summaries of several books to pick the one they want. See Figure 1 below.
The same should be done when designing an index for this solution in private equity. The indexes should be defined based on the questions that users are most likely to ask such as:
What was the revenue multiple on the Company Y deal in 2023?
What was the range of EBITDA multiples in the semiconductor industry on our deals from 2020 to present?
What were some of the key trends that drove growth in the biotech industry last year?
Possible indices, such as sector, financial data, and document type, emerge from the questions that users ask. See Figure 2 below.
Detailed scoping is required to make sure that you choose appropriate indices, but done properly, this approach greatly improves the efficiency and effectiveness of the system. Effort is required to assign indices to every document you want to search, but there are also automation techniques (outside the scope of this article) that can help expedite this process.
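As an illustration, a document index for this use case can start as nothing more than structured metadata attached to each file. The field names below are assumptions derived from the example questions above, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DealDocument:
    """Illustrative index entry; fields reflect the questions users are
    likely to ask (deal, sector, year, document type, key metrics)."""
    path: str                      # where the underlying file lives
    deal_name: str                 # e.g. "Company Y"
    sector: str                    # e.g. "semiconductors"
    year: int                      # deal year
    doc_type: str                  # "CIM", "board deck", "financial model", ...
    metrics: dict = field(default_factory=dict)  # e.g. {"EBITDA multiple": 9.5}

def find_documents(index: list[DealDocument], **criteria) -> list[DealDocument]:
    """Narrow the search space before any LLM is involved, the same way
    library signage narrows which shelf you walk to."""
    return [doc for doc in index
            if all(getattr(doc, key) == value for key, value in criteria.items())]

# Example: pull every semiconductor CIM, then let the LLM answer from that subset.
# semis = find_documents(index, sector="semiconductors", doc_type="CIM")
```

The design question is not the code itself but which fields to index, which is why scoping the users’ likely questions comes first.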
Not all data is created equal
The next logical place to turn our attention is the data itself. The first thing many will notice is that data formats vary greatly: Excel files, PowerPoint presentations, Word documents, and different types of PDFs. Not only that, but the content in those documents (i.e., the modality) is not just text; they contain a combination of text, tables, charts, and images. You do not need to be a software or machine learning engineer to understand that some kind of pre-processing must take place to extract the relevant information. Below are just a few examples of techniques that we have implemented before, followed by a sketch of how they might fit together:
Optical Character Recognition (OCR) – Provides the ability to extract text from images such as old PDFs, scans of contracts, or infographics
Page segmentation – Provides the ability to separate pages or sections of a document so that the information can be better organized into topics and more efficiently scanned for relevant data
Table / chart extraction – Provides the ability to read a table or chart, decipher what information it has, and then store it in a searchable way
Document classification – Provides the ability to group documents based on their contents (e.g. industry report, board deck, financial projection, product roadmap, M&A strategy, capital allocation, etc.)
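The sketch below shows one way these steps could be stitched into a simple pre-processing pipeline that routes each file format to an appropriate extraction step. The extraction functions are stub placeholders for whichever OCR, table-extraction, and classification tools a firm ultimately adopts.

```python
from pathlib import Path

# Placeholder extraction steps; in practice these would call an OCR engine,
# a table-extraction library, and a document classifier of the firm's choosing.
def run_ocr(path: Path) -> str: return f"<text extracted from {path.name}>"
def extract_tables(path: Path) -> list: return []
def split_pages(path: Path) -> list[str]: return []
def classify_document(text: str) -> str: return "unclassified"

def preprocess(path: Path) -> dict:
    """Route each file format to a suitable extraction step and return a
    single searchable record for the indexing stage described above."""
    suffix = path.suffix.lower()
    record = {"path": str(path), "text": "", "tables": [], "pages": []}

    if suffix == ".pdf":
        record["text"] = run_ocr(path)            # scans and old PDFs need OCR
        record["tables"] = extract_tables(path)   # keep numeric tables separate
    elif suffix in {".pptx", ".docx"}:
        record["pages"] = split_pages(path)       # page segmentation by topic
        record["text"] = "\n".join(record["pages"])
    elif suffix in {".xlsx", ".csv"}:
        record["tables"] = extract_tables(path)   # spreadsheets are tables end to end

    record["doc_type"] = classify_document(record["text"])  # e.g. "board deck"
    return record
```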
A natural extension of data preparation is to start thinking about how to interface with an LLM. For a user, interfacing in natural language is great, but there are use cases where the output of the LLM could be an input into another piece of software. For example, a user could ask about a company’s EBITDA trend for the past 10 years, which naturally lends itself to a chart-style output instead of a large block of text. In this scenario, the LLM output would need to be fed to another software component that takes that data and creates a chart from it. However, LLM outputs do not follow a consistent format by default and therefore cannot be fed directly into another software component. This limitation has to be overcome in order to chain LLM outputs to other pieces of software using structured formats like JSON or CSV.
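One way to bridge that gap is to instruct the model to reply in JSON and validate the reply before any charting code touches it. The prompt wording and field names below are illustrative assumptions, not a standard format.

```python
import json

def parse_ebitda_series(llm_reply: str) -> list[tuple[int, float]]:
    """Validate that the LLM's reply matches the JSON structure we asked for.

    Expected shape (an assumption baked into our prompt):
    {"series": [{"year": 2015, "ebitda_musd": 12.3}, ...]}
    """
    payload = json.loads(llm_reply)   # raises if the reply is not valid JSON
    points = [(int(p["year"]), float(p["ebitda_musd"])) for p in payload["series"]]
    return sorted(points)             # chronological order for plotting

# The prompt would request machine-readable output explicitly, e.g.:
# "Return the company's EBITDA for the last 10 years as JSON of the form
#  {\"series\": [{\"year\": <int>, \"ebitda_musd\": <float>}, ...]} and nothing else."
# The parsed (year, value) pairs can then be handed to any charting library.
```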
In summary
There is a great deal to digest on verticalized LLMs for private equity. While this Perspective only scratches the surface, we hope it leaves you with a few valuable lessons:
There is tremendous value to be had across LLM-powered use cases. The one we explored was allowing investment professionals to ask questions about historical deals, but there are many more.
Firms can explore fine-tuning in more detail, but early research into the available options suggests that fine-tuning on a SaaS platform is a non-starter. The privacy of the firm’s data is too important, and firms do not want the data they provide to be used to train a platform’s LLM that is then served to other customers. It is therefore worth exploring fine-tuning in-house, potentially using consultants to help accelerate the upfront engineering tasks.
A data collection and annotation project should be kicked off as soon as possible, as it is required to fine-tune a model. It may be worthwhile to get help from an expert in the space, as this process can be time-consuming.
Building a “ChatGPT for your firm” will not be enough to build trust and credibility with users because of hallucination risks. The right solution will involve providing the source of the response so that investment professionals can verify its validity.
Indexing is an important early step; if firms go down the path of building this solution, it is the place to start. They should be laser-focused on understanding the types of questions users are going to ask.
An LLM is just part of the solution; firms will need other data preparation components to handle different data formats and modalities.
Firms need to research and experiment with methods for converting LLM outputs into software-readable data structures that can be used to automate downstream tasks like chart creation.
Want to explore LLM opportunities in your vertical? Get in touch
* https://features.alixpartners.com/private-equity-leadership-survey-2023/#section-jGjvXTv3Z0