Shawna Pratt

What Does an LLM Have to Say About Taxonomies for LLMs?

ChatGPT and Large Language Models (LLMs) are at the forefront of many people's minds as these tools become more prominent and integrated into daily business life. As a taxonomy company, we have been in the AI field for years and are always interested in seeing how it develops and evolves.

The full conversation with ChatGPT about pre-built, curated taxonomies, which provide several benefits to large language models like GPT-3.5

What might ChatGPT have to say about prebuilt taxonomies for Large Language Models?


Does its answer hold any value?


Would it even recognize its own need for taxonomies?


Our curiosity got the best of us, so we decided to just ask it:

How can pre-built, curated taxonomies benefit large language models?

The answer we got, which can be viewed in full detail via the sidebar, is actually extremely informative and accurate! The language model generated seven advantages to using prebuilt, curated taxonomies in LLMs:

  1. Structured Organization

  2. Improved Contextual Understanding

  3. Enhanced Entity Recognition

  4. Facilitate Domain-Specific Knowledge

  5. Controlled Generation of Responses

  6. Data Augmentation and Generalization

  7. Assisting Human Users

While impressive, the level of detail and expert knowledge generated for this question makes sense when you consider that in our field, only a few people post about taxonomies, and those who do are typically domain experts and people with experience in library science. The data that LLMs are trained on is largely publicly accessible internet content, so in our case this information is probably heavily tilted towards the "expert" side.


But this may not be the case for every company. What would Reddit or Quora or Wikipedia articles, posts, and discussions have to say about your domains of knowledge? Would all the data generated be accurate and useful or are "AI hallucinations" and other inaccuracies more likely?


As ChatGPT pointed out in our discussion, prebuilt taxonomies provide several benefits to the training and behavior of LLMs, especially in situations which require more of those "expert" responses. How does this work exactly?


A prebuilt, curated taxonomy is a hierarchical list of terms designed specifically for your individual domains of knowledge. These terms are fed to the LLM to aid its learning process: the model learns the important terms and how they relate to one another, then applies that knowledge in its final outputs.
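
To make the idea concrete, here is a minimal sketch of what a small slice of such a hierarchical taxonomy might look like as a data structure. The domain, terms, and relationships are illustrative assumptions, not the contents of an actual WAND taxonomy.

```python
# A minimal sketch of a prebuilt, curated taxonomy represented as a hierarchy
# of terms. All terms and relationships here are illustrative examples only.

from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    term: str                                        # preferred label for the concept
    synonyms: list = field(default_factory=list)     # alternate labels / variants
    children: list = field(default_factory=list)     # narrower terms

    def add_child(self, child: "TaxonomyNode") -> "TaxonomyNode":
        self.children.append(child)
        return child

# Build a tiny slice of a hypothetical domain taxonomy.
root = TaxonomyNode("Financial Instruments")
equities = root.add_child(TaxonomyNode("Equities", synonyms=["Stocks", "Shares"]))
equities.add_child(TaxonomyNode("Common Stock"))
equities.add_child(TaxonomyNode("Preferred Stock"))
root.add_child(TaxonomyNode("Fixed Income", synonyms=["Bonds"]))

def flatten(node: TaxonomyNode, path=()) -> list:
    """Walk the hierarchy and emit (broader-to-narrower path, synonyms) pairs
    that a training or grounding pipeline could ingest."""
    rows = [(" > ".join(path + (node.term,)), node.synonyms)]
    for child in node.children:
        rows.extend(flatten(child, path + (node.term,)))
    return rows

for path, synonyms in flatten(root):
    print(path, synonyms)
```

Flattening the hierarchy this way preserves the broader/narrower relationships in each row, which is exactly the context an LLM is missing when it only sees terms in isolation.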


Interestingly, an already trained Large Language Model like ChatGPT recognizes that taxonomies allow it to perform a variety of tasks with greater accuracy and efficacy. Let's take a look at each of its points in greater detail:


1. Organize and navigate information more effectively: Taxonomies are built for organization. A good Large Language Model needs an extremely complex organizational scheme behind it so that users can navigate information with ease, and a prebuilt taxonomy increases the value by already doing the hard work of gathering and organizing all those complex terms. From there, the Large Language Model only needs to apply the relationships between the different entities.


2. Grasp the meaning of terms in a more nuanced manner: When ChatGPT first came out, people quickly found that it had an extremely literal interpretation of human language and of topics, which produced some entertaining (and sometimes frightening) hallucinations and misunderstandings. With a taxonomy, those hierarchical terms and the corresponding relationships allow the LLM to develop a better understanding of context with fewer bumps in the road.


3. Identify and differentiate between entities more accurately: Again, Large Language Models can struggle with entity recognition and disambiguation. Taxonomies map words and phrases into specific categories, which helps the LLM with tasks such as entity recognition, information extraction, and question answering.


4. Acquire specialized knowledge and terminology to lead to more accurate and contextually appropriate responses within those domains: This might be my favorite point. If you work in an industry with specialized knowledge and terminology not known to the general public, a Large Language Model trained purely on publicly available data probably won't leave you satisfied with the results. Key pieces of information and understanding are bound to be missing, simply because your knowledge is specific. Passing that knowledge on to your LLM is the key to getting a product that is actually useful in your context! A prebuilt, curated taxonomy already includes those specialized terms in its hierarchical structure, which allows you as an organization to give your LLM the specialized knowledge it needs to perform with excellence in your industry (see the sketch after this list for one way that hand-off might look).


5. Generate more coherent and structured responses, aligning with the desired domain or topic: Again, the key to taxonomies is the guiding framework, which can steer LLM responses. Coherent, structured responses that align with the topic at hand are the end goal of any Large Language Model, so why wouldn't we want to make that process faster and more accurate?


6. Improve their generalization capabilities and ability to handle diverse inputs: Now we are getting into the human element. LLMs are designed to aid humans, but if humans and LLMs cannot understand each other, then something is broken. Information that is not conveyed effectively is not conveyed at all. Exposing an LLM to a broader range of training examples improves its generalization and its ability to handle whatever inputs people in your organization may have.


7. Guide users towards the most relevant and useful information, improving user experience and satisfaction: Finally, we reach the end goal: relevant and useful information, delivered with a great user experience. Large Language Models, and AI in general, are being built at this speed and with this fervor because the goal is to make our lives easier and more efficient. If an AI product makes our lives more difficult, it's not a good product. But in the race to stay ahead of the game, many people find themselves working with AI that delivers irrelevant, useless information through a poor interface, which leads to dissatisfaction. A prebuilt, curated taxonomy is one of the pieces of the AI puzzle that addresses these shortcomings and, as we've seen with the other benefits, makes the product better in numerous ways.
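
Points 4 and 5 in particular lend themselves to a concrete illustration. Below is a minimal sketch, under stated assumptions, of how flattened taxonomy entries might be pulled into a prompt at query time so the model answers with your domain's terminology. The taxonomy rows, matching logic, and prompt wording are hypothetical placeholders, not a specific vendor's API or an actual WAND product.

```python
# A minimal sketch of supplying curated taxonomy terms to an LLM at query time
# so responses stay within the intended domain. Everything below is illustrative.

# Flattened taxonomy entries: (hierarchical path, synonyms).
TAXONOMY_ROWS = [
    ("Financial Instruments > Equities", ["Stocks", "Shares"]),
    ("Financial Instruments > Equities > Common Stock", []),
    ("Financial Instruments > Equities > Preferred Stock", []),
    ("Financial Instruments > Fixed Income", ["Bonds"]),
]

def relevant_terms(rows, query: str, limit: int = 5) -> list:
    """Naive keyword match: return taxonomy paths that share a word with the query."""
    query_words = {w.lower().strip("?,.") for w in query.split()}
    hits = []
    for path, synonyms in rows:
        labels = [w.lower() for w in path.replace(">", " ").split()]
        labels += [s.lower() for s in synonyms]
        if query_words & set(labels):
            hits.append(path)
    return hits[:limit]

def build_prompt(question: str) -> str:
    """Prepend matched taxonomy context so the model answers with domain terms."""
    context = "\n".join(f"- {path}" for path in relevant_terms(TAXONOMY_ROWS, question))
    return (
        "Answer using the terminology defined in this domain taxonomy:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # The assembled prompt would then be sent to whichever LLM you use.
    print(build_prompt("What kinds of stocks exist?"))
```

In a production setting the naive keyword match would likely be replaced by semantic search over the taxonomy, but the shape of the hand-off is the same: the curated hierarchy supplies the vocabulary and relationships, and the model is steered to answer in those terms.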


In the words of ChatGPT, "Overall, pre-built, curated taxonomies offer a valuable framework for organizing knowledge and guiding the behavior of large language models. They facilitate better understanding, domain-specific expertise, and controlled generation, leading to more accurate and contextually appropriate responses."


You heard it straight from the horse's mouth, folks, so what are you going to do about it? You may want to consider looking into prebuilt taxonomies for your LLM project, and here at WAND, we would love to talk to you about it. Feel free to contact us any time!






WAND has forty years of experience in prebuilt taxonomies and is excited about the advancements being made in the AI world. We are constantly updating and adding to our products and services to help you and your company keep up with current technologies.


