Generative AI glossary for beginners


Embarking on a journey to learn a new technology can feel daunting, even for the most seasoned engineers. Wrapping your head around new concepts is already overwhelming, and on an actual project you'll hear a lot of vague words being passed around. It's important to know the terminology, but it also takes time for those words and their meanings to solidify in our brains, so why not keep a handy dictionary around?

This post will cover some of the most common terms you will encounter when working on GenAI projects. I'll do my best to explain them in the simplest terms and provide some well-known examples or references.

Glossary

Generative AI - Short for generative artificial intelligence, colloquially known as GenAI. It's a technology capable of generating text, images, or other media based on the data it was trained on.

Large language model (better known as LLM) - The type of model that is used for text generation and powers tools such as ChatGPT. As the name implies, these models are large because of the vast amount of data used to train them and the enormous number of parameters they end up with.

Example: Imagine that you don't know anything about world geography and that your mind is a blank page. Someone might give you an atlas and a geography book, and slowly but surely, you will learn what a continent is and where different countries are. In other words, you will acquire knowledge about certain things and the relations between them. This is very similar to how a large language model learns: it's given a vast amount of text, and it extracts the relations between the words in that text.

Prompt - A text input given to a model that conditions it to behave in a certain way. This can be a question, a set of instructions, examples, role-playing, etc. Depending on the model used, the response to a prompt can be text, an image, a video, or similar.

Example: Blog writing prompt

Prompt engineering - A set of techniques that helps us create better prompts and elicit useful responses from models. Finding the right prompt is an iterative process with a lot of trial and error, and that iteration also falls under prompt engineering.

Example: Useful prompt engineering guide

Ground truth - As the name implies, this technology is generative, but how can we be sure that the generated text is correct? This is where ground truth comes in: it's the reference data used to test the accuracy of your GenAI system.

Example: Imagine you've built a system that, given the name of a plant, should classify it as a fruit or a vegetable. To test the system, we can create an independent data set with the correct classifications. This would be our ground truth.

Plant name | Category
apple      | fruit
banana     | fruit
potato     | vegetable
mango      | fruit
spinach    | vegetable

We can now compare the output of our system with our ground truth data set and determine the accuracy.
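
To make this concrete, here is a minimal sketch of such a comparison in Python. The classify_plant function is a hypothetical stand-in for whatever call your GenAI system actually makes; it's faked here so the snippet runs on its own.

```python
# A minimal sketch of measuring accuracy against a ground truth data set.
ground_truth = {
    "apple": "fruit",
    "banana": "fruit",
    "potato": "vegetable",
    "mango": "fruit",
    "spinach": "vegetable",
}

def classify_plant(name: str) -> str:
    # Hypothetical stand-in for a real GenAI call; always answers "fruit".
    return "fruit"

correct = sum(
    1 for plant, expected in ground_truth.items()
    if classify_plant(plant) == expected
)
accuracy = correct / len(ground_truth)
print(f"Accuracy: {accuracy:.0%}")  # 60% for this naive classifier
```

In a real project, the faked classifier would be replaced by a call to your model, and the ground truth set would be much larger.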

Token - The thing you get charged for when using GenAI through an API. The input (prompt) and output of any GenAI system are nothing but text. Every text consists of words, punctuation, spaces, and so on, and all of these are broken down into tokens: a token is typically a whole word or a piece of a word. Do note that you'll get charged for both input and output tokens.

Example: Try out a few sentences with TokenVisualizer and see how many tokens you get.
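
If you prefer code over a visualizer, here is a small sketch using tiktoken, OpenAI's open-source tokenizer library (assuming it's installed via pip install tiktoken). The cl100k_base encoding is one of the encodings it ships with; other providers tokenize text differently.

```python
# A small sketch of counting tokens with the tiktoken library.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Write a short blog post about generative AI."
tokens = encoding.encode(prompt)

print(f"Token count: {len(tokens)}")
print(f"Token ids:   {tokens}")
# Each id maps back to a piece of text, e.g. a word or part of a word.
print([encoding.decode([t]) for t in tokens])
```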

Retrieval-augmented generation (RAG) - In business circles also known as "Ask your documents". One limitation of tools such as ChatGPT is that they only contain publicly available knowledge. On the other hand, many companies want to combine their proprietary data with the capabilities of ChatGPT. This is where RAG comes in. It's a system that takes your documents (PDF, Word, webpages, etc.) and augments ChatGPT's knowledge so that it can provide answers based on your data.

Example: By using RAG with your HR documents, you would be able to ask ChatGPT questions such as "What is the annual leave policy at my company?"

Example: A video explanation of RAG
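
The flow behind RAG can also be sketched in a few lines. This is a deliberately toy example: the documents and the question are made up, and plain word overlap stands in for the embedding-based retrieval and vector database a real RAG system would use.

```python
# A toy sketch of the RAG flow: retrieve the most relevant document
# snippet, then combine it with the user's question into one prompt.
documents = {
    "leave_policy": "Employees are entitled to 25 days of annual leave per year.",
    "expense_policy": "Travel expenses must be submitted within 30 days.",
}

def retrieve(question: str) -> str:
    # Pick the snippet sharing the most words with the question.
    # Real systems compare embeddings instead of raw words.
    q_words = set(question.lower().split())
    return max(
        documents.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
    )

question = "What is the annual leave policy at my company?"
context = retrieve(question)

prompt = (
    "Answer the question using only the context below.\n"
    f"Context: {context}\n"
    f"Question: {question}"
)
print(prompt)  # This assembled prompt is what gets sent to the LLM.
```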

Fine-tuning - As already mentioned, large language models are trained on publicly available information. The main goal of fine-tuning is to adapt the model itself with a dataset that is specific to your task or industry. Instead of training a model from scratch (which is very expensive), we take an existing model and slightly tweak it so that it works better for our use case. Fine-tuning is somewhat similar to RAG in purpose, but the main difference is that with RAG we never make any changes to the model itself.

Temperature - It does not refer to the weather.🙂 This is a configurable parameter that controls the "creativity" of your model. The higher the temperature, the more creative the responses you will get. This can also be risky, as higher temperatures make the model more likely to hallucinate. With a lower temperature, the model is more predictable and produces consistent responses.

Example: The right temperature is highly dependent on the use case. If you want ChatGPT to write you a blog post, you want it to be more creative, so you'll choose a higher temperature. On the other hand, if you ask "What is the capital of France?" and want to consistently receive "Paris", it's better to set the temperature to 0 to avoid surprises.
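
Conceptually, temperature changes how the model picks its next token. The sketch below uses made-up scores for three candidate answers: dividing by a higher temperature flattens the probabilities before sampling, while a temperature of 0 always picks the top-scoring option.

```python
# A conceptual sketch of temperature in token sampling.
# The scores (logits) are made up; real models produce one score
# per token in their vocabulary at every generation step.
import math
import random

logits = {"Paris": 5.0, "Lyon": 2.0, "Marseille": 1.0}

def sample(logits, temperature):
    # Temperature 0 is treated as "always pick the highest score".
    if temperature == 0:
        return max(logits, key=logits.get)
    # Dividing by the temperature flattens (high T) or sharpens (low T)
    # the probability distribution before sampling.
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

print(sample(logits, temperature=0))    # always "Paris"
print(sample(logits, temperature=1.5))  # usually "Paris", sometimes not
```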

Hallucination - Something that you don't want in your AI-powered system. Let's build on our previous example of the capital of France. When a model hallucinates you might get an answer that the capital of France is Lyon or some other city. Hallucination is another way of saying that the model returned incorrect information. This is why it's important to always verify the information that you receive from a GenAI system. Things like low-quality training data or open-ended prompts are among the causes of hallucinations. There are also ways to tackle hallucinations, but that's a topic for another blog post.

Embedding - A long list of numbers. This is how a large language model internally represents words and the relations between them: pieces of text with similar meanings end up with similar embeddings. Under the hood, GenAI is just a lot of numbers and computations. Embeddings also play a very important role in a RAG system.

Example: Here is how one embedding might look: [-0.5426, 0.6412, -0.5426...]. Inside a model, many embeddings are stored together in a multi-dimensional space. This is best understood visually, and Embedding Project does a great job at that.
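
A common way to compare embeddings is cosine similarity: vectors that point in a similar direction get a score close to 1. The vectors below are made up and far shorter than real embeddings, which typically contain hundreds or thousands of numbers.

```python
# A small sketch of comparing embeddings with cosine similarity.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

cat = [0.9, 0.1, 0.3]      # made-up embedding for "cat"
kitten = [0.85, 0.15, 0.35]  # made-up embedding for "kitten"
car = [0.1, 0.9, -0.2]     # made-up embedding for "car"

print(cosine_similarity(cat, kitten))  # close to 1: similar meaning
print(cosine_similarity(cat, car))     # much lower: unrelated meaning
```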

Starting is never easy, and if you are new to this space, I hope this Generative AI glossary will help you find your way around. If you encounter new or unfamiliar words, please let me know and I'll be happy to update this page.