Headlines keep buzzing with updates on cutting-edge new versions of Large Language Models (LLMs) like Gemini, GPT, or Claude. In parallel to this core AI progress, many other companies are discovering how to actually leverage these models to innovate, deliver further value, and reduce costs. It is easy to feel overwhelmed and pressured to keep up with all this progress; I can say it happens to me a lot! In this blog post, I'll unpack some of the most important concepts and the potential they hold for products and companies, to help you keep up.
There are a few trending concepts around how companies integrate LLMs and other GenAI models into their products or processes: prompting, fine-tuning, retrieval-augmented generation (RAG), and agents. You have probably heard of several or all of these before, but the differences between them are sometimes unclear, and, most importantly, we often remain unaware of the potential they can bring to our companies or products.
In this blog post, we'll go over each of these concepts, with the aim that by the end of it you understand what they are, how they work, the differences between them, and their revolutionary potential for companies and digital products. There is no better way to understand a technology's potential than by analyzing its use in specific examples. That's why I'll walk you through these concepts pivoting around a single use case (publishing an ad in a marketplace) to illustrate how each of them can be leveraged to generate further value and efficiency.
In most marketplaces, users are able to publish ads or products, and the platforms provide a standardized publishing process. Let’s consider the scenario where this process involves multiple steps:
- “Publish new item” button: signals the user’s intent to list an item and initiates the publishing process,
- Information tab: users are asked to provide specific details about the item. Let’s imagine in this case the user is asked to…