It’s been a while since my last LLM post and I’m excited to share that my prototype has been successfully productionized as Outside’s first LLM-powered chatbot, Scout. If you are an Outside+ member, you can check it out over at https://scout.outsideonline.com/.
This journey began as a weekend curiosity project back in March 2023. I had the idea to build a Q&A chatbot using OpenAI’s LLMs with Outside’s content as a knowledge base. Later I shared my prototype at our internal product demo day and was thrilled by the interest it sparked. Scout quickly became an official project. On November 28th, 2023, we launched Scout to a limited group of Outside+ members. Fast forward to today, April 12th, 2024: over 28.3k unique users have already used this Outdoor Companion AI tool.
I couldn’t be more grateful for this dream-come-true experience, and I’ve been planning a mini-series to share some behind-the-scenes insights into what it takes to bring LLM- and RAG-powered apps to life. So far I’ve planned the following three parts:
- 🦦 Part 1: Automate Pinecone Daily Upserts with Celery and Slack monitoring
- 🦦 Part 2: Building an LLM Websocket API in Django with Postman Testing
- 🦦 Part 3: Monitoring LLM Apps with Datadog: synthetic tests, OpenAI, and Pinecone usage tracking
This post dives into Part 1: setting up scheduled tasks with Celery Beat to automatically upsert embeddings into the Pinecone vector database, plus Slack updates for easy monitoring. Let’s get started!
LLMs typically have a training-data cutoff date; the current gpt-4-turbo’s cutoff is December 2023 (as of my writing in April 2024). The promise of RAG is that we can equip LLMs with fresher, domain-specific data to reduce hallucinations and improve the user experience. That raises the question: how do we keep the knowledge base fresh and up to date? The answer: use Celery and Celery Beat to schedule a periodic task (daily or weekly) that embeds newly published content and upserts it into the vector database.