This article was originally published on my Substack.
Today, it is already very difficult to tell a true story from a fictional one. On top of this, AI is writing articles and creating videos, and Apple’s Vision Pro can render a whole new world that is visible only to its user.
As our technologies advance, we will soon be living in a world where it is more and more difficult to distinguish what is real from what is synthetic.
This already poses a dire problem for AI and LLMs. AI is trained on data, and as online data becomes increasingly contaminated with AI-hallucinated content, models risk a death spiral of reciting and learning from their own hallucinations.
After all, AI can’t experience reality, or distinguish it from fiction.
And this is where Philosophy comes in. It explores what is real, what is valuable, and how we interpret data and give information meaning.
Philosophy’s goal is to answer exactly the kinds of questions that are essential to building an AGI that reflects reality and works in our best interests.
To this end, let’s start with AI & LLMs.
The big breakthrough came when a Google research team published the landmark paper “Attention Is All You Need” (Vaswani et al., 2017), which led to the emergence of the Large Language Models (LLMs) we are seeing today.
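For reference, the core mechanism that paper introduced is scaled dot-product attention: each token’s query is compared against every other token’s key to decide how much of each token’s value to blend in. In the paper’s own notation, where Q, K, and V are the query, key, and value matrices and d_k is the key dimension:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$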
In the old days, creating conversational AI agents took a lot of meticulous work. Agents relied on NLP and NLU pipelines, in which a sentence was broken down into intents (roughly, the verbs: what the user wants to do) and entities (roughly, the nouns: the things involved).
Creating bots this way was time-consuming, expensive, and covered only a narrow, pre-defined set of questions, as the sketch below illustrates.
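Here is a minimal sketch of that old intent-and-entity approach. The intent names, regex patterns, and slot definitions are hypothetical illustrations, not any particular framework’s API:

```python
import re

# Hypothetical intent catalogue: each intent maps to hand-written
# regex patterns plus the entity (noun) slots it expects to fill.
INTENTS = {
    "check_weather": {
        "patterns": [r"\bweather\b", r"\bforecast\b"],
        "entities": {"city": r"in ([A-Z][a-z]+)"},
    },
    "book_flight": {
        "patterns": [r"\bbook\b.*\bflight\b", r"\bfly to\b"],
        "entities": {"destination": r"(?:to|for) ([A-Z][a-z]+)"},
    },
}

def parse(utterance: str):
    """Return the first matching intent and its extracted entities."""
    for intent, spec in INTENTS.items():
        if any(re.search(p, utterance, re.IGNORECASE) for p in spec["patterns"]):
            entities = {}
            for name, pattern in spec["entities"].items():
                match = re.search(pattern, utterance)
                if match:
                    entities[name] = match.group(1)
            return intent, entities
    # Anything the designers didn't anticipate falls through here.
    return "fallback", {}

print(parse("What's the weather in Paris?"))
# -> ('check_weather', {'city': 'Paris'})
print(parse("Tell me a joke"))
# -> ('fallback', {})  # every new question needs new hand-written rules
```

Every new capability meant writing more patterns and more slots by hand, which is exactly why these bots stayed narrow and expensive to maintain.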
LLMs solved these problems.
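By contrast, a single prompt to a general-purpose model can handle open-ended questions with no hand-written intents at all. A minimal sketch using OpenAI’s Python client, where the model name and prompts are illustrative assumptions rather than recommendations:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One generic instruction replaces the entire intent/entity catalogue.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful travel assistant."},
        {"role": "user", "content": "What's the weather like in Paris, "
                                    "and can you book me a flight there?"},
    ],
)
print(response.choices[0].message.content)
```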