In a product organisation aiming to build AI capabilities into its products and services, there is always the challenge of bringing the non-AI-literate onboard the AI train. While not everybody needs to be an AI expert, it is necessary to have as many people as possible contributing ideas for exploiting the power of AI to propel the company to the next level. This applies in particular to domain experts and product people, who are on top of the problems their products and services are trying to solve, and who know where the shoe pinches.
One challenge I have learned is prevalent is the basic question “Which problems can we solve with AI?”. It is a question that is surprisingly hard to answer when posed by a non-expert. So I have devised three heuristic questions that you can use whenever you are looking at a problem and wondering “Can this be solved with AI?”. If you can answer yes to all three of them, you may find yourself in a position to start an AI project.
You can think of an AI as an oracle that answers questions. The first thing you have to ask yourself is:
Can you express, in writing, the question you wish to have answered?
This is, of course, a test that applies to anything you wish to do. If you want to do something, but you can’t formulate what it is you want, you probably don’t really know what you want. Launching an AI project is no exception to this rule.
Example questions to ask an AI could be:
- Is there a dog in this picture?
- What will the weather be tomorrow?
- What are next week’s lottery numbers?
All of these are well-posed questions that can be asked. But not all of them can be answered, so we need another test.
We can think of the oracle as a function mapping questions to answers:
Picture two circles: the one on the left contains all the questions, and the one on the right contains all the answers. The oracle is the function sending questions to answers. The next thing to ask yourself is:
Does the function exist?
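To make the function picture concrete, here is a minimal Python sketch of the oracle viewed as a function; the `Question` and `Answer` names are placeholders I have invented for illustration, not part of any real system:

```python
# The oracle, viewed as a function from questions to answers.
# "Question" and "Answer" are placeholder types, used only for illustration.
Question = str
Answer = str

def oracle(question: Question) -> Answer:
    """Maps a question (the left circle) to an answer (the right circle)."""
    # Asking whether the oracle exists means asking whether any body of this
    # function could, even in principle, return the right answers.
    raise NotImplementedError
```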
Asking whether a function exists may seem odd, and it gets stranger still: you should ask the question on a metaphysical level. Is there any theoretical possibility for this function to exist? Let us look at some examples.
We have all seen AIs answering the “dog in the picture” question, so we know that this function exists. We have also seen the weather forecast, so we know it is possible, to some extent, to predict tomorrow’s weather. But there is no way to predict next week’s lottery numbers, because the lottery is designed precisely so that this function does not exist. It is impossible. And this is what I mean by “on a metaphysical level”.
Why is this important? Because machine learning (which is how we make AIs) is about trying to approximate functions by learning from examples.
If we have a lot of examples of how the function (i.e. the oracle) should behave, we can try to learn this behaviour and mimic it as closely as possible. But you can only approximate a function that exists.
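As a toy illustration of what approximating a function from examples can look like, here is a sketch using scikit-learn’s nearest-neighbour classifier; the two numerical features (weight and ear length) are invented stand-ins for real picture data, chosen only to keep the example tiny:

```python
# Approximating the unknown cat/dog oracle from labelled examples.
# The features (weight in kg, ear length in cm) are made up for illustration;
# a real system would learn from the pictures themselves.
from sklearn.neighbors import KNeighborsClassifier

# Examples of how the oracle behaves: (features of the animal, true answer)
examples = [[4.0, 3.5], [5.0, 4.0], [20.0, 10.0], [35.0, 12.0]]
answers = ["cat", "cat", "dog", "dog"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(examples, answers)          # learn to mimic the oracle's behaviour

print(model.predict([[30.0, 11.0]]))  # hopefully agrees with the oracle: ['dog']
```

The classifier never sees the “true” cat/dog function; it only sees examples of its behaviour and tries to mimic it.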
Admittedly, the existence question is a bit abstract, so I recommend replacing this heuristic with the following meta-heuristic:
Can a well-informed human do the job?
Still metaphysically, given all the information in the world and unlimited time, can a human answer the question? Clearly, humans are pretty good at recognising dogs in pictures. And humans did develop weather forecasts, and produce them too. But we are not able to predict next week’s lottery numbers.
If you have come this far, answering yes twice, you have 1) a well-posed question, and 2) the knowledge that, at least in theory, the question can be answered. But there is one more box to check off:
Is the context available?

This one is a wee bit more technical. The key to this question is that the oracle function often needs more information than just the question itself to find the answer. The informed human being, doing the job as oracle, may need additional information to make a decision or produce an answer. This is what I refer to as the context.
For example, the weather forecast oracle needs to know the current meteorological conditions, as well as the conditions from some days back, to do its forecasting. This information is not contained in the question “What will the weather be tomorrow?”
In the case of pictures of dogs and cats, on the other hand, the context is the picture itself, and no additional information is required.
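In code, the difference is simply whether the oracle needs an extra argument beyond the question itself. The sketch below extends the placeholder signature from earlier; the `Context` type and the `pressure_falling` field are invented purely for illustration:

```python
# The oracle often needs context in addition to the question itself.
# "Context" and its contents are invented placeholders for illustration.
Question = str
Answer = str
Context = dict  # e.g. an image, or recent meteorological measurements

def oracle(question: Question, context: Context) -> Answer:
    """Answers the question using the supplied context."""
    if question == "What will the weather be tomorrow?":
        # The answer depends on the context, not on the wording of the question.
        return "rain" if context.get("pressure_falling") else "fair"
    return "I don't know"

# For the dog question, the context is simply the picture itself:
# oracle("Is there a dog in this picture?", {"image": picture_bytes})
```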
The reason why this is important is that, when we train an AI, the AI is presented with questions of this type, together with their context.
The AI then makes a guess before receiving the true answer, and over time the hope is that the AI will learn the difference between cats and dogs. But for this to happen, the difference must be present in the context, so that the AI can learn to identify it. In the case of pictures, this is straightforward: you just have to make sure the pictures are of sufficient quality for the distinction to be visible. In the case of weather forecasting, it becomes more complicated: you actually have to make an informed decision about what information is required to make a weather prediction. This is a question best answered by domain experts, so you may have to reach out to get a good answer to this one.
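Schematically, the guess-then-learn cycle described above looks something like the sketch below. The `TallyModel` is a deliberately crude stand-in for a real learning algorithm (it even ignores the context), but the shape of the training loop is the point:

```python
from collections import Counter

class TallyModel:
    """A crude stand-in for a learning algorithm: it just remembers which
    answer it has seen most often. A real model would use the context."""

    def __init__(self):
        self.counts = Counter()

    def predict(self, context):
        # Guess the most frequently seen answer so far (or "?" before training).
        return self.counts.most_common(1)[0][0] if self.counts else "?"

    def update(self, context, true_answer):
        # Adjust after being shown the true answer.
        self.counts[true_answer] += 1

# (context, true answer) pairs; the file names are invented placeholders.
training_examples = [
    ({"image": "picture_001.jpg"}, "dog"),
    ({"image": "picture_002.jpg"}, "cat"),
]

model = TallyModel()
for context, true_answer in training_examples:
    guess = model.predict(context)      # the AI makes a guess first
    model.update(context, true_answer)  # then it is told the true answer
    print(f"guessed {guess!r}, truth was {true_answer!r}")
```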
But the bottom line is: if there is not enough information available for the informed human to answer the question, then there is little hope that the AI will learn to answer it either. You need that context.
So, to sum up: if you wish to test your AI project idea, to see whether it is something that can be solved with AI, try answering the following three questions:
1. Can you express your question in writing?
2. Can an informed human do the job?
3. Is the context available?
If you can answer yes to all three, then you are ready to move on. There may still be hurdles to overcome, and perhaps it will turn out to be too difficult in the end. But that is the topic of another post.
Good luck!
With sincere regards
Daniel Bakkelund