This is the last part of my series of nature-inspired articles. Earlier, I talked about algorithms inspired by genetics, swarms, bees, and ants. Today, I will talk about wolves.
When a journal paper has a citation count spanning 5 figures, you know there’s some serious business going on. Grey Wolf Optimizer [1] (GWO) is one such example.
Like Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Ant Colony Optimization (ACO), GWO is a meta-heuristic. Although there are no mathematical guarantees on the solution, it works well in practice and does not require any analytical knowledge of the underlying problem. This allows us to query a ‘blackbox’, and simply make use of the observed results to refine our solution.
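To make the ‘blackbox’ idea concrete, here is a minimal sketch (not GWO itself) of derivative-free optimization: we can only ask the objective for a score, never for gradients or internal structure, and we refine our answer purely from the observed values. The objective below is a hypothetical stand-in (a simple sphere function), and `random_search` is the simplest possible query-and-refine loop.

```python
import random

# A hypothetical 'blackbox' objective: we may query it for a score,
# but we never inspect gradients or internal structure.
def blackbox(x):
    return sum(xi ** 2 for xi in x)  # sphere function, minimum at the origin

def random_search(dim=3, bounds=(-5.0, 5.0), iters=1000, seed=42):
    """Simplest derivative-free loop: sample, observe, keep the best."""
    rng = random.Random(seed)
    best_x = [rng.uniform(*bounds) for _ in range(dim)]
    best_f = blackbox(best_x)
    for _ in range(iters):
        cand = [rng.uniform(*bounds) for _ in range(dim)]
        f = blackbox(cand)
        if f < best_f:  # the only information used is the observed value
            best_x, best_f = cand, f
    return best_x, best_f

best_x, best_f = random_search()
print(best_f)
```

Every meta-heuristic in this series, GWO included, is a smarter version of this loop: instead of sampling blindly, it biases new queries toward regions that past observations suggest are promising.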
As mentioned in my ACO article, all these ultimately relate back to the fundamental concept of explore-exploit trade-off. Why, then, are there so many different meta-heuristics?
Firstly, it is because researchers have to publish papers. A good part of their job entails exploring things from different angles and sharing the ways in which their findings bring about benefits over existing approaches. (Or as some would say, publishing papers to justify their salaries and seek promotions. But let’s not get there.)
Secondly, it is due to the ‘No Free Lunch’ theorem [2], which the authors of GWO themselves discussed. While that theorem formally concerns optimization algorithms (averaged over all possible problems, no algorithm outperforms any other), I think it is fair to say the same holds for Data Science in general. There is no single one-size-fits-all solution, and we often have to try different approaches to see what works.
Therefore, let’s proceed to add yet another meta-heuristic to our toolbox; it never hurts to have another tool that might come in handy one day.
First, let’s consider a simple classification problem on images. A clever approach is to use pre-trained deep neural networks as feature extractors, to convert…