Intuition and code implementation for ABC, and exploring where it outperforms Particle Swarm Optimization
In a recent article, part of my series on nature-inspired algorithms, I shared the intuition, implementation, and usefulness of Particle Swarm Optimization (PSO). Today, I will explain how Artificial Bee Colony (ABC) works.
Aren’t bees part of a swarm? Are these two algorithms simply two sides of the same coin?
For this article, I will jump right into the intuition of ABC. Next, I will provide the mathematics, followed by the implementation in Python. Finally, I will formulate a problem which PSO fails to solve but ABC handles with ease, and explain the aspects of ABC that make this possible.
Much like in the case of Reinforcement Learning and Evolutionary Algorithms, a fundamental driver behind ABC is the balance between exploration and exploitation.
Those who are new to swarm intelligence algorithms may initially feel intimidated by the association with biology, and assume that some complicated mathematical modelling is needed to mimic exactly what happens in nature. The fact that textbooks typically represent variables with Greek letters only adds to this false perception of complexity.
That is certainly not the case, at least for ABC. There is nothing at all about bees’ waggle dance that you need to understand. Nor does the algorithm require anything beyond high school math.
Essentially, it combines a local directional search towards promising locations, where a new solution is kept only if it improves the objective function, with a global random search that kicks in after prolonged periods of no progress.
The creators of the algorithm then packaged these components under fanciful names, tagging them to employed bees, onlooker bees, and scout bees.
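To make that description concrete, here is a minimal sketch of the core idea, not the full ABC implementation that comes later in this article. Names such as `objective`, `limit`, and the loop sizes are illustrative assumptions; the point is simply to show greedy local moves plus random restarts after stagnation.

```python
import numpy as np

def objective(x):
    # Illustrative objective to minimise: the sphere function.
    return np.sum(x ** 2)

rng = np.random.default_rng(42)
dim, n_solutions, limit, iterations = 2, 10, 20, 200
lower, upper = -5.0, 5.0

# Initialise candidate solutions ("food sources") and their stagnation counters.
solutions = rng.uniform(lower, upper, (n_solutions, dim))
trials = np.zeros(n_solutions, dtype=int)

for _ in range(iterations):
    for i in range(n_solutions):
        # Local directional search: nudge solution i relative to a random partner.
        partner = rng.integers(n_solutions)
        while partner == i:
            partner = rng.integers(n_solutions)
        phi = rng.uniform(-1, 1, dim)
        candidate = np.clip(
            solutions[i] + phi * (solutions[i] - solutions[partner]), lower, upper
        )

        # Greedy selection: keep the move only if it improves the objective.
        if objective(candidate) < objective(solutions[i]):
            solutions[i], trials[i] = candidate, 0
        else:
            trials[i] += 1

        # Global random search: abandon a solution that has stagnated too long.
        if trials[i] > limit:
            solutions[i] = rng.uniform(lower, upper, dim)
            trials[i] = 0

best = min(solutions, key=objective)
print(best, objective(best))
```

The first block of the loop corresponds to what employed and onlooker bees do (local search with greedy acceptance), and the stagnation check corresponds to the scout bee (random re-initialisation).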
Like PSO, ABC is a metaheuristic algorithm.
What is ‘metaheuristic’, you might ask?