The human brain is an extraordinarily complex organ, often considered one of the most intricate and sophisticated systems in the known universe. The brain is hierarchically organized, with lower-level sensory processing areas sending information to higher-level cognitive and decision-making regions. This hierarchy allows information to be integrated and supports complex behaviors. The brain also processes information in parallel, with different regions and networks simultaneously working on various aspects of perception, cognition, and motor control. This parallel processing contributes to its efficiency and adaptability.
Can this hierarchical organization and parallel processing be adapted to deep learning? Researchers at the University of Copenhagen present a graph neural network-based encoding in which the growth of a policy network is controlled by another network running in each neuron. They call it a Neural Developmental Program (NDP).
Biological development maps a compact genotype to a much larger phenotype. Inspired by this, researchers have built indirect encoding methods, in which the description of the solution is compressed. This allows information to be reused, and the final solution contains more components than the description itself. However, such indirect encodings typically have to be grown through a developmental process rather than specified directly.
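To make the genotype-to-phenotype idea concrete, here is a toy illustration of indirect encoding, not the paper's method: a handful of "genotype" numbers parameterize a rule that generates a much larger "phenotype" weight matrix, so the same information is reused across many components. The function `develop_weights` and its sinusoidal rule are purely hypothetical choices for this sketch.

```python
import numpy as np

def develop_weights(genotype, n_in=32, n_out=32):
    """Expand a tiny genotype into an (n_in x n_out) weight matrix.

    Every weight is produced by the same small function of the neuron
    coordinates, so the phenotype has far more values than the genotype.
    """
    a, b, c = genotype                      # only three evolvable numbers
    rows = np.arange(n_in)[:, None] / n_in  # normalized row coordinate
    cols = np.arange(n_out)[None, :] / n_out
    return a * np.sin(b * rows) * np.cos(c * cols)

genotype = np.array([0.5, 3.0, 2.0])   # 3 numbers ...
phenotype = develop_weights(genotype)  # ... become 32 x 32 = 1024 weights
print(genotype.size, "->", phenotype.size)
```

A direct encoding would instead store all 1024 weights explicitly; the compression is what makes the encoding "indirect".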
The NDP architecture comprises a Multilayer Perceptron (MLP) and a Graph Cellular Automaton (GNCA), which updates the node embeddings after each message-passing step during the developmental phase. In general, cellular automata are mathematical models consisting of a grid of cells, each in one of several states; the cells evolve over discrete time steps according to a set of rules that determine how their states change over time.
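The sketch below shows the general flavor of one graph-cellular-automaton update: every node aggregates its neighbors' embeddings and is then updated by the same small MLP. The adjacency matrix `adj`, the embedding size, and the weights `W1`, `W2` are illustrative assumptions, not the authors' exact architecture or hyperparameters.

```python
import numpy as np

def mlp(x, W1, W2):
    """A tiny two-layer perceptron shared by every node."""
    return np.tanh(x @ W1) @ W2

def gnca_step(h, adj, W1, W2):
    """One developmental step: aggregate neighbor messages, update all nodes."""
    messages = adj @ h                           # sum of neighbor embeddings
    inputs = np.concatenate([h, messages], axis=1)
    return h + mlp(inputs, W1, W2)               # residual update of embeddings

rng = np.random.default_rng(0)
n_nodes, dim = 4, 8
h = rng.normal(size=(n_nodes, dim))              # initial node embeddings
adj = np.array([[0, 1, 0, 0],                    # a small example graph
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
W1 = rng.normal(size=(2 * dim, 16)) * 0.1
W2 = rng.normal(size=(16, dim)) * 0.1
h = gnca_step(h, adj, W1, W2)                    # one message-passing step
```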
In the NDP, the same model is applied to every node, so the number of parameters is constant with respect to the size of the graph on which it operates. This gives the NDP an advantage: it can operate on a neural network of arbitrary size or architecture. The NDP can also be trained with any black-box optimization algorithm to satisfy any objective function, which allows the grown networks to solve reinforcement learning and classification tasks and to exhibit particular topological properties.
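Because the NDP's parameter count is fixed, it can be optimized with standard black-box methods. Below is a minimal sketch of one such method, a simple evolution strategy; `grow_and_evaluate` is a hypothetical placeholder for growing a network with the NDP and scoring it on a task, and none of this is taken from the paper's released code.

```python
import numpy as np

def grow_and_evaluate(ndp_params):
    """Placeholder fitness: in practice, develop a network from the NDP
    parameters and return its task reward or classification accuracy."""
    return -np.sum(ndp_params ** 2)

def evolution_strategy(n_params=64, pop_size=32, sigma=0.1, lr=0.05, steps=100):
    theta = np.zeros(n_params)  # NDP parameters: size independent of the grown graph
    for _ in range(steps):
        noise = np.random.randn(pop_size, n_params)
        fitness = np.array([grow_and_evaluate(theta + sigma * n) for n in noise])
        fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)  # rank-free normalization
        theta += lr / (pop_size * sigma) * noise.T @ fitness           # ES gradient estimate
    return theta

best_params = evolution_strategy()
```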
The researchers also evaluated the differentiable NDP by training and testing models with different numbers of growth steps. They observed that, for most tasks, performance decreased after a certain number of growth steps, because the grown networks kept getting larger. An automated method for deciding when to stop growing would therefore be an important addition to the NDP, the authors note. In future work, they also plan to add activity-dependent and reward-modulated growth and adaptation to the NDP.
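One possible form such an automated stopping rule could take is a simple plateau check: keep growing while validation performance improves and stop once it stalls. The helpers `grow_one_step` and `evaluate` below are hypothetical, and this criterion is only an assumption about how stopping might be automated, not something proposed in the paper.

```python
def grow_with_early_stop(graph, grow_one_step, evaluate, patience=3, max_steps=50):
    """Grow the graph until validation performance stops improving."""
    best_score, since_best = evaluate(graph), 0
    for _ in range(max_steps):
        graph = grow_one_step(graph)      # one developmental/growth step
        score = evaluate(graph)
        if score > best_score:
            best_score, since_best = score, 0
        else:
            since_best += 1
            if since_best >= patience:    # performance has plateaued
                break
    return graph, best_score
```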
Check out the Paper. All credit for this research goes to the researchers on this project.
Arshad is an intern at MarktechPost. He is currently pursuing his integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive technological advancement. He is passionate about understanding nature at a fundamental level with the help of tools like mathematical models, ML models, and AI.