Evolution on steroids

New Publication:
Information-theoretic neuro-correlates boost evolution of cognitive systems

Have you ever tried to evolve something and you couldn’t? Funny question, I admit, but in my field of work this happens way too often. The reason for failure is typically simple: either the fitness function you devised doesn’t reward the desired behavior properly, which is easy to fix, or the task requires too many different cognitive functions and we fail to incentivize their evolution properly. Imagine an agent that needs to predict something but was never rewarded for evolving memory in the first place. It would be much better to either evolve memory first and then prediction, or to use a better fitness function that rewards both independently. This would allow the agent to evolve the required skills first and then gain more fitness by putting these newly evolved abilities to use. An even trickier problem comes from multi-parameter optimization, where some traits might even be antagonistic: do you want to be both fast and large, or to have a quick answer and also be accurate (see my paper “Computational evolution of decision-making strategies” for an example of this)?
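To make that concrete, here is a minimal sketch (my illustration for this post, not code from the paper) of a fitness function that rewards two abilities independently, so an agent that has only evolved memory already gains fitness and gains even more once it also predicts well. The score names and the multiplicative combination are just assumptions for the example:

```python
def combined_fitness(memory_score: float, prediction_score: float) -> float:
    """Reward two sub-skills independently; both scores assumed in [0, 1]."""
    # Shifting each score by 1 before multiplying means partial skills still
    # pay off: memory without prediction already beats having neither.
    return (1.0 + memory_score) * (1.0 + prediction_score)

print(combined_fitness(0.0, 0.0))  # 1.0   -> no skills yet
print(combined_fitness(0.8, 0.0))  # 1.8   -> memory alone already pays off
print(combined_fitness(0.8, 0.7))  # ~3.06 -> both together pay off the most
```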

Wouldn’t it be cool if we could reward agents for performing a task and also for just having a “generally better brain”? My latest paper on how “Information-theoretic neuro-correlates boost evolution of cognitive systems” explores some of these options. The idea is that we use neuro-correlates (NCs) together with the performance of the agent to speed up evolution. But wait, what are NCs in the first place? NCs are abstract measures that try to quantify something about neural or cognitive function independently of performance. A simple one would be the number of neurons; a more complex one would be the amount of memory. We devised a couple of these neuro-correlates earlier. One is φ, which quantifies the amount of information integration. The other is R, which quantifies how much an agent knows about its environment. NCs fit the idea of “generally better brains” very nicely: they can be used independently of the task, and selecting for a higher NC should give the brain more general abilities, which can then be co-opted to also perform the task better. Both of these measures, but surprisingly also brain diameter, which is the length of the longest shortest path (the biggest mouthful among the NCs), boost evolution. You might know the graph diameter better from the idea of six degrees of separation, which says that in large social graphs you can walk from any node to any other in about six steps. It seems as if this is not only a property of natural graphs, but that increasing this diameter also makes for good cognitive architectures; who could have known?
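In case “length of the longest shortest path” sounds abstract, here is a quick sketch of how you could compute that brain diameter for an unweighted network using plain breadth-first search. The adjacency-list representation is just my choice for the illustration:

```python
from collections import deque

def bfs_distances(graph, source):
    """Shortest path lengths from `source` in an unweighted graph {node: [neighbors]}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def diameter(graph):
    """Length of the longest shortest path over all reachable pairs of nodes."""
    return max(d for node in graph for d in bfs_distances(graph, node).values())

# A small chain-shaped "brain" 0-1-2-3 has diameter 3 (from node 0 to node 3).
print(diameter({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))  # 3
```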

Another interesting result is that neither particularly sparse nor overly connected graphs are generally good: some tasks respond well to one, whereas other tasks benefit from the other. And lastly, one of the referees kept asking about “predictive information”, so we did that experiment as well. To be honest, I never liked predictive information (PI) as a meaningful concept at all. Increasing PI means either that your actions become more predictable or that the next sensor input becomes more predictable. Both can be trivially increased by either doing nothing or by closing your eyes … and not to my surprise, using PI makes evolution slower and makes agents perform worse than not using it at all. Anyways, now you know: if you can’t get something to evolve, try φ, R, or graph diameter to give your fitness function a boost.
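And for completeness, here is a hedged sketch of what “giving your fitness function a boost” with an NC could look like. The multiplicative combination and the normalization are my assumptions for the example, not necessarily the exact scheme from the paper:

```python
def boosted_fitness(task_score: float, nc_value: float, nc_max: float) -> float:
    """Combine raw task performance with a neuro-correlate such as phi, R,
    or graph diameter; nc_max is a rough upper bound used for normalization."""
    nc_normalized = min(nc_value / nc_max, 1.0)
    # The NC term provides a selection gradient toward "generally better
    # brains" even while task performance is still low.
    return (1.0 + task_score) * (1.0 + nc_normalized)

print(boosted_fitness(task_score=0.1, nc_value=3, nc_max=6))  # ~1.65
print(boosted_fitness(task_score=0.1, nc_value=6, nc_max=6))  # ~2.2
```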

Cheers, Arend