Flying through the state space of an artificial neural network

We are adding a couple of artificial neural network tools to MABE, and I wanted to test different ways to configure them and to see the effect different types of activation functions have. While ANNs are really easy to set up, they are not necessarily easy to test. So I decided to give them two inputs (the x and y position of a pixel on the screen) and three outputs, which I translate into a color (r, g, and b) for that pixel. This already renders rather interesting pictures, which all depend on the weights of the ANN; in my case I just generated random ones.
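For illustration, here is a minimal sketch of the idea in Python (not the MABE code itself; the layer sizes, the tanh activation, and the uniform weight range are arbitrary choices): a small fully connected network with random weights maps every pixel coordinate to a color.

```python
import numpy as np

def random_net(layers=(2, 16, 16, 3), seed=0):
    rng = np.random.default_rng(seed)
    # weights drawn uniformly from [-1, 1]; layer sizes are illustrative
    return [rng.uniform(-1.0, 1.0, size=(a, b))
            for a, b in zip(layers[:-1], layers[1:])]

def forward(weights, xy):
    a = xy
    for w in weights:
        a = np.tanh(a @ w)            # tanh as one example activation function
    return (a + 1.0) / 2.0            # squash outputs into [0, 1] for r, g, b

def render(weights, width=256, height=256):
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([xs.ravel() / width * 2 - 1,     # x in [-1, 1]
                       ys.ravel() / height * 2 - 1],   # y in [-1, 1]
                      axis=1)
    rgb = forward(weights, coords).reshape(height, width, 3)
    return (rgb * 255).astype(np.uint8)

image = render(random_net())   # save e.g. with imageio.imwrite("frame.png", image)
```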

This principle has already been used in Picbreeder, though not with classic ANNs but with networks evolved using NEAT, which determines the connectivity and the weights in a much more sophisticated way. While in Picbreeder you can choose the direction in which the network changes, I decided to do something else.

Imagine that the weights define a point in a very high-dimensional space. Not just three dimensions like the space we live in, but in this case hundreds. This point can slowly fly through that space, and when it hits the upper or lower limit of one dimension it just bounces off that wall. While we can easily imagine a point bouncing around inside a cube, we struggle to imagine this in a multidimensional space; the principle, however, is the same. The point is the state of all the weights, and as we slowly move this point around, the behavior of the ANN changes with it.
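A rough sketch of how such a "bouncing point" update could look (step size, bounds, and the number of weights are again arbitrary choices):

```python
import numpy as np

def bounce_step(point, velocity, lower=-1.0, upper=1.0):
    """Move the weight vector one step and reflect it at the box walls."""
    point = point + velocity
    over, under = point > upper, point < lower
    point[over] = 2 * upper - point[over]      # mirror back into the box
    point[under] = 2 * lower - point[under]
    velocity[over | under] *= -1.0             # flip velocity of reflected components
    return point, velocity

rng = np.random.default_rng(1)
point = rng.uniform(-1, 1, size=300)           # all weights flattened into one vector
velocity = rng.uniform(-0.01, 0.01, size=300)  # slow drift through weight space
for frame in range(600):
    point, velocity = bounce_step(point, velocity)
    # reshape `point` back into the layer matrices and render one frame here
```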

As a result, we not only get one image but a sequence of images, visualizing how the ANN changes its function over time. I then stitch these images together into a movie. Check out the amazing visualizations this gives me.

Cheers Arend

 

 

You Say “Cow” and I Hear “Milk” – Repost from the BEACON Blog

You Say “Cow” and I Hear “Milk” – The Joy of Interdisciplinary Work

While I was listening to the many interesting talks of this year’s BEACON congress (2016), I was pondering the journey that we took together to get here. Fortunately, I was around to witness not only the first BEACON congress but all the others since then, except for 2015. Amazing, exciting, controversial, and interdisciplinary were only some of the words that popped into my mind, and I deeply enjoyed reflecting on that ride. But why? What is the thing I liked most; what made this so special?

That was the moment I had the idea for the title of this blog post, because it characterizes so absurdly what makes working with the people in BEACON so exciting and rewarding. It is the misunderstanding between the different disciplines. When I talk to a biologist, for example about mate selection, navigation, or foraging, then this person has a specific animal with a specific repertoire of behaviors and methods to study it in mind – the “cow” – and also the limitations of said model system. I can build our computational model systems without such constraints, but most often I also have no experience with or knowledge about the animal my collaborators are talking about. I know what it means to have a “feeling for the animal” [1,2], but that’s it; I don’t have that feeling for model organisms in general (with the exception, maybe, of C. elegans). I have a feeling for abstract systems, selection pressures, and how to design experiments in the computer, and I code worlds and environments that are loose enough analogies to animal systems to get things to evolve – the “milk”.

This necessarily leads to misunderstanding, and in the process of picking up the pieces, we typically both learn things. I understand the animal better, you understand the modeling process, and together we find the right abstractions and are able to formulate the exact hypotheses and experiments to pursue in the future. It is enlightening and rewarding, and we haven’t even conducted an experiment yet, but at least we think we know what is going on, until the results come in.

In many cases, these results shake both of our understandings, not because we again communicated poorly, but because neither of us understood what was going on in the first place; or I go and fix a bug and we meet next week, hoping for new and surprising results. However, it is exactly this dialog, where we try to explain to each other what we do and how our systems work, that allows us to be creative. Listening to the many talks, I could see how well we now understand each other. Having results from computational systems presented right between results from organismal systems, without the audience even flinching, is amazing. We have reached a state where we speak each other’s language and appreciate what everyone brings to the table, without hearing “milk” or “cow” but knowing that we are talking about bovine evolution.

I also had the feeling that we might be starting to lose exactly the quality that made us strong. The presentations were great, but also much more focused on each topic, without the broader context we all add when we know that the audience isn’t too familiar with what we are working on. It is a sign that the past made a difference, that we indeed learned from each other, and that maybe we have simply risen to a new level? In one of our workshops about AVIDA and Markov Brains/MABE we tried to go back to the basics and explain things from scratch. While that might have been preaching to the choir, I also had the feeling that we could do this more often. The strength of our computational model systems doesn’t come from what we did with them in the past; their strength comes from what we can do with them in the future. In summary, I think we have come very far, and we should just make sure that we keep misunderstanding each other in the familiar, productive way I have learned to love!

Cheers Arend

[1] Holmberg, T. (2008). A feeling for the animal: On becoming an experimentalist. Society & Animals, 16(4), 316-335.

[2] Keller, E. F. (1983). A feeling for the organism: The life and work of Barbara McClintock. New York: Owl Books.

 

Ancestry Challenge

Hi,

We had an interesting question in the lab about what one can infer about an evolving population from just its list of ancestors. To make this more specific, we give you a zip file that contains 100 files, each one from one of two evolutionary experiments. Both experiments use 100 virtual organisms. In one experiment we assume that the fitness of each organism is constantly 1.0, so that there are no differences between organisms and selection is just random; in the other experiment we actually evolve the population, which means organisms have different fitnesses, which we use for selection.

In each file, we give you data about which organism gave rise to which other organism. Because we have a population of 100 individuals, the first generation consists of individuals numbered 0-99. Each line in a file contains two IDs separated by a “,”: the first is the ancestor, the second the offspring. So, for example:

76,199

means that organism 76 gave rise to organism 199. Keep in mind that we have distinct generations: the entire population is used to create a new one, and thus the offspring IDs simply increase sequentially over time.
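As a hedged starting point (the file name and the statistic are my own choices, not the solution), one could parse a file and look at how offspring are distributed over parents, which should look different under purely random selection than under fitness-based selection:

```python
from collections import Counter

def offspring_counts(path):
    """Count how many offspring each parent produced, over the whole file."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            parent, _offspring = line.strip().split(",")
            counts[int(parent)] += 1
    return counts

counts = offspring_counts("experiment_00.txt")   # hypothetical file name
values = list(counts.values())
mean = sum(values) / len(values)
var = sum((v - mean) ** 2 for v in values) / len(values)
# note: parents with zero offspring do not appear in `counts` and would need adding
print(f"reproducing parents: {len(values)}, mean offspring: {mean:.2f}, variance: {var:.2f}")
```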

The files can be found here: Archive

If you succeed in telling us which of those 100 files belongs to which category, we have a second file with which you can test your method further.

 

Cheers Arend

 

What is Tuebor?

Well, Tuebor is a computer game made by Strength in Numb3rs Studios about a dystopian future – so far, so good. But this game doesn’t fall into any of the typical game categories, because it incorporates aspects of many games you might have played and enjoyed and combines them into a very new experience. In brief, it is a collaborative multiplayer action game in which you control characters fighting an opposing force using Guns, Psi, Blasters, Lasers, Fire, High-Energy Anythings, Nanites, and everything else a great Sci-Fi game deserves. But what does that have to do with me, evolving AI?

I just got a grant from SiN studios (yeah!) to integrate evolving brain technology into this game, and I couldn’t be more excited. Enemy characters will not only have Markov Brains, which by itself already makes a difference, they will also use Darwinian evolution to adapt their behavior to their opponents. To fully appreciate this, we first have to take a look at conventional game AI, which falls into three classes: the algorithm, the dance, and the decision tree.

  • You have all seen “the algorithm” in action when Deep Blue beat Kasparov in chess. It is a piece of software that crunches the numbers and makes the perfect move. Typically the challenge is to think further ahead and faster than the AI. This approach is usually deployed in strategy games, and I know of no good example where it would make sense in an action game like Tuebor.
  • The “dance” describes a choreography of actions deployed by the AI that follows a strict order of events. World of Warcraft (WoW) does this a lot, and I just saw my kids play Skylanders, which does exactly the same. The big advantage is that the player experience becomes consistent and you can very easily balance games. It also means that once you have figured out what the AI does, it is only a matter of doing the right thing at the right time. One effect is that you could actually watch a video of a boss battle, and once you have seen what the enemy will do, you can just deploy the same successful player behavior you saw to overcome the enemy – I find this boring, and it is one of the reasons why I don’t dance the WoW tango…
  • The decision tree is pretty much the non plus ultra for many action games. Depending on the current situation, a decision tree decides whether or not to do something or to contemplate something else. This typically results in rather machine-like behavior: a guard on watch might walk back and forth, and once the guard spots the enemy, it engages. Obviously, you can make this more complicated, and good game AI designers will come up with sophisticated trees, but what remains is the “state-to-state” nature, which results in rather predictable behavior. In most cases you want to present the player with a situation that can be overcome once the behavior of the opponent becomes predictable. These behavior trees are much more flexible than the “dance” but are robot-like enough that many players still prefer human opponents.

Markov Brains are different in two key aspects. They are stochastic, which makes them as unpredictable as a good human opponent. And they can form memory and representations, and use them to make better-informed decisions. They don’t execute a stereotyped list of actions, but can recognize patterns and act accordingly. In their simplest form they can be similar to a decision tree, with the one big exception that they can “jump” from one action to another freely, which makes them much more dynamic and flexible opponents. That by itself should give you an experience you have never had with any other game.
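To give a flavor of what “stochastic” means here, below is a minimal sketch of a probabilistic logic gate of the kind Markov Brains are built from (the state size, the wiring, and the random probability table are placeholders, not a trained brain):

```python
import numpy as np

class ProbabilisticGate:
    def __init__(self, in_idx, out_idx, rng):
        self.in_idx = in_idx                    # which state bits the gate reads
        self.out_idx = out_idx                  # which state bits it writes
        rows, cols = 2 ** len(in_idx), 2 ** len(out_idx)
        table = rng.random((rows, cols))
        self.table = table / table.sum(axis=1, keepdims=True)  # each row is a distribution

    def update(self, state, next_state, rng):
        row = 0
        for bit in self.in_idx:                 # input bits select a table row
            row = (row << 1) | int(state[bit])
        col = rng.choice(len(self.table[row]), p=self.table[row])  # sample outputs
        for k, bit in enumerate(reversed(self.out_idx)):
            next_state[bit] |= (col >> k) & 1   # write the sampled output bits

rng = np.random.default_rng(0)
state = np.zeros(8, dtype=int)                  # 8 brain states; bit 0 acts as a sensor
state[0] = 1
gate = ProbabilisticGate(in_idx=[0, 1], out_idx=[4, 5], rng=rng)
next_state = np.zeros_like(state)
gate.update(state, next_state, rng)             # next_state now holds the gate's output
```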

Lastly, we incorporate evolution for two key reasons. First, we simply can’t design these complex behaviors by hand, and therefore we let evolution sort out what is good and what is bad. Secondly, unlike a pure machine learning approach that optimizes for one goal, here we hope to allow for arms races with the players. We know that players communicate a lot and tell each other what worked and what didn’t, and newfound exploits against AI opponents spread quickly thanks to social networks. This creates a very interesting situation for our evolving AI: something that was good becomes bad, and evolution has to find a new solution, which in turn fixes the exploit by itself and creates never-ending new challenges for the players.

Honestly, I can’t wait to see this, and I am thrilled that we are working on it. One of the things that I, as a game developer, seek to evoke in a player is a rewarding new experience, and this game and its AI seem to satisfy this perfectly.

Cheers Arend

 

Evolution on steroids

New Publication:
Information-theoretic neuro-correlates boost evolution of cognitive systems

Have you ever tried to evolve something and you couldn’t? Funny question, I admit, but in my field of work this happens way too often. The reason for failure is typically simple: the fitness function you devised either doesn’t reward the desired behavior properly, which can be fixed easily, or the task requires too many different cognitive functions and we fail to incentivize their evolution properly. Imagine an agent needs to predict something but was never rewarded for evolving memory in the first place. It would be much better to either evolve memory first and then prediction, or to use a better fitness function that rewards both independently. This would allow the agent to evolve the required skills first and then obtain more fitness by putting these just-evolved abilities to use. An even trickier problem comes from multi-objective optimization, where some traits might even be antagonistic: do you want to be fast or large, to have a quick answer or an accurate one (see my paper “Computational evolution of decision-making strategies” for an example of this)?

Wouldn’t it be cool if we could reward agents for performing a task and also for simply having a “generally better brain”? My latest paper on how “Information-theoretic neuro-correlates boost evolution of cognitive systems” explores some of these options. The idea is that we use neuro-correlates (NCs) together with the performance of the agent to speed up evolution. But wait, what are NCs in the first place? NCs are abstract measures that try to quantify something about neural or cognitive function independently of performance. A simple one would be the number of neurons; a more complex one would be the amount of memory. We devised a couple of these neuro-correlates earlier. One is φ, which quantifies the amount of information integration. The other is R, which quantifies how much an agent knows about its environment. NCs fit the idea of “generally better brains” very nicely: they can be used independently of the task, and selecting for a higher NC should give the brain more general abilities, which can then be co-opted to also perform the task better. Both these measures, but surprisingly also graph diameter, which is the length of the longest shortest path (a mouthful, admittedly), boost evolution. You might know the graph diameter better from the idea of six degrees of separation, which says that in social networks you can get from any node to any other in at most about six steps. It seems as if this is not only a property of natural graphs, but that increasing this diameter also makes for good cognitive architectures – who could have known?
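As a rough sketch of how a neuro-correlate like graph diameter can be folded into a fitness function (the multiplicative combination and the use of networkx are my own illustrative choices; the paper’s exact setup may differ):

```python
import networkx as nx

def brain_diameter(edges):
    """Graph diameter = length of the longest shortest path between any two nodes."""
    g = nx.Graph(edges)                         # undirected brain connectivity
    if not nx.is_connected(g):                  # fall back to the largest component
        g = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.diameter(g)

def augmented_fitness(task_score, edges, weight=1.0):
    # multiplicative combination of task performance and the neuro-correlate
    return task_score * (1.0 + weight * brain_diameter(edges))

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5)]   # toy brain wiring
print(brain_diameter(edges), augmented_fitness(10.0, edges))
```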

Another interesting result is that neither particularly sparse nor overly connected graphs are generally good: it turns out that some tasks respond well to one, whereas other tasks benefit from the other. And lastly, one of the referees kept asking about “predictive information”, so we did that experiment as well. To be honest, I never liked predictive information (PI) as a meaningful concept at all. Increasing PI means either that your actions become more predictable or that the next sensor input becomes more predictable. Both can be trivially increased by doing nothing or by closing your eyes … and, not to my surprise, using PI makes evolution slower and makes agents perform worse than not using it at all. Anyway, now you know: if you can’t get it to evolve, try φ, R, or graph diameter to give your fitness function a boost.

Cheers Arend

 

AI – maybe you should be scared?

We find many new articles in the media stirring up fear about the advent of artificial intelligence (AI). The idea is that “obviously” these new systems will not only be better in every which way, but that they will also demand leadership and ultimately drive humankind extinct … an idea as old as Skynet and the Terminator itself. This reminds me of all the cheesy alien sci-fi movies of the sixties predicting a similarly gruesome end of humanity, like “War of the Worlds”. Once we landed on the moon and Hollywood’s technology became better, sci-fi movies stopped being overly cheesy, and mainstream media recognized the fact that little green men are far too far away, in a different galaxy, to do any immediate damage.

But AI is obviously different! This technology is not only lurking around the corner; some of it we already carry around in our iPhones in the form of Siri, and it will drive our cars sooner rather than later. Give it more time and it will go berserk like HAL 9000 or COG! Machines are not only expert systems that play chess or Jeopardy; these systems all contain the spark of sentience that just waits to collect more digital resources and collapse into some kind of information singularity, and then we are doomed, right?

Here is where fiction leaves the realm of sanity. Expert systems are made by large teams of programming experts who cobble together algorithms that other teams of experts thought out and stuff all of it into cloud (networked) computers. Then they link these systems to ultra-large databases or hook them up to sophisticated sensors. They add some “Big Data”, and now we can ship AI. However, I think that this approach is not leading to our ultimate demise, but is ultimately doomed to failure itself. The very thing we want to make, a facsimile of the human brain, doesn’t work like a flow chart, isn’t made by well-communicating teams of engineers, and doesn’t follow a blueprint understandable by human programmers. Simply put, the jello-like gray substance you currently use to intellectually digest these lines was made by nature’s one and only creative force: evolution!

Evolution is a biased sequence of random changes. Those changes that aren’t too bad lead to marginally better systems, and so on, for eons. What we end up with is an extremely complex and complicated mesh of coincidences and exploited opportunities, lacking the very essence we use to design systems: order. Just try to imagine the sheer volume of books necessary to describe the brain: a massive collection of articles about neuronal function, developmental neurobiology, cognitive neuroscience, neurophysiology, and disciplines not even invented yet, all to describe how one would build a human brain. You get my point: brains aren’t engineered, and engineering them is a futile endeavor, not because we lack the ability to understand them, but because the brain lacks a narrative or design principle we can put into words, communicate, or use as a blueprint. We would need to reverse engineer something that wasn’t engineered in the first place. The only option we have left, in my opinion, is to create the circumstances that lead to the evolution of intelligence in a computer, and let it happen again. Figuring out the circumstances that lead to the evolution of intelligence presents a much easier task than trying to engineer a brain, don’t you think?

What does this approach have to do with AI being a threat or not?
Future AI will come from digital environments and will have undergone evolution that is at least similar in principle to what humankind went through. Did this process produce ruthless killers, or sentient beings that empathize and understand the value of synergy and diversity? I guess this is a question that you have to ask yourself. Are you a nice person who values diversity and who brings value to a mutually beneficial relationship? Are you respectful to other sentient beings? Then all is fine, because you just showed that evolution is indeed capable of producing such individuals, and it is those entities that you will meet in the future. If you disagree, well then I am sorry, but your kind will become extinct, and then you should indeed be scared. Fortunately, we have already shown that evolution is indeed capable of creating cooperators, and since winning isn’t everything, I am confident that my approach will lead to nice AI – what those gangs of experts cobbling together code are up to, on the other hand, I am not so sure about.

Cheers Arend