We are adding a couple of artificial neural network tools to MABE, and I wanted to test different ways to configure them and to see what effects different activation functions have. While ANNs are really easy to set up, they are not necessarily easy to test. So I decided to give one two inputs (the x and y position of a pixel on the screen) and three outputs, which I translate into a color (r, g, and b) for that pixel. This already renders rather interesting pictures, all depending on the weights of the ANN; in my case I just generated random ones.
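The idea can be sketched in a few lines. This is not the MABE implementation, just a minimal stand-alone illustration: a small fully connected net with random weights (layer sizes and tanh activations are my assumptions) evaluated once per pixel, with the three outputs rescaled to RGB values.

```python
import numpy as np

def random_ann(rng, layer_sizes=(2, 16, 16, 3)):
    """Random weight matrices and biases for a small fully connected net.

    Layer sizes are an arbitrary choice for illustration, not MABE's.
    """
    return [(rng.uniform(-1, 1, (m, n)), rng.uniform(-1, 1, n))
            for m, n in zip(layer_sizes, layer_sizes[1:])]

def render(weights, width=64, height=64):
    """Feed each pixel's (x, y) in [-1, 1] through the net; outputs become RGB."""
    xs, ys = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
    a = np.stack([xs.ravel(), ys.ravel()], axis=1)  # one row per pixel
    for w, b in weights:
        a = np.tanh(a @ w + b)                      # tanh keeps values in (-1, 1)
    rgb = (a + 1) / 2                               # map (-1, 1) -> (0, 1)
    return rgb.reshape(height, width, 3)

rng = np.random.default_rng(0)
img = render(random_ann(rng))   # height x width x 3 array of floats in [0, 1]
```

Every new random seed gives a completely different picture, since the image is purely a function of the weights.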
This principle has already been used in Picbreeder, though not with classic ANNs but with networks evolved via NEAT, a neuroevolution method that uses a much more sophisticated way to determine the connectivity and the weights. While in Picbreeder you can choose the direction in which the network changes, I decided to do something else.
Imagine that the weights define a point in a very high-dimensional space: not just three dimensions like the space we live in, but hundreds in this case. This point can slowly fly through that space, and when it hits the upper or lower limit of one dimension it simply bounces off that wall. While we can easily picture a point bouncing around inside a cube, we struggle to imagine the same thing in hundreds of dimensions; the principle, however, is identical. The point is the state of all the weights, and as we slowly move it around, the behavior of the ANN changes with it.
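The bouncing itself is simple to sketch. Below is one way to do it (my own illustrative version, not code from MABE): the weight vector drifts by a small random velocity each step, and any coordinate that crosses a wall is mirrored back inside while its velocity flips sign. The bounds and step size here are assumptions.

```python
import numpy as np

def bounce_step(point, velocity, lo=-1.0, hi=1.0):
    """Advance the point one step; reflect any coordinate that leaves [lo, hi]."""
    point = point + velocity
    over = point > hi
    under = point < lo
    point[over] = 2 * hi - point[over]    # mirror the overshoot back inside
    point[under] = 2 * lo - point[under]
    velocity[over | under] *= -1          # reverse direction on the hit walls
    return point, velocity

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, 200)               # flattened vector of all weights
v = rng.uniform(-0.02, 0.02, 200)         # small per-step drift per dimension
for _ in range(500):
    w, v = bounce_step(w, v)              # w stays inside the box forever
```

Rendering one frame per step (with the previous snippet) turns this walk through weight space into the image sequence described below.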
As a result, we get not just one image but a sequence of images, visualizing how the function of the ANN changes over time. I stitch these images together into a movie. Check out the amazing visualizations this gives me: