
29 October 2010

Tutorial – Evolving Neural Networks with SharpNEAT 2 (Part 3)

In parts 1 and 2 of this tutorial series, we evolved neural networks using the standard NEAT algorithm. In part 3, we're going to use the HyperNEAT algorithm.

So far, our neural networks have been effectively ignorant of the geometry of the Tic-Tac-Toe board. They are given nine inputs and simply left to figure out how to play, with no concept of where each input sits in relation to the others. That is an additional hurdle for evolution to overcome.

The HyperNEAT algorithm [1] incorporates the geometry of the search space by effectively adding a layer of indirection. Rather than evolving the neural network directly, HyperNEAT evolves a Compositional Pattern Producing Network (CPPN) [2] which can generate a neural network. The CPPN takes the locations of two nodes in the problem domain as input and outputs the weight of the connection between them in the target neural network.
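To make that indirection a little more concrete, here is a minimal sketch of what a single CPPN query looks like once the CPPN has been decoded into an IBlackBox. SharpNEAT's substrate and decoder classes do all of this for you internally; the four-input layout (x1, y1, x2, y2) and the helper method name are assumptions made purely for illustration.

// Conceptual sketch only. SharpNEAT's Substrate/HyperNeatDecoder perform
// these queries internally when building the phenome.
// Assumes: using SharpNeat.Phenomes;
double QueryConnectionWeight(IBlackBox cppn,
                             double x1, double y1,   // source node coordinates
                             double x2, double y2)   // target node coordinates
{
    cppn.ResetState();

    cppn.InputSignalArray[0] = x1;
    cppn.InputSignalArray[1] = y1;
    cppn.InputSignalArray[2] = x2;
    cppn.InputSignalArray[3] = y2;

    cppn.Activate();

    // The first CPPN output is interpreted as the connection weight.
    return cppn.OutputSignalArray[0];
}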

The IGenomeDecoder

The main difference between a HyperNEAT experiment and a standard NEAT experiment in SharpNEAT 2 is the genome decoding. In a NEAT experiment, your CreateGenomeDecoder method simply returns a new NeatGenomeDecoder, and you're done. In a HyperNEAT experiment, you have to create a custom decoder that specifies where the input and output nodes are in the problem domain.

public class TicTacToeHyperNeatExperiment
{
    ...

    /// <summary>
    /// Creates a new HyperNEAT genome decoder that can be used to convert 
    /// a genome into a phenome.
    /// </summary>
    public IGenomeDecoder<NeatGenome, IBlackBox> CreateGenomeDecoder()
    {
        // Create an input and an output layer for the HyperNEAT
        // substrate. Each layer corresponds to the game board
        // with 9 squares.
        SubstrateNodeSet inputLayer = new SubstrateNodeSet(9);
        SubstrateNodeSet outputLayer = new SubstrateNodeSet(9);

        // Each node in each layer needs a unique ID.
        // The input nodes use ID range [1,9] and
        // the output nodes use [10,18].
        uint inputId = 1, outputId = 10;

        // The game board is represented as a 2-dimensional plane.
        // Each square is at -1, 0, or 1 in the x and y axis.
        // Thus, the game board looks like this:
        //
        // (-1,1)  | (0,1)   | (1,1)
        // (-1,0)  | (0,0)   | (1,0)
        // (-1,-1) | (0,-1)  | (1,-1)
        //
        // Note that the NeatPlayer class orders the board from
        // left to right, then top to bottom. So we need to create
        // nodes with the following IDs:
        //
        //    1    |    2    |   3
        //    4    |    5    |   6
        //    7    |    8    |   9
        // 
        for (int y = 1; y >= -1; y--)
            for (int x = -1; x <= 1; x++, inputId++, outputId++)
            {
                inputLayer.NodeList.Add(new SubstrateNode(inputId, new double[] { x, y }));
                outputLayer.NodeList.Add(new SubstrateNode(outputId, new double[] { x, y }));
            }

        List<SubstrateNodeSet> nodeSetList = new List<SubstrateNodeSet>(2);
        nodeSetList.Add(inputLayer);
        nodeSetList.Add(outputLayer);

        // Define a connection mapping from the input layer to the output layer.
        List<NodeSetMapping> nodeSetMappingList = new List<NodeSetMapping>(1);
        nodeSetMappingList.Add(NodeSetMapping.Create(0, 1, (double?)null));

        // Construct the substrate. The phenome (the decoded Tic-Tac-Toe
        // network) uses a steepened sigmoid activation function, so the
        // substrate gets a NEAT activation function library rather than
        // the CPPN library. CPPN outputs with a magnitude below 0.2 do
        // not generate connections in the final phenome; the maximum
        // connection weight is 5.
        Substrate substrate = new Substrate(nodeSetList, 
            DefaultActivationFunctionLibrary.CreateLibraryNeat(SteepenedSigmoid.__DefaultInstance), 
            0, 0.2, 5, nodeSetMappingList);

        // Create genome decoder. Decodes to a neural network packaged with
        // an activation scheme that defines a fixed number of activations per evaluation.
        IGenomeDecoder<NeatGenome, IBlackBox> genomeDecoder = 
            new HyperNeatDecoder(substrate, _activationSchemeCppn, 
                                            _activationScheme, false);

        return genomeDecoder;
    }
}

In our experiment, the Tic-Tac-Toe board is our problem domain and the game board is represented as a simple 2-d plane. Each node is given a pair of (x,y) coordinates corresponding to its location on the game board. These coordinates are used as inputs to the CPPN when it's generating the neural network.

The above example is a simple two-layer substrate with no hidden layer. If you want to add a hidden layer, you simply need to create an additional SubstrateNodeSet and add it to the nodeSetList. You will then need two NodeSetMappings: one from the input layer (0) to the hidden layer (1), and one from the hidden layer (1) to the output layer (2). [3]
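As a rough sketch (not part of the tutorial project), the decoder above could be extended like this, reusing the inputLayer and outputLayer that were already built. The hidden layer's coordinates and ID range are simply one reasonable choice.

// Sketch of a three-layer substrate: input (index 0), hidden (index 1),
// output (index 2). The hidden nodes reuse the board coordinates and
// continue the ID range after the output nodes [10,18].
SubstrateNodeSet hiddenLayer = new SubstrateNodeSet(9);
uint hiddenId = 19;
for (int y = 1; y >= -1; y--)
    for (int x = -1; x <= 1; x++, hiddenId++)
        hiddenLayer.NodeList.Add(new SubstrateNode(hiddenId, new double[] { x, y }));

List<SubstrateNodeSet> nodeSetList = new List<SubstrateNodeSet>(3);
nodeSetList.Add(inputLayer);   // 0
nodeSetList.Add(hiddenLayer);  // 1
nodeSetList.Add(outputLayer);  // 2

// One mapping per connected pair of layers.
List<NodeSetMapping> nodeSetMappingList = new List<NodeSetMapping>(2);
nodeSetMappingList.Add(NodeSetMapping.Create(0, 1, (double?)null)); // input -> hidden
nodeSetMappingList.Add(NodeSetMapping.Create(1, 2, (double?)null)); // hidden -> output

The Substrate and HyperNeatDecoder construction that follows is unchanged; only the node set list and the mapping list grow.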

Once you have everything specified, you create a Substrate (the geometric layout of nodes that the CPPN is queried over) and return the HyperNeatDecoder.

Saving HyperNEAT Genomes

The only other change is going to be in handling the file I/O for the genomes.

class Program
{
    ...

    static void ea_UpdateEvent(object sender, EventArgs e)
    {
        Console.WriteLine(string.Format("gen={0:N0} bestFitness={1:N6}", 
                                _ea.CurrentGeneration, _ea.Statistics._maxFitness));

        // Save the best genome to file
        // Note: Unlike the other experiments, you must save
        // function IDs of HyperNEAT genomes.
        var doc = NeatGenomeXmlIO.SaveComplete(
                                new List<NeatGenome>() { _ea.CurrentChampGenome }, true);

        doc.Save(CHAMPION_FILE);
    }
}

Since HyperNEAT evolves the activation function of each node in the CPPN, we need to make sure that information is saved to file along with the rest of the topology information, hence the true flag passed to SaveComplete.
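For completeness, loading the champion back in works the same way in reverse. Here is a rough sketch, assuming a genome factory matching the one the experiment creates (for HyperNEAT that is a CppnGenomeFactory); the key detail is passing true again so the reader expects the node function IDs that SaveComplete wrote out.

// Sketch: reload the saved champion. The 'true' flag tells the reader to
// expect node activation-function IDs in the XML, matching SaveComplete above.
// Assumes: using System.Xml; using SharpNeat.Genomes.Neat;
static NeatGenome LoadChampion(string path, NeatGenomeFactory genomeFactory)
{
    using (XmlReader xr = XmlReader.Create(path))
    {
        List<NeatGenome> genomeList =
            NeatGenomeXmlIO.ReadCompleteGenomeList(xr, true, genomeFactory);
        return genomeList[0];
    }
}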

Conclusion

Going from NEAT to HyperNEAT is actually very easy. Once you have a good idea of how you are going to map your problem domain to a geometric plane, the transition is straightforward.

Unfortunately, the problems we had still persist: the evolution does not converge on an optimal strategy. Since that was never really the goal of these tutorials, it's okay! Our goal was to learn how to use SharpNEAT 2 to evolve neural networks, which I think I can safely say has been accomplished.

Source code is available here. The project used for this part was TicTacToeHyperNeat.

  1. For more information on the HyperNEAT algorithm, see Stanley et al.'s journal paper: A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks
  2. CPPNs are similar to standard neural networks, except that they contain nodes with different activation functions. The intuition is that different activation functions are more effective at expressing different kinds of symmetries and regularities that may be present in the problem domain. For the purposes of this tutorial, you can basically assume a CPPN is just another kind of neural network.
  3. Note that the EPlex gang have figured out how to automatically place nodes in HyperNEAT, but the current release of SharpNEAT does not implement their algorithm. For more information on the algorithm, see this paper.
Comments (3)
  1. Really interesting. First SharpNeat tutorial on the net. Keep going please :)

  2. It's interesting that it was not so easy to develop a perfect solution for the Tic-Tac-Toe game using an algorithm as advanced as HyperNEAT. I imagine Tic-Tac-Toe doesn't lend itself well to neural programming (I don't know much, really).
    Anyway, great articles! And goal accomplished indeed.
    What are you up to these days? Other interesting projects / tutorials?

  3. In your project, you specify the inputs/outputs for HyperNEAT as 9 and 9. This is incorrect, as the CPPN that is decoded into a network takes 2 * (number of dimensions) inputs and produces 2 outputs: the CPPN weight and a bias. When the CPPN is decoded into a neural network, the substrate node coordinates are used as inputs to the CPPN, and the output is the weight of the connection.

    If you or anyone else has questions, don't hesitate to contact me: islandman93 at gmail

