NashCoding: Yet Another Artificial Intelligence Blog

12 Mar 2013

Walkthrough of the UCB1 Optimality Proof

The Finite-time Analysis of the Multiarmed Bandit Problem paper is a seminal work in bandit research. It was the first paper to present an arm-selection algorithm with finite-time optimality guarantees and to prove them. If you're not familiar with bandits, Wikipedia has a decent entry.

In this post, I'm going to assume you have made an attempt at reading the UCB paper and tried to understand the proof of UCB1. My goal is to provide an in-depth explanation of how the authors arrive at their final finite-time optimality guarantee for the algorithm.
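To make the selection rule concrete, here is a minimal standalone C# sketch of UCB1 as described in the paper; it is an illustration, not code from the post.

    using System;

    // UCB1: play each arm once, then always pick the arm maximizing
    // mean_j + sqrt(2 ln n / n_j), where mean_j is the arm's empirical mean
    // reward, n_j its play count, and n the total number of plays so far.
    public sealed class Ucb1
    {
        private readonly double[] _sumRewards;
        private readonly int[] _counts;
        private int _totalPlays;

        public Ucb1(int numArms)
        {
            _sumRewards = new double[numArms];
            _counts = new int[numArms];
        }

        // Choose the next arm to play.
        public int SelectArm()
        {
            // Initialization phase: play every arm once.
            for (int j = 0; j < _counts.Length; j++)
                if (_counts[j] == 0)
                    return j;

            int best = 0;
            double bestScore = double.NegativeInfinity;
            for (int j = 0; j < _counts.Length; j++)
            {
                double mean = _sumRewards[j] / _counts[j];
                double bonus = Math.Sqrt(2.0 * Math.Log(_totalPlays) / _counts[j]);
                if (mean + bonus > bestScore)
                {
                    bestScore = mean + bonus;
                    best = j;
                }
            }
            return best;
        }

        // Record the observed reward (assumed to lie in [0, 1]) for the played arm.
        public void Update(int arm, double reward)
        {
            _sumRewards[arm] += reward;
            _counts[arm]++;
            _totalPlays++;
        }
    }

The sqrt(2 ln n / n_j) bonus is the quantity the proof works with: it shrinks as an arm is played more, which is what forces suboptimal arms to be played only logarithmically often.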

10 Apr 2011

Building a Startup – Part 1 – Introduction

I've joined with David Fogel and Yanon Volcani to build EffectCheck. This post is the first in a series detailing how we're building this startup from the ground up.

29 Oct 2010

Tutorial – Evolving Neural Networks with SharpNEAT 2 (Part 3)

In parts 1 and 2 of this tutorial series, we evolved neural networks using the standard NEAT algorithm. In part 3, we're going to use the HyperNEAT algorithm.

29 Sep 2010

Evolving Nash Equilibria – A Quick Correction

As I noted in my first post, I was a little skeptical about whether a Nash equilibrium (NE) could be evolved by taking the squared loss of each hand. My conclusions were that, given an expected-value evaluation function, it was possible using a best-opponent fitness but not using a squared-loss fitness. There turned out to be a small bug in the squared-loss code that caused a big change in the results. Below are the results for the corrected fitness-function implementation.
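For context on the two fitness functions being contrasted, here is a hedged C# sketch; the generic strategy type and the expected-value delegates are placeholders, not the post's actual code.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class FitnessSketch
    {
        // Best-opponent fitness: a strategy's expected value against whichever
        // opponent exploits it the most. Maximizing this targets low
        // exploitability, which is the property a Nash equilibrium has.
        public static double BestOpponentFitness<T>(
            T strategy, IEnumerable<T> opponents, Func<T, T, double> ev)
        {
            return opponents.Min(opponent => ev(strategy, opponent));
        }

        // Squared-loss fitness: distance between the strategy's per-hand expected
        // values and a target profile. This measures closeness to a target, not
        // exploitability, which is the distinction at issue.
        public static double SquaredLossFitness<T>(
            T strategy, T opponent, double[] targetEvPerHand,
            Func<T, T, int, double> evForHand)
        {
            double loss = 0.0;
            for (int hand = 0; hand < targetEvPerHand.Length; hand++)
            {
                double diff = evForHand(strategy, opponent, hand) - targetEvPerHand[hand];
                loss += diff * diff;
            }
            return -loss; // higher fitness means smaller loss
        }
    }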

24 Jul 2010

Tutorial – Evolving Neural Networks with SharpNEAT 2 (Part 2)

In part 1 of the tutorial, we set up a basic experiment to evolve a neural network to play Tic-Tac-Toe against a couple of hand-coded opponents. In part 2, we're going to create a competitive coevolution experiment where the networks evolve by playing against themselves.
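The core of a competitive coevolution setup is the pairing scheme. Below is a hedged, SharpNEAT-agnostic C# sketch of round-robin scoring, where each network's fitness is its total score against every other network in the population; the playGame delegate stands in for the tutorial's Tic-Tac-Toe match and is not SharpNEAT API.

    using System;
    using System.Collections.Generic;

    public static class CoevolutionSketch
    {
        // Round-robin fitness: every individual plays every other individual,
        // and its fitness is the sum of its scores across those games.
        public static double[] RoundRobinFitness<T>(
            IList<T> population, Func<T, T, double> playGame)
        {
            var fitness = new double[population.Count];
            for (int i = 0; i < population.Count; i++)
                for (int j = 0; j < population.Count; j++)
                    if (i != j)
                        fitness[i] += playGame(population[i], population[j]);
            return fitness;
        }
    }

Because every individual is scored against the current population rather than a fixed opponent, the fitness landscape shifts as the population improves, which is exactly what makes coevolution interesting (and tricky).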

17 Jul 2010

Tutorial – Evolving Neural Networks with SharpNEAT 2 (Part 1)

The NeuroEvolution of Augmenting Topologies (NEAT) algorithm enables users to evolve neural networks without having to worry about esoteric details like hidden layers. Instead, NEAT is clever enough to incorporate all of that into the evolution process itself. You only have to worry about the inputs, outputs, and fitness evaluation.

This is a tutorial on how to use SharpNEAT 2, the second version of a popular C# implementation of the NEAT algorithm written by Colin Green. The source code for this tutorial is available here.
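As a taste of what "only inputs, outputs, and fitness" means in practice, here is a hedged C# sketch of a fitness function for the canonical XOR task; the evolved network is abstracted as a plain delegate rather than SharpNEAT's actual phenome types, so this is an illustration of the idea, not the tutorial's code.

    using System;

    public static class XorFitnessSketch
    {
        // Score a candidate network on the XOR truth table: start from 4.0 and
        // subtract the squared error on each case, so a perfect network scores 4.0.
        public static double Evaluate(Func<double[], double[]> network)
        {
            double[][] inputs =
            {
                new[] { 0.0, 0.0 }, new[] { 0.0, 1.0 },
                new[] { 1.0, 0.0 }, new[] { 1.0, 1.0 }
            };
            double[] targets = { 0.0, 1.0, 1.0, 0.0 };

            double fitness = 4.0;
            for (int i = 0; i < inputs.Length; i++)
            {
                double output = network(inputs[i])[0];
                double error = output - targets[i];
                fitness -= error * error;
            }
            return fitness;
        }
    }

Everything else, including how many hidden nodes exist and how they are wired, is left to NEAT's evolutionary search.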

7 Jul 2010

Evolutionary Algorithms: The Little Things (Part 1)

I'm the kind of person who finds himself reading about a new technology or a cool algorithm and then tries to implement it based on the high-level description. Unfortunately, I don't always guess everything correctly, and sometimes the implementation turns out not to work; or it kind of works, but not as well as expected, which can be even worse.

A key example of this for me was when I read about Evolutionary Algorithms. At its core, it sounds so ingeniously simple:

  1. Create a population of individuals
  2. Score the individuals based on some performance metric
  3. Kill off the weakest performers
  4. Create children from the surviving parents
  5. If not finished, go to #2

That's really it, right? I always thought so.
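Here is a bare-bones C# sketch of that five-step loop for real-valued genomes; the truncation selection and Gaussian mutation are illustrative choices, not the only (or best) way to fill in the details.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class SimpleEvolutionSketch
    {
        public static double[] Evolve(
            Func<double[], double> fitness,   // step 2: score individuals
            int genomeLength,
            int populationSize = 50,
            int generations = 100,
            double mutationStdDev = 0.1)
        {
            var rng = new Random();

            // Step 1: create a population of random individuals.
            var population = Enumerable.Range(0, populationSize)
                .Select(_ => RandomGenome(rng, genomeLength))
                .ToList();

            for (int gen = 0; gen < generations; gen++)   // step 5: loop until finished
            {
                // Steps 2 and 3: score everyone, keep the better half (truncation selection).
                var survivors = population
                    .OrderByDescending(fitness)
                    .Take(populationSize / 2)
                    .ToList();

                // Step 4: create children from the surviving parents via Gaussian mutation.
                var children = survivors
                    .Select(parent => Mutate(parent, rng, mutationStdDev))
                    .ToList();

                population = survivors.Concat(children).ToList();
            }

            return population.OrderByDescending(fitness).First();
        }

        private static double[] RandomGenome(Random rng, int length)
            => Enumerable.Range(0, length).Select(_ => rng.NextDouble() * 2 - 1).ToArray();

        private static double[] Mutate(double[] parent, Random rng, double stdDev)
            => parent.Select(gene => gene + stdDev * Gaussian(rng)).ToArray();

        // Box-Muller transform for a standard normal sample.
        private static double Gaussian(Random rng)
            => Math.Sqrt(-2.0 * Math.Log(1.0 - rng.NextDouble())) *
               Math.Cos(2.0 * Math.PI * rng.NextDouble());
    }

The little things, of course, live in the choices this sketch glosses over: selection pressure, mutation rates, crossover, and how the fitness function is actually defined.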

7 Jul 2010

Implementation of Sukhotin’s Algorithm

I just read an article on HackerNews about a little-known algorithm called Sukhotin's Algorithm. The algorithm takes a dictionary of words and tries to figure out what the vowels are, based on the assumption that vowels are typically next to consonants. This sounded really cool, so I downloaded a big list of English words and implemented it.
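Here is a hedged C# sketch of the algorithm as I understand it (not my original implementation): count letter adjacencies, start with every letter classified as a consonant, and repeatedly promote the highest-scoring letter to vowel status, discounting adjacencies to letters already classified as vowels.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class SukhotinSketch
    {
        public static HashSet<char> FindVowels(IEnumerable<string> words)
        {
            // Symmetric adjacency counts between distinct letters. Same-letter
            // pairs are ignored, since they carry no vowel/consonant signal.
            var adjacency = new Dictionary<char, Dictionary<char, int>>();
            void Count(char a, char b)
            {
                if (a == b) return;
                if (!adjacency.TryGetValue(a, out var row))
                    adjacency[a] = row = new Dictionary<char, int>();
                row[b] = row.TryGetValue(b, out var c) ? c + 1 : 1;
            }

            foreach (var word in words.Select(w => w.ToLowerInvariant()))
                for (int i = 0; i + 1 < word.Length; i++)
                {
                    Count(word[i], word[i + 1]);
                    Count(word[i + 1], word[i]);
                }

            // Each letter's score starts as its total adjacency count.
            var scores = adjacency.ToDictionary(
                kv => kv.Key, kv => (double)kv.Value.Values.Sum());
            var vowels = new HashSet<char>();

            while (scores.Count > 0)
            {
                // Promote the highest-scoring remaining letter; stop when no score is positive.
                var best = scores.OrderByDescending(kv => kv.Value).First();
                if (best.Value <= 0) break;

                vowels.Add(best.Key);
                scores.Remove(best.Key);

                // Discount every other letter's score by twice its adjacency to the new vowel.
                foreach (var other in scores.Keys.ToList())
                    if (adjacency[other].TryGetValue(best.Key, out var count))
                        scores[other] -= 2 * count;
            }

            return vowels;
        }
    }

On a reasonably large English word list, this kind of procedure tends to pick out a, e, i, o, u (and often y), which is what makes the algorithm such a neat party trick.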

6 Mar 2010

A Comment on “Evolving Nash-Optimal Poker Strategies Using Evolutionary Computation”

I recently read this paper, in which the authors claim to evolve pseudo-optimal strategies for poker. Given that we know evolutionary processes do not necessarily minimize losses, I was skeptical.