Evolutionary algorithm outperforms deep-learning machines at video games

hunglee2 | 125 points

There are some things about this article that irk me, like this:

Neural networks have garnered all the headlines, but a much more powerful approach is waiting in the wings.

Granted, it's probably written for laypeople who don't know much about AI/ML techniques, but this is still pretty sloppy. It hasn't been proven that EAs are "more powerful" than NNs, and as far as that goes, I'm not even sure a term like "more powerful" even makes sense in this context. And EA approaches aren't some new up-and-comer that hasn't had its chance yet... those techniques have been around for a long time as well.

All of that said, I'm a big fan of EAs, dating back to when I implemented a parallel GA optimizer in a parallel programming class in college. I've been fascinated with them ever since, and I do think that evolutionary approaches have a lot of potential. So I'm happy to see them getting some positive press, but I still get annoyed by some of the hand-wavy stuff and sloppy language.

Anyway, one thing about (some|many|most?) EA approaches is that they parallelize very well. And depending on what you're doing, you aren't necessarily doing linear algebra / matrix math, so you can likely accomplish a lot without spending beaucoup $$ on GPUs. A Beowulf cluster of CPU machines can be pretty effective.
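
To make the parallelization point concrete, here's a rough sketch (mine, not from the article) of a toy GA where the expensive part, fitness evaluation, is fanned out over CPU cores with Python's multiprocessing. The objective, mutation scheme, and parameter values are all placeholders:

    import random
    from multiprocessing import Pool

    def fitness(genome):
        # placeholder objective: maximize the sum of the genes
        return sum(genome)

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, 1) if random.random() < rate else g
                for g in genome]

    def evolve(pop_size=64, genome_len=32, generations=50, workers=8):
        population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        with Pool(workers) as pool:
            for _ in range(generations):
                # one independent fitness evaluation per genome: embarrassingly
                # parallel, so it maps cleanly onto a pile of commodity CPUs
                scores = pool.map(fitness, population)
                ranked = [g for _, g in sorted(zip(scores, population),
                                               key=lambda p: p[0], reverse=True)]
                parents = ranked[:pop_size // 4]        # truncation selection
                population = parents + [mutate(random.choice(parents))
                                        for _ in range(pop_size - len(parents))]
        return ranked[0]

    if __name__ == '__main__':
        best = evolve()
        print('best fitness:', fitness(best))

Swap fitness() for something genuinely expensive (a simulation, a game rollout) and that Pool is where a cheap CPU cluster earns its keep.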

mindcrime | 6 years ago

To quote Ben Recht:

"When you end up with a bunch of papers showing that genetic algorithms are competitive with your methods, this does not mean that we’ve made an advance in genetic algorithms. It is far more likely that this means that your method is a lousy implementation of random search."

http://www.argmin.net/2018/02/20/reinforce/
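
For context, the "random search" Recht is holding methods up against is about as bare-bones as optimizers get; something like this sketch (mine), which just draws candidates independently and keeps the best one:

    import numpy as np

    def random_search(objective, dim, n_samples=10000, scale=1.0, seed=0):
        # no selection, no mutation, no crossover: just independent samples
        rng = np.random.default_rng(seed)
        best_x, best_f = None, -np.inf
        for _ in range(n_samples):
            x = rng.normal(0.0, scale, dim)
            f = objective(x)
            if f > best_f:
                best_x, best_f = x, f
        return best_x, best_f

    # e.g. maximize a toy objective
    x, f = random_search(lambda v: -np.sum(v ** 2), dim=5)
    print(f)

His point, as I read it, is that being merely competitive with this is not a compliment.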

aub3bhat | 6 years ago

I've been using the python-neat library with OpenAI's Gym Retro with some success. And while it works quickly, it normally gets stuck in local maxima. It seems to struggle with long sequences. And defining the fitness function/parameters is an art form.

Here's a video of Donkey Kong Country played by python-neat in OpenAI's Gym Retro. It took 8 generations of 20 genomes to beat level one. I'll post the code if anyone's interested.

https://vimeo.com/280611464
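
Not their code, but for anyone curious about the shape this usually takes, here's a minimal neat-python + Gym Retro sketch. The game id, frame downsampling, and reward shaping are illustrative guesses, and it assumes a NEAT config file whose input/output counts match the downsampled frame and the console's button count:

    import neat                  # the neat-python package
    import retro                 # gym-retro; the ROM must already be imported

    GAME = 'DonkeyKongCountry-Snes'      # illustrative id, not verified

    def eval_genomes(genomes, config):
        env = retro.make(game=GAME)
        for genome_id, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            obs = env.reset()
            total, done = 0.0, False
            while not done:
                # crude downsample of the RGB frame so the input layer stays small
                small = obs[::8, ::8].mean(axis=2).flatten() / 255.0
                outputs = net.activate(small)
                buttons = [1 if o > 0.5 else 0 for o in outputs]
                obs, reward, done, info = env.step(buttons)
                total += reward          # reward shaping is where the art form lives
            genome.fitness = total
        env.close()

    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         'neat-config')          # assumed config file
    winner = neat.Population(config).run(eval_genomes, 8)

The fitness here is just the environment's default reward summed over the episode; in practice that's exactly the part you end up hand-tuning.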

wholemoley | 6 years ago

It seems clear to me that the biggest advances will be made through evolutionary and developmental neural networks, where evolution lays down the algorithm that builds the gross neural network architecture and learning then refines it. However, this will need massive amounts of computational power, because you have G generations of population size P, and each individual phenotype needs to go through a developmental step (neurogenesis) and then an evaluation step. On top of that, we need a good genotype-to-phenotype map specifically for neural networks.

Conveniently, the gene regulatory networks that would control cell growth, division, and the wiring up of neurons are themselves represented mathematically as neural networks, so in effect you're evolving one class of neural networks that builds another class of neural networks. Nature is quite elegant.
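
A toy way to see the "networks building networks" idea (my sketch, not a real developmental model): evolve the parameters of a tiny generator network whose only job is to emit the weight matrix of a larger task network by being queried at each (row, column) coordinate, roughly in the spirit of CPPN/HyperNEAT-style indirect encodings:

    import numpy as np

    IN, OUT = 64, 16          # size of the task network being "grown"
    GEN_HIDDEN = 16           # size of the small generator (genotype) network

    def generator_forward(params, x):
        # tiny 2-16-1 MLP: takes (row, col) coordinates, emits one weight
        W1, b1, W2, b2 = params
        h = np.tanh(x @ W1 + b1)
        return (h @ W2 + b2).item()

    def develop(params):
        # the "developmental step": query the generator at every coordinate
        # pair to lay down the task network's IN x OUT weight matrix
        coord = lambda i, j: np.array([i / IN, j / OUT])
        return np.array([[generator_forward(params, coord(i, j))
                          for j in range(OUT)] for i in range(IN)])

    def random_params(rng):
        return [rng.normal(0, 1, (2, GEN_HIDDEN)), rng.normal(0, 1, GEN_HIDDEN),
                rng.normal(0, 1, (GEN_HIDDEN, 1)), rng.normal(0, 1, 1)]

    rng = np.random.default_rng(0)
    genotype = random_params(rng)     # ~65 parameters in the generator...
    phenotype = develop(genotype)     # ...expand into 64 x 16 = 1024 task weights
    print(phenotype.shape)

Evolution would act on the genotype; the expensive part is that every individual still has to be developed and then evaluated, which is exactly the G x P cost blow-up mentioned above.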

rdlecler1 | 6 years ago

It would be interesting to see how good the combination of EAs and NNs would be... This is basically us, humans: evolution + learning. Have there been any attempts to combine the two?
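
Neuroevolution is exactly that combination and goes back decades (NEAT, mentioned elsewhere in the thread, is one example). A minimal sketch of the idea (mine, toy-scale): treat a small network's weights as the genome and let a simple evolution strategy do the learning, with XOR as a stand-in task:

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([0, 1, 1, 0], dtype=float)          # XOR as a stand-in task

    def forward(w, x):
        # 2-4-1 network; w is a flat 17-element genome reshaped into layers
        W1, b1, W2, b2 = w[:8].reshape(2, 4), w[8:12], w[12:16], w[16]
        h = np.tanh(x @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def fitness(w):
        return -np.mean((forward(w, X) - Y) ** 2)    # negative MSE

    rng = np.random.default_rng(0)
    parent = rng.normal(0, 1, 17)
    for _ in range(500):
        # (1 + 20) evolution strategy: mutate, keep the best of parent + children
        children = parent + rng.normal(0, 0.3, (20, 17))
        parent = max(np.vstack([children, parent]), key=fitness)
    print(np.round(forward(parent, X), 2))           # should head towards 0 1 1 0

The more interesting hybrids keep both loops: evolution shapes architectures or initial weights, and learning refines them within a lifetime, which is the evo-devo picture in the comment above.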

Drdrdrq | 6 years ago

Maybe largely off-topic, but I still remember the fun I had as a kid actually building MENACE (the Machine Educable Noughts And Crosses Engine) after reading a Martin Gardner article about it in Scientific American:

http://www.mscroggs.co.uk/blog/19

For the beads I used Smarties, so each time the machine lost I ate the "wrong colour one".
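
For anyone who never met it: MENACE is bead-counts-in-matchboxes reinforcement, and the update rule fits in a few lines. A minimal sketch (mine, not the original matchbox layout), playing against a random opponent:

    import random
    from collections import defaultdict

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def legal_moves(board):
        return [i for i, cell in enumerate(board) if cell == ' ']

    # one "matchbox" per position: board string -> {move: bead count}
    matchboxes = defaultdict(lambda: defaultdict(lambda: 3))

    def menace_move(board):
        box = matchboxes[''.join(board)]
        moves = legal_moves(board)
        weights = [max(box[m], 0) for m in moves]
        if sum(weights) == 0:                 # box emptied out: reseed it
            weights = [1] * len(moves)
        return random.choices(moves, weights=weights)[0]

    def play_game():
        board, history, player = [' '] * 9, [], 'X'   # MENACE plays X
        while legal_moves(board) and not winner(board):
            if player == 'X':
                move = menace_move(board)
                history.append((''.join(board), move))
            else:
                move = random.choice(legal_moves(board))
            board[move] = player
            player = 'O' if player == 'X' else 'X'
        result = winner(board)
        # reinforcement: add beads after a win or draw, eat one after a loss
        delta = 3 if result == 'X' else (-1 if result == 'O' else 1)
        for state, move in history:
            matchboxes[state][move] += delta
        return result

    results = [play_game() for _ in range(5000)]
    print('wins:', results.count('X'), 'losses:', results.count('O'))

The Smarties version has the obvious advantage that the loss update is delicious.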

jaclaz | 6 years ago

I've been meaning to read Handbook of Neuroevolution Through Erlang. Has anyone read it? If so, what's your opinion of it?

plainOldText | 6 years ago

I didn't realize this was considered new. Most of the neat NN experiments you find on YouTube from the past several years use genetic algorithms. Good ol' MarI/O, for example. The rigged 3D human models "learning" how to walk and run. The various "navigate the maze" ones. Etc.

0xBA5ED | 6 years ago