Teaching myself AI with freecell

While I know a fair amount about software, I know relatively little about training and using neural networks. This series documents my attempt to demystify the topic as a hobby.

Background and initial learning

My background in AI is rather limited. I took a couple of university courses on the topic, and at an internship I used some pre-existing infrastructure at Facebook to train a web crawler I was building. With all the hype, and with some free time on my hands, I decided to dig in for myself.

I started with Andrej Karpathy's Zero to Hero video series, and followed along with the exercises and code in Jupyter notebooks. After gaining a basic understanding of the topic, it was time to get hands-on and try a project of my own.

The idea

On a long flight home from Japan, I was playing freecell on my phone.

Freecell is not a difficult game, but it is complex enough that no simple winning algorithm is known. Mulling it over, I wondered whether, in the absence of a winning algorithm, there were heuristics that could guide the next move. That's when I had a thought -- a neural network can be thought of as encoding a set of ineffable heuristics!

By this time I had learned enough about neural networks to see a vague project path:

  • Collect data from my own freecell games
  • Use (game state, next move) pairs as training examples (see the sketch below)
  • Train a network to predict the next move from the game state
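
To make that concrete, here is a rough sketch of what one training example might look like. The TrainingExample class and the text encodings of the board and the move are placeholders I'm making up for illustration; the real representation is something I'd still have to design.

```python
from dataclasses import dataclass

# Hypothetical shape of one training example: a serialized board state paired
# with the move that was actually played next. The encodings shown here are
# placeholders, not a finalized format.
@dataclass
class TrainingExample:
    board_state: str  # e.g. free cells, foundations, and cascades as text
    next_move: str    # e.g. "source pile -> destination pile"

example = TrainingExample(
    board_state="free: - - - - | found: - - - - | cascades: KS 3H ... / 9C 4D ...",
    next_move="2 -> 5",
)
```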

Researching freecell

Back at home, I was excited to dig into this project. The next step was to understand what else was out there:

  • Are there any working freecell solvers?
    • How do they work?
  • What's the state of the art with neural networks and freecell?

Are there any working freecell solvers?

  • fc-solve is an open source project to solve freecell games
    • mostly written in C++, but it also runs in the browser
  • freecell is an open source project to solve freecell games
    • also written in C++ and runs in the browser
    • it has 1 million solutions cataloged in the repository

Both of these solvers use a search-based approach to find solutions to freecell games, and are able to solve most games in a fraction of a second. To me, this indicates that solving freecell is effectively a solved problem.

However, searching for a solution is equivalent to playing the game with unlimited "undo" moves available, just playing really quickly. A more refined, and still unsolved, problem is to build a solver or heuristic that can win games in one try, without undo.
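
For a sense of what "search with unlimited undo" means in code, here is a minimal sketch of a depth-first solver. The board interface (key, is_won, legal_moves, apply) is an assumption on my part, not the actual API of fc-solve or any other solver; every backtrack in the recursion is effectively an undo.

```python
# Minimal sketch of a search-based solver (assumed board interface: key(),
# is_won(), legal_moves(), apply()). Returning None from a dead end and trying
# the next move is exactly an "undo" -- just performed very quickly.
def solve(state, path=(), seen=None):
    seen = set() if seen is None else seen
    if state.is_won():               # all cards moved to the foundations
        return list(path)            # sequence of moves that wins the game
    seen.add(state.key())            # don't revisit positions we've seen
    for move in state.legal_moves():
        nxt = state.apply(move)
        if nxt.key() in seen:
            continue
        solution = solve(nxt, path + (move,), seen)
        if solution is not None:
            return solution
    return None                      # dead end: backtrack, i.e. undo
```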

Neural networks and freecell

I quickly found three papers on the topic, but none of them tried to use a neural network to play freecell games directly.

All three identify freecell as an interesting problem domain for examining the problem-solving abilities of neural networks. However, each of them uses a neural network only as a learned heuristic inside an A* search (or something similar).

While there is some evidence that a neural-network-based heuristic might reduce the number of search steps, there is no evidence that it would provide a practical speedup in actually finding solutions.
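
As I understand it, the setup in those papers looks roughly like the sketch below: a standard A* search where the heuristic estimate comes from a trained network. The board interface and the estimate_moves_left function are my own placeholders, not code from the papers.

```python
import heapq

# Rough sketch of A* with a learned heuristic (all names are placeholders).
# estimate_moves_left(state) would be the neural network's guess at how many
# moves remain; A* uses it only to decide which position to expand next.
def a_star(start, estimate_moves_left):
    frontier = [(estimate_moves_left(start), 0, start, [])]
    best_cost = {start.key(): 0}
    counter = 1                      # tie-breaker so states are never compared
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if state.is_won():
            return path
        for move in state.legal_moves():
            nxt = state.apply(move)
            cost = len(path) + 1
            if cost < best_cost.get(nxt.key(), float("inf")):
                best_cost[nxt.key()] = cost
                priority = cost + estimate_moves_left(nxt)
                heapq.heappush(frontier, (priority, counter, nxt, path + [move]))
                counter += 1
    return None
```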

Setting the project goal

Notably, it is not a goal to search for a freecell solution. Searching is like playing the game with unlimited "undo" moves.

Instead, I would like to see if a neural network can win games of freecell without undo. This seems like it could be a unique project, and one that is both interesting and potentially achievable.

In summary, I want to build a neural network NN such that next_move = NN(board_state).
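
As a first, very rough sketch of what that could mean in code: a small classifier that maps an encoded board to a score for every candidate move. The feature size, the move vocabulary, and the architecture below are placeholders, assuming a PyTorch setup; the actual design is the subject of the project plan.

```python
import torch
import torch.nn as nn

BOARD_FEATURES = 52 * 16   # placeholder: some fixed-size encoding of the board
NUM_MOVES = 100            # placeholder: size of a fixed move vocabulary

class FreecellPolicy(nn.Module):
    """next_move = NN(board_state), phrased as a classifier over moves."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(BOARD_FEATURES, 512),
            nn.ReLU(),
            nn.Linear(512, NUM_MOVES),
        )

    def forward(self, board_state):   # board_state: (batch, BOARD_FEATURES)
        return self.net(board_state)  # logits, one score per candidate move

policy = FreecellPolicy()
logits = policy(torch.zeros(1, BOARD_FEATURES))
next_move = logits.argmax(dim=-1)     # index into the move vocabulary
```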

That's all for now; next time I'll talk about the project plan.