brilee/MuGo: Replicating AlphaGo's architecture in a readable manner


Repository name: brilee/MuGo

Repository URL: https://github.com/brilee/MuGo

Language: Python 100.0%

Introduction:

Update

This repo is abandoned as of mid-2017. The code at https://github.com/tensorflow/minigo is a continuation of the work here.

MuGo: A minimalist Go engine modeled after AlphaGo

This is a pure Python implementation of the essential parts of AlphaGo.

The logic / control flow of AlphaGo itself is not very complicated and is replicated here. The secret sauce of AlphaGo is in its various neural networks.

(As I understand it) AlphaGo uses three neural networks during play.

The first NN is a slow but accurate policy network. It is trained to predict human moves (~57% accuracy), and it outputs a list of plausible moves with a probability attached to each. This network is used to seed the Monte Carlo tree search with plausible moves. One reason it is slow is its size; another is that its inputs include various expensive-to-compute properties of the Go board (liberty counts, ataris, ladder status, etc.).

The second NN is a smaller, faster, but less accurate (~24% accuracy) policy network that doesn't use computed properties as input. Once a leaf node of the current MCTS tree is reached, this faster network plays the position out to the end with vaguely plausible moves, and the end position is scored.

The third NN is a value network: it outputs an expected win margin for that board, without attempting to play anything out. The result of the Monte Carlo playout using NN #2 and the value estimate from NN #3 are averaged, and this value is recorded as the approximate result for that MCTS node.

Using the priors from NN #1 and the accumulating results of MCTS, a new path is chosen for further Monte Carlo exploration.
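To make that concrete, here is a minimal sketch of those two ideas in Python. It is an illustration following the AlphaGo paper, not MuGo's actual code: Node, select_child, leaf_value, and the c_puct and mixing constants are all hypothetical names.

import math

# A Node tracks MCTS statistics for one candidate move.
class Node:
    def __init__(self, prior):
        self.prior = prior        # P: probability from the slow policy network (NN #1)
        self.visits = 0           # N: number of times MCTS has explored this move
        self.total_value = 0.0    # W: sum of values backed up through this move

    def q(self):
        # Mean value of this move so far (0 if unexplored).
        return self.total_value / self.visits if self.visits else 0.0

def select_child(children, c_puct=1.0):
    # PUCT-style selection: maximize Q + U, where U favors moves with a
    # high prior probability and few visits so far.
    total_visits = sum(child.visits for child in children.values())
    def score(child):
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visits)
        return child.q() + u
    return max(children, key=lambda move: score(children[move]))

def leaf_value(value_net_estimate, rollout_result, mixing=0.5):
    # Average the value network's estimate (NN #3) with the outcome of the
    # fast-rollout playout (NN #2); the AlphaGo paper mixed the two 50/50.
    return (1 - mixing) * value_net_estimate + mixing * rollout_result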

Getting Started

Install TensorFlow

Start by installing TensorFlow along with GPU drivers (i.e., CUDA support for NVIDIA cards).

Get SGFs for supervised learning

Second, find a source of SGF files. You can find 15 years of high-dan KGS games at u-go.net. Alternatively, you can download a database of professional games from a variety of sources.

Preprocess SGFs

Third, preprocess the SGF files. This takes all positions in the SGF files and extracts features for each position, as well as recording the correct next move. These positions are then split into chunks, with one test chunk and the remainder as training chunks. This step may take a while, and it must be repeated if you change the feature extraction steps in features.py.

python main.py preprocess data/kgs-*

(This example takes advantage of bash wildcard expansion - say, if the KGS directories are named data/kgs-2006-01, data/kgs-2006-02, and so on.)
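The shape of this step is roughly as follows. This is a minimal sketch, not MuGo's actual preprocessing code: extract_features stands in for the feature extraction in features.py, replay stands in for a board-state replayer, and the chunk size is made up.

import random
import re

def moves_from_sgf(sgf_text):
    # SGF moves look like ;B[pd] or ;W[dp]; a pass has empty brackets.
    return re.findall(r";[BW]\[([a-s]{2})?\]", sgf_text)

def make_examples(sgf_text, extract_features, replay):
    # Pair each position's feature planes with the move actually played.
    # `replay` is assumed to yield (position, next_move) pairs as it steps
    # through the game record.
    return [(extract_features(position), move)
            for position, move in replay(moves_from_sgf(sgf_text))]

def split_chunks(examples, chunk_size=1024):
    # Shuffle, then carve into fixed-size chunks: one held out for testing,
    # the remainder used for training.
    random.shuffle(examples)
    chunks = [examples[i:i + chunk_size]
              for i in range(0, len(examples), chunk_size)]
    return chunks[0], chunks[1:]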

Supervised learning (policy network)

With the preprocessed SGF data (default output directory is ./processed_data/), you can train the policy network.

python main.py train processed_data/ --save-file=/tmp/savedmodel --epochs=1 --logdir=logs/my_training_run

As the network is trained, the current model will be saved at --save-file. You can resume training the same network as follows:

python main.py train processed_data/ --read-file=/tmp/savedmodel --save-file=/tmp/savedmodel --epochs=10 --logdir=logs/my_training_run

Additionally, you can follow the training progress with TensorBoard: if you give each run a different name (logs/my_training_run, logs/my_training_run2), you can overlay the runs on top of each other.

tensorboard --logdir=logs/
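For intuition about what train is fitting, here is a minimal sketch of a convolutional policy network in modern Keras (MuGo itself predates this API). The layer count, filter sizes, NUM_FEATURE_PLANES, and optimizer are illustrative assumptions, not MuGo's actual hyperparameters; the real input planes are whatever features.py produces.

import tensorflow as tf

BOARD_SIZE = 19
NUM_FEATURE_PLANES = 48   # hypothetical; depends on what features.py emits

def build_policy_network():
    # Input: a stack of feature planes describing the board position.
    inputs = tf.keras.Input(shape=(BOARD_SIZE, BOARD_SIZE, NUM_FEATURE_PLANES))
    x = inputs
    for _ in range(5):  # a handful of same-padded convolutions
        x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # Collapse to one plane with a 1x1 convolution, then softmax over all
    # 361 board points to get a move probability distribution.
    x = tf.keras.layers.Conv2D(1, 1, padding="same")(x)
    x = tf.keras.layers.Flatten()(x)
    outputs = tf.keras.layers.Softmax()(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="sgd", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

The final 1x1 convolution is the usual trick for turning per-point activations into a single score per board point before the softmax.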

Play against MuGo

MuGo uses the GTP protocol, and you can use any GTP-compliant program with it. To invoke the raw policy network, use

python main.py gtp policy --read-file=/tmp/savedmodel

To invoke the MCTS-integrated version of the policy network, use

python main.py gtp mcts --read-file=/tmp/savedmodel
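For reference, GTP is a simple line-based text protocol: a controller writes commands such as genmove b, and the engine answers with = followed by a result (or ? on error). Below is a minimal sketch of the read-eval loop such an engine runs; the handlers are stubs, not MuGo's implementation, and only a tiny subset of the command set is shown.

import sys

def handle(command):
    # Dispatch a single GTP command.
    name, *args = command.split()
    if name == "protocol_version":
        return "2"
    if name == "name":
        return "MuGo-sketch"
    if name in ("boardsize", "clear_board", "komi", "play"):
        return ""    # a real engine would update its board state here
    if name == "genmove":
        return "D4"  # stub: a real engine would query its policy network
    if name == "quit":
        print("=\n", flush=True)
        sys.exit(0)
    raise ValueError(name)

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    try:
        # GTP success responses are "= <result>" followed by a blank line.
        print("= " + handle(line) + "\n", flush=True)
    except ValueError:
        print("? unknown command\n", flush=True)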

One way to play via GTP is to use gogui-display (which implements a UI that speaks GTP). You can download the gogui set of tools at http://gogui.sourceforge.net/. See also the gogui documentation on interesting ways to use GTP.

gogui-twogtp -black 'python main.py gtp policy --read-file=/tmp/savedmodel' -white 'gogui-display' -size 19 -komi 7.5 -verbose -auto

Another way to play via GTP is to play against GnuGo while spectating the games:

BLACK="gnugo --mode gtp"
WHITE="python main.py gtp policy --read-file=/tmp/savedmodel"
TWOGTP="gogui-twogtp -black \"$BLACK\" -white \"$WHITE\" -games 10 \
  -size 19 -alternate -sgffile gnugo"
gogui -size 19 -program "$TWOGTP" -computer-both -auto

Another way to play via GTP is to connect to CGOS, the Computer Go Online Server. The CGOS server hosted by boardspace.net is actually abandoned; you'll want to connect to the CGOS server at yss-aya.com.

After configuring your cgos.config file, you can connect to CGOS with cgosGtp -c cgos.config and spectate your own game with cgosView yss-aya.com 6819.

Running unit tests

python -m unittest discover tests


