This article presents a variant of Tic-Tac-Toe in which xsize and ysize set the board dimensions and winnum sets the number of marks in a row needed to win. Two deep learning models take the roles of player and computer and play against each other automatically, a Q-learning table records every move, and the outcome of each game is used to judge how well the approach works. The code below covers the full training process: iterating over games, making moves, checking for a win, and updating the models.

Tic-Tac-Toe (井字棋)
Tic-tac-toe is a connection game played on a 3×3 grid, similar in spirit to Gomoku (five in a row). The board is usually drawn without an outer border, so the grid lines form the shape of the Chinese character 井 ("well"), which is where the Chinese name comes from. The only equipment needed is pen and paper: two players, one marking O and the other X (by convention, X moves first), take turns leaving their marks in the cells. The first player to connect three of their marks along any straight line wins.
Overview
This project is a variant of tic-tac-toe: set xsize and ysize to choose the board dimensions, and winnum to choose how many marks in a row are needed to win. The end-of-game check is implemented in VictoryRule.py, and QLearning.py provides the Q table that records every move made by the computer and the player.
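Neither VictoryRule.py nor QLearning.py is reproduced in the article. As a rough guide to the interface the training loop below relies on, here is a minimal sketch of both, assuming that checkover returns the winner's marker, 0 for a draw (board full), and -1 while the game is still in progress, and that the Q table keeps one value vector per side, discounting earlier moves by decay_rate. The real files may well differ.

import numpy as np

class Rule:
    "Hypothetical sketch of VictoryRule.py: a flattened board plus a win check around the last move."
    def __init__(self, xsize, ysize, winnum):
        self.xsize, self.ysize, self.winnum = xsize, ysize, winnum
        self.map = np.zeros(xsize * ysize, dtype=int)

    def checkover(self, pos, marker):
        x, y = divmod(pos, self.ysize)  # row-major index -> (row, col)
        for dx, dy in ((0, 1), (1, 0), (1, 1), (1, -1)):
            count = 1  # marks connected through the last move, scanning both directions
            for sign in (1, -1):
                nx, ny = x + sign * dx, y + sign * dy
                while 0 <= nx < self.xsize and 0 <= ny < self.ysize and self.map[nx * self.ysize + ny] == marker:
                    count += 1
                    nx, ny = nx + sign * dx, ny + sign * dy
            if count >= self.winnum:
                return marker  # this side has won
        return 0 if not (self.map == 0).any() else -1  # draw, or game still running

class QLearning:
    "Hypothetical sketch of QLearning.py: per-side move records with decayed credit."
    def __init__(self, size, decay_rate):
        self.size, self.decay_rate = size, decay_rate
        self.playerstep = np.zeros(size, dtype=float)
        self.computerstep = np.zeros(size, dtype=float)

    def update(self, pred, who):
        chart = self.playerstep if who == 'player' else self.computerstep
        chart *= self.decay_rate  # older moves count for less
        chart[pred] = 1.0         # full credit to the newest move

    def clear(self):
        self.playerstep = np.zeros(self.size, dtype=float)
        self.computerstep = np.zeros(self.size, dtype=float)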
The project sets up two deep learning models to play the roles of player and computer; the two sides move against each other automatically, and the winner of each game is used to evaluate how well the approach works.
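Model.py is not shown either. A minimal sketch of what the training loop assumes, taking Model to be a small fully connected Paddle network that maps the flattened board state to one score per cell (the hidden layer size here is an arbitrary choice):

import paddle

class Model(paddle.nn.Layer):
    "Hypothetical sketch of Model.py: board state in, one score per cell out."
    def __init__(self, in_size, out_size):
        super().__init__()
        self.fc1 = paddle.nn.Linear(in_size, 64)
        self.fc2 = paddle.nn.Linear(64, out_size)

    def forward(self, x):
        x = paddle.nn.functional.relu(self.fc1(x))
        return self.fc2(x)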
Code Implementation
In [1]
import numpy as np
import paddle
from Model import Model
from VictoryRule import Rule
from QLearning import QLearning
from visualdl import LogWriter

log_writer = LogWriter(logdir="./log")

Max_Epoch = 200       # maximum number of training games
xsize = 3             # number of rows
ysize = 3             # number of columns
winnum = 3            # how many marks in a row are needed to win
learning_rate = 1e-3  # learning rate
decay_rate = 0.6      # per-step decay rate for the Q table
player = 1            # the player's marker: nonzero and positive
computer = 2          # the computer's marker: nonzero and positive
remain = []           # board positions still available

rule = Rule(xsize, ysize, winnum)              # game rules
Qchart = QLearning(xsize * ysize, decay_rate)  # Q table
player_model = Model(xsize * ysize, xsize * ysize)
player_model.train()
computer_model = Model(xsize * ysize, xsize * ysize)
computer_model.train()
player_optimizer = paddle.optimizer.SGD(parameters=player_model.parameters(), learning_rate=learning_rate)
computer_optimizer = paddle.optimizer.SGD(parameters=computer_model.parameters(), learning_rate=learning_rate)

def restart():
    "Reset the environment for the next game"
    Qchart.clear()
    remain.clear()
    rule.map = np.zeros(xsize * ysize, dtype=int)
    for i in range(xsize * ysize):
        remain.append(i)

def modelupdate(player_loss, computer_loss):
    "Log the losses, update both models, and save their weights (reads the global epoch)"
    log_writer.add_scalar(tag="player/loss", step=epoch, value=player_loss.numpy())
    log_writer.add_scalar(tag="computer/loss", step=epoch, value=computer_loss.numpy())
    # gradient update
    player_loss.backward()
    computer_loss.backward()
    player_optimizer.step()
    player_optimizer.clear_grad()
    computer_optimizer.step()
    computer_optimizer.clear_grad()
    paddle.save(player_model.state_dict(), 'player_model')
    paddle.save(computer_model.state_dict(), 'computer_model')

for i in range(xsize * ysize):
    remain.append(i)

for epoch in range(Max_Epoch):
    while True:
        # player's turn: score every cell, then take the best one still free
        player_predict = player_model(paddle.to_tensor(rule.map, dtype='float32', stop_gradient=False))
        for pred in np.argsort(-player_predict.numpy()):
            if pred in remain:
                remain.remove(pred)
                break
        rule.map[pred] = player
        Qchart.update(pred, 'player')
        print('player down at {}'.format(pred))
        overcode = rule.checkover(pred, player)
        if overcode == player:
            # the player wins; in the loss computation, the losing side's label is the negative of its recorded steps
            player_loss = paddle.nn.functional.mse_loss(player_predict, paddle.to_tensor(Qchart.playerstep, dtype='float32', stop_gradient=False))
            computer_loss = paddle.nn.functional.mse_loss(computer_predict, paddle.to_tensor(-1 * Qchart.computerstep, dtype='float32', stop_gradient=False))
            print("Player Victory!")
            print(rule.map.reshape(xsize, ysize))
            # print("epoch:{}\tplayer loss:{}\tcomputer loss:{}".format(epoch, player_loss.numpy()[0], computer_loss.numpy()[0]))
            modelupdate(player_loss, computer_loss)
            restart()
            break
        elif overcode == 0:
            # draw: both sides keep their recorded steps as labels
            player_loss = paddle.nn.functional.mse_loss(player_predict, paddle.to_tensor(Qchart.playerstep, dtype='float32', stop_gradient=False))
            computer_loss = paddle.nn.functional.mse_loss(computer_predict, paddle.to_tensor(Qchart.computerstep, dtype='float32', stop_gradient=False))
            print("Draw!")
            print(rule.map.reshape(xsize, ysize))
            # print("epoch:{}\tplayer loss:{}\tcomputer loss:{}".format(epoch, player_loss.numpy()[0], computer_loss.numpy()[0]))
            modelupdate(player_loss, computer_loss)
            restart()
            break
        # computer's turn: same greedy selection with the computer model
        computer_predict = computer_model(paddle.to_tensor(rule.map, dtype='float32', stop_gradient=False))
        for pred in np.argsort(-computer_predict.numpy()):
            if pred in remain:
                remain.remove(pred)
                break
        rule.map[pred] = computer
        Qchart.update(pred, 'computer')
        print('computer down at {}'.format(pred))
        overcode = rule.checkover(pred, computer)
        if overcode == computer:
            # the computer wins; now the player is the losing side
            player_loss = paddle.nn.functional.mse_loss(player_predict, paddle.to_tensor(-1 * Qchart.playerstep, dtype='float32', stop_gradient=False))
            computer_loss = paddle.nn.functional.mse_loss(computer_predict, paddle.to_tensor(Qchart.computerstep, dtype='float32', stop_gradient=False))
            print("Computer Victory!")
            print(rule.map.reshape(xsize, ysize))
            # print("epoch:{}\tplayer loss:{}\tcomputer loss:{}".format(epoch, player_loss.numpy()[0], computer_loss.numpy()[0]))
            modelupdate(player_loss, computer_loss)
            restart()
            break
        elif overcode == 0:
            # draw after the computer's move
            player_loss = paddle.nn.functional.mse_loss(player_predict, paddle.to_tensor(Qchart.playerstep, dtype='float32', stop_gradient=False))
            computer_loss = paddle.nn.functional.mse_loss(computer_predict, paddle.to_tensor(Qchart.computerstep, dtype='float32', stop_gradient=False))
            print("Draw!")
            print(rule.map.reshape(xsize, ysize))
            # print("epoch:{}\tplayer loss:{}\tcomputer loss:{}".format(epoch, player_loss.numpy()[0], computer_loss.numpy()[0]))
            modelupdate(player_loss, computer_loss)
            restart()
            break
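Once training has run, the weights saved by modelupdate can be loaded back for play. A minimal sketch of inference on an empty 3×3 board, assuming the hypothetical Model above (in a real game you would restrict the argmax to cells that are still empty, as the training loop does):

import numpy as np
import paddle
from Model import Model

model = Model(9, 9)
model.set_state_dict(paddle.load('computer_model'))
model.eval()

board = np.zeros(9)  # empty 3x3 board, flattened row-major
scores = model(paddle.to_tensor(board, dtype='float32'))
move = int(np.argmax(scores.numpy()))  # greedy: highest-scoring cell
print('computer would play at', move)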
Sample Output
player down at 7
computer down at 3
player down at 1
computer down at 8
player down at 6
computer down at 2
player down at 0
computer down at 5
Computer Victory!
[[1 1 2]
 [2 0 2]
 [1 1 2]]