This competition is about predicting the outcome of League of Legends matches from real-time in-game data. It provides 180,000 training rows and 20,000 test rows with 30 fields such as kills and damage; participants must predict the "win" label for the test set, and submissions are scored by accuracy. The baseline walkthrough covers environment setup and running the code, uses a simple neural network, and can be improved further by extracting cross features and adding a validation step.

Competition Introduction
Real-time battle games are a hot topic in artificial intelligence research. The complexity of these games, their partial observability, and the dynamically changing state of a match make this research difficult. We can predict the win probability at the champion-selection stage, or build a model from live match data while the game is running. So, while a League of Legends match is in progress, can we know our own win rate?

Competition Task
The competition data consists of real-time game data from League of Legends players, recording each player's per-match statistics (such as kills and physical damage). Participants are expected to mine patterns from the dataset and predict whether the player wins the current game.
A sample of the competition training set can be loaded as follows:
Training set: 180,000 rows; test set: 20,000 rows.
import pandas as pd
import numpy as np

train = pd.read_csv('train.csv.zip')
Each row of the dataset is one player's game record, with the fields listed below:
id: player record id
win: whether the player won (label variable)
kills: number of kills
deaths: number of deaths
assists: number of assists
largestkillingspree: largest killing spree (game term for going on a rampage: killing three or more enemy champions in a row without dying in between)
largestmultikill: largest multi kill (game term: multiple kills within a short time)
longesttimespentliving: longest time spent living
doublekills: number of double kills
triplekills: number of triple kills
quadrakills: number of quadra kills
pentakills: number of penta kills
totdmgdealt: total damage dealt
magicdmgdealt: magic damage dealt
physicaldmgdealt: physical damage dealt
truedmgdealt: true damage dealt
largestcrit: largest critical strike
totdmgtochamp: total damage dealt to enemy champions
magicdmgtochamp: magic damage dealt to enemy champions
physdmgtochamp: physical damage dealt to enemy champions
truedmgtochamp: true damage dealt to enemy champions
totheal: total healing
totunitshealed: total units healed
dmgtoturrets: damage dealt to turrets
timecc: crowd control time
totdmgtaken: total damage taken
magicdmgtaken: magic damage taken
physdmgtaken: physical damage taken
truedmgtaken: true damage taken
wardsplaced: number of wards placed
wardskilled: number of wards destroyed
firstblood: whether the player got first blood
In the test set the label field win is empty and must be predicted by participants.
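For a quick look at these fields, a minimal sketch (assuming the same train.csv.zip loaded above):

# Preview the fields listed above (path as in the earlier snippet)
import pandas as pd

train = pd.read_csv('train.csv.zip')
print(train.columns.tolist())                                 # all field names
print(train[['win', 'kills', 'deaths', 'assists']].head())    # first rows of a few fields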
Evaluation Rules
Data Description
Participants need to submit their predicted win label for each row of the test set; the submission format is as follows:
win
0
1
1
0
Evaluation Metric
This competition is scored by accuracy: the higher the value, the better. Reference evaluation code:
from sklearn.metrics import accuracy_score

y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]
accuracy_score(y_true, y_pred)   # 0.5 here: two of the four predictions match
Baseline Usage Guide
1. Click the 'Fork' button; the 'Fork Project' dialog appears.
2. Click the 'Create' button; the 'Run Project' dialog appears.
3. Click 'Run Project'; you are automatically taken to a new page.
4. Click 'Start Environment'; the 'Select Runtime Environment' dialog appears.
5. Select a runtime environment (starting the project takes some time, please be patient); when the 'Environment started successfully' dialog appears, click OK.
6. Click 'Enter Environment' to open the notebook environment.
7. Move the mouse into each code cell below (the cell's left border turns light blue), then click the triangular 'Run' button at the top left of each cell in order; wait for one cell to finish before running the next, until all cells have run.

8. Download the submission.zip archive from the left panel of the page.
9. Submit submission.zip on the competition page; once the system finishes evaluating, you will appear on the leaderboard.
10. Click 'Versions - Create New Version' on the left side of the page.
11. Fill in the 'Version Name' and click the 'Create Version' button; the project will then show up on your profile page (you may choose to make it public).
In [1]
import pandas as pd
import paddle
import numpy as np
%pylab inline
import seaborn as sns

train_df = pd.read_csv('data/data137276/train.csv.zip')
test_df = pd.read_csv('data/data137276/test.csv.zip')

train_df = train_df.drop(['id', 'timecc'], axis=1)
test_df = test_df.drop(['id', 'timecc'], axis=1)
Populating the interactive namespace from numpy and matplotlib
Data Analysis
In [3]
train_df.isnull().mean(0)
win                       0.0
kills                     0.0
deaths                    0.0
assists                   0.0
largestkillingspree       0.0
largestmultikill          0.0
longesttimespentliving    0.0
doublekills               0.0
triplekills               0.0
quadrakills               0.0
pentakills                0.0
totdmgdealt               0.0
magicdmgdealt             0.0
physicaldmgdealt          0.0
truedmgdealt              0.0
largestcrit               0.0
totdmgtochamp             0.0
magicdmgtochamp           0.0
physdmgtochamp            0.0
truedmgtochamp            0.0
totheal                   0.0
totunitshealed            0.0
dmgtoturrets              0.0
totdmgtaken               0.0
magicdmgtaken             0.0
physdmgtaken              0.0
truedmgtaken              0.0
wardsplaced               0.0
wardskilled               0.0
firstblood                0.0
dtype: float64
In [4]
train_df['win'].value_counts().plot(kind='bar')
In [5]
sns.distplot(train_df['kills'])
In [5]
sns.distplot(train_df['deaths'])
In [6]
sns.boxplot(y='kills', x='win', data=train_df)
In [7]
plt.scatter(train_df['kills'], train_df['deaths'])
plt.xlabel('kills')
plt.ylabel('deaths')
Text(0,0.5,'deaths')
In [8]
for col in train_df.columns[1:]:
    train_df[col] /= train_df[col].max()
    test_df[col] /= test_df[col].max()
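Note that the loop above scales the training set and the test set each by its own column maximum. A minimal alternative sketch, not part of the original baseline, that scales both sets by the training-set maxima so they share one scale (use instead of, not after, the loop above):

# Scale every feature column by the maximum observed in the training set
for col in train_df.columns[1:]:
    col_max = train_df[col].max()
    if col_max > 0:                      # skip all-zero columns to avoid division by zero
        train_df[col] = train_df[col] / col_max
        test_df[col] = test_df[col] / col_max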
Building the Model
In [9]
class Classifier(paddle.nn.Layer):
    # self refers to the instance of the class itself
    def __init__(self):
        # Initialize parameters of the parent class
        super(Classifier, self).__init__()
        self.fc1 = paddle.nn.Linear(in_features=29, out_features=40)
        self.fc2 = paddle.nn.Linear(in_features=40, out_features=1)
        self.relu = paddle.nn.ReLU()

    # Forward computation of the network
    def forward(self, inputs):
        x = self.relu(self.fc1(inputs))
        x = self.fc2(x)
        return x
In [10]
model = Classifier()
model.train()
opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
loss_fn = paddle.nn.BCEWithLogitsLoss()
W0427 14:43:44.334179   103 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 8.0, Driver API Version: 11.2, Runtime API Version: 11.2
W0427 14:43:44.338698   103 device_context.cc:465] device: 0, cuDNN Version: 8.2.
In [11]
EPOCH_NUM = 2     # number of training epochs
BATCH_SIZE = 100  # batch size

training_data = train_df.iloc[:-1000, ].values.astype(np.float32)
val_data = train_df.iloc[-1000:, ].values.astype(np.float32)

# Outer loop over epochs
for epoch_id in range(EPOCH_NUM):
    # Shuffle the training data before each epoch
    np.random.shuffle(training_data)
    # Split the training data into mini-batches of BATCH_SIZE rows
    mini_batches = [training_data[k:k+BATCH_SIZE] for k in range(0, len(training_data), BATCH_SIZE)]
    # Inner loop over mini-batches
    for iter_id, mini_batch in enumerate(mini_batches):
        x = np.array(mini_batch[:, 1:])   # features of the current batch
        y = np.array(mini_batch[:, :1])   # labels of the current batch
        # Convert numpy arrays to Paddle tensors
        features = paddle.to_tensor(x)
        y = paddle.to_tensor(y)
        # Forward pass
        predicts = model(features)
        # Compute the loss
        loss = loss_fn(predicts, y)
        avg_loss = paddle.mean(loss)
        if iter_id % 200 == 0:
            acc = (predicts > 0).astype(int).flatten() == y.flatten().astype(int)
            acc = acc.astype(float).mean()
            print("epoch: {}, iter: {}, loss is: {}, acc is {}".format(
                epoch_id, iter_id, avg_loss.numpy(), acc.numpy()))
        # Backward pass: compute gradients for each layer's parameters
        avg_loss.backward()
        # Update parameters with one optimizer step at the configured learning rate
        opt.step()
        # Clear gradients for the next iteration
        opt.clear_grad()
epoch: 0, iter: 0, loss is: [0.6994627], acc is [0.49]
epoch: 0, iter: 200, loss is: [0.7009081], acc is [0.36]
epoch: 0, iter: 400, loss is: [0.6921266], acc is [0.57]
epoch: 0, iter: 600, loss is: [0.6839013], acc is [0.64]
epoch: 0, iter: 800, loss is: [0.6739801], acc is [0.75]
epoch: 0, iter: 1000, loss is: [0.65885824], acc is [0.83]
epoch: 0, iter: 1200, loss is: [0.66508365], acc is [0.71]
epoch: 0, iter: 1400, loss is: [0.6578212], acc is [0.74]
epoch: 0, iter: 1600, loss is: [0.6562445], acc is [0.72]
epoch: 1, iter: 0, loss is: [0.6200185], acc is [0.85]
epoch: 1, iter: 200, loss is: [0.62804365], acc is [0.79]
epoch: 1, iter: 400, loss is: [0.6358215], acc is [0.75]
epoch: 1, iter: 600, loss is: [0.6242084], acc is [0.76]
epoch: 1, iter: 800, loss is: [0.6128205], acc is [0.78]
epoch: 1, iter: 1000, loss is: [0.6186602], acc is [0.68]
epoch: 1, iter: 1200, loss is: [0.57297456], acc is [0.79]
epoch: 1, iter: 1400, loss is: [0.57423747], acc is [0.78]
epoch: 1, iter: 1600, loss is: [0.58428985], acc is [0.75]
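The val_data split created above is never used by the baseline. A minimal sketch (assuming the model above and the same logit > 0 threshold convention used for the test predictions) for measuring accuracy on it after training:

# Check accuracy on the held-out 1,000 validation rows that the loop above never touches
model.eval()
val_x = paddle.to_tensor(val_data[:, 1:])
val_y = val_data[:, 0].astype(int)
val_logits = model(val_x)
val_pred = (val_logits > 0).astype(int).flatten().numpy()   # logit > 0 <=> probability > 0.5
print('validation accuracy:', (val_pred == val_y).mean())
model.train()   # switch back to training mode if more epochs follow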
In [12]
model.eval()
test_data = paddle.to_tensor(test_df.values.astype(np.float32))
test_predict = model(test_data)
# BCEWithLogitsLoss works on raw logits, so logit > 0 corresponds to probability > 0.5
test_predict = (test_predict > 0).astype(int).flatten()
In [13]
pd.DataFrame({'win': test_predict.numpy()}).to_csv('submission.csv', index=None)
!zip submission.zip submission.csv
updating: submission.csv (deflated 90%)
Summary and Ways to Improve the Score
The fields in the original data are correlated with each other, so further cross features can be extracted. A validation step on a held-out validation set can also be added to the training process.
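As an illustration of the cross features mentioned above, a rough sketch with hypothetical feature names (applied to both train_df and test_df before scaling; the first Linear layer's in_features must then be updated to match the new number of feature columns):

# Hypothetical cross features built from existing columns
for df in (train_df, test_df):
    # kill/death/assist ratio; +1 avoids division by zero
    df['kda'] = (df['kills'] + df['assists']) / (df['deaths'] + 1)
    # share of champion damage that is physical
    df['phys_share'] = df['physdmgtochamp'] / (df['totdmgtochamp'] + 1)
    # damage taken relative to damage dealt
    df['taken_ratio'] = df['totdmgtaken'] / (df['totdmgdealt'] + 1)

The val_data split defined in the training cell can then be used to compare such feature sets before submitting.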