Paper Reproduction: DSIN - Alibaba CTR Prediction Trilogy, Part 3

This article describes a PaddlePaddle-based reproduction of the Deep Session Interest Network (DSIN) for click-through-rate (CTR) prediction. The project uses Alibaba's Ali_Display_Ad_Click dataset and models user interest by dividing each user's historical interactions into sessions. The reproduced DSIN model reaches a test AUC of 0.6356. The article covers dataset preprocessing, the model structure, the code layout, the training and evaluation workflow, and the problems encountered and lessons learned during reproduction.


1. Introduction

Deep Session Interest Network for Click-Through Rate Prediction is a classic paper on the CTR prediction problem. Its predecessors include the well-known DIN and DIEN, both of which focus on user interest, modeling it from the user's historical behavior. DSIN starts from the observation that a user's interests are highly homogeneous within a session but differ across sessions, as illustrated in the paper's figure.

Based on this observation, DSIN divides a user's historical interactions into sessions and then models the user's session-level interests with self-attention and a bidirectional LSTM. The overall framework is similar to DIN and DIEN; a sketch of the session-splitting rule follows below. Paper link: Deep Session Interest Network for Click-Through Rate Prediction.
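The paper's preprocessing splits a behavior sequence into a new session wherever the time gap between two adjacent behaviors exceeds 30 minutes. Here is a minimal sketch of that rule, assuming time-ordered input; the function and variable names are illustrative rather than taken from the project code:

```python
SESSION_GAP = 30 * 60  # 30-minute gap threshold (in seconds), as in the DSIN paper


def split_sessions(timestamps, behaviors):
    """Split one user's time-ordered behaviors into sessions.

    A new session starts whenever the gap between two adjacent
    behaviors exceeds SESSION_GAP.
    """
    sessions, current = [], [behaviors[0]]
    for prev_t, cur_t, behavior in zip(timestamps, timestamps[1:], behaviors[1:]):
        if cur_t - prev_t > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(behavior)
    sessions.append(current)
    return sessions


# Three behaviors; the third comes 40 minutes after the second, so it opens a new session.
print(split_sessions([0, 600, 3000], ['cate_1', 'cate_2', 'cate_3']))
# -> [['cate_1', 'cate_2'], ['cate_3']]
```

In this project at most 5 sessions of at most 10 behaviors each are kept per user (sess_count = 5, sess_max_length = 10), which matches the (N, 10, 10) session input shape used in the model code later on.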

2. Reproduction Accuracy

After reproducing the paper's algorithm on the PaddlePaddle deep learning framework, this project reaches the test accuracy shown in the table below.

| Model | AUC    | batch_size | epoch_num | Time of each epoch |
|-------|--------|------------|-----------|--------------------|
| DSIN  | 0.6356 | 4096       | 1         | ~10 minutes        |

The parameter settings can be found in the config_bigdata.yaml file.

3. Dataset

The dataset used in this project, Ali_Display_Ad_Click, is a Taobao display-advertising CTR estimation dataset released by Alibaba.

3.1 The raw dataset

Raw sample skeleton raw_sample: ad display/click logs (26 million records) of 1.14 million users randomly sampled from the Taobao website over 8 days, forming the raw sample skeleton. Fields:

- user: de-identified user ID
- adgroup_id: de-identified ad unit ID
- time_stamp: timestamp
- pid: ad position (resource slot)
- nonclk: 1 means not clicked, 0 means clicked
- clk: 0 means not clicked, 1 means clicked

```
user,time_stamp,adgroup_id,pid,nonclk,clk
581738,1494137644,1,430548_1007,1,0
```

Ad feature table ad_feature: basic information for every ad appearing in raw_sample. Fields:

- adgroup_id: de-identified ad ID
- cate_id: de-identified product category ID
- campaign_id: de-identified ad campaign ID
- customer: de-identified advertiser ID
- brand: de-identified brand ID
- price: item price

```
adgroup_id,cate_id,campaign_id,customer,brand,price
63133,6406,83237,1,95471,170.0
```

User profile table user_profile: basic information for every user appearing in raw_sample. Fields:

- userid: de-identified user ID
- cms_segid: micro-group ID
- cms_group_id: cms group ID
- final_gender_code: gender, 1 = male, 2 = female
- age_level: age bracket
- pvalue_level: consumption level, 1 = low, 2 = mid, 3 = high
- shopping_level: shopping depth, 1 = light user, 2 = moderate user, 3 = heavy user
- occupation: whether a college student, 1 = yes, 0 = no
- new_user_class_level: city tier

```
userid,cms_segid,cms_group_id,final_gender_code,age_level,pvalue_level,shopping_level,occupation,new_user_class_level
234,0,5,2,5,,3,0,3
```

Behavior log behavior_log: shopping behaviors of all raw_sample users over 22 days. Fields:

- user: de-identified user ID
- time_stamp: timestamp
- btag: behavior type, one of pv (page view), cart (add to cart), fav (favorite), buy (purchase)
- cate: de-identified product category ID
- brand: de-identified brand ID

```
user,time_stamp,btag,cate,brand
558157,1493741625,pv,6250,91286
```
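For orientation, the impression-level tables can be loaded and joined with pandas roughly as follows. This is a sketch only: the CSV file names follow the dataset's usual distribution and should be verified against the actual download.

```python
import pandas as pd

# File names are assumed from the dataset's usual distribution; adjust paths as needed.
raw_sample = pd.read_csv('raw_sample.csv')
ad_feature = pd.read_csv('ad_feature.csv')
user_profile = pd.read_csv('user_profile.csv')

# Attach ad-side and user-side features to every impression record.
samples = (raw_sample
           .merge(ad_feature, on='adgroup_id', how='left')
           .merge(user_profile, left_on='user', right_on='userid', how='left'))
print(samples[['user', 'adgroup_id', 'cate_id', 'brand', 'clk']].head())
```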

3.2 The preprocessed dataset

The four raw files are processed following the original paper's preprocessing pipeline, producing a dataset that satisfies the DSIN paper's requirements and can be read directly by the reader. The dataset consists of eight pkl files, four each for the training and test sets. Taking the training set as an example, the four files are train_feat_input.pkl, train_sess_input.pkl, train_session_length.pkl, and train_label.pkl. They store, respectively, the user and item feature inputs (sampled at a ratio of 0.25), the user session feature inputs, the user session lengths, and the labels. The sketch below shows one way to inspect them.
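A quick way to sanity-check those files, assuming the directory layout created in the quick-start section below; the shapes in the comments are the ones the reader code expects:

```python
import pandas as pd

data_dir = '/home/aistudio/work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/'
feat_input = pd.read_pickle(data_dir + 'train_feat_input.pkl')       # user/item features, one row per sample
sess_input = pd.read_pickle(data_dir + 'train_sess_input.pkl')       # session behaviors, shape (N, 10, 10)
sess_length = pd.read_pickle(data_dir + 'train_session_length.pkl')  # valid-session count per sample, shape (N,)
label = pd.read_pickle(data_dir + 'train_label.pkl')                 # click labels, shape (N,)

print(type(feat_input), sess_input.shape, sess_length.shape, len(label))
```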

4. Environment Dependencies

Hardware:

- x86 CPU
- NVIDIA GPU

Framework:

- PaddlePaddle = 2.2.2
- Python = 3.7

Other dependencies:

- PaddleRec

5. Quick Start

5.1 Clone PaddleRec

In [1]

```python
# clone PaddleRec
import os
!ls /home/aistudio/data/
!ls work/
!python --version
!pip list | grep paddlepaddle
if not os.path.isdir('work/PaddleRec'):
    !cd work && git clone https://gitee.com/paddlepaddle/PaddleRec.git
```
```
data131207
PaddleRec
Python 3.7.4
paddlepaddle-gpu       2.2.2.post101
```

5.2 Unpack the dataset and move it into PaddleRec's datasets directory

In [2]

```python
# unpack the dataset
!tar -zxvf data/data131207/model_input.tar.gz
!mkdir '/home/aistudio/work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/'
!mkdir '/home/aistudio/work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/'
!mkdir '/home/aistudio/work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/'
!mv model_input/test_feat_input.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/
!mv model_input/test_label.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/
!mv model_input/test_sess_input.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/
!mv model_input/test_session_length.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/
!mv model_input/train_feat_input.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/
!mv model_input/train_label.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/
!mv model_input/train_sess_input.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/
!mv model_input/train_session_length.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/
```
```
model_input/
model_input/test_session_length.pkl
model_input/test_sess_input.pkl
model_input/train_sess_input.pkl
model_input/train_feat_input.pkl
model_input/test_feat_input.pkl
model_input/test_label.pkl
model_input/train_label.pkl
model_input/train_session_length.pkl
```

5.3 Write the model code

In [3]

```python
!mkdir '/home/aistudio/work/PaddleRec/models/rank/dsin'
%cd '/home/aistudio/work/PaddleRec/models/rank/dsin'
```
```
/home/aistudio/work/PaddleRec/models/rank/dsin
```

In [4]

```python
%%writefile net.py
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import math

import numpy as np
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

from sequence_layers import PositionalEncoder, AttentionSequencePoolingLayer, MLP


class DSIN_layer(nn.Layer):
    def __init__(self, user_size, adgroup_size, pid_size, cms_segid_size,
                 cms_group_size, final_gender_size, age_level_size,
                 pvalue_level_size, shopping_level_size, occupation_size,
                 new_user_class_level_size, campaign_size, customer_size,
                 cate_size, brand_size,  # above are all sparse feature sizes
                 sparse_embed_size=4, att_embedding_size=8, sess_count=5,
                 sess_max_length=10, l2_reg_embedding=1e-6):
        super().__init__()
        # feature sizes
        self.user_size = user_size
        self.adgroup_size = adgroup_size
        self.pid_size = pid_size
        self.cms_segid_size = cms_segid_size
        self.cms_group_size = cms_group_size
        self.final_gender_size = final_gender_size
        self.age_level_size = age_level_size
        self.pvalue_level_size = pvalue_level_size
        self.shopping_level_size = shopping_level_size
        self.occupation_size = occupation_size
        self.new_user_class_level_size = new_user_class_level_size
        self.campaign_size = campaign_size
        self.customer_size = customer_size
        self.cate_size = cate_size
        self.brand_size = brand_size
        # sparse embedding size
        self.sparse_embed_size = sparse_embed_size
        # transformer attention embedding size
        self.att_embedding_size = att_embedding_size
        # hyper-parameters
        self.sess_count = 5
        self.sess_max_length = 10

        # sparse embedding layers; embeddings over small vocabularies
        # (pid, cms, gender, age, ...) are kept dense (no sparse=True)
        self.userid_embeddings_var = paddle.nn.Embedding(
            self.user_size,
            self.sparse_embed_size,
            sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.adgroup_embeddings_var = paddle.nn.Embedding(
            self.adgroup_size,
            self.sparse_embed_size,
            sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.pid_embeddings_var = paddle.nn.Embedding(
            self.pid_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.cmsid_embeddings_var = paddle.nn.Embedding(
            self.cms_segid_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.cmsgroup_embeddings_var = paddle.nn.Embedding(
            self.cms_group_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.gender_embeddings_var = paddle.nn.Embedding(
            self.final_gender_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.age_embeddings_var = paddle.nn.Embedding(
            self.age_level_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.pvalue_embeddings_var = paddle.nn.Embedding(
            self.pvalue_level_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.shopping_embeddings_var = paddle.nn.Embedding(
            self.shopping_level_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.occupation_embeddings_var = paddle.nn.Embedding(
            self.occupation_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.new_user_class_level_embeddings_var = paddle.nn.Embedding(
            self.new_user_class_level_size,
            self.sparse_embed_size,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.campaign_embeddings_var = paddle.nn.Embedding(
            self.campaign_size,
            self.sparse_embed_size,
            sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.customer_embeddings_var = paddle.nn.Embedding(
            self.customer_size,
            self.sparse_embed_size,
            sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.cate_embeddings_var = paddle.nn.Embedding(
            self.cate_size,
            self.sparse_embed_size,
            sparse=True,
            padding_idx=0,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))
        self.brand_embeddings_var = paddle.nn.Embedding(
            self.brand_size,
            self.sparse_embed_size,
            sparse=True,
            padding_idx=0,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        # session interest extractor layer
        self.position_encoding = PositionalEncoder(2 * self.sparse_embed_size)
        self.transform = nn.TransformerEncoderLayer(
            d_model=self.att_embedding_size,
            nhead=8,
            dim_feedforward=64,
            weight_attr=self._get_weight_attr(),
            bias_attr=False,
            dropout=0.0)
        # session interest interacting layer
        self.bilstm = nn.LSTM(2 * self.sparse_embed_size, 2 * self.sparse_embed_size,
                              num_layers=2, direction='bidirectional')
        # session interest activating layers
        self.transform_actpool = AttentionSequencePoolingLayer(
            weight_normalization=True, name='transform')
        self.lstm_actpool = AttentionSequencePoolingLayer(
            weight_normalization=True, name='lstm')
        # MLP module
        self.mlp = MLP(mlp_hidden_units=[77, 200, 80])

    def _get_weight_attr(self):
        return paddle.ParamAttr(initializer=nn.initializer.TruncatedNormal(std=0.05))

    def forward(self, inputs):
        '''
        inputs : tuple, (sparse_input, dense_input, sess_input, sess_length)
            sparse_input: (N, 15)
            dense_input: (N,)
            sess_input: (N, 10, 10)
            sess_length: (N,)
        '''
        sparse_input, dense_input, sess_input, sess_length = inputs

        # sparse and dense features
        self.user = sparse_input[:, 0]
        self.adgroup = sparse_input[:, 1]
        self.pid = sparse_input[:, 2]
        self.cmsid = sparse_input[:, 3]
        self.cmsgroup = sparse_input[:, 4]
        self.gender = sparse_input[:, 5]
        self.age = sparse_input[:, 6]
        self.pvalue = sparse_input[:, 7]
        self.shopping = sparse_input[:, 8]
        self.occupation = sparse_input[:, 9]
        self.new_user_class = sparse_input[:, 10]
        self.campaign = sparse_input[:, 11]
        self.customer = sparse_input[:, 12]
        self.cate = sparse_input[:, 13]
        self.brand = sparse_input[:, 14]
        self.price = dense_input.unsqueeze_(-1)

        # sparse feature embeddings
        self.user_embeded = self.userid_embeddings_var(self.user)
        self.adgroup_embeded = self.adgroup_embeddings_var(self.adgroup)
        self.pid_embeded = self.pid_embeddings_var(self.pid)
        self.cmsid_embeded = self.cmsid_embeddings_var(self.cmsid)
        self.cmsgroup_embeded = self.cmsgroup_embeddings_var(self.cmsgroup)
        self.gender_embeded = self.gender_embeddings_var(self.gender)
        self.age_embeded = self.age_embeddings_var(self.age)
        self.pvalue_embeded = self.pvalue_embeddings_var(self.pvalue)
        self.shopping_embeded = self.shopping_embeddings_var(self.shopping)
        self.occupation_embeded = self.occupation_embeddings_var(self.occupation)
        self.new_user_class_embeded = self.new_user_class_level_embeddings_var(self.new_user_class)
        self.campaign_embeded = self.campaign_embeddings_var(self.campaign)
        self.customer_embeded = self.customer_embeddings_var(self.customer)
        self.cate_embeded = self.cate_embeddings_var(self.cate)
        self.brand_embeded = self.brand_embeddings_var(self.brand)

        # concat the query embedding
        # Note: the query features are cate_embeded and brand_embeded
        query_embeded = paddle.concat([self.cate_embeded, self.brand_embeded], -1)
        # concat all sparse feature embeddings
        deep_input_embeded = paddle.concat(
            [self.user_embeded, self.adgroup_embeded, self.pid_embeded,
             self.cmsid_embeded, self.cmsgroup_embeded, self.gender_embeded,
             self.age_embeded, self.pvalue_embeded, self.shopping_embeded,
             self.occupation_embeded, self.new_user_class_embeded,
             self.campaign_embeded, self.customer_embeded, self.cate_embeded,
             self.brand_embeded], -1)

        # session interest division part
        cate_sess_embeded = self.cate_embeddings_var(sess_input[:, ::2, :])
        brand_sess_embeded = self.brand_embeddings_var(sess_input[:, 1::2, :])
        # tr_input: (N, 5, 10, 8)
        tr_input = paddle.concat([cate_sess_embeded, brand_sess_embeded], axis=-1)

        # session interest extractor part
        lstm_input = []
        for i in range(self.sess_count):
            tr_sess_input = self.position_encoding(tr_input[:, i, :, :])
            tr_sess_input = self.transform(tr_sess_input)
            tr_sess_input = paddle.mean(tr_sess_input, axis=1, keepdim=True)
            lstm_input.append(tr_sess_input)
        lstm_input = paddle.concat(lstm_input, axis=1)
        lstm_output, _ = self.bilstm(lstm_input)
        # average the forward and backward directions
        lstm_output = (lstm_output[:, :, :2 * self.sparse_embed_size]
                       + lstm_output[:, :, 2 * self.sparse_embed_size:]) / 2

        # session interest activating layer
        lstm_input = self.transform_actpool([query_embeded, lstm_input, sess_length])
        lstm_output = self.lstm_actpool([query_embeded, lstm_output, sess_length])

        # concatenate the outputs of all modules
        mlp_input = paddle.concat(
            [deep_input_embeded, paddle.nn.Flatten()(lstm_input),
             paddle.nn.Flatten()(lstm_output), self.price], axis=-1)
        out = self.mlp(mlp_input)
        return out
```
Writing net.py

In [5]

```python
%%writefile sequence_layers.py
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import math

import numpy as np
import paddle
import paddle.nn as nn


class PositionalEncoder(nn.Layer):
    def __init__(self, d_model, max_seq_len=50):
        # d_model is the embedding dimension
        super(PositionalEncoder, self).__init__()
        self.d_model = d_model
        position = np.array([[pos / np.power(10000, 2. * i / self.d_model)
                              for i in range(self.d_model)]
                             for pos in range(max_seq_len)])
        # apply sine to the even columns and cosine to the odd ones
        position[:, 0::2] = np.sin(position[:, 0::2])  # dim 2i
        position[:, 1::2] = np.cos(position[:, 1::2])  # dim 2i+1
        self.position = self.create_parameter(
            shape=[max_seq_len, self.d_model],
            default_initializer=paddle.nn.initializer.Assign(value=position))

    def forward(self, x):
        x = x * math.sqrt(self.d_model)
        seq_len = x.shape[1]
        x = x + self.position[:seq_len, :]
        return x


class AttentionSequencePoolingLayer(nn.Layer):
    def __init__(self, dnn_units=[8, 64, 16], dnn_activation='sigmoid',
                 weight_normalization=False, name=None):
        super().__init__()
        self.dnn_units = dnn_units
        self.dnn_activation = 'sigmoid'
        self.weight_normalization = weight_normalization
        self.name = name
        layer_list = []
        for i in range(len(dnn_units) - 1):
            dnn_layer = nn.Linear(
                # the first layer takes [query, keys, query - keys, query * keys]
                in_features=self.dnn_units[i] if i != 0 else self.dnn_units[i] * 4,
                out_features=self.dnn_units[i + 1],
                weight_attr=self._weight_init())
            self.add_sublayer(self.name + f'linear_{i}', dnn_layer)
            layer_list.append(dnn_layer)
        self.layers = nn.LayerList(layer_list)
        self.dnn = nn.Linear(self.dnn_units[-1], 1, weight_attr=self._weight_init())
        self.activation = nn.Sigmoid()
        self.soft = nn.Softmax()

    def _weight_init(self):
        return paddle.framework.ParamAttr(
            initializer=paddle.nn.initializer.XavierNormal())

    def forward(self, inputs):
        querys, keys, sess_length = inputs
        keys_length = keys.shape[1]
        key_masks = nn.functional.sequence_mask(sess_length, keys_length)
        querys = paddle.tile(querys.unsqueeze(1), [1, keys_length, 1])
        att_input = paddle.concat([querys, keys, querys - keys, querys * keys], axis=-1)
        for i, layer in enumerate(self.layers):
            att_input = layer(att_input)
            att_input = self.activation(att_input)
        att_score = self.dnn(att_input)                     # (N, 50, 1)
        att_score = paddle.transpose(att_score, [0, 2, 1])  # (N, 1, 50)
        if self.weight_normalization:
            paddings = paddle.ones_like(att_score) * (-2 ** 32 + 1)
        else:
            paddings = paddle.zeros_like(att_score)
        # unsqueeze key_masks so its shape matches att_score
        att_score = paddle.where(key_masks.unsqueeze(1) == 1, att_score, paddings)
        att_score = self.soft(att_score)
        out = paddle.matmul(att_score, keys)
        return out


class MLP(nn.Layer):
    def __init__(self, mlp_hidden_units, use_bn=True):
        super().__init__()
        self.mlp_hidden_units = mlp_hidden_units
        self.activation = paddle.nn.Sigmoid()
        layer_list = []
        for i in range(len(mlp_hidden_units) - 1):
            dnn_layer = nn.Linear(
                in_features=self.mlp_hidden_units[i],
                out_features=self.mlp_hidden_units[i + 1],
                weight_attr=self._weight_init())
            self.add_sublayer(f'linear_{i}', dnn_layer)
            layer_list.append(dnn_layer)
        self.layers = nn.LayerList(layer_list)
        self.dense = nn.Linear(self.mlp_hidden_units[-1], 1, bias_attr=True,
                               weight_attr=self._weight_init())
        self.predict_layer = nn.Sigmoid()

    def _weight_init(self):
        return paddle.framework.ParamAttr(
            initializer=paddle.nn.initializer.XavierNormal())

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
            x = self.activation(x)
        x = self.dense(x)
        x = self.predict_layer(x)
        return x
```
Writing sequence_layers.py

In [6]

```python
%%writefile dygraph_model.py
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import math

import paddle
import paddle.nn as nn
import paddle.nn.functional as F

import net


class DygraphModel():
    # define the model
    def create_model(self, config):
        user_size = config.get("hyper_parameters.user_size")
        cms_segid_size = config.get("hyper_parameters.cms_segid_size")
        cms_group_size = config.get("hyper_parameters.cms_group_size")
        final_gender_size = config.get("hyper_parameters.final_gender_size")
        age_level_size = config.get("hyper_parameters.age_level_size")
        pvalue_level_size = config.get("hyper_parameters.pvalue_level_size")
        shopping_level_size = config.get("hyper_parameters.shopping_level_size")
        occupation_size = config.get("hyper_parameters.occupation_size")
        new_user_class_level_size = config.get(
            "hyper_parameters.new_user_class_level_size")
        adgroup_size = config.get("hyper_parameters.adgroup_size")
        cate_size = config.get("hyper_parameters.cate_size")
        campaign_size = config.get("hyper_parameters.campaign_size")
        customer_size = config.get("hyper_parameters.customer_size")
        brand_size = config.get("hyper_parameters.brand_size")
        pid_size = config.get("hyper_parameters.pid_size")
        feat_embed_size = config.get("hyper_parameters.feat_embed_size")

        dsin_model = net.DSIN_layer(
            user_size, adgroup_size, pid_size, cms_segid_size, cms_group_size,
            final_gender_size, age_level_size, pvalue_level_size,
            shopping_level_size, occupation_size, new_user_class_level_size,
            campaign_size, customer_size, cate_size, brand_size,
            sparse_embed_size=feat_embed_size, l2_reg_embedding=1e-6)
        return dsin_model

    # define the loss function over predictions and labels
    def create_loss(self, pred, label):
        return paddle.nn.BCELoss()(pred, label)

    # define feeds that convert a numpy batch into paddle tensors
    def create_feeds(self, batch_data, config):
        data, label = (batch_data[0], batch_data[1], batch_data[2],
                       batch_data[3]), batch_data[-1]
        label = label.reshape([-1, 1])
        return label, data

    # define the optimizer
    def create_optimizer(self, dy_model, config):
        lr = config.get("hyper_parameters.optimizer.learning_rate", 0.001)
        optimizer = paddle.optimizer.Adam(
            learning_rate=lr, parameters=dy_model.parameters())
        return optimizer

    # define metrics such as auc/acc
    # multi-task models need to define multiple metrics
    def create_metrics(self):
        metrics_list_name = ["auc"]
        auc_metric = paddle.metric.Auc("ROC")
        metrics_list = [auc_metric]
        return metrics_list, metrics_list_name

    # construct the training forward phase
    def train_forward(self, dy_model, metrics_list, batch_data, config):
        label, input_tensor = self.create_feeds(batch_data, config)
        pred = dy_model.forward(input_tensor)
        # update metrics
        predict_2d = paddle.concat(x=[1 - pred, pred], axis=1)
        metrics_list[0].update(preds=predict_2d.numpy(), labels=label.numpy())
        loss = self.create_loss(pred, paddle.cast(label, "float32"))
        print_dict = {'loss': loss}
        return loss, metrics_list, print_dict

    def infer_forward(self, dy_model, metrics_list, batch_data, config):
        label, input_tensor = self.create_feeds(batch_data, config)
        pred = dy_model.forward(input_tensor)
        # update metrics
        predict_2d = paddle.concat(x=[1 - pred, pred], axis=1)
        metrics_list[0].update(preds=predict_2d.numpy(), labels=label.numpy())
        return metrics_list, None
```
Writing dygraph_model.py

In [7]

```python
%%writefile dsin_reader.py
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import numpy as np
import pandas as pd
from paddle.io import IterableDataset

# note: the trailing space in 'new_user_class_level ' matches the raw column name
sparse_features = ['userid', 'adgroup_id', 'pid', 'cms_segid', 'cms_group_id',
                   'final_gender_code', 'age_level', 'pvalue_level',
                   'shopping_level', 'occupation', 'new_user_class_level ',
                   'campaign_id', 'customer', 'cate_id', 'brand']
dense_features = ['price']


class RecDataset(IterableDataset):
    def __init__(self, file_list, config):
        super().__init__()
        self.file_list = file_list
        data_file = [f.split('/')[-1] for f in file_list]
        mode = data_file[0].split('_')[0]
        data_dir = file_list[0].split(data_file[0])[0]
        assert mode in ('train', 'test', 'sample'), \
            f"mode must be 'train', 'test' or 'sample', but got '{mode}'"
        feat_input = pd.read_pickle(data_dir + mode + '_feat_input.pkl')
        self.sess_input = pd.read_pickle(data_dir + mode + '_sess_input.pkl')
        self.sess_length = pd.read_pickle(data_dir + mode + '_session_length.pkl')
        self.label = pd.read_pickle(data_dir + mode + '_label.pkl')
        if str(type(self.label)).split("'")[1] != 'numpy.ndarray':
            self.label = self.label.to_numpy()
        self.label = self.label.astype('int64')
        self.num_samples = self.label.shape[0]
        self.sparse_input = feat_input[sparse_features].to_numpy().astype('int64')
        self.dense_input = feat_input[dense_features].to_numpy().reshape(-1).astype('float32')

    def __iter__(self):
        for i in range(self.num_samples):
            yield [self.sparse_input[i, :], self.dense_input[i],
                   self.sess_input[i, :, :], self.sess_length[i], self.label[i]]
```
Writing dsin_reader.py

In [8]

```yaml
%%writefile config_bigdata.yaml
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

runner:
  train_data_dir: "../../../datasets/Ali_Display_Ad_Click_DSIN/big_train/"
  train_reader_path: "dsin_reader"  # importlib format
  use_gpu: True
  use_auc: True
  train_batch_size: 4096
  epochs: 1
  print_interval: 50
  model_save_path: "output_model_all_dsin"
  test_data_dir: "../../../datasets/Ali_Display_Ad_Click_DSIN/big_test/"
  infer_reader_path: "dsin_reader"  # importlib format
  infer_batch_size: 16384  # 2**14
  infer_load_path: "output_model_all_dsin"
  infer_start_epoch: 0
  infer_end_epoch: 1

# hyper parameters of the user-defined network
hyper_parameters:
  # optimizer config
  optimizer:
    class: Adam
    learning_rate: 0.00235
  # user feature sizes
  user_size: 265442
  cms_segid_size: 97
  cms_group_size: 13
  final_gender_size: 2
  age_level_size: 7
  pvalue_level_size: 4
  shopping_level_size: 3
  occupation_size: 2
  new_user_class_level_size: 5
  # item feature sizes
  adgroup_size: 512431
  cate_size: 11859  # max value + 1
  campaign_size: 309448
  customer_size: 195841
  brand_size: 362855  # max value + 1
  # context feature size
  pid_size: 2
  # embedding size
  feat_embed_size: 4
```
Writing config_bigdata.yaml

5.4 Train and evaluate the model with PaddleRec's trainer and infer tools

In [9]

```python
!python ../../../tools/trainer.py -m config_bigdata.yaml
```
```
2022-05-11 19:50:56,823 - INFO - **************common.configs**********
2022-05-11 19:50:56,823 - INFO - use_gpu: True, use_xpu: False, use_visual: False, train_batch_size: 4096, train_data_dir: ../../../datasets/Ali_Display_Ad_Click_DSIN/big_train/, epochs: 1, print_interval: 50, model_save_path: output_model_all_dsin
2022-05-11 19:50:56,823 - INFO - **************common.configs**********
W0511 19:50:56.825248  1525 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0511 19:50:56.831076  1525 device_context.cc:465] device: 0, cuDNN Version: 7.6.
2022-05-11 19:51:01,867 - INFO - read data
2022-05-11 19:51:01,867 - INFO - reader path:dsin_reader
2022-05-11 19:51:13,903 - INFO - epoch: 0, batch_id: 0, auc:0.502794, loss:0.85580873, avg_reader_cost: 0.00291 sec, avg_batch_cost: 0.01317 sec, avg_samples: 81.92000, ips: 6220.65504 ins/s
2022-05-11 19:51:33,319 - INFO - epoch: 0, batch_id: 50, auc:0.495701, loss:0.19559237, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.38773 sec, avg_samples: 4096.00000, ips: 10564.02249 ins/s
2022-05-11 19:51:52,451 - INFO - epoch: 0, batch_id: 100, auc:0.499694, loss:0.21434923, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.38206 sec, avg_samples: 4096.00000, ips: 10720.87298 ins/s
2022-05-11 19:52:10,842 - INFO - epoch: 0, batch_id: 150, auc:0.512509, loss:0.19038938, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.36725 sec, avg_samples: 4096.00000, ips: 11153.31692 ins/s
2022-05-11 19:52:28,755 - INFO - epoch: 0, batch_id: 200, auc:0.530944, loss:0.20696387, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.35769 sec, avg_samples: 4096.00000, ips: 11451.33054 ins/s
2022-05-11 19:52:46,030 - INFO - epoch: 0, batch_id: 250, auc:0.545280, loss:0.18852976, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.34493 sec, avg_samples: 4096.00000, ips: 11874.79419 ins/s
2022-05-11 19:53:03,111 - INFO - epoch: 0, batch_id: 300, auc:0.558348, loss:0.20377612, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34106 sec, avg_samples: 4096.00000, ips: 12009.68762 ins/s
2022-05-11 19:53:20,102 - INFO - epoch: 0, batch_id: 350, auc:0.567205, loss:0.2231454, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.33924 sec, avg_samples: 4096.00000, ips: 12073.90980 ins/s
2022-05-11 19:53:36,952 - INFO - epoch: 0, batch_id: 400, auc:0.572662, loss:0.2543741, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.33644 sec, avg_samples: 4096.00000, ips: 12174.55680 ins/s
2022-05-11 19:53:54,328 - INFO - epoch: 0, batch_id: 450, auc:0.577503, loss:0.16823483, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34696 sec, avg_samples: 4096.00000, ips: 11805.51984 ins/s
2022-05-11 19:54:13,481 - INFO - epoch: 0, batch_id: 500, auc:0.580811, loss:0.19309358, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.38248 sec, avg_samples: 4096.00000, ips: 10709.07133 ins/s
2022-05-11 19:54:32,650 - INFO - epoch: 0, batch_id: 550, auc:0.584353, loss:0.19425544, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.38280 sec, avg_samples: 4096.00000, ips: 10700.23452 ins/s
2022-05-11 19:54:51,018 - INFO - epoch: 0, batch_id: 600, auc:0.587535, loss:0.19358435, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.36678 sec, avg_samples: 4096.00000, ips: 11167.49886 ins/s
2022-05-11 19:55:08,682 - INFO - epoch: 0, batch_id: 650, auc:0.590837, loss:0.21790585, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.35272 sec, avg_samples: 4096.00000, ips: 11612.52946 ins/s
2022-05-11 19:55:26,055 - INFO - epoch: 0, batch_id: 700, auc:0.594234, loss:0.19218928, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34689 sec, avg_samples: 4096.00000, ips: 11807.69064 ins/s
2022-05-11 19:55:43,041 - INFO - epoch: 0, batch_id: 750, auc:0.597527, loss:0.20641877, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.33916 sec, avg_samples: 4096.00000, ips: 12076.80625 ins/s
2022-05-11 19:55:59,994 - INFO - epoch: 0, batch_id: 800, auc:0.600670, loss:0.22155708, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.33848 sec, avg_samples: 4096.00000, ips: 12101.22339 ins/s
2022-05-11 19:56:17,091 - INFO - epoch: 0, batch_id: 850, auc:0.603358, loss:0.19764367, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34137 sec, avg_samples: 4096.00000, ips: 11998.85636 ins/s
2022-05-11 19:56:34,397 - INFO - epoch: 0, batch_id: 900, auc:0.605445, loss:0.18218887, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.34556 sec, avg_samples: 4096.00000, ips: 11853.31707 ins/s
2022-05-11 19:56:53,374 - INFO - epoch: 0, batch_id: 950, auc:0.606719, loss:0.20349224, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.37895 sec, avg_samples: 4096.00000, ips: 10808.89367 ins/s
2022-05-11 19:57:12,244 - INFO - epoch: 0, batch_id: 1000, auc:0.608219, loss:0.18338634, avg_reader_cost: 0.00016 sec, avg_batch_cost: 0.37685 sec, avg_samples: 4096.00000, ips: 10868.97179 ins/s
2022-05-11 19:57:30,490 - INFO - epoch: 0, batch_id: 1050, auc:0.610018, loss:0.18991007, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.36437 sec, avg_samples: 4096.00000, ips: 11241.38734 ins/s
2022-05-11 19:57:48,290 - INFO - epoch: 0, batch_id: 1100, auc:0.611764, loss:0.19425409, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.35542 sec, avg_samples: 4096.00000, ips: 11524.47769 ins/s
2022-05-11 19:58:05,738 - INFO - epoch: 0, batch_id: 1150, auc:0.613360, loss:0.18417387, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34839 sec, avg_samples: 4096.00000, ips: 11756.97841 ins/s
2022-05-11 19:58:22,780 - INFO - epoch: 0, batch_id: 1200, auc:0.615447, loss:0.2374034, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34027 sec, avg_samples: 4096.00000, ips: 12037.41497 ins/s
2022-05-11 19:58:39,730 - INFO - epoch: 0, batch_id: 1250, auc:0.616718, loss:0.21474466, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.33845 sec, avg_samples: 4096.00000, ips: 12102.39913 ins/s
2022-05-11 19:58:56,387 - INFO - epoch: 0, batch_id: 1300, auc:0.618325, loss:0.17899244, avg_reader_cost: 0.00016 sec, avg_batch_cost: 0.33259 sec, avg_samples: 4096.00000, ips: 12315.36361 ins/s
2022-05-11 19:59:13,529 - INFO - epoch: 0, batch_id: 1350, auc:0.619961, loss:0.21630415, avg_reader_cost: 0.00015 sec, avg_batch_cost: 0.34231 sec, avg_samples: 4096.00000, ips: 11965.62220 ins/s
2022-05-11 19:59:14,210 - INFO - epoch: 0 done, auc: 0.620026, loss:0.14849854, epoch time: 480.97 s
2022-05-11 19:59:14,386 - INFO - Already save model in output_model_all_dsin/0
```

In [10]

```python
!python ../../../tools/infer.py -m config_bigdata.yaml
```
```
2022-05-11 19:59:48,026 - INFO - **************common.configs**********
2022-05-11 19:59:48,026 - INFO - use_gpu: True, use_xpu: False, use_visual: False, infer_batch_size: 16384, test_data_dir: ../../../datasets/Ali_Display_Ad_Click_DSIN/big_test/, start_epoch: 0, end_epoch: 1, print_interval: 50, model_load_path: output_model_all_dsin
2022-05-11 19:59:48,026 - INFO - **************common.configs**********
W0511 19:59:48.027812  1904 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0511 19:59:48.033318  1904 device_context.cc:465] device: 0, cuDNN Version: 7.6.
2022-05-11 19:59:52,275 - INFO - read data
2022-05-11 19:59:52,276 - INFO - reader path:dsin_reader
2022-05-11 19:59:53,777 - INFO - load model epoch 0
2022-05-11 19:59:53,777 - INFO - start load model from output_model_all_dsin/0
2022-05-11 19:59:54,438 - INFO - epoch: 0, batch_id: 0, auc: 0.628742, avg_reader_cost: 0.00439 sec, avg_batch_cost: 0.01166 sec, avg_samples: 16384.00000, ips: 1239157.77 ins/s
2022-05-11 20:00:02,133 - INFO - epoch: 0 done, auc: 0.635660, epoch time: 8.36 s
```

6. Code Structure and Details

6.1 Code structure (under the work/PaddleRec/models/rank/dsin directory)

```
├── config_bigdata.yaml   # full-data configuration file
├── net.py                # core model network (dynamic/static unified)
├── sequence_layers.py    # network building blocks
├── dsin_reader.py        # data reader
└── dygraph_model.py      # dynamic-graph model wrapper
```

6.2 Parameter description

The training and evaluation parameters can be set in config_bigdata.yaml; the main ones are listed below:

| Parameter                | Default     | Description                                                  |
|--------------------------|-------------|--------------------------------------------------------------|
| runner.train_data_dir    | None        | path to the training data                                    |
| runner.train_reader_path | dsin_reader | path to the training data reader                             |
| runner.use_gpu           | True        | whether to use the GPU                                       |
| runner.use_auc           | True        | whether to compute AUC                                       |
| runner.train_batch_size  | 4096        | training batch size                                          |
| runner.epochs            | 1           | number of training epochs                                    |
| runner.print_interval    | 50          | metrics are printed every print_interval batches during training |

The evaluation parameters are analogous to the training ones and are not expanded here; the model hyperparameters are likewise documented in config_bigdata.yaml.

7. Reproduction Notes

Many problems came up during this reproduction, falling into three areas: obtaining the dataset, aligning the model, and aligning the accuracy. The main difficulty is that the source code is highly modularized, so many details only become clear after reading the source carefully; the framework diagram in the paper gives only a rough idea of the model.

(1) Obtaining the dataset: the dataset used by the paper requires heavy preprocessing, and the raw data is over 23 GB, so a great deal of effort went into the data at the start of the reproduction. This is also how I learned to handle data at this scale: reading it in chunks and splitting the user behavior history into multiple sub-datasets. A minimal sketch of that approach follows.
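Here is a sketch of that chunked processing with pandas; the chunk size, shard count, and output naming are illustrative choices, not the project's exact script:

```python
import os

import pandas as pd

CHUNK_ROWS = 1_000_000  # rows per chunk; illustrative
N_SHARDS = 10           # number of sub-datasets; illustrative

# Stream the ~23 GB behavior log instead of loading it at once, sharding rows
# by user id so that each user's full history lands in the same sub-file.
for chunk in pd.read_csv('behavior_log.csv', chunksize=CHUNK_ROWS):
    for shard, part in chunk.groupby(chunk['user'] % N_SHARDS):
        out = f'behavior_log_shard_{shard}.csv'
        part.to_csv(out, mode='a', header=not os.path.exists(out), index=False)
```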

(2) Aligning the model: as mentioned above, at first I tried to build the model from the framework diagram in the paper alone, and predictably the results were poor. Careful reading of the original code is necessary to align the model properly. (A model summary is a good way to inspect the architecture and each layer's input/output shapes; see the sketch below.)
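One lightweight way to do that inspection is a forward pass on a fake batch, printing the sub-layers and the parameter count. The vocabulary sizes below are tiny dummies chosen just to instantiate the network; the batch shapes follow the reader's output described earlier:

```python
import paddle

import net

# Tiny dummy vocabulary sizes, only to build the network for inspection.
sizes = dict(user_size=100, adgroup_size=100, pid_size=2, cms_segid_size=97,
             cms_group_size=13, final_gender_size=2, age_level_size=7,
             pvalue_level_size=4, shopping_level_size=3, occupation_size=2,
             new_user_class_level_size=5, campaign_size=100, customer_size=100,
             cate_size=100, brand_size=100)
model = net.DSIN_layer(**sizes)

# One fake batch matching the reader's output shapes:
# sparse (N, 15), dense (N,), sessions (N, 10, 10), session lengths (N,)
N = 4
batch = (paddle.randint(0, 2, [N, 15]),
         paddle.rand([N]),
         paddle.randint(0, 2, [N, 10, 10]),
         paddle.full([N], 2, dtype='int64'))
print('output shape:', model(batch).shape)  # expect [4, 1]

# Enumerate sub-layers and count trainable parameters.
for name, layer in model.named_sublayers():
    print(name, '->', layer.__class__.__name__)
print('total parameters:', sum(int(p.size) for p in model.parameters()))
```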

(3) Aligning the accuracy: this took by far the most effort. The key points are to make sure the dataset exactly matches the paper's and that the model is aligned with the original; once those two hold, accuracy alignment essentially follows.

8. Model Information

After training completes, the model and the associated logs are saved in the ./output_model_all_dsin directory.

| Item                 | Description          |
|----------------------|----------------------|
| Publisher            | lfyzzz               |
| Date                 | 2022.5.11            |
| Framework version    | Paddle 2.2.2         |
| Application scenario | recommender systems  |
| Supported hardware   | GPU, CPU             |

