EfficientFormer: A ViT as Fast as MobileNet

EfficientFormer is a pure Transformer model whose design is optimized for mobile devices. The fastest variant, EfficientFormer-L1, reaches 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on an iPhone 12, on par with MobileNetV2×1.4, showing that a properly designed Transformer can deliver low latency and high performance at the same time.




Abstract

Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design choices such as the attention mechanism, ViT-based models are generally several times slower than lightweight convolutional networks. Deploying ViT for real-time applications is therefore especially challenging, particularly on resource-constrained hardware such as mobile devices. Recent work tries to reduce the computation complexity of ViT through network architecture search or hybrid designs with MobileNet blocks, yet the inference speed is still unsatisfactory. This leads to an important question: can a Transformer run as fast as MobileNet while obtaining high performance? To answer it, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure Transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to obtain a series of final models dubbed EfficientFormer. Extensive experiments demonstrate the superiority of EfficientFormer in both performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), running as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1). Our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed Transformers can reach extremely low latency on mobile devices while maintaining high performance.

1. EfficientFormer

1.1 Observations on Lightweight Vision Transformers

From Figure 2 of the paper, the following observations about lightweight vision Transformers can be drawn:

1. Patch embedding with a large kernel and a large stride is a speed bottleneck on mobile devices.
2. Consistent feature dimensions are important when choosing the token mixer; MHSA is not necessarily a speed bottleneck.
3. Conv-BN is more latency-friendly than LN(GN)-Linear, and the resulting accuracy drop is generally acceptable (at inference time, BN can be fused into the preceding convolution via reparameterization; see the sketch after this list).
4. The latency of nonlinear activations depends on the hardware and the compiler.
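As a minimal sketch of the Conv-BN fusion mentioned in point 3 (my addition, not part of the original article; the private attribute names such as _mean, _variance and _epsilon follow current Paddle internals and may differ across versions):

import paddle
import paddle.nn as nn

def fuse_conv_bn(conv, bn):
    # BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta, per output channel.
    # Folding it into the conv gives W' = W * s and b' = (b - mean) * s + beta,
    # with s = gamma / sqrt(var + eps).
    scale = bn.weight / paddle.sqrt(bn._variance + bn._epsilon)
    fused = nn.Conv2D(conv._in_channels, conv._out_channels,
                      kernel_size=conv._kernel_size, stride=conv._stride,
                      padding=conv._padding, groups=conv._groups)
    fused.weight.set_value(conv.weight * scale.reshape((-1, 1, 1, 1)))
    old_bias = conv.bias if conv.bias is not None else paddle.zeros([conv._out_channels])
    fused.bias.set_value((old_bias - bn._mean) * scale + bn.bias)
    return fused  # at inference, fused(x) should match bn(conv(x))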


1.2 EfficientFormer

Based on the observations above, the paper designs a new lightweight vision Transformer, EfficientFormer. At the macro level it consists of two components, Patch Embedding and Meta Transformer Blocks (MB), formulated as:

$$
\mathcal{Y} = \prod_{i}^{m} \mathrm{MB}_{i}\big(\mathrm{PatchEmbed}(\mathcal{X}_{0}^{B,3,H,W})\big), \qquad
\mathcal{X}_{i+1} = \mathrm{MB}_{i}(\mathcal{X}_{i}) = \mathrm{MLP}\big(\mathrm{TokenMixer}(\mathcal{X}_{i})\big)
$$

To capture local features in the early stages, an architecture similar to PoolFormer is used (DWConv actually works better in practice, but the paper aims to present a pure Transformer architecture, so it is not used); to capture global features in the later stages, the original Transformer block is used. To keep feature dimensions consistent, the early stages operate on 4D tensors with convolutional operators, while the later stages operate on 3D token sequences with linear layers.

$\mathrm{MB}^{4D}$:

$$
\begin{aligned}
I_{i} &= \mathrm{Pool}\Big(\mathcal{X}_{i}^{B,\,C_{j},\,\frac{H}{2^{j+1}},\,\frac{W}{2^{j+1}}}\Big) + \mathcal{X}_{i}^{B,\,C_{j},\,\frac{H}{2^{j+1}},\,\frac{W}{2^{j+1}}}, \\
\mathcal{X}_{i+1}^{B,\,C_{j},\,\frac{H}{2^{j+1}},\,\frac{W}{2^{j+1}}} &= \mathrm{Conv}_{B}\big(\mathrm{Conv}_{B,G}(I_{i})\big) + I_{i},
\end{aligned}
$$

where the subscripts $B$ and $G$ indicate that the convolution is followed by BatchNorm and GeLU, respectively.

$\mathrm{MB}^{3D}$:

$$
\begin{aligned}
I_{i} &= \mathrm{Linear}\Big(\mathrm{MHSA}\big(\mathrm{Linear}(\mathrm{LN}(\mathcal{X}_{i}^{B,\,\frac{HW}{4^{j+1}},\,C_{j}}))\big)\Big) + \mathcal{X}_{i}^{B,\,\frac{HW}{4^{j+1}},\,C_{j}}, \\
\mathcal{X}_{i+1}^{B,\,\frac{HW}{4^{j+1}},\,C_{j}} &= \mathrm{Linear}\big(\mathrm{Linear}_{G}(\mathrm{LN}(I_{i}))\big) + I_{i}, \\
\mathrm{MHSA}(Q, K, V) &= \mathrm{Softmax}\Big(\frac{Q \cdot K^{T}}{\sqrt{C_{j}}} + b\Big)\cdot V,
\end{aligned}
$$

where $b$ is a learned attention bias acting as position encoding.


2. Code Reproduction

2.1 Install and Import the Required Libraries

In [ ]

!pip install paddlex

In [ ]

%matplotlib inline
import paddle
import paddle.fluid as fluid
import numpy as np
import matplotlib.pyplot as plt
from paddle.vision.datasets import Cifar10
from paddle.vision.transforms import Transpose
from paddle.io import Dataset, DataLoader
from paddle import nn
import paddle.nn.functional as F
import paddle.vision.transforms as transforms
import os
from matplotlib.pyplot import figure
import paddlex
import itertools

   

2.2 Create the Dataset

In [3]

train_tfm = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomHorizontalFlip(0.5),
    transforms.RandomRotation(20),
    paddlex.transforms.MixupImage(),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

test_tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
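As a quick smoke test (my addition, not in the original notebook), the deterministic test_tfm pipeline should turn an HWC uint8 array into a normalized CHW tensor:

# Hypothetical check: push a fake 32x32 CIFAR-style image through the test pipeline.
dummy = (np.random.rand(32, 32, 3) * 255).astype('uint8')
print(test_tfm(dummy).shape)  # expected: [3, 224, 224]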

In [4]

paddle.vision.set_image_backend('cv2')

# Use the Cifar10 dataset
train_dataset = Cifar10(data_file='data/data152754/cifar-10-python.tar.gz', mode='train', transform=train_tfm)
val_dataset = Cifar10(data_file='data/data152754/cifar-10-python.tar.gz', mode='test', transform=test_tfm)

print("train_dataset: %d" % len(train_dataset))
print("val_dataset: %d" % len(val_dataset))

       

train_dataset: 50000
val_dataset: 10000

In [5]

batch_size=256

In [6]

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, drop_last=True, num_workers=4)
val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False, drop_last=False, num_workers=4)

   

2.3 Model Construction

2.3.1 Label Smoothing

In [7]

class LabelSmoothingCrossEntropy(nn.Layer):
    def __init__(self, smoothing=0.1):
        super().__init__()
        self.smoothing = smoothing

    def forward(self, pred, target):
        confidence = 1. - self.smoothing
        log_probs = F.log_softmax(pred, axis=-1)
        idx = paddle.stack([paddle.arange(log_probs.shape[0]), target], axis=1)
        nll_loss = paddle.gather_nd(-log_probs, index=idx)
        smooth_loss = paddle.mean(-log_probs, axis=-1)
        loss = confidence * nll_loss + self.smoothing * smooth_loss
        return loss.mean()
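A quick way to sanity-check this implementation (my addition): with smoothing=0.0 the loss must reduce to the standard cross entropy, so the two values printed below should match.

paddle.seed(0)
logits = paddle.randn([4, 10])
targets = paddle.to_tensor([1, 3, 5, 7])
print(LabelSmoothingCrossEntropy(smoothing=0.0)(logits, targets).item())
print(F.cross_entropy(logits, paddle.unsqueeze(targets, 1)).item())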

   

2.3.2 DropPath

In [8]

def drop_path(x, drop_prob=0.0, training=False):
    """Drop paths (Stochastic Depth) per sample (when applied in the main path of residual blocks).

    The original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956
    """
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = paddle.to_tensor(1 - drop_prob)
    shape = (paddle.shape(x)[0],) + (1,) * (x.ndim - 1)
    random_tensor = keep_prob + paddle.rand(shape, dtype=x.dtype)
    random_tensor = paddle.floor(random_tensor)  # binarize
    output = x.divide(keep_prob) * random_tensor
    return output


class DropPath(nn.Layer):
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)
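Two properties are worth checking (example of mine): DropPath is the identity at inference time, and in training mode the 1/keep_prob rescaling keeps the expected activation unchanged while zeroing out entire samples.

dp = DropPath(drop_prob=0.5)
x = paddle.ones([8, 4])
dp.eval()
print((dp(x) - x).abs().sum().item())  # 0.0: identity in eval mode
dp.train()
print(dp(x)[:, 0].numpy())  # each sample is either 0.0 or 2.0 (= 1 / keep_prob)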

   

2.3.3 Building the EfficientFormer Model

In [9]

class Attention(nn.Layer):
    def __init__(self, dim=384, key_dim=32, num_heads=8,
                 attn_ratio=4,
                 resolution=7):
        super().__init__()
        self.resolution = resolution
        self.num_heads = num_heads
        self.scale = key_dim ** -0.5
        self.key_dim = key_dim
        self.nh_kd = nh_kd = key_dim * num_heads
        self.d = int(attn_ratio * key_dim)
        self.dh = int(attn_ratio * key_dim) * num_heads
        self.attn_ratio = attn_ratio
        h = self.dh + nh_kd * 2
        self.N = resolution ** 2
        self.N2 = self.N
        self.qkv = nn.Linear(dim, h)
        self.proj = nn.Linear(self.dh, dim)

        # Offsets between every pair of token positions index a learned bias
        # table; this provides the relative position bias b in the MHSA formula.
        points = list(itertools.product(range(self.resolution), range(self.resolution)))
        N = len(points)
        self.N = N
        attention_offsets = {}
        idxs = []
        for p1 in points:
            for p2 in points:
                offset = (abs(p1[0] - p2[0]), abs(p1[1] - p2[1]))
                if offset not in attention_offsets:
                    attention_offsets[offset] = len(attention_offsets)
                idxs.append(attention_offsets[offset])
        self.attention_biases = self.create_parameter((len(attention_offsets), num_heads), default_initializer=nn.initializer.Constant(0.0))
        self.attention_bias_idxs = idxs

    def forward(self, x):  # x (B, N, C)
        B, N, C = x.shape
        qkv = self.qkv(x)
        q, k, v = qkv.reshape((B, N, self.num_heads, -1)).split([self.key_dim, self.key_dim, self.d], axis=3)
        q = q.transpose((0, 2, 1, 3))
        k = k.transpose((0, 2, 1, 3))
        v = v.transpose((0, 2, 1, 3))

        attn = (q @ k.transpose((0, 1, 3, 2))) * self.scale
        attn = attn + self.attention_biases[self.attention_bias_idxs].transpose((1, 0)).reshape((1, self.num_heads, self.N, self.N))
        attn = F.softmax(attn, axis=-1)
        x = (attn @ v).transpose((0, 2, 1, 3)).reshape((B, N, self.dh))
        x = self.proj(x)
        return x
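A short shape walkthrough (my example): with a 224x224 input the last stage works on a 7x7 token grid, so N = 49 and the block maps (B, 49, C) to (B, 49, C); attention_biases plays the role of the learned bias b in the MHSA formula above, indexed by the relative offset between the two token positions.

attn = Attention(dim=448, key_dim=32, num_heads=8, resolution=7)
tokens = paddle.randn([2, 49, 448])  # (B, N, C) with N = 7 * 7
print(attn(tokens).shape)  # [2, 49, 448]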

In [10]

# Conv Stem
def stem(in_chs, out_chs):
    return nn.Sequential(
        nn.Conv2D(in_chs, out_chs // 2, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2D(out_chs // 2),
        nn.ReLU(),
        nn.Conv2D(out_chs // 2, out_chs, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2D(out_chs),
        nn.ReLU())

In [11]

class Embedding(nn.Layer):
    """
    Patch Embedding that is implemented by a layer of conv.
    Input: tensor in shape [B, C, H, W]
    Output: tensor in shape [B, C, H/stride, W/stride]
    """
    def __init__(self, patch_size=16, stride=16, padding=0,
                 in_chans=3, embed_dim=768, norm_layer=nn.BatchNorm2D):
        super().__init__()
        self.proj = nn.Conv2D(in_chans, embed_dim, kernel_size=patch_size,
                              stride=stride, padding=padding)
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()

    def forward(self, x):
        x = self.proj(x)
        x = self.norm(x)
        return x

In [12]

class Flat(nn.Layer):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        # (B, C, H, W) -> (B, H*W, C)
        x = x.flatten(2).transpose((0, 2, 1))
        return x

In [13]

class Pooling(nn.Layer):
    """
    Implementation of pooling for PoolFormer.
    --pool_size: pooling size
    """
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2D(
            pool_size, stride=1, padding=pool_size // 2)

    def forward(self, x):
        return self.pool(x) - x

In [14]

class LinearMlp(nn.Layer):
    """MLP as used in Vision Transformer, MLP-Mixer and related networks."""
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.drop1 = nn.Dropout(drop)
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop2 = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop1(x)
        x = self.fc2(x)
        x = self.drop2(x)
        return x

In [15]

class Mlp(nn.Layer):
    """
    Implementation of MLP with 1*1 convolutions.
    Input: tensor with shape [B, C, H, W]
    """
    def __init__(self, in_features, hidden_features=None,
                 out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Conv2D(in_features, hidden_features, 1)
        self.act = act_layer()
        self.fc2 = nn.Conv2D(hidden_features, out_features, 1)
        self.drop = nn.Dropout(drop)
        self.norm1 = nn.BatchNorm2D(hidden_features)
        self.norm2 = nn.BatchNorm2D(out_features)

    def forward(self, x):
        x = self.fc1(x)
        x = self.norm1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.norm2(x)
        x = self.drop(x)
        return x

In [16]

class Meta3D(nn.Layer):
    def __init__(self, dim, mlp_ratio=4.,
                 act_layer=nn.GELU, norm_layer=nn.LayerNorm,
                 drop=0., drop_path=0.,
                 use_layer_scale=True, layer_scale_init_value=1e-5):
        super().__init__()
        self.norm1 = norm_layer(dim)
        self.token_mixer = Attention(dim)
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = LinearMlp(in_features=dim, hidden_features=mlp_hidden_dim,
                             act_layer=act_layer, drop=drop)
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.use_layer_scale = use_layer_scale
        if use_layer_scale:
            self.layer_scale_1 = self.create_parameter([dim], default_initializer=nn.initializer.Constant(layer_scale_init_value))
            self.layer_scale_2 = self.create_parameter([dim], default_initializer=nn.initializer.Constant(layer_scale_init_value))

    def forward(self, x):
        if self.use_layer_scale:
            x = x + self.drop_path(self.layer_scale_1 * self.token_mixer(self.norm1(x)))
            x = x + self.drop_path(self.layer_scale_2 * self.mlp(self.norm2(x)))
        else:
            x = x + self.drop_path(self.token_mixer(self.norm1(x)))
            x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x

In [17]

class Meta4D(nn.Layer):
    def __init__(self, dim, pool_size=3, mlp_ratio=4.,
                 act_layer=nn.GELU,
                 drop=0., drop_path=0.,
                 use_layer_scale=True, layer_scale_init_value=1e-5):
        super().__init__()
        self.token_mixer = Pooling(pool_size=pool_size)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim,
                       act_layer=act_layer, drop=drop)
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.use_layer_scale = use_layer_scale
        if use_layer_scale:
            self.layer_scale_1 = self.create_parameter([1, dim, 1, 1],
                                 default_initializer=nn.initializer.Constant(layer_scale_init_value))
            self.layer_scale_2 = self.create_parameter([1, dim, 1, 1],
                                 default_initializer=nn.initializer.Constant(layer_scale_init_value))

    def forward(self, x):
        if self.use_layer_scale:
            x = x + self.drop_path(self.layer_scale_1 * self.token_mixer(x))
            x = x + self.drop_path(self.layer_scale_2 * self.mlp(x))
        else:
            x = x + self.drop_path(self.token_mixer(x))
            x = x + self.drop_path(self.mlp(x))
        return x

In [18]

def meta_blocks(dim, index, layers,
                pool_size=3, mlp_ratio=4.,
                act_layer=nn.GELU, norm_layer=nn.LayerNorm,
                drop_rate=.0, drop_path_rate=0.,
                use_layer_scale=True, layer_scale_init_value=1e-5, vit_num=1):
    blocks = []
    if index == 3 and vit_num == layers[index]:
        blocks.append(Flat())
    for block_idx in range(layers[index]):
        block_dpr = drop_path_rate * (
                block_idx + sum(layers[:index])) / (sum(layers) - 1)
        if index == 3 and layers[index] - block_idx <= vit_num:
            blocks.append(Meta3D(
                dim, mlp_ratio=mlp_ratio,
                act_layer=act_layer, norm_layer=norm_layer,
                drop=drop_rate, drop_path=block_dpr,
                use_layer_scale=use_layer_scale,
                layer_scale_init_value=layer_scale_init_value,
            ))
        else:
            blocks.append(Meta4D(
                dim, pool_size=pool_size, mlp_ratio=mlp_ratio,
                act_layer=act_layer,
                drop=drop_rate, drop_path=block_dpr,
                use_layer_scale=use_layer_scale,
                layer_scale_init_value=layer_scale_init_value,
            ))
            if index == 3 and layers[index] - block_idx - 1 == vit_num:
                blocks.append(Flat())
    blocks = nn.Sequential(*blocks)
    return blocks

In [19]

class EfficientFormer(nn.Layer):
    def __init__(self, layers, embed_dims=None,
                 mlp_ratios=4, downsamples=None,
                 pool_size=3,
                 norm_layer=nn.LayerNorm, act_layer=nn.GELU,
                 num_classes=1000,
                 down_patch_size=3, down_stride=2, down_pad=1,
                 drop_rate=0., drop_path_rate=0.,
                 use_layer_scale=True, layer_scale_init_value=1e-5,
                 vit_num=0,
                 distillation=False):
        super().__init__()
        self.num_classes = num_classes
        self.patch_embed = stem(3, embed_dims[0])

        network = []
        for i in range(len(layers)):
            stage = meta_blocks(embed_dims[i], i, layers,
                                pool_size=pool_size, mlp_ratio=mlp_ratios,
                                act_layer=act_layer, norm_layer=norm_layer,
                                drop_rate=drop_rate,
                                drop_path_rate=drop_path_rate,
                                use_layer_scale=use_layer_scale,
                                layer_scale_init_value=layer_scale_init_value,
                                vit_num=vit_num)
            network.append(stage)
            if i >= len(layers) - 1:
                break
            if downsamples[i] or embed_dims[i] != embed_dims[i + 1]:
                # downsampling between two stages
                network.append(
                    Embedding(
                        patch_size=down_patch_size, stride=down_stride,
                        padding=down_pad,
                        in_chans=embed_dims[i], embed_dim=embed_dims[i + 1]
                    )
                )
        self.network = nn.LayerList(network)

        # Classifier head
        self.norm = norm_layer(embed_dims[-1])
        self.head = nn.Linear(embed_dims[-1], num_classes) if num_classes > 0 else nn.Identity()
        self.dist = distillation
        if self.dist:
            self.dist_head = nn.Linear(embed_dims[-1], num_classes) if num_classes > 0 else nn.Identity()

        self.apply(self.cls_init_weights)

    # init for classification
    def cls_init_weights(self, m):
        tn = nn.initializer.TruncatedNormal(std=.02)
        kaiming = nn.initializer.KaimingNormal()
        zero = nn.initializer.Constant(0.)
        one = nn.initializer.Constant(1.)
        if isinstance(m, nn.Linear):
            tn(m.weight)
            if m.bias is not None:
                zero(m.bias)
        if isinstance(m, nn.Conv2D):
            kaiming(m.weight)
            if m.bias is not None:
                zero(m.bias)
        if isinstance(m, (nn.BatchNorm2D, nn.LayerNorm)):
            one(m.weight)
            zero(m.bias)

    def forward_tokens(self, x):
        for block in self.network:
            x = block(x)
        return x

    def forward(self, x):
        x = self.patch_embed(x)
        x = self.forward_tokens(x)
        x = self.norm(x)
        if self.dist:
            cls_out = self.head(x.mean(-2)), self.dist_head(x.mean(-2))
            if not self.training:
                cls_out = (cls_out[0] + cls_out[1]) / 2
        else:
            cls_out = self.head(x.mean(-2))
        # for image classification
        return cls_out

   

2.3.4 Model Configurations

In [20]

EfficientFormer_width = {
    'l1': [48, 96, 224, 448],
    'l3': [64, 128, 320, 512],
    'l7': [96, 192, 384, 768],
}

EfficientFormer_depth = {
    'l1': [3, 2, 6, 4],
    'l3': [4, 4, 12, 6],
    'l7': [6, 6, 18, 8],
}


def efficientformer_l1(pretrained=False, **kwargs):
    model = EfficientFormer(
        layers=EfficientFormer_depth['l1'],
        embed_dims=EfficientFormer_width['l1'],
        downsamples=[True, True, True, True],
        num_classes=10,
        vit_num=1)
    return model


def efficientformer_l3(pretrained=False, **kwargs):
    model = EfficientFormer(
        layers=EfficientFormer_depth['l3'],
        embed_dims=EfficientFormer_width['l3'],
        downsamples=[True, True, True, True],
        num_classes=10,
        vit_num=4)
    return model


def efficientformer_l7(pretrained=False, **kwargs):
    model = EfficientFormer(
        layers=EfficientFormer_depth['l7'],
        embed_dims=EfficientFormer_width['l7'],
        downsamples=[True, True, True, True],
        num_classes=10,
        vit_num=8)
    return model

In [ ]

# EfficientFormer-L1
model = efficientformer_l1()
paddle.summary(model, (1, 3, 224, 224))

   


In [ ]

# EfficientFormer-L3
model = efficientformer_l3()
paddle.summary(model, (1, 3, 224, 224))

   


In [ ]

# EfficientFormer-L7
model = efficientformer_l7()
paddle.summary(model, (1, 3, 224, 224))

   


2.4 Training

In [24]

learning_rate = 0.001
n_epochs = 100
paddle.seed(42)
np.random.seed(42)

In [ ]

work_path = 'work/model'

# EfficientFormer-L1
model = efficientformer_l1()

criterion = LabelSmoothingCrossEntropy()
scheduler = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=learning_rate, T_max=50000 // batch_size * n_epochs, verbose=False)
optimizer = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=scheduler, weight_decay=1e-5)

gate = 0.0
threshold = 0.0
best_acc = 0.0
val_acc = 0.0
loss_record = {'train': {'loss': [], 'iter': []}, 'val': {'loss': [], 'iter': []}}  # for recording loss
acc_record = {'train': {'acc': [], 'iter': []}, 'val': {'acc': [], 'iter': []}}     # for recording accuracy
loss_iter = 0
acc_iter = 0

for epoch in range(n_epochs):
    # ---------- Training ----------
    model.train()
    train_num = 0.0
    train_loss = 0.0
    val_num = 0.0
    val_loss = 0.0
    accuracy_manager = paddle.metric.Accuracy()
    val_accuracy_manager = paddle.metric.Accuracy()
    print("#===epoch: {}, lr={:.10f}===#".format(epoch, optimizer.get_lr()))
    for batch_id, data in enumerate(train_loader):
        x_data, y_data = data
        labels = paddle.unsqueeze(y_data, axis=1)
        logits = model(x_data)
        loss = criterion(logits, y_data)
        acc = paddle.metric.accuracy(logits, labels)
        accuracy_manager.update(acc)
        if batch_id % 10 == 0:
            loss_record['train']['loss'].append(loss.numpy())
            loss_record['train']['iter'].append(loss_iter)
            loss_iter += 1
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.clear_grad()
        train_loss += loss
        train_num += len(y_data)
    total_train_loss = (train_loss / train_num) * batch_size
    train_acc = accuracy_manager.accumulate()
    acc_record['train']['acc'].append(train_acc)
    acc_record['train']['iter'].append(acc_iter)
    acc_iter += 1
    # Print the information.
    print("#===epoch: {}, train loss is: {}, train acc is: {:2.2f}%===#".format(epoch, total_train_loss.numpy(), train_acc * 100))

    # ---------- Validation ----------
    model.eval()
    for batch_id, data in enumerate(val_loader):
        x_data, y_data = data
        labels = paddle.unsqueeze(y_data, axis=1)
        with paddle.no_grad():
            logits = model(x_data)
        loss = criterion(logits, y_data)
        acc = paddle.metric.accuracy(logits, labels)
        val_accuracy_manager.update(acc)
        val_loss += loss
        val_num += len(y_data)
    total_val_loss = (val_loss / val_num) * batch_size
    loss_record['val']['loss'].append(total_val_loss.numpy())
    loss_record['val']['iter'].append(loss_iter)
    val_acc = val_accuracy_manager.accumulate()
    acc_record['val']['acc'].append(val_acc)
    acc_record['val']['iter'].append(acc_iter)
    print("#===epoch: {}, val loss is: {}, val acc is: {:2.2f}%===#".format(epoch, total_val_loss.numpy(), val_acc * 100))

    # ===================save====================
    if val_acc > best_acc:
        best_acc = val_acc
        paddle.save(model.state_dict(), os.path.join(work_path, 'best_model.pdparams'))
        paddle.save(optimizer.state_dict(), os.path.join(work_path, 'best_optimizer.pdopt'))

print(best_acc)
paddle.save(model.state_dict(), os.path.join(work_path, 'final_model.pdparams'))
paddle.save(optimizer.state_dict(), os.path.join(work_path, 'final_optimizer.pdopt'))

   


2.5 Result Analysis

In [26]

def plot_learning_curve(record, title='loss', ylabel='CE Loss'):
    '''Plot learning curve of your CNN.'''
    maxtrain = max(map(float, record['train'][title]))
    maxval = max(map(float, record['val'][title]))
    ymax = max(maxtrain, maxval) * 1.1
    mintrain = min(map(float, record['train'][title]))
    minval = min(map(float, record['val'][title]))
    ymin = min(mintrain, minval) * 0.9
    total_steps = len(record['train'][title])
    x_1 = list(map(int, record['train']['iter']))
    x_2 = list(map(int, record['val']['iter']))
    figure(figsize=(10, 6))
    plt.plot(x_1, record['train'][title], c='tab:red', label='train')
    plt.plot(x_2, record['val'][title], c='tab:cyan', label='val')
    plt.ylim(ymin, ymax)
    plt.xlabel('Training steps')
    plt.ylabel(ylabel)
    plt.title('Learning curve of {}'.format(title))
    plt.legend()
    plt.show()

In [27]

plot_learning_curve(loss_record, title='loss', ylabel='CE Loss')

       

In [28]

plot_learning_curve(acc_record, title='acc', ylabel='Accuracy')

       

In [29]

import time

work_path = 'work/model'
model = efficientformer_l1()
model_state_dict = paddle.load(os.path.join(work_path, 'best_model.pdparams'))
model.set_state_dict(model_state_dict)
model.eval()

aa = time.time()
for batch_id, data in enumerate(val_loader):
    x_data, y_data = data
    labels = paddle.unsqueeze(y_data, axis=1)
    with paddle.no_grad():
        logits = model(x_data)
bb = time.time()
print("Throughput:{}".format(int(len(val_dataset) // (bb - aa))))

       

Throughput:856

In [30]

def get_cifar10_labels(labels):
    """Return the text labels for the CIFAR10 dataset."""
    text_labels = [
        'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog',
        'horse', 'ship', 'truck']
    return [text_labels[int(i)] for i in labels]

In [31]

def show_images(imgs, num_rows, num_cols, pred=None, gt=None, scale=1.5):
    """Plot a list of images."""
    figsize = (num_cols * scale, num_rows * scale)
    _, axes = plt.subplots(num_rows, num_cols, figsize=figsize)
    axes = axes.flatten()
    for i, (ax, img) in enumerate(zip(axes, imgs)):
        if paddle.is_tensor(img):
            ax.imshow(img.numpy())
        else:
            ax.imshow(img)
        ax.axes.get_xaxis().set_visible(False)
        ax.axes.get_yaxis().set_visible(False)
        if pred or gt:
            ax.set_title("pt: " + pred[i] + "\ngt: " + gt[i])
    return axes

In [32]

work_path = 'work/model'
X, y = next(iter(DataLoader(val_dataset, batch_size=18)))
model = efficientformer_l1()
model_state_dict = paddle.load(os.path.join(work_path, 'best_model.pdparams'))
model.set_state_dict(model_state_dict)
model.eval()

logits = model(X)
y_pred = paddle.argmax(logits, -1)
X = paddle.transpose(X, [0, 2, 3, 1])
axes = show_images(X.reshape((18, 224, 224, 3)), 1, 18, pred=get_cifar10_labels(y_pred), gt=get_cifar10_labels(y))
plt.show()

       

Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). (printed once per image, 18 times in total)
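The wall of clipping warnings appears because the tensors passed to imshow are still ImageNet-normalized, so their values fall outside [0, 1]. A possible fix (my suggestion, not applied in the original run) is to undo Normalize before calling show_images:

mean = paddle.to_tensor([0.485, 0.456, 0.406])
std = paddle.to_tensor([0.229, 0.224, 0.225])
X_vis = X * std + mean  # X is (B, H, W, C) after the transpose, so the stats broadcast over the channel axis
axes = show_images(X_vis.reshape((18, 224, 224, 3)), 1, 18, pred=get_cifar10_labels(y_pred), gt=get_cifar10_labels(y))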

       

In [ ]

!pip install interpretdl

In [34]

import interpretdl as it

In [35]

work_path = 'work/model'
model = efficientformer_l1()
model_state_dict = paddle.load(os.path.join(work_path, 'best_model.pdparams'))
model.set_state_dict(model_state_dict)

In [36]

X, y = next(iter(DataLoader(val_dataset, batch_size=18)))
lime = it.LIMECVInterpreter(model)

In [44]

lime_weights = lime.interpret(X.numpy()[3], interpret_class=y.numpy()[3], batch_size=100, num_samples=10000, visual=True)

       

100%|██████████| 10000/10000 [00:50<00:00, 196.29it/s]

       

               
