J4 - Combining ResNet and DenseNet

  • 🍨 This post is a learning-log entry for the 🔗365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊 | tutoring and custom projects available

Contents

  • Environment
  • Model Design
  • Results
  • Summary and Reflections


Environment

  • OS: Linux
  • Language: Python 3.8.10
  • Deep learning framework: PyTorch 2.0.0+cu118
  • GPU: RTX 2080 Ti

Model Design

The original DenseNet architecture:
[Figure: DenseNet architecture]
The original ResNet architecture:
[Figure: ResNet architecture]
Comparing the two shows that ResNet's bottleneck (identity) block passes through three conv layers, while DenseNet's dense layer uses only two. So the dense layer is rewritten in ResNet's three-conv style and then tested.

import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import OrderedDict

# Residual block in BN -> ReLU -> Conv (pre-activation) order.
# Unlike a true ResNet block, the output is concatenated with the input
# (DenseNet-style) instead of being added to it.
class ResidualBlock(nn.Sequential):
    def __init__(self, kernel_size, input_size, hidden_size, drop_rate):
        super().__init__()

        # 1x1 bottleneck conv: input_size -> hidden_size
        self.add_module('norm1', nn.BatchNorm2d(input_size))
        self.add_module('relu1', nn.ReLU(inplace=True))
        self.add_module('conv1', nn.Conv2d(input_size, hidden_size, kernel_size=1, bias=False))

        # kxk conv; 'same' padding keeps the spatial size
        self.add_module('norm2', nn.BatchNorm2d(hidden_size))
        self.add_module('relu2', nn.ReLU(inplace=True))
        self.add_module('conv2', nn.Conv2d(hidden_size, hidden_size, kernel_size=kernel_size, padding='same', bias=False))

        # 1x1 conv expanding back to input_size channels
        self.add_module('norm3', nn.BatchNorm2d(hidden_size))
        self.add_module('relu3', nn.ReLU(inplace=True))
        self.add_module('conv3', nn.Conv2d(hidden_size, input_size, kernel_size=1, bias=False))

        self.drop_rate = drop_rate

    def forward(self, x):
        features = super().forward(x)
        if self.drop_rate > 0:
            features = F.dropout(features, p=self.drop_rate, training=self.training)
        # Concatenate instead of add, so the channel count doubles
        return torch.cat([x, features], 1)
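
A quick shape check of the block (a sketch, assuming the imports above): because the output is concatenated with the input, the channel count doubles, which matches denselayer1 in the structure printout further down.

rb = ResidualBlock(kernel_size=3, input_size=64, hidden_size=16, drop_rate=0.0)
print(rb(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 56, 56])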

class DenseBlock(nn.Sequential):
    def __init__(self, num_layers, input_size, drop_rate):
        super().__init__()
        for i in range(num_layers):
            layer = ResidualBlock(3, input_size, input_size // 4, drop_rate)
            input_size *= 2  # each layer concatenates with its input, doubling the channels
            self.add_module('denselayer%d' % (i + 1), layer)
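
So a block with num_layers layers multiplies the channel count by 2**num_layers. A small check (again a sketch), matching denseblock1 in the printout:

db = DenseBlock(num_layers=2, input_size=64, drop_rate=0.0)
print(db(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])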

# The transition layer is unchanged from the original DenseNet:
# a 1x1 conv to compress channels, followed by 2x2 average pooling.
class Transition(nn.Sequential):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.add_module('norm', nn.BatchNorm2d(input_size))
        self.add_module('relu', nn.ReLU())
        self.add_module('conv', nn.Conv2d(input_size, output_size, kernel_size=1, stride=1, bias=False))
        self.add_module('pool', nn.AvgPool2d(2, stride=2))
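
A shape check (sketch): with compression_rate=0.5 the transition halves both the channels and the spatial size, as transition1 in the printout shows.

t = Transition(input_size=256, output_size=128)
print(t(torch.randn(1, 256, 56, 56)).shape)  # torch.Size([1, 128, 28, 28])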

# Build the custom DenseNet
class DenseNet(nn.Module):
    # Keep the model small to make testing easier. growth_rate and bn_size
    # are retained for interface compatibility but unused here: each
    # ResidualBlock doubles its channels instead of adding growth_rate of them.
    def __init__(self, growth_rate=32, block_config=(2, 4, 3, 2),
                 init_size=64, bn_size=4, compression_rate=0.5, drop_rate=0, num_classes=1000):
        super().__init__()
        
        self.features = nn.Sequential(OrderedDict([
            ("conv0", nn.Conv2d(3, init_size, kernel_size=7, stride=2, padding=3, bias=False)),
            ('norm0', nn.BatchNorm2d(init_size)),
            ('relu0', nn.ReLU()),
            ('pool0', nn.MaxPool2d(3, stride=2, padding=1))
        ]))
        
        num_features = init_size
        for i, num_layers in enumerate(block_config):
            block = DenseBlock(num_layers, num_features, drop_rate)
            self.features.add_module('denseblock%d' % (i + 1), block)
            num_features = num_features*(2**num_layers)
            if i != len(block_config) - 1:
                transition = Transition(num_features, int(num_features*compression_rate))
                self.features.add_module('transition%d' % (i + 1), transition)
                num_features = int(num_features * compression_rate)
                
        self.features.add_module('norm5', nn.BatchNorm2d(num_features))
        self.features.add_module('relu5', nn.ReLU())
        
        self.classifier = nn.Linear(num_features, num_classes)
        
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.bias, 0)
                nn.init.constant_(m.weight, 1)
            elif isinstance(m, nn.Linear):
                nn.init.constant_(m.bias, 0)
                
    def forward(self, x):
        features = self.features(x)
        out = F.avg_pool2d(features, 7, stride=1).view(features.size(0), -1)
        out = self.classifier(out)
        return out
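
Instantiate the model and run a quick smoke test (a sketch: num_classes=2 matches the two-class classifier in the printout below, and the 224x224 input size is taken from the torchinfo summary):

model = DenseNet(num_classes=2)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])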

Print the model structure:

DenseNet(
  (features): Sequential(
    (conv0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu0): ReLU()
    (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (denseblock1): DenseBlock(
      (denselayer1): ResidualBlock(
        (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
      (denselayer2): ResidualBlock(
        (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (transition1): Transition(
      (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
    (denseblock2): DenseBlock(
      (denselayer1): ResidualBlock(
        (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
      (denselayer2): ResidualBlock(
        (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
      (denselayer3): ResidualBlock(
        (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
      (denselayer4): ResidualBlock(
        (norm1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (transition2): Transition(
      (norm): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (conv): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
    (denseblock3): DenseBlock(
      (denselayer1): ResidualBlock(
        (norm1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
      (denselayer2): ResidualBlock(
        (norm1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
      (denselayer3): ResidualBlock(
        (norm1): BatchNorm2d(4096, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(4096, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(1024, 4096, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (transition3): Transition(
      (norm): BatchNorm2d(8192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
      (conv): Conv2d(8192, 4096, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
    (denseblock4): DenseBlock(
      (denselayer1): ResidualBlock(
        (norm1): BatchNorm2d(4096, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(4096, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(1024, 4096, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
      (denselayer2): ResidualBlock(
        (norm1): BatchNorm2d(8192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(8192, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=same, bias=False)
        (norm3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu3): ReLU(inplace=True)
        (conv3): Conv2d(2048, 8192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (norm5): BatchNorm2d(16384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu5): ReLU()
  )
  (classifier): Linear(in_features=16384, out_features=2, bias=True)
)
# Print a summary with torchinfo
from torchinfo import summary

summary(model, input_size=(32, 3, 224, 224))
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
DenseNet                                 [32, 2]                   --
├─Sequential: 1-1                        [32, 16384, 7, 7]         --
│    └─Conv2d: 2-1                       [32, 64, 112, 112]        9,408
│    └─BatchNorm2d: 2-2                  [32, 64, 112, 112]        128
│    └─ReLU: 2-3                         [32, 64, 112, 112]        --
│    └─MaxPool2d: 2-4                    [32, 64, 56, 56]          --
│    └─DenseBlock: 2-5                   [32, 256, 56, 56]         --
│    │    └─ResidualBlock: 3-1           [32, 128, 56, 56]         4,544
│    │    └─ResidualBlock: 3-2           [32, 256, 56, 56]         17,792
│    └─Transition: 2-6                   [32, 128, 28, 28]         --
│    │    └─BatchNorm2d: 3-3             [32, 256, 56, 56]         512
│    │    └─ReLU: 3-4                    [32, 256, 56, 56]         --
│    │    └─Conv2d: 3-5                  [32, 128, 56, 56]         32,768
│    │    └─AvgPool2d: 3-6               [32, 128, 28, 28]         --
│    └─DenseBlock: 2-7                   [32, 2048, 28, 28]        --
│    │    └─ResidualBlock: 3-7           [32, 256, 28, 28]         17,792
│    │    └─ResidualBlock: 3-8           [32, 512, 28, 28]         70,400
│    │    └─ResidualBlock: 3-9           [32, 1024, 28, 28]        280,064
│    │    └─ResidualBlock: 3-10          [32, 2048, 28, 28]        1,117,184
│    └─Transition: 2-8                   [32, 1024, 14, 14]        --
│    │    └─BatchNorm2d: 3-11            [32, 2048, 28, 28]        4,096
│    │    └─ReLU: 3-12                   [32, 2048, 28, 28]        --
│    │    └─Conv2d: 3-13                 [32, 1024, 28, 28]        2,097,152
│    │    └─AvgPool2d: 3-14              [32, 1024, 14, 14]        --
│    └─DenseBlock: 2-9                   [32, 8192, 14, 14]        --
│    │    └─ResidualBlock: 3-15          [32, 2048, 14, 14]        1,117,184
│    │    └─ResidualBlock: 3-16          [32, 4096, 14, 14]        4,462,592
│    │    └─ResidualBlock: 3-17          [32, 8192, 14, 14]        17,838,080
│    └─Transition: 2-10                  [32, 4096, 7, 7]          --
│    │    └─BatchNorm2d: 3-18            [32, 8192, 14, 14]        16,384
│    │    └─ReLU: 3-19                   [32, 8192, 14, 14]        --
│    │    └─Conv2d: 3-20                 [32, 4096, 14, 14]        33,554,432
│    │    └─AvgPool2d: 3-21              [32, 4096, 7, 7]          --
│    └─DenseBlock: 2-11                  [32, 16384, 7, 7]         --
│    │    └─ResidualBlock: 3-22          [32, 8192, 7, 7]          17,838,080
│    │    └─ResidualBlock: 3-23          [32, 16384, 7, 7]         71,327,744
│    └─BatchNorm2d: 2-12                 [32, 16384, 7, 7]         32,768
│    └─ReLU: 2-13                        [32, 16384, 7, 7]         --
├─Linear: 1-2                            [32, 2]                   32,770
==========================================================================================
Total params: 149,871,874
Trainable params: 149,871,874
Non-trainable params: 0
Total mult-adds (G): 595.94
==========================================================================================
Input size (MB): 19.27
Forward/backward pass size (MB): 5317.85
Params size (MB): 599.49
Estimated Total Size (MB): 5936.61
==========================================================================================
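
The parameter explosion visible in the summary follows directly from the channel bookkeeping: each ResidualBlock doubles the channels, so a block with n layers multiplies them by 2**n, while each transition only halves them back. A quick sketch of that arithmetic, matching the defaults above:

channels = 64                                    # init_size after the stem
for i, num_layers in enumerate((2, 4, 3, 2)):    # block_config
    channels *= 2 ** num_layers                  # every layer doubles channels via concatenation
    if i != 3:                                   # a transition follows all but the last block
        channels //= 2                           # compression_rate = 0.5
print(channels)                                  # 16384 -- the classifier input seen above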

Results

Epoch: 1, Train_acc:83.8, Train_loss: 0.392, Test_acc: 86.8, Test_loss: 0.324, Lr: 1.00E-04
Epoch: 2, Train_acc:86.8, Train_loss: 0.327, Test_acc: 88.5, Test_loss: 0.291, Lr: 1.00E-04
Epoch: 3, Train_acc:88.1, Train_loss: 0.290, Test_acc: 87.7, Test_loss: 0.415, Lr: 1.00E-04
Epoch: 4, Train_acc:88.1, Train_loss: 0.287, Test_acc: 89.8, Test_loss: 0.249, Lr: 1.00E-04
Epoch: 5, Train_acc:89.7, Train_loss: 0.251, Test_acc: 90.5, Test_loss: 0.235, Lr: 1.00E-04
Epoch: 6, Train_acc:90.2, Train_loss: 0.241, Test_acc: 90.7, Test_loss: 0.253, Lr: 1.00E-04
Epoch: 7, Train_acc:90.6, Train_loss: 0.227, Test_acc: 90.5, Test_loss: 0.236, Lr: 1.00E-04
Epoch: 8, Train_acc:91.5, Train_loss: 0.212, Test_acc: 90.5, Test_loss: 0.228, Lr: 1.00E-04
Epoch: 9, Train_acc:91.7, Train_loss: 0.207, Test_acc: 91.0, Test_loss: 0.247, Lr: 1.00E-04
Epoch:10, Train_acc:92.0, Train_loss: 0.206, Test_acc: 91.2, Test_loss: 0.290, Lr: 1.00E-04
Epoch:11, Train_acc:92.0, Train_loss: 0.203, Test_acc: 88.2, Test_loss: 0.283, Lr: 1.00E-04
Epoch:12, Train_acc:92.5, Train_loss: 0.185, Test_acc: 91.3, Test_loss: 0.232, Lr: 1.00E-04
Epoch:13, Train_acc:93.2, Train_loss: 0.172, Test_acc: 90.7, Test_loss: 0.247, Lr: 1.00E-04
Epoch:14, Train_acc:93.3, Train_loss: 0.177, Test_acc: 90.2, Test_loss: 0.238, Lr: 1.00E-04
Epoch:15, Train_acc:93.8, Train_loss: 0.166, Test_acc: 90.1, Test_loss: 0.357, Lr: 1.00E-04
Epoch:16, Train_acc:94.6, Train_loss: 0.146, Test_acc: 91.2, Test_loss: 0.255, Lr: 1.00E-04
Epoch:17, Train_acc:95.4, Train_loss: 0.119, Test_acc: 90.2, Test_loss: 0.270, Lr: 1.00E-04
Epoch:18, Train_acc:95.5, Train_loss: 0.116, Test_acc: 81.7, Test_loss: 0.752, Lr: 1.00E-04
Epoch:19, Train_acc:95.6, Train_loss: 0.117, Test_acc: 89.3, Test_loss: 0.339, Lr: 1.00E-04
Epoch:20, Train_acc:95.5, Train_loss: 0.120, Test_acc: 91.0, Test_loss: 0.285, Lr: 1.00E-04
Done

[Figure: training results]

[Figure: evaluation results]

Summary and Reflections

Although the model was scaled down substantially, its total parameter count is still several times that of DenseNet-121. Even so, it appears to generalize noticeably better than DenseNet-121: the gap between training and test accuracy is much smaller, and the test set occasionally even outperforms the training set. Building DenseNet directly from ResNet-style ResidualBlocks makes the parameter count balloon quickly, because every layer doubles the channel count. The next round of improvements should therefore focus on compressing this DenseNet variant's parameter count.
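
One concrete direction (a sketch, not tested code from this post): keep the three-conv pre-activation layout but have the last 1x1 conv emit a fixed growth_rate channels, as in the original DenseNet, so that concatenation grows channels linearly rather than exponentially. The class name GrowthResidualBlock below is hypothetical.

class GrowthResidualBlock(nn.Sequential):
    def __init__(self, kernel_size, input_size, hidden_size, growth_rate, drop_rate):
        super().__init__()
        self.add_module('norm1', nn.BatchNorm2d(input_size))
        self.add_module('relu1', nn.ReLU(inplace=True))
        self.add_module('conv1', nn.Conv2d(input_size, hidden_size, kernel_size=1, bias=False))
        self.add_module('norm2', nn.BatchNorm2d(hidden_size))
        self.add_module('relu2', nn.ReLU(inplace=True))
        self.add_module('conv2', nn.Conv2d(hidden_size, hidden_size, kernel_size=kernel_size, padding='same', bias=False))
        self.add_module('norm3', nn.BatchNorm2d(hidden_size))
        self.add_module('relu3', nn.ReLU(inplace=True))
        # The only change: emit growth_rate channels instead of input_size,
        # so each layer adds growth_rate channels rather than doubling them.
        self.add_module('conv3', nn.Conv2d(hidden_size, growth_rate, kernel_size=1, bias=False))
        self.drop_rate = drop_rate

    def forward(self, x):
        features = super().forward(x)
        if self.drop_rate > 0:
            features = F.dropout(features, p=self.drop_rate, training=self.training)
        return torch.cat([x, features], 1)

With this block, a dense block of num_layers layers ends at input_size + num_layers * growth_rate channels (e.g. 64 + 4 * 32 = 192) instead of input_size * 2**num_layers.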

