Building the LDNet Segmentation Model

Original paper: https://arxiv.org/abs/2110.09103
Source code: https://github.com/unilight/LDNet

Let's get straight into it~~~

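The code below assumes the usual imports used by the original repository; a minimal header would be:

import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange
# res2net50_v1b_26w_4s and ConvBlock (used further down) come from the original
# repository; the exact module path is not shown in this post.
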
I. The ESA_blcok module

1. The PPM module

class PPM(nn.Module):
    def __init__(self, pooling_sizes=(1, 3, 5)):
        super().__init__()
        self.layer = nn.ModuleList([nn.AdaptiveAvgPool2d(output_size=(size,size)) for size in pooling_sizes])

    def forward(self, feat):
        b, c, h, w = feat.shape # 4, 512, 320, 320
        output = [layer(feat).view(b, c, -1) for layer in self.layer]
        output = torch.cat(output, dim=-1) # 4, 512, 35
        return output
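
A quick shape check with hypothetical sizes shows how PPM compresses the spatial grid into 1*1 + 3*3 + 5*5 = 35 tokens per channel:

# hypothetical smoke test for PPM
ppm = PPM(pooling_sizes=(1, 3, 5))
feat = torch.randn(4, 512, 320, 320)
print(ppm(feat).shape)  # torch.Size([4, 512, 35])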

2. The ESA_layer module

class ESA_layer(nn.Module):
    def __init__(self, dim, heads = 8, dim_head = 64, dropout = 0.):
        super().__init__()
        inner_dim = dim_head * heads # 512
        project_out = not (heads == 1 and dim_head == dim)

        self.heads = heads
        self.scale = dim_head ** -0.5 # 1/8

        self.attend = nn.Softmax(dim=-1)
        self.to_qkv = nn.Conv2d(dim, inner_dim * 3, kernel_size=1, stride=1, padding=0, bias=False)
        self.ppm = PPM(pooling_sizes=(1, 3, 5))
        self.to_out = nn.Sequential(
            nn.Linear(inner_dim, dim),
            nn.Dropout(dropout)
        ) if project_out else nn.Identity()

    def forward(self, x):
        # input x (b, c, h, w)
        b, c, h, w = x.shape # assume the input is 4, 3, 320, 320
        # .chunk splits the tensor into equal parts along dim=1
        q, k, v = self.to_qkv(x).chunk(3, dim=1)  # q/k/v shape: 4, 512, 320, 320
        q = rearrange(q, 'b (head d) h w -> b head (h w) d', head=self.heads)   # q shape: 4, 8, 320*320, 64

        k, v = self.ppm(k), self.ppm(v)  # k/v shape: 4, 512, 35
        k = rearrange(k, 'b (head d) n -> b head n d', head=self.heads) # k shape: 4, 8, 35, 64
        v = rearrange(v, 'b (head d) n -> b head n d', head=self.heads) # v shape: 4, 8, 35, 64

        # transpose the last two dims of k (4, 8, 64, 35) before the matmul
        dots = torch.matmul(q, k.transpose(-1, -2)) * self.scale  # 4, 8, 320*320, 35
        attn = self.attend(dots) # 4, 8, 320*320, 35

        out = torch.matmul(attn, v) # 4, 8, 320*320, 64
        out = rearrange(out, 'b head n d -> b n (head d)') # 4, 320*320, 512
        out = self.to_out(out) # 4, 320*320, 3
        return out
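
Because K and V are first compressed by the PPM, the attention matrix dots has shape (b, heads, H*W, 35) instead of (b, heads, H*W, H*W), so the cost of the attention grows linearly rather than quadratically with the number of pixels. That is what makes this self-attention "efficient" at a 320x320 resolution.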

3. The ESA_blcok module

class PreNorm(nn.Module):
    def __init__(self, dim, fn):
        super().__init__()
        self.norm = nn.LayerNorm(dim) # LayerNorm normalizes over the last (channel) dimension, not over the batch
        self.fn = fn # FeedForward

    def forward(self, x, **kwargs):
        return self.fn(self.norm(x), **kwargs) # 4, 320*320, 3


class FeedForward(nn.Module):
    def __init__(self, dim, hidden_dim, dropout = 0.):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, dim),
            nn.Dropout(dropout)
        )
    def forward(self, x):  # 4, 320*320, 3 -- 4, 320*320, 512 -- 4, 320*320, 3
        return self.net(x)


class ESA_blcok(nn.Module):
    def __init__(self, dim, heads=8, dim_head=64, mlp_dim=512, dropout = 0.):
        super().__init__()
        self.ESAlayer = ESA_layer(dim, heads=heads, dim_head=dim_head, dropout=dropout)
        self.ff = PreNorm(dim, FeedForward(dim, mlp_dim, dropout = dropout))
       

    def forward(self, x):
        b, c, h, w = x.shape # assume the input is 4, 3, 320, 320
        out = rearrange(x, 'b c h w -> b (h w) c') # 4, 320*320, 3
        out = self.ESAlayer(x) + out # 4, 320*320, 3
        out = self.ff(out) + out # 4, 320*320, 3
        out = rearrange(out, 'b (h w) c -> b c h w', h=h) # 4, 3, 320, 320

        return out
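
A minimal usage sketch with hypothetical sizes; the block keeps the input shape, so it can be dropped between any two stages:

# hypothetical smoke test for ESA_blcok
esa = ESA_blcok(dim=64)
x = torch.randn(2, 64, 28, 28)
print(esa(x).shape)  # torch.Size([2, 64, 28, 28]) -- same shape as the input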


II. The LCA_blcok module

1. The MaskAveragePooling module

def MaskAveragePooling(x, mask):
    mask = torch.sigmoid(mask) # mask shape:4, 1, 320, 320
    b, c, h, w = x.shape # 4, 512, 320, 320
    eps = 0.0005
    x_mask = x * mask # 4, 512, 320, 320
    h, w = x.shape[2], x.shape[3]
    area = F.avg_pool2d(mask, (h, w)) * h * w + eps # 4, 1, 1, 1
    m = F.avg_pool2d(x_mask, (h, w)) # 4, 512, 1, 1
    x_feat = m * h * w / area # 4, 512, 1, 1
    x_feat = x_feat.view(b, c, -1) # 4, 512, 1
    return x_feat # 4, 512, 1
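
MaskAveragePooling is simply a masked global average: per channel it computes sum(x * sigmoid(mask)) / (sum(sigmoid(mask)) + eps), i.e. the mean feature inside the (soft) lesion region. A hypothetical numerical check:

# hypothetical check of the masked-average interpretation
x = torch.randn(4, 512, 320, 320)
mask = torch.randn(4, 1, 320, 320)                 # raw single-channel logits
feat = MaskAveragePooling(x, mask)                 # 4, 512, 1
m = torch.sigmoid(mask)
ref = (x * m).sum(dim=(2, 3)) / (m.sum(dim=(2, 3)) + 0.0005)
print(torch.allclose(feat.squeeze(-1), ref, atol=1e-3))  # True (up to float error)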

2. The LCA_layer module

class LCA_layer(nn.Module):
    def __init__(self, dim, heads = 8, dim_head = 64, dropout = 0.):
        super().__init__()
        inner_dim = dim_head * heads # 512
        project_out = not (heads == 1 and dim_head == dim)
        self.heads = heads
        self.scale = dim_head ** -0.5

        self.attend = nn.Softmax(dim=-1)
        self.to_qkv = nn.Conv2d(dim, inner_dim * 3, kernel_size=1, stride=1, padding=0, bias=False)
        self.to_out = nn.Sequential(
            nn.Linear(inner_dim, dim),
            nn.Dropout(dropout)
        ) if project_out else nn.Identity()

    def forward(self, x, mask):
        # input x (b, c, h, w)
        b, c, h, w = x.shape # assume the input is 4, 3, 320, 320
        q, k, v = self.to_qkv(x).chunk(3, dim=1)  # q/k/v shape: 4, 512, 320, 320
        q = rearrange(q, 'b (head d) h w -> b head (h w) d', head=self.heads)  # q shape: 4, 8, 320*320, 64

        k, v = MaskAveragePooling(k, mask), MaskAveragePooling(v, mask)  # k/v shape: 4, 512, 1
        k = rearrange(k, 'b (head d) n -> b head n d', head=self.heads)  # k shape: (b, head, 1, d) 4, 8, 1, 64
        v = rearrange(v, 'b (head d) n -> b head n d', head=self.heads)  # v shape: (b, head, 1, d) 4, 8, 1, 64

        dots = torch.matmul(q, k.transpose(-1, -2)) * self.scale  # shape: (b, head, n_q, n_kv) 4, 8, 320*320, 1

        attn = self.attend(dots) # 4, 8, 320*320, 1

        out = torch.matmul(attn, v)  # shape: (b, head, n_q, d) 4, 8, 320*320, 64
        out = rearrange(out, 'b head n d -> b n (head d)') # 4, 320*320, 512
        return self.to_out(out) # 4, 320*320, 3
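
Compared with ESA_layer, the only change is that K and V are collapsed by MaskAveragePooling into a single token summarizing the lesion region, so every pixel attends to exactly one key. Note that the softmax over a length-1 dimension yields all-ones weights, so the output is essentially the projected lesion feature broadcast to every position.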

3. The LCA_blcok module

class LCA_blcok(nn.Module):
    def __init__(self, dim, heads=8, dim_head=64, mlp_dim=512, dropout = 0.):
        super().__init__()
        self.LCAlayer = LCA_layer(dim, heads=heads, dim_head=dim_head, dropout=dropout)
        self.ff = PreNorm(dim, FeedForward(dim, mlp_dim, dropout = dropout))

    def forward(self, x, mask):
        b, c, h, w = x.shape # assume the input is 4, 3, 320, 320
        out = rearrange(x, 'b c h w -> b (h w) c') # 4, 320*320, 3
        out = self.LCAlayer(x, mask) + out # 4, 320*320, 3
        out = self.ff(out) + out # 4, 320*320, 3
        out = rearrange(out, 'b (h w) c -> b c h w', h=h) # 4, 3, 320, 320

        return out
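
A minimal usage sketch with hypothetical sizes; the mask is a single-channel prediction map at the same spatial resolution:

# hypothetical smoke test for LCA_blcok
lca = LCA_blcok(dim=512)
x = torch.randn(2, 512, 14, 14)
mask = torch.randn(2, 1, 14, 14)   # raw prediction logits from the previous stage
print(lca(x, mask).shape)          # torch.Size([2, 512, 14, 14])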

III. The HeadUpdator module

class DecoderBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super(DecoderBlock, self).__init__()

        self.conv1 = ConvBlock(in_channels, in_channels // 4, kernel_size=kernel_size, stride=stride, padding=padding)

        self.conv2 = ConvBlock(in_channels // 4, out_channels, kernel_size=kernel_size, stride=stride, padding=padding)

        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.upsample(x)
        return x
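
DecoderBlock relies on a ConvBlock helper that is not shown in this post; a minimal sketch consistent with how it is called here (my assumption of a plain conv-BN-ReLU unit, the original repo provides its own definition) would be:

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super(ConvBlock, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))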


class HeadUpdator(nn.Module):
    def __init__(self, in_channels=64, feat_channels=64, out_channels=None, conv_kernel_size=1):
        super(HeadUpdator, self).__init__()
        
        self.conv_kernel_size = conv_kernel_size

        # C == feat
        self.in_channels = in_channels
        self.feat_channels = feat_channels
        self.out_channels = out_channels if out_channels else in_channels
        # feat == in == out
        self.num_in = self.feat_channels
        self.num_out = self.feat_channels

        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

        self.pred_transform_layer = nn.Linear(self.in_channels, self.num_in + self.num_out)
        self.head_transform_layer = nn.Linear(self.in_channels, self.num_in + self.num_out, 1)

        self.pred_gate = nn.Linear(self.num_in, self.feat_channels, 1)
        self.head_gate = nn.Linear(self.num_in, self.feat_channels, 1)

        self.pred_norm_in = nn.LayerNorm(self.feat_channels)
        self.head_norm_in = nn.LayerNorm(self.feat_channels)
        self.pred_norm_out = nn.LayerNorm(self.feat_channels)
        self.head_norm_out = nn.LayerNorm(self.feat_channels)

        self.fc_layer = nn.Linear(self.feat_channels, self.out_channels, 1)
        self.fc_norm = nn.LayerNorm(self.feat_channels)
        self.activation = nn.ReLU(inplace=True)


    def forward(self, feat, head, pred): #feat:B 64 28 28  head:B num_classes 64 1 1  pred:B num_classes 14 14

        bs, num_classes = head.shape[:2]
        # C, H, W = feat.shape[-3:]

        pred = self.upsample(pred)# B num_classes 28 28
        pred = torch.sigmoid(pred)

        """
        Head feature assembly
        - use the prediction to assemble head-aware features
        """

        # [B, N, C]
        assemble_feat = torch.einsum('bnhw,bchw->bnc', pred, feat)# B num_classes 64

        # [B, N, C, K, K] -> [B, N, C, K*K] -> [B, N, K*K, C]
        head = head.reshape(bs, num_classes, self.in_channels, -1).permute(0, 1, 3, 2)#B num_classes 64 1 -- B num_classes 1 64
        
        """
        Update head
        - assemble_feat, head -> linear transform -> pred_feat, head_feat
        - both split into two parts: xxx_in & xxx_out
        - gate_feat = head_feat_in * pred_feat_in
        - gate_feat -> linear transform -> pred_gate, head_gate
        - update_head = pred_gate * pred_feat_out + head_gate * head_feat_out
        """
        # [B, N, C] -> [B*N, C]
        assemble_feat = assemble_feat.reshape(-1, self.in_channels)#B*num_classes 64
        bs_num = assemble_feat.size(0)#bs_num=B*num_classes

        # [B*N, C] -> [B*N, in+out]
        pred_feat = self.pred_transform_layer(assemble_feat)#B*num_classes 128
        
        # [B*N, in]  take the first self.num_in columns
        pred_feat_in = pred_feat[:, :self.num_in].view(-1, self.feat_channels)#B*num_classes 64
        # [B*N, out]  take the last self.num_out columns
        pred_feat_out = pred_feat[:, -self.num_out:].view(-1, self.feat_channels)#B*num_classes 64

        # [B, N, K*K, C] -> [B*N, K*K, C] -> [B*N, K*K, in+out]
        head_feat = self.head_transform_layer(
            head.reshape(bs_num, -1, self.in_channels))#B num_classes 1 64 -- B*num_classes 1 64 -- B*num_classes 1 128

        # [B*N, K*K, in]
        head_feat_in = head_feat[..., :self.num_in]#B*num_classes 1 64
        # [B*N, K*K, out]
        head_feat_out = head_feat[..., -self.num_out:]#B*num_classes 1 64

        # [B*N, K*K, in] * [B*N, 1, in] -> [B*N, K*K, in]
        gate_feat = head_feat_in * pred_feat_in.unsqueeze(-2)#B*num_classes 1 64

        # [B*N, K*K, feat]
        head_gate = self.head_norm_in(self.head_gate(gate_feat))#B*num_classes 1 64
        pred_gate = self.pred_norm_in(self.pred_gate(gate_feat))#B*num_classes 1 64

        head_gate = torch.sigmoid(head_gate)
        pred_gate = torch.sigmoid(pred_gate)

        # [B*N, K*K, out]
        head_feat_out = self.head_norm_out(head_feat_out)#B*num_classes 1 64
        # [B*N, out]
        pred_feat_out = self.pred_norm_out(pred_feat_out)#B*num_classes 64

        # [B*N, K*K, feat] or [B*N, K*K, C]
        update_head = pred_gate * pred_feat_out.unsqueeze(-2) + head_gate * head_feat_out#B*num_classes 1 64

        update_head = self.fc_layer(update_head)#B*num_classes 1 64
        update_head = self.fc_norm(update_head)#B*num_classes 1 64
        update_head = self.activation(update_head)#B*num_classes 1 64

        # [B*N, K*K, C] -> [B, N, K*K, C]
        update_head = update_head.reshape(bs, num_classes, -1, self.feat_channels)#B num_classes 1 64
        # [B, N, K*K, C] -> [B, N, C, K*K] -> [B, N, C, K, K]
        update_head = update_head.permute(0, 1, 3, 2).reshape(bs, num_classes, self.feat_channels, self.conv_kernel_size, self.conv_kernel_size)#B num_classes 64 1 1

        return update_head
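
A minimal sketch of one update step with hypothetical sizes (num_classes=1, see the tip at the end): the decoder feature at the new scale, the current dynamic kernels and the previous-stage prediction go in, and kernels of the same shape come out:

# hypothetical smoke test for HeadUpdator
updator = HeadUpdator(in_channels=64, feat_channels=64)
feat = torch.randn(2, 64, 28, 28)      # decoder feature at the new scale
head = torch.randn(2, 1, 64, 1, 1)     # dynamic kernels: B, num_classes, C, K, K
pred = torch.randn(2, 1, 14, 14)       # prediction from the previous (coarser) stage
print(updator(feat, head, pred).shape) # torch.Size([2, 1, 64, 1, 1])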

IV. The overall network structure

class LDNet(nn.Module):
    def __init__(self, num_classes=3, unified_channels=64, conv_kernel_size=1):
        super(LDNet, self).__init__()
        self.num_classes = num_classes
        self.conv_kernel_size = conv_kernel_size
        self.unified_channels = unified_channels

        res2net = res2net50_v1b_26w_4s(pretrained=True)
        
        # Encoder
        self.encoder1_conv = res2net.conv1
        self.encoder1_bn = res2net.bn1
        self.encoder1_relu = res2net.relu
        self.maxpool = res2net.maxpool
        self.encoder2 = res2net.layer1
        self.encoder3 = res2net.layer2
        self.encoder4 = res2net.layer3
        self.encoder5 = res2net.layer4

        self.reduce2 = nn.Conv2d(256, 64, 1)
        self.reduce3 = nn.Conv2d(512, 128, 1)
        self.reduce4 = nn.Conv2d(1024, 256, 1)
        self.reduce5 = nn.Conv2d(2048, 512, 1)
        # Decoder
        self.decoder5 = DecoderBlock(in_channels=512, out_channels=512)
        self.decoder4 = DecoderBlock(in_channels=512+256, out_channels=256)
        self.decoder3 = DecoderBlock(in_channels=256+128, out_channels=128)
        self.decoder2 = DecoderBlock(in_channels=128+64, out_channels=64)
        self.decoder1 = DecoderBlock(in_channels=64+64, out_channels=64)

       
        self.gobal_average_pool = nn.Sequential(
            # GroupNorm does not change the tensor shape; it only normalizes within each group
            nn.GroupNorm(16, 512), # split the 512 channels into 16 groups
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), # adaptive average pooling to a 1x1 output
        )
        #self.gobal_average_pool = nn.AdaptiveAvgPool2d(1)
        self.generate_head = nn.Linear(512, self.num_classes*self.unified_channels*self.conv_kernel_size*self.conv_kernel_size)

        # self.pred_head = nn.Conv2d(64, self.num_classes, self.conv_kernel_size)

        # self.headUpdators = nn.ModuleList()
        # for i in range(4):
        #     self.headUpdators.append(HeadUpdator())
        self.headUpdators = nn.ModuleList([HeadUpdator(), HeadUpdator(), HeadUpdator(), HeadUpdator()])

        # Unified channel
        self.unify1 = nn.Conv2d(64, 64, 1)
        self.unify2 = nn.Conv2d(64, 64, 1)
        self.unify3 = nn.Conv2d(128, 64, 1)
        self.unify4 = nn.Conv2d(256, 64, 1)
        self.unify5 = nn.Conv2d(512, 64, 1)

        # Efficient self-attention block
        self.esa1 = ESA_blcok(dim=64)
        self.esa2 = ESA_blcok(dim=64)
        self.esa3 = ESA_blcok(dim=128)
        self.esa4 = ESA_blcok(dim=256)
        #self.esa5 = ESA_blcok(dim=512)
        # Lesion-aware cross-attention block
        self.lca1 = LCA_blcok(dim=64)
        self.lca2 = LCA_blcok(dim=128)
        self.lca3 = LCA_blcok(dim=256)
        self.lca4 = LCA_blcok(dim=512)

        self.decoderList = nn.ModuleList([self.decoder4, self.decoder3, self.decoder2, self.decoder1])
        self.unifyList = nn.ModuleList([self.unify4, self.unify3, self.unify2, self.unify1])
        self.esaList = nn.ModuleList([self.esa4, self.esa3, self.esa2, self.esa1])
        self.lcaList = nn.ModuleList([self.lca4, self.lca3, self.lca2, self.lca1])


    def forward(self, x):
        # x = 224*224*3
        bs = x.shape[0]
        e1_ = self.encoder1_conv(x)  # 112*112*64
        e1_ = self.encoder1_bn(e1_)
        e1_ = self.encoder1_relu(e1_)
        e1_pool_ = self.maxpool(e1_)  # 56*56*64
        e2_ = self.encoder2(e1_pool_) # 56*56*256
        e3_ = self.encoder3(e2_)      # 28*28*512
        e4_ = self.encoder4(e3_)      # 14*14*1024
        e5_ = self.encoder5(e4_)      # 7*7*2048
        
        e1 = e1_
        e2 = self.reduce2(e2_)      # 56*56*64
        e3 = self.reduce3(e3_)      # 28*28*128
        e4 = self.reduce4(e4_)      # 14*14*256
        e5 = self.reduce5(e5_)      # 7*7*512
        
        #e5 = self.esa5(e5)
        d5 = self.decoder5(e5) # 7*7*512 -- 14*14*512
        
        feat5 = self.unify5(d5) # 14*14*64

        decoder_out = [d5]
        encoder_out = [e4, e3, e2, e1]

        """
        B = batch size (bs)
        N = number of classes (num_classes)
        C = feature channels
        K = conv kernel size
        """
        # [B, 512, 1, 1] -> [B, 512]
        gobal_context = self.gobal_average_pool(e5) # B, 512, 1, 1
        gobal_context = gobal_context.reshape(bs, -1) # B, 512
        
        # [B, N*C*K*K] -> [B, N, C, K, K]
        head = self.generate_head(gobal_context) # B, 512 -- B, 64*num_classes
        head = head.reshape(bs, self.num_classes, self.unified_channels, self.conv_kernel_size, self.conv_kernel_size) # B, num_classes, 64, 1, 1
        
        pred = []
        for t in range(bs):
            pred.append(F.conv2d(
                feat5[t:t+1],
                head[t],
                padding=int(self.conv_kernel_size // 2)))
        pred = torch.cat(pred, dim=0) # B, num_classes, 14, 14
        H, W = feat5.shape[-2:] # H=14, W=14
        # [B, N, H, W]
        pred = pred.reshape(bs, self.num_classes, H, W) # B, num_classes, 14, 14
        stage_out = [pred]

        # feat size: [B, C, H, W]
        # feats = [feat4, feat3, feat2, feat1]
        feats = []

        # self.decoderList = nn.ModuleList([self.decoder4, self.decoder3, self.decoder2, self.decoder1])
        # self.unifyList = nn.ModuleList([self.unify4, self.unify3, self.unify2, self.unify1])
        # self.esaList = nn.ModuleList([self.esa4, self.esa3, self.esa2, self.esa1])
        # self.lcaList = nn.ModuleList([self.lca4, self.lca3, self.lca2, self.lca1])
        # encoder_out = [e4, e3, e2, e1]
        for i in range(4):
            esa_out = self.esaList[i](encoder_out[i]) # first iteration: in B 256 14 14, out B 256 14 14
            lca_out = self.lcaList[i](decoder_out[-1], stage_out[-1]) # in: {d5: B 512 14 14, pred: B num_classes 14 14}, out: B 512 14 14
            comb = torch.cat([lca_out, esa_out], dim=1) # B 512+256 14 14

            d = self.decoderList[i](comb) # B 256 28 28
            decoder_out.append(d) # decoder_out = [d5, d]

            feat = self.unifyList[i](d) # B 64 28 28
            feats.append(feat) # feats = [feat]

            head = self.headUpdators[i](feats[i], head, pred) # in: {feat: B 64 28 28, head: B num_classes 64 1 1, pred: B num_classes 14 14}, out: B num_classes 64 1 1
            pred = []

            for j in range(bs):
                pred.append(F.conv2d(
                    feats[i][j:j+1],
                    head[j],
                    padding=int(self.conv_kernel_size // 2)))
            pred = torch.cat(pred, dim=0) # B, num_classes, 28, 28 (first iteration)
            H, W = feats[i].shape[-2:] # H=28, W=28
            pred = pred.reshape(bs, self.num_classes, H, W)#B num_classes 28 28
            stage_out.append(pred)
            
        stage_out.reverse() # reverse the list so the finest-resolution prediction comes first
        #return stage_out[0], stage_out[1], stage_out[2], stage_out[3], stage_out[4]
        return torch.sigmoid(stage_out[0]), torch.sigmoid(stage_out[1]), torch.sigmoid(stage_out[2]), \
               torch.sigmoid(stage_out[3]), torch.sigmoid(stage_out[4])
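
A hypothetical end-to-end check (this assumes the res2net50_v1b_26w_4s backbone and ConvBlock from the original repository are importable; pretrained=True will try to download ImageNet weights):

# hypothetical smoke test for the full network
model = LDNet(num_classes=1)   # only num_classes=1 works, see the tip below
x = torch.randn(2, 3, 224, 224)
for o in model(x):
    print(o.shape)
# torch.Size([2, 1, 224, 224])  <- stage_out[0], finest prediction
# torch.Size([2, 1, 112, 112])
# torch.Size([2, 1, 56, 56])
# torch.Size([2, 1, 28, 28])
# torch.Size([2, 1, 14, 14])    <- stage_out[4], coarsest prediction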

Tip: although a num_classes argument is exposed, only num_classes=1 actually works; any other value raises an error, because the multi-channel prediction mask then fails to broadcast against the feature map inside MaskAveragePooling.
