Linux NIC configuration: VLAN / bond / bridge / macvlan / ipvlan modes

Linux NIC modes

Linux NICs support a plain (non-VLAN) mode as well as VLAN, bond, bridge, macvlan and ipvlan modes. The sections below show example configurations on both the switch side and the server side.

Prerequisites:

  • A physical switch; an H3C S5130 layer-3 switch is used in the examples
  • A physical server; Ubuntu 22.04 LTS is used as the operating system

On the switch, create two example VLANs, vlan 10 and vlan 20, together with their VLAN interfaces.

<H3C>system-view

[H3C]vlan 10 20

[H3C]interface Vlan-interface 10
[H3C-Vlan-interface10]ip address 172.16.10.1 24
[H3C-Vlan-interface10]undo shutdown
[H3C-Vlan-interface10]exit
[H3C]

[H3C]interface Vlan-interface 20
[H3C-Vlan-interface20]ip address 172.16.20.1 24
[H3C-Vlan-interface20]undo shutdown
[H3C-Vlan-interface20]exit
[H3C]

Non-VLAN NIC mode

In non-VLAN mode, the IP address is usually configured directly on the NIC, and the uplink switch port is configured as an access port. Access ports are typically used to connect bare-metal servers or office terminal devices.

The topology is shown below (figure omitted).

Switch configuration: configure the switch ports as access ports and assign them to the corresponding VLANs.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]
[H3C]interface GigabitEthernet 1/0/2
[H3C-GigabitEthernet1/0/2]port link-type access
[H3C-GigabitEthernet1/0/2]port access vlan 20
[H3C-GigabitEthernet1/0/2]exit
[H3C]

Server 1 configuration: the IP address is configured directly on the NIC.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Server 2 configuration: the IP address is configured directly on the NIC.

root@server2:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.20.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.20.1
  version: 2

Apply the network configuration

netplan apply

Check the server's interface information

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity by pinging server2 from server1. The layer-3 switch provides routing, so it connects the two VLAN subnets that are isolated at layer 2.

root@server1:~# ping 172.16.20.10 -c 4
PING 172.16.20.10 (172.16.20.10) 56(84) bytes of data.
64 bytes from 172.16.20.10: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.20.10: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.20.10: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.20.10: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.20.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms

VLAN NIC mode

In VLAN mode, the uplink switch port must be configured as a trunk port that permits multiple VLANs.

The topology is shown below (figure omitted).

Switch configuration: the port is configured as a trunk port that permits multiple VLANs.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type trunk
[H3C-GigabitEthernet1/0/1]port trunk permit vlan 10 20
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: VLAN sub-interfaces are created on the server.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: true
  vlans:
    vlan10:
      id: 10
      link: enp1s0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: enp1s0
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300
  version: 2
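
Apply the configuration with netplan apply. For a quick, non-persistent test, the equivalent VLAN sub-interfaces can also be created directly with iproute2 (a sketch; interface names and addresses are taken from the netplan file above):

ip link add link enp1s0 name vlan10 type vlan id 10
ip link add link enp1s0 name vlan20 type vlan id 20
ip addr add 172.16.10.10/24 dev vlan10
ip addr add 172.16.20.10/24 dev vlan20
ip link set vlan10 up
ip link set vlan20 up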

Check the interface information: two VLAN sub-interfaces, vlan10 and vlan20, have been created.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
10: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
11: vlan20@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateways through vlan10 and vlan20

root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms
root@server1:~#
root@server1:~# ping 172.16.20.1 -c 4
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.20.1: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.20.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms

Bond NIC mode

In bond mode, a link-aggregation group must be configured on the peer switch.

The topology is shown below (figure omitted).

Switch configuration: configure dynamic link aggregation (LACP), add ports GigabitEthernet 1/0/1 and 1/0/3 to the aggregation group, and then configure the aggregate interface as a trunk port.

<H3C>system-view
[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]link-aggregation mode dynamic
[H3C-Bridge-Aggregation1]quit

[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-aggregation group 1
[H3C-GigabitEthernet1/0/1]exit

[H3C]interface GigabitEthernet 1/0/3
[H3C-GigabitEthernet1/0/3]port link-aggregation group 1
[H3C-GigabitEthernet1/0/3]exit

[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]port link-type trunk
[H3C-Bridge-Aggregation1]port trunk permit vlan 10 20
[H3C-Bridge-Aggregation1]exit

Server configuration

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
    enp2s0:
      dhcp4: no
  bonds:
    bond0:
      interfaces:
        - enp1s0
        - enp2s0
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer2+3
  vlans:
    vlan10:
      id: 10
      link: bond0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: bond0
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300
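
Apply the configuration:

netplan apply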

Check the interface information: a bond0 interface has been created, with two VLAN sub-interfaces vlan10 and vlan20 on top of it. enp1s0 and enp2s0 show "master bond0", which means both NICs are member (slave) interfaces of bond0.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
8: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
9: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever

Check the bond status: Bonding Mode shows IEEE 802.3ad Dynamic link aggregation, and the Slave Interface sections below show the details of the two member interfaces.

root@server1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.15.0-60-generic

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: ae:fd:60:48:84:1a
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 2
        Actor Key: 9
        Partner Key: 1
        Partner Mac Address: fc:60:9b:35:ad:18

Slave Interface: enp1s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 7c:b5:9b:59:0a:71
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: ae:fd:60:48:84:1a
    port key: 9
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: fc:60:9b:35:ad:18
    oper key: 1
    port priority: 32768
    port number: 2
    port state: 61

Slave Interface: enp2s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: e4:54:e8:dc:e5:88
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: ae:fd:60:48:84:1a
    port key: 9
    port priority: 255
    port number: 2
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: fc:60:9b:35:ad:18
    oper key: 1
    port priority: 32768
    port number: 1
    port state: 61

Test connectivity to the switch's gateway address:

root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.64 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.59 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.95 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.93 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 1.589/1.776/1.953/0.165 ms
root@server1:~# 

Shut down one member interface and test again; the gateway is still reachable.

root@server1:~# ip link set dev enp2s0 down
root@server1:~# ip link show enp2s0
3: enp2s0: <BROADCAST,MULTICAST,SLAVE> mtu 1500 qdisc fq_codel master bond0 state DOWN mode DEFAULT group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
root@server1:~# 
root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.54 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.64 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.73 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.47 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 1.470/1.844/2.732/0.516 ms
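
After the failover test, the member interface can be brought back up so that both links carry traffic again:

ip link set dev enp2s0 up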

Bridge NIC mode

In bridge mode, the peer switch port can be configured either as an access port or as a trunk port.

The topology is shown below (figure omitted).

Switch configuration: in this example the switch port is configured as an access port and assigned to the corresponding VLAN.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: the physical NIC is added to a bridge, and the IP address is configured on the bridge interface br0.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enp1s0]
      addresses: [172.16.10.10/24]
      routes:
      - to: default
        via: 172.16.10.1
        metric: 100
        on-link: true
      mtu: 1500
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      parameters:
        stp: true
        forward-delay: 4
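
Apply the configuration, then list the port enslaved to the bridge (an iproute2 alternative to the brctl command used below):

netplan apply
ip link show master br0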

Check the interface information

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:d0:7e:31:9c:74 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::cd0:7eff:fe31:9c74/64 scope link 
       valid_lft forever preferred_lft forever

Check the bridge and its ports: currently the bridge has only one physical port, enp1s0.

root@server1:~# apt install -y bridge-utils
root@ubuntu:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.0ed07e319c74       yes             enp1s0
root@server1:~# 

In a KVM virtualization environment, once a virtual machine is attached to this bridge it can be given an IP address in the same subnet as the physical NIC, so the VM can be reached just as conveniently as a physical host.
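
For example, a VM can be attached directly to br0 when it is created (a sketch, assuming the KVM/libvirt tooling used later in this article is installed; the VM name and image path are placeholders):

virt-install \
  --name vm-test \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm-test.qcow2 \
  --os-variant ubuntu22.04 \
  --import \
  --noautoconsole \
  --network bridge=br0,model=virtio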

Macvlan NIC mode

Macvlan (MAC Virtual LAN) is a network virtualization technique provided by the Linux kernel. It allows multiple virtual interfaces to be created on top of one physical NIC; each virtual interface has its own MAC address and can be assigned its own IP address for communication. VMs or containers using macvlan share the same subnet and broadcast domain as the host.

In macvlan mode, the peer switch port can be configured as either an access port or a trunk port; with a trunk port, macvlan combines well with VLAN sub-interfaces.

The topology is shown below (figure omitted).

Macvlan IP mode

In this mode the uplink switch port is configured as an access port, and the macvlan parent NIC and its sub-interfaces are assigned IP addresses from the same subnet.

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit

Server configuration: macvlan supports several modes; bridge mode is used here, and the configuration is persisted.

cat >/etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add macvlan1 link enp1s0 type macvlan mode bridge
EOF

chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh

Configure netplan

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
    macvlan0:
      addresses:
        - 172.16.10.11/24
    macvlan1:
      addresses:
        - 172.16.10.12/24
  version: 2

Apply the network configuration

netplan apply

Check the interface information: two macvlan interfaces have been created, their IP addresses are in the same subnet as the parent NIC, and each interface has its own MAC address.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
13: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
14: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.12/24 brd 172.16.10.255 scope global macvlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateway

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 

Macvlan VLAN mode

In this mode the uplink switch port is configured as a trunk port. The macvlan parent NIC carries no IP address; instead, a VLAN sub-interface is configured on top of each macvlan interface.

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type trunk
[H3C-GigabitEthernet1/0/1]port trunk permit vlan 10 20
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: macvlan supports several modes; bridge mode is used here, and the configuration is persisted.

cat >/etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add macvlan1 link enp1s0 type macvlan mode bridge
EOF

chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh

Configure netplan: the two macvlan interfaces macvlan0 and macvlan1 carry the VLAN sub-interfaces vlan10 and vlan20 respectively.

root@ubuntu:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
    macvlan0:
      dhcp4: false
    macvlan1:
      dhcp4: false
  vlans:
    vlan10:
      id: 10
      link: macvlan0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: macvlan1
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300
  version: 2

Apply the network configuration

netplan apply

Check the interface information: two macvlan interfaces have been created, together with their two VLAN sub-interfaces.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
11: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
12: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever
13: vlan10@macvlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
14: vlan20@macvlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity from the two VLAN interfaces to the external gateways

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 
root@server1:~# ping -c 3 172.16.20.1 
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.35 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=1.48 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=1.46 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.353/1.429/1.477/0.054 ms
root@server1:~# 

IPvlan NIC mode

IPVLAN (IP Virtual LAN) is a network virtualization technique provided by the Linux kernel. It allows multiple virtual interfaces to be created on one physical NIC, each with its own IP address.

IPVLAN is similar to macvlan: both create multiple virtual interfaces from a single parent interface. The main difference is that ipvlan sub-interfaces all share the parent's MAC address, while each can still be configured with a different IP address.

In ipvlan mode, the peer switch port can likewise be configured as an access port or a trunk port; with a trunk port, ipvlan combines well with VLAN sub-interfaces.

The topology is shown below (figure omitted).

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: ipvlan supports three modes (l2, l3 and l3s); l3 mode is used here, and the configuration is persisted.

cat >/etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add ipvlan0 link enp1s0 type ipvlan mode l3
ip link add ipvlan1 link enp1s0 type ipvlan mode l3
EOF
chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh
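
If L2 semantics are preferred (each sub-interface handles ARP and broadcast itself, behaving more like macvlan while still sharing the parent's MAC address), only the mode argument changes; a sketch:

ip link add ipvlan0 link enp1s0 type ipvlan mode l2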

Configure netplan

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
    ipvlan0:
      addresses:
        - 172.16.10.11/24
    ipvlan1:
      addresses:
        - 172.16.10.12/24
  version: 2

Apply the network configuration

netplan apply

Check the interface information: two ipvlan interfaces have been created, their IP addresses are in the same subnet as the parent NIC, and each interface shares the parent NIC's MAC address.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
9: ipvlan0@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global ipvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb5:9b00:159:a71/64 scope link 
       valid_lft forever preferred_lft forever
10: ipvlan1@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.12/24 brd 172.16.10.255 scope global ipvlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb5:9b00:259:a71/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateway

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 

Combined bond, VLAN and bridge configuration

Combine the server's two NICs into a bond interface, create two VLAN sub-interfaces on top of the bond, and add each sub-interface to its own Linux bridge. Virtual machines created under the different bridges then belong to different VLANs.

The topology is shown below (figure omitted).

Switch configuration: configure dynamic link aggregation (LACP), add ports GigabitEthernet 1/0/1 and 1/0/3 to the aggregation group, and then configure the aggregate interface as a trunk port.

<H3C>system-view
[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]link-aggregation mode dynamic
[H3C-Bridge-Aggregation1]quit

[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-aggregation group 1
[H3C-GigabitEthernet1/0/1]exit

[H3C]interface GigabitEthernet 1/0/3
[H3C-GigabitEthernet1/0/3]port link-aggregation group 1
[H3C-GigabitEthernet1/0/3]exit

[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]port link-type trunk
[H3C-Bridge-Aggregation1]port trunk permit vlan 10 20
[H3C-Bridge-Aggregation1]exit

Server NIC configuration

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
    enp2s0:
      dhcp4: no
  bonds:
    bond0:
      interfaces:
        - enp1s0
        - enp2s0
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer2+3
  bridges:
    br10:
      interfaces: [ vlan10 ]
    br20:
      interfaces: [ vlan20 ]
  vlans:
    vlan10:
      id: 10
      link: bond0
    vlan20:
      id: 20
      link: bond0
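
Apply the configuration:

netplan apply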

Check the interface information: a bond0 interface has been created, two VLAN sub-interfaces vlan10 and vlan20 have been created on top of it, and each sub-interface is enslaved to its own bridge.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
16: br10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:df:66:ab:c2:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ecdf:66ff:feab:c24b/64 scope link 
       valid_lft forever preferred_lft forever
17: br20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9e:4d:f4:0a:6d:13 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9c4d:f4ff:fe0a:6d13/64 scope link 
       valid_lft forever preferred_lft forever
18: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br10 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
19: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br20 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff

Check the bridges that were created

root@server1:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br10            8000.eedf66abc24b       no              vlan10
br20            8000.9e4df40a6d13       no              vlan20

Install the KVM virtualization stack on server1, then define two new KVM networks, each bound to a different bridge.
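
The KVM tooling can be installed with something like the following (a sketch for Ubuntu 22.04; the exact package selection may vary):

apt install -y qemu-kvm libvirt-daemon-system virtinst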

cat >br10-network.xml<<EOF
<network>
  <name>br10-net</name>
  <forward mode="bridge"/>
  <bridge name="br10"/>
</network>
EOF
cat >br20-network.xml<<EOF
<network>
  <name>br20-net</name>
  <forward mode="bridge"/>
  <bridge name="br20"/>
</network>
EOF

virsh net-define br10-network.xml
virsh net-define br20-network.xml
virsh net-start br10-net
virsh net-start br20-net
virsh net-autostart br10-net
virsh net-autostart br20-net

Check the newly created networks

root@server1:~# virsh net-list
 Name       State    Autostart   Persistent
---------------------------------------------
 br10-net   active   yes         yes
 br20-net   active   yes         yes
 default    active   yes         yes

Create two virtual machines, each attached to a different network

virt-install \
  --name vm1 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --import \
  --autostart \
  --noautoconsole \
  --network network=br10-net

virt-install \
  --name vm2 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --import \
  --autostart \
  --noautoconsole \
  --network network=br20-net

Check the virtual machines that were created

root@server1:~# virsh list
 Id   Name   State
----------------------
 13   vm1    running
 14   vm2    running

Configure a vlan 10 IP address on vm1

virsh console vm1
cat >/etc/netplan/00-installer-config.yaml<<EOF
network:
  ethernets:
    enp1s0:
      addresses:
      - 172.16.10.10/24
      nameservers:
        addresses:
        - 223.5.5.5
      routes:
      - to: default
        via: 172.16.10.1
  version: 2
EOF
netplan apply

Configure a vlan 20 IP address on vm2

virsh console vm2
cat >/etc/netplan/00-installer-config.yaml<<EOF
network:
  ethernets:
    enp1s0:
      addresses:
      - 172.16.20.10/24
      nameservers:
        addresses:
        - 223.5.5.5
      routes:
      - to: default
        via: 172.16.20.1
  version: 2
EOF
netplan apply

Log in to vm1 and test connectivity to the external gateway

root@vm1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:a4:aa:9d brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fea4:aa9d/64 scope link 
       valid_lft forever preferred_lft forever
root@vm1:~# 
root@vm1:~# ping 172.16.10.1 -c 3
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.51 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=7.10 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.10 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.505/3.568/7.101/2.509 ms
root@vm1:~# 

Log in to vm2 and test connectivity to the external gateway

root@vm2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:89:61:da brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe89:61da/64 scope link 
       valid_lft forever preferred_lft forever
root@vm2:~# 
root@vm2:~# ping 172.16.20.1 -c 3
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.73 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=2.00 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=2.00 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.732/1.911/2.003/0.126 ms
root@vm2:~# 
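
Since vm1 and vm2 sit in different VLANs, any traffic between them is routed by the layer-3 switch, just as in the non-VLAN example at the beginning of this article; this can be verified from vm1 with, for example:

ping 172.16.20.10 -c 3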
