hands-on: Multus on OpenShift with the bridge CNI and macvlan CNI
This continues from the previous single-node OpenShift installation.
Let's create a pod with multiple interfaces.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: cfosdefaultcni5
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "cni5",
      "isGateway": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.1.200.0/24",
        "rangeStart": "10.1.200.20",
        "rangeEnd": "10.1.200.100",
        "routes": [
          { "dst": "1.1.1.1/32", "gw": "10.1.200.252" },
          { "dst": "34.117.186.0/24", "gw": "10.1.200.252" },
          { "dst": "10.1.100.0/24", "gw": "10.1.200.252" }
        ],
        "gateway": "10.1.200.1"
      }
    }
isGateway: the bridge itself is assigned the gateway IP, which creates an addressed interface on the host node.
bridge: cni5 is the bridge interface name on the host node; make sure an interface with this name does not already exist before you apply the YAML.
gateway: 10.1.200.1 is the IP address assigned to the cni5 bridge on the host.
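Multus only surfaces a malformed spec.config string later, as a cryptic pod-sandbox creation error, so it can be worth sanity-checking the embedded JSON before applying the NAD. A minimal local check (file name /tmp/cni5.json is just for illustration):

```shell
# Write the NAD's CNI config to a scratch file and parse it with
# python's stdlib JSON tool; any syntax error fails loudly here
# instead of at pod creation time.
cat > /tmp/cni5.json <<'EOF'
{
  "cniVersion": "0.3.1",
  "type": "bridge",
  "bridge": "cni5",
  "isGateway": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.200.0/24",
    "rangeStart": "10.1.200.20",
    "rangeEnd": "10.1.200.100",
    "routes": [
      { "dst": "1.1.1.1/32", "gw": "10.1.200.252" },
      { "dst": "34.117.186.0/24", "gw": "10.1.200.252" },
      { "dst": "10.1.100.0/24", "gw": "10.1.200.252" }
    ],
    "gateway": "10.1.200.1"
  }
}
EOF
python3 -m json.tool /tmp/cni5.json > /dev/null && echo "config OK"
```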
Apply the YAML, then verify the net-attach-def exists:
oc get net-attach-def
create a sample pod
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{"name": "cfosdefaultcni5" }]'
spec:
  securityContext:
    runAsNonRoot: false
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: test-pod
    # image: nginx
    image: praqma/network-multitool
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
The annotation above tells test-pod to get an additional NIC from the net-attach-def cfosdefaultcni5.
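If a workload needs a deterministic address from the bridge subnet rather than one from the range (the cFOS deployment later in this walkthrough does exactly this), the Multus networks annotation also accepts an `ips` list. A hypothetical variant of the annotation above, pinning the secondary interface to 10.1.200.30:

```yaml
metadata:
  annotations:
    # "ips" asks the IPAM plugin for this exact address; 10.1.200.30 is
    # an example address inside the NAD's 10.1.200.0/24 subnet.
    k8s.v1.cni.cncf.io/networks: '[{"name": "cfosdefaultcni5", "ips": ["10.1.200.30/24"]}]'
```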
After applying this YAML, check the interfaces on the host node:
node=$(oc get nodes -o jsonpath='{.items[*].metadata.name}')
oc debug node/${node}
sh-4.4# ip link show cni5
115: cni5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:c8:41:7a:31:af brd ff:ff:ff:ff:ff:ff
sh-4.4# ip a show cni5
115: cni5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:c8:41:7a:31:af brd ff:ff:ff:ff:ff:ff
    inet 10.1.200.1/24 brd 10.1.200.255 scope global cni5
       valid_lft forever preferred_lft forever
    inet6 fe80::f8c8:41ff:fe7a:31af/64 scope link
       valid_lft forever preferred_lft forever
sh-4.4# ip r | grep cni5
10.1.200.0/24 dev cni5 proto kernel scope link src 10.1.200.1
Check the test-pod routing table:
oc exec -it po/test-pod -- sh
/ # ip r
default via 10.128.0.1 dev eth0
1.1.1.1 via 10.1.200.252 dev net1
10.1.100.0/24 via 10.1.200.252 dev net1
10.1.200.0/24 dev net1 proto kernel scope link src 10.1.200.20
10.128.0.0/23 dev eth0 proto kernel scope link src 10.128.0.116
10.128.0.0/14 via 10.128.0.1 dev eth0
34.117.186.0/24 via 10.1.200.252 dev net1
100.64.0.0/16 via 10.128.0.1 dev eth0
172.30.0.0/16 via 10.128.0.1 dev eth0
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if114: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP group default
link/ether 0a:58:0a:80:00:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.128.0.116/23 brd 10.128.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::858:aff:fe80:74/64 scope link
valid_lft forever preferred_lft forever
3: net1@if116: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 22:09:69:ef:8f:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.200.20/24 brd 10.1.200.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::2009:69ff:feef:8f33/64 scope link
valid_lft forever preferred_lft forever
/ # ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if114: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default
link/ether 0a:58:0a:80:00:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0
3: net1@if116: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 22:09:69:ef:8f:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
macvlan cni
To use the macvlan CNI, you have to associate it with a master interface on the host node. The master interface already has an IP address allocated, so you need to create a new interface to act as the master. There are multiple ways to do this: assign a new ENI, create a bond interface, or just create a VLAN sub-interface. Let's create a VLAN sub-interface.
The NMState operator does not seem to work on an EC2 VM node, so I just used shell commands to create the VLAN interface.
oc debug node/ip-10-0-5-203.ec2.internal
# after shelling into the node, create the VLAN sub-interface and bring it up
ip link add link eth0 name eth0.1000 type vlan id 1000
ip link set eth0.1000 up
# verify with
sh-4.4# ip address show dev eth0.1000
174: eth0.1000@ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
link/ether 0e:45:06:3a:bf:1d brd ff:ff:ff:ff:ff:ff
inet6 fe80::c45:6ff:fe3a:bf1d/64 scope link
valid_lft forever preferred_lft forever
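An interface created from an oc debug shell does not survive a node reboot. One way to persist it (a sketch, not verified on this cluster) is a NetworkManager keyfile under /etc/NetworkManager/system-connections/, delivered for example via a MachineConfig. Note the parent must name the real host NIC; the output above shows ens5 as the parent on this EC2 node:

```ini
# /etc/NetworkManager/system-connections/eth0.1000.nmconnection (sketch)
[connection]
id=eth0.1000
type=vlan
interface-name=eth0.1000

[vlan]
# parent must be the actual host NIC (ens5 on this EC2 node)
parent=ens5
id=1000

[ipv4]
method=disabled

[ipv6]
method=disabled
```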
create net-attach-def
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: nadapplication200
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eth0.1000",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "10.1.200.0/24",
      "rangeStart": "10.1.200.20",
      "rangeEnd": "10.1.200.100",
      "routes": [
        { "dst": "1.1.1.1/32", "gw": "10.1.200.252" },
        { "dst": "34.117.186.0/24", "gw": "10.1.200.252" },
        { "dst": "10.1.100.0/24", "gw": "10.1.200.252" }
      ],
      "gateway": "10.1.200.252"
    }
  }'
Create an application that uses this net-attach-def. The annotation pins the pod's macvlan interface to 10.1.200.252, the gateway address that the NAD's routes point at:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cfos7210250-deployment
  labels:
    app: cfos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cfos
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[{ "name": "nadapplication200", "ips": [ "10.1.200.252/24" ] }]'
      labels:
        app: cfos
    spec:
      serviceAccountName: cfos-serviceaccount
      initContainers:
      - name: init-myservice
        image: busybox
        command:
        - sh
        - -c
        - |
          echo "nameserver 172.30.0.10" > /mnt/resolv.conf
          echo "search default.svc.cluster.local svc.cluster.local cluster.local" >> /mnt/resolv.conf
        securityContext:
          allowPrivilegeEscalation: true
          privileged: true
          capabilities:
            add: ["NET_ADMIN", "SYS_ADMIN", "NET_RAW"]
        volumeMounts:
        - name: resolv-conf
          mountPath: /mnt
      containers:
      - name: cfos7210250-container
        image: <deleted>
        securityContext:
          allowPrivilegeEscalation: true
          privileged: true
          capabilities:
            add: ["NET_ADMIN", "SYS_ADMIN", "NET_RAW"]
        ports:
        - containerPort: 443
        volumeMounts:
        - mountPath: /data
          name: data-volume
        - mountPath: /etc/resolv.conf
          name: resolv-conf
          subPath: resolv.conf
      volumes:
      - name: data-volume
        emptyDir: {}
      - name: resolv-conf
        emptyDir: {}
      dnsPolicy: ClusterFirst
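Once the deployment is up, you can confirm the macvlan attachment and its pinned address; a sketch, assuming a single replica matching the app=cfos label:

```shell
# Pick the generated pod name, then inspect the secondary interface.
pod=$(oc get pod -l app=cfos -o jsonpath='{.items[0].metadata.name}')
oc exec "$pod" -- ip addr show net1

# Multus also records what it attached in the network-status annotation.
oc get pod "$pod" -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'
```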