Docker Networking
One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.
Docker networking under the hood
The Linux kernel has various features that provide multi-tenancy on hosts. Namespaces offer different kinds of isolation; the network namespace is the one that provides network isolation.
You can create network namespaces using the `ip` command on any Linux operating system:
ip netns add sfo
ip netns add nyc
ip netns list
nyc
sfo
Now we can create a `veth` pair to connect these network namespaces. Think of a `veth` pair as a network cable with connectors at both ends:
ip link add veth-sfo type veth peer name veth-nyc
ip link list | grep veth
13: veth-nyc@veth-sfo: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
14: veth-sfo@veth-nyc: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
At this moment, the `veth` pair (cable) exists in the host network namespace. Now let's move the two ends of the `veth` pair into the respective namespaces we created earlier:
ip link set veth-sfo netns sfo
ip link set veth-nyc netns nyc
ip link list | grep veth
The `veth` pair no longer exists in the host network namespace; each end has moved into its own namespace:
ip netns exec sfo ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
14: veth-sfo@if13: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether aa:c0:0b:1d:d8:6a brd ff:ff:ff:ff:ff:ff link-netnsid 1
ip netns exec nyc ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
13: veth-nyc@if14: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 2a:e6:57:d1:a2:cc brd ff:ff:ff:ff:ff:ff link-netnsid 0
Now let’s assign IP addresses to these interfaces and bring them up:
ip netns exec sfo ip address add 10.0.0.11/24 dev veth-sfo
ip netns exec sfo ip link set veth-sfo up
ip netns exec nyc ip address add 10.0.0.12/24 dev veth-nyc
ip netns exec nyc ip link set veth-nyc up
Using the `ping` command, we can verify that the two network namespaces are connected and reachable:
ip netns exec sfo ping 10.0.0.12
PING 10.0.0.12 (10.0.0.12) 56(84) bytes of data.
64 bytes from 10.0.0.12: icmp_seq=1 ttl=64 time=0.273 ms
--- 10.0.0.12 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
If we wanted to create more network namespaces and connect them all together, creating a `veth` pair for every combination of namespaces would not scale. Instead, we can create a Linux bridge and attach the network namespaces to it to get connectivity (see the sketch after the cleanup below). And that's exactly how Docker sets up networking between containers running on the same host!
Clean up the network namespaces that we just created.
ip netns del nyc sfo
The `veth` pair gets cleaned up automatically.
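As a rough sketch of the bridge approach described above (the names `br0`, `web1`, and `web2` and the `10.0.1.0/24` subnet are illustrative, and root privileges are required):
ip netns add web1
ip netns add web2
# Create a bridge on the host and bring it up:
ip link add br0 type bridge
ip link set br0 up
# One veth pair per namespace: one end moves into the namespace,
# the other stays on the host and is attached to the bridge:
ip link add veth-web1 type veth peer name veth-web1-br
ip link set veth-web1 netns web1
ip link set veth-web1-br master br0
ip link set veth-web1-br up
ip link add veth-web2 type veth peer name veth-web2-br
ip link set veth-web2 netns web2
ip link set veth-web2-br master br0
ip link set veth-web2-br up
# Assign addresses and bring up the namespace ends:
ip netns exec web1 ip address add 10.0.1.11/24 dev veth-web1
ip netns exec web1 ip link set veth-web1 up
ip netns exec web2 ip address add 10.0.1.12/24 dev veth-web2
ip netns exec web2 ip link set veth-web2 up
# The namespaces now reach each other through the bridge:
ip netns exec web1 ping -c 1 10.0.1.12
Notice that only one bridge is needed no matter how many namespaces are attached, which is what makes this design scale.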
Since Docker is already installed on this machine, the `docker0` bridge already exists; Docker creates it at installation time.
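To confirm this (a quick check; the output will vary by host), you can list Docker's networks and look at the bridge interface itself:
# Docker's default networks include one named "bridge", backed by docker0:
docker network ls
# The Linux bridge interface on the host:
ip link show docker0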
Let’s spin up a test container now:
docker run --name testc1 -itd leodotcloud/swiss-army-knife
fee636119a04f549b2adfcac3112e01f8816ae5f56f28b0127e66aa1a4bf3869
Inspecting the container, we can figure out the network namespace details:
docker inspect testc1 --format '{{.NetworkSettings.SandboxKey}}'
/var/run/docker/netns/6a1141406863
Since Docker doesn't create the netns file in the default location (`/var/run/netns`), `ip netns list` doesn't show this network namespace. We can create a symlink to the expected location to overcome that limitation:
container_id=testc1
container_netns=$(docker inspect ${container_id} --format '{{.NetworkSettings.SandboxKey}}')
mkdir -p /var/run/netns
rm -f /var/run/netns/${container_id}
ln -sv ${container_netns} /var/run/netns/${container_id}
'/var/run/netns/testc1' -> '/var/run/docker/netns/6a1141406863'
We can verify that the `ip` command now lists the namespace:
ip netns list
testc1 (id: 0)
The other `ip` commands will now work with the namespace too:
ip netns exec testc1 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
We can confirm that this is actually the container’s network namespace with the following command:
docker exec testc1 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
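As an additional check (a sketch, assuming a Linux host with procfs), we can compare namespace inodes: resolve the container's network namespace through its PID and compare it with the file behind the symlink we created earlier. Identical inode numbers mean both paths refer to the same network namespace:
# PID of the container's init process:
pid=$(docker inspect testc1 --format '{{.State.Pid}}')
# Inode of that process's network namespace:
stat -Lc %i /proc/${pid}/ns/net
# Inode behind the symlink created above; the two numbers should match:
stat -Lc %i /var/run/netns/testc1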
If you inspect the list of interfaces on the host again, you will find a new `veth` interface:
ip link | grep veth
16: veth3569d0e@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
The above output shows that this `veth` interface is connected to the `docker0` bridge (note `master docker0`).
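We can also confirm the attachment from the bridge's side. The `bridge link` command (part of iproute2) lists bridge ports along with their master; interface names and numbers will differ on your host:
# The veth interface should be listed with "master docker0":
bridge link | grep docker0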
Let’s look at five ways to configure the network when a Docker container runs:
- Host
- Bridge
- Custom Bridge
- Container
- None
These modes determine how containers communicate with each other when they run on the same host, and what options Docker itself offers for container communication between hosts.
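As a quick illustration (a sketch; the `alpine` image and the `mybridge` network name are only examples), each mode is selected with the `--network` flag of `docker run`:
# Host: share the host's network namespace (no network isolation):
docker run --rm --network host alpine ip addr
# Bridge (the default): attach to docker0 via a veth pair:
docker run --rm --network bridge alpine ip addr
# Custom bridge: create a user-defined bridge network first:
docker network create mybridge
docker run --rm --network mybridge alpine ip addr
# Container: share the network namespace of an existing container:
docker run --rm --network container:testc1 alpine ip addr
# None: only a loopback interface, no external connectivity:
docker run --rm --network none alpine ip addr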
Docker Networking Types
References
- [Introduction to Container Networking (Rancher blog)](https://www.suse.com/c/rancher_blog/introduction-to-container-networking/)