Network Namespaces

We are progressing at a remarkable speed in every aspect, and the software industry is no exception. With the introduction of cloud-native applications and tools like Docker and Kubernetes, software containerisation has become one of the hottest topics in the field. Whenever we talk about containerisation, the two main technologies that come into play are namespaces and cgroups. In terms of a single process, cgroups limit its resource usage (CPU, RAM etc.), while namespaces limit its scope of interaction with other processes. More specifically, namespaces partition kernel resources in such a way that each process can see only its own set of resources.

The Linux kernel provides six types of namespaces: mount (mnt), process ID (pid), inter-process communication (ipc), network (net), UTS (uts) and user ID (user). In this article, the focus is mainly on network namespaces.
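The namespaces a process belongs to can be inspected directly under /proc. As a quick illustration (on a typical Linux system), the following lists the namespace handles of the current shell:

```shell
# Each entry under /proc/self/ns is a symlink whose target names
# the namespace type and its inode, e.g. "net:[4026531992]".
ls -l /proc/self/ns
# Entries include: mnt, pid, ipc, net, uts, user (and more on newer kernels)
```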

Network namespaces are used by containers (or processes) to implement network isolation. They enable each container or process to use an entirely different set of network interfaces; even the loopback interface is distinct in each network namespace. Each network namespace also has its own routing table and iptables rules. Linux boots with a default network namespace, which contains the default network interfaces (typically eth0) and a loopback interface.

So far we have seen that namespaces can be used to isolate resources from each other; to understand how they work and why they should be used, let's start with an example. The host has its own set of routing tables and interfaces, through which it is connected to a LAN. All of this information needs to be isolated from the hosted containers. When a container is created, a network namespace is created along with it, so that the container has no visibility of the host's network details. The container has its own virtual network interfaces and routing tables and cannot access the host's resources. This can be verified with the following commands:
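A minimal sketch of creating and listing a network namespace with the `ip netns` commands (must be run as root; the name `ns1` is an arbitrary choice for illustration):

```shell
# Create a new, empty network namespace named ns1
sudo ip netns add ns1

# List all named network namespaces known to ip
ip netns list
```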


To check the interface and routing table information in the namespace created above, the following commands can be executed:
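The state inside the namespace can be inspected with `ip netns exec`, which runs a command within the given namespace (again assuming a namespace named `ns1` created as above):

```shell
# Show interfaces inside ns1 — initially only a loopback device, in state DOWN
sudo ip netns exec ns1 ip link

# Show the routing table inside ns1 — empty at this point
sudo ip netns exec ns1 ip route
```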


Thus the namespace prevents the container from accessing the host's details. It can also be seen that there is no network connectivity inside these network namespaces yet. Similar to the way a host is connected to a LAN, network namespaces can be connected to the host and to each other using a 'veth pair'. A veth pair is a virtual cable connecting two endpoints, which can be two network namespaces or a namespace and the host; it consists of two virtual interfaces, one assigned to each side. A veth pair can be created with the following commands:
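A sketch of creating a veth pair and attaching one end to the namespace (the interface names `veth0`/`veth1` and the 10.0.0.0/24 addresses are illustrative assumptions):

```shell
# Create a veth pair: two interfaces joined like a virtual cable
sudo ip link add veth0 type veth peer name veth1

# Move one end (veth1) into the namespace ns1; veth0 stays on the host
sudo ip link set veth1 netns ns1

# Assign addresses and bring both ends up
sudo ip addr add 10.0.0.1/24 dev veth0
sudo ip link set veth0 up
sudo ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
sudo ip netns exec ns1 ip link set veth1 up
sudo ip netns exec ns1 ip link set lo up

# Verify connectivity from the host into the namespace
ping -c 1 10.0.0.2
```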



Similarly, more namespaces can be created and connected to each other using a virtual bridge. At times it is also required to connect the container to external traffic. This can be achieved by enabling IPv4 forwarding and masquerading on the host.
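Forwarding and masquerading on the host can be sketched as follows (the subnet 10.0.0.0/24, the host interface name eth0 and the gateway address 10.0.0.1 are assumptions carried over from the earlier example; adjust them to your setup):

```shell
# Enable IPv4 forwarding so the host routes packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# Masquerade (source-NAT) traffic from the namespace subnet leaving via eth0
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE

# Inside the namespace, route all traffic via the host end of the veth pair
sudo ip netns exec ns1 ip route add default via 10.0.0.1
```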


Containerisation technologies make use of network namespaces to isolate containers, allowing each container to have its own virtual network, routing tables etc., separate from the host's. This is quite helpful from a network perspective. Besides network namespaces, the Linux kernel provides other advanced technologies that play a significant role in the implementation of software containerisation.


Author:

Vaishali Jain

cloudical.io & appflowsolutions.com