Side-by-side and nested Kubernetes and OpenStack deployment with Kuryr

Kuryr enables both side-by-side Kubernetes and OpenStack deployments, as well as nested ones where Kubernetes is installed inside OpenStack VMs. As highlighted in previous posts, there is nothing that precludes a hybrid deployment, i.e., both side-by-side and nested containers at the same time. Thanks to Kuryr, deployments gain a great deal of flexibility, enabling diverse use cases where containers, regardless of whether they run on bare metal or inside VMs, live in the same Neutron networks as other co-located VMs.

This blog post is a step-by-step guide to deploying such a hybrid environment. First, we describe how to create and configure an all-in-one devstack deployment that includes the Kubernetes and Kuryr installation; we call this the undercloud deployment. Then, we describe the steps to create an OpenStack VM where Kubernetes and Kuryr are installed and configured to enable nested containers; this deployment inside the VM is named the overcloud.

Undercloud deployment

The first step is to clone the devstack git repository:

$ git clone https://git.openstack.org/openstack-dev/devstack

We then create the devstack configuration file (local.conf) inside the cloned directory:

$ cd devstack
$ cat local.conf
[[local|localrc]]

LOGFILE=devstack.log
LOG_COLOR=False

HOST_IP=CHANGEME
# Credentials
ADMIN_PASSWORD=pass
MYSQL_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3

Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan

# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
 git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"

enable_plugin kuryr-kubernetes \
 https://git.openstack.org/openstack/kuryr-kubernetes

enable_service docker
enable_service etcd
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
enable_service kuryr-kubernetes

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[securitygroup]
firewall_driver = openvswitch

In this file we need to replace the HOST_IP value with our own IP. Note that the OVS firewall driver is used (last two lines of the file) to improve nested container performance.
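For example, HOST_IP can be filled in with the host's primary address before stacking (a minimal sketch; the way the address is detected is an assumption and may need adapting to your environment):

$ sed -i "s/^HOST_IP=.*/HOST_IP=$(hostname -I | awk '{print $1}')/" local.conf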

Finally, we start the devstack deployment with:

$ ./stack.sh

This command may take some time. Once the installation is completed, the next steps cover the remaining configuration as well as the VM creation. As we are going to use the TrunkPort functionality from Neutron, we need to enable it in the neutron.conf configuration file by including ‘trunk’ in the service_plugins:

$ cat /etc/neutron/neutron.conf | grep service_plugins
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,lbaasv2, trunk

And then restart the neutron server (q-svc) service:

$ screen -r
ctrl + a + n (until reaching q-svc tab)
ctrl + c
up_arrow + enter
ctrl + a + d (exit)

The next step is to use the “demo” tenant created by devstack to create the VM and the Neutron networks to be used:

$ source openrc demo demo

We generate the key pair used to log in to the VMs:

$ ssh-keygen -t rsa -b 2048 -N '' -f id_rsa_demo
$ nova keypair-add --pub-key id_rsa_demo.pub demo

And create the networks to be used by the VMs and the containers:

$ # Create the networks
$ openstack network create net0
$ openstack network create net1

$ # Create the subnets
$ openstack subnet create --network net0 --subnet-range 10.0.4.0/24 subnet0
$ openstack subnet create --network net1 --subnet-range 10.0.5.0/24 subnet1

$ # Add subnets to the router (router1 is created by devstack and connected to the OpenStack public network)
$ openstack router add subnet router1 subnet0
$ openstack router add subnet router1 subnet1

Then, we modify the default security group rules to allow both ping and SSH access to the VMs:

$ openstack security group rule create --protocol icmp default
$ openstack security group rule create --protocol tcp --dst-port 22 default

And create the parent port (trunk port) to be used by the overcloud VM:

$ # create the port to be used as parent port
$ openstack port create --network net0 --security-group default port0

$ # Create a trunk using port0 as parent port (i.e. turn port0 into a trunk port)
$ openstack network trunk create --parent-port port0 trunk0

Finally, we boot the VM by using the parent port just created as well as the modified security group:

$ # get a vlan-aware-vm suitable image
$ wget https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
$ openstack image create --container-format bare --disk-format qcow2 --file Fedora-Cloud-Base-24-1.2.x86_64.qcow2 fedora24

$ # Boot the VM, using a flavor with at least 4GB and 2 vcpus
$ openstack server create --image fedora24 --flavor m1.large --nic port-id=port0 --key-name demo fedoraVM
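Before moving on to the overcloud, we can optionally check that the VM reached the ACTIVE state and got an address on net0 (a quick sanity check, not strictly required):

$ openstack server show fedoraVM -c status -c addresses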

Overcloud deployment

Once the VM is running, we log in to it and, similarly to the undercloud steps, clone the devstack repository and configure the local.conf file:

$ sudo ip netns exec `ip netns | grep qrouter` ssh -l fedora -i id_rsa_demo `openstack server show fedoraVM -f value | grep net0 | cut -d'=' -f2`
[vm]$ sudo dnf install -y git
[vm]$ git clone https://git.openstack.org/openstack-dev/devstack
[vm]$ cd devstack
[vm]$ cat local.conf
[[local|localrc]]

RECLONE="no"

enable_plugin kuryr-kubernetes \
 https://git.openstack.org/openstack/kuryr-kubernetes

OFFLINE="no"
LOGFILE=devstack.log
LOG_COLOR=False
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
IDENTITY_API_VERSION=3
ENABLED_SERVICES=""

SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
MULTI_HOST=1
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

enable_service docker
enable_service etcd
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
enable_service kuryr-kubernetes

Note the MULTI_HOST line, which indicates that the VM is not a controller node but has to connect to the undercloud, whose IP is provided in SERVICE_HOST. Also note ENABLED_SERVICES="", which disables all services by default, so that only the ones explicitly enabled afterwards are installed.

Before executing the devstack stack.sh script, we need to clone the kuryr-kubernetes repository and change the plugin.sh script to avoid problems caused by the Neutron components not being installed locally:

[vm]$ # Clone the kuryr-kubernetes repository
[vm]$ sudo mkdir /opt/stack
[vm]$ sudo chown fedora:fedora /opt/stack
[vm]$ cd /opt/stack
[vm]$ git clone https://github.com/openstack/kuryr-kubernetes.git

In /opt/stack/kuryr-kubernetes/devstack/plugin.sh we comment out the call to configure_neutron_defaults. This function retrieves the UUIDs of the default Neutron resources (project, pod_subnet, etc.) using the local neutron client and writes them to /etc/kuryr/kuryr.conf. This will not work here because Neutron is running remotely, so we comment it out and configure those values manually later.
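A minimal way to comment that call out (a sketch; double-check the exact function name in the branch you cloned) is:

[vm]$ sed -i 's/^\([[:space:]]*configure_neutron_defaults\)/#\1/' /opt/stack/kuryr-kubernetes/devstack/plugin.sh

After this we can trigger the devstack stack.sh script: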

[vm]$ cd ~/devstack
[vm]$ ./stack.sh

Once the installation is completed, we update the configuration in /etc/kuryr/kuryr.conf to fill in the missing information:

  • Set the UUIDs of the Neutron resources created in the undercloud (example lookup commands are shown after this list):
[neutron_defaults]
ovs_bridge = br-int
pod_security_groups = <UNDERCLOUD_DEFAULT_SG_UUID> (in our case default)
pod_subnet = <UNDERCLOUD_SUBNET_FOR_PODS_UUID> (in our case subnet1)
project = <UNDERCLOUD_DEFAULT_PROJECT_UUID> (in our case demo)
worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID> (in our case subnet0)
  • Configure “pod_vif_driver” as “nested-vlan”:
[kubernetes]
pod_vif_driver = nested-vlan
  • Configure the binding section:
[binding]
driver = kuryr.lib.binding.drivers.vlan
link_iface = <VM interface name, e.g. eth0>
  • Restart the kuryr-kubernetes-controller service from the devstack screen (similarly to what we did for q-svc on the undercloud).
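The Neutron UUIDs referenced in the first item can be looked up from the undercloud while sourced as the demo tenant, for example (the resource names match the ones created earlier in this guide):

$ openstack project show demo -f value -c id
$ openstack security group show default -f value -c id
$ openstack subnet show subnet1 -f value -c id   # pod_subnet
$ openstack subnet show subnet0 -f value -c id   # worker_nodes_subnet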

Demo

Now we are ready to test the connectivity between VMs and containers in the hybrid environment just deployed. First we create a container in the undercloud and ping the VM from it:

$ cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
 name: busybox-sleep
spec:
 containers:
 - name: busybox
   image: busybox
   args:
   - sleep
   - "1000000"

$ # Create the container using the above template
$ kubectl create -f pod1.yml
$ kubectl get pods
NAME           READY STATUS            RESTARTS AGE
busybox-sleep  0/1   ContainerCreating 0        5s
$ kubectl get pods
NAME           READY STATUS  RESTARTS AGE
busybox-sleep  1/1   Running 0        3m

$ # Get inside the container to ping the VM
$ kubectl exec -it busybox-sleep -- sh
[container]$ ping VM_IP

...CHECK PING WORKS!!...

[container]$ exit

Then we log in to the VM, create a similar container inside it, and check the connectivity with both the bare metal container and the VM:

[vm]$ # Create the container
[vm]$ kubectl create -f pod1.yml

[vm]$ # Wait until the pod becomes active
[vm]$ kubectl exec -it busybox-sleep -- sh
[container]$ ping CONTAINER_IP

...CHECK PING WORKS!!...

[container]$ ping VM_IP

...CHECK PING WORKS!!...

[container]$ exit

Finally, we ping both containers from the VM:

[vm]$ ping CONTAINER_UNDERCLOUD_IP

...CHECK PING WORKS!!...

[vm]$ ping CONTAINER_OVERCLOUD_IP

...CHECK PING WORKS!!...

 

Superfluidity: Containers and VMs deployment for the Mobile Network (Part 2)

Once we have the ‘glue’ between VMs and containers presented in the previous blog post (https://ltomasbo.wordpress.com/2017/01/16/superfluidity-containers-and-vms-at-the-mobile-network-part-1/), an important decision is what type of deployment is most suitable for each use case. Some applications (MEC Apps) or Virtual Network Functions (VNFs) may need really fast scaling or spawning and therefore have to run directly on bare metal. In that case they will run inside containers to take advantage of their easy portability and life cycle management, unlike old-fashioned bare metal installations and configurations. On the other hand, there are applications and VNFs that do not require such fast scaling or spawn times, but instead demand higher network performance (latency, throughput) while still retaining the flexibility given by containers or VMs, thus requiring a VM with SR-IOV or DPDK. Finally, there may be other applications or VNFs that benefit from extra manageability and consequently take advantage of running in nested containers, with stronger isolation (and thus improved security), and where extra information about the status of the applications is available (for both the hosting VM and the nested containers). This approach also allows other types of orchestration actions over the applications; one example is the functionality provided by the OpenStack Magnum project, which allows installing Kubernetes on top of OpenStack VMs and performing extra orchestration actions over the containers deployed through the virtualized infrastructure.

To sum it up, a common IT problem is that there is no single solution that fits all use cases, and therefore having a choice between side-by-side (some applications running in VMs while others run in bare metal containers) and nested (containers running inside VMs) deployments is a great advantage. Luckily, the latest updates to the Kuryr project let us choose any of them based on the given requirements.

Side-by-side OpenStack and OpenShift deployment

In order to enable side-by-side deployments through Kuryr, a few components have to be added to handle the OpenShift (and, similarly, Kubernetes) container creation and networking. An overview of the components is presented in the next image.

The main Kuryr components are highlighted in yellow. The Kuryr-Controller is a service in charge of the interactions with the OpenShift (and, similarly, Kubernetes) API server, as well as with the Neutron one. By contrast, the Kuryr-CNI is in charge of the network binding for the containers and pods at each worker node, so there will be one Kuryr-CNI instance on each of them.

The interaction process between these components, i.e., the Kubernetes, OpenShift and Neutron components, is depicted in the sequence diagram (for more details you can see: https://docs.google.com/presentation/d/1mofmPHRXzXdTx8ez3H73cegBtTQ_E4AQHnJKJNuEtbw).

Similarly to Kubelet, the Kuryr-Controller watches the OpenShift API server (or the Kubernetes API server). When a user request to create a pod reaches the API server, a notification is sent to both Kubelet and the Kuryr-Controller. The Kuryr-Controller then interacts with Neutron to create a Neutron port that will later be used by the container: it calls Neutron to create the port and notifies the API server with the information about the created port (pod(vif)), while waiting for the Neutron server to report that the port has become active. When that happens, it notifies the API server as well. On the other hand, when Kubelet receives the notification about the pod creation request, it calls the Kuryr-CNI to handle the local binding between the container and the network. The Kuryr-CNI waits for the notification with the port information and then performs the steps necessary to attach the container to the Neutron subnet. These consist of creating a veth device and attaching one of its ends to the OVS bridge (br-int) while leaving the other end for the pod. Once the notification about the port being active arrives, the Kuryr-CNI finishes its task, and the Kubelet component creates the container with the provided veth device end, connected to the Neutron network.
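A simple way to observe this flow from the operator's side is to create a pod and watch the corresponding Neutron port appear and transition from DOWN to ACTIVE. Reusing the pod1.yml and net1 examples from the guide above (an assumption about your setup; adapt the network name as needed):

$ kubectl create -f pod1.yml

$ # On the OpenStack side, a new port shows up in the pods' network and
$ # eventually its status becomes ACTIVE
$ openstack port list --network net1 --long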

In a side-by-side deployment, we have an OpenStack deployment on one side (which includes Keystone, Nova, Neutron, Glance, …) in charge of VM operations (creation, deletion, migration, connectivity, …), and on the other side a deployment in charge of the containers, in this case OpenShift, but it could be Kubernetes as well, or even just raw containers deployed through Docker. An example of this is shown in the next figure, which depicts the environment used for the demo presented at: https://www.youtube.com/watch?v=F909pmf8lbc

In that demo, a side-by-side deployment of OpenShift and OpenStack was set up, with Kuryr used to launch VMs and containers on the same Neutron subnet. In that figure we can see:

  • An OpenStack controller: which includes the main components, and in this case also the Kuryr-Controller, although the latter could have been located anywhere else if desired.
  • An OpenShift controller: which includes the components belonging to standard OpenShift master role, i.e., the API server, the scheduler, and the registry.
  • An OpenStack worker: where the VMs are to be deployed.
  • An OpenShift worker: which, besides having the normal components for an OpenShift node, also includes the Kuryr-CNI and the Neutron OVS agent so that created containers can be attached to Neutron networks.
  • ManageIQ: In addition to all the needed components, a ManageIQ instance was also present to demonstrate a single pane of glass where we can see, from a centralized view, both container and VM ports being created in the same Neutron network, even though they belong to different deployments.

Nested deployment: OpenShift on top of OpenStack

In order to make Kuryr work in a nested environment, a few modifications and extensions are needed. These modifications were recently merged into the Kuryr upstream branches, both for Docker and for Kubernetes/OpenShift support.

The containers are connected to the outside Neutron subnets by using a new feature included in Neutron, named Trunk Ports (https://wiki.openstack.org/wiki/Neutron/TrunkPort). The VM where the containers are deployed is booted with a trunk port, and then, for each container created inside the VM, a new subport is attached to that trunk, so each container running inside the VM gets a different encapsulation (VLAN). Their traffic also differs from the VM's own traffic, which leaves the VM untagged. Note that the subports do not have to be on the same subnet as the host VM, which allows containers in both the same and different Neutron subnets to be created in the same VM.
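On the undercloud, the trunk and its subports (one per nested container, each with its VLAN ID) can be inspected with the openstack CLI; for instance, assuming the trunk0 created in the step-by-step guide earlier in this post:

$ openstack network trunk show trunk0
$ openstack network subport list --trunk trunk0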

To continue the previous example, we keep focusing on the OpenShift/Kubernetes scenario. A few changes had to be made to the two main components described above, the Kuryr-Controller and the Kuryr-CNI. As for the Kuryr-Controller, one of the main changes concerns how the ports that will be used by the containers are created. Instead of just asking Neutron for a new port, two more steps are performed once the port is created:

  • Obtaining a VLAN ID to be used for encapsulating the container traffic inside the VM.
  • Calling Neutron to attach the created port to the VM's trunk port, using VLAN as the segmentation type together with the previously obtained VLAN ID. This way, the port is attached to the VM as a subport and can later be used by the container (a rough CLI equivalent is sketched below).
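What the Kuryr-Controller does through the Neutron API is roughly equivalent to the following manual commands (a hedged sketch; the port name and VLAN ID are illustrative, and trunk0/net1 refer to the resources from the guide earlier in this post):

$ # Create a port for the container on the pods' network
$ openstack port create --network net1 container-port1

$ # Attach it to the VM's trunk as a subport, tagged with the chosen VLAN ID
$ openstack network trunk set --subport port=container-port1,segmentation-type=vlan,segmentation-id=101 trunk0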

Furthermore, the modifications to the Kuryr-CNI target the new way of binding the containers to the network: in this case, instead of being added to the OVS bridge (br-int), they are connected to the VM's vNIC on the specific VLAN provided by the Kuryr-Controller (the subport).
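Conceptually, the binding done inside the VM boils down to giving the container a VLAN sub-interface of the VM's vNIC (this is only an illustration of the idea, not the actual Kuryr-CNI code; the interface name, VLAN ID and namespace are placeholders):

[vm]$ # Create a VLAN sub-interface on the VM's vNIC matching the subport's VLAN ID
[vm]$ sudo ip link add link eth0 name eth0.101 type vlan id 101
[vm]$ # Move it into the container's network namespace so the pod uses the tagged interface
[vm]$ sudo ip link set eth0.101 netns <container_netns>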

For the nested deployment, the interactions as well as the components are mainly the same; the main difference is how the components are distributed. Now, as the OpenShift environment is installed inside VMs, the Kuryr-Controller also needs to run on a VM so that it is reachable from the OpenShift/Kubernetes nodes running in other VMs on the same Neutron network. As before, it can be co-located on the OpenShift master VM or anywhere else. As for the Kuryr-CNI instances, instead of being located on the servers, they need to be located inside the VMs acting as worker nodes, so that they can plug the containers into the vNIC of the VM on which they are running.

PS: In a follow up blog post I will include instructions for a step-by-step Kubernetes on top of OpenStack VM deployment with Kuryr, as well as a brief demo to test the containers and VMs connectivity.

To conclude this blog post, I just want to emphasize that, thanks to Kuryr, both side-by-side and nested deployments may be combined in a single environment. The only requirement is to install the appropriate services on the servers, the VMs, or both, depending on where the containers are to be deployed. This enables VMs, bare metal containers and nested containers to be plugged into the same Neutron networks.

Superfluidity: Containers and VMs in the Mobile Network (Part 1)

Among other things, we at the Superfluidity EU Project (http://superfluidity.eu/) are looking into different deployment models to enable efficient network resource management in the mobile network. This post describes our findings and focuses on the Mobile Edge Computing (MEC) use case, as well as the project, called Kuryr, that enables it.

What is the Superfluidity EU Project about?

First, we will introduce the Superfluidity EU project and its main objectives. 

Superfluidity, as used in physics, is “a state in which the matter behaves like a fluid with zero viscosity”. Following the analogy, the Superfluidity project does the same for networks, i.e., it enables the ability to instantiate services on-the-fly anywhere on the network (including core, aggregation and edge), and to shift them to different locations in a transparent way.

For historical reasons, current telco networks are provisioned statically; however, upcoming network traffic trends require a dynamic way of providing processing capabilities inside the network, spanning mobile and access networks, core networks and clouds. Cellular networks are still designed for statically connected clients: there are several data and control plane gateways, called GGSN in 3G, and Packet Gateway (P-GW) and Serving Gateway (S-GW) in LTE, which are deployed centrally and orchestrate the users' traffic in the same static way.

The Superfluidity project especially focuses on 5G networks and tries to go one step further in the virtualization and orchestration of different network elements, including radio and network processing components such as BBUs, EPCs, P-GWs, S-GWs, PCRFs, MMEs, load balancers, SDN controllers, and others. These network functions are usually known as Virtual Network Functions (VNFs), and when a few VNFs are chained together for a common purpose they form what is known as a Network Service (NS).

The main objective of moving the network functionality, currently running on bare metal and proprietary hardware, to virtualized environments is to avoid the rigidity and cost-inefficiency of this model. The complexity emerging from heterogeneous traffic sources, services and needs, as well as access technologies with multi-vendor network components, makes the current model obsolete. This situation requires a significant change, similar to what happened in big datacenters a few years ago with the cloud computing breakthrough.

The Mobile Edge Computing (MEC) use case

Some major international operators and vendors started the ETSI ISG (Industry Specification Group) on Mobile Edge Computing, which advocates deploying virtualized network services in remote access networks, placed next to base stations and aggregation points, and running on commodity x86 servers. In other words, its task is to enable services running at the edge of the network, so that they can benefit from higher bandwidth and lower latency.

In such a scenario, some network services and applications may be deployed at a specific edge of the network (MEC App and MEC Srv in the orange boxes in the above figure). This creates new challenges. On one hand, the reaction of the network services to current conditions (e.g., spikes in the amount of traffic handled by some specific VNFs/NSs) needs to be extremely fast: the application lifecycle management, including instantiation, migration, scaling, and so on, must be quick enough to provide a good user experience. On the other hand, the amount of available resources at the edge is notably limited compared to central data centers; therefore, they must be used efficiently, which requires careful planning of the virtualization overheads (time-wise and resource-wise).

Why do we need to react quickly?

Merely moving the current functionality from bare metal to Virtual Machines (VMs) is not enough to realize the benefits of MEC. Mobile networks have some specific requirements, for instance ensuring a certain latency, extremely high network throughput, or constant latency and throughput over time. The Superfluidity project deals with some of the crucial problems, such as long provisioning times, wasteful over-provisioning (to meet variable demand), or reliance on rigid and cost-ineffective hardware devices.

There are already mechanisms in place that try to provide more reliable VM performance, such as NUMA-aware CPU pinning, huge pages, or QoS maximum bandwidth limits to reduce noisy-neighbor effects on the network. In addition, there are other techniques to increase the network throughput of VMs, such as SR-IOV to bypass the hypervisor, or DPDK to use polling instead of interrupts. However, given the high responsiveness requirements expected in 5G deployments, this may still not be enough, as the booting time of the VMs may not be fast enough for certain components, or the virtualization overhead may be prohibitive in some parts of the edge network, where maximizing resource usage effectiveness is critical due to resource scarcity.
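As a concrete illustration of the first group of mechanisms, a Nova flavor can be tuned with CPU pinning and huge pages (a hedged example: the flavor name and sizes are illustrative, and the compute hosts must be configured to support pinning and huge pages):

$ openstack flavor create --vcpus 4 --ram 8192 --disk 20 nfv.medium
$ openstack flavor set nfv.medium \
   --property hw:cpu_policy=dedicated \
   --property hw:mem_page_size=large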

Why do we need containers and VMs?

VMs may not always be the right approach for every need. Instead, other solutions such as unikernel VMs and containers should be used. In fact, many organizations are looking at Linux containers because of their quick instantiation and great portability, but containers have their limits too: it is a well-known concern that they are less secure, as they have one less isolation layer (no hypervisor).

It is important to consider where containers are going versus where virtualization already is. Even though there is high interest in moving more and more functionality to containers over the next years, the priority so far is still on new applications rather than legacy ones. Another popular option is to make VMs more efficient for some specific uses. Unikernels, for example, reduce the footprint of VMs to a few MBs (or even KBs) and minimize their boot time, making them even faster than containers. This requires optimizing the VMs for a certain use and limits the flexibility of such a solution; one remarkable example is ClickOS. In the future, this will undoubtedly lead to a blend of VMs (both general and special purpose ones) and containers.

On top of that, there is a belief that containers and virtualization are essentially the same thing, while they are not. Although they have a lot in common, they have some differences too, and should be seen as complementary rather than competing technologies. For example, VMs can be a perfect environment for running containerized workloads (it is already fairly common to run Kubernetes or OpenShift on top of OpenStack VMs), providing a more secure environment for running containers, higher flexibility and even improved fault tolerance, while still taking advantage of accelerated application deployment and management through containers. This is commonly referred to as “nested” containers.

How to merge containers and VMs? With Kuryr

The problem is not just how to create computational resources, be they VMs or containers, but also how to connect these resources among themselves and to the users, in other words, networking. Regarding VMs in OpenStack, the Neutron project already has a very rich ecosystem of plug-ins and drivers that provide networking solutions and services, like load-balancing-as-a-service (LBaaS), virtual-private-network-as-a-service (VPNaaS) and firewall-as-a-service (FWaaS).

By contrast, in container networking there is no standard networking API and implementation, so each solution tries to reinvent the wheel, overlapping with other existing solutions. This is especially true in hybrid environments that blend containers and VMs. As an example, OpenStack Magnum had to introduce abstraction layers for different libnetwork drivers depending on the Container Orchestration Engine (COE).

Knowing these facts, and considering that the Superfluidity project targets quick resource provisioning in 5G deployments, there is a need to further advance container networking and its integration into the OpenStack environment. To accomplish this, we have worked on a recent OpenStack project named Kuryr, which leverages the abstractions and all the hard work previously done in Neutron and its plug-ins and services, and uses them to provide production-grade networking for container use cases. There are two main objectives:

  1. Making use of Neutron functionality in container deployments. Neutron features such as security groups or QoS can be applied directly to container ports, along with the upcoming integration of Load Balancing as a Service for Kubernetes services;
  2. Being able to connect both VMs and containers in hybrid deployments.

Besides the interaction with the Neutron API, we need to provide binding actions for the containers so that they can be attached to the network. This is one of the common problems for Neutron solutions supporting container networking, as there is no Nova port-binding infrastructure or libvirt support to rely on. To address this, Kuryr provides a generic VIF binding mechanism that takes the port type received from the container namespace end and attaches it to the networking solution infrastructure, as highlighted in the following figure.

In a nutshell, Kuryr aims to be the “integration bridge” between the two communities, container and VM networking, so that each Neutron plug-in or solution does not need to find and close the gaps independently. Kuryr maps the container networking abstraction to the Neutron API, enabling consumers to choose their vendor and keep one high-quality API free of vendor lock-in, which in turn brings container and VM networking together under one API. All in all, it allows:

  • A single community-sourced networking solution, whether you run containers, VMs or both
  • Leveraging vendor OpenStack support experience in the container space
  • A quicker path to Kubernetes & OpenShift for users of Neutron networking
  • Ability to transition workloads to containers/microservices at your own pace

Additionally, Kuryr provides a way to avoid the double encapsulation present in current nested deployments, for example when containers run inside VMs deployed on OpenStack. As we can see in the next figure, when using Docker inside OpenStack VMs there is a double encapsulation: one for the Neutron overlay network and another one on top of that for the container network (e.g., a flannel overlay). This creates an overhead that needs to be removed for the 5G scenarios targeted by Superfluidity.

Kuryr leverages the new TrunkPort functionality provided by Neutron (also known as VLAN-aware VMs) to attach subports that are later bound to the containers inside the VMs, which run a shim version of Kuryr to interact with the Neutron server. This enables better isolation between containers co-located in the same VM, even if they belong to the same subnet, as their network traffic is carried on different (local) VLANs.

The continuation (part 2) of this blog post presents two different deployment types enabled by Kuryr: https://ltomasbo.wordpress.com/2017/01/24/superfluidity-containers-and-vms-deployment-for-the-mobile-network-part-2/