This section walks you through the process of deploying TeraFlowSDN on top of a machine running [MicroK8s Kubernetes platform](https://microk8s.io).
The guide includes the details on configuring and installing the machine, installing and 
configuring MicroK8s, and deploying and reporting the status of the TeraFlowSDN 
controller.

## **1.1. Configure your Machine**

In this section, we describe how to configure a machine (physical or virtual) to be used as the deployment, execution, and development environment for the ETSI TeraFlowSDN controller. Choose your preferred environment below and follow the instructions provided.

**NOTE**: If you already have a remote physical server fitting the requirements specified in this section, feel free to use it instead of deploying a local VM. Check [1.1.1. Physical Server](#111-physical-server) for further details.

The deployment environments tested are:

- [Physical Server](#111-physical-server)
- [Oracle Virtual Box](#112-oracle-virtual-box)
- [VMWare Fusion](#113-vmware-fusion)
- [OpenStack](#114-openstack)
- [Vagrant Box](#115-vagrant-box)
### **1.1.1. Physical Server**
This section describes how to configure a physical server for running the ETSI TeraFlowSDN (TFS) controller.
<h3><u>Server Specifications</u></h3>
**Minimum Server Specifications for development and basic deployment**

- CPU: 4 cores
- RAM: 8 GB
- Disk: 60 GB
- 1 GbE NIC

**Recommended Server Specifications for development and basic deployment**

- CPU: 6 cores
- RAM: 12 GB
- Disk: 80 GB
- 1 GbE NIC

**Server Specifications for best development and deployment experience**

- CPU: 8 cores
- RAM: 32 GB
- Disk: 120 GB
- 1 GbE NIC

**NOTE**: the specifications listed above are provided as a reference; the required specifications also depend on the CPU clock frequency, the RAM speed, the disk technology and speed, etc.

For development purposes, it is recommended to run the VSCode IDE (or the IDE of your choice) on a more powerful server, for instance, one meeting the recommended server specifications for development and basic deployment.

Given that TeraFlowSDN follows a micro-services architecture, for deployment it might be better to use a cluster of servers with many slower cores than a single server with a few high-performance cores.


<h3><u>Clusterized Deployment</u></h3>
You might consider creating a cluster of machines, each featuring at least the minimum server specifications. This solution provides scalability for the future.


<h3><u>Networking</u></h3>
No explicit indications are given in terms of networking besides that servers need access to the Internet for downloading dependencies, binaries, and packages while building and deploying the TeraFlowSDN components.

Besides that, the network requirements are essentially the same as those required for running a classical Kubernetes environment. To facilitate the deployment, we extensively use [MicroK8s](https://microk8s.io/); thus, the network requirements are essentially the same as those demanded by MicroK8s, especially if you consider creating a Kubernetes cluster.

As a reference, the other deployment solutions based on VMs assume the VM is connected to a virtual network configured with the IP range `10.0.2.0/24` and the gateway at IP `10.0.2.1`. The VM has the IP address `10.0.2.10`.

The minimum required ports to be accessible are:
- 22/SSH     : for management purposes
- 80/HTTP    : for the TeraFlowSDN WebUI and Grafana dashboard
- 8081/HTTPS : for the CockroachDB WebUI

Other ports might be required if you plan to deploy add-ons such as Kubernetes observability tools, etc. The details on these ports are left aside given that they might vary depending on the Kubernetes environment you use.
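As a quick sanity check, you can verify from another host in the same network that the required ports are reachable once the corresponding services are deployed. A minimal sketch using `nc` (netcat), assuming the reference IP `10.0.2.10` is used:

```bash
# Check reachability of the minimum required ports (assumes the reference IP 10.0.2.10)
for port in 22 80 8081; do
    nc -z -v -w 3 10.0.2.10 ${port}
done
```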


<h3><u>Operating System</u></h3>

The recommended Operating System for deploying TeraFlowSDN is [Ubuntu Server 22.04 LTS](https://releases.ubuntu.com/jammy/) or [Ubuntu Server 20.04 LTS](https://releases.ubuntu.com/focal/). Other versions might work, but we have not tested them. We strongly recommend using Long Term Support (LTS) versions as they provide better stability.

Below we provide some installation guidelines:
- Installation Language: English
- Autodetect your keyboard
- If asked, select "Ubuntu Server" (do not select "Ubuntu Server (minimized)").
- Configure static network specifications (adapt them based on your particular setup; a netplan sketch is provided after this list):

|Interface|IPv4 Method|Subnet     |Address  |Gateway |Name servers   |Search domains|
|---------|-----------|-----------|---------|--------|---------------|--------------|
|enp0s3   |Manual     |10.0.2.0/24|10.0.2.10|10.0.2.1|8.8.8.8,8.8.4.4|<empty>       |

- Leave proxy and mirror addresses as they are
- Let the installer self-upgrade (if asked).
- Use an entire disk for the installation
  - Disable setup of the disk as LVM group
  - Double check that NO swap space is allocated in the partition table. Kubernetes does not work properly with SWAP.
- Configure your user and system names:
  - User name: `TeraFlowSDN`
  - Server's name: `tfs-vm`
  - Username: `tfs`
  - Password: `tfs123`
- Install Open SSH Server
  - Import SSH keys, if any.
- Featured Server Snaps
  - Do not install featured server snaps. They will be installed manually later to illustrate how to uninstall and reinstall them in case of trouble.
- Let the system install and upgrade the packages.
  - This operation might take some minutes depending on how old the ISO image you use is and on your Internet connection speed.
- Restart the VM when the installation is completed.
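For reference, should you need to adjust the static addressing after the installation, the settings above map to a netplan configuration similar to the following sketch (the interface name `enp0s3` and the file name `/etc/netplan/00-installer-config.yaml` are assumptions and may differ in your setup):

```bash
# Write a netplan file matching the table above (adjust interface and file names as needed)
sudo tee /etc/netplan/00-installer-config.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
      addresses: [10.0.2.10/24]
      routes:
        - to: default
          via: 10.0.2.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
EOF

# Apply the configuration
sudo netplan apply
```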

<h3><u>Upgrade the Ubuntu distribution</u></h3>
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
- If asked to restart services, restart the default ones proposed.
- Restart the VM when the installation is completed.
### **1.1.2. Oracle Virtual Box**
This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using [Oracle VirtualBox](https://www.virtualbox.org/). It has been tested with VirtualBox up to version 6.1.40 r154048.
<h3><u>Create a NAT Network in VirtualBox</u></h3>
In "Oracle VM VirtualBox Manager", Menu "File > Preferences... > Network", create a NAT 
network with the following specifications:

|Name       |CIDR       |DHCP    |IPv6    |
|-----------|-----------|--------|--------|
|TFS-NAT-Net|10.0.2.0/24|Disabled|Disabled|

Within the newly created "TFS-NAT-Net" NAT network, configure the following IPv4 
forwarding rules:

|Name|Protocol|Host IP  |Host Port|Guest IP |Guest Port|
|----|--------|---------|---------|---------|----------|
|SSH |TCP     |127.0.0.1|2200     |10.0.2.10|22        |
|HTTP|TCP     |127.0.0.1|8080     |10.0.2.10|80        |

__Note__: IP address 10.0.2.10 is the one that will be assigned to the VM.
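With these forwarding rules in place, once the VM is up you can reach it from the host machine, for instance (assuming the user `tfs` configured below):

```bash
# SSH into the VM through the forwarded port on the host
ssh -p 2200 tfs@127.0.0.1
```

The TeraFlowSDN WebUI will later be reachable from the host at `http://127.0.0.1:8080` through the HTTP forwarding rule.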

<h3><u>Create VM in VirtualBox:</u></h3>

- Name: TFS-VM
- Type/Version: Linux / Ubuntu (64-bit)
- CPU (*): 4 vCPUs @ 100% execution capacity
- RAM: 8 GB
- Disk: 60 GB, Virtual Disk Image (VDI), Dynamically allocated
- Optical Drive ISO Image: "ubuntu-22.04.X-live-server-amd64.iso"
  - Download the latest Long Term Support (LTS) version of the *Ubuntu Server* image from [Ubuntu 22.04 LTS](https://releases.ubuntu.com/22.04/), e.g., "ubuntu-22.04.X-live-server-amd64.iso".
  - __Note__: use Ubuntu Server image instead of Ubuntu Desktop to create a lightweight VM.
- Network Adapter 1 (*): enabled, attached to NAT Network "TFS-NAT-Net"
- Minor adjustments (*):
  - Audio: disabled
  - Boot order: disable "Floppy"

__Note__: (*) settings to be edited after the VM is created.

<h3><u>Install Ubuntu 22.04 LTS Operating System</u></h3>
In "Oracle VM VirtualBox Manager", start the VM in normal mode, and follow the 
installation procedure.
Below we provide some installation guidelines:
- Installation Language: English
- Autodetect your keyboard
- If asked, select "Ubuntu Server" (do not select "Ubuntu Server (minimized)").
- Configure static network specifications:

|Interface|IPv4 Method|Subnet     |Address  |Gateway |Name servers   |Search domains|
|---------|-----------|-----------|---------|--------|---------------|--------------|
|enp0s3   |Manual     |10.0.2.0/24|10.0.2.10|10.0.2.1|8.8.8.8,8.8.4.4|<empty>       |

- Leave proxy and mirror addresses as they are
- Let the installer self-upgrade (if asked).
- Use an entire disk for the installation
  - Disable setup of the disk as LVM group
  - Double check that NO swap space is allocated in the partition table. Kubernetes does not work properly with SWAP.
- Configure your user and system names:
  - User name: TeraFlowSDN
  - Server's name: tfs-vm
  - Username: tfs
  - Password: tfs123
- Install Open SSH Server
  - Import SSH keys, if any.
- Featured Server Snaps
  - Do not install featured server snaps. They will be installed manually later to illustrate how to uninstall and reinstall them in case of trouble.
- Let the system install and upgrade the packages.
  - This operation might take some minutes depending on how old the ISO image you use is and on your Internet connection speed.
- Restart the VM when the installation is completed.

**Upgrade the Ubuntu distribution**
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
- If asked to restart services, restart the default ones proposed.
- Restart the VM when the installation is completed.


**Install VirtualBox Guest Additions**
On VirtualBox Manager, open the VM main screen. If you are running the VM in headless 
mode, right-click the VM in the VirtualBox Manager window and click "Show".
If a dialog informing about how to leave the VM interface is shown, confirm by 
pressing the "Switch" button. The interface of the VM should appear.

Click menu "Device > Insert Guest Additions CD image..."

On the VM terminal, type:
```bash
sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms
  # This command might take some minutes depending on your VM specs and your Internet access speed.
sudo mount /dev/cdrom /mnt/
cd /mnt/
sudo ./VBoxLinuxAdditions.run
  # This command might take some minutes depending on your VM specs.
sudo reboot
```
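After the reboot, you can optionally verify that the Guest Additions kernel modules were loaded (module names such as `vboxguest` and `vboxsf` may vary slightly across VirtualBox versions):

```bash
# List the VirtualBox guest kernel modules currently loaded
lsmod | grep -i vbox
```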

### **1.1.3. VMWare Fusion**
This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using [VMWare Fusion](https://www.vmware.com/products/fusion.html). It has been tested with VMWare Fusion versions 12 and 13.
<h3><u>Create VM in VMWare Fusion:</u></h3>

In "VMWare Fusion" manager, create a new network from the "Settings/Network" menu.

- Unlock to make changes
- Press the + icon and create a new network
- Change the name to TFS-NAT-Net
- Check "Allow virtual machines on this network to connect to external network (NAT)"
- Do not check "Enable IPv6"
- Add port forwarding for HTTP and SSH
- Uncheck "Provide address on this network via DHCP"

Create a new VM using an Ubuntu 22.04.1 ISO:

- Display Name: TeraFlowSDN
- Username: tfs
- Password: tfs123

On the next screen, press "Customize Settings", save the VM, and in "Settings" change:
- Change to use 4 CPUs
- Change to access 8 GB of RAM
- Change disk to size 60 GB
- Change the network interface to use the previously created TFS-NAT-Net

Run the VM to start the installation.

<h3><u>Install Ubuntu 22.04.1 LTS Operating System</u></h3>

The installation will be automatic, without any configuration required.

- Configure the guest IP, gateway and DNS:

  Using the Network Settings for the wired connection, set the IP to 10.0.2.10,
  the mask to 255.255.255.0, the gateway to 10.0.2.2 and the DNS to 10.0.2.2.

- Disable and remove the swap file:

  ```bash
  sudo swapoff -a
  sudo rm /swapfile
  ```

  Then remove or comment out the `/swapfile` entry in `/etc/fstab` (see the sketch after this list).

- Install Open SSH Server
  - Import SSH keys, if any.

- Restart the VM when the installation is completed.
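To comment out the `/swapfile` entry without editing the file by hand, a one-liner such as the following can be used (a sketch; double-check `/etc/fstab` afterwards):

```bash
# Comment out the /swapfile line in /etc/fstab so swap stays disabled after reboot
# (a backup is kept in /etc/fstab.bak)
sudo sed -i.bak '/\/swapfile/ s/^/#/' /etc/fstab
```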

<h3><u>Upgrade the Ubuntu distribution</u></h3>
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```

### **1.1.4. OpenStack**

This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using [OpenStack](https://www.openstack.org/). It has been tested with OpenStack Kolla up to the Yoga version.

<h3><u>Create a Security Group in OpenStack</u></h3>
In OpenStack, go to Project - Network - Security Groups - Create Security Group with name TFS

Add the following rules:

|Direction|Ether Type|IP Protocol|Port Range|Remote IP Prefix|
|---------|----------|-----------|----------|----------------|
|Ingress  |IPv4      |TCP        |22 (SSH)  |0.0.0.0/0       |
|Ingress  |IPv4      |TCP        |2200      |0.0.0.0/0       |
|Ingress  |IPv4      |TCP        |8080      |0.0.0.0/0       |
|Ingress  |IPv4      |TCP        |80        |0.0.0.0/0       |
|Egress   |IPv4      |Any        |Any       |0.0.0.0/0       |
|Egress   |IPv6      |Any        |Any       |::/0            |

__Note__: The IP address will be assigned depending on the network you have configured inside OpenStack. This IP will have to be modified in the TeraFlowSDN configuration files, which by default use IP 10.0.2.10.

<h3><u>Create a flavor</u></h3>

**From dashboard (Horizon)**

Go to Admin - Compute - Flavors and press Create Flavor

- Name: TFS
- VCPUs: 4
- RAM (MB): 8192
- Root Disk (GB): 60

**From CLI**
```bash
openstack flavor create TFS --id auto --ram 8192 --disk 60 --vcpus 4
```
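You can verify the flavor has been created as expected with:

```bash
openstack flavor show TFS
```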
<h3><u>Create an instance in OpenStack:</u></h3>

- Instance name: TFS-VM
- Origin: [Ubuntu-22.04 cloud image](https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img)
- Create new volume: No
- Flavor: TFS
- Networks: extnet 
- Security Groups: TFS
- Configuration: Include the following cloud-config

```
#cloud-config
# Modifies the password for the VM instance
username: ubuntu
password: <your-password>
chpasswd: { expire: False }
ssh_pwauth: True
```
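Alternatively, the instance can be created from the CLI. A sketch, assuming the Ubuntu cloud image has been uploaded as `ubuntu-22.04` and the cloud-config above is saved as `cloud-init.yaml` (both names are placeholders to adapt to your environment):

```bash
openstack server create \
    --flavor TFS \
    --image ubuntu-22.04 \
    --network extnet \
    --security-group TFS \
    --user-data cloud-init.yaml \
    TFS-VM
```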


<h3><u>Upgrade the Ubuntu distribution</u></h3>
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
- If asked to restart services, restart the default ones proposed.
- Restart the VM when the installation is completed.

### **1.1.5. Vagrant Box**
<TBD_LONG>
## **1.2. Install MicroK8s**

This section describes how to deploy the MicroK8s Kubernetes platform and configure it to be used with the ETSI TeraFlowSDN controller. In addition, Docker is installed to build the Docker images for the ETSI TeraFlowSDN controller.

The steps described in this section might take some minutes depending on your internet connection speed and the resources assigned to your VM, or the specifications of your physical server.

To facilitate the work, it is easier to execute these steps through an SSH connection, for instance using tools like [PuTTY](https://www.putty.org/) or [MobaXterm](https://mobaxterm.mobatek.net/).

<h3><u>Upgrade the Ubuntu distribution</u></h3>
Skip this step if you already did it during the creation of the VM.
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```


<h3><u>Install prerequisites</u></h3>
```bash
sudo apt-get install -y ca-certificates curl gnupg lsb-release snapd jq
```

<h3><u>Install Docker CE</u></h3>
Install Docker CE and the Docker BuildX plugin:
```bash
sudo apt-get install -y docker.io docker-buildx
```

**NOTE**: Starting from Docker v23, the [Build architecture](https://docs.docker.com/build/architecture/) has been updated and the `docker build` command entered a deprecation process in favor of the new `docker buildx build` command. The `docker-buildx` package provides the new `docker buildx build` command.

Add key "insecure-registries" with the private repository to the daemon configuration. It is done in two commands since
sometimes read from and write to same file might cause trouble.

```bash
if [ -s /etc/docker/daemon.json ]; then cat /etc/docker/daemon.json; else echo '{}'; fi \
    | jq 'if has("insecure-registries") then . else .+ {"insecure-registries": []} end' -- \
    | jq '."insecure-registries" |= (.+ ["localhost:32000"] | unique)' -- \
    | tee tmp.daemon.json
sudo mv tmp.daemon.json /etc/docker/daemon.json
sudo chown root:root /etc/docker/daemon.json
sudo chmod 600 /etc/docker/daemon.json
```

Restart the Docker daemon
```bash
sudo systemctl restart docker
```
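To verify that the private registry has been added to the daemon configuration, you can inspect the Docker system information (the exact output layout may vary across Docker versions):

```bash
# "localhost:32000" should appear under "Insecure Registries"
sudo docker info | grep -A 3 "Insecure Registries"
```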

<h3><u>Install MicroK8s</u></h3>

**Important**: Some TeraFlowSDN dependencies need to be executed on top of MicroK8s/Kubernetes v1.24. It is not guaranteed (for now) to run on newer versions.

```bash
# Install MicroK8s
sudo snap install microk8s --classic --channel=1.24/stable

# Create alias for command "microk8s.kubectl" to be usable as "kubectl"
sudo snap alias microk8s.kubectl kubectl
```

It is important to make sure that `ufw` will not interfere with the internal pod-to-pod
and pod-to-Internet traffic.
To do so, first check the status.
If `ufw` is active, use the following commands to enable the communication.

```bash
# Verify status of ufw firewall
sudo ufw status

# If ufw is active, install following rules to enable access pod-to-pod and pod-to-internet
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
```

**NOTE**: MicroK8s can be used to compose a Highly Available Kubernetes cluster enabling you to construct an environment combining the CPU, RAM and storage resources of multiple machines. If you are interested in this procedure, review the official instructions in [How to build a highly available Kubernetes cluster with MicroK8s](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha), in particular, the step [Create a MicroK8s multi-node cluster](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha#4-create-a-microk8s-multinode-cluster).

**References:**

- [The lightweight Kubernetes > Install MicroK8s](https://microk8s.io/#install-microk8s)
- [Install a local Kubernetes with MicroK8s](https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s)
- [How to build a highly available Kubernetes cluster with MicroK8s](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha)


<h3><u>Add user to the docker and microk8s groups</u></h3>

It is important that your user has permission to run `docker` and `microk8s` in the terminal.
To allow this, you need to add your user to the `docker` and `microk8s` groups with the 
following commands:

```bash
sudo usermod -a -G docker $USER
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER $HOME/.kube
sudo reboot
```

In case you run into trouble executing the previous commands, possibly because the `.kube` folder is not automatically provisioned in your home folder, you may follow the steps below:

```bash
mkdir -p $HOME/.kube
sudo chown -f -R $USER $HOME/.kube
microk8s config > $HOME/.kube/config
sudo reboot
```
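After the reboot, you can check that your user is able to run both tools without `sudo`:

```bash
# Both commands should work without sudo once the group changes take effect
docker ps
kubectl get nodes
```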

<h3><u>Check status of Kubernetes and addons</u></h3>
To retrieve the status of Kubernetes __once__, run the following command:
```bash
microk8s.status --wait-ready
```

To retrieve the status of Kubernetes __periodically__ (e.g., every 1 second), run the 
following command:
```bash
watch -n 1 microk8s.status --wait-ready
```

<h3><u>Check all resources in Kubernetes</u></h3>
To retrieve the status of the Kubernetes resources __once__, run the following command:
```bash
kubectl get all --all-namespaces
```

To retrieve the status of the Kubernetes resources __periodically__ (e.g., every 1 
second), run the following command:
```bash
watch -n 1 kubectl get all --all-namespaces
```

<h3><u>Enable addons</u></h3>

First, we need to enable the community plugins (maintained by third parties):

```bash
microk8s.enable community
```

The Addons to be enabled are:
- `dns`: enables resolving the pods and services by name
- `helm3`: required to install NATS
- `hostpath-storage`: enables providing storage for the pods (required by `registry`)
- `ingress`: deploys an ingress controller to expose the microservices outside Kubernetes
- `registry`: deploys a private registry for the TFS controller images
- `linkerd`: deploys the [linkerd service mesh](https://linkerd.io) used for load balancing among replicas
- `prometheus`: set of tools that enable TFS observability through per-component instrumentation
- `metrics-server`: deploys the [Kubernetes metrics server](https://github.com/kubernetes-sigs/metrics-server) for API access to service metrics

```bash
microk8s.enable dns helm3 hostpath-storage ingress registry prometheus metrics-server linkerd
```

__Important__: Enabling some of the addons might take a few minutes.
Do not proceed with the next steps until the addons are ready.
Otherwise, the deployment might fail.
To confirm everything is up and running:
1. Periodically
   [Check the status of Kubernetes](./1.2.-Install-MicroK8s#check-status-of-kubernetes-and-addons)
   until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, linkerd, metrics-server, prometheus, registry, storage] in the enabled block.
2. Periodically
   [Check Kubernetes resources](./1.2.-Install-MicroK8s#check-all-resources-in-kubernetes)
   until all pods are __Ready__ and __Running__.
3. If it takes too long for the Pods to be ready, __we observed that rebooting the machine may help__.
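A convenient way to spot pods that are not yet ready is to filter out those already in `Running` or `Completed` state, for instance:

```bash
# Refresh every 5 seconds, showing only pods that are not Running/Completed yet
watch -n 5 'kubectl get pods --all-namespaces | grep -vE "Running|Completed"'
```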

Then, create aliases to make the commands easier to access:

```bash
sudo snap alias microk8s.helm3 helm3
sudo snap alias microk8s.linkerd linkerd
```

To validate that `linkerd` is working correctly, run:

```bash
linkerd check
```

To validate that the `metrics-server` is working correctly, run:
```bash
kubectl top pods --all-namespaces
```
and you should see a screen similar to the `top` command in Linux, showing the columns *namespace*, *pod name*, *CPU (cores)*, and *MEMORY (bytes)*.

In case pods are not starting, check the information in the pod logs. For example, linkerd is sensitive to proper `/etc/resolv.conf` syntax.
```bash
kubectl logs <podname> --namespace <namespace>
```
If the command shows an error message, restarting the machine might also help.
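Besides the logs, describing the pod and checking the recent cluster events usually helps to reveal scheduling or image-pull issues:

```bash
# Detailed status and recent events for a specific pod
kubectl describe pod <podname> --namespace <namespace>

# Recent events across the whole cluster
kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp'
```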

<h3><u>Stop, Restart, and Redeploy</u></h3>
Find below some additional commands you might need while you work with MicroK8s:
```bash
microk8s.stop  # stop MicroK8s cluster (for instance, before power off your computer)
microk8s.start # start MicroK8s cluster
microk8s.reset # reset infrastructure to a clean state
```

If the commands above do not work to recover the MicroK8s cluster, you can redeploy it.

If you want to keep the MicroK8s configuration, use:
```bash
sudo snap remove microk8s
```

If you need to completely drop MicroK8s and its complete configuration, use:
```bash
sudo snap remove microk8s --purge
sudo apt-get remove --purge docker.io docker-buildx
```

**IMPORTANT**: After uninstalling MicroK8s, it is convenient to reboot the computer (the VM if you work on a VM, or the physical server if you use one). Otherwise, some system configurations, especially those related to port forwarding and firewall rules, are not correctly cleaned up.

After the reboot, redeploy as described in this section.

## **1.3. Deploy TeraFlowSDN**
## **1.4. WebUI and Grafana Dashboards**
## **1.5. Show Deployment and Logs**