This section walks you through the process of deploying TeraFlowSDN on top of a machine running [MicroK8s Kubernetes platform](https://microk8s.io).
The guide includes the details on configuring and installing the machine, installing and
configuring MicroK8s, and deploying the TeraFlowSDN controller and reporting its status.
## **1.1. Configure your Machine**
In this section, we describe how to configure a machine (physical or virtual) to be used as the deployment, execution, and development environment for the ETSI TeraFlowSDN controller. Choose your preferred environment below and follow the instructions provided.
**NOTE**: If you already have a remote physical server fitting the requirements specified in this section feel free to use it instead of deploying a local VM. Check [1.1.1. Physical Server](#111-physical-server) for further details.
Virtualization platforms tested are:
- [Physical Server](#111-physical-server)
- [Oracle Virtual Box](#112-oracle-virtual-box)
- [VMWare Fusion](#113-vmware-fusion)
- [OpenStack](#114-openstack)
- [Vagrant](#115-vagrant)
### **1.1.1. Physical Server**
This section describes how to configure a physical server for running the ETSI TeraFlowSDN (TFS) controller.
<h3><u>Server Specifications</u></h3>
**Minimum Server Specifications for development and basic deployment**
- CPU: 4 cores
- RAM: 8 GB
- Disk: 60 GB
- 1 GbE NIC
**Recommended Server Specifications for development and basic deployment**
- CPU: 6 cores
- RAM: 12 GB
- Disk: 80 GB
- 1 GbE NIC
**Server Specifications for best development and deployment experience**
- CPU: 8 cores
- RAM: 32 GB
- Disk: 120 GB
- 1 GbE NIC
**NOTE**: the specifications listed above are provided as a reference. The actual requirements also depend on the CPU clock frequency, the RAM technology, the disk technology and speed, etc.
For development purposes, it is recommended to run the VSCode IDE (or the IDE of your choice) on a more powerful server, for instance, one matching the recommended server specifications for development and basic deployment.
Given that TeraFlowSDN follows a micro-services architecture, for deployment it might be better to use a cluster of servers with many slower cores than a single server with a few highly performant cores.
<h3><u>Clusterized Deployment</u></h3>
You might consider creating a cluster of machines, each featuring at least the minimum server specifications. That solution brings you scalability in the future.
No explicit indications are given in terms of networking besides that the servers need access to the Internet for downloading dependencies, binaries, and packages while building and deploying the TeraFlowSDN components.
Besides that, the network requirements are essentially the same as those required for running a classical Kubernetes environment. To facilitate the deployment, we extensively use [MicroK8s](https://microk8s.io/); thus, the network requirements are, essentially, those demanded by MicroK8s, especially if you consider creating a Kubernetes cluster.
As a reference, the other deployment solutions based on VMs assume the VM is connected to a virtual network configured with the IP range `10.0.2.0/24` and the gateway at IP `10.0.2.1`. The VMs use the IP address `10.0.2.10`.
The minimum required ports to be accessible are:
- 22/SSH : for management purposes
- 80/HTTP : for the TeraFlowSDN WebUI and Grafana dashboard
- 8081/HTTPS : for the CockroachDB WebUI
Other ports might be required if you consider deploying add-ons such as Kubernetes observability tools, etc. The details on these ports are not covered here, given that they might vary depending on the Kubernetes environment you use.
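If the server runs a host firewall such as `ufw` (not required by this guide), the ports listed above can be opened as sketched below; adapt the rules to your own security policy.
```bash
# Illustrative only: open the minimum required ports with ufw (skip if no host firewall is used)
sudo ufw allow 22/tcp    # SSH, for management purposes
sudo ufw allow 80/tcp    # TeraFlowSDN WebUI and Grafana dashboard
sudo ufw allow 8081/tcp  # CockroachDB WebUI
```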
The recommended Operating System for deploying TeraFlowSDN is [Ubuntu Server 22.04 LTS](https://releases.ubuntu.com/jammy/) or [Ubuntu Server 20.04 LTS](https://releases.ubuntu.com/focal/). Other versions might work, but we have not tested them. We strongly recommend using Long Term Support (LTS) versions as they provide better stability.
Below we provide some installation guidelines:
- Installation Language: English
- Autodetect your keyboard
- If asked, select "Ubuntu Server" (do not select "Ubuntu Server (minimized)").
- Configure static network specifications (adapt them based on your particular setup; a netplan sketch is provided after this list):
|Interface|IPv4 Method|Subnet |Address |Gateway |Name servers |Search domains|
|---------|-----------|-----------|---------|--------|---------------|--------------|
|enp0s3 |Manual |10.0.2.0/24|10.0.2.10|10.0.2.1|8.8.8.8,8.8.4.4|<empty> |
- Leave proxy and mirror addresses as they are
- Let the installer self-upgrade (if asked).
- Use an entire disk for the installation
- Disable setup of the disk as LVM group
- Double check that NO swap space is allocated in the partition table. Kubernetes does not work properly with SWAP.
- Configure your user and system names:
- User name: `TeraFlowSDN`
- Server's name: `tfs-vm`
- Username: `tfs`
- Password: `tfs123`
- Install Open SSH Server
- Import SSH keys, if any.
- Featured Server Snaps
- Do not install featured server snaps. It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble with them.
- Let the system install and upgrade the packages.
- This operation might take some minutes depending on how old the ISO image you use is and on your Internet connection speed.
- Restart the VM when the installation is completed.
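In case you skipped the static network configuration during the installation, or need to adjust it later, the following sketch applies the addressing from the table above using netplan. The interface name (`enp0s3`) and the configuration file name are assumptions; adapt them to your setup.
```bash
# Illustrative only: static addressing matching the table above, applied with netplan
sudo tee /etc/netplan/00-static-config.yaml >/dev/null << EOF
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
      addresses: [10.0.2.10/24]
      gateway4: 10.0.2.1             # newer netplan versions prefer "routes: [{to: default, via: 10.0.2.1}]"
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
EOF
sudo netplan apply
```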
<h3><u>Upgrade the Ubuntu distribution</u></h3>
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
- If asked to restart services, restart the default ones proposed.
- Restart the VM when the installation is completed.
### **1.1.2. Oracle Virtual Box**
This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using [Oracle VirtualBox](https://www.virtualbox.org/). It has been tested with VirtualBox up to version 6.1.40 r154048.
<h3><u>Create a NAT Network in VirtualBox</u></h3>
In "Oracle VM VirtualBox Manager", Menu "File > Preferences... > Network", create a NAT
network with the following specifications:
|Name |CIDR |DHCP |IPv6 |
|-----------|-----------|--------|--------|
|TFS-NAT-Net|10.0.2.0/24|Disabled|Disabled|
Within the newly created "TFS-NAT-Net" NAT network, configure the following IPv4
forwarding rules:
|Name|Protocol|Host IP |Host Port|Guest IP |Guest Port|
|----|--------|---------|---------|---------|----------|
|SSH |TCP |127.0.0.1|2200 |10.0.2.10|22 |
|HTTP|TCP |127.0.0.1|8080 |10.0.2.10|80 |
__Note__: IP address 10.0.2.10 is the one that will be assigned to the VM.
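If you prefer the command line over the VirtualBox Manager GUI, a rough equivalent using `VBoxManage` is sketched below; option names may vary slightly between VirtualBox versions, so treat it as an illustration.
```bash
# Create the NAT network with DHCP and IPv6 disabled
VBoxManage natnetwork add --netname TFS-NAT-Net --network "10.0.2.0/24" --enable --dhcp off
# Add the IPv4 port-forwarding rules for SSH and HTTP
VBoxManage natnetwork modify --netname TFS-NAT-Net --port-forward-4 "SSH:tcp:[127.0.0.1]:2200:[10.0.2.10]:22"
VBoxManage natnetwork modify --netname TFS-NAT-Net --port-forward-4 "HTTP:tcp:[127.0.0.1]:8080:[10.0.2.10]:80"
```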
<h3><u>Create VM in VirtualBox:</u></h3>
- Name: TFS-VM
- Type/Version: Linux / Ubuntu (64-bit)
- CPU (*): 4 vCPUs @ 100% execution capacity
- RAM: 8 GB
- Disk: 60 GB, Virtual Disk Image (VDI), Dynamically allocated
- Optical Drive ISO Image: "ubuntu-22.04.X-live-server-amd64.iso"
- Download the latest Long Term Support (LTS) version of the *Ubuntu Server* image from [Ubuntu 22.04 LTS](https://releases.ubuntu.com/22.04/), e.g., "ubuntu-22.04.X-live-server-amd64.iso".
- __Note__: use Ubuntu Server image instead of Ubuntu Desktop to create a lightweight VM.
- Network Adapter 1 (*): enabled, attached to NAT Network "TFS-NAT-Net"
- Minor adjustments (*):
- Audio: disabled
- Boot order: disable "Floppy"
__Note__: (*) settings to be edited after the VM is created.
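The settings marked with (*) can also be applied from the command line once the VM has been created. The sketch below assumes the VM is named `TFS-VM`; option names may differ slightly between VirtualBox versions.
```bash
# Illustrative only: apply the (*) settings to an existing VM named "TFS-VM"
VBoxManage modifyvm "TFS-VM" --cpus 4 --cpuexecutioncap 100 --memory 8192
VBoxManage modifyvm "TFS-VM" --nic1 natnetwork --nat-network1 "TFS-NAT-Net"
VBoxManage modifyvm "TFS-VM" --audio none
VBoxManage modifyvm "TFS-VM" --boot1 dvd --boot2 disk --boot3 none --boot4 none  # no "Floppy" in the boot order
```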
<h3><u>Install Ubuntu 22.04 LTS Operating System</u></h3>
In "Oracle VM VirtualBox Manager", start the VM in normal mode, and follow the
installation procedure.
Below we provide some installation guidelines:
- Installation Language: English
- Autodetect your keyboard
- If asked, select "Ubuntu Server" (do not select "Ubuntu Server (minimized)").
- Configure static network specifications:
|Interface|IPv4 Method|Subnet |Address |Gateway |Name servers |Search domains|
|---------|-----------|-----------|---------|--------|---------------|--------------|
|enp0s3 |Manual |10.0.2.0/24|10.0.2.10|10.0.2.1|8.8.8.8,8.8.4.4|<empty> |
- Leave proxy and mirror addresses as they are
- Let the installer self-upgrade (if asked).
- Use an entire disk for the installation
- Disable setup of the disk as LVM group
- Double check that NO swap space is allocated in the partition table. Kubernetes does not work properly with SWAP.
- Configure your user and system names:
- User name: TeraFlowSDN
- Server's name: tfs-vm
- Username: tfs
- Password: tfs123
- Install Open SSH Server
- Import SSH keys, if any.
- Featured Server Snaps
- Do not install featured server snaps. It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble with them.
- Let the system install and upgrade the packages.
- This operation might take some minutes depending on how old the ISO image you use is and on your Internet connection speed.
- Restart the VM when the installation is completed.
**Upgrade the Ubuntu distribution**
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
- If asked to restart services, restart the default ones proposed.
- Restart the VM when the installation is completed.
**Install VirtualBox Guest Additions**
On VirtualBox Manager, open the VM main screen. If you are running the VM in headless
mode, right click over the VM in the VirtualBox Manager window and click "Show".
If a dialog informing about how to leave the interface of the VM is shown, confirm
pressing "Switch" button. The interface of the VM should appear.
Click menu "Device > Insert Guest Additions CD image..."
On the VM terminal, type:
```bash
sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms
# This command might take some minutes depending on your VM specs and your Internet access speed.
sudo mount /dev/cdrom /mnt/
cd /mnt/
sudo ./VBoxLinuxAdditions.run
# This command might take some minutes depending on your VM specs.
sudo reboot
```
### **1.1.3. VMWare Fusion**
This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using [VMWare Fusion](https://www.vmware.com/products/fusion.html). It has been tested with VMWare Fusion versions 12 and 13.
<h3><u>Create VM in VMWare Fusion:</u></h3>
In "VMWare Fusion" manager, create a new network from the "Settings/Network" menu.
- Unlock to make changes
- Press the + icon and create a new network
- Change the name to TFS-NAT-Net
- Check "Allow virtual machines on this network to connect to external network (NAT)"
- Do not check "Enable IPv6"
- Add port forwarding for HTTP and SSH
- Uncheck "Provide address on this network via DHCP"
Create a new VM using an Ubuntu 22.04.1 ISO:
- Display Name: TeraFlowSDN
- Username: tfs
- Password: tfs123
On the next screen press "Customize Settings", save the VM and in "Settings" change:
- Change to use 4 CPUs
- Change to access 8 GB of RAM
- Change disk to size 60 GB
- Change the network interface to use the previously created TFS-NAT-Net
Run the VM to start the installation.
<h3><u>Install Ubuntu 22.04.1 LTS Operating System</u></h3>
The installation will be automatic, without any configuration required.
- Configure the guest IP, gateway and DNS:
Using the Network Settings for the wired connection, set the IP to 10.0.2.10,
the mask to 255.255.255.0, the gateway to 10.0.2.2 and the DNS to 10.0.2.2.
- Disable and remove the swap file: run `sudo swapoff -a` and `sudo rm /swapfile`, then remove or comment out the `/swapfile` entry in `/etc/fstab` (a sketch is provided after this list).
- Install Open SSH Server
- Import SSH keys, if any.
- Restart the VM when the installation is completed.
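A minimal sketch of the swap removal and verification mentioned in the list above; the `/swapfile` path is the Ubuntu default and is an assumption here.
```bash
# Disable swap and remove the default swap file
sudo swapoff -a
sudo rm /swapfile
# Comment out the /swapfile entry so it does not come back after a reboot
sudo sed -i.bak '/\/swapfile/ s/^/#/' /etc/fstab
# Verify that no swap is active (both commands should report no/zero swap)
swapon --show
free -h
```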
<h3><u>Upgrade the Ubuntu distribution</u></h3>
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
### **1.1.4. OpenStack**
This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using [OpenStack](https://www.openstack.org/). It has been tested with OpenStack Kolla up to the Yoga version.
<h3><u>Create a Security Group in OpenStack</u></h3>
In OpenStack, go to Project - Network - Security Groups - Create Security Group with name TFS
Add the following rules:
|Direction |Ether Type |IP Protocol |Port Range | Remote IP Prefix|
|-----------|-----------|------------|-----------|-----------------|
|Ingress |IPv4 |TCP |22 (SSH) |0.0.0.0/0|
|Ingress |IPv4 |TCP |2200 |0.0.0.0/0|
|Ingress |IPv4 |TCP |8080 |0.0.0.0/0|
|Ingress |IPv4 |TCP |80 |0.0.0.0/0|
|Egress |IPv4 |Any |Any |0.0.0.0/0|
|Egress |IPv6 |Any |Any |::/0|
__Note__: The IP address will be assigned depending on the network you have configured inside OpenStack. This IP will have to be modified in TeraFlow configuration files which by default use IP 10.0.2.10
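If you prefer the OpenStack CLI over the Horizon dashboard, a rough equivalent is sketched below (it assumes your CLI session is already authenticated against the project).
```bash
# Create the security group and add the ingress rules from the table above
openstack security group create TFS
openstack security group rule create --ingress --protocol tcp --dst-port 22   --remote-ip 0.0.0.0/0 TFS
openstack security group rule create --ingress --protocol tcp --dst-port 2200 --remote-ip 0.0.0.0/0 TFS
openstack security group rule create --ingress --protocol tcp --dst-port 8080 --remote-ip 0.0.0.0/0 TFS
openstack security group rule create --ingress --protocol tcp --dst-port 80   --remote-ip 0.0.0.0/0 TFS
# Egress traffic is already allowed by the default rules created with the group
```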
<h3><u>Create a flavor</u></h3>
**From dashboard (Horizon)**
Go to Admin - Compute - Flavors and press Create Flavor
- Name: TFS
- VCPUs: 4
- RAM (MB): 8192
- Root Disk (GB): 60
**From CLI**
```bash
openstack flavor create TFS --id auto --ram 8192 --disk 60 --vcpus 4
```
<h3><u>Create an instance in OpenStack:</u></h3>
- Instance name: TFS-VM
- Origin: [Ubuntu-22.04 cloud image](https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img)
- Create new volume: No
- Flavor: TFS
- Networks: extnet
- Security Groups: TFS
- Configuration: Include the following cloud-config
```yaml
#cloud-config
# Modifies the password for the VM instance
username: ubuntu
password: <your-password>
chpasswd: { expire: False }
ssh_pwauth: True
```
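The same instance can also be created from the CLI. The sketch below assumes the cloud-config above has been saved as `cloud-init.yaml` and that the Ubuntu cloud image has been uploaded to Glance with the name `ubuntu-22.04`; both names are assumptions.
```bash
# Illustrative only: create the TFS-VM instance from the CLI
openstack server create \
  --image ubuntu-22.04 \
  --flavor TFS \
  --network extnet \
  --security-group TFS \
  --user-data cloud-init.yaml \
  TFS-VM
```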
<h3><u>Upgrade the Ubuntu distribution</u></h3>
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
- If asked to restart services, restart the default ones proposed.
- Restart the VM when the installation is completed.
### **1.1.5. Vagrant**
This section describes how to create a Vagrant Box using the base virtual machine configured in [Oracle Virtual Box](#112-oracle-virtual-box).
<h3><u>Virtual Machine specifications</u></h3>
Most of the specifications can be as specified in the [Oracle Virtual Box](#112-oracle-virtual-box) page, however, there are a few particularities to Vagrant that must be accommodated, such as:
- Virtual Hard Disk
- Size: 60GB (at least)
- **Type**: VMDK

Also, before initiating the VM and installing the OS, we'll need to:
- Disable Floppy in the 'Boot Order'
- Disable audio
- Disable USB
- Ensure Network Adapter 1 is set to NAT
<h3><u>Network configurations</u></h3>
At Network Adapter 1, the following port-forwarding rule must be set.
| Name | Protocol | Host IP | Host Port | Guest IP | Guest Port |
| - | - | - | - | - | - |
| SSH | TCP | | **2222** | | 22 |
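Since Adapter 1 uses plain NAT here (not the NAT Network created for the regular VirtualBox deployment), the rule can also be added from the command line. The sketch below assumes the VM is named `TFS-VM`.
```bash
# Forward host port 2222 to guest port 22 on NAT adapter 1
VBoxManage modifyvm "TFS-VM" --natpf1 "SSH,tcp,,2222,,22"
```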

<h3><u>Installing the OS</u></h3>
For a Vagrant Box, it is generally suggested that the ISO's server version is used, as it is intended to be used via SSH, and any web GUI is expected to be forwarded to the host.



Make sure the disk is not configured as an LVM group!

<h3><u>Vagrant user</u></h3>
By default, Vagrant expects the box's OS to have a user `vagrant`, whose password is also `vagrant`.

<h3><u>SSH</u></h3>
Vagrant uses SSH to connect to the boxes, so installing the OpenSSH server now will save the hassle of doing it later.

<h3><u>Featured server snaps</u></h3>
Do not install featured server snaps. It will be done manually [later](#12-install-microk8s) to illustrate how to uninstall and reinstall them in case of trouble with them.
<h3><u>Updates</u></h3>
Let the system install and upgrade the packages. This operation might take some minutes depending on how old the ISO image you use is and on your Internet connection speed.
<h3><u>Upgrade the Ubuntu distribution</u></h3>
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
- If asked to restart services, restart the default ones proposed.
- Restart the VM when the installation is completed.
<h3><u>Install VirtualBox Guest Additions</u></h3>
On VirtualBox Manager, open the VM main screen. If you are running the VM in headless
mode, right-click over the VM in the VirtualBox Manager window, and click "Show".
If a dialog informing about how to leave the interface of the VM is shown, confirm
by pressing the "Switch" button. The interface of the VM should appear.
Click the menu "Device > Insert Guest Additions CD image..."
On the VM terminal, type:
```bash
sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms
# This command might take some minutes depending on your VM specs and your Internet access speed.
sudo mount /dev/cdrom /mnt/
cd /mnt/
sudo ./VBoxLinuxAdditions.run
# This command might take some minutes depending on your VM specs.
sudo reboot
```
<h3><u>ETSI TFS Installation</u></h3>
After this, proceed to [1.2. Install Microk8s](#12-install-microk8s), after which, return to this wiki to finish the Vagrant Box creation.
<h3><u>Box configuration and creation</u></h3>
Make sure the ETSI TFS controller is correctly configured. **You will not be able to change it afterwards!**
It is advisable to do the next configurations from the host's terminal, via an SSH connection.
```bash
ssh -p 2222 vagrant@127.0.0.1
```
<h3><u>Set root password</u></h3>
Set the root password to `vagrant`.
```bash
sudo passwd root
```
<h3><u>Set the superuser</u></h3>
Set up the Vagrant user so that it’s able to use sudo without being prompted for a password.
Anything in the `/etc/sudoers.d/*` directory is included in the sudoers privileges when created by the root user.
Create a new sudo file.
```bash
sudo visudo -f /etc/sudoers.d/vagrant
```
and add the following lines
```text
# add vagrant user
vagrant ALL=(ALL) NOPASSWD:ALL
```
You can now test that it works by running a simple command.
```bash
sudo pwd
```
Issuing this command should result in an immediate response without a request for a password.
<h3><u>Install the Vagrant key</u></h3>
Vagrant uses a default set of SSH keys for you to directly connect to boxes via the CLI command `vagrant ssh`, after which it creates a new set of SSH keys for your new box. Because of this, we need to load the default key to be able to access the box after it is created.
```bash
chmod 0700 /home/vagrant/.ssh
wget --no-check-certificate https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant /home/vagrant/.ssh
```
<h3><u>Configure the OpenSSH Server</u></h3>
Edit the `/etc/ssh/sshd_config` file:
```bash
sudo vim /etc/ssh/sshd_config
```
And uncomment the following line:
```bash
AuthorizedKeysFile %h/.ssh/authorized_keys
```
Then restart SSH.
```bash
sudo service ssh restart
```
<h3><u>Package the box</u></h3>
Before you package the box, if you intend to make your box public, it is best to clean your bash history with:
```bash
history -c
```
Exit the SSH connection, and **on your host machine**, package the VM:
```bash
vagrant package --base teraflowsdncontroller --output teraflowsdncontroller.box
```
<h3><u>Test run the box</u></h3>
Add the base box to your local Vagrant box list:
```bash
vagrant box add --name teraflowsdncontroller ./teraflowsdncontroller.box
```
Now you should try to run it; for that, you'll need to create a **Vagrantfile**. For a simple run, this is the minimal required code for this box:
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "teraflowsdncontroller"
  config.vm.box_version = "1.1.0"
  config.vm.network :forwarded_port, host: 8080, guest: 80
end
```
Now you'll be able to spin up the virtual machine by issuing the command:
```bash
vagrant up
```
And connect to the machine using:
```bash
vagrant ssh
```
<h3><u>Pre-configured boxes</u></h3>
If you do not wish to create your own Vagrant Box, you can use one of the existing ones created by TFS contributors.
- [davidjosearaujo/teraflowsdncontroller](https://app.vagrantup.com/davidjosearaujo/boxes/teraflowsdncontroller)
- ... <!-- Should create and host one at ETSI!! -->
To use them, you simply have to create a Vagrantfile and run `vagrant up controller` in the same directory. The following example Vagrantfile already allows you to do just that, with the bonus of exposing the multiple management GUIs to your `localhost`.
```ruby
Vagrant.configure("2") do |config|
  config.vm.define "controller" do |controller|
    controller.vm.box = "davidjosearaujo/teraflowsdncontroller"
    controller.vm.network "forwarded_port", guest: 80,   host: 8080  # WebUI
    controller.vm.network "forwarded_port", guest: 8084, host: 50750 # Linkerd Viz Dashboard
    controller.vm.network "forwarded_port", guest: 8081, host: 8081  # CockroachDB Dashboard
    controller.vm.network "forwarded_port", guest: 8222, host: 8222  # NATS Dashboard
    controller.vm.network "forwarded_port", guest: 9000, host: 9000  # QuestDB Dashboard
    controller.vm.network "forwarded_port", guest: 9090, host: 9090  # Prometheus Dashboard

    # Setup Linkerd Viz reverse proxy
    ## Copy config file
    controller.vm.provision "file" do |f|
      f.source      = "./reverse-proxy-linkerdviz.sh"
      f.destination = "./reverse-proxy-linkerdviz.sh"
    end

    ## Execute configuration file
    controller.vm.provision "shell" do |s|
      s.inline = "chmod +x ./reverse-proxy-linkerdviz.sh && ./reverse-proxy-linkerdviz.sh"
    end

    # Update controller source code to the desired branch
    if ENV['BRANCH'] != nil
      controller.vm.provision "shell" do |s|
        s.inline = "cd ./tfs-ctrl && git pull && git switch " + ENV['BRANCH']
      end
    end
  end
end
```
This Vagrantfile also allows for **optional repository updates** on startup by running the command with the environment variable `BRANCH` set:
```bash
BRANCH=develop vagrant up controller
```
<h3><u>Linkerd DNS rebinding bypass</u></h3>
Because of Linkerd's security measures against DNS rebinding, a reverse proxy that modifies the request's `Host` header field is needed to expose the GUI to the host. The previous Vagrantfile already deploys such a configuration; all you need to do is create the `reverse-proxy-linkerdviz.sh` file in the same directory. The content of this file is displayed below.
```bash
# Install NGINX
sudo apt update && sudo apt install nginx -y
# NGINX reverse proxy configuration
echo 'server {
listen 8084;
location / {
proxy_pass http://127.0.0.1:50750;
proxy_set_header Host localhost;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}' > /home/vagrant/expose-linkerd
# Create symlink of the NGINX configuration file
sudo ln -s /home/vagrant/expose-linkerd /etc/nginx/sites-enabled/
# Commit the reverse proxy configurations
sudo systemctl restart nginx
# Enable start on login
echo "linkerd viz dashboard &" >> .profile
# Start dashboard
linkerd viz dashboard &
echo "Linkerd Viz dashboard running!"
```
## **1.2. Install MicroK8s**
This section describes how to deploy the MicroK8s Kubernetes platform and configure it to be used with the ETSI TeraFlowSDN controller. In addition, Docker is installed to build the Docker images for the ETSI TeraFlowSDN controller.
The steps described in this section might take some minutes depending on your internet connection speed and the resources assigned to your VM, or the specifications of your physical server.
These steps are easier to execute through an SSH connection, for instance, using tools such as [PuTTY](https://www.putty.org/) or [MobaXterm](https://mobaxterm.mobatek.net/).
<h3><u>Upgrade the Ubuntu distribution</u></h3>
Skip this step if you already did it during the creation of the VM.
```bash
sudo apt-get update -y
sudo apt-get dist-upgrade -y
```
<h3><u>Install prerequisites</u></h3>
```bash
sudo apt-get install -y ca-certificates curl gnupg lsb-release snapd jq
```
<h3><u>Install Docker CE</u></h3>
Install Docker CE and Docker BuildX plugin
```bash
sudo apt-get install -y docker.io docker-buildx
```
**NOTE**: Starting from Docker v23, the [Build architecture](https://docs.docker.com/build/architecture/) has been updated and the `docker build` command entered a deprecation process in favor of the new `docker buildx build` command. The `docker-buildx` package provides the new `docker buildx build` command.
Add the key `insecure-registries` with the private repository to the Docker daemon configuration. It is done in two commands since reading from and writing to the same file in a single pipeline might cause trouble.
```bash
if [ -s /etc/docker/daemon.json ]; then cat /etc/docker/daemon.json; else echo '{}'; fi \
| jq 'if has("insecure-registries") then . else .+ {"insecure-registries": []} end' -- \
| jq '."insecure-registries" |= (.+ ["localhost:32000"] | unique)' -- \
| tee tmp.daemon.json
sudo mv tmp.daemon.json /etc/docker/daemon.json
sudo chown root:root /etc/docker/daemon.json
sudo chmod 600 /etc/docker/daemon.json
```
Restart the Docker daemon
```bash
sudo systemctl restart docker
```
<h3><u>Install MicroK8s</u></h3>
**Important**: Some TeraFlowSDN dependencies need to be executed on top of MicroK8s/Kubernetes v1.24. It is not guaranteed (for now) to run on newer versions.
```bash
# Install MicroK8s
sudo snap install microk8s --classic --channel=1.24/stable
# Create alias for command "microk8s.kubectl" to be usable as "kubectl"
sudo snap alias microk8s.kubectl kubectl
```
It is important to make sure that `ufw` will not interfere with the internal pod-to-pod
and pod-to-Internet traffic.
To do so, first check the status.
If `ufw` is active, use the following command to enable the communication.
```bash
# Verify status of ufw firewall
sudo ufw status
# If ufw is active, install following rules to enable access pod-to-pod and pod-to-internet
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
```
**NOTE**: MicroK8s can be used to compose a Highly Available Kubernetes cluster enabling you to construct an environment combining the CPU, RAM and storage resources of multiple machines. If you are interested in this procedure, review the official instructions in [How to build a highly available Kubernetes cluster with MicroK8s](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha), in particular, the step [Create a MicroK8s multi-node cluster](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha#4-create-a-microk8s-multinode-cluster).
**References:**
- [The lightweight Kubernetes > Install MicroK8s](https://microk8s.io/#install-microk8s)
- [Install a local Kubernetes with MicroK8s](https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s)
- [How to build a highly available Kubernetes cluster with MicroK8s](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha)
<h3><u>Add user to the docker and microk8s groups</u></h3>
It is important that your user has the permission to run `docker` and `microk8s` in the
terminal.
To allow this, you need to add your user to the `docker` and `microk8s` groups with the
following commands:
```bash
sudo usermod -a -G docker $USER
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER $HOME/.kube
sudo reboot
```
If you have trouble executing the following commands, possibly because the `.kube` folder is not automatically provisioned in your home folder, you may follow the steps below:
```bash
mkdir -p $HOME/.kube
sudo chown -f -R $USER $HOME/.kube
microk8s config > $HOME/.kube/config
sudo reboot
```
<h3><u id="check-status-of-kubernetes-and-addons">Check status of Kubernetes and addons</h3></u>
To retrieve the status of Kubernetes __once__, run the following command:
```bash
microk8s.status --wait-ready
```
To retrieve the status of Kubernetes __periodically__ (e.g., every 1 second), run the
following command:
```bash
watch -n 1 microk8s.status --wait-ready
```
<h3><u id="check-all-resources-in-kubernetes">Check all resources in Kubernetes</h3></u>
To retrieve the status of the Kubernetes resources __once__, run the following command:
```bash
kubectl get all --all-namespaces
```
To retrieve the status of the Kubernetes resources __periodically__ (e.g., every 1
second), run the following command:
```bash
watch -n 1 kubectl get all --all-namespaces
```
<h3><u>Enable addons</u></h3>
First, we need to enable the community plugins (maintained by third parties):
```bash
microk8s.enable community
```
The Addons to be enabled are:
- `dns`: enables resolving the pods and services by name
- `helm3`: required to install NATS
- `hostpath-storage`: enables providing storage for the pods (required by `registry`)
- `ingress`: deploys an ingress controller to expose the microservices outside Kubernetes
- `registry`: deploys a private registry for the TFS controller images
- `linkerd`: deploys the [linkerd service mesh](https://linkerd.io) used for load balancing among replicas
- `prometheus`: set of tools that enable TFS observability through per-component instrumentation
- `metrics-server`: deploys the [Kubernetes metrics server](https://github.com/kubernetes-sigs/metrics-server) for API access to service metrics
```bash
microk8s.enable dns helm3 hostpath-storage ingress registry prometheus metrics-server linkerd
```
__Important__: Enabling some of the addons might take a few minutes. Do not proceed with the next steps until the addons are ready; otherwise, the deployment might fail.
To confirm everything is up and running:
1. Periodically [Check the status of Kubernetes](#check-status-of-kubernetes-and-addons) until you see the addons \[dns, ha-cluster, helm3, hostpath-storage, ingress, linkerd, metrics-server, prometheus, registry, storage\] in the enabled block.
2. Periodically [Check Kubernetes resources](#check-all-resources-in-kubernetes) until all pods are __Ready__ and __Running__.
3. If it takes too long for the Pods to be ready, __we observed that rebooting the machine may help__.
Then, create aliases to make the commands easier to access:
```bash
sudo snap alias microk8s.helm3 helm3
sudo snap alias microk8s.linkerd linkerd
```
To validate that `linkerd` is working correctly, run:
```bash
linkerd check
```
To validate that the `metrics-server` is working correctly, run:
```bash
kubectl top pods --all-namespaces
```
and you should see a screen similar to the `top` command in Linux, showing the columns *namespace*, *pod name*, *CPU (cores)*, and *MEMORY (bytes)*.
In case pods are not starting, check the information in the pod logs. For example, linkerd is sensitive to proper `/etc/resolv.conf` syntax.
```bash
kubectl logs <podname> --namespace <namespace>
```
If the command shows an error message, restarting the machine might also help.
<h3><u>Stop, Restart, and Redeploy</u></h3>
Find below some additional commands you might need while you work with MicroK8s:
```bash
microk8s.stop # stop MicroK8s cluster (for instance, before power off your computer)
microk8s.start # start MicroK8s cluster
microk8s.reset # reset infrastructure to a clean state
```
If the commands above do not recover the MicroK8s cluster, you can redeploy it.
If you want to keep the MicroK8s configuration, use:
```bash
sudo snap remove microk8s
```
If you need to completely drop MicroK8s and its complete configuration, use:
```bash
sudo snap remove microk8s --purge
sudo apt-get remove --purge docker.io docker-buildx
```
**IMPORTANT**: After uninstalling MicroK8s, it is convenient to reboot the computer (the VM if you work on a VM, or the physical computer if you use a physical computer). Otherwise, some system configurations, especially those related to port forwarding and firewall rules, are not correctly cleaned.
After the reboot, redeploy as it is described in this section.
## **1.3. Deploy TeraFlowSDN**
This section describes how to deploy the TeraFlowSDN controller on top of MicroK8s using the environment configured in the previous sections.
<h3><u>Install prerequisites</u></h3>
```bash
sudo apt-get install -y git curl jq
```
<h3><u>Clone the Git repository of the TeraFlowSDN controller</u></h3>
Clone from ETSI-hosted GitLab code repository:
```bash
mkdir ~/tfs-ctrl
git clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl
```
__Important__: The original H2020-TeraFlow project hosted on GitLab.com has been
archived and will not receive further contributions/updates.
Please, clone from [ETSI-hosted GitLab code repository](https://labs.etsi.org/rep/tfs/controller).
<h3><u>Checkout the appropriate Git branch</u></h3>
TeraFlowSDN controller versions can be found in the appropriate release tags and/or branches as described in [Home > Versions](https://tfs.etsi.org/news/).
By default the branch *master* is checked out and points to the latest stable version of the TeraFlowSDN controller, while branch *develop* contains the latest developments and contributions under test and validation.
To switch to the appropriate branch, run the following command, replacing `develop` with the name of the branch you want to deploy:
```bash
cd ~/tfs-ctrl
git checkout develop
```
<h3><u>Prepare a deployment script with the deployment settings</u></h3>
Create a new deployment script, e.g., `my_deploy.sh`, adding the appropriate settings as
follows.
This section provides just an overview of the available settings. An example [`my_deploy.sh`](https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh) script is provided in the root folder of the project for your convenience with full description of all the settings.
__Note__: The example `my_deploy.sh` script provides reasonable settings for deploying a functional and complete enough TeraFlowSDN controller, and a brief description of their meaning. To see extended descriptions, check scripts in the `deploy` folder.
```bash
cd ~/tfs-ctrl
tee my_deploy.sh >/dev/null << EOF
# ----- TeraFlowSDN ------------------------------------------------------------
export TFS_REGISTRY_IMAGES="http://localhost:32000/tfs/"
export TFS_COMPONENTS="context device ztp monitoring pathcomp service slice nbi webui load_generator"
export TFS_IMAGE_TAG="dev"
export TFS_K8S_NAMESPACE="tfs"
export TFS_EXTRA_MANIFESTS="manifests/nginx_ingress_http.yaml"
export TFS_GRAFANA_PASSWORD="admin123+"
export TFS_SKIP_BUILD=""
# ----- CockroachDB ------------------------------------------------------------
export CRDB_NAMESPACE="crdb"
export CRDB_EXT_PORT_SQL="26257"
export CRDB_EXT_PORT_HTTP="8081"
export CRDB_USERNAME="tfs"
export CRDB_PASSWORD="tfs123"
export CRDB_DATABASE="tfs"
export CRDB_DEPLOY_MODE="single"
export CRDB_DROP_DATABASE_IF_EXISTS="YES"
export CRDB_REDEPLOY=""
# ----- NATS -------------------------------------------------------------------
export NATS_NAMESPACE="nats"
export NATS_EXT_PORT_CLIENT="4222"
export NATS_EXT_PORT_HTTP="8222"
export NATS_REDEPLOY=""
# ----- QuestDB ----------------------------------------------------------------
export QDB_NAMESPACE="qdb"
export QDB_EXT_PORT_SQL="8812"
export QDB_EXT_PORT_ILP="9009"
export QDB_EXT_PORT_HTTP="9000"
export QDB_USERNAME="admin"
export QDB_PASSWORD="quest"
export QDB_TABLE_MONITORING_KPIS="tfs_monitoring_kpis"
export QDB_TABLE_SLICE_GROUPS="tfs_slice_groups"
export QDB_DROP_TABLES_IF_EXIST="YES"
export QDB_REDEPLOY=""
EOF
```
The settings are organized in 4 sections:
- Section `TeraFlowSDN`:
    - `TFS_REGISTRY_IMAGES` specifies the private Docker registry to be used; by default, we assume the Docker registry enabled in MicroK8s.
    - `TFS_COMPONENTS` specifies the components whose Docker images will be rebuilt, uploaded to the private Docker registry, and deployed in Kubernetes.
    - `TFS_IMAGE_TAG` defines the tag to be used for the Docker images being rebuilt and uploaded to the private Docker registry.
    - `TFS_K8S_NAMESPACE` specifies the name of the Kubernetes namespace to be used for deploying the TFS components.
    - `TFS_EXTRA_MANIFESTS` enables providing additional manifests to be applied to the Kubernetes environment during the deployment. A typical use case is to deploy ingress controllers, service monitors for Prometheus, etc.
    - `TFS_GRAFANA_PASSWORD` lets you specify the password for the `admin` user of the Grafana instance being deployed and linked to the Monitoring component.
    - `TFS_SKIP_BUILD`, if set to `YES`, prevents rebuilding the Docker images. In that case, the deploy script will redeploy the existing Docker images without rebuilding/updating them.
- Section `CockroachDB`: configures the deployment of the backend [CockroachDB](https://www.cockroachlabs.com/) database.
    - Check the example script [`my_deploy.sh`](https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh) for further details.
- Section `NATS`: configures the deployment of the backend [NATS](https://nats.io/) message broker.
    - Check the example script [`my_deploy.sh`](https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh) for further details.
- Section `QuestDB`: configures the deployment of the backend [QuestDB](https://questdb.io/) timeseries database.
    - Check the example script [`my_deploy.sh`](https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh) for further details.
<h3><u>Confirm that MicroK8s is running</u></h3>
Run the following command:
```bash
microk8s status
```
If it is reported `microk8s is not running, try microk8s start`, run the following command to start MicroK8s:
```bash
microk8s start
```
Confirm everything is up and running:
1. Periodically [Check the status of Kubernetes](#check-status-of-kubernetes-and-addons) until you see the addons \[dns, ha-cluster, helm3, hostpath-storage, ingress, registry, storage\] in the enabled block.
2. Periodically [Check Kubernetes resources](#check-all-resources-in-kubernetes) until all pods are **Ready** and **Running**.
<h3><u id="deploy-tfs-controller">Deploy TFS controller</h3></u>
First, source the deployment settings defined in the previous section.
This way, you do not need to specify the environment variables in each and every command you execute to operate the TFS controller.
Remember to re-source the file if you open new terminal sessions.
Then, run the following command to deploy TeraFlowSDN controller on top of the MicroK8s Kubernetes platform.
```bash
cd ~/tfs-ctrl
source my_deploy.sh
./deploy/all.sh
```
The script performs the following steps:
- Executes script `./deploy/crdb.sh` to automate deployment of CockroachDB database used by Context component.
- The script automatically checks if CockroachDB is already deployed.
- If there are settings instructing to drop the database and/or redeploy CockroachDB, it does the appropriate actions to honor them as defined in the previous section.
- Executes script `./deploy/nats.sh` to automate deployment of NATS message broker used by Context component.
- The script automatically checks if NATS is already deployed.
- If there are settings instructing to redeploy the message broker, it does the appropriate actions to honor them as defined in the previous section.
- Executes script `./deploy/qdb.sh` to automate deployment of QuestDB timeseries database used by Monitoring component.
- The script automatically checks if QuestDB is already deployed.
- If there are settings instructing to redeploy the timeseries database, it does the appropriate actions to honor them as defined in the previous section.
- Executes script `./deploy/tfs.sh` to automate deployment of TeraFlowSDN.
- Creates the namespace defined in `TFS_K8S_NAMESPACE`
- Creates secrets for CockroachDB, NATS, and QuestDB to be used by Context and Monitoring components.
- Builds the Docker images for the components defined in `TFS_COMPONENTS`
- Tags the Docker images with the value of `TFS_IMAGE_TAG`
- Pushes the Docker images to the repository defined in `TFS_REGISTRY_IMAGES`
- Deploys the components defined in `TFS_COMPONENTS`
- Creates the file `tfs_runtime_env_vars.sh` with the environment variables for the components defined in `TFS_COMPONENTS` defining their local host addresses and their port numbers.
- Applies extra manifests defined in `TFS_EXTRA_MANIFESTS` such as:
- Creating an ingress controller listening at port 80 for HTTP connections to enable external access to the TeraFlowSDN WebUI, Grafana Dashboards, and Compute NBI interfaces.
- Deploying service monitors to enable monitoring the performance of the components, device drivers and service handlers.
- Initialize and configure the Grafana dashboards (if Monitoring component is deployed)
- Report a summary of the deployment
- See [Show Deployment and Logs](#15-show-deployment-and-logs)
## **1.4. WebUI and Grafana Dashboards**
This section describes how to get access to the TeraFlowSDN controller WebUI and the monitoring Grafana dashboards.
<h3><u>Access the TeraFlowSDN WebUI</u></h3>
If you followed the installation steps based on MicroK8s, an ingress controller was installed that listens on TCP port 80.
In addition, the ingress controller defines the following reverse proxy paths (on your local machine):
- `http://127.0.0.1/webui`: points to the WebUI of TeraFlowSDN.
- `http://127.0.0.1/grafana`: points to the Grafana dashboards.
This endpoint brings access to the monitoring dashboards of TeraFlowSDN.
The credentials for the `admin` user are those defined in the `my_deploy.sh` script, in the `TFS_GRAFANA_PASSWORD` variable.
- `http://127.0.0.1/restconf`: points to the Compute component NBI based on RestCONF.
This endpoint enables connecting external software, such as ETSI OpenSourceMANO NFV Orchestrator, to TeraFlowSDN.
**Note**: In the creation of the VM, a forward from host TCP port 8080 to VM's TCP port 80 is configured, so the WebUIs and REST APIs of TeraFlowSDN should be exposed on the endpoint `127.0.0.1:8080` of your local machine instead of `127.0.0.1:80`.
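As a quick sanity check from your local machine, you can probe the WebUI endpoint. The example below assumes the host-port 8080 forwarding described in the note above; when accessing the server directly, use port 80 instead.
```bash
# Should return an HTTP response (e.g., 200 OK) once the WebUI pod is up
curl -I http://127.0.0.1:8080/webui/
```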
## **1.5. Show Deployment and Logs**
This section presents some helper scripts to inspect the status of the deployment and
the logs of the components.
These scripts are particularly helpful for troubleshooting during execution of
experiments, development, and debugging.
<h3><u>Report the deployment of the TFS controller</u></h3>
The summary report given at the end of the [Deploy TFS controller](#deploy-tfs-controller)
procedure can be generated manually at any time by running the following command.
You can avoid sourcing `my_deploy.sh` if it has been already done.
```bash
cd ~/tfs-ctrl
source my_deploy.sh
./deploy/show.sh
```
Use this script to validate that all the pods, deployments, replica sets, ingress
controller, etc. are ready and have the appropriate state, e.g., *running* for Pods, and
the services are deployed and have appropriate IP addresses and port numbers.
<h3><u>Report the log of a specific TFS controller component</u></h3>
A number of scripts are pre-created in the `scripts` folder to facilitate the inspection
of the component logs.
For instance, to dump the log of the Context component, run the following command.
You can avoid sourcing `my_deploy.sh` if it has been already done.
```bash
source my_deploy.sh
./scripts/show_logs_context.sh
```
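These helper scripts essentially wrap `kubectl`. If you prefer to inspect the logs manually, something along the following lines can be used; the deployment name `contextservice` is an assumption based on the default component naming.
```bash
source my_deploy.sh
kubectl --namespace $TFS_K8S_NAMESPACE get pods
kubectl --namespace $TFS_K8S_NAMESPACE logs deployment/contextservice
```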