Commit 8096ce33 authored by yangalicace1

Deployed 30d66dfe to develop in public with MkDocs 1.6.1 and mike 2.1.3

parent 06673fee
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"0. Home","text":"<p>Welcome to the ETSI TeraFlowSDN (TFS) Controller wiki!</p> <p>This wiki provides a walkthrough on how to prepare your environment for executing and contributing to the ETSI SDG TeraFlowSDN. Besides, it describes how to run some example experiments.</p>"},{"location":"#try-teraflowsdn-release-30","title":"Try TeraFlowSDN Release 3.0","text":"<p>The new release launched on April 24th, 2024 incorporates a number of new features, improvements, and bug resolutions. Try it by following the guides below, and feel free to give us your feedback. See the Release Notes.</p>"},{"location":"#requisites","title":"Requisites","text":"<p>The guides and walkthroughs below make some reasonable assumptions to simplify the deployment of the TFS controller, the execution of experiments and tests, and the development of new contributions. In particular, we assume:</p> <ul> <li>A physical server or virtual machine for running the TFS controller with the following minimum specifications (check section Configure your Machine for additional details):</li> <li>4 cores / vCPUs</li> <li>8 GB of RAM (10 GB of RAM if you want to develop)</li> <li>60 GB of disk (100 GB of disk if you want to develop)</li> <li>1 NIC card</li> <li>VSCode with the Remote SSH extension</li> <li>Working machine software:</li> <li>Ubuntu Server 22.04.4 LTS or Ubuntu Server 20.04.6 LTS</li> <li>MicroK8s v1.24.17</li> </ul> <p>Use the Wiki menu in the right side of this page to navigate through the various contents of this wiki.</p>"},{"location":"#guides-and-walkthroughs","title":"Guides and Walkthroughs","text":"<p>The following guides and walkthroughs are provided:</p> <ul> <li>1. Deployment Guide</li> <li>2. Development Guide</li> <li>3. Run Experiments</li> <li>4. Features and Bugs</li> <li>5. Supported SBIs and Network Elements</li> <li>6. Supported NBIs</li> <li>7. Supported Service Handlers</li> <li>8. Troubleshooting</li> </ul>"},{"location":"#tutorials-and-tfs-virtual-machine","title":"Tutorials and TFS Virtual Machine","text":"<p>This section provides access to the links and all the materials prepared for the tutorials and hackfests involving ETSI TeraFlowSDN.</p> <ul> <li>TFS Hackfest #3 (Castelldefels, 16-17 October 2023)</li> <li> <p>The link includes explanatory material on P4 for TeraFlowSDN, the set of guided walkthrough, and the details on the interactive sessions the participants addressed (and recordings), as well as a TFS Virtual Machine (Release 2.1).</p> </li> <li> <p>TFS Hackfest #2 (Madrid, 20-21 June 2023)</p> </li> <li> <p>The link includes explanatory material on gNMI and ContainerLab for TeraFlowSDN, the set of challenges the participants addressed (and recordings), as well as a TFS Virtual Machine (Pre-Release 2.1).</p> </li> <li> <p>OFC SC472 (San Diego, 6 March 2023)</p> </li> <li> <p>The link includes a tutorial-style slide deck, as well as a TFS Virtual Machine (Release 2).</p> </li> <li> <p>TFS Hackfest #1 (Amsterdam, 20 October 2022)</p> </li> <li>The link includes a tutorial-style slide deck (and recordings), as well as a TFS Virtual Machine (Pre-Release 2).</li> </ul>"},{"location":"#versions","title":"Versions","text":"<p>New versions of TeraFlowSDN are periodically released. 
Each release is properly tagged and a branch is kept for its future bug fixing, if needed.</p> <ul> <li>The branch master, points always to the latest stable version of the TeraFlowSDN controller.</li> <li>The branches release/X.Y.Z, point to the code for the different release versions indicated in branch name.</li> <li>Code in these branches can be considered stable, and no new features are planned.</li> <li>In case of bugs, point releases increasing revision number (Z) might be created.</li> <li>The main development branch is named as develop.</li> <li>Use with care! Might not be stable.</li> <li>The latest developments and contributions are added to this branch for testing and validation before reaching a release. </li> </ul> <p>To choose the appropriate branch, follow the steps described in 1.3. Deploy TeraFlowSDN &gt; Checkout the Appropriate Git Branch</p>"},{"location":"#events","title":"Events","text":"<p>Find here after the list of past and future TFS Events:</p> <ul> <li>ETSI TeraFlowSDN Events </li> </ul>"},{"location":"#contact","title":"Contact","text":"<p>If your environment does not fit with the proposed assumptions and you experience issues preparing it to work with the ETSI TeraFlowSDN controller, contact the ETSI TeraFlowSDN SDG team through Slack</p>"},{"location":"deployment_guide/","title":"1. Deployment Guide","text":"<p>This section walks you through the process of deploying TeraFlowSDN on top of a machine running MicroK8s Kubernetes platform. The guide includes the details on configuring and installing the machine, installing and configuring MicroK8s, and deploying and reporting the status of the TeraFlowSDN controller.</p>"},{"location":"deployment_guide/#11-configure-your-machine","title":"1.1. Configure your Machine","text":"<p>In this section, we describe how to configure a machine (physical or virtual) to be used as the deployment, execution, and development environment for the ETSI TeraFlowSDN controller. Choose your preferred environment below and follow the instructions provided.</p> <p>NOTE: If you already have a remote physical server fitting the requirements specified in this section feel free to use it instead of deploying a local VM. Check 1.1.1. Physical Server for further details.</p> <p>Virtualization platforms tested are:</p> <ul> <li>Physical Server</li> <li>Oracle Virtual Box</li> <li>VMWare Fusion</li> <li>OpenStack</li> <li>Vagrant Box</li> </ul>"},{"location":"deployment_guide/#111-physical-server","title":"1.1.1. Physical ServerServer SpecificationsClusterized DeploymentNetworkingOperating SystemUpgrade the Ubuntu distribution","text":"<p>This section describes how to configure a physical server for running ETSI TeraFlowSDN(TFS) controller.</p> <p>Minimum Server Specifications for development and basic deployment</p> <ul> <li>CPU: 4 cores</li> <li>RAM: 8 GB</li> <li>Disk: 60 GB</li> <li>1 GbE NIC</li> </ul> <p>Recommended Server Specifications for development and basic deployment</p> <ul> <li>CPU: 6 cores</li> <li>RAM: 12 GB</li> <li>Disk: 80 GB</li> <li>1 GbE NIC</li> </ul> <p>Server Specifications for best development and deployment experience</p> <ul> <li>CPU: 8 cores</li> <li>RAM: 32 GB</li> <li>Disk: 120 GB</li> <li>1 GbE NIC</li> </ul> <p>NOTE: the specifications listed above are provided as a reference. 
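To check whether a given machine meets these reference specifications, standard Linux commands can be used (a minimal sketch; adapt the checks to your environment):</p> <pre><code>nproc         # number of CPU cores\nfree -h       # total RAM\ndf -h /       # available disk space\nip link show  # network interfaces\n</code></pre> <p>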
They depend also on the CPU clock frequency, the RAM memory, the disk technology and speed, etc.</p> <p>For development purposes, it is recommended to run the VSCode IDE (or the IDE of your choice) in a more powerful server, for instance, the recommended server specifications for development and basic deployment.</p> <p>Given that TeraFlowSDN follows a micro-services architecture, for the deployment, it might be better to use many clusterized servers with many slower cores than a single server with few highly performant cores.</p> <p>You might consider creating a cluster of machines each featuring, at least, the minimum server specifications. That solution brings you scalability in the future.</p> <p>No explicit indications are given in terms of networking besides that servers need access to the Internet for downloading dependencies, binaries, and packages while building and deploying the TeraFlowSDN components.</p> <p>Besides that, the network requirements are essentially the same than that required for running a classical Kubernetes environment. To facilitate the deployment, we extensively use MicroK8s, thus the network requirements are, essentially, the same demanded by MicroK8s, especially, if you consider creating a Kubernetes cluster.</p> <p>As a reference, the other deployment solutions based on VMs assume the VM is connected to a virtual network configured with the IP range <code>10.0.2.0/24</code> and have the gateway at IP <code>10.0.2.1</code>. The VMs have the IP address <code>10.0.2.10</code>.</p> <p>The minimum required ports to be accessible are: - 22/SSH : for management purposes - 80/HTTP : for the TeraFlowSDN WebUI and Grafana dashboard - 8081/HTTPS : for the CockroachDB WebUI</p> <p>Other ports might be required if you consider to deploy addons such as Kubernetes observability, etc. The details on these ports are left appart given they might vary depending on the Kubernetes environment you use.</p> <p>The recommended Operating System for deploying TeraFlowSDN is Ubuntu Server 22.04 LTS or Ubuntu Server 20.04 LTS. Other version might work, but we have not tested them. We strongly recommend using Long Term Support (LTS) versions as they provide better stability.</p> <p>Below we provide some installation guidelines: - Installation Language: English - Autodetect your keyboard - If asked, select \"Ubuntu Server\" (do not select \"Ubuntu Server (minimized)\"). - Configure static network specifications (adapt them based on your particular setup):</p> Interface IPv4 Method Subnet Address Gateway Name servers Search domains enp0s3 Manual 10.0.2.0/24 10.0.2.10 10.0.2.1 8.8.8.8,8.8.4.4 <ul> <li>Leave proxy and mirror addresses as they are</li> <li>Let the installer self-upgrade (if asked).</li> <li>Use an entire disk for the installation</li> <li>Disable setup of the disk as LVM group</li> <li>Double check that NO swap space is allocated in the partition table. Kubernetes does not work properly with SWAP.</li> <li>Configure your user and system names:</li> <li>User name: <code>TeraFlowSDN</code></li> <li>Server's name: <code>tfs-vm</code></li> <li>Username: <code>tfs</code></li> <li>Password: <code>tfs123</code></li> <li>Install Open SSH Server</li> <li>Import SSH keys, if any.</li> <li>Featured Server Snaps</li> <li>Do not install featured server snaps. 
It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble with.</li> <li>Let the system install and upgrade the packages.</li> <li>This operation might take some minutes depending on how old is the Optical Drive ISO image you use and your Internet connection speed.</li> <li>Restart the VM when the installation is completed.</li> </ul> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> <ul> <li>If asked to restart services, restart the default ones proposed.</li> <li>Restart the VM when the installation is completed.</li> </ul>"},{"location":"deployment_guide/#112-oracle-virtual-box","title":"1.1.2. Oracle Virtual BoxCreate a NAT Network in VirtualBoxCreate VM in VirtualBox:Install Ubuntu 22.04 LTS Operating System","text":"<p>This section describes how to configure a VM for running ETSI TeraFlowSDN(TFS) controller using Oracle VirtualBox. It has been tested with VirtualBox up to version 6.1.40 r154048.</p> <p>In \"Oracle VM VirtualBox Manager\", Menu \"File &gt; Preferences... &gt; Network\", create a NAT network with the following specifications:</p> Name CIDR DHCP IPv6 TFS-NAT-Net 10.0.2.0/24 Disabled Disabled <p>Within the newly created \"TFS-NAT-Net\" NAT network, configure the following IPv4 forwarding rules:</p> Name Protocol Host IP Host Port Guest IP Guest Port SSH TCP 127.0.0.1 2200 10.0.2.10 22 HTTP TCP 127.0.0.1 8080 10.0.2.10 80 <p>Note: IP address 10.0.2.10 is the one that will be assigned to the VM.</p> <ul> <li>Name: TFS-VM</li> <li>Type/Version: Linux / Ubuntu (64-bit)</li> <li>CPU (*): 4 vCPUs @ 100% execution capacity</li> <li>RAM: 8 GB</li> <li>Disk: 60 GB, Virtual Disk Image (VDI), Dynamically allocated</li> <li>Optical Drive ISO Image: \"ubuntu-22.04.X-live-server-amd64.iso\"</li> <li>Download the latest Long Term Support (LTS) version of the Ubuntu Server image from Ubuntu 22.04 LTS, e.g., \"ubuntu-22.04.X-live-server-amd64.iso\".</li> <li>Note: use Ubuntu Server image instead of Ubuntu Desktop to create a lightweight VM.</li> <li>Network Adapter 1 (*): enabled, attached to NAT Network \"TFS-NAT-Net\"</li> <li>Minor adjustments (*):</li> <li>Audio: disabled</li> <li>Boot order: disable \"Floppy\"</li> </ul> <p>Note: (*) settings to be editing after the VM is created.</p> <p>In \"Oracle VM VirtualBox Manager\", start the VM in normal mode, and follow the installation procedure. Below we provide some installation guidelines: - Installation Language: English - Autodetect your keyboard - If asked, select \"Ubuntu Server\" (do not select \"Ubuntu Server (minimized)\"). - Configure static network specifications:</p> Interface IPv4 Method Subnet Address Gateway Name servers Search domains enp0s3 Manual 10.0.2.0/24 10.0.2.10 10.0.2.1 8.8.8.8,8.8.4.4 <ul> <li>Leave proxy and mirror addresses as they are</li> <li>Let the installer self-upgrade (if asked).</li> <li>Use an entire disk for the installation</li> <li>Disable setup of the disk as LVM group</li> <li>Double check that NO swap space is allocated in the partition table. Kubernetes does not work properly with SWAP.</li> <li>Configure your user and system names:</li> <li>User name: TeraFlowSDN</li> <li>Server's name: tfs-vm</li> <li>Username: tfs</li> <li>Password: tfs123</li> <li>Install Open SSH Server</li> <li>Import SSH keys, if any.</li> <li>Featured Server Snaps</li> <li>Do not install featured server snaps. 
It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble with.</li> <li>Let the system install and upgrade the packages.</li> <li>This operation might take some minutes depending on how old is the Optical Drive ISO image you use and your Internet connection speed.</li> <li>Restart the VM when the installation is completed.</li> </ul> <p>Upgrade the Ubuntu distribution</p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> <ul> <li>If asked to restart services, restart the default ones proposed.</li> <li>Restart the VM when the installation is completed.</li> </ul> <p>Install VirtualBox Guest Additions On VirtualBox Manager, open the VM main screen. If you are running the VM in headless mode, right click over the VM in the VirtualBox Manager window and click \"Show\". If a dialog informing about how to leave the interface of the VM is shown, confirm pressing \"Switch\" button. The interface of the VM should appear.</p> <p>Click menu \"Device &gt; Insert Guest Additions CD image...\"</p> <p>On the VM terminal, type:</p> <pre><code>sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms\n # This command might take some minutes depending on your VM specs and your Internet access speed.\nsudo mount /dev/cdrom /mnt/\ncd /mnt/\nsudo ./VBoxLinuxAdditions.run\n # This command might take some minutes depending on your VM specs.\nsudo reboot\n</code></pre>"},{"location":"deployment_guide/#113-vmware-fusion","title":"1.1.3. VMWare FusionCreate VM in VMWare Fusion:Install Ubuntu 22.04.1 LTS Operating SystemUpgrade the Ubuntu distribution","text":"<p>This section describes how to configure a VM for running ETSI TeraFlowSDN(TFS) controller using VMWare Fusion. It has been tested with VMWare Fusion version 12 and 13.</p> <p>In \"VMWare Fusion\" manager, create a new network from the \"Settings/Network\" menu.</p> <ul> <li>Unlock to make changes</li> <li>Press the + icon and create a new network</li> <li>Change the name to TFS-NAT-Net</li> <li>Check \"Allow virtual machines on this network to connect to external network (NAT)\"</li> <li>Do not check \"Enable IPv6\"</li> <li>Add port forwarding for HTTP and SSH</li> <li>Uncheck \"Provide address on this network via DHCP\"</li> </ul> <p>Create a new VM an Ubuntu 22.04.1 ISO:</p> <ul> <li>Display Name: TeraFlowSDN</li> <li>Username: tfs</li> <li>Password: tfs123</li> </ul> <p>On the next screen press \"Customize Settings\", save the VM and in \"Settings\" change: - Change to use 4 CPUs - Change to access 8 GB of RAM - Change disk to size 60 GB - Change the network interface to use the previously created TFS-NAT-Net</p> <p>Run the VM to start the installation.</p> <p>The installation will be automatic, without any configuration required.</p> <ul> <li>Configure the guest IP, gateway and DNS:</li> </ul> <p>Using the Network Settings for the wired connection, set the IP to 10.0.2.10, the mask to 255.255.255.0, the gateway to 10.0.2.2 and the DNS to 10.0.2.2.</p> <ul> <li>Disable and remove swap file:</li> </ul> <p>$ sudo swapoff -a $ sudo rm /swapfile</p> <p>Then you can remove or comment the /swapfile entry in /etc/fstab</p> <ul> <li>Install Open SSH Server</li> <li> <p>Import SSH keys, if any.</p> </li> <li> <p>Restart the VM when the installation is completed.</p> </li> </ul> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre>"},{"location":"deployment_guide/#114-openstack","title":"1.1.4. 
OpenStackCreate a Security Group in OpenStack <p> In OpenStack, go to Project - Network - Security Groups - Create Security Group with name TFS</p> <p>Add the following rules:</p> Direction Ether Type IP Protocol Port Range Remote IP Prefix Ingress IPv4 TCP 22 (SSH) 0.0.0.0/0 Ingress IPv4 TCP 2200 0.0.0.0/0 Ingress IPv4 TCP 8080 0.0.0.0/0 Ingress IPv4 TCP 80 0.0.0.0/0 Egress IPv4 Any Any 0.0.0.0/0 Egress IPv6 Any Any ::/0 <p>Note: The IP address will be assigned depending on the network you have configured inside OpenStack. This IP will have to be modified in TeraFlow configuration files which by default use IP 10.0.2.10</p> Create a flavour <p></p> <p>From dashboard (Horizon)</p> <p>Go to Admin - Compute - Flavors and press Create Flavor</p> <ul> <li>Name: TFS</li> <li>VCPUs: 4</li> <li>RAM (MB): 8192</li> <li>Root Disk (GB): 60</li> </ul> <p>From CLI</p> <pre><code> openstack flavor create TFS --id auto --ram 8192 --disk 60 --vcpus 8\n</code></pre> Create an instance in OpenStack: <p></p> <ul> <li>Instance name: TFS-VM</li> <li>Origin: [Ubuntu-22.04 cloud image] (https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img)</li> <li>Create new volume: No</li> <li>Flavor: TFS</li> <li>Networks: extnet </li> <li>Security Groups: TFS</li> <li>Configuration: Include the following cloud-config</li> </ul> <pre><code>#cloud-config\n# Modifies the password for the VM instance\nusername: ubuntu\npassword: &lt;your-password&gt;\nchpasswd: { expire: False }\nssh_pwauth: True\n</code></pre> Upgrade the Ubuntu distribution <p></p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> <ul> <li>If asked to restart services, restart the default ones proposed.</li> <li>Restart the VM when the installation is completed.</li> </ul>","text":"<p>This section describes how to configure a VM for running ETSI TeraFlowSDN(TFS) controller using OpenStack. It has been tested with OpenStack Kolla up to Yoga version. </p>"},{"location":"deployment_guide/#115-vagrant-box","title":"1.1.5. Vagrant Box <p>This section describes how to create a Vagrant Box, using the base virtual machine configured in Oracle Virtual Box.</p> Virtual Machine specifications <p> Most of the specifications can be as specified in the Oracle Virtual Box page, however, there are a few particularities to Vagrant that must be accommodated, such as:</p> <ul> <li>Virtual Hard Disk</li> <li>Size: 60GB (at least)</li> <li>Type: VMDK</li> </ul> <p></p> <p>Also, before initiating the VM and installing the OS, we'll need to:</p> <ul> <li>Disable Floppy in the 'Boot Order'</li> <li>Disable audio</li> <li>Disable USB</li> <li>Ensure Network Adapter 1 is set to NAT</li> </ul> Network configurations <p> At Network Adapt 1, the following port-forwarding rule must be set.</p> Name Protocol Host IP Host Port Guest IP Guest Port SSH TCP 2222 22 <p></p> Installing the OS <p></p> <p>For a Vagrant Box, it is generally suggested that the ISO's server version is used, as it is intended to be used via SSH, and any web GUI is expected to be forwarded to the host.</p> <p></p> <p></p> <p></p> <p>Make sure the disk is not configured as an LVM group!</p> <p></p> Vagrant ser <p> Vagrant expects by default, that in the box's OS exists the user <code>vagrant</code> with the password also being <code>vagrant</code>.</p> <p></p> SSH <p></p> <p>Vagrant uses SSH to connect to the boxes, so installing it now will save the hassle of doing it later.</p> <p></p> Features server snaps <p></p> <p>Do not install featured server snaps. 
It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble with.</p> Updates <p></p> <p>Let the system install and upgrade the packages. This operation might take some minutes depending on how old is the Optical Drive ISO image you use and your Internet connection speed.</p> Upgrade the Ubuntu distribution <p></p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> <ul> <li>If asked to restart services, restart the default ones proposed.</li> <li>Restart the VM when the installation is completed.</li> </ul> Install VirtualBox Guest Additions <p> On VirtualBox Manager, open the VM main screen. If you are running the VM in headless mode, right-click over the VM in the VirtualBox Manager window, and click \"Show\". If a dialog informing about how to leave the interface of the VM is shown, confirm by pressing the \"Switch\" button. The interface of the VM should appear.</p> <p>Click the menu \"Device &gt; Insert Guest Additions CD image...\"</p> <p>On the VM terminal, type:</p> <pre><code>sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms\n # This command might take some minutes depending on your VM specs and your Internet access speed.\nsudo mount /dev/cdrom /mnt/\ncd /mnt/\nsudo ./VBoxLinuxAdditions.run\n # This command might take some minutes depending on your VM specs.\nsudo reboot\n</code></pre> ETSI TFS Installation <p> After this, proceed to 1.2. Install Microk8s, after which, return to this wiki to finish the Vagrant Box creation.</p> Box configuration and creation <p> Make sure the ETSI TFS controller is correctly configured. You will not be able to change it after!</p> <p>It is advisable to do the next configurations from a host's terminal, via a SSH connection.</p> <pre><code>ssh -p 2222 vagrant@127.0.0.1\n</code></pre> Set root password <p> Set the root password to <code>vagrant</code>.</p> <pre><code>sudo passwd root\n</code></pre> Set the superuser <p> Set up the Vagrant user so that it\u2019s able to use sudo without being prompted for a password. Anything in the <code>/etc/sudoers.d/*</code> directory is included in the sudoers privileges when created by the root user. Create a new sudo file.</p> <pre><code>sudo visudo -f /etc/sudoers.d/vagrant\n</code></pre> <p>and add the following lines</p> <pre><code># add vagrant user\nvagrant ALL=(ALL) NOPASSWD:ALL\n</code></pre> <p>You can now test that it works by running a simple command.</p> <pre><code>sudo pwd\n</code></pre> <p>Issuing this command should result in an immediate response without a request for a password.</p> Install the Vagrant key <p> Vagrant uses a default set of SSH keys for you to directly connect to boxes via the CLI command <code>vagrant ssh</code>, after which it creates a new set of SSH keys for your new box. 
Because of this, we need to load the default key to be able to access the box after created.</p> <pre><code>chmod 0700 /home/vagrant/.ssh\nwget --no-check-certificate https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O /home/vagrant/.ssh/authorized_keys\nchmod 0600 /home/vagrant/.ssh/authorized_keys\nchown -R vagrant /home/vagrant/.ssh\n</code></pre> Configure the OpenSSH Server <p> Edit the <code>/etc/ssh/sshd_config</code> file:</p> <pre><code>sudo vim /etc/ssh/sshd_config\n</code></pre> <p>And uncomment the following line:</p> <pre><code>AuthorizedKeysFile %h/.ssh/authorized_keys\n</code></pre> <p>Then restart SSH.</p> <pre><code>sudo service ssh restart\n</code></pre> Package the box <p> Before you package the box, if you intend to make your box public, it is best to clean your bash history with:</p> <pre><code>history -c\n</code></pre> <p>Exit the SSH connection, and at you're host machine, package the VM:</p> <pre><code>vagrant package --base teraflowsdncontroller --output teraflowsdncontroller.box\n</code></pre> Test run the box <p> Add the base box to you local Vagrant box list:</p> <pre><code>vagrant box add --name teraflowsdncontroller ./teraflowsdncontroller.box\n</code></pre> <p>Now you should try to run it, for that you'll need to create a Vagrantfile. For a simple run, this is the minimal required code for this box:</p> <pre><code># -*- mode: ruby -*-\n# vi: set ft=ruby :\n\nVagrant.configure(\"2\") do |config|\n config.vm.box = \"teraflowsdncontroller\"\n config.vm.box_version = \"1.1.0\"\n config.vm.network :forwarded_port, host: 8080 ,guest: 80\nend\n</code></pre> <p>Now you'll be able to spin up the virtual machine by issuing the command:</p> <pre><code>vagrant up\n</code></pre> <p>And connect to the machine using:</p> <pre><code>vagrant ssh\n</code></pre> Pre-configured boxes <p> If you do not wish to create your own Vagrant Box, you can use one of the existing ones created by TFS contributors. - davidjosearaujo/teraflowsdncontroller - ... </p> <p>To use them, you simply have to create a Vagrantfile and run <code>vagrant up controller</code> in the same directory. 
The following example Vagrantfile already allows you to do just that, with the bonus of exposing the multiple management GUIs to your <code>localhost</code>.</p> <pre><code>Vagrant.configure(\"2\") do |config|\n\n config.vm.define \"controller\" do |controller|\n controller.vm.box = \"davidjosearaujo/teraflowsdncontroller\"\n controller.vm.network \"forwarded_port\", guest: 80, host: 8080 # WebUI\n controller.vm.network \"forwarded_port\", guest: 8084, host: 50750 # Linkerd Viz Dashboard\n controller.vm.network \"forwarded_port\", guest: 8081, host: 8081 # CockroachDB Dashboard\n controller.vm.network \"forwarded_port\", guest: 8222, host: 8222 # NATS Dashboard\n controller.vm.network \"forwarded_port\", guest: 9000, host: 9000 # QuestDB Dashboard\n controller.vm.network \"forwarded_port\", guest: 9090, host: 9090 # Prometheus Dashboard\n\n # Setup Linkerd Viz reverse proxy\n ## Copy config file\n controller.vm.provision \"file\" do |f|\n f.source = \"./reverse-proxy-linkerdviz.sh\"\n f.destination = \"./reverse-proxy-linkerdviz.sh\"\n end\n ## Execute configuration file\n controller.vm.provision \"shell\" do |s|\n s.inline = \"chmod +x ./reverse-proxy-linkerdviz.sh &amp;&amp; ./reverse-proxy-linkerdviz.sh\"\n end\n\n # Update controller source code to the desired branch\n if ENV['BRANCH'] != nil\n controller.vm.provision \"shell\" do |s|\n s.inline = \"cd ./tfs-ctrl &amp;&amp; git pull &amp;&amp; git switch \" + ENV['BRANCH']\n end\n end\n\n end\nend\n</code></pre> <p>This Vagrantfile also allows for optional repository updates on startup by running the command with a specified environment variable <code>BRANCH</code></p> <pre><code>BRANCH=develop vagrant up controller\n</code></pre> Linkerd DNS rebinding bypass <p> Because of Linkerd's security measures against DNS rebinding, a reverse proxy, that modifies the request's header <code>Host</code> field, is needed to expose the GUI to the host. The previous Vagrantfile already deploys such configurations, for that, all you need to do is create the <code>reverse-proxy-linkerdviz.sh</code> file in the same directory. The content of this file is displayed below.</p> <pre><code># Install NGINX\nsudo apt update &amp;&amp; sudo apt install nginx -y\n\n# NGINX reverse proxy configuration\necho 'server {\n listen 8084;\n\n location / {\n proxy_pass http://127.0.0.1:50750;\n proxy_set_header Host localhost;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n }\n}' &gt; /home/vagrant/expose-linkerd\n\n# Create symlink of the NGINX configuration file\nsudo ln -s /home/vagrant/expose-linkerd /etc/nginx/sites-enabled/\n\n# Commit the reverse proxy configurations\nsudo systemctl restart nginx\n\n# Enable start on login\necho \"linkerd viz dashboard &amp;\" &gt;&gt; .profile\n\n# Start dashboard\nlinkerd viz dashboard &amp;\n\necho \"Linkerd Viz dashboard running!\"\n</code></pre>","text":""},{"location":"deployment_guide/#12-install-microk8s","title":"1.2. Install MicroK8s","text":"<p>This section describes how to deploy the MicroK8s Kubernetes platform and configure it to be used with ETSI TeraFlowSDN controller. 
Besides, Docker is installed to build docker images for the ETSI TeraFlowSDN controller.</p> <p>The steps described in this section might take some minutes depending on your internet connection speed and the resources assigned to your VM, or the specifications of your physical server.</p> <p>To facilitate work, these steps are easier to be executed through an SSH connection, for instance using tools like PuTTY or MobaXterm.</p> Upgrade the Ubuntu distribution <p> Skip this step if you already did it during the creation of the VM.</p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> Install prerequisites <p></p> <pre><code>sudo apt-get install -y ca-certificates curl gnupg lsb-release snapd jq\n</code></pre> Install Docker CE <p> Install Docker CE and Docker BuildX plugin</p> <pre><code>sudo apt-get install -y docker.io docker-buildx\n</code></pre> <p>NOTE: Starting from Docker v23, Build architecture has been updated and <code>docker build</code> command entered into deprecation process in favor of the new <code>docker buildx build</code> command. Package <code>docker-buildx</code> provides the new <code>docker buildx build</code> command.</p> <p>Add key \"insecure-registries\" with the private repository to the daemon configuration. It is done in two commands since sometimes read from and write to same file might cause trouble.</p> <pre><code>if [ -s /etc/docker/daemon.json ]; then cat /etc/docker/daemon.json; else echo '{}'; fi \\\n | jq 'if has(\"insecure-registries\") then . else .+ {\"insecure-registries\": []} end' -- \\\n | jq '.\"insecure-registries\" |= (.+ [\"localhost:32000\"] | unique)' -- \\\n | tee tmp.daemon.json\nsudo mv tmp.daemon.json /etc/docker/daemon.json\nsudo chown root:root /etc/docker/daemon.json\nsudo chmod 600 /etc/docker/daemon.json\n</code></pre> <p>Restart the Docker daemon</p> <pre><code>sudo systemctl restart docker\n</code></pre> Install MicroK8s <p></p> <p>Important: Some TeraFlowSDN dependencies need to be executed on top of MicroK8s/Kubernetes v1.24. It is not guaranteed (by now) to run on newer versions.</p> <pre><code># Install MicroK8s\nsudo snap install microk8s --classic --channel=1.24/stable\n\n# Create alias for command \"microk8s.kubectl\" to be usable as \"kubectl\"\nsudo snap alias microk8s.kubectl kubectl\n</code></pre> <p>It is important to make sure that <code>ufw</code> will not interfere with the internal pod-to-pod and pod-to-Internet traffic. To do so, first check the status. If <code>ufw</code> is active, use the following command to enable the communication.</p> <pre><code>\n# Verify status of ufw firewall\nsudo ufw status\n\n# If ufw is active, install following rules to enable access pod-to-pod and pod-to-internet\nsudo ufw allow in on cni0 &amp;&amp; sudo ufw allow out on cni0\nsudo ufw default allow routed\n</code></pre> <p>NOTE: MicroK8s can be used to compose a Highly Available Kubernetes cluster enabling you to construct an environment combining the CPU, RAM and storage resources of multiple machines. 
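As an illustration only (a sketch; the actual join address and one-time token are generated by your primary node), adding a second machine to a MicroK8s cluster follows this pattern:</p> <pre><code># On the primary node: print a join command with a one-time token\nmicrok8s add-node\n\n# On the machine that joins the cluster: run the command printed above, e.g.\n# microk8s join 10.0.2.10:25000/&lt;token&gt;\n</code></pre> <p>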
If you are interested in this procedure, review the official instructions in How to build a highly available Kubernetes cluster with MicroK8s, in particular, the step Create a MicroK8s multi-node cluster.</p> <p>References:</p> <ul> <li>The lightweight Kubernetes &gt; Install MicroK8s</li> <li>Install a local Kubernetes with MicroK8s</li> <li>How to build a highly available Kubernetes cluster with MicroK8s</li> </ul> Add user to the docker and microk8s groups <p></p> <p>It is important that your user has the permission to run <code>docker</code> and <code>microk8s</code> in the terminal. To allow this, you need to add your user to the <code>docker</code> and <code>microk8s</code> groups with the following commands:</p> <pre><code>sudo usermod -a -G docker $USER\nsudo usermod -a -G microk8s $USER\nsudo chown -f -R $USER $HOME/.kube\nsudo reboot\n</code></pre> <p>In case that you get trouble executing the following commands, might due to the .kube folder is not automatically provisioned into your home folder, you may follow the steps below:</p> <pre><code>mkdir -p $HOME/.kube\nsudo chown -f -R $USER $HOME/.kube\nmicrok8s config &gt; $HOME/.kube/config\nsudo reboot\n</code></pre> Check status of Kubernetes and addons <p> To retrieve the status of Kubernetes once, run the following command:</p> <pre><code>microk8s.status --wait-ready\n</code></pre> <p>To retrieve the status of Kubernetes periodically (e.g., every 1 second), run the following command:</p> <pre><code>watch -n 1 microk8s.status --wait-ready\n</code></pre> Check all resources in Kubernetes <p> To retrieve the status of the Kubernetes resources once, run the following command:</p> <pre><code>kubectl get all --all-namespaces\n</code></pre> <p>To retrieve the status of the Kubernetes resources periodically (e.g., every 1 second), run the following command:</p> <pre><code>watch -n 1 kubectl get all --all-namespaces\n</code></pre> Enable addons <p></p> <p>First, we need to enable the community plugins (maintained by third parties):</p> <pre><code>microk8s.enable community\n</code></pre> <p>The Addons to be enabled are:</p> <ul> <li><code>dns</code>: enables resolving the pods and services by name</li> <li><code>helm3</code>: required to install NATS</li> <li><code>hostpath-storage</code>: enables providing storage for the pods (required by <code>registry</code>)</li> <li><code>ingress</code>: deploys an ingress controller to expose the microservices outside Kubernetes</li> <li><code>registry</code>: deploys a private registry for the TFS controller images</li> <li><code>linkerd</code>: deploys the linkerd service mesh used for load balancing among replicas</li> <li><code>prometheus</code>: set of tools that enable TFS observability through per-component instrumentation</li> <li><code>metrics-server</code>: deploys the Kubernetes metrics server for API access to service metrics</li> </ul> <pre><code>microk8s.enable dns helm3 hostpath-storage ingress registry prometheus metrics-server linkerd\n</code></pre> <p>Important: Enabling some of the addons might take few minutes. Do not proceed with next steps until the addons are ready. Otherwise, the deployment might fail. 
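As a quick readiness check (a minimal sketch; the exact set of addons depends on what you enabled), the following commands wait for MicroK8s and list any pods that are not yet in Running or Completed state:</p> <pre><code>microk8s.status --wait-ready\nkubectl get pods --all-namespaces | grep -vE 'Running|Completed'\n</code></pre> <p>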
To confirm everything is up and running:</p> <ol> <li>Periodically Check the status of Kubernetes until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, linkerd, metrics-server, prometheus, registry, storage] in the enabled block.</li> <li>Periodically Check Kubernetes resources until all pods are Ready and Running.</li> <li>If it takes too long for the Pods to be ready, we observed that rebooting the machine may help.</li> </ol> <p>Then, create aliases to make the commands easier to access:</p> <pre><code>sudo snap alias microk8s.helm3 helm3\nsudo snap alias microk8s.linkerd linkerd\n</code></pre> <p>To validate that <code>linkerd</code> is working correctly, run:</p> <pre><code>linkerd check\n</code></pre> <p>To validate that the <code>metrics-server</code> is working correctly, run:</p> <pre><code>kubectl top pods --all-namespaces\n</code></pre> <p>and you should see a screen similar to the <code>top</code> command in Linux, showing the columns namespace, pod name, CPU (cores), and MEMORY (bytes).</p> <p>In case pods are not starting, check information from pods logs. For example, linkerd is sensitive for proper /etc/resolv.conf syntax.</p> <pre><code>kubectl logs &lt;podname&gt; --namespace &lt;namespace&gt;\n</code></pre> <p>If the command shows an error message, also restarting the machine might help.</p> Stop, Restart, and Redeploy <p> Find below some additional commands you might need while you work with MicroK8s:</p> <pre><code>microk8s.stop # stop MicroK8s cluster (for instance, before power off your computer)\nmicrok8s.start # start MicroK8s cluster\nmicrok8s.reset # reset infrastructure to a clean state\n</code></pre> <p>If the following commands does not work to recover the MicroK8s cluster, you can redeploy it.</p> <p>If you want to keep MicroK8s configuration, use:</p> <pre><code>sudo snap remove microk8s\n</code></pre> <p>If you need to completely drop MicroK8s and its complete configuration, use:</p> <pre><code>sudo snap remove microk8s --purge\nsudo apt-get remove --purge docker.io docker-buildx\n</code></pre> <p>IMPORTANT: After uninstalling MicroK8s, it is convenient to reboot the computer (the VM if you work on a VM, or the physical computer if you use a physical computer). Otherwise, there are system configurations that are not correctly cleaned. Especially in what port forwarding and firewall rules matters.</p> <p>After the reboot, redeploy as it is described in this section.</p>"},{"location":"deployment_guide/#13-deploy-teraflowsdn","title":"1.3. Deploy TeraFlowSDN","text":"<p>This section describes how to deploy TeraFlowSDN controller on top of MicroK8s using the environment configured in the previous sections.</p> Install prerequisites <p></p> <pre><code>sudo apt-get install -y git curl jq\n</code></pre> Clone the Git repository of the TeraFlowSDN controller <p> Clone from ETSI-hosted GitLab code repository:</p> <pre><code>mkdir ~/tfs-ctrl\ngit clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl\n</code></pre> <p>Important: The original H2020-TeraFlow project hosted on GitLab.com has been archieved and will not receive further contributions/updates. 
Please, clone from ETSI-hosted GitLab code repository.</p> Checkout the appropriate Git branch <p> TeraFlowSDN controller versions can be found in the appropriate release tags and/or branches as described in Home &gt; Versions.</p> <p>By default the branch master is checked out and points to the latest stable version of the TeraFlowSDN controller, while branch develop contains the latest developments and contributions under test and validation.</p> <p>To switch to the appropriate branch run the following command, changing <code>develop</code> by the name of the branch you want to deploy:</p> <pre><code>cd ~/tfs-ctrl\ngit checkout develop\n</code></pre> Prepare a deployment script with the deployment settings <p> Create a new deployment script, e.g., <code>my_deploy.sh</code>, adding the appropriate settings as follows. This section provides just an overview of the available settings. An example <code>my_deploy.sh</code> script is provided in the root folder of the project for your convenience with full description of all the settings.</p> <p>Note: The example <code>my_deploy.sh</code> script provides reasonable settings for deploying a functional and complete enough TeraFlowSDN controller, and a brief description of their meaning. To see extended descriptions, check scripts in the <code>deploy</code> folder.</p> <pre><code>cd ~/tfs-ctrl\ntee my_deploy.sh &gt;/dev/null &lt;&lt; EOF\n# ----- TeraFlowSDN ------------------------------------------------------------\nexport TFS_REGISTRY_IMAGES=\"http://localhost:32000/tfs/\"\nexport TFS_COMPONENTS=\"context device ztp monitoring pathcomp service slice nbi webui load_generator\"\nexport TFS_IMAGE_TAG=\"dev\"\nexport TFS_K8S_NAMESPACE=\"tfs\"\nexport TFS_EXTRA_MANIFESTS=\"manifests/nginx_ingress_http.yaml\"\nexport TFS_GRAFANA_PASSWORD=\"admin123+\"\nexport TFS_SKIP_BUILD=\"\"\n\n# ----- CockroachDB ------------------------------------------------------------\nexport CRDB_NAMESPACE=\"crdb\"\nexport CRDB_EXT_PORT_SQL=\"26257\"\nexport CRDB_EXT_PORT_HTTP=\"8081\"\nexport CRDB_USERNAME=\"tfs\"\nexport CRDB_PASSWORD=\"tfs123\"\nexport CRDB_DATABASE=\"tfs\"\nexport CRDB_DEPLOY_MODE=\"single\"\nexport CRDB_DROP_DATABASE_IF_EXISTS=\"YES\"\nexport CRDB_REDEPLOY=\"\"\n\n# ----- NATS -------------------------------------------------------------------\nexport NATS_NAMESPACE=\"nats\"\nexport NATS_EXT_PORT_CLIENT=\"4222\"\nexport NATS_EXT_PORT_HTTP=\"8222\"\nexport NATS_REDEPLOY=\"\"\n\n# ----- QuestDB ----------------------------------------------------------------\nexport QDB_NAMESPACE=\"qdb\"\nexport QDB_EXT_PORT_SQL=\"8812\"\nexport QDB_EXT_PORT_ILP=\"9009\"\nexport QDB_EXT_PORT_HTTP=\"9000\"\nexport QDB_USERNAME=\"admin\"\nexport QDB_PASSWORD=\"quest\"\nexport QDB_TABLE_MONITORING_KPIS=\"tfs_monitoring_kpis\"\nexport QDB_TABLE_SLICE_GROUPS=\"tfs_slice_groups\"\nexport QDB_DROP_TABLES_IF_EXIST=\"YES\"\nexport QDB_REDEPLOY=\"\"\n\nEOF\n</code></pre> <p>The settings are organized in 4 sections: - Section <code>TeraFlowSDN</code>: - <code>TFS_REGISTRY_IMAGE</code> enables to specify the private Docker registry to be used, by default, we assume to use the Docker respository enabled in MicroK8s. - <code>TFS_COMPONENTS</code> specifies the components their Docker image will be rebuilt, uploaded to the private Docker registry, and deployed in Kubernetes. - <code>TFS_IMAGE_TAG</code> defines the tag to be used for Docker images being rebuilt and uploaded to the private Docker registry. 
- <code>TFS_K8S_NAMESPACE</code> specifies the name of the Kubernetes namespace to be used for deploying the TFS components. - <code>TFS_EXTRA_MANIFESTS</code> enables to provide additional manifests to be applied into the Kubernetes environment during the deployment. Typical use case is to deploy ingress controllers, service monitors for Prometheus, etc. - <code>TFS_GRAFANA_PASSWORD</code> lets you specify the password you want to use for the <code>admin</code> user of the Grafana instance being deployed and linked to the Monitoring component. - <code>TFS_SKIP_BUILD</code>, if set to <code>YES</code>, prevents rebuilding the Docker images. That means, the deploy script will redeploy existing Docker images without rebuilding/updating them.</p> <ul> <li>Section <code>CockroachDB</code>: enables to configure the deployment of the backend CockroachDB database.</li> <li> <p>Check example script <code>my_deploy.sh</code> for further details.</p> </li> <li> <p>Section <code>NATS</code>: enables to configure the deployment of the backend NATS message broker.</p> </li> <li> <p>Check example script <code>my_deploy.sh</code> for further details.</p> </li> <li> <p>Section <code>QuestDB</code>: enables to configure the deployment of the backend QuestDB timeseries database.</p> </li> <li>Check example script <code>my_deploy.sh</code> for further details.</li> </ul> Confirm that MicroK8s is running <p></p> <p>Run the following command:</p> <pre><code>microk8s status\n</code></pre> <p>If it is reported <code>microk8s is not running, try microk8s start</code>, run the following command to start MicroK8s:</p> <pre><code>microk8s start\n</code></pre> <p>Confirm everything is up and running:</p> <ol> <li>Periodically Check the status of Kubernetes until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, registry, storage] in the enabled block.</li> <li>Periodically Check Kubernetes resources until all pods are Ready and Running.</li> </ol> Deploy TFS controller <p> First, source the deployment settings defined in the previous section. This way, you do not need to specify the environment variables in each and every command you execute to operate the TFS controller. Be aware to re-source the file if you open new terminal sessions. 
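Optionally (a convenience sketch, assuming the settings script is stored as <code>~/tfs-ctrl/my_deploy.sh</code>), you can source it automatically in every new session:</p> <pre><code>echo 'source ~/tfs-ctrl/my_deploy.sh' &gt;&gt; ~/.bashrc\n</code></pre> <p>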
Then, run the following command to deploy TeraFlowSDN controller on top of the MicroK8s Kubernetes platform.</p> <pre><code>cd ~/tfs-ctrl\nsource my_deploy.sh\n./deploy/all.sh\n</code></pre> <p>The script performs the following steps:</p> <ul> <li>Executes script <code>./deploy/crdb.sh</code> to automate deployment of CockroachDB database used by Context component.</li> <li>The script automatically checks if CockroachDB is already deployed.</li> <li>If there are settings instructing to drop the database and/or redeploy CockroachDB, it does the appropriate actions to honor them as defined in previous section.</li> <li>Executes script <code>./deploy/nats.sh</code> to automate deployment of NATS message broker used by Context component.</li> <li>The script automatically checks if NATS is already deployed.</li> <li>If there are settings instructing to redeploy the message broker, it does the appropriate actions to honor them as defined in previous section.</li> <li>Executes script <code>./deploy/qdb.sh</code> to automate deployment of QuestDB timeseries database used by Monitoring component.</li> <li>The script automatically checks if QuestDB is already deployed.</li> <li>If there are settings instructing to redeploy the timeseries database, it does the appropriate actions to honor them as defined in previous section.</li> <li>Executes script <code>./deploy/tfs.sh</code> to automate deployment of TeraFlowSDN.</li> <li>Creates the namespace defined in <code>TFS_K8S_NAMESPACE</code></li> <li>Creates secrets for CockroachDB, NATS, and QuestDB to be used by Context and Monitoring components.</li> <li>Builds the Docker images for the components defined in <code>TFS_COMPONENTS</code></li> <li>Tags the Docker images with the value of <code>TFS_IMAGE_TAG</code></li> <li>Pushes the Docker images to the repository defined in <code>TFS_REGISTRY_IMAGE</code></li> <li>Deploys the components defined in <code>TFS_COMPONENTS</code></li> <li>Creates the file <code>tfs_runtime_env_vars.sh</code> with the environment variables for the components defined in <code>TFS_COMPONENTS</code> defining their local host addresses and their port numbers.</li> <li>Applies extra manifests defined in <code>TFS_EXTRA_MANIFESTS</code> such as:<ul> <li>Creating an ingress controller listening at port 80 for HTTP connections to enable external access to the TeraFlowSDN WebUI, Grafana Dashboards, and Compute NBI interfaces.</li> <li>Deploying service monitors to enable monitoring the performance of the components, device drivers and service handlers.</li> </ul> </li> <li>Initialize and configure the Grafana dashboards (if Monitoring component is deployed)</li> <li>Report a summary of the deployment</li> <li>See Show Deployment and Logs</li> </ul>"},{"location":"deployment_guide/#14-webui-and-grafana-dashboards","title":"1.4. WebUI and Grafana Dashboards","text":"<p>This section describes how to get access to the TeraFlowSDN controller WebUI and the monitoring Grafana dashboards.</p> Access the TeraFlowSDN WebUI <p> If you followed the installation steps based on MicroK8s, you got an ingress controller installed that exposes on TCP port 80.</p> <p>Besides, the ingress controller defines the following reverse proxy paths (on your local machine):</p> <ul> <li><code>http://127.0.0.1/webui</code>: points to the WebUI of TeraFlowSDN.</li> <li><code>http://127.0.0.1/grafana</code>: points to the Grafana dashboards. This endpoint brings access to the monitoring dashboards of TeraFlowSDN. 
The credentials for the <code>admin</code>user are those defined in the <code>my_deploy.sh</code> script, in the <code>TFS_GRAFANA_PASSWORD</code> variable.</li> <li><code>http://127.0.0.1/restconf</code>: points to the Compute component NBI based on RestCONF. This endpoint enables connecting external software, such as ETSI OpenSourceMANO NFV Orchestrator, to TeraFlowSDN.</li> </ul> <p>Note: In the creation of the VM, a forward from host TCP port 8080 to VM's TCP port 80 is configured, so the WebUIs and REST APIs of TeraFlowSDN should be exposed on the endpoint <code>127.0.0.1:8080</code> of your local machine instead of <code>127.0.0.1:80</code>.</p>"},{"location":"deployment_guide/#15-show-deployment-and-logs","title":"1.5. Show Deployment and Logs","text":"<p>This section presents some helper scripts to inspect the status of the deployment and the logs of the components. These scripts are particularly helpful for troubleshooting during execution of experiments, development, and debugging.</p> Report the deployment of the TFS controller <p></p> <p>The summary report given at the end of the Deploy TFS controller procedure can be generated manually at any time by running the following command. You can avoid sourcing <code>my_deploy.sh</code> if it has been already done.</p> <pre><code>cd ~/tfs-ctrl\nsource my_deploy.sh\n./deploy/show.sh\n</code></pre> <p>Use this script to validate that all the pods, deployments, replica sets, ingress controller, etc. are ready and have the appropriate state, e.g., running for Pods, and the services are deployed and have appropriate IP addresses and port numbers.</p> Report the log of a specific TFS controller component <p></p> <p>A number of scripts are pre-created in the <code>scripts</code> folder to facilitate the inspection of the component logs. For instance, to dump the log of the Context component, run the following command. You can avoid sourcing <code>my_deploy.sh</code> if it has been already done.</p> <pre><code>source my_deploy.sh\n./scripts/show_logs_context.sh\n</code></pre>"},{"location":"development_guide/","title":"2. Development Guide","text":""},{"location":"development_guide/#21-configure-environment","title":"2.1. Configure Environment","text":""},{"location":"development_guide/#211-python","title":"2.1.1. PythonUpgrade the Ubuntu distribution <p>Skip this step if you already did it during the installation of your machine.</p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> Install PyEnv dependencies <p></p> <pre><code>sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget \\\n curl llvm git libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev\n</code></pre> Install PyEnv <p></p> <p>We recommend installing PyEnv through PyEnv Installer. 
Below you can find the instructions, but we refer you to the link for updated instructions.</p> <pre><code>curl https://pyenv.run | bash\n# When finished, edit ~/.bash_profile // ~/.profile // ~/.bashrc as the installer proposes.\n# In general, it means to append the following lines to ~/.bashrc:\nexport PYENV_ROOT=\"$HOME/.pyenv\"\ncommand -v pyenv &gt;/dev/null || export PATH=\"$PYENV_ROOT/bin:$PATH\"\neval \"$(pyenv init -)\"\neval \"$(pyenv virtualenv-init -)\"\n</code></pre> <p>In case .bashrc is not linked properly to your profile, you may need to append the following line into your local .profile file:</p> <pre><code># Open ~/.profile and append this line:\n+source \"$HOME\"/.bashrc\n</code></pre> Restart the machine <p> Restart the machine for all the changes to take effect.</p> <pre><code>sudo reboot\n</code></pre> Install Python 3.9 over PyEnv <p></p> <p>ETSI TeraFlowSDN uses Python 3.9 by default. You should install the latest stable update of Python 3.9, i.e., avoid \"-dev\" versions. To find the latest version available in PyEnv, you can run the following command:</p> <pre><code>pyenv install --list | grep \" 3.9\"\n</code></pre> <p>At the time of writing, this command will output the following list:</p> <pre><code> 3.9.0\n 3.9-dev\n 3.9.1\n 3.9.2\n 3.9.4\n 3.9.5\n 3.9.6\n 3.9.7\n 3.9.8\n 3.9.9\n 3.9.10\n 3.9.11\n 3.9.12\n 3.9.13\n 3.9.14 \n 3.9.15\n 3.9.16 ** always select the latest version **\n</code></pre> <p>Therefore, the latest stable version is Python 3.9.16. To install this version, you should run:</p> <pre><code>pyenv install 3.9.16\n # This command might take some minutes depending on your Internet connection speed \n # and the performance of your machine.\n</code></pre> Create the Virtual Environment for TeraFlowSDN <p> The following commands create a virtual environment named as <code>tfs</code> using Python 3.9 and associate that environment with the current folder, i.e., <code>~/tfs-ctrl</code>. That way, when you are in that folder, the associated virtual environment will be used, thus inheriting the Python interpreter, i.e., Python 3.9, and the Python packages installed on it.</p> <pre><code>cd ~/tfs-ctrl\npyenv virtualenv 3.9.16 tfs\npyenv local 3.9.16/envs/tfs\n</code></pre> <p>After completing these commands, you should see in your prompt that now you're within the virtual environment <code>3.9.16/envs/tfs</code> on folder <code>~/tfs-ctrl</code>:</p> <pre><code>(3.9.16/envs/tfs) tfs@tfs-vm:~/tfs-ctrl$\n</code></pre> <p>In case that the correct pyenv does not get automatically activated when you change to the tfs-ctrl/ folder, then execute the following command:</p> <pre><code>cd ~/tfs-ctrl\npyenv activate 3.9.16/envs/tfs\n</code></pre> Install the basic Python packages within the virtual environment <p> From within the <code>3.9.16/envs/tfs</code> environment on folder <code>~/tfs-ctrl</code>, run the following commands to install the basic Python packages required to work with TeraFlowSDN.</p> <pre><code>cd ~/tfs-ctrl\n./install_requirements.sh\n</code></pre> <p>Some dependencies require to re-load the session, so log-out and log-in again.</p> Generate the Python code from the gRPC Proto messages and services <p></p> <p>The components, e.g., microservices, of the TeraFlowSDN controller, in general, use a gRPC-based open API to interoperate. All the protocol definitions can be found in sub-folder <code>proto</code> within the root project folder. 
For additional details on gRPC, visit the official web-page gRPC.</p> <p>In order to interact with the components, (re-)generate the Python code from gRPC definitions running the following command:</p> <pre><code>cd ~/tfs-ctrl\nproto/generate_code_python.sh\n</code></pre>","text":"<p>This section describes how to configure the Python environment to run experiments and develop code for the ETSI TeraFlowSDN controller. In particular, we use PyEnv to install the appropriate version of Python and manage the virtual environments.</p>"},{"location":"development_guide/#212-java-quarkus","title":"2.1.2. Java (Quarkus) <p>This section describe the steps needed to create a development environment for TFS components implemented in Java. Currently, ZTP and Policy components have been developed in Java (version 11) and use the Quarkus framework, which enables kubernetes-native development.</p> Install JDK <p> To begin, make sure that you have java installed and in the correct version</p> <pre><code>java --version\n</code></pre> <p>If you don't have java installed you will get an error like the following:</p> <pre><code>Command 'java' not found, but can be installed with:\n\nsudo apt install default-jre # version 2:1.11-72build1, or\nsudo apt install openjdk-11-jre-headless # version 11.0.14+9-0ubuntu2\nsudo apt install openjdk-17-jre-headless # version 17.0.2+8-1\nsudo apt install openjdk-18-jre-headless # version 18~36ea-1\nsudo apt install openjdk-8-jre-headless # version 8u312-b07-0ubuntu1\n</code></pre> <p>In which case you should use the following command to install the correct version:</p> <pre><code>sudo apt install openjdk-11-jre-headless\n</code></pre> <p>Else you should get something like the following:</p> <pre><code>openjdk 11.0.18 2023-01-17\nOpenJDK Runtime Environment (build 11.0.18+10-post-Ubuntu-0ubuntu120.04.1)\nOpenJDK 64-Bit Server VM (build 11.0.18+10-post-Ubuntu-0ubuntu120.04.1, mixed mode, sharing)\n</code></pre> Compiling and testing existing components <p> In the root directory of the existing Java components you will find an executable maven wrapper named <code>mvnw</code>. You could use this executable, which is already configured in pair with the components, instead of your local maven installation. So for example if you want to compile the project you would run the following:</p> <pre><code>./mvnw compile\n</code></pre> VS Code Quarkus plugin <p> In case you are using VS Code for development, we suggest to install the official Quarkus extension. The extension should be able to automatically find the current open project and integrate with the above <code>mvnw</code> maven wrapper, making it easier to control the maven lifecycle. Make sure that you open the specific component directory (i.e., <code>src/ztp</code> or <code>src/policy</code>) and not the general controller one (i.e., <code>src</code>.</p> New Java TFS component <p></p> <p>Sample Project</p> <p>If you want to create a new TFS component written in Java you could generate a new Quarkus project based on the following project:</p> <p>TFS Sample Quarkus Project</p> <p>In that way, you should have most of the libraries you would need to integrate with the rest of the TFS Components. Feel free however to add or remove libraries depending on your needs.</p> <p>Initial setup</p> <p>If you used the sample project above, you should have a project with a basic structure. 
However, there are some steps that you should take before starting development.</p> <p>First, make sure that you copy the protobuf files, which are found in the root directory of the TFS SDN controller, to the <code>new-component/src/main/proto</code> directory.</p> <p>Next, you should create the following files:</p> <ul> <li><code>new-component/.gitlab-ci.yml</code></li> <li><code>new-component/Dockerfile</code></li> <li><code>new-component/src/resources/application.yaml</code></li> </ul> <p>We suggest copying the respective files from existing components (Automation and Policy) and changing them according to your needs.</p>","text":""},{"location":"development_guide/#213-java-maven","title":"2.1.3. Java (Maven) <p>Page under construction</p>","text":""},{"location":"development_guide/#214-rust","title":"2.1.4. Rust <p>Page under construction</p>","text":""},{"location":"development_guide/#215-erlang","title":"2.1.5. Erlang <p>This section describes how to configure the Erlang environment to run experiments and develop code for the ETSI TeraFlowSDN controller.</p> <p>First, we need to install Erlang. There are multiple ways to do so; for development we will be using ASDF, a tool that allows installing multiple versions of Erlang at the same time and switching from one version to the other at will.</p> <ul> <li>First, install any missing dependencies:</li> </ul> <pre><code>sudo apt install curl git autoconf libncurses-dev build-essential m4 libssl-dev \n</code></pre> <ul> <li>Download ASDF tool to the local account:</li> </ul> <pre><code>git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.10.2\n</code></pre> <ul> <li>Make ASDF activate on login by adding these lines at the end of the <code>~/.bashrc</code> file:</li> </ul> <pre><code>. $HOME/.asdf/asdf.sh\n. $HOME/.asdf/completions/asdf.bash\n</code></pre> <ul> <li>Logout and log back in to activate ASDF.</li> </ul> <p>ASDF supports multiple tools by installing their corresponding plugins.</p> <ul> <li>Install the ASDF plugin for Erlang:</li> </ul> <pre><code>asdf plugin add erlang https://github.com/asdf-vm/asdf-erlang.git\n</code></pre> <ul> <li>Install a version of Erlang:</li> </ul> <pre><code>asdf install erlang 24.3.4.2\n</code></pre> <ul> <li>Activate Erlang locally for the TFS controller. This will create a local file called <code>.tool-versions</code> defining which version of the tools to use when running under the current directory:</li> </ul> <pre><code>cd tfs-ctrl/\nasdf local erlang 24.3.4.2\n</code></pre> <p>Erlang projects use a build tool called rebar3. It is used to manage project dependencies, compile a project, and generate project releases.</p> <ul> <li>Install rebar3 locally from source:</li> </ul> <pre><code>cd ~\ngit clone https://github.com/erlang/rebar3.git\ncd rebar3\nasdf local erlang 24.3.4.2\n./bootstrap\n./rebar3 local install\n</code></pre> <ul> <li>Update <code>~/.bashrc</code> to use rebar3 by adding this line at the end:</li> </ul> <pre><code>export PATH=$HOME/.cache/rebar3/bin:$PATH\n</code></pre> <ul> <li>Logout and log back in.</li> </ul>","text":""},{"location":"development_guide/#216-kotlin","title":"2.1.6. Kotlin <p>This section describes the steps needed to establish a development environment for TFS (TeraFlowSDN) components implemented in Kotlin. 
Currently, the <code>Gateway</code> component stands as the sole component developed in Kotlin.</p> Install Kotlin <p> To begin, make sure that you have Kotlin installed and check its current version:</p> <pre><code>kotlin -version\n</code></pre> <p>If you don't have Kotlin installed, you will get an error like the following:</p> <pre><code>Command 'kotlin' not found, but can be installed with:\nsudo snap install --classic kotlin\n</code></pre> <p>In that case, use the following command to install it:</p> <pre><code> sudo snap install --classic kotlin\n</code></pre> <p>Currently, the recommended version is 1.6.21, which uses Java Runtime Environment (JRE) version 11.</p> Compiling and testing existing components <p> To compile a Kotlin project using Gradle, similarly to using the Maven wrapper (mvnw) for Java projects, you can use the Gradle wrapper (gradlew) within the root directory of your Kotlin component, specifically the gateway directory.</p> <p>Navigate to the gateway directory within your Kotlin project. Ensure that it contains the gradlew script along with the gradle directory. Then, create a directory named <code>proto</code> and copy all the files with extension <code>.proto</code> into it as follows:</p> <pre><code>mkdir proto\ncp ../../../proto/*.proto ./proto \n</code></pre> <p>To build the application, open a terminal or command prompt, navigate to the gateway directory, and run the following command:</p> <pre><code>./gradlew build\n</code></pre> <p>The following command runs the gateway application:</p> <pre><code>./gradlew runServer \n</code></pre> New Kotlin TFS component <p></p> <p>Sample Project</p> <p>If you want to create a new TFS component written in Kotlin, you can generate a Kotlin project using <code>gradle</code>. The recommended version is 7.1. Follow the Gradle guide for its installation. For building the project, follow this link instead.</p> <p>From inside the new project directory, run the init task using the following command in a terminal: <code>gradle init</code>. </p> <p>The output will look like this:</p> <pre><code>$ gradle init\n\nSelect type of project to generate:\n 1: basic\n 2: application\n 3: library\n 4: Gradle plugin\nEnter selection (default: basic) [1..4] 2\n\nSelect implementation language:\n 1: C++\n 2: Groovy\n 3: Java\n 4: Kotlin\n 5: Scala\n 6: Swift\nEnter selection (default: Java) [1..6] 4\n\nSelect build script DSL:\n 1: Groovy\n 2: Kotlin\nEnter selection (default: Groovy) [1..2] 1\n\nProject name (default: demo):\nSource package (default: demo):\n\n\nBUILD SUCCESSFUL\n2 actionable tasks: 2 executed\n</code></pre> <p>Initial setup</p> <p>The <code>gradle init</code> command generates the new project. </p> <p>First, ensure the protobuf files are copied from the root directory of the TFS SDN controller. Run the following command in the directory of the new project:</p> <pre><code>mkdir proto \ncp TFS/project/root/path/proto/*.proto ./proto/\n</code></pre> <p>The file <code>build.gradle.kts</code> is fundamental as it manages dependencies. Adjust it to add external libraries. </p> <p>Next, you should create the following files:</p> <ol> <li><code>new-component/.gitlab-ci.yml</code></li> <li><code>new-component/Dockerfile</code></li> </ol> <p>We recommend leveraging the structures and configurations found in the files of existing components for inspiration.</p> <p>Docker Container</p> <p>This project operates with Docker containers. Make sure to produce the container version of your component. 
To generate the container version of the project, modify the 'new-component/Dockerfile.' Execute the following command from the project's root directory:</p> <pre><code>docker build -t new-image -f new-component/Dockerfile ./\n</code></pre>","text":""},{"location":"development_guide/#22-configure-vscode","title":"2.2. Configure VScode","text":"Install VSCode and the required extensions <p>If not already done, install VSCode and the \"Remote SSH\" extension on your local machine, not in the VM.</p> <p>Note: \"Python\" extension is not required here. It will be installed later on the VSCode server running on the VM.</p> Configure the \"Remote SSH\" extension <p></p> <ul> <li>Go to left icon \"Remote Explorer\"</li> <li>Click the \"gear\" icon next to \"SSH TARGETS\" on top of \"Remote Explorer\" bar</li> <li>Choose to edit \"&lt;...&gt;/.ssh/config\" file (or equivalent)</li> <li>Add the following entry (assuming previous port forwarding configuration):</li> </ul> <pre><code>Host TFS-VM\n HostName 127.0.0.1\n Port 2200\n ForwardX11 no\n User tfs\n</code></pre> <ul> <li>Save the file</li> <li>An entry \"TFS-VM\" should appear on \"SSH TARGETS\".</li> </ul> Connect VSCode to the VM through \"Remote SSH\" extension <p></p> <ul> <li>Right-click on \"TFS-VM\"</li> <li>Select \"Connect to Host in Current Window\"</li> <li>Reply to the questions asked</li> <li>Platform of the remote host \"TFS-VM\": Linux</li> <li>\"TFS-VM\" has fingerprint \"\". Do you want to continue?: Continue <li>Type tfs user's password: tfs123</li> <li>You should be now connected to the TFS-VM.</li> <p>Note: if you get a connection error message, the reason might be due to wrong SSH server fingerprint. Edit file \"&lt;...&gt;/.ssh/known_hosts\" on your local user account, check if there is a line starting with \"[127.0.0.1]:2200\" (assuming previous port forwarding configuration), remove the entire line, save the file, and retry connection.</p> Add SSH key to prevent typing the password every time <p> This step creates an SSH key in the VM and installs it on the VSCode to prevent having to type the password every time.</p> <ul> <li>In VSCode (connected to the VM), click menu \"Terminal &gt; New Terminal\"</li> <li>Run the following commands on the VM's terminal through VSCode</li> </ul> <pre><code>ssh-keygen -t rsa -b 4096 -f ~/.ssh/tfs-vm.key\n # leave password empty\nssh-copy-id -i ~/.ssh/tfs-vm.key.pub tfs@10.0.2.10\n # tfs@10.0.2.10's password: &lt;type tfs user's password: tfs123&gt;\nrm .ssh/known_hosts \n</code></pre> <ul> <li>In VSCode, click left \"Explorer\" panel to expand, if not expanded, and click \"Open Folder\" button.</li> <li>Choose \"/home/tfs/\"</li> <li>Type tfs user's password when asked</li> <li>Trust authors of the \"/home/tfs [SSH: TFS-VM]\" folder when asked</li> <li>Right click on the file \"tfs-vm.key\" in the file explorer</li> <li>Select \"Download...\" option</li> <li>Download the file into your user's accout \".ssh\" folder</li> <li> <p>Delete files \"tfs-vm.key\" and \"tfs-vm.key.pub\" on the TFS-VM.</p> </li> <li> <p>In VSCode, click left \"Remote Explorer\" panel to expand</p> </li> <li>Click the \"gear\" icon next to \"SSH TARGETS\" on top of \"Remote Explorer\" bar</li> <li>Choose to edit \"&lt;...&gt;/.ssh/config\" file (or equivalent)</li> <li>Find entry \"Host TFS-VM\" and update it as follows:</li> </ul> <pre><code>Host TFS-VM\n HostName 127.0.0.1\n Port 2200\n ForwardX11 no\n User tfs\n IdentityFile \"&lt;path to the downloaded identity private key file&gt;\"\n</code></pre> <ul> 
<li>Save the file</li> <li>From now, VSCode will use the identity file to connect to the TFS-VM instead of the user's password.</li> </ul> Install VSCode Python Extension (in VSCode server) <p> This step installs Python extensions in VSCode server running in the VM.</p> <ul> <li>In VSCode (connected to the VM), click left button \"Extensions\"</li> <li>Search \"Python\" extension in the extension Marketplace.</li> <li> <p>Install official \"Python\" extension released by Microsoft.</p> <ul> <li>By default, since you're connected to the VM, it will be installed in the VSCode server running in the VM.</li> </ul> </li> <li> <p>In VSCode (connected to the VM), click left button \"Explorer\"</p> </li> <li>Click \"Ctrl+Alt+P\" and type \"Python: Select Interpreter\". Select option \"Python: 3.9.13 64-bit ('tfs')\"</li> </ul> Define environment variables for VSCode <p> The source code in the TFS controller project is hosted in folder <code>src/</code>. To help VSCode find the Python modules and packages, add the following file into your working space root folder:</p> <pre><code>echo \"PYTHONPATH=./src\" &gt;&gt; ~/tfs-ctrl/.env\n</code></pre>"},{"location":"development_guide/#23-develop-a-component-wip","title":"2.3. Develop A Component (WIP)","text":"<p>Page under construction</p>"},{"location":"features_and_bugs/","title":"4. Feature and bugs","text":"<p>This section describes the procedures to request new features and enhancements, report bugs, and the workflows implemented to manage them.</p> <ul> <li>Feature Request Procedure</li> <li>Bug Report Procedure</li> <li>Feature LifeCycle</li> </ul>"},{"location":"features_and_bugs/#41-feature-request-procedure","title":"4.1. Feature Request Procedure","text":"<p>Project features go through a discussion and approval process. To propose a New Feature, TFS uses the issues on its GitLab code repository.</p> <p>Important: A feature request is about functionality, not about implementation details.</p> <ul> <li>Describe WHAT you are proposing, and WHY it is important.</li> <li>DO NOT describe HOW to do it. 
This is done when the new feature is approved by TSC by populating the design details.</li> </ul> <p>Two kind of requests are considered in this procedure:</p> <ul> <li>New Feature: a big change that potentially affects a number of components and requires an appropriate design phase.</li> <li>Enhancement: a relatively small change enhancing TFS that does not require a design phase.</li> </ul> Steps: <p></p> <ol> <li> <p>Go to New Issue page <code>https://labs.etsi.org/rep/tfs/controller/-/issues/new</code>.</p> <ul> <li>You need to be authenticated.</li> </ul> </li> <li> <p>Create a New Issue for your feature</p> <ul> <li>Title: A concise high level description of your feature (see some other examples in GitLab)</li> <li>Type: Issue</li> <li>Description: Choose the \"new-feature\" or \"enhancement\" project templates and fill-in the auto-generated template describing the feature/enhancement.</li> <li>Labels:<ul> <li>Select the type of request: <code>type::new-feature</code> / <code>type::enhancement</code></li> <li>If you foresee the components affected by the request, pick the appropriate labels for them.<ul> <li>Component labels have the form <code>comp-&lt;component-name&gt;</code>.</li> </ul> </li> <li>PLEASE: Do not set other types of labels (to be set by TSC).</li> </ul> </li> <li>PLEASE: Do not set the following fields (to be set by TSC): EPIC, Assignee, Milestone, Weight, Due Date</li> <li>Submit the Issue</li> </ul> </li> <li> <p>Interact with the TSC and the Community through the issue.</p> <ul> <li>TSC will review your request. If it makes sense and its purpose is clear, it will be approved. Otherwise, TSC will provide questions for clarification.</li> </ul> </li> </ol> Designing a Feature: <p></p> <p>Once a feature has been approved, the design phase starts. The design should be included within the description of the feature (GitLab issue description) by concatenating the Design Feature Template (see below) and properly filling it in. In case the feature description becomes too long, attached files could be included to the feature.</p> <p>The design is expected to be socialized with the relevant stakeholders (e.g. MDLs and TSC). Dedicated slots can be allocated in the TECH calls on a per-request basis to discuss and refine it.</p> <p>For writing the design, you can check the design of existing features or use the design template below.</p> Templates: <p></p> <p>New feature / Enhancement request template:</p> <pre><code># Proposers\n\n- name-of-proposer-1 (institution-of-proposer-1)\n- name-of-proposer-2 (institution-of-proposer-2)\n...\n\n# Description\n\nDescribe your proposal in ~1000 characters.\nYou can reference external content listed in section \"References\" as [Ref-1].\n\n# Demo or definition of done\n\nDescribe which high level conditions needs to be fulfilled to demonstrate this feature implementation is completed.\nYou can reference external content (example, demo paper) listed in section \"References\" as [Ref-2].\n\n# References\n\n1. [Reference name](https://reference-url)\n2. Author1, Author2, Author3, et. 
al., \u201cMy demo using feature,\u201d in Conference-Name Demo Track, 20XX.\n</code></pre> <p>Feature design Template:</p> <pre><code># Feature Design\n\n## Clarifications to Expected Behavior Changes\n\nExisting component logic and workflows between components that need to be altered to realize this feature.\nRemember to justify these changes.\n...\n\n## References\n\nList of relevant references for this feature.\n...\n\n## Assumptions\n\nEnumerate the assumptions for this feature, e.g., fix XXX is implemented and merged, specific configurations, specific\ncomponents deployed.\n...\n\n## Impacted Components\n\nList of impacted components: Context, Device, Service, PathComp, Slice, Monitoring, Automation, Policy, Compute, etc.\nJust an enumeration, elaboration of impacts is done below.\n\n## Component1 Impact\n\nDescribe impact (changes) on component1.\n...\n\n## Component2 Impact\n\nDescribe impact (changes) on component2.\n...\n\n## Testing\n\nDescribe test sets (unitary and integration) to be carried out.\nThis section can include/reference external experiments, demo papers, etc.\n...\n</code></pre>"},{"location":"features_and_bugs/#42-bug-report-procedure","title":"4.2. Bug Report Procedure","text":"<p>Project bugs go through a review, confirmation, and resolution process. To report a Bug, TFS uses the issues on its GitLab code repository.</p> <p>Important: New bugs must be properly documented. Please, provide details on:</p> <ul> <li>the details on the deployment environment (Operating System, MicroK8s, etc.)</li> <li>the TeraFlowSDN version (or branch/commit)</li> <li>the TFS deployment settings (components, particular configurations, etc.)</li> <li>the particular sequence of actions that resulted in the bug</li> <li>the TFS components affected by the bug (if you know them)</li> <li>the expected behavior (if you know it)</li> </ul> <p>Without this minimal information, it will/might be difficult to reproduce and resolve the bug, as well as validating the completeness of the solution.</p> Steps: <p></p> <ol> <li> <p>Go to New Issue page <code>https://labs.etsi.org/rep/tfs/controller/-/issues/new</code>.</p> <ul> <li>You need to be authenticated.</li> </ul> </li> <li> <p>Create a New Issue for your bug</p> <ul> <li>Title: A concise high level description of your bug (see some other examples in GitLab)</li> <li>Type: Issue</li> <li>Description: Choose the \"bug\" project template and fill-in the auto-generated template describing the bug.</li> <li>Labels:<ul> <li>Select the type of request: <code>type::bug</code></li> <li>If you foresee the components affected by the bug, pick the appropriate labels for them.<ul> <li>Component labels have the form <code>comp-&lt;component-name&gt;</code>.</li> </ul> </li> <li>PLEASE: Do not set other types of labels (to be set by TSC).</li> </ul> </li> <li>PLEASE: Do not set the following fields (to be set by TSC): EPIC, Assignee, Milestone, Weight, Due Date</li> <li>Submit the Issue</li> </ul> </li> <li> <p>Interact with the TSC and the Community through the issue.</p> <ul> <li>TSC will review your reported bug and try to reproduce it. If we succeed in reproducing it, we will mark it as confirmed, and include its resolution in the development plans. Otherwise, TSC will provide questions for clarification.</li> </ul> </li> </ol>"},{"location":"features_and_bugs/#43-feature-lifecycle","title":"4.3. 
Feature LifeCycle","text":"<p>Once approved, a feature request could transition through the following steps:</p> <ul> <li>Approved: Feature approved by TSC; design phase can start.</li> <li>Design: Feature under design; discussing on HOW to do it.</li> <li>Development: Design approved; feature under development/implementation.</li> <li>Testing and Review: Feature implemented and under review and testing by the developers and the community.</li> <li>Completed: Testing and review completed, and feature merged.</li> <li>Abandoned: Feature abandoned.</li> </ul> <p>Important: An approved feature is not a guarantee for implementation. Implementing a feature requires resources, and resources come from the members, participants and individual contributors integrating the TFS Community, which might have prioritized the development of other features based on their own interests and the interests expressed by the LG, the TSC, and the MDGs.</p> <p>Once a Feature is mature, e.g., Testing, Review, Completed, it can be accepted for inclusion in a specific Release. This is accomplished by including the issue ticket in the respective EPIC \"ReleaseX.Y\". For instance, to see the Features included in Release X.Y, check EPIC \"ReleaseX.Y\".</p>"},{"location":"run_experiments/","title":"3. Run Experiments","text":"<p>This section walks you through the process of running experiments in TeraFlowSDN on top of a machine running MicroK8s Kubernetes platform. The guide includes the details on configuring the Python environment, some basic commands you might need, configuring the network topology, and executing different experiments.</p> <p>Note that the steps followed here are likely to work regardless of the platform where TeraFlowSDN is deployed over.</p> <p>Note also that this guide will keep growing with the new experiments and demonstrations that are being carried out involving the ETSI TeraFlowSDN controller.</p> <p>Important: The NBIs, workflows and drivers have to be considered as experimental. The configuration and monitoring capabilities they support are limited, partially implemented, or tested only with specific laboratory equipment. Use them with care.</p> <ul> <li> <p>3.1. OFC'22 Demo</p> <ul> <li>Bootstrapping of devices</li> <li>Monitoring of device endpoints</li> <li>Management of L3VPN services</li> <li>Integration with ETSI OpenSourceMANO</li> </ul> </li> <li> <p>3.2. ECOC'22 Demo</p> <ul> <li>Disjoint DC-2-DC L2VPN Service management</li> <li>Integration with ETSI OpenSourceMANO</li> </ul> </li> <li> <p>3.3. OECC-PSC'22 Demo (Work In Progress)</p> <ul> <li>Interdomain Slices</li> </ul> </li> <li> <p>3.4. NFV-SDN'22 Demo (Work In Progress)</p> <ul> <li>DLT-based context sharing</li> <li>DLT-based Interdomain Slices with SLAs</li> </ul> </li> </ul>"},{"location":"run_experiments/#31-ofc22-demo","title":"3.1. OFC'22 Demo","text":"<p>This functional test reproduces the live demonstration Demonstration of Zero-touch Device and L3-VPN Service Management Using the TeraFlow Cloud-native SDN Controller carried out at OFC'22 / Open access.</p> <p>The main features demonstrated are:</p> <ul> <li>Bootstrapping of devices</li> <li>Monitoring of device endpoints</li> <li>Management of L3VPN services</li> <li>Integration with ETSI OpenSourceMANO</li> </ul> Functional test folder <p></p> <p>This functional test can be found in folder <code>./src/tests/ofc22/</code>.</p> Execute with real devices <p></p> <p>This functional test is designed to operate both with real and emulated devices. 
By default, emulated devices are used; however, if you have access to real devices, you can create/modify the files <code>./src/tests/ofc22/tests/Objects.py</code> and <code>./src/tests/ofc22/tests/Credentials.py</code> to point to your devices, and map to your own network topology. Otherwise, you can modify the <code>./src/tests/ofc22/tests/descriptors_emulated.json</code> that is designed to be uploaded through the WebUI instead of using the command line scripts. Note that the default scenario assumes devices R2 and R4 are always emulated, while devices R1, R3, and O1 can be configured as emulated or real devices.</p> <p>Important: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver, have to be considered as experimental. The configuration and monitoring capabilities they support are limited or partially implemented/tested. Use them with care.</p> Deployment and Dependencies <p></p> <p>To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN controller instance as described in the Tutorial: Deployment Guide, and you configured the Python environment as described in Tutorial: Development Guide &gt; Configure Environment &gt; Python.</p> Access to the WebUI and Dashboard <p></p> <p>When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards as described in Tutorial: Deployment Guide &gt; WebUI and Grafana Dashboards</p> <p>Notes:</p> <ul> <li>the default credentials for the Grafana Dashboard are user/pass: <code>admin</code>/<code>admin123+</code>.</li> <li>in Grafana, you will find the L3-Monitoring dashboard in the Starred dashboards section.</li> </ul> Test execution <p></p> <p>Before executing the tests, we need to prepare a few things.</p> <p>First, you need to make sure that you have all the gRPC-generated code in your folder. To do so, run:</p> <pre><code>proto/generate_code_python.sh\n</code></pre> <p>Then, it is time to deploy TeraFlowSDN with the correct specification for this scenario. Make sure to load your deployment variables for this scenario by running:</p> <pre><code>source src/tests/ofc22/deploy_specs.sh\n</code></pre> <p>Then, you need to deploy the components by running:</p> <pre><code>./deploy/all.sh\n</code></pre> <p>After the deployment is finished, you need to load the environment variables to support the execution of the tests by running:</p> <pre><code>source tfs_runtime_env_vars.sh\n</code></pre> <p>To execute this functional test, four main steps need to be carried out:</p> <ol> <li>Device bootstrapping</li> <li>L3VPN Service creation</li> <li>L3VPN Service removal</li> <li>Cleanup</li> </ol> <p>As the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. If there is some error during the execution, you should see a detailed report on the error. See the troubleshooting section if needed.</p> <p>You can check the logs of the different components using the appropriate <code>scripts/show_logs_[component].sh</code> scripts after you execute each step.</p> <p>There are two ways to execute the functional tests: running all the tests with a single script, or running each test independently. In the following, we start with the first option, then we comment on how to run each test independently.</p> <p>Running all tests with a single script</p> <p>We have a script that executes all the steps at once. It is meant to be used to test whether all components involved in this scenario are working correctly. 
To run all the functional tests, you can run:</p> <pre><code>src/tests/ofc22/run_tests.sh\n\n</code></pre> <p>The following sections explain each one of the steps.</p> <p>Device bootstrapping</p> <p>This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The expected results are:</p> <ul> <li>The devices to be added into the Topology.</li> <li>The devices to be pre-configured and initialized as ENABLED by the Automation component.</li> <li>The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to automatically start.</li> <li>The links to be added to the topology.</li> </ul> <p>To run this step, you can do it from the WebUI by uploading the file <code>./ofc22/tests/descriptors_emulated.json</code> that contains the descriptors of the contexts, topologies, devices, and links, or by executing the script:</p> <pre><code>./src/tests/ofc22/run_test_01_bootstrap.sh\n</code></pre> <p>When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you should see the monitoring data being plotted and updated every 5 seconds (by default). Given that there is no service configured, you should see a 0-valued flat plot.</p> <p>In the WebUI, select the admin Context. Then, in the Devices tab you should see that 5 different emulated devices have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the Services tab you should see that there is no service created. Note here that the emulated devices produce synthetic randomly-generated monitoring data and do not represent any particular services configured.</p> <p>L3VPN Service creation</p> <p>This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance.</p> <p>To run this step, execute the script:</p> <pre><code>./src/tests/ofc22/run_test_02_create_service.sh\n</code></pre> <p>When the script finishes, check the WebUI Services tab. You should see that two services have been created, one for the optical layer and another for the packet layer. Besides, you can check the Devices tab to see the configuration rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured, you should see the plots with the monitored data for the device. By default, device R1-EMU is selected.</p> <p>L3VPN Service removal</p> <p>This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock OSM instance.</p> <p>To run this step, execute the script:</p> <pre><code>./src/tests/ofc22/run_test_03_delete_service.sh\n</code></pre> <p>or delete the L3NM service from the WebUI.</p> <p>When the script finishes, check the WebUI Services tab. You should see that the two services have been removed. Besides, in the Devices tab you can see that the appropriate configuration rules have been deconfigured. In the Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again.</p> <p>Cleanup</p> <p>This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness. To run this step, execute the script:</p> <pre><code>./src/tests/ofc22/run_test_04_cleanup.sh\n</code></pre> <p>When the script finishes, check the WebUI Devices tab, you should see that the devices have been removed. 
Besides, in the Services tab you can see that the \"admin\" Context has no services given that that context has been removed.</p>"},{"location":"run_experiments/#32-ecoc22-demo","title":"3.2. ECOC'22 Demo","text":"<p>This functional test reproduces the experimental assessment of Experimental Demonstration of Transport Network Slicing with SLA Using the TeraFlowSDN Controller presented at ECOC'22 / IEEEXplore.</p> <p>The main features demonstrated are:</p> <ul> <li>Disjoint DC-2-DC L2VPN Service management</li> <li>Integration with ETSI OpenSourceMANO</li> </ul> Functional test folder <p></p> <p>This functional test can be found in folder <code>./src/tests/ecoc22/</code>.</p> Execute with real devices <p></p> <p>This functional test has only been tested with emulated devices; however, if you have access to real devices, you can modify the files <code>./src/tests/ecoc22/tests/Objects.py</code> and <code>./src/tests/ecoc22/tests/Credentials.py</code> to point to your devices, and map to your network topology. Otherwise, you can modify the <code>./src/tests/ecoc22/tests/descriptors_emulated.json</code> that is designed to be uploaded through the WebUI instead of using the command line scripts.</p> <p>Important: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver, have to be considered as experimental. The configuration and monitoring capabilities they support are limited or partially implemented/tested. Use them with care.</p> Deployment and Dependencies <p></p> <p>To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN controller instance as described in the Tutorial: Deployment Guide, and you configured the Python environment as described in Tutorial: Development Guide &gt; Configure Environment &gt; Python.</p> Access to the WebUI <p></p> <p>When the deployment completes, you can connect to the TeraFlowSDN WebUI as described in Tutorial: Deployment Guide &gt; WebUI and Grafana Dashboards</p> <p>Notes:</p> <ul> <li>this experiment does not make use of Monitoring, so Grafana is not used.</li> <li>the default credentials for the Grafana Dashboard are user/pass: <code>admin</code>/<code>admin123+</code>.</li> <li>this functional test does not involve the Monitoring component, so no monitoring data is plotted in Grafana.</li> </ul> Test execution <p></p> <p>Before executing the tests, we need to prepare a few things.</p> <p>First, you need to make sure that you have all the gRPC-generated code in your folder. To do so, run:</p> <pre><code>proto/generate_code_python.sh\n</code></pre> <p>Second, it is time to deploy TeraFlowSDN with the correct specification for this scenario. Make sure to load your deployment variables for this scenario by running:</p> <pre><code>source src/tests/ecoc22/deploy_specs.sh\n</code></pre> <p>Then, you need to deploy the components by running:</p> <pre><code>./deploy/all.sh\n</code></pre> <p>After the deployment is finished, you need to load the environment variables to support the execution of the tests by running:</p> <pre><code>source tfs_runtime_env_vars.sh\n</code></pre> <p>To execute this functional test, four main steps need to be carried out:</p> <ol> <li>Device bootstrapping</li> <li>L2VPN Slice and Services creation</li> <li>L2VPN Slice and Services removal</li> <li>Cleanup</li> </ol> <p>As the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. 
If there is some error during the execution, you should see a detailed report on the error. See the troubleshooting section if needed.</p> <p>You can check the logs of the different components using the appropriate <code>scripts/show_logs_[component].sh</code> scripts after you execute each step.</p> <p>Device bootstrapping</p> <p>This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The expected results are:</p> <ul> <li>The devices to be added into the Topology.</li> <li>The devices to be pre-configured and initialized as ENABLED by the Automation component.</li> <li>The links to be added to the topology.</li> </ul> <p>To run this step, you can do it from the WebUI by uploading the file <code>./src/tests/ecoc22/tests/descriptors_emulated.json</code> that contains the descriptors of the contexts, topologies, devices, and links, or by executing the <code>./src/tests/ecoc22/run_test_01_bootstrap.sh</code> script.</p> <p>In the WebUI, select the admin Context. Then, in the Devices tab you should see that 5 different emulated devices have been created and activated: 4 packet routers, and 1 optical Open Line System (OLS) controller. Besides, in the Services tab you should see that there is no service created. </p> <p>L2VPN Slice and Services creation</p> <p>This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance.</p> <p>To run this step, execute the <code>./src/tests/ecoc22/run_test_02_create_service.sh</code> script.</p> <p>When the script finishes, check the WebUI Slices and Services tab. You should see that, for the connectivity service requested by MockOSM, one slice has been created, three services have been created (two for the optical layer and another for the packet layer). Note that the two services for the optical layer correspond to the primary (service_uuid ending with \":0\") and the backup (service_uuid ending with \":1\") services. Each of the services indicates the connections and sub-services that are supporting them. Besides, you can check the Devices tab to see the configuration rules that have been configured in each device.</p> <p>L2VPN Slice and Services removal</p> <p>This step deconfigures the previously created slices and services emulating the request an OSM WIM would make by means of a Mock OSM instance.</p> <p>To run this step, execute the <code>./src/tests/ecoc22/run_test_03_delete_service.sh</code> script, or delete the slice from the WebUI.</p> <p>When the script finishes, check the WebUI Slices and Services tab. You should see that the slice and the services have been removed. Besides, in the Devices tab you can see that the appropriate configuration rules have been deconfigured.</p> <p>Cleanup</p> <p>This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness.</p> <p>To run this step, execute the <code>./src/tests/ecoc22/run_test_04_cleanup.sh</code> script.</p> <p>When the script finishes, check the WebUI Devices tab, you should see that the devices have been removed. Besides, in the Slices and Services tab you can see that the admin Context has no services given that that context has been removed.</p>"},{"location":"run_experiments/#33-oecc-psc22-demo-work-in-progress","title":"3.3. 
OECC-PSC'22 Demo (Work In Progress)","text":"<p>Page under construction.</p> <p>The main features demonstrated are:</p> <ul> <li>Interdomain Slices</li> </ul>"},{"location":"run_experiments/#34-nfv-sdn22-demo-work-in-progress","title":"3.4. NFV-SDN'22 Demo (Work In Progress)","text":"<p>Page under construction.</p> <p>The main features demonstrated are:</p> <ul> <li>DLT-based context sharing</li> <li>DLT-based Interdomain Slices with SLAs</li> </ul>"}]}
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"0. Home","text":"<p>Welcome to the ETSI TeraFlowSDN (TFS) Controller wiki!</p> <p>This wiki provides a walkthrough on how to prepare your environment for executing and contributing to the ETSI SDG TeraFlowSDN. Besides, it describes how to run some example experiments.</p>"},{"location":"#try-teraflowsdn-release-30","title":"Try TeraFlowSDN Release 3.0","text":"<p>The new release launched on April 24th, 2024 incorporates a number of new features, improvements, and bug resolutions. Try it by following the guides below, and feel free to give us your feedback. See the Release Notes.</p>"},{"location":"#requisites","title":"Requisites","text":"<p>The guides and walkthroughs below make some reasonable assumptions to simplify the deployment of the TFS controller, the execution of experiments and tests, and the development of new contributions. In particular, we assume:</p> <ul> <li>A physical server or virtual machine for running the TFS controller with the following minimum specifications (check section Configure your Machine for additional details):</li> <li>4 cores / vCPUs</li> <li>8 GB of RAM (10 GB of RAM if you want to develop)</li> <li>60 GB of disk (100 GB of disk if you want to develop)</li> <li>1 NIC card</li> <li>VSCode with the Remote SSH extension</li> <li>Working machine software:</li> <li>Ubuntu Server 22.04.4 LTS or Ubuntu Server 20.04.6 LTS</li> <li>MicroK8s v1.24.17</li> </ul> <p>Use the Wiki menu in the right side of this page to navigate through the various contents of this wiki.</p>"},{"location":"#guides-and-walkthroughs","title":"Guides and Walkthroughs","text":"<p>The following guides and walkthroughs are provided:</p> <ul> <li>1. Deployment Guide</li> <li>2. Development Guide</li> <li>3. Run Experiments</li> <li>4. Features and Bugs</li> <li>5. Supported SBIs and Network Elements</li> <li>6. Supported NBIs</li> <li>7. Supported Service Handlers</li> <li>8. Troubleshooting</li> </ul>"},{"location":"#tutorials-and-tfs-virtual-machine","title":"Tutorials and TFS Virtual Machine","text":"<p>This section provides access to the links and all the materials prepared for the tutorials and hackfests involving ETSI TeraFlowSDN.</p> <ul> <li>TFS Hackfest #3 (Castelldefels, 16-17 October 2023)</li> <li> <p>The link includes explanatory material on P4 for TeraFlowSDN, the set of guided walkthrough, and the details on the interactive sessions the participants addressed (and recordings), as well as a TFS Virtual Machine (Release 2.1).</p> </li> <li> <p>TFS Hackfest #2 (Madrid, 20-21 June 2023)</p> </li> <li> <p>The link includes explanatory material on gNMI and ContainerLab for TeraFlowSDN, the set of challenges the participants addressed (and recordings), as well as a TFS Virtual Machine (Pre-Release 2.1).</p> </li> <li> <p>OFC SC472 (San Diego, 6 March 2023)</p> </li> <li> <p>The link includes a tutorial-style slide deck, as well as a TFS Virtual Machine (Release 2).</p> </li> <li> <p>TFS Hackfest #1 (Amsterdam, 20 October 2022)</p> </li> <li>The link includes a tutorial-style slide deck (and recordings), as well as a TFS Virtual Machine (Pre-Release 2).</li> </ul>"},{"location":"#versions","title":"Versions","text":"<p>New versions of TeraFlowSDN are periodically released. 
Each release is properly tagged and a branch is kept for its future bug fixing, if needed.</p> <ul> <li>The branch master always points to the latest stable version of the TeraFlowSDN controller.</li> <li>The branches release/X.Y.Z point to the code of the release version indicated in the branch name.</li> <li>Code in these branches can be considered stable, and no new features are planned.</li> <li>In case of bugs, point releases increasing the revision number (Z) might be created.</li> <li>The main development branch is named develop.</li> <li>Use with care! It might not be stable.</li> <li>The latest developments and contributions are added to this branch for testing and validation before reaching a release. </li> </ul> <p>To choose the appropriate branch, follow the steps described in 1.3. Deploy TeraFlowSDN &gt; Checkout the Appropriate Git Branch</p>"},{"location":"#events","title":"Events","text":"<p>Find below the list of past and future TFS Events:</p> <ul> <li>ETSI TeraFlowSDN Events </li> </ul>"},{"location":"#contact","title":"Contact","text":"<p>If your environment does not fit the proposed assumptions and you experience issues preparing it to work with the ETSI TeraFlowSDN controller, contact the ETSI TeraFlowSDN SDG team through Slack</p>"},{"location":"deployment_guide/","title":"1. Deployment Guide","text":"<p>This section walks you through the process of deploying TeraFlowSDN on top of a machine running the MicroK8s Kubernetes platform. The guide includes the details on configuring and installing the machine, installing and configuring MicroK8s, and deploying and reporting the status of the TeraFlowSDN controller.</p>"},{"location":"deployment_guide/#11-configure-your-machine","title":"1.1. Configure your Machine","text":"<p>In this section, we describe how to configure a machine (physical or virtual) to be used as the deployment, execution, and development environment for the ETSI TeraFlowSDN controller. Choose your preferred environment below and follow the instructions provided.</p> <p>NOTE: If you already have a remote physical server fitting the requirements specified in this section, feel free to use it instead of deploying a local VM. Check 1.1.1. Physical Server for further details.</p> <p>Virtualization platforms tested are:</p> <ul> <li>Physical Server</li> <li>Oracle Virtual Box</li> <li>VMWare Fusion</li> <li>OpenStack</li> <li>Vagrant Box</li> </ul>"},{"location":"deployment_guide/#111-physical-server","title":"1.1.1. Physical ServerServer SpecificationsClusterized DeploymentNetworkingOperating SystemUpgrade the Ubuntu distribution","text":"<p>This section describes how to configure a physical server for running the ETSI TeraFlowSDN (TFS) controller.</p> <p>Minimum Server Specifications for development and basic deployment</p> <ul> <li>CPU: 4 cores</li> <li>RAM: 8 GB</li> <li>Disk: 60 GB</li> <li>1 GbE NIC</li> </ul> <p>Recommended Server Specifications for development and basic deployment</p> <ul> <li>CPU: 6 cores</li> <li>RAM: 12 GB</li> <li>Disk: 80 GB</li> <li>1 GbE NIC</li> </ul> <p>Server Specifications for best development and deployment experience</p> <ul> <li>CPU: 8 cores</li> <li>RAM: 32 GB</li> <li>Disk: 120 GB</li> <li>1 GbE NIC</li> </ul> <p>NOTE: the specifications listed above are provided as a reference. 
They also depend on the CPU clock frequency, the RAM, the disk technology and speed, etc.</p> <p>For development purposes, it is recommended to run the VSCode IDE (or the IDE of your choice) on a more powerful server, for instance, one matching the recommended server specifications for development and basic deployment.</p> <p>Given that TeraFlowSDN follows a micro-services architecture, for the deployment, it might be better to use many clusterized servers with many slower cores than a single server with a few highly performant cores.</p> <p>You might consider creating a cluster of machines each featuring, at least, the minimum server specifications. That solution brings you scalability in the future.</p> <p>No explicit indications are given in terms of networking besides that servers need access to the Internet for downloading dependencies, binaries, and packages while building and deploying the TeraFlowSDN components.</p> <p>Besides that, the network requirements are essentially the same as those required for running a classical Kubernetes environment. To facilitate the deployment, we extensively use MicroK8s, thus the network requirements are, essentially, those demanded by MicroK8s, especially if you consider creating a Kubernetes cluster.</p> <p>As a reference, the other deployment solutions based on VMs assume the VM is connected to a virtual network configured with the IP range <code>10.0.2.0/24</code>, with the gateway at IP <code>10.0.2.1</code>. The VMs have the IP address <code>10.0.2.10</code>.</p> <p>The minimum required ports to be accessible are: - 22/SSH : for management purposes - 80/HTTP : for the TeraFlowSDN WebUI and Grafana dashboard - 8081/HTTPS : for the CockroachDB WebUI</p> <p>Other ports might be required if you consider deploying add-ons such as Kubernetes observability, etc. The details on these ports are left aside, given that they might vary depending on the Kubernetes environment you use.</p> <p>The recommended Operating System for deploying TeraFlowSDN is Ubuntu Server 22.04 LTS or Ubuntu Server 20.04 LTS. Other versions might work, but we have not tested them. We strongly recommend using Long Term Support (LTS) versions as they provide better stability.</p> <p>Below we provide some installation guidelines: - Installation Language: English - Autodetect your keyboard - If asked, select \"Ubuntu Server\" (do not select \"Ubuntu Server (minimized)\"). - Configure static network specifications (adapt them based on your particular setup):</p> Interface IPv4 Method Subnet Address Gateway Name servers Search domains enp0s3 Manual 10.0.2.0/24 10.0.2.10 10.0.2.1 8.8.8.8,8.8.4.4 <ul> <li>Leave proxy and mirror addresses as they are</li> <li>Let the installer self-upgrade (if asked).</li> <li>Use an entire disk for the installation</li> <li>Disable setup of the disk as LVM group</li> <li>Double check that NO swap space is allocated in the partition table. Kubernetes does not work properly with SWAP.</li> <li>Configure your user and system names:</li> <li>User name: <code>TeraFlowSDN</code></li> <li>Server's name: <code>tfs-vm</code></li> <li>Username: <code>tfs</code></li> <li>Password: <code>tfs123</code></li> <li>Install Open SSH Server</li> <li>Import SSH keys, if any.</li> <li>Featured Server Snaps</li> <li>Do not install featured server snaps. 
It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble with.</li> <li>Let the system install and upgrade the packages.</li> <li>This operation might take some minutes depending on how old is the Optical Drive ISO image you use and your Internet connection speed.</li> <li>Restart the VM when the installation is completed.</li> </ul> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> <ul> <li>If asked to restart services, restart the default ones proposed.</li> <li>Restart the VM when the installation is completed.</li> </ul>"},{"location":"deployment_guide/#112-oracle-virtual-box","title":"1.1.2. Oracle Virtual BoxCreate a NAT Network in VirtualBoxCreate VM in VirtualBox:Install Ubuntu 22.04 LTS Operating System","text":"<p>This section describes how to configure a VM for running ETSI TeraFlowSDN(TFS) controller using Oracle VirtualBox. It has been tested with VirtualBox up to version 6.1.40 r154048.</p> <p>In \"Oracle VM VirtualBox Manager\", Menu \"File &gt; Preferences... &gt; Network\", create a NAT network with the following specifications:</p> Name CIDR DHCP IPv6 TFS-NAT-Net 10.0.2.0/24 Disabled Disabled <p>Within the newly created \"TFS-NAT-Net\" NAT network, configure the following IPv4 forwarding rules:</p> Name Protocol Host IP Host Port Guest IP Guest Port SSH TCP 127.0.0.1 2200 10.0.2.10 22 HTTP TCP 127.0.0.1 8080 10.0.2.10 80 <p>Note: IP address 10.0.2.10 is the one that will be assigned to the VM.</p> <ul> <li>Name: TFS-VM</li> <li>Type/Version: Linux / Ubuntu (64-bit)</li> <li>CPU (*): 4 vCPUs @ 100% execution capacity</li> <li>RAM: 8 GB</li> <li>Disk: 60 GB, Virtual Disk Image (VDI), Dynamically allocated</li> <li>Optical Drive ISO Image: \"ubuntu-22.04.X-live-server-amd64.iso\"</li> <li>Download the latest Long Term Support (LTS) version of the Ubuntu Server image from Ubuntu 22.04 LTS, e.g., \"ubuntu-22.04.X-live-server-amd64.iso\".</li> <li>Note: use Ubuntu Server image instead of Ubuntu Desktop to create a lightweight VM.</li> <li>Network Adapter 1 (*): enabled, attached to NAT Network \"TFS-NAT-Net\"</li> <li>Minor adjustments (*):</li> <li>Audio: disabled</li> <li>Boot order: disable \"Floppy\"</li> </ul> <p>Note: (*) settings to be editing after the VM is created.</p> <p>In \"Oracle VM VirtualBox Manager\", start the VM in normal mode, and follow the installation procedure. Below we provide some installation guidelines: - Installation Language: English - Autodetect your keyboard - If asked, select \"Ubuntu Server\" (do not select \"Ubuntu Server (minimized)\"). - Configure static network specifications:</p> Interface IPv4 Method Subnet Address Gateway Name servers Search domains enp0s3 Manual 10.0.2.0/24 10.0.2.10 10.0.2.1 8.8.8.8,8.8.4.4 <ul> <li>Leave proxy and mirror addresses as they are</li> <li>Let the installer self-upgrade (if asked).</li> <li>Use an entire disk for the installation</li> <li>Disable setup of the disk as LVM group</li> <li>Double check that NO swap space is allocated in the partition table. Kubernetes does not work properly with SWAP.</li> <li>Configure your user and system names:</li> <li>User name: TeraFlowSDN</li> <li>Server's name: tfs-vm</li> <li>Username: tfs</li> <li>Password: tfs123</li> <li>Install Open SSH Server</li> <li>Import SSH keys, if any.</li> <li>Featured Server Snaps</li> <li>Do not install featured server snaps. 
It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble with.</li> <li>Let the system install and upgrade the packages.</li> <li>This operation might take some minutes depending on how old is the Optical Drive ISO image you use and your Internet connection speed.</li> <li>Restart the VM when the installation is completed.</li> </ul> <p>Upgrade the Ubuntu distribution</p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> <ul> <li>If asked to restart services, restart the default ones proposed.</li> <li>Restart the VM when the installation is completed.</li> </ul> <p>Install VirtualBox Guest Additions On VirtualBox Manager, open the VM main screen. If you are running the VM in headless mode, right click over the VM in the VirtualBox Manager window and click \"Show\". If a dialog informing about how to leave the interface of the VM is shown, confirm pressing \"Switch\" button. The interface of the VM should appear.</p> <p>Click menu \"Device &gt; Insert Guest Additions CD image...\"</p> <p>On the VM terminal, type:</p> <pre><code>sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms\n # This command might take some minutes depending on your VM specs and your Internet access speed.\nsudo mount /dev/cdrom /mnt/\ncd /mnt/\nsudo ./VBoxLinuxAdditions.run\n # This command might take some minutes depending on your VM specs.\nsudo reboot\n</code></pre>"},{"location":"deployment_guide/#113-vmware-fusion","title":"1.1.3. VMWare FusionCreate VM in VMWare Fusion:Install Ubuntu 22.04.1 LTS Operating SystemUpgrade the Ubuntu distribution","text":"<p>This section describes how to configure a VM for running ETSI TeraFlowSDN(TFS) controller using VMWare Fusion. It has been tested with VMWare Fusion version 12 and 13.</p> <p>In \"VMWare Fusion\" manager, create a new network from the \"Settings/Network\" menu.</p> <ul> <li>Unlock to make changes</li> <li>Press the + icon and create a new network</li> <li>Change the name to TFS-NAT-Net</li> <li>Check \"Allow virtual machines on this network to connect to external network (NAT)\"</li> <li>Do not check \"Enable IPv6\"</li> <li>Add port forwarding for HTTP and SSH</li> <li>Uncheck \"Provide address on this network via DHCP\"</li> </ul> <p>Create a new VM an Ubuntu 22.04.1 ISO:</p> <ul> <li>Display Name: TeraFlowSDN</li> <li>Username: tfs</li> <li>Password: tfs123</li> </ul> <p>On the next screen press \"Customize Settings\", save the VM and in \"Settings\" change: - Change to use 4 CPUs - Change to access 8 GB of RAM - Change disk to size 60 GB - Change the network interface to use the previously created TFS-NAT-Net</p> <p>Run the VM to start the installation.</p> <p>The installation will be automatic, without any configuration required.</p> <ul> <li>Configure the guest IP, gateway and DNS:</li> </ul> <p>Using the Network Settings for the wired connection, set the IP to 10.0.2.10, the mask to 255.255.255.0, the gateway to 10.0.2.2 and the DNS to 10.0.2.2.</p> <ul> <li>Disable and remove swap file:</li> </ul> <p>$ sudo swapoff -a $ sudo rm /swapfile</p> <p>Then you can remove or comment the /swapfile entry in /etc/fstab</p> <ul> <li>Install Open SSH Server</li> <li> <p>Import SSH keys, if any.</p> </li> <li> <p>Restart the VM when the installation is completed.</p> </li> </ul> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre>"},{"location":"deployment_guide/#114-openstack","title":"1.1.4. 
OpenStackCreate a Security Group in OpenStack <p> In OpenStack, go to Project - Network - Security Groups - Create Security Group with name TFS</p> <p>Add the following rules:</p> Direction Ether Type IP Protocol Port Range Remote IP Prefix Ingress IPv4 TCP 22 (SSH) 0.0.0.0/0 Ingress IPv4 TCP 2200 0.0.0.0/0 Ingress IPv4 TCP 8080 0.0.0.0/0 Ingress IPv4 TCP 80 0.0.0.0/0 Egress IPv4 Any Any 0.0.0.0/0 Egress IPv6 Any Any ::/0 <p>Note: The IP address will be assigned depending on the network you have configured inside OpenStack. This IP will have to be modified in the TeraFlow configuration files, which by default use IP 10.0.2.10.</p> Create a flavor <p></p> <p>From dashboard (Horizon)</p> <p>Go to Admin - Compute - Flavors and press Create Flavor</p> <ul> <li>Name: TFS</li> <li>VCPUs: 4</li> <li>RAM (MB): 8192</li> <li>Root Disk (GB): 60</li> </ul> <p>From CLI</p> <pre><code> openstack flavor create TFS --id auto --ram 8192 --disk 60 --vcpus 4\n</code></pre> Create an instance in OpenStack: <p></p> <ul> <li>Instance name: TFS-VM</li> <li>Origin: Ubuntu-22.04 cloud image (https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img)</li> <li>Create new volume: No</li> <li>Flavor: TFS</li> <li>Networks: extnet </li> <li>Security Groups: TFS</li> <li>Configuration: Include the following cloud-config</li> </ul> <pre><code>#cloud-config\n# Modifies the password for the VM instance\nusername: ubuntu\npassword: &lt;your-password&gt;\nchpasswd: { expire: False }\nssh_pwauth: True\n</code></pre> Upgrade the Ubuntu distribution <p></p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> <ul> <li>If asked to restart services, restart the default ones proposed.</li> <li>Restart the VM when the installation is completed.</li> </ul>","text":"<p>This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using OpenStack. It has been tested with OpenStack Kolla up to the Yoga version. </p>"},{"location":"deployment_guide/#115-vagrant-box","title":"1.1.5. Vagrant Box <p>This section describes how to create a Vagrant Box, using the base virtual machine configured in Oracle Virtual Box.</p> Virtual Machine specifications <p> Most of the specifications can be as specified in the Oracle Virtual Box page; however, there are a few particularities to Vagrant that must be accommodated, such as:</p> <ul> <li>Virtual Hard Disk</li> <li>Size: 60GB (at least)</li> <li>Type: VMDK</li> </ul> <p></p> <p>Also, before initiating the VM and installing the OS, we'll need to:</p> <ul> <li>Disable Floppy in the 'Boot Order'</li> <li>Disable audio</li> <li>Disable USB</li> <li>Ensure Network Adapter 1 is set to NAT</li> </ul> Network configurations <p> At Network Adapter 1, the following port-forwarding rule must be set.</p> Name Protocol Host IP Host Port Guest IP Guest Port SSH TCP 2222 22 <p></p> Installing the OS <p></p> <p>For a Vagrant Box, it is generally suggested that the ISO's server version is used, as it is intended to be used via SSH, and any web GUI is expected to be forwarded to the host.</p> <p></p> <p></p> <p></p> <p>Make sure the disk is not configured as an LVM group!</p> <p></p> Vagrant user <p> By default, Vagrant expects the box's OS to have a user named <code>vagrant</code> with the password also being <code>vagrant</code>.</p> <p></p> SSH <p></p> <p>Vagrant uses SSH to connect to the boxes, so installing it now will save the hassle of doing it later.</p> <p></p> Featured server snaps <p></p> <p>Do not install featured server snaps. 
It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble.</p> Updates <p></p> <p>Let the system install and upgrade the packages. This operation might take some minutes depending on the age of the Optical Drive ISO image you use and your Internet connection speed.</p> Upgrade the Ubuntu distribution <p></p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> <ul> <li>If asked to restart services, restart the default ones proposed.</li> <li>Restart the VM when the installation is completed.</li> </ul> Install VirtualBox Guest Additions <p> On VirtualBox Manager, open the VM main screen. If you are running the VM in headless mode, right-click over the VM in the VirtualBox Manager window, and click \"Show\". If a dialog informing about how to leave the interface of the VM is shown, confirm by pressing the \"Switch\" button. The interface of the VM should appear.</p> <p>Click the menu \"Device &gt; Insert Guest Additions CD image...\"</p> <p>On the VM terminal, type:</p> <pre><code>sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms\n # This command might take some minutes depending on your VM specs and your Internet access speed.\nsudo mount /dev/cdrom /mnt/\ncd /mnt/\nsudo ./VBoxLinuxAdditions.run\n # This command might take some minutes depending on your VM specs.\nsudo reboot\n</code></pre> ETSI TFS Installation <p> After this, proceed to 1.2. Install MicroK8s; once done, return to this wiki to finish the Vagrant Box creation.</p> Box configuration and creation <p> Make sure the ETSI TFS controller is correctly configured. You will not be able to change it afterwards!</p> <p>It is advisable to do the next configurations from a host's terminal, via an SSH connection.</p> <pre><code>ssh -p 2222 vagrant@127.0.0.1\n</code></pre> Set root password <p> Set the root password to <code>vagrant</code>.</p> <pre><code>sudo passwd root\n</code></pre> Set the superuser <p> Set up the Vagrant user so that it\u2019s able to use sudo without being prompted for a password. Anything in the <code>/etc/sudoers.d/*</code> directory is included in the sudoers privileges when created by the root user. Create a new sudoers file:</p> <pre><code>sudo visudo -f /etc/sudoers.d/vagrant\n</code></pre> <p>and add the following lines:</p> <pre><code># add vagrant user\nvagrant ALL=(ALL) NOPASSWD:ALL\n</code></pre> <p>You can now test that it works by running a simple command.</p> <pre><code>sudo pwd\n</code></pre> <p>Issuing this command should result in an immediate response without a request for a password.</p> Install the Vagrant key <p> Vagrant uses a default set of SSH keys for you to directly connect to boxes via the CLI command <code>vagrant ssh</code>, after which it creates a new set of SSH keys for your new box. 
Because of this, we need to load the default key to be able to access the box after it is created.</p> <pre><code>chmod 0700 /home/vagrant/.ssh\nwget --no-check-certificate https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O /home/vagrant/.ssh/authorized_keys\nchmod 0600 /home/vagrant/.ssh/authorized_keys\nchown -R vagrant /home/vagrant/.ssh\n</code></pre> Configure the OpenSSH Server <p> Edit the <code>/etc/ssh/sshd_config</code> file:</p> <pre><code>sudo vim /etc/ssh/sshd_config\n</code></pre> <p>And uncomment the following line:</p> <pre><code>AuthorizedKeysFile %h/.ssh/authorized_keys\n</code></pre> <p>Then restart SSH.</p> <pre><code>sudo service ssh restart\n</code></pre> Package the box <p> Before you package the box, if you intend to make your box public, it is best to clean your bash history with:</p> <pre><code>history -c\n</code></pre> <p>Exit the SSH connection, and on your host machine, package the VM:</p> <pre><code>vagrant package --base teraflowsdncontroller --output teraflowsdncontroller.box\n</code></pre> Test run the box <p> Add the base box to your local Vagrant box list:</p> <pre><code>vagrant box add --name teraflowsdncontroller ./teraflowsdncontroller.box\n</code></pre> <p>Now you should try to run it; for that, you'll need to create a Vagrantfile. For a simple run, this is the minimal required code for this box:</p> <pre><code># -*- mode: ruby -*-\n# vi: set ft=ruby :\n\nVagrant.configure(\"2\") do |config|\n config.vm.box = \"teraflowsdncontroller\"\n config.vm.box_version = \"1.1.0\"\n config.vm.network :forwarded_port, host: 8080, guest: 80\nend\n</code></pre> <p>Now you'll be able to spin up the virtual machine by issuing the command:</p> <pre><code>vagrant up\n</code></pre> <p>And connect to the machine using:</p> <pre><code>vagrant ssh\n</code></pre> Pre-configured boxes <p> If you do not wish to create your own Vagrant Box, you can use one of the existing ones created by TFS contributors:</p> <ul> <li>davidjosearaujo/teraflowsdncontroller</li> <li>...</li> </ul> <p>To use them, you simply have to create a Vagrantfile and run <code>vagrant up controller</code> in the same directory. 
The following example Vagrantfile already allows you to do just that, with the bonus of exposing the multiple management GUIs to your <code>localhost</code>.</p> <pre><code>Vagrant.configure(\"2\") do |config|\n\n config.vm.define \"controller\" do |controller|\n controller.vm.box = \"davidjosearaujo/teraflowsdncontroller\"\n controller.vm.network \"forwarded_port\", guest: 80, host: 8080 # WebUI\n controller.vm.network \"forwarded_port\", guest: 8084, host: 50750 # Linkerd Viz Dashboard\n controller.vm.network \"forwarded_port\", guest: 8081, host: 8081 # CockroachDB Dashboard\n controller.vm.network \"forwarded_port\", guest: 8222, host: 8222 # NATS Dashboard\n controller.vm.network \"forwarded_port\", guest: 9000, host: 9000 # QuestDB Dashboard\n controller.vm.network \"forwarded_port\", guest: 9090, host: 9090 # Prometheus Dashboard\n\n # Setup Linkerd Viz reverse proxy\n ## Copy config file\n controller.vm.provision \"file\" do |f|\n f.source = \"./reverse-proxy-linkerdviz.sh\"\n f.destination = \"./reverse-proxy-linkerdviz.sh\"\n end\n ## Execute configuration file\n controller.vm.provision \"shell\" do |s|\n s.inline = \"chmod +x ./reverse-proxy-linkerdviz.sh &amp;&amp; ./reverse-proxy-linkerdviz.sh\"\n end\n\n # Update controller source code to the desired branch\n if ENV['BRANCH'] != nil\n controller.vm.provision \"shell\" do |s|\n s.inline = \"cd ./tfs-ctrl &amp;&amp; git pull &amp;&amp; git switch \" + ENV['BRANCH']\n end\n end\n\n end\nend\n</code></pre> <p>This Vagrantfile also allows for optional repository updates on startup by running the command with a specified environment variable <code>BRANCH</code></p> <pre><code>BRANCH=develop vagrant up controller\n</code></pre> Linkerd DNS rebinding bypass <p> Because of Linkerd's security measures against DNS rebinding, a reverse proxy, that modifies the request's header <code>Host</code> field, is needed to expose the GUI to the host. The previous Vagrantfile already deploys such configurations, for that, all you need to do is create the <code>reverse-proxy-linkerdviz.sh</code> file in the same directory. The content of this file is displayed below.</p> <pre><code># Install NGINX\nsudo apt update &amp;&amp; sudo apt install nginx -y\n\n# NGINX reverse proxy configuration\necho 'server {\n listen 8084;\n\n location / {\n proxy_pass http://127.0.0.1:50750;\n proxy_set_header Host localhost;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n }\n}' &gt; /home/vagrant/expose-linkerd\n\n# Create symlink of the NGINX configuration file\nsudo ln -s /home/vagrant/expose-linkerd /etc/nginx/sites-enabled/\n\n# Commit the reverse proxy configurations\nsudo systemctl restart nginx\n\n# Enable start on login\necho \"linkerd viz dashboard &amp;\" &gt;&gt; .profile\n\n# Start dashboard\nlinkerd viz dashboard &amp;\n\necho \"Linkerd Viz dashboard running!\"\n</code></pre>","text":""},{"location":"deployment_guide/#12-install-microk8s","title":"1.2. Install MicroK8s","text":"<p>This section describes how to deploy the MicroK8s Kubernetes platform and configure it to be used with ETSI TeraFlowSDN controller. 
Besides, Docker is installed to build Docker images for the ETSI TeraFlowSDN controller.</p> <p>The steps described in this section might take some minutes depending on your Internet connection speed and the resources assigned to your VM, or the specifications of your physical server.</p> <p>To facilitate the work, these steps are easier to execute through an SSH connection, for instance using tools like PuTTY or MobaXterm.</p> Upgrade the Ubuntu distribution <p> Skip this step if you already did it during the creation of the VM.</p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> Install prerequisites <p></p> <pre><code>sudo apt-get install -y ca-certificates curl gnupg lsb-release snapd jq\n</code></pre> Install Docker CE <p> Install Docker CE and the Docker BuildX plugin:</p> <pre><code>sudo apt-get install -y docker.io docker-buildx\n</code></pre> <p>NOTE: Starting from Docker v23, the build architecture has been updated and the <code>docker build</code> command entered a deprecation process in favor of the new <code>docker buildx build</code> command. Package <code>docker-buildx</code> provides the new <code>docker buildx build</code> command.</p> <p>Add the key \"insecure-registries\" with the private repository to the daemon configuration. It is done in two commands since reading from and writing to the same file in a single pipeline might cause trouble.</p> <pre><code>if [ -s /etc/docker/daemon.json ]; then cat /etc/docker/daemon.json; else echo '{}'; fi \\\n | jq 'if has(\"insecure-registries\") then . else .+ {\"insecure-registries\": []} end' -- \\\n | jq '.\"insecure-registries\" |= (.+ [\"localhost:32000\"] | unique)' -- \\\n | tee tmp.daemon.json\nsudo mv tmp.daemon.json /etc/docker/daemon.json\nsudo chown root:root /etc/docker/daemon.json\nsudo chmod 600 /etc/docker/daemon.json\n</code></pre> <p>Restart the Docker daemon:</p> <pre><code>sudo systemctl restart docker\n</code></pre> Install MicroK8s <p></p> <p>Important: Some TeraFlowSDN dependencies need to be executed on top of MicroK8s/Kubernetes v1.24. For now, it is not guaranteed to run on newer versions.</p> <pre><code># Install MicroK8s\nsudo snap install microk8s --classic --channel=1.24/stable\n\n# Create alias for command \"microk8s.kubectl\" to be usable as \"kubectl\"\nsudo snap alias microk8s.kubectl kubectl\n</code></pre> <p>It is important to make sure that <code>ufw</code> will not interfere with the internal pod-to-pod and pod-to-Internet traffic. To do so, first check the status. If <code>ufw</code> is active, use the following commands to enable the communication.</p> <pre><code>\n# Verify status of ufw firewall\nsudo ufw status\n\n# If ufw is active, install following rules to enable access pod-to-pod and pod-to-internet\nsudo ufw allow in on cni0 &amp;&amp; sudo ufw allow out on cni0\nsudo ufw default allow routed\n</code></pre> <p>NOTE: MicroK8s can be used to compose a Highly Available Kubernetes cluster, enabling you to construct an environment combining the CPU, RAM and storage resources of multiple machines. 
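For instance, joining a second machine to an existing MicroK8s cluster is, in essence, a two-command procedure (shown here only as a sketch; the actual join command, including its token, is printed by <code>add-node</code> on your own cluster):</p> <pre><code># On the existing node: generate a join command\nmicrok8s add-node\n\n# On the machine to be added: run one of the join commands printed above, e.g.:\n# microk8s join &lt;existing-node-ip&gt;:25000/&lt;token&gt;\n</code></pre> <p>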
If you are interested in this procedure, review the official instructions in How to build a highly available Kubernetes cluster with MicroK8s, in particular, the step Create a MicroK8s multi-node cluster.</p> <p>References:</p> <ul> <li>The lightweight Kubernetes &gt; Install MicroK8s</li> <li>Install a local Kubernetes with MicroK8s</li> <li>How to build a highly available Kubernetes cluster with MicroK8s</li> </ul> Add user to the docker and microk8s groups <p></p> <p>It is important that your user has the permission to run <code>docker</code> and <code>microk8s</code> in the terminal. To allow this, you need to add your user to the <code>docker</code> and <code>microk8s</code> groups with the following commands:</p> <pre><code>sudo usermod -a -G docker $USER\nsudo usermod -a -G microk8s $USER\nsudo chown -f -R $USER $HOME/.kube\nsudo reboot\n</code></pre> <p>If you get errors executing the following commands, possibly because the <code>.kube</code> folder is not automatically provisioned into your home folder, you may follow the steps below:</p> <pre><code>mkdir -p $HOME/.kube\nsudo chown -f -R $USER $HOME/.kube\nmicrok8s config &gt; $HOME/.kube/config\nsudo reboot\n</code></pre> Check status of Kubernetes and addons <p> To retrieve the status of Kubernetes once, run the following command:</p> <pre><code>microk8s.status --wait-ready\n</code></pre> <p>To retrieve the status of Kubernetes periodically (e.g., every 1 second), run the following command:</p> <pre><code>watch -n 1 microk8s.status --wait-ready\n</code></pre> Check all resources in Kubernetes <p> To retrieve the status of the Kubernetes resources once, run the following command:</p> <pre><code>kubectl get all --all-namespaces\n</code></pre> <p>To retrieve the status of the Kubernetes resources periodically (e.g., every 1 second), run the following command:</p> <pre><code>watch -n 1 kubectl get all --all-namespaces\n</code></pre> Enable addons <p></p> <p>First, we need to enable the community plugins (maintained by third parties):</p> <pre><code>microk8s.enable community\n</code></pre> <p>The addons to be enabled are:</p> <ul> <li><code>dns</code>: enables resolving the pods and services by name</li> <li><code>helm3</code>: required to install NATS</li> <li><code>hostpath-storage</code>: enables providing storage for the pods (required by <code>registry</code>)</li> <li><code>ingress</code>: deploys an ingress controller to expose the microservices outside Kubernetes</li> <li><code>registry</code>: deploys a private registry for the TFS controller images</li> <li><code>linkerd</code>: deploys the linkerd service mesh used for load balancing among replicas</li> <li><code>prometheus</code>: set of tools that enable TFS observability through per-component instrumentation</li> <li><code>metrics-server</code>: deploys the Kubernetes metrics server for API access to service metrics</li> </ul> <pre><code>microk8s.enable dns helm3 hostpath-storage ingress registry prometheus metrics-server linkerd\n</code></pre> <p>Important: Enabling some of the addons might take a few minutes. Do not proceed with the next steps until the addons are ready. Otherwise, the deployment might fail. 
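If you prefer to block on readiness with a single command instead of checking manually, a one-liner like the following can help (an optional convenience, assuming your <code>kubectl</code> supports <code>--all-namespaces</code> for <code>wait</code>, as recent versions do); it returns once all pods report Ready or the timeout expires:</p> <pre><code>kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=600s\n</code></pre> <p>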
To confirm everything is up and running:</p> <ol> <li>Periodically check the status of Kubernetes until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, linkerd, metrics-server, prometheus, registry, storage] in the enabled block.</li> <li>Periodically check Kubernetes resources until all pods are Ready and Running.</li> <li>If it takes too long for the Pods to be ready, we observed that rebooting the machine may help.</li> </ol> <p>Then, create aliases to make the commands easier to access:</p> <pre><code>sudo snap alias microk8s.helm3 helm3\nsudo snap alias microk8s.linkerd linkerd\n</code></pre> <p>To validate that <code>linkerd</code> is working correctly, run:</p> <pre><code>linkerd check\n</code></pre> <p>To validate that the <code>metrics-server</code> is working correctly, run:</p> <pre><code>kubectl top pods --all-namespaces\n</code></pre> <p>and you should see a screen similar to the <code>top</code> command in Linux, showing the columns namespace, pod name, CPU (cores), and MEMORY (bytes).</p> <p>In case pods are not starting, check the information in the pod logs. For example, linkerd is sensitive to a proper /etc/resolv.conf syntax.</p> <pre><code>kubectl logs &lt;podname&gt; --namespace &lt;namespace&gt;\n</code></pre> <p>If the command shows an error message, also restarting the machine might help.</p> Stop, Restart, and Redeploy <p> Find below some additional commands you might need while you work with MicroK8s:</p> <pre><code>microk8s.stop # stop MicroK8s cluster (for instance, before powering off your computer)\nmicrok8s.start # start MicroK8s cluster\nmicrok8s.reset # reset infrastructure to a clean state\n</code></pre> <p>If the previous commands do not recover the MicroK8s cluster, you can redeploy it.</p> <p>If you want to keep the MicroK8s configuration, use:</p> <pre><code>sudo snap remove microk8s\n</code></pre> <p>If you need to completely drop MicroK8s and its configuration, use:</p> <pre><code>sudo snap remove microk8s --purge\nsudo apt-get remove --purge docker.io docker-buildx\n</code></pre> <p>IMPORTANT: After uninstalling MicroK8s, it is convenient to reboot the computer (the VM if you work on a VM, or the physical computer if you use a physical computer). Otherwise, there are system configurations that are not correctly cleaned, especially regarding port forwarding and firewall rules.</p> <p>After the reboot, redeploy as it is described in this section.</p>"},{"location":"deployment_guide/#13-deploy-teraflowsdn","title":"1.3. Deploy TeraFlowSDN","text":"<p>This section describes how to deploy the TeraFlowSDN controller on top of MicroK8s using the environment configured in the previous sections.</p> Install prerequisites <p></p> <pre><code>sudo apt-get install -y git curl jq\n</code></pre> Clone the Git repository of the TeraFlowSDN controller <p> Clone from the ETSI-hosted GitLab code repository:</p> <pre><code>mkdir ~/tfs-ctrl\ngit clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl\n</code></pre> <p>Important: The original H2020-TeraFlow project hosted on GitLab.com has been archived and will not receive further contributions/updates. 
Please clone from the ETSI-hosted GitLab code repository.</p> Checkout the appropriate Git branch <p> TeraFlowSDN controller versions can be found in the appropriate release tags and/or branches as described in Home &gt; Versions.</p> <p>By default, the branch master is checked out and points to the latest stable version of the TeraFlowSDN controller, while branch develop contains the latest developments and contributions under test and validation.</p> <p>To switch to the appropriate branch, run the following command, replacing <code>develop</code> with the name of the branch you want to deploy:</p> <pre><code>cd ~/tfs-ctrl\ngit checkout develop\n</code></pre> Prepare a deployment script with the deployment settings <p> Create a new deployment script, e.g., <code>my_deploy.sh</code>, adding the appropriate settings as follows. This section provides just an overview of the available settings. An example <code>my_deploy.sh</code> script is provided in the root folder of the project for your convenience, with a full description of all the settings.</p> <p>Note: The example <code>my_deploy.sh</code> script provides reasonable settings for deploying a functional and complete enough TeraFlowSDN controller, and a brief description of their meaning. To see extended descriptions, check the scripts in the <code>deploy</code> folder.</p> <pre><code>cd ~/tfs-ctrl\ntee my_deploy.sh &gt;/dev/null &lt;&lt; EOF\n# ----- TeraFlowSDN ------------------------------------------------------------\nexport TFS_REGISTRY_IMAGES=\"http://localhost:32000/tfs/\"\nexport TFS_COMPONENTS=\"context device ztp monitoring pathcomp service slice nbi webui load_generator\"\nexport TFS_IMAGE_TAG=\"dev\"\nexport TFS_K8S_NAMESPACE=\"tfs\"\nexport TFS_EXTRA_MANIFESTS=\"manifests/nginx_ingress_http.yaml\"\nexport TFS_GRAFANA_PASSWORD=\"admin123+\"\nexport TFS_SKIP_BUILD=\"\"\n\n# ----- CockroachDB ------------------------------------------------------------\nexport CRDB_NAMESPACE=\"crdb\"\nexport CRDB_EXT_PORT_SQL=\"26257\"\nexport CRDB_EXT_PORT_HTTP=\"8081\"\nexport CRDB_USERNAME=\"tfs\"\nexport CRDB_PASSWORD=\"tfs123\"\nexport CRDB_DATABASE=\"tfs\"\nexport CRDB_DEPLOY_MODE=\"single\"\nexport CRDB_DROP_DATABASE_IF_EXISTS=\"YES\"\nexport CRDB_REDEPLOY=\"\"\n\n# ----- NATS -------------------------------------------------------------------\nexport NATS_NAMESPACE=\"nats\"\nexport NATS_EXT_PORT_CLIENT=\"4222\"\nexport NATS_EXT_PORT_HTTP=\"8222\"\nexport NATS_REDEPLOY=\"\"\n\n# ----- QuestDB ----------------------------------------------------------------\nexport QDB_NAMESPACE=\"qdb\"\nexport QDB_EXT_PORT_SQL=\"8812\"\nexport QDB_EXT_PORT_ILP=\"9009\"\nexport QDB_EXT_PORT_HTTP=\"9000\"\nexport QDB_USERNAME=\"admin\"\nexport QDB_PASSWORD=\"quest\"\nexport QDB_TABLE_MONITORING_KPIS=\"tfs_monitoring_kpis\"\nexport QDB_TABLE_SLICE_GROUPS=\"tfs_slice_groups\"\nexport QDB_DROP_TABLES_IF_EXIST=\"YES\"\nexport QDB_REDEPLOY=\"\"\n\nEOF\n</code></pre> <p>The settings are organized in 4 sections:</p> <ul> <li>Section <code>TeraFlowSDN</code>:<ul> <li><code>TFS_REGISTRY_IMAGES</code> lets you specify the private Docker registry to be used; by default, we assume the Docker repository enabled in MicroK8s.</li> <li><code>TFS_COMPONENTS</code> specifies the components whose Docker images will be rebuilt, uploaded to the private Docker registry, and deployed in Kubernetes.</li> <li><code>TFS_IMAGE_TAG</code> defines the tag to be used for Docker images being rebuilt and uploaded to the private Docker registry. 
</li> <li><code>TFS_K8S_NAMESPACE</code> specifies the name of the Kubernetes namespace to be used for deploying the TFS components.</li> <li><code>TFS_EXTRA_MANIFESTS</code> lets you provide additional manifests to be applied to the Kubernetes environment during the deployment. A typical use case is to deploy ingress controllers, service monitors for Prometheus, etc.</li> <li><code>TFS_GRAFANA_PASSWORD</code> lets you specify the password you want to use for the <code>admin</code> user of the Grafana instance being deployed and linked to the Monitoring component.</li> <li><code>TFS_SKIP_BUILD</code>, if set to <code>YES</code>, prevents rebuilding the Docker images; that means the deploy script will redeploy existing Docker images without rebuilding/updating them.</li> </ul> </li> <li>Section <code>CockroachDB</code>: configures the deployment of the backend CockroachDB database. Check the example script <code>my_deploy.sh</code> for further details.</li> <li>Section <code>NATS</code>: configures the deployment of the backend NATS message broker. Check the example script <code>my_deploy.sh</code> for further details.</li> <li>Section <code>QuestDB</code>: configures the deployment of the backend QuestDB timeseries database. Check the example script <code>my_deploy.sh</code> for further details.</li> </ul> Confirm that MicroK8s is running <p></p> <p>Run the following command:</p> <pre><code>microk8s status\n</code></pre> <p>If it is reported <code>microk8s is not running, try microk8s start</code>, run the following command to start MicroK8s:</p> <pre><code>microk8s start\n</code></pre> <p>Confirm everything is up and running:</p> <ol> <li>Periodically check the status of Kubernetes until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, registry, storage] in the enabled block.</li> <li>Periodically check Kubernetes resources until all pods are Ready and Running.</li> </ol> Deploy TFS controller <p> First, source the deployment settings defined in the previous section. This way, you do not need to specify the environment variables in each and every command you execute to operate the TFS controller. Remember to re-source the file if you open new terminal sessions. 
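A quick way to verify that the settings are loaded in the current shell is to echo one of the exported variables (a simple sanity check based on the variables defined above):</p> <pre><code>source my_deploy.sh\necho $TFS_COMPONENTS   # should print the list of components to be deployed\n</code></pre> <p>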
Then, run the following command to deploy TeraFlowSDN controller on top of the MicroK8s Kubernetes platform.</p> <pre><code>cd ~/tfs-ctrl\nsource my_deploy.sh\n./deploy/all.sh\n</code></pre> <p>The script performs the following steps:</p> <ul> <li>Executes script <code>./deploy/crdb.sh</code> to automate deployment of CockroachDB database used by Context component.</li> <li>The script automatically checks if CockroachDB is already deployed.</li> <li>If there are settings instructing to drop the database and/or redeploy CockroachDB, it does the appropriate actions to honor them as defined in previous section.</li> <li>Executes script <code>./deploy/nats.sh</code> to automate deployment of NATS message broker used by Context component.</li> <li>The script automatically checks if NATS is already deployed.</li> <li>If there are settings instructing to redeploy the message broker, it does the appropriate actions to honor them as defined in previous section.</li> <li>Executes script <code>./deploy/qdb.sh</code> to automate deployment of QuestDB timeseries database used by Monitoring component.</li> <li>The script automatically checks if QuestDB is already deployed.</li> <li>If there are settings instructing to redeploy the timeseries database, it does the appropriate actions to honor them as defined in previous section.</li> <li>Executes script <code>./deploy/tfs.sh</code> to automate deployment of TeraFlowSDN.</li> <li>Creates the namespace defined in <code>TFS_K8S_NAMESPACE</code></li> <li>Creates secrets for CockroachDB, NATS, and QuestDB to be used by Context and Monitoring components.</li> <li>Builds the Docker images for the components defined in <code>TFS_COMPONENTS</code></li> <li>Tags the Docker images with the value of <code>TFS_IMAGE_TAG</code></li> <li>Pushes the Docker images to the repository defined in <code>TFS_REGISTRY_IMAGE</code></li> <li>Deploys the components defined in <code>TFS_COMPONENTS</code></li> <li>Creates the file <code>tfs_runtime_env_vars.sh</code> with the environment variables for the components defined in <code>TFS_COMPONENTS</code> defining their local host addresses and their port numbers.</li> <li>Applies extra manifests defined in <code>TFS_EXTRA_MANIFESTS</code> such as:<ul> <li>Creating an ingress controller listening at port 80 for HTTP connections to enable external access to the TeraFlowSDN WebUI, Grafana Dashboards, and Compute NBI interfaces.</li> <li>Deploying service monitors to enable monitoring the performance of the components, device drivers and service handlers.</li> </ul> </li> <li>Initialize and configure the Grafana dashboards (if Monitoring component is deployed)</li> <li>Report a summary of the deployment</li> <li>See Show Deployment and Logs</li> </ul>"},{"location":"deployment_guide/#14-webui-and-grafana-dashboards","title":"1.4. WebUI and Grafana Dashboards","text":"<p>This section describes how to get access to the TeraFlowSDN controller WebUI and the monitoring Grafana dashboards.</p> Access the TeraFlowSDN WebUI <p> If you followed the installation steps based on MicroK8s, you got an ingress controller installed that exposes on TCP port 80.</p> <p>Besides, the ingress controller defines the following reverse proxy paths (on your local machine):</p> <ul> <li><code>http://127.0.0.1/webui</code>: points to the WebUI of TeraFlowSDN.</li> <li><code>http://127.0.0.1/grafana</code>: points to the Grafana dashboards. This endpoint brings access to the monitoring dashboards of TeraFlowSDN. 
The credentials for the <code>admin</code>user are those defined in the <code>my_deploy.sh</code> script, in the <code>TFS_GRAFANA_PASSWORD</code> variable.</li> <li><code>http://127.0.0.1/restconf</code>: points to the Compute component NBI based on RestCONF. This endpoint enables connecting external software, such as ETSI OpenSourceMANO NFV Orchestrator, to TeraFlowSDN.</li> </ul> <p>Note: In the creation of the VM, a forward from host TCP port 8080 to VM's TCP port 80 is configured, so the WebUIs and REST APIs of TeraFlowSDN should be exposed on the endpoint <code>127.0.0.1:8080</code> of your local machine instead of <code>127.0.0.1:80</code>.</p>"},{"location":"deployment_guide/#15-show-deployment-and-logs","title":"1.5. Show Deployment and Logs","text":"<p>This section presents some helper scripts to inspect the status of the deployment and the logs of the components. These scripts are particularly helpful for troubleshooting during execution of experiments, development, and debugging.</p> Report the deployment of the TFS controller <p></p> <p>The summary report given at the end of the Deploy TFS controller procedure can be generated manually at any time by running the following command. You can avoid sourcing <code>my_deploy.sh</code> if it has been already done.</p> <pre><code>cd ~/tfs-ctrl\nsource my_deploy.sh\n./deploy/show.sh\n</code></pre> <p>Use this script to validate that all the pods, deployments, replica sets, ingress controller, etc. are ready and have the appropriate state, e.g., running for Pods, and the services are deployed and have appropriate IP addresses and port numbers.</p> Report the log of a specific TFS controller component <p></p> <p>A number of scripts are pre-created in the <code>scripts</code> folder to facilitate the inspection of the component logs. For instance, to dump the log of the Context component, run the following command. You can avoid sourcing <code>my_deploy.sh</code> if it has been already done.</p> <pre><code>source my_deploy.sh\n./scripts/show_logs_context.sh\n</code></pre>"},{"location":"development_guide/","title":"2. Development Guide","text":""},{"location":"development_guide/#21-configure-environment","title":"2.1. Configure Environment","text":""},{"location":"development_guide/#211-python","title":"2.1.1. PythonUpgrade the Ubuntu distribution <p>Skip this step if you already did it during the installation of your machine.</p> <pre><code>sudo apt-get update -y\nsudo apt-get dist-upgrade -y\n</code></pre> Install PyEnv dependencies <p></p> <pre><code>sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget \\\n curl llvm git libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev\n</code></pre> Install PyEnv <p></p> <p>We recommend installing PyEnv through PyEnv Installer. 
Below you can find the instructions, but we refer you to the link for updated instructions.</p> <pre><code>curl https://pyenv.run | bash\n# When finished, edit ~/.bash_profile // ~/.profile // ~/.bashrc as the installer proposes.\n# In general, it means to append the following lines to ~/.bashrc:\nexport PYENV_ROOT=\"$HOME/.pyenv\"\ncommand -v pyenv &gt;/dev/null || export PATH=\"$PYENV_ROOT/bin:$PATH\"\neval \"$(pyenv init -)\"\neval \"$(pyenv virtualenv-init -)\"\n</code></pre> <p>In case .bashrc is not linked properly to your profile, you may need to append the following line into your local .profile file:</p> <pre><code># Open ~/.profile and append this line:\n+source \"$HOME\"/.bashrc\n</code></pre> Restart the machine <p> Restart the machine for all the changes to take effect.</p> <pre><code>sudo reboot\n</code></pre> Install Python 3.9 over PyEnv <p></p> <p>ETSI TeraFlowSDN uses Python 3.9 by default. You should install the latest stable update of Python 3.9, i.e., avoid \"-dev\" versions. To find the latest version available in PyEnv, you can run the following command:</p> <pre><code>pyenv install --list | grep \" 3.9\"\n</code></pre> <p>At the time of writing, this command will output the following list:</p> <pre><code> 3.9.0\n 3.9-dev\n 3.9.1\n 3.9.2\n 3.9.4\n 3.9.5\n 3.9.6\n 3.9.7\n 3.9.8\n 3.9.9\n 3.9.10\n 3.9.11\n 3.9.12\n 3.9.13\n 3.9.14 \n 3.9.15\n 3.9.16 ** always select the latest version **\n</code></pre> <p>Therefore, the latest stable version is Python 3.9.16. To install this version, you should run:</p> <pre><code>pyenv install 3.9.16\n # This command might take some minutes depending on your Internet connection speed \n # and the performance of your machine.\n</code></pre> Create the Virtual Environment for TeraFlowSDN <p> The following commands create a virtual environment named as <code>tfs</code> using Python 3.9 and associate that environment with the current folder, i.e., <code>~/tfs-ctrl</code>. That way, when you are in that folder, the associated virtual environment will be used, thus inheriting the Python interpreter, i.e., Python 3.9, and the Python packages installed on it.</p> <pre><code>cd ~/tfs-ctrl\npyenv virtualenv 3.9.16 tfs\npyenv local 3.9.16/envs/tfs\n</code></pre> <p>After completing these commands, you should see in your prompt that now you're within the virtual environment <code>3.9.16/envs/tfs</code> on folder <code>~/tfs-ctrl</code>:</p> <pre><code>(3.9.16/envs/tfs) tfs@tfs-vm:~/tfs-ctrl$\n</code></pre> <p>In case that the correct pyenv does not get automatically activated when you change to the tfs-ctrl/ folder, then execute the following command:</p> <pre><code>cd ~/tfs-ctrl\npyenv activate 3.9.16/envs/tfs\n</code></pre> Install the basic Python packages within the virtual environment <p> From within the <code>3.9.16/envs/tfs</code> environment on folder <code>~/tfs-ctrl</code>, run the following commands to install the basic Python packages required to work with TeraFlowSDN.</p> <pre><code>cd ~/tfs-ctrl\n./install_requirements.sh\n</code></pre> <p>Some dependencies require to re-load the session, so log-out and log-in again.</p> Generate the Python code from the gRPC Proto messages and services <p></p> <p>The components, e.g., microservices, of the TeraFlowSDN controller, in general, use a gRPC-based open API to interoperate. All the protocol definitions can be found in sub-folder <code>proto</code> within the root project folder. 
For additional details on gRPC, visit the official web-page gRPC.</p> <p>In order to interact with the components, (re-)generate the Python code from the gRPC definitions by running the following command:</p> <pre><code>cd ~/tfs-ctrl\nproto/generate_code_python.sh\n</code></pre>","text":"<p>This section describes how to configure the Python environment to run experiments and develop code for the ETSI TeraFlowSDN controller. In particular, we use PyEnv to install the appropriate version of Python and manage the virtual environments.</p>"},{"location":"development_guide/#212-java-quarkus","title":"2.1.2. Java (Quarkus) <p>This section describes the steps needed to create a development environment for TFS components implemented in Java. Currently, the ZTP and Policy components have been developed in Java (version 11) and use the Quarkus framework, which enables kubernetes-native development.</p> Install JDK <p> To begin, make sure that you have Java installed and in the correct version:</p> <pre><code>java --version\n</code></pre> <p>If you don't have Java installed, you will get an error like the following:</p> <pre><code>Command 'java' not found, but can be installed with:\n\nsudo apt install default-jre # version 2:1.11-72build1, or\nsudo apt install openjdk-11-jre-headless # version 11.0.14+9-0ubuntu2\nsudo apt install openjdk-17-jre-headless # version 17.0.2+8-1\nsudo apt install openjdk-18-jre-headless # version 18~36ea-1\nsudo apt install openjdk-8-jre-headless # version 8u312-b07-0ubuntu1\n</code></pre> <p>In that case, use the following command to install the correct version:</p> <pre><code>sudo apt install openjdk-11-jre-headless\n</code></pre> <p>Otherwise, you should get something like the following:</p> <pre><code>openjdk 11.0.18 2023-01-17\nOpenJDK Runtime Environment (build 11.0.18+10-post-Ubuntu-0ubuntu120.04.1)\nOpenJDK 64-Bit Server VM (build 11.0.18+10-post-Ubuntu-0ubuntu120.04.1, mixed mode, sharing)\n</code></pre> Compiling and testing existing components <p> In the root directory of the existing Java components you will find an executable maven wrapper named <code>mvnw</code>. You could use this executable, which is already configured to work with the components, instead of your local maven installation. So, for example, if you want to compile the project, you would run the following:</p> <pre><code>./mvnw compile\n</code></pre> VS Code Quarkus plugin <p> In case you are using VS Code for development, we suggest installing the official Quarkus extension. The extension should be able to automatically find the current open project and integrate with the above <code>mvnw</code> maven wrapper, making it easier to control the maven lifecycle. Make sure that you open the specific component directory (i.e., <code>src/ztp</code> or <code>src/policy</code>) and not the general controller one (i.e., <code>src</code>).</p> New Java TFS component <p></p> <p>Sample Project</p> <p>If you want to create a new TFS component written in Java, you could generate a new Quarkus project based on the following project:</p> <p>TFS Sample Quarkus Project</p> <p>In that way, you should have most of the libraries you would need to integrate with the rest of the TFS Components. Feel free, however, to add or remove libraries depending on your needs.</p> <p>Initial setup</p> <p>If you used the sample project above, you should have a project with a basic structure. 
However, there are some steps that you should take before starting development.</p> <p>First, make sure that you copy the protobuf files, which are found in the root directory of the TFS SDN controller, to the <code>new-component/src/main/proto</code> directory.</p> <p>Next, you should create the following files:</p> <ul> <li><code>new-component/.gitlab-ci.yml</code></li> <li><code>new-component/Dockerfile</code></li> <li><code>new-component/src/resources/application.yaml</code></li> </ul> <p>We suggest copying the respective files from existing components (Automation and Policy) and changing them according to your needs.</p>","text":""},{"location":"development_guide/#213-java-maven","title":"2.1.3. Java (Maven) <p>Page under construction</p>","text":""},{"location":"development_guide/#214-rust","title":"2.1.4. Rust <p>Page under construction</p>","text":""},{"location":"development_guide/#215-erlang","title":"2.1.5. Erlang <p>This section describes how to configure the Erlang environment to run experiments and develop code for the ETSI TeraFlowSDN controller.</p> <p>First, we need to install Erlang. There are multiple ways; for development, we will be using ASDF, a tool that allows installing multiple versions of Erlang at the same time and switching from one version to the other at will.</p> <ul> <li>First, install any missing dependencies:</li> </ul> <pre><code>sudo apt install curl git autoconf libncurses-dev build-essential m4 libssl-dev \n</code></pre> <ul> <li>Download the ASDF tool to the local account:</li> </ul> <pre><code>git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.10.2\n</code></pre> <ul> <li>Make ASDF activate on login by adding these lines at the end of the <code>~/.bashrc</code> file:</li> </ul> <pre><code>. $HOME/.asdf/asdf.sh\n. $HOME/.asdf/completions/asdf.bash\n</code></pre> <ul> <li>Logout and log back in to activate ASDF.</li> </ul> <p>ASDF supports multiple tools by installing their corresponding plugins.</p> <ul> <li>Install the ASDF plugin for Erlang:</li> </ul> <pre><code>asdf plugin add erlang https://github.com/asdf-vm/asdf-erlang.git\n</code></pre> <ul> <li>Install a version of Erlang:</li> </ul> <pre><code>asdf install erlang 24.3.4.2\n</code></pre> <ul> <li>Activate Erlang locally for the TFS controller. This will create a local file called <code>.tool-versions</code> defining which version of the tools to use when running under the current directory:</li> </ul> <pre><code>cd tfs-ctrl/\nasdf local erlang 24.3.4.2\n</code></pre> <p>Erlang projects use a build tool called rebar3. It is used to manage project dependencies, compile a project, and generate project releases.</p> <ul> <li>Install rebar3 locally from source:</li> </ul> <pre><code>cd ~\ngit clone https://github.com/erlang/rebar3.git\ncd rebar3\nasdf local erlang 24.3.4.2\n./bootstrap\n./rebar3 local install\n</code></pre> <ul> <li>Update <code>~/.bashrc</code> to use rebar3 by adding this line at the end:</li> </ul> <pre><code>export PATH=$HOME/.cache/rebar3/bin:$PATH\n</code></pre> <ul> <li>Logout and log back in.</li> </ul>","text":""},{"location":"development_guide/#216-kotlin","title":"2.1.6. Kotlin <p>This section describes the steps needed to establish a development environment for TFS (TeraFlowSDN) components implemented in Kotlin. 
Currently, the <code>Gateway</code> component stands as the sole component developed in Kotlin.</p> Install Kotlin <p> To begin, make sure that you have Kotlin installed and check its current version:</p> <pre><code>kotlin -version\n</code></pre> <p>If you don't have Kotlin installed, you will get an error like the following:</p> <pre><code>Command 'kotlin' not found, but can be installed with:\nsudo snap install --classic kotlin\n</code></pre> <p>In that case, use the following command to install it:</p> <pre><code> sudo snap install --classic kotlin\n</code></pre> <p>Currently, the recommended version is 1.6.21, which uses Java Runtime Environment (JRE) version 11.</p> Compiling and testing existing components <p> To compile a Kotlin project using Gradle, similarly to using the Maven wrapper (mvnw) for Java projects, you can use the Gradle wrapper (gradlew) within the root directory of your Kotlin component, specifically the gateway directory.</p> <p>Navigate to the gateway directory within your Kotlin project. Ensure that it contains the gradlew script along with the gradle directory. Then, create a directory named <code>proto</code> and copy all the files with extension <code>.proto</code> into it, as follows:</p> <pre><code>mkdir proto\ncp ../../../proto/*.proto ./proto \n</code></pre> <p>For building the application, open a terminal or command prompt, navigate to the gateway directory, and run the following command:</p> <pre><code>./gradlew build\n</code></pre> <p>The following command runs the gateway application:</p> <pre><code>./gradlew runServer \n</code></pre> New Kotlin TFS component <p></p> <p>Sample Project</p> <p>If you want to create a new TFS component written in Kotlin, you could generate a Kotlin project using <code>gradle</code>. The recommended version is 7.1. Follow the Gradle guide for its installation. For building the project, follow this link instead.</p> <p>From inside the new project directory, run the init task using the following command in a terminal: <code>gradle init</code>. </p> <p>The output will look like this:</p> <pre><code>$ gradle init\n\nSelect type of project to generate:\n 1: basic\n 2: application\n 3: library\n 4: Gradle plugin\nEnter selection (default: basic) [1..4] 2\n\nSelect implementation language:\n 1: C++\n 2: Groovy\n 3: Java\n 4: Kotlin\n 5: Scala\n 6: Swift\nEnter selection (default: Java) [1..6] 4\n\nSelect build script DSL:\n 1: Groovy\n 2: Kotlin\nEnter selection (default: Groovy) [1..2] 1\n\nProject name (default: demo):\nSource package (default: demo):\n\n\nBUILD SUCCESSFUL\n2 actionable tasks: 2 executed\n</code></pre> <p>Initial setup</p> <p>The <code>gradle init</code> command generates the new project. </p> <p>First, ensure the protobuf files are copied from the root directory of the TFS SDN controller. Run the following command in the directory of the new project:</p> <pre><code>mkdir proto \ncp TFS/project/root/path/proto/*.proto ./proto/\n</code></pre> <p>The build script file (<code>build.gradle</code> or <code>build.gradle.kts</code>, depending on the DSL you selected) is fundamental as it manages dependencies. Adjust it to add external libraries. </p> <p>Next, you should create the following files:</p> <ol> <li><code>new-component/.gitlab-ci.yml</code></li> <li><code>new-component/Dockerfile</code></li> </ol> <p>We recommend leveraging the structures and configurations found in the files of existing components for inspiration.</p> <p>Docker Container</p> <p>This project operates with Docker containers, so make sure you also produce a container version of your component. 
To generate the container version of the project, modify the 'new-component/Dockerfile.' Execute the following command from the project's root directory:</p> <pre><code>docker build -t new-image -f new-component/Dockerfile ./\n</code></pre>","text":""},{"location":"development_guide/#22-configure-vscode","title":"2.2. Configure VScode","text":"Install VSCode and the required extensions <p>If not already done, install VSCode and the \"Remote SSH\" extension on your local machine, not in the VM.</p> <p>Note: \"Python\" extension is not required here. It will be installed later on the VSCode server running on the VM.</p> Configure the \"Remote SSH\" extension <p></p> <ul> <li>Go to left icon \"Remote Explorer\"</li> <li>Click the \"gear\" icon next to \"SSH TARGETS\" on top of \"Remote Explorer\" bar</li> <li>Choose to edit \"&lt;...&gt;/.ssh/config\" file (or equivalent)</li> <li>Add the following entry (assuming previous port forwarding configuration):</li> </ul> <pre><code>Host TFS-VM\n HostName 127.0.0.1\n Port 2200\n ForwardX11 no\n User tfs\n</code></pre> <ul> <li>Save the file</li> <li>An entry \"TFS-VM\" should appear on \"SSH TARGETS\".</li> </ul> Connect VSCode to the VM through \"Remote SSH\" extension <p></p> <ul> <li>Right-click on \"TFS-VM\"</li> <li>Select \"Connect to Host in Current Window\"</li> <li>Reply to the questions asked</li> <li>Platform of the remote host \"TFS-VM\": Linux</li> <li>\"TFS-VM\" has fingerprint \"\". Do you want to continue?: Continue <li>Type tfs user's password: tfs123</li> <li>You should be now connected to the TFS-VM.</li> <p>Note: if you get a connection error message, the reason might be due to wrong SSH server fingerprint. Edit file \"&lt;...&gt;/.ssh/known_hosts\" on your local user account, check if there is a line starting with \"[127.0.0.1]:2200\" (assuming previous port forwarding configuration), remove the entire line, save the file, and retry connection.</p> Add SSH key to prevent typing the password every time <p> This step creates an SSH key in the VM and installs it on the VSCode to prevent having to type the password every time.</p> <ul> <li>In VSCode (connected to the VM), click menu \"Terminal &gt; New Terminal\"</li> <li>Run the following commands on the VM's terminal through VSCode</li> </ul> <pre><code>ssh-keygen -t rsa -b 4096 -f ~/.ssh/tfs-vm.key\n # leave password empty\nssh-copy-id -i ~/.ssh/tfs-vm.key.pub tfs@10.0.2.10\n # tfs@10.0.2.10's password: &lt;type tfs user's password: tfs123&gt;\nrm .ssh/known_hosts \n</code></pre> <ul> <li>In VSCode, click left \"Explorer\" panel to expand, if not expanded, and click \"Open Folder\" button.</li> <li>Choose \"/home/tfs/\"</li> <li>Type tfs user's password when asked</li> <li>Trust authors of the \"/home/tfs [SSH: TFS-VM]\" folder when asked</li> <li>Right click on the file \"tfs-vm.key\" in the file explorer</li> <li>Select \"Download...\" option</li> <li>Download the file into your user's accout \".ssh\" folder</li> <li> <p>Delete files \"tfs-vm.key\" and \"tfs-vm.key.pub\" on the TFS-VM.</p> </li> <li> <p>In VSCode, click left \"Remote Explorer\" panel to expand</p> </li> <li>Click the \"gear\" icon next to \"SSH TARGETS\" on top of \"Remote Explorer\" bar</li> <li>Choose to edit \"&lt;...&gt;/.ssh/config\" file (or equivalent)</li> <li>Find entry \"Host TFS-VM\" and update it as follows:</li> </ul> <pre><code>Host TFS-VM\n HostName 127.0.0.1\n Port 2200\n ForwardX11 no\n User tfs\n IdentityFile \"&lt;path to the downloaded identity private key file&gt;\"\n</code></pre> <ul> 
<li>Save the file</li> <li>From now on, VSCode will use the identity file to connect to the TFS-VM instead of the user's password.</li> </ul> Install VSCode Python Extension (in VSCode server) <p> This step installs the Python extension in the VSCode server running in the VM.</p> <ul> <li>In VSCode (connected to the VM), click the left button \"Extensions\"</li> <li>Search for the \"Python\" extension in the extension Marketplace.</li> <li> <p>Install the official \"Python\" extension released by Microsoft.</p> <ul> <li>By default, since you're connected to the VM, it will be installed in the VSCode server running in the VM.</li> </ul> </li> <li> <p>In VSCode (connected to the VM), click the left button \"Explorer\"</p> </li> <li>Press \"Ctrl+Shift+P\" and type \"Python: Select Interpreter\". Select the option \"Python: 3.9.16 64-bit ('tfs')\"</li> </ul> Define environment variables for VSCode <p> The source code in the TFS controller project is hosted in folder <code>src/</code>. To help VSCode find the Python modules and packages, add the following file into your workspace root folder:</p> <pre><code>echo \"PYTHONPATH=./src\" &gt;&gt; ~/tfs-ctrl/.env\n</code></pre>"},{"location":"development_guide/#23-develop-a-component-wip","title":"2.3. Develop A Component (WIP)","text":"<p>Page under construction</p>"},{"location":"features_and_bugs/","title":"4. Features and Bugs","text":"<p>This section describes the procedures to request new features and enhancements, report bugs, and the workflows implemented to manage them.</p> <ul> <li>Feature Request Procedure</li> <li>Bug Report Procedure</li> <li>Feature LifeCycle</li> </ul>"},{"location":"features_and_bugs/#41-feature-request-procedure","title":"4.1. Feature Request Procedure","text":"<p>Project features go through a discussion and approval process. To propose a New Feature, TFS uses the issues on its GitLab code repository.</p> <p>Important: A feature request is about functionality, not about implementation details.</p> <ul> <li>Describe WHAT you are proposing, and WHY it is important.</li> <li>DO NOT describe HOW to do it. 
This is done when the new feature is approved by TSC by populating the design details.</li> </ul> <p>Two kind of requests are considered in this procedure:</p> <ul> <li>New Feature: a big change that potentially affects a number of components and requires an appropriate design phase.</li> <li>Enhancement: a relatively small change enhancing TFS that does not require a design phase.</li> </ul> Steps: <p></p> <ol> <li> <p>Go to New Issue page <code>https://labs.etsi.org/rep/tfs/controller/-/issues/new</code>.</p> <ul> <li>You need to be authenticated.</li> </ul> </li> <li> <p>Create a New Issue for your feature</p> <ul> <li>Title: A concise high level description of your feature (see some other examples in GitLab)</li> <li>Type: Issue</li> <li>Description: Choose the \"new-feature\" or \"enhancement\" project templates and fill-in the auto-generated template describing the feature/enhancement.</li> <li>Labels:<ul> <li>Select the type of request: <code>type::new-feature</code> / <code>type::enhancement</code></li> <li>If you foresee the components affected by the request, pick the appropriate labels for them.<ul> <li>Component labels have the form <code>comp-&lt;component-name&gt;</code>.</li> </ul> </li> <li>PLEASE: Do not set other types of labels (to be set by TSC).</li> </ul> </li> <li>PLEASE: Do not set the following fields (to be set by TSC): EPIC, Assignee, Milestone, Weight, Due Date</li> <li>Submit the Issue</li> </ul> </li> <li> <p>Interact with the TSC and the Community through the issue.</p> <ul> <li>TSC will review your request. If it makes sense and its purpose is clear, it will be approved. Otherwise, TSC will provide questions for clarification.</li> </ul> </li> </ol> Designing a Feature: <p></p> <p>Once a feature has been approved, the design phase starts. The design should be included within the description of the feature (GitLab issue description) by concatenating the Design Feature Template (see below) and properly filling it in. In case the feature description becomes too long, attached files could be included to the feature.</p> <p>The design is expected to be socialized with the relevant stakeholders (e.g. MDLs and TSC). Dedicated slots can be allocated in the TECH calls on a per-request basis to discuss and refine it.</p> <p>For writing the design, you can check the design of existing features or use the design template below.</p> Templates: <p></p> <p>New feature / Enhancement request template:</p> <pre><code># Proposers\n\n- name-of-proposer-1 (institution-of-proposer-1)\n- name-of-proposer-2 (institution-of-proposer-2)\n...\n\n# Description\n\nDescribe your proposal in ~1000 characters.\nYou can reference external content listed in section \"References\" as [Ref-1].\n\n# Demo or definition of done\n\nDescribe which high level conditions needs to be fulfilled to demonstrate this feature implementation is completed.\nYou can reference external content (example, demo paper) listed in section \"References\" as [Ref-2].\n\n# References\n\n1. [Reference name](https://reference-url)\n2. Author1, Author2, Author3, et. 
al., \u201cMy demo using feature,\u201d in Conference-Name Demo Track, 20XX.\n</code></pre> <p>Feature design Template:</p> <pre><code># Feature Design\n\n## Clarifications to Expected Behavior Changes\n\nExisting component logic and workflows between components that need to be altered to realize this feature.\nRemember to justify these changes.\n...\n\n## References\n\nList of relevant references for this feature.\n...\n\n## Assumptions\n\nEnumerate the assumptions for this feature, e.g., fix XXX is implemented and merged, specific configurations, specific\ncomponents deployed.\n...\n\n## Impacted Components\n\nList of impacted components: Context, Device, Service, PathComp, Slice, Monitoring, Automation, Policy, Compute, etc.\nJust an enumeration, elaboration of impacts is done below.\n\n## Component1 Impact\n\nDescribe impact (changes) on component1.\n...\n\n## Component2 Impact\n\nDescribe impact (changes) on component2.\n...\n\n## Testing\n\nDescribe test sets (unitary and integration) to be carried out.\nThis section can include/reference external experiments, demo papers, etc.\n...\n</code></pre>"},{"location":"features_and_bugs/#42-bug-report-procedure","title":"4.2. Bug Report Procedure","text":"<p>Project bugs go through a review, confirmation, and resolution process. To report a Bug, TFS uses the issues on its GitLab code repository.</p> <p>Important: New bugs must be properly documented. Please, provide details on:</p> <ul> <li>the details on the deployment environment (Operating System, MicroK8s, etc.)</li> <li>the TeraFlowSDN version (or branch/commit)</li> <li>the TFS deployment settings (components, particular configurations, etc.)</li> <li>the particular sequence of actions that resulted in the bug</li> <li>the TFS components affected by the bug (if you know them)</li> <li>the expected behavior (if you know it)</li> </ul> <p>Without this minimal information, it will/might be difficult to reproduce and resolve the bug, as well as validating the completeness of the solution.</p> Steps: <p></p> <ol> <li> <p>Go to New Issue page <code>https://labs.etsi.org/rep/tfs/controller/-/issues/new</code>.</p> <ul> <li>You need to be authenticated.</li> </ul> </li> <li> <p>Create a New Issue for your bug</p> <ul> <li>Title: A concise high level description of your bug (see some other examples in GitLab)</li> <li>Type: Issue</li> <li>Description: Choose the \"bug\" project template and fill-in the auto-generated template describing the bug.</li> <li>Labels:<ul> <li>Select the type of request: <code>type::bug</code></li> <li>If you foresee the components affected by the bug, pick the appropriate labels for them.<ul> <li>Component labels have the form <code>comp-&lt;component-name&gt;</code>.</li> </ul> </li> <li>PLEASE: Do not set other types of labels (to be set by TSC).</li> </ul> </li> <li>PLEASE: Do not set the following fields (to be set by TSC): EPIC, Assignee, Milestone, Weight, Due Date</li> <li>Submit the Issue</li> </ul> </li> <li> <p>Interact with the TSC and the Community through the issue.</p> <ul> <li>TSC will review your reported bug and try to reproduce it. If we succeed in reproducing it, we will mark it as confirmed, and include its resolution in the development plans. Otherwise, TSC will provide questions for clarification.</li> </ul> </li> </ol>"},{"location":"features_and_bugs/#43-feature-lifecycle","title":"4.3. 
Feature LifeCycle","text":"<p>Once approved, a feature request could transition through the following steps:</p> <ul> <li>Approved: Feature approved by TSC; design phase can start.</li> <li>Design: Feature under design; discussing on HOW to do it.</li> <li>Development: Design approved; feature under development/implementation.</li> <li>Testing and Review: Feature implemented and under review and testing by the developers and the community.</li> <li>Completed: Testing and review completed, and feature merged.</li> <li>Abandoned: Feature abandoned.</li> </ul> <p>Important: An approved feature is not a guarantee for implementation. Implementing a feature requires resources, and resources come from the members, participants and individual contributors integrating the TFS Community, which might have prioritized the development of other features based on their own interests and the interests expressed by the LG, the TSC, and the MDGs.</p> <p>Once a Feature is mature, e.g., Testing, Review, Completed, it can be accepted for inclusion in a specific Release. This is accomplished by including the issue ticket in the respective EPIC \"ReleaseX.Y\". For instance, to see the Features included in Release X.Y, check EPIC \"ReleaseX.Y\".</p>"},{"location":"run_experiments/","title":"3. Run Experiments","text":"<p>This section walks you through the process of running experiments in TeraFlowSDN on top of a machine running MicroK8s Kubernetes platform. The guide includes the details on configuring the Python environment, some basic commands you might need, configuring the network topology, and executing different experiments.</p> <p>Note that the steps followed here are likely to work regardless of the platform where TeraFlowSDN is deployed over.</p> <p>Note also that this guide will keep growing with the new experiments and demonstrations that are being carried out involving the ETSI TeraFlowSDN controller.</p> <p>Important: The NBIs, workflows and drivers have to be considered as experimental. The configuration and monitoring capabilities they support are limited, partially implemented, or tested only with specific laboratory equipment. Use them with care.</p> <ul> <li> <p>3.1. OFC'22 Demo</p> <ul> <li>Bootstrapping of devices</li> <li>Monitoring of device endpoints</li> <li>Management of L3VPN services</li> <li>Integration with ETSI OpenSourceMANO</li> </ul> </li> <li> <p>3.2. ECOC'22 Demo</p> <ul> <li>Disjoint DC-2-DC L2VPN Service management</li> <li>Integration with ETSI OpenSourceMANO</li> </ul> </li> <li> <p>3.3. OECC-PSC'22 Demo (Work In Progress)</p> <ul> <li>Interdomain Slices</li> </ul> </li> <li> <p>3.4. NFV-SDN'22 Demo (Work In Progress)</p> <ul> <li>DLT-based context sharing</li> <li>DLT-based Interdomain Slices with SLAs</li> </ul> </li> </ul>"},{"location":"run_experiments/#31-ofc22-demo","title":"3.1. OFC'22 Demo","text":"<p>This functional test reproduces the live demonstration Demonstration of Zero-touch Device and L3-VPN Service Management Using the TeraFlow Cloud-native SDN Controller carried out at OFC'22 / Open access.</p> <p>The main features demonstrated are:</p> <ul> <li>Bootstrapping of devices</li> <li>Monitoring of device endpoints</li> <li>Management of L3VPN services</li> <li>Integration with ETSI OpenSourceMANO</li> </ul> Functional test folder <p></p> <p>This functional test can be found in folder <code>./src/tests/ofc22/</code>.</p> Execute with real devices <p></p> <p>This functional test is designed to operate both with real and emulated devices. 
By default, emulated devices are used; however, if you have access to real devices, you can create/modify the files <code>./src/tests/ofc22/tests/Objects.py</code> and <code>./src/tests/ofc22/tests/Credentials.py</code> to point to your devices, and map to your own network topology. Otherwise, you can modify the <code>./src/tests/ofc22/tests/descriptors_emulated.json</code> that is designed to be uploaded through the WebUI instead of using the command line scripts. Note that the default scenario assumes devices R2 and R4 are always emulated, while devices R1, R3, and O1 can be configured as emulated or real devices.</p> <p>Important: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver, have to be considered as experimental. The configuration and monitoring capabilities they support are limited or partially implemented/tested. Use them with care.</p> Deployment and Dependencies <p></p> <p>To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN controller instance as described in the Tutorial: Deployment Guide, and you configured the Python environment as described in Tutorial: Development Guide &gt; Configure Environment &gt; Python.</p> Access to the WebUI and Dashboard <p></p> <p>When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards as described in Tutorial: Deployment Guide &gt; WebUI and Grafana Dashboards</p> <p>Notes:</p> <ul> <li>the default credentials for the Grafana Dashboiard is user/pass: <code>admin</code>/<code>admin123+</code>.</li> <li>in Grafana, you will find the L3-Monitorng in the Starred dashboards section.</li> </ul> Test execution <p></p> <p>Before executing the tests, we need to prepare a few things.</p> <p>First, you need to make sure that you have all the gRPC-generate code in your folder. To do so, run:</p> <pre><code>proto/generate_code_python.sh\n</code></pre> <p>Then, it is time to deploy TeraFlowSDN with the correct specification for this scenario. Make sure to load your deployment variables for this scenario by:</p> <pre><code>source src/tests/ofc22/deploy_specs.sh\n</code></pre> <p>Then, you need to deploy the components by running:</p> <pre><code>./deploy/all.sh\n</code></pre> <p>After the deployment is finished, you need to load the environment variables to support the execution of the tests by:</p> <pre><code>source tfs_runtime_env_vars.sh\n</code></pre> <p>To execute this functional test, four main steps needs to be carried out:</p> <ol> <li>Device bootstrapping</li> <li>L3VPN Service creation</li> <li>L3VPN Service removal</li> <li>Cleanup</li> </ol> <p>Upon the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. If there is some error during the execution, you should see a detailed report on the error. See the troubleshooting section if needed.</p> <p>You can check the logs of the different components using the appropriate <code>scripts/show_logs_[component].sh</code> scripts after you execute each step.</p> <p>There are two ways to execute the functional tests, running all the tests with a single script or running each test independently. In the following we start with the first option, then we comment on how to run each test independently.</p> <p>Running all tests with a single script</p> <p>We have a script that executes all the steps at once. It is meant for being used to test if all components involved in this scenario are working correct. 
To run all the functional tests, you can run:</p> <pre><code>src/tests/ofc22/run_tests.sh\n\n</code></pre> <p>The following sections explain each one of the steps.</p> <p>Device bootstrapping</p> <p>This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The expected results are:</p> <ul> <li>The devices to be added into the Topology.</li> <li>The devices to be pre-configured and initialized as ENABLED by the Automation component.</li> <li>The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to automatically start.</li> <li>The links to be added to the topology.</li> </ul> <p>To run this step, you can do it from the WebUI by uploading the file <code>./ofc22/tests/descriptors_emulated.json</code> that contains the descriptors of the contexts, topologies, devices, and links, or by executing the script:</p> <pre><code>./src/tests/ofc22/run_test_01_bootstrap.sh\n</code></pre> <p>When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you should see the monitoring data being plotted and updated every 5 seconds (by default). Given that there is no service configured, you should see a 0-valued flat plot.</p> <p>In the WebUI, select the admin Context. Then, in the Devices tab you should see that 5 different emulated devices have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the Services tab you should see that there is no service created. Note here that the emulated devices produce synthetic randomly-generated monitoring data and do not represent any particular services configured.</p> <p>L3VPN Service creation</p> <p>This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance.</p> <p>To run this step, execute the script:</p> <pre><code>./src/tests/ofc22/run_test_02_create_service.sh\n</code></pre> <p>When the script finishes, check the WebUI Services tab. You should see that two services have been created, one for the optical layer and another for the packet layer. Besides, you can check the Devices tab to see the configuration rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured, you should see the plots with the monitored data for the device. By default, device R1-EMU is selected.</p> <p>L3VPN Service removal</p> <p>This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock OSM instance.</p> <p>To run this step, execute the script:</p> <pre><code>./src/tests/ofc22/run_test_03_delete_service.sh\n</code></pre> <p>or delete the L3NM service from the WebUI.</p> <p>When the script finishes, check the WebUI Services tab. You should see that the two services have been removed. Besides, in the Devices tab you can see that the appropriate configuration rules have been deconfigured. In the Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again.</p> <p>Cleanup</p> <p>This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness. To run this step, execute the script:</p> <pre><code>./src/tests/ofc22/run_test_04_cleanup.sh\n</code></pre> <p>When the script finishes, check the WebUI Devices tab, you should see that the devices have been removed. 
Besides, in the Services tab you can see that the \"admin\" Context has no services given that that context has been removed.</p>"},{"location":"run_experiments/#32-ecoc22-demo","title":"3.2. ECOC'22 Demo","text":"<p>This functional test reproduces the experimental assessment of Experimental Demonstration of Transport Network Slicing with SLA Using the TeraFlowSDN Controller presented at ECOC'22 / IEEEXplore.</p> <p>The main features demonstrated are:</p> <ul> <li>Disjoint DC-2-DC L2VPN Service management</li> <li>Integration with ETSI OpenSourceMANO</li> </ul> Functional test folder <p></p> <p>This functional test can be found in folder <code>./src/tests/ecoc22/</code>.</p> Execute with real devices <p></p> <p>This functional test has only been tested with emulated devices; however, if you have access to real devices, you can modify the files <code>./src/tests/ecoc22/tests/Objects.py</code> and <code>./src/tests/ecoc22/tests/Credentials.py</code> to point to your devices, and map to your network topology. Otherwise, you can modify the <code>./src/tests/ecoc22/tests/descriptors_emulated.json</code> that is designed to be uploaded through the WebUI instead of using the command line scripts.</p> <p>Important: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver, have to be considered as experimental. The configuration and monitoring capabilities they support are limited or partially implemented/tested. Use them with care.</p> Deployment and Dependencies <p></p> <p>To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN controller instance as described in the Tutorial: Deployment Guide, and you configured the Python environment as described in Tutorial: Development Guide &gt; Configure Environment &gt; Python.</p> Access to the WebUI <p></p> <p>When the deployment completes, you can connect to the TeraFlowSDN WebUI as described in Tutorial: Deployment Guide &gt; WebUI and Grafana Dashboards</p> <p>Notes:</p> <ul> <li>this experiment does not make use of Monitoring, so Grafana is not used.</li> <li>the default credentials for the Grafana Dashboard is user/pass: <code>admin</code>/<code>admin123+</code>.</li> <li>this functional test does not involve the Monitoring component, so no monitoring data is plotted in Grafana.</li> </ul> Test execution <p></p> <p>Before executing the tests, we need to prepare a few things.</p> <p>First, you need to make sure that you have all the gRPC-generate code in your folder. To do so, run:</p> <pre><code>proto/generate_code_python.sh\n</code></pre> <p>Second, it is time to deploy TeraFlowSDN with the correct specification for this scenario. Make sure to load your deployment variables for this scenario by:</p> <pre><code>source src/tests/ecoc22/deploy_specs.sh\n</code></pre> <p>Then, you need to deploy the components by running:</p> <pre><code>./deploy/all.sh\n</code></pre> <p>After the deployment is finished, you need to load the environment variables to support the execution of the tests by:</p> <pre><code>source tfs_runtime_env_vars.sh\n</code></pre> <p>To execute this functional test, four main steps needs to be carried out:</p> <ol> <li>Device bootstrapping</li> <li>L2VPN Slice and Services creation</li> <li>L2VPN Slice and Services removal</li> <li>Cleanup</li> </ol> <p>Upon the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. 
If there is some error during the execution, you should see a detailed report on the error. See the troubleshooting section if needed.</p> <p>You can check the logs of the different components using the appropriate <code>scripts/show_logs_[component].sh</code> scripts after you execute each step.</p> <p>Device bootstrapping</p> <p>This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The expected results are:</p> <ul> <li>The devices to be added into the Topology.</li> <li>The devices to be pre-configured and initialized as ENABLED by the Automation component.</li> <li>The links to be added to the topology.</li> </ul> <p>To run this step, you can do it from the WebUI by uploading the file <code>./src/tests/ecoc22/tests/descriptors_emulated.json</code> that contains the descriptors of the contexts, topologies, devices, and links, or by executing the <code>./src/tests/ecoc22/run_test_01_bootstrap.sh</code> script.</p> <p>In the WebUI, select the admin Context. Then, in the Devices tab you should see that 5 different emulated devices have been created and activated: 4 packet routers, and 1 optical Open Line System (OLS) controller. Besides, in the Services tab you should see that there is no service created. </p> <p>L2VPN Slice and Services creation</p> <p>This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance.</p> <p>To run this step, execute the <code>./src/tests/ecoc22/run_test_02_create_service.sh</code> script.</p> <p>When the script finishes, check the WebUI Slices and Services tab. You should see that, for the connectivity service requested by MockOSM, one slice has been created, three services have been created (two for the optical layer and another for the packet layer). Note that the two services for the optical layer correspond to the primary (service_uuid ending with \":0\") and the backup (service_uuid ending with \":1\") services. Each of the services indicates the connections and sub-services that are supporting them. Besides, you can check the Devices tab to see the configuration rules that have been configured in each device.</p> <p>L2VPN Slice and Services removal</p> <p>This step deconfigures the previously created slices and services emulating the request an OSM WIM would make by means of a Mock OSM instance.</p> <p>To run this step, execute the <code>./src/tests/ecoc22/run_test_03_delete_service.sh</code> script, or delete the slice from the WebUI.</p> <p>When the script finishes, check the WebUI Slices and Services tab. You should see that the slice and the services have been removed. Besides, in the Devices tab you can see that the appropriate configuration rules have been deconfigured.</p> <p>Cleanup</p> <p>This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness.</p> <p>To run this step, execute the <code>./src/tests/ecoc22/run_test_04_cleanup.sh</code> script.</p> <p>When the script finishes, check the WebUI Devices tab, you should see that the devices have been removed. Besides, in the Slices and Services tab you can see that the admin Context has no services given that that context has been removed.</p>"},{"location":"run_experiments/#33-oecc-psc22-demo-work-in-progress","title":"3.3. 
OECC-PSC'22 Demo (Work In Progress)","text":"<p>Page under construction.</p> <p>The main features demonstrated are:</p> <ul> <li>Interdomain Slices</li> </ul>"},{"location":"run_experiments/#34-nfv-sdn22-demo-work-in-progress","title":"3.4. NFV-SDN'22 Demo (Work In Progress)","text":"<p>Page under construction.</p> <p>The main features demonstrated are:</p> <ul> <li>DLT-based context sharing</li> <li>DLT-based Interdomain Slices with SLAs</li> </ul>"},{"location":"supported_sbis_and_network_elements/","title":"5. Supported SBIs and Network Elements","text":"<p>This section summarizes the SBI drivers supported by the TeraFlowSDN controller to interoperate with underlying network equipment and intermediate controllers.</p> <ul> <li>5.1. Emulated</li> <li>5.2. NetConf OpenConfig</li> <li>5.3. gNMI OpenConfig</li> <li>5.4. ONF Transport API</li> <li>5.5. P4</li> <li>5.6. Infinera IPM XR</li> <li>5.7. IETF L2VPN</li> <li>5.8. IETF ACTN</li> <li>5.9. NetConf OpenConfig Optical</li> <li>5.10. Optical TFS</li> <li>5.11. MicroWave</li> </ul>"},{"location":"supported_sbis_and_network_elements/#51-emulated","title":"5.1. Emulated","text":"<p>Documentation in progress</p> <p>This driver is provided for testing and debugging purposes. It implements an accept-anything behaviour and maintains an in-memory database with configured rules. It implements support for synthetic telemetry streaming data that is activated/deactivated according to enabled/disabled device endpoints.</p> Device Type: <p></p> <ul> <li>Software Emulated within TeraFlowSDN's Device component.</li> </ul> Supported features: <p></p> <ul> <li>Configuration</li> <li>Monitoring (synthetic)</li> </ul> Tested Devices/Controllers: <p> Not applicable.</p> Reference: <p> None</p>"},{"location":"supported_sbis_and_network_elements/#52-netconf-openconfig","title":"5.2. NetConf OpenConfig","text":"<p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Packet Router</li> </ul> Supported features: <p></p> <ul> <li>Configuration of interfaces, L2/L3 VPNs, ACLs</li> <li>Monitoring of interfaces through polling</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>Infinera DRX-30 with ADVA NOS-OPX-B-21.5.1</li> <li>Edgecore AS7315-30X with ADVA NOS-OPX-B-21.5.1</li> </ul> Reference: <p></p> <ul> <li>IETF RFC6241: Network Configuration Protocol (NETCONF)</li> <li>OpenConfig</li> <li>OpenConfig GitHub</li> </ul>"},{"location":"supported_sbis_and_network_elements/#53-gnmi-openconfig","title":"5.3. gNMI OpenConfig","text":"<p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Packet Router</li> </ul> Supported features: <p></p> <ul> <li>Configuration of interfaces and IPv4 network instances with static routes</li> <li>Monitoring of interfaces through telemetry streaming</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>Nokia SR Linux v23.3.1</li> </ul> Reference: <p></p> <ul> <li>OpenConfig</li> <li>OpenConfig GitHub</li> <li>OpenConfig gNMI</li> <li>OpenConfig gNMI GitHub</li> </ul>"},{"location":"supported_sbis_and_network_elements/#54-onf-transport-api","title":"5.4. 
ONF Transport API","text":"<p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Intermediate Controller</li> </ul> Supported features: <p></p> <ul> <li>Configuration of L0 optical links</li> <li>Monitoring is not supported</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>CTTC Open Line System controller</li> </ul> Reference: <p></p> <ul> <li>ONF Transport API</li> <li>ONF Transport API GitHub</li> </ul>"},{"location":"supported_sbis_and_network_elements/#55-p4","title":"5.5. P4","text":"<p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Intermediate Controller</li> </ul> Supported features: <p></p> <ul> <li>Configuration of L2 packet connections</li> <li>Monitoring not supported</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>BMV2</li> <li>Intel Tofino P4 switch</li> </ul> Reference: <p></p>"},{"location":"supported_sbis_and_network_elements/#56-infinera-ipm-xr","title":"5.6. Infinera IPM XR","text":"<p>Infinera XR Pluggables through Infinera IPM controller</p> <p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Intermediate Controller</li> </ul> Supported features: <p></p> <ul> <li>Configuration of L0 optical links</li> <li>Monitoring not supported</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>Infinera Pluggable Manager (IPM) controller</li> </ul> Reference: <p></p>"},{"location":"supported_sbis_and_network_elements/#57-ietf-l2vpn","title":"5.7. IETF L2VPN","text":"<p>IETF RFC8466: A YANG Data Model for Layer 2 Virtual Private Network (L2VPN) Service Delivery</p> <p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Intermediate Controller</li> </ul> Supported features: <p></p> <ul> <li>Discovery of underlyting topology</li> <li>Configuration of L2 VPNs</li> <li>Monitoring not supported</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>TeraFlowSDN as child IP controller</li> </ul> Reference: <p></p> <ul> <li>IETF RFC8466: A YANG Data Model for Layer 2 Virtual Private Network (L2VPN) Service Delivery</li> </ul>"},{"location":"supported_sbis_and_network_elements/#58-ietf-actn","title":"5.8. IETF ACTN","text":"<p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Intermediate Controller</li> </ul> Supported features: <p></p> <ul> <li>Configure OSU Tunnels</li> <li>Configure Ethernet Transport Services</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>Huawei NCE-T SDN controller</li> </ul> Reference: <p></p> <ul> <li>IETF draft-ietf-ccamp-client-signal-yang-10: A YANG Data Model for Transport Network Client Signals</li> <li>IETF draft-ietf-teas-yang-te-34: A YANG Data Model for Traffic Engineering Tunnels, Label Switched Paths and Interfaces</li> </ul>"},{"location":"supported_sbis_and_network_elements/#59-netconf-openconfig-optical","title":"5.9. NetConf OpenConfig Optical","text":"<p>NetConf - OpenConfig for Optical Devices (EXPERIMENTAL)</p> <p>WARNING: This driver is experimental and contains proprietary extensions on top of OpenConfig. 
Use with care.</p> <p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Optical Transponders</li> <li>Multi-granular Optical Nodes</li> </ul> Supported features: <p></p> <ul> <li>Configure L0 optical connections</li> <li>Monitoring not supported</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>Proprietary NetConf/OpenConfig agents</li> </ul> Reference: <p></p> <ul> <li>IETF RFC6241: Network Configuration Protocol (NETCONF)</li> <li>OpenConfig</li> <li>OpenConfig GitHub</li> </ul>"},{"location":"supported_sbis_and_network_elements/#510-optical-tfs","title":"5.10. Optical TFS","text":"<p>Documentation in progress</p> Device Type: <p></p> <ul> <li>Intermediate Controller</li> </ul> Supported features: <p></p> <ul> <li>L0 optical connections</li> <li>Monitoring is not supported</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>TeraFlowSDN as child optical controller</li> </ul> Reference: <p></p>"},{"location":"supported_sbis_and_network_elements/#511-microwave","title":"5.11. MicroWave","text":"<p>Documentation in progress</p> <p>This driver manages microwave radio links through an intermediate controller using the data model in IETF RFC8345 \"A YANG Data Model for Network Topologies\".</p> Device Type: <p></p> <ul> <li>Radio links between routers through an Intermediate Controller</li> </ul> Supported features: <p></p> <ul> <li>L2 radio links</li> <li>Monitoring not supported</li> </ul> Tested Devices/Controllers: <p></p> <ul> <li>SIAE intermediate MW controller version SM-DC 8.3.2 managing SIAE AGS20 radio terminals</li> </ul> Reference: <p></p> <ul> <li>IETF RFC8345: A YANG Data Model for Network Topologies</li> </ul>"}]}
\ No newline at end of file
@@ -77,6 +77,11 @@
<label class="md-overlay" for="__drawer"></label>
<div data-md-component="skip">
<a href="#51-emulated" class="md-skip">
Skip to content
</a>
</div>
<div data-md-component="announce">
@@ -337,6 +342,17 @@
<label class="md-nav__link md-nav__link--active" for="__toc">
<span class="md-ellipsis">
5. Supported SBIs and Network Elements
</span>
<span class="md-nav__icon md-icon"></span>
</label>
<a href="./" class="md-nav__link md-nav__link--active">
@@ -347,6 +363,122 @@
</a>
<nav class="md-nav md-nav--secondary" aria-label="Table of contents">
<label class="md-nav__title" for="__toc">
<span class="md-nav__icon md-icon"></span>
Table of contents
</label>
<ul class="md-nav__list" data-md-component="toc" data-md-scrollfix>
<li class="md-nav__item">
<a href="#51-emulated" class="md-nav__link">
<span class="md-ellipsis">
5.1. Emulated
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#52-netconf-openconfig" class="md-nav__link">
<span class="md-ellipsis">
5.2. NetConf OpenConfig
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#53-gnmi-openconfig" class="md-nav__link">
<span class="md-ellipsis">
5.3. gNMI OpenConfig
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#54-onf-transport-api" class="md-nav__link">
<span class="md-ellipsis">
5.4. ONF Transport API
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#55-p4" class="md-nav__link">
<span class="md-ellipsis">
5.5. P4
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#56-infinera-ipm-xr" class="md-nav__link">
<span class="md-ellipsis">
5.6. Infinera IPM XR
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#57-ietf-l2vpn" class="md-nav__link">
<span class="md-ellipsis">
5.7. IETF L2VPN
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#58-ietf-actn" class="md-nav__link">
<span class="md-ellipsis">
5.8. IETF ACTN
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#59-netconf-openconfig-optical" class="md-nav__link">
<span class="md-ellipsis">
5.9. NetConf OpenConfig Optical
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#510-optical-tfs" class="md-nav__link">
<span class="md-ellipsis">
5.10. Optical TFS
</span>
</a>
</li>
<li class="md-nav__item">
<a href="#511-microwave" class="md-nav__link">
<span class="md-ellipsis">
5.11. MicroWave
</span>
</a>
</li>
</ul>
</nav>
</li>
@@ -428,7 +560,281 @@
<h1>5. Supported SBIs and Network Elements</h1>
<p>This section summarizes the SBI drivers supported by the TeraFlowSDN controller to interoperate with underlying network equipment and intermediate controllers.</p>
<ul>
<li><a href="#51-emulated">5.1. Emulated</a></li>
<li><a href="#52-netconf-openconfig">5.2. NetConf OpenConfig</a></li>
<li><a href="#53-gnmi-openconfig">5.3. gNMI OpenConfig</a></li>
<li><a href="#54-onf-transport-api">5.4. ONF Transport API</a></li>
<li><a href="#55-p4">5.5. P4</a></li>
<li><a href="#56-infinera-ipm-xr">5.6. Infinera IPM XR</a></li>
<li><a href="#57-ietf-l2vpn">5.7. IETF L2VPN</a></li>
<li><a href="#58-ietf-actn">5.8. IETF ACTN</a></li>
<li><a href="#59-netconf-openconfig-optical">5.9. NetConf OpenConfig Optical</a></li>
<li><a href="#510-optical-tfs">5.10. Optical TFS</a></li>
<li><a href="#511-microwave">5.11. MicroWave</a></li>
</ul>
<h2 id="51-emulated"><strong>5.1. Emulated</strong></h2>
<p><strong>Documentation in progress</strong></p>
<p>This driver is provided for testing and debugging purposes.
It implements an accept-anything behaviour and maintains an in-memory database with configured rules.
It implements support for synthetic telemetry streaming data that is activated/deactivated according to enabled/disabled device endpoints.</p>
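<p>As a rough illustration of this behaviour (not the actual Device component code), the sketch below shows an accept-anything driver that stores configuration rules in memory and returns synthetic samples only for enabled endpoints; all class and method names are hypothetical.</p>
<pre><code># Hypothetical sketch of an accept-anything emulated driver; names do not
# correspond to the TFS Device component API.
import random

class EmulatedDriverSketch:
    def __init__(self):
        self._rules = {}        # maps resource_key to resource_value
        self._enabled = set()   # endpoints with synthetic telemetry enabled

    def set_config(self, resources):
        # Accept any (key, value) pair and store it in the in-memory database.
        for key, value in resources:
            self._rules[key] = value
        return [True for _ in resources]

    def get_config(self, resource_keys=None):
        keys = resource_keys or list(self._rules.keys())
        return [(key, self._rules.get(key)) for key in keys]

    def enable_endpoint(self, endpoint_uuid):
        self._enabled.add(endpoint_uuid)

    def get_sample(self, endpoint_uuid):
        # Synthetic telemetry: a random value only for enabled endpoints.
        return random.random() if endpoint_uuid in self._enabled else None
</code></pre>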
<h3><u>Device Type:</u></h3>
<ul>
<li>Software Emulated within TeraFlowSDN's Device component.</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Configuration</li>
<li>Monitoring (synthetic)</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<p>Not applicable.</p>
<h3><u>Reference:</u></h3>
<p>None</p>
<h2 id="52-netconf-openconfig"><strong>5.2. NetConf OpenConfig</strong></h2>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Packet Router</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Configuration of interfaces, L2/L3 VPNs, ACLs</li>
<li>Monitoring of interfaces through polling</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>Infinera DRX-30 with ADVA NOS-OPX-B-21.5.1</li>
<li>Edgecore AS7315-30X with ADVA NOS-OPX-B-21.5.1</li>
</ul>
<h3><u>Reference:</u></h3>
<ul>
<li><a href="https://datatracker.ietf.org/doc/html/rfc6241">IETF RFC6241: Network Configuration Protocol (NETCONF)</a></li>
<li><a href="https://www.openconfig.net/">OpenConfig</a></li>
<li><a href="https://github.com/openconfig/public">OpenConfig GitHub</a></li>
</ul>
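<p>For orientation only, the snippet below shows how a generic NETCONF client such as <code>ncclient</code> can read the running OpenConfig interfaces configuration from a device; the host, credentials, and filter are placeholders and do not reflect the internals of the TFS driver.</p>
<pre><code># Illustrative NETCONF/OpenConfig read using ncclient; host and credentials are placeholders.
from ncclient import manager

OC_INTERFACES_FILTER = ('subtree', '&lt;interfaces xmlns="http://openconfig.net/yang/interfaces"/&gt;')

with manager.connect(host='192.0.2.10', port=830, username='admin',
                     password='admin', hostkey_verify=False) as nc:
    reply = nc.get_config(source='running', filter=OC_INTERFACES_FILTER)
    print(reply.data_xml)
</code></pre>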
<h2 id="53-gnmi-openconfig"><strong>5.3. gNMI OpenConfig</strong></h2>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Packet Router</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Configuration of interfaces and IPv4 network instances with static routes</li>
<li>Monitoring of interfaces through telemetry streaming</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>Nokia SR Linux v23.3.1</li>
</ul>
<h3><u>Reference:</u></h3>
<ul>
<li><a href="https://www.openconfig.net/">OpenConfig</a></li>
<li><a href="https://github.com/openconfig/public">OpenConfig GitHub</a></li>
<li><a href="https://www.openconfig.net/docs/gnmi/gnmi-specification/">OpenConfig gNMI</a></li>
<li><a href="https://github.com/openconfig/gnmi">OpenConfig gNMI GitHub</a></li>
</ul>
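<p>As an illustrative, stand-alone example (not part of the TFS codebase), the <code>pygnmi</code> client below reads the OpenConfig interfaces subtree over gNMI; the target address and credentials are placeholders.</p>
<pre><code># Illustrative gNMI/OpenConfig read using pygnmi; target and credentials are placeholders.
from pygnmi.client import gNMIclient

with gNMIclient(target=('192.0.2.20', 57400), username='admin',
                password='admin', insecure=True) as gc:
    print(gc.capabilities())
    interfaces = gc.get(path=['openconfig-interfaces:interfaces'], encoding='json_ietf')
    print(interfaces)
</code></pre>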
<h2 id="54-onf-transport-api"><strong>5.4. ONF Transport API</strong></h2>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Intermediate Controller</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Configuration of L0 optical links</li>
<li>Monitoring is not supported</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>CTTC Open Line System controller</li>
</ul>
<h3><u>Reference:</u></h3>
<ul>
<li><a href="https://wiki.opennetworking.org/display/OTCC/TAPI">ONF Transport API</a></li>
<li><a href="https://github.com/OpenNetworkingFoundation/TAPI">ONF Transport API GitHub</a></li>
</ul>
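<p>A hedged sketch of how an ONF Transport API controller can be inspected over RESTCONF is shown below; the base URL and credentials are placeholders and the exact resource paths depend on the controller implementation.</p>
<pre><code># Illustrative RESTCONF read of the TAPI context; URL and credentials are placeholders.
import requests

BASE_URL = 'https://192.0.2.30:8443/restconf/data'

response = requests.get(
    BASE_URL + '/tapi-common:context',
    auth=('admin', 'admin'),
    headers={'Accept': 'application/yang-data+json'},
    verify=False,  # lab setups often use self-signed certificates
)
response.raise_for_status()
print(response.json())
</code></pre>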
<h2 id="55-p4"><strong>5.5. P4</strong></h2>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Intermediate Controller</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Configuration of L2 packet connections</li>
<li>Monitoring not supported</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>BMv2</li>
<li>Intel Tofino P4 switch</li>
</ul>
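<p>As a hedged, stand-alone example, the <code>p4runtime-shell</code> snippet below inserts a forwarding entry into a BMv2 switch; the pipeline files, table, and action names come from the public P4 tutorials and are purely illustrative, not the pipelines used by the TFS P4 driver.</p>
<pre><code># Illustrative P4Runtime table insert using p4runtime-shell; pipeline, table, and
# action names are placeholders taken from the public P4 tutorials.
import p4runtime_sh.shell as sh

sh.setup(
    device_id=1,
    grpc_addr='127.0.0.1:9559',
    election_id=(0, 1),
    config=sh.FwdPipeConfig('build/basic.p4info.txt', 'build/basic.json'),
)

entry = sh.TableEntry('MyIngress.ipv4_lpm')(action='MyIngress.ipv4_forward')
entry.match['hdr.ipv4.dstAddr'] = '10.0.1.1/32'
entry.action['dstAddr'] = '08:00:00:00:01:11'
entry.action['port'] = '1'
entry.insert()

sh.teardown()
</code></pre>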
<h3><u>Reference:</u></h3>
<h2 id="56-infinera-ipm-xr"><strong>5.6. Infinera IPM XR</strong></h2>
<p>Infinera XR Pluggables through Infinera IPM controller</p>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Intermediate Controller</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Configuration of L0 optical links</li>
<li>Monitoring not supported</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>Infinera Pluggable Manager (IPM) controller</li>
</ul>
<h3><u>Reference:</u></h3>
<h2 id="57-ietf-l2vpn"><strong>5.7. IETF L2VPN</strong></h2>
<p>IETF RFC8466: A YANG Data Model for Layer 2 Virtual Private Network (L2VPN) Service Delivery</p>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Intermediate Controller</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Discovery of underlying topology</li>
<li>Configuration of L2 VPNs</li>
<li>Monitoring not supported</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>TeraFlowSDN as child IP controller</li>
</ul>
<h3><u>Reference:</u></h3>
<ul>
<li><a href="https://datatracker.ietf.org/doc/html/rfc8466">IETF RFC8466: A YANG Data Model for Layer 2 Virtual Private Network (L2VPN) Service Delivery</a></li>
</ul>
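<p>As a sketch (not the actual driver logic), an RFC8466 VPN service can be requested from a controller exposing this model over RESTCONF roughly as follows; the URL, credentials, and service body are placeholders.</p>
<pre><code># Illustrative creation of an RFC8466 L2VPN service over RESTCONF;
# URL, credentials, and the service body are placeholders.
import requests

BASE_URL = 'https://192.0.2.40:8443/restconf/data'
VPN_SERVICE = {'ietf-l2vpn-svc:vpn-service': [{'vpn-id': 'vpn-example'}]}

response = requests.post(
    BASE_URL + '/ietf-l2vpn-svc:l2vpn-svc/vpn-services',
    json=VPN_SERVICE,
    auth=('admin', 'admin'),
    headers={'Content-Type': 'application/yang-data+json'},
    verify=False,  # lab setups often use self-signed certificates
)
response.raise_for_status()
</code></pre>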
<h2 id="58-ietf-actn"><strong>5.8. IETF ACTN</strong></h2>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Intermediate Controller</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Configure OSU Tunnels</li>
<li>Configure Ethernet Transport Services</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>Huawei NCE-T SDN controller</li>
</ul>
<h3><u>Reference:</u></h3>
<ul>
<li><a href="https://datatracker.ietf.org/doc/draft-ietf-ccamp-client-signal-yang/">IETF draft-ietf-ccamp-client-signal-yang-10: A YANG Data Model for Transport Network Client Signals</a></li>
<li><a href="https://datatracker.ietf.org/doc/draft-ietf-teas-yang-te/">IETF draft-ietf-teas-yang-te-34: A YANG Data Model for Traffic Engineering Tunnels, Label Switched Paths and Interfaces</a></li>
</ul>
<h2 id="59-netconf-openconfig-optical"><strong>5.9. NetConf OpenConfig Optical</strong></h2>
<p>NetConf - OpenConfig for Optical Devices (EXPERIMENTAL)</p>
<p><strong>WARNING</strong>: This driver is experimental and contains proprietary extensions on top of OpenConfig. Use with care.</p>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Optical Transponders</li>
<li>Multi-granular Optical Nodes</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>Configure L0 optical connections</li>
<li>Monitoring not supported</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>Proprietary NetConf/OpenConfig agents</li>
</ul>
<h3><u>Reference:</u></h3>
<ul>
<li><a href="https://datatracker.ietf.org/doc/html/rfc6241">IETF RFC6241: Network Configuration Protocol (NETCONF)</a></li>
<li><a href="https://www.openconfig.net/">OpenConfig</a></li>
<li><a href="https://github.com/openconfig/public">OpenConfig GitHub</a></li>
</ul>
<h2 id="510-optical-tfs"><strong>5.10. Optical TFS</strong></h2>
<p><strong>Documentation in progress</strong></p>
<h3><u>Device Type:</u></h3>
<ul>
<li>Intermediate Controller</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>L0 optical connections</li>
<li>Monitoring is not supported</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>TeraFlowSDN as child optical controller</li>
</ul>
<h3><u>Reference:</u></h3>
<h2 id="511-microwave"><strong>5.11. MicroWave</strong></h2>
<p><strong>Documentation in progress</strong></p>
<p>This driver manages microwave radio links through an intermediate controller using the data model in IETF RFC8345 "A YANG Data Model for Network Topologies".</p>
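<p>For orientation, the sketch below reads the RFC8345 network topology exposed by such an intermediate controller over RESTCONF; the controller address and credentials are placeholders.</p>
<pre><code># Illustrative read of an RFC8345 topology over RESTCONF; address and credentials are placeholders.
import requests

BASE_URL = 'https://192.0.2.50:8443/restconf/data'

response = requests.get(
    BASE_URL + '/ietf-network:networks',
    auth=('admin', 'admin'),
    headers={'Accept': 'application/yang-data+json'},
    verify=False,  # lab setups often use self-signed certificates
)
response.raise_for_status()
for network in response.json().get('ietf-network:networks', {}).get('network', []):
    print(network.get('network-id'))
</code></pre>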
<h3><u>Device Type:</u></h3>
<ul>
<li>Radio links between routers through an Intermediate Controller</li>
</ul>
<h3><u>Supported features:</u></h3>
<ul>
<li>L2 radio links</li>
<li>Monitoring not supported</li>
</ul>
<h3><u>Tested Devices/Controllers:</u></h3>
<ul>
<li>SIAE intermediate MW controller version SM-DC 8.3.2 managing SIAE AGS20 radio terminals</li>
</ul>
<h3><u>Reference:</u></h3>
<ul>
<li><a href="https://datatracker.ietf.org/doc/html/rfc8345">IETF RFC8345: A YANG Data Model for Network Topologies</a></li>
</ul>