diff --git a/tutorial/1-0-deployment.md b/tutorial/1-0-deployment.md
index 6d56808daf6df8ed8ed5ba1f6133858199d19994..6aa46aab71fa387cfa6c93120aea9062d7dca71e 100644
--- a/tutorial/1-0-deployment.md
+++ b/tutorial/1-0-deployment.md
@@ -1,8 +1,10 @@
 # 1. Deployment Guide
-This section walks you through the process of deploying TeraFlowSDN on top of a Virtual Machine (VM) running MicroK8s
-Kubernetes platform. The guide includes the details on configuring and installing the VM, installing and configuring
-MicroK8s, and deploying and reporting the status of the TeraFlowSDN controller.
+This section walks you through the process of deploying TeraFlowSDN on top of a Virtual
+Machine (VM) running the [MicroK8s Kubernetes platform](https://microk8s.io).
+The guide includes the details on configuring and installing the VM, installing and
+configuring MicroK8s, and deploying and reporting the status of the TeraFlowSDN
+controller.
 ## Table of Content:
 - [1.1. Create VM for the TeraFlowSDN controller](./1-1-create-vm.md)
diff --git a/tutorial/1-1-1-create-vm-oracle-virtualbox.md b/tutorial/1-1-1-create-vm-oracle-virtualbox.md
index ea0da6cabfa46ba2c16166b07cc8c6e345da2246..0a074d56a5698e5d5feff1f790f4ce1b68af63b4 100644
--- a/tutorial/1-1-1-create-vm-oracle-virtualbox.md
+++ b/tutorial/1-1-1-create-vm-oracle-virtualbox.md
@@ -1,14 +1,15 @@
 ## 1.1.1. Oracle VirtualBox
 ### 1.1.1.1. Create a NAT Network in VirtualBox
-In "Oracle VM VirtualBox Manager", Menu "File > Preferences... > Network", create a NAT network with the following
-specifications:
+In "Oracle VM VirtualBox Manager", Menu "File > Preferences... 
> Network", create a NAT
+network with the following specifications:
 |Name |CIDR |DHCP |IPv6 |
 |-----------|-----------|--------|--------|
 |TFS-NAT-Net|10.0.2.0/24|Disabled|Disabled|
-Within the newly created "TFS-NAT-Net" NAT network, configure the following IPv4 forwarding rules:
+Within the newly created "TFS-NAT-Net" NAT network, configure the following IPv4
+forwarding rules:
 |Name|Protocol|Host IP |Host Port|Guest IP |Guest Port|
 |----|--------|---------|---------|---------|----------|
@@ -36,8 +37,9 @@ __Note__: IP address 10.0.2.10 is the one that will be assigned to the VM.
 __Note__: (*) settings to be edited after the VM is created.
 ### 1.1.1.3. Install Ubuntu 20.04 LTS Operating System
-In "Oracle VM VirtualBox Manager", start the VM in normal mode, and follow the installation procedure. Below we provide
-some installation guidelines:
+In "Oracle VM VirtualBox Manager", start the VM in normal mode, and follow the
+installation procedure.
+Below we provide some installation guidelines:
 - Installation Language: English
 - Autodetect your keyboard
 - Configure static network specifications:
@@ -73,9 +75,10 @@ sudo apt-get dist-upgrade -y
 ```
 ## 1.1.1.5. Install VirtualBox Guest Additions
-On VirtualBox Manager, open the VM main screen. If you are running the VM in headless mode, right click over the VM in
-the VirtualBox Manager window and click "Show". If a dialog informing about how to leave the interface of the VM is
-hown, confirm pressing "Switch" button. The interface of the VM should appear.
+On VirtualBox Manager, open the VM main screen. If you are running the VM in headless
+mode, right-click the VM in the VirtualBox Manager window and click "Show".
+If a dialog informing about how to leave the interface of the VM is shown, confirm by
+pressing the "Switch" button. The interface of the VM should appear.
 Click menu "Device > Insert Guest Additions CD image..."
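After inserting the Guest Additions CD image, the installer still has to be run inside the guest. The following is a minimal sketch, assuming the image shows up as `/dev/cdrom`; the device name and mount point may differ on your system:

```bash
#!/bin/bash
# Mount the Guest Additions CD and run the installer inside the VM.
GA_DEVICE="/dev/cdrom"        # device where the inserted image appears (may differ)
GA_MOUNTPOINT="/media/cdrom"  # temporary mount point

if [ -b "${GA_DEVICE}" ]; then
    sudo mkdir -p "${GA_MOUNTPOINT}"
    sudo mount "${GA_DEVICE}" "${GA_MOUNTPOINT}"
    # Build and install the Guest Additions kernel modules, then reboot
    sudo sh "${GA_MOUNTPOINT}/VBoxLinuxAdditions.run"
    sudo reboot
else
    echo "No CD device found at ${GA_DEVICE}; insert the Guest Additions image first"
fi
```

If the build fails, make sure the kernel headers and build tools installed in section 1.1.1.4 are present before re-running the installer.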
diff --git a/tutorial/1-1-create-vm.md b/tutorial/1-1-create-vm.md
index ce74e6dc6f8df07d5f7cf42d979a7b54d61bc9a6..6ebed2f198a789a1970abf41a55bf4a4dfb644a2 100644
--- a/tutorial/1-1-create-vm.md
+++ b/tutorial/1-1-create-vm.md
@@ -1,12 +1,16 @@
 # 1.1. Create VM for the TeraFlowSDN controller
-In this section, we install a VM to be used as the deployment, execution, and development environment for the ETSI
-TeraFlowSDN controller. If you already have a remote physical server fitting the requirements specified in this section
-feel free to use it instead of deploying a local VM. Other virtualization environments can also be used; in that case,
-you will need to adapt these instructions to your particular case.
+In this section, we install a VM to be used as the deployment, execution, and
+development environment for the ETSI TeraFlowSDN controller.
+If you already have a remote physical server fitting the requirements specified in this
+section, feel free to use it instead of deploying a local VM.
+Other virtualization environments can also be used; in that case, you will need to adapt
+these instructions to your particular case.
-Different Hypervisors are considered for that. Check the table of contents for available options. If you want to
-contribute with other Hypervisors, [contact](./README.md#contact) the TFS team through Slack.
+Different Hypervisors are considered for this purpose.
+Check the table of contents for available options.
+If you want to contribute support for other Hypervisors, [contact](./README.md#contact)
+the TFS team through Slack.
 ## Table of Content:
 - [1.1.1. Oracle VirtualBox](./1-1-1-create-vm-oracle-virtualbox.md)
diff --git a/tutorial/1-2-install-microk8s.md b/tutorial/1-2-install-microk8s.md
index 1f1b3e6d9ac60bf8f7ecb4de29ca4a525305e9bc..1cd14ef6ff7f2e6ca2beca3acde93c79c860e991 100644
--- a/tutorial/1-2-install-microk8s.md
+++ b/tutorial/1-2-install-microk8s.md
@@ -1,10 +1,12 @@ # 1.2. 
Install MicroK8s Kubernetes platform
-This section describes how to deploy the MicroK8s Kubernetes platform and configure it to be used with ETSI TeraFlowSDN
-controller. Besides, Docker is installed to build docker images for the ETSI TeraFlowSDN controller.
+This section describes how to deploy the MicroK8s Kubernetes platform and configure it
+to be used with the ETSI TeraFlowSDN controller.
+Besides, Docker is installed to build the Docker images for the ETSI TeraFlowSDN
+controller.
-The steps described in this section might take some minutes depending on your internet connection speed and the
-resources assigned to your VM, or the specifications of your physical server.
+The steps described in this section might take some minutes depending on your internet
+connection speed and the resources assigned to your VM, or the specifications of your
+physical server.
 ## 1.2.1. Upgrade the Ubuntu distribution
@@ -56,6 +58,14 @@ sudo snap install microk8s --classic --channel=1.24/stable
 # Create alias for command "microk8s.kubectl" to be usable as "kubectl"
 sudo snap alias microk8s.kubectl kubectl
+```
+
+It is important to make sure that `ufw` will not interfere with the internal pod-to-pod
+and pod-to-Internet traffic.
+To do so, first check the status.
+If `ufw` is active, use the following command to enable this communication.
+
+```bash
 # Verify status of ufw firewall
 sudo ufw status
@@ -67,6 +77,12 @@ sudo ufw default allow routed
 ## 1.2.5. Add user to the docker and microk8s groups
+
+It is important that your user has permission to run `docker` and `microk8s` in the
+terminal. 
+To allow this, you need to add your user to the `docker` and `microk8s` groups with the
+following commands:
+
 ```bash
 sudo usermod -a -G docker $USER
 sudo usermod -a -G microk8s $USER
@@ -74,7 +90,8 @@ sudo chown -f -R $USER $HOME/.kube
 sudo reboot
 ```
-In case that the .kube file is not automatically provisioned into your home folder, you may follow the steps below:
+If the .kube file is not automatically provisioned into your home folder, you may
+follow the steps below:
 ```bash
 mkdir -p $HOME/.kube
@@ -89,7 +106,8 @@ To retrieve the status of Kubernetes __once__, run the following command:
 microk8s.status --wait-ready
 ```
-To retrieve the status of Kubernetes __periodically__ (e.g., every 1 second), run the following command:
+To retrieve the status of Kubernetes __periodically__ (e.g., every 1 second), run the
+following command:
 ```bash
 watch -n 1 microk8s.status --wait-ready
 ```
@@ -100,7 +118,8 @@ To retrieve the status of the Kubernetes resources __once__, run the following c
 kubectl get all --all-namespaces
 ```
-To retrieve the status of the Kubernetes resources __periodically__ (e.g., every 1 second), run the following command:
+To retrieve the status of the Kubernetes resources __periodically__ (e.g., every 1
+second), run the following command:
 ```bash
 watch -n 1 kubectl get all --all-namespaces
 ```
@@ -116,8 +135,10 @@ The Addons enabled are:
 microk8s.enable dns hostpath-storage ingress registry
 ```
-__Important__: Enabling some of the addons might take few minutes. Do not proceed with next steps until the addons are
- ready. Otherwise, the deployment might fail. To confirm everything is up and running:
+__Important__: Enabling some of the addons might take a few minutes.
+Do not proceed with the next steps until the addons are ready.
+Otherwise, the deployment might fail.
+To confirm everything is up and running:
 1. 
Periodically [Check the status of Kubernetes](./1-2-install-microk8s.md#126-check-status-of-kubernetes)
until you see the addons [dns, ha-cluster, hostpath-storage, ingress, registry, storage] in the enabled block.
diff --git a/tutorial/1-3-deploy-tfs.md b/tutorial/1-3-deploy-tfs.md
index b1f86c7183e53c7fb443387e990ea77c11478b2b..ffd9dfe4919d63d05a35fae24600a28531fbe372 100644
--- a/tutorial/1-3-deploy-tfs.md
+++ b/tutorial/1-3-deploy-tfs.md
@@ -1,7 +1,7 @@
 # 1.3. Deploy TeraFlowSDN over MicroK8s
-This section describes how to deploy TeraFlowSDN controller on top of MicroK8s using the environment configured in the
-previous sections.
+This section describes how to deploy the TeraFlowSDN controller on top of MicroK8s using
+the environment configured in the previous sections.
 ## 1.3.1. Install prerequisites
@@ -17,36 +17,36 @@ mkdir ~/tfs-ctrl
 git clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl
 ```
-__Important__: Original H2020-TeraFlow project hosted on GitLab.com has been archieved and will not receive further
- contributions/updates. Please, clone from ETSI-hosted GitLab code repository.
+__Important__: The original H2020-TeraFlow project hosted on GitLab.com has been
+archived and will not receive further contributions/updates.
+Please clone from the [ETSI-hosted GitLab code repository](https://labs.etsi.org/rep/tfs/controller).
 ## 1.3.3. Checkout the appropriate Git branch
-By default 'master' branch is checked out. If you want to deploy 'develop' that incorporates the most up-to-date code
+By default, the *master* branch is checked out.
+If you want to deploy the *develop* branch, which incorporates the most up-to-date code
 contributions and features, run the following command:
 ```bash
 cd ~/tfs-ctrl
 git checkout develop
 ```
-__Important__: During the elaboration and validation of the tutorials, you should checkout branch
- "feat/microk8s-deployment". Otherwise, you will not have important files such as "my_deploy.sh" or
- "deploy.sh". 
As soon as the tutorials are completed and approved, we will remove this note and merge the - "feat/microk8s-deployment" into "develop" and later into "master", and then the previous step will be - effective. - ## 1.3.4. Prepare a deployment script with the deployment settings -Create a new deployment script, e.g., `my_deploy.sh`, adding the appropriate settings as follows. This script, by -default, makes use of the private Docker registry enabled in MicroK8s, as specified in `TFS_REGISTRY_IMAGE`. It builds -the Docker images for the subset of components defined in `TFS_COMPONENTS`, tags them with the tag defined in -`TFS_IMAGE_TAG`, deploys them in the namespace defined in `TFS_K8S_NAMESPACE`, and (optionally) deploys the extra -Kubernetes manifests listed in `TFS_EXTRA_MANIFESTS`. Besides, it lets you specify in `TFS_GRAFANA_PASSWORD` the -password to be set for the Grafana `admin` user. +Create a new deployment script, e.g., `my_deploy.sh`, adding the appropriate settings as +follows. +This script, by default, makes use of the private Docker registry enabled in MicroK8s, +as specified in `TFS_REGISTRY_IMAGE`. +It builds the Docker images for the subset of components defined in `TFS_COMPONENTS`, +tags them with the tag defined in `TFS_IMAGE_TAG`, deploys them in the namespace defined +in `TFS_K8S_NAMESPACE`, and (optionally) deploys the extra Kubernetes manifests listed +in `TFS_EXTRA_MANIFESTS`. +Besides, it lets you specify in `TFS_GRAFANA_PASSWORD` the password to be set for the +Grafana `admin` user. ```bash cd ~/tfs-ctrl -tee my_deploy.sh >/dev/null <<EOF +tee my_deploy.sh >/dev/null << EOF export TFS_REGISTRY_IMAGE="http://localhost:32000/tfs/" export TFS_COMPONENTS="context device automation pathcomp service slice compute monitoring webui" export TFS_IMAGE_TAG="dev" @@ -58,10 +58,12 @@ EOF ## 1.3.5. Deploy TFS controller -First, source the deployment settings defined in the previous section. 
This way, you do not need to specify the
-environment variables in each and every command you execute to operate the TFS controller. Be aware to re-source the
-file if you open new terminal sessions.
-Then, run the following command to deploy TeraFlowSDN controller on top of the MicroK8s Kubernetes platform.
+First, source the deployment settings defined in the previous section.
+This way, you do not need to specify the environment variables in each and every command
+you execute to operate the TFS controller.
+Remember to re-source the file if you open new terminal sessions.
+Then, run the following command to deploy the TeraFlowSDN controller on top of the
+MicroK8s Kubernetes platform.
 ```bash
 cd ~/tfs-ctrl
@@ -69,16 +71,14 @@ source my_deploy.sh
 ./deploy.sh
 ```
-The script does the following steps:
-1. Build the Docker images for the components defined in `TFS_COMPONENTS`
-2. Tag the Docker images with the value of `TFS_IMAGE_TAG`
-3. Push the Docker images to the repository defined in `TFS_REGISTRY_IMAGE`
-4. Create the namespace defined in `TFS_K8S_NAMESPACE`
-5. Deploy the components defined in `TFS_COMPONENTS`
-6. Create the file `tfs_runtime_env_vars.sh` with the environment variables for the components defined in
- `TFS_COMPONENTS` defining their local host addresses and their port numbers.
-7. Create an ingress controller listening at port 80 for HTTP connections to enable external access to the TeraFlowSDN
- WebUI, Grafana Dashboards, Context Debug endpoints, and Compute NBI interfaces.
+The script performs the following steps:
+1. Builds the Docker images for the components defined in `TFS_COMPONENTS`
+2. Tags the Docker images with the value of `TFS_IMAGE_TAG`
+3. Pushes the Docker images to the repository defined in `TFS_REGISTRY_IMAGE`
+4. Creates the namespace defined in `TFS_K8S_NAMESPACE`
+5. Deploys the components defined in `TFS_COMPONENTS`
+6. 
Creates the file `tfs_runtime_env_vars.sh` with the environment variables for the
+   components defined in `TFS_COMPONENTS`, defining their local host addresses and
+   their port numbers.
+7. Creates an ingress controller listening at port 80 for HTTP connections to enable
+   external access to the TeraFlowSDN WebUI, Grafana Dashboards, Context Debug
+   endpoints, and Compute NBI interfaces.
 8. Initialize and configure the Grafana dashboards
 9. Report a summary of the deployment
 (see [1.5. Show Deployment and Log per Component](./1-5-deploy-logs-troubleshooting.md))
diff --git a/tutorial/1-4-access-webui.md b/tutorial/1-4-access-webui.md
index 7769669e32d6c79aa330e56fd550c923580a149d..aa66ef190454bea51a7cd982df2535f8dc52e712 100644
--- a/tutorial/1-4-access-webui.md
+++ b/tutorial/1-4-access-webui.md
@@ -1,18 +1,26 @@
 # 1.4. Access TeraFlowSDN WebUI and Grafana Dashboards
-This section describes how to get access to the TeraFlowSDN controller WebUI and the monitoring Grafana dashboards.
+This section describes how to get access to the TeraFlowSDN controller WebUI and the
+monitoring Grafana dashboards.
 ## 1.4.1. Access the TeraFlowSDN WebUI
-If you followed the installation steps based on MicroK8s, you got an ingress controller installed that exposes on TCP
-port 80. In the creation of the VM, a forward from local TCP port 8080 to VM's TCP port 80 is configured, so the WebUIs
-and REST APIs of TeraFlowSDN should be exposed on endpoint `127.0.0.1:8080`.
-Besides, the ingress controller defines the following reverse proxy paths:
+If you followed the installation steps based on MicroK8s, an ingress controller was
+installed that exposes TCP port 80.
+In the creation of the VM, a forward from local TCP port 8080 to the VM's TCP port 80
+was configured, so the WebUIs and REST APIs of TeraFlowSDN should be exposed on the
+endpoint `127.0.0.1:8080` of your local machine. 
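Once the controller is deployed, a quick way to verify that the ingress is answering on the forwarded port is a plain HTTP request. This is a sketch, assuming the 8080-to-80 port forwarding described above and that `curl` is installed:

```bash
# Probe the TeraFlowSDN WebUI through the ingress controller.
TFS_HOST="127.0.0.1"
TFS_PORT="8080"
WEBUI_URL="http://${TFS_HOST}:${TFS_PORT}/webui"

# -s: silent; -o /dev/null: discard the body; -w: print only the HTTP status code.
# A 200 (or a redirect code) indicates the ingress and the WebUI are reachable.
curl -s -o /dev/null -w "%{http_code}\n" "${WEBUI_URL}" || echo "WebUI not reachable (yet)"
```

If the probe fails right after deployment, wait until all pods report Ready and retry.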
+Besides, the ingress controller defines the following reverse proxy paths
+(on your local machine):
 - `http://127.0.0.1:8080/webui`: points to the WebUI of TeraFlowSDN.
-- `http://127.0.0.1:8080/grafana`: points to the Grafana dashboards. This endpoint brings access to the monitoring
- dashboards of TeraFlowSDN. The credentials for the `admin`user are those defined in the `my_deploy.sh` script, in the
- `TFS_GRAFANA_PASSWORD` variable.
-- `http://127.0.0.1:8080/context`: points to the REST API exposed by the TeraFlowSDN Context component. This endpoint
- is mainly used for debugging purposes. Note that this endpoint is designed to be accessed from the WebUI.
-- `http://127.0.0.1:8080/restconf`: points to the Compute component NBI based on RestCONF. This endpoint enables
- connecting external software, such as ETSI OpenSourceMANO NFV Orchestrator, to TeraFlowSDN.
+- `http://127.0.0.1:8080/grafana`: points to the Grafana dashboards.
+  This endpoint brings access to the monitoring dashboards of TeraFlowSDN.
+  The credentials for the `admin` user are those defined in the `my_deploy.sh` script,
+  in the `TFS_GRAFANA_PASSWORD` variable.
+- `http://127.0.0.1:8080/context`: points to the REST API exposed by the TeraFlowSDN
+  Context component.
+  This endpoint is mainly used for debugging purposes.
+  Note that this endpoint is designed to be accessed from the WebUI.
+- `http://127.0.0.1:8080/restconf`: points to the Compute component NBI based on RestCONF.
+  This endpoint enables connecting external software, such as the ETSI OpenSourceMANO
+  NFV Orchestrator, to TeraFlowSDN.
diff --git a/tutorial/1-5-deploy-logs-troubleshooting.md b/tutorial/1-5-deploy-logs-troubleshooting.md
index ce16a279cdf6a716d157582f7a4fba0e707f2757..3aa7acaee63dbe02873c8aa69673327ca0f66547 100644
--- a/tutorial/1-5-deploy-logs-troubleshooting.md
+++ b/tutorial/1-5-deploy-logs-troubleshooting.md
@@ -1,30 +1,33 @@ # 1.5. 
Show Deployment and Log per Component -This section presents some helper scripts to inspect the status of the deployment and the logs of the components. These -scripts are particularly helpful for troubleshooting during execution of experiments, development, and debugging. +This section presents some helper scripts to inspect the status of the deployment and +the logs of the components. +These scripts are particularly helpful for troubleshooting during execution of +experiments, development, and debugging. ## 1.5.1. Report the deployment of the TFS controller The summary report given at the end of the [Deploy TFS controller](./1-3-deploy-tfs.md#135-deploy-tfs-controller) -procedure can be generated manually at any time by running the following command. You can avoid sourcing `my_deploy.sh` -if it has been already done. +procedure can be generated manually at any time by running the following command. +You can avoid sourcing `my_deploy.sh` if it has been already done. ```bash cd ~/tfs-ctrl source my_deploy.sh ./show_deploy.sh ``` -Use this script to validate that all the pods, deployments, replica sets, ingress controller, etc. are ready and have -the appropriate state, e.g., "running" for Pods, and the services are deployed and have appropriate IP addresses and -port numbers. +Use this script to validate that all the pods, deployments, replica sets, ingress +controller, etc. are ready and have the appropriate state, e.g., *running* for Pods, and +the services are deployed and have appropriate IP addresses and port numbers. ## 1.5.2. Report the log of a specific TFS controller component -A number of scripts are pre-created in the `scripts` folder to facilitate the inspection of the component logs. For -instance, to dump the log of the Context component, run the following command. You can avoid sourcing `my_deploy.sh` -if it has been already done. +A number of scripts are pre-created in the `scripts` folder to facilitate the inspection +of the component logs. 
+For instance, to dump the log of the Context component, run the following command.
+You can avoid sourcing `my_deploy.sh` if it has already been done.
 ```bash
 source my_deploy.sh
diff --git a/tutorial/2-0-run-experiments.md b/tutorial/2-0-run-experiments.md
index 82f6a56bf0481a4edeaf71251510f74c51138096..ab3c390e11e8a9a7a4b1aab1396fc5d03a5e9b2a 100644
--- a/tutorial/2-0-run-experiments.md
+++ b/tutorial/2-0-run-experiments.md
@@ -1,9 +1,13 @@
 # 2. Run Experiments Guide (WORK IN PROGRESS)
-This section walks you through the process of running experiments in TeraFlowSDN on top of a Oracle VirtualBox-based VM
-running MicroK8s Kubernetes platform. The guide includes the details on configuring the Python environment, some basic
+This section walks you through the process of running experiments in TeraFlowSDN on top
+of an Oracle VirtualBox-based VM running the MicroK8s Kubernetes platform.
+The guide includes the details on configuring the Python environment, some basic
 commands you might need, configuring the network topology, and executing different
 experiments.
+Note that the steps followed here are likely to work regardless of the platform (VM)
+where TeraFlowSDN is deployed.
+
 ## Table of Content:
 - [2.1. Configure the Python environment](./2-1-python-environment.md)
 - [2.2. OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services](./2-2-ofc22.md)
diff --git a/tutorial/2-1-python-environment.md b/tutorial/2-1-python-environment.md
index e03e3daff118f8c1f1268d85a215527aab0358b4..940a1183a8f349c0ffd74a2be6c53c0b1e93031c 100644
--- a/tutorial/2-1-python-environment.md
+++ b/tutorial/2-1-python-environment.md
@@ -1,9 +1,9 @@
 # 2.1. Configure Python Environment
-This section describes how to configure the Python environment to run experiments and develop code for the ETSI
-TeraFlowSDN controller.
-In particular, we use [PyEnv](https://github.com/pyenv/pyenv) to install the appropriate version of Python and manage
-the virtual environments. 
+This section describes how to configure the Python environment to run experiments and +develop code for the ETSI TeraFlowSDN controller. +In particular, we use [PyEnv](https://github.com/pyenv/pyenv) to install the appropriate +version of Python and manage the virtual environments. ## 2.1.1. Upgrade the Ubuntu distribution @@ -22,6 +22,12 @@ sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev li ## 2.1.3. Install PyEnv + +We recommend installing PyEnv through +[PyEnv Installer](https://github.com/pyenv/pyenv-installer). +Below you can find the instructions, but we refer you to the link for updated +instructions. + ```bash curl https://pyenv.run | bash # When finished, edit ~/.bash_profile // ~/.profile // ~/.bashrc as the installer proposes. @@ -32,7 +38,8 @@ eval "$(pyenv init -)" eval "$(pyenv virtualenv-init -)" ``` -In case .bashrc is not linked properly to your profile, you may need to append the following line into your local .profile file: +In case .bashrc is not linked properly to your profile, you may need to append the +following line into your local .profile file: ```bash # Open ~/.profile and append this line: @@ -48,40 +55,76 @@ sudo reboot ## 2.1.5. Install Python 3.9 over PyEnv + +ETSI TeraFlowSDN uses Python 3.9 by default. +You should install the latest update of Python 3.9. +To find the latest version available in PyEnv, you can run the following command: + +```bash +pyenv install --list | grep " 3.9" +``` + +At the time of writing, this command will output the following list: + +``` + 3.9.0 + 3.9-dev + 3.9.1 + 3.9.2 + 3.9.4 + 3.9.5 + 3.9.6 + 3.9.7 + 3.9.8 + 3.9.9 + 3.9.10 + 3.9.11 + 3.9.12 + 3.9.13 + 3.9.14 ** always select the latest version ** +``` + +Therefore, the latest version is Python 3.9.14. +To install this version, you should run: + ```bash -pyenv install 3.9.13 - # This command might take some minutes depending on your Internet connection speed and the performance of your VM. 
+pyenv install 3.9.14
+  # This command might take some minutes depending on your Internet connection speed
+  # and the performance of your VM.
 ```
 ## 2.1.6. Create the Virtual Environment for TeraFlowSDN
-The following commands create a virtual environment named as `tfs` using Python v3.9.13 and associate that environment
-with the current folder, i.e., `~/tfs-ctrl`. That way, when you are in that folder, the associated virtual environment
-will be used, thus inheriting the Python interpreter, i.e., Python v3.9.13, and the Python packages installed on it.
+The following commands create a virtual environment named `tfs` using Python 3.9 and
+associate that environment with the current folder, i.e., `~/tfs-ctrl`.
+That way, when you are in that folder, the associated virtual environment will be used,
+thus inheriting the Python interpreter, i.e., Python 3.9, and the Python packages
+installed on it.
 ```bash
 cd ~/tfs-ctrl
-pyenv virtualenv 3.9.13 tfs
-pyenv local 3.9.13/envs/tfs
+pyenv virtualenv 3.9.14 tfs
+pyenv local 3.9.14/envs/tfs
 ```
-In case that the correct pyenv does not get automatically activated when you change to the tfs-ctrl/ folder, then execute the following command:
+If the correct pyenv environment is not automatically activated when you change to the
+tfs-ctrl/ folder, execute the following command:
 ```bash
 cd ~/tfs-ctrl
-pyenv activate 3.9.13/envs/tfs
+pyenv activate 3.9.14/envs/tfs
 ```
-After completing these commands, you should see in your prompt that now you're within the virtual environment
-`3.9.13/envs/tfs` on folder `~/tfs-ctrl`:
+After completing these commands, you should see in your prompt that you are now within
+the virtual environment `3.9.14/envs/tfs` in folder `~/tfs-ctrl`:
 ```
-(3.9.13/envs/tfs) tfs@tfs-vm:~/tfs-ctrl$
+(3.9.14/envs/tfs) tfs@tfs-vm:~/tfs-ctrl$
 ```
 ## 2.1.7. 
Install the basic Python packages within the virtual environment -From within the `3.9.13/envs/tfs` environment on folder `~/tfs-ctrl`, run the following commands to install the basic -Python packages required to work with TeraFlowSDN. +From within the `3.9.14/envs/tfs` environment on folder `~/tfs-ctrl`, run the following +commands to install the basic Python packages required to work with TeraFlowSDN. ```bash cd ~/tfs-ctrl ./install_requirements.sh diff --git a/tutorial/2-2-ofc22.md b/tutorial/2-2-ofc22.md index 37dfb4032d9ed09fb154ec5caf86a2199b38010c..3b55a0961da78fdc78a8feb31499608589b9d0be 100644 --- a/tutorial/2-2-ofc22.md +++ b/tutorial/2-2-ofc22.md @@ -1,38 +1,40 @@ # 2.2. OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services -This functional test reproduces the live demonstration "Demonstration of Zero-touch Device and L3-VPN Service Management -Using the TeraFlow Cloud-native SDN Controller" carried out at -[OFC'22](https://ieeexplore.ieee.org/document/9748575). - - +This functional test reproduces the live demonstration *Demonstration of Zero-touch +Device and L3-VPN Service Management Using the TeraFlow Cloud-native SDN Controller* +carried out at [OFC'22](https://ieeexplore.ieee.org/document/9748575) / +[Open access](https://research.chalmers.se/en/publication/c397ef36-837f-416d-a44d-6d3b561d582a). ## 2.2.1. Functional test folder -This functional test can be found in folder `./src/tests/ofc22/`. A convenience alias `./ofc22/` pointing to that folder -has been defined. +This functional test can be found in folder `./src/tests/ofc22/`. +A convenience alias `./ofc22/` pointing to that folder has been defined. ## 2.2.2. Execute with real devices This functional test is designed to operate both with real and emulated devices. 
-By default, emulated devices are used; however, if you have access to real devices, you can create/modify the files -`./ofc22/tests/Objects.py` and `./ofc22/tests/Credentials.py` to point to your devices, and map to your own network -topology. -Otherwise, you can modify the `./ofc22/tests/descriptors_emulated.json` that is designed to be uploaded through the -WebUI instead of using the command line scripts. -Note that the default scenario assumes devices R2 and R4 are always emulated, while devices R1, R3, and O1 -can be configured as emulated or real devices. - -__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver, - have to be considered as experimental. The configuration and monitoring capabilities they support are +By default, emulated devices are used; +however, if you have access to real devices, you can create/modify the files +`./ofc22/tests/Objects.py` and `./ofc22/tests/Credentials.py` to point to your devices, +and map to your own network topology. +Otherwise, you can modify the `./ofc22/tests/descriptors_emulated.json` that is designed +to be uploaded through the WebUI instead of using the command line scripts. +Note that the default scenario assumes devices R2 and R4 are always emulated, while +devices R1, R3, and O1 can be configured as emulated or real devices. + +__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver, + P4Driver, and TransportApiDriver, have to be considered as experimental. + The configuration and monitoring capabilities they support are limited or partially implemented/tested. Use them with care. ## 2.2.3. 
Deployment and Dependencies
-To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN
-controller instance as described in the [Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python
+To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes
+environment and a TeraFlowSDN controller instance as described in the
+[Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python
 environment as described in
 [Tutorial: Run Experiments Guide > 2.1. Configure Python Environment](./2-1-python-environment.md).
 Remember to source the scenario settings, e.g., `cd ~/tfs-ctrl && source ofc22/deploy_specs.sh` in each terminal you open.
@@ -42,29 +44,33 @@ Then, re-build the protocol buffers code from the proto files:
 ## 2.2.4. Access to the WebUI and Dashboard
-When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards as described in
+When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards
+as described in
 [Tutorial: Deployment Guide > 1.4. Access TeraFlowSDN WebUI and Grafana Dashboards](./1-4-access-webui.md)
 Notes:
 - the default credentials for the Grafana Dashboard are user/pass: `admin`/`admin123+`.
-- in Grafana, you will find the "L3-Monitorng" in the "Starred dashboards" section.
+- in Grafana, you will find the *L3-Monitoring* dashboard in the *Starred dashboards*
+  section.
 ## 2.2.5. Test execution
-Before executing the tests, the environment variables need to be prepared. 
+First, make sure to load your deployment variables by:
 ```
 source my_deploy.sh
 ```
-Then, you also need to load the environment variables to support the execution of the tests by:
+Then, you also need to load the environment variables to support the execution of the
+tests by:
 ```
 source tfs_runtime_env_vars.sh
 ```
-You also need to make sure that you have all the gRPC-generate code in your folder. To do so, run:
+You also need to make sure that you have all the gRPC-generated code in your folder.
+To do so, run:
 ```
 proto/generate_code_python.sh
@@ -76,9 +82,10 @@ To execute this functional test, four main steps needs to be carried out:
 3. L3VPN Service removal
 4. Cleanup
-Upon the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. If there
-is some error during the execution, you should see a detailed report on the error. See the troubleshooting section if
-needed.
+As the execution of each test progresses, a report will be generated indicating
+*PASSED* / *FAILED* / *SKIPPED*.
+If there is some error during the execution, you should see a detailed report on the
+error.
+See the troubleshooting section if needed.
 You can check the logs of the different components using the appropriate `scripts/show_logs_[component].sh` scripts
@@ -86,57 +93,70 @@ after you execute each step.
 ### 2.2.5.1. Device bootstrapping
-This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The
-expected results are:
+This step configures some basic entities (Context and Topology), the devices, and the
+links in the topology.
+The expected results are:
 - The devices to be added into the Topology.
 - The devices to be pre-configured and initialized as ENABLED by the Automation component.
-- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to
- automatically start. 
+- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to automatically start.
- The links to be added to the topology.
To run this step, you can do it from the WebUI by uploading the file `./ofc22/tests/descriptors_emulated.json` that
-contains the descriptors of the contexts, topologies, devices, and links, or by executing the
-`./ofc22/run_test_01_bootstrap.sh` script.
+contains the descriptors of the contexts, topologies, devices, and links, or by
+executing the `./ofc22/run_test_01_bootstrap.sh` script.
-When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you should see the monitoring data
-being plotted and updated every 5 seconds (by default). Given that there is no service configured, you should see a
-0-valued flat plot.
+When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you
+should see the monitoring data being plotted and updated every 5 seconds (by default).
+Given that there is no service configured, you should see a 0-valued flat plot.
-In the WebUI, select the "admin" Context. Then, in the "Devices" tab you should see that 5 different emulated devices
-have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the "Services" tab
-you should see that there is no service created. Note here that the emulated devices produce synthetic
-randomly-generated data and do not care about the services configured.
+In the WebUI, select the *admin* Context.
+Then, in the *Devices* tab you should see that 5 different emulated devices have been
+created and activated: 4 packet routers, and 1 optical line system controller.
+Besides, in the *Services* tab you should see that there is no service created.
+Note here that the emulated devices produce synthetic randomly-generated monitoring data
+and do not represent any particular services configured.
### 2.2.5.2.
L3VPN Service creation -This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance. +This step configures a new service emulating the request an OSM WIM would make by means +of a Mock OSM instance. To run this step, execute the `./ofc22/run_test_02_create_service.sh` script. -When the script finishes, check the WebUI "Services" tab. You should see that two services have been created, one for -the optical layer and another for the packet layer. Besides, you can check the "Devices" tab to see the configuration -rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured, -you should see the plots with the monitored data for the device. By default, device R1-EMU is selected. +When the script finishes, check the WebUI *Services* tab. You should see that two +services have been created, one for the optical layer and another for the packet layer. +Besides, you can check the *Devices* tab to see the configuration rules that have been +configured in each device. +In the Grafana Dashboard, given that there is now a service configured, you should see +the plots with the monitored data for the device. +By default, device R1-EMU is selected. ### 2.2.5.3. L3VPN Service removal -This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock -OSM instance. +This step deconfigures the previously created services emulating the request an OSM WIM +would make by means of a Mock OSM instance. -To run this step, execute the `./ofc22/run_test_03_delete_service.sh` script, or delete the L3NM service from the WebUI. +To run this step, execute the `./ofc22/run_test_03_delete_service.sh` script, or delete +the L3NM service from the WebUI. -When the script finishes, check the WebUI "Services" tab. You should see that the two services have been removed. 
-Besides, in the "Devices" tab you can see that the appropriate configuration rules have been deconfigured. In the
-Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again.
+When the script finishes, check the WebUI *Services* tab.
+You should see that the two services have been removed.
+Besides, in the *Devices* tab you can see that the appropriate configuration rules have
+been deconfigured.
+In the Grafana Dashboard, given that there is no service configured, you should see a
+0-valued flat plot again.
### 2.2.5.4. Cleanup
-This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness.
+This last step performs a cleanup of the scenario, removing all the TeraFlowSDN entities
+for completeness.
To run this step, execute the `./ofc22/run_test_04_cleanup.sh` script.
-When the script finishes, check the WebUI "Devices" tab, you should see that the devices have been removed. Besides, in
-the "Services" tab you can see that the "admin" Context has no services given that that context has been removed.
+When the script finishes, check the WebUI *Devices* tab; you should see that the devices
+have been removed.
+Besides, in the *Services* tab you can see that the *admin* Context has no services
+given that the context has been removed.
diff --git a/tutorial/2-4-ecoc22.md b/tutorial/2-4-ecoc22.md
index b6f92aadc692345b73c5529a4de9a56522c722d9..2b0292a08e25ab4d57e3980e291e539f907cadc4 100644
--- a/tutorial/2-4-ecoc22.md
+++ b/tutorial/2-4-ecoc22.md
@@ -1,30 +1,34 @@
# 2.4. ECOC'22 Demo - Disjoint DC-2-DC L3VPN Service (WORK IN PROGRESS)
-This functional test reproduces the experimental assessment of "Experimental Demonstration of Transport Network Slicing
-with SLA Using the TeraFlowSDN Controller" presented at [ECOC'22](https://www.optica.org/en-us/events/topical_meetings/ecoc/schedule/?day=Tuesday#Tuesday).
+This functional test reproduces the experimental assessment of *Experimental
+Demonstration of Transport Network Slicing with SLA Using the TeraFlowSDN Controller*
+presented at [ECOC'22](https://www.optica.org/en-us/events/topical_meetings/ecoc/schedule/?day=Tuesday#Tuesday).
## 2.4.1. Functional test folder
-This functional test can be found in folder `./src/tests/ecoc22/`. A convenience alias `./ecoc22/` pointing to that
-folder has been defined.
+This functional test can be found in folder `./src/tests/ecoc22/`.
+A convenience alias `./ecoc22/` pointing to that folder has been defined.
## 2.4.2. Execute with real devices
-This functional test has only been tested with emulated devices; however, if you have access to real devices, you can
-modify the files `./ecoc22/tests/Objects.py` and `./ecoc22/tests/Credentials.py` to point to your devices, and map to
-your network topology.
-Otherwise, you can modify the `./ecoc22/tests/descriptors_emulated.json` that is designed to be uploaded through the
-WebUI instead of using the command line scripts.
+This functional test has only been tested with emulated devices;
+however, if you have access to real devices, you can modify the files
+`./ecoc22/tests/Objects.py` and `./ecoc22/tests/Credentials.py` to point to your devices,
+and map to your network topology.
+Otherwise, you can modify the `./ecoc22/tests/descriptors_emulated.json` file, which is
+designed to be uploaded through the WebUI instead of using the command line scripts.
-__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver,
- have to be considered as experimental. The configuration and monitoring capabilities they support are
+__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver,
+ P4Driver, and TransportApiDriver, have to be considered as experimental.
+ The configuration and monitoring capabilities they support are
limited or partially implemented/tested.
Use them with care.
## 2.4.3. Deployment and Dependencies
-To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN
-controller instance as described in the [Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python
+To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes
+environment and a TeraFlowSDN controller instance as described in the
+[Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python
environment as described in [Tutorial: Run Experiments Guide > 2.1. Configure Python
Environment](./2-1-python-environment.md).
Remember to source the scenario settings, e.g., `cd ~/tfs-ctrl && source ecoc22/deploy_specs.sh` in each terminal you open.
@@ -35,7 +39,8 @@ Then, re-build the protocol buffers code from the proto files:
## 2.4.4. Access to the WebUI and Dashboard
-When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards as described in
+When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards
+as described in
[Tutorial: Deployment Guide > 1.4. Access TeraFlowSDN WebUI and Grafana Dashboards](./1-4-access-webui.md)
Notes:
@@ -51,9 +56,11 @@ To execute this functional test, four main steps needs to be carried out:
3. L3VPN Service removal
4. Cleanup
-Upon the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. If there
-is some error during the execution, you should see a detailed report on the error. See the troubleshooting section if
-needed.
+As the execution of each test progresses, a report will be generated indicating
+*PASSED* / *FAILED* / *SKIPPED*.
+If there is some error during the execution, you should see a detailed report on the
+error.
+See the troubleshooting section if needed.
You can check the logs of the different components using the appropriate `scripts/show_logs_[component].sh` scripts after you execute each step. @@ -61,57 +68,72 @@ after you execute each step. ### 2.4.5.1. Device bootstrapping -This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The -expected results are: +This step configures some basic entities (Context and Topology), the devices, and the +links in the topology. +The expected results are: - The devices to be added into the Topology. -- The devices to be pre-configured and initialized as ENABLED by the Automation component. -- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to - automatically start. +- The devices to be pre-configured and initialized as *ENABLED* by the Automation component. +- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated + and data collection to automatically start. - The links to be added to the topology. To run this step, you can do it from the WebUI by uploading the file `./ecoc22/tests/descriptors_emulated.json` that contains the descriptors of the contexts, topologies, devices, and links, or by executing the `./ecoc22/run_test_01_bootstrap.sh` script. -When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you should see the monitoring data -being plotted and updated every 5 seconds (by default). Given that there is no service configured, you should see a -0-valued flat plot. +When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you +should see the monitoring data being plotted and updated every 5 seconds (by default). +Given that there is no service configured, you should see a 0-valued flat plot. -In the WebUI, select the "admin" Context. 
Then, in the "Devices" tab you should see that 5 different emulated devices
-have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the "Services" tab
-you should see that there is no service created. Note here that the emulated devices produce synthetic
-randomly-generated data and do not care about the services configured.
+In the WebUI, select the *admin* Context.
+Then, in the *Devices* tab you should see that 5 different emulated devices have been
+created and activated: 4 packet routers, and 1 optical line system controller.
+Besides, in the *Services* tab you should see that there is no service created.
+Note here that the emulated devices produce synthetic randomly-generated monitoring data
+and do not represent any particular service configured.
### 2.4.5.2. L3VPN Service creation
-This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance.
+This step configures a new service emulating the request an OSM WIM would make by means
+of a Mock OSM instance.
To run this step, execute the `./ecoc22/run_test_02_create_service.sh` script.
-When the script finishes, check the WebUI "Services" tab. You should see that two services have been created, one for
-the optical layer and another for the packet layer. Besides, you can check the "Devices" tab to see the configuration
-rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured,
-you should see the plots with the monitored data for the device. By default, device R1-EMU is selected.
+When the script finishes, check the WebUI *Services* tab.
+You should see that two services have been created, one for the optical layer and
+another for the packet layer.
+Besides, you can check the *Devices* tab to see the configuration rules that have been
+configured in each device.
+In the Grafana Dashboard, given that there is now a service configured, you should see
+the plots with the monitored data for the device.
+By default, device R1-EMU is selected.
### 2.4.5.3. L3VPN Service removal
-This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock
-OSM instance.
+This step deconfigures the previously created services emulating the request an OSM WIM
+would make by means of a Mock OSM instance.
-To run this step, execute the `./ecoc22/run_test_03_delete_service.sh` script, or delete the L3NM service from the WebUI.
+To run this step, execute the `./ecoc22/run_test_03_delete_service.sh` script, or delete
+the L3NM service from the WebUI.
-When the script finishes, check the WebUI "Services" tab. You should see that the two services have been removed.
-Besides, in the "Devices" tab you can see that the appropriate configuration rules have been deconfigured. In the
-Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again.
+When the script finishes, check the WebUI *Services* tab.
+You should see that the two services have been removed.
+Besides, in the *Devices* tab you can see that the appropriate configuration rules have
+been deconfigured.
+In the Grafana Dashboard, given that there is no service configured, you should see a
+0-valued flat plot again.
### 2.4.5.4. Cleanup
-This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness.
+This last step performs a cleanup of the scenario, removing all the TeraFlowSDN entities
+for completeness.
To run this step, execute the `./ecoc22/run_test_04_cleanup.sh` script.
-When the script finishes, check the WebUI "Devices" tab, you should see that the devices have been removed. Besides, in
-the "Services" tab you can see that the "admin" Context has no services given that that context has been removed.
+When the script finishes, check the WebUI *Devices* tab; you should see that the devices
+have been removed.
+Besides, in the *Services* tab you can see that the *admin* Context has no services
+given that the context has been removed.
diff --git a/tutorial/README.md b/tutorial/README.md
index 836434e51b8797cf91a49a3f47298eec712fbe43..2d3b1050fc09b2b3e7a26ce20360c67a7c3256dc 100644
--- a/tutorial/README.md
+++ b/tutorial/README.md
@@ -2,22 +2,26 @@
## Abstract
-This document provides a walkthrough on how to prepare your environment for executing and contributing to the
-[ETSI TeraFlowSDN OSG](https://tfs.etsi.org/).
+This document provides a walkthrough on how to prepare your environment for executing
+and contributing to the [ETSI TeraFlowSDN OSG](https://tfs.etsi.org/).
-This walkthrough makes some reasonable assumptions to simplify the deployment of the ETSI TeraFlowSDN controller, the
-execution of experiments and tests, and development of new contributions. In particular, we assume:
+This walkthrough makes some reasonable assumptions to simplify the deployment of the
+ETSI TeraFlowSDN controller, the execution of experiments and tests, and development of
+new contributions.
+In particular, we assume: - [VirtualBox](https://www.virtualbox.org/) version 6.1.34 r150636 -- [VSCode](https://code.visualstudio.com/) with the "Remote SSH" extension +- [VSCode](https://code.visualstudio.com/) with the + [*Remote SSH*](https://code.visualstudio.com/docs/remote/ssh) extension - VM software: - [Ubuntu Server 20.04 LTS](https://releases.ubuntu.com/20.04/) - [MicroK8s](https://microk8s.io/) ## Contact -If your environment does not fit with the proposed assumptions and you experience some trouble preparing it to work -with the ETSI TeraFlowSDN controller, contact the ETSI TeraFlowSDN OSG team through +If your environment does not fit with the proposed assumptions and you experience issues +preparing it to work with the ETSI TeraFlowSDN controller, contact the ETSI TeraFlowSDN +OSG team through [Slack](https://join.slack.com/t/teraflowsdn/shared_invite/zt-18gc5jvkh-1_DEZHFhxeuOqzJZPq~U~A)
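The ofc22 and ecoc22 tutorials edited above both walk through the same four-step sequence (device bootstrapping, L3VPN service creation, L3VPN service removal, cleanup). As a reviewer's aside, the sequence can be sketched as a small shell wrapper. This is a hypothetical helper, not part of the TFS repository: only the four `run_test_*.sh` script names come from the tutorials, while the `run_step` function and the `DRY_RUN`/`SCENARIO` variables are illustrative.

```shell
#!/bin/sh
# Sketch of the four-step functional test sequence described in the
# tutorials above. Run from the TFS root (~/tfs-ctrl) after sourcing
# my_deploy.sh and tfs_runtime_env_vars.sh.
# With DRY_RUN=1 (the default) each command is only printed, so the
# sequence can be reviewed before executing it for real.
DRY_RUN=${DRY_RUN:-1}
SCENARIO=${SCENARIO:-ofc22}   # or "ecoc22"

run_step() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    # Abort the sequence as soon as one step reports a failure.
    "$@" || { echo "step failed: $*" >&2; exit 1; }
  fi
}

run_step "./${SCENARIO}/run_test_01_bootstrap.sh"        # device bootstrapping
run_step "./${SCENARIO}/run_test_02_create_service.sh"   # L3VPN service creation
run_step "./${SCENARIO}/run_test_03_delete_service.sh"   # L3VPN service removal
run_step "./${SCENARIO}/run_test_04_cleanup.sh"          # cleanup
```

Between steps you would inspect the WebUI tabs and the Grafana dashboard as the tutorials describe; `DRY_RUN=0 SCENARIO=ecoc22 sh wrapper.sh` would run the ecoc22 variant for real.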