From 5b66f2092190e193bbd72934817416ab9e6f4fb6 Mon Sep 17 00:00:00 2001 From: gifrerenom Date: Thu, 6 Oct 2022 17:11:38 +0000 Subject: [PATCH 1/8] Tutorial cleanup --- scripts/create_component.sh | 39 ++++++ tutorial/1-2-install-microk8s.md | 26 +++- tutorial/2-2-ofc22.md | 9 +- tutorial/2-4-ecoc22.md | 7 +- tutorial/3-0-development.md | 4 +- .../{3.3-debug-comp.md => 3-3-debug-comp.md} | 0 tutorial/3-X-develop-new-component.md | 132 ++++++++++++++++++ 7 files changed, 202 insertions(+), 15 deletions(-) create mode 100755 scripts/create_component.sh rename tutorial/{3.3-debug-comp.md => 3-3-debug-comp.md} (100%) create mode 100644 tutorial/3-X-develop-new-component.md diff --git a/scripts/create_component.sh b/scripts/create_component.sh new file mode 100755 index 000000000..17f6abc64 --- /dev/null +++ b/scripts/create_component.sh @@ -0,0 +1,39 @@ +#!/bin/bash +# Copyright 2021-2023 H2020 TeraFlow (https://www.teraflow-h2020.eu/) +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+COMPONENT_NAME=$1
+PROJECTDIR=`pwd`
+
+mkdir -p ${PROJECTDIR}/src/${COMPONENT_NAME}
+mkdir -p ${PROJECTDIR}/src/${COMPONENT_NAME}/client
+mkdir -p ${PROJECTDIR}/src/${COMPONENT_NAME}/service
+mkdir -p ${PROJECTDIR}/src/${COMPONENT_NAME}/tests
+
+touch ${PROJECTDIR}/src/${COMPONENT_NAME}/client/__init__.py
+touch ${PROJECTDIR}/src/${COMPONENT_NAME}/service/__init__.py
+touch ${PROJECTDIR}/src/${COMPONENT_NAME}/tests/__init__.py
+touch ${PROJECTDIR}/src/${COMPONENT_NAME}/.gitlab-ci.yml
+touch ${PROJECTDIR}/src/${COMPONENT_NAME}/__init__.py
+touch ${PROJECTDIR}/src/${COMPONENT_NAME}/Config.py
+touch ${PROJECTDIR}/src/${COMPONENT_NAME}/Dockerfile
+touch ${PROJECTDIR}/src/${COMPONENT_NAME}/requirements.in
+
+cd ${PROJECTDIR}/src
+python gitlab-ci.yml_generator.py -t latest ${COMPONENT_NAME}
+
+cd ${PROJECTDIR}/src/${COMPONENT_NAME}
+mv .gitlab-ci.yml gitlab-ci.yaml
+${PROJECTDIR}/scripts/add_license_header_to_files.sh
+mv gitlab-ci.yaml .gitlab-ci.yml
diff --git a/tutorial/1-2-install-microk8s.md b/tutorial/1-2-install-microk8s.md
index 327c6af9e..1f1b3e6d9 100644
--- a/tutorial/1-2-install-microk8s.md
+++ b/tutorial/1-2-install-microk8s.md
@@ -83,17 +83,27 @@ microk8s config > $HOME/.kube/config
 sudo reboot
 ```
 
-## 1.2.6. Check status of Kubernetes
+## 1.2.6. Check status of Kubernetes and addons
 
+To retrieve the status of Kubernetes __once__, run the following command:
 ```bash
 microk8s.status --wait-ready
 ```
 
+To retrieve the status of Kubernetes __periodically__ (e.g., every 1 second), run the following command:
+```bash
+watch -n 1 microk8s.status --wait-ready
+```
+
 ## 1.2.7. Check all resources in Kubernetes
 
+To retrieve the status of the Kubernetes resources __once__, run the following command:
 ```bash
-microk8s.kubectl get all --all-namespaces
+kubectl get all --all-namespaces
 ```
 
+To retrieve the status of the Kubernetes resources __periodically__ (e.g., every 1 second), run the following command:
+```bash
+watch -n 1 kubectl get all --all-namespaces
+```
+
 ## 1.2.8. 
Enable addons
 
 The Addons enabled are:
@@ -106,10 +116,14 @@ The Addons enabled are:
 microk8s.enable dns hostpath-storage ingress registry
 ```
 
-__Note__: enabling some of the addons might take few minutes.
-    [Check status](./1-2-install-microk8s.md#124-check-status-of-kubernetes) periodically until all addons are
-    shown as enabled. Then [Check resources](./1-2-install-microk8s.md#125-check-all-resources-in-kubernetes)
-    periodically until all pods are Ready and Running.
+__Important__: Enabling some of the addons might take a few minutes. Do not proceed with the next steps until the addons
+    are ready. Otherwise, the deployment might fail. To confirm everything is up and running:
+1. Periodically
+   [Check the status of Kubernetes](./1-2-install-microk8s.md#126-check-status-of-kubernetes)
+   until you see the addons [dns, ha-cluster, hostpath-storage, ingress, registry, storage] in the enabled block.
+2. Periodically
+   [Check Kubernetes resources](./1-2-install-microk8s.md#127-check-all-resources-in-kubernetes)
+   until all pods are __Ready__ and __Running__.
 
 ## 1.2.9. Stop, Restart, and Redeploy
diff --git a/tutorial/2-2-ofc22.md b/tutorial/2-2-ofc22.md
index 1a2ee8cda..37dfb4032 100644
--- a/tutorial/2-2-ofc22.md
+++ b/tutorial/2-2-ofc22.md
@@ -2,7 +2,9 @@
 
 This functional test reproduces the live demonstration "Demonstration of Zero-touch Device and L3-VPN Service
 Management Using the TeraFlow Cloud-native SDN Controller" carried out at
-[OFC'22](https://www.ofcconference.org/en-us/home/program-speakers/demo/).
+[OFC'22](https://ieeexplore.ieee.org/document/9748575).
+
+
 
 ## 2.2.1. Functional test folder
@@ -33,8 +35,9 @@ To run this functional test, it is assumed you have deployed a MicroK8s-based Ku
 controller instance as described in the [Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python
 environment as described in
 [Tutorial: Run Experiments Guide > 2.1. Configure Python Environment](./2-1-python-environment.md).
-Remember to source the scenario settings appropriately, e.g., `cd ~/tfs-ctrl && source my_deploy.sh` in each terminal -you open. +Remember to source the scenario settings, e.g., `cd ~/tfs-ctrl && source ofc22/deploy_specs.sh` in each terminal you open. +Then, re-build the protocol buffers code from the proto files: +`./proto/generate_code_python.sh` ## 2.2.4. Access to the WebUI and Dashboard diff --git a/tutorial/2-4-ecoc22.md b/tutorial/2-4-ecoc22.md index 6fc9333b5..b6f92aadc 100644 --- a/tutorial/2-4-ecoc22.md +++ b/tutorial/2-4-ecoc22.md @@ -1,7 +1,7 @@ # 2.4. ECOC'22 Demo - Disjoint DC-2-DC L3VPN Service (WORK IN PROGRESS) This functional test reproduces the experimental assessment of "Experimental Demonstration of Transport Network Slicing -with SLA Using the TeraFlowSDN Controller" presented at [ECOC'22](https://www.ecoc2022.org/). +with SLA Using the TeraFlowSDN Controller" presented at [ECOC'22](https://www.optica.org/en-us/events/topical_meetings/ecoc/schedule/?day=Tuesday#Tuesday). ## 2.4.1. Functional test folder @@ -27,10 +27,7 @@ To run this functional test, it is assumed you have deployed a MicroK8s-based Ku controller instance as described in the [Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python environment as described in [Tutorial: Run Experiments Guide > 2.1. Configure Python Environment](./2-1-python-environment.md). -Remember to source the scenario settings appropriately, e.g., `cd ~/tfs-ctrl && source my_deploy.sh` in each terminal -you open. -Next, remember to source the environment variables created by the deployment, e.g., -`cd ~/tfs-ctrl && source tfs_runtime_env_vars.sh`. +Remember to source the scenario settings, e.g., `cd ~/tfs-ctrl && source ecoc22/deploy_specs.sh` in each terminal you open. 
Then, re-build the protocol buffers code from the proto files:
 `./proto/generate_code_python.sh`
diff --git a/tutorial/3-0-development.md b/tutorial/3-0-development.md
index c2b13315a..c8e7d0d9e 100644
--- a/tutorial/3-0-development.md
+++ b/tutorial/3-0-development.md
@@ -7,4 +7,6 @@ this guide assumes you are using the Oracle VirtualBox-based VM running MicroK8s
 ## Table of Content:
 - [3.1. Configure VSCode and Connect to the VM](./3-1-configure-vscode.md)
 - [3.2. Development Commands, Tricks, and Hints (WORK IN PROGRESS)](./3-2-develop-cth.md)
-- [3.3. Debugging individual components in VSCode](./3.3-debug-comp.md)
+- [3.3. Debugging individual components in VSCode](./3-3-debug-comp.md)
+
+- [3.X. Developing a new component: Forecaster (WORK IN PROGRESS)](./3-X-develop-new-component.md)
diff --git a/tutorial/3.3-debug-comp.md b/tutorial/3-3-debug-comp.md
similarity index 100%
rename from tutorial/3.3-debug-comp.md
rename to tutorial/3-3-debug-comp.md
diff --git a/tutorial/3-X-develop-new-component.md b/tutorial/3-X-develop-new-component.md
new file mode 100644
index 000000000..403527335
--- /dev/null
+++ b/tutorial/3-X-develop-new-component.md
@@ -0,0 +1,132 @@
+# 3.X. Developing a new component: Forecaster (WORK IN PROGRESS)
+
+
+## 3.X.1. Preliminary requisites
+As in any microservice-based architecture, the components of TeraFlowSDN can be implemented using different programming languages.
+For the sake of simplicity, and given it is the most widely used programming language in TeraFlow, this tutorial page assumes the reader will use Python.
+
+This tutorial assumes you have successfully completed the steps in
+[2.1. Configure the Python Environment](./2-1-python-environment.md) and
+[3.1. Configure VSCode and Connect to the VM](./3-1-configure-vscode.md) to prepare your environment.
+
+
+## 3.X.2. Create the component folder structure
+The source code of each component of TeraFlowSDN is hosted in a particular folder within the `src` folder.
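The folder skeleton described in this section can also be created by hand. The following sketch mirrors what the `scripts/create_component.sh` helper introduced in this patch series does, using the example `forecaster` component; paths are relative to the project root folder:

```shell
# Recreate by hand the skeleton produced by scripts/create_component.sh
# for the example "forecaster" component (illustrative sketch only).
COMPONENT_NAME=forecaster

# One folder per component under src/, with client/service/tests subfolders
mkdir -p src/${COMPONENT_NAME}/client src/${COMPONENT_NAME}/service src/${COMPONENT_NAME}/tests

# __init__.py files mark the component and its subfolders as Python (sub-)packages
touch src/${COMPONENT_NAME}/__init__.py
touch src/${COMPONENT_NAME}/client/__init__.py
touch src/${COMPONENT_NAME}/service/__init__.py
touch src/${COMPONENT_NAME}/tests/__init__.py

# Placeholders for configuration, containerization, CI/CD, and dependencies
touch src/${COMPONENT_NAME}/Config.py
touch src/${COMPONENT_NAME}/Dockerfile
touch src/${COMPONENT_NAME}/.gitlab-ci.yml
touch src/${COMPONENT_NAME}/requirements.in
```

The helper script additionally generates the `.gitlab-ci.yml` contents and prepends license headers, so prefer it for real components.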
+
+Within that folder, typically, the following subfolders and files are created:
+- Folder `client`: contains a client implementation that the rest of components can use to interact with the component.
+  See details in [3.X.4. Create the component client](./3-X-develop-new-component.md#3x4-create-the-component-client).
+- Folder `service`: contains the implementation of the service logic.
+  See details in [3.X.5. Create the component service](./3-X-develop-new-component.md#3x5-create-the-component-service).
+- Folder `tests`: contains the set of unitary tests to be executed over the component to ensure it is properly implemented.
+  See details in [3.X.6. Create the component tests](./3-X-develop-new-component.md#3x6-create-the-component-tests).
+- File `__init__.py`: defines the component as a sub-package of TeraFlowSDN to facilitate imports.
+- File `.gitlab-ci.yml`: defines the GitLab CI/CD settings to build, test, and deploy the component in an automated manner.
+- File `Config.py`: contains particular configuration settings and constants for the component.
+- File `Dockerfile`: defines the recipe to construct the Docker image for the component.
+- File `requirements.in`: defines the Python dependencies that are required by this component.
+
+You can automate the creation of this file structure by running the following command.
+In this example, we create the `forecaster` component.
+```bash
+cd ~/tfs-ctrl
+scripts/create_component.sh forecaster
+```
+
+
+## 3.X.3. gRPC Proto messages and services
+The components (i.e., microservices) of the TeraFlowSDN controller, in general, use a gRPC-based open API to interoperate.
+All the protocol definitions can be found in sub-folder `proto` within the root project folder.
+For additional details on gRPC, visit the official [gRPC](https://grpc.io/) web page.
+
+In general, each component has an associated _proto_ file named after the component in snake_case, with extension `.proto`. 
For instance, the _proto_ file for the `forecaster` component being developed in this tutorial is `proto/forecaster.proto` and implements 3 RPC methods:
+- `rpc GetForecastOfTopology (context.TopologyId) returns (Forecast) {}`:
+  Takes a topology identifier as parameter, and computes the aggregated forecast for the topology.
+- `rpc GetForecastOfLink(context.LinkId) returns (Forecast) {}`:
+  Takes a link identifier as parameter, and computes the aggregated forecast for that link.
+- `rpc CheckService (context.ServiceId) returns (ForecastPrediction) {}`:
+  Takes a service identifier as parameter, computes the forecast for the connections of that service, and retrieves a value indicating if the resources can support the demand.
+
+
+## 3.X.4. Create the component client
+Each component has, by default, a pre-defined client that other components can import to inter-communicate.
+The client module, by default, is named after the component's name concatenated with `Client`, written in CamelCase.
+For instance, the client for the `forecaster` component would be `ForecasterClient.py`.
+
+This file contains a class with the same name as the file, e.g., `ForecasterClient`, and implements 3 main methods, plus one method for each RPC method supported by the service. These methods are:
+- Main methods:
+  - `__init__(host=None, port=None)`: constructor of the client class.
+  - `connect(self)`: triggers the connection of the client to the service pointed by host and port class parameters.
+  - `close(self)`: disconnects the client from the service.
+- RPC methods: one for each RPC method defined in the associated service within the proto file, e.g., `proto/forecaster.proto`.
+
+Create file ``
+
+
+
+## 3.X.3. Connect VSCode to the VM through "Remote SSH" extension
+- Right-click on "TFS-VM"
+- Select "Connect to Host in Current Window"
+- Reply to the questions asked
+  - Platform of the remote host "TFS-VM": Linux
+  - "TFS-VM" has fingerprint "". 
Do you want to continue?: Continue
+  - Type tfs user's password: tfs123
+- You should now be connected to the TFS-VM.
+
+__Note__: if you get a connection error message, the reason might be due to wrong SSH server fingerprint. Edit file
+    "<...>/.ssh/known_hosts" on your local user account, check if there is a line starting with
+    "[127.0.0.1]:2200" (assuming previous port forwarding configuration), remove the entire line, save the file,
+    and retry connection.
+
+
+## 3.X.4. Add SSH key to prevent typing the password every time
+This step creates an SSH key in the VM and installs it in VSCode to prevent having to type the password every time.
+
+- In VSCode (connected to the VM), click menu "Terminal > New Terminal"
+- Run the following commands on the VM's terminal through VSCode
+```bash
+ssh-keygen -t rsa -b 4096 -f ~/.ssh/tfs-vm.key
+  # leave password empty
+ssh-copy-id -i ~/.ssh/tfs-vm.key.pub tfs@10.0.2.10
+  # tfs@10.0.2.10's password:
+rm .ssh/known_hosts
+```
+
+- In VSCode, click left "Explorer" panel to expand, if not expanded, and click "Open Folder" button.
+  - Choose "/home/tfs/"
+  - Type tfs user's password when asked
+  - Trust authors of the "/home/tfs [SSH: TFS-VM]" folder when asked
+- Right click on the file "tfs-vm.key" in the file explorer
+  - Select "Download..." option
+  - Download the file into your user's account ".ssh" folder
+- Delete files "tfs-vm.key" and "tfs-vm.key.pub" on the TFS-VM.
+
+- In VSCode, click left "Remote Explorer" panel to expand
+  - Click the "gear" icon next to "SSH TARGETS" on top of "Remote Explorer" bar
+  - Choose to edit "<...>/.ssh/config" file (or equivalent)
+  - Find entry "Host TFS-VM" and update it as follows:
+```
+Host TFS-VM
+  HostName 127.0.0.1
+  Port 2200
+  ForwardX11 no
+  User tfs
+  IdentityFile ""
+```
+  - Save the file
+- From now on, VSCode will use the identity file to connect to the TFS-VM instead of the user's password.
+
+
+## 3.X.5. 
Install VSCode Python Extension (in VSCode server) +This step installs Python extensions in VSCode server running in the VM. + +- In VSCode (connected to the VM), click left button "Extensions" +- Search "Python" extension in the extension Marketplace. +- Install official "Python" extension released by Microsoft. + - By default, since you're connected to the VM, it will be installed in the VSCode server running in the VM. + +- In VSCode (connected to the VM), click left button "Explorer" +- Click "Ctrl+Alt+P" and type "Python: Select Interpreter". Select option "Python: 3.9.13 64-bit ('tfs')" + +in terminal: set python path to be used by VSCode: +`echo "PYTHONPATH=./src" > ~/tfs-ctrl/.env` -- GitLab From 54ce6e00b542be915f552bbf2e680c85f33ed86b Mon Sep 17 00:00:00 2001 From: gifrerenom Date: Fri, 7 Oct 2022 11:19:56 +0200 Subject: [PATCH 2/8] Tutorial: - updated code repository to clone from --- tutorial/1-3-deploy-tfs.md | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-) diff --git a/tutorial/1-3-deploy-tfs.md b/tutorial/1-3-deploy-tfs.md index 9b2da4fc1..b1f86c718 100644 --- a/tutorial/1-3-deploy-tfs.md +++ b/tutorial/1-3-deploy-tfs.md @@ -11,25 +11,15 @@ sudo apt-get install -y git curl jq ## 1.3.2. Clone the Git repository of the TeraFlowSDN controller -__Important__: Right now, we have two repositories hosting the code of TeraFlowSDN: GitLab.com and ETSI owned GitLab - repository. Nowadays, only GitLab.com repository accepts code contributions that are periodically - mirrored to ETSI labs. In the near future, we plan to swap the repository roles and new contributions - will be accepted only at ETSI labs, while GitLab.com will probably be kept as a mirror of ETSI. If you - plan to contribute code to the TeraFlowSDN controller, by now, clone from GitLab.com. We will update the - tutorial as soon as roles of repositories are swapped. 
-
-Clone from GitLab (if you want to contribute code to TeraFlowSDN):
-```bash
-mkdir ~/tfs-ctrl
-git clone https://gitlab.com/teraflow-h2020/controller.git ~/tfs-ctrl
-```
-
-Clone from ETSI owned GitLab (if you do not plan to contribute code):
+Clone from ETSI-hosted GitLab code repository:
 ```bash
 mkdir ~/tfs-ctrl
 git clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl
 ```
 
+__Important__: Original H2020-TeraFlow project hosted on GitLab.com has been archived and will not receive further
+    contributions/updates. Please clone from ETSI-hosted GitLab code repository.
+
 
 ## 1.3.3. Checkout the appropriate Git branch
 
 By default 'master' branch is checked out. If you want to deploy 'develop' that incorporates the most up-to-date code
-- 
GitLab


From e48499f109f9e78ebea90e96d499ee96e018614d Mon Sep 17 00:00:00 2001
From: gifrerenom
Date: Fri, 7 Oct 2022 11:35:26 +0200
Subject: [PATCH 3/8] Tutorial:
- polished configuration of VSCode IDE

---
 tutorial/3-1-configure-vscode.md | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tutorial/3-1-configure-vscode.md b/tutorial/3-1-configure-vscode.md
index 10493ce22..34d204b30 100644
--- a/tutorial/3-1-configure-vscode.md
+++ b/tutorial/3-1-configure-vscode.md
@@ -88,5 +88,10 @@ This step installs Python extensions in VSCode server running in the VM.
 - In VSCode (connected to the VM), click left button "Explorer"
 - Click "Ctrl+Alt+P" and type "Python: Select Interpreter". Select option "Python: 3.9.13 64-bit ('tfs')"
 
-in terminal: set python path to be used by VSCode:
-`echo "PYTHONPATH=./src" > ~/tfs-ctrl/.env`
+## 3.1.6. Define environment variables for VSCode
+The source code in the TFS controller project is hosted in folder `src/`. 
To help VSCode find the Python modules and
+packages, add the following file into your workspace root folder:
+
+```bash
+echo "PYTHONPATH=./src" >> ~/tfs-ctrl/.env
+```
-- 
GitLab


From f61eff1dbed6380d4dc95322be3b994a8b539b00 Mon Sep 17 00:00:00 2001
From: gifrerenom
Date: Tue, 11 Oct 2022 11:40:56 +0000
Subject: [PATCH 4/8] Tutorial cleanup

---
 tutorial/3-1-configure-vscode.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tutorial/3-1-configure-vscode.md b/tutorial/3-1-configure-vscode.md
index 34d204b30..e7dbf3a6a 100644
--- a/tutorial/3-1-configure-vscode.md
+++ b/tutorial/3-1-configure-vscode.md
@@ -88,6 +88,7 @@ This step installs Python extensions in VSCode server running in the VM.
 - In VSCode (connected to the VM), click left button "Explorer"
 - Click "Ctrl+Alt+P" and type "Python: Select Interpreter". Select option "Python: 3.9.13 64-bit ('tfs')"
 
+
 ## 3.1.6. Define environment variables for VSCode
 The source code in the TFS controller project is hosted in folder `src/`. To help VSCode find the Python modules and
 packages, add the following file into your workspace root folder:
-- 
GitLab


From a592187df68433d7d82264c723cbcc9857272f45 Mon Sep 17 00:00:00 2001
From: Carlos Natalino
Date: Tue, 18 Oct 2022 07:27:57 +0200
Subject: [PATCH 5/8] Pass on the tutorials to (i) verify that all parts are
 complete (did not run the installation procedures); (ii) updated instructions
 on how to select the Python version; (iii) general aesthetic changes.
--- tutorial/1-0-deployment.md | 8 +- tutorial/1-1-1-create-vm-oracle-virtualbox.md | 19 +-- tutorial/1-1-create-vm.md | 16 ++- tutorial/1-2-install-microk8s.md | 39 ++++-- tutorial/1-3-deploy-tfs.md | 64 ++++----- tutorial/1-4-access-webui.md | 32 +++-- tutorial/1-5-deploy-logs-troubleshooting.md | 23 ++-- tutorial/2-0-run-experiments.md | 8 +- tutorial/2-1-python-environment.md | 81 ++++++++--- tutorial/2-2-ofc22.md | 128 ++++++++++-------- tutorial/2-4-ecoc22.md | 108 +++++++++------ tutorial/README.md | 18 ++- 12 files changed, 339 insertions(+), 205 deletions(-) diff --git a/tutorial/1-0-deployment.md b/tutorial/1-0-deployment.md index 6d56808da..6aa46aab7 100644 --- a/tutorial/1-0-deployment.md +++ b/tutorial/1-0-deployment.md @@ -1,8 +1,10 @@ # 1. Deployment Guide -This section walks you through the process of deploying TeraFlowSDN on top of a Virtual Machine (VM) running MicroK8s -Kubernetes platform. The guide includes the details on configuring and installing the VM, installing and configuring -MicroK8s, and deploying and reporting the status of the TeraFlowSDN controller. +This section walks you through the process of deploying TeraFlowSDN on top of a Virtual +Machine (VM) running [MicroK8s Kubernetes platform](https://microk8s.io). +The guide includes the details on configuring and installing the VM, installing and +configuring MicroK8s, and deploying and reporting the status of the TeraFlowSDN +controller. ## Table of Content: - [1.1. Create VM for the TeraFlowSDN controller](./1-1-create-vm.md) diff --git a/tutorial/1-1-1-create-vm-oracle-virtualbox.md b/tutorial/1-1-1-create-vm-oracle-virtualbox.md index ea0da6cab..0a074d56a 100644 --- a/tutorial/1-1-1-create-vm-oracle-virtualbox.md +++ b/tutorial/1-1-1-create-vm-oracle-virtualbox.md @@ -1,14 +1,15 @@ ## 1.1.1. Oracle VirtualBox ### 1.1.1.1. Create a NAT Network in VirtualBox -In "Oracle VM VirtualBox Manager", Menu "File > Preferences... 
> Network", create a NAT network with the following
-specifications:
+In "Oracle VM VirtualBox Manager", Menu "File > Preferences... > Network", create a NAT
+network with the following specifications:
 
 |Name       |CIDR       |DHCP    |IPv6    |
 |-----------|-----------|--------|--------|
 |TFS-NAT-Net|10.0.2.0/24|Disabled|Disabled|
 
-Within the newly created "TFS-NAT-Net" NAT network, configure the following IPv4 forwarding rules:
+Within the newly created "TFS-NAT-Net" NAT network, configure the following IPv4
+forwarding rules:
 
 |Name|Protocol|Host IP  |Host Port|Guest IP |Guest Port|
 |----|--------|---------|---------|---------|----------|
@@ -36,8 +37,9 @@ __Note__: IP address 10.0.2.10 is the one that will be assigned to the VM.
 __Note__: (*) settings to be edited after the VM is created.
 
 ### 1.1.1.3. Install Ubuntu 20.04 LTS Operating System
-In "Oracle VM VirtualBox Manager", start the VM in normal mode, and follow the installation procedure. Below we provide
-some installation guidelines:
+In "Oracle VM VirtualBox Manager", start the VM in normal mode, and follow the
+installation procedure.
+Below we provide some installation guidelines:
 - Installation Language: English
 - Autodetect your keyboard
 - Configure static network specifications:
@@ -73,9 +75,10 @@ sudo apt-get dist-upgrade -y
 ```
 
 ## 1.1.1.5. Install VirtualBox Guest Additions
-On VirtualBox Manager, open the VM main screen. If you are running the VM in headless mode, right click over the VM in
-the VirtualBox Manager window and click "Show". If a dialog informing about how to leave the interface of the VM is
-hown, confirm pressing "Switch" button. The interface of the VM should appear.
+On VirtualBox Manager, open the VM main screen. If you are running the VM in headless
+mode, right click over the VM in the VirtualBox Manager window and click "Show".
+If a dialog informing about how to leave the interface of the VM is shown, confirm
+pressing "Switch" button. The interface of the VM should appear.
Click menu "Device > Insert Guest Additions CD image..." diff --git a/tutorial/1-1-create-vm.md b/tutorial/1-1-create-vm.md index ce74e6dc6..6ebed2f19 100644 --- a/tutorial/1-1-create-vm.md +++ b/tutorial/1-1-create-vm.md @@ -1,12 +1,16 @@ # 1.1. Create VM for the TeraFlowSDN controller -In this section, we install a VM to be used as the deployment, execution, and development environment for the ETSI -TeraFlowSDN controller. If you already have a remote physical server fitting the requirements specified in this section -feel free to use it instead of deploying a local VM. Other virtualization environments can also be used; in that case, -you will need to adapt these instructions to your particular case. +In this section, we install a VM to be used as the deployment, execution, and +development environment for the ETSI TeraFlowSDN controller. +If you already have a remote physical server fitting the requirements specified in this +section feel free to use it instead of deploying a local VM. +Other virtualization environments can also be used; in that case, you will need to adapt +these instructions to your particular case. -Different Hypervisors are considered for that. Check the table of contents for available options. If you want to -contribute with other Hypervisors, [contact](./README.md#contact) the TFS team through Slack. +Different Hypervisors are considered for that. +Check the table of contents for available options. +If you want to contribute with other Hypervisors, [contact](./README.md#contact) the TFS +team through Slack. ## Table of Content: - [1.1.1. Oracle VirtualBox](./1-1-1-create-vm-oracle-virtualbox.md) diff --git a/tutorial/1-2-install-microk8s.md b/tutorial/1-2-install-microk8s.md index 1f1b3e6d9..1cd14ef6f 100644 --- a/tutorial/1-2-install-microk8s.md +++ b/tutorial/1-2-install-microk8s.md @@ -1,10 +1,12 @@ # 1.2. 
Install MicroK8s Kubernetes platform -This section describes how to deploy the MicroK8s Kubernetes platform and configure it to be used with ETSI TeraFlowSDN -controller. Besides, Docker is installed to build docker images for the ETSI TeraFlowSDN controller. +This section describes how to deploy the MicroK8s Kubernetes platform and configure it +to be used with ETSI TeraFlowSDN controller. +Besides, Docker is installed to build docker images for the ETSI TeraFlowSDN controller. -The steps described in this section might take some minutes depending on your internet connection speed and the -resources assigned to your VM, or the specifications of your physical server. +The steps described in this section might take some minutes depending on your internet +connection speed and the resources assigned to your VM, or the specifications of your +physical server. ## 1.2.1. Upgrade the Ubuntu distribution @@ -56,6 +58,14 @@ sudo snap install microk8s --classic --channel=1.24/stable # Create alias for command "microk8s.kubectl" to be usable as "kubectl" sudo snap alias microk8s.kubectl kubectl +``` + +It is important to make sure that `ufw` will not interfere with the internal pod-to-pod +and pod-to-Internet traffic. +To do so, first check the status. +If `ufw` is active, use the following command to enable the communication. + +```bash # Verify status of ufw firewall sudo ufw status @@ -67,6 +77,12 @@ sudo ufw default allow routed ## 1.2.5. Add user to the docker and microk8s groups + +It is important that your user has the permission to run `docker` and `microk8s` in the +terminal. 
+To allow this, you need to add your user to the `docker` and `microk8s` groups with the
+following commands:
+
 ```bash
 sudo usermod -a -G docker $USER
 sudo usermod -a -G microk8s $USER
@@ -74,7 +90,8 @@ sudo chown -f -R $USER $HOME/.kube
 sudo reboot
 ```
 
-In case that the .kube file is not automatically provisioned into your home folder, you may follow the steps below:
+In case that the .kube file is not automatically provisioned into your home folder, you
+may follow the steps below:
 
 ```bash
 mkdir -p $HOME/.kube
@@ -89,7 +106,8 @@ To retrieve the status of Kubernetes __once__, run the following command:
 microk8s.status --wait-ready
 ```
 
-To retrieve the status of Kubernetes __periodically__ (e.g., every 1 second), run the following command:
+To retrieve the status of Kubernetes __periodically__ (e.g., every 1 second), run the
+following command:
 ```bash
 watch -n 1 microk8s.status --wait-ready
 ```
@@ -100,7 +118,8 @@ To retrieve the status of the Kubernetes resources __once__, run the following c
 kubectl get all --all-namespaces
 ```
 
-To retrieve the status of the Kubernetes resources __periodically__ (e.g., every 1 second), run the following command:
+To retrieve the status of the Kubernetes resources __periodically__ (e.g., every 1
+second), run the following command:
 ```bash
 watch -n 1 kubectl get all --all-namespaces
 ```
@@ -116,8 +135,10 @@ The Addons enabled are:
 microk8s.enable dns hostpath-storage ingress registry
 ```
 
-__Important__: Enabling some of the addons might take a few minutes. Do not proceed with the next steps until the addons
-    are ready. Otherwise, the deployment might fail. To confirm everything is up and running:
+__Important__: Enabling some of the addons might take a few minutes.
+Do not proceed with the next steps until the addons are ready.
+Otherwise, the deployment might fail.
+To confirm everything is up and running:
 1. 
Periodically
    [Check the status of Kubernetes](./1-2-install-microk8s.md#126-check-status-of-kubernetes)
    until you see the addons [dns, ha-cluster, hostpath-storage, ingress, registry, storage] in the enabled block.
 2. Periodically
    [Check Kubernetes resources](./1-2-install-microk8s.md#127-check-all-resources-in-kubernetes)
    until all pods are __Ready__ and __Running__.
diff --git a/tutorial/1-3-deploy-tfs.md b/tutorial/1-3-deploy-tfs.md
index b1f86c718..ffd9dfe49 100644
--- a/tutorial/1-3-deploy-tfs.md
+++ b/tutorial/1-3-deploy-tfs.md
@@ -1,7 +1,7 @@
 # 1.3. Deploy TeraFlowSDN over MicroK8s
 
-This section describes how to deploy TeraFlowSDN controller on top of MicroK8s using the environment configured in the
-previous sections.
+This section describes how to deploy TeraFlowSDN controller on top of MicroK8s using the
+environment configured in the previous sections.
 
 ## 1.3.1. Install prerequisites
@@ -17,36 +17,36 @@ mkdir ~/tfs-ctrl
 git clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl
 ```
 
-__Important__: Original H2020-TeraFlow project hosted on GitLab.com has been archived and will not receive further
-    contributions/updates. Please clone from ETSI-hosted GitLab code repository.
+__Important__: The original H2020-TeraFlow project hosted on GitLab.com has been
+archived and will not receive further contributions/updates.
+Please clone from the [ETSI-hosted GitLab code repository](https://labs.etsi.org/rep/tfs/controller).
 
 ## 1.3.3. Checkout the appropriate Git branch
 
-By default 'master' branch is checked out. If you want to deploy 'develop' that incorporates the most up-to-date code
+By default the *master* branch is checked out.
+If you want to deploy the *develop* branch, which incorporates the most up-to-date code
 contributions and features, run the following command:
 
 ```bash
 cd ~/tfs-ctrl
 git checkout develop
 ```
 
-__Important__: During the elaboration and validation of the tutorials, you should checkout branch
-    "feat/microk8s-deployment". Otherwise, you will not have important files such as "my_deploy.sh" or
-    "deploy.sh". 
As soon as the tutorials are completed and approved, we will remove this note and merge the - "feat/microk8s-deployment" into "develop" and later into "master", and then the previous step will be - effective. - ## 1.3.4. Prepare a deployment script with the deployment settings -Create a new deployment script, e.g., `my_deploy.sh`, adding the appropriate settings as follows. This script, by -default, makes use of the private Docker registry enabled in MicroK8s, as specified in `TFS_REGISTRY_IMAGE`. It builds -the Docker images for the subset of components defined in `TFS_COMPONENTS`, tags them with the tag defined in -`TFS_IMAGE_TAG`, deploys them in the namespace defined in `TFS_K8S_NAMESPACE`, and (optionally) deploys the extra -Kubernetes manifests listed in `TFS_EXTRA_MANIFESTS`. Besides, it lets you specify in `TFS_GRAFANA_PASSWORD` the -password to be set for the Grafana `admin` user. +Create a new deployment script, e.g., `my_deploy.sh`, adding the appropriate settings as +follows. +This script, by default, makes use of the private Docker registry enabled in MicroK8s, +as specified in `TFS_REGISTRY_IMAGE`. +It builds the Docker images for the subset of components defined in `TFS_COMPONENTS`, +tags them with the tag defined in `TFS_IMAGE_TAG`, deploys them in the namespace defined +in `TFS_K8S_NAMESPACE`, and (optionally) deploys the extra Kubernetes manifests listed +in `TFS_EXTRA_MANIFESTS`. +Besides, it lets you specify in `TFS_GRAFANA_PASSWORD` the password to be set for the +Grafana `admin` user. ```bash cd ~/tfs-ctrl -tee my_deploy.sh >/dev/null </dev/null << EOF export TFS_REGISTRY_IMAGE="http://localhost:32000/tfs/" export TFS_COMPONENTS="context device automation pathcomp service slice compute monitoring webui" export TFS_IMAGE_TAG="dev" @@ -58,10 +58,12 @@ EOF ## 1.3.5. Deploy TFS controller -First, source the deployment settings defined in the previous section. 
This way, you do not need to specify the -environment variables in each and every command you execute to operate the TFS controller. Be aware to re-source the -file if you open new terminal sessions. -Then, run the following command to deploy TeraFlowSDN controller on top of the MicroK8s Kubernetes platform. +First, source the deployment settings defined in the previous section. +This way, you do not need to specify the environment variables in each and every command +you execute to operate the TFS controller. +Remember to re-source the file if you open new terminal sessions. +Then, run the following command to deploy TeraFlowSDN controller on top of the MicroK8s +Kubernetes platform. ```bash cd ~/tfs-ctrl @@ -69,16 +71,14 @@ source my_deploy.sh ./deploy.sh ``` -The script does the following steps: -1. Build the Docker images for the components defined in `TFS_COMPONENTS` -2. Tag the Docker images with the value of `TFS_IMAGE_TAG` -3. Push the Docker images to the repository defined in `TFS_REGISTRY_IMAGE` -4. Create the namespace defined in `TFS_K8S_NAMESPACE` -5. Deploy the components defined in `TFS_COMPONENTS` -6. Create the file `tfs_runtime_env_vars.sh` with the environment variables for the components defined in - `TFS_COMPONENTS` defining their local host addresses and their port numbers. -7. Create an ingress controller listening at port 80 for HTTP connections to enable external access to the TeraFlowSDN - WebUI, Grafana Dashboards, Context Debug endpoints, and Compute NBI interfaces. +The script performs the following steps: +1. Builds the Docker images for the components defined in `TFS_COMPONENTS` +2. Tags the Docker images with the value of `TFS_IMAGE_TAG` +3. Pushes the Docker images to the repository defined in `TFS_REGISTRY_IMAGE` +4. Creates the namespace defined in `TFS_K8S_NAMESPACE` +5. Deploys the components defined in `TFS_COMPONENTS` +6.
Creates the file `tfs_runtime_env_vars.sh` with the environment variables for the components defined in `TFS_COMPONENTS` defining their local host addresses and their port numbers. +7. Creates an ingress controller listening at port 80 for HTTP connections to enable external access to the TeraFlowSDN WebUI, Grafana Dashboards, Context Debug endpoints, and Compute NBI interfaces. 8. Initialize and configure the Grafana dashboards 9. Report a summary of the deployment (see [1.5. Show Deployment and Log per Component](./1-5-deploy-logs-troubleshooting.md)) diff --git a/tutorial/1-4-access-webui.md b/tutorial/1-4-access-webui.md index 7769669e3..aa66ef190 100644 --- a/tutorial/1-4-access-webui.md +++ b/tutorial/1-4-access-webui.md @@ -1,18 +1,26 @@ # 1.4. Access TeraFlowSDN WebUI and Grafana Dashboards -This section describes how to get access to the TeraFlowSDN controller WebUI and the monitoring Grafana dashboards. +This section describes how to get access to the TeraFlowSDN controller WebUI and the +monitoring Grafana dashboards. ## 1.4.1. Access the TeraFlowSDN WebUI -If you followed the installation steps based on MicroK8s, you got an ingress controller installed that exposes on TCP -port 80. In the creation of the VM, a forward from local TCP port 8080 to VM's TCP port 80 is configured, so the WebUIs -and REST APIs of TeraFlowSDN should be exposed on endpoint `127.0.0.1:8080`. -Besides, the ingress controller defines the following reverse proxy paths: +If you followed the installation steps based on MicroK8s, you got an ingress controller +installed that listens on TCP port 80. +In the creation of the VM, a forward from local TCP port 8080 to the VM's TCP port 80 is +configured, so the WebUIs and REST APIs of TeraFlowSDN should be exposed on the endpoint +`127.0.0.1:8080` of your local machine. +Besides, the ingress controller defines the following reverse proxy paths +(on your local machine): - `http://127.0.0.1:8080/webui`: points to the WebUI of TeraFlowSDN.
-- `http://127.0.0.1:8080/grafana`: points to the Grafana dashboards. This endpoint brings access to the monitoring - dashboards of TeraFlowSDN. The credentials for the `admin`user are those defined in the `my_deploy.sh` script, in the - `TFS_GRAFANA_PASSWORD` variable. -- `http://127.0.0.1:8080/context`: points to the REST API exposed by the TeraFlowSDN Context component. This endpoint - is mainly used for debugging purposes. Note that this endpoint is designed to be accessed from the WebUI. -- `http://127.0.0.1:8080/restconf`: points to the Compute component NBI based on RestCONF. This endpoint enables - connecting external software, such as ETSI OpenSourceMANO NFV Orchestrator, to TeraFlowSDN. +- `http://127.0.0.1:8080/grafana`: points to the Grafana dashboards. + This endpoint brings access to the monitoring dashboards of TeraFlowSDN. + The credentials for the `admin` user are those defined in the `my_deploy.sh` script, in + the `TFS_GRAFANA_PASSWORD` variable. +- `http://127.0.0.1:8080/context`: points to the REST API exposed by the TeraFlowSDN + Context component. + This endpoint is mainly used for debugging purposes. + Note that this endpoint is designed to be accessed from the WebUI. +- `http://127.0.0.1:8080/restconf`: points to the Compute component NBI based on RestCONF. + This endpoint enables connecting external software, such as ETSI OpenSourceMANO NFV + Orchestrator, to TeraFlowSDN. diff --git a/tutorial/1-5-deploy-logs-troubleshooting.md b/tutorial/1-5-deploy-logs-troubleshooting.md index ce16a279c..3aa7acaee 100644 --- a/tutorial/1-5-deploy-logs-troubleshooting.md +++ b/tutorial/1-5-deploy-logs-troubleshooting.md @@ -1,30 +1,33 @@ # 1.5. Show Deployment and Log per Component -This section presents some helper scripts to inspect the status of the deployment and the logs of the components. These -scripts are particularly helpful for troubleshooting during execution of experiments, development, and debugging.
+This section presents some helper scripts to inspect the status of the deployment and +the logs of the components. +These scripts are particularly helpful for troubleshooting during execution of +experiments, development, and debugging. ## 1.5.1. Report the deployment of the TFS controller The summary report given at the end of the [Deploy TFS controller](./1-3-deploy-tfs.md#135-deploy-tfs-controller) -procedure can be generated manually at any time by running the following command. You can avoid sourcing `my_deploy.sh` -if it has been already done. +procedure can be generated manually at any time by running the following command. +You can avoid sourcing `my_deploy.sh` if it has already been done. ```bash cd ~/tfs-ctrl source my_deploy.sh ./show_deploy.sh ``` -Use this script to validate that all the pods, deployments, replica sets, ingress controller, etc. are ready and have -the appropriate state, e.g., "running" for Pods, and the services are deployed and have appropriate IP addresses and -port numbers. +Use this script to validate that all the pods, deployments, replica sets, ingress +controller, etc. are ready and have the appropriate state, e.g., *running* for Pods, and +the services are deployed and have appropriate IP addresses and port numbers. ## 1.5.2. Report the log of a specific TFS controller component -A number of scripts are pre-created in the `scripts` folder to facilitate the inspection of the component logs. For -instance, to dump the log of the Context component, run the following command. You can avoid sourcing `my_deploy.sh` -if it has been already done. +A number of scripts are pre-created in the `scripts` folder to facilitate the inspection +of the component logs. +For instance, to dump the log of the Context component, run the following command. +You can avoid sourcing `my_deploy.sh` if it has already been done.
```bash source my_deploy.sh diff --git a/tutorial/2-0-run-experiments.md b/tutorial/2-0-run-experiments.md index 82f6a56bf..ab3c390e1 100644 --- a/tutorial/2-0-run-experiments.md +++ b/tutorial/2-0-run-experiments.md @@ -1,9 +1,13 @@ # 2. Run Experiments Guide (WORK IN PROGRESS) -This section walks you through the process of running experiments in TeraFlowSDN on top of a Oracle VirtualBox-based VM -running MicroK8s Kubernetes platform. The guide includes the details on configuring the Python environment, some basic +This section walks you through the process of running experiments in TeraFlowSDN on top +of an Oracle VirtualBox-based VM running the MicroK8s Kubernetes platform. +The guide includes the details on configuring the Python environment, some basic commands you might need, configuring the network topology, and executing different experiments. +Note that the steps followed here are likely to work regardless of the platform (VM) +where TeraFlowSDN is deployed. + ## Table of Content: - [2.1. Configure the Python environment](./2-1-python-environment.md) - [2.2. OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services](./2-2-ofc22.md) diff --git a/tutorial/2-1-python-environment.md b/tutorial/2-1-python-environment.md index e03e3daff..940a1183a 100644 --- a/tutorial/2-1-python-environment.md +++ b/tutorial/2-1-python-environment.md @@ -1,9 +1,9 @@ # 2.1. Configure Python Environment -This section describes how to configure the Python environment to run experiments and develop code for the ETSI -TeraFlowSDN controller. -In particular, we use [PyEnv](https://github.com/pyenv/pyenv) to install the appropriate version of Python and manage -the virtual environments. +This section describes how to configure the Python environment to run experiments and +develop code for the ETSI TeraFlowSDN controller.
+In particular, we use [PyEnv](https://github.com/pyenv/pyenv) to install the appropriate +version of Python and manage the virtual environments. ## 2.1.1. Upgrade the Ubuntu distribution @@ -22,6 +22,12 @@ sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev li ## 2.1.3. Install PyEnv + +We recommend installing PyEnv through +[PyEnv Installer](https://github.com/pyenv/pyenv-installer). +Below you can find the instructions, but we refer you to the link for updated +instructions. + ```bash curl https://pyenv.run | bash # When finished, edit ~/.bash_profile // ~/.profile // ~/.bashrc as the installer proposes. @@ -32,7 +38,8 @@ eval "$(pyenv init -)" eval "$(pyenv virtualenv-init -)" ``` -In case .bashrc is not linked properly to your profile, you may need to append the following line into your local .profile file: +In case .bashrc is not linked properly to your profile, you may need to append the +following line into your local .profile file: ```bash # Open ~/.profile and append this line: @@ -48,40 +55,76 @@ sudo reboot ## 2.1.5. Install Python 3.9 over PyEnv + +ETSI TeraFlowSDN uses Python 3.9 by default. +You should install the latest update of Python 3.9. +To find the latest version available in PyEnv, you can run the following command: + +```bash +pyenv install --list | grep " 3.9" +``` + +At the time of writing, this command will output the following list: + +``` + 3.9.0 + 3.9-dev + 3.9.1 + 3.9.2 + 3.9.4 + 3.9.5 + 3.9.6 + 3.9.7 + 3.9.8 + 3.9.9 + 3.9.10 + 3.9.11 + 3.9.12 + 3.9.13 + 3.9.14 ** always select the latest version ** +``` + +Therefore, the latest version is Python 3.9.14. +To install this version, you should run: + ```bash -pyenv install 3.9.13 - # This command might take some minutes depending on your Internet connection speed and the performance of your VM. +pyenv install 3.9.14 + # This command might take some minutes depending on your Internet connection speed + # and the performance of your VM. ``` ## 2.1.6. 
Create the Virtual Environment for TeraFlowSDN -The following commands create a virtual environment named as `tfs` using Python v3.9.13 and associate that environment -with the current folder, i.e., `~/tfs-ctrl`. That way, when you are in that folder, the associated virtual environment -will be used, thus inheriting the Python interpreter, i.e., Python v3.9.13, and the Python packages installed on it. +The following commands create a virtual environment named as `tfs` using Python 3.9 and +associate that environment with the current folder, i.e., `~/tfs-ctrl`. +That way, when you are in that folder, the associated virtual environment will be used, +thus inheriting the Python interpreter, i.e., Python 3.9, and the Python packages +installed on it. ```bash cd ~/tfs-ctrl -pyenv virtualenv 3.9.13 tfs -pyenv local 3.9.13/envs/tfs +pyenv virtualenv 3.9.14 tfs +pyenv local 3.9.14/envs/tfs ``` -In case that the correct pyenv does not get automatically activated when you change to the tfs-ctrl/ folder, then execute the following command: +In case that the correct pyenv does not get automatically activated when you change to +the tfs-ctrl/ folder, then execute the following command: ```bash cd ~/tfs-ctrl -pyenv activate 3.9.13/envs/tfs +pyenv activate 3.9.14/envs/tfs ``` -After completing these commands, you should see in your prompt that now you're within the virtual environment -`3.9.13/envs/tfs` on folder `~/tfs-ctrl`: +After completing these commands, you should see in your prompt that now you're within +the virtual environment `3.9.14/envs/tfs` on folder `~/tfs-ctrl`: ``` -(3.9.13/envs/tfs) tfs@tfs-vm:~/tfs-ctrl$ +(3.9.14/envs/tfs) tfs@tfs-vm:~/tfs-ctrl$ ``` ## 2.1.7. Install the basic Python packages within the virtual environment -From within the `3.9.13/envs/tfs` environment on folder `~/tfs-ctrl`, run the following commands to install the basic -Python packages required to work with TeraFlowSDN. 
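As an aside, the "always select the latest version" step above can be scripted with a natural version sort. Below is a self-contained sketch, where the hard-coded list stands in for the output of `pyenv install --list`:

```shell
# `sort -V` compares version strings numerically, so 3.9.14 ranks above 3.9.9.
printf '%s\n' 3.9.2 3.9.9 3.9.14 3.9.10 | sort -V | tail -n 1   # prints: 3.9.14
```

A plain lexicographic sort would wrongly place 3.9.9 after 3.9.14, which is why the `-V` (version sort) flag matters here.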
+From within the `3.9.14/envs/tfs` environment on folder `~/tfs-ctrl`, run the following +commands to install the basic Python packages required to work with TeraFlowSDN. ```bash cd ~/tfs-ctrl ./install_requirements.sh diff --git a/tutorial/2-2-ofc22.md b/tutorial/2-2-ofc22.md index 37dfb4032..3b55a0961 100644 --- a/tutorial/2-2-ofc22.md +++ b/tutorial/2-2-ofc22.md @@ -1,38 +1,40 @@ # 2.2. OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services -This functional test reproduces the live demonstration "Demonstration of Zero-touch Device and L3-VPN Service Management -Using the TeraFlow Cloud-native SDN Controller" carried out at -[OFC'22](https://ieeexplore.ieee.org/document/9748575). - - +This functional test reproduces the live demonstration *Demonstration of Zero-touch +Device and L3-VPN Service Management Using the TeraFlow Cloud-native SDN Controller* +carried out at [OFC'22](https://ieeexplore.ieee.org/document/9748575) / +[Open access](https://research.chalmers.se/en/publication/c397ef36-837f-416d-a44d-6d3b561d582a). ## 2.2.1. Functional test folder -This functional test can be found in folder `./src/tests/ofc22/`. A convenience alias `./ofc22/` pointing to that folder -has been defined. +This functional test can be found in folder `./src/tests/ofc22/`. +A convenience alias `./ofc22/` pointing to that folder has been defined. ## 2.2.2. Execute with real devices This functional test is designed to operate both with real and emulated devices. -By default, emulated devices are used; however, if you have access to real devices, you can create/modify the files -`./ofc22/tests/Objects.py` and `./ofc22/tests/Credentials.py` to point to your devices, and map to your own network -topology. -Otherwise, you can modify the `./ofc22/tests/descriptors_emulated.json` that is designed to be uploaded through the -WebUI instead of using the command line scripts. 
-Note that the default scenario assumes devices R2 and R4 are always emulated, while devices R1, R3, and O1 -can be configured as emulated or real devices. - -__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver, - have to be considered as experimental. The configuration and monitoring capabilities they support are +By default, emulated devices are used; +however, if you have access to real devices, you can create/modify the files +`./ofc22/tests/Objects.py` and `./ofc22/tests/Credentials.py` to point to your devices, +and map to your own network topology. +Otherwise, you can modify the `./ofc22/tests/descriptors_emulated.json` that is designed +to be uploaded through the WebUI instead of using the command line scripts. +Note that the default scenario assumes devices R2 and R4 are always emulated, while +devices R1, R3, and O1 can be configured as emulated or real devices. + +__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver, + P4Driver, and TransportApiDriver, have to be considered as experimental. + The configuration and monitoring capabilities they support are limited or partially implemented/tested. Use them with care. ## 2.2.3. Deployment and Dependencies -To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN -controller instance as described in the [Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python +To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes +environment and a TeraFlowSDN controller instance as described in the +[Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python environment as described in [Tutorial: Run Experiments Guide > 2.1. Configure Python Environment](./2-1-python-environment.md). 
Remember to source the scenario settings, e.g., `cd ~/tfs-ctrl && source ofc22/deploy_specs.sh` in each terminal you open. @@ -42,29 +44,33 @@ Then, re-build the protocol buffers code from the proto files: ## 2.2.4. Access to the WebUI and Dashboard -When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards as described in +When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards +as described in [Tutorial: Deployment Guide > 1.4. Access TeraFlowSDN WebUI and Grafana Dashboards](./1-4-access-webui.md) Notes: - the default credentials for the Grafana Dashboiard is user/pass: `admin`/`admin123+`. -- in Grafana, you will find the "L3-Monitorng" in the "Starred dashboards" section. +- in Grafana, you will find the *L3-Monitorng* in the *Starred dashboards* section. ## 2.2.5. Test execution -Before executing the tests, the environment variables need to be prepared. First, make sure to load your deployment variables by: +Before executing the tests, the environment variables need to be prepared. +First, make sure to load your deployment variables by: ``` source my_deploy.sh ``` -Then, you also need to load the environment variables to support the execution of the tests by: +Then, you also need to load the environment variables to support the execution of the +tests by: ``` source tfs_runtime_env_vars.sh ``` -You also need to make sure that you have all the gRPC-generate code in your folder. To do so, run: +You also need to make sure that you have all the gRPC-generated code in your folder. +To do so, run: ``` proto/generate_code_python.sh @@ -76,9 +82,10 @@ To execute this functional test, four main steps needs to be carried out: 3. L3VPN Service removal 4. Cleanup -Upon the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. If there -is some error during the execution, you should see a detailed report on the error. See the troubleshooting section if -needed.
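The reason both files must be `source`d rather than executed is that sourcing runs them in the current shell, so their `export`ed variables persist for the commands that follow; running `./my_deploy.sh` instead would start a sub-shell that discards the variables on exit. Below is a self-contained demonstration with a throwaway script standing in for `my_deploy.sh` (the variable name matches the tutorial settings, but any would do):

```shell
# Throwaway script standing in for my_deploy.sh / tfs_runtime_env_vars.sh.
tmp=$(mktemp)
echo 'export TFS_K8S_NAMESPACE="tfs"' > "$tmp"

# Sourcing runs the script in the current shell, so the export survives.
# (`.` is the POSIX spelling; bash also accepts `source`.)
. "$tmp"
echo "$TFS_K8S_NAMESPACE"   # prints: tfs

rm -f "$tmp"
```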
+As the execution of each test progresses, a report will be generated indicating +*PASSED* / *FAILED* / *SKIPPED*. +If there is some error during the execution, you should see a detailed report on the error. +See the troubleshooting section if needed. You can check the logs of the different components using the appropriate `scripts/show_logs_[component].sh` scripts after you execute each step. @@ -86,57 +93,70 @@ after you execute each step. ### 2.2.5.1. Device bootstrapping -This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The -expected results are: +This step configures some basic entities (Context and Topology), the devices, and the +links in the topology. +The expected results are: - The devices to be added into the Topology. - The devices to be pre-configured and initialized as ENABLED by the Automation component. -- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to - automatically start. +- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to automatically start. - The links to be added to the topology. To run this step, you can do it from the WebUI by uploading the file `./ofc22/tests/descriptors_emulated.json` that -contains the descriptors of the contexts, topologies, devices, and links, or by executing the -`./ofc22/run_test_01_bootstrap.sh` script. +contains the descriptors of the contexts, topologies, devices, and links, or by +executing the `./ofc22/run_test_01_bootstrap.sh` script. -When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you should see the monitoring data -being plotted and updated every 5 seconds (by default). Given that there is no service configured, you should see a -0-valued flat plot.
+When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you +should see the monitoring data being plotted and updated every 5 seconds (by default). +Given that there is no service configured, you should see a 0-valued flat plot. -In the WebUI, select the "admin" Context. Then, in the "Devices" tab you should see that 5 different emulated devices -have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the "Services" tab -you should see that there is no service created. Note here that the emulated devices produce synthetic -randomly-generated data and do not care about the services configured. +In the WebUI, select the *admin* Context. +Then, in the *Devices* tab you should see that 5 different emulated devices have been +created and activated: 4 packet routers, and 1 optical line system controller. +Besides, in the *Services* tab you should see that there is no service created. +Note here that the emulated devices produce synthetic randomly-generated monitoring data +and do not represent any particular services configured. ### 2.2.5.2. L3VPN Service creation -This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance. +This step configures a new service emulating the request an OSM WIM would make by means +of a Mock OSM instance. To run this step, execute the `./ofc22/run_test_02_create_service.sh` script. -When the script finishes, check the WebUI "Services" tab. You should see that two services have been created, one for -the optical layer and another for the packet layer. Besides, you can check the "Devices" tab to see the configuration -rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured, -you should see the plots with the monitored data for the device. By default, device R1-EMU is selected. +When the script finishes, check the WebUI *Services* tab.
You should see that two +services have been created, one for the optical layer and another for the packet layer. +Besides, you can check the *Devices* tab to see the configuration rules that have been +configured in each device. +In the Grafana Dashboard, given that there is now a service configured, you should see +the plots with the monitored data for the device. +By default, device R1-EMU is selected. ### 2.2.5.3. L3VPN Service removal -This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock -OSM instance. +This step deconfigures the previously created services emulating the request an OSM WIM +would make by means of a Mock OSM instance. -To run this step, execute the `./ofc22/run_test_03_delete_service.sh` script, or delete the L3NM service from the WebUI. +To run this step, execute the `./ofc22/run_test_03_delete_service.sh` script, or delete +the L3NM service from the WebUI. -When the script finishes, check the WebUI "Services" tab. You should see that the two services have been removed. -Besides, in the "Devices" tab you can see that the appropriate configuration rules have been deconfigured. In the -Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again. +When the script finishes, check the WebUI *Services* tab. +You should see that the two services have been removed. +Besides, in the *Devices* tab you can see that the appropriate configuration rules have +been deconfigured. +In the Grafana Dashboard, given that there is no service configured, you should see a +0-valued flat plot again. ### 2.2.5.4. Cleanup -This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness. +This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities +for completeness. To run this step, execute the `./ofc22/run_test_04_cleanup.sh` script. 
-When the script finishes, check the WebUI "Devices" tab, you should see that the devices have been removed. Besides, in -the "Services" tab you can see that the "admin" Context has no services given that that context has been removed. +When the script finishes, check the WebUI *Devices* tab; you should see that the devices +have been removed. +Besides, in the *Services* tab you can see that the *admin* Context has no services +given that the context has been removed. diff --git a/tutorial/2-4-ecoc22.md b/tutorial/2-4-ecoc22.md index b6f92aadc..2b0292a08 100644 --- a/tutorial/2-4-ecoc22.md +++ b/tutorial/2-4-ecoc22.md @@ -1,30 +1,34 @@ # 2.4. ECOC'22 Demo - Disjoint DC-2-DC L3VPN Service (WORK IN PROGRESS) -This functional test reproduces the experimental assessment of "Experimental Demonstration of Transport Network Slicing -with SLA Using the TeraFlowSDN Controller" presented at [ECOC'22](https://www.optica.org/en-us/events/topical_meetings/ecoc/schedule/?day=Tuesday#Tuesday). +This functional test reproduces the experimental assessment of *Experimental +Demonstration of Transport Network Slicing with SLA Using the TeraFlowSDN Controller* +presented at [ECOC'22](https://www.optica.org/en-us/events/topical_meetings/ecoc/schedule/?day=Tuesday#Tuesday). ## 2.4.1. Functional test folder -This functional test can be found in folder `./src/tests/ecoc22/`. A convenience alias `./ecoc22/` pointing to that -folder has been defined. +This functional test can be found in folder `./src/tests/ecoc22/`. +A convenience alias `./ecoc22/` pointing to that folder has been defined. ## 2.4.2. Execute with real devices -This functional test has only been tested with emulated devices; however, if you have access to real devices, you can -modify the files `./ecoc22/tests/Objects.py` and `./ecoc22/tests/Credentials.py` to point to your devices, and map to -your network topology.
-Otherwise, you can modify the `./ecoc22/tests/descriptors_emulated.json` that is designed to be uploaded through the -WebUI instead of using the command line scripts. +This functional test has only been tested with emulated devices; +however, if you have access to real devices, you can modify the files +`./ecoc22/tests/Objects.py` and `./ecoc22/tests/Credentials.py` to point to your devices, +and map to your network topology. +Otherwise, you can modify the `./ecoc22/tests/descriptors_emulated.json` that is +designed to be uploaded through the WebUI instead of using the command line scripts. -__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver, - have to be considered as experimental. The configuration and monitoring capabilities they support are +__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver, + P4Driver, and TransportApiDriver, have to be considered as experimental. + The configuration and monitoring capabilities they support are limited or partially implemented/tested. Use them with care. ## 2.4.3. Deployment and Dependencies -To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN -controller instance as described in the [Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python +To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes +environment and a TeraFlowSDN controller instance as described in the +[Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python environment as described in [Tutorial: Run Experiments Guide > 2.1. Configure Python Environment](./2-1-python-environment.md). Remember to source the scenario settings, e.g., `cd ~/tfs-ctrl && source ecoc22/deploy_specs.sh` in each terminal you open. @@ -35,7 +39,8 @@ Then, re-build the protocol buffers code from the proto files: ## 2.4.4. 
Access to the WebUI and Dashboard -When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards as described in +When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards +as described in [Tutorial: Deployment Guide > 1.4. Access TeraFlowSDN WebUI and Grafana Dashboards](./1-4-access-webui.md) Notes: @@ -51,9 +56,11 @@ To execute this functional test, four main steps needs to be carried out: 3. L3VPN Service removal 4. Cleanup -Upon the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. If there -is some error during the execution, you should see a detailed report on the error. See the troubleshooting section if -needed. +As the execution of each test progresses, a report will be generated indicating +*PASSED* / *FAILED* / *SKIPPED*. +If there is some error during the execution, you should see a detailed report on the +error. +See the troubleshooting section if needed. You can check the logs of the different components using the appropriate `scripts/show_logs_[component].sh` scripts after you execute each step. @@ -61,57 +68,72 @@ after you execute each step. ### 2.4.5.1. Device bootstrapping -This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The -expected results are: +This step configures some basic entities (Context and Topology), the devices, and the +links in the topology. +The expected results are: - The devices to be added into the Topology. -- The devices to be pre-configured and initialized as ENABLED by the Automation component. -- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to - automatically start. +- The devices to be pre-configured and initialized as *ENABLED* by the Automation component. +- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated + and data collection to automatically start.
- The links to be added to the topology. To run this step, you can do it from the WebUI by uploading the file `./ecoc22/tests/descriptors_emulated.json` that contains the descriptors of the contexts, topologies, devices, and links, or by executing the `./ecoc22/run_test_01_bootstrap.sh` script. -When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you should see the monitoring data -being plotted and updated every 5 seconds (by default). Given that there is no service configured, you should see a -0-valued flat plot. +When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you +should see the monitoring data being plotted and updated every 5 seconds (by default). +Given that there is no service configured, you should see a 0-valued flat plot. -In the WebUI, select the "admin" Context. Then, in the "Devices" tab you should see that 5 different emulated devices -have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the "Services" tab -you should see that there is no service created. Note here that the emulated devices produce synthetic -randomly-generated data and do not care about the services configured. +In the WebUI, select the *admin* Context. +Then, in the *Devices* tab you should see that 5 different emulated devices have been +created and activated: 4 packet routers, and 1 optical line system controller. +Besides, in the *Services* tab you should see that there is no service created. +Note here that the emulated devices produce synthetic randomly-generated monitoring data +and do not represent any particular services configured. ### 2.4.5.2. L3VPN Service creation -This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance. +This step configures a new service emulating the request an OSM WIM would make by means +of a Mock OSM instance. To run this step, execute the `./ecoc22/run_test_02_create_service.sh` script.
-When the script finishes, check the WebUI "Services" tab. You should see that two services have been created, one for -the optical layer and another for the packet layer. Besides, you can check the "Devices" tab to see the configuration -rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured, -you should see the plots with the monitored data for the device. By default, device R1-EMU is selected. +When the script finishes, check the WebUI *Services* tab. +You should see that two services have been created, one for the optical layer and +another for the packet layer. +Besides, you can check the *Devices* tab to see the configuration rules that have been +configured in each device. +In the Grafana Dashboard, given that there is now a service configured, you should see +the plots with the monitored data for the device. +By default, device R1-EMU is selected. ### 2.4.5.3. L3VPN Service removal -This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock -OSM instance. +This step deconfigures the previously created services emulating the request an OSM WIM +would make by means of a Mock OSM instance. -To run this step, execute the `./ecoc22/run_test_03_delete_service.sh` script, or delete the L3NM service from the WebUI. +To run this step, execute the `./ecoc22/run_test_03_delete_service.sh` script, or delete +the L3NM service from the WebUI. -When the script finishes, check the WebUI "Services" tab. You should see that the two services have been removed. -Besides, in the "Devices" tab you can see that the appropriate configuration rules have been deconfigured. In the -Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again. +When the script finishes, check the WebUI "Services" tab. You should see that the two +services have been removed. 
+Besides, in the *Devices* tab you can see that the appropriate configuration rules have +been deconfigured. +In the Grafana Dashboard, given that there is no service configured, you should see a +0-valued flat plot again. ### 2.4.5.4. Cleanup -This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness. +This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities +for completeness. To run this step, execute the `./ecoc22/run_test_04_cleanup.sh` script. -When the script finishes, check the WebUI "Devices" tab, you should see that the devices have been removed. Besides, in -the "Services" tab you can see that the "admin" Context has no services given that that context has been removed. +When the script finishes, check the WebUI "Devices" tab, you should see that the devices +have been removed. +Besides, in the *Services* tab you can see that the *admin* Context has no services +given that that context has been removed. diff --git a/tutorial/README.md b/tutorial/README.md index 836434e51..2d3b1050f 100644 --- a/tutorial/README.md +++ b/tutorial/README.md @@ -2,22 +2,26 @@ ## Abstract -This document provides a walkthrough on how to prepare your environment for executing and contributing to the -[ETSI TeraFlowSDN OSG](https://tfs.etsi.org/). +This document provides a walkthrough on how to prepare your environment for executing +and contributing to the [ETSI TeraFlowSDN OSG](https://tfs.etsi.org/). -This walkthrough makes some reasonable assumptions to simplify the deployment of the ETSI TeraFlowSDN controller, the -execution of experiments and tests, and development of new contributions. In particular, we assume: +This walkthrough makes some reasonable assumptions to simplify the deployment of the +ETSI TeraFlowSDN controller, the execution of experiments and tests, and development of +new contributions. 
+In particular, we assume: - [VirtualBox](https://www.virtualbox.org/) version 6.1.34 r150636 -- [VSCode](https://code.visualstudio.com/) with the "Remote SSH" extension +- [VSCode](https://code.visualstudio.com/) with the + [*Remote SSH*](https://code.visualstudio.com/docs/remote/ssh) extension - VM software: - [Ubuntu Server 20.04 LTS](https://releases.ubuntu.com/20.04/) - [MicroK8s](https://microk8s.io/) ## Contact -If your environment does not fit with the proposed assumptions and you experience some trouble preparing it to work -with the ETSI TeraFlowSDN controller, contact the ETSI TeraFlowSDN OSG team through +If your environment does not fit with the proposed assumptions and you experience issues +preparing it to work with the ETSI TeraFlowSDN controller, contact the ETSI TeraFlowSDN +OSG team through [Slack](https://join.slack.com/t/teraflowsdn/shared_invite/zt-18gc5jvkh-1_DEZHFhxeuOqzJZPq~U~A) -- GitLab From f0eae51daa232bccd21e4848a6045a46a04d851e Mon Sep 17 00:00:00 2001 From: gifrerenom Date: Tue, 18 Oct 2022 08:18:06 +0000 Subject: [PATCH 6/8] Tutorial Cleanup --- src/tests/ecoc22/descriptors_emulated.json | 122 ++++++++++++++++++ .../descriptors_emulated-BigNet.json | 0 .../descriptors_emulated-DC_CSGW_TN.json | 0 .../descriptors_emulated-DC_CSGW_TN_OLS.json | 0 tutorial/2-0-run-experiments.md | 9 +- tutorial/2-4-ecoc22.md | 73 +++++------ tutorial/3-0-development.md | 18 ++- ...ponent.md => 3-2-develop-new-component.md} | 22 ++-- ...{3-2-develop-cth.md => 3-4-develop-cth.md} | 6 +- tutorial/README.md | 8 +- 10 files changed, 192 insertions(+), 66 deletions(-) create mode 100644 src/tests/ecoc22/descriptors_emulated.json rename src/tests/ecoc22/{ => other_scenarios}/descriptors_emulated-BigNet.json (100%) rename src/tests/ecoc22/{ => other_scenarios}/descriptors_emulated-DC_CSGW_TN.json (100%) rename src/tests/ecoc22/{ => other_scenarios}/descriptors_emulated-DC_CSGW_TN_OLS.json (100%) rename tutorial/{3-X-develop-new-component.md => 
3-2-develop-new-component.md} (91%) rename tutorial/{3-2-develop-cth.md => 3-4-develop-cth.md} (88%) diff --git a/src/tests/ecoc22/descriptors_emulated.json b/src/tests/ecoc22/descriptors_emulated.json new file mode 100644 index 000000000..46e518b24 --- /dev/null +++ b/src/tests/ecoc22/descriptors_emulated.json @@ -0,0 +1,122 @@ +{ + "contexts": [ + { + "context_id": {"context_uuid": {"uuid": "admin"}}, + "topology_ids": [], "service_ids": [] + } + ], + "topologies": [ + { + "topology_id": {"context_id": {"context_uuid": {"uuid": "admin"}}, "topology_uuid": {"uuid": "admin"}}, + "device_ids": [], "link_ids": [] + } + ], + "devices": [ + { + "device_id": {"device_uuid": {"uuid": "DC1-GW"}}, "device_type": "emu-datacenter", "device_drivers": [0], + "device_endpoints": [], "device_operational_status": 1, "device_config": {"config_rules": [ + {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}}, + {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "0"}}, + {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": "{\"endpoints\": [{\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"eth1\"}, {\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"eth2\"}, {\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"int\"}]}"}} + ]} + }, + { + "device_id": {"device_uuid": {"uuid": "DC2-GW"}}, "device_type": "emu-datacenter", "device_drivers": [0], + "device_endpoints": [], "device_operational_status": 1, "device_config": {"config_rules": [ + {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}}, + {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "0"}}, + {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": "{\"endpoints\": [{\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"eth1\"}, {\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"eth2\"}, {\"sample_types\": [], \"type\": 
\"copper\", \"uuid\": \"int\"}]}"}} + ]} + }, + { + "device_id": {"device_uuid": {"uuid": "CS1-GW1"}}, "device_type": "packet-router", "device_drivers": [1], + "device_endpoints": [], "device_operational_status": 1, "device_config": {"config_rules": [ + {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}}, + {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "0"}}, + {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": "{\"endpoints\": [{\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"10/1\"}, {\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"1/1\"}]}"}} + ]} + }, + { + "device_id": {"device_uuid": {"uuid": "CS1-GW2"}}, "device_type": "packet-router", "device_drivers": [1], + "device_endpoints": [], "device_operational_status": 1, "device_config": {"config_rules": [ + {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}}, + {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "0"}}, + {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": "{\"endpoints\": [{\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"10/1\"}, {\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"1/1\"}]}"}} + ]} + }, + { + "device_id": {"device_uuid": {"uuid": "CS2-GW1"}}, "device_type": "packet-router", "device_drivers": [1], + "device_endpoints": [], "device_operational_status": 1, "device_config": {"config_rules": [ + {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}}, + {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "0"}}, + {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": "{\"endpoints\": [{\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"10/1\"}, {\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"1/1\"}]}"}} + ]} + }, + { + "device_id": {"device_uuid": {"uuid": "CS2-GW2"}}, 
"device_type": "packet-router", "device_drivers": [1], + "device_endpoints": [], "device_operational_status": 1, "device_config": {"config_rules": [ + {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}}, + {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "0"}}, + {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": "{\"endpoints\": [{\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"10/1\"}, {\"sample_types\": [], \"type\": \"copper\", \"uuid\": \"1/1\"}]}"}} + ]} + }, + { + "device_id": {"device_uuid": {"uuid": "OLS"}}, "device_type": "emu-open-line-system", "device_drivers": [0], + "device_endpoints": [], "device_operational_status": 1, "device_config": {"config_rules": [ + {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}}, + {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "0"}}, + {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": "{\"endpoints\": [{\"sample_types\": [], \"type\": \"optical\", \"uuid\": \"aade6001-f00b-5e2f-a357-6a0a9d3de870\"}, {\"sample_types\": [], \"type\": \"optical\", \"uuid\": \"eb287d83-f05e-53ec-ab5a-adf6bd2b5418\"}, {\"sample_types\": [], \"type\": \"optical\", \"uuid\": \"0ef74f99-1acc-57bd-ab9d-4b958b06c513\"}, {\"sample_types\": [], \"type\": \"optical\", \"uuid\": \"50296d99-58cc-5ce7-82f5-fc8ee4eec2ec\"}]}"}} + ]} + } + ], + "links": [ + { + "link_id": {"link_uuid": {"uuid": "DC1-GW/eth1==CS1-GW1/10/1"}}, "link_endpoint_ids": [ + {"device_id": {"device_uuid": {"uuid": "DC1-GW"}}, "endpoint_uuid": {"uuid": "eth1"}}, + {"device_id": {"device_uuid": {"uuid": "CS1-GW1"}}, "endpoint_uuid": {"uuid": "10/1"}} + ] + }, + { + "link_id": {"link_uuid": {"uuid": "DC1-GW/eth2==CS1-GW2/10/1"}}, "link_endpoint_ids": [ + {"device_id": {"device_uuid": {"uuid": "DC1-GW"}}, "endpoint_uuid": {"uuid": "eth2"}}, + {"device_id": {"device_uuid": {"uuid": 
"CS1-GW2"}}, "endpoint_uuid": {"uuid": "10/1"}} + ] + }, + { + "link_id": {"link_uuid": {"uuid": "DC2-GW/eth1==CS2-GW1/10/1"}}, "link_endpoint_ids": [ + {"device_id": {"device_uuid": {"uuid": "DC2-GW"}}, "endpoint_uuid": {"uuid": "eth1"}}, + {"device_id": {"device_uuid": {"uuid": "CS2-GW1"}}, "endpoint_uuid": {"uuid": "10/1"}} + ] + }, + { + "link_id": {"link_uuid": {"uuid": "DC2-GW/eth2==CS2-GW2/10/1"}}, "link_endpoint_ids": [ + {"device_id": {"device_uuid": {"uuid": "DC2-GW"}}, "endpoint_uuid": {"uuid": "eth2"}}, + {"device_id": {"device_uuid": {"uuid": "CS2-GW2"}}, "endpoint_uuid": {"uuid": "10/1"}} + ] + }, + { + "link_id": {"link_uuid": {"uuid": "CS1-GW1/1/1==OLS/aade6001-f00b-5e2f-a357-6a0a9d3de870"}}, "link_endpoint_ids": [ + {"device_id": {"device_uuid": {"uuid": "CS1-GW1"}}, "endpoint_uuid": {"uuid": "1/1"}}, + {"device_id": {"device_uuid": {"uuid": "OLS"}}, "endpoint_uuid": {"uuid": "aade6001-f00b-5e2f-a357-6a0a9d3de870"}} + ] + }, + { + "link_id": {"link_uuid": {"uuid": "CS1-GW2/1/1==OLS/eb287d83-f05e-53ec-ab5a-adf6bd2b5418"}}, "link_endpoint_ids": [ + {"device_id": {"device_uuid": {"uuid": "CS1-GW2"}}, "endpoint_uuid": {"uuid": "1/1"}}, + {"device_id": {"device_uuid": {"uuid": "OLS"}}, "endpoint_uuid": {"uuid": "eb287d83-f05e-53ec-ab5a-adf6bd2b5418"}} + ] + }, + { + "link_id": {"link_uuid": {"uuid": "CS2-GW1/1/1==OLS/0ef74f99-1acc-57bd-ab9d-4b958b06c513"}}, "link_endpoint_ids": [ + {"device_id": {"device_uuid": {"uuid": "CS2-GW1"}}, "endpoint_uuid": {"uuid": "1/1"}}, + {"device_id": {"device_uuid": {"uuid": "OLS"}}, "endpoint_uuid": {"uuid": "0ef74f99-1acc-57bd-ab9d-4b958b06c513"}} + ] + }, + { + "link_id": {"link_uuid": {"uuid": "CS2-GW2/1/1==OLS/50296d99-58cc-5ce7-82f5-fc8ee4eec2ec"}}, "link_endpoint_ids": [ + {"device_id": {"device_uuid": {"uuid": "CS2-GW2"}}, "endpoint_uuid": {"uuid": "1/1"}}, + {"device_id": {"device_uuid": {"uuid": "OLS"}}, "endpoint_uuid": {"uuid": "50296d99-58cc-5ce7-82f5-fc8ee4eec2ec"}} + ] + } + ] +} diff --git 
a/src/tests/ecoc22/descriptors_emulated-BigNet.json b/src/tests/ecoc22/other_scenarios/descriptors_emulated-BigNet.json similarity index 100% rename from src/tests/ecoc22/descriptors_emulated-BigNet.json rename to src/tests/ecoc22/other_scenarios/descriptors_emulated-BigNet.json diff --git a/src/tests/ecoc22/descriptors_emulated-DC_CSGW_TN.json b/src/tests/ecoc22/other_scenarios/descriptors_emulated-DC_CSGW_TN.json similarity index 100% rename from src/tests/ecoc22/descriptors_emulated-DC_CSGW_TN.json rename to src/tests/ecoc22/other_scenarios/descriptors_emulated-DC_CSGW_TN.json diff --git a/src/tests/ecoc22/descriptors_emulated-DC_CSGW_TN_OLS.json b/src/tests/ecoc22/other_scenarios/descriptors_emulated-DC_CSGW_TN_OLS.json similarity index 100% rename from src/tests/ecoc22/descriptors_emulated-DC_CSGW_TN_OLS.json rename to src/tests/ecoc22/other_scenarios/descriptors_emulated-DC_CSGW_TN_OLS.json diff --git a/tutorial/2-0-run-experiments.md b/tutorial/2-0-run-experiments.md index ab3c390e1..8b5c8f6b8 100644 --- a/tutorial/2-0-run-experiments.md +++ b/tutorial/2-0-run-experiments.md @@ -1,4 +1,4 @@ -# 2. Run Experiments Guide (WORK IN PROGRESS) +# 2. Run Experiments Guide This section walks you through the process of running experiments in TeraFlowSDN on top of a Oracle VirtualBox-based VM running MicroK8s Kubernetes platform. @@ -8,9 +8,12 @@ commands you might need, configuring the network topology, and executing differe Note that the steps followed here are likely to work regardless of the platform (VM) where TeraFlowSDN is deployed over. +Note also that this guide will keep growing with the new experiments and demonstrations +that are being carried out involving the ETSI TeraFlowSDN controller. + ## Table of Content: - [2.1. Configure the Python environment](./2-1-python-environment.md) - [2.2. OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services](./2-2-ofc22.md) -- [2.3. 
OECC/PSC'22 Demo (WORK IN PROGRESS)](./2-3-oeccpsc22.md) -- [2.4. ECOC'22 Demo - Disjoint DC-2-DC L3VPN Service (WORK IN PROGRESS)](./2-4-ecoc22.md) +- [2.3. OECC/PSC'22 Demo (PENDING)](./2-3-oeccpsc22.md) +- [2.4. ECOC'22 Demo - Disjoint DC-2-DC L2VPN Service](./2-4-ecoc22.md) - [2.5. NFV-SDN'22 Demo (PENDING)](./2-5-nfvsdn22.md) diff --git a/tutorial/2-4-ecoc22.md b/tutorial/2-4-ecoc22.md index 2b0292a08..87bc12be7 100644 --- a/tutorial/2-4-ecoc22.md +++ b/tutorial/2-4-ecoc22.md @@ -1,4 +1,4 @@ -# 2.4. ECOC'22 Demo - Disjoint DC-2-DC L3VPN Service (WORK IN PROGRESS) +# 2.4. ECOC'22 Demo - Disjoint DC-2-DC L2VPN Service This functional test reproduces the experimental assessment of *Experimental Demonstration of Transport Network Slicing with SLA Using the TeraFlowSDN Controller* @@ -31,29 +31,31 @@ environment and a TeraFlowSDN controller instance as described in the [Tutorial: Deployment Guide](./1-0-deployment.md), and you configured the Python environment as described in [Tutorial: Run Experiments Guide > 2.1. Configure Python Environment](./2-1-python-environment.md). -Remember to source the scenario settings, e.g., `cd ~/tfs-ctrl && source ecoc22/deploy_specs.sh` in each terminal you open. +Remember to source the scenario settings, e.g., `cd ~/tfs-ctrl && source ecoc22/deploy_specs.sh` +in each terminal you open. Then, re-build the protocol buffers code from the proto files: `./proto/generate_code_python.sh` -## 2.4.4. Access to the WebUI and Dashboard +## 2.4.4. Access to the WebUI -When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards -as described in +When the deployment completes, you can connect to the TeraFlowSDN WebUI as described in [Tutorial: Deployment Guide > 1.4. Access TeraFlowSDN WebUI and Grafana Dashboards](./1-4-access-webui.md) Notes: -- the default credentials for the Grafana Dashboiard is user/pass: `admin`/`admin123+`. 
-- this functional test does not involve the Monitoring component, so no monitoring data is plotted in Grafana. +- this experiment does not make use of Monitoring, so Grafana is not used. +- the default credentials for the Grafana Dashboard is user/pass: `admin`/`admin123+`. +- this functional test does not involve the Monitoring component, so no monitoring + data is plotted in Grafana. ## 2.4.5. Test execution To execute this functional test, four main steps needs to be carried out: 1. Device bootstrapping -2. L3VPN Service creation -3. L3VPN Service removal +2. L2VPN Slice and Services creation +3. L2VPN Slice and Services removal 4. Cleanup Upon the execution of each test progresses, a report will be generated indicating @@ -62,8 +64,8 @@ If there is some error during the execution, you should see a detailed report on error. See the troubleshooting section if needed. -You can check the logs of the different components using the appropriate `scripts/show_logs_[component].sh` scripts -after you execute each step. +You can check the logs of the different components using the appropriate +`scripts/show_logs_[component].sh` scripts after you execute each step. ### 2.4.5.1. Device bootstrapping @@ -73,57 +75,48 @@ links in the topology. The expected results are: - The devices to be added into the Topology. - The devices to be pre-configured and initialized as *ENABLED* by the Automation component. -- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated - and data collection to automatically start. - The links to be added to the topology. -To run this step, you can do it from the WebUI by uploading the file `./ecoc22/tests/descriptors_emulated.json` that -contains the descriptors of the contexts, topologies, devices, and links, or by executing the -`./ecoc22/run_test_01_bootstrap.sh` script. 
- -When the bootstrapping finishes, check in the Grafana L3-Monitoring Dashboard and you -should see the monitoring data being plotted and updated every 5 seconds (by default). -Given that there is no service configured, you should see a 0-valued flat plot. +To run this step, you can do it from the WebUI by uploading the file +`./ecoc22/tests/descriptors_emulated.json` that contains the descriptors of the contexts, +topologies, devices, and links, or by executing the `./ecoc22/run_test_01_bootstrap.sh` script. In the WebUI, select the *admin* Context. Then, in the *Devices* tab you should see that 5 different emulated devices have been -created and activated: 4 packet routers, and 1 optical line system controller. +created and activated: 4 packet routers, and 1 optical Open Line System (OLS) controller. Besides, in the *Services* tab you should see that there is no service created. -Note here that the emulated devices produce synthetic randomly-generated data and do not -care about the services configured. -### 2.4.5.2. L3VPN Service creation +### 2.4.5.2. L2VPN Slice and Services creation This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance. To run this step, execute the `./ecoc22/run_test_02_create_service.sh` script. -When the script finishes, check the WebUI *Services* tab. -You should see that two services have been created, one for the optical layer and -another for the packet layer. +When the script finishes, check the WebUI *Slices* and *Services* tabs. +You should see that, for the connectivity service requested by MockOSM, one slice has +been created and three services have been created (two for the optical layer and one +for the packet layer). +Note that the two services for the optical layer correspond to the primary (service_uuid +ending with ":0") and the backup (service_uuid ending with ":1") services. +Each of the services indicates the connections and sub-services that are supporting them.
Besides, you can check the *Devices* tab to see the configuration rules that have been configured in each device. -In the Grafana Dashboard, given that there is now a service configured, you should see -the plots with the monitored data for the device. -By default, device R1-EMU is selected. -### 2.4.5.3. L3VPN Service removal +### 2.4.5.3. L2VPN Slice and Services removal -This step deconfigures the previously created services emulating the request an OSM WIM -would make by means of a Mock OSM instance. +This step deconfigures the previously created slices and services emulating the request +an OSM WIM would make by means of a Mock OSM instance. To run this step, execute the `./ecoc22/run_test_03_delete_service.sh` script, or delete -the L3NM service from the WebUI. +the slice from the WebUI. -When the script finishes, check the WebUI "Services" tab. You should see that the two -services have been removed. +When the script finishes, check the WebUI *Slices* and *Services* tabs. You should see +that the slice and the services have been removed. Besides, in the *Devices* tab you can see that the appropriate configuration rules have been deconfigured. -In the Grafana Dashboard, given that there is no service configured, you should see a -0-valued flat plot again. ### 2.4.5.4. Cleanup @@ -133,7 +126,7 @@ for completeness. To run this step, execute the `./ecoc22/run_test_04_cleanup.sh` script. -When the script finishes, check the WebUI "Devices" tab, you should see that the devices +When the script finishes, check the WebUI *Devices* tab; you should see that the devices have been removed. -Besides, in the *Services* tab you can see that the *admin* Context has no services +Besides, in the *Slices* and *Services* tabs you can see that the *admin* Context has no services given that that context has been removed.
diff --git a/tutorial/3-0-development.md b/tutorial/3-0-development.md index c8e7d0d9e..05e03ef8f 100644 --- a/tutorial/3-0-development.md +++ b/tutorial/3-0-development.md @@ -1,12 +1,18 @@ # 3. Development Guide (WORK IN PROGRESS) -This section walks you through the process of developing new components for the TeraFlowSDN controller. For convenience, -this guide assumes you are using the Oracle VirtualBox-based VM running MicroK8s Kubernetes platform as described in the -[Deployment Guide](./1-0-deployment.md). The guide includes the details on +This section walks you through the process of developing new components for +the TeraFlowSDN controller. +In particular, the guide includes the details on how to configure the VSCode IDE, +develop a new component, and debug individual components. + +For convenience, this guide assumes you are using the Oracle VirtualBox-based +VM running MicroK8s Kubernetes platform as described in the +[Deployment Guide](./1-0-deployment.md). +Besides, it assumes you installed the appropriate Python and PyEnv as +described in [2.1. Configure Python Environment](./2-1-python-environment.md). ## Table of Content: - [3.1. Configure VSCode and Connect to the VM](./3-1-configure-vscode.md) -- [3.2. Development Commands, Tricks, and Hints (WORK IN PROGRESS)](./3-2-develop-cth.md) +- [3.2. Developing a new component: Forecaster (WORK IN PROGRESS)](./3-2-develop-new-component.md) - [3.3. Debugging individual components in VSCode](./3-3-debug-comp.md) - -- [3.X. Developing a new component: Forecaster (WORK IN PROGRESS)](./3-X-develop-new-component.md) +- [3.4.
Development Commands, Tricks, and Hints (WORK IN PROGRESS)](./3-4-develop-cth.md) diff --git a/tutorial/3-X-develop-new-component.md b/tutorial/3-2-develop-new-component.md similarity index 91% rename from tutorial/3-X-develop-new-component.md rename to tutorial/3-2-develop-new-component.md index 403527335..8d39b56a8 100644 --- a/tutorial/3-X-develop-new-component.md +++ b/tutorial/3-2-develop-new-component.md @@ -1,7 +1,7 @@ -# 3.X. Developing a new component: Forecaster (WORK IN PROGRESS) +# 3.2. Developing a new component: Forecaster (WORK IN PROGRESS) -## 3.X.1. Preliminary requisites +## 3.2.1. Preliminary requisites As any microservice-based architecture, the components of TeraFlowSDN can be implemented using different programming languages. For the sake of simplicity, and given it is the most widely used programming language in TeraFlow, this tutorial page assumes the reader will use Python. @@ -10,15 +10,15 @@ This tutorial assumes you hace successfully completed the steps in [3.1. Configure VSCode and Connect to the VM](./3-1-configure-vscode.md) to prepare your environment. -## 3.X.2. Create the component folder structure +## 3.2.2. Create the component folder structure The source code of each component of TeraFlowSDN is hosted in a particular folder within the `src` folder. Within that folder, typically, 3 subfolders are created: - Folder `client`: contains a client implementation that the rest of components can use to interact with the component. - See details in [3.X.4. Create the component client](./3-X-develop-new-component.md#3x4-create-the-component-client). + See details in [3.2.4. Create the component client](./3-2-develop-new-component.md#324-create-the-component-client). - Folder `service`: contains the implementation of the service logic. - See details in [3.X.5. Create the component service](./3-X-develop-new-component.md#3x5-create-the-component-service). + See details in [3.2.5.
Create the component service](./3-2-develop-new-component.md#325-create-the-component-service). - Folder `tests`: contains the set of unitary tests to be executed over the component to ensure it is properly implemented. - See details in [3.X.6. Create the component tests](./3-X-develop-new-component.md#3x6-create-the-component-tests). + See details in [3.2.6. Create the component tests](./3-2-develop-new-component.md#326-create-the-component-tests). - File `__init__.py`: defines the component as a sub-package of TeraFlowSDN to facilitate imports. - File `.gitlab-ci.yml`: defines the GitLab CI/CD settings to build, test, and deploy the component in an automated manner. - File `Config.py`: contains particular configuration settings and constants for the component. @@ -33,7 +33,7 @@ scripts/create_component.sh forecaster ``` -## 3.X.3. gRPC Proto messages and services +## 3.2.3. gRPC Proto messages and services The components, e.g., microservices, of the TeraFlowSDN controller, in general, use a gRPC-based open API to interoperate. All the protocol definitions can be found in sub-folder `proto` within the root project folder. For additional details on gRPC, visit the official web-page [gRPC](https://grpc.io/). @@ -48,7 +48,7 @@ For instance, the _proto_ file for the `forecaster` component being developed in Takes a service identifier as parameter, computes the forecast for the connections of that service, and retrieves a value indicating if the resources can support the demand. -## 3.X.4. Create the component client +## 3.2.4. Create the component client Each component has, by default, a pre-defined client that other components can import to inter-communicate. The client module, by default, is named as the component's name concatenated with `client`, and written in CamelCase. For instance, the client for the `forecaster` component would be `ForecasterClient.py`. @@ -64,7 +64,7 @@ Create file ``
Connect VSCode to the VM through "Remote SSH" extension +## 3.2.3. Connect VSCode to the VM through "Remote SSH" extension - Right-click on "TFS-VM" - Select "Connect to Host in Current Window" - Reply to the questions asked @@ -79,7 +79,7 @@ __Note__: if you get a connection error message, the reason might be due to wron and retry connection. -## 3.X.4. Add SSH key to prevent typing the password every time +## 3.2.4. Add SSH key to prevent typing the password every time This step creates an SSH key in the VM and installs it on the VSCode to prevent having to type the password every time. - In VSCode (connected to the VM), click menu "Terminal > New Terminal" @@ -117,7 +117,7 @@ Host TFS-VM - From now, VSCode will use the identity file to connect to the TFS-VM instead of the user's password. -## 3.X.5. Install VSCode Python Extension (in VSCode server) +## 3.2.5. Install VSCode Python Extension (in VSCode server) This step installs Python extensions in VSCode server running in the VM. - In VSCode (connected to the VM), click left button "Extensions" diff --git a/tutorial/3-2-develop-cth.md b/tutorial/3-4-develop-cth.md similarity index 88% rename from tutorial/3-2-develop-cth.md rename to tutorial/3-4-develop-cth.md index 1b2a4690a..e4525b936 100644 --- a/tutorial/3-2-develop-cth.md +++ b/tutorial/3-4-develop-cth.md @@ -1,4 +1,4 @@ -# 3.2. Development Commands, Tricks, and Hints (WORK IN PROGRESS) +# 3.4. Development Commands, Tricks, and Hints (WORK IN PROGRESS) ## Building, running, testing and reporting code coverage locally @@ -37,9 +37,9 @@ run script `./expose_ingress_grpc.sh` to test: sudo apt-get install nmap -nmap -p 1010 127.0.0.1 # test if context is reachable +nmap -p 1010 147.0.0.1 # test if context is reachable should retrieve something like: -$ nmap -p 1010 127.0.0.1 +$ nmap -p 1010 147.0.0.1 Starting Nmap 7.80 ( https://nmap.org ) at 2022-07-29 15:06 UTC Nmap scan report for localhost (127.0.0.1) Host is up (0.00035s latency). 
diff --git a/tutorial/README.md b/tutorial/README.md index 2d3b1050f..69a334439 100644 --- a/tutorial/README.md +++ b/tutorial/README.md @@ -32,12 +32,14 @@ OSG team through - [1.3. Deploy TeraFlowSDN over MicroK8s](./1-3-deploy-tfs.md) - [1.4. Access TeraFlowSDN WebUI and Grafana Dashboards](./1-4-access-webui.md) - [1.5. Show Deployment and Log per Component](./1-5-deploy-logs-troubleshooting.md) -- [2. Run Experiments Guide (WORK IN PROGRESS)](./2-0-run-experiments.md) +- [2. Run Experiments Guide](./2-0-run-experiments.md) - [2.1. Configure the Python environment](./2-1-python-environment.md) - [2.2. OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services](./2-2-ofc22.md) - [2.3. OECC/PSC'22 Demo (WORK IN PROGRESS)](./2-3-oeccpsc22.md) - - [2.4. ECOC'22 Demo (PENDING)](./2-4-ecoc22.md) + - [2.4. ECOC'22 Demo - Disjoint DC-2-DC L2VPN Service](./2-4-ecoc22.md) - [2.5. NFV-SDN'22 Demo (PENDING)](./2-5-nfvsdn22.md) - [3. Development Guide (WORK IN PROGRESS)](./3-0-development.md) - [3.1. Configure VSCode and Connect to the VM](./3-1-configure-vscode.md) - - [3.2. Development Commands, Tricks, and Hints (WORK IN PROGRESS)](./3-2-develop-cth.md) + - [3.2. Developing a new component: Forecaster (WORK IN PROGRESS)](./3-2-develop-new-component.md) + - [3.3. Debugging individual components in VSCode](./3-3-debug-comp.md) + - [3.4. Development Commands, Tricks, and Hints (WORK IN PROGRESS)](./3-4-develop-cth.md) -- GitLab From 8386cb51cd1c82f67c6f2bc4de66307aa2e415b5 Mon Sep 17 00:00:00 2001 From: gifrerenom Date: Tue, 18 Oct 2022 08:20:28 +0000 Subject: [PATCH 7/8] Tutorial Cleanup --- tutorial/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tutorial/README.md b/tutorial/README.md index 69a334439..f2fb3a219 100644 --- a/tutorial/README.md +++ b/tutorial/README.md @@ -35,7 +35,7 @@ OSG team through - [2. Run Experiments Guide](./2-0-run-experiments.md) - [2.1.
Configure the Python environment](./2-1-python-environment.md) - [2.2. OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services](./2-2-ofc22.md) - - [2.3. OECC/PSC'22 Demo (WORK IN PROGRESS)](./2-3-oeccpsc22.md) + - [2.3. OECC/PSC'22 Demo (PENDING)](./2-3-oeccpsc22.md) - [2.4. ECOC'22 Demo - Disjoint DC-2-DC L2VPN Service](./2-4-ecoc22.md) - [2.5. NFV-SDN'22 Demo (PENDING)](./2-5-nfvsdn22.md) - [3. Development Guide (WORK IN PROGRESS)](./3-0-development.md) -- GitLab From 9e65e44d968832b425014ae67e5ed8aca2a09245 Mon Sep 17 00:00:00 2001 From: gifrerenom Date: Tue, 18 Oct 2022 08:23:09 +0000 Subject: [PATCH 8/8] Tutorial Cleanup --- tutorial/3-4-develop-cth.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tutorial/3-4-develop-cth.md b/tutorial/3-4-develop-cth.md index e4525b936..bb842658b 100644 --- a/tutorial/3-4-develop-cth.md +++ b/tutorial/3-4-develop-cth.md @@ -37,9 +37,9 @@ run script `./expose_ingress_grpc.sh` to test: sudo apt-get install nmap -nmap -p 1010 147.0.0.1 # test if context is reachable +nmap -p 1010 127.0.0.1 # test if context is reachable should retrieve something like: -$ nmap -p 1010 147.0.0.1 +$ nmap -p 1010 127.0.0.1 Starting Nmap 7.80 ( https://nmap.org ) at 2022-07-29 15:06 UTC Nmap scan report for localhost (127.0.0.1) Host is up (0.00035s latency). -- GitLab
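The `descriptors_emulated.json` file added in patch 6 follows a regular nesting: `contexts`, `topologies`, `devices`, and `links`, each identified through UUID wrapper objects, with every link endpoint referring back to a declared device. As an illustrative aside (not part of the patch series), the following Python sketch shows the kind of consistency check one could run over such a descriptor before uploading it through the WebUI; the inline excerpt mirrors only a fragment of the real file, and the `check_links` helper is a hypothetical name, not a TeraFlowSDN API:

```python
import json

# Minimal excerpt mirroring the nesting of descriptors_emulated.json
# (device and link names taken from the file added in the patch above).
DESCRIPTOR = json.loads("""
{
  "devices": [
    {"device_id": {"device_uuid": {"uuid": "DC1-GW"}}, "device_type": "emu-datacenter"},
    {"device_id": {"device_uuid": {"uuid": "CS1-GW1"}}, "device_type": "packet-router"}
  ],
  "links": [
    {"link_id": {"link_uuid": {"uuid": "DC1-GW/eth1==CS1-GW1/10/1"}},
     "link_endpoint_ids": [
       {"device_id": {"device_uuid": {"uuid": "DC1-GW"}}, "endpoint_uuid": {"uuid": "eth1"}},
       {"device_id": {"device_uuid": {"uuid": "CS1-GW1"}}, "endpoint_uuid": {"uuid": "10/1"}}
     ]}
  ]
}
""")

def check_links(descriptor: dict) -> list:
    """Return the UUIDs of links whose endpoints reference undeclared devices."""
    declared = {d["device_id"]["device_uuid"]["uuid"] for d in descriptor.get("devices", [])}
    broken = []
    for link in descriptor.get("links", []):
        for endpoint in link["link_endpoint_ids"]:
            if endpoint["device_id"]["device_uuid"]["uuid"] not in declared:
                broken.append(link["link_id"]["link_uuid"]["uuid"])
                break
    return broken

print(check_links(DESCRIPTOR))  # prints [] when every endpoint maps to a declared device
```

Running the same check over the full descriptor would catch, for example, a link that still references a device removed from the `devices` list.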
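Patch 6 also renumbers the tutorial page that describes per-component client modules such as `ForecasterClient.py`. As a hedged, gRPC-free sketch of the wrapper pattern such clients typically follow (every class and method name below is an illustrative assumption, not the actual TeraFlowSDN API): each RPC method is wrapped so that a transient channel error triggers a reconnect and a retry, while a real client would hold a gRPC channel and generated stub instead of the simulated transport used here.

```python
import functools
import time

def retry(max_attempts: int = 3, delay: float = 0.01):
    """Re-invoke the wrapped RPC on ConnectionError, reconnecting between attempts."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(self, *args, **kwargs)
                except ConnectionError:
                    if attempt == max_attempts:
                        raise
                    self.connect()        # re-establish the (simulated) channel
                    time.sleep(delay)
        return wrapper
    return decorator

class ForecasterClient:
    """Illustrative client wrapper; a real one would hold a gRPC channel and stub."""
    def __init__(self, host: str = "localhost", port: int = 10040):
        self.address = f"{host}:{port}"
        self.connected = False
        self._fail_once = True    # simulate one transient failure for the demo

    def connect(self):
        self.connected = True

    def close(self):
        self.connected = False

    @retry(max_attempts=3)
    def forecast_link_capacity(self, service_uuid: str) -> bool:
        if self._fail_once:
            self._fail_once = False
            raise ConnectionError("transient channel error")
        # A real implementation would invoke the generated gRPC stub here.
        return True

client = ForecasterClient()
client.connect()
print(client.forecast_link_capacity("service-uuid-1234"))  # prints True
```

The decorator keeps the retry policy out of each RPC body, which is why per-component clients can expose one thin method per proto-defined RPC.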