Commit 6b1ff3e8 authored by Lluis Gifre Renom

OFC'22 functional test:

- corrected device and endpoint names
- corrected bearer definitions
- deleted unneeded/redundant scripts
- removed code coverage from test executions
- added separate script for test and code coverage
- moved OFC'22 README to the tutorial folder
parent b70e8c30
1 merge request: !54 Release 2.0.0
Showing with 192 additions and 418 deletions
......@@ -21,10 +21,10 @@ DEFAULT_BGP_ROUTE_TARGET = '{:d}:{:d}'.format(DEFAULT_BGP_AS, 333)
# device_uuid:endpoint_uuid => (
# device_uuid, endpoint_uuid, router_id, route_distinguisher, sub_if_index, address_ip, address_prefix)
BEARER_MAPPINGS = {
'R1-INF:13/2/1': ('R1-INF', '13/2/1', '10.10.10.1', '65000:100', 400, '3.3.2.1', 24),
'R2-EMU:13/2/1': ('R2-EMU', '13/2/1', '12.12.12.1', '65000:120', 450, '3.4.2.1', 24),
'R3-INF:13/2/1': ('R3-INF', '13/2/1', '20.20.20.1', '65000:200', 500, '3.3.1.1', 24),
'R4-EMU:13/2/1': ('R4-EMU', '13/2/1', '22.22.22.1', '65000:220', 550, '3.4.1.1', 24),
'R1-EMU:13/1/2': ('R1-EMU', '13/1/2', '10.10.10.1', '65000:100', 400, '3.3.2.1', 24),
'R2-EMU:13/1/2': ('R2-EMU', '13/1/2', '12.12.12.1', '65000:120', 450, '3.4.2.1', 24),
'R3-EMU:13/1/2': ('R3-EMU', '13/1/2', '20.20.20.1', '65000:200', 500, '3.3.1.1', 24),
'R4-EMU:13/1/2': ('R4-EMU', '13/1/2', '22.22.22.1', '65000:220', 550, '3.4.1.1', 24),
'R1@D1:3/1': ('R1@D1', '3/1', '10.0.1.1', '65001:101', 100, '1.1.3.1', 24),
'R1@D1:3/2': ('R1@D1', '3/2', '10.0.1.1', '65001:101', 100, '1.1.3.2', 24),
......
......@@ -22,7 +22,7 @@ WIM_MAPPING = [
#'device_interface_id' : ??, # pop_switch_port
'service_endpoint_id' : 'ep-1', # wan_service_endpoint_id
'service_mapping_info': { # wan_service_mapping_info, other extra info
'bearer': {'bearer-reference': 'R1-INF:13/2/1'},
'bearer': {'bearer-reference': 'R1-EMU:13/1/2'},
'site-id': '1',
},
#'switch_dpid' : ??, # wan_switch_dpid
......@@ -34,7 +34,7 @@ WIM_MAPPING = [
#'device_interface_id' : ??, # pop_switch_port
'service_endpoint_id' : 'ep-2', # wan_service_endpoint_id
'service_mapping_info': { # wan_service_mapping_info, other extra info
'bearer': {'bearer-reference': 'R2-EMU:13/2/1'},
'bearer': {'bearer-reference': 'R2-EMU:13/1/2'},
'site-id': '2',
},
#'switch_dpid' : ??, # wan_switch_dpid
......@@ -46,7 +46,7 @@ WIM_MAPPING = [
#'device_interface_id' : ??, # pop_switch_port
'service_endpoint_id' : 'ep-3', # wan_service_endpoint_id
'service_mapping_info': { # wan_service_mapping_info, other extra info
'bearer': {'bearer-reference': 'R3-INF:13/2/1'},
'bearer': {'bearer-reference': 'R3-EMU:13/1/2'},
'site-id': '3',
},
#'switch_dpid' : ??, # wan_switch_dpid
......@@ -58,7 +58,7 @@ WIM_MAPPING = [
#'device_interface_id' : ??, # pop_switch_port
'service_endpoint_id' : 'ep-4', # wan_service_endpoint_id
'service_mapping_info': { # wan_service_mapping_info, other extra info
'bearer': {'bearer-reference': 'R4-EMU:13/2/1'},
'bearer': {'bearer-reference': 'R4-EMU:13/1/2'},
'site-id': '4',
},
#'switch_dpid' : ??, # wan_switch_dpid
......
# OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services
This functional test reproduces the live demonstration "Demonstration of Zero-touch Device and L3-VPN Service
Management Using the TeraFlow Cloud-native SDN Controller" carried out at
[OFC'22](https://www.ofcconference.org/en-us/home/program-speakers/demo/).
## Functional test folder
This functional test can be found in folder `./src/tests/ofc22/`. A convenience alias `./ofc22/` pointing to that folder has been defined.
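The alias is simply a pointer to the test folder at the repository root; if it is missing in your checkout, it can be recreated as a symbolic link (the exact form of the alias is an assumption):
```bash
# Hypothetical: recreate the ./ofc22 convenience alias pointing to the test folder
ln -s src/tests/ofc22 ofc22
```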
## Execute with real devices
This functional test is designed to operate both with real and emulated devices.
By default, emulated devices are used; however, if you have access to real devices, you can create/modify the files `./ofc22/tests/Objects.py` and `./ofc22/tests/Credentials.py` to point to your devices, and map to your network topology.
Note that the default scenario assumes devices R2 and R4 are always emulated, while devices R1, R3, and O1 can be configured as emulated or real devices.
__Important:__ The OpenConfigDriver, the P4Driver, and the TransportApiDriver have to be considered experimental. The configuration and monitoring capabilities they support are limited or partially implemented. Use them with care.
## Deployment
To run this functional test, it is assumed you have deployed a Kubernetes-based environment as described in [Wiki: Installing Kubernetes on your Linux machine](https://gitlab.com/teraflow-h2020/controller/-/wikis/Installing-Kubernetes-on-your-Linux-machine).
After installing Kubernetes, you can run the deployment script to deploy the appropriate components. Feel free to adapt it to your particular case following the instructions described in [Wiki: Deploying a TeraFlow OS test instance](https://gitlab.com/teraflow-h2020/controller/-/wikis/Deploying-a-TeraFlow-OS-test-instance).
__Important:__
- The `./ofc22/deploy_in_kubernetes.sh` script assumes you have installed the appropriate development dependencies using the `install_development_dependencies.sh` script.
- Before running the scripts in this folder, remember to update the environment variable K8S_HOSTNAME to point to the Kubernetes node you will be using, as described in [Wiki: Deploying a TeraFlow OS test instance](https://gitlab.com/teraflow-h2020/controller/-/wikis/Deploying-a-TeraFlow-OS-test-instance) and illustrated in the snippet below.
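For reference, the test scripts in this commit resolve the namespace and hostname as follows; you can set the same variables manually in your shell (the explicit node name shown is just the default used by the deployment script):
```bash
# Kubernetes namespace used by the OFC'22 scripts
export K8S_NAMESPACE="ofc22"

# Either set the Kubernetes node name explicitly...
export K8S_HOSTNAME="kubernetes-master"

# ...or derive the name of the K8s master node dynamically, as the test scripts do
export K8S_HOSTNAME=$(kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p')
```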
For your convenience, the configuration script `./ofc22/deploy_in_kubernetes.sh` has already been defined. The script will take some minutes to download the dependencies, build the micro-services, deploy them, and leave them ready for operation. The deployment finishes with a report of the items that have been created.
## Access to the WebUI and Dashboard
When the deployment completes, you can connect to the TeraFlow OS WebUI and Dashboards as described in [Wiki: Using the WebUI](https://gitlab.com/teraflow-h2020/controller/-/wikis/Using-the-WebUI), or directly navigating to `http://[your-node-ip]:30800` for the WebUI and `http://[your-node-ip]:30300` for the Grafana Dashboard.
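If you do not know your node IP, you can obtain it from Kubernetes the same way the test scripts do (K8S_HOSTNAME is the node name discussed above; the NodePorts 30800 and 30300 come from `./ofc22/expose_services.yaml`):
```bash
# InternalIP of the node exposing the NodePort services
NODE_IP=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
echo "WebUI:   http://${NODE_IP}:30800"
echo "Grafana: http://${NODE_IP}:30300"
```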
Notes:
- the default credentials for the Grafana Dashboard are user/pass: `admin`/`admin123+`.
- in Grafana, you can find the "L3-Monitoring" dashboard in the "Starred dashboards" section.
## Test execution
To execute this functional test, four main steps need to be carried out:
1. Device bootstrapping
2. L3VPN Service creation
3. L3VPN Service removal
4. Cleanup
As the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. If there is an error during the execution, you should see a detailed report on the error; see the troubleshooting section in that case.
Feel free to check the logs of the different components using the appropriate `ofc22/show_logs_[component].sh` scripts after you execute each step.
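For reference, a complete run simply executes the four step scripts in order; each one prints its own PASSED / FAILED / SKIPPED summary before you move on to the next step:
```bash
# Run the whole OFC'22 functional test, one step at a time
./ofc22/run_test_01_bootstrap.sh
./ofc22/run_test_02_create_service.sh
./ofc22/run_test_03_delete_service.sh
./ofc22/run_test_04_cleanup.sh
```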
### 1. Device bootstrapping
This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The expected results are:
- The devices to be incorporated into the Topology.
- The devices to be pre-configured and initialized as ENABLED by the Automation component.
- The monitoring for the device ports (named as endpoints in TeraFlow OS) to be activated and data collection to automatically start.
- The links to be added to the topology.
To run this step, execute the following script:
`./ofc22/run_test_01_bootstrap.sh`
When the script finishes, check the Grafana L3-Monitoring Dashboard; you should see the monitoring data being plotted and updated every 5 seconds (by default). Given that there is no service configured, you should see a 0-valued flat plot.
In the WebUI, select the "admin" Context. In the "Devices" tab you should see that 5 different emulated devices have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the "Services" tab you should see that there is no service created. Note here that the emulated devices produce synthetic randomly-generated data and do not care about the services configured.
### 2. L3VPN Service creation
This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance.
To run this step, execute the following script:
`./ofc22/run_test_02_create_service.sh`
When the script finishes, check the WebUI "Services" tab. You should see that two services have been created, one for the optical layer and another for the packet layer. Besides, you can check the "Devices" tab to see the configuration rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured, you should see the plots with the monitored data for the device. By default, device R1-INF is selected.
### 3. L3VPN Service removal
This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock OSM instance.
To run this step, execute the following script:
`./ofc22/run_test_03_delete_service.sh`
When the script finishes, check the WebUI "Services" tab. You should see that the two services have been removed. Besides, in the "Devices" tab you can see that the appropriate configuration rules have been deconfigured. In the Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again.
### 4. Cleanup
This last step just performs a cleanup of the scenario removing all the TeraFlow OS entities for completeness.
To run this step, execute the following script:
`./ofc22/run_test_04_cleanup.sh`
When the script finishes, check the WebUI "Devices" tab, you should see that the devices have been removed. Besides, in the "Services" tab you can see that the "admin" Context has no services given that that context has been removed.
## Troubleshooting
Several scripts are provided to help troubleshoot issues during the execution of the test. These scripts are listed below; a sketch of what they typically wrap is shown after the list:
- `./ofc22/show_deployment.sh`: this script reports the items belonging to this deployment. Use it to validate that all the pods, deployments and replica sets are ready and have a state of "running"; and the services are deployed and have appropriate IP addresses and ports.
- `ofc22/show_logs_automation.sh`: this script reports the logs for the automation component.
- `ofc22/show_logs_compute.sh`: this script reports the logs for the compute component.
- `ofc22/show_logs_context.sh`: this script reports the logs for the context component.
- `ofc22/show_logs_device.sh`: this script reports the logs for the device component.
- `ofc22/show_logs_monitoring.sh`: this script reports the logs for the monitoring component.
- `ofc22/show_logs_service.sh`: this script reports the logs for the service component.
- `ofc22/show_logs_webui.sh`: this script reports the logs for the webui component.
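As an illustration (a sketch, not the literal contents of the scripts), each helper essentially wraps a `kubectl` command against the `ofc22` namespace:
```bash
# Sketch of show_deployment.sh: report all items of the deployment
kubectl --namespace ofc22 get all

# Sketch of show_logs_device.sh: dump the logs of the device component
kubectl --namespace ofc22 logs deployment/deviceservice
```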
......@@ -15,7 +15,7 @@
],
"devices": [
{
"device_id": {"device_uuid": {"uuid": "R1-INF"}},
"device_id": {"device_uuid": {"uuid": "R1-EMU"}},
"device_type": "emu-packet-router",
"device_config": {"config_rules": [
{"action": 1, "resource_key": "_connect/address", "resource_value": "127.0.0.1"},
......@@ -39,7 +39,7 @@
"device_endpoints": []
},
{
"device_id": {"device_uuid": {"uuid": "R3-INF"}},
"device_id": {"device_uuid": {"uuid": "R3-EMU"}},
"device_type": "emu-packet-router",
"device_config": {"config_rules": [
{"action": 1, "resource_key": "_connect/address", "resource_value": "127.0.0.1"},
......@@ -77,9 +77,9 @@
],
"links": [
{
"link_id": {"link_uuid": {"uuid": "R1-INF/13/0/0==O1-OLS/aade6001-f00b-5e2f-a357-6a0a9d3de870"}},
"link_id": {"link_uuid": {"uuid": "R1-EMU/13/0/0==O1-OLS/aade6001-f00b-5e2f-a357-6a0a9d3de870"}},
"link_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "R1-INF"}}, "endpoint_uuid": {"uuid": "13/0/0"}},
{"device_id": {"device_uuid": {"uuid": "R1-EMU"}}, "endpoint_uuid": {"uuid": "13/0/0"}},
{"device_id": {"device_uuid": {"uuid": "O1-OLS"}}, "endpoint_uuid": {"uuid": "aade6001-f00b-5e2f-a357-6a0a9d3de870"}}
]
},
......@@ -91,9 +91,9 @@
]
},
{
"link_id": {"link_uuid": {"uuid": "R3-INF/13/0/0==O1-OLS/0ef74f99-1acc-57bd-ab9d-4b958b06c513"}},
"link_id": {"link_uuid": {"uuid": "R3-EMU/13/0/0==O1-OLS/0ef74f99-1acc-57bd-ab9d-4b958b06c513"}},
"link_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "R3-INF"}}, "endpoint_uuid": {"uuid": "13/0/0"}},
{"device_id": {"device_uuid": {"uuid": "R3-EMU"}}, "endpoint_uuid": {"uuid": "13/0/0"}},
{"device_id": {"device_uuid": {"uuid": "O1-OLS"}}, "endpoint_uuid": {"uuid": "0ef74f99-1acc-57bd-ab9d-4b958b06c513"}}
]
},
......
# Copyright 2021-2023 H2020 TeraFlow (https://www.teraflow-h2020.eu/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: v1
kind: Service
metadata:
name: contextservice-public
labels:
app: contextservice
spec:
type: NodePort
selector:
app: contextservice
ports:
- name: grpc
protocol: TCP
port: 1010
targetPort: 1010
nodePort: 30101
- name: redis
protocol: TCP
port: 6379
targetPort: 6379
nodePort: 30637
- name: http
protocol: TCP
port: 8080
targetPort: 8080
nodePort: 31808
---
apiVersion: v1
kind: Service
metadata:
name: deviceservice-public
labels:
app: deviceservice
spec:
type: NodePort
selector:
app: deviceservice
ports:
- name: grpc
protocol: TCP
port: 2020
targetPort: 2020
nodePort: 30202
---
apiVersion: v1
kind: Service
metadata:
name: monitoringservice-public
labels:
app: monitoringservice
spec:
type: NodePort
selector:
app: monitoringservice
ports:
- name: influx
protocol: TCP
port: 8086
targetPort: 8086
nodePort: 30886
---
apiVersion: v1
kind: Service
metadata:
name: computeservice-public
spec:
type: NodePort
selector:
app: computeservice
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
nodePort: 30808
---
apiVersion: v1
kind: Service
metadata:
name: webuiservice-public
labels:
app: webuiservice
spec:
type: NodePort
selector:
app: webuiservice
ports:
- name: http
protocol: TCP
port: 8004
targetPort: 8004
nodePort: 30800
- name: grafana
protocol: TCP
port: 3000
targetPort: 3000
nodePort: 30300
#!/bin/bash
# Copyright 2021-2023 H2020 TeraFlow (https://www.teraflow-h2020.eu/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
export COMPONENT="webui"
export IMAGE_TAG="ofc22"
export K8S_NAMESPACE="ofc22"
export K8S_HOSTNAME="kubernetes-master"
export GRAFANA_PASSWORD="admin123+"
# Constants
TMP_FOLDER="./tmp"
# Create a tmp folder for files modified during the deployment
TMP_MANIFESTS_FOLDER="$TMP_FOLDER/manifests"
mkdir -p $TMP_MANIFESTS_FOLDER
TMP_LOGS_FOLDER="$TMP_FOLDER/logs"
mkdir -p $TMP_LOGS_FOLDER
echo "Processing '$COMPONENT' component..."
IMAGE_NAME="$COMPONENT:$IMAGE_TAG"
echo " Building Docker image..."
BUILD_LOG="$TMP_LOGS_FOLDER/build_${COMPONENT}.log"
docker build -t "$IMAGE_NAME" -f ./src/"$COMPONENT"/Dockerfile ./src/ > "$BUILD_LOG"
sleep 1
echo " Deploying '$COMPONENT' component to Kubernetes..."
kubectl --namespace $K8S_NAMESPACE scale deployment --replicas=0 ${COMPONENT}service
kubectl --namespace $K8S_NAMESPACE scale deployment --replicas=1 ${COMPONENT}service
printf "\n"
sleep 1
echo "Waiting for '$COMPONENT' component..."
kubectl wait --namespace $K8S_NAMESPACE --for='condition=available' --timeout=300s deployment/${COMPONENT}service
printf "\n"
echo "Configuring DataStores and Dashboards..."
./configure_dashboards.sh
printf "\n\n"
echo "Reporting Deployment..."
kubectl --namespace $K8S_NAMESPACE get all
printf "\n"
echo "Done!"
......@@ -13,39 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Configure the correct folder on the .coveragerc file
cat $PROJECTDIR/coverage/.coveragerc.template | sed s+~/teraflow/controller+$PROJECTDIR+g > $RCFILE
# Destroy old coverage file
rm -f $COVERAGEFILE
# Set the name of the Kubernetes namespace and hostname to use.
K8S_NAMESPACE="ofc22"
# K8S_HOSTNAME="kubernetes-master"
# dynamically gets the name of the K8s master node
K8S_HOSTNAME=`kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p'`
# Flush Context database
kubectl --namespace $K8S_NAMESPACE exec -it deployment/contextservice --container redis -- redis-cli FLUSHALL
export CONTEXTSERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export CONTEXTSERVICE_SERVICE_PORT_GRPC=$(kubectl get service contextservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==1010)].nodePort}')
export DEVICESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export DEVICESERVICE_SERVICE_PORT_GRPC=$(kubectl get service deviceservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==2020)].nodePort}')
export COMPUTESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export COMPUTESERVICE_SERVICE_PORT_HTTP=$(kubectl get service computeservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==8080)].nodePort}')
# Useful flags for pytest:
#-o log_cli=true -o log_file=device.log -o log_file_level=DEBUG
# Run functional test and analyze coverage of code at same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
tests/ofc22/tests/test_functional_bootstrap.py
pytest --verbose src/tests/ofc22/tests/test_functional_bootstrap.py
......@@ -13,29 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Set the name of the Kubernetes namespace and hostname to use.
K8S_NAMESPACE="ofc22"
# dynamically gets the name of the K8s master node
K8S_HOSTNAME=`kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p'`
export CONTEXTSERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export CONTEXTSERVICE_SERVICE_PORT_GRPC=$(kubectl get service contextservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==1010)].nodePort}')
export DEVICESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export DEVICESERVICE_SERVICE_PORT_GRPC=$(kubectl get service deviceservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==2020)].nodePort}')
export COMPUTESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export COMPUTESERVICE_SERVICE_PORT_HTTP=$(kubectl get service computeservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==8080)].nodePort}')
# Useful flags for pytest:
#-o log_cli=true -o log_file=device.log -o log_file_level=DEBUG
# Run functional test and analyze coverage of code at same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
tests/ofc22/tests/test_functional_create_service.py
pytest --verbose src/tests/ofc22/tests/test_functional_create_service.py
......@@ -13,29 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Set the name of the Kubernetes namespace and hostname to use.
K8S_NAMESPACE="ofc22"
# dynamically gets the name of the K8s master node
K8S_HOSTNAME=`kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p'`
export CONTEXTSERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export CONTEXTSERVICE_SERVICE_PORT_GRPC=$(kubectl get service contextservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==1010)].nodePort}')
export DEVICESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export DEVICESERVICE_SERVICE_PORT_GRPC=$(kubectl get service deviceservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==2020)].nodePort}')
export COMPUTESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export COMPUTESERVICE_SERVICE_PORT_HTTP=$(kubectl get service computeservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==8080)].nodePort}')
# Useful flags for pytest:
#-o log_cli=true -o log_file=device.log -o log_file_level=DEBUG
# Run functional test and analyze coverage of code at same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
tests/ofc22/tests/test_functional_delete_service.py
pytest --verbose src/tests/ofc22/tests/test_functional_delete_service.py
......@@ -13,29 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Set the name of the Kubernetes namespace and hostname to use.
K8S_NAMESPACE="ofc22"
# dynamically gets the name of the K8s master node
K8S_HOSTNAME=`kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p'`
export CONTEXTSERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export CONTEXTSERVICE_SERVICE_PORT_GRPC=$(kubectl get service contextservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==1010)].nodePort}')
export DEVICESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export DEVICESERVICE_SERVICE_PORT_GRPC=$(kubectl get service deviceservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==2020)].nodePort}')
export COMPUTESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export COMPUTESERVICE_SERVICE_PORT_HTTP=$(kubectl get service computeservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==8080)].nodePort}')
# Useful flags for pytest:
#-o log_cli=true -o log_file=device.log -o log_file_level=DEBUG
# Run functional test and analyze coverage of code at same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
tests/ofc22/tests/test_functional_cleanup.py
pytest --verbose src/tests/ofc22/tests/test_functional_cleanup.py
......@@ -14,14 +14,30 @@
# limitations under the License.
# OFC 22 deployment settings
PROJECTDIR=`pwd`
export REGISTRY_IMAGE=""
export COMPONENTS="context device service compute webui automation monitoring"
export IMAGE_TAG="ofc22"
export K8S_NAMESPACE="ofc22"
export K8S_HOSTNAME="kubernetes-master"
export EXTRA_MANIFESTS="./ofc22/expose_services.yaml"
export GRAFANA_PASSWORD="admin123+"
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
./deploy_in_kubernetes.sh
# Configure the correct folder on the .coveragerc file
cat $PROJECTDIR/coverage/.coveragerc.template | sed s+~/teraflow/controller+$PROJECTDIR+g > $RCFILE
# Destroy old coverage file
rm -f $COVERAGEFILE
# Force a flush of Context database
kubectl --namespace $TFS_K8S_NAMESPACE exec -it deployment/contextservice --container redis -- redis-cli FLUSHALL
# Run functional tests and analyze code coverage at the same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
tests/ofc22/tests/test_functional_bootstrap.py
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
tests/ofc22/tests/test_functional_create_service.py
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
tests/ofc22/tests/test_functional_delete_service.py
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
tests/ofc22/tests/test_functional_cleanup.py
......@@ -86,7 +86,7 @@ if not USE_REAL_DEVICES:
json_device_packetrouter_disabled = json_device_emulated_packet_router_disabled
json_device_tapi_disabled = json_device_emulated_tapi_disabled
DEVICE_R1_UUID = 'R1-INF'
DEVICE_R1_UUID = 'R1-EMU'
DEVICE_R1_TIMEOUT = 120
DEVICE_R1_ENDPOINT_DEFS = [('13/0/0', 'optical', []), ('13/1/2', 'copper', PACKET_PORT_SAMPLE_TYPES)]
DEVICE_R1_ID = json_device_id(DEVICE_R1_UUID)
......@@ -113,7 +113,7 @@ ENDPOINT_ID_R2_13_1_2 = DEVICE_R2_ENDPOINT_IDS[1]
DEVICE_R2_CONNECT_RULES = json_device_emulated_connect_rules(DEVICE_R2_ENDPOINT_DEFS)
DEVICE_R3_UUID = 'R3-INF'
DEVICE_R3_UUID = 'R3-EMU'
DEVICE_R3_TIMEOUT = 120
DEVICE_R3_ENDPOINT_DEFS = [('13/0/0', 'optical', []), ('13/1/2', 'copper', PACKET_PORT_SAMPLE_TYPES)]
DEVICE_R3_ID = json_device_id(DEVICE_R3_UUID)
......@@ -186,24 +186,15 @@ def compose_service_endpoint_id(endpoint_id):
endpoint_uuid = endpoint_id['endpoint_uuid']['uuid']
return ':'.join([device_uuid, endpoint_uuid])
def compose_bearer(endpoint_id, router_id, route_distinguisher):
device_uuid = endpoint_id['device_id']['device_uuid']['uuid']
endpoint_uuid = endpoint_id['endpoint_uuid']['uuid']
return '#'.join([device_uuid, endpoint_uuid, router_id, route_distinguisher])
WIM_SEP_R1_ID = compose_service_endpoint_id(ENDPOINT_ID_R1_13_1_2)
WIM_SEP_R1_ROUTER_ID = '10.10.10.1'
WIM_SEP_R1_ROUTER_DIST = '65000:111'
WIM_SEP_R1_SITE_ID = '1'
WIM_SEP_R1_BEARER = compose_bearer(ENDPOINT_ID_R1_13_1_2, WIM_SEP_R1_ROUTER_ID, WIM_SEP_R1_ROUTER_DIST)
WIM_SRV_R1_VLAN_ID = 400
WIM_SEP_R3_ID = compose_service_endpoint_id(ENDPOINT_ID_R3_13_1_2)
WIM_SEP_R3_ROUTER_ID = '20.20.20.1'
WIM_SEP_R3_ROUTER_DIST = '65000:222'
WIM_SEP_R3_SITE_ID = '2'
WIM_SEP_R3_BEARER = compose_bearer(ENDPOINT_ID_R3_13_1_2, WIM_SEP_R3_ROUTER_ID, WIM_SEP_R3_ROUTER_DIST)
WIM_SRV_R3_VLAN_ID = 500
WIM_SEP_R1_ID = compose_service_endpoint_id(ENDPOINT_ID_R1_13_1_2)
WIM_SEP_R1_SITE_ID = '1'
WIM_SEP_R1_BEARER = WIM_SEP_R1_ID
WIM_SRV_R1_VLAN_ID = 400
WIM_SEP_R3_ID = compose_service_endpoint_id(ENDPOINT_ID_R3_13_1_2)
WIM_SEP_R3_SITE_ID = '2'
WIM_SEP_R3_BEARER = WIM_SEP_R3_ID
WIM_SRV_R3_VLAN_ID = 500
WIM_USERNAME = 'admin'
WIM_PASSWORD = 'admin'
......
# 2.2. OFC'22 (WORK IN PROGRESS)
# 2.2. OFC'22 Demo - Bootstrap devices, Monitor device Endpoints, Manage L3VPN Services
Check [Old Version](./../ofc22/README.md)
This functional test reproduces the live demonstration "Demonstration of Zero-touch Device and L3-VPN Service Management
Using the TeraFlow Cloud-native SDN Controller" carried out at
[OFC'22](https://www.ofcconference.org/en-us/home/program-speakers/demo/).
## 2.2.1. Functional test folder
This functional test can be found in folder `./src/tests/ofc22/`. A convenience alias `./ofc22/` pointing to that folder
has been defined.
## 2.2.2. Execute with real devices
This functional test is designed to operate both with real and emulated devices.
By default, emulated devices are used; however, if you have access to real devices, you can create/modify the files
`./ofc22/tests/Objects.py` and `./ofc22/tests/Credentials.py` to point to your devices, and map to your own network
topology.
Otherwise, you can modify the file `./ofc22/tests/descriptors_emulated.json`, which is designed to be uploaded through the
WebUI instead of using the command-line scripts.
Note that the default scenario assumes devices R2 and R4 are always emulated, while devices R1, R3, and O1
can be configured as emulated or real devices.
__Important__: The device drivers operating with real devices, e.g., OpenConfigDriver, P4Driver, and TransportApiDriver,
have to be considered experimental. The configuration and monitoring capabilities they support are
limited or partially implemented/tested. Use them with care.
## 2.2.3. Deployment and Dependencies
To run this functional test, it is assumed you have deployed a MicroK8s-based Kubernetes environment and a TeraFlowSDN
controller instance as described in the [Tutorial: Deployment Guide](./1-0-deployment.md), and you configured
the Python environment as described in
[Tutorial: Run Experiments Guide > 2.1. Configure Python Environment](./2-1-python-environment.md).
Remember to source the scenario settings appropriately, e.g., `cd ~/tfs-ctrl && source my_deploy.sh` in each terminal
you open.
## 2.2.4. Access to the WebUI and Dashboard
When the deployment completes, you can connect to the TeraFlowSDN WebUI and Dashboards as described in
[Tutorial: Deployment Guide > 1.4. Access TeraFlowSDN WebUI and Grafana Dashboards](./1-4-access-webui.md).
Notes:
- the default credentials for the Grafana Dashboard are user/pass: `admin`/`admin123+`.
- in Grafana, you will find the "L3-Monitoring" dashboard in the "Starred dashboards" section.
## 2.2.5. Test execution
To execute this functional test, four main steps need to be carried out:
1. Device bootstrapping
2. L3VPN Service creation
3. L3VPN Service removal
4. Cleanup
As the execution of each test progresses, a report will be generated indicating PASSED / FAILED / SKIPPED. If there
is an error during the execution, you should see a detailed report on the error. See the troubleshooting section if
needed.
Feel free to check the logs of the different components using the appropriate `scripts/show_logs_[component].sh`
scripts after you execute each step.
### 2.2.5.1. Device bootstrapping
This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The
expected results are:
- The devices to be added into the Topology.
- The devices to be pre-configured and initialized as ENABLED by the Automation component.
- The monitoring for the device ports (named as endpoints in TeraFlowSDN) to be activated and data collection to
automatically start.
- The links to be added to the topology.
To run this step, either upload the file `./ofc22/tests/descriptors_emulated.json` (containing the descriptors of the
contexts, topologies, devices, and links) through the WebUI, or execute the `./ofc22/run_test_01_bootstrap.sh` script.
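Assuming you followed the deployment and Python environment guides above, a minimal command-line run of this step could look like the following (paths as defined in the Deployment and Dependencies section):
```bash
# Load the scenario settings in this terminal, then bootstrap the scenario
cd ~/tfs-ctrl && source my_deploy.sh
./ofc22/run_test_01_bootstrap.sh
```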
When the bootstrapping finishes, check the Grafana L3-Monitoring Dashboard; you should see the monitoring data
being plotted and updated every 5 seconds (by default). Given that there is no service configured, you should see a
0-valued flat plot.
In the WebUI, select the "admin" Context. Then, in the "Devices" tab you should see that 5 different emulated devices
have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the "Services" tab
you should see that there is no service created. Note here that the emulated devices produce synthetic
randomly-generated data and do not care about the services configured.
### 2.2.5.2. L3VPN Service creation
This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance.
To run this step, execute the `./ofc22/run_test_02_create_service.sh` script.
When the script finishes, check the WebUI "Services" tab. You should see that two services have been created, one for
the optical layer and another for the packet layer. Besides, you can check the "Devices" tab to see the configuration
rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured,
you should see the plots with the monitored data for the device. By default, device R1-EMU is selected.
### 2.2.5.3. L3VPN Service removal
This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock
OSM instance.
To run this step, execute the `./ofc22/run_test_03_delete_service.sh` script, or delete the L3NM service from the WebUI.
When the script finishes, check the WebUI "Services" tab. You should see that the two services have been removed.
Besides, in the "Devices" tab you can see that the appropriate configuration rules have been deconfigured. In the
Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again.
### 2.2.5.4. Cleanup
This last step performs a cleanup of the scenario removing all the TeraFlowSDN entities for completeness.
To run this step, execute the `./ofc22/run_test_04_cleanup.sh` script.
When the script finishes, check the WebUI "Devices" tab, you should see that the devices have been removed. Besides, in
the "Services" tab you can see that the "admin" Context has no services given that that context has been removed.