Commit 6a4ae5fe authored by Lluis Gifre Renom

Multiple changes:

Common scripts:
- added script to dump logs of all pods/containers in a namespace

ECOC'22 demo:
- removed unneeded scripts and files
- cleaned up run_test scripts
- added run test and coverage script
- added deploy_specs.sh
- added scripts to generate JSON descriptors

OFC'22 demo:
- added deploy_specs.sh
parent 272fa305
# Set the URL of your local Docker registry where the images will be uploaded to.
export TFS_REGISTRY_IMAGE="http://localhost:32000/tfs/"
# Set the list of components, separated by spaces, you want to build images for, and deploy.
# Supported components are:
# context device automation policy service compute monitoring webui
# interdomain slice pathcomp dlt
#!/bin/bash
# Copyright 2021-2023 H2020 TeraFlow (https://www.teraflow-h2020.eu/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
########################################################################################################################
# Define your deployment settings here
########################################################################################################################
# If not already set, set the name of the Kubernetes namespace to deploy to.
export TFS_K8S_NAMESPACE=${TFS_K8S_NAMESPACE:-"tfs-dev"}
########################################################################################################################
# Automated steps start here
########################################################################################################################
# Prepare a clean folder to store the collected logs
mkdir -p tmp/exec_logs/$TFS_K8S_NAMESPACE/
rm -f tmp/exec_logs/$TFS_K8S_NAMESPACE/*
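# Dump the logs of every container of every pod in the namespace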
PODS=$(kubectl get pods --namespace $TFS_K8S_NAMESPACE --no-headers --output=custom-columns=":metadata.name")
for POD in $PODS; do
    CONTAINERS=$(kubectl get pods --namespace $TFS_K8S_NAMESPACE $POD -o jsonpath='{.spec.containers[*].name}')
    for CONTAINER in $CONTAINERS; do
        kubectl --namespace $TFS_K8S_NAMESPACE logs pod/${POD} --container ${CONTAINER} \
            > tmp/exec_logs/$TFS_K8S_NAMESPACE/${POD}__${CONTAINER}.log
    done
done
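A hedged usage sketch for the log-dump script above (the script path is an assumption; the namespace matches the script's default):

```bash
export TFS_K8S_NAMESPACE="tfs-dev"   # namespace to dump; the script defaults to tfs-dev if unset
./scripts/dump_logs.sh               # hypothetical path of the script above
ls tmp/exec_logs/tfs-dev/            # one <pod>__<container>.log file per container
```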
# ECOC'22 Demo - Disjoint DC-2-DC L3VPN Service
This functional test reproduces the experimental assessment of "<ECOC-22 title>" presented at [ECOC'22](https://www.ecoc2022.org/).
## Functional test folder
This functional test can be found in folder `./src/tests/ecoc22/`. A convenience alias `./ecoc22/` pointing to that folder has been defined.
## Execute with real devices
This functional test has only been verified with emulated devices; however, if you have access to real devices, you can modify the files `./ecoc22/tests/Objects.py` and `./ecoc22/tests/Credentials.py` to point to your devices and map them to your network topology.
__Important:__ The OpenConfigDriver, the P4Driver, and the TransportApiDriver should be considered experimental. The configuration and monitoring capabilities they support are limited or partially implemented. Use them with care.
## Deployment
To run this functional test, it is assumed you have deployed a Kubernetes-based environment as described in [Wiki: Installing Kubernetes on your Linux machine](https://gitlab.com/teraflow-h2020/controller/-/wikis/Installing-Kubernetes-on-your-Linux-machine).
After installing Kubernetes, you can run the deployment script below to deploy the appropriate components. Feel free to adapt it to your particular case following the instructions described in [Wiki: Deploying a TeraFlow OS test instance](https://gitlab.com/teraflow-h2020/controller/-/wikis/Deploying-a-TeraFlow-OS-test-instance).
__Important:__
- The `./ecoc22/deploy_in_kubernetes.sh` assumes you have installed the appropriate development dependencies using the `install_development_dependencies.sh` script.
- Before running the scripts in this folder, remember to update the environment variable K8S_HOSTNAME to point to the Kubernetes node you will be using as described in [Wiki: Deploying a TeraFlow OS test instance](https://gitlab.com/teraflow-h2020/controller/-/wikis/Deploying-a-TeraFlow-OS-test-instance).
For your convenience, the deployment script `./ecoc22/deploy_in_kubernetes.sh` has already been defined. The script takes some minutes to download the dependencies, build the micro-services, deploy them, and leave them ready for operation. The deployment finishes with a report of the items that have been created.
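A minimal sketch of the full deployment sequence described above (the `K8S_HOSTNAME` value is an example; point it to your own Kubernetes node):

```bash
./install_development_dependencies.sh    # one-time installation of development dependencies
export K8S_HOSTNAME="kubernetes-master"  # example value; set it to your Kubernetes node name
./ecoc22/deploy_in_kubernetes.sh         # builds, deploys, and reports the created items
```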
## Access to the WebUI and Dashboard
When the deployment completes, you can connect to the TeraFlow OS WebUI and Dashboards as described in [Wiki: Using the WebUI](https://gitlab.com/teraflow-h2020/controller/-/wikis/Using-the-WebUI), or directly navigating to `http://[your-node-ip]:30800` for the WebUI and `http://[your-node-ip]:30300` for the Grafana Dashboard.
Notes:
- the default credentials for the Grafana Dashboard are user/pass: `admin`/`admin123+`.
- this functional test does not involve the Monitoring component, so no monitoring data is plotted in Grafana.
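If you don't know your node's IP, you can query Kubernetes for it; a minimal sketch:

```bash
# Take the INTERNAL-IP column of the node you deployed to
kubectl get nodes -o wide
# WebUI:             http://<node-ip>:30800
# Grafana Dashboard: http://<node-ip>:30300   (user: admin / pass: admin123+)
```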
## Test execution
To execute this functional test, four main steps need to be carried out:
1. Device bootstrapping
2. L3VPN Service creation
3. L3VPN Service removal
4. Cleanup
As the execution of each test progresses, a report is generated indicating PASSED / FAILED / SKIPPED. If an error occurs during the execution, you should see a detailed report of the error; in that case, see the Troubleshooting section below.
Feel free to check the logs of the different components using the appropriate `ecoc22/show_logs_[component].sh` scripts after you execute each step; for example, see the sketch below.
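A quick example of checking a component's logs between steps (the component is just an example):

```bash
# e.g., inspect the Device component logs after running a step
./ecoc22/show_logs_device.sh
```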
### 1. Device bootstrapping
This step configures some basic entities (Context and Topology), the devices, and the links in the topology. The expected results are:
- The devices to be incorporated into the Topology.
- The devices to be pre-configured and initialized as ENABLED by the Automation component.
- The monitoring of the device ports (named endpoints in TeraFlow OS) to be activated, with data collection starting automatically.
- The links to be added to the topology.
To run this step, execute the following script:
`./ecoc22/run_test_01_bootstrap.sh`
When the script finishes, check the Grafana L3-Monitoring Dashboard; you should see the monitoring data being plotted and updated every 5 seconds (by default). Given that no service is configured yet, you should see a 0-valued flat plot.
In the WebUI, select the "admin" Context. In the "Devices" tab you should see that 5 different emulated devices have been created and activated: 4 packet routers, and 1 optical line system controller. Besides, in the "Services" tab you should see that there is no service created. Note here that the emulated devices produce synthetic randomly-generated data and do not care about the services configured.
### 2. L3VPN Service creation
This step configures a new service emulating the request an OSM WIM would make by means of a Mock OSM instance.
To run this step, execute the following script:
`./ecoc22/run_test_02_create_service.sh`
When the script finishes, check the WebUI "Services" tab. You should see that two services have been created, one for the optical layer and another for the packet layer. Besides, you can check the "Devices" tab to see the configuration rules that have been configured in each device. In the Grafana Dashboard, given that there is now a service configured, you should see the plots with the monitored data for the device. By default, device R1-INF is selected.
### 3. L3VPN Service removal
This step deconfigures the previously created services emulating the request an OSM WIM would make by means of a Mock OSM instance.
To run this step, execute the following script:
`./ecoc22/run_test_03_delete_service.sh`
When the script finishes, check the WebUI "Services" tab. You should see that the two services have been removed. Besides, in the "Devices" tab you can see that the appropriate configuration rules have been deconfigured. In the Grafana Dashboard, given that there is no service configured, you should see a 0-valued flat plot again.
### 4. Cleanup
This last step just performs a cleanup of the scenario removing all the TeraFlow OS entities for completeness.
To run this step, execute the following script:
`./ecoc22/run_test_04_cleanup.sh`
When the script finishes, check the WebUI "Devices" tab; you should see that the devices have been removed. Besides, in the "Services" tab you can see that the "admin" Context has no services, given that the context itself has been removed.
## Troubleshooting
Different scripts are provided to help in troubleshooting issues in the execution of the test. These scripts are:
- `./ecoc22/show_deployment.sh`: this script reports the items belonging to this deployment. Use it to validate that all the pods, deployments, and replica sets are ready and have a state of "running", and that the services are deployed and have appropriate IP addresses and ports (an equivalent manual check is sketched after this list).
- `ecoc22/show_logs_automation.sh`: this script reports the logs for the automation component.
- `ecoc22/show_logs_compute.sh`: this script reports the logs for the compute component.
- `ecoc22/show_logs_context.sh`: this script reports the logs for the context component.
- `ecoc22/show_logs_device.sh`: this script reports the logs for the device component.
- `ecoc22/show_logs_monitoring.sh`: this script reports the logs for the monitoring component.
- `ecoc22/show_logs_service.sh`: this script reports the logs for the service component.
- `ecoc22/show_logs_webui.sh`: this script reports the logs for the webui component.
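As a rough manual equivalent of `./ecoc22/show_deployment.sh` (assuming the `ecoc22` namespace used by the run_test scripts in this commit):

```bash
# List all pods, deployments, replica sets, and services in the namespace
kubectl --namespace ecoc22 get all
```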
# Set the URL of your local Docker registry where the images will be uploaded to.
export TFS_REGISTRY_IMAGE="http://localhost:32000/tfs/"
# Set the list of components, separated by spaces, you want to build images for, and deploy.
export TFS_COMPONENTS="context device automation service slice compute webui"
# Set the tag you want to use for your images.
export TFS_IMAGE_TAG="dev"
# Set the name of the Kubernetes namespace to deploy to.
export TFS_K8S_NAMESPACE="tfs"
# Set additional manifest files to be applied after the deployment
export TFS_EXTRA_MANIFESTS="manifests/nginx_ingress_http.yaml"
# Set the new Grafana admin password
export TFS_GRAFANA_PASSWORD="admin123+"
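A hedged sketch of how this settings file is typically consumed (both paths are assumptions):

```bash
source ./ecoc22/deploy_specs.sh   # assumed location of the settings above
./deploy.sh                       # assumed deployment entry point that reads the TFS_* variables
```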
@@ -13,39 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Configure the correct folder on the .coveragerc file
cat $PROJECTDIR/coverage/.coveragerc.template | sed s+~/teraflow/controller+$PROJECTDIR+g > $RCFILE
# Destroy old coverage file
rm -f $COVERAGEFILE
# Set the name of the Kubernetes namespace and hostname to use.
K8S_NAMESPACE="ecoc22"
# K8S_HOSTNAME="kubernetes-master"
# dynamically gets the name of the K8s master node
K8S_HOSTNAME=`kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p'`
# Flush Context database
kubectl --namespace $K8S_NAMESPACE exec -it deployment/contextservice --container redis -- redis-cli FLUSHALL
export CONTEXTSERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export CONTEXTSERVICE_SERVICE_PORT_GRPC=$(kubectl get service contextservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==1010)].nodePort}')
export DEVICESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export DEVICESERVICE_SERVICE_PORT_GRPC=$(kubectl get service deviceservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==2020)].nodePort}')
export COMPUTESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export COMPUTESERVICE_SERVICE_PORT_HTTP=$(kubectl get service computeservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==8080)].nodePort}')
# Useful flags for pytest:
#-o log_cli=true -o log_file=device.log -o log_file_level=DEBUG
# Run functional test and analyze coverage of code at same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
    tests/ecoc22/tests/test_functional_bootstrap.py
pytest --verbose src/tests/ecoc22/tests/test_functional_bootstrap.py
@@ -13,29 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Set the name of the Kubernetes namespace and hostname to use.
K8S_NAMESPACE="ecoc22"
# dynamically gets the name of the K8s master node
K8S_HOSTNAME=`kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p'`
export CONTEXTSERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export CONTEXTSERVICE_SERVICE_PORT_GRPC=$(kubectl get service contextservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==1010)].nodePort}')
export DEVICESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export DEVICESERVICE_SERVICE_PORT_GRPC=$(kubectl get service deviceservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==2020)].nodePort}')
export COMPUTESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export COMPUTESERVICE_SERVICE_PORT_HTTP=$(kubectl get service computeservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==8080)].nodePort}')
# Useful flags for pytest:
#-o log_cli=true -o log_file=device.log -o log_file_level=DEBUG
# Run functional test and analyze coverage of code at same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose -o log_cli=true \
    tests/ecoc22/tests/test_functional_create_service.py
pytest --verbose src/tests/ecoc22/tests/test_functional_create_service.py
@@ -13,29 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Set the name of the Kubernetes namespace and hostname to use.
K8S_NAMESPACE="ecoc22"
# dynamically gets the name of the K8s master node
K8S_HOSTNAME=`kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p'`
export CONTEXTSERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export CONTEXTSERVICE_SERVICE_PORT_GRPC=$(kubectl get service contextservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==1010)].nodePort}')
export DEVICESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export DEVICESERVICE_SERVICE_PORT_GRPC=$(kubectl get service deviceservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==2020)].nodePort}')
export COMPUTESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export COMPUTESERVICE_SERVICE_PORT_HTTP=$(kubectl get service computeservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==8080)].nodePort}')
# Useful flags for pytest:
#-o log_cli=true -o log_file=device.log -o log_file_level=DEBUG
# Run functional test and analyze coverage of code at same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
    tests/ecoc22/tests/test_functional_delete_service.py
pytest --verbose src/tests/ecoc22/tests/test_functional_delete_service.py
@@ -13,29 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Set the name of the Kubernetes namespace and hostname to use.
K8S_NAMESPACE="ecoc22"
# dynamically gets the name of the K8s master node
K8S_HOSTNAME=`kubectl get nodes --selector=node-role.kubernetes.io/master | tr -s " " | cut -f1 -d" " | sed -n '2 p'`
export CONTEXTSERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export CONTEXTSERVICE_SERVICE_PORT_GRPC=$(kubectl get service contextservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==1010)].nodePort}')
export DEVICESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export DEVICESERVICE_SERVICE_PORT_GRPC=$(kubectl get service deviceservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==2020)].nodePort}')
export COMPUTESERVICE_SERVICE_HOST=$(kubectl get node $K8S_HOSTNAME -o 'jsonpath={.status.addresses[?(@.type=="InternalIP")].address}')
export COMPUTESERVICE_SERVICE_PORT_HTTP=$(kubectl get service computeservice-public --namespace $K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.port==8080)].nodePort}')
# Useful flags for pytest:
#-o log_cli=true -o log_file=device.log -o log_file_level=DEBUG
# Run functional test and analyze coverage of code at same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
    tests/ecoc22/tests/test_functional_cleanup.py
pytest --verbose src/tests/ecoc22/tests/test_functional_cleanup.py
@@ -13,12 +13,31 @@
# See the License for the specific language governing permissions and
# limitations under the License.
export COMPONENTS="context device service slice compute webui"
export K8S_NAMESPACE="ecoc22"
mkdir -p tmp/exec_logs/$K8S_NAMESPACE/
rm -f tmp/exec_logs/$K8S_NAMESPACE/*
for COMPONENT in $COMPONENTS; do
    kubectl --namespace $K8S_NAMESPACE logs deployment/${COMPONENT}service -c server > tmp/exec_logs/$K8S_NAMESPACE/$COMPONENT.log
done
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
RCFILE=$PROJECTDIR/coverage/.coveragerc
COVERAGEFILE=$PROJECTDIR/coverage/.coverage
# Configure the correct folder on the .coveragerc file
cat $PROJECTDIR/coverage/.coveragerc.template | sed s+~/teraflow/controller+$PROJECTDIR+g > $RCFILE
# Destroy old coverage file
rm -f $COVERAGEFILE
# Force a flush of the Context database
kubectl --namespace $TFS_K8S_NAMESPACE exec -it deployment/contextservice --container redis -- redis-cli FLUSHALL
# Run functional tests and analyze code coverage at the same time
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
    tests/ecoc22/tests/test_functional_bootstrap.py
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
    tests/ecoc22/tests/test_functional_create_service.py
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
    tests/ecoc22/tests/test_functional_delete_service.py
coverage run --rcfile=$RCFILE --append -m pytest --log-level=INFO --verbose \
    tests/ecoc22/tests/test_functional_cleanup.py
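After the four runs, the coverage accumulated via `--append` can be summarized; a sketch:

```bash
# Print a per-module coverage summary using the same rcfile as the runs above
coverage report --rcfile=$RCFILE
```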
#!/bin/bash
# Copyright 2021-2023 H2020 TeraFlow (https://www.teraflow-h2020.eu/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,15 +12,24 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ECOC 22 deployment settings
export REGISTRY_IMAGE=""
export COMPONENTS="context device service slice compute webui"
export IMAGE_TAG="ecoc22"
export K8S_NAMESPACE="ecoc22"
export K8S_HOSTNAME="kubernetes-master"
export EXTRA_MANIFESTS="./ecoc22/expose_services.yaml"
export GRAFANA_PASSWORD="admin123+"
./deploy_in_kubernetes.sh
import copy, json, sys
from .Objects import CONTEXTS, DEVICES, LINKS, TOPOLOGIES

def main():
    with open('tests/ecoc22/descriptors_emulated.json', 'w', encoding='UTF-8') as f:
        devices = []
        for device, connect_rules in DEVICES:
            device = copy.deepcopy(device)
            device['device_config']['config_rules'].extend(connect_rules)
            devices.append(device)
        f.write(json.dumps({
            'contexts'  : CONTEXTS,
            'topologies': TOPOLOGIES,
            'devices'   : devices,
            'links'     : LINKS,
        }))
    return 0

if __name__ == '__main__':
    sys.exit(main())
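A hedged invocation sketch for the descriptor generator above (the module name is an assumption derived from the relative import and the output path):

```bash
cd src
python -m tests.ecoc22.tests.generate_descriptors   # hypothetical module path; writes tests/ecoc22/descriptors_emulated.json
```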
#!/bin/bash
# Copyright 2021-2023 H2020 TeraFlow (https://www.teraflow-h2020.eu/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,6 +12,29 @@
# See the License for the specific language governing permissions and
# limitations under the License.
K8S_NAMESPACE="ecoc22"
kubectl --namespace $K8S_NAMESPACE get all
import json, logging, sys
from common.Settings import get_setting
from context.client.ContextClient import ContextClient
from context.proto.context_pb2 import Context, Device, Link, Topology
from device.client.DeviceClient import DeviceClient

LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.DEBUG)

def main():
    context_client = ContextClient(
        get_setting('CONTEXTSERVICE_SERVICE_HOST'), get_setting('CONTEXTSERVICE_SERVICE_PORT_GRPC'))
    device_client = DeviceClient(
        get_setting('DEVICESERVICE_SERVICE_HOST'), get_setting('DEVICESERVICE_SERVICE_PORT_GRPC'))

    with open('tests/ecoc22/descriptors.json', 'r', encoding='UTF-8') as f:
        descriptors = json.loads(f.read())

    for context  in descriptors['contexts'  ]: context_client.SetContext (Context (**context ))
    for topology in descriptors['topologies']: context_client.SetTopology(Topology(**topology))
    for device   in descriptors['devices'   ]: device_client .AddDevice  (Device  (**device ))
    for link     in descriptors['links'     ]: context_client.SetLink    (Link    (**link    ))
    return 0

if __name__ == '__main__':
    sys.exit(main())
# Set the URL of your local Docker registry where the images will be uploaded to.
export TFS_REGISTRY_IMAGE="http://localhost:32000/tfs/"
# Set the list of components, separated by spaces, you want to build images for, and deploy.
export TFS_COMPONENTS="context device automation service compute monitoring webui"
# Set the tag you want to use for your images.
export TFS_IMAGE_TAG="dev"
# Set the name of the Kubernetes namespace to deploy to.
export TFS_K8S_NAMESPACE="tfs"
# Set additional manifest files to be applied after the deployment
export TFS_EXTRA_MANIFESTS="manifests/nginx_ingress_http.yaml"
# Set the new Grafana admin password
export TFS_GRAFANA_PASSWORD="admin123+"