diff --git a/doc/deployment_guide/deploy_TeraFlowSDN/deploy_TeraFlowSDN.md b/doc/deployment_guide/deploy_TeraFlowSDN/deploy_TeraFlowSDN.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..05638ea48c41284518a1b41906f3a27fb5ccf04b 100644
--- a/doc/deployment_guide/deploy_TeraFlowSDN/deploy_TeraFlowSDN.md
+++ b/doc/deployment_guide/deploy_TeraFlowSDN/deploy_TeraFlowSDN.md
@@ -0,0 +1,159 @@
+This section describes how to deploy the TeraFlowSDN controller on top of MicroK8s using the
+environment configured in the previous sections.
+
+
+## Install prerequisites
+```bash
+sudo apt-get install -y git curl jq
+```
+
+
+## Clone the Git repository of the TeraFlowSDN controller
+Clone from the ETSI-hosted GitLab code repository:
+```bash
+mkdir ~/tfs-ctrl
+git clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl
+```
+
+__Important__: The original H2020-TeraFlow project hosted on GitLab.com has been
+archived and will not receive further contributions/updates.
+Please clone from the [ETSI-hosted GitLab code repository](https://labs.etsi.org/rep/tfs/controller).
+
+
+## Checkout the appropriate Git branch
+TeraFlowSDN controller versions can be found in the appropriate release tags and/or branches as described in [Home > Versions](/Home#versions).
+
+By default, the branch *master* is checked out and points to the latest stable version of the TeraFlowSDN controller, while the branch *develop* contains the latest developments and contributions under test and validation.
+
+To switch to the appropriate branch, run the following command, replacing `develop` with the name of the branch you want to deploy:
+```bash
+cd ~/tfs-ctrl
+git checkout develop
+```
+
+
+## Prepare a deployment script with the deployment settings
+Create a new deployment script, e.g., `my_deploy.sh`, adding the appropriate settings as
+follows.
+This section provides just an overview of the available settings. An example [`my_deploy.sh`](https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh) script is provided in the root folder of the project for your convenience, with a full description of all the settings.
+
+__Note__: The example `my_deploy.sh` script provides reasonable settings for deploying a functional and fairly complete TeraFlowSDN controller, together with a brief description of their meaning. To see extended descriptions, check the scripts in the `deploy` folder.
+
+```bash
+cd ~/tfs-ctrl
+tee my_deploy.sh >/dev/null << EOF
+# ----- TeraFlowSDN ------------------------------------------------------------
+export TFS_REGISTRY_IMAGES="http://localhost:32000/tfs/"
+export TFS_COMPONENTS="context device ztp monitoring pathcomp service slice nbi webui load_generator"
+export TFS_IMAGE_TAG="dev"
+export TFS_K8S_NAMESPACE="tfs"
+export TFS_EXTRA_MANIFESTS="manifests/nginx_ingress_http.yaml"
+export TFS_GRAFANA_PASSWORD="admin123+"
+export TFS_SKIP_BUILD=""
+
+# ----- CockroachDB ------------------------------------------------------------
+export CRDB_NAMESPACE="crdb"
+export CRDB_EXT_PORT_SQL="26257"
+export CRDB_EXT_PORT_HTTP="8081"
+export CRDB_USERNAME="tfs"
+export CRDB_PASSWORD="tfs123"
+export CRDB_DATABASE="tfs"
+export CRDB_DEPLOY_MODE="single"
+export CRDB_DROP_DATABASE_IF_EXISTS="YES"
+export CRDB_REDEPLOY=""
+
+# ----- NATS -------------------------------------------------------------------
+export NATS_NAMESPACE="nats"
+export NATS_EXT_PORT_CLIENT="4222"
+export NATS_EXT_PORT_HTTP="8222"
+export NATS_REDEPLOY=""
+
+# ----- QuestDB ----------------------------------------------------------------
+export QDB_NAMESPACE="qdb"
+export QDB_EXT_PORT_SQL="8812"
+export QDB_EXT_PORT_ILP="9009"
+export QDB_EXT_PORT_HTTP="9000"
+export QDB_USERNAME="admin"
+export QDB_PASSWORD="quest"
+export QDB_TABLE_MONITORING_KPIS="tfs_monitoring_kpis"
+export QDB_TABLE_SLICE_GROUPS="tfs_slice_groups"
+export QDB_DROP_TABLES_IF_EXIST="YES"
+export QDB_REDEPLOY=""
+
+EOF
+```
+
+The settings are organized in four sections:
+- Section `TeraFlowSDN`:
+  - `TFS_REGISTRY_IMAGES` specifies the private Docker registry to be used; by default, we assume the Docker registry enabled in MicroK8s is used.
+  - `TFS_COMPONENTS` specifies the components whose Docker images will be rebuilt, uploaded to the private Docker registry, and deployed in Kubernetes.
+  - `TFS_IMAGE_TAG` defines the tag to be used for Docker images being rebuilt and uploaded to the private Docker registry.
+  - `TFS_K8S_NAMESPACE` specifies the name of the Kubernetes namespace to be used for deploying the TFS components.
+  - `TFS_EXTRA_MANIFESTS` allows providing additional manifests to be applied to the Kubernetes environment during the deployment. Typical use cases are deploying ingress controllers, service monitors for Prometheus, etc.
+  - `TFS_GRAFANA_PASSWORD` lets you specify the password you want to use for the `admin` user of the Grafana instance being deployed and linked to the Monitoring component.
+  - `TFS_SKIP_BUILD`, if set to `YES`, prevents rebuilding the Docker images. In that case, the deploy script redeploys the existing Docker images without rebuilding/updating them.
+
+- Section `CockroachDB`: configures the deployment of the backend [CockroachDB](https://www.cockroachlabs.com/) database.
+  - Check the example script [`my_deploy.sh`](https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh) for further details.
+
+- Section `NATS`: configures the deployment of the backend [NATS](https://nats.io/) message broker.
+  - Check the example script [`my_deploy.sh`](https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh) for further details.
+
+- Section `QuestDB`: configures the deployment of the backend [QuestDB](https://questdb.io/) timeseries database.
+  - Check the example script [`my_deploy.sh`](https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh) for further details.
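+
+As an illustration of how these settings are typically adjusted, the hedged sketch below shows how one might customize a copy of `my_deploy.sh` for a lighter redeployment. The reduced component list and the `YES` value for `TFS_SKIP_BUILD` are example choices, not recommendations from the project.
+
+```bash
+# Example customization (illustrative values only): deploy fewer components and
+# reuse the images already pushed to the private registry instead of rebuilding them.
+export TFS_COMPONENTS="context device pathcomp service slice nbi webui"
+export TFS_SKIP_BUILD="YES"
+```
+
+These variables take effect when the file is sourced before running the deployment script described below.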
+
+
+## Confirm that MicroK8s is running
+
+Run the following command:
+```bash
+microk8s status
+```
+
+If it is reported `microk8s is not running, try microk8s start`, run the following command to start MicroK8s:
+```bash
+microk8s start
+```
+
+Confirm everything is up and running:
+
+1. Periodically [Check the status of Kubernetes](/1.-Deployment-Guide/1.2.-Install-MicroK8s#check-status-of-kubernetes-and-addons) until you see the addons \[dns, ha-cluster, helm3, hostpath-storage, ingress, registry, storage\] in the enabled block.
+2. Periodically [Check Kubernetes resources](/1.-Deployment-Guide/1.2.-Install-MicroK8s#check-all-resources-in-kubernetes) until all pods are **Ready** and **Running**.
+
+
+## Deploy TFS controller
+First, source the deployment settings defined in the previous section.
+This way, you do not need to specify the environment variables in each and every command you execute to operate the TFS controller.
+Remember to re-source the file if you open new terminal sessions.
+Then, run the following command to deploy the TeraFlowSDN controller on top of the MicroK8s Kubernetes platform.
+
+```bash
+cd ~/tfs-ctrl
+source my_deploy.sh
+./deploy/all.sh
+```
+
+The script performs the following steps:
+- Executes script `./deploy/crdb.sh` to automate the deployment of the CockroachDB database used by the Context component.
+  - The script automatically checks if CockroachDB is already deployed.
+  - If there are settings instructing to drop the database and/or redeploy CockroachDB, it does the appropriate actions to honor them as defined in the previous section.
+- Executes script `./deploy/nats.sh` to automate the deployment of the NATS message broker used by the Context component.
+  - The script automatically checks if NATS is already deployed.
+  - If there are settings instructing to redeploy the message broker, it does the appropriate actions to honor them as defined in the previous section.
+- Executes script `./deploy/qdb.sh` to automate the deployment of the QuestDB timeseries database used by the Monitoring component.
+  - The script automatically checks if QuestDB is already deployed.
+  - If there are settings instructing to redeploy the timeseries database, it does the appropriate actions to honor them as defined in the previous section.
+- Executes script `./deploy/tfs.sh` to automate the deployment of TeraFlowSDN.
+  - Creates the namespace defined in `TFS_K8S_NAMESPACE`
+  - Creates secrets for CockroachDB, NATS, and QuestDB to be used by the Context and Monitoring components.
+  - Builds the Docker images for the components defined in `TFS_COMPONENTS`
+  - Tags the Docker images with the value of `TFS_IMAGE_TAG`
+  - Pushes the Docker images to the repository defined in `TFS_REGISTRY_IMAGES`
+  - Deploys the components defined in `TFS_COMPONENTS`
+  - Creates the file `tfs_runtime_env_vars.sh` with the environment variables for the components defined in `TFS_COMPONENTS`, defining their local host addresses and their port numbers.
+  - Applies extra manifests defined in `TFS_EXTRA_MANIFESTS`, such as:
+    - Creating an ingress controller listening at port 80 for HTTP connections to enable external access to the TeraFlowSDN WebUI, Grafana Dashboards, and Compute NBI interfaces.
+    - Deploying service monitors to enable monitoring the performance of the components, device drivers, and service handlers.
+  - Initializes and configures the Grafana dashboards (if the Monitoring component is deployed)
+- Reports a summary of the deployment
+  - See [Show Deployment and Logs](./1.5.-Show-Deployment-and-Logs)
diff --git a/doc/deployment_guide/install_micro_k8s/install_micro_k8s.md b/doc/deployment_guide/install_micro_k8s/install_micro_k8s.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..207395f37eff08c9cd8411a4df34ad095a359656 100644
--- a/doc/deployment_guide/install_micro_k8s/install_micro_k8s.md
+++ b/doc/deployment_guide/install_micro_k8s/install_micro_k8s.md
@@ -0,0 +1,217 @@
+This section describes how to deploy the MicroK8s Kubernetes platform and configure it
+to be used with the ETSI TeraFlowSDN controller.
+In addition, Docker is installed to build the Docker images for the ETSI TeraFlowSDN controller.
+
+The steps described in this section might take several minutes depending on your Internet
+connection speed and the resources assigned to your VM, or the specifications of your
+physical server.
+
+These steps are easier to execute through an SSH connection, for instance using tools like [PuTTY](https://www.putty.org/) or [MobaXterm](https://mobaxterm.mobatek.net/).
+
+
+## Upgrade the Ubuntu distribution
+Skip this step if you already did it during the creation of the VM.
+```bash
+sudo apt-get update -y
+sudo apt-get dist-upgrade -y
+```
+
+
+## Install prerequisites
+```bash
+sudo apt-get install -y ca-certificates curl gnupg lsb-release snapd jq
+```
+
+
+## Install Docker CE
+Install Docker CE and the Docker BuildX plugin:
+```bash
+sudo apt-get install -y docker.io docker-buildx
+```
+
+**NOTE**: Starting from Docker v23, the [Build architecture](https://docs.docker.com/build/architecture/) has been updated and the `docker build` command entered a deprecation process in favor of the new `docker buildx build` command. The `docker-buildx` package provides the new `docker buildx build` command.
+
+Add the key "insecure-registries" with the private repository to the daemon configuration. This is done in two commands, since
+reading from and writing to the same file in a single pipeline might cause trouble.
+
+```bash
+if [ -s /etc/docker/daemon.json ]; then cat /etc/docker/daemon.json; else echo '{}'; fi \
+    | jq 'if has("insecure-registries") then . else .+ {"insecure-registries": []} end' -- \
+    | jq '."insecure-registries" |= (.+ ["localhost:32000"] | unique)' -- \
+    | tee tmp.daemon.json
+sudo mv tmp.daemon.json /etc/docker/daemon.json
+sudo chown root:root /etc/docker/daemon.json
+sudo chmod 600 /etc/docker/daemon.json
+```
+
+Restart the Docker daemon:
+```bash
+sudo systemctl restart docker
+```
+
+
+## Install MicroK8s
+
+**Important**: Some TeraFlowSDN dependencies need to be executed on top of MicroK8s/Kubernetes v1.24. For now, it is not guaranteed to run on newer versions.
+
+```bash
+# Install MicroK8s
+sudo snap install microk8s --classic --channel=1.24/stable
+
+# Create alias for command "microk8s.kubectl" to be usable as "kubectl"
+sudo snap alias microk8s.kubectl kubectl
+```
+
+It is important to make sure that `ufw` will not interfere with the internal pod-to-pod
+and pod-to-Internet traffic.
+To do so, first check the status.
+If `ufw` is active, use the following commands to enable the communication.
+
+```bash
+
+# Verify status of the ufw firewall
+sudo ufw status
+
+# If ufw is active, install the following rules to enable pod-to-pod and pod-to-Internet access
+sudo ufw allow in on cni0 && sudo ufw allow out on cni0
+sudo ufw default allow routed
+```
+
+**NOTE**: MicroK8s can be used to compose a Highly Available Kubernetes cluster, enabling you to construct an environment combining the CPU, RAM, and storage resources of multiple machines. If you are interested in this procedure, review the official instructions in [How to build a highly available Kubernetes cluster with MicroK8s](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha), in particular, the step [Create a MicroK8s multi-node cluster](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha#4-create-a-microk8s-multinode-cluster).
+
+#### References:
+- [The lightweight Kubernetes > Install MicroK8s](https://microk8s.io/#install-microk8s)
+- [Install a local Kubernetes with MicroK8s](https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s)
+- [How to build a highly available Kubernetes cluster with MicroK8s](https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha)
+
+
+## Add user to the docker and microk8s groups
+
+It is important that your user has permission to run `docker` and `microk8s` in the
+terminal.
+To allow this, you need to add your user to the `docker` and `microk8s` groups with the
+following commands:
+
+```bash
+sudo usermod -a -G docker $USER
+sudo usermod -a -G microk8s $USER
+sudo chown -f -R $USER $HOME/.kube
+sudo reboot
+```
+
+If you run into trouble executing the following commands, it might be because the `.kube` folder was not automatically provisioned into your home folder; in that case, follow the steps below:
+
+```bash
+mkdir -p $HOME/.kube
+sudo chown -f -R $USER $HOME/.kube
+microk8s config > $HOME/.kube/config
+sudo reboot
+```
+
+## Check status of Kubernetes and addons
+To retrieve the status of Kubernetes __once__, run the following command:
+```bash
+microk8s.status --wait-ready
+```
+
+To retrieve the status of Kubernetes __periodically__ (e.g., every 1 second), run the
+following command:
+```bash
+watch -n 1 microk8s.status --wait-ready
+```
+
+## Check all resources in Kubernetes
+To retrieve the status of the Kubernetes resources __once__, run the following command:
+```bash
+kubectl get all --all-namespaces
+```
+
+To retrieve the status of the Kubernetes resources __periodically__ (e.g., every 1
+second), run the following command:
+```bash
+watch -n 1 kubectl get all --all-namespaces
+```
+
+## Enable addons
+
+First, we need to enable the community plugins (maintained by third parties):
+
+```bash
+microk8s.enable community
+```
+
+The addons to be enabled are:
+- `dns`: enables resolving the pods and services by name
+- `helm3`: required to install NATS
+- `hostpath-storage`: enables providing storage for the pods (required by `registry`)
+- `ingress`: deploys an ingress controller to expose the microservices outside Kubernetes
+- `registry`: deploys a private registry for the TFS controller images
+- `linkerd`: deploys the [linkerd service mesh](https://linkerd.io) used for load balancing among replicas
+- `prometheus`: set of tools that enable TFS observability through per-component instrumentation
+- `metrics-server`: deploys the [Kubernetes metrics server](https://github.com/kubernetes-sigs/metrics-server) for API access to service metrics
+
+```bash
+microk8s.enable dns helm3 hostpath-storage ingress registry prometheus metrics-server linkerd
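+
+# Optional sanity check (our own suggestion, not part of the official guide): once the
+# registry addon is ready, it should answer on localhost:32000 with the Docker Registry
+# HTTP API; at this point an empty repository list ({"repositories":[]}) is expected.
+curl http://localhost:32000/v2/_catalog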
+```
+
+__Important__: Enabling some of the addons might take a few minutes.
+Do not proceed with the next steps until the addons are ready.
+Otherwise, the deployment might fail.
+To confirm everything is up and running:
+1. Periodically
+   [Check the status of Kubernetes](./1.2.-Install-MicroK8s#check-status-of-kubernetes-and-addons)
+   until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, linkerd, metrics-server, prometheus, registry, storage] in the enabled block.
+2. Periodically
+   [Check Kubernetes resources](./1.2.-Install-MicroK8s#check-all-resources-in-kubernetes)
+   until all pods are __Ready__ and __Running__.
+3. If it takes too long for the Pods to be ready, __we observed that rebooting the machine may help__.
+
+Then, create aliases to make the commands easier to access:
+
+```bash
+sudo snap alias microk8s.helm3 helm3
+sudo snap alias microk8s.linkerd linkerd
+```
+
+To validate that `linkerd` is working correctly, run:
+
+```bash
+linkerd check
+```
+
+To validate that the `metrics-server` is working correctly, run:
+```bash
+kubectl top pods --all-namespaces
+```
+and you should see a screen similar to the `top` command in Linux, showing the columns *namespace*, *pod name*, *CPU (cores)*, and *MEMORY (bytes)*.
+
+If pods are not starting, check the pod logs for more information. For example, `linkerd` is sensitive to a proper `/etc/resolv.conf` syntax.
+```bash
+kubectl logs <podname> --namespace <namespace>
+```
+If the command shows an error message, restarting the machine might also help.
+
+## Stop, Restart, and Redeploy
+Find below some additional commands you might need while you work with MicroK8s:
+```bash
+microk8s.stop   # stop MicroK8s cluster (for instance, before powering off your computer)
+microk8s.start  # start MicroK8s cluster
+microk8s.reset  # reset infrastructure to a clean state
+```
+
+If the previous commands do not recover the MicroK8s cluster, you can redeploy it.
+
+If you want to keep the MicroK8s configuration, use:
+```bash
+sudo snap remove microk8s
+```
+
+If you need to completely remove MicroK8s and its configuration, use:
+```bash
+sudo snap remove microk8s --purge
+sudo apt-get remove --purge docker.io docker-buildx
+```
+
+**IMPORTANT**: After uninstalling MicroK8s, it is convenient to reboot the computer (the VM or the physical machine, depending on your setup). Otherwise, some system configurations, especially port forwarding and firewall rules, are not correctly cleaned up.
+
+After the reboot, redeploy as described in this section.
diff --git a/doc/deployment_guide/show_deployment_and_logs/show_deployment_and_logs.md b/doc/deployment_guide/show_deployment_and_logs/show_deployment_and_logs.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..96d6147463db79c8430887f673b8e424cec355c3 100644
--- a/doc/deployment_guide/show_deployment_and_logs/show_deployment_and_logs.md
+++ b/doc/deployment_guide/show_deployment_and_logs/show_deployment_and_logs.md
@@ -0,0 +1,33 @@
+This section presents some helper scripts to inspect the status of the deployment and
+the logs of the components.
+These scripts are particularly helpful for troubleshooting during the execution of
+experiments, development, and debugging.
+
+
+## Report the deployment of the TFS controller
+
+The summary report given at the end of the [Deploy TFS controller](./1.3.-Deploy-TeraFlowSDN#deploy-tfs-controller)
+procedure can be generated manually at any time by running the following command.
+You can avoid sourcing `my_deploy.sh` if it has already been done.
+```bash
+cd ~/tfs-ctrl
+source my_deploy.sh
+./deploy/show.sh
+```
+
+Use this script to validate that all the pods, deployments, replica sets, ingress
+controller, etc. are ready and in the appropriate state, e.g., *running* for pods, and
+that the services are deployed and have appropriate IP addresses and port numbers.
+
+
+## Report the log of a specific TFS controller component
+
+A number of scripts are pre-created in the `scripts` folder to facilitate the inspection
+of the component logs.
+For instance, to dump the log of the Context component, run the following command.
+You can avoid sourcing `my_deploy.sh` if it has already been done.
+
+```bash
+source my_deploy.sh
+./scripts/show_logs_context.sh
+```
diff --git a/doc/deployment_guide/webUI_and_grafana_dashboards/webUI_and_grafana_dashboards.md b/doc/deployment_guide/webUI_and_grafana_dashboards/webUI_and_grafana_dashboards.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..1bd9ec0f3e2a7dfa2430eef79bb65495c164cb3f 100644
--- a/doc/deployment_guide/webUI_and_grafana_dashboards/webUI_and_grafana_dashboards.md
+++ b/doc/deployment_guide/webUI_and_grafana_dashboards/webUI_and_grafana_dashboards.md
@@ -0,0 +1,14 @@
+This section describes how to get access to the TeraFlowSDN controller WebUI and the monitoring Grafana dashboards.
+
+## Access the TeraFlowSDN WebUI
+If you followed the installation steps based on MicroK8s, an ingress controller was installed that listens on TCP port 80.
+
+In addition, the ingress controller defines the following reverse proxy paths (on your local machine):
+- `http://127.0.0.1/webui`: points to the WebUI of TeraFlowSDN.
+- `http://127.0.0.1/grafana`: points to the Grafana dashboards.
+  This endpoint provides access to the monitoring dashboards of TeraFlowSDN.
+  The credentials for the `admin` user are those defined in the `my_deploy.sh` script, in the `TFS_GRAFANA_PASSWORD` variable.
+- `http://127.0.0.1/restconf`: points to the Compute component NBI based on RestCONF.
+  This endpoint enables connecting external software, such as the ETSI OpenSourceMANO NFV Orchestrator, to TeraFlowSDN.
+
+**Note**: During the creation of the VM, a forward from host TCP port 8080 to the VM's TCP port 80 is configured, so the WebUIs and REST APIs of TeraFlowSDN should be accessed through the endpoint `127.0.0.1:8080` of your local machine instead of `127.0.0.1:80`.
\ No newline at end of file
diff --git a/doc/development_guide/configure_environment/java_quarkus.md b/doc/development_guide/configure_environment/java_quarkus.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..36bfad1fb4eb7812a38841662966f8d6875b301a 100644
--- a/doc/development_guide/configure_environment/java_quarkus.md
+++ b/doc/development_guide/configure_environment/java_quarkus.md
@@ -0,0 +1,64 @@
+This section describes the steps needed to create a development environment for TFS components implemented in Java. Currently, the ZTP and Policy components are developed in Java (version 11) and use the [Quarkus](https://quarkus.io/) framework, which enables Kubernetes-native development.
+
+## Install JDK
+To begin, make sure that you have Java installed and at the correct version:
+```
+java --version
+```
+
+If you don't have Java installed, you will get an error like the following:
+```
+Command 'java' not found, but can be installed with:
+
+sudo apt install default-jre              # version 2:1.11-72build1, or
+sudo apt install openjdk-11-jre-headless  # version 11.0.14+9-0ubuntu2
+sudo apt install openjdk-17-jre-headless  # version 17.0.2+8-1
+sudo apt install openjdk-18-jre-headless  # version 18~36ea-1
+sudo apt install openjdk-8-jre-headless   # version 8u312-b07-0ubuntu1
+```
+
+In that case, use the following command to install the correct version:
+```
+sudo apt install openjdk-11-jre-headless
+```
+
+Otherwise, you should see something like the following:
+```
+openjdk 11.0.18 2023-01-17
+OpenJDK Runtime Environment (build 11.0.18+10-post-Ubuntu-0ubuntu120.04.1)
+OpenJDK 64-Bit Server VM (build 11.0.18+10-post-Ubuntu-0ubuntu120.04.1, mixed mode, sharing)
+```
+
+## Compiling and testing existing components
+In the root directory of the existing Java components you will find an executable Maven wrapper named `mvnw`. You can use this executable, which is already configured for the components, instead of your local Maven installation. For example, to compile the project, run the following:
+```
+./mvnw compile
+```
+
+## VS Code Quarkus plugin
+In case you are using [VS Code](https://code.visualstudio.com/) for development, we suggest installing the [official Quarkus extension](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-quarkus).
+The extension should be able to automatically find the currently open project and integrate with the above `mvnw` Maven wrapper, making it easier to control the [Maven lifecycle](https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html).
+Make sure that you open the specific component directory (i.e., `src/ztp` or `src/policy`) and not the general controller one (i.e., `src`).
+
+## New Java TFS component
+
+### Sample Project
+
+If you want to create a new TFS component written in Java, you can generate a new Quarkus project based on the following project:
+
+[TFS Sample Quarkus Project](https://code.quarkus.io/?e=grpc&e=kubernetes&e=container-image-jib&e=kubernetes-service-binding&e=smallrye-health&e=resteasy-reactive)
+
+That way, you should have most of the libraries you need to integrate with the rest of the TFS components. Feel free, however, to add or remove libraries depending on your needs.
+
+### Initial setup
+
+If you used the sample project above, you should have a project with a basic structure. However, there are some steps that you should take before starting development.
+
+First, make sure that you copy the protobuf files, found in the `proto` folder in the root directory of the TFS SDN controller, to the `new-component/src/main/proto` directory.
+
+Next, you should create the following files:
+* `new-component/.gitlab-ci.yml`
+* `new-component/Dockerfile`
+* `new-component/src/resources/application.yaml`
+
+We suggest copying the respective files from existing components (Automation and Policy) and changing them according to your needs. See the sketch below for typical build commands.
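+
+For day-to-day development of the new component, the `mvnw` wrapper described above can drive the usual Maven lifecycle. The commands below are a hedged example that assumes the Quarkus Maven plugin included in projects generated from the sample above; adapt them to your component.
+```
+./mvnw compile        # compile the sources
+./mvnw test           # run the unit tests
+./mvnw quarkus:dev    # start the component in Quarkus live-coding (dev) mode
+```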
\ No newline at end of file
diff --git a/doc/development_guide/configure_environment/python.md b/doc/development_guide/configure_environment/python.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..80d6874423672354cb3a9aeb669f6cd684a53169 100644
--- a/doc/development_guide/configure_environment/python.md
+++ b/doc/development_guide/configure_environment/python.md
@@ -0,0 +1,149 @@
+This section describes how to configure the Python environment to run experiments and
+develop code for the ETSI TeraFlowSDN controller.
+In particular, we use [PyEnv](https://github.com/pyenv/pyenv) to install the appropriate
+version of Python and manage the virtual environments.
+
+
+## Upgrade the Ubuntu distribution
+Skip this step if you already did it during the installation of your machine.
+```bash
+sudo apt-get update -y
+sudo apt-get dist-upgrade -y
+```
+
+
+## Install PyEnv dependencies
+```bash
+sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget \
+    curl llvm git libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev
+```
+
+
+## Install PyEnv
+
+We recommend installing PyEnv through the
+[PyEnv Installer](https://github.com/pyenv/pyenv-installer).
+Below you can find the instructions; however, we refer you to the link above for
+up-to-date instructions.
+
+```bash
+curl https://pyenv.run | bash
+# When finished, edit ~/.bash_profile // ~/.profile // ~/.bashrc as the installer proposes.
+# In general, it means to append the following lines to ~/.bashrc:
+export PYENV_ROOT="$HOME/.pyenv"
+command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"
+eval "$(pyenv init -)"
+eval "$(pyenv virtualenv-init -)"
+```
+
+In case `.bashrc` is not properly linked to your profile, you may need to append the
+following line to your local `.profile` file:
+
+```bash
+# Open ~/.profile and append this line:
+source "$HOME"/.bashrc
+```
+
+
+## Restart the machine
+Restart the machine for all the changes to take effect.
+```bash
+sudo reboot
+```
+
+
+## Install Python 3.9 over PyEnv
+
+ETSI TeraFlowSDN uses Python 3.9 by default.
+You should install the latest stable update of Python 3.9, i.e., avoid "-dev" versions.
+To find the latest version available in PyEnv, you can run the following command:
+
+```bash
+pyenv install --list | grep " 3.9"
+```
+
+At the time of writing, this command will output the following list:
+
+```
+  3.9.0
+  3.9-dev
+  3.9.1
+  3.9.2
+  3.9.4
+  3.9.5
+  3.9.6
+  3.9.7
+  3.9.8
+  3.9.9
+  3.9.10
+  3.9.11
+  3.9.12
+  3.9.13
+  3.9.14
+  3.9.15
+  3.9.16  ** always select the latest version **
+```
+
+Therefore, the latest stable version is Python 3.9.16.
+To install this version, you should run:
+
+```bash
+pyenv install 3.9.16
+    # This command might take some minutes depending on your Internet connection speed
+    # and the performance of your machine.
+```
+
+
+## Create the Virtual Environment for TeraFlowSDN
+The following commands create a virtual environment named `tfs` using Python 3.9 and
+associate that environment with the current folder, i.e., `~/tfs-ctrl`.
+That way, when you are in that folder, the associated virtual environment will be used,
+thus inheriting the Python interpreter, i.e., Python 3.9, and the Python packages
+installed on it.
+
+```bash
+cd ~/tfs-ctrl
+pyenv virtualenv 3.9.16 tfs
+pyenv local 3.9.16/envs/tfs
+```
+
+After completing these commands, you should see in your prompt that you are now within
+the virtual environment `3.9.16/envs/tfs` in folder `~/tfs-ctrl`:
+
+```
+(3.9.16/envs/tfs) tfs@tfs-vm:~/tfs-ctrl$
+```
+
+In case the correct pyenv environment does not get automatically activated when you change to
+the `tfs-ctrl/` folder, execute the following command:
+
+```bash
+cd ~/tfs-ctrl
+pyenv activate 3.9.16/envs/tfs
+```
+
+
+
+## Install the basic Python packages within the virtual environment
+From within the `3.9.16/envs/tfs` environment in folder `~/tfs-ctrl`, run the following
+commands to install the basic Python packages required to work with TeraFlowSDN.
+```bash
+cd ~/tfs-ctrl
+./install_requirements.sh
+```
+
+Some dependencies require reloading the session, so log out and log in again.
+
+
+## Generate the Python code from the gRPC Proto messages and services
+
+The components, e.g., microservices, of the TeraFlowSDN controller, in general, use a gRPC-based open API to interoperate.
+All the protocol definitions can be found in the sub-folder `proto` within the root project folder.
+For additional details on gRPC, visit the official [gRPC](https://grpc.io/) web page.
+
+In order to interact with the components, (re-)generate the Python code from the gRPC definitions by running the following command:
+
+```bash
+cd ~/tfs-ctrl
+proto/generate_code_python.sh
+```
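+
+As a quick sanity check of the resulting environment (our own suggestion; it assumes `grpcio` is among the packages installed by `install_requirements.sh`), you can verify the Python version and the gRPC runtime from within the virtual environment:
+
+```bash
+cd ~/tfs-ctrl
+python --version                                   # should report Python 3.9.x
+python -c "import grpc; print(grpc.__version__)"   # should print the installed grpcio version
+```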