# Network Slice Controller (NSC)

The Network Slice Controller (NSC) is a component defined by the IETF to orchestrate the request, realization, and lifecycle control of IETF Network Slices.

---

## 📑 Table of Contents

1. [Overview](#overview)
2. [Main Components](#main-components)
   - [NBI Processor](#nbi-processor)
   - [Mapper](#mapper)
   - [Realizer](#realizer)
   - [Slice Database](#slice-database)
3. [Workflow](#workflow)
4. [Architecture](#architecture)
5. [API](#api)
6. [WebUI](#webui)
7. [Requirements](#requirements)
8. [Configuration](#configuration)
   - [Logging](#logging)
   - [General](#general)
   - [Mapper](#mapper-1)
   - [Realizer](#realizer-1)
   - [Teraflow Configuration](#teraflow-configuration)
   - [Ixia Configuration](#ixia-configuration)
   - [WebUI](#webui-1)
9. [Usage](#usage)
10. [Available Branches and Releases](#available-branches-and-releases)

---

## Overview

The NSC takes requests for IETF Network Slice Services and implements them using a suitable underlay technology. The NSC is the key component for the control and management of IETF Network Slices. It provides the creation/modification/deletion, monitoring, and optimization of IETF Network Slices in a multi-domain, multi-technology, and multi-vendor environment.

The main task of the NSC is to map abstract IETF Network Slice Service requirements to concrete technologies and establish the required connectivity, ensuring that resources are allocated to slices as necessary. The IETF Network Slice Service Interface is used for communicating the details of an IETF Network Slice Service (configuration, selected policies, operational state, etc.) as well as information about the status and performance of the IETF Network Slice.

The NSC also handles end-to-end network slice requests originating from 5G customers. These requests are managed by the 5G end-to-end orchestrator, which configures RAN and Core Network elements accordingly and passes the request to the NSC for processing.
The NSC then interacts with the relevant network controllers to implement the network slice in the transport network.

## Main Components

### NBI Processor

This component manages the requests entering the system. There are 4 kinds of requests:

- Create: when this request arrives, the NBI Processor checks the body of the request to analyze its format. If it is 3GPP NRM, it translates it into an IETF Slice Service Request. If it is already the latter, it propagates the request to the mapper. This function is able to process each request independently, even if several requests with different formats arrive together in the same message. Once created, the slice is stored in the slice database.
- Delete: this request can refer to a specific slice or to all configured slices. In either case, a deletion request is sent to the southbound controller and the referred slices are deleted from the slice database.
- Get: this request can refer to a specific slice or to all configured slices. In either case, the slices referred to in the request are returned from the slice database.
- Modify: this request allows changing the configuration of an existing slice. It works similarly to the create request: depending on the format of the request, it is translated or not. Once processed, a modify request is sent to the controller and the referred slice is updated in the slice database.

Details of how customers can make the different requests are provided in the [API](#api) section.

### Mapper

As defined by the IETF, the mapper receives slice service requests from customers and processes them to obtain an overall view of how each new request complements the rest of the slices. Realizing a slice requires an existing Network Resource Partition (NRP) that satisfies the specified slice requirements, which may not be available at the time of the request. This information will be retrieved from an external module, which is beyond the scope of this definition.
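The mapper's iterative behaviour can be sketched as follows. This is a minimal illustration only: `ExternalModule`, `NRP`, and the realizer callback are hypothetical names, since the real external feasibility module is explicitly out of scope here.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class NRP:
    """A Network Resource Partition and the capacity it can still offer (hypothetical model)."""
    nrp_id: str
    available_bandwidth: int  # Mbps


@dataclass
class ExternalModule:
    """Hypothetical stand-in for the external module that judges slice feasibility."""
    nrps: list = field(default_factory=list)

    def find_feasible_nrp(self, required_bandwidth: int) -> Optional[NRP]:
        # Return the first NRP that can host the requested slice, if any.
        return next(
            (n for n in self.nrps if n.available_bandwidth >= required_bandwidth),
            None,
        )


def map_slice(required_bandwidth: int, module: ExternalModule,
              create_nrp: Callable[[int], NRP]) -> NRP:
    """Iterate until slice realization is feasible, asking the realizer to create an NRP if needed."""
    while True:
        nrp = module.find_feasible_nrp(required_bandwidth)
        if nrp is not None:
            return nrp
        # No existing NRP can host the slice: request a new one from the realizer,
        # which interacts with the transport network controllers.
        module.nrps.append(create_nrp(required_bandwidth))
```

In the current version described below, a single NRP covering the whole network is assumed, so the lookup would always succeed on the first iteration.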
The external module will provide a response regarding the feasibility of realizing the slice. If there are no available NRPs for instantiating the slice, the mapper will request the realizer to create a new NRP. This involves interacting with the network controllers responsible for the transport network handled by the NSC. This process is iterative until the mapper determines that the slice realization is feasible.

In the current version, it is assumed that there is only one available NRP, corresponding to the entire network, and that it is always accessible to the user.

### Realizer

The realizer implements slices by interacting with specific network controllers. It receives requests from the mapper and should have the intelligence to select the most adequate technology to realize the slice. As it currently does not have that intelligence, it uses either a fixed technology or one selected by the customer.

The NSC operates over the TeraflowSDN controller and the IXIA NE II device.

[TeraflowSDN](https://labs.etsi.org/rep/tfs/controller) is an open-source, cloud-native SDN controller enabling smart connectivity services for future networks beyond 5G. It allows establishing different services over a connected topology. The services currently supported are layer 2 VPNs and layer 3 VPNs. Therefore, the NSC has two specific functions, called "realizing modules", that translate the IETF Slice Service Request into a request for deploying a VPN in TeraflowSDN: ``tfs_l2vpn`` and ``tfs_l3vpn``. TFS accepts two options for deploying its services: uploading a service via its WebUI using a proprietary descriptor, or using a standardized interface via its NBI based on the L2SM and L3SM IETF YANG models. The second option is the preferred one, as it follows a standardized approach, although both options are supported.

The IXIA NE II is a device that allows emulating network impairments.
In this context, it is used to simplify configurations in the data plane and to offer channel characteristics as a proof of concept, while focusing on the specific configurations in the control plane. The characteristics that can be emulated over a channel are: IPs, VLAN, bandwidth, latency, delay variance (which is requested as tolerance), and packet disorder (which is requested as reliability). When the realizer receives a request, it translates it into a proprietary template with the specified characteristics to be consumed by the device API.

### Slice Database

The slice database is updated after each request by adding, removing, or updating the stored slices. It contains two fields:

- ``slice_id``: stores a unique identifier for the slice, serving as primary key; it is mapped from the id value present in the IETF Slice Service Model.
- ``intent``: stores an object with the whole IETF Slice Service Model, containing the characteristics and endpoints of the slice.

## Workflow

1. A request comes into the NSC NBI.
2. If it is a GET request, the slice database is consulted. If it is a POST request, the NBI Processor inspects the body of the request.
3. If it is in 3GPP format, it is translated to an IETF Slice Service Request. If not, it is sent directly to the mapper.
4. The mapper processes the request and interacts with the planner, when activated.
5. The planner processes the request, populates it with the optimal path, and returns it to the mapper.
6. The mapper sends the request to the realizer, which selects a realization technology.
7. The realization module translates the request into the controller-specific configuration.
8. The realizer sends the request to the controller and updates the database with the new slice.

## Architecture

NSC Architecture

## API

The API has two namespaces, `tfs` and `ixia`, one dedicated to each controller, with the operations POST, GET, PUT, and DELETE:

- `GET /{namespace}/slice`: returns a list with all transport network slices currently available in the controller.
- `POST /{namespace}/slice`: allows the submission of a new network slice request.
- `DELETE /{namespace}/slice`: deletes all transport network slices stored in the controller.
- `GET /{namespace}/slice/{slice_id}`: retrieves detailed information about a specific transport network slice identified by its `slice_id`.
- `DELETE /{namespace}/slice/{slice_id}`: deletes a specific transport network slice identified by its `slice_id`.
- `PUT /{namespace}/slice/{slice_id}`: modifies a specific transport network slice identified by its `slice_id`.

The API is available in the Swagger documentation panel at `{ip}:{NSC_PORT}/nsc`.

## WebUI

The WebUI is a graphical interface that allows operating the NSC. Currently, it offers more limited operations than the API. It supports the creation of slices in both the Teraflow and IXIA controllers, as well as getting information about the current slices. Modification and deletion are not yet supported.
It is accessed at `{ip}:{NSC_PORT}/webui`.

## Requirements

- Python 3.12
- python3-pip
- python3-venv

## Configuration

In the `src/config/.env.example` file, several constants can be adjusted to customize the Network Slice Controller (NSC) behaviour:

### Logging

- `DEFAULT_LOGGING_LEVEL`: Sets logging verbosity
  - Default: `INFO`
  - Options: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `NOTSET`, `CRITICAL`

### General

- `DUMP_TEMPLATES`: Flag to dump templates for debugging
  - Default: `false`

### Mapper

- `NRP_ENABLED`: Flag to determine if the NSC handles NRPs
  - Default: `false`
- `PLANNER_ENABLED`: Flag to activate the planner
  - Default: `false`
- `PCE_EXTERNAL`: Flag to determine if an external PCE is used
  - Default: `false`
- `PLANNER_TYPE`: Type of planner to be used
  - Default: `ENERGY`
  - Options: `ENERGY`, `HRAT`, `TFS_OPTICAL`
- `HRAT_IP`: HRAT planner IP
  - Default: `10.0.0.1`
- `OPTICAL_PLANNER_IP`: Optical planner IP
  - Default: `10.0.0.1`

### Realizer

- `DUMMY_MODE`: If true, no configuration is sent to the controllers
  - Default: `true`

### Teraflow Configuration

- `UPLOAD_TYPE`: Configures the type of upload to Teraflow
  - Default: `WEBUI`
  - Options: `WEBUI`, `NBI`
- `TFS_IP`: Teraflow SDN controller IP
  - Default: `"127.0.0.1"`
- `TFS_L2VPN_SUPPORT`: Enable additional L2VPN configuration support
  - Default: `False`

### Ixia Configuration

- `IXIA_IP`: Ixia NEII IP
  - Default: `"127.0.0.1"`

### WebUI

- `WEBUI_DEPLOY`: Flag to deploy the WebUI
  - Default: `False`

## Usage

To use the NSC, just build the image and run it in a container following these steps:

1. **Deploy**
   ```
   ./deploy.sh
   ```
2. **Send Slice Requests**: Send slice requests via the **API** (`/nsc`) or the **WebUI** (`/webui`)

## Available Branches and Releases

[![Latest Release](https://labs.etsi.org/rep/tfs/nsc/-/badges/release.svg)](https://labs.etsi.org/rep/tfs/nsc/-/releases)

- The branch `main` ([![pipeline status](https://labs.etsi.org/rep/tfs/nsc/badges/main/pipeline.svg)](https://labs.etsi.org/rep/tfs/nsc/-/commits/main) [![coverage report](https://labs.etsi.org/rep/tfs/nsc/badges/main/coverage.svg)](https://labs.etsi.org/rep/tfs/nsc/-/commits/main)) always points to the latest stable version of the TeraFlowSDN Network Slice Controller (NSC).
- The branches `release/X.Y.Z` point to the code for the different release versions indicated in the branch name.
  - Code in these branches can be considered stable, and no new features are planned.
  - In case of bugs, point releases increasing the revision number (Z) might be created.
- The `develop` branch ([![pipeline status](https://labs.etsi.org/rep/tfs/nsc/badges/develop/pipeline.svg)](https://labs.etsi.org/rep/tfs/nsc/-/commits/develop) [![coverage report](https://labs.etsi.org/rep/tfs/nsc/badges/develop/coverage.svg)](https://labs.etsi.org/rep/tfs/nsc/-/commits/develop)) is the main development branch and contains the latest contributions.
  - **Use it with care! It might not be stable.**
  - The latest developments and contributions are added to this branch for testing and validation before reaching a release.
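For reference, a `src/config/.env` assembled from the defaults listed in the [Configuration](#configuration) section would look like the fragment below (the values are the documented defaults, not recommendations; quoting follows the defaults as listed):

```
# Logging
DEFAULT_LOGGING_LEVEL=INFO

# General
DUMP_TEMPLATES=false

# Mapper
NRP_ENABLED=false
PLANNER_ENABLED=false
PCE_EXTERNAL=false
PLANNER_TYPE=ENERGY
HRAT_IP=10.0.0.1
OPTICAL_PLANNER_IP=10.0.0.1

# Realizer
DUMMY_MODE=true

# Teraflow
UPLOAD_TYPE=WEBUI
TFS_IP="127.0.0.1"
TFS_L2VPN_SUPPORT=False

# Ixia
IXIA_IP="127.0.0.1"

# WebUI
WEBUI_DEPLOY=False
```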