Skip to content
Snippets Groups Projects
search_index.json 246 KiB
Newer Older
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"<p>Version: 2024Q2 - SNAPSHOT (Release Notes)</p> <p>The ETSI Software Development Group for OpenSlice (SDG OSL) is developing an open-source service-based Operations Support System (OSS) to deliver Network as a Service (NaaS) following specifications from major SDOs including ETSI, TM Forum and GSMA.</p>"},{"location":"#usage","title":"Usage","text":"<p>OpenSlice can be used in managing 5G network services from the user device to the core network and cloud as well as for Orchestrating cloud resources across private and public clouds for enterprise applications.  OpenSlice is capable of supporting most of the features of an end-to-end (E2E) service orchestration framework while many of them will be more mature in future releases. The following figure displays the general usage of OpenSlice. </p> <p></p> <p>The image illustrates how OpenSlice supports the idea of an E2E network service orchestration framework by integrating multiple network components and layers, from user devices at the edge to radio, transport networks, core and public cloud services, ensuring seamless, secure, and efficient delivery of network services. Assuming that there are domain controllers for all the above domains OpenSlice can create the end-to-end service via the domain controllers by following the process of creating and deploying the end-to-end service by implementing transformations, and consuming APIs from various network entities. OpenSlice, in a nutchell, offers user interfaces where users can interact with the framework to order, expose, and manage service catalogs, services and resources that can be ordered, following business logic and policies and exposed through the APIs. </p>"},{"location":"#an-end-to-end-e2e-service-orchestration-framework","title":"An end-to-end (E2E) service orchestration framework","text":"<p>An end-to-end (E2E) service orchestration framework is designed to manage and automate the entire lifecycle of services across multiple domains and technologies. For delivering, especially, Network as a Service (NaaS) a comprehensive system is needed that automates and manages the entire lifecycle of network services, from provisioning to monitoring and decommissioning, while ensuring seamless integration, operation, and delivery of services from the initial request to the final delivery, spanning all involved components and layers. Such E2E frameworks enable users to consume network services on-demand, similar to how cloud computing services are consumed. Some key components and features of such frameworks are:</p> <ul> <li>Service Catalogs including predefined Network Services based on service templates for common network services like 5G core functions, 5G slices, VPNs, SD-WAN, firewalls, load balancers, etc. as well as custom Network services with Options for users to define their own network configurations.</li> <li>User Interface (UI) and APIs exposure, offering both a Self-Service Portal that allows users to request, configure, and manage network services as well as APIs for enabling programmatic access to network services for integration with other systems and automation scripts.</li> <li>Service Design and Creation through service templates based on predefined models for creating services.</li> <li>Automation and Workflow Management via Orchestration Engines, supporting  Process Automation for automating repetitive tasks and processes,  workflow management and orchestration for automating the provisioning, configuration, and management of network services while coordinating multiple workflows to ensure services are delivered efficiently, ensuring that services comply with predefined policies and standards.</li> <li>Standardized API exposure for seamless integration with different systems and services and APIs transformation support for converting data formats and protocols to ensure compatibility and information exchange between systems during workflows orchestration</li> <li>Service and Resource management and Orchestration while including the capability of multi-domain coordination in managing services/resources across different domains like cloud, 5G core, radios, transport network, and edge including dynamic allocation with adjusting resources based on demand and service requirements. To accomplish the above advanced technologies need to be exploited like, Containerized workloads,  Network Function Virtualization (NFV) which uses virtualized network functions to provide services like routing, switching, and security and Software-Defined Networking (SDN) which Controls the network programmatically to dynamically manage traffic and resources.</li> <li>Monitoring and Analytics including  Service Monitoring while continuously tracking the performance and health of services with capabilities to analyse data to optimize service delivery and predict issues. Real-Time Monitoring is also needed for tracking the performance and health of network services enabling analytics that provide insights for optimization and troubleshooting.</li> <li>Security and Access Control for ensuring only authorized users and systems can access network services. while implementing rules and policies to comply with regulatory requirements.</li> </ul>"},{"location":"#an-e2e-service-orchestration-workflow","title":"An E2E service orchestration workflow","text":"<p>In general an E2E service orchestration workflow includes the following phases:</p> <ul> <li>Service Request: Users or systems request a network service through the self-service portal or API. The request can specify details such as bandwidth, security features, geographic coverage, and duration.</li> <li>Service Orchestration: The orchestration engine evaluates the request, determines the necessary resources, and initiates the automated workflows.It interacts with the underlying components (e.g. 5G Core, Radios, Containerized controllers, NFV, SDN controllers ) to provision and configure the required network functions and connectivity.</li> <li>Provisioning and Configuration: Services, network resources and network functions (VNFs) are instantiated and configured according to the service request during Service Orchestration through the orchestration engine. Other controllers manage their own domains, for example SDN controllers, manage the flow of data through the network to ensure optimal performance and adherence to policies, RAN controllers manage the RAN resources, Containerized controllers manage their workload, etc</li> <li>Service Delivery: The E2E network service is activated and made available to the user. Continuous monitoring ensures the service operates as expected, with automatic adjustments made as necessary.</li> <li>Lifecycle Management: The orchestration framework handles updates, scaling, and any necessary modifications throughout the service lifecycle.</li> <li>At the end of the service period, resources are decommissioned and reclaimed.</li> </ul>"},{"location":"#openslice-for-service-providers","title":"OpenSlice for Service Providers","text":"<p>OpenSlice is used by Service Providers to design Network Services, expose them in Service Catalogues and make them available for Service Orders. OpenSlice then can perform the E2E service orchestration workflow.</p> <p>There are various portals offering UI friendly access to users acting as Service Providers:</p> <ul> <li>The Services portal allows Service Providers to design and expose services.</li> <li>The Resource portal allows users to access resource specifications and running resources in resource inventory.</li> <li>The NFV portal allows users to manage NFV artifacts and onboard them to a target MANO/NFV Orchestrator.</li> <li>The Testing portal allows Service Providers to manage test artifacts</li> <li>The Products portal allows Service Providers to expose services as products</li> </ul>"},{"location":"#openslice-for-service-consumers","title":"OpenSlice for Service Consumers","text":"<p>OpenSlice allows Service Consumers to browse the available offered service specifications in a self-service manner. It also supports TMFORUM Northbound APIs  regarding Service Catalog Management, Ordering, Resource, etc. There are various portals offering UI friendly access to users acting as Service Consumers:</p> <ul> <li>The Services portal allows Service Consumers to select and order predefined services.</li> <li>The Resource portal allows users to access running resources in resource inventory.</li> <li>The NFV portal allows users to self-manage NFV artifacts and onboard them to a target MANO/NFV Orchestrator.</li> <li>The Testing portal allows Service Consumers to manage test artifacts</li> <li>The Products portal allows Service Consumers to expose services as products</li> </ul> <p>3rd party applications can use OpenSlice through TMForum Open APIs.</p>"},{"location":"#live-demo","title":"Live Demo","text":"<p>Check a live demo of OpenSlice in the following pages:</p> <ul> <li>OpenSlice demo: http://portal.openslice.eu/</li> <li>OpenSlice Service Catalogs and ordering: http://portal.openslice.eu/services/</li> <li>OpenSlice NFV Services onboarding: http://portal.openslice.eu/nfvportal</li> </ul> <p>Login credentials:</p> <ul> <li>username=admin, password=openslice </li> <li>username=admin, password=changeme</li> </ul>"},{"location":"#probe-further","title":"Probe further","text":"<ul> <li>How OpenSlice works? See the Architecture section</li> <li>Installing OpenSlice? See the Deployment section</li> <li>Learn more on how OpenSlice supports Network as a Service(NaaS)</li> <li>Who is maintaining OpenSlice? See OSL ETSI SDG</li> </ul>"},{"location":"alarms_actions/","title":"Alarms","text":"<p>In Openslice parts of TMF642 Alarm Management API are currently implemented. Alarms can be managed through the TMF API endpoint as well as the UI.</p>"},{"location":"alarms_actions/#alarms-and-actions","title":"Alarms and Actions","text":"<p>Note: Actions is an experimental feature. We expect to have a more mature solution in future. The component in the architecture is the Openslcie Assurance Services</p> <p>Alarms can be automatically resolved by specific actions. Today only the following actions are offered.</p> <ul> <li>execDay2</li> <li>scaleServiceEqually</li> </ul>"},{"location":"alarms_actions/#execday2","title":"execDay2","text":"<p>Usually used to perform a Day2 configuration (towards OSM). To use it, Create a New Action Specification Name=execDay2 as following</p> <p></p> <p>Now make a Service Order for your service. In this example \u03c2\u03b5 used a cirros NSD</p> <p>Create a  New Action Rule for the running services as the following example:</p> <p></p> <p>The scope is the running cirros service. </p> <p>Params should be paramname=value;paramname2=value2;paramname3=value3 (must exist in the VNF otherwise OSM will raise an error).</p> <p>In this case should be filename=test.txt</p> <p>Primitive=touch</p> <p>ServiceId = select the service which will accept the Day2. In this case is the same</p> <p>To test it:</p> <p>Go to the Service Inventory and select the active Service.</p> <p>Note the UUID of the service (e.g. c4e7990a-e174-4cd2-9133-b10e56721e08 copy from address bar),  DeploymentRequestID and NSDID from characteristics</p> <p>You can either use the UUID of the service or the DeploymentRequestID  and POST to the Alarms endpoint ( /tmf-api/alarmManagement/v4/alarm)</p> <p>If the DeploymentRequestID  is used then POST:</p> <pre><code>{\n  \"alarmRaisedTime\": \"2021-06-29T12:30:24.675Z\",\n  \"alarmReportingTime\": \"2021-06-29T12:30:54.675Z\",\n  \"state\": \"raised\",\n  \"alarmType\": \"qualityOfServiceAlarm\",\n  \"probableCause\": \"thresholdCrossed\",\n  \"ackState\": \"unacknowledged\",\n  \"perceivedSeverity\": \"major\",\n  \"sourceSystemId\": \"mano-client-service\",\n  \"alarmDetails\": \"NSID=3;DeploymentRequestID=1\",\n  \"specificProblem\": \"myalram raised\"\n}\n</code></pre> <p>If the UUID is used then POST:</p> <pre><code>{\n  \"alarmRaisedTime\": \"2021-06-29T12:30:24.675Z\",\n  \"alarmReportingTime\": \"2021-06-29T12:30:54.675Z\",\n  \"state\": \"raised\",\n  \"alarmType\": \"qualityOfServiceAlarm\",\n  \"probableCause\": \"thresholdCrossed\",\n  \"ackState\": \"unacknowledged\",\n  \"perceivedSeverity\": \"major\",\n  \"sourceSystemId\": \"mano-client-service\",\n  \"alarmDetails\": \"analarm\",\n  \"specificProblem\": \"myalram raised\",\n  \"affectedService\": [\n    {\n      \"id\": \"c4e7990a-e174-4cd2-9133-b10e56721e08\"\n    }\n  ]\n\n}\n</code></pre> <p>The Alarm to be created must have the affected Service ID equal to the running service from the scope (the cirros_ns)</p> <p>Go to service inventory you will see the notes and also the service characteristics for any  EXEC_ACTION updates</p> <p>You can also adjust the alarm conditions. They must match true so the alarm to be acknowledged So if another external service raises an Alarm (with POST) for the running service, a Day2 will be performed on another Service</p>"},{"location":"alarms_actions/#scaleserviceequally","title":"scaleServiceEqually","text":"<p>This action is used from getting a scaling event from OSM. Please see the next demo for details on how it works</p>"},{"location":"alarms_actions/#prototype-demo","title":"Prototype demo","text":"<p>You can watch how we used the prototype on the following ETSI ZMS PoC #2</p> <ul> <li>ETSI ZMS PoC #2: https://www.etsi.org/events/1905-webinar-zsm-poc-2-showcase-automated-network-slice-scaling-in-multi-site-environments/</li> </ul>"},{"location":"deployment_examples/","title":"OpenSlice Deployment Examples","text":"<p>Here are some examples from past and current efforts that use OpenSlice in various cases.</p>"},{"location":"deployment_examples/#5ginfire-eu-project-2018","title":"5GinFIRE EU project (2018)","text":"<ul> <li>MultiVIM approach</li> <li>9 Testbeds</li> <li>Automotive, Smart City, eHeath, PPDR, Media, SDR, Cloud</li> <li>22 Experiment proposals from Verticals</li> <li>100+ Users</li> <li>VxF catalog: </li> <li>150+ ONBOARDED VxFs</li> <li>OSM TWO, FOUR, FIVE</li> <li>50+ are public to be reused</li> <li>NSD catalog: </li> <li>90+ ONBOARDED NSDs</li> <li>30+ are public to be reused</li> <li>500+ Deployment requests \u000b(orchestrations) performed</li> </ul>"},{"location":"deployment_examples/#5g-vinni-eu-project-2020","title":"5G-VINNI  EU project (2020)","text":"<ul> <li>Multi-vendor challenge \u2013 Commercial and opensource</li> <li>5G services on multiple sites</li> <li>Introduction of TMFORUM models and APIs </li> </ul>"},{"location":"deployment_examples/#5gasp-eu-project-2021-2024","title":"5GASP  EU project (2021-2024)","text":"<ul> <li>Support a multi-site CI/CD testing automated DevOps cycle for network Applications</li> <li>Multiple NFVOs</li> <li>Introducing Service Test models</li> <li>Introducing the Product models for a network application marketplace</li> </ul>"},{"location":"deployment_examples/#fidal-eu-project-2023-","title":"FIDAL EU project (2023-)","text":"<ul> <li>Support multi-site automated testing</li> <li>Multiple testbeds/ different APIs</li> </ul>"},{"location":"deployment_examples/#across-eu-project-2023-","title":"ACROSS EU project (2023-)","text":"<ul> <li>Used as a cross-domain orchestrator</li> <li>Support the multi-domain orchestrator</li> <li>Support Zero-touch provisioning concepts</li> </ul>"},{"location":"deployment_examples/#incode-eu-project-2023-","title":"INCODE EU project (2023-)","text":"<ul> <li>Support the provisioning of end-to-end domain services</li> </ul>"},{"location":"deployment_examples/#imagineb5g-eu-project-2023-","title":"IMAGINEB5G EU project (2023-)","text":"<ul> <li>Support the provisioning of end-to-end domain services</li> </ul>"},{"location":"deployment_examples/#etsi-zsm-poc-2","title":"ETSI ZSM PoC #2","text":"<ul> <li>Automated Network Slice Scaling in Multi-Site Environments</li> <li>See more</li> </ul>"},{"location":"etsi_osl/","title":"OpenSlice under ETSI","text":"<p>Since October 2023, OpenSlice has been accepted under the umbrella of ETSI, forming its first Software Development Group (SDG), under the name ETSI SDG for OpenSlice (OSL).</p> <p>More information can be found at ETSI SDG OSL webpage.</p>"},{"location":"history/","title":"History","text":"<ul> <li>The NFV portal part of OpenSlice was initially developed in H2020 European Research project 5GinFIRE by University of Patras, Greece</li> <li>OpenSlice core services, APIs was further developed and maintained in H2020 European project 5G-VINNI by University of Patras, Greece</li> <li>OpenSlice has been a part of OSM's OSS/BSS ecosystem</li> <li>OpenSlice has been a part of ETZI ZSM PoC #2</li> <li>OpenSlice is the first ETSI Software Development Group (SDG) since October 2023</li> </ul>"},{"location":"history/#citation","title":"Citation","text":"<p>Please cite our ![paper] if you use OpenSlice in your research</p> <pre><code>@misc{tranoris2021openslice,\n      title={Openslice: An opensource OSS for Delivering Network Slice as a Service}, \n      author={Christos Tranoris},\n      year={2021},\n      eprint={2102.03290},\n      archivePrefix={arXiv},\n      primaryClass={cs.NI}\n}\n</code></pre>"},{"location":"terminology/","title":"Terminology","text":""},{"location":"terminology/#user-roles","title":"User Roles","text":"<ul> <li>OpenSlice Administrator: Installs, configures and maintains an OpenSlice instance</li> <li>OpenSlice Service Designer: Uses available resources to design service specifications that are offered to OpenSlice users</li> <li>OpenSlice User: Browses and orders offered services</li> </ul>"},{"location":"terminology/#useful-terms","title":"Useful Terms","text":"<ul> <li>Service Specification: A blueprint with detailed descriptions of services, including attributes, configurations, performance metrics, and SLAs (e.g. 5G Connectivity).</li> <li>Service: A running instance of a Service Specification.</li> <li>Resource Specification: A blueprint with detailed descriptions of resources, including attributes, configurations, and instantiation patterns (e.g. a Kubernetes CRD).</li> <li>Resource: A running instance of a Resource Specification.</li> <li>Resource Facing Service Specification (RFSS): A Service Specification that exposes a Resource Specification as a Service (e.g. UPF Deployment in K8s).</li> <li>Customer Facing Service Specification (CFSS): A Service Specification exposed to Users, available for Service Orders. Usually an exposed CFSS (e.g. 5G Connectivity) is a Service Bundle and comprises of other RFSSs (e.g. UPF Deployment, Core Deployment) and CFSSs (e.g. Connect Radio Nodes).</li> <li>Resource Facing Service (RFS): An running instance of an RFSS, created via a Service Order of a related Service Bundle.</li> <li>Customer Facing Service (CFS): An running instance of an CFSS, created via a Service Order.</li> <li>Service Category: A collection of CFSSs.</li> <li>Service Catalog: A collection of exposed Service Categories.</li> <li>Service Inventory: An inventory of all the service instances.</li> <li>Resource Inventory: An inventory of all the resource instances.</li> </ul>"},{"location":"under_construction/","title":"Under construction","text":"<p>under construction</p>"},{"location":"architecture/architecture/","title":"Architecture","text":""},{"location":"architecture/architecture/#high-level-introduction","title":"High-Level Introduction","text":"<p>OpenSlice consists of:</p> <ul> <li>Web frontend User Interface (UI) that consists of mainly two portal categories: <ol> <li>An NFV portal allowing users to onboard VNFDs/NSDs to facility\u2019s NFVOs and self-service management </li> <li>Several TMF-family portals (Product, Service, Resource, Testing) which allow users to browse the respective layers of a modern BSS/OSS solution</li> </ol> </li> <li>An API gateway that proxies the internal APIs, which are used by the Web frontend as well as any other 3rd party services, and consist of:<ol> <li>A microservice offering TMF-compliant API services (e.g. Product/Service/Resource Catalog API, Service Ordering API, etc)</li> <li>A microservice offering NFV-compliant API services (e.g. VNFD/NSD onboarding and management, etc) allowing to manage multiple NFVOs and store VNFDs and NSDs in the respective catalogues</li> </ol> </li> <li>A Message Bus used by all microservices to exchange messages either via message Queues or via publish/subscribe Topics</li> <li>An Authentication Server implementing Oauth2 authentication scheme</li> <li>A microservice that is capable to interface with an issue management system (e.g. it raises an issue to all related stakeholders - CSPs, NOPs, CSCs - that a new Service Order is requested)</li> <li>A Central Logging microservice that logs all distributed actions into an Elasticsearch cluster</li> <li>A Service Orchestrator (SO) solution that will fulfill Service Ordering requests by propagating the orchestration actions to underlying components (e.g. NFVOs or Kubernetes) or to external SOs</li> <li>A MANO Client microservice which interfaces with SOL005-compliant NFVOs (synchronizing artifacts and propagating actions)</li> <li>A Custom Resource (CR) to TMF bridge (CRIDGE) microservice which interfaces with Kubernetes</li> <li>A Metrics Retrieval Component (METRICO) which interfaces with external monitoring tools, retrieving and injecting desired metrics into OpenSlice orchestration pipeline</li> <li>An Assurance Services component which generates and monitors alerts, as well executing defined actions based on the latter</li> <li>A visualization server (KROKI) microservice which enables a intuitive illustration of dependency graphs and interactions</li> </ul>"},{"location":"architecture/architecture/#microservices-deployment","title":"Microservices Deployment","text":"<p>The following figure depicts how OpenSlice microservices are deployed</p> <p></p>"},{"location":"architecture/architecture/#deploying-openslice-in-multi-domain-scenarios","title":"Deploying OpenSlice in multi-domain scenarios","text":"<p>A typical deployment across domains, involves some typical components: </p> <ol> <li>an OSS/BSS to allow customers access the service catalog and perform service orders,</li> <li>a Service Orchestrator (SO) component for executing the service order workflow, </li> <li>a Network Functions Virtualization Orchestrator (NFVO) or Kubernetes for configuring the network resources.</li> </ol> <p>TMF Open APIs are introduced not only for exposing catalogues and accepting service orders, but also implementing the East-West interfaces between the domains, fulfilling also the LSO requirements as introduced by MEF.</p> <p>The following figure shows how openslice could be used in such scenarios:</p> <p></p> <p>See more Consuming Services From External Partner Organizations.</p>"},{"location":"architecture/centrallog/","title":"Central Logging","text":"<p>Openslice follows the centralized log management concept, i.e. a type of logging solution system that consolidates the log data from different services and pushes it to a central, accessible and easy-to-use interface. </p> <p>For that reason, Elasticsearch is elected as an open-source centralized logging solution for collecting, parsing and storing logs towards a real-time data analytics tool that provides insights from any type of structured and unstructured data source.</p>"},{"location":"architecture/issuemgt/","title":"Issue Management","text":"<p>For issue management support, Openslice relies on Bugzilla. Bugzilla is a ticketing tool that allows issue reporting and tracking via tickets to all relevant stakeholders. </p> <p>The figure below displays the overall issue management service architecture integrating Bugzilla as its core and how this tool interacts with other Openslice services presenting some distinctive scenarios. It should be noted that Bugzilla tickets will not only be used for bugs/errors, but also for general requests, e.g. Service Order procedure.</p> <p></p>"},{"location":"architecture/messagebus/","title":"Message Bus and exchanged Messages","text":"<p>Openslice has a Message bus which allows Openslice services to exchange messages via queues and topics.</p> <p>It is based on ActiveMQ.</p> <p>3rd party services can be attached to bus and subscribe to message topics or request resources via queues.</p>"},{"location":"architecture/messagebus/#queue-messages","title":"QUEUE MESSAGES","text":"Message Alias CATALOG_GET_SERVICEORDERS Name jms:queue:CATALOG.GET.SERVICEORDERS Type queue Destination TMF API service Producers OSOM Body Description Return a List as String Json Message Alias CATALOG_GET_SERVICEORDER_BY_ID Name jms:queue:CATALOG.GET.SERVICEORDER_BY_ID Type queue Destination TMF API service Producers OSOM Body String orderid Description Return a ServiceOrder as String Json Message Alias CATALOG_UPD_SERVICEORDER_BY_ID Name jms:queue:CATALOG.UPD.SERVICEORDER_BY_ID Type queue Destination TMF API service Producers OSOM Body ServiceOrderUpdate serviceOrder Headers \"orderid\"= orderid Description Returns a ServiceOrder as String Message Alias CATALOG_GET_SERVICESPEC_BY_ID Name jms:queue:CATALOG.GET.SERVICESPEC_BY_ID Type queue Destination TMF API service Producers OSOM Body specid Description Return a ServiceSpecification Message Alias CATALOG_ADD_SERVICESPEC Name jms:queue:CATALOG.ADD.SERVICESPEC Type queue Destination TMF API service Producers CRIDGE Body ServiceSpecCreate Description Creates a ServiceSpecification and  returns a ServiceSpecification as String Message Alias CATALOG_UPD_SERVICESPEC Name jms:queue:CATALOG.UPD.SERVICESPEC Type queue Destination TMF API service Producers CRIDGE Body ServiceSpecUpdate Headers \"serviceSpecid\" = serviceSpecId Description Updates a ServiceSpecification and  returns a ServiceSpecification as String. --- Message Alias CATALOG_UPDADD_SERVICESPEC Name jms:queue:CATALOG.UPDADD.SERVICESPEC Type queue Destination TMF API service Producers CRIDGE Body ServiceSpecUpdate Headers \"serviceSpecid\" = serviceSpecId, \"forceId\"=forceId Description Updates a ServiceSpecification and  returns a ServiceSpecification as String. If forceId is true then tries to assign the requested ID to the spec Message Alias CATALOG_ADD_SERVICEORDER Name jms:queue:CATALOG.ADD.SERVICEORDER Type queue Destination TMF API service Producers OSOM Body ServiceOrderCreate serviceOrder Headers Description Creates a ServiceOrder and  returns a ServiceOrder as String Message Alias CATALOG_GET_INITIAL_SERVICEORDERS_IDS Name jms:queue:CATALOG.GET.INITIAL_SERVICEORDERS Type queue Destination TMF API service Producers Body Description Return a List as String Json Message Alias CATALOG_GET_SERVICEORDER_IDS_BY_STATE Name jms:queue:CATALOG.GET.ACKNOWLEDGED_SERVICEORDERS Type queue Destination TMF API service Producers OSOM Body Headers \"orderstate\"= orderState Description String Json ArrayList of ServiceOrders Message Alias CATALOG_ADD_SERVICE Name jms:queue:CATALOG.ADD.SERVICE Type queue Destination TMF API service Producers OSOM Body ServiceCreate String json Headers \"orderid\"=orderid, \"serviceSpecid\"= specid Description Creates Service based an a Service Spec, Returns a Service object Message Alias CATALOG_UPD_SERVICE Name jms:queue:CATALOG.UPD.SERVICE Type queue Destination TMF API service Producers Body ServiceUpdate Headers \"serviceid\" = serviceId, \"propagateToSO\" = true/false Description will update a service by id and return the service instance. If propagateToSO=true then any service change will be handled by OSOM. This is needed to be controlled in order to avoid update loops Message Alias CATALOG_GET_SERVICE_BY_ID Name jms:queue:CATALOG.GET.SERVICE Type queue Destination TMF API service Producers OSOM Body String serviceID Description returns a Service instance <p>---| Message |    | | ------------- |----------------| |Alias |  CATALOG_GET_SERVICE_BY_ORDERID  | |Name |  jms:queue:CATALOG.GET.SERVICE_BY_ORDERID  | |Type | queue  | |Destination |   TMF API service | |Producers |  | |Body |  String serviceID | |Description |   returns Service IDs of a specific order given then order id |</p> Message Alias CATALOG_SERVICE_QUEUE_ITEMS_GET Name jms:queue:CATALOG.SERVICEQUEUEITEMS.GET Type queue Destination TMF API service Producers OSOM Body Description returns a LIST OF Service Queue Items --- Message Alias CATALOG_SERVICE_QUEUE_ITEM_UPD Name jms:queue:CATALOG.SERVICEQUEUEITEM.UPDATE Type queue Destination TMF API service Producers OSOM Body String SERVICEQUEUEITEM Headers \"itemid\" = SERVICEQUEUEITEM id Description ill update a service queue item by id and return the instance --- Message Alias CATALOG_SERVICE_QUEUE_ITEM_DELETE Name jms:queue:CATALOG.SERVICEQUEUEITEM.DELETE Type queue Destination TMF API service Producers OSOM Body Headers \"itemid\" = SERVICEQUEUEITEM id Description ill delete a service queue item by id Message Alias CATALOG_SERVICES_TO_TERMINATE Name jms:queue:CATALOG.GET.SERVICETOTERMINATE Type queue Destination TMF API service Producers OSOM Body Headers Description Get a list of ACTIVE services with END DAte in the past --- Message Alias CATALOG_SERVICES_OF_PARTNERS Name jms:queue:CATALOG.GET.SERVICESOFPARTNERS Type queue Destination TMF API service Producers OSOM Body Headers Description Get a list of ACTIVE services from the inventory of partners Message Alias NFV_CATALOG_GET_NSD_BY_ID Name jms:queue:NFVCATALOG.GET.NSD_BY_ID Type queue Destination NFV Catalog service Producers TMF API, OSOM Body NSDid Description Returns a NetworkServiceDescriptor object Message Alias NFV_CATALOG_DEPLOY_NSD_REQ Name jms:queue:NFVCATALOG.DEPLOY.NSD_REQ Type queue Destination NFV Catalog service Producers OSOM Body DeploymentDescriptor as Json String Headers NSD id Description Returns a DeploymentDescriptor object as json string containing deployment info Message Alias NFV_CATALOG_UPD_DEPLOYMENT_BY_ID Name jms:queue:NFVCATALOG.UPD.DEPLOYMENT_BY_ID Type queue Destination NFV Catalog service Producers OSOM Body DeploymentDescriptor as Json String Headers DeploymentDescriptor id Description Updates and Returns a DeploymentDescriptor object as json string containing deployment info Message Alias GET_USER_BY_USERNAME Name jms:queue:GET.USER_BY_USERNAME Type queue Destination NFV Catalog service (this is temproary for now) Producers TMF API Body username Headers Description Returns a PortalUser object as json string containing user info Message Alias NFV_CATALOG_GET_DEPLOYMENT_BY_ID Name jms:queue:NFVCATALOG.GET.DEPLOYMENT_BY_ID Type queue Destination NFV Catalog service Producers OSOM Body Deployment ID Description Returns a DeploymentDescriptor object Message Alias CATALOG_GET_EXTERNAL_SERVICE_PARTNERS Name jms:queue:CATALOG.GET.EXTERNALSERVICEPARTNERS Type queue Destination TMF API service Producers OSOM Body Headers Description As a String Json ArrayList of Organizaton objects containing the characteristic name EXTERNAL_TMFAPI Message Alias CATALOG_UPD_EXTERNAL_SERVICESPEC Name jms:queue:CATALOG.UPD.EXTERNAL_SERVICESPEC Type queue Destination TMF API service Producers OSOM or maybe used by others that would like to update a Service Spec Body A serviceSpecification as json string Headers servicespecification id, orgid id Description Updates (or inserts if does not exist in catalog) an external service specification) Message Alias NFV_CATALOG_NSACTIONS_SCALE Name jms:queue:NSACTIONS.SCALE Type queue Destination TMF API service Producers OSOM or maybe used by others that would like scale a NS Body A ScaleDescriptor as json string Headers none Description performs a scale Message Alias NFV_CATALOG_NS_LCMCHANGED Name NFV_CATALOG_NS_LCMCHANGED Type topic Destination any Producers MANO client Body A json string Headers none Description A NFV_CATALOG_NS_LCMCHANGED message is published when LCM of a running NS is changed"},{"location":"architecture/messagebus/#alarms","title":"ALARMS","text":"Message Alias ALARMS_ADD_ALARM Name jms:queue:ALARMS.ADD.ALARM Type queue Publishers Consumers TMF API Body AlarmCreate Headers Description Add an alarm Message Alias ALARMS_UPDATE_ALARM Name jms:queue:ALARMS.UPDATE.ALARM Type queue Publishers Consumers TMF API Body AlarmUpdate Headers alarmid = alarm id, body (AlarmUpdate object) Description Update an alarm Message Alias ALARMS_GET_ALARM Name jms:queue:ALARMS.GET.ALARM Type queue Publishers Consumers TMF API Body Headers alarmid = alarm id Description get an alarm"},{"location":"architecture/messagebus/#event-topics-in-message-bus","title":"EVENT TOPICS IN Message Bus","text":"Message Alias EVENT_SERVICE_CREATE Name jms:topic:EVENT.SERVICE.CREATE Type topic Publishers TMF API Consumers - Body Notification object Headers \"eventid\"=eventid, \"objId\"= objId Description xx Message Alias EVENT_SERVICE_STATE_CHANGED Name jms:topic:EVENT.SERVICE.STATECHANGED Type topic Publishers TMF API Consumers - Body Notification object Headers \"eventid\"=eventid, \"objId\"= objId Description xx Message Alias EVENT_SERVICE_DELETE Name jms:topic:EVENT.SERVICE.DELETE Type topic Publishers TMF API Consumers - Body Notification object. Can be one of ServiceOrderCreateNotification, ServiceOrderStateChangeNotification, ServiceOrderAttributeValueChangeNotification, ServiceOrderDeleteNotification, etc Headers \"eventid\"=eventid, \"objId\"= objId Description xx Message Alias EVENT_SERVICE_ATTRIBUTE_VALUE_CHANGED Name jms:topic:EVENT.SERVICE.ATTRCHANGED Type topic Publishers TMF API Consumers - Body Notification object. Can be one of ServiceOrderCreateNotification, ServiceOrderStateChangeNotification, ServiceOrderAttributeValueChangeNotification, ServiceOrderDeleteNotification, etc Headers \"eventid\"=eventid, \"objId\"= objId Description xx Message Alias EVENT_SERVICE_ORDER_CREATE Name jms:topic:EVENT.SERVICEORDER.CREATE Type topic Publishers TMF API Consumers BUGZILLA Service, CentralLog Service Body Notification object. Can be one of ServiceOrderCreateNotification, ServiceOrderStateChangeNotification, ServiceOrderAttributeValueChangeNotification, ServiceOrderDeleteNotification, etc Headers \"eventid\"=eventid, \"objId\"= objId Description The Event of the Notification object contains the ServiceOrder object. Bugzilla service for example uses this to create a new issue Message Alias EVENT_SERVICE_ORDER_STATE_CHANGED Name jms:topic:EVENT.SERVICEORDER.STATECHANGED Type topic Publishers TMF API Consumers BUGZILLA Service, CentralLog Service Body Notification object. Can be one of ServiceOrderCreateNotification, ServiceOrderStateChangeNotification, ServiceOrderAttributeValueChangeNotification, ServiceOrderDeleteNotification, etc Headers \"eventid\"=eventid, \"objId\"= objId Description The Event of the Notification object contains the ServiceOrder object. Bugzilla service for example uses this to update an issue Message Alias EVENT_SERVICE_ORDER_DELETE Name jms:topic:EVENT.SERVICEORDER.DELETE Type topic Publishers TMF API Consumers BUGZILLA Service, CentralLog Service Body Notification object. Can be one of ServiceOrderCreateNotification, ServiceOrderStateChangeNotification, ServiceOrderAttributeValueChangeNotification, ServiceOrderDeleteNotification, etc Headers \"eventid\"=eventid, \"objId\"= objId Description The Event of the Notification object contains the ServiceOrder object Message Alias EVENT_SERVICE_ORDER_ATTRIBUTE_VALUE_CHANGED Name jms:topic:EVENT.SERVICEORDER.ATTRCHANGED Type topic Publishers TMF API Consumers BUGZILLA Service, CentralLog Service Body Notification object. Can be one of ServiceOrderCreateNotification, ServiceOrderStateChangeNotification, ServiceOrderAttributeValueChangeNotification, ServiceOrderDeleteNotification, etc Headers \"eventid\"=eventid, \"objId\"= objId Description The Event of the Notification object contains the ServiceOrder object Message Alias EVENT_ALARM_CREATE Name jms:topic:EVENT.ALARM.CREATE Type topic Publishers TMF API Consumers OAS, BUGZILLA Service, CentralLog Service Body AlarmCreateEvent Headers Description The Event  contains the Alarm object in payload Message Alias CATALOG_ADD_RESOURCE Name jms:queue:CATALOG.ADD.RESOURCE Type topic Publishers TMF API Consumers any Body ResourceCreate Headers Description The Body  contains the ResourceCreate object to add Message Alias CATALOG_UPD_RESOURCE Name jms:queue:CATALOG.UPD.RESOURCE Type topic Publishers TMF API Consumers any Body ResourceUpdate Headers resourceid , propagateToSO Description The Body  contains the ResourceCreate object to update Message Alias CATALOG_GET_RESOURCE_BY_ID Name jms:queue:CATALOG.GET.RESOURCE Type topic Publishers TMF API Consumers any Body resourceid Headers Description The Body  contains the ResourceCreate object to update Message Alias CATALOG_RESOURCES_OF_PARTNERS Name jms:queue:CATALOG.GET.SERVICESOFPARTNERS Type topic Publishers TMF API Consumers any Body none Headers none Description retrieve all active services of partners Message Alias CATALOG_ADD_RESOURCESPEC Name jms:queue:CATALOG.ADD.RESOURCESPEC Type topic Publishers TMF API Consumers any Body ResourceSpecificationCreate Headers Description The Body  contains the ResourceSpecificationCreate object to add Message Alias CATALOG_UPD_RESOURCESPEC Name jms:queue:CATALOG.UPD.RESOURCESPEC Type topic Publishers TMF API Consumers any Body ResourceSpecificationUpdate Headers resourceSpecId Description The Body  contains the ResourceSpecificationCreate object to update Message Alias CATALOG_GET_RESOURCESPEC_BY_ID Name jms:queue:CATALOG.GET.RESOURCESPEC_BY_ID Type topic Publishers TMF API Consumers any Body resourceSpecid Headers Description The Body  contains the object id to find Message Alias CATALOG_UPDADD_RESOURCESPEC Name jms:queue:CATALOG.UPDADD.RESOURCESPEC Type topic Publishers TMF API Consumers any Body resourceid Headers Description The Body  contains the ResourceSpecificationCreate object to update or create if not exist Message Alias EVENT_RESOURCE_CREATE Name jms:topic:EVENT.RESOURCE.CREATE Type topic Publishers TMF API Consumers BUGZILLA Service, CentralLog Service, other Body Notification object. Headers \"eventid\"=eventid, \"objId\"= objId Description The Event of the Notification object contains the Resource object Message Alias EVENT_RESOURCE_STATE_CHANGED Name jms:topic:EVENT.RESOURCE.STATECHANGED Type topic Publishers TMF API Consumers BUGZILLA Service, CentralLog Service, other Body Notification object. Headers \"eventid\"=eventid, \"objId\"= objId Description The Event of the Notification object contains the Resource object Message Alias EVENT_RESOURCE_DELETE Name jms:topic:EVENT.SERVICE.RESOURCE Type topic Publishers TMF API Consumers BUGZILLA Service, CentralLog Service, other Body Notification object. Headers \"eventid\"=eventid, \"objId\"= objId Description The Event of the Notification object contains the Resource object Message Alias EVENT_RESOURCE_ATTRIBUTE_VALUE_CHANGED Name jms:topic:EVENT.RESOURCE.ATTRCHANGED Type topic Publishers TMF API Consumers BUGZILLA Service, CentralLog Service, other Body Notification object. Headers \"eventid\"=eventid, \"objId\"= objId Description The Event of the Notification object contains the Resource object Message Alias CATALOG_GET_LCMRULE_BY_ID Name jms:queue:CATALOG.GET.LCMRULE Type topic Publishers TMF API Consumers any Body lcmid Headers Description The Body  contains the LCMRuleSpec object Message Alias CATALOG_GET_LCMRULES_BY_SPECID_PHASE Name jms:queue:CATALOG.GET.LCMRULES_BY_SPECID_PHASE Type topic Publishers TMF API Consumers any Body Headers header.servicespecid, header.phasename Description The Body  contains the LCMRuleSpec objects of the specific Service Spec and the specific phase Message Alias CATALOG_GET_SERVICETESTSPEC_BY_ID Name jms:queue:CATALOG.GET.SERVICETESTSPEC_BY_ID Type queue Destination TMF API service Producers OSOM Body specid Description Return a ServiceTestSpecification Message Alias CATALOG_ADD_SERVICETEST Name jms:queue:CATALOG.ADD.SERVICETEST Type queue Destination TMF API service Producers OSOM Body ServiceTestCreate String json Headers \"orderid\"=orderid, \"serviceTestSpecid\"= specid Description Creates Service Test based an a Service Test Spec, Returns a ServiceTest object Message Alias CATALOG_UPD_SERVICETEST Name jms:queue:CATALOG.UPD.SERVICETEST Type queue Destination TMF API service Producers Body ServiceTestUpdate Headers \"serviceid\" = serviceId, \"propagateToSO\" = true/false Description will update a service test by id and return the service instance. If propagateToSO=true then any service change will be handled by OSOM. This is needed to be controlled in order to avoid update loops Message Alias CATALOG_GET_SERVICETEST_BY_ID Name jms:queue:CATALOG.GET.SERVICETEST Type queue Destination TMF API service Producers OSOM Body String serviceID Description returns a Service TEST instance Message Alias CRD_DEPLOY_CR_REQ Name jms:queue:CRD.DEPLOY.CR_REQ Type queue Destination CRD  service Producers OSOM Body CR spec as String Headers related service id Description Returns a String object containing deployment info Message Alias CRD_PATCH_CR_REQ Name jms:queue:CRD.PATCH.CR_REQ Type queue Destination CRD  service Producers OSOM Body CR  as String Headers related service id Description Returns a String object containing PATCH info Message Alias CRD_DELETE_CR_REQ Name jms:queue:CRD.DELETE.CR_REQ Type queue Destination CRD  service Producers OSOM Body CR  as String Headers related service id Description Returns a String object containing deletion info"},{"location":"architecture/nfvapi/","title":"API interaction","text":""},{"location":"architecture/nfvapi/#oauth-token","title":"OAuth token","text":"<p>See oauth</p>"},{"location":"architecture/nfvapi/#request-a-protected-api-resource","title":"Request a protected API resource","text":"<p>Example: Get all vxfs (check the <code>Authorization:Bearer</code> to be correct)</p> <p><pre><code>curl -H \"Authorization:Bearer eybGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX25hbWUiOiJhZG1pbiIsInNjb3BlIjpbIm9wZW5hcGkiLCJhZG1pbiIsInJlYWQiLCJ3cml0ZSJdLCJvcmdhbml6YXRpb24iOiJteW9yZ2FuaXp0aW9uIiwiZXhwIjoxNTcxOTI0MjU2LCJhdXRob3JpdGllcyI6WyJST0xFX01FTlRPUiIsIlJPTEVfQURNSU4iXSwianRpIjoiNzNkZmIxODEtNTMwOS00MmExLThkOWUtOGM3YmQ0YTE1YmU0IiwiY2xpZW50X2lkIjoib3NhcGlXZWJDbGllbnRJZE91dCJ9.Pj_hxnyMGhFhN8avU_DiAw1-LlcaIz5Hp9HNqalw-X4\" http://localhost:13000/osapi/admin/vxfs\n</code></pre> Example response:</p> <pre><code>[\n  {\n    \"id\": 1,\n    \"owner\": {\n      \"id\": 1,\n      \"organization\": \"ee\",\n      \"name\": \"Portal Administrator\",\n      \"email\": \"\",\n      \"username\": \"admin\",\n      \"createdAt\": null\n    },\n    \"uuid\": \"a954daf2-16da-4b7e-ae42-4825936d453c\",\n    \"name\": \"cirros_vnfd\",\n    \"iconsrc\": \"/osapi/images/a954daf2-16da-4b7e-ae42-4825936d453c/cirros-64.png\",\n    \"shortDescription\": \"cirros_vnfd\",\n    \"longDescription\": \"Simple VNF example with a cirros\",\n    \"version\": \"1.0\",\n    \"packageLocation\": \"/osapi/packages/a954daf2-16da-4b7e-ae42-4825936d453c/cirros_vnf.tar.gz\",\n    \"dateCreated\": 1568971426000,\n    \"dateUpdated\": 1568981107000,\n    \"categories\": [\n      {\n        \"id\": 3,\n        \"name\": \"Service\",\n        \"productsCount\": 1,\n        \"appscount\": 0,\n        \"vxFscount\": 1\n      },\n      {\n        \"id\": 2,\n        \"name\": \"Networking\",\n        \"productsCount\": 1,\n        \"appscount\": 0,\n        \"vxFscount\": 1\n      }\n    ],\n    \"extensions\": [],\n    \"validationJobs\": [],\n    \"screenshots\": \"\",\n    \"vendor\": \"OSM\",\n    \"published\": false,\n    \"termsOfUse\": null,\n    \"descriptor\": \"vnfd-catalog:\\n    vnfd:\\n    -   connection-point:\\n        -   name: eth0\\n            type: VPORT\\n        description: Simple VNF example with a cirros\\n        id: cirros_vnfd\\n        logo: cirros-64.png\\n        mgmt-interface:\\n            cp: eth0\\n        name: cirros_vnfd\\n        short-name: cirros_vnfd\\n        vdu:\\n        -   count: 1\\n            description: cirros_vnfd-VM\\n            id: cirros_vnfd-VM\\n            image: cirros034\\n            interface:\\n            -   external-connection-point-ref: eth0\\n                name: eth0\\n                position: '1'\\n                type: EXTERNAL\\n                virtual-interface:\\n                    bandwidth: '0'\\n                    type: VIRTIO\\n                    vpci: 0000:00:0a.0\\n            name: cirros_vnfd-VM\\n            vm-flavor:\\n                memory-mb: 512\\n                storage-gb: 1\\n                vcpu-count: 1\\n        vendor: OSM\\n        version: '1.0'\\n\",\n    \"descriptorHTML\": \"&lt;h3&gt;cirros_vnfd&lt;/h3&gt;&lt;br&gt;&lt;b&gt;Vendor: &lt;/b&gt;OSM&lt;br&gt;&lt;b&gt;Version: &lt;/b&gt;1.0&lt;br&gt;&lt;b&gt;Description: &lt;/b&gt;Simple VNF example with a cirros&lt;br&gt;&lt;b&gt;VM Count: &lt;/b&gt;1&lt;br&gt;&lt;b&gt;vCPU Count: &lt;/b&gt;1&lt;br&gt;&lt;b&gt;Memory: &lt;/b&gt;512 MB&lt;br&gt;&lt;b&gt;Storage: &lt;/b&gt;1 GB&lt;br&gt;\",\n    \"certified\": false,\n    \"certifiedBy\": null,\n    \"validationStatus\": \"UNDER_REVIEW\",\n    \"packagingFormat\": \"OSMvFIVE\",\n    \"supportedMANOPlatforms\": [\n      {\n        \"id\": 1,\n        \"name\": \"osm fivee\",\n        \"version\": \"osm fivee\",\n        \"description\": \"osm five\"\n      }\n    ],\n    \"vxfOnBoardedDescriptors\": [],\n    \"vfimagesVDU\": [\n      {\n        \"id\": 1,\n        \"name\": \"cirros034\",\n        \"uuid\": \"d4549610-8abd-42ad-97f4-0a64e1c93977\",\n        \"shortDescription\": \"Automatically created during vxf cirros_vnfd submission. Owner must update.\",\n        \"packageLocation\": null,\n        \"publicURL\": null,\n        \"dateCreated\": 1568971426000,\n        \"dateUpdated\": null,\n        \"refVxFs\": [\n          {\n            \"id\": 1,\n            \"name\": \"cirros_vnfd\"\n          }\n        ],\n        \"owner\": {\n          \"id\": 1,\n          \"organization\": \"ee\",\n          \"name\": \"Portal Administrator\",\n          \"email\": \"\",\n          \"username\": \"admin\",\n          \"active\": true,\n          \"currentSessionID\": null,\n          \"apikey\": \"e41c1cc4-aa56-4b7e-9f4d-64589549d768\",\n          \"createdAt\": 1568711859000,\n          \"roles\": [\n            \"ADMIN\",\n            \"MENTOR\"\n          ]\n        },\n        \"published\": false,\n        \"termsOfUse\": null,\n        \"deployedInfrastructures\": []\n      }\n    ]\n  },\n  {\n    \"id\": 2,\n    \"owner\": {\n      \"id\": 1,\n      \"organization\": \"ee\",\n      \"name\": \"Portal Administrator\",\n      \"email\": \"\",\n      \"username\": \"admin\",\n      \"createdAt\": null\n    },\n    \"uuid\": \"4ab80095-a63e-4fe7-8598-e1c7e880706e\",\n    \"name\": \"cirros_sriov_vnfd\",\n    \"iconsrc\": null,\n    \"shortDescription\": \"cirros_sriov_vnf\",\n    \"longDescription\": \"Simple VNF example with a cirros SRIOV interface\",\n    \"version\": \"1.0\",\n    \"packageLocation\": \"/osapi/packages/4ab80095-a63e-4fe7-8598-e1c7e880706e/cirros_sriov.tar.gz\",\n    \"dateCreated\": 1568971740000,\n    \"dateUpdated\": 1568981100000,\n    \"categories\": [\n      {\n        \"id\": 4,\n        \"name\": \"tyu\",\n        \"productsCount\": 1,\n        \"appscount\": 0,\n        \"vxFscount\": 1\n      },\n      {\n        \"id\": 5,\n        \"name\": \"tyi\",\n        \"productsCount\": 1,\n        \"appscount\": 0,\n        \"vxFscount\": 1\n      }\n    ],\n    \"extensions\": [],\n    \"validationJobs\": [],\n    \"screenshots\": \"\",\n    \"vendor\": \"OSM\",\n    \"published\": false,\n    \"termsOfUse\": null,\n    \"descriptor\": \"vnfd:vnfd-catalog:\\n  vnfd:\\n  - connection-point:\\n    - name: eth0\\n      type: VPORT\\n    - name: eth1\\n      type: VPORT\\n    description: Simple VNF example with a cirros SRIOV interface\\n    id: cirros_sriov_vnfd\\n    logo: cirros-64.png\\n    mgmt-interface:\\n      cp: eth0\\n    name: cirros_sriov_vnf\\n    short-name: cirros_sriov_vnf\\n    vdu:\\n    - count: 1\\n      description: cirros_sriov_vnfd-VM\\n      guest-epa:\\n        cpu-pinning-policy: DEDICATED\\n        cpu-thread-pinning-policy: PREFER\\n        mempage-size: SMALL\\n        numa-node-policy:\\n          mem-policy: STRICT\\n          node:\\n          - id: '1'\\n          node-cnt: '1'\\n      id: cirros_sriov_vnfd-VM\\n      image: cirros-0.3.6-x86_64\\n      interface:\\n      - external-connection-point-ref: eth0\\n        name: eth0\\n        position: '1'\\n        type: EXTERNAL\\n        virtual-interface:\\n          bandwidth: '0'\\n          type: VIRTIO\\n          vpci: 0000:00:0a.0\\n      - external-connection-point-ref: eth1\\n        name: eth1\\n        position: '2'\\n        type: EXTERNAL\\n        virtual-interface:\\n          type: SR-IOV\\n      name: cirros_sriov_vnfd-VM\\n      vm-flavor:\\n        memory-mb: 4096\\n        storage-gb: 10\\n        vcpu-count: 4\\n    vendor: OSM\\n    version: '1.0'\\n\",\n    \"descriptorHTML\": \"&lt;h3&gt;cirros_sriov_vnf&lt;/h3&gt;&lt;br&gt;&lt;b&gt;Vendor: &lt;/b&gt;OSM&lt;br&gt;&lt;b&gt;Version: &lt;/b&gt;1.0&lt;br&gt;&lt;b&gt;Description: &lt;/b&gt;Simple VNF example with a cirros SRIOV interface&lt;br&gt;&lt;b&gt;VM Count: &lt;/b&gt;1&lt;br&gt;&lt;b&gt;vCPU Count: &lt;/b&gt;1&lt;br&gt;&lt;b&gt;Memory: &lt;/b&gt;4096 MB&lt;br&gt;&lt;b&gt;Storage: &lt;/b&gt;10 GB&lt;br&gt;\",\n    \"certified\": false,\n    \"certifiedBy\": null,\n    \"validationStatus\": \"UNDER_REVIEW\",\n    \"packagingFormat\": \"OSMvFIVE\",\n    \"supportedMANOPlatforms\": [\n      {\n        \"id\": 1,\n        \"name\": \"osm fivee\",\n        \"version\": \"osm fivee\",\n        \"description\": \"osm five\"\n      }\n    ],\n    \"vxfOnBoardedDescriptors\": [],\n    \"vfimagesVDU\": [\n      {\n        \"id\": 2,\n        \"name\": \"cirros-0.3.6-x86_64\",\n        \"uuid\": \"be121176-1d62-4a1b-a3c1-7dce2e069d22\",\n        \"shortDescription\": \"Automatically created during vxf cirros_sriov_vnfd submission. Owner must update.\",\n        \"packageLocation\": null,\n        \"publicURL\": null,\n        \"dateCreated\": 1568971740000,\n        \"dateUpdated\": null,\n        \"refVxFs\": [\n          {\n            \"id\": 2,\n            \"name\": \"cirros_sriov_vnfd\"\n          }\n        ],\n        \"owner\": {\n          \"id\": 1,\n          \"organization\": \"ee\",\n          \"name\": \"Portal Administrator\",\n          \"email\": \"\",\n          \"username\": \"admin\",\n          \"active\": true,\n          \"currentSessionID\": null,\n          \"apikey\": \"e41c1cc4-aa56-4b7e-9f4d-64589549d768\",\n          \"createdAt\": 1568711859000,\n          \"roles\": [\n            \"ROLE_ADMIN\",\n            \"ROLE_MENTOR\"\n          ]\n        },\n        \"published\": false,\n        \"termsOfUse\": null,\n        \"deployedInfrastructures\": []\n      }\n    ]\n  }\n]\n</code></pre>"},{"location":"architecture/oauth/","title":"Authentication Server","text":"<p>Authentication is based on oAuth2. Our authentication service is a Keycloak server which is deployed with Openslice deployment </p> <p>API users needs to authenticate.  All APIs (except grant token request) must include Bearer token in request Authorization header.</p>"},{"location":"architecture/oauth/#oauth-token","title":"OAuth token","text":"<p>Get first an oauth token, using your username and password.  <pre><code>curl -X POST http://portal.openslice.eu/auth/realms/openslice/protocol/openid-connect/token -H 'Content-Type: application/x-www-form-urlencoded' -d 'username=demouser' -d 'password=demouser' -d 'grant_type=password' -d 'client_id=osapiWebClientId' \n</code></pre> response:</p> <pre><code>                                                       {\"access_token\":\"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJHZFRjQnpxczg2VW10NTRVZV8ybTJyWHJkV3dzaWdSZE9EUldMYm1memNvIn0.eyJleHAiOjE1ODgyNDA1NzAsImlhdCI6MTU4ODI0MDI3MCwianRpIjoiOGI2ZTU0NWUtNDIyYy00NzFiLWEwN2UtYTUzYzY1NDQ0MzZmIiwiaXNzIjoiaHR0cDovL3BvcnRhbC5vcGVuc2xpY2UuaW8vYXV0aC9yZWFsbXMvb3BlbnNsaWNlIiwiYXVkIjoiYWNjb3VudCIsInN1YiI6ImExYTI3NjVhLWVjODMtNDQ1Ni1iN2IyLTIwNzMxOTg2ZTAzNSIsInR5cCI6IkJlYXJlciIsImF6cCI6Im9zYXBpV2ViQ2xpZW50SWQiLCJzZXNzaW9uX3N0YXRlIjoiMzM1MGY0OTMtNjYyNy00MzczLTg1NDQtZGVmZDI3YWQzYzc0IiwiYWNyIjoiMSIsImFsbG93ZWQtb3JpZ2lucyI6WyJodHRwOi8vbG9jYWxob3N0OjEzMDgyIiwiaHR0cDovL2xvY2FsaG9zdCIsImh0dHA6Ly9vcGVuc2xpY2UuaW8iLCJodHRwOi8vbG9jYWxob3N0OjEzMDAwIiwiaHR0cDovL2xvY2FsaG9zdDo0MjAwIiwiaHR0cDovL3BvcnRhbC5vcGVuc2xpY2UuaW8iXSwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIk5GVl9ERVZFTE9QRVIiLCJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiRVhQRVJJTUVOVEVSIiwiVVNFUiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoicHJvZmlsZSBlbWFpbCIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJkZW1vdXNlciJ9.TnzzpRLMD94UTKpT5_wkr1h4_3KUQmr4TGvFLpJ7cZx-Klrv8tB_eRkWnPqqzCAM9G21a1qXboL8MLMW8ECzG7HYKpobKOGr7LSczMOTdA2ZDyBCRUSOdW77pchu54tJ0ITEkFaDwSKMKKt04V_Sy4U-eIndj0XzzRlkDolWDnK4Z2lRaXAI6fMwOKx1Toa6RfOcemxtzl3pdtjPx92zo6MaKlbIqHK82lxdK0k8aQQaT6TmIrXbZKV2dU_1d3O0q0dVUEZJ_1kzwqOFkmxr9w0EnndC6ccYJlDAr_-GgUhhhNOn5v6tjYLUQdj5e4KEAsxIPzaCreK4un7mEAPmDw\",\"expires_in\":300,\"refresh_expires_in\":1800,\"refresh_token\":\"eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIwZjUxMDk5Yy0wNTIzLTRjNGQtODM0Zi1iNDc0YzBjOTA1MzkifQ.eyJleHAiOjE1ODgyNDIwNzAsImlhdCI6MTU4ODI0MDI3MCwianRpIjoiZmViOTg5NWEtOTY5ZS00MzIzLWJjY2QtZTY2YzQ0NGE1MzJlIiwiaXNzIjoiaHR0cDovL3BvcnRhbC5vcGVuc2xpY2UuaW8vYXV0aC9yZWFsbXMvb3BlbnNsaWNlIiwiYXVkIjoiaHR0cDovL3BvcnRhbC5vcGVuc2xpY2UuaW8vYXV0aC9yZWFsbXMvb3BlbnNsaWNlIiwic3ViIjoiYTFhMjc2NWEtZWM4My00NDU2LWI3YjItMjA3MzE5ODZlMDM1IiwidHlwIjoiUmVmcmVzaCIsImF6cCI6Im9zYXBpV2ViQ2xpZW50SWQiLCJzZXNzaW9uX3N0YXRlIjoiMzM1MGY0OTMtNjYyNy00MzczLTg1NDQtZGVmZDI3YWQzYzc0Iiwic2NvcGUiOiJwcm9maWxlIGVtYWlsIn0.cDTx9BE1Df8EfGYm3VLr_MNFeymxZtJhMtlK7PVbIuk\",\"token_type\":\"bearer\",\"not-before-policy\":1586797346,\"session_state\":\"3350f493-6627-4373-8544-defd27ad3c74\",\"scope\":\"profile email\"}\n</code></pre> <p>The <code>access_token</code> will be used next as a Bearer.</p> <pre><code>curl http://portal.openslice.eu/tmf-api/serviceCatalogManagement/v4/serviceCatalog -H 'Authorization: Bearer yJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJHZFRjQnpxczg2VW10NTRVZV8ybTJyWHJkV3dzaWdSZE9EUldMYm1memNvIn0.eyJleHAiOjE1ODgyNDA1MjQsImlhdCI6MTU4ODI0MDIyNCwianRpIjoiYjg0NGYxZDAtMzk3Mi00YTMyLThiMWEtZDAxMDY3OGZjMTQ4IiwiaXNzIjoiaHR0cDovL3BvcnRhbC5vcGVuc2xpY2UuaW8vYXV0aC9yZWFsbXMvb3BlbnNsaWNlIiwic3ViIjoiYTFhMjc2NWEtZWM4My00NDU2LWI3YjItMjA3MzE5ODZlMDM1IiwidHlwIjoiQmVhcmVyIiwiYXpwIjoiYWRtaW4tY2xpIiwic2Vzc2lvbl9zdGF0ZSI6ImFmMmMzZmY1LTE4YWQtNDFkNC1hYTAyLTFlMGJkNzNmOTM5MSIsImFjciI6IjEiLCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwicHJlZmVycmVkX3VzZXJuYW1lIjoiZGVtb3VzZXIifQ.SMtgV1E44_K_MQumGXZtWsLGVhYNaoM8Pk-DiFIZtUP4Zu-ervOsxHVQMX1frgVERR4jJidBcSshy9TnJ3UjF4l33WujHltbs-1UPy-gaIufVuEpl8RmbjOti3Up70vLfLXbzb6kN6WaahgobWXlbJsSXXwaBPQP6vSX5KigCa8TmzXcuqom14lOrlU-RB2zQTlJ30p7d9ag-a7o3I5m9GZWLJCZW2UYMl1JkskTHKgilA8HFQY4C9DYwWu8YDMyzqQSNumrTlURalBFidFbZvb1kp4dAyct8TysSWSbxxiwaL2RX1PWUqk-5Fpc1Q6BnBC8muMheiukFuoSkuADAg'^C\nubuntu@portal:~$ curl http://portal.openslice.eu/tmf-api/serviceCatalogManagement/v4/serviceCatalog -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJHZFRjQnpxczg2VW10NTRVZV8ybTJyWHJkV3dzaWdSZE9EUldMYm1memNvIn0.eyJleHAiOjE1ODgyNDA1NzAsImlhdCI6MTU4ODI0MDI3MCwianRpIjoiOGI2ZTU0NWUtNDIyYy00NzFiLWEwN2UtYTUzYzY1NDQ0MzZmIiwiaXNzIjoiaHR0cDovL3BvcnRhbC5vcGVuc2xpY2UuaW8vYXV0aC9yZWFsbXMvb3BlbnNsaWNlIiwiYXVkIjoiYWNjb3VudCIsInN1YiI6ImExYTI3NjVhLWVjODMtNDQ1Ni1iN2IyLTIwNzMxOTg2ZTAzNSIsInR5cCI6IkJlYXJlciIsImF6cCI6Im9zYXBpV2ViQ2xpZW50SWQiLCJzZXNzaW9uX3N0YXRlIjoiMzM1MGY0OTMtNjYyNy00MzczLTg1NDQtZGVmZDI3YWQzYzc0IiwiYWNyIjoiMSIsImFsbG93ZWQtb3JpZ2lucyI6WyJodHRwOi8vbG9jYWxob3N0OjEzMDgyIiwiaHR0cDovL2xvY2FsaG9zdCIsImh0dHA6Ly9vcGVuc2xpY2UuaW8iLCJodHRwOi8vbG9jYWxob3N0OjEzMDAwIiwiaHR0cDovL2xvY2FsaG9zdDo0MjAwIiwiaHR0cDovL3BvcnRhbC5vcGVuc2xpY2UuaW8iXSwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIk5GVl9ERVZFTE9QRVIiLCJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiRVhQRVJJTUVOVEVSIiwiVVNFUiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoicHJvZmlsZSBlbWFpbCIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJkZW1vdXNlciJ9.TnzzpRLMD94UTKpT5_wkr1h4_3KUQmr4TGvFLpJ7cZx-Klrv8tB_eRkWnPqqzCAM9G21a1qXboL8MLMW8ECzG7HYKpobKOGr7LSczMOTdA2ZDyBCRUSOdW77pchu54tJ0ITEkFaDwSKMKKt04V_Sy4U-eIndj0XzzRlkDolWDnK4Z2lRaXAI6fMwOKx1Toa6RfOcemxtzl3pdtjPx92zo6MaKlbIqHK82lxdK0k8aQQaT6TmIrXbZKV2dU_1d3O0q0dVUEZJ_1kzwqOFkmxr9w0EnndC6ccYJlDAr_-GgUhhhNOn5v6tjYLUQdj5e4KEAsxIPzaCreK4un7mEAPmDw'\n\nResponse:\n\n\n[{\"uuid\":\"9e186cd5-b2b2-4a06-b1d6-895720193bc9\",\"lastUpdate\":\"2020-03-11T23:19:05Z\",\"@baseType\":\"BaseEntity\",\"@schemaLocation\":null,\"@type\":\"ServiceCatalog\",\"href\":null,\"name\":\"Example Facility Services\",\"description\":\"Example Facility Services\",\"lifecycleStatus\":\"Active\",\"version\":\"1.0\",\"validFor\":{\"endDateTime\":\"2039-11-20T23:07:21Z\",\"startDateTime\":\"2019-11-20T23:07:21Z\"},\"relatedParty\":null,\"id\":\"9e186cd5-b2b2-4a06-b1d6-895720193bc9\",\"category\":[{\"@baseType\":\"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\"@schemaLocation\":null,\"@type\":\"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\"href\":null,\"name\":\"Generic Services\",\"@referredType\":null,\"id\":\"98b9adf1-a1d6-4165-855f-153ddc2131b1\"},{\"@baseType\":\"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\"@schemaLocation\":null,\"@type\":\"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\"href\":null,\"name\":\"External\",\"@referredType\":null,\"id\":\"08ffdb3c-6237-45d0-9f3a-d43b5fc5f0b6\"},{\"@baseType\":\"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\"@schemaLocation\":null,\"@type\":\"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\"href\":null,\"name\":\"eMBB\",\"@referredType\":null,\"id\":\"ef2c90dd-b65e-4a9f-a9c3-427c9fb0219b\"}]}]\n</code></pre>"},{"location":"architecture/osom/","title":"Openslice Service Orchestration and Order Management - OSOM","text":"<p>OSOM is a service responsible for:</p> <ul> <li>Service Order Management (SOM)</li> <li>Service Orchestration (SO)</li> </ul> <p>It uses open source Flowable Business process engine (https://www.flowable.org) .</p> <p>A Service Order follows the states as defined in TMF641 specification: </p> <p></p>"},{"location":"architecture/osom/#initial-state","title":"Initial state","text":"<p>When a new order is created, it goes into the Initial state. It is stored in the repository and triggers an Event.</p> <p></p> <p>Administrators are notified usually from the Ticketing System of a new order. They login to Openslice and change the State of the order either to ACKNOWLEDGED or REJECTED. If ACKNOWLEDGED they can Propose a startDate, add Notes, and add any additional service items</p>"},{"location":"architecture/osom/#order-scheduler","title":"Order scheduler","text":"<p>A process checks every 1 minute for ACKNOWLEDGED orders.</p> <p></p> <p></p> <p>It retrieves all orders that are in ACKNOWLEDGED state and if the start date is in time it will initialize the process by settingn the order in IN_PROGRESS state. Finally the Start Order Process will start.</p>"},{"location":"architecture/osom/#start-order-process","title":"Start order process","text":"<p>This process for now is a draft simple prototype to make a simple orchestration via NFVO. Here the actual Services (TMF638/640 model) are created and attached to Service Order and Service Inventory.</p> <p></p> <p></p> <p>We expect here to check which tasks can be orchestrated by NFVO and which by human. We create the equivalent Services(TMF638/640 model) for this order.</p> <ol> <li>During check it should decide to create Service(s) for this service order O1 and send it to ServiceInventory</li> <li>The Services(TMF638 model) are assigned to the Order O1 In supportService List</li> <li>Each OrderItem OI1 is related to one serviceSpecification</li> <li>Each ServiceSpecification has also related serviceSpecRelationships</li> <li>So if we receive an order O1 for a ServiceSpec A which relates to (a bundle of) 3 specs(2 CFS, 1 RFS) we do the following:<ol> <li>Create a Service S_A(TMF638 model) for ServiceSpec A for Order O1</li> <li>We create also 3 Services S_C1, S_C2 and S_R1 equivalent to the serviceSpecRelationships (2 CFS, 1 RFS) </li> <li>At this point the order will have 1 Local Service Orchestration Process(S_A),  2 supportingServices  refs(S_C1, S_C2) and 1 supportingResource(S_R1)</li> <li>The 3 supportingServices and 1 supportingResource correspond to total 4 Services in ServiceInventory</li> <li>Service S_A will have: <ol> <li>startMode 1: Automatically by the managed environment</li> <li>State: RESERVED and the Lifecycle will be handled by OSOM</li> </ol> </li> <li>Services S_C1 and S_C2 we decide that cannot be orchestrated then they have <ol> <li>startMode: 3: Manually by the Provider of the Service</li> <li>State: RESERVED and the Lifecycle will be handled by OSOM</li> <li>If the CFS is a bundle spec it is further recursively orchestrated </li> </ol> </li> <li>Service S_R1 will have <ol> <li>startMode 1: Automatically by the managed environment.</li> <li>State: RESERVED</li> <li>IF The Service has the characteristic CharacteristicByName( \"NSDID\") it will be further processed by the NFVO </li> </ol> </li> </ol> </li> </ol> <p>There will be two instances of task \"User Task Manual Complete Service\" each for Services S_C1 and S_C2. The task is Transient for now. It displays only the services that are not automated!  Here is a flow for future:</p> <ol> <li>We wait here for human decision.</li> <li>From API we get a result:     a. If set to ACTIVE/TERMINATED then we complete the task     b. In any other state we stay in this task until it is resolved as in step a     c. The Status of ORDER O1 is also updated to PARTIAL</li> </ol> <p>There will be an instance of  NFVODeploymentRequest process  each for Service S_R1. (see later)</p> <ol> <li>This process is related with the NFVO orchestration</li> <li>It will send a msg to NFVO(s?) for a specific deployment request</li> </ol> <p>All services in \"Order Complete\" are in a status:</p> <ol> <li>Depending on the result the service S_A is either ACTIVE or INACTIVE or TERMINATED</li> <li>The Status of ORDER O1 is also updated to COMPLETED  or PARTIAL (in case we have some services running) or FAILED (in cases we have errors)</li> </ol> <p>A Service follows the states as defined in TMF638 Service Inventory specification: </p> <p></p>"},{"location":"architecture/osom/#nfvodeploymentrequest-process","title":"NFVODeploymentRequest process","text":"<p>This process is related with the NFVO orchestration It will send a msg to NFVO(s?) for a specific deployment request Then it checks the deployment status. It will wait 30 secs each time until the deployment is running (or failed)</p>"},{"location":"architecture/osom/#check-in-progress-orders-process","title":"Check In Progress orders process","text":"<p>Every 1 minute the \"Check In Progress Orders\" process is executed checking if a supported Service changed state (i.e. to ACTIVE) then the whole Order will change state (e.g. go to COMPLETED)</p> <p></p>"},{"location":"architecture/osom/#external-service-provider-deployment-request-process","title":"External Service Provider Deployment Request process","text":"<p>This process contains tasks for submitting order requests to external partners. - Submit Order To External Service Provider Task: This task creates automatically a Service Order request to a 3rd party provider SO that hosts the Service Specification - Check external service order fulfillment task: This task Check external partner for Service creations and updates our local inventory of services the service characteristics of remote Service Inventory</p>"},{"location":"architecture/osom/#fetch-partner-services-process","title":"Fetch Partner Services Process","text":"<p>Every 2 minutes the \"fetchPartnerServicesProcess\" process is executed checking remote Partner Organizations for changes in the published catalogues. The Fetch and Update External Partner Services Task is executed in paralle l for each Partner Organization </p>"},{"location":"architecture/osom/#local-service-orchestration-process","title":"Local Service Orchestration Process","text":"<p>This process handles automatically services that need to be further orchestrated or processed by OSOM. For example, for a CFS Bundled service we create such automated service instances that just aggregate the underlying services. </p>"},{"location":"architecture/tmfapi/","title":"TMF OpenAPI specification","text":"<p>PLease check the complete specification here.</p>"},{"location":"architecture/tmfapi/#api-interaction","title":"API interaction","text":""},{"location":"architecture/tmfapi/#oauth-token","title":"OAuth token","text":"<p>See oauth</p>"},{"location":"architecture/tmfapi/#request-a-protected-api-resource","title":"Request a protected API resource","text":"<p>Example: Get all Service Catalogs (check the <code>Authorization:Bearer</code> to be correct)</p> <pre><code>curl -H \"Authorization:Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX25hbWUiOiJhZG1pbiIsInNjb3BlIjpbIm9wZW5hcGkiLCJhZG1pbiIsInJlYWQiLCJ3cml0ZSJdLCJvcmdhbml6YXRpb24iOiJteW9yZ2FuaXp0aW9uIiwiZXhwIjoxNTc4NTA1MDcyLCJhdXRob3JpdGllcyI6WyJST0xFX01FTlRPUiIsIlJPTEVfQURNSU4iXSwianRpIjoiMTFlNGYxYTUtZDY0Ny00YzA1LWE0ZGMtYWFhYzUyMjk4YzMwIiwiY2xpZW50X2lkIjoib3NhcGlXZWJDbGllbnRJZE91dCJ9.gm7cKdusDrdMRkxEiFU5sENKGRC1xwVj2SgPRmE9xxx\"  -H  \"accept: application/json;charset=utf-8\" -X GET \"http://portal.openslice.eu/tmf-api/serviceCatalogManagement/v4/serviceCatalog\"\n</code></pre> <p>response:</p> <pre><code>[\n  {\n    \"uuid\": \"9e186cd5-b2b2-4a06-b1d6-895720193bc9\",\n    \"lastUpdate\": \"2019-12-19T10:45:55Z\",\n    \"@baseType\": \"BaseEntity\",\n    \"@schemaLocation\": null,\n    \"@type\": \"ServiceCatalog\",\n    \"href\": null,\n    \"name\": \"Example Facility Services\",\n    \"description\": \"Example Facility Services\",\n    \"lifecycleStatus\": \"Active\",\n    \"version\": \"1.0\",\n    \"validFor\": {\n      \"endDateTime\": \"2039-11-20T23:07:21Z\",\n      \"startDateTime\": \"2019-11-20T23:07:21Z\"\n    },\n    \"relatedParty\": null,\n    \"id\": \"9e186cd5-b2b2-4a06-b1d6-895720193bc9\",\n    \"category\": [\n      {\n        \"@baseType\": \"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\n        \"@schemaLocation\": null,\n        \"@type\": \"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\n        \"href\": null,\n        \"name\": \"eMBB\",\n        \"@referredType\": null,\n        \"id\": \"ef2c90dd-b65e-4a9f-a9c3-427c9fb0219b\"\n      },\n      {\n        \"@baseType\": \"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\n        \"@schemaLocation\": null,\n        \"@type\": \"org.etsi.osl.tmf.scm633.model.ServiceCategoryRef\",\n        \"href\": null,\n        \"name\": \"Generic Services\",\n        \"@referredType\": null,\n        \"id\": \"98b9adf1-a1d6-4165-855f-153ddc2131b1\"\n      }\n    ]\n  }\n]\n</code></pre>"},{"location":"architecture/cridge/cridge_introduction/","title":"CRIDGE: A Service to manage Custom Resources in a Kubernetes Cluster","text":""},{"location":"architecture/cridge/cridge_introduction/#introduction","title":"Introduction","text":"<p>Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact though the Kubernetes API and it has a set of objects ready for use out of the box. Custom Resource Definitions (CRDs) is a way that allows to manage things other than Kubernetes itself and allows to create our own objects The use of CRDs makes the possibilities of Kubernetes management almost limitless. You can extend the base Kubernetes API with any object you like using CRDs.</p> <p>CRIDGE is a service designed to create and manage Custom Resources (CRs) based on Custom Resource Definitions (CRDs) installed on a Kubernetes cluster. By leveraging OpenSlice (OSL), CRIDGE enables seamless integration and orchestration within Kubernetes environments, utilizing Kubernetes APIs via the TMF APIs and models. Thus, more or less, OSL exposes Kubernetes APIs as TMF APIs and models.</p> <p>By allowing the design and lifecycle management of services/resources that expose CRDs/CRs in a Kubernetes cluster via the TMF APIs, OSL can be used in many complex scenarios now involing resources from multiple domains. </p> <p>Pros, in a nutshell:</p> <ol> <li> <p>CRIDGE service allows OSL to:</p> <ul> <li>Create and manage Custom Resources (CRs) using installed CRDs on a target Kubernetes cluster.</li> <li>Facilitate complex orchestration scenarios by wrapping Kubernetes APIs as TMF APIs and models.</li> <li>Handles connectivity to a Kubernetes cluster and manages the lifecycle of CRDs</li> <li>Wraps the Kubernetes API, receives and provides resources towards other OSL services via the service bus</li> </ul> </li> <li> <p>Enabling Loose Coupling and Orchestration</p> <ul> <li>Language Flexibility: Developers can write CRDs in any language and expose them via the Kubernetes APIs. OSL will reuse these CRDs, enhancing flexibility and integration capabilities.</li> <li>Familiar Deployment: Developers can create and deploy applications using familiar tools such as Helm charts, simplifying the process and reducing the learning curve.</li> </ul> </li> <li> <p>Ecosystem Reusability</p> <ul> <li>CRIDGE capitalizes on the extensive Kubernetes ecosystem, particularly focusing on operators (CRDs).</li> <li>Key repositories and hubs such as artifacthub.io and Operatorhub.io can be utilized for finding and deploying operators.</li> </ul> </li> <li> <p>Service Catalog Exposure and Deployment</p> <p>OSL can expose CRs in service catalogs, facilitating their deployment in complex scenarios. These scenarios may include service bundles that involve multiple systems, such as RAN controllers or other Kubernetes clusters, providing a robust and versatile deployment framework.</p> </li> </ol> <p>Why the CRIDGE name? We wanted to build a service that maps TMF models to CRDs; a kind of a CRD to TMF bridge. Therefore CRIDGE was born.</p>"},{"location":"architecture/cridge/cridge_introduction/#approach","title":"Approach","text":"<p>OSL in general is responsible for exposing Service Specifications which are ready to be ordered and orchestrated, through TMFORUM Open APIs as defined in the OSL Service Spec Catalog. Usually for a service specification a corresponding (one or more) Resource Specification (resourceSpecificationReference) is registered in the OSL Resource Spec Catalog.</p> <p>The following image illustrates the approach.</p> <p></p> <ol> <li>A CRD in a cluster will be mapped in TMF model as a Resource specification and therefore can be exposed as a service specification in a catalog</li> <li>Service Orders can be created for this service specification. The OSL Orchestrator (OSOM) will manage the lifecycle of the Service Order.</li> <li>OSOM creates a Resource in OSL Resource inventory and requests (via CRIDGE) a new Custom Resource (CR) in the target cluster<ul> <li>The resource is created in a specific namespace (for example the UUID of the Service Order)</li> <li>A CR in a cluster will be mapped in TMF model as a Resource in the resource Inventory</li> <li>Other related resources created by the CRD Controller within the namespace are automatically created in OSL Resource Inventory under the same Service Order</li> </ul> </li> </ol> <p></p> <p>The provided image illustrates the architecture and workflow of the CRIDGE service, showing how it interacts with other components within a Kubernetes (K8s) cluster. </p> <p>Following, there is an explanation of the key components and flow in the diagram:</p> <ul> <li>Other OSL Services: This box represents various OSL services such as Service Spec Catalogue, Resource Spec Catalogue, Service Inventory, Resource Inventory, and OSOM (OpenSlice Service Orchestration and Management).</li> <li>Service Bus: This is the communication layer that facilitates interaction between the CRIDGE service and other OSL services.</li> <li>CRIDGE: CRIDGE acts as a bridge that converts CRDs (Custom Resource Definitions) to TMF (TM Forum) APIs and models. It enables the creation and management of Custom Resources (CRs) in the Kubernetes cluster.</li> <li> <p>K8s API: The Kubernetes API server, which is the central control point for managing the Kubernetes cluster. CRIDGE interacts with the K8s API to manage CRDs and CRs.</p> <p>CRD (Custom Resource Definition): A CRD is a way to define custom resources in Kubernetes cluster-wise. It allows the extension of Kubernetes API to create and manage user-defined resources. Example : <pre><code>apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n    name: myresource.example.com\n</code></pre></p> </li> <li> <p>Namespaces: Kubernetes namespaces provide a way to partition resources within a cluster. The diagram shows that multiple namespaces (nsxx, nsyy, nsz) can be managed by CRIDGE.</p> <p>CR (Custom Resource):  A CR is an instance of a CRD. It represents the actual custom resource that is managed within the Kubernetes cluster. Example shown in different namespaces: <pre><code>apiVersion: example.com/v1\nkind: Myresource\nmetadata:\n    name: example_resource_1\n</code></pre></p> </li> </ul> <p>In a nutchell:</p> <ul> <li>Various OSL services use the Service Bus to communicate with CRIDGE.</li> <li>CRIDGE converts requests towards Kubernetes API and vice-versa, facilitating the integration of custom resources with other OSL services.</li> <li>CRDs are defined and managed through the K8s API. The example CRD is named myresource.example.com.</li> <li>Deploying CRs in Namespaces: Custom resources defined by the CRD are created and managed within different namespaces in the Kubernetes cluster. Each namespace can have its own instances of the custom resources.</li> </ul> <p>The example CRD myresource.example.com allows the creation of custom resources of type Myresource.</p> <p>Instances of Myresource are created in various namespaces, each with unique names like example_resource_1.</p>"},{"location":"architecture/cridge/cridge_introduction/#mupliple-clusters-management","title":"Mupliple Clusters Management","text":"<p>A CRIDGE service is usually responsible for managing one cluster. In the following diagram we show how it can be used for managing multiple clusters:</p> <p></p> <p>We assume that there is an OSL Management cluster that OSL is installed. CRIDGE is also installed there if we would like to manage resources in the same management cluster. </p> <ul> <li>Each CRIDGE service has its own configuration to connect to target cluster</li> <li>Each CRIDGE can be installed either in the managed cluster or at the remote clusters. Connectivity is handled via the service bus.</li> <li>Important: Each CRIDGE has a different context and API endpoints. This is used to request CRDs on a different cluster.</li> </ul> <p>A CRD has a globally unique name for example mycrd.example.com. So we need to somehow identify also the different cluster.</p>"},{"location":"architecture/cridge/cridge_introduction/#awareness-for-crds-and-crs-in-a-cluster","title":"Awareness for CRDs and CRs in a Cluster","text":"<p>CRDs and CRs can appear (disappear) or change status at any time in a cluster. OSL Resource Inventory need to be aware of these events.</p> <p>The implemented synchronization process is explained by the following diagram:</p> <p></p> <p>WatcherService is executed when the CRIDGE service application starts (see onApplicationEvent). Specifically:</p> <ul> <li>KubernetesClientResource is a class that wraps fabric8\u2019s KubernetesClient<ul> <li>This fabric8 KubernetesClient is initialized from the kubeconf and default context of the machine that runs CRIDGE</li> </ul> </li> <li>On CRIDGE start-up we try to register this cluster and context to OSL catalogs.<ul> <li>See registerKubernetesClientInOSLResource method which registers the KubernetesContextDefinition in Resource Inventory as a LogicalResource via  createOrUpdateResourceByNameCategoryVersion method</li> </ul> </li> <li>After the creation (or update) of this cluster as a Resource in OSL we proceed to create SharedIndexInformers for CustomResourceDefinition objects</li> <li>In this way, CRIDGE is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL (CRIDGE)</li> <li>The SharedIndexInformer events notify CRIDGE, which is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL (CRIDGE)<ul> <li>NOTE: The ADD event is raised every time also we run CRIDGE. Therefore, on ADD we do the method to createORupdate resource specifications and resources</li> </ul> </li> <li>On ADD event:<ul> <li>The CRD is transformed to OSL Kubernetes domain model: method kubernetesClientResource.KubernetesCRD2OpensliceCRD</li> <li>Then the OSL Kubernetes domain model is:<ul> <li>Transformed to Resource Specification and is stored to catalog (see createOrUpdateResourceSpecByNameCategoryVersion)</li> <li>Transformed to Resource and is stored to catalog (see createOrUpdateResourceByNameCategoryVersion)</li> </ul> </li> <li>Conceptually while a CRD is a new resource located in the Kubernetes cluster resource, it is transformed also as a Resource Specification (a high-level entity) which is ready to be reused as an entity to other scenarios. The same concept as in Kubernetes where a CRD is a definition ready to be used for instantiating resources of this CRD</li> <li>Then for this CRD a Watcher is added for all Resources of this Kind (fabric8\u2019s GenericKubernetesResource entity)  </li> <li>When we have a newly added/updated/deleted resource of a certain CRD the method updateGenericKubernetesResourceInOSLCatalog is called for this object  (fabric8\u2019s GenericKubernetesResource entity)</li> <li>We examine if the resource has label org.etsi.osl.resourceId<ul> <li>This label is added by OSOM during service orders to correlate K8S requested resources with resources in inventory</li> </ul> </li> <li>If the label exists, we update the resource by ID updateResourceById</li> <li>Else a resource is created in catalog</li> </ul> </li> </ul>"},{"location":"architecture/cridge/cridge_introduction/#exposure-of-crds-as-service-specifications","title":"Exposure of CRDs as Service Specifications","text":"<p>See Exposing Kubernetes Resources section for ways to design services around CRDs.</p>"},{"location":"architecture/cridge/cridge_introduction/#service-orchestration-and-crdscrs","title":"Service Orchestration and CRDs/CRs","text":"<p>OSOM checks the presence of attribute _CR_SPEC at the RFS to make a request for a CR deployment</p> <ul> <li>_CR_SPEC is a JSON or YAML string that is used for the request<ul> <li>It is similar to what one will do with e.g. a kubectl apply</li> <li>There are tools to translate a yaml file to a json</li> </ul> </li> </ul> <p>LCM rules can be used to change attributes of this yaml/json file, before sending this for orchestration</p> <p>However, the following issue needs to be solved: How to map the CR lifecycle that is defined in the CRD with the TMF resource Lifecycle?</p> <p>For this We introduced the following characteristics:  </p> <ul> <li>_CR_CHECK_FIELD</li> <li>CR_CHECKVAL_STANDBY</li> <li>_CR_CHECKVAL_ALARM </li> <li>_CR_CHECKVAL_AVAILABLE</li> <li>_CR_CHECKVAL_RESERVED</li> <li>_CR_CHECKVAL_UNKNOWN</li> <li>_CR_CHECKVAL_SUSPENDED</li> </ul> <p>OSOM sends to CRIDGE a message with the following information:</p> <ul> <li>currentContextCluster: current context of cluster </li> <li>clusterMasterURL: current master url of the cluster </li> <li>org.etsi.osl.serviceId: This is the related service id that the created resource has a reference </li> <li>org.etsi.osl.resourceId: This is the related resource id that the created CR will wrap and reference.  </li> <li>org.etsi.osl.prefixName: we need to add a short prefix (default is cr) to various places. For example in K8s cannot start with a number </li> <li>org.etsi.osl.serviceOrderId: the related service order id of this deployment request </li> <li>org.etsi.osl.namespace: requested namespace name </li> <li>org.etsi.osl.statusCheckFieldName: The name of the field that is needed to be monitored in order to monitor the status of the service and translate it to TMF resource statys (RESERVED AVAILABLE, etc) </li> <li>org.etsi.osl.statusCheckValueStandby: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state STANDBY (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>org.etsi.osl.statusCheckValueAlarm: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state ALARMS (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>org.etsi.osl.statusCheckValueAvailable: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state AVAILABLE (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>org.etsi.osl.statusCheckValueReserved: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state RESERVED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>org.etsi.osl.statusCheckValueUnknown: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state UNKNOWN (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li> <p>org.etsi.osl.statusCheckValueSuspended: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state SUSPENDED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </p> </li> <li> <p>Parameters:</p> <ul> <li>aService reference to the service that the resource and the CR belongs to</li> <li>resourceCR reference the equivalent resource in TMF repo of the target CR. One to one mapping</li> <li>orderId related service order ID</li> <li>startDate start date of the deployment (not used currently)</li> <li>endDate end date of the deployment (not used currently)</li> <li>_CR_SPEC the spec that is sent to CRIDGE (in json)</li> </ul> </li> <li> <p>Returns:</p> <ul> <li>a string response from CRIDGE. It might return \"OK\" if everything is ok. \"SEE OTHER\" if there are multiple CRIDGEs then some other CRIDGE instance will handle the request for the equivalent cluster. Any other response is handled as error</li> </ul> </li> <li> <p>CRIDGE receives the message and creates according to the labels the necessary CR</p> </li> <li>It monitors the created resource(s) in namespace (see the Sequence Diagram in previous images)</li> <li>It monitors and tries to figure out and map the Status of the CR to the TMF Status according to the provided org.etsi.osl.statusCheck* labels</li> <li>It sends to the message bus the current resource for creation or update to the TMF service inventory</li> </ul>"},{"location":"architecture/cridge/cridge_introduction/#deployment-of-a-new-cr-based-on-a-crd","title":"Deployment of a new CR based on a CRD","text":"<p>The implemented process to deploy a CR is explained by the following diagram:</p> <p></p> <ul> <li>A message arrives to deploy a CR<ul> <li>The call examines if this CRIDGE service can handle the request (based on context and masterURL)</li> </ul> </li> <li>There are headers received and a _CR_SPEC in json</li> <li>The _CR_SPEC is unmarshaled as GenericKubernetesResource</li> <li>Headers are in format org.etsi.osl.*</li> <li>These headers are injected as labels (see Service Orchestration section)</li> <li>A  namespace is created for this resource</li> <li>Watchers are created for this namespace for e.g. new secrets, config maps etc, so that they can be available back as resources to the Inventory of OSL</li> </ul>"},{"location":"architecture/cridge/cridge_introduction/#probe-further","title":"Probe further","text":"<ul> <li>See examples of exposing Kubernetes Operators as a Service via OpenSlice:<ul> <li>Offering \"Calculator as a Service\"</li> <li>Offering \"Helm installation as a Service\" (Jenkins example)</li> </ul> </li> </ul>"},{"location":"contributing/developing/","title":"Developing","text":"<p>OpenSlice backend services are mainly implemented with Java 17 or above and Spring Boot.</p> <p>OpenSlice uses various subsystems and depending on the module would you like to work, other subsystems must be present (you can disable them though in the code, e.g. at docker-compose.yaml file).</p>"},{"location":"contributing/developing/#general-requirements","title":"General requirements","text":"<ul> <li>Docker should be installed in your development environment</li> <li>Run the core subsystems (see related section)</li> </ul>"},{"location":"contributing/developing/#versionrelease-management","title":"Version/release management","text":"<p>Check this nice article on how we develop and release versions.</p> <p>We develop in the <code>develop</code> branch and follow a issue driven development model.</p>"},{"location":"contributing/developing/#getting-started","title":"Getting Started","text":"<p>To get the latest development branch, execute:</p> <pre><code>wget https://labs.etsi.org/rep/osl/code/org.etsi.osl.main/-/raw/develop/compose/deploy.sh \nsudo ./deploy.sh develop  #[or replace develop with another branch name]\n</code></pre> <p>You may follow the installation process, as described at <code>develop</code> tagged documentation.</p>"},{"location":"contributing/developing/#contribute-to-a-subsystem","title":"Contribute to a subsystem","text":"<p>To work on a specific subsystem e.g. <code>org.etsi.osl.tmf.api</code>, you must:</p> <ol> <li> <p>Deploy only the core necessary subsystems through:</p> <pre><code>sudo docker compose --profile dev down;sudo docker compose --profile dev up -d --build\n</code></pre> <p>Note --profile dev that will only deploy the core dependency subsystems, instead   of the whole OpenSlice.</p> <p>OR</p> <p>Alternatively, comment out the respective container from the <code>docker-compose.yaml</code> file, so as to deploy the whole OpenSlice, except the subsystem you want to work on, following the provided installation steps.</p> </li> <li> <p>Clone the respective repository, for example: https://labs.etsi.org/rep/osl/code/org.etsi.osl.tmf.api/-/tree/develop (the clone URLs are available at this link)</p> </li> <li> <p>Code! \ud83d\ude0a</p> </li> </ol>"},{"location":"contributing/developing/#examples-of-developing-on-specific-subsystems","title":"Examples of developing on specific subsystems","text":""},{"location":"contributing/developing/#vnfnsd-catalog-management-and-nsd-deployment-api-service","title":"VNF/NSD Catalog Management and NSD Deployment API service","text":"<p>You need to:</p> <ol> <li> <p>Clone the repository: <code>https://labs.etsi.org/rep/osl/code/org.etsi.osl.portal.api/-/tree/develop</code></p> </li> <li> <p>Check the docker-compose.yml file. Default port is 13080. Check specifically the datasource username/password, server port.</p> </li> <li> <p>Make sure that the core subsystems are up and running.</p> <p>Execute it with: <pre><code>mvn spring-boot:run\n</code></pre></p> </li> </ol> <p>For verification, Swagger API of the service is at <code>http://localhost:13000/osapi/swagger-ui/index.html</code>.   There, you may try there various REST actions and authenticate via the OAuth server without the use of the UI.</p>"},{"location":"contributing/developing/#vnfnsd-catalog-management-and-nsd-deployment-web-ui-service","title":"VNF/NSD Catalog Management and NSD Deployment WEB UI service","text":"<p>The Web UI is written in <code>AngularJS</code>. To run it:</p> <ol> <li> <p>Clone the repository: https://labs.etsi.org/rep/osl/code/org.etsi.osl.portal.web/-/tree/develop (the clone URLs are available at that link)</p> <p>By default the project <code>org.etsi.osl.portal.api</code> exposes the folder <code>../org.etsi.osl.portal.web/src/</code> in a folder testweb (check class <code>MvcConfig.java</code> in <code>org.etsi.osl.portal.api</code>) for development. (In production nginx is used)</p> </li> <li> <p>Point your browser to <code>http://localhost:13000/osapi/testweb/index.html/</code></p> </li> </ol>"},{"location":"contributing/developing/#reach-us","title":"Reach us","text":"<p>We are available on different channels.</p>"},{"location":"contributing/developing/#slack","title":"Slack","text":"<p>Feel free to join OpenSlice's Slack workspace for any development oriented questions (preferred).</p>"},{"location":"contributing/developing/#e-mail","title":"E-mail","text":"<p>If you are a member or a participant, you can also reach out on the <code>OSL_TECH</code> mailing list.</p> <p>For administrative support, contact <code>SDGsupport@etsi.org</code>.</p>"},{"location":"contributing/developing/#wishlist","title":"Wishlist","text":"<p>Check also our wishlist of new features. You can add your own.</p> <p>See wishlist.</p>"},{"location":"contributing/documenting/","title":"Documenting","text":"<p>OpenSlice's documentation runs on MkDocs.</p>"},{"location":"contributing/documenting/#eligibility","title":"Eligibility","text":"<p>Documenting OpenSlice is limited to active contributors. So, if you:</p> <ol> <li> <p>are an active member or participant;</p> </li> <li> <p>wish to contribute to it;</p> </li> <li> <p>you're ready!</p> </li> </ol>"},{"location":"contributing/documenting/#documentation-system-and-structure","title":"Documentation System and Structure","text":"<p>MkDocs is a fast and simple static site generator that's geared towards building project documentation. Documentation source files are written in <code>Markdown</code>, and configured with a single <code>YAML</code> configuration file. Start by reading the introductory tutorial, then check the User Guide for more information.</p>"},{"location":"contributing/documenting/#getting-started","title":"Getting Started","text":"<p>To contribute to OpenSlice's documentation, you need to follow those easy steps:</p> <p>1) Clone the Documentation repository with:</p> <pre><code>git clone https://labs.etsi.org/rep/osl/documentation.git\n</code></pre> <p>2) Checkout the develop branch (incoming contributions are only accepted to the develop branch):</p> <pre><code>cd ./documentation\ngit checkout develop\n</code></pre> <p>3) Setup a local mkdocs server, using a virtual environment</p> <pre><code>python3 -m venv venv\nsource venv/bin/activate\npython -m pip install mkdocs\npython -m pip install mkdocs-material\npython -m pip install mike\n</code></pre> <p>4) Wait for all downloads to finish and start the mkdocs server</p> <pre><code>mkdocs serve\n</code></pre> <p>5) Document (and commit)! \ud83d\ude0a</p> <p>Before committing, you should make sure that the local mkdcocs server's terminal is not producing any INFO/WARNING message regarding your contributions.</p> <p>The documentation website supports branches, so your accepted changes will be reflected to the develop branch which becomes the Release branch after each corresponding cycle.</p>"},{"location":"contributing/wishlist/","title":"TMF API","text":"<ul> <li>RBAC of API endpoints</li> <li>TMF Ticketing API support</li> <li>TMF Product</li> <li>HATEOAS integration</li> <li>Select Specs that can be exposed to partners (maybe with a characteristic?)</li> </ul>"},{"location":"contributing/wishlist/#resource-management","title":"resource management","text":"<ul> <li>Resource Activation and Configuration API TMF702 (NEW) (https://projects.tmforum.org/wiki/pages/viewpage.action?pageId=128855518)</li> </ul>"},{"location":"contributing/wishlist/#tmf-web","title":"TMF WEB","text":""},{"location":"contributing/wishlist/#osom","title":"OSOM","text":"<ul> <li>Actions on service order item and acknowledge order status will define the lifecycle</li> <li>action shutdown on specific date for service order</li> <li>action edit on service order item</li> </ul>"},{"location":"contributing/wishlist/#dynamic-attribute-transformation","title":"Dynamic attribute transformation","text":"<ul> <li>DTM decision tables support per Service Specification</li> <li>Schedule Termination of completed order on running services</li> </ul>"},{"location":"contributing/wishlist/#nfvo-connectivity","title":"NFVO connectivity","text":""},{"location":"contributing/wishlist/#osm-client","title":"OSM Client","text":"<ul> <li>VNF/NSD config params Day 2</li> <li>NSD Lifcecylce</li> <li>NST support</li> </ul>"},{"location":"contributing/wishlist/#nfv-api","title":"NFV API","text":""},{"location":"contributing/wishlist/#nfv-web","title":"NFV WEB","text":""},{"location":"contributing/wishlist/#3rd-party-connectivity","title":"3rd party connectivity","text":""},{"location":"contributing/wishlist/#flowone-connector","title":"FlowOne connector","text":""},{"location":"contributing/wishlist/#openstack-connector","title":"Openstack connector","text":""},{"location":"contributing/wishlist/#kubernetes-connector","title":"Kubernetes connector","text":""},{"location":"contributing/wishlist/#centrallog","title":"CentralLog","text":"<ul> <li>Events from TMF to be written to Central Log</li> </ul>"},{"location":"contributing/wishlist/#authentication","title":"Authentication","text":""},{"location":"getting_started/portals/","title":"OpenSlice Portals","text":"<p>OpenSlice comprises of a web landing page (See Demo) that navigates to the respective Portals:</p> <ul> <li>Services Portal (See Demo)</li> <li>NFV Portal (See Demo)</li> <li>Products Portal (See Demo)</li> <li>Testing Portal (See Demo)</li> <li>Resources Portal (See Demo)</li> </ul> <p>Following you may find the scope each portal focuses on and the main TMF APIs it supports:</p> <p>Services Portal is a designated portal for the: - Service Designer - To design Customer Facing Services as bundles of Resource Facing Services that map to specific Resourses (e.g. NFV, Testing, General Resources). Then, it is charged with the designed Services' exposure to public Service Catalogs. - Service Customer - To browse the public Service Catalogs and order the offered Services. The fulfilment process of the Service Order is also captured and the final deployed Services are exposed to the Customer.</p> <p>NFV Portal is a designated portal for the: Indicatively, the portal can be used to: - Register new a new MANO provider (e.g. OSM) - Synchronize the onboarded VNF/NS packages, and the VIMs of the registered MANO provider - Onboard/Delete VNF/NS packages on specific MANO provider - Deploy a NS to a target MANO provider</p> <p>More information can be found at NFV Services.</p> <p>Resources Portal is a designated portal for the: - Resource Administrator - To view the available Resources that are being synchronized from the underlying infrastructure. -  Products Portal is a designated portal for the: - Product Designer - To design Products as bundles of available Services. Then, it is charged with the designed Products' exposure to public Product Catalogs. - Product Customer - To browse the public Product Catalogs and navigate to the respective offered Services.</p> <p>Testing Portal is a designated portal for the: - Testing Designer - To design Tests and provide the testing scripts as attachments to the latter. The Tests can be imported as Services at the Services Portal, and can be included in a Service Bundle.</p> TMF620 TMF632 TMF633 TMF634 TMF638 TMF639 TMF640 TMF641 TMF642 TMF653 TMF685 Services Portal x x x x x x Products Portal x x Testing Portal x x Resources Portal x x x x <p>The NFV Portal uses a proprietary API.</p>"},{"location":"getting_started/configuration/config_intro/","title":"Configuring and managing OpenSlice","text":"<p>Intended Audience: OpenSlice Administrators</p> <p>This section provides information on how to configure and manage different aspect of OpenSlice while in operation. For example:</p> <ul> <li>Manage user roles and access in Keycloak</li> <li>Configure/Manage NFVOs</li> <li>Advanced configuration scenarios</li> </ul>"},{"location":"getting_started/configuration/consuming_services_from_external_partners/","title":"Consuming Services From External Partner Organizations","text":"<p>A typical deployment across domains, involves today some typical components: i) an OSS/BSS to allow customers access the service catalog and perform service orders, ii) a Service Orchestrator (SO) component for executing the service order workflow, as well as iii) a Network Functions Virtualization Orchestrator (NFVO) for configuring the iv) network resources.</p> <p>TMF Open APIs are introduced not only for exposing catalogues and accepting service orders, but also implementing the East-West interfaces between the domains, fulfilling also the LSO requirements as introduced by MEF.</p> <p>The following figure shows how openslice could be used in such scenarios:</p> <p></p> <p>In Openslice we can consume services from 3rd parties via Open APIs.</p> <p>We use the TMF 632 Party Management model to specify Organizations that we can exchange items and other information such as:</p> <ul> <li>Import Service Specifications</li> <li>Create a Service Order</li> <li>Use the Service Inventory to query the status of the service ordered to the external partner organization</li> </ul>"},{"location":"getting_started/configuration/consuming_services_from_external_partners/#define-an-organization-as-3rd-party-to-consume-services-east-west","title":"Define an Organization as 3rd party to consume services East-West","text":"<p>An organization must have the following characteristics in openslice catalog, like for example:</p> <p>\"EXTERNAL_TMFAPI_BASEURL\", \"http://portal.openslice.eu\"</p> <p>\"EXTERNAL_TMFAPI_CLIENTREGISTRATIONID\", \"authOpensliceProvider\"</p> <p>\"EXTERNAL_TMFAPI_OAUTH2CLIENTID\", \"osapiWebClientId\"</p> <p>\"EXTERNAL_TMFAPI_OAUTH2CLIENTSECRET\", \"secret\"</p> <p>\"EXTERNAL_TMFAPI_OAUTH2SCOPES\", scopes</p> <p>\"EXTERNAL_TMFAPI_OAUTH2TOKENURI\", \"http://portal.openslice.eu/osapi-oauth-server/oauth/token\"</p> <p>\"EXTERNAL_TMFAPI_USERNAME\", \"admin\"</p> <p>\"EXTERNAL_TMFAPI_PASSWORD\", \"openslice\"</p> <p>\"EXTERNAL_TMFAPI_SERVICE_CATALOG_URLS\" = \"/tmf-api/serviceCatalogManagement/v4/serviceSpecification?type=CustomerFacingServiceSpecification\" (this is optional, fetch a list of service specs it will be relative with the BASEURL. If the url is empty then no specs will be fetched, the EXTERNAL_TMFAPI_SERVICE_CATEGORY_URLS might be used)</p> <p>\"EXTERNAL_TMFAPI_SERVICE_CATEGORY_URLS\" = \"/tmf-api/serviceCatalogManagement/v4/serviceCategory/{categoryid}\" (this example will fetch all specs in a category. You may define comma separated URLs of categories API URL . This will  fetch  specifications of every defined category. If you want only one specific category put for example the uuid only of one category: \"/tmf-api/serviceCatalogManagement/v4/serviceCategory/bda02821-bc4d-4bd6-b64b-d9c2aa5f8e6d\". multiple urls should be \"/tmf-api/serviceCatalogManagement/v4/serviceCategory/bda02821-bc4d-4bd6-b64b-d9c2aa5f8e6d,/tmf-api/serviceCatalogManagement/v4/serviceCategory/9b6d8bf3-abd2-43c4-8154-c8c6fe5545b2\")</p> <p>\"EXTERNAL_TMFAPI_SERVICE_SPEC\" = \"/tmf-api/serviceCatalogManagement/v4/serviceSpecification\"</p> <p>\"EXTERNAL_TMFAPI_SERVICE_ORDER_URLS\"= \"/test/v1/serviceorder\" (this is optional)</p> <p>An example Organization defined example in json: <pre><code>{\n  \"uuid\": \"1a09a8b5-6bd5-444b-b0b9-a73c69eb42ae\",\n  \"@baseType\": \"BaseEntity\",\n  \"@schemaLocation\": null,\n  \"@type\": null,\n  \"href\": null,\n  \"name\": \"Openslice.io\",\n  \"id\": \"1a09a8b5-6bd5-444b-b0b9-a73c69eb42ae\",\n  \"isHeadOffice\": null,\n  \"isLegalEntity\": null,\n  \"nameType\": null,\n  \"organizationType\": null,\n  \"tradingName\": null,\n  \"contactMedium\": [],\n  \"creditRating\": [],\n  \"existsDuring\": null,\n  \"externalReference\": [],\n  \"organizationChildRelationship\": [],\n  \"organizationIdentification\": [],\n  \"organizationParentRelationship\": null,\n  \"otherName\": [],\n  \"partyCharacteristic\": [\n    {\n      \"uuid\": \"3a2f7221-e0a2-4a6b-88d1-534c8e1963f6\",\n      \"@baseType\": \"BaseEntity\",\n      \"@schemaLocation\": null,\n      \"@type\": null,\n      \"href\": null,\n      \"name\": \"EXTERNAL_TMFAPI_CLIENTREGISTRATIONID\",\n      \"valueType\": null,\n      \"value\": {\n        \"value\": \"authOpensliceProvider\",\n        \"alias\": null\n      }\n    },\n    {\n      \"uuid\": \"c24bb527-f178-4d38-9b93-2027c1732876\",\n      \"@baseType\": \"BaseEntity\",\n      \"@schemaLocation\": null,\n      \"@type\": null,\n      \"href\": null,\n      \"name\": \"EXTERNAL_TMFAPI_USERNAME\",\n      \"valueType\": null,\n      \"value\": {\n        \"value\": \"admin\",\n        \"alias\": null\n      }\n    },\n    {\n      \"uuid\": \"27e45df8-414b-44c6-a5d5-3f064e2cfd3b\",\n      \"@baseType\": \"BaseEntity\",\n      \"@schemaLocation\": null,\n      \"@type\": null,\n      \"href\": null,\n      \"name\": \"EXTERNAL_TMFAPI_PASSWORD\",\n      \"valueType\": null,\n      \"value\": {\n        \"value\": \"openslice\",\n        \"alias\": null\n      }\n    },\n    {\n      \"uuid\": \"e0e470b8-6024-4014-8a18-2333e5465ce1\",\n      \"@baseType\": \"BaseEntity\",\n      \"@schemaLocation\": null,\n      \"@type\": null,\n      \"href\": null,\n      \"name\": \"EXTERNAL_TMFAPI_OAUTH2CLIENTSECRET\",\n      \"valueType\": null,\n      \"value\": {\n        \"value\": \"secret\",\n        \"alias\": null\n      }\n    },\n    {\n      \"uuid\": \"3e0de762-ac80-4c1e-a0a1-f265ff0899b4\",\n      \"@baseType\": \"BaseEntity\",\n      \"@schemaLocation\": null,\n      \"@type\": null,\n      \"href\": null,\n      \"name\": \"EXTERNAL_TMFAPI_OAUTH2SCOPES\",\n      \"valueType\": null,\n      \"value\": {\n        \"value\": \"admin;read\",\n        \"alias\": null\n      }\n    },\n    {\n      \"uuid\": \"0bbb8314-f7f2-420d-9fed-ba054b15f886\",\n      \"@baseType\": \"BaseEntity\",\n      \"@schemaLocation\": null,\n      \"@type\": null,\n      \"href\": null,\n      \"name\": \"EXTERNAL_TMFAPI_OAUTH2TOKENURI\",\n      \"valueType\": null,\n      \"value\": {\n        \"value\": \"http://portal.openslice.eu/osapi-oauth-server/oauth/token\",\n        \"alias\": null\n      }\n    },\n    {\n      \"uuid\": \"3a567de4-79eb-4006-a500-3e5229b44175\",\n      \"@baseType\": \"BaseEntity\",\n      \"@schemaLocation\": null,\n      \"@type\": null,\n      \"href\": null,\n      \"name\": \"EXTERNAL_TMFAPI_OAUTH2CLIENTID\",\n      \"valueType\": null,\n      \"value\": {\n        \"value\": \"osapiWebClientId\",\n        \"alias\": null\n      }\n    },\n    {\n      \"uuid\": \"6dca729f-dbe1-46b7-89f1-5c4f9fe89d4e\",\n      \"@baseType\": \"BaseEntity\",\n      \"@schemaLocation\": null,\n      \"@type\": null,\n      \"href\": null,\n      \"name\": \"EXTERNAL_TMFAPI_BASEURL\",\n      \"valueType\": null,\n      \"value\": {\n        \"value\": \"http://portal.openslice.eu\",\n        \"alias\": null\n      }\n    }\n  ],\n  \"relatedParty\": [],\n  \"status\": null,\n  \"taxExemptionCertificate\": []\n}\n</code></pre></p>"},{"location":"getting_started/configuration/nfvo_config/","title":"NFV Orchestrator Configuration","text":"<p>Currently we support Open Source MANO version EIGHT/NINE/TEN/ELEVEN/THIRTEEN. Later versions of OSM may also be supported by the existing configuration, as from OSM 9+ the project converged to the SOL005 interface, regarding the NBI, and SOL006 (YANG model), regarding the NFV/NS packaging. Also an implementation of a generic SOL005 interface is supported, but not extensively tested.</p> <p>Configuration of your target(s) NFVOs/MANO services with Openslice is performed through the NFV portal.</p> <ol> <li> <p>Login to {{yourdomain}}/nfvportal/</p> </li> <li> <p>Navigate to Admin &gt; Manage MANO Platforms &gt; Add New MANO Platform, pick one of the supported MANO platform(s), e.g. Name=OSMvTHIRTEEN, Version=OSMvTHIRTEEN and save. You may edit the saved MANO platforms after this.</p> </li> <li> <p>Navigate to Admin &gt; Manage MANO providers &gt; Add New MANO Provider and enter its details:</p> <ul> <li>Name and description of your choice. The selected name will supplement the NFV artifacts of this provider.</li> <li>One of the already defined MANO platforms</li> <li>API URL Endpoint, eg: https://10.10.10.10:9999 (This is the SOL005 NBI endpoint - Note the port 9999)</li> <li>Username, Password and Project of your OSM tenant.</li> </ul> </li> </ol> <p>Check EnabledForONBOARDING, if you want VNF/NS packages uploaded through the UI by the user, to also be automatically ONBOARDED to this MANO (1 step process). If left unchecked, the onboarding process must be performed manually after the VNF/NS package is uploaded to the portal, by the designated UI (2 step process).</p> <p>Check EnabledForSYNC, if you want to support the automatic synchronization of this MANO with OpenSlice. When enabled, the existing VNF/NS packages and VIMs (and any updates on them) of the registered MANO are also reflected to the portal to the respective UIs (Registered VNFs/NSDs and Manage Infrastructures). </p> <p>The synchronization is a continuous process that will confirm that the artifacts are still present in the MANO, updating the status field of the respective artifacts to <code>OSM_PRESENT</code>. If during this process, an artifact is deleted from the MANO, the respective status field will be updated to <code>OSM_MISSING</code>.</p>"},{"location":"getting_started/configuration/role_keycloak_management/","title":"Role management in Keycloak","text":"<p>Intended Audience: OpenSlice Administrators</p> <p>Some initial configuration of Keycloak happens at Installation/Deployment time. Here are some notes regarding user management</p> <p>There are cases that OpenSlice administrators need to configure Keycloak:</p> <ul> <li>Change user roles, e.g. make a Simple user a Service Designer</li> <li>Domain management</li> <li>User Password reset</li> </ul>"},{"location":"getting_started/deployment/docker_compose/","title":"OpenSlice Deployment Guide with Docker Compose","text":"<p>Intended Audience: OpenSlice Administrators</p>"},{"location":"getting_started/deployment/docker_compose/#requirements","title":"Requirements","text":""},{"location":"getting_started/deployment/docker_compose/#hardware-requirements","title":"Hardware requirements:","text":"Minimum Hardware Requirements Recomended Hardware Requirements 4 CPU cores 8 CPU cores 8 GB RAM 16 GB RAM 30 GB storage 50 GB storage"},{"location":"getting_started/deployment/docker_compose/#software-requirements","title":"Software Requirements:","text":"<ul> <li>Docker: A running environment for Docker Compose services</li> </ul>"},{"location":"getting_started/deployment/docker_compose/#preparing-the-environment","title":"Preparing the environment","text":""},{"location":"getting_started/deployment/docker_compose/#1-backup-your-previous-database-if-necessary","title":"1. Backup your previous database if necessary:","text":"<pre><code>sudo docker exec amysql /usr/bin/mysqldump -u root --password=letmein ostmfdb &gt; backup_ostmfdb.sql\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#2-install-docker","title":"2. Install docker","text":"<p>Since July 2023 Docker Compose V1 stopped receiving updates. OpenSlice fully reverted to Compose V2, which is integrated in the Docker installation.</p>"},{"location":"getting_started/deployment/docker_compose/#3-configure-containers-to-properly-resolve-the-dns-of-your-domain-optional","title":"3. Configure containers to properly resolve the DNS of your domain (optional)","text":"<pre><code>sudo nano /etc/docker/daemon.json\n</code></pre> <p>and add:</p> <pre><code>{ \n  \"dns\": [\"8.8.8.8\", \"8.8.4.4\"]\n}\n</code></pre> <p>After editing daemon.json restart docker daemon for the changes to take place</p> <pre><code>sudo systemctl restart docker\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#downloading-the-project","title":"Downloading the project","text":""},{"location":"getting_started/deployment/docker_compose/#1-create-a-new-folder-to-download-the-project","title":"1. Create a new folder to download the project","text":"<pre><code>mkdir openslice\n</code></pre> <pre><code>cd openslice\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#2-download-the-deployment-script","title":"2. Download the deployment script","text":"<p>Download the deployment / environment preparation script</p> <pre><code>wget https://labs.etsi.org/rep/osl/code/org.etsi.osl.main/-/raw/develop/compose/deploy.sh\n</code></pre> <p>Make it executable</p> <pre><code>sudo chmod +x deploy.sh\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#3-run-the-deployment-script","title":"3. Run the deployment script","text":"<p>OpenSlice is a multi repo project. This script selects the same branch for all repositories of the project to pull from.</p> <p>After that it builds the respective jar files locally and installs all the npm packages needed for the UI.</p> <p>If you run the script without selecting a branch the the main branch is going to be selected.</p> <p>We recommend:</p> <ul> <li>main branch for the most stable experience and</li> <li>develop branch for an experience with the latest features (for develop branch installation, it is strongly advisable that you may as well follow the develop documentation)</li> </ul> <pre><code>sudo ./deploy.sh develop #[or replace main with other branch name]\n</code></pre> <p>We recommend running the deploy.sh script with root permissions! In other case, some directories may not be accessible by the project building tools and hinder the smooth installation.</p>"},{"location":"getting_started/deployment/docker_compose/#configure-docker-compose-services","title":"Configure Docker Compose services","text":""},{"location":"getting_started/deployment/docker_compose/#1-create-configuration-specific-docker-compose-file-from-the-template","title":"1. Create configuration specific Docker Compose file from the template","text":"<pre><code>cd org.etsi.osl.main/compose/\n</code></pre> <pre><code>sudo cp docker-compose.yaml.configure docker-compose.yaml\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#2-configure-mysql-portal-container-optional","title":"2. Configure mysql-portal container (optional)","text":"<ol> <li>In folder <code>org.etsi.osl.main/compose/mysql-init</code> edit the file <code>01-databases.sql</code>.</li> <li>In the <code>org.etsi.osl.main/compose/docker-compose.yaml</code> edit the credentials of the users that services use to connect to the databases, if you wish.<ul> <li>portaluser (default is 12345) and</li> <li>keycloak (default is password)</li> </ul> </li> </ol>"},{"location":"getting_started/deployment/docker_compose/#3-configure-keycloak-container-optional","title":"3. Configure keycloak container (optional)","text":"<ol> <li> <p>If you made changes to keycloak's mysql credentials:</p> <p>In folder <code>org.etsi.osl.main/compose/</code> edit the file <code>docker-compose.yaml</code>.</p> </li> </ol> <pre><code>DB_DATABASE: keycloak\nDB_USER: keycloak\nDB_PASSWORD: password\n</code></pre> <ol> <li> <p>If you want to change the keycloak admin password:</p> <p>In folder <code>org.etsi.osl.main/compose/</code> edit the file <code>docker-compose.yaml</code></p> </li> </ol> <pre><code>KEYCLOAK_PASSWORD: Pa55w0rd\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#4-configure-bugzilla-container-optional","title":"4. Configure bugzilla container (optional)","text":"<p>If you want to utilise the Bugzilla connector:</p> <p>In folder <code>org.etsi.osl.main/compose/</code> edit the file <code>docker-compose.yaml</code></p> <pre><code>SPRING_APPLICATION_JSON: '{\n  \"spring.activemq.brokerUrl\": \"tcp://anartemis:61616?jms.watchTopicAdvisories=false\",\n  \"spring.activemq.user\": \"artemis\",\n  \"spring.activemq.password\": \"artemis\",\n  \"bugzillaurl\":\"\",\n  \"bugzillakey\":\"\",\n  \"main_operations_product\":\"\"\n}'\n</code></pre> <p>And add the provided Bugzilla installation information:</p> <pre><code>\"bugzillaurl\":\"bugzillaurl.xx:443/bugzilla/\",\n\"bugzillakey\":\"exampleKeyeqNNwxBlgxZgMEIne0Oeq0Bz\",\n\"main_operations_product\":\"Main Site Operations\" // this is the default product to issue tickets\n</code></pre> <p>Bugzilla should have the following components under the specified product:  </p> <ul> <li>NSD Deployment Request: Component used to schedule deployment req  </li> <li>Onboarding: Issues related to VNF/NSD Onboarding  </li> <li>Operations Support: Default component for operations support  </li> <li>Validation: Use to track validation processes of VNFs and NSDs  </li> <li>VPN Credentials/Access: Used for requesting VPN Credentials/Access</li> </ul> <p>Also in the 'Main Site Operations' product, a version named 'unspecified' must be created.</p>"},{"location":"getting_started/deployment/docker_compose/#5-configure-cridge-container-optional","title":"5. Configure CRIDGE container (optional)","text":"<p>If you want to create and manage Kubernetes Custom Resources (CRs), you will have to provide:</p> <ul> <li>a cluster-wide scope kubeconf file (typically located at <code>/home/{user}/.kube</code> directory of the Kubernetes Cluster's host)</li> </ul> <p>You will have to copy the kubeconf file to the <code>org.etsi.osl.main/compose/kubedir</code> directory, prior to the deployment.</p> <p>By default, the deployment process copies the contents of <code>org.etsi.osl.main/compose/kubedir</code> directory into the <code>/root/.kube</code> directory of the CRIDGE container.</p> <pre><code>volumes:\n- ./kubedir/:/root/.kube\n</code></pre> <p>The above configuration works for the default kubeconf file names. It explicitly expects a file named <code>config</code> within the <code>/root/.kube</code> directory of the created container.</p> <p>Optionally, if you want to use custom kubeconf file names, you will have to sync volumes by files and not entire directories, e.g.</p> <pre><code>volumes:\n- ./kubedir/custom-config-name:/root/.kube/config\n</code></pre> <p>OpenSlice also offers management support of multiple Kubernetes Clusters simultaneously. For this, you will have to:</p> <ul> <li>add all the respective kubeconf files into the <code>org.etsi.osl.main/compose/kubedir</code> directory.</li> <li>create a copy of CRIDGE service in the deployment file and map the appropriate volumes. Mind the need for a different service and container name.</li> </ul> <p>Below you may find an indicative example that only references the affected fields of the docker-compose file:</p> <pre><code>cridge-cluster1:\n  container_name: openslice-cluster1\n  ...\n  volumes:\n  - ./kubedir/config-cluster1:/root/.kube/config\n\ncridge-cluster2:\n  container_name: openslice-cluster2\n  ...\n  volumes:\n  - ./kubedir/config-cluster2:/root/.kube/config\n</code></pre> <p>Note the same <code>/root/.kube/config</code> container's path for the proper functionality. See the above note for explanation.</p>"},{"location":"getting_started/deployment/docker_compose/#6-configure-osportalapi-container-nfv-services-conditional","title":"6. Configure osportalapi container (NFV services) (conditional)","text":"<p>Change the respective fields:</p> <ul> <li>If you made changes to mysql and keycloak credentials.</li> <li>If you want to change logging level (TRACE / DEBUG / INFO / WARN / ERROR).</li> </ul> <p>If you are using a non-local domain, replace everywhere the http://keycloak:8080 with the respective {{protocol://domain.name}}, as well as \"spring.portal.main.domain\" property.</p> <p>In folder <code>org.etsi.osl.main/compose/</code> edit the file <code>docker-compose.yaml</code></p> <pre><code>SPRING_APPLICATION_JSON: '{\n  \"spring.datasource.username\":\"root\",\n  \"spring.datasource.password\":\"letmein\",\n  \"spring-addons.issuers[0].uri\": \"http://keycloak:8080/auth/realms/openslice\",\n  \"spring.security.oauth2.resourceserver.jwt.issuer-uri\": \"http://keycloak:8080/auth/realms/openslice\",\n  \"springdoc.oAuthFlow.authorizationUrl\": \"http://keycloak:8080/auth/realms/openslice/protocol/openid-connect/auth\",\n  \"springdoc.oAuthFlow.tokenUrl\": \"http://keycloak:8080/auth/realms/openslice/protocol/openid-connect/token\",  \n  \"spring.portal.main.domain\": \"http://localhost\",\n  \"logging.level.org.springframework\" : \"INFO\"\n}'\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#7-configure-osscapi-container-tmf-api-service-conditional","title":"7. Configure osscapi container (TMF API service) (conditional)","text":"<p>Change the respective fields: </p> <ul> <li>If you made changes to mysql and keycloak credentials.</li> <li>If you want to change logging level (TRACE / DEBUG / INFO / WARN / ERROR).</li> </ul> <p>If you are using a non-local domain, replace everywhere the http://keycloak:8080 with the respective {{protocol://domain.name}}.</p> <p>In folder <code>org.etsi.osl.main/compose/</code> edit the file <code>docker-compose.yaml</code></p> <pre><code>SPRING_APPLICATION_JSON: '{\n  \"spring.datasource.username\":\"root\",\n  \"spring.datasource.password\":\"letmein\",\n  \"spring-addons.issuers[0].uri\": \"http://keycloak:8080/auth/realms/openslice\",\n  \"spring.security.oauth2.resourceserver.jwt.issuer-uri\": \"http://keycloak:8080/auth/realms/openslice\",\n  \"springdoc.oAuthFlow.authorizationUrl\": \"http://keycloak:8080/auth/realms/openslice/protocol/openid-connect/auth\",\n  \"springdoc.oAuthFlow.tokenUrl\": \"http://keycloak:8080/auth/realms/openslice/protocol/openid-connect/token\",  \n  \"logging.level.org.springframework\" : \"INFO\"\n}'\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#configure-nginx","title":"Configure nginx","text":"<p>In folder <code>org.etsi.osl.main/compose/nginx</code> create a configuration specific <code>nginx.conf</code> file.</p> <pre><code>cd org.etsi.osl.main/compose/nginx/\n</code></pre> <pre><code>sudo cp nginx.conf.default nginx.conf\n</code></pre> <p>If needed, in the nginx.conf file, edit the server_name for an non-local deployment.</p>"},{"location":"getting_started/deployment/docker_compose/#configure-web-ui","title":"Configure Web UI","text":"<p>In folder <code>org.etsi.osl.portal.web/src/js/</code> create a configuration specific <code>config.js</code> file.</p> <pre><code>cd org.etsi.osl.portal.web/src/js\n</code></pre> <pre><code>sudo cp config.js.default config.js\n</code></pre> <p>Edit the <code>config.js</code> file with the information of your domain. <code>ROOTURL</code> will automatically extract the the Origin (Protocol://Domain:Port) of the deployment, but you must change <code>APIURL</code> property, if you are not aiming for a localhost installation, e.g. \"https://portal.openslice.eu\".</p> <p>Example file:</p> <pre><code>{     \n  \"BUGZILLA\": \"ROOTURL/bugzilla/\",\n  \"STATUS\": \"ROOTURL/status/\",\n  \"APIURL\": \"http://localhost\",\n  \"WEBURL\": \"ROOTURL/nfvportal\",\n  \"APIOAUTHURL\": \"ROOTURL/auth/realms/openslice\",\n  \"APITMFURL\": \"ROOTURL/tmf-api/serviceCatalogManagement/v4\"\n}\n</code></pre>"},{"location":"getting_started/deployment/docker_compose/#configure-tmf-web-ui","title":"Configure TMF Web UI","text":"<p>In the folder <code>org.etsi.osl.tmf.web/src/assets/config</code> there are 3 files available for configuration:</p> <ul> <li>config.prod.json (Basic information + API configuration)</li> <li>theming.scss (CSS color palette theming)</li> <li>config.theming.json (HTML configuration - Logo, Favicon, Footer)</li> </ul> <p>The first 2 files above (i.e. config.prod.json, theming.scss) are essential for the successful deployment of OpenSlice, thus created automatically during the initial deployment at <code>org.etsi.osl.tmf.web/src/assets/config</code> directory as a copy of the default ones from the remote repository.</p> <p>Ensure that you check the <code>config.prod.json</code> and <code>theming.scss</code> files and readjust to your deployment if needed.</p> <pre><code># Starting from the root project directory\ncd org.etsi.osl.tmf.web/src/assets/config\n</code></pre> <p>E.g. You may edit \"TITLE\", \"WIKI\", etc properties with your domain title. Also configure TMF's API and Keycloak's location for the web application, if needed.</p> <p>Example file:</p> <pre><code>{         \n    \"TITLE\": \"OpenSlice by ETSI\",\n    \"PORTALVERSION\":\"2024Q2\",\n    \"WIKI\": \"https://osl.etsi.org/documentation\",\n    \"BUGZILLA\": \"{BASEURL}/bugzilla/\",\n    \"STATUS\": \"{BASEURL}/status/\",\n    \"WEBURL\": \"{BASEURL}\",\n    \"PORTAL_REPO_APIURL\": \"{BASEURL}/osapi\",\n    \"ASSURANCE_SERVICE_MGMT_APIURL\": \"{BASEURL}/oas-api\",\n    \"APITMFURL\": \"{BASEURL}/tmf-api\",\n    \"OAUTH_CONFIG\" : {\n        \"issuer\": \"{BASEURL}/auth/realms/openslice\",\n        \"loginUrl\": \"{BASEURL}/auth/realms/openslice/protocol/openid-connect/auth\",\n        \"tokenEndpoint\": \"{BASEURL}/auth/realms/openslice/protocol/openid-connect/token\",\n        \"userinfoEndpoint\": \"{BASEURL}/auth/realms/openslice/protocol/openid-connect/userinfo\",\n        \"redirectUri\": \"{BASEURL}/redirect\",\n        \"logoutUrl\": \"{BASEURL}/auth/realms/openslice/protocol/openid-connect/logout\", \n        \"postLogoutRedirectUri\": \"{BASEURL}\",\n\n        \"responseType\": \"code\",\n        \"oidc\": false,\n        \"clientId\": \"osapiWebClientId\",\n        \"dummyClientSecret\": \"secret\",\n\n        \"requireHttps\": false,\n        \"useHttpBasicAuth\": true,\n        \"clearHashAfterLogin\": false,\n\n        \"showDebugInformation\": true\n    }\n}\n</code></pre> <p>The {BASEURL} placeholder in the file automatically detects the Origin (Protocol://Domain:Port) of the deployment and applies it to every respective property. E.g. If you are attempting a local deployment of OpenSlice, then {BASEURL} is automatically translated to \"http://localhost\". Similarly, you may use {BASEURL} to translate to a public deployment configuration, e.g. \"https://portal.openslice.eu\".</p> <p>If further customization, apart from the default provided, is needed for branding (Logo, Footer) then <code>config.theming.json</code> needs to be created in io.openslice.tmf.web/src/assets/config directory, as follows:</p> <pre><code># Starting from the root project directory\ncd org.etsi.osl.tmf.web/src/assets/config\n</code></pre> <pre><code>sudo cp config.theming.default.json config.theming.json\n</code></pre> <p>IMPORTANT NOTE: If you want to apply changes to the JSON configuration files without the need to rebuild the application, you have to apply the changes at the <code>org.etsi.osl.tmf.web/dist/io-openslice-portal-web/assets/config</code> directory. Although, it is mandatory to also apply these changes to the <code>org.etsi.osl.tmf.web/src/assets/config</code> for persistancy, as after any future rebuild of OpenSlice the <code>/dist</code> directory is being overwritten along with its contents. The OpenSlice team strongly recommends to always apply your changes to the TMF web UI configuration files at <code>org.etsi.osl.tmf.web/src/assets/config</code> and rebuild the application.</p>"},{"location":"getting_started/deployment/docker_compose/#deploy-openslice-via-docker-compose","title":"Deploy OpenSlice via Docker Compose","text":"<p>After configuring the services, and editing the docker compose file accordingly, the docker compose instantiation command can be performed.</p> <pre><code># Starting from the root project directory\ncd org.etsi.osl.main/compose/\n</code></pre> <pre><code>sudo docker compose --profile prod down;sudo docker compose --profile prod up -d --build\n</code></pre> <p>Depending on your machine, this process might take time. if for any reason the deployment fails during first time, please rerun the above before any further measures.</p>"},{"location":"getting_started/deployment/docker_compose/#validating-deployments-and-container-monitoring","title":"Validating deployments and container monitoring","text":"<p>You can monitor containers' status with portainer at port 9000 (http://your-ip:9000).</p> <p>Initially, you may monitor the local machine at portainer.</p> <p>Please check that all containers are in running state.</p>"},{"location":"getting_started/deployment/docker_compose/#post-installation-steps","title":"Post installation steps","text":"<p>After the successful deployment of OpenSlice, to ensure the E2E user experience, this section is mandatory. It contains crucial configuration in regard of authentication and user creation.</p>"},{"location":"getting_started/deployment/docker_compose/#configure-keycloak-server","title":"Configure Keycloak server","text":"<p>The Keycloack server is managing authentication and running on a container at port 8080. It is also proxied to your host via nginx under http://localhost/auth. </p> <ul> <li> <p>Navigate to http://domain.com/auth/ or https://domain.com/auth/, (http://ipaddress:8080/auth/ or https://ipaddress:8443/auth/ which are directly accessible without proxy) </p> </li> <li> <p>Navigate to Administration Console </p> </li> <li> <p>Login with the credentials from section Configure keycloak container. Default values are:</p> <ul> <li>user: admin and </li> <li>password: Pa55w0rd</li> </ul> </li> </ul> <p>if you are running in HTTP you will get a message: HTTPS required.</p> <p>To resolve this issue when running in HTTP: </p> <ul> <li>Select the master realm from top left corner</li> <li>Go to login Tab and select \"Require SSL\": None</li> <li>Repeat for realm Openslice</li> </ul> <p>If you are running in HTTPS, then \"Require SSL\" can be left unchanged to external requests.</p>"},{"location":"getting_started/deployment/docker_compose/#1-configure-redirects","title":"1. Configure redirects","text":"<p>Navigate to realm Openslice &gt; Clients &gt; osapiWebClientId and change the Root URL to your domain. </p> <p>Also, insert your domain, e.g. http://example.org/*, at:</p> <ul> <li>Valid Redirect URIs</li> <li>Web Origins</li> </ul>"},{"location":"getting_started/deployment/docker_compose/#2-configure-email","title":"2. Configure email","text":"<p>Keycloak allows new users to register. Subsequently, this will also allow new users to register to the OpenSlice portal.</p> <p>Navigate to realm Openslice &gt; Realm Settings &gt; Login Tab &gt; check User registration, Verify email, Forgot password etc.</p> <p>Finally, enter the details of the mail server at the Email Tab.</p> <p>Email configuration is optional for test runs, but if not provided the above functionalities (e.g. external user registration) will not be possible.</p>"},{"location":"getting_started/deployment/docker_compose/#3-add-an-openslice-admin-user","title":"3. Add an OpenSlice admin user","text":"<p>This step is mandatory so as to access the OpenSlice Web UI. To add an OpenSlice admin user you must:</p> <ul> <li>Navigate to realm Openslice &gt; Users &gt; Add user</li> <li>Set a password</li> <li>Upon creation, navigate to Role Mappings and add ADMIN to Assigned Roles list</li> </ul> <p>That user is different from the Keycloak admin user. It is required to login and browse the OpenSlice Web UI. The Role ADMIN guarantee full access through the OpenSlice UI, thus such a user is always required.</p>"},{"location":"getting_started/deployment/docker_compose/#keycloak-at-localhost","title":"Keycloak at localhost","text":"<p>This is an important step if you run Keycloak on localhost!</p> <p>1 - Edit your Hosts File, adding the line below</p> <p><code>127.0.0.1 keycloak</code></p> <p>Hosts File Location:</p> <ul> <li> <p>In Linux/Unix, the file's location is at /etc/hosts </p> </li> <li> <p>In Windows, its location is at c:\\Windows\\System32\\Drivers\\etc\\hosts</p> </li> </ul> <p>2 - Replace http://localhost/auth/ with http://keycloak:8080/auth/ in your Keycloak config for AngularJS and Angular (see examples below).</p> <p>Explanation: Nginx uses the http://keycloak:8080 URL, which is accessible via the internal docker system's network. The Front-end (TS/Angular) shall also use the http://keycloak:8080. This way, you will not get the invalid token error, as the API is acquiring the token from http://keycloak:8080 (internally) and the Front-end is getting verified by an issuer at the same URL, as well.</p> <p>2.1 - For the Angular configuration (TMF portal UI), navigate to  org.etsi.osl.tmf.web/src/assets/config and edit config.prod.json</p> <pre><code># Starting from the root project directory\ncd org.etsi.osl.tmf.web/src/assets/config\n</code></pre> <pre><code>nano config.prod.json\n</code></pre> <p>After editing, the displayed properties should look like the example below:</p> <pre><code>{         \n  \"OAUTH_CONFIG\" : {\n      \"issuer\": \"http://keycloak:8080/auth/realms/openslice\",\n      \"loginUrl\": \"http://keycloak:8080/auth/realms/openslice/protocol/openid-connect/auth\",\n      \"tokenEndpoint\": \"http://keycloak:8080/auth/realms/openslice/protocol/openid-connect/token\",\n      \"userinfoEndpoint\": \"http://keycloak:8080/auth/realms/openslice/protocol/openid-connect/userinfo\",\n      \"redirectUri\": \"{BASEURL}/redirect\",\n      \"logoutUrl\": \"http://keycloak:8080/auth/realms/openslice/protocol/openid-connect/logout\", \n      \"postLogoutRedirectUri\": \"{BASEURL}\",\n  }\n}\n</code></pre> <p>Note the difference in changing {BASEURL} -&gt; http://keycloak:8080</p> <p>If you want the changes to take place immediately without rebuilding the project, then repeat the process for org.etsi.osl.tmf.web/dist/org.etsi.osl.tmf.web/assets/config/config.prod.json</p> <p>2.2 - For the AngularJS configuration (NVF portal UI), navigate to org.etsi.osl.portal.web/src/js and edit config.js</p> <pre><code># Starting from the root project directory\ncd org.etsi.osl.portal.web/src/js\n</code></pre> <pre><code>nano config.js\n</code></pre> <p>After editing, the displayed properties should look like the example below:</p> <pre><code>var appConfig = angular.module('portalwebapp.config',[]);\n\n\nappConfig.factory('APIEndPointService', function() {\n   return {       \n      APIOAUTHURL: \"http://keycloak:8080/auth/realms/openslice\",\n   };\n});\n</code></pre> <p>Note the difference in \"APIOAUTHURL\" property, changing ROOTURL -&gt; http://keycloak:8080</p>"},{"location":"getting_started/deployment/docker_compose/#nfv-orchestrator-configuration","title":"NFV Orchestrator Configuration","text":"<p>After successfully deploying and configuring OpenSlice, you may configure its environment (e.g. the NFVO) that will facilitate the deployment of NFV artifacts.</p> <p>See NFV Orchestrator Configuration.</p>"},{"location":"getting_started/deployment/introduction/","title":"OpenSlice Deployment","text":"<p>Intended Audience: OpenSlice Administrators</p> <p>This section is meant to guide the user through the installation of OpenSlice. </p> <p>Following, you may thorough guides depending on the installation type of your choice:</p> <ul> <li>Installing via Docker Compose guide</li> <li>Installing via Kubernetes guide</li> </ul>"},{"location":"getting_started/deployment/kubernetes/","title":"OpenSlice Deployment Guide with Kubernetes","text":"<p>Intended Audience: OpenSlice Administrators</p>"},{"location":"getting_started/deployment/kubernetes/#requirements","title":"Requirements","text":""},{"location":"getting_started/deployment/kubernetes/#hardware-requirements","title":"Hardware requirements","text":"Minimum Hardware Requirements Recommended Hardware Requirements 4 CPU cores 8 CPU cores 8 GB RAM 16 GB RAM 30 GB storage 50 GB storage"},{"location":"getting_started/deployment/kubernetes/#software-requirements","title":"Software Requirements","text":"<ul> <li>git: For cloning the project repository.</li> <li>Kubernetes: A running cluster where OpenSlice will be deployed. <ul> <li>Disclaimer: The current manual setup of Persistent Volumes using <code>hostPath</code> is designed to operate with only a single worker node. This setup will not support data persistence if a pod is rescheduled to another node.</li> </ul> </li> <li>Helm: For managing the deployment of OpenSlice.</li> <li> <p>Ingress Controller: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. You must have an Ingress controller to satisfy an Ingress.</p> <ul> <li> <p>Nginx Ingress Controller (Kubernetes Community Edition):  The ingress resource is configured to use an Nginx type ingress controller. </p> <ul> <li>If you need to expose the message bus service (Artemis), which communicates using the TCP protocol, you must use version &gt;= 1.9.13 of the Nginx Ingress Controller (a prerequisite for managing multiple kubernetes clusters). This version or higher includes the required functionality to handle TCP services. Otherwise, earlier versions may suffice depending on your configuration.</li> <li> <p>To install or upgrade to the required version, run the following command:</p> <p><pre><code>helm upgrade nginx-ingress ingress-nginx/ingress-nginx --namespace ingress \\\n--set tcp.61616=\"&lt;openslice-namespace&gt;/&lt;openslice-helm-release-name&gt;-artemis:61616\"\n</code></pre> Replace <code>&lt;helm-release-name&gt;</code> with the name of your OpenSlice Helm release.</p> </li> <li> <p>More details regarding the Nginx Ingress Controller (Kubernetes Community Edition) can be found here.</p> </li> </ul> </li> <li> <p>Other Ingress Controller:  For non-Nginx ingress controllers, modify <code>[repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml</code> to meet your controller\u2019s requirements.</p> </li> </ul> </li> </ul>"},{"location":"getting_started/deployment/kubernetes/#exposure","title":"Exposure","text":""},{"location":"getting_started/deployment/kubernetes/#option-1-load-balancer","title":"Option 1 - Load balancer","text":"<ul> <li>Network Load Balancer: Required for exposing the service (e.g., GCP, AWS, Azure, MetalLB).</li> <li>Domain/IP Address: Necessary for accessing the application. This should be configured in <code>[repo-root]/kubernetes/helm/openslice/values.yaml</code> under <code>rooturl</code>.</li> </ul>"},{"location":"getting_started/deployment/kubernetes/#option-2-ingress","title":"Option 2 - Ingress","text":"<ul> <li>Ingress Controller with NodePort: You can expose the application using the NodePort of the Ingress Controller's service.</li> <li>IP Address and Port: Use the IP address of the master node and the assigned NodePort to access the application. This should be configured in <code>[repo-root]/kubernetes/helm/openslice/values.yaml</code> under <code>rooturl</code>.</li> </ul> <p>For example: <pre><code>rooturl: http://&lt;master-node-ip&gt;:&lt;nodeport&gt;\n</code></pre></p>"},{"location":"getting_started/deployment/kubernetes/#additional-configuration","title":"Additional Configuration","text":"<ul> <li>Storage Class: In a production environment, specify your <code>storageClass</code> in <code>[repo-root]/kubernetes/helm/openslice/values.yaml</code> under <code>storageClass</code>. If not defined, PVs will be created and managed manually.</li> </ul> <p>Disclaimer: Before deploying, confirm that your storage system supports claims of one 10G and two 1G volumes.</p>"},{"location":"getting_started/deployment/kubernetes/#preparing-the-environment","title":"Preparing the environment","text":""},{"location":"getting_started/deployment/kubernetes/#1-setting-up-a-kubernetes-cluster","title":"1. Setting Up A Kubernetes Cluster","text":"<p>Refer to the official Kubernetes documentation for setting up a cluster. Ensure your cluster meets the hardware requirements specified above.</p>"},{"location":"getting_started/deployment/kubernetes/#2-installing-helm","title":"2. Installing Helm","text":"<p>Helm must be installed on your machine to deploy OpenSlice via Helm charts. Follow the official Helm installation guide.</p>"},{"location":"getting_started/deployment/kubernetes/#downloading-the-project","title":"Downloading the project","text":""},{"location":"getting_started/deployment/kubernetes/#1-create-a-new-folder-to-download-the-project","title":"1. Create a new folder to download the project","text":"<pre><code>mkdir openslice\ncd openslice\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#2-download-the-project-code","title":"2. Download the project code","text":"<p>Clone the project code from the GitLab repository.  Note: This process will be simplified once the charts are published in the GitLab registry, requiring only the chart to be pulled.</p> <pre><code>git clone https://labs.etsi.org/rep/osl/code/org.etsi.osl.main.git\ncd org.etsi.osl.main/kubernetes/helm/openslice/\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#3-prerequisites-before-deployment","title":"3. Prerequisites before deployment","text":"<p>Before deploying the Helm chart, ensure you have configured the necessary components as detailed in the following section, i.e. Configure Helm Chart Services. By default, the <code>main</code> branch is selected for deployment.</p> <p>We recommend:</p> <ul> <li>main branch for the most stable experience and</li> <li>develop branch for an experience with the latest features (for develop branch installation, it is strongly advisable that you may as well follow the develop documentation)</li> </ul>"},{"location":"getting_started/deployment/kubernetes/#configure-helm-chart","title":"Configure Helm Chart","text":"<p>When deploying OpenSlice with Helm, service configurations are handled through the <code>values.yaml</code> file. This file allows you to define all necessary configurations for your deployment, including database credentials, service URLs, and logging levels. Below are examples of how to configure your services in Helm based on your provided values.</p>"},{"location":"getting_started/deployment/kubernetes/#database","title":"Database","text":"<p>To configure MySQL and other related services, you can directly set the values in your <code>values.yaml</code> file under the <code>oscreds</code> and <code>mysql</code> sections. For example:</p> <pre><code>oscreds:\n  mysql:\n    username: \"root\"\n    password: \"letmein\"\n    openslicedb: \"osdb\"\n    keycloak: \n      database: \"keycloak\"\n      username: \"keycloak\"\n      password: \"password\"\n      adminpassword: \"Pa55w0rd\"\n    portal:\n      database: \"osdb\"\n      username: \"portaluser\"\n      password: \"12345\"\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#keycloak","title":"Keycloak","text":"<p>Keycloak settings, including the database and admin password, are part of the <code>oscreds.mysql.keycloak</code> section. If you need to adjust Keycloak-specific settings like realms or client configurations, you'll likely need to customize your Helm chart further or manage these settings directly within Keycloak after deployment. The Keycloak realm configuration that is imported by default can be found under <code>kubernetes/helm/openslice/files/keycloak-init/realm-export.json</code>.</p> <pre><code>oscreds:\n  mysql:\n    keycloak: \n      database: \"keycloak\"\n      username: \"keycloak\"\n      password: \"password\"\n      adminpassword: \"Pa55w0rd\"\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#cridge","title":"CRIDGE","text":"<p>To create and manage Kubernetes Custom Resources (CRs), you have to install and configure the CRIDGE component. </p> <p>For CRIDGE to work properly, you need to provide a cluster-wide scope kubeconfig file (typically located at <code>/home/{user}/.kube</code> directory of the Kubernetes Cluster's host). This kubeconfig file allows CRIDGE to communicate with your Kubernetes cluster.</p> <p>There are two ways to install CRIDGE:</p>"},{"location":"getting_started/deployment/kubernetes/#bundled-cridge-deployment-with-the-openslice-helm-chart-same-cluster-environment","title":"Bundled CRIDGE deployment with the OpenSlice Helm chart (same cluster environment)","text":"<p>By default, the OpenSlice Helm chart also deploys CRIDGE alongside the bundle. To configure CRIDGE, there are three different ways to provide this kubeconfig file during deployment:</p> <ol> <li> <p>Manual Copy to Helm Files Directory:</p> <ul> <li>Copy the kubeconfig file to the following directory: <code>org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge</code>.</li> <li>The deployment process will automatically copy the file into the <code>/root/.kube</code> directory of the CRIDGE container.</li> <li>Note: This method expects the kubeconfig file to be named exactly <code>kubeconfig.yaml</code> in the specified directory.</li> </ul> </li> <li> <p>Passing the Kubeconfig File Using Helm (<code>--set-file</code>):</p> <ul> <li> <p>If you do not wish to manually copy the file, you can pass it directly during the Helm installation using the <code>--set-file</code> option, at the final deployment process:</p> <pre><code>--set-file cridge.kubeconfig.raw=path/to/kubeconfig.yaml\n</code></pre> </li> <li> <p>This method reads the specified kubeconfig file and mounts it into the CRIDGE container during deployment.</p> </li> </ul> </li> <li> <p>Passing a Base64-Encoded Kubeconfig Using Helm (<code>--set</code>):</p> <ul> <li> <p>Alternatively, you can pass the kubeconfig as a base64-encoded string, during the Helm installation using the <code>--set</code> option, at the final deployment process:</p> <pre><code>--set cridge.kubeconfig.base64=\"$(base64 path/to/kubeconfig.yaml)\"\n</code></pre> </li> <li> <p>This method encodes the kubeconfig content and passes it directly to the CRIDGE container.</p> </li> </ul> </li> </ol> <p>Note: Regardless of the method you choose, if you're using a non-standard kubeconfig file name, make sure to adjust the references or rename the file as needed.</p>"},{"location":"getting_started/deployment/kubernetes/#standalone-cridge-deployment","title":"Standalone CRIDGE deployment","text":"<p>There can be cases where a separate deployment of CRIDGE, apart from the bundled OpenSlice deployment, may be needed. These cases comprise:</p> <ul> <li>remote cluster management, different from the one OpenSlice is installed</li> <li>more control over the component (e.g. multiple component instances / clusters)</li> </ul> <p>In this case, initially you have to disable CRIDGE from deploying with the rest of OpenSlice. To do so, in the <code>values.yaml</code> of OpenSlice Helm chart, you have to change the <code>cridge.enabled</code> flag to <code>false</code>.</p> <pre><code>cridge:\n  enabled: false\n</code></pre> <p>Following, clone the CRIDGE project from the GitLab, which also includes the respective standalone Helm chart.</p> <pre><code>git clone https://labs.etsi.org/rep/osl/code/org.etsi.osl.cridge.git\ncd org.etsi.osl.cridge/helm/cridge/\n</code></pre> <p>Similarly, to configure CRIDGE, there are three different ways to provide this kubeconfig file during deployment:</p> <ol> <li> <p>Manual Copy to Helm Files Directory:</p> <ul> <li>Copy the kubeconfig file to the following directory: <code>org.etsi.osl.cridge/helm/cridge/files/org.etsi.osl.cridge</code>.</li> <li>The deployment process will automatically copy the file into the <code>/root/.kube</code> directory of the CRIDGE container.</li> <li>Note: This method expects the kubeconfig file to be named exactly <code>kubeconfig.yaml</code> in the specified directory.</li> </ul> </li> <li> <p>Passing the Kubeconfig File Using Helm (<code>--set-file</code>):</p> <ul> <li> <p>If you do not wish to manually copy the file, you can pass it directly during the Helm installation using the <code>--set-file</code> option:</p> <pre><code>helm install cridge-release . --set-file kubeconfig.raw=path/to/kubeconfig.yaml\n</code></pre> </li> <li> <p>This method reads the specified kubeconfig file and mounts it into the CRIDGE container during deployment.</p> </li> </ul> </li> <li> <p>Passing a Base64-Encoded Kubeconfig Using Helm (<code>--set</code>):</p> <ul> <li> <p>Alternatively, you can pass the kubeconfig as a base64-encoded string:</p> <pre><code>helm install cridge-release . --set kubeconfig.base64=\"$(base64 path/to/kubeconfig.yaml)\"\n</code></pre> </li> <li> <p>This method encodes the kubeconfig content and passes it directly to the CRIDGE container.</p> </li> </ul> </li> </ol> <p>Note: Regardless of the method you choose, if you're using a non-standard kubeconfig file name, make sure to adjust the references or rename the file as needed.</p> <p>Important Note: If you are deploying CRIDGE in the same cluster and namespace as OpenSlice, no additional configuration is required for the message bus broker URL and OpenSlice communicates with CRIDGE directly. However, if CRIDGE is installed in a separate Kubernetes cluster from the one hosting OpenSlice, it is important to configure the <code>values.yaml</code> file for the CRIDGE Helm chart to point to the correct message bus broker URL. Please see Nginx Ingress Controller (Kubernetes Community Edition) configuration on how to properly expose the message bus in such scenario.</p> <p>In the <code>values.yaml</code> of the CRIDGE Helm chart, you must set <code>oscreds.activemq.brokerUrl</code> to point to the IP address of the ingress controller in the OpenSlice cluster, as shown below:</p> <pre><code>oscreds:\n  activemq:\n    brokerUrl: \"tcp://&lt;openslice-rootURL&gt;:61616?jms.watchTopicAdvisories=false\"\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#management-of-multiple-kubernetes-clusters","title":"Management of multiple Kubernetes Clusters","text":"<p>OpenSlice also offers management support of multiple Kubernetes Clusters simultaneously. </p> <p>For this, you will have to replicate the steps in Standalone CRIDGE deployment for every Cluster. Each CRIDGE instance will be in charged with the management of one Kubernetes Cluster.</p>"},{"location":"getting_started/deployment/kubernetes/#external-services-optional","title":"External Services (optional)","text":"<p>For configuring optional external services like Bugzilla and CentralLog, specify their URLs and credentials in the <code>values.yaml</code> file:</p> <pre><code>bugzillaurl: \"example.com:443/bugzilla\"\nbugzillakey: \"VH2Vw0iI5aYgALFFzVDWqhACwt6Hu3bXla9kSC1Z\"\nmain_operations_product: \"Main Site Operations\" // this is the default product to issue tickets\ncentrallogurl: \"http://elk_ip:elk_port/index_name/_doc\"\n</code></pre> <p>Bugzilla should have the following components under the specified product:  </p> <ul> <li>NSD Deployment Request: Component used to schedule deployment req  </li> <li>Onboarding: Issues related to VNF/NSD Onboarding  </li> <li>Operations Support: Default component for operations support  </li> <li>Validation: Use to track validation processes of VNFs and NSDs  </li> <li>VPN Credentials/Access: Used for requesting VPN Credentials/Access   </li> </ul> <p>Also in the 'Main Site Operations' product, a version named 'unspecified' must be created.</p>"},{"location":"getting_started/deployment/kubernetes/#application-and-logging","title":"Application and Logging","text":"<p>Application-specific configurations, such as OAuth client secrets, can be set in the <code>spring</code> section:</p> <pre><code>spring:\n  oauthClientSecret: \"secret\"\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#ingress-and-root-url","title":"Ingress and Root URL","text":"<p>To configure the ingress controller and root URL for OpenSlice, update the rooturl field with your ingress load balancer IP or domain. This setting is crucial for external access to your application:</p> <pre><code>rooturl: \"http://openslice.com\" # Example domain\n# or\nrooturl: \"http://3.15.198.35:8080\" # Example IP with port\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#persistent-volume-for-mysql","title":"Persistent Volume for MySQL","text":"<p>For persistent storage, especially for MySQL, define the storage size under the <code>mysql</code> section. This ensures that your database retains data across pod restarts and deployments.</p> <pre><code>mysql:\n  storage: \"10Gi\"\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#tcp-forwarding-for-artemis","title":"TCP Forwarding for Artemis","text":"<p>To expose the message bus service (Artemis) via the ingress controller, it\u2019s essential to configure TCP traffic forwarding. Artemis listens on port <code>61616</code>, and this traffic needs to be directed to the Artemis service within your Kubernetes cluster.</p> <p>In the Ingress Controller Setup section, you already configured the Nginx ingress controller to handle this TCP forwarding. By setting the rule for port <code>61616</code>, traffic arriving at the ingress will be forwarded to the Artemis service defined in your Helm release.</p> <p>This setup ensures that the message bus service is accessible externally via the ingress controller, completing the necessary configuration for Artemis.</p>"},{"location":"getting_started/deployment/kubernetes/#web-ui","title":"Web UI","text":"<p>In folder <code>kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js</code> you must make a copy of <code>config.js.default</code> file and rename it to <code>config.js</code>. </p> <p>This is mandatory for the configuration file to be discoverable.</p> <p>Edit the <code>config.js</code> configuration file with your static configuration, if needed.</p> <pre><code>{\n  TITLE: \"OpenSlice by ETSI\",\n  WIKI: \"https://osl.etsi.org/documentation/\",\n  BUGZILLA: \"{{ .Values.rooturl }}/bugzilla\",\n  STATUS: \"{{ .Values.rooturl }}/status\",\n  APIURL: \"{{ .Values.rooturl }}\",\n  WEBURL: \"{{ .Values.rooturl }}/nfvportal\",\n  APIOAUTHURL: \"{{ .Values.rooturl }}/auth/realms/openslice\",\n  APITMFURL: \"{{ .Values.rooturl }}/tmf-api/serviceCatalogManagement/v4\"\n}\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#tmf-web-ui","title":"TMF Web UI","text":"<p>In the folder <code>kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config</code> there are 3 files available for configuration:</p> <ul> <li>config.prod.default.json (Basic information + API configuration)</li> <li>theming.default.scss (CSS color palette theming)</li> <li>config.theming.default.json (HTML configuration - Logo, Favicon, Footer)</li> </ul> <p>You must make a copy of files:</p> <ul> <li><code>config.prod.default.json</code> and rename it to <code>config.prod.json</code></li> <li><code>theming.default.scss</code> and rename it to <code>theming.scss</code></li> </ul> <p>The 2 files above (i.e. config.prod.json, theming.scss) are essential for the successful deployment of OpenSlice, and executing the above steps is mandatory for the configuration files to be discoverable.</p> <p>Ensure that you check the <code>config.prod.json</code> and <code>theming.scss</code> files and readjust to your deployment if needed.</p> <pre><code># Starting from the root project directory\ncd kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config\n</code></pre> <p>E.g. You may edit \"TITLE\", \"WIKI\", etc properties with your domain title. Also configure TMF's API and Keycloak's location for the web application, if needed.</p> <pre><code>{         \n    \"TITLE\": \"OpenSlice by ETSI\",\n    \"PORTALVERSION\":\"2024Q2\",\n    \"WIKI\": \"https://osl.etsi.org/documentation\",\n    \"BUGZILLA\": \"{BASEURL}/bugzilla/\",\n    \"STATUS\": \"{BASEURL}/status/\",\n    \"WEBURL\": \"{BASEURL}\",\n    \"PORTAL_REPO_APIURL\": \"{BASEURL}/osapi\",\n    \"ASSURANCE_SERVICE_MGMT_APIURL\": \"{BASEURL}/oas-api\",\n    \"APITMFURL\": \"{BASEURL}/tmf-api\",\n    \"OAUTH_CONFIG\" : {\n        \"issuer\": \"{BASEURL}/auth/realms/openslice\",\n        \"loginUrl\": \"{BASEURL}/auth/realms/openslice/protocol/openid-connect/auth\",\n        \"tokenEndpoint\": \"{BASEURL}/auth/realms/openslice/protocol/openid-connect/token\",\n        \"userinfoEndpoint\": \"{BASEURL}/auth/realms/openslice/protocol/openid-connect/userinfo\",\n        \"redirectUri\": \"{BASEURL}/redirect\",\n        \"logoutUrl\": \"{BASEURL}/auth/realms/openslice/protocol/openid-connect/logout\", \n        \"postLogoutRedirectUri\": \"{BASEURL}\",\n\n        \"responseType\": \"code\",\n        \"oidc\": false,\n        \"clientId\": \"osapiWebClientId\",\n        \"dummyClientSecret\": \"secret\",\n\n        \"requireHttps\": false,\n        \"useHttpBasicAuth\": true,\n        \"clearHashAfterLogin\": false,\n\n        \"showDebugInformation\": true\n    }\n}\n</code></pre> <p>The {BASEURL} placeholder in the file automatically detects the Origin (Protocol://Domain:Port) of the deployment and applies it to every respective property. E.g. If you are attempting a local deployment of OpenSlice, then {BASEURL} is automatically translated to \"http://localhost\". Similarly, you may use {BASEURL} to translate to a public deployment configuration, e.g. \"https://portal.openslice.eu\".</p> <p>If further customization, apart from the default provided, is needed for branding (Logo, Footer) then <code>config.theming.json</code> needs to be created in kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config directory, as follows:</p> <pre><code># Starting from the root project directory\ncd kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config\n</code></pre> <pre><code>sudo cp config.theming.default.json config.theming.json\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#deploy-the-helm-chart","title":"Deploy the Helm Chart","text":"<p>After configuring the services, and editing the <code>values.yaml</code> file accordingly, the helm install command can be performed.</p> <pre><code>cd kubernetes/helm/openslice/\nhelm install myopenslice . --namespace openslice --create-namespace\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#validating-deployments-and-container-monitoring","title":"Validating deployments and container monitoring","text":"<p>In a Kubernetes environment, you can monitor the status of your deployments and containers using <code>kubectl</code>, the Kubernetes command-line tool, which provides powerful capabilities for inspecting the state of resources in your cluster.</p>"},{"location":"getting_started/deployment/kubernetes/#checking-the-status-of-your-applications-deployment","title":"Checking the Status of your application's deployment","text":"<p>To check the status of your deployment, use the following commands. The output should be similar:</p> <p><pre><code>kubectl get pods -n openslice\n\nNAME                      READY   UP-TO-DATE   AVAILABLE   AGE\nmyopenslice-artemis       1/1     1            1           6m28s\nmyopenslice-blockdiag     1/1     1            1           6m28s\nmyopenslice-bugzilla      1/1     1            1           6m28s\nmyopenslice-centrallog    1/1     1            1           6m28s\nmyopenslice-cridge        1/1     1            1           6m28s\nmyopenslice-keycloak      1/1     1            1           6m28s\nmyopenslice-kroki         1/1     1            1           6m28s\nmyopenslice-manoclient    1/1     1            1           6m28s\nmyopenslice-oasapi        1/1     1            1           6m28s\nmyopenslice-osom          1/1     1            1           6m28s\nmyopenslice-osportalapi   1/1     1            1           6m28s\nmyopenslice-osscapi       1/1     1            1           6m28s\nmyopenslice-portalweb     1/1     1            1           6m28s\nmyopenslice-tmfweb        1/1     1            1           6m28s\n</code></pre> <pre><code>kubectl get deployments -n openslice\n\nNAME                      READY   UP-TO-DATE   AVAILABLE   AGE\nmyopenslice-artemis       1/1     1            1           7m17s\nmyopenslice-blockdiag     1/1     1            1           7m17s\nmyopenslice-bugzilla      1/1     1            1           7m17s\nmyopenslice-centrallog    1/1     1            1           7m17s\nmyopenslice-cridge        1/1     1            1           7m17s\nmyopenslice-keycloak      1/1     1            1           7m17s\nmyopenslice-kroki         1/1     1            1           7m17s\nmyopenslice-manoclient    1/1     1            1           7m17s\nmyopenslice-oasapi        1/1     1            1           7m17s\nmyopenslice-osom          1/1     1            1           7m17s\nmyopenslice-osportalapi   1/1     1            1           7m17s\nmyopenslice-osscapi       1/1     1            1           7m17s\nmyopenslice-portalweb     1/1     1            1           7m17s\nmyopenslice-tmfweb        1/1     1            1           7m17s\n</code></pre> <pre><code>kubectl get services -n openslice\n\nNAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE\nmyopenslice-artemis       ClusterIP   10.101.128.223   &lt;none&gt;        8161/TCP,61616/TCP,61613/TCP   7m43s\nmyopenslice-blockdiag     ClusterIP   10.109.196.90    &lt;none&gt;        8001/TCP                       7m43s\nmyopenslice-bugzilla      ClusterIP   10.107.10.101    &lt;none&gt;        13010/TCP                      7m43s\nmyopenslice-centrallog    ClusterIP   10.109.84.33     &lt;none&gt;        13013/TCP                      7m43s\nmyopenslice-keycloak      ClusterIP   10.104.172.73    &lt;none&gt;        8080/TCP,8443/TCP              7m43s\nmyopenslice-kroki         ClusterIP   10.106.92.111    &lt;none&gt;        8000/TCP                       7m43s\nmyopenslice-manoclient    ClusterIP   10.100.143.154   &lt;none&gt;        13011/TCP                      7m43s\nmyopenslice-mysql         ClusterIP   10.108.206.75    &lt;none&gt;        3306/TCP                       7m43s\nmyopenslice-oasapi        ClusterIP   10.100.107.66    &lt;none&gt;        13101/TCP                      7m43s\nmyopenslice-osom          ClusterIP   10.97.88.133     &lt;none&gt;        13100/TCP                      7m43s\nmyopenslice-osportalapi   ClusterIP   10.111.212.76    &lt;none&gt;        13000/TCP                      7m43s\nmyopenslice-osscapi       ClusterIP   10.101.84.220    &lt;none&gt;        13082/TCP                      7m43s\nmyopenslice-portalweb     ClusterIP   10.101.16.112    &lt;none&gt;        80/TCP                         7m43s\nmyopenslice-tmfweb        ClusterIP   10.101.157.185   &lt;none&gt;        80/TCP                         7m43s\n</code></pre></p>"},{"location":"getting_started/deployment/kubernetes/#accessing-logs-for-troubleshooting","title":"Accessing Logs for Troubleshooting","text":"<p>If a pod is not in the expected state, you can access its logs for troubleshooting:</p> <pre><code>kubectl logs &lt;pod-name&gt; -n openslice\n</code></pre>"},{"location":"getting_started/deployment/kubernetes/#post-installation-steps-mandatory","title":"Post installation steps (mandatory)","text":"<p>After the successful deployment of OpenSlice, to ensure the end-to-end user experience, this section is mandatory. It contains crucial configuration in regard of authentication and user creation.</p>"},{"location":"getting_started/deployment/kubernetes/#configure-keycloak-server","title":"Configure Keycloak server","text":"<p>The Keycloack server is managing authentication and running on a container at port 8080. It is also proxied to your host via the ingress resource under http://your-domain/auth. </p> <ul> <li> <p>Navigate to http://your-domain/auth/ or https://your-domain/auth/, (http://ipaddress:8080/auth/ or https://ipaddress:8443/auth/ which are directly accessible without proxy) </p> </li> <li> <p>Navigate to Administration Console </p> </li> <li> <p>Login with the credentials from section Keycloak Configuration. Default values are:</p> <ul> <li>user: admin </li> <li>password: Pa55w0rd</li> </ul> </li> </ul> <p>This applies only if you are running in HTTP and get a message: HTTPS required.</p> <p>To resolve this issue when running in HTTP: </p> <ul> <li>Select the master realm from top left corner</li> <li>Go to login Tab and select \"Require SSL\": None</li> <li>Repeat for realm Openslice</li> </ul> <p>If you are running in HTTPS, then \"Require SSL\" can be left unchanged to external requests.</p>"},{"location":"getting_started/deployment/kubernetes/#1-configure-email","title":"1. Configure email","text":"<p>Keycloak allows new users to register. Subsequently, this will also allow new users to register to the OpenSlice portal.</p> <p>Navigate to realm Openslice &gt; Realm Settings &gt; Login Tab &gt; check User registration, Verify email, Forgot password etc.</p> <p>Finally, enter the details of the mail server at the Email Tab.</p> <p>Email configuration is optional for test runs, but if not provided the above functionalities (e.g. external user registration) will not be possible.</p>"},{"location":"getting_started/deployment/kubernetes/#2-add-an-openslice-admin-user","title":"2. Add an OpenSlice admin user","text":"<p>This step is mandatory so as to access the OpenSlice Web UI. To add an OpenSlice admin user you must: - Navigate to realm Openslice &gt; Users &gt; Add user  - Set a password - Upon creation, navigate to Role Mappings and add ADMIN to Assigned Roles list</p> <p>That user is different from the Keycloak admin user. It is required to login and browse the OpenSlice Web UI. The Role ADMIN guarantee full access through the OpenSlice UI, thus such a user is always required.</p>"},{"location":"getting_started/deployment/kubernetes/#nfv-orchestrator-configuration","title":"NFV Orchestrator Configuration","text":"<p>After successfully deploying and configuring OpenSlice, you may configure its environment (e.g. the NFVO) that will facilitate the deployment of NFV artifacts.</p> <p>See NFV Orchestrator Configuration.</p>"},{"location":"naas/exposed_apis/","title":"Supported TMFORUM exposed APIs","text":"Endpoint Title Description Version /tmf-api/serviceCatalogManagement/v4 633 Service Catalog Management Provides a catalog of services. 4.0.0 /tmf-api/productCatalogManagement/v4/ 620 Product Catalog Management Provides a catalog of products. 4.0.0 /tmf-api/productOrderingManagement/v4/ v622 Product Ordering Provides a standardized mechanism for placing a product order. 4.0.0 /tmf-api/resourceCatalogManagement/v4 634 Resource Catalog Management This is Swagger UI environment generated for the TMF Resource Catalog Management specification. 4.0.0 /tmf-api/serviceInventory/v4 638 Service Inventory Management Provides a consistent/standardized mechanism to query and manipulate the Service inventory. 4.0.0 /tmf-api/serviceOrdering/v4 641 API ServiceOrdering Provides a standardized mechanism for managing Service Order. 4.0.0 /tmf-api/serviceQualityManagement/v2 657 Service Quality Management This is Swagger UI environment generated for the TMF Service Quality Management specification. 2.0.0 /tmf-api/partyRoleManagement/v4/ 669 Party Role Management This is Swagger UI environment generated for the TMF Party Role Management specification. 4.0.0 /tmf-api/party/v4/organization 632 API Party Provides standardized mechanism for party management such as creation, update, retrieval, deletion and notification of events. 4.0.0 /tmf-api/agreementManagement/v2/ 651 Agreement Management T his is Swagger UI environment generated for the TMF Agreement Management specification. 2.0.0 /tmf-api/resourceOrderingManagement/v4 652 Resource Order Management-v4.0.0 This is Swagger UI environment generated for the TMF 652-Resource Order Management-v4.0.0 specification. 4.0.0 /tmf-api/accountManagement/v4 666 Account Management This is Swagger UI environment generated for the TMF Account Management specification. 4.0.0 /tmf-api/customerManagement/v4 629 Customer Management TMF Customer Management 4.0.0 /tmf-api/userinfo 691 Federated ID TMF Federated ID 1.0.0 /tmf-api/ServiceActivationAndConfiguration/v3/ 640 API Service Activation and Configuration Provides the ability to activate and configure Services. 3.0.0 /tmf-api/alarmManagement/v4/ 642 API Alarm 4.0.0 /tmf-api/serviceTestManagement/v4 653 Service Test Management Provides the ability to manage tests of provisioned Services. 4.0.0 /tmf-api/resourceInventoryManagement/v4 639 API Resource Inventory Management Provides the ability to manage Resources. 4.0.0 /tmf-api/lcmrulesmanagement/v1/ LCM Rules Custom API environment for LCM Rules 1.0.0 /tmf-api/resourcePoolManagement/v1 685 Resource Pool Management Resources that can be reserved are only in one pool. 1.0.0 /tmf-api/geographicSiteManagement/v5 674 Geographic Site Management Covers the operations to manage sites that can be associated with entities 5.0.0"},{"location":"naas/gst_to_tmf/","title":"Generic Slice Template as a Service Specification","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>GSMA Generic Slice Template (GST) Defines customer-oriented service requirements, E.g. Availability, Area of service, delay tolerance, etc. and attempts to narrow down the gap between (network) service customers and vendors</p> <p>Moreove it Proposes standardized Network Slice Templates (NESTs) to target specific use cases</p> <p>In OpenSlice we made an effort and translated the GST to a Service Specification model. So Service Designers can use it as a template to design a new Service.</p> <p>The image illustrates the relationship between the GSMA Generic Slice Template (GST), TM Forum Service Specification, and how they are utilized within OpenSlice to offer network services.</p> <p></p> <p>The GST to TM Forum via OpenSlice:</p> <pre><code>    * GST Attributes List: A comprehensive list of service attributes, such as availability, delay tolerance, downlink throughput, energy efficiency, isolation level, mission-critical support, and many others.\n    * TMF Service Specification: Demonstrates the transformation of GST attributes into a TM Forum service specification, showing JSON code snippets that define service parameters.\n    * Offered Service based on GST: Represents the final offered service, an example of a GST-based service shown as an entry in a catalog, ready to be consumed by customers.\n</code></pre> <p>What was our flow:</p> <pre><code>* Started with defining service requirements and attributes using GST.\n* Translated these GST attributes into a formal TM Forum service specification.\n* Service Offering in OpenSlice: The service specification is then used to create and offer a specific network service within OpenSlice, available for customer selection and deployment.\n</code></pre>"},{"location":"naas/gst_to_tmf/#probe-further","title":"Probe further","text":"<p>See v9 of the  GST model in GSMA here</p>"},{"location":"naas/introduction/","title":"Network as a Service (NaaS)","text":"<p>This section describes some core concepts for Delivering Network as a Service in OpenSlice. There are many articles and reports on the subject like:</p> <ul> <li>TMF909 API Suite Specification for NaaS</li> <li>TMF926A Connectivity as a Service</li> <li>GSMA Open Gateway initiative</li> <li>TMF931 Open Gateway Onboarding and Ordering Component Suite</li> </ul> <p>In general Network as a Service (NaaS) is a  service model that allows users to consume network infrastructure and services, similar to how they would consume other cloud services like Software as a Service (SaaS) or Infrastructure as a Service (IaaS). NaaS abstracts the complexity of managing physical network infrastructure, providing users with virtualized network resources that can be dynamically allocated and managed through software.</p>"},{"location":"naas/introduction/#openslice-and-naas","title":"OpenSlice and NaaS","text":"<p>OpenSlice makes extensive use of TMFORUM's models and APIs. Therefore if one is familiar with TMF APIs the terminology and ideas are the same.</p> <p>To deliver NaaS we need to incorporate various APIs (see TMF909 API Suite Specification for NaaS). OpenSlice implements various TMF APIs to deliver NaaS and support the  lifecycle functions required to manage the network capabilities exposed as Network as a Service and managed by operational domains.</p>"},{"location":"naas/introduction/#probe-further","title":"Probe further","text":"<ul> <li>Check the TMFORUM - API assets - Onboarding and Ordering Component Suite (TMF931)</li> <li>For a complete list of supported APIs, see Supported APIs</li> <li>Check the defined user roles of OpenSlice in our Terminology</li> </ul>"},{"location":"naas/lcm_intro/","title":"Lifecycle Management - LCM","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>Lifecycle Management: The orchestration framework handles the activation, termination and any necessary modifications throughout the service lifecycle.</p> <p>In OpenSlice the Lifecycle of a service follows in general the concept of Network Slice lifecycle as defined by 3GPP.</p> <p></p>"},{"location":"naas/lcm_intro/#introduction-in-openslice-lcm","title":"Introduction in OpenSlice LCM","text":"<p>OpenSlice adopted the LCM model by 3GPP and mapped to the TMF model service state. Next we discuss briefly the process and the relationships.</p> <p>The lifecycle of a service, particularly in the context of Network Service lifecycle encompasses several stages that ensure the service is effectively planned, deployed, managed, and eventually decommissioned. </p> <p>Here is an overview of these stages and relationships with OpenSlice:</p>"},{"location":"naas/lcm_intro/#0-preparation-phase","title":"0. Preparation Phase","text":"<p>This phase is performed by Service Designers</p>"},{"location":"naas/lcm_intro/#service-design","title":"Service Design:","text":"<ul> <li>Requirements Gathering: Collect service requirements from stakeholders, including performance metrics, quality of service (QoS), security needs, and geographical coverage.</li> <li>Service Specification: Define the service in terms of functionalities, attributes, and dependencies. This can be formalized using standardized templates such as the GSMA Generic Slice Template (GST).</li> <li>Resource Planning: Identify and plan the required resources, including network functions, computing power, storage, and connectivity, inclluding network function configurations.</li> <li>Expose to Service Catalog:  Expose to service catalog for user ordering.</li> </ul> <p>Next phases are handled by the Service Orchestrator after a service is scheduled for instantiation</p> <p>If it is a bundle of services, each services follows its own Lifecycle!</p>"},{"location":"naas/lcm_intro/#1-instantiation-phase","title":"1. Instantiation Phase","text":""},{"location":"naas/lcm_intro/#service-instantiation","title":"Service Instantiation:","text":"<ul> <li>Configuration: Configure the network service according to the specifications including the user requirements from the service order, ensuring that all components are correctly set up to provide the desired service.</li> <li>Resource Allocation - Feasibility check: Allocate the necessary physical and virtual resources based on the service specification. This includes any containerized resources, virtual network functions (VNFs) and software-defined networking (SDN) components. (This step is not performed in OpenSlice)</li> <li>OpenSlice Service Orchestrator creates the services at \"RESERVED\" state</li> <li>User Notification:  There could be an email notification from the system (if Bugzilla is configured)</li> </ul>"},{"location":"naas/lcm_intro/#service-deployment","title":"Service Deployment:","text":"<ul> <li>Activation: OpenSlice Service Orchestrator activates the network service and makes the service available to the end-users. This may involve:</li> <li>Create any related services that the service depends on</li> <li>Contacting all related controllers during provisioning, e.g. Kubernetes controllers, Kubernetes operators, MANO Orchestrators, RAN controllers, SDN Controlles, or other external services (e.g. via REST calls)</li> <li>Scheduling instantiation, Resolving dependencies and passing paramemters between controllers,</li> <li>setting up user accounts, provisioning access credentials -this is performed either offline or via other services.</li> <li>OpenSlice Service Orchestrator if everything is succesful puts the service at \"ACTIVE\" state</li> <li>User Notification:  There could be an email notification from the system (if Bugzilla is configured)</li> </ul>"},{"location":"naas/lcm_intro/#2-operation-phase","title":"2. Operation Phase","text":""},{"location":"naas/lcm_intro/#service-operation","title":"Service Operation:","text":"<ul> <li>Lifecycle Management: Manage the network slice throughout its lifecycle, including scaling, reconfiguration, and adaptation to changing requirements. </li> <li>In OpenSlice this is performed with Lifecycle management Rules (see next)</li> </ul> <p>In this phase the Service Designer can define several aspects.  Be aware that these are NOT performed automatically by OpenSlice - further examples and future enhancements will address these. This could include:</p> <ul> <li>Monitoring: Continuously monitor the service for performance, availability, and compliance with SLAs. Utilize tools for real-time tracking and alerts for any anomalies or performance degradation.</li> <li>Maintenance: Conduct regular maintenance activities, including software updates, patching, and optimization to ensure the service runs smoothly.</li> <li>Scaling: Dynamically scale the resources up or down based on the demand and performance requirements.</li> <li>Fault Management: Detect and resolve faults in the network slice to minimize downtime and maintain service quality.</li> </ul>"},{"location":"naas/lcm_intro/#3-decommissioning-phase","title":"3. Decommissioning Phase","text":"<ul> <li>Service Termination: Service Orchestrator terminates the network service. This may involve:</li> <li>Terminate any related services that the service depends on</li> <li>Contacting all related controllers during termination to release resources, e.g. Kubernetes controllers, Kubernetes operators, MANO Orchestrators, RAN controllers, SDN Controlles, or other external services (e.g. via REST calls)</li> <li>Scheduling termination, Resolving dependencies and passing paramemters between controllers</li> <li>OpenSlice Service Orchestrator, if everything is succesful, puts the service at \"TERMINATED\" state</li> <li>User Notification:  There could be an email notification from the system (if Bugzilla is configured)</li> </ul>"},{"location":"naas/lcm_intro/#high-level-example-enhanced-mobile-broadband-embb-service-lifecycle","title":"High level example: Enhanced Mobile Broadband (eMBB) Service Lifecycle","text":"<ol> <li> <p>Preparation:</p> </li> <li> <p>Define eMBB service requirements for high bandwidth and low latency.</p> </li> <li>Create an eMBB service specification template specifying related services and resources to Kubernetes Operators, VNFs for content delivery and traffic management.</li> <li> <p>Expose to catalog</p> </li> <li> <p>Instantiation:</p> </li> <li> <p>Instantiate other services and allocate resources such as edge computing nodes and high-capacity links.</p> </li> <li> <p>Configure the service to prioritize video streaming traffic.</p> </li> <li> <p>Operation:</p> </li> <li> <p>Monitor the service to ensure it meets high-bandwidth requirements.</p> </li> <li> <p>Scale up resources during peak usage periods, such as live sports events.</p> </li> <li> <p>Decommissioning:</p> </li> <li> <p>Notify users about service termination.</p> </li> <li>Decommission the network service, reclaiming resources for other services.</li> </ol>"},{"location":"naas/lcm_intro/#probe-further","title":"Probe further","text":"<ul> <li>See 3GPP Lifecycle</li> </ul>"},{"location":"naas/lcm_rules_intro/","title":"Lifecycle Management Rules - LCM Rules","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>Lifecycle Management Rules: Defining complex conditions and actions during the lifecycle of a service and any necessary modifications throughout the service lifecycle.</p> <p>OpenSlice end-to-end (E2E) service orchestrator follows some predefined workflows to manage a service lifecycle (They are described in BPMN language and included in our orchestration engine)</p> <p>So in the system there are already predefined recipes, which in each process-step of the workflow some piece of code is executed. </p> <p>How is it possible to intervene in the workflow process and inject some user defined actions? The next image illustrates the idea</p> <p></p>"},{"location":"naas/lcm_rules_intro/#how-is-it-possible-to-intervene-in-the-workflow-process-and-affect-it","title":"How is it possible to intervene in the workflow process and affect it?","text":"<p>LCM Rules are used for defining complex conditions and actions during the lifecycle of a service. In OpenSlice there are the following types of rules defined:</p> <ul> <li>PRE_PROVISION</li> <li>CREATION</li> <li>AFTER_ACTIVATION </li> <li>SUPERVISION </li> <li>AFTER_DEACTIVATION </li> </ul> <p>The following figure displays the different phases that the rules are performed, during the lifecycle of a Network Service Instance. </p> <p></p> <ul> <li>PRE_PROVISION rules: Run only once just before creating a service with a given priority. </li> <li>CREATION rules: Run while the referenced service dependencies of a service are created</li> <li>AFTER_ACTIVATION rules: Run only once just after a service get the ACTIVE state</li> <li>SUPERVISION rules: Run when a characteristic of a service is changed and the service is in the ACTIVE state </li> <li>AFTER_DEACTIVATION rules: Run only once just after a service get the INACTIVE/TERMINATED state </li> </ul> <p>In general the rules allow to perform many actions during service LCM. These are some examples:</p> <ul> <li>Modify service specification parameters before the instantiation of a service (or during operation) based on other dependencies. These parameters might be part of other services already included in Service order</li> <li>Translate GST/NEST parameter values to other values passed later to NFVO for instantiation or control</li> <li>Define complex OSM Configs based on other dependencies and passing variables</li> <li>Define any dependencies when creating the referenced services</li> <li>Dynamically include new service dependencies</li> <li>Create new service orders so include dynamically other services</li> <li>Call external (RESTful) services (via http(s), define payload, examine response)</li> </ul>"},{"location":"naas/lcm_rules_intro/#examine-if-the-rules-are-executed-successfully","title":"Examine if the rules are executed successfully","text":"<p>Rules are transformed automatically to executable code (currently is Java). If a rule is performed successfully  or has any issues (e.g. unexpected syntax errors or exceptions) appear in OSOM logfiles and also tey are attached as Notes to the running Service.</p>"},{"location":"naas/lcm_rules_intro/#probe-further","title":"Probe further","text":"<ul> <li>In the Service Design section we present in details the Lifecycle rules and how one can design them</li> <li>Many of them are used in our provided Service Design examples</li> </ul>"},{"location":"naas/resource_catalog/","title":"OpenSlice Resource Catalog:","text":"<pre><code>* Resource Specifications: Defines the underlying resources required to deliver services, such as network components, servers, and software.\n* Resource Availability: Tracks the availability and status of resources to ensure efficient service delivery.\n</code></pre> <p>This section is WIP.</p>"},{"location":"naas/resource_inventory/","title":"Resources Inventory","text":"<p>This section is WIP.</p>"},{"location":"naas/resource_spec/","title":"Resources Specification","text":"<p>This section is WIP.</p>"},{"location":"naas/service_catalog/","title":"OpenSlice Service Catalogs","text":"<p>Intended Audience: OpenSlice Service Designers, Administrators, Users</p> <p>OpenSlice offers complete management of Service Catalogs to end users, which comprises:</p> <ul> <li>Service Categories: Lists the available services, including their specifications and performance metrics.</li> <li>Service Bundles: Combines multiple services into a single offering to provide added value to customers.</li> </ul> <p>Service Catalogs contain Service Specifications (organized in Service Categories) exposed to users for Service Orders.</p>"},{"location":"naas/service_catalog/#ui-management","title":"UI management","text":"<p>The UI is built around Service Catalogs and Categories exposed in the Service Marketplace. </p> <p>In the menu the administrator can manage the Service Catalogs and Categories.</p> <p></p>"},{"location":"naas/service_catalog/#api-exposed","title":"API exposed","text":"<p>When installing OpenSlice the API endpoints can be browsed at:  <pre><code>[YOURDOMAIN]/tmf-api/swagger-ui/index.html?urls.primaryName=tmf-api-633-ServiceCatalogManagement-v4.0.0\n\nendpoint examples:\n\n/serviceCatalogManagement/v4/serviceCatalog List or find ServiceCatalog objects\n/serviceCatalogManagement/v4/serviceCategory List or find ServiceCategory objects\n</code></pre></p>"},{"location":"naas/service_catalog/#example-use-case","title":"Example Use Case","text":"<p>Scenario: A service provider wants to offer a new managed XXXX service to enterprise customers.</p> <ul> <li>Service Definition:  Service Template thus create a template for the XXXX service, including specifications for bandwidth, network features, and performance metrics.</li> <li>Service Catalog Integration: Add to Service Catalog the XXXX service  with all relevant details.</li> <li>Service Delivery/Order: Provision Service by Using the orchestration system to provision and configure the XXXX service based on customer orders.</li> </ul>"},{"location":"naas/service_catalog/#probe-further","title":"Probe further","text":"<ul> <li>Read the model of Service Catalogs in TMF TMF633 Service Catalog API User Guide v4.0.0</li> <li>Check a demo of the API here</li> <li>Check a demo of the Catalog and Categories here</li> </ul>"},{"location":"naas/service_inventory/","title":"Service Inventory","text":"<p>Intended Audience: OpenSlice Service Designers, Administrators, Users</p> <p>Service Inventory contains references to running services that realize a Service Order.</p> <p>The Service Inventory is a repository that maintains detailed records of all active services and the underlying resources that support them. It acts as a central repository, tracking the lifecycle of each service from provisioning to decommissioning, and includes references to the specific virtual and physical resources that realize the service, such as servers, network components, storage, and software instances. </p> <p>This inventory enables real-time visibility into the status, configuration, and dependencies of each service, facilitating effective management, troubleshooting, and optimization. </p> <p>By providing a view of the active services, the Service Inventory includes services/resource allocation, and ensures that services are delivered in alignment with the initial request. </p>"},{"location":"naas/service_inventory/#ui-management","title":"UI management","text":"<p>Through the menu and dedicated forms the administrator can manage the Service Inventory and any active Services (reconfigure or terminate).Various examples in this document will guide you to the usage and the management of the Services in Service Inventory.</p>"},{"location":"naas/service_inventory/#api-exposed","title":"API exposed","text":"<p>When installing OpenSlice the API endpoints can be browsed at:  <pre><code>[YOURDOMAIN]/tmf-api/swagger-ui/index.html?urls.primaryName=tmf-api-638-ServiceInventoryManagement-v4.0.0\n</code></pre></p> <p>endpoint examples:</p> <p>/serviceInventory/v4/service List or find Service objects</p>"},{"location":"naas/service_inventory/#probe-further","title":"Probe further","text":"<ul> <li>See Ordering Services from catalogs</li> <li>See Service Design</li> </ul>"},{"location":"naas/service_ordering/","title":"Service Ordering","text":"<p>Intended Audience: OpenSlice Service Designers, Administrators</p> <p>Customer Facing Service Specifications - or also CFSSpec (organized in Service Categories) are exposed to users for Service Orders.</p> <p>The Service Order process is a structured sequence of steps initiated by a customer's Service Order request for a specific service, aimed at delivering and activating the desired service or services (if it is a service bundle), as well as its related services. It begins with the customer submitting a service request through OpenSlice Services portal or the Service Order API, specifying the necessary details such as service specification, configurations, and any specific requirements.</p> <p>The request is then validated and verified for completeness and eligibility by an administrator which marks the Service Order as ACKNOWLEDGED otherwise it rejects it. </p> <p>Once ACKNOWLEDGED, the service order is processed by OpenSlice orchestration system (OSOM), which schedules/automates the provisioning of the required resources and configurations, coordinating across various components such as MANO controllers for virtual network functions (VNFs), or Containerized controllers  or any 3rd party controllers or services or even physical infrastructure. The OpenSlice orchestration system ensures that all dependencies are managed and that the service is correctly configured.</p> <p>After provisioning, the service is activated and handed over to the customer, . This end-to-end process ensures a seamless, efficient, and automated delivery of services, enhancing customer satisfaction and operational efficiency.</p> <p>Ongoing monitoring and other actions can be performed by the Life Cycle management rules</p> <p>Future developments:  In future releases it might be possible the ongoing monitoring and support provided to ensure continuous performance and reliability. The service could undergo a series of tests to ensure it meets the specified performance metrics and SLAs before delivering</p>"},{"location":"naas/service_ordering/#ui-management","title":"UI management","text":"<p>Through the menu and dedicated forms the administrator can manage the Service Orders. Various examples in this document will guide you to the usage and the management of the Service Orders.</p>"},{"location":"naas/service_ordering/#api-exposed","title":"API exposed","text":"<p>When installing OpenSlice the API endpoints can be browsed at:  <pre><code>[YOURDOMAIN]/tmf-api/swagger-ui/index.html?urls.primaryName=tmf-api-641-ServiceOrdering-v4.0.0\n</code></pre></p> <p>endpoint examples:</p> <p>/serviceOrdering/v4/serviceOrder List or find ServiceOrder objects</p>"},{"location":"naas/service_ordering/#probe-further","title":"Probe further","text":"<ul> <li>See Ordering Services from catalogs</li> <li>See Service Design</li> </ul>"},{"location":"naas/service_spec/","title":"OpenSlice Service Specification","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>OpenSlice offers complete management of Service Specifications.</p> <p>Service Specification is an entity that describes a service offering. There are two types of Service Specifications:</p> <ul> <li>Resource Facing Service Specification (RFSS)</li> <li>Customer Facing Service Specification (CFSS)</li> </ul>"},{"location":"naas/service_spec/#resource-facing-service-specification","title":"Resource Facing Service Specification","text":"<p>Is a Service that It exposes a resource Specification as a Service. (For example expose a Network Service Descriptor as a Service)</p>"},{"location":"naas/service_spec/#customer-facing-service-specification","title":"Customer Facing Service Specification","text":"<p>Customer Facing Service Specifications - or also CFSSpec (organized in Service Categories) are exposed to users for Service Orders. Usually it exposes other CFSSpec(as a Service Bundle) or other RFSSpecs</p>"},{"location":"naas/service_spec/#definition","title":"Definition","text":"<p>Usually a Service Specification has the following aspects:</p> <ul> <li>Name, Description, Version</li> <li>Marked as a Service Bundle: Combines multiple services into a single offering to provide added value to customers.</li> <li>if is is a Bundle then you must add Related Service Specifications</li> <li>If it is a Resource Facing Service Specification has multiple related Resource Facing Service Specifications</li> <li>Characteristics: a list of service characteristics and their type (TEXT, INTEGER, etc)</li> <li>Also they can be exposed as \"Configurable\" so to allow to end-users during the Service Order to select or type values</li> <li>A logo, displayed if needed in the Service Marketplace</li> <li>Any attachments that further help the user</li> <li>Life Cycle Rules that determine the behavior of the Service and instrument the Service Orchestrator. More on Life Cycle Rules here</li> </ul> <p>Service Designers can create a Service Specification from scratch or use some templates:</p> <ul> <li>Create a Service based from a Network Service Descriptor (NSD)</li> <li>Create a Service based on a Kubernetes Operator</li> <li>Create a Service based on the GSMA GST - Generic Slice Template</li> </ul>"},{"location":"naas/service_spec/#ui-management","title":"UI management","text":"<p>In the UI this looks like the following.</p> <p>Through the menu and dedicated forms the administrator can manage the Service Specifications. Various examples in this document will guide you to the usage and the design of the services.</p>"},{"location":"naas/service_spec/#api-exposed","title":"API exposed","text":"<p>When installing OpenSlice the API endpoints can be browsed at:  <pre><code>[YOURDOMAIN]/tmf-api/swagger-ui/index.html?urls.primaryName=tmf-api-633-ServiceCatalogManagement-v4.0.0\n</code></pre></p> <p>endpoint examples:</p> <p>/serviceCatalogManagement/v4/serviceSpecification List or find ServiceSpecification objects</p>"},{"location":"naas/service_spec/#example-use-case","title":"Example Use Case","text":"<p>Scenario: A service provider wants to offer a new managed XXXX service to enterprise customers.</p> <ul> <li>Service Definition:  Create a service specification template for the XXXX service, including specifications for bandwidth, network features, and performance metrics.</li> </ul>"},{"location":"naas/service_spec/#probe-further","title":"Probe further","text":"<ul> <li>Read the model of Service Catalogs in TMF TMF633 Service Catalog API User Guide v4.0.0</li> <li>Check a demo of the API here</li> <li>Check a demo of the Service Specifications in Catalog and Categories here (You need to login - see main guide page)</li> <li>Check the GSMA GST</li> </ul>"},{"location":"naas/so_intro/","title":"Service Orchestration","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>Definition: The orchestration engine evaluates the request, determines the necessary resources, and initiates the automated workflows.It interacts with underlying controller components (e.g. 5G Core, Radios, Containerized controllers, NFV, SDN controllers) to provision and configure the required network functions and connectivity.</p> <p>OpenSlice end-to-end (E2E) service orchestration framework is designed to manage and automate the entire lifecycle of services across multiple domains and technologies. For delivering, Network as a Service (NaaS) OpenSlice automates and manages the entire lifecycle of network services, from provisioning to monitoring and decommissioning, while ensuring seamless integration, operation, and delivery of services from the initial request to the final delivery, spanning all involved components and layers.</p> <p>As next image depicts, service orchestrators follow some predefined workflows. OpenSlice end-to-end (E2E) service orchestrator follows some predefined workflows to manage a service lifecycle (They are described in BPMN language and included in our orchestration engine).</p> <p></p> <p>This section provides a high level overview of the Service Orchestration process.</p>"},{"location":"naas/so_servicespec_to_services_kubernetes/","title":"Exposing Kubernetes services","text":"<p>This section is WIP.</p>"},{"location":"naas/so_servicespec_to_services_nfv/","title":"From Service Specification to NFV based services","text":"<p>After a Service Order completion, active services with their additional characteristics are found:</p> <ul> <li>From the Order Items of a selected Service order</li> <li>from the menu of Service inventory and then selecting details of each service</li> <li>through the Service Inventory API (TMF 638 - Service Inventory Management ) </li> </ul> <p>Openslice creates a Service for the requested CFS. Customers make Service Orders and Openslice instantiates the requested Service Specifications for each Service Order Item of a Service Order. Running Services instantiated by Openslice, reside in Openslice Service Inventory. The following picture displays how Service Specifications are related to Running Services and how Running Services relate with instantiated running Network Services. </p> <p></p> <p>There is a hierarchy of services. Usually an Instantiated CFS has Supporting Services some Instantiated RFSs. Then an Instantiated RFS is related to some running NS managed by NFVO</p>"},{"location":"naas/so_servicespec_to_services_nfv/#interacting-with-an-active-service-day-2-config","title":"Interacting with an Active Service (Day 2 config)","text":"<p>In some cases, if the underlying service is configured with actions (for example in OSM Day 2 primitive actions), there are characteristics that can be modified.  Usually they are named like : ::Primitive:: <p>The user can edit the characteristic with a new value. The value is propagated through the OSOM and NFVO down to the related VNF.</p>"},{"location":"naas/so_servicespec_to_services_nfv/#terminatinginactivating-a-service","title":"Terminating/Inactivating a service","text":"<p>You can terminate the service with one of the following processes:</p> <ul> <li>Select the related Service Order and terminate the Order Item. This will delete all the underlying related active services. The Order goes to ACKNOWLEDGED-&gt;INPROGRESS-&gt;COMPLETE</li> <li>To terminate or inactivate a service, select the specific service from the inventory, press Edit and set the State either to Inactive or Terminated</li> </ul> <p>Warning: if you terminate or inactivate a service the action cannot be undone. </p>"},{"location":"naas/so_servicespec_to_services_nfv/#uml-sequence-diagram","title":"uml: sequence diagram","text":"<p>Here I will embed PlantUML markup to generate a sequence diagram.</p> <p>I can include as many plantuml segments as I want in my Markdown, and the diagrams can be of any type supported by PlantUML.</p>"},{"location":"naas/nfv/intro/","title":"Introduction","text":"<p>This section is WIP.</p>"},{"location":"naas/nfv/nfvcatalogs/","title":"Nfvcatalogs","text":"<p>This section is WIP.</p>"},{"location":"naas/nfv/nfvservices/","title":"NFV Services","text":"<p>NFV Services are managed through a dedicated UI, i.e. the NFV portal (eg., http://portal.openslice.eu/nfvportal).</p> <p>Users are able through this portal to manage their NFV artifacts towards the NFVO (for example onboard VNFs and NSDs to a target OSM).</p> <p>OpenSlice NFV Services target to accommodate the following envisaged user roles. All users are assumed to be Authenticated:</p> <ul> <li>NFV developer: This role is responsible to upload VNF and NSD Descriptors in the OpenSlice services towards NFVO, like OSM</li> <li>Services administrator: This role represents the user that are responsible for maintenance of the OpenSlice services</li> </ul> <p>(obsolete:)</p> <ul> <li>Testbed provider: This role represents users that are responsible for testbed administration, configuration, integration, adaptation, support, etc</li> <li>Experimenter: This role represents the user that will utilize our services and tools to deploy an experiment. That is the experiment description in terms of e.g.: NSD (Network Service Descriptor) or TOSCA Specification (in future versions)</li> </ul> <p>Finally an anonymous user role exists who has some really simple usage scenarios (e.g. signup through the portal)</p> <p>During the onboarding process the following occurs:</p> <ul> <li>A NFV developer submits a NFV archive (VNF or NSD) (he can later manage if needed some metadata)</li> <li>The administrator can manage the NFV artifact (e.g. edit it)</li> <li>The administrator On-Boards the NFV artifact to the target MANO</li> <li>The administrator can optionally mark the NFV:</li> <li>As public in order to be publicly visible by all portal users</li> <li>As Certified which means this is certified by a certain entity</li> </ul>"},{"location":"naas/nfv/nfvservices/#request-a-new-nsd-deployment-this-is-different-in-comparison-to-services","title":"Request a new NSD deployment (this is different in comparison to Services)","text":"<p>A developer requests a new network service deployment (which NSD, tentative dates, target infrastructure, etc.). The request is marked as UNDER_REVIEW</p> <ul> <li>The administrator is notified about the new request and he has the following options:</li> <li>Schedule the deployment for the requested dates or propose other dates. The request is marked as SCHEDULED</li> <li>Reject the request for some reason. The Request is marked as REJECTED</li> <li>Deploy the request to target VIM(s). The Request is marked as RUNNING</li> <li>Finalize the deployment and release resources. The Request is marked as COMPLETED</li> <li>every change of the request-lifecycle the experimenter is notified.</li> </ul>"},{"location":"service_design/catalogs/","title":"Catalogs and Templates","text":"<p>The OpenSlice Service Catalogue (accessible through the API or Services portal) contains the representation of Service Specifications, either created from the provider defining service attributes, or by supporting the GSMA Generic Slice Templates (GST) as well as the VINNI Service Blueprint. The following scenarios are supported by the OpenSlice Service Catalogue.</p>"},{"location":"service_design/catalogs/#createdesign-a-service-specification","title":"Create/Design a Service Specification","text":""},{"location":"service_design/catalogs/#createdesign-a-customer-facing-service-specification-cfss","title":"Create/Design a Customer Facing Service Specification (CFSS)","text":"<p>Customer Facing Service Specification are the services offered to customers. You can create a new Service Specification from the menu. The services created through the UI are Customer Facing Services Specifications (CFSSs). </p> <p>While CFSSs can describe an overall offered service, it must also contain the related realization (how this service is going to be offered). Usually you create a CFSS as a bundle and then you include Service Specification Relationships with Resource Facing Service Specifications (RFSSs) or/and other CFSSs. A CFSS can include multiple RFSS or/and CFSs.</p> <p>For example you can create a CFS spec called \"A 5G Service\" which is a bundle of two other services (include them in Service Specification Relationships) such as 5G eMBB Slice and a Customer VPN. So when the user orders \"A 5G Service\", a 5G eMBB Slice and a Customer VPN will be created during the order.</p>"},{"location":"service_design/catalogs/#assign-resources-as-resource-facing-service-specifications-rfsss","title":"Assign Resources as Resource Facing Service Specifications (RFSSs)","text":"<p>The Resource Facing Service Specifation (RFSS) is the realization of the designed services. It utilizes specific resources to offer the described service.</p> <p>For instance, OpenSlice can utilize Kubernetes and NFV resource to deploy the respective services through the corresponding stack, as seen in Probe further section.</p>"},{"location":"service_design/catalogs/#generic-slice-templates-gst","title":"Generic Slice Templates (GST)","text":"<p>(Offered only as a design for now. THere is no direct implementation to NFV) On October 16th 2019 GSMA published NG.116  Version 2.0 which defines the Generic Network Slice Template (GST). GST is a set of attributes that can characterise a type of network slice/service. GST is generic and is not tied to any specific network deployment. Here is a list of the various attributes of the template:</p> <ul> <li>Availability</li> <li>Area of Service</li> <li>Delay tolerance</li> <li>Deterministic communication</li> <li>Downlink throughput per network slice</li> <li>Downlink throughput per UE</li> <li>Energy efficiency</li> <li>Group communication support</li> <li>Isolation level</li> <li>Location based message delivery</li> <li>Maximum supported packet size</li> <li>Mission critical support</li> <li>MMTel support</li> <li>NB-IoT support</li> <li>Network Slice Customer network functions</li> <li>Number of connections</li> <li>Number of terminals</li> <li>Performance monitoring</li> <li>Performance prediction</li> <li>Positioning support</li> <li>Radio spectrum</li> <li>Reliability</li> <li>Root cause investigation</li> <li>Session and Service Continuity support </li> <li>Simultaneous use of the network slice</li> <li>Slice quality of service parameters</li> <li>Support for non-IP traffic </li> <li>Supported access technologies </li> <li>Supported device velocity </li> <li>Synchronicity</li> <li>Terminal density </li> <li>Uplink throughput per network slice </li> <li>Uplink throughput per UE</li> <li>User management openness</li> <li>User data access </li> <li>V2X communication mode</li> </ul> <p>OpenSlice offers the GST in a format that is machine readable and aligned with the TMF SID model. Here is a tentative approach in JSON : https://labs.etsi.org/rep/osl/code/org.etsi.osl.tmf.api/-/blob/main/src/main/resources/gst.json</p> <p>Providers can clone a GST as e NEST directly in OpenSlice Web portal and the adjust the default attributes to their Service Specification.</p>"},{"location":"service_design/catalogs/#manage-a-service-specification","title":"Manage a Service Specification","text":"<p>You may manage Service Specifications though the respective Web UI.</p>"},{"location":"service_design/catalogs/#assign-a-service-specification-to-service-categories-and-publish","title":"Assign a Service Specification to Service Categories and Publish","text":"<p>You may create Service Categories and from the menu provided to group Specifications under the same context, and then from the Service Category management UI you may assign Specifications to it.</p> <p>You cannot publish a Service Specification directly, but you may include the created Service Category into a public Service Catalog, which is exposed through the Service Marketplace. Doing so, all the assigned Service Specification to the Category are exposed and available for ordering, as well.</p>"},{"location":"service_design/catalogs/#retireremove-a-service-specification","title":"Retire/Remove a Service Specification","text":"<p>Delete it from the assigned Service Category. This action does not delete the actual Service Specification, which can be done from the respective list.</p>"},{"location":"service_design/catalogs/#consume-and-expose-service-specifications-from-other-service-catalogues","title":"Consume and expose Service Specifications from other Service Catalogues","text":"<p>See more on Consuming Services From External Partner Organizations.</p>"},{"location":"service_design/catalogs/#probe-further","title":"Probe further","text":"<p>Design Kubernetes-based Service Specifications</p> <p>Design NFV/OSM-based Service Specifications</p>"},{"location":"service_design/intro/","title":"Service Design in OpenSlice","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>This section offers details on how to design Service Specifications and expose them in Service Catalogs</p> <p>Service Designers create detailed service specifications, which are then managed and exposed in service catalogs. These services are integrated into OpenSlice E2E service orchestration framework to automate and optimize the delivery of network services.</p> <p>OpenSlice can be used to design service specifications for various services, even not networking related services. Here we cover how service designers can expose services related to the NFV world and the containerized world.</p>"},{"location":"service_design/intro/#probe-further","title":"Probe further","text":"<ul> <li>Design and expose services based on containerized resources via the Kubernetes Operator pattern</li> <li>Design and expose services based on NFV artifacts</li> </ul>"},{"location":"service_design/examples/intro/","title":"Introduction","text":"<p>This section contains examples on service design, which contain a step-by-step guide to reproduce them in your local environment.</p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/","title":"Exposing Kubernetes Operators as a Service : Offering \"Calculator as a Service\" through OpenSlice","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>To illustrate the powerful concept of Kubernetes operators and how they can be utilized to offer a service through OpenSlice, let's provide an example of a \"Calculator as a Service.\"</p> <p>This example will demonstrate the flexibility and capabilities of Kubernetes operators in managing custom resources and automating operational tasks.</p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#offering-calculator-as-a-service-through-openslice","title":"Offering \"Calculator as a Service\" through OpenSlice","text":"<ul> <li>We have a service that can accept two integers and an action (SUM, SUB, etc) and returns a result</li> <li>We would like to offer it as a Service through OpenSlice</li> <li>So when a user orders it with some initial parameters, OpenSlice will create it and return the result</li> <li>Also while the service is active, we can do further calculations, until we destroy it.</li> </ul> <p>Assume the following simple CRD of a calculator model accepting two params (spec section) and an action and returning a result (status section)</p> <p>The controller (the calculator code) is implemented in any language and is installed in a Kubernetes cluster</p> <pre><code>apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  name: mycalculators.examples.osl.etsi.org\nspec:\n  group: examples.osl.etsi.org\n  names:\n    kind: MyCalculator\n    plural: mycalculators\n    singular: mycalculator\n  scope: Namespaced\n  versions:\n  - name: v1alpha1\n    schema:\n      openAPIV3Schema:\n        properties:\n          spec:\n            properties:\n              parama:\n                type: integer\n              paramb:\n                type: integer\n              action:\n                type: string\n            type: object\n          status:\n            properties:\n              result:\n                type: integer\n              status:\n                type: string\n            type: object\n        type: object\n    served: true\n    storage: true\n    subresources:\n      status: {}\n</code></pre> <p>Request to the cluster (through e.g. kubectl apply)</p> <pre><code>apiVersion: examples.osl.etsi.org/v1alpha1\nkind: MyCalculator\nmetadata:\n  name: mycalculator.examples.osl.etsi.org\nspec:\n  parama: 170\n  paramb: 180\n  action: 'SUM'\n</code></pre> <p>Response</p> <pre><code>apiVersion: examples.osl.etsi.org/v1alpha1\nkind: MyCalculator\nmetadata:\n\u00a0 creationTimestamp: '2023-12-05T12:26:07Z\u2019\n\n&lt;snip&gt;\n\nstatus:\n\u00a0 result: 350\n\u00a0 status: CALCULATED\nspec:\n\u00a0 action: SUM\n\u00a0 parama: 170\n\u00a0 paramb: 180\n</code></pre> <p>To perform this through OpenSlice as a Service Specification ready to be ordered we need to do the following:</p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#crd-is-saved-automatically-as-resource-specification","title":"CRD is saved automatically as Resource Specification","text":"<p>As soon as the CRD is deployed in the cluster (e.g. by your admin via kubectl or via any installation through the internet) it is automatically transformed and is available in OpenSlice catalogs as a Resource Specification.</p> <ul> <li>See also the fully qualified name of the resource specification. <ul> <li>MyCalculator@examples.osl.etsi.org/v1alpha1@docker-desktop@https://kubernetes.docker.internal:6443/</li> <li>The resource specification name is quite unique, so you can install the CRD in many clusters around the internet. Each CRD on each cluster will appear here, for example:<ul> <li>MyCalculator@examples.osl.etsi.org/v1alpha1@default_cluster@https://10.10.10.8:6443/</li> <li>MyCalculator@examples.osl.etsi.org/v1alpha1@edge1_cluster@https://172.16.10.10:6443/</li> </ul> </li> <li>Having this OpenSlice can manage resources in multiple clusters</li> </ul> </li> </ul> <p></p> <p>See also the detailed characteristics. See how OpenSlice makes all characteristics automatically flat and expanded with key-value style</p> <p></p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#expose-to-users","title":"Expose to Users","text":""},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#create-a-resourcefacingservicespecification","title":"Create a ResourceFacingServiceSpecification","text":"<p>From the UI menu create a new Service Specification</p> <p></p> <p></p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#create-crd-related-characteristics","title":"Create CRD-related characteristics","text":"<ul> <li>We need now to adjust some characteristics of this CRD as Resource Specification.</li> <li>OpenSlice translated automatically the CRD spec in a flat list of characteristics.So the \"spec\" section from the original yaml for example, is now unfold into: spec, spec.parama, spec.paramb, etc. the same for \"status\" object</li> <li>We need to make OpenSlice aware of when the service will be active.<ul> <li>So we go to characteristic _CR_CHECK_FIELD and we define that the field that shows the status of the service is the characteristic \"status.status\" (is a text field) </li> <li>Then we go to _CR_CHECKVAL_AVAILABLE and we define the value CALCULATED, which signals the following: When the characteristic \"status.status\" has the value \"CALCULATED\" then OpenSlice will mark the underlying service as \"ACTIVE\"</li> <li>We need also to define the yaml file that OpenSLice will use to create the new resource in the kubernetes cluster</li> <li>We insert the YAML in the characteristic _CR_SPEC</li> </ul> </li> </ul> <p>The _CR_SPEC is: </p> <pre><code>apiVersion: examples.osl.etsi.org/v1alpha1\nkind: MyCalculator\nmetadata:\n  name: mycalculator.examples.osl.etsi.org\nspec:\n  parama: 170\n  paramb: 180\n  action: 'SUM'\n</code></pre> <p></p> <p>However the values are fixed. How do we allow a user to pass parameters through OpenSlice</p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#expose-in-catalog","title":"Expose in Catalog","text":"<p>Create a new CustomerFacingServiceSpecification</p> <pre><code>- Go to the menu Service Specification&gt;New Service Specification\n- Create a service My Calculator and mark it as a Bundle\n- Go to Service Specification Relationships and add MyCalculatorRFS\n- The service will be automatically transformed to a \"CustomerFacingServiceSpecification\"\n- Add the following characteristics as the image shows:\n</code></pre> <p></p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#allow-users-to-pass-new-values-through-openslice","title":"Allow users to pass new values through OpenSlice","text":"<p>We need to Create LCM rules in CustomerFacingServiceSpecification:</p> <pre><code>- The goal of the rules is to allow the user to pass parameters to the actual resource towards the cluster.\n- we will create one rule that will pass the parameters just before creating the service (PRE_PROVISION phase)\n- we will create one rule that will pass the parameters while the service is active (SUPERVISION phase)\n- The rules will be the same\n</code></pre> <p></p> <p>If we see one rule it will look like the following:</p> <p></p> <ul> <li>We need to change the _CR_SPEC characteristic of the referenced ResourceFacingServiceSpecification</li> <li>First bring a block from Service&gt;Relationships&gt;Service Refs and drop the \"Service MyCalculatorRFS\" block</li> <li>Then add a list block from Lists</li> <li>Then add the block that modifies a referenced characteristic from Service&gt;Relationships&gt;Service Refs the block \"Set value to characteristic of a Referenced Service\"</li> <li>Add a block for text _CR_SPEC </li> <li>We use a block that changes a String according to variables Text&gt;\"A formatted text replacing variables from List\"</li> <li>See that we have as Input string the YAML string lines<ul> <li>See that parama, paramb has a %d (they accept integers), action is %s (accepts a string)</li> <li>See that the variables tha will replace the %d, %d and %s are an list<ul> <li>the first %d will be replaced with the value from characteristic spec.parama</li> <li>the second %d will be replaced with the value from characteristic spec.paramb</li> <li>the %s will be replaced with the value from characteristic spec.action</li> </ul> </li> </ul> </li> </ul> <p>If we see the SUPERVISION rule it will look like the following:</p> <ul> <li>It contains also the Result field, which takes the value from the referenced service</li> <li>Add a block for the Result field from Service&gt;Number blocks</li> <li>Add a  str to int block from Number blocks</li> <li>Add Service&gt;Relationships&gt;Service Refs and drop the input block [Service MyCalculatorRFS] \"Get Service details from current context running service\" and select from the drop down the \"serviceCharacteristicValue\"</li> <li>Add as name the \"status.result\" </li> </ul> <p></p> <p></p> <p>Expose it then to a catalogue for orders through the Service Categories and Service Catalogs</p> <p></p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#order-the-service","title":"Order the Service","text":"<p>When a user orders the service, it will look like this:</p> <p></p> <ul> <li>After the Service Order we have 2 services in service inventory on CFS and on RFS. Both have references to values</li> <li>OpenSlice (via CRIDGE service) updates the Resource in Resource Inventory and OSOM updates the Services in Service Inventory</li> <li>The Actual resources are running in the Kubernetes cluster managed by OpenSlice</li> <li>The result is in the characteristic status.result of the running service</li> </ul> <p></p> <p></p>"},{"location":"service_design/examples/calculator_crd_aas/calculator_crd_aas/#modify-the-running-service","title":"Modify the running service","text":"<p>The user can modify the service</p> <p></p> <ul> <li>After a while the update is applied to the cluster, the controller will pick up the resource update and patch the resource</li> <li>OpenSlice (via CRIDGE service) updates the Resource in Resource Inventory and OSOM updates the Services in Service Inventory</li> <li>The result will be available to the respective characteristic \"Result\" after a few seconds, as need to go through various steps (OpenSlice orchestrator, down to kubernetes, to Calculator controller and back)</li> </ul> <p></p>"},{"location":"service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas/","title":"Example: Offer Jenkins as a Service via Openslice","text":""},{"location":"service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas/#design-the-jenkins-resource-facing-service","title":"Design the Jenkins (Resource-Facing) Service","text":"<p>Before reading this example please make sure that you went through the Design Helm as a Service </p> <p>In this example, we will use the <code>Kind: Application</code> of ArgoCD and create a ResourceFacingServiceSpecification (RFSS) for Jenkins. Eventually, we will offer Jenkins as a Service.</p> <pre><code>1. Go to Service Specifications\n2. Create New Specification\n3. Provide a Name, eg. jenkinsrfs\n4. Go to Resource Specification Relationships\n5. Assign **Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/** as a related Resource Specification\n</code></pre> <p>Please note that the https://10.10.10.144:6443/ part of the Resource Specification's name will vary in different Kubernetes environments. </p> <p></p> <p>Now, we shall focus on the Characteristics' configuration of the created Service Specification. This can be achieved from the the \"Service Specification Characteristics\" tab.</p> <p>Specifically, we need to map the lifecycle of ArgoCD Application (e.g. Progressing, Healthy, etc.) to TMF Resource State (e.g. reserved, active, etc.).</p> <p>In ArgoCD, the field status.health.status has the value that we need to check (Healty, Progressing, etc) for the lifecycle of the application. This is captured by the _CR_CHECK_FIELD characteristic.</p> <p>Also, the different ArgoCD lifycycle states must be captured by the respective _CR_CHECKVAL_xxx characteristics, as show in the figure below:</p> <p></p> <p>After the state mapping, we must provide the template that ArgoCD will use to deploy the Jenkins HELM Chart as an ArgoCD application. For this, we must populate the _CR_SPEC characteristic. The _CR_SPEC can be designed first in a YAML or json editor for better parsing. </p> <p>Let's see a YAML definition:</p> <pre><code>apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  finalizers:\n  - resources-finalizer.argocd.argoproj.io\n  name: openslice-jenkins\n  namespace: argocd\nspec:\n  project: default\n  destination:\n    namespace: opencrdtest\n    name: in-cluster\n  source:\n    repoURL: https://charts.jenkins.io\n    targetRevision: 5.7.21\n    chart: jenkins\n    helm:\n      values: |\n        controller:\n         serviceType: ClusterIP\n        persistence:\n         enabled: false\n  syncPolicy:\n    automated:\n      prune: true\n      selfHeal: true\n      allowEmpty: false\n    syncOptions:\n    - Validate=false\n    - CreateNamespace=true\n    - PrunePropagationPolicy=foreground\n    - PruneLast=true\n    - RespectIgnoreDifferences=true\n</code></pre> <p>NOTE 1: The above template assumes that the Jenkins Server will acquire a ClusterIP. The user should handle the external exposure and access of the Jenkins Server, depending on its cluster configuration. Also, persistency of the data is disabled to facilitate the deployment without the need to define storage classes and volumes, as this serves as an example. </p> <pre><code>helm:\n  values: |\n    controller:\n      serviceType: ClusterIP\n    persistence:\n      enabled: false\n</code></pre> <p>NOTE 2: On each installation, OSOM will change the name of the resource in order to be unique (will have a UUID), instead of \"openslice-jenkins\".</p> <pre><code>name: openslice-jenkins\n</code></pre> <p>NOTE 3: The namespace that ArgoCD will use to deploy the Jenkins HELM Chart is the \"opencrdtest\".</p> <pre><code>destination:\n  namespace: opencrdtest\n</code></pre> <p>The latter implies that ArgoCD will always install Jenkins in the same namespace.</p> <p>To avoid this we will create a simple LCM rule (pre-provision) to change the namespace accordingly with a unique ID, generated with every new Service Order.</p> <p>The LCM rule can be created from the \"Life Cycle Rules\" tab, pressing the \"Create new rule\" button. The following image contains the LCM rule that needs to be created for this purpose:</p> <p></p> <p>Let's create it step-by-step:</p> <ol> <li>Drag-Drop the _CR_SPEC block (Set characteristic value) of jenkinsrfs from the Service &gt; Text blocks</li> <li>Drag-Drop the Text &gt; Formatted text block and attach it after the block from the previous step</li> <li>Drag-Drop the Text &gt; Multi-line text input block and attach it at the Input(String) connector of the block from the previous text </li> <li>Copy paste the previously provided YAML text</li> <li>Change the spec:destination:namespace property to the value %s</li> <li>Drag-Drop the Lists &gt; Create list block, delete the 2 extra items (click the gear icon). Attach it at the Variables(Array) connector of the formatted text block from the previous step.</li> <li>Drag-Drop the Service &gt; Context &gt; Current Service Order block and select the ID from the drop-down menu. Attach it to the List block of the previous step.</li> <li>Save the PRE_PROVISION Rule</li> </ol>"},{"location":"service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas/#expose-the-jenkins-customer-facing-service-to-the-users","title":"Expose the Jenkins (Customer-Facing) Service to the users","text":"<p>To expose a service towards the users, the designer must create the respective CustomerFacingServiceSpecification, by using the previously designed RFSS as a related service.</p> <ol> <li>Go to Service Specifications</li> <li>Create New Specification</li> <li>Create a Jenkins service, mark as Bundle (to enable Service Specification Relationships) and save it </li> <li>Go to the \"Service Specification Relationships\" tab and assign Jenkinsrfs</li> <li>(Optionally) Add a logo, from the respective \"Logo\" tab, if you wish</li> </ol> <p></p> <p></p> <p>Next, the designer must expose it through an already created Service Catalog and Service Category so as to make it visible to the users, thus available for ordering.</p> <p></p>"},{"location":"service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas/#order-the-jenkins-service","title":"Order the Jenkins Service","text":"<p>Order the service from the previously assigned Service Catalog &gt; Service Category. </p> <p>As soon as the Service Order is in ACKNOWLEDGED state (may require user intervention as the initial Service Order state is \"INITIAL\"), it will be processed and eventually completed rendering the services active, as seen in the figure below:</p> <p></p>"},{"location":"service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas/#access-the-jenkins-installation","title":"Access the Jenkins installation","text":"<p>Starting from the Service Order overview and specifically the Order Item #1 tab &gt; Supporting Services, select the ResourceFacingService (jenkinsrfs).</p> <p>Accordingly, the ResourceFacingService has supporting resources from the resource inventory. The latter are available through the \"Supporting Resources\" tab.</p> <p></p> <p>The supporting resources of the Jenkins service are:</p> <ul> <li>A resource reference to the application (e.g. cr_tmpname...)</li> <li>A secret resource (e.g. cr87893...). </li> </ul> <p>Select the secret resource, which will navigate you to the Resource Inventory page of OpenSlice. There, you may find the login credentials encoded as Base64. </p> <p></p> <p>Use a Base64 decoder to parse the credentials and use them to login in your Jenkins installation, through the exposed Jenkins Server UI.</p> <p>Exposing Jenkins externally is a matter of cluster configuration and request (nodeport, load balancing, etc), thus is not a topic for this example</p>"},{"location":"service_design/kubernetes/design_helm_aas/","title":"Expose Helm charts as Service Specifications","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>This section introduces ways to manage Helm charts installations via OpenSlice Service Specifications and Service Orders.</p>"},{"location":"service_design/kubernetes/design_helm_aas/#kubernetes-and-helm-introduction","title":"Kubernetes and Helm Introduction","text":"<p>Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact though the Kubernetes API and it has a set of objects ready for use out of the box.</p> <p>Helm is a tool that automates the creation, packaging, configuration, and deployment of Kubernetes applications by combining your configuration files into a single reusable package.</p> <p>At the heart of Helm is the packaging format called charts. Each chart comprises one or more Kubernetes manifests -- and a given chart can have child charts and dependent charts, as well. Using Helm charts:</p> <ul> <li>Reduces the complexity of deploying Microservices</li> <li>Enhances deployment speed</li> <li>Developers already know the technology</li> </ul> <p>Below the core advantages in using Helms with OpenSlice are presented:</p> <ul> <li>There are many Helm charts and Helm repositories there that are ready to be used</li> <li>Enable loose coupling and more orchestration scenarios</li> <li>Developers create and deploy applications in things they already know (e.g. Helm charts)</li> <li>Usage of the TMF models as wrapper entities around Helm charts</li> </ul> <p>Also, OpenSlice can expose them in service catalogs and deploy them in complex scenarios (Service Bundles) involving also other systems:</p> <ul> <li>Include e.g. RAN controllers, </li> <li>Pass values through life cycle rules from one service to another, </li> <li>Manage multiple Helms in multiple clusters</li> </ul>"},{"location":"service_design/kubernetes/design_helm_aas/#the-installation-of-helm-charts-is-based-on-openslice-crd-support","title":"The installation of HELM charts is based on OpenSlice CRD support","text":"<p>Please read more here.</p> <p>For installing HELM charts we will use ArgoCD a well known Kubernetes-native continuous deployment (CD) tool. </p> <p>ArgoCD is a Kubernetes-native continuous deployment (CD) tool</p> <p>While just deploying HELM charts is just a scenario for ArgoCD , in future one can exploit it for many things</p> <p>Despite some other tools like FluxCD, it provides also a UI which is useful for management and troubleshooting</p> <p>We will mainly use the CRD of <code>Kind: Application</code> that ArgoCD can manage</p> <p>Before proceeding, install ArgoCD in your management cluster, by following ArgoCD instructions</p> <p>As soon as you install ArgoCD, OpenSlice is automatically aware for specific new Kinds. The one we will use is is the <code>Kind: Application</code> that ArgoCD can manage under the apiGroup argoproj.io</p> <p>Browse to Resource Specifications. You will see an entry like the following:</p> <p><code>Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/</code></p> <p>see image: </p> <p></p>"},{"location":"service_design/kubernetes/design_helm_aas/#probe-further","title":"Probe further","text":"<p>See the Example: Offer Jenkins as a Service via OpenSlice </p>"},{"location":"service_design/kubernetes/exposing_kubernetes_resources/","title":"Expose and manage Kubernetes Custom Resource Definitions (Operators) in a Kubernetes Cluster","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>OpenSlice is capable of exposing Kubernetes Resources and Definitions as Service Specifications.</p> <p>Use OpenSlice to expose NFV resources in service catalogs and deploy them in complex scenarios (service bundles) involving also other systems:</p> <ul> <li>Include external resources, e.g. RAN controllers</li> <li>Manage multiple NSDs in linked NFVOs (OSM installations)</li> <li>Combine designed services</li> <li>Control the lifecycle of services and pass values from one service to another</li> </ul>"},{"location":"service_design/kubernetes/exposing_kubernetes_resources/#awareness-for-crds-and-crs-in-cluster","title":"Awareness for CRDs and CRs in cluster","text":"<p>CRDs and CRs can appear (disappear) or change status at any time in a cluster. OpenSlice Resource Inventory need to be aware of these events.</p> <p>When installing OpenSlice you can configure at least one management cluster. OpenSlice connects via a provided kubeconf</p> <ul> <li>On start-up, OSL tries to register this cluster and context to OSL catalogs.</li> <li>After the registration of this cluster as a Resource in OSL OSL is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL</li> <li>Resources created by OpenSlice have labels, e.g. (org.etsi.osl.*)</li> </ul>"},{"location":"service_design/kubernetes/exposing_kubernetes_resources/#expose-crds-as-service-specifications-in-openslice-catalogs","title":"Expose CRDs as Service Specifications in OpenSlice catalogs","text":"<p>A CRD by default is exposed as a Resource Specification</p> <p>To ensure unique names across the clusters that OpenSlice can manage, the name of a CRD is constructed as follows:</p> <p><code>Kind @ ApiGroup/version @ ContextCluster @ masterURL</code></p> <p>For example you might see resource Specifications like:</p> <ul> <li>Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/```</li> <li><code>IPAddressPool@metallb.io/v1beta1@kubernetes@https://10.10.10.144:6443/</code></li> <li><code>Provider@pkg.crossplane.io/v1@kubernetes@https://10.10.10.144:6443/</code></li> </ul> <p>All attributes of the CRD are translated into characteristics</p> <p>The following specific characteristics are added:</p> <pre><code>- _CR_SPEC: Used for providing the json Custom Resource description to apply\n- _CR_CHECK_FIELD: Used for providing the field that need to be checked for the resource status\n- _CR_CHECKVAL_STANDBY: Used for providing the equivalent value from resource to signal the standby status\n- _CR_CHECKVAL_ALARM: Used for providing the equivalent value from resource to signal the alarm status\n- _CR_CHECKVAL_AVAILABLE: Used for providing the equivalent value from resource to signal the available status\n- _CR_CHECKVAL_RESERVED: Used for providing the equivalent value from resource to signal the reserved status\n- _CR_CHECKVAL_UNKNOWN: Used for providing the equivalent value from resource to signal the unknown status\n- _CR_CHECKVAL_SUSPENDED: Used for providing the equivalent value from resource to signal the suspended status\n</code></pre> <ol> <li> <p>Create a new Service Specification and use this Resource Specification in Resource Specification Relationships</p> <ul> <li>Then the Service Specification is saved as ResourceFacingServiceSpecification</li> </ul> <p>1.1. At this stage, you can give values to the characteristics:</p> <pre><code>- _CR_SPEC, \n- _CR_CHECK_FIELD\n- _CR_CHECKVAL_STANDBY\n- _CR_CHECKVAL_ALARM\n- _CR_CHECKVAL_AVAILABLE\n- _CR_CHECKVAL_RESERVED\n- _CR_CHECKVAL_UNKNOWN\n- _CR_CHECKVAL_SUSPENDED\n</code></pre> <p>1.2. You can now create LCM rules if you wish</p> </li> <li> <p>Create a new Service Specification and use the Resource Facing Service Specification in Service Specification Relationships</p> <ul> <li>Then the Service Specification is saved as CustomerFacingServiceSpecification</li> </ul> <p>2.1. At this stage, you can give values to the characteristics: </p> <pre><code>- _CR_SPEC, \n- _CR_CHECK_FIELD\n- _CR_CHECKVAL_STANDBY\n- _CR_CHECKVAL_ALARM\n- _CR_CHECKVAL_AVAILABLE\n- _CR_CHECKVAL_RESERVED\n- _CR_CHECKVAL_UNKNOWN\n- _CR_CHECKVAL_SUSPENDED\n</code></pre> <p>2.2. You can create LCM rules for this new Service Specification</p> <p>2.3. You can expose configurable values for users to configure during service order</p> </li> </ol> <p></p>"},{"location":"service_design/kubernetes/exposing_kubernetes_resources/#service-orchestration-and-crdscrs","title":"Service Orchestration and CRDs/CRs","text":"<p>OSOM - OpenSlice Service Orchestrator, checks the presence of attribute _CR_SPEC at the RFS to make a request for a CR deployment.</p> <ul> <li>_CR_SPEC is a JSON or YAML string that is used for the request<ul> <li>It is similar to what one will do with e.g. a kubectl apply</li> <li>There are tools to translate a yaml file to a json</li> </ul> </li> </ul> <p>LCM rules can be used to change attributes of this yaml/json file, before sending this for orchestration</p>"},{"location":"service_design/kubernetes/exposing_kubernetes_resources/#mapping-the-cr-lifecycle-that-is-defined-in-the-crd-with-the-openslice-tmf-based-resource-lifecycle","title":"Mapping the CR lifecycle that is defined in the CRD with the OpenSLice (TMF-based) resource Lifecycle","text":"<p>OpenSlice adds automatically as we see the following characteristics: </p> <pre><code>- _CR_CHECK_FIELD\n- _CR_CHECKVAL_STANDBY\n- _CR_CHECKVAL_ALARM\n- _CR_CHECKVAL_AVAILABLE\n- _CR_CHECKVAL_RESERVED\n- _CR_CHECKVAL_UNKNOWN\n- _CR_CHECKVAL_SUSPENDED\n</code></pre> <p>These characteristics instrument OpenSlice services to manage and reflect the lifecycle of a kubernetes resource to OpenSlice's (TMF based) lifecycle</p> <ul> <li>_CR_CHECK_FIELD: The name of the field that is needed to be monitored in order to monitor the status of the service and translate it to TMF resource status (RESERVED AVAILABLE, etc) </li> <li>_CR_CHECKVAL_STANDBY: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state STANDBY (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>_CR_CHECKVAL_ALARM: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state ALARMS (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>_CR_CHECKVAL_AVAILABLE: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state AVAILABLE (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>_CR_CHECKVAL_RESERVED: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state RESERVED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>_CR_CHECKVAL_UNKNOWN: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state UNKNOWN (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> <li>_CR_CHECKVAL_SUSPENDED: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state SUSPENDED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) </li> </ul>"},{"location":"service_design/kubernetes/exposing_kubernetes_resources/#probe-further","title":"Probe further","text":"<ul> <li>See examples of exposing Kubernetes Operators as a Service via OpenSlice:<ul> <li>Offering \"Calculator as a Service\"</li> <li>Offering \"Helm installation as a Service\" (Jenkins example)</li> </ul> </li> <li>Learn more about CRIDGE, the service in OpenSlice that manages CRDs/CRs</li> </ul>"},{"location":"service_design/kubernetes/intro/","title":"OpenSlice and support for kubernetes","text":"<p>This section contains information on how Service Designers can expose Kubernetes resources as services</p>"},{"location":"service_design/lcmrules/examples/","title":"Examples of Rules","text":"<p>The following images provide some examples of rules.</p>"},{"location":"service_design/lcmrules/examples/#define-variables-according-to-cases","title":"Define variables according to cases","text":"<p>In the following example we :</p> <ul> <li>define a String variable. </li> <li>Then according to the Area of Service selected from the Service Order of the Service Specification we need to define it properly.</li> <li>We output the value to the OSOM Log</li> <li>Define dynamically the value of another parameter (This is fictional) and then do some other condition check</li> </ul> <p>The strAreaCodes could be passed then e.g. to NFVO for instantiation of services to these cells.</p> <p></p>"},{"location":"service_design/lcmrules/examples/#define-rules-to-create-a-resource-for-a-kubernetes-operator","title":"Define Rules to create a resource for a Kubernetes Operator","text":"<ul> <li>Modify the _CR_SPEC characteristic</li> <li>Add an \"Input with variables block\"</li> <li>Add a multiline Text block</li> <li>Mark with %s, %d, etc the parameters to modify with some action</li> <li>Add a list with the variables and their values</li> </ul> <p>in the example we modify a YAML spec with parama, paramb, action values from the characteristics spec.parama, spec.paramb, spec.action</p> <p></p>"},{"location":"service_design/lcmrules/examples/#define-complex-osm-configs-for-day-0","title":"Define complex OSM configs for DAY 0","text":"<p>The following displays some complex examples for defining the parameters to pass to the NFV. In this case is OSM.</p> <ul> <li> <p>NOTE: The OSM_CONFIG characteristic of a service is the one that it is used in orchestration to instantiate NS from OSM</p> </li> <li> <p>check the variable strTargetsParam. It is passed to the variable strOsmConfig3 which is executed if the Number of Cameras is more than 100. </p> </li> <li>if the Video quality requested is 3, then the Maximum Namber of camers will be 8. Check the OSM detailed configuration block and its syntax.</li> <li>if the Video quality requested is 2, we use a simpler OSM Config block to configure the parameter OSM_CONFIG. We just injected a json text ( watch the Escape of the string for the Quotes!)</li> <li>if the Video quality requested is 1, again we use a simpler OSM Config block to configure the parameter OSM_CONFIG. We use as injected json text a variable constructed later</li> </ul> <p></p>"},{"location":"service_design/lcmrules/examples/#define-and-instantiate-different-services-according-to-service-order-request","title":"Define and instantiate different services according to Service Order request","text":"<p>In the following example we would like to offer a service either as Platinum, Gold or Silver. Depending on the selection we need to instantiate different services.</p> <p>There are different ways to accomplish this:</p> <ul> <li>create dynamically New Service Orders of RFSs with equivalent quality of Services</li> <li>change for example the VIMs that you deploy the NS</li> <li>change the NSD (that is use different VNFs)</li> </ul> <p>The following image displays for example the latter case.</p> <p></p>"},{"location":"service_design/lcmrules/examples/#call-an-external-restful-service","title":"Call an external RESTful service","text":"<p>This is useful in cases for example of alarms , external logging, calling other services e.g. email or even a complex algorithm written in other language e.g. call an external service and get a result. (service e.g. a Python service)</p> <p></p> <p></p>"},{"location":"service_design/lcmrules/examples/#create-new-service-orders","title":"Create New Service Orders","text":"<p>The following example calls to Order a New Service Specification with specific Parameter Values</p> <p></p>"},{"location":"service_design/lcmrules/intro/","title":"LCM Rules introduction","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>Lifecycle Management Rules: Defining complex conditions and actions during the lifecycle of a service and any necessary modifications throughout the service lifecycle.</p> <p>In Naas LCM Introduction it was presented briefly the LCM Rules concept.</p> <p>This section goes deeply on how Service Designers can use them.</p> <p>LCM Rules are used for defining complex conditions and actions during the lifecycle of a service. In Openslice, there are five types of rules defined:</p> <ul> <li>PRE_PROVISION</li> <li>CREATION</li> <li>AFTER_ACTIVATION </li> <li>SUPERVISION </li> <li>AFTER_DEACTIVATION </li> </ul> <p>The following figure displays the different phases that the rules are performed, during the lifecycle of a Network Slice Instance.</p> <p></p> <ul> <li>PRE_PROVISION rules: Run only once just before creating a service with a given priority.</li> <li>CREATION rules: Run while the referenced service dependencies of a service are created.</li> <li>AFTER_ACTIVATION rules: Run only once just after a service get the ACTIVE state.</li> <li>SUPERVISION rules: Run when a characteristic of a service is changed and the service is in the ACTIVE state.</li> <li>AFTER_DEACTIVATION rules: Run only once just after a service get the INACTIVE/TERMINATED state.</li> </ul> <p>In general the rules allow to perform many actions during service LCM. Below, there are some examples:</p> <ul> <li>Modify service specification parameters before the instantiation of a service (or during operation) based on other dependencies. These parameters might be part of other services already included in Service order.</li> <li>Translate GST/NEST parameter values to other values passed later to NFVO for instantiation or control.</li> <li>Define complex OSM Configs based on other dependencies and passing variables.</li> <li>Define any dependencies when creating the referenced services.</li> <li>Dynamically include new service dependencies.</li> <li>Create new service orders so include dynamically other services.</li> <li>Call external (RESTful) services (via http(s), define payload, examine response).</li> </ul>"},{"location":"service_design/lcmrules/intro/#examine-if-the-rules-are-executed-successfully","title":"Examine if the rules are executed successfully","text":"<p>Rules are transformed automatically to executable code (currently is Java). If a rule is performed successfully  or has any issues (e.g. unexpected syntax errors or exceptions) appear in OSOM log files and also tey are attached as Notes to the running Service.</p>"},{"location":"service_design/lcmrules/intro/#lcm-rules-and-osom-service-orchestration","title":"LCM Rules and OSOM Service Orchestration","text":"<p>OSOM is the responsible service for executing the rules on a specific phase. The following image explains the design in the BPMN phases:</p> <p></p>"},{"location":"service_design/lcmrules/intro/#define-rules","title":"Define Rules","text":"<p>Rules are defined when designing a Service Spec. Here is an example of a list of rules:</p> <p></p> <p>Execution order of rules on a specific phase is random</p> <p>NOTE: There is a priority field. The lower the number the highest the priority of rule execution. For example Rule with priority 0 will run before rule with priority 1.</p>"},{"location":"service_design/lcmrules/intro/#definition-language","title":"Definition language","text":"<ul> <li>The visual language that Openslice used is based on Google's Blockly (see https://developers.google.com/blockly)</li> <li>The blockly graph is automatically translated to Java internally and then dynamically executed during orchestration phases.</li> </ul> <p>The following figure is an example of such a rule design. The rule for example will run in PRE_PROVISION phase:</p> <p></p> <ul> <li>The goal of the above rule is to properly define a variable AreaCodes given the chosen AreaOfService from a Service Order.</li> <li>On the right side the user can define some rule properties or observe the underlying generated java code.</li> </ul>"},{"location":"service_design/lcmrules/intro/#the-blocks-library","title":"The Blocks Library","text":"<p>See our LCM Blocks specification</p>"},{"location":"service_design/lcmrules/intro/#probe-further","title":"Probe further","text":"<ul> <li>Check our examples for more usages</li> <li>See next the complete specification</li> </ul>"},{"location":"service_design/lcmrules/specification/","title":"LCM Blocks Specification","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>The following images describe some blocks found in the library.</p> <p>Blockly has syntax rules. It helps with colours to define them. </p> <p>So for example a parameter that is a Number cannot be combined with a String. Will need some conversion first</p> <p> </p>"},{"location":"service_design/nfv/design_nfv_services/","title":"Design NFV services","text":"<p>Intended Audience: OpenSlice Service Designers</p> <p>OpenSlice is capable of exposing NFV-related resources (VNFs/NSDs) as Service Specifications.</p> <p>Use OpenSlice to expose NFV resources in service catalogs and deploy them in complex scenarios (service bundles) involving also other systems:</p> <ul> <li>Include external resources, e.g. RAN controllers</li> <li>Manage multiple NSDs in linked NFVOs (OSM installations)</li> <li>Combine designed services</li> <li>Control the lifecycle of services and pass values from one service to another</li> </ul>"},{"location":"service_design/nfv/design_nfv_services/#initial-configuration-for-osm-deployment","title":"Initial configuration for OSM deployment","text":"<p>if you have an initial configuration that needs to be applied in the NSD deployment, then you go to the RFS (or CFS) and in Service Specification Characteristics go and edit the OSM_CONFIG characteristic. You can add in the Service Characteristic Value, in the Value field something like the following example which gives a floating IP to a VNF:</p> <pre><code>{ \"nsdId\": \"e855be91-567b-45cf-9f86-18653e7ea\", \"vimAccountId\": \"4efd8bf4-5292-4634-87b7-7b3d49108\" , \"vnf\": [ {\"member-vnf-index\": \"1\", \"vdu\": [ {\"id\": \"MyCharmedVNF-VM\", \"interface\": [{\"name\": \"eth0\", \"floating-ip-required\": true }]}]}]}\n</code></pre> <p>or a more complex example (beautify it first if you want to view it, but in the parameter OSM_CONFIG must be minified like the example):</p> <pre><code>{\"nsdId\":\"e855be91-567b-45cf-9f86-18653e7\",\"vimAccountId\":\"4efd8bf4-5292-4634-87b7-7b3d491\",\"vnf\":[{\"member-vnf-index\":\"1\",\"vdu\":[{\"id\":\"haproxy_vdu\",\"interface\":[{\"name\":\"haproxy_vdu_eth1\",\"floating-ip-required\":true}]}]}],\"vld\":[{\"name\":\"pub_net\",\"vim-network-name\":\"OSMFIVE_selfservice01\"},{\"name\":\"management\",\"vim-network-name\":\"OSMFIVE_selfservice01\"},{\"name\":\"lba_net\",\"vim-network-name\":\"lba_net\",\"vnfd-connection-point-ref\":[{\"member-vnf-index-ref\":\"1\",\"vnfd-connection-point-ref\":\"haproxy_private\",\"ip-address\":\"192.168.28.2\"}]},{\"name\":\"backend_net\",\"vim-network-name\":\"backend_net\",\"vnfd-connection-point-ref\":[{\"member-vnf-index-ref\":\"3\",\"vnfd-connection-point-ref\":\"haproxy_public\",\"ip-address\":\"192.168.20.2\"}]},{\"name\":\"lb_sb_net\",\"vim-network-name\":\"lb_sb_net\",\"vnfd-connection-point-ref\":[{\"member-vnf-index-ref\":\"3\",\"vnfd-connection-point-ref\":\"haproxy_private\",\"ip-address\":\"192.168.28.2\"}]},{\"name\":\"breaking_point_Spain\",\"vim-network-name\":\"sb_repo_net\"},{\"name\":\"breaking_point_Greece\",\"vim-network-name\":\"5TONICexternal\"}],\"additionalParamsForVnf\":[{\"member-vnf-index\":\"2\",\"additionalParams\":{\"target_IP\":\"192.168.20.2\"}},{\"member-vnf-index\":\"4\",\"additionalParams\":{\"target1_IP\":\"192.168.21.2\",\"target2_IP\":\"10.154.252.10\"}}]}\n</code></pre> <p>You can leave the Alias and Unit of Measure as is. Check also the \"is Default\" box.</p>"},{"location":"service_design/nfv/design_nfv_services/#osm-ns-lcm-status","title":"OSM NS LCM Status","text":"<p>When a Service is deployed, OpenSlice provides the ability to see the status messages from the NFVO. This status can be regarding NS instantiation, primitive execution, etc.</p> <p>Going to <code>ResourceFacingService (RFS)</code> -&gt; <code>Contextual Features</code> -&gt; <code>MANO NSLCM</code>, you will be able to see a beautified view of the status messages retrieved by OSM.</p> <p></p>"},{"location":"service_design/nfv/design_nfv_services/#day-2-primitive-actions","title":"Day 2 Primitive Actions","text":"<p>NFVOs like OSM allow to perform actions while a service is running, for example change attributes or make actions on a specific VNF. OpenSlice supports the invocation of day 2 primitives using Open Source MANO (OSM). This feature allows users to perform various actions while a service is running, such as changing attributes or executing specific actions on a Network Service's (NSD) Virtual Network Function (VNF). This capability enhances the flexibility and control over network services, making it easier to manage them in real-time.</p>"},{"location":"service_design/nfv/design_nfv_services/#design-a-primitive-to-be-automatically-invoked","title":"Design a Primitive to be automatically invoked","text":"<p>This example involves a primitive invocation at a Service design level. It is usually used, if the designer wants to automatically invoke the Primitive at a specific landmark throughout the service's lifecycle via the LCM rules.</p> <ol> <li>Navigate to the RFSS related to the NSD that contains VNFs with primitives</li> <li>Create a characteristic named Primitive:: , e.g. Primitive::touch <li>Select Value Type: ARRAY</li> <li>Add Service Characteristic Value:      1) alias=primitive, value= (e.g. touch);     2) alias=member_vnf_index, value= (e.g. 1);      3) add the params that the LCM rule (or user) will change in alias the name of param and in value an initial value (e.g. alias=filename, value=myfile.txt); <p>At the above example, when the service is running and the LCM rule (or user) MODIFIES it, i.e. changes the value of the alias=filename, value=myfile.txt, to value=secondfile.txt, the primitive will be executed. Then, inside the VNF a file will be created called secondfile.txt.</p>"},{"location":"service_design/nfv/design_nfv_services/#manually-invoke-a-primitive","title":"Manually invoke a Primitive","text":"<p>Especially, for user-invoked primitives, OpenSlice offers a dedicated UI.</p> <p>To invoke primitives, do the following steps:</p> <ol> <li> <p>When a service is instantiated, go to its <code>ResourceFacingService (RFS)</code> -&gt; <code>Contextual Features</code> -&gt; <code>MANO Primitives List</code>;</p> <p></p> </li> <li> <p>There, you will find that VNF available primitives;</p> </li> <li>Click on the <code>Execute Primitive</code> button of the chosen VNF;</li> <li> <p>On the <code>Execute MANO Primitives</code> window:</p> <ol> <li>Select the desired primitive on the <code>Primitive Parameter Name</code>;</li> <li> <p>Provide the <code>Primitive Parameter Value</code>;</p> <p></p> </li> </ol> </li> <li> <p>Click on the <code>Submit</code> button.</p> </li> </ol> <p>After the previous steps, after a while, you should be able to see the status of the primitive execution above on the same page, on the <code>MANO NSLCM</code> section.</p> <p>IMPORTANT NOTE: As of now, OpenSlice still only supports the invoking of VNF-level primitives. We expect to have VDU-level primitives in the future.</p>"},{"location":"service_ordering/ordering_services/","title":"Service Ordering","text":"<p>Intended Audience: OpenSlice Users</p> <p>This section is WIP.</p>"}]}