diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 20f28f1d47a7888b6d5bd43ca4fed20b2f6aa8a1..a4f7250af07235a5cbb9d7849c9a2733fc95dc12 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -6,7 +6,7 @@ pages: PAGES_BRANCH: gl-pages HTTPS_REMOTE: https://gitlab-ci-token:${ACCESS_TOKEN}@${CI_SERVER_HOST}/rep/${CI_PROJECT_PATH}.git before_script: - - pip install -q mkdocs-material mike + - pip install -q mkdocs-material mkdocs-glightbox mkdocs-markdownextradata-plugin mike - apt-get update -qq && apt-get -qq install -y git > /dev/null - git config --global --replace-all user.name $GITLAB_USER_NAME - git config --global --replace-all user.email $GITLAB_USER_EMAIL diff --git a/doc/alarms_actions.md b/doc/alarms_actions.md index b7715494e2c504668c69d5a690197346fdb7be4b..96c40be97531fbb112c91ef48f321427cceda4ef 100644 --- a/doc/alarms_actions.md +++ b/doc/alarms_actions.md @@ -18,7 +18,7 @@ Alarms can be automatically resolved by specific actions. Today only the followi Usually used to perform a Day2 configuration (towards OSM). To use it, Create a New Action Specification Name=execDay2 as following -[](./images/alarms_actions/day2actionspec.png) + Now make a Service Order for your service. In this example ςε used a cirros NSD @@ -26,7 +26,7 @@ Now make a Service Order for your service. In this example ςε used a cirros NS Create a New Action Rule for the running services as the following example: -[](./images/alarms_actions/action_rule_exampleday2.png) + The scope is the running cirros service. diff --git a/doc/architecture/architecture.md b/doc/architecture/architecture.md index 9181348600e7706d51ec95802c023ccc68a3982b..3847dfdbb9c40df35dac7aa1e92bf28c593749db 100644 --- a/doc/architecture/architecture.md +++ b/doc/architecture/architecture.md @@ -1,49 +1,60 @@ # Architecture - -Openslice offers the following main functionalities: +## High-Level Introduction +<!-- OpenSlice offers the following main functionalities: * Service Catalog Management: A CSP will have the ability to manage the Service Catalog Items, their attributes , organize in categories and decide what to make available to Customers * Services Specifications: A CSP will be able to manage Service Specifications * Service Catalog Exposure: A CSP will be able to expose catalog to customers and related parties * Service Catalog to Service Catalog: Openslice able to consume and provide Service Catalog items to other catalogs * Service Order: The Customer will be able to place a Service Order -* Service Inventory: The Customer and Provider will be able to view deployed Services status +* Service Inventory: The Customer and Provider will be able to view deployed Services status --> + + -The following figure displays the overall architecture of Openslice. -[](../images/architecture.png) +OpenSlice consists of: +* Web frontend User Interface (UI) that consists of mainly two portal categories: + 1. An NFV portal allowing users to onboard VNFDs/NSDs to facility’s NFVOs and self-service management + 2. Several TMF-family portals (Product, Service, Resource, Testing) which allow users to browse the respective layers of a modern BSS/OSS solution +* An API gateway that proxies the internal APIs, which are used by the Web frontend as well as any other 3rd party services, and consist of: + 1. A microservice offering TMF-compliant API services (e.g. Product/Service/Resource Catalog API, Service Ordering API, etc) + 2. A microservice offering NFV-compliant API services (e.g. 
VNFD/NSD onboarding and management, etc) allowing to manage multiple NFVOs and store VNFDs and NSDs in the respective catalogues +* A Message Bus used by all microservices to exchange messages either via message Queues or via publish/subscribe Topics +* An Authentication Server implementing OAuth2 authentication scheme +* A microservice that is capable to interface with an issue management system (e.g. it raises an issue to all related stakeholders - CSPs, NOPs, CSCs - that a new Service Order is requested) +* A Central Logging microservice that logs all distributed actions into an Elasticsearch cluster +* A Service Orchestrator (SO) solution that will fulfill Service Ordering requests by propagating the orchestration actions to underlying components (e.g. NFVOs or Kubernetes) or to external SOs +* A MANO Client microservice which interfaces with SOL005-compliant NFVOs (synchronizing artifacts and propagating actions) +* A Custom Resource (CR) to TMF bridge (CRIDGE) microservice which interfaces with Kubernetes +* A Metrics Retrieval Component (METRICO) which interfaces with external monitoring tools, retrieving and injecting desired metrics into the OpenSlice orchestration pipeline +* An Assurance Services component which generates and monitors alerts, as well as executing defined actions based on the latter +* A visualization server (KROKI) microservice which enables an intuitive illustration of dependency graphs and interactions -Openslice allows Vertical Customers browsing the available offered service specifications. It consists of: -* Web frontend UIs that consist of mainly two portals: i) a NFV portal allowing users self-service management and onboarding VNFDs/NSDs to facility’s NFVO ii) a Services Portal, which allows users to browse the Service Catalog, Service Blueprints specifications and the Service Inventory -* An API gateway that proxies the internal APIs and used by the web front end as well as any other 3rd party service -* A Message Bus where all microservices use it to exchange messages either via message queues or via publish/subscribe topics -* An authentication server implementing Oauth2 authentication scheme -* A microservice offering TMF compliant API services (eg Service Catalog API, Service Ordering APIetc) -* A microservice offering NFV API services (eg VNF/NSD onboarding etc) and allows to store VNFDs and NSDs in a catalog -* A microservice that is capable to interface to an issue management system. 
For example it raises an issue to all related stakeholders (CSP, NOP, CSC) that a new Service Order is requested -* Central logging microservice that is capable to log all distributed actions in to an Elasticsearch cluster -* A Service Orchestrator solution that will propagate Service Ordering requests to the equivalent SOs and NFVOs +## Microservices Deployment +The following figure depicts how OpenSlice microservices are deployed -The following figure depicts how Openslice microservices are deployed + -[](../images/microservices_network_deployment.png) +## Deploying OpenSlice in multi-domain scenarios -## Deploying Openslice in multi domain scenarios +A typical deployment across domains, involves some typical components: -A typical deployment across domains, involves today some typical components: i) an OSS/BSS to allow customers access the service catalog and perform service orders, ii) a Service Orchestrator (SO) component for executing the service order workflow, as well as iii) a Network Functions Virtualization Orchestrator (NFVO) for configuring the iv) network resources. +1. an OSS/BSS to allow customers access the service catalog and perform service orders, +2. a Service Orchestrator (SO) component for executing the service order workflow, +3. a Network Functions Virtualization Orchestrator (NFVO) or Kubernetes for configuring the network resources. -TMF Open APIs are introduced not only for exposing catalogues and accepting service orders, but also implementing the East-West interfaces between the domains, fulfilling also the LSO requirements as introduced by MEF. +TMF Open APIs are introduced not only for exposing catalogues and accepting service orders, but also implementing the East-West interfaces between the domains, fulfilling also the [LSO requirements](https://wiki.mef.net/pages/viewpage.action?pageId=56165271) as introduced by MEF. -The following figure shows how openslice could be used in such scenarios: +The following figure shows how OpenSlice could be used in such scenarios: -[](../images/multi-domain-architecture.png) + -See more [Consuming Services From External Partner Organizations](./consumingServicesFromExternalPartners.md) +See more at [Consuming Services From External Partner Organizations](../getting_started/configuration/consuming_services_from_external_partners.md). \ No newline at end of file diff --git a/doc/architecture/CRIDGE/CRIDGEforDevelopers.md b/doc/architecture/cridge/cridge_introduction.md similarity index 69% rename from doc/architecture/CRIDGE/CRIDGEforDevelopers.md rename to doc/architecture/cridge/cridge_introduction.md index 0457e529c34d138e17203fb2fbb2b10f858a386b..2866d16627516a24bca32c92b948bfb5dec46c29 100644 --- a/doc/architecture/CRIDGE/CRIDGEforDevelopers.md +++ b/doc/architecture/cridge/cridge_introduction.md @@ -1,22 +1,23 @@ # CRIDGE: A Service to manage Custom Resources in a Kubernetes Cluster -## Intended Audience: OSL developers - - > Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact though the Kubernetes API and it has a set of objects ready for use out of the box. Custom Resource Definitions (CRDs) is a way that allows to manage things other than Kubernetes itself and allows to create our own objects The use of CRDs makes the possibilities of Kubernetes management almost limitless. You can extend the base Kubernetes API with any object you like using CRDs. 
+<!-- **Intended Audience: OpenSlice Developers** --> + +## Introduction +Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact through the Kubernetes API and it has a set of objects ready for use out of the box. Custom Resource Definitions (CRDs) are a way that allows managing things other than Kubernetes itself and allows creating our own objects. The use of CRDs makes the possibilities of Kubernetes management almost limitless. You can extend the base Kubernetes API with any object you like using CRDs. -CRIDGE is a service designed to create and manage Custom Resources (CRs) based on Custom Resource Definitions (CRDs) installed on a Kubernetes cluster. By leveraging the OpenSlice (OSL), CRIDGE enables seamless integration and orchestration within Kubernetes environments, utilizing Kubernetes APIs via the TMF APIs and models. Thus, more or less, OSL exposes Kubernetes APIs as TMF APIs and models. +CRIDGE is a service designed to create and manage Custom Resources (CRs) based on Custom Resource Definitions (CRDs) installed on a Kubernetes cluster. By leveraging OpenSlice (OSL), CRIDGE enables seamless integration and orchestration within Kubernetes environments, utilizing Kubernetes APIs via the TMF APIs and models. Thus, more or less, OSL exposes Kubernetes APIs as TMF APIs and models. - >By allowing the design and lifecycle management of services/resources that expose CRDs/CRs in a Kubernetes cluster via the TMF APIs, OSL can be used in many complex scenarios now involing resources from multiple domains. - +By allowing the design and lifecycle management of services/resources that expose CRDs/CRs in a Kubernetes cluster via the TMF APIs, OSL can be used in many complex scenarios now involving resources from multiple domains. +Pros, in a nutshell: 1. CRIDGE service allows OSL to: - Create and manage Custom Resources (CRs) using installed CRDs on a target Kubernetes cluster. - Facilitate complex orchestration scenarios by wrapping Kubernetes APIs as TMF APIs and models. - Handles connectivity to a Kubernetes cluster and manages the lifecycle of CRDs - - Wraps the Kubernetes API, Receives and provides resources towards other OSL services via the service bus + - Wraps the Kubernetes API, receives and provides resources towards other OSL services via the service bus 2. Enabling Loose Coupling and Orchestration - Language Flexibility: Developers can write CRDs in any language and expose them via the Kubernetes APIs. OSL will reuse these CRDs, enhancing flexibility and integration capabilities. @@ -30,18 +31,17 @@ CRIDGE is a service designed to create and manage Custom Resources (CRs) based o OSL can expose CRs in service catalogs, facilitating their deployment in complex scenarios. These scenarios may include service bundles that involve multiple systems, such as RAN controllers or other Kubernetes clusters, providing a robust and versatile deployment framework. - - > Why the CRIDGE name? we wanted to build a service that maps TMF models to CRDs; a kind of a **CR**D to TMF br**idge**. Therefore CRIDGE was born +> Why the CRIDGE name? We wanted to build a service that maps TMF models to CRDs; a kind of a **CR**D to TMF br**idge**. Therefore CRIDGE was born. -# Approach +## Approach - > OSL in general is responible for exposing service specifications which are ready to be ordered and orchestrated, through tmforum Open APIs as defined in the OSL Service Spec Catalog. 
Usually for a service specification a corresponding (one or more) resource specification (resourceSpecificationReference) is registered in the OSL Resource Spec Catalog. +> OSL in general is responsible for exposing Service Specifications which are ready to be ordered and orchestrated, through TMFORUM Open APIs as defined in the OSL Service Spec Catalog. Usually for a service specification a corresponding (one or more) Resource Specification (resourceSpecificationReference) is registered in the OSL Resource Spec Catalog. The following image illustrates the approach. -[]() + 1. A CRD in a cluster will be mapped in TMF model as a Resource specification and therefore can be exposed as a service specification in a catalog 2. Service Orders can be created for this service specification. The OSL Orchestrator (OSOM) will manage the lifecycle of the Service Order. @@ -52,11 +52,13 @@ The following image illustrates the approach. -[]() + -The provided image illustrates the architecture and workflow of the CRIDGE service, showing how it interacts with other components within a Kubernetes (K8s) cluster. Here is an explanation of the key components and flow in the diagram: +The provided image illustrates the architecture and workflow of the CRIDGE service, showing how it interacts with other components within a Kubernetes (K8s) cluster. + +Following, there is an explanation of the key components and flow in the diagram: - Other OSL Services: This box represents various OSL services such as Service Spec Catalogue, Resource Spec Catalogue, Service Inventory, Resource Inventory, and OSOM (OpenSlice Service Orchestration and Management). - Service Bus: This is the communication layer that facilitates interaction between the CRIDGE service and other OSL services. @@ -64,21 +66,21 @@ The provided image illustrates the architecture and workflow of the CRIDGE servi - K8s API: The Kubernetes API server, which is the central control point for managing the Kubernetes cluster. CRIDGE interacts with the K8s API to manage CRDs and CRs. > CRD (Custom Resource Definition): A CRD is a way to define custom resources in Kubernetes cluster-wise. It allows the extension of Kubernetes API to create and manage user-defined resources. Example : -``` - apiVersion: apiextensions.k8s.io/v1 - kind: CustomResourceDefinition - metadata: - name: myresource.example.com +```yaml +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: myresource.example.com ``` - Namespaces: Kubernetes namespaces provide a way to partition resources within a cluster. The diagram shows that multiple namespaces (nsxx, nsyy, nsz) can be managed by CRIDGE. - > CR (Custom Resource): A CR is an instance of a CRD. It represents the actual custom resource that is managed within the Kubernetes cluster. Example shown in different namespaces: -``` - apiVersion: example.com/v1 - kind: Myresource - metadata: - name: example_resource_1 + > CR (Custom Resource): A CR is an instance of a CRD. It represents the actual custom resource that is managed within the Kubernetes cluster. Example shown in different namespaces: +```yaml +apiVersion: example.com/v1 +kind: Myresource +metadata: + name: example_resource_1 ``` In a nutchell: @@ -88,50 +90,51 @@ In a nutchell: - CRDs are defined and managed through the K8s API. The example CRD is named myresource.example.com. - Deploying CRs in Namespaces: Custom resources defined by the CRD are created and managed within different namespaces in the Kubernetes cluster. 
Each namespace can have its own instances of the custom resources. - > The example CRD myresource.example.com allows the creation of custom resources of type Myresource. - > Instances of Myresource are created in various namespaces, each with unique names like example_resource_1. +> The example CRD myresource.example.com allows the creation of custom resources of type Myresource. + +> Instances of Myresource are created in various namespaces, each with unique names like example_resource_1. -# Handling more than one clusters +## Multiple Clusters Management A CRIDGE service is usually responsible for managing one cluster. In the following diagram we show how it can be used for managing multiple clusters: -[]() + We assume that there is an OSL Management cluster that OSL is installed. CRIDGE is also installed there if we would like to manage resources in the same management cluster. - - Each CRIDGE service has for example its own configuration to connect to target cluster - - Each CRIDGE can be installed either in the managed cluster or at the remote clusters. Connectivity is handled via the service bus - - Important: Each CRIDGE has a different context and API endpoints. This is used to request CRDs on a different cluster + + - Each CRIDGE service has its own configuration to connect to the target cluster + - Each CRIDGE can be installed either in the managed cluster or at the remote clusters. Connectivity is handled via the service bus. + - **Important**: Each CRIDGE has a different context and API endpoints. This is used to request CRDs on a different cluster. - > A CRD has a globally unique name for example mycrd.example.com. So we need to somehow identify also the different cluster + > A CRD has a globally unique name for example mycrd.example.com. So we need to somehow identify also the different cluster. -# Awareness for CRDs and CRs in cluster +## Awareness for CRDs and CRs in a Cluster > CRDs and CRs can appear (disappear) or change status at any time in a cluster. OSL Resource Inventory need to be aware of these events. -The sync process is found in the code and explained by the following picture: - +The implemented synchronization process is explained by the following diagram: -[]() + - WatcherService is executed when the cridge service application starts (see onApplicationEvent). First things: + WatcherService is executed when the CRIDGE service application starts (see onApplicationEvent). Specifically: - KubernetesClientResource is a class that wraps fabric8’s KubernetesClient - This fabric8 KubernetesClient is initialized from the kubeconf and default context of the machine that runs CRIDGE -- On CRIDGE Start up we try to register this cluster and context to OSL catalogs. +- On CRIDGE start-up we try to register this cluster and context to OSL catalogs. 
- See registerKubernetesClientInOSLResource method which registers the KubernetesContextDefinition in Resource Inventory as a LogicalResource via createOrUpdateResourceByNameCategoryVersion method -- After the creation(or update) of this cluster as a Resource in OSL we proceed to create SharedIndexInformers for CustomResourceDefinition objects -- In this way CRIDGE is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL(CRIDGE) -- The SharedIndexInformer events notify CRIDGE, which is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL(CRIDGE) +- After the creation (or update) of this cluster as a Resource in OSL we proceed to create SharedIndexInformers for CustomResourceDefinition objects +- In this way, CRIDGE is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL (CRIDGE) +- The SharedIndexInformer events notify CRIDGE, which is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL (CRIDGE) - NOTE: The ADD event is raised every time also we run CRIDGE. Therefore, on ADD we do the method to createORupdate resource specifications and resources - On ADD event: - The CRD is transformed to OSL Kubernetes domain model: method kubernetesClientResource.KubernetesCRD2OpensliceCRD - Then the OSL Kubernetes domain model is: - - transformed to Resource Specification and is stored to catalog (see createOrUpdateResourceSpecByNameCategoryVersion) + - Transformed to Resource Specification and is stored to catalog (see createOrUpdateResourceSpecByNameCategoryVersion) - Transformed to Resource and is stored to catalog (see createOrUpdateResourceByNameCategoryVersion) - Conceptually while a CRD is a new resource located in the Kubernetes cluster resource, it is transformed also as a Resource Specification (a high-level entity) which is ready to be reused as an entity to other scenarios. The same concept as in Kubernetes where a CRD is a definition ready to be used for instantiating resources of this CRD - Then for this CRD a Watcher is added for all Resources of this Kind (fabric8’s GenericKubernetesResource entity) @@ -142,30 +145,12 @@ The sync process is found in the code and explained by the following picture: - Else a resource is created in catalog -# Deployment of a new CR based on a CRD +## Exposure of CRDs as Service Specifications +See [Exposing Kubernetes Resources](../../service_design/kubernetes/exposing_kubernetes_resources.md) section for ways to design services around CRDs. -[]() - -- A message arrives to deploy a CR - - The call examines if this CRIDGE service can handle the request (based on context and masterURL) -- There are headers received and a crspec in json -- The crspec is unmarshaled as GenericKubernetesResource -- Headers are in format org.etsi.osl.* -- These headers are injected as labels - - (see later in orchestration) -- A namespace is created for this resource -- Watchers are created for this namespace for e.g. 
new secrets, config maps etc , so that they can be available back as resources to the Inventory of OSL (Note only Secrets for now are watched) - -# Expose CRDs as Service Specifications in OpenSlice catalogs - -See [ExposingKubernetesResources](ExposingKubernetesResources.md) - - - - -# Service Orchestration and CRDs/CRs +## Service Orchestration and CRDs/CRs OSOM checks the presence of attribute _CR_SPEC at the RFS to make a request for a CR deployment @@ -175,8 +160,17 @@ OSOM checks the presence of attribute _CR_SPEC at the RFS to make a request for > LCM rules can be used to change attributes of this yaml/json file, before sending this for orchestration -However, the following issue needs to be solved: ** How to map the CR lifecycle that is defined in the CRD with the TMF resource Lifecycle? ** - - For this We introduced the following characteristics: _CR_CHECK_FIELD, _CR_CHECKVAL_STANDBY, _CR_CHECKVAL_ALARM, _CR_CHECKVAL_AVAILABLE, _CR_CHECKVAL_RESERVED, _CR_CHECKVAL_UNKNOWN, _CR_CHECKVAL_SUSPENDED +However, the following issue needs to be solved: **How to map the CR lifecycle that is defined in the CRD with the TMF resource Lifecycle?** + +For this, we introduced the following characteristics: + +- _CR_CHECK_FIELD +- _CR_CHECKVAL_STANDBY +- _CR_CHECKVAL_ALARM +- _CR_CHECKVAL_AVAILABLE +- _CR_CHECKVAL_RESERVED +- _CR_CHECKVAL_UNKNOWN +- _CR_CHECKVAL_SUSPENDED OSOM sends to CRIDGE a message with the following information: @@ -201,9 +195,9 @@ OSOM sends to CRIDGE a message with the following information: - orderId related service order ID - startDate start date of the deployment (not used currently) - endDate end date of the deployment (not used currently) - - _CR_SPEC the spec that is sent to cridge (in json) + - _CR_SPEC the spec that is sent to CRIDGE (in json) - Returns: - - a string response from cridge. It might return "OK" if everything is ok. "SEE OTHER" if there are multiple CRIDGEs then some other cridge will handle the request for the equivalent cluster. Any other response is handled as error + - a string response from CRIDGE. It might return "OK" if everything is ok. "SEE OTHER" if there are multiple CRIDGEs then some other CRIDGE instance will handle the request for the equivalent cluster. Any other response is handled as error - CRIDGE receives the message and creates according to the labels the necessary CR @@ -211,14 +205,27 @@ OSOM sends to CRIDGE a message with the following information: - It monitors and tries to figure out and map the Status of the CR to the TMF Status according to the provided org.etsi.osl.statusCheck* labels - It sends to the message bus the current resource for creation or update to the TMF service inventory +## Deployment of a new CR based on a CRD ---- +The implemented process to deploy a CR is explained by the following diagram: + + + + +- A message arrives to deploy a CR + - The call examines if this CRIDGE service can handle the request (based on context and masterURL) +- There are headers received and a _CR_SPEC in json +- The _CR_SPEC is unmarshaled as GenericKubernetesResource +- Headers are in format org.etsi.osl.* +- These headers are injected as labels (see [Service Orchestration section](#service-orchestration-and-crdscrs)) +- A namespace is created for this resource +- Watchers are created for this namespace for e.g. 
new secrets, config maps etc, so that they can be available back as resources to the Inventory of OSL ## Probe further - See examples of exposing Kubernetes Operators as a Service via OpenSlice: - - [Offering "Calculator as a Service"](../../service_design/examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md) - - [Offering "Helm installation as a Service" (Jenkins example)](../../service_design/examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example.md) + - [Offering "Calculator as a Service"](../../service_design/examples/calculator_crd_aas/calculator_crd_aas.md) + - [Offering "Helm installation as a Service" (Jenkins example)](../../service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas.md) diff --git a/doc/architecture/CRIDGE/img01.png b/doc/architecture/cridge/img01.png similarity index 100% rename from doc/architecture/CRIDGE/img01.png rename to doc/architecture/cridge/img01.png diff --git a/doc/architecture/CRIDGE/img02.png b/doc/architecture/cridge/img02.png similarity index 100% rename from doc/architecture/CRIDGE/img02.png rename to doc/architecture/cridge/img02.png diff --git a/doc/architecture/CRIDGE/img03.png b/doc/architecture/cridge/img03.png similarity index 100% rename from doc/architecture/CRIDGE/img03.png rename to doc/architecture/cridge/img03.png diff --git a/doc/architecture/CRIDGE/img04.png b/doc/architecture/cridge/img04.png similarity index 100% rename from doc/architecture/CRIDGE/img04.png rename to doc/architecture/cridge/img04.png diff --git a/doc/architecture/CRIDGE/img05.png b/doc/architecture/cridge/img05.png similarity index 100% rename from doc/architecture/CRIDGE/img05.png rename to doc/architecture/cridge/img05.png diff --git a/doc/architecture/images/architecture.png b/doc/architecture/images/architecture.png new file mode 100644 index 0000000000000000000000000000000000000000..6ad6fd808dc6c911c284e223843c98f79346dc9e Binary files /dev/null and b/doc/architecture/images/architecture.png differ diff --git a/doc/images/microservices_network_deployment.png b/doc/architecture/images/microservices_network_deployment.png similarity index 100% rename from doc/images/microservices_network_deployment.png rename to doc/architecture/images/microservices_network_deployment.png diff --git a/doc/architecture/issuemgt.md b/doc/architecture/issuemgt.md index b50f5459a186f1eff82bebcb5e805c0752d6dde8..c5ca920726903c7a409c357daa1cce3ab0a9844c 100644 --- a/doc/architecture/issuemgt.md +++ b/doc/architecture/issuemgt.md @@ -4,4 +4,4 @@ For issue management support, Openslice relies on Bugzilla. Bugzilla is a ticket The figure below displays the overall issue management service architecture integrating Bugzilla as its core and how this tool interacts with other Openslice services presenting some distinctive scenarios. It should be noted that Bugzilla tickets will not only be used for bugs/errors, but also for general requests, e.g. Service Order procedure. 
-[](../images/issue_management.png) \ No newline at end of file + \ No newline at end of file diff --git a/doc/architecture/osom.md b/doc/architecture/osom.md index 173aa8017d603bb3854eb6b4c0e9baa905e512f6..0e29dfd91cfa66457b3694b8dbb8e883225e53bf 100644 --- a/doc/architecture/osom.md +++ b/doc/architecture/osom.md @@ -9,14 +9,14 @@ It uses open source Flowable Business process engine (https://www.flowable.org) A Service Order follows the states as defined in TMF641 specification: -[](../images/service_order_states.png) + ## Initial state When a new order is created, it goes into the Initial state. It is stored in the repository and triggers an Event. -[](../images/service_order_initial_state.png) + Administrators are notified usually from the Ticketing System of a new order. They login to Openslice and change the State of the order either to ACKNOWLEDGED or REJECTED. If ACKNOWLEDGED they can Propose a startDate, add Notes, and add any additional service items @@ -24,9 +24,9 @@ Administrators are notified usually from the Ticketing System of a new order. Th A process checks every 1 minute for ACKNOWLEDGED orders. -[](../images/order_scheduler_bpm.png) + -[](../images/order_scheduler_diagram.png) + It retrieves all orders that are in ACKNOWLEDGED state and if the start date is in time it will initialize the process by settingn the order in IN_PROGRESS state. Finally the Start Order Process will start. @@ -35,9 +35,9 @@ It retrieves all orders that are in ACKNOWLEDGED state and if the start date is This process for now is a draft simple prototype to make a simple orchestration via NFVO. Here the actual Services (TMF638/640 model) are created and attached to Service Order and Service Inventory. -[](../images/start_order_process_bpm.png) + -[](../images/start_order_process_diagram.png) + We expect here to check which tasks can be orchestrated by NFVO and which by human. We create the equivalent Services(TMF638/640 model) for this order. @@ -86,14 +86,14 @@ All services in "Order Complete" are in a status: A Service follows the states as defined in TMF638 Service Inventory specification: -[](../images/service_states.png) + ## NFVODeploymentRequest process -[](../images/NFVODeploymentReq_process.png) + This process is related with the NFVO orchestration It will send a msg to NFVO(s?) for a specific deployment request @@ -105,12 +105,12 @@ Then it checks the deployment status. It will wait 30 secs each time until the d Every 1 minute the "Check In Progress Orders" process is executed checking if a supported Service changed state (i.e. to ACTIVE) then the whole Order will change state (e.g. go to COMPLETED) -[](../images/check_inProgress_orders.png) + ## External Service Provider Deployment Request process -[](../images/externalSPDeploymentReq.png) + This process contains tasks for submitting order requests to external partners. - Submit Order To External Service Provider Task: This task creates automatically a Service Order request to a 3rd party provider SO that hosts the Service Specification @@ -120,7 +120,7 @@ This process contains tasks for submitting order requests to external partners. ## Fetch Partner Services Process -[](../images/fetchPartnerServices.png) + Every 2 minutes the "fetchPartnerServicesProcess" process is executed checking remote Partner Organizations for changes in the published catalogues. 
The Fetch and Update External Partner Services Task is executed in paralle l for each Partner Organization @@ -129,7 +129,7 @@ The Fetch and Update External Partner Services Task is executed in paralle l for ## Local Service Orchestration Process -[](../images/LocalServiceOrchestrationProcess.png) + This process handles automatically services that need to be further orchestrated or processed by OSOM. For example, for a CFS Bundled service we create such automated service instances that just aggregate the underlying services. diff --git a/doc/contributing/developing.md b/doc/contributing/developing.md index e8939eeca651aadf2bedef512ed8009df305b4aa..432e5b003f0fad621b8ac3c4fb9a5da4efb98afc 100644 --- a/doc/contributing/developing.md +++ b/doc/contributing/developing.md @@ -1,89 +1,101 @@ # Developing -OpenSlice backend services are mainly implemented with Java 17 or above and Spring boot. +OpenSlice backend services are mainly implemented with Java 17 or above and Spring Boot. -OpenSlice uses various subsystems and depending on the module would you like to work, other subsystems must be present (you can disable them though in the code, e.g. at docker-compose.yaml file). +OpenSlice uses various subsystems and depending on the module would you like to work, other subsystems must be present *(you can disable them though in the code, e.g. at docker-compose.yaml file)*. -To get the latest development branch: -```bash -wget https://labs.etsi.org/rep/osl/code/org.etsi.osl.main/-/raw/develop/compose/deploy.sh -sudo ./deploy.sh develop #[or replace develop with other branch name] -``` +## General requirements -You may follow the [installation process](https://osl.etsi.org/documentation/develop/deployment/), as described at "develop" tagged documentation. +- Docker should be installed in your development environment +- Run the core subsystems (see [related](#contribute-to-a-subsystem) section) -To work on a specific subsystem e.g. org.etsi.osl.tmf.api, you must: -1a - Deploy only the core necessary subsystems through: -```bash -sudo docker compose --profile dev down;sudo docker compose --profile dev up -d --build -``` -> Note **--profile dev** that will only deploy the core dependency subsystems, instead of the whole OpenSlice. +## Version/release management -1b - Or alternatively, commend out the respective container from the docker-compose.yaml file, so as to deploy the whole OpenSlice, except the subsystem you want to work on, following the provided installation steps. +Check [this](https://nvie.com/posts/a-successful-git-branching-model/) nice article on how we develop and release versions. -2 - Clone the respective repository, e.g. https://labs.etsi.org/rep/osl/code/org.etsi.osl.tmf.api/-/tree/develop +We develop in the `develop` branch and follow a issue driven development model. -3 - Code :) +## Getting Started -## General requirements +To get the latest development branch, execute: -- Docker should be installed in your development environment -- Run the core subsystems (see above section) +```bash +wget https://labs.etsi.org/rep/osl/code/org.etsi.osl.main/-/raw/develop/compose/deploy.sh +sudo ./deploy.sh develop #[or replace develop with another branch name] +``` +You may follow the [installation process](https://osl.etsi.org/documentation/develop/deployment/), as described at `develop` tagged documentation. -## Slack +## Contribute to a subsystem -Feel free to join OpenSlice [Slack](https://openslice.slack.com) for any development oriented questions. +To work on a specific subsystem e.g. 
`org.etsi.osl.tmf.api`, you must: -## Examples of developing on specific subsystems +1. Deploy only the core necessary subsystems through: -### VNF/NSD Catalog Management and NSD Deployment API service + ```bash + sudo docker compose --profile dev down;sudo docker compose --profile dev up -d --build + ``` + > Note **--profile dev** that will only deploy the core dependency subsystems, instead of the whole OpenSlice. -Clone the repository: https://labs.etsi.org/rep/osl/code/org.etsi.osl.portal.api/-/tree/develop + *OR* -Check the docker-compose.yml file. Default port is 13080. Check specifically the datasource username/password, server port. + Alternatively, comment out the respective container from the `docker-compose.yaml` file, so as to deploy the whole OpenSlice, except the subsystem you want to work on, following the provided installation steps. -Make sure that the core subsystems are up and running. +2. Clone the respective repository, for example: [https://labs.etsi.org/rep/osl/code/org.etsi.osl.tmf.api/-/tree/develop](https://labs.etsi.org/rep/osl/code/org.etsi.osl.tmf.api/-/tree/develop) (the clone URLs are available at this link) -Execute it with -```bash -mvn spring-boot:run -``` +3. Code! 😊 -For verification, Swagger API of the service is at `http://localhost:13000/osapi/swagger-ui/index.html`. -There, you may try there various REST actions and authenticate via the OAuth server without the use of the UI. +## Examples of developing on specific subsystems +### VNF/NSD Catalog Management and NSD Deployment API service -### VNF/NSD Catalog Management and NSD Deployment WEB UI service +You need to: -The Web UI is written in AngularJS. +1. Clone the repository: `https://labs.etsi.org/rep/osl/code/org.etsi.osl.portal.api/-/tree/develop` -Clone the repository: https://labs.etsi.org/rep/osl/code/org.etsi.osl.portal.web/-/tree/develop +2. Check the docker-compose.yml file. Default port is 13080. Check specifically the datasource username/password, server port. -By default the project org.etsi.osl.portal.api exposes the folder ../org.etsi.osl.portal.web/src/ in a folder testweb (Check class MvcConfig.java in org.etsi.osl.portal.api) for development. (In production nginx is used). Point your browser to `http://localhost:13000/osapi/testweb/index.html/` +3. Make sure that the core subsystems are up and running. -## Version/release management + Execute it with: + ```bash + mvn spring-boot:run + ``` -Check this nice article on how we develop and release versions. +> For verification, Swagger API of the service is at `http://localhost:13000/osapi/swagger-ui/index.html`. + There, you may try there various REST actions and authenticate via the OAuth server without the use of the UI. -https://nvie.com/posts/a-successful-git-branching-model/ +### VNF/NSD Catalog Management and NSD Deployment WEB UI service -We develop in the develop branch and follow a issue driven development model. +The Web UI is written in `AngularJS`. To run it: ---- -## Wishlist +1. Clone the repository: [https://labs.etsi.org/rep/osl/code/org.etsi.osl.portal.web/-/tree/develop](https://labs.etsi.org/rep/osl/code/org.etsi.osl.portal.web/-/tree/develop) (the clone URLs are available at that link) -Check also our wishlist of new features. You can add your own. + > By default the project `org.etsi.osl.portal.api` exposes the folder `../org.etsi.osl.portal.web/src/` in a folder testweb (check class `MvcConfig.java` in `org.etsi.osl.portal.api`) for development. (In production *nginx* is used) + +2. 
Point your browser to `http://localhost:13000/osapi/testweb/index.html/` -See [Wishlist](./wishlist.md). +## Reach us +We are available on different channels. +### Slack +Feel free to join OpenSlice's [Slack](https://openslice.slack.com) workspace for any development oriented questions *(preferred)*. +### E-mail +If you are a member or a participant, you can also reach out on the `OSL_TECH` mailing list. +For administrative support, contact `SDGsupport@etsi.org`. + +--- +## Wishlist + +Check also our wishlist of new features. You can add your own. +See [wishlist](./wishlist.md). \ No newline at end of file diff --git a/doc/contributing/documenting.md b/doc/contributing/documenting.md new file mode 100644 index 0000000000000000000000000000000000000000..f5a14e5b53dbf459891da5816b29d4854a255e06 --- /dev/null +++ b/doc/contributing/documenting.md @@ -0,0 +1,78 @@ + +# Documenting + +OpenSlice's documentation runs on [MkDocs](https://www.mkdocs.org/). + +## Eligibility + +Documenting OpenSlice is limited to active contributors. So, if you: + +1. are an active member or participant; + +2. wish to contribute to it; + +3. you're ready! + +## Documentation System and Structure + +[MkDocs](https://www.mkdocs.org/) is a fast and simple static site generator that's geared towards building project documentation. Documentation source files are written in `Markdown`, and configured with a single `YAML` configuration file. Start by reading the [introductory tutorial](https://www.mkdocs.org/getting-started/), then check the [User Guide](https://www.mkdocs.org/user-guide/) for more information. + +## Getting Started + +To contribute to OpenSlice's documentation, you need to follow those easy steps: + +1) Clone the [Documentation repository](https://labs.etsi.org/rep/osl/documentation) with: + +```bash +git clone https://labs.etsi.org/rep/osl/documentation.git +``` + +2) Checkout the develop branch (incoming contributions are only accepted to the develop branch): + +```bash +cd ./documentation +git checkout develop +``` + +3) Setup a local mkdocs server, using a virtual environment + +=== "Linux + macOS" + + ``` bash + python3 -m venv venv + source venv/bin/activate + ``` + ``` bash + python -m pip install mkdocs + python -m pip install mkdocs-material + python -m pip install mkdocs-glightbox + python -m pip install mkdocs-markdownextradata-plugin + python -m pip install mike + ``` + +=== "Windows" + + ``` bash + python -m venv venv + venv\Scripts\activate + ``` + ``` bash + python -m pip install mkdocs + python -m pip install mkdocs-material + python -m pip install mkdocs-glightbox + python -m pip install mkdocs-markdownextradata-plugin + python -m pip install mike + ``` + + +4) Wait for all downloads to finish and start the mkdocs server + +```bash +mkdocs serve +``` + +5) Document (and commit)! 😊 + +Before committing, you should make sure that the local mkdcocs server's terminal is not producing any INFO/WARNING message regarding your contributions. + +> The documentation website supports branches, so your accepted changes will be reflected to the develop branch which becomes the Release branch after each corresponding cycle. 
diff --git a/doc/OpenSlice_deployment_examples.md b/doc/deployment_examples.md similarity index 98% rename from doc/OpenSlice_deployment_examples.md rename to doc/deployment_examples.md index 1c21271e0479574aad0e62ffabf9510d9342e594..d0d98813cd5182ee738d06f4a94418d88697976e 100644 --- a/doc/OpenSlice_deployment_examples.md +++ b/doc/deployment_examples.md @@ -1,4 +1,4 @@ -# OpenSlice deployment examples +# OpenSlice Deployment Examples Here are some examples from past and current efforts that use OpenSlice in various cases. diff --git a/doc/etsi_osl.md b/doc/etsi_osl.md index 08f6df2aec3b4030475851106e731a06dfb54657..e33a2abcbeef4a3ce219427ab044d912192e38d9 100644 --- a/doc/etsi_osl.md +++ b/doc/etsi_osl.md @@ -1,3 +1,5 @@ -# The ETSI SDG OSL +# OpenSlice under ETSI -OpenSlice is developed by the OSL ETSI Software Development Group [see more info](https://osl.etsi.org/). \ No newline at end of file +Since October 2023, OpenSlice has been accepted under the umbrella of ETSI, forming its first Software Development Group (SDG), under the name **ETSI SDG for OpenSlice (OSL)**. + +More information can be found at [ETSI SDG OSL webpage](https://osl.etsi.org/). \ No newline at end of file diff --git a/doc/config_intro.md b/doc/getting_started/configuration/config_intro.md similarity index 84% rename from doc/config_intro.md rename to doc/getting_started/configuration/config_intro.md index c3549c1554349b33b949f361341d15cbb7bbd500..94fe71c9fa525f6c759fe4a424d1677276eb7842 100644 --- a/doc/config_intro.md +++ b/doc/getting_started/configuration/config_intro.md @@ -1,6 +1,6 @@ # Configuring and managing OpenSlice -## Intended Audience: OpenSlice administrators +**Intended Audience: OpenSlice Administrators** This section provides information on how to configure and manage different aspect of OpenSlice while in operation. For example: diff --git a/doc/architecture/consumingServicesFromExternalPartners.md b/doc/getting_started/configuration/consuming_services_from_external_partners.md similarity index 97% rename from doc/architecture/consumingServicesFromExternalPartners.md rename to doc/getting_started/configuration/consuming_services_from_external_partners.md index e43ca9829fdda74ef8c02b9ccb4b01d99fc94cfd..128d1a26935c155e8a05351b53376d77bfb2ffbd 100644 --- a/doc/architecture/consumingServicesFromExternalPartners.md +++ b/doc/getting_started/configuration/consuming_services_from_external_partners.md @@ -6,7 +6,7 @@ TMF Open APIs are introduced not only for exposing catalogues and accepting serv The following figure shows how openslice could be used in such scenarios: -[](../images/multi-domain-architecture.png) + In Openslice we can consume services from 3rd parties via Open APIs. 
@@ -47,8 +47,7 @@ An organization must have the following characteristics in openslice catalog, li An example Organization defined example in json: -``` - +```json { "uuid": "1a09a8b5-6bd5-444b-b0b9-a73c69eb42ae", "@baseType": "BaseEntity", diff --git a/doc/images/multi-domain-architecture.png b/doc/getting_started/configuration/images/multi-domain-architecture.png similarity index 100% rename from doc/images/multi-domain-architecture.png rename to doc/getting_started/configuration/images/multi-domain-architecture.png diff --git a/doc/nfvoconfig.md b/doc/getting_started/configuration/nfvo_config.md similarity index 98% rename from doc/nfvoconfig.md rename to doc/getting_started/configuration/nfvo_config.md index 0cd2c77887fb274c77f845082213436dd302f309..639df650ac8153a38371cb57cb2f3c74ab6bbb4c 100644 --- a/doc/nfvoconfig.md +++ b/doc/getting_started/configuration/nfvo_config.md @@ -1,4 +1,4 @@ -# NFV Orchestrator configuration +# NFV Orchestrator Configuration > Currently we support Open Source MANO version EIGHT/NINE/TEN/ELEVEN/THIRTEEN. Later versions of OSM may also be supported by the existing configuration, as from OSM 9+ the project converged to the SOL005 interface, regarding the NBI, and SOL006 (YANG model), regarding the NFV/NS packaging. Also an implementation of a generic SOL005 interface is supported, but not extensively tested. diff --git a/doc/role_keycloak_management.md b/doc/getting_started/configuration/role_keycloak_management.md similarity index 87% rename from doc/role_keycloak_management.md rename to doc/getting_started/configuration/role_keycloak_management.md index 153d5c312b3b30b8020097d6dd2a0346446c3e3c..a7d6a1a39bd9ee02aaf1bc38741bb50512a1e587 100644 --- a/doc/role_keycloak_management.md +++ b/doc/getting_started/configuration/role_keycloak_management.md @@ -1,8 +1,8 @@ # Role management in Keycloak -Some initial configuration of Keycloak happens at Installation/Deployment time. Here are some notes regarding user management +**Intended Audience: OpenSlice Administrators** -## Intended Audience: OpenSlice administrators +Some initial configuration of Keycloak happens at Installation/Deployment time. 
Here are some notes regarding user management There are cases that OpenSlice administrators need to configure Keycloak: diff --git a/doc/deploymentCompose.md b/doc/getting_started/deployment/docker_compose.md similarity index 96% rename from doc/deploymentCompose.md rename to doc/getting_started/deployment/docker_compose.md index b8c8b34873a7e6d90c624e011241468b887ceed8..b7d917f069d02ac5c9f94e82b21496692900c0ff 100644 --- a/doc/deploymentCompose.md +++ b/doc/getting_started/deployment/docker_compose.md @@ -1,6 +1,6 @@ # OpenSlice Deployment Guide with Docker Compose -## Intended Audience: OpenSlice administrators +**Intended Audience: OpenSlice Administrators** ## Requirements @@ -37,7 +37,7 @@ sudo nano /etc/docker/daemon.json and add: -``` +```json { "dns": ["8.8.8.8", "8.8.4.4"] } @@ -66,7 +66,7 @@ cd openslice Download the deployment / environment preparation script ```bash -wget https://labs.etsi.org/rep/osl/code/org.etsi.osl.main/-/raw/2024Q2/compose/deploy.sh +wget https://labs.etsi.org/rep/osl/code/org.etsi.osl.main/-/raw/{{{ documentation_version }}}/compose/deploy.sh ``` Make it executable @@ -86,7 +86,7 @@ If you run the script without selecting a branch the the main branch is going to We recommend: * main branch for the most stable experience and -* develop branch for an experience with the latest features (for develop branch installation, it is strongly advisable that you may as well follow the [develop documentation](https://osl.etsi.org/documentation/develop/deployment/)) +* develop branch for an experience with the latest features (for develop branch installation, it is strongly advisable that you may as well follow the [develop documentation](https://osl.etsi.org/documentation/develop/)) ```bash sudo ./deploy.sh main #[or replace main with other branch name] @@ -140,7 +140,7 @@ If you want to utilise the Bugzilla connector: In folder `org.etsi.osl.main/compose/` edit the file `docker-compose.yaml` -``` +```json SPRING_APPLICATION_JSON: '{ "spring.activemq.brokerUrl": "tcp://anartemis:61616?jms.watchTopicAdvisories=false", "spring.activemq.user": "artemis", @@ -227,7 +227,7 @@ Change the respective fields: In folder `org.etsi.osl.main/compose/` edit the file `docker-compose.yaml` -``` +```json SPRING_APPLICATION_JSON: '{ "spring.datasource.username":"root", "spring.datasource.password":"letmein", @@ -251,7 +251,7 @@ Change the respective fields: In folder `org.etsi.osl.main/compose/` edit the file `docker-compose.yaml` -``` +```json SPRING_APPLICATION_JSON: '{ "spring.datasource.username":"root", "spring.datasource.password":"letmein", @@ -297,14 +297,14 @@ Edit the `config.js` file with the information of your domain. `ROOTURL` will au Example file: -``` +```json { - BUGZILLA: "ROOTURL/bugzilla/", - STATUS: "ROOTURL/status/", - APIURL: "http://localhost", - WEBURL: "ROOTURL/nfvportal", - APIOAUTHURL: "ROOTURL/auth/realms/openslice", - APITMFURL: "ROOTURL/tmf-api/serviceCatalogManagement/v4" + "BUGZILLA": "ROOTURL/bugzilla/", + "STATUS": "ROOTURL/status/", + "APIURL": "http://localhost", + "WEBURL": "ROOTURL/nfvportal", + "APIOAUTHURL": "ROOTURL/auth/realms/openslice", + "APITMFURL": "ROOTURL/tmf-api/serviceCatalogManagement/v4" } ``` @@ -330,10 +330,10 @@ E.g. You may edit "TITLE", "WIKI", etc properties with your domain title. 
Also c Example file: -``` +```json { "TITLE": "OpenSlice by ETSI", - "PORTALVERSION":"2024Q2", + "PORTALVERSION":"{{{ documentation_version }}}", "WIKI": "https://osl.etsi.org/documentation", "BUGZILLA": "{BASEURL}/bugzilla/", "STATUS": "{BASEURL}/status/", @@ -479,9 +479,7 @@ Hosts File Location: 2 - Replace http://localhost/auth/ with http://keycloak:8080/auth/ in your Keycloak config for AngularJS and Angular (see examples below). -> Explanation - -Nginx uses the http://keycloak:8080 URL, which is accessible via the internal docker system's network. +> **Explanation**: Nginx uses the http://keycloak:8080 URL, which is accessible via the internal docker system's network. The Front-end (TS/Angular) shall also use the http://keycloak:8080. This way, you will not get the invalid token error, as the API is acquiring the token from http://keycloak:8080 (internally) and the Front-end is getting verified by an issuer at the same URL, as well. @@ -500,7 +498,7 @@ nano config.prod.json After editing, the displayed properties should look like the example below: -```yaml +```json { "OAUTH_CONFIG" : { "issuer": "http://keycloak:8080/auth/realms/openslice", @@ -531,7 +529,7 @@ nano config.js After editing, the displayed properties should look like the example below: -``` +```js var appConfig = angular.module('portalwebapp.config',[]); @@ -549,4 +547,4 @@ appConfig.factory('APIEndPointService', function() { After successfully deploying and configuring OpenSlice, you may configure its environment (e.g. the NFVO) that will facilitate the deployment of NFV artifacts. -See [NFV Orchestrator Configuration](./nfvoconfig.md). +See [NFV Orchestrator Configuration](../configuration/nfvo_config.md). diff --git a/doc/deployment.md b/doc/getting_started/deployment/introduction.md similarity index 53% rename from doc/deployment.md rename to doc/getting_started/deployment/introduction.md index e2a495a67146ca9f1e31d7009bb2024f0cd98734..ff2889baa3eb91b7eecb2c326152f29ffcc54443 100644 --- a/doc/deployment.md +++ b/doc/getting_started/deployment/introduction.md @@ -1,11 +1,11 @@ # OpenSlice Deployment -This section is meant to guide the user through the installation of OpenSlice. +**Intended Audience: OpenSlice Administrators** -## Intended Audience: OpenSlice administrators +This section is meant to guide the user through the installation of OpenSlice. 
Following, you may thorough guides depending on the installation type of your choice: -- [Installing via Docker Compose guide](./deploymentCompose.md) -- [Installing via Kubernetes guide](./deploymentK8s.md) +- [Installing via Docker Compose guide](./docker_compose.md) +- [Installing via Kubernetes guide](./kubernetes.md) diff --git a/doc/deploymentK8s.md b/doc/getting_started/deployment/kubernetes.md similarity index 64% rename from doc/deploymentK8s.md rename to doc/getting_started/deployment/kubernetes.md index 1fec43033e55756a3006eb415731d2950fbff52e..a3f794ed12cd8be296002fa290616cf2d2af813c 100644 --- a/doc/deploymentK8s.md +++ b/doc/getting_started/deployment/kubernetes.md @@ -1,10 +1,10 @@ # OpenSlice Deployment Guide with Kubernetes -## Intended Audience: OpenSlice administrators +**Intended Audience: OpenSlice Administrators** ## Requirements -### Hardware requirements: +### Hardware requirements | **Minimum Hardware Requirements** | **Recommended Hardware Requirements** | | --------------------------------- | ------------------------------------ | @@ -12,22 +12,49 @@ | 8 GB RAM | 16 GB RAM | | 30 GB storage | 50 GB storage | -### Software Requirements: +### Software Requirements * **git:** For cloning the project repository. * **Kubernetes:** A running cluster where OpenSlice will be deployed. * **Disclaimer:** The current manual setup of Persistent Volumes using `hostPath` is designed to operate with **only a single worker node**. This setup will not support data persistence if a pod is rescheduled to another node. * **Helm:** For managing the deployment of OpenSlice. * **Ingress Controller:** Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. You must have an Ingress controller to satisfy an Ingress. - * An Nginx ingress controller is required, which can be installed using [this guide](https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/installation-with-helm/). - * If you use another type of ingress controller, you'll need to modify `[repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml` to conform to your ingress controller's requirements. + * **Nginx Ingress Controller (Kubernetes Community Edition):** The ingress resource is configured to use an Nginx type ingress controller. + * If you need to expose the message bus service (Artemis), which communicates using the TCP protocol, you must use version **>= 1.9.13** of the Nginx Ingress Controller (a prerequisite for [managing multiple kubernetes clusters](#management-of-multiple-kubernetes-clusters)). This version or higher includes the required functionality to handle TCP services. Otherwise, earlier versions may suffice depending on your configuration. + * To install or upgrade to the required version, run the following command: + + ``` bash + helm upgrade nginx-ingress ingress-nginx/ingress-nginx --namespace ingress \ + --set tcp.61616="<openslice-namespace>/<openslice-helm-release-name>-artemis:61616" + ``` + Replace `<helm-release-name>` with the name of your OpenSlice Helm release. 
+ + * More details regarding the Nginx Ingress Controller (Kubernetes Community Edition) can be found [here](https://kubernetes.github.io/ingress-nginx/deploy/). + + * **Other Ingress Controller:** For non-Nginx ingress controllers, modify `[repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml` to meet your controller’s requirements. + +### Exposure + +#### Option 1 - Load balancer + * **Network Load Balancer:** Required for exposing the service (e.g., GCP, AWS, Azure, MetalLB). * **Domain/IP Address:** Necessary for accessing the application. This should be configured in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `rooturl`. +#### Option 2 - Ingress + +* **Ingress Controller with NodePort:** You can expose the application using the NodePort of the Ingress Controller's service. +* **IP Address and Port:** Use the IP address of the **master node** and the assigned NodePort to access the application. This should be configured in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `rooturl`. + +For example: +``` +rooturl: http://<master-node-ip>:<nodeport> +``` + ### Additional Configuration * **Storage Class:** In a production environment, specify your `storageClass` in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `storageClass`. If not defined, PVs will be created and managed manually. - * **Disclaimer:** Before deploying, confirm that your storage system supports claims of one 10G and two 1G volumes. + +> **Disclaimer:** Before deploying, confirm that your storage system supports claims of one 10G and two 1G volumes. ## Preparing the environment @@ -57,21 +84,19 @@ cd org.etsi.osl.main/kubernetes/helm/openslice/ ### 3. Prerequisites before deployment -Before deploying the Helm chart, ensure you have configured the necessary components as detailed in the following section, i.e. [Configure Helm Chart Services](#configure-helm-chart-services). By default, the `main` branch is selected for deployment. +Before deploying the Helm chart, ensure you have configured the necessary components as detailed in the following section, i.e. [Configure Helm Chart Services](#configure-helm-chart). We recommend: * main branch for the most stable experience and -* develop branch for an experience with the latest features (for develop branch installation, it is strongly advisable that you may as well follow the [develop documentation](https://osl.etsi.org/documentation/develop/deployment/)) +* develop branch for an experience with the latest features (for develop branch installation, it is strongly advisable that you may as well follow the [develop documentation](https://osl.etsi.org/documentation/develop/)) -## Configure Helm Chart Services +## Configure Helm Chart When deploying OpenSlice with Helm, service configurations are handled through the `values.yaml` file. This file allows you to define all necessary configurations for your deployment, including database credentials, service URLs, and logging levels. Below are examples of how to configure your services in Helm based on your provided values. -### Configuring Services - -#### 1. Database Configuration +### Database To configure MySQL and other related services, you can directly set the values in your `values.yaml` file under the `oscreds` and `mysql` sections. For example: @@ -92,7 +117,7 @@ oscreds: password: "12345" ``` -#### 2. Keycloak Configuration +### Keycloak Keycloak settings, including the database and admin password, are part of the `oscreds.mysql.keycloak` section. 
If you need to adjust Keycloak-specific settings like realms or client configurations, you'll likely need to customize your Helm chart further or manage these settings directly within Keycloak after deployment. The Keycloak realm configuration that is imported by default can be found under `kubernetes/helm/openslice/files/keycloak-init/realm-export.json`. @@ -106,33 +131,114 @@ oscreds: adminpassword: "Pa55w0rd" ``` -#### 3. CRIDGE Configuration +### CRIDGE + +To create and manage Kubernetes Custom Resources (CRs), you have to install and configure the CRIDGE component. + +For CRIDGE to work properly, you need to provide a **cluster-wide scope kubeconfig** file (typically located at `/home/{user}/.kube` directory of the Kubernetes Cluster's host). This kubeconfig file allows CRIDGE to communicate with your Kubernetes cluster. + +There are two ways to install CRIDGE: + +#### **Bundled CRIDGE deployment with the OpenSlice Helm chart (same cluster environment)** + +By default, the OpenSlice Helm chart also deploys CRIDGE alongside the bundle. To configure CRIDGE, there are three different ways to provide this kubeconfig file during deployment: + +1. **Manual Copy to Helm Files Directory**: + + - Copy the kubeconfig file to the following directory: + `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge`. + - The deployment process will automatically copy the file into the `/root/.kube` directory of the CRIDGE container. + - **Note:** This method expects the kubeconfig file to be named exactly `kubeconfig.yaml` in the specified directory. + +2. **Passing the Kubeconfig File Using Helm (`--set-file`)**: + + - If you do not wish to manually copy the file, you can pass it directly during the Helm installation using the `--set-file` option, at the final [deployment process](#deploy-the-helm-chart): + + ```bash + --set-file cridge.kubeconfig.raw=path/to/kubeconfig.yaml + ``` + + - This method reads the specified kubeconfig file and mounts it into the CRIDGE container during deployment. + +3. **Passing a Base64-Encoded Kubeconfig Using Helm (`--set`)**: + + - Alternatively, you can pass the kubeconfig as a base64-encoded string, during the Helm installation using the `--set` option, at the final [deployment process](#deploy-the-helm-chart): + + ```bash + --set cridge.kubeconfig.base64="$(base64 path/to/kubeconfig.yaml)" + ``` + + - This method encodes the kubeconfig content and passes it directly to the CRIDGE container. + +> **Note:** Regardless of the method you choose, if you're using a non-standard kubeconfig file name, make sure to adjust the references or rename the file as needed. + +#### **Standalone CRIDGE deployment** + +There can be cases where a separate deployment of CRIDGE, apart from the bundled OpenSlice deployment, may be needed. These cases comprise: + +- remote cluster management, different from the one OpenSlice is installed +- more control over the component (e.g. multiple component instances / clusters) + +In this case, initially you have to disable CRIDGE from deploying with the rest of OpenSlice. To do so, in the `values.yaml` of OpenSlice Helm chart, you have to change the `cridge.enabled` flag to `false`. + +```yaml +cridge: + enabled: false +``` + +Following, clone the CRIDGE project from the GitLab, which also includes the respective standalone Helm chart. 
+ +```bash +git clone https://labs.etsi.org/rep/osl/code/org.etsi.osl.cridge.git +cd org.etsi.osl.cridge/helm/cridge/ +``` + +Similarly, to configure CRIDGE, there are three different ways to provide this kubeconfig file during deployment: -If you want to create and manage Kubernetes Custom Resources (CRs), you will have to provide: +1. **Manual Copy to Helm Files Directory**: + - Copy the kubeconfig file to the following directory: + `org.etsi.osl.cridge/helm/cridge/files/org.etsi.osl.cridge`. + - The deployment process will automatically copy the file into the `/root/.kube` directory of the CRIDGE container. + - **Note:** This method expects the kubeconfig file to be named exactly `kubeconfig.yaml` in the specified directory. -- a cluster-wide scope kubeconf file (typically located at `/home/{user}/.kube` directory of the Kubernetes Cluster's host) +2. **Passing the Kubeconfig File Using Helm (`--set-file`)**: + - If you do not wish to manually copy the file, you can pass it directly during the Helm installation using the `--set-file` option: -You will have to copy the kubeconf file to the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge` directory, *prior to the deployment*. + ```bash + helm install cridge-release . --set-file kubeconfig.raw=path/to/kubeconfig.yaml + ``` -By default, the deployment process copies the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge/config` file into the `/root/.kube` directory of the CRIDGE container. + - This method reads the specified kubeconfig file and mounts it into the CRIDGE container during deployment. -> **The above configuration works for the default kubeconf file names. It explicitly expects a file named `config` within the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge` directory. If you are working with custom kubeconf file names, you will have to rename them.** +3. **Passing a Base64-Encoded Kubeconfig Using Helm (`--set`)**: + - Alternatively, you can pass the kubeconfig as a base64-encoded string: + + ```bash + helm install cridge-release . --set kubeconfig.base64="$(base64 path/to/kubeconfig.yaml)" + ``` + + - This method encodes the kubeconfig content and passes it directly to the CRIDGE container. -OpenSlice also offers management support of *multiple Kubernetes Clusters* simultaneously. For this, you will have to: +> **Note:** Regardless of the method you choose, if you're using a non-standard kubeconfig file name, make sure to adjust the references or rename the file as needed. -- add all the respective kubeconf files into the `org.etsi.osl.main/compose/kubedir` directory. -- create a copy of the `cridge.yaml` and `cridge-config.yaml` in `\org.etsi.osl.main\kubernetes\helm\openslice\templates` directory for every Cluster. *Mind the need for different naming*. -- update every `cridge-config.yaml` file to get the appropriate kubeconf file for every Cluster. +> **Important Note:** If you are deploying CRIDGE in the same cluster and namespace as OpenSlice, no additional configuration is required for the message bus broker URL and OpenSlice communicates with CRIDGE directly. However, if CRIDGE is installed in a **separate Kubernetes cluster** from the one hosting OpenSlice, it is important to configure the `values.yaml` file for the CRIDGE Helm chart to point to the correct message bus broker URL. Please see [Nginx Ingress Controller (Kubernetes Community Edition) configuration](#software-requirements) on how to properly expose the message bus in such scenario. 
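Before deploying a standalone CRIDGE against a remote OpenSlice installation, it can be useful to confirm that the Artemis port exposed through the OpenSlice ingress is actually reachable from the remote cluster's network (an illustrative check; `<openslice-rooturl>` is the same host or IP used for the broker URL below):

```bash
# From a host inside the remote cluster's network, check TCP reachability of the broker port
nc -vz <openslice-rooturl> 61616
```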
-Below you may find an indicative example that only references the affected fields of each cridge-config.yaml file: +In the `values.yaml` of the CRIDGE Helm chart, you must set `oscreds.activemq.brokerUrl` to point to the IP address of the ingress controller in the OpenSlice cluster, as shown below: ```yaml -data: - config: |- - {{- .Files.Get "files/org.etsi.osl.cridge/config-clusterX" | nindent 4 }} +oscreds: + activemq: + brokerUrl: "tcp://<openslice-rootURL>:61616?jms.watchTopicAdvisories=false" ``` -#### 4. External Services Configuration +#### Management of multiple Kubernetes Clusters + +OpenSlice also offers management support of *multiple Kubernetes Clusters* simultaneously. + +For this, you will have to replicate the steps in [Standalone CRIDGE deployment](#standalone-cridge-deployment) for every Cluster. Each CRIDGE instance will be in charged with the management of one Kubernetes Cluster. + + +### External Services (optional) For configuring optional external services like Bugzilla and CentralLog, specify their URLs and credentials in the `values.yaml` file: @@ -153,7 +259,7 @@ Bugzilla should have the following components under the specified product: Also in the 'Main Site Operations' product, a version named 'unspecified' must be created. -#### 5. Application and Logging Configuration +### Application and Logging Application-specific configurations, such as OAuth client secrets, can be set in the `spring` section: @@ -162,7 +268,7 @@ spring: oauthClientSecret: "secret" ``` -#### 6. Ingress and Root URL +### Ingress and Root URL To configure the ingress controller and root URL for OpenSlice, update the rooturl field with your ingress load balancer IP or domain. This setting is crucial for external access to your application: @@ -172,7 +278,7 @@ rooturl: "http://openslice.com" # Example domain rooturl: "http://3.15.198.35:8080" # Example IP with port ``` -#### 7. Persistent Volume for MySQL +### Persistent Volume for MySQL For persistent storage, especially for MySQL, define the storage size under the `mysql` section. This ensures that your database retains data across pod restarts and deployments. @@ -181,7 +287,16 @@ mysql: storage: "10Gi" ``` -### Configure Web UI +### TCP Forwarding for Artemis + +To expose the message bus service (Artemis) via the ingress controller, it’s essential to configure TCP traffic forwarding. Artemis listens on port `61616`, and this traffic needs to be directed to the Artemis service within your Kubernetes cluster. + +In the [Ingress Controller Setup](#software-requirements) section, you already configured the Nginx ingress controller to handle this TCP forwarding. By setting the rule for port `61616`, traffic arriving at the ingress will be forwarded to the Artemis service defined in your Helm release. + +This setup ensures that the message bus service is accessible externally via the ingress controller, completing the necessary configuration for Artemis. + + +### Web UI In folder `kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js` you must make a copy of `config.js.default` file and rename it to `config.js`. @@ -189,7 +304,7 @@ This is **mandatory** for the configuration file to be discoverable. Edit the `config.js` configuration file with your static configuration, if needed. 
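For reference, the copy step described above could look like the following sketch, executed from the repository root (file and folder names as given):

```bash
cd kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js
cp config.js.default config.js
```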
-``` +```js { TITLE: "OpenSlice by ETSI", WIKI: "https://osl.etsi.org/documentation/", @@ -204,7 +319,7 @@ Edit the `config.js` configuration file with your static configuration, if neede -### Configure TMF Web UI +### TMF Web UI In the folder `kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config` there are 3 files available for configuration: @@ -228,10 +343,10 @@ cd kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config E.g. You may edit "TITLE", "WIKI", etc properties with your domain title. Also configure TMF's API and Keycloak's location for the web application, if needed. -``` +```json { "TITLE": "OpenSlice by ETSI", - "PORTALVERSION":"2024Q2", + "PORTALVERSION":"{{{ documentation_version }}}", "WIKI": "https://osl.etsi.org/documentation", "BUGZILLA": "{BASEURL}/bugzilla/", "STATUS": "{BASEURL}/status/", @@ -360,9 +475,9 @@ If a pod is not in the expected state, you can access its logs for troubleshooti kubectl logs <pod-name> -n openslice ``` -## Post installation steps +## Post installation steps (mandatory) -After the successful deployment of OpenSlice, to ensure the E2E user experience, **this section is mandatory**. It contains crucial configuration in regard of authentication and user creation. +After the successful deployment of OpenSlice, to ensure the end-to-end user experience, **this section is mandatory**. It contains crucial configuration in regard of authentication and user creation. ### Configure Keycloak server @@ -372,7 +487,7 @@ The Keycloack server is managing authentication and running on a container at po - Navigate to Administration Console -- Login with the credentials from section [Keycloak Configuration](#3-keycloak-configuration). Default values are: +- Login with the credentials from section [Keycloak Configuration](#keycloak). Default values are: - user: admin - password: Pa55w0rd @@ -411,4 +526,4 @@ This step is mandatory so as to access the OpenSlice Web UI. To add an OpenSlice After successfully deploying and configuring OpenSlice, you may configure its environment (e.g. the NFVO) that will facilitate the deployment of NFV artifacts. -See [NFV Orchestrator Configuration](./nfvoconfig.md). \ No newline at end of file +See [NFV Orchestrator Configuration](../configuration/nfvo_config.md). \ No newline at end of file diff --git a/doc/portals_intro.md b/doc/getting_started/portals.md similarity index 97% rename from doc/portals_intro.md rename to doc/getting_started/portals.md index 426a1fb9cb7492f3b2af65d4044e742a79964352..f118b49ac4a846ffbf7195a936c8d93c1bd7caf5 100644 --- a/doc/portals_intro.md +++ b/doc/getting_started/portals.md @@ -22,7 +22,7 @@ Indicatively, the portal can be used to: - Onboard/Delete VNF/NS packages on specific MANO provider - Deploy a NS to a target MANO provider -More information can be found at [NFV Services](./naas/nfv/nfvservices.md). +More information can be found at [NFV Services](../naas/nfv/nfvservices.md). **Resources Portal** is a designated portal for the: - Resource Administrator - To view the available Resources that are being synchronized from the underlying infrastructure. 
diff --git a/doc/history.md b/doc/history.md index 5608a95fe296b702489f96b385b43394226699fe..15edae0be2c1a53a0288dda12e081fcb568391bf 100644 --- a/doc/history.md +++ b/doc/history.md @@ -2,13 +2,14 @@ * The NFV portal part of OpenSlice was initially developed in H2020 European Research project [5GinFIRE](https://5ginfire.eu) by University of Patras, Greece * OpenSlice core services and APIs were further developed and maintained in H2020 European project [5G-VINNI](https://5g-vinni.eu/) by University of Patras, Greece -* OpenSlice has been a part of OSM's OSS/BSS ecosystem -* OpenSlice is now an ETSI SDG Group since 2023 +* OpenSlice has been a part of [OSM's OSS/BSS ecosystem](https://osm.etsi.org/wikipub/index.php/OSS_BSS) +* OpenSlice has been a part of [ETSI ZSM PoC #2](https://zsmwiki.etsi.org/index.php?title=PoC_2_Automated_Network_Slice_Scaling_in_Multi-Site_Environments) +* OpenSlice is the first ETSI Software Development Group (SDG), established in October 2023 ## Citation -Please cite our [![paper]](https://arxiv.org/abs/2102.03290) if you use OpenSlice in your research +Please cite our [paper](https://arxiv.org/abs/2102.03290) if you use OpenSlice in your research ``` @misc{tranoris2021openslice, diff --git a/doc/index.md b/doc/index.md index ac9dbf4dc5f4b6d25f546f21a6411dbb4af73cde..198cf6593a71573d949c24ee805afbcdcd677543 100644 --- a/doc/index.md +++ b/doc/index.md @@ -1,6 +1,9 @@ + +# Introduction + <img src="images/openslice_logo.png" alt="logo" width="200"/> -**Version**: 2024Q2 ([Release Notes](https://labs.etsi.org/rep/osl/code/org.etsi.osl.main/-/releases/2024Q2)) +**Version**: {{{ documentation_version }}} ([Release Notes](https://labs.etsi.org/rep/osl/code/org.etsi.osl.main/-/releases/{{{ documentation_version }}})) The ETSI Software Development Group for OpenSlice (SDG OSL) is developing an open-source service-based Operations Support System (OSS) to deliver Network as a Service (NaaS) following specifications from major SDOs including ETSI, TM Forum and GSMA. @@ -9,7 +12,7 @@ The ETSI Software Development Group for OpenSlice (SDG OSL) is developing an ope OpenSlice can be used in managing 5G network services from the user device to the core network and cloud as well as for orchestrating cloud resources across private and public clouds for enterprise applications. OpenSlice is capable of supporting most of the features of an end-to-end (E2E) service orchestration framework while many of them will be more mature in future releases. The following figure displays the general usage of OpenSlice. -[](./images/global_approach.png) + The image illustrates how OpenSlice supports the idea of an E2E network service orchestration framework by integrating multiple network components and layers, from user devices at the edge to radio, transport networks, core and public cloud services, ensuring seamless, secure, and efficient delivery of network services. Assuming that there are domain controllers for all the above domains, OpenSlice can create and deploy the end-to-end service via these domain controllers, implementing transformations and consuming APIs from various network entities. OpenSlice, in a nutshell, offers user interfaces where users can interact with the framework to order, expose, and manage service catalogs, services and resources that can be ordered, following business logic and policies and exposed through the APIs.
@@ -79,9 +82,10 @@ Login credentials: * username=admin, password=openslice * username=admin, password=changeme -# Probe further +## Probe further + +* How OpenSlice works? See the [Architecture](./architecture/architecture.md) section +* Installing OpenSlice? See the [Deployment](./getting_started/deployment/introduction.md) section +* Learn more on [how OpenSlice supports Network as a Service(NaaS)](./naas/introduction.md) +* Who is maintaining OpenSlice? See [OSL ETSI SDG](https://osl.etsi.org/) -* Installing OpenSlice. See the [Deployment](deployment.md) of OpenSlice -* Learn more on [how OpenSlice supports Network as a Service(NaaS)](./naas/introduction) -* Who is implementing OpenSlice? See [OSL ETSI SDG](https://osl.etsi.org/) -* How OpenSlice works? See the [Architecture](./architecture/architecture.md) of OpenSlice diff --git a/doc/naas/gst_to_tmf.md b/doc/naas/gst_to_tmf.md index 8de472024ac206982ef938b28e86ffdf51584fe2..6def11a82622e8661323e6e8e5032c17c5f573d2 100644 --- a/doc/naas/gst_to_tmf.md +++ b/doc/naas/gst_to_tmf.md @@ -1,6 +1,6 @@ # Generic Slice Template as a Service Specification -## Intended Audience: Service Designers +**Intended Audience: OpenSlice Service Designers** GSMA Generic Slice Template (GST) Defines customer-oriented service requirements, E.g. Availability, Area of service, delay tolerance, etc. and attempts to narrow down the gap between (network) service customers and vendors @@ -11,7 +11,7 @@ In OpenSlice we made an effort and translated the GST to a Service Specification The image illustrates the relationship between the GSMA Generic Slice Template (GST), TM Forum Service Specification, and how they are utilized within OpenSlice to offer network services. -[](./gst_to_gsma/img01.png) + The GST to TM Forum via OpenSlice: diff --git a/doc/naas/introduction.md b/doc/naas/introduction.md index e986a8c77dae83bf7753a93e1179bce8afa3b32b..0a95245745e197a30b51aaa1b18fd62c91ad5416 100644 --- a/doc/naas/introduction.md +++ b/doc/naas/introduction.md @@ -2,13 +2,12 @@ This section describes some core concepts for Delivering Network as a Service in OpenSlice. There are many articles and reports on the subject like: - * TMF909 API Suite Specification for NaaS -* TMF926A Connectivity as a Service -* TMF931-Open Gateway Onboarding and Ordering Component Suite -* GSMA Open Gatewy initiative +* TMF926A Connectivity as a Service +* GSMA Open Gateway initiative +* TMF931 Open Gateway Onboarding and Ordering Component Suite -In general Network as a Service (NaaS) is a service model that allows users to consume network infrastructure and services , similar to how they would consume other cloud services like Software as a Service (SaaS) or Infrastructure as a Service (IaaS). NaaS abstracts the complexity of managing physical network infrastructure, providing users with virtualized network resources that can be dynamically allocated and managed through software. +In general Network as a Service (NaaS) is a service model that allows users to consume network infrastructure and services, similar to how they would consume other cloud services like Software as a Service (SaaS) or Infrastructure as a Service (IaaS). NaaS abstracts the complexity of managing physical network infrastructure, providing users with virtualized network resources that can be dynamically allocated and managed through software. 
## OpenSlice and NaaS diff --git a/doc/naas/lcm_intro.md b/doc/naas/lcm_intro.md index 5670c58f2197bba6e08f015996b137f3f307842e..77e844ac77a4c652e8d6cb71833fc293d88ddf96 100644 --- a/doc/naas/lcm_intro.md +++ b/doc/naas/lcm_intro.md @@ -1,16 +1,14 @@ # Lifecycle Management - LCM - -Lifecycle Management: The orchestration framework handles the activation, termination and any necessary modifications throughout the service lifecycle. +**Intended Audience: OpenSlice Service Designers** -## Intended Audience: Service Designers - +Lifecycle Management: The orchestration framework handles the activation, termination and any necessary modifications throughout the service lifecycle. In OpenSlice the Lifecycle of a service follows in general the concept of Network Slice lifecycle as defined by 3GPP. -[](./lcm/img01.png) + ## Introduction in OpenSlice LCM diff --git a/doc/naas/lcm_rules_intro.md b/doc/naas/lcm_rules_intro.md index f7d70588e8a9b783d071a147571db4d57183fe95..5341327616f9230987d4da4819fd6fb00944b43c 100644 --- a/doc/naas/lcm_rules_intro.md +++ b/doc/naas/lcm_rules_intro.md @@ -1,22 +1,20 @@ # Lifecycle Management Rules - LCM Rules +**Intended Audience: OpenSlice Service Designers** Lifecycle Management Rules: Defining complex conditions and actions during the lifecycle of a service and any necessary modifications throughout the service lifecycle. +OpenSlice end-to-end (E2E) service orchestrator follows some predefined workflows to manage a service lifecycle (They are described in BPMN language and included in our orchestration engine) -## Intended Audience: Service Designers +So in the system there are already predefined recipes, which in each process-step of the workflow some piece of code is executed. - OpenSlice end-to-end (E2E) service orchestrator follows some predefined workflows to manage a service lifecycle (They are described in BPMN language and included in our orchestration engine) +How is it possible to intervene in the workflow process and inject some user defined actions? The next image illustrates the idea - So in the system there are already predefined recipes, which in each process-step of the workflow some piece of code is executed. - - How is it possible to intervene in the workflow process and inject some user defined actions? The next image illustrates the idea - -[](./lcm/img02.png) + ## How is it possible to intervene in the workflow process and affect it? -LCM Rules are used for defining complex conditions and actions during the lifecycle of a service. In Openslice there are the following types of rules defined: +LCM Rules are used for defining complex conditions and actions during the lifecycle of a service. In OpenSlice there are the following types of rules defined: * PRE_PROVISION * CREATION @@ -27,7 +25,7 @@ LCM Rules are used for defining complex conditions and actions during the lifecy The following figure displays the different phases that the rules are performed, during the lifecycle of a Network Service Instance. -[](./lcm/img03.png) + * PRE_PROVISION rules: Run only once just before creating a service with a given priority. diff --git a/doc/naas/service_catalog.md b/doc/naas/service_catalog.md index c29d2b0f5781149fc4ddce8512f47cde81a561c9..56a4694d88eb06d54e87fa29fdd5c7e5ee93d422 100644 --- a/doc/naas/service_catalog.md +++ b/doc/naas/service_catalog.md @@ -1,13 +1,10 @@ # OpenSlice Service Catalogs -OpenSlice offers complete management of Service Catalogs. 
+**Intended Audience: OpenSlice Service Designers, Administrators, Users** -## Intended Audience: Service Designers, OpenSlice administrators, Users +OpenSlice offers complete management of Service Catalogs to end users, which comprises: - -OpenSlice offers complete management of Service Catalogs which offer to end users: - -* Service categories: Lists the available services, including their specifications and performance metrics. +* Service Categories: Lists the available services, including their specifications and performance metrics. * Service Bundles: Combines multiple services into a single offering to provide added value to customers. Service Catalogs contain Service Specifications (organized in Service Categories) exposed to users for Service Orders. @@ -15,11 +12,11 @@ Service Catalogs contain Service Specifications (organized in Service Categories ## UI management -In the UI this looks like the following. Service catalogs and categories exposed in Service marketplace. +The UI is built around Service Catalogs and Categories exposed in the Service Marketplace. In the menu the administrator can manage the Service Catalogs and Categories. -[](./service_catalog/img01.png) + ## API exposed diff --git a/doc/naas/service_inventory.md b/doc/naas/service_inventory.md index f162b499292af5137873ac24baa81fcc2ac3b870..51c63a52e5cd739b313f3d691281e2ecbf733e75 100644 --- a/doc/naas/service_inventory.md +++ b/doc/naas/service_inventory.md @@ -1,9 +1,8 @@ # Service Inventory -Service Inventory contains references to running services that realize a Service Order. - -## Intended Audience: Service Designers, OpenSlice administrators, Users +**Intended Audience: OpenSlice Service Designers, Administrators, Users** +Service Inventory contains references to running services that realize a Service Order. The Service Inventory is a repository that maintains detailed records of all active services and the underlying resources that support them. It acts as a central repository, tracking the lifecycle of each service from provisioning to decommissioning, and includes references to the specific virtual and physical resources that realize the service, such as servers, network components, storage, and software instances. diff --git a/doc/naas/service_ordering.md b/doc/naas/service_ordering.md index 29584172298d6851fed7ad1e988880e77650bd8f..48c40d2e05c3ebacea168f05f5a2f35a250ca7a0 100644 --- a/doc/naas/service_ordering.md +++ b/doc/naas/service_ordering.md @@ -1,15 +1,14 @@ # Service Ordering -Customer Facing Service Specifications - or also CFSSpec (organized in Service Categories) are exposed to users for Service Orders. - -## Intended Audience: Service Designers, OpenSlice administrators +**Intended Audience: OpenSlice Service Designers, Administrators** +Customer Facing Service Specifications - or also CFSSpec (organized in Service Categories) are exposed to users for Service Orders. The Service Order process is a structured sequence of steps initiated by a customer's Service Order request for a specific service, aimed at delivering and activating the desired service or services (if it is a service bundle), as well as its related services. It begins with the customer submitting a service request through OpenSlice Services portal or the Service Order API, specifying the necessary details such as service specification, configurations, and any specific requirements. 
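For illustration, a minimal Service Order submitted directly against the TMF-based Service Ordering API might resemble the following sketch; the base URL, token and exact payload fields are assumptions here and must be adapted to your deployment and the published API specification:

```bash
curl -X POST "http://openslice.example.org/serviceOrdering/v4/serviceOrder" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
        "orderItem": [
          {
            "action": "add",
            "service": {
              "serviceSpecification": { "id": "<service-spec-uuid>" }
            }
          }
        ]
      }'
```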
The request is then validated and verified for completeness and eligibility by an administrator, who marks the Service Order as ACKNOWLEDGED or otherwise rejects it. -Once ACKNOWLEDGED, the service order is processed by OpenSlice orchestration system (OSOM), which schedules/automates the provisioning of the required resources and configurations, coordinating across various components such as MANO controlers for virtual network functions (VNFs), or Containerized controllers or any 3rd party controllers or services or even physical infrastructure. The OpenSlice orchestration system ensures that all dependencies are managed and that the service is correctly configured. +Once ACKNOWLEDGED, the service order is processed by the OpenSlice orchestration system (OSOM), which schedules/automates the provisioning of the required resources and configurations, coordinating across various components such as MANO controllers for virtual network functions (VNFs), or Containerized controllers or any 3rd party controllers or services or even physical infrastructure. The OpenSlice orchestration system ensures that all dependencies are managed and that the service is correctly configured. After provisioning, the service is activated and handed over to the customer. This end-to-end process ensures a seamless, efficient, and automated delivery of services, enhancing customer satisfaction and operational efficiency. diff --git a/doc/naas/service_spec.md b/doc/naas/service_spec.md index 983acdadf56f805170a02cf84f39f2675837870d..50a2b7449baa21d57d2a8d1654118db9252f03e2 100644 --- a/doc/naas/service_spec.md +++ b/doc/naas/service_spec.md @@ -1,13 +1,13 @@ # OpenSlice Service Specification -OpenSlice offers complete management of Service Specifications. +**Intended Audience: OpenSlice Service Designers** -## Intended Audience: Service Designers +OpenSlice offers complete management of Service Specifications. Service Specification is an entity that describes a service offering. There are two types of Service Specifications: -* Resource Facing Service Specification -* Customer Facing Service Specification +* Resource Facing Service Specification (RFSS) +* Customer Facing Service Specification (CFSS) ## Resource Facing Service Specification @@ -33,9 +33,9 @@ Usually a Service Specification has the following aspects: Service Designers can create a Service Specification from scratch or use some templates: - * Create a Service based from a Network Service Descriptor (NSD) - * Create a Service based on a Kubernetes Operator - * Create a Service based on the GSMA GST - Generic Slice Template +* Create a Service based on a Network Service Descriptor (NSD) +* Create a Service based on a Kubernetes Operator +* Create a Service based on the GSMA GST - Generic Slice Template ## UI management @@ -55,12 +55,11 @@ endpoint examples: /serviceCatalogManagement/v4/serviceSpecification List or find ServiceSpecification objects - ## Example Use Case Scenario: A service provider wants to offer a new managed XXXX service to enterprise customers. - * Service Definition: Create a service specification template for the XXXX service, including specifications for bandwidth, network features, and performance metrics. +* Service Definition: Create a service specification template for the XXXX service, including specifications for bandwidth, network features, and performance metrics.
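To verify that the new specification is exposed over the TMF API, the endpoint example listed above can be queried, for instance as follows (an illustrative sketch; the base URL and bearer token are deployment-specific assumptions):

```bash
curl -H "Authorization: Bearer <token>" \
  "http://openslice.example.org/serviceCatalogManagement/v4/serviceSpecification"
```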
## Probe further diff --git a/doc/naas/so_intro.md b/doc/naas/so_intro.md index 7e454bc6a411afdcb4af8c49b2462961cb661637..95c2aa7f68dbb1c4f86d98162d2211f8d152a4e2 100644 --- a/doc/naas/so_intro.md +++ b/doc/naas/so_intro.md @@ -1,14 +1,15 @@ # Service Orchestration -Definition: The orchestration engine evaluates the request, determines the necessary resources, and initiates the automated workflows.It interacts with underlying controller components (e.g. 5G Core, Radios, Containerized controllers, NFV, SDN controllers) to provision and configure the required network functions and connectivity. +**Intended Audience: OpenSlice Service Designers** + +*Definition*: The orchestration engine evaluates the request, determines the necessary resources, and initiates the automated workflows.It interacts with underlying controller components (e.g. 5G Core, Radios, Containerized controllers, NFV, SDN controllers) to provision and configure the required network functions and connectivity. -## Intended Audience: Service Designers OpenSlice end-to-end (E2E) service orchestration framework is designed to manage and automate the entire lifecycle of services across multiple domains and technologies. For delivering, Network as a Service (NaaS) OpenSlice automates and manages the entire lifecycle of network services, from provisioning to monitoring and decommissioning, while ensuring seamless integration, operation, and delivery of services from the initial request to the final delivery, spanning all involved components and layers. As next image depicts, service orchestrators follow some predefined workflows. OpenSlice end-to-end (E2E) service orchestrator follows some predefined workflows to manage a service lifecycle (They are described in BPMN language and included in our orchestration engine). -[](./so/img01.png) + This section provides a high level overview of the Service Orchestration process. diff --git a/doc/naas/so_servicespec_to_services_nfv.md b/doc/naas/so_servicespec_to_services_nfv.md index 7ca3911dae86d5bf9b704184d11415fda1dac81f..aaacbfe96854ea070c0312dbaa3b4155ff94b270 100644 --- a/doc/naas/so_servicespec_to_services_nfv.md +++ b/doc/naas/so_servicespec_to_services_nfv.md @@ -10,7 +10,7 @@ After a Service Order completion, active services with their additional characte Openslice creates a Service for the requested CFS. Customers make Service Orders and Openslice instantiates the requested Service Specifications for each Service Order Item of a Service Order. Running Services instantiated by Openslice, reside in Openslice Service Inventory. The following picture displays how Service Specifications are related to Running Services and how Running Services relate with instantiated running Network Services. -[](./so/service_specification_instantiation.png) + There is a hierarchy of services. Usually an Instantiated CFS has Supporting Services some Instantiated RFSs. Then an Instantiated RFS is related to some running NS managed by NFVO diff --git a/doc/ole_keycloak_management.md b/doc/ole_keycloak_management.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/doc/service_design/catalogs.md b/doc/service_design/catalogs.md index fb89bedb51a0e7d59f2a36db20b61b0b483440fd..b34701caef6638c277f43d3257a2e4f8c509cdd9 100644 --- a/doc/service_design/catalogs.md +++ b/doc/service_design/catalogs.md @@ -87,10 +87,10 @@ Delete it from the assigned Service Category. 
This action does not delete the ac ## Consume and expose Service Specifications from other Service Catalogues -See more on [Consuming Services From External Partner Organizations]( ../architecture/consumingServicesFromExternalPartners.md) +See more on [Consuming Services From External Partner Organizations]( ../getting_started/configuration/consuming_services_from_external_partners.md). ## Probe further -[Design Kubernetes-based Service Specifications](./kubernetes/ExposingKubernetesResources.md/) +[Design Kubernetes-based Service Specifications](./kubernetes/exposing_kubernetes_resources.md/) [Design NFV/OSM-based Service Specifications](./nfv/design_nfv_services.md) diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md b/doc/service_design/examples/calculator_crd_aas/calculator_crd_aas.md similarity index 91% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md rename to doc/service_design/examples/calculator_crd_aas/calculator_crd_aas.md index 6d9dd1525590a20b982dee72b212e0d230d5fff5..f2c0f52c71a86ee872303e0acd7e7030f213c1d0 100644 --- a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md +++ b/doc/service_design/examples/calculator_crd_aas/calculator_crd_aas.md @@ -1,14 +1,12 @@ # Exposing Kubernetes Operators as a Service : Offering "Calculator as a Service" through OpenSlice -## Intended Audience: Service Designers +**Intended Audience: OpenSlice Service Designers** - -> To illustrate the powerful concept of Kubernetes operators and how they can be utilized to offer a service through OpenSlice, let's provide an example of a "Calculator as a Service." +> To illustrate the powerful concept of Kubernetes operators and how they can be utilized to offer a service through OpenSlice, let's provide an example of a "Calculator as a Service." > This example will demonstrate the flexibility and capabilities of Kubernetes operators in managing custom resources and automating operational tasks. - --- ## Offering "Calculator as a Service" through OpenSlice @@ -22,8 +20,7 @@ Assume the following simple CRD of a calculator model accepting two params (spec The controller (the calculator code) is implemented in any language and is installed in a Kubernetes cluster -``` - +```yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: @@ -66,7 +63,7 @@ spec: Request to the cluster (through e.g. kubectl apply) -``` +```yaml apiVersion: examples.osl.etsi.org/v1alpha1 kind: MyCalculator metadata: @@ -75,12 +72,11 @@ spec: parama: 170 paramb: 180 action: 'SUM' - ``` Response -``` +```yaml apiVersion: examples.osl.etsi.org/v1alpha1 kind: MyCalculator metadata: @@ -103,7 +99,7 @@ To perform this through OpenSlice as a Service Specification ready to be ordered --- ### CRD is saved automatically as Resource Specification -As soon as the CRD is deployed in the cluster (e.g. by your admin via kubctl or via any installation through the internet) it is automatically transformed and is available in OpenSlice catalogs as a Resource Specification. +As soon as the CRD is deployed in the cluster (e.g. by your admin via kubectl or via any installation through the internet) it is automatically transformed and is available in OpenSlice catalogs as a Resource Specification. - See also the fully qualified name of the resource specification. 
- MyCalculator@examples.osl.etsi.org/v1alpha1@docker-desktop@https://kubernetes.docker.internal:6443/ @@ -122,9 +118,9 @@ As soon as the CRD is deployed in the cluster (e.g. by your admin via kubctl or --- -# Expose to Users +## Expose to Users -## Start by Creating a ResourceFacingServiceSpecification +### Create a ResourceFacingServiceSpecification From the UI menu create a new Service Specification @@ -135,19 +131,19 @@ From the UI menu create a new Service Specification -### Creation of CRD-related characteristics +#### Create CRD-related characteristics - We need now to adjust some characteristics of this CRD as Resource Specification. -- OpenSlice transalted automatically the CRD spec in a flat list of characteristics.So the "spec" section from the original yaml for example, is now unfold into: spec, spec.parama, spec.paramb, etc. the same for "status" object -- We need to make OpenSlice aware of when the service will be active. +- OpenSlice translated automatically the CRD spec in a flat list of characteristics.So the "spec" section from the original yaml for example, is now unfold into: spec, spec.parama, spec.paramb, etc. the same for "status" object +- We need to make OpenSlice aware of when the service will be active. - So we go to characteristic _CR_CHECK_FIELD and we define that the field that shows the status of the service is the characteristic "status.status" (is a text field) - Then we go to _CR_CHECKVAL_AVAILABLE and we define the value CALCULATED, which signals the following: When the characteristic "status.status" has the value "CALCULATED" then OpenSlice will mark the underlying service as "ACTIVE" - We need also to define the yaml file that OpenSLice will use to create the new resource in the kubernetes cluster - We insert the YAML in the characteristic _CR_SPEC - the _CR_SPEC is: +The _CR_SPEC is: -``` +```yaml apiVersion: examples.osl.etsi.org/v1alpha1 kind: MyCalculator metadata: @@ -165,12 +161,12 @@ spec: > However the values are fixed. 
How do we allow a user to pass parameters through OpenSlice -## Expose in Catalog +### Expose in Catalog Create a new CustomerFacingServiceSpecification - Go to the menu Service Specification>New Service Specification - - Create a service My Calulator and mark it as a Bundle + - Create a service My Calculator and mark it as a Bundle - Go to Service Specification Relationships and add MyCalculatorRFS - The service will be automatically transformed to a "CustomerFacingServiceSpecification" - Add the following characteristics as the image shows: @@ -192,11 +188,8 @@ We need to Create LCM rules in CustomerFacingServiceSpecification:  - - If we see one rule it will look like the following: -  - We need to change the _CR_SPEC characteristic of the referenced ResourceFacingServiceSpecification @@ -206,7 +199,7 @@ If we see one rule it will look like the following: - Add a block for text _CR_SPEC - We use a block that changes a String according to variables Text>"A formatted text replacing variables from List" - See that we have as Input string the YAML string lines - - see that parama, paramb has a %d (they accept integers), action is %s (accepts a string) + - See that parama, paramb has a %d (they accept integers), action is %s (accepts a string) - See that the variables tha will replace the %d, %d and %s are an list - the first %d will be replaced with the value from characteristic spec.parama - the second %d will be replaced with the value from characteristic spec.paramb @@ -237,14 +230,12 @@ Expose it then to a catalogue for orders through the Service Categories and Serv -### Order the Service +## Order the Service When a user orders the service, it will look like this:  - - - After the Service Order we have 2 services in service inventory on CFS and on RFS. 
Both have references to values - OpenSlice (via CRIDGE service) updates the Resource in Resource Inventory and OSOM updates the Services in Service Inventory - The Actual resources are running in the Kubernetes cluster managed by OpenSlice @@ -256,16 +247,14 @@ When a user orders the service, it will look like this:  -### Modify the running service +## Modify the running service - The user can modify the service +The user can modify the service -  - + - - After a while the update is applied to the cluster, the controller will pick up the resource update and patch the resource - OpenSlice (via CRIDGE service) updates the Resource in Resource Inventory and OSOM updates the Services in Service Inventory - The result will be available to the respective characteristic "Result" after a few seconds, as need to go through various steps (OpenSlice orchestrator, down to kubernetes, to Calculator controller and back) -  \ No newline at end of file + \ No newline at end of file diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/cfs_img12.png b/doc/service_design/examples/calculator_crd_aas/cfs_img12.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/cfs_img12.png rename to doc/service_design/examples/calculator_crd_aas/cfs_img12.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img07.png b/doc/service_design/examples/calculator_crd_aas/img07.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img07.png rename to doc/service_design/examples/calculator_crd_aas/img07.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img08.png b/doc/service_design/examples/calculator_crd_aas/img08.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img08.png rename to doc/service_design/examples/calculator_crd_aas/img08.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img09.png b/doc/service_design/examples/calculator_crd_aas/img09.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img09.png rename to doc/service_design/examples/calculator_crd_aas/img09.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img10.png b/doc/service_design/examples/calculator_crd_aas/img10.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img10.png rename to doc/service_design/examples/calculator_crd_aas/img10.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img11.png b/doc/service_design/examples/calculator_crd_aas/img11.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img11.png rename to doc/service_design/examples/calculator_crd_aas/img11.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img12.png b/doc/service_design/examples/calculator_crd_aas/img12.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img12.png rename to doc/service_design/examples/calculator_crd_aas/img12.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13.png b/doc/service_design/examples/calculator_crd_aas/img13.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13.png rename to 
doc/service_design/examples/calculator_crd_aas/img13.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13_1.png b/doc/service_design/examples/calculator_crd_aas/img13_1.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13_1.png rename to doc/service_design/examples/calculator_crd_aas/img13_1.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img14.png b/doc/service_design/examples/calculator_crd_aas/img14.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img14.png rename to doc/service_design/examples/calculator_crd_aas/img14.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img15.png b/doc/service_design/examples/calculator_crd_aas/img15.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img15.png rename to doc/service_design/examples/calculator_crd_aas/img15.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img16.png b/doc/service_design/examples/calculator_crd_aas/img16.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img16.png rename to doc/service_design/examples/calculator_crd_aas/img16.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img17.png b/doc/service_design/examples/calculator_crd_aas/img17.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img17.png rename to doc/service_design/examples/calculator_crd_aas/img17.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img18.png b/doc/service_design/examples/calculator_crd_aas/img18.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img18.png rename to doc/service_design/examples/calculator_crd_aas/img18.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img19.png b/doc/service_design/examples/calculator_crd_aas/img19.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img19.png rename to doc/service_design/examples/calculator_crd_aas/img19.png diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img20.png b/doc/service_design/examples/calculator_crd_aas/img20.png similarity index 100% rename from doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img20.png rename to doc/service_design/examples/calculator_crd_aas/img20.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img01.png b/doc/service_design/examples/jenkins_helm_install_aas/img01.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img01.png rename to doc/service_design/examples/jenkins_helm_install_aas/img01.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img02.png b/doc/service_design/examples/jenkins_helm_install_aas/img02.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img02.png rename to doc/service_design/examples/jenkins_helm_install_aas/img02.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img03.png b/doc/service_design/examples/jenkins_helm_install_aas/img03.png similarity index 100% rename from 
doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img03.png rename to doc/service_design/examples/jenkins_helm_install_aas/img03.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img04.png b/doc/service_design/examples/jenkins_helm_install_aas/img04.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img04.png rename to doc/service_design/examples/jenkins_helm_install_aas/img04.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img05.png b/doc/service_design/examples/jenkins_helm_install_aas/img05.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img05.png rename to doc/service_design/examples/jenkins_helm_install_aas/img05.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img06.png b/doc/service_design/examples/jenkins_helm_install_aas/img06.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img06.png rename to doc/service_design/examples/jenkins_helm_install_aas/img06.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img061.png b/doc/service_design/examples/jenkins_helm_install_aas/img061.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img061.png rename to doc/service_design/examples/jenkins_helm_install_aas/img061.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img07.png b/doc/service_design/examples/jenkins_helm_install_aas/img07.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img07.png rename to doc/service_design/examples/jenkins_helm_install_aas/img07.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img08.png b/doc/service_design/examples/jenkins_helm_install_aas/img08.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img08.png rename to doc/service_design/examples/jenkins_helm_install_aas/img08.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img09.png b/doc/service_design/examples/jenkins_helm_install_aas/img09.png similarity index 100% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/img09.png rename to doc/service_design/examples/jenkins_helm_install_aas/img09.png diff --git a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example.md b/doc/service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas.md similarity index 98% rename from doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example.md rename to doc/service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas.md index 8623e5625385581820b7c5001b478216c2c90359..dae3315444d107f8acf7056c59731234a7c44610 100644 --- a/doc/service_design/examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example.md +++ b/doc/service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas.md @@ -2,7 +2,7 @@ ## Design the Jenkins (Resource-Facing) Service -Before reading this example please make sure that you went through the [Design Helm as a Service](../../kubernetes/helm/design_helmaas.md) +Before reading this example please make sure that you went through the [Design Helm as a 
Service](../../kubernetes/design_helm_aas.md) In this example, we will use the ```Kind: Application``` of ArgoCD and create a ResourceFacingServiceSpecification (RFSS) for Jenkins. Eventually, we will offer Jenkins as a Service. @@ -47,7 +47,7 @@ spec: name: in-cluster source: repoURL: https://charts.jenkins.io - targetRevision: 5.3.6 + targetRevision: 5.7.21 chart: jenkins helm: values: | diff --git a/doc/service_design/intro.md b/doc/service_design/intro.md index d3ba61cdacab1d4f0ba837cbb228eed7bb126890..140c814639d6ed88eff06dfb17034240f06794b0 100644 --- a/doc/service_design/intro.md +++ b/doc/service_design/intro.md @@ -1,8 +1,9 @@ # Service Design in OpenSlice -This section offers details on how to design Service Specifications and expose them in Service Catalogs +**Intended Audience: OpenSlice Service Designers** + -## Intended Audience: Service Designers +This section offers details on how to design Service Specifications and expose them in Service Catalogs Service Designers create detailed service specifications, which are then managed and exposed in service catalogs. These services are integrated into OpenSlice E2E service orchestration framework to automate and optimize the delivery of network services. @@ -11,7 +12,7 @@ OpenSlice can be used to design service specifications for various services, eve ## Probe further -* [Design and expose services based on containerized resources via the Kubernetes Operator pattern](./kubernetes/ExposingKubernetesResources.md) +* [Design and expose services based on containerized resources via the Kubernetes Operator pattern](./kubernetes/exposing_kubernetes_resources.md) * [Design and expose services based on NFV artifacts](./nfv/design_nfv_services.md) diff --git a/doc/service_design/kubernetes/ExposingKubernetesResources.md b/doc/service_design/kubernetes/ExposingKubernetesResources.md deleted file mode 100644 index e17df5f374b109c1eb95b5a22cd338025dff4a05..0000000000000000000000000000000000000000 --- a/doc/service_design/kubernetes/ExposingKubernetesResources.md +++ /dev/null @@ -1,177 +0,0 @@ - -# Expose and manage Kubernetes Custom Resource Definitions (Operators) in a Kubernetes Cluster - -OpenSlice is capable of exposing Kubernetes Resources and Definitions as Service Specifications. - -## Intended Audience: Service Designers - -Use OpenSlice to expose NFV resources in service catalogs and deploy them in complex scenarios (service bundles) involving also other systems: - -* Include external resources, e.g. RAN controllers -* Manage multiple NSDs in linked NFVOs (OSM installations) -* Combine designed services -* Control the lifecycle of services and pass values from one service to another - - > Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact though the Kubernetes API and it has a set of objects ready for use out of the box. Custom Resource Definitions (CRDs) is a way that allows to manage things other than Kubernetes itself and allows to create our own objects The use of CRDs makes the possibilities of Kubernetes management almost limitless. You can extend the base Kubernetes API with any object you like using CRDs. - - > By allowing the design and lifecycle management of services/resources that expose CRDs/CRs in a Kubernetes cluster via the TMF APIs, OSL can be used in many complex scenarios now involing resources from multiple domains. - - -1. OpenSlice is capable to: - - Create and manage Custom Resources (CRs) using installed CRDs on a target Kubernetes cluster. 
- - Facilitate complex orchestration scenarios by wrapping Kubernetes APIs as TMF APIs and models. - - Handles connectivity to a Kubernetes cluster and manages the lifecycle of CRDs - - Wraps the Kubernetes API, Receives and provides resources towards other OpenSlice services via the service bus - -2. Enabling Loose Coupling and Orchestration - - Language Flexibility: Developers can write CRDs in any language and expose them via the Kubernetes APIs. OSL will reuse these CRDs, enhancing flexibility and integration capabilities. - - Familiar Deployment: Developers can create and deploy applications using familiar tools such as Helm charts, simplifying the process and reducing the learning curve. - -3. Ecosystem Reusability - - OpenSlice capitalizes on the extensive Kubernetes ecosystem, particularly focusing on operators (CRDs). - - Key repositories and hubs such as artifacthub.io and Operatorhub.io can be utilized for finding and deploying operators. - -4. Service Catalog Exposure and Deployment - - OpenSlice can expose CRs in service catalogs, facilitating their deployment in complex scenarios. - - These scenarios may include service bundles that involve multiple systems, such as RAN controllers or other Kubernetes clusters, providing a robust and versatile deployment framework. - -## Approach - - > OpenSlice in general is responible for exposing service specifications which are ready to be ordered and orchestrated, through tmforum Open APIs as defined in the OSL Service Spec Catalog. Usually for a service specification a corresponding (one or more) resource specification (resourceSpecificationReference) is registered in the OSL Resource Spec Catalog. - -The following image illustrates the approach. - - - - -1. A CRD in a cluster will be mapped in TMF model as a Resource specification and therefore can be exposed as a service specification in a catalog -2. Service Orders can be created for this service specification. -3. OSOM creates a Resource in OSL Resource inventory and requests new Custom Resource (CR) in the target cluster - - The resource is created in a specific namespace (for example the UUID of the Service Order) - - A CR in a cluster will be mapped in TMF model as a Resource in the resource Inventory - - Other related resources created by the CRD Controller within the namespace are automatically created in OSL Resource Inventory under the same Service Order - - -## Awareness for CRDs and CRs in cluster - -> CRDs and CRs can appear (disappear) or change status at any time in a cluster. OpenSlice Resource Inventory need to be aware of these events. - - When installing OpenSlice you can configure at least one management cluster. OpenSlice connects via a provided kubeconf - -- On Start up OSL tries to register this cluster and context to OSL catalogs. -- After the registration of this cluster as a Resource in OSL OSL is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL -- Resources created by OpenSlice have labels, e.g. 
(org.etsi.osl.*) - -## Expose CRDs as Service Specifications in OpenSlice catalogs - -**A CRD by default is exposed as a Resource Specification** - -To ensure unique names across the clusters that OpenSlice can manage, the name of a CRD is constructed as follows: - -```Kind @ ApiGroup/version @ ContextCluster @ masterURL``` - -For example you might see resource Specifications like: - - - ```Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/``` - - ```IPAddressPool@metallb.io/v1beta1@kubernetes@https://10.10.10.144:6443/``` - - ```Provider@pkg.crossplane.io/v1@kubernetes@https://10.10.10.144:6443/``` - -All attributes of the CRD are translated into characteristics - -The following specific characteristics are **added**: - - - _CR_SPEC: Used for providing the json Custom Resource description to apply - - _CR_CHECK_FIELD: Used for providing the field that need to be checked for the resource status - - _CR_CHECKVAL_STANDBY: Used for providing the equivalent value from resource to signal the standby status - - _CR_CHECKVAL_ALARM: Used for providing the equivalent value from resource to signal the alarm status - - _CR_CHECKVAL_AVAILABLE: Used for providing the equivalent value from resource to signal the available status - - _CR_CHECKVAL_RESERVED: Used for providing the equivalent value from resource to signal the reserved status - - _CR_CHECKVAL_UNKNOWN: Used for providing the equivalent value from resource to signal the unknown status - - _CR_CHECKVAL_SUSPENDED: Used for providing the equivalent value from resource to signal the suspended status - - -1. Create a new Service Specification and use this Resource Specification in Resource Specification Relationships - - Then the Service Specification is saved as ResourceFacingServiceSpecification - - 1.1. You can give at this stage values to the characteristics: - - - _CR_SPEC, - - _CR_CHECK_FIELD - - _CR_CHECKVAL_STANDBY - - _CR_CHECKVAL_ALARM - - _CR_CHECKVAL_AVAILABLE - - _CR_CHECKVAL_RESERVED - - _CR_CHECKVAL_UNKNOWN - - _CR_CHECKVAL_SUSPENDED - - 1.2. You can now create LCM rules if you wish - -2. Create a new Service Specification and use the Resource Facing Service Specification in Service Specification Relationships - - Then the Service Specification is saved as CustomerFacingServiceSpecification - - 2.1. You can give at this stage values to the characteristics: - - - _CR_SPEC, - - _CR_CHECK_FIELD - - _CR_CHECKVAL_STANDBY - - _CR_CHECKVAL_ALARM - - _CR_CHECKVAL_AVAILABLE - - _CR_CHECKVAL_RESERVED - - _CR_CHECKVAL_UNKNOWN - - _CR_CHECKVAL_SUSPENDED - - 2.2. You We can create LCM rules for this new Service Specification - - 2.3. You Expose configurable values for users to configure during service order - - - -## Service Orchestration and CRDs/CRs - -OSOM - OpenSlice Service Orchestrator, checks the presence of attribute _CR_SPEC at the RFS to make a request for a CR deployment - -- _CR_SPEC is a JSON or YAML string that is used for the request - - It is similar to what one will do with e.g. 
a kubectl apply - - There are tools to translate a yaml file to a json - -> LCM rules can be used to change attributes of this yaml/json file, before sending this for orchestration - - -## Mapping the CR lifecycle that is defined in the CRD with the OpenSLice (TMF-based) resource Lifecycle - -OpenSlice adds automatically as we see the following characteristics: - - - _CR_CHECK_FIELD - - _CR_CHECKVAL_STANDBY - - _CR_CHECKVAL_ALARM - - _CR_CHECKVAL_AVAILABLE - - _CR_CHECKVAL_RESERVED - - _CR_CHECKVAL_UNKNOWN - - _CR_CHECKVAL_SUSPENDED - -**These characteristics instrument OpenSlice services to manage and reflect the lifecycle of a kubernetes resource to OpenSlice's (TMF based) lifecycle** - - -- _CR_CHECK_FIELD: The name of the field that is needed to be monitored in order to monitor the status of the service and translate it to TMF resource statys (RESERVED AVAILABLE, etc) -- _CR_CHECKVAL_STANDBY: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state STANDBY (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) -- _CR_CHECKVAL_ALARM: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state ALARMS (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) -- _CR_CHECKVAL_AVAILABLE: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state AVAILABLE (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) -- _CR_CHECKVAL_RESERVED: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state RESERVED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) -- _CR_CHECKVAL_UNKNOWN: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state UNKNOWN (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) -- _CR_CHECKVAL_SUSPENDED: The CR specific value (of the CheckFieldName) that needs to me mapped to the TMF resource state SUSPENDED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType) - - ---- - -## Probe further - - -- See examples of exposing Kubernetes Operators as a Service via OpenSlice: - - [Offering "Calculator as a Service"](../examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md) - - [Offering "Helm installation as a Service" (Jenkins example)](../examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example.md) -- [Learn more about CRIDGE, the service in OpenSlice that manages CRDs/CRs](../../architecture/CRIDGE/CRIDGEforDevelopers.md) - - - - - diff --git a/doc/service_design/kubernetes/design_helm_aas.md b/doc/service_design/kubernetes/design_helm_aas.md new file mode 100644 index 0000000000000000000000000000000000000000..3a8924e29a7c76d9a3bb4224c6cbbb3a5f5f807a --- /dev/null +++ b/doc/service_design/kubernetes/design_helm_aas.md @@ -0,0 +1,64 @@ +# Expose Helm charts as Service Specifications + +**Intended Audience: OpenSlice Service Designers** + +This section introduces ways to manage Helm charts installations via OpenSlice Service Specifications and Service Orders. + +## Kubernetes and Helm Introduction + +Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact though the Kubernetes API and it has a set of objects ready for use out of the box. + +Helm is a tool that automates the creation, packaging, configuration, and deployment of Kubernetes applications by combining your configuration files into a single reusable package. + +At the heart of Helm is the packaging format called charts. 
Each chart comprises one or more Kubernetes manifests -- and a given chart can have child charts and dependent charts, as well. Using Helm charts:
+
+- Reduces the complexity of deploying Microservices
+- Enhances deployment speed
+- Developers already know the technology
+
+Below, the core advantages of using Helm with OpenSlice are presented:
+
+- There are many Helm charts and Helm repositories that are ready to be used
+- It enables loose coupling and more orchestration scenarios
+- Developers create and deploy applications with tools they already know (e.g. Helm charts)
+- Usage of the TMF models as wrapper entities around Helm charts
+
+
+Also, OpenSlice can expose Helm charts in service catalogs and deploy them in complex scenarios (Service Bundles) involving also other systems:
+
+- Include e.g. RAN controllers,
+- Pass values through life cycle rules from one service to another,
+- Manage multiple Helm installations in multiple clusters
+
+
+## The installation of Helm charts is based on OpenSlice CRD support
+
+Please read more [here](./exposing_kubernetes_resources.md).
+
+
+For installing Helm charts we will use ArgoCD, a well-known Kubernetes-native continuous deployment (CD) tool.
+
+> ArgoCD is a Kubernetes-native continuous deployment (CD) tool
+
+> While deploying Helm charts is just one scenario for ArgoCD, in the future one can exploit it for many other things
+
+> Unlike some other tools, such as FluxCD, it also provides a UI which is useful for management and troubleshooting
+
+
+We will mainly use the CRD of ```Kind: Application``` that ArgoCD can manage.
+
+Before proceeding, install ArgoCD in your management cluster by following the ArgoCD instructions.
+
+As soon as you install ArgoCD, OpenSlice is automatically aware of specific new Kinds. The one we will use is the ```Kind: Application``` that ArgoCD can manage under the apiGroup argoproj.io.
+
+Browse to Resource Specifications. You will see an entry like the following:
+
+```Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/```
+
+see image:
+
+
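+To make the above more concrete, here is a minimal, illustrative sketch of such an Application manifest installing a Helm chart (the public Jenkins chart used in the linked example). The names, namespaces and chart version below are only examples, not required values:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: jenkins-demo            # illustrative name
+  namespace: argocd             # namespace where ArgoCD is installed
+spec:
+  project: default
+  destination:
+    name: in-cluster            # the cluster registered in ArgoCD
+    namespace: jenkins          # target namespace for the chart resources
+  source:
+    repoURL: https://charts.jenkins.io   # public Helm chart repository
+    chart: jenkins
+    targetRevision: 5.7.21               # chart version, illustrative
+```
+
+In the Jenkins example referenced below, a manifest of this kind is essentially what is provided to OpenSlice (typically via the _CR_SPEC characteristic of the corresponding Resource Specification) so that ArgoCD performs the Helm installation.
+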
+## Probe further
+
+See the [Example: Offer Jenkins as a Service via OpenSlice](../examples/jenkins_helm_install_aas/jenkins_helm_install_aas.md)
diff --git a/doc/service_design/kubernetes/exposing_kubernetes_resources.md b/doc/service_design/kubernetes/exposing_kubernetes_resources.md
new file mode 100644
index 0000000000000000000000000000000000000000..5eba68457f7ffaaa69954de0f81a381bdcfb8c7d
--- /dev/null
+++ b/doc/service_design/kubernetes/exposing_kubernetes_resources.md
@@ -0,0 +1,142 @@
+
+# Expose and manage Kubernetes Custom Resource Definitions (Operators) in a Kubernetes Cluster
+
+**Intended Audience: OpenSlice Service Designers**
+
+OpenSlice is capable of exposing Kubernetes Resources and Definitions as Service Specifications.
+
+Use OpenSlice to expose NFV resources in service catalogs and deploy them in complex scenarios (service bundles) involving also other systems:
+
+* Include external resources, e.g. RAN controllers
+* Manage multiple NSDs in linked NFVOs (OSM installations)
+* Combine designed services
+* Control the lifecycle of services and pass values from one service to another
+
+
+## Awareness for CRDs and CRs in cluster
+
+> CRDs and CRs can appear (or disappear) or change status at any time in a cluster. The OpenSlice Resource Inventory needs to be aware of these events.
+
+When installing OpenSlice, you can configure at least one management cluster. OpenSlice connects to it via a provided kubeconfig.
+
+- On start-up, OSL tries to register this cluster and context to the OSL catalogs.
+- After the registration of this cluster as a Resource in OSL, OSL is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8S cluster outside of OSL
+- Resources created by OpenSlice have labels, e.g. (org.etsi.osl.*)
+
+## Expose CRDs as Service Specifications in OpenSlice catalogs
+
+**A CRD by default is exposed as a Resource Specification**
+
+To ensure unique names across the clusters that OpenSlice can manage, the name of a CRD is constructed as follows:
+
+```Kind @ ApiGroup/version @ ContextCluster @ masterURL```
+
+For example, you might see Resource Specifications like:
+
+- ```Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/```
+- ```IPAddressPool@metallb.io/v1beta1@kubernetes@https://10.10.10.144:6443/```
+- ```Provider@pkg.crossplane.io/v1@kubernetes@https://10.10.10.144:6443/```
+
+All attributes of the CRD are translated into characteristics.
+
+The following specific characteristics are **added**:
+
+```yaml
+- _CR_SPEC: Used for providing the JSON Custom Resource description to apply
+- _CR_CHECK_FIELD: Used for providing the field that needs to be checked for the resource status
+- _CR_CHECKVAL_STANDBY: Used for providing the equivalent value from the resource to signal the standby status
+- _CR_CHECKVAL_ALARM: Used for providing the equivalent value from the resource to signal the alarm status
+- _CR_CHECKVAL_AVAILABLE: Used for providing the equivalent value from the resource to signal the available status
+- _CR_CHECKVAL_RESERVED: Used for providing the equivalent value from the resource to signal the reserved status
+- _CR_CHECKVAL_UNKNOWN: Used for providing the equivalent value from the resource to signal the unknown status
+- _CR_CHECKVAL_SUSPENDED: Used for providing the equivalent value from the resource to signal the suspended status
+```
+
+1. Create a new Service Specification and use this Resource Specification in Resource Specification Relationships
+    - Then the Service Specification is saved as a ResourceFacingServiceSpecification
+
+    1.1. At this stage, you can give values to the characteristics:
+
+    ```
+    - _CR_SPEC,
+    - _CR_CHECK_FIELD
+    - _CR_CHECKVAL_STANDBY
+    - _CR_CHECKVAL_ALARM
+    - _CR_CHECKVAL_AVAILABLE
+    - _CR_CHECKVAL_RESERVED
+    - _CR_CHECKVAL_UNKNOWN
+    - _CR_CHECKVAL_SUSPENDED
+    ```
+
+    1.2. You can now create LCM rules if you wish
+
+2. Create a new Service Specification and use the Resource Facing Service Specification in Service Specification Relationships
+    - Then the Service Specification is saved as a CustomerFacingServiceSpecification
+
+    2.1. At this stage, you can give values to the characteristics:
+
+    ```
+    - _CR_SPEC,
+    - _CR_CHECK_FIELD
+    - _CR_CHECKVAL_STANDBY
+    - _CR_CHECKVAL_ALARM
+    - _CR_CHECKVAL_AVAILABLE
+    - _CR_CHECKVAL_RESERVED
+    - _CR_CHECKVAL_UNKNOWN
+    - _CR_CHECKVAL_SUSPENDED
+    ```
+
+    2.2. You can create LCM rules for this new Service Specification
+
+    2.3. You can expose configurable values for users to configure during service order
+
+
+
+## Service Orchestration and CRDs/CRs
+
+OSOM, the OpenSlice Service Orchestrator, checks the presence of the attribute _CR_SPEC at the RFS to make a request for a CR deployment.
+
+- _CR_SPEC is a JSON or YAML string that is used for the request
+    - It is similar to what one would do with e.g. a kubectl apply
+    - There are tools to translate a YAML file to JSON
+
+> LCM rules can be used to change attributes of this YAML/JSON file, before sending it for orchestration
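+
+As an illustration only (assuming the MetalLB ```IPAddressPool``` CRD from the naming examples above), the _CR_SPEC characteristic could carry a Custom Resource description such as the following sketch; the pool name and address range are made up:
+
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: demo-pool                 # illustrative name
+  namespace: metallb-system      # namespace where MetalLB runs
+spec:
+  addresses:
+    - 10.10.20.100-10.10.20.150  # illustrative address range
+```
+
+The same content, minified as JSON, is equally valid for _CR_SPEC. The CR's status field and its values can then be wired to the _CR_CHECK_FIELD and _CR_CHECKVAL_* characteristics described in the next section.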
+
+
+## Mapping the CR lifecycle that is defined in the CRD to the OpenSlice (TMF-based) resource lifecycle
+
+As we saw, OpenSlice automatically adds the following characteristics:
+
+```
+- _CR_CHECK_FIELD
+- _CR_CHECKVAL_STANDBY
+- _CR_CHECKVAL_ALARM
+- _CR_CHECKVAL_AVAILABLE
+- _CR_CHECKVAL_RESERVED
+- _CR_CHECKVAL_UNKNOWN
+- _CR_CHECKVAL_SUSPENDED
+```
+
+**These characteristics instrument OpenSlice services to manage and reflect the lifecycle of a Kubernetes resource in OpenSlice's (TMF-based) lifecycle**
+
+
+- _CR_CHECK_FIELD: The name of the field that needs to be monitored in order to track the status of the service and translate it to a TMF resource status (RESERVED, AVAILABLE, etc.)
+- _CR_CHECKVAL_STANDBY: The CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state STANDBY (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- _CR_CHECKVAL_ALARM: The CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state ALARMS (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- _CR_CHECKVAL_AVAILABLE: The CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state AVAILABLE (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- _CR_CHECKVAL_RESERVED: The CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state RESERVED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- _CR_CHECKVAL_UNKNOWN: The CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state UNKNOWN (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- _CR_CHECKVAL_SUSPENDED: The CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state SUSPENDED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+
+
+## Probe further
+
+- See examples of exposing Kubernetes Operators as a Service via OpenSlice:
+    - [Offering "Calculator as a Service"](../examples/calculator_crd_aas/calculator_crd_aas.md)
+    - [Offering "Helm installation as a Service" (Jenkins example)](../examples/jenkins_helm_install_aas/jenkins_helm_install_aas.md)
+- [Learn more about CRIDGE, the service in OpenSlice that manages CRDs/CRs](../../architecture/cridge/cridge_introduction.md)
+
+
+
+
+
diff --git a/doc/service_design/kubernetes/helm/design_helmaas.md b/doc/service_design/kubernetes/helm/design_helmaas.md
deleted file mode 100644
index ba69891b7d7c50086bf27b0307818056eacb60dc..0000000000000000000000000000000000000000
--- a/doc/service_design/kubernetes/helm/design_helmaas.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# Expose HELM charts as Service Specifications
-Manage Helm charts installations via OpenSlice Service Specifications and Service Orders.
-## Intended Audience: Service Designers
-
-
-> Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact though the Kubernetes API and it has a set of objects ready for use out of the box.
-
-> Helm is a tool that automates the creation, packaging, configuration, and deployment of Kubernetes applications by combining your configuration files into a single reusable package
-
-> At the heart of Helm is the packaging format called charts. Each chart comprises one or more Kubernetes manifests -- and a given chart can have child charts and dependent charts, as well. 
Using Helm charts: - -> - Reduces the complexity of deploying Microservices -> - Enhances deployment speed -> - Developers already know the technology - -> There are many Helm charts and Helm repositories there that are ready to be used - -> Enable loose coupling and more orchestration scenarios - -> Developers create and deploy applications in things they already know (e.g. Helm charts) - -> Use the TMF models as wrapper entities around Helm charts - - -Use OpenSlice to expose them in service catalogs and deploy them in complex scenarios (service bundles) involving also other systems: - - - Include e.g. RAN controllers, - - Pass values through life cycle rules from one service to another, - - Manage multiple Helms in multiple clusters - - -## The installation of HELM charts is based on OpenSlice CRD support - -Please read more [here](../ExposingKubernetesResources.md). - - -For installing HELM charts we will use ArgoCD a well known Kubernetes-native continuous deployment (CD) tool - -> ArgoCD is a Kubernetes-native continuous deployment (CD) tool - -> While just deploying HELM charts is just a scenario for ArgoCD , in future one can exploit it for many things - -> Despite some other tools like FluxCD, it provides also a UI which is useful for management and troubleshooting - - -We will mainly use the CRD of ```Kind: Application``` that ArgoCD can manage - - - -Before proceeding, install ArgoCD in your management cluster, by following ArgoCD instructions - -As soon as you install ArgoCD, OpenSlice is automatically aware for specific new Kinds. The one we will use is is the ```Kind: Application``` that ArgoCD can manage under the apiGroup argoproj.io - -Browse to Resource Specifications. You will see an entry like the following: - -```Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/``` - -see image: - - - - - -## Probe further - - -See the [Example: Offer Jenkins as a Service via OpenSlice](../../examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example.md) diff --git a/doc/service_design/lcmrules/examples.md b/doc/service_design/lcmrules/examples.md index 52065b0742c5cd841f154b145ce2582baa3fd897..941acb2a3a2bf54618bce0a59cec206314ad19a1 100644 --- a/doc/service_design/lcmrules/examples.md +++ b/doc/service_design/lcmrules/examples.md @@ -15,7 +15,7 @@ In the following example we : The strAreaCodes could be passed then e.g. to NFVO for instantiation of services to these cells. -[](./images/lcm/lcmfig9.png) + @@ -30,7 +30,7 @@ The strAreaCodes could be passed then e.g. to NFVO for instantiation of services in the example we modify a YAML spec with parama, paramb, action values from the characteristics spec.parama, spec.paramb, spec.action -[](./images/lcm/lcmfig15.png) + ### Define complex OSM configs for DAY 0 @@ -45,7 +45,7 @@ The following displays some complex examples for defining the parameters to pass * if the Video quality requested is 1, again we use a simpler OSM Config block to configure the parameter OSM_CONFIG. We use as injected json text a variable constructed later -[](./images/lcm/lcmfig10.png) + ### Define and instantiate different services according to Service Order request @@ -60,7 +60,7 @@ There are different ways to accomplish this: The following image displays for example the latter case. -[](./images/lcm/lcmfig11.png) + ### Call an external RESTful service @@ -68,12 +68,12 @@ The following image displays for example the latter case. 
This is useful in cases for example of alarms , external logging, calling other services e.g. email or even a complex algorithm written in other language e.g. call an external service and get a result. (service e.g. a Python service)

-[](./images/lcm/lcmfig12.png)
+

-[](./images/lcm/lcmfig13.png)
+

### Create New Service Orders

The following example calls to Order a New Service Specification with specific Parameter Values

-[](./images/lcm/lcmfig14.png)
\ No newline at end of file
+
\ No newline at end of file
diff --git a/doc/service_design/lcmrules/intro.md b/doc/service_design/lcmrules/intro.md
index 0fb4cf473803391d7faceef8ad98581637fa5e5f..8282b8c6cbf0a70afce66c745d421d91bcb1ad68 100644
--- a/doc/service_design/lcmrules/intro.md
+++ b/doc/service_design/lcmrules/intro.md
@@ -1,18 +1,16 @@
 # LCM Rules introduction
+**Intended Audience: OpenSlice Service Designers**

 Lifecycle Management Rules: Defining complex conditions and actions during the lifecycle of a service and any necessary modifications throughout the service lifecycle.

-
-## Intended Audience: Service Designers
-
 In [Naas LCM Introduction](../../naas/lcm_intro.md) it was presented briefly the LCM Rules concept. This section goes deeply on how Service Designers can use them.

-LCM Rules are used for defining complex conditions and actions during the lifecycle of a service. In Openslice there are four types of rules defined:
+LCM Rules are used for defining complex conditions and actions during the lifecycle of a service. In OpenSlice, there are five types of rules defined:

 * PRE_PROVISION
 * CREATION
@@ -23,50 +21,47 @@ LCM Rules are used for defining complex conditions and actions during the lifecy

 The following figure displays the different phases that the rules are performed, during the lifecycle of a Network Slice Instance.

-[](./images/lcm/lcmfig1.png)
-
-* PRE_PROVISION rules: Run only once just before creating a service with a given priority.
-* CREATION rules: Run while the referenced service dependencies of a service are created
-* AFTER_ACTIVATION rules: Run only once just after a service get the ACTIVE state
-* SUPERVISION rules: Run when a characteristic of a service is changed and the service is in the ACTIVE state
-* AFTER_DEACTIVATION rules: Run only once just after a service get the INACTIVE/TERMINATED state
-
-In general the rules allow to perform many actions during service LCM. Thes are some examples:
+

-* Modify service specification parameters before the instantiation of a service (or during operation) based on other dependencies. These parameters might be part of other services already included in Service order
-* Translate GST/NEST parameter values to other values passed later to NFVO for instantiation or control
-* Define complex OSM Configs based on other dependencies and passing variables
-* Define any dependencies when creating the referenced services
-* Dynamically include new service dependencies
-* Create new service orders so include dynamically other services
-* Call external (RESTful) services (via http(s), define payload, examine response)
+* PRE_PROVISION rules: Run only once just before creating a service with a given priority.
+* CREATION rules: Run while the referenced service dependencies of a service are created.
+* AFTER_ACTIVATION rules: Run only once just after a service gets the ACTIVE state.
+* SUPERVISION rules: Run when a characteristic of a service is changed and the service is in the ACTIVE state.
+* AFTER_DEACTIVATION rules: Run only once just after a service gets the INACTIVE/TERMINATED state.
+
+In general, the rules allow performing many actions during service LCM. Below are some examples:
+
+* Modify service specification parameters before the instantiation of a service (or during operation) based on other dependencies. These parameters might be part of other services already included in a Service Order.
+* Translate GST/NEST parameter values to other values passed later to NFVO for instantiation or control.
+* Define complex OSM Configs based on other dependencies and passing variables.
+* Define any dependencies when creating the referenced services.
+* Dynamically include new service dependencies.
+* Create new service orders, so as to dynamically include other services.
+* Call external (RESTful) services (via http(s), define payload, examine response).

-## Examine if the rules are executed successfully
+## Examine if the rules are executed successfully

-Rules are transformed automatically to executable code (currently is Java). If a rule is performed successfully or has any issues (e.g. unexpected syntax errors or exceptions) appear in OSOM logfiles and also tey are attached as Notes to the running Service.
+Rules are transformed automatically to executable code (currently Java). Whether a rule is performed successfully or has any issues (e.g. unexpected syntax errors or exceptions), the results appear in the OSOM log files and are also attached as Notes to the running Service.

 ## LCM Rules and OSOM Service Orchestration

 OSOM is the responsible service for executing the rules on a specific phase. The following image explains the design in the BPMN phases:

-[](./images/lcm/lcmfig1_osom.png)
+

-
-## Define rules
+## Define Rules

 Rules are defined when designing a Service Spec. Here is an example of a list of rules:

-[](./images/lcm/lcmfig2.png)
+

 Execution order of rules on a specific phase is random

-* NOTE: There is a priority field. The lower the number the highest the priority of rule execution. For example Rule with priority 0 will run before rule with priority 1.
+> NOTE: There is a priority field. The lower the number, the higher the priority of rule execution. For example, a rule with priority 0 will run before a rule with priority 1.

 ### Definition language

@@ -76,23 +71,21 @@ Execution order of rules on a specific phase is random

 The following figure is an example of such a rule design. The rule for example will run in PRE_PROVISION phase:

-[](./images/lcm/lcmfig3.png)
+

 * The goal of the above rule is to properly define a variable AreaCodes given the chosen AreaOfService from a Service Order.
 * On the right side the user can define some rule properties or observe the underlying generated java code.
-## The blocks library
+## The Blocks Library

 See our [LCM Blocks specification](./specification.md)
-
-
 ## Probe further

 * Check our [examples](./examples.md) for more usages
-* See next the complete [specification](./specification.md)
+* See next the complete [Specification](./specification.md)
diff --git a/doc/service_design/lcmrules/specification.md b/doc/service_design/lcmrules/specification.md
index b89323f14dbf1551c403df86a7b38efb0e0cbb3c..01890f64fcf6084b5c9fbb3d7a48ef5f15abfbf2 100644
--- a/doc/service_design/lcmrules/specification.md
+++ b/doc/service_design/lcmrules/specification.md
@@ -1,17 +1,16 @@
-# LCM Blocks specification
+# LCM Blocks Specification

-
-## Intended Audience: Service Designers
+**Intended Audience: OpenSlice Service Designers**

 The following images describe some blocks found in the library. Blockly has syntax rules. It helps with colours to define them.

-So for example a parameter that is a Number cannot be "glued" with a String. Will need some conversion first
+So, for example, a parameter that is a Number cannot be combined with a String; it will need some conversion first.

-[](./images/lcm/lcmfig4.png)
-[](./images/lcm/lcmfig5.png)
-[](./images/lcm/lcmfig6.png)
-[](./images/lcm/lcmfig7.png)
-[](./images/lcm/lcmfig8.png)
+
+
+
+
+
diff --git a/doc/service_design/nfv/design_nfv_services.md b/doc/service_design/nfv/design_nfv_services.md
index 9c0cc8e6ad0da530aa00672f3f23ce5243c22aca..7cb056684848e7b85ae0cc480b6fcf5d414ca1e6 100644
--- a/doc/service_design/nfv/design_nfv_services.md
+++ b/doc/service_design/nfv/design_nfv_services.md
@@ -1,8 +1,8 @@
 # Design NFV services

-OpenSlice is capable of exposing NFV-related resources (VNFs/NSDs) as Service Specifications.
+**Intended Audience: OpenSlice Service Designers**

-## Intended Audience: Service Designers
+OpenSlice is capable of exposing NFV-related resources (VNFs/NSDs) as Service Specifications.

 Use OpenSlice to expose NFV resources in service catalogs and deploy them in complex scenarios (service bundles) involving also other systems:

@@ -13,17 +13,16 @@ Use OpenSlice to expose NFV resources in service catalogs and deploy them in com

 ## Initial configuration for OSM deployment

-if you have an initial configuration that needs to be applied in the NSD deployment, then you go to the RFS (or CFS) and in Service Specification Characteristics go and edit the OSM_CONFIG characteristic.
+If you have an initial configuration that needs to be applied to the NSD deployment, go to the RFS (or CFS) and, in Service Specification Characteristics, edit the OSM_CONFIG characteristic.
You can add in the Service Characteristic Value, in the Value field something like the following example which gives a floating IP to a VNF: -``` +```json { "nsdId": "e855be91-567b-45cf-9f86-18653e7ea", "vimAccountId": "4efd8bf4-5292-4634-87b7-7b3d49108" , "vnf": [ {"member-vnf-index": "1", "vdu": [ {"id": "MyCharmedVNF-VM", "interface": [{"name": "eth0", "floating-ip-required": true }]}]}]} - ``` or a more complex example (beautify it first if you want to view it, but in the parameter OSM_CONFIG must be minified like the example): -``` +```json {"nsdId":"e855be91-567b-45cf-9f86-18653e7","vimAccountId":"4efd8bf4-5292-4634-87b7-7b3d491","vnf":[{"member-vnf-index":"1","vdu":[{"id":"haproxy_vdu","interface":[{"name":"haproxy_vdu_eth1","floating-ip-required":true}]}]}],"vld":[{"name":"pub_net","vim-network-name":"OSMFIVE_selfservice01"},{"name":"management","vim-network-name":"OSMFIVE_selfservice01"},{"name":"lba_net","vim-network-name":"lba_net","vnfd-connection-point-ref":[{"member-vnf-index-ref":"1","vnfd-connection-point-ref":"haproxy_private","ip-address":"192.168.28.2"}]},{"name":"backend_net","vim-network-name":"backend_net","vnfd-connection-point-ref":[{"member-vnf-index-ref":"3","vnfd-connection-point-ref":"haproxy_public","ip-address":"192.168.20.2"}]},{"name":"lb_sb_net","vim-network-name":"lb_sb_net","vnfd-connection-point-ref":[{"member-vnf-index-ref":"3","vnfd-connection-point-ref":"haproxy_private","ip-address":"192.168.28.2"}]},{"name":"breaking_point_Spain","vim-network-name":"sb_repo_net"},{"name":"breaking_point_Greece","vim-network-name":"5TONICexternal"}],"additionalParamsForVnf":[{"member-vnf-index":"2","additionalParams":{"target_IP":"192.168.20.2"}},{"member-vnf-index":"4","additionalParams":{"target1_IP":"192.168.21.2","target2_IP":"10.154.252.10"}}]} ``` diff --git a/doc/service_ordering/ordering_services.md b/doc/service_ordering/ordering_services.md index 4041aa09440367caf7e861ba5dfc18c4c91d91f0..c88edca3e4f5f42a7bd560b66722a8bffed78876 100644 --- a/doc/service_ordering/ordering_services.md +++ b/doc/service_ordering/ordering_services.md @@ -1,5 +1,5 @@ # Service Ordering -## Intended Audience: Users +**Intended Audience: OpenSlice Users** _This section is WIP._ \ No newline at end of file diff --git a/doc/terminology.md b/doc/terminology.md index c021048c7d57bf77db58d6dee32b0a4f348f44f9..a639136292e07d44152f1bdc2bff16e3602e31ca 100644 --- a/doc/terminology.md +++ b/doc/terminology.md @@ -1,17 +1,25 @@ -# User Roles +# Terminology -* User -* Service Designer -* OpenSlice administrator +## User Roles +* **OpenSlice Administrator**: Installs, configures and maintains an OpenSlice instance +* **OpenSlice Service Designer**: Uses available resources to design service specifications that are offered to OpenSlice users +* **OpenSlice User**: Browses and orders offered services -# Terms -* Resource Facing Service Specification (RFSSpec): A Service that exposes a resource Specification as a Service. -* Customer Facing Service Specification (CFSSpec): Service exposed to users for Service Orders. Usually it exposes other CFSSpec(Service Bundle) or other RFSSpecs -* OpenSlice management cluster -* Service Specification: Detailed descriptions of services, including attributes, configurations, performance metrics, and SLAs. -* Service Catalog -* Service Categories -* Service Inventory + +## Useful Terms + +* **Service Specification**: A blueprint with detailed descriptions of services, including attributes, configurations, performance metrics, and SLAs (e.g. 5G Connectivity). 
+* **Service**: A running instance of a Service Specification. +* **Resource Specification**: A blueprint with detailed descriptions of resources, including attributes, configurations, and instantiation patterns (e.g. a Kubernetes CRD). +* **Resource**: A running instance of a Resource Specification. +* **Resource Facing Service Specification (RFSS)**: A Service Specification that exposes a Resource Specification as a Service (e.g. UPF Deployment in K8s). +* **Customer Facing Service Specification (CFSS)**: A Service Specification exposed to Users, available for Service Orders. Usually an exposed CFSS (e.g. 5G Connectivity) is a Service Bundle and comprises of other RFSSs (e.g. UPF Deployment, Core Deployment) and CFSSs (e.g. Connect Radio Nodes). +* **Resource Facing Service (RFS)**: An running instance of an RFSS, created via a Service Order of a related Service Bundle. +* **Customer Facing Service (CFS)**: An running instance of an CFSS, created via a Service Order. +* **Service Category**: A collection of CFSSs. +* **Service Catalog**: A collection of exposed Service Categories. +* **Service Inventory**: An inventory of all the service instances. +* **Resource Inventory**: An inventory of all the resource instances. diff --git a/mkdocs.yml b/mkdocs.yml index 388f7e64fc4214792ab46ec063acd7e4d8d9bd93..a40ae000685f0e2e08ecfb0cf692ee778cb7da6c 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -37,6 +37,7 @@ theme: - navigation.path - search - search.highlight + - content.code.copy palette: primary: 'cyan' @@ -48,8 +49,40 @@ theme: icon: repo: fontawesome/brands/gitlab +markdown_extensions: + - pymdownx.highlight: + anchor_linenums: true + line_spans: __span + pygments_lang_class: true + - pymdownx.inlinehilite + - pymdownx.snippets + - pymdownx.superfences + - pymdownx.tabbed: + alternate_style: true + +plugins: + - glightbox: + touchNavigation: true + loop: false + effect: zoom + slide_effect: slide + width: 100% + height: auto + zoomable: true + draggable: true + skip_classes: + - custom-skip-class-name + auto_caption: false + caption_position: bottom + background: white + shadow: true + manual: false + - markdownextradata: + jinja_options: + variable_start_string: '{{{' + variable_end_string: '}}}' # Copyright -copyright: "Copyright © 2024 ETSI OSL" +copyright: "Copyright © 2025 ETSI OSL" # Options extra: @@ -66,27 +99,27 @@ extra: link: https://twitter.com/OpensliceOSS version: provider: mike + documentation_version: "2024Q4" # Page tree nav: - Overview: - Introduction: index.md - - OpenSlice deployment examples: OpenSlice_deployment_examples.md + - Deployment examples: deployment_examples.md - History: history.md - - ETSI OSL: etsi_osl.md + - OpenSlice under ETSI: etsi_osl.md - Getting Started: - Deployment: - - Introduction: deployment.md - - Docker Compose: deploymentCompose.md - - Kubernetes: deploymentK8s.md - - Portals: - - Introduction: portals_intro.md + - Introduction: ./getting_started/deployment/introduction.md + - Docker Compose: ./getting_started/deployment/docker_compose.md + - Kubernetes: ./getting_started/deployment/kubernetes.md + - Portals: ./getting_started/portals.md - Configuration/Management: - - Introduction: config_intro.md - - Role/Keycloak management: role_keycloak_management.md - - NFV Orchestrator Configuration: nfvoconfig.md + - Introduction: ./getting_started/configuration/config_intro.md + - Role/Keycloak management: ./getting_started/configuration/role_keycloak_management.md + - NFV Orchestrator Configuration: ./getting_started/configuration/nfvo_config.md - 
Advanced topics: - - Consuming Services From External OSS: ./architecture/consumingServicesFromExternalPartners.md + - Consuming Services From External OSS: ./getting_started/configuration/consuming_services_from_external_partners.md - NaaS: - Introduction: ./naas/introduction.md - Services: @@ -115,38 +148,36 @@ nav: - Catalogs: ./service_design/catalogs.md - Support for Kubernetes: - Introduction: ./service_design/kubernetes/intro.md - - Exposing Kubernetes Resources: ./service_design/kubernetes/ExposingKubernetesResources.md - - Design Helm as a Service: ./service_design/kubernetes/helm/design_helmaas.md + - Exposing Kubernetes Resources: ./service_design/kubernetes/exposing_kubernetes_resources.md + - Design Helm as a Service: ./service_design/kubernetes/design_helm_aas.md - Support for NFV: - Design NFV Services: ./service_design/nfv/design_nfv_services.md - LCM Rules: - Introduction: ./service_design/lcmrules/intro.md - Specification: ./service_design/lcmrules/specification.md - Typical Examples: ./service_design/lcmrules/examples.md - - Service Specification Examples: + - Examples: - Introduction: ./service_design/examples/intro.md - - Open5GS (NFV approach): ./service_design/examples/open5gs_nfv.md - - Open5GS (Kubernetes approach): ./service_design/examples/open5gs_kubernetes.md - - Exposing CRDs_aaS_Example_Calculator: ./service_design/examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md - - HELM Installation aaS Jenkins Example: ./service_design/examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example.md + # - Open5GS (NFV approach): ./service_design/examples/open5gs_nfv.md + # - Open5GS (Kubernetes approach): ./service_design/examples/open5gs_kubernetes.md + - Calculator CRD aaS: ./service_design/examples/calculator_crd_aas/calculator_crd_aas.md + - Jenkins Helm Installation aaS : ./service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas.md - Service Ordering: - Ordering Services from catalogs: ./service_ordering/ordering_services.md - - Testing services: - - Testing Specification: ./testing_services/test_spec.md - - Testing Catalogs: ./testing_services/test_catalogs.md - - Testing Inventory: ./testing_services/test_inventory.md - - Service as a Product: - - Product Specification: ./product_model/product_spec.md - - Product Catalogs: ./product_model/product_catalogs.md - - Product Inventory: ./product_model/product_inventory.md - - Assurance services: - - Introduction: ./assurance_services/intro.md - - Alarms and Actions: ./assurance_services/alarms_actions.md + # - Testing services: + # - Testing Specification: ./testing_services/test_spec.md + # - Testing Catalogs: ./testing_services/test_catalogs.md + # - Testing Inventory: ./testing_services/test_inventory.md + # - Service as a Product: + # - Product Specification: ./product_model/product_spec.md + # - Product Catalogs: ./product_model/product_catalogs.md + # - Product Inventory: ./product_model/product_inventory.md + # - Assurance services: + # - Introduction: ./assurance_services/intro.md + # - Alarms and Actions: ./assurance_services/alarms_actions.md - Design & Architecture: - Architecture: ./architecture/architecture.md - - Cloud native support: - - Introduction: ./architecture/CRIDGE_cloud_native_intro.md - - CRIDGE for Developers: ./architecture/CRIDGE/CRIDGEforDevelopers.md + - CRIDGE: ./architecture/cridge/cridge_introduction.md - Message bus: ./architecture/messagebus.md - OSOM: ./architecture/osom.md - Authentication: 
./architecture/oauth.md @@ -157,4 +188,5 @@ nav: - Central logging: ./architecture/centrallog.md - Contributing to OSL: - Developing: ./contributing/developing.md + - Documenting: ./contributing/documenting.md - Terminology: terminology.md