## Introduction
Kubernetes is an orchestration system for automating software deployment, scaling, and management. One interacts with it through the Kubernetes API, which offers a set of objects ready for use out of the box. Custom Resource Definitions (CRDs) are a way to manage things other than Kubernetes itself and to create our own objects. The use of CRDs makes the possibilities of Kubernetes management almost limitless: you can extend the base Kubernetes API with any object you like.
CRIDGE is a service designed to create and manage Custom Resources (CRs) based on Custom Resource Definitions (CRDs) installed on a Kubernetes cluster. Within OpenSlice (OSL), CRIDGE enables seamless integration and orchestration in Kubernetes environments by exposing Kubernetes APIs through the TMF APIs and models. Thus, more or less, OSL exposes Kubernetes APIs as TMF APIs and models.
> By allowing the design and lifecycle management of services/resources that expose CRDs/CRs in a Kubernetes cluster via the TMF APIs, OSL can be used in many complex scenarios, now involving resources from multiple domains.

Pros, in a nutshell:
1. The CRIDGE service allows OSL to:
    - Create and manage Custom Resources (CRs) using installed CRDs on a target Kubernetes cluster.
    - Facilitate complex orchestration scenarios by wrapping Kubernetes APIs as TMF APIs and models.
    - Handle connectivity to a Kubernetes cluster and manage the lifecycle of CRDs.
    - Wrap the Kubernetes API, receiving and providing resources towards other OSL services via the service bus.
2. Enabling loose coupling and orchestration:
    - Language flexibility: developers can write CRDs in any language and expose them via the Kubernetes APIs. OSL will reuse these CRDs, enhancing flexibility and integration capabilities.
These scenarios may include service bundles that involve multiple systems, such as RAN controllers or other Kubernetes clusters, providing a robust and versatile deployment framework.
> Why the CRIDGE name? We wanted to build a service that maps TMF models to CRDs; a kind of a **CR**D to TMF br**idge**. Therefore, CRIDGE was born.
## Approach
> OSL in general is responsible for exposing Service Specifications, which are ready to be ordered and orchestrated, through TM Forum Open APIs as defined in the OSL Service Spec Catalog. Usually, for a Service Specification, one or more corresponding Resource Specifications (resourceSpecificationReference) are registered in the OSL Resource Spec Catalog.

The following image illustrates the approach.
The provided image illustrates the architecture and workflow of the CRIDGE service, showing how it interacts with other components within a Kubernetes (K8s) cluster.
Following is an explanation of the key components and flow in the diagram:
- Other OSL Services: This box represents various OSL services such as Service Spec Catalogue, Resource Spec Catalogue, Service Inventory, Resource Inventory, and OSOM (OpenSlice Service Orchestration and Management).
- Service Bus: This is the communication layer that facilitates interaction between the CRIDGE service and other OSL services.
- K8s API: The Kubernetes API server, which is the central control point for managing the Kubernetes cluster. CRIDGE interacts with the K8s API to manage CRDs and CRs.
> CRD (Custom Resource Definition): A CRD is a way to define custom resources cluster-wide in Kubernetes. It allows the extension of the Kubernetes API to create and manage user-defined resources. Example:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # ...
```
- Namespaces: Kubernetes namespaces provide a way to partition resources within a cluster. The diagram shows that multiple namespaces (nsxx, nsyy, nsz) can be managed by CRIDGE.
> CR (Custom Resource): A CR is an instance of a CRD. It represents the actual custom resource that is managed within the Kubernetes cluster. Example shown in different namespaces:

```yaml
apiVersion: example.com/v1
kind: Myresource
metadata:
  # ...
```
- Deploying CRs in Namespaces: Custom resources defined by the CRD are created and managed within different namespaces in the Kubernetes cluster. Each namespace can have its own instances of the custom resources.
> The example CRD myresource.example.com allows the creation of custom resources of type Myresource.
> Instances of Myresource are created in various namespaces, each with unique names like example_resource_1.
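
To make the two truncated fragments above concrete, here is a minimal, self-contained sketch of such a CRD together with a matching CR. The group `example.com`, the `spec.replicas` field, and the schema are illustrative assumptions, not taken from an actual OSL deployment:

```yaml
# CRD: defines the new API type "Myresource" under group example.com (illustrative)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myresource.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Myresource
    plural: myresource
    singular: myresource
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
---
# CR: an instance of the CRD above, living in one of the namespaces
apiVersion: example.com/v1
kind: Myresource
metadata:
  name: example-resource-1   # Kubernetes object names may not contain underscores
  namespace: nsxx
spec:
  replicas: 2
```

Applying the CRD first makes the `Myresource` kind known to the API server; the CR can then be created in any namespace, which is exactly what the diagram shows across nsxx, nsyy, and nsz.
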
## Managing Multiple Clusters
A CRIDGE service is usually responsible for managing one cluster. In the following diagram we show how it can be used for managing multiple clusters:
[]()
We assume that there is an OSL management cluster where OSL is installed. CRIDGE is also installed there if we would like to manage resources in the same management cluster.
- Each CRIDGE service has its own configuration to connect to the target cluster.
- Each CRIDGE can be installed either in the management cluster or at the remote clusters. Connectivity is handled via the service bus.
- **Important**: Each CRIDGE has a different context and API endpoints. This is used to request CRDs on a different cluster.
> A CRD has a globally unique name, for example mycrd.example.com. So we also need a way to identify the different clusters.
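
In practice, the cluster identity can come from the kubeconfig context each CRIDGE instance is configured with. A sketch of a kubeconfig holding two contexts (cluster names, server URLs, and user names here are assumptions for illustration):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: management-cluster
    cluster:
      server: https://mgmt.example.com:6443   # masterURL of the management cluster
  - name: edge-cluster
    cluster:
      server: https://edge.example.com:6443   # masterURL of a remote cluster
contexts:
  - name: management-context
    context:
      cluster: management-cluster
      user: osl-admin
  - name: edge-context
    context:
      cluster: edge-cluster
      user: osl-admin
current-context: management-context
users:
  - name: osl-admin
    user: {}
```

Two CRIDGE instances, one per context, would then report different context/masterURL pairs to OSL. This is how a request can be routed to the right cluster even though the CRD name itself is globally unique.
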
## Awareness for CRDs and CRs in a Cluster
> CRDs and CRs can appear (or disappear) or change status at any time in a cluster. The OSL Resource Inventory needs to be aware of these events.

The implemented synchronization process is explained by the following diagram:
[]()
WatcherService is executed when the CRIDGE service application starts (see onApplicationEvent). Specifically:
- KubernetesClientResource is a class that wraps fabric8's KubernetesClient.
- This fabric8 KubernetesClient is initialized from the kubeconfig and default context of the machine that runs CRIDGE.
- On CRIDGE start-up, we try to register this cluster and context to the OSL catalogs.
- See the registerKubernetesClientInOSLResource method, which registers the KubernetesContextDefinition in the Resource Inventory as a LogicalResource, via the createOrUpdateResourceByNameCategoryVersion method.
- After the creation (or update) of this cluster as a Resource in OSL, we proceed to create SharedIndexInformers for CustomResourceDefinition objects.
- The SharedIndexInformer events notify CRIDGE, so it is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8s cluster outside of OSL (CRIDGE).
- NOTE: The ADD event is also raised every time CRIDGE starts. Therefore, on ADD we invoke the method that creates or updates resource specifications and resources.
- On ADD event:
    - The CRD is transformed to the OSL Kubernetes domain model: method kubernetesClientResource.KubernetesCRD2OpensliceCRD
    - Then the OSL Kubernetes domain model is:
        - Transformed to a Resource Specification and stored in the catalog (see createOrUpdateResourceSpecByNameCategoryVersion)
        - Transformed to a Resource and stored in the catalog (see createOrUpdateResourceByNameCategoryVersion)
- Conceptually, while a CRD is a new resource located in the Kubernetes cluster, it is also transformed to a Resource Specification (a high-level entity) that is ready to be reused as an entity in other scenarios. This is the same concept as in Kubernetes, where a CRD is a definition ready to be used for instantiating resources of its kind.
- Then, for this CRD, a Watcher is added for all Resources of this Kind (fabric8's GenericKubernetesResource entity).
- Else, a resource is created in the catalog.
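
As a purely illustrative sketch (the naming and characteristic scheme below are assumptions, not OSL's actual payload), the CRD from the earlier example might end up in the Resource Spec Catalog as an entry along these lines:

```yaml
# Hypothetical Resource Specification entry derived from a discovered CRD
# (field names chosen for illustration only)
name: myresource.example.com
category: CustomResourceDefinition
version: v1
resourceSpecCharacteristic:
  - name: _CR_SPEC          # template of the CR to instantiate at order time
    valueType: TEXT
```

The point is the conceptual mapping: one CRD in the cluster becomes one Resource Specification in the catalog, ready to be referenced by Service Specifications.
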
## Exposure of CRDs as Service Specifications
See [Exposing Kubernetes Resources](../../service_design/kubernetes/exposing_kubernetes_resources.md) section for ways to design services around CRDs.
## Service Orchestration and CRDs/CRs
OSOM checks the presence of the attribute _CR_SPEC at the RFS to make a request for a CR deployment.
> LCM rules can be used to change attributes of this YAML/JSON file before sending it for orchestration.

However, the following issue needs to be solved: **How to map the CR lifecycle that is defined in the CRD to the TMF resource lifecycle?**
For this, we introduced the following characteristics:

- _CR_CHECK_FIELD
- _CR_CHECKVAL_STANDBY
- _CR_CHECKVAL_ALARM
- _CR_CHECKVAL_AVAILABLE
- _CR_CHECKVAL_RESERVED
- _CR_CHECKVAL_UNKNOWN
- _CR_CHECKVAL_SUSPENDED

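
As an illustration of how these characteristics might be filled in, consider a hypothetical CRD that reports its state in a `status.phase` field (the field path and the phase values below are assumptions for the example, not fixed by OSL):

```yaml
# Hypothetical characteristic values mapping a CR's status to the TMF lifecycle
_CR_CHECK_FIELD: status.phase        # which field of the CR carries the status
_CR_CHECKVAL_AVAILABLE: Running      # status.phase == Running -> TMF "available"
_CR_CHECKVAL_ALARM: Failed           # status.phase == Failed  -> TMF "alarm"
_CR_CHECKVAL_STANDBY: Pending        # status.phase == Pending -> TMF "standby"
```

With such a mapping in place, CRIDGE can watch the CR and translate each observed status value into the corresponding TMF resource state.
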
OSOM sends to CRIDGE a message with the following information:
- orderId: the related Service Order ID
- startDate: start date of the deployment (not used currently)
- endDate: end date of the deployment (not used currently)
- _CR_SPEC: the spec that is sent to CRIDGE (in JSON)
- Returns:
    - a string response from CRIDGE. It returns "OK" if everything is OK, or "SEE OTHER" if there are multiple CRIDGE instances and another one will handle the request for the equivalent cluster. Any other response is handled as an error.
- CRIDGE receives the message and creates the necessary CR according to the labels.
- It monitors and tries to figure out and map the status of the CR to the TMF status, according to the provided org.etsi.osl.statusCheck* labels.
- It sends the current resource to the message bus for creation or update in the TMF Service Inventory.
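
A sketch of what a CR created by CRIDGE might look like after label injection. Apart from the org.etsi.osl.* prefix, which the text above confirms, the exact label keys and the namespace name are illustrative assumptions:

```yaml
apiVersion: example.com/v1
kind: Myresource
metadata:
  name: example-resource-1
  namespace: sorder-abc123                  # hypothetical per-order namespace
  labels:
    org.etsi.osl.serviceOrderId: "abc123"                # assumed label key
    org.etsi.osl.statusCheckField: "status.phase"        # assumed label key
    org.etsi.osl.statusCheckValAvailable: "Running"      # assumed label key
spec:
  replicas: 2
```

The labels travel with the CR, so the watcher that later observes this object has everything it needs to map its status back to the right TMF resource and service order.
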
## Deployment of a new CR based on a CRD
The implemented process to deploy a CR is explained by the following diagram:
[]()
- A message arrives to deploy a CR.
- The call examines whether this CRIDGE service can handle the request (based on context and masterURL).
- Headers and a _CR_SPEC in JSON are received.
- The _CR_SPEC is unmarshaled as a GenericKubernetesResource.
- Headers are in the format org.etsi.osl.*
- These headers are injected as labels (see the [Service Orchestration section](#service-orchestration-and-crdscrs)).
- A namespace is created for this resource.
- Watchers are created for this namespace, e.g. for new Secrets, ConfigMaps, etc., so that they can be made available back as resources to the OSL Inventory (note: only Secrets are watched for now).
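
Putting the steps above together, the incoming request can be pictured as a header map plus a JSON _CR_SPEC body. All key names other than the org.etsi.osl.* prefix are assumptions for illustration:

```yaml
# Hypothetical shape of a CR deployment request handled by CRIDGE
headers:
  org.etsi.osl.context: edge-context                      # assumed key: which CRIDGE should act
  org.etsi.osl.masterURL: https://edge.example.com:6443   # assumed key: target cluster API endpoint
  org.etsi.osl.namespace: sorder-abc123                   # assumed key: namespace to create
_CR_SPEC: |
  {
    "apiVersion": "example.com/v1",
    "kind": "Myresource",
    "metadata": { "name": "example-resource-1" },
    "spec": { "replicas": 2 }
  }
```

A CRIDGE instance whose context and masterURL do not match would answer "SEE OTHER", leaving the request for the instance that manages that cluster.
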
## Probe further
- See examples of exposing Kubernetes Operators as a Service via OpenSlice:
    - [Offering "Calculator as a Service"](../../service_design/examples/calculator_crd_aas/calculator_crd_aas.md)
    - [Offering "Helm installation as a Service" (Jenkins example)](../../service_design/examples/jenkins_helm_install_aas/jenkins_helm_install_aas.md)