diff --git a/doc/architecture/CRIDGEforDevelopers/CRIDGEforDevelopers.md b/doc/architecture/CRIDGEforDevelopers/CRIDGEforDevelopers.md
new file mode 100644
index 0000000000000000000000000000000000000000..defea971cbe0b2b0153389327981784457c57869
--- /dev/null
+++ b/doc/architecture/CRIDGEforDevelopers/CRIDGEforDevelopers.md
@@ -0,0 +1,221 @@
+
+# CRIDGE: A Service to manage Custom Resources in a Kubernetes Cluster
+
+## Intended Audience: OSL developers
+
+ > Kubernetes is an orchestration system for automating software deployment, scaling, and management. One interacts with it through the Kubernetes API, and it offers a set of objects ready for use out of the box. Custom Resource Definitions (CRDs) are a way to manage things other than Kubernetes itself and to create our own objects. The use of CRDs makes the possibilities of Kubernetes management almost limitless: you can extend the base Kubernetes API with any object you like using CRDs.
+
+
+CRIDGE is a service designed to create and manage Custom Resources (CRs) based on Custom Resource Definitions (CRDs) installed on a Kubernetes cluster. As part of OpenSlice (OSL), CRIDGE enables seamless integration and orchestration within Kubernetes environments, exposing the Kubernetes APIs through the TMF APIs and models. Thus, more or less, OSL exposes Kubernetes APIs as TMF APIs and models.
+
+ > By allowing the design and lifecycle management of services/resources that expose CRDs/CRs in a Kubernetes cluster via the TMF APIs, OSL can be used in many complex scenarios, now involving resources from multiple domains.
+
+
+
+1. The CRIDGE service allows OSL to:
+    - Create and manage Custom Resources (CRs) using installed CRDs on a target Kubernetes cluster.
+    - Facilitate complex orchestration scenarios by wrapping Kubernetes APIs as TMF APIs and models.
+    - Handle connectivity to a Kubernetes cluster and manage the lifecycle of CRDs.
+    - Wrap the Kubernetes API, receiving and providing resources to and from other OSL services via the service bus.
+
+2. Enabling Loose Coupling and Orchestration
+    - Language Flexibility: Developers can write CRDs in any language and expose them via the Kubernetes APIs. OSL will reuse these CRDs, enhancing flexibility and integration capabilities.
+    - Familiar Deployment: Developers can create and deploy applications using familiar tools such as Helm charts, simplifying the process and reducing the learning curve.
+
+3. Ecosystem Reusability
+    - CRIDGE capitalizes on the extensive Kubernetes ecosystem, particularly focusing on operators (CRDs).
+    - Key repositories and hubs such as artifacthub.io and Operatorhub.io can be utilized for finding and deploying operators.
+
+4. Service Catalog Exposure and Deployment
+
+    OSL can expose CRs in service catalogs, facilitating their deployment in complex scenarios.
+    These scenarios may include service bundles that involve multiple systems, such as RAN controllers or other Kubernetes clusters, providing a robust and versatile deployment framework.
+
+
+
+ > Why the name CRIDGE? We wanted to build a service that maps TMF models to CRDs: a kind of **CR**D to TMF br**idge**. Therefore, CRIDGE was born.
+
+# Approach
+
+ > OSL in general is responsible for exposing service specifications, which are ready to be ordered and orchestrated, through TM Forum Open APIs, as defined in the OSL Service Spec Catalog. Usually, for a service specification, one or more corresponding resource specifications (resourceSpecificationReference) are registered in the OSL Resource Spec Catalog.
+
+The following image illustrates the approach.
+
+<img src="img01.png" width=1024px>
+
+1. A CRD in a cluster is mapped in the TMF model to a Resource Specification and can therefore be exposed as a Service Specification in a catalog.
+2. Service Orders can be created for this Service Specification. The OSL Orchestrator (OSOM) will manage the lifecycle of the Service Order.
+3. OSOM creates a Resource in the OSL Resource Inventory and requests (via CRIDGE) a new Custom Resource (CR) in the target cluster.
+    - The resource is created in a specific namespace (for example, the UUID of the Service Order)
+    - A CR in a cluster is mapped in the TMF model to a Resource in the Resource Inventory
+    - Other related resources created by the CRD controller within the namespace are automatically created in the OSL Resource Inventory under the same Service Order
+
+
+<img src="img02.png" width=800px>
+
+The image above illustrates the architecture and workflow of the CRIDGE service, showing how it interacts with other components within a Kubernetes (K8s) cluster. Here is an explanation of the key components and flow in the diagram:
+
+ - Other OSL Services: This box represents various OSL services such as the Service Spec Catalogue, Resource Spec Catalogue, Service Inventory, Resource Inventory, and OSOM (OpenSlice Service Orchestration and Management).
+ - Service Bus: This is the communication layer that facilitates interaction between the CRIDGE service and other OSL services.
+ - CRIDGE: CRIDGE acts as a bridge that converts CRDs (Custom Resource Definitions) to TMF (TM Forum) APIs and models. It enables the creation and management of Custom Resources (CRs) in the Kubernetes cluster.
+ - K8s API: The Kubernetes API server, which is the central control point for managing the Kubernetes cluster. CRIDGE interacts with the K8s API to manage CRDs and CRs.
+
+ > CRD (Custom Resource Definition): A CRD is a way to define custom resources in Kubernetes cluster-wide. It allows the extension of the Kubernetes API to create and manage user-defined resources. Example:
+```
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: myresource.example.com
+```
+
+ - Namespaces: Kubernetes namespaces provide a way to partition resources within a cluster. The diagram shows that multiple namespaces (nsxx, nsyy, nsz) can be managed by CRIDGE.
+
+ > CR (Custom Resource): A CR is an instance of a CRD. It represents the actual custom resource that is managed within the Kubernetes cluster. Example, shown in different namespaces:
+```
+apiVersion: example.com/v1
+kind: Myresource
+metadata:
+  name: example-resource-1
+```
+
+In a nutshell:
+
+ - Various OSL services use the Service Bus to communicate with CRIDGE.
+ - CRIDGE translates requests to the Kubernetes API and vice versa, facilitating the integration of custom resources with other OSL services.
+ - CRDs are defined and managed through the K8s API. The example CRD is named myresource.example.com.
+ - Deploying CRs in namespaces: Custom resources defined by the CRD are created and managed within different namespaces in the Kubernetes cluster. Each namespace can have its own instances of the custom resources.
+
+ > The example CRD myresource.example.com allows the creation of custom resources of type Myresource.
+ > Instances of Myresource are created in various namespaces, each with a unique name, like example-resource-1.
+
+
+# Handling more than one cluster
+
+A CRIDGE service is usually responsible for managing one cluster. In the following diagram we show how it can be used for managing multiple clusters:
+
+<img src="img03.png" width=1024px>
+
+We assume that there is an OSL management cluster where OSL is installed. CRIDGE is also installed there if we would like to manage resources in that same management cluster.
+ - Each CRIDGE service has its own configuration to connect to its target cluster (see the kubeconfig sketch below)
+ - Each CRIDGE can be installed either in the management cluster or in the remote clusters. Connectivity is handled via the service bus
+ - Important: Each CRIDGE has a different context and API endpoint. This is used to request CRDs on a different cluster
+
+
+ > A CRD has a globally unique name, for example mycrd.example.com. So we also need a way to identify the different clusters.
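+
+To make this concrete: the cluster that a CRIDGE instance manages is identified by the context and master URL taken from the kubeconfig it is started with (the fabric8 client is initialized from this kubeconfig, as described in the next section). The following is a minimal, illustrative kubeconfig sketch; the cluster name, context name, server URL and credentials are hypothetical placeholders and not values shipped with CRIDGE.
+
+```
+# Illustrative kubeconfig for one CRIDGE instance (all names/URLs are hypothetical)
+apiVersion: v1
+kind: Config
+clusters:
+  - name: edge1_cluster
+    cluster:
+      server: https://172.16.10.10:6443   # the master URL that identifies this cluster
+contexts:
+  - name: edge1-context                   # the context used by this CRIDGE instance
+    context:
+      cluster: edge1_cluster
+      user: cridge-admin
+current-context: edge1-context            # fabric8 uses the current/default context
+users:
+  - name: cridge-admin
+    user:
+      token: REDACTED                     # placeholder credential
+```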
+
+
+# Awareness of CRDs and CRs in a cluster
+
+> CRDs and CRs can appear, disappear, or change status at any time in a cluster. The OSL Resource Inventory needs to be aware of these events.
+
+The sync process is implemented in the code and illustrated by the following picture:
+
+
+<img src="img04.png" width=1024px>
+
+The WatcherService is executed when the CRIDGE service application starts (see onApplicationEvent). It does the following:
+
+- KubernetesClientResource is a class that wraps fabric8's KubernetesClient
+  - This fabric8 KubernetesClient is initialized from the kubeconf and default context of the machine that runs CRIDGE
+- On CRIDGE start-up we try to register this cluster and context in the OSL catalogs.
+  - See the registerKubernetesClientInOSLResource method, which registers the KubernetesContextDefinition in the Resource Inventory as a LogicalResource via the createOrUpdateResourceByNameCategoryVersion method
+- After the creation (or update) of this cluster as a Resource in OSL, we proceed to create SharedIndexInformers for CustomResourceDefinition objects
+- The SharedIndexInformer events notify CRIDGE, so it is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8s cluster outside of OSL (CRIDGE)
+  - NOTE: The ADD event is also raised every time CRIDGE starts. Therefore, on ADD we run the method that creates or updates the corresponding resource specifications and resources
+- On an ADD event:
+  - The CRD is transformed to the OSL Kubernetes domain model: method kubernetesClientResource.KubernetesCRD2OpensliceCRD
+  - Then the OSL Kubernetes domain model is:
+    - transformed to a Resource Specification and stored in the catalog (see createOrUpdateResourceSpecByNameCategoryVersion)
+    - transformed to a Resource and stored in the catalog (see createOrUpdateResourceByNameCategoryVersion)
+  - Conceptually, while a CRD is a new resource located in the Kubernetes cluster, it is also transformed into a Resource Specification (a high-level entity) that is ready to be reused as an entity in other scenarios.
+    This is the same concept as in Kubernetes, where a CRD is a definition ready to be used for instantiating resources of that kind.
+  - Then, for this CRD, a Watcher is added for all resources of this Kind (fabric8's GenericKubernetesResource entity)
+  - When we have a newly added/updated/deleted resource of a certain CRD, the method updateGenericKubernetesResourceInOSLCatalog is called for this object (fabric8's GenericKubernetesResource entity)
+    - We examine whether the resource has the label org.etsi.osl.resourceId
+    - This label is added by OSOM during service orders to correlate K8s-requested resources with resources in the inventory
+    - If the label exists, we update the resource by ID (updateResourceById)
+    - Otherwise, a new resource is created in the catalog
+
+
+# Deployment of a new CR based on a CRD
+
+
+<img src="img05.png" width=1024px>
+
+- A message arrives requesting the deployment of a CR
+  - The call examines whether this CRIDGE service can handle the request (based on context and masterURL)
+- The message contains headers and a CR spec in JSON
+- The CR spec is unmarshalled as a GenericKubernetesResource
+- Headers are in the format org.etsi.osl.*
+- These headers are injected as labels
+  - (see the orchestration section later on, and the example sketch after this list)
+- A namespace is created for this resource
+- Watchers are created for this namespace, e.g. for new Secrets, ConfigMaps etc., so that they can be made available back as resources in the OSL Resource Inventory (note: only Secrets are watched for now)
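+
+To make the label injection concrete, a CR created by CRIDGE could end up looking roughly like the sketch below, reusing the Myresource example from earlier in this document. The org.etsi.osl.* keys are the headers enumerated in the orchestration section further down; all values shown here (names, UUIDs, the READY status value, the spec field) are hypothetical placeholders.
+
+```
+apiVersion: example.com/v1
+kind: Myresource
+metadata:
+  # hypothetical name: the "cr" prefix plus the OSL resource id
+  name: cr-9bfa8c9a-0f0b-4e6b-8f2e-1d2c3b4a5f60
+  # hypothetical namespace: the Service Order UUID requested via org.etsi.osl.namespace
+  namespace: 3f2a7f2e-1b2c-4d5e-9f00-aa11bb22cc33
+  labels:
+    org.etsi.osl.serviceId: 7c1d2e3f-1111-2222-3333-444455556666          # hypothetical
+    org.etsi.osl.resourceId: 9bfa8c9a-0f0b-4e6b-8f2e-1d2c3b4a5f60         # hypothetical
+    org.etsi.osl.serviceOrderId: 3f2a7f2e-1b2c-4d5e-9f00-aa11bb22cc33     # hypothetical
+    org.etsi.osl.statusCheckFieldName: status.status
+    org.etsi.osl.statusCheckValueAvailable: READY                         # hypothetical value
+spec:
+  size: 3   # hypothetical field; the spec content comes from the _CR_SPEC characteristic
+```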
+
+
+# Expose CRDs as Service Specifications in OpenSlice catalogs
+
+See [ExposingKubernetesResources](ExposingKubernetesResources.md)
+
+
+
+
+# Service Orchestration and CRDs/CRs
+
+OSOM checks for the presence of the attribute _CR_SPEC on the RFS in order to make a request for a CR deployment.
+
+- _CR_SPEC is a JSON or YAML string that is used for the request
+  - It is similar to what one would apply with, e.g., kubectl apply
+  - There are tools to translate a YAML file to JSON
+
+> LCM rules can be used to change attributes of this YAML/JSON file before sending it for orchestration
+
+However, the following issue needs to be solved: **How do we map the CR lifecycle that is defined in the CRD to the TMF resource lifecycle?**
+ - For this, we introduced the following characteristics: _CR_CHECK_FIELD, _CR_CHECKVAL_STANDBY, _CR_CHECKVAL_ALARM, _CR_CHECKVAL_AVAILABLE, _CR_CHECKVAL_RESERVED, _CR_CHECKVAL_UNKNOWN, _CR_CHECKVAL_SUSPENDED
+
+OSOM sends to CRIDGE a message with the following information:
+
+- currentContextCluster: the current context of the cluster
+- clusterMasterURL: the master URL of the cluster
+- org.etsi.osl.serviceId: the related service id that the created resource references
+- org.etsi.osl.resourceId: the related resource id that the created CR will wrap and reference
+- org.etsi.osl.prefixName: a short prefix (default is cr) that we need to add in various places; for example, names in K8s cannot start with a number
+- org.etsi.osl.serviceOrderId: the related service order id of this deployment request
+- org.etsi.osl.namespace: the requested namespace name
+- org.etsi.osl.statusCheckFieldName: the name of the CR field that needs to be monitored in order to track the status of the service and translate it to a TMF resource status (RESERVED, AVAILABLE, etc.)
+- org.etsi.osl.statusCheckValueStandby: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state STANDBY (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- org.etsi.osl.statusCheckValueAlarm: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state ALARMS (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- org.etsi.osl.statusCheckValueAvailable: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state AVAILABLE (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- org.etsi.osl.statusCheckValueReserved: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state RESERVED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- org.etsi.osl.statusCheckValueUnknown: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state UNKNOWN (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+- org.etsi.osl.statusCheckValueSuspended: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state SUSPENDED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
+
+- Parameters:
+  - aService: reference to the service that the resource and the CR belong to
+  - resourceCR: reference to the equivalent resource in the TMF repository of the target CR (one-to-one mapping)
+  - orderId: the related service order ID
+  - startDate: start date of the deployment (not used currently)
+  - endDate: end date of the deployment (not used currently)
+  - _CR_SPEC: the spec that is sent to CRIDGE (in JSON)
+- Returns:
+  - a string response from CRIDGE. It returns "OK" if everything is fine, or "SEE OTHER" if there are multiple CRIDGE instances and another one will handle the request for the corresponding cluster. Any other response is handled as an error.
+
+
+- CRIDGE receives the message and, according to the labels, creates the necessary CR
+- It monitors the created resource(s) in the namespace (see the sequence diagram in the previous images)
+- It monitors and tries to figure out and map the status of the CR to the TMF status, according to the provided org.etsi.osl.statusCheck* labels
+- It sends the current resource to the message bus for creation or update in the TMF Service Inventory
+
+
+---
+
+## What's next?
+ + - See examples of exposing operators via OpenSlice: + - [Exposing Kubernetes Operators as a Service : Offering "Calculator as a Service" through OpenSlice](ExposingCRDs_aaS_Example_Calculator.md) + + + + + + diff --git a/doc/architecture/CRIDGEforDevelopers/img01.png b/doc/architecture/CRIDGEforDevelopers/img01.png new file mode 100644 index 0000000000000000000000000000000000000000..d9f6f73dd21ff94572fd411c68e31a2915426f70 Binary files /dev/null and b/doc/architecture/CRIDGEforDevelopers/img01.png differ diff --git a/doc/architecture/CRIDGEforDevelopers/img02.png b/doc/architecture/CRIDGEforDevelopers/img02.png new file mode 100644 index 0000000000000000000000000000000000000000..fb546ad61cce1ce5c9cc593507f816ef64c7bcc5 Binary files /dev/null and b/doc/architecture/CRIDGEforDevelopers/img02.png differ diff --git a/doc/architecture/CRIDGEforDevelopers/img03.png b/doc/architecture/CRIDGEforDevelopers/img03.png new file mode 100644 index 0000000000000000000000000000000000000000..79d0d2a208ca3c63489264a87d4a155d44d12cc4 Binary files /dev/null and b/doc/architecture/CRIDGEforDevelopers/img03.png differ diff --git a/doc/architecture/CRIDGEforDevelopers/img04.png b/doc/architecture/CRIDGEforDevelopers/img04.png new file mode 100644 index 0000000000000000000000000000000000000000..233831d1bb986c92432fdaff602db44fc2c0df48 Binary files /dev/null and b/doc/architecture/CRIDGEforDevelopers/img04.png differ diff --git a/doc/architecture/CRIDGEforDevelopers/img05.png b/doc/architecture/CRIDGEforDevelopers/img05.png new file mode 100644 index 0000000000000000000000000000000000000000..3b05561502893694d3d72d12cbbfc154786da853 Binary files /dev/null and b/doc/architecture/CRIDGEforDevelopers/img05.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md new file mode 100644 index 0000000000000000000000000000000000000000..e50c05c53e03a0fc0e24ddbe117b6397dfa6bdab --- /dev/null +++ b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/ExposingCRDs_aaS_Example_Calculator.md @@ -0,0 +1,262 @@ + +# Exposing Kubernetes Operators as a Service : Offering "Calculator as a Service" through OpenSlice + +## Intended Audience: Service Designers + + +> To illustrate the powerful concept of Kubernetes operators and how they can be utilized to offer a service through OpenSlice, let's provide an example of a "Calculator as a Service." + +> This example will demonstrate the flexibility and capabilities of Kubernetes operators in managing custom resources and automating operational tasks. + +--- +## Offering "Calculator as a Service" through OpenSlice + +- We have a service that can accept two integers and an action (SUM, SUB, etc) and returns a result +- We would like to offer it as a Service through OpenSlice +- So when a user orders it with some initial parameters, OpenSlice will create it and return the result +- Also while the service is active, we can do further calculations, until we destroy it. 
+
+
+Assume the following simple CRD of a calculator model, accepting two params and an action (spec section) and returning a result (status section).
+
+The controller (the calculator code) can be implemented in any language and is installed in a Kubernetes cluster.
+
+```
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: mycalculators.examples.osl.etsi.org
+spec:
+  group: examples.osl.etsi.org
+  names:
+    kind: MyCalculator
+    plural: mycalculators
+    singular: mycalculator
+  scope: Namespaced
+  versions:
+    - name: v1alpha1
+      schema:
+        openAPIV3Schema:
+          properties:
+            spec:
+              properties:
+                parama:
+                  type: integer
+                paramb:
+                  type: integer
+                action:
+                  type: string
+              type: object
+            status:
+              properties:
+                result:
+                  type: integer
+                status:
+                  type: string
+              type: object
+          type: object
+      served: true
+      storage: true
+      subresources:
+        status: {}
+```
+
+
+Request to the cluster (through e.g. kubectl apply):
+
+```
+apiVersion: examples.osl.etsi.org/v1alpha1
+kind: MyCalculator
+metadata:
+  name: mycalculator.examples.osl.etsi.org
+spec:
+  parama: 170
+  paramb: 180
+  action: 'SUM'
+
+```
+
+Response:
+
+```
+apiVersion: examples.osl.etsi.org/v1alpha1
+kind: MyCalculator
+metadata:
+  creationTimestamp: '2023-12-05T12:26:07Z'
+
+<snip>
+
+status:
+  result: 350
+  status: CALCULATED
+spec:
+  action: SUM
+  parama: 170
+  paramb: 180
+
+```
+
+To offer this through OpenSlice as a Service Specification ready to be ordered, we need to do the following:
+
+---
+### The CRD is saved automatically as a Resource Specification
+
+As soon as the CRD is deployed in the cluster (e.g. by your admin via kubectl or via any installation from the internet), it is automatically transformed and made available in the OpenSlice catalogs as a Resource Specification.
+
+- See also the fully qualified name of the resource specification:
+  - MyCalculator@examples.osl.etsi.org/v1alpha1@docker-desktop@https://kubernetes.docker.internal:6443/
+  - The resource specification name is effectively unique, so you can install the CRD in many clusters around the internet. Each CRD on each cluster will appear here, for example:
+    - MyCalculator@examples.osl.etsi.org/v1alpha1@default_cluster@https://10.10.10.8:6443/
+    - MyCalculator@examples.osl.etsi.org/v1alpha1@edge1_cluster@https://172.16.10.10:6443/
+  - Thanks to this, OpenSlice can manage resources in multiple clusters
+
+
+<img src="img07.png" >
+
+> See also the detailed characteristics, and note how OpenSlice automatically flattens and expands all characteristics in a key-value style
+
+<img src="img08.png" width=1024px>
+
+---
+# Expose to Users
+
+## Start by Creating a ResourceFacingServiceSpecification
+
+From the UI menu, create a new Service Specification
+
+
+
+<img src="img09.png" width=1024px>
+<img src="img10.png" width=1024px>
+
+
+
+
+### Creation of CRD-related characteristics
+
+- We now need to adjust some characteristics of this CRD Resource Specification.
+- OpenSlice automatically translated the CRD spec into a flat list of characteristics. So the "spec" section from the original YAML, for example, is now unfolded into: spec, spec.parama, spec.paramb, etc.; the same applies to the "status" object.
+- We need to make OpenSlice aware of when the service will be active.
+    - So we go to the characteristic _CR_CHECK_FIELD and define that the field that shows the status of the service is the characteristic "status.status" (it is a text field)
+    - Then we go to _CR_CHECKVAL_AVAILABLE and define the value CALCULATED, which signals the following: when the characteristic "status.status" has the value "CALCULATED", OpenSlice will mark the underlying service as "ACTIVE"
+    - We also need to define the YAML that OpenSlice will use to create the new resource in the Kubernetes cluster
+    - We insert this YAML in the characteristic _CR_SPEC
+
+ The _CR_SPEC is:
+
+```
+apiVersion: examples.osl.etsi.org/v1alpha1
+kind: MyCalculator
+metadata:
+  name: mycalculator.examples.osl.etsi.org
+spec:
+  parama: 170
+  paramb: 180
+  action: 'SUM'
+
+```
+
+<img src="img11.png" width=1024px>
+
+
+> However, the values above are fixed. How do we allow a user to pass parameters through OpenSlice?
+
+## Expose in Catalog
+
+Create a new CustomerFacingServiceSpecification
+
+ - Go to the menu Service Specification>New Service Specification
+ - Create a service My Calculator and mark it as a Bundle
+ - Go to Service Specification Relationships and add MyCalculatorRFS
+ - The service will be automatically transformed into a "CustomerFacingServiceSpecification"
+ - Add the following characteristics, as the image shows:
+
+
+<img src="cfs_img12.png" width=1024px>
+
+
+
+
+
+### Allow users to pass new values through OpenSlice
+
+
+
+We need to create LCM rules in the CustomerFacingServiceSpecification:
+
+ - The goal of the rules is to allow the user to pass parameters to the actual resource towards the cluster.
+ - We will create one rule that passes the parameters just before creating the service (PRE_PROVISION phase)
+ - We will create one rule that passes the parameters while the service is active (SUPERVISION phase)
+ - The rules will be the same
+
+<img src="img12.png" width=1024px>
+
+If we open one rule, it will look like the following:
+<img src="img13.png" width=1024px>
+
+- We need to change the _CR_SPEC characteristic of the referenced ResourceFacingServiceSpecification
+- First, bring a block from Service>Relationships>Service Refs and drop the "Service MyCalculatorRFS" block
+- Then add a list block from Lists
+- Then add the block that modifies a referenced characteristic: from Service>Relationships>Service Refs, the block "Set value to characteristic of a Referenced Service"
+- Add a text block for _CR_SPEC
+- We use a block that changes a String according to variables: Text>"A formatted text replacing variables from List"
+- See that we have as input string the YAML text lines (a sketch of the resulting template follows this list)
+  - see that parama and paramb have a %d (they accept integers) and action has a %s (it accepts a string)
+  - see that the variables that will replace the %d, %d and %s are a list
+    - the first %d will be replaced with the value from characteristic spec.parama
+    - the second %d will be replaced with the value from characteristic spec.paramb
+    - the %s will be replaced with the value from characteristic spec.action
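+
+For clarity, the formatted-text input of this rule is essentially the _CR_SPEC YAML with the fixed values replaced by placeholders, roughly as sketched below. This is illustrative only; the exact text is whatever you type into the Blockly block.
+
+```
+apiVersion: examples.osl.etsi.org/v1alpha1
+kind: MyCalculator
+metadata:
+  name: mycalculator.examples.osl.etsi.org
+spec:
+  parama: %d    # replaced by the value of characteristic spec.parama
+  paramb: %d    # replaced by the value of characteristic spec.paramb
+  action: '%s'  # replaced by the value of characteristic spec.action
+```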
"status.result" + + +<img src="img13_1.png" width=1024px> + + +<img src="img14.png" width=1024px> + +Expose it then to a catalogue for orders through the Service Categories and Service Catalogs + + +<img src="img15.png"> + + +### Order the Service + +When a user orders the service, it will look like this: + +<img src="img16.png" width=1024px> + + + +- After the Service Order we have 2 services in service inventory on CFS and on RFS. Both have references to values +- OpenSlice (via CRIDGE service) updates the Resource in Resource Inventory and OSOM updates the Services in Service Inventory +- The Actual resources are running in the Kubernetes cluster managed by OpenSlice +- The result is in the characteristic status.result of the running service + +<img src="img17.png" width=800px> + +<img src="img18.png" width=1024px> + +### Modify the running service + + The user can modify the service + +<img src="img19.png" width=1024px> + +- After a while the update is applied to the cluster, the controller will pick up the resource update and patch the resource +- OpenSlice (via CRIDGE service) updates the Resource in Resource Inventory and OSOM updates the Services in Service Inventory +- The result will be available to the respective characteristic "Result" after a few seconds, as need to go through various steps (OpenSlice orchestrator, down to kubernetes, to Calculator controller and back) + + +<img src="img20.png" width=1024px> + \ No newline at end of file diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/cfs_img12.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/cfs_img12.png new file mode 100644 index 0000000000000000000000000000000000000000..d3f55dad53fef145b49e720afd5a13140a097917 Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/cfs_img12.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img07.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img07.png new file mode 100644 index 0000000000000000000000000000000000000000..6990bf18122d41d5c3eee345270c4c18b5b0a1dc Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img07.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img08.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img08.png new file mode 100644 index 0000000000000000000000000000000000000000..fdb27bf11096f00458a89586ae5efc2cb8e162b8 Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img08.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img09.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img09.png new file mode 100644 index 0000000000000000000000000000000000000000..a6b963879355580e242b02b10e007e1434fdceb3 Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img09.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img10.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img10.png new file mode 100644 index 0000000000000000000000000000000000000000..73d0ef7c9aaa5e56098c5fca4420fd6c50b8cd2a Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img10.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img11.png 
b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img11.png new file mode 100644 index 0000000000000000000000000000000000000000..af0be10ddb844714a802343c3cef7d1231049355 Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img11.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img12.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img12.png new file mode 100644 index 0000000000000000000000000000000000000000..47b924e5c384702fec7c911f194b45be001fe814 Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img12.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13.png new file mode 100644 index 0000000000000000000000000000000000000000..f97136493d1c2b6bca4c52a99a2d237a4109b090 Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13_1.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13_1.png new file mode 100644 index 0000000000000000000000000000000000000000..f3d019ce3704a7397b9b5049022362dd1b8bc32a Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img13_1.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img14.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img14.png new file mode 100644 index 0000000000000000000000000000000000000000..570933dcd05ec67463254237a008515e098699eb Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img14.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img15.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img15.png new file mode 100644 index 0000000000000000000000000000000000000000..4e35c0818e4daae0f500ccb0bab4181a5310c4d6 Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img15.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img16.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img16.png new file mode 100644 index 0000000000000000000000000000000000000000..11cd78e7cb200ee0506f09a681b01b41331de5bb Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img16.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img17.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img17.png new file mode 100644 index 0000000000000000000000000000000000000000..d11c3c47217474cc72f95a746bbb71c5f913b840 Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img17.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img18.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img18.png new file mode 100644 index 0000000000000000000000000000000000000000..5dfc8b9a5f7afdfcc63df9ac3c62e2d5f367d28f Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img18.png differ diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img19.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img19.png new file mode 100644 
index 0000000000000000000000000000000000000000..ad9d34b5396d69f7654447e4ee74331386adb2a5
Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img19.png differ
diff --git a/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img20.png b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img20.png
new file mode 100644
index 0000000000000000000000000000000000000000..be0c6f94c46cfb6d0a5024115ebbe9200ae8812a
Binary files /dev/null and b/doc/service_design/examples/ExposingCRDs_aaS_Example_Calculator/img20.png differ
diff --git a/doc/service_design/kubernetes/helm/HELM_Installation_aaS_Jenkins_Example.md b/doc/service_design/kubernetes/helm/HELM_Installation_aaS_Jenkins_Example.md
new file mode 100644
index 0000000000000000000000000000000000000000..ca1995192974c6b7bd2f14da7d954b6f9ab9de7d
--- /dev/null
+++ b/doc/service_design/kubernetes/helm/HELM_Installation_aaS_Jenkins_Example.md
@@ -0,0 +1,199 @@
+# Expose HELM charts as Service Specifications
+Manage Helm chart installations via OpenSlice Service Specifications and Service Orders.
+## Intended Audience: Service Designers
+
+
+> Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact with it through the Kubernetes API, and it has a set of objects ready for use out of the box.
+
+> Helm is a tool that automates the creation, packaging, configuration, and deployment of Kubernetes applications by combining your configuration files into a single reusable package.
+
+> At the heart of Helm is the packaging format called charts. Each chart comprises one or more Kubernetes manifests -- and a given chart can have child charts and dependent charts as well. Using Helm charts:
+
+> - Reduces the complexity of deploying microservices
+> - Enhances deployment speed
+> - Developers already know the technology
+
+> There are many Helm charts and Helm repositories out there that are ready to be used
+
+> Enable loose coupling and more orchestration scenarios
+
+> Developers create and deploy applications with tools they already know (e.g. Helm charts)
+
+> Use the TMF models as wrapper entities around Helm charts
+
+
+Use OpenSlice to expose them in service catalogs and deploy them in complex scenarios (service bundles) that also involve other systems:
+
+ - Include e.g. RAN controllers,
+ - Pass values through lifecycle rules from one service to another,
+ - Manage multiple Helm charts in multiple clusters
+
+
+## The installation of HELM charts is based on OpenSlice CRD support
+
+Please read more [here](../ExposingKubernetesResources.md)
+
+
+For installing Helm charts we will use ArgoCD, a well-known Kubernetes-native continuous deployment (CD) tool.
+
+> ArgoCD is a Kubernetes-native continuous deployment (CD) tool
+
+> While deploying Helm charts is just one scenario for ArgoCD, in the future one can exploit it for many more things
+
+> Unlike some other tools such as FluxCD, it also provides a UI, which is useful for management and troubleshooting
+
+
+We will mainly use the CRD of ```Kind: Application``` that ArgoCD can manage
+
+
+
+Before proceeding, install ArgoCD in your management cluster by following the ArgoCD instructions.
+
+As soon as you install ArgoCD, OpenSlice is automatically aware of specific new Kinds. The one we will use is the ```Kind: Application``` that ArgoCD manages under the apiGroup argoproj.io.
+
+Browse to Resource Specifications.
+You will see an entry like the following:
+
+```Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/```
+
+See image:
+
+<img src="img01.png" width=1024px>
+
+# Example: Offer Jenkins as a Service via OpenSlice
+
+We will use the ```Kind: Application``` of ArgoCD and create a ResourceFacingServiceSpecification for Jenkins
+
+ 1. Go to Service Specifications
+ 2. Create New Specification
+ 3. Give a Name, e.g. jenkinsrfs
+ 4. Go to Resource Specification Relationships
+ 5. Assign ```Application@argoproj.io/v1alpha1@kubernetes@https://10.10.10.144:6443/```
+
+<img src="img02.png" width=640px>
+
+
+Focus now on the characteristics configuration.
+
+First, we need to map the lifecycle of the ArgoCD Application to the TMF Resource State
+
+
+<img src="img05.png" width=640px>
+
+In ArgoCD the field **health.status** has the value that we need to check (Healthy, Progressing, etc.)
+
+
+The _CR_SPEC can be designed first in a YAML or JSON editor. Let's see a YAML definition:
+
+
+ ```
+
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  finalizers:
+    - resources-finalizer.argocd.argoproj.io
+  name: openslice-jenkins
+  namespace: argocd
+spec:
+  project: default
+  destination:
+    namespace: opencrdtest
+    name: in-cluster
+  source:
+    repoURL: https://charts.jenkins.io
+    targetRevision: 4.6.1
+    chart: jenkins
+    helm:
+      values: |
+        controller:
+          service:
+            type: ClusterIP
+  syncPolicy:
+    automated:
+      prune: true
+      selfHeal: true
+      allowEmpty: false
+    syncOptions:
+      - Validate=false
+      - CreateNamespace=true
+      - PrunePropagationPolicy=foreground
+      - PruneLast=true
+      - RespectIgnoreDifferences=true
+```
+
+**NOTICE**
+
+On each installation, OSOM will change the name of the resource in order to make it unique (it will contain a UUID):
+
+ ```name: openslice-jenkins```
+
+ The destination namespace that ArgoCD will use is ```opencrdtest```:
+
+ ```destination:
+      namespace: opencrdtest
+ ```
+
+**This implies that ArgoCD always installs Jenkins in the same namespace.**
+
+**To avoid this, we will create a simple pre-provision rule to change the namespace properly.**
+
+See the following image:
+<img src="img06.png" >
+
+1. Drag-drop the _CR_SPEC characteristic of jenkinsrfs from the Service>Text blocks
+2. Drag-drop the Text>Formatted text block
+3. Drag-drop the Text>Multi-line text input block
+4. Copy-paste the YAML text
+5. Change the spec: destination: namespace to the value %s
+6. Drag-drop Lists>Create list with block, delete 2 items (click the gear icon), and connect it to the formatted text block
+7. Drag-drop the Context>Current Service Order block, select the id, and connect it to the list
+8. Save the PRE_PROVISION rule
+
+# Expose the service to your users
+
+Then expose it as a CustomerFacingServiceSpecification by using the previous RFSS as a Service Specification Relationship:
+1. Create a Jenkins service, mark it as a Bundle and save it
+2. Go to Service Specification Relationships and assign jenkinsrfs
+3. Also add a logo if you wish
+
+<img src="img03.png" width=640px>
+<img src="img04.png" width=640px>
+
+Expose it now to a Category and a Catalog to be available for ordering.
+
+<img src="img061.png" >
+
+# Order the service
+
+Order the service from the catalog.
+
+
+Soon the order will be completed and the services will be active.
+
+
+<img src="img07.png" width=640px>
+
+# How to access the Jenkins installation
+
+From the Supporting Services of the Service Order, select the ResourceFacingService (jenkinsrfs).
+
+The ResourceFacingService also has supporting resources in the Resource Inventory.
+
+<img src="img08.png" width=640px>
+
+One is the resource reference to the application (e.g. _cr_tmpname_...), the other is a secret (e.g. cr87893...).
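+
+As an illustration only, the secret that CRIDGE picks up is a standard Kubernetes Secret created by the Jenkins chart. The name, namespace and values below are hypothetical, and the key names (jenkins-admin-user / jenkins-admin-password) are an assumption based on the official Jenkins chart, so check the actual resource in your inventory.
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: cr87893-jenkins              # hypothetical; the real name contains the generated UUID
+  namespace: <service-order-id>      # the PRE_PROVISION rule sets this from the Service Order id
+type: Opaque
+data:
+  jenkins-admin-user: YWRtaW4=               # base64("admin") - example value
+  jenkins-admin-password: c2VjcmV0cGFzcw==   # base64("secretpass") - example value
+```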
+ +<img src="img08.png" width=640px> + +One is the resource reference to the application (e.g. _cr_tmpname_...), the other is a secret (e.g. cr87893...). + +Click to go to the secret resource (This is in the Resource inventory of OpenSlice) + +<img src="img09.png" width=640px> + +Use them to login in your Jenkins. + + +> Exposing Jenkins to you external is a matter of cluster configuration and request (nodeport, load balancing, etc)! This is not a topic for this example + + + + + + diff --git a/doc/service_design/kubernetes/helm/img01.png b/doc/service_design/kubernetes/helm/img01.png new file mode 100644 index 0000000000000000000000000000000000000000..bbe29df97934dd4438f39f2f3e0e0e443ba0e4d3 Binary files /dev/null and b/doc/service_design/kubernetes/helm/img01.png differ diff --git a/doc/service_design/kubernetes/helm/img02.png b/doc/service_design/kubernetes/helm/img02.png new file mode 100644 index 0000000000000000000000000000000000000000..c3719c6c04d5e04fc126d5d8388e8862a524ba2b Binary files /dev/null and b/doc/service_design/kubernetes/helm/img02.png differ diff --git a/doc/service_design/kubernetes/helm/img03.png b/doc/service_design/kubernetes/helm/img03.png new file mode 100644 index 0000000000000000000000000000000000000000..fa3d7327fa68743d247aef9842ace2262cdeb84e Binary files /dev/null and b/doc/service_design/kubernetes/helm/img03.png differ diff --git a/doc/service_design/kubernetes/helm/img04.png b/doc/service_design/kubernetes/helm/img04.png new file mode 100644 index 0000000000000000000000000000000000000000..8a6c0bf1deb000fc1fd494317bbe0a318eb1d3e5 Binary files /dev/null and b/doc/service_design/kubernetes/helm/img04.png differ diff --git a/doc/service_design/kubernetes/helm/img05.png b/doc/service_design/kubernetes/helm/img05.png new file mode 100644 index 0000000000000000000000000000000000000000..a505d800a71024148fe04533d15d7c26d0f1a75f Binary files /dev/null and b/doc/service_design/kubernetes/helm/img05.png differ diff --git a/doc/service_design/kubernetes/helm/img06.png b/doc/service_design/kubernetes/helm/img06.png new file mode 100644 index 0000000000000000000000000000000000000000..2089e793a2c668e24dab0597c7c2965700b57f5f Binary files /dev/null and b/doc/service_design/kubernetes/helm/img06.png differ diff --git a/doc/service_design/kubernetes/helm/img061.png b/doc/service_design/kubernetes/helm/img061.png new file mode 100644 index 0000000000000000000000000000000000000000..f2dcdf951f264b1164804fba435449ad176e8da7 Binary files /dev/null and b/doc/service_design/kubernetes/helm/img061.png differ diff --git a/doc/service_design/kubernetes/helm/img07.png b/doc/service_design/kubernetes/helm/img07.png new file mode 100644 index 0000000000000000000000000000000000000000..d3bced145f17fc2804b6f36ab1a4919ff3ec656a Binary files /dev/null and b/doc/service_design/kubernetes/helm/img07.png differ diff --git a/doc/service_design/kubernetes/helm/img08.png b/doc/service_design/kubernetes/helm/img08.png new file mode 100644 index 0000000000000000000000000000000000000000..979adc4d20afd24eb0aabf78f72171b26e8ef25e Binary files /dev/null and b/doc/service_design/kubernetes/helm/img08.png differ diff --git a/doc/service_design/kubernetes/helm/img09.png b/doc/service_design/kubernetes/helm/img09.png new file mode 100644 index 0000000000000000000000000000000000000000..06b8306a2416fc3a15110672b2971ddada0b4d69 Binary files /dev/null and b/doc/service_design/kubernetes/helm/img09.png differ