diff --git a/docs/CRIDGEforDevelopers.md b/docs/CRIDGEforDevelopers.md
new file mode 100644
index 0000000000000000000000000000000000000000..ad0ecf7a3551af5a90b7fa76420f8415fef6d8f2
--- /dev/null
+++ b/docs/CRIDGEforDevelopers.md
@@ -0,0 +1,443 @@

# CRIDGE: A Service to manage Custom Resources in a Kubernetes Cluster

 > Kubernetes is an orchestration system for automating software deployment, scaling, and management. One can interact with it through the Kubernetes API, and it has a set of objects ready for use out of the box. Custom Resource Definitions (CRDs) are a way to manage things other than Kubernetes itself and to create our own objects. The use of CRDs makes the possibilities of Kubernetes management almost limitless: you can extend the base Kubernetes API with any object you like using CRDs.


CRIDGE is a service designed to create and manage Custom Resources (CRs) based on Custom Resource Definitions (CRDs) installed on a Kubernetes cluster. Through OpenSlice (OSL), CRIDGE enables seamless integration and orchestration within Kubernetes environments, exposing the Kubernetes APIs through the TMF APIs and models. In effect, OSL exposes Kubernetes APIs as TMF APIs and models.

 > By allowing the design and lifecycle management of services/resources that expose CRDs/CRs in a Kubernetes cluster via the TMF APIs, OSL can be used in many complex scenarios, now involving resources from multiple domains.



1. The CRIDGE service allows OSL to:
    - Create and manage Custom Resources (CRs) using installed CRDs on a target Kubernetes cluster.
    - Facilitate complex orchestration scenarios by wrapping Kubernetes APIs as TMF APIs and models.
    - Handle connectivity to a Kubernetes cluster and manage the lifecycle of CRDs.
    - Wrap the Kubernetes API, receiving and providing resources from/towards other OSL services via the service bus.

2. Enabling Loose Coupling and Orchestration
    - Language Flexibility: Developers can write CRDs in any language and expose them via the Kubernetes APIs. OSL will reuse these CRDs, enhancing flexibility and integration capabilities.
    - Familiar Deployment: Developers can create and deploy applications using familiar tools such as Helm charts, simplifying the process and reducing the learning curve.

3. Ecosystem Reusability
    - CRIDGE capitalizes on the extensive Kubernetes ecosystem, particularly focusing on operators (CRDs).
    - Key repositories and hubs such as artifacthub.io and Operatorhub.io can be utilized for finding and deploying operators.

4. Service Catalog Exposure and Deployment

    OSL can expose CRs in service catalogs, facilitating their deployment in complex scenarios.
    These scenarios may include service bundles that involve multiple systems, such as RAN controllers or other Kubernetes clusters, providing a robust and versatile deployment framework.



 > Why the CRIDGE name? We wanted to build a service that maps TMF models to CRDs; a kind of a **CR**D to TMF br**idge**. Therefore CRIDGE was born.

# Approach

 > OSL in general is responsible for exposing service specifications, ready to be ordered and orchestrated, through TM Forum Open APIs, as defined in the OSL Service Spec Catalog. Usually, for a service specification, one or more corresponding resource specifications (resourceSpecificationReference) are registered in the OSL Resource Spec Catalog.

The following image illustrates the approach.

<img src="img01.png" width=1024px>
1. A CRD in a cluster is mapped in the TMF model as a Resource Specification and can therefore be exposed as a Service Specification in a catalog.
2. Service Orders can be created for this Service Specification. The OSL Orchestrator (OSOM) will manage the lifecycle of the Service Order.
3. OSOM creates a Resource in the OSL Resource Inventory and requests (via CRIDGE) a new Custom Resource (CR) in the target cluster.
    - The resource is created in a specific namespace (for example, the UUID of the Service Order).
    - A CR in a cluster is mapped in the TMF model as a Resource in the Resource Inventory.
    - Other related resources created by the CRD controller within the namespace are automatically created in the OSL Resource Inventory under the same Service Order.


<img src="img02.png" width=800px>

The provided image illustrates the architecture and workflow of the CRIDGE service, showing how it interacts with other components within a Kubernetes (K8s) cluster. Here is an explanation of the key components and flow in the diagram:

 - Other OSL Services: This box represents various OSL services such as the Service Spec Catalogue, Resource Spec Catalogue, Service Inventory, Resource Inventory, and OSOM (OpenSlice Service Orchestration and Management).
 - Service Bus: This is the communication layer that facilitates interaction between the CRIDGE service and other OSL services.
 - CRIDGE: CRIDGE acts as a bridge that converts CRDs (Custom Resource Definitions) to TMF (TM Forum) APIs and models. It enables the creation and management of Custom Resources (CRs) in the Kubernetes cluster.
 - K8s API: The Kubernetes API server, which is the central control point for managing the Kubernetes cluster. CRIDGE interacts with the K8s API to manage CRDs and CRs.

 > CRD (Custom Resource Definition): A CRD is a cluster-wide way to define custom resources in Kubernetes. It allows the extension of the Kubernetes API to create and manage user-defined resources. Example (abbreviated; note that a CRD's name is the plural form followed by the group):
```
 apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition
 metadata:
   name: myresources.example.com
```

 - Namespaces: Kubernetes namespaces provide a way to partition resources within a cluster. The diagram shows that multiple namespaces (nsxx, nsyy, nsz) can be managed by CRIDGE.

 > CR (Custom Resource): A CR is an instance of a CRD. It represents the actual custom resource that is managed within the Kubernetes cluster. Example, shown in different namespaces:
```
 apiVersion: example.com/v1
 kind: Myresource
 metadata:
   name: example-resource-1
```

In a nutshell:

 - Various OSL services use the Service Bus to communicate with CRIDGE.
 - CRIDGE translates requests towards the Kubernetes API and vice versa, facilitating the integration of custom resources with other OSL services (see the sketch below).
 - CRDs are defined and managed through the K8s API. The example CRD is named myresources.example.com.
 - Deploying CRs in Namespaces: Custom resources defined by the CRD are created and managed within different namespaces in the Kubernetes cluster. Each namespace can have its own instances of the custom resources.

 > The example CRD myresources.example.com allows the creation of custom resources of kind Myresource.
 > Instances of Myresource are created in various namespaces, each with a unique name like example-resource-1.
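To make this interaction concrete, here is a minimal sketch of how a CR of the example kind Myresource could be created through the K8s API using the fabric8 client (the Java client CRIDGE builds on). The group/version/kind coordinates and the namespace nsxx are the illustrative values from above; this is not CRIDGE's actual code.

```
import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.base.ResourceDefinitionContext;

public class CreateCrSketch {
    public static void main(String[] args) {
        // Client initialized from the local kubeconf and its default context
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Describe the custom resource type so the client can address it generically
            ResourceDefinitionContext ctx = new ResourceDefinitionContext.Builder()
                    .withGroup("example.com")
                    .withVersion("v1")
                    .withKind("Myresource")
                    .withPlural("myresources")
                    .withNamespaced(true)
                    .build();

            GenericKubernetesResource cr = new GenericKubernetesResource();
            cr.setApiVersion("example.com/v1");
            cr.setKind("Myresource");
            cr.setMetadata(new ObjectMetaBuilder().withName("example-resource-1").build());

            // Equivalent to a `kubectl apply` of the CR manifest in namespace nsxx
            client.genericKubernetesResources(ctx).inNamespace("nsxx").resource(cr).create();
        }
    }
}
```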
# Handling more than one cluster

A CRIDGE service is usually responsible for managing one cluster. The following diagram shows how it can be used for managing multiple clusters:

<img src="img03.png" width=1024px>

We assume that there is an OSL management cluster where OSL is installed. CRIDGE is also installed there if we would like to manage resources in the same management cluster.
 - Each CRIDGE service has its own configuration to connect to its target cluster.
 - Each CRIDGE can be installed either in the management cluster or in the remote clusters. Connectivity is handled via the service bus.
 - Important: Each CRIDGE has a different context and API endpoints. This is used to request CRDs on a different cluster.


 > A CRD has a globally unique name, for example mycrd.example.com, so to distinguish its deployments we also need to identify the individual cluster.


# Awareness for CRDs and CRs in a cluster

> CRDs and CRs can appear, disappear, or change status at any time in a cluster. The OSL Resource Inventory needs to be aware of these events.

The sync process is found in the code and explained by the following picture:


<img src="img04.png" width=1024px>

WatcherService is executed when the CRIDGE service application starts (see onApplicationEvent). The main steps are:

- KubernetesClientResource is a class that wraps fabric8's KubernetesClient.
  - This fabric8 KubernetesClient is initialized from the kubeconf and default context of the machine that runs CRIDGE.
- On CRIDGE start-up we try to register this cluster and context to the OSL catalogs.
  - See the registerKubernetesClientInOSLResource method, which registers the KubernetesContextDefinition in the Resource Inventory as a LogicalResource via the createOrUpdateResourceByNameCategoryVersion method.
- After the creation (or update) of this cluster as a Resource in OSL, we proceed to create SharedIndexInformers for CustomResourceDefinition objects (see the sketch after this list).
- The SharedIndexInformer events notify CRIDGE, so it is always aware of all CRDs and their CRs in the cluster, even if a CRD or CR is added/updated/deleted in the K8s cluster outside of OSL (CRIDGE).
  - NOTE: The ADD event is also raised for all existing CRDs every time we run CRIDGE. Therefore, on ADD we invoke the create-or-update methods for resource specifications and resources.
- On an ADD event:
  - The CRD is transformed to the OSL Kubernetes domain model: method kubernetesClientResource.KubernetesCRD2OpensliceCRD.
  - Then the OSL Kubernetes domain model is:
    - transformed to a Resource Specification and stored to the catalog (see createOrUpdateResourceSpecByNameCategoryVersion),
    - transformed to a Resource and stored to the catalog (see createOrUpdateResourceByNameCategoryVersion).
  - Conceptually, while a CRD is a new resource located in the Kubernetes cluster, it is also transformed into a Resource Specification (a high-level entity) ready to be reused as an entity in other scenarios. This is the same concept as in Kubernetes, where a CRD is a definition ready to be used for instantiating resources of its kind.
  - Then, for this CRD, a Watcher is added for all resources of this kind (fabric8's GenericKubernetesResource entity).
- When a resource of a certain CRD is added/updated/deleted, the method updateGenericKubernetesResourceInOSLCatalog is called for this object (fabric8's GenericKubernetesResource entity).
  - We examine if the resource has the label org.etsi.osl.resourceId.
    - This label is added by OSOM during service orders to correlate requested K8s resources with resources in the inventory.
  - If the label exists, we update the resource by ID (updateResourceById).
  - Otherwise, a resource is created in the catalog.
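As a minimal sketch of this mechanism, registering an informer for CustomResourceDefinition objects with fabric8 looks roughly like the following; the handler bodies and the resync period are illustrative, not CRIDGE's actual logic.

```
import io.fabric8.kubernetes.api.model.apiextensions.v1.CustomResourceDefinition;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;

public class CrdWatcherSketch {
    public static void main(String[] args) {
        // Client stays open for the lifetime of the informer
        KubernetesClient client = new KubernetesClientBuilder().build();
        // inform() creates and starts a SharedIndexInformer with the given resync period (ms)
        client.apiextensions().v1().customResourceDefinitions()
              .inform(new ResourceEventHandler<CustomResourceDefinition>() {
                  @Override
                  public void onAdd(CustomResourceDefinition crd) {
                      // Also raised for every existing CRD on start-up,
                      // hence the create-or-update semantics described above
                      System.out.println("CRD added: " + crd.getMetadata().getName());
                  }
                  @Override
                  public void onUpdate(CustomResourceDefinition oldCrd, CustomResourceDefinition newCrd) {
                      System.out.println("CRD updated: " + newCrd.getMetadata().getName());
                  }
                  @Override
                  public void onDelete(CustomResourceDefinition crd, boolean unknownFinalState) {
                      System.out.println("CRD deleted: " + crd.getMetadata().getName());
                  }
              }, 30_000L);
    }
}
```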
# Deployment of a new CR based on a CRD


<img src="img05.png" width=1024px>

- A message arrives requesting the deployment of a CR.
  - The call examines whether this CRIDGE service can handle the request (based on context and masterURL).
- The message carries headers and a crspec in JSON.
- The crspec is unmarshaled as a GenericKubernetesResource (see the sketch after this list).
- Headers are in the format org.etsi.osl.*.
- These headers are injected as labels (see later in orchestration).
- A namespace is created for this resource.
- Watchers are created for this namespace, e.g. for new Secrets, ConfigMaps, etc., so that they can be made available back as resources to the OSL inventory (note: only Secrets are watched for now).
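The following is a rough sketch of that flow. The method shape and header handling are illustrative and do not reproduce CRIDGE's actual signatures; only the fabric8 calls (Serialization.unmarshal, namespaces(), genericKubernetesResources()) are real API.

```
import java.util.HashMap;
import java.util.Map;

import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.api.model.NamespaceBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.utils.Serialization;

public class DeployCrSketch {

    // Hypothetical method shape; CRIDGE's real message consumer differs
    public void deployCr(KubernetesClient client, Map<String, String> headers, String crSpecJson) {
        // 1. Unmarshal the crspec JSON payload into a generic resource
        GenericKubernetesResource cr = Serialization.unmarshal(crSpecJson, GenericKubernetesResource.class);

        // 2. Inject the org.etsi.osl.* headers as labels, so that watchers can later
        //    correlate the CR with the resource in the OSL inventory
        if (cr.getMetadata().getLabels() == null) {
            cr.getMetadata().setLabels(new HashMap<>());
        }
        headers.forEach((key, value) -> {
            if (key.startsWith("org.etsi.osl.")) {
                cr.getMetadata().getLabels().put(key, value);
            }
        });

        // 3. Create the namespace for this deployment (e.g. named after the Service Order)
        String ns = headers.get("org.etsi.osl.namespace");
        client.namespaces().resource(
                new NamespaceBuilder().withNewMetadata().withName(ns).endMetadata().build())
              .create();

        // 4. Apply the CR in that namespace
        client.genericKubernetesResources(cr.getApiVersion(), cr.getKind())
              .inNamespace(ns)
              .resource(cr)
              .create();
    }
}
```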
# Expose CRD-based service specs for users in a catalog

- A CRD is exposed as a Resource Specification.
  - Many attributes of the CRD are translated into characteristics.
  - The following specific characteristics are added:
    - _CR_SPEC: Used for providing the JSON Custom Resource description to apply.
    - _CR_CHECK_FIELD: Used for providing the field that needs to be checked for the resource status.
    - _CR_CHECKVAL_STANDBY: Used for providing the equivalent value from the resource to signal the standby status.
    - _CR_CHECKVAL_ALARM: Used for providing the equivalent value from the resource to signal the alarm status.
    - _CR_CHECKVAL_AVAILABLE: Used for providing the equivalent value from the resource to signal the available status.
    - _CR_CHECKVAL_RESERVED: Used for providing the equivalent value from the resource to signal the reserved status.
    - _CR_CHECKVAL_UNKNOWN: Used for providing the equivalent value from the resource to signal the unknown status.
    - _CR_CHECKVAL_SUSPENDED: Used for providing the equivalent value from the resource to signal the suspended status.
- Create a new Service Specification and use this Resource Specification in its Resource Specification Relationships.
  - The Service Specification is then saved as a ResourceFacingServiceSpecification.
- At this stage we can give values to the characteristics:
  - _CR_SPEC, _CR_CHECK_FIELD, _CR_CHECKVAL_STANDBY, _CR_CHECKVAL_ALARM, _CR_CHECKVAL_AVAILABLE, _CR_CHECKVAL_RESERVED, _CR_CHECKVAL_UNKNOWN, _CR_CHECKVAL_SUSPENDED
- We can create LCM rules.
- Create a new Service Specification and use the Resource Facing Service Specification in its Service Specification Relationships.
  - The Service Specification is then saved as a CustomerFacingServiceSpecification.
- At this stage we can give values to the characteristics:
  - _CR_SPEC, _CR_CHECK_FIELD, _CR_CHECKVAL_STANDBY, _CR_CHECKVAL_ALARM, _CR_CHECKVAL_AVAILABLE, _CR_CHECKVAL_RESERVED, _CR_CHECKVAL_UNKNOWN, _CR_CHECKVAL_SUSPENDED
- We can create LCM rules for this new Service Specification.
- Expose configurable values for users to configure during a service order.

<img src="img06.png" width=1024px>


# Service Orchestration and CRDs/CRs

OSOM checks the presence of the attribute _CR_SPEC at the RFS to make a request for a CR deployment.

- _CR_SPEC is a JSON or YAML string that is used for the request.
  - It is similar to what one would do with e.g. a kubectl apply.
  - There are tools to translate a YAML file to JSON (see the sketch below).
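For example, a YAML manifest can be converted to the JSON string carried in _CR_SPEC with fabric8's Serialization utility; any YAML/JSON library would do equally well, and the manifest below is the illustrative Myresource example from earlier.

```
import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.client.utils.Serialization;

public class YamlToJsonSketch {
    public static void main(String[] args) {
        String yaml = """
                apiVersion: example.com/v1
                kind: Myresource
                metadata:
                  name: example-resource-1
                """;
        // Parse the YAML into a generic resource, then serialize it back out as JSON
        GenericKubernetesResource cr = Serialization.unmarshal(yaml, GenericKubernetesResource.class);
        System.out.println(Serialization.asJson(cr));
    }
}
```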
> LCM rules can be used to change attributes of this YAML/JSON file before sending it for orchestration.

However, the following issue needs to be solved: **How do we map the CR lifecycle that is defined in the CRD to the TMF resource lifecycle?**
 - For this, we introduced the following characteristics: _CR_CHECK_FIELD, _CR_CHECKVAL_STANDBY, _CR_CHECKVAL_ALARM, _CR_CHECKVAL_AVAILABLE, _CR_CHECKVAL_RESERVED, _CR_CHECKVAL_UNKNOWN, _CR_CHECKVAL_SUSPENDED

OSOM sends to CRIDGE a message with the following information:

- currentContextCluster: current context of the cluster
- clusterMasterURL: current master URL of the cluster
- org.etsi.osl.serviceId: the related service id that the created resource references
- org.etsi.osl.resourceId: the related resource id that the created CR will wrap and reference
- org.etsi.osl.prefixName: a short prefix (default is cr) that we need to add in various places; for example, names in K8s cannot start with a number
- org.etsi.osl.serviceOrderId: the related service order id of this deployment request
- org.etsi.osl.namespace: the requested namespace name
- org.etsi.osl.statusCheckFieldName: the name of the field that needs to be monitored in order to track the status of the service and translate it to the TMF resource status (RESERVED, AVAILABLE, etc.)
- org.etsi.osl.statusCheckValueStandby: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state STANDBY (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
- org.etsi.osl.statusCheckValueAlarm: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state ALARM (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
- org.etsi.osl.statusCheckValueAvailable: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state AVAILABLE (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
- org.etsi.osl.statusCheckValueReserved: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state RESERVED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
- org.etsi.osl.statusCheckValueUnknown: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state UNKNOWN (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)
- org.etsi.osl.statusCheckValueSuspended: the CR-specific value (of the CheckFieldName) that needs to be mapped to the TMF resource state SUSPENDED (see org.etsi.osl.tmf.ri639.model.ResourceStatusType)

- Parameters:
  - aService: reference to the service that the resource and the CR belong to
  - resourceCR: reference to the equivalent resource in the TMF repo of the target CR (one-to-one mapping)
  - orderId: related service order ID
  - startDate: start date of the deployment (not used currently)
  - endDate: end date of the deployment (not used currently)
  - _CR_SPEC: the spec that is sent to CRIDGE (in JSON)
- Returns:
  - a string response from CRIDGE. It returns "OK" if everything is ok, or "SEE OTHER" if there are multiple CRIDGEs and some other CRIDGE will handle the request for the equivalent cluster. Any other response is handled as an error.


- CRIDGE receives the message and creates the necessary CR according to the labels.
- It monitors the created resource(s) in the namespace (see the sequence diagram in the previous images).
- It monitors the CR and tries to figure out and map its status to the TMF status according to the provided org.etsi.osl.statusCheck* labels.
- It sends to the message bus the current resource for creation or update in the TMF service inventory.

---
# Example CRD and its controller

To illustrate the powerful concept of Kubernetes operators and how they can be utilized to offer a service through OpenSlice,
let's provide an example of a "Calculator as a Service."
This example will demonstrate the flexibility and capabilities of Kubernetes operators in managing custom resources
and automating operational tasks.

---
## Offering "Calculator as a Service" through OpenSlice

- We have a service that can accept two integers and an action (SUM, SUB, etc.) and returns a result.
- We would like to offer it as a Service through OpenSlice.
- So, when a user orders it with some initial parameters, OpenSlice will create it and return the result.
- Also, while the service is active, we can do further calculations, until we destroy it.


- Assume the following simple CRD of a calculator model, accepting two params and an action (spec section) and returning a result (status section).
- The controller (the calculator code) can be implemented in any language and is installed in a Kubernetes cluster.

```

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mycalculators.examples.osl.etsi.org
spec:
  group: examples.osl.etsi.org
  names:
    kind: MyCalculator
    plural: mycalculators
    singular: mycalculator
  scope: Namespaced
  versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                parama:
                  type: integer
                paramb:
                  type: integer
                action:
                  type: string
              type: object
            status:
              properties:
                result:
                  type: integer
                status:
                  type: string
              type: object
          type: object
      served: true
      storage: true
      subresources:
        status: {}
```


Request to the cluster (through e.g. kubectl apply):

```
apiVersion: examples.osl.etsi.org/v1alpha1
kind: MyCalculator
metadata:
  name: mycalculator.examples.osl.etsi.org
spec:
  parama: 170
  paramb: 180
  action: 'SUM'

```

Response:

```
apiVersion: examples.osl.etsi.org/v1alpha1
kind: MyCalculator
metadata:
  creationTimestamp: '2023-12-05T12:26:07Z'

<snip>

status:
  result: 350
  status: CALCULATED
spec:
  action: SUM
  parama: 170
  paramb: 180

```
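The same request can also be issued programmatically. A minimal sketch with the fabric8 client follows; the file name and namespace are illustrative.

```
import java.io.FileInputStream;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ApplyCalculatorSketch {
    public static void main(String[] args) throws Exception {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Load the MyCalculator manifest shown above and create it, like `kubectl apply -f`
            client.genericKubernetesResources("examples.osl.etsi.org/v1alpha1", "MyCalculator")
                  .inNamespace("default")                         // illustrative namespace
                  .load(new FileInputStream("mycalculator.yaml")) // illustrative file name
                  .create();
        }
    }
}
```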
To perform this through OpenSlice as a Service Specification ready to be ordered, we need to do the following:

---
### CRD is saved automatically as a Resource Specification

As soon as the CRD is deployed in the cluster (e.g. by your admin via kubectl or via any installation from the internet), it is automatically transformed and made available in the OpenSlice catalogs as a Resource Specification.
- See also the fully qualified name of the resource specification.
- The resource specification name is effectively unique, so you can install the CRD in many clusters around the internet. Each CRD on each cluster will appear here.


<img src="img07.png" >


<img src="img08.png" width=1024px>

---
### Create a ResourceFacingServiceSpecification


<img src="img09.png" width=1024px>
<img src="img10.png" width=1024px>




### Creation of CRD-related characteristics

- We now need to adjust some characteristics of this CRD-based Resource Specification.
- OpenSlice automatically translated the CRD spec into a flat list of characteristics. So the "spec" section from the original YAML, for example, is now unfolded into: spec, spec.parama, spec.paramb, etc. The same holds for the "status" object.
- We need to make OpenSlice aware of when the service will be active:
  - We go to the characteristic _CR_CHECK_FIELD and define that the field that shows the status of the service is the characteristic "status.status" (a text field).
  - Then we go to _CR_CHECKVAL_AVAILABLE and define the value CALCULATED, which signals the following: when the characteristic "status.status" has the value "CALCULATED", OpenSlice will mark the underlying service as "ACTIVE".
- We also need to define the YAML that OpenSlice will use to create the new resource in the Kubernetes cluster.
  - We insert the YAML in the characteristic _CR_SPEC.

The _CR_SPEC is:


```
apiVersion: examples.osl.etsi.org/v1alpha1
kind: MyCalculator
metadata:
  name: mycalculator.examples.osl.etsi.org
spec:
  parama: 170
  paramb: 180
  action: 'SUM'

```

<img src="img11.png" width=1024px>


> However, these values are fixed. How do we allow a user to pass parameters through OpenSlice?

### Pass parameters through OpenSlice

We need to create LCM rules in the ResourceFacingServiceSpecification.
- The goal of the rules is to allow the user to pass parameters to the actual resource towards the cluster.
- We will create one rule that passes the parameters just before creating the service (PRE_PROVISION phase).
- We will create one rule that passes the parameters while the service is active (SUPERVISION phase).
- The rules will be the same.

<img src="img12.png" width=1024px>

A single rule looks like the following:
<img src="img13.png" width=1024px>

- We need to change the _CR_SPEC characteristic.
- We use a block that changes a string according to variables.
- Note that the input string is made of the YAML lines, as shown in the sketch after this list:
  - parama and paramb have a %d placeholder (they accept integers), while action has a %s (it accepts a string);
  - the variables that will replace the %d, %d and %s are given as a list:
    - the first %d will be replaced with the value from characteristic spec.parama,
    - the second %d will be replaced with the value from characteristic spec.paramb,
    - the %s will be replaced with the value from characteristic spec.action.
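In plain Java terms, the rule effectively performs a string substitution like the following sketch, where the values 170, 180 and "SUM" stand in for the user-supplied characteristic values:

```
public class RuleSubstitutionSketch {
    public static void main(String[] args) {
        // The _CR_SPEC template with placeholders, as configured in the rule
        String template = """
                apiVersion: examples.osl.etsi.org/v1alpha1
                kind: MyCalculator
                metadata:
                  name: mycalculator.examples.osl.etsi.org
                spec:
                  parama: %d
                  paramb: %d
                  action: '%s'
                """;
        // spec.parama, spec.paramb and spec.action come from the service order
        String crSpec = String.format(template, 170, 180, "SUM");
        System.out.println(crSpec);
    }
}
```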
### Create a CustomerFacingServiceSpecification

We will now expose it to our users by creating a CustomerFacingServiceSpecification.

<img src="img14.png" width=1024px>

Expose it then to a catalogue for orders through the Service Categories and Service Catalogs.


<img src="img15.png">


### Order the Service

When a user orders the service, it will look like this:

<img src="img16.png" width=1024px>



- After the Service Order we have two services in the Service Inventory, one CFS and one RFS. Both have references to the values.
- CRIDGE updates the Resource in the Resource Inventory, and OSOM updates the Services in the Service Inventory.
- The actual resources are running in the Kubernetes cluster managed by OpenSlice.
- The result is in the characteristic status.result (see the sketch below for reading it directly from the cluster).

<img src="img17.png" width=800px>

<img src="img18.png" width=1024px>
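For reference, this is roughly how the result could be read back from the CR's status with the fabric8 client; CRIDGE does this generically through its watchers, and the namespace here is illustrative.

```
import java.util.Map;

import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ReadResultSketch {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            GenericKubernetesResource cr = client
                    .genericKubernetesResources("examples.osl.etsi.org/v1alpha1", "MyCalculator")
                    .inNamespace("default") // illustrative namespace
                    .withName("mycalculator.examples.osl.etsi.org")
                    .get();

            // The status block of a generic resource lives in its additional properties
            @SuppressWarnings("unchecked")
            Map<String, Object> status = (Map<String, Object>) cr.getAdditionalProperties().get("status");
            if (status != null) {
                System.out.println("result = " + status.get("result")); // e.g. 350
            }
        }
    }
}
```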
### Modify the running service

The user can modify the service:

<img src="img19.png" width=1024px>

- After a while the update is applied to the cluster; the controller will pick up the resource update and patch the resource.
- CRIDGE updates the Resource in the Resource Inventory and OSOM updates the Services in the Service Inventory.


<img src="img20.png" width=1024px>
\ No newline at end of file
diff --git a/docs/img01.png b/docs/img01.png
new file mode 100644
index 0000000000000000000000000000000000000000..d9f6f73dd21ff94572fd411c68e31a2915426f70
Binary files /dev/null and b/docs/img01.png differ
diff --git a/docs/img02.png b/docs/img02.png
new file mode 100644
index 0000000000000000000000000000000000000000..fb546ad61cce1ce5c9cc593507f816ef64c7bcc5
Binary files /dev/null and b/docs/img02.png differ
diff --git a/docs/img03.png b/docs/img03.png
new file mode 100644
index 0000000000000000000000000000000000000000..79d0d2a208ca3c63489264a87d4a155d44d12cc4
Binary files /dev/null and b/docs/img03.png differ
diff --git a/docs/img04.png b/docs/img04.png
new file mode 100644
index 0000000000000000000000000000000000000000..233831d1bb986c92432fdaff602db44fc2c0df48
Binary files /dev/null and b/docs/img04.png differ
diff --git a/docs/img05.png b/docs/img05.png
new file mode 100644
index 0000000000000000000000000000000000000000..3b05561502893694d3d72d12cbbfc154786da853
Binary files /dev/null and b/docs/img05.png differ
diff --git a/docs/img06.png b/docs/img06.png
new file mode 100644
index 0000000000000000000000000000000000000000..ef8d3879cbb887e5ce686a246b981da14cb6e64c
Binary files /dev/null and b/docs/img06.png differ
diff --git a/docs/img07.png b/docs/img07.png
new file mode 100644
index 0000000000000000000000000000000000000000..6990bf18122d41d5c3eee345270c4c18b5b0a1dc
Binary files /dev/null and b/docs/img07.png differ
diff --git a/docs/img08.png b/docs/img08.png
new file mode 100644
index 0000000000000000000000000000000000000000..fdb27bf11096f00458a89586ae5efc2cb8e162b8
Binary files /dev/null and b/docs/img08.png differ
diff --git a/docs/img09.png b/docs/img09.png
new file mode 100644
index 0000000000000000000000000000000000000000..a6b963879355580e242b02b10e007e1434fdceb3
Binary files /dev/null and b/docs/img09.png differ
diff --git a/docs/img10.png b/docs/img10.png
new file mode 100644
index 0000000000000000000000000000000000000000..73d0ef7c9aaa5e56098c5fca4420fd6c50b8cd2a
Binary files /dev/null and b/docs/img10.png differ
diff --git a/docs/img11.png b/docs/img11.png
new file mode 100644
index 0000000000000000000000000000000000000000..af0be10ddb844714a802343c3cef7d1231049355
Binary files /dev/null and b/docs/img11.png differ
diff --git a/docs/img12.png b/docs/img12.png
new file mode 100644
index 0000000000000000000000000000000000000000..47b924e5c384702fec7c911f194b45be001fe814
Binary files /dev/null and b/docs/img12.png differ
diff --git a/docs/img13.png b/docs/img13.png
new file mode 100644
index 0000000000000000000000000000000000000000..75f1d429ccd072b3c5a1a215165c5ab5e34d5d32
Binary files /dev/null and b/docs/img13.png differ
diff --git a/docs/img14.png b/docs/img14.png
new file mode 100644
index 0000000000000000000000000000000000000000..570933dcd05ec67463254237a008515e098699eb
Binary files /dev/null and b/docs/img14.png differ
diff --git a/docs/img15.png b/docs/img15.png
new file mode 100644
index 0000000000000000000000000000000000000000..4e35c0818e4daae0f500ccb0bab4181a5310c4d6
Binary files /dev/null and b/docs/img15.png differ
diff --git a/docs/img16.png b/docs/img16.png
new file mode 100644
index 0000000000000000000000000000000000000000..83a167d66f2a4846b1f0f1364c503712f8a42011
Binary files /dev/null and b/docs/img16.png differ
diff --git a/docs/img17.png b/docs/img17.png
new file mode 100644
index 0000000000000000000000000000000000000000..52943f7a131fa0f247140cc93ff2cb2bc3a9c84f
Binary files /dev/null and b/docs/img17.png differ
diff --git a/docs/img18.png b/docs/img18.png
new file mode 100644
index 0000000000000000000000000000000000000000..5dfc8b9a5f7afdfcc63df9ac3c62e2d5f367d28f
Binary files /dev/null and b/docs/img18.png differ
diff --git a/docs/img19.png b/docs/img19.png
new file mode 100644
index 0000000000000000000000000000000000000000..39c9d78b7b1b43d72b732b35a7b93a550626e78c
Binary files /dev/null and b/docs/img19.png differ
diff --git a/docs/img20.png b/docs/img20.png
new file mode 100644
index 0000000000000000000000000000000000000000..b7f4f6ad25699833493cde4cde23269bfb8c84d0
Binary files /dev/null and b/docs/img20.png differ