diff --git a/doc/deploymentK8s.md b/doc/deploymentK8s.md
index c99ebc4aa393197178dd9b9e4e5cc0dcd813c40c..b7e06d7b8474cc2de33a1a30a3c17f4c2894b62d 100644
--- a/doc/deploymentK8s.md
+++ b/doc/deploymentK8s.md
@@ -4,7 +4,7 @@
 
 ## Requirements
 
-### Hardware requirements:
+### Hardware requirements
 
 | **Minimum Hardware Requirements** | **Recommended Hardware Requirements** |
 | --------------------------------- | ------------------------------------ |
@@ -12,18 +12,41 @@
 | 8 GB RAM | 16 GB RAM |
 | 30 GB storage | 50 GB storage |
 
-### Software Requirements:
+### Software Requirements
 
 * **git:** For cloning the project repository.
 * **Kubernetes:** A running cluster where OpenSlice will be deployed.
    * **Disclaimer:** The current manual setup of Persistent Volumes using `hostPath` is designed to operate with **only a single worker node**. This setup will not support data persistence if a pod is rescheduled to another node.
 * **Helm:** For managing the deployment of OpenSlice.
 * **Ingress Controller:** Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. You must have an Ingress controller to satisfy an Ingress.
-    * An Nginx ingress controller is required, which can be installed using [this guide](https://kubernetes.github.io/ingress-nginx/deploy/).
-    * If you use another type of ingress controller, you'll need to modify `[repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml` to conform to your ingress controller's requirements.
+    * **Nginx Ingress Controller (Kubernetes Community Edition):** The ingress resource is configured to use an Nginx type ingress controller.
+        * If you need to expose the message bus service (Artemis), which communicates over the TCP protocol, you must use version **>= 1.9.13** of the Nginx Ingress Controller (a prerequisite for [managing multiple Kubernetes clusters](#management-of-multiple-kubernetes-clusters)), as this version or higher includes the functionality required to handle TCP services. If you do not need to expose Artemis, earlier versions may suffice, depending on your configuration.
+        * To install or upgrade to the required version, run the following command:
+
+          ```bash
+          helm upgrade nginx-ingress ingress-nginx/ingress-nginx --namespace ingress \
+            --set tcp.61616="<openslice-namespace>/<openslice-helm-release-name>-artemis:61616"
+          ```
+
+          Replace `<openslice-namespace>` and `<openslice-helm-release-name>` with the namespace and Helm release name of your OpenSlice deployment.
+        * More details regarding the Nginx Ingress Controller (Kubernetes Community Edition) can be found [here](https://kubernetes.github.io/ingress-nginx/deploy/).
+    * **Other Ingress Controller:** For non-Nginx ingress controllers, modify `[repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml` to meet your controller's requirements.
+
+### Exposure
+
+#### Option 1 - Load balancer
+
 * **Network Load Balancer:** Required for exposing the service (e.g., GCP, AWS, Azure, MetalLB).
 * **Domain/IP Address:** Necessary for accessing the application. This should be configured in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `rooturl`.
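+
+For example, assuming the load balancer exposes OpenSlice at a public domain (the domain below is purely illustrative), `values.yaml` would contain:
+
+```
+rooturl: http://openslice.example.org
+```
+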
+#### Option 2 - Ingress
+
+* **Ingress Controller with NodePort:** You can expose the application using the NodePort of the Ingress Controller's service.
+* **IP Address and Port:** Use the IP address of the **master node** and the assigned NodePort to access the application. This should be configured in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `rooturl`.
+
+For example:
+```
+rooturl: http://<master-node-ip>:<nodeport>
+```
+
 ### Additional Configuration
 
 * **Storage Class:** In a production environment, specify your `storageClass` in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `storageClass`. If not defined, PVs will be created and managed manually.
@@ -65,7 +88,7 @@ We recommend:
 
 * develop branch for an experience with the latest features (for develop branch installation, it is strongly advisable that you may as well follow the [develop documentation](https://osl.etsi.org/documentation/develop/deployment/))
 
-## Configure Helm Chart Services
+## Configure Helm Chart
 
 When deploying OpenSlice with Helm, service configurations are handled through the `values.yaml` file. This file allows you to define all necessary configurations for your deployment, including database credentials, service URLs, and logging levels. Below are examples of how to configure your services in Helm based on your provided values.
 
@@ -108,30 +131,107 @@ oscreds:
 
 #### 3. CRIDGE Configuration
 
-If you want to create and manage Kubernetes Custom Resources (CRs), you will have to provide:
+To create and manage Kubernetes Custom Resources (CRs), you have to install and configure the CRIDGE component.
+
+For CRIDGE to work properly, you need to provide a **cluster-wide scope kubeconfig** file (typically located in the `/home/{user}/.kube` directory of the Kubernetes Cluster's host). This kubeconfig file allows CRIDGE to communicate with your Kubernetes cluster.
+
+There are two ways to install CRIDGE:
+
+##### 3.1 **Bundled CRIDGE deployment with the OpenSlice Helm chart (same cluster environment)**
+
+By default, the OpenSlice Helm chart deploys CRIDGE as part of the bundle. To configure CRIDGE, there are three different ways to provide the kubeconfig file during deployment:
+
+1. **Manual Copy to Helm Files Directory**:
+
+   - Copy the kubeconfig file to the following directory:
+     `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge`.
+   - The deployment process will automatically copy the file into the `/root/.kube` directory of the CRIDGE container.
+   - **Note:** This method expects the kubeconfig file to be named exactly `kubeconfig.yaml` in the specified directory.
+
+2. **Passing the Kubeconfig File Using Helm (`--set-file`)**:
+
+   - If you do not wish to manually copy the file, you can pass it directly using the `--set-file` option during the final [deployment process](#deploy-the-helm-chart):
+
+     ```bash
+     --set-file cridge.kubeconfig.raw=path/to/kubeconfig.yaml
+     ```
+
+   - This method reads the specified kubeconfig file and mounts it into the CRIDGE container during deployment.
+
+3. **Passing a Base64-Encoded Kubeconfig Using Helm (`--set`)**:
+
+   - Alternatively, you can pass the kubeconfig as a base64-encoded string using the `--set` option during the final [deployment process](#deploy-the-helm-chart):
+
+     ```bash
+     --set cridge.kubeconfig.base64="$(base64 path/to/kubeconfig.yaml)"
+     ```
+
+   - This method encodes the kubeconfig content and passes it directly to the CRIDGE container.
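+
+For illustration, a complete bundled installation command using the `--set-file` method might look as follows (the release name `myopenslice` and the `openslice` namespace are assumptions; substitute the values from your own deployment process):
+
+```bash
+helm install myopenslice . --namespace openslice \
+  --set-file cridge.kubeconfig.raw=path/to/kubeconfig.yaml
+```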
+
+> **Note:** Regardless of the method you choose, if you're using a non-standard kubeconfig file name, make sure to adjust the references or rename the file as needed.
 
-- a cluster-wide scope kubeconf file (typically located at `/home/{user}/.kube` directory of the Kubernetes Cluster's host)
+##### 3.2 **Standalone CRIDGE deployment**
 
-You will have to copy the kubeconf file to the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge` directory, *prior to the deployment*.
+There are cases where a separate deployment of CRIDGE, apart from the bundled OpenSlice deployment, may be needed. These cases include:
 
-By default, the deployment process copies the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge/config` file into the `/root/.kube` directory of the CRIDGE container.
+- management of a remote cluster, different from the one where OpenSlice is installed
+- more control over the component (e.g., multiple component instances / clusters)
 
-> **The above configuration works for the default kubeconf file names. It explicitly expects a file named `config` within the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge` directory. If you are working with custom kubeconf file names, you will have to rename them.**
+In this case, you first have to prevent CRIDGE from being deployed with the rest of OpenSlice. To do so, set the `cridge.enabled` flag to `false` in the `values.yaml` of the OpenSlice Helm chart.
 
-OpenSlice also offers management support of *multiple Kubernetes Clusters* simultaneously. For this, you will have to:
+```yaml
+cridge:
+  enabled: false
+```
+
+Next, clone the CRIDGE project from GitLab, which also includes the respective standalone Helm chart.
+
+```bash
+git clone https://labs.etsi.org/rep/osl/code/org.etsi.osl.cridge.git
+cd org.etsi.osl.cridge/helm/cridge/
+```
+
+Similarly to the bundled deployment, there are three different ways to provide the kubeconfig file:
+
+1. **Manual Copy to Helm Files Directory**:
+   - Copy the kubeconfig file to the following directory:
+     `org.etsi.osl.cridge/helm/cridge/files/org.etsi.osl.cridge`.
+   - The deployment process will automatically copy the file into the `/root/.kube` directory of the CRIDGE container.
+   - **Note:** This method expects the kubeconfig file to be named exactly `kubeconfig.yaml` in the specified directory.
+
+2. **Passing the Kubeconfig File Using Helm (`--set-file`)**:
+   - If you do not wish to manually copy the file, you can pass it directly during the Helm installation using the `--set-file` option:
+     ```bash
+     helm install cridge-release . --set-file kubeconfig.raw=path/to/kubeconfig.yaml
+     ```
+   - This method reads the specified kubeconfig file and mounts it into the CRIDGE container during deployment.
+
+3. **Passing a Base64-Encoded Kubeconfig Using Helm (`--set`)**:
+   - Alternatively, you can pass the kubeconfig as a base64-encoded string:
+     ```bash
+     helm install cridge-release . --set kubeconfig.base64="$(base64 path/to/kubeconfig.yaml)"
+     ```
+   - This method encodes the kubeconfig content and passes it directly to the CRIDGE container.
+
+> **Note:** Regardless of the method you choose, if you're using a non-standard kubeconfig file name, make sure to adjust the references or rename the file as needed.
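+
+As a quick sanity check after a standalone installation (using the `cridge-release` name from the examples above), you can verify that the release deployed and that the CRIDGE pod is running:
+
+```bash
+helm status cridge-release   # the release should be reported as deployed
+kubectl get pods             # the CRIDGE pod should reach the Running state
+```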
 
-- add all the respective kubeconf files into the `org.etsi.osl.main/compose/kubedir` directory.
-- create a copy of the `cridge.yaml` and `cridge-config.yaml` in `\org.etsi.osl.main\kubernetes\helm\openslice\templates` directory for every Cluster. *Mind the need for different naming*.
-- update every `cridge-config.yaml` file to get the appropriate kubeconf file for every Cluster.
+> **Important Note:** If you are deploying CRIDGE in the same cluster and namespace as OpenSlice, no additional configuration is required for the message bus broker URL, and OpenSlice communicates with CRIDGE directly. However, if CRIDGE is installed in a **separate Kubernetes cluster** from the one hosting OpenSlice, it is important to configure the `values.yaml` file of the CRIDGE Helm chart to point to the correct message bus broker URL. Please see [Nginx Ingress Controller (Kubernetes Community Edition) configuration](#software-requirements) on how to properly expose the message bus in such a scenario.
 
-Below you may find an indicative example that only references the affected fields of each cridge-config.yaml file:
+In the `values.yaml` of the CRIDGE Helm chart, you must set `oscreds.activemq.brokerUrl` to point to the IP address of the ingress controller in the OpenSlice cluster, as shown below:
 
 ```yaml
-data:
-  config: |-
-    {{- .Files.Get "files/org.etsi.osl.cridge/config-clusterX" | nindent 4 }}
+oscreds:
+  activemq:
+    brokerUrl: "tcp://<openslice-rootURL>:61616?jms.watchTopicAdvisories=false"
 ```
 
+##### Management of multiple Kubernetes Clusters
+
+OpenSlice also supports the management of *multiple Kubernetes Clusters* simultaneously.
+
+For this, you will have to replicate the steps in [Standalone CRIDGE deployment](#32-standalone-cridge-deployment) for every Cluster. Each CRIDGE instance will be in charge of managing one Kubernetes Cluster.
+
+
 #### 4. External Services Configuration
 
 For configuring optional external services like Bugzilla and CentralLog, specify their URLs and credentials in the `values.yaml` file:
 
@@ -181,6 +281,15 @@ mysql:
   storage: "10Gi"
 
+#### 8. Configuring TCP Forwarding for Artemis
+
+To expose the message bus service (Artemis) via the ingress controller, it's essential to configure TCP traffic forwarding. Artemis listens on port `61616`, and this traffic needs to be directed to the Artemis service within your Kubernetes cluster.
+
+In the [Ingress Controller Setup](#software-requirements) section, you already configured the Nginx ingress controller to handle this TCP forwarding. By setting the rule for port `61616`, traffic arriving at the ingress is forwarded to the Artemis service defined in your Helm release.
+
+This setup ensures that the message bus service is accessible externally via the ingress controller, completing the necessary configuration for Artemis.
+
+
 ### Configure Web UI
 
 In folder `kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js` you must make a copy of `config.js.default` file and rename it to `config.js`.
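+
+For instance, from the repository root, the copy step described above is simply (a minimal sketch of the command sequence):
+
+```bash
+cd kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js
+cp config.js.default config.js
+```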