diff --git a/doc/deploymentK8s.md b/doc/deploymentK8s.md
index c99ebc4aa393197178dd9b9e4e5cc0dcd813c40c..176d36babd1f7ce8e420fe4adc3dab6eb37a20ee 100644
--- a/doc/deploymentK8s.md
+++ b/doc/deploymentK8s.md
@@ -19,8 +19,16 @@
 
 * **Disclaimer:** The current manual setup of Persistent Volumes using `hostPath` is designed to operate with **only a single worker node**. This setup will not support data persistence if a pod is rescheduled to another node.
 * **Helm:** For managing the deployment of OpenSlice.
 * **Ingress Controller:** Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. You must have an Ingress controller to satisfy an Ingress.
-    * An Nginx ingress controller is required, which can be installed using [this guide](https://kubernetes.github.io/ingress-nginx/deploy/).
-    * If you use another type of ingress controller, you'll need to modify `[repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml` to conform to your ingress controller's requirements.
+    * **Nginx Ingress Controller (Kubernetes Community Edition):** Version **>= 1.9.13** is required to expose the message bus service (Artemis), which communicates using the TCP protocol.
+        * To install or upgrade to the required version, run the following command:
+        ```bash
+        helm upgrade --install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress \
+          --set tcp.61616="<openslice-namespace>/<helm-release-name>-artemis:61616"
+        ```
+        Replace `<helm-release-name>` with the name of your OpenSlice Helm release and `<openslice-namespace>` with the namespace in which OpenSlice is deployed.
+    * More details regarding the Nginx Ingress Controller (Kubernetes Community Edition) can be found [here](https://kubernetes.github.io/ingress-nginx/deploy/).
+    * If using another ingress controller, you'll need to modify `[repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml` to conform to your ingress controller's requirements.
+* **Network Load Balancer:** Required for exposing the service (e.g., GCP, AWS, Azure, MetalLB).
 * **Domain/IP Address:** Necessary for accessing the application. This should be configured in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `rooturl`.
 
@@ -114,9 +122,9 @@ If you want to create and manage Kubernetes Custom Resources (CRs), you will hav
 
 You will have to copy the kubeconf file to the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge` directory, *prior to the deployment*.
 
-By default, the deployment process copies the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge/config` file into the `/root/.kube` directory of the CRIDGE container.
+By default, the deployment process copies the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge/kubeconfig.yaml` file into the `/root/.kube` directory of the CRIDGE container.
 
-> **The above configuration works for the default kubeconf file names. It explicitly expects a file named `config` within the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge` directory. If you are working with custom kubeconf file names, you will have to rename them.**
+> **The above configuration works for the default kubeconf file names. It explicitly expects a file named `kubeconfig.yaml` within the `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge` directory. If you are working with custom kubeconf file names, you will have to rename them.**
 
 OpenSlice also offers management support of *multiple Kubernetes Clusters* simultaneously. For this, you will have to:
 
@@ -132,6 +140,16 @@ data:
   {{- .Files.Get "files/org.etsi.osl.cridge/config-clusterX" | nindent 4 }}
 ```
 
+If you are deploying CRIDGE in the same cluster as OpenSlice, no additional configuration is required for the message bus broker URL. However, if CRIDGE is installed in a **separate Kubernetes cluster** from the one hosting OpenSlice, it is important to configure the `values.yaml` file for the CRIDGE Helm chart to point to the correct message bus broker URL.
+
+In the `values.yaml` of the CRIDGE Helm chart, you must set `oscreds.activemq.brokerUrl` to point to the IP address of the ingress controller in the OpenSlice cluster, as shown below:
+
+```yaml
+oscreds:
+  activemq:
+    brokerUrl: "tcp://<INGRESS_CONTROLLER_IP>:61616?jms.watchTopicAdvisories=false"
+```
+
 #### 4. External Services Configuration
 
 For configuring optional external services like Bugzilla and CentralLog, specify their URLs and credentials in the `values.yaml` file:
@@ -181,6 +199,15 @@ mysql:
     storage: "10Gi"
 ```
 
+#### 8. Configuring TCP Forwarding for Artemis
+
+To expose the message bus service (Artemis) via the ingress controller, it is essential to configure TCP traffic forwarding. Artemis listens on port `61616`, and this traffic needs to be directed to the Artemis service within your Kubernetes cluster.
+
+In the [Ingress Controller Setup](#software-requirements) section, you already configured the Nginx ingress controller to handle this TCP forwarding. With the rule for port `61616` in place, traffic arriving at the ingress is forwarded to the Artemis service defined in your Helm release.
+
+This setup ensures that the message bus service is accessible externally via the ingress controller, completing the necessary configuration for Artemis.
+
+
 ### Configure Web UI
 
 In folder `kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js` you must make a copy of `config.js.default` file and rename it to `config.js`.
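For the "Configure Web UI" step in the final hunk, the copy/rename amounts to a single command. This is only an illustrative sketch, assuming it is run from the repository root:

```bash
# Copy the default Web UI configuration template to the name the portal expects
cd kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js
cp config.js.default config.js
```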
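As a quick sanity check of the Artemis exposure introduced by the new "Configuring TCP Forwarding for Artemis" section, the following sketch can be used once the patched instructions have been applied. It assumes the ingress-nginx release was installed in the `ingress` namespace (as in the `helm upgrade` command above) and that the controller Service has an external IP; `<INGRESS_CONTROLLER_IP>` is a hypothetical placeholder for that address:

```bash
# Confirm the ingress-nginx controller Service now lists a TCP port mapping for 61616
kubectl get svc --namespace ingress

# From outside the cluster, verify raw TCP connectivity to the Artemis port
nc -vz <INGRESS_CONTROLLER_IP> 61616
```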