
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

207 files changed: +7423 −1565

Files

+3 −0
@@ -27,6 +27,7 @@ include:
   - local: '/src/context/.gitlab-ci.yml'
   - local: '/src/device/.gitlab-ci.yml'
   - local: '/src/service/.gitlab-ci.yml'
+  - local: '/src/qkd_app/.gitlab-ci.yml'
   - local: '/src/dbscanserving/.gitlab-ci.yml'
   - local: '/src/opticalattackmitigator/.gitlab-ci.yml'
   - local: '/src/opticalattackdetector/.gitlab-ci.yml'
@@ -54,6 +55,8 @@ include:
   - local: '/src/qos_profile/.gitlab-ci.yml'
   - local: '/src/vnt_manager/.gitlab-ci.yml'
   - local: '/src/e2e_orchestrator/.gitlab-ci.yml'
+  - local: '/src/ztp_server/.gitlab-ci.yml'
+  - local: '/src/osm_client/.gitlab-ci.yml'

   # This should be last one: end-to-end integration tests
   - local: '/src/tests/.gitlab-ci.yml'
@@ -13,14 +13,17 @@
 # limitations under the License.

 coverage==6.3
-grpcio==1.47.*
+# grpcio==1.47.*
+grpcio==1.60.0
 grpcio-health-checking==1.47.*
 grpcio-reflection==1.47.*
-grpcio-tools==1.47.*
+# grpcio-tools==1.47.*
+grpcio-tools==1.60.0
 grpclib==0.4.4
 prettytable==3.5.0
 prometheus-client==0.13.0
-protobuf==3.20.*
+# protobuf==3.20.*
+protobuf==4.21.6
 pytest==6.2.5
 pytest-benchmark==3.4.1
 python-dateutil==2.8.2
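The pins move grpcio/grpcio-tools from 1.47.* to 1.60.0 and protobuf from 3.20.* to 4.21.6; stubs generated with older grpcio-tools must be regenerated so they match the protobuf 4.x runtime. A quick sanity check of the installed runtimes (illustrative sketch, not part of this diff):

```python
import grpc
import google.protobuf

# Expect the versions pinned above; stubs generated against protobuf 3.x
# typically fail at import time under the 4.x runtime.
print('grpcio  :', grpc.__version__)             # expected: 1.60.0
print('protobuf:', google.protobuf.__version__)  # expected: 4.21.6
```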
+22 −2
@@ -151,6 +151,26 @@ export NATS_DEPLOY_MODE=${NATS_DEPLOY_MODE:-"single"}
 export NATS_REDEPLOY=${NATS_REDEPLOY:-""}


+# ----- Apache Kafka -----------------------------------------------------------
+
+# If not already set, set the namespace where Kafka will be deployed.
+export KFK_NAMESPACE=${KFK_NAMESPACE:-"kafka"}
+
+# If not already set, set the external port Kafka Client interface will be exposed to.
+export KFK_EXT_PORT_CLIENT=${KFK_EXT_PORT_CLIENT:-"9092"}
+
+# If not already set, set Kafka installation mode. Accepted values are: 'single'.
+# - If KFK_DEPLOY_MODE is "single", Kafka is deployed in single node mode. It is convenient for
+#   development and testing purposes and should fit in a VM. IT SHOULD NOT BE USED IN PRODUCTION ENVIRONMENTS.
+# NOTE: Production mode is still not supported. Will be provided in the future.
+export KFK_DEPLOY_MODE=${KFK_DEPLOY_MODE:-"single"}
+
+# If not already set, disable flag for re-deploying Kafka from scratch.
+# WARNING: ACTIVATING THIS FLAG IMPLIES LOOSING THE MESSAGE BROKER INFORMATION!
+# If KFK_REDEPLOY is "YES", the message broker will be dropped while checking/deploying Kafka.
+export KFK_REDEPLOY=${KFK_REDEPLOY:-""}
+
+
 # ----- QuestDB ----------------------------------------------------------------

 # If not already set, set the namespace where QuestDB will be deployed.
@@ -215,8 +235,8 @@ export GRAF_EXT_PORT_HTTP=${GRAF_EXT_PORT_HTTP:-"3000"}
 # Deploy Apache Kafka
 ./deploy/kafka.sh

-#Deploy Monitoring (Prometheus, Mimir, Grafana)
-./deploy/monitoring.sh
+#Deploy Monitoring (Prometheus Gateway, Prometheus)
+# ./deploy/monitoring.sh

 # Expose Dashboard
 ./deploy/expose_dashboard.sh
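All of the new exports use the bash `${VAR:-default}` idiom: the caller's environment wins, otherwise the default applies. The Python components read the same variables with equivalent semantics; a minimal sketch (variable names taken from the diff above, everything else illustrative):

```python
import os

# Same "use default when unset" semantics as ${VAR:-default} in the script above.
KFK_NAMESPACE   = os.environ.get('KFK_NAMESPACE', 'kafka')
KFK_DEPLOY_MODE = os.environ.get('KFK_DEPLOY_MODE', 'single')
KFK_REDEPLOY    = os.environ.get('KFK_REDEPLOY', '')   # "YES" drops the message broker state

if KFK_DEPLOY_MODE != 'single':
    # mirrors the guard added at the bottom of deploy/kafka.sh (see the kafka.sh diff below)
    raise ValueError('Unsupported KFK_DEPLOY_MODE: {:s}'.format(KFK_DEPLOY_MODE))
```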
+8 −1
@@ -66,7 +66,7 @@ CRDB_MANIFESTS_PATH="manifests/cockroachdb"

 # Create a tmp folder for files modified during the deployment
 TMP_MANIFESTS_FOLDER="${TMP_FOLDER}/${CRDB_NAMESPACE}/manifests"
-mkdir -p $TMP_MANIFESTS_FOLDER
+mkdir -p ${TMP_MANIFESTS_FOLDER}

 function crdb_deploy_single() {
     echo "CockroachDB Namespace"
@@ -105,6 +105,13 @@ function crdb_deploy_single() {
             sleep 1
         done
         kubectl wait --namespace ${CRDB_NAMESPACE} --for=condition=Ready --timeout=300s pod/cockroachdb-0
+
+        # Wait for CockroachDB to notify "start_node_query"
+        echo ">>> CockroachDB pods created. Waiting CockroachDB server to be started..."
+        while ! kubectl --namespace ${CRDB_NAMESPACE} logs pod/cockroachdb-0 -c cockroachdb 2>&1 | grep -q 'start_node_query'; do
+            printf "%c" "."
+            sleep 1
+        done
     fi
     echo

+87 −53
@@ -13,17 +13,26 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.


 ########################################################################################################################
 # Read deployment settings
 ########################################################################################################################

-# If not already set, set the namespace where Apache Kafka will be deployed.
+# If not already set, set the namespace where Kafka will be deployed.
 export KFK_NAMESPACE=${KFK_NAMESPACE:-"kafka"}

-# If not already set, set the port Apache Kafka server will be exposed to.
-export KFK_SERVER_PORT=${KFK_SERVER_PORT:-"9092"}
+# If not already set, set the external port Kafka client interface will be exposed to.
+export KFK_EXT_PORT_CLIENT=${KFK_EXT_PORT_CLIENT:-"9092"}
+
+# If not already set, set Kafka installation mode. Accepted values are: 'single'.
+# - If KFK_DEPLOY_MODE is "single", Kafka is deployed in single node mode. It is convenient for
+#   development and testing purposes and should fit in a VM. IT SHOULD NOT BE USED IN PRODUCTION ENVIRONMENTS.
+# NOTE: Production mode is still not supported. Will be provided in the future.
+export KFK_DEPLOY_MODE=${KFK_DEPLOY_MODE:-"single"}

-# If not already set, if flag is YES, Apache Kafka will be redeployed and all topics will be lost.
+# If not already set, disable flag for re-deploying Kafka from scratch.
+# WARNING: ACTIVATING THIS FLAG IMPLIES LOOSING THE MESSAGE BROKER INFORMATION!
+# If KFK_REDEPLOY is "YES", the message broker will be dropped while checking/deploying Kafka.
 export KFK_REDEPLOY=${KFK_REDEPLOY:-""}

@@ -34,58 +43,83 @@ export KFK_REDEPLOY=${KFK_REDEPLOY:-""}
 # Constants
 TMP_FOLDER="./tmp"
 KFK_MANIFESTS_PATH="manifests/kafka"
-KFK_ZOOKEEPER_MANIFEST="01-zookeeper.yaml"
-KFK_MANIFEST="02-kafka.yaml"

 # Create a tmp folder for files modified during the deployment
 TMP_MANIFESTS_FOLDER="${TMP_FOLDER}/${KFK_NAMESPACE}/manifests"
 mkdir -p ${TMP_MANIFESTS_FOLDER}

-function kafka_deploy() {
-    # copy zookeeper and kafka manifest files to temporary manifest location
-    cp "${KFK_MANIFESTS_PATH}/${KFK_ZOOKEEPER_MANIFEST}" "${TMP_MANIFESTS_FOLDER}/${KFK_ZOOKEEPER_MANIFEST}"
-    cp "${KFK_MANIFESTS_PATH}/${KFK_MANIFEST}" "${TMP_MANIFESTS_FOLDER}/${KFK_MANIFEST}"
-
-    # echo "Apache Kafka Namespace"
-    echo "Delete Apache Kafka Namespace"
-    kubectl delete namespace ${KFK_NAMESPACE} --ignore-not-found
-
-    echo "Create Apache Kafka Namespace"
+function kfk_deploy_single() {
+    echo "Kafka Namespace"
+    echo ">>> Create Kafka Namespace (if missing)"
     kubectl create namespace ${KFK_NAMESPACE}
+    echo

-    # echo ">>> Deplying Apache Kafka Zookeeper"
-    # Kafka zookeeper service should be deployed before the kafka service
-    kubectl --namespace ${KFK_NAMESPACE} apply -f "${TMP_MANIFESTS_FOLDER}/${KFK_ZOOKEEPER_MANIFEST}"
-
-    #KFK_ZOOKEEPER_SERVICE="zookeeper-service"    # this command may be replaced with command to extract service name automatically
-    #KFK_ZOOKEEPER_IP=$(kubectl --namespace ${KFK_NAMESPACE} get service ${KFK_ZOOKEEPER_SERVICE} -o 'jsonpath={.spec.clusterIP}')
-
-    # Kafka service should be deployed after the zookeeper service
-    #sed -i "s/<ZOOKEEPER_INTERNAL_IP>/${KFK_ZOOKEEPER_IP}/" "${TMP_MANIFESTS_FOLDER}/$KFK_MANIFEST"
-    sed -i "s/<KAFKA_NAMESPACE>/${KFK_NAMESPACE}/" "${TMP_MANIFESTS_FOLDER}/$KFK_MANIFEST"
-
-    # echo ">>> Deploying Apache Kafka Broker"
-    kubectl --namespace ${KFK_NAMESPACE} apply -f "${TMP_MANIFESTS_FOLDER}/$KFK_MANIFEST"
-
-    # echo ">>> Verifing Apache Kafka deployment"
-    sleep 5
-    # KFK_PODS_STATUS=$(kubectl --namespace ${KFK_NAMESPACE} get pods)
-    # if echo "$KFK_PODS_STATUS" | grep -qEv 'STATUS|Running'; then
-    #     echo "Deployment Error: \n $KFK_PODS_STATUS"
-    # else
-    #     echo "$KFK_PODS_STATUS"
-    # fi
-}
+    echo "Kafka (single-mode)"
+    echo ">>> Checking if Kafka is deployed..."
+    if kubectl get --namespace ${KFK_NAMESPACE} statefulset/kafka &> /dev/null; then
+        echo ">>> Kafka is present; skipping step."
+    else
+        echo ">>> Deploy Kafka"
+        cp "${KFK_MANIFESTS_PATH}/single-node.yaml" "${TMP_MANIFESTS_FOLDER}/kfk_single_node.yaml"
+        #sed -i "s/<KFK_NAMESPACE>/${KFK_NAMESPACE}/" "${TMP_MANIFESTS_FOLDER}/kfk_single_node.yaml"
+        kubectl --namespace ${KFK_NAMESPACE} apply -f "${TMP_MANIFESTS_FOLDER}/kfk_single_node.yaml"
+
+        echo ">>> Waiting Kafka statefulset to be created..."
+        while ! kubectl get --namespace ${KFK_NAMESPACE} statefulset/kafka &> /dev/null; do
+            printf "%c" "."
+            sleep 1
+        done
+
+        # Wait for statefulset condition "Available=True" does not work
+        # Wait for statefulset condition "jsonpath='{.status.readyReplicas}'=3" throws error:
+        #   "error: readyReplicas is not found"
+        # Workaround: Check the pods are ready
+        #echo ">>> Kafka statefulset created. Waiting for readiness condition..."
+        #kubectl wait --namespace  ${KFK_NAMESPACE} --for=condition=Available=True --timeout=300s statefulset/kafka
+        #kubectl wait --namespace ${KGK_NAMESPACE} --for=jsonpath='{.status.readyReplicas}'=3 --timeout=300s \
+        #    statefulset/kafka
+        echo ">>> Kafka statefulset created. Waiting Kafka pods to be created..."
+        while ! kubectl get --namespace ${KFK_NAMESPACE} pod/kafka-0 &> /dev/null; do
+            printf "%c" "."
+            sleep 1
+        done
+        kubectl wait --namespace ${KFK_NAMESPACE} --for=condition=Ready --timeout=300s pod/kafka-0
+
+        # Wait for Kafka to notify "Kafka Server started"
+        echo ">>> Kafka pods created. Waiting Kafka Server to be started..."
+        while ! kubectl --namespace ${KFK_NAMESPACE} logs pod/kafka-0 -c kafka 2>&1 | grep -q 'Kafka Server started'; do
+            printf "%c" "."
+            sleep 1
+        done
+    fi
+    echo
+}
+
+function kfk_undeploy_single() {
+    echo "Kafka (single-mode)"
+    echo ">>> Checking if Kafka is deployed..."
+    if kubectl get --namespace ${KFK_NAMESPACE} statefulset/kafka &> /dev/null; then
+        echo ">>> Undeploy Kafka"
+        kubectl delete --namespace ${KFK_NAMESPACE} -f "${TMP_MANIFESTS_FOLDER}/kfk_single_node.yaml" --ignore-not-found
+    else
+        echo ">>> Kafka is not present; skipping step."
+    fi
+    echo
+
+    echo "Kafka Namespace"
+    echo ">>> Delete Kafka Namespace (if exists)"
+    echo "NOTE: this step might take few minutes to complete!"
+    kubectl delete namespace ${KFK_NAMESPACE} --ignore-not-found
+    echo
+}

-echo ">>> Apache Kafka"
-echo "Checking if Apache Kafka is deployed ... "
-if [ "$KFK_REDEPLOY" == "YES" ]; then
-    echo "Redeploying kafka namespace"
-    kafka_deploy
-elif kubectl get namespace "${KFK_NAMESPACE}" &> /dev/null; then
-    echo "Apache Kafka already present; skipping step."
-else
-    echo "Kafka namespace doesn't exists. Deploying kafka namespace"
-    kafka_deploy
-fi
-echo
+if [ "$KFK_DEPLOY_MODE" == "single" ]; then
+    if [ "$KFK_REDEPLOY" == "YES" ]; then
+        kfk_undeploy_single
+    fi
+
+    kfk_deploy_single
+else
+    echo "Unsupported value: KFK_DEPLOY_MODE=$KFK_DEPLOY_MODE"
+fi
@@ -14,6 +14,8 @@
 # limitations under the License.

 set -euo pipefail
+: "${KUBECONFIG:=/var/snap/microk8s/current/credentials/client.config}"
+

 # -----------------------------------------------------------
 # Global namespace for all deployments
@@ -28,7 +30,7 @@ RELEASE_NAME_PROM="mon-prometheus"
 CHART_REPO_NAME_PROM="prometheus-community"
 CHART_REPO_URL_PROM="https://prometheus-community.github.io/helm-charts"
 CHART_NAME_PROM="prometheus"
-VALUES_FILE_PROM="$VALUES_FILE_PATH/prometheus_values.yaml"
+VALUES_FILE_PROM="$VALUES_FILE_PATH/prometheus_values.yaml"       # Values file for Prometheus and gateway

 # -----------------------------------------------------------
 # Mimir Configuration
@@ -76,7 +78,8 @@ deploy_chart() {
     echo "Installing/Upgrading $release_name using custom values from $values_file..."
     helm upgrade --install "$release_name" "$chart_repo_name/$chart_name" \
       --namespace "$namespace" \
-      --values "$values_file"
+      --values "$values_file" \
+      --kubeconfig "$KUBECONFIG"
   else
     echo "Installing/Upgrading $release_name with default chart values..."
    helm upgrade --install "$release_name" "$chart_repo_name/$chart_name" \
+2 −28
@@ -51,12 +51,6 @@ export TFS_SKIP_BUILD=${TFS_SKIP_BUILD:-""}
 # If not already set, set the namespace where CockroackDB will be deployed.
 export CRDB_NAMESPACE=${CRDB_NAMESPACE:-"crdb"}

-# If not already set, set the external port CockroackDB Postgre SQL interface will be exposed to.
-export CRDB_EXT_PORT_SQL=${CRDB_EXT_PORT_SQL:-"26257"}
-
-# If not already set, set the external port CockroackDB HTTP Mgmt GUI interface will be exposed to.
-export CRDB_EXT_PORT_HTTP=${CRDB_EXT_PORT_HTTP:-"8081"}
-
 # If not already set, set the database username to be used by Context.
 export CRDB_USERNAME=${CRDB_USERNAME:-"tfs"}

@@ -69,27 +63,12 @@ export CRDB_PASSWORD=${CRDB_PASSWORD:-"tfs123"}
 # If not already set, set the namespace where NATS will be deployed.
 export NATS_NAMESPACE=${NATS_NAMESPACE:-"nats"}

-# If not already set, set the external port NATS Client interface will be exposed to.
-export NATS_EXT_PORT_CLIENT=${NATS_EXT_PORT_CLIENT:-"4222"}
-
-# If not already set, set the external port NATS HTTP Mgmt GUI interface will be exposed to.
-export NATS_EXT_PORT_HTTP=${NATS_EXT_PORT_HTTP:-"8222"}
-

 # ----- QuestDB ----------------------------------------------------------------

 # If not already set, set the namespace where QuestDB will be deployed.
 export QDB_NAMESPACE=${QDB_NAMESPACE:-"qdb"}

-# If not already set, set the external port QuestDB Postgre SQL interface will be exposed to.
-export QDB_EXT_PORT_SQL=${QDB_EXT_PORT_SQL:-"8812"}
-
-# If not already set, set the external port QuestDB Influx Line Protocol interface will be exposed to.
-export QDB_EXT_PORT_ILP=${QDB_EXT_PORT_ILP:-"9009"}
-
-# If not already set, set the external port QuestDB HTTP Mgmt GUI interface will be exposed to.
-export QDB_EXT_PORT_HTTP=${QDB_EXT_PORT_HTTP:-"9000"}
-
 # If not already set, set the database username to be used for QuestDB.
 export QDB_USERNAME=${QDB_USERNAME:-"admin"}

@@ -114,14 +93,9 @@ export GRAF_EXT_PORT_HTTP=${GRAF_EXT_PORT_HTTP:-"3000"}

 # ----- Apache Kafka ------------------------------------------------------

-# If not already set, set the namespace where Apache Kafka will be deployed.
+# If not already set, set the namespace where Kafka will be deployed.
 export KFK_NAMESPACE=${KFK_NAMESPACE:-"kafka"}

-# If not already set, set the port Apache Kafka server will be exposed to.
-export KFK_SERVER_PORT=${KFK_SERVER_PORT:-"9092"}
-
-# If not already set, if flag is YES, Apache Kafka will be redeployed and topic will be lost.
-export KFK_REDEPLOY=${KFK_REDEPLOY:-""}

 ########################################################################################################################
 # Automated steps start here
@@ -154,7 +128,7 @@ kubectl create secret generic crdb-data --namespace ${TFS_K8S_NAMESPACE} --type=
 printf "\n"

 echo ">>> Create Secret with Apache Kafka..."
-KFK_SERVER_PORT=$(kubectl --namespace ${KFK_NAMESPACE} get service kafka-service -o 'jsonpath={.spec.ports[0].port}')
+KFK_SERVER_PORT=$(kubectl --namespace ${KFK_NAMESPACE} get service kafka-public -o 'jsonpath={.spec.ports[0].port}')
 kubectl create secret generic kfk-kpi-data --namespace ${TFS_K8S_NAMESPACE} --type='Opaque' \
     --from-literal=KFK_NAMESPACE=${KFK_NAMESPACE} \
     --from-literal=KFK_SERVER_PORT=${KFK_SERVER_PORT}
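The kfk-kpi-data secret now resolves the client port from the new kafka-public service instead of the kafka-service defined in the deleted 02-kafka.yaml manifest below. The same lookup via the Kubernetes Python client, as a hedged illustration (assumes the `kubernetes` package is installed and a kubeconfig is reachable):

```python
from kubernetes import client, config

config.load_kube_config()
svc = client.CoreV1Api().read_namespaced_service(name='kafka-public', namespace='kafka')
# First port of the service, i.e. the 'clients' listener (9092) of the new manifest.
print(svc.spec.ports[0].port)
```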
@@ -61,7 +61,7 @@ spec:
       containers:
       - name: cockroachdb
         image: cockroachdb/cockroach:latest-v22.2
-        imagePullPolicy: Always
+        imagePullPolicy: IfNotPresent
        args:
        - start-single-node
        ports:
@@ -55,9 +55,15 @@ spec:
           readinessProbe:
             exec:
               command: ["/bin/grpc_health_probe", "-addr=:1010"]
+            initialDelaySeconds: 50   # Context's gunicorn takes 30~40 seconds to bootstrap
+            periodSeconds: 10
+            failureThreshold: 10
           livenessProbe:
             exec:
               command: ["/bin/grpc_health_probe", "-addr=:1010"]
+            initialDelaySeconds: 50   # Context's gunicorn takes 30~40 seconds to bootstrap
+            periodSeconds: 10
+            failureThreshold: 10
           resources:
             requests:
               cpu: 250m
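Both probes exec /bin/grpc_health_probe against gRPC port 1010; the added initialDelaySeconds: 50 accounts for the 30~40 second gunicorn bootstrap noted in the comment, and periodSeconds: 10 with failureThreshold: 10 tolerates roughly another 100 seconds before the pod is restarted. The equivalent check from Python, using the grpcio-health-checking package pinned in the requirements diff above (sketch; assumes a port-forward to the Context pod):

```python
import grpc
from grpc_health.v1 import health_pb2, health_pb2_grpc

# e.g. kubectl port-forward <context-pod> 1010:1010 (pod name is an assumption)
channel = grpc.insecure_channel('localhost:1010')
stub = health_pb2_grpc.HealthStub(channel)
response = stub.Check(health_pb2.HealthCheckRequest(service=''), timeout=5)
print(health_pb2.HealthCheckResponse.ServingStatus.Name(response.status))  # 'SERVING' once bootstrapped
```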

manifests/kafka/01-zookeeper.yaml

deleted 100644 → 0
+0 −53
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
spec:
  type: ClusterIP
  ports:
    - name: zookeeper-port
      port: 2181
      #nodePort: 30181
      #targetPort: 2181
  selector:
    app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - image: wurstmeister/zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181

manifests/kafka/02-kafka.yaml

deleted 100644 → 0
+0 −60
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-broker
  name: kafka-service
spec:
  ports:
  - port: 9092
  selector:
    app: kafka-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka-broker
  name: kafka-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-broker
  template:
    metadata:
      labels:
        app: kafka-broker
    spec:
      hostname: kafka-broker
      containers:
      - env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          #value: <ZOOKEEPER_INTERNAL_IP>:2181
          value: zookeeper-service.<KAFKA_NAMESPACE>.svc.cluster.local:2181
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://kafka-service.<KAFKA_NAMESPACE>.svc.cluster.local:9092
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        name: kafka-broker
        ports:
          - containerPort: 9092
+99 −0
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  name: kafka-public
  labels:
    app.kubernetes.io/component: message-broker
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/component: message-broker
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
  ports:
  - name: clients
    port: 9092
    protocol: TCP
    targetPort: 9092
  - name: control-plane
    port: 9093
    protocol: TCP
    targetPort: 9093
  - name: external
    port: 9094
    protocol: TCP
    targetPort: 9094
---


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: message-broker
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
  serviceName: "kafka-public"
  replicas: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/component: message-broker
        app.kubernetes.io/instance: kafka
        app.kubernetes.io/name: kafka
    spec:
      terminationGracePeriodSeconds: 10
      restartPolicy: Always
      containers:
      - name: kafka
        image: bitnami/kafka:latest
        imagePullPolicy: IfNotPresent
        ports:
        - name: clients
          containerPort: 9092
        - name: control-plane
          containerPort: 9093
        - name: external
          containerPort: 9094
        env:
          - name: KAFKA_CFG_NODE_ID
            value: "1"
          - name: KAFKA_CFG_PROCESS_ROLES
            value: "controller,broker"
          - name: KAFKA_CFG_LISTENERS
            value: "PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094"
          - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
            value: "PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT"
          - name: KAFKA_CFG_ADVERTISED_LISTENERS
            value: "PLAINTEXT://kafka-public.kafka.svc.cluster.local:9092,EXTERNAL://localhost:9094"
          - name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES
            value: "CONTROLLER"
          - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTERS
            value: "1@kafka-0:9093"
        resources:
          requests:
            cpu: "250m"
            memory: 1Gi
          limits:
            cpu: "1"
            memory: 2Gi
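This new manifest replaces the ZooKeeper-based pair deleted above with a single KRaft-mode node (KAFKA_CFG_PROCESS_ROLES "controller,broker", quorum voter 1@kafka-0:9093, no ZooKeeper), advertising an in-cluster route at kafka-public.kafka.svc.cluster.local:9092 and an EXTERNAL route at localhost:9094 (the address the telemetry-gnmi test script below exports as KFK_SERVER_ADDRESS). A smoke test against the external listener (sketch; assumes `pip install kafka-python` and `kubectl --namespace kafka port-forward svc/kafka-public 9094:9094`):

```python
from kafka import KafkaConsumer, KafkaProducer

bootstrap = '127.0.0.1:9094'  # matches EXTERNAL://localhost:9094 advertised above

producer = KafkaProducer(bootstrap_servers=bootstrap)
producer.send('smoke_test', b'hello kafka').get(timeout=10)  # raises if the broker is unreachable
producer.flush()

consumer = KafkaConsumer(
    'smoke_test', bootstrap_servers=bootstrap,
    auto_offset_reset='earliest', consumer_timeout_ms=5000)
for message in consumer:
    print(message.value)  # b'hello kafka'
    break
```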
@@ -41,7 +41,7 @@ spec:
            - name: LOG_LEVEL
              value: "INFO"
            - name: FLASK_ENV
-              value: "production"  # change to "development" if developing
+              value: "production"  # normal value is "production", change to "development" if developing
            - name: IETF_NETWORK_RENDERER
              value: "LIBYANG"
          envFrom:
+20 −14
@@ -20,13 +20,15 @@
 export TFS_REGISTRY_IMAGES="http://localhost:32000/tfs/"

 # Set the list of components, separated by spaces, you want to build images for, and deploy.
-export TFS_COMPONENTS="context device pathcomp service slice nbi webui"
+# export TFS_COMPONENTS="context device pathcomp service slice nbi webui"
+export TFS_COMPONENTS="context device pathcomp service webui"

 # Uncomment to activate Monitoring (old)
 #export TFS_COMPONENTS="${TFS_COMPONENTS} monitoring"

 # Uncomment to activate Monitoring Framework (new)
 #export TFS_COMPONENTS="${TFS_COMPONENTS} kpi_manager kpi_value_writer kpi_value_api telemetry analytics automation"
+export TFS_COMPONENTS="${TFS_COMPONENTS} kpi_manager telemetry"

 # Uncomment to activate QoS Profiles
 #export TFS_COMPONENTS="${TFS_COMPONENTS} qos_profile"
@@ -134,7 +136,7 @@ export CRDB_PASSWORD="tfs123"
 export CRDB_DEPLOY_MODE="single"

 # Disable flag for dropping database, if it exists.
-export CRDB_DROP_DATABASE_IF_EXISTS=""
+export CRDB_DROP_DATABASE_IF_EXISTS="YES"

 # Disable flag for re-deploying CockroachDB from scratch.
 export CRDB_REDEPLOY=""
@@ -159,6 +161,22 @@ export NATS_DEPLOY_MODE="single"
 export NATS_REDEPLOY=""


+# ----- Apache Kafka -----------------------------------------------------------
+
+# Set the namespace where Apache Kafka will be deployed.
+export KFK_NAMESPACE="kafka"
+
+# Set the port Apache Kafka server will be exposed to.
+export KFK_EXT_PORT_CLIENT="9092"
+
+# Set Kafka installation mode to 'single'. This option is convenient for development and testing.
+# See ./deploy/all.sh or ./deploy/kafka.sh for additional details
+export KFK_DEPLOY_MODE="single"
+
+# Disable flag for re-deploying Kafka from scratch.
+export KFK_REDEPLOY=""
+
+
 # ----- QuestDB ----------------------------------------------------------------

 # Set the namespace where QuestDB will be deployed.
@@ -199,15 +217,3 @@ export PROM_EXT_PORT_HTTP="9090"

 # Set the external port Grafana HTTP Dashboards will be exposed to.
 export GRAF_EXT_PORT_HTTP="3000"
-
-# ----- Apache Kafka -----------------------------------------------------------
-
-# Set the namespace where Apache Kafka will be deployed.
-export KFK_NAMESPACE="kafka"
-
-# Set the port Apache Kafka server will be exposed to.
-export KFK_SERVER_PORT="9092"
-
-# Set the flag to YES for redeploying of Apache Kafka
-export KFK_REDEPLOY=""
@@ -19,7 +19,7 @@ PROJECTDIR=`pwd`
 cd $PROJECTDIR/src

 RCFILE=$PROJECTDIR/coverage/.coveragerc
-CRDB_SQL_ADDRESS=$(kubectl --namespace ${CRDB_NAMESPACE} get service cockroachdb-public -o 'jsonpath={.spec.clusterIP}')
-export CRDB_URI="cockroachdb://tfs:tfs123@${CRDB_SQL_ADDRESS}:26257/tfs_kpi_mgmt?sslmode=require"
+# CRDB_SQL_ADDRESS=$(kubectl --namespace ${CRDB_NAMESPACE} get service cockroachdb-public -o 'jsonpath={.spec.clusterIP}')
+# export CRDB_URI="cockroachdb://tfs:tfs123@${CRDB_SQL_ADDRESS}:26257/tfs_kpi_mgmt?sslmode=require"
 python3 -m pytest --log-level=DEBUG --log-cli-level=DEBUG --verbose \
     kpi_value_writer/tests/test_metric_writer_to_prom.py
@@ -21,7 +21,7 @@ docker container prune -f
 docker pull "bitnami/kafka:latest"
 docker buildx build -t "mock_tfs_nbi_dependencies:test" -f ./src/tests/tools/mock_tfs_nbi_dependencies/Dockerfile .
 docker buildx build -t "nbi:latest" -f ./src/nbi/Dockerfile .
-docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+docker image prune --force

 docker network create -d bridge teraflowbridge
@@ -37,13 +37,13 @@ echo
 echo "Build optical attack detector:"
 echo "------------------------------"
 docker build -t "opticalattackdetector:latest" -f ./src/opticalattackdetector/Dockerfile .
-docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+docker image prune --force

 echo
 echo "Build dbscan serving:"
 echo "---------------------"
 docker build -t "dbscanserving:latest" -f ./src/dbscanserving/Dockerfile .
-docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+docker image prune --force

 echo
 echo "Create test environment:"
#!/bin/bash
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# # Cleanup
# docker rm --force qkd-node
# docker network rm --force qkd-node-br

# # Create Docker network
# docker network create --driver bridge --subnet=172.254.250.0/24 --gateway=172.254.250.254 qkd-node-br

# <<<<<<<< HEAD:scripts/run_tests_locally-telemetry-gnmi.sh
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
# RCFILE=$PROJECTDIR/coverage/.coveragerc

export KFK_SERVER_ADDRESS='127.0.0.1:9094'
# CRDB_SQL_ADDRESS=$(kubectl get service cockroachdb-public --namespace crdb -o jsonpath='{.spec.clusterIP}')
# export CRDB_URI="cockroachdb://tfs:tfs123@${CRDB_SQL_ADDRESS}:26257/tfs_telemetry?sslmode=require"
RCFILE=$PROJECTDIR/coverage/.coveragerc

# this is unit test (should be tested with container-lab running)
# python3 -m pytest --log-level=info --log-cli-level=info --verbose \
#     telemetry/backend/tests/gnmi_oc/test_unit_GnmiOpenConfigCollector.py 

# this is integration test (should be tested with container-lab running)
python3 -m pytest --log-level=info --log-cli-level=info --verbose \
    telemetry/backend/tests/gnmi_oc/test_integration_GnmiOCcollector.py # this is integration test
# ========
# # Create QKD Node
# docker run --detach --name qkd-node --network qkd-node-br --ip 172.254.250.101 mock-qkd-node:test

# # Dump QKD Node Docker containers
# docker ps -a

# echo "Bye!"
# >>>>>>>> develop:src/tests/tools/mock_qkd_node/run.sh
@@ -19,6 +19,7 @@ build analytics:
     IMAGE_TAG: 'latest'             # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     # This first build tags the builder resulting image to prevent being removed by dangling image removal command
@@ -30,7 +31,7 @@ build analytics:
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-frontend:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-backend:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build automation:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
     - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build bgpls_speaker:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
     - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -69,19 +69,19 @@ def format_custom_config_rules(config_rules : List[Dict]) -> List[Dict]:
 def format_device_custom_config_rules(device : Dict) -> Dict:
     config_rules = device.get('device_config', {}).get('config_rules', [])
     config_rules = format_custom_config_rules(config_rules)
-    device['device_config']['config_rules'] = config_rules
+    device.setdefault('device_config', {})['config_rules'] = config_rules
     return device

 def format_service_custom_config_rules(service : Dict) -> Dict:
     config_rules = service.get('service_config', {}).get('config_rules', [])
     config_rules = format_custom_config_rules(config_rules)
-    service['service_config']['config_rules'] = config_rules
+    service.setdefault('service_config', {})['config_rules'] = config_rules
     return service

 def format_slice_custom_config_rules(slice_ : Dict) -> Dict:
     config_rules = slice_.get('slice_config', {}).get('config_rules', [])
     config_rules = format_custom_config_rules(config_rules)
-    slice_['slice_config']['config_rules'] = config_rules
+    slice_.setdefault('slice_config', {})['config_rules'] = config_rules
     return slice_

 def split_devices_by_rules(devices : List[Dict]) -> Tuple[List[Dict], List[Dict]]:
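The setdefault change in the hunk above matters when the incoming dict lacks the config container: the preceding `.get(...)` succeeds either way, so with the old direct assignment the failure only surfaced on write. A minimal illustration (hypothetical input):

```python
# Hypothetical device dict without 'device_config':
device = {'device_id': 'R1'}

# Old code: device['device_config']['config_rules'] = []  -> KeyError: 'device_config'
# New code creates the sub-dict on demand:
device.setdefault('device_config', {})['config_rules'] = []
print(device)  # {'device_id': 'R1', 'device_config': {'config_rules': []}}
```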
@@ -138,6 +138,19 @@ def link_type_to_str(link_type : Union[int, str]) -> Optional[str]:
     if isinstance(link_type, str): return LinkTypeEnum.Name(LinkTypeEnum.Value(link_type))
     return None

+LINK_TYPES_NORMAL = {
+    LinkTypeEnum.LINKTYPE_UNKNOWN,
+    LinkTypeEnum.LINKTYPE_COPPER,
+    LinkTypeEnum.LINKTYPE_RADIO,
+    LinkTypeEnum.LINKTYPE_MANAGEMENT,
+}
+LINK_TYPES_OPTICAL = {
+    LinkTypeEnum.LINKTYPE_FIBER,
+}
+LINK_TYPES_VIRTUAL = {
+    LinkTypeEnum.LINKTYPE_VIRTUAL,
+}
+
 def split_links_by_type(links : List[Dict]) -> Dict[str, List[Dict]]:
     typed_links = collections.defaultdict(list)
     for link in links:
@@ -148,11 +161,11 @@ def split_links_by_type(links : List[Dict]) -> Dict[str, List[Dict]]:
             raise Exception(MSG.format(str(link)))

         link_type = LinkTypeEnum.Value(str_link_type)
-        if link_type in {LinkTypeEnum.LINKTYPE_UNKNOWN, LinkTypeEnum.LINKTYPE_COPPER, LinkTypeEnum.LINKTYPE_RADIO, LinkTypeEnum.LINKTYPE_MANAGEMENT}:
+        if link_type in LINK_TYPES_NORMAL:
             typed_links['normal'].append(link)
-        elif link_type in {LinkTypeEnum.LINKTYPE_FIBER}:
+        elif link_type in LINK_TYPES_OPTICAL:
             typed_links['optical'].append(link)
-        elif link_type in {LinkTypeEnum.LINKTYPE_VIRTUAL}:
+        elif link_type in LINK_TYPES_VIRTUAL:
             typed_links['virtual'].append(link)
         else:
             MSG = 'Unsupported LinkType({:s}) in Link({:s})'
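Hoisting the literal sets into module-level LINK_TYPES_* constants avoids rebuilding them on every iteration and gives the three categories a single definition point. A self-contained sketch of the classification (the IntEnum is a stand-in; the real LinkTypeEnum is a generated protobuf enum and its numeric values are not asserted here):

```python
import collections
from enum import IntEnum

class LinkTypeEnum(IntEnum):  # stand-in for the generated protobuf enum
    LINKTYPE_UNKNOWN    = 0
    LINKTYPE_COPPER     = 1
    LINKTYPE_FIBER      = 2
    LINKTYPE_RADIO      = 3
    LINKTYPE_VIRTUAL    = 4
    LINKTYPE_MANAGEMENT = 5

LINK_TYPES_NORMAL  = {LinkTypeEnum.LINKTYPE_UNKNOWN, LinkTypeEnum.LINKTYPE_COPPER,
                      LinkTypeEnum.LINKTYPE_RADIO, LinkTypeEnum.LINKTYPE_MANAGEMENT}
LINK_TYPES_OPTICAL = {LinkTypeEnum.LINKTYPE_FIBER}
LINK_TYPES_VIRTUAL = {LinkTypeEnum.LINKTYPE_VIRTUAL}

typed_links = collections.defaultdict(list)
for link_type in (LinkTypeEnum.LINKTYPE_COPPER, LinkTypeEnum.LINKTYPE_FIBER):
    if link_type in LINK_TYPES_NORMAL:
        typed_links['normal'].append(link_type.name)
    elif link_type in LINK_TYPES_OPTICAL:
        typed_links['optical'].append(link_type.name)
    elif link_type in LINK_TYPES_VIRTUAL:
        typed_links['virtual'].append(link_type.name)
print(dict(typed_links))  # {'normal': ['LINKTYPE_COPPER'], 'optical': ['LINKTYPE_FIBER']}
```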
Original line number Original line Diff line number Diff line
@@ -20,7 +20,14 @@ from common.Settings import get_setting




LOGGER = logging.getLogger(__name__)
LOGGER = logging.getLogger(__name__)
KFK_SERVER_ADDRESS_TEMPLATE = 'kafka-service.{:s}.svc.cluster.local:{:s}'
KFK_SERVER_ADDRESS_TEMPLATE = 'kafka-public.{:s}.svc.cluster.local:{:s}'

KAFKA_TOPIC_NUM_PARTITIONS         = 1
KAFKA_TOPIC_REPLICATION_FACTOR     = 1
#KAFKA_TOPIC_LIST_TIMEOUT           = 5
KAFKA_TOPIC_CREATE_REQUEST_TIMEOUT = 60_000 # ms
KAFKA_TOPIC_CREATE_WAIT_ITERATIONS = 10
KAFKA_TOPIC_CREATE_WAIT_TIME       = 1


KAFKA_TOPIC_NUM_PARTITIONS         = 1
KAFKA_TOPIC_NUM_PARTITIONS         = 1
KAFKA_TOPIC_REPLICATION_FACTOR     = 1
KAFKA_TOPIC_REPLICATION_FACTOR     = 1
@@ -35,8 +42,12 @@ class KafkaConfig(Enum):
    def get_kafka_address() -> str:
        kafka_server_address  = get_setting('KFK_SERVER_ADDRESS', default=None)
        if kafka_server_address is None:
+            try:
                KFK_NAMESPACE = get_setting('KFK_NAMESPACE')
                KFK_PORT      = get_setting('KFK_SERVER_PORT')
+            except Exception:
+                KFK_NAMESPACE = 'kafka'
+                KFK_PORT      = '9092'
            kafka_server_address = KFK_SERVER_ADDRESS_TEMPLATE.format(KFK_NAMESPACE, KFK_PORT)
        return kafka_server_address

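With the try/except fallback, address resolution becomes three-tiered: an explicit KFK_SERVER_ADDRESS setting wins, otherwise KFK_NAMESPACE and KFK_SERVER_PORT are substituted into the template, and if those settings are missing too, the hard-coded defaults 'kafka' and '9092' apply. A minimal sketch of the same resolution order, using os.environ in place of get_setting purely for self-containment:

import os

KFK_SERVER_ADDRESS_TEMPLATE = 'kafka-public.{:s}.svc.cluster.local:{:s}'

def get_kafka_address() -> str:
    kafka_server_address = os.environ.get('KFK_SERVER_ADDRESS')
    if kafka_server_address is None:
        namespace = os.environ.get('KFK_NAMESPACE', 'kafka')   # fallback namespace
        port      = os.environ.get('KFK_SERVER_PORT', '9092')  # fallback port
        kafka_server_address = KFK_SERVER_ADDRESS_TEMPLATE.format(namespace, port)
    return kafka_server_address

print(get_kafka_address())  # e.g. kafka-public.kafka.svc.cluster.local:9092
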
@@ -52,10 +63,10 @@ class KafkaTopic(Enum):
    # TODO: Later to be populated from ENV variable.
    TELEMETRY_REQUEST    = 'topic_telemetry_request'
    TELEMETRY_RESPONSE   = 'topic_telemetry_response'
-    RAW                  = 'topic_raw'
-    LABELED              = 'topic_labeled'
-    VALUE                = 'topic_value'
-    ALARMS               = 'topic_alarms'
+    RAW                  = 'topic_raw'                  # TODO: Update name to telemetry_raw
+    LABELED              = 'topic_labeled'              # TODO: Update name to telemetry_labeled
+    VALUE                = 'topic_value'                # TODO: Update name to telemetry_value
+    ALARMS               = 'topic_alarms'               # TODO: Update name to telemetry_alarms
    ANALYTICS_REQUEST    = 'topic_analytics_request'
    ANALYTICS_RESPONSE   = 'topic_analytics_response'
    VNTMANAGER_REQUEST   = 'topic_vntmanager_request'
@@ -137,7 +148,6 @@ class KafkaTopic(Enum):
            LOGGER.debug('All topics created and available.')
            return True

-# TODO: create all topics after the deployments (Telemetry and Analytics)

if __name__ == '__main__':
    import os
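
The new KAFKA_TOPIC_CREATE_* constants point at a create-then-poll pattern: issue the create request with a bounded timeout, then poll the cluster metadata a few times until the topic becomes visible. A hedged sketch of that pattern, assuming the confluent-kafka AdminClient is the client library in use (the constants mirror the ones defined above; error handling of the returned futures is omitted):

import time
from confluent_kafka.admin import AdminClient, NewTopic

KAFKA_TOPIC_NUM_PARTITIONS         = 1
KAFKA_TOPIC_REPLICATION_FACTOR     = 1
KAFKA_TOPIC_CREATE_REQUEST_TIMEOUT = 60_000  # ms
KAFKA_TOPIC_CREATE_WAIT_ITERATIONS = 10
KAFKA_TOPIC_CREATE_WAIT_TIME       = 1       # seconds

def create_topic_and_wait(admin: AdminClient, topic_name: str) -> bool:
    new_topic = NewTopic(
        topic_name,
        num_partitions=KAFKA_TOPIC_NUM_PARTITIONS,
        replication_factor=KAFKA_TOPIC_REPLICATION_FACTOR,
    )
    # confluent-kafka takes request_timeout in seconds, hence the ms conversion
    admin.create_topics([new_topic], request_timeout=KAFKA_TOPIC_CREATE_REQUEST_TIMEOUT / 1000.0)
    for _ in range(KAFKA_TOPIC_CREATE_WAIT_ITERATIONS):
        if topic_name in admin.list_topics(timeout=5).topics:
            return True
        time.sleep(KAFKA_TOPIC_CREATE_WAIT_TIME)
    return False

# usage sketch: create_topic_and_wait(AdminClient({'bootstrap.servers': 'kafka:9092'}), 'topic_telemetry_request')
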
@@ -19,13 +19,14 @@ build context:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build dbscanserving:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,6 +19,7 @@ build device:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker ps -aq | xargs -r docker rm -f
    - containerlab destroy --all --cleanup || true
@@ -27,7 +28,7 @@ build device:
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -40,30 +41,6 @@ build device:
      - manifests/${IMAGE_NAME}service.yaml
      - .gitlab-ci.yml

-## Start Mock QKD Nodes before unit testing
-#start_mock_nodes:
-#  stage: deploy
-#  script:
-#    - bash src/tests/tools/mock_qkd_nodes/start.sh &
-#    - sleep 10 # wait for nodes to spin up
-#  artifacts:
-#    paths:
-#      - mock_nodes.log
-#  rules:
-#    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
-#    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
-
-## Prepare Scenario (Start NBI, mock services)
-#prepare_scenario:
-#  stage: deploy
-#  script:
-#    - pytest src/tests/qkd/unit/PrepareScenario.py
-#  needs:
-#    - start_mock_nodes
-#  rules:
-#    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
-#    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
-
# Apply unit test to the component
unit_test device:
  variables:
@@ -72,8 +49,6 @@ unit_test device:
  stage: unit_test
  needs:
    - build device
-    #- start_mock_nodes
-    #- prepare_scenario
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - >
@@ -97,6 +72,10 @@ unit_test device:
    - docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary_emulated.py --junitxml=/opt/results/${IMAGE_NAME}_report_emulated.xml"
    - docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary_ietf_actn.py --junitxml=/opt/results/${IMAGE_NAME}_report_ietf_actn.xml"
    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_*.py"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_qkd_compliance.py"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_mock_qkd_node.py"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_qkd_error_handling.py"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_Set_new_configuration.py"
    - docker exec -i $IMAGE_NAME bash -c "coverage report --include='${IMAGE_NAME}/*' --show-missing"
  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
  after_script:
@@ -112,6 +91,7 @@ unit_test device:
      - src/$IMAGE_NAME/Dockerfile
      - src/$IMAGE_NAME/tests/*.py
      - src/$IMAGE_NAME/tests/Dockerfile
+      #- src/tests/tools/mock_qkd_nodes/**
      - manifests/${IMAGE_NAME}service.yaml
      - .gitlab-ci.yml
  artifacts:
@@ -224,7 +224,12 @@ def fetch_node(url: str, resource_key: str, headers: Dict[str, str], auth: Optio
    try:
        r = requests.get(url, timeout=timeout, verify=False, auth=auth, headers=headers)
        r.raise_for_status()
-        result.append((resource_key, r.json().get('qkd_node', {})))
+        data = r.json()
+        data.pop('qkdn_capabilities', None)
+        data.pop('qkd_applications', None)
+        data.pop('qkd_interfaces', None)
+        data.pop('qkd_links', None)
+        result.append((resource_key, data))
    except requests.RequestException as e:
        LOGGER.error(f"Error fetching node from {url}: {e}")
        result.append((resource_key, e))
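
fetch_node now returns the whole RESTCONF document minus the bulky subtrees that are retrieved through their own resource keys. dict.pop(key, None) deletes a key when present and is a no-op otherwise, so partial responses do not raise. A small self-contained illustration (the sample payload is invented):

# Invented qkd_node document, trimmed the same way fetch_node() does
node = {
    'qkdn_id': '00000000-0000-0000-0000-000000000001',
    'qkdn_status': 'UP',
    'qkdn_capabilities': {'max_key_rate': 100},
    'qkd_interfaces': {'qkd_interface': []},
}
for subtree in ('qkdn_capabilities', 'qkd_applications', 'qkd_interfaces', 'qkd_links'):
    node.pop(subtree, None)  # drop when present; silently skip when absent

print(node)  # {'qkdn_id': '00000000-0000-0000-0000-000000000001', 'qkdn_status': 'UP'}
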
@@ -15,10 +15,18 @@


import pytest
import requests
-from tests.tools.mock_qkd_nodes.YangValidator import YangValidator
+from requests.exceptions import HTTPError
+from tests.tools.mock_qkd_node.YangValidator import YangValidator

def test_compliance_with_yang_models():
    validator = YangValidator('etsi-qkd-sdn-node', ['etsi-qkd-node-types'])
+    try:
        response = requests.get('http://127.0.0.1:11111/restconf/data/etsi-qkd-sdn-node:qkd_node')
+        response.raise_for_status()
        data = response.json()
-    assert validator.parse_to_dict(data) is not None
+        assert validator.parse_to_dict(data) is not None, "Data validation failed against YANG model."
+    except HTTPError as e:
+        pytest.fail(f"HTTP error occurred: {e}")
+    except Exception as e:
+        pytest.fail(f"Unexpected error occurred: {e}")
@@ -40,7 +40,7 @@ def test_invalid_operations_on_network_links(qkd_driver):


    try:
        # Attempt to perform an invalid operation (simulate wrong resource key)
-        response = requests.post(f'http://{qkd_driver.address}/invalid_resource', json=invalid_payload)
+        response = requests.post(f'http://{qkd_driver.address}:{qkd_driver.port}/invalid_resource', json=invalid_payload)
        response.raise_for_status()

    except HTTPError as e:
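
The fix adds the missing port to the request URL; without it the call went to port 80 instead of the mock node's listener. A tiny hypothetical helper in the same spirit (not part of the driver API):

def qkd_base_url(address: str, port: int) -> str:
    # Both pieces are required; omitting the port was the bug fixed above.
    return f'http://{address}:{port}'

assert qkd_base_url('127.0.0.1', 11111) == 'http://127.0.0.1:11111'
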
@@ -12,16 +12,35 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-import pytest, requests
+import pytest
+import requests
+import time
+import socket
from unittest.mock import patch
-from device.service.drivers.qkd.QKDDriver import QKDDriver
+from device.service.drivers.qkd.QKDDriver2 import QKDDriver

-MOCK_QKD_ADDRRESS = '127.0.0.1'
+MOCK_QKD_ADDRESS = '127.0.0.1'  # Use localhost to connect to the mock node in the Docker container
MOCK_PORT = 11111

+@pytest.fixture(scope="module")
+def wait_for_mock_node():
+    """
+    Fixture to wait for the mock QKD node to be ready before running tests.
+    """
+    timeout = 30  # seconds
+    start_time = time.time()
+    while True:
+        try:
+            with socket.create_connection((MOCK_QKD_ADDRESS, MOCK_PORT), timeout=1):
+                break  # Success
+        except (socket.timeout, socket.error):
+            if time.time() - start_time > timeout:
+                raise RuntimeError("Timed out waiting for mock QKD node to be ready.")
+            time.sleep(1)
+
@pytest.fixture
-def qkd_driver():
-    return QKDDriver(address=MOCK_QKD_ADDRRESS, port=MOCK_PORT, username='user', password='pass')
+def qkd_driver(wait_for_mock_node):
+    return QKDDriver(address=MOCK_QKD_ADDRESS, port=MOCK_PORT, username='user', password='pass')

# Deliverable Test ID: SBI_Test_01
def test_qkd_driver_connection(qkd_driver):
@@ -29,7 +48,7 @@ def test_qkd_driver_connection(qkd_driver):


# Deliverable Test ID: SBI_Test_01
def test_qkd_driver_invalid_connection():
-    qkd_driver = QKDDriver(address='127.0.0.1', port=12345, username='user', password='pass')  # Use invalid port directly
+    qkd_driver = QKDDriver(address=MOCK_QKD_ADDRESS, port=12345, username='user', password='pass')  # Use invalid port directly
    assert qkd_driver.Connect() is False

# Deliverable Test ID: SBI_Test_10
@@ -38,4 +57,3 @@ def test_qkd_driver_timeout_connection(mock_get, qkd_driver):
    mock_get.side_effect = requests.exceptions.Timeout
    qkd_driver.timeout = 0.001  # Simulate very short timeout
    assert qkd_driver.Connect() is False
@@ -53,7 +53,7 @@ def create_qkd_app(driver, qkdn_id, backing_qkdl_id, client_app_id=None):
        print(f"Sending payload to {driver.address}: {app_payload}")

        # Send POST request to create the application
-        response = requests.post(f'http://{driver.address}/app/create_qkd_app', json=app_payload)
+        response = requests.post(f'http://{driver.address}/qkd_app/create_qkd_app', json=app_payload)

        # Check if the request was successful (HTTP 2xx)
        response.raise_for_status()
@@ -19,6 +19,7 @@ build dlt:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    # This first build tags the builder resulting image to prevent being removed by dangling image removal command
@@ -29,7 +30,7 @@ build dlt:
    - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-gateway:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-connector:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build e2e_orchestrator:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build forecaster:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build interdomain:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build kpi-manager:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build kpi-value-api:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build kpi-value-writer:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -43,20 +43,21 @@ def log_all_methods(request):




# -------- Initial Test ----------------
-def test_validate_kafka_topics():
-    LOGGER.debug(" >>> test_validate_kafka_topics: START <<< ")
-    response = KafkaTopic.create_all_topics()
-    assert isinstance(response, bool)
+# def test_validate_kafka_topics():
+#     LOGGER.debug(" >>> test_validate_kafka_topics: START <<< ")
+#     response = KafkaTopic.create_all_topics()
+#     assert isinstance(response, bool)

# --------------
# NOT FOR GITHUB PIPELINE (Local testing only)
# --------------
# def test_KafkaConsumer(kpi_manager_client):

-#     # kpidescriptor = create_kpi_descriptor_request()
-#     # kpi_manager_client.SetKpiDescriptor(kpidescriptor)
+    # kpidescriptor = create_kpi_descriptor_request()
+    # kpi_manager_client.SetKpiDescriptor(kpidescriptor)

    # kpi_value_writer = KpiValueWriter()
    # kpi_value_writer.KafkaKpiConsumer()
-#     LOGGER.debug(" waiting for timer to finish ")
+    # timer = 300
+    # LOGGER.debug(f" waiting for timer to finish {timer} seconds")
    # time.sleep(300)
@@ -19,13 +19,14 @@ build l3_attackmitigator:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build l3_centralizedattackdetector:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build l3_distributedattackdetector:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build load_generator:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build monitoring:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build nbi:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -73,7 +74,7 @@ unit_test nbi:
    - docker pull "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker pull "$CI_REGISTRY_IMAGE/mock_tfs_nbi_dependencies:test"
    - docker pull "bitnami/kafka:latest"
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
    - >
      docker run --name kafka -d --network=teraflowbridge -p 9092:9092 -p 9093:9093
      --env KAFKA_CFG_NODE_ID=1
@@ -12,16 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-import uuid
-import json
+import json, logging
from flask import request
+from flask.json import jsonify
from flask_restful import Resource
from common.proto.context_pb2 import Empty
from common.proto.qkd_app_pb2 import App, QKDAppTypesEnum
from common.Constants import DEFAULT_CONTEXT_NAME
from context.client.ContextClient import ContextClient
+from nbi.service._tools.HttpStatusCodes import HTTP_OK, HTTP_SERVERERROR
from qkd_app.client.QKDAppClient import QKDAppClient

+LOGGER = logging.getLogger(__name__)
+
class _Resource(Resource):
    def __init__(self) -> None:
        super().__init__()
@@ -30,7 +33,7 @@ class _Resource(Resource):


class Index(_Resource):
    def get(self):
-        return {'hello': 'world'}
+        return {}

class ListDevices(_Resource):
    def get(self):
@@ -79,20 +82,35 @@ class CreateQKDApp(_Resource):
    def post(self):
        app = request.get_json()['app']
        devices = self.context_client.ListDevices(Empty()).devices
-        local_device = None

+        local_qkdn_id = app.get('local_qkdn_id')
+        if local_qkdn_id is None:
+            MSG = 'local_qkdn_id not specified in qkd_app({:s})'
+            msg = MSG.format(str(app))
+            LOGGER.exception(msg)
+            response = jsonify({'error': msg})
+            response.status_code = HTTP_SERVERERROR
+            return response

        # This for-loop won't be necessary if Device ID is guaranteed to be the same as QKDN Id
+        local_device = None
        for device in devices:
            for config_rule in device.device_config.config_rules:
-                if config_rule.custom.resource_key == '__node__':
+                if config_rule.custom.resource_key != '__node__': continue
                value = json.loads(config_rule.custom.resource_value)
-                    qkdn_id = value['qkdn_id']
-                    if app['local_qkdn_id'] == qkdn_id:
+                qkdn_id = value.get('qkdn_id')
+                if qkdn_id is None: continue
+                if local_qkdn_id != qkdn_id: continue
                local_device = device
                break

        if local_device is None:
-            return {"status": "fail"}
+            MSG = 'Unable to find device for local_qkdn_id({:s})'
+            msg = MSG.format(str(local_qkdn_id))
+            LOGGER.exception(msg)
+            response = jsonify({'error': msg})
+            response.status_code = HTTP_SERVERERROR
+            return response

        external_app_src_dst = {
            'app_id': {'context_id': {'context_uuid': {'uuid': DEFAULT_CONTEXT_NAME}}, 'app_uuid': {'uuid': ''}},
@@ -107,5 +125,6 @@ class CreateQKDApp(_Resource):


        self.qkd_app_client.RegisterApp(App(**external_app_src_dst))

-        return {"status": "success"}
+        response = jsonify({'status': 'success'})
+        response.status_code = HTTP_OK
+        return response
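
The handler now reports failures as JSON bodies with explicit HTTP status codes instead of a bare {'status': 'fail'} dictionary, which gives NBI clients something machine-readable to act on. The five-line error pattern repeats twice in post(); a hypothetical helper that captures it (must run inside a Flask request context):

from flask.json import jsonify

def _json_error(msg: str, status_code: int):
    # Mirrors the jsonify/status_code pattern used in CreateQKDApp.post()
    response = jsonify({'error': msg})
    response.status_code = status_code
    return response
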
@@ -19,13 +19,14 @@ build opticalattackdetector:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build opticalattackmanager:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build opticalattackmitigator:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build opticalcontroller:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
+122 −0
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Build, tag, and push the Docker image to the GitLab Docker registry
build osm_client:
  variables:
    IMAGE_NAME: 'osm_client' # name of the microservice
    MOCK_IMAGE_NAME: 'mock_osm_nbi' # name of the mock OSM NBI image
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
    - changes:
      - src/common/**/*.py
      - proto/*.proto
      - src/$IMAGE_NAME/**/*.{py,in,yml}
      - src/$IMAGE_NAME/Dockerfile
      - src/$IMAGE_NAME/tests/*.py
      - manifests/${IMAGE_NAME}service.yaml
      - src/tests/tools/mock_osm_nbi/**/*.{py,in,yml,yaml,yang,sh,json}
      - src/tests/tools/mock_osm_nbi/Dockerfile
      - src/tests/.gitlab-ci.yml
      - .gitlab-ci.yml

# Apply unit test to the component
unit_test osm_client:
  variables:
    IMAGE_NAME: 'osm_client' # name of the microservice
    MOCK_IMAGE_NAME: 'mock_osm_nbi'
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: unit_test
  needs:
    - build osm_client
    - build mock_osm_nbi
  before_script:
    # Do Docker cleanup
    - docker ps --all --quiet | xargs --no-run-if-empty docker stop
    - docker container prune --force
    - docker ps --all --quiet | xargs --no-run-if-empty docker rm --force
    - docker image prune --force
    - docker network prune --force
    - docker volume prune --all --force
    - docker buildx prune --force

    # Login Docker repository
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker pull "$CI_REGISTRY_IMAGE/mock-osm-nbi:test"
    - docker network create -d bridge teraflowbridge
    - >
      docker run --name mock_osm_nbi -d 
      --network=teraflowbridge
      --env LOG_LEVEL=DEBUG
      --env FLASK_ENV=development
      $CI_REGISTRY_IMAGE/mock-osm-nbi:test
    - >
      docker run --name $IMAGE_NAME -d -v "$PWD/src/$IMAGE_NAME/tests:/opt/results" 
      --network=teraflowbridge
      --env LOG_LEVEL=DEBUG
      --env FLASK_ENV=development
      --env OSM_ADDRESS=mock_osm_nbi
      $CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG
    - while ! docker logs $IMAGE_NAME 2>&1 | grep -q 'Running...'; do sleep 1; done
    - docker ps -a
    - docker logs $IMAGE_NAME
    - docker logs mock_osm_nbi
    - docker exec -i $IMAGE_NAME bash -c "coverage run -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary.py --junitxml=/opt/results/${IMAGE_NAME}_report_unitary.xml"
    - docker exec -i $IMAGE_NAME bash -c "coverage report --include='${IMAGE_NAME}/*' --show-missing"
  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
  after_script:
    - docker logs $IMAGE_NAME
    - docker logs mock_osm_nbi

    # Do Docker cleanup
    - docker ps --all --quiet | xargs --no-run-if-empty docker stop
    - docker container prune --force
    - docker ps --all --quiet | xargs --no-run-if-empty docker rm --force
    - docker image prune --force
    - docker network prune --force
    - docker volume prune --all --force
    - docker buildx prune --force

  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
    - changes:
      - src/common/**/*.py
      - proto/*.proto
      - src/$IMAGE_NAME/**/*.{py,in,yml}
      - src/$IMAGE_NAME/Dockerfile
      - src/$IMAGE_NAME/tests/*.py
      - manifests/${IMAGE_NAME}service.yaml
      - src/tests/tools/mock_osm_nbi/**/*.{py,in,yml,yaml,yang,sh,json}
      - src/tests/tools/mock_osm_nbi/Dockerfile
      - src/tests/.gitlab-ci.yml
      - .gitlab-ci.yml
  artifacts:
      when: always
      reports:
        junit: src/$IMAGE_NAME/tests/${IMAGE_NAME}_report_*.xml
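
The coverage: regex above extracts the percentage from the TOTAL row that coverage report prints at the end of the job log. A quick, self-contained sanity check of that pattern; the sample row below is illustrative, not captured from a real run:

import re

# Same pattern as the 'coverage:' key above; GitLab applies it to the job log.
COVERAGE_RE = re.compile(r'TOTAL\s+\d+\s+\d+\s+(\d+%)')

# Illustrative 'coverage report' TOTAL row: name, statements, missed, percent covered.
sample_row = 'TOTAL                         1234    321    74%'

match = COVERAGE_RE.search(sample_row)
assert match is not None
assert match.group(1) == '74%'  # the captured group becomes the job's coverage value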

@@ -16,9 +16,9 @@ FROM python:3.10.16-slim


 # Install dependencies
-RUN apt-get --yes --quiet --quiet update
-RUN apt-get --yes --quiet --quiet install wget g++ git build-essential cmake make git \
-    libpcre2-dev python3-dev python3-pip python3-cffi curl software-properties-common && \
+# Unneeded: build-essential cmake libpcre2-dev python3-dev python3-pip python3-cffi curl software-properties-common libmagic-dev
+RUN apt-get --yes --quiet --quiet update && \
+    apt-get --yes --quiet --quiet install wget g++ git make libmagic1 && \
     rm -rf /var/lib/apt/lists/*

 # Set Python to show logs as they occur
@@ -62,9 +62,11 @@ WORKDIR /var/teraflow/osm_client
 ENV OSM_CLIENT_VERSION=v16.0
 RUN python3 -m pip install -r "https://osm.etsi.org/gitweb/?p=osm/IM.git;a=blob_plain;f=requirements.txt;hb=${OSM_CLIENT_VERSION}"
 RUN python3 -m pip install "git+https://osm.etsi.org/gerrit/osm/IM.git@${OSM_CLIENT_VERSION}#egg=osm-im" --upgrade
+
 # Clone OsmClient code
 RUN git clone https://osm.etsi.org/gerrit/osm/osmclient
 RUN git -C osmclient checkout ${OSM_CLIENT_VERSION}
+
 # Install osmclient using pip
 RUN python3 -m pip install -r osmclient/requirements.txt
 RUN python3 -m pip install ./osmclient

@@ -16,7 +16,10 @@ import grpc, logging
 from common.method_wrappers.Decorator import MetricsPool, safe_and_metered_rpc_method
 from common.tools.grpc.Tools import grpc_message_to_json_string
 from common.proto.context_pb2 import (Empty)
-from common.proto.osm_client_pb2 import CreateRequest, CreateResponse, NsiListResponse, GetRequest, GetResponse, DeleteRequest, DeleteResponse
+from common.proto.osm_client_pb2 import (
+    CreateRequest, CreateResponse, NsiListResponse, GetRequest, GetResponse,
+    DeleteRequest, DeleteResponse
+)
 from common.proto.osm_client_pb2_grpc import OsmServiceServicer
 from osmclient import client
 from osmclient.common.exceptions import ClientException

@@ -53,7 +53,7 @@ def main():
     grpc_service = OsmClientService()
     grpc_service.start()

-    LOGGER.debug('Configured Rules:')
+    LOGGER.info('Running...')

     # Wait for Ctrl+C or termination signal
     while not terminate.wait(timeout=1.0): pass
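
The renamed log line is not cosmetic: 'Running...' is exactly the marker the unit-test job greps out of docker logs before it launches pytest, so this change keeps the readiness probe and the service in sync. For context, the surrounding main() pattern reduces to roughly the sketch below; the signal wiring is an assumption inferred from the terminate event visible above, not a verbatim copy of the component's code.

import logging, signal, threading

LOGGER = logging.getLogger(__name__)
terminate = threading.Event()

def signal_handler(signum, frame):  # pylint: disable=unused-argument
    LOGGER.warning('Terminate signal received')
    terminate.set()

signal.signal(signal.SIGINT,  signal_handler)
signal.signal(signal.SIGTERM, signal_handler)

# ... start the gRPC service here ...

LOGGER.info('Running...')  # the CI wait loop greps for this exact string

# Wait for Ctrl+C or termination signal
while not terminate.wait(timeout=1.0): pass
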
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest, os

from common.Settings import (
    ENVVAR_SUFIX_SERVICE_HOST, ENVVAR_SUFIX_SERVICE_PORT_GRPC,
    get_env_var_name, get_service_port_grpc
)

from common.Constants import ServiceNameEnum
from osm_client.client.OsmClient import OsmClient
from osm_client.service.OsmClientService import OsmClientService

LOCAL_HOST = '127.0.0.1'
GRPC_PORT = 10000 + int(get_service_port_grpc(ServiceNameEnum.OSMCLIENT))

os.environ[get_env_var_name(ServiceNameEnum.OSMCLIENT, ENVVAR_SUFIX_SERVICE_HOST     )] = str(LOCAL_HOST)
os.environ[get_env_var_name(ServiceNameEnum.OSMCLIENT, ENVVAR_SUFIX_SERVICE_PORT_GRPC)] = str(GRPC_PORT)

@pytest.fixture(scope='session')
def osm_client_service(): # pylint: disable=redefined-outer-name
    _service = OsmClientService()
    _service.start()
    yield _service
    _service.stop()

@pytest.fixture(scope='session')
def osm_client(osm_client_service : OsmClientService):    # pylint: disable=redefined-outer-name
    _client = OsmClient()
    yield _client
    _client.close()
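
The two session-scoped fixtures chain: osm_client depends on osm_client_service, so pytest starts the gRPC server once per session, builds the client against it, and tears both down in reverse order. A toy reduction of the same wiring, with placeholder names that are not project code:

import pytest

class FakeServer:
    def start(self): self.running = True
    def stop(self): self.running = False

class FakeClient:
    def __init__(self, server): self.server = server
    def close(self): self.server = None

@pytest.fixture(scope='session')
def server():
    _server = FakeServer()
    _server.start()
    yield _server          # every test in the session shares this instance
    _server.stop()         # runs after the dependent client fixture closes

@pytest.fixture(scope='session')
def client(server):        # depending on 'server' forces start-up ordering
    _client = FakeClient(server)
    yield _client
    _client.close()

def test_wiring(client):
    assert client.server.running
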
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

@@ -12,31 +12,33 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import libyang, os
-from typing import Dict, Optional
+import grpc, pytest
+from osm_client.client.OsmClient import OsmClient
+from common.proto.osm_client_pb2 import CreateRequest, CreateResponse, NsiListResponse
+from common.proto.context_pb2 import Empty

-YANG_DIR = os.path.join(os.path.dirname(__file__), 'yang')

-class YangValidator:
-    def __init__(self, main_module : str, dependency_modules : [str]) -> None:
-        self._yang_context = libyang.Context(YANG_DIR)
+from .PrepareTestScenario import ( # pylint: disable=unused-import
+    # be careful, order of symbols is important here!
+    osm_client_service, osm_client
+)

-        self._yang_module = self._yang_context.load_module(main_module)
-        mods = [self._yang_context.load_module(mod) for mod in dependency_modules] + [self._yang_module]
+def test_OsmClient(
+    osm_client : OsmClient,
+):  # pylint: disable=redefined-outer-name

-        for mod in mods:
-            mod.feature_enable_all()
+    nbi_list_request = Empty()

+    osm_list_reply = osm_client.NsiList(nbi_list_request)
+    assert len(osm_list_reply.id) == 0

+    nbi_create_request = CreateRequest()
+    nbi_create_request.nst_name = "nst1"
+    nbi_create_request.nsi_name = "nsi1"
+    nbi_create_request.account = "account1"

-    def parse_to_dict(self, message : Dict) -> Dict:
-        dnode : Optional[libyang.DNode] = self._yang_module.parse_data_dict(
-            message, validate_present=True, validate=True, strict=True
-        )
-        if dnode is None: raise Exception('Unable to parse Message({:s})'.format(str(message)))
-        message = dnode.print_dict()
-        dnode.free()
-        return message
+    osm_create_reply = osm_client.NsiCreate(nbi_create_request)
+    assert osm_create_reply.succeded == True

-    def destroy(self) -> None:
-        self._yang_context.destroy()
+    osm_list_reply2 = osm_client.NsiList(nbi_list_request)
+    assert len(osm_list_reply2.id) == 1
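
The same round-trip could also be driven without the OsmClient wrapper, straight through the generated stub. In the sketch below, the stub class name OsmServiceStub is inferred from the OsmServiceServicer seen earlier (standard protoc naming) and the address is illustrative; both are assumptions, not confirmed project API.

import grpc

from common.proto.context_pb2 import Empty
from common.proto.osm_client_pb2 import CreateRequest
from common.proto.osm_client_pb2_grpc import OsmServiceStub  # assumed stub name

def drive_osm_client(address: str = '127.0.0.1:20030') -> None:  # illustrative address
    with grpc.insecure_channel(address) as channel:
        stub = OsmServiceStub(channel)
        assert len(stub.NsiList(Empty()).id) == 0
        request = CreateRequest(nst_name='nst1', nsi_name='nsi1', account='account1')
        assert stub.NsiCreate(request).succeded  # field name as defined in the .proto
        assert len(stub.NsiList(Empty()).id) == 1
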
@@ -19,6 +19,7 @@ build pathcomp:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     # This first build tags the builder resulting image to prevent being removed by dangling image removal command
@@ -32,7 +33,7 @@ build pathcomp:
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-backend:${IMAGE_TAG}"
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-frontend:${IMAGE_TAG}"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'

@@ -26,7 +26,7 @@ docker run --name pathcomp-backend -d --network=tfbr --ip 172.28.0.2 pathcomp-b
 docker rm -f pathcomp-frontend pathcomp-backend
 docker network rm tfbr

-docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+docker image prune --force

 docker exec -i pathcomp bash -c "pytest --log-level=INFO --verbose pathcomp/tests/test_unitary.py"
@@ -20,13 +20,16 @@ variables:
# Package application needed to run tests & build the image on next stage
# Package application needed to run tests & build the image on next stage
build policy:
build policy:
  stage: build
  stage: build
  before_script:
    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
  script:
    - export IMAGE_TAG=$(grep -m1 '<version>' ./src/$IMAGE_NAME_POLICY/pom.xml | grep -oP  '(?<=>).*(?=<)')
    - export IMAGE_TAG=$(grep -m1 '<version>' ./src/$IMAGE_NAME_POLICY/pom.xml | grep -oP  '(?<=>).*(?=<)')
    - echo "IMAGE_TAG=${IMAGE_TAG}" >> ${BUILD_ENV_POLICY}
    - echo "IMAGE_TAG=${IMAGE_TAG}" >> ${BUILD_ENV_POLICY}
    - cat ${BUILD_ENV_POLICY}
    - cat ${BUILD_ENV_POLICY}
    - docker buildx build -t "$IMAGE_NAME_POLICY:$IMAGE_TAG" -f ./src/$IMAGE_NAME_POLICY/src/main/docker/Dockerfile.multistage.jvm ./src/$IMAGE_NAME_POLICY/ --target builder
    - docker buildx build -t "$IMAGE_NAME_POLICY:$IMAGE_TAG" -f ./src/$IMAGE_NAME_POLICY/src/main/docker/Dockerfile.multistage.jvm ./src/$IMAGE_NAME_POLICY/ --target builder
  after_script:
  after_script:
    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
    - docker image prune --force
  artifacts:
  artifacts:
    reports:
    reports:
      dotenv: ${BUILD_ENV_POLICY}
      dotenv: ${BUILD_ENV_POLICY}

@@ -41,49 +41,7 @@ import org.etsi.tfs.policy.acl.AclLogActionEnum;
 import org.etsi.tfs.policy.acl.AclMatch;
 import org.etsi.tfs.policy.acl.AclRuleSet;
 import org.etsi.tfs.policy.acl.AclRuleTypeEnum;
-import org.etsi.tfs.policy.context.model.ConfigActionEnum;
-import org.etsi.tfs.policy.context.model.ConfigRule;
-import org.etsi.tfs.policy.context.model.ConfigRuleAcl;
-import org.etsi.tfs.policy.context.model.ConfigRuleCustom;
-import org.etsi.tfs.policy.context.model.ConfigRuleTypeAcl;
-import org.etsi.tfs.policy.context.model.ConfigRuleTypeCustom;
-import org.etsi.tfs.policy.context.model.Constraint;
-import org.etsi.tfs.policy.context.model.ConstraintCustom;
-import org.etsi.tfs.policy.context.model.ConstraintEndPointLocation;
-import org.etsi.tfs.policy.context.model.ConstraintSchedule;
-import org.etsi.tfs.policy.context.model.ConstraintSlaAvailability;
-import org.etsi.tfs.policy.context.model.ConstraintSlaCapacity;
-import org.etsi.tfs.policy.context.model.ConstraintSlaIsolationLevel;
-import org.etsi.tfs.policy.context.model.ConstraintSlaLatency;
-import org.etsi.tfs.policy.context.model.ConstraintTypeCustom;
-import org.etsi.tfs.policy.context.model.ConstraintTypeEndPointLocation;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSchedule;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSlaAvailability;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSlaCapacity;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSlaIsolationLevel;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSlaLatency;
-import org.etsi.tfs.policy.context.model.Device;
-import org.etsi.tfs.policy.context.model.DeviceConfig;
-import org.etsi.tfs.policy.context.model.DeviceDriverEnum;
-import org.etsi.tfs.policy.context.model.DeviceOperationalStatus;
-import org.etsi.tfs.policy.context.model.Empty;
-import org.etsi.tfs.policy.context.model.EndPoint;
-import org.etsi.tfs.policy.context.model.EndPointId;
-import org.etsi.tfs.policy.context.model.Event;
-import org.etsi.tfs.policy.context.model.EventTypeEnum;
-import org.etsi.tfs.policy.context.model.GpsPosition;
-import org.etsi.tfs.policy.context.model.IsolationLevelEnum;
-import org.etsi.tfs.policy.context.model.Location;
-import org.etsi.tfs.policy.context.model.LocationTypeGpsPosition;
-import org.etsi.tfs.policy.context.model.LocationTypeRegion;
-import org.etsi.tfs.policy.context.model.Service;
-import org.etsi.tfs.policy.context.model.ServiceConfig;
-import org.etsi.tfs.policy.context.model.ServiceId;
-import org.etsi.tfs.policy.context.model.ServiceStatus;
-import org.etsi.tfs.policy.context.model.ServiceStatusEnum;
-import org.etsi.tfs.policy.context.model.ServiceTypeEnum;
-import org.etsi.tfs.policy.context.model.SliceId;
-import org.etsi.tfs.policy.context.model.TopologyId;
+import org.etsi.tfs.policy.context.model.*;
 import org.etsi.tfs.policy.kpi_sample_types.model.KpiSampleType;
 import org.etsi.tfs.policy.monitoring.model.AlarmDescriptor;
 import org.etsi.tfs.policy.monitoring.model.AlarmResponse;
@@ -904,6 +862,22 @@ public class Serializer {
             builder.setSlaLatency(serializedConstraintSlaLatency);
         }

+        if (constraintTypeSpecificType instanceof ConstraintExclusions) {
+            final var isPermanent = ((ConstraintExclusions) constraintTypeSpecificType).isPermanent();
+            final var deviceIds = ((ConstraintExclusions) constraintTypeSpecificType).getDeviceIds();
+
+            final var serializedDeviceIds =
+                    deviceIds.stream().map(this::serializeDeviceId).collect(Collectors.toList());
+
+            final var serializedConstraintExclusions =
+                    ContextOuterClass.Constraint_Exclusions.newBuilder()
+                            .setIsPermanent(isPermanent)
+                            .addAllDeviceIds(serializedDeviceIds)
+                            .build();
+
+            builder.setExclusions(serializedConstraintExclusions);
+        }
+
         return builder.build();
     }

@@ -982,6 +956,21 @@ public class Serializer {
                         new ConstraintTypeSlaIsolationLevel(constraintSlaIsolation);

                 return new Constraint(constraintTypeSlaIsolation);
+            case EXCLUSIONS:
+                final var exclusions = serializedConstraint.getExclusions();
+
+                final var isPermanent = exclusions.getIsPermanent();
+                final var serializedDevices = exclusions.getDeviceIdsList();
+
+                final var deviceIds =
+                        serializedDevices.stream().map(this::deserialize).collect(Collectors.toList());
+
+                final var constraintExclusions =
+                        new org.etsi.tfs.policy.context.model.ConstraintExclusions(
+                                isPermanent, deviceIds, new ArrayList<>(), new ArrayList<>());
+                final var constraintTypeExclusions =
+                        new org.etsi.tfs.policy.context.model.ConstraintTypeExclusions(constraintExclusions);
+                return new Constraint(constraintTypeExclusions);
+
             default:
             case CONSTRAINT_NOT_SET:
/*
 * Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.etsi.tfs.policy.context.model;

import java.util.List;

public class ConstraintExclusions {

    private final boolean isPermanent;
    private final List<String> deviceIds;
    private final List<EndPointId> endpointIds;
    private final List<LinkId> linkIds;

    public ConstraintExclusions(
            boolean isPermanent,
            List<String> deviceIds,
            List<EndPointId> endpointIds,
            List<LinkId> linkIds) {
        this.isPermanent = isPermanent;
        this.deviceIds = deviceIds;
        this.endpointIds = endpointIds;
        this.linkIds = linkIds;
    }

    public boolean isPermanent() {
        return isPermanent;
    }

    public List<String> getDeviceIds() {
        return deviceIds;
    }

    public List<EndPointId> getEndpointIds() {
        return endpointIds;
    }

    public List<LinkId> getLinkIds() {
        return linkIds;
    }

    @Override
    public String toString() {
        return "ConstraintExclusions{"
                + "permanent="
                + isPermanent
                + ", deviceIds="
                + deviceIds
                + ", endpointIds="
                + endpointIds
                + ", linkIds="
                + linkIds
                + '}';
    }
}
/*
 * Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.etsi.tfs.policy.context.model;

public class ConstraintTypeExclusions implements ConstraintType<ConstraintExclusions> {
    private final ConstraintExclusions constraintExclusions;

    public ConstraintTypeExclusions(ConstraintExclusions constraintExclusions) {
        this.constraintExclusions = constraintExclusions;
    }

    @Override
    public ConstraintExclusions getConstraintType() {
        return this.constraintExclusions;
    }

    @Override
    public String toString() {
        return String.format("%s:{%s}", getClass().getSimpleName(), constraintExclusions);
    }
}
/*
 * Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.etsi.tfs.policy.context.model;

public class DeviceId {

    private final String id;

    public DeviceId(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    @Override
    public String toString() {
        return "DeviceId{" + "id='" + id + '\'' + '}';
    }
}
/*
 * Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.etsi.tfs.policy.context.model;

public class LinkId {

    private final String id;

    public LinkId(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    @Override
    public String toString() {
        return "LinkId{" + "id='" + id + '\'' + '}';
    }
}

@@ -135,6 +135,8 @@ public class CommonPolicyServiceImpl {
                 addServiceConfigRule(policyRuleService, policyRuleAction);
             case POLICY_RULE_ACTION_RECALCULATE_PATH:
                 callRecalculatePathRPC(policyRuleService, policyRuleAction);
+            case POLICY_RULE_ACTION_CALL_SERVICE_RPC:
+                callUpdateServiceRpc(policyRuleService, policyRuleAction);
             default:
                 LOGGER.errorf(INVALID_MESSAGE, policyRuleAction.getPolicyRuleActionEnum());
                 return;
@@ -509,6 +511,26 @@ public class CommonPolicyServiceImpl {
                         });
     }

+    private void callUpdateServiceRpc(
+            PolicyRuleService policyRuleService, PolicyRuleAction policyRuleAction) {
+
+        final var deserializedServiceUni = contextService.getService(policyRuleService.getServiceId());
+
+        deserializedServiceUni
+                .subscribe()
+                .with(
+                        deserializedService -> {
+                            serviceService
+                                    .updateService(deserializedService)
+                                    .subscribe()
+                                    .with(
+                                            x -> {
+                                                LOGGER.info(deserializedService);
+                                                setPolicyRuleServiceToContext(policyRuleService, ENFORCED_POLICYRULE_STATE);
+                                            });
+                        });
+    }
+
     private void callRecalculatePathRPC(
             PolicyRuleService policyRuleService, PolicyRuleAction policyRuleAction) {

@@ -65,15 +65,21 @@ public class PolicyRuleConditionValidator {
         return contextService
                 .getService(serviceId)
                 .onFailure()
-                .recoverWithItem((Service) null)
+                .invoke(
+                        throwable ->
+                                LOGGER.error(
+                                        "Failed to get service: " + serviceId + ". Message: " + throwable.getMessage(),
+                                        throwable))
+                //                .recoverWithItem((Service) null)
                 .onItem()
                 .transform(service -> checkIfServiceIsValid(service, serviceId, deviceIds));
     }

     private boolean checkIfServiceIsValid(
             Service service, ServiceId serviceId, List<String> deviceIds) {
-        return (checkIfServiceIdExists(service, serviceId)
-                && checkIfServicesDeviceIdsExist(service, deviceIds));
+        boolean checkIfServiceIdExists = checkIfServiceIdExists(service, serviceId);
+        boolean checkIfServicesDeviceIdsExist = checkIfServicesDeviceIdsExist(service, deviceIds);
+        return (checkIfServiceIdExists && checkIfServicesDeviceIdsExist);
     }

     private boolean checkIfServiceIdExists(Service service, ServiceId serviceId) {