Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

207 files changed: +7423 −1565

Files

+3 −0
@@ -27,6 +27,7 @@ include:
   - local: '/src/context/.gitlab-ci.yml'
   - local: '/src/device/.gitlab-ci.yml'
   - local: '/src/service/.gitlab-ci.yml'
+  - local: '/src/qkd_app/.gitlab-ci.yml'
   - local: '/src/dbscanserving/.gitlab-ci.yml'
   - local: '/src/opticalattackmitigator/.gitlab-ci.yml'
   - local: '/src/opticalattackdetector/.gitlab-ci.yml'
@@ -54,6 +55,8 @@ include:
   - local: '/src/qos_profile/.gitlab-ci.yml'
   - local: '/src/vnt_manager/.gitlab-ci.yml'
   - local: '/src/e2e_orchestrator/.gitlab-ci.yml'
+  - local: '/src/ztp_server/.gitlab-ci.yml'
+  - local: '/src/osm_client/.gitlab-ci.yml'

   # This should be last one: end-to-end integration tests
   - local: '/src/tests/.gitlab-ci.yml'
@@ -13,14 +13,17 @@
 # limitations under the License.

 coverage==6.3
-grpcio==1.47.*
+# grpcio==1.47.*
+grpcio==1.60.0
 grpcio-health-checking==1.47.*
 grpcio-reflection==1.47.*
-grpcio-tools==1.47.*
+# grpcio-tools==1.47.*
+grpcio-tools==1.60.0
 grpclib==0.4.4
 prettytable==3.5.0
 prometheus-client==0.13.0
-protobuf==3.20.*
+# protobuf==3.20.*
+protobuf==4.21.6
 pytest==6.2.5
 pytest-benchmark==3.4.1
 python-dateutil==2.8.2
+22 −2
@@ -151,6 +151,26 @@ export NATS_DEPLOY_MODE=${NATS_DEPLOY_MODE:-"single"}
 export NATS_REDEPLOY=${NATS_REDEPLOY:-""}


+# ----- Apache Kafka -----------------------------------------------------------
+
+# If not already set, set the namespace where Kafka will be deployed.
+export KFK_NAMESPACE=${KFK_NAMESPACE:-"kafka"}
+
+# If not already set, set the external port Kafka Client interface will be exposed to.
+export KFK_EXT_PORT_CLIENT=${KFK_EXT_PORT_CLIENT:-"9092"}
+
+# If not already set, set Kafka installation mode. Accepted values are: 'single'.
+# - If KFK_DEPLOY_MODE is "single", Kafka is deployed in single node mode. It is convenient for
+#   development and testing purposes and should fit in a VM. IT SHOULD NOT BE USED IN PRODUCTION ENVIRONMENTS.
+# NOTE: Production mode is still not supported. Will be provided in the future.
+export KFK_DEPLOY_MODE=${KFK_DEPLOY_MODE:-"single"}
+
+# If not already set, disable flag for re-deploying Kafka from scratch.
+# WARNING: ACTIVATING THIS FLAG IMPLIES LOSING THE MESSAGE BROKER INFORMATION!
+# If KFK_REDEPLOY is "YES", the message broker will be dropped while checking/deploying Kafka.
+export KFK_REDEPLOY=${KFK_REDEPLOY:-""}
+
+
 # ----- QuestDB ----------------------------------------------------------------

 # If not already set, set the namespace where QuestDB will be deployed.
@@ -215,8 +235,8 @@ export GRAF_EXT_PORT_HTTP=${GRAF_EXT_PORT_HTTP:-"3000"}
 # Deploy Apache Kafka
 ./deploy/kafka.sh

-#Deploy Monitoring (Prometheus, Mimir, Grafana)
-./deploy/monitoring.sh
+#Deploy Monitoring (Prometheus Gateway, Prometheus)
+# ./deploy/monitoring.sh

 # Expose Dashboard
 ./deploy/expose_dashboard.sh
+8 −1
@@ -66,7 +66,7 @@ CRDB_MANIFESTS_PATH="manifests/cockroachdb"

 # Create a tmp folder for files modified during the deployment
 TMP_MANIFESTS_FOLDER="${TMP_FOLDER}/${CRDB_NAMESPACE}/manifests"
-mkdir -p $TMP_MANIFESTS_FOLDER
+mkdir -p ${TMP_MANIFESTS_FOLDER}

 function crdb_deploy_single() {
     echo "CockroachDB Namespace"
@@ -105,6 +105,13 @@ function crdb_deploy_single() {
             sleep 1
         done
         kubectl wait --namespace ${CRDB_NAMESPACE} --for=condition=Ready --timeout=300s pod/cockroachdb-0
+
+        # Wait for CockroachDB to notify "start_node_query"
+        echo ">>> CockroachDB pods created. Waiting CockroachDB server to be started..."
+        while ! kubectl --namespace ${CRDB_NAMESPACE} logs pod/cockroachdb-0 -c cockroachdb 2>&1 | grep -q 'start_node_query'; do
+            printf "%c" "."
+            sleep 1
+        done
     fi
     echo

+87 −53
@@ -13,17 +13,26 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

 ########################################################################################################################
 # Read deployment settings
 ########################################################################################################################

-# If not already set, set the namespace where Apache Kafka will be deployed.
+# If not already set, set the namespace where Kafka will be deployed.
 export KFK_NAMESPACE=${KFK_NAMESPACE:-"kafka"}

-# If not already set, set the port Apache Kafka server will be exposed to.
-export KFK_SERVER_PORT=${KFK_SERVER_PORT:-"9092"}
+# If not already set, set the external port Kafka client interface will be exposed to.
+export KFK_EXT_PORT_CLIENT=${KFK_EXT_PORT_CLIENT:-"9092"}
+
+# If not already set, set Kafka installation mode. Accepted values are: 'single'.
+# - If KFK_DEPLOY_MODE is "single", Kafka is deployed in single node mode. It is convenient for
+#   development and testing purposes and should fit in a VM. IT SHOULD NOT BE USED IN PRODUCTION ENVIRONMENTS.
+# NOTE: Production mode is still not supported. Will be provided in the future.
+export KFK_DEPLOY_MODE=${KFK_DEPLOY_MODE:-"single"}

-# If not already set, if flag is YES, Apache Kafka will be redeployed and all topics will be lost.
+# If not already set, disable flag for re-deploying Kafka from scratch.
+# WARNING: ACTIVATING THIS FLAG IMPLIES LOSING THE MESSAGE BROKER INFORMATION!
+# If KFK_REDEPLOY is "YES", the message broker will be dropped while checking/deploying Kafka.
 export KFK_REDEPLOY=${KFK_REDEPLOY:-""}

@@ -34,58 +43,83 @@ export KFK_REDEPLOY=${KFK_REDEPLOY:-""}
 # Constants
 TMP_FOLDER="./tmp"
 KFK_MANIFESTS_PATH="manifests/kafka"
-KFK_ZOOKEEPER_MANIFEST="01-zookeeper.yaml"
-KFK_MANIFEST="02-kafka.yaml"

 # Create a tmp folder for files modified during the deployment
 TMP_MANIFESTS_FOLDER="${TMP_FOLDER}/${KFK_NAMESPACE}/manifests"
 mkdir -p ${TMP_MANIFESTS_FOLDER}

-function kafka_deploy() {
-    # copy zookeeper and kafka manifest files to temporary manifest location
-    cp "${KFK_MANIFESTS_PATH}/${KFK_ZOOKEEPER_MANIFEST}" "${TMP_MANIFESTS_FOLDER}/${KFK_ZOOKEEPER_MANIFEST}"
-    cp "${KFK_MANIFESTS_PATH}/${KFK_MANIFEST}" "${TMP_MANIFESTS_FOLDER}/${KFK_MANIFEST}"
-
-    # echo "Apache Kafka Namespace"
-    echo "Delete Apache Kafka Namespace"
-    kubectl delete namespace ${KFK_NAMESPACE} --ignore-not-found
-
-    echo "Create Apache Kafka Namespace"
+function kfk_deploy_single() {
+    echo "Kafka Namespace"
+    echo ">>> Create Kafka Namespace (if missing)"
     kubectl create namespace ${KFK_NAMESPACE}
+    echo

-    # echo ">>> Deplying Apache Kafka Zookeeper"
-    # Kafka zookeeper service should be deployed before the kafka service
-    kubectl --namespace ${KFK_NAMESPACE} apply -f "${TMP_MANIFESTS_FOLDER}/${KFK_ZOOKEEPER_MANIFEST}"
-
-    #KFK_ZOOKEEPER_SERVICE="zookeeper-service"    # this command may be replaced with command to extract service name automatically
-    #KFK_ZOOKEEPER_IP=$(kubectl --namespace ${KFK_NAMESPACE} get service ${KFK_ZOOKEEPER_SERVICE} -o 'jsonpath={.spec.clusterIP}')
-
-    # Kafka service should be deployed after the zookeeper service
-    #sed -i "s/<ZOOKEEPER_INTERNAL_IP>/${KFK_ZOOKEEPER_IP}/" "${TMP_MANIFESTS_FOLDER}/$KFK_MANIFEST"
-    sed -i "s/<KAFKA_NAMESPACE>/${KFK_NAMESPACE}/" "${TMP_MANIFESTS_FOLDER}/$KFK_MANIFEST"
-
-    # echo ">>> Deploying Apache Kafka Broker"
-    kubectl --namespace ${KFK_NAMESPACE} apply -f "${TMP_MANIFESTS_FOLDER}/$KFK_MANIFEST"
-
-    # echo ">>> Verifing Apache Kafka deployment"
-    sleep 5
-    # KFK_PODS_STATUS=$(kubectl --namespace ${KFK_NAMESPACE} get pods)
-    # if echo "$KFK_PODS_STATUS" | grep -qEv 'STATUS|Running'; then
-    #     echo "Deployment Error: \n $KFK_PODS_STATUS"
-    # else
-    #     echo "$KFK_PODS_STATUS"
-    # fi
+    echo "Kafka (single-mode)"
+    echo ">>> Checking if Kafka is deployed..."
+    if kubectl get --namespace ${KFK_NAMESPACE} statefulset/kafka &> /dev/null; then
+        echo ">>> Kafka is present; skipping step."
+    else
+        echo ">>> Deploy Kafka"
+        cp "${KFK_MANIFESTS_PATH}/single-node.yaml" "${TMP_MANIFESTS_FOLDER}/kfk_single_node.yaml"
+        #sed -i "s/<KFK_NAMESPACE>/${KFK_NAMESPACE}/" "${TMP_MANIFESTS_FOLDER}/kfk_single_node.yaml"
+        kubectl --namespace ${KFK_NAMESPACE} apply -f "${TMP_MANIFESTS_FOLDER}/kfk_single_node.yaml"
+
+        echo ">>> Waiting Kafka statefulset to be created..."
+        while ! kubectl get --namespace ${KFK_NAMESPACE} statefulset/kafka &> /dev/null; do
+            printf "%c" "."
+            sleep 1
+        done
+
+        # Wait for statefulset condition "Available=True" does not work
+        # Wait for statefulset condition "jsonpath='{.status.readyReplicas}'=3" throws error:
+        #   "error: readyReplicas is not found"
+        # Workaround: Check the pods are ready
+        #echo ">>> Kafka statefulset created. Waiting for readiness condition..."
+        #kubectl wait --namespace  ${KFK_NAMESPACE} --for=condition=Available=True --timeout=300s statefulset/kafka
+        #kubectl wait --namespace ${KGK_NAMESPACE} --for=jsonpath='{.status.readyReplicas}'=3 --timeout=300s \
+        #    statefulset/kafka
+        echo ">>> Kafka statefulset created. Waiting Kafka pods to be created..."
+        while ! kubectl get --namespace ${KFK_NAMESPACE} pod/kafka-0 &> /dev/null; do
+            printf "%c" "."
+            sleep 1
+        done
+        kubectl wait --namespace ${KFK_NAMESPACE} --for=condition=Ready --timeout=300s pod/kafka-0
+
+        # Wait for Kafka to notify "Kafka Server started"
+        echo ">>> Kafka pods created. Waiting Kafka Server to be started..."
+        while ! kubectl --namespace ${KFK_NAMESPACE} logs pod/kafka-0 -c kafka 2>&1 | grep -q 'Kafka Server started'; do
+            printf "%c" "."
+            sleep 1
+        done
+    fi
+    echo
+}
+
+function kfk_undeploy_single() {
+    echo "Kafka (single-mode)"
+    echo ">>> Checking if Kafka is deployed..."
+    if kubectl get --namespace ${KFK_NAMESPACE} statefulset/kafka &> /dev/null; then
+        echo ">>> Undeploy Kafka"
+        kubectl delete --namespace ${KFK_NAMESPACE} -f "${TMP_MANIFESTS_FOLDER}/kfk_single_node.yaml" --ignore-not-found
+    else
+        echo ">>> Kafka is not present; skipping step."
+    fi
+    echo
+
+    echo "Kafka Namespace"
+    echo ">>> Delete Kafka Namespace (if exists)"
+    echo "NOTE: this step might take few minutes to complete!"
+    kubectl delete namespace ${KFK_NAMESPACE} --ignore-not-found
+    echo
 }

-echo ">>> Apache Kafka"
-echo "Checking if Apache Kafka is deployed ... "
-if [ "$KFK_REDEPLOY" == "YES" ]; then
-    echo "Redeploying kafka namespace"
-    kafka_deploy
-elif kubectl get namespace "${KFK_NAMESPACE}" &> /dev/null; then
-    echo "Apache Kafka already present; skipping step."
-else
-    echo "Kafka namespace doesn't exists. Deploying kafka namespace"
-    kafka_deploy
+if [ "$KFK_DEPLOY_MODE" == "single" ]; then
+    if [ "$KFK_REDEPLOY" == "YES" ]; then
+        kfk_undeploy_single
+    fi
+
+    kfk_deploy_single
+else
+    echo "Unsupported value: KFK_DEPLOY_MODE=$KFK_DEPLOY_MODE"
 fi
-echo
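
Note: with this refactor the script is safe to re-run; an existing StatefulSet is left untouched, while setting KFK_REDEPLOY="YES" (with KFK_DEPLOY_MODE="single") first tears down the StatefulSet and the kafka namespace and then redeploys from the single-node.yaml manifest. Any other KFK_DEPLOY_MODE value prints the "Unsupported value" message instead of silently deploying.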
@@ -14,6 +14,8 @@
 # limitations under the License.

 set -euo pipefail
+: "${KUBECONFIG:=/var/snap/microk8s/current/credentials/client.config}"
+

 # -----------------------------------------------------------
 # Global namespace for all deployments
@@ -28,7 +30,7 @@ RELEASE_NAME_PROM="mon-prometheus"
 CHART_REPO_NAME_PROM="prometheus-community"
 CHART_REPO_URL_PROM="https://prometheus-community.github.io/helm-charts"
 CHART_NAME_PROM="prometheus"
-VALUES_FILE_PROM="$VALUES_FILE_PATH/prometheus_values.yaml"
+VALUES_FILE_PROM="$VALUES_FILE_PATH/prometheus_values.yaml"       # Values file for Prometheus and gateway

 # -----------------------------------------------------------
 # Mimir Configuration
@@ -76,7 +78,8 @@ deploy_chart() {
     echo "Installing/Upgrading $release_name using custom values from $values_file..."
     helm upgrade --install "$release_name" "$chart_repo_name/$chart_name" \
       --namespace "$namespace" \
-      --values "$values_file"
+      --values "$values_file" \
+      --kubeconfig "$KUBECONFIG"
   else
     echo "Installing/Upgrading $release_name with default chart values..."
     helm upgrade --install "$release_name" "$chart_repo_name/$chart_name" \
+2 −28
@@ -51,12 +51,6 @@ export TFS_SKIP_BUILD=${TFS_SKIP_BUILD:-""}
 # If not already set, set the namespace where CockroackDB will be deployed.
 export CRDB_NAMESPACE=${CRDB_NAMESPACE:-"crdb"}

-# If not already set, set the external port CockroackDB Postgre SQL interface will be exposed to.
-export CRDB_EXT_PORT_SQL=${CRDB_EXT_PORT_SQL:-"26257"}
-
-# If not already set, set the external port CockroackDB HTTP Mgmt GUI interface will be exposed to.
-export CRDB_EXT_PORT_HTTP=${CRDB_EXT_PORT_HTTP:-"8081"}
-
 # If not already set, set the database username to be used by Context.
 export CRDB_USERNAME=${CRDB_USERNAME:-"tfs"}

@@ -69,27 +63,12 @@ export CRDB_PASSWORD=${CRDB_PASSWORD:-"tfs123"}
 # If not already set, set the namespace where NATS will be deployed.
 export NATS_NAMESPACE=${NATS_NAMESPACE:-"nats"}

-# If not already set, set the external port NATS Client interface will be exposed to.
-export NATS_EXT_PORT_CLIENT=${NATS_EXT_PORT_CLIENT:-"4222"}
-
-# If not already set, set the external port NATS HTTP Mgmt GUI interface will be exposed to.
-export NATS_EXT_PORT_HTTP=${NATS_EXT_PORT_HTTP:-"8222"}
-

 # ----- QuestDB ----------------------------------------------------------------

 # If not already set, set the namespace where QuestDB will be deployed.
 export QDB_NAMESPACE=${QDB_NAMESPACE:-"qdb"}

-# If not already set, set the external port QuestDB Postgre SQL interface will be exposed to.
-export QDB_EXT_PORT_SQL=${QDB_EXT_PORT_SQL:-"8812"}
-
-# If not already set, set the external port QuestDB Influx Line Protocol interface will be exposed to.
-export QDB_EXT_PORT_ILP=${QDB_EXT_PORT_ILP:-"9009"}
-
-# If not already set, set the external port QuestDB HTTP Mgmt GUI interface will be exposed to.
-export QDB_EXT_PORT_HTTP=${QDB_EXT_PORT_HTTP:-"9000"}
-
 # If not already set, set the database username to be used for QuestDB.
 export QDB_USERNAME=${QDB_USERNAME:-"admin"}

@@ -114,14 +93,9 @@ export GRAF_EXT_PORT_HTTP=${GRAF_EXT_PORT_HTTP:-"3000"}

 # ----- Apache Kafka ------------------------------------------------------

-# If not already set, set the namespace where Apache Kafka will be deployed.
+# If not already set, set the namespace where Kafka will be deployed.
 export KFK_NAMESPACE=${KFK_NAMESPACE:-"kafka"}

-# If not already set, set the port Apache Kafka server will be exposed to.
-export KFK_SERVER_PORT=${KFK_SERVER_PORT:-"9092"}
-
-# If not already set, if flag is YES, Apache Kafka will be redeployed and topic will be lost.
-export KFK_REDEPLOY=${KFK_REDEPLOY:-""}
-
 ########################################################################################################################
 # Automated steps start here
@@ -154,7 +128,7 @@ kubectl create secret generic crdb-data --namespace ${TFS_K8S_NAMESPACE} --type=
 printf "\n"

 echo ">>> Create Secret with Apache Kafka..."
-KFK_SERVER_PORT=$(kubectl --namespace ${KFK_NAMESPACE} get service kafka-service -o 'jsonpath={.spec.ports[0].port}')
+KFK_SERVER_PORT=$(kubectl --namespace ${KFK_NAMESPACE} get service kafka-public -o 'jsonpath={.spec.ports[0].port}')
 kubectl create secret generic kfk-kpi-data --namespace ${TFS_K8S_NAMESPACE} --type='Opaque' \
     --from-literal=KFK_NAMESPACE=${KFK_NAMESPACE} \
     --from-literal=KFK_SERVER_PORT=${KFK_SERVER_PORT}
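
Note that the Kafka port stored in the kfk-kpi-data secret is no longer hard-coded: it is read from the first port of the new kafka-public Service (the 9092 'clients' listener in the manifest further below), so the secret keeps matching whatever the Service actually exposes.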
@@ -61,7 +61,7 @@ spec:
       containers:
       - name: cockroachdb
         image: cockroachdb/cockroach:latest-v22.2
-        imagePullPolicy: Always
+        imagePullPolicy: IfNotPresent
         args:
         - start-single-node
         ports:

@@ -55,9 +55,15 @@ spec:
          readinessProbe:
          readinessProbe:
            exec:
            exec:
              command: ["/bin/grpc_health_probe", "-addr=:1010"]
              command: ["/bin/grpc_health_probe", "-addr=:1010"]
            initialDelaySeconds: 50   # Context's gunicorn takes 30~40 seconds to bootstrap
            periodSeconds: 10
            failureThreshold: 10
          livenessProbe:
          livenessProbe:
            exec:
            exec:
              command: ["/bin/grpc_health_probe", "-addr=:1010"]
              command: ["/bin/grpc_health_probe", "-addr=:1010"]
            initialDelaySeconds: 50   # Context's gunicorn takes 30~40 seconds to bootstrap
            periodSeconds: 10
            failureThreshold: 10
          resources:
          resources:
            requests:
            requests:
              cpu: 250m
              cpu: 250m

manifests/kafka/01-zookeeper.yaml (deleted, mode 100644 → 0)
+0 −53
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
spec:
  type: ClusterIP
  ports:
    - name: zookeeper-port
      port: 2181
      #nodePort: 30181
      #targetPort: 2181
  selector:
    app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - image: wurstmeister/zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181

manifests/kafka/02-kafka.yaml (deleted, mode 100644 → 0)
+0 −60
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-broker
  name: kafka-service
spec:
  ports:
  - port: 9092
  selector:
    app: kafka-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka-broker
  name: kafka-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-broker
  template:
    metadata:
      labels:
        app: kafka-broker
    spec:
      hostname: kafka-broker
      containers:
      - env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          #value: <ZOOKEEPER_INTERNAL_IP>:2181
          value: zookeeper-service.<KAFKA_NAMESPACE>.svc.cluster.local:2181
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://:9092
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://kafka-service.<KAFKA_NAMESPACE>.svc.cluster.local:9092
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        name: kafka-broker
        ports:
          - containerPort: 9092
+99 −0
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  name: kafka-public
  labels:
    app.kubernetes.io/component: message-broker
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/component: message-broker
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
  ports:
  - name: clients
    port: 9092
    protocol: TCP
    targetPort: 9092
  - name: control-plane
    port: 9093
    protocol: TCP
    targetPort: 9093
  - name: external
    port: 9094
    protocol: TCP
    targetPort: 9094
---


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: message-broker
      app.kubernetes.io/instance: kafka
      app.kubernetes.io/name: kafka
  serviceName: "kafka-public"
  replicas: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/component: message-broker
        app.kubernetes.io/instance: kafka
        app.kubernetes.io/name: kafka
    spec:
      terminationGracePeriodSeconds: 10
      restartPolicy: Always
      containers:
      - name: kafka
        image: bitnami/kafka:latest
        imagePullPolicy: IfNotPresent
        ports:
        - name: clients
          containerPort: 9092
        - name: control-plane
          containerPort: 9093
        - name: external
          containerPort: 9094
        env:
          - name: KAFKA_CFG_NODE_ID
            value: "1"
          - name: KAFKA_CFG_PROCESS_ROLES
            value: "controller,broker"
          - name: KAFKA_CFG_LISTENERS
            value: "PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094"
          - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
            value: "PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT"
          - name: KAFKA_CFG_ADVERTISED_LISTENERS
            value: "PLAINTEXT://kafka-public.kafka.svc.cluster.local:9092,EXTERNAL://localhost:9094"
          - name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES
            value: "CONTROLLER"
          - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTERS
            value: "1@kafka-0:9093"
        resources:
          requests:
            cpu: "250m"
            memory: 1Gi
          limits:
            cpu: "1"
            memory: 2Gi
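
A quick way to sanity-check the new single-node broker is a metadata round-trip against the in-cluster 'clients' listener. A minimal sketch, assuming the confluent-kafka Python package and the default namespace/port wired above (for the EXTERNAL listener, port-forward and use localhost:9094 instead):

    # kafka_check.py - minimal broker sanity check (hypothetical helper, not part of the repo)
    from confluent_kafka.admin import AdminClient

    # Matches KFK_SERVER_ADDRESS_TEMPLATE with KFK_NAMESPACE=kafka and the 9092 clients port
    BOOTSTRAP = 'kafka-public.kafka.svc.cluster.local:9092'

    admin = AdminClient({'bootstrap.servers': BOOTSTRAP})
    # list_topics() forces a metadata request; it times out if the broker is unreachable
    metadata = admin.list_topics(timeout=10)
    print('Brokers:', metadata.brokers)
    print('Topics :', sorted(metadata.topics.keys()))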
@@ -41,7 +41,7 @@ spec:
             - name: LOG_LEVEL
               value: "INFO"
             - name: FLASK_ENV
-              value: "production"  # change to "development" if developing
+              value: "production"  # normal value is "production", change to "development" if developing
             - name: IETF_NETWORK_RENDERER
               value: "LIBYANG"
           envFrom:
+20 −14
@@ -20,13 +20,15 @@
 export TFS_REGISTRY_IMAGES="http://localhost:32000/tfs/"

 # Set the list of components, separated by spaces, you want to build images for, and deploy.
-export TFS_COMPONENTS="context device pathcomp service slice nbi webui"
+# export TFS_COMPONENTS="context device pathcomp service slice nbi webui"
+export TFS_COMPONENTS="context device pathcomp service webui"

 # Uncomment to activate Monitoring (old)
 #export TFS_COMPONENTS="${TFS_COMPONENTS} monitoring"

 # Uncomment to activate Monitoring Framework (new)
 #export TFS_COMPONENTS="${TFS_COMPONENTS} kpi_manager kpi_value_writer kpi_value_api telemetry analytics automation"
+export TFS_COMPONENTS="${TFS_COMPONENTS} kpi_manager telemetry"

 # Uncomment to activate QoS Profiles
 #export TFS_COMPONENTS="${TFS_COMPONENTS} qos_profile"
@@ -134,7 +136,7 @@ export CRDB_PASSWORD="tfs123"
 export CRDB_DEPLOY_MODE="single"

 # Disable flag for dropping database, if it exists.
-export CRDB_DROP_DATABASE_IF_EXISTS=""
+export CRDB_DROP_DATABASE_IF_EXISTS="YES"

 # Disable flag for re-deploying CockroachDB from scratch.
 export CRDB_REDEPLOY=""
@@ -159,6 +161,22 @@ export NATS_DEPLOY_MODE="single"
 export NATS_REDEPLOY=""


+# ----- Apache Kafka -----------------------------------------------------------
+
+# Set the namespace where Apache Kafka will be deployed.
+export KFK_NAMESPACE="kafka"
+
+# Set the port Apache Kafka server will be exposed to.
+export KFK_EXT_PORT_CLIENT="9092"
+
+# Set Kafka installation mode to 'single'. This option is convenient for development and testing.
+# See ./deploy/all.sh or ./deploy/kafka.sh for additional details
+export KFK_DEPLOY_MODE="single"
+
+# Disable flag for re-deploying Kafka from scratch.
+export KFK_REDEPLOY=""
+
+
 # ----- QuestDB ----------------------------------------------------------------

 # Set the namespace where QuestDB will be deployed.
@@ -199,15 +217,3 @@ export PROM_EXT_PORT_HTTP="9090"

 # Set the external port Grafana HTTP Dashboards will be exposed to.
 export GRAF_EXT_PORT_HTTP="3000"
-
-# ----- Apache Kafka -----------------------------------------------------------
-
-# Set the namespace where Apache Kafka will be deployed.
-export KFK_NAMESPACE="kafka"
-
-# Set the port Apache Kafka server will be exposed to.
-export KFK_SERVER_PORT="9092"
-
-# Set the flag to YES for redeploying of Apache Kafka
-export KFK_REDEPLOY=""
@@ -19,7 +19,7 @@ PROJECTDIR=`pwd`
 cd $PROJECTDIR/src

 RCFILE=$PROJECTDIR/coverage/.coveragerc
-CRDB_SQL_ADDRESS=$(kubectl --namespace ${CRDB_NAMESPACE} get service cockroachdb-public -o 'jsonpath={.spec.clusterIP}')
-export CRDB_URI="cockroachdb://tfs:tfs123@${CRDB_SQL_ADDRESS}:26257/tfs_kpi_mgmt?sslmode=require"
+# CRDB_SQL_ADDRESS=$(kubectl --namespace ${CRDB_NAMESPACE} get service cockroachdb-public -o 'jsonpath={.spec.clusterIP}')
+# export CRDB_URI="cockroachdb://tfs:tfs123@${CRDB_SQL_ADDRESS}:26257/tfs_kpi_mgmt?sslmode=require"
 python3 -m pytest --log-level=DEBUG --log-cli-level=DEBUG --verbose \
     kpi_value_writer/tests/test_metric_writer_to_prom.py
@@ -21,7 +21,7 @@ docker container prune -f
 docker pull "bitnami/kafka:latest"
 docker buildx build -t "mock_tfs_nbi_dependencies:test" -f ./src/tests/tools/mock_tfs_nbi_dependencies/Dockerfile .
 docker buildx build -t "nbi:latest" -f ./src/nbi/Dockerfile .
-docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+docker image prune --force

 docker network create -d bridge teraflowbridge

@@ -37,13 +37,13 @@ echo
 echo "Build optical attack detector:"
 echo "------------------------------"
 docker build -t "opticalattackdetector:latest" -f ./src/opticalattackdetector/Dockerfile .
-docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+docker image prune --force

 echo
 echo "Build dbscan serving:"
 echo "---------------------"
 docker build -t "dbscanserving:latest" -f ./src/dbscanserving/Dockerfile .
-docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+docker image prune --force

 echo
 echo "Create test environment:"
#!/bin/bash
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# # Cleanup
# docker rm --force qkd-node
# docker network rm --force qkd-node-br

# # Create Docker network
# docker network create --driver bridge --subnet=172.254.250.0/24 --gateway=172.254.250.254 qkd-node-br

# <<<<<<<< HEAD:scripts/run_tests_locally-telemetry-gnmi.sh
PROJECTDIR=`pwd`
cd $PROJECTDIR/src
# RCFILE=$PROJECTDIR/coverage/.coveragerc

export KFK_SERVER_ADDRESS='127.0.0.1:9094'
# CRDB_SQL_ADDRESS=$(kubectl get service cockroachdb-public --namespace crdb -o jsonpath='{.spec.clusterIP}')
# export CRDB_URI="cockroachdb://tfs:tfs123@${CRDB_SQL_ADDRESS}:26257/tfs_telemetry?sslmode=require"
RCFILE=$PROJECTDIR/coverage/.coveragerc

# this is unit test (should be tested with container-lab running)
# python3 -m pytest --log-level=info --log-cli-level=info --verbose \
#     telemetry/backend/tests/gnmi_oc/test_unit_GnmiOpenConfigCollector.py 

# this is integration test (should be tested with container-lab running)
python3 -m pytest --log-level=info --log-cli-level=info --verbose \
    telemetry/backend/tests/gnmi_oc/test_integration_GnmiOCcollector.py # this is integration test
# ========
# # Create QKD Node
# docker run --detach --name qkd-node --network qkd-node-br --ip 172.254.250.101 mock-qkd-node:test

# # Dump QKD Node Docker containers
# docker ps -a

# echo "Bye!"
# >>>>>>>> develop:src/tests/tools/mock_qkd_node/run.sh
@@ -19,6 +19,7 @@ build analytics:
     IMAGE_TAG: 'latest'             # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     # This first build tags the builder resulting image to prevent being removed by dangling image removal command
@@ -30,7 +31,7 @@ build analytics:
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-frontend:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-backend:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build automation:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
     - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build bgpls_speaker:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
     - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -69,19 +69,19 @@ def format_custom_config_rules(config_rules : List[Dict]) -> List[Dict]:
 def format_device_custom_config_rules(device : Dict) -> Dict:
     config_rules = device.get('device_config', {}).get('config_rules', [])
     config_rules = format_custom_config_rules(config_rules)
-    device['device_config']['config_rules'] = config_rules
+    device.setdefault('device_config', {})['config_rules'] = config_rules
     return device

 def format_service_custom_config_rules(service : Dict) -> Dict:
     config_rules = service.get('service_config', {}).get('config_rules', [])
     config_rules = format_custom_config_rules(config_rules)
-    service['service_config']['config_rules'] = config_rules
+    service.setdefault('service_config', {})['config_rules'] = config_rules
     return service

 def format_slice_custom_config_rules(slice_ : Dict) -> Dict:
     config_rules = slice_.get('slice_config', {}).get('config_rules', [])
     config_rules = format_custom_config_rules(config_rules)
-    slice_['slice_config']['config_rules'] = config_rules
+    slice_.setdefault('slice_config', {})['config_rules'] = config_rules
     return slice_

 def split_devices_by_rules(devices : List[Dict]) -> Tuple[List[Dict], List[Dict]]:
@@ -138,6 +138,19 @@ def link_type_to_str(link_type : Union[int, str]) -> Optional[str]:
     if isinstance(link_type, str): return LinkTypeEnum.Name(LinkTypeEnum.Value(link_type))
     return None

+LINK_TYPES_NORMAL = {
+    LinkTypeEnum.LINKTYPE_UNKNOWN,
+    LinkTypeEnum.LINKTYPE_COPPER,
+    LinkTypeEnum.LINKTYPE_RADIO,
+    LinkTypeEnum.LINKTYPE_MANAGEMENT,
+}
+LINK_TYPES_OPTICAL = {
+    LinkTypeEnum.LINKTYPE_FIBER,
+}
+LINK_TYPES_VIRTUAL = {
+    LinkTypeEnum.LINKTYPE_VIRTUAL,
+}
+
 def split_links_by_type(links : List[Dict]) -> Dict[str, List[Dict]]:
     typed_links = collections.defaultdict(list)
     for link in links:
@@ -148,11 +161,11 @@ def split_links_by_type(links : List[Dict]) -> Dict[str, List[Dict]]:
             raise Exception(MSG.format(str(link)))

         link_type = LinkTypeEnum.Value(str_link_type)
-        if link_type in {LinkTypeEnum.LINKTYPE_UNKNOWN, LinkTypeEnum.LINKTYPE_COPPER, LinkTypeEnum.LINKTYPE_RADIO, LinkTypeEnum.LINKTYPE_MANAGEMENT}:
+        if link_type in LINK_TYPES_NORMAL:
             typed_links['normal'].append(link)
-        elif link_type in {LinkTypeEnum.LINKTYPE_FIBER}:
+        elif link_type in LINK_TYPES_OPTICAL:
             typed_links['optical'].append(link)
-        elif link_type in {LinkTypeEnum.LINKTYPE_VIRTUAL}:
+        elif link_type in LINK_TYPES_VIRTUAL:
             typed_links['virtual'].append(link)
         else:
             MSG = 'Unsupported LinkType({:s}) in Link({:s})'
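
The setdefault() change matters when a descriptor omits the *_config key entirely: device.get('device_config', {}) reads safely, but the old write-back device['device_config']['config_rules'] = ... then raised KeyError. A self-contained illustration (hypothetical minimal input, with a stub standing in for the real format_custom_config_rules):

    from typing import Dict, List

    def format_custom_config_rules(config_rules : List[Dict]) -> List[Dict]:
        return config_rules   # stub; the real helper rewrites rule payloads

    def format_device_custom_config_rules(device : Dict) -> Dict:
        config_rules = device.get('device_config', {}).get('config_rules', [])
        config_rules = format_custom_config_rules(config_rules)
        device.setdefault('device_config', {})['config_rules'] = config_rules
        return device

    device = {'device_id': 'dev-1'}   # no 'device_config' key at all
    print(format_device_custom_config_rules(device))
    # -> {'device_id': 'dev-1', 'device_config': {'config_rules': []}}
    # the old device['device_config'][...] assignment raised KeyError here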
Original line number Original line Diff line number Diff line
@@ -20,7 +20,14 @@ from common.Settings import get_setting




LOGGER = logging.getLogger(__name__)
LOGGER = logging.getLogger(__name__)
KFK_SERVER_ADDRESS_TEMPLATE = 'kafka-service.{:s}.svc.cluster.local:{:s}'
KFK_SERVER_ADDRESS_TEMPLATE = 'kafka-public.{:s}.svc.cluster.local:{:s}'

KAFKA_TOPIC_NUM_PARTITIONS         = 1
KAFKA_TOPIC_REPLICATION_FACTOR     = 1
#KAFKA_TOPIC_LIST_TIMEOUT           = 5
KAFKA_TOPIC_CREATE_REQUEST_TIMEOUT = 60_000 # ms
KAFKA_TOPIC_CREATE_WAIT_ITERATIONS = 10
KAFKA_TOPIC_CREATE_WAIT_TIME       = 1


KAFKA_TOPIC_NUM_PARTITIONS         = 1
KAFKA_TOPIC_NUM_PARTITIONS         = 1
KAFKA_TOPIC_REPLICATION_FACTOR     = 1
KAFKA_TOPIC_REPLICATION_FACTOR     = 1
@@ -35,8 +42,12 @@ class KafkaConfig(Enum):
    def get_kafka_address() -> str:
        kafka_server_address  = get_setting('KFK_SERVER_ADDRESS', default=None)
        if kafka_server_address is None:
+            try:
                KFK_NAMESPACE = get_setting('KFK_NAMESPACE')
                KFK_PORT      = get_setting('KFK_SERVER_PORT')
+            except Exception:
+                KFK_NAMESPACE = 'kafka'
+                KFK_PORT      = '9092'
            kafka_server_address = KFK_SERVER_ADDRESS_TEMPLATE.format(KFK_NAMESPACE, KFK_PORT)
        return kafka_server_address
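Reviewer note: the try/except above changes the failure mode; a missing KFK_NAMESPACE or KFK_SERVER_PORT no longer propagates an exception but silently falls back to 'kafka' / '9092'. A minimal sketch of the expected resolution, assuming get_setting raises when a variable is unset and that none of the KFK_* variables are exported:

    # Hypothetical check of the fallback path:
    address = KafkaConfig.get_kafka_address()
    assert address == 'kafka-public.kafka.svc.cluster.local:9092'
    # With KFK_SERVER_ADDRESS set explicitly, the template is bypassed entirely,
    # e.g. KFK_SERVER_ADDRESS=10.0.0.5:9092 -> '10.0.0.5:9092'.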
@@ -52,10 +63,10 @@ class KafkaTopic(Enum):
    # TODO: Later to be populated from ENV variable.
    TELEMETRY_REQUEST    = 'topic_telemetry_request'
    TELEMETRY_RESPONSE   = 'topic_telemetry_response'
-    RAW                  = 'topic_raw'
-    LABELED              = 'topic_labeled'
-    VALUE                = 'topic_value'
-    ALARMS               = 'topic_alarms'
+    RAW                  = 'topic_raw'                  # TODO: Update name to telemetry_raw
+    LABELED              = 'topic_labeled'              # TODO: Update name to telemetry_labeled
+    VALUE                = 'topic_value'                # TODO: Update name to telemetry_value
+    ALARMS               = 'topic_alarms'               # TODO: Update name to telemetry_alarms
    ANALYTICS_REQUEST    = 'topic_analytics_request'
    ANALYTICS_RESPONSE   = 'topic_analytics_response'
    VNTMANAGER_REQUEST   = 'topic_vntmanager_request'
@@ -137,7 +148,6 @@ class KafkaTopic(Enum):
            LOGGER.debug('All topics created and available.')
            return True

-# TODO: create all topics after the deployments (Telemetry and Analytics)

if __name__ == '__main__':
    import os
@@ -19,13 +19,14 @@ build context:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build dbscanserving:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,6 +19,7 @@ build device:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker ps -aq | xargs -r docker rm -f
    - containerlab destroy --all --cleanup || true
@@ -27,7 +28,7 @@ build device:
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -40,30 +41,6 @@ build device:
      - manifests/${IMAGE_NAME}service.yaml
      - .gitlab-ci.yml

-## Start Mock QKD Nodes before unit testing
-#start_mock_nodes:
-#  stage: deploy
-#  script:
-#    - bash src/tests/tools/mock_qkd_nodes/start.sh &
-#    - sleep 10 # wait for nodes to spin up
-#  artifacts:
-#    paths:
-#      - mock_nodes.log
-#  rules:
-#    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
-#    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
-
-## Prepare Scenario (Start NBI, mock services)
-#prepare_scenario:
-#  stage: deploy
-#  script:
-#    - pytest src/tests/qkd/unit/PrepareScenario.py
-#  needs:
-#    - start_mock_nodes
-#  rules:
-#    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
-#    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'

# Apply unit test to the component
unit_test device:
  variables:
@@ -72,8 +49,6 @@ unit_test device:
  stage: unit_test
  needs:
    - build device
-    #- start_mock_nodes
-    #- prepare_scenario
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - >
@@ -97,6 +72,10 @@ unit_test device:
    - docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary_emulated.py --junitxml=/opt/results/${IMAGE_NAME}_report_emulated.xml"
    - docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary_ietf_actn.py --junitxml=/opt/results/${IMAGE_NAME}_report_ietf_actn.xml"
    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_*.py"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_qkd_compliance.py"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_mock_qkd_node.py"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_qkd_error_handling.py"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/qkd/unit/test_Set_new_configuration.py"
    - docker exec -i $IMAGE_NAME bash -c "coverage report --include='${IMAGE_NAME}/*' --show-missing"
  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
  after_script:
@@ -112,6 +91,7 @@ unit_test device:
      - src/$IMAGE_NAME/Dockerfile
      - src/$IMAGE_NAME/tests/*.py
      - src/$IMAGE_NAME/tests/Dockerfile
+      #- src/tests/tools/mock_qkd_nodes/**
      - manifests/${IMAGE_NAME}service.yaml
      - .gitlab-ci.yml
  artifacts:
@@ -224,7 +224,12 @@ def fetch_node(url: str, resource_key: str, headers: Dict[str, str], auth: Optio
    try:
        r = requests.get(url, timeout=timeout, verify=False, auth=auth, headers=headers)
        r.raise_for_status()
-        result.append((resource_key, r.json().get('qkd_node', {})))
+        data = r.json()
+        data.pop('qkdn_capabilities', None)
+        data.pop('qkd_applications', None)
+        data.pop('qkd_interfaces', None)
+        data.pop('qkd_links', None)
+        result.append((resource_key, data))
    except requests.RequestException as e:
        LOGGER.error(f"Error fetching node from {url}: {e}")
        result.append((resource_key, e))
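Reviewer note: the change above returns the whole JSON document minus the four heavy sub-trees, instead of only the 'qkd_node' sub-object. A self-contained sketch of the trimming, with an illustrative payload (keys follow the pops above; values are invented):

    data = {
        'qkd_node': {'qkdn_id': 'n1'},
        'qkdn_capabilities': {'cap': 1},
        'qkd_applications': ['app1'],
        'qkd_interfaces': ['if1'],
        'qkd_links': ['link1'],
    }
    for key in ('qkdn_capabilities', 'qkd_applications', 'qkd_interfaces', 'qkd_links'):
        data.pop(key, None)   # tolerate absent keys, as in the handler above
    assert data == {'qkd_node': {'qkdn_id': 'n1'}}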
@@ -15,10 +15,18 @@

import pytest
import requests
-from tests.tools.mock_qkd_nodes.YangValidator import YangValidator
+from requests.exceptions import HTTPError
+from tests.tools.mock_qkd_node.YangValidator import YangValidator

def test_compliance_with_yang_models():
    validator = YangValidator('etsi-qkd-sdn-node', ['etsi-qkd-node-types'])
+    try:
        response = requests.get('http://127.0.0.1:11111/restconf/data/etsi-qkd-sdn-node:qkd_node')
+        response.raise_for_status()
        data = response.json()
-    assert validator.parse_to_dict(data) is not None
+        assert validator.parse_to_dict(data) is not None, "Data validation failed against YANG model."
+    except HTTPError as e:
+        pytest.fail(f"HTTP error occurred: {e}")
+    except Exception as e:
+        pytest.fail(f"Unexpected error occurred: {e}")
@@ -40,7 +40,7 @@ def test_invalid_operations_on_network_links(qkd_driver):

    try:
        # Attempt to perform an invalid operation (simulate wrong resource key)
-        response = requests.post(f'http://{qkd_driver.address}/invalid_resource', json=invalid_payload)
+        response = requests.post(f'http://{qkd_driver.address}:{qkd_driver.port}/invalid_resource', json=invalid_payload)
        response.raise_for_status()

    except HTTPError as e:
@@ -12,16 +12,35 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-import pytest, requests
+import pytest
+import requests
+import time
+import socket
from unittest.mock import patch
-from device.service.drivers.qkd.QKDDriver import QKDDriver
+from device.service.drivers.qkd.QKDDriver2 import QKDDriver

-MOCK_QKD_ADDRRESS = '127.0.0.1'
+MOCK_QKD_ADDRESS = '127.0.0.1'  # Use localhost to connect to the mock node in the Docker container
MOCK_PORT = 11111

+@pytest.fixture(scope="module")
+def wait_for_mock_node():
+    """
+    Fixture to wait for the mock QKD node to be ready before running tests.
+    """
+    timeout = 30  # seconds
+    start_time = time.time()
+    while True:
+        try:
+            with socket.create_connection((MOCK_QKD_ADDRESS, MOCK_PORT), timeout=1):
+                break  # Success
+        except (socket.timeout, socket.error):
+            if time.time() - start_time > timeout:
+                raise RuntimeError("Timed out waiting for mock QKD node to be ready.")
+            time.sleep(1)
+
@pytest.fixture
-def qkd_driver():
-    return QKDDriver(address=MOCK_QKD_ADDRRESS, port=MOCK_PORT, username='user', password='pass')
+def qkd_driver(wait_for_mock_node):
+    return QKDDriver(address=MOCK_QKD_ADDRESS, port=MOCK_PORT, username='user', password='pass')

# Deliverable Test ID: SBI_Test_01
def test_qkd_driver_connection(qkd_driver):
@@ -29,7 +48,7 @@ def test_qkd_driver_connection(qkd_driver):


# Deliverable Test ID: SBI_Test_01
def test_qkd_driver_invalid_connection():
-    qkd_driver = QKDDriver(address='127.0.0.1', port=12345, username='user', password='pass')  # Use invalid port directly
+    qkd_driver = QKDDriver(address=MOCK_QKD_ADDRESS, port=12345, username='user', password='pass')  # Use invalid port directly
    assert qkd_driver.Connect() is False

# Deliverable Test ID: SBI_Test_10
@@ -38,4 +57,3 @@ def test_qkd_driver_timeout_connection(mock_get, qkd_driver):
    mock_get.side_effect = requests.exceptions.Timeout
    qkd_driver.timeout = 0.001  # Simulate very short timeout
    assert qkd_driver.Connect() is False
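Reviewer note: the module-scoped fixture above turns "mock node not yet up" flakiness into a single bounded wait per test module. The same TCP readiness probe can be reused outside pytest; a minimal standalone sketch under the same assumptions (host and port of the mock node):

    import socket, time

    def wait_for_port(host: str, port: int, timeout: float = 30.0) -> None:
        # Poll until a TCP connection succeeds or the deadline expires.
        deadline = time.time() + timeout
        while True:
            try:
                with socket.create_connection((host, port), timeout=1):
                    return
            except OSError:
                if time.time() > deadline:
                    raise RuntimeError(f'{host}:{port} not reachable within {timeout}s')
                time.sleep(1)

    # e.g. wait_for_port('127.0.0.1', 11111) before driving the mock QKD node.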
@@ -53,7 +53,7 @@ def create_qkd_app(driver, qkdn_id, backing_qkdl_id, client_app_id=None):
        print(f"Sending payload to {driver.address}: {app_payload}")

        # Send POST request to create the application
-        response = requests.post(f'http://{driver.address}/app/create_qkd_app', json=app_payload)
+        response = requests.post(f'http://{driver.address}/qkd_app/create_qkd_app', json=app_payload)

        # Check if the request was successful (HTTP 2xx)
        response.raise_for_status()
@@ -19,6 +19,7 @@ build dlt:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    # This first build tags the builder resulting image to prevent being removed by dangling image removal command
@@ -29,7 +30,7 @@ build dlt:
    - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-gateway:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-connector:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build e2e_orchestrator:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build forecaster:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build interdomain:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build kpi-manager:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build kpi-value-api:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build kpi-value-writer:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -43,20 +43,21 @@ def log_all_methods(request):


# -------- Initial Test ----------------
-def test_validate_kafka_topics():
-    LOGGER.debug(" >>> test_validate_kafka_topics: START <<< ")
-    response = KafkaTopic.create_all_topics()
-    assert isinstance(response, bool)
+# def test_validate_kafka_topics():
+#     LOGGER.debug(" >>> test_validate_kafka_topics: START <<< ")
+#     response = KafkaTopic.create_all_topics()
+#     assert isinstance(response, bool)

# --------------
# NOT FOR GITHUB PIPELINE (Local testing only)
# --------------
# def test_KafkaConsumer(kpi_manager_client):

-#     # kpidescriptor = create_kpi_descriptor_request()
-#     # kpi_manager_client.SetKpiDescriptor(kpidescriptor)
+    # kpidescriptor = create_kpi_descriptor_request()
+    # kpi_manager_client.SetKpiDescriptor(kpidescriptor)

    # kpi_value_writer = KpiValueWriter()
    # kpi_value_writer.KafkaKpiConsumer()
-#     LOGGER.debug(" waiting for timer to finish ")
+    # timer = 300
+    # LOGGER.debug(f" waiting for timer to finish {timer} seconds")
    # time.sleep(300)
@@ -19,13 +19,14 @@ build l3_attackmitigator:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build l3_centralizedattackdetector:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build l3_distributedattackdetector:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build load_generator:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build monitoring:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build nbi:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -73,7 +74,7 @@ unit_test nbi:
    - docker pull "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker pull "$CI_REGISTRY_IMAGE/mock_tfs_nbi_dependencies:test"
    - docker pull "bitnami/kafka:latest"
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
    - >
      docker run --name kafka -d --network=teraflowbridge -p 9092:9092 -p 9093:9093
      --env KAFKA_CFG_NODE_ID=1
@@ -12,16 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-import uuid
-import json
+import json, logging
from flask import request
+from flask.json import jsonify
from flask_restful import Resource
from common.proto.context_pb2 import Empty
from common.proto.qkd_app_pb2 import App, QKDAppTypesEnum
from common.Constants import DEFAULT_CONTEXT_NAME
from context.client.ContextClient import ContextClient
+from nbi.service._tools.HttpStatusCodes import HTTP_OK, HTTP_SERVERERROR
from qkd_app.client.QKDAppClient import QKDAppClient

+LOGGER = logging.getLogger(__name__)
+
class _Resource(Resource):
    def __init__(self) -> None:
        super().__init__()
@@ -30,7 +33,7 @@ class _Resource(Resource):


class Index(_Resource):
    def get(self):
-        return {'hello': 'world'}
+        return {}

class ListDevices(_Resource):
    def get(self):
@@ -79,20 +82,35 @@ class CreateQKDApp(_Resource):
    def post(self):
        app = request.get_json()['app']
        devices = self.context_client.ListDevices(Empty()).devices
-        local_device = None
+
+        local_qkdn_id = app.get('local_qkdn_id')
+        if local_qkdn_id is None:
+            MSG = 'local_qkdn_id not specified in qkd_app({:s})'
+            msg = MSG.format(str(app))
+            LOGGER.exception(msg)
+            response = jsonify({'error': msg})
+            response.status_code = HTTP_SERVERERROR
+            return response

        # This for-loop won't be necessary if Device ID is guaranteed to be the same as QKDN Id
+        local_device = None
        for device in devices:
            for config_rule in device.device_config.config_rules:
-                if config_rule.custom.resource_key == '__node__':
+                if config_rule.custom.resource_key != '__node__': continue
                value = json.loads(config_rule.custom.resource_value)
-                    qkdn_id = value['qkdn_id']
-                    if app['local_qkdn_id'] == qkdn_id:
+                qkdn_id = value.get('qkdn_id')
+                if qkdn_id is None: continue
+                if local_qkdn_id != qkdn_id: continue
                local_device = device
                break

        if local_device is None:
-            return {"status": "fail"}
+            MSG = 'Unable to find device for local_qkdn_id({:s})'
+            msg = MSG.format(str(local_qkdn_id))
+            LOGGER.exception(msg)
+            response = jsonify({'error': msg})
+            response.status_code = HTTP_SERVERERROR
+            return response

        external_app_src_dst = {
            'app_id': {'context_id': {'context_uuid': {'uuid': DEFAULT_CONTEXT_NAME}}, 'app_uuid': {'uuid': ''}},
@@ -107,5 +125,6 @@ class CreateQKDApp(_Resource):


        self.qkd_app_client.RegisterApp(App(**external_app_src_dst))

-        return {"status": "success"}
+        response = jsonify({'status': 'success'})
+        response.status_code = HTTP_OK
+        return response
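Reviewer note: with the checks above, a request that omits local_qkdn_id or references an unknown QKDN now gets a JSON error body with a 5xx status instead of a KeyError or a bare {"status": "fail"}. An illustrative client call; the host and the payload fields below are assumptions inferred from the handler, not a documented schema:

    import requests

    app_payload = {'app': {
        'local_qkdn_id': '00000001-0000-0000-0000-000000000000',  # hypothetical id
    }}
    # Host/port are illustrative; the route matches the resource shown above.
    r = requests.post('http://127.0.0.1/qkd_app/create_qkd_app', json=app_payload)
    print(r.status_code, r.json())  # {'status': 'success'} on success, {'error': ...} otherwise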
@@ -19,13 +19,14 @@ build opticalattackdetector:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build opticalattackmanager:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build opticalattackmitigator:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build opticalcontroller:
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
+    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
+122 −0
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Build, tag, and push the Docker image to the GitLab Docker registry
build osm_client:
  variables:
    IMAGE_NAME: 'osm_client' # name of the microservice
    MOCK_IMAGE_NAME: 'mock_osm_nbi' # name of the mock 
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: build
  before_script:
    - docker image prune --force
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
    - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
  after_script:
    - docker image prune --force
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
    - changes:
      - src/common/**/*.py
      - proto/*.proto
      - src/$IMAGE_NAME/**/*.{py,in,yml}
      - src/$IMAGE_NAME/Dockerfile
      - src/$IMAGE_NAME/tests/*.py
      - manifests/${IMAGE_NAME}service.yaml
      - src/tests/tools/mock_osm_nbi/**/*.{py,in,yml,yaml,yang,sh,json}
      - src/tests/tools/mock_osm_nbi/Dockerfile
      - src/tests/.gitlab-ci.yml
      - .gitlab-ci.yml

# Apply unit test to the component
unit_test osm_client:
  variables:
    IMAGE_NAME: 'osm_client' # name of the microservice
    MOCK_IMAGE_NAME: 'mock_osm_nbi'
    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
  stage: unit_test
  needs:
    - build osm_client
    - build mock_osm_nbi
  before_script:
    # Do Docker cleanup
    - docker ps --all --quiet | xargs --no-run-if-empty docker stop
    - docker container prune --force
    - docker ps --all --quiet | xargs --no-run-if-empty docker rm --force
    - docker image prune --force
    - docker network prune --force
    - docker volume prune --all --force
    - docker buildx prune --force

    # Login Docker repository
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
    - docker pull "$CI_REGISTRY_IMAGE/mock-osm-nbi:test"
    - docker network create -d bridge teraflowbridge
    - >
      docker run --name mock_osm_nbi -d 
      --network=teraflowbridge
      --env LOG_LEVEL=DEBUG
      --env FLASK_ENV=development
      $CI_REGISTRY_IMAGE/mock-osm-nbi:test
    - >
      docker run --name $IMAGE_NAME -d -v "$PWD/src/$IMAGE_NAME/tests:/opt/results" 
      --network=teraflowbridge
      --env LOG_LEVEL=DEBUG
      --env FLASK_ENV=development
      --env OSM_ADDRESS=mock_osm_nbi
      $CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG
    - while ! docker logs $IMAGE_NAME 2>&1 | grep -q 'Running...'; do sleep 1; done
    - docker ps -a
    - docker logs $IMAGE_NAME
    - docker logs mock_osm_nbi
    - docker exec -i $IMAGE_NAME bash -c "coverage run -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary.py --junitxml=/opt/results/${IMAGE_NAME}_report_unitary.xml"
    - docker exec -i $IMAGE_NAME bash -c "coverage report --include='${IMAGE_NAME}/*' --show-missing"
  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
  after_script:
    - docker logs $IMAGE_NAME
    - docker logs mock_osm_nbi

    # Do Docker cleanup
    - docker ps --all --quiet | xargs --no-run-if-empty docker stop
    - docker container prune --force
    - docker ps --all --quiet | xargs --no-run-if-empty docker rm --force
    - docker image prune --force
    - docker network prune --force
    - docker volume prune --all --force
    - docker buildx prune --force

  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
    - changes:
      - src/common/**/*.py
      - proto/*.proto
      - src/$IMAGE_NAME/**/*.{py,in,yml}
      - src/$IMAGE_NAME/Dockerfile
      - src/$IMAGE_NAME/tests/*.py
      - manifests/${IMAGE_NAME}service.yaml
      - src/tests/tools/mock_osm_nbi/**/*.{py,in,yml,yaml,yang,sh,json}
      - src/tests/tools/mock_osm_nbi/Dockerfile
      - src/tests/.gitlab-ci.yml
      - .gitlab-ci.yml
  artifacts:
      when: always
      reports:
        junit: src/$IMAGE_NAME/tests/${IMAGE_NAME}_report_*.xml
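
Note: the coverage: expression in the job above is the regular expression GitLab applies to the job log to extract the coverage percentage; it matches the final TOTAL row emitted by "coverage report". A quick self-check of that pattern in Python (a sketch; the column values are illustrative):

    import re

    # Final line emitted by `coverage report` (illustrative numbers:
    # statements, misses, percent covered)
    line = 'TOTAL     120     30    75%'
    match = re.search(r'TOTAL\s+\d+\s+\d+\s+(\d+%)', line)
    assert match is not None and match.group(1) == '75%'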
@@ -16,9 +16,9 @@ FROM python:3.10.16-slim


 # Install dependencies
-RUN apt-get --yes --quiet --quiet update
-RUN apt-get --yes --quiet --quiet install wget g++ git build-essential cmake make git \
-    libpcre2-dev python3-dev python3-pip python3-cffi curl software-properties-common && \
+# Unneeded: build-essential cmake libpcre2-dev python3-dev python3-pip python3-cffi curl software-properties-common libmagic-dev
+RUN apt-get --yes --quiet --quiet update && \
+    apt-get --yes --quiet --quiet install wget g++ git make libmagic1 && \
     rm -rf /var/lib/apt/lists/*

 # Set Python to show logs as they occur
@@ -62,9 +62,11 @@ WORKDIR /var/teraflow/osm_client
 ENV OSM_CLIENT_VERSION=v16.0
 RUN python3 -m pip install -r "https://osm.etsi.org/gitweb/?p=osm/IM.git;a=blob_plain;f=requirements.txt;hb=${OSM_CLIENT_VERSION}"
 RUN python3 -m pip install "git+https://osm.etsi.org/gerrit/osm/IM.git@${OSM_CLIENT_VERSION}#egg=osm-im" --upgrade
+
 #Clone OsmCLient code
 RUN git clone https://osm.etsi.org/gerrit/osm/osmclient
 RUN git -C osmclient checkout ${OSM_CLIENT_VERSION}
+
 # Install osmClient using pip
 RUN python3 -m pip install -r osmclient/requirements.txt
 RUN python3 -m pip install ./osmclient
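
A quick way to sanity-check the pinned installs above inside the built image is to import the resulting modules; a hedged sketch (the module names osm_im and osmclient are assumed from the packages installed here):

    # Hypothetical smoke test: run inside the built image to confirm the
    # pinned osm-im and osmclient installs import cleanly.
    import importlib

    for module_name in ('osm_im', 'osmclient'):
        importlib.import_module(module_name)
    print('osm-im and osmclient import OK')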
@@ -16,7 +16,10 @@ import grpc, logging
 from common.method_wrappers.Decorator import MetricsPool, safe_and_metered_rpc_method
 from common.tools.grpc.Tools import grpc_message_to_json_string
 from common.proto.context_pb2 import (Empty)
-from common.proto.osm_client_pb2 import CreateRequest, CreateResponse, NsiListResponse, GetRequest, GetResponse, DeleteRequest, DeleteResponse
+from common.proto.osm_client_pb2 import (
+    CreateRequest, CreateResponse, NsiListResponse, GetRequest, GetResponse,
+    DeleteRequest, DeleteResponse
+)
 from common.proto.osm_client_pb2_grpc import OsmServiceServicer
 from osmclient import client
 from osmclient.common.exceptions import ClientException
@@ -53,7 +53,7 @@ def main():
     grpc_service = OsmClientService()
     grpc_service.start()

-    LOGGER.debug('Configured Rules:')
+    LOGGER.info('Running...')

     # Wait for Ctrl+C or termination signal
     while not terminate.wait(timeout=1.0): pass
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest, os

from common.Settings import (
    ENVVAR_SUFIX_SERVICE_HOST, ENVVAR_SUFIX_SERVICE_PORT_GRPC,
    ENVVAR_SUFIX_SERVICE_PORT_HTTP, get_env_var_name, get_service_port_grpc
)

from common.Constants import ServiceNameEnum
from osm_client.client.OsmClient import OsmClient
from osm_client.service.OsmClientService import OsmClientService

LOCAL_HOST = '127.0.0.1'
GRPC_PORT = 10000 + int(get_service_port_grpc(ServiceNameEnum.OSMCLIENT))

os.environ[get_env_var_name(ServiceNameEnum.OSMCLIENT, ENVVAR_SUFIX_SERVICE_HOST     )] = str(LOCAL_HOST)
os.environ[get_env_var_name(ServiceNameEnum.OSMCLIENT, ENVVAR_SUFIX_SERVICE_PORT_GRPC)] = str(GRPC_PORT)

@pytest.fixture(scope='session')
def osm_client_service(): # pylint: disable=redefined-outer-name
    _service = OsmClientService()
    _service.start()
    yield _service
    _service.stop()

@pytest.fixture(scope='session')
def osm_client(osm_client_service : OsmClientService):    # pylint: disable=redefined-outer-name
    _client = OsmClient()
    yield _client
    _client.close()
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@@ -12,31 +12,33 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import libyang, os
-from typing import Dict, Optional
-
-YANG_DIR = os.path.join(os.path.dirname(__file__), 'yang')
-
-class YangValidator:
-    def __init__(self, main_module : str, dependency_modules : [str]) -> None:
-        self._yang_context = libyang.Context(YANG_DIR)
-
-        self._yang_module = self._yang_context.load_module(main_module)
-        mods = [self._yang_context.load_module(mod) for mod in dependency_modules] + [self._yang_module]
-
-        for mod in mods:
-            mod.feature_enable_all()
-
-    def parse_to_dict(self, message : Dict) -> Dict:
-        dnode : Optional[libyang.DNode] = self._yang_module.parse_data_dict(
-            message, validate_present=True, validate=True, strict=True
-        )
-        if dnode is None: raise Exception('Unable to parse Message({:s})'.format(str(message)))
-        message = dnode.print_dict()
-        dnode.free()
-        return message
-
-    def destroy(self) -> None:
-        self._yang_context.destroy()
+import grpc, pytest
+from osm_client.client.OsmClient import OsmClient
+from common.proto.osm_client_pb2 import CreateRequest, CreateResponse, NsiListResponse
+from common.proto.context_pb2 import Empty
+
+from .PrepareTestScenario import ( # pylint: disable=unused-import
+    # be careful, order of symbols is important here!
+    osm_client_service, osm_client
+)
+
+def test_OsmClient(
+    osm_client : OsmClient,
+):  # pylint: disable=redefined-outer-name
+
+    nbi_list_request = Empty()
+
+    osm_list_reply = osm_client.NsiList(nbi_list_request)
+    assert len(osm_list_reply.id) == 0
+
+    nbi_create_request = CreateRequest()
+    nbi_create_request.nst_name = "nst1"
+    nbi_create_request.nsi_name = "nsi1"
+    nbi_create_request.account = "account1"
+
+    osm_create_reply = osm_client.NsiCreate(nbi_create_request)
+    assert osm_create_reply.succeded == True
+
+    osm_list_reply2 = osm_client.NsiList(nbi_list_request)
+    assert len(osm_list_reply2.id) == 1
@@ -19,6 +19,7 @@ build pathcomp:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     # This first build tags the builder resulting image to prevent being removed by dangling image removal command
@@ -32,7 +33,7 @@ build pathcomp:
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-backend:${IMAGE_TAG}"
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-frontend:${IMAGE_TAG}"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -26,7 +26,7 @@ docker run --name pathcomp-backend -d --network=tfbr --ip 172.28.0.2 pathcomp-b
 docker rm -f pathcomp-frontend pathcomp-backend
 docker network rm tfbr

-docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+docker image prune --force

 docker exec -i pathcomp bash -c "pytest --log-level=INFO --verbose pathcomp/tests/test_unitary.py"

@@ -20,13 +20,16 @@ variables:
 # Package application needed to run tests & build the image on next stage
 build policy:
   stage: build
+  before_script:
+    - docker image prune --force
+    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - export IMAGE_TAG=$(grep -m1 '<version>' ./src/$IMAGE_NAME_POLICY/pom.xml | grep -oP  '(?<=>).*(?=<)')
     - echo "IMAGE_TAG=${IMAGE_TAG}" >> ${BUILD_ENV_POLICY}
     - cat ${BUILD_ENV_POLICY}
     - docker buildx build -t "$IMAGE_NAME_POLICY:$IMAGE_TAG" -f ./src/$IMAGE_NAME_POLICY/src/main/docker/Dockerfile.multistage.jvm ./src/$IMAGE_NAME_POLICY/ --target builder
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   artifacts:
     reports:
       dotenv: ${BUILD_ENV_POLICY}
@@ -41,49 +41,7 @@ import org.etsi.tfs.policy.acl.AclLogActionEnum;
 import org.etsi.tfs.policy.acl.AclMatch;
 import org.etsi.tfs.policy.acl.AclRuleSet;
 import org.etsi.tfs.policy.acl.AclRuleTypeEnum;
-import org.etsi.tfs.policy.context.model.ConfigActionEnum;
-import org.etsi.tfs.policy.context.model.ConfigRule;
-import org.etsi.tfs.policy.context.model.ConfigRuleAcl;
-import org.etsi.tfs.policy.context.model.ConfigRuleCustom;
-import org.etsi.tfs.policy.context.model.ConfigRuleTypeAcl;
-import org.etsi.tfs.policy.context.model.ConfigRuleTypeCustom;
-import org.etsi.tfs.policy.context.model.Constraint;
-import org.etsi.tfs.policy.context.model.ConstraintCustom;
-import org.etsi.tfs.policy.context.model.ConstraintEndPointLocation;
-import org.etsi.tfs.policy.context.model.ConstraintSchedule;
-import org.etsi.tfs.policy.context.model.ConstraintSlaAvailability;
-import org.etsi.tfs.policy.context.model.ConstraintSlaCapacity;
-import org.etsi.tfs.policy.context.model.ConstraintSlaIsolationLevel;
-import org.etsi.tfs.policy.context.model.ConstraintSlaLatency;
-import org.etsi.tfs.policy.context.model.ConstraintTypeCustom;
-import org.etsi.tfs.policy.context.model.ConstraintTypeEndPointLocation;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSchedule;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSlaAvailability;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSlaCapacity;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSlaIsolationLevel;
-import org.etsi.tfs.policy.context.model.ConstraintTypeSlaLatency;
-import org.etsi.tfs.policy.context.model.Device;
-import org.etsi.tfs.policy.context.model.DeviceConfig;
-import org.etsi.tfs.policy.context.model.DeviceDriverEnum;
-import org.etsi.tfs.policy.context.model.DeviceOperationalStatus;
-import org.etsi.tfs.policy.context.model.Empty;
-import org.etsi.tfs.policy.context.model.EndPoint;
-import org.etsi.tfs.policy.context.model.EndPointId;
-import org.etsi.tfs.policy.context.model.Event;
-import org.etsi.tfs.policy.context.model.EventTypeEnum;
-import org.etsi.tfs.policy.context.model.GpsPosition;
-import org.etsi.tfs.policy.context.model.IsolationLevelEnum;
-import org.etsi.tfs.policy.context.model.Location;
-import org.etsi.tfs.policy.context.model.LocationTypeGpsPosition;
-import org.etsi.tfs.policy.context.model.LocationTypeRegion;
-import org.etsi.tfs.policy.context.model.Service;
-import org.etsi.tfs.policy.context.model.ServiceConfig;
-import org.etsi.tfs.policy.context.model.ServiceId;
-import org.etsi.tfs.policy.context.model.ServiceStatus;
-import org.etsi.tfs.policy.context.model.ServiceStatusEnum;
-import org.etsi.tfs.policy.context.model.ServiceTypeEnum;
-import org.etsi.tfs.policy.context.model.SliceId;
-import org.etsi.tfs.policy.context.model.TopologyId;
+import org.etsi.tfs.policy.context.model.*;
 import org.etsi.tfs.policy.kpi_sample_types.model.KpiSampleType;
 import org.etsi.tfs.policy.monitoring.model.AlarmDescriptor;
 import org.etsi.tfs.policy.monitoring.model.AlarmResponse;
@@ -904,6 +862,22 @@ public class Serializer {
             builder.setSlaLatency(serializedConstraintSlaLatency);
         }

+        if (constraintTypeSpecificType instanceof ConstraintExclusions) {
+            final var isPermanent = ((ConstraintExclusions) constraintTypeSpecificType).isPermanent();
+            final var deviceIds = ((ConstraintExclusions) constraintTypeSpecificType).getDeviceIds();
+
+            final var serializedDeviceIds =
+                    deviceIds.stream().map(this::serializeDeviceId).collect(Collectors.toList());
+
+            final var serializedConstraintExclusions =
+                    ContextOuterClass.Constraint_Exclusions.newBuilder()
+                            .setIsPermanent(isPermanent)
+                            .addAllDeviceIds(serializedDeviceIds)
+                            .build();
+
+            builder.setExclusions(serializedConstraintExclusions);
+        }
+
         return builder.build();
     }

@@ -982,6 +956,21 @@ public class Serializer {
                         new ConstraintTypeSlaIsolationLevel(constraintSlaIsolation);

                 return new Constraint(constraintTypeSlaIsolation);
+            case EXCLUSIONS:
+                final var exclusions = serializedConstraint.getExclusions();
+
+                final var isPermanent = exclusions.getIsPermanent();
+                final var serializedDevices = exclusions.getDeviceIdsList();
+
+                final var deviceIds =
+                        serializedDevices.stream().map(this::deserialize).collect(Collectors.toList());
+
+                final var constraintExclusions =
+                        new org.etsi.tfs.policy.context.model.ConstraintExclusions(
+                                isPermanent, deviceIds, new ArrayList<>(), new ArrayList<>());
+                final var constraintTypeExclusions =
+                        new org.etsi.tfs.policy.context.model.ConstraintTypeExclusions(constraintExclusions);
+                return new Constraint(constraintTypeExclusions);
+
             default:
             case CONSTRAINT_NOT_SET:
/*
 * Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.etsi.tfs.policy.context.model;

import java.util.List;

public class ConstraintExclusions {

    private final boolean isPermanent;
    private final List<String> deviceIds;
    private final List<EndPointId> endpointIds;
    private final List<LinkId> linkIds;

    public ConstraintExclusions(
            boolean isPermanent,
            List<String> deviceIds,
            List<EndPointId> endpointIds,
            List<LinkId> linkIds) {
        this.isPermanent = isPermanent;
        this.deviceIds = deviceIds;
        this.endpointIds = endpointIds;
        this.linkIds = linkIds;
    }

    public boolean isPermanent() {
        return isPermanent;
    }

    public List<String> getDeviceIds() {
        return deviceIds;
    }

    public List<EndPointId> getEndpointIds() {
        return endpointIds;
    }

    public List<LinkId> getLinkIds() {
        return linkIds;
    }

    @Override
    public String toString() {
        return "ConstraintExclusions{"
                + "permanent="
                + isPermanent
                + ", deviceIds="
                + deviceIds
                + ", endpointIds="
                + endpointIds
                + ", linkIds="
                + linkIds
                + '}';
    }
}
/*
 * Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.etsi.tfs.policy.context.model;

public class ConstraintTypeExclusions implements ConstraintType<ConstraintExclusions> {
    private final ConstraintExclusions constraintExclusions;

    public ConstraintTypeExclusions(ConstraintExclusions constraintExclusions) {
        this.constraintExclusions = constraintExclusions;
    }

    @Override
    public ConstraintExclusions getConstraintType() {
        return this.constraintExclusions;
    }

    @Override
    public String toString() {
        return String.format("%s:{%s}", getClass().getSimpleName(), constraintExclusions);
    }
}
/*
 * Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.etsi.tfs.policy.context.model;

public class DeviceId {

    private final String id;

    public DeviceId(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    @Override
    public String toString() {
        return "DeviceId{" + "id='" + id + '\'' + '}';
    }
}
/*
 * Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.etsi.tfs.policy.context.model;

public class LinkId {

    private final String id;

    public LinkId(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    @Override
    public String toString() {
        return "LinkId{" + "id='" + id + '\'' + '}';
    }
}
@@ -135,6 +135,8 @@ public class CommonPolicyServiceImpl {
                 addServiceConfigRule(policyRuleService, policyRuleAction);
             case POLICY_RULE_ACTION_RECALCULATE_PATH:
                 callRecalculatePathRPC(policyRuleService, policyRuleAction);
+            case POLICY_RULE_ACTION_CALL_SERVICE_RPC:
+                callUpdateServiceRpc(policyRuleService, policyRuleAction);
             default:
                 LOGGER.errorf(INVALID_MESSAGE, policyRuleAction.getPolicyRuleActionEnum());
                 return;
@@ -509,6 +511,26 @@ public class CommonPolicyServiceImpl {
                         });
     }

+    private void callUpdateServiceRpc(
+            PolicyRuleService policyRuleService, PolicyRuleAction policyRuleAction) {
+
+        final var deserializedServiceUni = contextService.getService(policyRuleService.getServiceId());
+
+        deserializedServiceUni
+                .subscribe()
+                .with(
+                        deserializedService -> {
+                            serviceService
+                                    .updateService(deserializedService)
+                                    .subscribe()
+                                    .with(
+                                            x -> {
+                                                LOGGER.info(deserializedService);
+                                                setPolicyRuleServiceToContext(policyRuleService, ENFORCED_POLICYRULE_STATE);
+                                            });
+                        });
+    }
+
     private void callRecalculatePathRPC(
             PolicyRuleService policyRuleService, PolicyRuleAction policyRuleAction) {

@@ -65,15 +65,21 @@ public class PolicyRuleConditionValidator {
         return contextService
                 .getService(serviceId)
                 .onFailure()
-                .recoverWithItem((Service) null)
+                .invoke(
+                        throwable ->
+                                LOGGER.error(
+                                        "Failed to get service: " + serviceId + "Message " + throwable.getMessage(),
+                                        throwable))
+                //                .recoverWithItem((Service) null)
                 .onItem()
                 .transform(service -> checkIfServiceIsValid(service, serviceId, deviceIds));
     }

     private boolean checkIfServiceIsValid(
             Service service, ServiceId serviceId, List<String> deviceIds) {
-        return (checkIfServiceIdExists(service, serviceId)
-                && checkIfServicesDeviceIdsExist(service, deviceIds));
+        boolean checkIfServiceIdExists = checkIfServiceIdExists(service, serviceId);
+        boolean checkIfServicesDeviceIdsExist = checkIfServicesDeviceIdsExist(service, deviceIds);
+        return (checkIfServiceIdExists && checkIfServicesDeviceIdsExist);
     }

     private boolean checkIfServiceIdExists(Service service, ServiceId serviceId) {
@@ -127,6 +127,94 @@ public final class KpiSampleTypes {
         * <code>KPISAMPLETYPE_SERVICE_LATENCY_MS_AGG_OUTPUT = 1701;</code>
         */
        KPISAMPLETYPE_SERVICE_LATENCY_MS_AGG_OUTPUT(1701),
        /**
         * <pre>
         * INT KPIs
         * </pre>
         *
         * <code>KPISAMPLETYPE_INT_SEQ_NUM = 2001;</code>
         */
        KPISAMPLETYPE_INT_SEQ_NUM(2001),
        /**
         * <code>KPISAMPLETYPE_INT_TS_ING = 2002;</code>
         */
        KPISAMPLETYPE_INT_TS_ING(2002),
        /**
         * <code>KPISAMPLETYPE_INT_TS_EGR = 2003;</code>
         */
        KPISAMPLETYPE_INT_TS_EGR(2003),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT = 2004;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT(2004),
        /**
         * <code>KPISAMPLETYPE_INT_PORT_ID_ING = 2005;</code>
         */
        KPISAMPLETYPE_INT_PORT_ID_ING(2005),
        /**
         * <code>KPISAMPLETYPE_INT_PORT_ID_EGR = 2006;</code>
         */
        KPISAMPLETYPE_INT_PORT_ID_EGR(2006),
        /**
         * <code>KPISAMPLETYPE_INT_QUEUE_OCCUP = 2007;</code>
         */
        KPISAMPLETYPE_INT_QUEUE_OCCUP(2007),
        /**
         * <code>KPISAMPLETYPE_INT_QUEUE_ID = 2008;</code>
         */
        KPISAMPLETYPE_INT_QUEUE_ID(2008),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW01 = 2101;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW01(2101),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW02 = 2102;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW02(2102),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW03 = 2103;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW03(2103),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW04 = 2104;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW04(2104),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW05 = 2105;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW05(2105),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW06 = 2106;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW06(2106),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW07 = 2107;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW07(2107),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW08 = 2108;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW08(2108),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW09 = 2109;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW09(2109),
        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW10 = 2110;</code>
         */
        KPISAMPLETYPE_INT_HOP_LAT_SW10(2110),
        /**
         * <code>KPISAMPLETYPE_INT_LAT_ON_TOTAL = 2120;</code>
         */
        KPISAMPLETYPE_INT_LAT_ON_TOTAL(2120),
        /**
         * <code>KPISAMPLETYPE_INT_IS_DROP = 2201;</code>
         */
        KPISAMPLETYPE_INT_IS_DROP(2201),
        /**
         * <code>KPISAMPLETYPE_INT_DROP_REASON = 2202;</code>
         */
        KPISAMPLETYPE_INT_DROP_REASON(2202),
        UNRECOGNIZED(-1);

        /**
@@ -261,6 +349,115 @@ public final class KpiSampleTypes {
         */
        public static final int KPISAMPLETYPE_SERVICE_LATENCY_MS_AGG_OUTPUT_VALUE = 1701;

        /**
         * <pre>
         * INT KPIs
         * </pre>
         *
         * <code>KPISAMPLETYPE_INT_SEQ_NUM = 2001;</code>
         */
        public static final int KPISAMPLETYPE_INT_SEQ_NUM_VALUE = 2001;

        /**
         * <code>KPISAMPLETYPE_INT_TS_ING = 2002;</code>
         */
        public static final int KPISAMPLETYPE_INT_TS_ING_VALUE = 2002;

        /**
         * <code>KPISAMPLETYPE_INT_TS_EGR = 2003;</code>
         */
        public static final int KPISAMPLETYPE_INT_TS_EGR_VALUE = 2003;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT = 2004;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_VALUE = 2004;

        /**
         * <code>KPISAMPLETYPE_INT_PORT_ID_ING = 2005;</code>
         */
        public static final int KPISAMPLETYPE_INT_PORT_ID_ING_VALUE = 2005;

        /**
         * <code>KPISAMPLETYPE_INT_PORT_ID_EGR = 2006;</code>
         */
        public static final int KPISAMPLETYPE_INT_PORT_ID_EGR_VALUE = 2006;

        /**
         * <code>KPISAMPLETYPE_INT_QUEUE_OCCUP = 2007;</code>
         */
        public static final int KPISAMPLETYPE_INT_QUEUE_OCCUP_VALUE = 2007;

        /**
         * <code>KPISAMPLETYPE_INT_QUEUE_ID = 2008;</code>
         */
        public static final int KPISAMPLETYPE_INT_QUEUE_ID_VALUE = 2008;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW01 = 2101;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW01_VALUE = 2101;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW02 = 2102;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW02_VALUE = 2102;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW03 = 2103;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW03_VALUE = 2103;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW04 = 2104;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW04_VALUE = 2104;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW05 = 2105;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW05_VALUE = 2105;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW06 = 2106;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW06_VALUE = 2106;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW07 = 2107;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW07_VALUE = 2107;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW08 = 2108;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW08_VALUE = 2108;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW09 = 2109;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW09_VALUE = 2109;

        /**
         * <code>KPISAMPLETYPE_INT_HOP_LAT_SW10 = 2110;</code>
         */
        public static final int KPISAMPLETYPE_INT_HOP_LAT_SW10_VALUE = 2110;

        /**
         * <code>KPISAMPLETYPE_INT_LAT_ON_TOTAL = 2120;</code>
         */
        public static final int KPISAMPLETYPE_INT_LAT_ON_TOTAL_VALUE = 2120;

        /**
         * <code>KPISAMPLETYPE_INT_IS_DROP = 2201;</code>
         */
        public static final int KPISAMPLETYPE_INT_IS_DROP_VALUE = 2201;

        /**
         * <code>KPISAMPLETYPE_INT_DROP_REASON = 2202;</code>
         */
        public static final int KPISAMPLETYPE_INT_DROP_REASON_VALUE = 2202;

        public final int getNumber() {
            if (this == UNRECOGNIZED) {
                throw new java.lang.IllegalArgumentException("Can't get the number of an unknown enum value.");
@@ -332,6 +529,48 @@ public final class KpiSampleTypes {
                    return KPISAMPLETYPE_BYTES_DROPPED_AGG_OUTPUT;
                case 1701:
                    return KPISAMPLETYPE_SERVICE_LATENCY_MS_AGG_OUTPUT;
                case 2001:
                    return KPISAMPLETYPE_INT_SEQ_NUM;
                case 2002:
                    return KPISAMPLETYPE_INT_TS_ING;
                case 2003:
                    return KPISAMPLETYPE_INT_TS_EGR;
                case 2004:
                    return KPISAMPLETYPE_INT_HOP_LAT;
                case 2005:
                    return KPISAMPLETYPE_INT_PORT_ID_ING;
                case 2006:
                    return KPISAMPLETYPE_INT_PORT_ID_EGR;
                case 2007:
                    return KPISAMPLETYPE_INT_QUEUE_OCCUP;
                case 2008:
                    return KPISAMPLETYPE_INT_QUEUE_ID;
                case 2101:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW01;
                case 2102:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW02;
                case 2103:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW03;
                case 2104:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW04;
                case 2105:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW05;
                case 2106:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW06;
                case 2107:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW07;
                case 2108:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW08;
                case 2109:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW09;
                case 2110:
                    return KPISAMPLETYPE_INT_HOP_LAT_SW10;
                case 2120:
                    return KPISAMPLETYPE_INT_LAT_ON_TOTAL;
                case 2201:
                    return KPISAMPLETYPE_INT_IS_DROP;
                case 2202:
                    return KPISAMPLETYPE_INT_DROP_REASON;
                default:
                    return null;
            }
@@ -389,7 +628,7 @@ public final class KpiSampleTypes {
    private static com.google.protobuf.Descriptors.FileDescriptor descriptor;

    static {
        java.lang.String[] descriptorData = { "\n\026kpi_sample_types.proto\022\020kpi_sample_typ" + "es*\200\010\n\rKpiSampleType\022\031\n\025KPISAMPLETYPE_UN" + "KNOWN\020\000\022%\n!KPISAMPLETYPE_PACKETS_TRANSMI" + "TTED\020e\022\"\n\036KPISAMPLETYPE_PACKETS_RECEIVED" + "\020f\022!\n\035KPISAMPLETYPE_PACKETS_DROPPED\020g\022$\n" + "\037KPISAMPLETYPE_BYTES_TRANSMITTED\020\311\001\022!\n\034K" + "PISAMPLETYPE_BYTES_RECEIVED\020\312\001\022 \n\033KPISAM" + "PLETYPE_BYTES_DROPPED\020\313\001\022+\n&KPISAMPLETYP" + "E_LINK_TOTAL_CAPACITY_GBPS\020\255\002\022*\n%KPISAMP" + "LETYPE_LINK_USED_CAPACITY_GBPS\020\256\002\022 \n\033KPI" + "SAMPLETYPE_ML_CONFIDENCE\020\221\003\022*\n%KPISAMPLE" + "TYPE_OPTICAL_SECURITY_STATUS\020\365\003\022)\n$KPISA" + "MPLETYPE_L3_UNIQUE_ATTACK_CONNS\020\331\004\022*\n%KP" + "ISAMPLETYPE_L3_TOTAL_DROPPED_PACKTS\020\332\004\022&" + "\n!KPISAMPLETYPE_L3_UNIQUE_ATTACKERS\020\333\004\0220" + "\n+KPISAMPLETYPE_L3_UNIQUE_COMPROMISED_CL" + "IENTS\020\334\004\022,\n\'KPISAMPLETYPE_L3_SECURITY_ST" + "ATUS_CRYPTO\020\335\004\022%\n KPISAMPLETYPE_SERVICE_" + "LATENCY_MS\020\275\005\0221\n,KPISAMPLETYPE_PACKETS_T" + "RANSMITTED_AGG_OUTPUT\020\315\010\022.\n)KPISAMPLETYP" + "E_PACKETS_RECEIVED_AGG_OUTPUT\020\316\010\022-\n(KPIS" + "AMPLETYPE_PACKETS_DROPPED_AGG_OUTPUT\020\317\010\022" + "/\n*KPISAMPLETYPE_BYTES_TRANSMITTED_AGG_O" + "UTPUT\020\261\t\022,\n\'KPISAMPLETYPE_BYTES_RECEIVED" + "_AGG_OUTPUT\020\262\t\022+\n&KPISAMPLETYPE_BYTES_DR" + "OPPED_AGG_OUTPUT\020\263\t\0220\n+KPISAMPLETYPE_SER" + "VICE_LATENCY_MS_AGG_OUTPUT\020\245\rb\006proto3" };
        java.lang.String[] descriptorData = { "\n\026kpi_sample_types.proto\022\020kpi_sample_typ" + "es*\346\r\n\rKpiSampleType\022\031\n\025KPISAMPLETYPE_UN" + "KNOWN\020\000\022%\n!KPISAMPLETYPE_PACKETS_TRANSMI" + "TTED\020e\022\"\n\036KPISAMPLETYPE_PACKETS_RECEIVED" + "\020f\022!\n\035KPISAMPLETYPE_PACKETS_DROPPED\020g\022$\n" + "\037KPISAMPLETYPE_BYTES_TRANSMITTED\020\311\001\022!\n\034K" + "PISAMPLETYPE_BYTES_RECEIVED\020\312\001\022 \n\033KPISAM" + "PLETYPE_BYTES_DROPPED\020\313\001\022+\n&KPISAMPLETYP" + "E_LINK_TOTAL_CAPACITY_GBPS\020\255\002\022*\n%KPISAMP" + "LETYPE_LINK_USED_CAPACITY_GBPS\020\256\002\022 \n\033KPI" + "SAMPLETYPE_ML_CONFIDENCE\020\221\003\022*\n%KPISAMPLE" + "TYPE_OPTICAL_SECURITY_STATUS\020\365\003\022)\n$KPISA" + "MPLETYPE_L3_UNIQUE_ATTACK_CONNS\020\331\004\022*\n%KP" + "ISAMPLETYPE_L3_TOTAL_DROPPED_PACKTS\020\332\004\022&" + "\n!KPISAMPLETYPE_L3_UNIQUE_ATTACKERS\020\333\004\0220" + "\n+KPISAMPLETYPE_L3_UNIQUE_COMPROMISED_CL" + "IENTS\020\334\004\022,\n\'KPISAMPLETYPE_L3_SECURITY_ST" + "ATUS_CRYPTO\020\335\004\022%\n KPISAMPLETYPE_SERVICE_" + "LATENCY_MS\020\275\005\0221\n,KPISAMPLETYPE_PACKETS_T" + "RANSMITTED_AGG_OUTPUT\020\315\010\022.\n)KPISAMPLETYP" + "E_PACKETS_RECEIVED_AGG_OUTPUT\020\316\010\022-\n(KPIS" + "AMPLETYPE_PACKETS_DROPPED_AGG_OUTPUT\020\317\010\022" + "/\n*KPISAMPLETYPE_BYTES_TRANSMITTED_AGG_O" + "UTPUT\020\261\t\022,\n\'KPISAMPLETYPE_BYTES_RECEIVED" + "_AGG_OUTPUT\020\262\t\022+\n&KPISAMPLETYPE_BYTES_DR" + "OPPED_AGG_OUTPUT\020\263\t\0220\n+KPISAMPLETYPE_SER" + "VICE_LATENCY_MS_AGG_OUTPUT\020\245\r\022\036\n\031KPISAMP" + "LETYPE_INT_SEQ_NUM\020\321\017\022\035\n\030KPISAMPLETYPE_I" + "NT_TS_ING\020\322\017\022\035\n\030KPISAMPLETYPE_INT_TS_EGR" + "\020\323\017\022\036\n\031KPISAMPLETYPE_INT_HOP_LAT\020\324\017\022\"\n\035K" + "PISAMPLETYPE_INT_PORT_ID_ING\020\325\017\022\"\n\035KPISA" + "MPLETYPE_INT_PORT_ID_EGR\020\326\017\022\"\n\035KPISAMPLE" + "TYPE_INT_QUEUE_OCCUP\020\327\017\022\037\n\032KPISAMPLETYPE" + "_INT_QUEUE_ID\020\330\017\022#\n\036KPISAMPLETYPE_INT_HO" + "P_LAT_SW01\020\265\020\022#\n\036KPISAMPLETYPE_INT_HOP_L" + "AT_SW02\020\266\020\022#\n\036KPISAMPLETYPE_INT_HOP_LAT_" + "SW03\020\267\020\022#\n\036KPISAMPLETYPE_INT_HOP_LAT_SW0" + "4\020\270\020\022#\n\036KPISAMPLETYPE_INT_HOP_LAT_SW05\020\271" + "\020\022#\n\036KPISAMPLETYPE_INT_HOP_LAT_SW06\020\272\020\022#" + "\n\036KPISAMPLETYPE_INT_HOP_LAT_SW07\020\273\020\022#\n\036K" + "PISAMPLETYPE_INT_HOP_LAT_SW08\020\274\020\022#\n\036KPIS" + "AMPLETYPE_INT_HOP_LAT_SW09\020\275\020\022#\n\036KPISAMP" + "LETYPE_INT_HOP_LAT_SW10\020\276\020\022#\n\036KPISAMPLET" + "YPE_INT_LAT_ON_TOTAL\020\310\020\022\036\n\031KPISAMPLETYPE" + "_INT_IS_DROP\020\231\021\022\"\n\035KPISAMPLETYPE_INT_DRO" + "P_REASON\020\232\021b\006proto3" };
        descriptor = com.google.protobuf.Descriptors.FileDescriptor.internalBuildGeneratedFileFrom(descriptorData, new com.google.protobuf.Descriptors.FileDescriptor[] {});
    }
    // @@protoc_insertion_point(outer_class_scope)
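
The new INT KPI identifiers above follow a simple banding scheme: 2001-2008 carry per-packet INT fields, 2101-2110 per-switch hop latency (SW01..SW10), 2120 the total on-path latency, and 2201-2202 drop information. A small illustrative classifier over those bands (a sketch, not part of the generated code):

    # Bands taken from the KpiSampleType values added in this change
    INT_KPI_BANDS = {
        range(2001, 2009): 'per-packet INT fields (seq num, timestamps, hop latency, ports, queue)',
        range(2101, 2111): 'per-switch hop latency (SW01..SW10)',
        range(2120, 2121): 'total on-path latency',
        range(2201, 2203): 'drop indication and reason',
    }

    def classify_int_kpi(value: int) -> str:
        for band, description in INT_KPI_BANDS.items():
            if value in band:
                return description
        return 'not an INT KPI'

    assert classify_int_kpi(2104) == 'per-switch hop latency (SW01..SW10)'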
@@ -12,19 +12,20 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-build app:
+build qkd_app:
   variables:
     IMAGE_NAME: 'qkd_app' # name of the microservice
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - docker build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
     - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -37,44 +38,86 @@ build app:
       - manifests/${IMAGE_NAME}service.yaml
       - .gitlab-ci.yml

-# Apply unit test to the component
-unit_test app:
-  variables:
-    IMAGE_NAME: 'qkd_app' # name of the microservice
-    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
-  stage: unit_test
-  needs:
-    - build app
-    - unit_test service
-  before_script:
-    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
-    - if docker network list | grep teraflowbridge; then echo "teraflowbridge is already created"; else docker network create -d bridge teraflowbridge; fi
-    - if docker container ls | grep $IMAGE_NAME; then docker rm -f $IMAGE_NAME; else echo "$IMAGE_NAME image is not in the system"; fi
-  script:
-    - docker pull "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
-    - docker run --name $IMAGE_NAME -d -p 10070:10070 -p 8005:8005 -v "$PWD/src/$IMAGE_NAME/tests:/opt/results" --network=teraflowbridge $CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG
-    - sleep 5
-    - docker ps -a
-    - docker logs $IMAGE_NAME
-    - docker exec -i $IMAGE_NAME bash -c "coverage run -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary.py --junitxml=/opt/results/${IMAGE_NAME}_report.xml"
-    - docker exec -i $IMAGE_NAME bash -c "coverage report --include='${IMAGE_NAME}/*' --show-missing"
-  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
-  after_script:
-    - docker rm -f $IMAGE_NAME
-    - docker network rm teraflowbridge
-  rules:
-    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
-    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
-    - changes:
-      - src/common/**/*.py
-      - proto/*.proto
-      - src/$IMAGE_NAME/**/*.{py,in,yml}
-      - src/$IMAGE_NAME/Dockerfile
-      - src/$IMAGE_NAME/tests/*.py
-      - src/$IMAGE_NAME/tests/Dockerfile
-      - manifests/${IMAGE_NAME}service.yaml
-      - .gitlab-ci.yml
-  artifacts:
-      when: always
-      reports:
-        junit: src/$IMAGE_NAME/tests/${IMAGE_NAME}_report.xml
+## Apply unit test to the component
+#unit_test qkd_app:
+#  variables:
+#    IMAGE_NAME: 'qkd_app' # name of the microservice
+#    IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
+#  stage: unit_test
+#  needs:
+#    - build qkd_app
+#  before_script:
+#    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
+#    - if docker network list | grep teraflowbridge; then echo "teraflowbridge is already created"; else docker network create -d bridge teraflowbridge; fi
+#    - if docker container ls | grep $IMAGE_NAME; then docker rm -f $IMAGE_NAME; else echo "$IMAGE_NAME image is not in the system"; fi
+#  script:
+#    - docker pull "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
+#    - docker run --name $IMAGE_NAME -d -p 10070:10070 -p 8005:8005 -v "$PWD/src/$IMAGE_NAME/tests:/opt/results" --network=teraflowbridge $CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG
+#    - sleep 5
+#    - docker ps -a
+#    - docker logs $IMAGE_NAME
+#    - docker exec -i $IMAGE_NAME bash -c "coverage run -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary.py --junitxml=/opt/results/${IMAGE_NAME}_report.xml"
+#    - docker exec -i $IMAGE_NAME bash -c "coverage report --include='${IMAGE_NAME}/*' --show-missing"
+#
+#    # Mock QKD Nodes Deployment
+#    - |
+#      echo "Starting stage: deploy_mock_nodes"
+#    - pip install flask  # Install Flask to ensure it is available
+#    - |
+#      for port in 11111 22222 33333; do
+#        if lsof -i:$port >/dev/null 2>&1; then
+#          echo "Freeing up port $port..."
+#          fuser -k $port/tcp
+#        fi
+#      done
+#      MOCK_NODES_DIR="$PWD/src/tests/tools/mock_qkd_nodes"
+#      if [ -d "$MOCK_NODES_DIR" ]; then
+#        cd "$MOCK_NODES_DIR" || exit
+#        ./start.sh &
+#        MOCK_NODES_PID=$!
+#      else
+#        echo "Error: Mock QKD nodes directory '$MOCK_NODES_DIR' not found."
+#        exit 1
+#      fi
+#      echo "Waiting for mock nodes to be up..."
+#      RETRY_COUNT=0
+#      MAX_RETRIES=15
+#      while [ $RETRY_COUNT -lt $MAX_RETRIES ]; do
+#        if curl -s http://127.0.0.1:11111 > /dev/null && \
+#           curl -s http://127.0.0.1:22222 > /dev/null && \
+#           curl -s http://127.0.0.1:33333 > /dev/null; then
+#            echo "Mock nodes are up!"
+#            break
+#        else
+#            echo "Mock nodes not ready, retrying in 5 seconds..."
+#            RETRY_COUNT=$((RETRY_COUNT + 1))
+#            sleep 5
+#        fi
+#      done
+#      if [ $RETRY_COUNT -ge $MAX_RETRIES ]; then
+#        echo "Error: Mock nodes failed to start after multiple attempts."
+#        exit 1
+#      fi
+#
+#  # Run additional QKD unit tests
+#    - docker exec -i $IMAGE_NAME bash -c "pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_create_apps.py"
+#  coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
+#  after_script:
+#    - docker rm -f $IMAGE_NAME
+#    - docker network rm teraflowbridge
+#  rules:
+#    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
+#    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
+#    - changes:
+#      - src/common/**/*.py
+#      - proto/*.proto
+#      - src/$IMAGE_NAME/**/*.{py,in,yml}
+#      - src/$IMAGE_NAME/Dockerfile
+#      - src/$IMAGE_NAME/tests/*.py
+#      - src/$IMAGE_NAME/tests/Dockerfile
+#      - manifests/${IMAGE_NAME}service.yaml
+#      - .gitlab-ci.yml
+#  artifacts:
+#      when: always
+#      reports:
+#        junit: src/$IMAGE_NAME/tests/${IMAGE_NAME}_report.xml
@@ -139,48 +139,18 @@ class AppServiceServicerImpl(AppServiceServicer):
         """
         Lists all apps in the system, including their statistics and QoS attributes.
         """
-        LOGGER.debug(f"Received ListApps request: {grpc_message_to_json_string(request)}")
-
-        try:
-            apps = app_list_objs(self.db_engine, request.context_uuid.uuid)
-            for app in apps.apps:
-                LOGGER.debug(f"App retrieved: {grpc_message_to_json_string(app)}")
-
-            LOGGER.debug(f"ListApps returned {len(apps.apps)} apps for context_id: {request.context_uuid.uuid}")
-            return apps
-        except Exception as e:
-            context.set_code(grpc.StatusCode.INTERNAL)
-            context.set_details("An internal error occurred while listing apps.")
-            raise e
+        return app_list_objs(self.db_engine, request)

     @safe_and_metered_rpc_method(METRICS_POOL, LOGGER)
     def GetApp(self, request : AppId, context : grpc.ServicerContext) -> App:
         """
         Fetches details of a specific app based on its AppId, including QoS and performance stats.
         """
-        LOGGER.debug(f"Received GetApp request: {grpc_message_to_json_string(request)}")
-        try:
-            app = app_get(self.db_engine, request)
-            LOGGER.debug(f"GetApp found app with app_uuid: {request.app_uuid.uuid}")
-            return app
-        except NotFoundException as e:
-            context.set_code(grpc.StatusCode.NOT_FOUND)
-            context.set_details(f"App not found: {e}")
-            raise e
+        return app_get(self.db_engine, request)

     @safe_and_metered_rpc_method(METRICS_POOL, LOGGER)
     def DeleteApp(self, request : AppId, context : grpc.ServicerContext) -> Empty:
         """
         Deletes an app from the system by its AppId, following ETSI compliance.
         """
-        LOGGER.debug(f"Received DeleteApp request for app_uuid: {request.app_uuid.uuid}")
-        try:
-            app_delete(self.db_engine, request.app_uuid.uuid)
-            LOGGER.debug(f"App with UUID {request.app_uuid.uuid} deleted successfully.")
-            return Empty()
-        except NotFoundException as e:
-            context.set_code(grpc.StatusCode.NOT_FOUND)
-            context.set_details(f"App not found: {e}")
-            raise e
+        return app_delete(self.db_engine, request)
@@ -23,9 +23,8 @@ from sqlalchemy_cockroachdb import run_transaction


from common.method_wrappers.ServiceExceptions import NotFoundException
from common.method_wrappers.ServiceExceptions import NotFoundException
from common.message_broker.MessageBroker import MessageBroker
from common.message_broker.MessageBroker import MessageBroker
from common.proto.context_pb2 import ContextId, Empty
from common.proto.qkd_app_pb2 import AppList, App, AppId
from common.proto.qkd_app_pb2 import AppList, App, AppId
from qkd_app.service.database.uuids._Builder import get_uuid_from_string, get_uuid_random
from common.method_wrappers.ServiceExceptions import InvalidArgumentsException
from common.tools.object_factory.QKDApp import json_app_id
from common.tools.object_factory.QKDApp import json_app_id
from common.tools.object_factory.Context import json_context_id
from common.tools.object_factory.Context import json_context_id


@@ -38,22 +37,20 @@ from .models.enums.QKDAppTypes import grpc_to_enum__qkd_app_types
LOGGER = logging.getLogger(__name__)
LOGGER = logging.getLogger(__name__)




def app_list_objs(db_engine: Engine, context_uuid: str = None) -> AppList:
def app_list_objs(db_engine : Engine, request : ContextId) -> AppList:
    """
    """
    Fetches a list of all QKD applications from the database. Optionally filters by context UUID.
    Fetches a list of all QKD applications from the database. Optionally filters by context UUID.


    :param db_engine: SQLAlchemy Engine for DB connection
    :param db_engine: SQLAlchemy Engine for DB connection
    :param context_uuid: UUID of the context to filter by (optional)
    :param request: Context Id containing the UUID of the context to filter by
    :return: AppList containing all apps
    :return: AppList containing all apps
    """
    """
    context_uuid = context_get_uuid(request, allow_random=False)
    def callback(session : Session) -> List[Dict]:
    def callback(session : Session) -> List[Dict]:
        query = session.query(AppModel)
        query = session.query(AppModel)
        
        if context_uuid:
        query = query.filter_by(context_uuid=context_uuid)
        query = query.filter_by(context_uuid=context_uuid)

        obj_list : List[AppModel] = query.all()
        return [obj.dump() for obj in query.all()]
        return [obj.dump() for obj in obj_list]

    apps = run_transaction(sessionmaker(bind=db_engine), callback)
    apps = run_transaction(sessionmaker(bind=db_engine), callback)
    return AppList(apps=apps)
    return AppList(apps=apps)


@@ -67,17 +64,21 @@ def app_get(db_engine: Engine, request: AppId) -> App:
     :return: App protobuf object
     :raises NotFoundException: If the app is not found in the database
     """
-    app_uuid = app_get_uuid(request, allow_random=False)
+    context_uuid,app_uuid = app_get_uuid(request, allow_random=False)
 
     def callback(session : Session) -> Optional[Dict]:
-        obj = session.query(AppModel).filter_by(app_uuid=app_uuid).one_or_none()
-        return obj.dump() if obj else None
+        query = session.query(AppModel)
+        query = query.filter_by(app_uuid=app_uuid)
+        obj : Optional[AppModel] = query.one_or_none()
+        return None if obj is None else obj.dump()
 
     obj = run_transaction(sessionmaker(bind=db_engine), callback)
 
-    if not obj:
-        raise NotFoundException('App', request.app_uuid.uuid, extra_details=[
-            f'app_uuid generated was: {app_uuid}'
+    if obj is None:
+        raw_app_uuid = '{:s}/{:s}'.format(request.context_id.context_uuid.uuid, request.app_uuid.uuid)
+        raise NotFoundException('App', raw_app_uuid, extra_details=[
+            'context_uuid generated was: {:s}'.format(context_uuid),
+            'app_uuid generated was: {:s}'.format(app_uuid),
         ])
 
     return App(**obj)
@@ -93,8 +94,7 @@ def app_set(db_engine: Engine, messagebroker: MessageBroker, request: App) -> Ap
     :param request: App protobuf object containing app data
     :return: AppId protobuf object representing the newly created or updated app
     """
-    context_uuid = context_get_uuid(request.app_id.context_id, allow_random=False)
-    app_uuid = app_get_uuid(request.app_id, allow_random=True)
+    context_uuid,app_uuid = app_get_uuid(request.app_id, allow_random=True)
 
     # Prepare app data for insertion/update
     app_data = {
@@ -154,20 +154,21 @@ def app_get_by_server(db_engine: Engine, server_app_id: str) -> App:
     return App(**obj)
 
 
-def app_delete(db_engine: Engine, app_uuid: str) -> None:
+def app_delete(db_engine : Engine, request : AppId) -> Empty:
     """
     Deletes an app by its UUID from the database.
 
     :param db_engine: SQLAlchemy Engine for DB connection
-    :param app_uuid: The UUID of the app to be deleted
+    :param request: AppId identifying the app to be deleted
     """
-    def callback(session: Session) -> bool:
-        app_obj = session.query(AppModel).filter_by(app_uuid=app_uuid).one_or_none()
-
-        if app_obj is None:
-            raise NotFoundException('App', app_uuid)
-
-        session.delete(app_obj)
-        return True
+    _,app_uuid = app_get_uuid(request, allow_random=False)
+
+    def callback(session : Session) -> bool:
+        query = session.query(AppModel)
+        query = query.filter_by(app_uuid=app_uuid)
+        num_deleted = query.delete()
+        return num_deleted > 0
 
     run_transaction(sessionmaker(bind=db_engine), callback)
+    return Empty()
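
For reference, a minimal sketch of how the reworked database functions above are driven; it assumes an already-initialized SQLAlchemy Engine (db_engine) and uses placeholder identifier values, so it illustrates the call shapes rather than a specific deployment:

# Illustrative caller; `db_engine` is assumed to be an initialized SQLAlchemy Engine.
from common.proto.context_pb2 import ContextId
from common.proto.qkd_app_pb2 import AppId

ctx = ContextId()
ctx.context_uuid.uuid = 'admin'
apps = app_list_objs(db_engine, ctx)     # lists apps in the resolved context

app_id = AppId()
app_id.context_id.context_uuid.uuid = 'admin'
app_id.app_uuid.uuid = 'my-app'
app = app_get(db_engine, app_id)         # raises NotFoundException if missing
app_delete(db_engine, app_id)            # bulk-deletes by UUID and returns Empty()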
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from common.Constants import DEFAULT_CONTEXT_NAME
from common.proto.context_pb2 import ContextId
from common.method_wrappers.ServiceExceptions import InvalidArgumentsException
from ._Builder import get_uuid_from_string, get_uuid_random

def context_get_uuid(
    context_id : ContextId, context_name : str = '', allow_random : bool = False, allow_default : bool = False
) -> str:
    context_uuid = context_id.context_uuid.uuid

    if len(context_uuid) > 0:
        return get_uuid_from_string(context_uuid)
    if len(context_name) > 0:
        return get_uuid_from_string(context_name)
    if allow_default:
        return get_uuid_from_string(DEFAULT_CONTEXT_NAME)
    if allow_random:
        return get_uuid_random()

    raise InvalidArgumentsException([
        ('context_id.context_uuid.uuid', context_uuid),
        ('name', context_name),
    ], extra_details=['At least one is required to produce a Context UUID'])
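
A short usage sketch for this helper (illustrative values; get_uuid_from_string is assumed to be deterministic for a given input, as its name suggests):

from common.proto.context_pb2 import ContextId

ctx_id = ContextId()
ctx_id.context_uuid.uuid = 'admin'
assert context_get_uuid(ctx_id) == context_get_uuid(ctx_id)       # stable for the same input
default_uuid = context_get_uuid(ContextId(), allow_default=True)  # falls back to DEFAULT_CONTEXT_NAME
# context_get_uuid(ContextId())  # no uuid/name and no fallbacks: raises InvalidArgumentsException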
@@ -12,26 +12,36 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from typing import Tuple
 from common.proto.qkd_app_pb2 import AppId
 from common.method_wrappers.ServiceExceptions import InvalidArgumentsException
 from ._Builder import get_uuid_from_string, get_uuid_random
+from .Context import context_get_uuid
 
-def app_get_uuid(app_id: AppId, allow_random: bool = False) -> str:
+def app_get_uuid(
+    app_id : AppId, app_name : str = '', allow_random : bool = False
+) -> Tuple[str, str]:
     """
     Retrieves or generates the UUID for an app.
 
     :param app_id: AppId object that contains the app UUID
+    :param app_name: optional app name used to derive the UUID
     :param allow_random: If True, generates a random UUID if app_uuid is not set
-    :return: App UUID as a string
+    :return: Tuple of (Context UUID, App UUID), both as strings
     """
-    app_uuid = app_id.app_uuid.uuid
+    context_uuid = context_get_uuid(app_id.context_id, allow_random=False, allow_default=True)
+    raw_app_uuid = app_id.app_uuid.uuid
 
-    if app_uuid:
-        return get_uuid_from_string(app_uuid)
+    if len(raw_app_uuid) > 0:
+        return context_uuid, get_uuid_from_string(raw_app_uuid, prefix_for_name=context_uuid)
+
+    if len(app_name) > 0:
+        return context_uuid, get_uuid_from_string(app_name, prefix_for_name=context_uuid)
 
     if allow_random:
-        return get_uuid_random()
+        return context_uuid, get_uuid_random()
 
     raise InvalidArgumentsException([
-        ('app_id.app_uuid.uuid', app_uuid),
-    ], extra_details=['At least one UUID is required to identify the app.'])
+        ('app_id.app_uuid.uuid', raw_app_uuid),
+        ('name', app_name),
+    ], extra_details=['At least one is required to produce an App UUID'])
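
The practical effect of this change is that app UUIDs are now namespaced per context. A brief sketch with placeholder values:

from common.proto.qkd_app_pb2 import AppId

app_id = AppId()
app_id.context_id.context_uuid.uuid = 'admin'
app_id.app_uuid.uuid = 'qkd-app-1'
context_uuid, app_uuid = app_get_uuid(app_id)
# The same app identifier under a different context yields a different app_uuid,
# since the resolved context UUID is passed as prefix_for_name.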
@@ -14,7 +14,7 @@
 
 import requests
 
-QKD_ADDRESS = '10.0.2.10'
+QKD_ADDRESS = '127.0.0.1'
 QKD_URL     = 'http://{:s}/qkd_app/create_qkd_app'.format(QKD_ADDRESS)
 
 QKD_REQUEST_1 = {
@@ -22,19 +22,21 @@ QKD_REQUEST_1 = {
         'server_app_id': '1',
         'client_app_id': [],
         'app_status': 'ON',
-        'local_qkdn_id': '00000001-0000-0000-0000-0000000000',
-        'backing_qkdl_id': ['00000003-0002-0000-0000-0000000000']
+        'local_qkdn_id': '00000001-0000-0000-0000-000000000000',
+        'backing_qkdl_id': ['00000003-0002-0000-0000-000000000000'],
     }
 }
-print(requests.post(QKD_URL, json=QKD_REQUEST_1))
+reply = requests.post(QKD_URL, json=QKD_REQUEST_1)
+print(reply.status_code, reply.text)
 
 QKD_REQUEST_2 = {
     'app': {
         'server_app_id': '1',
         'client_app_id': [],
         'app_status': 'ON',
-        'local_qkdn_id': '00000003-0000-0000-0000-0000000000',
-        'backing_qkdl_id': ['00000003-0002-0000-0000-0000000000']
+        'local_qkdn_id': '00000003-0000-0000-0000-000000000000',
+        'backing_qkdl_id': ['00000003-0002-0000-0000-000000000000'],
     }
 }
-print(requests.post(QKD_URL, json=QKD_REQUEST_2))
+reply = requests.post(QKD_URL, json=QKD_REQUEST_2)
+print(reply.status_code, reply.text)
@@ -19,13 +19,14 @@ build qos_profile:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
     - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,13 +19,14 @@ build service:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
     - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -153,9 +154,8 @@ unit_test service:
     - docker logs $IMAGE_NAME
 
     # Run the tests
-    - >
-      docker exec -i $IMAGE_NAME bash -c
-      "coverage run -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary.py --junitxml=/opt/results/${IMAGE_NAME}_report.xml"
+    - docker exec -i $IMAGE_NAME bash -c "coverage run -m pytest --log-level=INFO --verbose $IMAGE_NAME/tests/test_unitary.py --junitxml=/opt/results/${IMAGE_NAME}_report.xml"
+    #- docker exec -i $IMAGE_NAME bash -c "coverage run --append -m pytest --log-level=INFO --verbose service/tests/qkd/test_functional_bootstrap.py"
     - docker exec -i $IMAGE_NAME bash -c "coverage report --include='${IMAGE_NAME}/*' --show-missing"
 
   coverage: '/TOTAL\s+\d+\s+\d+\s+(\d+%)/'
@@ -66,8 +66,12 @@ COPY src/context/__init__.py context/__init__.py
 COPY src/context/client/. context/client/
 COPY src/device/__init__.py device/__init__.py
 COPY src/device/client/. device/client/
+COPY src/kpi_manager/__init__.py kpi_manager/__init__.py
+COPY src/kpi_manager/client/. kpi_manager/client/
 COPY src/pathcomp/frontend/__init__.py pathcomp/frontend/__init__.py
 COPY src/pathcomp/frontend/client/. pathcomp/frontend/client/
+COPY src/telemetry/frontend/__init__.py telemetry/frontend/__init__.py
+COPY src/telemetry/frontend/client/. telemetry/frontend/client/
 COPY src/e2e_orchestrator/__init__.py e2e_orchestrator/__init__.py
 COPY src/e2e_orchestrator/client/. e2e_orchestrator/client/
 COPY src/qkd_app/__init__.py qkd_app/__init__.py
@@ -36,6 +36,8 @@ LOGGER = logging.getLogger(__name__)
 INT_COLLECTOR_INFO = "int_collector_info"
 INT_REPORT_MIRROR_ID_LIST = "int_report_mirror_id_list"
 PORT_INT = "int_port"           # In-band Network Telemetry transport port (of the collector)
+DURATION_SEC = "duration_sec"
+INTERVAL_SEC = "interval_sec"
 
 # INT tables
 TABLE_INT_WATCHLIST = "FabricIngress.int_watchlist.watchlist"
@@ -51,6 +53,10 @@ INT_REPORT_TYPE_FLOW = 1
 INT_REPORT_TYPE_QUEUE = 2
 INT_REPORT_TYPE_DROP = 4
 
+# INT collection timings
+DEF_DURATION_SEC = 3000
+DEF_INTERVAL_SEC = 1
+
 
 def rules_set_up_int_watchlist(action : ConfigActionEnum) -> List [Tuple]: # type: ignore
     rule_no = cache_rule(TABLE_INT_WATCHLIST, action)
@@ -22,8 +22,12 @@ https://p4.org/p4-spec/docs/INT_v0_5.pdf
 
 import logging
 from typing import Any, List, Optional, Tuple, Union
+from uuid import uuid4
 from common.method_wrappers.Decorator import MetricsPool, metered_subclass_method
-from common.proto.context_pb2 import ConfigActionEnum, DeviceId, Service, Device
+from common.proto.context_pb2 import ConfigActionEnum, ContextIdList, DeviceId, Service, Device, Empty
+from common.proto.kpi_manager_pb2 import KpiId, KpiDescriptor
+from common.proto.kpi_sample_types_pb2 import KpiSampleType
+from common.proto.telemetry_frontend_pb2 import Collector, CollectorId
 from common.tools.object_factory.Device import json_device_id
 from common.type_checkers.Checkers import chk_type, chk_address_mac, chk_address_ipv4,\
     chk_transport_port, chk_vlan_id
@@ -32,6 +36,10 @@ from service.service.service_handler_api.SettingsHandler import SettingsHandler
 from service.service.service_handlers.p4_fabric_tna_commons.p4_fabric_tna_commons import *
 from service.service.task_scheduler.TaskExecutor import TaskExecutor
 
+from context.client.ContextClient import ContextClient
+from kpi_manager.client.KpiManagerClient import KpiManagerClient
+from telemetry.frontend.client.TelemetryFrontendClient import TelemetryFrontendClient
+
 from .p4_fabric_tna_int_config import *
 
 LOGGER = logging.getLogger(__name__)
@@ -63,6 +71,9 @@ class P4FabricINTServiceHandler(_ServiceHandler):
         self._parse_settings()
         self._print_settings()
 
+        # TODO: Check whether the Telemetry service is up before issuing this call
+        self._start_collector()
+
     @metered_subclass_method(METRICS_POOL)
     def SetEndpoint(
         self, endpoints : List[Tuple[str, str, Optional[str]]], connection_uuid : Optional[str] = None
@@ -296,10 +307,13 @@
     def _init_settings(self):
         self.__switch_info = {}
         self.__int_collector_info = {}
+        self.__int_collector_iface = ""
         self.__int_collector_mac = ""
         self.__int_collector_ip = ""
         self.__int_collector_port = -1
-        self.__int_vlan_id = -1
+        self.__int_vlan_id = DEF_VLAN
+        self.__int_collector_duration_s = DEF_DURATION_SEC
+        self.__int_collector_interval_s = DEF_INTERVAL_SEC
 
         try:
             self.__settings = self.__settings_handler.get('/settings')
@@ -311,13 +325,14 @@
     def _parse_settings(self):
         try:
             switch_info = self.__settings.value[SWITCH_INFO]
+            assert isinstance(switch_info, list), "Switch info object must be a list"
         except Exception as ex:
             LOGGER.error("Failed to parse service settings: {}".format(ex))
             raise Exception(ex)
-        assert isinstance(switch_info, list), "Switch info object must be a list"
 
         for switch in switch_info:
             for switch_name, sw_info in switch.items():
+                try:
                     assert switch_name, "Invalid P4 switch name"
                     assert isinstance(sw_info, dict), "Switch {} info must be a map with arch, dpid, mac, ip, and int_port items)"
                     assert sw_info[ARCH] in SUPPORTED_TARGET_ARCH_LIST, \
@@ -336,11 +351,18 @@
                         sw_info[RECIRCULATION_PORT_LIST] = RECIRCULATION_PORTS_V1MODEL
                         sw_info[INT_REPORT_MIRROR_ID_LIST] = INT_REPORT_MIRROR_ID_LIST_V1MODEL
                     assert isinstance(sw_info[RECIRCULATION_PORT_LIST], list), "Switch {} - Recirculation ports must be described as a list".format(switch_name)
+                except Exception as ex:
+                    LOGGER.error("Failed to parse switch {} information".format(switch_name))
+                    return
                 self.__switch_info[switch_name] = sw_info
 
+        try:
             self.__int_collector_info = self.__settings.value[INT_COLLECTOR_INFO]
             assert isinstance(self.__int_collector_info, dict), "INT collector info object must be a map with mac, ip, port, and vlan_id keys)"
 
+            self.__int_collector_iface = self.__int_collector_info[IFACE]
+            assert self.__int_collector_iface, "Invalid P4 INT collector network interface"
+
             self.__int_collector_mac = self.__int_collector_info[MAC]
             assert chk_address_mac(self.__int_collector_mac), "Invalid P4 INT collector MAC address"
 
@@ -350,8 +372,27 @@
             self.__int_collector_port = self.__int_collector_info[PORT]
             assert chk_transport_port(self.__int_collector_port), "Invalid P4 INT collector transport port"
 
-            self.__int_vlan_id = self.__int_collector_info[VLAN_ID]
-            assert chk_vlan_id(self.__int_vlan_id), "Invalid VLAN ID"
+            if self.__int_collector_info[VLAN_ID] > 0:
+                self.__int_vlan_id = self.__int_collector_info[VLAN_ID]
+                assert chk_vlan_id(self.__int_vlan_id), "Invalid VLAN ID for INT"
+            else:
+                LOGGER.warning("No or invalid INT VLAN ID is provided. Default VLAN ID is set to {} (No VLAN)".\
+                               format(self.__int_vlan_id))
+
+            if self.__int_collector_info[DURATION_SEC] > 0:
+                self.__int_collector_duration_s = self.__int_collector_info[DURATION_SEC]
+            else:
+                LOGGER.warning("No or invalid INT collection duration is provided. Default duration is set to {} seconds".\
+                               format(self.__int_collector_duration_s))
+
+            if self.__int_collector_info[INTERVAL_SEC] > 0:
+                self.__int_collector_interval_s = self.__int_collector_info[INTERVAL_SEC]
+            else:
+                LOGGER.warning("No or invalid INT collection interval is provided. Default interval is set to {} seconds".\
+                               format(self.__int_collector_interval_s))
+        except Exception as ex:
+            LOGGER.error("Failed to parse INT collector information")
+            return
 
     def _print_settings(self):
         LOGGER.info("-------------------- {} settings --------------------".format(self.__service.name))
@@ -366,10 +407,13 @@
             LOGGER.info("\t\t|           INT port type: {}".format(switch_info[PORT_INT][PORT_TYPE]))
             LOGGER.info("\t\t| Recirculation port list: {}".format(switch_info[RECIRCULATION_PORT_LIST]))
             LOGGER.info("\t\t|   Report mirror ID list: {}".format(switch_info[INT_REPORT_MIRROR_ID_LIST]))
+        LOGGER.info("--- INT collector interface: {}".format(self.__int_collector_iface))
         LOGGER.info("--- INT collector       MAC: {}".format(self.__int_collector_mac))
         LOGGER.info("--- INT collector        IP: {}".format(self.__int_collector_ip))
         LOGGER.info("--- INT collector      port: {}".format(self.__int_collector_port))
         LOGGER.info("--- INT             VLAN ID: {}".format(self.__int_vlan_id))
+        LOGGER.info("--- INT collector  duration: {} sec".format(self.__int_collector_duration_s))
+        LOGGER.info("--- INT collector  interval: {} sec".format(self.__int_collector_interval_s))
         LOGGER.info("-----------------------------------------------------------------")
 
     def _create_rules(self, device_obj : Device, action : ConfigActionEnum): # type: ignore
@@ -474,3 +518,86 @@
             raise Exception(ex)
 
         return rules

    def _retrieve_context_for_int_collector(self):
        ctx_id = service_id = dev_id = ep_id = None

        try:
            context_client = ContextClient()
            response : ContextIdList = context_client.ListContextIds(Empty()) # type: ignore

            # Get the context
            ctx_id = response.context_ids[0].context_uuid.uuid
            assert ctx_id, "Cannot create INT collector with invalid context ID"
            LOGGER.debug("Context ID: {}".format(ctx_id))

            service_id = self.__service.service_id.service_uuid.uuid
            assert service_id, "Cannot create INT collector with invalid service ID"
            LOGGER.debug("Service ID: {}".format(service_id))

            # Get a service endpoint
            svc_endpoints = self.__service.service_endpoint_ids[0]
            assert svc_endpoints, "Cannot create INT collector: No service endpoints are established"

            # Get a P4 device associated with this endpoint
            dev_id = svc_endpoints.device_id.device_uuid.uuid
            assert dev_id, "Cannot create INT collector with invalid device ID"
            LOGGER.debug("Device ID: {}".format(dev_id))

            # Get the endpoint ID
            ep_id = svc_endpoints.endpoint_uuid.uuid
            assert ep_id, "Cannot create INT collector with invalid endpoint ID"
            LOGGER.debug("Endpoint ID: {}".format(ep_id))
        except Exception as ex:
            LOGGER.error("Failed to retrieve context for starting the INT collector: {}".format(ex))
            raise ex

        return ctx_id, service_id, dev_id, ep_id

    def _start_collector(self):
        ctx_id = service_id = dev_id = ep_id = None
        try:
            ctx_id, service_id, dev_id, ep_id = self._retrieve_context_for_int_collector()
        except Exception:
            LOGGER.error("INT collector cannot be initialized due to missing information")
            return

        # Create a "virtual" INT KPI associated with this context and P4 dataplane
        kpi_id_int = None
        try:
            kpi_descriptor_int = KpiDescriptor()
            kpi_descriptor_int.kpi_sample_type = KpiSampleType.KPISAMPLETYPE_UNKNOWN
            kpi_descriptor_int.service_id.service_uuid.uuid = service_id
            kpi_descriptor_int.device_id.device_uuid.uuid = dev_id
            kpi_descriptor_int.endpoint_id.endpoint_uuid.uuid = ep_id
            kpi_descriptor_int.kpi_id.kpi_id.uuid = str(uuid4())

            # Set this new KPI
            kpi_manager_client = KpiManagerClient()
            kpi_id_int: KpiId = kpi_manager_client.SetKpiDescriptor(kpi_descriptor_int) # type: ignore
            LOGGER.debug("INT KPI ID: {}".format(kpi_id_int))
        except Exception:
            LOGGER.error("INT collector cannot be initialized due to failed KPI initialization")
            return

        # Initialize an INT collector object
        try:
            collect_int = Collector()
            collect_int.collector_id.collector_id.uuid = str(uuid4())
            collect_int.kpi_id.kpi_id.uuid = kpi_id_int.kpi_id.uuid
            collect_int.duration_s = self.__int_collector_duration_s
            collect_int.interval_s = self.__int_collector_interval_s
            collect_int.int_collector.interface = self.__int_collector_iface
            collect_int.int_collector.transport_port = self.__int_collector_port
            collect_int.int_collector.service_id = service_id
            collect_int.int_collector.context_id = ctx_id
            LOGGER.info("INT Collector: {}".format(str(collect_int)))

            telemetry_frontend_client = TelemetryFrontendClient()
            collect_id: CollectorId = telemetry_frontend_client.StartCollector(collect_int) # type: ignore
            assert collect_id.uuid, "INT collector failed to start"
        except Exception:
            LOGGER.error("INT collector cannot be initialized")
            return

        LOGGER.info("INT collector with ID {} is successfully invoked".format(collect_id))
@@ -10,68 +10,64 @@
             "device_id": {"device_uuid": {"uuid": "QKD1"}}, "device_type": "qkd-node",
             "device_operational_status": 0, "device_drivers": [12], "device_endpoints": [],
             "device_config": {"config_rules": [
-                {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "10.0.2.10"}},
+                {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "<YOUR_MACHINE_IP>"}},
                 {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "11111"}},
                 {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": {
                     "scheme": "http"
                 }}}
             ]}
-
         },
         {
             "device_id": {"device_uuid": {"uuid": "QKD2"}}, "device_type": "qkd-node",
             "device_operational_status": 0, "device_drivers": [12], "device_endpoints": [],
             "device_config": {"config_rules": [
-                {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "10.0.2.10"}},
+                {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "<YOUR_MACHINE_IP>"}},
                 {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "22222"}},
                 {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": {
                     "scheme": "http"
                 }}}
             ]}
-
         },
         {
             "device_id": {"device_uuid": {"uuid": "QKD3"}}, "device_type": "qkd-node",
             "device_operational_status": 0, "device_drivers": [12], "device_endpoints": [],
             "device_config": {"config_rules": [
-                {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "10.0.2.10"}},
+                {"action": 1, "custom": {"resource_key": "_connect/address", "resource_value": "<YOUR_MACHINE_IP>"}},
                 {"action": 1, "custom": {"resource_key": "_connect/port", "resource_value": "33333"}},
                 {"action": 1, "custom": {"resource_key": "_connect/settings", "resource_value": {
                     "scheme": "http"
                 }}}
             ]}
-
         }
     ],
     "links": [
         {
-            "link_id": {"link_uuid": {"uuid": "QKD1/10.0.2.10:1001==QKD2/10.0.2.10:2001"}},
+            "link_id": {"link_uuid": {"uuid": "QKD1/<YOUR_MACHINE_IP>:1001==QKD2/<YOUR_MACHINE_IP>:2001"}},
             "link_endpoint_ids": [
-                {"device_id": {"device_uuid": {"uuid": "QKD1"}}, "endpoint_uuid": {"uuid": "10.0.2.10:1001"}},
-                {"device_id": {"device_uuid": {"uuid": "QKD2"}}, "endpoint_uuid": {"uuid": "10.0.2.10:2001"}}
+                {"device_id": {"device_uuid": {"uuid": "QKD1"}}, "endpoint_uuid": {"uuid": "<YOUR_MACHINE_IP>:1001"}},
+                {"device_id": {"device_uuid": {"uuid": "QKD2"}}, "endpoint_uuid": {"uuid": "<YOUR_MACHINE_IP>:2001"}}
             ]
         },
         {
-            "link_id": {"link_uuid": {"uuid": "QKD2/10.0.2.10:2001==QKD1/10.0.2.10:1001"}},
+            "link_id": {"link_uuid": {"uuid": "QKD2/<YOUR_MACHINE_IP>:2001==QKD1/<YOUR_MACHINE_IP>:1001"}},
             "link_endpoint_ids": [
-                {"device_id": {"device_uuid": {"uuid": "QKD2"}}, "endpoint_uuid": {"uuid": "10.0.2.10:2001"}},
-                {"device_id": {"device_uuid": {"uuid": "QKD1"}}, "endpoint_uuid": {"uuid": "10.0.2.10:1001"}}
+                {"device_id": {"device_uuid": {"uuid": "QKD2"}}, "endpoint_uuid": {"uuid": "<YOUR_MACHINE_IP>:2001"}},
+                {"device_id": {"device_uuid": {"uuid": "QKD1"}}, "endpoint_uuid": {"uuid": "<YOUR_MACHINE_IP>:1001"}}
             ]
         },
         {
-            "link_id": {"link_uuid": {"uuid": "QKD2/10.0.2.10:2002==QKD3/10.0.2.10:3001"}},
+            "link_id": {"link_uuid": {"uuid": "QKD2/<YOUR_MACHINE_IP>:2002==QKD3/<YOUR_MACHINE_IP>:3001"}},
             "link_endpoint_ids": [
-                {"device_id": {"device_uuid": {"uuid": "QKD2"}}, "endpoint_uuid": {"uuid": "10.0.2.10:2002"}},
-                {"device_id": {"device_uuid": {"uuid": "QKD3"}}, "endpoint_uuid": {"uuid": "10.0.2.10:3001"}}
+                {"device_id": {"device_uuid": {"uuid": "QKD2"}}, "endpoint_uuid": {"uuid": "<YOUR_MACHINE_IP>:2002"}},
+                {"device_id": {"device_uuid": {"uuid": "QKD3"}}, "endpoint_uuid": {"uuid": "<YOUR_MACHINE_IP>:3001"}}
            ]
         },
         {
-            "link_id": {"link_uuid": {"uuid": "QKD3/10.0.2.10:3001==QKD2/10.0.2.10:2002"}},
+            "link_id": {"link_uuid": {"uuid": "QKD3/<YOUR_MACHINE_IP>:3001==QKD2/<YOUR_MACHINE_IP>:2002"}},
             "link_endpoint_ids": [
-                {"device_id": {"device_uuid": {"uuid": "QKD3"}}, "endpoint_uuid": {"uuid": "10.0.2.10:3001"}},
-                {"device_id": {"device_uuid": {"uuid": "QKD2"}}, "endpoint_uuid": {"uuid": "10.0.2.10:2002"}}
+                {"device_id": {"device_uuid": {"uuid": "QKD3"}}, "endpoint_uuid": {"uuid": "<YOUR_MACHINE_IP>:3001"}},
+                {"device_id": {"device_uuid": {"uuid": "QKD2"}}, "endpoint_uuid": {"uuid": "<YOUR_MACHINE_IP>:2002"}}
             ]
         }
-
     ]
 }
# Copyright 2022-2024 ETSI OSG/SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging, os, time, json, socket, re
from common.Constants import DEFAULT_CONTEXT_NAME
from common.proto.context_pb2 import ContextId, DeviceOperationalStatusEnum, Empty
from common.tools.descriptor.Loader import DescriptorLoader, check_descriptor_load_results, validate_empty_scenario
from common.tools.object_factory.Context import json_context_id
from context.client.ContextClient import ContextClient
from device.client.DeviceClient import DeviceClient
from tests.Fixtures import context_client, device_client # pylint: disable=unused-import

LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.DEBUG)

# Update the path to your QKD descriptor file
DESCRIPTOR_FILE_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'descriptorQKD_links.json')
ADMIN_CONTEXT_ID = ContextId(**json_context_id(DEFAULT_CONTEXT_NAME))

def load_descriptor_with_runtime_ip(descriptor_file_path):
    """
    Load the descriptor file and replace placeholder IP with the machine's IP address.
    """
    with open(descriptor_file_path, 'r') as descriptor_file:
        descriptor = descriptor_file.read()

    # Get the current machine's IP address
    try:
        # Use socket to get the local IP address directly from the network interface
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.connect(("8.8.8.8", 80))
        current_ip = s.getsockname()[0]
        s.close()
    except Exception as e:
        raise Exception(f"Unable to get the IP address: {str(e)}")

    # Replace all occurrences of <YOUR_MACHINE_IP> with the current IP
    updated_descriptor = re.sub(r"<YOUR_MACHINE_IP>", current_ip, descriptor)

    # Write updated descriptor back
    with open(descriptor_file_path, 'w') as descriptor_file:
        descriptor_file.write(updated_descriptor)

    return json.loads(updated_descriptor)

def load_and_process_descriptor(context_client, device_client, descriptor_file_path):
    """
    Function to load and process descriptor programmatically, similar to what WebUI does.
    """
    print(f"Loading descriptor from file: {descriptor_file_path}")
    try:
        # Update the descriptor with the runtime IP address
        descriptor = load_descriptor_with_runtime_ip(descriptor_file_path)

        # Initialize DescriptorLoader with the updated descriptor file
        descriptor_loader = DescriptorLoader(
            descriptors_file=descriptor_file_path, context_client=context_client, device_client=device_client
        )

        # Process and validate the descriptor
        print("Processing the descriptor...")
        results = descriptor_loader.process()
        print(f"Descriptor processing results: {results}")

        print("Checking descriptor load results...")
        check_descriptor_load_results(results, descriptor_loader)

        print("Validating descriptor...")
        descriptor_loader.validate()
        print("Descriptor validated successfully.")
    except Exception as e:
        LOGGER.error(f"Failed to load and process descriptor: {e}")
        raise e

def test_qkd_scenario_bootstrap(
    context_client: ContextClient,  # pylint: disable=redefined-outer-name
    device_client: DeviceClient,    # pylint: disable=redefined-outer-name
) -> None:
    """
    This test validates that the QKD scenario is correctly bootstrapped.
    """
    print("Starting QKD scenario bootstrap test...")

    # Check if context_client and device_client are instantiated
    if context_client is None:
        print("Error: context_client is not instantiated!")
    else:
        print(f"context_client is instantiated: {context_client}")

    if device_client is None:
        print("Error: device_client is not instantiated!")
    else:
        print(f"device_client is instantiated: {device_client}")

    # Validate empty scenario
    print("Validating empty scenario...")
    validate_empty_scenario(context_client)

    # Load the descriptor
    load_and_process_descriptor(context_client, device_client, DESCRIPTOR_FILE_PATH)

def test_qkd_devices_enabled(
    context_client: ContextClient,  # pylint: disable=redefined-outer-name
) -> None:
    """
    This test validates that the QKD devices are enabled.
    """
    print("Starting QKD devices enabled test...")

    # Check if context_client is instantiated
    if context_client is None:
        print("Error: context_client is not instantiated!")
    else:
        print(f"context_client is instantiated: {context_client}")

    DEVICE_OP_STATUS_ENABLED = DeviceOperationalStatusEnum.DEVICEOPERATIONALSTATUS_ENABLED

    num_devices = -1
    num_devices_enabled, num_retry = 0, 0

    while (num_devices != num_devices_enabled) and (num_retry < 10):
        print(f"Attempt {num_retry + 1}: Checking device status...")

        time.sleep(1.0)  # Add a delay to allow for device enablement

        response = context_client.ListDevices(Empty())
        num_devices = len(response.devices)
        print(f"Total devices found: {num_devices}")

        num_devices_enabled = 0
        for device in response.devices:
            if device.device_operational_status == DEVICE_OP_STATUS_ENABLED:
                num_devices_enabled += 1
        
        print(f"Devices enabled: {num_devices_enabled}/{num_devices}")
        num_retry += 1

    # Final check to ensure all devices are enabled
    print(f"Final device status: {num_devices_enabled}/{num_devices} devices enabled.")
    assert num_devices_enabled == num_devices
    print("QKD devices enabled test completed.")
@@ -19,13 +19,14 @@ build slice:
     IMAGE_TAG: 'latest' # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     - docker buildx build -t "$IMAGE_NAME:$IMAGE_TAG" -f ./src/$IMAGE_NAME/Dockerfile .
     - docker tag "$IMAGE_NAME:$IMAGE_TAG" "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/$IMAGE_NAME:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
@@ -19,6 +19,7 @@ build telemetry:
     IMAGE_TAG: 'latest'             # tag of the container image (production, development, etc)
   stage: build
   before_script:
+    - docker image prune --force
     - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
   script:
     # This first build tags the builder resulting image to prevent being removed by dangling image removal command
@@ -30,7 +31,7 @@ build telemetry:
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-frontend:$IMAGE_TAG"
     - docker push "$CI_REGISTRY_IMAGE/${IMAGE_NAME}-backend:$IMAGE_TAG"
   after_script:
-    - docker images --filter="dangling=true" --quiet | xargs -r docker rmi
+    - docker image prune --force
   rules:
     - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop" || $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
     - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "develop"'
# Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import os

TRUE_VALUES = {'T', 'TRUE', 'YES', '1'}
DEVICE_EMULATED_ONLY = os.environ.get('DEVICE_EMULATED_ONLY')
LOAD_ALL_DEVICE_DRIVERS = (DEVICE_EMULATED_ONLY is None) or (DEVICE_EMULATED_ONLY.upper() not in TRUE_VALUES)
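
In effect, driver loading is gated by a single boolean derived from the environment; a quick illustration of the same check in isolation (standard library only, illustrative values):

import os
os.environ['DEVICE_EMULATED_ONLY'] = 'YES'
value = os.environ.get('DEVICE_EMULATED_ONLY')
load_all = (value is None) or (value.upper() not in {'T', 'TRUE', 'YES', '1'})
assert load_all is False   # with the variable set to a true-value, only emulated drivers load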
@@ -16,9 +16,24 @@ FROM python:3.9-slim
 
 # Install dependencies
 RUN apt-get --yes --quiet --quiet update && \
-    apt-get --yes --quiet --quiet install wget g++ git && \
+    apt-get --yes --quiet --quiet install wget g++ git build-essential cmake libpcre2-dev python3-dev python3-cffi && \
     rm -rf /var/lib/apt/lists/*
 
+# Download, build and install libyang. Note that APT package is outdated
+# - Ref: https://github.com/CESNET/libyang
+# - Ref: https://github.com/CESNET/libyang-python/
+# RUN mkdir -p /var/libyang
+# RUN git clone https://github.com/CESNET/libyang.git /var/libyang
+# WORKDIR /var/libyang
+# RUN git fetch
+# RUN git checkout v2.1.148
+# RUN mkdir -p /var/libyang/build
+# WORKDIR /var/libyang/build
+# RUN cmake -D CMAKE_BUILD_TYPE:String="Release" ..
+# RUN make
+# RUN make install
+# RUN ldconfig
+
 # Set Python to show logs as they occur
 ENV PYTHONUNBUFFERED=0
 
@@ -77,5 +92,18 @@ COPY src/vnt_manager/client/. vnt_manager/client/
 COPY src/telemetry/__init__.py telemetry/__init__.py
 COPY src/telemetry/backend/. telemetry/backend/
 
+# Clone OpenConfig YANG models
+# RUN mkdir -p /tmp/openconfig
+# RUN git clone https://github.com/openconfig/public.git /tmp/openconfig
+# WORKDIR /tmp/openconfig
+# RUN git fetch
+# RUN git checkout v4.4.0
+# RUN rm -rf /var/teraflow/telemetry/backend/collectors/gnmi_openconfig/git
+# RUN mkdir -p /var/teraflow/telemetry/backend/collectors/gnmi_openconfig/git/openconfig/public
+# RUN mv /tmp/openconfig/release /var/teraflow/telemetry/backend/collectors/gnmi_openconfig/git/openconfig/public
+# RUN mv /tmp/openconfig/third_party /var/teraflow/telemetry/backend/collectors/gnmi_openconfig/git/openconfig/public
+# RUN rm -rf /tmp/openconfig
+# WORKDIR /var/teraflow
+
 # Start the service
 ENTRYPOINT ["python", "-m", "telemetry.backend.service"]
# Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Any, Dict
from common.proto.context_pb2 import DeviceConfig, ConfigActionEnum


def get_connect_rules(device_config : DeviceConfig) -> Dict[str, Any]:
    connect_rules = dict()
    for config_rule in device_config.config_rules:
        if config_rule.action != ConfigActionEnum.CONFIGACTION_SET: continue
        if config_rule.WhichOneof('config_rule') != 'custom': continue
        if not config_rule.custom.resource_key.startswith('_connect/'): continue
        connect_attribute = config_rule.custom.resource_key.replace('_connect/', '')
        connect_rules[connect_attribute] = config_rule.custom.resource_value
    return connect_rules
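
A small usage sketch, reusing the _connect/... rule shape visible in the QKD descriptor above (the address value is a placeholder):

from common.proto.context_pb2 import ConfigActionEnum, DeviceConfig

device_config = DeviceConfig()
rule = device_config.config_rules.add()
rule.action = ConfigActionEnum.CONFIGACTION_SET
rule.custom.resource_key = '_connect/address'
rule.custom.resource_value = '127.0.0.1'

# The '_connect/' prefix is stripped, leaving the attribute name as key:
assert get_connect_rules(device_config) == {'address': '127.0.0.1'}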
+# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 # Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -14,7 +15,10 @@
 
 anytree==2.8.0
 APScheduler>=3.10.4
+APScheduler>=3.10.4
 confluent-kafka==2.3.*
 kafka-python==2.0.6
 numpy==2.0.1
 pytz>=2025.2
+deepdiff==6.7.*
+pygnmi==0.8.14
# Copyright 2022-2024 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

_DEVICE_ID          = 'DeviceId({device_uuid:s})'
_ENDPOINT_ID        = 'EndpointId({endpoint_uuid:s})'
_KPI                = 'Kpi({kpi_uuid:s})'
_DEVICE_ENDPOINT_ID = _DEVICE_ID + '/' + _ENDPOINT_ID
_RESOURCE           = 'Resource({resource_data:s})'
_RESOURCE_KEY       = 'Resource(key={resource_key:s})'
_RESOURCE_KEY_VALUE = 'Resource(key={resource_key:s}, value={resource_value:s})'
_SUBSCRIPTION       = 'Subscription(key={subscr_key:s}, duration={subscr_duration:s}, interval={subscr_interval:s})'
_SAMPLE_TYPE        = 'SampleType({sample_type_id:s}/{sample_type_name:s})'
_ERROR              = 'Error({error:s})'

ERROR_MISSING_DRIVER = _DEVICE_ID + ' has not been added to this Device instance'
ERROR_MISSING_KPI    = _KPI + ' not found'

ERROR_BAD_RESOURCE   = _DEVICE_ID + ': GetConfig retrieved malformed ' + _RESOURCE
ERROR_UNSUP_RESOURCE = _DEVICE_ID + ': GetConfig retrieved unsupported ' + _RESOURCE

ERROR_GET            = _DEVICE_ID + ': Unable to Get ' + _RESOURCE_KEY + '; ' + _ERROR
ERROR_GET_INIT       = _DEVICE_ID + ': Unable to Get Initial ' + _RESOURCE_KEY + '; ' + _ERROR
ERROR_DELETE         = _DEVICE_ID + ': Unable to Delete ' + _RESOURCE_KEY_VALUE + '; ' + _ERROR
ERROR_SET            = _DEVICE_ID + ': Unable to Set ' + _RESOURCE_KEY_VALUE + '; ' + _ERROR

ERROR_SAMPLETYPE     = _DEVICE_ENDPOINT_ID + ': ' + _SAMPLE_TYPE + ' not supported'

ERROR_SUBSCRIBE      = _DEVICE_ID + ': Unable to Subscribe ' + _SUBSCRIPTION + '; ' + _ERROR
ERROR_UNSUBSCRIBE    = _DEVICE_ID + ': Unable to Unsubscribe ' + _SUBSCRIPTION + '; ' + _ERROR
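
These templates compose with str.format using keyword arguments; for example (illustrative values):

msg = ERROR_GET.format(device_uuid='dev-1', resource_key='/interface[eth0]', error='timeout')
# -> 'DeviceId(dev-1): Unable to Get Resource(key=/interface[eth0]); Error(timeout)'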
# Copyright 2022-2025 ETSI SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import uuid
import logging
from .collector_api._Collector               import _Collector
from .collector_api.DriverInstanceCache      import get_driver
from common.proto.kpi_manager_pb2            import KpiId
from common.tools.context_queries.Device     import get_device
from common.tools.context_queries.EndPoint   import get_endpoint_names

LOGGER = logging.getLogger(__name__)

def get_subscription_parameters(
        kpi_id : str, kpi_manager_client, context_client, duration, interval
        ) -> list[tuple] | None:
    """
    Method to get subscription parameters based on KPI ID.
    Returns a list of tuples with subscription parameters.
    Each tuple contains:
        - Subscription ID (str)
        - Dictionary with:
            - "kpi" (str): KPI ID
            - "endpoint" (str): Endpoint name (e.g., 'eth0')
            - "resource" (str): Resource type (e.g., 'interface')
        - Sample interval (float)
        - Report interval (float)
    If the KPI ID is not found or the device is not available, returns None.
    Preconditions:
        - A KPI Descriptor must be added in KPI DB with correct device_id.
        - The device must be available in the context.
    """
    kpi_id_obj = KpiId()
    kpi_id_obj.kpi_id.uuid = kpi_id              # pyright: ignore[reportAttributeAccessIssue]
    kpi_descriptor = kpi_manager_client.GetKpiDescriptor(kpi_id_obj)
    if not kpi_descriptor:
        LOGGER.warning(f"KPI ID: {kpi_id} - Descriptor not found. Skipping...")
        return None

    kpi_sample_type = kpi_descriptor.kpi_sample_type
    LOGGER.info(f"KPI Descriptor (KPI Sample Type): {kpi_sample_type}")

    device = get_device(
        context_client       = context_client,
        device_uuid          = kpi_descriptor.device_id.device_uuid.uuid,
        include_config_rules = False,
        include_components   = False,
    )
    if not device:
        raise Exception(f"KPI ID: {kpi_id} - Device not found for KPI descriptor.")
    endpoints = device.device_endpoints
    endpoint_ids = [endpoint.endpoint_id for endpoint in endpoints]

    # Resolve endpoint UUIDs to human-readable endpoint names
    device_names, endpoint_data = get_endpoint_names(
        context_client = context_client,
        endpoint_ids   = endpoint_ids
    )

    subscriptions = []
    for endpoint_id in endpoint_ids:
        sub_id = str(uuid.uuid4())  # Unique subscription ID per endpoint
        endpoint_name = endpoint_data[endpoint_id.endpoint_uuid.uuid][0]
        LOGGER.debug(f"KPI ID: {kpi_id} - Endpoint name: {endpoint_name}")
        subscriptions.append(
            (
                sub_id,
                {
                    "kpi"      : kpi_sample_type,   # The request targets a single KPI, i.e. one sample type
                    "endpoint" : endpoint_name,     # Endpoint name (e.g., 'eth0')
                    "resource" : 'interface',       # Resource type (currently fixed to 'interface')
                },
                float(duration),
                float(interval),
            )
        )
    return subscriptions
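
# Usage sketch (hypothetical client instances and KPI UUID; the duration and
# interval values are examples only):
#   subscriptions = get_subscription_parameters(
#       kpi_id             = '123e4567-e89b-12d3-a456-426614174000',
#       kpi_manager_client = kpi_manager_client,
#       context_client     = context_client,
#       duration           = 60,
#       interval           = 5,
#   )
#   # Each entry: (sub_id, {'kpi': <sample_type>, 'endpoint': 'eth0', 'resource': 'interface'}, 60.0, 5.0)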


def get_collector_by_kpi_id(kpi_id : str, kpi_manager_client, context_client, driver_instance_cache
                            ) -> _Collector:
    """
    Get a collector instance based on a KPI ID.
    Preconditions:
        - A KPI Descriptor must exist in the KPI DB with a correct device_id.
        - The device must be available in the context.
    Returns:
        - The collector instance for the KPI's device.
    Raises:
        - Exception if the KPI descriptor, device, or collector cannot be found.
    """
    LOGGER.info(f"Getting collector for KPI ID: {kpi_id}")
    kpi_id_obj = KpiId()
    kpi_id_obj.kpi_id.uuid = kpi_id              # pyright: ignore[reportAttributeAccessIssue]
    kpi_descriptor = kpi_manager_client.GetKpiDescriptor(kpi_id_obj)
    if not kpi_descriptor:
        raise Exception(f"KPI ID: {kpi_id} - Descriptor not found.")

    device = get_device(
        context_client       = context_client,
        device_uuid          = kpi_descriptor.device_id.device_uuid.uuid,
        include_config_rules = True,
        include_components   = False,
    )
    if not device:
        raise Exception(f"KPI ID: {kpi_id} - Device not found for KPI descriptor.")

    # Resolve the device driver (collector) from the shared instance cache
    collector : _Collector = get_driver(driver_instance_cache, device)
    if collector is None:
        raise Exception(f"KPI ID: {kpi_id} - Collector not found for device {device.device_id.device_uuid.uuid}.")
    return collector
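
# Usage sketch (assumes a populated DriverInstanceCache; combined with the
# subscription-parameters helper above, mirroring how the backend service is
# expected to use both):
#   collector = get_collector_by_kpi_id(
#       kpi_id, kpi_manager_client, context_client, driver_instance_cache)
#   responses = collector.SubscribeState(get_subscription_parameters(
#       kpi_id, kpi_manager_client, context_client, duration=60, interval=5))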
@@ -16,22 +16,27 @@ import json
 import time
 import logging
 import threading
-from typing           import Any, Dict, Tuple
 
+from datetime         import datetime, timezone
+from .HelperMethods   import get_collector_by_kpi_id, get_subscription_parameters
+from confluent_kafka  import Producer as KafkaProducer
+from confluent_kafka  import Consumer as KafkaConsumer
+from confluent_kafka  import KafkaError
 from common.Constants import ServiceNameEnum
 from common.Settings  import get_service_port_grpc
-from confluent_kafka  import Consumer as KafkaConsumer
-from confluent_kafka  import KafkaError
-from confluent_kafka  import Producer as KafkaProducer
-from datetime         import datetime, timezone
+from typing           import Any, Dict
+
+from .collector_api._Collector               import _Collector
+from .collector_api.DriverInstanceCache      import DriverInstanceCache, get_driver
+from .collectors.emulated.EmulatedCollector  import EmulatedCollector
 from common.method_wrappers.Decorator        import MetricsPool
-from common.proto.kpi_manager_pb2            import KpiId
-from common.tools.context_queries.Device     import get_device
 from common.tools.kafka.Variables            import KafkaConfig, KafkaTopic
 from common.tools.service.GenericGrpcService import GenericGrpcService
+from common.tools.context_queries.Device     import get_device
+from common.proto.kpi_manager_pb2            import KpiId
 
+from kpi_manager.client.KpiManagerClient     import KpiManagerClient
 from context.client.ContextClient            import ContextClient
-from telemetry.backend.collectors.emulated.EmulatedCollector import EmulatedCollector
-from kpi_manager.client.KpiManagerClient     import KpiManagerClient
 
 
 LOGGER       = logging.getLogger(__name__)
 METRICS_POOL = MetricsPool('TelemetryBackend', 'backendService')
@@ -41,7 +46,7 @@ class TelemetryBackendService(GenericGrpcService):
     Class listens for request on Kafka topic, fetches requested metrics from device.
     Produces metrics on both TELEMETRY_RESPONSE and VALUE kafka topics.
     """
-    def __init__(self, cls_name : str = __name__) -> None:
+    def __init__(self, driver_instance_cache : DriverInstanceCache, cls_name : str = __name__) -> None:
         LOGGER.info('Init TelemetryBackendService')
         port = get_service_port_grpc(ServiceNameEnum.TELEMETRYBACKEND)
         super().__init__(port, cls_name=cls_name)
@@ -49,7 +54,9 @@ class TelemetryBackendService(GenericGrpcService):
         self.kafka_consumer = KafkaConsumer({'bootstrap.servers' : KafkaConfig.get_kafka_address(),
                                             'group.id'           : 'backend',
                                             'auto.offset.reset'  : 'latest'})
-        self.collector          = None
+        self.driver_instance_cache = driver_instance_cache
+        self.device_collector      = None
+        self.collector             = None    # Kept temporarily; to be replaced by device_collector and removed
         self.context_client        = ContextClient()
         self.kpi_manager_client    = KpiManagerClient()
         self.active_jobs = {}
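
# Wiring sketch for the new constructor signature (the DriverFactory bootstrap
# below is an assumption, mirroring the Device component's cache setup; adapt
# to the actual service entry point):
#   driver_factory        = DriverFactory(DRIVERS)
#   driver_instance_cache = DriverInstanceCache(driver_factory)
#   TelemetryBackendService(driver_instance_cache).start()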
@@ -79,9 +86,7 @@ class TelemetryBackendService(GenericGrpcService):
                     LOGGER.error("Consumer error: {}".format(receive_msg.error()))
                     break
             try:
-                collector = json.loads(
-                    receive_msg.value().decode('utf-8')
-                )
+                collector = json.loads(receive_msg.value().decode('utf-8'))
                 collector_id = receive_msg.key().decode('utf-8')
                 LOGGER.debug('Received Collector: {:} - {:}'.format(collector_id, collector))
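
# Shape of a request consumed above (illustrative payload; the 'duration' and
# 'interval' keys are inferred from the thread arguments further below, so
# treat them as assumptions):
#   key   : b'<collector_id UUID>'
#   value : b'{"kpi_id": "<KPI UUID>", "duration": 60, "interval": 5}'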


@@ -93,7 +98,7 @@ class TelemetryBackendService(GenericGrpcService):
                     if collector_id not in self.active_jobs:
                         stop_event = threading.Event()
                         self.active_jobs[collector_id] = stop_event
-                        threading.Thread(target = self.CollectorHandler,
+                        threading.Thread(target = self.GenericCollectorHandler,
                                     args=(
                                         collector_id,
                                         collector['kpi_id'],
@@ -106,7 +111,7 @@ class TelemetryBackendService(GenericGrpcService):
                             def stop_after_duration(completion_time, stop_event):
                                 time.sleep(completion_time)
                                 if not stop_event.is_set():
-                                    LOGGER.warning(f"Execution duration ({completion_time}) completed of Collector: {collector_id}")
+                                    LOGGER.info(f"Execution duration ({completion_time}) completed for Collector: {collector_id}")
                                     self.TerminateCollector(collector_id)
 
                             duration_thread = threading.Thread(
@@ -119,38 +124,60 @@ class TelemetryBackendService(GenericGrpcService):
             except Exception as e:
                 LOGGER.warning("Unable to consume message from topic: {:}. ERROR: {:}".format(KafkaTopic.TELEMETRY_REQUEST.value, e))
 
-    def CollectorHandler(self, collector_id, kpi_id, duration, interval, stop_event):
+    def GenericCollectorHandler(self, collector_id, kpi_id, duration, interval, stop_event):
         """
         Method to handle a collector request.
         """
-        device_type, end_points = self.get_endpoint_detail(kpi_id)
-
-        if end_points is None:
-            LOGGER.warning("KPI ID: {:} - Endpoints not found. Skipping...".format(kpi_id))
-            return
-
-        if device_type and "emu" in device_type:
-            LOGGER.info("KPI ID: {:} - Device Type: {:} - Endpoints: {:}".format(kpi_id, device_type, end_points))
-            subscription = [collector_id, end_points, duration, interval]
-            self.EmulatedCollectorHandler(subscription, duration, collector_id, kpi_id, stop_event)
-        else:
-            LOGGER.warning("KPI ID: {:} - Device Type: {:} - Not Supported".format(kpi_id, device_type))
+        # CONFIRM: get_collector_by_kpi_id works correctly; test case in the integration tests.
+        self.device_collector = get_collector_by_kpi_id(
+            kpi_id, self.kpi_manager_client, self.context_client, self.driver_instance_cache)
+        if not self.device_collector:
+            LOGGER.warning(f"KPI ID: {kpi_id} - Collector not found. Skipping...")
+            raise Exception(f"KPI ID: {kpi_id} - Collector not found.")
+        LOGGER.debug(f"KPI ID: {kpi_id} - Collector resolved.")
+
+        # CONFIRM: get_subscription_parameters works correctly; test case in the telemetry backend tests.
+        resource_to_subscribe = get_subscription_parameters(
+            kpi_id, self.kpi_manager_client, self.context_client, duration, interval
+        )
+        if not resource_to_subscribe:
+            LOGGER.warning(f"KPI ID: {kpi_id} - Resource to subscribe not found. Skipping...")
+            raise Exception(f"KPI ID: {kpi_id} - Resource to subscribe not found.")
+        LOGGER.debug(f"KPI ID: {kpi_id} - Subscription parameters resolved.")
+
+        responses = self.device_collector.SubscribeState(resource_to_subscribe)
+
+        for status in responses:
+            if isinstance(status, Exception):
+                LOGGER.error(f"Subscription failed for KPI ID: {kpi_id} - Error: {status}")
+                raise status
+            else:
+                LOGGER.info(f"Subscription successful for KPI ID: {kpi_id} - Status: {status}")
        LOGGER.info
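
# Minimal driver sketch for the new handler path (hypothetical wiring; the
# Kafka request loop is bypassed and all values are examples):
#   service    = TelemetryBackendService(driver_instance_cache)
#   stop_event = threading.Event()
#   service.GenericCollectorHandler(
#       collector_id = str(uuid.uuid4()),
#       kpi_id       = '<KPI UUID>',
#       duration     = 60,
#       interval     = 5,
#       stop_event   = stop_event,
#   )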