Commit 16d012eb authored by delacal

- Added GetConfiguredACLRules RPC and ACLRules message to AM protobuf file to allow exporting ACL rules created by this component to other components.
- Implemented the new RPC in the AM component so that the ACL rules it creates can be exported to other components (see the client sketch after this list).
- Increased KPI monitoring aggregation time interval from 5 seconds to 60 seconds.
- Cleaned up CAD protobuf file.
- Added useful logging statements to CAD and AM components.
- Formatted CAD and AM code.
- Removed unused gRPC channels in AM.
- Restructured the cryptomining detector model directory.
- Renamed cryptomining detector model file to include relevant details to help better identify the model.
- Improved cryptomining detector model loading by removing the need for the model filename to be hardcoded in the code.
- Removed old cryptomining detector models.
- Refactored CAD code to improve readability.
- Added script to automatically copy compiled protobuf files to DAD component.
- Updated complete_deploy.sh to conform to the new TFS Release 2 deployment process.
- Updated CAD and AM protobuf file messages and RPCs to improve readability and clarity.
- Updated the CAD and AM clients and services to use the new RPC names.
- Changed CAD output message to use the cryptomining detector model filename as identifier.
- Added ConnectionInfo class in CAD component to facilitate comparison and serialization of connection information.
- Removed test statements used to check the time taken by the cryptomining detector to perform the inference using different batch sizes.
- Added a function to properly measure the time taken by the cryptomining detector model to process a batch of connection statistics.
- Added IP addresses corresponding to known cryptomining connections to validate the cryptomining detector model performance in the classification task.
- Implemented several metrics in the CAD component to monitor the performance of the cryptomining detector model in the classification task.
- Implemented a function to export the performance metrics of the cryptomining detector in the classification task to an external file.
- Added script to retrieve performance metrics of the cryptomining detector in the classification task from CAD container.
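
A minimal client-side sketch of the new GetConfiguredACLRules RPC follows. It assumes the generated Python stubs (l3_attackmitigator_pb2_grpc, context_pb2) and a placeholder AM endpoint; the client wrappers actually used inside TFS may differ.

# Sketch: query the AM component for the ACL rules it has configured.
# The endpoint below is a placeholder, not taken from this commit.
import grpc

import context_pb2
import l3_attackmitigator_pb2_grpc


def get_configured_acl_rules(address="l3-attackmitigatorservice:10002"):
    with grpc.insecure_channel(address) as channel:
        stub = l3_attackmitigator_pb2_grpc.L3AttackmitigatorStub(channel)
        # GetConfiguredACLRules takes context.Empty and returns ACLRules,
        # whose acl_rules field holds context.ConfigRule messages.
        reply = stub.GetConfiguredACLRules(context_pb2.Empty())
        return list(reply.acl_rules)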
parent bec9ad8d
2 merge requests: !142 Release TeraFlowSDN 2.1, !93 Updated L3 components + scalability
Showing 55 additions and 55943 deletions
source my_deploy.sh; ./deploy.sh; source tfs_runtime_env_vars.sh; ofc22/run_test_01_bootstrap.sh; ofc22/run_test_02_create_service.sh
./src/tests/ofc22/run_test_03_delete_service.sh
./src/tests/ofc22/run_test_04_cleanup.sh
source src/tests/ofc22/deploy_specs.sh
source my_deploy.sh
./deploy/all.sh
source tfs_runtime_env_vars.sh
ofc22/run_test_01_bootstrap.sh
ofc22/run_test_02_create_service.sh
[2022-09-30 10:06:58,753] {/var/teraflow/monitoring/service/MonitoringServiceServicerImpl.py:151} INFO - IncludeKpi
INFO:monitoringservice-server:IncludeKpi
[2022-09-30 10:06:58,754] {/var/teraflow/monitoring/service/MonitoringServiceServicerImpl.py:194} INFO - getting Kpi by KpiID
INFO:monitoringservice-server:getting Kpi by KpiID
[2022-09-30 10:06:58,764] {/var/teraflow/monitoring/service/MonitoringServiceServicerImpl.py:215} ERROR - GetKpiDescriptor exception
Traceback (most recent call last):
File "/var/teraflow/monitoring/service/MonitoringServiceServicerImpl.py", line 196, in GetKpiDescriptor
kpi_db = self.sql_db.get_KPI(int(request.kpi_id.uuid))
ValueError: invalid literal for int() with base 10: 'kpi_id {\n uuid: "17"\n}\n'
ERROR:monitoringservice-server:GetKpiDescriptor exception
Traceback (most recent call last):
File "/var/teraflow/monitoring/service/MonitoringServiceServicerImpl.py", line 196, in GetKpiDescriptor
kpi_db = self.sql_db.get_KPI(int(request.kpi_id.uuid))
ValueError: invalid literal for int() with base 10: 'kpi_id {\n uuid: "17"\n}\n'
[2022-09-30 10:06:58,780] {/var/teraflow/monitoring/service/MonitoringServiceServicerImpl.py:156} WARNING - Ignoring sample with KPIId(kpi_id {
uuid: "kpi_id {\n uuid: \"17\"\n}\n"
}
): not found in database
WARNING:monitoringservice-server:Ignoring sample with KPIId(kpi_id {
uuid: "kpi_id {\n uuid: \"17\"\n}\n"
}
): not found in database
[2022-09-30 10:06:58,807] {/var/teraflow/monitoring/service/MonitoringServiceServicerImpl.py:151} INFO - IncludeKpi
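
The traceback above shows GetKpiDescriptor failing because the uuid field of the incoming KpiId carries the textual form of a whole KpiId message instead of the bare identifier, so int() cannot parse it. A minimal sketch of that mismatch, assuming the monitoring_pb2 message layout implied by the log (the actual caller code is not shown here):

# Sketch of the failure mode in the log above: the text form of a whole KpiId
# message ends up inside the uuid field, so int(request.kpi_id.uuid) raises
# ValueError. Module and message names are assumptions based on the log.
from monitoring_pb2 import KpiId

original = KpiId()
original.kpi_id.uuid = "17"

# Buggy pattern: serialising the whole message into the uuid string field.
request = KpiId()
request.kpi_id.uuid = str(original)   # -> 'kpi_id {\n  uuid: "17"\n}\n'

# Correct pattern: copy the message (or assign only the identifier string).
request = KpiId()
request.CopyFrom(original)
assert int(request.kpi_id.uuid) == 17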
#!/bin/bash
# Copyright 2022-2023 ETSI TeraFlowSDN - TFS OSG (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
########################################################################################################################
# Define your deployment settings here
########################################################################################################################
# If not already set, set the name of the Kubernetes namespace to deploy to.
export TFS_K8S_NAMESPACE=${TFS_K8S_NAMESPACE:-"tfs"}
# If not already set, set the list of components you want to build images for, and deploy.
export TFS_COMPONENTS=${TFS_COMPONENTS:-"context device automation policy service compute monitoring l3_attackmitigator l3_centralizedattackdetector webui"}
########################################################################################################################
# Automated steps start here
########################################################################################################################
echo "Exposing GRPC ports for components..."
for COMPONENT in $TFS_COMPONENTS; do
    echo "Processing '$COMPONENT' component..."

    SERVICE_GRPC_PORT=$(kubectl get service ${COMPONENT}service --namespace $TFS_K8S_NAMESPACE -o 'jsonpath={.spec.ports[?(@.name=="grpc")].port}')
    echo "  '$COMPONENT' service port: $SERVICE_GRPC_PORT"
    if [ -z "${SERVICE_GRPC_PORT}" ]; then
        printf "\n"
        continue;
    fi

    COMPONENT_OBJNAME=$(echo "${COMPONENT}" | sed "s/\_/-/")

    PATCH='{"data": {"'${SERVICE_GRPC_PORT}'": "'$TFS_K8S_NAMESPACE'/'${COMPONENT_OBJNAME}service':'${SERVICE_GRPC_PORT}'"}}'
    #echo "PATCH: ${PATCH}"
    kubectl patch configmap nginx-ingress-tcp-microk8s-conf --namespace ingress --patch "${PATCH}"

    PORT_MAP='{"containerPort": '${SERVICE_GRPC_PORT}', "hostPort": '${SERVICE_GRPC_PORT}'}'
    CONTAINER='{"name": "nginx-ingress-microk8s", "ports": ['${PORT_MAP}']}'
    PATCH='{"spec": {"template": {"spec": {"containers": ['${CONTAINER}']}}}}'
    #echo "PATCH: ${PATCH}"
    kubectl patch daemonset nginx-ingress-microk8s-controller --namespace ingress --patch "${PATCH}"

    printf "\n"
done
echo "Done!"
pod=$(kubectl get pods -n "tfs" -l app=l3-centralizedattackdetectorservice | sed -n '2p' | cut -d " " -f1)
while true; do kubectl -n "tfs" cp $pod:prediction_accuracy.txt ./prediction_accuracy.txt; clear; cat prediction_accuracy.txt | tail -n 10; sleep 1; done
@@ -36,7 +36,7 @@ spec:
- containerPort: 9192
env:
- name: LOG_LEVEL
value: "DEBUG"
value: "INFO"
envFrom:
- secretRef:
name: qdb-data
@@ -36,7 +36,7 @@ spec:
- containerPort: 9192
env:
- name: LOG_LEVEL
value: "DEBUG"
value: "INFO"
readinessProbe:
exec:
command: ["/bin/grpc_health_probe", "-addr=:3030"]
@@ -25,5 +25,8 @@ export QDB_NAMESPACE="qdb"
export QDB_USERNAME="admin"
export QDB_PASSWORD="quest"
export QDB_TABLE="tfs_monitoring"
export QDB_REDEPLOY=""
export CRDB_DROP_DATABASE_IF_EXISTS="YES"
export CRDB_REDEPLOY="YES"
export NATS_REDEPLOY="YES"
export QDB_REDEPLOY="TRUE"
#!/bin/bash
# Set the variables for the remote host and destination directory
REMOTE_HOST="192.168.165.73"
DEST_DIR="/home/ubuntu/TeraflowDockerDistributed/l3_distributedattackdetector/proto"
# Copy the files to the remote host
sshpass -p "ubuntu" scp /home/ubuntu/tfs-ctrl-new/proto/src/python/l3_centralizedattackdetector_pb2.py "$REMOTE_HOST:$DEST_DIR"
sshpass -p "ubuntu" scp /home/ubuntu/tfs-ctrl-new/proto/src/python/l3_centralizedattackdetector_pb2_grpc.py "$REMOTE_HOST:$DEST_DIR"
sshpass -p "ubuntu" scp /home/ubuntu/tfs-ctrl-new/proto/src/python/l3_attackmitigator_pb2.py "$REMOTE_HOST:$DEST_DIR"
sshpass -p "ubuntu" scp /home/ubuntu/tfs-ctrl-new/proto/src/python/l3_attackmitigator_pb2_grpc.py "$REMOTE_HOST:$DEST_DIR"
@@ -17,10 +17,12 @@ syntax = "proto3";
import "context.proto";
service L3Attackmitigator{
// Sends a greeting
rpc SendOutput (L3AttackmitigatorOutput) returns (context.Empty) {}
// Sends another greeting
// Perform Mitigation
rpc PerformMitigation (L3AttackmitigatorOutput) returns (context.Empty) {}
// Get Mitigation
rpc GetMitigation (context.Empty) returns (context.Empty) {}
// Get Configured ACL Rules
rpc GetConfiguredACLRules (context.Empty) returns (ACLRules) {}
}
@@ -41,3 +41,7 @@ message L3AttackmitigatorOutput {
  float time_start = 14;
  float time_end = 15;
}
+message ACLRules {
+  repeated context.ConfigRule acl_rules = 1;
+}
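
A sketch of how the AM servicer can implement the new RPC, assuming the generated l3_attackmitigator_pb2(_grpc) modules and an internal list of context.ConfigRule objects; the actual implementation added in this commit may differ.

# Sketch: return the ACL config rules stored by the AM component.
import l3_attackmitigator_pb2
import l3_attackmitigator_pb2_grpc


class L3AttackmitigatorServicerSketch(l3_attackmitigator_pb2_grpc.L3AttackmitigatorServicer):
    def __init__(self):
        # context.ConfigRule messages collected when mitigations are applied.
        self.configured_acl_config_rules = []

    def GetConfiguredACLRules(self, request, context):
        acl_rules = l3_attackmitigator_pb2.ACLRules()
        for config_rule in self.configured_acl_config_rules:
            # Repeated message fields accept append(); each call copies the rule.
            acl_rules.acl_rules.append(config_rule)
        return acl_rules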
@@ -17,14 +17,14 @@ syntax = "proto3";
import "context.proto";
service L3Centralizedattackdetector {
// Sends single input to the ML model in the CAD component
rpc SendInput (L3CentralizedattackdetectorMetrics) returns (Empty) {}
// Analyze single input to the ML model in the CAD component
rpc AnalyzeConnectionStatistics (L3CentralizedattackdetectorMetrics) returns (Empty) {}
// Sends a batch of inputs to the ML model in the CAD component
rpc SendInputBatch (L3CentralizedattackdetectorModelInput) returns (Empty) {}
// Analyze a batch of inputs to the ML model in the CAD component
rpc AnalyzeBatchConnectionStatistics (L3CentralizedattackdetectorBatchInput) returns (Empty) {}
// DAD request of the list of features in CAD
rpc SendFeatures (Empty) returns (AutoFeatures) {}
// Get the list of features used by the ML model in the CAD component
rpc GetFeaturesIds (Empty) returns (AutoFeatures) {}
}
message Feature {
@@ -35,18 +35,6 @@ message L3CentralizedattackdetectorMetrics {
// Input sent by the DAD compoenent to the ML model integrated in the CAD component.
// Machine learning model features
-  /*
-  float c_pkts_all = 1;
-  float c_ack_cnt = 2;
-  float c_bytes_uniq = 3;
-  float c_pkts_data = 4;
-  float c_bytes_all = 5;
-  float s_pkts_all = 6;
-  float s_ack_cnt = 7;
-  float s_bytes_uniq = 8;
-  float s_pkts_data = 9;
-  float s_bytes_all = 10;*/
repeated Feature features = 1;
ConnectionMetadata connection_metadata = 2;
@@ -65,23 +53,16 @@ message ConnectionMetadata {
float time_end = 10;
}
-// Collection of int values representing ML features
+// Collection of values representing ML features
message AutoFeatures {
-  repeated float autoFeatures = 1;
+  repeated float auto_features = 1;
}
-// Collection (batcb) of model inputs that will be sent to the model
-message L3CentralizedattackdetectorModelInput {
+// Collection (batch) of model inputs that will be sent to the model
+message L3CentralizedattackdetectorBatchInput {
  repeated L3CentralizedattackdetectorMetrics metrics = 1;
}
message Empty {
  string message = 1;
}
-// Collections or streams?
-/*
-message InputCollection {
-  repeated model_input = 1;
-}
-*/
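
A small client sketch for the renamed GetFeaturesIds RPC, assuming the generated l3_centralizedattackdetector_pb2(_grpc) modules that the copy script above distributes to the DAD component; the endpoint is a placeholder.

# Sketch: ask CAD which feature ids its ML model expects.
import grpc

import l3_centralizedattackdetector_pb2 as cad_pb2
import l3_centralizedattackdetector_pb2_grpc as cad_pb2_grpc


def get_feature_ids(address="l3-centralizedattackdetectorservice:10001"):
    with grpc.insecure_channel(address) as channel:
        stub = cad_pb2_grpc.L3CentralizedattackdetectorStub(channel)
        # GetFeaturesIds takes the CAD-local Empty and returns AutoFeatures,
        # whose auto_features field is a list of floats.
        reply = stub.GetFeaturesIds(cad_pb2.Empty())
        return list(reply.auto_features)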
s.sh (new file, mode 100755)
if [ -s /etc/docker/daemon.json ]; then cat /etc/docker/daemon.json; else echo '{}'; fi \
| jq 'if has("insecure-registries") then . else .+ {"insecure-registries": []} end' -- \
| jq '."insecure-registries" |= (.+ ["localhost:32000"] | unique)' -- \
| tee tmp.daemon.json
sudo mv tmp.daemon.json /etc/docker/daemon.json
sudo chown root:root /etc/docker/daemon.json
sudo chmod 600 /etc/docker/daemon.json
@@ -128,9 +128,6 @@ class DeviceServiceServicerImpl(DeviceServiceServicer):
# TODO: use of datastores (might be virtual ones) to enable rollbacks
resources_to_set, resources_to_delete = compute_rules_to_add_delete(device, request)
-for resource in resources_to_set:
-    LOGGER.debug('Resource to set: %s', resource)
errors = []
errors.extend(configure_rules(device, driver, resources_to_set))
@@ -193,9 +193,6 @@ def compute_rules_to_add_delete(
def configure_rules(device : Device, driver : _Driver, resources_to_set : List[Tuple[str, Any]]) -> List[str]:
if len(resources_to_set) == 0: return []
-for resource_key, resource_value in resources_to_set:
-    LOGGER.debug('Setting config rule: %s = %s', resource_key, resource_value)
results_setconfig = driver.SetConfig(resources_to_set)
results_setconfig = [
@@ -69,7 +69,6 @@ class DriverFactory:
field_candidate_driver_classes = set()
for field_value in field_values:
LOGGER.info("field_value: %s", field_value)
if field_enum_values is not None and field_value not in field_enum_values:
raise UnsupportedFilterFieldValueException(field_name, field_value, field_enum_values)
field_indice_drivers = field_indice.get(field_value)
@@ -80,13 +80,9 @@ def get_driver(driver_instance_cache : DriverInstanceCache, device : Device) ->
driver : _Driver = driver_instance_cache.get(device_uuid)
if driver is not None: return driver
-LOGGER.info('[get_driver] device = {:s}'.format(str(device)))
driver_filter_fields = get_device_driver_filter_fields(device)
connect_rules = get_connect_rules(device.device_config)
-LOGGER.info('[get_driver] driver_filter_fields = {:s}'.format(str(driver_filter_fields)))
-#LOGGER.info('[get_driver] connect_rules = {:s}'.format(str(connect_rules)))
address = connect_rules.get('address', '127.0.0.1')
@@ -110,6 +106,4 @@ def get_driver(driver_instance_cache : DriverInstanceCache, device : Device) ->
def preload_drivers(driver_instance_cache : DriverInstanceCache) -> None:
context_client = ContextClient()
devices = context_client.ListDevices(Empty())
-for device in devices.devices:
-    LOGGER.info('[preload_drivers] device = {:s}'.format(str(device)))
-    get_driver(driver_instance_cache, device)
+for device in devices.devices: get_driver(driver_instance_cache, device)