Commit 053968fd authored by delacal's avatar delacal

Finished exp1 development

parent e15922c0
2 merge requests: !142 Release TeraFlowSDN 2.1, !128 Fixes on L3 Cybersecurity components
cad.txt 0 → 100644
cad_log.txt 0 → 100644
INFO:__main__:Starting...
DEBUG:l3_centralizedattackdetector.service.l3_centralizedattackdetectorService:Starting Service (tentative endpoint: 0.0.0.0:10001, max_workers: 10)...
INFO:l3_centralizedattackdetector.service.l3_centralizedattackdetectorServiceServicerImpl:Creating Centralized Attack Detector Service
INFO:l3_centralizedattackdetector.service.l3_centralizedattackdetectorServiceServicerImpl:Cryptomining Detector Features: [3.0, 5.0, 7.0, 8.0, 9.0, 17.0, 19.0, 21.0, 22.0, 23.0]
INFO:l3_centralizedattackdetector.service.l3_centralizedattackdetectorServiceServicerImpl:Batch size: 10
DEBUG:monitoring.client.MonitoringClient:Creating channel to 10.152.183.227:7070...
DEBUG:monitoring.client.MonitoringClient:Channel created
DEBUG:l3_attackmitigator.client.l3_attackmitigatorClient:Creating channel to l3-attackmitigatorservice:10002...
DEBUG:l3_attackmitigator.client.l3_attackmitigatorClient:Channel created
INFO:l3_centralizedattackdetector.service.l3_centralizedattackdetectorServiceServicerImpl:This replica's identifier is: 84c48a95-dedc-4d31-a2f8-358a0237f320
INFO:l3_centralizedattackdetector.service.l3_centralizedattackdetectorService:Listening on 0.0.0.0:10001...
DEBUG:l3_centralizedattackdetector.service.l3_centralizedattackdetectorService:Service started
./src/tests/ofc22/run_test_03_delete_service.sh
./src/tests/ofc22/run_test_04_cleanup.sh
source src/tests/ofc22/deploy_specs.sh
source my_deploy.sh
./deploy/all.sh
...
timestamp_first_req,timestamp_last_req,total_time,batch_size
1684439500.7262454,1684439518.696577,17.97033166885376,256
1684439518.7009058,1684439536.8979518,18.197046041488647,256
1684439536.9033403,1684439573.1907332,36.28739285469055,256
1684439573.1981168,1684439591.7938573,18.59574055671692,256
1684439591.801213,1684439626.0076125,34.20639944076538,256
1684439626.0122116,1684439660.0969336,34.08472204208374,256
1684439660.1023145,1684439694.6814883,34.57917380332947,256
1684439694.6866412,1684439713.177154,18.49051284790039,256
1684439713.183632,1684439748.8026614,35.61902952194214,256
1684439748.8648906,1684439768.0725336,19.207643032073975,256
1684439768.0773811,1684439786.7156172,18.638236045837402,256
1684439786.7640705,1684439805.4091456,18.64507508277893,256
1684439805.4138126,1684439841.2038834,35.79007077217102,256
1684439841.2108314,1684439860.10726,18.89642858505249,256
1684439860.1131806,1684439878.264643,18.15146231651306,256
1684439878.2704298,1684439896.9845712,18.714141368865967,256
1684439897.0112722,1684439915.293344,18.282071828842163,256
1684439915.2984266,1684439934.505506,19.20707941055298,256
1684439934.5113277,1684439952.389038,17.877710342407227,256
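The response-time samples above can be summarized directly from the CSV. A minimal sketch (a hypothetical helper, not part of this commit), assuming a file with the header shown above:

```python
import csv

def summarize(path="response_times.csv"):
    """Compute the mean batch processing time and the overall request
    throughput from the CAD response-times CSV."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    total_times = [float(r["total_time"]) for r in rows]
    batch_sizes = [int(r["batch_size"]) for r in rows]
    mean_time = sum(total_times) / len(total_times)
    # Throughput: total requests processed divided by total processing time
    throughput = sum(batch_sizes) / sum(total_times)
    return mean_time, throughput
```

With the data above (batch size 256, batch times of roughly 18-36 s), this gives a rough requests-per-second figure per replica.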
folder_name="cad_exp_1_results"
if [ -d "$folder_name" ]; then
echo "Folder '$folder_name' already exists. Emptying it..."
rm -r "$folder_name"/*
else
echo "Creating folder '$folder_name'..."
mkdir "$folder_name"
fi
while true; do
list=($(kubectl get pods --namespace tfs | grep l3-centralized | awk '{print $1}'))
#kubectl -n "tfs" cp $pod_name:exp_1.csv $folder_name/$pod_name.csv -c server
echo "Currently running pods:"
for item in "${list[@]}"; do
echo "$item"
kubectl -n "tfs" cp "$item:cad_metrics.csv" "$folder_name/$item.csv" -c server
done
sleep 2
done
# kubectl get pods --namespace tfs | grep l3-centralized | wc -l
# kubectl --namespace tfs get all | grep autoscaling/l3-centralizedattackdetectorservice-hpa | awk '{print $3}'
@@ -34,8 +34,14 @@ spec:
         - containerPort: 10001
         - containerPort: 9192
         env:
         - name: LOG_LEVEL
           value: "DEBUG"
+        - name: BATCH_SIZE
+          value: "256"
+        - name: CAD_CLASSIFICATION_THRESHOLD
+          value: "0.5"
+        - name: MONITORED_KPIS_TIME_INTERVAL_AGG
+          value: "60"
         readinessProbe:
           exec:
             command: ["/bin/grpc_health_probe", "-addr=:10001"]
@@ -44,10 +50,10 @@ spec:
             command: ["/bin/grpc_health_probe", "-addr=:10001"]
         resources:
           requests:
-            cpu: 250m
+            cpu: 100m
             memory: 512Mi
           limits:
-            cpu: 700m
+            cpu: 150m
             memory: 1024Mi
 ---
 apiVersion: v1
@@ -87,7 +93,7 @@ spec:
       name: cpu
       target:
         type: Utilization
-        averageUtilization: 80
+        averageUtilization: 0.99
   behavior:
     scaleDown:
       stabilizationWindowSeconds: 120
...
folder_name="exp1_results"
echo "Output folder: $folder_name"
if [ -d "$folder_name" ]; then
echo "Output folder '$folder_name' already exists."
echo "Removing all files in '$folder_name'..."
rm -r "$folder_name"/*
else
echo "Creating output folder '$folder_name'..."
mkdir "$folder_name"
fi
# Write column names to csv file
echo "number_active_pods,cpu_usage" > $folder_name/pod_info.csv
echo "Starting collection of CAD metrics..."
while true; do
list=($(kubectl get pods --namespace tfs | grep l3-centralized | awk '{print $1}'))
#kubectl -n "tfs" cp $pod_name:exp_1.csv $folder_name/$pod_name.csv -c server
echo "Currently running pods:"
for item in "${list[@]}"; do
echo "Pod: $item"
echo "Copying CAD metrics csv file to $folder_name/response_times_$item.csv"
kubectl -n "tfs" cp $item:response_times.csv $folder_name/response_times_$item.csv -c server
done
echo "Getting number of currently active pods and CPU usage..."
number_pods=$(kubectl get pods --namespace tfs | grep l3-centralized | wc -l)
cpu_usage=$(kubectl --namespace tfs get all | grep autoscaling/l3-centralizedattackdetectorservice-hpa | awk '{print $3}')
# check that cpu_usage does not contain "unknown"
if [[ $cpu_usage == *"unknown"* ]]; then
cpu_usage=0
fi
echo
echo "Number of currently active pods: $number_pods"
echo "CPU usage: $cpu_usage"
echo
echo "Writing number of currently active pods and CPU usage to $folder_name/pod_info.csv"
echo "$number_pods,$cpu_usage" >> $folder_name/pod_info.csv
sleep 1
# check if file "stop_exp1" exists
if [ -f "stop_exp1" ]; then
echo "File 'stop_exp1' found. Stopping experiment."
break
fi
done
echo "Collection of CAD metrics stopped."
rm stop_exp1
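The pod_info.csv trace collected by this script can be aggregated afterwards. A minimal sketch (a hypothetical helper, not part of this commit); it assumes the cpu_usage column holds either "0" (written when the HPA reported "unknown") or a current/target string such as "43%/80%", which is the TARGETS format printed by kubectl for an HPA:

```python
import csv

def summarize_pod_info(path="exp1_results/pod_info.csv"):
    """Aggregate the autoscaling trace: peak and mean replica count,
    plus the mean reported CPU utilization."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    pods = [int(r["number_active_pods"]) for r in rows]
    cpu = []
    for r in rows:
        field = r["cpu_usage"]
        # Assumption: "NN%/MM%" (current/target); "0" marks an unknown reading
        if "%" in field:
            cpu.append(float(field.split("%")[0]))
    return {
        "max_pods": max(pods),
        "mean_pods": sum(pods) / len(pods),
        "mean_cpu_pct": sum(cpu) / len(cpu) if cpu else None,
    }
```

Readings written as 0 are skipped rather than averaged in, so the CPU mean only reflects samples where the HPA had metrics available.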
#!/bin/bash
export BATCH_SIZE=${1:-10}
export TARGET_CPU_UTIL=${2:-80}
echo "exp1 parameters set to:"
echo "BATCH_SIZE: $BATCH_SIZE"
echo "TARGET_CPU_UTIL: $TARGET_CPU_UTIL"
CAD_manifest="./manifests/l3_centralizedattackdetectorservice.yaml"
# Copyright 2022-2023 ETSI TeraFlowSDN - TFS OSG (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# apiVersion: apps/v1
# kind: Deployment
# metadata:
# name: l3-centralizedattackdetectorservice
# spec:
# selector:
# matchLabels:
# app: l3-centralizedattackdetectorservice
# template:
# metadata:
# labels:
# app: l3-centralizedattackdetectorservice
# spec:
# terminationGracePeriodSeconds: 5
# containers:
# - name: server
# image: labs.etsi.org:5050/tfs/controller/l3_centralizedattackdetector:latest
# imagePullPolicy: Always
# ports:
# - containerPort: 10001
# - containerPort: 9192
# env:
# - name: LOG_LEVEL
# value: "DEBUG"
# - name: BATCH_SIZE
# value: "10"
# - name: CAD_CLASSIFICATION_THRESHOLD
# Update BATCH_SIZE value in the CAD manifest
echo "Updating BATCH_SIZE value in the CAD manifest to $BATCH_SIZE"
found=0
line_num=0
while read -r line; do
line_num=$((line_num+1))
if [[ $line == *"name: BATCH_SIZE"* ]]; then
found=1
fi
if [[ $found == 1 ]]; then
if [[ $line == *"value"* ]]; then
echo "Found BATCH_SIZE value in the CAD manifest at line $line_num"
sed -i "${line_num}s/\(value: \).*/\1\"$BATCH_SIZE\"/" $CAD_manifest
break
fi
fi
done < $CAD_manifest
# Update averageUtilization value in the CAD manifest to TARGET_CPU_UTIL
echo "Updating averageUtilization value in the CAD manifest to $TARGET_CPU_UTIL"
sed -i "s/\(averageUtilization: \).*/\1$TARGET_CPU_UTIL/" $CAD_manifest
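The line-counting sed update above is fragile: it rewrites the first `value:` line found after the `name: BATCH_SIZE` entry, which also matches the commented-out manifest copy earlier in the file. A minimal Python sketch of the same idea (a hypothetical helper, not part of this commit), assuming the `- name:` / `value:` layout shown in the manifest:

```python
import re

def set_manifest_value(text, name, new_value):
    """Replace the `value:` line that immediately follows `- name: <name>`
    in a Kubernetes manifest given as a string."""
    pattern = re.compile(
        r"(-\s+name:\s+" + re.escape(name) + r"\s*\n\s*value:\s*)\"?[^\"\n]*\"?"
    )
    # count=1 keeps the behavior of the shell script: only the first match is updated
    return pattern.sub(lambda m: m.group(1) + f'"{new_value}"', text, count=1)
```

Anchoring the replacement on the env-var name in one regex avoids tracking line numbers entirely.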
@@ -37,24 +37,19 @@ from common.tools.timestamp.Converters import timestamp_utcnow_to_float
 from common.proto.context_pb2 import Timestamp, SliceId, ConnectionId
 from l3_attackmitigator.client.l3_attackmitigatorClient import l3_attackmitigatorClient
 import uuid
 from common.method_wrappers.Decorator import MetricsPool, safe_and_metered_rpc_method
 import csv

 LOGGER = logging.getLogger(__name__)
 current_dir = os.path.dirname(os.path.abspath(__file__))

-# Demo constants
+# Constants
 DEMO_MODE = False
 ATTACK_IPS = ["37.187.95.110", "91.121.140.167", "94.23.23.52", "94.23.247.226", "149.202.83.171"]
+BATCH_SIZE = int(os.getenv("BATCH_SIZE", 10))

-BATCH_SIZE= 10
-METRICS_POOL = MetricsPool('l3_centralizedattackdetector', 'RPC')
+METRICS_POOL = MetricsPool("l3_centralizedattackdetector", "RPC")

 class ConnectionInfo:
@@ -101,8 +96,8 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
         self.cryptomining_detector_features_metadata = [float(x) for x in self.cryptomining_detector_features_metadata]
         self.cryptomining_detector_features_metadata.sort()
         LOGGER.info("Cryptomining Detector Features: " + str(self.cryptomining_detector_features_metadata))
-        LOGGER.info("Batch size: " + str(BATCH_SIZE))
+        LOGGER.info(f"Batch size: {BATCH_SIZE}")
         self.input_name = self.cryptomining_detector_model.get_inputs()[0].name
         self.label_name = self.cryptomining_detector_model.get_outputs()[0].name
@@ -125,7 +120,7 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
         self.l3_unique_attackers = 0
         self.l3_non_empty_time_interval = False
         self.active_requests = []
         self.monitoring_client = MonitoringClient()
@@ -165,8 +160,8 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
         self.attackmitigator_client = l3_attackmitigatorClient()

         # Environment variables
-        self.CLASSIFICATION_THRESHOLD = os.getenv("CAD_CLASSIFICATION_THRESHOLD", 0.5)
-        self.MONITORED_KPIS_TIME_INTERVAL_AGG = os.getenv("MONITORED_KPIS_TIME_INTERVAL_AGG", 60)
+        self.CLASSIFICATION_THRESHOLD = float(os.getenv("CAD_CLASSIFICATION_THRESHOLD", 0.5))
+        self.MONITORED_KPIS_TIME_INTERVAL_AGG = int(os.getenv("MONITORED_KPIS_TIME_INTERVAL_AGG", 60))

         # Constants
         self.NORMAL_CLASS = 0
@@ -191,19 +186,18 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
         self.total_predictions = 0
         self.false_positives = 0
         self.false_negatives = 0

         self.replica_uuid = uuid.uuid4()

         self.first_batch_request_time = 0
         self.last_batch_request_time = 0

         LOGGER.info("This replica's identifier is: " + str(self.replica_uuid))

-        csv_file_path = 'hola_mundo.csv'
-        col_names = ['timestamp_first_req', 'timestamp_last_req', 'total_time', 'batch_size']
-        with open(csv_file_path, 'w', newline='') as file:
+        self.response_times_csv_file_path = "response_times.csv"
+        col_names = ["timestamp_first_req", "timestamp_last_req", "total_time", "batch_size"]
+        with open(self.response_times_csv_file_path, "w", newline="") as file:
             writer = csv.writer(file)
             writer.writerow(col_names)
@@ -411,14 +405,14 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
         LOGGER.debug("batch_size: {}".format(batch_size))
         LOGGER.debug("x_data.shape: {}".format(x_data.shape))

-        inference_time_start = time.perf_counter()
+        inference_time_start = time.time()

         # Perform inference
         predictions = self.cryptomining_detector_model.run(
             [self.prob_name], {self.input_name: x_data.astype(np.float32)}
         )[0]

-        inference_time_end = time.perf_counter()
+        inference_time_end = time.time()

         # Measure inference time
         inference_time = inference_time_end - inference_time_start
@@ -476,7 +470,7 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
             output_message["tag"] = self.NORMAL_CLASS

         return output_message

     """
     Classify connection as standard traffic or cryptomining attack and return results
     -input:
@@ -497,14 +491,14 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
         # Print input data shape
         LOGGER.debug("x_data.shape: {}".format(x_data.shape))

-        inference_time_start = time.perf_counter()
+        inference_time_start = time.time()

         # Perform inference
         predictions = self.cryptomining_detector_model.run(
             [self.prob_name], {self.input_name: x_data.astype(np.float32)}
         )[0]

-        inference_time_end = time.perf_counter()
+        inference_time_end = time.time()

         # Measure inference time
         inference_time = inference_time_end - inference_time_start
@@ -536,23 +530,25 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
         # Gather the predicted class, the probability of that class and other relevant information required to block the attack
         output_messages = []
         for i, request in enumerate(requests):
-            output_messages.append({
-                "confidence": None,
-                "timestamp": datetime.now().strftime("%d/%m/%Y %H:%M:%S"),
-                "ip_o": request.connection_metadata.ip_o,
-                "ip_d": request.connection_metadata.ip_d,
-                "tag_name": None,
-                "tag": None,
-                "flow_id": request.connection_metadata.flow_id,
-                "protocol": request.connection_metadata.protocol,
-                "port_o": request.connection_metadata.port_o,
-                "port_d": request.connection_metadata.port_d,
-                "ml_id": self.cryptomining_detector_file_name,
-                "service_id": request.connection_metadata.service_id,
-                "endpoint_id": request.connection_metadata.endpoint_id,
-                "time_start": request.connection_metadata.time_start,
-                "time_end": request.connection_metadata.time_end,
-            })
+            output_messages.append(
+                {
+                    "confidence": None,
+                    "timestamp": datetime.now().strftime("%d/%m/%Y %H:%M:%S"),
+                    "ip_o": request.connection_metadata.ip_o,
+                    "ip_d": request.connection_metadata.ip_d,
+                    "tag_name": None,
+                    "tag": None,
+                    "flow_id": request.connection_metadata.flow_id,
+                    "protocol": request.connection_metadata.protocol,
+                    "port_o": request.connection_metadata.port_o,
+                    "port_d": request.connection_metadata.port_d,
+                    "ml_id": self.cryptomining_detector_file_name,
+                    "service_id": request.connection_metadata.service_id,
+                    "endpoint_id": request.connection_metadata.endpoint_id,
+                    "time_start": request.connection_metadata.time_start,
+                    "time_end": request.connection_metadata.time_end,
+                }
+            )

             if predictions[i][1] >= self.CLASSIFICATION_THRESHOLD:
                 output_messages[i]["confidence"] = predictions[i][1]
@@ -571,21 +567,29 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
     + request: L3CentralizedattackdetectorMetrics object with connection features information
     -output: Empty object with a message about the execution of the function
     """

     @safe_and_metered_rpc_method(METRICS_POOL, LOGGER)
     def AnalyzeConnectionStatistics(self, request, context):
         # Perform inference with the data sent in the request
         if len(self.active_requests) == 0:
-            self.first_batch_request_time = time.perf_counter()
+            self.first_batch_request_time = time.time()

         self.active_requests.append(request)

-        if len(self.active_requests) == BATCH_SIZE:
-            logging.debug("Performing inference... {}".format(self.replica_uuid))
+        LOGGER.debug("active_requests length: {}".format(len(self.active_requests)))
+        LOGGER.debug("BATCH_SIZE: {}".format(BATCH_SIZE))
+        LOGGER.debug(len(self.active_requests) == BATCH_SIZE)
+        LOGGER.debug("type(len(self.active_requests)): {}".format(type(len(self.active_requests))))
+        LOGGER.debug("type(BATCH_SIZE): {}".format(type(BATCH_SIZE)))
+
+        if len(self.active_requests) >= BATCH_SIZE:
+            LOGGER.debug("Performing inference... {}".format(self.replica_uuid))
             inference_time_start = time.time()
             cryptomining_detector_output = self.perform_distributed_inference(self.active_requests)
             inference_time_end = time.time()
             LOGGER.debug("Inference performed in {} seconds".format(inference_time_end - inference_time_start))
             logging.info("Inference performed correctly")
@@ -659,7 +663,7 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
                     self.total_predictions += 1

                     # if False:
-                    notification_time_start = time.perf_counter()
+                    notification_time_start = time.time()

                     LOGGER.debug("Crypto attack detected")
@@ -672,7 +676,7 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
                         logging.info("Sending the connection information to the Attack Mitigator component...")
                         message = L3AttackmitigatorOutput(**cryptomining_detector_output[i])
                         response = self.attackmitigator_client.PerformMitigation(message)
-                        notification_time_end = time.perf_counter()
+                        notification_time_end = time.time()

                         self.am_notification_times.append(notification_time_end - notification_time_start)
@@ -705,8 +709,8 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
                         # logging.info("Attack Mitigator notified and received response: ", response.message)  # FIX No message received
                         logging.info("Attack Mitigator notified")

-                        #return Empty(message="OK, information received and mitigator notified abou the attack")
+                        # return Empty(message="OK, information received and mitigator notified abou the attack")
                     except Exception as e:
                         logging.error("Error notifying the Attack Mitigator component about the attack: ", e)
                         logging.error("Couldn't find l3_attackmitigator")
@@ -725,22 +729,25 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
                 self.total_predictions += 1

                 # return Empty(message="Ok, information received (no attack detected)")

             self.active_requests = []
+            self.last_batch_request_time = time.time()

-            csv_file_path = 'cad_metrics.csv'
-            self.last_batch_request_time = time.perf_counter()
-
-            col_values = [self.first_batch_request_time, self.last_batch_request_time,
-                          self.last_batch_request_time - self.first_batch_request_time, BATCH_SIZE]
-
-            with open(csv_file_path, 'a', newline='') as file:
+            col_values = [
+                self.first_batch_request_time,
+                self.last_batch_request_time,
+                self.last_batch_request_time - self.first_batch_request_time,
+                BATCH_SIZE,
+            ]
+
+            LOGGER.debug("col_values: {}".format(col_values))
+
+            with open(self.response_times_csv_file_path, "a", newline="") as file:
                 writer = csv.writer(file)
                 writer.writerow(col_values)

             return Empty(message="Ok, metrics processed")

         return Empty(message="Ok, information received")
     def analyze_prediction_accuracy(self, confidence):
@@ -798,6 +805,7 @@ class l3_centralizedattackdetectorServiceServicerImpl(L3Centralizedattackdetecto
     Send features allocated in the metadata of the onnx file to the DAD
     -output: ONNX metadata as a list of integers
     """

     @safe_and_metered_rpc_method(METRICS_POOL, LOGGER)
     def GetFeaturesIds(self, request: Empty, context):
         features = AutoFeatures()
...