# How to locally run and test Analytics service
### Pre-requisites
The following requirements should be fulfilled before the execution of the Analytics service.
1. A virtual environment exists with all the required packages listed in [requirements.in](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/analytics/frontend/requirements.in) successfully installed.
2. The Analytics backend service should be running.
3. All required Kafka topics must exist. Call `create_all_topics` from the [Kafka class](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/common/tools/kafka/Variables.py) to create any topics that do not already exist.
```
from common.tools.kafka.Variables import KafkaTopic
KafkaTopic.create_all_topics()
```
4. There will be an input stream on the Kafka topic that the Streamer class will consume, applying the defined thresholds.
- A JSON encoded string should be generated in the following format:
```
'{"time_stamp": "2024-09-03T12:36:26Z", "kpi_id": "6e22f180-ba28-4641-b190-2287bf448888", "kpi_value": 44.22}'
```
- `kpi_value` should be a float or an int.
- The Kafka producer key should be the UUID of the Analyzer used when creating it.
- Generate the stream on the following Kafka topic: `KafkaTopic.ANALYTICS_RESPONSE.value`.
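A minimal sketch of a producer for this input stream, following the format and key conventions above. The `confluent_kafka` calls are shown commented because they need a running broker; the broker address is an assumption:

```python
import json
import uuid
from datetime import datetime, timezone

def make_kpi_message(kpi_id: str, kpi_value: float) -> str:
    # Build the JSON-encoded string in the format expected by the Streamer.
    return json.dumps({
        "time_stamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "kpi_id": kpi_id,
        "kpi_value": kpi_value,
    })

analyzer_uuid = str(uuid.uuid4())   # producer key: the UUID of the Analyzer
message = make_kpi_message("6e22f180-ba28-4641-b190-2287bf448888", 44.22)

# With a running broker, the record would be produced roughly like this:
# from confluent_kafka import Producer
# producer = Producer({"bootstrap.servers": "127.0.0.1:9092"})   # assumed address
# producer.produce(KafkaTopic.ANALYTICS_RESPONSE.value, key=analyzer_uuid, value=message)
# producer.flush()
```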
## Steps to create and start Analyzer
The analyzer can be declared as shown below, although there are many other ways to declare it:
The object creation process for `_create_analyzer` involves defining an instance of the `Analyzer` message from the [gRPC definition](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/proto/analytics_frontend.proto) and populating its fields.
```
from common.proto.analytics_frontend_pb2 import AnalyzerId
# ... (intermediate lines elided in this diff)
analytics_client_object = AnalyticsFrontendClient()
analytics_client_object.StartAnalyzer(_create_analyzer_id)
```
### **How to Receive Analyzer Responses**
- There is a non-gRPC method in the analyzer frontend called `StartResponseListener(<analyzer_uuid>)`. The `analyzer_uuid` is the UUID of the analyzer provided when calling `StartAnalyzer()`. The following code will log the responses:
```python
import logging
from analytics.frontend.service.AnalyticsFrontendServiceServicerImpl import AnalyticsFrontendServiceServicerImpl

LOGGER = logging.getLogger(__name__)

analytic_frontend_service_object = AnalyticsFrontendServiceServicerImpl()
for response in analytic_frontend_service_object.StartResponseListener(<analyzer_uuid>):
    LOGGER.debug(response)
```
### **Understanding the Output of the Analyzer**
- **Output Column Names**: The output JSON string will include two keys for each defined threshold. For example, the `min_latency` threshold will generate two keys: `min_latency_THRESHOLD_FAIL` and `min_latency_THRESHOLD_RAISE`.
- `min_latency_THRESHOLD_FAIL` is triggered if the average latency calculated within the defined window size falls below the lower bound of the specified threshold range.
- `min_latency_THRESHOLD_RAISE` is triggered if the average latency calculated within the defined window size exceeds the upper bound of the specified threshold range.
- The thresholds `min_latency_THRESHOLD_FAIL` and `min_latency_THRESHOLD_RAISE` will have a value of `TRUE` if activated; otherwise, they will be set to `FALSE`.
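The FAIL/RAISE semantics above can be sketched as follows. This is an illustrative helper, not the analyzer's actual implementation; the real key names come from the thresholds defined when the analyzer was created:

```python
def evaluate_threshold(name, window_values, lower, upper):
    """Return the two flag keys for one threshold over a window of KPI values."""
    avg = sum(window_values) / len(window_values)
    return {
        f"{name}_THRESHOLD_FAIL": avg < lower,    # average below the threshold range
        f"{name}_THRESHOLD_RAISE": avg > upper,   # average above the threshold range
    }

# The window average here is 13.0, above the 5.0-10.0 range, so RAISE is set.
flags = evaluate_threshold("min_latency", [12.0, 14.0, 13.0], lower=5.0, upper=10.0)
```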
### **How to Receive Analyzer Alarms**
- `GetAlarms(<kpi_id>) -> KpiAlarms` is a method in the `KPI Value Api` that retrieves alarms for a given KPI ID. This method returns a stream of alarms associated with the specified KPI.
# How to locally run and test KPI Manager micro-service
### Pre-requisites
Ensure the following requirements are met before executing the KPI management service.
1. A virtual environment exists with all the required packages listed in ["requirements.in"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_manager/requirements.in) successfully installed.
2. Verify the creation of the required database and table. The [KPI DB test](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_manager/tests/test_kpi_db.py) python file lists the functions to create tables and the database.
### Messages format templates
The ["messages"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_manager/tests/test_messages.py) python file contains templates for creating gRPC messages.
### Unit test file
The ["KPI manager test"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_manager/tests/test_kpi_manager.py) python file lists various tests conducted to validate functionality.
### Flow of execution (Kpi Manager Service functions)
1. Call the gRPC method `SetKpiDescriptor(KpiDescriptor)->KpiId` to add the KpiDescriptor to the `Kpi` DB. `KpiDescriptor` and `KpiId` are both pre-defined gRPC message types.
2. Call `GetKpiDescriptor(KpiId)->KpiDescriptor` to read the `KpiDescriptor` from the DB and `DeleteKpiDescriptor(KpiId)` to delete the `KpiDescriptor` from the DB.
3. Call `SelectKpiDescriptor(KpiDescriptorFilter)->KpiDescriptorList` to get all `KpiDescriptor` objects that match the filter criteria. `KpiDescriptorFilter` and `KpiDescriptorList` are pre-defined gRPC message types.
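To illustrate this Set/Get/Delete/Select flow without a running service, here is a toy in-memory stand-in for the `Kpi` DB (the class and method names are hypothetical, mirroring the gRPC methods above):

```python
import uuid

class KpiDescriptorStore:
    """Toy stand-in for the Kpi DB, mirroring the KPI Manager gRPC methods."""
    def __init__(self):
        self._db = {}

    def set_kpi_descriptor(self, descriptor: dict) -> str:
        kpi_id = str(uuid.uuid4())         # SetKpiDescriptor(KpiDescriptor) -> KpiId
        self._db[kpi_id] = descriptor
        return kpi_id

    def get_kpi_descriptor(self, kpi_id: str):
        return self._db.get(kpi_id)        # GetKpiDescriptor(KpiId) -> KpiDescriptor

    def delete_kpi_descriptor(self, kpi_id: str) -> None:
        self._db.pop(kpi_id, None)         # DeleteKpiDescriptor(KpiId)

    def select_kpi_descriptors(self, **filters) -> list:
        # SelectKpiDescriptor(KpiDescriptorFilter) -> KpiDescriptorList
        return [d for d in self._db.values()
                if all(d.get(k) == v for k, v in filters.items())]

store = KpiDescriptorStore()
kpi_id = store.set_kpi_descriptor({"kpi_sample_type": "PACKETS_RECEIVED"})
```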
# How to locally run and test KPI Value API
### Pre-requisites
Ensure the following requirements are met before executing the KPI Value API service.
2. A virtual environment exists with all the required packages listed in the ["requirements.in"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_value_api/requirements.in) file successfully installed.
3. Call the ["create_all_topics()"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/common/tools/kafka/Variables.py) function to verify the existence of all required topics on Kafka.
### Messages format templates
The ["messages"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_value_api/tests/messages.py) python file contains templates for creating gRPC messages.
### Unit test file
The ["KPI Value API test"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_value_api/tests/test_kpi_value_api.py) python file lists various tests conducted to validate functionality.
### Flow of execution (Kpi Value Api Service functions)
1. Call `StoreKpiValues(KpiValueList)` to produce `Kpi Value` on a Kafka Topic. (The `KpiValueWriter` microservice will consume and process this `Kpi Value`.)
2. Call `SelectKpiValues(KpiValueFilter) -> KpiValueList` to read metrics from the Prometheus DB.
3. Call `GetKpiAlarms(KpiId) -> KpiAlarms` to read alarms from Kafka.
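The hand-off in step 1 can be sketched as follows. A `deque` stands in for the Kafka topic, and both function names are illustrative, not the real service API:

```python
from collections import deque

kpi_value_topic = deque()   # stand-in for the Kafka topic

def store_kpi_values(kpi_values: list) -> None:
    # KPI Value API side: produce each Kpi Value onto the topic.
    for value in kpi_values:
        kpi_value_topic.append(value)

def consume_and_write() -> list:
    # KPI Value Writer side: consume values; the real service pushes
    # each one to Prometheus as a metric.
    written = []
    while kpi_value_topic:
        written.append(kpi_value_topic.popleft())
    return written

store_kpi_values([{"kpi_id": "k1", "kpi_value": 1.0}])
metrics = consume_and_write()
```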
# How to locally run and test the KPI Value Writer
### Pre-requisites
Ensure the following requirements are met before executing the KPI Value Writer service.
1. The KPI Manager and KPI Value API services are running.
2. A virtual environment exists with all the required packages listed in the ["requirements.in"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_value_writer/requirements.in) file successfully installed.
### Messages format templates
The ["messages"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_value_writer/tests/test_messages.py) python file contains the templates to create gRPC messages.
### Unit test file
The ["KPI Value Writer test"](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/kpi_value_writer/tests/test_kpi_value_writer.py) python file lists various tests conducted to validate functionality.
### Flow of execution
1. The service will be running, consuming KPI values from the Kafka topic and pushing KPI metrics to Prometheus.
# How to locally run and test Telemetry service
### Pre-requisites
The following requirements should be fulfilled before the execution of the Telemetry service.
1. A virtual environment exists with all the required packages listed in [requirements.in](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/telemetry/requirements.in) successfully installed.
2. The Telemetry backend service should be running.
3. All required Kafka topics must exist. Call `create_all_topics` from the [Kafka class](https://labs.etsi.org/rep/tfs/controller/-/blob/develop/src/common/tools/kafka/Variables.py) to create any topics that do not already exist.
```
from common.tools.kafka.Variables import KafkaTopic
KafkaTopic.create_all_topics()
```
## Steps to create telemetry collector
The collector can be declared as shown below, although there are many other ways to declare it:
```
import uuid
from common.proto import telemetry_frontend_pb2

_create_collector_request = telemetry_frontend_pb2.Collector()
_create_collector_request.collector_id.collector_id.uuid = str(uuid.uuid4())
_create_collector_request.kpi_id.kpi_id.uuid = str(uuid.uuid4())
_create_collector_request.duration_s = 100 # in seconds
_create_collector_request.interval_s = 10 # in seconds
```
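With the values above, `duration_s` and `interval_s` together imply how many samples the collector takes (a reasonable reading of the fields, not a statement from the service code). A quick sanity check in plain Python:

```python
duration_s = 100   # total collection time, as in the request above
interval_s = 10    # sampling interval, as in the request above

# One sample every interval over the whole duration.
expected_samples = duration_s // interval_s
```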
{
"contexts": [
{"context_id": {"context_uuid": {"uuid": "admin"}}}
],
"topologies": [
{"topology_id": {"context_id": {"context_uuid": {"uuid": "admin"}}, "topology_uuid": {"uuid": "admin"}}}
],
"devices": [
{
"device_id": {"device_uuid": {"uuid": "core-net"}}, "device_type": "network",
"device_drivers": ["DEVICEDRIVER_UNDEFINED"],
"device_config": {"config_rules": [
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}},
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/port", "resource_value": "0"}},
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/settings", "resource_value": {"endpoints": [
{"uuid": "int", "type": "copper"},
{"uuid": "eth1", "type": "copper"}
]}}}
]}
},
{
"device_id": {"device_uuid": {"uuid": "edge-net"}}, "device_type": "network",
"device_drivers": ["DEVICEDRIVER_UNDEFINED"],
"device_config": {"config_rules": [
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/address", "resource_value": "127.0.0.1"}},
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/port", "resource_value": "0"}},
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/settings", "resource_value": {"endpoints": [
{"uuid": "int", "type": "copper"},
{"uuid": "eth1", "type": "copper"}
]}}}
]}
},
{
"device_id": {"device_uuid": {"uuid": "r1"}}, "device_type": "packet-router",
"device_drivers": ["DEVICEDRIVER_GNMI_OPENCONFIG"],
"device_config": {"config_rules": [
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/address", "resource_value": "172.20.20.101"}},
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/port", "resource_value": "6030"}},
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/settings", "resource_value": {
"username": "admin", "password": "admin", "use_tls": false
}}}
]}
},
{
"device_id": {"device_uuid": {"uuid": "r2"}}, "device_type": "packet-router",
"device_drivers": ["DEVICEDRIVER_GNMI_OPENCONFIG"],
"device_config": {"config_rules": [
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/address", "resource_value": "172.20.20.102"}},
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/port", "resource_value": "6030"}},
{"action": "CONFIGACTION_SET", "custom": {"resource_key": "_connect/settings", "resource_value": {
"username": "admin", "password": "admin", "use_tls": false
}}}
]}
}
],
"links": [
{
"link_id": {"link_uuid": {"uuid": "r1/Ethernet2==r2/Ethernet1"}},
"link_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "r1"}}, "endpoint_uuid": {"uuid": "Ethernet2"}},
{"device_id": {"device_uuid": {"uuid": "r2"}}, "endpoint_uuid": {"uuid": "Ethernet1"}}
]
},
{
"link_id": {"link_uuid": {"uuid": "r2/Ethernet1==r1/Ethernet2"}},
"link_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "r2"}}, "endpoint_uuid": {"uuid": "Ethernet1"}},
{"device_id": {"device_uuid": {"uuid": "r1"}}, "endpoint_uuid": {"uuid": "Ethernet2"}}
]
},
{
"link_id": {"link_uuid": {"uuid": "r1/Ethernet10==core-net/eth1"}},
"link_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "r1"}}, "endpoint_uuid": {"uuid": "Ethernet10"}},
{"device_id": {"device_uuid": {"uuid": "core-net"}}, "endpoint_uuid": {"uuid": "eth1"}}
]
},
{
"link_id": {"link_uuid": {"uuid": "core-net/eth1==r1/Ethernet10"}},
"link_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "core-net"}}, "endpoint_uuid": {"uuid": "eth1"}},
{"device_id": {"device_uuid": {"uuid": "r1"}}, "endpoint_uuid": {"uuid": "Ethernet10"}}
]
},
{
"link_id": {"link_uuid": {"uuid": "r2/Ethernet10==edge-net/eth1"}},
"link_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "r2"}}, "endpoint_uuid": {"uuid": "Ethernet10"}},
{"device_id": {"device_uuid": {"uuid": "edge-net"}}, "endpoint_uuid": {"uuid": "eth1"}}
]
},
{
"link_id": {"link_uuid": {"uuid": "edge-net/eth1==r2/Ethernet10"}},
"link_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "edge-net"}}, "endpoint_uuid": {"uuid": "eth1"}},
{"device_id": {"device_uuid": {"uuid": "r2"}}, "endpoint_uuid": {"uuid": "Ethernet10"}}
]
}
]
}
{
"ietf-l3vpn-svc:l3vpn-svc": {
"vpn-services": {"vpn-service": [{"vpn-id": "ietf-l3vpn-edge-core"}]},
"sites": {
"site": [
{
"site-id": "site_core-net",
"management": {"type": "ietf-l3vpn-svc:provider-managed"},
"locations": {"location": [{"location-id": "core-net"}]},
"devices": {"device": [{"device-id": "core-net", "location": "core-net"}]},
"site-network-accesses": {
"site-network-access": [
{
"site-network-access-id": "int",
"site-network-access-type": "ietf-l3vpn-svc:multipoint",
"device-reference": "core-net",
"vpn-attachment": {"vpn-id": "ietf-l3vpn-edge-core", "site-role": "ietf-l3vpn-svc:spoke-role"},
"ip-connection": {
"ipv4": {
"address-allocation-type": "ietf-l3vpn-svc:static-address",
"addresses": {
"provider-address": "10.10.10.229",
"customer-address": "10.10.10.0",
"prefix-length": 24
}
}
},
"service": {
"svc-mtu": 1500,
"svc-input-bandwidth": 1000000000,
"svc-output-bandwidth": 1000000000,
"qos": {"qos-profile": {"classes": {"class": [{
"class-id": "qos-realtime",
"direction": "ietf-l3vpn-svc:both",
"latency": {"latency-boundary": 10},
"bandwidth": {"guaranteed-bw-percent": 100}
}]}}}
}
}
]
}
},
{
"site-id": "site_edge-net",
"management": {"type": "ietf-l3vpn-svc:provider-managed"},
"locations": {"location": [{"location-id": "edge-net"}]},
"devices": {"device": [{"device-id": "edge-net", "location": "edge-net"}]},
"site-network-accesses": {
"site-network-access": [
{
"site-network-access-id": "int",
"site-network-access-type": "ietf-l3vpn-svc:multipoint",
"device-reference": "edge-net",
"vpn-attachment": {"vpn-id": "ietf-l3vpn-edge-core", "site-role": "ietf-l3vpn-svc:hub-role"},
"ip-connection": {
"ipv4": {
"address-allocation-type": "ietf-l3vpn-svc:static-address",
"addresses": {
"provider-address": "10.158.72.229",
"customer-address": "10.158.72.0",
"prefix-length": 24
}
}
},
"service": {
"svc-mtu": 1500,
"svc-input-bandwidth": 1000000000,
"svc-output-bandwidth": 1000000000,
"qos": {"qos-profile": {"classes": {"class": [{
"class-id": "qos-realtime",
"direction": "ietf-l3vpn-svc:both",
"latency": {"latency-boundary": 10},
"bandwidth": {"guaranteed-bw-percent": 100}
}]}}}
}
}
]
}
}
]
}
}
}
{
"services": [
{
"service_id": {
"context_id": {"context_uuid": {"uuid": "admin"}}, "service_uuid": {"uuid": "core-to-edge-l2svc"}
},
"service_type": "SERVICETYPE_L3NM",
"service_status": {"service_status": "SERVICESTATUS_PLANNED"},
"service_endpoint_ids": [
{"device_id": {"device_uuid": {"uuid": "core-net"}}, "endpoint_uuid": {"uuid": "int"}},
{"device_id": {"device_uuid": {"uuid": "edge-net"}}, "endpoint_uuid": {"uuid": "int"}}
],
"service_constraints": [],
"service_config": {"config_rules": [
{"action": "CONFIGACTION_SET", "custom": {
"resource_key": "/device[core-net]/endpoint[eth1]/settings",
"resource_value": {"address_ip": "10.10.10.0", "address_prefix": 24, "index": 0}
}},
{"action": "CONFIGACTION_SET", "custom": {
"resource_key": "/device[r1]/endpoint[Ethernet10]/settings",
"resource_value": {"address_ip": "10.10.10.229", "address_prefix": 24, "index": 0}
}},
{"action": "CONFIGACTION_SET", "custom": {
"resource_key": "/device[r2]/endpoint[Ethernet10]/settings",
"resource_value": {"address_ip": "10.158.72.229", "address_prefix": 24, "index": 0}
}},
{"action": "CONFIGACTION_SET", "custom": {
"resource_key": "/device[edge-net]/endpoint[eth1]/settings",
"resource_value": {"address_ip": "10.158.72.0", "address_prefix": 24, "index": 0}
}}
]}
}
]
}
#!/bin/bash
# Copyright 2022-2024 ETSI OSG/SDG TeraFlowSDN (TFS) (https://tfs.etsi.org/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
docker exec -it clab-sns4sns-r1 Cli
#!/bin/bash
docker exec -it clab-sns4sns-r2 Cli
#!/bin/bash
cd /home/$USER/tfs-ctrl/src/tests/sns4sns/
sudo containerlab deploy --topo sns4sns.clab.yml
#!/bin/bash
cd /home/$USER/tfs-ctrl/src/tests/sns4sns/
sudo containerlab destroy --topo sns4sns.clab.yml
sudo rm -rf clab-sns4sns/ .sns4sns.clab.yml.bak
#!/bin/bash
cd /home/$USER/tfs-ctrl/src/tests/sns4sns/
sudo containerlab inspect --topo sns4sns.clab.yml
#!/bin/bash
# ----- TeraFlowSDN ------------------------------------------------------------
# Set the URL of the internal MicroK8s Docker registry where the images will be uploaded to.
export TFS_REGISTRY_IMAGES="http://localhost:32000/tfs/"
# Set the list of components, separated by spaces, you want to build images for, and deploy.
#export TFS_COMPONENTS="context device pathcomp service slice nbi webui load_generator"
export TFS_COMPONENTS="context device pathcomp service slice nbi webui"
# Uncomment to activate Monitoring
export TFS_COMPONENTS="${TFS_COMPONENTS} monitoring"
# Uncomment to activate ZTP
#export TFS_COMPONENTS="${TFS_COMPONENTS} ztp"
# Uncomment to activate Policy Manager
#export TFS_COMPONENTS="${TFS_COMPONENTS} policy"
# Uncomment to activate Optical CyberSecurity
#export TFS_COMPONENTS="${TFS_COMPONENTS} dbscanserving opticalattackmitigator opticalattackdetector opticalattackmanager"
# Uncomment to activate L3 CyberSecurity
#export TFS_COMPONENTS="${TFS_COMPONENTS} l3_attackmitigator l3_centralizedattackdetector"
# Uncomment to activate TE
#export TFS_COMPONENTS="${TFS_COMPONENTS} te"
# Uncomment to activate Forecaster
#export TFS_COMPONENTS="${TFS_COMPONENTS} forecaster"
# Set the tag you want to use for your images.
export TFS_IMAGE_TAG="dev"
# Set the name of the Kubernetes namespace to deploy TFS to.
export TFS_K8S_NAMESPACE="tfs"
# Set additional manifest files to be applied after the deployment
export TFS_EXTRA_MANIFESTS="manifests/nginx_ingress_http.yaml"
# Uncomment to monitor performance of components
#export TFS_EXTRA_MANIFESTS="${TFS_EXTRA_MANIFESTS} manifests/servicemonitors.yaml"
# Uncomment when deploying Optical CyberSecurity
#export TFS_EXTRA_MANIFESTS="${TFS_EXTRA_MANIFESTS} manifests/cachingservice.yaml"
# Set the new Grafana admin password
export TFS_GRAFANA_PASSWORD="admin123+"
# Disable skip-build flag to rebuild the Docker images.
export TFS_SKIP_BUILD=""
# ----- CockroachDB ------------------------------------------------------------
# Set the namespace where CockroachDB will be deployed.
export CRDB_NAMESPACE="crdb"
# Set the external port CockroachDB PostgreSQL interface will be exposed to.
export CRDB_EXT_PORT_SQL="26257"
# Set the external port CockroachDB HTTP Mgmt GUI interface will be exposed to.
export CRDB_EXT_PORT_HTTP="8081"
# Set the database username to be used by Context.
export CRDB_USERNAME="tfs"
# Set the database user's password to be used by Context.
export CRDB_PASSWORD="tfs123"
# Set the database name to be used by Context.
export CRDB_DATABASE="tfs"
# Set CockroachDB installation mode to 'single'. This option is convenient for development and testing.
# See ./deploy/all.sh or ./deploy/crdb.sh for additional details
export CRDB_DEPLOY_MODE="single"
# Set flag to drop the database, if it exists.
export CRDB_DROP_DATABASE_IF_EXISTS="YES"
# Disable flag for re-deploying CockroachDB from scratch.
export CRDB_REDEPLOY=""
# ----- NATS -------------------------------------------------------------------
# Set the namespace where NATS will be deployed.
export NATS_NAMESPACE="nats"
# Set the external port NATS Client interface will be exposed to.
export NATS_EXT_PORT_CLIENT="4222"
# Set the external port NATS HTTP Mgmt GUI interface will be exposed to.
export NATS_EXT_PORT_HTTP="8222"
# Disable flag for re-deploying NATS from scratch.
export NATS_REDEPLOY=""
# ----- QuestDB ----------------------------------------------------------------
# Set the namespace where QuestDB will be deployed.
export QDB_NAMESPACE="qdb"
# Set the external port QuestDB PostgreSQL interface will be exposed to.
export QDB_EXT_PORT_SQL="8812"
# Set the external port QuestDB Influx Line Protocol interface will be exposed to.
export QDB_EXT_PORT_ILP="9009"
# Set the external port QuestDB HTTP Mgmt GUI interface will be exposed to.
export QDB_EXT_PORT_HTTP="9000"
# Set the database username to be used for QuestDB.
export QDB_USERNAME="admin"
# Set the database user's password to be used for QuestDB.
export QDB_PASSWORD="quest"
# Set the table name to be used by Monitoring for KPIs.
export QDB_TABLE_MONITORING_KPIS="tfs_monitoring_kpis"
# Set the table name to be used by Slice for plotting groups.
export QDB_TABLE_SLICE_GROUPS="tfs_slice_groups"
# Set flag to drop tables if they exist.
export QDB_DROP_TABLES_IF_EXIST="YES"
# Disable flag for re-deploying QuestDB from scratch.
export QDB_REDEPLOY=""
# ----- K8s Observability ------------------------------------------------------
# Set the external port Prometheus Mgmt HTTP GUI interface will be exposed to.
export PROM_EXT_PORT_HTTP="9090"
# Set the external port Grafana HTTP Dashboards will be exposed to.
export GRAF_EXT_PORT_HTTP="3000"
#!/bin/bash
curl -X POST \
--header "Content-Type: application/json" \
--data @02-ietf-l3vpn-nbi.json \
--user "admin:admin" \
http://10.10.10.41/restconf/data/ietf-l3vpn-svc:l3vpn-svc/vpn-services
#!/bin/bash
curl -X DELETE \
--user "admin:admin" \
http://10.10.10.41/restconf/data/ietf-l3vpn-svc:l3vpn-svc/vpn-services/vpn-service=ietf-l3vpn-edge-core/
#!/bin/bash
curl --user "admin:admin" \
http://10.10.10.41/restconf/data/ietf-l3vpn-svc:l3vpn-svc/vpn-services/vpn-service=ietf-l3vpn-edge-core/
# Ref: https://containerlab.dev/manual/network/#macvlan-links
# Ref: https://containerlab.dev/manual/network/#host-links
# ETSI SNS4SNS OSL+TFS Integration (Static configuration)
name: sns4sns-static
mgmt:
  network: mgmt-net
  ipv4-subnet: 172.20.20.0/24
  mtu: 1400

topology:
  kinds:
    arista_ceos:
      kind: arista_ceos
      #image: ceos:4.30.4M
      image: ceos:4.31.2F
  nodes:
    r1:
      kind: arista_ceos
      mgmt-ipv4: 172.20.20.101
    r2:
      kind: arista_ceos
      mgmt-ipv4: 172.20.20.102
  links:
    - endpoints: ["r1:eth2", "r2:eth1"]
    - endpoints: ["r1:eth10", "macvlan:enp0s3"] # connect to core domain virtual network
    - endpoints: ["r2:eth10", "macvlan:enp0s5"] # connect to shared virtual network with edge domain
! Startup-config last modified at Tue Oct 15 12:24:02 2024 by root
! device: r1 (cEOSLab, EOS-4.31.2F-35442176.4312F (engineering build))
!
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$WlffHUsmnrG2WVRf$cfZfWFJtrnv9wuGlkyMHRS66VQeA8bOjxM0jSXTB1deScpsqz0I3oVEcvrR6IMrqVOsXANKmoghcZvcDbC4Ry/
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname r1
!
spanning-tree mode mstp
!
system l1
unsupported speed action error
unsupported error-correction action error
!
management api http-commands
no shutdown
!
management api gnmi
transport grpc default
!
management api netconf
transport ssh default
!
interface Ethernet2
no switchport
ip address 10.254.254.1/30
!
interface Ethernet10
no switchport
ip address 10.10.10.229/24
!
interface Management0
ip address 172.20.20.101/24
!
ip routing
!
ip route 0.0.0.0/0 172.20.20.1
ip route 10.158.72.0/24 10.254.254.2
!
end
! Startup-config last modified at Tue Oct 15 12:23:41 2024 by root
! device: r2 (cEOSLab, EOS-4.31.2F-35442176.4312F (engineering build))
!
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$3nmPs7/wiY.aN139$BrgG79cp9R5bd.bQST4LnQB6wq6GLuIHKdbafZkcVH2R5D.v771gZNgeQSILN6ubz1.j29Wy5UmavY9Pavsoy0
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname r2
!
spanning-tree mode mstp
!
system l1
unsupported speed action error
unsupported error-correction action error
!
management api http-commands
no shutdown
!
management api gnmi
transport grpc default
!
management api netconf
transport ssh default
!
interface Ethernet1
no switchport
ip address 10.254.254.2/30
!
interface Ethernet10
no switchport
ip address 10.158.72.229/24
!
interface Management0
ip address 172.20.20.102/24
!
ip routing
!
ip route 0.0.0.0/0 172.20.20.1
ip route 10.10.10.0/24 10.254.254.1
!
end