ssh_pwauth: True
</code></pre>
<h3><u>Upgrade the Ubuntu distribution</u></h3>
<pre><code class="language-bash">sudo apt-get update -y
sudo apt-get dist-upgrade -y
</code></pre>
<ul>
<li>If asked to restart services, restart the default ones proposed.</li>
<li>Restart the VM when the installation is completed.</li>
</ul>
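<p>Ubuntu signals that a restart is pending through the marker file <code>/var/run/reboot-required</code>. As a convenience (this check is not part of the official guide), you can verify whether the reboot is actually needed:</p>
<pre><code class="language-bash"># Check Ubuntu's reboot marker; reboot only when the upgrade requires it
if [ -f /var/run/reboot-required ]; then
    echo "Reboot required by:"
    cat /var/run/reboot-required.pkgs 2>/dev/null
    # sudo reboot   # uncomment to restart immediately
else
    echo "No reboot required"
fi
</code></pre>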
<h3 id="115-vagrant-box"><strong>1.1.5. Vagrant Box</strong></h3>
<p>This section describes how to create a Vagrant Box, using the base virtual machine configured in <a href="#112-oracle-virtual-box">Oracle Virtual Box</a>.</p>
<h3><u>Virtual Machine specifications</u></h3>
<p>Most of the specifications can be set as described in the <a href="#112-oracle-virtual-box">Oracle Virtual Box</a> section; however, there are a few Vagrant particularities that must be accommodated, such as:</p>
<ul>
<li>Virtual Hard Disk<ul>
<li>Size: 60GB (at least)</li>
<li><strong>Type</strong>: VMDK</li>
</ul>
</li>
</ul>
<p><img alt="Screenshot_from_2024-10-21_18-13-43" src="../images/deployment_guide/01_vagrant_box.jpg" /></p>
<p>Also, before initiating the VM and installing the OS, we'll need to:</p>
<ul>
<li>Disable Floppy in the 'Boot Order'</li>
<li>Disable audio</li>
<li>Disable USB</li>
<li>Ensure Network Adapter 1 is set to NAT</li>
</ul>
<h3><u>Network configurations</u></h3>
<p>At Network Adapter 1, the following port-forwarding rule must be set.</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Protocol</th>
<th>Host IP</th>
<th>Host Port</th>
<th>Guest IP</th>
<th>Guest Port</th>
</tr>
</thead>
<tbody>
<tr>
<td>SSH</td>
<td>TCP</td>
<td></td>
<td><strong>2222</strong></td>
<td></td>
<td>22</td>
</tr>
</tbody>
</table>
<p><img alt="Screenshot_from_2023-07-10_18-25-18" src="../images/deployment_guide/02_vagrant_box.jpg" /></p>
<h3><u>Installing the OS</u></h3>
<p>For a Vagrant Box, it is generally suggested to use the server version of the ISO, as the box is intended to be used via SSH and any web GUI is expected to be forwarded to the host.</p>
<p><img alt="Screenshot_from_2023-07-10_18-41-49" src="../images/deployment_guide/03_vagrant_box.jpg" /></p>
<p><img alt="Screenshot_from_2023-07-10_18-42-30" src="../images/deployment_guide/04_vagrant_box.jpg" /></p>
<p><img alt="Screenshot_from_2023-07-10_18-42-45" src="../images/deployment_guide/05_vagrant_box.jpg" /></p>
<p>Make sure the disk is not configured as an LVM group!</p>
<p><img alt="Screenshot_from_2023-07-10_18-43-16" src="../images/deployment_guide/06_vagrant_box.jpg" /></p>
<h3><u>Vagrant user</u></h3>
<p>By default, Vagrant expects the box's OS to contain a user named <code>vagrant</code> with the password also being <code>vagrant</code>.</p>
<p><img alt="Screenshot_from_2023-07-10_18-54-12" src="../images/deployment_guide/07_vagrant_box.jpg" /></p>
<h3><u>SSH</u></h3>
<p>Vagrant uses SSH to connect to the boxes, so installing it now will save the hassle of doing it later.</p>
<p><img alt="Screenshot_from_2023-07-10_18-54-48" src="../images/deployment_guide/08_vagrant_box.jpg" /></p>
<h3><u>Featured server snaps</u></h3>
<p>Do not install featured server snaps. They will be installed manually <a href="#12-install-microk8s">later</a> to illustrate how to uninstall and reinstall them in case of trouble.</p>
<h3><u>Updates</u></h3>
<p>Let the system install and upgrade the packages. This operation might take several minutes, depending on how old the ISO image you use is and on your Internet connection speed.</p>
<h3><u>Upgrade the Ubuntu distribution</u></h3>
<pre><code class="language-bash">sudo apt-get update -y
sudo apt-get dist-upgrade -y
</code></pre>
<ul>
<li>If asked to restart services, restart the default ones proposed.</li>
<li>Restart the VM when the installation is completed.</li>
</ul>
<h3><u>Install VirtualBox Guest Additions</u></h3>
<p>On VirtualBox Manager, open the VM main screen. If you are running the VM in headless
mode, right-click the VM in the VirtualBox Manager window and click "Show".
If a dialog informing about how to leave the interface of the VM is shown, confirm
by pressing the "Switch" button. The interface of the VM should appear.</p>
<p>Click the menu "Device > Insert Guest Additions CD image..."</p>
<p>On the VM terminal, type:</p>
<pre><code class="language-bash">sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms
# This command might take some minutes depending on your VM specs and your Internet access speed.
sudo mount /dev/cdrom /mnt/
cd /mnt/
sudo ./VBoxLinuxAdditions.run
# This command might take some minutes depending on your VM specs.
sudo reboot
</code></pre>
<h3><u>ETSI TFS Installation</u></h3>
<p>After this, proceed to <a href="#12-install-microk8s">1.2. Install MicroK8s</a>; once done, return to this guide to finish the Vagrant Box creation.</p>
<h3><u>Box configuration and creation</u></h3>
<p>Make sure the ETSI TFS controller is correctly configured. <strong>You will not be able to change it afterwards!</strong></p>
<p>It is advisable to perform the next configurations from a host terminal, via an SSH connection.</p>
<pre><code class="language-bash">ssh -p 2222 vagrant@127.0.0.1
</code></pre>
<h3><u>Set root password</u></h3>
<p>Set the root password to <code>vagrant</code>.</p>
<pre><code class="language-bash">sudo passwd root
</code></pre>
<h3><u>Set the superuser</u></h3>
<p>Set up the vagrant user so that it is able to use <code>sudo</code> without being prompted for a password.
Any file in the <code>/etc/sudoers.d/*</code> directory is included in the sudoers privileges when created by the root user.
Create a new sudoers file.</p>
<pre><code class="language-bash">sudo visudo -f /etc/sudoers.d/vagrant
</code></pre>
<p>and add the following lines</p>
<pre><code class="language-text"># add vagrant user
vagrant ALL=(ALL) NOPASSWD:ALL
</code></pre>
<p>You can now test that it works by running a simple command.</p>
<pre><code class="language-bash">sudo pwd
</code></pre>
<p>Issuing this command should result in an immediate response without a request for a password.</p>
<h3><u>Install the Vagrant key</u></h3>
<p>Vagrant uses a default SSH key pair so that you can directly connect to boxes via the CLI command <code>vagrant ssh</code>; afterwards, it creates a new key pair for your new box. Because of this, we need to load the default public key to be able to access the box after it is created.</p>
<pre><code class="language-bash">chmod 0700 /home/vagrant/.ssh
wget --no-check-certificate https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant /home/vagrant/.ssh
</code></pre>
<h3><u>Configure the OpenSSH Server</u></h3>
<p>Edit the <code>/etc/ssh/sshd_config</code> file:</p>
<pre><code class="language-bash">sudo vim /etc/ssh/sshd_config
</code></pre>
<p>And uncomment the following line:</p>
<pre><code class="language-bash">AuthorizedKeysFile %h/.ssh/authorized_keys
</code></pre>
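<p>If you are scripting the box preparation, the same change can be applied non-interactively with <code>sed</code>. The snippet below demonstrates the expression on a scratch copy; for the real file, run the <code>sed</code> command against <code>/etc/ssh/sshd_config</code> with <code>sudo</code>:</p>
<pre><code class="language-bash"># Demonstrate the uncomment on a scratch copy of the directive
tmpcfg=$(mktemp)
printf '%s\n' '#AuthorizedKeysFile %h/.ssh/authorized_keys' > "$tmpcfg"
# Strip the leading '#' from the AuthorizedKeysFile directive
sed -i 's|^#\(AuthorizedKeysFile\)|\1|' "$tmpcfg"
grep '^AuthorizedKeysFile' "$tmpcfg"
rm -f "$tmpcfg"
</code></pre>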
<p>Then restart SSH.</p>
<pre><code class="language-bash">sudo service ssh restart
</code></pre>
<h3><u>Package the box</u></h3>
<p>Before you package the box, if you intend to make it public, it is best to clean your bash history with:</p>
<pre><code class="language-bash">history -c
</code></pre>
<p>Exit the SSH connection, and <strong>at your host machine</strong>, package the VM:</p>
<pre><code class="language-bash">vagrant package --base teraflowsdncontroller --output teraflowsdncontroller.box
</code></pre>
<h3><u>Test run the box</u></h3>
<p>Add the base box to your local Vagrant box list:</p>
<pre><code class="language-bash">vagrant box add --name teraflowsdncontroller ./teraflowsdncontroller.box
</code></pre>
<p>Now you should try to run it; for that, you'll need to create a <strong>Vagrantfile</strong>. For a simple run, this is the minimal code required for this box:</p>
<pre><code class="language-ruby"># -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "teraflowsdncontroller"
config.vm.box_version = "1.1.0"
config.vm.network :forwarded_port, host: 8080, guest: 80
end
</code></pre>
<p>Now you'll be able to spin up the virtual machine by issuing the command:</p>
<pre><code class="language-bash">vagrant up
</code></pre>
<p>And connect to the machine using:</p>
<pre><code class="language-bash">vagrant ssh
</code></pre>
<h3><u>Pre-configured boxes</u></h3>
<p>If you do not wish to create your own Vagrant Box, you can use one of the existing ones created by TFS contributors.</p>
<ul>
<li><a href="https://app.vagrantup.com/davidjosearaujo/boxes/teraflowsdncontroller">davidjosearaujo/teraflowsdncontroller</a></li>
<li>... <!-- Should create and host one at ETSI!! --></li>
</ul>
<p>To use them, you simply have to create a Vagrantfile and run <code>vagrant up controller</code> in the same directory. The following example Vagrantfile already allows you to do just that, with the bonus of exposing the multiple management GUIs to your <code>localhost</code>.</p>
<pre><code class="language-ruby">Vagrant.configure("2") do |config|
config.vm.define "controller" do |controller|
controller.vm.box = "davidjosearaujo/teraflowsdncontroller"
controller.vm.network "forwarded_port", guest: 80, host: 8080 # WebUI
controller.vm.network "forwarded_port", guest: 8084, host: 50750 # Linkerd Viz Dashboard
controller.vm.network "forwarded_port", guest: 8081, host: 8081 # CockroachDB Dashboard
controller.vm.network "forwarded_port", guest: 8222, host: 8222 # NATS Dashboard
controller.vm.network "forwarded_port", guest: 9000, host: 9000 # QuestDB Dashboard
controller.vm.network "forwarded_port", guest: 9090, host: 9090 # Prometheus Dashboard
# Setup Linkerd Viz reverse proxy
## Copy config file
controller.vm.provision "file" do |f|
f.source = "./reverse-proxy-linkerdviz.sh"
f.destination = "./reverse-proxy-linkerdviz.sh"
end
## Execute configuration file
controller.vm.provision "shell" do |s|
s.inline = "chmod +x ./reverse-proxy-linkerdviz.sh && ./reverse-proxy-linkerdviz.sh"
end
# Update controller source code to the desired branch
if ENV['BRANCH'] != nil
controller.vm.provision "shell" do |s|
s.inline = "cd ./tfs-ctrl && git pull && git switch " + ENV['BRANCH']
end
end
end
end
</code></pre>
<p>This Vagrantfile also allows for <strong>optional repository updates</strong> on startup by running the command with the environment variable <code>BRANCH</code> specified:</p>
<pre><code class="language-bash">BRANCH=develop vagrant up controller
</code></pre>
<h3><u>Linkerd DNS rebinding bypass</u></h3>
<p>Because of Linkerd's security measures against DNS rebinding, a reverse proxy that modifies the request's <code>Host</code> header field is needed to expose the GUI to the host. The previous Vagrantfile already deploys such a configuration; all you need to do is create the <code>reverse-proxy-linkerdviz.sh</code> file in the same directory. The content of this file is displayed below.</p>
<pre><code class="language-bash"># Install NGINX
sudo apt update && sudo apt install nginx -y
# NGINX reverse proxy configuration
echo 'server {
listen 8084;
location / {
proxy_pass http://127.0.0.1:50750;
proxy_set_header Host localhost;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}' > /home/vagrant/expose-linkerd
# Create symlink of the NGINX configuration file
sudo ln -s /home/vagrant/expose-linkerd /etc/nginx/sites-enabled/
# Commit the reverse proxy configurations
sudo systemctl restart nginx
# Enable start on login
echo "linkerd viz dashboard &" >> .profile
# Start dashboard
linkerd viz dashboard &
echo "Linkerd Viz dashboard running!"
</code></pre>
<h2 id="12-install-microk8s"><strong>1.2. Install MicroK8s</strong></h2>
<p>This section describes how to deploy the MicroK8s Kubernetes platform and configure it to be used with the ETSI TeraFlowSDN controller. In addition, Docker is installed to build the Docker images for the ETSI TeraFlowSDN controller.</p>
<p>The steps described in this section might take some minutes depending on your internet connection speed and the resources assigned to your VM, or the specifications of your physical server.</p>
<p>These steps are easier to execute through an SSH connection, for instance using tools like <a href="https://www.putty.org/">PuTTY</a> or <a href="https://mobaxterm.mobatek.net/">MobaXterm</a>.</p>
<h3><u>Upgrade the Ubuntu distribution</u></h3>
<p>Skip this step if you already did it during the creation of the VM.</p>
<pre><code class="language-bash">sudo apt-get update -y
sudo apt-get dist-upgrade -y
</code></pre>
<h3><u>Install prerequisites</u></h3>
<pre><code class="language-bash">sudo apt-get install -y ca-certificates curl gnupg lsb-release snapd jq
</code></pre>
<h3><u>Install Docker CE</u></h3>
<p>Install Docker CE and the Docker BuildX plugin:</p>
<pre><code class="language-bash">sudo apt-get install -y docker.io docker-buildx
</code></pre>
<p><strong>NOTE</strong>: Starting from Docker v23, <a href="https://docs.docker.com/build/architecture/">Build architecture</a> has been updated and <code>docker build</code> command entered into deprecation process in favor of the new <code>docker buildx build</code> command. Package <code>docker-buildx</code> provides the new <code>docker buildx build</code> command.</p>
<p>Add the key "insecure-registries", listing the private repository, to the Docker daemon configuration. This is done in two commands since reading from and writing to the same file in a single pipeline might cause trouble.</p>
<pre><code class="language-bash">if [ -s /etc/docker/daemon.json ]; then cat /etc/docker/daemon.json; else echo '{}'; fi \
| jq 'if has("insecure-registries") then . else .+ {"insecure-registries": []} end' -- \
| jq '."insecure-registries" |= (.+ ["localhost:32000"] | unique)' -- \
| tee tmp.daemon.json
sudo mv tmp.daemon.json /etc/docker/daemon.json
sudo chown root:root /etc/docker/daemon.json
sudo chmod 600 /etc/docker/daemon.json
</code></pre>
<p>Restart the Docker daemon</p>
<pre><code class="language-bash">sudo systemctl restart docker
</code></pre>
<h3><u>Install MicroK8s</u></h3>
<p><strong>Important</strong>: By default, Kubernetes uses CIDR 10.1.0.0/16 for pods and CIDR 10.152.183.0/24 for services. If they conflict with your internal network CIDR, you might need to change Kubernetes CIDRs at deployment time. To do so, check links below and ask for support if needed.</p>
<ul>
<li><a href="https://microk8s.io/docs/how-to-dual-stack">MicroK8s - How to configure network Dual-stack</a></li>
<li><a href="https://microk8s.io/docs/change-cidr">MicroK8s - MicroK8s CNI Configuration</a></li>
</ul>
<pre><code class="language-bash"># Install MicroK8s
sudo snap install microk8s --classic --channel=1.24/stable
# Create alias for command "microk8s.kubectl" to be usable as "kubectl"
sudo snap alias microk8s.kubectl kubectl
</code></pre>
<p>It is important to make sure that <code>ufw</code> will not interfere with the internal pod-to-pod
and pod-to-Internet traffic.
To do so, first check the status.
If <code>ufw</code> is active, use the following command to enable the communication.</p>
<pre><code class="language-bash"># Verify status of ufw firewall
sudo ufw status
# If ufw is active, install following rules to enable access pod-to-pod and pod-to-internet
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
</code></pre>
<p><strong>NOTE</strong>: MicroK8s can be used to compose a Highly Available Kubernetes cluster enabling you to construct an environment combining the CPU, RAM and storage resources of multiple machines. If you are interested in this procedure, review the official instructions in <a href="https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha">How to build a highly available Kubernetes cluster with MicroK8s</a>, in particular, the step <a href="https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha#4-create-a-microk8s-multinode-cluster">Create a MicroK8s multi-node cluster</a>.</p>
<p><strong>References:</strong></p>
<ul>
<li><a href="https://microk8s.io/#install-microk8s">The lightweight Kubernetes > Install MicroK8s</a></li>
<li><a href="https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s">Install a local Kubernetes with MicroK8s</a></li>
<li><a href="https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha">How to build a highly available Kubernetes cluster with MicroK8s</a></li>
<li><a href="https://microk8s.io/docs/how-to-dual-stack">MicroK8s - How to configure network Dual-stack</a></li>
<li><a href="https://microk8s.io/docs/change-cidr">MicroK8s - MicroK8s CNI Configuration</a></li>
</ul>
<h3><u>Add user to the docker and microk8s groups</u></h3>
<p>It is important that your user has the permission to run <code>docker</code> and <code>microk8s</code> in the
terminal.
To allow this, you need to add your user to the <code>docker</code> and <code>microk8s</code> groups with the
following commands:</p>
<pre><code class="language-bash">sudo usermod -a -G docker $USER
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER $HOME/.kube
sudo reboot
</code></pre>
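<p>After the reboot, you can confirm that the group memberships took effect with a quick, illustrative check:</p>
<pre><code class="language-bash"># Verify the current user belongs to the docker and microk8s groups
u=$(id -un)
for g in docker microk8s; do
    if id -nG "$u" | grep -qw "$g"; then
        echo "$u is in group $g"
    else
        echo "$u is NOT in group $g (log out/in or reboot again)"
    fi
done
</code></pre>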
<p>In case you have trouble executing the next commands, possibly because the <code>.kube</code> folder was not automatically provisioned in your home folder, you may follow the steps below:</p>
<pre><code class="language-bash">mkdir -p $HOME/.kube
sudo chown -f -R $USER $HOME/.kube
microk8s config > $HOME/.kube/config
sudo reboot
</code></pre>
<h3><u id="check-status-of-kubernetes-and-addons">Check status of Kubernetes and addons</u></h3>
<p>To retrieve the status of Kubernetes <strong>once</strong>, run the following command:</p>
<pre><code class="language-bash">microk8s.status --wait-ready
</code></pre>
<p>To retrieve the status of Kubernetes <strong>periodically</strong> (e.g., every 1 second), run the
following command:</p>
<pre><code class="language-bash">watch -n 1 microk8s.status --wait-ready
</code></pre>
<h3><u id="check-all-resources-in-kubernetes">Check all resources in Kubernetes</u></h3>
<p>To retrieve the status of the Kubernetes resources <strong>once</strong>, run the following command:</p>
<pre><code class="language-bash">kubectl get all --all-namespaces
</code></pre>
<p>To retrieve the status of the Kubernetes resources <strong>periodically</strong> (e.g., every 1
second), run the following command:</p>
<pre><code class="language-bash">watch -n 1 kubectl get all --all-namespaces
</code></pre>
<h3><u>Enable addons</u></h3>
<p>First, we need to enable the community plugins (maintained by third parties):</p>
<pre><code class="language-bash">microk8s.enable community
</code></pre>
<p>The Addons to be enabled are:</p>
<ul>
<li><code>dns</code>: enables resolving the pods and services by name</li>
<li><code>helm3</code>: required to install NATS</li>
<li><code>hostpath-storage</code>: enables providing storage for the pods (required by <code>registry</code>)</li>
<li><code>ingress</code>: deploys an ingress controller to expose the microservices outside Kubernetes</li>
<li><code>registry</code>: deploys a private registry for the TFS controller images</li>
<li><code>linkerd</code>: deploys the <a href="https://linkerd.io">linkerd service mesh</a> used for load balancing among replicas</li>
<li><code>prometheus</code>: set of tools that enable TFS observability through per-component instrumentation</li>
<li><code>metrics-server</code>: deploys the <a href="https://github.com/kubernetes-sigs/metrics-server">Kubernetes metrics server</a> for API access to service metrics</li>
</ul>
<pre><code class="language-bash">microk8s.enable dns helm3 hostpath-storage ingress registry prometheus metrics-server linkerd
</code></pre>
<p><strong>Important</strong>: Enabling some of the addons might take a few minutes. Do not proceed with the next steps until the addons are ready; otherwise, the deployment might fail.
To confirm everything is up and running:</p>
<ol>
<li>Periodically
<a href="#check-status-of-kubernetes-and-addons">Check the status of Kubernetes</a>
until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, linkerd, metrics-server, prometheus, registry, storage] in the enabled block.</li>
<li>Periodically
<a href="#check-all-resources-in-kubernetes">Check Kubernetes resources</a>
until all pods are <strong>Ready</strong> and <strong>Running</strong>.</li>
<li>If it takes too long for the Pods to be ready, <strong>we observed that rebooting the machine may help</strong>.</li>
</ol>
<p>Then, create aliases to make the commands easier to access:</p>
<pre><code class="language-bash">sudo snap alias microk8s.helm3 helm3
sudo snap alias microk8s.linkerd linkerd
</code></pre>
<p>To validate that <code>linkerd</code> is working correctly, run:</p>
<pre><code class="language-bash">linkerd check
</code></pre>
<p>To validate that the <code>metrics-server</code> is working correctly, run:</p>
<pre><code class="language-bash">kubectl top pods --all-namespaces
</code></pre>
<p>and you should see a screen similar to the <code>top</code> command in Linux, showing the columns <em>namespace</em>, <em>pod name</em>, <em>CPU (cores)</em>, and <em>MEMORY (bytes)</em>.</p>
<p>In case pods are not starting, check the information from the pods' logs. For example, Linkerd is sensitive to proper <code>/etc/resolv.conf</code> syntax.</p>
<pre><code class="language-bash">kubectl logs <podname> --namespace <namespace>
</code></pre>
<p>If the command shows an error message, restarting the machine might also help.</p>
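<p>As a quick, illustrative sanity check (not part of the TFS tooling), you can flag any line in <code>/etc/resolv.conf</code> that does not start with one of the keywords documented in <code>resolv.conf(5)</code>:</p>
<pre><code class="language-bash"># No "suspicious line" output means the syntax looks sane
awk '!/^[[:space:]]*([#;]|$)/ && $1 !~ /^(nameserver|search|domain|options|sortlist)$/ \
    { print "suspicious line: " $0; bad=1 } END { exit bad }' /etc/resolv.conf \
  && echo "/etc/resolv.conf looks OK" || echo "check /etc/resolv.conf"
</code></pre>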
<h3><u>Stop, Restart, and Redeploy</u></h3>
<p>Find below some additional commands you might need while working with MicroK8s:</p>
<pre><code class="language-bash">microk8s.stop # stop MicroK8s cluster (for instance, before power off your computer)
microk8s.start # start MicroK8s cluster
microk8s.reset # reset infrastructure to a clean state
</code></pre>
<p>If the previous commands do not work to recover the MicroK8s cluster, you can redeploy it.</p>
<p>If you want to keep MicroK8s configuration, use:</p>
<pre><code class="language-bash">sudo snap remove microk8s
</code></pre>
<p>If you need to completely drop MicroK8s and its complete configuration, use:</p>
<pre><code class="language-bash">sudo snap remove microk8s --purge
sudo apt-get remove --purge docker.io docker-buildx
</code></pre>
<p><strong>IMPORTANT</strong>: After uninstalling MicroK8s, it is convenient to reboot the computer (the VM if you work on a VM, or the physical computer if you use a physical computer). Otherwise, some system configurations, especially port forwarding and firewall rules, are not correctly cleaned up.</p>
<p>After the reboot, redeploy as it is described in this section.</p>
<h2 id="13-deploy-teraflowsdn"><strong>1.3. Deploy TeraFlowSDN</strong></h2>
<p>This section describes how to deploy TeraFlowSDN controller on top of MicroK8s using the environment configured in the previous sections.</p>
<h3><u>Install prerequisites</u></h3>
<pre><code class="language-bash">sudo apt-get install -y git curl jq
</code></pre>
<h3><u>Clone the Git repository of the TeraFlowSDN controller</u></h3>
<p>Clone from the ETSI-hosted GitLab code repository:</p>
<pre><code class="language-bash">mkdir ~/tfs-ctrl
git clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl
</code></pre>
<p><strong>Important</strong>: The original H2020-TeraFlow project hosted on GitLab.com has been
archived and will not receive further contributions/updates.
Please, clone from the <a href="https://labs.etsi.org/rep/tfs/controller">ETSI-hosted GitLab code repository</a>.</p>
<h3><u>Checkout the appropriate Git branch</u></h3>
<p>
TeraFlowSDN controller versions can be found in the appropriate release tags and/or branches as described in <a href="https://tfs.etsi.org/news/">Home > Versions</a>.</p>
<p>By default the branch <em>master</em> is checked out and points to the latest stable version of the TeraFlowSDN controller, while branch <em>develop</em> contains the latest developments and contributions under test and validation.</p>
<p>To switch to the appropriate branch, run the following command, replacing <code>develop</code> with the name of the branch you want to deploy:</p>
<pre><code class="language-bash">cd ~/tfs-ctrl
git checkout develop
</code></pre>
<h3><u>Prepare a deployment script with the deployment settings</u></h3>
<p>Create a new deployment script, e.g., <code>my_deploy.sh</code>, adding the appropriate settings as
follows.
This section provides just an overview of the available settings. An example <a href="https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh"><code>my_deploy.sh</code></a> script is provided in the root folder of the project for your convenience with full description of all the settings.</p>
<p><strong>Note</strong>: The example <code>my_deploy.sh</code> script provides reasonable settings for deploying a functional and complete enough TeraFlowSDN controller, and a brief description of their meaning. To see extended descriptions, check scripts in the <code>deploy</code> folder.</p>
<pre><code class="language-bash">cd ~/tfs-ctrl
tee my_deploy.sh >/dev/null << EOF
# ----- TeraFlowSDN ------------------------------------------------------------
export TFS_REGISTRY_IMAGES="http://localhost:32000/tfs/"
export TFS_COMPONENTS="context device ztp monitoring pathcomp service slice nbi webui load_generator"
export TFS_IMAGE_TAG="dev"
export TFS_K8S_NAMESPACE="tfs"
export TFS_EXTRA_MANIFESTS="manifests/nginx_ingress_http.yaml"
export TFS_GRAFANA_PASSWORD="admin123+"
export TFS_SKIP_BUILD=""
# ----- CockroachDB ------------------------------------------------------------
export CRDB_NAMESPACE="crdb"
export CRDB_EXT_PORT_SQL="26257"
export CRDB_EXT_PORT_HTTP="8081"
export CRDB_USERNAME="tfs"
export CRDB_PASSWORD="tfs123"
export CRDB_DATABASE="tfs"
export CRDB_DEPLOY_MODE="single"
export CRDB_DROP_DATABASE_IF_EXISTS="YES"
export CRDB_REDEPLOY=""
# ----- NATS -------------------------------------------------------------------
export NATS_NAMESPACE="nats"
export NATS_EXT_PORT_CLIENT="4222"
export NATS_EXT_PORT_HTTP="8222"
export NATS_REDEPLOY=""
# ----- QuestDB ----------------------------------------------------------------
export QDB_NAMESPACE="qdb"
export QDB_EXT_PORT_SQL="8812"
export QDB_EXT_PORT_ILP="9009"
export QDB_EXT_PORT_HTTP="9000"
export QDB_USERNAME="admin"
export QDB_PASSWORD="quest"
export QDB_TABLE_MONITORING_KPIS="tfs_monitoring_kpis"
export QDB_TABLE_SLICE_GROUPS="tfs_slice_groups"
export QDB_DROP_TABLES_IF_EXIST="YES"
export QDB_REDEPLOY=""
EOF
</code></pre>
<p>The settings are organized in 4 sections:</p>
<ul>
<li>Section <code>TeraFlowSDN</code>:<ul>
<li><code>TFS_REGISTRY_IMAGES</code> enables you to specify the private Docker registry to be used; by default, we assume the Docker registry enabled in MicroK8s.</li>
<li><code>TFS_COMPONENTS</code> specifies the components whose Docker images will be rebuilt, uploaded to the private Docker registry, and deployed in Kubernetes.</li>
<li><code>TFS_IMAGE_TAG</code> defines the tag to be used for Docker images being rebuilt and uploaded to the private Docker registry.</li>
<li><code>TFS_K8S_NAMESPACE</code> specifies the name of the Kubernetes namespace to be used for deploying the TFS components.</li>
<li><code>TFS_EXTRA_MANIFESTS</code> enables you to provide additional manifests to be applied to the Kubernetes environment during the deployment. A typical use case is to deploy ingress controllers, service monitors for Prometheus, etc.</li>
<li><code>TFS_GRAFANA_PASSWORD</code> lets you specify the password you want to use for the <code>admin</code> user of the Grafana instance being deployed and linked to the Monitoring component.</li>
<li><code>TFS_SKIP_BUILD</code>, if set to <code>YES</code>, prevents rebuilding the Docker images. That means the deploy script will redeploy the existing Docker images without rebuilding/updating them.</li>
</ul>
</li>
<li>Section <code>CockroachDB</code>: enables you to configure the deployment of the backend <a href="https://www.cockroachlabs.com/">CockroachDB</a> database.<ul>
<li>Check example script <a href="https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh"><code>my_deploy.sh</code></a> for further details.</li>
</ul>
</li>
<li>Section <code>NATS</code>: configures the deployment of the backend <a href="https://nats.io/">NATS</a> message broker.<ul>
<li>Check example script <a href="https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh"><code>my_deploy.sh</code></a> for further details.</li>
</ul>
</li>
<li>Section <code>QuestDB</code>: configures the deployment of the backend <a href="https://questdb.io/">QuestDB</a> timeseries database.<ul>
<li>Check example script <a href="https://labs.etsi.org/rep/tfs/controller/-/blob/master/my_deploy.sh"><code>my_deploy.sh</code></a> for further details.</li>
</ul>
</li>
</ul>
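<p>As a minimal sketch, settings like the following could be overridden in <code>my_deploy.sh</code> before sourcing it, e.g., to deploy only a subset of components and reuse previously built images. The values below are illustrative, not the defaults:</p>
<pre><code class="language-bash"># Illustrative overrides; adjust the component list and tag to your needs
export TFS_COMPONENTS="context device pathcomp service slice webui"
export TFS_IMAGE_TAG="dev"
export TFS_K8S_NAMESPACE="tfs-dev"
export TFS_SKIP_BUILD="YES"   # redeploy existing images without rebuilding
</code></pre>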
<h3><u>Confirm that MicroK8s is running</u></h3>
<p>Run the following command:</p>
<pre><code class="language-bash">microk8s status
</code></pre>
<p>If it reports <code>microk8s is not running, try microk8s start</code>, run the following command to start MicroK8s:</p>
<pre><code class="language-bash">microk8s start
</code></pre>
<p>Confirm everything is up and running:</p>
<ol>
<li>Periodically <a href="#check-status-of-kubernetes-and-addons">check the status of Kubernetes</a> until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, registry, storage] in the enabled block.</li>
<li>Periodically <a href="#check-all-resources-in-kubernetes">check the Kubernetes resources</a> until all pods are <strong>Ready</strong> and <strong>Running</strong>.</li>
</ol>
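<p>The periodic checks above can be sketched as a simple polling loop. This is an illustrative sketch, not part of the official scripts; it assumes <code>microk8s</code> is installed and on the PATH:</p>
<pre><code class="language-bash"># Block until the Kubernetes API is ready, then poll until all pods
# report a Running or Completed status.
microk8s status --wait-ready
while microk8s kubectl get pods --all-namespaces --no-headers \
      | awk '{print $4}' | grep -qvE '^(Running|Completed)$'; do
    echo "Waiting for pods to be ready..."
    sleep 5
done
</code></pre>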
<h3><u id="deploy-tfs-controller">Deploy TFS controller</u></h3>
<p>First, source the deployment settings defined in the previous section.
This way, you do not need to specify the environment variables in every command you execute to operate the TFS controller.
Remember to re-source the file if you open new terminal sessions.
Then, run the following commands to deploy the TeraFlowSDN controller on top of the MicroK8s Kubernetes platform.</p>
<pre><code class="language-bash">cd ~/tfs-ctrl
source my_deploy.sh
./deploy/all.sh
</code></pre>
<p>The script performs the following steps:</p>
<ul>
<li>Executes script <code>./deploy/crdb.sh</code> to automate the deployment of the CockroachDB database used by the Context component.<ul>
<li>The script automatically checks if CockroachDB is already deployed.</li>
<li>If there are settings instructing to drop the database and/or redeploy CockroachDB, it performs the appropriate actions to honor them, as defined in the previous section.</li>
</ul>
</li>
<li>Executes script <code>./deploy/nats.sh</code> to automate the deployment of the NATS message broker used by the Context component.<ul>
<li>The script automatically checks if NATS is already deployed.</li>
<li>If there are settings instructing to redeploy the message broker, it performs the appropriate actions to honor them, as defined in the previous section.</li>
</ul>
</li>
<li>Executes script <code>./deploy/qdb.sh</code> to automate the deployment of the QuestDB timeseries database used by the Monitoring component.<ul>
<li>The script automatically checks if QuestDB is already deployed.</li>
<li>If there are settings instructing to redeploy the timeseries database, it performs the appropriate actions to honor them, as defined in the previous section.</li>
</ul>
</li>
<li>Executes script <code>./deploy/tfs.sh</code> to automate the deployment of TeraFlowSDN.<ul>
<li>Creates the namespace defined in <code>TFS_K8S_NAMESPACE</code>.</li>
<li>Creates secrets for CockroachDB, NATS, and QuestDB to be used by the Context and Monitoring components.</li>
<li>Builds the Docker images for the components defined in <code>TFS_COMPONENTS</code>.</li>
<li>Tags the Docker images with the value of <code>TFS_IMAGE_TAG</code>.</li>
<li>Pushes the Docker images to the repository defined in <code>TFS_REGISTRY_IMAGE</code>.</li>
<li>Deploys the components defined in <code>TFS_COMPONENTS</code>.</li>
<li>Creates the file <code>tfs_runtime_env_vars.sh</code> with the environment variables for the components defined in <code>TFS_COMPONENTS</code>, defining their local host addresses and port numbers.</li>
<li>Applies the extra manifests defined in <code>TFS_EXTRA_MANIFESTS</code>, such as:<ul>
<li>An ingress controller listening on port 80 for HTTP connections, enabling external access to the TeraFlowSDN WebUI, Grafana dashboards, and Compute NBI interfaces.</li>
<li>Service monitors to enable monitoring the performance of the components, device drivers, and service handlers.</li>
</ul>
</li>
<li>Initializes and configures the Grafana dashboards (if the Monitoring component is deployed).</li>
</ul>
</li>
<li>Reports a summary of the deployment.<ul>
<li>See <a href="#15-show-deployment-and-logs">Show Deployment and Logs</a></li>
</ul>
</li>
</ul>
<h2 id="14-webui-and-grafana-dashboards"><strong>1.4. WebUI and Grafana Dashboards</strong></h2>
<p>This section describes how to get access to the TeraFlowSDN controller WebUI and the monitoring Grafana dashboards.</p>
<h3><u>Access the TeraFlowSDN WebUI</u></h3>
<p>If you followed the installation steps based on MicroK8s, an ingress controller was installed that listens on TCP port 80.</p>
<p>Besides, the ingress controller defines the following reverse proxy paths (on your local machine):</p>
<ul>
<li><code>http://127.0.0.1/webui</code>: points to the WebUI of TeraFlowSDN.</li>
<li><code>http://127.0.0.1/grafana</code>: points to the Grafana dashboards.
This endpoint brings access to the monitoring dashboards of TeraFlowSDN.
The credentials for the <code>admin</code> user are those defined in the <code>my_deploy.sh</code> script, in the <code>TFS_GRAFANA_PASSWORD</code> variable.</li>
<li><code>http://127.0.0.1/restconf</code>: points to the Compute component NBI based on RESTCONF.
This endpoint enables connecting external software, such as the ETSI Open Source MANO NFV Orchestrator, to TeraFlowSDN.</li>
</ul>
<p><strong>Note</strong>: When the VM is created, a port forward from host TCP port 8080 to the VM's TCP port 80 is configured, so the WebUIs and REST APIs of TeraFlowSDN are exposed on the endpoint <code>127.0.0.1:8080</code> of your local machine instead of <code>127.0.0.1:80</code>.</p>
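<p>A quick reachability check of these endpoints can be done with <code>curl</code> from your local machine. This is an illustrative sketch; port <code>8080</code> is used here, assuming access through the VM port forward described in the note above (use port <code>80</code> otherwise):</p>
<pre><code class="language-bash"># Print the HTTP status code of each endpoint; expect 200 or a redirect
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/webui/
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/grafana/
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/restconf/
</code></pre>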
<h2 id="15-show-deployment-and-logs"><strong>1.5. Show Deployment and Logs</strong></h2>
<p>This section presents some helper scripts to inspect the status of the deployment and
the logs of the components.
These scripts are particularly helpful for troubleshooting during execution of
experiments, development, and debugging.</p>
<h3><u>Report the deployment of the TFS controller</u></h3>
<p>The summary report given at the end of the <a href="#deploy-tfs-controller">Deploy TFS controller</a>
procedure can be generated manually at any time by running the following command.
You can avoid sourcing <code>my_deploy.sh</code> if it has already been done.</p>
<pre><code class="language-bash">cd ~/tfs-ctrl
source my_deploy.sh
./deploy/show.sh
</code></pre>
<p>Use this script to validate that all the pods, deployments, replica sets, ingress
controller, etc. are ready and have the appropriate state, e.g., <em>running</em> for Pods, and
the services are deployed and have appropriate IP addresses and port numbers.</p>
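<p>The checks performed by <code>show.sh</code> can also be done manually with <code>kubectl</code>. This is an illustrative sketch; <code>tfs</code> is assumed as the namespace when <code>TFS_K8S_NAMESPACE</code> is unset:</p>
<pre><code class="language-bash"># List the main Kubernetes resources of the TFS deployment
microk8s kubectl get deployments,pods,services,ingress \
    --namespace "${TFS_K8S_NAMESPACE:-tfs}"
</code></pre>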
<h3><u>Report the log of a specific TFS controller component</u></h3>
<p>A number of scripts are pre-created in the <code>scripts</code> folder to facilitate the inspection
of the component logs.
For instance, to dump the log of the Context component, run the following command.
You can avoid sourcing <code>my_deploy.sh</code> if it has already been done.</p>
<pre><code class="language-bash">source my_deploy.sh
./scripts/show_logs_context.sh
</code></pre>
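<p>These helper scripts essentially wrap <code>kubectl logs</code>. An equivalent manual command could look as follows. This is an illustrative sketch; the Deployment name <code>contextservice</code> is an assumption here, so check the actual names with <code>kubectl get deployments</code> first:</p>
<pre><code class="language-bash"># Tail and follow the log of the Context component
microk8s kubectl logs --namespace "${TFS_K8S_NAMESPACE:-tfs}" \
    deployment/contextservice --tail=100 --follow
</code></pre>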
</article>
</div>
<script>var target=document.getElementById(location.hash.slice(1));target&&target.name&&(target.checked=target.name.startsWith("__tabbed_"))</script>
</div>
<button type="button" class="md-top md-icon" data-md-component="top" hidden>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M13 20h-2V8l-5.5 5.5-1.42-1.42L12 4.16l7.92 7.92-1.42 1.42L13 8z"/></svg>
Back to top
</button>
</main>
<footer class="md-footer">
<nav class="md-footer__inner md-grid" aria-label="Footer" >
<a href=".." class="md-footer__link md-footer__link--prev" aria-label="Previous: 0. Home">
<div class="md-footer__button md-icon">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M20 11v2H8l5.5 5.5-1.42 1.42L4.16 12l7.92-7.92L13.5 5.5 8 11z"/></svg>
</div>
<div class="md-footer__title">
<span class="md-footer__direction">
Previous
</span>
<div class="md-ellipsis">
0. Home
</div>
</div>
</a>
<a href="../development_guide/" class="md-footer__link md-footer__link--next" aria-label="Next: 2. Development Guide">
<div class="md-footer__title">
<span class="md-footer__direction">
Next
</span>
<div class="md-ellipsis">
2. Development Guide
</div>
</div>
<div class="md-footer__button md-icon">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path d="M4 11v2h12l-5.5 5.5 1.42 1.42L19.84 12l-7.92-7.92L10.5 5.5 16 11z"/></svg>
</div>
</a>
</nav>
<div class="md-footer-meta md-typeset">
<div class="md-footer-meta__inner md-grid">
<div class="md-copyright">
<div class="md-copyright__highlight">
Copyright © 2019-2024 TeraflowSDN Project
</div>
Made with
<a href="https://squidfunk.github.io/mkdocs-material/" target="_blank" rel="noopener">
Material for MkDocs
</a>
</div>
<div class="md-social">
<a href="https://tfs.etsi.org/" target="_blank" rel="noopener" title="tfs.etsi.org" class="md-social__link">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><!--! Font Awesome Free 6.6.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2024 Fonticons, Inc.--><path d="M352 256c0 22.2-1.2 43.6-3.3 64H163.4c-2.2-20.4-3.3-41.8-3.3-64s1.2-43.6 3.3-64h185.3c2.2 20.4 3.3 41.8 3.3 64m28.8-64h123.1c5.3 20.5 8.1 41.9 8.1 64s-2.8 43.5-8.1 64H380.8c2.1-20.6 3.2-42 3.2-64s-1.1-43.4-3.2-64m112.6-32H376.7c-10-63.9-29.8-117.4-55.3-151.6 78.3 20.7 142 77.5 171.9 151.6zm-149.1 0H167.7c6.1-36.4 15.5-68.6 27-94.7 10.5-23.6 22.2-40.7 33.5-51.5C239.4 3.2 248.7 0 256 0s16.6 3.2 27.8 13.8c11.3 10.8 23 27.9 33.5 51.5 11.6 26 20.9 58.2 27 94.7m-209 0H18.6c30-74.1 93.6-130.9 172-151.6-25.5 34.2-45.3 87.7-55.3 151.6M8.1 192h123.1c-2.1 20.6-3.2 42-3.2 64s1.1 43.4 3.2 64H8.1C2.8 299.5 0 278.1 0 256s2.8-43.5 8.1-64m186.6 254.6c-11.6-26-20.9-58.2-27-94.6h176.6c-6.1 36.4-15.5 68.6-27 94.6-10.5 23.6-22.2 40.7-33.5 51.5-11.2 10.7-20.5 13.9-27.8 13.9s-16.6-3.2-27.8-13.8c-11.3-10.8-23-27.9-33.5-51.5zM135.3 352c10 63.9 29.8 117.4 55.3 151.6-78.4-20.7-142-77.5-172-151.6zm358.1 0c-30 74.1-93.6 130.9-171.9 151.6 25.5-34.2 45.2-87.7 55.3-151.6h116.7z"/></svg>
</a>
<a href="https://labs.etsi.org/rep/tfs" target="_blank" rel="noopener" title="labs.etsi.org" class="md-social__link">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><!--! Font Awesome Free 6.6.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2024 Fonticons, Inc.--><path d="m503.5 204.6-.7-1.8-69.7-181.78c-1.4-3.57-3.9-6.59-7.2-8.64-2.4-1.55-5.1-2.515-8-2.81s-5.7.083-8.4 1.11c-2.7 1.02-5.1 2.66-7.1 4.78-1.9 2.12-3.3 4.67-4.1 7.44l-47 144H160.8l-47.1-144c-.8-2.77-2.2-5.31-4.1-7.43-2-2.12-4.4-3.75-7.1-4.77a18.1 18.1 0 0 0-8.38-1.113 18.4 18.4 0 0 0-8.04 2.793 18.1 18.1 0 0 0-7.16 8.64L9.267 202.8l-.724 1.8a129.57 129.57 0 0 0-3.52 82c7.747 26.9 24.047 50.7 46.447 67.6l.27.2.59.4 105.97 79.5 52.6 39.7 32 24.2c3.7 1.9 8.3 4.3 13 4.3s9.3-2.4 13-4.3l32-24.2 52.6-39.7 106.7-79.9.3-.3c22.4-16.9 38.7-40.6 45.6-67.5 8.6-27 7.4-55.8-2.6-82"/></svg>
</a>
<a href="https://www.linkedin.com/company/teraflowsdn/" target="_blank" rel="noopener" title="www.linkedin.com" class="md-social__link">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 448 512"><!--! Font Awesome Free 6.6.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2024 Fonticons, Inc.--><path d="M416 32H31.9C14.3 32 0 46.5 0 64.3v383.4C0 465.5 14.3 480 31.9 480H416c17.6 0 32-14.5 32-32.3V64.3c0-17.8-14.4-32.3-32-32.3M135.4 416H69V202.2h66.5V416zm-33.2-243c-21.3 0-38.5-17.3-38.5-38.5S80.9 96 102.2 96c21.2 0 38.5 17.3 38.5 38.5 0 21.3-17.2 38.5-38.5 38.5m282.1 243h-66.4V312c0-24.8-.5-56.7-34.5-56.7-34.6 0-39.9 27-39.9 54.9V416h-66.4V202.2h63.7v29.2h.9c8.9-16.8 30.6-34.5 62.9-34.5 67.2 0 79.7 44.3 79.7 101.9z"/></svg>
</a>
<a href="https://twitter.com/TeraflowSDN" target="_blank" rel="noopener" title="twitter.com" class="md-social__link">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><!--! Font Awesome Free 6.6.0 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2024 Fonticons, Inc.--><path d="M389.2 48h70.6L305.6 224.2 487 464H345L233.7 318.6 106.5 464H35.8l164.9-188.5L26.8 48h145.6l100.5 132.9zm-24.8 373.8h39.1L151.1 88h-42z"/></svg>
</a>
</div>
</div>
</div>
</footer>
</div>
<div class="md-dialog" data-md-component="dialog">
<div class="md-dialog__inner md-typeset"></div>
</div>
<div class="md-progress" data-md-component="progress" role="progressbar"></div>
<script id="__config" type="application/json">{"base": "..", "features": ["navigation.instant", "navigation.instant.progress", "navigation.top", "navigation.footer", "navigation.path", "search", "search.highlight", "toc.integrate"], "search": "../assets/javascripts/workers/search.6ce7567c.min.js", "translations": {"clipboard.copied": "Copied to clipboard", "clipboard.copy": "Copy to clipboard", "search.result.more.one": "1 more on this page", "search.result.more.other": "# more on this page", "search.result.none": "No matching documents", "search.result.one": "1 matching document", "search.result.other": "# matching documents", "search.result.placeholder": "Type to start searching", "search.result.term.missing": "Missing", "select.version": "Select version"}, "version": {"provider": "mike"}}</script>
<script src="../assets/javascripts/bundle.83f73b43.min.js"></script>
</body>
</html>