XDC CLI

Basics

The experiment development container (XDC) has a utility installed to aid experimenters, appropriately called xdc. It can be found at /usr/local/bin/xdc on all XDCs. It is used to administer secure tunnels to materialized experiments, generate useful materialization-specific configuration, and show status.

The utility uses the Merge API for some tasks, so it must authenticate the experimenter. The user id and password are the same as with the mergetb utility and the Merge Launch GUI. To log in, use the login command.

We start by listing and describing all commands, then give an example that puts it all together.

Command structure

Administrative

Materialization connection

These commands configure and control the secure connection from an XDC to a materialized experiment. They create and configure a Wireguard tunnel from the XDC to the infrapod of the materialization. They also configure name resolution to resolve experiment names from the XDC through the tunnel.

Experiment Control

These commands control aspects of a materialized and attached experiment.

Ansible

The ansible collection of commands is meant to make it easier to use Ansible to configure and provision experiment nodes.

Configuration Generation

This set of commands extracts useful information from an experiment materialization and generates configuration files or information for experimenters.

Command reference

Administrative

Login

The username and password are the same as the ones used to log in to the GUI or mergetb.

glawler@one:~$ xdc login -h
Login to the Merge portal
Usage:
xdc login <username> [flags]
Flags:
-a, --api string merge API endpoint (default "api.mergetb.net")
-c, --certificate string path to API certificate
-h, --help help for login
--passwd string user password

A specific API endpoint (and its corresponding certificate) may be specified, but the default will generally be what you want unless you're running a local or custom Merge instance.

glawler@one:~$ xdc login glawler
password:
glawler@one:~$ xdc show login
User: glawler
API: api.mergetb.net

You will remain logged in until you logout via xdc logout.

Logout

Logout removes the locally stored authentication token and connection configuration information.

glawler@one:~$ xdc logout -h
Logout of the merge api
Usage:
xdc logout [flags]
Flags:
-h, --help help for logout
glawler@one:~$ xdc show login
User: glawler
API: api.mergetb.net
glawler@one:~$ xdc logout
glawler@one:~$ xdc show login
Not logged in.
glawler@one:~$

Show Login

Show information about the current login.

glawler@one:~$ xdc show login
User: glawler
API: api.mergetb.net

Materialization Connection

These commands configure secure Wireguard tunnels between the XDC and an experiment materialization.

Attach

The attach command creates the tunnel between the XDC and the specified experiment materialization. It also updates name resolution on the XDC (in /etc/resolv.conf) to point to the name resolution service in the infrapod on the materialization side of the tunnel. This means experiment node names will resolve from the XDC.

glawler@one:~$ xdc show tunnel
xdc not attached: no wireguard interface found
glawler@one:~$ xdc attach glawler hi one
glawler@one:~$ xdc show tunnel
Materialization: glawler hi one
Interface: wgdiuyc (UP)
Address: 192.168.254.13/32
glawler@one:~$ ping n00
PING n00.one.hi.glawler (172.30.0.10) 56(84) bytes of data.
64 bytes from 172.30.0.10 (172.30.0.10): icmp_seq=1 ttl=63 time=2.55 ms
^C
--- n00.one.hi.glawler ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.553/2.553/2.553/0.000 ms
glawler@one:~$

Detach

Destroy an existing tunnel.

glawler@one:~$ xdc show tunnel
Materialization: glawler hi one
Interface: wgdiuyc (UP)
Address: 192.168.254.13/32
glawler@one:~$ xdc detach
glawler@one:~$ xdc show tunnel
xdc not attached: no wireguard interface found
glawler@one:~$ ping n00
ping: n00: Name or service not known
glawler@one:~$

Show Tunnel

Give details about an existing tunnel, if there is one.

glawler@one:~$ xdc attach glawler hi one
glawler@one:~$ xdc show tunnel
Materialization: glawler hi one
Interface: wgdqwgq (UP)
Address: 192.168.254.14/32
glawler@one:~$ ip -br -c link
lo UNKNOWN 00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eth0@if169 UP ea:97:24:83:bf:12 <BROADCAST,MULTICAST,UP,LOWER_UP>
wgdqwgq UNKNOWN <POINTOPOINT,NOARP,UP,LOWER_UP>
glawler@one:~$

Experiment Control

Experiment node power control

These commands let you power cycle, power off, and power on attached experiment nodes. All commands take one or more node names. Both experiment and resource names are supported. That is, you can use the hostname of the experiment node or the resource name given by the site administrator. Both of these names refer to the same node and can be seen via the mergetb show realization ... command.

These commands invoke the beluga power control daemon at the correct site with the correct arguments.

Commands related to power for experiment nodes
Usage:
xdc power [command]
Available Commands:
cycle Power cycle the given node (node can be experiment or site-resource name)
off Power off the given node (node can be experiment or site-resource name)
on Power on the given node (node can be experiment or site-resource name)
status Show the status of the given node (node can be experiment or site-resource name)
Flags:
-b, --belugactl string Power control daemon endpoint (default "belugactl:6941")
-h, --help help for power
Global Flags:
-n, --nocolor disable color in output
Use "xdc power [command] --help" for more information about a command.

Cycle

Power cycle turns the node off then on again after a short period.

Power cycle the given node (node can be experiment or site-resource name)
Usage:
xdc power cycle <node> <node> ... <node> [flags]
Flags:
--hard if given, hard power cycle (default true)
-h, --help help for cycle
Global Flags:
-b, --belugactl string Power control daemon endpoint (default "belugactl:6941")
-n, --nocolor disable color in output

Off / On

Turns the node off or on. The help for on is shown; off is the same.

Power on the given node (node can be experiment or site-resource name)
Usage:
xdc power on <node> <node> ... <node> [flags]
Flags:
-h, --help help for on
Global Flags:
-b, --belugactl string Power control daemon endpoint (default "belugactl:6941")
-n, --nocolor disable color in output

Status

Show the status of the given nodes (node can be experiment or site-resource name)
Usage:
xdc power status <node> <node> ... <node> [flags]
Flags:
-h, --help help for status
Global Flags:
-b, --belugactl string Power control daemon endpoint (default "belugactl:6941")
-n, --nocolor disable color in output

Ansible

Ansible Inventory

The ansible inventory command generates an Ansible inventory from the attached materialization. It puts all nodes into an [all] group and optionally into other groups based on the group properties of the nodes.

glawler@one:~$ xdc ansible inv
#
# Ansible Inventory File - auto-generated
#
[all]
n00
n01
n02
n03
n04
n05
[edge]
n00
n05
[router]
n01
n02
n03
n04
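The grouping rule above can be sketched in a few lines of Python. This is a hypothetical illustration of the scheme, not the actual xdc implementation: every node lands in [all], and each node carrying a group property is also listed under that group.

```python
def build_inventory(nodes):
    """Build Ansible INI inventory text from a {node: group-or-None} mapping."""
    lines = ["[all]"] + sorted(nodes)
    groups = {}
    for node, group in nodes.items():
        if group is not None:
            groups.setdefault(group, []).append(node)
    for group in sorted(groups):
        lines.append(f"[{group}]")
        lines.extend(sorted(groups[group]))
    return "\n".join(lines)

# Reproduces the listing above for the example topology.
print(build_inventory({
    "n00": "edge", "n05": "edge",
    "n01": "router", "n02": "router", "n03": "router", "n04": "router",
}))
```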

Example of setting a node group property in mergexp:

import mergexp as mx
topo = mx.Topology("Example")
...
node = topo.device('n00')
node.props['group'] = 'router'

Given this model, the n00 node would be put in the all and router groups in the inventory file.

Ansible Ping

Use the ansible ping module to ping all nodes in an attached materialization.

glawler@one:~$ xdc ansible ping -h
Call ansible ping on all attached experiment nodes
Usage:
xdc ansible ping [flags]
Flags:
-h, --help help for ping
glawler@one:~$ xdc ansible ping
n03 | SUCCESS => {
"changed": false,
"ping": "pong"
}
n02 | SUCCESS => {
"changed": false,
"ping": "pong"
}
n00 | SUCCESS => {
"changed": false,
"ping": "pong"
}
n04 | SUCCESS => {
"changed": false,
"ping": "pong"
}
n01 | SUCCESS => {
"changed": false,
"ping": "pong"
}
n05 | SUCCESS => {
"changed": false,
"ping": "pong"
}

Generate hosts file

There are two networks on each experiment node, the infrastructure network (for testbed control and configuration) and the experiment network (which is used by the experiment itself).

This command generates an /etc/hosts file for an attached materialization. It parses all IP addresses for the given (or attached) materialization and maps them to the node names on the experiment network. Each address is assigned to an interface, so nodes with multiple interfaces have multiple addresses mapped to the same name. To assign a unique name to each address, the generator appends the zero-based interface index to the name, e.g. the second interface of the node henry would be named henry-1. The first interface uses the node name itself, so at least one interface on the node will resolve using just the node name.

The name resolution in the materialization infrapod resolves the names given in the experiment model to the infrastructure network. You may generate the same names on the experiment network, but the resolution order may be OS specific. (That is, which henry gets resolved: the one on the infrastructure network or the one on the experiment network?) The generator has a -p (--prefix) argument to get around this. The given prefix is prepended to every generated name, so xdc generate etchosts -p exp_ generates names like exp_henry. Thus the experimenter can keep the names separate across networks.
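The naming scheme can be sketched as a small Python function. This is an illustration of the rule described above, not the generator itself: interface 0 gets the bare node name, later interfaces get a zero-based index suffix, and an optional prefix is prepended to every name.

```python
def hosts_lines(node_addrs, prefix=""):
    """node_addrs: {node: [address per experiment interface, in order]}.
    Returns /etc/hosts-style lines using the naming scheme described above."""
    lines = []
    for node, addrs in node_addrs.items():
        for idx, addr in enumerate(addrs):
            # The first interface resolves with the bare name; others get -<idx>.
            name = node if idx == 0 else f"{node}-{idx}"
            lines.append(f"{addr} {prefix}{name}")
    return lines

# n01 has three experiment interfaces, as in the sample output below.
for line in hosts_lines({"n01": ["10.0.0.2", "10.0.1.1", "10.0.2.1"]}, prefix="exp_"):
    print(line)  # 10.0.0.2 exp_n01 / 10.0.1.1 exp_n01-1 / 10.0.2.1 exp_n01-2
```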

glawler@one:~$ xdc generate etchosts -h
Generate an /etc/hosts file based on a realization. If no args, generate based on attached materialization.
Usage:
xdc generate etchosts [pid eid rid] [flags]
Flags:
-h, --help help for etchosts
-p, --prefix string If given prefix this to the node names.
glawler@one:~$ xdc generate etchosts -p exp_
10.0.0.1 exp_n00
10.0.0.2 exp_n01
10.0.1.1 exp_n01-1
10.0.1.2 exp_n02
10.0.2.1 exp_n01-2
10.0.2.2 exp_n03
10.0.3.1 exp_n02-1
10.0.3.2 exp_n04
10.0.4.1 exp_n03-1
10.0.4.2 exp_n04-1
10.0.5.1 exp_n04-2
10.0.5.2 exp_n05
glawler@one:~$

Putting it all together

This example shows how to create and add /etc/hosts mappings on all your experiment nodes so you can resolve the nodes by name.

Note that the node names given in the experiment model are understood by the experiment-specific DNS that is created with your experiment's materialization. So we prefix the names with exp_ to distinguish them.

glawler@one:~$ xdc attach glawler hi one
glawler@one:~$ xdc show tunnel
Materialization: glawler hi one
Interface: wgdpyzh (UP)
Address: 192.168.254.5/32
glawler@one:~$ xdc ansible inv > myexp.inv
glawler@one:~$ xdc generate etchosts -p exp_ > myexp.etchosts

We use the blockinfile Ansible module to insert the etchosts data into /etc/hosts on all nodes of the materialized experiment.

glawler@one:~$ cat add_hosts.yml
- hosts: all
  become: true
  tasks:
    - name: Insert file into /etc/hosts
      blockinfile:
        dest: /etc/hosts
        block: "{{ lookup('file', 'myexp.etchosts') }}"
        marker: "# {mark} Ansible Managed Exp. Network "

Execute the playbook to insert the host names.

glawler@one:~$ ansible-playbook -i myexp.inv add_hosts.yml
PLAY [all] ********************************************************************************************
TASK [Gathering Facts] ********************************************************************************
ok: [n00]
ok: [n05]
ok: [n01]
ok: [n03]
ok: [n04]
ok: [n02]
TASK [Insert file into /etc/hosts] ********************************************************************
changed: [n03]
changed: [n04]
changed: [n01]
changed: [n05]
changed: [n00]
changed: [n02]
PLAY RECAP ********************************************************************************************
n00 : ok=2 changed=1 unreachable=0 failed=0
n01 : ok=2 changed=1 unreachable=0 failed=0
n02 : ok=2 changed=1 unreachable=0 failed=0
n03 : ok=2 changed=1 unreachable=0 failed=0
n04 : ok=2 changed=1 unreachable=0 failed=0
n05 : ok=2 changed=1 unreachable=0 failed=0

Now we confirm the change by SSHing to an experiment node and pinging using both the infrastructure name and the experiment network name.

glawler@one:~$ ssh n00
Linux n00 4.19.0-8-amd64 #1 SMP Debian 4.19.98-1 (2020-01-26) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Apr 28 23:23:33 2020 from 192.168.254.5
glawler@n00:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 debian
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# BEGIN Ansible Managed Exp. Network
10.0.0.1 exp_n00
10.0.0.2 exp_n01
10.0.1.1 exp_n01-1
10.0.1.2 exp_n02
10.0.2.1 exp_n01-2
10.0.2.2 exp_n03
10.0.3.1 exp_n02-1
10.0.3.2 exp_n04
10.0.4.1 exp_n03-1
10.0.4.2 exp_n04-1
10.0.5.1 exp_n04-2
10.0.5.2 exp_n05
# END Ansible Managed Exp. Network
glawler@n00:~$ ping exp_n05
PING exp_n05 (10.0.5.2) 56(84) bytes of data.
64 bytes from exp_n05 (10.0.5.2): icmp_seq=1 ttl=61 time=6.12 ms
^C
--- exp_n05 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 6.124/6.124/6.124/0.000 ms
glawler@n00:~$ ping n05
PING n05.two.hi.glawler (172.30.0.11) 56(84) bytes of data.
64 bytes from 172.30.0.11 (172.30.0.11): icmp_seq=1 ttl=64 time=1.63 ms
64 bytes from 172.30.0.11 (172.30.0.11): icmp_seq=2 ttl=64 time=1.01 ms
^C
--- n05.two.hi.glawler ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 1.006/1.319/1.633/0.315 ms
glawler@n00:~$

Note that once we're on an experiment node, we can resolve the original node names to the infrastructure network (172.30.0.0/16) and our new exp_ names on the experiment network (which we've chosen to be in 10.0.0.0/16).