Sneak Preview – We Give You More Fantastic VSA Configurations

By Isak - June 22, 2016

Well yes, it’s time for the pre-midsummer holidays in Sweden. While we sing silly songs and have a blast, you get to read our final sneak preview of Vidispine 4.6 before release (also having a blast). This time you get more goodies around VSA in a clustered Vidispine configuration.

With vidispine-server it is easier than ever to set up a high-availability configuration with Vidispine. Install Solr with ZooKeeper, use a stand-alone ActiveMQ and any redundant PostgreSQL/MySQL solution, and you are ready to launch any number of vidispine-server instances. However, the Vidispine Server Agent (VSA) could previously only connect to a single Vidispine instance. This caused things to break, as any Vidispine instance might need to connect to the VSA to execute jobs. Not so any longer with VS 4.6.

Let’s start with some diagrams. The normal setup for VSA in 4.4 and 4.5 uses the operating system’s SSH service, see Figure 1.

Figure 1: The normal VSA setup, using the operating system’s SSH service

As a side note, in version 4.6 we also introduce the possibility of not using SSH at all, which can be used if both VS and VSA are in a VPC/VPN; see Figure 2, and read more about it in a previous VSA sneak preview.

Figure 2: VSA without an SSH tunnel

The SSH configuration in 4.4/4.5 depends on the operating system, and it requires a special user to be created. In VS 4.6, this is no longer necessary. Instead, vidispine-server bundles its own SSH server, which listens on its own port and does not require any extra Linux user account. In addition, the bundled SSH server is really locked down; it does not even have the shell subsystem enabled. See Figure 3. The new model is also great if you are using Docker or other container services, where enabling the operating system’s SSH might be more than a one-liner, or where you do not want to enable SSH at all. Just map any port to port 8183 in the container, and enable the VSA node.
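In a Docker setup, exposing the bundled SSH port could look something like the sketch below. This is a hypothetical docker-compose fragment; the image name and the host port 18183 are placeholders, and only the container ports 8183 (bundled SSH) and 8080 (VS API) come from this post.

```yaml
# Hypothetical sketch; image name and host port 18183 are placeholders.
services:
  vidispine-server:
    image: vidispine/server:4.6   # placeholder image name
    ports:
      - "18183:8183"   # map any host port to the bundled SSH server on 8183
      - "8080:8080"    # the VS API port
```

The host-side port you choose here is what you would then enter as the external VSA port when adding the node.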

Figure 3: VSA connecting to multiple Vidispine servers

In order to enable the VSA node, you need to do three things:

  1. Enable the VSA port in vidispine-server. This is done by adding these two lines to your server.yaml file, and restarting vidispine-server.
    vsaconnection:
      bindPort: 8183
    
  2. Add the VSA node to vidispine-server. This is done using the vidispine-admin tool.
    $ vidispine-admin.py vsa-add-node
    enter administrator user name (Enter for 'admin'):
    enter administrator password:
    enter internal address for Vidispine (Enter for localhost):
    enter VS API port (Enter for 8080):
    enter external address for Vidispine: vsnode1.example.com
    enter external VSA port for Vidispine (Enter for system default):
    enter unique name for this node (optional but recommended, if not entered host name will be used): vsanode_kigali
    enter uuid for this node (optional):
    do you want to add more VS nodes? (HA/cluster setup) y/n: y
    enter external address for Vidispine: vsnode2.example.com
    enter external VSA port for Vidispine (Enter for system default):
    do you want to add more VS nodes? (HA/cluster setup) y/n: n
    add VSA node..................: ok
    #Copy this content to /etc/vidispine/agent.conf on the VSA machine and restart the vidispine-agent service
    #Wed Jun 22 10:30:26 CEST 2016
    uuid=f57fca56-be27-43d5-8016-c8840ba4a875
    connectionString=vsnode1.example.com\t8183\tf57fca56-be27-43d5-8016-c8840ba4a875\t-----BEGIN DSA PRIVATE KEY-----\nMIIBuwIBAAKBgQDtSGUCF1rGbNKaO5Noeqcg+A58f0zS2OQdz/MN14uTa4IwrMzT\njT/Wq2gJz5IjYvR9yCtnkgHUFoTFhvPkeiBY6M3N86Wuk9tMBPgMWnurcN6hEGUZ\njKM1R2NpZiQ70PCM31jIvG27LiM24XuZH6lmBh3U6DxDKBXsz480fPeL9wIVAKcY\nBrnl5LONcBfsPI8O1dG4HCEDAoGBAKHMdBWK6tDJxDsJVhIbHChfmqTisbc1L6Vy\nuRIeUwM5Cje0HsRmzOVrX0Vqyt80B0mh9QWrgWUWZIE3VsObLQvAj3eUdV8X7KJW\nj7yArvYVSAs2OQlbiwX/pmmuO3A01YxfUsQXPEk9EF68KDrCh4PoNNLPE58Z+bjM\nfyXIeiOnAoGAA/4ORZTAq2qB50n1GiKjIbODxHvsvep8kl6H/PTw5r2DLlC4d6dl\nx+0h2fJLGJIMH6HWEfd9AUcvfR+WeRAVuujlDOkYW9DOxh3MZ+pS/CUhBAfhElWb\nO1V9nUc2ve2Na10LXs3qIbItbsC1/DVTOh1mtJu58KG7Po3SFX/W49ECFEuOdhk4\nq8B77dic2HQTBcWHb1a8\n-----END DSA PRIVATE KEY-----\n
    connectionString1=vsnode2.example.com\t8183\tf57fca56-be27-43d5-8016-c8840ba4a875\t-----BEGIN DSA PRIVATE KEY-----\nMIIBuwIBAAKBgQDtSGUCF1rGbNKaO5Noeqcg+A58f0zS2OQdz/MN14uTa4IwrMzT\njT/Wq2gJz5IjYvR9yCtnkgHUFoTFhvPkeiBY6M3N86Wuk9tMBPgMWnurcN6hEGUZ\njKM1R2NpZiQ70PCM31jIvG27LiM24XuZH6lmBh3U6DxDKBXsz480fPeL9wIVAKcY\nBrnl5LONcBfsPI8O1dG4HCEDAoGBAKHMdBWK6tDJxDsJVhIbHChfmqTisbc1L6Vy\nuRIeUwM5Cje0HsRmzOVrX0Vqyt80B0mh9QWrgWUWZIE3VsObLQvAj3eUdV8X7KJW\nj7yArvYVSAs2OQlbiwX/pmmuO3A01YxfUsQXPEk9EF68KDrCh4PoNNLPE58Z+bjM\nfyXIeiOnAoGAA/4ORZTAq2qB50n1GiKjIbODxHvsvep8kl6H/PTw5r2DLlC4d6dl\nx+0h2fJLGJIMH6HWEfd9AUcvfR+WeRAVuujlDOkYW9DOxh3MZ+pS/CUhBAfhElWb\nO1V9nUc2ve2Na10LXs3qIbItbsC1/DVTOh1mtJu58KG7Po3SFX/W49ECFEuOdhk4\nq8B77dic2HQTBcWHb1a8\n-----END DSA PRIVATE KEY-----\n
    operationMode=VSA-VS
    vxaname=vsanode_kigali
    fingerPrint=a3\:30\:6d\:04\:37\:46\:fb\:5d\:dd\:10\:4b\:8f\:4e\:6e\:91\:72
    
  3. Add the output from the tool to the VSA’s /etc/vidispine/agent.conf, and restart VSA.
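As a side note on the file format: agent.conf is a Java-properties-style file, and each connectionString value in the output above is a tab-separated quadruple of host, port, UUID, and private key, with the tabs written as \t. A small sketch (with a placeholder KEY standing in for the real private key material) shows how the fields split:

```shell
# Sketch only: 'KEY' is a placeholder for the private key block.
# printf '%b' turns the \t escapes into real tabs, as in the properties file.
conn='vsnode1.example.com\t8183\tf57fca56-be27-43d5-8016-c8840ba4a875\tKEY'
host=$(printf '%b' "$conn" | cut -f1)   # field 1: external VS address
port=$(printf '%b' "$conn" | cut -f2)   # field 2: VSA port
uuid=$(printf '%b' "$conn" | cut -f3)   # field 3: the VSA node's uuid
echo "$host:$port"   # prints vsnode1.example.com:8183
```

The additional connectionString1, connectionString2, … entries follow the same layout, one per VS node in the cluster.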

Let me walk you through the input and output of the text block above. First, some standard stuff.

enter administrator user name (Enter for 'admin'):
enter administrator password:

I assume you are running vidispine-admin on the same machine as vidispine-server, so just hit enter here:

enter internal address for Vidispine (Enter for localhost):
enter VS API port (Enter for 8080):

Now, which address should VSA use to connect to vidispine-server?

enter external address for Vidispine: vsnode1.example.com

If you are using Docker, firewall port forwarding, or anything else that means VSA should not connect to the same port number as specified in your server.yaml, provide the external port number here. Otherwise, hit enter.

enter external VSA port for Vidispine (Enter for system default):

I really suggest that you give the VSA node a name. You will find more new reasons for this below.

enter unique name for this node (optional but recommended, if not entered host name will be used): vsanode_kigali

We let the system assign the UUID:

enter uuid for this node (optional):

Now we could be done. But we are using a clustered vidispine-server, and the VSA needs to be able to connect to both instances. So we add the other one as well:

do you want to add more VS nodes? (HA/cluster setup) y/n: y
enter external address for Vidispine: vsnode2.example.com
enter external VSA port for Vidispine (Enter for system default):
do you want to add more VS nodes? (HA/cluster setup) y/n: n

That’s it. What will happen now is that vidispine-server will generate a key pair and return the private key in the text output. The public key is stored in Vidispine’s database.

A final goodie. VSA URIs (starting with vxa://) are not very human-readable, as they contain the UUID of the VSA node. With 4.6, you can use the name of the node as well, like /API/vxa/vsanode_kigali/. When URIs are returned from Vidispine, you can use methodMetadata to specify that you want the returned URIs to contain VSA names instead of UUIDs, e.g., /API/item?content=shape&methodMetadata=vsauri=NAME.
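To make that concrete, here are the two equivalent address forms side by side, using the node from the transcript above:

```
/API/vxa/f57fca56-be27-43d5-8016-c8840ba4a875/      by UUID
/API/vxa/vsanode_kigali/                            by name (new in 4.6)
/API/item?content=shape&methodMetadata=vsauri=NAME  request NAME-form URIs in responses
```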

Note! Specifying VSAs by name, and returning VSA URIs with names, only works if the name is unique. If two or more VSAs have the same name, you will get a 404 back.