TL;DR – The path is
/data/vco/usr/lib/vco/app-server/conf by default.
The reason for this post is that an old problem has been made new and more interesting, and it's annoying to deal with if you don't know Kubernetes or containers.
A little backstory: vRealize Orchestrator clustered setups have never really been very "clustered". Jobs run on worker nodes, and the only things shared between the nodes are the workflows, actions, et al., not any of the backing files. This is fine for most use cases, but for some it becomes a problem. It used to be fairly easy to solve, because the application ran directly on the virtual machines, so there was no additional abstraction layer to deal with, and the logging was easy to understand in that context. Adding containers and Kubernetes to the mix makes things less straightforward.
The main problem is that any SSH key generated (or stored) on one vRO node in a cluster is not automatically replicated to the rest of the nodes in the cluster, so manual steps are required to copy it around if you don't want your workflow to fail just because it ran on the "wrong" node. If you look in the workflow itself you'll see the path to the key, which is straightforward enough. The default is
/var/lib/vco/app-server/conf/vco_key.
With Kubernetes involved, though, that isn't the absolute path to the key anymore. We now need to figure out where the backing volume is mounted to be able to find the key itself, and that isn't obvious. First we need to see the volumes and where they're mounted in the vRO server container. For that we need the pod names, which have randomly generated strings on the end to avoid name collisions. To get them we'll run
kubectl get pods -n prelude. The pod we're looking for is the vco-app one:
root@computer [ ~ ]# kubectl get pods -n prelude
NAME                                           READY   STATUS    RESTARTS   AGE
abx-service-app-b9857bb96-ckfxt                1/1     Running   4          59d
approval-service-664b575b4f-4f4h2              1/1     Running   10         59d
assessment-service-app-574658cb54-wtrcj        1/1     Running   13         59d
automation-ui-app-58bbdcd5f-zfppx              1/1     Running   2          59d
blueprints-ui-app-849c88547f-crttr             1/1     Running   2          59d
catalog-service-app-8545969b8f-4r678           1/1     Running   6          59d
catalog-ui-app-dbb8d6d98-c4bbn                 1/1     Running   2          59d
cgs-service-b75884c64-jm2gt                    1/1     Running   15         59d
cgs-ui-app-6cdcc65db7-9d94r                    1/1     Running   2          59d
cmx-service-app-684bd8bdbf-945gj               1/1     Running   3          59d
codestream-app-56fdb89555-jfztf                1/1     Running   10         59d
deployment-ui-app-6b6d8d55c8-xlrd6             1/1     Running   2          59d
docker-registry-5bf9b47b68-4k9qp               1/1     Running   2          59d
docker-registry-proxy-57bb4b45bf-66rdl         1/1     Running   13         59d
ebs-app-6f7dcd6f79-mxp8t                       1/1     Running   3          59d
extensibility-ui-app-b9d69cbd9-6jxcr           1/1     Running   2          59d
form-service-app-b557d4fbb-2lzbl               1/1     Running   12         59d
identity-service-app-7f9896794c-49kc6          1/1     Running   12         59d
identity-ui-app-6fccc5f87c-8gj4w               1/1     Running   2          59d
landing-ui-app-6f7d59d84d-v8h24                1/1     Running   2          59d
migration-service-app-76796b64f7-sf4rn         1/1     Running   6          59d
migration-ui-app-674668fd45-brhmj              1/1     Running   2          59d
nginx-httpd-app-c59965577-s58h6                1/1     Running   2          59d
no-license-app-85bc44cc-mbf4p                  1/1     Running   2          59d
orchestration-ui-app-6744d98cd5-r9zfc          1/1     Running   2          59d
pgpool-6c5c685dcc-8nsfb                        1/1     Running   8          59d
pipeline-ui-app-544649545f-dlff9               1/1     Running   2          59d
postgres-0                                     1/1     Running   2          59d
project-service-app-788d8f484b-d9drr           1/1     Running   4          59d
provisioning-service-app-867966c8fc-wswcx      1/1     Running   3          59d
provisioning-ui-app-7555fb8f6c-8v792           1/1     Running   2          59d
proxy-service-8574c8f975-mxvr8                 1/1     Running   2          59d
quickstart-ui-app-7b5b6d654d-zpm9b             1/1     Running   2          59d
rabbitmq-ha-0                                  1/1     Running   2          59d
relocation-service-app-5544ffbddb-9b5lc        1/1     Running   10         59d
relocation-ui-app-59bb67d8b5-nxrb7             1/1     Running   2          59d
symphony-logging-797f69655b-gtlzp              1/1     Running   2          59d
tango-blueprint-service-app-6544697fd5-l5rwt   1/1     Running   12         59d
tango-vro-gateway-app-7bcd5d55dc-nsrtv         1/1     Running   6          59d
terraform-service-app-65d7ccb65b-2l9fh         2/2     Running   8          59d
user-profile-service-app-74d89ddb8c-7mf2g      1/1     Running   13         59d
vco-app-5574c66f65-7pfgx                       3/3     Running   6          59d
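If you'd rather not scan the whole list, a quick filter narrows it down. This is just a convenience one-liner, assuming the prelude namespace and the vco-app name prefix shown above:

```shell
# List only the vCO application pod.
# Assumes the pod name starts with "vco-app", as in the output above.
kubectl get pods -n prelude --no-headers | grep '^vco-app'
```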
Then we'll take the pod name (vco-app-5574c66f65-7pfgx in the example above, referred to as
<pod-name> from here on) and run
kubectl describe pod/<pod-name> -n prelude to get the pod's configuration, which includes the containers' volumes and where they are mounted on the underlying system. The container we're looking for is vco-server-app. These configs are fairly long, so I'll cut out just the relevant information. In the excerpt below you'll see that the volume vco-vol is mounted at
/usr/lib/vco inside the container. If we scroll to the bottom of the describe output, the Volumes section shows that the vco-vol volume is backed by the host path /data/vco/usr/lib/vco.
root@computer [ ~ ]# kubectl describe pod/vco-app-5574c66f65-7pfgx -n prelude
Name:               vco-app-5574c66f65-7pfgx
Namespace:          prelude
Priority:           0
PriorityClassName:  <none>
Node:               sk-vra1.kpsc.io/10.64.4.21
Start Time:         Fri, 31 Jul 2020 00:18:56 +0000
Labels:             app=vco-app
                    environment=new
                    pod-template-hash=5574c66f65
                    product=prelude
                    service-monitor=vco
Annotations:        <none>
Status:             Running
IP:                 10.244.0.240
Controlled By:      ReplicaSet/vco-app-5574c66f65
...
  vco-server-app:
    Container ID:  docker://f17600a1941fa265e67875e09c8f7cd22acffd8da1bd4f4c3dd68e2c51d8d419
    Image:         vco_private:latest
    Image ID:      docker://sha256:b70d16804deb3937db793a3f1f4ea4dfab68b1971fdce9ecef13a9e9aaae8e18
    Port:          8280/TCP
    Host Port:     0/TCP
    ...
    Mounts:
      /usr/lib/vco from vco-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lsm6f (ro)
      /var/run/vco from vco-scripting (rw)
      /var/run/vco-polyglot from vco-polyglot (rw)
      /var/run/vco-polyglot-runner-sock from vco-polyglot-runner-sock (ro)
...
Volumes:
  vco-vol:
    Type:          HostPath (bare host directory volume)
    Path:          /data/vco/usr/lib/vco
    HostPathType:  DirectoryOrCreate
...
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
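If you only want the host path and not the rest of the describe output, a JSONPath query can pull it straight out of the pod spec. This is just a shortcut, assuming the same pod and volume names as above:

```shell
# Print only the hostPath backing the vco-vol volume,
# instead of scrolling through the full describe output.
kubectl get pod vco-app-5574c66f65-7pfgx -n prelude \
  -o jsonpath='{.spec.volumes[?(@.name=="vco-vol")].hostPath.path}'
```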
Now, the paths make sense from the Kubernetes pod config, but there's still something off about the path vRO gives us. The config says
/usr/lib/vco everywhere, but vRO reports
/var/lib/vco, so why is that? To find out, we'll need to exec into the container. It isn't immediately obvious from the Kubernetes config why this is necessary, but from a Linux administration standpoint it's reasonable to assume these directories are symlinked. Why just assume, though, when we can be 100% sure?
To check, we'll use
kubectl exec to get a shell in the container and see what's going on. We already have the pod name and the namespace, but
kubectl exec also accepts a container name, and if one isn't provided it uses a default, which in this case is actually the vco-polyglot-runner container. That container has neither bash installed nor any of the information we want. The container we want is vco-server-app, which we select with the -c flag:
root@computer [ ~ ]# kubectl exec --stdin --tty vco-app-5574c66f65-7pfgx -n prelude -c vco-server-app -- /bin/bash
root [ / ]# ls -al /var/lib/ | grep vco
lrwxrwxrwx 1 root root 13 Sep  8 21:53 vco -> /usr/lib/vco/
Inspecting the directory as above shows that the
/var/lib/vco directory is symlinked to the
/usr/lib/vco directory, which is why everything is mounted at /usr/lib/vco in Kubernetes while vRO reports paths under /var/lib/vco.
So now we know where our vRO configuration path lives on the host, and we can drop SSH private keys in there as needed. Putting all of the information gathered above together, the key file needs to be placed in the
/data/vco/usr/lib/vco/app-server/conf directory with the name vco_key. Of course, you can change that as you wish, using the information above to decide where you'd like to place the file, or reuse the same directory and name the file as you like. Any changes you make will need to be replicated on the other nodes in the cluster as well.
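As a sketch of that last replication step, a simple loop could push the key to every node from the one that holds it. The hostnames here are placeholders, and you'd want to confirm that ownership and permissions match what vRO expects on your appliances:

```shell
# Hypothetical node names -- replace with your actual vRA/vRO appliances.
NODES="vra-node2.example.com vra-node3.example.com"
KEY_DIR=/data/vco/usr/lib/vco/app-server/conf

for node in $NODES; do
  # Copy the key into the host-side directory backing the vco-vol volume.
  scp "$KEY_DIR/vco_key" "root@$node:$KEY_DIR/vco_key"
  # Keep the private key readable only by its owner.
  ssh "root@$node" "chmod 600 $KEY_DIR/vco_key"
done
```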