TAM Insights and K8s

One of my clients with TAM services was trying to run the TAM Insights script to provide their TAM with some data. The script can take quite a while and must be run against each vCenter. For compliance reasons, this client cannot have an idle session on the terminal servers used to access their vCenters for more than 15 minutes, which made it hard to let the script run to completion. There are plenty of easy ways around this, like mouse jigglers or things of that nature, but those mostly just skirt the compliance requirements. So I decided to build a severely over-engineered solution instead.

The script is a PowerShell script that pulls info out of each vCenter in series. That works for most vSphere shops, since they mainly run Windows and are familiar with PowerShell. As it stands today, though, the script takes no command-line arguments and has many Read-Host statements to get input from the user. That was my first issue. After parameterizing the script for some inputs and accepting defaults for others, I had a version that could be run from a command line. Mainly I just added a param line at the top:

param ($customer, $vcenter, $path, $user, $password, $outfile)

I should probably do some input validation and SecureString the password, but that wasn't my goal. I found all the Read-Host statements and commented them out, hard-coding the default values for their inputs. They were all Y/N questions; I really should have made params for them as well, but let's face it, I'm lazy. After this I needed to make sure the script would install any required modules, such as PowerCLI. This was pretty easy:

$moduleName = "VMware.PowerCLI"
try{
    # Newest installed version of the module (several can coexist side by side)
    $installedVersion = Get-InstalledModule -Name $moduleName -ErrorAction Stop |
        Sort-Object { [version]$_.Version } -Descending |
        Select-Object -ExpandProperty Version -First 1
    # Latest version published in the PowerShell Gallery
    $galleryVersion = Find-Module -Name $moduleName -ErrorAction Stop |
        Select-Object -ExpandProperty Version
    # Compare as [version] objects; comparing strings would decide that
    # "9.0" is newer than "10.0"
    If ([version]$galleryVersion -gt [version]$installedVersion){
        Install-Module $moduleName -Confirm:$false -Force
    }
}catch{
    # Get-InstalledModule throws when the module isn't installed at all
    Install-Module $moduleName -Confirm:$false -Force
}

This checks whether PowerCLI is installed and at the latest version; if not, it installs or upgrades it. There were a few other changes to remove the script's assumptions about running on Windows, plus the addition of zipping up all of the data after completion. After these changes I tested the script in a PowerShell container running on Linux. It seemed to work, so next I needed a frontend and a queuing mechanism.

I built a quick Flask web app in Python using Clarity UI elements so that it looked "VMwarey," and packaged it into a Docker image based on the official Python image.

Web UI

For task generation, a simple form takes the customer name, vCenter address, username, and password, and pushes a job onto a Redis-backed queue using python-rq. I used the stock Redis Docker image and then built another container with an rq worker connected to the queue. That container is based on the Microsoft PowerShell image; I added a tasks.py file that is imported with my tasks and changed the entrypoint to an rq worker. The tasks.py file does the actual work:

import os
import base64
from rq import get_current_job

def addTask(customer, vcenter, username, password):
    # Use the rq job id as a unique name for this run's output archive
    outfile = "/tmp/" + get_current_job().id + ".zip"
    # Run the parameterized script under PowerShell 7 inside the container.
    # Building the command by string concatenation is injection-prone; see
    # the safer variant further down.
    os.system("/opt/microsoft/powershell/7/pwsh /App/TAM_data.ps1"
              " -customer '" + customer + "' -vcenter '" + vcenter + "'"
              " -user '" + username + "' -password '" + password + "'"
              " -outfile '" + outfile + "'")
    with open(outfile, mode='rb') as file:  # b is important -> binary
        fileContent = file.read()
    # base64 so the result can be stored as a plain string in redis
    return base64.b64encode(fileContent)

The task runs the TAM_data.ps1 script with the inputs from the web interface. Once the script completes, the task base64-encodes the zip file and returns it, and that result is stored in the Redis database. Since each worker pulls one item off the queue at a time, the workers can be scaled out to run jobs in parallel. A clever person could do some escaping in those form fields and run arbitrary commands, but this is a POC/MVP and I'm not a developer.
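If I were going to close that hole, the first step would be passing an argument list to subprocess instead of building a shell string, so the shell never interprets the user-supplied values. A minimal sketch, reusing the script path and parameters from the task above (the function name is mine, not from the real code):

import subprocess

def run_script(customer, vcenter, username, password, outfile):
    # Each value is its own argv entry, so quotes, semicolons, and backticks
    # in the form fields are treated as data, not shell syntax
    subprocess.run(
        [
            "/opt/microsoft/powershell/7/pwsh",
            "/App/TAM_data.ps1",
            "-customer", customer,
            "-vcenter", vcenter,
            "-user", username,
            "-password", password,
            "-outfile", outfile,
        ],
        check=True,  # raise on a non-zero exit so rq marks the job failed
    )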

To return the results and show the status of running or completed jobs, I needed a simple page listing all jobs plus a job details page with a download link; a rough sketch of those Flask routes follows the screenshots.

Jobs page
Job Detail
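Here's roughly what the Flask side could look like, assuming a default rq queue on the taminsights-redis service and the tasks.addTask function above. The route paths, form field names, and the two-hour timeout are my guesses, not the actual app:

import base64
from flask import Flask, Response, request
from redis import Redis
from rq import Queue
from rq.job import Job
from rq.registry import FinishedJobRegistry

app = Flask(__name__)
redis_conn = Redis(host="taminsights-redis", port=6379)
queue = Queue(connection=redis_conn)

@app.route("/submit", methods=["POST"])
def submit():
    # Enqueue by dotted path so the frontend image doesn't need tasks.py itself
    job = queue.enqueue(
        "tasks.addTask",
        request.form["customer"],
        request.form["vcenter"],
        request.form["username"],
        request.form["password"],
        job_timeout=7200,  # the script can run a long time per vCenter
    )
    return f"queued job {job.id}"

@app.route("/jobs")
def jobs():
    # Queued jobs sit on the queue; completed ones move to a registry
    return {
        "queued": [j.id for j in queue.jobs],
        "finished": FinishedJobRegistry(queue=queue).get_job_ids(),
    }

@app.route("/job/<job_id>/download")
def download(job_id):
    job = Job.fetch(job_id, connection=redis_conn)
    status = job.get_status()
    if status != "finished":
        return f"job is {status}", 409
    # The worker returned the zip base64-encoded; decode and hand it back
    data = base64.b64decode(job.result)
    return Response(
        data,
        mimetype="application/zip",
        headers={"Content-Disposition": f"attachment; filename={job_id}.zip"},
    )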

After getting it all running with Docker, I built some YAMLs to deploy to K8s: a namespace, a Redis deployment with a ClusterIP service, frontend and worker deployments, and a LoadBalancer service in front of the web servers. The combined YAML looks like this:

apiVersion: v1
kind: Namespace
metadata:
  name: tamdata
---
apiVersion: v1
kind: Service
metadata:
  name: tamingress
  namespace: tamdata
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8000
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: webservers
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: taminsights-redis
  namespace: tamdata
spec:
  selector:
    app: taminsights-redis
  ports:
    - name: taminsights-redis
      port: 6379
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taminsights
  namespace: tamdata
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webservers
  template:
    metadata:
      labels:
        app: webservers
    spec:
      containers:
      - image: localhost:32000/tamfrontend
        name: frontend
        ports:
        - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taminsights-redis
  namespace: tamdata
spec:
  replicas: 1
  selector:
    matchLabels:
      app: taminsights-redis
  template:
    metadata:
      labels:
        app: taminsights-redis
    spec:
      containers:
      - image: redis
        name: taminsights-redis
        ports:
        - containerPort: 6379
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taminsightsworker
  namespace: tamdata
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - image: localhost:32000/tamworker
        name: worker

Still left to do: build TLS support into the frontend, using K8s secrets to pass a certificate to gunicorn, and secure the Redis traffic. Unfortunately there is so much internal code that there will be no GitHub link yet again. Sorry everyone. Feel free to reach out if you want more details.
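For the gunicorn half of that, a minimal sketch of what the config might look like, assuming the kubernetes.io/tls secret gets mounted into the frontend pod at /certs (the paths and worker count are assumptions, not the real setup):

# gunicorn.conf.py - serve the Flask app over TLS using a cert mounted
# from a Kubernetes secret
bind = "0.0.0.0:8000"
certfile = "/certs/tls.crt"  # tls.crt key of the mounted kubernetes.io/tls secret
keyfile = "/certs/tls.key"
workers = 2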

And there you have it: from one PowerShell script to three containers and a load balancer. It runs as a service now, but it's probably a little much just to collect some data.
