Now I know what you’re thinking: “Kenyon, that’s easy, just V2V it.” Or “That’s not supported, so don’t do it.” For those of you who don’t know me, a story is in order first.
I was building a screened porch out of the tiny front porch at my old house. We’re talking something like 8ft x 6ft. So tiny. I decided to buy a couple of rolls of screen and make them into roll-up curtains for the porch. I sewed a zipper into them for a door and mounted them behind the “beams” on my front porch, so you couldn’t see them while they were rolled up. Now I could sit outside with my wife and not get eaten alive by mosquitoes.
So why this story? Well, when I finished, my neighbor came out and said, “Some people ask why.” Well, I ask why not. So here we are.
The idea was not to just convert an existing vCenter to a non-VMware VM, but to take a pre-installation vCenter VM and then configure it in place. If you’ve installed vCenter, you’ll know the install ISO includes both a GUI and a CLI installer. If you’ve dug around in the ISO, you may also have noticed that it contains an OVA of the appliance. During either install method, vCenter is configured via a bunch of parameters that are passed into the OVA as vApp options. You can also deploy the OVA manually and supply the appropriate configuration options yourself. A few other things stand out if you dig deeper into the OVA: it contains only three disks, while a deployed vCenter has 16. It turns out the others are all blank disks created during OVA deployment, sized according to the vCenter deployment size you select. Of the three disks in the OVA, the first is a regular disk, the second is an ISO masquerading as a VMDK, and the third is a swap disk.
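Just to make that concrete, a manual deployment with ovftool looks something like this. Treat it as a sketch: the guestinfo.cis.* property names, the deployment option ID, and the host/datastore/network names here are illustrative and may vary by version.

ovftool --acceptAllEulas --name=vcsa01 --deploymentOption=tiny \
  --datastore=datastore1 --net:"Network 1"="VM Network" \
  --prop:guestinfo.cis.appliance.net.addr.family=ipv4 \
  --prop:guestinfo.cis.appliance.root.passwd='*****' \
  --prop:guestinfo.cis.vmdir.password='*****' \
  VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10.ova \
  vi://root@esxi.example.com/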
After digging through a deployed appliance I found the first boot scripts. They do a bunch of things like set up the remaining disks, configure the network, and generate SSH keys. Normal stuff. The more interesting part is the phase 2 install: the scripts mount /dev/sdb and run install.sh, which performs the phase 2 deployment.
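Paraphrased (this is my reading of the scripts, not a verbatim copy), the phase 2 kickoff boils down to:

mount /dev/sdb /mnt
/mnt/install.sh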
There is another script install.sh calls to get the installation parameters: install-parameters. This is a Python script that retrieves the values needed for the installation. You call the script with the name of the parameter as the first argument and a default value as the second. After digging through this script I found that it gets the values in one of three ways, tried in this order (a rough sketch follows the list):
1) From a mount point, with one file per option (I think; I didn’t dig into this one very much).
2) From a settings.json file in /var/install
3) From the OVF environment
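Here’s a rough bash equivalent of that lookup order, just to illustrate the fallbacks. The per-option file path and the OVF-environment parsing are my reconstruction, not the actual VMware code:

get_param() {
  local name=$1 default=$2
  if [ -f "/var/install/params/$name" ]; then
    # 1) one file per option on a mount point (path is a guess)
    cat "/var/install/params/$name"
  elif [ -f /var/install/settings.json ]; then
    # 2) settings.json in /var/install
    python3 -c 'import json,sys; print(json.load(open("/var/install/settings.json")).get(sys.argv[1], sys.argv[2]))' "$name" "$default"
  else
    # 3) the OVF environment exposed through VMware Tools
    vmtoolsd --cmd 'info-get guestinfo.ovfEnv' \
      | sed -n "s/.*oe:key=\"$name\" oe:value=\"\([^\"]*\)\".*/\1/p"
  fi
}
# e.g. get_param vmdir.domain-name vsphere.local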
For my purposes, settings.json seemed like the easiest way to supply the values I needed. After going through many installs I believed only a few values actually had to be set, so I set these:
{
"appliance.root.passwd":"*********",
"vmdir.password":"*******",
"vmdir.domain-name":"vsphere.local",
"vmdir.username":"administrator@vsphere.local",
"appliance.net.addr.family":"ipv4"
}
To build a set of disks in VHD format for Azure, I wrote a small bash script. It requires qemu-img and qemu-nbd to be installed (both come with the qemu-utils package).
apt install qemu-utils
modprobe nbd max_part=16
cp /home/azadmin/VMware-VCSA-all-8.0.2-22617221.iso /mnt
cd /mnt
7z x VMware-VCSA-all-8.0.2-22617221.iso
cd vcsa
tar -xf VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10.ova
qemu-img convert -f vmdk -O raw VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk2.vmdk VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk2.iso
qemu-img convert -f vmdk -O raw VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.vmdk VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.raw
qemu-nbd -f raw -c /dev/nbd0 VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.raw
pvscan
lvscan
mkdir lv_root
mount /dev/vg_root_0/lv_root_0 ./lv_root
7z x VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk2.iso -o./lv_root/mnt/
chmod a+x ./lv_root/mnt/*
mv ./lv_root/mnt/02760595a3be651162239f886f99870a2b6d33845962bec0777fbb154aeb4254 ./lv_root/mnt/02760595a3be651162239f886f99870a2b6d33845962bec0777fbb154aeb4254.blob
mv ./lv_root/mnt/0e28b64789b28ec9b31855b43de832be8459b915032f74cfcc136971ed1902d9 ./lv_root/mnt/0e28b64789b28ec9b31855b43de832be8459b915032f74cfcc136971ed1902d9.blob
mv ./lv_root/mnt/0e691eddbd010fc98a614b09de5952ffe32b068ff62e6fa28ef62c6e4fe21064 ./lv_root/mnt/0e691eddbd010fc98a614b09de5952ffe32b068ff62e6fa28ef62c6e4fe21064.blob
mv ./lv_root/mnt/10f5912fc33d63b003dcf776dd958ca1b37415154dd8540931406e7189e121b8 ./lv_root/mnt/10f5912fc33d63b003dcf776dd958ca1b37415154dd8540931406e7189e121b8.blob
mv ./lv_root/mnt/252ed77e25eb5db98c02f2ea64ed0a364b3b71b85921904b617123afc0ece336 ./lv_root/mnt/252ed77e25eb5db98c02f2ea64ed0a364b3b71b85921904b617123afc0ece336.blob
mv ./lv_root/mnt/2bc5bf7c4be9c435a966f817e1d6ccec2d855d27ff5ca498842355342dacc3b6 ./lv_root/mnt/2bc5bf7c4be9c435a966f817e1d6ccec2d855d27ff5ca498842355342dacc3b6.blob
mv ./lv_root/mnt/44f609b532b71faacc7f24d170a473ba058ec26fa76c89a4c9921a568272d6b3 ./lv_root/mnt/44f609b532b71faacc7f24d170a473ba058ec26fa76c89a4c9921a568272d6b3.blob
mv ./lv_root/mnt/5fcc04f97ad5be20b7b7bf185be184b673cca5a65feacbd95a61c6a4aabebb57 ./lv_root/mnt/5fcc04f97ad5be20b7b7bf185be184b673cca5a65feacbd95a61c6a4aabebb57.blob
mv ./lv_root/mnt/62bf2b1a86fe2f0f06ce1f43af3a126f527411c54d4a5fab211f9a00a610eab2 ./lv_root/mnt/62bf2b1a86fe2f0f06ce1f43af3a126f527411c54d4a5fab211f9a00a610eab2.blob
mv ./lv_root/mnt/68aae88dc0dd659b0969b6519ff0183cb63d56fa836525c82b2fbed93d480f50 ./lv_root/mnt/68aae88dc0dd659b0969b6519ff0183cb63d56fa836525c82b2fbed93d480f50.blob
mv ./lv_root/mnt/736a0486f90cf8287bfb97d8b203c12e714564549ae1add0dffaf025be725403 ./lv_root/mnt/736a0486f90cf8287bfb97d8b203c12e714564549ae1add0dffaf025be725403.blob
mv ./lv_root/mnt/7f9e5468c51da73276f5a0090ce59b6ea701cd2ee25492b347e72403e833a2a9 ./lv_root/mnt/7f9e5468c51da73276f5a0090ce59b6ea701cd2ee25492b347e72403e833a2a9.blob
mv ./lv_root/mnt/8627ad104345fb6b483e5cd8ae6ff14f8391820d88465821e319dfcf85c4e8e4 ./lv_root/mnt/8627ad104345fb6b483e5cd8ae6ff14f8391820d88465821e319dfcf85c4e8e4.blob
mv ./lv_root/mnt/978d170f6424a41f333212023bd38594390b08cef6247224d81d4bbe194f5a0f ./lv_root/mnt/978d170f6424a41f333212023bd38594390b08cef6247224d81d4bbe194f5a0f.blob
mv ./lv_root/mnt/ae84d08534f0fa8a3cf37c9cdab37826bb5523efd0d14bd90ac6f79152439523 ./lv_root/mnt/ae84d08534f0fa8a3cf37c9cdab37826bb5523efd0d14bd90ac6f79152439523.blob
mv ./lv_root/mnt/bed2cb0bd77bb37ae6d5f290123783073500e16aba3bdf2400413da00a568a80 ./lv_root/mnt/bed2cb0bd77bb37ae6d5f290123783073500e16aba3bdf2400413da00a568a80.blob
mv ./lv_root/mnt/c3085f889484b732121e48546c98236635c633eccf1ff11559a4b54f01e2b176 ./lv_root/mnt/c3085f889484b732121e48546c98236635c633eccf1ff11559a4b54f01e2b176.blob
mv ./lv_root/mnt/cf35d7a5c1aedec3282ce4798a8adda5c7d0f2afb30a86892d5435a9630263e0 ./lv_root/mnt/cf35d7a5c1aedec3282ce4798a8adda5c7d0f2afb30a86892d5435a9630263e0.blob
mv ./lv_root/mnt/dc22af3430c12fb1be65dbf1f39c1529a36f849e8e9ab9f437f83f760df5bf7d ./lv_root/mnt/dc22af3430c12fb1be65dbf1f39c1529a36f849e8e9ab9f437f83f760df5bf7d.blob
mv ./lv_root/mnt/e2b65a8309aa491b8bcb135b48abe568c51e0091c5d36ef27e062faa426c396c ./lv_root/mnt/e2b65a8309aa491b8bcb135b48abe568c51e0091c5d36ef27e062faa426c396c.blob
mv ./lv_root/mnt/e326577ff8b29da3a04af55ce42e31810bf11eac87dc114d374b370884e5fe52 ./lv_root/mnt/e326577ff8b29da3a04af55ce42e31810bf11eac87dc114d374b370884e5fe52.blob
mv ./lv_root/mnt/ee16fa22224a247d2f8c3209d808e365992cea329140189278e4b84fcef62eee ./lv_root/mnt/ee16fa22224a247d2f8c3209d808e365992cea329140189278e4b84fcef62eee.blob
mv ./lv_root/mnt/fdf43d9d05c4fa7ad8b3bef463edf9a501ed22e1a390eb9d5684cfcc5b04a6e0 ./lv_root/mnt/fdf43d9d05c4fa7ad8b3bef463edf9a501ed22e1a390eb9d5684cfcc5b04a6e0.blob
mkdir ./lv_root/var/install
cat << EOF > ./lv_root/var/install/settings.json
{ "appliance.root.passwd":"*****",
"vmdir.password":"*****",
"vmdir.domain-name":"vsphere.local",
"vmdir.username":"administrator@vsphere.local",
"appliance.net.addr.family":"ipv4",
"vm.vmname":"VMware-vCenter-Server-Appliance",
"deployment.autoconfig":"False",
"desired.state":"{}",
"vmdir.first-instance":"True",
"upgrade.import.directory":"/storage/seat/cis-export-folder",
"db.type":"embedded",
"fips.enabled":"False",
"vpxd.mac-allocation-scheme.prefix-length":"0",
"deployment.node.type":"embedded",
"netdump.enabled":"True",
"silentinstall":"False",
"appliance.net.ports":"{}",
"appliance.ssh.enabled":"True",
"upgrade.source.export.directory":"/var/tmp",
"vmdir.site-name":"Default-First-Site",
"env.classification.level":"none",
"system.vm0.port":"443",
"ceip_enabled":"False",
"upgrade.source.platform":"linux",
"hadcs.enabled":"True",
"clientlocale":"en",
"vpxd.ha.management.port":"443",
"lookup_timeout":"900",
"upgrade.source.ma.port":"9123",
"appliance.time.tools-sync":"False"
}
EOF
cat << EOF > ./lv_root/etc/systemd/network/99-eth.network
[Match]
Name=e*

[Network]
DHCP=yes
EOF
echo VMWARE_PYTHON_PATH=/usr/lib/vmware/site-packages >> ./lv_root/etc/environment
mkdir -p ./lv_root/tmp
cd ./lv_root/tmp
git clone https://github.com/Azure/WALinuxAgent
cd ../..
chroot ./lv_root /bin/bash <<"EOT"
echo "root:*****" | chpasswd
echo "admin:*****" | chpasswd
systemctl enable sshd
cd /tmp/WALinuxAgent
python setup.py install
systemctl enable waagent
EOT
umount ./lv_root
qemu-nbd -d /dev/nbd0
MB=$((1024*1024))
size=$(qemu-img info -f raw --output json VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.raw | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$(((($size+$MB-1)/$MB)*$MB))
qemu-img resize -f raw VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.raw $rounded_size
qemu-img convert -f raw -O vpc -o subformat=fixed,force_size VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.raw VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.vhd
rm VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.raw
qemu-img convert -f vmdk -O raw VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk3.vmdk VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk3.raw
MB=$((1024*1024))
size=$(qemu-img info -f raw --output json VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk3.raw | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$(((($size+$MB-1)/$MB)*$MB))
qemu-img resize -f raw VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk3.raw $rounded_size
qemu-img convert -f raw -O vpc -o subformat=fixed,force_size VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk3.raw VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk3.vhd
rm VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk3.raw
This will leave you with two VHD files that need to be uploaded to a blob store. You could of course convert them to any other disk type and use them elsewhere.
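If you’re using the Azure CLI, the upload looks something like this; the storage account and container names are placeholders, and the VHDs have to go up as page blobs:

az storage blob upload --account-name <storage_account> --container-name vhds --name vcsa-disk1.vhd --file VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk1.vhd --type page
az storage blob upload --account-name <storage_account> --container-name vhds --name vcsa-disk3.vhd --file VMware-vCenter-Server-Appliance-8.0.2.00100-22617221_OVF10-disk3.vhd --type page

After that, you need to create a new VM in Azure: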
az account set --subscription <subscription_id>
virtualMachineName=<new_vm_name>
resourceGroupName=<resource_group_name>
vnetName=<vnet_name>
subnetName=<subnet_name>
osType=linux
osdisk=<url for disk 1>
swapdisk=<url for disk 2>
location=<azure_region>
subId=<subscription_id>
az disk create --resource-group $resourceGroupName --name vc_0_blob --location $location --size-gb 201 --source $osdisk
az disk create --resource-group $resourceGroupName --name vc_1_blob --location $location --size-gb 201 --source $swapdisk
az disk create --resource-group $resourceGroupName --name vc_2_blob --location $location --size-gb 25
az disk create --resource-group $resourceGroupName --name vc_3_blob --location $location --size-gb 25
az disk create --resource-group $resourceGroupName --name vc_4_blob --location $location --size-gb 10
az disk create --resource-group $resourceGroupName --name vc_5_blob --location $location --size-gb 10
az disk create --resource-group $resourceGroupName --name vc_6_blob --location $location --size-gb 15
az disk create --resource-group $resourceGroupName --name vc_7_blob --location $location --size-gb 10
az disk create --resource-group $resourceGroupName --name vc_8_blob --location $location --size-gb 1
az disk create --resource-group $resourceGroupName --name vc_9_blob --location $location --size-gb 10
az disk create --resource-group $resourceGroupName --name vc_10_blob --location $location --size-gb 10
az disk create --resource-group $resourceGroupName --name vc_11_blob --location $location --size-gb 100
az disk create --resource-group $resourceGroupName --name vc_12_blob --location $location --size-gb 50
az disk create --resource-group $resourceGroupName --name vc_13_blob --location $location --size-gb 10
az disk create --resource-group $resourceGroupName --name vc_14_blob --location $location --size-gb 5
az disk create --resource-group $resourceGroupName --name vc_15_blob --location $location --size-gb 100
az disk create --resource-group $resourceGroupName --name vc_16_blob --location $location --size-gb 150
managedDiskId0=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_0_blob
managedDiskId1=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_1_blob
managedDiskId2=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_2_blob
managedDiskId3=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_3_blob
managedDiskId4=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_4_blob
managedDiskId5=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_5_blob
managedDiskId6=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_6_blob
managedDiskId7=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_7_blob
managedDiskId8=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_8_blob
managedDiskId9=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_9_blob
managedDiskId10=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_10_blob
managedDiskId11=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_11_blob
managedDiskId12=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_12_blob
managedDiskId13=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_13_blob
managedDiskId14=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_14_blob
managedDiskId15=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_15_blob
managedDiskId16=/subscriptions/$subId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/disks/vc_16_blob
az vm create --name $virtualMachineName --resource-group $resourceGroupName --attach-os-disk $managedDiskId0 --os-type $osType --attach-data-disks $managedDiskId1 $managedDiskId2 $managedDiskId3 $managedDiskId4 $managedDiskId5 $managedDiskId6 $managedDiskId7 $managedDiskId8 $managedDiskId9 $managedDiskId10 $managedDiskId11 $managedDiskId12 $managedDiskId13 $managedDiskId14 $managedDiskId15 $managedDiskId16 --size Standard_D16as_v5 --vnet-name $vnetName --subnet $subnetName --public-ip-address ""
I chose a Standard_D16as_v5 VM because it supports the required number of data disks and has enough RAM. It may not be the ideal size, but it worked for me. After boot-up I SSHed into the machine, changed to the /mnt directory, and ran install.sh; I believe this performs the phase 1 install. Then log out and log back in. This time you will see the vCenter Server appliance shell. Switch to bash, cd back to /mnt, and run install.sh again. This performs the phase 2 deployment. In a few minutes you’ll have a fully functioning vCenter Server running in an Azure VM.
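Condensed, the post-boot sequence is just this (the shell command for dropping from the appliance shell to bash is from memory):

cd /mnt
./install.sh    # phase 1; log out when it finishes
# SSH back in; this time you land in the appliance shell
shell           # switch to bash
cd /mnt
./install.sh    # phase 2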