Install Citrix Cloud Connector on Server Core 2016

Scope

This post will provide some quick notes on installing the Citrix Cloud Connector on Server Core 2016.

Conceptual overview

  • We’re going to take our domain-joined Server Core installation and install the Citrix Cloud Connector on to it.
  • You can’t simply run the interactive installer on Server Core, because Server Core doesn’t have all the components the Connector wizard needs.
  • To work around this, we’ll create an API Access secure client in the Citrix Cloud admin UI and use it to install the connector silently from the command line.

Pre-requisites and considerations

  • It’s assumed you have installed Server Core 2016 and joined it to the domain.
  • I believe you’ll need an API access secure client entry for each connector you’re setting up. Happy to be corrected on this, but it feels like that’s the best way to go about it.
  • The API access key is tied to the Citrix Administrator who created it. If that Administrator’s access is later revoked, or their permissions change, the API keys will stop working. More on this here.
  • As of right now (September 2018), installing the Cloud Connector on Server Core is not supported – however, the team is aware of the appetite for this and has a workstream open to do some testing with all the components.

Steps

Create an API Access secure client entry for the connector

Go to https://citrix.cloud.com > Identity and Access Management > API Access tab

Enter a descriptive name for your Server Core VM in the “Name your Secure Client” box and click Create Client – I typically use the VM hostname so it’s easy to track which connectors are using which credentials. If you want to add more contextual info, do so; the field isn’t tied to the actual VM name at all.

Store the ID and the Secret you’re given somewhere secure – you’ll never be shown the Secret again, so treat it like a password.
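
One option for storing it (not part of the original workflow, just a suggestion) is to wrap the ID and Secret in a PSCredential and export it with Export-Clixml, which protects the password with DPAPI so only your user account on that machine can read it back. The path below is just an example:

# Prompt for the values: enter the secure client ID as the username and the Secret as the password
$cred = Get-Credential -Message "Citrix Cloud secure client"

# Export-Clixml encrypts the password with DPAPI (readable only by this user on this machine)
$cred | Export-Clixml -Path C:\Secure\cwc-secure-client.xml

# Later, read the values back out
$cred = Import-Clixml -Path C:\Secure\cwc-secure-client.xml
$clientId = $cred.UserName
$clientSecret = $cred.GetNetworkCredential().Password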

Gather required information

To install the connector from the command line you’ll need the following information:

  • Citrix Cloud Customer ID
    • You’re told this just before you make the API access credentials, when entering a secure client name.
  • API Access secure client ID
    • You’re told this when you make the API access credentials
  • API Access secure client Secret
    • You’re told this when you make the API access credentials
  • The ID of the Resource Location you’re installing the connector into.
    • This is the UUID of the Resource Location, not its friendly name. You’ll find it under Resource Locations – click “ID” on the Resource Location to view it.

Download the Connector onto the Server Core VM

Log in to the Server Core VM and run the following, replacing “yourcustomeridhere” with your Customer ID:
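
The download command itself isn’t reproduced above. A PowerShell sketch of what it should look like is below – the downloads.cloud.com URL format is what Citrix documents for the connector, but verify it against the download link shown in your Citrix Cloud console, and treat the local folder as an example:

# Create a working folder and download cwcconnector.exe (replace yourcustomeridhere with your Customer ID)
New-Item -ItemType Directory -Path C:\Install -Force | Out-Null
# -UseBasicParsing avoids the Internet Explorer dependency, which isn't present on Server Core
Invoke-WebRequest -Uri "https://downloads.cloud.com/yourcustomeridhere/connector/cwcconnector.exe" -OutFile "C:\Install\cwcconnector.exe" -UseBasicParsing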

Install the connector silently

Now, from the same command line, build your silent install command, replacing yourcustomeridhere, yourclientid, yourclientsecret, and yourresourcelocationid with the information you gathered earlier, and run it:
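
The exact command isn’t shown above; based on the command-line parameters Citrix documents for cwcconnector.exe, it should look something like the sketch below (run from PowerShell, and check the switch names against cwcconnector.exe /? before running in case they’ve changed):

# Silent install of the Cloud Connector using the values gathered earlier
C:\Install\cwcconnector.exe /q /Customer:yourcustomeridhere /ClientId:yourclientid /ClientSecret:yourclientsecret /ResourceLocationId:yourresourcelocationid /AcceptTermsOfService:true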

That’s it. The installer won’t give you any confirmation that it worked, so you’ll need to check via Citrix Cloud.

Check your Resource Location to verify connectivity

Go back to the Citrix Cloud UI and check your Resource Locations to verify that the connector is being set up. It can take a few minutes to complete.
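
As an extra local sanity check (not something the original steps call for), you can also confirm on the Server Core VM that the connector’s Windows services were installed and are running – their display names all start with “Citrix”:

# List the Citrix Cloud Connector services and their current state
Get-Service -DisplayName "Citrix*" | Format-Table DisplayName, Status -AutoSize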

Uninstalling the Cloud Connector

Should you need to uninstall the Cloud Connector from Server Core, you can run:
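
The command itself isn’t reproduced here. My best guess at what’s being referred to is an /uninstall switch on the installer, roughly as below – treat the switch as an assumption and test it on a non-production connector before relying on it:

# Undocumented switch - verify the behaviour before using it in anger
C:\Install\cwcconnector.exe /uninstall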

It looks like this isn’t documented (it’s not mentioned if you run the installer with /?), but it does work.

Further reading

 

Installing EMC Isilon Virtual Nodes onto VMware ESXi

Background

Isilon, if you’re not aware of it, is a clustered Scale-out NAS solution from EMC. The article below covers setting up a virtual Isilon cluster suitable for experimenting and testing out Isilon in a VMware ESXi-based lab environment.

Scenario and scope

  • Create a 3-node virtual EMC Isilon storage cluster on VMware ESXi.
  • This guide is written for ESXi Standalone (Free edition) in my home lab, but should be applicable to a “real” Virtual Infrastructure setup with vCenter, too.

How to increase the usable disk size in the NetApp ONTAP 8.x 7-mode Simulator

Background

If you or your company own a NetApp Storage System, you can access and download the NetApp Simulator – it’s basically a NetApp filer in a Virtual Machine. Very cool, and great for learning and experimenting in a lab.

Why increase the disk size?

The Simulator, by default, presents you with 28 x 1GB disks, giving you a mere 28GB raw disk space.

While this is fine for learning, I wanted to use it as the main SAN provider in my virtualization home lab, where I plan to run an entire lab – including ESXi hosts, a SAN appliance and networking – inside a single small form factor computer.

Why not use Nexenta, or Openfiler, or FreeNAS?

Two reasons:

  1. I’m used to NetApp systems, and want to learn more
  2. I can use NetApp’s Deduplication and Compression features to squeeze much more out of the limited SSD space inside my Gigabyte Brix.

I tried Nexenta, but found it really slow and it also randomly hogged an entire CPU core for seemingly no reason.

Standing on the shoulders of giants

All of the information below is available and possible thanks to this post from Vidad Cosonok, which covers how to create new, larger disks in Cluster-Mode. The commands are different for 7-mode; hence this post. All credit, however, should rest with Vidad Cosonok 🙂

I believe the method below is much easier and simpler than some of the alternatives: one involves adding another 28 disks (which still isn’t enough), and another involves tweaking partitions with a FreeBSD boot disk.

Increasing the size of the disks in the NetApp simulator

Following Vidad’s instructions, it’s possible to create a NetApp simulator with up to 400GB of raw disk space. Pretty impressive! In the example below I only want a maximum of 224GB, so I’m choosing to use 4GB disks rather than 9GB disks (4 adapters × 14 disks × ~4GB ≈ 224GB). Note that going over 224GB can be a pain in VMware ESXi, as you’ll need to increase the size of an IDE virtual disk, which is non-trivial in ESXi.

Setup the Simulator

  1. Download and extract the ONTAP 7-mode NetApp Simulator (I went for 8.2, and this is tested working on 8.2).
  2. Follow the setup guide to install it into your VMware environment.
    • I installed it into ESXi using the VMware Converter to ensure that the VMDKs were thin provisioned as they went into the Brix’s local SSD datastore
  3. Boot the Virtual Machine
  4. Press Ctrl+C when prompted and choose option 4, to wipe the config and initialise the disks.
  5. Once done, complete the first time setup (where you give it a hostname, IP, etc.)

Delete the disks and re-create new ones

Once you’ve configured the Simulator, follow these steps. I’d advise doing this from an SSH client, so you can copy/paste easily, rather than using the VMware console.

priv set advanced
useradmin diaguser unlock
useradmin diaguser password
systemshell
(login as diag, with password you just set)

setenv PATH "${PATH}:/usr/sbin"
echo $PATH
cd /sim/dev/,disks
ls
sudo rm v0*
sudo rm v1*
sudo rm ,reservations
cd /sim/dev
vsim_makedisks -h

Make a note of the options available for -t (type), as you may wish to deviate from what I’m doing.

To make ~224GB usable disks:

sudo vsim_makedisks -n 14 -t 31 -a 0
sudo vsim_makedisks -n 14 -t 31 -a 1
sudo vsim_makedisks -n 14 -t 31 -a 2
sudo vsim_makedisks -n 14 -t 31 -a 3

To make ~550GB of usable disks:

sudo vsim_makedisks -n 14 -t 36 -a 0
sudo vsim_makedisks -n 14 -t 36 -a 1
sudo vsim_makedisks -n 14 -t 36 -a 2
sudo vsim_makedisks -n 14 -t 36 -a 3

Check they’re present, then exit and halt.

ls ,disks/
exit
halt

Configuring the new disks for use

  1. Power off the Simulator VM
  2. If you made your total disks larger than 224GB, you need to edit the VM settings and make Hard Drive 4 550GB (rather than 250GB). If you’re doing this in vCenter/ESXi it can be tricky, as the hard drive is IDE rather than SCSI – see the vmkfstools note after this list.
  3. Power on the Simulator VM
  4. Press Ctrl-C for Boot Menu when prompted
  5. Enter selection 5 ‘Maintenance mode boot’
  6. Assign 3 disks for the dedicated root aggregate, and halt

    disk assign v4.16 v4.17 v4.18
    disk show
    halt

  7. Power-cycle the Simulator
  8. Press Ctrl-C for Boot Menu when prompted
  9. Enter selection 4 ‘Clean configuration and initialize all disks’ and answer ‘y’ to the two prompts
  10. Wait for the wipe/initialise to complete, then re-do the setup if needed.
  11. Log in to the system, and assign the disks, so they’re usable:

    disk assign all

  12. Now you can create a new aggregate/volume/qtree/LUN etc and use the Simulator to its full potential 🙂
  13. At this stage, you may want to add the Licences, which include iSCSI and NFS. You can’t use iSCSI or NFS without installing the license (and for iSCSI, enabling and starting the service).
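
On the IDE-disk point from step 2: the vSphere/ESXi UI generally won’t let you grow an IDE virtual disk, but you can extend the underlying VMDK from the ESXi shell with vmkfstools while the simulator VM is powered off. The datastore path and VMDK filename below are placeholders – substitute the actual descriptor file backing Hard Drive 4 of your simulator VM:

# Run over SSH on the ESXi host, with the simulator VM powered off.
# Extend the VMDK backing Hard Drive 4 to 550GB (example path/filename - adjust to your environment).
vmkfstools -X 550G /vmfs/volumes/datastore1/NetApp-Simulator/DataONTAP-sim_3.vmdk

# Confirm the new size of the flat file
ls -l /vmfs/volumes/datastore1/NetApp-Simulator/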

Upgrade VMware ESXi 5.1 to ESXi 5.5 on a Gigabyte Brix using esxcli

Background

Between ESXi 5.1 and ESXi 5.5, VMware removed the bundled driver for the Realtek RTL8111E NIC, which is embedded in the Gigabyte Brix.

This means that if you boot a standard ESXi 5.5 installer, it won’t find a usable NIC and will refuse to install or upgrade your existing ESXi 5.1 instance.
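
In outline, the esxcli route works (as I understand it) because esxcli software profile update only updates or adds VIBs from the new image profile and leaves other VIBs already on the host in place – so the Realtek driver that shipped with 5.1 survives the upgrade. A rough sketch is below; the offline bundle filename and profile name are placeholders, so list the profiles in your bundle first and put the host in maintenance mode before updating:

# See which image profiles the 5.5 offline bundle contains (bundle path is a placeholder)
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-5.5.0-offline-bundle.zip

# Apply the standard profile from that list (profile name is a placeholder - use one listed above)
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-5.5.0-offline-bundle.zip -p ESXi-5.5.0-1331820-standard

# Reboot to finish the upgrade
reboot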

Installing VMware ESXi 5.5 on the Gigabyte Brix

Scope

This article covers the steps required to install VMware ESXi 5.5 on the Gigabyte Brix, and on a few other systems whose NICs worked in ESXi 5.1 but are unsupported in 5.5.

(Image: the Gigabyte Brix running ESXi 5.5)