One way to implement global variables in Terraform

Anyone who has developed a highly modular HashiCorp Terraform project has wished at some point that there was a simple way to implement global variables.  This feature will probably not be developed though.  As Paul Hinze of HashiCorp stated:

In general we’re against the particular solution of Global Variables, since it makes the input -> resources -> output flow of Modules less explicit, and explicitness is a core design goal.

I am an advocate of the Don’t Repeat Yourself (DRY) software development principle, and simplicity & ease of use for the end user are key goals for my open source VMUG Labs project, so I started brainstorming a better way to solve this: a solution where you do not have to remember to specify a shared tfvars file, use a script that specifies the shared tfvars file or run-line variables for you, group resources that you want segregated, reference the value from the state file of one of the other modules, or fall back to a default value for the variable when none of those make sense.

Eventually, it dawned on me: create a global variables module with static outputs and reference the module in your Terraform resource configuration files.  The optimal implementation is to store the module in a private source control repository, because remote modules can reference specific versions.  This can also be implemented via a local module, which was the best fit for my VMUG Labs project, since it is open source (public) and I wanted to minimize the number of components necessary for end users.
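For example, a versioned remote module reference might look like the following sketch (the repository URL is hypothetical):

```hcl
module "global_variables" {
  # Hypothetical private repository; pin to a release tag with the ref parameter
  source = "git::https://example.com/your-org/terraform-global-variables.git?ref=v1.0.0"
}
```

A local module uses a relative path instead, as shown later in this post, which trades version pinning for fewer moving parts.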

At first, I thought that I was the first to think of this solution (and document it publicly), but then I found that @keirhbadger beat me to it by almost a year.  Oh well.

Here are a couple of example outputs from the outputs.tf.example file in the local global_variables module in the VMUG Labs project.  You would make your own copy of this file (which will be excluded per the .gitignore file) and set the values appropriately.

output "aws_profile" {
  value = "aws_profile"
}

output "aws_region" {
  value = "us-west-2"
}

I can then use the static output values in the global_variables module to set input parameters in segregated resources with separate state files.  For example, the domain_controller and jumphost Terraform config files:

module "global_variables" {
  source="../../../modules/global_variables"
}

provider "aws" {
  version="~> 1.13"
  profile="${module.global_variables.aws_profile}"
  region="${module.global_variables.aws_region}"
}

This solution can be a more intuitive approach for end users who may be less experienced with Terraform.

Enjoy!


Using Terraform to deploy Nested ESXi hosts in your VMware Cloud on AWS SDDC (or home lab!)

with the Virtually Ghetto vSphere 6.5 U1 ESXi virtual appliance

As I mentioned in the previous post, I recently started a new open source VMUG project with some new friends, with the goal of automatically provisioning the necessary virtual infrastructure for VMware-oriented hackathons on the VMware Cloud on AWS (VMC) platform, using HashiCorp Terraform and Gruntwork Terragrunt, in a simple and cost-effective manner.  The project is called VMUG Labs, and the source code can be found in the GitHub repository.

As indicated by the title of this post, you would only need to make a few tweaks to make this work in your home lab, work lab, or any vSphere environment where you want to deploy nested ESXi hosts, because VMC can use the standard Terraform VMware vSphere provider for provisioning VMs in your SDDC.  I’ll cover how to use Terraform to deploy nested ESXi hosts in your lab in more detail in the next post.

Once I had provisioned a few firewall rules and a logical network in my SDDC, the first VM that I deployed in VMC (manually) was William Lam’s vSphere ESXi 6.5 U1 virtual appliance to use as my template VM for Terraform.

For those who are unfamiliar, his virtual appliance image facilitates provisioning automation by allowing the user to supply operating system configuration values, such as hostname, IP address, et cetera, via vApp properties at provision time to achieve guest customization of the ESXi host VM (aka: nested hypervisor).

Next, you must set the debug vApp property in your VM template to be user configurable, because the Terraform vSphere provider v1.3.3 and earlier does not support deploying from a VM or VM template that has any vApp property that is not user configurable; such attempts will fail, as documented in issue #394.  I made a few minor adjustments to a copy of David Hekimian’s PowerCLI script for doing so, and the source code can be found here.  Run it in PowerShell against the nested ESXi VM that you plan to use as a template (eg: ./Enable-VmVappProperties.ps1 -Name 'Nested_ESXi6.5u1_Appliance_Template_v1.0').

It is important to note that you are restricted from enabling the promiscuous mode and forged transmits distributed port group security settings in VMC, so VMs running on nested hypervisors will be isolated (no network connectivity beyond the hypervisor).  This constraint was not unexpected, but it does somewhat limit the possibilities of what we can build.  After much deliberation, our team concluded that most attendees of VMware-oriented hackathons would primarily want to interact with the virtual infrastructure, so we accepted this limitation for the project.  This constraint should not apply to your lab environments though.

Next, all VMs must be deployed under specific virtual infrastructure objects in your VMC SDDC: specifically, the Compute-ResourcePool resource pool, the Workloads VM folder, and the WorkloadDatastore datastore, all of which are found under the SDDC-Datacenter datacenter object.

To configure Terraform to provision your first nested ESXi host, you need to:

  1. Create a Terraform configuration file, such as ‘main.tf’.
  2. Configure the provider for connecting to vCenter
    provider "vsphere" {
      version="~> 1.3"
      vsphere_server="vcenter.sddc-34-218-61-195.vmc.vmware.com" # Set this to your VMC SDDC FQDN
      allow_unverified_ssl=false
      user="cloudadmin@vmc.local"
      password="VMware1!" # Set this to your VMC SDDC password
    }
  3. Configure data sources to retrieve information about the virtual infrastructure objects
    data "vsphere_datacenter" "dc" {
      name="SDDC-Datacenter"
    }
    
    data "vsphere_resource_pool" "pool" {
      name="Compute-ResourcePool"
      datacenter_id="${data.vsphere_datacenter.dc.id}"
    }
    
    data "vsphere_datastore" "datastore" {
      name="WorkloadDatastore"
      datacenter_id="${data.vsphere_datacenter.dc.id}"
    }
    
    data "vsphere_distributed_virtual_switch" "dvs" {
      name="vmc-dvs"
      datacenter_id="${data.vsphere_datacenter.dc.id}"
    }
    
    data "vsphere_network" "network" {
      name="logical_network1"
      datacenter_id="${data.vsphere_datacenter.dc.id}"
    }
  4. Configure the data source to retrieve information about the nested ESXi host VM template
    data "vsphere_virtual_machine" "template" {
     name="Nested_ESXi6.5u1_Appliance_Template_v1.0"
     datacenter_id="${data.vsphere_datacenter.dc.id}"
    }
  5. And now configure your first nested ESXi VM resource!
    resource "vsphere_virtual_machine" "vm" {
      name="ESXi1"
      guest_id="vmkernel65Guest"
      resource_pool_id="${data.vsphere_resource_pool.pool.id}"
      datastore_id="${data.vsphere_datastore.datastore.id}"
      folder="Workloads"
      num_cpus=2
      memory=6144
      wait_for_guest_net_timeout=0
    
      network_interface {
        network_id="${data.vsphere_network.network.id}"
      }
    
      disk {
        label="sda"
        unit_number=0
        size="${data.vsphere_virtual_machine.template.disks.0.size}"
        eagerly_scrub="${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
        thin_provisioned="${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
      }
    
      disk {
        label="sdb"
        unit_number=1
        size="${data.vsphere_virtual_machine.template.disks.1.size}"
        eagerly_scrub="${data.vsphere_virtual_machine.template.disks.1.eagerly_scrub}"
        thin_provisioned="${data.vsphere_virtual_machine.template.disks.1.thin_provisioned}"
      }
    
      disk {
        label="sdc"
        unit_number=2
        size="${data.vsphere_virtual_machine.template.disks.2.size}"
        eagerly_scrub="${data.vsphere_virtual_machine.template.disks.2.eagerly_scrub}"
        thin_provisioned="${data.vsphere_virtual_machine.template.disks.2.thin_provisioned}"
      }
    
      clone {
        template_uuid="${data.vsphere_virtual_machine.template.id}"
      }
    
      vapp {
        properties {
          "guestinfo.hostname" = "esxi1"
          "guestinfo.ipaddress" = "" # Default = DHCP
          "guestinfo.netmask" = ""
          "guestinfo.gateway" = ""
          "guestinfo.vlan" = ""
          "guestinfo.dns" = "8.8.8.8"
          "guestinfo.domain" = ""
          "guestinfo.ntp" = "pool.ntp.org"
          "guestinfo.syslog" = ""
          "guestinfo.password" = "" # Default = VMware1!
          "guestinfo.ssh" = "True" # Case-sensitive string
          "guestinfo.createvmfs" = "False" # Case-sensitive string
          "guestinfo.debug" = "False" # Case-sensitive string
        }
      }
    
      lifecycle {
        ignore_changes= [
          "annotation",
          "vapp.0.properties",
        ]
      }
    }

The lifecycle section of the VM resource instructs Terraform to ignore changes to the annotation attribute and to vapp.0.properties, because the Virtually Ghetto nested ESXi virtual appliance guest customization process sets the annotation field and removes most of the vApp properties at the end of guest customization so that sensitive data is not displayed on the deployed VM.  Since Terraform is unaware of this post-provision guest customization, subsequent executions of terraform plan and/or terraform apply would otherwise flag the VM resource as out of sync and recommend that the nested ESXi VM be destroyed and recreated.

Once your configuration file is set, you’re ready to start the automated provisioning process by running the following commands:

  • terraform init
    • Downloads the vSphere provider and prepares for interacting with the environment
  • terraform plan
    • Pre-flight check
  • terraform apply
    • Deploy your nested ESXi host!
  • terraform destroy
    • Eradicate your nested ESXi VM when you’re done so that you can deploy it again

You can find more information about getting started with Terraform here.

Enjoy!

VMware Cloud on AWS: Network Security

I recently had the opportunity to play with a four-host VMware Cloud on AWS (VMC) Software-Defined Datacenter (SDDC) for a month as part of an extracurricular, open source VMUG project using HashiCorp Terraform to automate the provisioning of virtual infrastructure for a VMware-oriented hackathon, but I’ll discuss the project more in depth in an upcoming post.  I learned a lot during the trial and am very grateful for the opportunity.

If you haven’t read the Getting Started Guide, here are a couple of important items:

  1. You’ll need to create a few firewall rules from the VMC portal to access your SDDC.  By default, both the Management Gateway’s (MGW) and Compute Gateway’s (CGW) firewall policy default to deny all, so no traffic is permitted to or from either environment until you allow it.  The MGW is an NSX Edge Services Gateway (ESG) that secures the north/south traffic to and from the vCenter Server, ESXi hosts, and NSX Manager, and the CGW controls north/south traffic to the VMs that you provision and control in your SDDC.
  2. Since this is a managed service, the rules that you can create on the Management Gateway are restricted to specific services for specific endpoints.  For example, you cannot create a rule permitting SSH access to the ESXi hosts in the environment, but you can permit ICMP so that you can monitor host availability via ping.

One thing that I found interesting is that VMC only permits one service entry per firewall rule, which can be a single TCP or UDP port (eg: 80/tcp), an ICMP type (eg: 0 Echo Reply), a range of TCP or UDP ports (eg: 49152-65535/tcp), all TCP, all UDP, all ICMP, or any; per the guide, a rule cannot be configured with multiple, non-contiguous services (eg: 80/tcp, 443/tcp).  Permitting that sort of connectivity must be written as separate rules in VMC.  Implementing a single target service object standard such as this is one way to design a firewall policy that is easy to read, troubleshoot, and administer, but I was a little surprised that it was mandated, since the firewall functionality of the NSX platform is so robust.  This is also a constraint of firewall rules in AWS VPC security groups, but it seems odd in an NSX environment.

Next, reusable NSX grouping objects, such as IPSets, security groups, custom service objects, and service groups, are not yet available in VMC.  This made the initial round of manual firewall policy management tedious, since I have been spoiled by the convenience of these objects.

To illustrate the challenge posed by the combination of these limitations, imagine provisioning a new Active Directory domain controller in your VMC SDDC for an existing domain residing outside the SDDC, with a requirement of permitting only the minimum necessary connectivity.  You would have to write and manage over 20 separate firewall rules per the guide, with the domain controller IP address statically defined in either the source or the destination of each rule (as appropriate), because of the single service/service-range constraint.

If you then needed to provision another domain controller for that domain in your SDDC and wanted to permit the same connectivity to & from the original domain controller and also maintain your requirement of permitting the minimum necessary connectivity, you would need to modify the sources and/or destinations of the existing 20+ rules (or add 20+ new rules) because of the lack of reusable firewall grouping objects.

If NSX grouping objects were available, you could consolidate the rules into a small handful with service groups that permitted the same traffic for the first domain controller, while maintaining the single target service (group) object standard.  To permit the traffic for the second domain controller (assuming that dynamic security groups, or security groups with parent infrastructure objects, were not used so that the firewall policy would be updated and applied automatically), you would only have to make one change instead of many: for example, adding the new IP address to the IPSet object, creating a new IPSet and adding it to the security group object, or adding the VM object to the security group object.

Another challenge to maintaining a strong network security posture is that the VMC SDDC distributed firewall is not yet configurable; however, this feature is currently listed as Planned on the public roadmap.  An alternative for controlling east/west traffic in the meantime is to use host-based firewalls.

There are so many neat and effective ways to administer firewall policy with NSX, so this was the one area that I felt could use improvement, and my only real point of constructive criticism from the trial period.  Overall, I was impressed with the excellence in architecture, engineering, and service delivery demonstrated by the VMC platform.

Special thanks to Ken Nalbone, Wences Michel, and Brian Graf for this great learning opportunity.

VMware vExpert 2018!

I made the cut!  I am very proud to say that I have been inducted into a terrific community of VMware evangelists and advocates that share their knowledge and love of technology with the greater community.  Thank you to the many, many people who inspired and supported me in this journey.

Today, I am celebrating by starting to plan a Terraform + Terragrunt framework for VMware Cloud on AWS for an upcoming open source VMUG project.  Stay tuned!

Free CI for your GitHub forks

How to build & test your feature branch for free on AppVeyor & Travis CI before submitting a pull request

I had an epiphany the other day while troubleshooting an issue with a pull request that I had submitted: you can have AppVeyor and Travis CI (free CI/CD services for public repos) build & test the commits that you push to your fork’s feature branch simply by enabling the repo in your AppVeyor and/or Travis CI accounts.  In this instance, my changes had worked on my Fedora image, but the build failed on Travis CI’s Ubuntu Trusty image.  Had I thought of this beforehand, I would have been able to detect and resolve the issue before submitting the pull request.  This is especially useful if you are attempting to help augment and/or optimize the project’s build configuration.

Also, you can implement this even if the project doesn’t use either of these CI/CD services or if the owners do not store the build configuration in the respective YAML file.  The caveat is that in either case, you would have to configure the CI services yourself.

To start building & testing a new project in AppVeyor, go here, and for Travis CI, go to your profile page.

Enable a new project in AppVeyor

Enable a new project in Travis CI

So far, I have used this methodology to provide screenshots and build logs as evidence in a couple of pull requests.  I have also used this to create and submit a Travis CI build configuration for a project that was only using AppVeyor and demonstrated how it would work if merged.  Pretty cool, right?

Enjoy!

CI/CD design decisions for a Microsoft PowerShell project on Travis CI

Per my previous post, this post covers my continuous integration / continuous deployment design decisions for my open source ArmorPowerShell project.

General Configuration

Building Specific Branches

You can whitelist and/or blacklist branches here, but I chose to build all branches in this project and included logic in the various scripts to limit actions prior to merging into master.

# whitelist 
#branches:
  #only:
    #- master

# blacklist
#branches:
  #except:
    #- 

Jobs

In the Jobs section, you can granularly define build stages, as well as conditional builds based on criteria such as branch, release tag, et cetera.  I have not implemented this so far.

#jobs:
  #include: 
    #- stage:
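Had I needed build stages, a hedged sketch (not part of my actual config; the script paths follow this project’s conventions) might look like:

```yaml
jobs:
  include:
    - stage: test
      script: pwsh -file ./tests/start-tests.ps1
    - stage: deploy
      if: branch = master    # conditional builds use Travis CI's conditions syntax
      script: pwsh -file ./build/deploy.ps1
```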

Language

I recommend setting the language to generic for scripting language projects (generic is not listed in the Language documentation, but it is briefly mentioned here), because all I needed for installing PowerShell Core was bash, curl, & apt for Ubuntu and Homebrew for macOS.  There are a wide variety of choices if you require otherwise.

language: generic

Runtime

You can also define specific runtime versions for certain applications.  If more than one runtime version is specified for the same item, a job will be created for each version.  I did not need to implement any of these for this project though.

#dotnet: 
#gemfile: 
#mono: 
#php: 
#python: 
#rvm:

Git

In the Git section, you can specify a clone depth limit or disable cloning of submodules to optimize job performance.  As of 20171128, the default commit depth on Travis CI is 50, which should provide sufficient commit history for most projects with accommodation for job queuing.

#git: 
  #depth: 
  #submodules:

Environment Configuration

Environment Variables

If you plan to test your open-source PowerShell project on multiple CI providers such as Travis CI and AppVeyor, I recommend defining a few global environment variables such as the ones listed below that abstract the CI specific variables to minimize the logic needed for handling each in your build scripts.  If you define a variable more than once, another job will be created for each definition.  You can also define matrix-specific environment variables in this section, or at the image level in the Matrix section.

# environment variables
env:
 global:
 - BUILD_PATH="$TRAVIS_BUILD_DIR"
 - MODULE_NAME="<insert module name>"
 - MODULE_PATH="$BUILD_PATH/$MODULE_NAME"
 - MODULE_VERSION="{set module version in build script}"
 - OWNER_NAME="$(echo $TRAVIS_REPO_SLUG | cut -d '/' -f1)"
 - PROJECT_NAME="$(echo $TRAVIS_REPO_SLUG | cut -d '/' -f2)"
 - secure: <secure string>
 #matrix:
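You can sanity check the OWNER_NAME and PROJECT_NAME parsing in any shell by substituting a sample slug for the TRAVIS_REPO_SLUG variable (the value below is hypothetical; Travis CI sets it to "owner/repo" for real builds):

```shell
# Hypothetical repo slug in the "owner/repo" format that Travis CI provides
TRAVIS_REPO_SLUG="example-owner/example-repo"

# Same cut-based parsing as the env section above
OWNER_NAME="$(echo "$TRAVIS_REPO_SLUG" | cut -d '/' -f1)"
PROJECT_NAME="$(echo "$TRAVIS_REPO_SLUG" | cut -d '/' -f2)"

echo "$OWNER_NAME"   # example-owner
echo "$PROJECT_NAME" # example-repo
```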

Services

There are lots of terrific services and databases that are installed and available in each image should you need them.

# enable service required for build/tests
#services:
 #- cassandra # start Apache Cassandra
 #- couchdb # start CouchDB
 #- elasticsearch # start ElasticSearch
 #- memcached # start Memcached
 #- mongodb # start MongoDB
 #- mysql # start MySQL
 #- neo4j # start Neo4j Community Edition
 #- postgresql # start PostgreSQL
 #- rabbitmq # start RabbitMQ
 #- redis-server # start Redis
 #- riak # start Riak

Global Image Settings

You can define your build images at the global scope; however, I chose to use the matrix build image configuration as recommended here for multiple operating system build configurations, because it is cleaner.  For example, when osx_image is defined at the global scope, your Ubuntu builds will receive the xcode tag, even though it does not apply.

xcode tag assigned to Ubuntu Trusty build image

# Build worker image (VM template)
#os:
#- linux
#- osx

#sudo: required

#dist: trusty

#osx_image: xcode9.1

Build Matrix

The Matrix section allows you to customize each image that will build your code.  I cover most of these features sufficiently in the previous post, but the two that I did not are:

  1. allow_failures, which will permit the specified build image to pass regardless of any errors that occur.  I’ll likely never use this feature because it defeats the purpose of implementing continuous integration in my opinion.
  2. exclude, which prevents building specified images when you define combinations of environment variables, runtime versions, and/or matrix images.  I don’t foresee my scripting language projects being complicated enough to require this feature.

matrix:
  include:
    - os: linux
      dist: trusty
      sudo: false
      addons:
        apt:
          sources:
            - sourceline: "deb [arch=amd64] https://packages.microsoft.com/ubuntu/14.04/prod trusty main"
              key_url: "https://packages.microsoft.com/keys/microsoft.asc"
          packages:
            - powershell
    - os: osx
      osx_image: xcode9.1
      before_install:
        - brew tap caskroom/cask
        - brew cask install powershell 
  fast_finish: true
  #allow_failures:
  #exclude:

Add-Ons

In the addons section, you can define hostnames, prepare for headless testing, upload build artifacts, add SSH known hosts, et cetera.  I have not needed any of these so far for this project.

#addons:
  #artifacts:
    #paths:
      #- 
  #chrome:
  #firefox:
  #hosts:
  #mariadb:
  #rethinkdb:
  #sauce_connect:
    #username:
    #access_key:
  #ssh_known_hosts:

APT Add-ons

To install packages not included in the default container-based infrastructure, you need to use the APT add-on, as sudo apt-get is not available.

For now, I have only used this to setup the Microsoft PowerShell Core package management repository and install PowerShell Core on my Ubuntu Trusty container image defined in my build matrix.

If the APT Add-ons step exits with a non-zero error code, the build is marked as error and stops immediately.

#addons:
  #apt:
    #sources:
      #- sourceline: 
        #key_url: 
    #packages:
      #-

Build Cache

You can cache files and folders to preserve them between builds, such as when you have large, low-volatility files that take a while to clone.  I did not.  Tabula rasa.

If the cache step exits with a non-zero error code, the build is marked as error and stops immediately.

# build cache to preserve files/folders between builds
#cache:
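If I ever do need it, a minimal cache block might look like this sketch (the directory is just an example of where PowerShell Core stores user modules on Linux):

```yaml
cache:
  directories:
    - "$HOME/.local/share/powershell/Modules"  # example: persist installed PowerShell modules
```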

Before Install

In a before_install step, you can install additional dependencies required by your project such as Ubuntu packages or custom services.

One important thing to be aware of is that matrix image instructions override global instructions.  Since I placed the homebrew commands to install PowerShell in the Before Install step of the macOS build matrix image, if I were to define a global Before Install step, the macOS build matrix image would ignore it.  Alternatively, you could use conditional logic in the global step if you only wanted to perform some instructions on a specific operating system, and some on all build images.

If the before_install step exits with a non-zero error code, the build is marked as error and stops immediately.

#before_install:
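To illustrate the override behavior described above, here is a hedged sketch (not my actual config): the macOS matrix image ignores the global step and runs its own, while the Linux image inherits the global step.

```yaml
before_install:
  - echo "global step - runs on images that do not define their own before_install"

matrix:
  include:
    - os: linux
      dist: trusty
      # no before_install here, so the global step runs on this image
    - os: osx
      osx_image: xcode9.1
      before_install:  # overrides the global step on this image only
        - brew tap caskroom/cask
        - brew cask install powershell
```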

Install

As of 20171128, there is no default dependency installation step for PowerShell projects on Travis CI.  In the install step, I chose to install and import the necessary PowerShell modules on all build images, and implemented it via a PowerShell script so that I always utilize the same logic in my AppVeyor builds with no additional configuration (ie: DRY).

If the install step exits with a non-zero error code, the build is marked as error and stops immediately.

install:
- pwsh -file ./build/shared/install-dependencies.ps1

Tests Configuration

Before Script

You can run custom commands prior to the build script step.  I have not had a need for this step yet.

If the before_script step exits with a non-zero error code, the build is marked as error and stops immediately.

#before_script:

Script

I call both my build script and my test runner script here, because a non-zero exit code flags the build as a failure but the build continues to run, which is what I wanted for these steps.  There is also an after_script section where I could have run my tests, but that step runs last: after the after_success and after_failure finalization steps (similar to the AppVeyor on_finish step), and even after the deploy steps.  Also, these three steps do not affect the build result unless the step times out, and I wanted both the build script and the test script to affect the build result.

script: 
- pwsh -file ./build/shared/build.ps1
- pwsh -file ./tests/start-tests.ps1

Before Cache

This step is used to clean up your cache of files & folders that will persist between builds.  I have not needed this yet.  Again, tabula rasa.

#before_cache:

After Success / After Failure

You can perform additional steps when your build succeeds or fails using the after_success (such as building documentation, or deploying to a custom server) or after_failure (such as uploading log files) options.

I chose to build my documentation in the build.ps1 script in the script step instead of the after_success step, because I wanted failure to affect the build result in my project.

# on successful build
#after_success: 

# on build failure
#after_failure:

Deployment Configuration

There are tons of continuous deployment options available in the Deployment Configuration, such as Heroku, Engine Yard, and many others, but I haven’t needed any for this project so far because I am handling all of the publishing from AppVeyor.  The continuous deployment tasks could have been implemented just as easily from Travis CI; I just happened to finish the AppVeyor integration first, and my publishing tasks only need to happen once per build.

# scripts to run before deployment
#before_deploy:

#deploy:
  #skip_cleanup:

# scripts to run after deployment
#after_deploy:

# after build failure or success
#after_script:
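For reference, a hedged sketch of a GitHub Releases deployment (not used in this project; the artifact file name is hypothetical):

```yaml
deploy:
  provider: releases          # deploy build artifacts to GitHub Releases
  api_key: <secure string>    # encrypt your GitHub token with the Travis Client
  file: ArmorPowerShell.zip   # hypothetical artifact name
  skip_cleanup: true          # keep build output produced during the script step
  on:
    tags: true                # only deploy tagged commits
```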

Notifications

It took me approximately one email to get tired of build email notifications, so I recommend disabling them in the Notifications section as shown below.  There are tons of free monitoring options out there, but I chose to create a free Slack.com workspace for monitoring builds.  Travis CI has an app published in the Slack App Directory, and setup instructions can be found here.

notifications:
  email: false
  slack:
    secure: <secure string>

Conclusion

That’s it for now.  I have really enjoyed using the Travis CI platform so far, and feel much more confident in the quality of my project because of it.  Enjoy!

PowerShell Core on Travis CI

How to build, test, and deploy your PowerShell projects on Linux and macOS for free with Travis CI!

Last month, I started a new pet project of building an open-source PowerShell module for Armor, and one of the first goals that came to mind was that I wanted to ensure compatibility with PowerShell Core on Linux.  I had recently re-read Chris Wahl’s article: How to Version and Publish a PowerShell Module to GitHub and PSGallery with AppVeyor, and figured that there had to be a similar service for Linux, so I started looking around.  I found Travis CI rather quickly, and was pleasantly surprised to discover that they offered macOS images in addition to Ubuntu.

If you are unfamiliar with Travis CI, here is a solid description:

Travis CI is a hosted, distributed continuous integration service used to build and test projects hosted at GitHub. Travis CI automatically detects when a commit has been made and pushed to a GitHub repository that is using Travis CI, and each time this happens, it will try to build the project and run tests. This includes commits to all branches, not just to the master branch.

Restated, this means that every time you push new code up to your public repo, Travis CI (and/or AppVeyor) will build your project per your specifications, run any tests defined, and even deploy it if desired.  For free.  Build, test, and deploy on push for free.  How cool is that?

Now, one of the reasons that I am writing this article is that getting started with building & testing a PowerShell project on Travis CI was not intuitive.  AppVeyor and Travis CI were both designed for building, testing, and deploying programming language projects, not scripting language projects.  It took a lot of RTFM and a little trial & error to figure it out, but it was so worth it.  The following article covers some of my lessons learned in the process.  I hope that you find them valuable.

Getting started with Travis CI

  1. Sign into Travis CI with your GitHub account and accept the GitHub access permissions confirmation.
  2. Once you’re signed in to Travis CI and it has synchronized your GitHub repositories, go to your profile page and enable the repository that you want to build.
  3. Add a .travis.yml file to your repository to tell Travis CI what to do.
  4. Add the .travis.yml file to git, commit and push, to trigger a Travis CI build:
    1. Travis only runs builds on the commits you push after you’ve enabled the repository in Travis CI.
  5. Check the build status page to see if your build passes or fails, according to the return status of the build command.

Not too bad, right?  Don’t worry, there isn’t that much more even though the scroll bar indicates otherwise.

Install the Travis Client

The travis gem includes both a command line client and a Ruby library to interface with a Travis CI service.

You’ll need the Travis Client on your workstation for encrypting sensitive data such as access tokens as well as for linting (validating) your .travis.yml file.

Install Ruby

Windows

On Windows, we recommend using the RubyInstaller, which includes the latest version of Ruby

Mac OS X via Homebrew

Mac OSX prior to 10.9 ships with a very dated Ruby version. You can use Homebrew to install a recent version:

$ brew install ruby
$ gem update --system

 

Install the Travis ruby gem

Make sure you have at least Ruby 1.9.3 (2.0.0 recommended) installed.

You can check your Ruby version by running ruby -v:

$ ruby -v
ruby 2.0.0p195 (2013-05-14 revision 40734) [x86_64-darwin12.3.0]

Then run:

$ gem install travis -v 1.8.8 --no-rdoc --no-ri

Now make sure everything is working:

$ travis version
1.8.8

Customizing the build

Travis CI’s documentation of the .travis.yml file sprawls quite a bit as there are so many features available, so I’ll start with an example .travis.yml config file that should work for testing most of your open-source PowerShell projects on the Travis CI platform.  In my next post, I will provide a high-level overview of all of the available options that I found in the documentation for reference, as well as my design decisions for the ArmorPowerShell project.

Example .travis.yml config file

To start testing your open-source PowerShell project on macOS & Ubuntu, copy the contents below to a file named ‘.travis.yml’ in the base directory of your project.

.travis.yml

language: generic

matrix:
  include:
    - os: linux
      dist: trusty
      sudo: false
      addons:
        apt:
          sources:
            - sourceline: "deb [arch=amd64] https://packages.microsoft.com/ubuntu/14.04/prod trusty main"
              key_url: "https://packages.microsoft.com/keys/microsoft.asc"
          packages:
            - powershell
    - os: osx
      osx_image: xcode9.1
      before_install:
        - brew tap caskroom/cask
        - brew cask install powershell
  fast_finish: true

install:
  - pwsh -f "$TRAVIS_BUILD_DIR/install-dependencies.ps1"

before_script:
  - pwsh -f "$TRAVIS_BUILD_DIR/build.ps1"

script:
  - pwsh -f "$TRAVIS_BUILD_DIR/test.ps1"

after_success:
  - pwsh -f "$TRAVIS_BUILD_DIR/deploy.ps1"

NOTES

  • The powershell executable name has been shortened to pwsh as of v6.0.0-beta.9.
  • There are a few lines that call PowerShell to execute a script file, such as pwsh -f "$TRAVIS_BUILD_DIR/install-dependencies.ps1", where the script resides in the base directory of the project and is fully pathed via the TRAVIS_BUILD_DIR environment variable; however, none of these files is mandatory.  You could store the files in subdirectories, give them different names, call commands instead of files, or do something else entirely.  These are all just ideas to stimulate your imagination, but whatever logic you define must be valid or your build will fail.
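One point worth spelling out: the commands in these sections are executed by bash, so the build-directory variable is expanded by the shell before pwsh is ever launched.  A quick simulation of that expansion, where the path is a made-up example of the value that Travis CI sets:

```shell
# Simulate bash expanding the build-directory variable before invoking pwsh.
TRAVIS_BUILD_DIR="/home/travis/build/user/repo"  # Travis CI sets this in real builds
cmd="pwsh -f \"$TRAVIS_BUILD_DIR/install-dependencies.ps1\""
echo "$cmd"  # → pwsh -f "/home/travis/build/user/repo/install-dependencies.ps1"
```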

What does this .travis.yml config file do?

  • language: This defines the programming language that the build system should use.  I set this to generic, because I am building a scripting language project.  The generic setting is not documented in the Travis CI Languages documentation, but is listed in a few examples, such as this one.
  • matrix: The Matrix section allows you to customize each image that will build your code.
    • include: Include the specified image configurations.  All configurations defined for an image in the matrix override the corresponding global configuration.  For example, I configured a before_install section in the osx image above, so if I had a global before_install section defined in the .travis.yml config file, the macOS image would skip it.  Excludes can also be defined here for more complex build topologies.
      • os: The operating system of the image.  As of 20171125, the two choices are osx (macOS) and linux (Ubuntu).
      • dist: The Ubuntu Linux distro image.  As of 20171125, the two choices are trusty (Ubuntu 14.04 LTS Trusty Tahr) and precise (Ubuntu 12.04 LTS Precise Pangolin, which is end of life).
      • sudo: The purpose of this setting is almost certainly not what you think.  Setting sudo to false in your Trusty images causes your build to run in a container instead of a VM, which starts up and completes much faster.  Unfortunately, as of 20171125, there is not a containerized option for macOS yet.  If you want or need to use a Linux VM image, set sudo to required.  I have had no issues with building or testing my code inside a container so far.  I will update the article if I discover a blocking issue, but I don’t expect to at this point, since the PowerShell Core team publishes nightly builds on Docker Hub and, of note, they also build on Travis CI.
      • addons: There is a lot that can be configured in the addons section, but for now, we’re only going to use this for the Trusty image to add the appropriate Microsoft software repository where the official PowerShell Core binaries are hosted, the software repository key, and to install PowerShell Core per the recommended methodology as defined in the PowerShell Core Install Guide.
        • apt: The default package management tool for Ubuntu.
          • sources: Software repositories to add.
            • sourceline: The software repository configuration.
            • key_url: The URL of the repository’s public signing key, which apt uses to verify the authenticity of the packages.
          • packages: Software packages to install.
            • powershell: Install PowerShell Core on Linux, please and thank you.
      • osx_image: The macOS image that you want to use.
        • As of 20171125, the Travis CI default is 7.3, which is an older macOS 10.11 image.
        • The official PowerShell install guide only lists support for macOS 10.12; however, I performed a few basic functional tests on the osx images 7.3 (10.11) & 6.4 (10.10).  PowerShell Core installed and completed the build and test runs successfully on macOS 10.11 without any additional configuration, but it failed on macOS 10.10.
          • PowerShell Core may work on macOS 10.10 with additional configuration, but I am not interested in researching this any further at this time.
        • If you are concerned about breaking changes between macOS versions, you can duplicate the osx matrix image section and replace the value of osx_image with a different version.
          • Available image versions can be found here.
      • before_install: This matrix image section overrides the global before_install configuration for our osx image, and is used for installing PowerShell Core as defined in the installation guide.
        • brew tap caskroom/cask: reference

          Homebrew-Cask extends Homebrew and brings its elegance, simplicity, and speed to macOS applications and large binaries alike.

        • brew cask install powershell: Install PowerShell Core on macOS, please and thank you.
    • fast_finish: Report the build result as soon as all of the jobs that can affect it have finished, instead of waiting for every remaining job (such as any allowed to fail) to complete.  If you would rather have the build wait for every job, change the value to false.
  • install: This section can be used for calling the PowerShell script to install dependencies, such as any modules needed to build and/or test the script.
    • I highly recommend storing the logic for each section in a separate file so that:
      1. It is easier to reuse & maintain the code if you later choose to integrate with AppVeyor as well, which tests your open-source project on Windows PowerShell for free.
      2. It avoids the inherent challenges of embedding code in code.
    • In the build lifecycle order of operations, the install section follows the before_install section and precedes the before_script section.
    • Here is my install-dependencies.ps1 script for the ArmorPowerShell project.
  • before_script: This section can be used for calling your PowerShell build script to do things such as update the module manifest, update the documentation, et cetera.
    • Here is my build.ps1 script for the ArmorPowerShell project.
  • script: This section can be used for calling your PowerShell unit, integration, and/or functional test scripts.
    • If you are new to these concepts, I recommend reading up on those topics, as well as Pester.
    • Here is my start-tests.ps1 script for the ArmorPowerShell project.
  • after_success: This section can be used for calling a deployment script if that makes sense for your project, such as publishing your module, script, et cetera to the PowerShell Gallery, NuGet, Chocolatey, GitHub Releases, et cetera.
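Since the matrix above only uses include, here is a hypothetical sketch of how exclude and allow_failures pair with fast_finish.  None of these keys appear in my config file; the values are invented purely for illustration:

```yaml
# Hypothetical matrix tweaks -- not part of the example config file above.
matrix:
  fast_finish: true    # report the result without waiting on allowed failures
  allow_failures:
    - os: osx          # let the macOS job fail without failing the build
  exclude:
    - os: linux        # drop this combination from the generated matrix
      dist: precise
```

With this shape, a red macOS job would still leave the build green, and fast_finish reports the result as soon as the required Linux jobs complete.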

Travis CI is an extremely powerful platform with tons of other features that you can take advantage of, but that is all that I am going to cover in this post about the possibilities available in the .travis.yml config file.

Lint your .travis.yml config file

Now, it’s time to test your config file using the Travis Client that we installed earlier by running travis lint.

> travis lint
Warnings for .travis.yml:
[x] in matrix.include section: unexpected key osx_image, dropping
[x] in matrix.include section: unexpected key dist, dropping
[x] in matrix.include section: unexpected key sudo, dropping

Wait, what?  Why am I seeing these warnings?

As of 20171125, this type of build matrix image configuration is recommended per the Travis CI multiple operating system build configurations documentation, but it will generate three false positive unexpected key warnings when linting (validating) your .travis.yml config file.  These three warnings can be disregarded and have been reported here.  Any warnings or errors other than these should be addressed.
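If you ever script the lint step (say, as a pre-commit check), you could filter those three known false positives out of the output and treat anything left as actionable.  A rough sketch over captured lint output, where the foo warning is an invented example of a real problem:

```shell
# Rough sketch: keep only actionable `travis lint` warnings by dropping the
# three known false positives; the "foo" warning is an invented example.
lint_output='Warnings for .travis.yml:
[x] in matrix.include section: unexpected key osx_image, dropping
[x] in matrix.include section: unexpected key dist, dropping
[x] in matrix.include section: unexpected key sudo, dropping
[x] in matrix.include section: unexpected key foo, dropping'

# Keep only warning lines, then drop the three known false positives.
actionable="$(printf '%s\n' "$lint_output" \
  | grep '^\[x\]' \
  | grep -v -e 'unexpected key osx_image' -e 'unexpected key dist' -e 'unexpected key sudo')"
echo "$actionable"
```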

Commit your .travis.yml config file

When you are ready, run the following commands to:

  1. Stage the .travis.yml config file to the index
  2. Commit the .travis.yml config file
  3. And then push it up to the master branch of your GitHub public repo, which will trigger the first Travis CI build for your project!
    1. If you prefer to push the change to a branch other than master, then update the branch name accordingly.
git add ./.travis.yml
git commit --message 'Initial commit' ./.travis.yml
git push origin master

Protect your important branches

Now that you have configured and run your first build, update your GitHub repository settings so that any contribution to your project must first pass your build and testing framework as a prerequisite for consideration.  To do so, open your repository’s Settings, select Branches, add master as a protected branch, and require the Travis CI status check to pass before merging.

[Screenshot: GitHub protected branch settings, 2017-11-25]

You made it!

Voila!  You’re done!  There are plenty of other things that you can do from here, such as configuring notifications so that Travis CI automatically posts your build results in a Slack channel, publishing your PowerShell module to the PowerShell Gallery on a successful build, or adding a badge to your README.md that indicates whether the last build passed or failed.  I’ll cover all of those in the next post, but you now have enough to start testing your PowerShell project on macOS and Ubuntu for free on the powerful & versatile Travis CI platform.  Enjoy!