As an organisation, we are spread across over 100 sites, ranging from small portacabins to large purpose-built offices. All of these sites are geographically dispersed across an area the size of Belgium.
With budgets tight within the NHS, we are constantly looking to consolidate or get the best value for money from our estate. This results in a high turnover of sites and generates quite a bit of work for our small team.
We have tried to standardise on site configurations for a number of years, but there were always small inconsistencies in configuration, such as switch uplinks on different ports or the odd site where we put in a 48-port switch instead of a 24-port one.
We used configuration templates to a certain degree, but things like subnet and wildcard masks were edited by hand. This usually resulted in multiple hours on the phone between engineers on site and those back at head office trying to diagnose why VPN tunnels were not coming up.
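Those hand-edited masks are exactly the kind of thing a few lines of code can derive instead. A minimal sketch using Python's standard `ipaddress` module (the example subnet is invented):

```python
import ipaddress

def masks_for(subnet):
    """Return (subnet mask, Cisco-style wildcard mask) for a CIDR subnet."""
    net = ipaddress.ip_network(subnet)
    # hostmask is the inverse of the netmask, i.e. the wildcard mask
    return str(net.netmask), str(net.hostmask)

print(masks_for("10.20.30.64/26"))  # ('255.255.255.192', '0.0.0.63')
```

Deriving both masks from a single CIDR value means an engineer can never enter a subnet mask and wildcard mask that disagree with each other, which removes one whole class of VPN troubleshooting calls.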
After listening to many hours of the Packet Pushers podcast on the way in to work, the Ansible/Python preaching started to break through. It was time to automate all the things!
I wanted a way for our junior engineers to select from a list of available subnets, site codes, bandwidths etc. and have the configurations generated automatically. These could just be emailed to them at first, but ultimately I would like it to push out to the devices directly.
I will start with the user interface part of the solution first, as this will help to explain why some of the other components are required.
As I thought about it, a few requirements for the web front-end started to emerge.
- Active Directory authentication and job level permissions based upon security groups.
- Selection lists that can be read from a remote URL as JSON/Text.
- ‘Default’ and ‘Required’ values.
- Regex support for validating text inputs.
- Ability to use entered/selected data as variables within the command line parameters of scripts.
- Decent looking UI.
- Log job executions to file so it can be picked up by a syslog collector.
- Free & open source.
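On the second requirement: Rundeck's remote option providers accept either a plain JSON array of strings or an array of objects with `name` and `value` keys. A minimal sketch of building such a payload (the site codes here are invented placeholders, not our real ones):

```python
import json

# Rundeck remote-option providers accept a plain JSON array of strings,
# or a list of {"name": ..., "value": ...} objects for friendlier labels.
site_codes = ["ABC", "DEF", "GHI"]  # placeholder site codes
payload = [{"name": f"Site {c}", "value": c} for c in site_codes]
print(json.dumps(payload))
```

Anything that can serve this shape over HTTP can feed a drop-down list in a Rundeck job.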
I hunted around for something that fit the above requirements but struggled at first. I started to look into Python web frameworks such as Django and Flask with a view to writing my own. As the scale of the programming task grew, I invested more time in searching for an off-the-shelf package that I could customise. Thankfully I found Rundeck, which is an excellent open source project.
Rundeck met all of my requirements and was relatively simple to install. A couple of the optional configuration tasks were a little tricky (namely AD integration and SSL certificates), so I may do a separate post about those.
Shown below is an overview diagram of the various software components and how they integrate.
We use SolarWinds for our network monitoring and IP Address Management (IPAM), so this would be the source of truth for the majority of the required configuration data. Unfortunately the raw data was neither exactly what we needed nor in a format that Rundeck could consume directly. For example, the SolarWinds IPAM API could give me a list of used subnets, but not all available /26 or /27 subnets.
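Turning "used subnets" into "available subnets" is straightforward set arithmetic. A sketch of the idea using the standard `ipaddress` module (supernet and used ranges are invented examples, not our addressing plan):

```python
import ipaddress

def free_subnets(supernet, used, new_prefix):
    """List subnets of `supernet` at `new_prefix` that don't overlap
    any already-allocated subnet."""
    parent = ipaddress.ip_network(supernet)
    allocated = [ipaddress.ip_network(u) for u in used]
    return [
        str(candidate)
        for candidate in parent.subnets(new_prefix=new_prefix)
        if not any(candidate.overlaps(a) for a in allocated)
    ]

# 10.50.0.0/26 is taken, and 10.50.0.128/27 blocks the 10.50.0.128/26 candidate.
print(free_subnets("10.50.0.0/24", ["10.50.0.0/26", "10.50.0.128/27"], 26))
# ['10.50.0.64/26', '10.50.0.192/26']
```

Feed the output to a Rundeck remote option endpoint and the junior engineer only ever sees subnets that are actually free.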
This conversion between SolarWinds data and Rundeck is where the Network Interrogation Tool (NIT) comes in, which will be mentioned later in this post, but also gets its own dedicated part in this automation series.
The basic steps for the installation of Rundeck on Fedora are given below, but you should check the latest instructions on the Rundeck website if you install it yourself.
```shell
# Install Rundeck and its dependencies.
sudo yum install java-1.8.0
sudo dnf install yum
sudo rpm -Uvh http://repo.rundeck.org/latest.rpm
sudo yum install rundeck
sudo service rundeckd start

# Update the Rundeck config file to change the hostname.
cd /etc/rundeck/
sudo nano rundeck-config.properties
# Change the config line:
#   grails.serverURL=http://rundeck.yourdomain.com:4440

# Update the framework.properties file.
sudo nano framework.properties
# Change the config lines:
#   framework.server.name = rundeck.yourdomain.com
#   framework.server.hostname = rundeck.yourdomain.com
#   framework.server.port = 4440
#   framework.server.url = http://rundeck.yourdomain.com:4440

# Add firewall rules.
sudo firewall-cmd --add-port=4440/tcp
sudo firewall-cmd --permanent --add-port=4440/tcp
sudo firewall-cmd --add-port=4443/tcp
sudo firewall-cmd --permanent --add-port=4443/tcp

# Restart the server.
sudo shutdown -r now
```
Following the successful installation of Rundeck, jobs were created for each of our remote site Network Configuration models, which we have named NC1, NC2, NC3 and so on, because you can never have enough initialisms and acronyms in IT.
You can also connect Rundeck to GitHub so that the job definitions themselves are version controlled. I signed up for a basic GitHub organisation account as I knew that it would be used for other parts of the project (the main difference between the GitHub free and paid plans is that the paid plans allow private repositories). Once you have linked Rundeck to GitHub, jobs that have been modified are highlighted until they have been committed.
Shown below is an example job setup screen. Each of the inputs is either a fixed drop-down list or is tested against a regex validation string. This ensures that the generated configurations always use consistent data. If the site name needs to be in ‘Title Case’ then it won’t let you proceed until it is entered correctly, perfect for anal people like me.
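For the curious, a Title Case check like that is a one-line regex. A sketch of the idea (this exact pattern is my illustration, not the string from our production jobs):

```python
import re

# Hypothetical validation pattern: one or more capitalised words
# separated by single spaces, e.g. "Market Street Surgery".
TITLE_CASE = re.compile(r"^[A-Z][a-z]+(?: [A-Z][a-z]+)*$")

print(bool(TITLE_CASE.match("Market Street Surgery")))  # True
print(bool(TITLE_CASE.match("market street surgery")))  # False
```

Rundeck applies the pattern itself when you paste it into the option's validation field, so the script on the receiving end can trust the input.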
Once we have all of the required variables, we can call scripts with command line options to perform the various tasks for us.
As of today we perform the following actions, but the list can be expanded easily.
- Create the DHCP scopes.
- Create the nodes in NMS for monitoring.
- Generate the router and switch configurations.
- Add any tasks that can’t be automated to Asana.
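Once Rundeck hands the sanitised variables to a script, configuration generation can be as simple as template substitution. A minimal sketch using the standard library's `string.Template` (the template text and variable names here are invented for illustration, not our real NC templates):

```python
from string import Template

# Illustrative config template; ${...} placeholders map to Rundeck job options.
TEMPLATE = Template("""\
hostname ${site_code}-rtr01
interface Vlan${vlan}
 ip address ${gateway} ${mask}
""")

config = TEMPLATE.substitute(
    site_code="ABC",          # from the validated site-code option
    vlan="10",
    gateway="10.50.0.65",
    mask="255.255.255.192",   # derived from the selected subnet
)
print(config)
```

Because every placeholder value has already passed a drop-down list or a regex, the rendered configuration can't contain the hand-typed inconsistencies that used to cost us hours on the phone.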
I will go into some of the detail of the Python scripts in a later part of this series.
Rundeck does much more than outlined here, but for our purposes it is enough to define jobs that capture sanitised variables from the user and then call scripts that perform a series of actions.
That’s probably a good place to stop on the front-end side of things.