Sep 14, 2016

My Experience with HashiConf 2016

2016 is the second year of HashiConf and my first year attending. I want to share my experience and opinions of the conference. My primary goal in attending a conference is to learn new things that interest me and that will help my client(s) and Object Partners' goals. In particular, I went looking for solutions for secret management and deployment orchestration.


Lots of attention was given to Vault; I counted at least six talks. Vault Enterprise was officially released, providing a secret management interface, a health status dashboard, and more. This attention is justified, as secret management is a big problem for both small companies and large enterprises.

I am impressed with the policy management and secret introduction services in Vault. Applications have a nice REST API to retrieve and renew secrets via a token. I don’t want to reiterate all of the capabilities this provides, but what pops out at me first is that passwords can be rotated on any schedule. Secret use is logged by Vault, and there is a “break-glass” procedure to stop the use of compromised credentials.
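As a sketch of what that REST API looks like, here is an illustrative exchange against a local Vault dev server (the address, token variable, and secret path are assumptions for the example, not any particular deployment):

```shell
# Assumes a local dev server (vault server -dev) and a token in $VAULT_TOKEN.
# Write a secret, then read it back over the HTTP API.
curl -s -X POST -H "X-Vault-Token: $VAULT_TOKEN" \
  -d '{"password": "s3cr3t"}' \
  http://127.0.0.1:8200/v1/secret/myapp/db

curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
  http://127.0.0.1:8200/v1/secret/myapp/db
```

Every read like this is captured in Vault's audit log, which is what makes the usage logging and break-glass revocation described above possible.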

Joel Thompson from Bridgewater Associates, LP gave a good talk on how they have implemented Vault in a pseudo multi-tenant configuration.

Nomad (an application scheduler) has been integrated with Vault such that when running applications, it can grant a job access to only the secrets it needs, via a policy, and only for the duration of the process. It’s very nicely done and uses the public Vault API, so other schedulers can use it as well.
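In the job specification this integration is just a small stanza on the task. A hypothetical fragment (the task and policy names are illustrative), assuming a Nomad cluster already configured to talk to Vault:

```hcl
# The task asks Nomad for a Vault token scoped to the named policy,
# valid only while the task runs.
task "web" {
  driver = "docker"

  config {
    image = "example/web:1.0"
  }

  vault {
    policies = ["web-app"]  # only the secrets this policy grants
  }
}
```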


Several talks explained Nomad, covering both its design principles and the experience of operating it. Nomad is an application scheduler that can handle more than container workloads: it can schedule any binary and has a specific Java job type.

Nomad’s speed was showcased: HashiCorp ran a 1,000,000-container test to show its performance, and the load was scheduled in around 5 minutes. Citadel described their comparison of popular schedulers, including Nomad, and was able to get Nomad to schedule 2,200 containers per second. Very impressive! This rate is orders of magnitude above the other schedulers.

A new version of Nomad was announced, including the aforementioned Vault integration as well as sticky volumes, which allow data to stay with an application across changes (i.e. stateful applications). Kelsey Hightower from Google gave a live demo using Nomad, and the ease of use was impressive.

Another thing I like about Nomad is that jobs can, and should, be checked into source control. Nomad isn’t opinionated about whether the jobs live in the application’s repo or a separate one, so this can be customized to each situation. There are many options, including job types, how to stagger a rolling upgrade, etc.
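A minimal, hypothetical job file (all names and values are illustrative) shows what such a version-controlled artifact might look like, including the Java job type and rolling-upgrade options mentioned above:

```hcl
# app.nomad - could live alongside the application code and be
# submitted with `nomad run app.nomad`.
job "app" {
  datacenters = ["dc1"]
  type        = "service"

  update {
    stagger      = "30s"  # wait between updating each task instance
    max_parallel = 1      # roll one instance at a time
  }

  group "web" {
    count = 3

    task "server" {
      driver = "java"  # Nomad's Java job type; "docker" and "exec" also work

      config {
        jar_path = "local/app.jar"
      }

      resources {
        cpu    = 500  # MHz
        memory = 256  # MB
      }
    }
  }
}
```

Because the whole deployment description is in one reviewable file, changes to scaling or upgrade behavior go through the same pull-request workflow as code.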

Micro-services and Containers

Much was said about micro-services and containers. Micro-services are the current way to decouple components of applications and independently deploy them to gain release velocity. Containers are the best deployment unit for micro-services for maximizing resource usage. Several speakers pointed out that although these architectures solve many problems, they also create challenges of their own that must be managed.

Bryan Cantrill, CTO of Joyent, gave an excellent talk on the history of containers and what can be done to make them better. Specifically, running containers on bare metal rather than in VMs, letting the orchestration tool increase resource utilization without the overhead of VMs. Joyent is one solution in this space, definitely worth a look.

Thought Provoking

There are talks that are technologically provoking, the kind that make you want to go back to your room, download the tools, and make your stuff better. Then there are talks that challenge your way of thinking. Two of the latter that stuck out were from Mathias Meyer, CEO of Travis CI, and Corey Quinn, a consultant speaking on his experience with Fortune 100 companies.

Mathias challenged me in two ways. First, sometimes everything can work as designed, but hidden coupling or an unfortunate series of events can still cause outages. Secondly, we tend to focus on problems, and ignore the fact that things usually work. Perhaps a few postmortems on why the last three deploys went well would be helpful so we can identify and repeat the good parts.

Corey spoke from the viewpoint of taking an organization from a previous paradigm (or architecture) and navigating the culture to produce change that brings more value to the organization. Rather than comparing a company that has been successful for 50+ years to a startup that has all the right “DevOps” things, the history needs to be considered. It also needs to be considered that the large company’s history has been largely successful, or the company would not still be operating. (I get this living in a 100+ year old home when looking at the electrical and plumbing. No, it is not in modern shape, but it was built when all that was powered was light bulbs and indoor plumbing was a luxury. And hey, my stuff turns on and has better uptime than the Californians I shared breakfast with. 🙂)


There were a few vendors present, and their products were relevant to the conference. They were knowledgeable, and I had good conversations about how their tools might be useful in current and future projects. I didn’t feel like I was getting a sales pitch; instead it was engineers describing a solution. Kudos, I don’t like sales pitches.


I learned much at HashiConf 2016. It was an enjoyable time and well worth the time and money. I was able to speak with people who are doing different things in their work and who are from different hemispheres. The venue was classy and (along with the schedule) lent itself well to interaction, even among introverts. I have no reservations recommending that HashiConf 2017 be on any DevOps engineer’s calendar.

Thanks OPI for being a great company and sending us!

Slide Links

These are the slide deck links I harvested off Twitter #hashiconf. Please add comments if you find more. If you go to the HashiConf page you might find everything. There is a live stream of the first keynote.


About the Author


Patrick Double

Principal Technologist

I have been coding since 6th grade, circa 1986, and professionally (i.e. since graduating college) since 1998, when I graduated from the University of Nebraska-Lincoln. Most of my career has been in web applications using JEE. I work the entire stack from user interface to database. I especially like solving application security and high availability problems.

