My DockerCon17 Experience

This year I attended DockerCon17 to boost my understanding of Docker and its ecosystem.

DockerCon’s growth has been great, with 5,500 attendees this year. My first impression was surprise at the size of the expo hall and the number of attendees. The Austin Convention Center is large, spanning three city blocks, and DockerCon took up the entire center. According to my iPhone, I walked 15 miles in three days! (I say that as a compliment; I can use the exercise.) The surrounding downtown area has lots to do and the hotels are very nice.

In this post I want to share my experience, with a bent toward what I wanted to get out of the conference. My primary goal was to learn about container orchestrators and storage solutions.

There was a DockerCon17 phone app that was a great tool for planning my experience. It listed all of the sessions and let me create a personalized agenda from them, warned me when I tried to double-book my time, and sounded alarms when the next session was approaching. One area for improvement: sessions had attendance limits, which I only discovered when I tried to add a full session to my agenda, and the limits weren’t enforced on site. Many of the Community Theater events were over-filled, and I skipped some because it was hard to hear. Perhaps using 180-degree speakers would help.

Collaboration

My biggest takeaways from the first day keynotes were a new feature, a new project, and a talk about the open source collaboration approach of Docker, Inc. Docker, Inc. not only makes their tools available via open source but builds them as components so that users can pick and choose what they want. More than that, it allows an ecosystem to evolve around Docker, which, as was wisely stated, is the key to Docker’s success. This is a great approach for the people who use open source software and also want to contribute back to it. Personally, I think the best way to give back to open source is to contribute. That leads directly into the new project that was announced.

LinuxKit and Moby

LinuxKit, from the README, is “a toolkit for building custom minimal, immutable Linux distributions.” It includes a command line tool named moby that builds a custom Linux distribution from a YAML file into various outputs that can run virtualized, on a cloud provider, or on bare metal. It was founded on the architecture of the Docker Editions (for Mac, for Windows, for AWS, etc.), and as LinuxKit matures, those Docker Editions will be built using it. The YAML file describes which components or services are desired, and moby assembles them.
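
To give a feel for the format, here is a minimal sketch of a LinuxKit YAML file as I understood it from the demos; the file name, component images, and tags are illustrative, not exact:

# Hypothetical myos.yml; image names and tags are illustrative
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
services:
  - name: sshd
    image: linuxkit/sshd:latest

Running moby build myos.yml then assembles those components into bootable outputs.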

This is a fascinating project. I’m not sure what innovation will follow, but it seems poised to spark the same kind of innovation that Docker containers did. One thought I have is to create custom orchestration masters and workers that do not live on general-purpose operating systems but use LinuxKit to include only the necessary components. New workers could be spun up faster and with less overhead, and they would be more secure because of the reduced attack surface.

Multi-Stage Builds

Multi-stage builds were announced; the primary use case is using the same Dockerfile to build the software and then create the runtime image. This makes it easier to keep the runtime image small while keeping the build and production instructions together. For example:

# First stage: a full build environment with compilers and dev headers
FROM ubuntu:16.04 AS build
RUN apt-get update -y && apt-get install -y build-essential python python-dev
# Instructions to produce application into /app
...

# Second stage: a minimal runtime image; copy in only the built application
FROM python:2.7-alpine
COPY --from=build /app /app

There are two FROM instructions. The second one does not inherit any layers from the first, so the final image stays small; it copies over only the products it needs from the first stage. Very cool.
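
Building works like any other image; with Docker 17.05 or later, where multi-stage builds landed, you can also stop at a named stage with the --target flag, which is handy for debugging the build environment:

docker build -t myapp .
# Build only the first stage (the build environment)
docker build --target build -t myapp-build .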

Security

There were several talks about security, and Docker, Inc. has done great work enabling a secure supply chain. I enjoyed the talk from Docker, Inc.’s security team on the secure supply chain; they provided good information on everything from securing commits into the VCS through to deployment. Docker Data Center has features to require any number of signatures on an image before it can be deployed. These may come from automation, such as CI, or from individuals, such as QA or the security team.
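
One piece of that chain, image signing, can be tried from any client with Docker Content Trust; this sketch assumes a hypothetical image name:

# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1
# Pushes are now signed with Notary-managed keys
docker push example.com/myorg/myapp:1.0
# Pulls refuse unsigned or tampered tags while the flag is set
docker pull example.com/myorg/myapp:1.0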

Storage

Persistent storage is an important topic with various solutions, whether it be storage for database servers or for files. There were several vendors providing different solutions with impressive features. I found Portworx and Nimble Storage to be interesting products.

Portworx aggregates the storage available on container hosts into a resilient array. It then chooses the replica closest to the container using the volume. It advertises compatibility with any scheduler, infrastructure, and stateful container. The demo looked simple, did not require any dedicated hardware, and scaled out easily. There is a developer version; it’s worth a closer look.
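
From my notes, using it through the Docker volume API looks roughly like this; the driver name and options are as I recall them from the demo and may differ by version:

# Create a 10 GB volume with 3 replicas using the Portworx driver
docker volume create --driver pxd --opt size=10 --opt repl=3 pxvol
# Mount it into a container like any other volume
docker run -d -v pxvol:/var/lib/postgresql/data postgres:9.6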

Nimble Storage is a dedicated storage solution built on their own hardware, with a pay-for-what-you-use pricing model. The feature that interested me was the quick copying of a volume: because it’s a lightweight copy, it takes only a few seconds to copy a large amount of data (1 TB). That is very interesting for functional testing over large test data sets.

Windows and Hybrid Environments

Windows Server 2016 can run containers using a Windows base image. It was announced at DockerCon17 that Windows can now also run Linux base images. I am glad to see Microsoft reaching out beyond Windows, although I am not a fan of Windows. Companies that are more comfortable with Windows can now run Linux containers, and that will help with adoption of container technology.

At least one talk demonstrated a hybrid Linux and Windows application. The front end and API ran on Linux, the database was Microsoft SQL Server in a Windows container, and both were orchestrated with Docker Swarm. Impressive, and again something that will help with adoption.
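
Here is a sketch of how such a hybrid stack can be declared for Swarm, pinning each service to the right OS with placement constraints; the images and service names are illustrative, and this assumes a Docker version that supports the node.platform.os constraint:

version: "3.1"
services:
  web:
    # Linux-based front end
    image: myorg/frontend:latest
    deploy:
      placement:
        constraints:
          - node.platform.os == linux
  db:
    # Windows-based SQL Server
    image: microsoft/mssql-server-windows
    deploy:
      placement:
        constraints:
          - node.platform.os == windows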

Traditional Applications

A lot of attention was given to running traditional (a.k.a. legacy) applications in containers. I had thought containers required more or less 12-factor applications, but I am happy to re-think this.

James Ford of ADP gave a good use-case talk on how they transitioned their traditional infrastructure to containers. After the transition, they can work piece by piece to extract microservices, providing a smooth migration path.

There were other talks as well, but time didn’t allow me to sit in on all of them. I’m looking forward to the videos being released so I can learn more.

Image2Docker

There is a cool tool to create Docker images from existing VM images. I expected to see a tool that created a base image from the VM, but I was happily wrong.

Image2Docker is a tool that inspects a VM image to create a decent Dockerfile. The tool determines the operating system and chooses an appropriate base image. Then it scans for various applications and configurations and builds up the Dockerfile to install and configure those applications.
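
For the Windows version, usage looked roughly like the following PowerShell session in the demo; I’m reciting the cmdlet and parameters from memory, so check the project’s README:

# Install and load the Image2Docker module
Install-Module Image2Docker
Import-Module Image2Docker
# Inspect a VHD, extract the IIS artifact, and generate a Dockerfile
ConvertTo-Dockerfile -ImagePath C:\vms\websrv.vhd -Artifact IIS -OutputPath C:\i2d\websrv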

The demo was impressive, and the tool can easily be extended for a particular situation. I like the use of containers for the extension points; it makes publishing and distributing extensions easier.

There are versions of the tool for Linux and Windows. Check them out!

Play with Docker

Play with Docker is a web application that lets people play with Docker without installing it on their own workstations or needing a fast Internet connection. It can create multiple Docker instances and join them together in a Swarm. It looks like a nice tool; if you’re interested in getting started with Docker, you should take a look.

Functions as a Service

Another cool hack using containers was FaaS, or Functions as a Service. You can read the details at http://get-faas.com. The demo integrated with AWS Lambda to make it easier to write functions; the particular example extended Alexa. Lambda has a fixed set of supported languages, so if you want to use something else, what do you do? FaaS adds container support, so whatever runs in a container will now work with Lambda. Very cool.
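
Part of the appeal is that a function is just a container behind an HTTP gateway, so invoking one is a plain HTTP call. A sketch, assuming a locally running gateway on port 8080 and a deployed function named echo (both my assumptions):

# Call the echo function through the FaaS gateway
curl -d '{"text": "hello"}' http://localhost:8080/function/echo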

Vendors

There were many vendors providing solutions such as container orchestration, resilient storage, CI/CD, etc. The container space is interesting and fast-moving. DockerCon is advertised as a developer conference, and I was impressed that all of the vendors I spoke to could get into the weeds, and many had live demos using a UI or the command line. The questions I asked were answered directly and sufficiently. Kudos!

Hands-on Labs and Hacking Sessions

There were also hands-on labs and hacking sessions. From the lab descriptions I already knew the topics, so I didn’t participate in either; with so many interesting sessions and vendors, I didn’t have the time. Still, I thought it was cool that they were offered.

Conclusion

DockerCon17 was well worth the time and money. I learned enough in 2.5 days to fill my brain, and I wanted to pause the conference and go try a bunch of things. The WiFi was good, and all day people were at tables or on the floor hacking away, teaching, and learning. I recommend that anyone working with containers at a developer or operations level go to DockerCon18.

About the Author


Patrick Double

Principal Technologist

I have been coding since 6th grade, circa 1986, and professionally (i.e., since graduating college) since 1998, when I graduated from the University of Nebraska-Lincoln. Most of my career has been in web applications using JEE. I work the entire stack from user interface to database. I especially like solving application security and high availability problems.

