Jan 5, 2021

ARM Wrestling Its Way Into Mainstream Software Development

Nearly all smartphones have run ARM-based processors for years. ARM chips deliver superior performance per watt, which extends battery life.

With the recent release of the Apple Silicon M1 chip for the Mac Mini, MacBook Air, and 13″ MacBook Pro, Apple has catapulted ARM processors out of your pocket and onto your desk (or into your lap). This is exciting for “normal” users — software compiled for ARM is blazing fast, and even Intel-based software runs faster when emulated on the M1 chip via Rosetta 2.

But what about developers? Will we be able to use this new power? How will it affect us? I’ll try to dig into a few of those areas here.

What about my development environment?

For most languages and frameworks, this should not be a big deal. For the JVM, Azul Systems has already released a build of OpenJDK for Apple Silicon (and as always, you compile to Java bytecode, which is not architecture-specific). Go currently runs under Rosetta 2 emulation, and native support is planned for the upcoming Go 1.16 release. Python runs fine as well; it is an interpreted language, so you shouldn’t have to worry about compiling (though packages with native C extensions will need ARM builds).
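As a quick sanity check, a few lines of shell will tell you which architecture your terminal is actually running on — a minimal sketch; the `sysctl` line at the end only exists on macOS:

```shell
# Print the CPU architecture the current shell is running on.
arch="$(uname -m)"
case "$arch" in
  arm64|aarch64) echo "Running natively on ARM ($arch)" ;;
  x86_64)        echo "Running on x86_64 ($arch)" ;;
  *)             echo "Running on $arch" ;;
esac

# On an Apple Silicon Mac, this prints 1 when the shell itself is being
# translated by Rosetta 2 (harmlessly prints nothing elsewhere):
sysctl -n sysctl.proc_translated 2>/dev/null || true
```

This matters because an x86 terminal under Rosetta 2 will install x86 versions of your tools, which may not be what you want.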

For developers targeting Apple platforms, this should be a straightforward move. Other than macOS, Apple’s platforms (iPhone, iPad, Apple Watch, etc.) have only ever run ARM-based processors. Development should be even smoother on Apple Silicon, since Apple has been working towards this transition for years.

Docker and containers

Containers have become an essential part of many developers’ workflows. Whether running databases or upstream microservices, containers are a powerful way to run software locally. Docker for Mac has made that easy: since these containers are based on Linux, they must run in a virtual machine on the Mac. Docker for Mac runs a VM in the background and makes much of the process transparent — mounting local folders, exposing network ports, and more.

Docker has stated that it will be a while before Docker Desktop is fully released for ARM-based Macs (consider subscribing to updates on the Docker roadmap issue), but very recently they released support in their Developer Preview channel. You don’t need Docker to run containers, though. As the Kubernetes project recently noted in a blog post, Docker is a stack of tools that provides a better UX layer on top of the containerd runtime.

While many open source projects are published in multiple architectures, I think it’s safe to say that most enterprise software is published to internal repositories using only the architecture it is run on – and in most data centers, this is an x86 processor.


This gets us to an interesting predicament. Macs are extremely popular developer workstations, and will – within a couple years – all be running ARM processors. On the other hand, most workloads in data centers are running on x86 processors.

AWS first supported ARM processors with the A1 instance class running the AWS Graviton processor, and now supports many specialized capabilities with instances based on its Graviton2 processor. (The t4g.micro instance is in the free tier through March 31st, 2021, if you want to give it a spin.) As an example, an m6g.large (ARM-based) instance costs 80% of what a comparably sized m5.large (x86-based) instance costs. Obviously different workloads have different compute needs, but a potential 20% savings on compute should get the attention of any IT leader.

So far, neither Google Cloud Platform nor Azure appears to have announced support for ARM-based processors in their clouds.

What can we do to prepare for this?

First, you can set up your CI to publish artifacts for multiple architectures: x86 and ARM. That way, if developers need to pull programs and run them locally, they can do so from either x86- or ARM-based workstations. It also prepares you for a future move to ARM-based servers.
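As a sketch of what that CI step could look like for a Go service — hypothetical module path and output names, and it assumes the Go toolchain, which cross-compiles without any extra setup:

```shell
# Hypothetical CI step: build the same program for both architectures.
# GOOS/GOARCH tell the Go toolchain which platform to target.
for arch in amd64 arm64; do
  GOOS=linux GOARCH="$arch" go build -o "dist/myapp-linux-$arch" ./cmd/myapp
done
# Then publish dist/* to your artifact repository as usual.
</imports>
```

Go happens to make this trivial from a single machine; other toolchains may need a native build agent per architecture, as noted below.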

While some tooling (such as Go, and even Docker Buildx) can target multiple architectures from a single build server, other tooling will need to run on different build agents to produce artifacts for different architectures. For example, GitHub Actions does not currently offer hosted ARM runners, so building natively for ARM there requires self-hosted runners.
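For container images specifically, Docker Buildx can produce an image for each platform from a single invocation — a hedged sketch, with a placeholder registry and tag:

```shell
# Hypothetical: build and push a multi-architecture image with Buildx.
docker buildx create --name multiarch --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/team/myapp:latest \
  --push .
```

Buildx emulates the non-native platform via QEMU, so these builds are slower than native ones, but they come from one build server.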

Second, you could set up your runtime compute platform to support both architectures. This will depend on your deployment setup, but as an example, a Kubernetes cluster can have x86 and ARM nodes coexisting. Your pod specification could target one architecture or the other, or even support running on either.
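For instance, a pod spec could pin a workload to ARM nodes via the standard `kubernetes.io/arch` node label — a minimal sketch, with placeholder names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  # Schedule only onto ARM nodes. Omit this selector (and publish a
  # multi-arch image) to let the pod run on either architecture.
  nodeSelector:
    kubernetes.io/arch: arm64
  containers:
    - name: myapp
      image: registry.example.com/team/myapp:latest
```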

Finally, if your developers will need to run a lot of x86 containers and software in your development environment, you may need to think about providing Intel-based workspaces on cloud instances. For example, you could build a system where developers request an x86 VM to run the necessary software.

Should I get these for my engineers?

That depends on your needs. Currently, these Macs support a maximum of 16GB of memory, and the laptops top out at a 13″ screen. Many developers may need 32GB of RAM, and would prefer a 15″ or 16″ screen.

You may also have to consider near-term options for running x86 containers — using the above-mentioned instances in a cloud, or else providing a small x86-based PC that they could run on their local network.

Some final thoughts

Power-efficiency, speed, and price are all important factors when making decisions on your cloud infrastructure.

However, even with those exciting improvements to be made in infrastructure costs, sometimes the tail wags the dog. Developer costs may exceed infrastructure costs in many organizations, and gains to be made in developer efficiency may exceed gains to be made in compute efficiency.

A developer with a faster machine is a more efficient developer. And as developers spend more time around ARM, it will become easier to build and deploy artifacts on ARM servers. As that gets easier, we should start to see ARM take off everywhere.


I apologize for this blog post’s strong-armed punny title. If you’re in my radius of friends, you’d know I have a gaggle of kids and my dad-joke-o-meter is broken. I have to hand it to my wife, she has learned to put up with it and even appreciate my sense of humerus. There, I got it all out of my system.

About the Author

David Norton

Director, Platform Engineering

Passionate about continuous delivery, cloud-native architecture, DevOps, and test-driven development.

  • Experienced in cloud infrastructure technologies such as Terraform, Kubernetes, Docker, AWS, and GCP.
  • Background heavy in enterprise JVM technologies such as Groovy, Spring, Spock, Gradle, JPA, Jenkins.
  • Focus on platform transformation, continuous delivery, building agile teams and high-scale applications.