Bitbucket Parameterized Pipelines

Introduction

I’d like to address how to handle lots of deployment environments within
Bitbucket Pipelines. Below I present two options:

  1. Using tags to parameterize builds
  2. Using the Bitbucket API to pass variables

Lots of Environments

There are plenty of examples of pipelines files explaining how to, for example, deploy to a dev, test, staging, and production environment. But what if you have 20 environments that you need to deploy applications to?

For example, my team is responsible for ten services. When there were fewer environments, their pipeline files looked something like this:

pipelines:
  custom:
    dev-alice:
      - step: *build-application
      - step: *register-revision

      # I've included our specific CodeDeploy script here. The important
      # part is the "deployment-group-name", which corresponds to
      # each of the 20 environments.
      - step:
          image: aneitayang/aws-cli:1.0
          script:
            - aws --region us-east-1 deploy create-deployment \
              --application-name $BITBUCKET_REPO_SLUG \
              --s3-location bucket=$S3_BUCKET,bundleType=zip,key=$BITBUCKET_REPO_SLUG/$BITBUCKET_BUILD_NUMBER.zip \
              --deployment-group-name dev-alice
    dev-bob:
      - step: *build-application
      - step: *register-revision
      - step:
          image: aneitayang/aws-cli:1.0
          script:
            - aws --region us-east-1 deploy create-deployment \
              --application-name $BITBUCKET_REPO_SLUG \
              --s3-location bucket=$S3_BUCKET,bundleType=zip,key=$BITBUCKET_REPO_SLUG/$BITBUCKET_BUILD_NUMBER.zip \
              --deployment-group-name dev-bob
    dev-carol:
      - step: *build-application
      - step: *register-revision
      - step:
          image: aneitayang/aws-cli:1.0
          script:
            - aws --region us-east-1 deploy create-deployment \
              --application-name $BITBUCKET_REPO_SLUG \
              --s3-location bucket=$S3_BUCKET,bundleType=zip,key=$BITBUCKET_REPO_SLUG/$BITBUCKET_BUILD_NUMBER.zip \
              --deployment-group-name dev-carol

We can cut and paste our way to victory for a while, but at some point this becomes tiresome and error-prone.
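The *build-application and *register-revision references above are YAML anchors, defined once in a definitions block at the top of the file. The real steps aren't shown in this post, so here is a minimal sketch with placeholder build and push commands:

definitions:
  steps:
    - step: &build-application
        name: Build Application
        script:
          # Placeholder build; the real step produces the zip bundle
          # referenced by the deploy steps.
          - ./build.sh
        artifacts:
          - build/**
    - step: &register-revision
        name: Register Revision
        image: aneitayang/aws-cli:1.0
        script:
          # Placeholder registration; pushes the bundle to S3 so CodeDeploy
          # can find it at the key used by the create-deployment steps above.
          - aws --region us-east-1 deploy push \
            --application-name $BITBUCKET_REPO_SLUG \
            --s3-location s3://$S3_BUCKET/$BITBUCKET_REPO_SLUG/$BITBUCKET_BUILD_NUMBER.zip \
            --source build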

Bitbucket Variables

Bitbucket provides us with variables in four scopes:

  1. Workspace
  2. Repository
  3. Deployment Environment
  4. Build Environment

Workspace variables apply to all of our repositories. They are good for things like shared keys, passwords and locations of artifact repositories.

Repository variables help us with service-specific settings, like directories within artifact repositories or names for generated Docker images.
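Either way, the values show up in steps as ordinary environment variables. A quick illustration, assuming a hypothetical workspace-level ARTIFACT_REPO_URL and a repository-level DOCKER_IMAGE_NAME:

- step:
    name: Publish Image
    services:
      - docker
    script:
      # ARTIFACT_REPO_URL would come from the workspace scope,
      # DOCKER_IMAGE_NAME from this repository's settings.
      # (Registry authentication omitted for brevity.)
      - docker build -t $ARTIFACT_REPO_URL/$DOCKER_IMAGE_NAME:$BITBUCKET_BUILD_NUMBER .
      - docker push $ARTIFACT_REPO_URL/$DOCKER_IMAGE_NAME:$BITBUCKET_BUILD_NUMBER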

Deployment Environment Variables are also defined in a repository’s settings, but we can scope them to each “deployment:” in our pipeline definition.

dev-alice:
  - step: *build-application
  - step: *register-revision
  - step:
      image: aneitayang/aws-cli:1.0

      # The "deployment" here matches the environment we defined
      # in the Deployment Settings menu
      deployment: dev-alice

      # The DEPLOYMENT_GROUP variable below is specific to the
      # "dev-alice" group. But does that help us?
      script:
        - aws --region us-east-1 deploy create-deployment \
          --application-name $BITBUCKET_REPO_SLUG \
          --s3-location bucket=$S3_BUCKET,bundleType=zip,key=$BITBUCKET_REPO_SLUG/$BITBUCKET_BUILD_NUMBER.zip \
          --deployment-group-name "$DEPLOYMENT_GROUP"

This still puts us in a position of having to update every service’s bitbucket-pipelines.yml whenever we want to add an environment.

Build Variables can be passed via the Bitbucket API, as discussed toward the end of this post.

Tags as Parameters

Bitbucket exposes a BITBUCKET_TAG environment variable to us, so we can add a tags section to our bitbucket-pipelines.yml. Note that the hard-coded dev-alice has been replaced with the environment name derived from $BITBUCKET_TAG.

tags:
  'deployment/**':
    - step: *build-application
    - step: *register-revision
    - step:
        image: aneitayang/aws-cli:1.0
        script:
          # Strip the "deployment/" prefix so the tag maps back to the
          # deployment group name (deployment/dev-alice -> dev-alice).
          - aws --region us-east-1 deploy create-deployment \
            --application-name $BITBUCKET_REPO_SLUG \
            --s3-location bucket=$S3_BUCKET,bundleType=zip,key=$BITBUCKET_REPO_SLUG/$BITBUCKET_BUILD_NUMBER.zip \
            --deployment-group-name "${BITBUCKET_TAG#deployment/}"
    - step:
        name: Cleanup Tag
        script:
          - git push --delete origin $BITBUCKET_TAG

Our developers can then use the GUI to tag their commits and trigger a deploy to an arbitrary environment.

Or, from the command line:

git tag deployment/dev-alice
git push origin deployment/dev-alice

So now, if we need to deploy to an integration environment for a new or prospective customer, we can do it without modifying the pipelines. This is something of a hack, though, and it won’t help much if you need to pass multiple variables to your pipeline.

Build Variables via the API

Another way to pass arbitrary variables to Bitbucket Pipelines is via its API; see Trigger Pipeline For Branch in the Bitbucket documentation. This gives us the Build Environment scope from the list above: variables defined on a per-build basis.

Obtaining Bitbucket API Tokens

First you’ll need to obtain an API token, which is done in your account’s Personal Settings.
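A quick way to confirm the credentials work is to call the user endpoint with them (same placeholder username and password as the example later in this post):

# Returns your user profile as JSON if the credentials are valid.
curl -s -u mybbusername:myapppassword01234 https://api.bitbucket.org/2.0/user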

Passing Build Variables

Here we reference $DEPLOYMENT_GROUP in the deploy step, much as we did above with the deployment environments:

custom:
  deploy-task:
    - step: *build-application
    - step: *register-revision
    - step:
        image: aneitayang/aws-cli:1.0
        script:
          - aws --region us-east-1 deploy create-deployment \
            --application-name $BITBUCKET_REPO_SLUG \
            --s3-location bucket=$S3_BUCKET,bundleType=zip,key=$BITBUCKET_REPO_SLUG/$BITBUCKET_BUILD_NUMBER.zip \
            --deployment-group-name $DEPLOYMENT_GROUP

With the API token stowed away, we can pass the DEPLOYMENT_GROUP variable via a REST call. For example, here I’m calling the “deploy-task” pipeline defined above on my feature branch, AWS-860, and passing in a value via the variables array.

# "DEPLOYMENT_GROUP" is a variable we reference in the
# pipeline like any other variable.
curl -X POST -is -u mybbusername:myapppassword01234  -H 'Content-Type: application/json'  https://api.bitbucket.org/2.0/repositories/MY_WORKSPACE/MY_GIT_REPO/pipelines/ -d '{
  "target": {
    "type": "pipeline_ref_target",
    "ref_type": "branch",
    "ref_name": "AWS-860",
    "selector": {
      "type": "custom",
      "pattern": "deploy-task"
    }
  },
  "variables": [
    {
      "key": "DEPLOYMENT_GROUP",
      "value": "dev-alice"
    }
  ]
}'
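The call returns the newly created pipeline object, including its uuid, which you can feed back into the pipelines API if a script needs to wait for the result. A rough sketch, assuming jq is available and the JSON body above has been saved to payload.json:

# Trigger the pipeline and capture the new pipeline's UUID.
UUID=$(curl -s -X POST -u mybbusername:myapppassword01234 \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/MY_WORKSPACE/MY_GIT_REPO/pipelines/ \
  -d @payload.json | jq -r '.uuid')

# Poll until the pipeline finishes.
while true; do
  STATE=$(curl -s -u mybbusername:myapppassword01234 \
    "https://api.bitbucket.org/2.0/repositories/MY_WORKSPACE/MY_GIT_REPO/pipelines/$UUID" \
    | jq -r '.state.name')
  echo "Pipeline state: $STATE"
  [ "$STATE" = "COMPLETED" ] && break
  sleep 10
done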

Multiple Variables and Default Values

If your pipeline has multiple variables that need to be passed in on an ad hoc basis, the API is currently the only way to do this without modifying repository settings.
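The variables array in the payload simply takes more entries; for example, passing a hypothetical DEBUG flag alongside the deployment group:

  "variables": [
    {
      "key": "DEPLOYMENT_GROUP",
      "value": "dev-alice"
    },
    {
      "key": "DEBUG",
      "value": "true"
    }
  ]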

It’s worth noting that bitbucket-pipelines.yml lets you set defaults for your variables just like in the shell.

- step:
    script:
      # Use DEBUG if it's passed in (or defined via one of the other
      # variable scopes above); otherwise default to true.
      - echo ${DEBUG:-true}

Wrap-up

Bitbucket Pipelines lets us get arbitrary variables into our pipelines, but we can’t (yet) do it via select lists or freeform text boxes in the GUI’s deployment workflow. There are other strategies, like offloading the deployment portion of this workflow to another tool and parameterizing it there, but initiating the pipelines from Bitbucket helps us preserve our workflow.


About the Author

Tom McLaughlin

Senior Consultant

I am a technologist specializing in application development, cloud enablement, and modernization.
