Automated deployment from CodeCommit to S3 through CodeBuild/Lambda
Posted by Mike Apted on February 5, 2017
I've been meaning to start experimenting with CodeBuild since its announcement, and decided to put something basic but flexible together as a proof of concept.
The TL;DR was to create an environment with a CodeCommit repo and a push trigger. That trigger fires a Lambda, which invokes a CodeBuild project, depositing a set of the repo files into an S3 bucket.
It is possible to include these in a CodePipeline, rather than trigger a Lambda from CodeCommit, but there are a couple of reasons I decided to go the Lambda route. First, the project is incredibly simple: there are no test cases, no complex build steps, and no approvals. Second, there is no artifact generated, and CodePipeline will not let you select a CodeBuild project that does not produce an artifact. So Lambda+trigger it is.
Whenever possible I prefer to build and iterate in CloudFormation, as it allows me to solve problems once and then not repeat small configuration and setup mistakes once fixed. It also means no copying and pasting of ARNs, resource names, etc.
At the start of our CloudFormation template we have the version, template description and parameters we expect:
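A sketch of what that section could look like; the parameter names and constraint values here are illustrative, not necessarily the exact ones from the template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Deploy from a CodeCommit repo to an S3 bucket via CodeBuild, triggered by Lambda

Parameters:
  RepositoryName:
    Type: String
    MinLength: 1
    MaxLength: 100
    Description: Name for the new CodeCommit repository
  BucketName:
    Type: String
    MinLength: 3
    MaxLength: 63
    Description: Name for the S3 bucket the files will be deployed to
  CodeBuildProjectName:
    Type: String
    MinLength: 2
    MaxLength: 255
    Description: Name for the CodeBuild project
  LambdaFunctionName:
    Type: String
    MinLength: 1
    MaxLength: 64
    Description: Name for the Lambda function invoked by the repo trigger
  DeploySubdirectory:
    Type: String
    Default: ''
    Description: Optional subdirectory of the repo to deploy from
```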
Normally I would try to avoid explicitly naming resources wherever possible, but due to some circular template references, and an unwillingness to increase the complexity at this point, I can live with it in this case. Where possible I have added constraints on the parameters (MinLength, MaxLength, etc.) that reflect the underlying AWS service naming restrictions.
The parameters should be somewhat self-explanatory given the descriptions, but we are asking for names for the CodeCommit repo, the S3 bucket we will deploy to, the CodeBuild project and the Lambda function, plus an optional value for a subdirectory of the repo to deploy from. The subdirectory lets us keep certain files (git related, etc.) out of the deployment assets, or deploy a compiled "dist" folder if there is actually a build process to run.
Note: there is an assumption here that we are creating new resources for all of these, not using an existing set, so the names will need to be unique.
Getting into the Resources section of the template we define our services. First up are our IAM policies and permissions. Our CodeBuild role allows CodeBuild to perform needed tasks like pulling from CodeCommit, pushing to S3 and creating and publishing CloudWatch log groups and streams. If you forget this last one you will have a bit of trouble debugging any issues!
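A sketch of that role; the logical name and exact action list are my assumptions:

```yaml
  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: CodeBuildAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              # Pull the source from our CodeCommit repo
              - Effect: Allow
                Action: codecommit:GitPull
                Resource: !Sub arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:${RepositoryName}
              # Sync files into the deployment bucket
              - Effect: Allow
                Action:
                  - s3:PutObject
                  - s3:DeleteObject
                  - s3:ListBucket
                Resource:
                  - !Sub arn:aws:s3:::${BucketName}
                  - !Sub arn:aws:s3:::${BucketName}/*
              # Without this, CodeBuild cannot write logs and debugging is painful
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: arn:aws:logs:*:*:*
```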
Then we have our Lambda permission and role, which allows this Lambda to be invoked by CodeCommit, and the Lambda to itself invoke a CodeBuild project's build process. Again, we make sure to include creating and publishing CloudWatch log groups and streams:
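A sketch of those two resources. Note the permission references the Lambda by its explicitly provided name rather than a `!Ref` to the function resource, which is exactly the kind of circular-reference workaround the explicit naming enables; logical names are illustrative:

```yaml
  LambdaInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      # Explicit function name avoids a circular reference with the repo trigger
      FunctionName: !Ref LambdaFunctionName
      Action: lambda:InvokeFunction
      Principal: codecommit.amazonaws.com
      SourceArn: !Sub arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:${RepositoryName}

  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: LambdaAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              # Allow the function to kick off our CodeBuild project
              - Effect: Allow
                Action: codebuild:StartBuild
                Resource: !Sub arn:aws:codebuild:${AWS::Region}:${AWS::AccountId}:project/${CodeBuildProjectName}
              # Again, CloudWatch Logs access for debugging
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: arn:aws:logs:*:*:*
```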
Next up is our S3 bucket, where the files from the CodeCommit repo will be synced to:
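For example (logical name illustrative):

```yaml
  DeployBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
```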
After that we have our Lambda function, which is a short Python snippet included inline, that takes the customData passed in the CodeCommit trigger and uses that to invoke the CodeBuild build job:
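A sketch of that function resource; the runtime version and logical names are my assumptions, but the event shape is the documented CodeCommit trigger payload, where `customData` arrives inside `event['Records'][0]`:

```yaml
  TriggerFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: !Ref LambdaFunctionName
      Runtime: python3.12
      Handler: index.handler
      Timeout: 30
      # Assumes the Lambda execution role described above has logical name LambdaRole
      Role: !GetAtt LambdaRole.Arn
      Code:
        ZipFile: |
          import boto3

          def handler(event, context):
              # The trigger's customData field carries the CodeBuild project name
              project = event['Records'][0]['customData']
              client = boto3.client('codebuild')
              response = client.start_build(projectName=project)
              # Return just the build id; the full response is not JSON serializable
              return response['build']['id']
```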
Following that we have our CodeCommit repo definition, which includes the trigger to call our Lambda when a push is received to the master branch. You can expand the complexity here, if desired, but this works for my current use case:
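A sketch of the repo and its trigger; note the destination ARN is again built from the explicit function name to avoid a circular reference, and `customData` passes the project name the Lambda expects:

```yaml
  Repository:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: !Ref RepositoryName
      Triggers:
        - Name: DeployTrigger
          # Built from the explicit name rather than !GetAtt on the function,
          # breaking the circular reference between repo and Lambda permission
          DestinationArn: !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${LambdaFunctionName}
          CustomData: !Ref CodeBuildProjectName
          Branches:
            - master
          Events:
            - updateReference
```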
Lastly we have the CodeBuild project which produces no artifacts, and uses an inline buildspec.yaml to sync our configured subdirectory to our S3 bucket using Boto3 (included in our build environment in the install phase):
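A sketch of the project. For brevity I've swapped the Boto3 snippet for the equivalent AWS CLI `s3 sync` in the buildspec; the build image and logical names are illustrative:

```yaml
  BuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Ref CodeBuildProjectName
      # Assumes the CodeBuild role described above has logical name CodeBuildRole
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: NO_ARTIFACTS
      Source:
        Type: CODECOMMIT
        Location: !Sub https://git-codecommit.${AWS::Region}.amazonaws.com/v1/repos/${RepositoryName}
        BuildSpec: !Sub |
          version: 0.2
          phases:
            build:
              commands:
                # Sync the configured subdirectory (or repo root) to the bucket
                - aws s3 sync ./${DeploySubdirectory} s3://${BucketName} --delete
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:7.0
```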
Now that our entire pipeline is defined in a CloudFormation stack, we can kick off its creation either in the CloudFormation web console or on the CLI with:
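Something along these lines, where the stack name, template file and parameter values are all placeholders:

```shell
aws cloudformation create-stack \
  --stack-name codecommit-s3-deploy \
  --template-body file://template.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=RepositoryName,ParameterValue=my-site \
    ParameterKey=BucketName,ParameterValue=my-site-bucket \
    ParameterKey=CodeBuildProjectName,ParameterValue=my-site-build \
    ParameterKey=LambdaFunctionName,ParameterValue=my-site-trigger
```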
Note that we need the --capabilities CAPABILITY_IAM given the roles and permissions we are creating. Once the stack creation is complete you can then check out the CodeCommit repo locally, add files (including the subdirectory if you specified one), and any push back to the repo will kick off a CodeBuild process deploying your files to your S3 bucket.
Taking this basic process to the next level could involve things like:
adding a build process (say a Jekyll website build) and deploying only the subdirectory
adding a test component to the build process
adding notifications through SNS for status updates on various components
updating the S3 bucket to provide static website hosting or more specific policies
adding a CloudFront CDN to the stack fronting the bucket hosted website