DevOps methodology.

This section covers some tools that can make deployments easier, such as infrastructure as code (IaC) and AWS CodeDeploy.

You’ve written code and tests for the critical parts of your application, and you have somewhere to put that code: in this case, a CodeCommit repository with a strong branching strategy. You also have a CI service (in this case, CodeBuild) that monitors the repository and automatically runs tests for every commit that is pushed. You have also learned how to deploy a serverless application.

AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories.

A build takes your source, compiles it, and retrieves dependency packages from a repository, like Node Package Manager (npm) modules or Apache Maven artifacts in Java. A build usually includes automated testing to check the quality of the code, and unit tests to exercise the code and make sure it does what you expect.
AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy.
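CodeBuild reads its build instructions from a buildspec file at the root of the repository. Here is a minimal sketch for a Node.js project; the phase commands and artifact directory are illustrative, not taken from this course:

```yaml
# buildspec.yml -- a minimal CodeBuild build specification (illustrative)
version: 0.2

phases:
  install:
    commands:
      - npm install        # retrieve dependency packages (npm modules)
  build:
    commands:
      - npm test           # run the automated unit tests on every commit
      - npm run build      # compile/package the application

artifacts:
  files:
    - '**/*'               # package the build output for later deploy stages
  base-directory: dist     # assumes the build writes its output to dist/
```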
A branch in Git is a pointer to a commit. When you make commits in a branch, the pointer automatically moves forward. The main idea behind the Feature Branch Workflow is that all feature development takes place in a dedicated branch. This workflow makes it easy for multiple developers to work on a feature without disrupting the main code base. When working in source control, you and your team need to agree on a convention that allows you to work on features and keep code out of the main branch until you are confident that it is ready for production.

When you write code, that code will do exactly what you tell it to do. The important question is: are you telling it to do the right thing? At this point, the application is under source control and you have automated tests to make sure the code is in a good state. Because these tests run each time you commit code, you receive immediate feedback on whether the committed code still works.

The bottom line: it’s easy to make mistakes when humans oversee large deployments, which is not ideal. Instead, the team wants to be more agile, have a more reliable process, and automate where possible to prevent some of those human errors, and they already do this to an extent.

To understand the deployment strategies for serverless applications, we will first cover the terminology of versions, aliases, and traffic shifting. Each AWS Lambda function can have any number of versions and aliases associated with it.

A version is a snapshot of a function that includes its code and configuration, and it is a good practice to publish a new version each time you update your function code. When you invoke a specific version (using the function name and version number combination), you get the same code and configuration regardless of the current state of the function. This protects you against accidentally updating production code. To use versions, you should create an alias, which is a pointer to a version.

Aliases have a name and an Amazon Resource Name (ARN), similar to the function, and are accepted by the Invoke APIs. If you invoke an alias, Lambda in turn invokes the version that the alias points to. In production, you would first update your function code, publish a new version, and invoke that version directly to run tests against it. After you are satisfied, you would change the alias to point to the new version.

Traffic shifting splits incoming traffic between two versions of a Lambda function based on preassigned weights. You can use this feature to gradually shift traffic between the versions, helping you reduce the risk of new Lambda deployments. You can also change your Lambda function’s code without affecting upstream dependencies that rely on the alias.
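In CloudFormation (which AWS SAM builds on), a weighted alias is declared with a RoutingConfig. A hedged sketch; the resource names, version numbers, and weight below are made up for illustration:

```yaml
# Illustrative CloudFormation snippet: an alias named "live" that sends
# 90% of traffic to version 1 and 10% to version 2 of a Lambda function.
MyFunctionAlias:
  Type: AWS::Lambda::Alias
  Properties:
    FunctionName: !Ref MyFunction    # hypothetical function resource
    FunctionVersion: "1"             # the version the alias points to
    Name: live
    RoutingConfig:
      AdditionalVersionWeights:
        - FunctionVersion: "2"       # the new version under test
          FunctionWeight: 0.1        # 10% of invocations go here
```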

Deploying serverless applications
If you use AWS SAM to create your serverless application, it comes with built-in AWS CodeDeploy integration to provide gradual Lambda deployments. With a few lines of configuration, AWS SAM does the following for you:

Deploys new versions of your Lambda function, and automatically creates aliases that point to the new version.

Gradually shifts customer traffic to the new version until you’re satisfied that it’s working as expected, or you roll back the update.

Defines pre-traffic and post-traffic test functions to verify that the newly deployed code is configured correctly and your application operates as expected.

Rolls back the deployment if Amazon CloudWatch alarms are generated.
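In a SAM template, this behavior is enabled with AutoPublishAlias and a DeploymentPreference section. A sketch under the assumption of a Python function; the function, alarm, and hook resource names are placeholders:

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    AutoPublishAlias: live                 # publish a new version and repoint the alias
    DeploymentPreference:
      Type: Canary10Percent5Minutes        # shift 10%, wait 5 minutes, shift the rest
      Alarms:
        - !Ref MyErrorsAlarm               # roll back if this CloudWatch alarm fires
      Hooks:
        PreTraffic: !Ref PreTrafficHook    # test function run before traffic shifting
        PostTraffic: !Ref PostTrafficHook  # test function run after traffic shifting
```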

Deployment options
Now that you know about AWS SAM, you can learn about your deployment options. The following list describes other traffic-shifting options that are available:

Canary: Traffic is shifted in two increments. You can choose from predefined canary options. The options specify the percentage of traffic that’s shifted to your updated Lambda function version in the first increment, and the interval, in minutes, before the remaining traffic is shifted in the second increment.

Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic that’s shifted in each increment and the number of minutes between each increment.

All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version at once.

An all-at-once deployment shifts traffic instantly from one version to another, while canary and linear are safer, more gradual deployment options.
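In a SAM template, these options map to predefined DeploymentPreference types. The names below are examples of the documented patterns; which one fits depends on how cautious you want the rollout to be:

```yaml
DeploymentPreference:
  # Pick exactly one Type, for example:
  Type: Canary10Percent5Minutes        # canary: 10% now, the remaining 90% after 5 minutes
  # Type: Linear10PercentEvery1Minute  # linear: 10% more every minute
  # Type: AllAtOnce                    # all-at-once: shift 100% immediately
```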

The team has decided to use AWS CodeDeploy to take the build artifact produced by CodeBuild and deploy it to our application instances.

CodeDeploy supports deployments to EC2 instances, on-premises physical servers, AWS Lambda, and Amazon ECS. Our application is hosted on EC2 instances. To get started with CodeDeploy, we need to create an application and one or more deployment groups.

An application defines the compute platform we are deploying to, chosen from the supported platforms: EC2/on-premises, Lambda, or ECS.

An application can have multiple deployment groups. For an EC2 deployment, the deployment group needs several pieces of information: the IAM role that CodeDeploy will use to authenticate to other services, and the deployment style, either in-place or blue/green. In-place replaces the application on our existing instances, while blue/green provisions new green instances and deploys the application to them.

Then we have to specify the deployment configuration. For an EC2 deployment, this controls the number or percentage of healthy hosts we want available during a deployment.

We also have to specify how to find our instances: by tags, by Auto Scaling group, or even by tags on on-premises instances. For a blue/green deployment, we can configure an Auto Scaling group that CodeDeploy will copy to provision the green instances. Specifically for blue/green, you control when traffic is routed to the green instances: it can happen automatically, or the deployment can wait for you to continue when you’re happy to go ahead. You also control how long the original, or blue, instances are kept after a successful deployment.

Optionally, you can also select a load balancer so CodeDeploy knows where to register and deregister instances during the deployment. We can also associate CloudWatch alarms: a deployment can be halted if any of the configured alarms go into an alarm state. Finally, there is automatic rollback: if enabled, a failed deployment triggers a rollback, which simply redeploys the last successful deployment for you.
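Pulling these deployment-group settings together, here is a hedged CloudFormation sketch of an in-place EC2 deployment group; the resource names, tag, target group, and alarm are all placeholders, not values from this course:

```yaml
MyDeploymentGroup:
  Type: AWS::CodeDeploy::DeploymentGroup
  Properties:
    ApplicationName: !Ref MyApplication          # the CodeDeploy application
    ServiceRoleArn: !GetAtt CodeDeployRole.Arn   # IAM role CodeDeploy uses
    DeploymentConfigName: CodeDeployDefault.OneAtATime  # healthy-host control
    Ec2TagFilters:                               # find instances by tag
      - Key: Environment
        Value: production
        Type: KEY_AND_VALUE
    LoadBalancerInfo:
      TargetGroupInfoList:
        - Name: my-target-group                  # register/deregister here
    AlarmConfiguration:
      Enabled: true
      Alarms:
        - Name: my-cloudwatch-alarm              # halt deployment if this fires
    AutoRollbackConfiguration:
      Enabled: true
      Events:
        - DEPLOYMENT_FAILURE                     # redeploy last good revision
```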

With an application and deployment group in place, I’m ready to create a deployment. A deployment for EC2 just needs to know my application, the deployment group, and the revision of my application.

The revision is the artifact we want to deploy. It contains all the files and the instructions to run on each host during the deployment. These instructions are configured in an AppSpec file; we will go into much more detail later on the AppSpec file and the hooks you can use to run your commands during a deployment. How exactly are these commands run on the instances? That is the job of the CodeDeploy agent, which must be installed and running on every instance where we plan to deploy the application.
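For EC2/on-premises deployments, the AppSpec file is a YAML file named appspec.yml at the root of the revision. A minimal sketch; the file paths and script names are illustrative:

```yaml
# appspec.yml -- instructions the CodeDeploy agent runs on each instance
version: 0.0
os: linux
files:
  - source: /index.html           # file inside the revision archive
    destination: /var/www/html    # where the agent copies it on the instance
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh    # hypothetical script in the revision
      timeout: 300
      runas: root
  BeforeInstall:
    - location: scripts/install_deps.sh   # hypothetical script in the revision
      timeout: 300
      runas: root
```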

An EC2 deployment from CodeDeploy can pick up this artifact from S3 or from a GitHub commit. When CodeDeploy runs as part of an AWS CodePipeline action, the revision can be picked up from the output of another CodePipeline action, for example the artifacts created by a pipeline build phase.

I now have an application and a deployment group set up. I have everything I need to create a deployment. This will send an application revision to the deployment group we just created. Let me show you what a simple revision looks like.

I have a revision open in an editor here. The revision contains a simple AppSpec file, the scripts I have specified to run with each of the hooks, and a single source file I want to deploy on my instances.

Back in the CodeDeploy console, from my deployment group, I can choose Create deployment.

All I need to supply here is the location of my revision. I have zipped up the contents of my revision and copied this to an S3 bucket. I can enter the S3 location of my revision.

If I choose View events for one of my instances, I see the sequence of events that happened during the deployment. BlockTraffic is the event that deregistered the instance from the load balancer, and AllowTraffic is the event that registered it back. ApplicationStop and BeforeInstall are the two steps where my revision scripts ran.

These events ran simultaneously on all of my instances because this deployment used the all-at-once deployment configuration. If we used one-at-a-time instead, one instance would be selected and all of these events would complete on it; only then would CodeDeploy move on to the next instance.

That was a quick introduction to CodeDeploy. We covered CodeDeploy applications, deployment groups, deployments, and revisions, and we saw a deployment run against my five instances, where the CodeDeploy agent followed the instructions in my AppSpec file to get my application installed.

Author: Yuzu
Copyright Notice: All articles in this blog are licensed under CC BY-NC-SA 4.0 unless stating additionally.