02 December 2019
The following tutorial covers setting up Hazelcast on AWS Fargate using the Hazelcast Kubernetes Plugin with DNS service discovery. If you are unfamiliar with any of these technologies, I’ve provided some summaries below. Those familiar with the technologies can skip straight to the Setting up Hazelcast on Fargate section. Enjoy!
For a couple of years now, when it comes to choosing an all-purpose cache, I’ve reached for Hazelcast. Each time I’ve been impressed by the wide array of clustering options available. In fact, I’ve never used the same discovery/clustering configuration twice. While I find exploring new ways of doing things exciting, my main driver for using different configurations is the changing needs of my clients, who want to take advantage of cloud offerings that fit how they want to manage their stacks. Hazelcast is a Swiss Army knife of caches: it provides several different data structures as well as many ways to configure cache loading and invalidation.
Amazon Web Services (AWS) provides a service for running and scaling Docker containers called Elastic Container Service (ECS). ECS comes in two flavors: EC2 and Fargate.
With EC2, you run your containers on virtual machines that you spin up with AWS’s EC2 service, but ECS takes care of orchestrating the Docker containers.
With Fargate, ECS still manages running, discovering, and scaling the containers, but you never spin up any EC2 instances; the underlying infrastructure is completely managed by AWS.
Nearly all projects I’ve been on recently are working with containers and different ways of deploying them to the cloud. It started with just putting Docker on virtual machines (VMs), but constant upgrading and DevOps overhead are leading my clients to look at more managed solutions. While Kubernetes (K8) seems to be the most popular, there are cases where K8 is overkill. These projects involve nearly a dozen services maintained by a handful of teams. These teams have already gone through their battles with Docker Compose and perhaps dabbled in Docker Swarm or Rancher. They look at services like Azure Kubernetes Service (AKS) or Amazon’s Elastic Kubernetes Service (EKS) and they see the good and the bad. The good is that they can have an entirely managed Docker environment that avoids most of the pain of system patching and scaling. The bad is that they have to learn an entirely new tool with its own opinions, commands, and best practices. AWS ECS has been around for some time and offers a nice middle ground between plain Docker and fully orchestrated K8.
Another great benefit of ECS is its Command Line Interface (CLI), which offers a number of useful features for teams coming from a self-hosted Docker environment. The first, and by far the most useful in my opinion, is its integration with Docker Compose. Many Docker projects I’ve been on have adopted Compose soon after figuring out the Docker basics to better manage all the parameters that go into spinning up a stack of containers. Moving to Kubernetes requires these files to be converted, while ECS is able to use the docker-compose.yml nearly as-is. Notable exceptions are secrets, which can be managed via the ecs-params.yml file by associating them with Systems Manager (SSM) Parameter Store values. The ecs-params.yml also allows you to set the memory and CPU limits that will be used to size your container in Fargate. This is important since you don’t pay directly for the underlying EC2 instances with Fargate; billing is based on the CPU and memory set when you start your container. The ecs-params.yml also lets you set networking and security parameters within AWS to determine which services your container will have permission to interact with, which ports will be open, and which Virtual Private Cloud (VPC) subnet you’ll be deployed into.
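As a rough sketch of what the SSM secrets integration looks like (the parameter path and environment variable name here are hypothetical, not part of this tutorial), each service in ecs-params.yml can map Parameter Store entries to container environment variables:
task_definition:
  services:
    hazelcast-service:
      secrets:
        - value_from: /bobpaulin/example-secret  # SSM Parameter Store name or ARN (hypothetical)
          name: EXAMPLE_SECRET                   # environment variable exposed to the container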
The CLI also allows you to set up service discovery in AWS Cloud Map. Cloud Map lets ECS manage AWS Route53 so that services looking to call your container can use simple DNS A or SRV records. This will be important as we configure Hazelcast, since instances will need to be able to locate each other without having to re-deploy configuration in the containers. "A" records and "SRV" records do this in different ways. When you think about a regular website like https://bobpaulin.com, an "A" record is what connects bobpaulin.com to the IP address that serves the file content. "SRV" records work a bit differently: they allow more information to be passed to the caller, such as the port to connect to when calling the service. When you start a service with ECS you can enable service discovery, which adds and removes DNS entries in Route53 as tasks are started and stopped within your cluster.
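For example, once everything in this tutorial is wired up, a host inside the VPC should be able to resolve the Hazelcast SRV record with a standard DNS tool (the name hazelcast.bobpaulin.com comes from the service and namespace defined in the later steps):
dig hazelcast.bobpaulin.com SRV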
The last piece of the puzzle is the Hazelcast Kubernetes plugin. This plugin is included in the Hazelcast community Docker image and supports two different methods of discovery. The first leverages the Kubernetes service API. That API exists only in a K8 deployment, so it won’t be of much use in a non-K8 environment.
The second is based on DNS. This implementation relies on DNS SRV records, so it can be leveraged in any environment that creates SRV records. This is what we’ll use in ECS, since AWS Cloud Map can be set up to create SRV records. This configuration will allow Hazelcast instances to locate each other and form a cluster.
Prerequisites:
Docker installed
AWS Account
AWS CLI Installation
ECS CLI Installation
AWS VPC with Internet Access
1) Create an ECR repository for Hazelcast
aws ecr create-repository --repository-name hazelcast-fargate
2) Create hazelcast.xml
Note that the <interface> element should match the subnet the Fargate tasks will be running in. For example, a CIDR of 172.30.4.0/24 would use: <interface>172.30.4.*</interface>
The service-dns value is determined by the service discovery service name and the private DNS namespace name defined later:
<service-discovery-name>.<private-dns-namespace-name>
With the names used later in this tutorial (service hazelcast, namespace bobpaulin.com) that works out to hazelcast.bobpaulin.com, which is what appears in the configuration below.
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.11.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.discovery.enabled">true</property>
</properties>
<management-center enabled="false" update-interval="2">http://localhost:8080/mancenter</management-center>
<network>
<join>
<!-- deactivate multicast which is enabled by default -->
<multicast enabled="false"/>
<aws enabled="false"/>
<tcp-ip enabled="false" />
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
<properties>
<property name="service-dns">hazelcast.bobpaulin.com</property>
</properties>
</discovery-strategy>
</discovery-strategies>
</join>
<interfaces enabled="true">
<interface>172.30.4.*</interface>
</interfaces>
</network>
</hazelcast>
3) Create Dockerfile
FROM hazelcast/hazelcast:3.11.4
ADD hazelcast.xml $HZ_HOME
4) Build docker image
Build the image with the custom hazelcast.xml. The tag for the ECR repository should come from the repositoryUri output of the following command:
aws ecr describe-repositories
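If you just want the URI for tagging (the repository name matches the one created in step 1), a JMESPath query trims the output down to a single value:
aws ecr describe-repositories --repository-names hazelcast-fargate --query 'repositories[0].repositoryUri' --output text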
Next build and tag the image
docker build -t hazelcast-fargate .
docker tag hazelcast-fargate 11111111111.dkr.ecr.us-east-1.amazonaws.com/hazelcast-fargate:3.11.4
5) Deploy the docker image to ECR
Log in to ECR
$(aws ecr get-login --no-include-email)
Push Container
docker push 11111111111.dkr.ecr.us-east-1.amazonaws.com/hazelcast-fargate:3.11.4
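To confirm the push succeeded, you can list the images in the repository:
aws ecr describe-images --repository-name hazelcast-fargate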
6) Create Cloudwatch Log Group
aws logs create-log-group --log-group-name /ecs/bobpaulin/hazelcast
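Optionally, add a retention policy so the tutorial’s logs don’t linger forever (7 days is an arbitrary choice here):
aws logs put-retention-policy --log-group-name /ecs/bobpaulin/hazelcast --retention-in-days 7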
7) Create Task Execution Role
task-execution-assume-role.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Run the following AWS CLI command to create the role:
aws iam --region us-east-1 create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://task-execution-assume-role.json
Run the following AWS CLI command to attach the role policy:
aws iam --region us-east-1 attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
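You can verify the role exists and has the policy attached with:
aws iam get-role --role-name ecsTaskExecutionRole
aws iam list-attached-role-policies --role-name ecsTaskExecutionRole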
8) Configuring the Security Group
Replace vpc-abcdefg with the ID of the VPC you’re deploying into
aws ec2 create-security-group --group-name EcsHazelcastSecurityGroup --description "Hazelcast ECS Security Group" --vpc-id vpc-abcdefg
Add ingress port rules
Replace sg-123456789 with the security group ID created above
aws ec2 authorize-security-group-ingress --group-id sg-123456789 --protocol tcp --port 5701 --cidr 0.0.0.0/0
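Opening port 5701 to 0.0.0.0/0 keeps the example simple; a tighter rule would restrict ingress to the subnet the tasks run in, for example using the CIDR from step 2:
aws ec2 authorize-security-group-ingress --group-id sg-123456789 --protocol tcp --port 5701 --cidr 172.30.4.0/24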
9) Creating a docker-compose.yml
Use the image URI, awslogs-group, and region from the previous steps.
version: '3'
services:
  hazelcast-service:
    image: 11111111111.dkr.ecr.us-east-1.amazonaws.com/hazelcast-fargate:3.11.4
    ports:
      - "5701:5701"
    logging:
      driver: awslogs
      options:
        awslogs-group: /ecs/bobpaulin/hazelcast
        awslogs-region: us-east-1
        awslogs-stream-prefix: ecs
    environment:
      - MIN_HEAP_SIZE=4g
      - MAX_HEAP_SIZE=4g
      - AWS_DEFAULT_REGION=us-east-1
10) Creating an ecs-params.yml
Replace subnet-abcdefg with your subnet
Replace sg-123456789 with your security group
Replace vpc-098765 with your vpc
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 6.0GB
    cpu_limit: 2048
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-abcdefg"
      security_groups:
        - "sg-123456789"
  service_discovery:
    private_dns_namespace:
      vpc: "vpc-098765"
      name: "bobpaulin.com"
    service_discovery_service:
      name: "hazelcast"
      dns_config:
        type: SRV
        ttl: 120
11) Configuring the ecs-cli to point to the cluster
ecs-cli configure --cluster hazelcast --default-launch-type FARGATE --config-name default --region us-east-1
Configure Profile
Replace <AWS_ACCESS_KEY_ID> and <AWS_SECRET_ACCESS_KEY> with your AWS Access Key and Access Secret respectively.
ecs-cli configure profile --access-key <AWS_ACCESS_KEY_ID> --secret-key <AWS_SECRET_ACCESS_KEY> --profile-name default-profile
12) Running the ecs-cli to create the cluster
Replace sg-123456789 with your security group
Replace vpc-098765 with your vpc
Replace subnet-abcdefg with your subnet
ecs-cli up --cluster-config default --ecs-profile default-profile --security-group sg-123456789 --vpc vpc-098765 --subnets subnet-abcdefg
Create the ECS service
ecs-cli compose --project-name hazelcast-service service up --cluster hazelcast --enable-service-discovery --dns-type SRV --sd-container-name hazelcast-service --sd-container-port 5701
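To confirm that ECS registered the namespace and discovery service in Cloud Map, you can list them:
aws servicediscovery list-namespaces
aws servicediscovery list-services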
Scale it up!
ecs-cli compose --project-name hazelcast-service service scale 3
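You can check that all three tasks reach the RUNNING state with:
ecs-cli compose --project-name hazelcast-service service ps --cluster hazelcast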
Verify the cluster is formed from the logs
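One way to tail a task’s CloudWatch logs from the CLI (a sketch; substitute a task ID from the ps output above) is:
ecs-cli logs --task-id <task-id> --cluster-config default --follow
You should see the members joining, similar to the excerpt below.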
INFO: [172.30.4.67]:5701 [dev] [3.11.4]
Members {size:3, ver:3} [
        Member [172.30.4.67]:5701 - f8044a27-e20e-45bd-adba-fcac4e069cc1 this
        Member [172.30.4.241]:5701 - a69055e8-40d7-4cad-b5c1-8dcfd008f766
        Member [172.30.4.236]:5701 - 04ad412e-bc5b-4673-9226-12f8c60a1f06
]
13) Turn it off!
Remove the Service
ecs-cli compose --project-name hazelcast-service service rm --cluster hazelcast
Remove the Cluster
ecs-cli down --cluster-config default --ecs-profile default-profile
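If you want a completely clean slate, the resources created outside the cluster can be removed as well (the security group ID is the one from step 8):
aws ecr delete-repository --repository-name hazelcast-fargate --force
aws logs delete-log-group --log-group-name /ecs/bobpaulin/hazelcast
aws ec2 delete-security-group --group-id sg-123456789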