Everyone’s talking about #serverless these days. Technologies like kubeless.io, Fission, and AWS Fargate promise to relieve app developers of the operational complexity of running entire applications in a reliable, resilient, and scalable way by hiding the underlying virtual computing resources. Curious as I am, I deployed a simple demo app using AWS Fargate to find out how it works. Naturally, I was also looking for insights into my application and for an efficient way to include Dynatrace in the deployment process.
Once, there was a simple app
I wrote a simple Spring Boot application called bookkeeper that manages book records in an AWS RDS instance running the MariaDB engine. A Node.js web application queries the Spring Boot application for book records and displays the results in an Express web frontend. For ease of deployment, I containerized both application components using Docker.
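A minimal Dockerfile for the Spring Boot component might look like this; the base image, jar name, and port are assumptions for illustration, not taken from the actual project:

```dockerfile
# Run the packaged Spring Boot jar on a small JRE base image
FROM openjdk:8-jre-alpine
COPY target/bookkeeper.jar /app/bookkeeper.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/bookkeeper.jar"]
```

The Node.js frontend gets an analogous Dockerfile based on a `node` image, copying in the app sources and running `npm start`.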
What rhymes with Stargate?
Fargate. Fargate is short for ‘AWS manages compute resources so you don’t have to worry about that’. To be more precise, AWS Fargate allows you to run containers without having to manage underlying EC2 instances, removing the need to choose server types and think about scale. Let’s take it for a spin.
Make sure you work in the North Virginia (us-east-1) region; at the time of writing, Fargate is only available there. Unfortunately, the first-run wizard didn’t allow me to reuse existing subnets and security groups, which wasn’t practical in my case: I wanted to reuse them so that my already deployed and monitored webapp could talk to the service managed by ECS.
I followed this step-by-step approach to deploy my dockerized bookkeeper application using Fargate.
- Create an AWS ECR (Elastic Container Registry) repository
- Build the Docker image and push it to AWS ECR
- Create a cluster and select the Networking only option
- Create a new task definition and select the Fargate launch type
- Set name, roles, and resource specification for the task, appropriate for the containerized service you’re using
- Add the container definition, referencing the previously uploaded Docker image from the AWS ECR, and specifying memory limits and port forwardings
- Create a service in the cluster
- Select FARGATE launch type, the previously created task as the task definition, and all other properties to your needs (e.g., I needed the service to run in a specific Subnet to make it accessible to other parts of my application)
- I used two tasks with an Application Load Balancer (ALB) distributing load between them
- Wait and see
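The cluster, task-definition, and service steps above can also be scripted instead of clicked through in the console. Below is a minimal sketch using boto3, the AWS SDK for Python; the account ID, role ARN, image URI, subnet, security group, and sizing values are placeholders I made up, not values from my deployment:

```python
def fargate_task_definition(image_uri, family="bookkeeper"):
    """Build the request body for ECS register_task_definition.

    Fargate requires the awsvpc network mode and task-level cpu/memory
    values from the supported size combinations.
    """
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": "512",      # 0.5 vCPU
        "memory": "1024",  # 1 GB -- a valid pairing for 512 CPU units
        "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        "containerDefinitions": [{
            "name": family,
            "image": image_uri,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        }],
    }

# With AWS credentials configured, the console steps collapse to:
#   import boto3
#   ecs = boto3.client("ecs", region_name="us-east-1")
#   ecs.create_cluster(clusterName="bookkeeper-cluster")
#   ecs.register_task_definition(**fargate_task_definition(
#       "123456789012.dkr.ecr.us-east-1.amazonaws.com/bookkeeper:latest"))
#   ecs.create_service(
#       cluster="bookkeeper-cluster",
#       serviceName="bookkeeper-service",
#       taskDefinition="bookkeeper",
#       desiredCount=2,               # two tasks behind the ALB
#       launchType="FARGATE",
#       networkConfiguration={"awsvpcConfiguration": {
#           "subnets": ["subnet-00000000"],
#           "securityGroups": ["sg-00000000"],
#           "assignPublicIp": "ENABLED",
#       }})
```

The function only builds the request dictionary, so you can inspect or version-control the task definition before registering it.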
Take good care of the health checks in the target group of your load balancer. If you provide a health check that doesn’t work properly, your tasks will be redeployed over and over again until the health check succeeds. I can’t completely rule out that this happened to me … so I recommend taking good care of those health checks.
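Sticking with boto3, the health check lives on the ALB’s target group and can be adjusted there. A sketch, again with made-up values; `/actuator/health` is Spring Boot’s default health endpoint and an assumption here — any URL that returns HTTP 200 once the app is up will do:

```python
def health_check_settings(path="/actuator/health"):
    """Target-group health check settings tolerant of a slow-starting Java app."""
    return {
        "HealthCheckPath": path,
        "HealthCheckIntervalSeconds": 30,
        "HealthCheckTimeoutSeconds": 5,
        "HealthyThresholdCount": 2,
        "UnhealthyThresholdCount": 3,
        "Matcher": {"HttpCode": "200"},
    }

# Applied to the load balancer's target group (ARN is a placeholder):
#   import boto3
#   elbv2 = boto3.client("elbv2", region_name="us-east-1")
#   elbv2.modify_target_group(
#       TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:"
#                      "123456789012:targetgroup/bookkeeper/0123456789abcdef",
#       **health_check_settings())
```

If the interval and thresholds are tighter than your application’s startup time, ECS will kill and replace tasks before they ever become healthy — exactly the redeploy loop described above.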
After some time, the application is deployed and ready to use. Now change the URL in your Node.js app to the endpoint provided by the ALB, and you should be set.
Dynatrace and Fargate
Dynatrace offers a number of approaches to gaining insights into your application running on ECS Fargate. With Fargate, we don’t have access to EC2 instances, so we can’t rely on full-stack monitoring; there is no host on which OneAgent could provide out-of-the-box monitoring for all the Docker containers deployed there.
For this article, I included the Dynatrace PaaS agent in the Docker image at build time by modifying the Dockerfile. If you need details on how to make that work, please reach out to me (@wall_dirk).
After creating a new task definition based on the modified Docker image and updating the service, I immediately gained insights into the application: the PaaS agent starts automatically along with the container, monitors the application, and sends data to Dynatrace.
BAM! I instantly see all parts of my application: the Node.js webapp, the load-balanced Java services maintained by Fargate, and the database. I see the average response time of each component and also that there have been no failed requests so far. From here, you can explore more details about your application and the environment it’s running in. For example, you’ll find that AWS Fargate provisions 2 CPU cores and 4 GB of memory for each task, even if you haven’t specified any resource requirements.
This concludes my first steps with AWS Fargate and Dynatrace. Fargate also provides auto scaling capabilities, but I didn’t look into that for this article. I wanted to keep it simple for now. However, I most likely will revisit the additional features of AWS ECS in the future. You can find out more about Dynatrace AWS monitoring capabilities here.
PaaS platforms provide many convenience features that make software development and operations easier. Serverless computing goes even further: virtual computing resources become transparent, and almost no time is needed to specify resource requirements or provide runtime libraries; everything is already taken care of. It lets developers focus exclusively on code, and nothing but the code. While this may sound like a dream come true, it also introduces new problems. In removing the complexity of operating applications, you also lose a lot of control over the size of your application environment, the efficiency of resource usage, and the TCO of your deployment. Furthermore, troubleshooting gets harder, because you no longer know all the details of your deployment. Application health and performance metrics are key, and Dynatrace is here to deliver them.
What about you?
What’s your experience with AWS Fargate? Do you have an exciting use case you want to share? Or do you just want to put Dynatrace to the test to gain insights into your app? Either way, please don’t hesitate to reach out to me (@wall_dirk) – I’m interested in your stories and experiences around AWS ECS.
This syndicated content is provided by Dynatrace and was originally posted at https://www.dynatrace.com/news/blog/first-steps-with-aws-fargate/