I took this on as a challenge to level up and improve my skills. Leveling up is an important concept to me. Many people transitioning from traditional QA or performance roles into the new world of DevOps feel at a disadvantage. I’m right there with them. For this reason, I want to share my automation journey to prove that anybody can do it. You don’t need to conquer the world in one day. Just “figure out how to automate your most manual, boring and time-consuming tasks.” Below are some of the key lessons I learned.
Borrow from your peers
Before I could start, I needed a demo application. We already have a demo application called easyTravel, but it runs as multiple processes on a single host. Since Dynatrace was purpose-built for the newer cloud infrastructure, I wanted something more cloud oriented. Did this mean I’d have to deviate from my task and first figure out how to split up the application on my own? Fortunately, the answer was no. Somebody in my company had already made a Docker version of easyTravel in a GitHub repository, complete with full instructions on how to run it with docker-compose. I was all set here, or so I thought.
At this time, I had just completed a live workshop covering Andi Grabner’s Unbreakable Pipeline tutorial. I decided I’d build upon the foundation he laid by grabbing a copy of his AWS CloudFormation script and modifying it to suit my needs. No need to create one from scratch. I was all set here, or so I thought… again.
The point here is that others in your organization may have already done work you can build from. I personally find it overwhelming and unnecessary to try to study a new technology and understand it all at once. Often, it is easier to deconstruct what others have already done in order to get started and learn the basics, or at least learn what is necessary to complete my task. In this way, learning does not become a roadblock.
Ask for help
Remember how I said I “thought” I was all set with the Docker version of easyTravel? I hit some snags. The original easyTravel Docker repository runs like a charm. However, the loadgen container, the one generating traffic, is configured to work with User Experience Management (UEM) from our classic AppMon product. I wanted the load scripts to work with Dynatrace’s Real User Monitoring (RUM).
Once I had the loadgen container running, I figured out how to make it work with Dynatrace RUM. It was a simple change in a config file. Next, I needed to figure out how I could automate that change. Months before this project, I had started working my way through The Docker Book, so I was aware that I could execute commands during container creation using the Dockerfile. I thought to myself, could I really open a file and modify a line in this way? Even if I could, it might not be a smart thing to do. After some deliberation, I decided that the best way to make this change would be, instead, to rebuild the Docker image with the modified file. After researching how to rebuild an image, I came to the conclusion that I needed help. Helping each other is the spirit of DevOps, isn’t it?
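For illustration, the kind of one-line config change I’m describing can be scripted with sed. The file contents and property names below are invented stand-ins, not the actual easyTravel loadgen settings:

```shell
# Invented stand-in for the loadgen config; the real easyTravel
# property names are different.
cat > loadgen.properties <<'EOF'
uem.mode=appmon
uem.inject=true
EOF

# Flip the single line that selects the monitoring mode --
# the kind of one-line edit described above.
sed -i 's/^uem.mode=appmon$/uem.mode=dynatrace/' loadgen.properties

grep '^uem.mode' loadgen.properties
```

A command like this could run from a Dockerfile RUN step, though, as noted, rebuilding the image with an already-edited file is the cleaner route.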
It turns out my manager, Asad Ali, had spent some quality time with Docker. He was happy to get me started on the rebuild path, wisely taking me only as far as he needed to. We traced through the files in the original repo, discovering how those images were built, and located the original source files, which is where I made the configuration change. With that, he left me to figure out how to put the pieces back together. In the end, not only did I get the confidence boost of figuring out part of it on my own, but Asad and I got to share knowledge and level up.
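In case it helps, the rebuild approach boils down to a very small Dockerfile that layers the edited file over the original image. Everything here, base image name, path, and tag, is a placeholder, not the actual layout of the easyTravel repo:

```dockerfile
# Sketch only: the image name and config path are placeholders.
FROM example/easytravel-loadgen:latest

# Overwrite the stock config with the locally edited, RUM-ready copy.
COPY loadgen.properties /opt/easytravel/config/loadgen.properties
```

From there, docker build -t &lt;your-hub-user&gt;/easytravel-loadgen-rum . followed by docker push publishes the modified image so a docker-compose file can reference it.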
Knowledge is meant to be shared, not hoarded. You’ll find many around you who understand this and are happy to help. Remember to pay it forward.
Know when to go off-script
CloudFormation… it’s a very interesting thing for sure. Andi Grabner already figured out all the tough parts of CloudFormation scripts in his pipeline tutorial. All I had to do was strip out the components I didn’t need and change the UserData section. Simple, right? Pffft!
My problem was a single command in the CloudFormation script’s UserData section: docker-compose up. It was supposed to launch all of the containers, but it didn’t do anything. The troubleshooting began:
- I could run all of my UserData commands manually in my EC2 instance, including the docker-compose up command.
- I added a touch command after the compose command, and the touch executed.
- I revisited the security groups, the IAM roles and policies. I even discovered that almost all of the CloudFormation script outside of the UserData section was not necessary.
No matter what I did, everything worked as expected except the docker-compose command. I searched all over the internet. Most results talked about moving my containers to the Amazon Elastic Container Service. To me, moving to ECS would make the exercise much too complicated for others to easily reproduce. I wanted this to be simple.
So, what did I do? I gave up. Not on the project entirely, but on CloudFormation. I figured there must be another way to approach this. I gave it more than a fair try. I even asked around and nobody else had any insight to offer.
To be honest, I felt a little dejected. I wanted to complete this project, but I was stumped. Instead of thinking about it, I took a break. Breaks are great. If you’re in an office, go out for a walk, go play whatever game they have set up in the common space. I work from home, so I’ll sometimes take a break and do laundry or take a shower. When we refocus our brain on a different task, one not related to our problem, we humans have a keen ability to come up with a solution without even thinking about it.
A crazy idea came to me. I remembered seeing the User data section in the “Launch EC2 Instance” workflow. My brain screamed at me, “Don’t bother – UserData is UserData. It won’t work. Move along – nothing to see here.” But my gut, the same gut that misguided me through all my late-night CloudFormation experiments, said, “Go on, you know you want to try it. Nobody’s looking. Just do it.” I tried to rationalize this in my head by thinking, “Well, I certainly didn’t write any of these AWS processes, so I can’t be sure it won’t work in the EC2 launch workflow. And besides, in the CloudFormation script, it’s ‘UserData,’ whereas in the EC2 launch workflow, it’s ‘User data.’ Maybe the spelling difference is a clue to something.”
I tried it.
It worked! It really worked! I still have no idea why. Maybe someday I’ll figure it out, but for now, mission accomplished. I ignored logic and tried on a whim. Most of the time, these desperate attempts don’t work. Every once in a while, though, they do. Didn’t somebody wise once say “don’t try, do”?
Go ahead and give my experiment a try. It’s amazingly easy.
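To sketch what “amazingly easy” means here: paste a boot script like the one below into the User data field of the EC2 launch workflow. This is only an outline of my setup, with Amazon Linux assumed, the repository URL and paths as placeholders, and the docker-compose version pinned purely as an example:

```shell
#!/bin/bash
# Sketch of an EC2 "User data" boot script; the repo URL and paths are
# placeholders. It runs as root on first boot, and its output lands in
# /var/log/cloud-init-output.log -- handy when troubleshooting.
yum update -y
yum install -y docker git
service docker start

# docker-compose isn't in the default repos; fetch a release binary.
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

git clone https://github.com/example/easytravel-docker.git /opt/easytravel
cd /opt/easytravel
docker-compose up -d
```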
If you’re like me and don’t have a wealth of coding/automation experience, you’ll come out having learned a lot. You won’t be an expert, but that’s not the goal. Experts start by learning the basics. Besides accomplishing the task I set out to, I leveled up and now have beginner-to-moderate familiarity with:
- EC2 instances
- AWS CloudFormation Scripting
- Rebuilding a Docker Container
- Reading and understanding all the build scripts in the easyTravel-Dynatrace-Docker repository
- Writing a shell script
- Pushing an image to Docker Hub
Most importantly, I have more confidence with which to tackle my next challenge.
This syndicated content is provided by Dynatrace and was originally posted at https://www.dynatrace.com/news/blog/level-up-with-the-mean-time-to-instrumentation-challenge-part-2/