· Estimated reading time: 35 minutes.

Docker 101 - Containerizing our first application.

In this post we will learn what Docker is, how to use it, and when it is appropriate to use it, and we’ll build our first Docker container based on a real-world production scenario.

What is a container?

Containers can be conceptually thought of as an advanced version of chroot or a lighter alternative to virtualization.

A container is an isolated user-level instance of the OS. From the point of view of the applications running inside these instances the container behaves exactly like a real computer, and the application will have access only to the resources that are explicitly assigned to it.

Containers share the running kernel and system calls of the host OS, which makes them significantly less demanding, resource-wise, than traditional virtualization. The downside is that an application in a container cannot run on a different OS than the host’s.
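We can see this kernel sharing in practice by comparing the kernel release reported by the host with the one reported inside a container. A minimal sketch follows; the `docker run` line is commented out in case Docker is not installed, and `alpine` is just an example image:

```shell
# Print the host's kernel release.
uname -r

# A container reports the exact same kernel release, because containers
# share the host's running kernel (requires Docker; 'alpine' is an example):
# docker run --rm alpine uname -r
```

If both commands print the same version string, that is the shared kernel at work: the container has its own isolated filesystem and process tree, but not its own kernel.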

Pros and cons of using containers.

Using containers has a few immediate advantages:

  • We can package applications along with their dependencies, creating a portable version of the app and eliminating the dreaded “but it works on my machine“ scenario. This helps break down the proverbial wall between the Operations and Development departments.

  • It is significantly lighter than traditional OS virtualization, enabling a higher compute density in the same hardware.

  • It reduces the effort required to maintain the runtime environment, along with its complexity. Since applications can be treated as opaque, self-contained bundles, the package and its behaviour are the same in testing and in production.

  • Since the infrastructure is declared as code, we benefit from the advantages of IaC: Infrastructure versioning, code as reliable documentation and straightforward deployment of new environments.

Of course, we must mention the disadvantages too:

  • Containers run on top of an additional abstraction layer compared to bare-metal.

  • Containers share the running kernel with the host. A bug/glitch in the running kernel affects all the running containers.

  • Container management at scale is challenging, though orchestration tools like Docker Swarm and Kubernetes can mitigate this.

  • GUI applications don’t work well. While we can work around this using X11 forwarding, it’s not a straightforward solution.
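To make the packaging advantage concrete, here is a minimal sketch of containerizing an application. It assumes a hypothetical Python web app (`app.py`, `requirements.txt` and port 8080 are illustrative placeholders, not from a real project):

```shell
# Write a minimal Dockerfile for a hypothetical Python web app.
# The file names, base image and port below are illustrative assumptions.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
EOF

# Build the image and run the container (requires Docker):
# docker build -t myapp:1.0 .
# docker run -d -p 8080:8080 myapp:1.0
```

Copying `requirements.txt` and installing dependencies before copying the rest of the code lets Docker cache the dependency layer, so rebuilds after a code-only change are fast.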

· Estimated reading time: 15 minutes.

Monitoring our infrastructure with Nagios.

In this post we will learn how to monitor computers and services with Nagios. This will allow us to receive timely and actionable alerts about issues in our infrastructure.

What is Nagios.

Nagios is an application that monitors computers, services, networks and infrastructure. It sends alerts when trouble arises and notifications when the issue is resolved.

Unlike commercial alternatives such as SolarWinds, Nagios is both free and Open Source, while still offering the option of purchasing a support plan.

Nagios: How it works.

Nagios can obtain information about the monitored resources in two different ways:

  • Using the Nagios Agent: After deploying the agent to the computers you want to monitor, these agents collect the information and perform the checks by themselves before sending the results to the Nagios server.

    • The main advantage of this method is the flexibility it offers, since we can define custom checks and write code to monitor any resource we can think of. The disadvantage is that we have to push the definitions of these custom checks to the clients.

  • Agentless: The Nagios server uses the facilities provided by the device being monitored (for example, WMI or SNMP).

    • The main advantage of this mode is that it reduces the complexity of our deployment and allows us to centralize all the configuration on the Nagios server. The disadvantage is that it does not allow custom checks: if the device does not provide a facility for monitoring a resource, we can’t monitor it this way.

Nagios is able to use both modes at the same time, combining the results from agent checks and from agentless checks.
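Either way, what Nagios monitors is driven by plain-text object definitions. Below is a sketch of an agentless host and service definition; the host name, address and file name are illustrative assumptions, and the templates referenced (`linux-server`, `generic-service`) are the stock ones shipped with a default install:

```shell
# Write example Nagios object definitions for an agentless ping check.
# Host name, alias, address and file name are illustrative placeholders.
cat > webserver.cfg <<'EOF'
define host {
    use        linux-server          ; inherit defaults from a stock template
    host_name  webserver01
    alias      Production web server
    address    192.0.2.10            ; documentation-range IP, replace with yours
}

define service {
    use                  generic-service
    host_name            webserver01
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}
EOF

# Validate the full configuration before reloading (requires Nagios;
# the path below is the default for a source install):
# nagios -v /usr/local/nagios/etc/nagios.cfg
```

The `check_ping` arguments set warning and critical thresholds (round-trip time in milliseconds and packet-loss percentage), so the same command definition can be reused with different tolerances per host.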

· Estimated reading time: 11 minutes.

Terraform 101

In this post we’ll explore a tool called Terraform, its advantages and disadvantages; and we’ll use it to spin up a test instance on Amazon AWS.

Terraform: What is it?

Developed by HashiCorp, creators of Vagrant and Packer, Terraform is a tool that enables us to define our infrastructure as code (commonly known as IaC). We can add or modify resources such as compute instances, SSH keys, network topology or firewall rules, and Terraform takes care of generating and performing the operations needed to make the infrastructure provider’s state match the one described in the code.

Terraform supports a wide range of on-premises and cloud infrastructure providers such as Amazon AWS, Microsoft Azure, OpenStack, VMware vSphere and DigitalOcean. A full list of supported providers is available on the tool’s website.

Advantages and disadvantages of using Terraform.

Pros

  • Infrastructure as code (IaC): This enables us to treat our infrastructure as a file. In plain English, this means we can back it up, keep a history of our hardware and configuration (including the associated firewall rules, VLANs and security policies), and roll back changes quickly and efficiently. Our datacenter is stored in a file, so we can re-deploy our entire infrastructure in a few keystrokes. We can set up a testing or development environment that is guaranteed to be an exact replica of production, or quickly spin up a second datacenter in a disaster recovery scenario.
  • Speed: Terraform is fast, very fast. If the infrastructure provider supports it, Terraform parallelizes the creation and modification of resources. The result: it takes literally a minute to provision one instance, and the exact same time to provision 20 of them.
  • Supports multiple providers: Unlike Heat or CloudFormation, Terraform supports a variety of providers and can mix and match resources from several of them simultaneously. For example, we could keep the database in our local datacenter to comply with PII regulations, while hosting the frontend of the application somewhere else.
  • Flexibility: Our infrastructure is declared in .tf files. Terraform consumes all the .tf files in the working directory and processes them to create the execution plan. We can separate resources in a logical way, and easily add or remove a file with new resources for testing purposes.
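As a taste of what those .tf files look like, here is a minimal sketch that declares a single AWS instance. The region, AMI ID and tag name are illustrative placeholders, not values from a real account:

```shell
# Write a minimal Terraform configuration for one AWS instance.
# Region, AMI ID and tags below are illustrative placeholders.
cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "test" {
  ami           = "ami-00000000000000000"  # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-101-test"
  }
}
EOF

# Download the provider plugin and preview the execution plan
# (requires Terraform and AWS credentials):
# terraform init
# terraform plan
```

Running `terraform plan` shows every operation Terraform would perform before anything is touched, which is what makes reviewing and versioning infrastructure changes practical.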

· Estimated reading time: 1 minute.

Hello World!

Welcome to my blog. Here I will publish technical posts and general thoughts about technology, with the hope that it can be useful for you.

My native language is Spanish, but translation efforts are underway. If you can read Spanish, that section of the blog has more content at this time.

In the meantime, you can visit my GitHub and check out my repositories. One of them is a case study on automatically and horizontally scaling a webserver into the cloud with Terraform: it aims to survive a sudden, unexpected spike in web traffic without offering a degraded experience to the end user.