Infrastructure management technology trends evaluation

The main domains emerging from the Technology Radar by Thoughtworks are infrastructure management, backend development, front-end development and machine learning. What IT technologies are in demand?

Let me summarize their findings, share a humble opinion on the things I found important and worth pointing out, and add a few words of my own.

Infrastructure Management Trends

I intentionally do not use the term DevOps here, since it's often misunderstood and overused. In short, DevOps, and more recently DevSecOps, is not a person who sets up your infrastructure but a methodology, a culture. Still, your bare metal or cloud resources have to be provisioned.

As code FTW!


A few years ago a technique called Infrastructure as Code emerged. It is about storing and versioning setup and configuration instructions as code in a repository - like ordinary software. This lets you review and maintain that code, and set up your infrastructure in a unified and, most importantly, repeatable way. Instantiating an environment is a matter of applying files which declare what has to be present, without having to care about how that is achieved.

At SoftwareMill we rely on tools like git for version control and Terraform for provisioning the infrastructure, be it in the cloud or on-premise.
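To make the "declare what, not how" idea concrete, here is a minimal sketch of such a declaration together with the usual Terraform workflow; the provider, region and bucket name are placeholders, not a recommendation:

```sh
# Declare a single (hypothetical) S3 bucket and let Terraform
# figure out how to make reality match the declaration.
cat > main.tf <<'EOF'
provider "aws" {
  region = "eu-central-1"
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "my-example-artifacts-bucket"   # placeholder name
}
EOF

terraform init    # download the provider plugins
terraform plan    # show what would change
terraform apply   # create/update the declared resources
```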

Pipelines as code is a similar technique. It's already a standard approach to build an application; run unit, integration, UI and contract tests; build a deployable package and install it in a given environment - all of that in an automated fashion.

Tools like Jenkins or GitLab CI/CD do that and additionally adhere to the new trend of storing such pipelines as code in a source code repository. Having a pipeline in the form of code allows you to recreate a testing environment from scratch for each build. You know exactly what third-party dependencies have to be installed in production. You're confident that recovering from a disaster is just a matter of clicking a button instead of manually recreating the prod environment, which was last touched years ago.
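As an illustration, this is roughly what such a pipeline could look like when kept next to the application code in GitLab CI; the job names, Docker image and deployment script are assumptions about a JVM project, not something prescribed by the radar:

```sh
# A hedged sketch of a .gitlab-ci.yml committed to the application repository.
cat > .gitlab-ci.yml <<'EOF'
stages: [build, test, deploy]

build-app:
  stage: build
  image: maven:3-openjdk-11          # assumption: a Maven/JVM project
  script:
    - mvn package -DskipTests

run-tests:
  stage: test
  image: maven:3-openjdk-11
  script:
    - mvn verify                     # unit and integration tests

deploy-staging:
  stage: deploy
  script:
    - ./deploy.sh staging            # hypothetical deployment script
EOF
```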

Advancements in IT Security

That is by far not the end. Techniques like security policy as code, zero trust architecture and decentralized identity are making their way into the mainstream.
There are some introductory posts on DZone and Medium about security policies as code, but in a nutshell the idea is to create a declarative description of security rules, like "service A is allowed to read the password value for its database". Or, broadly speaking - which service is allowed to access which resource. The rules are then checked in to a source code repository and evaluated as part of a CI pipeline. Istio and OpenPolicyAgent are good examples allowing you to introduce this concept in your organisation. Whereas Istio is a whole platform, OPA is an engine evaluating requests against policies and data and replying with a decision - not necessarily limited to an allow/deny answer.
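To give a flavour of what such a rule looks like, here is a minimal sketch with OPA: one Rego policy, an example input and the evaluation you could run locally or in CI. All names are made up, and newer OPA releases additionally expect an `if` keyword in rule heads:

```sh
cat > policy.rego <<'EOF'
package example.authz

default allow = false

# service-a may read its own database password, nothing else
allow {
    input.service  == "service-a"
    input.action   == "read"
    input.resource == "secrets/service-a/db-password"
}
EOF

cat > input.json <<'EOF'
{ "service": "service-a", "action": "read", "resource": "secrets/service-a/db-password" }
EOF

# In a CI pipeline this evaluation (or `opa test`) would gate the build.
opa eval --data policy.rego --input input.json "data.example.authz.allow"
```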

Zero trust architecture tells us to stop treating the internal network as a secure one. It's been described in detail in Istio's security documentation.

Decentralized identity is the effect of a long evolution from centralized identities, based on directory and naming servers, to Self-Sovereign Identity. One concrete implementation is based on Hyperledger, a blockchain technology, which allows the owner to protect their identity, decide what to expose, and control who can read it and for what purpose.

Cloud is on everyone's lips and you have to know the benefits and challenges before migrating to it. There are several approaches to accomplish this task, but companies that have already invested in hardware may not want to abandon it, for financial reasons as well as security concerns. In this case a hybrid or multi-cloud strategy comes into play. You'll find Google's Anthos and Amazon's Outposts interesting: they allow you to extend a VPC to on-premise machines and run cloud services locally.

Security in the cloud is a broad topic and with ScoutSuite you can fill at least one gap - visibility into cloud data. It's an auditing tool supporting AWS, GCP and Azure which is executed in standalone mode and requires the cloud provider's CLI to be set up. Once run, it generates a static HTML report - a clear view of the attack surface - that can be evaluated offline. Just to name a few use cases: you can figure out which machines are not protected by a firewall, or rather which ports are exposed to the outside world; which storage buckets have weak permissions, meaning they are publicly accessible; and which accounts can be accessed by external accounts and have permissions on a service with root privileges.

To get rid of false positives you can adjust rules and their associated levels as well as introduce exceptions for various resources. Just to be clear, all this data is available through the cloud consoles and UIs. ScoutSuite is just an aggregator grouping relevant parts and displaying them in the form of a report.
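Running it boils down to very little, assuming the respective cloud CLI is already configured with credentials (the commands below reflect the pip-installed `scout` entry point):

```sh
pip install scoutsuite

# One command per provider; each run produces a static HTML report
# that can be opened offline in a browser.
scout aws
# scout gcp
# scout azure
```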

Another static analysis tool in the Infrastructure as Code space is tfsec, which scans Terraform templates for security issues. It works with the major cloud providers - AWS, GCP and Azure - and highlights problems like sensitive data inclusion, fully open security group rules, use of plain HTTP instead of HTTPS, and unencrypted managed disks and buckets, just to name a few.
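It fits naturally into a CI pipeline, since it scans a directory of templates and fails when it finds issues; the path below is a placeholder:

```sh
# Scan the Terraform code in ./infrastructure; a non-zero exit code
# signals findings, which is exactly what a CI job needs.
tfsec ./infrastructure
```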

Life beyond Kubernetes

Besides techniques, the radar mentions a few platforms, of which Istio and Linkerd are the ones to adopt. Istio is an implementation of a service mesh, advertised as:

Connect, secure, control, and observe services.

It's not only the security principles mentioned above that make Istio a great fit when it comes to managing a microservices architecture. It also releases you from deploying circuit breakers and other third-party tools for tracing, logging and monitoring. Finally, it lets you apply all the fancy deployment strategies like canary rollouts or A/B testing, and lets you revert automatically on failures.
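A canary rollout, for example, comes down to a piece of routing configuration. The sketch below splits traffic 90/10 between two versions of a hypothetical `reviews` service; the subsets would be defined in a matching DestinationRule, not shown here:

```sh
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90      # 90% of traffic stays on the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10      # 10% goes to the canary
EOF
```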

Although Istio is platform agnostic and can run on-premise or in the cloud, it goes especially well with Kubernetes, one of the most popular container orchestrators. A very handy tool for k8s is k9s, a terminal user interface for kubectl. No more kubectl get pods or figuring out the resource identifier to display logs. Everything is displayed and refreshed in real time.
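To illustrate the difference (the namespace and pod names below are obviously placeholders):

```sh
# The plain kubectl way: find the pod first, then tail its logs.
kubectl get pods -n my-namespace
kubectl logs -f my-app-7d9c5b6f4-abcde -n my-namespace

# With k9s you launch the TUI once and navigate to the same information.
k9s -n my-namespace
```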

Another tool is Lens, an IDE to control Kubernetes clusters: monitoring, debugging, development and operations - all done from one place.

Another tool - nomen omen - is kind, which lets you run local k8s clusters:

kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
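Spinning a throwaway cluster up and down is a matter of single commands (the cluster name is just an example):

```sh
kind create cluster --name radar-demo
kubectl cluster-info --context kind-radar-demo   # kind registers a kubectl context for the cluster
kind delete cluster --name radar-demo
```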

Another platform is ArgoCD:

a declarative, GitOps continuous delivery tool for Kubernetes

It allows describing application deployments in a declarative way and applying them in an automated fashion. I've found this 12-minute introduction video very informative; it allowed me to grasp the idea really quickly.
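In practice such a description is an Application resource pointing at a Git repository; below is a minimal sketch based on the official argocd-example-apps repository, with automated sync turned on:

```sh
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:          # keep the cluster in sync with Git automatically
      prune: true
      selfHeal: true
EOF
```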

Metrics, API gateways and a search engine

Placed on the edge between backend development and infrastructure management is OpenTelemetry, a platform for capturing distributed traces, metrics and logs. With a set of APIs for Java, JavaScript, Python, Go, and Erlang, but also agents and collector services, it provides data to well established observability tools like Prometheus and Jaeger.

Not that you couldn't achieve the same goal today with tools like Sleuth or Jaeger, a Prometheus client library, and Graylog or the (B)ELK stack - it's rather an all-in-one solution, though currently in beta.

Speaking of ELK, and especially Elasticsearch - for lower volumes of data, which don't require or rather don't justify a distributed system, MeiliSearch is a text search engine providing typo-tolerant search capabilities with custom rankings and filters.
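Trying it out locally takes a couple of commands; the index name, document and query are just examples, and depending on the MeiliSearch version you may additionally need to pass an API key:

```sh
docker run -p 7700:7700 getmeili/meilisearch

# Index a document...
curl -X POST 'http://localhost:7700/indexes/movies/documents' \
     -H 'Content-Type: application/json' \
     --data '[{ "id": 1, "title": "Batman Begins" }]'

# ...and search with a typo - the match is still found.
curl 'http://localhost:7700/indexes/movies/search?q=batmn'
```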

Testing SSL is typically a tedious job. It first requires you to create a (self-signed) certificate, which means following yet another blog post on what commands to issue, and then to hack the client side or the operating system's root store to be able to use that certificate. mkcert makes it easy to generate certificates signed by a local CA installed in one of the supported root stores, like the macOS system store, the Windows system store, various Linux stores, the Firefox and Chrome stores and the Java store.
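In daily use it comes down to two commands; the hostnames below are examples:

```sh
mkcert -install                              # create and trust the local CA once
mkcert localhost 127.0.0.1 ::1 myapp.test    # mint a certificate/key pair for these names
# Point your local server at the generated .pem files and HTTPS just works.
```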

When setting up an infrastructure for microservices, one component is designated as the entry point. This is the concept of an API gateway - it has been characterized in detail elsewhere - and it's far more than just exposing unified APIs to various types of clients, be it mobile, browser or desktop, and addressing security-related concerns like authentication and authorization.

A gateway embraces a diverse range of edge functions like protocol translation, rate limiting, caching, metrics collection and request logging. This pattern, though, is going through an identity crisis:

Traditional API gateway solutions were not designed for highly dynamic environments like Kubernetes and require additional infrastructure to keep up, make highly-available, and production ready. Additionally these solutions are often deployed in a centralized manner that conflicts with the distributed nature of modern applications.

Gloo by solo.io promises to take the API gateway to the next level. Built on top of Envoy Proxy, this gateway integrates with Kubernetes, AWS ELB and various service meshes like Istio, Linkerd, Consul Connect and AWS App Mesh. At the same time it provides traffic management (routing, protocol translation), security features (encryption, authentication/authorization, rate limiting, access logging, firewall, CORS and the already mentioned OPA policies) and observability (monitoring, metrics, logging).

The next part covers front-end (web) and backend development.

Check out all the tools you just read about

GitLab, Terraform, ScoutSuite
OpenTelemetry, tfsec, Lens
MeiliSearch, ArgoCD, OpenPolicyAgent
Istio, k9s, Anthos
kind, Outposts, Linkerd
Jenkins, Gloo, mkcert