Micronaut vs Quarkus: part 2

Michał Chmielarz

26 Jan 2022 · 23 minutes read

Some time ago, I wrote a post comparing some aspects of Micronaut and Quarkus frameworks. However, it didn't touch on two critical topics: web endpoints and cloud support. Below you can find a not-so-short description of those two areas.

At the moment of writing this post, the available versions of the frameworks were 2.6 for Quarkus and 3.2.3 for Micronaut.

Webserver

Micronaut

When creating web endpoints in Micronaut, we can choose one of two ways: using the JAX-RS specification or annotations provided by the framework.

The JAX-RS support relies on translating the annotations at compile time into the corresponding framework ones. We can also inject some JAX-RS types and use the security context (which is bound to Micronaut).

The key focus area of the framework is the development of microservices, and as such, it provides excellent support for creating an HTTP server based on Netty.

Micronaut, like Spring, implements the URI template specification (RFC 6570). We can specify paths to endpoints programmatically or using annotations. Even providing non-standard HTTP methods (e.g. those required by the WebDAV specification, RFC 4918) is possible thanks to the CustomHttpMethod annotation.
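
To make this more concrete, here is a minimal Micronaut controller sketch using the framework's annotations; the /greetings path and handler are made up for this example:

```java
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

// A minimal Micronaut controller: the {name} variable from the URI template
// is bound to the method argument of the same name.
@Controller("/greetings")
public class GreetingController {

    @Get("/{name}")
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```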

Serving HTTP endpoints wouldn't be complete without exception handling, and the framework provides a solution for this as well. We can use predefined error handlers or override them, and we can provide handlers for custom exception types too. There is even a dedicated ErrorResponseProcessor that produces the error response body. While the default error format is vnd.error, we can use the application/problem+json format based on Zalando's Problem library thanks to the problem-json plugin.
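
As a rough sketch of a local handler (the controller and OrderNotFoundException are hypothetical), a method annotated with @Error can translate an exception thrown by the controller into an HTTP response:

```java
import io.micronaut.http.HttpRequest;
import io.micronaut.http.HttpResponse;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Error;
import io.micronaut.http.annotation.Get;

@Controller("/orders")
public class OrderController {

    @Get("/{id}")
    public String find(Long id) {
        throw new OrderNotFoundException(id); // hypothetical domain exception
    }

    // Local handler: maps OrderNotFoundException thrown in this controller to a 404.
    @Error(exception = OrderNotFoundException.class)
    public HttpResponse<String> notFound(HttpRequest<?> request, OrderNotFoundException e) {
        return HttpResponse.notFound("Order not found: " + e.getMessage());
    }
}

class OrderNotFoundException extends RuntimeException {
    OrderNotFoundException(Long id) {
        super(String.valueOf(id));
    }
}
```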

Although Micronaut is written with microservices in mind and doesn't fully support the MVC model, you can still use the Micronaut Views extension, which provides template engine integration with server-side view rendering. The support covers engines like Thymeleaf, Velocity, Freemarker, Rocker, Pebble, Soy, and Handlebars.

If you'd like to use the Servlet API for any specific reason, it is possible with the Micronaut Servlet plugin. As the documentation states, all non-Netty features of the default HTTP server should work here too. In addition, the extension improves the handling of multipart requests and simplifies I/O based on the Micronaut interfaces. We can use Jetty, Tomcat, or Undertow as the server.

For data serialization, the Jackson library is used. JSON is the default format of data returned from endpoints. However, we can use XML as well (with a dedicated add-on for Jackson XML). Endpoints may also serve files (the media type is derived from the transferred file name). In addition, the framework supports JSON streaming on both sides. While this is nothing unusual for the server side (every reactive endpoint may produce such a stream), the provided HTTP client also offers an API for subscribing to a JSON stream.

De-/serialization of data in endpoints relies on the introspection mechanism. The other option is to provide custom de-/serializers for the Jackson library. We get solid support for everything needed when developing an HTTP service: type binding, request body parsing, file uploads, data validation, and error handling. It's rather pointless to describe all of this in detail here, but if you're looking for more information, just check the latest Micronaut documentation.
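
As a small illustration of the introspection-based binding (the Book type, path, and validation constraint are made up; micronaut-validation is assumed to be on the classpath):

```java
import io.micronaut.core.annotation.Introspected;
import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Post;
import io.micronaut.validation.Validated;

import javax.validation.Valid;
import javax.validation.constraints.NotBlank;

// @Introspected generates bean metadata at compile time,
// so Jackson binding works without runtime reflection.
@Introspected
class Book {

    @NotBlank
    private String title;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

@Validated
@Controller("/books")
class BookController {

    // The JSON request body is bound and validated before the method is invoked.
    @Post
    public Book save(@Valid @Body Book book) {
        return book;
    }
}
```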

When discussing web and REST endpoints, we cannot forget about documenting them. Therefore, I have to mention the integration with the OpenAPI standard. It provides many annotations and generates an OpenAPI specification based on them. For the rendered document, we can provide a view using Swagger UI, ReDoc, or RapiDoc. We can even create a PDF file from the spec using RapiPdf.

Quarkus

Quarkus, like Micronaut, offers decent support for creating microservices. With Netty running under the hood, we can make the services reactive (based on integration with the Mutiny library).

REST services use RESTEasy, based on the JAX-RS standard; however, development differs slightly from Jakarta EE. Besides the standard HTTP methods, we can define a custom one for an endpoint by declaring a proper annotation.
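
A minimal sketch of such a custom method with RESTEasy follows; the LOCK verb, path, and resource are illustrative, and the javax namespace matches Quarkus 2.x:

```java
import javax.ws.rs.HttpMethod;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Any annotation meta-annotated with @HttpMethod is treated by JAX-RS as an HTTP verb.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@HttpMethod("LOCK")
@interface LOCK {
}

@Path("/documents")
class DocumentResource {

    // Handles requests sent with the non-standard LOCK verb.
    @LOCK
    @Path("/{id}")
    public String lock(@PathParam("id") String id) {
        return "locked " + id;
    }
}
```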

We can enable support for Servlet API with the Undertow extension.

In terms of supported media types, we find everything we need here. We can handle plain text, JSON, and file content. Additionally, we can serve HTML using the Qute template engine. JSON is the default data format for endpoints when the Jackson or JSON-B library is on the classpath. We can always specify the media type directly using the Produces and Consumes annotations. Alternatively, we can switch off automatic JSON in the configuration, and endpoints will then use content negotiation to settle the media type. There is also JAXB support for returning XML data from endpoints.

JSON serialization relies on the Java reflection mechanism. Thus, when using GraalVM, all involved classes have to be registered. Quarkus does this automatically when we return data types directly from endpoints. However, this mechanism doesn't work when returning a Response instance, since Quarkus cannot determine the payload type at build time. Therefore, we may need to annotate data type classes with RegisterForReflection.
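
A tiny, hypothetical example of the annotation in use:

```java
import io.quarkus.runtime.annotations.RegisterForReflection;

// Registers the DTO for reflection in the native image; useful when it is only
// returned inside a generic Response and Quarkus cannot detect it at build time.
@RegisterForReflection
public class OrderDto {
    public String id;
    public double total;
}
```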

For calls that end with errors, we can throw a JAX-RS exception that is automatically mapped to an adequate response, or we can throw a custom exception and provide a dedicated (local or global) exception handler that translates it into an HTTP response.
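
A sketch of a global handler using the standard JAX-RS ExceptionMapper (the exception type is hypothetical):

```java
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// A global JAX-RS mapper translating a domain exception into a 404 response.
@Provider
public class OrderNotFoundMapper implements ExceptionMapper<OrderNotFoundException> {

    @Override
    public Response toResponse(OrderNotFoundException e) {
        return Response.status(Response.Status.NOT_FOUND)
                       .entity(e.getMessage())
                       .build();
    }
}

class OrderNotFoundException extends RuntimeException {
    OrderNotFoundException(String message) {
        super(message);
    }
}
```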

We can define custom request and response filters in two ways - using Quarkus annotations or in the JAX-RS manner. In addition, the framework has a predefined CORS filter. Finally, there are built-in features like gzip compression, HTTP/2, data streaming, and multipart content (the last one requires an additional extension). The documentation provides detailed information about all HTTP features.

To document an HTTP/REST API, we can use the SmallRye OpenAPI plugin. Interestingly, we don't have to use annotations to generate the OpenAPI specification for existing endpoints. We just need to add the proper dependency to the classpath, and that's it. We can serve static specifications as well. The UI for visualizing the spec is Swagger UI, which comes bundled with the extension.

REST Data with Panache

For those familiar with Spring Data REST, Quarkus has a similar solution. In the previous blog post, I mentioned the Panache extension that provides easier access to a database with Quarkus. Thanks to this, we can expose entities and repositories as basic CRUD endpoints using experimental REST Data with Panache extensions for Hibernate or MongoDB.

The endpoints are generated for all resources having dedicated interfaces based on the JAX-RS standard. The returned data format is JSON or a hypermedia-driven representation, i.e. HAL. We can customize the created endpoints by enabling pagination or the HAL format, or by providing a custom path. We can also decide which resources should be exposed.
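
Roughly, the setup boils down to one entity and one interface; the Fruit entity is made up for this sketch, and the exact endpoint path is derived from the resource by the extension:

```java
import io.quarkus.hibernate.orm.panache.PanacheEntity;
import io.quarkus.hibernate.orm.rest.data.panache.PanacheEntityResource;

import javax.persistence.Entity;

@Entity
public class Fruit extends PanacheEntity {
    public String name;
}

// Declaring the interface is enough: the extension generates JSON CRUD endpoints for Fruit at build time.
interface FruitResource extends PanacheEntityResource<Fruit, Long> {
}
```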

While the extension may be compelling, be aware that such endpoints tightly couple your REST API to the database structure. This approach allows creating CRUD endpoints rapidly, but it also exposes our domain model to the outside world (which may not be a good thing, after all).

Web client

For Micronaut, we have two types of clients available.
The first is a low-level HTTP client, a framework-provided bean. The whole API for sending requests uses classes from the framework. By default, the client uses Jackson to handle JSON data. If we'd like to use another format, we need to provide a dedicated codec. Finally, the client supports form and multipart data, and we can work with streamed JSON.

The second is a declarative client, built on top of the first one. Thus, it supports all the features mentioned above. This type of client is an interface or an abstract class with annotations describing requests (plus query params and headers). We can use a retry mechanism and circuit breaker, as well as fallbacks. The compiler creates the client's implementation based on the provided annotations, and we can use it as a standard bean at runtime.
Both clients may use request filters and support reactive streams.
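
A minimal declarative client sketch (the /books path and Book type are illustrative):

```java
import io.micronaut.core.annotation.Introspected;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.annotation.Client;
import org.reactivestreams.Publisher;

@Introspected
class Book {
    public String title;
}

// The compiler generates the implementation; the client can then be injected like any other bean.
@Client("/books")
interface BookClient {

    @Get("/{id}")
    Publisher<Book> find(Long id);
}
```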

When we need to make an HTTP call from an application, Quarkus has an extension providing a client based on the RESTEasy library. The client is declarative, and its definition is pretty simple. We need an interface (annotated with RegisterRestClient) with methods defining paths, query params, and headers (using JAX-RS and MicroProfile annotations). We can use dedicated add-ons enabling serialization based on the Jackson, JSON-B, and JAXB libraries. It is possible to use multipart data as well. The REST client supports async and reactive calls by returning CompletionStage or Uni instances (the latter comes from the Mutiny library).
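
For comparison, a sketch of a declarative Quarkus REST client; the config key, path, and Book type are made up. Such an interface is then injected with the @RestClient qualifier.

```java
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import java.util.concurrent.CompletionStage;

// The base URL is read from configuration under the "book-api" key.
@RegisterRestClient(configKey = "book-api")
@Path("/books")
public interface BookClient {

    @GET
    @Path("/{id}")
    CompletionStage<Book> find(@PathParam("id") Long id);
}

class Book {
    public String title;
}
```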

WebSockets

Both frameworks offer a declarative way of defining the server and client sides of WebSocket support. This approach lets us focus on business logic instead of dealing with the technical details of WebSockets.

On the server side, we create a class with annotated methods handling connection opening, closing, incoming messages, and communication failures. The client part of WebSockets is simple as well. Again, we provide a class with the proper annotation, and it must have methods responsible for opening a connection and handling received messages.

Quarkus provides an implementation of the Jakarta WebSocket standard.
Micronaut, on the other hand, uses its own annotations. Additionally, we can imperatively handle WebSockets. The framework offers dedicated beans for session handling and message broadcasting. I haven't found such an option in the Quarkus framework, though.
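
To give a flavour of the Micronaut side, here is a rough WebSocket server sketch broadcasting every message (the /chat/{topic} path is made up):

```java
import io.micronaut.websocket.WebSocketBroadcaster;
import io.micronaut.websocket.WebSocketSession;
import io.micronaut.websocket.annotation.OnClose;
import io.micronaut.websocket.annotation.OnMessage;
import io.micronaut.websocket.annotation.OnOpen;
import io.micronaut.websocket.annotation.ServerWebSocket;

@ServerWebSocket("/chat/{topic}")
public class ChatServerWebSocket {

    private final WebSocketBroadcaster broadcaster;

    public ChatServerWebSocket(WebSocketBroadcaster broadcaster) {
        this.broadcaster = broadcaster;
    }

    @OnOpen
    public void onOpen(String topic, WebSocketSession session) {
        broadcaster.broadcastSync("Someone joined " + topic);
    }

    @OnMessage
    public void onMessage(String topic, String message, WebSocketSession session) {
        broadcaster.broadcastSync("[" + topic + "] " + message);
    }

    @OnClose
    public void onClose(String topic, WebSocketSession session) {
        broadcaster.broadcastSync("Someone left " + topic);
    }
}
```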

Server-Sent Events

If you're looking for support for HTML5 Server-Sent Events (the W3C's HTML5 specification), then you won't be disappointed. Both frameworks provide mechanisms to implement push endpoints.
Micronaut handles SSE using its Event API. So the only thing we need to do is create an endpoint returning a Publisher that emits Micronaut's Event objects. The media type of the returned content should be text/event-stream.
The situation is simple in the Quarkus framework too. We start with an endpoint returning an instance of SSEMulti, which indicates that the provided content should be treated as server-sent events.
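
To illustrate the Micronaut side, a minimal SSE endpoint could look like this (the /clock path is made up; Reactor is assumed to be on the classpath):

```java
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.sse.Event;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;

import java.time.Duration;

@Controller("/clock")
public class ClockController {

    // A Publisher of Event objects served as text/event-stream.
    @Get(produces = MediaType.TEXT_EVENT_STREAM)
    public Publisher<Event<String>> ticks() {
        return Flux.interval(Duration.ofSeconds(1))
                   .map(i -> Event.of("tick " + i));
    }
}
```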

GraphQL

For GraphQL, both frameworks have dedicated extensions. Micronaut uses the micronaut-graphql module. Since the add-on provides a controller class defining the endpoint for GraphQL queries, our task is to configure a GraphQL bean, i.e. load the schema and bind methods to query calls. The configuration can be done with three different GraphQL libraries: GraphQL Java, GraphQL Java Tools, or GraphQL SPQR.
We can enable subscriptions to the endpoint by allowing queries over WebSockets. Additionally, we get the GraphiQL IDE to explore GraphQL.

The extension for Quarkus uses the SmallRye implementation of the MicroProfile GraphQL specification. Unlike in Micronaut, we need to provide an endpoint exposing GraphQL queries, which is pretty similar to creating a standard REST endpoint. In this case, we annotate the class with GraphQLApi and call services providing data. The framework generates a GraphQL schema based on the returned types. Quarkus delivers GraphiQL too; however, it is only an experimental feature. The extension supports WebSockets and uses the GraphQL Java library under the hood.
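
A rough sketch of such an API class (the Film type and data are made up):

```java
import org.eclipse.microprofile.graphql.GraphQLApi;
import org.eclipse.microprofile.graphql.Name;
import org.eclipse.microprofile.graphql.Query;

import java.util.List;

// The GraphQL schema is generated from the types returned by the annotated methods.
@GraphQLApi
public class FilmResource {

    @Query("allFilms")
    public List<Film> films() {
        return List.of(new Film("A New Hope"), new Film("The Empire Strikes Back"));
    }

    @Query
    public Film film(@Name("title") String title) {
        return new Film(title);
    }
}

class Film {
    public String title;

    Film(String title) {
        this.title = title;
    }
}
```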

gRPC

Both frameworks support gRPC calls through dedicated modules.

Let's start with Quarkus. It can work with gRPC classes from our project sources or use ones coming from project dependencies (via the Jandex index). In addition, the code is generated with no need for an external Maven/Gradle plugin. However, for Maven, it is still possible to use protobuf-maven-plugin instead.

We can inject generated gRPC services directly or implement their interfaces independently. In both cases, we need to use the GrpcService annotation. We can return a response in two ways: using types from the Mutiny library as method return values or with the StreamObserver class from the gRPC API. We can even specify whether the business logic is blocking, so the framework will run it on a worker thread instead of an event loop. On the client side, we can inject a service stub with the GrpcClient annotation.

What does the gRPC support look like on the Micronaut side? First, we define data types and services in a separate protobuf file. Then, an external Maven/Gradle plugin generates classes from the definition during the compilation phase. Next, the gRPC server and clients may be configured using a configuration file or programmatically with a bean creation listener. The server side is automatically configured with all services, interceptors, and transport filters injected. Client stubs, on the other hand, have to be provided manually as Micronaut beans using factory classes.

We can use a service discovery mechanism when injecting a gRPC managed channel. By default, gRPC's NameResolver is used; however, we can use Consul or Eureka instead (with an additional framework extension). In addition, we can switch from gRPC's default OpenCensus to Micronaut's integration with Jaeger or Zipkin for distributed tracing. And finally, we can enable support for the application/x-protobuf media type in the Micronaut HTTP server.

Cloud support

Micronaut

As the documentation states, Micronaut was designed from the ground up for building cloud microservices. It borrows and is inspired by concepts from Grails and Spring, so it should feel familiar to developers experienced with those frameworks.

Micronaut automatically tries to detect the active environment it runs on and sets the value of the env property based on that. It is possible to have multiple environments operational simultaneously, e.g. AWS and Kubernetes.

It supports various solutions for distributed configuration. At the moment of writing this, the list of available integrations covers:

  • HashiCorp Consul with support of key/value pairs, blobs (like YAML, JSON, etc.), and file references based on git2consul,
  • HashiCorp Vault,
  • Spring Cloud Config,
  • AWS Parameter Store (with secure information support),
  • Oracle Cloud Vault,
  • Google Cloud Pub/Sub,
  • Kubernetes supporting YAML, JSON, properties, or literals (with plenty of configuration options).

With service discovery, we also have a few options available. First, we can utilize the discovery-client extension, which works with Consul and Eureka. We can interact with the client directly or - the preferred way - use the Client annotation with the name of a service; discovery then happens automatically. The extension makes it possible to customize various aspects of registration.

The other option is using service discovery provided by Kubernetes. We can use two discovery modes (service and endpoint) with active watching for changes of their respective resources. The Client annotation uses names of defined services and endpoints.

The next option is AWS Route 53, which works with the DiscoveryClient API like the previous solutions and supports health checks.

The last possibility is manual service discovery based on configuration entries. That's the simplest way of doing service discovery, and it can even provide some health checks (disabled by default, run in a separate thread by Micronaut).

Service discovery emits a list of available service instances. By default, Micronaut performs round-robin client-side load balancing; however, a custom implementation may override this strategy. The Netflix Ribbon extension is one example. It provides a different load-balancing implementation based on the external library, more flexible than the standard one, which we can configure using Ribbon's configuration options (globally or per client).

Since Micronaut is a framework for microservices, it also supports distributed tracing. The framework provides its own annotations to manage spans and offers instrumentation (like HTTP filters) to ensure the span context is propagated between threads and microservices. Additionally, the framework provides integrations with the OpenTracing API based on Zipkin or Jaeger. Both can be adjusted to our needs through configuration.

With critical features like fast startup time, low memory footprint, and a compile-time approach, Micronaut is a decent match for implementing serverless functions. We can find support for Azure Functions, AWS Lambda, Oracle Functions, and Google Cloud Functions, among other integrations. Additionally, any FaaS platform running functions as containers is supported as well.

The support for functions is two-fold. First, we can implement simple functions that involve a dedicated SDK delivered by FaaS providers like AWS, Azure, Oracle, or GCP. Those functions are considered low-level, use DI on fields only, and require no-args constructors. The other type of function exposes controllers defined in Micronaut applications; these are HTTP functions. For GCP and AWS, all endpoints are exposed automatically. For Azure, we need to provide a function routing a request to the proper controller. In the case of the Oracle HTTP function, we need to configure request routing in the cloud console.
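
As a rough sketch of the low-level style (assuming the micronaut-function-aws module; the handler and service are made up):

```java
import io.micronaut.function.aws.MicronautRequestHandler;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;

// A low-level AWS Lambda handler: Micronaut bootstraps the context and injects fields.
public class CapitalizeHandler extends MicronautRequestHandler<String, String> {

    @Inject
    CapitalizeService service; // field injection only, as noted above

    @Override
    public String execute(String input) {
        return service.capitalize(input);
    }
}

@Singleton
class CapitalizeService {
    String capitalize(String input) {
        return input == null ? "" : input.toUpperCase();
    }
}
```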

For most (if not all) serverless providers, we can use GraalVM native images. Thanks to this, we can create slimmer images of responsive serverless functions that use fewer resources.

An example of an environment running containerized applications is Google Cloud Run. For instance, it may run a Micronaut application containerized with Jib.

The above features aren't a complete list when discussing integrations with cloud providers. Every integration has its own additional features.

On the guides page, you can find tutorials on deploying applications to a cloud, sometimes with external build tool plugins or a provider web console. For instance, we can deploy an application to the Azure cloud using a dedicated azure-webapp plugin for Maven or Gradle. Another example is deploying to AWS Elastic Beanstalk.

Besides lambdas, the AWS extension supports building Alexa Skills using HTTP services, even with SSML. Newly created skills can be deployed as a lambda or a web service.

Next, we have the AWS SDK integrated as well. Some of the clients and their builders from the SDK are available as CDI beans. The ones that are not available as beans require a factory class (like AWS Rekognition). If you are looking for a higher-level API, the Agorapulse add-on may be the answer.

Additionally, Micronaut has been verified to work with Amazon Corretto, a free LTS OpenJDK distribution, so it is possible to deploy applications on it.

As with AWS, the extension for Google Cloud Platform offers far more than serverless support.

We have logging support with many configuration options. It uses the Stackdriver logging format. Next, we can integrate with Cloud Trace. Finally, based on the GCP HTTP client add-on, we can set up authorization of service-to-service communication.

We can integrate with the Pub/Sub messaging service. Implementing the communication is similar to the Kafka or RabbitMQ extensions. We create publishers declaratively, providing interfaces marked with the proper annotations; the framework assembles the implementations at compile time. Listeners, on the other hand, are classes with appropriate annotations. By default, the extension provides automatic SerDes that reads messages as JSON data and writes to the wire based on the Content-Type header. For errors that occur while receiving data, we can define an exception handler. There is even a way of de-/serializing data with a custom MIME type.
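
A rough sketch of both sides (assuming the micronaut-gcp-pubsub module; the topic and subscription names are made up):

```java
import io.micronaut.gcp.pubsub.annotation.PubSubClient;
import io.micronaut.gcp.pubsub.annotation.PubSubListener;
import io.micronaut.gcp.pubsub.annotation.Subscription;
import io.micronaut.gcp.pubsub.annotation.Topic;

// A declarative publisher: the implementation is generated at compile time.
@PubSubClient
interface OrderPublisher {

    @Topic("orders")
    void send(String order);
}

// A listener bean receiving messages from the subscription (JSON by default).
@PubSubListener
class OrderListener {

    @Subscription("orders-sub")
    void onOrder(String order) {
        System.out.println("Received: " + order);
    }
}
```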

Besides accessing Secret Manager through the distributed configuration, the extension provides a low-level client for reading the storage directly.

We can utilize a dedicated extension to connect with Oracle Cloud. The integration supports four types of authentication providers. We can also connect with the Autonomous Database (it uses Oracle Wallet to store credentials). Another available feature is the Micrometer integration with the OCI Monitoring service to audit cloud resources. With this extension, we can also replace the default tracing with OCI Application Performance Monitoring.

In addition to service discovery and distributed configuration, the Kubernetes extension provides health checks probing communication with the API and delivering detailed data for the application's pod.

With the Kubernetes client extension, we can access its Java SDK classes as CDI beans. The authentication is pre-configured based on the environment settings and can be tweaked with configuration properties. Moreover, the communication supports reactive style based on RxJava 2 or Reactor projects.

We can integrate with Kubernetes Informer as well. Thanks to this, it is possible to monitor resources of a specific type.

Quarkus

Quarkus offers the Funqy framework for writing serverless functions for various FaaS providers. It works with AWS Lambda, Azure Functions, Google Cloud Functions, Knative, and Knative Events.

Since it spans multiple providers, its API is very small and simple. It supports both blocking and async programming styles. With Funqy, we can adjust the names of created functions and use dependency injection. For some FaaS providers, it's also possible to inject the event context.
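
A minimal sketch of Funqy functions (names are made up):

```java
import io.quarkus.funqy.Funq;
import io.smallrye.mutiny.Uni;

public class GreetingFunctions {

    // A synchronous function; the exported name defaults to the method name.
    @Funq
    public String greet(String name) {
        return "Hello, " + name;
    }

    // An asynchronous variant returning a Mutiny Uni, exported under a custom name.
    @Funq("greet-async")
    public Uni<String> greetAsync(String name) {
        return Uni.createFrom().item("Hello, " + name);
    }
}
```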

This extension aims to provide a simple API allowing the creation of functions easily portable across various providers. If you need specific features of a given cloud environment, you need to use a dedicated integration. On the other hand, Funqy may be worthwhile when testing ideas for serverless functions or when you need to deliver a simple endpoint and time is crucial.

It has a dedicated binding for HTTP functions - Quarkus Funqy HTTP. Importantly, it is not a replacement for REST over HTTP but aims to deliver simple definitions of HTTP endpoints. That simplicity means no specialized features like cache-control or conditional GETs.

Apart from the Funqy extension, we can deploy functions using a FaaS provider's API directly. Quarkus provides two types of plugins for AWS Lambda. The first one is for building simple functions. They can be deployed to the Amazon Java runtime or, as native executables, to Amazon's custom runtime with a smaller memory footprint and faster startup.

We can bundle as many lambdas into the deployable artifact as we want; however, we must indicate in the configuration which one should be deployed. The lambda extension can run a mocked AWS Lambda event server when working in dev or test mode, making development easier.

The second type is for HTTP functions. They can be written with any Quarkus HTTP framework (like JAX-RS, Reactive Routes, and so on). It is possible to deploy this type of lambda with the AWS API Gateway HTTP API or REST API.

Additionally, both extensions generate deployment files in the format of the Amazon SAM framework.

The Azure Functions add-on allows deploying HTTP serverless functions based on RESTEasy, Undertow, Vert.x, or Funqy HTTP. It provides a generic bridge between the Azure runtime and the provided endpoints. It supports text-based media types only and is in preview mode.

We have a dedicated extension for Google Cloud Functions as well; however, it is in preview mode. It offers three types of functions:

  • HttpFunction, handling HTTP requests,
  • BackgroundFunction, processing storage events,
  • RawBackgroundFunction for PubSub events.

There is yet another add-on for HTTP Google Cloud Functions. It is provided in preview mode and enables the deployment of functions based on JAX-RS, Vert.x, the Servlet API, or Funqy HTTP.

We can extend configuration sources with distributed configuration as well. Quarkus has three extensions in this area. The first one is for Kubernetes and applies to the content of ConfigMaps and Secrets. It reads the data using the Kubernetes Client and works with literals and files (properties and YAML).
The second extension allows reading configuration from Spring Cloud Config. No code is required to enable this feature; setting up a couple of configuration properties is enough.
The last extension is Quarkus Consul Config, and it works with the key-value store.

What about service discovery? Quarkus gained a new extension integrating SmallRye Stork, a framework for client-side service discovery and load balancing. It works with Consul, Eureka, and Kubernetes; however, Stork is extensible and can work with a custom implementation too.

SmallRye Stork provides client-side load-balancing strategies as well. It offers two ways of selecting a service (round-robin and response time), leaving room for a custom implementation.

The Stork extension looks like something the Quarkus world has been really missing. The only concern I would have is that it is pretty new, and Stork itself is in beta at the moment.

There is also a naive approach to client-side load balancing: a custom implementation of ClientRequestFilter. It's doable; however, it is not the most effective way.

Distributed tracing is supported in Quarkus as well, with two extensions based on OpenTracing and OpenTelemetry.

The former uses the Jaeger tracer, and it is automatically applied to all existing REST endpoints. Nonetheless, if we need to, we can trace non-REST calls too. OpenTracing provides additional instrumentation as well. The Quarkus documentation mentions technologies like JDBC, Kafka, or MongoDB. We can even run tracing in a Zipkin compatibility mode.

The latter integrates OpenTelemetry and works with Jaeger as well. It is possible to set up an id generator, propagators, resources, and samplers.

Like in Micronaut, these are not all features and extensions regarding cloud systems development with Quarkus.

Concerning support for AWS, there is an integration with SDK v2. It uses the URL Connection Client or the Apache HTTP Client under the hood for blocking calls. It is also possible to use the async programming model based on CompletableFuture and the Netty HTTP Client. The extension provides several service clients (like DynamoDB, KMS, S3, SES, SNS, SQS, Secrets Manager, and Systems Manager) as CDI beans we can inject into our code. For each of them, there is a list of properties we can change in the application configuration.
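
A sketch of how such a client can be used (the table name and repository are made up; the DynamoDB extension is assumed):

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import java.util.List;
import java.util.Map;

@ApplicationScoped
public class FruitRepository {

    // The client is produced as a CDI bean by the extension and configured via application properties.
    @Inject
    DynamoDbClient dynamoDb;

    public List<Map<String, AttributeValue>> listAll() {
        return dynamoDb.scan(ScanRequest.builder().tableName("Fruits").build()).items();
    }
}
```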

Additionally, I have found two more Quarkus extensions for AWS services. The first one aims at making the Amazon Alexa SDK work with native executables, and the second provides support for sending logs to Amazon CloudWatch.

For Azure cloud, the documentation describes the deployment of our Docker images to three different services: Container Instances, Kubernetes Service, or App Service on Linux Containers. Unfortunately, I have found nothing more in the documentation regarding this cloud provider.

A similar situation is with the GCP. We can find a description of deploying an application to App Engine, App Engine Flexible Custom Runtimes, and Google Cloud Run. The first applies to jars, while the last two to Docker images. In addition, the GCP guide provides a section dedicated to configuring the Cloud SQL integration.

Regarding GCP, we have dedicated add-ons placed in Quarkiverse. These offer support for BigQuery, Bigtable, Firestore, PubSub, Secret Manager, Spanner, and Storage.

The Kubernetes deployment plugin offers generation of Kubernetes manifest files, setting environment variables based on the Secret and ConfigMap integration, and support for the Service Binding feature. In addition, we can add readiness and liveness probes based on the SmallRye Health extension.

As I've mentioned above, Quarkus comes with a plugin providing Kubernetes Client. It enables the usage of Kubernetes Operators. Moreover, we can find an extension simplifying tests of the implemented operators. Finally, if the target Kubernetes cluster runs on OpenShift, we can use a dedicated client extension similar to the one above.

In Quarkiverse, we can find the Quarkus Operator SDK, another plugin that simplifies work with the operators, based on the Java Operator SDK.

The last plugin from the Kubernetes family is Funqy Knative Events, which supports routing and processing of CloudEvents on the Knative platform. Event processors can be configured with configuration properties or annotations, allowing programmers to define triggers, response sources, and types. Processors work with JSON data provided as Strings.

Among other extensions, we can find support for Red Hat OpenShift. It focuses on generating OpenShift resources and deploying them as S2I containers. However, it can also work with Docker and Jib images. This add-on also makes it possible to use Knative through OpenShift Serverless.

Any thoughts?

While I focused on only two topics in this post - web and cloud features - there is a lot of content to read and grasp. So I hope I haven't omitted anything important. Both frameworks provide decent support in these two areas, and you can clearly see both are focused on delivering modern microservices.

Which one is better? There is no clear answer to such a question. And I wouldn't point to a "winner" here. However, Micronaut looks better regarding the stability of available extensions and more detailed documentation. While Quarkus offers similar features, some are still in the preview mode. Besides this, I can recommend both frameworks for your next web project.
