Some time ago, I wrote a post evaluating some aspects of the Micronaut and Quarkus frameworks. However, it didn't touch on two critical topics: web endpoints and cloud support. Below you can find a not-so-quick description of those two areas.
At the moment of writing this post, the available versions of the frameworks were 2.6 for Quarkus and 3.2.3 for Micronaut.
When creating web endpoints in Micronaut, we can choose one of two ways: using the JAX-RS specification or annotations provided by the framework.
JAX-RS support relies on translating the annotations at compile time to the corresponding framework ones. We can also inject some JAX-RS types and use the security context (which is bound to Micronaut).
The key focus area of the framework is the development of microservices, and as such, it provides excellent support for creating an HTTP server based on Netty.
Micronaut, like Spring, implements the URI template specification (RFC-6570). We can specify paths to endpoints programmatically or using annotations. Even providing non-standard HTTP methods (e.g. those required by RFC-4918, WebDAV) is possible thanks to the CustomHttpMethod annotation.
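To make this concrete, here is a minimal sketch of a Micronaut controller using a URI template. The BookController class, the /books path, and the lookup logic are illustrative assumptions, not taken from the post:

```java
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

// Illustrative controller; the /books route is made up for this sketch.
@Controller("/books")
public class BookController {

    // {id} is a URI template variable (RFC-6570) bound by name to the argument.
    @Get("/{id}")
    public String findById(Long id) {
        return "book-" + id;
    }
}
```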
Serving HTTP endpoints wouldn't be complete without exception handling, and the framework provides a solution for this as well. We can use predefined error handlers or override them. Additionally, we can provide handlers for custom exception types too. You can even find a dedicated
ErrorResponseProcessor that produces an error response body. While the default error format is vnd.error, we can use the
application/problem+json format based on Zalando's Problem library thanks to the problem-json plugin.
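As an illustration, a handler for a custom exception type could look roughly like this. The BookNotFoundException type and the response body are assumptions made for the sketch:

```java
import io.micronaut.http.HttpRequest;
import io.micronaut.http.HttpResponse;
import io.micronaut.http.server.exceptions.ExceptionHandler;
import jakarta.inject.Singleton;

// Hypothetical domain exception, used only for this sketch.
class BookNotFoundException extends RuntimeException { }

// Registered as a bean; Micronaut applies it to the matching exception type.
@Singleton
public class BookNotFoundHandler
        implements ExceptionHandler<BookNotFoundException, HttpResponse<?>> {

    @Override
    public HttpResponse<?> handle(HttpRequest request, BookNotFoundException e) {
        // Translate the domain exception into a 404 response.
        return HttpResponse.notFound("No such book");
    }
}
```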
Although Micronaut is written with microservices in mind and doesn't fully support the MVC model, you can nonetheless use the Micronaut Views extension, which provides template engine integration with server-side view rendering. The supported engines include Thymeleaf, Velocity, Freemarker, Rocker, Pebble, Soy, and Handlebars.
If you'd like to use the Servlet API for any specific reason, it is possible with the Micronaut Servlet plugin. As the documentation states, all non-Netty-specific features of the default HTTP server should work here too. In addition, the extension improves the handling of multipart requests and simplifies I/O based on the Micronaut interfaces. We can use Jetty, Tomcat, or Undertow as the server.
For data serialization, the Jackson library is used. JSON is the default format of data returned from endpoints; however, we can use XML as well (with a dedicated add-on for Jackson XML). Created endpoints may also serve files (the media type is derived from the transferred file name). In addition, the framework supports JSON streaming on both sides. While this is nothing unusual on the server side (every reactive endpoint may produce such a stream), the provided HTTP client also offers an API for subscribing to a JSON stream.
De-/serialization of data in endpoints relies on the bean introspection mechanism. The other option is to provide custom de-/serializers for the Jackson library. We get excellent support for everything you'd need while developing an HTTP service: various options for type binding, request body parsing, file upload, data validation, and error handling. It's rather pointless to describe all of this in detail here, but if you're looking for more information, just check the latest Micronaut documentation.
When discussing web and REST endpoints, we cannot overlook documenting them. Therefore, I have to mention the integration with the OpenAPI standard. It provides many annotations and generates an OpenAPI specification based on them. For the rendered document, we can provide a view using Swagger UI, Redoc, or RapiDoc. We can even create a PDF file from the spec using RapiPdf.
Quarkus, like Micronaut, offers decent support for creating microservices. With Netty operating under the hood, we can make the services reactive (based on integration with the Mutiny library).
REST services use RESTEasy, based on the JAX-RS standard; however, the development slightly differs from Jakarta EE. Besides the standard HTTP methods, we can define a custom one for an endpoint by defining a proper annotation.
We can enable support for Servlet API with the Undertow extension.
In terms of supported media types, we find everything we need here. We can handle plain text, JSON, and file content. Additionally, we can serve HTML using the Qute template engine. JSON is the default data format for endpoints when the Jackson or JSON-B libraries are on the classpath. We can always specify the media type explicitly using the
Consumes annotation. On the other hand, we can switch off the automatic JSON default in the configuration, and then endpoints will use content negotiation to determine the media type. There is also JAXB support for returning XML data from implemented endpoints.
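A minimal sketch of such an endpoint might look as follows. The resource name and path are made up; the annotations come from the JAX-RS API that Quarkus 2.x ships with:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Illustrative resource; the /greetings path is made up for this sketch.
@Path("/greetings")
public class GreetingResource {

    // Explicit media type; with Jackson or JSON-B present, returning a POJO
    // from a method producing APPLICATION_JSON would yield JSON automatically.
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello";
    }
}
```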
JSON serialization relies on the Java reflection mechanism. Thus, when using GraalVM, all involved classes have to be registered. Quarkus does this step automatically when we return data types directly from endpoints. However, this mechanism is disabled when using the Response instance, since Quarkus cannot determine the payload type at build time. Therefore, we may need to annotate data type classes with RegisterForReflection.
For calls that end with errors, we can throw a JAX-RS exception automatically mapped to the adequate response, or we can throw a custom exception and provide a dedicated (native or global) exception handler that translates it to an HTTP response.
We can define custom request and response filters in two ways — using Quarkus annotations or in the JAX-RS manner. In addition, the framework has a predefined CORS filter. Finally, we can find built-in features like GZip, HTTP/2, data streaming, and multipart content types (the last one requires an additional extension). The documentation provides detailed information about all HTTP features.
To document an HTTP/REST API, we can use the SmallRye OpenAPI plugin. Interestingly, we don't have to use annotations to generate the OpenAPI specification for existing endpoints. We just need to add the proper dependency to the classpath, and that's it. We can serve static specs as well. The UI for visualizing the spec is Swagger UI, which comes bundled with the extension.
For those acquainted with Spring Data REST, Quarkus has a similar solution. In the previous blog post, I mentioned the Panache extension that provides simpler access to a database with Quarkus. Thanks to this, we can expose entities and repositories as basic CRUD endpoints using the experimental REST Data with Panache extensions for Hibernate or MongoDB.
The endpoints are generated for all resources having dedicated interfaces based on the JAX-RS standard. The returned data format is JSON or a hypermedia-driven representation, i.e. HAL. We can customize the created endpoints by enabling pagination, the HAL format, or providing a custom path. We can also choose which resources should be exposed.
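A repository-backed resource could be declared roughly like this, assuming an existing Book Panache entity; the path and flags are illustrative:

```java
import io.quarkus.hibernate.orm.rest.data.panache.PanacheEntityResource;
import io.quarkus.rest.data.panache.ResourceProperties;

// Book is assumed to be a Panache entity defined elsewhere in the project.
// Quarkus generates CRUD endpoints under /books from this interface.
@ResourceProperties(path = "books", hal = true, paged = true)
public interface BookResource extends PanacheEntityResource<Book, Long> {
}
```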
While the extension may be compelling, you should know that providing such endpoints tightly couples your REST API to the database structure. Unfortunately, while this approach allows creating CRUD REST endpoints rapidly, it exposes our domain model to the outside world (which may not be a good thing, after all).
For Micronaut, we have two types of clients available. The first is a low-level HTTP client, a framework-provided bean. All the API required to handle request sending uses classes from the framework. By default, the client uses Jackson to handle JSON data. If we'd like to use another format, we need to provide a dedicated codec. Finally, the client supports form and multipart data, and we can work with streamed JSON.
The second is a declarative client, built on top of the first one. Thus, it supports all the features mentioned above. This type of client is an interface or an abstract class with annotations describing requests (along with query params and headers). We can use a retry mechanism and circuit breaker as well as fallbacks. The compiler creates its instance based on the provided annotations, and we can use such a client as a standard bean at runtime.
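A declarative client might be sketched like this; the service id, path, and retry settings are assumptions for the example:

```java
import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.annotation.Client;
import io.micronaut.retry.annotation.Retryable;

// Illustrative declarative client; the "books" service id and paths are made up.
@Client("books")
public interface BookClient {

    // The compiler generates the implementation; @Retryable adds a retry policy.
    @Retryable(attempts = "3")
    @Get("/books/{id}")
    String findById(Long id);
}
```

The interface can then be injected like any other bean wherever an HTTP call to that service is needed.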
Both clients may use request filters and support reactive streams.
Quarkus has an extension providing a client based on the RESTEasy library when we need to make an HTTP call in an application. The client is declarative, and its definition is pretty simple. We need an interface (annotated with
RegisterRestClient) with methods defining paths, query params, and headers (with the application of JAX-RS and MicroProfile annotations). We can use dedicated add-ons enabling serialization based on the Jackson, JSON-B, and JAXB libraries. It is possible to use multipart data as well. The REST client supports async and reactive calls by returning CompletionStage or Uni instances (the latter comes from the Mutiny library).
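A minimal sketch of such a client; the configKey, paths, and types are illustrative:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// Illustrative client; the "book-api" configKey and /books path are made up.
@RegisterRestClient(configKey = "book-api")
@Path("/books")
public interface BookClient {

    @GET
    @Path("/{id}")
    String findById(@PathParam("id") Long id);
}
```

The client is then injected with the MicroProfile @RestClient qualifier, and its base URL is set via configuration (the spec-defined `book-api/mp-rest/url` property).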
Both frameworks offer a declarative way of defining the server and client sides in terms of WebSocket support. It allows focusing on providing the business logic instead of dealing with the technical details of WebSockets.
When creating the server side, we need to create a class with methods handling opening, closing, messages, and communication failures, and mark them with annotations. The client part of WebSockets is simple as well. Again, we need to provide a class with the proper annotation, and it should have methods responsible for opening a connection and handling a received message.
Quarkus provides an implementation of the Jakarta WebSocket standard.
Micronaut, on the other hand, uses its own annotations. Additionally, we can handle WebSockets imperatively. The framework offers dedicated beans for session handling and message broadcasting. I haven't found such an option in the Quarkus framework, though.
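For instance, a broadcasting server endpoint could be sketched as follows; the /chat/{topic} route and the message format are made up:

```java
import io.micronaut.websocket.WebSocketBroadcaster;
import io.micronaut.websocket.annotation.OnClose;
import io.micronaut.websocket.annotation.OnMessage;
import io.micronaut.websocket.annotation.OnOpen;
import io.micronaut.websocket.annotation.ServerWebSocket;

// Illustrative chat endpoint; the route is made up for this sketch.
@ServerWebSocket("/chat/{topic}")
public class ChatServerWebSocket {

    private final WebSocketBroadcaster broadcaster;

    public ChatServerWebSocket(WebSocketBroadcaster broadcaster) {
        this.broadcaster = broadcaster;
    }

    @OnOpen
    public void onOpen(String topic) { }

    // Broadcasts each received message to all connected sessions.
    @OnMessage
    public void onMessage(String topic, String message) {
        broadcaster.broadcastSync("[" + topic + "] " + message);
    }

    @OnClose
    public void onClose(String topic) { }
}
```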
If you're looking for support for Server-Sent Events (part of the W3C's HTML5 specification), you won't be disappointed. Both frameworks provide mechanisms to implement push endpoints.
Micronaut handles SSE using its Event API. So the only thing we need to do is create an endpoint that provides data in the form of a Publisher emitting Micronaut's Event objects. The media type of the returned content should be text/event-stream.
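A sketch of such an endpoint, assuming the Reactor integration is on the classpath; the route and payloads are illustrative:

```java
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.Produces;
import io.micronaut.http.sse.Event;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;

import java.time.Duration;

// Illustrative SSE endpoint; the /ticks route is made up for this sketch.
@Controller("/ticks")
public class TickController {

    // Emits one event per second; Micronaut serializes each as an SSE frame.
    @Produces(MediaType.TEXT_EVENT_STREAM)
    @Get
    public Publisher<Event<String>> ticks() {
        return Flux.interval(Duration.ofSeconds(1))
                   .map(i -> Event.of("tick-" + i));
    }
}
```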
The situation is simple in the Quarkus framework too. We start with an endpoint returning a Mutiny Multi instance, indicating that the provided content should be handled as server-sent events.
For GraphQL, both frameworks have dedicated extensions. Micronaut uses the
micronaut-graphql module. Since the add-on provides a controller class exposing an endpoint for GraphQL queries, our task is to configure a GraphQL bean, i.e. load the schema and bind methods to query calls. The configuration can be made using three different GraphQL libraries: GraphQL Java, GraphQL Java Tools, or GraphQL SPQR.
We can enable subscriptions on the endpoint by allowing queries over WebSockets. Additionally, we get the GraphiQL IDE to explore GraphQL.
The extension for Quarkus uses the SmallRye implementation of the MicroProfile GraphQL specification. Unlike in Micronaut, we need to provide an endpoint exposing GraphQL queries. It is quite similar to creating a standard REST endpoint. In this case, we need to annotate the class with
GraphQLApi and call services providing data. The framework generates a GraphQL schema based on the returned types. Quarkus delivers GraphiQL too; however, it is only an experimental feature. The extension supports WebSockets and uses the GraphQL Java library under the hood.
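A minimal sketch of such an API class; the query name and returned data are made up:

```java
import org.eclipse.microprofile.graphql.GraphQLApi;
import org.eclipse.microprofile.graphql.Name;
import org.eclipse.microprofile.graphql.Query;

// Illustrative GraphQL endpoint; the "book" query and payload are made up.
@GraphQLApi
public class BookGraphQLApi {

    // Exposed as the "book" query; the schema is generated from the return type.
    @Query("book")
    public String book(@Name("id") Long id) {
        return "book-" + id;
    }
}
```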
Both frameworks support gRPC calls through dedicated modules.
Let's start with Quarkus. It can work with gRPC classes from our project sources or use some coming from project dependencies (via the Jandex index). In addition, the code is generated with no need for an external Maven/Gradle plugin. However, for Maven, it is still possible to use protobuf-maven-plugin instead.
We can inject the generated gRPC services directly or implement their interfaces independently. In both cases, we need to use the
GrpcService annotation. We can return a response in two ways: using types of the Mutiny library as method return values or with the StreamObserver class from the gRPC API. We can even specify that the business logic is blocking, so the framework will run it on a worker thread instead of an event loop. On the client side, we can inject a service definition with the
GrpcClient annotation.
What does the gRPC support look like on the Micronaut side? First, we define data types and services in a separate protobuf file. Then, an external Maven/Gradle plugin generates classes from the definition during the compilation phase. Next, the gRPC server and clients may be configured using a configuration file or programmatically with a bean creation listener. The server side is automatically configured, with all services, interceptors, and transport filters injected. On the other hand, client stubs have to be provided manually as Micronaut beans using a bean factory.
We can use a service discovery mechanism when injecting a gRPC-managed channel. By default, we'll use the
NameResolver class of gRPC; however, we can use Consul or Eureka for this (with the additional framework extension). In addition, we can switch from gRPC's default OpenCensus to Micronaut's integration with Jaeger or Zipkin for distributed tracing. And finally, we can enable support for the
application/x-protobuf media type in the Micronaut HTTP server.
As the documentation states, Micronaut was designed from the ground up to build cloud microservices. It borrows and is inspired by some concepts from Grails and Spring; thus, it should be easier to pick up for developers with experience in those frameworks.
Micronaut automatically tries to detect the environment it runs in and sets the value of the
env property based on that. It is possible to have multiple environments active simultaneously, e.g. AWS and Kubernetes.
It supports various solutions for distributed configuration. At the moment of writing this, the list of available integrations covers:
- HashiCorp Consul with support for key/value pairs, blobs (like YAML, JSON, etc.), and file references based on git2consul,
- HashiCorp Vault,
- Spring Cloud Config,
- AWS Parameter Store (with secure information support),
- Oracle Cloud Vault,
- Google Cloud Pub/Sub,
- Kubernetes supporting YAML, JSON, properties, or literals (with plenty of configuration options).
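As an example of wiring one of these sources up, Consul-backed configuration could be enabled in bootstrap.yml roughly like this; the application name and Consul address are assumptions for local development:

```yaml
# bootstrap.yml — minimal sketch of Consul-backed distributed configuration
# (the application name and Consul address are illustrative)
micronaut:
  application:
    name: books
  config-client:
    enabled: true
consul:
  client:
    defaultZone: "localhost:8500"
```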
With service discovery, we also have a few options available. First, we can utilize the discovery-client extension. The discovery client can work with Consul and Eureka. We can interact with the client directly, or — which is the preferred way — use the
Client annotation with the name of a service. In the latter case, the discovery happens automatically. The extension makes it possible to customize various aspects of registration.
The other option is using the service discovery provided by Kubernetes. We can use two discovery modes (service and endpoint) with live watching for changes in their respective resources. The
Client annotation then uses the names of defined services and endpoints.
The next option is to use AWS Route 53, which works with the DiscoveryClient API like the previous solutions and supports health checks.
The last possibility is manual service discovery based on configuration entries. That's the easiest way of providing service discovery, and it can even include some health checks (disabled by default, managed in a separate thread by Micronaut).
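Such manual discovery could be sketched in configuration like this; the service id and URLs are illustrative:

```yaml
# application.yml — manual service discovery via configuration
# (the "books" service id and URLs are made up for this sketch)
micronaut:
  http:
    services:
      books:
        urls:
          - "http://localhost:8085"
          - "http://localhost:8086"
        health-check: true
```

A declarative client annotated with the "books" service id would then load-balance across the listed URLs.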
Service discovery emits a list of available service instances. By default, Micronaut performs client-side load balancing using the round-robin strategy. However, a custom implementation may override it. The Netflix Ribbon extension is such an example: it provides a different load balancing implementation based on the external library. It is more flexible than the standard one, and we can configure it using Ribbon's configuration options (globally or per client).
Since Micronaut is a framework for microservices, it also supports distributed tracing. The framework provides its own annotations to handle spans and offers instrumentations (like HTTP filters) to ensure the span context is propagated between threads and microservices. Additionally, the framework provides integrations with the OpenTracing API based on Zipkin or Jaeger. Both can be adjusted to our needs through configuration.
With critical features like quick startup time, low memory footprint, and a compile-time approach, Micronaut is a decent match for implementing serverless functions. We can find support for Azure Functions, AWS Lambda, Oracle Functions, and Google Cloud Functions, among other integrations. Additionally, any FaaS platform running functions as containers is supported as well.
The support for functions is twofold. First, we can implement simple functions that involve a dedicated SDK delivered by FaaS providers like AWS, Azure, Oracle, or GCP. Those functions are considered low-level, use DI on fields only, and require no-args constructors. The other type of function exposes controllers defined in Micronaut applications. These are HTTP functions. For GCP and AWS, all endpoints are exposed automatically. For Azure, we need to provide a function routing a request to a proper controller. In the case of the Oracle HTTP function, we need to configure request routing in the cloud console.
For most serverless providers (if not all), we can use the GraalVM native-image. Thanks to this, we can create smaller images of responsive serverless functions utilizing fewer resources.
An example of an environment running a containerized application is Google Cloud Run. For instance, it may run a Micronaut application containerized with Jib.
The above features aren’t a complete list when discussing integrations with cloud providers. Every integration has its own additional features.
On the guides page, you can find tutorials on deploying applications to a cloud, sometimes with external build tool plugins or a provider web console. For instance, we can deploy an application to the Azure cloud using a dedicated azure-webapp plugin for Maven or Gradle. The other example may be deploying to AWS Elastic Beanstalk.
Besides the lambdas, the AWS extension supports building Alexa Skills using HTTP services, even with SSML. The newly created skills can be deployed as a lambda or a web service.
Next, we have the AWS SDK integrated as well. Some of the clients and their builders from the SDK are available as CDI beans. The ones that are not available as beans require a factory class (like AWS Rekognition). If you are looking for a higher-level API, the Agorapulse add-on may be the answer.
Additionally, since Micronaut has been verified against it, it is possible to run applications on Amazon Corretto, a free LTS OpenJDK distribution.
As with AWS, the extension for Google Cloud Platform offers far more than serverless support.
We have logging support with many configuration options; it uses the Stackdriver logging format. Next, we can integrate with Cloud Trace. Finally, based on the GCP HTTP client add-on, we can set up authorization of service-to-service communication.
We can integrate with the Pub/Sub messaging service. Implementing the communication is similar to what's present in the Kafka or RabbitMQ extensions. We create publishers declaratively, providing interfaces marked with proper annotations; at compile time, the framework generates the implementations. Listeners, on the other hand, are classes with appropriate annotations. By default, the extension provides automated SerDes that reads messages as JSON data and writes to the wire based on the Content-Type header. For error cases when receiving data, we can define an exception handler. There is even a way of de-/serializing data to a custom MIME type.
While we can access the Secret Manager with the distributed configuration, the extension also provides a low-level client to read the storage.
We can utilize a dedicated extension to connect with Oracle Cloud. The integration supports four types of authentication providers. We can also connect with the Autonomous Database (it uses Oracle Wallet to store credentials). Another available feature is the Micrometer integration for the OCI Monitoring service to monitor cloud resources. With this extension, we can also replace the default tracing with OCI Application Performance Monitoring.
In addition to service discovery and distributed configuration, the Kubernetes extension provides health checks probing communication with the API and delivering detailed data about the application's pod.
With the Kubernetes client extension, we can access its Java SDK classes as CDI beans. The authentication is pre-configured based on the environment settings and can be tweaked with configuration properties. Moreover, the communication supports a reactive style based on the RxJava 2 or Reactor projects.
We can integrate with Kubernetes Informers as well. Thanks to this, it is possible to monitor resources of a specific type.
Quarkus offers the Funqy framework for writing serverless functions for various FaaS providers. It works with AWS Lambda, Azure Functions, Google Cloud Functions, Knative, and Knative Events.
Since it spans multiple providers, its API is very small and simple. It supports blocking and async styles of programming. With Funqy, we can adjust the names of created functions and use dependency injection. For some FaaS providers, it's possible to inject the event context.
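A Funqy function can be as small as this sketch; the function name and greeting are made up:

```java
import io.quarkus.funqy.Funq;

public class GreetingFunction {

    // Exported as a function named after the method; Funqy maps the
    // provider's event payload to the String argument.
    @Funq
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```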
This extension aims to provide a simple API allowing the creation of functions easily portable across various providers. If you need specific features of a given cloud environment, you need to use a dedicated integration. On the other hand, Funqy may be worthwhile when you are testing some ideas for serverless functions or need to deliver a simple endpoint when time is crucial.
It has a dedicated binding for HTTP functions — Quarkus Funqy HTTP. The important fact is that it is not a replacement for REST over HTTP but aims to deliver simple definitions of HTTP endpoints. The simplicity of the extension means no specialized features like cache-control or conditional GETs.
Apart from the Funqy extension, we can deploy functions using the FaaS provider API directly. Quarkus provides two types of plugins when it comes to AWS Lambda. The first one is for building simple functions. They can be deployed to the Amazon Java Runtime or, as a native executable, to Amazon's Custom Runtime with a smaller memory footprint and faster startup.
We can bundle as many lambdas into the deployable artifact as we want; however, we should point out in the configuration which one should be deployed. The lambda extension can run a mocked AWS Lambda event server when working in dev or test mode, making development simpler.
The second type is for HTTP functions. They can be written based on any Quarkus HTTP framework (like JAX-RS, Reactive Routes, and so on). It is possible to deploy this type of lambda with the AWS Gateway HTTP API or AWS Gateway REST API.
Additionally, both extensions generate deployment files in the format of the Amazon SAM framework.
The Azure Functions add-on allows deploying HTTP serverless functions based on RESTEasy, Undertow, Vert.x, or Funqy HTTP. It provides a generic bridge between the Azure runtime and the provided endpoints. It supports text-based media types only and is in preview mode.
We have a dedicated extension for Google Cloud Functions as well; however, it is in preview mode. It offers three types of functions:
- HttpFunction, handling HTTP requests,
- BackgroundFunction, processing storage events,
- RawBackgroundFunction, for Pub/Sub events.
There is yet another add-on for HTTP Google Cloud Functions. It is provided in preview mode and enables the deployment of functions based on JAX-RS, Vert.x, Servlet API, or Funqy HTTP.
We can extend configuration sources with distributed configuration as well. Quarkus has three extensions on this topic. The first one is for Kubernetes and applies to the content of ConfigMaps and Secrets. It reads the data using the Kubernetes Client and works with literals and files (properties and YAML).
The second extension allows reading configuration from Spring Cloud Config. No code is required to enable this feature; you just need to set a couple of configuration properties, and that's it.
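The setup could be sketched as follows; the config server URL is an assumption for local development:

```properties
# application.properties — minimal sketch (the URL is illustrative)
quarkus.spring-cloud-config.enabled=true
quarkus.spring-cloud-config.url=http://localhost:8888
```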
The last extension is Quarkus Consul Config, and it works with the key-value store.
What about service discovery? Quarkus gained a new extension integrating SmallRye Stork. This project is a framework for service discovery and client-side load balancing. It works with Consul, Eureka, and Kubernetes; however, Stork is extensible and can work with a custom implementation too.
SmallRye Stork provides client-side load balancing strategies as well. It offers two ways of selecting a service (round-robin and response time), leaving room for a custom implementation.
The Stork extension looks like something the Quarkus world has really been missing. The only concern I would have is that it is pretty new, and Stork itself is in a beta version at the moment.
There is also a naive approach to providing client-side load balancing, which is a custom implementation of the
ClientRequestFilter. It's doable; however, it is not the most efficient way.
For distributed tracing, Quarkus offers two extensions: OpenTracing and OpenTelemetry. The former uses the Jaeger tracer, and it is automatically applied to all existing REST endpoints. Nonetheless, if we need to, we can trace non-REST calls too. OpenTracing provides additional instrumentation as well; the Quarkus documentation mentions technologies like JDBC, Kafka, or MongoDB. We can even run tracing in a Zipkin compatibility mode.
The latter integrates OpenTelemetry and works with Jaeger as well. It is possible to set up an ID generator, propagators, resources, and samplers.
Like in Micronaut, these are not all features and extensions regarding cloud systems development with Quarkus.
Concerning support for AWS, there is an integration with SDK v2. It uses the URL Connection Client or Apache HTTP Client under the hood for blocking calls. It is possible to use the async programming model based on CompletableFuture and the Netty HTTP Client. The extension provides several service clients (like DynamoDB, KMS, S3, SES, SNS, SQS, Secrets Manager, and Systems Manager) as CDI beans we can inject into our code. There is also a list of properties we can change in the application configuration for each of them.
Additionally, I have found two more Quarkus extensions for AWS services. The first one aims at making the Amazon Alexa SDK work with native executables. The second provides support for sending logs to Amazon CloudWatch.
For the Azure cloud, the documentation describes the deployment of our Docker images to three different services: Container Instances, Kubernetes Service, or App Service on Linux Containers. Unfortunately, I have found nothing more in the documentation regarding this cloud provider.
A similar situation is with GCP. We can find a description of deploying an application to App Engine, App Engine Flexible Custom Runtimes, and Google Cloud Run. The first applies to jars, while the last two apply to Docker images. In addition, the GCP guide provides a section dedicated to configuring the Cloud SQL integration.
Regarding GCP, we have dedicated add-ons hosted in Quarkiverse. These offer support for BigQuery, Bigtable, Firestore, Pub/Sub, Secret Manager, Spanner, and Storage.
The Kubernetes deployment plugin offers generation of the Kubernetes manifest file, setting env variables based on the Secret and ConfigMap integration, and support for the Service Binding feature. In addition, we can add readiness and liveness probes based on the SmallRye Health extension.
As I've mentioned above, Quarkus comes with a plugin providing the Kubernetes Client. It enables the utilization of Kubernetes Operators. Moreover, we can find an extension simplifying the testing of implemented operators. Finally, if the target Kubernetes cluster runs on OpenShift, we can use a dedicated client extension similar to the one above.
In Quarkiverse, we can find the Quarkus Operator SDK, another plugin that simplifies work with operators, based on the Java Operator SDK.
The last plugin from the Kubernetes family is Funqy Knative Events, supporting routing and processing of Cloud Events on the Knative platform. It makes it possible to configure event processors with configuration properties or annotations, allowing programmers to define triggers, response sources, and types. Processors work with JSON data provided in the String format.
Among other extensions, we can find support for Red Hat OpenShift. It focuses on producing OpenShift resources and deploying them as S2I containers. However, it can also work with Docker and Jib images. This add-on makes it possible to use Knative through OpenShift Serverless.
While I focused on only two topics in this post — web and cloud features — there is a lot of content to read and grasp. So I hope I haven't omitted anything important. Both frameworks provide decent support in these two areas, and you can clearly see both are focused on delivering modern microservices.
Which one is better? There is no clear answer to such a question, and I wouldn't point to a "winner" here. However, Micronaut looks better regarding the stability of available extensions and more detailed documentation. While Quarkus offers similar features, some are still in preview mode. That said, I can recommend both frameworks for your next web project.