In search of the ideal Rust microservice template
Despite Rust’s rise in popularity and its recent wins in the Stack Overflow developer survey, it’s still often seen primarily as a systems language. Personally, I admire Rust’s style and conciseness, and I believe it has great potential beyond traditional systems programming, particularly in web development, where RESTful web services dominate much of our daily work. One reason Rust isn’t yet a go-to choice for many software houses and corporate IT divisions is its steep learning curve. Rust can feel intimidating at first, especially to developers new to its unique approach. However, I see no reason why it can’t be used effectively in areas requiring efficient, reliable programming.
This post aims to challenge that perspective, encouraging you to step outside the familiar and try Rust on your servers and Kubernetes clusters, wherever your daily programming needs take you. I’ll admit a bias here—my own experience, along with that of many colleagues I know, has centred on writing business logic embedded within microservices running across multiple instances on Kubernetes clusters (though these weren’t always Kubernetes).
I believe that sharing simple, ready-to-use templates can help accelerate the growth of the Rust community. And, as with any programming language, community support is one of its most valuable assets, arguably even more than its technical features like memory management and speed.
Like many programmers beginning with Rust today, I’m new to writing production-ready web services in it, though I have a wealth of experience in building microservices in general. For this template, I’ve selected libraries that I believe offer a balance of functionality and simplicity; however, I recognize that my choices may not be universally loved. This is just a starting point—I may refine this template over time or create alternative versions with different libraries as needs evolve. Feedback, suggestions, pull requests, or even entirely new templates are always welcome.
A Subjective List of Desired Properties for Our Microservice Template
While I haven’t conducted any formal research, it seems fair to assume that most microservices we build today are running on Kubernetes clusters, often hosted by popular cloud providers like Google, Amazon, or Azure. However, the template we’re building here should be adaptable across different environments, with the only essential requirement being that it’s easy to containerize with Docker.
Dockerizing our microservice is just one example, but you’ll notice that my choices for the key properties in this template—listed below—are quite subjective.
- Ease of Building: The microservice should be straightforward to build, ideally using a mature build tool that allows quick testing, releasing, and seamless CI/CD integration. I believe `cargo` fits this requirement well, though it’s true that large codebases can lead to longer build times. Fortunately, there are ways to mitigate this, which I may cover in a future article.
- Configurable Web Framework: Our HTTP-serving framework should be easily configurable. This requirement is likely met by most of the Rust web frameworks available today.
- Library Integration: The web framework should support easy integration with other libraries, particularly those for persistence.
- Persistence with Migration Support: The chosen persistence library should enable straightforward database schema migrations, complete with rollback capabilities.
- Logging and Tracing: The service should feature accessible, reliable logging and tracing.
- Scalability: The framework should handle varying loads effectively, supporting the scalability needs of our application.
- Documentation and Community Support: It’s essential to have comprehensive, widely available documentation, examples, and an active community for support.
And that’s how we landed on Axum as our framework of choice!
Why Axum?
In two words: modern and efficient. While some might prefer frameworks like Actix or Rocket, or see potential in the numerous web frameworks emerging in the Rust community each year (a positive sign of growth!), Axum ticks all the boxes for me—at least for now.
- Asynchronous by Design: Axum is built on Tokio, a widely used asynchronous runtime in Rust. This makes Axum inherently non-blocking and optimised for high throughput and low latency, ideal for I/O-intensive applications.
- Type-Safe Error Handling: Axum leverages Rust’s robust type system to catch errors at compile time, reducing runtime issues and making it more reliable for production systems.
- Middleware Support with Tower: Axum supports middleware via Tower, a powerful library for building robust network services. This simplifies adding cross-cutting features such as logging, authentication, and metrics.
- Strong Community and Ecosystem: Being built on Tokio means Axum benefits from a thriving ecosystem and smooth integration with other libraries in the Tokio ecosystem, backed by community support and continuous improvements.
Of course, Axum has its downsides. It’s relatively young, less opinionated (which is why we’re building this template), and its API stability isn’t yet on par with some of the older frameworks. However, for the purposes of this blog post and the template we’re creating, these drawbacks are manageable. If there are particular challenges you’ve faced with Axum in production, especially blockers, I’d love to hear your insights.
Choosing a Persistence Library
An important choice here is the persistence library for database integration. Among several options in the Rust ecosystem, I’ve chosen `sqlx`, primarily for its simplicity and the availability of a CLI, which makes database management more convenient.
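To give a feel for that CLI, here is the typical workflow with `sqlx-cli` (installed as a cargo subcommand); the migration name below is just an example:

```shell
# Install the CLI (Postgres support only, to keep the build small)
cargo install sqlx-cli --no-default-features --features postgres

# Create the database pointed to by DATABASE_URL
sqlx database create

# Add a reversible migration: generates paired .up.sql / .down.sql files
sqlx migrate add -r create_cars

# Apply pending migrations, or roll the latest one back
sqlx migrate run
sqlx migrate revert
```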
Setting up the foundation
The simplest way to get started with this template is to copy it over, rename it, and you’re ready to go. However, if you’d prefer to pull specific components into your own project, you can start from scratch as well. For those new to Rust who want to build from the ground up, the initial goal is to set up the project with all necessary dependencies and database migrations running. Once that’s in place, you can easily integrate any features you find useful from the template as you go.
- Set up the project
We start with a standard Rust project creation. Use `cargo` to create the foundation for our template:

```shell
cargo new your-project-name
```
- Add axum and other dependencies
I won’t list them all here; please have a look at the source code for this skeleton app, available on GitHub.
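To sketch the dependency set, a `Cargo.toml` along these lines covers the stack discussed in this post. The version numbers and feature flags here are illustrative; check the repository for the exact, up-to-date list:

```toml
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
sqlx = { version = "0.8", features = ["runtime-tokio", "postgres", "migrate"] }
serde = { version = "1", features = ["derive"] }
utoipa = { version = "5", features = ["axum_extras"] }
utoipa-swagger-ui = { version = "8", features = ["axum"] }
dotenvy = "0.15"
anyhow = "1"
tracing = "0.1"
tracing-subscriber = "0.3"
redis = { version = "0.27", features = ["tokio-comp"] }
```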
We are going to create a very simple application in which you will be able to run some basic operations on two entities (cars and car parts). Later we will add Redis caching so that some read operations will serve data from the caching layer if available.
- Create db layer foundation
For the sqlx persistence to work we need a database and tables. For the database server we will use a Docker Compose script (more on that later) with Postgres, and for the tables we have to create migrations in our project directory.
…sql up/down scripts
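As a hypothetical illustration of what such a pair might contain (the table and column names here are my own stand-ins, not the template’s actual schema), `sqlx migrate add -r` produces matching up/down files:

```sql
-- migrations/0001_create_cars.up.sql
CREATE TABLE cars (
    id   SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    year INT NOT NULL
);

-- migrations/0001_create_cars.down.sql
DROP TABLE cars;
```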
Add a database with Docker Compose. Our Docker Compose file contains two services: one is the database server and the other is a Redis server for our cache.

```shell
docker-compose -f docker-compose.yaml up -d
```
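A minimal `docker-compose.yaml` for this setup could look as follows; the image tags, credentials, and database name are illustrative placeholders, so align them with your own `.env` entries:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: cars
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```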
- Add `dotenvy` and create a `.env` file
For handling environment variables we will use the very popular `dotenvy` lib. Create a `.env` file in the project root directory and add your entries there for local development. In production, of course, these values will be read from the secrets/globals or default values your cluster provides.
Env file content with db and cache url
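For local development, a minimal `.env` could look like the following; the URLs are placeholders that should match whatever your Compose file actually exposes:

```
DATABASE_URL=postgres://postgres:postgres@localhost:5432/cars
REDIS_URL=redis://localhost:6379
```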
- Migrate the data
Once you have your db server running you can use the sqlx CLI to migrate your data or, better yet, run the migrations programmatically whenever your application starts:
```rust
pub async fn run_migrations(config: &Config) {
    let db_pool = Arc::new(postgres::db_connect(config).await);
    if let Err(e) = sqlx::migrate!().run(&*db_pool).await {
        panic!("Failed to run database migrations: {:?}", e);
    }
}
```
At this point, we have a basic, mostly empty web application, but at least our database is up and running. To transform this into a functional microservice, we need to add some core functionality. We’ll begin by organizing the app structure to ensure it’s easy to extend and maintain as we add real features. Once the structure is in place, we’ll add a few endpoints and incorporate Swagger to streamline our API documentation process.
Project Structure
While this isn’t a definitive guide to structuring a web service, this approach has worked well for me (at least for now) and is still a work in progress. The code is organized into modules, setting up a clear flow for handling requests in the application as follows:
(Routes) -> controllers -> services -> repositories
- `router` module: This module integrates Utoipa (for Swagger) with our application routes.
- `app` module: Here, everything connects through Axum’s layer/extension mechanism, linking routes to the underlying business logic.
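A source layout along these lines makes that flow easy to follow; the exact file and directory names below are my guess at a typical arrangement, so check the repository for the real one:

```
src/
├── main.rs          // entry point: config, tracing, server startup
├── app.rs           // wires routes, layers, and extensions together
├── router.rs        // Utoipa/OpenAPI route registration
├── controllers/     // HTTP handlers (extract params, call services)
├── services/        // business logic
└── repositories/    // sqlx-backed persistence
```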
Extensions
In Axum, extensions allow sharing state or data between middleware, handlers, and services. This is especially useful for managing common resources like database connections, configuration data, or business logic components. We add these layers in the `app` module as follows:
```rust
pub async fn create_app(config: &Config) -> Router {
    let _ = run_migrations(config).await;
    let car_repository = Arc::new(create_car_repository(config).await);
    let part_repository = Arc::new(create_part_repository(config).await);
    let cache = Arc::new(create_cache(config).await);
    router()
        .layer(
            TraceLayer::new_for_http()
                // Create our own span for the request and include the matched path. The matched
                // path is useful for figuring out which handler the request was routed to.
                .make_span_with(|req: &Request| {
                    let method = req.method();
                    let uri = req.uri();
                    // axum automatically adds this extension.
                    let matched_path = req
                        .extensions()
                        .get::<MatchedPath>()
                        .map(|matched_path| matched_path.as_str());
                    info_span!("request: ", %method, %uri, matched_path)
                })
                // By default `TraceLayer` will log 5xx responses but we're doing our specific
                // logging of errors so disable that
                .on_failure(()),
        )
        .layer(Extension(car_repository))
        .layer(Extension(part_repository))
        .layer(Extension(cache))
}
```
In this setup, we add extensions for the `car_repository`, `part_repository`, and a Redis `cache`. These are then available as arguments to route methods in our controllers, like so:
```rust
pub async fn view(Path(car_id): Path<i32>, Extension(repo): CarRepoExt, Extension(cache): CacheExt) -> Result<AppJson<Car>, AppError>
```
Extensions can be passed to the service layer to execute business logic, enabling efficient state sharing.
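That hand-off can be sketched in a deliberately simplified, synchronous form. The real template uses async fns and sqlx-backed repositories; the `Car`, `CarRepository`, and `view_car` names below are stand-ins for illustration only:

```rust
use std::sync::Arc;

// A stand-in entity; the template's version also derives Serialize etc.
#[derive(Debug, Clone, PartialEq)]
pub struct Car {
    pub id: i32,
    pub name: String,
}

// A stand-in repository; the template's version wraps an sqlx connection pool.
pub struct CarRepository {
    pub cars: Vec<Car>,
}

impl CarRepository {
    pub fn find_by_id(&self, id: i32) -> Option<Car> {
        self.cars.iter().find(|c| c.id == id).cloned()
    }
}

// The service layer receives the shared repository via Arc, just as a handler
// passes down an `Extension`-provided resource.
pub fn view_car(repo: Arc<CarRepository>, car_id: i32) -> Result<Car, String> {
    repo.find_by_id(car_id)
        .ok_or_else(|| format!("car {car_id} not found"))
}
```

Because the repository lives behind an `Arc`, cloning the handle for each request is cheap and the underlying state is shared across all handlers.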
OpenAPI and Swagger
Our endpoints are defined with the Utoipa library. To integrate Utoipa with Axum’s routing, we use a separate module with a `router()` builder function:
```rust
pub fn router() -> Router {
    let app = OpenApiRouter::new()
        .routes(routes!(utils::healthcheck))
        .nest("/cars", car_routes())
        .nest("/parts", part_routes());
    let (router, api) = OpenApiRouter::with_openapi(ApiDoc::openapi())
        .nest("/api", app)
        .split_for_parts();
    let router = router
        .merge(SwaggerUi::new("/swagger-ui").url("/api-docs/openapi.json", api.clone()))
        .merge(Redoc::with_url("/redoc", api.clone()))
        // There is no need to call `RapiDoc::with_openapi` because the OpenApi is served
        // via SwaggerUi; instead we only make RapiDoc point to the existing doc.
        .merge(RapiDoc::new("/api-docs/openapi.json").path("/rapidoc"))
        // Alternative to the above:
        // .merge(RapiDoc::with_openapi("/api-docs/openapi2.json", api).path("/rapidoc"))
        .merge(Scalar::with_url("/scalar", api));
    Router::new().nest("/", router)
}
```
In addition to Swagger-UI, we’ve set up Redoc, RapiDoc, and Scalar. You’re free to keep whichever OpenAPI UI fits your needs.
Endpoints are defined using Utoipa macros, with many potential errors caught by the compiler:
```rust
/// Search all cars
///
/// Tries to get a list of cars by query from the database
#[utoipa::path(
    get,
    path = "/search",
    params(("name" = String, Query, description = "Car Name")),
    responses((status = OK, body = [Car])),
    tag = CARS_TAG
)]
pub async fn search(Query(params): Query<CarQuery>, Extension(repo): CarRepoExt) -> Result<AppJson<CarList>, AppError> {
    let cars = services::cars::search(repo.clone(), &params).await?;
    Ok(AppJson(cars))
}
```
Once the application is running, all endpoints are available in Swagger-UI, enabling interactive testing.
Scalar UI example:
Error handling
Error handling in Axum offers various strategies. For simplicity in this template, I’ve chosen to work with `anyhow::Error` and to translate any errors within the application into a tuple with an HTTP status code and a generic message. Internally, detailed logs are maintained for diagnostics, but clients receive only generic error messages. This strategy balances security with simplicity.
Here’s our custom `IntoResponse` implementation for `AppError`:
```rust
impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        // How we want error responses to be serialized
        #[derive(Serialize)]
        struct ErrorResponse {
            message: String,
        }
        let err = self;
        let (status, message) = {
            // Because `TraceLayer` wraps each request in a span that contains the request
            // method, uri, etc. we don't need to include those details here
            tracing::error!(%err);
            // Don't expose any details about the error to the client
            (
                StatusCode::INTERNAL_SERVER_ERROR,
                "Something went wrong".to_owned(),
            )
        };
        (status, AppJson(ErrorResponse { message })).into_response()
    }
}
```
And here’s the piece of code that translates any `anyhow::Error` into our `AppError` when needed:
```rust
impl<E> From<E> for AppError
where
    E: Into<anyhow::Error>,
{
    fn from(err: E) -> Self {
        Self(err.into())
    }
}
```
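To see why this blanket impl makes error propagation ergonomic, here is a self-contained, std-only analogue. Using a `String` payload instead of `anyhow::Error` is my simplification for the sake of a runnable sketch, and `parse_car_id` is a hypothetical helper; the point is that any qualifying error type can now be propagated with `?`:

```rust
#[derive(Debug)]
pub struct AppError(pub String);

// Blanket conversion: any std error can become an AppError,
// so the `?` operator works on Results carrying such errors.
impl<E> From<E> for AppError
where
    E: std::error::Error,
{
    fn from(err: E) -> Self {
        AppError(err.to_string())
    }
}

pub fn parse_car_id(raw: &str) -> Result<i32, AppError> {
    // `str::parse` returns Result<_, ParseIntError>; `?` converts
    // the error through our blanket From impl.
    Ok(raw.parse::<i32>()?)
}
```

In the template, handlers get the same effect for free: any `anyhow`-convertible error bubbles up via `?` and is rendered by the `IntoResponse` implementation above.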
Wrapping Up
So there you have it—a basic foundation for a Rust-based microservice that’s set up to grow with your project. This isn’t the end-all template, but it’s a start, and hopefully, a step toward making Rust more accessible and practical for web services. Axum, combined with helpful libraries like Utoipa for OpenAPI integration, gives us a solid, flexible framework to build on.
If you’re ready to try it out, the complete project template code is available on GitHub for you to download and experiment with. Feel free to clone it, modify it to fit your needs, and make it your own. Whether you’re just starting with Rust or looking to integrate it into your current projects, this template is designed to get you up and running quickly.
Like any good tool, this setup is something you can shape and evolve as your needs change. Whether you’re running on a massive Kubernetes cluster or testing on your local machine, my hope is that this template will help you see the potential of Rust beyond its systems-level reputation. The road ahead in Rust web development is long, but every new project, every experiment, is a chance to push the boundaries of what we can do with it.
Got thoughts, suggestions, or improvements? This template is a work in progress, so feel free to jump in, try it out, and share your ideas. After all, the Rust community is one of the best parts of the language, and there’s no better way to grow it than by building together.
If you’re as excited about Rust as I am, join us on March 26, 2025, for a one-day Rustikon conference organised by Softwaremill. It’s a great opportunity to hear talks from Rust experts, share your own ideas, and be part of the growing Rust community. Hope to see you there!
Happy coding!
Reviewed by Daniel Ryczko