5 Scala Libraries That Will Make Your Life Easier
Starting a new Scala project or working on an existing one, and you’d rather avoid reinventing the wheel? This article introduces 5 Scala libraries that will make your life easier by solving some typical problems:
- loading configuration,
- copying data between almost identical case classes,
- defining HTTP endpoints,
- modifying nested data structures,
- validation.
Problem #1: Configuration
There’s a good chance you’re already using Lightbend Config (formerly known as Typesafe Config), which lets you read the configuration from HOCON files. It does its job as long as you’re only reading simple values using methods like getString, getInt, etc.
Let’s consider a configuration of an imaginary server stored in src/main/resources/application.conf:
server {
  host = "example.com"
  port = 8080
}
With Lightbend Config, it can be read with the following code:
import com.typesafe.config.ConfigFactory
val config = ConfigFactory.load().getConfig("server")
val host = config.getString("host")
val port = config.getInt("port")
But what if you modeled the configuration of your application with a case class? E.g.
case class ServerConfig(host: String, port: Int)
Well, there’s nothing wrong with manually instantiating the case class from the individually read simple values. However, as the configuration grows bigger, sooner or later, you will start wondering if it could be loaded to a case class automatically. This is where the pureconfig library comes to help. It lets you read the configuration directly into ServerConfig like this:
import pureconfig._
import pureconfig.generic.auto._
val serverConfig = ConfigSource.default
  .at("server")
  .loadOrThrow[ServerConfig]
Under the hood, pureconfig still uses Lightbend Config with all its features. For the above example to work, the field names in the case class must be aligned with the configuration keys. If this is not the case, pureconfig lets you define custom mappings with the granularity of single fields.
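For example, if a single key in your configuration file didn’t match its field name, you could override the mapping for just that field. A minimal sketch, where the "hostname" key is a hypothetical example:
import pureconfig._
import pureconfig.generic.ProductHint

// Hypothetical override: read ServerConfig.host from a "hostname" key,
// keeping the default camelCase-to-kebab-case mapping for all other fields
implicit val serverHint: ProductHint[ServerConfig] =
  ProductHint(ConfigFieldMapping(CamelCase, KebabCase).withOverrides("host" -> "hostname"))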
The loadOrThrow method - as its name suggests - is going to throw an exception when the configuration fails to load. If you prefer a more functional approach to error handling, you can alternatively use load, which returns an Either[ConfigReaderFailures, A].
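If you go the functional route, handling the failures explicitly could look like this - a minimal sketch reusing the imports from the snippet above (prettyPrint comes from pureconfig’s ConfigReaderFailures):
ConfigSource.default.at("server").load[ServerConfig] match {
  case Right(cfg)     => println(s"Loaded server config: ${cfg.host}:${cfg.port}")
  case Left(failures) => println(failures.prettyPrint())
}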
For more information and examples, please refer to pureconfig documentation.
Problem #2: Almost identical case classes
Let’s imagine that in your application, you internally model users using the following case class:
case class DomainUser(
  login: String,
  firstName: String,
  lastName: String,
  age: Int
)
Let’s say that you also expose some external API in which the user is represented in a very similar yet slightly different fashion:
case class ApiUser(login: String, fullName: String, howOld: Int)
Therefore, you would need some glue code to translate the user objects between those two representations:
val domainUser = DomainUser("jkowalski", "Jan", "Kowalski", 42)
val apiUser = ApiUser(
  domainUser.login,
  s"${domainUser.firstName} ${domainUser.lastName}",
  domainUser.age
)
The above conversion would, of course, work as expected. However, if you look closely at the implementation, you will notice that we’re actually rewriting most of the fields - this would be even more visible if the objects had more properties. So, a question arises again: could this be automated somehow?
This time, the chimney library comes to the rescue. It lets you write the above conversion as follows:
import io.scalaland.chimney.dsl.TransformerOps
val apiUser2 = domainUser.into[ApiUser]
  .withFieldComputed(_.fullName, u => s"${u.firstName} ${u.lastName}")
  .withFieldRenamed(_.age, _.howOld)
  .transform
With chimney, the fields that have the same name and type in both representations are rewritten automatically, and all you need to focus on are the fields that are represented differently.
Since chimney uses macros internally, you will know already at compile time if a transformation you defined is incomplete - e.g., if you forgot to rename age to howOld in the above example.
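For illustration, here is what would happen if you omitted the renaming - a sketch, as the exact compiler message differs between chimney versions:
// This does not compile - chimney cannot tell where howOld should come from:
//
// val broken = domainUser.into[ApiUser]
//   .withFieldComputed(_.fullName, u => s"${u.firstName} ${u.lastName}")
//   .transform
//
// error: Chimney can't derive transformation from DomainUser to ApiUser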
Chimney provides out-of-the-box support for many built-in types, including algebraic data types (ADTs), e.g., products (case classes) and unions/coproducts (a sealed trait or abstract class with its implementations). If you need to convert between some custom types A and B, all you need to do is provide an instance of the Transformer[A, B] type class for those two types.
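Such an instance can be as simple as a lambda, since Transformer has a single abstract method. A minimal sketch, with the Meters and Feet types invented for illustration:
import io.scalaland.chimney.Transformer
import io.scalaland.chimney.dsl._

// Hypothetical custom types
case class Meters(value: Double)
case class Feet(value: Double)

// A hand-written Transformer instance, picked up by transformInto
implicit val metersToFeet: Transformer[Meters, Feet] =
  (m: Meters) => Feet(m.value * 3.28084)

val height: Feet = Meters(2.0).transformInto[Feet]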
You can find more details in the chimney docs.
Problem #3: HTTP
So you’re starting a new project, and you need to expose an HTTP API – so you write yet another set of HTTP endpoints. You pick one of http4s, akka-http, zio-http, or maybe even Play Framework. In the end, an HTTP endpoint is a function from a request to a response, just encoded with a DSL specific to the library of your choice.
Now imagine a not-so-hypothetical situation in which the maintainer of the HTTP server library you chose decides to change the license from open-source to a commercial one. However, you don’t intend to pay, so you’re forced to choose another library and rewrite your endpoints in another DSL.
But is there really no other choice? If the bottom line is to encode a couple of request-to-response functions, what if you could do this in a library-agnostic way, e.g.
import io.circe.generic.auto._
import sttp.tapir._
import sttp.tapir.generic.auto._
import sttp.tapir.json.circe._
object Tapir {
  case class Book(title: String, year: Int)

  val getBooksByYear =
    endpoint
      .get
      .in("books")
      .in(query[Int]("year"))
      .out(jsonBody[List[Book]])
}
The above code – written using tapir – defines a description of an endpoint that would handle requests like
GET /books?year=1984
returning a list of books as JSON.
Crucially, the endpoint description is not coupled with any specific implementation of an HTTP server. Moreover, it only defines the inputs and outputs, but doesn’t include any request processing logic – which can also be defined in a server-agnostic way:
import scala.concurrent.Future

def getBooksByYearLogic(year: Int): Future[List[Book]] =
  Future.successful(
    List(
      Book("Nad Niemnem", 1888),
      Book("Designing Data-Intensive Applications", 2017)
    )
  )
and then attached to the previously defined endpoint:
val serverEndpoint = getBooksByYear
  .serverLogicSuccess(getBooksByYearLogic)
Using akka-http as the HTTP server? With the following one-liner:
import sttp.tapir.server.akkahttp.AkkaHttpServerInterpreter
val akkaHttpRoutes = AkkaHttpServerInterpreter().toRoute(serverEndpoint)
you get a ready-to-use integration with akka-http. Need to change the HTTP server to Play? All you have to do is use a different interpreter:
import sttp.tapir.server.play.PlayServerInterpreter
val playRoutes = PlayServerInterpreter().toRoutes(serverEndpoint)
and that’s it – no other changes are necessary.
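In either case, serving the interpreted routes is then plain server code. For the akka-http variant, a minimal sketch assuming akka-http 10.2+ and an in-scope ActorSystem:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http

implicit val system: ActorSystem = ActorSystem("books-api")

// Bind the interpreted routes on localhost:8080
Http().newServerAt("localhost", 8080).bind(akkaHttpRoutes)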
You may say that in real life, you seldom change the implementation of the HTTP server. Fair enough – but using tapir to define your endpoints has other advantages as well.
By using yet another interpreter, you can reuse the endpoint descriptions to generate OpenAPI documentation, together with a Swagger or Redoc UI:
import sttp.tapir.swagger.bundle.SwaggerInterpreter
val swaggerEndpoints = SwaggerInterpreter().fromEndpoints[Future](
  List(getBooksByYear), "My API", "1.0")
Now it’s sufficient to convert the swaggerEndpoints to a server-specific description - using a suitable interpreter, just as with the serverEndpoint.
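For example, with the akka-http interpreter shown earlier - a sketch, relying on the interpreter also accepting a list of server endpoints:
// Serves the OpenAPI spec and the Swagger UI alongside your own routes
val swaggerRoutes = AkkaHttpServerInterpreter().toRoute(swaggerEndpoints)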
And that’s still not everything. With another interpreter, you can generate an HTTP client for your endpoint, which can be based – for example – on sttp (a library that lets you define library-agnostic HTTP clients and separately choose an actual backend to execute the requests):
import sttp.client3._
import sttp.tapir.client.sttp.SttpClientInterpreter
val booksClient = SttpClientInterpreter().toQuickClient(
  getBooksByYear, Some(uri"http://localhost:8080"))
val books: Either[Unit, List[Book]] = booksClient(1984)
Notice how you use a single endpoint definition to generate all of the above: a server definition, API documentation, and a client – which lets you keep your code DRY.
A more complex and runnable example of tapir’s capabilities can be found in the tapir GitHub repository. And, as before, please refer to tapir docs for more details.
Problem #4: Nested data structures
Imagine a data model like this:
case class Street(name: String)
case class Address(street: Street)
case class Person(address: Address, age: Int)
Now you create an instance of Person:
val person = Person(Address(Street("Funkcyjna")), 42)
and it turns out you need to modify the street name in the address. All those are case classes, so you choose to use copy:
val person2 = person.copy(
  address = person.address.copy(
    street = person.address.street.copy(
      name = "Obiektowa"
    )
  )
)
Even with quite shallow nesting – like above – the readability of the code decreases. If the data structures were nested even more, the code could become close to unreadable. It would be much easier if you could just provide a “path” to the field you want to update (person.address.street.name), and the new value.
To solve this problem, you can leverage the concept of lenses - a way to focus on a particular part of a nested data structure. Among numerous Scala implementations of lenses, quicklens has arguably the simplest API and lets you update the street name like this:
import com.softwaremill.quicklens._
val person3 = person.modify(_.address.street.name).setTo("Obiektowa")
The nice thing about the lens (or the “path”) is that it’s a plain value, which can be reused multiple times:
import com.softwaremill.quicklens._
val modifyStreetName = modifyLens[Person](_.address.street.name)
val person4 = modifyStreetName.setTo("Obiektowa")(person)
Apart from that, quicklens also allows you to:
- modify the fields using a function – with using,
- perform conditional modifications – with setToIf and usingIf,
- modify collection elements – with each and eachWhere,
- compose the lenses – with andThenModify,
and many more - please refer to the quicklens page on GitHub for additional examples.
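For instance, composing two lenses with andThenModify could look like this - a sketch reusing the Person model from above:
import com.softwaremill.quicklens._

// Compose a Person->Address lens with an Address->street-name lens
val modifyAddress = modifyLens[Person](_.address)
val modifyName = modifyLens[Address](_.street.name)

val person5 = modifyAddress.andThenModify(modifyName).setTo("Obiektowa")(person)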
Problem #5: Invalid data
You most probably already do some data validation in your code. Let’s say that in your domain, you model users with their name and age, but only adult users are allowed. How would you approach this?
For a start, you could use the built-in require in the body of your case class - it would throw an IllegalArgumentException in case of invalid data:
case class Adult(name: String, age: Int) {
  require(age > 18)
}
For a more systematic approach, you could use a predefined data model for validation – e.g. Cats Validated. But this would still make the validation only execute at runtime.
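To give you a taste of that approach - a minimal sketch assuming cats-core on the classpath, with hypothetical helper functions that accumulate errors:
import cats.data.ValidatedNec
import cats.syntax.all._

def validateName(name: String): ValidatedNec[String, String] =
  if (name.nonEmpty) name.validNec else "name must not be empty".invalidNec

def validateAge(age: Int): ValidatedNec[String, Int] =
  if (age > 18) age.validNec else s"$age is not an adult age".invalidNec

// Both checks run and their errors are accumulated, not short-circuited
def validateAdult(name: String, age: Int): ValidatedNec[String, Adult] =
  (validateName(name), validateAge(age)).mapN(Adult.apply)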
What if you could benefit from using a strongly-typed language and perform at least some of the checks at compile time? To achieve this, you can use refined – a library that lets you, well, refine (or narrow down) your types by adding additional compile-time constraints.
You can use refined to define a new type:
import eu.timepit.refined.api._
import eu.timepit.refined.numeric._
type AdultAge = Int Refined Greater[18]
The above is an infix notation of the Refined[Int, Greater[18]] type. Also, since Scala 2.13 you can define literal types – which means that the 18 in the above notation is also a type, not an Int value.
If you now create an improved version of the case class:
case class RefinedAdult(name: String, age: AdultAge)
then the following code is going to compile:
import eu.timepit.refined.auto._
RefinedAdult("Jan Kowalski", 42)
but this one is not:
import eu.timepit.refined.auto._
RefinedAdult("Janek Kowalski", 7)
and neither is this one:
import eu.timepit.refined.auto._
val age = 7
RefinedAdult("Janek Kowalski", age)
So what can you do if your data model requires a type narrowed down with refined, but you only have a non-refined value available (like the Int here)? You can use refined’s applyRef method to do the necessary conversion; the result is wrapped in an Either, so that you can handle any conversion errors:
import eu.timepit.refined.api._
val adultAge: Either[String, AdultAge] = RefType.applyRef[AdultAge](age)
With the help of refined, you can now find out already at compile time - instead of at runtime - that your data is invalid. For values that are unknown at compile time, you can try to convert them to the narrowed-down types at runtime.
However, in practice we mostly deal with dynamic values that are rarely known at compile time – like the payload of an HTTP request. Therefore, the true power of refined lies in its ability to integrate with other popular libraries, e.g.
- pureconfig – by letting you validate the configuration,
- circe - by letting you validate the parsed JSON,
- doobie - by letting you write/read the refined types to/from a database,
and many more. For even more details, please have a look at the refined docs.
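As an example, the refined-pureconfig module makes the configuration loading from Problem #1 fail when a value doesn’t satisfy its refinement. A sketch - the Positive constraint on the port is an assumption made for illustration:
import eu.timepit.refined.api.Refined
import eu.timepit.refined.numeric.Positive
import eu.timepit.refined.pureconfig._
import pureconfig._
import pureconfig.generic.auto._

// Loading returns a Left with failures if the port is not positive
case class SafeServerConfig(host: String, port: Int Refined Positive)

val safeConfig = ConfigSource.default.at("server").load[SafeServerConfig]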
Summary
I hope you find at least some of those libraries useful in your daily work as a Scala developer. And, hopefully, they will let you reinvent one wheel less.
Feel free to have a look at the GitHub repository where I put the complete code examples from this article.
Do you know any other useful Scala libraries? Make sure to let me know!