Java: Three Decades, Three Lessons
Thirty years ago, the little Duke appeared on the tech scene, and Java introduced its bold promise to “write once, run anywhere.” Now, in 2025, Java is still the backbone of enterprise software, driving everything from insurtech platforms to the crypto wallets that are fast becoming fixtures inside modern banking apps.
I spend most of my days translating engineering excellence into business value. Yet every great story starts with the voice of the people who live it. I reached out to four of our senior engineers, Sebastian, Emil, Darek, and Jacek, and asked them for the lessons they would pass on to the next generation of Java developers.
Enjoy three decades, three(ish) lessons told by the people who write production code every day. Happy birthday, Java! 🎂
Syntax is just a surface
Learning my first language was like learning magic. I just wrote a few lines of code, and something happened on the monitor. That was incredible.
At the start, everything felt easy and understandable. I learned the syntax, explored the popular features, got familiar with collections, loops, and exceptions, and even found out something about multithreading. I thought, “If I just get better at the language, I’ll be able to build anything.”
But, as the ideas grew, I hit the “reality check” wall.
I found out that knowing the language isn't enough to build real software. Now I needed to know about Spring Boot, databases, REST APIs, and "deployment", whatever that meant at the time.
As a software developer, I realized that was just the tip of the iceberg.
To create a good product, I can't just focus on writing code. Now our goal is not just to create something. We want this product to last long. We need to test if business requirements are met and ensure that our changes won't destroy everything. We want to make our app secure, observable, and maintainable.
This made me realize an important truth: writing Java means working with an entire ecosystem. I had to learn about tools, frameworks, and libraries.
I've learned much of this from developers I met at local meetups like the Wrocław JUG and at various conferences. The funny part is that many of those people were not Java developers, yet we could share experience and knowledge. That's because these are problems we all face, regardless of the language we use.
So if you're on this journey too, remember: the language is just the entry point. The real craft lies in everything that surrounds it.
From Java 6 to 22: Watch how far we've come
Java is no longer slow
Java has carried a reputation for being slow for years. I heard that a lot, even back when programming was just a hobby for me.
And sure, on the surface, that made sense. Java runs on a virtual machine, has a garbage collector, and offers layers of abstraction. That felt heavy.
Back in 2011, Robert Hundt published a paper, "Loop Recognition in C++/Java/Go/Scala", in which he analyzed and compared those four languages. Java, compared to C++, was slow. Of course, right? C++ is closer to the metal. How can we compete with that?
But with each version Java becomes faster, and with each version it brings something new to the game.
For example, the Just-In-Time compiler (JIT) is a lifesaver for Java applications. It identifies hotspots (code paths that are executed frequently) and compiles them into native machine code.
Do you have dead code that will never be executed? The JIT will remove it. Is your method called often? The JIT will inline it, copying the method body into its caller. Does an object live only within the scope of one method? Then the JIT can allocate it on the stack (this is called escape analysis).
In that case, the object is never allocated on the heap, and it never needs to be managed by the garbage collector.
And many more….
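To make this concrete, here is a small, hypothetical sketch (my own example, not from the JDK) of the kind of method the JIT loves: it's called in a hot loop, so it gets compiled and inlined, and its temporary Point never escapes, so escape analysis can keep it off the heap entirely.

```java
public class EscapeDemo {

    record Point(int x, int y) {
        int sum() { return x + y; }
    }

    static int distanceish(int x, int y) {
        // p never leaves this method, so the JIT can skip the heap
        // allocation entirely (escape analysis / scalar replacement)
        Point p = new Point(x, y);
        return p.sum();
    }

    public static void main(String[] args) {
        long total = 0;
        // A hot loop: after enough iterations the JIT marks distanceish
        // as a hotspot, compiles it to native code, and inlines it here
        for (int i = 0; i < 100_000; i++) {
            total += distanceish(i, i + 1);
        }
        System.out.println(total); // prints 10000000000
    }
}
```

If you're curious, you can watch these compilation decisions happen by running the JVM with -XX:+PrintCompilation.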
Virtual threads arrived as a preview in Java 19 (and were finalized in Java 21), a nice way to work with blocking calls, like reading from a database. Here is a small visual that explains Project Loom:
What is the idea behind Loom?
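A minimal sketch of the idea (my own example; the "database read" is simulated with sleep): the blocking call parks the virtual thread, and the JVM frees the underlying carrier OS thread for other work.

```java
public class VirtualThreadDemo {

    // Runs a blocking task on a virtual thread and waits for its result.
    static String fetchOnVirtualThread() throws InterruptedException {
        StringBuilder result = new StringBuilder();
        Thread vt = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(50); // stands in for a blocking database read
                result.append("virtual=").append(Thread.currentThread().isVirtual());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join(); // while the virtual thread sleeps, its carrier OS thread is free
        return result.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(fetchOnVirtualThread()); // virtual=true
    }
}
```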
As a result of those changes, Java has improved a lot. As an example, in 2024 Gunnar Morling created a challenge, "The One Billion Row Challenge".
The goal was pretty simple: write a Java program that reads temperature measurements from a .txt file, calculates the min, mean, and max temperature per weather station, and prints the stations in alphabetical order.
Sounds simple, but the file had 1,000,000,000 rows!
Great challenge, but the results are even more impressive. They depend on the hardware, but the best entries finish in under half a second.
And that's not the end of Java's improvements. Java 24 (released 18 March 2025) includes a few JEPs that make Java faster. For example, with Ahead-of-Time Class Loading & Linking (JEP 483), you can see startup improvements of around 42% in both vanilla and Spring apps. Of course, that's the number from the JEP itself, but it sounds promising, right?
If you want your app to have a blazing-fast startup time, you can go further with GraalVM and create a native image of your app, so it runs fast in cloud serverless functions.
There are many ways to make your Java faster.
Remember: Java is no longer slow, and it's still improving.
As Java grows, it becomes more readable
The previous lesson taught us that Java becomes faster with each version. But that's not the only reason it's worth updating the Java you run in production. Each new version also gives you clearer ways to write your code.
If you're stuck on Java 11, you're missing many great features from newer releases. Here are a few examples:
Switch expressions
Before Java 14: verbose switch statement with breaks
switch (day) {
    case MONDAY:
    case FRIDAY:
    case SUNDAY:
        System.out.println(6);
        break;
    case TUESDAY:
        System.out.println(7);
        break;
    case THURSDAY:
    case SATURDAY:
        System.out.println(8);
        break;
    case WEDNESDAY:
        System.out.println(9);
        break;
}
Java 14+: switch expression with '->' cases and no breaks needed
switch (day) {
    case MONDAY, FRIDAY, SUNDAY -> System.out.println(6);
    case TUESDAY -> System.out.println(7);
    case THURSDAY, SATURDAY -> System.out.println(8);
    case WEDNESDAY -> System.out.println(9);
}
Switch instead of instanceof
Before Java 21: multiple instanceof checks
if (animal instanceof Cat c) {
    giveFoodToCat(c);
} else if (animal instanceof Dog d) {
    giveFoodToDog(d);
} else {
    throw new IllegalStateException("Unexpected animal type: " + animal);
}
Java 21+: pattern matching in switch
switch (animal) {
    case Cat c -> giveFoodToCat(c);
    case Dog d -> giveFoodToDog(d);
    default -> throw new IllegalStateException("Unexpected animal type: " + animal);
}
You can also use guards to handle specific cases:
switch (animal) {
    case Cat c when c.hasName("Garfield") -> giveLasagneTo(c);
    case Cat c -> giveFoodToCat(c);
    case Dog d -> giveFoodToDog(d);
    default -> throw new IllegalStateException("Unexpected animal type: " + animal);
}
Records
Record classes, which are a special kind of class, help to model plain data aggregates with less ceremony than normal classes.
record Rectangle(double length, double width) { }
This single line is enough to replace:
public final class Rectangle {
    private final double length;
    private final double width;

    public Rectangle(double length, double width) {
        this.length = length;
        this.width = width;
    }

    public double length() { return this.length; }
    public double width() { return this.width; }

    public boolean equals...
    public int hashCode...
    public String toString() {...}
}
Switch patterns for Records
switch (animal) {
    case Cat(String name, Color _) when "Garfield".equals(name) -> giveLasagneTo(name);
    ...
}
In this single line, we used a guarded pattern (when), record deconstruction from Java 21, and an unnamed variable from Java 22.
There are many more improvements from old Java, and I encourage you to find and play around with them. Good luck and have fun.
Code is written once but read many times
My journey with programming started with C++ in high school. I remember the first thrill when, after writing a few lines of code, I could see the results on the screen. It was real magic.
This experience encouraged me to study Computer Science. Over the years I learnt more and more about programming, data structures, algorithms, and operating systems. By that point I had written a lot of small programs in different languages, mainly to pass my classes.
My meeting with Java was inevitable, and I fell in love with it at first sight. My impression was that Java is easy, powerful, and readable. For my first bigger project I wrote a desktop app, a stock market simulator running on multiple threads. It was quite a journey!
But I stumbled across many problems: how to name classes and functions? How to know when something should be a class? How not to get lost in the code? I started learning about design patterns, and thanks to that I managed to finish the project. After that, I thought I had seen a lot.
Then came my first job, an internship, where I met more experienced programmers and a much more complicated system. I was shocked that a code base could be this huge! (It wasn't that huge.)
You could see there a “classic 3-layer architecture” (presentation, application, data), with services handling a lot of business logic and nested ifs for different special cases. It was hard to read, and hard to tell when something would be executed and when not. Tests weren't that helpful either: they were complicated, with unreadable setup and sometimes even harder asserts. Fortunately, my colleagues saw it as a problem too, so it wasn't just that I didn't know the domain, requirements, and problems; they struggled as well.
We started learning more about Domain-Driven Design (DDD), participated in workshops, and introduced a more thorough code review process. Finally it paid off: new features were more readable, tests were easier to understand, and the code was more open to modification.
It was not only about DDD, but about the approach you take when you write code: how you name methods, and how you divide responsibilities between classes and modules. For a student who had only written solo projects, it was an awesome experience to see that it matters how you write the code, and that it's going to stay with other people for a while!
Not everything is as simple as it looks
Sometimes the best way to learn is by failing, but it's maybe not the best option when production is down and you need to revert your changes.
My team was assigned a new feature where we wanted to push new information from the server to the client. We had two options to consider, Server-Sent Events (SSE) and WebSockets, and we chose SSE, as it looked simpler and better suited to our needs.
Here is some sample SSE code:
@GetMapping("/stream")
public SseEmitter streamEvents() {
    SseEmitter emitter = new SseEmitter();
    executor.submit(() -> {
        try {
            int i = 0;
            while (true) {
                emitter.send("SSE message " + i++);
                Thread.sleep(1000);
            }
        } catch (IOException | InterruptedException e) {
            emitter.completeWithError(e);
        }
    });
    return emitter;
}
In the code above we create a separate runnable for each emitter, which is then handled by an ExecutorService.
When you call this endpoint, you keep receiving a message each second until you break the connection. The number of parallel requests this endpoint can handle is limited by the ExecutorService implementation.
Fortunately, in this code the emitter is shut down once an exception is thrown, which happens when the connection breaks.
Looks simple right?
We thought so too, and one of the developers convinced us it would work fine. Our version was a little more complicated than this example, as we had to verify who exactly was requesting the emitter and send only the data they were allowed to see. Throughout the whole project, we didn't catch that our thread pool was limited by Tomcat, and that when a connection broke, the threads executing the emitters were not released.
Unfortunately, we spotted the problem in production, when the server became unresponsive. How didn't we catch it earlier? We can blame many things: our testing environments were short-lived, there was almost no traffic, and we didn't have any load or stress tests.
Fortunately, we learned from this mistake to:
- Investigate solutions you don't know; focus on the cons and whether they are acceptable for you
- Double-check your findings with your teammates
- Be critical of your own and others' results
- Always test the chosen solution, especially when you are not sure how it behaves under load
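For intuition, here is a deliberately simplified sketch of the failure mode we hit (the numbers are made up; real Tomcat pools are bigger): long-running tasks never release their worker threads, so a bounded pool with no queue starts rejecting new work.

```java
import java.util.concurrent.*;

public class PoolExhaustionDemo {

    // Submits `requests` never-ending tasks to a pool of `poolSize` threads
    // with no queue, and returns how many submissions were rejected.
    static int submitRequests(int poolSize, int requests) {
        ExecutorService pool = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>()); // no queue: reject when all workers are busy
        int rejected = 0;
        for (int i = 0; i < requests; i++) {
            try {
                pool.submit(() -> {
                    try {
                        Thread.sleep(60_000); // an "emitter loop" that never finishes
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            } catch (RejectedExecutionException e) {
                rejected++;
            }
        }
        pool.shutdownNow(); // interrupt the stuck workers
        return rejected;
    }

    public static void main(String[] args) {
        // 2 worker threads, 5 incoming "requests": 3 get dropped
        System.out.println("rejected: " + submitRequests(2, 5));
    }
}
```

In our real incident the pool didn't reject requests so cleanly; it simply ran out of threads, and the server stopped responding.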
A little Streams API trick
The Streams API is really powerful and allows for more readable processing of collections. Once I learnt it, I tried to use it everywhere. Of course the API is not perfect, and it lacked some features that are now possible thanks to Gatherers; you can learn more about them in this awesome article: Stream Gatherers in practice Part 1.
One of the problems I faced was selecting distinct items by a given property rather than by the equals() method. Fortunately, one of my colleagues showed me this little trick:
record Book(String title, String author) {}

public static void main(String[] args) {
    List<Book> books = List.of(
            new Book("Title", "Author1"),
            new Book("Title - next part", "Author1"),
            new Book("Some adventure", "Author2"),
            new Book("Other adventure", "Author2")
    );

    books.stream()
            .filter(uniqueAuthor())
            .forEach(System.out::println);
}

private static Predicate<Book> uniqueAuthor() {
    Set<String> state = new HashSet<>();
    return book -> state.add(book.author());
}
The whole trick relies on the fact that the uniqueAuthor method returns a Predicate. The predicate is used in the stream, so each element passes through it; however, the Set holding the state is initialized only once, when the uniqueAuthor method is called. This allows it to keep the author names between Predicate calls.
For comparison, let’s look how this problem can be solved with Gatherers:
public static void main(String[] args) {
    List<Book> books = List.of(
            new Book("Title", "Author1"),
            new Book("Title - next part", "Author1"),
            new Book("Some adventure", "Author2"),
            new Book("Other adventure", "Author2")
    );

    books.stream()
            .gather(new UniqueAuthorGatherer())
            .forEach(System.out::println);
}

static class UniqueAuthorGatherer implements Gatherer<Book, Set<String>, Book> {

    @Override
    public Supplier<Set<String>> initializer() {
        return HashSet::new;
    }

    @Override
    public Integrator<Set<String>, Book, Book> integrator() {
        return Integrator.ofGreedy((state, item, downstream) -> {
            String author = item.author();
            if (state.add(author)) {
                downstream.push(item);
            }
            return true;
        });
    }
}
As the Gatherer API is more powerful, it's also more complicated. UniqueAuthorGatherer implements the initializer method, which supplies a new HashSet, and the integrator method, which processes each element and pushes it downstream only if the author wasn't in the state yet.
In both approaches, the methods can be generalized and reused as utilities elsewhere.
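Such a generalized helper might look like this (a sketch of the idea; distinctByKey is my own name for it, and the ConcurrentHashMap-backed set is a precaution in case the stream runs in parallel):

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.Predicate;

public class DistinctByKey {

    // Stateful predicate: remembers the keys it has already seen.
    static <T, K> Predicate<T> distinctByKey(Function<T, K> keyExtractor) {
        var seen = ConcurrentHashMap.<K>newKeySet();
        return t -> seen.add(keyExtractor.apply(t));
    }

    record Book(String title, String author) {}

    public static void main(String[] args) {
        List<Book> books = List.of(
                new Book("Title", "Author1"),
                new Book("Title - next part", "Author1"),
                new Book("Some adventure", "Author2"));

        long distinctAuthors = books.stream()
                .filter(distinctByKey(Book::author))
                .count();

        System.out.println(distinctAuthors); // 2
    }
}
```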
Which one do you find better?
From “It works on my machine” to DevOps: The evolution of Java development practices
It’s hard to believe how much Java development has changed in just a couple of decades.
Early on, a typical Java project started as a set of messy classes, thrown together with minimal testing and even less automation. Shipping code meant packaging a WAR or EAR file and pushing it, often manually, onto a Tomcat or WebLogic server, hoping everything would “just work” on production like it did on your laptop.
Testing? Mostly an afterthought.
Documentation? Often in someone’s head.
Automation? That was writing a shell script (maybe).
As complexity grew, so did the need for discipline. Today, a modern Java project looks very different:
- Source code lives in a Git repository, with automated builds, tests, and linters running on every push.
- Unit, integration, and even contract tests are expected.
- Continuous Integration/Continuous Deployment pipelines (CI/CD) push artifacts to cloud-based environments within minutes.
- Infrastructure is defined as code, with containerization (Docker) and orchestration (Kubernetes) the norm.
- Monitoring, alerting, and fast rollback are standard practice.
If you’re interested in the containerization journey for Java, check out Docker support in Java 8 - finally! by Grzegorz Kocur. For an even broader perspective on modern approaches to software delivery, see Platform Engineering vs. DevOps: which is right for your Organization?
What’s the lesson?
The real evolution isn’t just about new tools. It’s about a cultural shift. Java teams today place just as much importance on code quality, automation, and collaboration as they do on language features.
While the ecosystem offers an abundance of tools that enable fast delivery and high quality, it also introduces significant complexity and overhead. Recognizing this is key.
Fortunately, many organizations now have dedicated infrastructure and platform teams that manage the deployment pipeline, allowing developers to focus on what truly matters: building reliable software and solving real business problems, not wrestling with deployment headaches.
Cool Features: From Boilerplate to Beauty
If you coded Java in the 2000s, you know the pain of endless boilerplate. Creating a simple data class meant writing fields, constructors, getters, setters, equals, hashCode, and toString over and over. Java was powerful, but rarely elegant.
Compare this classic approach:
// Java 7 and earlier
public class User {
    private final String name;
    private final int age;

    public User(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public int getAge() { return age; }

    @Override
    public String toString() { return "User{name='" + name + "', age=" + age + '}'; }

    @Override
    public boolean equals(Object o) { /* lots of code */ }

    @Override
    public int hashCode() { /* lots of code */ }
}
Now, with Java 16 and beyond, you can use record:
public record User(String name, int age) {}
But that’s just the start.
- Lambdas and Streams (Java 8) changed how we process collections:
List<String> names = users.stream()
        .filter(u -> u.age() > 18)
        .map(User::name)
        .collect(Collectors.toList());
- Pattern matching and sealed classes make code safer and more expressive.
sealed interface User permits Admin, RegularUser {}
record Admin(String name, int age, int level) implements User {}
record RegularUser(String name, int age) implements User {}

String accessLevel(User user) {
    return switch (user) {
        case Admin a -> "Admin Level " + a.level();
        case RegularUser r -> r.age() > 18 ? "Full Access" : "Restricted Access";
    };
}
And now: Virtual Threads (Java 21+)
For years, handling lots of concurrent requests in Java was hard - creating thousands of OS threads was expensive, so frameworks had to use thread pools and callbacks, which complicated the code.
Classic Java concurrency:
// Handling tasks with traditional threads
for (int i = 0; i < 1000; i++) {
    new Thread(() -> {
        // Handle some request
        handleRequest();
    }).start();
}
// This approach doesn't scale well - creating thousands of real threads is costly!
With Virtual Threads:
// Java 21+ - Virtual Threads make massive concurrency simple and efficient
for (int i = 0; i < 1000; i++) {
    Thread.startVirtualThread(() -> {
        handleRequest();
    });
}
// Now you can easily have thousands or even millions of lightweight threads!
Or using modern ExecutorService:
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 1000; i++) {
        executor.submit(() -> handleRequest());
    }
}
Even easier with Spring Boot!
If you’re using Spring Boot 3.2+ (and Java 21+), switching to virtual threads is almost effortless.
Just add this property to your application.properties:
spring.threads.virtual.enabled=true
That’s it, no code changes required! Spring will start using virtual threads for handling incoming requests, making your app instantly more scalable and ready for modern workloads.
What’s the difference?
Virtual threads are managed by the JVM, not the OS. They're cheap to create and schedule, allowing you to write scalable, straightforward, and readable concurrent code: no more complex thread pools or asynchronous gymnastics.
These new features aren't just syntactic sugar: they unlock better design, fewer bugs, and real productivity.
Lesson:
If you haven’t tried modern Java, you’re missing out. Embrace the evolution: less boilerplate, more business logic, and a language that’s finally catching up with its younger cousins.
You can learn more about Project Loom here:
- What is blocking in Loom?
- #UncoverJava: Project Loom - What is the idea behind Loom
- Project Loom meets Quarkus
Do you really need a framework? The eternal Java dilemma
Let’s be honest: almost every Java developer, at some point, tries to build their own framework.
Maybe Spring feels like overkill for your simple REST API. Maybe you want to learn how dependency injection or routing really work. Or maybe you just want to keep things “clean and minimal.”
So you start with just a few classes, maybe wrap some configuration, add a simple HTTP server. It feels good… until suddenly you’re writing authentication, error handling, logging, validation, testing support… and it starts to look suspiciously like Spring or Quarkus, minus the battle testing and community support.
Here’s a tiny example:
// Minimal HTTP server in Java, no framework
HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
server.createContext("/hello", exchange -> {
    byte[] response = "Hello, World!".getBytes();
    exchange.sendResponseHeaders(200, response.length); // content length in bytes
    try (OutputStream os = exchange.getResponseBody()) {
        os.write(response);
    }
});
server.start();
This is great, for learning, for hacks, or for truly tiny services. But as requirements grow, so does your need for security, testing, documentation, integrations… and suddenly, using a framework doesn’t sound so bad.
Curious about the new generation of Java frameworks?
Check out our blogs about Java frameworks:
- Introduction to Micronaut IOC: Basics
- Comparison of Java native frameworks in terms of community aspects
- Comparing Java frameworks for cloud-native environments
- Overview of next-generation Java frameworks
Mini Lesson:
Frameworks like Spring or Micronaut exist for a reason: they solve tough, often invisible problems - dependency injection, configuration, lifecycle management, and more. They save us from reinventing the wheel.
But here’s the catch: these frameworks are often vast, and under the surface lies a labyrinth of abstractions, proxies, and internal mechanisms we rarely fully understand. It’s easy to use the annotation, harder to know what it triggers.
So, do we always need a heavyweight framework? Or is there still value in keeping things simple, especially for small services or greenfield projects? Sometimes, just Java gives us more clarity, faster startup, and fewer surprises.
Functional Domain Modelling
When Scala was first released in 2004, as a language that combines functional and object-oriented paradigms, it started to popularize the functional programming concepts.
While Scala was initially considered a “better Java”, this claim is only partially valid today – Java hasn't stood still over the years, and has introduced a number of language features that make functional programming in some areas almost as easy as in Scala.
One of those areas is functional domain modelling, which, like functional programming itself, relies on modeling the domain as immutable values and pure functions. One of its main concepts is Algebraic Data Types (ADTs).
ADTs
There are two flavors of ADTs:
- Product types - which represent an aggregation of properties (e.g. a user has a name and an email), where each instance has all the properties (i.e. a Cartesian product of those – hence the name). Those are encoded as Java records:
record User(String name, String email) {}
- Sum/union types - which represent a choice of one of disjoint allowed values (e.g. a state can be on or off). Those are encoded as sealed hierarchies:
sealed interface State {
    record On() implements State {}
    record Off() implements State {}
}
The sealed keyword implies that all possible variants of State are known at compile time, so the compiler is able to check if all values have been handled e.g. in pattern matching.
A great benefit of using ADTs is that illegal states are not representable at all - your code simply won’t compile.
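A tiny sketch of that compile-time guarantee (my own example): because State is sealed, the switch below needs no default branch, and adding a third variant would turn into a compile error right here.

```java
public class StateDemo {

    sealed interface State permits On, Off {}
    record On() implements State {}
    record Off() implements State {}

    static String describe(State state) {
        return switch (state) {
            case On on -> "running";
            case Off off -> "stopped";
            // no default needed: the compiler knows these are all the cases
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new On()));  // running
        System.out.println(describe(new Off())); // stopped
    }
}
```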
Errors as values
With the functional approach to domain modeling, we want to treat everything as a value - including errors. So, instead of using exceptions, it’s preferable to encode errors as values - using ADTs, of course. Let’s look at an example:
sealed interface DomainError {
    record SomeError(String message) implements DomainError {}
    record OtherError(int code) implements DomainError {}
}

sealed interface Try<A> {
    record Success<A>(A value) implements Try<A> {}
    record Failure<A>(DomainError e) implements Try<A> {}
}
We'd then use Try as the return type of any function that can result in an error. This gives us a single mechanism to represent both successful and unsuccessful results. Moreover, the domain errors (those that we expect) are limited to SomeError and OtherError.
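As an illustration, here is a hypothetical parsePort function (my own example, not from the article) that returns its errors as values using these types:

```java
public class ErrorsAsValues {

    sealed interface DomainError {
        record SomeError(String message) implements DomainError {}
        record OtherError(int code) implements DomainError {}
    }

    sealed interface Try<A> {
        record Success<A>(A value) implements Try<A> {}
        record Failure<A>(DomainError e) implements Try<A> {}
    }

    // Hypothetical: parse a port number, encoding failures as values
    static Try<Integer> parsePort(String raw) {
        try {
            int port = Integer.parseInt(raw);
            return (port >= 1 && port <= 65535)
                    ? new Try.Success<>(port)
                    : new Try.Failure<>(new DomainError.OtherError(port));
        } catch (NumberFormatException e) {
            return new Try.Failure<>(new DomainError.SomeError("not a number: " + raw));
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePort("8080"));
        System.out.println(parsePort("banana"));
    }
}
```

The caller now sees in the signature that parsing can fail, and the compiler will make sure both outcomes are handled.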
Pattern matching
Once combined with pattern matching, this gives us the power to ensure at compile time that we have covered all possible results of a function that returns a Try:
Try<String> result = foo();
switch (result) {
    case Try.Success(var value) -> ...              // use value as a String
    case Try.Failure(SomeError(var message)) -> ... // use message as a String
    case Try.Failure(OtherError(var code)) -> ...   // use code as an int
}
A couple of things to notice here:
- We can use pattern matching to decompose nested records (e.g. access the code directly rather than through e.code).
- The compiler is able to infer the types in nested records.
- If you missed any of the cases above, the compiler would complain (thanks to the sealed keyword in the DomainError hierarchy).
While functional domain modelling might feel counterintuitive at first, I really encourage you to give it a try and discover its power.
Verbosity over Magic
Let’s have a look at the (simplified) structure of a typical Spring Boot application:
@Service
public class Service {
    @Autowired
    private Repository repository;
}

@Controller
public class Controller {
    @Autowired
    private Service service;

    // request mappings
}

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
It’s going to work out of the box - that’s for sure. The code seems simple and easy to reason about - but is it really? The thing about frameworks, like Spring Boot, is that they introduce quite a lot of magic, by doing a lot of things for you.
They certainly follow the convention-over-configuration approach. But are you aware of all the conventions? Do you know for example:
- What actually happens under the hood when you use the @SpringBootApplication annotation?
- What’s the lifecycle of the repositories, services etc., e.g., when are they instantiated?
- What dependencies does each component have, e.g., can you tell that the Repository requires a database connection?
- Where is the configuration loaded from?
- How are the database connections for the Repository managed?
This approach works perfectly as long as your application runs smoothly. However, when things go wrong, the hidden conventions don’t help you anymore. You need to figure out at least some of the magic before you’re able to debug your issues.
If you choose to replace an all-in-one framework with a set of libraries, each of which has a single responsibility, you gain much more control over what happens in your application. Although this approach results in more verbose code, it can be beneficial in the long run, since there’s neither magic, nor hidden conventions anymore.
Let’s have a look at our updated application:
public class Service {
    private final Repository repository;

    public Service(Repository repository) {
        this.repository = repository;
    }
}

public class Controller {
    private final Service service;

    public Controller(Service service) {
        this.service = service;
    }

    // request mappings
}

public class Application {
    public static void main(String[] args) {
        final var config = Config.load();
        final var repository = new Repository(config);
        final var service = new Service(repository);
        final var controller = new Controller(service);
        new HttpServer(config, controller).start();
    }
}
The code above gives you a clear overview of what is required to set up the application.
You get a single entry point to look at when analyzing the codebase. It also gives you more flexibility in how you structure your code and which tools you use to solve various problems. Unsurprisingly, though, it's always about trade-offs: such an approach might not scale perfectly for very large codebases, so please apply it with caution.
While frameworks like Spring are useful for rapid prototyping, I’d argue that once you are beyond the prototyping stage, it might be beneficial to gain some more control over what your application does without you even knowing it. This could be particularly useful when debugging production issues.
There’s life beyond @SpringBootApplication and Hibernate (have you heard of jOOQ?)!
RPC - from EJB to REST and gRPC
This is another take on keeping things simple. Remote Procedure Call (RPC) is a critical part of any distributed system, and Java has evolved significantly in this area over the years.
EJB
It all started with Enterprise Java Beans (EJB): a Java-specific standard for remote code invocation, which used RMI (Remote Method Invocation) under the hood.
Setting up EJB was far from simple: you needed to define a number of XML deployment descriptors, e.g. for the so-called local and remote interfaces. Service discovery (i.e. looking up the other party) was performed using JNDI - another Java-specific technology.
Last but not least: a full J2EE container (like JBoss or WebLogic) was required to run the application.
WebServices
Then came SOAP and WebServices; a first take on remote code execution via a well-known existing protocol: HTTP.
They were supported in Java through the JAX-WS specification. WebServices were language-agnostic, so, contrary to EJB, you didn’t need a Java client to remotely interact with a Java application.
WebServices used text payloads encoded as XML, with the schema enforced via XSD. While a text payload generally makes debugging easier for humans, XML wasn't the best format, since it introduced a lot of boilerplate (i.e. payload that didn't represent actual data), making requests huge and not that easy to debug.
REST
The next iteration of HTTP-based RPC has led us to sending JSON rather than XML over HTTP.
The idea behind REST (which expands to Representational State Transfer) is maintaining resources identified by their URIs, which you can retrieve or modify. While this approach is not specifically targeted for remotely executing arbitrary code, it has been widely used in such scenarios, not actually limited to managing resources.
With JSON carrying much less boilerplate than XML, the size of the payloads was significantly reduced, and they became truly readable by humans - hence truly easy to debug.
It was not only the Java language that supported REST services (through JAX-RS). This approach was widely adopted with many frameworks (like Spring) and libraries (for handling JSON or testing the services) emerging at that time.
With the flexibility offered by HTTP and text payloads, it soon became important to introduce good practices around API design.
gRPC
While still using a well-known protocol (HTTP/2), gRPC was a take on replacing text payloads with binary ones, with a separate IDL (Interface Definition Language), Protocol Buffers, or protobuf, to define the structure of the messages and the signatures of the remote endpoints.
Therefore, it’s not about managing resources anymore, but this time about actually executing arbitrary code remotely. Binary payloads, while harder to debug, allow for data compression, which significantly reduces the network traffic.
Current state
Today, REST and gRPC are the two most popular alternatives to choose between, depending on the use case. No matter which of them you choose, I’d argue that it’s going to be way simpler than if you had to use EJB (heavyweight and Java-specific) or WebServices (with huge, and hardly-readable payloads). So, simplicity it is!
Wrap up
After three decades, Java has proved that real innovation is equal parts people, process, and platform:
- The language keeps getting lighter. Records, lambdas, sealed classes and virtual threads show that expressiveness and performance can grow together.
- Choose your abstractions wisely. Frameworks like Spring, Micronaut or Quarkus solve hard problems but understanding what happens under the hood is still the best debugging technique around.
- Culture and tooling matter equally. From bare-metal app servers to DevOps and Platform Engineering, successful teams invest as much in collaboration and automation as in IDEs and libraries.
- Readability is a feature. It's worth investing time in thoughtful, explicit design to make platforms more maintainable and win every code review.
If you’re on your own Java journey, from modernization to green-field cloud-native builds, let’s talk.
Here’s to the next 30 years of shipping reliable, elegant software together!