A Beginner's Guide to Using Scala Metals With its Model Context Protocol Server

AI agents are the next big thing in AI-assisted software development.

By now, you've probably at least tried Copilot-style code completions or Copilot-driven edit suggestions to fix particular problems in your codebases. Agents take this to the next level: they take control of creating an action plan, editing files, running shell commands, and so on, in response to your initial prompt, the conversation that follows, and the feedback they get from executing their actions.

However, agents are only as good as the context that they have access to. Sure, this includes the project files themselves and the prompt, but sometimes, that's simply not the whole picture. That's where MCP comes in: it allows the AI model to generate calls to external systems. And because it's a standard (created at Anthropic, now adopted by Microsoft, Google, and many others), a single MCP server can cooperate with Gemini, Claude, and GPT models.

Functionalities exposed by an MCP server may include anything, from obtaining the weather, through interacting with external systems such as GitHub, to querying locally available information via the build system. A prime example is Scala's Metals server, which not only allows compiling the project but is also used by IDEs to provide code completions, type check code, inspect symbols, etc. The recently added Metals-MCP integration exposes this information to AI models.

Another way to think about MCP servers is that they supplement the built-in (provided by the authors of Copilot's agent mode or the Cursor IDE) tools already available to agents. These built-in tools include functionalities such as reading the contents of a file, editing a file, or running a shell command. With additional MCP servers, such as the Metals one, this is extended to include inspecting Scala symbols, compiling the codebase, or running tests. The textual response that agents generate can then include invocations of both built-in and additional tools.
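To make the tool invocations concrete: under the MCP specification, a tool call is a JSON-RPC tools/call request that the client sends to the server. The example below is illustrative. The tool name matches one that Metals exposes, but the argument name and value are assumptions, not the exact Metals schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compile-file",
    "arguments": { "file": "src/main/scala/Main.scala" }
  }
}
```

The agent never writes this JSON itself; it emits a tool invocation in its response, and the IDE translates that into the request above.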

Setting up Metals MCP

I assume that you already have a Scala project with the Metals plugin installed. If not, a good place to start might be generating a simple HTTP API project using Adopt Tapir, installing the Metals plugin via the VS Code marketplace, and importing the build into the IDE. In case of any problems, consult the Metals docs!

In fact, simply having Metals installed and the build imported will already take you quite far with agents—see below for the linting integration (i.e., the warnings and errors that Metals reports in your source files). MCP tools enhance that with additional capabilities.

Using Visual Studio Code

Open up Settings and search for "MCP". You should enable both MCP support in VS Code in general and the Metals MCP server specifically:

[Screenshot: VS Code MCP configuration settings]

Alternatively, you can enable these two settings in your config file:

"chat.mcp.enabled": true
"metals.startMcpServer": true

You should see the following popup when you open a project with Metals enabled (the port will probably differ):

[Screenshot: VS Code popup auto-detecting the Metals MCP server]

Now, open Copilot Chat and select the "Agent" mode:

[Screenshot: selecting the "Agent" mode in Copilot Chat]

The stack icon represents "new tools found" (third from the left)—click it to review them. You can also configure which tools will be used by clicking the second tooling button from the left:

[Screenshot: tool listing in VS Code]

Setting up Metals MCP in Cursor

First, go to Settings -> VS Code Settings and search for the metals.startMcpServer setting. Enable it so that the MCP server is started:

[Screenshot: enabling the Metals MCP server in Cursor]

This should cause Cursor to auto-detect the new MCP server, and suggest adding it to your configuration:

[Screenshot: Cursor auto-detecting the new MCP server]

If this does not happen, go to Settings -> Cursor Settings -> MCP Tools and take a look at the list. If there's nothing there, add a new MCP server manually, which should open mcp.json for you. The content should look something like the following:

{
  "mcpServers": {
    "chimp-metals": {
      "url": "http://localhost:57525/sse"
    }
  }
}

The port will differ between installations, and you can find yours in the Metals tab -> Check logs:

2025.06.05 10:40:52 INFO  Metals MCP server started on port: 57525.
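If you prefer the command line, you can pull the port out of this log line directly; the snippet below assumes the default Metals log location (.metals/metals.log in the workspace root) and demonstrates the extraction on a sample line:

```shell
# In a real workspace you could run:
#   grep 'MCP server started' .metals/metals.log
# Demo of extracting just the port number from such a line:
line='2025.06.05 10:40:52 INFO  Metals MCP server started on port: 57525.'
echo "$line" | sed -n 's/.*port: \([0-9]*\).*/\1/p'  # prints 57525
```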

With this added, you should see the MCP server working and reporting the 9 tools that the Metals MCP server currently exposes:

[Screenshot: Cursor listing the Metals MCP tools]

Working with Metals MCP

Now it's time to do something useful. Open up the chat with the agent (AI Pane in Cursor, Copilot Chat in VS Code), and try giving the agent a task. Make sure to choose the "Agent" mode, not "Ask" (which is the default in VS Code)!

In my case, I was experimenting with writing an MCP server (from scratch) in Scala. To do that, I first needed to translate the data model that's defined in TypeScript to Scala—a rather mechanical task, perfect for an AI assistant.
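As a sketch of what such a translation involves (a simplified, hypothetical fragment, not the actual MCP model), an optional TypeScript field becomes an Option with a default in Scala 3:

```scala
// Hypothetical TypeScript source fragment:
//   interface Tool { name: string; description?: string }
// A Scala 3 translation of that fragment:
case class Tool(
  name: String,
  description: Option[String] = None // optional TS field -> Option with default
)

@main def toolDemo(): Unit =
  val tool = Tool("compile-file")
  println(tool) // prints Tool(compile-file,None)
```

The bulk of the work is applying such mappings consistently across dozens of interfaces, which is exactly the kind of mechanical repetition agents handle well.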

Here's my initial prompt (I already have a small part of the model defined from previous experiments):

[Screenshot: initial prompt for translating the MCP data model]

As you can see, Cursor is using some of the built-in tools (searching the web, reading the Tools.scala file) to come up with an action plan. You can then watch it iterate on a design, generating the model classes in a couple of iterations.

If anything that's generated causes a compile error, Metals detects this and reports back as a standard "linter error":

[Screenshot: compile error reported by Metals as a linter error]

When the agent detects an error, it will attempt to fix it automatically, generating the next version of the code and linting it (i.e., compiling and receiving feedback from the Metals compile server). At some point, it does give up and asks you to take over, though!

Sometimes, you'll see the agent explicitly using the MCP tools to compile the code it generates:

[Screenshot: the agent invoking an MCP tool]

And sometimes it will fall back to running sbt compile via a shell command:

[Screenshot: the agent running sbt compile in the shell]

Remember that, in the end, it's the LLM that generates the commands to execute. It might generate a request to run an MCP tool, but it might also decide that this time, we'll do it the old-fashioned way!

You can control whether MCP tools are called automatically or need your approval. VS Code always asks before running an MCP tool. In Cursor, this can be specified per tool in the settings, but the default is to use the available tools automatically (unlike in VS Code). Quite often, the LLM will also generate questions requiring your input and approval, e.g., when deciding on a specific course of action. This is separate from approving MCP tool runs.

Agents & Scala 3

Generating Scala code with agents works quite well and is further improved by the quick feedback loop that the agents receive from the compiler (either via linters or MCP tools). However, most of the generated code will be in Scala 2 style: from the import syntax, through brace-based syntax, to using implicits instead of givens and sealed traits instead of enums.
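To illustrate the difference (with made-up names, not code from any particular project), here's the kind of Scala 2-style model an agent tends to emit, next to the Scala 3 equivalent you'd usually prefer:

```scala
// What agents often produce (Scala 2 style):
sealed trait Status
object Status {
  case object Ok extends Status
  case object Failed extends Status
}

// What you usually want in Scala 3:
enum Status3:
  case Ok, Failed

// Similarly, implicits become givens, e.g. an ordering by declaration order:
given Ordering[Status3] = Ordering.by(_.ordinal)
```

Both versions model the same closed set of cases; the enum is just shorter and plays better with exhaustivity checks and `values`.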

That's why you'll often have to explicitly ask for Scala 3 code to be generated—and even then, you might end up with Scala 2 code. Then, just ask again to refactor in Scala 3 style or to replace sealed traits with enums, and you'll get closer to the desired outcome. Automatic formatting using scalafmt (which might include brace rewriting if configured) will do the rest.
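If you use scalafmt, the brace rewriting mentioned above can be configured in .scalafmt.conf. The rewrite options below are real scalafmt settings, though the version number is just an example; check the release notes of your scalafmt version before relying on them:

```conf
version = "3.8.1"
runner.dialect = scala3
rewrite.scala3.convertToNewSyntax = true   # e.g. Scala 2 imports -> Scala 3
rewrite.scala3.removeOptionalBraces = true # braceful -> indentation-based
```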

Agents can go quite far, but in the end, they still need a lot of supervision, and you need to know your Scala (or whatever other language that you are using) to get quality results!

Help the Metals team!

How well an MCP server works depends mostly on the quality of tool descriptions, as well as the choice of which tools to expose in the first place. With too many tools, the LLM might become overwhelmed and fail to select the right one. Too few, and the functionality just isn't there. If you have suggestions for the Metals team, I'm sure they'd love to get some feedback!

Here's the current MetalsMcpServer definition (look for new Tool(...) invocations). Prompt engineering is becoming a discipline in itself, with Cursor being a case in point: an AI-first IDE is only as good as its choice of prompts. Hence, contributions might include not only new tools, but also improved prompts!

From my (still very limited) experience, the one tool that's used most is compile-file. For searching, the LLM still usually resorts to grepping the codebase. You have to be very explicit to get the model to use, e.g., the find-usages tool:

[Screenshot: prompting the agent to use the find-usages tool]

Arguably, you can do this the old-fashioned way by using the Go To References action in the IDE—faster & cheaper. But the goal here is to get the model to use these tools in more complex tasks, which are performed autonomously. It may be that the descriptions of these tools need refining or that the tasks I've been running so far didn't require such functionalities.

More experiments are needed—please report the results! :)
