Building A Better Online Editor For Val Town Typescript

Overview
Running the Deno LSP to make Val Town's TypeScript language tools dramatically better.

Val Town makes it easy to ship TypeScript automations and applications to the internet via an integrated web editor experience. We strive to offer a magical tight feedback loop, with 100ms deploys on save.

That online editor experience should be great: high-quality highlighting, autocompletion, and hover information for the code you're writing. Unfortunately, it hasn't been: our previous editor was buggy and slow to give useful TypeScript feedback.

But now, we've rewritten our editor's TypeScript integration from scratch. It's available to all Val Town users, is fast and accurate, and the code is open source.

Our old system: running TypeScript in a Web Worker

Our previous language integration was entirely client-side. We ran a TypeScript Language Service Host in a Web Worker, to isolate it from the top frame's thread, and communicated between the Web Worker and top frame using Comlink.
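To give a sense of what Comlink was abstracting for us, here's a hand-rolled sketch of that promise-based RPC boundary. FakePort, expose, and wrap are illustrative stand-ins for the Worker/postMessage plumbing, not Comlink's actual internals or our real code:

```typescript
// A message is either a call (method + args) or a reply (result), tied by id.
type Message = { id: number; method?: string; args?: unknown[]; result?: unknown };

// FakePort stands in for the two ends of a Worker's postMessage channel.
class FakePort {
  other!: FakePort;
  handler: (msg: Message) => void = () => {};
  postMessage(msg: Message) {
    queueMicrotask(() => this.other.handler(msg));
  }
}

function makeChannel(): [FakePort, FakePort] {
  const a = new FakePort();
  const b = new FakePort();
  a.other = b;
  b.other = a;
  return [a, b];
}

// "Worker" side: expose an API object, answering method calls by id.
function expose(api: Record<string, (...args: any[]) => unknown>, port: FakePort) {
  port.handler = ({ id, method, args }) => {
    const result = api[method!]!(...(args ?? []));
    port.postMessage({ id, result });
  };
}

// "Main thread" side: wrap the port in an async call function.
function wrap(port: FakePort) {
  let nextId = 0;
  const pending = new Map<number, (v: unknown) => void>();
  port.handler = (msg) => pending.get(msg.id)?.(msg.result);
  return (method: string, ...args: unknown[]) =>
    new Promise<unknown>((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      port.postMessage({ id, method, args });
    });
}
```

In practice, Comlink replaces all of this with Comlink.expose(api) in the worker and Comlink.wrap(worker) on the main thread.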

The system looked like this:

codemirror-ts system diagram

We bundled it into codemirror-ts, a CodeMirror extension, and Deno-ATA, an incomplete implementation of Deno's import resolution logic grafted onto TypeScript.

This solution worked great in the simplest cases, but it stumbled when importing certain NPM packages and required more and more workarounds. The two main issues we faced were these:

  1. TypeScript isn't written for Deno. At Val Town, we run Deno, a modern JavaScript runtime that differs from what standard TypeScript tooling expects. Deno supports URL imports, provides server-side APIs through the Deno global (like environment variables), and introduces its own quirks. Sometimes we've been able to work around these differences; for example, we could load Deno's type definitions. But in other cases, like handling URL imports, we had to interpret files differently. Deno is distinct enough that it ships its own language server, built in Rust and wrapping tsserver.
  2. NPM modules can be gigantic, and installing dependencies is no joke. Huge import trees for NPM modules are nothing new, but when you install NPM modules locally, you have the brilliant minds of the package manager implementers doing module resolution for you: installing the minimal number of packages by comparing semver ranges. We didn't have that luxury, so referencing an NPM module would often trigger an avalanche of HTTP requests and downloaded bytes, overloading the Web Worker and making the editor's language tools unresponsive.
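To make the resolution mismatch concrete, here's a toy classifier showing the kinds of import specifiers a Deno-aware resolver has to distinguish. The function name and categories are invented for illustration, not part of Deno's actual resolver:

```typescript
// Deno resolves URL and npm: specifiers itself; plain TypeScript only
// understands relative paths and node_modules-style bare specifiers.
function classifySpecifier(spec: string): "url" | "npm" | "relative" | "bare" {
  if (/^https?:\/\//.test(spec)) return "url"; // Deno: fetch and cache
  if (spec.startsWith("npm:")) return "npm"; // Deno: npm compatibility layer
  if (spec.startsWith("./") || spec.startsWith("../")) return "relative";
  return "bare"; // plain TypeScript: node_modules lookup
}
```

The old Web Worker system had to approximate the first two branches itself; the Deno Language Server handles all four natively.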

Bringing DenoLS to Val Town

So, we redesigned our editor's TypeScript handling. Instead of running tsserver in a Web Worker, we now run the official Deno Language Server remotely in cloud containers.

We no longer have to write our own workarounds for the mismatch between TypeScript and Deno, because the Deno project's Rust code wrapping a TypeScript instance solves all of those problems. And your browser doesn't struggle to download huge NPM dependency trees, because a beefy server with a faster connection does that for you.

Now, when you visit our editor, we launch a containerized server that exposes a WebSocket and speaks the LSP protocol. The architecture was partially inspired by Mahmud Ridwan's great writeup on connecting CodeMirror to an LSP, with the main difference being that we map stdio directly to the WebSocket rather than serializing messages, because vscode-jsonrpc can do that for us!
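For the curious, LSP's stdio wire format frames each JSON-RPC body with a Content-Length header, so piping stdio straight through means frames like these travel the WebSocket untouched, and vscode-jsonrpc unframes them on the client. This toy encoder/decoder exists just to make the format concrete; it simplifies by slicing strings rather than counting bytes:

```typescript
// Frame a JSON-RPC body the way LSP expects it on stdio.
function encodeLspMessage(body: object): string {
  const json = JSON.stringify(body);
  const length = new TextEncoder().encode(json).length;
  return `Content-Length: ${length}\r\n\r\n${json}`;
}

// Pull zero or more framed messages back out of a buffered stream.
function decodeLspMessages(stream: string): object[] {
  const messages: object[] = [];
  let rest = stream;
  for (;;) {
    const match = rest.match(/^Content-Length: (\d+)\r\n\r\n/);
    if (!match) break;
    const headerEnd = match[0].length;
    const bodyLength = Number(match[1]);
    // Simplification: slices by UTF-16 code units; a real decoder counts bytes.
    messages.push(JSON.parse(rest.slice(headerEnd, headerEnd + bodyLength)));
    rest = rest.slice(headerEnd + bodyLength);
  }
  return messages;
}
```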

Our open source implementation

To tweak the language server for our purposes while keeping the CodeMirror extensions LSP-generic, we took inspiration from the official VS Code LSP client library, which we couldn't use directly because of its reliance on VS Code globals. That client lets you register middleware and URI transforms so you can easily tweak the language server at the client level when writing VS Code plugins. Transforming URIs makes it easy to spawn the language server from a temp directory while mapping file paths as if they were relative to the root, and middleware let us adapt the language server to our use case, like automatically downloading dependencies when the server sends the client a red squiggle saying a package isn't installed. We built a similar style of system as a Language Server proxy library: it acts as a language server of its own, but can arbitrarily modify messages passing through it.
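Here's a hedged sketch of the URI-transform idea: the proxy walks a JSON-RPC message and rewrites file URIs between the server's temp directory and the root-relative paths the client sees. The real vtlsp proxy is more involved; TMP_ROOT and the traversal below are illustrative:

```typescript
// Hypothetical temp directory the language server was spawned in.
const TMP_ROOT = "file:///tmp/lsp-session";

// Recursively rewrite every file:// URI in a JSON-RPC message.
function mapUris(value: unknown, fn: (uri: string) => string): unknown {
  if (typeof value === "string" && value.startsWith("file://")) return fn(value);
  if (Array.isArray(value)) return value.map((v) => mapUris(v, fn));
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, mapUris(v, fn)]),
    );
  }
  return value;
}

// client -> server: make root-relative paths absolute under the temp dir.
const toServer = (msg: object): unknown =>
  mapUris(msg, (uri) => uri.replace("file:///", `${TMP_ROOT}/`));

// server -> client: strip the temp dir back off.
const toClient = (msg: object): unknown =>
  mapUris(msg, (uri) => uri.replace(`${TMP_ROOT}/`, "file:///"));
```

Middleware fits the same shape: a function from message to message, sitting on one side of the proxy.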

Hosting the LSP as a WebSocket server involves various subtleties that mattered for our use case. We want connections to persist even after the editor tab goes away, and we want multiple clients to be able to connect to the same language server instance (to support multi-tab, or even multi-browser and multi-device, editing). Our implementation wraps the WebSocket in a stream and pipes stdio directly, and it multicasts connections so many clients can talk to the same process at once.
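The multicasting can be sketched as a fan-out/funnel-in layer. Sink is an illustrative stand-in for a WebSocket, and a real implementation also has to handle details this sketch skips, like request-ID collisions between clients:

```typescript
// Minimal stand-in for anything we can write to (a WebSocket, a stdin pipe).
type Sink = { send(data: string): void };

class Multicaster {
  private clients = new Set<Sink>();
  constructor(private toServer: Sink) {}

  attach(client: Sink) { this.clients.add(client); }
  detach(client: Sink) { this.clients.delete(client); }

  // Client-to-server messages funnel into the single server stdin.
  fromClient(data: string) { this.toServer.send(data); }

  // Server-to-client messages fan out to every attached socket.
  fromServer(data: string) {
    for (const c of this.clients) c.send(data);
  }

  get clientCount() { return this.clients.size; }
}
```

Because the language server process outlives any one connection, detaching the last client doesn't kill it; the editor can reconnect and pick up where it left off.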

Bringing it to the Browser

<video src="https://github.com/user-attachments/assets/5abbc5a3-b397-40fb-beed-f9595021f7a3" style="border: 1px solid #000; width:100%; border-radius: 2px;" width="640" height="360" controls autoplay loop></video>

Once we had the language server hosted, we needed a client: the piece that queries for hover information, displays red squiggles, and drives the rest of the language-specific tooling. The LSP specification is quite sprawling, and there are many fun features to support, like code actions (buttons such as "infer return type") and method suggestions (which pop up as you call functions). Meanwhile, the client needs to keep documents synced with the language server and send document update events.

There are some existing CodeMirror language server client implementations, which we drew from when building our own. We wrote our own so that we could support more arbitrary transports (in our case, WebSockets with message chunking), external renderers for language server UIs (so we can use libraries like react, highlight.js, or remark), and external callback inputs (so that you can implement things like go-to-definition on an external document).
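As an example of the message-chunking idea, here's a toy chunker and reassembler. The id:index:total:payload wire format is invented for this sketch, not the format our client actually uses:

```typescript
// Split a large payload into numbered chunks that fit a frame limit.
function chunkMessage(id: string, msg: string, size: number): string[] {
  const total = Math.max(1, Math.ceil(msg.length / size));
  return Array.from({ length: total }, (_, i) =>
    `${id}:${i}:${total}:${msg.slice(i * size, (i + 1) * size)}`,
  );
}

// Collect chunks (in any order) and emit the full message once complete.
function makeReassembler(onMessage: (id: string, msg: string) => void) {
  const buffers = new Map<string, { parts: string[]; seen: number }>();
  return (chunk: string) => {
    const [id, index, total, ...rest] = chunk.split(":");
    const payload = rest.join(":"); // the payload may itself contain ":"
    const n = Number(total);
    const buf = buffers.get(id) ?? { parts: new Array(n), seen: 0 };
    buffers.set(id, buf);
    if (buf.parts[Number(index)] === undefined) buf.seen++;
    buf.parts[Number(index)] = payload;
    if (buf.seen === n) {
      buffers.delete(id);
      onMessage(id, buf.parts.join(""));
    }
  };
}
```

Indexing each chunk means delivery order doesn't matter, which keeps the transport layer simple.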

Shipping on Cloudflare Containers

For deploying our language servers, it was important to keep user workloads isolated, because code is private. Even though we run language server processes in temporary directories, a user could still infer the types of libraries in other directories by importing upwards with "../../", and possibly even hop to their definitions.
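The traversal risk is easy to demonstrate: a relative specifier can normalize to a path outside the session directory, which is one reason container-level isolation matters more than path hygiene. The paths and function below are hypothetical:

```typescript
import * as path from "node:path";

// Does a specifier, resolved against the session directory, escape it?
function resolvesOutside(sessionDir: string, specifier: string): boolean {
  const resolved = path.resolve(sessionDir, specifier);
  const rel = path.relative(sessionDir, resolved);
  return rel.startsWith("..") || path.isAbsolute(rel);
}
```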

We also wanted servers to live as long as the user's session. Someone might be editing code for two hours, so a solution like traditional AWS Lambda would be a tough fit. Finally, we wanted to limit the language server resources any one user could consume at a time.

Initially, Fly.io seemed like a great option. We could spin up containers on the fly (🥁) and shut them down when not needed. The issue was that we'd need to manually manage our containers' lifecycles: routing individual users to unique containers, and making sure containers shut down after going some amount of time without heartbeats from the client.

When Cloudflare announced Cloudflare Containers, they immediately seemed like a perfect choice. Cloudflare Containers fit within the Workers and Durable Objects ecosystem: each container is a tenant of a Durable Object. This means containers are routable by an arbitrary ID, and the Durable Object layer (a persistent, serverless JavaScript class instance) can internally manage container lifecycles. In our case, we route each user to a Durable Object whose ID is literally their user ID, and then use Cloudflare's container library to shut containers down after inactivity.

This means we didn't need to implement any stateful routing layer ourselves. When you want to connect to a Val Town language server, you simply hit our Cloudflare Worker with a signed cookie containing your user ID, which routes you to an already-running, or brand-new, Durable Object and container that boots your LSP. In the future, it will also be easy to hook into Durable Objects' built-in SQLite storage to manage utilization internally, too.

All together, the architecture ends up looking like this:

VTLSP after

A server replaced the Web Worker, and instead of communicating by postMessage (via Comlink, to a Web Worker), we now use a WebSocket. But the biggest win here is using the Deno Language Server on an isolated server for language tooling: this lets us piggyback on the stock implementation of module resolution and keep those huge NPM dependency trees out of the browser's responsibilities.

Try it out

The easiest way to see this all in action is to sign up for Val Town and write some code! While we'll continue striving for perfection, it's nice to know that we've gotten a lot closer to it this summer.

Gone is the editor that was slow, buggy, and full of custom workarounds. Now every user gets the full, luxurious Deno language server experience.

Now that our editor is in production, it will only continue to improve. We plan to add more Val Town-specific language server functionality, like suggesting imports from the Val Town standard library, giving useful diagnostics about aspects of Deno that behave differently on our platform, and adding more language server features.

We've also open-sourced everything you need to ship your own cloud container WebSocket language server as vtlsp. This repo includes the client, server, and proxy, which you can see in the demo below.

Open Source Demo