> With Yarn Workspaces, installing packages across workspaces becomes faster and lighter. Additionally, it prevents package duplication across workspaces and also makes it possible to create links between directories that depend on each other, ensuring that all directories are consistent in the monorepo.
Nice that they included a footnote about pnpm. Makes me want to write a counter article about using pnpm instead. pnpm is faster than yarn 1 (variable with yarn 2, depending on use) and workspaces are just easier. Lest we forget hoisting and dependency dedupe, which is far and away superior with pnpm.
My pnpm + TS setup is as follows:
- /shared/ directory at root that contains tsconfig.base.json, tsconfig.eslint.json, and tsconfig.json (which is only ever symlinked to; its settings are relative to the directories it's linked into)
- tsconfig.json at the root, extending shared/tsconfig.base.json, including all things that an editor might care about
- /packages/ (or /services/) at the root
- /packages/{package}/tsconfig.json -> symlinked to -> /shared/tsconfig.json
This setup lets various editors validate code in any directory that might have TS (and JS!), gives editor plugins that use ESLint a TS config to reference, and lets deployment and/or build processes operate on individual entities in packages|services without having to build the world. Note that this isn't ideal in monorepos where you want to build the world. In monorepos where I have both packages and services, for example, and the services depend on the packages being built, I leverage pnpm's recursive script ability with filtering to build some of the world, but not all.
(Happy to answer questions on pnpm monorepos)
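For a concrete picture, that filtered build is a one-liner; the package name below is hypothetical, and the trailing "..." is pnpm's literal "and its dependencies" selector, not an elision:

    # build service-a plus everything it depends on, in topological order
    pnpm --filter "@acme/service-a..." run build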
Yes, use pnpm! I should mention meta-updater [1], a pnpm tool that lets you keep your various JSON config files in sync across the workspace. pnpm's own monorepo is a good example config for meta-updater: it keeps TS project references automatically in sync with the actual dependency tree [2]
I love the idea of working with pnpm and I currently use yarn workspaces, but last I heard pnpm has issues with next/nuxt, associated ecosystem packages, and deployment (Vercel etc.). Are there ways around these problems?
import { FormType } from '../../../../types/form';
import { DateType } from '../../../../types/date';
// This is considered bad practice
Disagree on this one. The beauty of TypeScript is that it gives you auto-import autocompletion in your editor, so you don't have to worry about relative paths any more. I don't think I've manually written an import statement in years.
All code is fast to write. Decisions like this should be optimized for reading and comprehension, not writing.
IMO: Extreme opinions one way or the other don't work, and thus (most of the time) code-complete automation also doesn't work.
import { User } from "./User"; # is probably the most readable and comprehensible way to write this.
import { User } from "app/domain/users/types/User"; # is probably less comprehensible
import { User } from "../User"; # may be ok, but it may also depend on how deep this directory is, because
import { User } from "app/users"; # is actually pretty comprehensible
The point should be to make it really quick for readers to understand where this code is coming from. Imagine a situation where you've got identically named imports, like:
import { User } from "foo/db/types"; # used to schematize a user in the database
import { User } from "foo/api/createUser/types"; # used to schematize a component of the request body for creating users
Maybe the tokens themselves are named poorly, but look past that.
Many people find a specific file using, simply, CMD+P (or your editor's equivalent). There's no context for where that file sits, which inherently makes relative imports less comprehensible.
import { User } from "./User";
const u = new User(req.body.user); // but wait... what kind of user did I just create?
There are many ways to help alleviate this low state of comprehension, and it's difficult to create hard-and-fast rules around what should be deployed because it's so domain-specific.
import { User } from "foo/api/createUser/types";
// having an import path like this can help, but maybe only in small files where the imports aren't 500 lines above where they're used.
import { User as CreateUserAPIUser } from "./User";
// aliasing imports is a good solution
import * as createUserApiTypes from "foo/api/createUser/types";
const u = new createUserApiTypes.User(req.body.user);
// importing, then properly naming, the entire module is in my experience an underutilized tool to help with comprehension.
Do y’all look at the import path at the top of a file to understand a function on line 450?
Any editor I can think of provides a way to show what you’re using here. I don’t think import path intelligibility should really be that high on the list of concerns.
> (most of the time) code-complete automation also doesn't work
They’re pretty great these days, in my experience.
> Do y’all look at the import path at the top of a file to understand a function on line 450?
It depends on the language, the editor, and the development process. There's no one-size-fits-all solution.
Regardless of all of that: code files are near their peak productivity when they fit on-screen without scrolling and you can immediately move your eyes from a token to its import path. They're even closer to the peak when contextual information about the import path (which is generally a reflection of what the token does) is encoded into the token itself; via, say, import aliasing, or importing the whole module and accessing the token as a property on it.
The reality that we have thousand-line source files obviously isn't ideal. No one who has ever worked with one would say "this is the way the world should be". But, they exist, and thus we have tools like Intellisense to make them manageable, and they'd definitely be less comprehensible without those tools. That doesn't make the whole situation ideal, when talking in hypotheticals. We shouldn't admit defeat to the God of Complexity by saying, even in this small corner of good practices, "well, the comprehensibility of the import path doesn't matter because we gave up on making anything comprehensible a long time ago."
> They’re pretty great these days, in my experience.
I don't mean that they don't work in the sense that they don't produce compilable import paths (though that is true in some languages, in some projects with bespoke configuration systems, and in some editors).
I mean that they don't work because oftentimes they don't produce comprehensible import paths.
> The reality that we have thousand-line source files obviously isn't ideal
Cut that number by 2/3 and the situation is the same, right? How often are you working in files that fit entirely on screen at all times…
> well, the comprehensibility of the import path doesn't matter because we gave up on making anything comprehensible a long time ago
I think they don’t matter because even modest complexity produces this situation. This isn’t a hypothetical, it’s the overwhelming majority of real, practical code.
Using relative imports that ../ out of your package directory confuses TypeScript about which files are in scope (and how to structure the emitted output). You can end up with weird shit like my-library/dist/my-library/index.js. (I'm sure you can configure your way out of this; that's a puzzle I haven't solved and don't intend to invest time into.)
Yarn Berry has a node_modules mode now, which makes it behave like Yarn Classic and NPM. It also allows you to import sibling projects in a monorepo:
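This works via the workspace: protocol; a minimal sketch of what the dependency looks like in a consuming package's package.json (the package name is hypothetical):

    {
      "dependencies": {
        "@my-org/sibling-package": "workspace:*"
      }
    }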
Personally I'm a fan of using relative paths for anything in the same directory or a sub-directory of the current file and absolute paths for everything else.
I feel like it keeps things clean while also providing a way to distinguish between closely related modules and imports from further away in your application.
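On the TS side, the "absolute for everything else" half just needs a baseUrl (or paths) in tsconfig; a minimal sketch, assuming sources live under src/:

    {
      "compilerOptions": {
        "baseUrl": "src"
      }
    }

With that, import { User } from "domain/users/User" resolves to src/domain/users/User.ts during type-checking (bundlers need an equivalent alias setting).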
I’ve also fallen back into favoring relative paths, doing my best to keep nesting to a minimum. This also makes publishing on Deno’s module listing possible!
If TypeScript and VSCode are configured properly, it will suggest an auto-import that uses the absolute paths. In my experience, it can be a bit inconsistent though and sometimes it goes for the relative paths instead.
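If it helps, VS Code has a setting that pins the suggestion style (in settings.json):

    {
      "typescript.preferences.importModuleSpecifier": "non-relative"
    }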
The fact that the compiler itself doesn't error on import cycles, and that the errors caused by those cycles are so opaque, seems like an oversight to me.
The best™ way I found was the dumb way: at emit time (./build, ./dist in package "one"), simply copy the new build to all the required dependants across the local file system (two/node_modules/one/build, three/node_modules/one/build, etc.). It's just dumb enough to be good enough. I stress-tested it with around 100 packages and it works rather decently, especially when all the packages use esbuild/nodemon for development and restart on node_modules updates. To automate this I added a "develop" [1] feature to a tool I made, joiner, for running tasks over multi-repos.
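In shell terms the idea is roughly this (package names and paths are hypothetical, following the example above):

    # after "one" emits to ./build, fan the fresh output out to each dependant
    for dep in two three; do
      rm -rf "../$dep/node_modules/one/build"
      cp -R build "../$dep/node_modules/one/build"
    done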
If you're willing to go whole hog into pnpm, rush.js is really nice. You get a single unified node_modules and lock file for your entire repo with individual node_modules for each app/package projected with symlinks and local packages automatically symlinked together. We also have it set up so that when we push to our develop branch it builds, tests and bumps a prerelease version for only changed packages and dependents.
Rush is great. But you're absolutely right. It is whole-hog and very prescriptive. I personally prefer being closer to the bare metal and using pnpm straight up.
Absolutely. I just want to echo this for anyone in this thread considering using Rush. Straying from the way Rush wants to do things will bring pain. It is definitely not a good fit for every project, but embracing it has solved a ton of problems for us (tons of duplicated node_modules directories bringing IntelliJ to its knees, keeping versions in sync, path resolution, publishing, deployment).
I've been dabbling in the monorepo tool space for quite a while and somehow managed to never hear about Turborepo; it seems very interesting!
Just a few years back, the monorepo-tooling landscape left much to be desired; there were a lot of opinionated 'zero-config' tools out there that always seemed to fall apart the moment you strayed from their happy path. I even went so far as to create my own tool (https://github.com/abuob/yanice), partly because it was fun and taught me a lot, and partly because I simply didn't find anything fitting our use case.
It's cool that the tooling in this area is getting better and better; monorepos solve a lot of very annoying enterprise problems but require solid tooling to make them work, even at scales way smaller than Google's.
I've always felt this one should be built into the TypeScript compiler. Most of my projects are set up to share some utilities, and it's annoying to integrate Babel or other tools just to fix the paths. I've previously used [ttypescript](https://github.com/cevek/ttypescript) with the [typescript-transform-paths](https://www.npmjs.com/package/typescript-transform-paths) plugin. Gosh, it would be enough if TypeScript just natively supported plugins in the tsconfig (which is what ttypescript provides).
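For anyone curious, that setup is roughly this in tsconfig.json (compile with ttsc instead of tsc; the "@app/*" alias is just an example):

    {
      "compilerOptions": {
        "baseUrl": ".",
        "paths": { "@app/*": ["src/*"] },
        "plugins": [
          { "transform": "typescript-transform-paths" },
          { "transform": "typescript-transform-paths", "afterDeclarations": true }
        ]
      }
    }

The second plugin entry rewrites the paths in the emitted .d.ts files as well.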
This seems on topic and it's something I've wondered for a while. Does anyone have any good strategies for quickly running TypeScript against an entire repo?
I want to be able to quickly validate that a change I made didn't introduce any errors. The problem is that our repo is a few hundred thousand lines, and doing `yarn run tsc` takes a long time. I currently use VSCode, which does well at incremental feedback in the current file, but there's still a blind spot where I'm not sure if I've introduced issues that affect files I'm not currently editing.
This has been a game-changer for me. Any files in the project that are broken immediately show up as red and then I can go find what's wrong. It's a refactoring dream. I use an M1 Mac and it does just fine with big projects.
TypeScript in watch mode in a background process (with emit on or off) is the default recommendation here (and what I do, alongside Prettier formatting and some macros).
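Concretely, something like this in a spare terminal (all standard tsc flags):

    # continuously type-check the whole project without emitting JS
    npx tsc --noEmit --watch --preserveWatchOutput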
That is effectively what happens in VSCode, via the LSP, so I'm guessing something is being lazy and it's a bug with the integration.
My understanding is that VSCode's use of the LSP does a lot of "horizon shortening" to keep things performant in a large workspace. If the workspace is small enough it does seem to check every file all the time; as the workspace grows it starts to prioritize open editors and gets lazier about checking other files, and I've even seen situations where it prioritizes the current editors (visible tabs/splits) entirely over even the rest of the open editors. I don't think it's necessarily "a bug", but it certainly seems an intentional laziness that prioritizes the editing experience over the compile experience.
Which is a long-winded way to say that it's still often useful in VS Code to keep a full tsc --watch process around even if you think the LSP's horizon is wide enough to catch everything, because tsc --watch will always have different performance priorities from VSCode's editing experience.
Break your package up into multiple smaller packages in a monorepo. Use tsc's incremental mode and something like Nx, which can cache and only rebuild changed packages.
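TypeScript project references pair well with that approach; a minimal sketch (package names are hypothetical):

    // tsconfig.json at the repo root
    {
      "files": [],
      "references": [
        { "path": "./packages/core" },
        { "path": "./packages/app" }
      ]
    }

Each package's own tsconfig sets "composite": true, and tsc --build (optionally with --watch) then only rebuilds packages whose inputs changed.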
Great article!
I am also using Turborepo and it has been fantastic! Being able to skip builds for packages that have already been built, either in CI or on other developers' machines, is really amazing.
If you are not using Vercel's caching, I've built my own open-source turbo cache backend [1] that can be self-hosted.
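For reference, pointing Turborepo at a self-hosted cache is typically a matter of its remote-cache environment variables (the values below are placeholders):

    TURBO_API=https://turbo-cache.example.com \
    TURBO_TOKEN=<token> \
    TURBO_TEAM=my-team \
      npx turbo run build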
The state of multi-module JS/TS projects is such a disappointment when you are used to working in the JVM ecosystem.
The tooling sucks, configuration is overly redundant, it's fragile, and when it works it's super slow.
project-1/packages/core
project-1/packages/preact
project-2/packages/core
project-2/packages/demos
etc.
A bunch of related projects live top-level in the repo like that. Each project has a packages folder that includes the core implementation, as well as demos, framework-specific adaptors, etc.
In each package's package.json, I have a series of commands (convert the TS to JS, make a bundle, deploy to Firebase, etc.). Each command can depend on another, either in the same project or anywhere else in the file hierarchy.
This provides two benefits:
1. Iterating across packages is faster, because I don't have to worry about making sure each package rebuilds in the right order if I make a change in a library.
2. Filesystem concerns are separated: rollup only needs to worry about bundling, and it only needs to bundle web-facing projects. The only tool my libraries need is tsc.
(Before wireit, using TypeScript and Rollup together was a pain in the ass because you'd have to fiddle with picking the right TS plugin and configuring it. This was also often the long pole on doing a Rollup version upgrade. Decoupling the two makes Rollup way simpler/easier/nicer to use, which makes wireit awesome even if you don't have multiple packages.)
It's also a good replacement for lerna. Each of my packages has a publish command that runs `gitpkg publish`. My root package.json has a publish command that depends on all the packages' publish commands. Thus, I can run `yarn run publish` in the root of my monorepo and trust that all of my packages have been published to our git host appropriately. (gitpkg lets you turn a git host into a private registry, so you can share modules without setting up an NPM registry.)
Here's a snippet from one of my package.jsons. They basically all look like this. (start is complicated because of https://github.com/google/wireit/issues/33. When that's resolved, it will be as simple as the others.)
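A hypothetical sketch in that style (wireit's documented shape, with illustrative script names and paths rather than the author's actual file):

    {
      "scripts": {
        "build": "wireit",
        "bundle": "wireit"
      },
      "wireit": {
        "build": {
          "command": "tsc --build --pretty",
          "dependencies": ["../core:build"],
          "files": ["src/**/*.ts", "tsconfig.json"],
          "output": ["lib/**"]
        },
        "bundle": {
          "command": "rollup -c",
          "dependencies": ["build"],
          "files": ["rollup.config.js"],
          "output": ["dist/**"]
        }
      }
    }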