Rome v12.1: a linter and formatter for TypeScript, JSX and JSON (rome.tools)
133 points by conaclos on May 14, 2023 | 130 comments



I click through every formatter-related post because I am looking for something better than what I have. My team at work wanted to use Prettier, but lack of configurability (and also heinous bugs!) made me switch to dprint[1].

But even dprint doesn't do a couple of the things I want, #1 of which is "For the love of science, please stop deleting the intentionally placed blank line after a hugely long class declaration!" E.g.:

    @Directive()
    export abstract class AutocompleteComponentBase<T> implements ControlValueAccessor, AfterContentInit, OnDestroy {
    
      @ViewChild(DsAutocompleteContainerDirective, { static: true }) autocomplete: DsAutocompleteContainerDirective<T>;
    
    // class definition continues...
I mean, I know I am a bad person because of those long names, but that is how life goes sometimes! And the blank line there at the top is just very important to like, catch one's breath, while reading this code.

(I'm really just posting this in the hopes that somebody will throw me a "Bro, just use hpstrlnt, it totally lets you configure that!" -- I have not actually tried Rome to see if it does (it's Monday morning and I'm not quite ready to be disappointed again...))

[1]: dprint is good, and I recommend it as the best code formatter I currently know of: https://dprint.dev/


I look at tools like Rome and think they might have been relevant 5 years ago.

I think Bun has a better shot than Deno because it has a very fast built-in node_modules installer. But I think Bun's goals are too ambitious - trying to be all things to all projects from a single binary. No one cares if you have to use a separate binary, esbuild, for all your bundling needs, for example.

It's amusing how the entire JS ecosystem put up with such slow build times for a decade. If these JS tools teach us anything it's that JavaScript doesn't cut it for performance and a compiled language is the way to go.


I don't get your central point. What is different about now compared to 5 years ago, when it comes to a code linter and/or bundler?

At first glance, I thought you were saying "this Rome thing might have been relevant 5 years ago, but now Bun and Deno exist, so it is not." But, that didn't really make sense because Bun and Deno are both new and growing and there's no clear winner, or even leader, in the "post-NodeJS runtimey programming thingamajig and execution environment".

Re-reading your comment, though, it seems like what you are really saying is "JavaScript itself is slow, so it is no longer relevant".

But when stated so plainly that sentiment becomes absurd, so as a reader I sort of mentally revise my interpretation of it to "JavaScript is slow (yes), so I personally wish people would just switch to faster languages".

But that doesn't really make sense in this context, either, because this tool is written in Rust.

So... what are you saying? What thot r u the thinkr of here?


My guess is that IO latency is the reason why JavaScript tools are slow: creating lots of separate TCP connections to download tarballs, storing them to disk, then decompressing them and writing the files they contain. Lots of small-file IO slows things down versus one large contiguous file transfer.

If it is IO that is causing slow project builds, then the IO in theory would be slow with compiled tools as well.


Using compiled languages to build UI in a browser is still a disaster. And yes, I've tried them all, from applets to AS3 to Elm to BuckleScript to Rust via WASM.


Bun does not plan to implement a formatter or a linter. This keeps Rome and others relevant.


I don't see how they are going to compete with tons of ESLint plugins. There's just no way a small team can do such a large amount of maintenance.


I don't need a ton of plugins, I need a working linter and formatter for JS/TS, and Rome does this job for me.

I wasted so much time on configuration of eslint/prettier and a ton of plugins; that's why I had enough and switched to something simpler... Rome.

But if you want to configure every single bit of your source code, stay with eslint, prettier, ...


Most projects use the same set of the most popular plugins. Shouldn't be a problem to get those plugins ported.


You want to know how it competes? “yarn add rome”

Done.

Now demonstrate the equivalent using prettier, eslint, and typescript. It’s a configuration nightmare …


‘npm init @eslint/config’


As far as I can tell that doesn't include prettier, which is an annoying configuration since eslint and prettier partially overlap in feature set.


That is true. It's easier to get ESLint and prettier running together than it used to be, but it is still a bit of a hassle. Prettier needs to be added as an ESLint plugin, and that allows all its rules to be defined in the ESLint config.
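
For reference, the usual glue (assuming the eslint-plugin-prettier and eslint-config-prettier packages) looks something like this in an .eslintrc.js:

    // .eslintrc.js: run Prettier as an ESLint rule and disable the
    // stylistic ESLint rules that would conflict with it.
    module.exports = {
      extends: [
        "eslint:recommended",
        // Shortcut from eslint-plugin-prettier: enables the plugin,
        // turns on "prettier/prettier", and pulls in
        // eslint-config-prettier to switch off conflicting core rules.
        "plugin:prettier/recommended",
      ],
      rules: {
        // Prettier options can then live in the ESLint config:
        "prettier/prettier": ["error", { singleQuote: true }],
      },
    };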

Feels like they could stand to merge, like eslint+tslint.


This seems to have worked for Ruff in the Python world: https://github.com/charliermarsh/ruff

It re-implements a bunch of popular linting rulesets & plugins in Rust and is incredibly fast, especially compared to the other tools.


Do you think LLMs and coding assistants (current or near future iterations) might change this?


Is this good/stable now? Worth switching from Pettier and eslint?


“Pettier” whether a typo or not is hilarious and apt. And I’m not even being insulting. I love how petty it is. :)


How is it petty? It just formats code, instantly, without complaint. That's almost the opposite of petty


It fixates on trivial things. Petty. Pedantic. Again: I don’t think it’s a bad thing. It’s what I asked for.


How does a formatter fixate on anything? It formats. It has to have an opinion on formatting.


It's very opinionated. I don't know if that makes it petty though.


I’d say it’s the exact opposite, since it prevents petty arguments over formatting and style.


As long as it stays away from my code.


[flagged]


Are insults really necessary?


I've used it for ~6 months. It is fast and works without big problems. But you have to learn and accept that you cannot modify many things like in prettier. You have to use it as it is. That is the philosophy of their tools.

And after a while it is OK. You realize that you previously spent a lot of time modifying everything, which is not really necessary.


I love it; I hadn't realized how much time I was wasting formatting my code until I turned on prettier autoformat in my repo.


3 years of using prettier and I still curse it daily for not letting me have more than 1 empty line to delineate related sections of code in large files for readability.


Add `// -----------------` in between blocks to work around this issue; IMO it provides much better delineation of content blocks, e.g. imports vs definitions.

I do miss C#'s `#region` directives that allow defining arbitrary blocks that can be named and folded, though.


In JetBrains WebStorm you can create foldable regions. Happy customer.

https://www.jetbrains.com/help/webstorm/working-with-source-...
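
For anyone unfamiliar, the marker syntax in JS/TS files is, as far as I remember, just paired comments:

    //region Form validation helpers
    function isValidEmail(value: string): boolean {
      return /^[^@\s]+@[^@\s]+$/.test(value);
    }
    //endregion

and the IDE collapses everything between the markers into a single named line.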


It is quite stable at the moment. I would still recommend taking a close look at the changes that Rome suggests, especially for large codebases: I think that some bugs are still expected.

The LSP (VSCode extension) is less stable at the moment.


The tooling itself is in relatively good shape in my usage, although the VS Code extension currently has a number of rough edges (frequent crashes, etc.).

It’s worth a try, but wouldn’t necessarily recommend ‘switching’ wholesale at the moment.


It links against a more recent glibc than Amazon Linux supports, so in my case it’s not ready for usage on EC2 hosted CI machines.


I suppose it’s “JS-stable”, given it is version 12, cutting 2 majors in 6 months.


I've got mixed feelings about Rome. There's so much room for improvement over today's ridiculously slow tools. But I'm sick and tired of people in this industry dropping their toys because they're tired of working on stuff people actually use, instead of just improving what they currently have.

Would it have been impossible to nudge Node.js in the direction of where Deno is today?

Would it have been impossible to replace Babel with a Go implementation?

I also don't want tools that want to be literally everything.

Imagine if Daniel Stenberg was like, "You know what, I'm tired of cURL; let me rebuild literally the same thing in another language and give it a new name, and entirely different opts."


> Would it have been impossible to nudge Node.js in the direction of where Deno is today?

Yes, do you not remember the whole Node.js vs. io.js split? The community is already pulling Node in tons of different directions.

> Would it have been impossible to replace Babel with a Go implementation?

Again, yes, it would be impossible by dint of the fact that Babel is designed as a framework where you process JS ASTs using JavaScript. Rewriting it in Go removes the whole ecosystem of JS plugins it built, which is kind of the entire point of the project. A Go rewrite of certain transforms is a new project.


Also Deno (along with Bun) did pull Node in its general direction -- Node has finally started to ship things like a test runner, a watch mode, and fetch.


This. The same happened with npm starting to offer new features (like lockfiles) only after yarn had them. History has shown competition is the only way to make them move.


I think, in part, it’s because people want shiny tools and lots of new features but nobody wants their code to break when upgrading to a new version. Such is the duality of backwards compatibility. That leads to lots of churning with people working on scratching their own itches instead of “fixing” existing tools, because a lot of times that fixing implies breaking compatibility and upstream won’t merge the changes.


Moreover, whenever I’ve offered up a solution that scratches my itch, people complain that I should have focused on improving the current thing, but typically the maintainers of the current thing aren’t interested and have different priorities (probably very understandably!). When I can fix my problem by contributing upstream, I do so but it’s often prohibitively difficult.


I think you’re underestimating the politics of open source and the thanklessness of making disruptive changes to something where a large and vocal portion of the user base just wants the thing to keep working as it is. Without breaking compatibility it’s impossible to fix the design mistakes of the past, so the most worthy changes are necessarily disruptive.


I wouldn’t discount all-in-one tooling. Look at a language like Rust — its std toolchain includes a doc generator, package creation, type system, linter, formatter, compiler, dependency management, etc. This makes for a very pleasant developer experience. JS has literally none of that, and yet has more strict runtime constraints. (E.g. if your user has to download your software every time they use it, it better download and be ready to use very fast.)

It has been cool to see more experiments around this type of approach in JS. Many disparate tools and versions are hard to manage because there are so many permutations of options!
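
To make the Rust comparison concrete, one toolchain covers roughly all of it (rustfmt and clippy ship as rustup components):

    $ cargo new my-crate   # project scaffolding + dependency manifest
    $ cargo build          # compiler + dependency resolution
    $ cargo fmt            # formatter
    $ cargo clippy         # linter
    $ cargo test           # test runner
    $ cargo doc            # documentation generator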


I'm fine with it if it's the first-party language utility itself, but otherwise I'd prefer third-party tools to limit their scope.


> Would it have been impossible to nudge Node.js in the direction of where Deno is today?

I don’t think anything necessarily prevents Node from being nudged in that direction — and I think we’re seeing the first signs of that with Node 20 — but the implication in nudging is that it takes a comparatively much longer amount of time. Deno has its own set of challenges as a result of it being an independent project with its own sets of breaking differences, but I think the goal with Deno was try to push the JavaScript server ecosystem forward rather than to slowly nudge it forward (with the sizable resistance that entails).

cURL is more singular here in comparison; there’s no rapidly-changing ecosystem that dictates the pace at which it changes or improves, but that’s not the case for the JavaScript ecosystem (see: Node’s many problems in keeping pace with the adoption of modules over the last several years).


The Node/JS/React world is full of thought leaders who gain reputation and clout by releasing new tools/libraries, convincing the masses to adopt them and pay for expensive consulting/courses on them (instead of just having adequate documentation to begin with), and then abandoning it altogether to move onto their next "product" to "sell". Rinse and repeat. It's why if you're not paying attention, JS best practices will seem completely different in one year's time.

Kind of a pessimistic view, but I do think this is what happens.


> The Node/JS/React world is full of thought leaders who gain reputation and clout by releasing new tools/libraries

You call them thought leaders; Jamie Zawinski used to call them attention-deficit teenagers, in his CADT model of open-source development.

www.jwz.org/doc/cadt.html


> But I'm sick and tired of these people in the industry dropping their toys because they're tired of working on stuff people actually use instead of just improving what they currently have.

I sometimes have the same feeling. I often dream of a "united community" working on a "perfect tool".

But there are so many differences in the community. In reality – as in most human systems – people create their own "dream tools". And the success of new tools influences earlier tools trying to catch up.

Because Deno has some traction, this forces Node.js to catch up. I think Bun has accelerated some changes in Deno's Node.js compatibility.

In the process, a lot of effort seems lost. However, this is the way humans and communities work: people learn by imitating others, and changes are often pushed by new tools and systems that have more freedom to evolve.


If Node.js changed rapidly towards Deno we'd be back to folks complaining about JS fatigue.

Both Node.js and Deno have strong reasons to exist. Node has a massive installed base that values slightly more stability at this point. Deno benefits from being able to boldly explore big changes.


This reminds me of when npm was very slow and had some issues. So Yarn was awesome. Then npm got fixed and I don’t see a need for Yarn anymore.

There’s a ton of value of everyone using the same tool even if it isn’t perfect for every specific use case. Now install readme files regularly explain both yarn and npm installs and the community needs to be aware of the existence of two basically identical tools.


Yarn 3 is still better than npm these days due to PnP, if you choose to use that, but even if not, it's also much faster at installing node dependencies than npm.


Yarn is actually slower than npm these days. Here are some current benchmarks:

https://github.com/orogene/orogene/blob/main/BENCHMARKS.md

As you can see, both bun and orogene beat the js implementations by miles.


And pnpm fixes a flaw both of them have (had?).


Npm is still slow compared to yarn and pnpm.


It could have to do with the massive usage of JavaScript.

But, I only see this happening in the JavaScript space.

There seems to be something intrinsically wrong with JavaScript and Node.js in that a lot of the tools used to manipulate the language are written in other languages.


Really? How many ways exist to manage your dev env in Python?


Too many, I believe, for the same reasons as JavaScript. It's not a bad thing when the target language is simply unfit for developer tooling, but in the case of JavaScript and Python it's simply the speed. Doing massive AST manipulation in a language that is both slow and unsafe like JavaScript just seems like madness, and it's strange that we've been kind of okay with it for the longest time.


cURL is written in C, and also (definitionally) IO-bound so the benefits of rewriting aren’t obvious perf-wise.

JS linters running in Node are mostly CPU-bound. You can get an order-of-magnitude improvement from writing those tools in Rust.


This is being pedantic; that's not the point. Do you have to literally abandon the project to rewrite it, leaving all your users out to dry?

Would you get divorced from your spouse if something minor wasn't working out, too?


Who's abandoning their project? Did all eslint and prettier maintainers abandon those projects to work on Rome?


You can't rewrite something complex. If you try to rewrite it you'll not achieve feature parity, so it will be something different, at least while you're building it, and you will perpetually chase your target. You MAY try to build something somewhat compatible by starting small (not a replacement), or you can fork and gradually diverge.


Waiting minutes for your JS-based toolchain several times each day is not something minor! I learned Reason with ReasonML and got spoiled; builds there took fractions of a second. My first time with TypeScript made me wanna cry. Some teammates disabled the pipeline before pushing; guess what happened.


Nobody is married to Node.


This is typical FE tech stack engineering. Invent new solutions to solved problems and sell the solution as the next hot thing. And what follows is what causes people to dread FE work.


Who knows if curl would have ever existed if Stenberg had invested his time in improving existing tools.


How much time do you spend improving those tools you use? Honest question, zero snark intended.


Creating something new is always more fun than maintaining something old.

The other problem with existing and working projects is that they have maintainers and a community, and you can't change things radically. That is a really long process.

And sometimes competing projects improve each other.


> Would it have been impossible to nudge Node.js in the direction of where Deno is today?

It's not the first time Node.js required "forking". And so what if we do nudge Node yet again? It will just keep repeating itself unless there are bigger changes.

> Would it have been impossible to replace Babel with a Go implementation?

More like the world has changed since IE left the chat. We need a smaller / different set of polyfills.


Node literally implemented every Deno complaint except TS, which was easy enough. It's part of why Deno is still without significant market share.


Pure nonsense. Has it implemented:

* Single file scripting with dependencies

* No node_modules folder

* Typescript built in. You might dismiss it as easy, but it's only easy in the sense that it's not a lot of work if you've done it. It's still a massive faff and paper cut. Whatever the next thing up from a paper cut is. Toe stub?

* Guaranteed type hints.

* A good standard library


1) yes (SEA, single executable applications) 2) yes (experimental HTTP imports and import maps)

All the others are subjective (and wrong :))


It hasn’t even caught up in performance…


There's this incredible phenomenon on this website where someone posts "Show HN: <tool>, but modern" where <tool> is something used by nearly every Linux machine in the last 40 years, and what makes it "modern" is Rust, inexplicably.


Is there a JSON formatter out there that can be configured specifically for numeric arrays? I'm specifically looking for something that formats 1D arrays on one line and 2D arrays as one line per row.

fracturedJSON gets close but not exactly what I want.
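
If no formatter supports it, hand-rolling the layout is at least straightforward; a rough sketch (hypothetical helper, not an existing tool) of the 1D-inline / 2D-row-per-line idea:

    // Format numeric arrays: 1D on one line, 2D as one line per row.
    function formatNumericArray(value: number[] | number[][]): string {
      if (Array.isArray(value[0])) {
        const rows = (value as number[][]).map((row) => `  [${row.join(", ")}]`);
        return `[\n${rows.join(",\n")}\n]`;
      }
      return `[${value.join(", ")}]`;
    }

    formatNumericArray([1, 2, 3]);          // "[1, 2, 3]"
    formatNumericArray([[1, 2], [3, 4]]);   // "[\n  [1, 2],\n  [3, 4]\n]"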


Does anyone know what has happened to the company behind Rome? Seems like all the employees have left


What I heard, and what seemed to be considered "common knowledge" about a year ago, was that the founder embezzled the money. His VCs considered suing, but decided it was just too messy. I'm not joking; something about some seriously extravagant vacations and a house renovation all paid right out of the company's bank account.


Any links to direct discussion of this (or sources)? All I can find is indirect evidence that they burned through VC money too fast.


AFAIK they ran out of money and now the project is community driven, so there's no hope of having the all-in-one tool soon.


They ran out of money? I thought they raised like 5M two years ago and they had like 3-5 employees and pretty much 0 expenses outside of salaries.


According to HN, 1 good employee would cost you $1M per year, since everyone obviously lives in SV. 5 employees and that works out to 5M/year.


With payroll taxes and benefits, an employee is ~2x TC. Since startups frequently underpay, let's say the average salary was ~150k in TC for early employees within SF. That's 300k*5 = 1.5 million gone per year. Over two years that's 3M. 2M left over for finance, legal, rent, and other expenses is kind of tight, and you definitely won't make it a third year without any kind of customers or other financial infusions. That's why a company has to find some kind of product market fit quickly - you have to show there's a there there so you can continue growing either through revenue growth alone or by raising another round.


2x TC seems high. For one direct example I know of, the benefits package (including all the insurances, with all premiums paid by the employer) costs around $25k/yr. I feel like at a $150k salary, it's probably like $200k total cost? The payroll tax employers are responsible for seems to be around 7-8%. $30k benefits and $12k extra payroll tax leaves $8k for equipment and other expenses. That would be about $1M/yr for 5 employees.

Curious to learn more though.


I think the 2x number is a rough ballpark that includes total cost of employment (eg HR salaries / outsourced HR etc). It’s entirely possible that this overhead is smaller for startups and it may have shifted over time (this is a rule of thumb I was told by a few people 10 years ago)


Looking at the top contributors (https://github.com/rome/tools/graphs/contributors) that does not seem to be true?

It is Ireland, undisclosed (the founder, so probably SF), France, and UK.


You have a source for that? Couldn't find a mention on their blog.


It's not an official source, but it seems that is the case according to this discussion[0]. Searching the social media accounts there's nothing; also, Sebastian[1] hasn't published anything more about Rome since December.

[0] https://github.com/rome/tools/discussions/4302 [1] https://twitter.com/sebmck


How was Rome going to make money so that an investor could get a return on their investment?



They ran out of money after the CEO spent all of the money on...questionable...purchases. All of the employees were laid off with no notice.


Source on that? I can’t find anything and I’d be interested in knowing more, especially for the future prospects of the project


What did they spend the money on?


Half-OT:

What do you think of pnpm?

I've seen it off and on since 2015, and read that some projects switched to it and then back again when it didn't work out.


Coming from using yarn, it is way easier to move to pnpm than to yarn 2+. It's now my go-to. The only issue I find is that you still won't find good community support for it: things like Dependabot and some edge hosting platform build agents.


For me there was always a problem with the pnpm cache working together with Vite when doing pnpm link, while yarn link works perfectly, so even though pnpm is much faster, I went back to yarn.


You can email the dependabot team and they'll opt your repo into a beta with pnpm support


Don't think this is true; yarn 2 is basically a drop-in. You may be thinking of the PnP stuff, but that's off by default now. pnpm used to be the difficult one but seems a bit better these days.


From personal experience introducing pnpm into a big JavaScript project, my very subjective(!) view of the situation:

pnpm offers fast installs and the best dependency management capabilities I have seen.

However, there is a steep learning-curve attached to the latter. You cannot just task a random developer with fixing emerging problems or else you will be left with half-a-dozen large and unintelligible config files and hacks.

If you are the type of team that never updates dependencies, it is not worth the effort. If you are the type of team that applies the heuristic of "yes" to the question "Should I download a dependency?", it is not worth the effort.

On the other hand, if you value build tool performance, if you are updating frequently and if your dependencies are carefully curated, pnpm is great and with Node >16 it is just a...

    $ corepack enable
...away, too.


pnpm only has benefits if you use it with packages/workspaces and things like automatic patching.

I use it in some projects, but for simpler projects I stay with plain npm and use scripts (via scripty).

Install/update performance and SSD space savings are not really killer features for me.


This new version adds support for Stage 3 decorators. It also stabilizes more than 30 linter rules.
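
For anyone who hasn't seen the new syntax yet, here's a rough sketch of a Stage 3 method decorator (the form TypeScript 5 supports without the old experimentalDecorators flag) that the parser, formatter and linter now have to understand:

    // A Stage 3 decorator receives the original method plus a context
    // object, and may return a replacement method.
    function logged<This, Args extends unknown[], R>(
      target: (this: This, ...args: Args) => R,
      context: ClassMethodDecoratorContext<This, (this: This, ...args: Args) => R>
    ) {
      return function (this: This, ...args: Args): R {
        console.log(`calling ${String(context.name)}`);
        return target.call(this, ...args);
      };
    }

    class Greeter {
      @logged
      greet(name: string) {
        return `Hello, ${name}!`;
      }
    }

    new Greeter().greet("Rome"); // logs "calling greet"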


Does it allow creating local custom lint rules? I haven't found anything about this in the docs.


Rome does not have many rules; that is their philosophy: you should not waste time on every single configuration option, and should use it as it is. https://docs.rome.tools/lint/rules/


OK, not having many rules is fine, but not allowing me to create my own is really a downside compared to eslint. Creating arbitrary static checks is really super powerful and actually useful for overcoming limitations of what can be done with TS typing alone.


Eslint can get VERY slow on large apps. Rome seems to be the perfect answer to that, but I can't seem to find a way to port/import Eslint rules into Rome.


For now, Rome implements most of the ESLint recommended rules (including TypeScript ESLint) and some additional rules that are enabled by default. In the future, you can expect a recommended preset that is a superset of the ESLint recommended preset. So if you're not heavily customising ESLint, you should be able to use Rome.

Otherwise, most of the rules are not fine-tunable in the way that ESLint is. Rome tries to provide the experience that Prettier provided in the formatting landscape: good defaults for a near-zero configuration experience. It tries to adopt the conventions of the JS/TS community. Still, some configuration is provided when the community is divided on some opinions (e.g. space vs. tab indentation, semicolons or as-needed semicolons, ...).
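
For example, a rome.json exercising those few options looks something like this (field names from memory; double-check them against the configuration schema):

    {
      "formatter": {
        "enabled": true,
        "indentStyle": "space",
        "lineWidth": 100
      },
      "linter": {
        "rules": { "recommended": true }
      },
      "javascript": {
        "formatter": {
          "quoteStyle": "single",
          "semicolons": "asNeeded"
        }
      }
    }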

There is an open issue [1] for listing equivalent rules between ESLint and Rome. Expect more documentation in the future, and maybe a migration tool.

If I had been one of the founders of Rome, I would have pushed for more compatibility with ESLint. In particular, using the same naming conventions and thus the same names for most rules, and recognising ESLint ignore comments.

[1] https://github.com/rome/tools/issues/3892


That’s because it only supports a very small subset of lint rules. As best I can tell, the rules are very opinionated, often in weird ways, and not nearly as complete as ESLint's.


Thanks for the clarification. So it is basically only suitable for greenfield projects?


It works as it should; as a user you have to accept that Rome works in a specific way.

If you can do that and live with that, Rome is an excellent tool. It took me some weeks to accept the "Rome way", and now I am more than happy with it and no longer waste time on endless configuration of eslint/prettier.


When you have a large project with 500k+ lines of code, two dozen contributors, and tens of eslint rules, it is very hard to "just" do it the Rome way.

A migration path is required, otherwise you'll introduce a tool, have a PR that reformats everything and it will be a huge mess.


After you commit that big PR add the commit hash to the .git-blame-ignore-revs file. Then it won't affect the git blame attributions.

https://docs.github.com/en/repositories/working-with-files/u...
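
Locally you also need to point git at the file (GitHub picks it up automatically; the config key, if I remember right, is blame.ignoreRevsFile):

    $ git rev-parse HEAD >> .git-blame-ignore-revs   # record the reformat commit
    $ git config blame.ignoreRevsFile .git-blame-ignore-revs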


Rome is less opinionated than it used to be. However, I admit it is still more opinionated than ESLint. It is part of its DNA.

However, Rome is also trying to provide a smooth experience. And we are open to relaxing some rules if it makes sense. If you have some time, we would like to get to know the rough edges of Rome.


Thanks for both comments and the clarification!

I'll give it an honest shot tomorrow and update you on the pain points.


Hey! Have you had time to test?


V12? When/how did they jump from v0.x.y to 12?


Probably like React.

0.10.0

0.11.0

12.0.0


Makes sense. In 0.x.y releases, any version changes are allowed to have backwards-incompatible changes. Authors typically use "minor" version changes to indicate this.

https://semver.org/#spec-item-4


In what way does it make sense to go from 0.16.0 to 17.0.0? The first stable release would be 1.0.0 if you want it to make sense somehow.

Or are we just starting at whatever versions we want now? My next library is gonna start at 666.0.0 once it gets stable in that case.


It's not exactly unusual to go to non-zero major versions that way, especially if it's been at 0.x for a while and people are thinking/talking about those versions. No confusion between 0.16 and 1.6 etc.


Seems fruitless as an approach to reducing confusion to me. What about versions 0.1.6, 0.6.1, 6.0.1, and 6.1.0?

I’d be more likely to think something was wrong with those releases, and assume someone was forced to delist versions 0.16.0 thru 1.5.0.


If I understood the semver spec correctly, you're allowed to do just that.

As long as you only ever increase numbers, and never go down, you're good.


I blame Java. In 2004 it went from version 1.4 to version 5.


What about Windows? Bit jumpy with their version numbers


Does this work with Vue SFCs?


Why write something like this in Rust since it presumably doesn't have anything memory intensive or real time going on? Rust's manual memory management is a necessary thing sometimes, but it is a pain, right?


> but it is a pain, right?

Not really.


Ha - I was thinking the exact opposite. I'd have figured that tools that require an understanding of an entire project (imports/exports/types/etc.) would need to be VERY memory intensive, and something that would actually benefit from manual management/Rust.

I don't know much about compilers/formatters/linters and how they work though, so I could easily be wrong.


Freeing up memory has a cost and if you don't need to do it then don't. And if you don't need to free up memory you can't have a memory leak.


If you aren't short of memory, why write in a language designed around managing it? The alternative is garbage collection, not simply leaking it.


You can write an application that never frees memory and isn't leaking memory.


For some embedded applications, fixed memory regions work fine. For something like a compiler, not so much. Remember that you have to deal with chunks of data of unpredictable size. Even string literals might occupy megabytes. You could program your way around that, but most of the time these days, it's ok to burn memory (i.e. use GC) for the sake of development velocity.


Nah at worst it's like a compiler. Memory intensive = browser engine that is going to burn gigabytes. That was the original use case for Rust.


Compilers are very memory hungry.


You can write a memory hungry compiler but you don't have to.



