Hacker News | pfg_'s comments

fish lets you cd to a folder without 'cd' although you still need the slashes. I use it all the time.

    c $> pwd
    /a/b/c
    c $> dir1
    dir1 $> ..
    c $> ../..
    / $>


zsh also does, with `setopt autocd` https://zsh.sourceforge.io/Intro/intro_16.html
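For reference, the one-line config plus a sample session (assuming zsh; the `%` prompt is illustrative):

```shell
# In ~/.zshrc, or run interactively:
setopt autocd

# A bare directory name then behaves like `cd`:
% pwd
/a/b/c
% dir1      # equivalent to `cd dir1`
% ../..     # equivalent to `cd ../..`
```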


I wouldn't consider having good default configs and being feature-rich at odds with each other. Ghostty is feature-rich but needs no config. There's no reason yabai needs to be so highly composable that it doesn't even have a hotkey listener by default, and instead points you to another piece of software that only translates hotkeys to shell commands and is no longer being maintained. i3 at least has a pretty usable default config.


The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.

Time complexity may be O(lines), but the constant factor varies enormously from compiler to compiler. And for incremental updates, compilers can do significantly better than O(lines).

In debug mode, zig uses llvm with no optimization passes, except on linux x86_64, where it now defaults to its own native backend. That backend can be significantly faster to compile with (2x or more) than llvm.

Zig's own native backend is designed for incremental compilation. This means, after the initial build, there will be very little work that has to be done for the next emit. It needs to rebuild the affected function, potentially rebuild other functions which depend on it, and then directly update the one part of the output binary that changed. This will be significantly faster than O(n) for edits.
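A sketch of how this looks in practice; the flags here assume a recent Zig (0.14 or later) and its `zig build` front end, and may still change:

```shell
# Stay resident and rebuild on every file change, reusing compiler
# state between edits (the incremental path described above):
zig build --watch -fincremental
```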


> The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.

Color me skeptical. I've only got 30 years of development under my belt, but even a 1-minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.

Further, using Rust as an example, even a project which takes 5 minutes to build cold only takes a second or two on a hot build thanks to caching of already-built artifacts.

Which leaves any compile time improvements to the very first time the project is cloned and built.

Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.


> Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.

I think the web frontend space is a really good case for fast compile times. It's gotten to the point that you can make a change, save a file, the code recompiles and is sent to the browser and hot-reloaded (no page refresh) and your changes just show up.

The difference between this experience and my last time working with Ember, where we had long compile times and full page reloads, was incredibly stark.

As you mentioned, the hot build with caching definitely does a lot of heavy lifting here, but in some environments, such as a CI server, minutes-long builds can get annoying as well.

> Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.

Maybe, maybe not, but there's no denying that faster feels nicer.


> Maybe, maybe not, but there's no denying that faster feels nicer.

Given finite developer time, spending it on improved optimization and code generation would have a much larger effect on my development. Even if builds took twice as long.


Can't agree. Iteration speed is magical when really fast.

I'm much more productive when I can see the results within 1 or 2 seconds.


> I'm much more productive when I can see the results within 1 or 2 seconds.

That's my experience today with all my Rust projects. Even though people decry the language for long compile times. As I said, hot builds, which is every build while I'm hacking, are exactly that fast already. Even on the large projects. Even on my 4 year old laptop.

On a hot build, build time is dominated by linking, not compilation. And even halving a 1s hot build will not result in any noticeable change for me.


Linking? There are two relatively simple improvements: using a faster linker and not using static helper libraries (dependency crates...) in debug builds. I understand that Rust has basically no useful support for deploying shared libraries, but they could still be useful to get faster debug builds. Well, I've never tried it, but it works well in C++. Dynamically linked binaries typically take a few milliseconds longer to start, but also take seconds less to link.
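For the first suggestion, a typical setup is a small Cargo config fragment (this assumes Linux x86_64 with clang and mold installed; adjust the target triple to yours):

```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```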


> I understand that Rust has basically no useful support for deploying shared libraries

Rust has excellent support for shared libraries. Historically they have meant lowering your types to C types and going through the C ABI, but now there are more options like:

https://lib.rs/crates/stabby

https://lib.rs/crates/abi_stable

and https://github.com/rust-lang/rfcs/pull/3470


Same here; Rust currently doesn't have the best tooling for UI or game development.


> Further, using Rust as an example, even a project which takes 5 minutes to build cold only takes a second or two on a hot build thanks to caching of already-built artifacts.

So optimizing compile times isn’t worthwhile because we already do things to optimize compile times? Interesting take.

What about projects for which hot builds take significantly longer than a few seconds? That’s what I assumed everyone was already talking about. It’s certainly the kind of case that I most care about when it comes to iteration speed.


> So optimizing compile times isn’t worthwhile because we already do things to optimize compile times?

That seems strange to you? If build times constituted a significant portion of my development time I might think differently. They don't. Seems the compiler developers have done an excellent job. No complaints. The Pareto principle and the law of diminishing returns apply.

> What about projects for which hot builds take significantly longer than a few seconds?

A hot build of Servo, one of the larger Rust projects I can think of off the top of my head, takes just a couple seconds, mostly linking. You're thinking of something larger? Which can't be broken up into smaller compilation units? That'd be an unusual project. I can think of lots of things which are probably more important than optimizing for rare projects. Can't you?


The part that seems strange to me is that your evidence for multi-minute compile times being acceptable is a couple-second compile time. It seems like everyone actually agrees that couple-second iteration is important.


It seems like you might have missed a few words in this comment, I'm honestly having trouble parsing it to figure out what you're trying to say.

Just for fun, I kicked off a cold build of Bevy, the largest Rust project in my working folder at the moment, which has 830 dependencies, and that took 1m 23s. A second hot build took 0.22s. Since I only have to do the cold build once, right after cloning the repository which takes just as long, that seems pretty great to me.

Are you telling me that you need faster build times than 0.22s on projects with more than 800 dependencies?


This is the context:

> > The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.

> Color me skeptical. I've only got 30 years of development under the belt, but even a 1 minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.

If your counterexample to 1-minute builds being disruptive is a 1-second hot build, I think we’re just talking past each other. Iteration implies hot builds. A 1-minute hot build is disruptive. To answer your earlier question, I don’t experience those in my current Rust projects (where I’m usually iterating on `cargo check` anyway), but I did in C++ projects (even trivial ones that used certain pathological libraries) as well as some particularly badly-written Node ones, and build times are a serious consideration when I’m making tech decisions. (The original context seemed language-agnostic to me.)


I see. Perhaps I wasn't clear. I've never encountered a 1 minute hot build in Rust, and given my experience with large Rust codebases like Bevy I'm not even sure such a thing exists in a real Rust codebase. I was pointing out that no matter how slow a cold build is, hot builds are fast, and are what matters most for iteration. It seems we agree on that.

I too have encountered slow builds in C++. I can't think of a language with a worse tooling story. Certainly good C++ tooling exists, but is not the default, and the ecosystem suffers from decades of that situation. Thankfully modern langs do not.


Yeah, I agree. Much like how the time you spend thinking about the code massively outweighs the time you spend writing the code, the time you spend writing the code massively outweighs the time you spend compiling the code. I think the fascination with compiler performance is focusing on by far the most insignificant part of development.


This underestimates how much running the code is part of the development process.

With fast compile times, running the test suite (which implies recompiling it) is fast too.

Also, if the language itself is designed to make it easy to write a fast compiler, that makes your IDE fast as well.

And just if you're wondering, yes, Go is my dope.


I've worked with Delphi where a recompile takes a few seconds, and I've worked with C++ where a similar recompile takes a long time, often 10 minutes or more.

I found I work very differently in the two cases. In Delphi I use the compiler as a spell checker. With the C++ code I spent much more time looking over the code before compiling.

Sometimes though you're forced to iterate over small changes. Might be some bug hunting where you add some debug code that allows you to narrow things a bit more, add some more code and so on. Or it might be some UI thing where you need to check to see how it looks in practice. In those cases the fast iteration really helps. I found those cases painful in C++.

For important code, where the details matter, then yeah, you're not going to iterate as fast. And sometimes forcing a slower pace might be beneficial, I found.


> even a 1 minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.

You are far from the embedded world if you think 1 minute here or there is long. I have been involved with many projects that take hours to build, usually caused by hardware generation (fpga hdl builds) or poor cross compiling support (custom/complex toolchain requirements). These days I can keep most of the custom shenanigans in the 1hr ballpark by throwing more compute at a very heavy emulator (to fully emulate the architecture) but that's still pretty painful. One day I'll find a way to use the zig toolchain for cross compiles but it gets thrown off by some of the c macro or custom resource embedding nonsense.

Edit: missed some context on lazy first read so ignore the snark above.


> Edit: missed some context on lazy first read so ignore the snark above.

Yeah, 1 minute was the OP's number, not mine.

> fpga hdl builds

These are another thing entirely from software compilation. Placing and routing is a Hard Problem(TM) which evolutionary algorithms only find OK solutions for in reasonable time. Improvements to the algorithms for such carry broad benefits. Not just because they could be faster, but because being faster allows you to find better solutions.


How is it real if it only exists as a spec in a book? Is there a compiler? Is there an editor?


It is real in terms of the language design. However, this is a pretty complex and sophisticated visual language, so it will take time to implement it.


So, not real then. PS: You can't get a patent unless you can show how to make it real. Not how it "would" work but how it does work.

But TBH, I'm with the rest that say this kind of visual programming is DOA for most applications.


I like the idea, and am excited to see an experimental implementation. You will have to ignore many haters who don't realize that Excel is the most popular programming language in the world. "Stop writing dead programs."


> If there was a service that detected "here's a word from our sponsors" parts of the video and removed them, that would be altering Content

This exists and it's called SponsorBlock. It automatically skips past sponsored segments. It's debatable whether that counts as altering content, though.


YouTube creators get access to watch-time stats, which show a dip during sponsored segments. My understanding is that sponsor contracts typically don't ask for access to that data, though; instead they look at views and referrals.


Hey, are you interested in whey powd... <skip> I guess not. And often it is the same sponsor across multiple videos. No, I am not interested in Crowdstrike. No, I don't want to become a Lord by owning a small amount of land in Scotland. Yes, I know about Ground News but I won't need it, and yes, I know you can cheaply buy whey powder, add some flavor, and hype it up.

And yet, HN (a text-based website) has advertising. It is a small headline in the list. Do people block this? I don't, and I am quite an adblocking person.

I actually believe billboards are a net minus for public safety, just like you wouldn't want all kinds of unnecessary traffic signs.


I tried this four times, every time it recognized it as nonsense.


Same


But it's not just good enough, it's optimal. It is equivalent to picking a random deck from the set of all possible decks assuming your random source is good. More random than a real shuffle.
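For the curious, the standard algorithm with that property is Fisher–Yates, which is also what Python's `random.shuffle` implements; a minimal sketch:

```python
import random

def fisher_yates(deck, rng=random):
    """Shuffle `deck` in place so every permutation is equally likely,
    assuming `rng.randrange` is a good uniform source."""
    for i in range(len(deck) - 1, 0, -1):
        j = rng.randrange(i + 1)  # uniform over 0..i inclusive
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = fisher_yates(list(range(52)))
```

With a good uniform source, each of the 52! orderings comes out with equal probability, which is exactly "picking a random deck from the set of all possible decks"; physical shuffling only approximates that.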


Right, and that’s what satisfactory means: the condition was satisfied.


Whenever I use chrome, I miss the style editor and multi-line REPL mode from firefox. When I switched to firefox from chrome, I didn't miss anything. There might be new features chrome has added since then that I would want if I knew about them.


While I agree on those counts, the debugger in Chrome handles large files of minified code, deep framework stack traces, and stopping in dysfunctional code better.


Except for infinite loops in JS. Firefox still handles those better.


The whole comment is spoilered, so you need to click on it to reveal that text. Presumably it could also appear in a comment that you need to scroll on the page to see.

It's clear to a moderator who sees the comment, but the user asking for a summary could easily have not seen it.


I saw other screenshots that were not spoilered at all. I thought they had hidden the text after taking the screenshot, and that the reddit post had readable text.

