cytzol's comments

Embedding your assets like this isn't always an improvement. For example, I work on a site with a Go server and static content pages, and I like that I can update one of the pages and see the change instantly without having to re-compile the entire server binary just to get the new files included.


Easy enough to have the app check the regular file system first, then fall back to the embedded fs. You could have the best of both worlds.
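
A minimal sketch of that idea, assuming Go 1.16+'s embed package and a local "static" directory (the names here are just for illustration):

    package main

    import (
        "embed"
        "io/fs"
        "net/http"
        "os"
    )

    //go:embed static
    var embedded embed.FS

    // pickFS serves files from ./static on disk when it exists (fast iteration
    // during development) and falls back to the compiled-in copy otherwise.
    func pickFS() http.FileSystem {
        if _, err := os.Stat("static"); err == nil {
            return http.FS(os.DirFS("static"))
        }
        sub, _ := fs.Sub(embedded, "static") // strip the "static/" prefix from the embedded tree
        return http.FS(sub)
    }

    func main() {
        http.Handle("/", http.FileServer(pickFS()))
        http.ListenAndServe(":8080", nil)
    }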


> Why do you prefer having an app running (and owning the sole menu bar) when it has no GUI on screen?

Here's a story from when I was using Windows for work after using Macs for ages. I had one folder in Sublime Text open, and I wanted to close that window and then open another one. So I hit the [X] button in the corner, which closed the window, and then I instinctively went to the global menu bar at the top of the screen to go to 'File › Open' to open my new window. But of course, it wasn't there, because closing the window also got rid of my ability to access the menu bar.

And then I opened Sublime Text again, and it re-opened with the old window I wanted to get rid of.

This is why, like others in this thread, I've grown to really like the application-vs-window separation. Having a menu bar on each window, and having programs close themselves when they get down to zero windows, means I have to do my operations in a certain order (I have to open my second window before I can close the first one, I can't do it in either order) and use a UI hierarchy that I don't think makes sense (I have to use the menu bar of an existing window to open a new window, even though that operation has nothing to do with that window's contents).



I found this "best practice" curious to read:

> The standard net/http server violates this advice and recovers panics from request handlers. Consensus among experienced Go engineers is that this was a historical mistake. If you sample server logs from application servers in other languages, it is common to find large stacktraces that are left unhandled. Avoid this pitfall in your servers.

I don't think I've ever seen a server library — HTTP or otherwise — that didn't have a top-level "catch all exceptions" or "recover from panic" step in place, so that if there's a problem, it can return 500 (or the Internal Server Error equivalent) to the user and then carry on serving other requests.

My reasoning is that any panic-worthy programming error is almost certainly going to be in the "business logic" part of the server, rather than the protocol-parsing "deal with the network" part, and thus, recovering from a panic caused by processing a request is "safe". One incoming request could cause a panic, but the next request may touch completely unrelated parts of the program and still be processed as normal. Furthermore, returning a 500 error whose stacktrace nobody reads is bad, yes, but it's way, way, way better than having your server crash so that nobody can use it at all.

Oh wait, is the assumption here that your service is being run under Borg and has another 1000 instances running ready to jump in and take the crashed one's place? Is this another case of Google forgetting that people use Go outside of Google, or am I reading too much into this?


The question: what is the state of your server after a handler panics? The answer: you have no idea. It is not wise to continue serving requests when you may have serious issues in the state of your server. Maybe some central data structure is now corrupt. You have no way of knowing. Fail fast and fail hard.

OTOH, maybe your priorities are different and you would prefer to be more available than correct. In that case by all means add a recover to the top level of your request handlers. But it was a mistake to have made this decision for all users of net/http ahead of time.
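
Roughly, that top-level recover is a dozen lines of middleware; a sketch (the logging and the blanket 500 are placeholder choices):

    package main

    import (
        "log"
        "net/http"
        "runtime/debug"
    )

    // withRecover fails a single request with a 500 if its handler panics,
    // instead of letting the panic take down the whole process.
    func withRecover(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if p := recover(); p != nil {
                    log.Printf("panic serving %s: %v\n%s", r.URL.Path, p, debug.Stack())
                    http.Error(w, "internal server error", http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("hi")) })
        log.Fatal(http.ListenAndServe(":8080", withRecover(hello)))
    }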


In my experience, the panic is most likely because someone accessed a nil field when adapting some data. Nothing is corrupt, we just threw an exception in a mundane way.

The reality is that this is far more common than something truly fatal. Whether it was a mistake or not, it probably is the correct behaviour for most use cases.


Yes, that's why they made that decision back then. However, in the end the library cannot and shouldn't know; this decision needs to be made intentionally in middleware. "Correct for most use cases" is not good enough here: it should always be correct for such a fundamental thing.


You're assuming that the code was written exception safely. E.g.

    mu.Lock()
    foo := bar[baz]    // <- throws exception / panics
    mu.Unlock()
Go is sold as a language without exceptions, so people don't write exception-safe code. Which is fine, except when exceptions are actually caught.


That wouldn't pass a code review where I work... Use a defer to do the unlock


I would also not allow it. I'm saying the problem is that core Go developers say "Go doesn't have exceptions", which is manifestly false, but causes people to not write exception safe code.

But regardless of you and me, I'm saying there's a lot of broken code out there because of this doesn't-but-actually-does misinformation.

And it's very annoying that you have to tell people to do:

    var i int
    func() {
      mu.Lock()
      defer mu.Unlock()
      i = foo[bar]
    }()
Clean code, that is not. (even if you simplify it by having the lambda return the int)


This is maybe the biggest thing scaring me away from Go. This half-assed "we don't use exceptions, so you shouldn't have to care about them, except when we do, so you still must write proper defers, which are now doubly verbose because nobody considered it a primary use case"... In any other language, a mutex held outside a using/with/try-with/RAII block would be instantly flagged in code review or by linter tools. In many cases it's even hard to write incorrectly, since entering the context is the only way to acquire the lock.

Now this middle ground leaves you having to write triply verbose if err != nil on every third line of your code and still not be safe from panics-that-shouldn't-have-been-panics.

As the parent says, the only way panics can ever work is if the top level never catches and recovers from them. I'm no expert in Go, but that would mean that in such a perfect world, defer should hardly ever be needed at all, not even for locks? Only for truly external resources? But now, with popular web servers doing such recovery, the entire ecosystem got polluted and everyone needs to handle it?


Does defer get called on panic? I thought panic cancels all further execution


Yes it does, which is why recovering from a panic can be done in a deferred function. The Go runtime maintains enough metadata to track which deferred functions need to be run while unwinding the stack.
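
A tiny illustration of that ordering (deferred functions run last-in-first-out while the panic unwinds):

    package main

    import "fmt"

    func main() {
        defer fmt.Println("deferred: runs even after the panic")
        defer func() {
            if p := recover(); p != nil {
                fmt.Println("recovered:", p) // stops the unwinding here
            }
        }()
        panic("boom")
    }
    // Prints "recovered: boom" and then "deferred: runs even after the panic".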


The main issue that usually arises as a result of catching panics in handlers like HTTP is that unless the code is written very deliberately to be able to recover from being interrupted in the middle of any function call, there is a high risk of e.g. mutexes being left locked, which in turn generally leads to a (silent) program deadlock.

This happened at least once during my couple of years at Google; it wasn't in an HTTP handler, but the issue was very similar.


> most likely

> probably is correct

Yeah I don't know...


But that implies your request checking is off; if your server expects a field to be filled in and a client doesn't send it, that should be an HTTP 400 Bad Request error, not an internal server error.

C/C++-based servers running into an error like that could well be open to attacks. Go will be a bit more resilient, but it's still better to avoid situations like that.

That said, logging the error and going on serving responses is fine I think (pragmatic), as long as the error is analyzed. But an error that doesn't trigger immediate action is a warning, and warnings are noise [0].

[0] https://dave.cheney.net/2015/11/05/lets-talk-about-logging#:...


> In my experience, the panic is most likely because someone accessed a nil field when adapting some data. Nothing is corrupt, we just threw an exception in a mundane way.

Even more so if a database is involved (which is generally the case), because odds are the transaction just gets rolled back and there's basically nothing that could be corrupted.


The Echo web framework is an interesting example of this [0]. By default it doesn't handle panics, but there is a configurable `Recover` middleware which does so. I agree though it's unusual, and this stuck out to me as an exception that proves the rule.

[0] https://echo.labstack.com/middleware/recover/
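
Based on the linked docs, opting in looks roughly like this (a minimal sketch; the handler is a placeholder):

    package main

    import (
        "net/http"

        "github.com/labstack/echo/v4"
        "github.com/labstack/echo/v4/middleware"
    )

    func main() {
        e := echo.New()
        e.Use(middleware.Recover()) // panics are only caught if you explicitly add this
        e.GET("/", func(c echo.Context) error {
            return c.String(http.StatusOK, "ok")
        })
        e.Logger.Fatal(e.Start(":8080"))
    }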


I believe the idea is that 'panic' is considered something fatal. If it happens, the application should die unless you have a strong reason for it not to. If you can recover e.g. by returning HTTP 500 for a single request, it should be handled by returning errors up throughout the call stack, i.e., by error handling instead of panicking.

It's definitely an opinionated approach though.
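
In practice that tends to look like an error-returning handler type plus one small adapter; a sketch (handlerE and getItem are invented names, not part of net/http):

    package main

    import (
        "errors"
        "log"
        "net/http"
    )

    // handlerE lets every layer report failure by returning an error; a single
    // adapter decides what an error means at the HTTP level, so nothing panics.
    type handlerE func(http.ResponseWriter, *http.Request) error

    func (h handlerE) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        if err := h(w, r); err != nil {
            log.Printf("%s %s: %v", r.Method, r.URL.Path, err)
            http.Error(w, "internal server error", http.StatusInternalServerError)
        }
    }

    func getItem(w http.ResponseWriter, r *http.Request) error {
        if r.URL.Query().Get("id") == "" {
            http.Error(w, "missing id", http.StatusBadRequest)
            return nil // a client error, already handled
        }
        return errors.New("not implemented") // bubbles up and becomes a 500
    }

    func main() {
        http.Handle("/item", handlerE(getItem))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }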


Nil pointer panics happen, e.g. when a DB column that was supposed to be non-null is null and the code unexpectedly accesses it. (And no, the schema won't help, because legacy.) These are rare, though, and we fix such issues as they are discovered; I'm just pointing out a benign case where crashing the whole server on panic may be the wrong approach. Our approach is to always recover from panics so the server keeps chugging, but to raise an alarm so someone is paged and fixes the issue right away.


> I believe the idea is that 'panic' is considered something fatal.

I think opinions in 3rd-party Go code on what “fatal” means vary, and that’s the issue. Sure, it was intended to mean “this error is so bad the entire program needs to die right now”, but in practice there are cases where it’s treated more like an unchecked exception in Java, i.e., “I can’t recover from this so _I’m_ going to give up, but _you_ can keep going.” Or put another way, what a library might consider fatal the caller doesn’t. It can be argued whether or not that’s the right thing to do, but the fact is it happens in the wild, so for something like a server you probably should trap it.


> but in practice there’s cases where it’s treated more like an unchecked exception in Java, i.e., “I can’t recover from this so _I’m_ going to give up but _you_ can keep going.”

Java has a supertype for unrecoverable problems like out-of-memory errors. It's called "Error", a subtype of Throwable. Exceptions are also Throwables, but they are NOT Errors.


There's nothing unrecoverable about Error exceptions. If one thread hits a bug and throws StackOverflowError, that's not a reason to kill the entire application or even the thread itself; just unwind the stack, show some error, and continue processing the next task. The only tricky situation I'm aware of is OutOfMemoryError, because it can affect other threads. I'd prefer to restart the server in that case, but again, that's just because typical code allocates on the heap all the time. One can write code carefully, without heap allocations, and that code will work just fine with OOM errors thrown around.


> It can be argued whether or not that’s the right thing to do

No, because both approaches could be right at a higher level, so making that decision in advance at this level is definitely wrong here?!


No, that's about error handling: panic/recover in the standard net/http server is used in lieu of exception handling, which is absent in Go. Instead, they (and the official Go docs as well) recommend using "if-hell".


Presumably you have a load balancer in front of your Go code that's written using saner languages and assumptions that does handle errors correctly.


If your program fails in testing, and no-one reads the logs, does it ever get fixed?

There is one advantage to not having a top-level catch: if it fails in testing, it's very obvious immediately.

Though I do agree with you - and this can be done with a feature flag for dev/prod servers.


> Is this another case of Google forgetting that people use Go outside of Google, or am I reading too much into this?

Setting aside whether or not they are forgetting, this is Google’s style guide for codebases inside Google. I don’t think non-Google Go programmers were a consideration for them.

On a broader note, it seems as though anytime Google publishes something people interpret it as “industry standard” (see their C++ style guide) and apply it to their non-Google projects. I personally don’t see this as healthy.


I've always thought that the ideal is somewhere in between the two.

1. Catch the panic/exception.

2. Track the rate of these panics or exceptions. If it is too high, some data structure has probably been corrupted or some lock has been poisoned. If a lot of requests are failing, abort. (See the sketch after this list.)

3. And ideally: signal that you are in a degraded state so that some external process can gracefully drain your traffic and restart you. Although very few people have this level of self-healing infrastructure set up.
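
A rough sketch of 1 and 2, assuming Go 1.19+ for atomic.Int64 (the threshold and the one-minute window are arbitrary):

    package main

    import (
        "log"
        "net/http"
        "sync/atomic"
        "time"
    )

    var recentPanics atomic.Int64

    // recoverWithBudget recovers individual requests, but gives up on the whole
    // process if panics arrive faster than the budget allows, on the theory that
    // shared state may by then be corrupted.
    func recoverWithBudget(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if p := recover(); p != nil {
                    if recentPanics.Add(1) > 100 {
                        log.Fatalf("too many panics in the last minute; last one: %v", p)
                    }
                    http.Error(w, "internal server error", http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        // Reset the counter every minute so the budget is a rate, not a lifetime total.
        go func() {
            for range time.Tick(time.Minute) {
                recentPanics.Store(0)
            }
        }()
        ok := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) })
        log.Fatal(http.ListenAndServe(":8080", recoverWithBudget(ok)))
    }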


I wrote a web server that handled a lot of requests, and my solution to 2. was to have it notify me via our alert chat channel every time it panicked. This was rare enough that I could investigate them individually, and if it got overwhelming (it never did) I could choose to do something fancier.


As I understand it, it's up to the user to create a panic-handling middleware, which is perfectly valid and what most people are doing anyway.

Something like: https://github.com/go-chi/chi/blob/master/middleware/recover...


As a sidenote to your sidenote: I've noticed that the term "syntax" seems to have two different meanings nowadays. There's the technical meaning, namely "the rules that govern how characters are parsed into abstract syntax tree nodes", which, in your example, would cover whether Rust should use `.` or `::` as a namespace separator, whether to use `[]` or `<>` for generics, that sort of thing. (Both of which have trade-offs in constraining how other parts of the language can be designed.)

But I think sometimes, people use "syntax" in a blanket "how the language looks" way — that is, whether it's symbol-heavy, whether it's word-based, whether it's information-light or information-dense, and so on. This makes it more a function of which features of expressivity the language chooses to expose, than the individual syntactic choices that determine which characters we use and for what. Again in Rust's case, it has attributes, it has namespace separators, it has the zero-tuple, it has generics, and it has lifetimes, all of which need some way to be expressed.

Don't get me wrong, you're allowed to not use a language if you don't like the way it looks visually. Or maybe it makes good use of a certain character that's hard to type on your particular keyboard layout. That's fine. I also can't decree that either of these uses of the term "syntax" is wrong. But when we're talking about language syntax, it's important to remember when you're talking about syntax, and when you're instead talking about language features. If you don't like the way lifetimes look, that's one thing; if you don't like the way lifetimes make you change the way you write code, that's another.

So I have to ask: of those four code snippets, how would you prefer to write them? What would you change? And can you get away with making those changes without breaking anything else?


I have no idea how/why Rust works/looks the way it does. Never used it beyond a hello world. I just know its syntax looks horrible to me. I can already imagine my pinky going sore typing all those characters.


Looks like the application requires macOS v12, which is only a year old. What features does it require that mean it can't be backwards-compatible? Is it a SwiftUI thing?


Honestly, dropping v11 for 2.0 was a really tough call. Adopting ExtensionKit, and getting that out the door right when v13 shipped was difficult, but that was the goal we set out for. Supporting v11 made a number of things more difficult, and SwiftUI was some of it. It was not strictly technically necessary, but it made it easier.


> Programming is a means to an end, and the cost of using Rust (hiring, increased development time) is often not worth it.

I agree with this. I learnt Rust before Go, and using Go makes me feel like The Oatmeal piracy guy[1]:

"I'm not sure if I should use Go to write this HTTP service. I'd lose immutability tracking, I'd lose compiler-enforced thread safety, I'd lose the powerful type system, I'd lose the comprehensive error handling, I'd suffer from a million little papercuts, I'd have to use the weird date formatting system, I'd have to check nil pointers, I'd...

...oh, it's seven days later and I've already accomplished more writing networking servers and clients in Go than I ever have in years with Rust."

This isn't to say the points raised about Go aren't true. They are true, and if a better language were available, I wouldn't stand my ground and argue their benefits, I'd switch to it. The last comment I happened to post on this website is about how Go is insufficient without its army of linting tools [2]! Yes, I'm incredibly happy to have learnt both Go and Rust as their combination has expanded my skillset and the range of programs I'm willing to write tremendously. But if someone said to me "you should just use Rust instead of Go for your production services", I'd think the "just" was doing some incredibly heavy lifting.

An article that I'd like to see is one comparing the two languages for this niche (networking servers and clients), contrasting not just the language pitfalls but the third-party libraries necessary, the iteration speed, and the choices you'll have to make up-front. My guess is that the languages would be judged more closely together.

[1]: https://theoatmeal.com/comics/game_of_thrones
[2]: https://news.ycombinator.com/item?id=30749921


I don't really use Go, I think mostly because I'm not in the target market, but this article's complaints (and some of these comments) actually got me thinking that I should take another look at it.

A long time ago, in a Haskell community chat, I saw someone dismiss Go with a pithy comment along the lines of, "Go isn't a programming language, it's a DSL for writing network services." I think I may need to re-assess that comment as actually being a really compelling elevator pitch for the language.

Armed with that perspective, I'm seeing why I wasn't terribly convinced by the article's specific complaints about Go. "Traditional IPC is a PITA and forces you toward talking over a socket? Well, yes, exactly. That's kind of the whole point."

Sometimes I wonder if we are all suffering unnecessarily because of our incessant demanding that all languages try to be all things to all people.


> Sometimes I wonder if we are all suffering unnecessarily because of our incessant demanding that all languages try to be all things to all people.

Programmers aren't. Programmers are building stuff and talking about that. Opinion bloggers are suffering for clicks.


I was struck by ThePrimeagen[0] saying that it took him 5x longer to write a game server in Rust than in Go—despite having significantly more Rust experience. They performed about the same (I think Go actually did better due to how much easier it was to get concurrency working?).

Personally I lean towards strict compilers (I suppose years of JavaScript has traumatized me), but 5x dev time is a big tradeoff! Of course this is just one data point, but it did seem worth mentioning.

[0] - https://www.youtube.com/watch?v=Z0GX2mTUtfo


My experience writing a small side project in both (on the order of 2-3 KLOC) was that the time to a working project is significantly shorter with Go. The time to a correctly working project was about the same between the two.

Golang gets out of the way in me doing what I want.

Rust actively resists me doing things I will later regret.

Also, unexpectedly, I have gotten some positive comments on my C coding style after I coded some Rust.

All of this is completely anecdotal and personal experience, of course.


99% of people don't need "correctly" working projects.

Trillion dollar companies run on "incorrect" software.


Then, it's time to move on.

It is impossible to run "correct" software, but we can do better, maybe?


The problem with:

"I'm not sure if I should use Go to write this HTTP service. I'd lose immutability tracking, I'd lose compiler-enforced thread safety, I'd lose the powerful type system, I'd lose the comprehensive error handling, I'd suffer from a million little papercuts, I'd have to use the weird date formatting system, I'd have to check nil pointers, I'd...

None of the modern languages do this besides Rust, so is every language then bad?


I think the quote marks were there to indicate that this was walking through a hypothetical thought process that someone might go through. (It proceeds on to the next line after that, where you see the closing quote mark.) The gist of the whole thing was basically to say, "Don't make the perfect the enemy of the good."


Sorry, I don't get what you mean. Could you elaborate or re-phrase?


None of the modern and popular languages have immutability, compiler-enforced thread safety, or a powerful type system (debatable), and the same goes for nil...

Java / C# / Python / Ruby, so they fall in the same bucket as Go, I assume?


Java has immutability and more powerful types than Go. Also, considering how easy it is to intermix JVM languages, you could add in Scala or Kotlin for truly powerful type systems without null.


I've grown a little disgruntled by the hype surrounding Scala's and Kotlin's null handling.

For starters "without null" is a myth. They both have null. They have to; there is no other practical option. The JDK uses null all over the place, so you need to have null in order to talk to the JDK.

Now, they do still have mechanism to make null easier to handle. And they're both pretty impressive designs. (Especially Kotlin's, though I haven't tried Scala 3 yet so maybe I'm missing something wonderful there. Aesthetically, I just prefer "we're going to openly acknowledge it and tame it as much as we can" over "we're going to try to sweep it under the carpet.") But there's a sort of Amdahl's Law analogue hiding in that situation: the upper bound on how much practical null safety you can achieve is constrained by how much you can avoid relying on modules that were written in Java. And I know that's basically true of languages like Haskell, too, because you often have to rely on at least a little bit of code that was written in languages like C. But it doesn't preoccupy me the same way it does in Scala or Kotlin, where interacting directly with Java code is a much more everyday kind of affair.


Kotlin is quite explicit about nulls.

Kotlin code requires you to use Type? if the value can ever be null.

Java code that is annotated with @Nullable/@NonNull will automatically map to Type?/Type (and it is of course up to the developer not to fuck up their nullability promises).

Java code that is not annotated is a Type!, and encourages you to be wary about what could happen.

The nullability story is miles better than Java.


If you enable nullness checks, Java is the same.


No such thing as nullness checks in Java. At best, your IDE is taking the annotations into account and giving you warnings. At worst, you have to run ErrorProne and have it yell at you.

Additionally, all Java code (including the code you wrote) is only optionally null checked. All Kotlin code _is_ null checked, no matter what.


ErrorProne is part of the compiler suite.


??? Absolutely not. ErrorProne is a Google project that is not integrated into javac, and merely acts as a plugin to it. To have ErrorProne running on your project, you need to:

- Know about it (first, big problem for many java shops)

- Integrate it with Gradle/Maven/Ant/yourbuildtool

- Enable the null checks because they are not enabled by default.

Compare this with "it's already in the compiler"


Had the same experience with Scala. Oh, Option types? Cool! Wait, why am I getting NPEs??? Oh, we're using some java library, nulls galore!


I've come to the conclusion that, for the most part, option types make no sense in object-oriented languages. There are exceptions, but they tend to fall into "proves the rule" territory. OCaml, for example.

Not just because of the null problem. It's also that option types push you toward a "conditional logic everywhere" way of doing things, because that's how you handle the options. That's all well and good and holy in a functional language, and perhaps even a procedural one. But it's the opposite of good object-oriented design.

To quote Dr. Mark Crislip, when you serve cow pie with apple pie, it does not make the cow pie better. It just makes the apple pie worse.


How does an OO language handle a case like "Do you want fries with that?" Without conditional logic?

BurgerWithFriesMeal subclasses BurgerMeal?

"Object oriented design", in the religious sense, is an obsolete 1980s fad that took a good idea (encapsulation of mutable state) to comical extremes.


You're sort of telling on yourself with that "in a religious sense" jab. ;)

The original idea of OO design was to strive to eliminate stateful idioms. Which is a slightly different idea than what we're used to. The state existed, but the point was that you were supposed to design your system so that objects didn't need to know - or even attempt to infer - information about other objects' state.

The whole intellectual lineage that includes pervasive use of explicit state querying and manipulation methods such as getters and setters could be characterized as a whole lot of procedural programmers collectively missing the point. It's right up there with when people over-use the State monad in Haskell, effectively doing their darnedest to Greenspun imperative programming on top of a lazy functional language because they haven't quite internalized this new paradigm yet.


> it's the opposite of good object-oriented design.

So what is good OO design? NPEs? Nil checks?


I have yet to encounter an OO-first language for which Optional provides any real difference from nil checks. Often, it actually makes it all worse. Layering optional types on top of a system that allows any value to be nil tends to just produce a situation where there are more conditions you have to consider if you want to code defensively. As many as four:

  - nil
  - None
  - Some(nil)
  - Some(non-nil)
As far as how to do good OO design, ideally you try to avoid explicit branching whenever possible, and instead use dynamic dispatch to decide what to do. In principle, if you've architected things well, you should generally be able to avoid conditional branching.

An ironically useful example of how this works is Smalltalk's implementation of Booleans and conditionals. Smalltalk doesn't actually have an if statement. Instead, it has a Boolean class that defines three methods: ifTrue:, ifFalse:, and ifTrue:ifFalse:. Each takes one or two blocks, which are effectively anonymous functions.

And then the implementation is that the True subclass has an ifTrue: that executes the block, an ifFalse: that doesn't, and an ifTrue:ifFalse: that executes its first argument. And the False implementation does the opposite.

This isn't meant to be an example of "look, OOP doesn't need conditional branching, just use {library implementation of conditional branching}," so much as a small, self-contained example of the kinds of ways that you can achieve conditional-style logic without explicit branch statements. A more real-world example might be something like having separate NotLoggedInUser and LoggedInUser classes that behave differently in relevant situations, rather than having an isLoggedIn field to have to keep checking at every use site.
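
Outside Smalltalk, the same idea might look like this (a Go sketch with invented types, just to show the dispatch doing the branching):

    package main

    import "fmt"

    // Instead of one User struct with an isLoggedIn flag checked at every call
    // site, each state is its own type and dynamic dispatch does the branching.
    type User interface {
        Greeting() string
    }

    type LoggedInUser struct{ Name string }
    type NotLoggedInUser struct{}

    func (u LoggedInUser) Greeting() string    { return "Welcome back, " + u.Name }
    func (u NotLoggedInUser) Greeting() string { return "Please log in" }

    func main() {
        users := []User{LoggedInUser{Name: "Ada"}, NotLoggedInUser{}}
        for _, u := range users {
            fmt.Println(u.Greeting()) // no "if isLoggedIn" anywhere
        }
    }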

The big thing working against us on this is that most of the popular OO languages - C++, Java, Python, C#, etc - come from the same approach of trying to layer object-oriented features on top of a procedural core. In my not-so-humble opinion, this has been about as successful as more recent efforts to support functional programming by pulling a mess of functional features into existing OO languages. Technically it gets you somewhere, but the resulting language is not an ergonomic pit of success.


Probably a better example is #at:ifAbsent:, which addresses a place where I've seen all kinds of faffing about with default return values in languages without closures/blocks.


I was pretty skeptical of this and yes I'd rather not have null at all. In practice though I've found the boundary with Java for my scala projects to be very small. This is definitely a function of what you're building but there are a lot of great scala libraries so we rarely need to reach for java.


Yeah, that's absolutely fair, a greenfield Scala project can avoid a lot of Java nowadays.

But, at the other end of things, teams that were already using Java and want to start incorporating Scala don't have that option. And 10 year old Scala projects didn't originally have that option, and doing something about it now may be a lift on the scale of a complete rewrite.


Yeah good point. I would not enjoy adding Scala throughout an established Java project unless I could really compartmentalize it


Java has a library for compile-time static nullness checks via annotations, so if you use that, null only comes from legacy libraries, not new code.


Does Javascript not exist?


When talking about language design, it's the first one to be kicked out.


Check out Elixir; I think it's better than Go for a lot of the use cases Go handles.


Oh man. The fake ads on that first link had me rolling around. Have an upvote!


Haha, your comment on the weird date formatting rings very true.


Like many things with Go, its approach seems reasonable and simple at first, but allows you to accidentally write code that looks right but is very, very wrong. For example, what do you think this code will do?

    delaySecs := 1 * time.Second
    time.Sleep(delaySecs * time.Second)
Now I insist on using the durationcheck linter to guard against this (https://github.com/charithe/durationcheck). It found a flaw in some exponential-backoff code I had refactored but couldn’t easily test fully: the code looked right but was wrong. After that, I don’t think Go’s approach is reasonable anymore.
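
(Spoiler, in case you don't want to trace it through: delaySecs is already a time.Duration of 1e9 nanoseconds, so multiplying it by time.Second again gives a sleep of roughly 31.7 years rather than one second. The intended version applies the unit only once:)

    delay := 1 * time.Second // already a time.Duration
    time.Sleep(delay)        // not delay * time.Second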


Perhaps the function shouldn't accept the unit of sec². Not least because I have no idea what a delay in that unit could signify.


It doesn't actually use units. Everything is in nanoseconds, so time.Second is just another unitless number.

  const (
   Nanosecond  Duration = 1
   Microsecond          = 1000 * Nanosecond
   Millisecond          = 1000 * Microsecond
   Second               = 1000 * Millisecond
   Minute               = 60 * Second
   Hour                 = 60 * Minute
  )


Note that the wonderful Go type system interprets time.Second * time.Second as 277777h46m40s with the type time.Second (not sec^2)


  time.Second * time.Second
The type of this is `time.Duration` (or int64 internally), not `time.Second` (which is a const with a value).

I agree, though, that this is not quite sound, because it can be misused, as shown above with `time.Sleep(delaySecs * time.Second)`.

In Kotlin you can do `1.seconds + 1.minutes` but not `1.seconds * 1.minutes` (compilation error), which I quite like. Here is a playground link: https://pl.kotl.in/YZLu97AY8


Certainly, but for that the type system should be rich enough to support unit designators.

I know how to implement that in Haskell, and that it can be implemented in C++ and Rust. I know how to logically implement that in Java or Typescript, but usability will suck (no infix operators).
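
For what it's worth, you can approximate this in Go too, with exactly that usability caveat (no infix operators); a toy sketch with invented types:

    package main

    import "fmt"

    // Distinct types for distinct units; the compiler then refuses to mix them.
    type Seconds float64
    type SecondsSquared float64

    func (a Seconds) Plus(b Seconds) Seconds { return a + b }
    func (a Seconds) Times(b Seconds) SecondsSquared {
        return SecondsSquared(float64(a) * float64(b))
    }

    func main() {
        x, y := Seconds(2), Seconds(3)
        fmt.Println(x.Plus(y))  // 5 (Seconds)
        fmt.Println(x.Times(y)) // 6 (SecondsSquared)
        // x.Plus(x.Times(y))   // would not compile: mismatched unit types
    }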


Go tends to cover such things by incorporating them directly in the language. But then it tends to not cover them at all because it would "overcomplicate" the language...

For a good example of what it looks like when somebody does bother to do it, see F# units of measure.


This looks to me like the semantics are good but the implementation details are broken. 1 * time.Second * time.Second semantically reads to me as 1 second. If time.Second is some numeric value, that’s obviously wrong everywhere unless the type system reflects and enforces the unit conversion.


> 1 * time.Second * time.Second semantically reads to me as 1 second.

Which is wrong, 1s * 1s = 1s².

For example, the acceleration due to gravity is expressed in m/s² (m/s per s, i.e. a change of velocity per unit of time, where velocity is a change of distance per unit of time).


Okay so do I need to consult Relativity to program 1sec + 2min?


Since 1min could be 61 seconds[1], yes?

But assuming your comment is not a joke: you probably want to convert the minutes to seconds so that you're working in the same units, then add the scalar parts together.

That's how you deal with different quantities: convert to same unit, add values.

This is analog to fractions: 1/2 + 1/4 = 2/4 + 1/4 = (2+1)/4 = 3/4.

[1] https://en.wikipedia.org/wiki/Leap_second


In basic middle school math it’s common to multiply different units as a basic conversion mechanism. Multiplying by the same unit is semantically equivalent to “x times 1 is identity(x)”, and other cross-unit arithmetic implies conversion to ensure like units before processing. A typed unit numeric system would imply that to me. It would not imply I’m multiplying the units, but rather the scalar value of the unit.


> In basic middle school math it’s common to multiply different units as a basic conversion mechanism

EDIT: Yes, you multiply the units in `2m * 2s`: you first multiply the units to get `m.s`. This is what I'm saying: you convert everything to the same units before doing the calculations.

> Multiplying by the same unit is semantically equivalent to “x times 1 is identity(x)”

This is wrong.

1kg * 1kg = 1kg² period.

What you're saying is `2kg * 1 = 2kg`, which is right, because `1` is a scalar while `2kg` is a quantity. This is completely different than multiplying 2 quantities.

> It would not imply I’m multiplying the units, but rather the scalar value of the unit.

That's where you're wrong. When doing arithmetic on quantities, you have 2 equations:

  x = 2kg * 4s
  unit(x) = kg * s = kg.s
  scalar(x) = 2 * 4 = 8
  x = 8 kg.s
Or

  x = 5m / 2s
  x = (5/2) m/s
  x = 2.5 m/s
There is a meaning to units and the operation you do with them. `5m / 2s` is 5 meters in 2 seconds, which is the speed `2.5 m/s`.

`2m + 1s` has no meaning, therefore you can't do anything with the scalar values, and the result remains `2m + 1s`, not `3 (m+s)`.


All unit conversions are actually multiplications by the dimensionless constant 1, i.e., no-ops.

Let's say that you want to convert `2 min` into seconds. You know that `1 min = 60 s` is true. Dividing this equation by `1 min` on both sides is allowed and brings `1 = (60 s) / (1 min)`. This shows that if we multiply any value in minutes by `(60 s) / (1 min)`, we are not actually changing the value, because this is equivalent to multiplying it by 1. Therefore, `2 min = 2 min * 1 = 2 min * (60 s) / (1 min) = 2 * 60 s * (1 min) / (1 min) = 120 s`. We didn't change the value because we multiplied it by 1, and we didn't change its dimensionality ("type") because we multiplied it by a dimensionless number. We just moved around a dimensionless factor of 60, from the unit to the numerical value.

I think that you misremember, or didn't realize that to convert minutes into seconds, you were not multiplying by `60 s` but by `(60 s) / (1 min)` which is nothing else than 1.


What is the "scalar value of the unit"?

Units can be expressed in terms of other units, and you can arbitrarily pick one unit as a base and then express the rest in it. But the key word here is "arbitrarily".

If multiplying by the same unit yields the same unit, then how did you compute area or volume in school?


Wait, would you really expect 1m * 1m to be anything other than 1m²? When does it ever happen that you want to multiply two non-unitless[1] measurements and not multiply the units???

[1] would that be unitful?


I expect 1 * m = 1m, and 1 * m * m = 1m because applying a unit doesn’t inherently have a value of that unit associated with it. (1 m) (1 m) obviously equals 1m^2, but ((1 m) m) is not the same expression.


Since when is `((1 m) m)` a valid mathematical expression?

You cannot have a unit on its own without a scalar value. It makes no sense.


If you look upthread, there was a mention of F# unit types. Taking off my programmer hat and returning to my middle school anecdote which also evidently made no sense: expression of a unit without a value is (or should be to my mind, based on my education) a cast, not a computation of N+1 values.

- 1 is unitless

- 1 * m casts the value to a value 1 of unit m = 1m

- 1 * m * m casts the value 1 * m = 1m to 1m then casts 1m to m which = 1m

Admittedly my educational background here might be wildly unconventional but it certainly prepared me for interoperable unit types as a concept without changing values (~precision considerations).


> If you look upthread, there was a mention of F# unit types.

And the syntax is `3<unit>` not `3 * unit`

- 1 is a scalar
- 1m is a quantity
- 2 * 1m "casts" 2 to a meter, but really this is just multiplying a quantity by a scalar
- 2 * 1m * 1m "casts" 2 to meter², multiplying 2 quantities then by a scalar

I insist, `1 * m` does not make sense. This is not a valid mathematical expression, because a unit can never be on its own without a value.

> expression of a unit without a value is (or should be to my mind, based on my education) a cast

There is no casting in math, mainly because there are no types, only objects with operations. A vector is not a scalar and you can't cast it into a scalar.

A quantity is not a scalar either, and you can't cast one into another.

A quantity is an object: you can multiply 2 quantities together, but you can't add them if their units are different. You can multiply a quantity by a scalar, but you still can't add a scalar to a quantity.


> And the syntax is `3<unit>` not `3 * unit`

Well, yeah, F# represents this at the type level. Which I’ve said elsewhere in the discussion is preferable. Not knowing Go, but knowing it only recently gained generics, I read multiplying by `time.Seconds` (which does not have a visible 1 associated with it) as perhaps performing an operator-overloaded type cast to a value/type with the Seconds unit assigned to it. I’ve since learned that Go also does not support operator overloading, so I now know that wouldn’t be the case. But had that been the case, it isn’t inconceivable that unitlessValue * valuelessUnit * valuelessUnit = unitlessValue * valuelessUnit. Because…

> I insist, `1 * m` does not make sense. This is not a valid mathematical expression, because a unit can never be on its own without a value.

Well, if you insist! But you seem to be imposing “mathematical expression” on an expression space where that’s already not the case? Whatever you may think of operator overloading, it is a thing that exists and it is a thing that “makes sense” to people using it idiomatically.

Even in languages without overloading, expressions which look like maths don’t necessarily have a corresponding mathematical representation. An equals infix operator in maths is a statement, establishing an immutable fact. Some languages like Erlang honor this; many (most? I strongly suspect most) don’t! I couldn’t guess, without researching it, how many also treat infix = statements as an expression which evaluates to a value.

The syntax of infix operators is generally inspired by mathematical notation, but it’s hardly beholden to that. The syntax of programming languages generally is not beholden to mathematical notation. Reacting as if it’s impossibly absurd that someone might read 1 * time.Seconds * time.Seconds as anything other than 1 * 1s * 1s is just snobbery.

Not knowing Go, I focused on the syntax and the explicit values, and tried to build a syntax tree on top of it. I’m not a fan of infix operators, and I am a fan of lisps, so my mental syntax model was (* (* 1 time.Seconds) time.Seconds), which still doesn’t “make sense” mathematically, but it can make sense if `*` is a polymorphic function which accepts unquantified units.


> Not knowing Go, but knowing it only recently gained generics, I read multiplying by `time.Seconds` (which does not have a visible 1 associated with it) as perhaps performing an operator-overloaded type cast to a value/type with the Seconds unit assigned to it.

This sums up your incomprehension. `time.Seconds` is just a constant: an integer with the value `1_000_000_000`, meaning one billion nanoseconds.

In an expression of the form `a * b` you should always read `a` and `b` as constants. This is true for EVERY programming language.

> it isn’t inconceivable that unitlessValue * valuelessUnit * valuelessUnit = unitlessValue * valuelessUnit.

It is. For example, what would be the meaning of this:

  struct Foo {
    // ...
  }

  2 * Foo
Valueless unit (or any type) is just not a thing, not in math, not in any programming language.

> But you seem to be imposing “mathematical expression” on an expression space where that’s already not the case? Whatever you may think of operator overloading, it is a thing that exists and it is a thing that “makes sense” to people using it idiomatically.

Operator overloading works on typed values, not "valueless" types. In some programming languages (like Python), class are values too, but why implement `a * MyClass` when you can write `MyClass(a)` which is 100% clearer on the intent?

Using operator overloading for types to implement casting is just black magic.

> expressions which look like maths don’t necessarily have a corresponding mathematical representation

Programming languages, and the whole field of Computer Science, are a branch of mathematics. They are not a natural language like English or German. They are an extension of maths.

> An equals infix operator in maths is a statement, establishing an immutable fact.

An operator only has meaning within the theory you use it.

For example:

  `Matrix_A * Matrix_B` is not the same `*` as `Number_A * Number_B`
  `1 + 2` is not the same `+` as `1 + 2 + 3 + ...`
  `a = 3` in a math theorem is not the same `=` as `a = 3` in a programming language (and that depends on the programming language)
As long as the theory defines the operators and the rules on how to use them, it does not matter which symbol you use. I can write a language where you have `<-` instead of `=`, and the mathematical rules (precedence, associativity, commutativity, ...) will be the same.

> Reacting as if it’s impossibly absurd that someone might read 1 * time.Seconds * time.Seconds as anything other than 1 * 1s * 1s is just snobbery.

First, that’s not what I said. You should read that as `scalar * constant * constant`, because reading it as `scalar * unit * unit` does not make sense in math, nor in any programming language.

If caring about readability and consistency is snobbery, then so be it.

> Not knowing Go, I focused on the syntax and the explicit values, and tried to build a syntax tree on top of it.

And the syntax is pretty explicit, because it’s the same as in math or any programming language: `scalar * constant * constant`. This is why using math as a point of reference is useful: you can easily make sense of what you’re reading, no matter the syntax.

> I am a fan of lisps, so my mental syntax model was (* (* 1 time.Seconds) time.Seconds)

I still read this as `(* (* scalar constant) constant)`. And I expect your compiler/interpreter to throw an error if `time.Seconds` is anything without a clear value, so that the expression can be evaluated properly.

And I would expect to read `(* (* 1 (seconds 1) (seconds 1)))` as `scalar * quantity * quantity`, and I would expect to get square seconds as an output.

Anything else would not be correct and have little to no use.


You can just do `1 * time.Second + 2 * time.Minute` to do that. Adding times works intuitively. It's multiplying durations that gives you accelerations.


Here's a reply to a comment from a few weeks ago, when someone asked the same thing: https://news.ycombinator.com/item?id=30024805


Serious question: how do you square your third paragraph with your first? That is, if you're using Obsidian's features like [[square bracket link syntax]], or #tags, or inline images, aren't you effectively locked in to editors that support the same set of features?


Not really -- you're only locked in to the extent that Obsidian makes those things easy.

If Obsidian went away, I'd still have a bunch of text files that I understood, and I'd know that I could find things tagged with #foo by using grep, that the [[links]] just mean to open links.md, that an image should be opened in a browser, etc.


Additionally, it’s pretty easy to extend most Markdown parsers with extended syntax. The worst thing that happens is you have to fork a parser to add the extended syntax…which isn’t so bad.


It’s still all plain text.

Best practice for tags is to just include them in the plain text file, and for inline images in Obsidian it’s like Derek said — just keep them in your folder structure alongside your text.


Square bracket syntax is fairly widely used in wikis. While the user experience varies if you step away from Obsidian, it's still readable. You can go and convert the links to []() either inside Obsidian with a plugin or later with a script.

You'll be able to get access to it in 20 years, even if not in a shiny UX friendly way. Which is more than can be said for some of the really old notes I took in proprietary pieces of software.
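
For what it's worth, the "later with a script" route really is tiny; a rough sketch in Go (the [[link]] to [link](link.md) mapping is just one assumption about how you'd want them converted):

    package main

    import (
        "fmt"
        "regexp"
    )

    // wikiLink matches Obsidian-style [[Some Note]] links.
    var wikiLink = regexp.MustCompile(`\[\[([^\]]+)\]\]`)

    func main() {
        note := "See [[Daily Log]] and [[Ideas]] for details."
        converted := wikiLink.ReplaceAllString(note, "[$1]($1.md)")
        fmt.Println(converted)
        // See [Daily Log](Daily Log.md) and [Ideas](Ideas.md) for details.
    }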

