- If this is your first time hearing about the readline/emacs keybindings like Ctrl-E and Ctrl-W, you'll be pleased to know that most text inputs on macOS support these keybindings. If you're on macOS, feel free to try Ctrl-E, Ctrl-W, or Ctrl-U in your browser's address bar right now
- If you're using a command line program that doesn't support _any_ line editing (e.g. no readline keybindings, and no other keybindings), you can install the `rlwrap` program and launch the REPL under rlwrap. For example, Standard ML of New Jersey's REPL has no line editing functionality at all, but you can recover it via `rlwrap smlnj`
- "don’t use more than 16 colours" — I would go so far as to say "don't use more than 8 colors, or at least make your colors configurable." Many popular color schemes, including Solaraized and the default Base 16 color scheme, use the "bright" colors to hold various shades of gray. What you think is "bright green" might actually be the same shade of gray that normal text is colored.
> If you're on macOS, feel free to try Ctrl-E, Ctrl-W, or Ctrl-U in your browser's address bar right now
Most browsers I've used close the current tab when you press Ctrl-W. Actually, the terminal emulator I use, Alacritty, also does this, and most file explorers that have tabs do too. IIRC even Windows Explorer does this now, but it's been a while since I've actually used Windows.
>Actually, the terminal emulator I use, Alacritty, also does this
This can't be right. I use ctrl+w all the time, and occasionally use Alacritty. I'd notice if this shortcut closed my terminal window (it's extremely annoying when I use a web-based ssh, because I have this shortcut deep in my muscle memory).
Also: the specific choice of the year 1971 (as opposed to say "the 70s" or "the late 60s") is usually meant to call attention to the fact that in 1971 the US abandoned the gold standard for the US dollar.
Notably though, KeePassium from the App Store is licensed differently than the version on GitHub. Only the KeePassium team can ever actually submit it to the App Store, since GPL software is effectively banned there, and they do not accept contributions so that they retain the ability to submit under a proprietary license.
My reading of the License section[1] of the KeePassium README and this Stack Exchange post[2] is that the author of KeePassium wishes to license KeePassium under GPLv3. Accepting applications licensed under GPLv3 would require that Apple provide certain forms of source code alongside App Store downloads, which they are unwilling to do. As such, the App Store terms of service include terms stating that you give Apple the right not to do that, which is something only the copyright holder(s) of a work can grant. The simplest way to have clarity over who holds the copyright is to have a single author. So long as the KeePassium author is willing to grant Apple the permission implicit in submitting to the App Store, that’s fine. It just means that all other uses of KeePassium must follow the GPLv3 license.
I am not a lawyer, nor really even well-versed in IP law, and you should not take this as legal advice.
Yes, you got it right. The source code is published under the GPL, but the App Store ToS impose additional restrictions that are incompatible with the GPL. So we have to dual-license the project, and only the copyright owner can do that. In order to maintain that role, we can only accept contributions with a CLA (two pages of legalese that transfer the copyright). This is obviously a deterrent for contributions: over the past 5 years, I believe only 3 people have signed it :)
The one electric Caltrain I rode recently played an ear-shattering tone inside the train when the doors were about to open and close. With any luck and enough complaints, it should be easy enough to dial that back or drop it entirely. I don’t remember the old trains ever playing a noise that loud when the doors open and close.
I think the new Stadler FLIRT trains over here also had a horribly loud "factory setting" door-closing alert noise that was toned down after a while. Maybe it's easier to ship it that way so the manufacturer can't get sued along the lines of "there was no warning the door was about to close!"
The old trains have the PA volume at a completely random level from one car to the next, often extremely loud. The staff are happy with it just how it is (no intent to adjust it, no intent to report it to maintenance). You travel on Caltrain or BART with earplugs, necessary also because some cars are extra noisy and some cars have no sound insulation left.
The point of the interview is not to answer "Can this candidate find and fix this bug?" but rather "what is the candidate's approach to fixing unknown problems?"
A good performance looks like making a hypothesis for where the bug is, testing that hypothesis, and repeatedly narrowing in closer and closer. Finding and fixing the bug is irrelevant!
A bad performance might look like
- running out of hypotheses for what might be causing the bug
- making hypotheses, but never testing them
- failing to interpret the result of their test of the hypothesis (thinking it confirms one thing when it doesn't actually confirm that, or when it confirms the opposite)
Sorry, but do you have any data that shows 'great' candidates run tests and use your approach of finding and fixing bugs? Read the book 'Coders at Work', which describes some of the best developers of all time; most of them use print statements to find bugs and debug. Btw, I've solved bugs in 2-5 minutes that some developers spent hours or days working on (to their amazement). I've done this countless times without using code or any debugging/testing tools; all it took was reading the code and figuring out in my head what was going on and which data elements could introduce problems. So I suppose I belong on your list as well? Sorry, I just want to see the thought process here. To me it makes zero sense, and all the HN commenters commending the parent article also make me scratch my head, since the suggested process makes many, many assumptions without any data to back them up.
When I export Markdown, the image reference comes out like `[image-1][base64-reference]`, but it should be `[image-1](base64-reference)`: the second pair of brackets should be parentheses, not square brackets.
The biggest feature I want out of Starlark the language (not Bazel the build system) is optional type annotations. They don’t have to do anything, they don’t even have to be specified; it just needs to be syntactically valid to put type annotations somewhere. If there were syntax for type annotations that tools like Bazel ignored, it would be possible for some enterprising soul who’s forced to use Bazel at work to throw up an initial prototype of a type checker and a language server for Starlark. The language is not so complicated that a type checker couldn’t be written by a hobbyist.
My biggest frustration when using Bazel is not even Bazel; it’s the fact that when I’m looking at code like this[1], everything comes from a single `rctx` variable that has no type annotation. So when I’m trying to read the code, it’s a matter of constantly grepping the repo, the Bazel docs, and the Bazel source code to get any new code written.
Why can’t I just hover and see the docs? Why can’t I type `rctx.` and see all the attributes available?
I frequently hear “why do you need types? the language is Turing-complete, just run it and see if it fails”, but that misses the point: I want to use types to guide what code to write in the first place.
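To make the wish concrete, here is roughly what I have in mind, borrowing Python's annotation syntax. This is hypothetical: Bazel rejects this today, the evaluator would be free to ignore the annotations entirely, and the rule is made up, with the `repository_ctx` methods written from memory:

```
# Hypothetical annotated Starlark; only external tools (a type checker,
# a language server) would read the annotations.
def _my_repo_impl(rctx: repository_ctx) -> None:
    tool: path = rctx.path(rctx.attr._tool)
    result: exec_result = rctx.execute([tool, "--version"])
    if result.return_code != 0:
        fail("tool failed: " + result.stderr)
```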
This is something that came up multiple times. I think type information is especially valuable in combination with other IDE features (code completion).
My idea was to do type checking in a separate tool (e.g. built on top of Buildifier) and let the interpreter ignore the types. So it could be completely optional. The type system could be gradual, like in TypeScript.
I don't know when/if it will happen (I'm no longer working at Google, so it's harder to make large contributions like this).
Having done some nontrivial Bazel/Starlark hacking, I completely agree that lightweight static types would be a good usability improvement. But I want to point out that Starlark is actually not Turing complete, which is imo one of its more interesting characteristics. Recursion is forbidden (https://github.com/bazelbuild/starlark/blob/master/spec.md#f...) and loops must be structural; there is no while loop or other unbounded iteration construct. Starlark is one of the more capable and mainstream non-Turing-complete languages out there, and it doesn't resemble the other common ones, which mostly show up in theorem provers. On the one hand I think the logic in a build system that needs to reason about incremental builds absolutely should be guaranteed to terminate, but in some particularly painful situations I've had to resort to bounded iteration driven by smart-contract-style "gas" parameters (sketched below).
EDIT: as a parting shot, Dhall is another non-Turing-complete language in common usage, but its claim to fame is that it gets used in places that arguably shouldn't be doing any computation at all.
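For anyone who hasn't hit this, here is a minimal sketch of the "gas" pattern I mean, in Starlark-compatible syntax. The transitive-deps computation is made up; the point is that a `for` loop over a gas budget stands in for the `while` loop the language doesn't have:

```
# Starlark has no `while`, so iterate over a fixed "gas" budget and stop
# early once the computation converges.
def transitive_deps(direct_deps, dep_map, gas = 100):
    result = list(direct_deps)
    for _ in range(gas):  # bounded stand-in for `while changed`
        added = False
        for d in list(result):  # iterate a copy; Starlark forbids mutating
            for dd in dep_map.get(d, []):  # the list you are looping over
                if dd not in result:
                    result.append(dd)
                    added = True
        if not added:
            break
    return result
```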
A type system is a very good thing, but it directly contradicts one of the goals of this language: simplicity. As someone who has written type inference code for plain Hindley-Milner type systems and for systems with subtyping, I can tell you that type inference in the presence of subtyping is very much an open research problem.
Inferred types, understandable type errors, an expressive language: choose two out of three. If you sacrifice good type inference and require users to write type annotations themselves, they will not like it, especially considering that users' mental model of this language comes from Python. If you sacrifice understandable type errors, users will reject the whole system as soon as they encounter one. If you sacrifice expressiveness, you are taking away features users want, like subtyping.
If you want something that's ignored by Bazel and only used by some other tools you already have it: docstrings. You can enforce docstrings to be of a certain format and contain type annotations. Be the solution you want to see: be that hobbyist that writes a type checker and language server.
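For example, roughly in the Stardoc/Buildifier docstring style, with the types spelled out in the Args and Returns sections (a sketch; the helper itself is made up):

```
def _join_labels(labels, separator = ","):
    """Joins a list of label strings into one string.

    Args:
        labels: list of string. The labels to join.
        separator: string. Inserted between adjacent labels.

    Returns:
        string. The joined labels, or "" if the list is empty.
    """
    return separator.join(labels)
```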
I don’t get this argument, as it seems at odds with the existence proof of Python itself, which has optional typing, type inference, understandable type errors, and an expressive language. What am I missing?
Here's an elementary litmus test that anyone slightly experienced with type systems will know.
How do you type the omega combinator in Python?
`lambda x: x(x)`
What about the application of the omega combinator to itself?
`(lambda x: x(x))(lambda x: x(x))`
How do you type the Y combinator in Python?
`lambda f: (lambda x: f(x(x)))(lambda x: f(x(x)))`
Depending on the choice of type system, you can give one of two answers: this cannot be assigned a type, in which case your type system is too weak to express many real-world programs; or it does have a type, but then your type system is so sophisticated that people using this language (just for writing BUILD rules) won't be able to understand its type errors. I have yet to find a middle ground.
In case you claim this is impractical functional programming, bear in mind you can write the same thing using classes in object-oriented programming.
---
Okay let's not even talk about these weird-looking "combinators" even though they have a rich history. Consider this function:
`lambda x: {'this': x, 'next': x + 1}`
What is its type? Again, the answer depends on plenty of choices that need to be made by the type system designer. Do you force dictionaries to have a single type for their values? If so, many users used to Python will reject your system for being too inflexible. If not, will you now introduce row polymorphism in your type system? Will you introduce depth subtyping and width subtyping? (For example, if a function only needs the field 'x' in a dictionary, a caller passing a dictionary with both fields 'x' and 'y' should not get an error; that's why you need subtyping.)
Now consider the lowly plus operator. In real Python the plus operator works on integers, floats, strings, sets, etc. Let's say your type inference algorithm uses the RHS to conclude that x must be an integer. But that's wrong; it could still be a float. Can you now write a type for that expression? Hint: it involves record types, intersection types, type variables and other things that would not be comprehensible.
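For concreteness, here is how two of those dictionary choices look when written in Python's existing annotation syntax, pinning `x` to `int` (which, per the plus-operator point, is itself a choice the checker could get wrong):

```
from typing import TypedDict

# Choice 1: dictionaries are homogeneous maps, so both values share one type.
def pair_homogeneous(x: int) -> dict[str, int]:
    return {"this": x, "next": x + 1}

# Choice 2: the checker tracks the type of each key separately
# (per-key record types, in the direction of width/depth subtyping).
class Pair(TypedDict):
    this: int
    next: int

def pair_record(x: int) -> Pair:
    return {"this": x, "next": x + 1}
```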
This seems like a very weird technical explanation that doesn’t actually address my question. Python typing works well in practice and doesn’t suffer from any of the problems you listed.
Also, from a brief investigation, according to [1] Haskell isn’t able to express the omega combinator in its type system either (and indeed the answer says very few type systems are able to do so). This suggests it is a very bad litmus test if almost no languages can actually pass it, even though (a) languages exist with explainable type errors, (b) those languages are general purpose and fairly expressive, and (c) they support type inference. An obvious example is Rust.
You have a different definition of "works well" and also a different definition of "type inference" here. Rust doesn't have global type inference. C++'s `auto` doesn't count. Go's `:=` syntax doesn't count.
Why not? Because you are adding typing retroactively to a dynamically typed language like Python. If a language was built with typing, you can do this because your initial choice of a type system already constrains the set of valid programs. But a Python-like language where users are already used to having no restrictions on runtime types? Absolutely not. Especially not in a language that encourages duck typing. Let's go back to the omega combinator example. The Stack Overflow link is correct: Haskell cannot assign a type to it. That means the type system is purposefully designed weak enough that the omega combinator cannot be written in the first place! No valid Haskell program has it! Therefore you can totally disregard it. Python is different. You can already write the omega combinator in it. I wrote it three times in this thread. So a bolt-on type system needs to assign a type to it. And you can't.
Python does not have a type system that "works well" and will never have one.
I think the disagreement is about what “works well” means. You are approaching it from a kind of theoretical purity, whereas most people approach it from the perspective of “works for every program I am likely to encounter”. Omega combinators aren’t something typically written in Python. I think you’re making a similar move in treating global type inference as the only “true” kind of type inference, when clearly more limited inference already adds a significant amount of utility, and there’s not typically that much lost in having to annotate functions: it’s a bit inconvenient in some places, but overall it acts as an explicit contract between the method body and the rest of the code, which makes errors faster to spot and compilation faster.
You are correct that the holy grail you’re trying to reach is impossible, but relaxing some constraints still yields a heck of a lot of practical benefit.
The problem with "works for every program I am likely to encounter" is that you don't know what kind of programs your users are writing.
Well, now that I think more about it: except when you are Google or Meta and you have a giant monorepo full of code that users of this language have actually written. So if Starlark were to gain a type system, the type system designer would probably just run it over all the Starlark code at Google.
Speaking of Meta, this suddenly reminds me of Flow (https://flow.org/), an effort at Meta to add a bolt-on type system to JavaScript. Naturally it doesn't aim to "work well" 100% of the time, but maybe working 80% of the time is already valuable enough for them to use it and announce it.
> is that you don't know what kind of programs your users are writing
Except we have lots of easily accessible OSS code available via GitHub/GitLab etc., in addition to self-reporting from closed repos. Can you find any instance of an omega combinator in use? I'm not even sure the omega combinator is relevant to Starlark, given that Starlark is a more limited language derived from Python. For example, it doesn't typically allow recursion, which I believe would be required to express an omega combinator within it. Yes, I know the definition of the combinator says it's not recursive, but that's in a pure CS sense. In a practical sense, there's not much difference between a function being passed itself as an argument and then invoking it vs invoking itself directly, particularly from an enforcement perspective. Indeed, you can verify it'll fail the recursion prevention code in Starlark [1].
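For reference, since Starlark has no lambdas, the self-application has to be written with named functions; in that form the recursion check rejects it, while Python just recurses until it hits the stack limit (a sketch):

```
# Starlark has no lambda, so the omega combinator must be a named function.
def omega(x):
    return x(x)

# omega(omega)
# Starlark: fails its recursion check as soon as omega re-enters itself.
# Python:   recurses until it raises RecursionError.
```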
Global inference is a nightmare; you simply don't want global inference. It means that the minimum understandable unit is the entire system: if you don't understand the entire system, you don't understand any of it. That can't scale.
When weird type theory people insist you want global inference and that you should sacrifice something you actually value to get it, treat them the same as if Hannibal Lecter was explaining why you should let him kill and eat your best friend. Maybe Hannibal sincerely believes this is a good idea, that doesn't make it a good idea, it just further underscores that Hannibal is crazy.
Your dichotomy between type systems that are too restrictive and type systems that are too complex only appears because you implicitly require soundness.
Take `Any`: it's an extremely simple type that never leads to hard-to-understand type errors, because it never produces type errors. And you can use it for any real-world program, including malformed ones.
Of course that means the type system is unsound and accepts programs that will fail at runtime. But that is perfectly fine as long as the type checker can spot some issues and gets out of your way otherwise.
And it works great for incremental typing! You start with almost everything (except constant literals and the functions that return them) having the type "Any" and let users add types incrementally.
Python typing is so basic that it didn't even support a proper JSON type until a year ago (!), and yet it brings value to millions of Python programmers around the world.
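A sketch of that incremental path in today's Python (the functions are made up; checkers treat missing annotations as `Any` and only start complaining once you tighten a signature):

```
from typing import Any

# Step 0: everything is effectively Any; the checker accepts whatever you do.
def scale(point: Any, factor: Any) -> Any:
    return {"x": point["x"] * factor, "y": point["y"] * factor}

# Step 1: tighten one signature at a time; the checker can now catch real
# mistakes at this function's call sites without touching anything else.
def scale_typed(point: dict[str, float], factor: float) -> dict[str, float]:
    return {"x": point["x"] * factor, "y": point["y"] * factor}

print(scale({"x": 1.0, "y": 2.0}, 3))        # fine either way
print(scale_typed({"x": 1.0, "y": 2.0}, 3))  # fine
# scale("nonsense", 3)       -> checker is silent (Any), fails at runtime
# scale_typed("nonsense", 3) -> checker flags it before you ever run it
```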
Given that Starlark is deterministic, it should be possible to just run the code and record the set of types that reach every parameter and variable.
A bit expensive, perhaps. And it breaks down if the code is in an intermediate state. Like, "I just deleted a comma, where did my type annotations go?"
Are you saying that every function can only be called with a consistent set of types in the parameters? It’s not possible to call a function two times and supply parameters that have different fields on them?
Unless that is true, the end result might not be ideal. You could certainly record the information that is there at every line of code at runtime, and you could probably calculate the union of the parameter types to find only the fields that are always there, which would be fairly okay I guess.
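A rough sketch of that recording step in Python (hypothetical; a real tool would hook the interpreter rather than use a decorator, and would then union the observed types per parameter as suggested above):

```
import functools
from collections import defaultdict

# Maps (function name, parameter name) -> set of type names seen at runtime.
observed: dict[tuple[str, str], set[str]] = defaultdict(set)

def record_types(fn):
    """Record the concrete type of every argument passed to fn."""
    names = fn.__code__.co_varnames[: fn.__code__.co_argcount]

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        for name, value in list(zip(names, args)) + list(kwargs.items()):
            observed[(fn.__name__, name)].add(type(value).__name__)
        return fn(*args, **kwargs)

    return wrapper

@record_types
def describe(x, verbose=False):
    return ("%s!" % x) if verbose else str(x)

describe(3)
describe("three", verbose=True)
print(dict(observed))
# e.g. {('describe', 'x'): {'int', 'str'}, ('describe', 'verbose'): {'bool'}}
```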
A type system that doesn't do anything? Starlark already has that: it's called comments.
Before you try to tell me that it's somehow different, consider that the main thing a type annotation has to be, to stay consistent, is accurate. If the types aren't checked by a compiler or interpreter, then someone can change the way the variable is assigned or what the function will work with, and the annotation will just be wrong.
The idea is to have gradual typing, like Python does. Most annotations in Python have no impact on runtime (e.g. parameter types aren't checked), but they are still valuable for external tools to perform type checking. A program with the wrong types will still run, but you will get type errors from the checker.
One difference is that Python specifies the type system in PEPs (basically Python's RFCs), while OP didn't ask for that here. This would mean whatever tool is implemented would define the type system.
OP did also say "They wouldn't even have to be specified", which is the big difference here: the annotations only need to be real syntax for a type checker to actually be able to use them, unlike comments.
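Concretely, the annotation below has no effect when you run the file, but a checker such as mypy rejects the bad call; that enforcement is what separates annotations from comments (a minimal example):

```
def shout(message: str) -> str:
    return message.upper() + "!"

print(shout("hello"))   # runs fine, checker is happy
print(shout(str(42)))   # also fine either way
# shout(42)  -> CPython would happily try it and fail inside .upper();
#               a type checker flags the bad argument before you ever run it.
```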
This is my biggest complaint with keybindings on Linux/Windows, that Ctrl means “escape codes” in terminal emulator applications and “window/UI operations” in every other application.
By contrast, in macOS all the window/UI keybindings are typically Cmd (Cmd-W) which means that GUI applications attempting to layer on vi keybindings don’t have to also audit all the Ctrl keybindings that might be doing something wildly different compared to what vi users expect.
Cmd for UI and Ctrl for terminal just makes for such a nice default convention.
I agree: having switched back to Windows five years ago, this is one of the biggest things I miss about the Mac hardware. Having a separate key just avoids the whole drama around whether Ctrl-C is "copy the thing" or "shut it all down", and which of those functions is going to get remapped to Ctrl-Shift-C, and all the fallout from that.
Configurable window managers on Linux let you bind a "super" key. I like the Windows key, since it serves no other purpose and never overlaps with default bindings on Linux.
So the Cmd/Ctrl split on Mac maps nicely to the Win/Ctrl split on Linux.
While I do this too, I understand the above complaint. I think Linux should naturally have a system more like what OSX does. I'm constantly switching between OSX and Linux machines, and more and more recently I've been at a Mac while working inside a Linux system, and that is very seamless, because it is the GUI that fucks shit up. But now there's so much momentum that I'm not sure they can go back. I think the only way to make this correction is for a big distro like Ubuntu/Pop/Manjaro/Endeavor to switch to Alt or Super as the default window-management modifier, with the option to switch back made clear in the install process and easy to undo (perhaps introduced as non-default at first). But without eventually making it the default, it'll always stay the other way.
Super is the "Windows" key (or the Command key), it's the "mod key" that can then be set to super or alt/meta or whatever.
For the reasons you suggest (WMs often making heavy use of the Super key), I would not want it to be used in the macOS way, as I have a few dozen keybinds I'd lose or have to remap: stuff like focusing or moving windows or workspaces. Emacs can also make use of the Super key, though I don't know if it binds anything to it by default.
The macOS approach is cool and makes some sense, but I wouldn't want it forced on me, and I'm kinda glad I didn't imprint on it.
This isn't true, though. Control + arrow keys, for example, navigate desktops on macOS.
On Linux I can at least restrict my desktop environment's shortcuts to Super, leaving the other modifiers open for applications. macOS uses every single modifier, sometimes in combination.
When you use EXWM (Emacs X Window Manager), this is actually a blessing! It means you can bind Super keys only in Emacs and have them do exactly the same thing every time, while the other X apps (terminals, browsers...) get the Control key to themselves.