Hacker News | jaen's comments

No. Taking the value of a single character is a correct perfect hash function, assuming there exists a position at which all strings in the input set differ.
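As a minimal sketch of the idea (Python for illustration; all names hypothetical): for a fixed key set, find a character position at which every key differs, and index by that single character.

```python
def find_distinguishing_position(keys):
    """Return an index i such that key[i] differs across all keys, or None."""
    for i in range(min(len(k) for k in keys)):
        if len({k[i] for k in keys}) == len(keys):
            return i
    return None

keys = ["cat", "car", "can"]
pos = find_distinguishing_position(keys)  # position 2: 't', 'r', 'n' all differ
table = {k[pos]: k for k in keys}  # a one-character perfect hash table
```

If no such position exists (e.g. for ["jan", "jun", "jul"]), the function returns None and this trick does not apply.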


Option and Result types, as implemented today in mainstream languages (ie. mostly anemically), are not the answer to exceptions being a mess.

Exceptions have a lot of additional functionality in larger ecosystems such as:

- Backtraces, ie. showing the exact path of the error from its source to wherever it was handled, in a zero-cost way. This is by far the most important aspect of exceptions, as it enables automatically analysing and aggregating them in large systems, eg. to attribute blame for changes in error metrics to individual commits.

- Nested exceptions ie. converting from one error system to another without losing information. Extensible with arbitrary metadata.

- An open and extensible error type hierarchy. Again, necessary in large-scale systems to differentiate exceptions by eg. cause (caller fault vs. callee fault, aka the HTTP 400/500 divide), retryable vs. permanently fatal, loggable etc., while also maintaining API/ABI backward/forward compatibility.

(for some of these, eg. Rust has crates for a Result-y equivalent, but a community consensus does not exist, yet...)
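The open-hierarchy point can be sketched like this (Python for illustration; all class names hypothetical):

```python
# Hypothetical error hierarchy distinguishing blame and retryability,
# roughly mirroring the HTTP 400/500 divide mentioned above.
class ServiceError(Exception):
    retryable = False

class CallerError(ServiceError):          # caller fault, ~HTTP 4xx
    pass

class CalleeError(ServiceError):          # callee fault, ~HTTP 5xx
    pass

class TransientCalleeError(CalleeError):  # callee fault, but safe to retry
    retryable = True

def handle(err: ServiceError) -> str:
    # Subclasses added later still match these broad handlers, which is
    # what keeps existing callers backward compatible.
    if err.retryable:
        return "retry"
    return "caller bug" if isinstance(err, CallerError) else "fail"
```

A library can introduce new leaf types without breaking any caller that only matches on the broad categories.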

General-purpose exceptions are simply complicated, and any system trying to "re-invent" them will eventually run into the same problems. Over-simplifying error handling just results in systems that are less maintainable, debuggable, and reliable.


This isn't a binary choice. In Scala, you can use Throwable or Exception as your error type with Either:

  Either[Throwable, Option[Foobar]]

The type Try[T] is essentially Either[Throwable, T].

Either[Throwable, T], Try, as well as IO from Cats Effect give you the stack traces that you expect from conventional Java style, with the superior option of programming in the monadic / "railway" style. Try also interfaces nicely with Java libraries: val result: Try[Foobar] = Try(javaFunction).


Don't agree with a single thing, especially not with the characterization that functional error handling is some kind of attempt at reinventing exceptions. But yeah, it's clear my and your camp will never agree lol. Fortunately for you, so far, your camp has mostly won, at least in the "object oriented" languages. But I think that's rapidly changing.


I am not in any sort of "camp", in fact I prefer using a mostly functional style. The above comment was based on experience working in large (~100M LoC) code bases.

As the comment clearly indicates, it is about anemic/"naive" functional error handling not being the counterpoint to general-purpose exceptions, not functional error handling vs. exceptions in general.

I do mostly prefer error handling being explicitly marked at every call site (ie. the functional style), but note that this is not always meaningfully possible in very large systems (at least beyond the notion of "I do not know exactly what errors are possible here, just propagate whatever happens", which is equivalent to regular exception handling).

And, as I already mentioned in the original, Rust does have functional solutions to some of these problems, and as other comments indicate, eg. Scala has them as well (probably even theoretically better since it can be a strict superset of the existing zero-cost exception model in the JVM).


The backtrace argument is good, but I wonder how valuable traces would be in a world that never experienced reads-of-nothing (NPEs, reading from undefined, out-of-bounds array reads, etc.). Presumably this would be because of 100% use of ADTs, or maybe some other mechanism; but even Haskell throws exceptions out of `IO a`, so such a world might never be realized.


From an optimization perspective, such dialects are pretty much like the intermediate data structures that "single-IR"-style passes build internally anyway (eg. various loop analyses), just in a sharable and more consistent (if less performant) form.

Single-IR passes from that perspective are roughly equivalent to MLIR-style `ir = loop_to_generic_dialect(my_loop_optimization(generic_to_loop_dialect(ir)))`.

This assumes the existence of bidirectional dialect transformations. Note that even LLVM IR, while a single IR, is technically multi-level as well: eg. for instruction selection, it needs to be canonicalized & expanded first, and feeding arbitrary IR into that pass will result in an exception (or sometimes even a segfault, considering it is C++).

Also, even though passes for single IR can theoretically be run in an arbitrary order, they are generally run in an order that can re-use (some) intermediate analysis results. This is, again, equivalent to minimizing the number of inter-dialect transformations in a multi-dialect IR.


It requires the final function to have a numerically reasonable finite-difference gradient, which is somewhat different from what is commonly referred to as "differentiable": eg. the insides of that function could still use non-differentiable/non-analytic functions.
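A finite-difference gradient of this sort can be sketched as follows (Python for illustration; this says nothing about the optimizer's actual implementation):

```python
def finite_difference_gradient(f, x, h=1e-6):
    """Central-difference gradient of f at point x (a list of floats).
    Only requires f to be numerically well-behaved near x, not
    analytically differentiable everywhere."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

# abs() is non-differentiable at 0, yet away from the kink the
# finite-difference gradient is still perfectly usable:
g = finite_difference_gradient(lambda v: abs(v[0]) + v[1] ** 2, [3.0, 2.0])
```

Here `g` comes out close to `[1.0, 4.0]`, despite `abs` having no derivative at the origin.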

It seems to be based on numeric.js, which is based on the classic Fortran UNCMIN [1] optimizer.

[1]: https://perception.lab.uiowa.edu/UNCMIN


When the end product is supposed to have any semblance of reliability and durability, even human beings do not communicate in natural language.

As an example:

- Science: Many non-natural languages and notations used for communication, math being the most obvious.

- Laws: They look like natural language when you squint, but are actually very rigid and un-natural.


What does that have to do with the CEO I mentioned? They're not writing math or legal documents, yet their wishes are still materialized. Whatever the intermediary processor utilizes is unimportant.


It's an example of large domains where natural language does not work well enough, as a counterpoint to the arbitrary, niche CEO example of a situation where it might work. It is responding to the entire thread, not just your comment.


That's a somewhat one-sided view. For gaming and entertainment, yes, people would do it anyway, since it's just fun, but these do not contribute much useful information to the collective consciousness anyway. Hobbies & creative communities will also survive.

OTOH, there are also plenty of technical blogs full of advanced content that is not "fun" to produce on its own, that are written to interact with a community of professionals (or juniors), and that might wither if engagement with actual human beings is reduced.


This only applies to the most common form of anti-aliasing, multisampling. There is also analytical anti-aliasing, which derives the pixel coverage directly from the equations of the shape (for vector graphics), see eg. the implementation in Skia [1], or for a classic, Wu's algorithm [2].

[1]: https://docs.google.com/document/d/17Gq-huAf9q7wA4MRfXwpi_bY... [2]: https://en.wikipedia.org/wiki/Xiaolin_Wu%27s_line_algorithm
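In its simplest form, analytical anti-aliasing computes each pixel's coverage directly from the shape, instead of averaging subsamples. A hedged sketch (the clamp-on-signed-distance rule below is a common simplification, not Skia's or Wu's actual code):

```python
def edge_coverage(signed_distance):
    """Approximate pixel coverage for a straight edge, given the signed
    distance (in pixels) from the pixel center to the edge; negative
    means inside the shape. Assumes the edge is locally straight
    across the pixel."""
    return min(1.0, max(0.0, 0.5 - signed_distance))

# A pixel centered exactly on the edge is half covered; one pixel
# inside is fully covered; one pixel outside is empty.
```

No subsamples are ever taken: the coverage, and hence the blend factor, falls analytically out of the geometry.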


The first illustration app I used that offered anti-aliasing was Xara Studio, which ran on very modest Windows PC hardware in 1995.

Originally it was called ArtWorks and ran on an 8MHz Acorn Archimedes with 2MB of RAM in 1992! I guess it used an anti-aliasing solution like this? http://www.cconcepts.co.uk/products/artworks.htm#:~:text=Art....


The shortcuts also do not work properly on the most common US QWERTY layout (eg. the "-" key does nothing).

They do work on most EU layouts though, where eg. "-" is placed left of Right Shift (on US QWERTY it is right of "0"), which likely indicates the author made this using some EU keymap...

As a guess, this is using deprecated keycodes [1] instead of just characters.

[1]: https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEve...


> Windows 95 introduced this bizarre abomination in which 32-bit console apps were supported, but their IO was routed via a DOS program called CONAGENT.EXE.

Windows 95 used the 16-bit COMMAND.COM as its default command-line shell, so doing it this way was probably necessary to make 32-bit console applications interoperate with the command shell (and support eg. piping and redirection between 16/32-bit executables).


> Windows 95 used the 16-bit COMMAND.COM as its default command-line shell, so doing it this way was probably necessary to make 32-bit console applications interoperate with the command shell (and support eg. piping and redirection between 16/32-bit executables).

I think that's got the arrow of causation reversed.

Windows 9x did it this way because it didn't have a 32-bit console subsystem. They couldn't have easily ported NT's 32-bit console subsystem to 9x/Me, because it was deeply tied into how DOS boxes are implemented (NTVDM), and that is radically different between NT and 9x/Me (which have basically the same architecture in that regard as Windows 3.1 in 386 Enhanced Mode). It was also deeply tied into NT architecture components that 9x/Me lacked (CSRSS.EXE and LPCs).

And they used 16-bit COMMAND.COM as the primary shell, because without a 32-bit console subsystem, the value of adopting CMD.EXE was rather limited. It would have allowed some more advanced batch files.

Actually, Microsoft did port CMD.EXE to Windows 95 and 98, but it is unclear whether they ever officially released it. It was shipped in some Windows betas and beta SDKs, and some people got it from there and redistributed it (which might not be technically legal, but I doubt that anyone at Microsoft really cares, especially by now) – http://cygutils.fruitbat.org/consize/index.html

COMMAND.COM did piping using temporary files. I think even in NT versions, redirection works when starting a 32-bit console executable from a DOS app. I wish I had a 32-bit Windows VM handy to test that with. (Pity that pcjs.org has Windows 95 but no NT versions, not even NT 3.1/3.5/3.51/4.0.)


Given that their "Command & Control" server already knows the user's IP anyway, this might be a disguise: the actual intention may be to check whether Google works from that IP. Such shady VPNs are often used to abuse the client as a proxy for SERP requests, bypassing IP-based search-engine query limits (for SEO etc.).

