Hacker News | new | past | comments | ask | show | jobs | submit | maweki's comments

In light of that, I am wondering why the article opted for "However, determining the optimal join order is far from trivial.", when there are hard results in the literature.

I also missed any mention of "sideways information passing", though some of the methods described are exactly that.

I am wondering whether the company consults literature or whether they fiddle about, mostly reinventing the wheel.


While tedious, you can do the rewrite block-wise from the insertion point and only store an additional block's worth of the rest (or twice as much as you inserted).

ABCDE, to insert 1 after C: store D, overwrite D with 1, store E, overwrite E with D, write E.


What do you like to call Hungarian notation?


I don’t know any other name for it. While these strings are basically SSO (or a twist of it).


From TFA and AndyP's slides it seems to specifically refer to a variant of SSO where, for large strings, a fixed-size prefix of the string is stored alongside the pointer, in the same location where that prefix would be for SSO strings. This means that strings lacking a common prefix can be quickly compared without pointer-chasing (or even without knowing whether they are small or large).


Well read.

> basically SSO (or a twist of it)


Also:

* Reverse Polish notation

* Chinese remainder theorem

* Byzantine generals problem


All names given by people who were NOT of that nationality.


"System's Horrendous Pile Of Shit".

If anyone ever referred to Apps Hungarian that would be "Simonyi's Wish For A Proper Type System", but nobody ever does.


I had a lot of contact with computer science students coming from the other side, meaning they used Z3 or other SMT solvers as black boxes which they just invoke at a certain point in their algorithm, without having thought about which theories they are using (the T in SMT) and what's undecidable in general or in that theory.

So I had quite a few "groundbreaking" approaches end in disappointment.

It's important to know the capabilities and limits of your tools.


I'm not a Rust or systems programmer, but I think it meant that, as an ABI or foreign-function-interface concern, bitfields are not stable or not intuitive to use, as they can't be declared with enough granularity.


C's bit-field ABI isn't great either. In particular, the order of allocation of bit-fields within a unit and the alignment of non-bit-field structure members are implementation-defined (C11 6.7.2.1). And bit-fields of types other than `_Bool`, `int`, `signed int`, and `unsigned int` are extensions to the standard, which somewhat limits what types can have bit-fields.


I think the core of the problem in property-based testing is that the property/specification needs to be quite simple compared to the implementation.

I did some property-based testing in Haskell, and in some cases the implementation was the specification verbatim. So what properties should I test? It was clearer when my function should be symmetric in its arguments or when there was a neutral element, etc.

If the property is basically your specification, which (as the language is very expressive) is your implementation, then you're just going in circles.


Yeah, reimplementing the solution just to have something to check against is a bad idea.

I find that most tutorials talk about "properties of function `foo`", whereas I prefer to think about "how is function `foo` related to other functions". Those relationships can be expressed as code, by plugging outputs of one function into arguments of another, or by sequencing calls in a particular order, etc. and ultimately making assertions. However, there will usually be gaps; filling in those gaps is what a property's inputs are for.

Another good source of properties is trying to think of ways to change an expression/block which are irrelevant. For example, when we perform a deletion, any edits made beforehand should be irrelevant; boom, that's a property. If something would filter out negative values, then it's a property that sprinkling negative values all over the place has no effect. And so on.


Incremental automatic grounding to SAT works fine for ASP.


The range can be a product type, as can the domain. Most languages are expressive enough that you can create the product type (struct). You're right on point.


> valuing where papers are published over what they contribute

And who is the arbiter of that? This is an imperfect but easy shorthand. Like valuing grades and degrees instead of what people actually took away from school.

In an ideal world we would see all this intangible worth in people's contributions. But we don't have time for that.

So the PhD committee decides, based on exactly that measure, whether there are enough published articles for a cumulative dissertation. What exactly is the alternative? Calling in fresh reviews to weigh the contributions?


Avoiding the problem altogether is just throwing up your hands and saying "this is too hard so I'm not going to even try".

We already know there is some way to do it, because researchers do salami slicing: they take one piece of work and split it up into multiple papers to get more publications out of the same work. So one might, for example, look at a paper and ask how many papers it could be split into if one engaged in salami slicing, to get at least some measure of this.


We found YAML to be a great exchange format for electronic exam data. It lets us put student-submitted answers and source code into a YAML file with no weird escaping, and it's very readable with a text editor. We then just add notes and a score as a list below, and then comes the next submission.

For readability of large blocks of text that may or may not contain various special characters and newlines, the only other alternative we saw was XML, but that is very verbose.

So what the author counts as a negative, the many string formats, is exactly what drew us to YAML in the first place.
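For instance, a literal block scalar (`|`) keeps newlines and special characters verbatim, with no escaping; the field names below are invented, not our actual format:

```yaml
# Hypothetical shape of one submission (field names invented):
submissions:
  - student: "12345"
    answer: |
      def solve(x):
          return x * 2   # quotes ", backslashes \ and colons : need no escaping
    notes:
      - "good use of the helper function"
    score: 8
```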


Somebody in these discussions always correctly points out that s-expressions are as expressive as XML but without the excess line noise, so it might as well be me.


What is so verbose about a CDATA section? Everybody complains about XML being verbose, but I have never once heard complaints about HTML being too verbose.


I’ll be that person then. HTML is too verbose for anything intended to be read as plaintext (and not in its parsed, marked-up form) more than 25% of the time. A well-formatted Javadoc comment full of HTML markup is difficult to read as plaintext, but without the markup it loses the expressiveness that converting Javadoc to HTML can give. That’s why it’s nice that Java 23 introduced Markdown as a new option for Javadoc (JEP 467), and presumably why Rust chose it for rustdoc.

