
Different people have wildly different experiences with ORMs because they've used them for wildly different levels of integration and tasks, and in wildly different languages whose features make ORMs more or less useful. Yet there's always someone willing to go out of their way to ignore all that and make absolutist statements about how they're good or bad.

We should just learn to recognize it for what it is, someone being controversial for attention, and move on. Let's save our attention for the ORM article and discussion that starts out along the lines of "ORMs can provide benefits, but it's important to recognize where, and not to let the problems of their use outweigh those benefits. Here's what I've found."




> Different people have wildly different experiences with ORMs because they've used them for wildly different levels of integration and tasks, and in wildly different languages whose features make ORMs more or less useful. Yet there's always someone willing to go out of their way to ignore all that and make absolutist statements about how they're good or bad.

I don't have a horse in this race; I couldn't care less whether you do or don't use an ORM. But, and maybe this is the cynic in me, there are practices in software development that absolutely, 100%, for certain, have no good reason to exist, and that are perpetuated in part by the belief that there must have been a good reason for them to exist (Chesterton's fence and all that).

Null terminated C strings are a prime example. There is absolutely no good reason, other than the fact that the authors may have wanted to save 3 bytes, that C strings should be null terminated. Fortran was created in 1954 and passed the length of the string along with the string itself. How many countless bugs and CVEs have arisen from errors in handling null terminated C strings (one example[0])? And for what? To save 3 bytes, or because of a decision the authors made on a whim?

Likewise, decisions made at Javascript's inception have burdened it for its entire life. Decisions that were made on a whim, like implicitly converting numbers to strings sometimes, and sometimes implicitly converting strings to numbers! (Tell me what `10 + "10"` is and what `10 - "10"` is without using the inspector.) And the million and one ways to define something that's undefined.

Anyways, when somebody tells me there's absolutely no good reason for a development practice to exist, sometimes that is the absolute truth. And I would rather have more people throwing away these crummy practices that lead to unnecessary headaches (or at least questioning them) than people continuing to laud them and perpetuate them ad infinitum.

[0]: https://defendtheweb.net/article/common-php-attacks-poison-n...


> Null terminated C strings are a prime example. There is absolutely no good reason, other than the fact that the authors may have wanted to save 3 bytes

Or, you know, they wanted to make the language track assembly as closely as possible (which was possible back then, when processors were much simpler and instructions weren't constantly reordered), and if you've ever written assembly, you know you're not really working at a string level, you're working at a byte and character level. Sized strings are more complex than null terminated ones: you either need to set a max size, or you waste multiple bytes per string (which actually mattered on many of the systems C was used on when it was developed), or you have to use masks on those leading bytes to determine whether the next byte is part of the string or a continuation of the size.
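
To make the tradeoff concrete, here's a rough sketch of the two representations. It's purely illustrative (nul_len and sized_str are my own strawmen, not anything from C's actual history), but it shows the shape of the decision:

  #include <stdio.h>

  /* Null terminated: just a pointer, the end is wherever '\0' shows up. */
  size_t nul_len(const char *s) {
      size_t n = 0;
      while (s[n] != '\0')
          n++;
      return n;
  }

  /* Sized: you have to commit to a width for the count up front. */
  struct sized_str {
      unsigned char len;   /* one count byte caps strings at 255 chars;
                              a wider count costs more bytes per string */
      char data[255];      /* or every string reserves the max size */
  };

  int main(void) {
      struct sized_str s = { 5, "hello" };
      printf("%zu %u\n", nul_len("hello"), (unsigned)s.len);
      return 0;
  }

Neither snippet settles anything on its own, of course; it's just the cost model as I understand it.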

And honestly, when you're working on systems where RAM (and maybe storage) is measured in kilobytes or less and speed in kilohertz, the extra code, the extra processing time, and the extra space that sized strings require make them a lot less of an obvious choice.

Was C a good choice for the time and where it was used? Possibly. Is it a good choice these days? Probably not without a bunch of extra utils and compiler guards to beat back the worst problems. I don't blame C as much for that as I do the people who continue to use it without extra safeguards.

What was that you said about Chesterton's fence? That's one of those terms you should be careful about throwing around when supporting an absolutist position...


I could almost buy your argument, except for the fact that Fortran, created in 1954 when systems like the IBM 650 had a maximum of 35KB of memory[0] (which I'm assuming included program memory), still included the size of the string with the string as a convention.

But that's just me guessing, and there's no reason to guess when Dennis Ritchie wrote down the reason for this himself:

>> This change was made partially to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in our experience, less convenient than using a terminator.[1]

So this was a change made primarily for convenience. And if the limitation of 255 characters was really a huge blocker, they could have easily created a scheme like UTF-8, with a variable-length encoding that depends on the size of the string; funnily enough, Ken Thompson, who worked alongside Ritchie, later invented UTF-8 itself. You mentioned that the processing time would have been an issue, but with C strings you have to walk the entire string to determine its length, and Ritchie notes that as an additional tradeoff of this convention.
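
For what it's worth, the kind of scheme I have in mind is a varint-style length prefix, roughly like the sketch below. This is entirely my own illustration (encode_len included), not anything Ritchie or Thompson actually proposed:

  #include <stddef.h>

  /* Hypothetical varint-style length prefix: 7 bits of count per byte,
     high bit set means another count byte follows. */
  size_t encode_len(size_t len, unsigned char *out) {
      size_t i = 0;
      do {
          unsigned char b = len & 0x7F;
          len >>= 7;
          if (len != 0)
              b |= 0x80;   /* continuation bit: more count bytes follow */
          out[i++] = b;
      } while (len != 0);
      return i;            /* number of prefix bytes written */
  }

Strings up to 127 characters would pay exactly one byte of overhead, the same as a terminator, and only longer strings would pay more.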

But that wasn't done. And I can't blame Ritchie for that either, because he didn't think this language would become what it is today! Later on in the paper he alludes to this:

>> C has become successful to an extent far surpassing any early expectations[1]

All throughout the paper you can see him referring to decisions that were made out of convenience, and not because he had done extensive analysis to determine whether the tradeoff for the convenience was worth it:

>> Two ideas are most characteristic of C among languages of its class: the relationship between arrays and pointers, and the way in which declaration syntax mimics expression syntax...In both cases, historical accidents or mistakes have exacerbated their difficulty.

>> C treats strings as arrays of characters conventionally terminated by a marker...and as a result the language is simpler to describe and to translate than one incorporating the string as a unique data type.

All that to say, yes of course there were reasons that decisions were made the way they were. But, and this is what I've noticed more and more in programming communities, these decisions are often made with little to no analysis, usually out of subjective preference or to make the implementor's life a tad easier. So, yeah, I think it's right to call out a lot of "best practices", because history has shown that programmers really don't put much thought into their decisions. And then you end up with gurus proclaiming that a decision made out of convenience was actually the best decision available and that we should never change the way we do things because clearly this is the right way.

[0]: https://en.m.wikipedia.org/wiki/IBM_650

[1]: https://www.bell-labs.com/usr/dmr/www/chist.html


> I could almost buy your argument, except for the fact that Fortran

Fortran was not created for the same purpose. Fortran existing doesn't invalidate C's choices any more than Java existing invalidates C++'s choices. There's a reason Fortran and Java are not common choices to write an OS kernel in, while C and C++ are/were. There's a reason C and C++ aren't often used for web development while interpreted languages are. Different design choices fit different niches better or worse.

> So this was a change made primarily for convenience.

Convenience can mean a lot of things, and in this context, in the absence of contrary evidence, I interpret that statement as entirely in line with what I said above. It was inconvenient to have a more complex type to deal with, for multiple reasons. I'm not sure why you would read it differently; it's not like I said it could not work the other way, just that there were factors in the reasoning that made the choice less obvious than it would be in today's world.

> You mentioned that the processing time would have been an issue, but with C strings you have to walk the entire string to determine its length, and Ritchie notes that as an additional tradeoff of this convention.

Yes, a tradeoff. If what you're doing with strings most of the time is parsing them, knowing the length ahead of time may be of little benefit, since you're going to step through them character by character anyway. For many operations, knowing the length ahead of time is irrelevant.
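
That is, the typical scan looks more or less the same either way. A toy example of my own (not anything from the standard library):

  /* Walk a null terminated string: the terminator does all the work
     and the total length never comes into it. */
  int count_spaces(const char *s) {
      int n = 0;
      for (; *s != '\0'; s++)
          if (*s == ' ')
              n++;
      return n;
  }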

> decisions that were made out of convenience, and not because he had done extensive analysis

I'm not sure anyone is arguing they used extensive analysis. I'm certainly not. But when the reality you live and work in is that you routinely run up against real hardware constraints, that's bound to affect the ad-hoc reasoning you fall back on when you don't do extensive analysis.

> All that to say, yes of course there were reasons that decisions were made the way they were.

Given that this started with you supporting absolutist statements like "there is absolutely no good reason, other than the fact that the authors may have wanted to save 3 bytes, that C strings should be null terminated", and that your prime example has now been walked back to the fact that yes, there were considerations beyond that, including making the language simple to describe, I think you've proved my original point.

Who is to say that C's simple description and ease of implementation on additional architectures isn't a major factor in its success and spread? And yes, while we've been paying the price for that for quite a while now, it may also have allowed a level of software portability that really helped advance computing beyond where it would otherwise be today.

I don't like C all that much and I don't use it for anything, but I'm also willing to note that it must have done quite a few things right to get to where it did. I'm not willing to call out any aspect of it as completely without merit while acknowledging the immense benefit the language brought as a whole, because at that point we're getting into conjecture about alternate histories.


Fair enough :)

Absolute statements are generally bad ideas, and I've just been reminded of why that is the case haha.


> Decisions that were made on a whim, like implicitly converting numbers to strings sometimes, and sometimes implicitly converting strings to numbers!

I see this as a problem that's horribly inflated by those who don't use JS on a daily basis.

Practitioners of the language largely don't care, because in actual code you rarely see cases where it would matter.

Now that we have template strings it's even less relevant.


I use TypeScript on a daily basis for my job. It's an entire language created to make up for JS's shortcomings. There's no horribly inflated reasoning going on when most of the industry has decided the best move is to throw away the language and use a different one that transpiles to it.


> Null terminated C strings are a prime example

It’s funny - it’s almost like C never had a string type and null-termination is more a convention than a primitive data type.

There’s nothing that prevents us from having Pascal-like strings as much as we want, provided we know we’ll need to reimplement everything we need.


Isn't that the whole point of this discussion? We're talking about conventions programmers treat as absolutes that are detrimental to the maintenance and security of the programs.

So I don't see how this has much bearing on the ultimate point I'm making, which is that yes, conventions can be bad haha.


What have you found?





