Hacker News | burgerbrain's comments

Speaking as somebody who enjoys intentionally mocking others (go ahead, check my comment history. Fuck, I picked this username just to make the then-popular diet discussions on HN more interesting), if he's honestly not trying it (and I have a hard time believing that; he is, as far as I can tell, a reasonably intelligent person and should be able to analyze himself objectively), then he is damn good at doing it by accident. Damn good.

I only wish I could convey such an attitude through text intentionally.

EDIT (posted after jacobolus' reply):

When I first came here, I was annoyed at the entire HN attitude of "a better discussion board". Good clean discussion is great, but at the time it seemed too dismissive of the great conversation that happens elsewhere on the web; too arrogant if you will. Furthermore, I saw that attitude as a sort of challenge: could I game the system and be just helpful enough some of the time to allow myself to be an utter asshole the rest of the time without getting banned?

You see, somebody who is an asshole some of the time and useful the rest of the time is not a valuable community member. Even if most of the time is spent being useful, that is still not a person you want in your community; in short, the karma system is broken. Of course HN isn't quite as broken as a pure karma system would be (hellbanning still kicks in for people with positive karma), but it is broken nevertheless. It's pretty easy to keep individual comments at positive or at least neutral karma by picking your topic correctly, which gets you around (to my knowledge) all but manual banning. (I suspect an improved banning system would involve flagging people for manual inspection after they have too many comments with large amounts of both up- and down-voting. I can't speak to the false positive rate of such a system, except to say that my honest, productive comments rarely seemed to swing.)

But I'm done now. tptacek does a better job of what I wanted to do than I think I ever could.

I think I'm past due on closing this account anyway. I'm out; apologies for any grief I've caused. I probably owe a good number of you a beer.


I should clarify: he surely is trying to mock them sometimes, but (a) it’s a pretty friendly ribbing, and (b) the mockery isn’t the purpose, it’s just a rhetorical device. By my italicized “mock”, I meant something closer to: “he’s not trying to shame them into submission, as some kind of ‘authoritarian’ bully power play”.

Edit: maybe it’s better to say that he isn’t trolling them (i.e. intentionally riling people up just for the sake of being a jerk), but is trying to be genuinely helpful and advance the conversation, even though sometimes that takes a teasing sort of tone. Notice all the smiley faces.


I swear, it's like a few months ago somebody somewhere introduced tptacek to the concept of using "nerdy" as an insult (and to the debating tactic of attaching an insult to a position with the goal of forcing people to distance themselves from that position).


What part of "I'm a nerd, the term is a shorthand, not an insult" are people having so much trouble with?

I've been a systems programmer since 1994. I do not believe that you honestly believe that I think "nerd" is an insult. It beggars belief. Please find a better way to be irritated with me; this is such a stupid one.

Why not just go the extra mile and call me a "self-hating nerd"? At least that would make sense.


No idea. Last time I bothered trying, Stack Overflow did, though. (That was a while ago, when it was still called that.)


Yeah, the whole Stack Overflow family of websites blocks EC2.

Other examples include Yelp and Bank of America.


It's still called stackoverflow (?).


Oh. I thought they switched to stackexchange or something stupid like that. Whichever, I don't use the site much.


It would be an improvement because then I could count on comments in JSON doing nothing (instead of possibly not parsing).
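To make the distinction concrete, here is a quick sketch using Python's standard `json` module: standard JSON (RFC 8259) defines no comment syntax, so a conforming parser rejects a commented document outright rather than silently ignoring the comment.

```python
import json

# Standard JSON has no comment production, so a conforming parser
# rejects documents containing one instead of skipping over it.
with_comment = '{ /* a comment */ "key": "value" }'

try:
    json.loads(with_comment)
    result = "parsed"
except json.JSONDecodeError:
    result = "rejected"

print(result)  # rejected
```

This is the "possibly not parsing" case: whether a commented document works at all depends entirely on which parser it meets.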


To add to this, I suspect any non-English speaker would rather have English comments that they can at least throw at Google Translate, than no comments at all.


I have a hard time believing that the removal of comments would greatly simplify a JSON parser.


There would have been demands for tools that transform JSON to JSON, or JSON to XML, to preserve comments across the transformation.

Adding that capability would raise the complexity of the parser, because comments would have to be made part of the data structure that is built and transformed. For instance, it would be harder to embed the data structure for the JSON in JS objects.
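A sketch of the mismatch (using Python's `json` module as a stand-in for a JS engine; the wrapper shape below is purely hypothetical): a parsed JSON object maps one-to-one onto a native dict/object, which leaves no slot for a comment attached to a key.

```python
import json

# A parsed JSON object becomes a plain native mapping; there is
# nowhere to hang a comment like /* seconds */ on the "timeout" key.
doc = json.loads('{"timeout": 30}')
print(type(doc).__name__)  # dict

# A comment-preserving parser would need a richer node per field,
# e.g. a hypothetical wrapper instead of a bare value:
node = {"value": 30, "comment": "seconds"}
# ...and every consumer would then have to unwrap node["value"].
```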

But yes, for a parser that's ingesting data for immediate processing and has no need for comments, there's no discernible win regarding parsing simplicity.


> "I saw people were using them to hold parsing directives"

What could possibly make somebody want to do that? Are there any examples around of people doing that?


You can imagine something like this:

    {
       /* if IE */
       "browser": "IE",
       /* else */
       "browser": "standard"
       /* endif */
    }
Pretty terrible and still possible (but admittedly harder) without comments.


If you are storing this kind of implementation logic in your data, I hope I never have to work with you (not aimed at the parent post, but rather the global "you").


Unfortunately it's all too common in mobile development - mobile is the new "bad old days" of user agent sniffing hell.


You typically don't store this sort of thing in data, though.

Then again, we have certain types of logic stored in a database table, loaded through fixtures... so my two cents may be worth much less than what they appear.


Happens already. Two examples:

1. Internet Explorer has conditional comments - http://msdn.microsoft.com/en-us/library/ms537512%28v=vs.85%2...

2. Sprockets' directive processor uses directives in comments. https://github.com/sstephenson/sprockets


It's not hard to imagine if you sorta hold your breath and let yourself get a little dizzy and think hard about XML you've had the pleasure of messing with.

I quickly get visions of version numbers, customized namespace declarations, typedefs, strftime date format strings...


or say Javadoc @ directives....sigh


Ahhh doclets. I certainly don't miss those things!


The CDDB/FreeDB format requires you to parse comments... http://www.jwz.org/doc/cddb.html

And people do all kinds of nonsense with HTML comments. A very bad idea is often the fastest to implement.


Not JSON, but here's a truly horrible example of Internet Explorer using specially formed comments to take different actions: http://www.quirksmode.org/css/condcom.html


They really aren't that bad, definitely very handy for injecting IE specific stylesheets, without having to rely on javascript.


I'm sure they're handy, but just crashingly inelegant.


Not to mention that the only reason for needing IE specific stylesheets is because IE doesn't follow CSS standards properly!!!


Uh... this was downvoted because? Certainly not because it's wrong.


I completely believe him. Even though XML is really verbose I saw quite a few folks adding these types of things in XML comments.

So this wouldn't surprise me one bit. Would be interesting to see some real-world examples though.



Yep, it made me rage a bit when I found that in a project I'm working on. Comments are for people, people!


And so instead, people embed parsing directives in the JSON itself.


Yes, but that works. Taking in such JSON and then immediately spewing it back out doesn't change the underlying meaning. Transforms from such JSON to some other format (perhaps even another JSON format) must explicitly choose what to do with "comments", instead of accidentally just discarding them. Given that parsing directives are going to exist somewhere, this is the correct place for them.

(Better yet is to create an explicit place for metadata. I almost reflexively use {"metadata": null, "payload": ...} now instead of putting my payload right at the top level, because wouldn't you know it, sooner or later some metadata always seems to wander in. And if it doesn't... shrug. If you're in a place where you can afford JSON in the first place the extra cost is probably way below the noise threshold for you.)
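A minimal sketch of that envelope pattern (field contents here are illustrative): because the metadata travels as ordinary data under its own key, a generic serialize/deserialize round trip cannot silently drop it.

```python
import json

# Envelope: metadata and payload are both ordinary keys, so any
# generic tool that round-trips the document carries both along.
doc = {
    "metadata": {"schema_version": 2},
    "payload": {"users": ["alice", "bob"]},
}

round_tripped = json.loads(json.dumps(doc))
print(round_tripped["metadata"]["schema_version"])  # 2
```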


If you don't parse the metadata, it doesn't matter whether it's in JSON or in a comment. You lose the intended meaning either way.


But the point is that if you have comments in your JSON, the first time you do some sort of "for key in data" to transform the data and spit it back out, the comments are gone; you may never even realize they were there to start with.

If you do that with the metadata explicitly stored as a separate key-value pair in your blob, then this doesn't happen; the metadata is never silently discarded when you, say, take all the key-value pairs in the JSON blob and send them down the wire to a client. If you want to strip the metadata, you have to do that deliberately.

I know this isn't Python, but I think the Zen of Python is on point here: "Explicit is better than implicit."
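A quick Python sketch of the point anyway (key names illustrative): a naive pass-through transform keeps metadata stored as a real key, and stripping it becomes a visible, deliberate step rather than an accident.

```python
# Metadata stored as an ordinary key in the blob.
blob = {"metadata": {"internal": True}, "a": 1, "b": 2}

# A naive "for key in data" pass-through keeps everything,
# metadata included -- nothing is silently lost.
forwarded = {k: v for k, v in blob.items()}

# Dropping the metadata requires an explicit filter.
public = {k: v for k, v in blob.items() if k != "metadata"}
print(sorted(public))  # ['a', 'b']
```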


> But the point is that if you have comments in your JSON, the first time you do some sort of "for key in data" to transform the data and spit it back out, the comments are gone; you may never even realize they were there to start with.

If you've stored comments as regular data, you haven't lost them but you've just transformed them in the output.


Your criticism appears to be based on a transform written with an incomplete understanding of the source data. I'd submit the problem lies in the incomplete understanding of the source data, not the fact the source data had comments. If your transform didn't "know" comments were possible, what else did it not "know"?

TANSTAAFL.


> I'd submit the problem lies in the incomplete understanding of the source data

I'd submit that an incomplete understanding of the source data is not necessarily a problem. It's often a design goal. Generic tools have a limited understanding of the source data by design. I don't want my JSON parser/formatter/minifier/etc. to know about some silly parsing rules you added as comments. I want my JSON parser to understand JSON as it's defined.

Your nonstandard comment directives are the problem, not the fact that I didn't write a custom tool.


Here's an example of using javascript comments to set options on a parser. It isn't JSON, but it is pertinent.

    /* jslint nomen: true, debug: true, evil: false, vars: true */


Weren't Pascal comments delimited by `{` and `}`, and weren't Borland Pascal compiler directives of the form `{$X}`?


Yes, this was pretty common in Pascal.

Also, Emacs uses comments to set file-local options. There's a long tradition of overloading comments to achieve metalinguistic ends. JavaDoc and Doxygen are great examples.

Even when handed a decent macro language with whizzy namespaces and a great DOM, I imagine that some people will still stoop to gross and convenient hacks.


This brings to mind Closure Compiler's use of comment annotations to add type checking, etc to javascript.

https://developers.google.com/closure/compiler/docs/js-for-c...


As others have pointed out, this is fairly common, but no one seems to have pointed to this one:

http://en.wikipedia.org/wiki/C_preprocessor


The C preprocessor processes comments?


Well, that was pretty stupid of me. I had gotten it into my head that hashes were comments in C too.


Regarding your last point: You're not the only one to make this observation.


Paste that link into your favourite (up to date) torrent client.


Yes, and then you get a database in a simple, unquoted |-separated file. Someone presumably got this from The Pirate Bay's servers. What I'm asking is who, and how.

EDIT: though I'm realizing now that "actual" in English might mean something other than what I wanted to say. What I meant (which in my language sounds very much like "actual") is probably better described as "current".


I think someone scraped it and created that torrent. You might be asking for some sort of distributed, automatically up-to-date database of Pirate Bay torrents. That would be cool to have, but it's not possible with magnet links like that. The magnet link is based on the content of the file(s): if someone were to upload a new torrent, the contents of the database would change, the hash would change, and the magnet link would change.
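A sketch of why the link can't stay current: a magnet link embeds a hash of the torrent's metadata, so any change to the content yields a different hash and therefore a different link. (Real BitTorrent hashes the bencoded "info" dictionary with SHA-1; the byte strings below are just stand-ins for two snapshots of the database.)

```python
import hashlib

# Two hypothetical snapshots of the scraped database.
snapshot_v1 = b"database dump, 2012-02-01"
snapshot_v2 = b"database dump, 2012-02-02"

# Different content -> different SHA-1 -> different magnet link.
h1 = hashlib.sha1(snapshot_v1).hexdigest()
h2 = hashlib.sha1(snapshot_v2).hexdigest()
print(h1 != h2)  # True
```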


Here is the perl code that was used to scrape that data:

http://pastebin.com/8RXXthXB


Ah yes, I'm not sure there. I think maybe whoever compiled this collection scraped it themselves. I'm not aware of TPB themselves having made the proper database publicly available yet.


Well... that seems fairly damning.

