Hacker News | bryanrasmussen's comments

I don't think you could self-host Google Reader, so it sort of feels like these two sentences don't hang together.

It's more that there aren't any / many high-quality RSS reader applications; the comment is talking about a feature, so it does make sense.

theoldreader was built to be as close as possible to Google Reader. And from an interface PoV it's really close. Problem is that without critical mass you can't do the social features.

so can it be the one that gets ahead on having people go find things for them - https://news.ycombinator.com/item?id=47285283

Interesting

It seems to me this is a real Show HN; of course there is a lot of worry about those nowadays. I basically had the same idea, but to build a product on top of it. I believe I have several algorithms that assure to greater than 99.9% accuracy that website changes will not cause crawlers to stop working. What degree of accuracy have you achieved with the AI method?

Does it have ways to detect if it has failed to extract the correct fields?
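One simple way to get at that question is to run every extracted record through per-field sanity checks and flag what fails. A minimal sketch; the field names and validity rules here are my own invented examples, not anything from the Show HN:

```python
# Hypothetical post-extraction validation: the schema below is an
# assumption for illustration, not the project's actual field set.
REQUIRED_FIELDS = {
    "title": lambda v: isinstance(v, str) and len(v.strip()) > 0,
    "price": lambda v: isinstance(v, (int, float)) and v >= 0,
    "url": lambda v: isinstance(v, str) and v.startswith("http"),
}

def validate_extraction(record: dict) -> list:
    """Return the names of fields that look wrongly extracted."""
    failures = []
    for field, check in REQUIRED_FIELDS.items():
        value = record.get(field)
        if value is None or not check(value):
            failures.append(field)
    return failures

# A negative price suggests the crawler grabbed the wrong element
bad = validate_extraction({"title": "Widget", "price": -3,
                           "url": "http://example.com"})
```

A spike in failures after a site redesign is a cheap signal that the crawler has silently broken.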


William the Conqueror was a fellow who used to make horns out of the shells of large sea snails; he used to travel across the Pacific in a catamaran, from island to island, making horns and selling them.

If William the Conqueror had been on the English side at the Battle of Hastings then the English would have won, because their warning horns would have been top notch; everybody says so.


I sort of think the whole middlebrow angst thread about Bourdieu going on right now applies to LLM writing

https://news.ycombinator.com/item?id=47260028


This makes me think of the attractiveness of overly bad writing to writers, as a challenge, the most obvious example being the Bulwer-Lytton award, or the instinctive ignoring of instructions from fiction magazines that might say "we don't want any stories about murderous grandparents, French bashing, bestiality, bank robbers from the future, or kind-hearted Nazis - and especially do not try to be super brilliant and funny and send us your story about kind-hearted Nazi bank-robbing French-bashing grandparents that like killing people and having sexy fun times with barnyard animals! Because every original thinker like you thinks they are the first to have come up with that idea!" and then as a writer you feel challenged to do exactly what they say they don't want, because what a glorious triumph if you manage to outdo everyone and get your dreck published because it's dreck that is so bad it's good!

It does not seem like there are lots of people who are perversely inclined to write a story with all these tropes and words in it, but surely there must be some, because if you make something that beats the LLM (by being creatively good) using all the crap the LLM uses, it would seem like some sort of John Henry triumph (discounting the final end of John Henry of course, which is a real downer).


new cheap service jobs!

you also need to compare people injured so badly that they are significantly worse off for life after the war is over, as most of those people would probably have been killed in previous wars but thanks to modern medicine can be kept alive to suffer for years afterwards.

not a knock on modern medicine, and probably the people who survive are happy that they did for the most part, however if you compare the results in the way you did, you should compare those as well.


No, incorrect. You're conflating "people injured died much more often" with other outcomes. Survivors in past times were more often crippled in more severe ways, even when the initiating injury was minor.

At those times the only redress for many injuries was amputation. These days a very large number of injuries can be addressed with less life-changing impact.

>"should compare"

I'm not going to include "better dead than crippled" in my considerations here. It seems both absurd and also not something that was encompassed in the original point I was addressing.


OK I thought the comparison was the cost to human beings of war. But call it what rhetorically suits you best.

They're not ignoring anything, they're disagreeing with [what they perceive as] your implications. Because while some injuries that would have caused deaths cause lifetime damage now, injuries that would have caused lifetime damage cause a lot less of it now.

this might be the case; obviously advanced arthroscopic procedures can improve outcomes that in the recent past would have been lifelong problems. But I'm not sure the benefits of modern medicine are so evenly distributed across all sorts of battlefield injuries. I think not, given that the articles I read generally seem to be echoing my point rather than saying it all evens out because of this additional benefit. But maybe.

Given that I believe my point is pretty commonly argued, and their rebuttal does not seem to me to be as commonly given, I would have liked a link to something like "army study showing that the percentage decrease in what were serious injuries matches pretty evenly the percentage decrease in fatalities, showing an even distribution of medical improvements across the range of possible trauma".

Obviously it depends on what was meant by "survivors in past times", because that is a pretty long range of time.

on edit: my googling shows lots of discussions of how shoulder injuries that used to mean disability are now trivial, and so forth, but no overall stats showing that the decrease in overall battlefield injury severity matches the decrease in overall battlefield fatalities to a reasonable degree.


Yeah, if my co-worker can't start figuring out why the code is slow, given a reasonable reference to what the code in question is, that is a knock against their skills. I would actually expect some ideas as to what the problem is just off the top of their head. But the fact that the coding agent can't do that isn't a hit against it specifically; this is now a good part of what needs to be done differently.

The suggestion to tell the agent to do a performance analysis of the part of the code you think is problematic, and to offer suggestions for improvements, seems like the proper way to talk to a machine, whereas "hey, your code is slow" feels like the proper way to talk to a human.
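For what it's worth, that "performance analysis" handoff can be as simple as attaching profiler output rather than the bare complaint. A minimal sketch using Python's standard profiler; `slow_path` is a made-up stand-in for the suspect code, not anything from the thread:

```python
# Profile a suspect function and capture a text report you could hand
# to an agent (or a co-worker) instead of saying "the code is slow".
import cProfile
import io
import pstats

def slow_path(n: int) -> int:
    # Deliberately quadratic stand-in for the code under suspicion
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_path(200)
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # concrete evidence: top functions by cumulative time
```

Pasting `report` into the prompt turns "your code is slow" into "this function dominates the cumulative time; suggest improvements".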


As someone who leads a team of engineers: telling someone their code is slow is not nice, helpful, or something a good team member should do. It's like telling them there's a bug and not explaining what the bug is. Code can be slow for infinite reasons; maybe the input you gave is never expected and it's plenty fast otherwise. Or the other dev is not senior enough to know where problems may be. It could be you, when I tell you your OOP code is super slow but you've only ever done OOP and have no idea how to put data in a memory layout that avoids CPU cache misses or whatever. So no, that's not the proper way to talk to humans.

And AI is only as good as the quality of what you're asking. It's a bit like a genie: it will give you what you asked for, not what you actually wanted. Are you prepared for the AI to rewrite your Python code in C to speed it up? Can it just add fast libraries to replace the slow ones you had selected? Can it write advanced optimization techniques it learned about from PhD theses you would never even understand?

>As someone who leads a team of engineers, telling someone their code is slow is not nice, helpful or something a good team member should do

right, I'm sure there are all sorts of scenarios where that is the case, and probably the phrasing would be something like "that seems slow", or "it seems to be taking longer than expected", or some other phrasing that is actually synonymous with "the code is slow". On the other hand, there are also people you can say "the code is slow" to, and they won't worry about it.

>So no that’s not the proper way to talk to humans

In my experience there are lots of proper ways to talk to humans, and part of the propriety is involved with what your relationship with them is. So it may be the proper way to talk to a subset of humans, which is generally the only kind of humans one talks to - a subset. I certainly have friends that I have worked with for a long time who can say "what the fuck were you thinking here", or all sorts of things that would not be nice coming from other people but are in fact a signifier of our closeness, that we can talk in such a way. Evidently you have never led a team with people who enjoyed that relationship between them, which I think is a shame.

Finally, I'll note that when I hear a generalized description of a form of interaction I tend to give what used to be called "the benefit of a doubt" and assume that, because of the vagaries of human language and the necessity of keeping things not a big long harangue as every communication must otherwise become in order to make sure all bases of potential speech are covered, that the generalized description may in fact cover all potential forms of polite interaction in that kind of interaction, otherwise I should have to spend an inordinate amount of my time lecturing people I don't know on what moral probity in communication requires.

But hey, to each their own.

on edit: the "what the fuck were you thinking here" quote is also an example of a generalized form of communication that would be rude coming from other people but was absolutely fine given the source, and not an exact quote despite the use of quotation marks in the example.


maybe there should be an LLM trained on a corpus of deletions and cleanup of code.

I'm guessing there's a very strong prior toward "just keep generating more tokens", as opposed to deleting code, that needs to be overcome. Maybe this is done already, but since every git project comes with its own history, you could take a notable open-source project (like LLVM) and then do RL training against each individual patch committed.
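As a rough sketch of how you might mine those per-commit examples: walk the history, pull the diff stats for each commit, and surface the ones that remove more than they add. The `git` subcommands are standard, but the deletion-heavy filter is my own guess at how you'd select cleanup commits, not anything established:

```python
# Mine a local git clone for deletion-heavy commits as candidate
# "cleanup" training examples. Assumes `git` is on PATH.
import subprocess

def commit_hashes(repo_path: str, limit: int = 100) -> list:
    """Most-recent-first commit hashes for the repo."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{limit}", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()

def diff_stats(repo_path: str, commit: str) -> tuple:
    """Return (insertions, deletions) summed across files for one commit."""
    out = subprocess.run(
        ["git", "-C", repo_path, "show", "--numstat", "--format=", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in out.splitlines():
        parts = line.split("\t")
        # binary files report "-\t-\tpath"; skip them
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added += int(parts[0])
            deleted += int(parts[1])
    return added, deleted

def deletion_heavy(repo_path: str) -> list:
    """Commits that remove more lines than they add: cleanup candidates."""
    heavy = []
    for commit in commit_hashes(repo_path):
        added, deleted = diff_stats(repo_path, commit)
        if deleted > added:
            heavy.append(commit)
    return heavy
```

The RL step itself is the hard part, of course; this only shows that the raw signal (which patches shrank the codebase) is cheap to extract from any repo's history.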

Perhaps the problem is that you RL on one patch at a time, failing to capture the overarching long-term theme: an architecture change being introduced gradually over many months, one that exists in the maintainer's mental model but not really explicitly in the diffs.

right, it would have to be a specialized tool that you used to do analysis of the codebase every now and then, or of parts that you thought should be cleaned up.

Obviously there is a "just keep generating more tokens" bias in software management, since so many developer metrics over the years do various lines-of-code-style analyses on things.

But just as experience and managerial programs have over time come to recognize that this is a bad bias for ranking devs, it should be clear it is a bad bias for LLMs to have.


I think this is in the training data since they use commit data from repos, but I imagine code deletions are rarer than they should be in the real data as well.

deleting and code cleanup are perhaps more an expression of seniority and personal preferences. Maybe there should be the same kind of style transfer with code that you see with graphical generative AI: "rewrite this code path in the style of Donald Knuth".

I imagine there would be value in not just throwing all of GitHub's commits in as training data, but also rating their quality.
