Hacker News | Karellen's comments

    Marty: Organised crime?
    Cosmo: Trust me, it ain't that organised!


Part of my headcanon for Sneakers is that Agent Abbot (Jones) is actually Admiral Greer (Jones' character from The Hunt for Red October/Patriot Games/Clear and Present Danger), set a bit earlier in his career, and going under a codename while working CyberOps for NSA ;-)


That is just so spot on :)

(There's a whole James Earl Jones "pluriverse" out there, ain't it? ...)


Same!


Your username is very relevant!


Ha, sadly I was born with it rather than being inventive.


A win is a win is a win :)


Um, malloc() is not a system call?


No, mmap is a system call; memory allocators tend to avoid making syscalls (often avoiding them almost entirely), since object instantiation is extremely common; they also have to be concurrent, and so on.
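A minimal sketch of the pattern being described: one up-front mmap syscall reserves a region, and every subsequent allocation is just pointer arithmetic with no syscall at all. The `BumpAllocator` name and interface are hypothetical, and this toy ignores alignment, freeing, and thread safety that real allocators must handle.

```python
import mmap

class BumpAllocator:
    """Toy arena allocator: one mmap syscall up front, then pure
    offset arithmetic per allocation (no further syscalls)."""

    def __init__(self, size=1 << 20):
        self._mem = mmap.mmap(-1, size)  # one anonymous mapping from the OS
        self._offset = 0
        self._size = size

    def alloc(self, n):
        # Hand out the next n bytes of the arena; no syscall involved.
        if self._offset + n > self._size:
            raise MemoryError("arena exhausted")
        start = self._offset
        self._offset += n
        return memoryview(self._mem)[start:start + n]
```

A real allocator (glibc malloc, jemalloc, etc.) layers free lists, size classes, and per-thread caches on top of regions obtained this way, which is why most `malloc()` calls never reach the kernel.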


But then strings with the hash code HASH_MAX would wrap to 0 instead.
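A tiny sketch of the off-by-one being pointed out, assuming (hypothetically, since the original code isn't shown) that hash values span `0..HASH_MAX` inclusive and the bucket index is taken modulo the table size:

```python
HASH_MAX = 255  # assumed: hash values range over 0..HASH_MAX inclusive

def bucket_wrong(h):
    # Modulo by HASH_MAX sends h == HASH_MAX back to bucket 0,
    # colliding with every string whose hash is 0.
    return h % HASH_MAX

def bucket_right(h):
    # The table needs HASH_MAX + 1 buckets so the top value keeps its own slot.
    return h % (HASH_MAX + 1)

assert bucket_wrong(HASH_MAX) == 0          # the wrap described above
assert bucket_right(HASH_MAX) == HASH_MAX   # no collision with hash 0
```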


Can't believe no-one else has made a comment about how great the form id "230" (tooth-hurty) is, yet. Bravo.


I hope that Peter, Caitlin and Leslie aren't reading this - if you are, STOP IT.

One of the minor villains in my D&D campaign is a literal living nightmare - one of three. Each one is named after a time of night when they appear, and this one is named 2:30 - because he really likes the nightmare where your teeth all fall out.

He's the spookiest one of them all, but the least dangerous.



Is there a 2010-era feature you're relying on that LibreOffice doesn't have yet?

When was the last security update for MS Office 2010? Wikipedia reckons sometime in late 2020. It might be worth looking at alternatives if you ever open potentially untrusted documents - maybe ones that appear to have been sent by people you know.

https://en.wikipedia.org/wiki/Microsoft_Office#Support_polic...


I'm not relying on anything in particular in my current versions of Excel and Word. I'm just sticking with them through sheer inertia.

By default they don't allow macros (or editing/saving of) documents downloaded from the Internet, which means I have to enable editing on documents I download. The few times a year I get a document from an untrusted source and don't want to open it on my computer, I open it with Google Docs instead.

If I'm ever forced to upgrade, I'll likely go with LibreOffice. But so far it ain't broke.


> At a Morgan Stanley conference this month, Brian Robins, finance chief for San Francisco-based software maker GitLab, said GitLab is aligned with the goals of DOGE, because the company’s software tools aim to help people do more with less.

> “What the Department of Government Efficiency is trying to do is what GitLab does,” Robins said.

...well, fuck


I need to give a name to my theory which posits that horseshoe theory is a bullshit right-wing talking point, no different from the classic villain trope "We are not so different, you and I", where one side admits to being awful but uses false analogies to try and paint the other with the same brush, and the other rejects both the comparison and the conclusion.

The underlying goal of horseshoe theory is not to create a meaningful comparison between two positions, but an underhanded attempt to demoralise those on the left, and to swing undecided centrists by convincing them that the left isn't really offering the progress that it claims. I think it's also used as a shield by people who are right-leaning but don't want to admit it out loud.

...unless you can find a single good example of a notable left-wing proponent suggesting that horseshoe theory is valid, actually.


This, and 1000 times this. It is so absurd: of course it seems plausible, ad hoc, to treat roughly similar things as if they were the same. But try doing that about anything else on this forum, a community that digs into all kinds of details, and you will get called out.

But somehow – SOMEHOW – the same people who ask for nuance in everything act as if it were even remotely plausible that the two polar opposites of political theory would, if thought through to the end, be basically the same for all important intents and purposes.

It is simply mind-blowing. People look at something, see that it is complex, stop thinking, and just feel their way to the emptiest assessment possible: "probably the same consequences if you think it to the end". Without ever having begun to think their way through it!

But I get it: thinking is nice as long as it is a purely intellectual endeavor, but not when any personal moral responsibility is involved. You might be morally obligated to draw consequences for your behavior – Heaven forbid!


That takes a really long time though. Most domestic dogs can still breed with wild wolves after ~14,000 years of being pretty well separated by humans, and after some fairly substantial phenotypic shifts.


> During the discussion, Hu said that even if product builders rely heavily on AI, one skill they would have to be good at is reading the code and finding bugs.

Naturally, that brings to mind the classic:

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

-- Brian Kernighan, The Elements of Programming Style

And also:

> Programs must be written for people to read, and only incidentally for machines to execute.

-- Abelson & Sussman, Structure and Interpretation of Computer Programs


> Programs must be written for people to read, and only incidentally for machines to execute.

-- Abelson & Sussman, Structure and Interpretation of Computer Programs

I've been living by this my entire career without ever having read this book. I just snagged it. I really do believe that our primary job is to write code that people can understand, and I justify it with a two-axis analysis: code is either correct or incorrect, and comprehensible or incomprehensible. Code that is correct doesn't require any attention and isn't worth considering. All the code we care about is incorrect, then, and so it's either comprehensible (and therefore fixable) or it's not. Given those facts, and the understanding that code rot and AC changes cause all code to become incorrect over time, my primary job is to write code that other developers can understand so that when it becomes incorrect they can do something about it.


> Code that is correct doesn't require any attention and isn't worth considering.

Until the requirements of the system (of any kind) change and now the code isn't 'correct' for the new requirements and needs updating.


>Given those facts, and the understanding that code rot and AC changes cause all code to become incorrect over time...

---the OP


AC changes? Can you expand that acronym?


I believe Acceptance Criteria


> Programs must be written for people to read, and only incidentally for machines to execute.

-- Abelson & Sussman, Structure and Interpretation of Computer Programs

I think a lot about this whenever I hear blanket statements about software performance. A program should be optimized for its performance on whichever piece of hardware it executes on with the highest cost / hour. 95+% of the time that piece of hardware is yours or your coworker's brain.


Sounds like you're just trying to justify poor software development practices. You can have code that is both performant and readable. Programmers once had to write software for machines 100 times weaker than current machines, yet they had no issue creating software more complex than anything we'll ever build.


This stood out to me as well, along with this quote:

> "Let's say a startup with 95% AI-generated code goes out [in the market], and a year or two out, they have 100 million users on that product, does it fall over or not? The first versions of reasoning models are not good at debugging. So you have to go in depth of what's happening with the product," he suggested.


> So if you're as clever as you can be when you write it, how will you ever debug it?

LLMs do not write very "clever" code by default. Without prompting them continuously to make it more "clever", they tend to write lots and lots of simple code, vs writing "clever" code that reduces redundant code, improves performance, etc.

What I am curious about is if these slop-filled codebases will be a problem or not in the future - traditionally it's been bad practice to have duplicate code everywhere, but with LLMs it feels like it matters less, as long as the code is simple and readable.


> traditionally it's been bad practice to have duplicate code everywhere, but with LLMs it feels like it matters less, as long as the code is simple and readable.

duplicate code is a bad practice "traditionally" because it means if you have a bug, you have to fix it in N spots (each of which may have drifted to be slightly different) instead of just 1.

how do LLMs improve that? if you have a bug (which everyone seems to agree happens more often with LLM-generated code) you'll still need to fix it in N spots. being able to feed those N instances into the LLM and ask it to fix the bug maybe speeds the process up a little, but it doesn't solve the underlying problem.

when I got into the industry in the 2000s, saving costs by outsourcing to India was the hype cycle of the day. would you have the same opinion that duplicate code doesn't matter, because you can just pay cheap outsourced engineers to make those N redundant bugfixes?
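A hypothetical illustration of the "fix it in N spots" cost. The handler and helper names below are made up; the point is only that a bug pasted into two places must be found and fixed in both, while a factored-out check gets fixed once for every caller:

```python
# Duplicated version: the same validation pasted into two handlers.
# Both copies share the same bug (empty strings slip through).

def create_user_duplicated(name):
    if name is None:          # bug: forgets the empty-string case
        raise ValueError("missing name")
    return {"user": name}

def rename_user_duplicated(user, name):
    if name is None:          # same bug, a second copy to hunt down
        raise ValueError("missing name")
    user["user"] = name
    return user

# Factored version: one fix covers every caller.

def require_name(name):
    if not name:              # fixed once: rejects both None and ""
        raise ValueError("missing name")

def create_user(name):
    require_name(name)
    return {"user": name}
```

Whether the N copies are fixed by an LLM or by an outsourced team, the drift problem is the same: each copy may have mutated slightly, so every fix has to be re-verified N times.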


I wasn’t suggesting anyone should duplicate code or that duplication doesn’t have downsides. My point was more about how LLMs, by default, tend to produce less “clever” (and often more repetitive) code. We know that duplicated code can be painful when bugs need fixing across multiple places, but I’m curious how programming practices will evolve moving forwards as I don't see this going away in the short-term.


I'd imagine AI refactoring meta layers will continue to develop and grow in importance with time. Right now the focus is on generating reams of new code. As time passes this will inevitably shift to more of a focus on maintaining the code.

I would expect there to be innovation in this arena as well, beyond auto-generating code comments for future maintainers.

