
I think a super common problem with any todo system is the "capture anything" mindset. These systems have even redefined what "focus" means; now it just means attending to whatever thing you happen to be focused on at that moment.

Focus is supposed to mean you have a clear idea of who you are and what you need to work on, and also what you don't.

So I've taken to following a (bespoke) process where I identify what my own personal principles are, and what priorities and efforts they imply. Then, for all the "oh, I could/should do this" potential tasks that occur to me, I have an out: if it doesn't align with my own personal focus, I can delete it.


This resonates — the real superpower is having a clear “no”, not capturing everything.

One idea I’m exploring with *Concerns* is making that constraint explicit: when you set “active goals/projects”, you can only keep a *small fixed number* (e.g. 3–5). Anything else becomes “not active”, so the system won’t surface it or turn it into tasks.

Curious: what’s your number—3, 5, or 10—and what rule do you use to decide what gets to be “active”?


Well that's what akrasia is. It's not necessarily a contradiction that needs to be reconciled. It's fine to accept that people might want to behave differently than how they are behaving.

A lot of our industry is still based on the assumption that we should deliver to people what they demonstrate they want, rather than what they say they want.


If you have a ChatGPT account, there's nothing stopping you from installing Codex CLI and using your ChatGPT account with it. I haven't coded with ChatGPT directly for weeks. Maybe a month ago I got utility out of coding with Codex and then having ChatGPT look at my open IDE page to give comments, but since 5.2 came out, it's been 100% Codex.

I love rebasing locally, especially when I have a few non-urgent branches that sit around for a little while. I hate rebasing after pushing. The rule of thumb that has worked for me is "don't rewrite someone else's history". Rewriting origin's history is not so bad, but if there's even a chance that a team member has pulled, or based work off your pushed branch (ugh), rebase is horrible.
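In command form, the rule of thumb looks roughly like this (assuming a typical origin/main workflow; the branch name is just a placeholder):

    # Before pushing: rewrite freely, only your local commits move.
    git fetch origin
    git rebase origin/main

    # After pushing: a rewrite requires a force push. --force-with-lease at
    # least refuses to clobber commits someone else pushed in the meantime,
    # but it can't help a teammate who already based work on the old history.
    git push --force-with-lease origin my-branch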


The parent poster is saying a healthy newsroom is much better than one guy. You're disagreeing by saying one guy is not better than an unhealthy newsroom.


I don't want to nitpick, but they didn't say "healthy", and I think the current situation wrt news ownership should be called out at every opportunity, because not everyone is aware of it.


*if* there are healthy newsrooms


Cultural problem too... even before AI, in recent years there's been more of a societal push toward the idea that it's fair game to just lie to people. Not that it didn't always happen, but it's more shameless now. Like... I don't know, just to pick one: actors pretending to be romantically involved for the PR of their upcoming movie. That seems way more common than I remember in the past.


While I agree with you, your example is not a great one. There are examples of fake relationships between stars dating back to the start of talkies.

But I do agree. It is more socially acceptable to just lie, as long as you're trying to make money or win an argument or something. It's out of hand.


Do you have any data to back up the claim that "it is more socially acceptable to lie"? I looked a bit and could not find anything either way.

The impression can be a bias of growing up. Adults will generally teach children to tell the truth and insist on it. As one grows up, one is less constrained and can tell many "white lies" (low-impact lies).

Some people (well-known people, influencers, etc.) do have more impact than before because of network effects.


There is this study that claims/proves that dishonesty/lying is socially transmissible, and

> The question of how dishonesty spreads through social networks is relevant to relationships, organizations, and society at large. Individuals may not consider that their own minor lies contribute to a broader culture of dishonesty. [0]

the effect of which would be massively amplified if you take into account that

> Research has found that most people lie, on average, about once or twice per day [1]

where the most prolific liars manage upward of 200 per day. You can then imagine that, with the rise and prevalence of social media, the acceptance/tolerance has also been socially transmitted.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC4198136/

[1] https://gwern.net/doc/psychology/2021-serota.pdf


Interesting, but one assumption behind "would be massively amplified" is that we are more connected. It seems that people are (or at least feel) more lonely (ref: https://www.gse.harvard.edu/ideas/usable-knowledge/24/10/wha...).

So, while dishonesty can spread through social networks, that does not tell us whether total dishonesty is higher than, lower than, or equal to, say, 100 years ago, because there are many factors involved.


I'll link you a study that investigates this, but unfortunately it is paywalled if you don't have a paid Springer account: https://link.springer.com/chapter/10.1007/978-3-319-96334-1_...


Let me introduce you to an actor named Rock Hudson. *coughs in black and white* /s


I think the real distinction is whether the output came from the artist's human intention, or whether someone just said "let's just see what happens!"... it's sort of impossible to reach inside the artist's brain to find out where that line is. I suppose the only test is to start with that same intention multiple times and see how widely the output varies.


Wasn't your intention whatever you typed in? That doesn't make you an artist, and I don't want to hear the music an AI made just because you happened to type some words and hit enter.


[flagged]


> the rest of us will be here enjoying great music

Speak for yourself.


We have different definitions for great.


Not really. If I plug in a (real or emulated) Eurorack and frob knobs at random just to see what happens, the resulting hour-long noise will be described as experimental, boring, profound, a piece of trash, etc. (e.g. check the reviews of Beaubourg by Vangelis). It is not going to be put in the same spot as AI slop.

While intent is of course important, the quantity and manner of taking others' work and calling it my own plays, I think, an even bigger role. If I go "hey, check out this Bohemian Rhapsody song I just created using Google Search", I do not think much regard will be given to my intent.


I understand those distinctions, and I can definitely see people caring about that, although telling the difference from the outside seems impossible.

That's why I choose to make the distinction by just not caring about any kind of music that uses any kind of AI.


As always, this requires nuance. Just yesterday and today, I did exactly that to my direct reports (I'm director-level). We had gotten a bug report, and the team had collectively looked into it and believed it was not our problem but that of an external vendor. We reported it to the vendor, who looked into it, tested it, and then pushed back and said it was our problem. My team is still more LLM-averse than I am, so I had Codex look at it; it believed it had found the problem and prepared a PR. I did not review or test the PR myself, but instead assigned it to the team to validate, partly for the learning. They looked it over and agreed it was a valid fix for a problem on our side. I believe that process was better than me just fully validating it myself, and part of the process of encouraging them to use LLMs as a tool for their work.


> I believe that process was better than me just fully validating it myself

Why?

> and part of the process toward encouraging them to use LLM as a tool for their work.

Did you look at it from their perspective? You set the exact opposite example and serve as a perfect example for TFA: you did not deliver code you had proven to work. I imagine some would find this demoralizing.

I've worked with a lot of director-level software folk and many would just do the work. If they're not going to do the work, then they should probably assign someone to do it.

What if it didn't work? What if you just wasted a bunch of engineering time reviewing slop? I don't comprehend this mindset. If you're supposedly a leader, then lead.


Two decades ago, well before any LLMs, our CEO did that with a couple of huge code changes: he hacked together a few things and threw them over the wall to us (10K lines). I was happy I did not get assigned to deal with that mess, but getting it into production-quality code took more than a month!

"But I did it in a few days, how can it take so long for you guys?" was not received well by the team.

Sure, every case is different, and maybe here it made sense if the fix was small and testing it was simple. Personally (also in a director-level role today), I'd rather lead by example and do the full story, including testing, and especially writing automated tests (with an LLM's help or not), especially if it is small. (I actually did that ~12 months ago to fix misuse of mutexes in one of our platform libraries, when everybody else was stuck because our multi-threaded code behaved like single-threaded code.)

Even so, I prefer to sit with them and ask out loud the questions I'd be asking myself on the path to a fix: letting them learn how I get to a solution is even more valuable, IMO.


So, I've been playing with an MCP server of my own... the API the MCP server talks to is something that can create/edit/delete argument structures, like argument graphs: premises, lemmas, and conclusions. The server has a good syntactical understanding of arguments, how to structure syllogisms, etc.

But it doesn't have a semantic understanding, because it's not an LLM.

So connecting an llm with my api via MCP means that I can do things like "can you semantically analyze the argument?" and "can you create any counterpoints you think make sense?" and "I don't think premise P12 is essential for lemma L23, can you remove it?" And it will, and I can watch it on my frontend to see how the argument evolves.

So in that sense - combining semantic understanding with tool use to do something that neither can do alone - I find it very valuable. However, if your point is that something other than MCP can do the same thing, I could probably accept that too (especially if you suggested what that could be :) ). I've considered just having my backend use an API key to call models directly, but that's a different pattern that would require me to write a whole lot more code (and pay more money).
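To make the shape of this concrete, here's a minimal sketch of the tool side using the official MCP Python SDK (FastMCP). The in-memory graph and the tool names here are simplified stand-ins, not my actual API:

    # Sketch only: the in-memory "graph" is a placeholder for the real
    # argument-graph backend described above.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("argument-graph")

    # Hypothetical store; the real server would call the backend API instead.
    graph = {"premises": {}, "lemmas": {}}

    @mcp.tool()
    def add_premise(premise_id: str, text: str) -> str:
        """Add a premise node to the argument graph."""
        graph["premises"][premise_id] = text
        return f"Added premise {premise_id}"

    @mcp.tool()
    def remove_premise_from_lemma(premise_id: str, lemma_id: str) -> str:
        """Detach a premise from a lemma, e.g. 'remove P12 from L23'."""
        lemma = graph["lemmas"].setdefault(lemma_id, {"premises": []})
        if premise_id in lemma["premises"]:
            lemma["premises"].remove(premise_id)
            return f"Removed {premise_id} from {lemma_id}"
        return f"{premise_id} was not attached to {lemma_id}"

    if __name__ == "__main__":
        mcp.run()  # serve the tools over stdio for an MCP-capable client

An LLM client connected over MCP can then translate "I don't think premise P12 is essential for lemma L23, can you remove it?" into a remove_premise_from_lemma("P12", "L23") call, while the semantic judgment stays on the LLM side.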


Sometimes I think I'm the only person on the planet that grew up knowing it as "Rock Scissors Paper".

I still defend that as the best name in English. Rock needs to go first. Rock beats Scissors beats Paper. It's the most straightforward order.


Is there niche "endian" humor to be had here? :) E.g., is "Rock Paper Scissors" the little-endian, big-endian, or "middle-endian" ordering? Excuse my really poor attempt at this.


If you call it Rock-Paper-Scissors it still follows logically:

Rock loses to Paper loses to Scissors


Why would you want to describe it as "loses to" rather than "beats"? People want to win, not lose.


Or any other ordering


Yes, but I was specifically choosing the ordering that is by far the most popular, at least in American English.


That rings a bell, as in I think I had some classmates as a kid who called it that, and I remember thinking they were weird. I'd guess it was maybe 1:3 in favor of RPS over RSP.


I grew up calling it "Paper Scissors Rock"!


In German it's

"Schere Stein Papier" ("Scissors Stone Paper")

