Hacker News | gnatman's comments

Not every book in a library is meant to be read like a novel. Some books need only be referenced, briefly and periodically.

The article doesn't say much about what was in the library, but it does mention that it contained 10,000 thriller novels.

Karma farming to frontpage more AI news and startups?


I’m of the belief that doing just about anything every single day for a year will change your life! A key for me has been to “lower the bar” so that I can keep the promise to myself and maintain momentum through days of low energy or enthusiasm, e.g. playing the guitar for 1 minute, or writing 1 sentence.


I took a 20-minute walk every day for a year. I don't know that it changed my life, but I think it kept me healthier than I otherwise would have been at the height of the pandemic, and it also gave me a point of pride in saying that I had the resolve and discipline to do something every day for a full year, come what may (did?).

It taught me the importance of ritual, and it also taught me how... incredibly imperceptive a lot of people are. (I was living with a family member at the time, who was constantly asking me if I was "getting out of the house" regularly. Yes. Every day. For a month, and then 3 months, and then half-a-year, and then almost a full year, and then more than a year. On that note, it's essential to not let others' expectations cloud your appreciation for what you're doing. Somehow, it had wormed its way into my subconscious rationale that part of the reason that I was taking my walks was to live up to their expectations. When it became clear that they didn't really care - at least not enough to notice - that kind of deflated things a bit.)


Similarly, just showing up at the gym/hobby/sport is huge. Even if you do next to nothing.


A stranger I once talked to at the gym told me "every workout is better than the workout you are not doing", and that kinda changed my perspective on that topic.


Yeah, I go bouldering even on off days to "stay in the rhythm". And I honestly do have terrible days where I feel I'm struggling on climbs even a grade below my comfort level, but at least I went lol.


How do you stay injury-free climbing every day? I feel like even at twice a week I'm entering the danger zone with ligament issues.


I suspect they mean "days that I feel off" rather than "every day". Even elite climbers struggle with ligament issues climbing every single day.


Name checks out! Your clarification makes sense on a second read. Thanks


Yeah I definitely meant on days I feel off, and not that I climb every day haha. My body wouldn’t allow me to do daily climbs :)


The best form of exercise is the one you can consistently stick with.

For me, that got shot down in flames over the winter because I kept getting sick. :/


I didn't go to the gym a single day in November or December, and it was heart-breaking when I started again in January to see how far I had been set back. But slowly I got back to a good rate again.

A week ago someone asked why I was going to the gym that evening and I said, "Because it will make going tomorrow so much easier."

Start again.


I cycled almost every day for a few years, then took a 6-week holiday where we walked 15-20k steps per day. I thought that I'd be OK when I got back on the bike.

I was not ok.


Someone said it, I forget who: 90% of life is just showing up


Especially true for friendship. If you want friends, all you have to do is be in the same place with the same people regularly.


I think I recognize your handle, and I’m not sure we agree on much, but because of your comment I’m seeing a friend I haven’t seen in a decade. I sincerely thank you for the nudge.

I'm glad I could help! Do you mean you recognize my handle from previous interactions on here or in another setting? I haven't used this one anywhere else.

Just here. I’m not a prolific World Wide Web traveler.

I think it was Woody Allen.


Atomic Habits is a great book for little things like this that make a big difference when compounded with time.


I imagine reading a book about habits every day for a year would be life changing. :)


The hardest step is usually getting started, at least for me. Reducing the cost of getting started feels like half the job.

This usually means having the supplies ready and the tools out.


It's basically a form of meditation. It's a great way to get your life back on track.


That's definitely not universal. I play/practice music, almost every day for at least 30 minutes, and it has no influence on my life, as far as I can tell. I cannot imagine that playing the guitar for a minute has any.


It feels bizarre to claim that playing guitar for 30 minutes a day has no influence on your life. Surely it brings you joy or satisfaction or keeps your skills up if you're a professional. Why do you do it if there's no influence? Couldn't you use that time for something else?

For fun. But I don't see how it influences/changes my life.

You think having fun has no influence on your life? You think sitting bored in a chair would give you an equivalent life experience?

Yeah, doing a small thing daily can add up so fast.

When I started my niche-musueums.com website I bootstrapped it by posting a new museum I had been to every day for a month. It took 15-30 minutes a day and within a few weeks I had a site I was really proud of.

I think the key is to give yourself permission to stop without feeling guilty about it. Any time I start a new streak like this I deliberately tell myself that it's not going to be forever and I can stop any time for any reason.


I love your website! Your URL has a typo; here it is fixed in the meantime: https://www.niche-museums.com


That's


>lower the bar

the classic: "aim low, avoid disappointment"


>> On January 13th, I woke up to the news that Meta had another round of layoffs and my role specifically as a research engineer had been eliminated.

Not even 10x dog programmers are surviving in this economy


That’s not what this is about, it’s about access to dealership level diagnostic software.

But you don’t have to wait for the farmers, you could “get Claude to code an entire car software and flash it onto your own hardware and put it in your car.” Post back here with your results!


[flagged]


You sound naive and people are mocking you, but honestly, who knows. Maybe there's nothing holding this back other than AI skepticism.

The proper way to find out is to get some piece of heavy machinery and try it out. Maybe not a tractor necessarily, but something that presents a similar quality of risks, even if at a smaller scale. Maybe a forklift?

I think it's a bad idea so I won't do it, but why don't you?


Give it a shot, I guess. Sounds like you’ll have a big market in Iowa.


because all the shit is locked down and the corpos can use state violence to stop you from doing so if you manage to succeed


If you go back to a random much older post you’ll find emdashes aplenty.

e.g. https://writings.stephenwolfram.com/2014/07/launching-mathem...


Plot twist - AI reasoned that Stephen Wolfram actually was the smartest human and thus chose to emulate his writing style.


Well, he writes often enough, for long enough, and being who he is, he's got to be a large part of everyone's training data.


You're absolutely right!


LLMs sure do love to burn tokens. It’s like a high schooler trying to meet the minimum word length on a take home essay.


I've always wondered about that. LLM providers could easily decimate the cost of inference if they got the models to just stop emitting so much hot air. I don't understand why OpenAI wants to pay 3x the cost to generate a response when two thirds of those tokens are meaningless noise.
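
The arithmetic behind that claim can be checked directly; a quick sketch, where the token counts are made-up illustrative numbers:

```python
# If two thirds of emitted tokens are noise, the useful tokens are one
# third of the total, so the bill is 3x what the useful content alone
# would cost. (Token counts here are made up for illustration.)
useful_tokens = 400
noise_fraction = 2 / 3
total_tokens = useful_tokens / (1 - noise_fraction)
cost_multiplier = total_tokens / useful_tokens
assert abs(cost_multiplier - 3) < 1e-9
```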


Because they don't yet know how to "just stop emitting so much hot air" without also removing their ability to do anything like "thinking" (or whatever you want to call the transcript mode), which is hard because knowing which tokens are hot air is the hard problem itself.

They basically only started doing this because someone noticed you got better performance from the early models by straight up writing "think step by step" in your prompt.
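
The trick being described is literally just a prompt suffix; a toy sketch (the helper name and phrasing are made up, not any provider's API):

```python
def build_prompt(question: str, chain_of_thought: bool = True) -> str:
    """Zero-shot chain-of-thought: appending "think step by step"
    nudges the model to emit intermediate reasoning tokens before
    its final answer."""
    suffix = "\n\nLet's think step by step." if chain_of_thought else ""
    return question + suffix
```

Reasoning models effectively bake this behavior in via training, rather than relying on the user to add the suffix.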


I would guess that by the time a response is being emitted, 90% of the actual work is done. The response has been thought out, planned, drafted, the individual elements researched and placed.

It would actually take more work to condense that long response into a terse one, particularly if the condensing was user specific, like "based on what you know about me from our interactions, reduce your response to the 200 words most relevant to my immediate needs, and wait for me to ask for more details if I require them."


“Sorry for the long letter, I would have written a shorter one but I didn’t have the time.”


IMO it supports the framing that it's all just a "make document longer" problem: our human brains are primed for a kind of illusion, where we perceive/infer a mind because, traditionally, a mind was the only thing that could produce such fitting language.


To an extent. Even though they're clearly improving*, they also definitely look better than they actually are.

* this time last year they couldn't write compilable source code for a compiler for a toy language, I know because I tried


This time last year they could definitely write compilable source code for a compiler for a toy language if you bootstrapped the implementation. If you, e.g., had it write an interpreter and use the source code as a comptime argument (I used Zig as the backend -- Futamura transforms and all that), everything worked swimmingly. I wasn't even using agents; ChatGPT with a big context window was sufficient to write most of the compiler for some language for embedded tensor shenanigans I was hacking on.


Used to need the "if", now SOTA doesn't.

SOTA today has a different set of caveats, of course.


An LLM uses constant compute per output token (one forward pass through the model), so the only computational mechanism to increase 'thinking' quantity is to emit more tokens. Hence why reasoning models produce many intermediary tokens that are not shown to the user, as mentioned in other replies here. This is also why the accuracy of "reasoning traces" is hotly debated; the words themselves may not matter so much as simply providing a compute scratch space.

Alternative approaches like "reasoning in the latent space" are active research areas, but have not yet found major success.
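
The constant-compute-per-token point can be sketched with the common back-of-the-envelope approximation of roughly 2·N FLOPs per output token for an N-parameter model (the parameter count below is illustrative):

```python
def forward_pass_flops(n_params: float, n_tokens: int) -> float:
    # Rough rule of thumb: one forward pass costs about 2 * n_params
    # FLOPs per token, so total inference compute scales linearly
    # with the number of tokens emitted.
    return 2 * n_params * n_tokens

N = 7e9  # illustrative 7B-parameter model
terse = forward_pass_flops(N, 100)
verbose = forward_pass_flops(N, 1000)
assert verbose / terse == 10.0  # 10x the tokens -> 10x the compute
```

This is why "emit more tokens" is the only knob a fixed-size transformer has for spending more compute on a hard problem.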


My assumption has been that emitting those tokens is part of the inference, analogous to humans "thinking out loud".


You're absolutely right!


This is an active research topic - two papers on this have come out over the last few days, one cutting half of the tokens and actually boosting performance overall.

I'd hazard a guess that they could get another 40% reduction, if they can come up with better reasoning scaffolding.

Each advance over the last 4 years, from RLHF to o1 reasoning to multi-agent, multi-cluster parallelized CoT, has resulted in a new engineering scope, and the low-hanging fruit in each place gets explored over the course of 8-12 months. We still probably have a year or 2 of low-hanging fruit and hacking on everything that makes up current frontier models.

It'll be interesting if there's any architectural upsets in the near future. All the money and time invested into transformers could get ditched in favor of some other new king of the hill(climbers).

https://arxiv.org/abs/2602.02828 https://arxiv.org/abs/2503.16419 https://arxiv.org/abs/2508.05988

Current LLMs are going to get really sleek and highly tuned, but I have a feeling they're going to be relegated to a component status, or maybe even abandoned when the next best thing comes along and blows the performance away.


The 'hot air' is apparently more important than it appears at first, because those initial tokens are the substrate that the transformer uses for computation. Karpathy talks a little about this in some of his introductory lectures on YouTube.


Related are "reasoning" models, where there's a stream of "hot air" that's not being shown to the end-user.

I analogize it as a film noir script document: The hardboiled detective character has unspoken text, and if you ask some agent to "make this document longer", there's extra continuity to work with.


I can only imagine that someone's KPIs are tied to increasing rather than decreasing token usage.


The one that always gets me is how they're insistent on giving 17-step instructions to any given problem, even when each step is conditional and requires feedback. So in practice you need to do the first step, then report the results, and have it adapt, at which point it will repeat steps 2-16. IME it's almost impossible to reliably prevent it from doing this, however you ask, at least without severely degrading the value of the response.


because for API users they get to charge for 3x the tokens for the same requests


Because inference costs are negligible compared to training costs


The long incremental reasoning is how they arrive at higher quality answers.

Some applications hide the reasoning tokens from view, but then the final answer appears delayed.


I feel like this has gotten much worse since they were introduced. I guess they're optimizing for verbosity in training so they can charge for more tokens. It makes chat interfaces much harder to use IMO.

I tried using a custom instruction in ChatGPT to make responses shorter, but I found the output was often nonsensical when I did this.


Yeah, ChatGPT has gotten so much worse about this since the GPT-5 models came out. If I mention something once, it will repeatedly come back to it in every single message after, regardless of whether the topic has changed, and asking it to stop mentioning that specific thing works, except it then finds a new obsession. We also get the follow-up "if you'd like, I can also..." which is almost always either obvious or useless.

I occasionally go back to o3 for a turn (it's the last of the real "legacy" models remaining) because it doesn't have these habits as badly.


It's similar for me: it generates so much content without me asking. If I just ask for feedback or proofreading on something, it tends to regenerate it in another style. Anything is barely good to go; there's always something it wants to add.


Claude is so much better for proofing, IMO.

Over the last few years I’ve rotated between OpenAI and Anthropic models on about a 4-5 month cycle. I just started my Anthropic cycle because of my annoyance with the GPT-5.2 verbosity

In four months when opus is annoying me and I forget my grievances with OpenAI’s models and switch back, I’ll report back lol.


It's also annoying when it starts obsessing over stuff from other chats! Like I know it has a memory of me but geez, I mention that I want to learn more about systems design and now every chat, even recipes, is like "Architect mode - your garlic chicken recipe"

Like, no, stop that! Keep my engineering life separate from my personal life!


I'm suspicious it's something far worse: they're increasingly being trained on their own output scraped from the wild.


Because that's where the compute happens, in those "verbose" tokens. A transformer has a size, it can only do so many math operations in one pass. If your problem is hard, you need more passes.

Asking it to be shorter is like doing fewer iteration of numerical integral solving algorithm.
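
That analogy can be made literal; a sketch with the composite trapezoid rule, where cutting the iteration count degrades the answer the same way cutting tokens cuts available compute:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoid rule with n subintervals.
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * ((f(a) + f(b)) / 2 + interior)

# The integral of sin(x) over [0, pi] is exactly 2.
coarse = trapezoid(math.sin, 0.0, math.pi, 4)    # few "iterations"
fine = trapezoid(math.sin, 0.0, math.pi, 1000)   # many
assert abs(fine - 2.0) < abs(coarse - 2.0)       # more steps, less error
```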


Yeah, but all the live models in ChatGPT have reasoning (IIRC) - they could use reasoning tokens to do the 'compute', and still show the user a succinct response that directly answers the query.


Oh good, it's not just me. Sometimes I'd have it draft an email or something and then the message seems perfect but then it's like "tell me more about the recipient and I'll make it better."

Like, my guy, I don't want to keep prompting you to make shit better, if you're missing info, ask me, don't write a novel then say "BTW, this version sucked"

Yes, I know this could probably be resolved via better prompting or a system prompt, but it's still annoying.


well, they probably have quite a lot of text from high schoolers trying to meet the minimum word length on a take home essay in the training data


Solution: just add "no yapping" to the prompt.


Same. I usually add a "Be curt" in front of every prompt in Gemini.


Is that more effective than simply adding it to your user instructions?


No you’re correct but I’ve experienced a bug with older Workspace business accounts where you can’t reach the screen for user instructions. It just remained blank.


I mean their whole existence is about token prediction, so they just want to do their things :)


Is this the new “why don’t coal miners just learn how to code?”


OK but the difference between $0.00 and $0.01 is also 1 cent.


And your data.


the Master

