
Thanks for sharing, I like your approach.

As easy as WordPress, Squarespace, or whatever claim to be, in my mind this is even easier: the solution is more elegant, and you expose yourself to a whole lot less crap along the way.

Edit: I recognise this doesn't cover hosting, domain registration etc.


Thanks! Yeah, I find this a lot easier. And yes, hosting is an exercise left to the reader :-) I might write about my hosting setup in an upcoming blog post.

Can you please share the problem?


I don't really want it added to the training set, but eh. Here you go:

> Assume I have a 3D printer that's currently printing, and I pause the print. What expends more energy: keeping the hotend at some temperature above room temperature and heating it up the rest of the way when I want to use it, or turning it completely off and then heating it all the way when I need it? Is there an amount of time beyond which the answer varies?

All LLMs I've tried get it wrong because they assume the hotend cools immediately when the heating stops, but they realize this when asked about it. Qwen didn't realize it, and answered that 30 minutes of keeping the hotend heated is better than turning it off and back on when needed.


What kind of answer do you expect? It all depends on the hotend shape and material, temperature differences, how fast air moves in the room, humidity of the air, etc.


Keeping something above room temperature will always use more energy than letting it cool down and heating it back up when needed


> It all depends on

No it doesn't.


Sounds like the LLM you used when writing this slop comment struggled with the problem too. :>


Qwen3-32B did it pretty accurately, it seems. It calculated heat loss over time toward ambient temperature, and offered to keep it at a 100 C standby for short breaks under 10 minutes, and to shut down completely for longer breaks.


The correct answer is that it's always better to turn it off, though.


Unless you care about warmup time. LLMs have a habit of throwing in common-sense assumptions you didn’t ask for, so you have to be careful of that.

It’s not a bug. Outside of logic puzzles that’s a very good thing.


No, warmup time doesn't change anything, I can simply factor it in.

It is a bug, because I asked it precisely what I wanted, and it gave the wrong answer. It didn't say anything about warmup time, it was just wrong.


Ah! This problem was given to me by my father-in-law in the form of operating pizza ovens in the Midwest during winter. It's a neat, practical one.


Some calculation around heat loss and required heat expenditure to reheat per material or something?


Yep, except for one case they calculate the heat loss and the energy required to keep heating, but for the other case they assume the hotend is at room temperature and calculate the energy required to heat it from there, so they wildly overestimate one side of the problem.


Unless I'm missing something, holding it hot is pure waste.


Maybe it will help to have a fluid analogy. You have a leaky bucket. What wastes more water, letting all the water leak out and then refilling it from scratch, or keeping it topped up? The answer depends on how bad the leak is vs how long you are required to maintain the bucket level. At least that’s how I interpret this puzzle.


Does it depend though?

The water (heat) leaking out is what you need to add back. As the water level drops (hotend cools), the leaking slows. So any replenishing means more leakage, which you eventually pay for by adding more water (heat) in.


You can stipulate conditions to make the solution work out in either direction.

Suppose the bucket is the size of a lake, and the leak is so minuscule that it takes many centuries to detect any loss. And also I need to keep the bucket full for a microsecond. In this case it is better to keep the bucket full than to let it drain.

Now suppose the bucket is made out of chain-link and any water you put into it immediately falls out. The level is simply the amount of water that happens to be passing through at that moment. And also the next time I need the bucket full is after one century. Well in that case, it would be wasteful to be dumping water through this bucket for a century.


All heat that is lost must be replaced (we must input enough heat that the device returns to T_initial).

Hotter objects lose heat faster, so the longer we delay restoring temperature (for a fixed resume time) the less heat is lost that will need replacement.

Hotter objects require more energy to add another unit of heat, so the cooler we allow the device to get before re-heating (again, resume time is fixed) the more efficient our heating can be.

There is no countervailing effect to balance: preemptive heating of a device before the last possible moment is pure waste no matter the conditions (the amount of waste will vary a lot, but it will always be a positive number).

Even turning the heater off for a millisecond is a net gain.
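
The argument can be sanity-checked with a tiny first-order (Newton's law of cooling) simulation. All constants below are illustrative guesses, not measurements from any real hotend, and the fixed-point search is just one way to model "start reheating so it's ready exactly at resume time":

```python
# Lumped thermal model: dT/dt = P/C - K*(T - T_ROOM). Constants are
# made-up illustrative values, not real hotend measurements.
K = 0.01        # fractional heat-loss rate, 1/s
C = 10.0        # heat capacity of the hotend, J/degC
T_ROOM = 20.0   # ambient temperature, degC
T_HOT = 200.0   # printing temperature, degC
P_MAX = 40.0    # heater power, W
DT = 0.05       # simulation time step, s

def energy_keep_warm(pause_s):
    """Thermostat holds T_HOT: the heater replaces losses at the maximum
    temperature difference for the entire pause."""
    return K * C * (T_HOT - T_ROOM) * pause_s

def energy_off_then_reheat(pause_s):
    """Heater off, then full power, timed so the hotend is back at T_HOT
    exactly when the pause ends. Since T starts and ends at T_HOT, the
    heater energy equals the heat lost to the room along the trajectory."""
    warmup = 0.0
    for _ in range(40):                      # fixed-point search for warmup time
        T, losses, t = T_HOT, 0.0, 0.0
        while t < pause_s - warmup:          # cool freely
            losses += K * C * (T - T_ROOM) * DT
            T -= K * (T - T_ROOM) * DT
            t += DT
        reheat = 0.0
        while T < T_HOT:                     # reheat at full power
            losses += K * C * (T - T_ROOM) * DT
            T += (P_MAX / C - K * (T - T_ROOM)) * DT
            reheat += DT
        if abs(reheat - warmup) < DT:
            break
        warmup = reheat
    return losses

for pause in (60, 600, 3600):
    print(pause, round(energy_keep_warm(pause)), round(energy_off_then_reheat(pause)))
```

With these toy constants, off-then-reheat comes out cheaper at every pause length tried, and the gap widens as the pause grows, consistent with the claim that the waste is always positive even if it can be tiny for short pauses.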


Does it depend on whether you know in advance _when_ you need it back at the hot temperature?

If you don’t think ahead and simply switch the heater back on when you need it, then you need the heater on for _longer_.

That means you have to pay back the energy you lost, but also the energy you lose during the reheating process. Maybe that’s the countervailing effect?

> Hotter objects require more energy to add another unit of heat

Not sure about this. A unit of heat is a unit energy, right? Maybe you were thinking of entropy?


No, you should always wait until the last possible moment to refill the leaky bucket, because the less water in the bucket, the slower it leaks, due to reduced pressure.


Allowing it to cool below the phase transition point of the melted plastic will cause it to release latent heat, so there is a theoretically possible corner case where maintaining it hot saves energy. I suspect that you are unlikely to hit this corner case, though I am too lazy to crunch the numbers in this comment.


don't worry, it is really tricky for training


Interesting. Though he didn't say what kind of car he drives, it could be a real shitter.


2007 Toyota Camry. I looked it up, it's actually worth even less than I thought it was haha.


This makes me wonder. OpenAI isn't the only company offering computer use. The list of companies and models that do this will only grow.

Meaning advertisers will have to be selective about which company they pay to get the most exposure to their target human customers via the agents. Will we see affiliate programs for AI agents which in turn promote products to the users, and we end up with the same shit show we have today?

Or what if eventually everyone has their own personal AI that can bypass the ads. Will we just decide that advertising is a drag on society and kill off that industry for good?


> and we end up with the same shit show we have today?

Absolutely certain. Furthermore, this can maximize exploitation of each individual human as the providers have such rich profiles about each human that they can customize pricing to extract the maximum amount of profit. It is by design.


Since this is Hacker News and AI is all the rage right now, I'd like to make the connection between this and curriculum learning. https://en.m.wikipedia.org/wiki/Curriculum_learning


We describe a computing architecture capable of simulating networks with billions of spiking neurons using an ordinary Apple MacBook Air equipped with an M2 processor, 24 GB of on-chip unified memory and a 4TB solid-state disk. We use an event-based propagation approach that processes packets of N spikes from the M neurons in the system on each processing cycle. Each neuron has C binary input connections, where C can be 128 or more. During the propagation phase, we increment the activation values for all targets of the N neurons that fired. In the second step, we use the histogram of activation values to determine the firing threshold on the fly and select the N neurons that will fire in the next packet. We note that this active selection process could be related to oscillatory activity in the brain, which may have the function of fixing the percentage of neurons that fire on each cycle. Critically, there are absolutely no restrictions on architecture, since each neuron can have a direct connection to any other neuron, allowing us to have both feed-forward as well as recurrent connections. With M = 2^32 neurons, this allows 2^64 possible connections, although actual connectivity is extremely sparse. Even with off-the-shelf hardware, the simulator can continuously propagate packets with thousands of spikes and millions of connections dozens of times a second. Remarkably, all this is possible using an energy budget of just 37 watts, close to the energy required by the human brain. The work demonstrates that brain-scale simulations are possible using current hardware, but this requires fundamentally rethinking how simulations are implemented.
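
To make the cycle concrete, here is a toy pure-Python sketch of the event-based propagation step the abstract describes: deliver a packet of N spikes, accumulate activations, then use the activation histogram to pick the N neurons that fire next. The sizes and names are my own illustration, not the paper's code:

```python
import random
from collections import Counter

random.seed(0)
M, C, N = 1000, 128, 50   # toy sizes: M neurons, C inputs each, packets of N spikes

# Each neuron receives C binary connections from random presynaptic sources;
# invert them into per-source target lists so spikes can be pushed forward.
targets = [[] for _ in range(M)]
for post in range(M):
    for pre in random.choices(range(M), k=C):
        targets[pre].append(post)

def step(firing):
    """One cycle: propagate a packet of spikes, then select the N
    most-activated neurons (a histogram-derived threshold) to fire next."""
    activation = Counter()
    for pre in firing:                 # propagation phase
        for post in targets[pre]:
            activation[post] += 1
    # Selection phase: the firing threshold is wherever the top-N cut falls.
    return [post for post, _ in activation.most_common(N)]

packet = random.sample(range(M), N)
for _ in range(10):                    # run a few processing cycles
    packet = step(packet)
print(len(packet))                     # the packet size stays fixed at N
```

Connections here are all excitatory with unit weight and the network is recurrent by construction; a real implementation would stream much larger target lists from disk and propagate packets of thousands of spikes per cycle, as the abstract describes.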


My own spiking neural network in everyone's favourite language to hate (JS). Purely for fun & curiosity.


Just to add an alternative addition-based architecture into the mix.

https://www.youtube.com/watch?v=VqXwmVpCyL0


Thanks for sharing. On mobile it's a bit small, and with one slightly misplaced swipe my browser scrolls horizontally or tries to refresh the page. Are there any quick fixes for this?

Otherwise it's a great little puzzle I could see myself toodling with often.


This is hands down the best use case for LLMs I've seen to date!

Makes me think of Karl Pilkington's Bullshit Man superhero https://youtu.be/1lRIQGU2RRk?si=d1ea8Sc44PyBy6yO


Thought of this immediately too. Karl’s always been somewhat ahead of the curve.


I didn't know it, thanks for the link :D

