Have you run into the bug where Claude acts as if it updated the artifact, but it didn’t? You can see the changes in real time, but then suddenly it’s all deleted character by character, as if the backspace key was held down; you’re left with the previous version, but Claude carries on as if everything is fine. If you point it out, it will acknowledge this, try again, and… same thing. The only reliable fix I’ve seen is to ask it to generate a new artifact with that content and the updates. Talk about wasting tokens, and no refunds, no support, you’re on your own entirely. It’s unclear how they can seriously talk about releasing this feature when there are fundamental issues with their existing artifact creation and editing abilities.


Yes, just had it happen a couple nights ago with a simple one-pager I asked it to generate from some text in a project. It couldn't edit the existing artifact (I could see in the CoT that it was confused as to why the update wasn't taking), so it made a new version for every incremental edit. Which of course means there were other changes too, since it was generating from scratch each time.


Yes, this has been happening a lot more the past 8 weeks.

From troubleshooting Claude by reviewing its performance and digging in multiple times to see why it did what it did, it seems useful to make sure the first sentence is a single clear, complete instruction instead of breaking it up.

As models are optimized for resource usage, prompt engineering seems to be becoming relevant again.


Yes, this was so frustrating.

I had to keep prompting it to generate new artifacts all the time.

Thankfully that is mostly gone with Claude Code.


Happens all the time. Like right now.


I came here to share the exact same thing - this has been happening for weeks now and it is extremely frustrating. Have to constantly tell Claude to rewrite the artifact from scratch or write it from scratch into a new artifact. This needs to be a priority item to fix.


You’re not wrong, but I can literally see it get worse throughout the day sometimes, especially recently, coinciding with Pacific Time Zone business hours.

Quantization could be done not to deliberately make the model worse, but to increase reliability! Like Apple throttling devices - they were just trying to save your battery! After all, there are regular outages, including some pretty major ones a handful of weeks back that took e.g. Opus offline for an entire afternoon.


Hi, first off thank you for your contributions, and this goes to the entire team. Keras is a wonderful tool and this was definitely the right move. No other package nails the “progressive disclosure” philosophy like Keras.

This caught my eye:

> “Right now, we use tf.nest (a Python data structure processing utility) extensively across the codebase, which requires the TensorFlow package. In the near future, we intend to turn tf.nest into a standalone package, so that you could use Keras Core without installing TensorFlow.”

I recently migrated a TF project to PyTorch (would have been great to have keras_core at the time) and used torch.nested. Could this not be an option?
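
For context, tf.nest is essentially a nested-structure utility; here is a minimal sketch of the kind of calls involved (standard tf.nest API, nothing Keras-specific assumed):

  import tensorflow as tf

  nested = {"a": [1, 2], "b": (3, {"c": 4})}

  # Apply a function to every leaf while preserving the structure.
  doubled = tf.nest.map_structure(lambda v: v * 2, nested)
  # {'a': [2, 4], 'b': (6, {'c': 8})}

  # Flatten to a list of leaves, then rebuild the original structure.
  flat = tf.nest.flatten(nested)                    # [1, 2, 3, 4]
  rebuilt = tf.nest.pack_sequence_as(nested, flat)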

A second question, about “customizing what happens in fit()”: must this be written in TF/PyTorch/JAX only, or can it be done with keras_core.ops, similar to the example shown for custom components? The idea would be that you can reuse the same training loop logic across frameworks, like for custom components.


At this time, there are no backend-agnostic APIs to implement training steps/training loops, because each backend handles training very differently, so no shared abstraction can exist (especially for JAX). So when customizing fit() you have to use backend-native APIs.

If you want to make a model with a custom train_step that is cross-backend, you can do something like:

  def train_step(self, *args, **kwargs):
    if keras.config.backend() == "tensorflow":
      return self._tf_train_step(*args, **kwargs)
    elif ...
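
Spelled out a bit more, a sketch of that dispatch pattern (the per-backend step methods are hypothetical names, not Keras API):

  import keras

  class MyModel(keras.Model):
    def train_step(self, *args, **kwargs):
      # Route to a backend-native implementation of the training step.
      backend = keras.config.backend()
      if backend == "tensorflow":
        return self._tf_train_step(*args, **kwargs)
      elif backend == "torch":
        return self._torch_train_step(*args, **kwargs)
      elif backend == "jax":
        return self._jax_train_step(*args, **kwargs)
      raise ValueError(f"Unsupported backend: {backend}")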

BTW it looks like the previous account is being rate-limited to less than 1 post/hour (maybe even locked for the day), so I will be very slow to answer questions.


Try projecting it onto a surface with curvature. The projected grid spacing will be irregular and follow the curvature.
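
A quick way to see it, as a minimal NumPy sketch (assuming a uniform 1-D grid projected straight onto a unit circle):

  import numpy as np

  # Uniformly spaced points on a flat line (the "grid").
  x = np.linspace(-0.9, 0.9, 10)

  # Project straight up onto the unit circle, y = sqrt(1 - x^2).
  y = np.sqrt(1.0 - x**2)

  # Arc-length spacing between consecutive projected points.
  spacing = np.hypot(np.diff(x), np.diff(y))
  print(spacing)  # non-uniform: tightest at the top, widest near the edges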


Would you mind expanding on “each sample costs 10k USD”?


If one’s standards exceed what they can provide for, they are beholden to that which can.

Seems to be “a feature, not a bug”, because as you suggest, this is actually problematic for people trying to live a simple, dignified life.


Why is this better than self sustaining and carbon neutral homesteads?


Because we don’t need to convince hundreds of millions of people to move to self-sustaining and carbon-neutral homesteads.


I like what you wrote here and how you think.

However I have to make one remark.

Schopenhauer would say that Alexa does in fact have a will to turn the lights off. A burning will, the same will within yourself and everything that is not idea. It is your word that sets off an irreversible causal sequence of events leading to the turning off of the lights. Schopenhauer would invoke his “Principle of Sufficient Reason” as the ground of its happening. It is not that Alexa chooses to obey; rather, the causal chain enforced by physics (and more) leaves the will of the universe no choice but to turn off the lights. It is the same reason the ball eventually falls down when thrown up. I believe this is the metaphor Schopenhauer uses in The World as Will and Idea.


Well, Schopenhauer is an idealist, which confuses the issue. The world as will and representation is quite different from "the world as stuff with will" -- the latter is panpsychism.

To address "world as will", I'd just say it more or less doesn't matter what the world is in this sense. There's a distinction between my asking you "please turn the light off" and my rehearsing the sounds "alexa, light off" -- and that difference "leaves open" the question of to what degree they're both grounded in "the nature of the world". Since everything is "will", distinctions then just become distinctions in will.

As for panpsychism, which is really "materialism + the-material-is-conscious", this threatens to confuse the issue more -- since it isn't saying everything is "grounded in x", which allows you to ignore x, really -- as "everything-grounding" theories rarely disable your ability just to ignore them.

In misattributing the properties we are looking to create/find/etc. in things, panpsychism runs the risk of creating a false picture of continuity, which impairs our ability to see genuine difference.

In other terms: whilst idealism borders on a dissociative paranoia, panpsychism borders on schizophrenia -- whilst dissociation is your problem, schizophrenia might be our problem too.

Here the schizophrenia of panpsychism is thinking that "Alexa is talking back" -- it isn't.


You’d be surprised what goes on in Level 0… who works there, how they work, the hour-by-hour. Even at a big company that handles most of the world’s credit card transactions at their biggest data centre, for example. It would absolutely blow your mind. It’s amazing the internet works as reliably as it does!


That is not completely correct. We have inherent biological drives, from low-level temperature regulation up through nutrition intake and, at an even higher level, a proclivity for social interaction, including reproduction. The ways these drives can be aligned and satisfied are not infinite, so some level of constraints and requirements is pre-specified. A person’s chosen behaviour must fit within this structure, and that of others, for it to be sustainable.

