
I read that the OP limited the output to 2000 tokens.




^ This! There are a lot of clocks to generate, so I've challenged it to stick to a small(er) amount of code.

I wonder if you would get better results if you told the LLM, in the prompt, that there's a token limit.

something like "You only have 1000 tokens. Generate an analog clock showing ${time}, with a CSS animated second hand. Make it responsive and use a white background. Return ONLY the HTML/CSS code with no markdown formatting"


I got a ~1600-character reply from GPT (including spaces), and it worked on the first shot when dumped into an HTML doc. I think that probably fits OK within the limit? (If I missed something obvious, feel free to tell me I'm an idiot.)

On the second minute I had the AI World Clocks site open, the GPT-5-generated version displayed a perfect clock. Its clock before that, and every clock from it since, has had very apparent issues though.

If you could get a perfect clock several times for the identical prompt, in fresh contexts, with the same model, it'd be a better comparison. It's possible, though, that the ChatGPT site you're using is making some adjustments that the API-fed version isn't.



