
Nice, so the model itself confirmed my suspicion that it doesn't do actual calculations.

Rest assured that it won't be able to do that for a while (at least in the public versions). Not for technical reasons, though - I'm 100% convinced the engineers could embed all sorts of interpreters and calculation engines without much trouble, just like the manual filters and canned responses they already have.

The reason for that is simple: general computation is unbounded and unpredictable in both time and space. Inference, on the other hand, is very predictable in both, because the time taken is just a function of the input length (i.e. the current prompt plus a bit of history), which can be controlled well on the frontend.
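
To make the contrast concrete, here's a toy sketch - forward_pass and END_OF_TEXT are made-up stand-ins, not anyone's actual serving code:

    END_OF_TEXT = object()

    def forward_pass(tokens):
        # Stand-in for one step of the model: a fixed-size network,
        # so each call costs roughly the same amount of time.
        return END_OF_TEXT

    def generate(prompt_tokens, max_new_tokens=256):
        out = list(prompt_tokens)
        for _ in range(max_new_tokens):  # hard upper bound on steps
            tok = forward_pass(out)
            if tok is END_OF_TEXT:
                break
            out.append(tok)
        return out  # worst-case cost is known before we even start

    def interpret(src):
        exec(src)  # no bound at all: may loop or allocate forever

You can budget for generate() up front; you can't budget for interpret() at all.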

Arbitrary calculations or code interpretation, however, are unrestricted operations and would basically open the door to DoS attacks or outright breaking the system. While there are ways to limit both execution time and resource usage, it's still a big risk and a considerable effort (infrastructure-wise) that I don't think is worth it. For closed systems (internal rollouts, business customers, etc.) this might be a different story.
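
Those "ways to limit" do exist, to be fair. Here's a minimal sketch (Python, Unix-only because of the resource module, and emphatically not a real sandbox) of bounding time with a subprocess timeout and memory with setrlimit:

    import multiprocessing
    import resource

    def _worker(src, q):
        # Cap the child's address space at ~256 MB before running anything.
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
        try:
            env = {}
            exec(src, {"__builtins__": {}}, env)  # NOT a real sandbox, just a demo
            q.put(env.get("result"))
        except Exception as e:
            q.put(e)

    def run_limited(src, timeout=2.0):
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=_worker, args=(src, q))
        p.start()
        p.join(timeout)
        if p.is_alive():
            p.terminate()  # hard kill once the time budget is spent
            raise TimeoutError("user code exceeded its time budget")
        return q.get(timeout=1.0)  # may still be empty if the child died

run_limited("result = 12345 * 6789") comes back immediately, while run_limited("while True: pass") gets killed after two seconds. Note how much machinery that already is for a toy, and it still isn't safe against anyone determined - which is exactly the effort/risk trade-off I mean.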

Just another reason why closed software sucks: nobody outside OpenAI can extend the model with integrations like this and test how far we could push its capabilities.



I completely agree.



