Hacker News | punkpeye's comments

They are fundamentally different. If Cloudflare provided a way to host Docker containers with volumes, though, that would be game over for so many PaaS platforms.



wow, this will be huge


Only if they can sort out their atrocity of a documentation website.


It is not reflected on their status page, but fly.io itself is not even loading.


https://fly.io/ is loading for me


Confirmation ;)


I took time to read everything on Twitter/Reddit/Documentation about this.

I think I have a complete picture.

Here is a quickstart for anyone who is just getting into it.

https://glama.ai/blog/2024-11-25-model-context-protocol-quic...


It would appear the HN hug knocked your host offline, given the 525 TLS Is Bogus and its 502 Bad Gateway friend.

I managed to get it to load: https://archive.ph/7DALF


It's actually due to the fly.io outage: https://news.ycombinator.com/item?id=42241851


Yeah, this is a phenomenal resource, so much so that I just tried to come back to it. Going to bookmark it and hope it shows back up!


[flagged]


I don't even know how to respond, or what this is asking.


I think they meant that your unprompted declaration of having understood the feature, followed by no apparent insight into it, is odd and reminiscent of a bot.

Your entire comment could just be “Here’s the quickstart guide: <link>” and literally no useful information would be lost.

A human would typically say: “I spent some time understanding the feature and I think I got it.

<summarized description of the feature or insight/opinion about its implementation>

Here’s the quickstart: <link>”

Or perhaps you wrote the quickstart? That’s not clear from your wording.


This makes me think about the email from Greg and Ilya to Sam

https://www.reddit.com/r/OpenAI/comments/1gsnxmy/more_lawsui...

I guess I spend my entire day working with LLM prompts, CoT, etc., so maybe, without realizing it, I am starting to adopt some of the same language patterns. The comment reads as normal to me, but I bet so did 'We don’t understand your cost function' for Greg and Ilya.


I don't think the wording was unclear. It's obviously their own article.


Cactus? Never heard that expression.


Fine. Examples? What can I use this for?


Have you read either TFA or their blog post? There are plenty of examples.


This looks pretty awesome.

Would love to chat with you if you are open to a possible collab.

I am frank [at] glama.ai


Emailed



Thanks, this is the summary I’ve been looking for!


Depending on your use case, there is also an HTML table (https://glama.ai/model-prices) and an API you can call to get a more accurate cost (https://glama.ai/blog/2024-09-25-chat-cost-calculator-api). The API uses the appropriate tokenization technique to accurately calculate the cost.
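
If you just want the gist of what the API does under the hood, here is a minimal TypeScript sketch of the arithmetic; the prices and token counts are placeholders, and the real API also picks the right tokenizer per model, which this skips.

    // Rough sketch of the idea: cost = tokens x per-million-token price.
    // All numbers below are illustrative placeholders, not real prices.
    interface ModelPrice {
      inputPerMillion: number;  // USD per 1M input tokens
      outputPerMillion: number; // USD per 1M output tokens
    }

    function estimateCost(inputTokens: number, outputTokens: number, price: ModelPrice): number {
      return (inputTokens / 1_000_000) * price.inputPerMillion
        + (outputTokens / 1_000_000) * price.outputPerMillion;
    }

    // e.g. 1,200 input tokens and 300 output tokens at placeholder prices:
    console.log(estimateCost(1_200, 300, { inputPerMillion: 2.5, outputPerMillion: 10 }));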


I am trying to build this independently of APIs provided by specific providers (such as caching), so this isn't an option.


Would this allow you to safely eval Node.js code in a sandbox?


Not "Node.js code" specifically as Node.js itself can't be compiled to wasm. JavaScript can be compiled to wasm, but that won't include the whole Node.js standard library and doesn't seem to be what you are asking.

Check out Deno for a sandbox that is getting there. Their new release does (or aims to) support most Node.js code, whereas it previously, and intentionally, did not support node_modules or CommonJS, to the best of my knowledge.
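
To make that concrete, here is a minimal sketch of one way to use it: spawn the untrusted script in a separate Deno process with no --allow-* flags. The file name is hypothetical, and this assumes a recent Deno with Deno.Command.

    // Run untrusted JS in a child Deno process; permissions are denied by default.
    // "untrusted.js" is a hypothetical file name.
    const command = new Deno.Command(Deno.execPath(), {
      // No --allow-* flags, so the script gets no file system, network, or env access;
      // --no-prompt makes permission requests fail instead of prompting interactively.
      args: ["run", "--no-prompt", "untrusted.js"],
      stdout: "piped",
      stderr: "piped",
    });

    const { code, stdout, stderr } = await command.output();
    console.log("exit code:", code);
    console.log(new TextDecoder().decode(stdout));
    console.error(new TextDecoder().decode(stderr));

Process-level isolation like this is coarser than an in-process vm, but a misbehaving script can then at worst burn CPU and memory rather than touch your files or network.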

If you care more about wasm than about sandboxing in general, a project called javy is interesting, but you'll quickly notice it brings its own IO library and not much else that compares to Node.js' API.


Just when I thought I covered every edge case!

Let me see if I can add it as an Easter egg.


Over the years, I've missed many meetings due to timezone miscommunications. To solve this, I've created TimeZone GPT, a foolproof tool that accurately converts and resolves times from any input.

I know that Google provides similar functionality, but I always mess it up somehow...

TimeZone GPT uses OpenAI's gpt-4o model to understand the inputs, and then TimeAPI to resolve the current timezone information and make the conversion.
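
Roughly, the flow looks like this simplified sketch (not the production code; the prompt and the time-API endpoint shape here are stand-ins):

    import OpenAI from "openai";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // 1. Ask gpt-4o to turn a free-form request into structured fields.
    const completion = await openai.chat.completions.create({
      model: "gpt-4o",
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content: "Extract { dateTime, fromTimeZone, toTimeZone } as JSON from the user's message.",
        },
        { role: "user", content: "3pm Lisbon time tomorrow, for someone in Chicago" },
      ],
    });
    const parsed = JSON.parse(completion.choices[0].message.content ?? "{}");

    // 2. Let a time API do the actual conversion (endpoint and payload shape are stand-ins).
    const res = await fetch("https://timeapi.io/api/Conversion/ConvertTimeZone", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        fromTimeZone: parsed.fromTimeZone,
        dateTime: parsed.dateTime,
        toTimeZone: parsed.toTimeZone,
      }),
    });
    console.log(await res.json());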

I gamified it a bit, so that I record other people's inputs, their timezones (as resolved by their IP), and the model outputs. This will allow me to capture edge cases that I've not considered.

I don't expect many other folks to be using it, but I wanted something that would at least solve my problem.

