Hacker News | cryptoz's comments

The video and info: https://www.spacex.com/launches/starship-flight-11

(Liftoff is around 33 mins in)


I'm working on Code+=AI: https://codeplusequalsai.com/

It's an AI-webapp builder with a twist: I proxy all OpenAI API calls your webapp makes and charge 2x the token rate. So when you publish your webapp to a subdomain, its users are charged 2x on their token usage. Then you, the webapp creator, get 80% of what's left over after I pay OpenAI (and I get 20%).
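To make the split concrete, here's a minimal sketch of the margin math as described above; the function name and dict keys are hypothetical, not anything from the actual site:

```python
def creator_payout(openai_cost: float) -> dict:
    """Illustrate the 2x-markup / 80-20 split.

    `openai_cost` is what the platform pays OpenAI for a user's
    token usage; all names here are hypothetical.
    """
    user_charge = 2 * openai_cost          # end user pays 2x the token rate
    margin = user_charge - openai_cost     # what's left after paying OpenAI
    return {
        "user_pays": user_charge,
        "creator_gets": 0.80 * margin,     # 80% of the margin to the creator
        "platform_gets": 0.20 * margin,    # 20% to the platform
    }

print(creator_payout(1.00))
```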

It's also a fun project because I'm making code changes differently from most people: I'm having the LLM write AST modification code. My site immediately runs the code the LLM spits out in order to make the changes you requested in a ticket. I blogged about how this works here: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
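I don't know what the generated code actually looks like, but here's a sketch of the general technique using Python's stdlib `ast` module - the kind of transform an LLM might emit for a hypothetical ticket like "rename greet to welcome":

```python
import ast

source = """
def greet(name):
    return "Hello, " + name

print(greet("world"))
"""

class RenameGreet(ast.NodeTransformer):
    """Rename the function `greet` to `welcome` wherever it appears."""

    def visit_FunctionDef(self, node):
        if node.name == "greet":
            node.name = "welcome"
        self.generic_visit(node)
        return node

    def visit_Name(self, node):
        # Also rewrite call sites and other references to the old name.
        if node.id == "greet":
            node.id = "welcome"
        return node

tree = ast.parse(source)
new_source = ast.unparse(RenameGreet().visit(tree))
print(new_source)
```

The appeal of this approach over raw text diffs is that the edit is guaranteed to be syntactically valid: `ast.unparse` can only emit well-formed Python.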


Computer Use models are going to ruin simple honeypot form fields meant to detect bots :(


I just tried to submit a contact form with it. It successfully solved the reCAPTCHA but failed to fill in a required field and got stuck. We're safe.


You mean the ones where people add a question that is like "What is 10+3?"


I wonder if this stuff is trained on enough Hallmark movies that even AI actors will buy a hot coffee at a cafe and then proceed to flail the empty cup around like the humans do. Really takes me out of the scene every time - they can't even put water in the cup!?


A way for people to build LLM-powered webapps and then easily earn as they are used: I use OpenAI API and charge 2x for tokens so that webapp builders can earn on the margin:

https://codeplusequalsai.com


I've really got to refactor my side project, which I tailored to use only OpenAI API calls. I think the Anthropic APIs are a bit different, so I just never put in the energy to support the changes. I think I remember reading that there are tools to simplify this kind of work and support multiple LLM APIs? I'm sure I could do it manually, but how do you all support multiple API providers that have some differences in API design?



I built LLMRing (https://llmring.ai) for exactly this. Unified interface across OpenAI, Anthropic, Google, and Ollama - same code works with all providers.

The key feature: use aliases instead of hardcoding model IDs. Your code references "summarizer", and a version-controlled lockfile maps it to the actual model. Switch providers by changing the lockfile, not your code.
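(I haven't looked at LLMRing's internals, so this is just a minimal sketch of the alias-to-model indirection described above, with a hypothetical lockfile format:)

```python
# Hypothetical lockfile contents -- in practice this would be a
# version-controlled TOML/JSON file on disk, not an inline dict.
LOCKFILE = {
    "summarizer": "anthropic:claude-sonnet-4-5",
    "extractor": "openai:gpt-4o-mini",
}

def resolve(alias: str) -> tuple[str, str]:
    """Map a stable alias to a (provider, model_id) pair."""
    provider, model_id = LOCKFILE[alias].split(":", 1)
    return provider, model_id

# Application code only ever names the alias; switching providers
# means editing the lockfile, not this call site.
print(resolve("summarizer"))
```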

Also handles streaming, tool calling, and structured output consistently across providers. Plus a human-curated registry (https://llmring.github.io/registry/) that I keep updated with current model capabilities and pricing - helpful when choosing models.

MIT licensed, works standalone. I am using it in several projects, but it's probably not ready to be presented in polite society yet.


OpenRouter, Glama ( https://glama.ai/gateway/models/claude-sonnet-4-5-20250929 ), and AWS Bedrock all give you access to all of the AI models via an OpenAI-compatible API.



LiteLLM is your friend.


or AI SDK


Why don't you ask LLM to do it for you?


I use LiteLLM as a proxy.


> I think I remember reading that there are tools to simplify this kind of work, to support multiple LLM APIs

just ask Claude to generate a tool that does this, duh! and tell Claude to make the changes to your side project and then to have sex with your wife too since it's doing all the fun parts


Opening the App Store just to browse and download a bunch of apps is probably the #1 thing people do there. Of course, installing a specific app is a top use case too. But I think you're just not the average user. Lots of people open the App Store frequently just to check out what's available.

~10 years ago I would do this all the time. It's fun, kind of like surfin' the net was back in the old days, but in a walled garden of applications.


Is there actually any data to back up the claim that the "#1 thing people do" is open the App Store to see what's available, besides your singular story about what you used to do a decade ago, when all of this was much more novel?


10 years ago that was fun. Today it’s an awful experience.


I'm surprised to hear this, as I am in the same boat as the other poster. Of course it makes sense, they wouldn't build that junk if there weren't junk consumers on the market. But I still can't grasp the concept of "just installing apps".


It seems plausible that casual browsing and downloading remains a significant use case. Apple surely wouldn't design the App Store around discovery this way otherwise. Not sure about the #1-activity hypothesis, though. What I'm certain about is that the App Store is deeply broken and they've started rushing down the path of platform "enshittification" (a real term), where online platforms become less useful, less enjoyable, and less user-friendly.


Had a lot of fun trying to break this. Turns out you can screenshot real easily by zooming out. Maybe there are other ways but I stopped trying :)


yeah - I actually was initially confused since I wasn't having any issues screenshotting it but had forgotten that I have the default site zoom set to ~65%.


Not sure what you mean - I can screenshot it freely, but that's not the point. The point is that if you then look at the screenshot, you can't discern the text, because it's a single frame now.


He's right. This is zoomed out: https://imgur.com/a/G7CKZ94

This is on macOS 15.6, Chromium (BrowserOS), captured with the OS's native screenshot utility. Since I was asked about the zoom factor, I then tried simply capturing it at 100%, and it was still perfectly readable...

I guess the trick doesn't work on this browser.


I zoomed out to 90% and could make out that something was there, but it wasn't easy to read. Zooming out further went back to just noise. I also tried zooming in, with no success. What zoom level did you use? And I guess we have to ask the standard questions: what browser/version/OS/etc.? My Firefox 142 on macOS never produced a screen grab like yours.


This is really interesting - because it means the "randomness" is different between the text and the background, and when you zoom out enough, the eye can distinguish it?


Hmmm, I think it's probably just an aliasing / canvas drawing issue. When I bring in a screenshot heavily zoomed out to 33%, the pixels comprising the "HELLO" shape have a significantly higher luminance than the rest of the background.
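That luminance gap would explain it: downscaling averages neighboring pixels, which suppresses the per-pixel noise while preserving any mean brightness offset. A toy sketch, assuming (hypothetically) that "text" pixels are drawn from the same noise distribution with a small luminance bias:

```python
import random

random.seed(0)

def frame_pixel(is_text: bool) -> float:
    """One noisy pixel; text pixels carry a small luminance bias."""
    base = random.uniform(0, 255)
    return min(255.0, base + 20.0) if is_text else base

def block_average(is_text: bool, n: int = 256) -> float:
    """Downscaling averages a block of pixels into one output pixel."""
    return sum(frame_pixel(is_text) for _ in range(n)) / n

bg = block_average(False)
text = block_average(True)
# Averaged text blocks come out measurably brighter than background,
# even though any single pixel looks like pure noise.
print(round(bg, 1), round(text, 1))
```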


Zooming out before taking the screenshot means the text is no longer obfuscated. I tried it and confirmed it works. In fact, the text is perhaps even more readable than in the original.


It depends how fast or slow your GPU is. I tried it and saw the effect you described, but within a second or two it started moving and was obscured again. Obviously you could automate the problem away.


Mine freezes the animation on zoom change. Not sure you could automate against that


What I meant was that even if it only freezes for a second, you could automate the screenshots to be captured during that time instead of trying to beat the clock manually


Oh my God. That domain is parked and for sale for $125,000?!?!

Wild.


Oh, that is nothing. Check out god.ai... domain parking is back. At this point we might as well just have a .god TLD.


> TLD for .god

Sounds like a good TLD for an "identity and access management" system :)


Musk would just hog it for himself


Gonna be some wild conspiracies some day in the future, when humanity has altered the moon visibly but 'good old phones from way back in the day' take photos that "clearly" show no change to the moon.


Those phones will be long dead at that point, as well as the cloud services they depend on.

