
Why not both?


Wonderful. I love that you can just let it play and it will continue with the next video. This is really magical to me. Thanks for sharing.

WOW simply lovely

krea.ai | Senior Backend Engineer | San Francisco, CA | ONSITE | https://www.krea.ai

krea does AI research & builds AI tools for image generation, video generation, node-based workflows, LoRA training, and more. Small, mostly in-person team with a view of Alcatraz from the office window. Our users range from hobbyists all the way to professional designers at Apple and architects at the firms behind the World Trade Center and the Burj Khalifa.

We're looking for senior backend engineers. You'd work across our SvelteKit app (Postgres, Redis, Docker, ClickHouse), Python ML inference on GPU clusters, and k8s clusters across multiple cloud and GPU providers.

Some recent projects:

- building canary deploys with cookie-sticky traffic splitting (see the sketch after this list)

- implementing durable execution for long-running workflows (also sketched below)

- designing our public API with OpenAPI docs auto-generated from Zod schemas (also sketched below)

- implementing enterprise-grade authentication, authorization, and permissions

- optimizing ML inference for our hosted image generation models
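
Since "cookie-sticky traffic splitting" can sound abstract, here is a minimal sketch of the idea in TypeScript. This is an illustration, not Krea's actual code; the cookie name, split fraction, and function are all invented. A new visitor is bucketed by a weighted coin flip, and the bucket is pinned in a cookie so later requests keep hitting the same deploy:

    // Hypothetical sketch of cookie-sticky canary routing; names and
    // numbers are invented for illustration.
    const CANARY_COOKIE = "canary";  // assumed cookie name
    const CANARY_FRACTION = 0.05;    // assumed: 5% of new visitors hit the canary

    type Upstream = "stable" | "canary";

    function pickUpstream(cookieHeader?: string): { upstream: Upstream; setCookie?: string } {
      // Honor an existing assignment so a session never flips mid-rollout.
      const existing = cookieHeader
        ?.split(";")
        .map((c) => c.trim().split("="))
        .find(([name]) => name === CANARY_COOKIE)?.[1];
      if (existing === "stable" || existing === "canary") {
        return { upstream: existing };
      }
      // New visitor: weighted coin flip, then persist the bucket in a cookie.
      const upstream: Upstream = Math.random() < CANARY_FRACTION ? "canary" : "stable";
      return { upstream, setCookie: `${CANARY_COOKIE}=${upstream}; Path=/; Max-Age=86400` };
    }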
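For the durable-execution item, the core trick is usually to checkpoint each step's result so a crashed workflow can be replayed without redoing finished work. Below is a minimal sketch under that assumption; the store interface and the trainLora workflow are hypothetical, and a real system would persist to Postgres rather than a Map:

    // Hypothetical durable-execution sketch: completed steps are checkpointed,
    // so a restarted run replays them from the store instead of re-executing.
    interface StepStore {
      get(key: string): Promise<string | undefined>;
      set(key: string, value: string): Promise<void>;
    }

    // In-memory store for demonstration only.
    const memory = new Map<string, string>();
    const store: StepStore = {
      get: async (k) => memory.get(k),
      set: async (k, v) => { memory.set(k, v); },
    };

    async function step<T>(runId: string, name: string, fn: () => Promise<T>): Promise<T> {
      const key = `${runId}:${name}`;
      const cached = await store.get(key);
      if (cached !== undefined) return JSON.parse(cached) as T;  // replayed step: skip work
      const result = await fn();
      await store.set(key, JSON.stringify(result));  // checkpoint before advancing
      return result;
    }

    // A workflow is plain code; after a crash and restart with the same runId,
    // finished steps become no-ops. (All names below are invented.)
    async function trainLora(runId: string) {
      const dataset = await step(runId, "prepare-dataset", async () => "dataset-v1");
      const weights = await step(runId, "train", async () => `weights-for-${dataset}`);
      await step(runId, "publish", async () => `published ${weights}`);
    }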
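And for the Zod/OpenAPI item: the appeal of that setup is that a single schema both validates requests at runtime and drives the generated docs. A small hedged example follows; the endpoint shape and every field are invented, and converting the schema to OpenAPI takes a separate library (such as zod-to-openapi), whose API is omitted here:

    import { z } from "zod";

    // Hypothetical request schema; fields are invented for illustration.
    const GenerateImageRequest = z.object({
      prompt: z.string().min(1),
      width: z.number().int().positive().max(2048).default(1024),
      height: z.number().int().positive().max(2048).default(1024),
      seed: z.number().int().optional(),
    });

    // The TypeScript type is inferred from the same schema that validates input.
    type GenerateImageRequest = z.infer<typeof GenerateImageRequest>;

    // Runtime validation: parse() fills defaults and throws a ZodError on bad input.
    const body = GenerateImageRequest.parse({ prompt: "a view of Alcatraz" });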

We care far more about first principles and core engineering skills than about shenanigans with any particular programming language or tooling; knowing a lot about old UNIX principles is a plus, though.

You should be comfortable owning things end-to-end. Experience with GPU infra is a plus. Many of us have some kind of creative background; it helps when building tools for creatives, but it's not a requirement by any means.

To apply, email d+hn@krea.ai (use the +hn suffix to make sure your email is prioritized!)


It’s great but it ain’t no Mr. Robot.


One is a deeply human drama, the other is a spy thriller. Not sure why you'd even make the comparison.


I'm assuming Mr Robot is the spy thriller? It feels like more of a deeply human drama to me.


You have a very different perception of 'human' than I do!


The main character suffers from DID, from trauma that happened when he was little. Maybe you didn't watch the whole thing; that seems pretty "human drama"-y to me.


That's not human drama. That's a sensationalized depiction of a really rare disorder that 99% of the audience has no experience with and cannot relate directly to.


Of course that's human drama; the entire show is drama. Even the "spy thriller" subplot is motivated by the death of Whiterose's partner and the need to put the world back the way it was.


They ran contemporaneously and tended to come up in the same lunch-table conversations.


I couldn’t get past all the drug scenes in Mr Robot.


It didn't peak before the end of season one?


What do you mean?

As in, OpenAI, Anthropic, and Google's models won't follow instructions regarding forensics for this?


Who here is naive enough to think that this little loophole hasn't been nicely tied off?


We're also skeptical of how AI is being used. Let us know if you find obvious, horrible mistakes so we can fix them.


It links to the original documents released by the DOJ.

Also, just as LLMs hallucinate and it's up to the person to decide whether to commit the code to the repo (and they should be held accountable for that), the same applies to people who use this tool to spread fake news.

Of course, we try to apply as many "ground-truthing" techniques as possible.

Journalists of all kinds are already using Jmail for their professional work, and we're in touch with them when they give us feedback. For example, we've redacted victims' names that we wouldn't have known about if not for the work of tons of volunteers and journalists (and yes, those names were NOT redacted by the DOJ even though they should have been).

But of course, this is a thorny trade-off between victim protection and censorship.

Disclaimer: I actively work on jmailarchive!


I think that’s a valid stance to take.

IMO it’s (unfortunately) the public’s responsibility to learn the lesson that LLMs shouldn’t be trusted without double-checking the source; it’s the same position Wikipedia was in 10 years ago. “Don’t use Wikipedia because it has incorrect information” used to be a major concern, but that has faded now that Wikipedia has found its place and people understand how to use it. I think a similar thing will happen with LLMs.

That opinion does not take away the responsibility of LLM providers to keep educating people and reducing hallucinations. I like to think of it as equal responsibility between the LLM provider and the user. Like driving a car: the most advanced safety system won’t prevent a bad driver from crashing.


We're also working on crowdsourcing methods, but it's hard because almost everyone involved in the development of this project is a volunteer who either already works for a company or is a startup founder (me)... so it's very tricky to find time.

Also, feel free to check Jwiki (FKA Jikipedia) at https://jmail.world/wiki


They're using jmail because it's source material. An LLM by definition is not source material. I can't believe you're openly saying this.


If you are interested in internet-scale (as of 2026) data engineering challenges (e.g., processing 10s to 100s of petabytes) and pre-training/mid-training/post-training scale challenges, please send me an email at d+data@krea.ai!


It's interesting that, regardless of whether you consider Cursor's browser implementation truly "from scratch" or not, people are now implementing browsers from scratch with agents because of Cursor's post. In other words, in a twisted and funny way, this browser exists because of Cursor's agent.

This is how we should be thinking about AI safety!


I mean, I wanted to demonstrate further how wrong and misleading I think their initial blog post was, so yeah, I made this because of what they said and marketed :)


Hey, if you are ever looking for a job at Krea, just let me know!

