And yet your blog says you think NFTs are alive. Curious.
But seriously, RAG/retrieval is thriving. It'll be part of the mix alongside long context, reranking, and tool-based context assembly for the foreseeable future.
I have no interest in anything crypto, but they're proposing NFTs tied to AI (LLMs plus verifiable machine learning) so that the NFTs can make ownership decisions.
So it'd be alive in the making decisions sense, not in a "the technology is thriving" sense.
> TL;DR They solved something to make post less expensive because they cut corners during production.
FWIW having watched the entire thing, they never blamed bad production staff or unavoidable constraints. Those are things that anyone working with others experiences when making anything, whether it's YouTube videos or enterprise software products. My TLDR is: "Chroma keying is a fragile and imperfect art at best, and can become a clusterf#@k for any number of reasons. CorridorKey can automatically create world-class chroma keys even for some of the most traditionally-challenging scenarios."
When you watch the video it becomes pretty clear why it wouldn't be able to do that, although it's fun to think about how a future iteration or alternative might be able to credibly (if you don't look too hard) mimic that someday.
If I may "yes, and" this: spec → plan → critique → improve plan → implement plan → code review
It may sound absurd to review an implementation with the same model you used to write it, but it works extremely well. You can optionally crank the "effort" knob (if your model has one) to "max" for the code review.
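For the curious, the flow above can be sketched as a loop over one chat session, with the review done separately. Everything here is hypothetical scaffolding: `ask` stands in for whatever chat-completion call your provider exposes, and the prompts are illustrative, not a recommended prompt set. The one structural point it shows is that the review call gets a fresh message list containing only the spec and the code.

```python
# Minimal sketch of: spec -> plan -> critique -> improve plan -> implement -> review.
# `ask(messages)` is a hypothetical stand-in for any chat-completion API call
# that takes a message list and returns the assistant's reply as a string.

def run_pipeline(ask, spec):
    session = []  # running context for the implementation session

    def step(prompt):
        # Each step extends the same session, so the model keeps its context.
        session.append({"role": "user", "content": prompt})
        reply = ask(session)
        session.append({"role": "assistant", "content": reply})
        return reply

    plan = step(f"Write an implementation plan for this spec:\n{spec}")
    critique = step("Critique the plan above. List its weaknesses.")
    step(f"Improve the plan to address this critique:\n{critique}")
    code = step("Implement the improved plan. Output only code.")

    # Review in a FRESH session: the reviewer sees only the spec and the
    # code, none of the implementation session's accumulated context.
    review = ask([{
        "role": "user",
        "content": f"Review this code against the spec.\n"
                   f"Spec:\n{spec}\n\nCode:\n{code}",
    }])
    return code, review
```

Whether the fresh session is worth it is exactly the disagreement in the replies below; the sketch just makes it easy to flip between the two by passing `session` instead of a new list.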
> You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.
I'm just a sample size of one, but FWIW I didn't find that this noticeably improved my results.
Not having to completely recreate all the LLM context necessary to understand the actual code and the spectrum of possible solutions (which the LLM still "knows" before you clear the session) saves lots of time and tokens.
Interesting, I definitely see better results on a clean session. On a “dirty” session it’s more likely to go with “this is what we implemented, it’s good, we could improve it this way”, whereas on a clean session it’s a lot more likely to find actual issues or things that were overlooked in the implementation session.
Closing a single lab is not fascism. It becomes fascism when a regime systematically targets institutions that produce independent knowledge that doesn't align with Dear Leader's propaganda.