That's beautiful! I see a .claude folder in your code, so I'm curious whether you "vibecoded" the whole project or just had Claude there for some tasks. Not that it matters or takes away from your work, it's just pure curiosity from someone who enjoys betting on the LLM output XD
I found that out while working with music models like Suno! I make music for my own listening as a hobbyist, and when I give Suno a prompt, no matter how well crafted it is, the outcome varies from "meh" to "that's good"... whereas when I upload a semi-finished beat I made and prompt it to cover it, the results consistently leave me speechless! It could be bias, since the music already has a lot of elements I created, but this workflow holds up across other generative models for me.
It is scary! My coping mechanism, which I admit is stupid, is to believe that no matter what I do, as long as I am online they have my data. But you are right, most people just give away an absurd amount of data willingly.
It becomes even sadder when Google caters to political propaganda of any kind, from any party or country, if the price is right. I wish Google realized how much more valuable the product and company are to the people who use them than all of that. I am not naïve (I understand they need to profit), but perhaps they are chasing short-term gains while introducing a poor user experience that will eventually lead to major losses.
I don't think it makes everyone so productive, tbh! If you really know what you are doing and are willing to burn tokens, it will really take your work to the next level—provided you don’t work with a niche product or language that the models weren’t trained on.
If I may use an analogy, it's like what sampling is for music producers. The sample is out there—it’s already a beautiful sample, full of strings and keys, a glorious 8-bar loop. Anyone can grab it, but not every producer can sell it or turn it into a hit.
In the end, every hype train has some truth to it. I highly suggest you give it a shot, if you haven't already. I’m glad I did—it really helped me a lot, and I am (unfortunately, financially) hooked on it.
It's a little unfair to say that Kagi "does that" and thereby imply that they merely repackage search results from Google and Bing. Kagi pulls from an array of search sources that includes Google, DuckDuckGo, Apple, Wikipedia, Wolfram Alpha, and others, alongside their own small web crawlers. Their mission is to present results in a user-centric way: they try to surface results that are directly useful to the user rather than to advertisers (they don't appear to have advertising partners), and their business model supports that approach rather than conflicting with it.
https://help.kagi.com/kagi/search-details/search-sources.htm...
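To make the distinction concrete, here is a toy sketch of what aggregating several sources and applying your own ranking looks like, as opposed to merely filtering one upstream's output. To be clear, this is not Kagi's actual code; the fetch functions and the trackerCount signal are made up purely for illustration:

    // Toy meta-search aggregator, purely illustrative, not Kagi's implementation.
    // Hypothetical source adapters; in reality each would call an upstream API
    // or your own index and return a list of { url, title, trackerCount, ... }.
    const fetchFromGoogle = async (query) => [];      // placeholder
    const fetchFromWikipedia = async (query) => [];   // placeholder
    const fetchFromOwnCrawler = async (query) => [];  // placeholder

    async function search(query) {
      const sources = [fetchFromGoogle, fetchFromWikipedia, fetchFromOwnCrawler];
      const batches = await Promise.all(sources.map(source => source(query)));

      const byUrl = new Map();
      for (const result of batches.flat()) {
        const seen = byUrl.get(result.url);
        // Keep one entry per URL and count how many sources returned it.
        byUrl.set(result.url, { ...result, hits: (seen ? seen.hits : 0) + 1 });
      }

      // Rank with your own, user-centric signals instead of the upstream order;
      // trackerCount here is a made-up stand-in for such a signal.
      return [...byUrl.values()]
        .map(r => ({ ...r, score: r.hits - r.trackerCount }))
        .sort((a, b) => b.score - a.score);
    }

Nothing here is meant to describe their internals; it's just to show that blending sources, deduplicating, and ranking them yourself is more than passing someone else's results through a filter.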
> Yeah I think they are trying to get a search-results-only view without the ads etc.
If that were the case, then their description is completely off: what they are doing is not creating a search engine but filtering results from other search engines.
I mean, just because you pipe your output to grep doesn't mean you created an entirely new app.
"Our search result pages may include a small number of clearly labeled "sponsored links," which generate revenue and cover our operational costs. Those links are retrieved from platforms such as Google AdSense. In order to enable the prevention of click fraud, some non-identifying system information is shared, but because we never share personal information or information that could uniquely identify you, the ads we display are not connected to any individual user."
It's not a "synthesizer" that makes sound but one for visuals. You make your own sounds somewhere else, pipe them into Hydra, and have it react to your own music/sounds.
I'm guessing they don't just call it that because of the modular nature of a programming language, and because it depends on the user configuring it rather than just hitting "Play".
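For anyone curious what that configuring looks like, here's a rough sketch of an audio-reactive patch for the Hydra web editor. I'm going from memory of the docs, so treat the exact values as approximate; "a" is Hydra's built-in audio object, which exposes FFT bins from the microphone input:

    // Feed the mic into the visuals instead of generating any sound.
    a.setBins(4)   // split the spectrum into 4 bins, bass to treble
    a.show()       // overlay the bins so you can see what the mic hears

    osc(20, 0.1, 1.5)                  // base oscillator pattern
      .kaleid(5)                       // kaleidoscope it
      .scale(() => 1 + a.fft[0] * 2)   // pulse with the lowest (bass) bin
      .rotate(() => time * 0.1)        // slow drift over time
      .out(o0)                         // render to output buffer o0

You'd play your track next to the mic (or route it in virtually) and tweak lines like these live while it runs.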
I'm not super familiar with Advanced Visualization Studio, but as far as I can tell it wasn't geared toward live coding in a performance setting, nor could it work with general-purpose libraries, since it was its own language rather than a general one.
I think they're a bit different: a visualizer uses incoming audio to create visuals, while a video synth can do that but can also do much more, without involving sound in any way.
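To make the difference concrete, here's a tiny Hydra-style patch that produces visuals with no audio input at all, everything driven by functions of time (again a sketch from memory, not copied from the docs):

    // No sound involved: a noise field warped and rotated purely by functions.
    noise(3, 0.2)
      .color(0.9, 0.4, 0.8)                // tint the noise
      .modulate(osc(6, 0.1), 0.4)          // warp it with an oscillator texture
      .rotate(() => Math.sin(time) * 0.3)  // oscillating rotation over time
      .out(o0)

A visualizer can't really do that, since without incoming audio it has nothing to draw from.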
That is why I thought there should be sound, but it seems I missed the point; as others have commented, it has to do with generating and modifying the visuals based on functions.