XCSme's comments

I tried to find the exact mention of this in their privacy policy, but I found it hard to navigate with their OS-style UI and tweet-like privacy policy. I saw no mention of Reddit or LinkedIn when I did a Ctrl+F search.

Why the downvote?

Later edit: Searching for "LinkedIn" I do get results, but not for "Reddit". Maybe they haven't updated the policy yet, or they aren't required to disclose that?

But in the email it says "We'll share hashed emails with Reddit and LinkedIn so we can target ads better. We have updated our privacy policy."
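For context, "hashed emails" in ad-platform matching usually means something like the sketch below: the email is normalized and then SHA-256 hashed before being uploaded, so the platform matches hashes against its own users rather than receiving raw addresses. The exact normalization rules each platform requires may differ; this is an illustrative sketch, not any platform's documented pipeline.

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize an email address, then SHA-256 hash it.

    Ad platforms typically require lowercasing and trimming
    whitespace before hashing, so the same address always
    produces the same hash on both sides.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same logical address hashes identically after normalization:
print(hash_email("User@Example.com") == hash_email("user@example.com "))
```

Note that hashing is pseudonymization, not anonymization: anyone holding the same email list can recompute the hashes and re-identify users, which is why this still counts as sharing personal data under most privacy regimes.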


Same question for Atrophic.

Personally I only see Google (Gemini), X (Grok) and the Chinese models having a chance to still be alive in 1-2 years.


Anthropic are making a very convincing play for business and "enterprise" customers - first with Claude Code and now with Cowork and especially Claude for Excel. The revenue growth they've announced has been extremely impressive over the past year.

Well, the public reception seems to be changing, after tweets like this: https://x.com/AnthropicAI/status/2025997928242811253

Also, I liked Anthropic because they were focused a lot on safety, but after the Pentagon stuff, it seems like they dropped their focus on safety.


X has only brand recognition right now, and an extremely toxic one.

Big customers may buy, but they won't give them logos, and people who are offended by Musk's worldview won't pay them either. You don't do well with a toxic brand: just look at Ye having to buy full-page apology ads to try to sell a record.


X?

Don't they have the biggest budget and largest GPU farm?

Also, grok-4.1-fast was one of the top models for a long time, especially in real-world usage.


It's funny you say that; I thought this would be an article about how Anthropic have managed to produce a better (coding) product than OpenAI despite having 1/10th of the funding.

The new versions of Opus (4.5 and 4.6) are absolutely amazing - first time I've felt it necessary to throw hundreds of dollars in a single month at Cursor.

I heard similar things about the older models too (Sonnet 3.5 beating GPT-4 etc.) but sadly only jumped on the Cursor train in the last 12 months or so.


The problem is not the models; it's the moat and the budget. Google and X still have money and are profitable, while all the other AI companies are losing billions per year.

And customers will happily switch from one model to another in a heartbeat.


> Personally I only see Google (Gemini), X (Grok) and the Chinese models having a chance to still be alive in 1-2 years.

I'd make it more general: the only AI token providers that will last past the bubble are the companies that are already self-sustaining via other product channels.

Any company that has AI as its one and only product isn't going to survive.


Anthropic* lol, unintentional...

I love it too, because it's easy to write and copy, compared to formatted text

Love the stack you've built, really impressive to see a fully EU-based setup in action. For anyone trying to keep analytics fully European and self-hosted, I built UXWizz. It's EU-based, runs on your own servers, and adds session recordings and heatmaps on top of basic stats, so you get insight into actual user behavior without sending data to US SaaS. Like a self-hosted Hotjar.

Nice stack so far. Coolify + Hetzner is a solid combo once you start consolidating services.

Since you're already self-hosting Umami, you might also want to look at UXWizz. I built it, and it fits the same privacy-first, self-hosted model but adds session recordings and heatmaps on top of basic analytics. It runs fine on a small VPS and keeps you off the recurring SaaS treadmill, though it does have a one-time cost, which includes support. You can use the docker-compose to set it up, and also learn more about Coolify while doing so: https://docs.uxwizz.com/installation/docker/via-docker-compo...


Cool experiment, I like the idea of going deep on a micro-niche instead of chasing broad keywords.

Since you're comparing structured overviews vs blog-style content, I'd watch scroll depth and click behavior closely. I built UXWizz, and for niche hubs heatmaps and session recordings have been way more useful than raw pageviews. You can quickly see if people just grab a code and bounce or actually explore the internal links and sections.


Oh, and one thing I noticed: your clickable cards look the same as the non-clickable ones, so I kept clicking on the ones that do nothing.

This matches what I've been noticing. A lot of AI crawler traffic just doesn't show up clearly in typical analytics dashboards, especially when tools aggressively filter or sample.

Part of why I built UXWizz was to avoid black-box filtering and keep control over how traffic is classified. When you own the analytics stack, you get to decide what’s "valid" instead of inheriting someone else's definition.
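"Deciding what's valid yourself" can be as simple as matching request user-agents against a list you control, rather than accepting a vendor's opaque bot filter. A minimal sketch (the marker list below is illustrative, not exhaustive; real crawlers can also spoof user-agents, so production setups often verify by IP range too):

```python
# Hypothetical classifier: substrings that appear in the user-agent
# strings of some well-known AI crawlers. You own and extend this
# list, instead of inheriting someone else's filtering rules.
AI_CRAWLER_MARKERS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")

def classify(user_agent: str) -> str:
    """Return 'ai-crawler' if the user-agent matches a known marker."""
    ua = (user_agent or "").lower()
    if any(marker.lower() in ua for marker in AI_CRAWLER_MARKERS):
        return "ai-crawler"
    return "human-or-other"

print(classify("Mozilla/5.0 (compatible; GPTBot/1.1)"))
print(classify("Mozilla/5.0 (X11; Linux x86_64) Firefox/128.0"))
```

The point isn't the specific list; it's that the classification logic lives in your stack, so you can audit and change it when a new crawler shows up in your logs.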


Strong agree on clicks over surveys. Once I move from a video prototype to a live MVP, the real signal comes from watching what people actually do.

I built UXWizz mainly for this. Self-hosted heatmaps and recordings make it pretty obvious where people get confused or drop off, and you don’t have to rely on polite feedback [0].

[0] https://www.uxwizz.com


Strong agree — clicks > surveys, and behavior > opinions.

The gap I'm trying to fill: before you have the live MVP (and before you invest in heatmaps/recording infrastructure), how do you know which workflow is worth building?

Video prototypes are the "pre-MVP" behavior test — show the experience, see if they click "I'd pay for this" vs just "interesting."

Curious: When building UXWizz, did you validate the "self-hosted vs cloud" decision with video prototypes, or did you ship and learn from early user behavior?

Feels like your tool captures the truth post-launch, mine tries to predict it pre-launch. Complementary approaches.


Lands 6th on my random benchmarks: https://aibenchy.com/

