Holding and openly expressing these abhorrent views probably encourages him to focus on security and privacy more than others.
But it does risk his app being associated with that and therefore discouraging everyday users. I wouldn't be surprised if it ends up as the next EncroChat.
Focusing on security and privacy is great, but I expected some downsides. I'm glad you decided to emphasize the dedication of the creator of SimpleX instead.
EncroChat was not open source, so it was much easier to infiltrate.
You can self-host SimpleX and it is open source; beyond that, I have no idea what you mean by it being associated with his views. If the project is as promised, why would you care about the views of the developer?
You shouldn't be blinded by survivorship bias either. Some of the best decisions I've made were quitting a startup or company when I saw the writing on the wall that it wasn't salvageable by my sacrifices.
Yes, I tried with PHP and it failed the Block Frequency Test at a size of 8800, but it was fine at 880. Then I tried another random sequence of 8800 and it failed the Autocorrelation Test as well.
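For anyone curious what that test actually computes, here is a minimal sketch of the Block Frequency Test from NIST SP 800-22 (section 2.2). It's in Python rather than PHP, and the block size M = 128 and the use of NumPy/SciPy are my assumptions, not what the parent actually ran (the suite only requires M >= 20 and fewer than 100 blocks):

    import numpy as np
    from scipy.special import gammaincc  # regularized upper incomplete gamma, Q(a, x)

    def block_frequency_test(bits, M=128):
        # Split the sequence into N non-overlapping M-bit blocks, measure the
        # proportion of ones in each block, and compare it to the expected 1/2
        # with a chi-square statistic (NIST SP 800-22, section 2.2).
        n = len(bits)
        N = n // M
        pi = np.asarray(bits[:N * M]).reshape(N, M).mean(axis=1)
        chi_sq = 4.0 * M * np.sum((pi - 0.5) ** 2)
        return gammaincc(N / 2.0, chi_sq / 2.0)  # p-value; p < 0.01 counts as a fail

    rng = np.random.default_rng()
    print(block_frequency_test(rng.integers(0, 2, size=8800)))
    print(block_frequency_test(rng.integers(0, 2, size=880)))

Note that SP 800-22 expects you to run many sequences and look at the proportion of p-values above 0.01, so a single failure at 8800 bits isn't by itself conclusive evidence that the generator is broken.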
Which raises the question: if everything is generated, why bother reading it at all?
Just ask the LLM what you want to know—why treat headlines like bookmarks?
In my experience, AI has shown me that a lot of the stuff I do online, like watching videos and reading random articles, is mostly vapid, pointless nonsense.
AI slop has finally woken me up and I am prioritizing IRL activities with friends and family, or activities more in the real world like miniature painting, piano, etc. It's made me much more selective with what activities I engage in, because getting sucked into the endless AI miasma isn't what I want for my life.
Personally, it’s highlighted the value of physical books and helped me spend less time getting sucked into rabbit holes on devices. I’ve been much more deliberate about what text I choose to read. Been burning through classics that have been on my shelf for decades.
You can use the LLM, but you don't have the rest of the data the author relied on. An LLM can generate everything from a minimal prompt, but that is a recipe for slop. If you come with materials, discuss them and their implications, express your POV, and then generate, the article will reflect your ideas and the data it was fed.
I know it is fashionable to put everything an LLM outputs in the slop box, but I don't think that reflects reality.
It's not for everyone. Some people have excellent reasons why it isn't workable for them. Others have had terrible experiences. It takes a great deal of practice to be a good pair and, if you don't start by working with an experienced pair, your memories of pairing are unlikely to be fond.
However.
I paired full-time, all day, at Pivotal, for 5 years. It was incredible. Truly amazing. The only time in my career when I really thrived. I miss it badly.
With AI-based coding (no, I won't use "Vibe coding", thank you) this workflow improves a lot.
Instead of jumping straight into code, I have my engineers write a Notion doc that describes what needs to be built. Think of it like an LLD, but really it’s a prompt for Claude-code.
This forces them to think through the problem at a low level, and they share the doc with me before sending it to Claude — so I get to review early in the process.
Once we finalize this "LLD" or "low-level-prompt", they hand it to Claude. The next time I see the work is in a GitHub PR. At that point, we rarely have to throw everything away and start from scratch.
Instead of obsessing over my rating, I just turned on Zen mode in Lichess and hid all the numbers.
The game became fun again — Just "oh that was a cool tactic, let me try this weird opening, what happens if I sacrifice my knight for vibes?"
Turns out the rating was a distraction from the actual game.
Same energy as your point about "fucking around" being the point.
The elo was just making me miserable; removing it made me better anyway.