> in Scheme you can redefine `define` to be number 5.
This is like asking "what if your coworker named every error variable `ok`," so everything read `if ok { return errors.New("not ok!!") }`. It's possible, but no one does it.
This is why `defmacro` and `gensym` in common lisp are awesome, and similarly why Go's warts don't matter. Much of programming language ugliness is an "impact x frequency" calculation, rather than one or the other.
It's also why JavaScript is so terrible: you run into its warts constantly, all day long.
"No one does it" is extremely relative. Take your closing remark about JavaScript: I don't run into JS warts very often at all, and I'm a professional web developer who works in it day in and day out. I guess my team just doesn't do dumb JS stuff?
But apparently lots of other people do run into them regularly, so I believe that such things do exist.
By the same token, I've heard countless reports of people struggling with the flexibility that Lisp offers, with co-workers who abuse it to create nightmarish situations. That you haven't experienced that doesn't mean no one does.
I don't mean "do dumb stuff", I mean I've literally never seen anyone redefine the `define` keyword in any code.
With JavaScript, I do see people use `===` frequently. It's a wart of the language that the operator even needs to exist. Using it isn't "dumb" - the question is how frequently you're assaulted by the bugs of the language (not bugs in your own code).
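To make the `==`/`===` point concrete, here's a minimal sketch of the coercion behavior that forces everyone to type the extra `=` (values chosen purely for illustration):

```javascript
// `==` applies type coercion before comparing, with famously
// non-obvious results:
console.log(0 == "");           // true  ("" coerces to 0)
console.log(0 == "0");          // true  ("0" coerces to 0)
console.log("" == "0");         // false (both strings, so no coercion)
console.log(null == undefined); // true

// `===` compares without any coercion, which is why style guides
// tell you to use it everywhere:
console.log(0 === "");          // false
console.log(0 === "0");         // false
```

The wart isn't that `===` is hard to use; it's that `==` behaves badly enough that a second equality operator had to exist at all.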
Same here. I have a 77 year old father who has had a stroke and is not going to be able to wrap his head around the notion of 2FA. It's a bridge too far. Not going to happen. He's just going to get confused and give up when faced with stuff he doesn't understand (that's literally how it works with him). I've seen him break into tears because he couldn't figure out some mobile phone UX. Kind of heartbreaking to watch. That's what strokes do to people. Stuff like this doesn't help people like that.
I'm thinking the built in browser password manager might be a safer, more usable option for him at this point. It's probably what I'll have to recommend when this inevitably blows up in a few months.
2FA is a hurdle for normal users. I've had to support 2FA on our Google Workspace account for some of my non-technical colleagues, and it's a PITA: almost 100% of them needed me to unblock their account at some point. Absolutely terrible UX. Most users aren't compatible with this stuff. That's why all the big companies are pushing for passkeys now, though I don't think that actually fixes the problem so much as moves it.
But I get it. Bitwarden wants to appeal to corporate IT managers so they can sell expensive enterprise licenses, because IT managers are most of their paying customers. And for that they need to sacrifice UX, because IT managers like liability even less than service providers (like Bitwarden) do. They'll make their users jump through hoops one hundred percent of the time if it reduces their exposure to their own mistakes. To them, sacrificing UX is a small price. But it is a sacrifice that buys ass coverage for Bitwarden and IT managers at the cost of users.
I'm very frustrated about this because for a lot of my family members, their phone is the only computing device they have.
When they lose it, they lose access to email, and there is no backup plan here. Using bitwarden is far far superior to them using the same password everywhere, but this will drive them back to the same behavior.
>I'm very frustrated about this because for a lot of my family members, their phone is the only computing device they have.
That's actually a really good point. My 1Password setup is resilient to device loss because I have multiple registered devices, any of which can spin up a new device with just my master password.
But if you're in a situation where you only ever have one device and lose it, then you can't bootstrap a new registration going from 0 devices to 1.
There's definitely a security/resiliency tension here. Is it desirable to have your password manager protected by just a user-specified password? That can allow you to go from 0 devices to 1, but it also greatly lowers defenses against account compromise. You can have a paper recovery kit, but people will misplace that, if they even create it in the first place. Social attestation could be a decent if imperfect mitigation: if everyone is on the same family group, then maybe the admin or the group can recover access for any one person.
I'd love actually an "app" that was just the bookmark manager for launching the browser.
Chrome, Firefox, etc. on Android all either push me toward piles of shortcut icons with folders (cluttered and exhausting to manage), or a hierarchical menu that's 20 clicks away.
You can always use "Add to Home screen" from Chrome on mobile, and then create a shortcut folder with your collection of "web" apps. I've been doing this for the past couple years for a few local restaurants that use Square for direct online ordering, to shortcut the link to their ordering site.
> biggest GPU compute cluster in the world right now
This is wildly untrue, and most in industry know that. Unfortunately you won't have a source just like I won't, but just wanted to voice that you're way off here.
> This is wildly untrue, and most in industry know that. Unfortunately you won't have a source just like I won't, but just wanted to voice that you're way off here.
Sure, we probably can't know for sure who has the biggest, since they try to keep that under wraps for competitive reasons, but it's definitely not "wildly untrue." A simple search will show that if it's not the biggest, it's damn near one of the biggest. Just a quick sample:
The distinction is that larger installations cannot form a single network. Before xAI's new network architecture, only around 30k GPUs could train a model simultaneously. It's not clear how many can train together with xAI's new approach, but apparently it is >100k.
2 months ago Jensen Huang did an interview where he said xAI built the fastest cluster with 100k GPUs. He said "what they achieved is singular, never been done before".
https://youtu.be/bUrCR4jQQg8?si=i0MpcIawMVHmHS2e
Meta said they would expand their infrastructure to include 350k GPUs by the end of this year. But my guess is they meant a collection of AI clusters, not a single large cluster. In the post where they mentioned this, they shared details on 2 clusters with 24k GPUs each. https://engineering.fb.com/2024/03/12/data-center-engineerin...
What's singular is putting 100k H100s on a single training fabric. Which, yay, cool supercomputer, but a distributed setup with 5 times the machines runs just as fast anyway.
Huang is still a CEO trying to prop up his product. He'd tell you putting an RTX4090 in your bathroom to drive an LED screen mirror is unprecedented if it meant it got him more sales and more clout.
Demand disappeared for good local reporting, long before any LLM. Even NYT & WSJ struggle with their subscriptions, and they have a huge addressable market.
The "slop" is just a reflection of this bottom feeding attempt to get some tiny bit of ad revenue. Using an LLM or a "pay per article" contracting fleet looks very similar.
Most local publications don't pay many, if any, full-time journalists either way. I struggle to find great journalism at any local level around me, and discovery is difficult when anything reasonably good is usually a Substack behind a paywall. But even then, they aren't pushing on local issues in person. None are going to the mayor's press conferences and directly asking questions.
Learning to leverage AI as it exists today takes effort, but is usually worth it.
However, ignoring that effort understates how far these products still have to go. Speech is great because there is just a single interface to integrate with. Obviously I'm biased toward my employer's speech product, but I'm sure there are many.
Biggest thing for me was when I saw that the lem editor[0] posted on hacker news[1] was a small editor, which has 3 top level features: common lisp API, LSP support, and copilot support.
I've installed gptel[2] in emacs, and I'm hacking up a few tools that really make it shine. Up next is figuring out voice + AI + emacs :)
org-review is interesting, but I just added another TODO state: DEFER
Then projects, todos, agenda items etc can go from TODO -> DEFER and I know that they're "not now" items. That has seemed sufficient for me. Tracking exactly when they're reviewed has been too much, and not everything needs a scheduled time in the future for review.
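For anyone wanting to replicate this, a minimal per-file sketch of declaring that extra state (the keyword names just mirror the DEFER state described above):

```org
# Keywords before the "|" are active states; DEFER marks "not now" items.
#+TODO: TODO DEFER | DONE
```

With that in place, cycling a heading's state (e.g. with C-c C-t) offers DEFER alongside TODO and DONE.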
Does anyone handle org-mode for attachments + mobile sync?
I use Autosync on Android and a script that runs on my laptop. This works great for the text files, but the attachments/data files (screenshots, say) are a pain to find and open on mobile, and even more painful to add from mobile.
I'm fine with quite elaborate set ups if it solves this problem.
Not that I am aware of. The mobile story with emacs is pretty painful. I use Dropbox + beorg (iOS). If I have to deal with attachments, off I go spelunking through the Dropbox folders.
Realistically, I don't see the mobile experience improving much. Given Emacs' utility, though, I don't mind it.