I run https://pico.sh where we don’t ask for email. On our website we even instruct users to generate a token so that if they do lose their key they can use it to recover their account.
People regularly lose their ssh keypair and also don’t generate a token. I think using email as a form of recovery is totally fine; regardless, when you have to pay for the service you’re going to give up your email (and other personal info) via the payment processor anyway.
I would eventually want even payment processors to stop asking for email. They have my address and government ID, which covers any liability-related reasons. Ideally, we would use federated auth, where auth providers aren't using email at all. I'd imagine your backend ends up simpler as a result, too.
And kudos on your service, I'll keep it in mind next time I'm picking a provider.
Andrej talked about this in a podcast with Dwarkesh: the same was true for the internet. You will not find a massive spike in the data from when LLMs were released; the technology becomes embedded in the economy and you see a gradual rise. Further, the kind of impact the internet had took decades, and the same will be true for LLMs.
You could argue that if I started marketing dog shit too, though. The trick is only applying your argument to the things that will go on to be good. No one’s quite there yet. Probably just around the corner, though.
When you are at the age to notice your parents’ well-being, you are no longer a young kid. Little kids are extremely demanding, both physically and mentally. That’s not to say it gets any easier, but when you aren’t sleeping for 4 months it hits totally different.
I have a dotfiles git repo that symlinks my dotfiles. Then I can either pull the repo down on the remote machine or rsync it over. I’m not sure why I would pick this over a git repo with a dotfiles.sh script.
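For the curious, the whole script is roughly this (the repo path and file list are just placeholders, not my actual setup):

    #!/bin/sh
    # dotfiles.sh: symlink each tracked dotfile from the repo into $HOME
    REPO="$HOME/dotfiles"                 # wherever the repo was cloned
    for f in .bashrc .vimrc .tmux.conf .gitconfig; do
      ln -sf "$REPO/$f" "$HOME/$f"
    done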
This is for when you have to ssh into some machine that's not yours, in order to do debugging or troubleshooting -- and you need your precious dotfiles while you're in there, but it would not be nice to scatter your config around and leave it as a surprise for the next person.
This installs into temp dirs and cleans it all up when you disconnect.
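The general shape of the trick is something like this (not the actual implementation, just a sketch, with host standing in for the remote machine):

    # copy dotfiles into a throwaway dir on the remote host, start a
    # shell pointed at them, and delete everything when the shell exits
    tmp=$(ssh host 'mktemp -d /tmp/dotfiles.XXXXXX')
    scp -q ~/.bashrc ~/.vimrc "host:$tmp/"
    ssh -t host "bash --rcfile $tmp/.bashrc; rm -rf $tmp"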
Personally, my old-man solution to this problem is different: always roll with defaults even if you don't like them, and don't use aliases. Not for everyone, but I can ssh into any random box and not be flailing about.
Even with OP's neat solution, it's not really going to work when you have to go through a jump box, or connect over a serial connection or some enterprise audit-logged ssh wrapper, etc.
There's definitely something to be said for speaking the common tongue, and being able to use the defaults when it's necessary. I have some nice customisations, but make a point of not becoming dependent on them because I'm so often not in my own environment.
On the other hand, your comment has me wondering if ssh-agent could be abused to drag your config along between jump hosts and enterprise nonsense, like it does with key forwarding.
I think you're joking, but to clarify -- not personally yours. A misbehaving worker box, an app server in the staging environment, etc. A resource owned by the organization for which you work, where it would not be appropriate for you to customize it to your own liking
I’d rather figure out how to stop taxing people and place the burden on companies entirely. Make it progressive like income tax but make it based on revenue not profits.
This is a great example of why letting websites have direct access to git is not a good idea. I started creating static versions of my projects, with great success: https://git.erock.io
Do solutions like gitea not have prebuilt indexes of the git file contents? I know GitHub does this to some extent, especially for main repo pages. It seems wild that the default behavior of a web forge would be to hit the actual git server on every HTTP GET request.
The author discusses his efforts at caching; in most use cases it makes no sense to pre-cache every possible piece of content (because real users don't need to load that much of the repository that fast), and in the case of bot scrapers caching doesn't help because they only fetch each file once.
I'd argue every git-backed loadable page in a web forge should be "that fast", at least in this particular use-case.
Hitting the backing git implementation directly within the request/response loop seems like a good way to burn CPU cycles and create unnecessary disk reads from .git folders, possibly killing your drives prematurely. Just stick a memcache in front and call it a day, no?
In the age of cheap and reliable SSDs (approaching memory read speeds), you should just be batch-rendering file pages from git commit hooks. Leverage external workers for rendering the largely static content. Web-hosted git code is more often read than written in these scenarios, so why hit the underlying git implementation or DB directly at all? Do that for POSTs, sure, but that's not what we're talking about (I think?)
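Something in the spirit of a post-receive hook that re-renders only the files touched by a push; render-page here is a hypothetical stand-in for whatever the forge actually uses to turn a blob into HTML, and the output path is made up:

    #!/bin/sh
    # .git/hooks/post-receive: pre-render changed files as static pages
    while read -r old new ref; do
      git diff --name-only "$old" "$new" | while read -r path; do
        out="/var/www/pages/$path.html"
        mkdir -p "$(dirname "$out")"
        # render-page is hypothetical; swap in the forge's real renderer
        git show "$new:$path" | render-page > "$out"
      done
    done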
Do you really think the user didn't try explaining the problem to the LLM? Do you not see how dismissive the comment you wrote is?
Why are some of you so resistant to admit that LLMs hallucinate? A normal response would be "Oh yeah, I have issues with that sometimes too, here's how I structure my prompts." Instead you act like you've never experienced this very common thing before, and it makes you sound like a shill.