monster_truck's comments | Hacker News

Absolutely. I'd imagine not being able to use the card at Costco alone would be enough to have them entertaining surprising concessions. It was the first thing I thought of, with Chase CCs being Visa instead of Mastercard.

Chase issues cards on both the Visa and Mastercard networks (e.g. certain cobrands and the Freedom Flex), so I doubt this was a serious consideration.

The accessibility of this website is deplorable. There is no way anyone responsible for this website has our best interests in mind.

Joe Gebbia, co-founder of Airbnb, is the head of the new national design studio. You can direct feedback to him.

If the author is reading these comments: please write about the fully semantic IDE as soon as you can. Very interested in hearing more about that, as it sounds like you've used it a lot.

I'm constantly telling people to look up physical therapy movements/stretches for whatever they've got going on. Slept wrong? Tweaked your neck? You absolutely do NOT have to suffer with that until it goes away on its own; a physical therapist can show you how to fix it.

If your insurance covers it, go see one! Them being able to actually see and feel what's going on specifically with you makes them markedly better at their jobs.


Can't find the link now, but a very comprehensive analysis of surgery vs physiotherapy for lower back issues found that physiotherapy was as effective as invasive, often dangerous spinal surgery. The only difference was time: surgery plus recovery physio fixed the pain in about 4-6 months, while physiotherapy alone took 18-24 months.

But on the plus side, physiotherapy is “free” and has no real risk, and most people who opted for the physiotherapy path found that they were happier and also fixed a lot of other pains, simply because of the regular stretching and exercise.


Would be great if you could find that link.

It's a good thing we're jumping to conclusions instead of exhaustively evaluating all of the places values exactly like this one appear when swing and quantization are on, and especially when mixing 8-, 12-, and 16-bit samplers and sequencers. Never mind all of the little nudges from byte window mismatches when reading, playing back, or manipulating samples at varying bit depths and sample rates.
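(Purely as an illustration, with made-up numbers: here's a tiny Python sketch of the kind of nudge I mean. Re-quantizing a sample value between bit depths doesn't round-trip cleanly, so small offsets like this creep in all over the place.)

    # Hypothetical example: round-trip a 12-bit sample value through 8 bits.
    # The specific numbers are made up; the point is that the round trip is lossy.
    def requantize(value, from_bits, to_bits):
        # Rescale an unsigned sample from one bit depth to another (integer truncation).
        return value * ((1 << to_bits) - 1) // ((1 << from_bits) - 1)

    original = 1337                           # a 12-bit sample (0..4095)
    as_8bit = requantize(original, 12, 8)     # 83
    restored = requantize(as_8bit, 8, 12)     # 1332, not 1337
    print(original, as_8bit, restored)

Do that a few times across mismatched gear and you end up with values that look meaningful but aren't.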

> It’s a good thing we’re all jumping to conclusions

I wholeheartedly agree. This thread would have been a lot less fun to create if I’d had to apply rigorous methodology and proper hypothesis evaluation practices. I’m really glad someone else appreciated that too :D


Honestly the pastebin link needs to be re-submitted and frontpaged.

I even encounter this in professional A/V contexts! If VLC can read and decode your stream, that's a good sign that most things should be able to view it, but it absolutely should not be trusted as any measure of doing things correctly/to spec.


You don't have to do any of that if you simply don't make mistakes in the first place, FYI.

This is why I exclusively write C89 when handling untrusted user input. I simply never make mistakes, so I don't need to worry about off-by-ones, overflows, memory safety, or use-after-frees.

Garbage collection and managed types are for idiots who don't know what the hell they're doing; I'm leet af. You don't need to worry about accidentally writing Heartbleed if you simply don't make mistakes in the first place.


Attitudes like this one are why people prefer working with AI to code, lol.

It's obviously tongue-in-cheek.

That they are still training models on Objective-C is all the proof you need that it will outlive Swift.

When is someone going to vibe code Objective-C 3.0? Borrowing all of the actual good things that have happened since 2.0 is closer than you'd think thanks to LLVM and friends.


Why would they not? Existing Objective-C apps will still need updates and various work. Models are also still trained on assembler for architectures that don't meaningfully exist today.


I’m sure you can find some COBOL code in many of the training sets. Not sure I would build my next startup using COBOL.


Groq was targeting a part of the stack where CUDA was weakest: guaranteed inference time at a lower cost per token at scale. This was in response to more than just Google's TPUs; they were also one of the few realistic alternative paths OpenAI had with those wafers.


It doesn't do anything. It shouldn't be shared, in case people who don't know better are tricked into believing it does.

