What is it if not a punishment? If you’re a high earner (not an obscene earner, mind you), you already pay 50+% in taxes; on top of that, you now have to pay even more?
Couldn’t it just as easily be equivalent to saying “you grew this year, so contribute some money back to society for enabling you to have the educated hiring base/financial infrastructure/physical infrastructure that enabled you to grow”?
Like, sure, you don’t owe growth taxes for a quarter when you didn’t grow. But why should you be refunded just because prior taxable growth isn’t denominated in money in a bank account?
> you grew this year, so contribute some money back to society for enabling you to have the educated hiring base/financial infrastructure/physical infrastructure that enabled you to grow
Apparently paying for gas, water, electricity, property taxes, and taxes on everything you buy isn’t enough; now you have to “contribute for enabling”. What’s next? Paying because they “enable you to breathe”?
Here's a sad prediction: over the coming few years, AIs will get significantly better at critical evaluation of sources, while humans will get even worse at it.
I wish I could disagree with you, but what I'm seeing on average (especially at work) is exactly that: people asking ChatGPT things and accepting hallucinations as fact, then fighting me when I say it's not true.
Hot take: Humans have always been bad at this (in the aggregate, without training). Only a certain percentage of the population took the time to investigate.
For most people throughout history, whatever is presented to them that they find believable is the right answer. AI just brings them source information faster, so what you're seeing is mostly the usual behavior, only faster. Before AI, people would not have bothered to try to figure out an answer to some of these questions. It would've been too much work.
The secret sauce behind good understanding, taste, and style (both for coding and writing) has always been in the fine-tuning and RLHF steps. I'd be skeptical that the signals a few GitHub repos or blogs generate at the initial stages of training are that critical. There's probably also a filter for good taste on the initial training set, and these sets are so large that not even a single full epoch is done on the data these days.
Jujutsu has been the tool that actually got me into making full use of version control software. Before, despite multiple attempts at grasping the deeper fundamentals, I only learned the bare minimum git commands I needed to make commits, branches, and very careful merges. Jujutsu maps to a much clearer and simpler mental model. Blockchains are nifty and all, but awfully inconvenient for us meatbags to work with.
There are odd cases where it still has uses. When I was a teacher, some of the gamifying tools didn't allow video embeds without a subscription, but I wanted to make some "what 3D operation is shown here?" questions with various tools in Blender. For largely static, less-than-a-second loops, GIF sizes were pretty comparable to video, and likely slightly higher quality with some care taken to reduce color palette usage.
But I fully realize, there are vanishingly few cases with similar constraints.
If you need animated images in emails or text messages, GIF is the only supported format that will play the animation. Because of the size restrictions of these messaging systems, the inefficient compression of GIFs is a major issue.
Just out of curiosity, what's the problem libwebp has with them? I wasn't aware of cases where any image format would just cross its arms and refuse point blank like that.
We have never been able to resolve it better than knowing this:
Certain pixel colour combinations in the source image appear to trip the algorithm to such a degree that the encoder will only produce a black image.
We know this because we have been able to encode the images by (in pure frustration) brute force: manually moving a black square to different locations on the source image and then trying to encode again. Suddenly it will work.
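That brute-force scan is easy to automate. Here is a minimal sketch, assuming Pillow (with WebP support compiled in) as a stand-in for the actual encoding pipeline; the function names, the 256px step size, and the small blackness tolerance for lossy round-trip noise are my own choices, not from the original workflow:

```python
import io
from PIL import Image

def is_black(img, tol=8):
    """True if every channel stays within `tol` of zero (allows for lossy noise)."""
    return all(hi <= tol for _, hi in img.convert("RGB").getextrema())

def encodes_to_black(img):
    """Round-trip the image through the WebP encoder and check the result."""
    buf = io.BytesIO()
    img.save(buf, format="WEBP", quality=80)
    return is_black(Image.open(io.BytesIO(buf.getvalue())))

def find_trigger_region(img, square=256):
    """Slide a black square across a bad image until encoding stops producing
    an all-black result; returns the patch position that 'fixed' it, else None."""
    if not encodes_to_black(img):
        return None  # image encodes fine, nothing to isolate
    w, h = img.size
    for y in range(0, h, square):
        for x in range(0, w, square):
            patched = img.copy()
            patched.paste((0, 0, 0), (x, y, min(x + square, w), min(y + square, h)))
            if not encodes_to_black(patched):
                return (x, y)
    return None
```

Logging which square position "unbreaks" the encode across a batch of bad images might help narrow down which pixel patterns trip the encoder.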
Images are pretty much always exported from Adobe, often smaller than 3000x3000 pixels. Images from the same camera, same size, same photo session, same export batch will work, and then suddenly one out of a few hundred becomes black, and only the webp version, not the other formats; the rest of the photos work for all formats.
A more mathematically inclined colleague tried to have a look at the implementation once, but was unable to figure it out because they apparently could not find a good written spec of how the encoder is supposed to work.
Exactly. I don't know about "big design houses" like Apple, but in my small shop designers _only_ care about static screen stories. They don't care how the user will click those icons, how focus will work, or how any dynamic aspects of a complex UI work.
In the past this was a given, provided by the desktop environment; now it's all rebuilt in Material or some other design system, but without any of the advanced behavior. It only "looks good on a static screen".
GNOME on Linux prevents it. You get a notification "Discord updater is ready" instead, which you can activate if you want to give it focus, which I never do. F the Discord updater.