Better yet, link up with Kaggle and provide prize funding for a few dozen competitions, most of them open to UK residents only. Directly incentivise the most driven kinds of people to compete and learn, and give local firms a way to identify talent.
But I guess donating another £4MM to PwC is more sensible.
You could have contracted 5 small firms for £400k each (which, for this project, frankly seems excessive), and even if a couple failed to deliver you'd have had 3 separate products to choose the best one from, £148k to legally chase up the firms that failed to deliver, and still over £1.8 million left over.
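Spelling out the arithmetic, taking the figures above at face value (the £148k legal figure is the comment's own, not an official number):

£4,000,000 − (5 × £400,000) = £2,000,000
£2,000,000 − £148,000 = £1,852,000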
I agree a good solution isn't easy to come up with, but the status quo is certainly an outrageously awful one.
I'm struggling to follow the logic on this. Glocks are used in murders, Proton has been used to transmit serious threats, and C has been used to write malware. All of them can be legitimate tools in professional settings where the users don't use them for anything illegal. My Leatherman doesn't need a tipless blade to keep me from stabbing people, because I'm trusted not to stab people.
The only reason I don't use Grok professionally is that I've found it to not be as useful for my problems as other LLMs.
The Gary Marcus proposal you refer to was about a novel, and not a codebase. I think GP's point is that motivations require analysis outside of the given (or derived) context window, which LLMs are essentially incapable of doing.
I don’t really see how this is an AI issue. We use AI all the time for code generation, but if you put this on my desk with specific instructions to be light on review, and it’s not a joke, I’m probably checking to see if you’re still on probation, because that’s an attitude that’s incompatible with making good software.
People with this kind of attitude existed long before AI and will continue to exist.
Totally, and I'm not saying otherwise.
I'm saying that it takes the same amount of work to become a good engineering team even with AI.
But it takes far less work to become a worse team. They say C++ makes it much easier to shoot yourself in the foot; in a similar way, LLMs are hard to aim. If your team can aim properly, you're going to hit more targets more quickly, but if and when you miss, the entire team is in wheelchairs.
> The following fees apply when a user completes [...] any app installs within 24 hours of following an external content link
So does this mean a malicious competitor or a motivated disgruntled user could fraudulently trigger millions of app installs? Given the scale smartphone fraud farms operate at these days, paying a few thousand dollars for such a service to make a developer spend a few million dollars on worthless installs (or a lot of resources arguing with Google) seems like a worthwhile endeavour for the motivated.
If linking to external content isn't viable, developers will stop linking to external content. If developers stop linking to external content, Google stops making money. It's not an infinite money glitch: if Google didn't go after fraud, it would hurt the profit they can make from this.
I got my AdSense account disabled for "fraudulent click activity", or however they worded it (someone clicked my ads frequently, I assume?). Google then kept all of my hard-earned €16 or so.
The only thing that gives some slight semblance of hope is that he at least acknowledges that Mozilla is vulnerable, and he very, very briefly mentions needing new sources of revenue.
No mention of an endowment (like Wikipedia has) or concrete plans to spend money efficiently or in a worthwhile way, and I sure hope ‘invest in AI’ doesn’t mean ‘piss away 9 figures that could have set up an endowment to give Mozilla some actual resilience’.
My hope is that he’s at least paranoid enough about Mozilla’s revenue sources to do something about their current position, which gives them no resiliency. Mozilla has for well over a decade now been in a pathetic state where, if Google turns off the taps, it is quite simply over. He talks a lot about people’s trust in Mozilla. I don’t really remember what he’s talking about, to be honest, but if Mozilla gets to a point where it seems like it can exist without simply being Google’s monopoly-defence insurance, perhaps I’ll remember the feeling of trusting Mozilla. I miss it.
Being short anything AI now seems like tasting a shotgun, unless you really want to give Citadel and Jane Street money, since the options premiums are so high. But I have been trying to get a bit less exposed to tech over the last few months, and have just been buying other ETFs that are less exposed to it.
MS (or any large company, for that matter) didn’t participate in BLM discussions and get speakers to describe themselves and list their pronouns because they thought it was virtuous or right; they were just following the cultural zeitgeist in a way they thought would make them more money.
Walking it back is just the same behaviour manifesting in a different way. Investors don’t value DEI in the same way they did before so it becomes an expense with no value to shareholders, so it gets cut.
It’s very cynical but nothing about this should be particularly shocking.
There has certainly been an overreaction, and it continues even after those efforts have been walked back.
I have yet to hear a good justification for why people who are not interested in programming should be encouraged to become interested purely in the name of equality, yet my institution is still spending huge amounts of public money on trying to achieve exactly that.