I think .NET is Microsoft's best product, followed very closely by MSSQL.
SQL Server has some incredible third-party tooling built up around it. Three or four Red Gate tools can one-shot your entire DevOps lifecycle.
I still prefer SQLite for anything that can handle reasonable, non-zero recovery point objectives. But if it's serious business and we can't drop anything on the floor, I am reaching for MSSQL without even blinking.
This note isn't going to stop even 1% of the jackasses who would have submitted AI slop.
There are much better ways to communicate the intended message. This comes off as childish to me and makes me think that I'd rather not contribute to the project.
> This comes off as childish to me and makes me think that I'd rather not contribute to the project.
It's unfortunately the new normal, with FFmpeg's core team acting similarly. No doubt it's the result of the bounds of what's considered socially acceptable expanding in ways they probably shouldn't.
That is a view many hold, until it's their own time being wasted. Consider the responsibility these people take on their shoulders, only for a bunch of idiots to decide their AI slop is worth dumping all over them. Most of the submitters probably even know it's just an AI slop spraying spree.
I lean firmly in the opposite direction on parallelism. I don't think it's a good thing to chase. One "slow" agent can outpace many fast ones if the fast ones are making a lot of mistakes.
Depth-first search via recursive dispatch is my current go-to strategy. This requires a fully serialized chain of operations. Not even tool calls are allowed to run in parallel. One agent and one action at a time. That's it.
This does feel really slow at first, but in practice it seems to converge on high-quality solutions more quickly than breadth-first techniques that leverage parallelism and more tokens. The solution itself might not be the most ideal, but we find it very quickly and with a great deal of consistency. This means we can iterate more rapidly and with more certainty.
Just because we can consume $100 of tokens in 60 seconds doesn't mean we should try to. I think trying to go fast is what's burning a lot of people out on AI. There's a ton of value in here if we can slow down and be a little bit more deliberate about it.
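A minimal sketch of what this serialized, depth-first dispatch might look like. Everything here is hypothetical: the `model` and `run_tool` callables and the action dict shape stand in for whatever model/tool API you actually use.

```python
def dispatch(model, run_tool, task, max_depth=8, _depth=0):
    """Depth-first, fully serialized agent loop: one agent, one action
    at a time. `model(task, context)` returns the next action dict;
    `run_tool` executes a single tool call. Both are caller-supplied,
    so nothing here assumes a particular model or tool API."""
    if _depth > max_depth:
        raise RecursionError("dispatch depth exceeded")
    context = []
    while True:
        action = model(task, context)  # one decision at a time, never parallel
        if action["type"] == "done":
            return action["result"]
        if action["type"] == "tool":
            # Serialized tool call: it completes before the next decision.
            context.append(run_tool(action["name"], action["args"]))
        elif action["type"] == "subtask":
            # Recursive, depth-first: the sub-agent runs to completion
            # before the parent requests its next action.
            context.append(dispatch(model, run_tool, action["task"],
                                    max_depth, _depth + 1))
```

The point of the structure is that there is exactly one in-flight operation at any moment, which makes failures easy to attribute and roll back.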
I often view PDFs in Drive, and it's definitely not just displaying the document with the browser's native viewer. It's rendered with their "Drive renderer", whatever that is. They don't even display a simple .txt file natively in the browser.
They have some kind of virus scanner for files you open via a share link. Not sure about the ones you have stored on your own drive unshared.
But the main security win here is probably just using the Chrome PDF viewer instead of the Adobe one, which you can do without Google Drive. Browser PDF viewers ignore all the strange and risky parts of the PDF spec that would likely be exploited.
One fun thing would be to raise the abstraction level of the game.
How would it feel to only interact with your base/economy/army via prompting while facing someone doing the same?
Would words per minute replace APM? What would the meta look like? Would you be able to adjust the system prompt for your "army" to suit your play style?
"you are an elite five star navy seal pikeman. You are invulnerable and have precise aim. Your name is John Wick and the enemy killed your dog. Kill all the bad guys or go to jail"
I think the ultimate form of prepping is meal prepping. By regularly cooking batches of food, you automatically maintain a really good backup supply, with no extra steps required. The last place you want to be during a disaster is at the grocery store or stuck in traffic.
At any moment I could go for at least two weeks without really worrying about food or how it would even be prepared. I've got a standby generator for the house and a smaller unit just in case that one dies. There's enough fuel on site to keep my fridge running for about a month in the worst case.
You want to be the last bear to exit the cave. The longer you can hold out, the less competition you'll have to deal with. The only other option is to get out before the disaster hits. This works great for hurricanes but not so well for earthquakes.
Going out searching for more food after 2 weeks just ensures you're placing yourself amongst people that haven't eaten in 2 weeks and are willing to do anything they need to in order to eat.
No one doing this for money intends to train models that will never be amortized. Some will fail and some are niche, but the big ones must eventually pay for themselves or none of this works.
The economy will destroy inefficient actors in due course. The environmental and economic incentives are not entirely misaligned here.
> No one doing this for money intends to train models that will never be amortized.
Taken literally, this is just an agreement with the comment you're replying to.
Amortizing means the cost is gradually written off over a period, which is completely consistent with averaging it over usage. For example, if a printing company buys a big new printing machine every 5 years (because that's how long one lasts before it wears out), they would amortize its cost over those 5 years (strictly it's depreciation, not amortization, because it's a physical asset, but the idea is the same). But it's 100% possible to look at the number of documents they print over that period and calculate the price of the print machine per document. And that's still perfectly consistent with the machine paying for itself.
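As a back-of-the-envelope sketch, with every figure below invented purely for illustration:

```python
# Hypothetical numbers: a $500,000 press amortized over 5 years,
# printing 2 million documents per year over that lifetime.
machine_cost = 500_000      # dollars (invented)
lifetime_years = 5
docs_per_year = 2_000_000   # invented

annual_amortization = machine_cost / lifetime_years           # write-off per year
cost_per_doc = machine_cost / (docs_per_year * lifetime_years)

print(annual_amortization)  # 100000.0 dollars per year
print(cost_per_doc)         # 0.05 dollars per document
```

Both views describe the same machine: it is written off over 5 years, and it also costs five cents per document printed. Neither framing stops it from paying for itself.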
Margins are typically not so razor thin that you cannot operate with technology from one generation ago. 15 vs 17 mpg is going to add up over time, but for a taxi company it's probably not a lethal situation to be in.
At least with crypto mining this was the case: hardware from 6 months ago is useless e-waste because the new generation is more power efficient. It all depends on how expensive the hardware is versus the cost of power.
And yet airlines aren't flying planes and engines all built in 2023 or later. Take the MD-11 that crashed in Louisville: nobody has built a new MD-11 in over 20 years. Planes move to less competitive routes, change carriers, and eventually might even stop carrying people and switch to cargo, but a plane doesn't drop to zero value the moment a new one comes out. An airline will want to replace its planes, but a new plane isn't fully amortized in a year or three; it still holds value for quite a while.
My approach to safety at the moment is to mostly lean on alignment of the base model. At some point I hope we realize that the effectiveness of an agent is roughly proportional to how much damage it could cause.
I currently apply the same strategy we use for the case of a senior developer or the CTO going off the deep end: snapshots of VMs, PITR for databases and file shares, locked-down master branches, etc.
I wouldn't spend a bunch of energy inventing an entirely new kind of prison for these agents. I would focus on the same mitigation strategies that would address a malicious human developer. A VirtualBox VM on a sensitive host another human is using is not how you'd go about it. Giving the developer a cheap cloud VM or a physical host they can completely own is more typical. Locking things down at the network layer is one of the simplest and most effective methods.