The automotive industry is huge. It seems unlikely that they would lose lobbying efforts to startup tech companies, so it seems far more likely that cars get more expensive due to government-mandated self-driving "safety" features, but by just enough that Americans still buy them.
The automotive industry is being driven into the ground by Chinese manufacturers now. They would probably be OK with "if we can't sell cars, nobody can" and keeping just the (certified) robotaxi factories.
What "classic CI bug" makes bots talk with each other forever? Been doing CI for as long as I've been a professional developer, and not even once I've had that issue.
I've made "reply bots" before, bunch of times, first time on IRC, and pretty much the second or third step is "Huh, probably this shouldn't be able to reply to itself, then it'll get stuck in a loop". But that's hardly a "classic CI bug", so don't think that is what you're referring to here right?
If you’re making a bot in which there will be many sub-behaviors, it can be tempting to say “each sub-behavior should do whatever checks it needs, including basic checks for self-reply.”
And there lie dragons, because when a tired, junior, or (now) not-even-human engineer is writing a new sub-behavior, it's easy to assume that footguns either don't exist or are prevented a layer up. There's nothing more classic than that.
I'm kind of understanding, I think, but not fully. Regardless of how you structure this bot, there will be one entrypoint for the webhooks/callbacks, right? Even if there are sub-behaviours, the incoming event passes through something. Or are we talking about "sub-bots" here that are completely independent and use different GitHub users and so on?
Otherwise I still don't see how you'd end up with your own bot stuck in a loop replying to itself, but maybe I'm misunderstanding how others are building these sorts of bots.
Someone sets up a bot like this: on a trigger, read the message, determine which "skill" to use out of a set of behaviors, then let that skill handle everything, including whether or not to post.
Later, someone (or a vibe-coding system) rolls out a new skill, or a change to an existing skill, that omits or removes the self-reply guard on the assumption that there are guards at the orchestration level. But the orchestration level was depending on the skill to prevent self-replies. The new code passes linters and unit tests, but the unit tests don't actually mimic a thread re-triggering the whole system on the self-post. The new code gets yolo-pushed into production. Chaos ensues.
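As a minimal sketch of the fix (all names here, like BOT_USER_ID and pick_skill, are hypothetical, not from the thread): enforce the self-reply guard once, at the single entrypoint, so a skill that forgets its own check can't start a loop.

    BOT_USER_ID = "my-ci-bot"  # assumed stable identity for the bot account

    def pick_skill(event, skills):
        # Routing stub: first skill that claims the event wins.
        return next((s for s in skills if s.matches(event)), None)

    def handle_event(event, skills):
        # Centralized guard: no skill ever sees the bot's own posts,
        # so a new or vibe-coded skill can't reintroduce the loop.
        if event.get("author") == BOT_USER_ID:
            return
        skill = pick_skill(event, skills)
        if skill is not None:
            skill.handle(event)  # the skill decides whether/what to post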
2. Step X calls an external system that runs its own series of steps.
3. When that external system detects certain outcomes (errors, failed tests, whatever), it kicks off an automated process that runs back through the bot/system, where the system makes the same mistake again without any awareness that it's caught in a loop.
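A common way to break that kind of cross-system loop (a sketch under assumed names; LOOP_TAG and the event shape are made up for illustration) is to tag everything the bot triggers downstream and refuse to act once the tag exceeds a hop budget:

    LOOP_TAG = "x-bot-hops"  # hypothetical field carried through the pipeline
    MAX_HOPS = 2

    def should_process(event):
        # Events kicked back by the external system carry our hop counter;
        # stop once it exceeds the budget instead of re-running the same
        # failing steps forever.
        return int(event.get(LOOP_TAG, 0)) < MAX_HOPS

    def trigger_external(event, run_external):
        # Propagate (and bump) the counter on anything we cause downstream.
        payload = dict(event)
        payload[LOOP_TAG] = int(event.get(LOOP_TAG, 0)) + 1
        run_external(payload)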
> pretty much the second or third step is "Huh, probably this shouldn't be able to reply to itself, then it'll get stuck in a loop". But that's hardly a "classic CI bug",
If I've previously misunderstood your point, copy-pasting it doesn't clear anything up, no..?
I don't see why it's not a "classic CI bug". It's an easy trap to fall into, and I've seen it multiple times. Same with an "action that runs on every commit to main to generate a file and push a new commit if the file changes", which suddenly gets stuck in a loop because the generated file contains a comment with the timestamp of its creation.
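For the timestamp case, the usual fix (sketched here with a made-up comment format) is to ignore volatile lines when deciding whether the generated file actually changed, so nothing is committed and the workflow can't re-trigger itself:

    import re

    # Hypothetical volatile line the generator stamps into its output.
    VOLATILE = re.compile(r"^# Generated at .*$", re.MULTILINE)

    def really_changed(old_text, new_text):
        # Compare with the timestamp comment stripped: if only the
        # timestamp differs, skip the commit and the loop never starts.
        strip = lambda t: VOLATILE.sub("", t)
        return strip(old_text) != strip(new_text)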
Yeah, a bot replying to itself is pretty poor design. It's one of the first things you handle, even with toy bots. You can even hardcode the bot's knowledge of itself, since you usually have an unchanging ID. A much more common problem is if someone deploys another bot, which will lead your bot into having an endless back-and-forth with it.
> A much more common problem is if someone deploys another bot, which will lead your bot into having an endless back-and-forth with it.
This I'd understand; it's a bit trickier, since you basically end up with a problem typical of distributed systems.
But one bot? One identity? One GitHub user? It seems really strange to miss something like that; as you say, it's one of the earlier things you tend to handle when creating bots for chats and the like.
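One common mitigation for the two-bot ping-pong (sketched below with hypothetical names; this isn't from the thread, just a well-known pattern) is a per-thread reply budget, so the bot goes quiet after a few replies in any one thread no matter who keeps talking:

    from collections import Counter

    MAX_REPLIES_PER_THREAD = 5
    replies_sent = Counter()  # thread_id -> replies we've posted there

    def may_reply(thread_id):
        # Unlike a self-ID check, this also caps loops with *other*
        # bots we can't enumerate: once the budget is spent, the
        # back-and-forth starves out on our side.
        if replies_sent[thread_id] >= MAX_REPLIES_PER_THREAD:
            return False
        replies_sent[thread_id] += 1
        return True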
It's literally just four screenshots paired with this sentence.
> Trying to orient our economy and geopolitical policy around such shoddy technology — particularly on the unproven hopes that it will dramatically improve — is a mistake.
The screenshots are screenshots of real articles. The sentence is shorter than a typical prompt.
I liked reading through it from an "is modern Python doing anything obviously wrong?" perspective, but I strongly disagree that anyone should "know" these numbers. There are maybe 5-10 primitives in there whose rough timings everyone should know; the rest should be derived from big-O algorithm and data structure knowledge.
At Plotly we did a decent amount of benchmarking to see how much the different defaults `uv` uses contribute to its performance. This was necessary so we could advise our enterprise customers on the transition. We found you lose almost all of the speed gains if you configure uv to behave as much like pip as you can. A trivial example is the precompile flag; precompilation can easily be 50% of pip's install time for a typical data science venv.
The precompilation thing was brought up to the uv team several months ago, IIRC. It doesn't make as much of a difference for uv as it does for pip, because when uv is told to pre-compile, it can parallelize that process. That's easy to do in Python (the standard library even provides rudimentary support, which Python's own Makefile uses); it just isn't done in pip yet (I understand it will be soon).
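Concretely, the stdlib support mentioned above is compileall, which can fan bytecode compilation out across worker processes (workers=0 means one worker per core). Something like this is roughly what the parallel pre-compile step amounts to; the site-packages path is just an assumed venv layout:

    import compileall

    compileall.compile_dir(
        ".venv/lib/python3.12/site-packages",  # assumed venv layout
        quiet=1,     # only report errors
        workers=0,   # 0 = use one worker per CPU core
    )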
You'll never be able to do maintenance on or upgrade these things. The up-front cost seems extremely high given the risk of hardware failure or obsolescence at data center scales.
Plotly's new Plotly Studio product is a spec-anchored approach to building data applications. Each chart or dataset gets its own prompt/spec.
The question of how much detail to include in a spec is really hard. We actually split it into two levels: an input prompt describing the details the user cares about in that component, and an output spec describing what was built, to allow verification.
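Purely to illustrate the two levels (the field names here are invented; the comment doesn't describe Plotly Studio's actual schema):

    chart_spec = {
        # Level 1: what the user asked for, in their own terms.
        "input_prompt": "Monthly revenue by region, stacked bars, 2023 only",
        # Level 2: what was actually built, detailed enough to verify against.
        "output_spec": {
            "chart_type": "bar",
            "barmode": "stack",
            "x": "month",
            "y": "revenue",
            "color": "region",
            "filter": "year == 2023",
        },
    }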