> The producer sees no more dependency, so it prunes the export. But it can't actually delete the export, because the consumer still needs it. You can't deploy the consumer first, because the producer has to deploy first, sequentially. And if you can't delete the consumer yourself (e.g. your company mandates a CI pipeline deploy for everything), you have to go bug Ops on Slack, wait for someone with the right perms to delete it, then redeploy.
This is a tricky issue. Here is how we fixed it:
Assume you have a stack with the construct ID `foo-bar`, whose exports are consumed by a stack called `charlie`.
Update the stack's construct ID to a new value, e.g. `foo-bar-2`. This forces a brand-new deployment of your stack with fresh exports. `charlie` then updates to reference the new stack, and once `charlie` successfully updates, the original `foo-bar` stack can be safely destroyed. To automate that last step, add a `cdk destroy foo-bar` at the very end of your CI.
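For concreteness, here's a minimal sketch of the rename in CDK (Python); the SNS topic and the consumer's output are placeholders for whatever cross-stack references you actually have:

```python
# Minimal sketch (aws-cdk-lib v2, Python). Bumping the construct ID from
# "foo-bar" to "foo-bar-2" makes CDK deploy a brand-new stack with
# brand-new exports instead of mutating the old one in place.
import aws_cdk as cdk
from aws_cdk import aws_sns as sns

app = cdk.App()

producer = cdk.Stack(app, "foo-bar-2")  # was: cdk.Stack(app, "foo-bar")
topic = sns.Topic(producer, "Topic")

consumer = cdk.Stack(app, "charlie")
# Referencing the producer's resource creates the cross-stack export;
# charlie picks up the new stack's export on its next deploy.
cdk.CfnOutput(consumer, "TopicArn", value=topic.topic_arn)

app.synth()
```

Then CI runs something like `cdk deploy --all` followed by `cdk destroy foo-bar --force` once `charlie` has updated.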
The real conundrum is with data: you typically want any stateful resources (Dynamo, RDS, etc.) to live in their own stack at the very root of your dependency tree. That way any revised stacks can be cleanly destroyed and recreated without impacting your data.
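A rough sketch of that layout (the table and stack names here are made up):

```python
# Sketch: stateful resources live in their own root stack, so app stacks
# can be destroyed and recreated freely without touching data.
import aws_cdk as cdk
from aws_cdk import aws_dynamodb as dynamodb

app = cdk.App()

data = cdk.Stack(app, "data")
table = dynamodb.Table(
    data, "Users",
    partition_key=dynamodb.Attribute(
        name="pk", type=dynamodb.AttributeType.STRING
    ),
    # Survive stack deletion: even `cdk destroy data` leaves the table.
    removal_policy=cdk.RemovalPolicy.RETAIN,
)

service = cdk.Stack(app, "foo-bar-2")
service.add_dependency(data)  # data sits at the root of the dependency tree

app.synth()
```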
Docker usually has a large cold-start time; it can take up to 10 seconds for a dockerized Python Lambda to invoke. Are there solutions for that which still use Docker?
Fair point. I'm working on a project now that doesn't require responsiveness, since it's not user facing. We were burned so badly by Lambda cold starts on user-facing lambdas backing websites that we said forget it and just used ECS/Fargate. This is my go-to template; I didn't write it, I found it 7 years ago.
You just update the parameter for your container image and redeploy.
Of course there's also provisioned concurrency, which can help.
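If you're defining the function in CDK, a minimal sketch for a Docker-image lambda might look like this (the image directory and the concurrency count are just placeholders):

```python
# Sketch: provisioned concurrency keeps N execution environments warm,
# so the Docker cold start is paid ahead of time, not per request.
import aws_cdk as cdk
from aws_cdk import aws_lambda as lambda_

app = cdk.App()
stack = cdk.Stack(app, "warm-lambda")

fn = lambda_.DockerImageFunction(
    stack, "Handler",
    code=lambda_.DockerImageCode.from_image_asset("./image"),  # hypothetical path
)

# Keep two environments initialized at all times (billed while provisioned).
fn.add_alias("live", provisioned_concurrent_executions=2)

app.synth()
```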
But the easiest way to test Python lambdas locally is just to run the handler like any other Python script, calling it with the event you want to test.
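Something like this, assuming a hypothetical `handler.py` exposing a `lambda_handler` function:

```python
# run_local.py: invoke the handler directly, no Lambda runtime needed.
import json

from handler import lambda_handler  # your module under test

# Any event shape your handler expects; this one is just an example.
event = {"httpMethod": "GET", "path": "/health"}

if __name__ == "__main__":
    result = lambda_handler(event, None)  # context is unused here
    print(json.dumps(result, indent=2))
```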
Something often left out is the dependence on LLMs. Students today assume LLMs will always be available, at a price they (or their companies) can afford.
What happens if LLMs suddenly change their cost to 1,000 USD per user per month? What if it's 1,000 USD per request? Will new students and new professionals still be able to do their jobs?
I swear teachers said something extremely similar about calculators when I was in grade school. "What are you going to do when you don't have access to a calculator? You won't always have one with you!"
Calculators have never been more accessible/available. (And yet I personally still do most basic calculations in my head)
So I agree students should learn to do this stuff without LLMs, but not because the LLMs are going to get less accessible. There's another, better reason; I'm just not sure how to articulate it yet. Something to do with integrated information and how thinking works.
Calculators are widely available at a low cost. The logic behind most calculators can be consistently duplicated across a variety of manufacturers, which lowers the cost of producing them for the masses.
LLMs are not consistent. For example, having a new company make a functional duplicate of ChatGPT is nearly impossible.
Furthermore, the cost of LLMs can change at any time, for any reason. Access can be changed by new government regulations, and private organizations can choose to suspend or revoke access to their LLM due to changes in local laws.
All of this makes dependence on an LLM a risk for any professional. The only way these risks would be mitigated is by an open-source, freely available LLM that produces consistent results and that students can learn how to use.
The comparison with calculators overlooks several key developments.
LLMs are becoming increasingly efficient. Through techniques such as distillation, quantization, and optimized architectures, it is already possible to run capable models offline, including on personal computers and even smartphones. This trend reduces reliance on constant access to centralized providers and enables local, self-contained usage.
Rather than avoiding LLMs, the rational response is to build local, portable, and open alternatives in parallel. The natural trajectory of LLMs points toward smaller, more efficient, and locally executable models, mirroring the path that calculators themselves once followed.
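For what it's worth, fully offline inference is already a few lines with something like llama-cpp-python and a quantized GGUF model (the specific model file below is just an example, not a recommendation):

```python
# Sketch: run a quantized model entirely offline with llama-cpp-python.
from llama_cpp import Llama

# Any GGUF file works; this filename is a placeholder.
llm = Llama(model_path="./qwen2.5-3b-instruct-q4_k_m.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantization in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```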
My intuition is that the costs involved to train and run LLMs will keep dropping. They will become more and more accessible, so long as our economies keep chugging along.
I could be wrong; time will tell. I just wouldn't base my argument for why students should learn to think for themselves on the accessibility of LLMs. I think there's something far more fundamental and important; I just don't know how to say it yet.
I wonder if this will lead to new legislation and new government bodies like the FDA. If this becomes available in many homes, what is to stop a hacker from programming this humanoid to kill its owners?
> what is to stop a hacker from programming this humanoid to kill its owners?
What's to stop a hacker from hacking into the Tesla update server and pushing an update that causes all Teslas to max-accelerate right off bridges?
I wonder if over-the-air updates for cars will cause new legislation and a new regulatory body making it illegal to push a murder-update to cars, because otherwise someone will surely do that.
It's neither that easy to "just hack anything," nor is the world full of skilled, malicious people who would commit murder if only they could do it through hacking instead of with a gun.
Like, this fear-mongering about "what if the hackers turn this into a weapon" seems like such a silly worry in a country where anyone can trivially acquire a gun and a bump-stock, or a car, or a drone and materials for a bomb. Or a canister of gasoline and a pack of matches.
I have no insight into the Asahi project, but the LKML link goes to an email from James Calligeros containing code written by Hector Martin and Sven Peter. The code may have been written a long time ago.
It is worth noting that RED is particularly difficult to get a decent ratio on. Spend some time Googling Reddit posts; there are plenty of examples of people unable to build solid ratios because they're competing with scripted bots.