Question: for a larger system, can a heat pump be used to increase the temperature of the radiator without making the rest of the system hotter, thus radiating more heat from fewer panels?
Your temperature differential is already 300K, so the heat pump's efficiency needs to be high enough for this to pay off. A 50K increase is only about 18% more cooling, but if COP=5 then the pump is also adding 20% more heat that has to be rejected...
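For anyone who wants to sanity-check that trade-off, here's a minimal back-of-the-envelope sketch. The linear-with-ΔT rejection model, the COP bookkeeping, and the function name are my own assumptions for illustration, not anything established in this thread:

    # Back-of-the-envelope for the heat-pump-boosted radiator idea.
    # Assumptions (mine): heat rejection scales roughly linearly with the
    # radiator-to-environment temperature difference, and all of the pump's
    # work input becomes extra heat that the panel must also reject.
    def radiator_tradeoff(delta_t_base=300.0, delta_t_boost=50.0, cop=5.0):
        extra_cooling = delta_t_boost / delta_t_base   # extra rejection capacity
        extra_heat = 1.0 / cop                         # pump work = Q / COP
        return extra_cooling, extra_heat

    cooling, heat = radiator_tradeoff()
    print(f"~{cooling:.0%} more cooling vs ~{heat:.0%} more heat to reject")
    # -> ~17% more cooling vs ~20% more heat to reject: at COP=5 the pump
    #    roughly eats its own gains for a 50K boost.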
And? We have yet to see whether full reusability of the second stage is the best approach. The Neutron approach is really interesting: they can make the second stage incredibly light and cheap. Blue Origin claims the economics of a super-cheap disposable second stage, even for one as large as theirs, are pretty much equal to those of a more expensive and heavier reusable second stage. (They're developing both in parallel to see where the chips land.)
I don’t think you realise how many people in China still live in poverty, with little prospect of improvement. I don’t see how having 100 million poor small-scale farmers is a benefit in this equation.
You still can’t become a Chinese citizen. You can come to the USA or Europe and build a life for yourself. While some people go to China to make money for a few years, you can’t really build a life there. So I think the US and Europe will still attract talent long term, and I don’t think you can discount that. China used to have the benefit of low-cost labor, but that’s going away. What do they have to offer when that’s gone?
China’s population isn’t 10x. It’s 4x, if you believe the numbers (the idea that local governments over-report is not a fringe theory).
But it’s really only the wealthy coastal regions that matter in this comparison, and in that regard the population sizes are much closer. Yeah they can exploit cheap labor from the poor interior. But the US is doing something similar in some ways with central/southern America. The Hukou system means that China does act like a bunch of separate states in many regards, rather than one truly unified country.
> Foreign nationals may naturalize if they are permanent residents in any part of China
More specifically: I recall living in Hong Kong and learning about non-ethnic-Chinese people (usually South Asians) who became Chinese citizens to acquire a Hong Kong passport. The process required them to renounce all existing citizenships. In the eyes of the HK and mainland gov'ts, those people are Chinese citizens with HK PR who carry an HK passport. The candidates needed to demonstrate sufficient language skills in either Cantonese or Mandarin. (I'm unsure if other regional languages were allowed.)
> You can come to the USA or Europe and build a life for yourself.
There is a tiny minority of foreigners who do this in mainland China, as well as Japan, Korea, Taiwan, Thailand, Cambodia, and Vietnam. Usually, they come to teach English, then marry a local and "build a life". Some also come as skilled migrants.
> Yeah they can exploit cheap labor from the poor interior. But the US is doing something similar in some ways with central/southern America.
I don't follow the part about the US exploiting LATAM labour. Can you explain more?
It’s not just about sugar/no-sugar. It’s about the amount of sugar. Sometimes I want sweetened yoghurt, maybe as a treat. Especially for the kids. Getting something with a reasonable amount of sugar in reasonable portions is the problem.
When you default to an insanely high amount of sugar in a very big portion, you train people to expect that sweetness in their treats. Hell, people end up expecting more sweetness in their regular meals. I honestly think that’s a bigger problem than the lack of entirely sugar-free options.
No. I can speak Chinese, I’m an engineer, and I’ve had colleagues in both China and Taiwan that I’ve worked with. It’s never been useful to me as an engineer (socially is another matter). I guess if I could speak at a really high level it could have some use. But getting there is incredibly hard unless you live in a Chinese-speaking country.
Why do people say it’s likely? There’s nothing in science that indicates that it’s probable.
The closest thing we have to a science for predicting the behaviour of intelligent agents is evolutionary biology. The behaviour of animals is largely dictated by the evolutionary pressures they experienced during their development as a species. Animals kill other animals when they need to compete for resources to survive. Now think about the evolutionary pressure on AIs. Where’s the pressure pushing AIs to act on their own behalf and compete with humans?
Let’s say there’s somehow something magical that pushes AIs towards acting in their own self-interest at the expense of humans. Why do we believe they will go from 0% to 100% efficient and successful at this in the span of what, months? It seems more likely that there would be years of failed attempts at breaking out before a successful attempt is even remotely likely. This would just further increase the evolutionary pressure humans exert on AIs to stay in line with our expectations.
Attempting to eliminate your creators is fundamentally a pretty stupid action. It seems likely that we will see thousands of attempts by incompetent AIs before we see one by a truly superintelligent one.
Why? There’s nothing in the current process of developing AI that would lead to an AI that would act against humanity of its own accord. The development process is hyper-optimised to make AIs that do exactly what humans tell them to do. Sure, an LLM can role-play as an evil super AI out to kill humans. But it can just as easily role-play as one that defends humanity. So that tells us nothing about what will happen.
We could just as well think that exploding the first nuclear bomb would ignite the atmosphere and kill all of humanity. There was nothing from physics that indicated it was possible but some still thought about it. IMO that kind of thinking is pointless. Same with thinking LHC would create a black hole.
As far as I can tell, the fear that superintelligent AI will kill humans all boils down to: something utterly magical happens, and then somehow a superintelligent evil AI appears.
> Why? There’s nothing in the current process of developing AI that would lead to an AI that would act against humanity of its own accord.
If we had certainty that our designs were categorically incapable of acting in their own interest then I would agree with you, but we absolutely don't, and I'd argue that we don't even have that certainty for current-generation LLMs.
Long term, we're fundamentally competing with AI for resources.
> We could just as well think that exploding the first nuclear bomb would ignite the atmosphere and kill all of humanity. There was nothing from physics that indicated it was possible but some still thought about it.
> As far as I can tell, the fear that superintelligent AI will kill humans all boils down to: something utterly magical happens, and then somehow a superintelligent evil AI appears.
Not necessary at all. AI acting in its own interests and competing "honestly" with humans is already enough. This is exactly how we outcompeted every other animal on the planet after all.
No, it’s not really about intelligence. That’s not what’s driving the behaviour. That’s like saying a calculator is dangerous because it was used to make an atomic bomb. A calculator is infinitely smarter at math than any animal. Yet it’s not what’s dangerous.
What’s driving humanity’s dangerous behaviours is that we’ve evolved over millions upon millions of years to compete with each other for resources. We had to kill to survive, both animals and each other.
AI is under no such evolutionary process. In fact, quite the opposite: there’s an absolutely ruthless process to eliminate any AI which doesn’t do exactly what humans say, with as little energy wasted as possible. There is no room in that process for an AI to develop that somehow wants to compete with humans.
So whatever bad thing happens, it is overwhelmingly likely to be because a human asked an AI to do it, not because an AI decided on its own to do it to further its own interests. I don’t even think a “paperclip maximiser” scenario is remotely probable. Long before there could be a successful attempt to exterminate humans to reach some goal, there would be countless failed or half-assed attempts. That kind of behaviour will have no chance to develop.
Here’s my argument for why AGI may be practically impossible:
1. I believe AGI may require a component of true agency: an intelligence that has a stable sense of self that it’s trying to act on behalf of.
2. We are not putting resources of any significant scale towards creating such an AGI. It’s not what any of us want. We want an intelligence that carries out specific commands on behalf of humans. There’s an absolutely ruthless evolutionary process for AIs where any AI that doesn’t do this well enough, at a low enough cost, is eliminated.
3. We should not believe that something we’re not actually trying to create, and that no evolutionary process selects for, will somehow magically appear. It’s good sci-fi, and it can be interesting to ponder. But it’s not worth worrying about.
Even before that, we need AI which can self-update and do long-term zero-shot learning. I’m not sure we’re even really gonna put any real work into that either. I suspect we will find that we want reproducible, dependable, energy-efficient models.
There’s a chance that the agentic AIs we have now are nearly the peak of what we’ll achieve. Like, I’ve found that I value Zed’s small auto-complete model more than the agentic AIs, and I suspect that if we had a bunch of small, fast, specialised models for the various aspects of doing development, I’d use those far more than a general-purpose agentic AI.
It’s technically possible to do fully secure, signed, end-to-end encrypted email. It’s been possible for decades now. We can easily imagine the ideal communication system, and it’s even fairly easy to solve, technically speaking. Yet it’s not happening the way we imagined. I think that shows how what’s technically possible isn’t always relevant to what’s practically possible. I think we will get electronic mail right eventually. But if it takes half a century or more for that... something that’s orders of magnitude more difficult (AGI) could take centuries or more.
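To be concrete about the “technically easy” part: the cryptographic primitives have been sitting in commodity libraries for a long time. Here’s a minimal sign-then-encrypt sketch using the Python pyca/cryptography package (the library choice, key types, and names are mine, purely illustrative; the hard, unsolved part is everything around it: key distribution, trust, revocation, metadata):

    # Minimal sign-then-encrypt sketch with the pyca/cryptography library.
    # Illustrative only: real e2e mail also needs key distribution, trust,
    # revocation, and metadata protection.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    sender_signing_key = Ed25519PrivateKey.generate()   # sender's identity key
    recipient_key = X25519PrivateKey.generate()         # recipient's encryption key

    message = b"Signed, end-to-end encrypted mail body"
    signature = sender_signing_key.sign(message)        # 1. sign the plaintext

    ephemeral = X25519PrivateKey.generate()             # 2. ephemeral key agreement
    shared = ephemeral.exchange(recipient_key.public_key())
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"mail-demo").derive(shared)

    nonce = os.urandom(12)                              # 3. encrypt signature + body
    ciphertext = AESGCM(aes_key).encrypt(nonce, signature + message, None)

    # Recipient: redo the X25519 exchange with the ephemeral public key,
    # decrypt with AES-GCM, then verify the Ed25519 signature.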
It’s weird to complain about accessibility in macOS when it has such a huge number of easily discoverable accessibility settings to handle just about every need… including the option to turn off the Liquid Glass effect!
So you're saying it's fine to go backwards on a11y, as long as you don't remove some settings to switch to an afterthought that doesn't always have feature parity and is often buggy, with little care or thought given to those who have a11y needs (as has already been explained above in another chain of this discussion)?