Hacker News | nneonneo's comments

I have a fun one: my iPhone (12 Pro) refuses to acknowledge that it has eSIM functionality, even though the hardware exists.

I'm fairly sure I know what the problem is! It was restored from a backup taken on an iPhone X that had two physical SIM slots (Chinese version). The new phone now seems to think it has two physical SIM slots: it shows an IMEI2 in About, but any attempt to use the eSIM functionality just fails (scanning a code does "nothing"; no "add" button is visible, etc.).

If this was an Android phone, I'd root it and just fix the offending network configuration file. I believe it's possible to tamper with a backup of the phone to fix the issue, but this would mean a full backup+restore cycle and some specialized tooling to go mucking with the backup.

I filed a Radar on it ages ago, but I'm assuming nobody ever picked it up.


> I filed a Radar on it ages ago, but I'm assuming nobody ever picked it up.

I mean, based on all experience with Apple in the last 10 years, their bug trackers have presumably migrated to /dev/null for a backing db.


If you have tap to wake turned on (it's on by default), you can tap the screen anywhere to wake it up. So long as the screen isn't already on and you don't mash the Snooze button, you should be able to hit stop.

I believe the power-button-to-snooze thing is meant as a usability boost for groggy people: the physical button may be easier to click than some random spot on the screen.


I will object that the point of the alarm is to make sure you're not groggy anymore by the time you deactivate it.

But not everyone lives alone. If you'd like to avoid waking everyone else up - you hit the power button (or indeed any button) while groggy. And if you drift off to sleep despite your best efforts - well, that's why the button snoozes the alarm rather than disabling it.

Ah, 30% voted for him, but more than 30% decided it didn’t matter to them which one was in power. It’s unclear to me if that apathetic percentage has actually moved significantly.

You have no guarantee the API models won’t be tampered with to serve ads. I suspect ads (particularly on those models) will eventually be “native”: the models themselves will be subtly biased to promote advertisers’ interests, in a way that might be hard to distinguish from a genuinely helpful reply.

> You have no guarantee the API models won’t be tampered with to serve ads. I suspect ads (particularly on those models) will eventually be “native”: the models themselves will be subtly biased to promote advertisers’ interests, in a way that might be hard to distinguish from a genuinely helpful reply.

I admit I don't see how that will happen. What are they gonna do? Maintain a model (LoRA, maybe) for every single advertiser?

When both Pepsi and Coke pay you to advertise, you advertise both. The minute one reduces ad-spend, you need to advertise that less.

This sort of thing is computationally fast currently - ad-space is auctioned off in milliseconds. How will they introduce ads into the content returned by an LLM while satisfying the ad-spend of the advertiser?


Retraining models every time an advertiser wins a bid on a keyword is unwieldy. The most likely solution is training the model to emit tokens representing ontological entries that are used by the ad platform, so that "<SODA>" can be bid on by PepsiCo/Coca-Cola under food > beverage > chilled > carbonated. Auction cycles would have to match ad campaign durations, allowing quicker price discovery and more competition among bidders.

You mean the API response will then contain the ad display code?

More akin to something like the Twitter verified program, where companies can bid for relevance in the training set to buy a greater weight, so the model will be trained to prefer them. This would be especially applicable for software if Azure and AWS start bidding on whose platform the model should recommend. Or, something like when Convex first came out to compete with the depth of Supabase/Firebase coverage in current models: they could be offered a retraining of the model that gives their personally selected code bases extra weight, for a mere $Xb.

But this is upfront, during training?

How does X then change "on the fly" if ad deals change? Constant re-training with whichever advertiser is currently paying the most?

In the Google ad era, this was real-time bidding in the background. For AI ads this doesn't work, if I'm right?


Companies pay for entire sports stadiums for brand recognition. That's also not something you can change on the fly; it's a huge upfront cost and takes significant effort to change. That doesn't stop it from happening; it's just a different ad model.

The LLM output will just contain ads directly. It's going to be super hard to tell them apart from normal output.

Companies will pay OpenAI to prioritize more of their content during training. The weights for the product category will now be nudged more towards your product. Gartner Magic Quadrant for all businesses!

Or, worse, subtly integrate companies that pay them into the answers.

The generated text will contain advertisements.

See also the DATANOSE, published in 1991 (in an actual ACM conference, no less): https://www.cs.cmu.edu/~hudson/datanose/uist91_henry_datanos...


Not the OP, but note that adding a std::string to a POD type makes it non-POD. If you were doing something like using malloc() to make the struct (not recommended in C++!), then suddenly your std::string is uninitialized, and touching that object will be instant UB. Uninitialized primitives are benign unless read, but uninitialized objects are extremely dangerous.


That's not what was happening in this example though. It would be UB even if it was a POD.


Even calling uninitialized data “garbage” is misleading. You might expect that the compiler would just leave out some initialization code and compile the remaining code in the expected way, causing the values to be “whatever was in memory previously”. But no - the compiler can (and absolutely will) optimize by assuming the values are whatever would be most convenient for optimization reasons, even if it would be vanishingly unlikely or even impossible.

As an example, consider this code (godbolt: https://godbolt.org/z/TrMrYTKG9):

    struct foo {
        unsigned char a, b;
    };

    foo make(int x) {
        foo result;
        if (x) {
            result.a = 13;
        } else {
            result.b = 37;
        }
        return result;
    }
At high enough optimization levels, the function compiles to “mov eax, 9485; ret”, which sets both a=13 and b=37 without testing the condition at all - as if both branches of the test were executed. This is perfectly reasonable because the lack of initialization means the values could already have been set that way (even if unlikely), so the compiler just goes ahead and sets them that way. It’s faster!


Indeed, UB is literally whatever the compiler feels like. A famous one [1] has the compiler deleting code that contains UB and falling through to the next function.

"But it's right there in the name!" Undefined behavior literally places no restrictions on the code generated or the behavior of the program. And the compiler is under no obligation to help you debug your (admittedly buggy) program. It can literally delete your program and replace it with something else that it likes.

[1] https://kristerw.blogspot.com/2017/09/why-undefined-behavior...


There are some even funnier cases like this one: https://gcc.godbolt.org/z/cbscGf8ss

The compiler sees that foo can only be assigned in one place (that isn't called locally, but could be called from other object files linked into the program) and its address never escapes. Since dereferencing a null pointer is UB, it can legally assume that `*foo` is always 42 and optimizes out the variable entirely.


To those who are just as confused as me:

Compilers can do whatever they want when they see UB, and accessing an unassigned and unassignable (file-local) variable is UB, therefore the compiler can just decide that *foo is in fact always 42, or never 42, or sometimes 42, and all would be just as valid options for the compiler.

(I know I'm just restating the parent comment, but I had to think it through several times before understanding it myself, even after reading that.)


> Compilers can do whatever they want when they see UB, and accessing an unassigned and unassignable (file-local) variable is UB, therefore the compiler can just decide that *foo is in fact always 42, or never 42, or sometimes 42, and all would be just as valid options for the compiler.

That's not exactly correct. It's not that the compiler sees that there's UB and decides to do something arbitrary: it's that it sees that there's exactly one way for UB to not be triggered and so it's assuming that that's happening.


Although it should be noted that that’s not how compilers “reason”.

The way they work things out is to assume no UB happens (because otherwise your program is invalid, and you would not request compiling an invalid program, would you?), then work from there.


No, who would write an incorrect program! :-D


Even the notion that uninitialized memory contains values is kind of dangerous. Once you access them, you can't reason about what's going to happen at all. Behaviour can occur that isn't self-consistent with any value at all: https://godbolt.org/z/adsP4sxMT


Is that an old 'bot? Because I noticed it was an old version of Clang, and I tried switching to the latest Clang, which is hilarious: https://godbolt.org/z/fra6fWexM


Oh yeah the classic Clang behaviour of “just stop codegen at UB”. If you look at the assembly, the main function just ends after the call to endl (right before where the if test should go); the program will run off the end of main and execute whatever nonsense is after it in memory as instructions. In this case I guess it calls main again (??) and then runs off into the woods and crashes.

I’ve never understood this behaviour from clang. At least stick a trap at the end so the program aborts instead of just executing random instructions?

The x and y values are funny too, because clang doesn’t even bother loading anything into esi for operator<<(unsigned int), so you get whatever the previous call left behind in that register. This means there’s no x or y variable at all, even though they’re nominally being “printed out”.


No I wrote it with the default choice of compiler just now. That newer result is truly crazy though lol.


icc's result is interesting too


This is gold


If you don't initialise a variable, you're implicitly saying any value is fine, so this actually makes sense.


The difference is that it can behave as if it had multiple different values at the same time. You don't just get any value, you can get completely absurd paradoxical Schrödinger values where `x > 5 && x < 5` may be true, and on the next line `x > 5` may be false, and it may flip on Wednesdays.

This is because the code is executed symbolically during optimization. It's not running on your real CPU. It's first "run" on a simulation of an abstract machine from the C spec, which doesn't have registers or even real stack to hold an actual garbage value, but it does have magic memory where bits can be set to 0, 1, or this-can-never-ever-happen.

Optimization passes ask questions like "is x unused? (so I can skip saving its register)" or "is x always equal to y? (so I can stop storing it separately)" or "is this condition using x always true? (so that I can remove the else branch)". When using the value is an undefined behavior, there's no requirement for these answers to be consistent or even correct, so the optimizer rolls with whatever seems cheapest/easiest.


"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."

With optimizing settings on, the compiler should immediately treat reads of uninitialized variables as errors by default.


So here are your options:

1. Syntactically require initialization, i.e. you can't write "int k;", only "int k = 0;". This is easy to do and 100% effective, but for many algorithms complying has a notable performance cost.

2. Semantically require initialization: the compiler must prove at least one write happens before every read. Rice's Theorem says we cannot have this unless we're willing to accept that some correct programs don't compile, because the compiler couldn't see why they're correct. Safe Rust lives here. Fewer programmers, but still some, will hate this too, because you're still losing perf in some cases to shut up the prover.

3. Redefine "immediately" as "Well, it should report the error at runtime". This has an even larger performance overhead in many cases, and of course in some applications there is no meaningful "report the error at runtime".

Now, it so happens I think option (2) is almost always the right choice, but then I would say that. If you need performance then sometimes none of those options is enough, which is why unsafe Rust is allowed to call core::mem::MaybeUninit::assume_init, an unsafe function which in many cases compiles to no instructions at all but is the specific moment when you're taking responsibility for claiming this is initialized, and if you're wrong about that, too fucking bad.


With optimizations, 1. and 2. can be kind of equivalent: if initialization is syntactically required (or variables are defined to be zero by default), then the compiler can elide this if it can prove that value is never read.


That, however, conflicts with unused write detection which can be quite useful (arguably more so than unused variable as it's both more general and more likely to catch issues). Though I guess you could always ignore a trivial initialisation for that purpose.


There isn't just a performance cost to initializing at declaration all the time. If you don't have a meaningful sentinel value (does zero mean "uninitialized" or does it mean logical zero?) then reading from the "initialized with meaningless data just to silence the lint" data is still a bug. And this bug is now somewhat tricky to detect because the sanitizers can't detect it.


Yes, that's an important consideration for languages like Rust or C++ which don't endorse mandatory defaults. It may even literally be impossible to "initialize with meaningless data" in these languages if the type doesn't have such "meaningless" values.

In languages like Go or Odin where "zero is default" for every type and you can't even opt out, this same problem (which I'd say is a bigger but less instantly fatal version of the Billion Dollar Mistake) occurs everywhere, at every API edge, and even in documentation, you just have to suck it up.

Which reminds me, in a sense, of another option: you can have the syntactic behaviour but write it as though you don't initialize at all, even though you do, which is the behaviour C++ silently has for user-defined types. If we define a Goose type (in C++ a "class") which we stubbornly don't provide any way for our users to construct themselves (e.g. we make the constructors private, or we explicitly delete them), and then a user writes "Goose foo;" in their C++ program, it won't compile: the compiler isn't allowed to leave this foo variable uninitialized, but it also can't just construct it, so, too bad, this isn't a valid C++ program.


That's what Golang went for. There are other possibilities: D has the `= void` initializer to explicitly leave variables uninitialized. Rust requires values to be initialized before use, and if the compiler can't prove they are, it's either an error or requires an explicit MaybeUninit type wrapper.


If you have a program that will unconditionally access uninitialized memory then the compiler can halt and emit a diagnostic. But that's rarely what is discussed in these UB conversations. Instead the compiler is encountering a program with multiple paths, some of which would encounter UB if taken. But the compiler cannot just refuse to compile this, since it is perfectly possible that the path is dead. Like, imagine this program:

    int foo(bool x, int* y) {
      if (x) return *y;
      return 0;
    } 
Dereferencing a null y would be UB. But maybe this function is only called with x=false when y is nullptr. This cannot be a compile error. So instead the compiler recognizes that certain program paths are illegal and uses that information during compilation.


Maybe we should make that an error.


More modern languages have indeed embedded nullability into the type system and will yell at you if you dereference a nullable pointer without a check. This is good.

Retrofitting this into C++ at the language level is impossible. At least without a huge change in priorities from the committee.


Maybe not the Standard, but maybe not impossible to retrofit into:

    -Werror -Wlet-me-stop-you-right-there


For some values of 'sense'.


That seems like a reasonable optimization, actually. If the programmer doesn’t initialize a variable, why not set it to a value that always works?

Good example of why uninitialized variables are not intuitive.


Things can get even wonkier if the compiler keeps the values in registers, as two consecutive loads could use different registers based, as you say, on what's most convenient for optimisation (register allocation, code density).


If I understand it right, in principle the compiler doesn't even need to do that.

It can just leave the result totally uninitialised. That's because both code paths have undefined behaviour: whichever of result.a or result.b is not set is still copied at "return result", which is undefined behaviour, so the overall function has undefined behaviour either way.

It could even just replace the function body with abort(), or omit the implementation entirely (even the ret instruction, allowing execution to just fall through to whatever memory happens to follow). Whether any computer does that in practice is another matter.


> It can just leave the result totally uninitialised. That's because both code paths have undefined behaviour: whichever of result.a or result.b is not set is still copied at "return result", which is undefined behaviour, so the overall function has undefined behaviour either way.

That is incorrect, per the resolution of DR222 (partially initialized structures) at WG14:

> This DR asks the question of whether or not struct assignment is well defined when the source of the assignment is a struct, some of whose members have not been given a value. There was consensus that this should be well defined because of common usage, including the standard-specified structure struct tm.

As long as the caller doesn't read an uninitialised member, it's completely fine.


Ooh, thanks for mentioning DR222 that's very interesting.


How is this an "optimization" if the compiled result is incorrect? Why would you design a compiler that can produce errors?


It’s not incorrect.

The code says that if x is true then a=13, and if it is false then b=37.

This is the case. It's just that a=13 even if x is false, a thing the code had nothing to say about, and so the compiler is free to do.


Ok, so you’re saying it’s “technically correct?”

Practically speaking, I’d argue that a compiler assuming uninitialized stack or heap memory is always equal to some arbitrary convenient constant is obviously incorrect, actively harmful, and benefits no one.


In this example, the human author clearly intended mutual exclusivity in the condition branches, and this optimization would in fact destroy that assumption. That said, (a) human intentions are not evidence of foolproof programming logic, and often miscalculate state, and (b) the author could possibly catch most or all errors here when compiling without optimizations during debugging phase.


Regardless of intention, the code says this memory is uninitialized.

I take issue with the compiler assuming anything about the contents of that memory; it should be a black box.


The compiler is the arbiter of what's what (as long as it does not run afoul of the CPU itself).

The memory being uninitialised means reading it is illegal for the writer of the program. The compiler can write to it if that suits it, the program can’t see the difference without UB.

In fact the compiler can also read from it, because it knows that it has in fact initialised that memory. And the compiler is not writing a C program and is thus not bound by the strictures of the C abstract machine anyway.


Yes yes, the spec says compilers are free to do whatever they want. That doesn’t mean they should.

> The user didn’t initialize this integer. Let’s assume it’s always 4 since that helps us optimize this division over here into a shift…

This is convenient for whom, exactly? Why not just treat it as a black box memory load and not do further "optimizations"?


> That doesn’t mean they should.

Nobody’s stopping you from using non-optimising compilers, regardless of the strawmen you assert.


As if treating uninitialized reads as opaque somehow precludes all optimizations?

There’s a million more sensible things that the compiler could do here besides the hilariously bad codegen you see in the grandparent and sibling comments.

All I’ve heard amounts to “but it’s allowed by the spec.” I’m not arguing against that. I’m saying a spec that incentivizes this nonsense is poorly designed.


Why is the code gen bad? What result are you wanting? You specifically want whatever value happened to be on the stack as opposed to a value the compiler picked?


> As if treating uninitialized reads as opaque somehow precludes all optimizations?

That's not what these words mean.

> There’s a million more sensible things

Again, if you don't like compilers leveraging UBs use a non-optimizing compiler.

> All I’ve heard amounts to “but it’s allowed by the spec.” I’m not arguing against that.

You literally are though. Your statements so far have all been variations of or nonsensical assertions around "why can't I read from uninitialised memory when the spec says I can't do that".

> I’m saying a spec that incentivizes this nonsense is poorly designed.

Then... don't use languages that are specified that way? It's really not that hard.


From the LLVM docs [0]:

> Undef values aren't exactly constants ... they can appear to have different bit patterns at each use.

My claim is simple and narrow: compilers should internally model such values as unspecified, not actively choose convenient constants.

The comment I replied to cited an example where an undef is constant folded into the value required for a conditional to be true. Can you point to any case where that produces a real optimization benefit, as opposed to being a degenerate interaction between UB and value propagation passes?

And to be explicit: “if you don’t like it, don’t use it” is just refusing to engage, not a constructive response to this critique. These semantics aren't set in stone.

[0] https://llvm.org/doxygen/classllvm_1_1UndefValue.html#detail...


> My claim is simple and narrow: compilers should internally model such values as unspecified, not actively choose convenient constants.

An assertion you have provided no utility or justification for.

> The comment I replied to cited an example where an undef is constant folded into the value required for a conditional to be true.

The comment you replied to did in fact not do that, and it's incredible that you misread it as such.

> Can you point to any case where that produces a real optimization benefit, as opposed to being a degenerate interaction between UB and value propagation passes?

The original snippet literally folds a branch and two stores into a single store, saving CPU resources and generating tighter code.

> this critique

Critique is not what you have engaged in at any point.


Sorry, my earlier comments were somewhat vague and assuming we were on the same page about a few things. Let me be concrete.

The snippet is, after lowering:

  if (x)
    return { a = 13, b = undef }
  else
    return { a = undef, b = 37 }
LLVM represents this as a phi node of two aggregates:

  a = phi [13, then], [undef, else]
  b = phi [undef, then], [37, else]
Since undef isn’t “unknown”, it’s “pick any value you like, per use”, InstCombine is allowed to instantiate each undef to whatever makes the expression simplest. This is the problem.

  a = 13
  b = 37
The branch is eliminated, but only because LLVM assumes that those undefs will take specific arbitrary values chosen for convenience (fewer instructions).

Yes, the spec permits this. But at that point the program has already violated the language contract by executing undefined behavior. The read is accidental by definition: the program makes no claim about the value. Treating that absence of meaning as permission to invent specific values is a semantic choice, and precisely what I am criticizing. This “optimization” is not a win unless you willfully ignore the program and everything but instruction count.

As for utility and justification: it’s all about user experience. A good language and compiler should preserve a clear mental model between what the programmer wrote and what runs. Silent non-local behavior changes (such as the one in the article) destroy that. Bugs should fail loudly and early, not be “optimized” away.

Imagine if the spec treated type mismatches the same way. Oops, assigned a float to an int, now it’s undef. Let’s just assume it’s always 42 since that lets us eliminate a branch. That’s obviously absurd, and this is the same category of mistake.


It's the same as this:

    int random() {
        return 4; // chosen by dice roll
    }
Technically correct. But not really.


Also even without UB, even for a naive translation, a could just happen to be 13 by chance, so the behaviour isn't even an example of nasal demons.


Because a could be 13 even if x is false: the struct's initialisation doesn't define what the initial values of a and b need to be.

Same for b. If x is true, b could be 37, no matter how unlikely that is.


It is not incorrect. The values are undefined, so the compiler is free to do whatever it wants with them, even assign values to them.


It's not incorrect. Where is the flaw?


To be fair, this is with debug symbols. Debug builds of Chrome were in the 5GB range several years ago; no doubt that’s increased since then. I can remember my poor laptop literally running out of RAM during the linking phase due to the sheer size of the object files being linked.

Why are debug symbols so big? For C++, they’ll include detailed type information for every instantiation of every type everywhere in your program, including the types of every field (recursively), method signatures, etc. etc., along with the types and locations of local variables in every method (updated on every spill and move), line number data, etc. etc. for every specialization of every function. This produces a lot of data even for “moderate”-sized projects.

Worse: for C++, you don’t win much through dynamic linking because dynamically linking C++ libraries sucks so hard. Templates defined in header files can’t easily be put in shared libraries; ABI variations mean that dynamic libraries generally have to be updated in sync; and duplication across modules is bound to happen (thanks to inlined functions and templates). A single “stuck” or outdated .so might completely break a deployment too, which is a much worse situation than deploying a single binary (either you get a new version or an old one, not a broken service).


I've hit the same thing in Rust, probably for the same reasons.

Isn't the simple solution to use detached debug files?

I think Windows and Linux both support them. That's how platforms like Android and iOS get useful crash reports out of small binaries: they just upload the stack trace, and some service like Sentry translates that back into source line numbers. (It's easy to do manually too.)

I'm surprised the author didn't mention it first. A 25 GB exe might be 1 GB of code and 24 GB of debug crud.


> Isn't the simple solution to use detached debug files?

It should be. But the tooling for this kind of thing (anything to do with executable formats including debug info and also things like linking and cross-compilation) is generally pretty bad.


> I think Windows and Linux both support them.

Detached debug files has been the default (only?) option in MS's compiler since at least the 90s.

I'm not sure at what point it became hip to do that around Linux.


Since at least October 2003 on Debian:

[1] "debhelper: support for split debugging symbols"

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=215670

[2] https://salsa.debian.org/debian/debhelper/-/commit/79411de84...


Can't debug symbols be shipped as separate files?


The problem is that when a final binary is linked everything goes into it. Then, after the link step, all the debug information gets stripped out into the separate symbols file. That means at some point during the build the target binary file will contain everything. I can not, for example, build clang in debug mode on my work machine because I have only 32 GB of memory and the OOM killer comes out during the final link phase.

Of course, separate symbol files make no difference at runtime, since only the LOAD segments get loaded (by either the kernel or the dynamic loader, depending). The size of a binary on disk has little to do with the size of a binary in memory.


> The problem is that when a final binary is linked everything goes into it

I don't think that's the case on Linux, when using -gsplit-dwarf the debug info is put in separate files at the object file level, they are never linked into binaries.


Yes, but it can be more of a pain keeping track of pairs. In production though, this is what's done. And given a fault, the debug binary can be found in a database and used to gdb the issue given the core. You do have to limit certain online optimizations in order to have useful tracebacks.

This also requires careful tracking of prod builds and their symbol files... A kind of symbol db.


Yes, absolutely. Debuginfo doesn't impact .text section distances either way, though.


I’ve seen LLVM-dependent builds hit well over 30GB. At that point it started breaking several package managers.


China no longer has a one-child policy and is now actively focusing policies and incentives on increasing childbirth. Although it’s not going to yield immediate results, the PRC operates on long time horizons and will probably succeed long-term in raising birth rates.


> the PRC operates on long time horizons and will probably succeed long-term in raising birth rates.

That would make them the first country to do so, I think. Others have tried and nothing has worked. But China will likely become rich before it gets old, so it may not matter.


Did you mean to say "But China will likely become old before it gets rich"?

Their population is declining already and they have a very long way to go before being considered "rich", so I haven't seen many projections for what you said. If you meant it, I'd be curious to know why.


China's middle class is already larger than the entire US population, and growing fast. It won't be rich in the sense that say Switzerland or Norway are rich. But it seems safe to say they won't be barely scraping by.

IMO, India likely won't make this transition. Its population is still growing but its birth rate is sinking fast (like most everywhere else).


And China's lower class is nearly double the size of the US population, making a median of $150 per month.

India's demographics don't look as bad as China's, so I'm not sure why you're less optimistic about them.


lol, no. it will not even maintain its current extinction-tier TFR of 1.02, let alone maintain its current population.

like every other civilized people, the Chinese have largely realized that the game is rigged and the only winning move is not to play. the only way to "fix" the birth rate is to reject humanity (education, urbanization, technology) and retvrn to monke (subsistence farming, arranged marriages, illiteracy, superstition), which no civilized country will ever do. even the current TFR of 1.0-1.5 in the civilized world is largely inertial, and it will continue to fall. South Korean 0.7 will seem mind-bogglingly high a hundred years from now.

and 1CP was such a predictably disastrous idea that I seriously doubt the forward thinking you seem to believe the CCP to possess.


>the only way to "fix" the birth rate is to reject humanity (education, urbanization, technology) and retvrn to monke (subsistence farming, arranged marriages, illiteracy, superstition), which no civilized country will ever do.

They won't do it willingly. That just means it will happen without their input.


sure, they could, hypothetically, close the borders and begin a campaign of forced insemination, but those babies would have no fathers to provide for them, and the state - any state - really resents footing the bill for child rearing, going as far as forcing victims of infidelity, fraud, or rape to pay child support. the state - any state - wants to give you as little as possible and to take as much as possible from you, for the delta between giving and receiving is its lifeblood.

the ideal family has two full-time working parents, paying a mortgage and car loans, consuming as many high-margin domestic products as possible, rearing as many children (future laborers and consumers) as possible, with little to no assistance from the state. and you simply can't have that by force. if you could, you might as well drop the pretense and openly treat your population as slaves.


It sounds like “national security” is the legal justification they’re using to do an end-run around Congress, just like the justifications they’ve used to implement tariffs and which underpin a bunch of their EOs.

