pornel's comments

The sad thing is that Apple seemed more inviting to developers before they got high on the App Store cut.

Every boxed Mac OS X came with a second disc containing the SDK (Xcode has always been an unstable cow, tho). They used to publish tech notes that explained how the OS works, rather than WWDC videos with high-level overviews that feel more like advertisements.

Back then they at least made attempts to use some open standards, and allowed 3rd parties to fill gaps in the OS, instead of acting like a Smaug of APIs.


Because they were coming back from the edge of bankruptcy and needed all the help they could get to become profitable again.

My graduation thesis was porting a visualisation framework from NeXTSTEP to Windows, Objective-C => C++, because my supervisor saw no future in keeping the NeXT hardware on our campus. If only he knew what would happen a few years later.


Apple has always resented 3rd party developers, right back to the Apple II days, seeing them as capturing value that Apple itself had created.


They said as much in the Epic vs. Apple litigation, something along the lines of "we create the entire App Store market", as if the 3rd party developers don't.


Videos with ELI5-level overviews that feel like an ad - these do sell. A disc with an SDK does not.


Properties of a language shape the tooling and culture that develops around it.

JS exploded in popularity while Internet Explorer was still around, before the ES6 cleanup of the language. JS had lots of gotchas where seemingly obvious code didn't work correctly, and devs couldn't keep up with all the dumb hacks needed for even basic things. Working around IE6's problems used to be a whole profession (quirksmode.org).

Browsers didn't support JS modules yet, and HTTP/1.1 couldn't handle many small files efficiently, so devs needed a way to "bundle" their JS anyway. Node.js happened to have a solution, which also enabled reusing code between client and server, and the micro libraries saved developers from having to deal with JS engine differences and memorize all the quirks.


In other languages people build abstraction libraries for that, like the Apache Portable Runtime, which gives you a consistent API for most things you need to build a web server, using just one dependency. That would also save you from having to memorise all the micro libraries needed to work around the relevant quirks.

Splitting it into a library per quirk seems like an unforced error in that context.

One could have excused it as a way to keep the code size down, but if you use npm you also usually use a build step which could drop the unused parts, so that argument doesn't really hold water.


All of the differences are attributable to the language.

The Apache runtime isn't sent over the network every time it's used, but JS in the browser is.

The JS ecosystem has several fix-everything-at-once libraries. However, JS is very dynamic, so even when "compiled", it's very hard to remove dead code. JS compiling JS is also much slower than C compiling C. Both of these factors favor tiny libraries.

Abstractions in JS have a higher cost. Even trivial wrappers add overhead before the JIT kicks in, and even then very few things can be optimized out, and many abstractions even prevent the JIT from working well. It's much cheaper to patch a few gaps only where they're needed than to add a foundational abstraction layer for the whole app.


I have a vague memory of using dead-code-removing JavaScript toolchains quite a long time ago, like the Closure Compiler.

I'm not sure the dependency tree madness actually translates to smaller code in the end either, given the bloat in the average web app... but to be fair it's perfectly plausible that JavaScript developers opted for micro-libraries motivated by performance, even if that wasn't the effect of the decision.


The press release doesn't give any concrete numbers, but even if it doubles the efficiency of Peltier coolers, they're still 3-5× less efficient than heat pumps.

Thermoelectric cooling is notable for having no moving parts and for its ability to scale down to small sizes, so it might end up having many specialized applications, but for A/C, heat pumps are already very effective.


And what about service life? I had a mini-fridge that used this technology, and it stopped working after about 2 years. Was that just bad luck or poor quality, or some inherent lifetime of the components?


In principle Peltier elements should be very robust over time, as a solid-state system where the only moving parts are fans (versus traditional refrigeration, which includes a high-pressure pump...).

In practice I strongly suspect most Peltier-based systems are built very cheaply... because their inefficiency means the majority of the market is bordering on a scam. Sophisticated consumers aren't going to be buying very many fridges built with them (of course you might have a niche use case where they actually make sense and you're willing to pay for a quality product, but do most purchasers?).


Thermal cycling is murder on rigid electronic connections; the mechanical connection between the heatsinks on each side of the Peltier cell is a prime example.


Rust is over 10 years old now. It has a track record of delivering what it promises, and a very satisfied growing userbase.

OTOH static analyzers for C have been around for longer than Rust, and we're still waiting for them to disprove Rice's theorem.

AI tools so far are famous for generating low-quality code and bogus vulnerability reports. They may eventually get better and end up being used to make C code secure - see DARPA's TRACTOR program.


The applicability of Rice's theorem with respect to static analysis or abstract interpretation is more complex than you implied. First, static analysis tools are largely pattern-oriented. Pattern matching is how they sidestep undecidability. These tools have their place, but they aren't trying to be the tooling you or the parent claim. Instead, they are more useful to enforce coding style. This can be used to help with secure software development practices, but only by enforcing idiomatic style.

Bounded model checkers, on the other hand, are this tooling. They don't have to disprove Rice's theorem to work. In fact, they work directly with this theorem. They transform code into state equations that are run through an SMT solver. They are looking for logic errors, use-after-free, buffer overruns, etc. But, they also fail code for unterminated execution within the constraints of the simulation. If abstract interpretation through SMT states does not complete in a certain number of steps, then this is also considered a failure. The function or subset of the program only passes if the SMT solver can't find a satisfactory state that triggers one of these issues, through any possible input or external state.

These model checkers also provide the ability for user-defined assertions, making it possible to build and verify function contracts. This allows proof engineers to tie in proofs about higher level properties of code without having to build constructive proofs of all of this code.

Rust has its own issues. For instance, its core library is unsafe, because it has to use unsafe operations to interface with the OS, or to build containers or memory management models that simply can't be described with the borrow checker. This has led to its own CVEs. To strengthen the core library, core Rust developers have started using Kani -- a bounded model checker like those available for C or other languages.
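
For a concrete picture, a Kani harness looks roughly like the sketch below (the midpoint function and the property are made up for illustration; kani::any() produces a symbolic value and kani::assume() constrains it):

    // A user-defined assertion checked over all possible inputs,
    // not just sampled test cases.
    fn midpoint(a: u32, b: u32) -> u32 {
        a + (b - a) / 2 // avoids the classic (a + b) / 2 overflow
    }

    #[cfg(kani)]
    mod verification {
        use super::*;

        #[kani::proof]
        fn midpoint_stays_in_range() {
            let a: u32 = kani::any();   // symbolic, unconstrained input
            let b: u32 = kani::any();
            kani::assume(a <= b);       // constrain the state space
            let m = midpoint(a, b);
            assert!(a <= m && m <= b);  // the solver searches for any violating a, b
        }
    }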

Bounded model checking works. This tooling can be used to make either C or Rust safer. It can be used to augment proofs of theorems built in a proof assistant to extend this to implementation. The overhead of model checking is about that of unit testing, once you understand how to use it.

It is significantly less expensive to teach C developers how to model check their software using CBMC than it is to teach them Rust and then have them port code to Rust. Using CBMC properly, one can get better security guarantees than using vanilla Rust. Overall, an Ada + Spark, CBMC + C, Kani + Rust strategy coupled with constructive theory and proofs regarding overall architectural guarantees will yield equivalent safety and security. I'd trust such pairings of process and tooling -- regardless of language choice -- over any LLM derived solutions.


Sure it's possible in theory, but how many C codebases actually use formal verification? I don't think I've seen a single one. Git certainly doesn't do anything like that.

I have occasionally used CBMC for isolated functions, but that must already put me in the top 0.1% of formal verification users.


It's not used more because it is unknown, not because it is difficult to use or impractical.

I've written several libraries and several services now that have 100% coverage via CBMC. I'm quite experienced with C development and with secure development, and reaching this point always finds a handful of potentially exploitable errors I would have missed. The development overhead of reaching this point is about the same as the overhead of getting to 80% unit test coverage using traditional test automation.


You're describing cases in which static analyzers/model checkers give up, and can't provide a definitive answer. To me this isn't side-stepping the undecidability problem, this is hitting the problem.

C's semantics create dead-ends for non-local reasoning about programs, so you get inconclusive/best-effort results propped up by heuristics. This is of course better than nothing, and still very useful for C, but it's weak and limited compared to the guarantees that safe Rust gives.

The bar set for Rust's static analysis and checks is to detect and prevent every UB in safe Rust code. If there's a false positive, people file it as a soundness bug or a CVE. If you can make Rust's libstd crash from safe Rust code, even if it requires deliberately invalid inputs, it's still a CVE for Rust. There is no comparable expectation of having anything reliably checkable in C. You can crash stdlib by feeding it invalid inputs, and it's not a CVE, just don't do that. Static analyzers are allowed to have false negatives, and it's normal.

You can get better guarantees for C if you restrict the semantics of the language, add annotations/contracts for gaps in its type system, add assertions for things it can't check, and replace all the C code that the checker fails on with alternative idioms that fit the restricted model. But at that point it's no longer the silver bullet of "keep your C codebase and just use a static analyzer"; it starts looking like a rewrite in a more restrictive dialect of C, and the more guarantees you want, the more code you need to annotate and adapt to the checks.

And this is basically Rust's approach. The unsafe Rust is pretty close to the semantics of C (with UB and all), but by default the code is restricted to a subset designed to be easy for static analysis to be able to guarantee it can't cause UB. Rust has a model checker for pointer aliasing and sharing of data across threads. It has a built-in static analyzer for memory management. It makes programmers specify contracts necessary for the analysis, and verifies that the declarations are logically consistent. It injects assertions for things it can't check at compile time, and gives an option to selectively bypass the checkers for code that doesn't fit their model. It also has a bunch of less rigorous static analyzers detecting certain patterns of logic errors, missing error handling, and flagging suspicious and unidiomatic code.
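
A tiny sketch of what those built-in contracts and checks look like in practice (made-up code; the commented-out lines are the ones the compiler rejects):

    use std::thread;

    // The lifetime in the signature is the "contract": the returned reference
    // borrows from `v`, and the compiler verifies every caller against it.
    fn first<'a>(v: &'a [i32]) -> &'a i32 {
        &v[0] // the index is bounds-checked at runtime, since it can't be proven here
    }

    fn main() {
        let mut v = vec![1, 2, 3];
        let r = first(&v);
        // v.push(4);        // rejected here: can't mutate `v` while `r` still borrows it
        println!("{r}");
        v.push(4);           // fine here: the borrow held by `r` has ended

        let local = 5;
        // thread::spawn(|| println!("{local}")); // rejected: the thread could outlive `local`
        thread::spawn(move || println!("{local}")) // fine: `local` is moved into the closure
            .join()
            .unwrap();
    }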

It would be amazing if C had a static analyzer that could reliably assure, with a high level of certainty and out of the box, that heavily multi-threaded complex code doesn't contain any UB, doesn't corrupt memory, and won't have use-after-free, even if the code is full of dynamic memory (de)allocations, callbacks, thread-locals, on-stack data of one thread shared with another, and objects moved between threads, while mixing objects and code from multiple 3rd party libraries. Rust does that across millions of lines of code, and it's not even a separate static analyzer with specially-written proofs; it's just how it works.

Such analysis requires code with sufficient annotations and restricted to design patterns that obviously conform to the checkable model. Rust had a luxury of having this from the start, and already has a whole ecosystem built around it.

C doesn't have that. You start from a much worse position (with mutable aliasing, const that barely does anything, and a type system without ownership or any thread-safety information) and need to add checks and refactor code just to catch up to the baseline. And in the end, with all that effort, you end up with a C dialect peppered with macros, and merely fix one problem in C, without getting the additional benefits of a modern language.

CBMC+C has a higher ceiling than vanilla Rust, and SMT solvers are more powerful, but the choice isn't limited to C+analyzers vs only plain Rust. You can still run additional checkers/solvers on top of everything Rust has built-in, and further proofs are easier thanks to being on top of stronger baseline guarantees and a stricter type system.


If we mark any case that might be undecidable as a failure case, and require that code be written that can be verified, then this is very much sidestepping undecidability by definition. Rust's borrow checker does the same exact thing. Write code that the borrow checker can't verify, and you'll get an error, even if it might be perfectly valid. That's by design, and it's absolutely a design meant to sidestep undecidability.
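
A concrete example of that trade-off is the classic get-or-insert pattern, which is sound but rejected by the current borrow checker (a sketch; the planned Polonius-style analysis is expected to accept it):

    use std::collections::HashMap;

    // Logically sound: the early return is the only path that keeps the borrow alive.
    // The current borrow checker still extends the shared borrow of `map` over the
    // whole function because of the returned lifetime, so the insert below is rejected.
    fn get_or_insert(map: &mut HashMap<u32, String>, key: u32) -> &String {
        if let Some(v) = map.get(&key) {
            return v;
        }
        map.insert(key, String::from("default")); // error: cannot borrow `*map` as mutable
        &map[&key]
    }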

Yes, CBMC + C provides a higher ceiling. Coupling Kani with Rust results in the exact same ceiling as CBMC + C. Not a higher one. Kani compiles Rust to the same goto-C that CBMC compiles C to. Not a better one. The abstract model and theory that Kani provides is far more strict than what Rust provides with its borrow checker and static analysis. It's also more universal, which is why Kani works on both safe and unsafe Rust.

If you like Rust, great. Use it. But, at the point of coupling Kani and Rust, it's reaching safety parity with model checked C, and not surpassing it. That's fine. Similar safety parity can be reached with Ada + Spark, C++ and ESBMC, Java and JBMC, etc. There are many ways of reaching the same goal.

There's no need to pepper C with macros or to require a stronger type system with C to use CBMC and to get similar guarantees. Strong type systems do provide some structure -- and there's nothing wrong with using one -- but unless we are talking about building a dependent type system, such as what is provided with Lean 4, Coq, Agda, etc., it's not enough to add equivalent safety. A dependent type system also adds undecidability, requiring proofs and tactics to verify the types. That's great, but it's also a much more involved proposition than using a model checker. Rust's H-M type system, while certainly nice for what it is, is limited in what safety guarantees it can make. At that point, choosing a language with a stronger type system or not is a style choice. Arguably, it lets you organize software in a better way that would require manual work in other languages. Maybe this makes sense for your team, and maybe it doesn't. Plenty of people write software in Lisp, Python, Ruby, or similar languages with dynamic and duck typing. They can build highly organized and safe software. In fact, such software can be made safe, much as C can be made safe with the appropriate application of process and tooling.

I'm not defending C or attacking Rust here. I'm pointing out that model checking makes both safer than either can be on their own. As with my original reply, model checking is something different than static analysis, and it's something greater than what either vanilla C or vanilla Rust can provide on their own. Does safe vanilla Rust have better memory safety than vanilla C? Of course. Is it automatically safe against the two dozen other classes of attacks by default and without careful software development? No. Is it automatically safe against these attacks with model checking? Also no. However, we can use model checking to demonstrate the absence of entire classes of bugs -- each of these classes of bugs -- whether we model check software written in C or in Rust.

If I had to choose between model checking an existing codebase (git or the Linux kernel), or slowly rewriting it in another language, I'd choose the former every time. It provides, by far, the largest gain for the least amount of work.


People innately admire difficult skills, regardless of their usefulness. Acrobatic skateboarding is impressive, even when it would be faster and safer to go in a straight line or use a different mode of transport.

To me, skill and effort are misplaced and wasted when they're spent on manually checking invariants that a compiler could check better automatically, or on implementing clever workarounds for language warts that no longer provide any value.

Removal of busywork and pointless obstacles won't make smart programmers dumb and lazy. It allows smart programmers to use their brainpower on bigger more ambitious problems.


These types of comments always remind me that we forget, every time, where we came from in terms of computation.

It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.

It's easy to publicly shame people who have done hard things for a long time in the light of newer tools. However, many people who like these languages have been using them since before the languages we champion today were even ideas.

I personally like Go these days for its stupid simplicity, but when I'm going to do something serious, I'll always use C++. You can fight me, but you'll never pry C++ from my cold, dead hands.

For the record, I don't like C & C++ because they are hard. I like them because they provide a more transparent window onto the processor, which is a glorified, hardware implemented PDP-11 emulator.

Last, we shall not forget that all processors are C VMs, anyway.


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.

The core of the borrow checker was being formulated in 2012[1], which is 13 years ago. No infeasibility then. And it's based on ideas that are much older, going back to the 90s.

Plus, you are vastly overestimating the expense of borrow checking: it is very fast, and it's not the reason Rust's compile times are slow. You absolutely could have done borrow checking much earlier, even with less computing power available.

1: https://smallcultfollowing.com/babysteps/blog/2012/11/18/ima...


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.

IIRC borrow checking usually doesn't consume that much compilation time for most crates - maybe a few percent or thereabouts. Monomorphization can be significantly more expensive and that's been much more widely used for much longer.
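
For anyone unfamiliar: monomorphization means the compiler stamps out a separate copy of a generic function for every concrete type it's used with, which is where much of the codegen cost comes from. A minimal sketch (made-up function):

    // One generic definition...
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        items.iter().copied().fold(items[0], |a, b| if b > a { b } else { a })
    }

    fn main() {
        largest(&[1, 2, 3]);   // ...monomorphized (compiled) once for i32,
        largest(&[1.0, 2.5]);  // ...and again for f64: codegen grows with each type used.
        // A &dyn trait object would keep one compiled copy behind a vtable instead,
        // trading codegen size/time for dynamic dispatch.
    }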


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.

I think you're setting the bar a little too high. Rust's borrow-checking semantics draw on much earlier research (for example, Cyclone had a form of region-checking in 2006); and Turbo Pascal was churning through 127-character identifiers on 8088s in 1983, one year before C++ stream I/O was designed.

EDIT: changed Cyclone's "2002" to "2006".


I remember; I was there coding in the 1980s, which is how I know C and C++ were not the only alternatives, just the ones that eventually won in the end.


> the processor, which is a glorified, hardware implemented PDP-11 emulator.

This specific claim seems like gratuitously rewriting history.

I can see how you'd feel C (and certain dialects of C++) are "closer to the metal" in a certain sense: C supports very few abstractions, and with fewer abstractions there are fewer "things" between you and "the metal". But this is as far as it goes. C does not represent - by any stretch of the imagination - an accurate model of the computation or memory of a modern CPU. It does stay close to the PDP-11, but calling modern CPUs "glorified hardware emulators of the PDP-11" is just preposterous.

The PDP-11 was an in-order CISC processor with no virtual memory, cache hierarchy, branch prediction, symmetric multiprocessing, or SIMD instructions. Some modern CPUs (namely the x86/x64 family) do emulate a CISC ISA on top of something that is probably more RISC-like, but that's as far as we can say they are trying to behave like a PDP-11 (even though the intention was to behave like a first-gen Intel Pentium).


> we shall not forget that all processors are C VMs

This idea is some 10yrs behind. And no, thinking that C is "closer to the processor" today is incorrect.

It makes you think it is close, which in some sense is even worse.


> This idea is some 10yrs behind.

Akshually[1] ...

> And no, thinking that C is "closer to the processor" today is incorrect

THIS thinking is about 5 years out of date.

Sure, this thinking you exhibit gained prominence and got endlessly repeated by every critic of C who once spent a summer doing a C project in undergrad, but it's been more than 5 years since this opinion was essentially nullified by

    Okay, if C is "not close to the processor", what's closer?
Assembler? After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on, that has a lower bound which none of the data gets close to?

You're repeating something that was fashionable years ago.

===========

[1] There's always one. Today, I am that one :-)


Standard C doesn't have inline assembly, even though many compilers provide it as an extension. Other languages do.
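
For comparison, Rust stabilized inline assembly in the language itself (the asm! macro, Rust 1.59); a minimal sketch:

    #[cfg(target_arch = "x86_64")]
    use std::arch::asm;

    // x86-64 only for this example; the asm! macro itself is architecture-generic.
    #[cfg(target_arch = "x86_64")]
    fn add_one(mut x: u64) -> u64 {
        unsafe {
            asm!("add {0}, 1", inout(reg) x); // {0} is bound to x as both input and output
        }
        x
    }

    #[cfg(target_arch = "x86_64")]
    fn main() {
        assert_eq!(add_one(41), 42);
    }

    #[cfg(not(target_arch = "x86_64"))]
    fn main() {}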

> After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on

The claim about C being "close to the machine" means different things to different people. Some people literally believe that C maps directly to the machine, when it does not. This is just a factual inaccuracy. For the people that believe that there's a spectrum, it's often implied that C is uniquely close to the machine in ways that other languages are not. The pushback here is that C is not uniquely so. "just as close, but not closer" is about that uniqueness statement, and it doesn't mean that the spectrum isn't there.


> Some people literally believe that C maps directly to the machine, when it does not.

Maybe they did, 5 years (or more) ago when that essay came out. It was wrong even then, but repeating it is even more wrong.

> This is just a factual inaccuracy.

No. It's what we call A Strawman Argument, because no one in this thread claimed that C was uniquely close to the hardware.

Jumping in to destroy an argument when no one is making it is almost a textbook example of strawmanning.


Claiming that a processor is a "C VM" implies that it's specifically about C.


Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.
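
For example, in Rust these live in the standard library rather than in compiler-specific headers or extensions (a sketch; count_ones() and the std::arch intrinsics are stable APIs):

    // Portable wrapper: compiles down to a single POPCNT / CNT instruction
    // on targets that have one, with no vendor header or compiler extension.
    fn ones(x: u64) -> u32 {
        x.count_ones()
    }

    // Vendor SIMD intrinsics are also part of the standard library (std::arch),
    // gated by target architecture rather than by a particular compiler.
    #[allow(dead_code)]
    #[cfg(target_arch = "x86_64")]
    fn add4(a: std::arch::x86_64::__m128, b: std::arch::x86_64::__m128) -> std::arch::x86_64::__m128 {
        unsafe { std::arch::x86_64::_mm_add_ps(a, b) }
    }

    fn main() {
        assert_eq!(ones(0b1011), 3);
    }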


> Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.

Well, you're talking about languages that don't have standards; they have a reference implementation.

IOW, no language has standards for processor intrinsics; they all have implementations that support intrinsics.


> Okay, if C is "not close to the processor", what's closer?

LLVM IR is closer. Still higher level than Assembly

The problem is thus:

    char a, b, c;
    c = a + b;

Could not be more different between x86 and ARM.


> LLVM IR is closer. Still higher level than Assembly

So your reasoning for repeating the once-fashionable statement is that "an intermediate representation that no human codes in is closer than the source code"?


To me a compiler's effort is misplaced and wasted when it's spent on checking invariants that could be checked by a linter or a sidecar analysis module.


Checking of whole-program invariants can be accurate and done basically for free if the language has suitable semantics.

For example, if a language has non-nullable types, then you get this information locally, for free, everywhere - even from 3rd party code. When the language doesn't track it, you need a linter that can do symbolic execution, construct call graphs and data flows, and find every possible assignment, and it will still end up with a lot of unknowns and waste your time on false positives and false negatives.
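
A tiny Rust illustration (made-up function; the point is that the signature alone carries the nullability information locally):

    // The signature tells the compiler (and the reader) which of these can be
    // "missing" - no whole-program analysis needed.
    fn greet(name: &str, nickname: Option<&str>) -> String {
        // `name` can never be null; nothing to check, nothing to analyze.
        match nickname {
            Some(n) => format!("Hi {n} ({name})"), // the only place `n` exists
            None => format!("Hi {name}"),          // forgetting this case is a compile error
        }
    }

    fn main() {
        println!("{}", greet("Ada", Some("The Countess")));
    }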

Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart. Recovering required information from arbitrary code may be literally impossible (Rice's theorem), and getting even approximate results quickly ends up requiring whole-program analysis and prohibitively expensive algorithms.

And it's not even an either-or choice. You can have robust checks for fundamental invariants built into the language/compiler, and still use additional linters for detecting less clear-cut issues.


> Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart

This assertion is known to be disproven. seL4 is a fully memory-safe (and with even more safety baked in) major systems-programming behemoth that is written in C + annotations, where the analysis is conducted in a sidecar.

To obtain extra safety (though still not as safe as seL4) in Rust, you must add a sidecar in the form of Miri. Nobody proposes adding Miri into Rust itself.

Now, it is true that seL4 is a pain in the ass to write, compile, and check, but there is a lot of unexplored design space in the spectrum between Rust, Rust + Miri, and seL4.


If the compiler is not checking them, then it can't assume them, and that reduces the opportunities for optimizations. If the checks don't run in the compiler, then they're not run every time; if you do want them to run every time, then they may as well live in the compiler instead.
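
A small illustration of the "can't assume them" half (hypothetical function; the no-aliasing guarantee is what the Rust compiler forwards to LLVM as noalias):

    // Because `&i32` and `&mut i32` are guaranteed not to alias, the compiler
    // may keep `*x` in a register across the store to `*y`. A C compiler given
    // `const int *x, int *y` (without restrict) must assume they might alias
    // and reload `*x` after the store.
    fn store_and_sum(x: &i32, y: &mut i32) -> i32 {
        *y = 1;
        *x + *y
    }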


It seems likely that C++ will end up in a similar place as COBOL or Fortran, but I don't see that as a good future for a language.

These languages are not among the top contenders for new projects. They're a legacy problem, and are kept alive only by a slowly shrinking number of projects. It may take a while to literally drop to zero, but it's a path of exponential decay towards extinction.

C++ has strong arguments for sticking around as a legacy language for several too-big-to-rewrite C++ projects, but it's becoming less and less attractive for starting new projects.

C++ needs a better selling point than being a language that some old projects are stuck with. Without growth from new projects, it's only a matter of time until it's eclipsed by other languages and relegated to shrinking niches.


It will take generations to fully bootstrap compiler toolchains, language runtimes, and operating systems that depend on either C or C++.

Also depending on how AI assisted tooling evolves, I think it is not only C and C++ that will become a niche.

I already see this happening with the number of low-code/no-code workflows augmented with AI that are currently trending in SaaS products.


Apple got spooked by the GPLv3 anti-Tivoization clauses and stopped updating its GNU tools in 2007.

macOS still has a bunch of GNU tools, but they appear to be incompatible with GNU tools used everywhere else, because they're so outdated.


And Apple is doing a lot of Tivoization these days. They're not yet actually blocking apps that they haven't "notarized", but they're not making it easier. That's one of the many reasons I left the Mac platform, both privately and at work. The other reason was more and more reliance on the iCloud platform for new features (many of its services don't work on other OSes like Windows and Linux - I use all of those too).

The problem with the old tools is that I don't have admin rights at work so it's not easy to install coreutils. Or even homebrew.

I can understand why they did it though. Too many tools these days advocate just piping some curl into a root shell, which is pretty insane. Homebrew does this too.


Couldn't you simply use macOS without the iCloud features? Which features require iCloud to work?


You can, but there's just not much point anymore.

I don't remember all the specifics, but every time there was a new macOS I could cross most of the new features off. Nope, this one requires iCloud or an Apple ID. Nope, this one only works with other Macs or iPhones. Stuff like that. The Mac didn't use to be a walled garden. You can still go outside their ecosystem (unlike on iOS), but then there's not much point. You're putting a square peg in a round hole.

Now, Apple isn't the only one doing this. Microsoft is making it ever harder to use Windows without a Microsoft account. That's why I'm gravitating more and more to FOSS OSes. But there are new problems now: with Firefox on Linux I constantly get captcha'd, M365 (work) blocks random features or keeps signing me out, and my bank complains my system is not "trusted". Euh, what about trusting your actual customers instead of a megacorp? I don't want my data locked in or monitored by a commercial party.


The rexif crate supports editing, so you can apply rotation when resizing, and then remove the rotation tag from the EXIF data. Keeping EXIF isn't necessary for small thumbnails, but could be desirable for larger versions of the image.
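
A minimal sketch of the thumbnail case (this sidesteps EXIF editing by re-encoding with the image crate, which drops the metadata entirely; rexif is only used here to read the Orientation value, and the exact tag/value matching is an assumption about its API):

    // Read the EXIF Orientation value (1 = normal, 3/6/8 = rotated; the mirrored
    // variants 2/4/5/7 are ignored in this sketch), then bake the rotation into
    // the pixels and re-encode, which drops the EXIF block and its rotation tag.
    fn make_thumbnail(path: &str, out: &str) -> Result<(), Box<dyn std::error::Error>> {
        let orientation = rexif::parse_file(path)?
            .entries
            .iter()
            .find(|e| e.tag == rexif::ExifTag::Orientation)
            .and_then(|e| match &e.value {
                rexif::TagValue::U16(v) => v.first().copied(),
                _ => None,
            })
            .unwrap_or(1);

        let img = image::open(path)?;
        let img = match orientation {
            3 => img.rotate180(),
            6 => img.rotate90(),
            8 => img.rotate270(),
            _ => img,
        };
        img.thumbnail(256, 256).save(out)?; // re-encode: no EXIF, no stale rotation tag
        Ok(())
    }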


Rust has a combo: people come for safety, stay for usability.

Languages struggle to win on usability alone, because outside of passion projects it's really hard to justify a rewrite of working software to get the same product, only with neater code.

But if the software also has a significant risk of being exploited, or is chronically unstable, or it's slow and making it multi-core risks making it unstable, then Rust has a stronger selling point. Management won't sign off a rewrite because sum types are so cool, but may sign off an investment into making their product faster and safer.


Generally speaking I was more prone to agreeing with the Rust-haters: I thought the whole idea of how Rust lifetimes are implemented was flawed, and the borrow checker needlessly restrictive. I also disagreed with some other ideas, like the overreliance on generics for static dispatch, which leads to large executables and slow compiles.

To be clear, I still think these criticisms are valid. However, after using the language in production, I've come to realize these problems are manageable in practice. The language is nice, decently well supported, and has a relatively rich ecosystem.

Every programming language/ecosystem is flawed in some way, and I think as an experienced dev you learn to deal with this.

It having an actually functioning npm-like package manager and build system makes building multiplatform software trivial - something C++ lacks, which kills my desire to deal with that language on a voluntary basis.

The ecosystem is full of people who try to do their best and produce efficient code, and try to understand the underlying problem and the machine. It feels like it still has a culture of technical excellence, while most libraries seem to be also well organized and documented.

This is in contrast to JS people, who often try to throw together something as fast as possible, and then market the shit out of it to win internet points, or Java/C# people who overcomplicate and obfuscate code by sticking to these weird OOP design pattern principles where every solution needs to be smeared across 5 classes and design patterns.


Unicode wanted the ability to losslessly round-trip every other encoding, in order to be easy to adopt partially in a world where other encodings were still in use. It merged a bunch of different incomplete encodings that used competing approaches. That's why there are multiple ways of encoding the same characters, and there's no overall consistency to it. It's hard to say whether that was a mistake. This level of interoperability may have been necessary for Unicode to actually win, and not be another episode of https://xkcd.com/927
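
For example, "é" exists both as a single precomposed codepoint (so Latin-1 text can round-trip codepoint-for-codepoint) and as a base letter plus a combining accent; the two forms only compare equal after normalization (a minimal sketch using the unicode-normalization crate):

    use unicode_normalization::UnicodeNormalization;

    fn main() {
        let precomposed = "\u{00E9}";  // "é" as one codepoint, round-trips to Latin-1
        let decomposed = "e\u{0301}";  // "e" + combining acute accent
        assert_ne!(precomposed, decomposed); // different codepoints, different bytes...

        let a = precomposed.nfc().collect::<String>();
        let b = decomposed.nfc().collect::<String>();
        assert_eq!(a, b); // ...but canonically equivalent after NFC normalization
    }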


Why did Unicode want codepointwise round-tripping? One codepoint in a legacy encoding becoming two in Unicode doesn't seem like it should have been a problem. In other words, why include precomposed characters in Unicode?

