Hacker News | xmddmx's comments

My memory is that it was named "Aero Glass" which heightens the irony of "Liquid Glass" sucking.

I see many references to it being called just "Aero", though some call it "Aero Glass" [1].

Does anyone know the truth?

[1] https://www.pcmag.com/archive/rip-aero-glass-windows-8-stick...


Microsoft marketing may have had a specific preference, but "Aero" and "Aero Glass" are used interchangeably. From the same article you linked:

> "Rest in peace, Aero. I liked you, a lot. Still do. And I'll miss you," Thurrott writes


Right, this annoyed me too: it was stated without attribution, as if novel.

What is the name of the law when someone writes a think piece of "stuff I've learned" and fails to cite any of it to existing knowledge?

Makes me wonder if (A) they do know it's not their idea, but they are just cool with plagiarism or (B) they don't know it's not their idea.


I don't know if there's a named law, but the word for not knowing and believing that something remembered is a novel idea is "cryptomnesia".

Checking whether you really know something by teaching it is Feynman's method of understanding. Basically, on scanning, I don't particularly disagree with the content of the post. However, treating these things (many of which regularly show up here on HN) as lessons from "14 years at Google" is a little misplaced.

But, hey, it's 2026, CES is starting, and the hyperbole will just keep rocketing up and out.


You mean 1990. Someone graduating college in 1990 would have been about 21. That was 35 years ago, so they would be about 56 in 2025.

Math is hard.


Weird flex of pedantry even for HN.

Says who? I did a gap year service project and graduated at age 23. My business partner did a 3-2 program and graduated at 23.

Plus, anyone working as an engineer then has an 8-figure net worth, and the overwhelming majority moved on long ago.


C'mon man, it's a comment, not a research paper. Off by one isn't worth follow-up snark.

Off by 10+1. Someone who graduated college in 2000 would be 25 + 22 = 47 in 2025 (25 years since graduation, plus age 22 after four years of college from 18), not 57, and not anywhere close to retirement age. It might be pedantry, but the original comment should have said 1990, not 2000.

Their main point was that it is off by 10; then they introduced the additional confusing question of whether it's off by 10, 11, or 12.

Turns out that under the USA Code of Federal Regulations, there's a pretty big exemption to IRB for research on pedagogy:

CFR 46.104 (Exempt Research):

46.104.d.1 "Research, conducted in established or commonly accepted educational settings, that specifically involves normal educational practices that are not likely to adversely impact students' opportunity to learn required educational content or the assessment of educators who provide instruction. This includes most research on regular and special education instructional strategies, and research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods."

https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-...

So while this may have been a dick move by the instructors, it was probably legal.


I'm afraid you misunderstand what it means to be "exempt" under the IRB. It doesn't mean "you don't have to talk to the IRB", it means "there's a little less oversight but you still need to file all the paperwork". Here's one university's explanation[1]:

> Exempt human subjects research is a specific sub-set of “research involving human subjects” that does not require ongoing IRB oversight. Research can qualify for an exemption if it is no more than minimal risk and all of the research procedures fit within one or more of the exemption categories in the federal IRB regulations. *Studies that qualify for exemption must be submitted to the IRB for review before starting the research. Pursuant to NU policy, investigators do not make their own determination as to whether a research study qualifies for an exemption — the IRB issues exemption determinations.* There is not a separate IRB application form for studies that could qualify for exemption – the appropriate protocol template for human subjects research should be filled out and submitted to the IRB in the eIRB+ system.

Most of my research is in CS Education, and I have often been able to get my studies under the Exempt status. This makes my life easier, but it's still a long arduous paperwork process. Often there are a few rounds to get the protocol right. I usually have to plan studies a whole semester in advance. The IRB does NOT like it when you decide, "Hey I just realized I collected a bunch of data, I wonder what I can do with it?" They want you to have a plan going in.

[1] https://irb.northwestern.edu/submitting-to-the-irb/types-of-...


The CFR is pretty clear, and I have experience with this (being an IRB reviewer, a faculty member, and a researcher). When it says "is exempt" it means "is exempt".

Imagine otherwise: a teacher who wants to change their final exam from a 50-item Scantron using A-D choices to a 50-item Scantron using A-E choices, because they think having 5 choices per item is better than 4, would need to ask for IRB approval. That's not feasible, and it is not what happens in the real world of academia.

It is true that local IRBs may try to add additional rules, but the NU policy you quote talks about "studies". Most IRBs would disagree that "professor playing around with grading procedures and policies" constitutes a "study".

It would be presumed exempted.

Are you a teacher or a student? If you are a teacher, you have wide latitude that a student researcher does not.

Also, if you are a teacher, doing "research about your teaching style", that's exempted.

By contrast, if you are a student, or a teacher "doing research" that's probably not exempt and must go through IRB.


You would be correct, except that this is a published blog post. It may not be in an academic journal, but this person has still conducted human subjects research that led to a published artifact. It was just "playing around" until they started posting their students' (summarized, anonymized) data to the internet.

I was impressed by the lack of dominance of Thunderbolt:

"Next I tested llama.cpp running AI models over 2.5 gigabit Ethernet versus Thunderbolt 5"

Results from that graph showed only a ~10% benefit from TB5 vs. Ethernet.

Note: The M3 Studios support 10 Gbps Ethernet, but that wasn't tested; the test used 2.5 Gbps Ethernet instead.

If 2.5 Gbps Ethernet was only 10% slower than TB5, how would 10 Gbps Ethernet have fared?

Also, TB5 has to be wired so that every machine is connected to every other over TB, limiting you to 4 Macs.

By comparison, with Ethernet you could use a hub-and-spoke configuration with an Ethernet switch, theoretically letting you use more than 4 machines.
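The scaling difference between the two topologies is easy to see with a toy calculation (my own illustration, not from the article): a full Thunderbolt mesh needs a cable per pair and a port per peer, while an Ethernet star only needs one uplink per machine plus a switch.

```python
def mesh_cables(n: int) -> int:
    return n * (n - 1) // 2   # one cable per pair of machines

def mesh_ports_per_machine(n: int) -> int:
    return n - 1              # each machine links to every other

def star_cables(n: int) -> int:
    return n                  # one uplink per machine to the switch

for n in (4, 8):
    print(n, mesh_cables(n), mesh_ports_per_machine(n), star_cables(n))
```

At 4 machines a mesh needs 6 cables and 3 ports per machine; at 8 it balloons to 28 cables and 7 ports each, while the star stays at one port per machine.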


10G Ethernet would only marginally speed things up, based on past experience with llama.cpp RPC; lower latency is much more helpful, but still, there are diminishing returns with that layer split.


This video tests the setup using 10 Gbps Ethernet: https://www.youtube.com/watch?v=4l4UWZGxvoc


That’s llama.cpp, which didn’t scale nearly as well in the tests. Presumably because it’s not optimized yet.

RDMA is always going to have lower overhead than Ethernet isn’t it?


Possibly RDMA over Thunderbolt. But for RoCE (RDMA over Converged Ethernet), obviously not, because it's sitting on top of Ethernet. It could still end up with higher throughput once you factor in the CPU time of running custom protocols that smart NICs can simply DMA around, but the framing overhead is still definitively higher.


what do you think "ethernet's overhead" is?


Header and FCS, interpacket gap, and preamble. What do you think "Ethernet overhead" is?
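For a concrete sense of that per-frame overhead, here's a quick back-of-envelope sketch of my own, using the standard Ethernet figures:

```python
# Per-frame Ethernet overhead, standard sizes in bytes:
PREAMBLE_SFD = 8    # preamble + start-of-frame delimiter
HEADER       = 14   # dst MAC + src MAC + EtherType
FCS          = 4    # frame check sequence (CRC32)
IPG          = 12   # minimum interpacket gap

def efficiency(payload: int) -> float:
    """Fraction of wire time carrying actual payload."""
    wire = PREAMBLE_SFD + HEADER + payload + FCS + IPG
    return payload / wire

print(f"{efficiency(1500):.4f}")  # ~0.9753 for a full-size frame
print(f"{efficiency(46):.4f}")    # ~0.5476 for a minimum payload
```

So for full-size frames the fixed overhead costs under 3% of throughput; it only dominates with tiny packets.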


I meant in usec, sorry if that wasn't clear, given that the discussion I replied to was about RPC latency.


That's a very nebulous metric. Usec of overhead depends on a lot of runtime factors and a lot of hardware options and design choices that I'm just not privy to.


I really hope the user was running Time Machine - in default settings, Time Machine does hourly snapshot backups of your whole Mac. Restoring is super easy.


There is something wrong with this article, possibly just copyediting mistakes but it makes me question the whole thing.

For example, check out this mess:

> “Unfortunately, there is one significant issue with the aforementioned data: schooling. Seeing as the majority of work to date includes only aggregate data, it is impossible to account. The first concerns small N: seeing as most publish studies only include a handful of TRA data, there is a lot of room for error and over.

Unfortunately, there is a largely unaccounted for confound in this aggregate data which may make generalized analysis questionable: schooling.”


Good catch. Additionally, one of the authors on this is just a student at UWisc, and the other author is also not a professional researcher but instead an author of popular books.

This is not an ad hominem, but it does call into question whether both of these authors have the statistical training to accurately assess the data.


If not ad hominem, what is it then? I mean, you did not provide any substantiated reason why their research would be false, but went straight to pointing at their experience, or lack thereof.

FWIW I find this research to align with my thoughts about IQ: IQ is not a constant but a function of multiple variables, one of which is most likely education.

For instance, I am pretty sure that drilling through abstract mathematical and hard engineering problems, to some extent during high school but much more during and after university, develops your brain in such a way that you become not only more knowledgeable in terms of memorizing things, but also able to reason about and anticipate things you couldn't possibly have before.


> but does put into question the statistical training backgrounds

This is true of virtually all university research. Statistics is far more nuanced than what a semester course can teach you. And the incentives to publish can cause bad actors to use poorly defined surveys or p hack or whatever.


> and the other author is also not a professional researcher but instead an author of popular books.

This makes the awkward wording even more confusing. I don't understand how a professional author who appears to speak English very well would write so poorly and not follow up with edits.


The language is consistent with ESL writing, in my experience.

The strange thing is that the corresponding author and the co-author appear to be English speakers, as far as I can tell. I googled the primary author and found a YouTube channel where someone by the same name speaks clearly about neuroscience. Maybe I'm looking at another person with the same name and middle initial who also happens to speak about neuroscience and brain development?


Why not do an empirical A/B test: Set up two honeypots (or perhaps 2000 for statistical significance). A gets zero updates, B gets all updates immediately. See which ones get pwned faster.
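A toy simulation of what such an experiment might look like (the compromise rates here are entirely made up, purely to illustrate the comparison):

```python
import random

random.seed(42)
N = 1000                              # honeypots per group
p_unpatched, p_patched = 0.08, 0.02   # assumed monthly pwn probabilities

# Group A never patches, group B patches immediately:
pwned_a = sum(random.random() < p_unpatched for _ in range(N))
pwned_b = sum(random.random() < p_patched for _ in range(N))
print(pwned_a, pwned_b)               # counts should land near 80 and 20
```

With effect sizes that large and 1000 hosts per arm, even a crude comparison of counts would be decisive; the hard part is making the honeypots representative of real deployments.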


I am baffled by Apple's incompetence here. In the past years I've seen:

* iTunes/Music app randomly reassign my Album artwork, with different (incorrect) art showing up on different devices!

* Reminders app: shared reminder lists can end up with the name of a different list

* Ghost photos that are deleted from my phone, and come back later.

* Maps, when I say "navigate to $friend" set a route that ended in my own driveway.

To me, these bugs suggest a fundamental design flaw. Perhaps they are using a simple integer as an index rather than a UUID?

Or maybe the database schemas are solid, but there's some sort of race condition in their synchronization frameworks and the data is getting scrambled in RAM?

Whatever it is, it's absolutely insane that in 2025 these kinds of bugs are happening.
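To make that integer-vs-UUID guess concrete, here's a toy sketch (my own illustration; nothing here reflects Apple's actual sync internals):

```python
import uuid

# Two devices each create a record offline, keyed by a local
# autoincrement integer. On sync, the keys collide and the merge
# logic has to guess which record is which:
device_a = {"id": 1, "name": "Groceries"}   # created on device A
device_b = {"id": 1, "name": "Work"}        # created on device B
assert device_a["id"] == device_b["id"]     # collision!

# UUIDs are generated independently and (practically) never collide,
# so the merge is unambiguous:
rec_a = {"id": str(uuid.uuid4()), "name": "Groceries"}
rec_b = {"id": str(uuid.uuid4()), "name": "Work"}
assert rec_a["id"] != rec_b["id"]
```

That collision-then-guess pattern would produce exactly the symptoms above: data attached to the wrong list, artwork on the wrong album, and so on.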


I completely agree about there being a fundamental design flaw.

I still use Macs because data on a physical disk seems perfectly reliable, but I've been bitten by so many of these bugs in their apps. iCloud files completely disappear, then reappear a day later. Highlight a couple chapters of a PDF in Preview, then reopen the file and they're gone because iCloud thinks the older unhighlighted version is newer or something. Madness. I don't touch any of these Apple services/apps anymore.

There's very clearly a fundamental bug in whatever sync framework they seem to share across everything. It's bad enough to have data disappear entirely or deleted data reappear, but then when data shows up in the completely wrong place, and this has been happening for years and years and still isn't fixed... I don't know what to think.

You're right. There's no other word for it but "insane". They can engineer their A-series and M-series microchips, but it's been over a decade now and their sync is still fundamentally broken.


> There's no other word for it but "insane". They can engineer their A-series and M-series microchips

There are certainly other words for it. Lazy, anticompetitive, uninterested: any of those is more plausible than all of Apple being insane. They sold you a microchip that you knew you wanted; now they are beholden to little else. For over a decade, Apple didn't even offer the iOS APIs for third parties to implement cloud storage. They know you need their software services, regardless of how shit they are.

Insanity would be a pretty satisfying explanation. Fickleness fits a lot better with Apple's track record though.


Apple's hardware is top class, but the software has always been lacking. The only time I've seen both in perfect synergy was when the iPod was released (and even then there was iTunes). Not even the iPhone reveal had that.


No ECC, no replaceable parts, software updates stop in a few years.

Apple's hardware is absolutely trash for anything that needs reliability. For shiny disposable entertainment use, it can be great.


Apple stole my entire music library. I have had one library going back to the first release of iTunes on Windows (2003?) — thousands of songs, most of them CD rips.

I then subscribed to Apple Music and relied on its matching function. After switching from an Intel Mac to an M2 and redownloading my library from remote, it now believes that each and every song in my library is a rented Apple Music copy, even those it shows as having been added in 2003.

Some songs are missing; some go missing, then inexplicably come back months later. Worse: so far I have found around a dozen which have been replaced by different versions.

It's a real mess.


The thing is, in many cases, these products and teams are very siloed from each other. I suspect, having worked in one of these teams, that some of the issues come from this siloing. Lessons learned aren't shared, and it can be difficult to build integrations.


A few more examples:

* prompts in settings for adding an account recovery contact that never go away, even after months and months of successfully setting it up multiple times.

* OS account profile picture can barely stay associated with the most recently picked option. Happens for non-iCloud local accounts on Mac, happens when I change profile pictures on iOS for iCloud… weird.

* OS account update screens on iPad, iOS, and watchOS will forget that they are in the middle of updating if you navigate away from the settings screen. Thankfully, today they at least recover from it (it’s probably still happening in the background), but it takes several long seconds of spinning for the settings page to remember that it was doing an update two seconds ago before I navigated away from it.

* similar to your ghost pictures bug, deleting a large media file from a media player app moves it to recently deleted, but you can sometimes end up in situations where you can’t permanently delete the file, or it doesn’t show up anywhere but still takes up space. (Talking about 20GB-80GB file sizes where it makes a big difference on OS storage space)

Some of these bugs have been around for a VERY long time.

But the weird thing is I don’t see them in 3rd party apps.


Finder being unable to show file information and instead showing something that looks like file information but is completely wrong is scary and sad. And it has been like this for months. Like here, none of the data in the General part matches the actual file. More info is correct, Preview is correct, so I know I did in fact click the correct file. http://bn5i3r.s3.amazonaws.com/Screenshot-2025-09-17-at-17-0... Happens randomly so I don't know how to report this (and from experience I know that reporting bugs to Apple is completely useless).


It's probably a good thing, in a way, if someone learns this lesson in a less painful way. You need to manage your own files, backups, and content. Data portability is the opposite of what they want, and they try to abstract it further away until people don't even know what a file or folder is. It's so easy, you don't even have to lift a finger, until you decide you want or need to leave. That's when you realize it's sometimes impossible to take it with you.


Yeah, I spent a night writing some Python to disentangle my un-amused father's music collection when he stopped using iTunes. What a mess.


iTunes randomly changing album artwork happened to me too. Only thing that fixed it was wiping the iPhone and resyncing with computer.


The clipboard is no longer reliable.

Not sure when exactly that changed, but it was probably a few OS releases ago?


The clipboard has been unstable on every OS (especially on desktop, and I mean Linux, Windows, and Mac), and I think part of the culprit is apps like Teams and Discord: if you Ctrl+C by mistake on an empty text box, IT COPIES THE EMPTY TEXT BOX, effectively wiping your clipboard. It's the most irritating UX and it took me years to figure out. Always right-click copy and right-click paste, and you'll notice it works 100% of the time as it used to.


Clipboard managers help a lot there.

I just use KDE's default one, Klipper, and I raise the max entry number.

If something bad replaces your copy, you can get the good one back from the history.

There are nice features like QR code generation for your copied text if you want to quickly share something with someone else's phone as well.


That’s a really interesting solution to copy-pasting between devices, which is one of the features Apple has that I rely on a tonne, even though it’s horribly unreliable. I wonder if anyone has a similarly clever way to copy from mobile to your computer.


KDE Connect provides some tight integration, including (optional) clipboard sharing, media player controls, features to do presentations, mouse and keyboard.

I use it as a remote control to adjust volume during movies from the phone to the computer playing them for instance.

https://kdeconnect.kde.org/


Copying empty text is a configurable flag in some linux environments, at least, but I'm not sure if that behavior is faithfully preserved in teams / discord / etc as I've never really had it on.


On Linux you can just select the text and simply paste it using middle click. It works everywhere on Xorg, on some environments on Wayland. And it will only copy what you selected... everytime.


I’ve right-click copied out of ChatGPT on Firefox and had the contents not end up on my clipboard. It's not reliable.


That's ChatGPT doing OTT clipboard wrapping and breaking web standards in a way Chrome accepts but Firefox doesn't, last I looked.


Could be because of shared clipboard between devices?


I share the author's sentiment. I hate these things.

True story: trying to reverse engineer macOS Photos.app sqlite database format to extract human-readable location data from an image.

I eventually figured it out, but it was:

A base64-encoded binary plist with one field containing a protobuf, which contained another protobuf, which contained a Unicode string with improperly encoded data (for example, U+2013 EN DASH was stored as the literal octal escapes \342\200\223).

This could have been a simple JSON string.
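For the curious, here's a sketch of what the final unmangling step looks like in Python, assuming the octal escapes appear literally in the extracted string (as they did for me):

```python
# The innermost string contained literal backslash-octal escapes,
# e.g. "\342\200\223" for the UTF-8 bytes of U+2013 EN DASH.
raw = r"Foo \342\200\223 Bar"

# unicode_escape turns \342 \200 \223 into code points 0xE2 0x80 0x93;
# round-tripping through latin-1 converts those back into raw bytes,
# which then decode as the UTF-8 they always were.
fixed = (raw.encode("ascii")
            .decode("unicode_escape")
            .encode("latin-1")
            .decode("utf-8"))

print(fixed)  # Foo – Bar (with a real en dash)
```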


> This could have been a simple JSON string.

There's nothing "simple" about parsing JSON as a serialization format.


Having attempted writing a JSON parser from scratch and a protobuf parser from scratch and only completing one of them, I disagree.


Except that most often you can just look at it and figure it out.


Sure you can look at it[1], but you're not expected to look at Apple Photos database. The computer is.

Write a correct JSON parser, compare with protobuf on various metrics, and then we can talk.

[1]: Although, to be fair, I am older than kids whose first programming language was JavaScript, so I do not find the JSON object format the most natural thing: property names in quotes, integers that need to be wrapped as strings to be safe, no comma allowed after the last entry (to be fair, that last one is a problem when writing JSON, not reading it).
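The integers-as-strings point comes from JavaScript's 53-bit safe-integer limit, not from JSON itself; a quick illustration of my own:

```python
import json

big = 9007199254740993  # 2**53 + 1, beyond JS Number.MAX_SAFE_INTEGER
# JSON carries the digits fine, and Python round-trips them exactly:
assert json.loads(json.dumps(big)) == big

# But a JavaScript consumer parsing the same document would read
# 9007199254740992, silently off by one. Hence the convention of
# shipping 64-bit IDs as strings:
safe = json.dumps({"id": str(big)})
assert json.loads(safe)["id"] == "9007199254740993"
```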


I'm also "older" but I don't think that means anything.

> Sure you can look at it[1], but you're not expected to look at Apple Photos database.

How else are you supposed to figure it out? If you're older, then you know that you can't rely on the existence or correctness of documentation. Being able to look at JSON on the wire and understand it as a human is a huge advantage. JSON being pretty simple in structure is an advantage. I don't see a problem with quoting property names! As for large integers and datetimes, yes, those could be much better designed. But that's true of every protocol and file format that has had any success.

JSON parsers and writers are common and plentiful and are far less crazy than any complete XML parser/writer library.


> Being able to look at JSON and understand it as a human on the wire is huge advantage

I don’t think this is a given at all. Depends on the context. I think it’s often overvalued. A lot of times the performance matters more. If human readability was the only thing that mattered, I would still not count JSON as the winner. You will have to pipe it to jq, realistically. You’d do the same for any other serialization format too. Inside Google where proto is prevalent, that is just as easy if not more convenient.

The point is that how hard or easy it is for an app's end user to decipher its file database is not a design goal for the serialization library chosen by the Apple Photos developers here. The constraints and requirements are all on different axes.


Sure, but unless you want to embed an LLM in every JSON library, computers can't do that.



I mean... you can nest-encode stuff in any serial format. You're not describing a problem either intrinsic or unique to Protobuf, you're just seeing the development org chart manifested into a data structure.


Good points. This wasn't entirely a protobuf-specific issue, so much as a (likely hierarchical and historical) set of bad decisions to use it at all.

Using protobufs for a few KB of metadata, when the photo library otherwise takes multiple GB, is just penny-wise, pound-foolish.

Of course, even my preference for a simple JSON string would be problematic: data in a database really should be stored properly normalized to a separate table and fields.

My guess is that protobuffers did play a role here in causing this poor design. I imagine this scenario:

- Photos.app wants to look up location data

- the server returns structured data in a ProtoBuffer

- there's no easy or reasonable way to map a protobuf to database fields (one point of TFA)

- Surrender! Just store the binary blob in SQLite and let the next poor sod deal with it


You have to take into account the fact that the iPhoto app has had many iterations. The binary plist stuff is very likely the native NSArchiver "object archiving (serialization)" done by Obj-C libraries. They probably started using protobuf at some point later, after iCloud. I suspect the Unicode crap you are facing may even predate Cocoaization of the app (they probably used the Carbon API).

So it would make it a set of historical decisions, but I am not convinced they are necessarily bad decisions given the constraints. Each layer is likely responsible for handling edge cases in the application that you and I are not privy to.


The JSON version would have also had the wrong encoding: all formats are just framing for data fed in from code written by a human. In the Mac's case, en/em dashes will always be an issue because that's just what the Mac decided on intentionally.


That's horrendous. For some reason I imagine Apple's software to be much cleaner, but I guess that's just the marketing getting to my head. Under the hood it's still the same spaghetti.


Yeah, the problem is Apple and all the other contemporary tech companies have engineers bounce around between them all the time, and they take their habits with them.

At some point there becomes a critical mass of xooglers in an org, and when a new use case happens no one bothers to ask “how is serialization typically done in Apple frameworks”, they just go with what they know. And then you get protobuf serialization inside a plist. (A plist being the vanilla “normal” serialization format at Apple. Protobuf inside a plist is a sign that somebody was shoehorning what they’re comfortable with into the code.)


If that's any consolation, in the current version's schema they are just plain ZLATITUDE FLOAT, ZLONGITUDE FLOAT in the ZASSET table.
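A toy demonstration of what reading those plain columns would look like; the in-memory table below is a stand-in for the real Photos.sqlite (column names as reported above, everything else invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Stand-in for the real ZASSET table; real Photos.sqlite has many
# more columns, and its schema varies across Photos versions.
db.execute(
    "CREATE TABLE ZASSET "
    "(Z_PK INTEGER PRIMARY KEY, ZLATITUDE FLOAT, ZLONGITUDE FLOAT)"
)
db.execute(
    "INSERT INTO ZASSET (ZLATITUDE, ZLONGITUDE) VALUES (37.33, -122.01)"
)
lat, lon = db.execute(
    "SELECT ZLATITUDE, ZLONGITUDE FROM ZASSET"
).fetchone()
print(lat, lon)  # 37.33 -122.01
```

No base64, no plists, no nested protobufs: just a SELECT.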

