nehalem's comments | Hacker News

Whenever I see another supposedly menial device including enough general purpose hardware to run Doom, I wonder whether I should think of that as a triumph of software over hardware or an economic failure to build cheaper purpose-built hardware for things like sending audio over a radio.

> Whenever I see another supposedly menial device including enough general purpose hardware

The PineBuds are designed and sold as an open firmware platform to allow software experimentation, so there’s nothing bad nor any economic failures going on here. Having a powerful general purpose microcontroller to experiment with is a design goal of the product.

That said, ANC Bluetooth earbuds are not menial products. Doing ANC properly is very complicated. It’s much harder than taking the input from a microphone, inverting the signal, and feeding it into the output. There’s a lot of computation that needs to be done continuously.
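To make the "continuous computation" point concrete, here is a toy LMS adaptive filter in Python/NumPy. It's a heavily simplified sketch, not what earbud firmware actually runs: real ANC (e.g. FxLMS) also has to model the speaker-to-ear path and finish each update within microseconds, and every signal and parameter below is invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 16000
    n = fs * 2                                   # two seconds of audio
    noise = rng.standard_normal(n)               # reference mic: ambient noise
    path = np.array([0.6, 0.3, 0.1])             # pretend acoustic path to the ear
    d = np.convolve(noise, path)[:n]             # what actually reaches the ear

    taps, mu = 32, 0.01                          # filter length, step size
    w = np.zeros(taps)                           # adaptive filter weights
    x_buf = np.zeros(taps)
    residual = np.zeros(n)
    for i in range(n):                           # this loop runs for every single sample
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = noise[i]
        y = w @ x_buf                            # anti-noise estimate
        e = d[i] - y                             # what the error mic would hear
        w += 2 * mu * e * x_buf                  # LMS weight update
        residual[i] = e

    print("noise power before:", np.mean(d ** 2))
    print("noise power after: ", np.mean(residual ** 2))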

Using a powerful microcontroller isn’t a failure, it’s a benefit of having advanced semiconductor processes. Basically anything small and power efficient on a modern process will have no problem running at tens of MHz speeds. You want modern processes for the battery efficiency and you get speed as a bonus.

The speed isn't wasted, either. Higher clock speeds mean lower latency. In a battery powered device, having an MCU running at 48MHz may seem excessive until you realize that the faster it finishes every unit of work, the sooner it can go to sleep. It's not always about raw power.
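A rough illustration of that "race to sleep" effect, with made-up but ballpark numbers for a small Cortex-M class part; the win comes from the fixed overheads (leakage, regulator, peripherals) running for less time at higher clocks:

    # all numbers are invented but ballpark for a small Cortex-M class MCU
    cycles_per_block = 96_000     # work per 10 ms audio block
    period = 0.010                # seconds between blocks

    def avg_power(f_hz, p_fixed=0.5e-3, p_per_mhz=40e-6, p_sleep=5e-6):
        # active power = fixed overhead (leakage, regulator, peripherals)
        #              + a dynamic part roughly proportional to clock
        t_active = cycles_per_block / f_hz
        p_active = p_fixed + p_per_mhz * (f_hz / 1e6)
        t_sleep = period - t_active
        return (p_active * t_active + p_sleep * t_sleep) / period

    for f in (48e6, 96e6, 192e6):
        print(f"{f/1e6:5.0f} MHz -> {avg_power(f)*1e6:6.1f} uW average")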

Modern earbuds are complicated. Having a general purpose MCU to allow software updates is much better than trying to get the entire wireless stack, noise cancellation, and everything else completely perfect before spinning out a custom ASIC.

We're very fortunate to have all of this at our disposal. The grumbling about putting powerful microcontrollers into small things ignores the reality of how hard it is to make a bug-free custom ASIC and break even on it relative to spending $0.10 per unit on a proven microcontroller from a manufacturer operating at scale.


The other aspect to consider is changing requirements. Maybe a device capable of transmitting PSTN-level audio quality wirelessly would have been popular twenty years ago, but nowadays most people wouldn't settle for anything less than a 44.1 kHz sample rate. A faster processor means that there's some room for software upgrades later on, future-proofing the device and potentially reducing electronic waste. Unfortunately, that advantage is almost always squandered in practice by planned obsolescence and an industry obsession with locked-down, proprietary firmware.

> Doing ANC properly is very complicated. It’s much harder than taking the input from a microphone, inverting the signal, and feeding it into the output. There’s a lot of computation that needs to be done continuously.

Neat, any recommended reading on the topic?


[flagged]


These accusations of someone using ChatGPT are cheap mindless attacks based on nothing more than the fact that someone has put together a good argument and used good formatting.

If that's all your evidence is, don't you dare go near any scientific papers.


Yeah, because there’s plenty of ChatGPT going on in academia too :P

Heh - point taken.

But it is important to note that a lot of what people decry as "AI Generated" is really the fact that someone is adhering to what have been best practices in publishing arguments for some time.


Or a third option: an economic success, in that economies of scale have made massively capable hardware the cheapest option for many applications, despite being overkill.

Also see: USB 3+ e-marker chips. I'm still waiting for a Doom port on those.

Or the fourth option, an environmental disaster all around

The materials that go into a chip are nothing. The process of making the chip is roughly the same no matter how powerful it is. So having one chip that can satisfy a large range of customers' needs is so much better than wasting development time making a custom, just-good-enough chip for each.

> The materials that go into a chip are nothing.

They really aren't. Every material that goes into every chip needs to be sourced from various mines around the world, shipped to factories to be assembled, then the end goods need to be shipped again around the world to be sold or directly dumped.

High power, low power, it all has negative environmental impact.


That doesn't contradict the point, though. The negative impact on the environment is not reduced by making a less powerful chip.

No, hence "all around."

Which materials are they and how would you suggest doing it with fewer materials?

Ultra-pure water production itself is responsible for untold amounts of hydrofluoric acid and ammonia, most etching processes have an F-gas involved, and most plants that do this work have tremendously high energy (power) costs due to stability/HVAC needs.

it's not 'just sand'.


How would you suggest doing it with fewer materials?

The claim was that "the materials that go into a chip are nothing". Arguing that that is not the case does not really put someone on the hook to explain or even have any clue how to do it better.

In theory, graphene based semiconductors would eliminate a lot of need for shipping and mining.

Maybe. They have the potential for faster semiconductors, but only after adequate modifications. Graphene isn't a semiconductor, and it isn't obvious that we'll find a way to fix that without (or even with) rare resources.

Cease production.

Why are you on a technology site?

I'm not sure why you're asking this or what you're insinuating. The site is called Hacker News; it should be open to anarcho- and eco-hackers too. Not all of us believe in infinite growth.

Do you want to expand on why you're on this site?

I've been here for more than 15 years and I'm not the person I was when I signed up or when I went through life in a startup.


It’s the opposite. Using an off the shelf MCU is much more efficient than trying to spin your own ASIC.

Doing the work in software allows for updates and bug fixes, which are more likely to prevent piles of hardware from going into the landfill (in some cases before they even reach customers’ hands).


You have to use a chip, and no matter what kind of chip you're paying for the same raw resources.

I don't think you have an actual argument here, you just want to be indignant and contrarian.


What do you mean? Earbuds are environmental disasters?

Nobody cares, unless they’re commenting for an easy win on internet message boards.

You should see it as the triumph of chip manufacturing — advanced, powerful MCUs have become so cheap thanks to manufacturing capabilities and economies of scale that it is now cheaper to use a mass-manufactured general purpose device, even if it takes more material to manufacture than a simpler bespoke device that would be produced at low volumes.

You might be wondering "how on earth a more advanced chip can end up being cheaper." Well, it may surprise you but not all cost in manufacturing is material cost. If you have to design a bespoke chip for your earbuds, you need to now hire chip designers, you need to go through the whole design and testing process, you need to get someone to make your bespoke chip in smaller quantities which may easily end up more expensive than the more powerful mass manufactured chips, you will need to teach your programmers how to program on your new chip, and so on. The material savings (which are questionable — are you sure you can make your bespoke chip more efficiently than the mass manufactured ones?) are easily outweighed by business costs in other parts of the manufacturing process.


> CPU: Dual-core 300MHz ARM Cortex-M4F

It's an absolutely bonkers amount of hardware scaling that has happened since Doom was released. Yes, this is tremendous overkill here, but the crazy part is that it fits into an earpiece.


This is the "little part" of what fits into an earpiece. Each of those cores is maybe 0.04 square millimeters of die on e.g. 28nm process. RAM takes some area, but that's dwarfed by the analog and power components and packaging. The marginal cost of the gates making up the processors is effectively zero.

So 1 mm² peppered with those cores at 300 MHz will give you 4 TFLOPS. And a whole 200 mm wafer: 100 PFLOPS, like 10 B200s, and at less than $3K/wafer. Giving half the area to memory, we'll get 50 PFLOPS with 300Gb RAM. Power draw is like 10-20 kW. So, given these numbers, I'd guess Cerebras has tremendous margin and is just printing money :)

Yes, assuming you don't need to connect anything together and that RAM is tinier than it really is, sure. At 28 nm, 3 megabits per square millimeter is what you get of SRAM, so an entire wafer only gets you ~12 gigabytes of memory.
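For the curious, the back-of-the-envelope version of that (using the same assumed 3 Mbit/mm² figure and ignoring yield, edge dies, and everything analog):

    import math

    wafer_area = math.pi * (200 / 2) ** 2        # 200 mm wafer, ~31,400 mm^2
    bits = wafer_area * 3e6                      # assumed 3 Mbit of SRAM per mm^2 at 28 nm
    print(f"~{bits / 8 / 1e9:.1f} GB of SRAM on a full wafer")   # ~11.8 GB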

And, of course, most of Cerebras' costs are NRE and the stuff like getting heat out of that wafer and power in.


Why not ddram?

Same reason why Cerebras doesn't use DRAM. The whole point of putting memory close is to increase performance and bandwidth, and DRAM fundamentally has higher latency.

Also, process that is good at making logic isn't necessarily good for making DRAM. Yes, eDRAM exists, but most designs don't put DRAM on the same die as logic and instead stack it or put it off-chip.

Almost all of these single-die microcontrollers have flash + SRAM, and almost all microprocessor cache designs are SRAM (with some designs using off-die L3 DRAM) for these reasons.


CPU cache is understandably SRAM.

>The whole point of putting memory close is to increase performance and bandwidth, and DRAM fundamentally has higher latency.

When the access patterns are well established and understood, like in the case of transformers, you can mitigate latency by prefetching (we could even have a very beefed-up prefetch pipeline knowing that we target transformers), while putting memory on the same chip gives you a huge number of data lines, resulting in huge bandwidth.


With embedded SRAM close, you get startling amounts of bandwidth -- Cerebras claims to attain >2 bytes/FLOP in practice -- vs the H200 attaining more like 0.001-0.002 to its external DRAM. So we're talking about a three-order-of-magnitude difference.

Would it be a little better with on-wafer distributed DRAM and sophisticated prefetch? Sure, but it wouldn't match SRAM, and you'd end up with a lot more interconnect and associated logic. And, of course, there's no clear path to run on a leading logic process and embed DRAM cells.

In turn, you batch for inference on H200, where Cerebras can get full performance with very small batch sizes.
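For anyone who wants the arithmetic behind that three-orders-of-magnitude gap, a rough sketch with approximate, assumed H200 numbers (the exact ratio depends on precision, sparsity, and what you count as a FLOP):

    # approximate/assumed specs; the exact ratio depends on precision, sparsity,
    # and what you count as a FLOP
    h200_bw_bytes_s = 4.8e12          # HBM bandwidth, approx.
    h200_flops      = 4.0e15          # FP8 with sparsity, approx.
    h200_ratio = h200_bw_bytes_s / h200_flops
    print("H200:", h200_ratio, "bytes/FLOP")                 # ~0.001

    cerebras_claim = 2.0                                     # >2 bytes/FLOP, per the claim above
    print("difference:", cerebras_claim / h200_ratio, "x")   # ~1700x, i.e. roughly 3 orders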


I remember playing Doom on a single-core 25MHz 486 laptop. It was, at the time, an amazing machine, hundreds of times more powerful than the flight computer that ran the Apollo space capsule, and now it is outclassed by an earbud.

Can we finally end this Apollo computer comparison forever? It was a real-time computer NOT designed for speed but for real-time operations.

Why don't you compare it to, let's say, a PDP-11, a VAX-11/780, or a Cray-1 supercomputer?

NASA used a lot of supercomputers here on Earth prior to mission start.


> It was a real-time computer NOT designed for speed but for real-time operations.

More than anything, it was designed to be small and use little power.

But these little ARM Cortex-M4Fs that we're comparing to are also designed for embedded, possibly hard-real-time operations. And the dominant factors in the playback experience through earbuds are response time and jitter.

If the AGC could get a capsule to the moon doing hard real-time tasks (and spilling low priority tasks as necessary), a single STM32F405 with a Cortex M4F could do it better.

Actually, my team is going to fly an STM32F030 for minimal power management tasks-- but still hard real-time-- on a small satellite. Cortex-M0. It fits in 25 milliwatts vs 55W. We're clocked slow, but still exceed the throughput of the AGC by ~200-300x. Funnily enough, the amount of RAM is about the same as the AGC :D It's 70 cents in quantity, but we have to pay three whole dollars at quantity 1.

> NASA used a lot of supercomputers here on Earth prior to mission start.

Fine, let's compare to the CDC 6600, the fastest computer of the late '60s. An M4F @ 300MHz is a couple hundred single precision megaflops; the CDC 6600 was like 3 not-quite-double-precision megaflops. The hacky "double single precision" techniques have comparable precision-- figure that is probably about 10x slower on average, so each M4F could do about 20 CDC-6600-equivalent megaflops, or is roughly 5-10x faster. The amount of RAM is about the same on this earbud.

His 486-25 -- if a DX model with the FPU -- was probably roughly twice as fast as the 6600 and probably had 4x the RAM, and used 2 orders of magnitude less power and massed 3 orders of magnitude less.

Control flow, integer math, etc., would be much faster than that.

Just a few more pennies gets you a microcontroller with a double precision FPU, like a Cortex-M7 with the FPv5-D16, which at 300MHz is good for maybe 60 double precision megaflops-- compared to the 6600, 20x faster and more precision.


I have thought about this a little more, and looked into things. Since NASA used the 360/91, and had a lot of 360s and 7090s... all of NASA's '60s computing couldn't quite fit into a single 486DX-25. You'd be more like 486DX4-100 era to replace everything comfortably, and you'd want a lot of RAM-- like 16MB.

It looks like NASA had 5 360/75's plus a 360/91 by the end, plus a few other computers.

The biggest 360/75's (I don't know that NASA had the highest spec model for all 5) were probably roughly 1/10th of a 486-100 plus 1 megabyte of RAM. The 360/91 that they had at the end was maybe 1/3rd of a 486-100 plus up to 6 megabytes of RAM.

Those computers alone would be about 85% of a 486-100. Everything else was comparatively small. And, of course, you need to include the benefits from getting results on individual jobs much faster, even if sustained max throughput is about the same. So all of NASA, by the late 60's, probably fits into one relatively large 486DX4-100.

Incidentally, one random bit of my family lore; my dad was an IBM man and knew a lot about 360's and OS/360. He received a call one evening from NASA during Apollo 13 asking for advice about how they could get a little bit more out of their machines. My mom was miffed about dinner being interrupted until she understood why :D


What's your project/ cubesat name?

PS: Try the MSP430 F models for low power. These can be CRAZY efficient.

PS: Don't forget to wire the solar panel directly to the system: then your satellite might still talk even 50 years from now, like some HAM satellites from the Cold War (OSCAR 7, I think).


> What's your project/ cubesat name?

NyanSat; I'm PI and mentor for a team of high school students that were selected by NASA CSLI.

> PS: Try the MSP430 F models for low power. These can be CRAZY efficient.

Yah, I've used the MSP430 in space. The STM32F0 fits what we're using it for. We designed the main flight computer ourselves, and it's an RP2350 with MRAM. Some of the avionics details are here: https://github.com/OakwoodEngineering/ObiWanKomputer

> PS: Don't forget to wire the solar panel directly to the system: then your satellite might still talk even 50 years from now, like some HAM satellites from the Cold War (OSCAR 7, I think).

Current ITU guidelines make it clear this is something we're not supposed to do to ensure that we can actually end transmissions by the satellite. We'll re-enter/burn up within


And perhaps more fittingly, that PC couldn't decode and play an MP3 in real time.

And by an order of magnitude or more, too!

Yes but also Doom is very very old.

I bought a Kodak camera in 2000 (640x480 resolution) and even that could run Doom. Way back when. Actually playable, with sound and everything.

Here's an even older one running it: https://m.youtube.com/watch?v=k-AnvqiKzjY


Incredible to see people try to spin the wild successes of market based economies as an economic failure.

Hardware is cheap and small enough that we can run doom on an earbud, and I’m supposed to think this is a bad thing?


I can sort of see one angle for it, and the parent story kind of supports it. Bad software is a forcing function for good hardware - the worse that software has gotten in the past few decades, the better hardware has had to get to support it. So if you actually try, like OP did, you can do some pretty crazy things on tiny hardware these days. Imagine what we could do on computers if they weren't so bottlenecked doing things they don't need to do.

That wasn't the GP's claim. Their implication was that it's an economic failure that we don't produce less powerful hardware.

Yeah, that's more or less what I'm getting at.

It's already very cheap to build though. We are able to pack a ton of processing into a tiny form factor for little money (comparatively, ignoring end-consumer margins etc.).

An earbud that does ANC, supports multiple different audio standards including low-battery standby, is somewhat resistant to interference, and can send and receive over many meters. That's awesome for the price. That it has enough processing to run a 33-year-old game... well, that's just technological progression.

A single modern smartphone has more compute than all global compute of 1980 combined.


I need that in lunar-lander exponents

(imagine the lunar lander computer being an earbud ha)


Well, a current smartphone would be about 10^8 times faster/more than the lunar lander.

A single Airpod would be about 10^4 times as powerful as the entire lunar lander guidance system.

Or to put another way: a single Airpod would outcompute the entire Soviet Union's space program.


Earbuds often have features like mic beam forming and noise cancellation which require a substantial degree of processing power. It's hardly unjustified compared to your Teams instance making fans spin or Home Assistant bringing down an RPi to its knees.

These sorts of things feel like they would be quite inefficient on a general-purpose CPU so you would want to do them on some sort of dedicated DSP hardware instead. So I would expect an earbud to use some sort of specialized microcontroller with a slow-ish CPU core but extra peripherals to do all the signal processing and bluetooth-related stuff.
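For a flavour of the per-sample work involved, here's a toy delay-and-sum beamformer in NumPy (geometry and signals are invented; a real earbud would run something like this, plus filtering, for every sample on a DSP block or with the MCU's DSP instructions):

    import numpy as np

    fs = 48000
    c = 343.0                       # speed of sound, m/s
    spacing = 0.04                  # 4 cm between two mics (invented geometry)
    angle = np.deg2rad(45)          # assumed direction of the talker

    delay = int(round(spacing * np.sin(angle) / c * fs))    # ~4 samples here

    rng = np.random.default_rng(1)
    mic1 = rng.standard_normal(fs)                           # pretend recordings
    mic2 = np.roll(mic1, delay) + 0.3 * rng.standard_normal(fs)

    # align and average: sound from `angle` adds coherently, diffuse noise doesn't
    output = 0.5 * (mic1 + np.roll(mic2, -delay))
    print("per-sample work scales with mic count and filter length")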

No doubt; maybe I should have emphasised the "general" part of "general purpose" more. Not a hardware person myself, I wonder whether there would be purpose-built hardware that could do the same more cheaply – think F(P)GA.

> I wonder whether there would be purpose-built hardware that could do the same more cheaply – think F(P)GA.

FPGAs are not cost efficient at all for something like this.

MCUs are so cheap that you’d never get to a cheaper solution by building out a team to iterate on custom hardware until it was bug free and ready to scale. You’d basically be reinventing the MCU that can be bought for $0.10, but with tens of millions of dollars of engineering and without economies of scale that the MCU companies have.


> I wonder whether there would be purpose-built hardware that could do the same more cheaply

Where are you imagining cost savings coming from? Custom anything is almost always vastly more expensive than using a standardised product.


> economic failure to build cheaper purpose-built hardware for things like sending audio over a radio.

You're literally just wasting sand. We've perfected the process to the point where it's inexpensive to produce tiny and cheap chips that pack more power than a 386 computer. It makes little difference if it's 1,000 transistors or 1,000,000. It gets more complicated on the cutting edge, but this ain't it. These chips are probably 90 nm or 40 nm, a technology that's two decades old, and it's basically the off-ramp for older-generation chip fabs that can no longer crank out cutting-edge CPUs or GPUs.

Building specialized hardware for stuff like that costs a lot more than writing software that uses just the portions you need. It requires deeper expertise, testing is more expensive and slower, etc.


A wireless earbud is closer in complexity to a WiFi router than a digital wristwatch.

Bluetooth is complicated. Noise canceling is complicated. Audio compression is complicated. Simply being an RF device is complicated.

It is an unfortunate physical reality that it requires a lot of processing to do all the jobs a Bluetooth earbud has to do. The incredible engineering success is that we can put a GHz-class CPU in each earbud and all of that crazy processing happens on milliwatts of power.

Putting supercomputers in your ears is mildly absurd on the face of it, but consider that we now have supercomputers that are so small, cheap, and energy efficient that we can put them and their batteries in our ears.

Besides, what's more wasteful, one silicon die or two? Is a Cortex CPU more wasteful than a 555 timer on equivalent die space? Is it more resource-efficient to pay 10x more for a 2x larger die using 40x the power and a bigger battery to go with it? Or is it most efficient to use the smallest, most efficient die and the smallest battery you can get away with?

In the grand scheme of things, the "wasted" resources in the chip are essentially nil. You save far, far more resources by using more efficient processing. It's a few milligrams of silicon, carbon, and minerals. You should be far, far more concerned about the lithium batteries ending up in landfills.


Neither - it's a triumph of our ability to do increasingly complex things in both software and hardware. An earbud should be able to make good use of the extra computing capacity, whether it is to run more sophisticated compression saving bandwidth, or for features like more sophisticated noise cancelling/microphone isolation algorithms. There are really very few devices that shouldn't be able to be better given more (free) compute.

It's also a triumph of the previous generation of programmers to be able to make interesting games that took so little compute.


Plus there's actually less waste, I would imagine, in using a generic, very efficiently mass-produced, but way overkill part vs. a one-off or very specific, rare but perfectly matched part.

There are enough atoms in that earbud to replace all of the world's computers.

We've got a long way to go.


If it can run Doom it can run malware.

I imagine it’s far more economical to have one foundry that can make a general purpose chip that’s overpowered for 95% of uses than to try to make a ton of different chips. It speaks to how a lot of the actual cost is the manufacturing and R&D.

The only real problem I could see is if the general purpose microcontroller is significantly more power-hungry than a specialized chip, impacting the battery life of the earbuds.

On every other axis, though, it's likely a very clear win: reusable chips means cheaper units, which often translates into real resource savings (in the extreme case, it may save an entire additional factory for the custom chips, saving untold energy and effort.)


I will never understand people who treat MHz like a rationed resource.

I think it's just indicative of the fact that general purpose hardware has more applications, and can thus be mass produced for cheaper at a greater scale and used for more applications.

Marginal cost of a small microprocessor in an ASIC is nothing.

The RAM costs a little bit, but if you want to do firmware updates in a friendly way, etc., you need some RAM to stage the updates.


Less computing power is not necessarily cheaper.

It's intuitive to think of wasted compute capacity as correlating with a waste of material resources. Is this really the case though?

Waste is subjective or, at best, hard to define. It's the classic "get rid of all the humans and nothing would be wasted" aphorism.

Ah yes the "good old days when we wrote assembly" perspective.

Like, I get it, but embedded device firmware is still efficient af. We end up stuffing a lot of power into these things because, contrary to, say, wired Walkman headphones, these have noise cancellation, speech detection for audio ducking when you start having a conversation, support taking calls, support wakewords for assistants, etc.


If you look at the bottom of the page, it's an advertisement by someone looking for a job to show off his technical skill.

Okay? Is that good or bad or what?

It's good. I'm just countering the comment above and its argument that this is a sign that people do useless things.

Got it

Answering the question of how to sell more tokens per customer while maintaining ~~mediocre~~ breakthrough results.

Delegation patterns like swarm lead to lower token usage because:

1. Subagents doing work have a fresh context (i.e. focused, not working on top of a larger monolithic context).

2. Subagents enjoying a more compact context leads to better reasoning, more effective problem solving, and fewer tokens burned.
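A hypothetical sketch of the difference (call_llm is just a stand-in, not any real API; the point is only what ends up in each context):

    # `call_llm` is a placeholder for whatever model API is in use
    def call_llm(messages):
        ...

    def lead_does_everything(history, task):
        # monolithic: every task is answered on top of the full accumulated history
        return call_llm(history + [{"role": "user", "content": task}])

    def delegate_to_subagent(task, context_snippets):
        # delegated: each subagent starts from a fresh, compact context
        prompt = f"Task: {task}\nRelevant context: {context_snippets}"
        return call_llm([{"role": "user", "content": prompt}])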


Merge cost kills this. Does the harness enforce file/ownership boundaries per worker, and run tests before folding changes back into the lead context?

I don't know what you're referring to but I can say with confidence that I see more efficient token usage from a delegated approach, for the reasons I stated, provided that the tasks are correctly sized. ymmv of course :)

Seeing news like this, I wonder whether there is a market for an OSS Android and/or Linux distribution that provides the management comfort of Chromebooks without being tied to Google, Apple or Microsoft. A little like Keycloak but one layer higher.

With all the US/EU issues currently, you might even be able to spin up a company to support European services that need management based on OSS management software.

Ubuntu is pretty strong already in that niche - both with Landscape as a first-party management solution, and as the distro most commonly recommended by the big third-party MDM vendors like Scalefusion and Jumpcloud. Not sure what their mobile story is like, but they certainly cover laptops / desktops.

If Android is not a hard requirement, and maybe even then, Jolla, a Finnish company, has been offering a Linux-based mobile OS for quite some time. I frankly don't get why other EU companies building the hardware, like Fairphone and Volta, don't partner up with them.

> Android

> without being tied to Google

That's a contradiction.


No. I'm on GrapheneOS and not tied to Google.

You must be thinking of the Google Play Services but these aren't required by GrapheneOS.


No, we're thinking about the fact that Android is Google-owned and Google-developed, and no removal of Play Services changes that.

Every Android ROM is critically dependent on Google's work to actually develop and secure the OS.


It is tied to Google inasmuch as all target phones are Google-branded. https://grapheneos.org/faq#supported-devices

Hopefully that can change in the future.


It's tied to Google's development strategy, such as the removal of Manifest V2: https://news.ycombinator.com/item?id=41905368

It could change now, it could have changed years ago, they just have no interest in trying. It's pretty annoying honestly.

As long as it depends on Google paying for the upstream development that GrapheneOS pulls its updates from, it is tied to Google.

Now, if GrapheneOS were its own thing without additional AOSP code updates, that would be different.


This becomes sophistry, though. "tied to" in a way that doesn't matter, doesn't matter.

Ah, but it does, as Google can decide to shut down the AOSP shop at any moment.

If that happens, the world can always try to fork. Until then it seems kind of pointless to do so?

Try is the keyword here.

Hence why these efforts should not rely on US institutions' goodwill in the first place.


Can try to fork? China, Russia, and lots of smaller countries are steadily moving away, and since basic interoperability standards for phone and internet will remain, they can do this. Pressure is also mounting to get a Linux phone fully functional; that will also happen. And in a world where Guggappl is providing genocide and abduction services, billions would happily choose other alternatives.

China and Russia are likewise involved in their own genocides (Uyghurs and Ukraine respectively), and they are just as interested in developing centralised systems of control. They will not give the world a truly free and open platform.

"They will not give the world truly free and open platform", uhuhu!, but we are giving them the pivot point to claim the flag of freedom , rather than just doing that ourselves. also, one more move from you know who, and a whole lot of countrys will have to very seriously start looking for stable deals that last longer than it takes the ink to dry. China just ghosted nvidia, on the "something 200" ai chip to start shipping in march, tsmc and all there suppliers have stood down on that, and will of course, instantly re focus on the next job, which might be a batch of chips for fairphones....

> but we are giving them the pivot point to claim the flag of freedom

Nobody said that, so you're arguing with a strawman.

The OS offering actual freedom is GNU/Linux.


The day they do that, Android will just be a Chinese product and Google will lose control over it.


And I am still sad that they didn't go for an open source hard fork of AOSP. Would have been fun.

Indeed and if Google would pull the plug on AOSP, some initiative like this would become the de facto Android standard.

I love your optimism. What you'll see is a return to the 2000s, where you may have had "Symbian" as the operating system, but the phones weren't compatible between themselves and apps broke and didn't work across manufacturers (or even product lines) because there was no one enforcing compatibility.

I wonder if you forgot that or you're too young to remember what kind of bizarre hell mobile development was at that time.

Heck, even early Android was really hard to develop for because the CTS suite didn't cover enough, and all of us spent hours upon hours (and many dollars) trying to reproduce and fix Samsung, Huawei, HTC, and other vendors' bugs.


I never said it's going to be smoother than it is right now, just that Google will lose control.

8 of the top 10 manufacturers are Chinese, the last two are Samsung (which definitely isn't going to side with Google) ... and Google themselves.

If Google doesn't publish AOSP anymore, Pixels will be the only phone with their software on it, Samsung might attempt something alone and the rest will pick up the development from a Chinese government consortium which will be the de-facto default mobile platform instead of the Google one.


I doubt that people advocating for GrapheneOS would pivot to a Chinese powered platform.

They would have to follow like everybody else; they aren't powerful enough to dictate market trends.

8 out of the top 10 Android manufacturers are Chinese.

Google would just lose the ownership of Android to a Chinese consortium used by everybody else.


Anyone on Graphene is tied to Google, for it requires Google hardware.

The actual disturbing thing is that given Next‘s track record of questionable security architecture, the author felt compelled to make the joke explicit.


There is an element of tragic comedy to those announcements. While remarkable on their own, everybody knows that one cannot use any new browser feature reliably any time soon, due to Apple not shipping continuous updates to the browsers they force upon their users.


iOS from two versions prior doesn't get the latest Safari?

I can't check because my wife's iPhone is, regrettably according to her, "updated to the latest glAss version".


I know one of my clients complained that something didn't work on their few-year-old iPad. So... I don't know what the cutoff is, but clearly not everything updates regularly. He did try updating it manually too, but couldn't.


Safari got a big update last week.


Safari in general got an update, or Safari on only the devices Apple deems worthy? Usually Apple limits Safari updates to new phones.


Do you consider six-year-old phones new? What about seven-year-old Macs?


I think the iPhone X is the newest model that is no longer receiving iOS updates. That came out in 2017, so 8 years ago.


To me personally, it feels like Windows 2000 was the last and maybe only consistent UI onto which all later versions bolted what they considered improvements without ever overhauling the UI in full.


I think Windows XP did a pretty good job for the home market, making Windows appear friendly and easy to use to a wide audience (and without too many style inconsistencies).

Moreover, Windows XP let you switch the interface back to the classic 9x look, if you wanted a more serious appearance, and better performance.


> back to the classic 9x look

If i remember correctly this is the windows 2000 look.


We're both right. Windows XP had two different legacy themes: "Windows Standard" which looked like Windows 2000 and "Windows Classic" which looked like Windows 9x.


Totally agree!

Although I've been a Mac user for a long time, I still remember that I got work done using Windows 2000.

I‘d buy a license and switch back to Windows if we could get the productivity of this UI.

Typing this on iOS with Liquid Glass, which drives me nuts.


Windows 8 was a pretty big overhaul. But I agree with the author it was a most unwelcome overhaul.


Yeah, but many of its 'advanced' settings and such still pop up Windows 95-styled interfaces. And these are actually the most user-friendly parts of the OS.


I think one of the fundamental issues is "...to those raised on computers, rather than smartphones"


I am glad Vercel works on agents now. After all, Next is absolutely perfect and recommends them for greater challenges. /s


From AWS wrapper to OpenAI wrapper


How does it do with multi-column text and headers and footers?


We have trained the model on tables with hierarchical column headers and with rowspan and colspan >1. So it should work fine. This is the reason we predict the table in HTML instead of markdown.
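For anyone wondering why HTML rather than markdown: markdown tables cannot express merged cells at all, while HTML can. A minimal illustration of the kind of structure described above (hierarchical header plus rowspan), shown as a Python string:

    table_html = """
    <table>
      <tr><th rowspan="2">Region</th><th colspan="2">Revenue</th></tr>
      <tr><th>2023</th><th>2024</th></tr>
      <tr><td>EMEA</td><td>1.2</td><td>1.4</td></tr>
      <tr><td>APAC</td><td>0.9</td><td>1.1</td></tr>
    </table>
    """
    print(table_html)   # markdown has no equivalent for the merged header cells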


Thank you. I was rather thinking of magazine-like layouts with columns of text and headers and footers on every page holding the article title and page number.


It should work there also. We have trained on research papers with two columns of text. Generally, papers have references in a footer and contain page numbers.


I wonder what happened to Siri. Not a single mention anywhere?


"Hope to show you more later this year" was like the first thing they said about Apple Intelligence.


Which is the same as what they said last year.


I actually loved Siri when it first came out. It felt magical back then (in a way)


I wonder how this relates to MotherDuck (https://motherduck.com/)? They do "DuckDB-powered data warehousing" but predate this substantially.


MotherDuck is hosting DuckDB in the cloud. DuckLake is a much more open system.

With DuckLake you can build a petabyte-scale warehouse with multiple reader and writer instances, all transactional, on your own S3 and your own EC2 instances.

MotherDuck has limitations like only one writer instance. Read replicas can be a minute behind (not transactional).

Having different instances concurrently writing to different tables is not possible.

DuckLake gives proper separation of compute and storage with a transactional metadata layer.
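For concreteness, a minimal sketch of that compute/storage split using the duckdb Python package. The DuckLake-specific parts (the extension name, the ducklake: ATTACH string, the DATA_PATH option) are written from memory of the announcement post, so treat them as assumptions and check the current docs:

    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL ducklake; LOAD ducklake;")
    # the metadata (transactional layer) lives in one place, the data files in another;
    # DATA_PATH could equally be an S3 prefix. Syntax per the DuckLake announcement;
    # verify against current documentation.
    con.execute("ATTACH 'ducklake:metadata.ducklake' AS lake (DATA_PATH 'lake_files/')")
    con.execute("CREATE TABLE lake.events AS SELECT 1 AS id, 'hello' AS payload")
    print(con.sql("SELECT * FROM lake.events").fetchall())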


Just wondering, does DuckLake utilize Open Table Formats (OTFs)? I don't see it mentioned anywhere on the website.


No. DuckLake implements its own open table format (and the catalog above the table format). It's not utilizing the existing ones; it's an alternate implementation.


For what it's worth, MotherDuck and DuckLake will play together very nicely. You will be able to have your MotherDuck data stored in DuckLake, improving scalability, concurrency, and consistency while also giving access to the underlying data to third-party tools. We've been working on this for the last couple of months, and will share more soon.


I think a way to see it is: MotherDuck is a service to just throw your data at and they will sort it (using DuckDB underneath), and you can use DuckDB to interface with your data. But if you want to be more "lakehouse", or maybe down the line there are more integrations with DuckLake, or you want data to be stored in blob storage, you can use DuckLake with MotherDuck as the metadata store.

