Surely this makes your actors feel sick? And wouldn’t it make your motion blur look dashed and also cause artifacts at the edge of the mask if there’s a lot of motion?
You could strobe at some multiple of the sensor frame rate as long as your strobes are continuous through the integration period of the sensor and the lighting fades very quickly. This probably wouldn't work with incandescents but people strobe LEDs a lot to boost the instantaneous illumination without going past the continuous power rating in the datasheet.
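A quick check of the arithmetic, with assumed numbers (48 fps sensor, 10 pulses per frame): if the strobe rate is an exact integer multiple of the frame rate, every integration window captures the same pulse count, so exposure stays constant across frames and there's no banding.

```python
# Sketch with assumed numbers: count strobe pulses per integration window.
# Pulse k fires at time k / STROBE_HZ; exact Fractions avoid float edge cases.
import math
from fractions import Fraction

FRAME_RATE = 48       # sensor frames per second (assumption)
STROBE_MULT = 10      # strobe pulses per frame (assumption)
STROBE_HZ = FRAME_RATE * STROBE_MULT

def pulses_in_frame(i):
    """Count pulses landing inside integration window i."""
    t0 = Fraction(i, FRAME_RATE)          # window start
    t1 = Fraction(i + 1, FRAME_RATE)      # window end
    first = math.ceil(t0 * STROBE_HZ)     # first pulse index >= t0
    last = math.ceil(t1 * STROBE_HZ) - 1  # last pulse index < t1
    return last - first + 1

counts = {pulses_in_frame(i) for i in range(1000)}
print(counts)  # {10}: every frame sees exactly STROBE_MULT pulses
```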
You mean do strobe, strobe, strobe, strobe, pause, pause, pause, pause? I bet that's at least as bad as holding the source on for the first four intervals and then off for the latter four intervals.
In any case, if you actually have a scene bright for 1/24th of a second and then dark for 1/24th of a second, repeating, you're well within photosensitive epilepsy range. Don't do that to your actors unless you've discussed it with them and with your insurance company first.
Incandescent lights flicker at twice your AC power frequency -- to a decent approximation, their power is proportional to V^2. But this is input power -- the cooling of the filament is slowish and the modulation depth is low. Most people aren't bothered by this.
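The 2x-mains figure falls straight out of P ∝ V²; a minimal check:

```python
# For a resistive filament, instantaneous input power is P(t) ∝ V(t)^2,
# and sin^2 repeats every half mains cycle: a 60 Hz supply ripples at 120 Hz.
# (The filament's thermal mass then smooths this to a shallow light ripple.)
import math

MAINS_HZ = 60.0

def input_power(t):
    return math.sin(2 * math.pi * MAINS_HZ * t) ** 2

half_cycle = 1.0 / (2 * MAINS_HZ)  # 1/120 s
for t in (0.001, 0.003, 0.007):
    assert math.isclose(input_power(t), input_power(t + half_cycle))
```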
Fluorescent lights with old or very crappy "magnetic" ballasts flicker at twice the mains frequency, with deep modulation. The effect on people varies from moderate to extremely unpleasant, and it's extra bad if anything is moving quickly (gyms, etc). There are even studies showing that office workers perform worse under such lighting even if they don't experience personally perceptible symptoms. The effect is so severe that people invented the "electronic ballast", which flickers at much, much higher frequency and avoids low-frequency components. Phew. (The light might still be a nasty color, but the temporal output is okay.)
"Driverless LEDs" are deeply modulated at twice the mains frequency. These are very nasty.
If you actually have a light that flickers at the AC power frequency (certain LED sources in a two-brightness diode-dimmed kitchen appliance fixture will do this, as will driverless LEDs with certain types of failures), then it's extra nasty.
There are plenty of people around who find (depending on the actual waveform) 60Hz flicker intolerable and 120Hz flicker extremely unpleasant. And there are plenty of people who can often perceive flicker under appropriate circumstances up to at least several hundred Hz and even into the low kHz with certain shapes of light sources. You can read up on IEEE 1789 to find a standard based on actual research on what lighting waveforms should look like.
The effect of 120 Hz flicker is bad enough that energy codes in some places (e.g. California) have started to require that LED sources minimize this flicker, but, sadly, it's poorly enforced.
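The metric IEEE 1789 works with for comparing sources like the ones above is modulation depth, a.k.a. "percent flicker": 100 * (max - min) / (max + min). A sketch with illustrative waveforms (the 6% incandescent ripple figure is an assumption purely for the example):

```python
# Percent flicker (modulation depth) of a sampled light-output waveform.
import math

def percent_flicker(samples):
    hi, lo = max(samples), min(samples)
    return 100.0 * (hi - lo) / (hi + lo)

t = [i / 10_000 for i in range(10_000)]  # one second at 10 kHz sampling

# Driverless LED: output tracks rectified mains, essentially 100% modulation.
driverless = [math.sin(2 * math.pi * 60 * ti) ** 2 for ti in t]
# Incandescent: thermal mass smooths the output to a small ripple around a
# mean (6% modulation assumed for illustration).
incandescent = [1.0 + 0.06 * math.cos(2 * math.pi * 120 * ti) for ti in t]

print(round(percent_flicker(driverless)))    # 100
print(round(percent_flicker(incandescent)))  # 6
```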
Feel sick? Possibly. People are more or less sensitive to imperceptible flicker.
Artifacts?
I bet that can be remedied by interpolating a new frame between every mask frame. Plus, when you mix it down to 24fps you can introduce as much motion blur and shutter angle "emulation" as you want.
Motion blur can also be very forgiving. You are more likely to notice artifacts in still or slow-moving scenes, and then the problem goes away.
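A minimal sketch of the mixdown step above, treating each frame as a single brightness value; the 240 fps capture rate and 180-degree emulated shutter are assumptions:

```python
# Mix a high-rate capture down to 24 fps with an emulated shutter angle:
# each output frame averages the first (angle/360) fraction of the capture
# frames in its interval ("shutter open"), discarding the rest.
def mixdown(frames, capture_fps=240, out_fps=24, shutter_angle=180):
    per_out = capture_fps // out_fps                     # capture frames per output frame
    keep = max(1, round(per_out * shutter_angle / 360))  # "shutter open" frames
    out = []
    for i in range(0, len(frames) - per_out + 1, per_out):
        window = frames[i:i + keep]                      # average only the open interval
        out.append(sum(window) / len(window))
    return out

# A flash lasting one capture frame lands in the first output frame's open
# interval and gets blurred down to 1/5 of full brightness (keep = 5).
frames = [0.0] * 240
frames[3] = 1.0
out = mixdown(frames)
print(out[0])  # 0.2
```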
> There's nothing about DoH that makes it complicated to speak it to an authority server.
There’s a problem with HTTPS, though. HTTPS URLs are bound to WebPKI certificate validation, which means you need WebPKI certificates, and issuing those requires DNS. Chicken, meet egg.
Maybe there could be a new URL scheme that doesn’t need WebPKI. It could be spelled like:
https_explicit:[key material]//host.name/path
or maybe something slightly crazy and even somewhat backwards compatible if the CA/browser people wouldn’t blow a fuse:

https://[key material].explicit_key.net/path
explicit_key.net would be some appropriate reserved domain, and some neutral party (ICANN?) could actually register it, expose the appropriate A records and, using a trusted and name-constrained intermediate CA, issue actual certificates that allow existing browsers to validate the key material in the domain name.
Wouldn’t it make more sense to design a new, simple API and glue for doing secure DNS lookups just for certificate issuance? It could look more like dnscurve or even like HTTPS: have a new resource, say NSS, in parallel with NS. To securely traverse to a subdomain, you would query the parent for NSS and, if the record is present, you would learn an IP address and a public key hash or certificate hash that you can query via HTTPS to read the next domain. And this whole spec would say that normal HTTPS clients and OS resolvers SHOULD NOT use it. So if you mess it up, your site doesn’t go offline.
(HTTPS really needs a way to make a URL where the URL itself encodes the keying information.)
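A toy model of that traversal. "NSS" is the hypothetical record proposed above, not a real DNS type, and all names, addresses, and key hashes here are made up:

```python
# Each parent zone pins its child's server address plus key hash; the CA
# walks the chain from the root, checking each hop's key before trusting it.
ZONES = {
    ".":             {"example.": ("192.0.2.1", "keyhash-tld")},
    "example.":      {"corp.example.": ("192.0.2.2", "keyhash-corp")},
    "corp.example.": {},
}
# What each server would actually present during the (stubbed-out) HTTPS hop:
ACTUAL_KEYS = {"192.0.2.1": "keyhash-tld", "192.0.2.2": "keyhash-corp"}

def resolve_securely(name):
    """Walk from the root, verifying each delegation's pinned key hash."""
    labels = name.rstrip(".").split(".")
    zone = "."
    for i in range(len(labels) - 1, -1, -1):
        child = ".".join(labels[i:]) + "."
        addr, expected = ZONES[zone][child]   # the parent's hypothetical NSS record
        if ACTUAL_KEYS[addr] != expected:     # stand-in for the TLS key check
            raise ValueError(f"key mismatch at {child}")
        zone = child
    return zone

print(resolve_securely("corp.example"))  # corp.example.
```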
Yes. WebPKI people have been talking about doing that for a long time. There's a couple different angles you can come up with on it, including things like RDAP to directly query registrars for ownership of a domain, and speaking DoH all the way up to authorities.
Presumably the problem is that it just takes for-fucking-ever to make anything happen inside CA/BForum. Case in point: we were all today years old before CA/BForum required CAs to actually use DNSSEC if it's set up.
DNSSEC is a complex scheme designed to allow queries to be answered with no secrets known to the answering nameserver: everything is signed offline, and the signed records are served up.
My (vague) suggestion is to use a much simpler online scheme with correspondingly lower performance, but to use it only for security-critical queries such as those made by CAs.
For better or for worse we have a system designed for exceptional scalability and performance for verifying that you’re talking to the domain name that you think you’re talking to: WebPKI. It’s all kinds of outdated, baroque, and crappy, but it works and is supported by basically everything.
We keep taking little baby steps toward making it suck less (dns-persist-01 is an upcoming one that I’m excited about). And all these little baby steps can be deployed worldwide without modifying client software.
DNSSEC tries to fit in a weird corner of the performance envelope. It requires modifying client software to get more than a token benefit. Approximately zero clients use it for anything worthwhile. The technology is terrible. It’s really not fit for any particular purpose. About the best one can say for using it to help with domain validated certificate issuance is that it exists, but, as widely noted, it’s so awful that even major high-budget websites don’t use it.
At the end of the day, domain validation is a heavyweight process that needs to be reliable, not fast. All it’s doing is, in effect, asking the DNS hierarchy whether the certificate requester is authorized. A much MUCH simpler solution would be to … just ask the chain of authoritative DNS servers. This query does not need to be scalable. It does not need to be fast. It does not need to meet almost any of the ridiculous pile of requirements that led to DNSSEC. Heck, it doesn’t even need to be possible for unauthenticated users to issue queries at all.
Sometimes specific problems deserve specific, narrowly tailored solutions.
HA is an absurdly heavyweight pile of Python and Docker. Get it a real computer — a used “thin client” with 8 GB of RAM is probably less expensive than an RPi4 plus case and power supply.
What’s the issue with new ones? Proxmox should work just fine on new hardware.
I’m rocking a Dell Wyse “thin client” that cost rather less than any comparable N150. It’s fanless, still gets firmware updates (via LVFS!), and takes about 5W at the wall plug.
A ways back, I wrote a sort of database that was memory-mapped-file backed (a mistake, but I didn’t know that at the time), and I would have paid top dollar for even a few GB of NVDIMMs that could be put in an ordinary server and could be somewhat straightforwardly mounted as a DAX filesystem. I even tried to do some of the kernel work. But the hardware and firmware was such a mess that it was basically a lost cause. And none of the tech ever seemed to turn into an actual purchasable product. I’m a bit suspicious that Intel never found product-market fit in part because they never had a credible product on the NVDIMM side.
Somewhere I still have some actual battery-backed DIMMs (DRAM plus FPGA interposer plus awkward little supercapacitor bundle) in a drawer. They were not made by Intel, but Intel was clearly using them as a stepping stone toward the broader NVDIMM ecosystem. They worked on exactly one SuperMicro board, kind of, and not at all if you booted using UEFI. Rebooting without doing the magic handshake over SMBUS [0] first took something like 15 minutes, which was not good for those nines of availability.
[0] You can find my SMBUS host driver for exactly this purpose on the LKML archives. It was never merged, in part, because no one could ever get all the teams involved in the Xeon memory controller to reach any sort of agreement as to who owned the bus or how the OS was supposed to communicate without, say, defeating platform thermal management or causing the refresh interval to get out of sync with the DIMM temperature, thus causing corruption.
I’m suspicious that everything involved in Optane development was like this.
If you're thinking about reliability in terms of 9s you should probably not be depending on a single machine. Reboots could take hours and be fine if your architecture is set up for reliability.
Things go down and need to come back up. Take a look at almost any major cloud incident in the last 15 years — some combination of goofs results in a bunch of services going down. Then it takes many minutes or hours to come back up. Sure, one can (and should) try to design for fewer failures that result in waiting for things to come up, but one should also design for faster bringup.
For the NVDIMM in question, the whole point was durability across a reboot or power loss. A fifteen-minute cycle in which it wrote its contents out to nonvolatile storage and then read them back, with firmware too inept to notice that all the data was already loaded, is a bug that should have been fixed. (In fairness, this was a pre-production model.)
Even ignoring the reboot time, SMBUS was needed to properly identify the device and to access its health-check data.
As for my database, it’s not intended for active-active HA setups — it’s intended for use on a small number of high quality machines. It has had zero meaningful data loss incidents in its entire time in production, and it’s extremely fast. But it has some other properties I don’t like and that would cause me to do it differently if I were to start over.
> There are very few applications that benefit from such low latency
Basically any RDBMS? MySQL and Postgres both benefit from high performance storage, but too many customers have moved into the cloud where you can’t get NVMe-like performance for durable storage for anything remotely close to a worthwhile price.
I'm saying that there are very few downstream applications that use databases that benefit from reducing latency beyond the slow performance of the cloud. Running your database on VMs or baremetal gives better performance, but almost no applications built on databases bother to do it.
Intel did a spectacularly poor job with the ecosystem around the memory cells. They made two plays, and both were flops.
1. “Optane” in DIMM form factor. This targeted (I think) two markets. First, use as slower but cheaper and higher density volatile RAM. There was actual demand — various caching workloads, for example, wanted hundreds of GB or even multiple TB in one server, and Optane was a route to get there. But the machines and DIMMs never really became available. Then there was the idea of using Optane DIMMs as persistent storage. This was always tricky because the DDR interface wasn’t meant for this, and Intel also seems to have a lot of legacy tech in the way (their caching system and memory controller) and, for whatever reason, they seem to be barely capable of improving their own technology. They had multiple serious false starts in the space (a power-supply-early-warning scheme using NMI or MCE to idle the system, a horrible platform-specific register to poke to ask the memory controller to kindly flush itself, and the stillborn PCOMMIT instruction).
2. Very nice NVMe devices. I think this was more of a failure of marketing. If they had built a line of SSDs that, coupled with an appropriate filesystem, could give 99th-percentile fsync latency of 5 microseconds, and actually marketed that capability, I bet people would have paid. But they did nothing of the sort — instead they just threw around the term “Optane” inconsistently.
These days one could build a PCM-backed CXL-connected memory mapped drive, and the performance might be awesome. Heck, I bet it wouldn’t be too hard to get a GPU to stream weights directly off such a device at NVLink-like speeds. Maybe Intel should try it.
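For what it's worth, measuring your own storage's fsync tail latency takes a few lines; a rough probe (results vary enormously between local NVMe, Optane, and cloud block storage):

```python
# Rough fsync tail-latency probe: write 4 KiB, fsync, time it, repeat.
import os
import tempfile
import time

def fsync_latencies(n=200):
    lats = []
    with tempfile.NamedTemporaryFile() as f:
        for _ in range(n):
            os.write(f.fileno(), b"x" * 4096)
            t0 = time.perf_counter()
            os.fsync(f.fileno())
            lats.append(time.perf_counter() - t0)
    return sorted(lats)

lats = fsync_latencies()
p99 = lats[int(0.99 * (len(lats) - 1))]
print(f"p99 fsync: {p99 * 1e6:.0f} us")
```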
One of the many problems was trying to limit the use of Optane to Intel devices. They should have manufactured and sold Optane memory and let other players build on top of it at a low level.
Which “Optane memory”? The NVMe product always worked on non-Intel. The NVDIMM products that I played with only ever worked on a very small set of rather specialized Intel platforms. I bet AMD could have supported them about as easily as Intel, and Intel barely ever managed to support them.
The consumer "Optane memory" products were a combination of NVMe and Intel's proprietary caching software, the latter of which was locked to Intel's platforms. They also did two generations of hybrid Optane+QLC drives that only worked on certain Intel platforms, because they ran a PCIe x2+x2 pair of links over a slot normally used for a single x2 or x4 link.
Yes, the pure-Optane consumer "Optane memory" products were at a hardware level just small, fast NVMe drives that could be used anywhere, but they were never marketed that way.
Exactly. I happen to have all-AMD hardware sitting around here, and buying my first Optane devices was a gamble, because I had no idea if they'd work. The only reason I ever did is that they got cheap at one point and I could afford the gamble.
That uncertainty couldn't have done the market any favors.
I feel like this is proving my point. You can’t read “Optane” and have any real idea of what you’re buying.
Also… were those weird hybrid SSDs even implemented by actual hardware, or were they part of the giant series of massive kludges in the “Rapid Storage” family where some secret sauce in the PCIe host lied to the OS about what was actually connected so an Intel driver could replace the OS’s native storage driver (NVMe, AHCI, or perhaps something worse depending on generation) to implement all the actual logic in software?
It didn’t help Intel that some major storage companies started selling very, very nice flash SSDs in the meantime.
> were those weird hybrid SSDs even implemented by actual hardware, or were they part of the giant series of massive kludges
They were definitely part of the series of massive kludges. But aside from the Intel platforms they were marketed for, I never found a PCIe host that could see both of the NVMe devices on the drive. Some hosts would bring up the x2 link to the Optane half of the drive, some hosts would bring up the x2 link to the QLC half of the drive, but I couldn't find any way to get both links active even when the drive was connected downstream of a PCIe switch that definitely had hardware support for bifurcation down to x2 links. I suspect that with appropriate firmware hacking on the host side, it may have been possible to get those drives fully operational on a non-Intel host.
Why on Earth did Intel implement this as an x2+x2 device? They could have implemented multiple PCIe functions, or used a PCIe switch, or exposed the device as a single NVMe controller with multiple namespaces, etc. (I won’t swear that all of these would have worked nicely. But all of them would have performed better than arbitrarily splitting the link in half.)
Maybe they didn’t own any of the IP for the conventional SSD part and couldn’t make it play ball?
The Optane side of the drive used the same x2 controller as the pure-Optane cache drives. The NAND side used a Silicon Motion controller, same as their consumer QLC drives of the era. They almost literally just crammed their two existing consumer products onto one PCB and shipped it. Intel was never interested enough in the consumer applications of Optane to design a good, useful SSD controller around it, and they weren't going to let a third-party like Silicon Motion make an Optane-compatible controller.
> > People are also having to intervene in once-automated tasks. Thousands of orders that used to auto-flow directly to the warehouse floor for same-day shipping now often miscalculate tariff costs.
> Charge a blanket tariff fee like Mouser.
The importer still needs to pay the correct tariff.
Also, according to the article, a big part of the problem is that Digi-Key does substantial business selling imported parts to non-US buyers. It’s fantastic for the US that this business can exist (money flows into the US and actual good jobs are created), but the tariff system makes it difficult to run this part of the business, and there’s a lot of pressure to move those jobs and that revenue to a different country that doesn’t have this problem.
I’ve helped someone with a rather clean iMac, circa 2019, still supported by Apple. Forget 6 minutes — you can spend a full hour from boot to giving up trying to get anything done.
I think that Apple has gotten so used to having fast storage in their machines that the newer OSes basically don’t work on spinning rust.
I bet this is it. I had a 2018 Mac Mini with a failing drive that moved like frozen molasses but wasn't throwing obvious errors. Before it failed, it was slow compared to an SSD, but it booted in a reasonable amount of time and ran office apps just fine, with only a little startup lag. Bad compared to an SSD, but not intolerably slow.
If a Mac is running that slowly, there's probably a hardware issue.
Is there some reasonable way to check whether the Fusion drive is failing? Some quick searches suggest that Apple’s built in tooling doesn’t actually help much.
(I get it. It's an awesome replacement for MathType. It uses OLE so that it embeds in Microsoft Word nicely. Still...)