The Mach-E was the bestseller last month and is in second place this month, behind the Model Y. Last month's 80% figure was for plug-in vehicles including plug-in hybrids (64% battery electric, BEV); this month's 70% is BEV only (about 88% including plug-in hybrids).
The numbers change a lot from month to month, depending on delivery rate (e.g. Tesla does bulk deliveries every three months), and there is typically a spike in deliveries when a model starts shipping, since there are months of backorders to work through. The new models that started shipping were the Mach-E last month and the Model Y this month.
So it's more useful to look at longer timeframes.
Sounds to me very much like "I want to learn truths and relationships about natural language that are independent of any specific culture and history", i.e. independent of any particular language and concepts like alphabets and spelling. Such things exist (like Chomsky's generative grammar), but they are of limited use in learning any particular language.
Music theory without culture and history would have to leave out things like scales, chords, chord progressions, tuning systems (like our 12-tone equal temperament), etc., since they vary between cultures and over time. I'm not entirely sure what's left, maybe the harmonic series?
Sheet music is similar to math notation or written language, and simpler to learn than either of those. It's not the only possible notation, but it's widely used and more compact than say guitar tabs or a MIDI piano roll. If you can't read or write any notation at all, you will be limited by how much you can memorize; writing things down is a time-honored tradition for remembering details for yourself as well as for sharing them with others.
So I would suggest that music students learn sheet music plus any other notation that's relevant to their instrument, for the same reason I would suggest that English learners learn to read and write despite English spelling being a crapton of inconsistencies; it gives a lot more options for a modest amount of extra effort.
Can you go through life without being able to read and write? Sure. I just don't see why you would want to.
> [They] adapted a heat-exchange process commonly used for cooling submarines to the underwater datacenter. The system pipes seawater directly through the radiators on the back of each of the 12 server racks and back out into the ocean.
So watercooling with seawater; a pump or two and perhaps a heat exchanger (the radiator) are involved. Going server to air to pod outer surface would be way too inefficient to keep the servers operating.
For me, the photos in the linked article look great no matter what process was used to create them. I don't see why they would lose their charm if they were CGI, simply because we tend to know whether we like a picture within a split second of first seeing it, while we generally don't learn how it was made until later.
But for the pictures in the GP link, I don't think the pictures by themselves are all that interesting; to me almost the entire interest stems from knowing how they were made.
Somewhat comparable to pictures posted on HN a while back that went from "ho-hum, another portrait" to "that's actually interesting" by knowing that these were not real people or real photos, but rather computer generated portraits. (https://petapixel.com/2019/09/20/this-company-is-giving-away... )
So I think charm/appeal can stem from several sources, like pure visual impact, but it can also stem from an appreciation of the process behind it. And I see people sometimes change their minds on how much they like a picture - in either direction - based on how simple or complex they think the process was, even if the picture is identical. (Compare say a picture of a wild wolf pouncing on its prey vs the same picture after you've learned that it's stuffed animals and the scene was arranged by the photographer.)
"While a few other substances (like solid neon) could potentially explain the coma-free acceleration, hydrogen was the best match for the data."
It doesn't have to be hydrogen. But to explain acceleration from solar heating it does need outgassing, and the outgassing has to be in a form that we can't detect (no comet tail). Stone won't have much (or any) of that.
Not all that massive: 6% increase if all cars and vans (excluding e.g. big rigs and tractors) go electric, according to the Norwegian national grid institute[1] (in Norwegian). Some local grids will need an upgrade, but the national transmission and distribution grid can handle it if 'smart' charging solutions shift the bulk of the charging to happen at night and other low-demand periods.
Norway uses a lot of electricity even without EVs, so it's not a very large increase percentage wise. E.g. we don't have gas pipes to homes like some places in the USA, so stoves are all electric and a lot of heating is electric as well.
Because for a given amount of funding and engineering effort, a telescope on the ground can be a lot larger.
Hubble has a 2.4 meter mirror, TMT is 30 meters. Resolution scales with the diameter of the mirror (so an order of magnitude more for TMT), while light gathering scales with the area (two orders of magnitude more for TMT).
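A quick back-of-the-envelope check of those two scaling claims, using the mirror diameters quoted above:

```python
# Resolution scales with mirror diameter; light gathering with mirror area.
hubble_d = 2.4   # Hubble primary mirror diameter, meters
tmt_d = 30.0     # TMT primary mirror diameter, meters

resolution_gain = tmt_d / hubble_d            # ~ diameter ratio
light_gain = (tmt_d / hubble_d) ** 2          # ~ area ratio

print(f"resolution gain: {resolution_gain:.1f}x")     # 12.5x, ~one order of magnitude
print(f"light-gathering gain: {light_gain:.1f}x")     # 156.2x, ~two orders of magnitude
```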
Traditionally, there were two challenges for large ground based telescopes: a) We couldn't make telescope mirrors larger than ~5 meters because the glass starts to deform under its own weight and the view gets worse rather than better; and b) we lose too much to atmospheric distortion.
The solution to the first issue is to assemble large mirrors from many small mirrors in a honeycomb pattern[0].
The solution to the second one is adaptive optics[1]: Actuators and computer control can compensate for atmospheric distortion by precisely deforming the mirror in real time, several times per second, to cancel out the distortion. We still need a good location for the telescope to minimize the amount of distortion we need to handle, but with that we could for the first time achieve better resolution from the ground with the 10-meter Keck.
These days the benefits of a space telescope come not so much from visible light, but from other wavelengths that are fully or partially blocked by the atmosphere[2]. Like UV, X-ray, or infrared like the upcoming James Webb space telescope.
With higher resolution sensors, small imperfections are more visible. To extract the most from a high-megapixel camera, you need spot-on focusing, a higher quality (more expensive) lens, and more care (e.g. higher shutter speed or tripod) to avoid camera shake during exposure.
If you nail all these, I wouldn't call it crappy even at 1:1.
But there is a residual imperfection from the Bayer pattern on the sensor, since each pixel records only one color (R, G, or B) and the other two values for that pixel have to be guesstimated from neighboring pixels, so the de-bayering process isn't perfect.
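To make the guesstimation concrete, here's a toy sketch of de-Bayering on a synthetic image: each photosite records one channel of an RGGB mosaic, and the missing channels are filled in by averaging same-color neighbors in a 3x3 window. This is a deliberately naive interpolation for illustration, not the edge-aware algorithms real cameras use; the test image and sizes are made up.

```python
# Toy de-Bayer sketch: each sensor pixel records one channel, the other two
# are interpolated from neighbors -- which is why the result is never perfect.
import numpy as np

H, W = 6, 6
# Hypothetical ground-truth scene: smooth per-channel gradients.
truth = np.zeros((H, W, 3))
truth[..., 0] = np.linspace(0, 1, W)           # R ramps left to right
truth[..., 1] = np.linspace(0, 1, H)[:, None]  # G ramps top to bottom
truth[..., 2] = 0.5                            # B is flat

# RGGB Bayer pattern: which channel each photosite actually records.
bayer = np.zeros((H, W), dtype=int)
bayer[0::2, 1::2] = 1  # G
bayer[1::2, 0::2] = 1  # G
bayer[1::2, 1::2] = 2  # B
mosaic = np.take_along_axis(truth, bayer[..., None], axis=2)[..., 0]

# Reconstruct: average the 3x3-neighborhood photosites of each channel.
recon = np.zeros_like(truth)
for y in range(H):
    for x in range(W):
        ys, xs = slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2)
        for c in range(3):
            recon[y, x, c] = mosaic[ys, xs][bayer[ys, xs] == c].mean()

print("mean abs reconstruction error:", np.abs(recon - truth).mean())
```

The flat blue channel reconstructs exactly, but the gradients pick up small errors wherever the neighborhood averaging is asymmetric; real demosaicing is much smarter but still an estimate.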
One way to fix this is to use a monochrome sensor with color filters, taking 3-4 exposures (luminance/mono plus R/G/B) and stacking them.
A few cameras on the market have a pixel-shift feature that does something similar: multiple exposures, shifting the sensor one pixel between exposures so each pixel gets a true R/G/B sample, then stacking them in camera or in post.
Edit: Forgot to mention the anti-aliasing filter. It sits in front of the sensor and deliberately blurs the image at the pixel level. This is done to avoid aliasing and moiré artifacts, but obviously has the side effect of not-so-great image quality at the pixel peeping level. The fix for this is to get a camera without an AA filter, many modern high-resolution cameras don't have them.
Probably an RTG [1], it's what NASA normally uses. It's thermoelectric [2]: Heat (in this case from radioactive decay) is converted directly into electricity without moving parts.