
Sadly, yet another bloody chapter of the Abu Dhabi (al Nahyan) - Doha (al Thani) feud that has been going on since the 2011 coup attempt [0], itself part of a longer multi-generational blood feud between the royal families [4]. The Middle East, North Africa, Central Asia, and the Balkans are all burning because of this saga [1].

The UAE backs the RSF [2] (formerly known as the Janjaweed of the Darfur Genocide), and Qatar supports the Sudanese Army [3].

[0] - https://www.middleeasteye.net/news/united-arab-emirates-pala...

[1] - https://lobelog.com/doha-and-abu-dhabis-incompatible-visions...

[2] - https://www.wsj.com/world/how-u-a-e-arms-bolstered-a-sudanes...

[3] - https://www.africaintelligence.com/eastern-africa-and-the-ho...

[4] - https://gulfif.org/changing-alignments-in-the-lower-gulf/


99.9% of Bluesky users use only Bluesky services. But Bluesky runs a Personal Data Server (PDS) for each of them. That means:

Those users have credible exit: they can take their data off Bluesky's hosting to someplace else (and, as of a week or two ago, move back to Bluesky if they want).

Those users can put whatever kind of data they want in their PDS. They can host their git data via https://tangled.org . They can store their music listening scrobbles with https://teal.fm . They can blog on https://leaflet.pub .

And there are rapidly advancing host-it-yourself options. Plenty of folks individually or collectively host a PDS. There are alternate relays that collect & syndicate out everyone's PDS data as it changes. Hosting the aggregation layer is significantly harder, especially if you are trying to fully connect the network, but there are a couple & progress is good.

It feels like a huge improvement over the status quo, and there's extremely visible developer energy building forward & rolling with the concepts. The breakdown of the architecture allows for wins and work in various areas. The base seems solid, the core seems coherent & well built, built to scale not as one big thing but as coherent layers. I think it's doing what you're asking for, and the signs of advancement & uptake warm my heart to see.


A bunch of people. Just type these terms into DuckDuckGo:

analog neural network hardware

physical neural network hardware

Put "this paper" after each one to get academic research. Try it with and without that phrase. Also, add "survey" to the next iteration.

The papers that pop up will have the internal jargon the researchers use to describe their work. You can further search with it.

The "this paper," "survey," and internal jargon in various combinations are how I find most CompSci things I share.


To add to this & the Jobs interview - an oil industry proverb: a healthy oil company has a geologist in charge, a mature one has an engineer in charge, a declining one has an accountant in charge, and a dying one has a lawyer in charge.

When reading "picture an apple with three blue dots on it", I have an abstract concept of an apple and three dots. There's really no geometry there, without follow on questions, or some priming in the question.

In my conscious experience I pretty much imagine {apple, dot, dot, dot}. I don't "see" blue, the dots are tagged with dot.color == blue.

When you ask about the arrangement of the dots, I'll THEN think about it, and then say "arranged in a triangle." But that's because you've probed with your question. Before you probed, there was no concept in my mind of any geometric arrangement.

If I hadn't been prompted to think about (or naturally thought about) the color of the apple, and you asked me "what color is the apple?", only then would I say "green" or "red."

If you asked me to describe my office (for example), my brain can't really imagine it "holistically." I can think of the desk and then enumerate its properties: white legs, wooden top, rug on the ground. But, essentially, I'm running a geometric iterator over the scene, starting from some anchor object, jumping to nearby objects, and then enumerating their properties.

I have glimpses of what it's like to "see" in my mind's eye. At night, in bed, just before sleep, if I concentrate really hard, I can sometimes see fleeting images. I liken it to looking at one of those eye puzzles where you have to relax your eyes to "see it." I almost have to focus on "seeing" without looking into the blackness of my closed eyes.


The sibling comment by theptip explains this well for this specific case, but the other sibling comment still seems confused, so I will explain more broadly: productive uses of heat are all about the temperature. The boundaries vary, but "high-grade" heat is roughly 300-500C, medium-grade is 100-300C, and low-grade <100C. Up to a certain point [0], heat is easier to use the hotter it is.

High-grade heat can be easily turned into electricity with a turbine, or reused in an industrial process (the entire point of a nuclear reactor is to create heat in this temperature range!). Medium-grade heat can still be used for some processes or used to generate electricity, but the electricity generation will be less efficient. Low-grade heat is under 100C and is a lot harder to use. You cannot economically generate electricity from it, or use it for most industrial processes, so use cases often focus on district heating.

The problem with these low-grade-heat district heating schemes, or more broadly any use of low-grade heat, is the economics. Let's take your idea. The efficiency from sunlight to heat is indeed high (much higher than PV panel -> resistive element) but the heat generated is all low-grade heat.

So what's the root cause here: why is low-grade heat usually not economic to use? It comes back to two main causes: 1) efficiency, and 2) storage. Most power is generated from turbines that use heat - a type of heat engine. Carnot efficiency is the maximum theoretical efficiency of a heat engine: η = 1 – Tcold/Thot, where the temperatures are absolute (Kelvin/Rankine). In other words, 7.7% for 50C->25C (323K->298K), 37% for 200C->25C, and 61% for 500C->25C. Note that this is the _theoretical_ maximum efficiency; real-world efficiency varies quite a bit - from ~1/2 to ~1/10th of Carnot efficiency at peak, depending on your heat engine. The second cause, storage cost, is even more important. You need to insulate your warm object to keep it warm, and if your heat is low-grade, then it is spread out across a huge volume.
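
To make those numbers concrete, here's a minimal sketch in plain Python that reproduces the Carnot figures above:

  # Carnot efficiency: eta = 1 - Tcold/Thot, absolute temperatures.
  def carnot_efficiency(t_hot_c, t_cold_c):
      t_hot_k = t_hot_c + 273.15
      t_cold_k = t_cold_c + 273.15
      return 1 - t_cold_k / t_hot_k

  for t_hot in (50, 200, 500):
      print(f"{t_hot}C -> 25C: {carnot_efficiency(t_hot, 25):.1%}")
  # 50C -> 25C: 7.7%
  # 200C -> 25C: 37.0%
  # 500C -> 25C: 61.4%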

Back to your idea, your "paint the dirt black" idea will generate far more heat, but very low-grade and entirely non-economic to use. You will have a somewhat warm pile of dirt, but nobody really wants a somewhat warm pile of dirt. This is the same reason why you see people, logically, tearing down their high-efficiency solar water heaters to install low-efficiency solar panels.

"Use solar panels to resistively heat the dirt", on the other hand, is less efficient and generates far less heat - but it generates high-grade heat. This startup proposes to eventually sell that heat to power plants, to generate electricity directly; and if you're doing that, the temperature of the heat is critical. As you can see from the Carnot efficiency, a power plant couldn't economically do anything with a warm pile of dirt, a solar water heater, or other similar technologies. But they _can_ do something with a source of high-grade heat - namely, they can run the turbines that currently run on fossil fuels. In other words, you can solve the seasonal solar curve problem and have constant electricity production year-round, even in northerly climes.

[0] Above - very roughly - 500C, it is harder to use the waste heat efficiently, because the engineering gets a lot harder, but the theoretical maximum efficiency is higher. That's one reason why there are a lot of efforts to try to build nuclear reactors working at these higher temperatures (see: molten salt reactors.)


I built you this: https://tools.simonwillison.net/hacker-news-filtered

It shows you the Hacker News page with ai and llm stories filtered out.

You can change the exclusion terms and save your changes in localStorage.

o3 knocked it out for me in a couple of minutes: https://chatgpt.com/share/68766f42-1ec8-8006-8187-406ef452e0...

Initial prompt was:

  Build a web tool that displays the Hacker
  News homepage (fetched from the Algolia API)
  but filters out specific search terms,
  default to "llm, ai" in a box at the top but
  the user can change that list, it is stored
  in localstorage. Don't use React.
Then four follow-ups:

  Rename to "Hacker News, filtered" and add a
  clear label that shows that the terms will
  be excluded

  Turn the username into a link to
  https://news.ycombinator.com/user?id=xxx -
  include the comment count, which is in the
  num_comments key

  The text "392 comments" should be the link,
  do not have a separate thread link

  Add a tooltip to "1 day ago" that shows the
  full value from created_at
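For anyone curious, the core of the tool is just fetch-and-filter. Here's a minimal sketch of that idea in Python (the endpoint and fields are the public Algolia HN API; the real tool does this client-side in the browser, and its exact matching rules may differ):

  import requests

  EXCLUDE = {"llm", "ai"}  # the user-editable exclusion list

  resp = requests.get("https://hn.algolia.com/api/v1/search",
                      params={"tags": "front_page", "hitsPerPage": 30})
  for hit in resp.json()["hits"]:
      title = hit.get("title") or ""
      # crude word-level match; anything containing an excluded term is dropped
      if not EXCLUDE & set(title.lower().split()):
          print(f'{title} ({hit.get("num_comments", 0)} comments)')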

Tesla's FSD has different approach / tradeoffs compared to dedicated robotaxi services. FSD has to be cheap and energy efficient, run completely on-board, and it must work everywhere. They're trying to do more with less, which has so far been impossible. Their cybercab and robotaxi service will probably work more like Waymo, with a slightly relaxed set of limitations.

Some differences compared to Waymo:

- Waymo has / can use more on-board compute; from [0]: "It has also been revealed that Waymo is using around four NVIDIA H100 GPUs at a unit price of 10,000 dollars per vehicle to cover the necessary computing requirements."

- Waymo uses remote operators. This includes humans but can also have remote compute.

- Waymo's neural network model can be trained / overfit on a specific route or area. FSD uses the same model everywhere.

- Waymo's on-board hardware can use more energy, because it's possible to charge the battery between trips.

- Robotaxi services charge customers per mile, so it makes sense to run longer routes which are also easier to drive, i.e. the routing algorithm can be tuned to avoid challenging routes (a sketch of this kind of weighting follows below). This would be possible to implement in FSD too, but it seems that FSD drives the fastest route.
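
To illustrate that last point, here's a minimal sketch of difficulty-weighted routing (the graph, weights, and penalty factor are all made up for illustration): a shortest-path search where each segment's cost is travel time inflated by a difficulty penalty, so harder roads are avoided when reasonable alternatives exist.

  import heapq

  # node: [(neighbor, (minutes, difficulty in [0, 1])), ...]; illustrative only
  EDGES = {
      "A": [("B", (10, 0.9)), ("C", (12, 0.1))],
      "B": [("D", (5, 0.2))],
      "C": [("D", (6, 0.1))],
      "D": [],
  }
  PENALTY = 2.0  # how strongly difficulty is punished; tune to taste

  def route(start, goal):
      queue = [(0.0, start, [start])]
      seen = set()
      while queue:
          cost, node, path = heapq.heappop(queue)
          if node == goal:
              return cost, path
          if node in seen:
              continue
          seen.add(node)
          for nbr, (minutes, difficulty) in EDGES[node]:
              weighted = minutes * (1 + PENALTY * difficulty)
              heapq.heappush(queue, (cost + weighted, nbr, path + [nbr]))
      raise ValueError("no route")

  # Raw travel time favors A->B->D (15 min vs 18 min), but the difficulty
  # weighting picks the easier A->C->D route instead.
  print(route("A", "D"))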

[0] https://thelastdriverlicenseholder.com/2024/10/27/waymos-5-6...


A file consists of data and various metadata, e.g. file name, timestamps, access rights, user-defined file attributes.

By default, a file copy should include everything that is contained in the original file. Sometimes the destination file system cannot store all the original metadata, but in such cases a file copying utility must give a warning that some file metadata has been lost, e.g. when copying to a FAT file system or to a tmpfs file system as implemented by older Linux kernels. (Many file copy or archiving utilities fail to warn the user when metadata cannot be preserved.)

Sometimes you may no longer need some of the file metadata, but the user should be the one who chooses to lose that information; it should not be the default behavior, especially when this unexpected behavior is not advertised anywhere in the documentation.

The origin of the problem is that the old UNIX file systems did not support many kinds of modern file metadata, i.e. they did not have access control lists or extended file attributes, and the file timestamps had a very low resolution.

When the file systems were modernized (XFS was the first Linux file system supporting such features, then slowly the other file systems were modernized too), most UNIX utilities were not updated until many years later, and even then the additional features remained disabled by default.

Copying between different computers, as with rsync, creates additional problems, because even if e.g. both Windows and Linux have extended file attributes, access control lists, and high-resolution file timestamps, the APIs used for accessing file metadata differ between operating systems, so a utility like rsync must contain code able to handle all such APIs, otherwise it will not be able to preserve all file metadata.
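
You can see the silent-loss default from Python in a few lines; a minimal sketch (Linux-specific; "user.comment" is just an example attribute name):

  import os
  import shutil

  with open("orig.txt", "w") as f:
      f.write("hello")
  os.setxattr("orig.txt", "user.comment", b"important provenance note")

  shutil.copy("orig.txt", "copy_plain.txt")   # data + permission bits only
  shutil.copy2("orig.txt", "copy_meta.txt")   # also timestamps/xattrs where possible

  for name in ("orig.txt", "copy_plain.txt", "copy_meta.txt"):
      print(name, os.listxattr(name), os.stat(name).st_mtime)
  # copy_plain.txt has lost the xattr and has a fresh mtime - silently.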


It is possible to set up end-to-end encryption where two different keys unlock your data: your key, and a government key. I assume Google does this.

1. Encrypt the data with a special key.

2. Encrypt the special key with the user's key.

3. Encrypt the special key with the government key.

Anyone with the special key can read the data. Either the user key or the government key can be used to recover the special key.

This two-step process can be done for good or bad purposes. A user can have their key on their device, and a second backup key could be on a USB stick locked in a safe, so if you lose your phone you can get your data back using the second key.
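
Here's a minimal sketch of that envelope-encryption pattern (using Python's cryptography package; the key names are illustrative):

  from cryptography.fernet import Fernet

  # 1. Encrypt the data with a "special" (data) key.
  data_key = Fernet.generate_key()
  ciphertext = Fernet(data_key).encrypt(b"my private data")

  # 2. and 3. Wrap the data key under each independent key.
  user_key = Fernet.generate_key()
  escrow_key = Fernet.generate_key()  # the "government" or backup key
  wrapped_for_user = Fernet(user_key).encrypt(data_key)
  wrapped_for_escrow = Fernet(escrow_key).encrypt(data_key)

  # Either key holder can unwrap the data key and then read the data.
  recovered_key = Fernet(escrow_key).decrypt(wrapped_for_escrow)
  print(Fernet(recovered_key).decrypt(ciphertext))  # b'my private data'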


> > IQ is compute speed, not storage.

> Says who?

https://en.wikipedia.org/wiki/John_von_Neumann#Mathematical_...

Von Neumann's mathematical fluency, calculation speed, and general problem-solving ability were widely noted by his peers. Paul Halmos called his speed "awe-inspiring." Lothar Wolfgang Nordheim described him as the "fastest mind I ever met". Enrico Fermi told physicist Herbert L. Anderson: "You know, Herb, Johnny can do calculations in his head ten times as fast as I can! And I can do them ten times as fast as you can, Herb, so you can see how impressive Johnny is!" Edward Teller admitted that he "never could keep up with him", and Israel Halperin described trying to keep up as like riding a "tricycle chasing a racing car."

He had an unusual ability to solve novel problems quickly. George Pólya, whose lectures at ETH Zürich von Neumann attended as a student, said, "Johnny was the only student I was ever afraid of. If in the course of a lecture I stated an unsolved problem, the chances were he'd come to me at the end of the lecture with the complete solution scribbled on a slip of paper." When George Dantzig brought von Neumann an unsolved problem in linear programming "as I would to an ordinary mortal", on which there had been no published literature, he was astonished when von Neumann said "Oh, that!", before offhandedly giving a lecture of over an hour, explaining how to solve the problem using the hitherto unconceived theory of duality.

A story about von Neumann's encounter with the famous fly puzzle has entered mathematical folklore. In this puzzle, two bicycles begin 20 miles apart, and each travels toward the other at 10 miles per hour until they collide; meanwhile, a fly travels continuously back and forth between the bicycles at 15 miles per hour until it is squashed in the collision. The questioner asks how far the fly traveled in total; the "trick" for a quick answer is to realize that the fly's individual transits do not matter, only that it has been traveling at 15 miles per hour for one hour. As Eugene Wigner tells it, Max Born posed the riddle to von Neumann. The other scientists to whom he had posed it had laboriously computed the distance, so when von Neumann was immediately ready with the correct answer of 15 miles, Born observed that he must have guessed the trick. "What trick?" von Neumann replied. "All I did was sum the geometric series."
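
For the curious, summing the series the "hard way" also checks out; a minimal sketch that adds up the fly's individual legs and converges to the shortcut answer of 15 miles:

  bike_a, bike_b = 0.0, 20.0   # positions (miles)
  v_bike, v_fly = 10.0, 15.0   # speeds (mph)
  fly, heading_to_b, total = bike_a, True, 0.0

  for _ in range(60):  # the gap shrinks 5x per leg, so 60 legs is plenty
      gap = (bike_b - fly) if heading_to_b else (fly - bike_a)
      t = gap / (v_fly + v_bike)   # time until the fly meets the target bike
      total += v_fly * t
      bike_a += v_bike * t
      bike_b -= v_bike * t
      fly = bike_b if heading_to_b else bike_a
      heading_to_b = not heading_to_b

  print(total)  # 15.0, i.e. the geometric series 12 + 2.4 + 0.48 + ...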


This process, known as primary endosymbiosis, happened at least twice, for mitochondria and chloroplasts. Further, while all chloroplasts (and more widely plastids) appear to share a common ancestor, there is evidence that mitochondria may descend from multiple lineages that underwent lateral gene transfer and/or convergent evolution. Nitroplasts are likely another, separate instance of primary endosymbiosis.

There is also secondary endosymbiosis, where the endosymbiont organelles of one eukaryote are engulfed and incorporated into another eukaryotic cell to create a new type of endosymbiont. This has happened at least 8 times.

There are also theories that some other organelles are the product of other endosymbiosis events, many of which also have some of the hallmarks like their own genetic material. These theories are more speculative though.

It's also worth noting that while eukaryotes obviously gained some important capabilities from incorporating these endosymbionts, the endosymbionts themselves obviously managed to evolve those functions directly. Further, while one of eukaryotes' distinguishing features is mitochondria, there are several other major differences, and mitochondria are not believed to be what made eukaryotes better able to evolve complex multicellularity. Prokaryotes have indeed evolved multicellularity dozens of times, and we arbitrarily set our definition of complex multicellularity to distinguish it from what prokaryotes have achieved.


> Sora was impressive because the clips were long and had lots of rapid movement

Sora videos ran at 1 beat per second, so everything in the image moved on the same beat, often too slowly or too quickly to keep the pace.

It is very obvious when you inspect the frames and notice that there are keyframes at every whole-second mark, where everything on the screen suddenly jumps to its next animation step.

That really limits the kind of videos you can generate.


I got a master’s degree in ML at a good school. I will say there’s pretty much nothing they taught me that I couldn’t have learned myself. That said, school focused my attention in ways I wouldn’t have alone, and provided pressure to keep going.

The single thing which I learned the most from was implementing a paper. Lectures and textbooks to me are just words. I understand them in the abstract but learning by doing gets you far deeper knowledge.

Others might suggest a more varied curriculum but to me nothing beats a one hour chunk of uninterrupted problem solving.

Here are a few suggested projects.

Train a baby neural network to learn a simple function like ax^2 + bx + c (see the sketch after this list).

MNIST digits classifier. Basically the “hello world” of ML at this point.

Fine tune GPT2 on a specialized corpus like Shakespeare.

Train a Siamese neural network with triplet loss to measure visual similarity to find out which celeb you’re most similar to.
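
For that first project, here’s a minimal sketch (assuming PyTorch; the architecture and hyperparameters are arbitrary choices):

  import torch
  import torch.nn as nn

  torch.manual_seed(0)
  x = torch.linspace(-2, 2, 256).unsqueeze(1)  # inputs, shape (256, 1)
  y = 3 * x**2 - 2 * x + 1                     # target quadratic

  model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
  opt = torch.optim.Adam(model.parameters(), lr=1e-2)

  for step in range(2000):
      opt.zero_grad()
      loss = nn.functional.mse_loss(model(x), y)
      loss.backward()
      opt.step()

  print(f"final MSE: {loss.item():.5f}")  # should be small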

My $0.02: don’t waste your time writing your own neural net and backprop. It’s a biased opinion, but this would be like implementing your own HashMap. No company will ask you to do this. Instead, learn how to use profiling and debugging tools like TensorBoard and the tf profiler.


I just want to underscore that. DeepMind's research output within the last month is staggering:

2023-11-14: GraphCast, world-leading weather prediction model, published in Science

2023-11-15: Student of Games: unified learning algorithm, major algorithmic breakthrough, published in Science

2023-11-16: Music generation model, seemingly SOTA

2023-11-29: GNoME model for material discovery, published in Nature

2023-12-06: Gemini, the most advanced LLM according to its own benchmarks


Yeah, that's a fairly well-studied one. Most of these techniques are rather "lossy" compared to extending the context window. The most likely "real solution" is going to be using various tricks and finetuning on longer context lengths to just extend the context window.

Here's a bunch of other related methods,

Summarizing context - https://arxiv.org/abs/2305.14239

continuous finetuning - https://arxiv.org/pdf/2307.02839.pdf

retrieval augmented generation - https://arxiv.org/abs/2005.11401 (see the sketch after this list)

knowledge graphs - https://arxiv.org/abs/2306.08302

augmenting the network with a side network - https://arxiv.org/abs/2306.07174

another long term memory technique - https://arxiv.org/abs/2307.02738
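
To give a flavor of the retrieval-augmented-generation entry above, a minimal sketch (the bag-of-words "embedding" here is a toy stand-in for a real embedding model):

  import math
  from collections import Counter

  def embed(text):
      return Counter(text.lower().split())  # toy stand-in for a real embedder

  def cosine(a, b):
      dot = sum(a[w] * b[w] for w in a)
      na = math.sqrt(sum(v * v for v in a.values()))
      nb = math.sqrt(sum(v * v for v in b.values()))
      return dot / (na * nb) if na and nb else 0.0

  chunks = [
      "The context window is the number of tokens a model attends over.",
      "Basil grows best in warm weather with plenty of sun.",
      "Retrieval fetches relevant chunks so the prompt stays short.",
  ]
  query = "how does retrieval keep the context window short?"
  qv = embed(query)

  # Keep only the most relevant chunks, then prepend them to the prompt.
  top = sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)[:2]
  print("Context:\n" + "\n".join(top) + "\n\nQuestion: " + query)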


Have started playing with this, and ffmpeg has a scene change detection filter already, so it should be fairly straightforward.
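
For reference, a minimal sketch of that filter in action (the select/showinfo combination is standard ffmpeg; the file name and 0.4 threshold are illustrative): the select filter scores each frame for scene change, and showinfo logs the frames whose score exceeds the threshold.

  import subprocess

  result = subprocess.run(
      ["ffmpeg", "-i", "input.mp4",
       "-vf", "select='gt(scene,0.4)',showinfo",
       "-f", "null", "-"],
      capture_output=True, text=True,
  )
  # ffmpeg logs filter output to stderr; each detected scene change shows
  # up as a showinfo line carrying the frame's timestamp (pts_time).
  for line in result.stderr.splitlines():
      if "pts_time" in line:
          print(line)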

Also take note of the companies that have sprung up to supply low-volume manufacturing at good prices, to aid in prototyping and access to specialized machinery.

https://jlcpcb.com

https://sendcutsend.com

https://www.pcbway.com

https://www.knifeprint.com


If you're in the market for buy-it-for-life solid wood furniture:

https://www.thejoinery.com

https://vermontwoodsstudios.com/

https://hedgehousefurniture.com

https://57stdesign.com

https://www.57thstreetbookcase.com/ (all bookcases, some veneer and plywood)

https://www.spekeklein.com/home

https://www.pompy.com/

https://www.chiltons.com/

https://roomandboard.com (mix of solid and veneer, some MDF)

These makers are in a league of their own, very expensive, incredibly beautiful hand-made pieces:

https://www.sammaloofwoodworker.com

https://www.thosmoser.com (highly recommended)

https://nakashimawoodworkers.com (new commissions around $7K-$15K for a coffee table, $20K-40K for dining table, plus shipping; older Nakashima pieces are highly valued in the art world and sell anywhere between $15K-$300K)

https://www.wright20.com/search/nakashima/items#past

Edit: Also, to echo what someone mentioned below, if you're interested in solid wood furniture you should find a local woodworker.

Another edit and thought: I used to own a lot of IKEA furniture and, as I've gotten older, have slowly replaced those pieces with items from Knoll, with custom pieces from local woodworkers, and with a few pieces from the studios listed above. A lot of people are commenting on the cost, and yes, they're expensive and could be considered luxury goods.

But if you like art and design and you care about quality, you save for what you want to buy. I wanted to be surrounded by great craftsmanship, so instead of buying "stuff" and instead of spending money on lots of subscriptions and services, or constantly upgrading phones and computers, I buy one piece of nice furniture every year. I believe the more you appreciate the things around you, the more they begin to influence your own work, and your sense of place.

I regularly see a lot of IKEA furniture on the side of the road and in dumpsters. I think this is the difference between buying "things" and having "possessions" but that's a discussion for another day.


This one I presume:

----------------------------------------------------------------

Dear battery technology claimant,

Thank you for your submission of proposed new revolutionary battery technology. Your new technology claims to be superior to existing lithium-ion technology and is just around the corner from taking over the world. Unfortunately your technology will likely fail, because:

[ ] it is impractical to manufacture at scale.

[ ] it will be too expensive for users.

[ ] it suffers from too few recharge cycles.

[ ] it is incapable of delivering current at sufficient levels.

[ ] it lacks thermal stability at low or high temperatures.

[ ] it lacks the energy density to make it sufficiently portable.

[ ] it has too short of a lifetime.

[ ] its charge rate is too slow.

[ ] its materials are too toxic.

[ ] it is too likely to catch fire or explode.

[ ] it is too minimal of a step forward for anybody to care.

[ ] this was already done 20 years ago and didn't work then.

[ ] by the time it ships, li-ion advances will match it.

[ ] your claims are lies.

----------------------------------------------------------------


Sure, tens to hundreds of thousands of years ago nobody was working with metals at all. And the centrifugal fan he uses is a modern invention; the oldest mention of them in the literature is less than 500 years old, in De Re Metallica.

It's really interesting to think about the "could have done this but didn't" stuff!

Silver chloride is one of the less sensitive silver halides you can use in photography, but it works; it dates to about 2500 years ago when someone (the Lydians?) figured out you could separate silver from gold by firing it with salt. So you could have done photography 2500 years ago instead of 200 years ago.

There's lots of stuff in optics that only requires a Fizeau interferometer (made of a candle flame and a razor blade, Bronze Age stuff), abrasives (Paleolithic), reflective metal (Bronze Age again; Newton's mirrors were just a high-tin bronze), and an unreasonable amount of patience. Imhotep could have made a Dobsonian telescope and seen the moons of Jupiter 4700 years ago if he'd known that was a worthwhile thing to do.

Speaking of metrology, I've heard conflicting stories about surface plates: one story that the Babylonians knew about grinding three surfaces alternately against one another to make them all flat, and another that Maudslay originated the technique only about 220 years ago. (Or, sometimes, Maudslay's apprentice Whitworth.) This is clearly a technique you could have employed in the Neolithic.

Sorption pumps for fine vacuum (usually 1e-2 mbar) require a high-surface-area sorbent (zeolite or maybe even kieselguhr or ball-milled non-zeolite clay: Neolithic), probably glassblowing (Roman Republic era in Syria), sealed joints (apparently Victorians used sealing wax successfully up to HV though not UHV, and sealing wax is pine resin and beeswax: probably Paleolithic), and some way to heat up the sorbent (fire: Paleolithic). Fine vacuum is enough for thermos bottles (dewars) and CVD, among other things.

Conceivably you could have just luted together an opaque vacuum apparatus from glazed earthenware (which dates from probably 3500 years ago), using sealing wax to seal the joints. But debugging the thing or manipulating anything inside of it would have been an invincible challenge.

Sorption pumping works better if you can also cool the sorbent down, too; dry ice is today made by explosive decompression of carbon dioxide, similar to how puffed corn and rice can be made with a grain-puffing cannon, and regularly is by Chinese street vendors. Pure carbon dioxide is available by calcining limestone (thus the name: Neolithic) in a metal vessel (Bronze Age) that bubbles the result into water into a "gasometer", a bucket floating upside down. Compressing the carbon dioxide sufficiently probably requires the accurately cylindrical bores produced for the first time for things like the Dardanelles Gun (15th century). But possibly not; the firepiston in Madagascar is at least 1500 years old, dating back to the time of the Western Roman Empire, and I think it can achieve sufficiently high compression.

Mercury has been known all over the world since antiquity, though usually as a precious metal rather than a demonic pollutant. Mercury plus glassblowing (Roman Republic, again) is enough for a Sprengel pump, which can achieve 1 mPa, high vacuum, 1000 times higher vacuum than an ordinary sorption pump (though some sorption pumps are even better than the Sprengel pump). High vacuum is sufficient to make vacuum tubes.

The Pidgeon process to refine magnesium requires dolomite, ferrosilicon, and a reducing atmosphere or vacuum. You get ferrosilicon by firing iron, coke, and silica in acid refractory (such as silica). Magnesium is especially demanding of reducing atmospheres; in particular nitrogen and carbon dioxide are not good enough, so you need something like hydrogen (or, again, vacuum) to distill the magnesium out of the reaction vessel. As a structural metal magnesium isn't very useful unless you also have aluminum or zinc or manganese or silicon, which the ancients didn't; but it's a first-rate incendiary weapon and thermite reducer, permitting both the easy achievement of very high temperatures and the thermite reduction of nearly all other metals.

Copper and iron with any random kind of electrolyte makes a (rather poor) battery; this permits you to electroplate. The Baghdad Battery surely isn't such a battery, but it demonstrates that the materials available to build one were available starting in the Iron Age. Electroplating is potentially useful for corrosion resistance, but to electroplate copper onto iron you apparently need an intermediate metal like nickel or chromium to get an adherent coating, and to electroplate gold or silver you probably need cyanide or more exotic materials. Alternate possible uses for low-voltage expensive electricity include molten-salt electrolysis and the production of hydrogen from water.

Copper rectifiers and photovoltaic panels pretty much just require heating up a sheet of copper, I think? Similarly copper wires for a generator only require wire drawing (Chalcolithic I think, at least 2nd Dynasty Egypt) and something like shellac (Mahabharata-age India, though rare in Europe until 500 years ago), though many 19th-century electrical machines were instead insulated with silk cloth.

Vapor-compression air conditioners probably need pretty advanced sealing and machining techniques, but desiccant-driven air conditioners can operate entirely at atmospheric pressure. The desiccants are pretty corrosive, but beeswax-painted metal or salt-glazed ceramic pipes are probably fine for magnesium chloride ("bitterns" from making sea salt, Japanese "nigari"), and you can pump it around with a geyser pump.

I think the geyser pump is still under patent, but it can be made of unglazed earthenware or carved out of bamboo (both Neolithic) and driven by either a bellows (Neolithic) or a trompe (Renaissance).

Some years ago I figured out a way to use textile thread (and, say, tree branches) to make logic gates; I posted that to kragen-tol. So you probably could have done digital logic with Neolithic materials science, though only at kHz clock rates. And of course you could have hand-filed clockwork gears out of sheet copper as early as the Chalcolithic, instead of waiting until the Hellenistic period.

