
TCP port knocking.


The brain truly is a system with terrible service availability. On average, after running for just 16 hours, it must be offlined for 8 hours to run maintenance tasks such as "scrub", "garbage collect", "trim", and "fsck".


> The brain truly is a system with terrible service availability. On average, after running for just 16 hours, it must be offlined for 8 hours to run maintenance tasks such as "scrub", "garbage collect", "trim", and "fsck".

It's a trade-off. The brain is about as large as it can be while still making birth possible. It already uses a lot of energy (2% of body weight, 20% of energy consumption). We also need it to work at peak performance while we are active.

A background 'scrub' task to keep it working 24/7 would probably use more energy (require more food and heat dissipation 24/7), possibly require a larger area (for redundancy, similar to how dolphins can sleep one hemisphere at a time and have really large brains). An alternative would be to slow down processes enough so that those tasks could happen constantly.

And then our day/light cycles helped select for this approach. Until recently there wasn't much one could do (safely!) at night.


> The brain is about as large as it can be while making birth possible.

Is that true? The 'birthable' parameter limits only two dimensions. Could the brain evolve to a larger size in a third dimension?

https://www.youtube.com/watch?v=OS1cj-zk4ac

Maybe there is some other limitation, such as distance between neurons? Signal strength?


> The brain is about as large as it can be while making birth possible.

I wonder: if it had been beneficial to have larger brains, would we have evolved to support that? Diminishing returns, maybe, or just a local maximum we didn't get out of?


The way evolution works, a feature needs to confer an evolutionary advantage, but the specimen must also not die. So there are two adversarial pressures here, carefully balancing each other in a mammal species that already has one of the highest birth mortality rates for both mother and child. If heads were any larger, they would create a proportional amount of negative evolutionary pressure through both direct and indirect death (of the mother) at birth.

Interestingly, there seem to be some indications showing that human interventions by modern technology already show clear evolutionary trends: https://pmc.ncbi.nlm.nih.gov/articles/PMC5338417/

Humans might eventually evolve to the point of not even being able to be born naturally anymore.


That's a fascinating thought. As people with larger brains are more successful in life and more likely to have children*, mortality rates for natural births would increase, and over time we would evolve to become dependent upon modern technology.

The continued existence of our species would become dependent upon continued civilisation. A dark age could kill us, or at least cripple the population.

*how true is this? Uni-educated people tend to have lower fertility rates.


> As people with larger brains are more successful in life and more likely to have children*

> *how true is this? Uni-educated people tend to have lower fertility rates.

In the U.S. university education depends mostly on mommy and daddy's wallet size, not brain size.


I don't have data, but I'd assume that wallet size is correlated with brain size.


That would be quite a claim and I certainly wouldn't assume it!


"Children? With these economic conditions?"


"There's no way we could have a child now. Not with the market the way it is, no."

https://www.youtube.com/watch?v=sP2tUW0HDHA


If maternal mortality were the only issue, evolutionary pressure would also favor women with wider hips/birth canals. After all, we see hyperintelligent individuals at the current brain size; it's clearly possible to get more processing power in there, but there doesn't seem to be much reproductive benefit.


Beneficial kinda just means "leads to more procreation" right?

So if bigger brains meant people reproducing more, our brains would get bigger to the point that most births are cesarean or something.

I do wonder what happens when we eventually evolve to a point where we can't survive without more and more advanced technology.

A lot of people who would have died off before reproducing 200 years ago now don't, which is of course incredible for us. But what are the effects of that 100 or 1,000 years down the line?

Presumably we'll have plenty of more immediately pressing issues over that time frame.


It is interesting from a space-faring species perspective. By the time we can embark to other planets/asteroids our biology might require us to lug around significantly more technology just to survive.


Check in with various farm animals, they are already there.


You've got it all wrong, and LLMs have it all correct.

True brains, after 16hrs of actual work, need to hallucinate strongly for 8 hours or so, in order to continue their high level contributions to society.


Interesting. What if that is actually a beneficial part of our own development: comparing the nonsense in our dreams to waking life and building the ability to tell the difference?


Get an LLM to dream, and to use the time effectively to purge those hallucinations, and reinforce the "valid and true" memories, and you might have something there ?


Exactly, but that isValidAndTrue method is probably a little tricky to write...


It helps to be able to pinch yourself.


Write down the argument for later and test its hypothesis during the day?


>The brain truly is a system with terrible service availability

Taking this as a jumping off point for a way of thinking about those 'services'. It seems remarkable to me that we can initiate the attempt to think of an elephant, and then get there in one shot. We don't sort through, say, rhinos, hippos, cars, trucks. We don't seem to have to rummage.

Of course when it comes to things on the edge of our memory or the edge of our understanding, there's a lot of rummaging. But it could have been the case that everything was that way (perhaps it is that way for some animals), instead, there are some things to which we have nearly automatic, seemingly instant recall.


This makes me think of how my dog reacts very quickly, of course, for hard-wired "dog" behavior things, but when I use human language and gestures to communicate something to him, such as "go find Daddy", I can figuratively see a loading spinner over his head for several seconds, until the recognition comes and he responds. I don't know what's going on in that head, but it definitely appears to be "rummaging" from the outside. Probably similar to how we feel when conversing in a foreign language we're not fluent in.


Or when my early-riser wife talks to me about anything before I've had my coffee.


We don't actually know that 1/3 downtime is a requirement. For most of our evolutionary history, it has not been economical to remain awake at night, so our intense sleep drive may actually be driven primarily by conservation of energy (energy has been a major engineering constraint for all of our evolutionary history minus the last several hundred years or so). If that's the case, then other processes may have evolved to fit themselves into our sleeping time as an optimization, but perhaps those processes could happen while we're awake if our evolutionary constraints were different.


>it has not been economical to remain awake at night

Why? If you can gather fruit or hunt prey while all your competitors (or predators!) are asleep, isn't it an advantage? What about nocturnality? https://en.wikipedia.org/wiki/Nocturnality


Why are your competitors and predators asleep?

At night it is harder to see food. It is harder to see predators, some of whom are in fact nocturnal. It is harder to notice visual cues and gestures from allies/kin. It is harder to navigate, due to difficulty seeing both distant landmarks and nearby obstructions, so you are more likely to get lost and/or injured. It is colder, so your body has to spend more calories to keep you warm.

There are adaptations that can improve nocturnal capabilities, but these typically come with tradeoffs that make diurnal life harder. Evolution is a series of many baby steps - either you need to adapt to not sleeping while you're still at a disadvantage at night, or you need to adapt to being awake at night while you still need to sleep. Neither path seems like it would have been advantageous to our ancestors.


Well we can't see can we


If it were biologically possible, other organisms would have evolved that capability. There’s some fundamental, biological reason why all animals sleep.


Again, we don’t know that. It could easily be that the adaptations needed to operate well at night (in addition to during the day) just aren’t worth the energetic cost or they entail a large morphological compromise. The thing about evolution is it doesn’t give you its reasons.


Exactly, evolution also will leave things at "good enough for function". It may well be that our sleep cycle so happened to level off at "good enough" considering our other evolutionary constraints.

Definitely an area of study that seems interesting to me.


There was this fad of multiphasic sleep in the early 2000s.

As I remember, in theory you could sleep for 15 minutes, 6 times in 24 hours.


The polyphasic sleep experiments


I did a polyphasic sleep cycle in college (2010s) for 6 months out of necessity and it worked really well.

The way I got it to work was by meticulously tracking my REM sleep. I would have sliding 3-4 hour windows to hit REM sleep, and if I didn't, it just felt like a sluggish day, no worse than being woken up by a fire alarm at 4am or a long Thursday night.


> our intense sleep drive may actually be driven primarily by conservation of energy

Or perhaps to keep us quiet and immobile, and harder to locate and eat ?


> The brain truly is a system with terrible service availability. On average, after running for just 16 hours, it must be offlined for 8 hours to run maintenance tasks such as "scrub", "garbage collect", "trim", and "fsck".

There's hope. If the carbon chauvinists can be prevented from messing things up, AI is on track to provide something with a better SLA, which will finally allow us to decommission and junk those troublesome legacy systems without disrupting the business.


It's worse than that.

At all times, every single one of the billions of participants acts like a bureaucrat, delaying response until it's unavoidable and then resting afterwards at least half the time. If only we could cut through the bureaucracy!

Neuronal activities:

- Action potential initiation: 0.2-0.5ms

- Action potential duration: ~1-2ms

- Relative refractory period: ~2-4ms

- Total cycle time until fully ready: ~5-7ms
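The timing figures above imply a rough ceiling on sustained firing rate; a back-of-the-envelope sketch (the 5-7 ms cycle times are taken from the list above, nothing else is):

```python
# Back-of-the-envelope: if a neuron needs ~5-7 ms before it is fully
# ready to fire again, its sustained firing rate tops out well under
# a kilohertz.
cycle_fast_ms = 5.0  # optimistic total cycle time, ms
cycle_slow_ms = 7.0  # pessimistic total cycle time, ms

rate_hi = 1000.0 / cycle_fast_ms  # 200 Hz
rate_lo = 1000.0 / cycle_slow_ms  # ~143 Hz

print(f"sustained firing rate: ~{rate_lo:.0f}-{rate_hi:.0f} Hz")
```

Compare that to a CPU clocked a few billion times faster per "participant", and the bureaucracy metaphor gets even starker.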


SLAs are terrible. I agree.

But at least there are (usually) some exciting shows on while you are waiting!


On the other hand, the heart delivers a lifetime of service without any maintenance; that's a true wonder of nature.


Its "maintenance" is built into its design


I believe it is not only garbage collecting. It is also doing backpropagation on the memories of the day before. After 8 hours you get an updated, more optimized service.


This is the insight missing from everyone comparing LLM parameter counts to human neurons or synapses. The human model gets a new version every day, and the digital one costs $5B of energy and a year to do the same.


I think this is the first time I've seen someone say something correct about AI on HN in years!


I also wonder why cats sleep so much. Is it mainly because there’s nothing for them to do during the day, so why not sleep? Whereas humans can be active all day?


Carnivores tend to sleep longer than omnivores, who tend to sleep longer than herbivores. For a hunting carnivore, energy comes in big bursts, so it makes sense that they would be active for a short period of time, and hoard energy when they didn't need to be active. For a cud-chewing herbivore, time spent not chewing is time spent not creating energy. Obviously, this is a broad generalization - feeding habits, day/night cycles, predator/prey behaviors all factor into a particular animal. But it probably explains why your cat, like the panther at the zoo, spends most of its time asleep.


Also, cats and panthers are crepuscular, active at dawn and dusk. Which leads to lots of waiting for that time of day.


Dolphins have a much better system, they take half of it offline for maintenance while the other half stays on for 100% uptime. Fancy that.


And after a while, the system gets bad enough that fsck starts failing regularly.

Really poor design.


It even has random downtime during the day (hello, power naps)


Through formal meditation practice, you can train the brain to perform these as background tasks in the waking state.


I'm not sure I buy this. Meditation can give you distance from the "I" part of the brain but it doesn't seem equivalent to an on-demand GC.


You aren’t overclocking your system?


Is a binary search involved in this "gambling system"?


You get enough respondents that you can give them all different combinations. Each race you disregard the losers, and when the population gets low enough that you might miss a race, you just say "hey, I predicted x races, call Bob to confirm".


Yes, that's basically how it worked. To be clear, Brown was careful to promote and run it as entertainment, not a scam. He made very clear up front that those choosing to participate were very likely to lose their money, and insisted that the early bet amounts were minimal "fun money". The ever-shrinking number of those continuing to later rounds had made enough from earlier wins to offset the eventual loss.
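The arithmetic of the trick can be sketched in a few lines (the counts of 6 horses and 6 races are illustrative, not taken from the show):

```python
# A sketch of the survivor-pool trick described above: mail out every
# possible prediction, and after each race only those who happened to
# receive the winning pick keep a perfect record.
def survivor_pool(horses=6, races=6):
    # Start with enough participants that one of them is guaranteed
    # to receive a correct prediction in every race.
    remaining = horses ** races
    for r in range(1, races + 1):
        # Whatever the actual result, only the 1/horses fraction that
        # was told the winning horse stays in the pool.
        remaining //= horses
        print(f"after race {r}: {remaining} still have a perfect record")
    return remaining

survivor_pool()  # whittles 46656 recipients down to a single "psychic"
```

From the last participant's point of view, the predictor was right six times in a row; from the sender's, every outcome was covered every time.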


+1. Gordon Bell said a new computer category would enter the market every decade [1]. The PDP-8 and later the PDP-11 were the quintessential minicomputer category makers. They were basically the microcomputer-equivalent of the 1970s. Both brought great cost reductions in their respective eras.

[1] https://en.wikipedia.org/wiki/Bell%27s_law_of_computer_class...


In 1987, doxxing yourself was the norm. ~80% of the Usenet messages from the 1980s (to the early 1990s) had names, institutions, office addresses, and phone numbers attached to them too. Most were university, government, or corporate R&D addresses, but there were many small businesses and home users as well. Some phone numbers are probably still valid today. In fact, a Usenet archive (UTZoo) has already been taken down from the Internet Archive due to an alleged legal threat made by an individual (even though this archive is indispensable if anyone wants to find historical information from this era, and it had been available online for ~20 years before it was taken down, with multiple copies still online). I suspect the legal status of these kinds of early online community archives will become increasingly problematic over time.


Magazines used to have pen pal sections where kids would post their name, age, and address so anyone could write them. It was a different time.


BBSes usually required your name, home address, and phone number as part of signing up for an account. I never gave it much thought when entering my information nor later when collecting it from others when I was a SysOp. Once in a while when driving around I might see a street sign and think "Oh, that's where Frisbee_Guy lives" if I recognized it but never considered using it for anything nefarious. Also remember computer magazines of the time often had sections where people would get their problems published along with an address for anyone wanting to contact them about it. Sometimes even a phone number.


1/2 c in circuit boards (FR-4), 1/3 c in cables, two useful numbers to remember.


Thanks, nice! But wait - so we have

1/2 c ~ 150 km/ms in circuit board.

1/3 c ~ 100 km/ms in cable. And...

2/3 c ~ 200 km/ms in fiber?

I'm a bit confused about the difference between cable and fiber heh :)


Sorry, it was a typo. I meant 2/3 (including common cables and fiber optics), not 1/3.


Depends on what kind of cable? Twisted-pair network cable is at 2/3 c.


What an embarrassing typo! I was thinking of 0.66, and somehow I thought 0.66 = 1/3 (must've been distracted by the "2" in 1/2). I should've written 0.66 or 2/3.
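Putting the thread's corrected rules of thumb together, a quick back-of-the-envelope (the velocity factors are the approximate ones quoted above, not exact values):

```python
# One-way signal propagation speeds using rough velocity factors:
# ~1/2 c in FR-4 PCB traces, ~2/3 c in common cables and fiber.
C_KM_PER_MS = 299.792  # speed of light in vacuum, km per millisecond

velocity_factor = {
    "FR-4 PCB trace": 1 / 2,       # ~150 km/ms
    "twisted pair / coax": 2 / 3,  # ~200 km/ms
    "optical fiber": 2 / 3,        # ~200 km/ms
}

for medium, vf in velocity_factor.items():
    speed_km_per_ms = vf * C_KM_PER_MS
    delay_ms = 1000 / speed_km_per_ms  # e.g. across a 1000 km link
    print(f"{medium}: ~{speed_km_per_ms:.0f} km/ms, "
          f"1000 km one-way ~ {delay_ms:.1f} ms")
```

Which is why "150 km/ms on the board, 200 km/ms on the wire" are handy numbers to keep in your head.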


Knowing superconductivity makes magnets less mysterious. Once you accept that physics absolutely allows the creation of a static magnetic field from a circulating current that flows forever in a zero-resistance inductor coil, then the existence of ferromagnetism is no stranger than that - to a first approximation, it also comes from circulating currents, "just" on a subatomic scale. [1] It's kind of surprising that the Atomic Current Hypothesis of ferromagnetism was already proposed by Ampere back then. Following the same heuristics, the fact also becomes clear that the energy in an inductor coil can't really be "spent" to do useful work forever without de-energizing it, and the same is true for permanent magnets. [2]

[1] https://www.feynmanlectures.caltech.edu/II_36.html

[2] This intuition debunks many types of incorrect "infinite energy of magnets" ideas that lead to perpetual motion. Although it can't debunk the "perpetual motion solely from an uneven static (electromagnetic or gravitational) field" idea, which is even older.


Ferromagnetism has nothing to do with currents, it is due to aligned spins of partially filled shells. Below a certain temperature (Curie temperature of the material), exchange interaction (which penalizes any misalignment, in the case of ferromagnetic exchange interaction) between electrons leads to this alignment.

Spin is a type of intrinsic angular momentum that is not associated with any spatial motion.

The Feynman lecture you linked to is an explanation of why currents fail to explain ferromagnetism. You need to read the next chapter, but being a lecture for undergrads, it doesn't go deep into the subject anyway. If you're really interested, any modern book on magnetism would be much more helpful.


You said,

> Ferromagnetism has nothing to do with currents

This is why I said ferromagnetism is circulating current in the sense of "to a first approximation" and "heuristically". Wiktionary defines "heuristic" to be:

> a practical method [...] not following or derived from any theory, or based on an advisedly oversimplified one.

I think that if you ask Feynman, he would probably agree or sympathize with the naive idea of "atomic currents" as a heuristic argument in the introduction of this topic... which is nothing new anyway, and has been a heuristic argument used in electromagnetism for a long time, at least before QM.

In Feynman's own words,

> These days, however, we know that the magnetization of materials comes from circulating currents within the atoms—either from the spinning electrons or from the motion of the electrons in the atom. It is therefore nicer from a physical point of view to describe things realistically in terms of the atomic currents [...] sometimes called “Ampèrian” currents, because Ampère first suggested that the magnetism of matter came from circulating atomic currents.

You said,

> Spin is a type of intrinsic angular momentum that is not associated with any spatial motion.

Yet the concept of spin in quantum mechanics was originally developed using macroscopic rotations as an analogy, although today we know that spin is an intrinsic property of subatomic particles (thus the joke, "Imagine a ball that is spinning, except it is not a ball and it is not spinning.") In the same sense that Ampère's concept of "atomic currents" was developed using circulating electric current as an analogy.

> The Feynman lecture you linked to is an explanation why currents fail to explain ferromagnetism. You need to read the next chapter.

Of course, "The actual microscopic current density in magnetized matter is, of course, very complicated." This is surely explained in the next chapter. I could've mentioned "atomic currents" without citing any link, but I included it to allow anyone who's interested to read the whole thing in context.


To the parent and its sibling comments: There is no atomic or subatomic current that can explain ferromagnetism in any approximation.

You read some Wikipedia pages and the Feynman Lectures on Physics. I'm a physicist who has done well over a decade of research in magnetic materials.

In the course of understanding ferromagnetism, many incorrect theories were proposed. By connecting ferromagnetism to circulating currents (i.e., paramagnetism and diamagnetism), you just repeated the same mistake.

You're trying to bend the words to avoid being wrong. Physics is not philosophy or a debate club. There is no approximation in physics in which the electron is a ball with some radius, or in which its spin is due to a circulating current. Any such explanation attempt fails spectacularly if you actually try to do the math (which gives an electron surface moving faster than the speed of light, as Uhlenbeck and Goudsmit, who proposed this incorrect idea, quickly found out), so it doesn't even work as an approximation of any kind.

> Yet the concept of spin in quantum mechanics was originally developed using macroscopic rotations as an analogy,

Who developed this theory in quantum mechanics, where and when? Pauli, who first introduced it into quantum mechanics and is the namesake of the spin-1/2 matrices, insisted that it is purely quantum mechanical with no classical analogue. And regardless of who said what over 100 years ago, today it is well understood that spin has nothing to do with electric charges that move or rotate in space.

More importantly, the reason ferromagnetism develops in the first place is due to exchange interaction (as I wrote above) between magnetic moments, which is due to Pauli exclusion principle and also has nothing to do with movement of charges.

Furthermore, such magnetic moments (called magnetic impurities in that context) ruin the superconducting order by breaking the time-reversal symmetry, so trying to make a connection to ferromagnetism in the context of superconductivity is even worse.


> You read some Wikipedia pages and Feynman lectures of physics. I'm a physicist who has done well over a decade of research in magnetic materials.

In the same way that a geodesist navigates using a reference ellipsoid defined by WGS-84, while a city commuter uses Cartesian coordinates on a flat map. The commuter's navigational tool will never work in geophysics research, and it doesn't need to be.

> To the parent and its sibling comments: There is no atomic or subatomic current that can explain ferromagnetism in any approximation. [...] Any such explanation attempt fails spectacularly if you actually try to do the math (which gives an electron surface that is moving faster than speed of light, as Uhlenbeck/Goudsmit who proposed this incorrect idea quickly found out), so it doesn't even work as an approximation of any kind.

I consider "circulating currents create ferromagnetism" to be as true as "an atom's structure is similar to a solar system." Both concepts break down when examined in detail, so their use by research physicists is obviously unacceptable, but I consider them nevertheless useful mental images in introductory discussions among non-physicists.

Would you consider Rutherford's original atom model to be a first approximation? Can it be considered a very oversimplified but useful heuristic, at least when people who don't know anything about atoms are first introduced to the concept? Alternatively, would you consider Rutherford's atom to be "an explanation attempt that fails spectacularly if you actually try to do the math (which gives an electron that collapses into the nucleus in picoseconds, as Rutherford's colleagues quickly found out)"?

If you believe the latter, everyone can stop this conversation right now. Because it means the entire disagreement comes down to what kinds of "mental images" are acceptable, rather than anything factual, like "whether a full quantum treatment is necessary to completely explain ferromagnetism (of course it is)." The rest of us who don't solve research problems believe a toy model is still interesting, and don't deny (nor mention) better models. You, as a professional physicist, believe many "what if?" mental models from history are just not legitimate physics, and should not be mentioned under any circumstances to avoid poisoning the minds of youths - an approach known as Whig history, in which scientific progress marches from one victory to another, and all losers be damned - a perfectly valid approach for teaching physics to students who only care about the pure science, instead of "who said what."

As a side note, I know some engineers who really hate the idea that electric circuits work due to an electron flow. The most extreme one I've seen wanted to ban this concept from introductory textbooks, calling it a big lie (an explanation attempt that fails spectacularly if you actually try to do the math, which gives an electron speed 30 billion times slower than the speed of light in free space). As we all know, the steady-state electron flow is only a result of the transient propagation and reflection of electromagnetic waves in free space or dielectric materials. Thus, they believe the wave model should be the only interpretation in a science textbook, since "they're high-school teachers, I'm a design engineer who works with high-speed digital systems with 20 years of experience, and I know for sure that high-speed circuits and computers can't even be made functional if you ignore fields and transmission-line effects." Meanwhile, I believe the electron-flow model still works as an introductory mental image (although the field view perhaps needs to be mentioned earlier).

> Who developed this theory in quantum mechanics, where and when? Pauli, who first introduced it into quantum mechanics and the namesake of spin 1/2 matrices, insisted that it is purely quantum mechanical with no classical analogue.

The earlier "electron as a rotating ball" idea was considered by Ralph Kronig and by Uhlenbeck-Goudsmit in 1925. Pauli personally never accepted it due to its unphysical flaws. Only in 1927 did Pauli publish a rigorous QM treatment. Thus, "electron spin using classical rotation as an analogue" was still an intermediate step before the concept was established in QM. It became a footnote in history because Pauli was a great physicist who had already considered the problem himself and found the solution before everyone else; otherwise this intermediate step might have lasted longer than 2 years.

> Furthermore, such magnetic moments (called magnetic impurities in that context) ruin the superconducting order by breaking the time-reversal symmetry, so trying to make a connection to ferromagnetism in the context of superconductivity is even worse.

This, in comparison, is a more interesting criticism.


What you say is correct only when you adopt certain specific narrow definitions of the words, which you have not explained.

In its original sense, an electric current is any kind of movement of electric charge. In this wide sense, it also applies to the source of ferromagnetism.

Its meaning can be restricted to refer to the translational movement of electrically charged particles. Even with this narrower sense, there is still no need to use quantum mechanics to explain ferromagnetism. In classical electromagnetism, with the narrower-defined current, the sources of magnetic fields are decomposed into distributions of electric current densities and of magnetic moment densities, where the latter are the source of ferromagnetism. If necessary, it is also possible to use distributions of higher-order moment densities; in that series of moments, the "electric current" used in the narrow sense (a first-order moment) corresponds to the "electric current" used in its original, wide sense.

The isolated sentence "Spin is a type of intrinsic angular momentum that is not associated with any spatial motion" is logically contradictory (because, by definition, angular momentum is a characteristic of moving bodies). It can be correct only when you first specify that by "spatial motion" you mean only a certain kind of spatial motion.

The joke mentioned by another poster "Imagine a ball that is spinning, except it is not a ball and it is not spinning" is just a joke, because there is no doubt that the elementary particles are spinning.

Even when you model the elementary particles in the standard way, as point-like bodies (and it is debatable whether this is a good model), you cannot say that they are not rotating, because this would be the same mistake as saying that a delta distribution has a null value in the origin.

On the contrary, while you cannot say other things about the value of a delta distribution in the origin, what you can say with certainty is that it is not null.

In the same way, while you cannot say anything about characteristics of an electron like radius, mass density, angular velocity, electric current density and so on, you can say with certainty the values of various integral quantities (which integrate the corresponding delta distributions), like mass, electric charge, angular momentum and magnetic moment, so you can say with certainty that any electron is rotating (i.e. it has a non-null angular momentum).


As other commenters have said, whether or not an electron’s magnetic moment is “to do with currents” is a little open to interpretation.

I’ll add that the Dirac equation (governing electron field) correctly predicts magnetic moment given the inputs of charge and mass. * I interpret this as indicating that magnetic moment is a derived phenomenon just as it would be in the classical picture of a spinning ball of charge; I.e. the quantum picture refines but does not totally discard the classical understanding.

* Well, technically, sympathetic vibrations with all the other standard model fields also make tiny contributions to the magnetic moment.


> I must confess, when I tried to answer the question I got it wrong...! (I feel silly).

In programming there are two difficult problems - naming things, cache invalidation, and off-by-one error.


> There’s also part of, good designs don’t depend on high precision components. I think TAoE emphasized that.

If I recall correctly, TAoE said engineering calculations should never keep too many significant digits, since no real-world components are that accurate, and all good designs should keep component tolerances in mind - they should not have an unrealistic expectation of precision. It also mentioned that designing a circuit for the absolute worst-case tolerance is often a waste of time.

But I don't think TAoE told you to "avoid precision components in your design, use trimmers instead" (do you have a page number?) when the application calls for them. For example, 0.1% feedback resistors in precision voltage references are often reasonable.

> For high precision one can use trim potentiometers

From what I've read (in other sources), mechanical trimmers used to be extremely popular, but they have fallen out of favor in recent decades because tuning could not be automated, which increased assembly cost. Using a 0.1% resistor is favorable if it allows trim-free production.

> or maybe even digital potentiometer with an ADC at the other side to measure and get as close as possible

Yes, digital trimming and calibration is today's go-to solution.


> Maxwell was also not really taking advantage of them much and always split them up into scalar and vector part.

Did Maxwell actually use quaternions? If I recall correctly, at least in A Treatise on Electricity and Magnetism, quaternions were not actually used. Instead, he did most things in Cartesian coordinates, and all equations were applied to a vector's x, y, z components tediously. But many sources claimed Maxwell used quaternions, including quotes from Lord Kelvin. My reading on this part of history is limited, so my guess is that he did use them in personal research or in later papers. On the other hand, some other physicists of the same era used quaternions extensively, including applying them to Maxwell's electromagnetism, that is a sure fact...

Coincidentally, A Treatise on Electricity and Magnetism was written as an overview of all electromagnetic phenomena as a whole, so it paid very little special attention to the generation and transmission of electromagnetic waves. Combined with its difficult math, the book would puzzle physicists for another decade before they saw the light in it, making this a rather curious period in the history of electromagnetism.

> I've been trying to find these 4 equations in Heaviside's writing but so far have not been successful.

In 1885, Heaviside published Electromagnetic Induction and Its Propagation in The Electrician, and formulated what he called the "Duplex Form" of Maxwell's equations. This was a long series of papers published over several months, later republished in Electrical Papers, Volume I. Basically, following his physical intuition, he felt that electric and magnetic fields should be symmetric and generate each other, and that this should be directly reflected in the equations.

The logic of the papers went as follows.

First, he started with a definition of electric current [1]:

    C = kE
    D = cE / 4π
    Γ = C + Ḋ
in which E denotes electric force, C denotes conduction current, k denotes specific conductivity, D denotes electric displacement, and c denotes the dielectric constant. Finally, Γ denotes the true electric current: the sum of the conduction current and the displacement current Ḋ.

Next, a definition of magnetic current [2]:

    B  = µH
    G  = Ḃ / 4π = µḢ / 4π
    G' = gH + µḢ / 4π
H denotes magnetic force, B denotes magnetic induction, µ denotes permeability, and G denotes magnetic current; Ḃ and Ḣ are time derivatives of B and H (Newton's notation). Hypothetically, if magnetic monopoles existed (a possibility Heaviside entertained), G' would denote the "true magnetic current", with an extra conduction term gH, where g is a constant analogous to k.

Then, he introduced the concepts of divergence and curl, and their physical significance [3]. After more discussion and derivation, he finally wrote [4]:

    curl (H - h)  = 4πΓ  = 4πkE + cĖ
    -curl (e - E) = 4πG' = 4πgH + µḢ
in which e and h denote impressed electric and magnetic forces, introduced to take static fields into account. Finally, since magnetic monopoles don't exist, he set g = 0, but kept the term in the equations for symmetry and elegance. [0]

This is the core of Heaviside's Duplex Form of Maxwell's equations. One can clearly see the co-evolution of electric and magnetic fields, and it is the precursor of the modern vector calculus formulation of Maxwell's equations as we know them today. As far as I know, his treatment of "physical" vectors as first-class objects was his original invention (independently made by Gibbs as well), although the underlying concepts came from quaternions.
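For reference, setting g = 0 and dropping the impressed forces e and h, the duplex pair in today's notation reads (keeping Heaviside's symbols and his unit conventions, so c here is still the dielectric constant, not the speed of light):

```latex
\begin{aligned}
\nabla \times \mathbf{H} &= 4\pi k\mathbf{E} + c\,\dot{\mathbf{E}} = 4\pi\Gamma
  && \text{(Amp\`ere--Maxwell law)} \\
-\nabla \times \mathbf{E} &= \mu\,\dot{\mathbf{H}}
  && \text{(Faraday's law)}
\end{aligned}
```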

This is not a complete summary, as he continued his analysis in a series of publications.

A good book on this part of history is Oliver Heaviside: the life, work, and times of an electrical genius of the Victorian age, by Paul J. Nahin.

[0] So the claim that "Maxwell's equations need modifications if a magnetic monopole is discovered" is historically inaccurate; rather, they would simply be restored to Heaviside's original form.

[1] Electrical Papers, Volume I, Page 429, https://archive.org/details/electricalpapers01heavuoft/page/...

[2] Page 441: https://archive.org/details/electricalpapers01heavuoft/page/...

[3] Page 443: https://archive.org/details/electricalpapers01heavuoft/page/...

[4] Page 449: https://archive.org/details/electricalpapers01heavuoft/page/...


I didn't see the reply until this late; hopefully you will see mine.

He did indeed use quaternions but it's not easy to find: https://archive.org/details/atreatiseonelec02maxwgoog/page/2...

Unfortunately these equations are not entirely free of mistakes (I remember a missing dot on a time derivative) compared to the component form. They're correct in the Wikipedia article: https://en.wikipedia.org/wiki/History_of_Maxwell's_equations...

If you replace S.∇ with ∇· and V.∇ with ∇× you essentially get the vector calculus version of the equations.
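One caveat on the sign: in the quaternion product, ∇ applied to a pure vector gives −∇· as the scalar part (which is why Maxwell spoke of "convergence" rather than divergence) and ∇× as the vector part. This can be checked mechanically; here is a self-contained sketch using an arbitrary polynomial field F = (xy, yz, zx) with its partial derivatives written out by hand:

```python
def qmul(a, b):
    """Hamilton product of quaternions represented as (w, x, y, z) tuples."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qsum(qs):
    """Component-wise sum of a list of quaternions."""
    return tuple(sum(parts) for parts in zip(*qs))

# Evaluate at an arbitrary point; F(x, y, z) = (xy, yz, zx).
x, y, z = 1.0, 2.0, 3.0
dF_dx = (0.0, y, 0.0, z)   # ∂F/∂x as a pure quaternion
dF_dy = (0.0, x, z, 0.0)   # ∂F/∂y
dF_dz = (0.0, 0.0, y, x)   # ∂F/∂z

# Quaternion nabla applied to F: i·∂F/∂x + j·∂F/∂y + k·∂F/∂z
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
nabla_F = qsum([qmul(i, dF_dx), qmul(j, dF_dy), qmul(k, dF_dz)])

div_F = y + z + x            # ∇·F = y + z + x
curl_F = (-y, -z, -x)        # ∇×F = (-y, -z, -x)

assert nabla_F[0] == -div_F  # scalar part is MINUS the divergence
assert nabla_F[1:] == curl_F # vector part is the curl
```

So S.∇ really corresponds to −∇·, and V.∇ to ∇×; the i² = j² = k² = −1 rule is what produces that sign.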

Thank you for extracting the core ideas out of this lengthy text. But I'm still wondering where this very concise present-day formulation with just 4 equations was first written down, even if you can somehow find them scattered around in the book. I found something about Hertz but didn't try to follow up on it; I think he may have only considered a vacuum.


I remember reading a great answer [1] on Stack Exchange, which claims:

> the 1873 treatise used a pre-Heaviside form of vector calculus cannibalized from Hamilton's quaternions ... only sparingly, to present the equations in capsule summary form.

Thanks for the reply. From your link, I now understand what "vector calculus cannibalized from Hamilton's quaternions ... only sparingly" means.

[1] https://hsm.stackexchange.com/a/15618


Also, on page 452 [5], Heaviside wrote:

    div B = 0
Finally, on page 475 [6]:

    div D = ρ
So yes, essentially all four of Maxwell's equations were already here.

[5] https://archive.org/details/electricalpapers01heavuoft/page/...

[6] https://archive.org/details/electricalpapers01heavuoft/page/...

