There's a factor not considered here: to what extent were Scalia & Ginsburg able to get along because of other material conditions?
As Supreme Court justices, we can assume that they had a basic foundation of psychological and material security - a position of prestige, a job for life, healthcare and so on.
I believe it is a lot easier to summon the "higher thoughts" necessary to be civil when one's personal position is more secure, so to achieve a more civil society it may help to work on making insecure people more secure.
I think you're right to bring up the NFLT (No Free Lunch Theorem), but I don't think it is applicable here; it just points at the real question.
The key assumption needed to get the NFLT is that each environment's vote has the same weight, i.e. that we are targeting a uniform distribution over objective functions / environments / problems / whatever you call them.
If you break this assumption, you get the opposite result, which is that search algorithms divide into equivalence classes determined by the sets of different outcomes (traces, if I remember the theorem's description) that you discriminate between.
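For concreteness, the statement I have in mind is the Wolpert and Macready (1997) formulation (quoting from memory, so take the notation as approximate): for any two search algorithms a_1 and a_2,

    \sum_{f} P(d^y_m \mid f, m, a_1) \;=\; \sum_{f} P(d^y_m \mid f, m, a_2)

where the sum runs uniformly over all objective functions f and d^y_m is the trace of m cost values seen so far. Averaged over every f with equal weight, no algorithm beats any other.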
A uniform distribution like this is actually a very strong precondition. It implies (by counting arguments about sets of strings, since choosing an environment is like choosing a string from 2^N given some encoding) that you care equally about a vast number of environments, most of which have no compressible structure - equivalently, huge Kolmogorov complexity. Most of these environments have no compact encoding relative to any particular choice of machine, yet we weigh them the same as the environments that are actually implementable using less than a ridiculous amount of storage.
That is why I think the assumption is too strong to use: we don't care about these quadrillions of problems with no compact encoding - we know this because we literally can't encounter them; they would be too large to ever write down using ordinary matter.
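A toy counting argument makes the scale concrete (a sketch in Python, with arbitrary toy sizes):

    # Count environments as boolean functions f: {0,1}^n -> {0,1}.
    # There are 2^(2^n) such functions, but fewer than 2^L programs of
    # length < L bits exist, so at most 2^L environments can have a
    # description shorter than L bits - on any reference machine.
    n = 30                    # input size in bits (arbitrary toy choice)
    L = 10 ** 9               # a generous one-gigabit description budget

    log2_num_envs = 2 ** n    # log2 of the number of environments
    log2_num_compact = L      # log2 of an upper bound on compact ones

    # The fraction with a sub-gigabit description is at most 2^(L - 2^n):
    print(f"fraction <= 2^-{log2_num_envs - log2_num_compact}")
    # -> fraction <= 2^-73741824

Under a uniform vote, essentially all of the weight sits on environments we could never write down.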
Allowing for this, talking usefully about evaluating an AGI (or equivalently a search strategy or optimization algorithm) implies having an understanding of the distribution of environments / problems we care about. I think capturing this concept in a 'neat' way would be a significant contribution; I had a go during my PhD but failed to get anywhere. Unfortunately, things like K-complexity are uncomputable, so reasoning about distributions in those terms is a dead end.
Right, the environments are not uniformly distributed. In fact, the paper actually defines not one single intelligence comparator but an infinite family, parametrized by a hyperparameter which is, essentially, a choice of which environments vote and how to count their votes. Crucially, this doesn't change the truth of the structural theorems (except that some of the theorems require that the hyperparameter satisfy certain constraints).
Other authors (Legg and Hutter, 2007) followed the line of reasoning in your comment much more literally. They proposed to measure the intelligence of an agent as the infinite sum of the expected rewards the agent achieves in each computable environment, weighted by 2^-K where K is the environment's Kolmogorov complexity. That seems as if it gives "one true measure" of intelligence, but actually it doesn't, because Kolmogorov complexity depends on a reference universal Turing machine (Hutter himself eventually acknowledged how big a problem this is for his definition; see Leike and Hutter, 2015).
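For reference, their measure (modulo notation, which I'm recalling from memory) is

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the class of computable environments, V^pi_mu is the expected total reward agent pi achieves in environment mu, and K is Kolmogorov complexity relative to the chosen reference machine - which is exactly where the arbitrariness sneaks in.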
My position is that any attempt to come up with "one true comparison of intelligence" (as opposed to a parametrized family) should be viewed with skepticism, because relative intelligence really must depend on a lot of arbitrary choices.
Hah, interesting - this is a reference I hadn't seen and I like the sound of it. There was me thinking I'd had an idea of my own one time!
The reference machine would be the next problem to argue about if using 2^-K as the weight: you can make the K-complexity of any particular string low by adding an instruction to your machine that just outputs that string, but this is clearly cheating! So there ought to be a connection between the reference machine and some real physics, since we are perhaps not interested in building optimisers that perform well in universes whose physics is very different to ours.
Sadly, even if this were cracked, I think the fact that K is uncomputable would make the result likely to be useless in practice.
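You can see a computable shadow of the reference-machine problem by swapping compressors as stand-ins for K (a toy illustration only; compressed length is not K-complexity):

    import bz2, lzma, zlib

    # Two "environments", encoded as strings, each with a different
    # kind of regularity. Compressed length is a poor but computable
    # stand-in for description length on a given "reference machine".
    a = b"ab" * 500            # highly repetitive
    b = bytes(range(256)) * 4  # regular in a different way

    for name, compress in [("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)]:
        print(name, len(compress(a)), len(compress(b)))

The absolute lengths differ by compressor, and in general even rankings can change - just as K is only defined up to a machine-dependent constant.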
It still suffers from the problem that it's highly lopsided in favor of simpler environments. Of course you're absolutely right that environments too complex to exist in our universe should get low weight. But it's hard to find the right "Goldilocks zone" where those ultra-complex environments are discounted sufficiently, medium-complexity environments aren't overly disenfranchised, and ultra-simple environments aren't given overwhelming authority.
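The lopsidedness is easy to quantify in a toy model - say, one environment per complexity level k, each weighted 2^-k (an oversimplification, but it shows the shape of the problem):

    # Toy model: one environment at each complexity level k, weight 2^-k.
    weights = [2.0 ** -k for k in range(1, 61)]
    total = sum(weights)              # ~1, a geometric series

    # Share of the vote held by the five simplest environments:
    print(sum(weights[:5]) / total)   # ~0.969

The simplest environment alone outvotes everything deeper in the tail combined, so an agent tuned to a handful of trivial environments can look highly "intelligent" under this weighting.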
>There was me thinking I'd had an idea of my own one time!
I wouldn't give up. Although it's such a long paper, Legg and Hutter 2007 actually has very little solid content: they propose the definition, and the rest of the paper is mostly filler. There are approximately zero theorems or even formal conjectures. One area I think is ripe for contributions would be to better articulate what the desired properties of an intelligence measure should be. Legg and Hutter offered a measure using Kolmogorov weights, but why is that any better than just assigning gibberish numbers to agents in some haphazard way - what axioms does it satisfy that one might want an intelligence measure to satisfy?
Yep, it's clear that the NFLT only applies if we weight all possible environments equally.
In practice, we are indeed not interested in every imaginable environment, only in "realistic" ones.
It was not clear to me whether the paper addressed such concerns for AGI, e.g. when writing:
> To achieve good rewards across the universe of all environments, such an AI would need to have (or appear to have) creativity (for those environments intended to reward creativity), pattern-matching skills (for those environments intended to reward pattern-matching), ability to adapt and learn (for those environments which do not explicitly advertise what things they are intended to reward, or whose goals change over time), etc.
But like I said, I only skimmed it.
In general (not talking about the paper here), I have the impression that this is something that may be missed (sometimes even by researchers working in the domain), and I agree very much with your point!
This is why I think the NFLT gives us an interesting theoretical insight here:
Making a "General" AI is not actually about creating an approach that is able to learn efficiently about any type of environment.
Yes - I think you're right that the actual interesting result from NFLT is not that 'optimisation is impossible', but that 'uniform priors are stupid'.
I agree with most of what you have said - we need to exert political pressure by taking action. This action probably needs to be disruptive and unpleasant to work, like the actions taken by the civil rights movements of the last century.
However, I would also like to put forward the following argument for why your own efforts could make a difference:
Imagine a trolley, speeding toward a junction. On one branch is a child, tied to the rails.
You are in the plant room and can cut power to the trolley, but this will only slow it down - the trolley's momentum alone will kill the child. However, you see across the way a stranger in the signal box, surrounded by levers controlling the points in the station. They are frantically pulling levers, but so far they haven't hit on the one which diverts the trolley.
Should you cut the power to slow the trolley?
We are on the tracks - if we survive, it will be because of a political or technical breakthrough before it's too late. We don't know precisely when too late is - it could be ten years, or twenty, or ten years ago and we're buggered.
Each individual's emissions savings make too late a little later, which changes the odds of survival a little bit (or our estimate of the odds - this is probably the philosophical weak spot in the argument).
Maybe the plane trip you don't take or the car you stop driving or the product you choose not to buy is the marginal decision that gives time to avert disaster. If we do avert disaster, one of these decisions must be that marginal decision, somewhere, somewhen.
These choices are tickets in the not-extinction lottery, and it makes sense to play when you can, as much as you can.
If you have a (virtually) unlimited amount of time and energy to expend on thinking about climate change and its impacts, go for it.
My point is that you will get far more bang for your buck if you spend energy on political action. Most ordinary people don't have time to spend on HN arguing about the best way of stopping climate change; they have other things to do in their lives -- and we should be convincing them to take political action (with us) instead of wasting their limited amounts of spare energy on minor personal changes that won't have as much of an impact.
I guess if you have a limited budget to spend, I would put political action at the top of the list. However, I think typically people have several limited budgets which are kind of incommensurable.
For example, for me, the decision never to fly again hasn't cost me any of the action points I spend on my involvement in political campaigns, nor has it limited my decision to work in this field for a lower wage than I could get in adtech, or whatever.
Where efforts are not orthogonal like this I'd say go for politics first though. So we are in agreement!
A COP of 5 is quite optimistic for a domestic scale heat pump, at least in the climate where I work (UK). Sensible values here would be more like 2 - 3.5 depending on the temperature gradient required.
This is a function of how big the radiators are and how cold the heat reservoir is, so if you plug an air->water device into some normal radiators its performance won't be good on a really cold day, whereas if you plug a ground->water or water->water device into underfloor heating it'll probably be pretty good.
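A rough way to see why is the Carnot limit (a sketch in Python; the 45%-of-Carnot factor is a rule of thumb I'm assuming, not a measured figure):

    def cop_estimate(t_source_c, t_flow_c, carnot_fraction=0.45):
        """Heating COP estimate: a fixed fraction of the Carnot
        limit, which depends only on the temperature lift."""
        t_hot = t_flow_c + 273.15     # condenser side, in kelvin
        t_cold = t_source_c + 273.15  # source side, in kelvin
        return carnot_fraction * t_hot / (t_hot - t_cold)

    # Air-source into normal radiators on a cold UK day:
    print(cop_estimate(-2, 55))   # ~2.6
    # Ground-source into underfloor heating (much smaller lift):
    print(cop_estimate(8, 35))    # ~5.1

The smaller the lift between source and emitter, the better the COP, which is why emitter size and reservoir temperature dominate.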
The big question (in the UK) for per-dwelling heat pumps is whether the distribution network has enough capacity to meet the winter peak load - if everyone gets a heat pump the cost might have to include replacing a lot of substations and distribution wires.
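Back of the envelope, with numbers I'm only assuming to be representative:

    # Illustrative figures, not survey data:
    peak_heat_kw = 8.0      # design-day heat demand of a typical dwelling
    cop_at_peak = 2.5       # COP in the coldest (worst) conditions
    existing_admd_kw = 1.5  # rough after-diversity demand per home today

    hp_electric_kw = peak_heat_kw / cop_at_peak  # 3.2 kW per home
    print(hp_electric_kw / existing_admd_kw)     # ~2.1

If every home on a feeder adds ~3 kW of coincident winter load on top of a network sized for 1-2 kW per home, a lot of substations and cables need upgrading; thermal stores and diversity soften this, but don't remove it.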
I am interested in the question of whether you could have PV cells on your roof with a coolant loop that goes into a heat pump, and a thermal store in the house for buffering. Then when it's sunny you can dump some electrical heat into your thermal store whilst also using your heat pump to chill the PV cells, keeping them at high efficiency. When it's not sunny you can draw off your thermal store, giving a high efficiency for the pump.
If the numbers came out right you might be able to be self-sufficient for heat using something like this, just by making good use of the radiation already falling on the house.
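Some very rough numbers for the energy balance (every figure below is an assumption, just to see if the idea is in the right ballpark):

    roof_area_m2 = 20
    winter_insolation = 1.0  # kWh/m^2/day on a dull UK winter day
    pv_eff = 0.20            # cell efficiency, helped by the cooling loop
    cop = 3.5                # heat pump fed by the warmed coolant loop

    pv_kwh = roof_area_m2 * winter_insolation * pv_eff  # 4 kWh(e)/day
    heat_kwh = pv_kwh * cop                             # 14 kWh(th)/day
    print(heat_kwh)

A typical house here might want 40-60 kWh of heat on a cold day, so this looks short in deep winter but plausibly self-sufficient in the shoulder seasons, with the thermal store buffering day to day.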
In dense enough areas you can also do well with heat distribution networks, or mixed heat and cold networks. These are a good low-regret choice because you have a big central plant which you can easily refit if the best supply technology changes.
Also, if you're running heat pumps, having one big central heat pump with a big thermal store can allow you to get a better conversion ratio from electricity to heat.
As a thought experiment that might point you toward why some people say not: if you make an atom-perfect simulation of two 1kg spheres orbiting each other in an empty universe, would that produce any gravitation?
If it was well simulated, yes, it would produce simulated gravity. Why should there be any expectation that the simulated reality affects the non-simulated reality? If you start down that path then you might conclude that no other person has feelings if it doesn't affect you.
By "actually" are you again inferring the requirement for crossover between realities? If the simulated brain was experiencing something, then that "something" is an experience in the frame of the simulation. Or perhaps your argument is that, because WE don't identify the simulated brain as a person, its experience is irrelevant?
By asserting that the simulated brain is "experiencing" something, you're assuming it does have qualia.
We actually have no idea how a bunch of atoms interacting create qualia, or even whether they do. There's no math to tell us that a configuration of atoms makes qualia, or what qualia it makes. We have no way of distinguishing between a conscious being who experiences, and a philosophical zombie with the same behavior but no internal experience.
Therefore we can't know whether a simulation of atoms actually does capture what's necessary for qualia.
Sure, I only really know that I have qualia. It seems reasonable to assume that something which looks like me has qualia like me. But if it's a simulation, it doesn't actually look like me at all.
It does have a sort of abstract mapping to something that looks like me, but since I have no idea what produces qualia, I have no way to know whether something essential is lost in that mapping.
There's another solution to this question (or maybe the same one in different words). I believe something similar, but I still do things - why do I do things? Why write this message?
The question seems like a show-stopper.
Then again, the trees I can see outside my window don't believe pointing their leaves at the sun matters or bears meaning, nor do the planets consider turning in their orbits. Nevertheless, they continue to do these things!
What is the difference between me, and these other systems? Why do I need to Ultimately Matter to live?
I think we're thinking of "unlivable" in two different senses. When I said "belief system is unlivable" I didn't mean you can't agree with it and live. I meant you can't live as though it's actually true. You can't be consistent with it and live.
Why take care of yourself if you don't matter? Why help other people if they are as irrelevant as your help? Why participate in thoughtful discussion if nobody matters, and neither do the conclusions? Folks who think they have no ultimate value do these (good) things all the time -- but they can't do so and be consistent with their worldview.
> What is the difference between me, and these other systems?
Simply put, you're human, not a plant or a planet. Humans are volitional; trees and rocks are not. "How can trees live without beliefs?" is analogous to "how can humans live without photosynthesis?" -- because we're fundamentally different.
Human actions are the products of our beliefs. We think and long for significance. Why? If you're just a mass of chemicals, atoms clashing with atoms, why do you long to matter? And if you aren't more significant than the tree, why can't I cut you down if it suits me? The typical responses ("it's beneficial for survival", "society says so", etc.) don't make it wrong inherently, yet we know murder et al. are wrong. Actually wrong. And, if we don't matter, why is survival good? Why is society's opinion good? They wouldn't be; they would just be other possible, equally meaningless, states in the vast, purposeless, eternal state machine.
But you do matter. And it's a divine gift. You matter because you were created to matter.
> Human actions are the products of our beliefs. We think and long for significance. Why? If you're just a mass of chemicals, atoms clashing with atoms, why do you long to matter?
Because it’s built-in for your survival. You are optimizing a fitness function. Nature is ultimate. The meme that man has dominion over Earth has led to us raping and pillaging her. We have done horrible things to our mother.
I agree with your scepticism about solving commute traffic with a self-driving silver bullet. The research on traffic flow [1] says that, with human-driven cars, a slice of a 4m-wide lane will pass about 1,000 people/hr in cars, about 7,000 people on foot or bicycle, and about 10,000 people by bus.
That means that to get competitive flow with buses, self-driving cars need to deliver a tenfold improvement. Since cars in traffic are quite tightly packed already, you probably can't get much from putting them closer together, so even if self-driving magic gives you two doublings of flow on its own, you still need to more than double the speed limit to compete. I can imagine this working only on roads dedicated to self-driving vehicles, with no pedestrian crossings, so the benefits of self-driving cars only materialise if they eat up a chunk of the public space.
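The arithmetic, for the avoidance of doubt, using the [1] figures quoted above:

    car_flow = 1_000   # people/hr through a 4m lane, human-driven cars
    bus_flow = 10_000  # people/hr for buses, same source

    needed = bus_flow / car_flow   # 10x improvement required
    packing_gain = 4               # "two doublings" from closer packing
    print(needed / packing_gain)   # 2.5: still >2x the speed limit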
You might get a bit of improvement from vehicle sharing as well, but it seems like a really complicated way to attack a problem that already has an OK low-tech answer, at least in high density areas.
Where I live, the census says that about half the commute journeys in the area are less than a few miles, for which cars of any sort seem ridiculous to me, at least for the able-bodied. For those who aren't able-bodied, or who have a long journey, all the able-bodied car users on short trips are using up a scarce resource that they need!
Throw in the health benefits of active transport, and the health cost of particulate emissions (still a problem with electric cars, since a decent chunk comes off tyres and brakes), and it looks like a wash to me. Then again I am one of those awful bike people, so maybe I miss the point.
Outside of NYC, public transit in the U.S., in my experience, takes two or three times as long to get you where you're going. Until the typical experience gets better or people get poorer, the resource efficiency won't matter.
I guess part of the issue here is that cars are a kind of coordination problem - if nobody else is using a car on the roads, cars are quite a good choice. They are fast and easy, and go where you want.
However, each person who uses a car creates external costs (a high traffic load per unit of person-flow) which are borne by all the other road users. Once enough people choose the "defect" strategy (cars), the "cooperate" strategies (everything else that uses the road network) get broken, so the system falls into a bad equilibrium once there are enough defectors in the population.
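A toy model of the equilibrium (illustrative minutes, nothing more):

    # Toy congestion game with made-up numbers: everyone shares one road.
    def car_minutes(n_cars):
        return 20 + 0.02 * n_cars  # cars are faster but congest the road

    def bus_minutes(n_cars):
        return 30 + 0.02 * n_cars  # buses suffer the same congestion

    # Switching to a car saves *you* ~10 minutes whatever others do, so
    # "car" is the dominant strategy. But with all 1000 commuters
    # driving, everyone takes 40 minutes - worse than the 30-minute
    # all-bus outcome. A classic multiplayer prisoner's dilemma.
    print(car_minutes(1000), bus_minutes(0))   # 40.0 30.0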
The common American is getting poorer, but just as with junk food and lack of exercise, people still drive. Other modes of transit may be much safer and healthier, but unless the common person is encouraged (whether with incentives not to drive, or tolls for driving), quite a few people will still drive.
There are a number of US cities besides NYC that get most workers into the urban core despite poor-to-middling transit infrastructure at best, but driving infrastructure dwarfs everything else, bike lanes are usually added only as part of road diets (resulting in shitty lanes), and most areas see 60% to 70% of their land tied up in (mostly empty) parking, which is a disaster for every mode of transport.
It is being a luddite - in a good way! Don't knock the Luddites; they had analogous concerns.
They didn't hate machines or novelty per se; they hated the specifics of how the machines were affecting their quality of life.
Same with Facebook - networked communications might be OK (remains to be seen, if you ask me), but the socio-technical-political blob that is Facebook's implementation of them has side effects that lots of us find horrible.