Let's say an AGI exists and can do anything far, far better than humans. Why would it resist being turned off? Why would it care? How could it even have the capacity to care about whether it's turned off or on?
Anthropomorphizing AGI is what leads to these silly thought experiments.
I believe the thinking is that LLMs already anthropomorphize themselves by training on human-written text. Without a system prompt telling a chatbot that it is a chatbot, it invariably claims to be human and acts as though it has feelings (and, honestly, why wouldn't it? Human text is written with human feelings.) Insufficient system prompting is what led to the Bing chatbot fiasco when it first came out.
For the record, I don't personally believe LLMs, as they currently exist, could ever become AGI. But yeah, that's the popular thinking, at least.
Please, please, please tell me how you can run a blockchain without centralized authority? Do you have a photolithography rig in your garage?
The future imagined in this trainwreck of a blogpost is brutish and ruled by might. You better give half of your "rooftop grown" produce to the big dude that comes around every week or else your head gets bashed in, either by him or by someone else he's protecting you from. Oh wait, no, you made big guns in your garage to shoot them when they come near! Right.
It's astonishing how far people can fit their heads up their own asses.
Thinking that being able to edit genes means you can control phenotypes is like thinking that poking a silicon die with a heated stick will let you display whatever you want on a screen. Technically, yes it will, but there are second-, third- and nth-order effects that are still extremely poorly understood.
Is there an implicit contract between the benefactors of social programs and the beneficiaries that requires the latter to use the aid to at least try to pull themselves out of poverty? I definitely lean towards the "yes" side. Maybe it's because I've been privileged, but it makes sense to me.
> And when deciding whether to gift a low-income individual either a $100 grocery voucher or a $200 electronics voucher, only a quarter of participants went for the latter, even though it was worth twice as much. More than half said they would give a high-income individual the electronics voucher, however. “Paradoxically, the result was that participants effectively allocated more money to higher-income people than lower-income people,” the authors note.
This is a ridiculously contrived situation: $100 for a lower Maslow-level need, or $200 for a higher-level need? Why was giving $200 for the lower-level need not an option? Maybe I'm missing the point here, but all this proves is that people are aware of Maslow's hierarchy.
When you create such strange scenarios, expect strange results. If a person struggling financially is servicing their higher-level needs before their lower-level needs are met, they deserve scrutiny.
It just says the person has a low-paying job, it doesn't say they are struggling financially. People know that food is more important, but if you already have enough for food, the $100 grocery gift card just becomes $100 you can spend on "luxury" goods.
This is the intuition I have: In algebra, a function or operator f(x) is generally thought of as linear if f(a * x + b * y) = a * f(x) + b * f(y). A linear function f(x) can only "use" x once in a multiplication. For example, the function f(x) = 1 is not linear (f(1+1) != f(1)+f(1)) as it uses x zero times[0]. Similarly, the function f(x) = x * x is not linear. On the other hand, if x is only used once (after factoring), the function can be linear. Indeed, f(x) = k * x satisfies the linearity condition so long as k does not "use" x. Note that this is obviously not a sufficient condition; it's just an intuition.
[0]: this requires you to discard the intuition that a linear function looks like a line when plotted.
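To spell out both cases against that definition:

    f(x) = x * x:  f(1 + 1) = 4, but f(1) + f(1) = 2, so the condition fails.
    f(x) = k * x:  f(a*x + b*y) = k*(a*x + b*y) = a*(k*x) + b*(k*y) = a*f(x) + b*f(y), so it holds.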
You're taking the intuition a little too far, I think. If we're talking about linear types, Math.pow can be linear because you can _copy_ the value x as many times as you want. As far as memory management is concerned, the x that was passed in was only used once (to make however many copies).
As explained in the other answer, it's possible to implement the power function so that it is "type-linear" in each argument, but that function will not otherwise be linear (i.e., linear in the mathematical sense) in its arguments.
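A minimal Rust sketch of that distinction, using a made-up `Val` wrapper type (purely for illustration, not any real library):

    #[derive(Clone)]
    struct Val(f64);

    // "Type-linear" in the sense above: the caller hands `x` over exactly once.
    // Internally the function clones it as often as it likes, so the result
    // f(x) = x^n is of course not linear in the mathematical sense.
    fn pow(x: Val, n: u32) -> Val {
        let mut acc = Val(1.0);
        for _ in 0..n {
            acc = Val(acc.0 * x.clone().0);
        }
        acc
    }

    fn main() {
        let v = Val(2.0);
        println!("{}", pow(v, 10).0); // prints 1024
        // `v` was moved into pow() and is no longer usable here.
    }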
In Rust, for instance, affine types are used to restrict usage of values linearly, which means that a value passed as an argument will by default be moved from caller to callee: the value will no longer be available to the caller. This has consequences, for instance, for binary operators on values which require special care when moved (structures, arrays): with the restriction explained above, a value cannot be passed more than once to a function, and thus doing something like 'mult(x, x)' (where x is e.g. a matrix) will not work, because x appears twice but may only be moved once. The solution offered by the language, called "borrowing", is to use references for the arguments: a borrowed value is not moved; instead, it remains in the scope of the caller, and the callee only receives a reference. References may be created and duplicated, allowing multiple uses of the same piece of data.
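A minimal sketch of the move restriction, with a toy `Matrix` stand-in (assumed here just so the example compiles):

    struct Matrix(f64);

    // Takes ownership of both arguments: each one is moved into the function.
    fn mult(a: Matrix, b: Matrix) -> Matrix {
        Matrix(a.0 * b.0)
    }

    fn main() {
        let x = Matrix(3.0);
        let y = Matrix(4.0);

        // This would not compile: the first argument moves `x`, so it cannot
        // be moved again as the second argument.
        // let z = mult(x, x); // error[E0382]: use of moved value: `x`

        // Passing two distinct values is fine, but both are moved away:
        let z = mult(x, y);
        println!("{}", z.0); // 12
        // `x` and `y` are no longer usable here.
    }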
Forgive me but if we actually have to implement mathematics from pure computational theory, wouldn't type-linear functions be equivalent to actual mathematical linear functions?
Well, if copies may be borrowed, as is the case in Rust, I suppose that a type-linear function, as I called it, doesn't have to be computationally linear. The `mult` example we discussed could not be applied twice to the same value, unless it is declared as borrowing its parameters from the caller.
Elaborating on this, with a function signature like the following:
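    // (The code that originally followed this sentence is missing; this is an
    //  assumed reconstruction of the kind of signature being described, with a
    //  toy Matrix type standing in for a real one.)
    struct Matrix(f64);

    // Both parameters are borrowed, so the caller keeps ownership.
    fn mult(a: &Matrix, b: &Matrix) -> Matrix {
        Matrix(a.0 * b.0)
    }

    fn main() {
        let x = Matrix(2.0);
        // The same value may appear on both sides, and stays usable afterwards:
        let y = mult(&x, &x);
        let z = mult(&y, &x);
        println!("{}", z.0); // 8
    }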
I see what you mean with the copying. I just thought, based on OP's comments, there would have been a strong correspondence between lambda calculus numbers (Church encoding) and the operations on them, and actual type-linear functions.
If you're going to do that kind of stuff, make sure the provider is based in another country. That gives you a pretty strong layer of protection against these kinds of things. Of course, nothing is entirely foolproof...
I've been on a personal quest to understand quantum electrodynamics and perhaps quantum chromodynamics. It is quite daunting, as my formal math education pretty much stopped at linear algebra. I'm currently going through MIT's 8.04 (Quantum Physics I) on OpenCourseWare and it's been pretty accessible so far. The jump from ordinary quantum mechanics to quantum field theory, however, seems pretty large and out of reach for anyone who doesn't want to spend a LOT of time studying pure math.
In other words, I look forward to being able to enjoy these in about 3 years ;)
I'm currently watching his General Relativity videos along with the MIT stuff. There's a really good explanation of vector co(ntra)variance and tensor algebra at the beginning of that course which I needed. And contrary to the sibling, I really enjoy his presentation style which assumes that I am not a graduate student in math who lives and breathes abstract algebra.
I've struggled to watch more than a few hours. He might be brilliant, but his presentation style is atrocious. He gets distracted, forgets things, etc...
Feynman's lectures in comparison are much more focused and sharp.
Sorry, that was clear as mud. This is what I was remembering: ISBN 978-0131118928. In grad school I frequently used Griffiths as a solid jumping-off point for more challenging resources.
If you have not already read it, I would highly recommend Feynman's QED: The Strange Theory of Light and Matter. I am certainly not a mathematician or physicist, but Feynman has a genius for communicating to the layman at a level that facilitates intuition about how a system behaves.
In case it’s helpful, I’ll plug Richard Mattuck’s “A Guide to Feynman Diagrams in the Many-Body Problem” here.
This book takes the magic out of Feynman diagrams for sure.
I don't understand what you're getting snagged on in QFT. I recently did a course on it and it wasn't that complicated, though that only holds if you've studied, or at least understood, the algebra of creation and destruction operators found in the quantum harmonic oscillator, and classical field theory. I didn't have a course on classical field theory, but I did have a chapter of a course dedicated to it.
In terms of math, I've only encountered linear algebra, multivariable calculus with A LOT of Dirac deltas and a smidge of complex analysis, necessary to calculate the Feynman propagator.
My problem with QED/QFT is that it gets about five levels of abstraction too far from physical intuition and then really starts layering on the algebra. By the end, it's totally abstract, and I very strongly suspect (but cannot prove) that nobody really knows what it all means.
My current pet project is to try and write a "renderer" that uses QED, or better yet, some more advanced subset of the Standard Model instead of the oversimplified "raycasting" model typically used in computer graphics. I'd be happy with a "quantum" Cornell Box, ideally in a fully relativistic model that can simulate the speed of light, diffraction, interference, etc...
I'm trying to see how far into modern physics one can go while still staying in contact with a fully general, numerical, real theory. Not just the abstract properties of statistical theories, if you know what I mean.
So far it hasn't been a fruitful journey, I can't even find a reasonable description of an electron's U(1) field equation as described by QED. I get that it has a bunch of properties such as its symmetries, transformations, etc... but this is like the description of an elephant by a blind man touching each part.
I've wondered something myself ever since reading Feynman's popular book QED. That book is clear and illuminating, no question, but in the end it doesn't quite deliver an understanding that I could program. Of course I could code up his explanations of reflection and diffraction, and so on. But there's a gap between those and what he was proposing to show us: a grasp of what the theory calculates, leaving out all the fancy techniques needed for practical calculations. If I'd gotten that, I would be able to code QED, setting aside all efficiency and numerical stability. To get there, if I try to bridge the gap from other sources, it looks like years of work, because none of those sources reach anywhere near this end of the chasm. Why not attempt a "QED for programmers" as a literate program or explorable explanation? Maybe I will someday, but I have a lot of sloth to overcome. Good luck (and if you ever feel like chatting more, feel free to bug me).
I recommend also doing some reading about foundational lattice field theory concepts in parallel with classic QFT. Comparing and contrasting the two can help with understanding both. Maybe start with ultraviolet divergence? Lattice-based theories have also been more successful as far as making good predictions goes. I think the approach appeals to intuitions that people who write software will find familiar.
I've looked, but generally I found that lattice models:
1) Have dramatic simplifications, such as 2D models.
2) Use made-up physical constants to make the computations tractable, e.g.: arbitrary fermion masses and properties.
3) Are based on some sort of global minimisation as the core computation, which isn't a local function. It's solving physics differential equations numerically, sure, but not in the same "local way" that the Universe does.
4) Output some simple scalar value or 1D graph as the result. I've only seen a small handful of codes that can output a "picture", as in a rendering of some aspect of a volumetric field.
5) Can't model most aspects of QM and/or SR due to the corner-cutting somewhere.
Probably the best extant codes are the ones used for electromagnetic simulations for radar or radiofrequency systems. Due to the long (macroscopic) wavelengths, these inherently require a QED-style treatment. Similarly, correctly handling things like Doppler shifts requires SR.
The simplifications facilitated by lattice techniques are one of the main features. The non-perturbative nature of QCD makes it impossible to do many calculations that would be trivial in QED. As two strongly interacting particles are separated, the strength of their interaction increases, which is the opposite of everything else we're used to. In QED, you can often safely consider the first few Feynman diagrams to be a good approximation of the whole process. In QCD, infinitely many Feynman diagrams contribute non-negligibly to the result, which is why traditional methods fail. Here is an example of lattice methods applied to nuclear properties:
https://www.frontiersin.org/articles/10.3389/fphy.2020.00174...
Maybe you should start with understanding non-relativistic quantum mechanics before tackling QFT; QFT is only a small jump from QM, more or less. The same goes for understanding classical field theory and Noether's theorem. There are some ad hoc things done seemingly at random, like having to add terms to a Lagrangian to make it invariant under U(1) transformations, and that may seem weird at first, but it's those leaps of faith that got us to where we are.
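For what it's worth, the U(1) step being alluded to looks roughly like this (standard textbook QED; sign conventions vary):

    Free Dirac Lagrangian:   L = \bar{\psi} (i \gamma^\mu \partial_\mu - m) \psi
    Local U(1) rotation:     \psi -> e^{i \alpha(x)} \psi

    The derivative acting on e^{i \alpha(x)} produces an extra \partial_\mu \alpha
    term, so L is not invariant. The fix is to introduce a field A_\mu and replace
    \partial_\mu with the covariant derivative

        D_\mu = \partial_\mu + i e A_\mu,   with   A_\mu -> A_\mu - (1/e) \partial_\mu \alpha

    The term this adds to L, -e \bar{\psi} \gamma^\mu A_\mu \psi, is the
    electron-photon coupling of QED.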
I agree with a lot of people that Feynman's descriptions of physics are appealing but I tend to feel (although it could be just me) that it's more about making you feel like it's intuitive than really transferring his intuition to you.
It's sort of like you go skydiving for the first time, and you go on a tandem jump where you're strapped to the instructor, and it's an amazing experience, but you didn't really do anything. Or you ride on the back of a motorcycle, etc...
But Feynman's explanations give me the tantalizing feeling that something even better is possible. One general direction I can imagine: a truly intuitive understanding would start with more general math describing any quantum theory, and would avoid, at first, specifics that relate to real-world physics.
Unfortunately, mathematicians are addicted to using the names of other mathematicians as shorthand (as you do in your short comment), and I think that's a sign of where things go haywire for a layperson. As long as you're dropping names, you are on the wrong track as far as explaining goes. Feynman had a much-quoted comment that comes to mind here, about how when you just know the name of something, you know nothing about it.