Hacker News | cbennett's comments

Check out Greg Bear's The Forge of God and Anvil of Stars, a two-book series. Not exactly the plot you mentioned, but it covers an interesting variant, including the fallout of a situation where Earth is destroyed by killer probes and the remnants of humanity painfully search for the killers' home system. What they find is different from what they expected.


Edit: I see you linked to the wrong sub-section there. However, overall the contrast between super-predator theory and (self-)annihilation is a bit more complex than one would think. They're not entirely binary.

On one hand, the self-annihilation idea implies that super-predator civilizations, being extremely rare, would not have strong motivation to exterminate competitors.

What would possibly be the need, given the vastness of time and space, and the thermodynamic arguments given? Indeed, following this logic, the extraordinarily rare galactic survivor civilizations are almost guaranteed a long, peaceful existence.

On the other hand, it all depends on where you sit on the curve of the galactic gaussian, which determines whether pre-emptive defense investments are justified. There's a real difference between a 3-4 sigma and an n-sigma likelihood. In the former case, the likelihood of a rare rival emerging is high enough that pre-emptive defense strategies would be considered and rudimentary systems of deterrence put in place. In the high-n case, it probably would not be. How a civilization in the early stages of hegemony determines which regime it is in would be an interesting and possibly civilization-saving (or destroying) feat...
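
To make the sigma point concrete, here's a toy back-of-the-envelope sketch (entirely my own numbers and framing, nothing rigorous): treat each candidate system as an independent draw and ask how likely at least one rival is, given that "surviving to rivalry" requires clearing a k-sigma rarity threshold.

    import math

    def p_at_least_one_rival(k_sigma, n_candidates):
        # One-sided Gaussian tail probability for a k-sigma rarity cutoff.
        p_single = 0.5 * math.erfc(k_sigma / math.sqrt(2))
        # Numerically stable 1 - (1 - p)^n.
        return -math.expm1(n_candidates * math.log1p(-p_single))

    N = 1e11  # rough star count for a galaxy, purely illustrative
    for k in (3, 4, 6, 8, 10):
        print(k, p_at_least_one_rival(k, N))

With these toy numbers the answer flips somewhere around 7 sigma: below that, a rival somewhere in the galaxy is essentially guaranteed; above it, the expected count drops toward zero, which is roughly where the pre-emptive-defense calculus would change.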

Side note, haven't heard of the Revelation Space trilogy. Does it offer a workable theory for the origins of the predator dynamic?


> Side note, haven't heard of the Revelation Space trilogy. Does it offer a workable theory for the origins of the predator dynamic?

Not really, it doesn't get any philosophical treatment and is more of a plot device. Without giving away spoilers, it's a bit like I, Robot (the movie), with the predator having noble but extremely long-term goals about what's best for life itself. I don't think its reasons for doing this really hold up either, but it works as a plot device. It's happy with intelligent life as long as it doesn't expand into the galaxy, and it doesn't act preemptively.

> On the other hand, it all depends on where you sit on the curve of the galactic gaussian, which determines whether pre-emptive defense investments are justified. There's a real difference between a 3-4 sigma and an n-sigma likelihood.

The other factor is the cost of putting down potential rivals; in the dark forest universe that cost is incredibly low. If the cost were a lot higher, the outcome would be very different.


One thing that especially irked me about Liu was how contingent so many plot points were on specifics, and how little attention was paid to that.

E.g., if there were no FTL travel, deterrence would look very different, especially if the average time for a civilization to become multi-planetary is shorter than the time a strike would take to cross the distance to the nearest annihilation-capable civilization.


"Well, magnetic core memory is the only data storage format that is robust enough to withstand high-radiation environments. Jeri is clearly interested in magnetic logic and memory because it is the only computing platform that will be able to survive the first wave of nuclear blasts that will unavoidably come from the beginning of the third great world war. "

Erm, this premise is factually untrue though. A lot of next-generation resistive RAM devices, especially OxRAMs, have been demonstrated to be rather rad-hard, making them good candidates for future space electronics platforms or... all the other attendant apocalyptic scenarios.


Radiation hardness is different from sensitivity to EMP. It is the eddy currents from an EMP that build up in and burn out the small traces in microelectronics.


But magnetic cores are resistant to the M in EMP?


I'll offer a wild guess: perhaps they are only temporarily affected (i.e. mem-wiped) and function as normal after a device reset, being made of iron, in contrast to doped semiconductors being hard-killed by the EMP.


Drum memory [1] and hard disks [2] have been used in nuclear-tipped missiles, which are supposed to operate in an environment where EMPs are expected. Both types use magnetism to store data.

[1] https://en.wikipedia.org/wiki/ASC-15 (Titan II)

[2] https://en.wikipedia.org/wiki/D-17B (Minuteman I) (note that the hard drive was used as RAM)


> If it scales faster than linear, then your recursive bootstrapping operation takes longer and longer each time,

Wait, does that really follow? What if you have a better-than-linear bootstrapping compiler? To unpack that a bit: imagine we not only have $n$ such units, but we have them wired together in a creative way -- I don't know whether it is a hierarchy or some clever topology, but let's say that the bootstrapper now gets sub-linear scaling properties as it grows $n$.
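
One way to see the quoted worry (and its possible escape) in toy form: model capability C as improving itself at a rate proportional to C^beta, where beta stands in for how the cost of the next bootstrap step scales with what you already have. This is just my own illustrative framing, not anything from the parent.

    # beta < 1 -> sub-exponential growth (each doubling takes longer),
    # beta = 1 -> plain exponential,
    # beta > 1 -> finite-time blow-up (each doubling arrives sooner).
    def simulate(beta, steps=1000, dt=0.01, c0=1.0):
        c = c0
        n_steps = 0
        for _ in range(steps):
            c += dt * c ** beta
            n_steps += 1
            if c > 1e12:  # treat as "blown up"
                break
        return c, n_steps

    for beta in (0.5, 1.0, 1.5):
        c, n = simulate(beta)
        print(f"beta={beta}: capability {c:.3g} after {n} steps")

So whether recursive bootstrapping stalls or runs away hinges entirely on which side of linear that effective exponent lands, which is exactly what the clever-topology argument is pushing on.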

If we look at the brain, there is a lot to be understood from the dynamics of recurrent neural fields. They are wired in a very complex way which seems to allow for some kind of very special booting (re-booting) operations. And that's just at one level of abstraction; then we re-wire them into meta-fields (like the columnar abstractions that Hawkins builds his HTM theories around). If we have a sort of fractal information encoding, we ultimately approach Shannon-efficient coding. Is that what evolution has selected brains to do? And do you think it is possible the first seed AI may realize this and exploit the same strategy, just 1000x (10kx?) faster?



Thanks for that. The hackernoon piece in particular is really worth the read. Two salient points/questions/comments that jumped out at me after reading (these are really about the theory and not the practice):

- The fact that you have a bunch of capsules in each layer, and each one of them is intrinsically performing a non-linear filter function (the particular squashing function shown as an image in that blog), seems like both a great asset (it looks like a 'meta-network') and also a potential problem. If you need to tune weights within these caps by the derivative of such a composite function, it doesn't seem straightforward.

- The 'routing by agreement' feature is interesting, but I don't quite get why it is superior to max pooling. If the feature is simply that it punishes weak links rather than selecting only the strongest, one interesting analogy is that it seems a bit Hebbian, and related to a concept in unsupervised learning called STDP (spike-timing-dependent plasticity). Both pieces are sketched in code below.
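
For reference (and to convince myself the gradient worry is manageable), here is the squashing non-linearity as I read it from the pre-print, plus a bare numpy sketch of the routing-by-agreement loop. Shapes and variable names are my own; treat it as a sketch of the idea, not the authors' code.

    import numpy as np

    def squash(s, axis=-1, eps=1e-8):
        # Shrinks short vectors toward zero and long ones toward unit length,
        # preserving direction: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
        sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
        return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

    def routing_by_agreement(u_hat, n_iters=3):
        # u_hat: predictions from lower capsules for each higher capsule,
        # shape (n_lower, n_higher, dim). Coupling logits b start at zero.
        n_lower, n_higher, _ = u_hat.shape
        b = np.zeros((n_lower, n_higher))
        for _ in range(n_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over higher capsules
            s = (c[:, :, None] * u_hat).sum(axis=0)               # weighted vote per higher capsule
            v = squash(s)
            b += (u_hat * v[None, :, :]).sum(axis=-1)             # agreement = dot product
        return v

Both pieces are smooth (the routing loop is just iterated soft assignment), so differentiating through the composite is mechanical even if it isn't pretty; and the contrast with max pooling shows up in the softmax line, which re-weights every link rather than keeping only the strongest one.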


To save you clicks, the two pre-prints are at: https://arxiv.org/pdf/1710.09829.pdf and https://openreview.net/pdf?id=HJWLfGWRb


That sounds really interesting. What tasks did you attempt?

If you have open-sourced this in any sense, I'd enjoy checking it out and/or contributing.


In terms of cosmology, the physically observable universe does have (some) fractal structure or features, primarily due to the existence of multi-scale primordial (quantum) fluctuations. https://en.wikipedia.org/wiki/Primordial_fluctuations https://arxiv.org/abs/astro-ph/0003124

On the other hand, at very large scales the universe is expressly homogeneous (smooth): https://www.space.com/17234-universe-fractal-large-scale-the...

With regard to larger and more wide-ranging questions, e.g. whether the multiverse has some sort of deep fractal meta-structure, and/or the recursion of fundamental quantum computing operations, the jury is still entirely out.


> On the other hand, at very large scales the universe is expressly homogeneous (smooth)

I assume that's because of looking at it through low-pass filters, inadvertently.


As elec. engineers (and c. scientists), one of our only hammers is signals theory, and cosmology looks like a nail. But the physicists probably have it right.


... have what right exactly? Pictures (i.e. measurements) of faint celestial bodies are just mushy, washed-out smears of a point, is what I'm talking about, for example. Quite simply, our bandwidth is limited by our position; the amount that we can observe is likely just a tiny fraction; the known universe only accounts for ca. 10% of the theoretical total energy. In that sense the low energies are likely lost on us, and then we actually see through a high-pass (redshift?).


I mean that the physicists have accounted for the limited amount of information available and still conclude that the observable universe is not a fractal.

I remember thinking about this in 2011/2012 when it wasn't known (IIRC there was a bet over a bottle of wine between two physicists over this matter). But years later, because a satellite "zoomed out" far enough and sent images back to Earth, physicists confirmed that it is very likely that the observable universe is not a fractal (i.e. its Hausdorff-Besicovitch dimension, which is calculated from measurements, is not fractional).
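
For anyone wondering what "calculated from measurements" looks like in practice: the usual estimator is box counting, which approximates the Hausdorff-Besicovitch dimension by counting occupied cells at a series of scales and fitting the log-log slope. A rough sketch of my own, on 2-D points for simplicity:

    import numpy as np

    def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
        # points: (N, d) array of coordinates; rescale to the unit cube first.
        pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)
        counts = []
        for k in scales:
            boxes = np.clip(np.floor(pts * k).astype(int), 0, k - 1)
            counts.append(len(np.unique(boxes, axis=0)))
        # Dimension is the slope of log(occupied boxes) vs log(scale).
        slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
        return slope

    # A uniformly filled plane comes out near 2; a genuine fractal set
    # (e.g. points on a Sierpinski gasket) comes out fractional.
    rng = np.random.default_rng(0)
    print(box_counting_dimension(rng.random((100_000, 2))))

Galaxy surveys do essentially this in 3-D with galaxy positions, and the measured slope settling at about 3 on large scales is what "not a fractal" means operationally.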


Interesting, but hard to believe it's a settled matter.


Thanks for this extremely educational comment. I'm excited to read a bit more about DE and CMAES in particular!

>>I could go on all day; this was a chapter in my dissertation

I'd love a link (assuming it is publicly available!)


Glad it was useful! Once I get started writing about GAs I have trouble slowing down. By the time I posted my comment it was three or four pages back, so I'm encouraged that somebody actually read it!

Here's a link to my dissertation:

https://etda.libraries.psu.edu/catalog/28690

I forgot to put a disclaimer in my post: Tim Simpson was my advisor, which is how I know about his paper with D'Souza.

For DE, your best bet is probably to start with Rainer Storn's website: http://www1.icsi.berkeley.edu/~storn/code.html

For CMAES, Hansen has a good website: https://www.lri.fr/~hansen/cmaesintro.html
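
If you want to poke at DE before digging into Storn's site, the core loop really is only a handful of lines. Here's a bare-bones DE/rand/1/bin sketch in Python; the parameter choices (F=0.8, CR=0.9) are just common defaults, not anything tuned.

    import numpy as np

    def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                               generations=200, seed=0):
        # Minimal DE/rand/1/bin: mutate with a scaled difference vector,
        # binomial crossover, then greedy selection against the parent.
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        dim = len(lo)
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        fitness = np.array([f(x) for x in pop])
        for _ in range(generations):
            for i in range(pop_size):
                others = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True   # take at least one gene from the mutant
                trial = np.where(cross, mutant, pop[i])
                f_trial = f(trial)
                if f_trial <= fitness[i]:         # greedy selection
                    pop[i], fitness[i] = trial, f_trial
        return pop[fitness.argmin()], fitness.min()

    # Example: 5-D sphere function, optimum at the origin.
    best_x, best_f = differential_evolution(lambda x: np.sum(x ** 2), [(-5, 5)] * 5)
    print(best_x, best_f)

CMA-ES is harder to write from scratch (the covariance update bookkeeping is fiddly), which is why Hansen's reference implementations on that page are the better starting point.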

