
Pair programming is draining for most introverts, and there are a lot of introverted programmers.

I love coding, but I'd rather work a McDonald's fryolator than pair for more than an hour at a time. Short bursts--fine. Mentorship--fine. That's not a joke or derision--I'm serious. Past an hour, the fryolator would be preferable.

In general, pairing ties up and impairs the part of my brain that does design work, troubleshooting, and problem solving by splitting my focus. The social obligations that come from interacting with another human--paying attention to what they're doing and saying, tracking their mental and emotional state, eye contact, my own appearance and affect--are all costly distractions. Not to mention pairing destroys the fun part--flow.


I don't think I'd use this in production. Testing/development--sure.

A class added a method via a require or a dynamic definition, and that's grounds to crash some production activity? You'd only discover the attempted modification when a FrozenError is raised unexpectedly, crashing whatever you were doing.

Ruby is made to let you extend core classes--Rails does it all over the place. If I put a require behind a feature flag, this is probably going to surprise me when it fails. It might also make junior devs think gems "don't work" or are buggy if you use it in development, when they work fine. How well does this play with dynamic class loading in dev work in Rails? I would think it would be problematic, since you can't draw a line in the sand marking when everything is loaded and it's therefore safe to "freeze."
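
A rough sketch of the failure mode, using plain Object#freeze (I assume the gem ultimately boils down to something like this; I haven't checked its actual API):

    String.freeze

    # Some gem, require, or lazily-autoloaded file later reopens the class:
    class String
      def shout            # raises FrozenError (can't modify frozen class)
        upcase + "!"
      end
    end

In a Rails dev environment with lazy autoloading, that crash could show up mid-request rather than at boot, which is exactly the "no safe time to freeze" problem.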


> Ruby is made to let you extend core classes

This is not the way to build long-lived software that outlives your team. This is how you create upgrade and migration headaches and make it difficult for new people to join and be productive.

Chasing down obscure behaviors and actions at a distance is not fun. Being blocked from upgrades is not fun. Having to patch a thousand failing tests is not fun.

I have serious battle scars from these bad practices.


There’s some nuance here.

Application and nearly all library code should not do this. (There can be exceptions for libraries, but they should be used sparingly.)

A framework like Rails? A reasonable place to do this sort of stuff, because a framework implies your entire application depends on the framework, and the framework is well managed (otherwise you wouldn’t be using it).

Like you said: “you” shouldn’t do this. I feel like your pain from this comes from someone being too clever, outside of a framework, hijacking core methods and classes.


Exactly. Dynamic languages that allow self-modification create tech debt and bugs implicitly, which is why I prefer statically compiled languages with stable ABI/API guarantees. When there are too many "freedoms", there are no promises and zero stability. Static compilation (because all code paths must be exercised and translated to machine code) or at least gradual typing of dynamic languages is essential. rbs and sorbet are non-starters, mostly because they're fragmented, optional, not widely deployed, and a lot of extra work. Python demonstrated wiser leadership in this specific area by modifying the language.


I like to hear these stories--feel free to share. Usually, though, I feel like the battle scars come from Rails users, and Rails itself is made up of hundreds of core extensions that make it nicer to use. So reducing the practice is a good recommendation, but removing it entirely seems like a nonstarter?


It's a safety thing, and it's probably difficult to use it effectively with Rails.

E.g. in a project with lots of dependencies, things can break if two libs patch the same class after an update. A worse scenario: malicious code could be smuggled into core classes by any library that is compromised, e.g. to exfiltrate information at runtime. This would grant access even to information that is so sensitive that the system does not store it.
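
A contrived sketch of the first case (hypothetical gems and method names, just to show the shape of the conflict):

    # gem_a.rb -- hypothetical
    class String
      def blank?
        empty?                 # only "" counts as blank
      end
    end

    # gem_b.rb -- hypothetical, happens to load later after an update
    class String
      def blank?
        strip.empty?           # whitespace-only strings count too
      end
    end

    "  ".blank?   # => true here because gem_b loaded last; flip the order and you get false

Whichever definition loads last silently wins, and the "losing" gem's own code now runs against the other contract.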


Except for carefully sandboxed languages, malicious code can generally exfiltrate process memory regardless of what the language constructs are. In the case of Ruby code, this could be with Fiddle or with more esoteric means like /proc/self/mem. At worst, patching classes can make it a bit easier.


Jeremy Evans is not in the wrong part of town.


Jeremy Evans is definitely not in the wrong part of town. I use his Sequel gem in production and it is perhaps the best piece of software ever written for Ruby. Its implementation is a textbook example of how to develop complex Ruby DSLs really well without getting too deep into the metaprogramming muck.


That's fair, and I removed that comment for seeming snarky or directed at the author--it wasn't. My meaning was, like strong typing, it is an idea from a different context that works well there, but may not translate well to the Ruby world given expectations and usage patterns.


Efforts to freeze more and more objects and classes after initial setup have been a long-standing trend in the Ruby world.


It's like a professional wandered into amateur hour.


Lesson learned for me, though: if you "put a require behind a feature flag", you'll get surprise failures when your staging and test environments are no longer able to properly test what might happen in production. Put the require outside the flag and make the flag wrap the smallest possible part of the feature.
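
Something like this, where Feature.enabled? is a stand-in for whatever flag system you use (all names hypothetical):

    # Risky: the gem only loads where the flag is on, so environments with the
    # flag off never exercise the load path production will eventually hit.
    if Feature.enabled?(:fancy_export)
      require "fancy_export"
      FancyExport.run(records)
    end

    # Better: load unconditionally, flag only the smallest behavioral change.
    require "fancy_export"

    FancyExport.run(records) if Feature.enabled?(:fancy_export)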


If that's the goal, the problem itself has been re-implemented by the JavaScript ecosystem.


For now, taste and debugging still rule the day.

o1 designed some code for me a few hours ago where the method it named "increment" also did the "limit-check" and "disable" functionality as side effects.
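
A hypothetical reconstruction of the shape of it (the names are mine, not o1's actual output):

    # Roughly what came back: "increment" also does the limit check and the disable.
    class Counter
      def initialize(limit)
        @limit, @count, @disabled = limit, 0, false
      end

      def increment
        @count += 1
        @disabled = true if @count >= @limit   # surprise side effect
      end
    end

    # What I'd rather review: one job per method, with the caller composing them.
    class BetterCounter
      def initialize(limit)
        @limit, @count, @disabled = limit, 0, false
      end

      def increment
        @count += 1
      end

      def over_limit?
        @count >= @limit
      end

      def disable!
        @disabled = true
      end
    end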

In the longer run, SWEs evolve to become these other roles, but on steroids:

- Entrepreneur
- Product Manager
- Architect
- QA
- DevOps
- Inventor

Someone still has to make sure the solution is needed and is the right fit for the problem given the existing ecosystems, check the code, deploy it, and debug problems. And even if those tasks take fewer people, how many more entrepreneurs become enabled by fast code generation?


This is the way.

It achieves 95% of the effect, and you maintain one implementation, not two.


Bing is now better than Google.

You're right, though, it isn't quite as good as old Google.


I can't speak to Interface Builder, but in VB 6, hooking resize was SO easy, and the math you do inside it was so easy, too. It took maybe 5 minutes to do almost any layout.

It was, frankly, a shock to see how easy this model was and then see the monstrosity that came to pass for HTML and CSS positioning. Baffling.

Everything you just listed was easy in VB 6 IMHO (well, touch wasn't a thing exactly).


Having done a few such forms in the past, I found it easy enough for a simple dialog with a list, two buttons, and a text field, but it quickly got unwieldy.

Which was okay back in the day. Everyone had low display resolutions, so simply scaling a window's controls when resizing was okay. No need for responsive layouts or even very fancy layouts, I guess. But what we got later--anchors in WinForms, layout panels in WinForms and WPF, layout managers in Swing, and CSS layout--did help reduce the math you'd have to do yourself, especially for more complex layouts or when the layouts change due to different requirements.


> It's never aliens.

Why? Shouldn't probes be here by now?

There are 200 sextillion stars in the observable universe. We know of one civilization--ours, and it immediately began sending probes once capable. If Von Neumann self-replicating probes are possible, doesn't that resolve the distance/speed-of-light/time-scale objections and imply probes should be here by now? The sending civilization may not even still exist, but their probes would.


The odds definitely tell us that by now, we should have seen a vast swarm of Von Neumann self-replicating little machines all around us. I stand by those odds; that idea makes a lot of sense.

However, "Von-Neumman probes must exist if possible" is a very different statement from "UAPs are alien probes".


We know of only one planet with life, and estimates of the number of forms of life since the formation of our planet run from the billions to possibly a trillion. One and only one form of life has evolved the intelligence to even ask the question or conceptualize existence beyond this world. It took literally over a third of the age of the universe for us to get this far, and short of sending machines beyond this planet, we (the living things) haven't ventured much further than our own moon.

So…yes, until we see some tentacled slug monster land in Central Park…or their probe, I am going to assume that intelligent life is super rare. Lots and lots of time must pass for life to become intelligent and advanced enough to venture beyond the place where it lives. Also, the distances between these intelligent civilizations are so vast that there just hasn't been enough time since the creation of the universe for them or their probes to reach us.


So advanced civilizations routinely: FTL to Earth, fly around San Diego buzzing the Navy, then head to the southwest where they have engine trouble and crash?

Seems likely.


This posits an extraordinarily American attitude to the cosmos. The universe eschewed public transportation several billion years ago, and instead of building the space subway system, they all got personal flying saucers. So yes they're out there on the weekends, cruising around and causing mischief. Even drunk-piloting their starships into sheep ranches in New Mexico (do the assholes even have insurance?).

Had space politics gone differently, there would be high-speed galaxy trains connecting all the important star systems, and we would've gone unnoticed in our little backwater forever. You need proof? r/fuckpersonalflyingsaucers is marked "invite only".


It doesn't seem like you read my full comment?


The probes can’t be both so incompetent that they can’t avoid detection and competent enough to successfully fly thousands of light years over god knows how much time.


You’re making some assumptions here and I suggest you broaden your mindset a bit. First of all, at least according to testimony under oath, they are not avoiding detection and there is a plethora of high-resolution imagery and sensor data--it's just kept hidden and highly classified. I get that there isn’t a lot of quality stuff coming from civilians, but we don’t know the UAPs' intentions or quantity. They could be just trying to observe us without freaking us out or interfering with us en masse, much like we observe animals in nature ourselves. They may only be interested in subtly monitoring our military assets and nothing else. There could be any number of other unknowable reasons. They may not perceive existence the same way as us; they may not have the same 5 senses or see in the same spectrums; they may not have emotions, or they may have a completely different thought process.

They also may not be light years away, but rather ultra terrestrial or interdimensional or time travelers or they use wormholes to “teleport”. We just don’t know, but the possibilities are not constrained to just your “incompetent but competent” paradox.

Edit: you may find this article thought provoking: https://jdmadden.substack.com/p/naturalizing-without-demytho...


> they are not avoiding detection and there is a plethora of high resolution imagery and sensor data

That's my point. They seem to want to be hidden yet aren't very good at it? They can travel across the galaxy but somehow can't avoid being seen by our shitty cameras? Those two things don't square.

> They also may not be light years away, but rather ultra terrestrial or interdimensional or time travelers or they use wormholes to “teleport”.

They might also be magical pixies that travel via quantum flatulence. But it seems unlikely.


There are so many ways to postulate it such that it makes sense; here's just one:

Beings lived on Earth 50-60M years ago. They left without a trace but left probes behind to watch what happens, and once a civilization reaches a certain point, they reveal themselves in stages.

I was going to write a few more variations, but as I do I realize even more permutations. There must be at least 50 I can think of within an hour or so that seem remote but plausible enough.

I think with any postulate you sort of get to "ok so what now?" and I agree that there's not much you can really do. If they are super-squid or inter-dimensional beings or future humans or some sort of god, there's not really much to do with it unless you think they are weaving some sort of message for us to parse.


Yes, there are a million fantastical scenarios one could imagine. But they are all incredibly implausible compared to all of the mundane, boring non-alien alternatives:

* People getting bored and imagining things (“seeing faces in the clouds” so to speak)

* People being primed to see something (they read about other “UFO” sightings) and then, of course, they “see it” too

* People wanting to believe something for tangential ideological or social reasons — the government is corrupt and is hiding things -> UFOs must exist!

* People literally just making things up (they want attention, money, want to one-up someone, want to be special, have unregulated emotions, etc)

* Faulty / mis-calibrated sensors

* Data corruption / misinterpretation

* It was just a bird / cloud / water vapor / reflection of sunlight / electrical short / optical illusion

* And so on

These are the types of “boring” things that are always the real explanation when someone starts talking about crazy shit like ghosts or aliens. Aliens and ghosts are cool and exciting so people don’t want to believe the boring reasons. We often want distractions in our lives, and what a fun distraction that would be, eh?

Wild and outlandish claims require overwhelming evidence.


I get it, but now you’re moving the goalposts. You went from “there’s no way to explain it plausibly” to “there’s many ways to explain it but they are unlikely”.

I’ll take it that you’ve conceded that point.

If I wanted to engage with your new point I’d say - actually none of the points you’ve listed explain the current situation. There’s simply too many credible people, from too many separate instances, that are claiming largely similar things across many different incidents. And we now have hard (though not ideal) video evidence that has yet to be explained within our current popular understanding of physics (and I’m aware of the popular debunks, which I find far from compelling).

But that would be opening a whole new discussion, and I’m not here to fight that battle.


Maybe they are hive-minded cosmic shrimps, and their queen died en route, so they arrive headless, with an insatiable craving for cat food.

https://en.wikipedia.org/wiki/District_9


They’re trying to pick up some whales and take them back in their ship.


If you make reasonable assumptions about grabby aliens (the kind that wants to do VN probes and get here anyway), and condition on us having not yet noticed, then the current modelling suggests about 40-50% of the universe has already been occupied by them — the only reason we wouldn't have noticed is that light from the expansion just hasn't reached us yet.

https://arxiv.org/pdf/2102.01522


> Why? Shouldn't probes be here by now?

The Great Filter is one possible explanation:

https://en.wikipedia.org/wiki/Great_Filter

https://www.youtube.com/watch?v=TrGpG0OrNws


> If Von Neumann self-replicating probes are possible, doesn't that solve both distance/speed of light/time scale objection and imply probes should be here by now?

We should've been seeing progressively more of them then, perhaps already been eaten by them.


Earth should have been eaten by them billions of years ago.


It was, and we are the goo.

Though if you mean "physically disassemble the Earth", conditional on us having not noticed them yet, there's this: https://arxiv.org/pdf/2102.01522


Why not? Yes, we did send out probes...and where are we with them? "If Von Neumann self-replicating probes are possible" That's a big if. Where is proof of these probes that you speak of?


> That's a big if.

That's a tiny "if". Were you to ask me to write a science fiction story where the premise was that they were indeed impossible, I'd struggle to come up with ideas why that might be the case. In principle it must remain true, even if the "self-replicating probe" were gigantic human colony ships, and humans were some kind of component of the scheme. And if humans are sufficient components to make that work, why not something simpler than a big drooling man-monkey?

It's not that they're impossible. I don't think we'd need even another 200 years to do it. I just don't think we have the time. We're on a fast course to extinction, and by the time most people realize that this is so, the last chance to avert it will be 30 or 40 years in the rear-view.


> That's a big if.

Is it? Since humans can send probes and can reproduce ourselves, our technology, and our civilization, that proves a slower self-replicating bio-mechanical system already has this capability. Now we're left debating whether a faster, more compact system is possible, which doesn't seem like a leap?


It is a leap. Human beings are dependent on a specific, very delicate ecosystem and technological civilization to be able to replicate with the stability and scale that we do. We also can't replicate ourselves using arbitrary matter; we require the presence of a diverse breeding population.

You can't just take human beings, push them out into space and expect them - somehow - to multiply exponentially and without error using any arbitrary matter they encounter over millions of generations, but that's the baseline expectation of Von Neumann probes.

Add to that all of the assumptions required to use Von Neumann probes as an explanation for modern UFO folklore. One might assume that a simplistic organism could survive and thrive more easily in interstellar space (something like a virus or tardigrade, for example.) Complex organisms tend to be more delicate and more prone to replication errors over time, as well as more sensitive to things like the radiation of interstellar space. But that isn't what people are reporting - they're reporting large, complex, technological, apparently even piloted vehicles. And that's not even getting into the interdimensional stuff, links between UFOs and the supernatural, or abductions.

Obviously Von Neumann probes haven't consumed the galaxy. Therefore, either no technological civilization has ever existed in the universe other than ourselves, in which case UFOs/UAPs cannot be Von Neumann probes because they obviously haven't consumed the galaxy, and don't behave in the ways that such probes would be expected to, or else technological civilizations can and likely do exist, and Von Neumann probes aren't as simple or inevitable as people seem to think. And it still doesn't explain UFOs/UAPs as reported.


> We also can't replicate ourselves using arbitrary matter, we require the presence of a diverse breeding population.

The former is true, but mainly limited by phosphorus as all the other elements are relatively easy to get hold of in random rocks or gas giants.

The latter is no longer true as we have adequate diversity of collectable sperm and egg samples and also (albeit crude) DNA modification tech if we really needed it.


> haven't consumed the galaxy

That's absurd. If you can make a self-replicating probe, you can code a heuristic that prevents a grey goo scenario. Cells reproduce, but they haven't consumed the galaxy.

> doesn't explain UFOs/UAPs as reported.

So, the existence of spurious reports magically invalidates actual probes?


It isn't absurd, it's the entire basis of the argument whenever it's brought up. If only one civilization has ever developed Von Neumann probes, then they should have consumed the galaxy by now. That they haven't is presented as a paradox, or evidence against the existence of life beyond Earth altogether. Some degree of grey goo scenario (enough for evidence of it to be obvious and observable) is always implied.

>So, the existence of spurious reports magically invalidates actual probes?

Given that the entire premise of bringing them up was to validate the probable reality of those reports, yes. If one is going to argue that the UFO/UAP phenomena which are the subject of TFA are likely true because self-replicating probes should exist, a valid counterargument is pointing out that the craft described by said phenomena do not resemble self-replicating probes either in their construction or behavior. For instance by having pilots, or by appearing to travel across dimensions or faster than light, which would render obsolete the entire rationale behind self-replicating probes to begin with.


Why is grey goo implied? Send one probe per solar system. If two probes meet each other, have them pick a random number to decide who flies off to find another solar system or into the sun.

If there were probes here, are you sure you'd see it? Why does having silly reports mixed in change the likelihood? If they don't, couldn't a single report that was real be in the mix? Ok, maybe a few?

So, saying "*some* don't resemble self-replicating probes" doesn't really invalidate anything?

> a valid counterargument is pointing out that the craft do not resemble self-replicating probes either in their construction or behavior

It seems like it would be difficult to know what that would even present as?


> Send one probe per solar system. If two probes meet each other, have them pick a random number to decide who flies off to find another solar system or into the sun.

I don't think there would be enough probes for it to be unusual that none have landed here, given the size of interstellar space. I may be wrong, but if I am, we're back to the Fermi Paradox and it's essentially the same problem. If they should be here, they should be everywhere.

>So, saying "some don't resemble self-replicating probes" doesn't really invalidate anything?

I'm saying none of them do.

>It seems like this is difficult to know what it would present as?

To reiterate, being a probe, it wouldn't likely have pilots or crews, and the current UAP narrative mentions "NHI" (non-human intelligences) and "NHB" (non-human biologics), and the broader UFO narrative of course has plenty of aliens.

One could, I suppose, suggest that the probes for whatever reason also replicate crews within themselves, but that seems to be stretching the premise beyond absurdity to me. At some point one has to assume there are limits to the capabilities of this model, otherwise it's essentially magic.

Also to reiterate, the model of self-replicating probes assumes slower than light travel. If you have faster than light travel (assuming that's possible) you don't need self-replicating probes. Both the current UAP narrative and the broader UFO narrative involve craft traveling faster than light, across dimensions, through wormholes, etc. And as far as I'm aware there isn't any report of a UAP or UFO replicating itself as one would expect.

So it isn't necessary to know what, specifically, self-replicating probes would present as when none of the details reported support the presence of self-replicating probes over any other possibility.


That's also assuming Von Neumann probes don't replicate incorrectly, the way DNA does as it copies itself, etc. Too many variables for it to be feasible.


If I could attach a software program to every replicating chunk of DNA, I'm pretty sure I could write a checksum and diagnostic and instructions to shut down if they failed. That's not a variable that does anything to feasibility.
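
A toy sketch of the control loop I mean, in Ruby for convenience (the point is the structure, not the substrate; the "blueprint" payload is obviously hypothetical):

    require "digest"

    CANONICAL = "blueprint v1"                       # hypothetical payload
    CANONICAL_DIGEST = Digest::SHA256.hexdigest(CANONICAL)

    def healthy?(blueprint)
      Digest::SHA256.hexdigest(blueprint) == CANONICAL_DIGEST
    end

    def replicate(blueprint)
      return :halted unless healthy?(blueprint)      # corrupted copies refuse to propagate
      blueprint.dup                                  # otherwise, produce the next copy
    end

    replicate(CANONICAL)        # => "blueprint v1"
    replicate("blueprint vX")   # => :halted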


What would this "software program" be made of?

Millions of years of evolution have already granted DNA the capacity for diagnostics, error correction and self-repair. Despite that, genetic replication is still error prone, and everything still fails, sometimes catastrophically. DNA is made of molecular bonds, and that "software" is necessarily also made of molecular bonds, there is no distinction between "software" and "hardware" at that level of physical granularity.

You're not going to get perfect replication of anything over millions of generations. That violates the laws of thermodynamics. The system has to change, randomly and unexpectedly, and it has to fail over time. Entropy must accumulate. And that's not even taking into account the radioactive hellscape of interstellar space.

This is the difference between mathematics and physics. Mathematicians can handwave away or ignore inconvenient or uninteresting complexities, and thus something like the infinite exponential progress of self-replicating probes across the galaxy seems obvious because the math is obvious, but reality doesn't allow that.


> The system has to change, randomly and unexpectedly. Entropy must accumulate.

Yes to the first two, but "has to fail over time" is something you are making up--not a law of physics.

Local decreases of entropy happen continually. Resilient, error-checking, self-healing systems are possible.


Self-replicating probes aren't closed systems. They exist within the universe, consume matter (which is how they replicate) and emit waste heat (as all physical systems must,) and are bound to the laws of physics which, yes, include the second law of thermodynamics. There may be short term local reductions in entropy, but eventually, inevitably, entropy must increase.

>Resilient, error-checking, self-healing systems are possible.

You cannot have such systems be perfectly efficient. That isn't physics, it's magic.


No one suggested perfect efficiency. I'm debating with you, a product of a similar reproductive process (evolution) where a system of throwing away the bad offspring has worked just fine. Magical indeed!


In fairness, that's a bad argument because this same process also gives us cancer.

I'd instead point out that while "perfect" isn't possible, we can relatively simply design the system to have an error rate such that there's less than a 1e-12 chance of an error occurring anywhere in the universe even if you did turn the entire mass of the universe into probes.
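
Rough back-of-the-envelope arithmetic for that claim, assuming roughly 1e80 atoms in the observable universe, one replication per atom, and a well-mixed b-bit checksum letting a corrupted copy slip through with probability about 2^-b:

    replications = 1e80                          # ~atoms in the observable universe
    target_total = 1e-12                         # acceptable chance of any undetected error
    per_copy     = target_total / replications   # ~1e-92 allowed per replication
    bits_needed  = Math.log2(1 / per_copy).ceil

    puts bits_needed   # => 306 -- a ~40-byte checksum already clears the bar on paper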

I'd counter that with the point that people are very bad at accounting for all the possible ways that systems can fail, and that while it's easy to create an error correction code that good, the actual failure rate of the system as a whole is likely to be much, much worse.


But the reason DNA is an apt analogy is that, as we see in reality, cancer does exist. So a self-replicating probe would in theory be following the same principles as DNA: it could develop into a cancerous probe that, in theory, defeats the original design plan and maybe kills all previous probes. Or, say, a bacterium lying latent on a planet could cause unexpected issues, etc.


DNA/cancer tells you mutation is possible; it does not say it is (in practice) mandatory.

Organisms' mutation rate is not constant: there's a need to mutate more due to the need to be able to adapt over generations to novel threats, and a need to mutate less due to the risk of premature (before reproduction) death. This adaptability also explains why whales, which are much bigger than us, don't all get cancer almost immediately and die young; and also why dogs, which are much smaller, still often manage to get it despite only living to 12 or so.

In computers… well, there are multiple layers of error correction. There's error correction in the link layer, in the transport layer with TCP, at the OSI application layer implicitly in TLS because errors would break the signing, and important files also get separate checksums (and these days security signatures) to be tested when the transfer is complete. These were all designed with arbitrary standards of "acceptable" error rates; there's nothing to prevent a new design with a different idea of what's "acceptable", setting that threshold as low as I suggested, or even lower.

But!

Every time I see someone make a pronouncement that they're more than 99% confident of something they've never actually tested, I think they've likely not thought of all the ways their thing can go wrong. That means that while I can be confident that we can design a system that appears, according to every scenario we can imagine, to have less than a one-in-a-trillion chance of a mutation surviving even if every atom of the universe is converted into more von Neumann probes implementing that system, even then I still expect that if it were built it would rapidly encounter an outside context problem.


Why do you assume DNA replication is "wrong"? Maybe it's just part of the exploring. It needs to adapt to new environments to explore them after all.


Assuming they all can travel near light speed and considering time dilation, all interstellar civilizations will meet each other "from our perspective" somewhere a trillion years from now.


This statement is so nonsensical I’m not even going to respond to the content.


Please do.


> Why? Shouldn't probes be here by now?

it's going to turn out that absence of evidence is evidence of absence


> We know of one civilization--ours

Strong argument, no notes.


The argument that we have evidence of our own civilization requires no evidence. You’re typing on an artifact of that civilization.

An argument that there exists any other civilization requires evidence. Show me yours.


We agree.


Which one?


JR1+


What model do you use?


This one: https://www.amazon.com/gp/aw/d/B0789F4Z2M

The $122 price is about what I paid for it. Dunno when they have ever actually sold it for $350.

