I used to be one of the “sharers,” maintaining a fairly popular blog, writing tutorials on platforms like DZone and CodeProject, answering questions on StackOverflow, and creating open-source projects that collectively amassed millions of downloads.
At one point, I decided to monetize one of my open-source projects by creating a commercial fork. That’s when a group of people, none of whom had contributed to the project in any way, started a witch hunt over a few super trivial lines of code they accused me of “stealing” from contributors. Despite having the full support of all actual contributors, the backlash from these outsiders left me drained and disillusioned. So I stopped sharing my work and contributing to open source altogether—and honestly, I’m happier for it.
To all the Jimmy Millers who genuinely appreciate the goodwill of creators: be aware that there are people who will leech off it or even destroy it.
This series got me into coding languages for work and fun. I wish more books about complex topics were written this way... For anyone who's interested, Jack Crenshaw did an interview in 2009 and touched upon how he wrote this series. [1]
What I found disheartening was that many of those scientists, especially those in the "nothing to worry about" camp, seemed unwilling to entertain the thought that they could be wrong, considering the scale of the matter, i.e. human extinction. If there's a chance AI poses an existential threat to us, even if it is 0.00000001% (I made that up), shouldn't they be at least a bit more humble? This is uncharted territory, and I find it incredible that many talk like they already have all the answers.
Meh. Add it to the pile. The world-ending risks we could be worried about at this point are piling up, and AI exterminating us is far from the top concern, especially when AI may be critical to solving many of the other problems that are.
Wrong about nuclear proliferation and MAD game theory? Human extinction. Wrong about plasticizers and other endocrine disruptors, leading to a Children of Men scenario? Human extinction. Wrong about the risk of asteroid impact? Human extinction. Climate change? Human extinction. Gain of function zombie virus? Human extinction. Malignant AGI? ehh... whatever, we get it.
It's like the risk of driving: yeah it's one of the leading causes of death but what are we going to do, stay inside our suburban bubbles all our lives, too afraid to cross a stroad? Except with AI this is all still completely theoretical.
I think almost none of the scenarios you have named, outside of the asteroid and the AGI, would result in complete human extinction. Potentially a very bad MAD breakdown could also lead to this, but the research here is legitimately mixed.
You disagreed with me, but at least you acknowledged there was risk, even though we could disagree about the odds or potential impact. Yet folks like Yann LeCun ridiculed anyone who thought there was a risk AI could endanger us or harm our way of life. What do we know about experts who are always confident (usually on TV) about things that haven't happened yet?
Yes, and all of those (including AI) are not even human extinction events.
- Nuclear war: Northern Hemisphere is pretty fucked. But life goes on elsewhere.
- Plasticisers: We have enough science to pretty much do what we like with fertility these days. So it's catastrophic but not extinction.
- Climate Change: Life gets hard, but we can build livable habitats in space... pretty sure we can manage a harsh earth climate. Not extinction.
- Deadly virus: Wouldn't be the first time, and we're still here.
- Asteroid impact: Again, ALL human life globally? Somehow birds survived the meteor that killed the dinosaurs; I'm sure we'd find a way.
- Completely made-up evil AI: Well, we'd torch the sky, be turned into batteries, but then be freed by Keanu Reeves... or a time-traveling John Connor. (Sounds like I'm being ridiculous, but ask a stupid question...)
You're taking these things too lightly. It's true that most of these things are unlikely to kill all humans directly, but with most of them, civilizational collapse is definitely on the table, and that can ultimately lead to human extinction.
For example: yes, we could probably build livable habitats in space (though we don't really have proof of that). But how many, for how many people, and what kind of external support systems do they require? These questions put stresses on society that could prevent space habitats from working out in the long term.
Humans have a start in time and will have an end. I was born and I will die. I don't know why we're so obsessed about this. We will most definitely cease existing soon in geological/cosmic time scale. Doesn't matter.
There's a nonzero chance that the celery in my fridge is harboring an existentially virulent and fatal strain of E. coli. At the same time, it would be completely insane for me to autoclave every vegetable that enters my house.
Sensible action here requires sensible numbers: it's not enough to claim existential risk on extraordinary odds.
Okay, maybe I shouldn't have mentioned the worst possible outcome. Let's use Sam Altman's words: the risk here is "lights out for all of us", and let's just assume that means we would still live, just in darkness. Or whatever plausible bad-case outcome you could imagine. Do you see that any negative outcome is possible at all? If you do, would you at least be cautious so that we could avoid such an outcome? That would be the behavior I expect to see in leading AI scientists, and yet...
All kinds of negative outcomes are possible, at all times. What matters is their probability.
If you (or anyone else) can present a well-structured argument that AI presents, say, a 1-in-100 existential risk to humanity in the next 500 years, then you'll have my attention. Without those kinds of numbers, there are substantially more likely risks that have my attention first.
Shouldn't uncharted territory come with a risk multiplier of some kind?
Currently it's an estimate at best. Maybe 1-in-20, maybe 1-in-a-million in the next 2 years.
The OP's point in this thread still stands: scientists shouldn't be so confident.
> considering the scale of the matter, i.e. human extinction.
There is literally no evidence that this is the scale of the matter. Has AI ever caused anything to go extinct? Where did this hypothesis (and that's all it is) come from? Terminator movies?
It's very frustrating watching experts and the literal founder of LessWrong reacting to pure make-believe. There is no discernible/convincing path from GPT4 -> Human Extinction. What am I missing here?
Nuclear bombs have also never caused anything to go extinct. That's no reason not to be cautious.
The path is pretty clear to me. An AI that can recreate an improved version of itself will cause an intelligence explosion. That is a mathematical tautology, though it could turn out that it would plateau at some point due to physical limitations or whatever. And the situation then becomes: at some point, this AI will be smarter than us. And so, if it decides that we are in the way for one reason or another, it can decide to get rid of us, and we would have as much chance of stopping it as chimpanzees would of stopping us if we decided to kill them off.
We do not, I think, have such a thing at this point but it doesn't feel far off with the coding capabilities that GPT4 has.
So what would be the path for GPT5 or 6 creating an improved model of itself? It's not enough to generate working code. It has to come up with a better architecture or training data.
The idea is that a model might already be smarter than us or at the very least have a very different thought process from us and then do something like improving itself. The problem is that it's impossible for us to predict the exact path because it's thought up by an entity whose thinking we don't really understand or are able to predict.
I understand the idea of a self-improving intelligence, but unless there's a path for it to do so, it's just a thought experiment. The other poster who replied to you has a better idea: that civilization can be thought of as the intelligence that is improving itself. Instead of worrying about some emergent AGI inside of civilization, we can think of civilization itself as an ASI that already exists. Anything that emerges inside of civilization will be eclipsed and kept in check by the existing superintelligence of the entire world.
I think "llm builds better llm" is drawing the border at the wrong place. Technical progress has been accelerating for centuries. It's pretty self evident that the technological civilization is improving upon itself.
… This is literally illogical reasoning. If we redefine AI to mean something it’s never been defined as… unfortunately, logic has left the chat at that point.
Debatable, since there are plenty of other unavoidable existential threats that are far more likely than the best estimates that AI will wipe us out. E.g. supervolcano eruption, massive solar flare, asteroid impact, some novel virus.
At least we can take comfort in the fact that if an AI takes us out, one of the aforementioned will avenge us and destroy the AI too on a long enough time scale.
I find it striking that we have a rich cultural tradition of claiming we're artificial beings. Maybe we're building a successor lifeform... I've thought about this as a story premise: humans and robots are two stages of a lifecycle. Humans flourish in a planetary ecosystem, build robots that go on to colonize new systems, where they seed humans because (reason I haven't been able to formulate).
They lost their money on the short position because the price went up instead of going down. Had other people noticed the hack as early as they had assumed, and had the price gone down as expected, they would have made a lot of money.
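As a rough sketch of the arithmetic behind a short position (the entry price, share count, and prices below are all made up):

    // Hypothetical numbers; a short profits when the price falls and loses when it rises.
    const entryPrice = 100;  // price at which the shares were sold short
    const shares = 1_000;

    // What they expected once news of the hack spread:
    const expectedPrice = 60;
    const expectedPnl = (entryPrice - expectedPrice) * shares;  // +40,000

    // What actually happened: the price rose instead.
    const actualPrice = 130;
    const actualPnl = (entryPrice - actualPrice) * shares;      // -30,000

    console.log({ expectedPnl, actualPnl });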
I used to maintain several popular open-source projects and contribute to even more popular ones. It was always fun at the beginning, especially because I built them for my own needs. But I kept getting asked to fix bugs or improve things long after my own needs had gone away. I tried the donation route for a little while but it didn't go anywhere - I received maybe a few hours' worth of money (versus the hundreds if not thousands of hours I had spent working on those projects). I also tried releasing a paid version of one of the projects and got buried in hate mail and, unfortunately, online abuse. That was when I stopped working on open source, and I'm happier than ever.
I'm very happy for people who make it from doing serious open-source work. I think they deserve it. But at the same time I feel bad for those who build or maintain no less serious or popular work and yet couldn't make enough to be worth even a fraction of the time they'd spent.
Sponsorship is not a panacea unfortunately as corporate use of a project is not proportional to the maintainership burden. Hopefully the industry can fix this one day.
If you could even get it! Think about all the millions of dollars dealing with the Log4J issues last December. How much has that prompted sponsorship of any logging libraries by the affected companies? Not much, from what I've seen.
I related pretty strongly to this. I've never tried to monetize, and the community I'm mostly working in (ROS) is populated almost exclusively with exactly the sort of kind, considerate people who will happily roll up their sleeves to take a crack at it themselves, given a little guidance.
Nonetheless, there are dozens of effectively abandonware ROS projects out there attached to my name: drivers for some sensor I shipped years ago and haven't touched since, interface libraries that aren't really relevant anymore but don't have a clear alternative, stuff that never made it out of the prototype phase and doesn't have anywhere close to the level of test coverage that would let me merge, much less release, changes without extensive manual testing.
I suppose I should go in and just mark them all as archived so that well-meaning people don't file issues (and even PRs) that will never be addressed or perhaps even acknowledged. And in some cases I've just granted PR authors write access and been like "there it's yours now." But none of these end states feel quite right; in all cases I end up feeling guilty and unsatisfied with how it turns out.
> And in some cases I've just granted PR authors write access and been like "there it's yours now." But none of these end states feel quite right; in all cases I end up feeling guilty and unsatisfied with how it turns out.
What feels wrong with this? Personally I'd much rather hand a project over to someone else than leave it completely archived.
It's the ideal outcome, I guess. But a number of projects end up just re-abandoned six months later after the person merged their own thing and pushed a release with it. I guess that's good for having moved it forward at all, but it certainly doesn't feel like a complete solution.
I wonder if such projects should be collaboratively maintained rather than attached to your name? Then anyone who is part of the ROS community and ends up needing them can take on fixing any issues they encounter. As long as the org holding them has a liberal enough membership policy, this should work well.
"It's hard to maintain a project as a single-maintainer. Nobody puts in more than a cursory effort to contribute."
"Yeah, but what if other people contributed?"
It's a huge chicken-or-egg problem.
At this point, unless you have a big name behind you (being a major corporation like Google or Facebook, or OSS celebrity like Torvalds), the chances of any particular project growing beyond single-maintainer are really slim.
I've seen several people, on my own projects and others, talk about not trusting single-maintainer projects to last as a reason to not get involved. It's infuriating, because it's a self-fulfilling prophecy.
And there are no guarantees the major OSS names will perpetuate their projects any better than an independent, no-name, single-maintainer. If anything, I've seen companies like Google drop projects much faster than I've ever done in any of mine.
People see the name and assume there is already support behind the project. Yet often the projects start as a single-maintainer thing that the company then puts a marketing machine behind.
I had a project years ago that was my first promising open-source project. It was growing and had a small handful of minor-but-meaningful contributors. People saw my name on 95% of the commits (I was working on it full-time) and called it single-maintainer. A year after I released, a major corp announced they were releasing a competitor. They hadn't released it yet, just announced. Overnight, I saw interest in my project dry up. It took another year for that other corp to actually release, during which time they had fewer actual contributors than I had. When they did release, it took another year before they reached feature parity with me, but by that point it didn't matter. They had started telling people theirs was the first such project ever. Randos online started accusing me of copying the other project. I couldn't keep up with development and marketing on my own, especially after the digital agency where I was working on it went belly up.
I knew most of the people on the team at the major corp. We were in a very niche industry and we all knew each other. I think it was that "ours is the first" bit that really got my goat. They not only knew me and knew about my project, they even admitted they had been using my project for testing compatibility of another project they develop.
I'm sorry that happened to you. That is a horrible way to treat people; Mozilla and all open-source users and contributors need to do better than that.
I'm a full-time maintainer that has luckily made it work. Donations don't work. Carving out business specific features and paywalling has worked for me.
But it's not a tenable solution for most projects. I've been thinking about how people in your position could basically "cash out". Your projects are sitting dormant, and PR authors are too intimidated by the maintenance burden to take the lead. Maybe some sort of robotics-vertical-focused private-equity-type organization could buy out your project (where you get paid to essentially transfer the repo to their GH org, and they then take up maintenance and monetisation)? Would you accept a deal from them? How much would you ask for one of your popular repos?
Start by adding breakpoints for some key actions of the app, then step through those flows with the debugger. On the first pass you can just step over functions to get the high-level idea. On subsequent passes, you can step into the functions that seem important. Rinse and repeat until you understand those actions well, then move on to other areas of the app.
This works because you can see the actual end-to-end execution flow (not always clear from reading code), inspect runtime data (impossible by reading code) and even change the runtime data (variables, DOM) to validate assumptions about how the code works.
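A minimal sketch of that workflow in a browser app; the function and element names here are hypothetical, any real app's entry points would work the same way:

    // Hypothetical checkout flow, used only to illustrate where breakpoints go.
    interface Cart { items: string[]; total: number; }

    function buildOrderSummary(cart: Cart): string {
      return `${cart.items.length} items, total ${cart.total}`;
    }

    function submitOrder(summary: string): void {
      console.log("submitting:", summary);
    }

    function onCheckoutClick(cart: Cart): void {
      debugger;                                // pauses here whenever DevTools is open
      const summary = buildOrderSummary(cart); // first pass: step over, note what comes back
      submitOrder(summary);                    // later passes: step into the parts that matter
    }

    // While paused, the console lets you inspect and rewrite runtime state to test assumptions, e.g.:
    //   cart.total = 0;
    //   document.title = "debug run";
    onCheckoutClick({ items: ["a", "b"], total: 42 });

If you'd rather not edit the code, setting breakpoints (including conditional ones) directly in the browser's Sources panel works just as well.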
> what relevant software have they actually written
I think Fitnesse [1] is quite relevant. That said, there isn't a lot of FOSS work from someone like him putting the things he preaches into practice in large and complex projects whose source we can look at and learn from.