Dependency (xkcd.com)
808 points by kjhughes on Aug 17, 2020 | 294 comments



A couple years ago I wrote a program to make dependency diagrams just like this! https://github.com/kach/tower-of-power

Here's `pdf-redact-tools` from Homebrew: https://github.com/kach/tower-of-power/blob/master/gallery/p...

Here's the CS core at Stanford (dependency = prereq): https://github.com/kach/tower-of-power/blob/master/gallery/c...


Speaking of homebrew, its author definitely fits the tiny block in this cartoon.


We're actually a thriving team now, although donations (and more eyeballs!) always help :-)


If you didn't ping each other, seeing this within 15 minutes is a nice testament to the reach of HN.


There are bots that watch HN for mentions of specific keywords.


I honestly don’t know the Homebrew team, but I use it, and it was the first thing that came to mind when I saw this xkcd.

Edit: partly because I also know the original author was denied a job at Google. Imagine the productivity gain this team has given to a company the size of Google.


A number of maintainers browse and contribute a fair bit.


The first thing I thought of was OpenSSL:

https://news.ycombinator.com/item?id=7575210


I thought it was referring to the "Olson" time zone database, used by virtually every operating system except Windows to keep track of time zone changes:

https://news.ycombinator.com/item?id=3080437

Voluntarily maintained by a single person for decades, until he was hit by a big lawsuit and ICANN stepped up to help.


GnuPG and ffmpeg also come to mind.


Yes, I remember GPG being in the news for that:

https://www.propublica.org/article/the-worlds-email-encrypti...

But I also remember the takedown arguing that no one actually relies on email encryption for security:

https://news.ycombinator.com/item?id=22368888


I'd argue Homebrew is the cart that brought the bricks there, not the bricks themselves.

Yes, it's an important piece of software. But it's not "required" for any services itself.


OpenSSL on the other hand...


Yeah let's not get into that conversation...


Don't even go there. Might jinx it.


I made https://github.com/mimoo/cargo-dephell to see the GitHub stars, contributor counts, last-updated dates, etc. of your Rust dependencies


Can you imagine the dependency tree for a project with `node_modules`? For a create-react-app it would be thousands of modules deep...


There is a tool for visualizing node_modules dependencies here:

https://npm.anvaka.com/#!/view/2d/webpack

And yes, the dependency chains become absolutely insane for some projects.


I would be interested in seeing that.

The graph would probably horrify me as well, though.


Or a Solver trying to build the graph would return: "You seem to be in dependency hell."


Does it distinguish between dev dependencies and runtime dependencies?


I think Guix provides a better dataset for this sort of analysis, given its emphasis on reproducible builds.


https://github.com/kach/tower-of-power/blob/master/tower-of-...

That's a nice instance of literate programming. I don't come across those very often.


Tip for those not up to scratch like me:

Click "Raw" to see the image


Is it just my browser, or is the image for pdf-redact-tools really a bunch of wasted space in the top half, with the bottom row of dependencies barely legible?


Hrm. Can any DAG be drawn as a box diagram?


If a box diagram has to be a planar graph, then no:

    digraph G {
        a -> {x y z}
        b -> {x y z}
        c -> {x y z}
    }
cannot be drawn on the plane without any crossings.

https://en.wikipedia.org/wiki/Three_utilities_problem
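
For what it's worth, the claim is easy to check mechanically. A minimal sketch, assuming the third-party networkx library (any planarity tester would do):

    # Build the graph from the digraph above: a, b, c each depend on
    # x, y, z. Edge direction doesn't matter for planarity.
    import networkx as nx

    G = nx.Graph([(u, v) for u in "abc" for v in "xyz"])

    is_planar, _ = nx.check_planarity(G)
    print(is_planar)  # False -- this is K3,3, one of Kuratowski's forbidden graphs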


Isn't two "utilities" enough here?

Suppose we have two components-- X and Y-- which depend on two other lower-level components-- A and B. How would you draw this as a block dependency diagram without repetition?

If both X and Y depend on B, then X and Y must be above B, and both touch B. If X and Y are orthogonal rectangles, then the entire X-Y edge is obscured from below. This means that another lower-level component A can only interact with either X or Y, not both. You'd need to allow X or Y to no longer be orthogonal rectangles to overcome this, or give up on the vertical organization of dependencies.


Hmm. You're right, not if we are restricting ourselves to rectangles. But if we make A u-shaped, with B nestled in it, we'll be in business!


Hey, there was a Phys.org article about that problem today!

https://phys.org/news/2020-08-solution-math-riddle.html


This is a great question! I don’t actually know the precise condition for box-diagram-ability of a DAG, but I’ve spent a long time thinking about it; it’s partly why I wrote that program. It’s not a deterministic algorithm, it’s powered by a SAT solver which may return “UNSAT” indicating failure. If anyone has a better answer I would be _very_ interested in reading about it!


I guess the answer is no... assuming 1 is on the bottom:

1->2; 1->4; 2->3; 3->4

Unless 4 is slanted.


The key part is performing a topological sort, which you can do on any DAG.
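
A minimal sketch of that step, using Python's standard-library graphlib (3.9+); this gives you the layering, not the box packing itself:

    from graphlib import TopologicalSorter

    # Each node maps to the set of nodes it depends on.
    deps = {
        "app": {"http", "json"},
        "http": {"sockets"},
        "json": set(),
        "sockets": set(),
    }

    # static_order() yields every node after all of its dependencies.
    print(list(TopologicalSorter(deps).static_order()))
    # e.g. ['json', 'sockets', 'http', 'app']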


The author of one such library faced serious problems. He opened up about it yesterday in this blog post.

https://medium.com/@behdadesfahbod/if-you-read-one-thing-fro...

His tweet: https://twitter.com/behdadesfahbod/status/129532694517805465...

The library: https://github.com/harfbuzz/harfbuzz


Oh wow, he was at Google Waterloo when I first started and was a super nice guy.

This is a hard and terrifying read. I really feel for him.


No amount of revolutions can change the conflicts in the Islamic world. What is required is to reform Islam: version 2.0.


Who attempted to force his employer to fire him for a code of conduct violation?


I don't know what part of his story you refer to; if it's:

"It took me weeks to regain access to my accounts, which were disabled after my employer (Facebook) was notified and embarked on disabling them."

I guess they disabled his account to prevent the intelligence services from accessing any sensitive data.



I think it's an unrelated issue.


Image title:

> Someday ImageMagick will finally break for good and we'll have a long period of scrambling as we try to reassemble civilization from the rubble.

EDIT: don't forget to always hover over the image. Half of XKCD's fun comes from those tidbits.


I was fumbling with a random piece of software one day, then realized that ImageMagick was installed as a dependency. "Oh," I thought, while recollections of years past flooded my mind. After a few minutes of reading ImageMagick documentation, I stopped fumbling around and completed the work with ImageMagick instead.


Honestly, yes, imagemagick is such a ridiculous dependency for everything.



Those all do well-defined, well-understood things; they could be replaced relatively easily, and subjectively they "feel" pretty solid. ImageMagick is a complex thing full of subjectively flaky perl and it's not at all clear which things are and aren't supported by it. I'm a lot more worried about ImageMagick breaking than any of the things you list.


> they could be replaced relatively easily

That's a pretty big loophole, and in the case of SQLite you'll need it. To re-create something that solid will take a lot of effort.


I keep a nice bottle of red just to celebrate the day I rip ImageMagick from the guts of the project I’m working on.


As someone who's just now building imagemagick into a system I'm working on as a major dependency, what issues have you had with it?


My issue is that it's overkill for just resizing images (which is what we're using it for; previously it might have had more uses, but now it's reduced to this). Also, it gets patched for security often. Like, really often. Maybe more often than is comfortable to keep up with.

Then, there are major incompatibilities between versions 6.9.x (which is what even the latest Fedora ships, and likely other distros as well) and 7.x (which is probably what's sane to use in this day and age). Which means that to use 7.x, you're likely to have to roll your own build and tuck it into its own LD_LIBRARY_PATH. That hurts.

The cherry on top in my particular case is PerlMagick. It's not installable in any way other than along with ImageMagick itself. You cannot have just the Perl modules, the kind you'd install with "perl Makefile.PL && make && make install".

When you see separate packages for ImageMagick and PerlMagick in your distro, it means that the maintainer has compiled the whole of ImageMagick, then torn it apart. A corollary is that PerlMagick is tied very tightly to the binaries it came with, and updating one means updating the other most of the time.

Then again, it's very likely that you use some other bindings to ImageMagick and my pain is not like your pain.


I agree using imagemagick to resize images is ridiculous overkill, that's using something like 0.01% of what it's capable of...

But then again we also use groupware/chat programs these days that occupy more RAM on the workstation OS than the entire amount of physical RAM in all of the BSD/Linux servers of a mid-sized ISP 18 years ago.


RAM usage is your concern? Then using ImageMagick to resize an animated gif is going to get you very quickly acquainted with policy.xml

It doesn’t use intermediate disk store for frames AFAIK so you’d better have enough RAM.

In fact, you’d better just resize with ffmpeg and then convert to gif there.


Yep, a big bundle of hard to compile code written in an unsafe language.


Isn't it super easy to resize images? I wrote a program to resize images after watching a computerphile video on the subject and it took like ten to twenty minutes to get something that seems functionally identical to "real" resizing. If you can read your image as a two dimensional array of bytes you can resize easily enough.
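
For concreteness, a sketch of the nearest-neighbor version, assuming numpy and a single-channel image as a 2-D array (the easy case; see the replies below for what it leaves out):

    import numpy as np

    def resize_nearest(img, new_h, new_w):
        # img is a 2-D array of pixel values (one grayscale channel).
        h, w = img.shape
        rows = np.arange(new_h) * h // new_h  # source row for each output row
        cols = np.arange(new_w) * w // new_w  # source column for each output column
        return img[rows[:, None], cols]       # fancy indexing -> (new_h, new_w)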


If you only want bilinear and nearest-neighbor, yes.

Lanczos is much less easy to write, and even harder to make run fast.

And there are things like subpixel positioning (it is way too easy to write a resizer that shifts the image by 0.5 pixels).


Add on top of that correct gamma and color space handling...


The power of ImageMagick is when you need to support multiple formats. Resizing BMP is easy, the hard part is supporting the various ways that data is encoded.


Not the parent poster, but it does a lot and is a regular source of potential security issues, and there's always a worry you haven't turned off everything you don't need etc. So if I process external inputs and don't really need ImageMagick, I prefer using more limited pieces.


Yeah, the frequency with which they have to issue security patches says a lot about the overall code quality.

One might argue that it's because it's more popular than other similar projects, but it doesn't happen as often with, say, Pillow. And I don't quite remember PIL/Pillow having holes as big as ImageMagick.

Also, try actually programming against it. Whatever language you choose, it's going to feel clunky and alien. It feels alien in C. It feels alien in Perl. Even in PHP, the match made in hell, it feels thoroughly un-idiomatic. I'm not even surprised they have to patch it up now and then.


Parsing complex untrusted inputs in a language that isn't memory safe is the center of the unholy venn diagram. It's very hard to do a good job under those conditions.

While "rewrite it in Rust" is a cliche, I think ImageMagick is a good candidate!



Can it easily sprout bindings to other languages, especially given Rust’s ABI stability guarantees, which are as firm as melted ice cream?


Its README addresses that. TL;DR: it exposes a C-API and offers bindings based on that, like pretty much any language allowing bindings would.


Replying to sibling comment: It's only overkill if you know in advance you only need one specific kind of input file format. If you need to process many (or arbitrary) image formats with excellent compatibility, then ImageMagick is a godsend.


ImageMagick always seemed to be a bit shaky to me. I usually prefer to use netpbm when I can because each utility is so tiny that it just feels better.


Netpbm seems to have terrible support for modern image formats. Unless I'm missing something, I can't imagine using it to handle user-supplied images.


Came here to suggest libvips, but whilst checking my facts I discovered that even it has an (optional) dependency on ImageMagick.


That's optional, and only for loading (not saving!) additional filetypes beyond the ones implemented by libvips itself.


PhotoStructure (very happily) uses libvips (via sharp) and doesn't include any ImageMagick pieces-parts.


Why is GraphicsMagick [1] not a more popular alternative?

[1] http://www.graphicsmagick.org/


Same reason any legacy dependency remains; the cost of replacement is higher than the cost of maintenance.


It's meant to be a drop-in replacement, so the cost of replacement should be negligible.


Both projects are moving fairly fast, and regularly adding new features that are not shared.

The last time I made heavy use of ImageMagick/GraphicsMagick, I was surprised how often I would find a feature present in one and not the other (often things like drawing gradients).

And it's not like one has a superset of the other's features: I once painted myself into a corner and created a project that depended on both ImageMagick and GraphicsMagick.


There probably still exist some weird edge cases introduced by imagemagick that the developer(s) behind graphicsmagick haven't accounted for.

That is, should graphicsmagick account for quirks introduced by imagemagick? If it's meant to truly be a drop-in replacement versus one with some strings attached, I would say the answer is yes. Otherwise, you can't easily just drop it into a system and let it run.

Systems built with libraries that have been around as long as imagemagick likely have all sorts of workarounds built in over the years for that kind of behavior, and any deviation from imagemagick might break things more than one expects. We've all been bitten by libraries that claim backwards compatibility with previous versions, only to find out there are breaking changes in some non-trivial use cases the library devs didn't consider. Add those complications on top of switching to a library that is supposed to be a drop-in replacement, and it can get pretty messy for what seemed like a trivial task.


Format support lags. For example, it does not currently support HEIF.


Almost nothing supports HEIF.

Consider: most state-run web apps that need an image upload for personal ID photos, or anything else. Ugh, this happens to me frequently and just frustrates me to no end.

EDIT: I suppose this is why you're saying ImageMagick is king right now.


Well, yes. It happened only recently, but ImageMagick does indeed support HEIF. :)


Is it a fork?


Never heard of it either, but the homepage provides some info:

> GraphicsMagick is originally derived from ImageMagick 5.5.2 as of November 2002 but has been completely independent of the ImageMagick project since then.


https://marc.info/?l=imagemagick-developer&m=104777007831767...

More info on the reason for the fork (at the time)


Interesting. But today they are both collaborative projects?


And at the same time its bus factor is 1.


"Honey, can we upload some new photos to my website?"

[ tech spouse takes a look at the image set ... rotated, various sizes, even different formats ... reaches for convert(1) and wonders why non-tech spouse can't do this easily ]


I still prefer netpbm, much leaner.


I use imagemagick to split spritesheets into single images for games that I make[0]. More than one gamedev has saved countless hours thanks to it, so I 100% hope it never breaks.

[0]: https://gist.github.com/ldd/9b576bb6f0ac99a0a7895eaf1aae7802


My "ff-dice" program (together with "pngff" to decode PNG, and "ffpng" to encode PNG; these two also require LodePNG, which is included, and which itself has no other dependencies) also does that.

Netpbm also does this; you can use the "pamdice" program (together with "pngtopnm" and "pnmtopng" to decode and encode PNG). However, pamdice requires creating intermediate files, while ff-dice doesn't require that.

So, you have two more things to try if ImageMagick won't work.


Do you have a link?

Ironically, trying to Google for `ff-dice` or `pngff` sends me back to this thread!


There is a Fossil repository at: http://zzo38computer.org/fossil/farbfeld.ui

(Although there are a lot of files, you will only need to compile the files that you need, and you can delete the files that you don't need.)


awesome!


Where’s the love for ffmpeg?


ffmpeg is great. I use it to transcode videos that our users upload directly from their phones from HEVC (H.265) to AVC1 (H.264) - it’s painless: just asynchronously shell-invoke it from Java, .NET, etc. I’m surprised it works well even in tightly locked-down sandboxed environments like Azure App Services (nee Azure Websites). It’s saved my org thousands of dollars we’d have otherwise spent on something like Azure Media Services. It “just works” and I’ve never had it fail to process any legitimate video file.

Props to ffprobe too - that’s part of our file upload + identification + sanitation process.
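
For reference, a sketch of that kind of shell invocation, here from Python rather than Java/.NET (file names and exact codec settings are illustrative, not the setup described above):

    import subprocess

    # Transcode a phone upload from HEVC (H.265) to H.264 + AAC in MP4.
    # "-movflags +faststart" moves the index to the front of the file so
    # playback can begin before the download finishes.
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", "upload.mov",
            "-c:v", "libx264",
            "-c:a", "aac",
            "-movflags", "+faststart",
            "output.mp4",
        ],
        check=True,
    )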


Isn't H.265 the successor to H.264? Wouldn't H.264 use more space?


Chrome doesn’t support H.265 by design, even if its host OS supports it, because Google is trying hard to encourage people to use the (purportedly) patent-unencumbered WebM format and codec instead.

In summary:

H.264 has the widest adoption of all modern video formats.

H.265 is Microsoft + Apple.

WebM is Google and Firefox.

H.266 came out a few weeks ago.


For those on mobile, the https://m.xkcd.com URLs provide a tappable alt-text link that shows these tidbits: https://m.xkcd.com/2347/


On iOS you can long-press on the image.


This is the funniest (to me) alt text I've ever read. Had me chuckling for a solid minute!


Thanks for the hint, I never knew it had tooltips with hidden messages :)


xkcd isn't the only comic that puts a bonus in the img tag alt text. I think Wondermark does, and I wanna say that at one time SMBC had both the alt text and the red-button extra panel, and it may still. I always check for it when I find a new webcomic now.


SMBC still does both.


Unfortunately the older comics didn’t get alt text, although many got the extra panel afterwards.


There’s also m.xkcd.com if you tire of hovers.


Ooh, thanks; useful when I'm using my phone (as the m indicates, obvs), which truncates hover texts.


Early betas of iOS 13 actually used to do this and xkcd was explicitly brought up as a reason to not truncate ;)


There is also explainxkcd, which displays the title text underneath the picture, and has categorization, searching, explanations (in case there is anything you forgot, or do not know, or just want to read other people's commentary), transcripts (helpful for search, and also to copy the text), and more.

Additionally, if you have user CSS in your browser, you can force it to display the title text even on the official xkcd web pages (CSS's content property can display the text of HTML attributes).


How do you do it on mobile?


Depends on your browser. On Chrome, just long-press on the image until the context menu pops up with the alt text in it.


Thanks but it only shows the first part of it.


On iOS, long press on the image.


Try using https://m.xkcd.com/ and click the comic to toggle the alt text


Or gpg


This trend towards huge dependency trees has started to turn me off getting serious about learning software development. It seems like unless you want to make a toy program, you end up with hundreds of dependencies, most of which are made by volunteers who could decide they don't feel like working on it any more. Even having someone paid to do the work doesn't seem like it matters anymore when you consider that Mozilla just laid off a bunch of technical employees to focus on whatever else.

Are there any other solo devs who get "dependency anxiety" and don't want to start working on a project because you worry that you'll have wasted your time if one of the hundreds of dependencies breaks something, and there is nobody to fix it (like a crypto library for example)?


I'm sorry to hear you feel afraid to learn more software development, but I think these concerns are largely unfounded. First of all, dependencies don't break just because they are unmaintained. If it worked ten years ago, it should still work now. Particularly complicated dependencies are likely to have some bugs, but if they are popular, these bugs are often either things you probably won't ever come across or things you can work around if you do.

Finally, the beauty of open source software is that you can fix things yourself! Found a package from 10 years ago that works 99% of the time, but you just need to fix one thing or add one tiny feature? Not a problem, you fork the repo and add what you need.


When I started Python development I felt exactly like the OP did. There was this unmaintained library that didn't support Python 3 and was breaking everything; it turned out it took 15 minutes to change a few print and import statements to get it working perfectly.


> Finally, the beauty of open source software is that you can fix things yourself!

Don't you need proper domain knowledge to hack on some obscure error in SVG conversion inside of Cairo? All you were trying to do is display the company's logo.svg from your graphics department, who exported it from Illustrator version 5.0 because the company is too poor to move to Adobe's subscription model. Maybe there's something goofy in the SVG data.

You spend the next few days fixing this open source package, ranting on Stack Overflow, and finally have to concede and file a bug report on GitHub.

It gets rejected because you have not gathered all the necessary details to file a bug report including proper stack traces and operating system information.


If all you want to display is one logo, then you fix the logo, not Cairo. You are in more trouble if your business is accepting SVGs your users upload and only a minuscule fraction breaks for unknown reasons.


Gotcha, thanks for the feedback. I've never done professional software development, it's just something I picked up to broaden my skillset, so it's good to know that the concerns aren't really an issue in practice. Regarding fixing bugs in the dependencies, I think that's like a chicken/egg problem for me. I use the dependencies because I don't have the time or skill yet to make the functionality on my own, which means I also don't have the time/skills to fix bugs in the dependencies. I think I probably have to buckle down and get over that hump to be confident enough to make bigger projects that need more dependencies.


Part of the skillset is evaluating dependencies on how well they fit with your goals, both technical and strategic, what level of risk/benefit they pose, and how those dependencies are placed in their respective ecosystem.

Usually any ecosystem has a core set of dependencies that are heavily relied on by most or a lot of projects as the go-to solution for a given problem. Then there are often sets of dependencies where there are several popular well maintained choices to pick from depending on your needs.

Where it gets a little trickier is things that are a bit more niche. That being said, most software development just wouldn't be possible without standing on the shoulders of giants, so to speak, because it's just so time-consuming otherwise.

They're far more of a benefit than a detriment but you have to learn to assess them and make choices based on your individual considerations.


Skillset -- reading other people's code, especially if you can get to the point where you understand it well enough to fix bugs in it, is a great way to work on building your skillset.

Time -- yes.


Dependencies do break because they are unmaintained. All the time. If it worked ten years ago, there is no guarantee whatsoever that it will work now, unless you:

(a) run it in an environment of ten years ago (otherwise other platform dependencies may be unavailable, or platform behaviour may have changed);

(b) ensure that environment is already fully set-up (otherwise internet systems from which it fetches other dependencies may be unavailable, e.g. due to a service being taken down or certificate changes); and

(c) are using strictly offline-capable software (e.g. if a commercial library does an activation/licensing server check, that may fail).

In other words, to be confident, you need to freeze a snapshot of the entire system, which is obviously terrible for security.

If these conditions are not met, things may or may not work. Some platforms will be more robust over time than others. Scripting languages are generally atrocious, with an installability half-life well under ten years (I’m going to guess something around five years). Compiled languages are a mixed bag. Rust is one of the best options out there, so long as it’s pure Rust code only and doesn’t include any bindings on more build-fragile code.

I shall give a few examples of the sorts of problems that can occur and have occurred to me in the last decade.

• In 2012 I wanted to install IE6 on a Windows 2000 machine (don’t ask). But the underlying infrastructure at Microsoft was gone, so even if I had the full installer, it failed to install.

• A few years ago I wanted to install something that depended (transitively) on some key foundational Perl package, but due to recent Linux kernel changes that package’s tests no longer passed. (Perl defaults to running the tests when you install. You can disable that, but clearly the software isn’t running as expected.) Fortunately the package was maintained, so I just had to install the newer version, overriding the stated dependencies.

• You could be trying to install something with Node.js and it doesn’t work beyond Node 0.12 or 4 or something, so you’ll need to figure out how to get that old version of Node before you have any hope of running the project. And that probably means “figure out how to build it from source”, which is harder than you might hope, because it probably depends on old versions of libraries that used to be easy to get from your OS’s package manager, but are now unavailable, so you’ll have to build them from scratch too, recursively.

• I used to use Hyde to generate my website. A couple of years ago, five years or so after it was abandoned, it had become very tedious at best to install anew, due to being part of the declining Python 2 ecosystem. A few years ago I think I tried setting it up on Windows and gave up after a while, though that’s hardly a fair comparison on its own. On my server, I had used https://aur.archlinux.org/packages/hyde-git to install it, but that’s no longer installable, because it depended on packages in the community repository that have been removed and have not resurfaced in AUR. This is probably not insurmountable, as you can probably install the package in other ways, but it indicates that the way that I had installed it has broken.

I had two or three more examples in mind, but I’ve just been called for lunch.

So just one more note: use lockfiles, they’ll normally make things more robust, but be ready to need to update some of those dependencies in order to get things to work at all a few years down the track, and you’ll be practising trial and error over which.


So don't use huge dependency trees. I largely keep to the standard library of a language plus very precise libraries that handle a lot of complicated logic in one domain like, say, XML handling.

Also, some things are going to be around for the foreseeable future, like POSIX, Win32, or PostgreSQL. Debian Linux isn't going away any time soon, nor is TCP/IP. Too much of our economy depends on TLS for it to be horribly abandoned for long. After a decade or so you will get a feel for what things you can assume will be there for you and what things are ephemeral.


Where I work, anything outside of the standard library and certain security-approved dependencies requires a miserable approval process: an authorization package has to be submitted to security, who will require upgrades anytime a CVE with a high enough assessed criticality is posted for the dependency (or other concerns regarding the source arise). This approval is revisited on a regular schedule (or the dependency is added to the standard list of approved dependencies).

The net effect is that there isn't dependency creep where any team can just continuously add dependencies to a project without their ongoing justification and visibility.

For personal projects, I'm very liberal with using dependencies that can get something in my mind functioning quickly before my motivation dries up. I think I shared the OP's anxiety regarding at least the longevity of what I write until I came to peace with the fact that my software will not function forever, and likely not for the rest of the decade. Relevant Tom Scott vid: https://www.youtube.com/watch?v=BxV14h0kFs0


> POSIX, Win32, or PostgreSQL

Interestingly, some of my Win32 code broke this week after the latest update was released. It was in a fairly obscure part of the system and I was pushing at the border of what was possible. Nonetheless, I have 500 customers running around with their hair on fire at the moment, and it's making me wonder why I am continuing in this business.


We're all building off the work of others, unless you start with silicon from sand. Just choose your dependencies wisely. A web framework might not be around in a year but your OS's socket interface will.

And if a dependency breaks, _you_ can be the one to fix it. Just, you know, don't depend on closed source code.


Don’t let it turn you off. Just don’t join the club.

I’ve been doing software dev for close to 20 years. I’ve been watching various frameworks come and go for years. Somehow React gained precedence in web land, but based on the last 6 years of dealing with that hot mess, it’s just an even deeper level of dependency hell, but for a single framework.

The recent work I’ve been doing has been entirely back to basics. It’s extremely satisfying to write web software again that’s fast, easy to deploy, and easy to debug.


You're assuming your hypothetical project is gonna work and be successful, and that dependencies breaking will be the end of it. In reality, most projects never get that far; they go nowhere. Building successful software is a lot of work. Building the wrong thing, scope creep, and not finding an audience are all much bigger problems.

That being said, just start. Build something. Make it work. Learn. You can figure out problems with dependencies going out of support when you get there.


The flip side is 12 people could run Instagram before acquisition.


Mature projects make sure to depend on "well maintained" projects for this reason. However, in general dependencies don't just spontaneously break. If you're serious about the project and have to upgrade due to security vulnerabilities, you typically rely on stable branches whose purpose is to prevent breakages like the ones you're worried about.


There are other (non web) development systems that don't have huge numbers of dependencies - Delphi and Xcode are two that come to mind. You can add other projects to them if you so desire but if you want to learn programming with something out of the box Delphi is the one to look at imho. Delphi community edition is available for free https://www.embarcadero.com/products/delphi/starter.

(An unpopular opinion - The web is a hierarchy of hacks and patches because the web browser was never made to do what it's being asked to do.)

Edit: here's some YouTube courses that are very introductory https://www.youtube.com/watch?v=wsGpj-FVjGs&list=PLZZqoiUyRB...


> unless you want to make a toy program, you end up with hundreds of dependencies

One way to avoid this is to choose a programming language (e.g. Python or Go) that has a "batteries included" standard library. If you choose your dependencies from that standard library whenever possible, it's likely that they will be maintained for as long as the language itself.

Using the standard library may not be the fastest way to solve your problem, and it won't be the "coolest", but it probably will be the approach with maximum longevity.
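
As a sketch of what that looks like in practice, fetching and parsing JSON with nothing but the Python standard library, instead of adding a third-party HTTP client (the URL is hypothetical):

    import json
    import urllib.request

    # Standard library only -- no third-party HTTP client to keep alive.
    with urllib.request.urlopen("https://example.com/api/status") as resp:
        data = json.load(resp)
    print(data)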

Unfortunately this solution doesn't work quite as well if you're using a language with a poor standard library - for example when writing JavaScript code to run in the browser. I wrote one project using an open-source HTML canvas library, only to see the library abandoned shortly after. Since then, I've avoided those libraries in favor of using the raw canvas API - it's significantly more work, but it will be supported for decades instead of just a couple of years. I've decided to avoid Bootstrap and its "long-term support means 3 years" in the same way.

> Even having someone paid to do the work doesn't seem like it matters anymore

In fact, I think commercial solutions may be more likely to be abandoned than free ones. I'd trust Python's community of enthusiastic volunteers to maintain their language for longer than I trust Google to maintain Go.


> (e.g. Python or Go)

golang is quite notorious for its dependencies pulling in tens or even hundreds of other dependencies


Yes, Go suffers from the problem of every direct third-party dependency bringing in a large number of transitive dependencies. Almost every language does, because most developers don't mind.

The reason I recommend Python or Go to people like the OP, who do mind, is that their strong standard libraries allow you to mitigate the problem by minimizing, or even eliminating, those direct third-party dependencies.


I'd say because of the standard lib Python and Go suffer less than other languages. The third party libs benefit from the standard libs as well.

Actually, at my current job, one of the tech lead's reasons for choosing Go was exactly this.


> standard lib Python and Go suffer less than other languages

Depends on what those "other languages" are. For example, golang has nothing on the Java standard library (e.g. what does golang have to even remotely compare with `java.util.concurrent`?).


> golang is quite notorious for its dependencies pulling in tens or even hundreds of other dependencies

Plain wrong. Just stop trolling against Go.

1. The point is that you don't need as many third-party lib deps with Go or Python, due to the huge and useful standard libs.

2. For exactly this reason, the third-party dependencies don't need as many other third-party dependencies.


> Just stop trolling against Go.

Please assume good faith when responding to people: https://news.ycombinator.com/newsguidelines.html


In my experience, it's fairly easy to keep the number of deps down to a minimum when writing an average Go application. I have several apps that take less than 5 deps total (including recursive deps).

However, there are a few outliers. Most famously, the Kubernetes client libraries are made of hundreds of packages because of how they're structured and autogenerated (due to lack of language-level generics).


The way around this is to use dependencies sparingly and to just lock in a version. I can't say how many times in Java, for instance, I've pulled in a library, used one or two features, then shipped. A few years later in support, security will start auditing stuff and asking for updates and upgrades for patching. To which there's usually a conversation along the lines of, "I'm sorry, RichFaces doesn't exist anymore." There are ways around this; it's like managing any other risk.

I think the dependency anxiety comes from knowing that the solution would be to use a language like Java or Go, for their rich frameworks and simplicity. What holds you back and creates the anxiety is the downside that the tech stack stops being sexy and interesting.

On crypto libraries, you've got options here. You can avoid doing crypto in your app and push that into infrastructure, using some other app as a proxy to handle that work. This gives you freedom from issues, because you can manage that layer by swapping different SSL providers/clients in and out.


I wouldn't call it an anxiety really, but I've obtained some new perspective. I went from primarily working on app frontend for years using JS, and into a more API backend-ish role using .NET Core. In the former, dependency trees like this never really bothered me, probably because that was the culture around it in the JS world (at least at the time) and perhaps a bunch of other reasons. In the latter, I think .NET's ecosystem has a tradition of fewer external dependencies. It seems like it used to almost be a monolith, where basically Microsoft provided it, maybe a few major libraries existed, or you rolled your own, though that's just my impression of it.

Another huge factor has changed my thinking on dependencies too. My employer has legal and security requirements around all dependencies. Anytime a new dependency is considered, its license has to be reviewed, which takes time, and known CVEs have to be addressed (patched or otherwise mitigated). I consider these good, since they bring the costs of dependencies forward, but they're certainly a pain.

So I guess my perspective change is simply one of trying to understand the trade-offs. Some are well worth the cost because they're cheap, you won't gain much by rolling your own, and you'll benefit from a common one shared by everyone developing in the ecosystem (testing libraries are one example). If a UI library is applicable to your project, of course a modern common one is a good choice, though you will pay for a lot of dependencies. Then there are other options which are more context-sensitive. Examples that come to mind are date/time libs, math libs, collection helper libs, etc. Sometimes they make sense to have, but oftentimes today they can be substituted with a simpler function, though this can change with time too.

This really just turned into a ramble. So I guess I'll conclude with this: if you're getting started and learning, maybe try and skip libraries until you really feel the pain and then introduce one when you need it. It may help to build up piece-by-piece and reduce the anxiety you feel about it.


The huge dependency trees were a major turnoff for me when I started using the JavaScript ecosystem, like Node and React.

When installing React, I received errors and deprecation warnings in the latest build available. What kind of confidence is that supposed to give me, when something distributed by one of the largest tech companies in the world seems unreliable?

Even later, I faced numerous issues where I had to manually find a fix online because a random dependency disappeared or a new version was incompatible with the rest of my entire dev tree.


You aren't the only one. My reasons are quite similar, albeit from a different angle: I'm more concerned about my ability to understand the depth and breadth of these dependencies to make reliable software. It's a silly thing to be anxious about since I'm fairly certain that I am better than the average developer because of it, yet it also presents roadblocks on virtually any project I work on.


And at the same time, this system allows a few people, or even a single person, to do amazing things rather than building from the ground up.


Oh, well here you go then:

0 1

Wish you the best.


Hah! Try node.js for anything non-trivial and shudder.


Yeah, I've been using Rust for hobby stuff for the past few years, and I never paid much attention to how many dependencies get pulled in. I checked recently with the postgres crate, and that one crate resulted in just under 100 dependencies being pulled in.


It’s entirely possible to write good and efficient software with node without requiring an excess of dependencies.

It just requires strong responsibility and good practices.


Ok, I concede it is possible. Just like going to the moon is possible. But in practice, everything I see that pulls in the node ecosystem even peripherally suffers from this. Hundreds of dependencies are not rare at all, and some of them are for the most obscure packages.


You aren’t wrong. After years of seeing things going that way, I’m hyper critical of any new additions I make to my own software or anything my team is working on.


Couldn't agree more. To expand on the point a bit I feel that Node (and JS in general, and other wildin' langs like C++ too) mostly have disadvantages that can be circumvented with discipline while keeping their advantages intact. Go and related crew on the other hand impose discipline as a language feature, and if you circumvent it (if you even can) you're missing much of the point of the language.


It's NPM that's the problem, not node.


The dirty secret is there are always more volunteers. Have you ever noticed people say open source “looks good” on your resume?


Yes, it's the dependency system's weakness, but that's also what makes this system very strong:

If one of those base-level things breaks, everyone who depends on it will come together and replace/fix it instantly.


This is kind of true.

Look at the recent security issue with lodash. lodash is a dependency of a huge number of JavaScript and Node projects. For people who would say "don't use tons of random libraries", lodash was a great choice; it was almost the equivalent of a standard library for JS before ES6. Problem was, for all intents and purposes it was abandoned about a year ago. A couple of months ago a security issue was reported, it wasn't fixed, and then 'npm audit' started failing builds with lodash as a dependency. It was as if all of a sudden half the node ecosystem started failing.

The problem was not that the bug wasn't fixed, it's that the original author wasn't really involved any more, and it took a long while for other people with commit rights to figure out just how to get the build working. But the problem with any reasonably complex JS project is that there could easily be thousands of references to lodash. NPM doesn't make it easy to essentially say "I'm changing the namespace of lodash to mean this instead of that."

Dependencies are pretty unavoidable, and I think the software engineering community will actually start to get better at handling what happens when a widespread dependency needs to go on life support.


Lodash is in an incredibly pathetic state. I wanted to help, but last I checked the project is impossible to build and none of the tests pass. It's a completely broken repo and the maintainers barely contribute.


And according to Linus's law, if enough people depend on it, it won't matter how complex the breaking bug is.

https://en.wikipedia.org/wiki/Linus%27s_law


> If one of those base level things break, everyone who depend on it, will come together and replace/fix it instantly.

I don't see how this is causally related to that dependency being thanklessly maintained by a single person. Maybe I misunderstood what you are trying to say.


Or they won't.


This precedent suggests they will:

https://en.wikipedia.org/wiki/Heartbleed#Root_causes,_possib...

> The industry's collective response to the crisis was the Core Infrastructure Initiative, a multimillion-dollar project announced by the Linux Foundation on April 24, 2014 to provide funds to critical elements of the global information infrastructure.[192] The initiative intends to allow lead developers to work full-time on their projects and to pay for security audits, hardware and software infrastructure, travel, and other expenses.[193] OpenSSL is a candidate to become the first recipient of the initiative's funding.[192]

> After the discovery Google established Project Zero which is tasked with finding zero-day vulnerabilities to help secure the Web and society.[194]


Anything can happen, but your scenario has a very low probability; incentives do work.


Would anyone donate to a service that maps your dependency tree and breaks up your donation to support the projects in your entire tree (not only surface level) proportional to their usage or something like that?


It's been proposed and tried before, and I think generally deemed not worth it. a) Almost no one donates, b) the donations would be broken up to infinitesimal levels, helping no one, and c) there are people who devote their working lives to OSS without compensation, other people who write a few cute small libraries/tools but don't need the money, and then there are corporations who contribute to open-source for their own benefit. The proportion of money that would go to your favorite OSS heroes would be embarrassingly small; compounded by the low donation level and general dilution.


There was one time, though... When Red Hat went public, they gave Red Hat stock to a long list of free software developers who had made significant contributions.


Also, you end up with a handful of people making thousands of one-liner libraries that link to each other in an effort to game such systems.


At the same time, other people will be more motivated not to include one-liner libraries, so as not to further dilute donations.


Maybe that would lead to 1000s of new possible libraries to use in your projects, some of them useful.


Single line anti-gaming algorithms could be open sourced to combat this.


Would this be prevented if the fiscal flow followed the dependency graph?

On the other hand, in market capitalism, economy of scale is a powerful force for conglomeration that sometimes results in an anti-competitive reduction in consumer choice.


> c) there are people who devote their working lives to OSS without compensation, other people who write a few cute small libraries/tools but don't need the money

Serious question. If you are devoting your working life (that is, time that would otherwise be dedicated to remunerative work) to OSS without compensation, then by definition you probably don't need the money, right? Perhaps because you are already wealthy, or have a family that will support you. There's nothing wrong with this of course, many of us would contribute to OSS for free if we didn't have mortgages and families to pay for. It's basically like doing volunteer work after retirement.

Put another way, are there many people devoting their entire working lives to OSS with no other compensation or resources, essentially going bankrupt to write OSS? I'd imagine very few people are in that category, or if they are, can't sustain it for very long.


I took a sabbatical from work in 2017 and devoted that time to creating an open source project which I felt was important to exist in the world. I am not wealthy, nor was anyone else supporting me; I had saved up enough to take a year off, and instead of travelling or learning the saxophone, I spent my time developing software that I released under GPL.

I guess because I was not living hand-to-mouth, I didn't "need" the money? But I wound up nearly going into debt, because it took another year to get a job after that--turns out that maintaining an open-source project is not an asset for employers but a liability. I had an offer from a HugeCo for decent money, but I turned it down because it would basically have precluded me from working further on this project which is only now starting to walk on its own.

It's not my entire working life, by any stretch, but those 2 years were 5% of it, and I figure that cost me $200k+ in "lost" wages. And that does not count all the "free" time I've spent since, fixing bugs and helping people use it, because I love the thing and I want other people to love it too. I'm glad I created it, it's very useful to me and other people who do data exploration in the terminal, but now I have to accept that I probably won't ever recover any significant amount of that value, because...people don't pay for open-source software.


May I ask, what project did you build?


Of course! It's VisiData (visidata.org). Thanks for asking, let me know if you find it useful too :)


Thank you for an AWESOME tool that I regret not finding a use case for yet :D


Well for starters you can just pipe json or shell commands into it. Try `ps -afx | vd -f fixed`.


> I guess because I was not living hand-to-mouth, I didn't "need" the money?

More that you made the conscious choice to work without pay for 2 years, and you and any dependents didn't starve or end up homeless in the process. That's something few people can do.


You asked "Put another way, are there many people devoting their entire working lives to OSS with no other compensation or resources, essentially going bankrupt to write OSS?" He answered, "I wound up nearly going into debt". There are different meanings of "bankrupt", but "out of money" is a reasonable definition to use in this situation.

Besides that, it's in extremely poor taste to marginalize someone's sacrifice of two years just because you think they're privileged. It's usually better to make much of someone's contributions than to minimize them.


> Besides that, it's in extremely poor taste to marginalize someone's sacrifice

I didn't marginalize it. I said the ability to do it is rare. Those are very different things.

I think it would be great to live in a society where people can do what GP did without going into debt, but that will require bigger structural changes.


I guess you could have a heuristic that takes into consideration the number of lines of code in the project's source repository, the last date of commit (i.e. exclude projects that are unsupported for >3 months), the number of contributors, whether the author has opted in to receive donations, and some heuristic on the number and size of commits to try and estimate the number of hours spent authoring.

And then of course, manually approving each hours/month estimate above a certain amount i.e. 10 hours per month.


> heuristic that takes into consideration the number of lines of code from the projects source repository

Not only is this a horrible metric of value, it’s ridiculously easy to game.

It’s pretty hard to measure objective value of software. I think the closest is letting the market set a price and that has lots of flaws.

One of the upsides of OSS is that it bypasses that whole aspect.

I think the best approach is to let projects choose their own license according to their goals. If they want to charge, let them. If they want to be free, let them.


I wish there were a vetted Open Source license that:

- allowed the code to be visible and available as a general rule

- allowed an individual user to use it and modify it for non-commercial use

- disallowed redistributing modified versions (distributing patches would be fine)

- disallowed commercial use

Basically, a license that would allow free experimentation and grassroots adoption and all kinds of open-source goodness for individuals, while not giving FAANG profiteers a handout for RMS' ideological reasons.


What you're describing would preclude someone from forking the project, even to maintain it after the author lost interest. That's one of the main risks I seek to avoid by using Free software.


Well, perhaps there could be a small modification that would allow "use in perpetuity" and rights of survivorship, or whatever. The important part is just that profiteers have to negotiate with the creator(s) so that they get some slice of the pie they've helped create.


I would never use any software with that license. I rarely have software I only want to use for non-commercial use.

Even simple stuff like note taking where it’s 95% personal, I still want to use at work.

I’d rather seek out an open source tool.

I’m not sure what the motivation is to prevent FAANG’s use. All of those companies contribute lots of open source.

Would you rather be back in the 90s where the giant tech companies kept everything proprietary and didn’t contribute anything back. Early 90s Microsoft, IBM, Oracle, Sun sucked and I’d rather have FAANG contributing their open source stuff nowadays than make them pay for projects and stop contributing.


You can totally use the software for commercial use. You'd just have to pay for it! I don't understand why FOSS supporters insist on the software being free-as-in-beer, when that has never been the point (even if many people and corporations enjoy the spoils of zero-cost software made by passionate artisans). Open Source Software is awesome, but imagine how much better it could be if there were a way for those passionate artisans to eke out a modest livelihood from their efforts!


There is this existing (and somewhat complex) system that measures people's income and diverts a portion of that to a fund which is then used to support systems that people deem to be socially beneficial. Various versions of it are installed around the world. A lot of big companies that should contribute the most seem to find a way not to contribute at all, and others use their influence to divert the spending away from socially beneficial programmes. And there's quite a lot of disagreement about it, generally.


Why bother? FOSS is a charitable gift to humanity. Appreciate the givers, and give what you can too. If you have money and not time/skill to give, then you can give your money to any charitable cause that needs it (FOSS, or any other cause). If you satisfy one need, then someone else can satisfy another. There's no need to distibute your gift in a complicated way.

Example: Bram Moolenaar (ViM) asks users to :help uganda, not to pay him. And I speculate that he wouldn't be offended if you donated to help someone else in need instead.


This would sort of discourage code reuse and modularity, as people would rather copy a module into their project than see their donations diluted out to it...


What if, instead of donating to projects proportionally to use, it donated developer time to the projects most in need?


Hmm, maybe. I would want to avoid donating to big projects supported by Google etc., though, and instead focus on smaller community projects, regardless of their usage.

Measuring usage in general will be tricky, I think...


I think you are describing Tidelift: tidelift.com


I’m constantly amazed by the quality and consistency of the open source community. Can you think of any other industry where massive companies are built on top of free infrastructure?


I don’t have insider knowledge, but sewing and baking come to mind. The fashion industry probably relies heavily on established patterns that are free to copy, alter, and use (quite often even from a competitor). And the baking industry can do similarly with free recipes.


Volkswagen don't build the highways themselves and I'm not aware of Boeing ever building an airport.


Indeed. Highly skilled labor is also often trained, and research often done, for free by universities, so we can put pharmaceutical companies, mining companies, etc. there too. Hospitals often provide free healthcare for the workers, and municipalities often provide waste management for a very modest fee.

In fact—as you point out—the average company in most industries relies heavily on state provided infrastructure, that they get to use for free (or a really modest price).


There's certainly nothing like the open source community. But if you generalize it sufficiently, all businesses are somewhat like that. Take banks for example. They rely on there being a functional society, with most transactions being honest, and with police/justice systems enforcing safety of depositors, etc.


I wonder why that is? I guess the relative ease with which software is created and published?

Do civil engineers have the opportunity to create the bridge version of Linux as their masters project? I think the physical limitations of bridge building means that there isn't a great analog to Linux in the civil engineering field.

The only field I can come up with that could behave similarly to software is art. Why don't artists release art "open source" for others to use and transform? I could certainly see compensation as a factor: artists are famously underpaid, so they are not willing to work for free. Or maybe it's a cultural thing, where lots of people using your open source library translates into respect for the author, but lots of people ripping off an artist could have the opposite effect?


> Do civil engineers have the opportunity to create the bridge version of Linux as their masters project? I think the physical limitations of bridge building means that there isn't a great analog to Linux in the civil engineering field.

That may be true for building the final product, but even becoming interested in or exploring the field of engineering (and other professions) has a lot of roadblocks and expenses that software developers don't have to deal with. Imagine having to pay $2000 for a Windows Server + MS SQL license to develop your small website. Whoops! I forgot the $500 license for Visual Studio just to get your code to compile.

The availability and power of free, open-source software greatly lowers the barrier to entry for anyone and everyone: hobbyists to make and release a project, startups to create their product on low/no budget, curious people interested in trying programming.

The fact that this free and open software rivals and often beats its paid equivalents is absolutely amazing. It's why, despite the problems in our industry, I am so proud to be a part of it.


Open source software isn't free. It costs people's time to develop and maintain. Calling it 'free' devalues the projects and their maintainers by giving the impression their time isn't worth anything.

We as the open source community need to start moving towards a more sustainable approach to open source development. First step: stop calling it free.


Oh god, this is the same argument rolled out by opponents of socialised healthcare: if it's free, then you are forcing the doctor to work for no pay! Stop splitting hairs over the meaning of 'free' in this context. It is free to the end user; it is not free to the developer or maintainer.

I make Christmas hampers for the poor, and I call them free hampers. Of course they are not free, but to the end user they are.

And no, people are not going to start using the word libre: it is not an English word, it is not used widely enough outside of computing, and it will probably not catch on. /endrant


I feel like calling Free software 'open source' devalues the projects and their maintainers by giving the impression their work can be exploited for non-Free software.


Yes, I agree, open source software isn't free. Anyone can take it, close it off and steal the fruits of my work to do whatever with it. Calling it "free" devalues the projects and their maintainers by giving the impression they'd rather use a questionable license model than create software for humanity.

We as the free software community need to show people who call it open source the door and have them escorted out by security so they stop ruining the whole affair.

...what was that money thing you were rambling about?


A big part of Go's success is due to the large and useful standard lib. This kind of ecosystem feature is much more valuable than any language feature.

If the Rust guys understood this and put an async runtime, HTTP server and client, crypto, and some more essentials in the standard lib, adoption would be much higher. Then we would have a super robust language and a super robust ecosystem. I don't think the borrow checker is the issue with Rust's slow adoption in the mainstream backend business.


I can't speak for the Rust devs, but as far as I'm aware, the longterm plan is to have more stuff in std, but only once they're fairly certain they have the right design. Async/await is a good example. The Rust community explored the design space for async runtimes for several years before they felt comfortable settling on a design for the initial pieces of std.

The process is definitely frustratingly slow, but going faster has burned the Rust devs already in the past (just look at how much stuff in std::error::Error is deprecated).


The main problem isn't just that; it's also that Rust can't decree a single one-size-fits-all executor. Different properties are needed for different projects, and blessing any one executor cuts off all of the rest of them.

Tokio is fantastic for web services, but is inappropriate for my microcontroller.


Do you have a notification daemon for every HN comment that mentions Rust? :)


Nope, just use the search bar.


And when the Nebraskan sells the project to a scammer, we’re all in trouble.


Can you do that? I'd be open to selling an open source project that gets over 1M monthly downloads. I can't seem to monetize it otherwise.

EDIT>> "Can you do that?" meaning, "Can you sell an open source project?" Not "Can I sell to a scammer?", for people accusing me of planning to commit a crime.


I think there is a market for selling rights to browser extensions. Someone I know with a mildly popular extension got offered $10K by a buyer to completely take over.

This market might be one of the reasons why extension API powers have been scaled down.


Yep, it happened quite a lot. The most famous example probably is Stylish.


> EDIT>> "Can you do that?" meaning, "Can you sell an open source project?" Not "Can I sell to a scammer?", for people accusing me of planning to commit a crime.

Probably not. I can only think of two reasons someone would buy it: to do something not allowed by the license or to distribute malware.

If you have, say, the copyright on an AGPL-licensed piece of software that a proprietary vendor wants to use/sell, maybe you have a shot at the former. I don't have any experience with this, but it's an above-board thing to do anyway.

Otherwise, I can't think of a legitimate reason for someone to be willing to buy it for a significant sum. They can always fork for free if they just want to make improvements and distribute them. If it's important to them to take over the existing name/url/credentials it's probably to put in something that users don't want...


A while back, the node event-stream package was compromised when the author passed ownership to a hacker. Not for money afaik: https://gist.github.com/dominictarr/9fd9c1024c94592bc7268d36...


Ownership was passed to a contributor who showed no signs of being a hacker until they were added as a maintainer on the project, unfortunately.


Haven't heard of people selling successful open-source projects, as it's kind of a contradiction.

If you don't want it and stop maintaining it, motivated people could presumably fork it and maintain it themselves rather than pay for it... but I guess you're also buying the community


You can sell access to the update mechanism, repo, and domain name (and any registered trademarks), along with any rights you have to relicense the software (not necessarily those of other contributors). For most intents and purposes, this is selling the project. You can't rescind an open-source license retroactively, so the value of buying such a project to turn it into a system with a proprietary license is probably low. Most users (and especially contributors) of open-source projects would see such an action as a breach of trust, so it is likely to result in a hostile fork even if the new 'owner' does not do anything actively malicious.


In case you missed it, https://news.ycombinator.com/item?id=23613719 talks about github sponsors


Please don't... we already had our scare in the past when someone handed over the JS project for something to a third party and broke the internet.


"please don't" doesn't pay the bills. You want those people to avoid transferring the reigns for cash? Find some sustainable way to pay them. No amount of feel good platitudes will do that.


Crime isn't supposed to pay the bills either. You have no right to give away a gift and then burglarize anyone who accepts it. If you don't want to give your work away, put a protective license on it.


While the above case is (arguably) a crime, in general crime is unrelated to this issue.

As for licensing: totally agreed. I would like to see more projects with protective licensing.

Too many projects are basically donations to Amazon.


I think daenz was asking about how to sell an open-source project in general, not in a criminal way.


Could you fork it to a closed source "premium" version and implement some high-demand features?


Publicly announcing your intention to commit fraud is ill-advised.


See the updated comment addressing this bad-faith accusation.


Before your edit, it was not at all clear that you were talking about selling to people who aren't scammers.


In practice, that doesn't happen often enough to be something I actively worry about. I scan my dependencies' source code each time I upgrade, and that's enough for me.

The number of dependencies I've had fail in that manner is currently 0. Also, all my open source dependencies have their source code reviewed regularly by distro maintainers (like Debian's), and the last version's source code is available in source control, or at worst in a distro's src tarball mirrors.

However, on the flip side, the number of dependencies I've had break because they're maintained by a large company, and that company pivoted in some way, or reworked everything, or in some cases had code calling into their proprietary backend via some API and then broke that API... that number is in the dozens to hundreds.

Using a dependency maintained by an entire company vs by a single contributor doing it in their spare time... Well, the company-maintained ones have a much worse track record for me so far.


>> Well, the company-maintained ones have a much worse track record for me so far.

This made me want to laugh, then cry. Both, because it's often true.


version 10.0.0.1: local config and data files backed up to cloud

(This actually happened to an iOS app I had years ago, Gas Cubby: one day, all the local data about your car (VIN, insurance info, mileage, and more) became cloud-based overnight.)


This is why you are supposed to choose each dependency you add to your project very, very carefully.

Also, each dependency you add to a project is a security risk in itself. I really can't believe how casually web developers add dependencies without a second thought, especially with dependency management systems like npm.

People give C++ lots of hate, but thanks to not having an easy way of adding complex dependencies, it has fewer dependency-driven bugs/security issues.


> People give C++ lots of hate, but thanks to not having an easy way of adding complex dependencies, it has fewer dependency-driven bugs/security issues.

Oh, come on. The number of security issues in things like libssl alone is enormous.


Oh come on, you're describing a single dependency, and an encryption library at that, where rolling your own is generally recommended against.

Don't try and tell me C++ projects in general have more dependencies than high level languages.


Nope, but I'm happy to tell you that C/C++ projects in general have more security flaws than projects in high-level languages.


That's superficial. How many "high level languages" roll their own SSL from scratch? I'd bet most of them link down to the C++ libssl, so of course that one gets a lot of heat, but it also means it's a very robust piece of software.


Don't shift the goal posts, we're talking specifically about dependencies and dependency driven bugs.


Languages like C++ that lack memory safety have the irritating property that a memory safety error anywhere in the dependency tree can be exploited to attack unrelated parts of the binary. In most languages you don’t have to worry that some stateless pure-function log formatter is secretly the gap in your armor.


They certainly don't have fewer dependencies to a degree that makes up for the lack of tooling to support them. C++ dependency management is a nightmare that dwarfs even the worst of npm. Even if it halved the number of dependencies, that wouldn't make up for the ten times more dependency- and build-related issues that appear.


I'll try:

https://nightly.ardour.org/i/A_Linux_i386/info.txt

That's the dependency stack for a 350k LOC cross-platform C++ project.

I'll leave it to your judgement if this is more or less than "high level languages".


Cherry-picking is pointless and doesn't prove anything.


Neither does just invoking No True Scotsman at every turn.


Code that you reinvent/maintain yourself is almost always less actively maintained than the dependency you could have used, and less maintained code universally means more buggy code (not even talking about performance and features). It all boils down to human resources. The human resources you save by using a dependency can also be reallocated, either to bettering the rest of your code or to contributing to the dependency.

Regarding security issues in C++ in general: roughly 51% of security issues are memory-related, so a garbage-collected language spares you at least half of them.


> People give C++ lots of hate, but thanks to not having an easy way of adding complex dependencies, it has fewer dependency-driven bugs/security issues.

This doesn't stop people adding dependencies: most C++ software still has a complex dependency tree; most of the labour of maintaining it just falls on distro maintainers (and the poor souls who need to make Windows builds) [0].

And I don't think C++ has any shortage of dependency-driven bugs or security issues. Certainly, if you asked me to pick whether I'd rather build something from npm or the average C++ project, I'd pick npm every time: I expect to have to fix, on average, about 2 issues every time I attempt to run a non-trivial C++ project's build, each issue taking an hour or so to locate and fix or hack around. The nature of C++ dependency management means there are more likely to be vendored dependencies which are out of date and have security issues, perhaps introduced as patches by the project that vendors them (and ripping out this kind of vendoring and dealing with any required patches creates even more headaches for distro maintainers, though they are also quite capable of introducing their own bugs through their own patches).

[0] https://wiki.alopex.li/LetsBeRealAboutDependencies


I don't use a lot of dependencies in my software (some of it uses none except the standard library, or, for software that runs on a VM, the VM it runs on (although any implementation can be used)). (Probably the only program I maintain that has too many dependencies is TeXnicard, because it depends on Ghostscript (which has a lot of dependencies). It also uses PCRE and SQLite3, which are not so bad, since they don't have other dependencies.)

As for ImageMagick, I used to use it but now maintain my own set of programs for dealing with picture files (almost entirely written in C, although the ones for converting to/from fax format are in PostScript; I may later write a version in C as well, so that you can use them without a PostScript interpreter). Like most software I write, I try not to use many dependencies if I can avoid it. (It doesn't use libpng either; it uses LodePNG, which is better in many ways, in my opinion.) (This set of programs also has many effects I have not seen in other programs, such as automatically rearranging a horizontal or vertical strip, removing duplicates from a vertical strip, and making the tensor product of two pictures. There are also file formats ImageMagick does not support.)


What are some examples of this?


OpenSSL is notoriously in this situation: maintained by a few guys for whom it isn't even a full-time job, so they can't really do the proper work of trimming all the fat that has accumulated.

GnuPG is also maintained mostly by the same original guy, who was going broke a few years back, so there was a chance he couldn't work on it anymore (https://www.propublica.org/article/the-worlds-email-encrypti...). Fortunately the community came to help, and now the future of GPG is a little more secure.

It's not really the same, but ssh is literally everywhere. Absolutely none of our sysadmin work would be possible without it, and yet very few of us actually give back to the OpenBSD foundation. If we all gave a penny every time we installed it, every year the foundation would reach its annual goal (https://www.openbsdfoundation.org/campaign2019.html), but we obviously don't, so they still need to ask for money.

Also not specifically in this way, but pretty much anything Daniel J. Bernstein writes is so good that it becomes a crypto standard. That's kind of OK, because his software is "finished" in the sense that no new features are needed or wanted, and his work is more about the primitives than the actual libraries. Still, a lot of the upcoming standards depend on what this single guy did.




[flagged]


Cool post history. Please find a healthier outlet for your ideology.


The most newsworthy one in the last year was core-js, a JS polyfill library that most other big npm libraries are built on top of. The maintainer is some random guy from a remote part of Russia. About half a year ago, he was sentenced to prison. He was the sole maintainer of the project (I guess he refused to give other people write access). I'm not sure if any of its dependent projects have done anything about forking it, but I wouldn't be surprised if most just ignore it until it causes some hilarious left-padpocalypse.


> The most newsworthy one in the last year was core-js, a JS polyfill library that most other big npm libraries are built on top of. The maintainer is some random guy from a remote part of Russia. About half a year ago, he was sentenced to prison. He was the sole maintainer of the project (I guess he refused to give other people write access).

Core-js's most recent commit was 4 days ago, and the repo owner stopped both committing and being the sole committer about 7 months ago, with no extended interruption in activity (and there have been several patch releases since), so it looks like whatever happened was dealt with quickly.


Source? I love npm stories like this and couldn't find it on the first page of google.



JSON.net in .NET land was this type of dependency for many years. It was maintained largely by a single founding developer in New Zealand, eventually even becoming a core dependency of ASP.NET itself. He got hired by MS eventually, but it must have been close to a decade of this library being a dep of nearly every .NET web project under James' stewardship.


Nowadays, if you fuzz any ASP.NET site hard enough, you are eventually going to encounter a Newtonsoft stack trace.


SQLite feels like this to me. It's more than one person, and it's not thankless work (at least, I hope). But still it is critical to a surprising amount of technology, and maintained by a very few people.


And, at the same time, it's a very underused piece of tech.

Every time you think you need a custom file format for a given piece of software... you most likely don't. Just use SQLite. You can use the standard OS file Open/Save dialog box and users will never know the difference.

Yes, that means you now have to write SQL statements to manipulate your data. But that also means that you can get lots of complicated data structures on disk and can manipulate them easily, even outside your own software - just fire up the sqlite CLI and point to your file. There are GUIs as well.

Things like UNDO/REDO can also be had almost trivially (see https://www.sqlite.org/undoredo.html)
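
A minimal sketch of the idea in Python, using only the standard library (the "drawing" schema, file name, and trigger are made up for illustration; the trigger-based undo log follows the pattern described on that page):

    import sqlite3

    # Hypothetical document format: one SQLite file holding "shapes".
    con = sqlite3.connect("drawing.mydoc")
    con.executescript("""
        CREATE TABLE IF NOT EXISTS shape(
            id INTEGER PRIMARY KEY, kind TEXT NOT NULL, x REAL, y REAL);
        -- Undo log: a trigger records the SQL that reverses each change.
        CREATE TABLE IF NOT EXISTS undolog(seq INTEGER PRIMARY KEY, sql TEXT);
        CREATE TRIGGER IF NOT EXISTS shape_delete AFTER DELETE ON shape BEGIN
            INSERT INTO undolog(sql) VALUES(
                'INSERT INTO shape(id,kind,x,y) VALUES(' || old.id || ','
                || quote(old.kind) || ',' || old.x || ',' || old.y || ')');
        END;
    """)
    con.execute("INSERT INTO shape(kind, x, y) VALUES ('circle', 1.0, 2.0)")
    con.execute("DELETE FROM shape WHERE kind = 'circle'")

    # Undo: replay the most recent inverse statement, then discard it.
    seq, sql = con.execute(
        "SELECT seq, sql FROM undolog ORDER BY seq DESC LIMIT 1").fetchone()
    con.execute(sql)
    con.execute("DELETE FROM undolog WHERE seq = ?", (seq,))
    con.commit()

Note that the undo table travels inside the document file itself, so undo history can even survive closing and reopening the file.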

Sometimes this also means you don't need an external RDBMS, even for web apps. I've seen so many apps which co-locate a small database on the same box that might as well have been a single SQLite file. I'm actually maintaining one right now that, although relatively important, will only ever be a single box. But SQLite wasn't 'enterprisey' enough, so we had to use PG. For a couple of tables.

https://www.sqlite.org/whentouse.html

Subscribe for my next rant on another underused piece of tech... Lua :)


sqlite3 is now very heavily used in the Apple ecosystem - all of Core Data is built on it, and many of Apple’s own apps use it to store all kinds of data. It’s a godsend for tinkerers like me too - just point the SQLite CLI at one of the internal DBs (like the Photos database) and all sorts of cool stuff comes spilling out.

For on-disk document storage I think Apple mostly uses a mix of plain folders with magic extensions (“packages”) and ZIP files nowadays, although there are definitely a lot of exceptions. SQLite isn’t that great for binary blob storage (relatively speaking) so a folder structure is still more useful there, IMO.


SQLite3 is one of the most widely used pieces of software ever. Every browser, every mobile OS, many many apps, all use SQLite3. It is absolutely essential.


SQLite3 is one of the few pieces of open source software that has a very successful business model. The business model is this: SQLite3 is open source, including some tests, but the real test suite (the one used by the developers) is proprietary, and is one of the most thorough test suites ever built. That means that nobody can credibly fork SQLite3[0]! Thus the SQLite3 developers can and have formed a consortium that all the big players (Apple, Google, etc.) pay to join because they so utterly depend on SQLite3.

Consortia have been tried many times, but few as successful as the SQLite Consortium.

What a great model. Open source for the main thing, proprietary test suite.

This model does require building something everyone needs, years of patient care and feeding to make it wildly popular, and a fantastic test suite. So it's not exactly easy to pull off. But it is brilliant, if you can do it.

[0] Even just contributing is very difficult. Try to contribute anything other than a trivial bug fix, and you'll find that a) unless it's utterly trivial, the devs are not interested, b) they are dead serious about checking that the contributor has their employer's permission to place that contribution in the public domain.


SQLite is supported by commercial arrangements (https://sqlite.org/prosupport.html) and used by giant corporations all around the world, quite a different story.


Yes, and SQLite is one of the few dependencies I commonly use in my own software (although, most commonly I use no dependencies (other than the standard library), I think).

Not only do you get a SQLite database (which can be used as an application file format, suitable for many uses), but there are also utility functions such as sqlite3_mprintf() and sqlite3_str_new(). Furthermore, it can let the user query the data with SQL commands, even within the program, allow user customizability through SQL code, and let the user deal with the file even without your software in some cases (no separate decoder needed). Sometimes virtual tables are useful for doing some things. And all of this is in addition to being able to use SQLite as a file format.



NTP basically was, but I think things might have changed since:

https://www.infoworld.com/article/3144546/time-is-running-ou...


ntpd is now looked after by the Network Time Foundation, but more importantly many distributions use other implementations of the protocol like chronyd.



dnsmasq comes to mind, it runs on quite a few embedded routers. Here's an interview with the author:

https://joshuakugler.com/an-interview-with-simon-kelley-the-...



ICANN has been managing/backing it since 2011 (http://mm.icann.org/pipermail/tz/2011-October/008090.html), when the maintainers were sued by some random astrology company.


My library, mergo [0], is used in Docker, Kubernetes, and other big projects that are the core of what we know as the cloud.

At some point, my library caused a bug or two in Docker, impacting how config was loaded.

0: https://github.com/imdario/mergo


Well, the classic example was OpenSSL, although that has thankfully improved substantially since Heartbleed.


Heartbleed was really the watershed moment.

"Tech giants, chastened by Heartbleed, finally agree to fund OpenSSL" (2014):

https://arstechnica.com/information-technology/2014/04/tech-...




Not the most recent article (2016), so things might have changed since then: https://www.infoworld.com/article/3144546/time-is-running-ou...


Probably things like YACC https://en.wikipedia.org/wiki/Yacc


Why? There have been alternative implementations since forever.


curl ? :)


The author of curl has a funny blog post about how random people will sometimes find the curl license in the software of their everyday things (like a Toyota Corolla) and reach out to him asking for help with their things.

https://daniel.haxx.se/blog/2016/11/14/i-have-toyota-corola/


Not exactly a lone developer, but ffmpeg


NTP


dnsmasq (Simon Kelley)

bash (Chet Ramey)


Chet also does Readline.


I wonder what the most critical (and smallest, least maintained) npm package in the world is?


Probably the is-thirteen package, for checking if a number is equal to 13.


Bold of you to assume it's npm that's being spoken of :D



> The problem was promptly fixed, and for the vast majority of us users, there was no down-time thanks to caching

I don't know much about npm management, but I'm assuming it actually had nothing to do with caching? More like most companies have pipelines and breaking dependency changes won't reach prod.


Safe assumption.


Well, the first impulse is naturally to just use what's out there for free and leave it to some other poor sap to shoulder the cost of maintenance. That puts you ahead in the short run, at the cost of higher risk.

It takes a strong open source advocacy effort to reach out, and a sophisticated organization to be receptive, for such an organization to recognize when it's in its interest to participate in open source maintenance.


There are a lot of different metrics to look at when determining whether one should install a dependency.

One such metric I found to be missing for npm dependencies was the install size, which is why I created https://packagephobia.com

We need more tools to help make this decision, because it's easy to add a new dependency but often hard to remove one.
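
As a rough local approximation of that metric (not how packagephobia itself computes it, just the same idea measured on disk), you could total the bytes under node_modules after an install, which automatically includes transitive dependencies:

    import os

    def tree_size(root: str) -> int:
        """Total size in bytes of all regular files under root."""
        total = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if not os.path.islink(path):  # skip symlinks to avoid double-counting
                    total += os.path.getsize(path)
        return total

    # Assumes it is run from a project directory with node_modules installed.
    print(f"{tree_size('node_modules') / 1e6:.1f} MB installed")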


I'm not sure size is a good metric here. For example, a really tiny npm library like left pad (https://packagephobia.com/result?p=leftpad) is something you don't want to depend on (because you can implement it in a couple of lines), whereas date-fns (https://packagephobia.com/result?p=date-fns), which is way bigger than left pad, is a library I definitely don't mind depending on.


That’s a great example!

You looked at the size and determined that it might not be worth bringing on a small dependency when you can implement it yourself (or rather use String.prototype.padStart).

It’s not just size alone but size of transitive dependencies.

Both examples you gave don’t have any dependencies (the publish size matches the install size).

Take a look at some popular dependencies like request or jest and notice the number of transitive dependencies.

https://packagephobia.com/result?p=request

https://packagephobia.com/result?p=jest


I wonder if you could write a program that scans JavaScript dependencies for function usage that could be replaced by suites like lodash, to find and replace "leftpad-style" dependencies.
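
A crude version of the idea is easy to sketch; the lookup table here, mapping tiny packages to built-in replacements, is purely illustrative (hypothetical entries, not a real dataset), and a serious tool would parse actual require/import sites rather than just package.json:

    import json

    # Hypothetical table: one-liner packages and their native replacements.
    ONE_LINERS = {
        "left-pad": "String.prototype.padStart",
        "is-odd": "n % 2 !== 0",
    }

    with open("package.json") as f:
        deps = json.load(f).get("dependencies", {})

    for name in sorted(deps):
        if name in ONE_LINERS:
            print(f"{name}: consider `{ONE_LINERS[name]}` instead")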


Would you trust the stability of the dependency more if it was maintained by a large company, say, Oracle?


This is describing cURL, isn't it?


ImageMagick is the ffmpeg of images.


All I think of is Angry Birds when I see that picture...


ffmpeg comes to mind


Very well put. I suspect this is one of those xkcds that will become a timeless classic, just like the one on standards: https://xkcd.com/927/.



