Yes, there are crash reports, but AFAICT they are all but universally ignored.
Example: Verizon's "Message+" for the Mac crashes for almost any reason, including "you just selected an old conversation." I allowed its auto crash report to work, several times.
I gave them a negative rating on the Play Store, citing this. They wrote and asked me for details on "the problem" so they could fix it. I told them to start testing their software.
If you're a developer on the other end of these things for some company, please speak up. How many are actually looked at?
I formerly worked on $LARGE_SOFTWARE for $COMPANY and would frequently get bug reports for crashes, and spent a reasonable amount of time analyzing memory dumps to understand what was causing those crashes so that they could be fixed.
I think correlating those automated crash dumps with reviews in a software distribution channel would have been non-trivial, though, and not something I would have spent time on.
In this case, though, I think you would have said about the review "Oh, yeah, that's the embarrassing bug where it crashes when you do something totally ordinary. Fixed (or fixing) that."
That said, I'd say I fixed 95% of the crashes, even the trivial ones. For the remainder, the cause was difficult enough to pin down (i.e., out of memory and we got caught holding the stick) that the risk of change outweighed the benefit of fixing them.
"Everybody"? Emacs does not. vim does not. OpenSSH does not. Python does not. ffmpeg does not.
I think this is a culture-clash question. Open source software generally does no automated crash reporting. Some software like the Rust compiler will print out a crash report and ask you to copy/paste it into a GitHub issue on your own; you can decide not to. The general cultural assumption is that if you want the software to get better, you have to participate, and if you don't participate, the developers don't owe it to you to fix your crashes. In fact, with the software I mentioned above, you generally don't even get it directly from the developer; you get it from a redistributor like a Linux distro, who may have modified it. Therefore it may not even make sense to send crash reports directly to the original developers, and the social contract is between you and the redistributor.
Some open-source software like VLC does prompt you to send crash reports if you want, but it doesn't send them by default, and their data collection terms say nothing about law enforcement (https://www.videolan.org/privacy.html).
Some open-source software that operates at a scale resembling traditional "shrink-wrapped" software (conceptually) collects even more telemetry - Firefox, for instance - but even so it doesn't say anything about collecting data for law enforcement: https://www.mozilla.org/en-US/privacy/firefox/
There's also a world of closed-source software where there isn't any culture of participation and where asking a user to file a bug makes little sense. This is the world of Windows and macOS and mobile apps, all of which do automated crash reports. This is the world of web analytics tools that track exactly where your mouse cursor pointed so developers can build more intuitive websites, and more generally, the world in which software runs online and server-side because it helps developers. The social contract here is that the developers improve the software and fix bugs without your active participation, and in turn the software needs to send them data about how you use it - both crash reports and general usage data - on its own, without you doing anything.
The real question is which world Audacity now lives in. A clause like "data necessary for law enforcement" is perfectly standard for websites, and probably they copied and pasted it from some standard terms of service document. But can you imagine Emacs or Python having such terms?
>Emacs does not. vim does not. OpenSSH does not. Python does not. ffmpeg does not.
And yet, Fedora [0], Ubuntu [1], and KDE [2] do. So what you're saying doesn't paint a full picture: even if the upstream projects you mentioned don't take automated bug reports themselves, there is likely some downstream that uses them to aggregate crash data and turn it into bug reports. It is not feasible, on a large scale, to manually comb through crash dumps and ask people to figure out how to use gdb. Open source projects don't have any special sauce here.
Also, it seems very strange to complain about companies collecting data that they're required to collect by law. "Forking" there is a fool's errand, any other company legally doing business in the same areas will be following the same laws.
I think the question is about automatically reporting bugs versus giving the user tools to report bugs in a consistent manner if they choose to.
Fedora's tool does not automatically report bugs - it automates the process of letting users do so if they choose. It handles generating stack traces locally or remotely (configurable), deals with core dumps and submits them if so chosen (again configurable), and tries to tie crashes to existing bug reports when you do submit, so devs aren't flooded with duplicates.
It even has some pretty great functionality to help make sure no private/sensitive data leaks into the report: it lets you go through the report line by line, flagging entries based on a default or custom list of keywords that may indicate sensitive data (such as passwd, key, etc.), before you finally choose to submit it.
And it's all done on a case-by-case basis, so for each crash you can choose whether to submit any data or not - useful when one app you run is more likely to leak sensitive data than another.
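To make that concrete, here is a minimal sketch of the keyword-flagging idea in Python. It is illustrative only - the keyword list, report format, and function names are my assumptions, not ABRT's actual code or API:

```python
import re

# Hypothetical default keyword list; the real tool's list differs.
SENSITIVE_KEYWORDS = ["passwd", "password", "key", "secret", "token"]

def flag_sensitive_lines(report_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs the user should review before submitting."""
    pattern = re.compile("|".join(re.escape(k) for k in SENSITIVE_KEYWORDS),
                         re.IGNORECASE)
    return [(n, line)
            for n, line in enumerate(report_text.splitlines(), start=1)
            if pattern.search(line)]

def redact(report_text: str, lines_to_drop: set[int]) -> str:
    """Replace lines the user rejected with a placeholder instead of sending them."""
    return "\n".join(
        "[redacted by user]" if n in lines_to_drop else line
        for n, line in enumerate(report_text.splitlines(), start=1))

# Usage: show flagged lines to the user, let them decide, submit only the result.
report = "Process: /usr/bin/app\nEnvironment: API_KEY=abc123\nSignal: 11"
flagged = flag_sensitive_lines(report)            # [(2, 'Environment: API_KEY=abc123')]
clean = redact(report, {n for n, _ in flagged})   # line 2 becomes '[redacted by user]'
```

The point of the design is that flagging and redaction happen on the user's machine, before anything leaves it.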
> Also, it seems very strange to complain about companies collecting data that they're required to collect by law.
Thanks for clearly phrasing this, because I think this is exactly what I'm complaining about.
Audacity is, in fact, not required to collect anything by law, and for a very long time, it did not. It's a piece of software that runs on your computer. It doesn't even require you to have a network connection. It doesn't require the developers to even know that you exist.
No law (in any slightly free country, at least) prevents the existence of software that doesn't phone home. WordStar wasn't illegal. Emacs and Python and ffmpeg aren't illegal.
It is true that once you start collecting data, legal requirements start to apply, and you need to comply with requests for that data. That's a really good reason not to collect it in the first place.
If all you have is a stack trace, and you have nothing personally identifying (including filenames), then the data you have is useless for any other purpose besides fixing the bug. You're protected from legal requests, you're protected from illegal requests, you're less of a target of hacks, and so forth.
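As a sketch of what "nothing personally identifying" can look like in practice, here is a hypothetical scrubber that reduces gdb-style stack frames to symbols and line numbers, dropping source paths (which often embed usernames) and argument values. The frame format and field choices are assumptions for illustration, not any real crash reporter's schema:

```python
import re

# Very loose pattern for a gdb-style frame:
#   #0 Namespace::func (args...) at /path/to/file.cpp:123
RAW_FRAME = re.compile(
    r"#(?P<num>\d+)\s+(?P<func>[\w:~<>]+)\s*\(.*?\)\s+at\s+\S+:(?P<line>\d+)")

def scrub_frame(frame: str) -> str:
    """Keep only the function name and line number; drop paths and argument values."""
    m = RAW_FRAME.match(frame)
    if not m:
        # Better to lose an unparsed frame than to leak it.
        return "#? <unparsed frame dropped>"
    return f"#{m['num']} {m['func']} (<args stripped>) line {m['line']}"

trace = [
    "#0 AudioBuffer::resize (this=0x7f0, n=4096) at /home/alice/audacity/src/AudioBuffer.cpp:88",
    "#1 Mixer::process (this=0x7f0) at /home/alice/audacity/src/Mixer.cpp:214",
]
print("\n".join(scrub_frame(f) for f in trace))
# #0 AudioBuffer::resize (<args stripped>) line 88
# #1 Mixer::process (<args stripped>) line 214
```

A report built from frames like these still tells the developer where the crash happened, but there's nothing in it worth subpoenaing.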
Fedora's ABRT, for instance, goes out of its way to ensure that any crash data they collect is suitable to make public: https://abrt.readthedocs.io/en/latest/ureport.html#ureport That's a pretty good way of squaring this circle. If the data is okay to make public, then by definition it's not a problem if anyone (law enforcement or otherwise) wants to see it, and you don't have to rely on their good intentions.
KDE doesn't claim to collect information about crash dumps; it just collects statistics about how it's used.
It is, of course, harder to debug things from just a list of symbols. And as you say, it's not feasible to teach everyone how to use gdb. So there's a tradeoff. You acknowledge that the software isn't going to be as good as possible in terms of bug-free-ness (or you find other ways to address that problem, like fuzzing or static analysis), but in exchange, you make the software better in terms of how it treats the user's privacy.
I don't understand why you're comparing Audacity to Python. ffmpeg would be a much better comparison to emacs or python – and I admit I'd be surprised if ffmpeg was emitting telemetry like crash reports.
Audacity's more comparable peer would be Adobe Photoshop. I'd expect crash reports in a GUI app like either of these. I'd also expect that anyone with data on a server would need a clause like "data necessary for law enforcement".
I would not expect that law enforcement regularly hounds Adobe for data from crash reports.
Well, I'm making that comparison because Audacity is - historically - an open-source app created by volunteers (originally created by a CMU professor and grad student). That's why I think it's a culture thing. (Emacs is a GUI app too, so the line isn't quite "GUI apps." Or maybe consider Signal vs. Facebook Messenger.) Is it part of the culture of open-source programs like Emacs and ffmpeg, or is it part of the culture of closed-source programs like Photoshop and Facebook?
I do actually think Firefox is a very interesting comparison here, because despite coming from the open-source culture, they collect extensive telemetry and they do work in a model where they can't expect user participation. But they in fact have no clause about data necessary for law enforcement.
(I'm also not defending Firefox as perfect, and a complication of Firefox is that it's designed to be used with an internet connection anyway. Audacity is an app that makes perfect sense to use offline. So was Photoshop, for that matter.)
In any case, the clause itself is not the interesting part; the interesting part is the implication that they have enough data to be interesting to law enforcement. Adobe and Facebook certainly do; whether or not it's used, it's there, and there are so many other reasons to be concerned about the data existing (rogue employees, hacks/leaks, etc.). The developers of Emacs clearly don't. I would guess the developers of Firefox try to avoid it, which is why Firefox doesn't need such a clause.
I intended "KGB" as a stand-in for "Government Intelligence and Law Enforcement Bureaus" but in the specific case of the FSB I doubt they'd give a rat's ass what a Terms of Service contains if they really wanted to come after somebody.
EDIT: I am worried about governments coming after some people, but I am not worried that Audacity's telemetry or ToS will be pivotal to their success in doing so. Just look at Assange – they got him on falsified rape charges, not his history of torrents or website visits.
The rape charges against Assange may have been partly exaggerated, but calling them "falsified" is so much of an exaggeration that it itself comes closer to being a falsification than the charges ever were.
Audacity doesn't have incriminating data about its users.
Are you seriously worried that the KGB is gonna request your extradition because of the filenames of some mp3s you were editing when a crash occurred?