This isn't okay - the author is selling their own alternative to Sentry, 'reusing' Sentry's open-source client SDKs, while spreading FUD about self-hosting Sentry.
I've been self-hosting Sentry for over 10 years: Sentry is installed by running `git clone`, editing one or two configuration files, and running `./install.sh`. It requires 16 GB of RAM if you enable the full feature set. That includes automatically recording user sessions on web or mobile for replay, and support for end-to-end tracing (so you can see which database queries your API runs in response to a button tap in your mobile app).
Sentry has a wonderful self-hosting team that's working hard to make Sentry's commercial feature set available, for free, to everyone who needs a mature error tracking solution. You can talk to them on Discord, and they listen and help.
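Enabling tracing against a self-hosted instance is the same call you'd use against sentry.io; here's a minimal Python sketch (the DSN below is a placeholder, copy the real one from your project's settings):

import sentry_sdk

# Placeholder DSN; use the one from your self-hosted instance's project settings.
sentry_sdk.init(
    dsn="https://publickey@sentry.example.internal/1",
    traces_sample_rate=1.0,  # sample all transactions for end-to-end tracing
)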
Just for transparency, are you by any chance a member of this self-hosting team or a Sentry employee? Is it a coincidence that your Keybase name is georghendrik according to your profile, and the painter _Georg Hendrik_ Breitner painted a picture called "Sentry"? https://www.demesdagcollectie.nl/en/collection/hwm0055
This is an Olympics-level display of coping badly with valid and welcome criticism.
Georg Hendrik = "George Henry", pretty common name. The fact that Google returned a result when you searched "Georg Hendrik Sentry" should not be considered weird.
I was born 400 years ago in the Highlands of Scotland. I am Immortal, and I am not alone. Now is the time of the Gathering, when the stroke of the sword will release the power of the Quickening. In the end, there can be only one.
It seems like all the FUD in the article is copy-pasted from Sentry's own docs, though, no? And assuming Sentry's SDKs are open source and licensed appropriately (which seems to be the case), there's no issue (legal or moral) with directing users to use the Sentry SDK to communicate with a different, compatible product.
OP built a product because they were frustrated by Sentry's seeming hostility toward self-hosting. It doesn't feel like OP decided to build a competing product and then thought it would be a good marketing strategy to falsely push the idea that Sentry is difficult to self-host.
FWIW I've never self-hosted Sentry, but poking around at their docs around it turns me off to the idea. (I do need a Sentry-like thing for a new project I'm building right now, but I doubt I'll be using Sentry itself for it.) Even if it's possible to run with significantly less than 16GB (like 1GB or so), just listing the requirements that way suggests to me that running in a bare-bones, low-memory configuration isn't well tested. Maybe it's all fine! But I don't feel confident about it, and that's all that matters, really.
TBH most of the FUD in the OP is straight from Sentry's own website.
Regarding using the SDKs, I'm telling my users to take Sentry at their word when they wrote "Permission is hereby granted, free of charge [..] to deal in the Software without restriction, including without limitation the rights to use"
Wow, their example of "clean up the code" does a bit more than refactoring to make the code more readable; it appears to change the output.
One would have to check the resulting code carefully to see whether the meaning is still as originally intended, or whether it has been replaced with code that is statistically more plausible (but no longer does what was intended).
The before-code responds to dataset='animals' with `load_turtles(...)` and to dataset='turtle' or 'formal_turtle' with an error; in the after-code this is reversed, although the apparent logic error and the assignment-instead-of-equality (`=` vs `==`) error are resolved.
Actually, the code before does nothing if dataset is set to 'animals', 'turtle' or 'formal_turtle'; most of the branches are unreachable. Also, the extra else clause that raises an error and the line
elif dataset = 'formal_turtle':
are both invalid syntax.
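(We can't see the article's code in this thread, but for illustration, a syntactically valid version of that kind of dispatch needs == in the condition and a single else; the function names below are hypothetical:)

def load_dataset(dataset):
    if dataset == 'animals':
        return load_animals()              # hypothetical loader
    elif dataset == 'turtle':
        return load_turtles()
    elif dataset == 'formal_turtle':
        return load_turtles(formal=True)   # hypothetical keyword
    else:
        raise ValueError(f"unknown dataset: {dataset!r}")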
I think 'clean up' here means something closer to 'convert this to what I'm trying to write'.
Agreed, but I have to say data cleaning is actually one of the hardest steps; LLMs are simply not there yet.
It's almost impossible for an LLM to identify all the invalid rows at once, since the data cannot fit into the context window. If we prompt the model to do data cleaning thoroughly, there will be many try-and-fail steps. This happens to me as a human too: I clean some rows and expect my program to run on the data, only to find there is more malformed data.
LLMs can't get this right for now; I actually see many cases where the LLM fails because it wants to convert types (e.g. string to date).
Based on my experience, the best approach is simply to skip the data cleaning step in the planning stage (you can provide feedback asking the tool not to do any cleaning steps).
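What I fall back to is doing the malformed-row detection in plain code, so the whole file gets checked in one pass without anything needing to fit in a context window. A rough pandas sketch (the file and column names are made up):

import pandas as pd

bad_chunks = []
# Stream the file in chunks so it never has to fit in memory (or a context window).
for chunk in pd.read_csv("events.csv", chunksize=100_000):
    parsed = pd.to_datetime(chunk["created_at"], errors="coerce")  # invalid dates become NaT
    bad_chunks.append(chunk[parsed.isna()])

bad = pd.concat(bad_chunks)
print(f"{len(bad)} malformed rows to fix or drop before the real run")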
For those who want to try it and have a Rust development environment installed, the following runs Zed on Windows (the editor part at least; I haven't tried the collaboration features).
git clone https://github.com/zed-industries/zed
cd zed
cargo run --release
Yes, see ManicTime [0], Timing [1] and ActivityWatch [2] for that.
They passively record what you do, on computer and phone, and at the end of the day (or when invoicing is due) allow you to link what you did to projects.
Use ManicTime if you're mostly on Windows, Timing appears to be good if you're mostly on OS X, and ActivityWatch takes the same approach as an open-source, cross-platform project.
I have ManicTime set to record a screenshot every 15 seconds, and to record window titles and document paths on every app switch. Invoicing still takes time, but knowing what you did and when helps take all the guesswork out of it, especially for chaotic days.
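If you go the ActivityWatch route, the recorded data is easy to script against. A rough sketch using the aw-client Python package (the bucket id prefix is an assumption about the default window watcher; bucket names differ per machine):

from datetime import datetime, timedelta, timezone
from aw_client import ActivityWatchClient

client = ActivityWatchClient("invoice-helper")

# Find this machine's window-watcher bucket (ids look like "aw-watcher-window_<hostname>").
bucket_id = next(b for b in client.get_buckets() if b.startswith("aw-watcher-window"))

now = datetime.now(timezone.utc)
events = client.get_events(bucket_id, start=now - timedelta(days=1), end=now)

# Sum time per application over the last day, to jog your memory when invoicing.
totals = {}
for e in events:
    app = e.data.get("app", "unknown")
    totals[app] = totals.get(app, timedelta()) + e.duration
for app, t in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(app, t)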
ManicTime [0] passively records your computer and phone activity in minute detail.
This makes it great for consultants, i.e. people working for multiple clients every day. You can focus on making your clients happy, and at the end of the day (or week, or month), still accurately assign time spent on each project.
ManicTime:
- Takes a screenshot every X seconds,
- Records window titles, document paths, URLs,
- Records phone call metadata, phone location, foreground phone app (Android only).
ManicTime has a very intuitive zoomable timeline interface for assigning screenshots and other recorded activity to projects. Everything is stored offline. If you want, a free-to-use, self-hosted ManicTime server helps with backup and multi-device synchronisation.
ManicTime works best on Windows and with Android, although an OS X client exists.
Pricing: 67 USD for a one-year license (so: no subscription that holds your data hostage). ManicTime is not free, but knowing what you did and when (down to the minute) gives it a quick ROI.
PostHog Analytics [1] is open source, and so far a very nice alternative (or complement) to Google Analytics.
It's more like Heap Analytics, in that it collects user clicks and other events from your site (or app) and allows you to retroactively define "actions" based on these events.
For instance, if you decide today to keep track of how many users click your sign-up button on page X, PostHog can graph this metric for you for every moment since you first installed it on your site.
You can combine actions into funnels and graph nearly everything. The GDPR support is first-class, and there's support for heatmaps, feature flags (rolling out new features to just a percentage of your visitors), and user cohort analysis.
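Instrumenting custom events is one call; a minimal sketch with the posthog Python library (the key, host and event name are placeholders, and on a website you'd more likely drop in their JS snippet and rely on autocapture):

import posthog

posthog.project_api_key = "phc_your_project_key"   # placeholder
posthog.host = "https://app.posthog.com"           # or your self-hosted instance

# Any event captured now can be turned into an "action" later, so the
# sign-up-button graph works retroactively.
posthog.capture("user_123", "signup_button_clicked", {"page": "/pricing"})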
I was going to mention Sonic [1] as another lightweight document indexing alternative written in Rust, but then I found that MeiliSearch provides a thoughtful comparison page [2].
Just to make sure someone doesn't get the wrong impression from your comment:
Redash is 99% open source (and I'm going to close that gap this month [1]), mature, and actively maintained. The friendliness is subjective, but we're not trying to please everyone :-)
[1] Sometimes it's easier to prototype new things in the SaaS version, but everything reaches open source eventually. In practice there is only one feature that hasn't been open sourced until now, and I'm going to add it to the open-source version now.
We use and like Redash because we can have it trigger Zapier tasks when there are new results for a query.
It makes cross-integration great when testing new things, and being able to hack together a stop-gap solution this way saves development time.
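When the built-in destinations don't fit, the stop-gap can be as simple as a cron script that pulls a query's latest results and forwards them to a Zapier catch hook. A rough Python sketch (the ids, keys and hook URL are placeholders, and the results endpoint path is from memory, so check it against your Redash version):

import requests

REDASH = "https://redash.example.com"        # placeholder
QUERY_ID = 42                                # placeholder
API_KEY = "redash_query_api_key"             # placeholder
ZAPIER_HOOK = "https://hooks.zapier.com/hooks/catch/123456/abcdef/"  # placeholder

# Fetch the latest cached results for the query (endpoint path assumed).
resp = requests.get(
    f"{REDASH}/api/queries/{QUERY_ID}/results.json",
    params={"api_key": API_KEY},
    timeout=30,
)
rows = resp.json()["query_result"]["data"]["rows"]

# Hand each row to Zapier; the zap takes it from there.
for row in rows:
    requests.post(ZAPIER_HOOK, json=row, timeout=30)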
Would people pay for a desktop tool like this? How important is sharing? I built something a year ago (on top of PyQt) but shelved it for lack of interest.
I might, if it weren't tied to a cloud service. Sharing isn't that important for me, unless it can be done without getting a third party involved.
It also seems to have some serious bugs. For example, none of their SQL Parameters examples [1] work with a Postgres connection. I just downloaded it this morning to give it a try after reading this thread.
That's likely because it currently uses the built-in Android WebView (which may store its own pseudo-history). That is a problem, but I believe their plan to replace WebView with Gecko will solve it. https://github.com/mozilla-mobile/focus-android/issues/13