Feel like I've trialled every multi-DB client under the sun. Spent a good deal of time with Valentina, then DataGrip, but I'm firmly in bed with TablePlus (http://tableplus.com) now. First-class native Mac experience, and it's hard to see myself ever giving it up! There's a Windows build I've not used.
Want to also plug TablePlus — I got it in my SetApp subscription but it’s one I would have no problem buying outright. Really good stuff and consistently developed.
Huge fan of SetApp here. I subscribed months ago and absolutely love being able to just search, install, and get on with my work. Sure it is much cheaper to directly buy software you will use forever but that’s not why I use SetApp. I use it because it makes the hunt for apps obsolete. Any time I think I need software to do something, I have been able to find it instantly on SetApp. From PDF search to simple image editor to TablePlus, it makes me so much more relaxed about finding and installing the right app.
I really like TablePlus too and bought the premium version a couple of weeks ago. The native UI is very nice, and it's a cheaper alternative to Navicat.
I still experience bugs from time to time. But it's improving.
Works very well with Wine (but only on non-HiDPI displays, and when the Windows version is set to 2003). It's my favourite client on desktop Linux / Windows.
If you are on macOS and use Postgres, give https://eggerapps.at/postico/ a try. I haven't used a better DB GUI client, and it looks great with Mojave's new dark mode :-)
For working with MySQL on Mac, I really enjoy Sequel Pro [1]. Full native (and kinda old-skool, if you dig that kinda thing) Mac experience/design. Works fast. Is free.
I used the regular Sequel Pro for years but just recently switched to the nightly version, which is usually stable and gets more frequent updates. I recommend the switch to nightly. The two share the DB configurations you have already created, so there's nothing to import/export.
The nightly is buggier (it crashes on me every few days), but I can't connect to MySQL 8 otherwise. Given the way development has been going, the product seems like a dead end, which is too bad.
For Linux, DBeaver is the least ugly thing I found. DataGrip looks ugly at 4K with 2.0 scaling, and I don't like the Electron-based GUIs. Looks like Heidi is Windows-only; I'll check whether it runs under Wine.
It'll search through all text fields of all tables of whichever database you specify for a given string, and display a nice per-table summary for each matching record. I haven't found anything like this in any other DB client.
SQL Workbench/J has this feature[0]. I often use this feature when someone asks me questions about a report and I don't have a clue which tables that data comes from. Just search the database.
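The database-wide search those clients provide can be sketched in a few lines. Here is a minimal, hypothetical illustration against SQLite (table and column names are discovered from the schema, so the idea carries over to other databases; none of these names come from the tools above):

```python
import sqlite3

def search_everywhere(conn, needle):
    """Search every text column of every table for `needle`;
    return a per-table count of matching rows."""
    summary = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # PRAGMA table_info rows are (cid, name, type, notnull, default, pk);
        # keep only columns declared with a text-like type.
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")
                if r[2].upper() in ("TEXT", "VARCHAR", "CHAR")]
        if not cols:
            continue
        where = " OR ".join(f"{c} LIKE ?" for c in cols)
        n = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {where}",
            [f"%{needle}%"] * len(cols)).fetchone()[0]
        if n:
            summary[table] = n
    return summary

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, note TEXT)")
conn.execute("INSERT INTO users VALUES ('Alice', 'alice@example.com')")
conn.execute("INSERT INTO orders VALUES (1, 'gift for alice')")
print(search_everywhere(conn, "alice"))  # {'users': 1, 'orders': 1}
```

A real client would also report which column matched and show the rows, but the schema-driven loop is the whole trick.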
I don't have to do this frequently but if any dev/QA/myself need to do this, I dump the DB and run grep. Not the fastest way for sure but gets the job done!
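The dump-and-grep approach can even give you a rough per-table answer, since dump files label their INSERT statements. A minimal Python sketch (assuming a plain mysqldump-style text dump; the function name and dump sample are made up for illustration):

```python
import re

def grep_dump(dump_text, needle):
    """Scan a SQL dump line by line; report (line number, table) for each
    match, tagging each hit with the most recent INSERT INTO table seen."""
    table = None
    hits = []
    for lineno, line in enumerate(dump_text.splitlines(), 1):
        m = re.search(r"INSERT INTO [`\"]?(\w+)", line, re.IGNORECASE)
        if m:
            table = m.group(1)
        if needle.lower() in line.lower():
            hits.append((lineno, table))
    return hits

dump = """\
INSERT INTO users VALUES ('Alice','alice@example.com');
INSERT INTO orders VALUES (1,'gift for bob');
"""
print(grep_dump(dump, "alice"))  # [(1, 'users')]
```

Plain grep is faster on big dumps, but the table tagging is handy when you're hunting for where a value lives.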
So many people sleeping on DB Visualizer. It’s been around forever, can connect to anything with a JDBC driver, has a ton of features, and is updated frequently. My favorite is the ability to quickly full-text search any query I’ve ever run.
Sounds interesting, but what are good examples of the "Visualizer" part? The screenshots section [1] only has 6 (pretty primitive [2]) images out of about 100 total, but maybe that is outdated?
I recently discovered DBeaver and can highly recommend it. It is a bit more complicated to use than some alternatives, but really powerful. It also supports a large number of databases.
https://dbeaver.io/
I use three tools interchangeably. DBeaver as mentioned by other comments.
Squirrel SQL[0] has been around for a very long time. I have been using it for well over 10 years, probably 15. Don't be fooled by the dated screenshots. The first thing I always do is change the look and feel to Windows native; then it looks like any other Windows application. SQL Workbench/J[1] has tonnes of features and is well documented. I particularly like the headless mode, which lets me automate running SQL statements from a batch file.
I would argue DataGrip has yet to fail me, and it supports a ton of databases, including MySQL 4.0.16, which I had to use at my previous job about a year ago. With DataGrip I just needed to download an old JAR file and I had a connection automatically.
A simple postgres client I'm working on in my free time: https://sanchosql.com/
Linux-only for now, but it's open source, so it should be possible to compile it for Windows/Mac.
It's one of the best free apps I've used when I was still developing on Windows, and it's one of the few tools I miss for development when on other platforms. If you haven't yet, give it a try!
A while back, I was fixing up procedures in PL/pgSQL for a company. The procedures had to be rewritten, and they all sort of looked like this:
create function ... as '
declare
  some_str text := ''this ''''string'''''';
begin
  ...
end;
' language plpgsql;
The cross-database client didn't support or comprehend $$ escaping, so the people who wrote everything had no choice but to write with loads of quotes everywhere. I didn't realize this, so I passed some code back to them and they couldn't run it at all.
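For reference, Postgres's dollar-quoting sidesteps the nested-quote problem entirely. A schematic example (not the original code) of the same function body written both ways:

```sql
-- Without dollar quoting: the body is a single-quoted string,
-- so every embedded quote must be doubled, and doubled again.
CREATE FUNCTION greet() RETURNS text AS '
BEGIN
  RETURN ''hello, ''''world'''''';
END;
' LANGUAGE plpgsql;

-- With dollar quoting: the $$ delimiters leave the body untouched,
-- so ordinary string literals need only normal quoting.
CREATE FUNCTION greet2() RETURNS text AS $$
BEGIN
  RETURN 'hello, ''world''';
END;
$$ LANGUAGE plpgsql;
```

A client that chokes on `$$` forces everyone back into the first style, which is exactly the mess described above.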
The client-specific programs are all optimized for the RDBMS you are using, so I was never able to get on board with these cross-database products.
I have been using HeidiSQL with Wine for a couple of years now; works great! It does have some bugs, though: after running a large SQL file, the UI just freezes and you have to kill the program.
This has been around a long time. I used to use it when working with DBs in awkward places, like RDP accessed Windows servers. Since it doesn't require an install, it was handy to just put it on a shared local folder and run it from there on the remote server.
Never had any real complaints, but it wasn't anything I'd use for regular work. I tend to prefer the DB plugins in IntelliJ IDEA Ultimate or, if they aren't great, something specific to the DB at the time.
I usually use this on Windows, but now that I have all the JetBrains tools I opt for DataGrip most of the time, since it's cross-platform. But this gets tricky when you want to open a local MSSQL db for development. I forget what Microsoft calls it, but it may as well be called MSSQLite.
I wish I had the same experience as you. I use MW at my work and it is constantly crashing, hanging, freezing, and otherwise puttering out. We don't have a stupidly large database either. Lately I've been hounding my team to switch to another product, such as HeidiSQL or DBeaver.
Induction [1] was supposed to be that, but the webpage is long gone, and it looks like there's just a source dump on GitHub with "alpha" code that hasn't been touched in 5 years.
Curious: how do Workbench and the other one (which I guess I haven't used) screw up CSV exports?
Recently tried Workbench again after a year or two; I was enjoying some new features, but the lack of macOS dark theme support hurts usability: some parts of the UI and/or text are simply unreadable. I hope they sort that out soon. Workbench is one of not many clients that supports SSH connections natively, which is nice, as I hope none of the folks here run internet-facing databases :). The performance analysis tab is quite neat; it saves me a lot of manual query logging and plowing through logs to find bad queries.
I'm not sure about HeidiSQL but I've had great luck with MySQL database exports using PHPStorm. I think PHPStorm essentially has DataGrip built-in:
https://www.jetbrains.com/datagrip/
After having trouble with Workbench, I used PHPStorm to export a whole database to one file per table in a single go; both TSV and CSV are supported. I found the resulting CSV files to be reliably escaped.
Though I'm not familiar with the CSV export of those two tools, I've had adventures in CSV parsing from other tools. How do they screw it up? Is there a canonical "correct" way defined somewhere?
I cannot remember exactly how each one screws it up, but the issues I have had include (but are not limited to):
a) truncation of data
b) removal of line breaks
c) not escaping the enclosing character, e.g. " as ""
d) using \n or other values in place of actual line breaks or tabs
e) using \N for NULL values (which isn't too bad, but it would be nice to be able to configure this)
f) in general not complying with https://tools.ietf.org/html/rfc4180
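Most of those failure modes come down to quoting. Python's csv module, whose defaults follow RFC 4180's quoting rules, shows what correct output looks like for embedded quotes and line breaks (the data here is made up for illustration):

```python
import csv
import io

rows = [
    ["id", "note"],
    [1, 'she said "hi"'],        # embedded quotes must be doubled, not dropped
    [2, "line one\nline two"],   # embedded newlines must be quoted, not replaced
]

buf = io.StringIO()
writer = csv.writer(buf)         # defaults: QUOTE_MINIMAL, " as the quote char
writer.writerows(rows)
print(buf.getvalue())
# id,note
# 1,"she said ""hi"""
# 2,"line one
# line two"

# Round-tripping preserves the data exactly, line breaks included.
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
assert parsed[2][1] == "line one\nline two"
```

Failure modes (b), (c), and (d) above are exactly what you get when a tool skips the quoting step and just substitutes or strips the awkward characters.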
I wonder 1) what percentage of CSV-represented data passes through a spreadsheet at some point between data source and final data consumption, and 2) how much of that spreadsheet action is specifically MS Excel.
You're right, there is no standard that I'm aware of, though the answer to those first two questions could well be a proxy for a de facto standard in practice.