PhpMyAdmin Project Successfully Completes Security Audit (phpmyadmin.net)
210 points by pyprism on June 14, 2016 | 109 comments


I encourage people to google how to run phpMyAdmin, MySQL Workbench, or Sequel Pro locally, and use port forwarding over SSH. It's super simple.

Here is a command that forwards connections made to localhost:3306 on your machine through the SSH tunnel to port 3306 on example.com (the MySQL default port):

    ssh user@example.com -L 3306:localhost:3306
I would never run a DB admin application on the live server because it's just one more piece that might open a security hole.
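
Once the tunnel is up, you just point your client at the local end. A minimal sketch, assuming the stock mysql CLI and a hypothetical remote account named dbuser:

    # use 127.0.0.1, not "localhost", so the client speaks TCP to the tunnel
    # instead of looking for a local unix socket
    mysql -h 127.0.0.1 -P 3306 -u dbuser -p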


On Windows, I recommend using HeidiSQL, which handles SSH tunnels for you using PuTTY's plink.exe.


I like some of the HeidiSQL tools so much that I run it via WINE on my mac.


Be wary of Heidi and SSH tunnelling on Windows - I'm sure the bugs have been fixed by now, but I realised first-hand that it re-used the tunnel for subsequent connections, meaning you were not making changes on the database you thought you were - definitely caused some problems! However, it is such a good client that I now use it on Linux under Wine :)


https://sourceforge.net/p/heidisql/tickets/2832/

Found my original ticket! And yes, I truncated a few production tables :(


Take it a step further. Set up a locked down bastion server that can only be reached from your IP address, and disable all access to your DB from anything but your application and the bastion server. Then tunnel your local queries through your bastion server.

  ssh -fNg -L 3306:my-secure-db.com:3306 user@bastion-server.com
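  # -f: go to background after authentication
  # -N: don't run a remote command (tunnel only)
  # -g: allow other hosts to connect to the local forwarded port (drop if not needed)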


Another option is to install these tools at a separate domain and set up HTTP authentication on the webserver. That way automated bots scanning for vulnerable apps won't even reach the login page.
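
For example, with nginx (illustrative location and paths; Apache's basic auth works the same way):

    # require a password before the app's own login page is even reachable
    location /phpmyadmin/ {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }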


Apart from security, are there any other benefits?


Sequel Pro (free open source, despite the name) is a native application which is much nicer to use than phpMyAdmin. So yes, there are benefits other than security.



I presume it'd be faster, too, since only MySQL data crosses the wire, rather than web server/PHP-rendered pages.


Or learn how to use the CLI client to connect. No need to install an awkward wrapper software layer.


The "awkward wrapper software layer" has so many workflow enhancements and advantages over the ho-hum CLI client that it's not even funny.


You seem to be a software developer who is advocating that people not automate something that's tedious (writing and running SELECT and UPDATE queries). Weird.


phpMyAdmin is not automation. It's a less-painful manual interface.

Tedium in a GUI is still tedium.


You can automate with scripts. No need to use a fancy GUI for this.
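
For example (a sketch; the table and host are hypothetical, and credentials are assumed to live in ~/.my.cnf):

    # run a canned query with no GUI involved
    mysql -h db.example.com mydb -e \
      "SELECT status, COUNT(*) AS n FROM orders GROUP BY status;"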


And what if I don't want to write scripts because other people have already automated all common database browsing/editing tasks for me? What if I don't want to reinvent the wheel?

With a GUI, you can click 3 times and browse 3 tables. That's just not possible without a GUI.

If GUIs didn't save people (lots of) time, no one would have invented them, and the software industry would have moved away from them by now.

Besides being way faster and more efficient, GUIs do things that CLIs don't:

- linting/error-checking/autocomplete for SQL

- checking query sanity (warn before an unconstrained DELETE for example)

- copy/paste data (within the database and between applications)

- jump from FK cell to the row it references

The list goes on.


Scripted migrations have their advantages too:

- they can be stored in a version control system

- your coworkers can run them at their dev environment

- they can be used for automatic deployment

- you can test them in a dev environment before running them on a production server. With GUI tools you have to remember and repeat the exact steps you did before

- You can review the script or ask someone to look at it

I think GUI tools might be good for browsing and exploring a database, but not for making changes.
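
As a sketch of how lightweight this can be (hypothetical layout; assumes a schema_migrations tracking table and mysql CLI credentials in ~/.my.cnf):

    # apply, in order, any migrations/*.sql not yet recorded
    for f in migrations/*.sql; do
      v=$(basename "$f" .sql)
      if ! mysql mydb -N -e "SELECT 1 FROM schema_migrations WHERE version='$v'" | grep -q 1; then
        mysql mydb < "$f" && \
          mysql mydb -e "INSERT INTO schema_migrations (version) VALUES ('$v')"
      fi
    done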


In that case I would add HeidiSQL to that list.

I use it with Wine in Ubuntu.


> A lack of filtering on user CSV output that could allow an attacker to run arbitrary code on an administrator's computer.

> Improper cookie invalidation that could allow an attacker to unset internal global variables.

Those don't count as serious issues? Props to them for making the report public though.


Interestingly, Google specifically excludes CSV vulnerabilities like that from their bug bounty program.

> CSV files are just text files (the format is defined in RFC 4180) and evaluating formulas is a behavior of only a subset of the applications opening them - it's rather a side effect of the CSV format and not a vulnerability in our products which can export user-created CSVs. This issue should be mitigated by the application which would be importing/interpreting data from an external source, as Microsoft Excel does (for example) by showing a warning. In other words, the proper fix should be applied when opening the CSV files, rather than when creating them.

https://sites.google.com/site/bughunteruniversity/nonvuln/cs...
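
For anyone unfamiliar with the issue: spreadsheet applications treat cells beginning with characters like = (and, in some apps, + - or @) as formulas, so attacker-controlled values exported verbatim can become live formulas when the file is opened. An illustrative (hypothetical) export:

    id,username,comment
    1,alice,"nice post"
    2,mallory,"=cmd|' /C calc'!A0"

The usual export-side defence is to prefix such cells with a single quote so spreadsheets read them as plain text.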


To add a bit of color here, Google is correct to exclude CSV vulnerabilities. There is no way (through this vulnerability) to compromise a Google host, a user account managed by a Google host or sensitive data associated with a user account.

As framed, this is kind of similar to using a Google web tool to edit an XML file, export it and compromise someone using external entity injection. In that context, you're not really compromising Google, and you're not using Google as a medium to automatically compromise many Google users at once. Google is only peripherally involved in the process.

What would qualify for a bounty is code/script execution on a Google host using a CSV file, maybe through something like Google Trends Correlate (https://www.google.com/trends/correlate). Either reflect the CSV contents in a publicly accessible location with a mismatched content-type or leverage an arbitrary file upload error to execute a malicious payload disguised as a CSV on the server. But just exporting a CSV from a Google host is not really a vulnerability in Google.

By the way, for people who are interested in bug bounties, it looks like the page I linked hasn't been updated since 2011. Might want to check it out.


It probably helps that Google's applications encourage their users to open spreadsheets in Google Sheets, instead of in office software running on the users' machines.

If you work on a web application where users can generate reports that contain input from other users, and a common workflow is for users to download those reports and open them in Excel, they aren't going to be happy if your site is a vector for attacks. Saying "Oh, but it's an attack against your laptop, not against our servers; WONTFIX." is going to be tough to justify.


> > A lack of filtering on user CSV output that could allow an attacker to run arbitrary code on an administrator's computer.

Only if the user has Excel, and explicitly allows it to run the formulas in a CSV file. It's already a stretch to call this a phpMyAdmin vulnerability, much less a "medium severity" one.

> > Improper cookie invalidation that could allow an attacker to unset internal global variables.

From the PDF report:

> Note: Because of the large amount of global variables, and the relatively short nature of this assessment, NCC Group was unable to fully determine the impact of this vulnerability.

It might be serious, but they didn't have enough budget to make a proper analysis.


> Because of the large amount of global variables... NCC Group was unable to fully determine the impact of this vulnerability.

In other words, "This project is too full of potential security holes to find the definite ones."


No, it means we understand there are theoretical security issues with global variables, but cannot determine if they're actually applicable or exploitable in this software.


You just repeated exactly the same thing he said as if you were disagreeing.


A theoretical security vulnerability isn't really a thing - it's just a bug. Either it's exploitable, and thus a security vulnerability, or it isn't, and it's just a bug.


Yes it is. It is a bug, that may be exploitable. There's no contradiction there.


Global variables are not bugs -- at worst they are bad style and can cause bugs.

As for your other comments, there's this "burden of proof" thing.


Did you reply to the wrong comment by mistake? What other comments? What are you talking about?


I would venture that to do so would devolve into a full source audit, which seriously increases the scope of the test. Full source audits are likely to consume ten times the calendar effort, or more.

Performing a full source audit is going to result in sticker shock for all but the most well-funded.


I really hate the idea of having a web interface to my database anywhere, no matter how secure they say it is. Social engineering (over direct "hacking") lends itself to circumventing technical security.

No matter their technical security (Although I'm super happy they test phpmyadmin!), I still wouldn't trust it on my servers.

Granted, you can lock phpMyAdmin down via IP restriction, VPN, etc. - that's definitely good - but, if you can forgive a bit of generalization, those measures tend to be over the heads of, or too restrictive for, those using phpMyAdmin.

If we do connect to a database using a GUI (usually an app instead of phpMyAdmin), however, my preference is to go through an SSH tunnel. This lets us connect securely (over SSH) while keeping MySQL inaccessible from the outside world - meaning you can still use MySQL's built-in network security features (bind-address and username hosts, along with firewall restrictions) to lock MySQL down.
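
Concretely, the server side of that is usually just (path varies by distro):

    # /etc/mysql/my.cnf - listen on loopback only; reach it via SSH tunnels
    [mysqld]
    bind-address = 127.0.0.1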


> I really hate the idea of having a web interface to my database anywhere,

Aren't those called "applications"? And yes, I hate them too.


Why do you presume that a web app has to be run publicly? You can easily limit access to a web app by IP, or you can put it on a private network that you access through a VPN. That would make it more secure than most web services that we trust regularly, like Gmail or PayPal...


If you're going to do this, go the VPN route.


For a prospective hacker, I don't think there's much of a (functional) difference between a graphical interface or a shell.


The attack surface for a web application like phpmyadmin is the entire codebase of that application. The attack surface for mysql over an ssh tunnel is basically only the sshd daemon and its authentication configuration.

I think most people would agree which one exposes a greater likelihood of being hacked. Of course you can secure a phpmyadmin installation against even being accessed by attackers (I've done this in the past myself), but there is still a chance of such security measures being accidentally botched compared to the sshd configuration.

I don't feel strongly either way, if you are confident that your security measures on a phpmyadmin installation are solid. I for one, security audit or not, would never expose a phpmyadmin installation on a publicly accessible URL.


I think he is talking about unnecessary additional attack vectors, not about functionality.


Secure Open Source has completed[1] the following audits.

    - PCRE v2 audited by Cure53[2]
      1 Critical
      5 Medium
      20 Low
      3 Informational

    - libjpeg-turbo audited by Cure53
      1 High
      2 Medium
      2 Low

    - phpMyAdmin audited by NCC Group[3]
      3 Medium
      5 Low
      1 Informational

[1] https://wiki.mozilla.org/MOSS/Secure_Open_Source/Completed

[2] https://cure53.de/

[3] https://www.nccgroup.trust/uk/


Stupid question: how does a security audit work? Do the consultants just read through the code? Do they try to find security bugs like they do on bug bounty programs?


I'm not an expert in this field, but we recently did a security audit. The auditors get access to the code in order to evaluate it for vulnerabilities. In our Ruby application, they also checked the gems that we are using (albeit through open source tools).

They also did an in-app audit where they tried to break the application however they might see that. Having access to the code helps with this.

When you get audited by a potential customer, it usually involves them not having code access and trying to penetrate the app without it.


> When you get audited by a potential customer, it usually involves them not having code access and trying to penetrate the app without it.

Is this in reference to on-prem / enterprise software, and is this typical? I haven't heard of customers doing this, but it certainly makes sense (might as well invest thousands to test before spending orders of magnitude more on the product itself, only to find it has a huge security hole). Then again, I'm not sure I've worked with potential customers who have the access to do something like that.


We just signed a big deal with a Google subsidiary, and part of that deal required us to go through a third party penetration test (no code access).


Well today I learned. Thanks!


The first few chapters of the book "The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities" outline a very meticulous process for reviewing source code for vulnerabilities in a professional manner.


Yes. The way it works is that two smart hackers go into the office each day and spend eight hours trying to think of as many creative ways of tapping on the target application as possible. Nothing is off-limits except what is agreed up front, but you're obviously expected not to interfere with production operations. The client is generally expected to set up a testing environment substantially similar to production, but usually consultants just have to muddle through with whatever the client gives them, which may not be (and usually isn't) populated with production data. As long as consultants are given the ability to enter data themselves, i.e. admin accounts, this is fine, because data entry is their job. Hacking is basically large-scale data entry, and it's as boring as it sounds: very tedious, interspersed with excitement when you see an XSS popup window or figure out a clever way to get a reverse shell.

If after two weeks of this you found no medium-or-higher security vulnerabilities, you were generally considered to not be doing a very good job.

The secret of the industry is that at the end of this process, you are deemed secure. That's the point of the security audit. But if it's not a repeating process, it doesn't work. It may work for that particular version of the application, and it may substantially improve the security in that old vulnerabilities are found and fixed. Let me abandon this train of thought and put it another way:

This post is a press release saying that phpMyAdmin is secure. But that's not how this works. High-severity vulnerabilities are often found near the end of an audit. This is because the consultants have had time to become intimately familiar with the application. But the late stages of an audit are exactly when the consultant's time is mostly spent writing reports for the existing findings, and not doing pentesting. This means that two weeks is often just long enough to start finding serious vulns, since week one can be devoted to pentesting and week two is mostly reporting from Tuesday onward. But that "mostly reporting" process gets the consultant thinking about the application as they're doing the writeups, which -- you guessed it -- leads to realizing that there's something clever they could try. And when they try that clever thing, sometimes it yields a high-severity vuln. It's the opposite of a mechanical, thoughtless process.

That means your results will vary depending on who, specifically, is doing the auditing. If you run your application through the consulting process twice -- same version, same staging data, same everything -- it's likely that you'll get wildly different results, because the pentesters are different people.

It has to be an ongoing process in order to be effective. And it can be highly effective. It just costs so much that only the most massive companies can afford it.

That's not to say this audit wasn't effective. It's possible that whoever did the audit found substantially everything. But it was interesting to discover how often this was not the case, in a "How'd they miss this last time?" sort of way.


They're a very worthwhile process for SMEs and companies that haven't had one before. At one of my previous employers (I won't say which), they were marvelling at the ways in which the contractor was able to escalate privileges by editing a form to change their user level from the "3" or "4" in the drop-down to "1".

The fact that they got full rights so quickly really drove home the need for security to be a feature and for code reviews.

Now, the cynical would point out that standard pen testers would have found that, and maybe they would, but the speed at which the contractor could find these issues and then see the full breadth of the attack surface, compared to pen testers, was great. And they could explain back what the problem was in terms of code and how it should be rewritten, rather than just reporting "found rights escalation in form x" and leaving the client to perhaps improperly deal with it.

Overall I was far more impressed watching an auditor doing a few days work than any of the regular pen testing companies I've seen since who mostly seem to point fuzzers at any endpoints they find.


> Now, the cynical would point out that standard pen testers would have found that, and maybe they would, but the speed at which the contractor could find these issues and then see the full breadth of the attack surface, compared to pen testers, was great.

What's the difference between a contractor and a pen tester?

I consider one a function of how you are employed and the other a function of role, i.e. the two overlap and are not directly comparable.


I think eterm is comparing security auditors (with code access) to pen-testers (no code access).


Good question. It can be all or none of the above. Here's what happens at a high level:

Once a company decides it needs a security assessment performed on an application, it engages with a consulting firm. Consulting firms generally offer a variety of services, from web and mobile application penetration tests, to cryptanalysis (implementation and design), to reverse engineering and binary penetration testing, with source code audits sprinkled throughout (or as standalone assessments). Let's assume they move forward with a web application assessment.

The company decides if it wants a source code audit, a penetration test or both. The most comprehensive assessments will include source code and unmitigated access to a staging environment that the consultants do not have to worry about destroying. However, they could also decide they don't want to hand over the code (common in things like sensitive financial applications or in applications with protective developers). I've worked on many assessments where I had no source code - this is called a "black-box" assessment.

Conversely, an assessment might consist of a source code audit with no penetration test! This is less common, but it's particularly suited for engagements where the developers are fairly sure they've eliminated the most common issues and they are really focused on obscure errors, logic flaws and race conditions.

It really depends on the type of security audit. You can have more exotic ones, like black-box cryptanalysis where a company hands Riscure a proprietary payment mechanism and there is heavy reverse engineering and side channel analysis. It can also be very vanilla, like the web application penetration tests that bug bounty programs attempt to simulate. Companies decide what they are going to do based on their application's profile and their goals.

Putting this all together, these are the stages of a traditional security audit from a high-quality firm:

Step 1: A company receives several proposals and decides which company to move forward with based on which statement of work most closely matches their security goals, timing, budget and desired expertise. Then they decide on a start date.

Step 2: Representatives from the company (generally a technical manager, a security engineer or manager if the company has one, and at least one developer) have a conference call with representatives from the security firm (generally, the security consultants performing the assessment, an account executive and a technical manager) to "kick off" the assessment with technical and logistical engagement planning. Things like "How will we access the staging environment?" and "Is there anything off-limits?" are fleshed out here, as well as reminders about scope and scheduling.

Step 3: Things like source code, infrastructure/application/API documentation, PGP keys, etc. are securely exchanged and verified. This comes out of a list of mutual action items from the kick-off call.

Step 4: The actual assessment happens, generally in a period of one to three weeks. I've never been involved in an assessment less than one week long, and assessments longer than four weeks usually need to re-scope or they become monolithic and difficult to coordinate. Progress reports with findings and testing data are securely sent to the company from the security firm.

Step 5: The assessment is finished and a final deliverable is securely sent to the company from the security firm. An optional re-test assessment might happen a few weeks or months later to confirm if the findings have been satisfactorily resolved.

This is based on my knowledge of having worked in security consultancies, engaging with them as an in-house security engineer and running my own consulting firm.


Thanks for your response, great to see how it works from a business side. I'm going to use this opportunity and ask you another question.

What happens if, after 2-3 weeks of consulting, you don't find any "high impact" issues? Are your customers angry, or happy?


That almost never happens. I can count on one hand the number of times it has happened in ~100 past assessments. Generally speaking, the maxim "There is no such thing as a secure system" holds. Competent security consultants should be capable of finding something actionable in all but the most exceptional circumstances if you throw them into a room to search for vulnerabilities for a few weeks.

That said, I have had assessments where there were no findings. This is generally because there are informational observations that can't be escalated to vulnerabilities in the given assessment time, or because the application has a very security-conscious development team. If it happens, it might be a sign that the application is not sufficiently mature to require an assessment yet, or it's just too simple to really analyze. It can also mean that the consultant is not sufficiently competent to perform the assessment.

To give an example, I worked at a large consultancy where we had a giant public company hold us on retainer to perform assessments on "brochure websites" - they were not interactive at all. There wasn't even a login interface. The company wanted to check off that it had security assessments performed on all webpages it hosted, but realistically there were never any actionable findings. (This is about as much detail as I can give because it's NDA'd, but it's not the sort of thing I'd take on in my own practice).

A more recent example is a YC company I worked with a few weeks ago. Their development team is very well educated on security matters. While I found security vulnerabilities, there were no high severity findings because the quality of peer review and paranoid development was very high there. They were very familiar with every Ruby/Rails gotcha and pretty thoroughly avoided them.

To answer your question, I've never had anyone "angry" at me for not finding anything. They're not "happy", but as long as they can verify that the work they paid for was done, they aren't angry. It doesn't happen often, and when it has happened the consultant should provide enough information to demonstrate that competent work was done.

However, I personally don't feel very good about it. My understanding is that competent security engineers in general are not happy about it. It is much more likely that the assessment either shouldn't have happened (because the application is not mature or complex enough) or that the consultant was simply insufficiently competent than that the application is really completely secure.


The smart clients are usually unhappy, unless they've set expectations in advance that you're not expected to find anything (which is rare).

As consultants, you are always very unhappy when your project ends with no sev:hi findings. That, too, is rare.


I can't upvote this comment enough. That's an excellent answer to the question.


I wish NCC Group had been given more time, since phpMyAdmin is nigh-ubiquitous in legacy PHP apps.

For example:

https://github.com/phpmyadmin/phpmyadmin/blob/4cd8ab8a957a23...

Despite setting several security-related session configuration values, they don't touch the session entropy settings, which means a potential session fixation vulnerability.

This might not be a concern for most users: typically your distro ships a php.ini configured to read at least 16 bytes from /dev/urandom. But not always! Many projects set session.entropy_length and session.entropy_file explicitly just to be sure.
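
A minimal php.ini sketch (these directives existed through PHP 5.x and were removed in PHP 7.1, when session ID generation was overhauled):

    ; pull session ID entropy from the kernel CSPRNG
    session.entropy_file = /dev/urandom
    session.entropy_length = 32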


Does anyone know (approximately) how much this audit could have cost?


Given that the assessment occupied two weeks with two consultants, between $25,000 and $35,000.

I don't have intimate knowledge of NCC Group's pricing structure because I don't work there. But I have friends who do, and similarly situated consultancies that I've worked for are in the $10,000/week range for a one-off assessment with non-senior staff. This is also somewhat close to what I charge through my own smaller consulting practice.

Now, if there was specialty work (like crypto), particularly comprehensive work, more consultants billed on the assessment than usual or senior/principal consultants billed on the assessment, the total fee would go up. This is why I added a $10,000 premium to my estimate; the source code analysis detailed in this report might qualify as "non-standard."

That said, NCC might have offered a discount for the opportunity to advertise that they were involved in the audit. But I don't see this assessment having cost anything less than $20,000, even in a charitable situation.


A $10,000/week range seems low for a week-long audit, but it depends on the time charged.

Most audits I've worked on, while a week long, have a two-week pre-audit familiarization period for the audit team, and a week-long post-audit report-writing period. This means a "one week" audit is an actual week of investigation, and at $10,000 that sounds low.

Via the article, it seems like a high-profile client / a lead for future potential clients, so a discount works on many levels.

And from TFA: Conservancy and the phpMyAdmin project are proud of the results and thank Mozilla for funding and initiating the audit.


Interesting. Do you mind if I ask what sort of audits you were working on?

I can understand the 2 week pre-audit familiarization period. How would you price this out instead? I was operating under the assumption that the pre-audit familiarization was priced into the first week as threat modeling and discovery. This would also lend credence to the report admitting that they did not have time to investigate as thoroughly as they would have liked.

I did forget to include the post-audit report-writing period, it's been a while since that was a thing for me. I've never billed for that in my own practice because I disagree with the idea of billing for five days of work that essentially boils down to "fill in findings and application details into a long-form, templated PDF." I've also never seen a consultant really need five days to complete one of those :). I'm sure folks like Tom will come in shortly to beat me over the head for not charging for this part of the assessment.

I don't understand what you mean by this though:

> And from TFA: Conservancy and the phpMyAdmin project are proud of the results and thank Mozilla for funding and initiating the audit.

I do agree it's likely that there is a discount here for future or publicly recognizable work.


Banking. But there was a standard policy, regardless of department - HR, Operations, Technology, Sales, everything. What was important was the scope.

I may have read the article wrongly, however. On second reading, it seems to be "audit" in the sense of a check, not an institutional audit as I had assumed. In that case, certainly not everything is checked. Tires are kicked in the first couple of days, and if something seems like it has a leak, an extremely deep dive is taken - for example, checking thousands of records by hand (well, probably in Excel) looking for something missed: a signature, a verifier, etc. Non-cooperation results in the audit being extended until the auditor is satisfied with their findings.


As a former employee of a penetration testing firm, and a current purchaser of such services, this is contrary to my expectations.

I expect any competent firm to be able, in an afternoon, to look at the overall documentation of the web site, chat with me for an hour or so, and come up with a multi-point threat model that will guide the testing. I expect to pay for the actual week or weeks that the team is actually testing the system, and that the report after is a day or two and part of the price.


We're currently going through an audit - a non-security one, though auditor salaries probably don't deviate that much. We hired a top-five audit firm and were billed roughly $400 per hour in total for two auditors (senior and junior), plus some work by the partner, mostly at the beginning and end.

The scope of their work, of course, is what drives the total cost, but a single full-time week would usually run $15-20k.


You can usually add a pretty significant premium to an audit if it includes a public statement, like this one apparently did. But since Mozilla funded it as part of a block grant, the rate might be significantly lower.


Yeah, good call, I'm a little torn on whether to discount or raise the estimated rate based on the public report. In the end I figured it might be lower due to the possibility of further work down the pipeline.


Does anyone still use this? I didn't realize this was still actively maintained.


If you are using MySQL, and need to manually fuck around with tables for whatever reason, it's really useful and beats most other options.

For us it sees plenty of use with poorly developed legacy software (e.g. Wordpress).


If you need to make a manual database change then Adminer is often a better option[1].

It doesn't have the same featureset as phpMyAdmin, but it has a huge advantage in that it's a single PHP file you can upload, make the necessary changes with, and then delete. If you're interested in maintaining a secure server but you don't have any better option than using a script, then it's better to upload something when you need it than to try to secure an always-online admin tool.
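
The whole workflow is roughly (hypothetical paths):

    scp adminer.php user@host:/var/www/html/tmp-admin.php
    # ... make the changes in the browser ...
    ssh user@host rm /var/www/html/tmp-admin.php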

[1] https://www.adminer.org/


> It doesn't have the same featureset as phpMyAdmin, but it has a huge advantage in that it's a single PHP file you can upload, make the necessary changes with, and then delete

The days when painfully slow FTP servers made "it's one file" count as an advantage are long gone.

This leaves a massively worse UX and featureset.

(We are using Adminer for PostgreSQL databases, because there's no better alternative, and it makes me wish phpMyAdmin supported Postgres every time I have to use it.)


Didn't the PostgreSQL community just announce pgAdmin 4? It seems to come with a modern web client: https://www.pgadmin.org/


Do any of these web-based DB tools come with autocomplete? I'd say that's the feature I value most when using desktop DB tools.


TeamPostgreSQL[1] is a pretty good web interface for Postgres. It has SQL autocompletion too, with completion for schema objects as well as SQL keywords. It is free.

[1] http://www.teampostgresql.com


Thanks for the tip.


The latest version of phpMyAdmin does, but it's not nearly as good as what, for example, Microsoft SQL Server Management Studio does for MSSQL.

My favorite feature is generally the graphical database relation view. Without it I am something like 500% less effective.


Having a single file isn't an advantage from a speed perspective; the advantage lies in only having to upload a file, make a change and delete the file again. There's no install, no config, etc.


MySQL Workbench works fine though?


+1 for this.

The profiler (visual explain) of queries in MySQL workbench is a godsend.

Also, for general querying and table layout lookup in OSX (macOS, whatever) I recommend Sequel Pro. It has a slightly better UI when working with multiple databases (easier to switch).


Last time I tried it (about two years ago), it would crash all the time on Ubuntu, so I went back to phpMyAdmin.


Mine kept crashing on Mac when viewing table information; it turned out to be some old Subversion plugin that integrated into the OS shell.


You might like this: https://www.dbninja.com/


I can't think of much reason to use it over Workbench or Sequel Pro.


Workbench has some great features that I can't do without, but when it comes to just browsing around a DB I much prefer PhpMyAdmin's interface over it. That said I haven't really tried any of the other offerings in the space, so I expect that's a big part of my opinion.


I agree with this comparison but Navicat is well worth the price. I use their data backup and synchronization (data and/or structure) all the time and it works extremely well. The "find in database" text search is a life saver too.


Sequel Pro - No native linux support

Workbench - Massively unstable on linux. (Although I do like the visualizing tools assuming I'm willing to put up with it crashing every hour or so).


Some dev environments aren't local, and sometimes this is faster, especially if you have to document the changes for future updates that don't include your fancy tools.


Sequel Pro's built-in SSH tunnel has worked for me in every remote development situation I've encountered.

It seems like a really bad idea to place a web-based database tool on a public-facing host when technology exists to route MySQL through SSH.

Even shared hosts support SSH these days. If yours doesn't, maybe it's time to find another shared host!


It's been a while but I'm pretty sure you can do so with PHPMyAdmin.

I seem to remember installing it on my own workstation, setting up the ssh tunnel and then pointing PHPMA to localhost.

It's not my favorite tool and I've avoided it due to security concerns but I've set it up for others as described and I recall it worked fine. Like I said though it's been a while and I'm fuzzy on the details.


Oh, if only we were using it only in dev environments…


Countless people maintaining WordPress sites do.


Every Drupal dev shop we've worked with insisted this was installed on the server. It's just firewalled off/tunneled/whatever for safety.


Anyone using shared hosting (e.g. most people who do web development for small businesses) does.


I think many (most?) web developers get their start on some shared hosting provider where your only obvious option for managing MySQL databases is phpMyAdmin. You have to dig a little deeper to realize you can use MySQL Workbench, but even then a lot of them disable remote MySQL and SSH, so you're SOL.

So for me, at least, it's ingrained in my head that phpmyadmin is the best tool for the job given the limitations of what I've got. Although I recently switched my company's reseller hosting account to a provider that actually allows remote MySQL or SSH, so that's exciting.


What are some good alternatives? I've been using DataGrip the last couple of months, but prior to that I used phpMyAdmin all the time because I just couldn't find anything else as useful for MySQL. And even with DataGrip, I sometimes have to log in to phpmyadmin because there's stuff DataGrip doesn't do....


It usually comes by default with CPanel and Plesk on web servers, as well as MAMP / WAMP / XAMPP for development environments. In my experience it's still used a lot by junior devs who haven't yet learned any different, and people with absolutely no idea what they're doing.


Yes. A lot of times it's available on shared hosting behind some login, especially if you don't have shell access on inexpensive hosts.

It's been available at places I've worked. It was locked down by IP. The database was restricted to access by IP too, in theory making outside access more difficult.

I used it a lot (less now) and honestly, I kind of like it. The interface is a little kludgy, but it gets the job done. Queries are editable and exportable in various formats. You can construct a search via the GUI, then edit the SQL it generates. It seems to have a lot of functionality built in: user/table management, etc.

For local instances I use Sequel Pro too (the SSH login function it has is nice and works well).


phpMyAdmin is a lifesaver for newbies and those who are intimidated by the command line. It was for me, and I still prefer to use it when possible.

I can't thank the people who created it and maintain it enough.


I think it is one of the best MariaDB/MySQL GUIs out there.

But to get the good stuff one has to configure it properly, and generally people don't bother configuring it. They just drop the files in a folder.

No other web-based tool for any database that I have tried even comes close.


I personally use Workbench when I can but a lot of clients with shared hosting use it. It still gets the job done for the most part.


Is there much sense in auditing things that are usually used by the admin and by design expose a lot of control over the server? Sure, it must not be exposed to outsiders, but if auth is done right, it doesn't matter how far an insider can get... IMO


How can we get such audits done for our own open source projects?


There are selection criteria listed at https://wiki.mozilla.org/MOSS/Secure_Open_Source , and, if you think you meet most of the criteria, you can fill out a form to apply.


"I'm not sure, what the guys did during the audit of phpMyAdmin, but it took me 3 minutes to find a persistent XSS in the latest version."

https://twitter.com/totally_unknown/status/74275332346864026...


I encourage everyone to use MySQL Workbench over SSH. For whatever reason, people seem not to understand the concept of SSH and the inherent security it provides. But once you explain to folks how to use it effectively, it really is a good balance of security and usability.


> Software Freedom Conservancy congratulates its phpMyAdmin project on succesfuly completing completing a thorough

repetition of "completing" in first line.


10 years late.


And in the PDF, the auditors complain that they didn't have enough time to even fully analyze the impact of the vulnerabilities found.

I wouldn't read too much into it.


That is misleading. They said they had the ability to unset global variables. Looking at the phpMyAdmin codebase, I understand why they didn't have the time.


This is not relevant. An audit costs a substantial amount of money; you wouldn't expect your consultants to spend a lot of time exploiting or building proofs-of-concept. If you have a time-boxed assessment, you want the consultants to cover the most ground, not spend too much time on a single finding.


If fixing the bug is less work than determining exploitability, fixing it and moving on is just economical. Digging in further would only have distracted from looking for other vulnerabilities.



