No, it does not. Here is the next sentence from the article:
"Then we exploited a hardware simulation bug within VMware to escape from the guest operating system to the host one. All started from and only by a controlled a website."
Edge isn't nearly the security nightmare that Internet Explorer was, and it runs sandboxed in a similar manner to Chrome, so you shouldn't underestimate the impressiveness of that first escape either.
CVE listings are not a meaningful metric for security at all. What they could reflect is that Edge's bounty program is far more popular than Chrome's. It could mean that Edge is willing to hand out CVEs for unconfirmed bugs whereas Chrome requires further proof (I am not saying this is the case, to be clear).
I have no reason to believe the metrics are biased. A similar number of issues appear for all browsers. The difference is the type of issue.
Of course you're free to just take it on faith that Edge is more secure because Microsoft says so, and that the CVE listings have no purpose being used as a comparison. But for me, without some other metric to compare by, I'll use this one until there's a better replacement.
Evaluating the security of a product is a fairly rigorous task with no clear, objective 'this one is strictly more secure' outcome possible.
> I have no reason to believe the metrics are biased.
I gave you multiple reasons why they could be biased by a number of things. Not to mention that they seem flat out wrong - Chrome has dozens of security vulnerabilities patched every month. Maybe they don't create CVEs for all of them?
> Of course you're free to just take it on faith that Edge is more secure because Microsoft says so
I don't take it on their word. I have multiple reasons to believe that Edge is a pretty secure browser - and I never said it is more secure than Chrome.
Here is a really solid post that I felt was particularly unbiased, by Justin Schuh, who works for (or is the head of? Can't remember) Chrome's security team.
As you can read, they both take fairly different approaches that are hard to compare objectively. Is a sandbox improvement more powerful than a new mitigation technique? Again, impossible to say. As Justin states, they're doing solid work though.
> that the CVE listings have no purpose being used as a comparison.
CVE has nothing to do with comparing the security of products and everything to do with notifying users of those products that they need to patch.
> But for me, without finding some other metric to compare by I'll use this metric until there's a better replacement
"I have no tools to compare, so I will use an arbitrary, incorrect tool to compare"
This is totally faulty logic. CVEs are not a metric for evaluating security, and they were never intended for such a thing. Actually evaluating software security is a serious, rigorous process that requires more discipline than saying "well, some numbers that have no meaning should be good enough".
As I said above, CVE listings are a useless metric for measuring the safety of a product. A high number of CVEs can easily indicate a safer product with a more mature bug bounty program, or an attitude of "assume it's vulnerable; don't require full exploits to prove it".
There is no good information to be drawn from those two links.