Yes, I am very aware that many times they don't, but that doesn't mean they shouldn't!
Fortunately, in many cases, even when the detail is omitted from the headline theorem, they did in fact do the work and it is stated lower down in the body of the paper; in other cases they fail to state it, but it can easily be assembled from a few parts. That's why I was asking.
But sometimes it's a big complicated thing and they were just like, eh, let's not bother figuring out exactly what it is. To which I say: laziness! You're just making other people do a proof-mine later! You're already doing this much; do the work to get a concrete bound, because you can do it better than later proof-miners can!
I won't say that in an ideally functioning mathematics proof-mining would never be necessary; that'd be like saying that in a well-written mathematics one would never need to refactor. But c'mon, mathematicians should at least do what they can to reduce the necessity of it.
https://yuri.is/not-julia/ is a good write-up of one person's opinion on the problems of Julia. I'm much less experienced with Julia but I somewhat agree. There's too much focus on "magic" instead of correctness for me to try building serious software using Julia. An amazing language in many aspects though.
He says he'll port his performance optimizations to the original game once he's done with his game / romhack. Otherwise he'd have to always update two codebases when he finds a new optimization.
The only reason Safari feels fast is that it has avoided implementing features. There's stuff that's been present in Chrome/Firefox for 10 years but still doesn't work in Safari.
Does Google still show the old search results page design to Safari users? It did that for a long time.
If those features slow down the web then that’s a knock on those features. Plus I’ve yet to see a single one of these supposedly “missing” features actually matter in the real world.
May I recommend Orion? It's based on Safari's engine, but imo improves massively on its UI (for example, it comes with a fantastic opt-in tree-style tab browser).
But most importantly, it supports Chrome and Firefox extensions. Most of them just work out of the box.
It's funny how massive the impact of "bundle a bunch of proprietary stuff into our open browser" was on the browser wars. Between H.264, MP3, Flash and PDF, Firefox never had a chance.
I can't remember Firefox ever being included in installers for Flash updates, antivirus, etc., with "install by default" checked, either.
Perhaps Chrome did succeed mostly on its own merits, but it wasn't above techniques used by things like Bonzi Buddy and Ask Toolbar to get the job done.
To top it off, it could be argued that Chrome is a worse piece of spyware than Bonzi Buddy ever was.
You don't put a single character into the address bar of Chrome without notifying Google.
Yes, even if you are typing a domain instead of planning to search for something, you have told Google about it, which means they know everything you visit, both internal websites at work and everything else.
The problem is that there simply wasn't a better option at the time.
Ogg Vorbis was a novelty at best, and it was the only reasonably widely adopted open source competitor to any of the items listed that was available at the time.
HTML5 had only just been published as a first draft when Chrome launched. So Flash was at that point the only option available to show a video in the browser (sure, downloading a RealPlayer file was always an option, but it was clunky, creators didn't like people being able to save stuff locally, and it was also not open source). Chrome arguably accelerated the process of getting web video open sourced: Google bought On2 in 2010 to get the rights to VP8 (the only decent H.264 competitor available at that point) so they could immediately open source it. The plan was in fact to remove H.264 from Chrome entirely once VP8/VP9 adoption ramped up[1], but that didn’t end up happening.
Flash was integrated into Chrome because people were going to use it anyway, and having Google distribute it at least let them both sandbox it and roll out automatic updates (a massive vector for malware at the time was ads pretending to be Flash updates, which worked because people were just that used to constant Flash security patches, most of which required a full reboot to apply; Chrome fixed both of those issues). Apple are the ones who ultimately dealt the death blow to Flash, and it was really just because Adobe could not optimize it for phone CPUs no matter what they tried (even the few Android releases of Flash that we got were practically unusable). That also further accelerated the adoption of open source HTML5 technologies.
PDF has been an open standard since 2008. While I don't know if pressure from Google is what did it, that wouldn’t surprise me. Regardless, the Chrome PDF reader, PDFium, is open source[2], and Mozilla's equivalent project from 2011, PDF.js, is also open source.[3] Both of these projects replaced the distinctly closed source Adobe Reader plugin that was formerly mandatory for viewing PDFs in the browser.
Chrome is directly responsible for eliminating a lot of proprietary software from mainstream use and replacing it with high-quality open source tools. While they've caused problems in other areas of browser development that are worthy of criticism, Chrome's track record when it comes to open sourcing their tech has been very good.
Exactly. It could even be turtles all the way down, with new building blocks of physics becoming relevant as we go smaller and smaller (and back in time).
It's jarring initially, but becomes natural very quickly. Writing loops like "for i in 1:length(arr) ... end" is pretty neat compared to C++ or even Python. Plus in math sequences typically start at index 1.
I would go as far as to say that it is now largely an archaic idiom to write «for i in 1:length(arr) ... end», and there is no reason to write such code to process a collection of elements (an array, a list, etc.) in its entirety. Yet people keep on writing it and then spend countless man-hours chasing subtle bugs that blow up later in production.
Most modern languages have a «foreach» operator or its functional equivalent as a collection item generator. «foreach elem in my_collection {}» also has clearly defined semantics: «process everything». The code is cleaner, more concise, a bit shorter, and it reduces cognitive overload. And since many languages now support lambdas, the code can be converted quickly and error-free into a staged data processing pipeline without the use of temporary variables that serve no purpose, e.g. «foreach elem in my_collection.filter (_ -> _.my_selection_criteria ("42")) { do_something (elem); }». Whether the collection is indexed from 0 or from 1 becomes irrelevant.
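In Julia specifically, a rough sketch of the same style might look like this (the collection name, the do_something function, and the selection field are made up for illustration):

    # Iterate elements directly; the index base never comes into play.
    for elem in my_collection
        do_something(elem)
    end

    # Or build a small pipeline without temporary variables.
    foreach(do_something, Iterators.filter(e -> e.my_selection_criteria == "42", my_collection))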
You should never write loops that way, at least in code you're going to share (assuming that you’re going to have some arr[i] in the loop body—if not, you would just do "for e in arr").
Assuming that arrays start at 1 is a source of occasional bugs in public packages. The existence of OffsetArrays means that arrays can start at any index (so if you get nauseated by 1-based arrays, you can change that).
Instead, write "for i in eachindex(arr)".
In fact, Julia’s array syntax means you can loop over and manipulate arrays without using indexes much of the time, so you don’t even need to know how they’re based.
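A minimal sketch of those idioms (the array contents here are arbitrary):

    arr = rand(5)

    # Index-generic loop: works the same for 1-based Arrays and for OffsetArrays.
    for i in eachindex(arr)
        arr[i] = 2 * arr[i]
    end

    # Often no index is needed at all.
    total = sum(arr)
    doubled = 2 .* arr   # broadcasting instead of an explicit loop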
> Plus in math sequences typically start at index 1.
I'm not sure that this is the case often enough to say "typically". In terms of undergraduate exposure to math, I think more people have taken Calculus (or Analysis) than Linear Algebra, and I think Calculus textbooks tend to index from 0 while Linear Algebra textbooks tend to index from 1.
It is nitpicky, but the fact that these classes deal with decision problems and not general problems is something that feels like it should be mentioned more often.
Of course this ends up not mattering in practice since we can convert a problem with arbitrary (bitstring) output into a decision problem. To do this we just include an index n in the input and make the decision problem about whether the bit at the given index n is 1.
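For concreteness, a tiny Julia sketch of that conversion (the function f here is just a stand-in for some polynomial-time function):

    # Stand-in for an arbitrary polynomial-time function with bitstring output.
    f(x::String) = bitstring(hash(x))

    # Decision version: "is the n-th bit of f(x) equal to 1?"
    decides_bit(x::String, n::Int) = f(x)[n] == '1'

    # Recover the full output via repeated decision queries,
    # assuming we already know how many bits the output has.
    recover(x::String, len::Int) = join(decides_bit(x, n) ? '1' : '0' for n in 1:len)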
> Of course this ends up not mattering in practice since we can convert a problem with arbitrary (bitstring) output into a decision problem.
Not always. My favorite counter-example is coloring k-colorable graphs. Consider that for some reason (and there could be many), you are guaranteed that all input graphs are k-colorable. Still, the input is just a graph without any coloring. Your algorithm is tasked with finding a proper coloring using the smallest number of colors.
It is a problem that has already been studied in terms of approximation, and at the same time an optimization problem that has no good decision version, as far as I am aware.
The comment I was referring to was talking about "decision problems and general problems" and there always being a reduction between them.
Now "general problems" is a bit vague, but in my classes on optimization the students are intuitively led to believe that optimization problems always have a decision problem associated with them, so we can talk about NP-hardness of optimization problems, too. Which is often true, but not always.
As a good example, if you consider graph coloring, you can argue that the associated decision problem is "given a graph G, and a parameter k, answer yes if G is colorable with at most k colors". This way, slightly informally, you can talk about NP-hardness of finding the smallest number of colors for a graph.
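To make that association concrete, here is a rough Julia sketch, with a hypothetical decision oracle is_colorable standing in for any solver of the decision problem:

    # is_colorable(G, k) is a hypothetical oracle answering the decision problem
    # "is G colorable with at most k colors?". Given such an oracle, the
    # optimization version (finding the chromatic number) reduces to repeated queries.
    function chromatic_number(G, is_colorable)
        k = 1
        while !is_colorable(G, k)
            k += 1
        end
        return k
    end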
However, the optimization problem I presented -- coloring k-colorable graphs -- is a valid optimization problem, it is interesting and has been studied in the past, but it has no good decision problem associated with it.
Your conversion doesn't address determining the length of the output.
Another conversion avoids that issue by converting a polynomial-time function f into a language whose membership queries reveal both the bits of f(x) and the length of f(x).
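One standard choice of such a language (this particular form is my own assumption; several equivalent variants exist) is L_f = {(x, i, b) : i ≤ |f(x)| and the i-th bit of f(x) is b}. A rough Julia sketch of how membership queries to L_f recover both the length and the bits:

    # Stand-in for an arbitrary polynomial-time function with bitstring output.
    f(x::String) = bitstring(hash(x))

    # Membership in the (assumed) language L_f = {(x, i, b) : i <= |f(x)| and bit i of f(x) is b}.
    in_Lf(x::String, i::Int, b::Char) = i <= length(f(x)) && f(x)[i] == b

    # Recover f(x), including its length, using only membership queries to L_f.
    function recover(x::String)
        bits = Char[]
        i = 1
        while true
            if in_Lf(x, i, '1')
                push!(bits, '1')
            elseif in_Lf(x, i, '0')
                push!(bits, '0')
            else
                break   # neither (x, i, 0) nor (x, i, 1) is in L_f, so i > |f(x)|
            end
            i += 1
        end
        return String(bits)
    end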
This would make sense, right? You probably can't run all cores at max turbo frequency before hitting power and thermal limits. So dividing chiplets based on binning would result in good performance for tasks with 1-8 threads while reducing the chip lottery.
(But it's possible there are other reasons for the discrepancy)
The first CCD is binned for the SKU's spec; the other is whatever. This has been true for all chiplet Ryzens, though the effect used to be more extreme: I had, for example, a 3900X where the second CCD only managed to run at 4.1 GHz under load, compared to ~4.4 GHz on the first CCD (with 4.6 GHz printed on the box). That's pretty common with those.