It's in line with what I perceive as the more informal tone of the SQLite documentation in general. It's slightly wordier but fun to read, and feels like the people who wrote it had a good time doing so.
Pages and other iWork apps will remain free. The premium features are for curated images and templates, and AI assisted document creation. If you don't care for those features, you will not be affected by the change.
I don’t think this fully explains it. In the nineties you could build a whole application using a toolkit like MFC, and you could ship the entire .dll, which bundled a whole bunch of (low-res, remarkably tasteless even for the time) bitmapped UI assets, and the resulting package didn’t hold a candle to modern bloat. Nor could it: you wanted that self-extracting installer (this predated MSI!) to be reasonable to download on a 56k modem, and even if you used a CD, you wanted room for something else on that CD.
Of course, there were multi-hundred-MB software packages in that era, but they were complex, multifunctional, and highly capable. And IIRC all of Microsoft Office (RIP!) except for some rarely used extras still managed to fit on one CD for a long time.
I'm wondering what the source for these apps was. Google Play Store downloads strip unnecessary localized assets at download time. This used to be a BIG deal back when icons were bitmaps; compiled binaries do provide COMPLETE pre-rendered sets of bitmap icons if your downlevel minimum version (Android 5.0? Android 6.0?) extends that far. These are then stripped whenever you download to a modern phone, assuming that you have SVG replacements for all legacy bitmap icons. Perhaps Gmail does not. So if there are multiple localizations present, that would suggest that either the user is measuring the pre-installed app (which is almost immediately upgraded and is therefore effectively replaced), or has obtained the APKs from a different source.
The publicly available email app in the Android sources is NOT gmail (and is therefore likely to be unloved and uncared for, and probably will contain massive blobs of bitmap icons). So if it was that...
Any native code ALSO bloats compiled binary size dramatically (since binaries include code compiled for each processor you have selected when you performed the native build). Unnecessary binary blobs are stripped by the Play Store when you download. It is conceivable that gmail carries ancient crusty pieces of native code, I suppose, given its long heritage.
Downloads also include pre-compile maps to speed up startup. Very strange process: apparently the Google Play Store profiles the startup of the first 20-odd users who download your app, and then transmits the pre-compile map to all subsequent downloads. I'm not sure whether apps are pre-jitted at install time or whether the pre-jitted code is downloaded from the Play Store. (The Play Store does tell you it is going to do this when you upload, and -- sure enough -- load time "magically" improves by a significant amount shortly after you push the binary to production.) I honestly don't know whether pre-jitting has taken place before first load, or whether that code shows up as cache space or app size.
Compat frameworks, on the other hand... absolutely yes! I'm not sure that native Android framework code EVER gets executed in a modern app, to be honest. It's almost all compat layers and extensions, I think.
>iOS apps also need to include all localized assets in each app bundle.
Is this a hard requirement? I use two languages, or at most three. The other 100 languages don't matter to me. Why do I have to waste space on my smartphone? This adds up if I have hundreds of apps.
You could get localizations OTA, but that's engineering you need to do to make that happen... or a product you purchase, as there are various localization managers offering these options in their software.
If you do software as a profession, is the book really THAT expensive at £45? Having a deep understanding of CSS could make you significantly more than that.
debug_assert!() (and its equivalent in other languages, like C's assert with NDEBUG) is cursed. It states that you believe something to be true, but will take no automatic action if it is false; so you must manually implement the fallback behavior for when your assumption is false (even if that fallback is just fallthrough). But you can't /test/ that fallback behavior in debug builds, which means you now need to run your test suite(s) against both debug and release builds. While this is arguably a good habit anyway (although not as good a habit as just not having separate debug and release builds), deliberately diverging behavior between the two, and having tests that only work on one or the other, is pretty awful.
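Concretely, in Rust (the function, its fallback, and the test here are invented purely for illustration):

    fn head(xs: &[u8]) -> u8 {
        debug_assert!(!xs.is_empty(), "caller must pass a non-empty slice");
        if xs.is_empty() {
            return 0; // manual fallback, unreachable in debug builds
        }
        xs[0]
    }

    #[test]
    fn empty_input_falls_back_to_zero() {
        assert_eq!(head(&[]), 0);
    }

Under a plain "cargo test" (a debug build) the test dies at the debug_assert!, while under "cargo test --release" the fallback runs and the test passes: exactly the divergence described above.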
For example, I’m pretty sure some complex invariant holds. Checking it is expensive, and I don’t want to actually check the invariant every time this function runs in the final build. However, if that invariant were false, I’d certainly like to know that when I run my unit tests.
Using debug_assert is a way to do this. It also communicates to anyone reading the code what the invariants are.
If all I had was assert(), there’s a bunch of assertions I’d leave out of my code because they’re too expensive. debug_assert lets me put them in without paying the cost.
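For instance (a made-up helper, but the shape is real):

    /// Binary search that relies on `xs` being sorted.
    fn find(xs: &[i32], needle: i32) -> Option<usize> {
        // A full O(n) pass over the input: too expensive to ship on a
        // hot path, but fine to run in every debug/test build.
        debug_assert!(xs.windows(2).all(|w| w[0] <= w[1]), "input must be sorted");
        xs.binary_search(&needle).ok()
    }

In a release build the debug_assert! compiles away and the function stays O(log n), but every ordinary test run (cargo test builds in debug mode by default) still verifies the sortedness invariant.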
And yes, you should run unit tests in release mode too.
There is no recovery. When an invariant is violated, the system is in a corrupted state. Usually the only sensible thing to do is crash.
If there's a known bug in a program, you can try and write recovery code to work around it. But it's almost always better to just fix the bug. Small, simple, correct programs are better than large, complex, buggy programs.
Correct. But how are you testing that you successfully crash in this case, instead of corrupting on-disk data stores or propagating bad data? That needs a test.
> Correct. But how are you testing that you successfully crash
In a language like rust, failed assertions panic. And panics generally aren't "caught".
> instead of corrupting on-disk data stores
If your code interacts with the filesystem or the network, you never know when a network cable will be cut or power will go out anyway. You're always going to need testing for inconvenient crashes.
IMO, the best way to do this is by stubbing out the filesystem and then using randomised testing to verify that no matter what the program does, it can still successfully open any written (or partially written) data. It's not easy to write tests like that, but if you actually want a reliable system they're worth their weight in gold.
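A rough sketch of the shape of such a test (the storage stub, the toy record format, and the recovery function are all invented for illustration, and the crash point is enumerated exhaustively here rather than randomised, but the idea is the same):

    // Test double for the filesystem: stops persisting after `budget`
    // bytes, as if the power went out partway through a write.
    struct CrashyStorage { data: Vec<u8>, budget: usize }

    impl CrashyStorage {
        fn append(&mut self, bytes: &[u8]) {
            let n = bytes.len().min(self.budget);
            self.data.extend_from_slice(&bytes[..n]);
            self.budget -= n;
        }
    }

    // Toy on-disk format: a 1-byte length followed by the payload.
    fn write_record(s: &mut CrashyStorage, payload: &[u8]) {
        let mut rec = vec![payload.len() as u8];
        rec.extend_from_slice(payload);
        s.append(&rec);
    }

    // Recovery: read back whole records, silently dropping a torn tail.
    fn recover(data: &[u8]) -> Vec<Vec<u8>> {
        let mut out = Vec::new();
        let mut i = 0;
        while i < data.len() {
            let len = data[i] as usize;
            if i + 1 + len > data.len() {
                break; // torn write: ignore the incomplete tail
            }
            out.push(data[i + 1..i + 1 + len].to_vec());
            i += 1 + len;
        }
        out
    }

    // Property: no matter where the "power" cut out, recovery returns a
    // prefix of what was written: never garbage and never a panic.
    #[test]
    fn recovers_from_any_crash_point() {
        let records: Vec<Vec<u8>> = (0u8..20).map(|i| vec![i; i as usize]).collect();
        for budget in 0..1024usize {
            let mut s = CrashyStorage { data: Vec::new(), budget };
            for r in &records {
                write_record(&mut s, r);
            }
            let recovered = recover(&s.data);
            assert!(recovered.len() <= records.len());
            assert!(recovered.iter().zip(&records).all(|(a, b)| a == b));
        }
    }

A real version would also randomise the operations and the record contents, and model things like reordered or dropped writes, which is where most of the difficulty lives.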
I think the idea is that those asserts should never be hit in the first place, because the code is correct.
In reality, it's a mistake to add too many asserts to your code. Certainly not so many that performance tanks. There's always a point where, after doing what you can to make your code correct, at runtime you gotta trust that you've done a good enough job and let the program run.
You don't. Assertions are assumptions. You don't explicitly write recovery paths for individual assumptions being wrong. Even if you wanted to, you probably wouldn't have a sensible recovery in the general case (what will you do when the enum that had 3 options suddenly comes in with a value 1000?).
I don't think any C programmer (where assert() is just debug_assert!() and there is no assert!()) is writing code like:
    assert(arr_len > 5);
    if (arr_len <= 5) {
        // do something
    }
They just assume that the assertion holds and, if it doesn't, hope that something crashes later and provides info for debugging.
Anyone writing with a standard that requires 100% decision-point coverage will either not write that code (because NDEBUG is insane and assert should have useful semantics), or will have to both write and test that code.
I would have expected SVGs to be like PDFs and render the same across devices. Is the issue that some renderers don’t implement the full spec, or that some implement parts incorrectly?
They are reasonably consistent because there is a de facto reference implementation (Adobe Acrobat); if your implementation does not match it exactly, users will think your implementation is broken.
You definitely don't understand PDFs, let alone SVGs.
PDFs can also contain scripts. Many applications have had issues rendering PDFs.
Don't get me wrong, the folks creating the SVG standard should've used their heads. This is like the 5th time (that I am aware of) this type of issue has happened (and at least 3 of them were Adobe). Allowing executable code in an image/page format shouldn't be a thing.
SVG can for example contain text elements rendered with a font. If the font is not available it will render in a different one. The issue can be avoided by turning text elements into paths, but not all SVGs do that.
More like HTML, and getting different browsers to render a pixel-perfect identical result (which they don't), including text layout and shaping. Where "different browsers" doesn't mean just Chrome, Firefox, and Safari, but also IE6 and CLI-based browsers like Lynx.
PDFs at least usually embed the used subset of fonts and contain explicit placement of each glyph, which is also why editing or parsing text in PDFs is problematic. Although the format also has many variations of the standard and countless Adobe-exclusive extensions.
Even when you have exactly the same font, text shaping is tricky. And given SVG's lack of a way to embed fonts, files that unintentionally reference a system font or a generic font aren't uncommon. When you don't have the same font, it's very likely that any carefully placed text on top of a diagram will be more or less misplaced, wrap badly, or even completely disappear due to lack of space, because there is zero consistency in metrics across different fonts.
The situation with the specification is also not great. SVG 1.1 alone defines certain official subsets, but in practice a lot of software just picks whatever is most convenient for it.
The SVG 2.0 specification has been in limbo for years, although it seems the relevant working group has recently resumed discussions. Browser vendors are pushing towards synchronizing certain aspects of it with HTML-adjacent standards, which would make fully supporting it outside browsers even more problematic. And it's not just polishing little details: many major parts that were in earlier drafts are being removed, reworked, or put on the backlog.
There are features, like CSS, JavaScript, and external resource access across different security contexts, which are impractical to implement (or which you don't want to implement) outside major web browsers with a proper sandboxing system, and even that's not enough once uploads get involved.
There are multiple different parties involved, with different priorities and different thresholds for what features are sane to include:
- SVG as a scalable image format for icons and other UI elements in (non-browser-based) GUI frameworks -> anything more complicated than colored shapes/strokes can be problematic
- SVG as a document format for desktop vector graphics editors (mostly Inkscape) -> the users expect feature parity with other software like Adobe Illustrator or Affinity Designer
- SVG in browsers -> they get certain SVG features for free by treating it like a weird variation of HTML, because they already have CSS and JavaScript functionality
- SVG as a 2D vector format for CAD and CNC use cases (including vinyl cutters, laser cutters, engravers ...) -> such tools rarely support anything beyond basic shapes and paths
Besides the obviously problematic features like CSS, JavaScript, and animations, stuff like raster filter effects, clipping, text rendering, and certain resource references is also inconsistently supported.
Unless you explicitly export from Inkscape as plain 1.1-compatible SVG, you will likely get an SVG with some cherry-picked SVG 2 features and a bunch of Inkscape-specific annotations. Inkscape tries to implement any extra features in a standard-compatible way, so that in theory, if you ignore all the inkscape-namespaced properties, you lose some editing functionality but still get the same rendered result. In practice some SVG renderers can't even do that, and the SVG 2 specification not being finalized doesn't help. And if you do export as plain 1.1 SVG, some features either lack good backwards-compatibility converters or get implemented as JavaScript, making the files incompatible with anything that isn't a browser, including Inkscape itself.
Just recently GNOME announced work on a new SVG renderer. But everything points to them planning to implement only the things they need for the icons they draw themselves and the official Adwaita theme, and nothing more.
And that's not even considering the madness of the full XML specification/feature set itself. Certain parts of it are just asking for security problems. At least in recent years some XML parsers have started shipping safer defaults, disabling or not supporting that nonsense. But when you encounter an SVG using such XML, whose fault is it? Is it the SVG renderer's fault for intentionally not enabling insane XML features, or the fault of the person who hand-crafted the SVG using them?
Thanks for the feedback. The document model in mongo was slopped together by a junior engineer, so perhaps an unorthodox approach. It is basically flat, and already used in a pseudo-relational manner via in-app joins to the existing sqlite store. This blog post inspired me to think: what if we just chucked all the json from mongo into sqlite and used the generated indices? Then we can gradually "strangler fig" endpoint by endpoint.
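Concretely, something like this is what I'm picturing (a sketch using the rusqlite crate; the table, fields, and queries are placeholders, and it assumes an SQLite new enough for generated columns and the JSON functions):

    use rusqlite::{params, Connection, Result};

    fn main() -> Result<()> {
        let conn = Connection::open("app.db")?;

        // Dump the raw JSON documents into one table, then expose the
        // fields we actually query on as indexed generated columns.
        conn.execute_batch(
            "CREATE TABLE IF NOT EXISTS docs (
                 id    INTEGER PRIMARY KEY,
                 body  TEXT NOT NULL CHECK (json_valid(body)),
                 email TEXT GENERATED ALWAYS AS (json_extract(body, '$.email')) VIRTUAL
             );
             CREATE INDEX IF NOT EXISTS docs_email ON docs(email);",
        )?;

        conn.execute(
            "INSERT INTO docs (body) VALUES (json(?1))",
            params![r#"{"email":"a@example.com","name":"A"}"#],
        )?;

        // The index on the generated column makes this an ordinary indexed lookup.
        let name: String = conn.query_row(
            "SELECT json_extract(body, '$.name') FROM docs WHERE email = ?1",
            params!["a@example.com"],
            |row| row.get(0),
        )?;
        println!("{}", name);
        Ok(())
    }

Each mongo collection would get a table like this, and individual fields would only get promoted to indexed generated columns once an endpoint actually needs to query on them.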
Maybe the page could have been shorter, but not by much.