Databases have pretty robust access controls for limiting a SQL user's access to tables, schemas, etc. There are basic controls, like being able to read but not write, and more advanced setups, like being able to access data through a view or stored procedure without having direct access to the underlying tables.
Those features aren't used often in modern app development, where one app owns the database and any external access is routed through an API. They were much more common in old-school enterprise apps, where many different teams and apps would all directly access a single DB.
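As a rough sketch of the view-mediated pattern described above (Postgres-flavored SQL; the role, view, and table names are invented for illustration):

```sql
-- Sketch only; names are made up.
CREATE ROLE report_reader LOGIN;

-- Expose only non-sensitive columns through a view.
CREATE VIEW order_summary AS
SELECT order_id, status, created_at
FROM orders;

-- The role can read the view...
GRANT SELECT ON order_summary TO report_reader;
-- ...but gets no grant on the base table, so direct access to
-- `orders` (including its other columns) is denied.
```

In Postgres, a view is by default executed with its owner's privileges on the underlying tables, which is what lets `report_reader` see the rows without holding any rights on `orders` itself.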
> I suppose the reason is because it takes more time?
I assume so. I didn't even notice that the article never explains why `async: false` is bad. I always avoid it if I can, since you might as well run independent tests concurrently.
From the docs [1]:
* `:async` - configures tests in this module to run concurrently with tests in other modules. Tests in the same module never run concurrently. It should be enabled only if tests do not change any global state. Defaults to `false`.
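Concretely, opting in looks like this (the module and function names are invented; `async: true` is the ExUnit option quoted above):

```elixir
defmodule MyApp.ParserTest do
  # May run concurrently with tests in *other* modules,
  # so these tests must not touch global state.
  use ExUnit.Case, async: true

  test "parses empty input" do
    assert MyApp.Parser.parse("") == {:ok, []}
  end
end
```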
This is a great write-up. There appear to be a few camps forming in the comments, and I’m in camp “SQL is confusing, and attempts to explain it in terms of relational algebra have felt inadequate to me”.
It also gives me some good follow-up material to read. I’m particularly interested in the link that frames subqueries and lateral joins in terms of a new “dependent join” operator.
ERDs are your friend. Learn how to generate one, and how to read it.
The relations (not relational, not algebra) are IN the design; they are IN the ERD (as a visualization tool). Even if you're not a visual thinker, the ERD might help you find a path between two distant tables.
Needing a subquery is rare. It happens, but a lot of subqueries would be better off as joins. Once you grasp the design of something, you're less likely to reach for a subquery.
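A common case (invented schema): an `IN (...)` subquery that reads more directly as a join:

```sql
-- Subquery version: find users with a large order.
SELECT u.name
FROM users u
WHERE u.id IN (SELECT o.user_id FROM orders o WHERE o.total > 100);

-- Join version of the same question; DISTINCT guards against
-- duplicate names when a user has several qualifying orders.
SELECT DISTINCT u.name
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE o.total > 100;
```

Modern planners will often produce the same plan for both forms, but the join version tends to be easier to extend and reason about.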
Explain is your friend. Reading an explain plan will give you some good insight into what is going on UNDER the hood. Not only will it help you tune slow queries, it also shows you how large queries decompose.
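For instance (Postgres syntax; same invented schema as above), prefixing a query with `EXPLAIN ANALYZE` shows both the plan the optimizer chose and the actual row counts and timings:

```sql
EXPLAIN ANALYZE
SELECT u.name, count(*)
FROM users u
JOIN orders o ON o.user_id = u.id
GROUP BY u.name;
-- Output is the plan tree (scans, join strategy, aggregate),
-- with estimated vs. actual rows at each node.
```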
Lastly, there is nothing worse than having to query a badly designed DB. If you do a shit job on the design, everything else is going to be painful.
Depends on what you want from a (text)book. In my mind, books should be authoritative; they should include things that have had some thought put into them and are fairly well studied/verified. Modern advances in deep learning/ML are exciting but are very often not this. I would not read a book that is just some recent hype papers from NeurIPS/ICML stapled together.
It depends on the particular subtopics it covers. Machine Learning: A Probabilistic Perspective is from 2012 and it's still a great resource, although Murphy's newer book will certainly cover more up-to-date material.
My personal exasperation is less about being unable to find alternatives (e.g. [1]). It's more that those alternatives aren't 1) as good, 2) free, or 3) part of the same platform.
And even if I use an alternative, my friends, family, and workplace do not. So I'm still fighting the use of Google products after I stop using them.
I don't really disagree, I just feel like that choice is pretty well known at this point. Big tech provides a high degree of polish, an economical cost, and probably mind-blowing amounts of data collection. Alternatives are slightly less polished, perhaps more expensive, and probably more favorable from a privacy perspective. Sure it would be nice to have it all, but I see this as just another tradeoff we have to make to live in the modern world. I sympathize with the viewpoint, I guess I just react differently to the options.