Internet related: back in the early 2000s, when I was building out the infrastructure for Pinkbike/Trailforks, I was optimizing things and discovered that the initcwnd parameter was hardcoded to something like 2 or 3 packets in the Linux kernel. This was causing the initial page to require multiple RTTs to load, so I figured I would change that and recompile the kernel to make sure all our pages would be transmitted in one go. This made our site perform a lot better compared to most sites at the time.
Funny at the time I was a bit worried that the IETF would discover this and shut us down or something.
These days that parameter defaults to something like 10, and you can increase it with a config parameter.
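For reference, here is roughly how that tuning looks today with iproute2, no kernel recompile needed (the gateway address and device name are placeholders):

```shell
# assumes the default route goes via 192.0.2.1 on eth0 (placeholders)
# bump the initial congestion window to 10 segments
ip route change default via 192.0.2.1 dev eth0 initcwnd 10

# verify the route now carries the setting
ip route show default
```

This is per-route configuration, so it takes effect immediately without a reboot, but it does not persist across reboots unless you add it to your network setup scripts.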
Thank you. The UI is pretty basic but I tried to design it in such a way that the functionality would feel intuitive to newcomers, with as few hidden features as possible. I picked this project to share because I figured that it was very easy for people to play with by editing the pattern, without needing to understand how the sound synthesis works.
I think this article is cherry-picking here, trying to make Canada look good compared to, say, the USA and other countries. (I'm Canadian, btw.)
I think it's easy to take in 28K refugees when the illegal-immigration load in Canada is very small, at something like 20K per year. [1]
Compare that to the USA, which needs to deal with an illegal-immigration load 20X that. [2]
And for 2019 the USA's load seems to be twice that of 2018. [3]
If you agree that using a CDN for static content is a good idea, then it would seem HTTP/2 Push is useless.
The website is served from your servers while the static content is served from a CDN, so you can't "push" it in the same stream as your webpage content.
Am I missing something here?
Yes, you can't push cross-origin.[1] However, there's still a lot of use-cases where this is useful, such as if your entire site is static content, or if your app servers are behind the CDN as well.
[1]: Yet. I believe the web packaging standard (intended, among other things, to replace AMP) allows pushing bundles signed by other origins.
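For the same-origin case, a push setup might look like this (a hypothetical nginx config; the `http2_push` directive exists in nginx 1.13.9+, and the asset paths here are made up):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    # when the page itself is requested, proactively push
    # its same-origin assets in the same connection
    location = /index.html {
        http2_push /css/site.css;   # example asset paths
        http2_push /js/app.js;
    }
}
```

The pushed assets must live on the same origin as the page, which is exactly the restriction discussed above.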
I'm not sure I ever understood this right, but it seemed server push was not an ideal overall solution. It seemed targeted at pushing needed CSS/JS/images in the stream of the page result. But most assets that one would want loaded come via an external CDN, so pushing those does not make sense. Yeah, I guess it works if you are proxying your whole site via the CDN.
Am I missing something here?
My view on this is that it isn't just "HTTP page preloads assets to get", and more like "HTTP request preloads other HTTP requests". The difference is, in the case of a CDN, it could serve a JS file for, say, a JS framework, but also send you jQuery, etc., because it knows it's a dependency.
All this is to get away from a lot of our current hacky workflows (like file concatenation).
I don't see a lot of external CDN script (like cdnjs) use in the real-world anymore. That's sort of dated compared to bundling and using something like S3. Ideally, these days you'd still minify, but your *.html files could plainly include scripts, and the server would read the resources to be sent, and send them all over the same stream, rather than forcing a client to open multiple connections.
I hear ya...of the two bolded feature headings in the story we get:
1. Supersonic speed
( always welcome )
2. League of extraordinary emojis
( the last thing I would care about in a new os release )
You can get centimeter accuracy with GPS: http://swift-nav.com/piksi.html
Even with cheaper modules and phones that don't do carrier-phase RTK, you can get meter-level accuracy, which would be plenty for this application.
Last time I used Redis I was surprised to discover that it was single-threaded. Of course I could have just RTFM, but I assumed incorrectly.
This means that if one part of your application requires fast, consistent GETs, and another part does a slower SORT, UNION, DIFF, etc., on the same db (or even other dbs on the same Redis server), every other client request has to wait for that slower command to finish.
http://redis.io/topics/latency
This is something one really has to engineer around in order to use it in an environment that requires performance and consistent latency. In our case of thousands of req/s, it was just unacceptable to have latency affected, sometimes by 10x, by a slower command.
If the two datasets with different access speed requirements are disjoint, you can just run two instances of redis. One for the high-latency gruntwork, one for the low-latency GETs.
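A minimal sketch of that split (the ports and memory limits here are just examples, not recommendations):

```shell
# one instance reserved for low-latency GET/SET traffic
redis-server --port 6379 --maxmemory 1gb

# a second instance for the slow SORT/UNION/DIFF gruntwork
redis-server --port 6380 --maxmemory 4gb
```

Each instance runs its own event loop, so a slow command on port 6380 can no longer stall the GETs on port 6379.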
If the datasets aren't disjoint, then you're trying to do fast and slow ops with the same data, which - if you need accurate values - is going to be mildly hairy even if multithreaded, since you'll need to somehow lock the data while you do the slow op (which will exclude the GETs, causing high latency), or you'll need some kind of transaction-based stable view to operate on (e.g. transactional memory?)
It's very rare that data access is disjoint, unless you're only doing key/value put/get. I think the interest of Redis is that it has many features beyond simple put/get, and all those sorts, diffs, etc. typically operate on a set of data that is actively being written to.
For sure having multiple instances will help some of this, but adds more complexity. Do you have your app write to multiple instances, and then read low latency from one, and read high latency from another? Is that data now consistent? Do you setup Redis replication and make sure that works right and then read from different replicas? Or perhaps you engineer some queue that does not block writes, groups them together and writes to Redis in a separate thread. Then you have to maintain all this and make sure it's correct, back it up, what are the corner cases, failure modes, etc.
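That last queue-based idea can be sketched without any Redis specifics: a background thread drains a queue and flushes grouped writes through a callback (the callback here is a plain function standing in for, say, a Redis pipeline; all names are hypothetical):

```python
import queue
import threading
import time

class BatchedWriter:
    """Groups writes and flushes them from a background thread,
    so callers never block on the slow store."""

    def __init__(self, flush, max_batch=100, interval=0.05):
        self._q = queue.Queue()
        self._flush = flush          # would wrap e.g. a Redis pipeline
        self._max_batch = max_batch
        self._interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def write(self, item):
        self._q.put(item)            # non-blocking for the caller

    def _run(self):
        # keep draining until stop is requested AND the queue is empty
        while not self._stop.is_set() or not self._q.empty():
            batch = []
            deadline = time.monotonic() + self._interval
            while len(batch) < self._max_batch:
                timeout = deadline - time.monotonic()
                if timeout <= 0:
                    break
                try:
                    batch.append(self._q.get(timeout=timeout))
                except queue.Empty:
                    break
            if batch:
                self._flush(batch)   # one round trip instead of many

    def close(self):
        self._stop.set()
        self._thread.join()
```

Usage: `w = BatchedWriter(my_flush); w.write(item)`. Of course, this is exactly the kind of extra machinery (with its own failure modes around crashes and unflushed data) that the comment above is warning about.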
From my experience, if you want to engineer things well, you end up essentially building out the same subsystems that a larger db engine has. Say, InnoDB.
I'm smart enough to know that I'm not smart enough to build a one-off complex system more correctly than really smart people who have been iterating and improving something like InnoDB for many years.
There are only very rare, very specific cases where I would use Redis over something else if I were building something realtime, large, and important.
Your script is named tracker.js, which is not the best name, as adblockers/privacy blockers simply block any name containing the word "track", along with many other similar names.
If you named your script something that isn't blocked, it would be able to run, and it would be valuable to see the errors users get when they block other JS on a page.
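As a toy illustration of why the name matters (the patterns below are simplified stand-ins for real EasyPrivacy-style rules, not the actual lists):

```python
import fnmatch

# simplified stand-ins for common blocklist rules (assumed, not the real lists)
BLOCK_PATTERNS = ["*track*", "*analytics*", "*beacon*", "*pixel*"]

def likely_blocked(script_name):
    """Return True if the filename matches a common blocker pattern."""
    return any(fnmatch.fnmatch(script_name.lower(), p) for p in BLOCK_PATTERNS)

print(likely_blocked("tracker.js"))   # True: contains "track"
print(likely_blocked("errlog.js"))    # False: neutral name
```

A neutral name like errlog.js sails past filename-based rules, though blockers also match on URL paths and domains, so renaming the file alone isn't always enough.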