I have to agree that one should be using PHP 7.2. It also gives a nice performance boost to Matomo.
The required PHP version for Matomo is shown in [1] (5.5.9 or greater).
Can you please send me a link to the FAQ page mentioning 5.3 (e.g. to lukas@matomo.org) so it can be updated?
My mistake, I was only skimming the docs and overlooked the linked requirements. I would go further, though, and remove outdated PHP versions from that page and only recommend maintained versions.
Additionally, the error described by OP looks like autoloading is broken.
Just FYI: Matomo has nearly no impact on loading performance, as it is always loaded async and deferred (so only after the page has finished loading).
If this is still too slow for you, you could try the official QueuedTracking plugin [1], which writes tracking requests into a Redis or MySQL queue and processes them afterwards.
That way the tracking request itself should take about 30 ms.
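The queue-then-process pattern behind the plugin can be sketched roughly like this. This is a toy in-memory stand-in, not the plugin's actual API; all names here are mine, and in the real plugin the queue would live in Redis or MySQL:

```python
import json
from collections import deque

class TrackingQueue:
    """Toy stand-in for queued tracking: accept requests fast, process later."""

    def __init__(self):
        self._queue = deque()  # Redis/MySQL would back this in the real plugin

    def track(self, request: dict) -> None:
        # The HTTP handler only serializes and enqueues -- the cheap part
        # that keeps the tracking response down to a few milliseconds.
        self._queue.append(json.dumps(request))

    def process_batch(self, batch_size: int = 100) -> int:
        # A separate worker (e.g. a cron job) drains the queue and does the
        # expensive database writes outside the request path.
        processed = 0
        while self._queue and processed < batch_size:
            record = json.loads(self._queue.popleft())
            # ... insert `record` into the log tables here ...
            processed += 1
        return processed

q = TrackingQueue()
for i in range(5):
    q.track({"url": f"https://example.org/page/{i}", "visitor": "abc"})
print(q.process_batch())  # 5
```

The point of the split is that the visitor's browser only waits for the enqueue, never for the database write.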
This is really the chance for a great open source project. So many open source apps (Signal, Mastodon clients, riot.im, telegram-foss and more) have to resort to hacks to deliver push notifications without GCM/FCM.
What if there were an open source server one could set up that would provide push services for all these apps and interact with a single open source client running on Android?
That way one would have the energy-saving benefits of only handling one server connection, but the privacy benefits of a private server for oneself/friends/people one trusts.
This is definitely not easy and would require coordination between many open source projects, plus additions to Android to run at the system level (maybe in LineageOS and similar), but I really think it would be worth it.
One of the keys to a low-power push system is the ability to manage the radio. That is obviously much easier if you manage a single connection: you know when it is possible to drop it and when to send the heartbeat.
Once multiple processes start doing this without cooperating, your battery is going to be shot.
Apps also should never send anything private over push notifications. The push should be only a ping telling the app to check its event source.
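A minimal sketch of that idea: one daemon holds the single push connection and hands content-free pings to registered apps, which then fetch events from their own servers. All names here are hypothetical, invented for illustration:

```python
from collections import defaultdict

class PushMux:
    """One connection to the push server; apps only ever receive a ping."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def register(self, app_id: str, callback) -> None:
        # Each app registers a wake-up callback instead of holding its own
        # socket (and its own heartbeat timer draining the battery).
        self._handlers[app_id].append(callback)

    def on_message(self, app_id: str) -> None:
        # Note: no payload is passed through. The ping carries no data;
        # the woken app is expected to poll its own event source.
        for callback in self._handlers[app_id]:
            callback()

woken = []
mux = PushMux()
mux.register("org.example.chat", lambda: woken.append("chat: polling my own server"))
mux.on_message("org.example.chat")
print(woken)
```

Keeping the payload empty means the mux (and whoever runs it) never sees message contents, only the fact that some app has something waiting.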
I wish there were an open standard for push notifications with some sort of mux/proxy support. Let me run a daemon (or use a service provider) that collects all my notifications in a data-center, then send a datagram, sms, or even pocsag message to let me know notifications are available for pickup.
There is an open standard - it's called Web Push (https://www.w3.org/TR/push-api/). It's already used by Firefox and Chrome to implement push notifications for web pages (via service workers). The notifications are proxied via Mozilla's and Google's servers respectively, but there is no reason you could not run your own server.
It's not an open standard, but this was one of the interesting pieces of RIM's software stack. Their push system worked reasonably well, used little power on the handset, and organizations were able to run their own on-site push proxy supporting their own in-house applications.
BlackBerry cracked push like this years ago, though it wasn't open. The phones used to connect over the carrier's network to a private APN (over VPN or a leased line to the carrier), so they always knew your BlackBerry's IP address. If you were out of coverage, they'd store everything up and send it all down once your device came back. No hanging connections; the device was directly exposed to the BlackBerry network.
But you need to know how to spell the word and which grammatical case it is in, or otherwise you end up on the wrong end of the earth, because every homophone also maps to a real location.
I'm not sure I understand their idea of localization.
The center of Vienna is "decays.jump.graver" (whether those are simple words is questionable).
But when I switch to German the same block becomes "fahrende.hügeligen.ansprüche" (driving, hilly, demands). So the words I get by default are again just random strings for someone who doesn't speak German.
And even someone who speaks German may get the inflection of the words wrong and end up in "fahren.hügelig.ansprüche", "fahrend.hügelige.ansprüche" or any other permutation of the many ways the same word could be written in another context.
And worst of all, all those permutations exist and map to other locations somewhere on earth.
You are correct, it's a horrible implementation of an otherwise good idea.
Personally I want a public domain list of 2^k words with sufficiently distinct and unambiguous meanings, translated into many other languages and evaluated against the same criteria (in reality, such a system would have to begin with as many languages as possible). My best guess is that k=10 is possible with a lot of effort.
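Some back-of-the-envelope arithmetic on that guess (my numbers, not the parent's): with k=10 you get 1024 words, so a three-word address encodes 30 bits, and the resulting grid cells come out fairly coarse:

```python
# 2^k words, three words per address => 3k bits of address space.
k = 10
words = 2 ** k                 # 1024 words
addresses = words ** 3         # 2^30 ~= 1.07 billion three-word addresses

# Earth's surface is roughly 510 million km^2; what cell size does that allow?
earth_km2 = 510e6
cell_km2 = earth_km2 / addresses
cell_side_m = (cell_km2 ** 0.5) * 1000

print(addresses)           # 1073741824
print(round(cell_side_m))  # 689 (metres per cell side)
```

So a 1024-word list only gets you to roughly 700 m resolution with three words. A 3 m grid like what3words' (reportedly built on a list of around 40,000 words) needs either a much larger vocabulary or more words per address.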
> And worst of all, all those permutations exist and map to other locations somewhere on earth.
To be honest they did give this some thought, as apparently close-sounding words are supposed to map to places that are far apart.
It makes sense, as you should at least know whether you're looking for a place in Austria or Burkina Faso. But it also means that if you heard the words wrong, you depend on their algorithm to find the slightly different-sounding address you were actually looking for, instead of a more intuitive system where you could just look around nearby, because if it sounded the same it should be close by.
And since the word list is proprietary, you depend entirely on them. Their website shows three suggestions: the one you typed and two close-sounding ones.
Tough luck if you typed "duper.listons.égalisons" and the actual address was "duper.listons.égalisation", you won't get it in the suggestions.
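Their suggestion algorithm is proprietary, but the failure mode is easy to reproduce with a generic string-similarity matcher (this uses the stdlib's difflib; the address list is a hypothetical one I made up around the parent's example):

```python
import difflib

# A tiny hypothetical address list containing the "right" address and
# a few single-character-typo decoys.
addresses = [
    "duper.listons.égalisation",  # the address you were actually told
    "duper.listons.égalisent",
    "dupes.listons.égalisons",
    "durer.listons.égalisons",
]

typed = "duper.listons.égalisons"
suggestions = difflib.get_close_matches(typed, addresses, n=2, cutoff=0.6)
print(suggestions)
```

The single-character typos score higher than the differently inflected real address, so "duper.listons.égalisation" never makes the two-suggestion list, which is exactly the trap described above.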
Hi, I was already expecting a question about neural networks :)
When I had the idea I also toyed around with word-rnn and similar RNN libraries. The results I got were pretty good, but training was extremely resource-consuming. CUDA gave an 8x boost, but training one of the smaller sites still took 20 minutes on my simple graphics card, while building the Markov chain takes 2 minutes.
I also have absolutely no experience with machine learning, and just the setup was already quite an experience. So I stuck with what I know and went "the traditional way".
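For reference, "the traditional way" here means a word-level Markov chain, which fits in a few lines. This is a minimal sketch of the general technique, not the author's actual code:

```python
import random
from collections import defaultdict

def build_chain(text: str, order: int = 1) -> dict:
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])  # duplicates keep the frequencies
    return chain

def generate(chain: dict, length: int = 10, seed=None) -> str:
    rng = random.Random(seed)
    key = rng.choice(list(chain))      # random starting state
    out = list(key)
    for _ in range(length - len(key)):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:             # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, length=8, seed=42))
```

Building the chain is a single pass over the corpus, which is why it finishes in minutes where RNN training takes hours.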
Interesting. Yes, training takes way more time. Take some really big projects, like https://blog.openai.com/unsupervised-sentiment-neuron/ :
"We first trained a multiplicative LSTM with 4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text. Training took one month across four NVIDIA Pascal GPUs, with our model processing 12,500 characters per second."
For smaller ones (and smaller datasets) a few hours of GPU time is considered fast (at ~$1/h it is not much!). If you want to use it online, there is https://neptune.ml/ (no setup, $5 free credit for computing; full disclosure: created by my colleagues).
In any case, I would be excited to see it on some site with code (like SO) or formulae (math or stats). Especially as I am a big fan of StackExchange and analysis (see http://p.migdal.pl/tagoverflow/ :)).
Sounds nice, but my plan was to find out how much data I can handle easily on my plain simple desktop PC.
You may have seen that you can filter by site [1].
Code quite ruined my chains, as it didn't appear only in blocks but rather everywhere, so I went the easy way and filtered out all code blocks.
I didn't filter math, as I couldn't find a proper way to do it, but you can see that it gets quite messy [2].
[1] https://matomo.org/docs/requirements/