> A big impediment is the federal structure of Germany, as high-speed trains need to have at least one stop in every federal state they cross for political reasons, which of course makes them slow.
That's not generally true. For example, the ICE train from Hamburg to Berlin goes through 3 other states (Schleswig-Holstein, Mecklenburg-Vorpommern and Brandenburg) without stops.
There may be local politicians in some states making such demands ("If you want to build your tracks through _my_ state, ..."), but I would assume they are trying to make an impression in an election campaign or something.
In this particular case, it works because there are other long-distance services on the same line that do the extra stops. The EuroCity towards Prague stops in Büchen, Ludwigslust and Wittenberge, which (funnily enough) maps precisely onto the three states that the train passes through.
I'm German and I never heard the term "Kleincousin". According to Wikipedia [0] it seems to be used regionally to refer to 2nd degree cousins (which I would call "Großcousin").
Anyway, "Kleincousin" and "Großcousin" don't imply the age of the cousin, but a degree of relationship (with regional differences to what is actually meant).
To refer to a younger cousin, I would just say "jüngerer Cousin". "Kleiner Cousin" may be possible too (like "kleiner Bruder" for a younger brother), but it sounds a bit like child's talk and it may not be immediately clear to everyone what is meant.
We have had mandatory one-business-day bank transfers in euros in Germany and the EU since 2012. If your bank takes "up to three _work_ days" for regular euro transfers, it is violating the law (§675s BGB).
As of last year, most banks also offer instant (10-second) transfers for a small fee. This is planned to become mandatory at the end of 2021. My bank charges 0.50€ per such transfer, which is significantly less than the median Bitcoin transaction fee (more than $1 on average in 2020, more than $5 currently).
Thanks. Especially that second link seems much more plausible in its analysis: “This is probably a simple check assertion somewhere which is seeing unexpected data passing through a call”.
If you write a file system, every bug can destroy years of data, silently, so “better safe than sorry” should be a poster hanging on every wall. Even a simple directory traversal, upon seeing a “this normally shouldn’t happen” event, should raise a few flags.
Does it? Apple's documentation seems to disagree [1]:
"A weak memory ordering model, like the one in Apple silicon, gives the processor more flexibility to reorder memory instructions and improve performance, but doesn’t add implicit memory barriers."
It's switchable at runtime. Apple silicon can enable total store ordering on a per-thread basis while emulating x86_64, then turn it back off for maximum performance in native code.
> But do you understand people who dare to run executables from 'proper-company' site? It's closed source, you have no idea what you are running, isn't it? As long as it's not free software in terms of FSF there is not guarantee what so ever that it's not harmful or even worse intentionally harmful.
You'll have to trust somebody at some point.
Do you trust the company who designed your Ethernet chip? Do you trust the person who wrote the firmware for it? If not, go and design your own network chip. Otherwise, there's no guarantee it won't spy on you.
You'll also want to write your own compiler that you'll then use to build the operating system you intend to run. You won't just go download some Linux .iso to install, would you? After all, there's no guarantee it's not been manipulated by those who offer it on their website.
No, I do not have to. I can choose to, but I don't have to! That is the core of the issue. If I 'have to', then I won't. I prefer checking and facts, not delusions.
>Do you trust the company who designed your Ethernet chip?
No, I don't, and we shouldn't. I evaluate the chances, and we should track network activity with different hardware, on different chipsets, from different manufacturers.
>Do you trust the person who wrote the firmware for it?
No, I don't and we shouldn't as it's insane to do so.
>If not, go and design your own network chip.
There are other means to overcome this: encryption.
But yes, you're right, we should make an open-source network chip. Agreed. I certainly plan to design one.
>Otherwise, there's no guarantee it won't spy on you.
That is correct, I agree with you.
>You'll also want to write your own compiler.
Yes, that is correct. I want to, and I am writing one right now. There is also the GNU C/C++ compiler (GCC) available: https://gcc.gnu.org
>that you'll then use to build the operating system you intend to run.
This is how you build a proper GNU/Linux system worth some degree of your trust.
>You won't just go download some Linux .iso to install, would you? After all, there's no guarantee it's not been manipulated by those who offer it on their website.
Exactly. Or it can be modified on the way, while you download it. For the latter, you can check the hash sums published by a site that respects user freedom and cares about its own reputation.
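As a sketch of that check, assuming the distributor publishes SHA-256 sums over a trusted channel (the file name and expected digest below are placeholders), Python's standard library is enough:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published on the distribution's website:
# published = "..."  # copied from the (ideally GPG-signed) checksum file
# assert sha256_of("distro.iso") == published, "download corrupted or tampered with"
```

Note that a checksum only protects against corruption or tampering in transit; if the website itself is compromised, the attacker can replace the published sums too, which is why signed checksum files are preferable.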
I have some trouble trusting the FSF since Richard Stallman was removed from his position due to false accusations, for saying (!), just saying, things which he actually didn't say if you read carefully. So much for respecting freedom of speech.
To trust the system, we need trustworthy components with open-source designs, starting from the CPU and every chip installed and ending with every piece of software running. That is the only way!
> We re-estimated mortality rates by dividing the number of deaths on a given day by the number of patients with confirmed COVID-19 infection 14 days before.
This seems seriously flawed to the point of being outright stupid and irresponsible to spread in my opinion.
They basically assume the number of confirmed infections 14 days ago better represents the real number of infections on that date than the number of confirmed infections today. That does not seem right.
1. Infected persons test positive for the virus only after the disease breaks out, so the number of confirmed cases always lags behind the real number of infections.
2. As they mention in the article, but choose to ignore, a large number of infected people are asymptomatic or show only mild symptoms and will usually not be tested. The UK government yesterday assumed the real number of infections to be 10-20x higher than the number of confirmed cases. [1]
So taking the - IMHO still valid - approximation of a 2-3% mortality rate for confirmed COVID-19 cases, and the assumption that real infections are 10-20x the number of confirmed cases, the real mortality rate should be in the ballpark of 0.2%.
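Spelled out, using the midpoints of the two ranges quoted above (these are assumptions, not measured data):

```python
# Midpoints of the ranges quoted above (assumptions, not data):
cfr_confirmed = 0.025      # 2-3% mortality among confirmed cases
undercount_factor = 12.5   # real infections assumed to be 10-20x confirmed cases

# If only ~1 in 12.5 infections is ever confirmed, the fatality rate over
# all infections is the confirmed-case rate divided by the undercount factor.
ifr = cfr_confirmed / undercount_factor
print(f"{ifr:.1%}")  # 0.2%
```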
> They basically assume the number of confirmed infections 14 days ago better represents the real number of infections on that date than the number of confirmed infections today.
No, they assume it better represents the real number of infections present long enough that someone would have died of them if they were going to. Which may also be a problematic assumption, but is a very different assumption from what you describe.
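The difference between the two estimators can be sketched like this (the 14-day lag and the daily counts are illustrative assumptions, not the paper's data):

```python
LAG = 14  # assumed typical delay from case confirmation to death

def naive_cfr(deaths, confirmed, day):
    """Cumulative deaths divided by cases confirmed up to the same day.
    Biased low mid-epidemic: recent cases haven't had time to resolve."""
    return deaths[day] / confirmed[day]

def lagged_cfr(deaths, confirmed, day):
    """Cumulative deaths divided by cases confirmed LAG days earlier,
    i.e. only cases that have had long enough to resolve."""
    return deaths[day] / confirmed[day - LAG]
```

With case counts growing exponentially, the naive estimate comes out far below the lagged one, which is exactly the bias the re-estimation tries to correct; whether 14 days and confirmed counts are the right inputs is a separate question.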
“a large number of infected people are asymptomatic or show only mild symptoms and will usually not be tested”
No.
WHO, China, Italy and South Korea have all looked and the hypothesised majority of asymptomatic cases doesn’t exist.
China did 320k background tests in one province and had a 0.5% positive rate. There’s some asymptomatic cases, but no evidence suggests there are massive numbers.
These 320k tests were performed in the Guangdong province which has a population of 113 million and ~10,000 confirmed COVID-19 cases, or about 0.009% of the population.
If they found a 0.5% positive rate on background tests there, that would suggest the real number of cases to be hundreds of thousands in that province alone.
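Rough arithmetic, taking the figures in the two comments above at face value (i.e. assuming the 320k tests were a representative background sample, which is disputed below):

```python
population = 113_000_000   # Guangdong province
confirmed = 10_000         # confirmed cases there (~0.009% of the population)
positive_rate = 0.005      # 0.5% positive rate in the 320k background tests

# If 0.5% of a truly random sample were infected, scaling to the province:
implied_infections = population * positive_rate
print(f"{implied_infections:,.0f}")  # 565,000 - hundreds of thousands
```

The conclusion only holds if the tested group resembles the general population; if the sample was self-selected (e.g. people with fevers), the extrapolation breaks down.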
That article suggests that the 320,000 tests were not a background test, but they tested "worried people [who] flooded fever clinics to be tested". Actual numbers across all population would hopefully be much less than 0.5%.
The article also states "The claim [that there's not huge transmission beyond what you can see clinically] was quickly challenged by an infectious diseases expert who serves on a committee that advises the WHO's health emergencies program.".
It is stupid. Instantaneous CFR is rarely knowable because of the problems of separating the infection data into groups of those who were infected simultaneously and tracking their ultimate resolution.
(Haiku developer here.) Some of those P=high should really be blockers (and the one blocker currently there is a trivial task essentially there as a reminder to do it while making driver ABI changes.) I should do that, actually.
IMHO, Beta 2 should be held back until _at least_ USB Wifi devices are supported. I can't get most of my laptops to work, and the tried-and-true trusted Wifi USB dongles I use on various ARM projects just don't work because Haiku doesn't support _any_ USB Wifi.
As an open source developer, I have mixed feelings about this.
Yes, Microsoft seems very keen on keeping Windows compatible even with ancient versions of the OS. New stuff usually is optional and APIs that behaved strangely in Windows 95 still behave the same way in Windows 10.
In Apple land, APIs may change their behavior whenever Apple deems it necessary. I ran into issues because of this with almost every macOS update since 10.8. And I see that even big players like Adobe keep running into compatibility issues all the time.
On the other hand, I'm spending just a few hours per week working on my project [0] and I manage to support an app that now runs on 10.5 through 10.14 and on three different CPU architectures with a single package. So no, I don't think you need to "throw 100 programmers at it" to get a working macOS version.
Not all applications are the same. Your application may not have the same requirements from the OS as some other application. As an (extreme) example, an application written in C that only needs a top-level window and basic input events from the host OS, while doing everything else with custom code, will be much more portable and easier to keep up with OS changes than an application written in a language with its own runtime that creates a native UI for each platform, uses native APIs and tries to integrate with the native OS.
Yes, that's my own custom UI framework. Most of the platform dependent stuff is implemented in the smooth library.
In hindsight, it would have been easier to just use Qt, GTK or wxWidgets. But I learned a lot by doing this myself and wouldn't want to miss that experience.
I think it's more of a challenge when you have someone who is more skilled as a subject matter expert than as a programmer, which may be the case when you're the tech person or even founder of a small business.
That person may not be doing good unit testing, they might not use the best tools, and they may find that supporting different configurations requires a lot more manual work rather than maintaining some carefully crafted #ifdefs.
And maybe one person could do it, but one salary they can't justify based on the demand may as well be 100 programmers.