7. Uncertainty seems overlooked these days. The job of politicians is to make people and businesses dare: dare to get an expensive education, start a business, hire your first employee, or whatever it might be. What that requires will vary (a social security system, a tax break for new companies, and so on). But one thing it always requires is trust in stability: that the calculus for an investment stays valid over N years, and that laws and taxes don't swing wildly with political cycles.
That has been the bane of Brazil for decades: every politician, at every level, undoes or halts whatever the previous politician was doing, so there is absolutely no guarantee that what you're doing today will still work tomorrow.
It's a terrible state to be in, and a situation where investing in a business doesn't benefit anyone. My hometown had a large cultural center built by the mayor; he couldn't run for reelection, a new mayor was elected, and he completely ignored the whole thing that had been built and let it rot. Everything is done for a single election cycle, and the next cycle could bring something else entirely.
It's terrible to live in a place like this. Americans have no idea how bad this is going to be for the country.
This is hardly surprising given the attack on knowledge by populists in recent years. If you don't subscribe to the populist view of science, knowledge and experts, you aren't going to trust the people you need to trust (experts such as doctors, economists and scientists). Likewise, if I entered a doctor's office and saw or heard signs that they themselves subscribe to those beliefs (e.g. are anti-vaccine), it would be pretty natural to distrust that doctor. So the bottom line is that this works both ways, but one way is correct and the other isn't.
Always quote all YAML strings. If you have a YAML file containing something that isn't a simple value (number, boolean), such as a date, time, IP address, MAC address, country code, phone number, server name, configuration name, etc., then leaving it unquoted is asking for trouble. Just DON'T DO THAT. It's pretty simple.
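For illustration, a minimal sketch of what unquoted scalars can silently turn into under a YAML 1.1-style parser (e.g. PyYAML's default loader); the keys are made up, and YAML 1.2 parsers treat some of these differently:

```yaml
# Unquoted plain scalars get retyped by the parser:
unquoted:
  country: NO       # becomes the boolean false (the classic "Norway problem")
  version: 1.10     # becomes the float 1.1
  start: 8:30       # becomes the integer 510 (sexagesimal)

# Quoted scalars stay exactly the strings you wrote:
quoted:
  country: "NO"
  version: "1.10"
  start: "8:30"
```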
"Yeah but it's so convenient"
"Yeah but the benefit of yaml is that you don't need quotes everywhere so that it's more human readable"
`logging: no` — that `no` could just as well mean "log in Norwegian", or "log only for the Norwegian region". That's the thing with too many keywords and optional quoting: you can't know.
And for this reason, "logging: false" would be clearer than "logging: no" to represent "I do not want logging".
`false` could be a code for something else just as well as `no`. For example, it could mean that I only want to see logs of false information appearing in the system. The only proper solution is to require quotes around strings.
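To make the ambiguity concrete, here are alternative spellings of a hypothetical `logging` setting (as separate YAML documents), again assuming a YAML 1.1-style parser; only the quoted forms are unambiguous:

```yaml
logging: no        # the parser reads this as the boolean false
---
logging: false     # a boolean, though the app could still treat it as a code
---
logging: "no"      # unambiguously the two-letter string "no" (e.g. the Norwegian language code)
---
logging: "false"   # unambiguously the string "false"
```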
Any proper experiment with UBI requires 1) doing it for a whole economy. You can't have 10 or 100 people on UBI where the job market, prices, etc. aren't affected by UBI. And 2) doing it for life. The effects on my behavior if I'm given UBI for 1, 3 or 5 years are going to be so small as to be uninteresting. The surprising results will just be artifacts of the experiment.
The .NET 4.X runtimes baked into Win10/11 are OS components, just like Win32 is an OS component.
That's first of all brilliant if you want to ship apps that run on those systems, because you don't need to ship the runtime.
But it's painful when you want to write server software. Because you don't want your deployment story to be dependent on the server having some particular system level software (or it being windows).
So it's completely inevitable that 4.X will never die. Win32 won't die either.
And making the shift to a completely separate runtime that runs on Linux and isn't an OS component (but in return requires deployment) was only natural. It's not completely unthinkable that the .NET Runtime will be shipped as part of Windows, but even that wouldn't mean that they could stop shipping the .NET Framework 4.X runtimes.
I just ported a 20 year old 200+ man year desktop app from 4.8 to 8.0 and it was pretty smooth. Even things like old binary-only proprietary winforms controls actually work, which was a huge surprise.
If you are making a web app or any kind of server side app, then the migration is usually straightforward. For client side software it's not as easy, and in some cases it's also not clear whether it's a win. If you are shipping some small Windows-only utility it's probably better to ship on the "old" runtime because your deployment will be smaller when you can use the OS runtime.
ChatGPT takes 5-10 seconds to respond. Until it's as fast as Google, I'm not switching.
The question is: are you searching for an answer to something, or are you searching for a site/article/journal/whatever in order to consume the actual content? If you are searching for a page/article/journal in order to find an answer, then the page itself was just a detour, provided the LLM could give you the answer and you could trust it. But if you were looking for the page/article itself, not some piece of information IN the article, then ChatGPT can (at best) give you the same URL Google did, only 100x slower.
Re: panics: If you have a single long lived process that must do multiple short-lived things (web requests, say) and a panic in one of them MUST NOT take down the whole process, is that extremely difficult to pull off in Rust? I thought you could set up panic boundaries much like you would use catch-all exception handlers around e.g. each web request or similar, in other languages?
You can install a global panic handler to avoid bringing the whole process down. Instead of aborting, take the stack trace, print it, perhaps report it to Sentry, and kill the specific "work unit" that caused it. This "work unit" can be a thread or a task, depending on how the application is architected.
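A minimal sketch of that pattern in plain Rust, no external crates: the panic hook does the reporting, and `catch_unwind` is the boundary that keeps one work unit's panic from killing the rest. The job loop and messages are made up for illustration.

```rust
use std::panic;

fn main() {
    // Global panic hook: runs before unwinding, so every panic gets reported
    // (print it, or forward it to something like Sentry) wherever it happens.
    panic::set_hook(Box::new(|info| {
        eprintln!("work unit panicked: {info}");
    }));

    for job_id in 0..3 {
        // catch_unwind is the "panic boundary": a panic only kills this work unit.
        let result = panic::catch_unwind(|| {
            if job_id == 1 {
                panic!("bad input in job {job_id}");
            }
            println!("job {job_id} finished");
        });
        if result.is_err() {
            println!("job {job_id} failed; carrying on with the rest");
        }
    }
}
```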
This is precisely what Tokio does: by default, a panic in async code will only bring down the task that panicked instead of the whole application. In the context of a server, where you'll spawn a task for each request, you have no way to bring down the whole application (*), only your current scope.
(*): there could be other issues, like mutex poisoning, which is why nobody uses the stdlib's mutexes. But the general point still stands.
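A small sketch of that behavior (assuming the `tokio` crate, not part of the snippet above): a panic inside a spawned task comes back as a `JoinError` on its handle instead of aborting the process.

```rust
// Cargo.toml would need something like: tokio = { version = "1", features = ["full"] }
#[tokio::main]
async fn main() {
    // Stand-in for a per-request task.
    let handle = tokio::spawn(async {
        panic!("this request handler blew up");
    });

    // The panic hook still prints the panic message, but the process survives.
    match handle.await {
        Ok(()) => println!("task finished normally"),
        // is_panic() distinguishes a panicked task from a cancelled one.
        Err(e) if e.is_panic() => println!("task panicked, but the server is still alive"),
        Err(e) => println!("task was cancelled: {e}"),
    }

    println!("the rest of the application keeps running");
}
```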
In the context of Tokio, use tokio's native mutexes / locking primitives. For sync code, parking_lot is the de facto replacement for the stdlib's ones.
I don't remember where I read it, but it has been admitted that having synchronization primitives with poisoning in the stdlib was a mistake, and that "simpler" ones without it would have been better.
For context: a mutex is poisoned if a panic occurs while the mutex is held. The guarded data is then assumed to be broken or in an unknown state, thus "poisoned".
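For illustration, a short sketch of poisoning with the stdlib `Mutex`; the data and messages are placeholders:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));

    // A thread panics while holding the lock...
    let data2 = Arc::clone(&data);
    let _ = thread::spawn(move || {
        let mut guard = data2.lock().unwrap();
        guard.push(4);
        panic!("something went wrong mid-update");
    })
    .join();

    // ...so the mutex is now poisoned: lock() returns Err(PoisonError).
    match data.lock() {
        Ok(guard) => println!("data: {:?}", *guard),
        Err(poisoned) => {
            // You can still recover the (possibly half-updated) data explicitly.
            println!("mutex poisoned, data was: {:?}", *poisoned.into_inner());
        }
    }
}
```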
Since Rust is not a managed/high-level language, panics are unrecoverable crashes, so they need to be dealt with at a higher level, i.e. the OS, with appropriate supervisor systems like systemd, or by having a master Rust process that spawns subprocesses and reacts when one of them terminates abnormally, using regular POSIX APIs.
On a platform like Elixir, for example, you can deal with process crashes because everything runs on top of a VM, which is to all intents and purposes your OS, and which provides process-supervision APIs.
Rust can optionally be compiled in panic=abort mode, but by default panics are recoverable. From an implementation perspective, Rust panics are almost identical to C++ exceptions.
In very early pre-1.0 prototypes Rust was meant to have isolated tasks that are killed on panic. As Rust became more low-level, it turned into terminating a whole OS thread on panic, and since Rust 1.9.0, it's basically just a try/catch with usage guidelines.
But few would write a process-per-request web server today, for example. And if a single-process web server handles 100 requests, you would then be accepting that one bad request tore down the handling of the 99 others. Even if you have a watchdog that restarts the service after that one request choked, you wouldn't save the 99 requests that were in flight in the same process.
Can't you catch_unwind around each request handler? If one chokes, you just ignore that request. If you worry about that messing anything up, you can tear down and restart your process afterwards, so the 99 other requests get a chance to complete.
This is factually incorrect. The behavior you describe with Elixir (sic) is precisely what most Rust async runtimes do. (sic because it's Erlang that's to thank)
IMHO that is the sensible thing to do for pretty much any green-thread or highly concurrent application. Go's net/http server does the same, for example: a panic in a request handler only takes down that handler's goroutine, not the whole process.
Everyone can produce _something_ they have written. Yes, there are people who literally clock in at work and code 8-5 for 10 years and have never touched a hobby project or contributed to an OSS project. And you might not want to filter that group out completely. But if I were in that group and considering switching jobs, I'd definitely make sure I had some of that proprietary code stashed away so I could show a potential future employer. Yes, you won't be allowed to do that. And it's understandable that in some cases (like working as a defense contractor) it's completely impossible. But for most people it should be possible to show something.
As umpteen others have said: drop the timer and add drag & drop, and it's a winner. The problem (compared to Wordle), I think, is that it requires manual work to create the categories.
If you want to rank the results I think number of guesses is better than shortest time.
But it also feels like one of those things where ChatGPT could help you create the game data very easily. It excels at things like "give me 1000 categories of 6-letter words, grouped with 5 in each category". That would have been a chore, but now it's just easy. Not sure if that's what you did here already.