> This is a complete fabrication and lie. As bad as it gets. Iran never said it is going to wipe another country. What they said is explained clearly in detail here [1].
There are more than a million Israeli Arabs who enjoy full citizenship rights. In fact, they enjoy more rights than the average Israeli, being able to enter the West Bank freely.
> In fact, they enjoy more rights than the average Israeli, being able to enter the West Bank freely.
I wouldn't go that far. Yes, Israeli Arabs technically have full rights. Yes, technically speaking they might be able to go to places non-Arabs "can't".
But that's being really "technically correct". I think for day-to-day living, Israeli Arabs effectively have a worse time living in Israel than Israeli Jews (e.g. there is unfortunately at least some racism, and from what I know there are certainly differences in funding between Arab schools and Jewish schools, etc.).
I usually hear this what-aboutism argument from people who benefit from the current undemocratic regime.
If another country did or does something wrong, that doesn't justify Iran's actions. Everyone is responsible for their own actions, and the reality is that the current Iranian regime has killed many thousands of its own citizens.
Iran shooting the plane down and denying it for three days is not justifiable by any means.
State machines are a great way to control defects by producing states which are only able to mutate in certain specific, pre-defined ways.
The de facto example is usually a traffic light. Green is permitted to transition to yellow, but never to red; a state machine makes a bug of that form impossible.
Obviously, it's nominally used for more complex stuff, but in general, all of your appliances are state machines. Your microwave especially.
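As a minimal sketch of the traffic-light idea (the class names and transition table below are illustrative, not from the comment): transitions are whitelisted, so an illegal change fails loudly instead of becoming a silent bug.

```python
# A tiny traffic-light state machine: only whitelisted transitions are allowed.
from enum import Enum, auto


class Light(Enum):
    GREEN = auto()
    YELLOW = auto()
    RED = auto()


# The only transitions the machine permits.
NEXT = {
    Light.GREEN: Light.YELLOW,
    Light.YELLOW: Light.RED,
    Light.RED: Light.GREEN,
}


class TrafficLight:
    def __init__(self) -> None:
        self.state = Light.RED

    def advance(self, target: Light) -> None:
        # Reject anything that is not the single permitted successor state.
        if NEXT[self.state] is not target:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target


light = TrafficLight()
light.advance(Light.GREEN)    # red -> green: allowed
light.advance(Light.YELLOW)   # green -> yellow: allowed
# light.advance(Light.GREEN)  # yellow -> green would raise ValueError
```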
ROS stands for Robot Operating System. It is a framework of different tools, such as an abstraction layer, libraries, and a package manager, that makes it easier to program and interface with different robotic systems and components. It basically simplifies a lot of work that you would otherwise be doing on your own. While it is extremely useful, it also has some flaws: it is bloated, can be tricky to configure, and can sometimes be difficult to make do what you want it to.
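For anyone who hasn't touched it, a minimal ROS 1 node in Python looks roughly like the standard rospy publisher tutorial below; the topic name, message, and rate are arbitrary choices, not anything from the comment above.

```python
import rospy
from std_msgs.msg import String


def talker():
    # Advertise a topic and publish a message once per second until shutdown.
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rospy.init_node("talker", anonymous=True)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello from ROS"))
        rate.sleep()


if __name__ == "__main__":
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```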
It's an absolute mess of software to get running, and if you're not paying attention you'll install 15+ GB of packages you won't need.
I've helped some friends with C++ assignments during college, and with ROS they had to have dozens of terminal windows open, each running some command, just to get things working.
It's a shame that ROS is getting this sort of reputation. It has its merits, and is used by most robotics applications I have seen.
It is akin to the Python ecosystem, where there is A LOT of good software and packages out there, but there is also A LOT of garbage.
The community is nice and supportive, and the framework has really matured recently, with ROS1 being "complete" and ROS2 getting off the ground with a lot of mature features like DDS.
This might be a dumb question, but why is it way lighter than the US or anywhere else in the world? Why do they need to install so many towers? Is there anything specific about Europe's geography that requires European countries to install that many towers?
CDMA gets longer range than 3G/4G, so it's usable in a sparse network as long as it's not trying to serve too many people at the same time. I'm guessing the US launched lots of CDMA infrastructure back when it wasn't considered obsolete by most of the world, and hasn't upgraded its rural cellular networks.
Australia shut down all its "legacy" CDMA stuff over a decade ago, which leaves the large dark patches that cover much of the continent very under-served.
You can serve only so many phones and handle only so much bandwidth from a single cell tower. So they reduce cell sizes and install more towers to keep devices per cell down.
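As a back-of-the-envelope sketch of that capacity argument (every number below is an assumption, not data from the thread):

```python
# Rough illustration only: how per-tower capacity forces cells to shrink.
devices_per_tower = 1_000    # assumed concurrent-device capacity of one cell
devices_per_km2 = 10_000     # assumed device density in a dense urban area
max_cell_area_km2 = devices_per_tower / devices_per_km2
print(max_cell_area_km2)     # 0.1 km^2 per cell -> many towers per city
```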
Evaluating the accuracy of a model is an unsolved problem with both formal and informal approaches.
Formally: we could ask multiple separate model/proof assistants to generate separate models from the same underlying specification, and then attempt to find discrepancies between their predicted results. This really just punts the responsibility: now we're relying on the accuracy of the abstract specification, rather than the model(s) automatically or manually produced from it. It's also not sound; it just allows us to feel more confident.
Informally: we can have a lot of different people look at the model very closely, and produce test vectors for the model based on human predictions of behavior.
You could generate a set of random vectors that span the entire input space, exercise the system with those vectors, and publish some sort of "accuracy" (e.g. generate random vectors from i.i.d. uniform random variables over the input space, evaluate f(input), and feed the successes into a hierarchical binomial distribution). Remember, though, that most of the time we verify programs by building a model precisely to capture _edge cases_; after all, the "happy path" of the program is simple to test. Edge cases are, by their nature, rare occurrences. As a trivial example, take the boolean function f(x, y) = x & y. f evaluates to 0 for every value of (x, y) except (1, 1). If we were to create a model of this function, f_model = 0, f_model would appear to agree with f 75% of the time. With a sufficiently large input state space, it would be quite feasible to hide essential edge cases in very small tail probabilities, so a badly wrong model could still report an apparent accuracy above, say, 99.5%.
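Here is a quick sketch of that boolean example, estimating the agreement rate with random vectors (the trial count is arbitrary):

```python
import random


def f(x, y):
    return x & y


def f_model(x, y):
    return 0  # a "model" that misses the (1, 1) edge case entirely


# Exercise both with random input vectors and report the agreement rate.
trials = 100_000
agree = 0
for _ in range(trials):
    x, y = random.randint(0, 1), random.randint(0, 1)
    agree += f(x, y) == f_model(x, y)

print(agree / trials)  # ~0.75: f_model only disagrees on (1, 1)
```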
I’m just starting to investigate using formal analysis in practice, and I think a much more practical topic is symbolic execution frameworks, which can analyze programs and feed them to SMT solvers like Z3.
The most useful and accessible symbolic execution package I’ve found so far is KLEE https://klee.github.io/
If anyone else has recommendations for tools or beginner material I’m all ears!
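Not a recommendation from the original thread, but as a taste of the solver side: Z3 ships Python bindings (the z3-solver package), and a tiny constraint problem looks like this (the constraints are made up purely for illustration):

```python
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
# Toy constraints, standing in for whatever a real analysis would extract.
s.add(x + 2 * y == 7, x > 0, y > 0)

if s.check() == sat:
    print(s.model())  # one satisfying assignment, e.g. [y = 1, x = 5]
```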
Nielson & Nielson wrote the standard textbook on static program analysis that is used everywhere, but it's quite unfriendly, as it's written using abstract algebra. They've recently released two textbooks that are much gentler. Actually, I'd say they are easygoing and fun while still retaining all the mathematical rigor.
They use program graphs, which are a bit less general but a lot easier to digest. They cover all the major techniques, including theorem proving, static analysis, model checking, abstract interpretation, type and effect systems, etc.
There's also a companion website with some F# code. The second book, which still seems unfinished, discusses how to implement program analyses using Datalog. This speeds up development quite a lot; otherwise, developing your own static analyzer is a lot of work.
My dream is to implement some kind of framework that enables quick DSL creation along with lightweight formal methods support to verify programs written in each DSL. I think restricted semantics is the key to make formal methods practical. Quoting Alan Perlis, "Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy."
>My dream is to implement some kind of framework that enables quick DSL creation along with lightweight formal methods support to verify programs written in each DSL.
There's work in this area using monads. Specifically, Darais (from Galois) et al. show in "Abstracting Definitional Interpreters" how, given a definitional interpreter, you can easily create all sorts of abstractions using a stack of monad transformers. The best part of it all is that your particular chosen stack remains valid when moved between interpreters of different languages.
Your dream of varied static analysis can be achieved using monad transformers, definitional interpreters written in the required style, and Racket's DSL-creation system.
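As a very rough illustration of the definitional-interpreter idea only (in Python rather than the Racket/Haskell setting of the paper, and without the monad-transformer machinery): one interpreter body is parameterised by its value domain, so a concrete run and a crude sign analysis share the same code.

```python
from dataclasses import dataclass


@dataclass
class Num:          # integer literal
    n: int


@dataclass
class Add:          # left + right
    left: object
    right: object


def interp(expr, domain):
    """One interpreter body, parameterised by the value domain."""
    if isinstance(expr, Num):
        return domain.lit(expr.n)
    if isinstance(expr, Add):
        return domain.add(interp(expr.left, domain), interp(expr.right, domain))
    raise TypeError(f"unknown expression: {expr!r}")


class Concrete:     # ordinary evaluation over Python ints
    lit = staticmethod(lambda n: n)
    add = staticmethod(lambda a, b: a + b)


class Sign:         # a tiny abstract domain: '+', '-', '0', or '?' (unknown)
    lit = staticmethod(lambda n: "+" if n > 0 else "-" if n < 0 else "0")

    @staticmethod
    def add(a, b):
        return a if a == b else "?"  # crude join: equal signs stay, else unknown


expr = Add(Num(2), Num(3))
print(interp(expr, Concrete))  # 5
print(interp(expr, Sign))      # '+'
```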
Another option: the "reversing" challenges in infosec Capture The Flag competitions regularly require you to work out which constraints a program has implemented and then plug them into Z3 or Angr to find a solution. That might be a fun way to learn.
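A typical Angr workflow for such a challenge looks roughly like the sketch below; the binary name and the find/avoid addresses are placeholders, not from any particular CTF.

```python
import angr

# Load the target binary (name is a placeholder).
proj = angr.Project("./crackme", auto_load_libs=False)
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Symbolically explore for a path reaching the "success" address while
# avoiding the "failure" one (both addresses are placeholders).
simgr.explore(find=0x401234, avoid=0x401250)

if simgr.found:
    # stdin bytes that steer execution to the success branch
    print(simgr.found[0].posix.dumps(0))
```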