I'm still using Bootstrap and jQuery for my side projects.
It helps lower the amount of technology I use in my stack.
In my experience, maintaining frontend technology is not as simple as maintaining the backend.
I've come to the conclusion that data is the most important thing, so I concentrate mostly on getting the database right and then building out with a backend rendering framework. The presentation isn't as important in the initial phases, and Bootstrap is good enough.
I was at my org's conference, and the guy who heads the AI team was stating how deep learning is going to change the world, and that it'll be writing software in a few years (he's from the applied math field). He also glossed over many ongoing problems with deep learning.
We're in the healthcare industry, and this guy is pushing a Theranos-like level of snake oil.
People who don't know a lick about ML or statistics (or both) rely on these people. The doctors are relying on that dude and me for these inputs, and the dude is selling Theranos-like stuff. I think the problem is that medicine, CS, and statistics are all highly specialized fields, requiring years of training to acquire mastery and knowledge. So it's rare for someone to have all three and be able to form an impartial view or give a good overview of the pros and cons.
Yeah, it's because the Erlang VM has a preemptive scheduler: each process has its own mailbox and gets a designated amount of time on the CPU.
You can write an infinite loop (the equivalent of for(;;)) and it won't hog the CPU, because it gets preempted. The problem is when you have a large amount of problematic mail in the mailbox....
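For illustration, here's a minimal sketch of that behaviour (module and function names are made up): a process stuck in a tight recursive loop still gets preempted, because BEAM schedules on a reduction budget rather than waiting for the code to yield, so other processes keep running even with a single scheduler (`erl +S 1`).

```erlang
-module(busy_demo).
-export([run/0]).

%% A tight loop with no explicit yield; each call still costs a reduction,
%% so the scheduler preempts it like any other process.
spin() -> spin().

run() ->
    spawn(fun spin/0),                         %% would "hog the CPU" in many runtimes
    Parent = self(),
    spawn(fun() -> Parent ! still_alive end),  %% a second, well-behaved process
    receive
        still_alive -> io:format("other processes still get scheduled~n")
    end.
```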
So having written production Erlang, I have literally never had an issue with large amounts of problematic mails in the mailbox. It's like, yeah, technically it's possible, since there's no forced constraints around what kinds of messages a given process can be sent (which is intentional; after all, the process may enter a later receive statement that knows how to handle those kinds of messages), but it's really easy to do correctly. And you can always program defensively if you want to limit the kinds of messages you respond to by having a wildcard pattern match at the end of a receive.
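A sketch of that defensive wildcard, with made-up message shapes: the trailing clause drains anything the process doesn't recognize, so unknown messages can't accumulate in front of later receives.

```erlang
loop(State) ->
    receive
        {work, Item, From} ->
            From ! {done, Item},
            loop(State);
        stop ->
            ok;
        Other ->
            %% Unexpected message: log it and drop it rather than let it sit.
            error_logger:warning_msg("ignoring unexpected message: ~p~n", [Other]),
            loop(State)
    end.
```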
If you're not running into overfull mailboxes, you're not trying hard enough! :D
Usually, it's some process that's gotten itself into a bad state and isn't making any progress (which happens, we're human), so ideally you have a watchdog on it, to kill it if it makes no progress, or flag it for human intervention, or stop sending it work, because it's not doing any of it.
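One way such a watchdog could look (a rough sketch, not production code; the thresholds and the kill-only policy are assumptions): poll the worker's mailbox length and kill it if the backlog is large and not shrinking.

```erlang
watch(Pid, LastLen) ->
    timer:sleep(5000),
    case erlang:process_info(Pid, message_queue_len) of
        {message_queue_len, Len} when Len > 10000, Len >= LastLen ->
            %% Big backlog and no progress since the last check: kill it so a
            %% supervisor can restart it (or flag it for a human instead).
            exit(Pid, kill);
        {message_queue_len, Len} ->
            watch(Pid, Len);
        undefined ->
            ok  %% worker already exited
    end.
```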
But sometimes, you get into a bad place where you've got much more work coming in than your process can handle, and it just grows and grows. This can get worse if some of the processing involves a send + receive to another process and the pattern doesn't trigger the selective receive optimization, where the current mailbox position is marked at make_ref() and the receive only checks from the marker onwards, instead of the whole thing.
If you miss that optimization, every receive scans the whole mailbox, which takes a long time if the mailbox is large; that tends to put you further behind, which makes the scans take even longer, and so on, until you lose all hope of catching up and eventually run out of memory. That can trigger process death if you've set max_heap_size on the process (although where I was using Erlang we didn't set that), or OS-level death of the BEAM process when it can't get more RAM because an allocation failed or the OOM killer stepped in, or trouble for the whole OS if BEAM sucks in all the memory and the kernel can't muster its OOM killer in time, gets stuck, or OOM-kills the wrong thing.
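The optimization being referred to looks roughly like this (a sketch of the usual request/reply pattern, close to what gen_server:call does): because the receive can only ever match a reference created just before it, BEAM can mark the current end of the mailbox and skip everything that was already queued, instead of scanning the whole thing.

```erlang
call(Server, Request) ->
    Ref = erlang:monitor(process, Server),     %% fresh ref marks the mailbox position
    Server ! {call, self(), Ref, Request},
    receive
        {Ref, Reply} ->
            erlang:demonitor(Ref, [flush]),
            Reply;
        {'DOWN', Ref, process, Server, Reason} ->
            {error, Reason}
    after 5000 ->
        erlang:demonitor(Ref, [flush]),
        {error, timeout}
    end.
```

Separately, the heap limit mentioned above is set with `process_flag(max_heap_size, ...)` or the corresponding `spawn_opt` option.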
Yep; as I mentioned elsewhere, I'm not discounting mailboxes filling faster than you can handle them, but rather the idea that you have 'problematic mails in the mailbox' - I've never had problematic mail in the mailbox. I've certainly seen mailboxes grow because they were receiving messages faster than I was processing them, and I had to rethink my design. But that isn't an issue with the mail going into the mailbox; that's an issue with my not scaling out my receivers to fit the load. As I mentioned elsewhere, that may seem like semantics, but to me it isn't; it means the way the language works makes sense, and the issue is rather that I created a bottleneck with no way to relieve the pressure (as compared to messages that can't be handled and just sit there, causing receives to take longer and longer over time).
Oh, I think I see. I don't think there's such a thing as a problematic mail for BEAM, really; mails are just terms, and BEAM handles terms, no problem. A mail containing a small slice of a very large binary would of course keep the whole referenced binary hanging around, which could be bad, and a large tuple could be bad because if you tried to inspect the message queue through process_info from another process, that would be a big copy.
But I think maybe the original poster just meant lots of bad mail in the mailbox to mean mail that would take a long time to process, because of how the receiving process will handle it.
Or, possibly bad mail meaning (as you suggest, perhaps), mail that won't be matched by any receive, resulting in longer and longer receives when they need to start from the top.
"But I think maybe the original poster just meant lots of bad mail in the mailbox to mean mail that would take a long time to process, because of how the receiving process will handle it."
Yeah; just, if he meant that, it seems like a...weird call out. Since that's not particular to Erlang's messaging model; that's true in any system where you have a synchronization point that is being hit faster than it can execute. Seems weird to call that out as a notable problem, as such.
What's unique to Erlang, and -could- be prevented if you wanted to change the model (by limiting a process to a single receive block, and having a type system prevent sending any messages not covered by that receive block), is the fact that I can send arbitrary messages to a process that will never match on them, and because the mailbox is a queue, that delays every following message from being handled. Hence my focusing on that; yes, that's a potential problem, no, it's not a particularly likely one.
I've also never had the issue where process mailboxes were filling faster than the messages were being consumed. If I were to run into a problem where that was an issue, I would question whether or not Erlang/Elixir was the right tool for the task, but in my experience there's always been a way to spread that load across multiple processes so that work is being done concurrently, and eventually across multiple nodes if the throughput demands keep increasing. If the workload truly does have to be synchronized, I've always had the experience that sending a message to another tool was the right answer - maybe a database query or another service, for example.
Overfull mailboxes can be problematic even with a wildcard `receive` and can happen if some branch of the control flow is slower than expected. We have some places in our code that essentially do manual mailbox handling by merging similar messages and keeping them in the process state until it's ready to actually handle them.
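Roughly what that manual mailbox handling can look like (a sketch with invented message shapes; `apply_update/3` stands in for whatever the real work is): drain everything pending with a zero-timeout receive, merge updates that target the same key, then process the merged set once.

```erlang
drain(Acc) ->
    receive
        {update, Key, Value} ->
            drain(maps:put(Key, Value, Acc))   %% later update for a key wins
    after 0 ->
        Acc                                    %% nothing left to merge right now
    end.

handle_pending(State) ->
    Merged = drain(#{}),
    maps:fold(fun(Key, Value, S) -> apply_update(Key, Value, S) end, State, Merged).
```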
Right, but I wouldn't characterize that as problematic messages...rather problematic handling, or systemic issues with load. I.e., my fix does not change the format of the messages. Semantics, perhaps, but a difference between "this design decision of Erlang's caused me problems" and "this architectural/design decision of mine caused me problems".
It's not the language but more about how the data is stored (data structures/indexes) and what you're going to do with the data.
At least during my time as a developer, I've come across many people who didn't understand this. When asked why they wanted to use Elasticsearch over an RDBMS - was it because they wanted a trie over a B+tree? - they didn't understand. Also, the use cases were almost always relational. Postgres actually has good enough FTS; if anything, Elasticsearch is almost always a complementary database, not a replacement for an RDBMS.
Honestly, folks choose Elasticsearch only in part for a trie over a B+tree. It provides a fair amount of magic with respect to appropriate defaults for dealing with large datasets in a distributed store.
I think a lot of folks would be better off with an RDBMS, but if you barely know SQL and spend most of your time making UIs, you're lucky to have the breadth of know-how to configure Postgres the right way (no offense intended to frontend developers).
Of course, Elasticsearch's magic defaults may come back to bite you later on when you're using it OOTB this way, but it's hard to argue with throwing a ton of data in it and then -POOF- you have highly performant replicated queries, with views written in your "home" programming language, without even really necessarily understanding what your schema was to start with (yikes, but also, I get it).
Yeah... that's why I wait a few months to download any new update from Apple.
With the reports/articles about Big Sur and how it sends data back to Apple, I am moving to a PC.
I am not moving to a PC thinking that Windows doesn't do the same. It's just that the line between Apple and Microsoft is slowly blurring.
The loss of Apple's quality and dev-friendliness can be mitigated by good-enough PC quality and a Linux VM. Apple is more and more tailored toward general consumers, which has led to more of a walled garden and questionable practices.
Also, Bootstrap has had a pattern of dropping support for IE versions, e.g., going from 3 to 4.