Computer-generated random numbers are not truly random, yet they are practically random in most real-world use cases. You can’t easily cheat the RNG in World of Warcraft to get a critical strike every time.
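To make "practically random" concrete, here's a minimal Python sketch (the seed value and game framing are made up): a seeded PRNG is fully deterministic to anyone who knows the seed, but without the seed or internal state the stream is unpredictable in practice.

    import random

    SEED = 123456789  # hypothetical secret server-side seed

    # With the seed, the stream is fully deterministic and reproducible.
    server = random.Random(SEED)
    rolls = [server.random() for _ in range(3)]
    replay = random.Random(SEED)
    assert [replay.random() for _ in range(3)] == rolls

    # Without the seed/state, a 5% crit roll is unpredictable in practice.
    # (Mersenne Twister is not cryptographically secure; a real server would
    # use a CSPRNG like Python's `secrets`, but the determinism point stands.)
    is_crit = server.random() < 0.05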
The output from GPT is generally intelligent and versatile as far as text goes. It may even be capable of handling more multi-modal problems given enough sensors and motors. Perhaps the same idea of "predicting the next move" or "predicting the next idea" still applies.
Who knows, maybe humans are essentially physical creatures that "generate the next thought and generate the next move"?
One of the biggest issues with GPT is its lack of mid-term memory like humans have. Instead, we need a vector store and search, then bolt the results back into its short-term memory (the context window), instead of letting the model handle everything in a more coherent way. Perhaps it could benefit from lightweight fine-tuning techniques like LoRA and hypernetworks in Stable Diffusion. If this issue is resolved, it will get even more practical. Again, the flaw is not about "predicting the next words".
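As an illustration of the "vector store + search, bolted back into the context window" pattern, here's a minimal Python sketch; embed() is a stand-in for a real embedding model, and the memory snippets and prompt format are made up:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stand-in for a real embedding model; a dummy unit vector
        # derived from the text's hash (stable within one process).
        rng = np.random.default_rng(abs(hash(text)) % 2**32)
        v = rng.standard_normal(64)
        return v / np.linalg.norm(v)

    # "Mid-term memory": past snippets stored outside the model.
    memory = ["user prefers concise answers",
              "the project is written in Elixir",
              "the deadline is next Friday"]
    index = np.stack([embed(m) for m in memory])

    def recall(query: str, k: int = 2) -> list[str]:
        scores = index @ embed(query)  # cosine similarity (unit vectors)
        return [memory[i] for i in np.argsort(scores)[::-1][:k]]

    # Retrieved snippets get bolted back into short-term memory (the prompt).
    question = "what language is the project in?"
    prompt = ("Relevant memory:\n" + "\n".join(recall(question))
              + "\n\nUser: " + question)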
I also find Erlang does not play well with modern ops workflows like Docker and Kubernetes.
But as for Erlang, a lot of people use it for building massive in-memory stateful services, with a very fault-tolerant system on top.
I'm interested in both languages, but haven't started playing with Rust yet.
I tried Akka and found it pretty good, but the underlying virtual machine is not as good as the BEAM for this; Scala, though, is a powerful language.
I'm curious about Rust. Could you please share some insight into whether Rust is capable of the same kind of stateful services Erlang handles, or whether it has other advantages in terms of statefulness and fault tolerance?
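For concreteness, the Erlang pattern in question looks roughly like this (sketched in Python rather than Rust or Erlang, purely for illustration; all names are made up): a long-lived process owns its state, touches it only through messages, and a supervisor restarts it when it crashes. The open question is how much of this you get in Rust without a BEAM-style runtime.

    import asyncio

    async def counter(mailbox: asyncio.Queue):
        # A long-lived "process" that owns its state; only messages touch it.
        state = 0
        while True:
            msg, reply = await mailbox.get()
            if msg == "incr":
                state += 1
            elif msg == "get":
                reply.set_result(state)
            elif msg == "crash":  # simulate a fault
                raise RuntimeError("boom")

    async def supervisor(mailbox: asyncio.Queue):
        # Erlang-style "let it crash": restart the child on failure.
        while True:
            try:
                await counter(mailbox)
            except Exception as exc:
                print(f"child died ({exc}), restarting")

    async def main():
        mailbox = asyncio.Queue()
        sup = asyncio.create_task(supervisor(mailbox))
        await mailbox.put(("incr", None))
        await mailbox.put(("crash", None))
        reply = asyncio.get_running_loop().create_future()
        await mailbox.put(("get", reply))
        print(await reply)  # prints 0: in-memory state was lost on restart
        sup.cancel()

    asyncio.run(main())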
“And while val and var were near-doppelgangers, unveränderliche and opportunistisch are now much more easily disambiguated.”
“One more planned change to Skala’s syntax is the elimination of unnecessary spaces between modifiers and definitions. It was always a frustration to Odersky that abstract override lazy val could not be a single word. “It’s a single concept, so why not?” he asks, incredulously. Yet the reason was always that forming compound words simply didn’t work so well in English; but in German, writing implementationsdefiniertüberschreibenfaulunveränderliche is completely natural, so we intend to fully embrace it.”
I feel the same. Why do you use multiple words when you could just use one?
I believe most of those Lisp dialects are useful without any frameworks. You get view templates as s-expressions for free, instead of erb/ejs/eex, and you end up composing everything very easily, much like Plug in Elixir (see the sketch below).
However, the downside of not having a major framework is that the community doesn't share the same context to discuss and build on, which keeps almost every Lisp dialect out of the mainstream.
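To illustrate the s-expression-template point: in Clojure's Hiccup, for example, a view is just nested data like [:ul [:li "write"]], rendered by an ordinary function, so no template engine is needed. A rough Python analogue with nested lists (the render helper here is made up):

    from html import escape

    def render(node) -> str:
        # A node is [tag, optional-attrs-dict, *children]; strings pass through.
        if isinstance(node, str):
            return escape(node)
        tag, *rest = node
        attrs = rest.pop(0) if rest and isinstance(rest[0], dict) else {}
        attr_s = "".join(f' {k}="{escape(v)}"' for k, v in attrs.items())
        children = "".join(render(c) for c in rest)
        return f"<{tag}{attr_s}>{children}</{tag}>"

    # Templates are plain data, so they compose with ordinary functions,
    # much like composing plugs in Elixir.
    def item(text):
        return ["li", text]

    page = ["ul", {"class": "todos"}, *map(item, ["write", "ship"])]
    print(render(page))  # <ul class="todos"><li>write</li><li>ship</li></ul>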