However, my opinion is that it's a PITA that every Flask project has a different structure, unlike a "batteries included" framework like Django, Rails, etc.
Most nontrivial Django websites also have varying structures. At least they did a decade ago when I was working as a consultant specializing in Django. About as much commonality as Flask. Did Django ever figure out a less verbose way of declaring REST APIs with arbitrary serialization?
I agree. Flask is quite nice, as are some of the other things based off it, like FastAPI, and some of the Flask-inspired async frameworks, and the Jinja template engine that Flask uses. All good stuff.
However, the focus has moved to py4web [1] which has many of web2py's strengths (including the DAL), but with a more orthodox architecture at the cost of a little more complexity and a slightly steeper learning curve.
It was stuck for a while in the Python 2.x world with its "always backward compatible" pledge... then it lost attention, I guess. Also, some questionable technical choices. It's now Python 3.5+ compatible, but there's no compelling reason to use it.
Lots of negative comments below which I think are overly critical. I think the stack looks good and I’ve also had no issues with mongo. Will take a look!
Fair warning - tools like SageMaker are good for simple use cases, but SageMaker tends to abstract away a lot of functionality that you might find yourself digging through the framework for. Not to mention, it's easy to rack up a hefty AWS bill.
Helpful. I was thinking today about when it makes sense to fine tune vs use embeddings to feed into the LLM prompt and this helped solidify my understanding.
Except that the article didn't cover that distinction at all. It looked at (manual) prompt engineering vs fine tuning. What you are describing is Retrieval Augmented Generation (RAG): creating embeddings from a knowledge base, doing a similarity search using an embedding of the search query, and then programmatically generating a prompt from the search query and the returned content. IMO, this design pattern should be preferred to fine tuning in the vast majority of use cases. Fine tuning should be used to get the model to perform new tasks; RAG should be used instead to add knowledge.
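If it helps make that concrete, here's a minimal sketch of the RAG flow just described. The bag-of-words "embedding" is a toy stand-in for a real embedding model, and all the names are mine, not the article's:

    import numpy as np

    # Toy stand-in for a real embedding model (a production system would
    # call a sentence-transformer or an embeddings API instead).
    def embed(text: str, dims: int = 64) -> np.ndarray:
        vec = np.zeros(dims)
        for word in text.lower().split():
            vec[hash(word) % dims] += 1.0
        return vec / (np.linalg.norm(vec) or 1.0)

    knowledge_base = [
        "Fine tuning teaches a model new tasks or styles.",
        "RAG injects retrieved documents into the prompt at query time.",
        "Embeddings map text to vectors so similar text lands nearby.",
    ]
    doc_vectors = np.stack([embed(doc) for doc in knowledge_base])

    def build_prompt(query: str, top_k: int = 2) -> str:
        # Similarity search: score each document against the query embedding.
        scores = doc_vectors @ embed(query)
        best = np.argsort(scores)[::-1][:top_k]
        context = "\n".join(knowledge_base[i] for i in best)
        # Programmatically generate the prompt from retrieved content + query.
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    print(build_prompt("When should I fine tune instead of using embeddings?"))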
Realistically this seems like a question that would be difficult to generalize an answer to without measuring it. Intuition is unlikely to yield a better result than actually trying it.
This reminds me of Vancouver Canada. What happens is decent people and families move out of the downtown core, leaving the worst offenders to run amok. It’s sad. I wonder how many of the 200 experts advocating for the continuation of this policy have children.
It shouldn't. Vancouver and Portugal have very different policies.
In Portugal, drug possession and use are illegal. They aren't criminal offences, but they are illegal and carry penalties at special drug courts, which give less protection to defendants.
Also, Vancouver's downtown core hardly had any families, and the things families need simply aren't available downtown.
> Vancouver and Portugal have very different policies.
They shouldn't be all that different. Vancouver (well, British Columbia) has tried to model its decriminalization efforts after Portugal's.
That decriminalization hasn't even been around for more than a handful of months, though, so it seems rather early for Vancouver to also be having doubts. Nobody was expecting things to change overnight; Health Canada allowed until 2026 to prove the model.
Vancouver and BC's opioid crisis gets worse and worse as the policies continue to become more and more liberal. Every year there are more tent cities, more crime, and more overdose deaths.
It's also getting worse just as fast in the rest of Canada, which doesn't share the wet coast's lack of enforcement. Doesn't sound like a causal relationship to me.
So it's mostly just public-key encryption, and it's been a known issue since about 1994. We are still nowhere near building quantum computers that can crack it, so it's not an urgent thing. There has been a lot of research into alternatives, though.
Forward secrecy does not provide any value against cryptography compromise. Quite the opposite: it depends on the security of the cryptography over the long term to ensure old messages stay inaccessible after the key is forgotten.
Forward secrecy addresses this specific attack:
* Someone builds an archive of your encrypted messages, possibly without your knowledge or consent.
* That someone then gets access to your secret key material.
* They can then decrypt their archive.
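As a rough illustration of how forward secrecy counters that attack, here's an ephemeral Diffie-Hellman sketch using the Python `cryptography` package. The keys below exist only for one session and are then thrown away, so a later compromise of long-term keys doesn't unlock the archived ciphertext (the naming and HKDF `info` label are my own):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates a fresh ephemeral key pair for this session only.
    alice_eph = X25519PrivateKey.generate()
    bob_eph = X25519PrivateKey.generate()

    # Both sides derive the same session key from the ephemeral exchange.
    def session_key(my_priv, their_pub):
        shared = my_priv.exchange(their_pub)
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"session").derive(shared)

    k_alice = session_key(alice_eph, bob_eph.public_key())
    k_bob = session_key(bob_eph, alice_eph.public_key())
    assert k_alice == k_bob

    # After the session, both sides delete the ephemeral private keys.
    # An attacker who later steals the *long-term* identity keys still
    # can't recompute this session key from an archive of ciphertext.
    del alice_eph, bob_eph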
The session keys are exchanged by the asymmetric systems that the imagined quantum computer would be able to break, so the attacker gets the session keys directly. For, say, Signal, they only have to break a new key exchange, which doesn't happen all that often; they can just run the hash ratchet after that. Even for TLS, which negotiates a new session key per connection, that connection might last a fair while, and the 10 min can be spread over multiple connections for this proposal. We are hardly talking about a massive increase in difficulty.
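For reference, the hash ratchet mentioned here can be sketched in a few lines. This follows the shape of Signal's symmetric-key ratchet (chain key advanced by HMAC, one message key per step), though the constants and names are mine:

    import hmac, hashlib

    def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
        # Derive this step's message key and the next chain key via HMAC.
        message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        return message_key, next_chain_key

    # Once an attacker learns one exchanged chain key, they can run the
    # ratchet forward and recover every later message key -- which is why
    # breaking a single key exchange goes such a long way.
    ck = b"\x00" * 32  # stand-in for a key produced by the broken exchange
    for i in range(3):
        mk, ck = ratchet_step(ck)
        print(f"message key {i}: {mk.hex()[:16]}...")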
I mean, it depends a little bit on what your threat model is. If it takes a week to break a key, and you have hundreds of thousands of TLS sessions without knowing which is the relevant one, it is definitely something. But yeah, it seems like it would quickly become a minor hurdle once real quantum computers become a thing and presumably follow their own Moore's law.
I agree with you that the statement is overly broad, but the person is referring to asymmetric cryptography in the past tense, which makes me read it as not being about PQC. PQC is indeed the fix for the stated problem, but it must be applied first; until then, we've indeed always known QC is going to be an issue that needs solving.
But it's still bleeding edge. It's been used for experimental purposes, but always in combination with a traditional algorithm (so if it's broken, the traditional algorithm still secures things). It's definitely not trusted yet.
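That "in combination with a traditional algorithm" part usually looks something like this hybrid construction: both shared secrets feed one KDF, so the session stays secure unless both algorithms break. Note the `pq_shared_secret` below is placeholder bytes standing in for the output of a real PQC KEM (e.g. ML-KEM/Kyber), not a real call:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Classical half: a normal X25519 exchange.
    a, b = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    classical_secret = a.exchange(b.public_key())

    # Post-quantum half: placeholder for a PQC KEM's shared secret.
    pq_shared_secret = b"\x42" * 32

    # Hybrid: hash both together, so an attacker must break *both* schemes
    # to recover the session key.
    session_key = hashlib.sha256(classical_secret + pq_shared_secret).digest()
    print(session_key.hex())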
Crypto does not, for a lot of reasons, but the biggest I can think of is that hashing is still one-way, and public keys are hidden (until used, which is why it is important to expose your public key only when spending funds).
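To make the "public keys are hidden until used" point concrete: in Bitcoin-style pay-to-pubkey-hash, an address commits to RIPEMD160(SHA256(pubkey)), and the public key itself only appears on-chain when you spend. A simplified sketch (double SHA-256 here as a stand-in, since RIPEMD-160 support varies by OpenSSL build, and the key bytes are made up):

    import hashlib

    public_key = b"\x02" + b"\x11" * 32  # stand-in for a compressed EC public key

    # Simplified address: a one-way hash of the public key. (Bitcoin proper
    # uses RIPEMD160(SHA256(pubkey)) plus a checksum and base58/bech32.)
    address = hashlib.sha256(hashlib.sha256(public_key).digest()).hexdigest()
    print(address)

    # Until a spend reveals `public_key`, an attacker only ever sees
    # `address`, and hashing stays one-way even against quantum computers
    # (Grover's algorithm gives only a quadratic speedup).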
When there is a viable ECC attack vector, it will not be much effort to migrate to a more mature PQC. Better to wait as long as possible, maybe even have a crypto built on PQC to field test it with money on the line -- a few billion in market cap goes a long way toward incentivizing breaking the crypto involved.
Kind of goes without saying when nobody has built a quantum computer of the type we are talking about. No general purpose error corrected quantum computer has been used to do anything because they don't exist yet.
I don't think that's common knowledge. It's commonly accepted truth in the industry, but particularly when most people think of the military/spies as being secretly X years ahead (pick a number) of what the public knows is possible, the tech sector in general can't be expected to know this. It's good to add this in a thread with a headline that sounds like anyone using ECC keys might have a big problem.
That particular demonstration is interesting, but it's not a general-purpose error-corrected quantum computer. It's a single-purpose quantum computer that simulates a quantum process with fewer gate operations than a classical computer needs to simulate the same process.
That's not true. There is such a thing as a completely general set of quantum gates, which combined with qubit memory, would make for a general quantum computer capable of computing any unitary transformation to a certain accuracy.
Not fully general, no. There is no such thing as a "Turing machine" for quantum computers. There are classes of algorithms like Shor's, Grover's, and quantum annealing, which can be used to solve many instances of related problems.
This is kinda like how matrix diagonalization can be used to solve any problem which is expressible as a linear system of equations, and to some degree any continuous function can be approximated by a linear system, so a BLAS + LAPACK accelerator is a "universal simulation engine."
You could probably build a generic Shor's algorithm quantum computer that is able both to factor integers and to break elliptic curve keys. But the same quantum computer wouldn't be usable for Grover's algorithm to find the preimage of a cryptographic hash. This is what I mean by there not being a "universal quantum computer" in the same way a Turing machine is a universal classical computer. Quantum computers, by their very nature, are ASIC implementations of specific algorithms, even though those algorithms might have some multi-domain applicability.
That really isn't true. If you have the CNOT gate, controlled rotation, and phase shift, you can implement any operation on a set of qubits. If you then have quantum registers, you have a truly universal quantum computer, as you can then use registers to chain operations arbitrarily.
This computer would be able to do Shor, Grover, QAOA, as well as any classical algorithm of course. If you're interested, I can try to describe the proof of universality, it's just a bit of linear algebra. Otherwise, you can look up "Solovay-Kitaev theorem".
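Not a proof, but here's a quick numpy sanity check of the claim: a Hadamard built entirely from rotations (up to an unobservable global phase), composed with CNOT, turns |00> into a Bell state on a two-qubit register:

    import numpy as np

    # The universal set the comment mentions: single-qubit rotations + CNOT.
    def Ry(t):
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

    def Rz(t):
        return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # Hadamard built purely from rotations (the leading i is a global phase).
    H = 1j * Ry(np.pi / 2) @ Rz(np.pi)

    # (H on qubit 0) then CNOT, applied to |00>, yields (|00> + |11>)/sqrt(2).
    ket00 = np.array([1, 0, 0, 0], dtype=complex)
    bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
    print(np.round(bell, 3))  # [0.707+0j 0+0j 0+0j 0.707+0j]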