If you are looking for speed, tacking an event loop onto an interpreted scripting language is like trying to dig a hole with a rake.
It's why the demand for such functionality was almost wholly absent from Python, PHP, etc. for so long. Async I/O is a throughput boon even in a single-core environment, which dates its practicality back to the 90s - and it was found there, in pretty much every graphics stack and in COM. C APIs have used what is fundamentally a promise since the 80s.
But if you are in a situation where you want those kinds of performance characteristics - where you have work to do during blocking operations - you probably don't want to be writing that code in an interpreted language to begin with.
Last week I was optimizing some performance-critical Python, but after considering my options I just ported the whole thing to C++ with Boost and got an ~80x speedup over the whole loop.
There's only a narrow range of problems actually best solved with interpreted async I/O - pretty much only the space where introducing a build system and doing language bindings isn't worth it, but the gains from being non-blocking are. Those problems exist, and it's definitely not a bad thing to have async support in your interpreted language, but it isn't mission critical in the slightest.
Can you please elaborate a bit on how Python's async/await is crappy? Not defending anything, but I'm curious. Is it the language or its implementation? I've found the loop choices and the relationship with the D-Bus loop and other internal libraries cumbersome on Linux, but that's implementation.
In the early days, qb had only the builder API. However, I also wanted to have a table builder based on structs. Currently, the builder has a completely separate implementation from the session. The session is the structural composition of the builder, engine, metadata, etc. You can see more examples in https://qb.readme.io/docs/the-builder
Feedback & contributions about the builder are welcome.
The struct tags are grouped into type, constraints, etc. Moreover, gorm currently can't auto migrate tables with foreign keys. There is an issue I've commented on, but it has been open for a long time.
There are also some features that gorm has and qb doesn't. For instance, you can define relationships in gorm, while in qb you can only define foreign keys using `qb:"constraints:ref(col, table)"`.
Moreover, I am not entirely sure, but I don't think enforcing types is possible in gorm. In qb, consider this struct:
    type User struct {
        id string `qb:"type:uuid; constraints:primary_key"`
    }
qb understands this as a uuid type even though the field is declared as a string. These are the main reasons why I created qb.
The relationship feature is not yet clear in my mind. I'd like to get feedback on relationships.
Have you considered another mechanism apart from field tags for relating fields to the db (for example, a function for marshalling)? It's starting to look like you're creating a DSL stuffed into struct field tags as strings, which gets ugly once it's complex. This is one part of Go I'm really not keen on, for this reason.
In https://github.com/aodin/sol#sol I separated the table schema, which is a function, from the destination / receiver structs. This approach allows you to build tables programmatically and do condensed selections, such as all IDs into a []int. However, this process still has to match database columns to struct field names for most selections.
There is a table API in qb much like yours. See https://github.com/aacanakin/qb/blob/master/table_test.go. The session API converts tags into a table object. I really like your column definitions. Maybe I can convert the definition implementation to be like yours.
I'm also not happy about adding tons of tags to struct fields. Can you provide an example of how to do that?
What I understand is to have a function on the struct, named something like "map", and define the constraints there. I think we can do this in the next milestone of qb.
Can you open an issue about this? The github repo is https://github.com/aacanakin/qb
I just use a function in the model that takes the row from the db and creates the model from it. It avoids tags and reflection at the cost of some verbosity.
At a Go conference, the presenter talking about YouTube's Vitess said:
"If the Vitess servers are down, then YouTube is down. That's how crucial Go is for YouTube."
It's 2019, guys. Please.