Do Re Mi Fa So La Ti Do, So Do ... compile please!
:-D
EDIT:
Seriously, though, what both music and fundamentally sound programming languages have in common is math. Elegantly defined versions of both are beautiful expressions of thought.
Do you have any specific restrictions on what a 'project' means here? I have seen Ada used on an extensive scale in aerospace and defense projects, including a simple realtime OS.
Maybe not as obvious for those without formal education in "database normalization", but it's pretty trivial to convert a tree structure to a flat table structure using foreign key relations. Recursive queries aren't even that difficult in SQLite, so self-referential data can be represented cleanly too, though it's a bit more work to query. IME most applications' "tree structures" aren't self-referential and are better formalized as distinct entities with one-to-one relationships (i.e. a subtree gets its own table).
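For illustration, a minimal sketch of the self-referential case (the `node` table and its columns are made up):

```sql
-- One table for the whole tree: each node points at its parent via a foreign key.
CREATE TABLE node (
    id        INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES node(id),  -- NULL for the root
    name      TEXT NOT NULL
);

-- Walk the subtree under node 1 with a recursive CTE.
WITH RECURSIVE subtree(id, name, depth) AS (
    SELECT id, name, 0 FROM node WHERE id = 1
    UNION ALL
    SELECT n.id, n.name, s.depth + 1
    FROM node n JOIN subtree s ON n.parent_id = s.id
)
SELECT id, name, depth FROM subtree;
```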
There's always the lazy approach of storing JSON blobs in TEXT fields, but I personally shy away from that because you lose out on a huge part of the benefits of using a SQL DB in the first place, most importantly migrations and querying/indexing.
Until just now, I'd been trying to figure out why people think that JSON is necessary in the database. Yes, lots of data is hierarchical, and you just normalize it into tables and move on. The fact that some people don't work this way, and would like to put this data as it stands into a JSON tree, hadn't occurred to me.
What problem does normalization solve? You don't have to parse and walk a tree every time you're looking for data. You would, however, need to rebuild the tree through self joins or other references in some cases, I suppose. It depends on how far you break down your data. I understand that we all see data structures a bit differently, though.
> There's always the lazy approach of storing JSON blobs in TEXT fields, but I personally shy away from that because you lose out on a huge part of the benefits of using a SQL DB in the first place, most importantly migrations and querying/indexing.
SQLite at least provides functions to make the “querying” part of that straightforward: https://sqlite.org/json1.html
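For example (hypothetical table and JSON paths, just to show the shape of it):

```sql
-- A table holding one JSON blob per row.
CREATE TABLE config (
    id   INTEGER PRIMARY KEY,
    body TEXT NOT NULL CHECK (json_valid(body))
);

-- Pull individual fields out of the blob.
SELECT json_extract(body, '$.theme') FROM config WHERE id = 1;

-- An expression index on the extracted value wins back some of the indexing benefit.
CREATE INDEX config_theme_idx ON config (json_extract(body, '$.theme'));
```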
What problem are you trying to solve with this approach? Unless your document is huge and you need the ability to read or update portions of it, it is better to just read and write JSON.
There's a laundry list of benefits that all add up, not one specific killer feature. Some applications really do have very complex configuration needs, but whether embedding a scripting language or a database is the right solution is situation-dependent (for really simple cases I'm more likely to reach for TOML).
An incomplete list of benefits of using SQLite:
- Runtime config changes for free
- Type safety
- Strong migration support
- Incorrect configurations can be unrepresentable, or at least enforced with check constraints (see the sketch after this list)
- Interactable from text-based interfaces, with strong off-the-shelf GUI support
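On the check-constraint point, a rough sketch of the sort of thing I mean (table and column names invented for illustration):

```sql
-- A single-row settings table where invalid values simply can't be stored.
CREATE TABLE settings (
    id              INTEGER PRIMARY KEY CHECK (id = 1),  -- enforce exactly one row
    log_level       TEXT NOT NULL CHECK (log_level IN ('debug', 'info', 'warn', 'error')),
    listen_port     INTEGER NOT NULL CHECK (listen_port BETWEEN 1 AND 65535),
    max_connections INTEGER NOT NULL CHECK (max_connections BETWEEN 1 AND 1024)
);
```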
Type safety as a benefit of SQLite? For me type safety is a negative of SQLite. Being able to store a different type than what the column is declared to store is a bug (not a feature). I also find the lack of DATE and DATETIME/TIMESTAMP types to be less than ideal.
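For anyone who hasn't run into it, a quick illustration of the behaviour I'm complaining about (declared types are affinities, not constraints):

```sql
CREATE TABLE t (n INTEGER, created DATE);

-- Both values are accepted despite not matching the declared types.
INSERT INTO t VALUES ('not a number', 'also not a date');

SELECT typeof(n), typeof(created) FROM t;  -- text | text
```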
Most frameworks can serialize and deserialize JSON from strongly typed classes. For example, Newtonsoft in .NET. The rest isn't worth the effort for most people. Your scenario may be unusual.
I've certainly had some unusual contexts in the past where we had approximately 10,000 configurable properties on the system, but we didn't use SQLite for that. Regardless, you ignored 3 of the 4 other points I made (I'll let the last one slide, since it applies to JSON too). My use cases aren't that weird, and I'm not saying to reach for SQLite every time; it's one option out of many. Migrations and runtime configuration changes alone justify it for me in many cases.
I frequently write software like this (in other domains). Structs exist (in most languages) specifically for the purpose of packaging up state to pass around, so I don't really buy that passing around a huge context is the problem. I'm not opposed to long functions (they can frequently be easier to understand than deep call stacks), especially in languages that are strictly immutable like Clojure or where mutation is tightly controlled like Rust. Otherwise it really helps to break problems down into subproblems with discrete state and success/failure conditions, even though that results in more boilerplate to manage.
Fair point on structs - the context object itself isn't the problem, it's what ends up inside it. When the struct is a clean data container (transaction, amounts, dates, account codes) it works great.
Where I've seen it go sideways is when it accumulates process state: wasTransferDetected, skipVATCalculation, needsManualReview, originalMatchConfidence. Now you have a data object that's also a control flow object, and understanding the code means understanding which flags get set where and what downstream checks them.
Your point about discrete success/failure conditions is well taken though. We moved toward exactly that - each phase either returns its result or an explicit error, and the orchestrator handles the failures instead of stuffing error flags into the context for later. Bit more boilerplate but much easier to reason about.
I suspect that the amount of background legwork for each application is fairly limited. It should be possible to triage the vast majority of applications in a matter of days at most, at least the denials. It's wild that it takes years to do this.
The really long waits aren't processing backlogs, they're quota backlogs: either global (because the total annual cap in a category is, or was recently, lower than the annual number of applicants, so it takes a period of years for quota space to become available globally) or per-country (because in each category, only 7% of the annual quota can go to applicants from any one country, regardless of the distribution of applications).
Though the processing times are also ridiculously and inexcusably long, in most categories.
Considering that there are over 2M immigrants per year and the USCIS staff is about 20K, it's actually pretty quick. If all USCIS did was immigration, that would still be ~100 immigrants per year per employee, or about 2 workdays per immigrant. Even considering that some of those are dependents on the same case, it's still pretty fast. But USCIS also handles all the non-immigrant stuff (CoS, AoS, asylum, etc.), so they don't dedicate their whole workforce to immigration, and the actual caseload is higher than this estimate.
You've clearly never seen someone go for citizenship. It's a relatively involved process with multiple interviews, character reference letters, lots of paperwork, etc.
Getting a green card (or equivalent) is an entirely different thing and is even _more_ broken.
I've known several people who've done it. I wasn't trying to argue that there isn't a lot of manual labor going on. But I'm doubting how much of that labor extends beyond interfacing with the applicant.
Are they interviewing references outside the country? Doing deep background checks that can't be done basically instantly, electronically? That's what I'm talking about. The denial process could probably be made extremely fast, and then the tedious interview part could be focused only on the applicants we otherwise plan to accept.
You're probably right that the background checks aren't that intensive, but every other part of that process is. If needing 2+ interviewers for 15-30 minutes per candidate isn't labor intensive, I don't know what your definition is.
No it's not. It's a 3-step process with only one in-person interview involved. I've helped 2 people go through that process in the last 2 years.
1) Submit an application and fee, along with additional documentation (if any). Then wait for the biometrics appointment notification.
2) Go in on the appointed date for biometrics (fingerprinting, photos). Takes about 30 minutes. No different than an appointment for TSA PreCheck or Global Entry.
3) Go for the naturalization interview. If accepted, the interviewer will usually let the person know that they've been approved for naturalization. They'll receive an email/letter indicating the date, time, and location of the naturalization ceremony/oath.
Of course, depending on the area of the country you live in, the time between the above 3 steps varies, from 90 days to upwards of a year or more. Also, the above is for most people; there could be some complicated cases where a person has to make multiple in-person visits. But regarding interviews, there is only one.
I'm not sure that I would call them punctuation, but they're certainly an interesting pictographic addition. I think they're great, but I too get irritated when they're not used judiciously.
To me, their usage is akin to turning a plaintext file into RTF. Emojis do not look the same across platforms. Generated text should default to the generic, IMO.
Plain text doesn't look the same across platforms for the same reason emojis don't; what's your point? At a technical level, it's no different from a plaintext doc with Chinese (or almost any other non-Latin script) characters in it. It's still just a linear stream of text encoding with no specific structure beyond that.
I'm also in a Midwestern city and see similar things. I once saw a project manager at a Fortune 500 who literally fabricated statistics about an ongoing project I was on to please management.
I've found that not being afraid to say no or opine on things has been very effective in my career.
I already get it with plan9port and it addresses 100% of my issues with make. It integrates nicely with rc so there's really not a lot of additional syntax to remember.
I'll toss $20-50 your way to bump up the priority on writing that knowledge down; the only strings attached are that it has to actually get done and be publicly available.