Laravel with Filament is super productive too. Django was my go-to for 15 years, but the frontend infrastructure, Livewire, and the flexibility of Filament compared to django-admin are hard to beat.
I've used Redshift for many years and really like it, especially as a user. I'm following the development of DuckDB and one aspect that's nice about it is how predictable prices are when you run it on dedicated servers: https://fet.dev/posts/costs-of-analytics-on-dedicated-hardwa... .
If you don't have huge amounts of data I would take a look at it. Also, Cloudflare R2 might be a good alternative to S3 because there are no egress costs.
> I'm following the development of DuckDB and one aspect that's nice about it is how predictable prices are when you run it on dedicated servers:
Are the predictable prices worth the trade-off of more labor to operate the servers? I work for a database vendor and find the economic trade-offs quite fascinating.
Sounds like a DAG-based task orchestrator could be a good fit, where tasks state their dependencies and are allowed to run only once those have all completed.
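A minimal sketch of that idea in Python, using the standard library's `graphlib` (the task names and the graph itself are made up for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task lists the tasks it depends on.
deps = {
    "load": set(),
    "clean": {"load"},
    "train": {"clean"},
    "report": {"clean", "train"},
}

def run_all(graph, run):
    """Run each task only once all of its dependencies have completed."""
    ts = TopologicalSorter(graph)
    ts.prepare()
    while ts.is_active():
        for task in ts.get_ready():  # tasks whose dependencies are all done
            run(task)
            ts.done(task)

order = []
run_all(deps, order.append)  # records one valid execution order
```

Real orchestrators (Airflow, Dagster, and friends) layer scheduling, retries, and parallelism on top, but the dependency-gating core is just this.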
Where I lived until some months ago they usually drive way faster than I do. One memory that stands out is being passed by one driving about 80km/h on a narrow 30km/h road filled with speed bumps, narrowings and other obstacles. It was a miracle the thing stayed on the road.
It was just where all the kids cross the road to school. That was when I changed my mind from "youthful fun" to "ban them".
More than half of all the dangerous situations I have been in involving cars have involved EPA-traktors.
Now whenever one is coming I either walk in the ditch on the side of the road or jump to the other side of it.
Driving from where I lived to the closest school I was usually passed by 2-5 EPA-traktors, despite driving 80km/h.
Sounds like it conveniently makes the worst drivers easily identifiable. Sounds good to me! I doubt making them drive something else would magically make them safer.
At least a driver with a license has been officially trained with safety in mind.
Those EPA drivers are, in practice, teenagers trained by their teenage friends. It’s normal for teenagers to underestimate risks and dangers.
That’s basically why a licence is needed to drive: not to teach you how to control a car (that’s a rather simple skill) but how to behave on a shared road.
Do they not have a license because they'd rather self-learn, or do they not have a license because they don't meet the required age?
While I don't doubt that some people would not give a shit and self-learn and drive the A-tractors without a license anyway, at least lowering the age for a license would mean that those who want to be safer can get it.
You could even attach restrictions to said license that incentivise safer driving over time - a teenage license can be obtained at 15 but comes with some restrictions that get lifted over time (unless caught breaking the rules) and inherently "converts" to a full license by the time you're 18 assuming you haven't lost it beforehand due to unsafe driving.
How about requiring them to get a license to drive a car first? It's not going to fix everything, but at least requiring people to learn about road rules and take an exam before they are allowed to drive would be something that might help a little bit?
In fact, a commit should have a single purpose, otherwise it creates dependencies and, for instance, you cannot revert a feature without also reverting a bug fix and it makes cherry-picking a nightmare.
With normal review cycle times, this would slow things very much.
Optimizing your development practices for being able to revert single commits seems very wrong to me. If you produce so many bugs that this is important, I'd focus on making fewer bugs.
At my current job I can't remember us reverting a commit in the three years I've been here.
We probably work in very different environments, and your views may be the pragmatic ones in your environment.
> With normal review cycle times, this would slow things very much.
I don't think it does. Ultimately you need to review everything so if you ensure that each commit is logically coherent it actually makes reviews easier.
Otherwise, in my experience, things can get quite messy and the history very difficult to follow.
It's also impossible to predict when or if you'll need to revert a commit. If you follow good practices and have discipline then you make your life much easier if things go wrong or you need to cherry-pick something, which is quite common with bug fixes if you have several branches (e.g. one branch per release).
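As a throwaway illustration of that cherry-picking point (the repo contents, commit message, and branch name are invented for the example), a single-purpose fix commit moves to a release branch cleanly and can later be reverted on its own:

```shell
# Scratch repo: a release branch forks off, then a dedicated bug-fix
# commit lands on the main line.
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you

echo "v1" > app.txt
git add app.txt && git commit -qm "initial release"
git branch release/1.x                     # release branch forks here

echo "bug fix" >> app.txt
git commit -qam "fix: handle empty input"  # single-purpose fix commit
fix_sha=$(git rev-parse HEAD)

# Because the commit contains nothing but the fix, it applies to the
# release branch with no feature work dragged along.
git switch -q release/1.x
git cherry-pick "$fix_sha"
```

If the fix had been bundled with a feature, the cherry-pick would carry the feature over too, and reverting it would rip both out at once.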
I agree that reverting a single commit is less common than we might plan for, but don't smaller commits help review? That's how it is on my team. You can review smaller batches of changes instead of a whole PR or branch diff.
> With normal review cycle times, this would slow things very much.
Compared to how much it would slow things down having to try to revert or correct a commit that both fixes a bug and introduces a feature? Especially if that change involved any sort of DB schema change that had to be reverted.
Anyone who has ever gone to production with a set of changes that involved an important bug fix and introduced a high-value feature, only to find that the whole thing had to be reverted because of a bug in the new feature, re-introducing the bug to customers, knows the importance of small, separable changes.
I agree, but I've also been told off for making many, small commits. I don't know why the complainers don't squash them together if that's the way they feel and leave the moaning out of it.
That's a little strange to me. As a reviewer, I find it substantially easier to read 30 commits that are 20 lines each than one commit that is 200 lines.
Bite sized ideas are really easy to digest. Maybe the real complaint is that the commits aren't sufficiently independent. I know some projects mandate that the full test suite be able to pass on every commit, which is really just a way of enforcing that every commit is "complete".
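git can enforce that "every commit passes" rule directly: `git rebase --exec` replays each commit and runs a command after every one, halting at the first failure. A scratch-repo sketch (the trivial `grep` here stands in for a real test command like `pytest` or `make test`):

```shell
# Scratch repo with a base commit on main and two commits on a topic branch.
cd "$(mktemp -d)" && git init -qb main
git config user.email you@example.com && git config user.name you
echo "ok" > lib.txt && git add lib.txt && git commit -qm "base"

git switch -qc topic
echo "step 1" >> lib.txt && git commit -qam "commit 1"
echo "step 2" >> lib.txt && git commit -qam "commit 2"

# Run the check against every commit on topic; if any commit fails,
# the rebase stops there so that exact commit can be fixed in place.
git rebase -x "grep -q ok lib.txt" main
```

Projects that mandate a green test suite on every commit often wire exactly this into CI.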
My view, and I think a best practice, is that for bugs the fix should be in a single, dedicated commit, even if that's just one line.
For features you may split into multiple commits if things are too big, but then each commit should be self-contained and complete, splitting the feature into a number of smaller improvements.
So really, commits are what they are based on the work at hand as long as they are kept to manageable sizes. I've seen code reviews for 1000+ lines all over the place. The author may know what they are doing but the reviewers are usually lost...
More commits != Bad, but there are a few bad ways to do it. Committing incomplete changes just to save your current changes, carelessly trying stuff and then piling on fixes to previous mistakes instead of fixing them, etc. But yeah, some people are just threatened by certain things.
> I agree, but I've also been told off for making many, small commits.
We're not paying by the commit, here. As long as the commits make some logical sense, you're doing the right thing and those folks don't know how to use version control effectively.
If it's their project you sort of have to follow their rules. Otherwise I'd say just ignore them!
I'd agree when it's simple to do. Sometimes "fix a bug" leads to rewriting some code and the bug fix comes naturally - or more easily - with that.
For a contrived example, say a bug exists because a sort algorithm sometimes reverses the order of identical values. Fixing the resulting bug locally might involve handling duplicates with some extra logic, but changing the sort might result in fixing the bug as a side effect. I might argue for both because correctness should not be an accident, but we might change one fix to an assert rather than handling a case that doesn't occur any more.
A sort is called "stable" if it doesn't re-order things that compare equal. Unstable sorts are sometimes faster. If you care, your code should request a stable sort. If your library only offers one sort and doesn't specify whether it's stable, I agree with your "defence in depth" strategy, but I believe the right long-term fix is to have the library clear up whether or not the provided sort is stable.
This may be a special case because the question comes up so often, but in general, I don’t think libraries should document all the qualities they don’t have, because that’s inexhaustible. In my opinion, if a sort function isn’t documented to do a stable sort, it must be assumed to be not stable, regardless of current implementation details. It’s often not such a big deal either: in many cases, there’s an artificial key (an id) that you can use as a tie-breaker in the comparison function.
One thing I hate is when people look at the library source code to figure this kind of thing out, since any implementation detail can change with an update. Assume the most hostile implementation allowed by the docs and you’re usually fine.
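The tie-breaker trick is simple in practice. A Python sketch (the records and fields are invented for the example):

```python
# Hypothetical records: (name, id). Sorted by name alone, the relative
# order of equal names depends on the sort's (possibly unspecified)
# stability; folding the unique id into the key removes the ambiguity.
records = [("smith", 3), ("jones", 2), ("smith", 1)]

by_name = sorted(records, key=lambda r: (r[0], r[1]))
# Equal names are now ordered by id, regardless of stability.
```

Python's `sorted` happens to be documented as stable, so here the id is belt-and-braces; the trick matters with libraries that make no such promise.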
> One thing I hate is when people look at the library source code to figure this kind of thing out
In case you didn't already know: Hyrum's Law. https://www.hyrumslaw.com/ Even if the source code isn't provided and nothing is documented your users will rely on every observable nuance of your actual implementation anyway.
At Google's scale (internally, for their internal software where they could in principle fire people for doing this) this means if you change how the allocator works so that now putting ten foozles in the diddly triggers a fresh allocation instead of twenty, you blow up real production systems because although this behaviour was never documented somebody had measured, concluded ten doesn't allocate and designed their whole system around this belief and now South East Asia doesn't have working Google search with the new allocator behaviour...
In protocol design Hyrum's Law led to TLS 1.3 having to be spelled TLS 1.2 Resumption. If your client writes "Hi I want to speak TLS 1.3" the middleboxes freak out so nothing works. So instead every single TLS 1.3 connection you make is like, "Don't mind me, just a TLS 1.2 session resumption... also I know FlyCasualThisIsTLS1.3 and here is an ECDH key share for no reason" and the server goes "Yes, resuming your session, I too know FlyCasualThisIsTLS1.3 and here's another ECDH key share I'm just blurting out for no reason. Now, since we're just resuming a TLS 1.2 session everything else is encrypted" and idiot middleboxes go "Huh, TLS resumption, I never did really understand those, nothing to see here, carry on" and it works.
Thanks for reminding me of the term "stable sort". It was a contrived example. I suppose I could have abstracted even more:
Algorithm B sometimes fails on the output of Algorithm A. We can fix the issue by making B deal with that case, or we can change A so it never produces that output. Sometimes changing the algorithm in A just makes the problem go away, and maybe that was a good idea anyway. This seems too abstract, so I picked a slightly more specific (sorting) thing for A.
Mixing multiple purposes into a single commit not only makes the initial review more difficult, it makes later review very taxing when, for example, trying to sort out the “why” of a particular change when troubleshooting. If you have to look through 200 lines of non-functional cosmetic changes to suss out the three lines of functional change, you've had to reinvest basically the whole code review process time again just to get oriented.
yeah, but my single purpose is "make code better". If in fixing a bug I can refactor and/or fix/expand/add comments and/or document things to make similar future bugs impossible or less likely, I'm doing that.
Also the "single purpose" of a commit often includes: fixing or adding issue, documenting what / why of the fix, and mentoring other devs via example or co-review.
I'm really excited by Cloudflare Workers. I hope to get a chance to build something tangible with it soon. The developer experience with Wrangler is superb, and paired with Durable Objects and KV you've got disk and in-memory storage covered. I'm a fan of and have used AWS for 10+ years, but wiring everything together still feels like a chore, and the Serverless framework and similar tools just hide these details, which will inevitably haunt you at some point.
Cloudflare recently announced database partners. Higher-level options for storage and especially streaming would be welcome improvements to me. Macrometa covers a lot of ground but sadly it seems like you need to be on their enterprise tier to use the interesting parts that don't overlap with KV/DO, such as streams. [0]
I played with recently launched support for ES modules and custom builds in Wrangler/Cloudflare Workers the other day together with ClojureScript and found the experience quite delightful. [1]
Second weightlifting. Not really similar or left of field, but a good complement. I found it more interesting after I started following a routine and tracking progress on each exercise, trying to optimize nutrients before/after workout, daily macros etc. Optimizing these things is kind of satisfying.
For example, the other day I had to work out after work instead of in the morning, and I had also been encouraged to do some spinning because of a knee issue. I found that I didn't perform well, but I don't know what caused it exactly: the cardio before lifting, it being late, tiredness from work, the eating schedule, the knee, etc. Kind of similar to debugging.
I would like to know why dark-themed apps have become so popular. I'm 30 and use black-on-white themes whenever possible, the only exception being when there are no other light sources, like when reading in bed.
That's like saying "I am only interested in the one bits. They carry information, unlike the zero bits which signify nothing."
There is no fundamental difference between one bits and zero bits. This should be self-evident to any programmer. After all, you can apply a "not" function to any binary string to turn it into another string with all the bits reversed.
Light pixels and dark pixels each convey the same amount of information. Light themes vs. dark themes is not a question of which one lets photons carry more information, it's the same information either way.
The real difference is how our far-from-perfect human eyes perceive them.
You're telling me there's no difference between light and the absence of light, barring our perception of them, as though light isn't a physical phenomenon with measurable effects.
For an inverted color scheme (light information on a black background) on a display where black pixels don't emit light, the only emissions your eyes have to take in are the information itself.
Your digital logic analogy is sanitized. If the high logic level is at 10kV your equipment is going to have a hard time.
We're talking about staring into a light source, there's a difference in total energy your eyes have to absorb.
Maybe your argument is self-evident to any theorist, but it completely ignores the physical world.
I'd also add that when reading with reflected light (such as reading on paper), your eyes are going to be taking in the same amount of light regardless of where they look in your environment because you're just reading ambient light. Completely different than staring into a light source.
Try programming while looking through an LCD directly at the sun and have it display characters by darkening pixels of direct sunlight aimed at your retinae then tell me there's no difference between a lit pixel and a dark pixel.
Hey, there's no difference between a one bit and a zero bit, here's a great computer interface idea: Transmit binary data to the user with gamma rays. The only difference is in how their far from perfect human body experiences them.
There are two kinds of dark themes: low-contrast, washed-out dark themes that use various shades of greys, and high-contrast, white-on-black dark themes that are good for people with poor eyesight or poor quality hardware, and for use in an actually dark environment.
Naturally there are things in between, but by and large there’s a pretty clear division between the two kinds, and the latter is pretty rare.
I actually do, especially on my OLED laptop screen and phone. The backgrounds are often these dark but not black grays that just lower the contrast of the content for no reason. Chrome and Firefox have these reader modes that don't give a true black background, it's just so wasteful.
I still prefer those themes to black-on-light but I do not love them.
They’re supposed to reduce eye strain, but personally I’ve never experienced eye strain from any colour scheme, and suspect it has more to do with the eyesight of the user than the colours involved.
Staring into a bright screen in a poorly lit room is eye strain. The pupil is dilated because the room is almost dark so more of the light from the bright screen will hit the retina.
Blacking out the screen in those conditions reduces that strain. But you still have to maintain a good brightness and contrast between the elements otherwise you trade one issue for another.
There's no one size fits all unfortunately. Which is why it's great when developers allow for customization.
> Staring into a bright screen in a poorly lit room is eye strain.
That's a really good point.
If I have to use a computer or phone under poor lighting, I turn down the display brightness to match.
Otherwise, I prefer to be in a space where I can read something written on paper. And then I adjust the brightness to match that - not cranked up too bright like I see sometimes.
Personally I can work 16 hours with white theme but even 1-2 hours with strictly dark theme is borderline unbearable. I tried it multiple times due to posts advertising it's lower eye strain. I'm 30 and myopic, -4 in both eyes, using PCs for decades.
Opening a white window is like a full headlight in my eyes. Many scientific studies have said that blue LEDs (and hence white RGB pixels) deteriorate eyesight over time by destroying photoreceptors in the eyes, which never recover. I would encourage black themes in all apps.
Apparently my direct experience gets downvotes. I literally use light themed apps in low light conditions every day and have never experienced any discomfort that could be called eye strain. For at least some set of people it simply does not exist. I therefore put the variance down to the individual’s vision.
I think it would largely depend on the display technology. I know for some technologies a black pixel uses nearly no energy, but for others, all pixels are equally 'lit' by the same backlight.
For OLED displays, black uses (almost?) no energy. For LCD displays, black uses more energy, since the pixel has to be energized to block out the backlight. The difference is negligible, though, since the backlight uses far more.