Thanks for the example of a Playwright report page.
I agree that getting browser tests (not even just visual tests) to work reliably is considerable work. I built out a suite at work for a rather complex web application and it certainly looks easier than it is.
A couple of notes:
- I disagree that you need a powerful VPS to run these tests; we run our suite once a day at midnight instead of on every commit. You still get most of the benefit for much cheaper this way.
- We used BrowserStack initially but stopped due to flakiness. The key to getting a stable suite was to run tests against a local nginx image serving the web app and WireMock serving the API. This way you get short, predictable latency and can really isolate what you're trying to test (a rough config sketch follows below).
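To make that concrete, here is a minimal sketch of one way to wire this up in Playwright's config. The docker compose command, service names, and port are assumptions for illustration, not the actual setup described above:

```ts
// playwright.config.ts — hypothetical local setup; the compose services,
// command, and port are placeholders, not from the original comment.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Tests hit the local nginx container instead of a remote environment.
    baseURL: 'http://localhost:8080',
  },
  webServer: {
    // Bring up nginx (serving the built app) and WireMock (mocking the API).
    command: 'docker compose up nginx wiremock',
    url: 'http://localhost:8080',
    reuseExistingServer: true,
    timeout: 120_000,
  },
});
```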
> - I disagree that you need a powerful VPS to run these tests; we run our suite once a day at midnight instead of on every commit. You still get most of the benefit for much cheaper this way.
Then how do you know which commit is responsible for the regression? I can see that working for a very small team where the number of changes is limited, but even then it can be hard to pinpoint, especially with CSS, where a change in one place can affect the styles in another.
We probably have at most 50 commits a day in our team, spread across many areas of the application. So when breakages occur, it's typically easy to tell which commit caused them.
But I agree: if you have a large team or a large monorepo, you probably want to know about breaking changes as early as the PR stage.
Ah, forgot to mention it in the post. This comes built into Playwright. Normally, you invoke the test suite by running `npx playwright test`. This fails your test if a screenshot is missing or if it differs from the stored snapshot. By running `npx playwright test --update-snapshots`, you tell Playwright to just overwrite the snapshots instead of failing the tests.
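For anyone who hasn't seen it, a screenshot assertion looks roughly like this; the test name, URL, and snapshot filename are made up for illustration:

```ts
// example.spec.ts — minimal visual test; page and snapshot names are
// illustrative, not from the original post.
import { test, expect } from '@playwright/test';

test('homepage looks the same', async ({ page }) => {
  await page.goto('https://example.com');
  // Fails if homepage.png is missing or the rendered page differs from it;
  // `npx playwright test --update-snapshots` (re)records the baseline.
  await expect(page).toHaveScreenshot('homepage.png');
});
```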
I only use the built-in diffing from Playwright. It comes with a nice overview page [0] that shows all the failed tests, including traces and screenshots. There you get a pixel diff. If you have some notion of irrelevant changes that shouldn't warrant a test failure, I wouldn't know of a way to pull that off.
updown.io also has a relatively new feature called cron monitoring [0] that allows you to check in regularly to signal success. If there has been no check-in within the configured time, it will alert you. For backups, you could add a simple curl call somewhere in your backup process to do just that.
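A rough sketch of that dead-man's-switch idea; the check-in URL is a placeholder (updown.io gives you the real one when you create the monitor), and the backup script path is made up:

```ts
// backup-checkin.ts — hypothetical heartbeat after a backup job.
import { execFileSync } from 'node:child_process';

// Run the backup; if this throws, we never reach the check-in and
// updown.io alerts once the configured window passes without a ping.
execFileSync('/usr/local/bin/run-backup.sh', { stdio: 'inherit' });

// Signal success; a missed check-in is what triggers the alert.
const res = await fetch('https://updown.io/checkins/XXXXXXXX'); // placeholder URL
if (!res.ok) throw new Error(`check-in failed: ${res.status}`);
```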
I know, but I prefer fair pricing over free in situations like this. There are plenty of stories going around of CF forcing users to upgrade to an enterprise plan due to their usage. When there is a price tag, at least I know that won't happen to me (not that my usage would be on CF's radar anyway; it's the principle of it).
I've been eyeing DuckDB for a metrics-collection hobby project. A quick benchmark showed promising query performance over SQLite (unsurprising, considering DuckDB is column-oriented), but quite a bit slower inserts. Does anyone have experience using it as an "online" backend DB, as opposed to a data-analytics engine for interactive use? From what I gather, they are trying to position themselves more in the latter use case.
Doing row-by-row inserts into DuckDB is really slow. Accumulating rows in an in-memory data structure, periodically batching them into something like an in-memory Arrow table, and then reading the Arrow table into DuckDB is fast and has been tenable for my own use cases.
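The parent's route goes through Arrow tables; as a simpler sketch of the same buffer-then-flush principle with the `duckdb` npm package, you can batch buffered rows into one transaction instead of inserting them one at a time (table name, columns, and the flush threshold here are assumptions):

```ts
// metrics-buffer.ts — batched inserts; schema and threshold are illustrative.
import duckdb from 'duckdb';

const db = new duckdb.Database('metrics.db');
const con = db.connect();
con.run('CREATE TABLE IF NOT EXISTS metrics (ts TIMESTAMP, name VARCHAR, value DOUBLE)');

type Row = { ts: string; name: string; value: number };
const buffer: Row[] = [];

export function record(row: Row) {
  buffer.push(row);
  if (buffer.length >= 1000) flush(); // flush in batches, not per row
}

function flush() {
  const rows = buffer.splice(0, buffer.length);
  // One transaction per batch is far cheaper than one implicit
  // transaction per row, which is what makes row-by-row inserts slow.
  con.run('BEGIN');
  const stmt = con.prepare('INSERT INTO metrics VALUES (?, ?, ?)');
  for (const r of rows) stmt.run(r.ts, r.name, r.value);
  stmt.finalize();
  con.run('COMMIT');
}
```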
Depends on the scale of users you expect for your project. Generally I like to keep OLTP and OLAP tools in their lanes, but if < 100 people are going to be using it, it probably doesn't matter. I doubt DuckDB has any sort of ACID guarantees, so that's something to keep in mind.
It's a simple flat list of links that you annotate with a description for better search. The killer feature is that it works with Firefox's keyword search. I can enter `go gith prof` in the URL bar and hit enter. Since there is only one entry that matches (with the description 'github profile'), I'm immediately redirected to that link.
Me too, I enjoyed creating a static data-dashboard site [1] with Observable Framework [2] + Plot for their line and bar graphs. I did run into an inconvenience when I used d3 components outside of this combo: it requires you to "copy-paste the entire function" [3] rather than just importing it.
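For flavor, a basic Plot line chart looks roughly like this; the data and field names are made up:

```ts
// chart.ts — minimal Observable Plot line chart with made-up data.
import * as Plot from '@observablehq/plot';

const data = [
  { date: new Date('2024-01-01'), value: 10 },
  { date: new Date('2024-02-01'), value: 14 },
  { date: new Date('2024-03-01'), value: 9 },
];

// Plot.plot returns a detached element that you attach to the page yourself.
const chart = Plot.plot({
  marks: [Plot.lineY(data, { x: 'date', y: 'value' })],
});
document.body.append(chart);
```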
I think you have to be careful not to stretch your learning budget too thin. Coincidentally, I recently started tinkering with embedded as well and decided against using Rust (not new to me) for it, because I prefer the well-trodden path when first getting started in something. There is plenty of opportunity to venture out later.
Similar to the times when boards with USB support weren't a thing and you had to program them over serial. Now with USB it's so easy to program these boards that you can jump right to the thing that interested you from the beginning, without having to deal with things like first flashing an Arduino that then served as a programmer.
I currently don't see any benefit to using Rust on an MCU if you're a hobbyist, but that doesn't mean it isn't a wonderful language to use on a Raspberry Pi or anything above that.