If you are comfortable with a hand grinder, Porlex grinders [1] [2] are excellent. I use one to make a coarse grind for the french press. It's gotten used 4-7 times per week for the last 5+ years and still going strong.
On the topic of best purchases under $100: if you regularly boil water for coffee or pasta but don't own an electric kettle, consider investing in a cheap white plastic kettle for $5. It's a fast and energy-efficient way to turn electricity into boiling water.
I'm not an accountant either, but that also makes sense.
If instead the exchange had been real-world money for N months of prepaid subscription, which was consumed after N months had passed, that'd be a little different but also presumably quite acceptable to accountants.
Suppose the exchange had been real-world money for N months of prepaid subscription credits that could be stored indefinitely and only consumed if the player chose to actively play during a month. That might turn into an accounting nightmare if those subscription credits didn't expire (maybe the revenue can't be recognised while they are unused, so they sit as a liability on the balance sheet).
I wonder how the accounting rules work for stuff like Eve Online, where there is an in-game consumable item (PLEX) that extends your subscription when consumed, can be traded inside the game's economy, and can be purchased with real-world money.
Great writeup. Much of this is more about testing, how package dependencies are expressed, and many-repo/single-repo tradeoffs than about "microservices"!
Maintaining and testing a codebase containing many external integrations ("Destinations") was one of the drivers behind the earlier decision to shatter into many repos: the aim was to isolate the impact of Destination-specific test suite failures that arose because some tests were actually testing integration against external 3rd party services.
One way to think about that situation is in terms of packages, their dependency structure, how those dependencies are expressed (e.g. decoupled via versioned artefact releases, or directly coupled via monorepo-style source checkout), their rates of change, and the quality of their automated test suites (high quality meaning the test suite runs really fast, tests only the thing it is meant to test, and has low rates of false negatives and false positives; low quality meaning the opposite).
Their initial situation was one that rapidly becomes unworkable: a shared library package undergoing a high rate of change depended on by many Destination packages, each with low quality test suites, where the dependencies were expressed in a directly-coupled way by virtue of everything existing in a single repo.
There's a general principle here: multiple packages in a single repo with directly-coupled dependencies, where those packages have test suites of wildly varying quality, quickly become a nightmare to maintain. Packages with low-quality test suites that depend upon a rapidly changing shared package generate spurious test failures that need to be triaged and slow down development. Maintainers of packages that depend upon the rapidly changing shared package but lack test suites good enough to detect regressions may find their package frequently gets broken without anyone realising in time.
Their initial move solved this problem by shattering the single repo and trading directly-coupled dependencies for decoupled, versioned dependencies, so that the rate of change of the shared package was decoupled from the per-Destination packages. That was an incremental improvement, but it added the complexity and overhead of maintaining multiple versions of the "shared" library plus per-repo boilerplate, and that overhead grows over time as more Destinations are added, or as more changes are made to the shared library while the work of upgrading and retesting Destinations against the new versions is deferred.
Their later move was to reverse this and go back to directly-coupled dependencies, but also to improve the quality of their per-Destination test suites, particularly by introducing record/replay style testing of Destinations. Great move. This means the test suite of each Destination measures "is the Destination package adhering to its contract for integrating with the 3rd party API and with the shared package?" without being conflated with testing things outside the control of the code in the repo (is the 3rd party service even up, etc.).
If we were critiquing an article advocating the health benefits of black coffee consumption, say, we might raise eyebrows or immediately close the tab without further comment if a claim wasn't backed up by any supporting evidence (e.g. a peer-reviewed article with clinical trials, or a longitudinal study with statistical analysis).
Ideally, for this kind of theorising we could devise testable, falsifiable hypotheses, run experiments controlling for confounding factors (challenging, given microservices are _attempting_ to solve joint technical-orgchart problems), and see whether the data supports or rejects our various hypotheses. I.e. something resembling the scientific method.
Alas, it is clearly cost-prohibitive to run such experiments to test the impact of proposed rules for constraining enterprise-scale microservice (or macroservice) topologies.
The last enterprise project I worked on was roughly adding one new orchestration macroservice atop the existing mass of production macroservices. The budget to get that one service into production might have been around $25m. Maybe double that to account for supporting changes that also needed to be made across various existing services. Maybe double it again for coordination overhead, reqs work, integrated testing.
In a similar environment, maybe it'd cost $1b-$10b to run an experiment comparing different strategies for microservice topologies (i.e. actually designing and building two different variants of the overall system and operating them both for 5 years, measuring enough organisational and technical metrics, then trying to see if we could learn anything...).
Anyone know of any results or data from something resembling a scientific method applied to this topic?
It'd also have been interesting to see some overall profiling data of the initial program & some discussion of which optimisations to investigate based on that profiling data.
When investigating performance issues it's often very helpful to run with profiling instrumentation enabled and start by looking at some top-down "cumulative sum" profiler output, to get a big-picture view of which functions/phases are consuming most of the running time and see where it may be worth spending some effort.
Getting familiar with Linux's perf [1] tool is also helpful, both for interpreting the summary statistics from perf stat (instructions per cycle, page faults, cache misses, etc.) that can give clues about what to focus on, and for annotating source line by line with the time spent.
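For anyone who hasn't tried it, the basic workflow looks roughly like this (the program name is a placeholder, and you'll want debug info compiled in for the source-level annotation; check the man pages for the full set of options):

    perf stat ./myprogram        # summary counters: cycles, instructions, cache misses, page faults, ...
    perf record -g ./myprogram   # sample the program, recording call graphs
    perf report                  # interactive breakdown of where the samples landed
    perf annotate                # drill into a function, line by line / instruction by instruction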
I'm not familiar with Rust, but, for example, the rustc compiler dev guide has a tutorial on how to profile rustc using perf [2].
> Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on
Good summary.
I had no background in compilers or related theory but read Jack Crenshaw's Let's Build a Compiler tutorials some time ago. My main takeaway from reading half a dozen or so of them was that building a simple compiler for a toy language is a small project well within my grasp and ability, not a huge undertaking that requires mastery of esoteric prerequisites or a large amount of planning.
I got a lot of enjoyment messing about with toy compiler projects related to Brainfuck.
Why Brainfuck? It's a beautiful little toy language. Brainfuck has 8 instructions, each instruction is 1 character, so parsing reduces to getting a char and switching on it. I guess it depends on what you want to explore. If you want to focus on writing recursive descent parsers, not the best choice!
One initial project could be to compile (transpile) from Brainfuck source to C source. You can do this as a source-to-source compiler without any internal representation, by transforming each Brainfuck operation into a corresponding C statement. Brainfuck is specified in terms of a single fixed-length array of bytes, and a pointer - an index into that array - that can be moved around, plus basic manipulations of the byte it is pointing at. So on the C side you need two variables: one for the array and a second for the pointer, an index into that array.
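To give a feel for how small that first project is, here's a rough sketch in C (the 30,000-cell tape size and the complete lack of error handling, e.g. for unbalanced brackets, are my own shortcuts):

    #include <stdio.h>

    /* Brainfuck -> C transpiler sketch: reads Brainfuck on stdin,
       writes a C program on stdout, one C statement per Brainfuck op. */
    int main(void) {
        int c;
        printf("#include <stdio.h>\n");
        printf("unsigned char tape[30000];\n");
        printf("int main(void) { int p = 0;\n");
        while ((c = getchar()) != EOF) {
            switch (c) {
            case '>': printf("p++;\n"); break;
            case '<': printf("p--;\n"); break;
            case '+': printf("tape[p]++;\n"); break;
            case '-': printf("tape[p]--;\n"); break;
            case '.': printf("putchar(tape[p]);\n"); break;
            case ',': printf("tape[p] = getchar();\n"); break;
            case '[': printf("while (tape[p]) {\n"); break;
            case ']': printf("}\n"); break;
            default: break; /* any other character is a comment in Brainfuck */
            }
        }
        printf("return 0; }\n");
        return 0;
    }

Feed it a Brainfuck program on stdin, compile the C it prints with any C compiler, and you have a native executable.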
A second project could be compiling from Brainfuck to assembly language, skipping C. You'd need to read a few tutorials/reference docs about your chosen assembly language and learn how to run the assembler to compile tiny assembly programs into native executables. You could look at what assembly you get when you compile small Brainfuck programs to C and then compile those C programs to assembly. You could write a direct source-to-source compiler without an internal representation, where each Brainfuck operation is directly mapped to a snippet of assembly instructions. Once you've got this working, you can compile a Brainfuck program into an assembly program, then use the usual toolchain to assemble that into a native executable and run it.
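A sketch of that could reuse the same skeleton as the transpiler above, with only the emitted snippets changing. Something like the following, assuming x86-64 Linux, AT&T syntax and the System V calling convention (the tape pointer lives in %rbx because it's callee-saved and so survives the calls to putchar/getchar; again, no error handling for unbalanced brackets):

    #include <stdio.h>

    /* Brainfuck -> x86-64 assembly sketch: reads Brainfuck on stdin,
       writes GNU assembler source on stdout. */
    int main(void) {
        int c, labels = 0, sp = 0, stack[256];
        printf("    .bss\ntape:\n    .zero 30000\n    .text\n    .globl main\nmain:\n");
        printf("    pushq %%rbx\n    leaq tape(%%rip), %%rbx\n");
        while ((c = getchar()) != EOF) {
            switch (c) {
            case '>': printf("    incq %%rbx\n"); break;
            case '<': printf("    decq %%rbx\n"); break;
            case '+': printf("    incb (%%rbx)\n"); break;
            case '-': printf("    decb (%%rbx)\n"); break;
            case '.': printf("    movzbl (%%rbx), %%edi\n    call putchar@plt\n"); break;
            case ',': printf("    call getchar@plt\n    movb %%al, (%%rbx)\n"); break;
            case '[': /* test the current cell, jump past the loop if it is zero */
                stack[sp++] = labels++;
                printf("Ls%d:\n    cmpb $0, (%%rbx)\n    je Le%d\n", stack[sp - 1], stack[sp - 1]);
                break;
            case ']': /* jump back to the matching test */
                sp--;
                printf("    jmp Ls%d\nLe%d:\n", stack[sp], stack[sp]);
                break;
            }
        }
        printf("    movl $0, %%eax\n    popq %%rbx\n    ret\n");
        return 0;
    }

Assemble and link the output with something like gcc out.s -o out.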
There are also lots of projects in another direction, treating Brainfuck as the target language. Imagine that your job is to write Brainfuck programs for a CPU that natively executes Brainfuck. Try writing a few tiny Brainfuck programs by hand and savour how trying to do almost anything involves solving horrible little puzzles. Maybe it'd be much easier to do your job if you, the Brainfuck programmer, didn't have to manually track which index of the array is used to store what. You could invent a higher level language supporting concepts like local variables, where you could add two local variables together and store the result in a third local variable! Maybe you could allow the programmer to define and call their own functions! Maybe you could support `if` blocks and comparisons! You could have a compiler that manages the book-keeping of memory allocation and maps complex high-level abstractions such as integer addition onto native Brainfuck concepts of adding one to things and moving left or right. Projects in this direction let you explore more stuff about parsers (the input syntax for your higher level language is richer), internal representations, scopes and so on.
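As a tiny taste of the book-keeping involved in that direction, here's a sketch (the variable-to-cell mapping and helper names are made up for illustration) of a code generator that gives each local variable a fixed tape cell and emits the Brainfuck for "add variable a into variable b":

    #include <stdio.h>

    /* Emit '>' or '<' to walk the tape pointer from one cell to another. */
    static void moves(int from, int to) {
        for (; from < to; from++) putchar('>');
        for (; from > to; from--) putchar('<');
    }

    /* Emit Brainfuck that adds the value in cell a into cell b.
       Assumes the tape pointer starts at cell 0 and returns it there.
       Side effect of the standard idiom: cell a is left at zero. */
    static void emit_add_into(int a, int b) {
        moves(0, a);
        putchar('[');
        putchar('-');   /* decrement a ...            */
        moves(a, b);
        putchar('+');   /* ... increment b ...        */
        moves(b, a);
        putchar(']');   /* ... repeat until a is zero */
        moves(a, 0);
        putchar('\n');
    }

    int main(void) {
        /* hypothetical assignment of variables to cells: x -> cell 1, y -> cell 2 */
        emit_add_into(1, 2);   /* prints >[->+<]<  i.e. "y += x; x = 0;" */
        return 0;
    }

The fact that the source variable gets zeroed is exactly the kind of detail the compiler has to track (or work around by copying) on the programmer's behalf.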
hypothesis: expected upvotes = views of comment thread * probability your comment is read given someone reads the comment thread * probability of upvote given someone read your comment
If you make a "great" comment but the comment thread isn't popular, no/few upvotes
If you make a "great" comment in a really popular thread but it's buried down the comment tree where others are less likely to see it, no/few upvotes
Let's say that by a "great" comment we mean one that readers of that comment upvote at a high rate.
You'll likely get more votes by making a pretty good comment relatively early in a popular comment thread, at a time of day when many people are reading HN, than by making an absolutely fantastic comment in some thread that hardly anyone reads.
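A toy back-of-the-envelope version of the hypothesis, with completely invented numbers just to show how the three factors multiply:

    #include <stdio.h>

    /* expected upvotes = thread views * P(comment is read) * P(upvote | read).
       All numbers below are invented for illustration, not real HN data. */
    static double expected_upvotes(double thread_views, double p_read, double p_upvote) {
        return thread_views * p_read * p_upvote;
    }

    int main(void) {
        /* "great" comment (10% upvote rate among readers) buried in a quiet thread */
        printf("quiet thread:   %.0f\n", expected_upvotes(300, 0.3, 0.10));   /* 9   */
        /* "pretty good" comment (2% upvote rate) near the top of a popular thread */
        printf("popular thread: %.0f\n", expected_upvotes(20000, 0.5, 0.02)); /* 200 */
        return 0;
    }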
There's path dependence -- if there are two equally "great" comments contributed to a thread, and one is made 30 minutes earlier, it's likely that the earlier one accrues a bunch of votes and sub-threads, secures the best real estate at the top of the thread, and ends up with many more votes than the other one.
These effects may make it harder to identify whether the content or topic is having much impact on the accrued votes.
Could perhaps normalise for that by adding metrics for the number of votes that the submission got, or the total number of votes of all comments in the thread, then see if that can explain some of the variation. Measuring the duration between when the comment thread opened and when the comment was posted could be interesting too.
I hope that after enough corrections for topic popularity, time of day, and how fast you commented, it becomes clear that your best topic is fire truck efficiency, and we can look forward to frequent comments about fire truck efficiency going forward.
[1] https://www.porlex.co.jp/
[2] https://www.porlex.com.au/collections/porlex-grinders