Do any HN constitutional scholars or lawyers who work in adjacent fields have any comments on the ruling? Having only read the introduction (which by definition is not a comprehensive argument, so I acknowledge that I'm drawing conclusions based on an incomplete understanding of the ruling), my intuition tells me the standing sub-decision could be abused by states, and the textually oriented picking-apart of the SecEdu's stance struck me as - forgive my ignorance, but - arbitrary and borderline petulant.
FWIW, I'm _not_ interested in discussing this from a socio-political standpoint. I'm just curious to hear opinions about the ruling from a legal perspective.
I just joined a company that does formal but unpaid oncall, coming from a prior co that had implicit oncall. I'm very much in the "if you built it, you run it" camp. This said, I think:
- if oncall is a part of the gig, you compensate _somehow_ (demonstrably above market salaries, explicit extra pay, time in lieu, etc); oncall culture (or the lack thereof) should be explicitly mentioned in any hiring process and employment contracts
- the team should be striving for 8 or more engineers in the steady state; temporary vacancies should be temporary
- primary should be handling 80+% of pages in the steady state; if this is not the case on average across the team, you are not building enough resiliency into your oncall culture, or relevant tech debt should be high priority
- relatedly, kpis/incentives should be structured such that as oncall gets worse, progressively more immediate investments are made to address technical root causes (a la SRE error budgets)
I'm tinkering with that last one in my head. It's easy to say, hard to execute; a rough sketch of what I mean is below.
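To make the error-budget idea concrete, here's a toy sketch assuming a 99.9% availability SLO over a 30-day window; the function names, thresholds, and "policy" are entirely made up for illustration, not any real SRE tooling.

```python
# Hypothetical sketch: scale reliability investment by how much error budget is burned.
SLO_TARGET = 0.999             # e.g. 99.9% availability target (assumed)
WINDOW_MINUTES = 30 * 24 * 60  # 30-day rolling window

def error_budget_remaining(downtime_minutes: float) -> float:
    """Fraction of the window's error budget still unspent."""
    budget_minutes = (1 - SLO_TARGET) * WINDOW_MINUTES  # ~43.2 min at 99.9%
    return max(0.0, 1 - downtime_minutes / budget_minutes)

def reliability_work_share(budget_remaining: float) -> float:
    """Toy policy: the more budget burned, the more capacity goes to root causes."""
    return min(1.0, 1.0 - budget_remaining)  # 0% budget left => 100% reliability work

if __name__ == "__main__":
    for downtime in (5, 20, 45):
        remaining = error_budget_remaining(downtime)
        print(f"{downtime} min downtime -> {remaining:.0%} budget left, "
              f"{reliability_work_share(remaining):.0%} of capacity to reliability work")
```

The hard part isn't the arithmetic, it's getting the org to actually honor the policy when the budget runs out.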
How much of this is possible due to the fact that "there is no real fear of the company collapsing?" This sounds like process-resembling-waterfall, which seemingly works well because use cases, constraints, and patterns are (relatively) so well understood at an established company. Is there "edge" or value in that process, or is the value in the understanding of use-cases, constraints, and patterns (or a drive to align towards reaching said understanding) that enables the process?
"the value in the understanding of use-cases, constraints, and patterns (or a drive to align towards reaching said understanding) that enables the process"
These companies operate at a different scale and speed, and under different constraints, than other companies. Just as people say "you are not Google, so don't use Google tech solutions for your problems", don't use FAANG processes for your use case without understanding why those processes are the way they are.
As an addendum to my first comment, I want to also say that no scrum does not mean no processes. In fact there are a lot of processes. The number one complaint is that there are too many processes. They are just not scrum.
There is no one-size-fits-all solution, which is what the product "Enterprise Scrum" tries to be. Teams need to understand their needs for themselves and build/iterate on a set of processes that works for them. Large orgs need to enable this as well as set the baseline for processes that work across the whole org at every level (individual -> team -> product area -> organization).
I’d have to dig back into the literature; I don’t have anything at hand. Basically, for every pro-scrum paper you’ll find a criticism. A few people have done meta-analyses and found no benefits. There should be something in IEEE, but I’m on a plane right now.
I did a baby bakeoff internally in my prior role ~18mo ago now. Prefect felt nicer to write code in but perhaps not as easy to find answers in the docs (though their Slack is phenomenal). Ended up going with Prefect so I could focus on biz/ETL logic with less boilerplate, but I'm sure Dagster is not a bad choice either. Curious to hear about parent's experience
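For context on what "less boilerplate" meant in practice, here's roughly the shape of flow I was writing, a minimal sketch assuming Prefect 2.x; the task names and data are invented.

```python
from prefect import flow, task

@task(retries=2)
def extract() -> list[dict]:
    # pretend this pulls rows from an upstream API or database
    return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

@task
def transform(rows: list[dict]) -> list[dict]:
    return [{**r, "value": r["value"] * 2} for r in rows]

@task
def load(rows: list[dict]) -> None:
    print(f"would write {len(rows)} rows to the warehouse")

@flow
def etl():
    load(transform(extract()))

if __name__ == "__main__":
    etl()
```

The appeal for me was that the biz/ETL logic stays plain Python functions with decorators on top, rather than framework-specific classes.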
I always liked the associated page for Gift Economy -- reading it is when I first realized there are multiple credit systems in play at any given moment, beyond raw currency.
My interpretation of the statement is that compound interest applies to education as much as it does to bank accounts. Missing $5k in contributions to retirement accounts at 25 and then making it up with extra at 26 is very different from making it up with an extra contribution at 46. The same applies to education, and if you're trying to make up missed "contributions" in 6th/7th grade, you're already the educational equivalent of a 46 year old.
At risk of stretching a thin metaphor even further, this is to say nothing of Social Security (long-term structural impact of system-wide attempts to catch up half a generation of students on 12+ months of education).
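Back-of-the-envelope numbers behind the compounding point, assuming a 7% annual return and retirement at 65 purely for illustration:

```python
RATE = 0.07  # assumed annual return, for illustration only

def future_value(contribution: float, years: float, rate: float = RATE) -> float:
    return contribution * (1 + rate) ** years

# $5k skipped at 25 and replaced at 26, vs. replaced at 46, both valued at age 65:
print(round(future_value(5_000, 65 - 26)))  # ~70,000 -> nearly matches the on-time path
print(round(future_value(5_000, 65 - 46)))  # ~18,000 -> far less time to compound
```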
Correct me if I'm wrong on the following, as I have not yet used Metaflow (only read the docs):
It is conceivable to execute a Flow entirely locally yet achieve @step-wise, "Batch-like" parallelism/distributed computation by doing `import dask` in the relevant @steps and using it as you would outside of Metaflow, correct?
Although, as I think of it, the `parallel_map` function would achieve much of what Dask offers on a single box, wouldn't it? But within a @step, using dask-distributed could kinda replicate something a little more akin to the AWS integration?
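Something like the following is what I have in mind, a rough sketch assuming Metaflow's FlowSpec/@step API with `dask.bag` on the local scheduler; the flow and step names are mine, not from the docs.

```python
from metaflow import FlowSpec, step

class LocalDaskFlow(FlowSpec):

    @step
    def start(self):
        self.numbers = list(range(1_000))
        self.next(self.crunch)

    @step
    def crunch(self):
        # Use dask inside the step exactly as you would outside Metaflow;
        # presumably you could point dask.distributed's Client at an on-prem
        # cluster here instead of using the local scheduler.
        import dask.bag as db
        self.total = db.from_sequence(self.numbers).map(lambda x: x * x).sum().compute()
        self.next(self.end)

    @step
    def end(self):
        print("sum of squares:", self.total)

if __name__ == "__main__":
    LocalDaskFlow()
```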
Tangentially related, but the docs note that data checkpointing is achieved using pickle. I've never compared them, but I've found parquet files to be extremely performant for pandas dataframes. Again, I'm assuming a lot here, but I'd expect @step results to be dataframes quite often. What was the design consideration associated with how certain kinds of objects get checkpointed?
To be clear, the fundamental motivation behind these lines of questioning is: how can I leverage Metaflow in conjunction with existing python-based _on-prem_ distributed (or parallel) computing utilities, e.g. Dask? In other words, can I expect to leverage Metaflow to execute batch ETL or model-building jobs that require distributed compute that isn't owned by $cloud_provider?
Yes - you should be able to use dask the way you say.
The first part of your understanding matches my expectation too.
Dask's single-box parallelism is achieved via multiprocessing - akin to `parallel_map`.
And distributed compute is achieved by shipping the work to remote substrates.
For your second comment - we leverage pickle mostly to keep it easy and simple for basic types.
For small dataframes we just pickle for simplicity. For larger dataframes we rely on users to directly store the data (probably encoded as parquet) and just pickle the path instead of the whole dataframe.
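A sketch of that "store the data yourself, checkpoint the path" pattern, assuming pandas with a parquet engine (e.g. pyarrow) installed; the path, flow, and step names are invented for illustration.

```python
import pandas as pd
from metaflow import FlowSpec, step

class ParquetPathFlow(FlowSpec):

    @step
    def start(self):
        df = pd.DataFrame({"id": range(1_000_000), "value": 1.0})
        # Write the large dataframe out as parquet ourselves...
        self.df_path = "features.parquet"
        df.to_parquet(self.df_path)
        # ...so Metaflow only pickles the (tiny) path string as the artifact.
        self.next(self.end)

    @step
    def end(self):
        df = pd.read_parquet(self.df_path)
        print(len(df), "rows reloaded")

if __name__ == "__main__":
    ParquetPathFlow()
```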