
Here’s a fun theoretical existential threat: false vacuum decay. If our universe is in a false vacuum state (maybe yes? maybe no? we don’t know!) and that vacuum someday suddenly shifts into a true vacuum state (decays), then worst case scenario the change propagates outward at nearly the speed of light and we experience a “complete cessation of fundamental forces”, taking the elementary particles and structures they hold together along with it. Everything in its path wiped out in a blink, with no warning.

We can take some comfort in the fact that we’d never know it happened, and theorists have asserted that it’s highly unlikely a false vacuum of any size could exist for more than a moment in the presence of gravitational forces, plus physics within the bubble would be “super weird”, I think is the technical term.

https://en.m.wikipedia.org/wiki/False_vacuum#Existential_thr...


Isn't "false vacuum state" equivalent to a non-zero vacuum energy ? Which we know is the case, because virtual particle pairs blinking into and out of existence ? Did I leave something important out ?


We’re about to embark on the MFE journey and I’m concerned about pulling it off in a way that justifies the effort. Our motivations are pure: we have many feature teams, we’re driving toward team independence, we’re already breaking up the monolith into microservices, etc. But on the FE we have the standard requirement to maintain eventual (definition TBD) UI consistency, so we intend to keep maintaining a large number of build-time dependencies between MFEs and shared libraries. So I’m not sure exactly how to pull this off.

Are any of these “many success stories” talked about online? Case studies we can get pumped on?


This gets you most of the advantages of a microfrontend:

1) Monorepo with multiple workspaces

2) With several separate independently deployed applications, each maintained by a separate team

3) That share a set of common packages for common stuff (auth, etc) with full typescript definitions

4) Add CI typechecking, so if any shared package changes its types you get errors

5) Preferably with the packages able to run independently for development (something like storybook, although I don't recommend it)

6) Preferably with the packages kept small and lean, with a limited number of external dependencies (ie, settle on the cross-team deps to use: framework, routing, data-fetching, etc)

7) Some kind of pre-commit git hook or CI script to validate that the set of core shared dependencies used by packages is kept in sync (ie, everyone is running the EXACT SAME React version). I use this one: https://gist.github.com/DanielHoffmann/a456aadb2f27880d59241... (a rough sketch of the idea follows the example below)

8) Shared build configuration and tools, all apps are validated, built and bundled by the same code.

9) NO PACKAGE PUBLISHING/VERSIONING, all dependencies are workspace:*

For example:

  folder-structure:
  /apps/{team1-app-name}/
  /apps/{team2-app-name}/
  /packages/auth/
  /packages/some-util-lib/

  /package.json
  "scripts": (test, lint, format, typecheck all done at the top level package.json)
  "workspaces": [
    "apps/*",
    "packages/*",
  ],

  /packages/auth/package.json
  "scripts": (dev to run in development mode)
  "devDependencies": { "some-util-lib": "workspace:*", "react": "^18.0.0" }
  "peerDependencies": { "some-util-lib": "*", "react": "*" }

  /apps/{team1-app-name}/package.json
  "scripts": (dev to run in development mode, build for production builds)
  "dependencies": {
     "auth": "workspace:*"
     "some-util-lib": "workspace:*"
  }
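
For point 7, a rough sketch of the kind of sync check I mean (hypothetical script name and dependency list, not the contents of the gist above; assumes a plain Node/TypeScript setup with the apps/packages layout shown):

  // scripts/check-core-deps.ts (hypothetical)
  // Fails CI if any workspace declares a different version of a "core" dependency.
  import { readFileSync, readdirSync, existsSync } from "node:fs";
  import { join } from "node:path";

  const CORE_DEPS = ["react", "react-dom", "react-router-dom"]; // the agreed cross-team set
  const found = new Map<string, Map<string, string[]>>();       // dep -> version -> workspaces

  for (const root of ["apps", "packages"]) {
    for (const dir of readdirSync(root)) {
      const file = join(root, dir, "package.json");
      if (!existsSync(file)) continue;
      const pkg = JSON.parse(readFileSync(file, "utf8"));
      const deps = { ...pkg.dependencies, ...pkg.devDependencies };
      for (const dep of CORE_DEPS) {
        if (!deps[dep]) continue;
        const byVersion = found.get(dep) ?? new Map<string, string[]>();
        byVersion.set(deps[dep], [...(byVersion.get(deps[dep]) ?? []), pkg.name]);
        found.set(dep, byVersion);
      }
    }
  }

  let failed = false;
  for (const [dep, byVersion] of found) {
    if (byVersion.size > 1) {
      console.error(`${dep} is out of sync:`, Object.fromEntries(byVersion));
      failed = true;
    }
  }
  process.exit(failed ? 1 : 0);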

This doesn't give you 100% of the independence of microfrontends, but it does give you quite a lot of bang for your buck. Depending on how much independence or reuse you want, you might want more or fewer shared external dependencies (for example, your shared packages could be framework-agnostic and just use raw JS)

The build/bundling configuration can set up separate chunks for the core set of shared dependencies (react, router, etc) to improve build times, load speeds and caching.
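
As a rough sketch of what that chunking can look like, assuming Vite/Rollup (webpack's splitChunks can do the same):

  // vite.config.ts
  import { defineConfig } from "vite";

  export default defineConfig({
    build: {
      rollupOptions: {
        output: {
          // long-cached vendor chunk for the agreed cross-team deps
          manualChunks: {
            vendor: ["react", "react-dom", "react-router-dom"],
          },
        },
      },
    },
  });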


First off, I appreciate the thoughtful write-up; under different circumstances this is the direction I’d probably be heading. However, our web frontend code currently lives in a large-ish monorepo (500 packages, 100 apps) and we’re actively pursuing breaking that up into a hybrid repo model, largely in the interest of true team autonomy. We want our code organization to better reflect our Eng org structure and to establish much stronger lines of ownership and responsibility. Obviously we see other compelling reasons to take this big step, but there’s also a lot of risk involved, particularly around dependency management and figuring out how to achieve eventual UI consistency across our web apps, all of which will use a large number of shared libraries/components that will be broadly organized into a handful of repos (likely monorepos) representing domains and owned by different teams. The intent is for this evolution to culminate in a microfrontends architecture as a way to support collaboration across autonomous teams without the need for build-time coupling.


Fair enough, probably a far bigger scale than I'm used to. But I wonder why move away from a monorepo? Publishing and versioning common packages is a huge pain in the ass, especially for JS projects where you need transpiling, sourcemaps, etc.

Some smart branch management so teams can work independently seems better to me. For example, each project gets its own production branch and development branches to trigger deployments, and can pull changes from master as they see fit.

If you are planning on versioning these shared components and dependencies, I highly advise against it: the permutations of versions of underlying common libraries (like React) can create an incompatible-versioning hell where component X claims to work on React ^16.0.0 but in practice was only tested on React ^18.0.0. In my own project I explicitly force all shared dependencies (which I try to keep to a minimum) to be on the same version.
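
One blunt way to enforce that, as a sketch (npm 8.3+ calls this "overrides"; yarn has "resolutions" and pnpm has "pnpm.overrides"):

  /package.json (root)
  "overrides": {
    "react": "18.2.0",
    "react-dom": "18.2.0"
  }

Exact versions rather than ranges, so there is only ever one copy that everybody actually tests against.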

> without the need for build-time coupling

There are two ways of having build-time coupling:

1) Shared build/bundling code, configuration and tools

2) Single build/bundling for all the code

You can most definitely have independent projects sharing the same build/bundling code, configuration and tools while every project is built separately and independently. This can make it hard to integrate with solutions that rely on taking over your bundling though.
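
For example, a sketch of option 1 with a shared config package (hypothetical names, assuming Vite; the same idea works with any bundler):

  // packages/build-config/vite.base.ts
  import { mergeConfig, type UserConfig } from "vite";

  // org-wide defaults live in one place; each app still runs its own independent build
  const baseConfig: UserConfig = {
    build: { sourcemap: true, target: "es2020" },
  };

  export const withBase = (appConfig: UserConfig) => mergeConfig(baseConfig, appConfig);

  // apps/{team1-app-name}/vite.config.ts
  //   import { withBase } from "build-config";
  //   export default withBase({ plugins: [react()] });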


Weird, I just discovered and made my first UML sequence diagrams yesterday and today it’s at the top of HN. Super useful for describing my problem and quite straightforward.


It happens: https://qz.com/646467/how-one-programmer-broke-the-internet-...

I’m by no means in the web-development-sucks-lets-kill-the-build camp; I think modern web frameworks and tooling can be incredibly useful, particularly at scale (people, codebase, features). But JS dependency management is pretty nightmarish rn.


You tease. Dunno why you’d go out of your way to not name names, but it’s working: Who?


This might hold true if you’re talking about desktop browsers, but it’s a different story on mobile, particularly in rapidly growing emerging markets. Both network latency and large JS payloads dramatically affect user experience on low-powered devices, and if UX isn’t a compelling enough argument, there have been plenty of studies showing the real financial cost of slow web pages for businesses that depend on those websites to bring in customers.

I’ve personally spent many hours doing performance analysis, triage and remediation on websites built using modern tech stacks that had inadvertently exchanged UX for DX. Too much JS sent over the wire can definitely tie up the browser’s main thread for whole seconds even on desktop, though in my experience it’s much more common on mobile. This situation can be difficult to correct depending on the abstractions, organization and overall architecture you chose early on, and code-splitting and dead code elimination won’t always fix what’s broken.


I’m curious, where do you get those numbers from? Those are shockingly low numbers and don’t align with my observations, but I also don’t have hard figures to back them up.


FWIW I used to get HBO Max free with my AT&T phone plan, so they have at one point given it away “for free.”

And I think you could be right about the D+ architecture; they went from zero to a streaming service very quickly.


> they went from zero to a streaming service very quickly

Disney basically bought MLB's streaming service and used that for Disney Plus.


Can I ask why that’s not at all surprising to you?


Off-topic question for you: of those 600 games, what’s your short-list of favorites?

I’ve only recently started playing board games and there’s an overwhelming number to choose from.


Sure.

0) Neuroshima Hex! - It's an OS for battles. The armies are little programs and battles are when you let two programs execute simultaneously.

1) 51st State - Same universe/publisher as Neuroshima Hex! You carve out turf in a post-apocalyptic America. Cards in the game can be played in one of three ways. a) Incorporate. Bring it into your state. b) Make a deal. Establish an ongoing trading relationship. c) Conquer. Smash it for parts. A school would give you a worker/turn + victory point if you incorporate it, a worker/turn if you make a deal, and three workers at once if you conquer it. Some factions are better at offense, some are better at trading, etc. It's great with 5.

2) Ticket to Ride. It's simple. Got it when it first came out, still play it.

3) Cuba Libre/Falling Sky/GMT COIN games in general. The series models insurgencies across history: Caesar in Gaul, the American Revolution, etc. Great ways to get people interested in history.

4) 4X Space Games. Space Empires 4X & Twilight Imperium especially.

5) Codenames. It's the best way to get to know new people. Like really get to know how they think. I made a massive multiplayer version of it a few years ago.

6) Bios Megafauna. The rulebook for this one is not the product of an ordered mind. The rules are in seemingly random order, some rules or exceptions only appear in the glossary, and the footnotes are full of scientific/pseudo-scientific ramblings.. And it's a lot of fun.

7) Starship Troopers. The 1970's Avalon Hill game. It's a neat hex-and-counter skirmish game.

8) Railways of the World. Started off as Railroad Tycoon: The Board Game. Play it with as many people around the table as possible.

For RPGs:

0) Warhammer Fantasy Role-Play 4th edition. Incredible. I had absolutely no interest in Warhammer before picking it up.. Now I'm painting a few Warbands for Warcry, have 6th-8th edition of Warhammer Fantasy Battles, and Blood Bowl is set up on my table. WFRP's a really solid game that blends the best of D&D and Call of Cthulhu. And, I really like The Old World as a setting.

1) Conan: Adventures in an Age Undreamed Of. It's a 2d20 game (the Star Trek version is great too) that's equal parts RPG and dissertation on the works of Robert E. Howard.

2) DCC RPG. If you like D&D, play DCC. It's a heavily modified 3rd edition with a bonkers spell system and level-0 meat grinder dungeons. Some games try to be weird through art and tone.. DCC is weird through mechanism. It works. Everyone generates 4+ characters, runs them through a meat grinder, whoever survives to the end is your 1st level character, backstory included. <--THIS IS THE ONE if you want to create your own setting or run a hex crawl.

Overwhelming number of games:

I don't read reviews or watch videos. I still do what I've always done: walk into game stores/conventions and buy what looks interesting. You get a lot of junk but also find great games that you might have missed because the internet decided they are bad. If a game is popular you're not going to miss it. Just look around and see what people are playing at 12:00 AM at the game store/con.


Thanks! Of your list I only know Ticket to Ride and Codenames (both great), so this is a list rich with potential for me. I really appreciate you taking the time to write that out.


