
I believe we all carry massive mythologies that are tough to displace.

My Kagi-fu fails me. It's by an environmentalist who says that any advice, say "avoid poisoning fish", stands against a massive "the line must go up" mythology. He compares it to the geocentrism of the Church: how the Sun, Moon and stars rotate around us and provide for humanity, and how heliocentrism also had to stand up against this massive mythology.


```
╰─ wasmer run python/python

error: Spawn failed
╰─▶ 1: compile error: Validate("exceptions proposal not enabled (at offset 0x191a)")
```


And the value of AI as pushed to us by these companies is in doing larger units of work.

But... reviewing code is harder than writing code. Expressing how I want something to be done in natural language is incredibly hard.

So over time I'm spending a lot of energy on those things, and only getting it 80% right.

Not to mention I'm constantly in this highly suspicious mode, trying to pierce through the veil of my own prompt and the code generated, because it's the edge cases that make work hard.

The end result is exhaustion. There is no recharge. Plans are front-loaded, and then you switch to auditing mode.

Whereas with code you front-load a good amount of design, but you can make changes as you go, and since you know your own code the effort to make them is much lower.


Imagine a fake engineer who read books about engineering as scifi, and thanks to his superhuman memory, he's mastered engineer-speak so well that he sounds more engineery than the top engineers in the world. Except that he has no clue about engineering, and to him it's the same as literature or prose. Now he's tasked with designing a bridge. He pauses for a second and starts speaking, in his usual polished style: "sure, let me design a bridge for you." And while he's talking, he's staring at you with a perfectly blank expression, for his mind is blank as well.

Think of the absurdity of trying to understand the number Pi by looking at its first billion digits and trying to predict the next digit. And think of what it takes to advance from memorizing digits of such numbers and predicting continuations with astrology-style logic, to understanding the math behind the digits of Pi.


This is how I used to do it over TCP, 20 years ago: each request message has a unique request ID which the server echoes and the client uses to match against a pending request. There is a periodic timer that checks if requests have been pending for longer than a timeout period and fails them with an error bubbled up to the application layer. We even had an incrementing sequence number in each message so that the message stream could resume after a reconnect. This was all done in C++, and didn't require a large amount of code to implement. I was 25 years old at the time.
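To illustrate the shape of it, here's a minimal asyncio sketch of the same request-ID matching idea (the names and framing details are made up; the original was plain C++ over sockets):

```
import asyncio
import itertools

class RpcClient:
    """Sketch of request/response matching over a stream; transport details omitted."""

    def __init__(self):
        self._ids = itertools.count(1)               # unique request ID per message
        self._pending: dict[int, asyncio.Future] = {}

    async def request(self, payload: bytes, timeout: float = 5.0) -> bytes:
        req_id = next(self._ids)
        fut = asyncio.get_running_loop().create_future()
        self._pending[req_id] = fut
        await self._send(req_id, payload)            # framing / sequence numbers omitted
        try:
            # Stand-in for the periodic timeout check: fail if no reply arrives in time.
            return await asyncio.wait_for(fut, timeout)
        finally:
            self._pending.pop(req_id, None)

    def on_reply(self, req_id: int, payload: bytes) -> None:
        # Called from the receive loop: match the echoed ID against a pending request.
        fut = self._pending.get(req_id)
        if fut is not None and not fut.done():
            fut.set_result(payload)

    async def _send(self, req_id: int, payload: bytes) -> None:
        ...  # write (req_id, sequence number, payload) to the socket
```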

What the author and similar web developers consider complex, awkward or difficult gives me pause. The best case scenario is that we've democratized programming to a point where it is no longer limited to people with highly algorithmic/stateful brains. Which would be a good thing. The worst case scenario is that the software engineering discipline has lost something in terms of rigor.


For those who don't feel like taking math courses in a formal setting, making games from scratch is a fun way to learn and apply linear algebra and calculus.

I never really needed determinants in my life until I tried moving a spaceship towards another object. Trying to render realistic computer graphics gets you into some deep topics like FFTs and the physics of light and materials, with some scary-looking math, but I can feel my mind sharpening with each turn of the page in the book.
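For a concrete taste of where the determinant shows up (a toy sketch, not from any particular book): the sign of the 2D cross product of your heading and the vector to the target tells you which way to turn.

```
def turn_direction(forward, to_target):
    # 2x2 determinant, i.e. the z-component of the cross product of the ship's
    # heading and the vector from the ship to the target.
    det = forward[0] * to_target[1] - forward[1] * to_target[0]
    if det > 0:
        return "turn left"    # target lies counter-clockwise of the heading
    if det < 0:
        return "turn right"
    return "aligned (or directly behind)"

# Ship pointing along +x, target up and to the right:
print(turn_direction((1.0, 0.0), (0.5, 0.5)))   # turn left
```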


Specifics, sure. I don't expect them to understand the specifics. I don't want them across every task.

But I also don't want to (and currently don't have to) explain specific risks regarding what I do, and I don't have to justify how long things take, because my management understands that. We speak the same language. It's glorious.

I mean, just comparing my clients that have relevant technical knowledge vs the ones that don't: the clients without that knowledge need "meetings" and "catchups" and immense email threads, on the order of 10 times as many as the ones that do understand. That's measurable (to me) waste.

Another observation of mine is that non-technical people really have no ability to recruit and manage technical people. I have seen multiple businesses brought low because the "technical" person brought in to manage the "technical" side of the startup actually had NFI. Or, when they do accidentally hire someone competent, their requests for resources or time are ignored, even when well justified. The non-technical founder or CEO either has to trust someone (which fails a lot) or they don't trust someone (and that's even worse).


OP should really save their money. Cursor has a pretty generous free trial and is far from the holy grail.

I recently (in the last month) gave it a shot. I would say only once in the maybe 30 or 40 times I used it did it save me any time. The one time it did, I had each line filled in with pseudo code describing exactly what it should do… I just didn’t want to look up the APIs.

I am glad it is saving you time but it’s far from a given. For some people and some projects, intern level work is unacceptable. For some people, managing is a waste of time.

You’re basically introducing the mythical man month on steroids as soon as you start using these


I think HM is simply not practical. You don’t want your types to be a multidimensional logic puzzle solved by a computer, cause you want to reason about them and you are much weaker than a computer. You want clear contracts and rigidity that they provide to the whole code structure. And only then to automatically fill up niches to avoid obvious typing. You also rarely want Turing Completeness in your types (although some people are still in this phase, looking at some d.ts’es).

Weak var/auto is practical. An average program and an average programmer have average issues that do not include “sound type constraints” or whatever. All they want is to write “let” in place of a full-blown declaration that must be named, exported, imported and then used with various modifiers like Optional<T> and Array<T>. This friction is one of the biggest reasons people may turn to the untyped mess. Cause it doesn’t require this stupid maintenance every few lines and just does the job.


gRPC is very performant. A few points: ensure you have a script to compile the gRPC protobufs, ideally in Docker so that you don't pollute your local environment. The other pitfall is saving raw protobuf binaries; you will face backwards compatibility issues as you change the definition of the protobuf, so just write everything into MCAPs. gRPC essentially replaces ROS messages with protobuf definitions and is not a publish/subscribe model, but you can build publishers/subscribers out of it. It is managed by Google, used in Android, web dev etc., so it is very performant and reliable.
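As a rough sketch of building a subscriber out of it, assuming a hypothetical telemetry.proto with a server-streaming Subscribe RPC (the service, message and module names below are made up, not any standard API):

```
import grpc
import telemetry_pb2         # generated from the hypothetical telemetry.proto
import telemetry_pb2_grpc

def subscribe(topic, address="localhost:50051"):
    channel = grpc.insecure_channel(address)
    stub = telemetry_pb2_grpc.TelemetryStub(channel)
    # A server-streaming RPC gives you the "subscriber" half of pub/sub:
    # the server keeps pushing Sample messages for the topic until the stream closes.
    for sample in stub.Subscribe(telemetry_pb2.SubscribeRequest(topic=topic)):
        yield sample
```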

As a disclaimer, the last time I gave Zig a solid shot was when 0.12 released. The last time I played with comptime properly was in 0.11.

There's a heap of praise thrown at zig comptime. I can certainly see why. From a programming language perspective it's an elegant and very powerful solution. It's a shame that Rust doesn't have a similar system in place. It works wonderfully if you need to precompute something or do some light reflection work.

But, from an actual user perspective it's not very fun or easy to use as soon as you try something harder. The biggest issue I see is that there's no static trait/interface/concept in the language. Any comptime type you receive as a parameter is essentially the `any` type from TypeScript or `void` from C/C++. If you want to do something specific* with it, like call a specific method on it, you have to make sure to check that the type has it. You can of course ignore it and try to call it without checking it, but you're not going to like the errors. Of course, since there are no interfaces you have to do that manually. This is done by reading the Zig stdlib source code to figure out the type enum/structures and then pattern-matching like 6 levels deep. For every field, every method, every parameter of a method. This sucks hard. Of course, once you do check for the type you still won't get any intellisense or any help at all from your IDE/editor.

Now, there are generally two solutions to this:

One would be to add static interfaces/concepts to the language. At the time this was shot down as "unnecessary". Maybe, but it does make this feature extremely difficult to use for anyone but the absolutely most experienced programmers. Honestly, it feels very similar to how Rust proc macros are impenetrable for most people.

The second one is to take a hint from TypeScript and take their relatively complex type system and type assertions. Eg. `(a: unknown): a is number => typeof a === 'number'`. This one also seems like a bust as it seems to go against the "minimal language" mantra. Also, I don't get the feeling that the language dev team particularly cares about IDEs/LSPs as the Zig LSP server was quite bad the last time I tried it.

Now, the third solution and the one the people behind the Zig LSP server went with is to just execute your comptime functions to get the required type information. Of course, this can't really make the experience of writing comptime any easier, just makes it so that your IDE knows what the result of a comptime invocation was.

So in short, it is as difficult to use as it is cool. Really, most of the language is like this. The C interop isn't that great and is severely overhyped. The docs suck. The stdlib docs are even worse. I guess I'm mostly disappointed since I was hoping Zig could be used where unsafe Rust sucks, but I walked away unsatisfied.


Intellectually capable to do what? Orchestrate some of the largest distributed network systems in the world? I bet you all the best functional programmers wouldn't be able to create something as big as Google if you got them in a room together. Nor would they be able to create anything as important as UNIX, like one of Go's creators did. Nor UTF-8. Nor the JVM HotSpot machine. I could go on. What have you done that Ken Thompson couldn't?

The fact that none of these highly accomplished individuals want anything FP-related in Go says far more than what typical Go-haters want to think it does.


Reflecting on these words, it’s clear that many people take a “realist” perspective on power in and between human societies, and see no reason at all to strive to create better conditions for all or even most humans.

My take: it’s a luxury position that probably only makes sense if you’ve been a winner in the birth lottery of the global elite. They are the enablers of power-for-power’s-sake populists and dead-eyed bureaucrats because they are certain, at least until it’s too late, that bad things won’t happen to them or their loved ones.


Justice has to be declared as an essential principle of human organisation.

If the 1984 vision of a boot stamping on a human face forever is going to work out to be true, then so be it.

The ICJ is at least holding out against that future.

What will you (as a human) choose to do?

These days and years are going to be definitional I think.


Sun RPC [1] — the official name was ONC RPC — was probably the first modern RPC. It's an Internet standard. You've probably used it without realizing it; it's the protocol that NFS uses. If you've ever had to deal with the NFS "portmapper", then that's because of Sun RPC. Some other protocols use it as well.

It uses XDR as the schema definition language. XDR is basically analogous to Protobuf files. It has structs and tagged unions and so on.

Another technology from around the same time was DCE/RPC [2]. Microsoft adapted it wholesale as MSRPC. Windows used it extensively around the time of NT 3.x for protocols like Exchange Server, and I believe it's still in wide use. DCE/RPC has its own IDL; you used the compiler to generate the stub implementations, just like Protobuf/gRPC.

Microsoft COM uses DCE/RPC under the hood, with lots of extensions [3]. CORBA emerged around the same time as DCE/RPC and COM and is roughly analogous in functionality.

COM and CORBA are explicitly object-oriented. While protocols like DCE/RPC and gRPC return values that are pure data, such as primitives and structs, COM and CORBA can return interfaces. An interface pretends to be a local in-memory instance, but its methods are "stubs" that invoke the underlying RPC call to execute them remotely. Methods can also return further interfaces, which means you can have whole trees of objects which are remote. Adding to that, both COM and CORBA use reference counting to hold onto objects, so if a client has received a remote object and reference counted it, the server needs to keep it around until the client either releases the refcount, or the client dies. COM and CORBA called this location transparency, in that any object could be either local or remote, and a consumer of the interface didn't need to know about it. Of course, while nicely magical, this leads to a lot of complexity. I developed a rather complex distributed DCOM application in the late 1990s, and while it did, inexplicably, work quite well, it was also a nightmare to debug and keep stable.
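To make the "stub that pretends to be local" idea concrete, here's a rough Python sketch; the transport object and its call/add_ref/release methods are stand-ins for illustration, not any real COM or CORBA API:

```
class RemoteHandle:
    """Looks like a local object; every method call is really a remote invocation."""

    def __init__(self, transport, object_id):
        self._transport = transport
        self._object_id = object_id
        self._transport.add_ref(object_id)   # server keeps the object alive for this client

    def __getattr__(self, method_name):
        def stub(*args):
            # Marshal the call, ship it to the server-side object, unmarshal the result.
            return self._transport.call(self._object_id, method_name, args)
        return stub

    def release(self):
        # Until every client releases (or dies), the server cannot free the object.
        self._transport.release(self._object_id)
```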

While COM is alive and well in Windows these days (and interestingly enough, some APIs like DirectX use the COM pattern of defining interfaces via IUnknown etc., but are not actually true COM), DCOM turned out to be a mistake, and CORBA failed for some of the same reasons, although for many reasons unique to CORBA as well. CORBA made tons of design mistakes.

SOAP and WS-*, and of course XML-RPC and JSON-RPC, came later. The wheel has been reinvented many times.

[1] https://datatracker.ietf.org/wg/oncrpc/about/

[2] https://en.wikipedia.org/wiki/DCE/RPC

[3] https://learn.microsoft.com/en-us/openspecs/windows_protocol...


Using Occam's razor, that is less probable than the model picking up on statistical regularities in human language, especially since that's what they are trained to do.

This article is the what, but it is missing the why. I'm supremely biased against scrum, so basically ignore what I am saying if you're a fan of it. Fundamentally, Kanban is pull-based, and Scrum is push-based. That's the difference.

Scrum spends time determining how long things will take, and then attempts to push it into a schedule via story pointing and other ceremony, where people pretend they're not guessing how long things will take by using points and t-shirt sizes and anything other than time to guess how much they can get done in some arbitrary amount of time. Then devs do what they were going to do anyway, and everyone slaps each other on the back because they're measuring the success they're having. It's a dream for people who like to count things and build checklists and check them off. Its success has little to do with the process and much to do with the team's ability to gather requirements and do their job. It's ideal for contract gigs where it's as important to track how much time it takes you to do things as it is to actually do things.

In Kanban you put what you want to do in a list, and pull the things from the list in order as you complete them. If customers need things quicker, you change the order of things in the list while communicating to them what will slip and what will accelerate. They take as long as they take (because that's how the world works, yes, even in scrum), but with 100% less ceremony and pointless coordination. Kanban is about constantly managing constraints and eliminating waste. You don't need to strictly measure how long things are taking. You pay attention when things don't move off the board, and modify resourcing in whatever way will get things unstuck. It's less fun because you don't get to pretend you know how long something will take, but it's more fun because you get to be an adult professional instead of a servant to the processes of people who don't actually build things.

This is somewhat tongue-in-cheek, but in my career I've never seen switching to pull-based patterns make things worse, and I've often seen them make things better, including morale. It doesn't seem like it will work, but in practice there are so many efficiencies gained in pull-based processes that it usually ends up being faster and feeling better while doing it.


The problem with all of these technologies is that they were invented by different divisions of Microsoft to do different things. That, and Microsoft chasing the Next Big Thing.

What we consider to be "Win32 apps" are built with a framework in USER.dll, which is half reimplementation of the classic MacOS Toolbox API and half a pure-C object oriented class system. It's been here since the beginning, and is the lowest common denominator for getting anything on screen. Every other toolkit eventually opens a USER window, attaches the appropriate window class and wndprocs to it, and then yields CPU control to an event loop that, among other things, contains a Windows USER message pump.

USER, being an object-oriented, pure-C[0] API, is infamously verbose to work with. The "200 line Hello World" example everyone passed around back in the 90s is specifically that verbose because of all the bookkeeping you have to do for USER's sake. It is possible to build USER apps that work well, but it puts a lot of onus on the programmer. Even things like high-DPI support[1] or resizable windows are a pain in the ass because they all have to be implemented manually.

Microsoft's original answer for "USER is too hard" was to adopt Visual Basic or MFC as you mentioned. AFAIK .NET WinForms was also a wrapper around USER. This is why Windows had a cohesive visual appearance all the way through to Windows 7, because everything was just developer-friendly wrappers around common controls. Even third-party widget toolkits could incorporate those controls as subwindows as needed[2].

The problem with USER is that it was built for multiple windows and applications that render (using the CPU!) to a shared surface representing the final visible image. Modern toolkits instead have multiple separate surfaces and draw on them as needed before presenting a final image to a compositor that then mixes other windows together to get a final image. Windows Vista onward has the compositor, but the UI toolkits also need to be surface-aware instead of chucking a bunch of subwindows at DWM at the last minute.

WPF is the first attempt at a modern UI toolkit. Relative to USER, resources are replaced with XAML and window classes are replaced with... well, actual language classes. Except it was developed by the DevTools division (aka DevDiv), and only ran on .NET with managed code. If you had a native application or just didn't want to pay the cost of having a CLR VM, tough.

Then the iPhone launched. And the iPad launched. The thing is, good tablet UI needs GPU-acceleration up and down the stack, so Microsoft shat themselves, gave the Windows division (aka WinDiv) the keys to the castle, and they completely rewrote WPF in C++ with some fancy language projections. That became "Metro" in Windows 8, then "Modern UI" after a trademark dispute. Microsoft wanted Windows 8 to be a tablet OS, damn it, with full-screen only apps and no third-party app distribution.

And then most people just bought Surface tablets, opened the Desktop "app", and used the same USER apps they were used to, complaining about the Start Screen along the way. So Microsoft pivoted back to a normal desktop with Modern UI apps, which are now called UWP apps, and there's a whole bunch of new glue APIs to let you stick XAML subwindows inside of USER or just use UWP outside of AppX packages, which is what Windows 8 should have done, and now everything is just a mess. WinUI 3 is just an upgrade to the XAML library that UWP apps use, but it sounds like Yet Another Toolkit. MAUI is some kind of meta-toolkit like the old AWT on Java.

At some level, I can explain this, but it's not reasonable. There is no "native" UI toolkit or consistent look-and-feel on Windows anymore. I suspect this, more than anything else, is the reason why Windows killed Aero blur-behind everywhere, and why Electron apps are so damned popular now. HTML and CSS are almost as old as USER, but with consistent engineering support and developer experience.

USER is an enhanced clone of the MacOS API, so it's natural to see what Apple did when confronted with the same problems. MacOS didn't have an object system at all, you just threw a bunch of controls onto a list and the system rendered them. That (along with user mode applications) was actually one of the reasons why they bought NeXT. OSX's AppKit toolkit shipped with compatibility bridges for Toolbox apps, but it was still about as advanced as USER was when it came to GPU usage, given that it was built around the same era as Windows, just for beefier hardware.

So what did Apple do? They made AppKit speak layers. They wrote a whole new compositing system called CoreAnimation to do in-process compositing, with all the common controls knowing how to manage it and layer-unaware third-party controls just doing whatever made sense. And this itself was a trojan horse for UIKit: the compositing library had been written to support a touch tablet demo that was later rolled into the Purple project to produce the iPhone. Y'know, the thing that actually kicked Microsoft's ass so much they decided to fracture their development ecosystem into 40 different UI toolkits with confusing names. In comparison, on modern macOS the big split comes from SwiftUI and Catalyst, but those are both wrappers around AppKit controls rather than ground-up rewrites of UI toolkits nobody dares touch.

[0] Or possibly Pascal, given the MacOS heritage

[1] The correct way to do high-DPI is for the windowing toolkit to work exclusively in virtual coordinates. Physical device coordinates and their derivatives should be converted away from at the earliest possible convenience and converted back into as late as possible. At a minimum, no user-facing APIs should use physical coordinates.

USER does not do this, even though there's an option to make it do this, which has worked wonders on every non-DPI-aware app I've thrown at it.

[2] Or, alternatively, implement their own. My favorite story about this is Internet Explorer, which ships with its own implementations of the common controls specifically so that HTML form elements don't have to hold an HWND each and can share the parent window.


I used to be a teaching assistant for CS 61A (intro to programming) at Berkeley teaching from this book with Brian as the instructor.

One of Brian's primary points is the following:

> Scheme ... has a very simple, uniform notation for everything. Other languages have one notation for variable assignment, another notation for conditional execution, two or three more for looping, and yet another for function calls. Courses that teach those languages spend at least half their time just on learning the notation. In my SICP-based course at Berkeley, we spend the first hour on notation and that's all we need; for the rest of the semester we're learning ideas, not syntax.

Bullshit. Again, I was a TA for this course. You do not spend the rest of the semester on ideas, you spend the rest of the semester on the students being very confused.

This "everything looks the same" property of Scheme and of all LISP-like languages is a bug, not a feature. When the semantics is different, humans need the syntax to be different. In contrast, LISP/Scheme make everything look the same. It is quite hard to even tell a noun from a verb. This makes learning it and teaching it hard, not easy.

Brian is selling a fantasy here. If you think Scheme is so great, look at this nightmare of examples showing the various ways to implement the factorial function in Scheme: https://erkin.party/blog/200715/evolution/

All of this "abstractions first, reality second" agenda is just a special case of what I call "The Pathology of the Modern": the pathological worship of the abstract over the concrete. Everything modernism touches turns into shit. I am done with living in modernist shit and I hope you are too.


To focus on something I don't think gets a lot of play:

> To me, the local minima looked "good"

AI's entire business [0] is generating high quality digital content for free, but we've never ever ever needed help "generating content". For millennia we've sung songs and told stories, and we were happy with the media the entire time. If we'd never invented Tivo we'd be completely happy with linear TV. If we'd never invented TV we'd be completely happy with the radio. If we'd never invented the CD we'd be completely happy with tapes. At every local minimum of media, humanity has been super satisfied. Even if it were a problem, it's nowhere near the top of the list. We don't need more AI-generated news articles, music, movies, photos, illustrations, websites, instant summaries of research papers, (very very bad) singing. No one's looking around saying, "God there's just not enough pictures of fake waves crashing against a fake cliff". We need help with stuff like diseases and climate change. We need to figure out fusion, and it would be pretty cool if we could build the replicator (I am absolutely serious about the replicator). I remember a quote from long ago, someone saying something like, "it's lamentable that the greatest minds of my generation are focused 100% on getting more eyeballs on more ads". Well, here we are again (still?).

So why do we get wave after wave of companies doing this? Advances in this area are insanely popular and create instant dissatisfaction with the status quo. Suddenly radio is what your parents listened to, fast-forwarding a cassette is super tedious, not having instant access to every episode of every show feels deeply limiting, etc. There's tremendous profits to be had here.

You might be thinking, "here we go again, another 'capitalism just exploits humanity's bugs' rant", which of course I always have at the ready, but I want to make a different point here. For a while now the rich world has been _OK_. We reached an equilibrium where our agonies are almost purely aesthetic: "what kind of company do I want to work for", "what's the best air quality monitor", "should I buy a Framework on a lark and support a company doing something I believe in or do the obvious thing and buy an MBP", "how can I justify buying the biggest lawnmower possible", etc. Barring some big dips we've been here since the 80s, and now our culture just gasps from one "this changes everything" cigarette to the next. Is it Atari? Is it Capcom? Is it IMAX? Is it the Unreal Engine? Is it Instagram? Is it AI? Is it the Internet? Is it smartphones? Is it Web 2.0? Is it self-driving cars? Is it crypto? Is it the Metaverse and AR/VR headsets? I think us in the know wince whenever people make the leap from crypto to AI and say it's just the latest Silicon Valley scam--it's definitely not the same. But the truth in that comparison is that it is just the next fix, we the dealers and American culture the junkies in a codependent catastrophe of trillions wasted when like, HTML4 was absolutely fine. Flip phones, email, 1080p, all totally fine.

There is peace in realizing you have enough [1]. There is beauty and discovery in doing things that, sure, AI could do, but you can also do. There is joy in other humans. People listening to Hall & Oates on Walkmans teaching kids Spanish were just as happy (actually, probably a lot happier) as you are, and assuredly happier than you will be in a Wall-E future where 90% of your interactions are with an AI because no human wants to interact with any other human, and we've all decided we're too good to make food for each other or teach each other's kids algebra. It is miserable, the absolute definition of misery: in a mad craze to maximize our joy we have imprisoned ourselves in a joyless, desolate digital wasteland full of everything we can imagine, and nothing we actually want.

[0]: I'm sure there's infinite use cases people can come up with where AI isn't just generating a six fingered girlfriend that tricks you into loving her and occasionally tells you how great you would look in adidas Sambas. These are all more cases where tech wants humanity to adapt to the thing it built (cf. self-driving cars) rather than build a thing useful to humanity now. A good example is language learning: we don't have enough language tutors, so we'll close the gap with AI. Except teaching is a beautiful, unique, enriching experience, and the only reason we don't have enough teachers is that we treat them like dirt. It would have been better to spend the billions we spent on AI training more teachers and paying them more money. Etc. etc. etc.

[1]: https://www.themarginalian.org/2014/01/16/kurt-vonnegut-joe-...


The OO paradigm is much more fundamental and natural to the way we think of and model Domain Concepts/Objects. I remember studying fundamental techniques like CRC(Classes/Responsibilities/Collaborations), Commonality/Variability Analysis, Rumbaugh's OMT etc. and thinking how natural it felt to model domain concepts directly in code. People seem to have forgotten all that and only focus on fads/acronyms like Patterns/SOLID etc. without really understanding how they came about and what their nuances are. The result is that people don't think through their analysis/design but merely follow a cookie-cutter approach popularized by some self-aggrandizing author and when things fail blame the OO approach.

> but also with a sense of misplaced religious purity regarding the evils of state

To clarify, Rust isn't against state at all. Rust bends over backwards to make mutation possible, when it would have been far easier (and slower, and less usable) to have a fully-immutable language. What Rust is against is global mutable state, and an aversion to global mutable state isn't a religious position, it's a pragmatic position, because global mutable state makes concurrency (and reasoning about your code in general) completely intractable.


Strong typing is like violence: if it isn't solving all your problems, you must just not be using enough of it.

Suppose you touch a fireplace once, do you touch it again? No.

OK, here's something much stranger. Suppose you see your friend touch the fireplace, he recoils in pain. Do you touch it? No.

Hmm... whence statistics? There is no frequency association here, in either case. And in the second, even no experience of the fireplace.

The entire history of science is supposed to be about the failure of statistics to produce explanations. It is a great sin that we have allowed pseudosciences to flourish in which this lesson isn't even understood; and worse, to allow statistical showmen with their magic lanterns to preach on the scientific method. To a point where it seems, almost, that science as an ideal has been completely lost.

The entire point was to throw away entirely our reliance on frequency and association -- this is ancient superstition. And instead, to explain the world by necessary mechanisms born of causal properties which interact in complex ways that can never uniquely reveal themselves by direct measurement.


Same, re-reading my replies I realize I phrased things in a stand-offish way. Sorry about that.

Thanks for being willing to take a step back. I think possibly we are talking about two different things. IME most instances of exploitation are due to much more rudimentary vulnerabilities.

My bias is that, while I did work on mitigations for stuff like Meltdown and Rowhammer, most "code level" memory vulnerabilities were easier to just patch, than to involve my team, so I probably under-estimate their number.

Regardless, if I were building computation-as-a-service, 4 types of vulnerability would make me worry about letting multiple containers share a machine:

1. Configuration bugs. It's really easy to give them access to a capability, a mount or some other resource they can use to escape.

2. Kernel bugs in the filesystems, scheduling, virtual memory management (which is different from the C memory model). It's a big surface. As you said, better use a VM.

3. The kernel has straight up vulnerabilities, often related to memory management (use after free, copy too much memory, etc.)

4. The underlying platform has bugs. Some cloud providers don't properly erase physical RAM. x86 doesn't always zero registers. Etc.

Most of my experience is in 1, a bit of 2 and mitigation work on 4.

The reason I think we're talking past each other a bit is that you're generating CVEs, while I mostly worked on mitigating and detecting/investigating attacks. In my mind, the attacks that are dirt cheap and I see every week are the biggest problem, but if we fix all of those, and the underlying platform gets better, I see that it'll boil down to trusting the kernel doesn't have vulnerabilities.


> I appreciate that most of the ECS hype has been around specific use cases, though.

Depends a lot on where you hang out. On amateur gamedev fora, I have seen many many many posts from beginners where they are struggling to cram ECS into their game and feel they need to because it's simply "the way" that one architects a game. Even if their game is written in a language that offers no performance benefits and their simulation benefits nothing from it, they just think they have to.

It's heartbreaking watching someone go, "I know I could just store this piece of data right here in my entity class, but I'm not supposed to because of DoD, so how should I do this?" And then they get back confident answers that involve pages of code and unnecessary systems.

It's exactly like the OOP fad of the 90s, just in the opposite direction. Yes, it turned out you don't need to encapsulate all data in classes. But, also, it is OK to just store data in stuff. You don't have to make every letter of your pop-up dialog a separate component.
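For what it's worth, this is the kind of thing I mean, with a made-up Door entity. The plain version is a few lines; the "proper" ECS version would need an OpenableComponent, a HealthComponent and a system to iterate them, for no benefit here.

```
# The "just store the data on the entity" version.
class Door:
    def __init__(self):
        self.is_open = False     # no OpenableComponent + DoorSystem needed
        self.hit_points = 10     # no HealthComponent needed either

    def open(self):
        self.is_open = True

door = Door()
door.open()
```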


I liked CORBA, mainly the IDL and IIOP (I didn't use many of the parts that other people hated). It felt like a sequel to Sun RPC and it was easy to move to gRPC. The main problem I had was the Any type wasn't performant.

After working with remote call systems for a while I concluded there should only be two RPCs:

GetMessage() (which polls for incoming messages) and PutMessage() (which sends a message). All the method information goes in the payload. There are no verbs or headers (HTTP). There is no relationship between the message and some resource system (REST). The name "message" is an indicator that a message is being passed, rather than a remote call with function-call-like semantics (similar to MPI).
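A minimal in-process sketch of that two-call surface (the names and the bytes-payload convention are just illustrative; a real version would sit on a network transport):

```
import queue

class MessageEndpoint:
    def __init__(self):
        self._inbox = queue.Queue()

    def put_message(self, payload: bytes) -> None:
        # "Send" a message; the method name, routing and arguments all live in the payload.
        self._inbox.put(payload)

    def get_message(self, timeout=None):
        # Poll for the next incoming message; None means nothing arrived in time.
        try:
            return self._inbox.get(timeout=timeout)
        except queue.Empty:
            return None
```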


Beyond the importance of controlling the placebo effect, I am worried that a lot of the drug-depression research is overlooking an important possibility: that the thing about ketamine/psilocybin/etc that is helping with depression is not some latent property of the molecule, but rather the actual transcendent experience of the trip. In other words, the trip is the point, not the mechanistic neuro-tinkering [0].

Importantly, this tracks with what we know about the protective effects of things like religiosity against depression. As such, the qualitative experience of the drug might not be something we can (or should) do away with. I would even go as far as suggesting that an absence of transcendence in one's life is precisely what causes a large segment of people to become depressed in the first place, and that perhaps drugs are helpful only insofar as they produce a transcendent experience.

This isn't to say we can't take a scientific approach to treating depression, but that has to be balanced with something profoundly metaphysical: the actual qualia of life experience. Wellness isn't the absence of disease; it's the presence of thriving, and that includes within it a component of things like hope, inspiration, and elevation above the ordinary. We used to have various ceremonies designed to turn us towards the numinous, but we've pretty systematically dismantled those in favor of a grounded hyper-rationality [1]. As a scientist, I can't really object to rationality on its own, but it may be worth considering non-rational, transcendent experience as a fundamental psychological need.

[0] If you're a materialist, you might object that neurological machinery is not differentiable from qualia. Fair enough! I even agree! My point is simply that medicine needs to consider qualia as a major parameter in the treatment of depression. Fixing depression is not like fixing a car.

[1] I suspect most people here are familiar with Nietzsche's "God is dead" quote. Many people in my entourage are floored to discover that he correctly predicted the dramatic increase in anxiety, depression, neuroticism and nihilism that is present in modern life.


What I mean is: can a process block until a particular key appears, then resume execution after receiving the associated value? That's the main differentiating feature of a tuple space, and it wasn't clear to me whether this system supports that or not (or even has a concept of a process).
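Roughly, the behavior I have in mind is this (a toy in-memory sketch, not the system under discussion):

```
import threading

class TupleSpaceLite:
    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def put(self, key, value):
        with self._cond:
            self._data[key] = value
            self._cond.notify_all()          # wake anyone blocked waiting for this key

    def get(self, key, timeout=None):
        with self._cond:
            # Block until the key appears, then resume with its value.
            if not self._cond.wait_for(lambda: key in self._data, timeout=timeout):
                raise TimeoutError(key)
            return self._data[key]
```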

The cloud is the new mainframe.

You work offline on a "job definition" of some sort, submit it to the proprietary shared system, wait in a queue, then you receive a log file generated by a system you don't control. You can't run the proprietary system code locally on a workstation, and hence the inner loop is often tens of minutes long at best, hours or days at worst. There's no preview, no "what if" mode or "dry run". You work directly on production even if it's called "test" because there's only one system.

The real problem isn't YAML. It wouldn't matter if the pipelines were scripted in God's own programming language.

Software development on workstations instead of central timeshare mainframes became wildly popular because it allowed a dramatically faster inner loop, it allowed isolation from production environments, and it gave control back into the hands of the developers.

Current-generation CI/CD pipelines generally undo all of that.

Single-box Kubernetes reintroduces most of what made workstation-based development good, but it is still a very new system and has many teething issues.

PS: A related issue to yours is that there are great solutions for the solo dev doing click-ops for one app, and there are great solutions for megacorps doing automation at scale for thousands of devs, but in the middle, where you have a couple of enterprise devs managing a few dozen apps, it's just madness.

