The government already has a monopoly on force and steals our money under the threat of imprisonment in the form of taxation. Now that money is supposed to be allocated into risky investments for abstract, unmeasurable benefits like "soft power," for the benefit of people somewhere else? That creates resentment here. It also infantilizes the Global South.
How much better would it be if we eliminated the IMF, shrank government, and lowered taxes so individuals could freely choose how to benefit the Global South via nonprofit giving? If people don't give voluntarily, why should they be compelled to give in the form of taxation?
I feel like Graeber, an anarchist, would appreciate a decentralized solution like that instead of trying to handle this from the top down.
Nature abhors a vacuum. If western countries are not setting up credit facilities, someone else will.
It might be predatory private entities looking to get their hands on state assets, or it might be the Saudis or Chinese looking to exert more political power.
If we want to seal ourselves off from most of the world and stop trading or participating in regional security arrangements, then this is a reasonable approach.
I think people underestimate how much of our material wealth in the west is reliant on these trade and security arrangements.
Countries like the United States can pursue a North Korean model and try to produce everything domestically, but the standard of living is going to plummet.
The view that "you have _your_ money and then the government takes it away from you" ignores that without the society you are embedded in, the very concept of money would be meaningless. It is more accurate to say that part of the social contract for participating in a society that enables you to """make""" that money is that you pay a "fee", i.e. taxes (among other things, e.g. abiding by laws which are democratically decided, etc.).
Society is a mutually beneficial arrangement. I agree that you should be able to "opt out" and live in a homestead way out in the forest, and that if you do so you should be exempt from taxes. But unless you live in a Kaczynski cabin, that view makes no sense.
Sure, there are some beneficial things that the government does. Like roads, police, fire, regulatory agencies; all the usual things that are cited as "We live in a society" line items. But those things are a tiny minority of government spending.
The VAST majority of the federal budget goes to entitlements. Another massive chunk goes to paying interest on debt, because we spent money we didn't have on entitlements. A similarly large chunk goes to defense (which you could make an argument for, except that the vast majority of it is not actually used for defense, but for intervening in faraway conflicts).
So going back to all the "society" items: the roads, infrastructure, electricity, basic services, police, clean water, airports; all the stuff you really need for a society. They're a tiny, minute sliver of the money the government takes from us on a regular basis. You could reduce taxes by 90% and still provide everything you need for a functional society, without the parasitic drain on the economy.
The real parasitic drain on the economy is the large accumulations of capital that prevent the Invisible Hand from manifesting. Being alive is an inelastic good. If people have to choose between starving to death and laboring for scrip to use at the company grocery, it breaks the market's efficiency at valuing labor.
So the government is simultaneously wasting too much money feeding the poor (whom that other guy wants to labor or die), and forcing too many people to labor for no gain because of whatever thing you're imagining here.
It's quite possible taxation results in more starvation than it prevents, particularly when used to incentivize low-income fertility and disincentivize production and labor.
Or, perhaps the 60% of American households that give to charity would be able to take their increased pay and more efficiently get help to people who need it than a series of large, fraud-ridden government programs.
If taxation is theft, why don’t you move to Somalia or start a cartel in a remote region of Mexico and live there? Or perhaps there are benefits you are receiving from living with a functioning government?
I buy the premise that Zig is better if you know you will have lots of pointer arithmetic going on. Having written a fair amount of unsafe C interop code in Rust, I feel like these critiques of the ergonomics are valid. The new #![feature(strict_provenance)] adds a new layer of complexity that, I hope, improves some of this experience while adding safety. Rust's benefits are not free.
The benefits of Rust's (wonderful) model around references and lifetimes come at a significant cost to ergonomics when having to go into the Mordor of some C library and back. I usually find myself wishing I could have some macro where I just write in C and have it exposed as an unsafe fn back in Rust. I know I can do this by just writing a C dylib and integrating that, but now I've got two problems.
Even so, I prefer writing unsafe Rust to writing C. std::ptr forces me to ask the right questions and reminds me of just how easy it is to fall into UB in C as well.
This one runs the C code as a program, so the code doesn't really communicate. I made a proof of concept some time ago of a macro that really translates C to Rust at compile time: https://github.com/zdimension/embed-c
This transpiles C to Rust at compile time, though it requires a nightly compiler and hasn't been updated in some time. But it's exactly what you're looking for. C code in, unsafe Rust code out.
One big difference between unsafe Rust and C is that C compilers have flags to turn off the UB, so you have a lot less mental load when writing it. You go from e.g. "if this index calculation overflows, we may read from outside the array, because the bounds check was deleted" to "if this index calculation overflows, we may read from the wrong index, but never outside of the array bounds". UB is Damocles's sword and the speed gains are usually not worth it. With UB your program can enter a buggy state that you cannot detect, because "it cannot happen". Without UB, your program can still enter a buggy state, but you can detect it and potentially recover or crash immediately before even more things go wrong.
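To make that trade-off concrete, here's a minimal Rust sketch of the "no UB" side: with wrapping arithmetic (the analogue of C's -fwrapv) plus bounds-checked indexing, a buggy index computation produces a wrong but detectable result instead of silently deleting the bounds check. The `lookup` function and its data are invented for illustration.

```rust
// Sketch: with defined overflow semantics plus bounds checks, a buggy
// index computation is detectable instead of being silent UB.
fn lookup(data: &[u8], base: usize, offset: usize) -> Option<u8> {
    // wrapping_add has defined two's-complement semantics, like C's -fwrapv
    let idx = base.wrapping_add(offset);
    // get() returns None instead of reading out of bounds
    data.get(idx).copied()
}

fn main() {
    let data = [10u8, 20, 30, 40];
    assert_eq!(lookup(&data, 1, 2), Some(40));
    // Overflowed index: wraps to 1 -- wrong, but never an OOB read
    assert_eq!(lookup(&data, usize::MAX, 2), Some(20));
    // Out-of-range index: caught by the bounds check, so we can recover
    assert_eq!(lookup(&data, 0, 100), None);
}
```

The program can still enter a buggy state, but as the comment says, you can observe it and crash or recover instead of reading memory you never meant to touch.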
Very true and worth evangelizing to others. I have unknowingly violated strict aliasing in some part of my code, only to discover later that it was benign at -O0 and metastasized at -O3.
This freaks me out the most in networking code, where there is all kinds of casting of structs (esp. if you blindly copy-paste examples from StackOverflow) and performance usually matters. Rust has inspired me to take more time to profile C code to see whether strict aliasing (strict overflow, etc.) actually makes a significant enough improvement to merit the UB risk, review time, and acid in the stomach.
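As one illustration of the safer pattern for that networking case, here's a Rust sketch that parses a field by copying bytes out of the buffer (the memcpy idiom) rather than casting the buffer pointer to a struct pointer, which sidesteps both aliasing and alignment problems. The packet layout is made up for the example.

```rust
// Sketch: parse a network header field by copying bytes (the memcpy
// pattern) instead of reinterpreting the buffer pointer as a struct
// pointer. The "length in bytes 0..4, big-endian" layout is hypothetical.
fn parse_len(packet: &[u8]) -> Option<u32> {
    // Copy the four bytes out; no alignment or aliasing assumptions.
    let bytes: [u8; 4] = packet.get(0..4)?.try_into().ok()?;
    Some(u32::from_be_bytes(bytes))
}

fn main() {
    let packet = [0x00, 0x00, 0x01, 0x02, 0xff];
    assert_eq!(parse_len(&packet), Some(258)); // 0x00000102
    // Truncated packet: detected, not an out-of-bounds read
    assert_eq!(parse_len(&[0x00]), None);
}
```

The same idiom in C (memcpy into a local struct, then read the fields) is usually compiled to the same loads as the cast, without the UB.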
Most of the independence armies, except the Rakhine armies (AA and ARSA), officially want a true federal union, which would require a new constitution. This is the same aim as the NLD's.
Only the KNU (Karen army) has so far made a statement opposing the military coup.
Edit: That said, the independence armies are all corrupt to some degree, and many are funded through drug manufacturing and smuggling.
The Noise Protocol spec is fantastic. It asks a protocol designer a reasonable set of questions and in exchange gives a set of safe choices for key exchange. It's a great example of building powerful systems from a handful of simple abstractions. Trevor Perrin (and I'm sure, not just he) did a phenomenal job.
That would be really cool, but I think enforcing that is undecidable. My gut tells me that having that language feature at compile time is the same as the Halting Problem.
In its full generality it's undecidable, but there are probably restrictions you can build into the type system to make it decidable. I was thinking something along these lines:
class Symbol {
    @init(constructor) originalToken: Token
    @init(constructor) position: SourceLocation
    @init(Parser.parse) scope: SymbolTable
    @init(Parser.parse) declaredType: Type
    @init(TypeChecker.typecheck) inferredType: Type
    @init(DataFlowAnalyzer.computeLiveness) usages: List<Expr>
}
And then the type system carries an extra bit of information around for whether a reference is fully-constructed or not, much like const-correctness. Fields marked with @init can only be written on a reference that's not fully-constructed. There's no compile-time enforcement for initialization order, though it'd be pretty easy to do this at runtime (convert them to null-checks or special not-initialized sentinel values). Newly-created references start out with the "initializing" bit set, but once they're returned from a function or passed into a function not in the list of legal accessors, they lose this bit unless explicitly declared.
It's basically the same way "mutable" works (in languages that support it), but with a separate state bit in the type system and extra access checks within the mutating functions to make sure they only touch the fields they're declared to touch. You can fake this now by passing mutable references into initialization functions and then only using const, but it's a bit less specific because many classes are designed to be long-term mutable through 1-2 specific public APIs but also need to be mutated internally for deferred initialization, and lumping these use-cases together means that deferred fields can be touched by the public API.
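A rough approximation of that "initializing bit" can be faked today with the typestate/builder pattern: the deferred fields are only writable on the builder type, and the runtime null-check the comment mentions happens once, at the conversion point. This Rust sketch uses invented names and is much coarser than the proposed @init annotations, which track which function may write each field.

```rust
// Sketch of the "initializing bit" as a separate type (typestate):
// deferred fields are only writable on SymbolBuilder; converting to
// Symbol freezes them. All names here are illustrative.
struct SymbolBuilder {
    name: String,
    scope: Option<String>, // deferred: filled in later, e.g. by a parser
}

struct Symbol {
    name: String,
    scope: String, // fully initialized, no Option needed downstream
}

impl SymbolBuilder {
    fn new(name: &str) -> Self {
        SymbolBuilder { name: name.to_string(), scope: None }
    }
    // Only the builder exposes the deferred-initialization write.
    fn set_scope(mut self, scope: &str) -> Self {
        self.scope = Some(scope.to_string());
        self
    }
    // The runtime check: fail fast if a deferred field was skipped.
    fn finish(self) -> Result<Symbol, &'static str> {
        Ok(Symbol {
            name: self.name,
            scope: self.scope.ok_or("scope was never initialized")?,
        })
    }
}

fn main() {
    let sym = SymbolBuilder::new("x").set_scope("global").finish().unwrap();
    assert_eq!(sym.name, "x");
    assert_eq!(sym.scope, "global");
    // Skipping a deferred field is caught at finish(), not at use sites.
    assert!(SymbolBuilder::new("y").finish().is_err());
}
```

The cost relative to the proposal is exactly what the comment notes: the builder's write access is lumped into one phase rather than scoped to specific accessor functions.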
I suspect that the ideal language (at least for human use) is one that is just shy of Turing completeness. Once a program compiles and an initial configuration phase has passed, all loops would be bounded and all recursions guaranteed to terminate, except for programs that aren't supposed to terminate, for which there would be a single construct allowing everything to be wrapped inside one and only one infinite loop.
What if someone lives in Taiwan or the United States or Singapore or anywhere outside of the PRC? This is exactly the type of sociotechnical pressure the PRC wants to enforce over its diaspora.
As an Italian-American, this explains why every time I go to a Buca di Beppo, I feel like I'm in some kind of merger between a mediocre Italian restaurant and a minstrel show. To those of us who grew up in this culture, it's ridiculous (almost bordering on offensive). Thankfully, I grew up in communities where Italians could proudly go to other restaurants that were more respectful of a 3,000-year-old culture.
It's tough to feel proud of what Buca di Beppo has done to popularize Italian-American culture in the same way it's tough to feel proud of the movie The Godfather. We aren't all mobsters and we don't all have giant busts of the Pope and cherubs in our houses. Some of us are just computer scientists who like basil.
> To those of us who grew up in this culture, it's ridiculous (almost bordering on offensive).
Both of my parents are half Italian-American, and that's the tradition in which I was raised.
My relatives who actually came off the boat loved Buca di Beppo. And Olive Garden, etc. (Of course we all know it's not as good as home cooking, but who cares when you just want something fast.)
Honestly, Italian-Americans have no one to blame but ourselves for exalting gangster culture and caricatures. We love that stuff.
Does McDonalds make the best hamburger? Does Taco Bell make the best burrito? Does Pizza Hut make the best pizza?
No, but I still eat at every single one of them all the time.
If I'm in flyover country, nothing beats the $19 Tuscan Steak at Olive Garden. Maybe some day I'll go to Italy and then never eat "fake" Italian food ever again, but I doubt it.
I could almost tolerate it, might even enjoy it sometimes, but eating there is just excruciating to me. For some reason they all seem to be optimized for loudness. Buca di Beppo really wants you to feel like you’re always surrounded by huge groups of people having comically loud conversations, but that’s one of the last things I want from a restaurant, or any public place really.
Loudness in restaurants seems to be hip right now. It also has been shown to correlate positively with both alcohol consumption and table turnover, so it might not be going anywhere soon.
I'm glad I found someone who feels the same way. I used to walk by a BdB on my way to work and I couldn't help but notice the horribly stereotypical cartoon they use as their mascot.[0] My thought experiment for this image is: imagine what this type of caricature would look like for pretty much any other genre of restaurant (Chinese food, sushi, burritos, etc.). It would probably be super offensive. To me, that's a good indicator that this caricature is offensive too.
I think Italian-Americans see "cultural appropriation", caricature, non-Italians portraying Italians in movies, etc as part of the process of integration.
If someone with no Italian heritage wants to open up a pizza shop or sell pasta and put a big statue of Mario in front, go right ahead! If it's good, we'll eat there.
You know who made the best pizzelles in my family - by far? My mom's Irish-German stepfather.
tpolverini says>"Some of us are just computer scientists who like basil."
Well, you can open a restaurant named "Computer Scientists Who Like Basil" but I ain't gonna be one of your customers until I see lines queuing up!8-)
tpolverini says>"respectful"? "bordering on offensive"? "proud"? "it's tough to feel proud of the movie The Godfather", "I grew up in communities where the Italians could proudly go to other restaurants that were more respectful of a 3,000 year old culture."
Heck, that's nuthin'! I grew up in communities where you could proudly go to restaurants serving food that _was_ 3,000 years old, judging by the taste! I'll take Italian-American any day!
Jeez! Lighten up a little bit! The economy runs on entertainment. And if we can't laugh at each other and at ourselves then we're lost. That's the original and cheapest entertainment.
[Later: Hey! Hey! Why the downvote? You guys got no sense of humor? C'mon, help me out here, I'm dyin'!]
It's cultural appropriation, and it's understandable you feel it's silly or offensive. Part of what makes it uncomfortable is the appropriation of culture for little more than vapid monetary gain. Like you said, it's not contributing to Americans' understanding of Italian culture---if anything it's hurting it with silly stereotypes.
Interestingly, if someone were criticizing the appropriation of an indigenous or Asian culture, there is little chance it would be this highly upvoted on HN. I think this is a worthwhile lesson for the community.
The dueling rhetoric is the same rhetoric that has been around for decades: some people really feel type systems add value; others feel they're a ball and chain. So which is it? The answer is probably "yes" to both, and history keeps proving it. Most of the time you start with no type system for speed. Then you start adding weird checks and hacks (here's lookin' at you, clojure.spec). Then you rewrite with a type system.
I'm a devout Clojure developer. I think it delivers on the promises he outlines in his talk, but I also have no small appreciation for Haskell as an outrageously powerful language. Everyone robs from Haskell for their new shiny language, as they should. Unfortunately, not a night goes by where I don't ask God to make me smart enough to understand how a statement like "a monad is just a monoid in the category of endofunctors" can radically change how I implement marginally scalable applications that serve up JSON over REST. Clojure talks to me as if I were a child.
Rich Hickey is selling the case for Clojure, like any person who wants his or her language used should do. His arguments are mostly rational, but also a question of taste, which I feel he admits. As for this writer, I'm glad he ends it by saying it isn't a flame war. If I had to go to war alongside another group of devs, it would almost certainly be Haskell devs.
> Most of the time you start with no type system for speed. Then you start adding weird checks and hacks (here's lookin' at you clojure.spec). Then you rewrite with a type system.
You seem to be in the camp of gradual types. Which Clojure falls more into, though experimentally. Racket, TypeScript, Shen, C# or Dart are better examples of it.
> make me smart enough to understand how a statement like "a monad is just a monoid in the category of endofunctors" can radically change how I implement marginably scalable applications that serve up JSON over REST.
That's the thing: it doesn't radically change it. Static types are not powerful enough to cross remote boundaries. Also, monads don't need static types, and fully exist in Clojure. Haskell is more than a language with a powerful static type checker; it's also a pure functional programming language. It will help you if you don't complect static types with functional programming. There are more design benefits from functional programming than from static types. Learning those can help you write better code, including JSON-over-REST-style applications.
Clojure and Haskell are a lot more similar than people think. Clojure is highly functional in nature, more so than most other programming languages. So is Haskell. Haskell just adds a static type checker on top, which forces you to add type annotations in certain places. It's like Clojure's core.typed, but mandatory and better designed.
The fact that Haskell is smarter than me is exactly why I have been keeping at it!
There is no fun left if you know all the overarching principles of a language and realize it still doesn't solve your problem. This happened to me when learning Python, and it's also why I don't really look at Go or Rust. They're good languages, and I might use them at a workplace someday, but you can get to the end of their semantics and still be left with the feeling that it's not enough.
I love that I just keep learning Haskell, and keep improving, no matter how much I've already done.
That said, Python is also smarter than me. The possibilities with monkey patching and duck typing are endless. But differently from Haskell, Python is not a good teacher, so I tend to only create messes when I go out of the way exploring them.
> That said, Python is also smarter than me. The possibilities with monkey patching and duck typing are endless.
Don't do it, 99.9% of the time. It's that simple.
There is seldom a reason to use more than just defs - and a little syntactic sugar (like list comprehensions) just to keep it readable.
Even the use of classes is typically a bad idea (if I do say so myself), just because there is no advantage to using them, except when everything is super dynamic. And if that's the case, I suggest that's a smell, indicating that one is trying to do too many things at once, not properly knowing the data.
Nevertheless, using classes (or any other superfluous feature) makes everything more complicated (less consistent - you need new ways to structure the program, new criteria for whether to go for a class or not, or where to bolt methods on, ...).
Don't use "mechanisms" like monkey patching just because they exist. They are actually not mechanisms - just curiosities arising from an implementation detail. The original goal is simplicity: make everything representable as a dict (in the case of Python).
> The possibilities with monkey patching and duck typing are endless.
I think there are many more "obvious" ways to do things in Haskell than in Python just because you as a developer need to draw the line between static and dynamic. And if you later notice that you chose the line wrong, you have to rewrite everything.
In Python - or any other simple language - there is typically one obvious way to do things. At least to me.
Classes definitely give you a lot of rope to hang yourself with (metaclasses, inheritance, MULTIPLE inheritance), but they have their place. I'll usually start with a function, but when it gets too big, you need to split it up. Sometimes helper functions are enough, but sometimes you have a lot of state that you need to keep track of. If the options are passing around a kwargs dictionary or storing all that state on a class, I know which I'd pick.
You can memoize methods to the instance to get lazy evaluation, properties can be explicitly defined up-front, and the fact that everything is namespaced is nice. You can also make judicious use of @staticmethod to write functional code whenever possible.
You can always opt for explicit dict passing. You are right that it's more typing work (and one can get it wrong...), but the resulting complexity is constant in the sense that it is obvious upfront, never growing, not dependent on other factors like number of dependencies etc.
When opting for explicit, complexity is not hidden and functions are not needlessly coupled to actual data. Personally I'm much more productive this way. Also because it makes me think through properly so I usually end up not needing a dict at all.
Regarding namespacing, python modules act as namespaces already. Also manual namespacing (namespacename+underscore) is not that bad, and technically avoids an indirection. I'm really a C programmer, and there I have to prefix manually and that's not a problem.
Yup, this open field to do whatever with metaclasses, inheritance, properties, etc. was what killed my interest. Since all this "multiple meta monkey patching" was possible, there was no way (for me) of telling what a good, elegant way to implement something would be. Simple was not good enough, but complex had no rules.
> The fact that Haskell is smarter than me is exactly why I have been keeping at it!
I tend to think of Haskell as an eccentric professor.
Sometimes it's brilliant and what it's developed lets you do things that would be much harder in other ways.
Sometimes it just thinks it's clever, like the guy who uses long words and makes convoluted arguments about technicalities that no-one else can understand to look impressive, except that then someone who actually knows what they're talking about walks into the room and explains the same idea so clearly and simply that everyone is left wondering what all the fuss was about.
I tend to ignore 99% of the clever haskell stuff and get by just fine in Haskell.
I keep learning about stuff like GADTs and whatnot, but they're more like the top of the tool drawer special tools than the ones you break out every day.
I think people learning/using haskell tend to go for crazy generalized code first, versus what gets me to a minimal working thing that I can expand/change out later.
Or I just suck at Haskell - probably a little from column A and a little from column B, though for me more sucking at Haskell than anything.
You suck at Haskell about as much as Don Stewart :) In this talk he describes how he builds large software systems in Haskell and eschews complicated type system features.
I'm not a Rust user per se, but I'm surprised to see it listed alongside Python and Go as a language without a lot of depth. Rust not only has quite an advanced type system (not Haskell-level, but certainly the most powerful of any language as mainstream), but it can also teach the user a lot about memory management and other low-level aspects of programming that Haskell (and many other languages) hide. I mostly write Haskell for my own projects, but one of these days I hope to get better at Rust.
Fwiw, the monad quote is actually pretty digestible if you know what monoids and functors are.
A functor is a container that you can reach into to perform some action on the thing inside (e.g. mapping the sqrt function on a list of ints). The endo bit just tells you that the functor isn't leaving the category (e.g. an object, in this case a Haskell type, when lifted into this functor context is still in the Haskell 'category'). A monoid is something we can smash together that also has an identity (e.g. strings form a monoid under concatenation and the empty string as an identity). So, in other words, monads are functors ('endofunctors') that we can smash together using bind/flatMap, and we have an identity in the form of the Id/Identity functor (a wrapper, essentially - `Id<A> = A`).
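That description maps directly onto code. Here's a hedged sketch using Rust's Option (any language with flatMap works the same): map is the functor part, and_then plays the role of bind/flatMap, and the flattening of nested containers is the monoid-like "smashing together".

```rust
// Sketch: the "functors we can smash together" reading, using Option.
// map is the functor operation; and_then (bind/flatMap) is the
// monoid-like join that collapses Option<Option<T>> into Option<T>.
fn half(n: i32) -> Option<i32> {
    if n % 2 == 0 { Some(n / 2) } else { None }
}

fn main() {
    // Functor: reach inside the container and act on the value.
    assert_eq!(Some(9i32).map(|n| n * n), Some(81));
    // Without bind, chaining container-producing functions nests containers.
    let nested: Option<Option<i32>> = Some(8).map(half);
    assert_eq!(nested, Some(Some(4)));
    // Monad: and_then flattens as it chains.
    assert_eq!(Some(8).and_then(half).and_then(half), Some(2));
    // Identity: wrapping with Some and binding changes nothing,
    // like concatenating with the empty string.
    assert_eq!(Some(8).and_then(Some), Some(8));
}
```

The names differ by language (bind, flatMap, and_then, >>=), but the shape is the same everywhere.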
>"a monad is just a monoid in the category of endofunctors"
Once I discovered the Monad thing in Haskell has pretty much nothing to do with the Monad in Category Theory, everything made much more sense. As a bonus I now (sort of) understand Category Theory. Much the same as Relational Databases have not very much to do with Relational Algebra.
This is a very important point although I would slightly tweak this
> the Monad thing in Haskell has pretty much nothing to do with the Monad in Category Theory
to "the Monad thing in Haskell is a very simple special case of the Monad in Category Theory". Thinking you have to "learn category theory" before you can use a Monad in Haskell is like thinking you have to learn this
Hopefully we all agree that static types and dynamic types are useful. Those who use hyperbole are attempting some form of splitting. I think the point where we disagree is what the default should be. The truth is this discussion will rage on into oblivion because dynamic types and static types form a duality. One cannot exist without the other and they will forever be entangled in conflict.
Well, I think that static types are much more useful than dynamic ones. Static types allow you to find errors in your program before execution, and that is very important. And if you are going to go through the effort of defining types, it is much better to use static types, because then you get this additional error checking. Furthermore, with static types the compiler can help in other ways, e.g. by organizing your data in memory much more efficiently.
I am not sure what you mean when you talk about the duality of static and dynamic types. One can exist without the other and most statically typed languages either forbid or strongly discourage dynamic typing.
> Static types allow you to find errors with your program before execution and that is very important.
It depends on how valuable it is in your situation to be able to run a program that contains type errors.
Sometimes it's a net win. If I'm prototyping an algorithm, and can ignore the type errors so I can learn faster, that's a win. If I'm running a startup and want to just put something out there so that I can see if the market exists (or see what I should have built), it's a net win.
Sometimes it's a net loss. If I'm building an embedded system, it's likely a net loss. If I'm building something safety-critical, it's almost certainly a net loss. If I'm dealing with big money, it's almost certainly at least a big enough risk of a net loss that I can't do it.
Forget ideology. Choose the right tools for the situation.
This is a rather glib response. Of course one should choose the right tools for the situation. Personally if I'm prototyping an algorithm I'd rather do it with types so I don't write any code that was clearly nonsense before I even tried to run it.
Personally, I work the same way you do. But I've heard enough people who want the faster feedback of a REPL-like environment to accept that their approach at least feels more productive to them. It may even be more productive - for them. If so, tying them down with type specifications would slow them down, at least in the prototyping phase.
That certainly seems like a reasonable hypothesis to explore and I'm curious to try a Haskell "EDN"-like type as defined in the article to see if that helps me prototype faster!
> One can exist without the other and most statically typed languages either forbid or strongly discourage dynamic typing.
It never seemed like that much of a prohibition to me. Dynamic types take one grand universe of "values" and divide it up in ways that (ideally) reflect differences in those values -- the number six is a different kind of thing than the string with the three letters s i x -- but what the types are is sort of arbitrary. Is an int/string pair a different type than a float/float pair? Is positive-integer a type in its own right? Is every int a rational, or just convertible into a rational? What if you have union types? After using enough dynamically typed languages, the only common factor that I'm confident holds across the whole design space is that a dynamic type is a set of values. That means static typing still leaves you free to define dynamic types that refine the classification of values imposed by your static types, and people do program with pre-/postconditions not stated in types. You just don't get the compiler's help ensuring your code is safe with regard to your own distinctions (unless maybe you build your own refinement type system on top of your language of choice).
By a similar process, dynamic typing leaves you free to define and follow your own static discipline even if a lot of programmers prefer not to. This is more or less why How to Design Programs is written/taught using a dynamic language. The static type-like discipline is a huge aspect of the curriculum, but the authors don't want to force students to commit to one specific language's typing discipline.
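As a tiny illustration of that "refine the static classification at runtime" idea, here's a sketch of the positive-integer example from above as a newtype whose invariant is checked at construction. The name `Positive` is made up; this is the hand-rolled version of what a refinement type system would check for you.

```rust
// Sketch: a runtime-checked refinement on top of a static type -- the
// "positive-integer" set from the comment, enforced at construction so
// every Positive value downstream satisfies the invariant.
#[derive(Debug, PartialEq)]
struct Positive(i64);

impl Positive {
    // The one gate into the refined set; the compiler can't see the
    // predicate, but every holder of a Positive got past it.
    fn new(n: i64) -> Option<Positive> {
        if n > 0 { Some(Positive(n)) } else { None }
    }
}

fn main() {
    assert_eq!(Positive::new(6), Some(Positive(6)));
    assert_eq!(Positive::new(-1), None);
}
```

This is exactly the "pre-/postconditions not stated in types" pattern: the distinction is yours, and the compiler only helps you keep the construction funnel narrow.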
Dynamic types are definitely more useful. That's why, in the last 40 years, we've almost never had a language without them. That's why Haskell also has dynamic runtime types. Erasing them at runtime, and no longer checking their validity at runtime, would be folly.
Static types are an extension; they say: where possible, do not allow types to be defined only at runtime. It's not always possible, which is why statically typed languages also include a runtime dynamic type system.
The debate is whether the benefit of static checks on types outweighs the negatives of having to spend time helping the type checker figure out the types at compile time, and of limiting the use of constructs that are too dynamic for the static checker to understand at compile time. That's the "dichotomy". To which someone proclaims: "Can we have a static type checker which adds no extra burden to the programmer and no limits to the type of code he wants to write?" To which the OP has missed the point entirely and simply shown that you can spend more time giving the Haskell type checker info about EDN, and gain nothing, since it's now neither more useful nor less effort. Which was a bit dumb, but he did remark that he did it for fun and laughs, not for seriousness.
A more interesting debate for me would be the strengths and weaknesses of gradual/external typing systems like TypeScript/Flow and MyPy vs something like clojure.spec, especially since there are still dynamic languages like Ruby that haven't really adopted a system like this yet.
Amusingly, history is showing that there are as many rewrites from type systems to no type systems as the reverse. Consider all of the things that are getting redone in the javascript world.
Why are we down voting this? Of all people in the world, Ray Dalio would want us to ask, "Why." Principle #1: Trust in truth. Whether I'm being facetious is an exercise to the reader.
Something is awry in our understanding of moral capitalism at large for statements like that to be met with anything other than derision, and DHH is getting at the heart of why: We've all become obsessed with growth. When the way to get rich in the world is capital gains, as opposed to value creation, the abnormalities in behavior and ethics, which are glorified elements of Silicon Valley culture, fall out as a natural consequence. Having real, powerful values that help evaluate whether something legal is actually ethical means saying "no" to certain kinds of work. God forbid you do that though, lest your investors punish you.
Where your treasure is, your heart will be also. It comes as no surprise to anyone tarrying long over the facts that SV's treasure is gold.
I didn't interpret that Altman article as saying an "airplane" was less good than a "spaceship". In fact, he was saying the opposite, that both kinds of companies are equally impressive.
Yes, even if you only get 0.3-3% of that 100M exit, it's going to completely change the equation if you ever try again, or just do something else with the rest of your life. Further, if you have another exit of that size, you're likely to end up with a much larger chunk of the second company, to the point where it's possible to be better off with 2 x 100M exits than one billion-dollar exit.
Most startups fail. So though yes, that's true, the chances of it happening twice are quite small. Even the first chance is quite small, so maybe... dunno, I haven't seen numbers to back up the hypothesizing.
The odds are much better that you can repeat after your first success. Most startups fail before gaining significant funding; if you have enough capital to skip the seed stage, a good relationship with investors, and experience running a startup, then you have much higher odds. To use some famous examples, Apple/NeXT/Pixar and PayPal/Tesla/SpaceX/Solar City: both founders had a weaker startup (NeXT, Solar City) but leveraged past success to avoid failure.
Totally agree. My first company raised $300k and sold for between $2M and $10M. Started my second company and immediately raised $1M in a seed round. Having an exit is an excellent signal to future investors. And employees. And acquirers.