I don't think any large tech company is capable of shipping a feature that quickly. Even for a feature as narrowly scoped as this, going from ideation to deployment in a few weeks seems completely unrealistic. There's just way too much involved aside from raw lines of code.
Uber itself might be, but the business it's in certainly isn't... which is kind of a dichotomy with many modern tech companies: the platform might be tech, but in the end the product they're selling very much isn't.
That's not equivalent. An equivalent example would be a person who makes a large amount of money in their 20s, but spends the majority of it on things that could conceivably appreciate, such that their income artificially represents a loss. Then all of a sudden, whatever they spent money on becomes extremely profitable in their mid 30s.
Someone doing that will normally do it within the confines of an LLC by convention (because it almost always implies a business). But you could do it with investing, too. In either case you need not be a megacorp.
There are also income tax offsets for education, provided your tax bracket isn't too high.
> That's not equivalent. An equivalent example would be a person who makes a large amount of money in their 20s, but spends the majority of it on things that could conceivably appreciate, such that their income artificially represents a loss. Then all of a sudden, whatever they spent money on becomes extremely profitable in their mid 30s.
Like someone getting a $150k student loan to go to college and/or post-college education?
No, because that's a loan rather than an R&D expenditure from income. I guess that probably sounds flippant, but the mechanics are different. If a business received a loan, the tax prospects wouldn't be as favorable as expenditure either.
Different how? Different in practice or in tax law?
The point of the thread is that the two are essentially the same in practice (investing current monetary influxes towards future revenues) but the tax law differences favor one over the other.
True, but I think the sentiment isn't "make individual loans work like business investments" so much as it is "make business investments work like individual loans". In other words, if individuals are expected to pay taxes, businesses should be expected to pay taxes as well (rather than being allowed to minimize them due to earlier losses).
It's not black and white, of course, and I'm not an accountant, so my knowledge is limited and most likely filled with holes and misunderstandings. But I imagine there's also a matter of scale that's at play here, with tax laws meant to make things easier on small, privately owned businesses in their early years, having unintended consequences to the benefit of companies already behemoth in size investing in getting even bigger. Bigger in ways only made possible in the new digital, globalized world.
loans are different for corporations due to the interest being deductible (colloquially called 'tax shields'). along with the carryforward provision, that can be so valuable that it's the principal reason why a given company is bought. personal loans have no such leeway or value.
The interest on personal loans is also deductible, if used for (a) education, (b) buying a residence, or (c) for business activities of the individual.
sure, there are a few exceptions, but carveouts result in distortions that lead to unintended consequences, as we see in all of those instances (e.g., higher economic rents). for greater fairness and more efficiency in markets, we should reduce carveouts for both corporations and individuals, not try to justify the ones we have.
it's difficult to get to 30% of EBITDA (the actual yardstick) in interest payments, but if you do, the carryforward is unlimited, so the deduction is effectively unlimited.
The actual yardstick is only EBITDA through this year, and EBIT thereafter; moreover, it's the tax versions of EBITDA and EBIT, not the accounting versions. For starters, the tax versions use taxable income as the base, not book income, and don't add back most non-cash items, so the threshold is much lower.
Many businesses were hit by this interest limitation in 2018 and 2019. There are dozens of articles from major tax firms about it. Yes, a business can carry forward its unused interest to a future year in which it has spare income to apply it against. But that means it has to have sufficient profits and a reduced debt load in order to take advantage of its interest expenses. If a business is unprofitable, churns loans, or increases its debt load, it effectively loses out on much of the benefit of its interest expense.
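For concreteness, here's a toy sketch of that limitation mechanic. This is a deliberate simplification: the real computation involves business interest income, floor-plan financing, small-business exemptions, and more, so treat the function and its numbers as purely illustrative.

```python
# Toy model of the business-interest limitation: the deduction is capped
# at 30% of adjusted taxable income (ATI), and the disallowed remainder
# is carried forward indefinitely. Heavily simplified for illustration.
def deductible_interest(interest, ati, carryforward=0.0):
    cap = 0.30 * ati
    available = interest + carryforward
    deducted = min(available, cap)
    return deducted, available - deducted  # (deduction now, new carryforward)

# A firm with $100 of interest expense but only $200 of ATI deducts $60
# now and carries $40 forward to a year with spare capacity.
now, cf = deductible_interest(100, 200)
print(now, cf)
```

If the firm stays unprofitable (ATI near zero), the carryforward just keeps growing without ever being usable, which is the "loses out on the benefit" point above.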
ok, i'll take your word for it since it's not something i follow that closely.
i'd just also reiterate that all this rigamarole is not worth it (for the generally claimed increase in productivity and investment), and we should just simplify and equalize individual and corporate tax law.
Of course not. The $155k tuition is the loss. The $150k loan is just a sudden influx of money used to compensate for the investment losses, the same as a company receiving investment funding.
If you do that by investing, it does work the same way. You can roll forward capital losses indefinitely and offset them against future capital gains, something which investors in 2000 and 2008 became well-acquainted with.
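As a toy illustration of that carryforward mechanic (ignoring the annual $3,000 ordinary-income offset and short/long-term netting, so the numbers are purely illustrative):

```python
# Sketch of indefinite capital-loss carryforward: realized losses offset
# future gains before tax applies. Simplified for illustration.
def taxable_gain(gain, loss_carryforward):
    offset = min(gain, loss_carryforward)
    return gain - offset, loss_carryforward - offset

# Lose $50k in 2008, then realize a $30k gain and later a $40k gain:
cf = 50_000
g1, cf = taxable_gain(30_000, cf)  # nothing taxable; $20k of losses remain
g2, cf = taxable_gain(40_000, cf)  # only $20k taxable; carryforward exhausted
print(g1, g2, cf)
```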
The difference is few people start with financing that allows them to amass large investable sums without showing a cash income early in life. That’s because companies can raise VC and sell equity while we (rightly, of course) banned the equivalent practice for people.
The parent commenter is not disagreeing with information theory (and what you're saying is shown in the article anyway).
They're making a practical distinction: you generally don't have empirical access to the actual thing, the kind for which compression would achieve true learning. Instead you have access to training data that represents, let's say, a projection of the actual thing into a smaller space with fewer dimensions.
Like trying to learn from images instead of the 3d world. Humans learn to distinguish between objects in a 3-dimensional space using sight and interaction. This learning generalizably transfers to recognition in 2 dimensions. We don't generally equip models with robotic interfaces to train in 3d before benchmarking them on ImageNet.
> We don't generally equip models with robotic interfaces to train in 3d before benchmarking them on ImageNet.
Don't they train models using 3D rendering and simulations? We have relatively realistic simulations for various scenarios; having a learned model that could make inferences based on those complex simulations sounds like a win.
>Like trying to learn from images instead of the 3d world. Humans learn to distinguish between objects in a 3-dimensional space using sight and interaction. This learning generalizably transfers to recognition in 2 dimensions.
If we use human "comprehension" as a reference point, then the relevant point of comparison should be the understanding a human can develop given the same inputs.
Sure, but how do you measure that? How do we figure out how much understanding a human can develop from only ever seeing 2d pictures, without any movement or interaction with a 3d world?
Most ML problems are things humans are quite good at and have a lot of context to draw from.
Sure. But again, practically speaking, that isn't the reality of how we learn. The commenter wasn't refuting Kolmogorov complexity. They're just saying it's an extremely limited way of viewing the problem. Useful sure, but insufficient.
Speaking as someone outside the autonomous vehicle industry, can you provide sources or reputable commentary on how Waymo is furthest ahead? And maybe a rough ranking of the rest of the major players?
The only public data is California's safety reports over the last couple of years, which show Cruise and Waymo having millions of miles driven, with Waymo having fewer "disengagements" than Cruise. There is also the fact that Waymo has a ride-sharing service in "production". As someone who is also outside the AV industry, that is the closest I've gotten to a source; everything else is just hearsay by various Hacker News engineers about how Waymo is the best.
The ranking as I understand it is Waymo, Cruise, then everyone else.
You might want to have a look into the public output of George Hotz. Obviously he is a little biased towards his own efforts, but he is very open about limitations and the actual reality of the self driving business.
"Time average" and "ensemble average" are standard vocabulary in the statistical mechanics literature. Your comment is essentially a restatement of the article's point.
I think it's uncharitable to say the article would be easier to understand if it didn't use the language of ergodicity. Its explicit goal is to show how non-ergodicity leads to an example like yours.
So of course your comment seems easier to understand. But that's because you're just saying different distributions can be parameterized by the same mean. Ergodicity is about a lot more than that, and the language of ergodicity was the entire exercise here.
This intro doesn't get to the depths of the issue. https://www.nature.com/articles/s41567-019-0732-0, by one of the pioneers of "egondocity economics", is very nice for both going over the math and the academic history of the error.
Given the illustrious history of statistical mechanics feeding into modern probability theory, information theory, theoretical computer science, etc., it's a real shame economics is still stuck with this bad math.
What is a shame is how “egondocity economics” (maybe a reference to the size of Peters’ ego?) misrepresents some things.
Are you calling “bad math” the expected utility theory developed by von Neumann (et al.)? He knew one thing or two about ergodicity, information theory, computer science, etc.
I read https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenster..., and there's no notion of time, let alone non-ergodicity, in the formula. I am not familiar with the rest of the book, but I wouldn't be surprised if it's similarly fine, similarly building a theory of rich theorems about very simple models.
If so, the problem isn't von Neumann's math then, even if the general aim of the endeavor was misinspired by Bernoulli's primitive notions. The problem would be all the math cargo-culters in economics who constantly treat the premises of math theorems as if they were broad social laws.
I mean, don't get me wrong, I am no fan of von Neumann's politics, but obviously I am not going to fight his pure math.
The article by Peters that you linked to says that “in maximizing the expectation value — an ensemble average over all possible outcomes of the gamble — expected utility theory implicitly assumes that individuals can interact with copies of themselves, effectively in parallel universes (the other members of the ensemble).” This is nonsense.
He may be desperately trying to shoehorn frequentist statistics, which require actual samples over repeated experiments. If you take the frequentist view, then you kinda have to collect all the possible results of your bet from all the parallel universes of outcomes. Which requires interaction with those universes, and does not make any sense unless maybe you wave your hands a lot about the Many Worlds interpretation of quantum mechanics (and dammit, even if we believe in that, the parallel universes do not interact!)
If you're a reasonable person instead, you recognise that probabilities instead describe a state of partial information (that is, probability is in the mind), and the "ensemble average" really comes from a probability distribution we can compute with bog standard probabilistic counterfactual reasoning, not by actually hopping universes.
> "Time average" and "ensemble average" are standard vocabulary in the statistical mechanics literature
But their application to non-standard-mechanical things is very confusing.
Of course wealth is not ergodic. Ergodicity would mean that the distribution is always the same. Every point in time would be identical to every other point in time and growth would be impossible.
I agree it's not perfectly explained. But I think someone new to ergodic theory would find the article clearer (or at least more helpful overall) than your second paragraph here.
“What we're seeing is that even though the expected value is positive, and the ensemble average is increasing, the time average for any single person is usually decreasing. The average of the entire "system" increases, but that doesn't mean that the average of a single unit is increasing.”
Someone new to ergodic theory may understand from that article that if wealth was ergodic the average for every trajectory would increase like the average for the entire system. But that doesn’t make sense.
Thing is, I don't believe we even care about the time average. What we care about is the evolution of the distribution of outcomes over time.
More specifically:
- The distribution of outcomes at certain points of interest in time (like the valuation of my company when I intend to sell it).
- The probability that we cross a catastrophic threshold at some point (like bankruptcy).
Time average is a terrible metric to estimate those things. Heck, I'm not sure it can measure anything of interest, besides our own mistaken intuitions. It should probably be called something like "time average fallacy".
I'm a little confused - ergodic theory very much cares about the time average. Or do you mean the toy example of betting shouldn't care about it?
It seems like you think the problem here is too unsophisticated for ergodic theory or something. Which, fine sure. But this isn't an article intended to teach you about betting. It's an article intended to teach you about ergodicity, using betting as a toy example. The author isn't trying to introduce the best way to analyze betting strategies, they're trying to show what non-ergodicity is. And I think they basically succeed.
Just meet the article where it is, for its intended usage.
This is not about the example. What I'm saying is that no betting at all should care about the time average. Betting is about having a good estimation of outcomes, and time averages only help you when the process is ergodic.
That's a very special case. For everything else (that is, non-ergodic processes), your time average is crap, and you must look at the distribution of outcomes directly. Even the ensemble average is not enough. Averages are crap at visualising skewed distributions. For those you want the median, the quartiles, sometimes even the percentiles.
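To make that concrete, here's a quick simulation sketch (parameters are made up, deliberately chosen so the effect is stark): a multiplicative bet with positive expected value whose ensemble average grows while the median outcome collapses.

```python
import random
import statistics

random.seed(0)

def final_wealth(rounds=100, start=100.0):
    # Each round: +50% on heads, -40% on tails. The expected multiplier
    # per round is 1.05 (positive EV), but expected log growth is negative.
    w = start
    for _ in range(rounds):
        w *= 1.5 if random.random() < 0.5 else 0.6
    return w

outcomes = [final_wealth() for _ in range(10_000)]
print(statistics.mean(outcomes) > 100)    # True: ensemble average grew
print(statistics.median(outcomes) < 100)  # True: the typical player got crushed
```

The mean here is dominated by a handful of lottery-ticket trajectories; the median and quartiles tell you what actually happens to most players.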
---
To be honest, this "ergodic theory" shows signs of snake oil. The definition of ergodicity itself is dead simple, so it's pretty easy to evaluate. What seems pretty clear is that ergodic processes are the exception. And a pretty uninteresting one at that, since it's a class of processes that people will have good intuitions about.
It would then seem that ergodic theory is more interested in the non ergodic processes (the very point of this blog post is to warn us about them). That is, processes that lack some property —the general case. And surprise, since the time average and ensemble averages are different, and you only care about the ensemble average (well, the ensemble distribution really), the time average won't help you. Be afraid, or lose your assets.
That's why I see snake oil: what works on non-ergodic processes will also work on the ergodic ones. Unless you need to make a split-second decision using your intuition (which, while inadvisable, is safer with ergodic processes), there's no need to make the distinction at all. Just analyse your process without assuming it will be ergodic; the results will be applicable even if it is.
You say it’s not about the example, then go on to talk about the example... as I said, this article is only about betting insofar as it’s a toy example to illustrate ergodicity. In the real world you wouldn’t analyze a bet this particular way, but that’s nitpicking and missing the point.
> To be honest, this "ergodic theory" shows signs of snake oil.
lol. Alright, I’m checking out of the discussion when a major subfield of mathematics is described as snake oil.
> You say it’s not about the example, then go on to talk about the example
I did not mention those stupid coin tosses, where did you get the impression I was talking about those specifically?
> a major subfield of mathematics is described as snake oil.
I did not say it was snake oil, just that it shows signs of being such. Then I described those signs. If you have counter arguments or pointers to such, I'd be happy to read them. I'd rather lose an argument and learn something than stay ignorant.
Yes. Given 1000 players and 1000 turns, if each player starts with $100 in capital under your chosen parameters:
import random

l = 0.33  # fraction of capital lost on a losing flip
w = 0.5   # fraction of capital gained on a winning flip
c = 100   # starting capital per player
m = 1000  # number of players
n = 1000  # number of turns
p = {k: c for k in range(m)}

for _ in range(n):
    for j in range(m):
        if random.choice([0, 1]):
            p[j] += w * p[j]  # win
        else:
            p[j] -= l * p[j]  # loss

print(sum(p.values()) / len(p))                # average final wealth
print(sum(1 for k in p if p[k] > c) / len(p))  # fraction who came out ahead
I wrote this up quickly so there might be an error, but under your stated parameters the average wealth increases over time and most people end up wealthier than they started. Specifically, the fraction of people who are wealthier at the end seems to converge to somewhere between 57% and 60%.
NB: This assumes you bet your entire capital each round instead of a constant bet size. In the presence of non-ergodicity you wouldn't want to do this, but that just means it's an even stronger result that most people come out ahead.
In fact, 33% happens to be the maximum loss percentage this system (with this win rate and win percentage, betting your total capital) can tolerate while still exhibiting higher wealth for most players over time :)
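A quick sanity check of that 33% figure: with full-capital bets, the break-even loss fraction solves 0.5*ln(1+w) + 0.5*ln(1-l) = 0, which rearranges to l = 1 - 1/(1+w).

```python
import math

w = 0.5                        # win: gain 50% of capital
l_breakeven = 1 - 1 / (1 + w)  # solves 0.5*ln(1+w) + 0.5*ln(1-l) = 0
print(round(l_breakeven, 4))   # 0.3333 -- one third, matching the ~33% above

growth = 0.5 * math.log(1 + w) + 0.5 * math.log(1 - l_breakeven)
print(abs(growth) < 1e-12)     # True: the median player neither gains nor loses
```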
The thrust of the point is that while the wealth of a group will rise on average when playing a game with positive expected value, individuals with significant upfront losses will lose over time if the reward percentage is too close to the loss percentage. Because your future wins depend on your present capital, which in turn depends on your past wins. This becomes an optimization problem!
This does not mean that you shouldn't play a game with positive expected value. Expected value is still the salient framework with which you should judge risk. It just means that the size of your bet needs to be considered in conjunction with your total capital, not just whether any individual bet is more likely to win than lose.
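That sizing question has a classic answer in the Kelly criterion. Here's a sketch using the thread's parameters (50% chance to gain 50% of the stake, 50% chance to lose 33% of it); the variable names are mine, and this ignores real-world frictions like minimum bet sizes and risk tolerance.

```python
import math

p_win, gain, loss = 0.5, 0.5, 0.33

def expected_log_growth(f):
    """Expected log growth per round when staking fraction f of capital."""
    return p_win * math.log(1 + f * gain) + (1 - p_win) * math.log(1 - f * loss)

# Scan stake fractions for the growth-optimal (Kelly) one; the closed form
# is p_win/loss - (1 - p_win)/gain, roughly 0.515 for these numbers.
best_f = max((k / 1000 for k in range(1000)), key=expected_log_growth)
print(round(best_f, 3))
```

The takeaway, matching the point above: even with positive expected value, staking 100% of your capital gives negative log growth as soon as the loss fraction exceeds one third.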
The author states this seems to not be well known in finance, but in point of fact this is very well known in both literature and practice. A trading strategy with positive expected value has additional considerations before you execute on it, including your total capital and liquidity.
Well, not just upfront losses, since it's commutative. You break even if you have 10 wins for every 9 losses, which will happen to very few people with enough flips – but the amount you win if you have more victories than that threshold is quite large, while you can only lose $1 at most.
I don't think they're speaking too strongly. I think a lot of the time when we correctly infer causality without empirically interacting with the system, it's because we have built up significant categorical experience about more atomic systems we were able to interact with.
In my view, a lot of things that are noninteractively inferred are compositions of more fundamental things that required empirical experience. When you've had the causality of gravity thoroughly beaten into you at a young age, a lot of other things seem intuitive that would otherwise completely fall outside a framework for being unempirically learned.
Do you have a specific counterexample of causality you can infer without interaction or empirical experience of something related?
Caveats: I'm not a neurologist or psychologist, so this is mostly philosophical speculation on my part.
We think we're so smart because we causally understand the world, but it took us a very long time to collectively discover these principles. A human alone would not be so smart.
I think this confuses (fact) knowledge and the ability to recognize causal relations.
A human can never be smarter than said human: Our brains are not connected and we can't share capacity with others.
So, discovering causality is always an individual experience. And that happens likely by "playing" with "the world".
I think it's noticeable that smarter animals are more playful. Which is also a hint that points to the fundamental importance of interaction with the world as a prerequisite for "smartness". Additionally the capabilities of the "sensors" and "actors" that make interaction with the world possible in the first place seem to be crucial to develop "smart behavior".
The part about the "sensors" seems quite obvious. I think one can gain a better general understanding of some thing if one can "experience" it in more than one "dimension".
And the "actors" allow one to perform "experiments" with the things around one, and find out this way how that thing "works" or is supposed to be "used".
That's actually the behavior that can be observed in children of all kinds of "smarter" species. So it seems to be at least linked somehow to "smartness".
Empirical evidence is a special case of observation. If you observed the whole universe in its entirety, you could separate out moments that effectively followed whatever conditions you might set in a lab. You can't act on the system, but you can forever tune your models to match the infinite observations you could record. At that point the separation between observation-derived knowledge and experiential knowledge is meaningless (it's just hard to imagine a universal model manifesting without having used experiential knowledge along the way).
Whether you swing the bat or just watch it hit the ball into the sky, you have the prerequisites needed to reason about the interaction.
A more entertaining question is how a system comes to believe causality (i.e. comes to believe that things can and must have causes)
I agree. To say we're able to influence the system is to assume free will, which I believe is an undecidable problem. Ultimately this is just a matter of semantics, which makes this thread rather pointless.
It does not need to assume free will, which indeed is a much harder philosophical issue - it needs to assume that the decisions of the intervention are caused by factors outside of the system you're studying (which usually isn't the whole world), which is easy to get if you have some actual external influence, and very hard to determine if you're purely observing.
For example, if you're allocating patients in control and treatment group based on a roll of dice - the dice don't have free will, but they are influencing the "system" of the patients and their treatment, while that system is not causually influencing the dice rolls.
For another example, if a baby is "experimenting" by babbling (a key part of language acquisition, https://en.wikipedia.org/wiki/Babbling), it's not necessary or relevant to decide whether free will is involved, the baby obtains useful experimental results of what sound experiences are caused by which attempts to move the tongue/mouth/etc, even if there's no free will and the attempts to move the tongue/mouth/etc are also deterministically caused by the sensory experiences of the baby.
> Please explain to me how I can lose money by writing a covered call option.
When you have to sell the underlying to cover. It's right there in the name. Of course you can lose money; it's just that your downside risk is capped.
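A toy payoff sketch makes the loss mechanism concrete (all numbers here are hypothetical): long 100 shares with a $50 cost basis, short one $55 call sold for $2/share.

```python
# Hypothetical covered-call payoff. Upside is capped at the strike; the
# downside is the stock itself falling, cushioned only by the premium.
def covered_call_pnl(spot, basis=50.0, strike=55.0, premium=2.0, shares=100):
    stock_pnl = (min(spot, strike) - basis) * shares  # gains capped at strike
    return stock_pnl + premium * shares               # premium kept either way

print(covered_call_pnl(60.0))  # rally past the strike: gain capped at 700.0
print(covered_call_pnl(40.0))  # stock drops: -800.0, premium only softens it
```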
> Us plebs aren't allowed to write naked options, that privilege only belongs to institutional actors.
Yeah because you'll probably lose all your money. Would you rather be allowed to do something incredibly dangerous and then get met with a dispassionate, "Well, almost everyone fails at this but you tried anyway, should have known better! Thanks for playing."?
Writing any amount of uncovered calls on a stock trading above the teens generally exposes you to more risk than the average American can absorb with their entire net worth.
That being said, if you really want to, there are places that will let you do it using margin if you guarantee you know what you're doing. Bad idea though.
It's an old strategy. Without an actual edge and significant (read: sophisticated and costly) risk management, that strategy will eventually get blown out by someone who is better at pricing long tail risk events than the seller. There's a reason market makers obsessively delta hedge.
Would hate to have been selling put options on VIAC when Archegos shit the bed on $20B with >4x leverage. Good luck foreseeing that.
yes, I have had this happen with a biotech stock. The puts can be very risky. I usually sell calls since it is easier to stomach psychologically when it goes wrong.... "I missed out on some gains. But I would've sold anyway..."