Putting aside the fact that you can't justify fraud by post-hoc outcomes (Shkreli and SBF also claimed that they didn't commit fraud because they would or did win it back), I doubt any of the Tesla Energy division income (even assuming the accounting is okay) is a result of businesses that can be traced back to Solar City.
I suspect most of the business is battery storage and the Powerwall was introduced by Tesla before the acquisition.
The vertically integrated Tesla strategy is to use batteries to arbitrage the value of stored energy. This is well described in the Tesla "Master Plan" documents. Solar is a huge part of this.
According to the company filings the solar assets Tesla acquired are still generating revenue at the rates projected during the acquisition.
Well, I trust Elon's statements (even the financial statements) as far as I can throw them, but can you point out where it says that? I'm searching the 10-K and I don't see solar broken out.
But even if it's true I assume they are including their utility-scale solar and battery packs, which have nothing to do with Solar City: the panels are generic, the customer lists are not residential, they don't use the fancy shingles. Solar City did not help them in any way with getting or fulfilling these contracts.
That's not true: the Moro experiments show they use different capacities, as do similar experiments on people who have certain severe cognitive deficiencies that don't impact language processing (e.g. the subject "Chris").
My argument is that “thinking” and “language processing” are not two sequential or clearly separated modes in the brain but deeply intertwined.
Language is a lot more than parsing syntax; whatever your thoughts are on the matter, even LLMs are clearly doing more than that. Are there any experiments where subjects had severe cognitive deficiencies and language in its full breadth (or maybe I should say communication?) came out unscathed?
The Chris experiments don't seem to go into much detail on that front.
I just gave one: "Chris". Here's Chomsky describing the "Chris" experiments ([1]) as part of a broader answer about how language is distinct from general cognition, which I paraphrased above.
> That doesn't contradict the argument that “thinking” and “language processing” are not two sequential or clearly separated modes in the brain but deeply intertwined.
It's not an argument, it's an assertion; one that is, in fact, contradicted by the experimental evidence I described (Moro and "Chris"). Of course they are "deeply intertwined", but because of the evidence it's probably an interface between two distinct systems rather than one general system doing two tasks.
Like I said, these experiments stop at a vague 'Chris can still learn languages'. No comment on actual proficiency or testing. For all I know I couldn't have a meaningful conversation with this guy beyond syntactically correct speech. Or maybe the best proficiency he's ever managed is still pretty poor compared to the average human. I have no idea.
There's no contradiction because I never argued/asserted the brain didn't have parts tuned for language, which is really all this experiment demonstrates.
It's irrelevant to the experiment: he could learn synthetic languages with human-like grammars and could not learn synthetic languages with non-human-like grammars. Regular people could solve the non-human-like languages with difficulty. Because his language ability is much higher than his general problem solving ability, it gives strong evidence that 1. the human language capacity is a special function, not a general purpose cognitive function, and 2. it obeys a certain structure.
> There's no contradiction because I never argued/asserted the brain didn't have parts tuned for language, which is really all this experiment demonstrates.
>Because his language ability is much higher than his general problem solving ability
I don't see how you can say his language ability is much higher than his general problem solving ability if you don't know what level of proficiency he is capable of reaching.
When you are learning, say, English as a second language, there are proficiency tiers you get assigned when you are tested: A1, A2, etc.
If he's learning all these languages but maxing out at A2 then his language ability is only slightly better than his general problem solving ability.
This is the point I'm trying to drive home. Maybe it's because I've been learning a second language for a couple of years and so I see it more clearly, but saying 'he learned x language' says absolutely nothing. People say that to mean anything from 'well, he can ask for the toilet' to 'could be mistaken for a native'.
>I don't know what you are trying to say then.
The brain has over millions of years been tuned to speak languages with certain structures. Deviating from these structures is more taxing for the brain. True statement. But how on earth does that imply the brain isn't 'thinking' for the structures it is used to? Do you say you did not think for question 1 just because question 2 was more difficult?
As I said it's not relevant, but if you wanted to know you could put the bare minimum of effort into doing your own research. From Smith and Tsimpli's "The Mind of a Savant": "On the [Gapadol Reading Comprehension Test] Christopher scored at the maximum level, indicating a reading comprehension of 16 years and 10 months". They describe the results of a bunch of other language tests, where he scores average to above average, including his translations of passages from a dozen different languages.
> But how on earth does that imply the brain isn't 'thinking' for the structures it is used to? Do you say you did not think for question 1 just because question 2 was more difficult?
The point isn't to define the word "thinking" it is to show that the language capacity is a distinct faculty from other cognitive capacities.
This is just wrong. Languages follow certain inviolable rules, most notably hierarchical structure dependence. There are experiments (Moro, the subject "Chris") that show that humans don't process synthetic languages that violate these rules the same way as synthetic languages that follow them (specifically, it takes them longer to process and they use non-language parts of the brain to do so).
This does not mean that language in humans isn't probabilistic in nature. You seem to think that because there is structure it must be rule-based, but that doesn't follow at all.
When a group of birds flies, each bird discovers/knows that flying just a little behind another will reduce the number of flaps it needs to fly. When you have nearly every bird doing this, the flock forms an interesting shape.
'Birds fly in a V shape' is essentially what grammar is here - a useful fiction of the underlying reality. There is structure. There is meaning but there is no rule the birds are following to get there. No invisible V shape in the sky constraining bird flight.
First, there is no evidence of any probabilistic processing at the level of syntax in humans (it's irrelevant what computers can do).
Second, I didn't say that, in language, structure implies deterministic rules, I said that there is a deterministic rule that involves the structure of a sentence. Specifically, sentences are interpreted according to their parse tree, not the linear order of words.
As for the birds analogy, the "rules" the birds follow actually do explain the V-shape that the flock forms. You make an observation ("V-shaped flock"), ask the question "why a V-shape and not some other shape", and try to find an explanation (the relative bird positions make it easier to fly [because of XYZ]). In the case of language you observe that there is structure dependence, you ask why it's that way and not another (like linear order) and try to come up with an explanation. You are trying to suggest that the observation that language has structure dependence is like seeing an image of an object in a cloud formation: an imagined mental projection that doesn't have any meaningful underlying explanation. You could make the same argument for pretty much anything (e.g. the double-slit experiment is just projecting some mental patterns onto random behavior) and I don't think it's a serious argument in this case either.
And research on syntactic surprisal—where more predictable syntactic structures are processed faster—shows a strong correlation between the probability of a syntactic continuation and reading times.
>In the case of language you observe that there is structure dependence, you ask why it's that way and not another (like linear order) and try to come up with an explanation. You are trying to suggest that the observation that language has structure dependence is like seeing an image of an object in a cloud formation: an imagined mental projection that doesn't have any meaningful underlying explanation.
No, I'm suggesting that all you're doing here is cooking up some very nice fiction, like Newton did when he proposed his model of gravity. Grammar does not even fit into rule-based hierarchies all that well. That's why there are a million strange exceptions to almost every 'rule'. Exceptions that have no sensible explanations beyond 'well, this is just how it's used', because of course that's what happens when you try to break down an inherently probabilistic process into rigid rules.
> And research on syntactic surprisal—where more predictable syntactic structures are processed faster—shows a strong correlation between the probability of a syntactic continuation and reading times.
I'm not sure what this is supposed to show? If I can predict what you are going to say, so what? I can predict you are going to pick something up too if you are looking at it and start moving your arm. So what?
The third paper looks like a similar argument. As far as I can tell, neither paper 1 nor paper 2 proposes a probabilistic model for language. 1 talks about how certain language features are acquired faster with more exposure (which isn't inconsistent with a deterministic grammar). I believe 2 is the same.
> No, I'm suggesting that all you're doing here is cooking up some very nice fiction, like Newton did when he proposed his model of gravity.
Absolutely bonkers to describe Newton's model of gravity as "fiction". In that sense every scientific breakthrough is fiction: Bohr's model of the atom is fiction (because it didn't use quantum effects), Einstein's gravity will be fiction too when physics is unified with quantum gravity. No sane person uses the word "fiction" to describe any of this, it's just scientific refinement: we go from good models to better ones, patching up holes in our understanding, which is an unceasing process. It would be great if we could have a Newton-level "fictitious" breakthrough in language.
> Grammar does not even fit into rule-based hierarchies all that well. That's why there are a million strange exceptions to almost every 'rule'. Exceptions that have no sensible explanations beyond 'well, this is just how it's used', because of course that's what happens when you try to break down an inherently probabilistic process into rigid rules.
No one is saying grammar has been solved, people are trying to figure out all the things that we don't understand.
>I'm not sure what this is supposed to show? If I can predict what you are going to say, so what?
If the speed of your understanding varies with how frequent and predictable syntactic structures are then your understanding of syntax is a probabilistic process. A strictly non-probabilistic process would have a fixed, deterministic way of processing syntax, independent of how often a structure appears or how predictable it is.
>I can predict you are going to pick something up too if you are looking at it and start moving your arm. So what?
Ok? This is very interesting. Do you seriously think this prediction right now isn't probabilistic? You estimate, not from rigid rules but from past experience, that it's likely I will pick it up. What if I push it off the table? You think that isn't possible? What if I grab the knife in my bag while you're distracted and stab you instead? Probability is the reason you picked that option out of the myriad of options.
>Absolutely bonkers to describe Newton's model of gravity as "fiction". In that sense every scientific breakthrough is fiction: Bohr's model of the atom is fiction (because it didn't use quantum effects), Einstein's gravity will be fiction too when physics is unified with quantum gravity. No sane person uses the word "fiction" to describe any of this, it's just scientific refinement: we go from good models to better ones, patching up holes in our understanding, which is an unceasing process. It would be great if we could have a Newton-level "fictitious" breakthrough in language.
"All models are wrong. Some are useful" - George Box.
There's nothing insane about calling a spade a spade. It is fiction and many academics do view it in such a light. It's useful fiction, but fiction nonetheless. And yes, Einstein's theory is more useful fiction.
Grammar is a model of language. It is not language.
> If the speed of your understanding varies with how frequent and predictable syntactic structures are then your understanding of syntax is a probabilistic process.
In what sense? I don't see how it tells you anything if you have the sentence "The cat ___ " and then you expect a verb like "went" but you could get a relative clause like "that caught the mouse". The sentence is interpreted deterministically not by what might follow a fragment but by what it does contain. If you are more "surprised" by the latter it doesn't tell you that the process is not deterministic.
> Ok? This is very interesting. Do you seriously think this prediction right now isn't probabilistic? You estimate, not from rigid rules but from past experience, that it's likely I will pick it up. What if I push it off the table? You think that isn't possible? What if I grab the knife in my bag while you're distracted and stab you instead?
I think you are confusing multiple things. I can predict actions and words, that doesn't mean sentence parsing/production is probabilistic (I'm not even sure exactly what a person might mean by that, especially with respect to production) nor does it mean arm movement is.
> "All models are wrong. Some are useful" - George Box. There's nothing insane with calling a spade a spade. It is fiction and many academics do view it in such a light. It's useful fiction, but fiction none the less. And yes, Einstein's theory is more useful fiction. Grammar is a model of language. It is not language.
I have no idea what you are saying: calling grammar a "fiction" was supposed to be a way to undermine it but now you are saying that it was some completely trivial statement that applies to the best science?
>In what sense? I don't see how it tells you anything if you have the sentence "The cat ___ " and then you expect a verb like "went" but you could get a relative clause like "that caught the mouse". The sentence is interpreted deterministically not by what might follow a fragment but by what it does contain. If you are more "surprised" by the latter it doesn't tell you that the process is not deterministic.
The claim isn't about whether the ultimate interpretation is deterministic; it's about the process of parsing and expectation-building as the sentence unfolds.
The idea is that language processing (at least in humans and many computational models) involves predictions about what structures are likely to come next. If the brain (or a model) processes common structures more quickly and experiences more difficulty and higher processing times with less frequent ones, then the process of parsing sentences is very clearly probabilistic.
Being "surprised" isn't just a subjective experience here - it manifests as measurable processing costs that scale with the degree of unexpectedness. This graded response to probability is not explainable with purely deterministic models that would parse every sentence with the same algorithm and fixed steps.
>I have no idea what you are saying: calling grammar a "fiction" was supposed to be a way to undermine it but now you are saying that it was some completely trivial statement that applies to the best science?
None of my comments undermine grammar beyond saying it is not how language works. I preface 'fiction' with the word useful multiple times and make comparisons to Newton.
> If the brain (or a model) processes common structures more quickly ... then the process of parsing sentences is very clearly probabilistic.
This isn't true. For one, more common sentences are probably structurally simpler, and structurally simpler sentences are faster to process. You also get in bizarre territory when you can predict what someone is going to say before they say it: Obviously no "parsing" has occurred there so the fact that you predicted it cannot be evidence that parsing is probabilistic. If that is the case then a similar argument is true if you have only a sentence fragment. The probabilistic prediction is some ancillary process, just as being able to predict that a cup is going to fall doesn't make my vision a probabilistic process in any meaningful sense. If for some reason I couldn't predict, I could still see and I could still parse sentences.
Furthermore, you can obviously parse sentences and word sequences you have never seen before (and sentences can be arbitrarily complex/nested, at least up to your limits on memory). You can also parse sentences with invented terms.
Most importantly it's not clear how sentences are produced in the mind in this model. Is the claim that you somehow start with a word and produce some random most-likely next word? Do you not believe in syntax parse trees?
Finally, (as Chomsky points out in the video I linked) this model doesn't account for structure dependence. For example why is the question form of the sentence "The man who is tall is happy" "Is the man who is tall happy?" and not "is the man who tall is happy?". Why not move the first "is" that you come across?
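Here's a minimal sketch of that point (a toy example of mine, with the structure hand-annotated rather than parsed): the linear rule fronts the wrong auxiliary, while the structure-dependent rule fronts the right one.

    # "the man who is tall is happy" -> question form
    sentence = "the man who is tall is happy".split()

    def front_linear(words):
        # linear-order rule: move the first "is" you come across
        i = words.index("is")
        return ["is"] + words[:i] + words[i+1:]

    def front_structural(words):
        # structure-dependent rule: move the main-clause "is", i.e. the one
        # outside the relative clause "who is tall" (hand-annotated here as
        # the last "is"; a real parser would find it from the tree)
        i = len(words) - 1 - words[::-1].index("is")
        return ["is"] + words[:i] + words[i+1:]

    print(" ".join(front_linear(sentence)))      # "is the man who tall is happy" (wrong)
    print(" ".join(front_structural(sentence)))  # "is the man who is tall happy" (right)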
> In a strictly deterministic model, both continuations ("went" or "that caught the mouse") would be processed through the same fixed algorithm with the same computational steps, regardless of frequency. The parsing mechanism wouldn't be influenced by prior expectations
Correct. You seem to imply that is somehow unreasonable. Computer parsers work this way.
> Being "surprised" isn't just a subjective experience here - it manifests as measurable processing costs that scale with the degree of unexpectedness. This graded response to probability is not explainable with purely deterministic models.
Again, there are two orthogonal concepts: Do I know what you are going to say next or how you are going to finish your sentence (and possibly something like strain or slowed processing when faced with an unusual concept) and what process do I use to interpret the thing you actually said.
> None of my comments undermine grammar beyond saying it is not how language works. I preface 'fiction' with the word useful multiple times and make comparisons to Newton.
Again, I have no idea what the point of describing universal grammar as fiction is if you say the term applies to all other great scientific theories.
>This isn't true. For one, more common sentences are probably structurally simpler, and structurally simpler sentences are faster to process.
Common sentences are not necessarily structurally simpler, and those still get processed faster, so yes, it's pretty true.
>You also get in bizarre territory when you can predict what someone is going to say before they say it: Obviously no "parsing" has occurred there so the fact that you predicted it cannot be evidence that parsing is probabilistic.
Of course parsing has occurred: your history with this person (and people in general), what you know he likes to say, his mood and body language. Still probabilistic.
>Furthermore, you can obviously parse sentences and word sequences you have never seen before (and sentences can be arbitrarily complex/nested, at least up to your limits on memory). You can also parse sentences with invented terms.
So? LLMs can do this. I'm not even sure why you would think probabilistic predictors couldn't.
>Most importantly it's not clear how sentences are produced in the mind in this model. Is the claim that you somehow start with a word and produce some random most-likely next word? Do you not believe in syntax parse trees?
That's one way to do it, yeah. Why would I 'believe in it'? Computers that rely on it don't work anywhere near as well as those that don't. What evidence is there of it being anything more than a nice simplification?
>Finally, (as Chomsky points out in the video I linked) this model doesn't account for structure dependence. For example why is the question form of the sentence "The man who is tall is happy" "Is the man who is tall happy?" and not "is the man who tall is happy?". Why not move the first "is" that you come across?
Why does an LLM that encounters a novel form of that sentence generate the question form correctly?
You are giving examples that probabilistic approaches are clearly handling as if they are examples that probabilistic approaches cannot explain. It's bizarre.
>Correct. You seem to imply that is somehow unreasonable. Computer parsers work this way.
I'm not implying it's unreasonable. I'm telling you the brain clearly does not process language this way because even structurally simple but uncommon syntax is processed slower.
>Again, I have no idea what the point of describing universal grammar as fiction is if you say the term applies to all other great scientific theories
What's the point of describing Newton's model as fiction if I still teach it in high schools and Universities? Because erroneous models can still be useful.
>Again, there are two orthogonal concepts: Do I know what you are going to say next or how you are going to finish your sentence (and possibly something like strain or slowed processing when faced with an unusual concept) and what process do I use to interpret the thing you actually said.
The brain does not comprehend a sentence without trying to predict its meaning. They aren't orthogonal. They're intrinsically linked
> "Of course parsing has occurred. Your history with this person (and people in general) and what you know he likes to say, his mood and body language. Still probabilistic."
This is just redefining terms to be so vague as to make rational inquiry or discussion impossible. I don't know what re-definition of parsing you could be using that would still be in any way useful, or what "probabilistic" in that case is supposed to apply to.
If you are saying that the brain is constantly predicting various things, so that some process that doesn't itself involve prediction automatically gets labeled probabilistic, then that is just useless.
> Common sentences are not necessarily structurally simpler, and those still get processed faster, so yes, it's pretty true.
Well, I'll have to take your word for it as you haven't cited the paper, but I would point to the reasonable explanation of different processing times, which has nothing to do with parsing, that I gave further below. But I will repeat the vision analogy: If I had an experiment that showed that I took longer to react to an unusual visual sequence, we would not immediately conclude that the visual system was probabilistic. The more parsimonious explanation is that the visual system is deterministic and some other part of cognition takes longer (or is recomputed) because of the "surprise".
> So? LLMs can do this. I'm not even sure why you would think probabilistic predictors couldn't.
It's not about capturing it in statistics or having an LLM produce it, it's about explaining why that rule occurs and not some other. That's the difference between explanation and description.
> That's one way to do it, yeah. Why would I 'believe in it'? Computers that rely on it don't work anywhere near as well as those that don't. What evidence is there of it being anything more than a nice simplification?
Because producing one token at a time cannot produce arbitrarily recursive structures the way sentences can? Because no language uses linear order? Because when we express a thought it usually can't be reduced to a single start word and statistically most-likely next-word continuations? It's also irrelevant what computers do, we are talking about what humans do.
> Why does an LLM that encounters a novel form of that sentence generate the question form correctly?
That isn't the question. The question is why it's that way and not another. It's as if I ask why do the planets move in a certain pattern and you respond with "well why does my deep-neural-net predict it so well?". It's just nonsense.
> You are giving examples that probabilistic approaches are clearly handling as if they are examples that probabilistic approaches cannot explain. It's bizarre.
No probabilistic model has explained anything. You are confusing predicting with explaining.
> I'm not implying it's unreasonable. I'm telling you the brain clearly does not process language this way because even structurally simple but uncommon syntax is processed slower.
I explained why you would expect that to be the case even with deterministic processing.
> What's the point of describing Newton's model as fiction if I still teach it in high schools and Universities? Because erroneous models can still be useful.
Well as I said this is also true of Einstein's theory of gravity and you presumably brought up the point to contrast universal grammar with that theory rather than point out the similarities.
> The brain does not comprehend a sentence without trying to predict its meaning. They aren't orthogonal. They're intrinsically linked
The brain is doing lots of things, we are talking about the language system. Again, if instead we were talking about the visual system no one would dispute that the visual system is doing the "seeing" and other parts of the brain are doing predicting.
In fact they must be orthogonal because once you get to the end of the sentence, where there are no next words to predict, you can still parse it even if all your predictions were wrong. So the main deterministic processing bit (universal grammar) still needs to be explained and the ancillary next-word-prediction "probabilistic" part is not relevant to its explanation.
What exactly is wrong? The fact that grammars are very limited models of human languages? My key thesis is that human languages operate in a way that non-probabilistic models (i.e. grammars) can only describe in a very lossy way.
Sure, LLMs are also lossy but also much more scalable.
I've spent quite a lot of time with 90s/2000s papers on the topic, and I don't remember any model useful in generating human language better than "stochastic parrots" do.
As I said there are universal rules that human language processing follows (like hierarchical structure dependence); you can't have arbitrary syntax/grammars. It's true that science hasn't solved the main puzzles about how to characterize these rules.
The fact that statistical models are better predictors than the-"true"-characterization-that-we-haven't-figured-out-yet is completely irrelevant, just as it would be irrelevant if your deep-learning net was a better predictor of the weather: it wouldn't imply that the weather doesn't follow rules in physics, regardless of whether we knew what those rules were.
> As I said there are universal rules that human language processing follows (like hierarchical structure dependence); you can't have arbitrary syntax/grammars.
GP didn't say anything about grammars being arbitrary. In fact, his claim that grammars are models of languages would mean the complete opposite.
I don't think they have a consistent understanding of the word "grammar": they seem to use it in the grade-school sense (grammar for English, grammar for French) but then refer to Chomsky's universal grammar which is different (grammar rules that are common to all languages).
The main point of contention is their statement that "grammar follows language" which, in the Chomsky sense, is false: (universal) grammar/syntax describes the human language faculty (the internal language system) from which external languages (English, French, sign language) are derived, so (external) languages follow grammar.
Yes, I was a bit vague. If we are to be serious then we would have to come up with definitions of grammar-based approaches vs stochastic approaches.
All I am saying is that grammars (as per Chomsky) or even high-school rule-based stuff are imperfect and narrow models of human languages. They might work locally, for a given sentence, but fall apart when applied to the problem at scale. They also (by definition) fail to capture both more subtle and more general complexities of languages.
And the universal grammar hypothesis is just that - a hypothesis. It might be convenient at times to think about languages in this way in certain contexts but that's about it.
Also, remember, this is Hacker News, and I am just a programmer who loves his programming/natural languages so I look at everything from a computational point of view.
All this comes down to is that language is not a solved problem. By the same logic why not just stop doing any research in physics and just put everything through a neural net which is going to give better predictions than the current best theories?
The fact that a deep-neural-net can predict the weather better than a physics-based model does not mean that the weather is not physics-based. Furthermore deep-neural-nets predict but don't explain while a physics-based model tries to explain (and consequently predict).
The correlations are 0.25-0.5, which is quite poor (Gaussian distribution plots with those correlations look like noise). That's before analyzing the methodology and assumptions.
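For a feel of how weak that is, here's a quick simulation (mine, not from the paper):

    import numpy as np

    # Draw bivariate Gaussian samples with correlation r = 0.3
    rng = np.random.default_rng(0)
    r = 0.3
    x, y = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=10_000).T

    print(np.corrcoef(x, y)[0, 1])  # ~0.3, as constructed
    print(r ** 2)                   # 0.09: x explains only ~9% of the variance in y

A scatter plot of x against y at r = 0.3 is visually close to a round blob of noise.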
Whether a correlation of 0.25-0.5 is poor is very problem dependent.
For example, in difficult perceptual tasks ("can you taste which of these three biscuits is different" [one biscuit is made with slightly less sugar]), a correlation of 0.3 is commonplace and considered an appropriate amount of annotator agreement to make decisions.
Yes for certain things like statistical trading (assuming some kind of "nice" Gaussian-like distribution) where you have lots of trades and just need to be more right than wrong it's probably useful.
Not here though, where you are trying to prove a (near) equivalence.
There are shell options you need to set to, for example, make shell history saving work when multiple terminals are used (the defaults are bad). Read the manual.
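For example, for bash (an assumption on my part; zsh has analogous options like INC_APPEND_HISTORY and SHARE_HISTORY), this in ~/.bashrc is the usual fix:

    shopt -s histappend          # append to the history file instead of overwriting it
    HISTSIZE=100000              # keep a long in-memory history
    HISTFILESIZE=100000          # ...and a long history file
    PROMPT_COMMAND='history -a'  # flush each command to the file as it runs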
> ... meeting the same strict regulatory noise levels as the latest subsonic airplanes
Extremely dishonest: as far as I can tell (CFR title 14, B36.5) there are no specific noise level regulations for subsonic cruise flight (i.e. not take-off and landing) because you can't hear subsonic aircraft at cruise altitude. On the other hand, however, you will be able to hear sonic booms.
It's intentionally misleading: they are technically saying they will meet the takeoff and landing requirements (which they are required to meet by law) but implying that the plane is going to be quiet at cruise (which they want to perform over the continental United States, not just over the ocean).
Moreover, their statement falsely suggests that Concorde does not "[meet] the same strict regulatory noise levels as the latest subsonic airplanes" but 36.301 says that Concorde also has to meet the same standards as subsonic planes (standards which exclude operation at cruise which didn't matter for Concorde because it was over the Atlantic).
It’s amazing the checkboxes that stick out: having a dog for no reason for dog lovers; the relationship slop that appeals to women; the violence and sex slop to appeal to men.
I wouldn't describe myself as a "heavy pornography consumer", but I certainly get bored by the gratuitous sex scenes in many shows and movies these days, thinking, "I can get this and much more any time I want, so can we stop with it and move the plot and/or character development along please?"
I don't remember the program, but in the years of broadcast TV there was a writer on a nightly talk show explaining why all TV episodes were so bland. He said that he wrote an intricate plot for TV which was rejected because the show had to be watchable by someone doing the dishes. So this isn't a phenomenon new to the Netflix era.
Many years ago when I was in college, one of my professors wrote a Star Trek: The Next Generation script, and she talked about how the producers pretty much destroyed her story by insisting she stick to the formula, such as "between X and Y minutes, the Enterprise or one of the main characters must be in danger. That danger must be resolved by minute Z." Sigh.
Since not every episode follows that formula, I wonder if that's a requirement specifically of spec script writers because they'd want to keep the more important/interesting episodes written by staff.
What are you talking about? The stock can go down so that isn’t guaranteed. The bonds can also go down if bitcoin goes down enough. The convertible bonds can go down for the same reasons.
I used my 401k to invest in MSTR. In fact, when I figured out what Saylor was doing, I put all of it in MSTR, in 2021. I'm up 700-800% right now.
It wasn't an easy ride, but long term it was the right choice for me. It isn't that much money because I stopped contributing to it about 15 years ago (instead opting to do what you said, buy BTC directly), but my 401k has never seen that sort of growth.
With the fairly recent split, I can now sell covered call options. I generally sell OTM and then use the proceeds from that to buy more MSTR. My goal is to compound the stock, not dollars. It will split again, enabling me to compound again.
Once you spend the time understanding what he is doing, you will understand the value in investing in both. It definitely takes an acceptance that Bitcoin won't go to zero, but if you can get past that, as he has done, then the risk profile changes dramatically.
I am also waiting for someone to explain it coherently.
It essentially allows you to buy bitcoin at 3x the market price (MSTR's market cap is about 3x the value of the bitcoin it holds).
People seem to take that as meaning that the future stock price will grow at a faster rate than the price of bitcoin, but that would require the premium to grow to an even larger factor than the current 3x. It also ignores the reality that a falling bitcoin price should completely wipe out the premium of stock price over net asset value (if we are to apply any common sense).
Of course I might be missing some great insight that makes it something other than a really stupid gamble.
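To spell out the arithmetic with a back-of-envelope sketch (all numbers hypothetical, not actual quotes):

    # A 3x premium over net asset value, in round numbers
    btc_nav    = 40e9     # assumed value of the bitcoin held
    market_cap = 120e9    # assumed market cap -> premium of 3x
    print(market_cap / btc_nav)  # 3.0: each $1 of bitcoin exposure costs $3 of stock

    # If bitcoin halves and the premium collapses to 1x, the stock falls
    # far more than bitcoin itself:
    new_cap = (btc_nav * 0.5) * 1.0
    print(new_cap / market_cap)  # ~0.17: roughly an 83% drawdown on a 50% BTC drop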
I would invest in MicroStrategy if I cared about Bitcoin, because I don't want to mess with wallets or pay someone to manage my wallet.
Banks can do fractional reserve banking to 'pay' for the expense of storing our money. Can crypto wallets do such a thing? Or would they have to borrow money?
Some exchanges like Coinbase are basically banks at this point.
You open an account, you do KYC, you declare it to the state, and they are subject to regulations in your country.
Coinbase is an expensive one, but way cheaper than MicroStrategy, and it's from Y Combinator S12, so you know what to expect.
Deal with an exchange directly; it's cheaper and more flexible. No wallet to manage, and you can handle a few million before even having to talk to a human.
Of course you lose a lot of the advantages of crypto, but you were going to with MicroStrategy anyway.
Or buy an etf if you really like your bank.
Although I would argue now is the worst time to buy. You had to buy last year, or wait another two years, assuming the halving cycle is still relevant, which I believe it is.
Most crypto holders expect that we are not that far from the top and that the usual crash back is near.
Some of my friends are still in, but have placed their exit orders already. I'm already out with a 500% profit and am playing it safe.
You may be able to get some more, but it's getting riskier by the day.
And maybe the cycle will break, but it's really not what I would bet on.