
IANAL but I believe the legal theory is they stole proprietary information from Coinbase (information about an upcoming listing) and used it for personal profit. The "victims" in this view are NOT users of the exchange (or the broader cryptocurrency market) but rather shareholders of Coinbase.


Almost all crypto lending (at least the defi/smart-contract variety) is collateralized lending. There's still a sort of "counterparty" risk, in that the management has to operate price feeds/risk algos to make sure the collateral stays safely above the loan value.
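A minimal sketch of that mechanism, with made-up names and numbers (real protocols differ in the details):

    # Hypothetical health check for a collateralized crypto loan.
    # The price comes from an oracle/price feed operated by management;
    # if that feed is wrong or stale, this is where the residual
    # "counterparty" risk bites.
    def is_safely_collateralized(collateral_amount, collateral_price,
                                 loan_value, liquidation_ratio=1.5):
        collateral_value = collateral_amount * collateral_price
        return collateral_value >= loan_value * liquidation_ratio

    # e.g. 10 ETH at $400 backing a $2,000 loan -> 2x collateralized
    print(is_safely_collateralized(10, 400, 2000))  # True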


Analyses like this just take the energy cost of each block of transactions and divide it by the number of transactions in the block. If the number of transactions in a block doubles, the average energy cost per transaction is halved.

Another way of stating it is "the marginal energy usage of a bitcoin transaction is essentially 0".
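A toy illustration of that division, with made-up numbers (real per-block energy estimates vary widely):

    # The block costs the same to mine no matter how full it is, so
    # "energy per transaction" is just total divided by count.
    energy_per_block_kwh = 1_000_000  # hypothetical figure
    for tx_count in (1000, 2000):
        print(tx_count, "txs ->", energy_per_block_kwh / tx_count, "kWh/tx")
    # Doubling tx_count halves the per-transaction figure while total
    # energy use is unchanged.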


Except that there is a maximum number of transactions per block. Across the entire network, by design, there can never be more than 5 transactions per second. This is stupidly small. If people received their biweekly paychecks in bitcoin, only 6 million people could be paid without going over that transaction limit, assuming that absolutely nothing else is done using bitcoin.
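The arithmetic behind that figure, taking the 5 tx/s limit as given:

    tps = 5                    # assumed network-wide transaction limit
    seconds_per_day = 86_400
    days_per_paycheck = 14     # biweekly pay
    print(tps * seconds_per_day * days_per_paycheck)  # 6,048,000 people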

The marginal energy usage being zero is another way of saying that Bitcoin wastes the tremendous amount of energy that it does even if nobody is using it at the time.


Yes, but the number of transactions in 'bitcoin' as it's currently defined is severely limited (blocks are already full, at a cost of around $15 per transaction). Conceptually, increasing the transaction count means forking bitcoin, and a fork which aimed to do exactly that was denounced heavily by the community and rejected by the markets. The marginal cost may be small, but the cost per transaction is high by design, and that design has extremely heavy resistance to even small and simple changes.


But the number of transactions in a block is fixed.

We went through this with BTC/BCH. The BTC folks did not want to increase block size.


Played around with Bitmessage for a bit and worked on an alternative. Here are some thoughts:

1. Like most gossip networks, it uses TCP. Since most consumer devices won't allow incoming TCP connections, the end result is that most traffic gets routed through the small fraction of nodes on cloud servers. While this is true for most gossip networks, it is particularly problematic when you're using it for bandwidth-intensive applications (a twitter/parler alternative).

2. Using PoW for spam prevention is better than nothing, but the PoW algorithm is a simple SHA-256 hash, so SHA-256 ASICs will keep spam cost-effective (see the sketch after this list). Not sure if there's any solution. I think using some kind of crypto-based incentive would be better economics, though it of course adds the user burden of acquiring crypto.

3. Bitmessage IMHO tries to be too many things with a message storage/rebroadcast protocol on top of a gossip network. All of these suffer from less than great documentation.
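For point 2, here's a minimal sketch of a hashcash-style proof of work along the lines described above (simplified; Bitmessage's actual target calculation is more involved):

    import hashlib
    import struct

    def pow_nonce(payload: bytes, target: int) -> int:
        # Grind nonces until the double SHA-256 of (nonce || payload)
        # falls below the target. An ASIC runs exactly this loop, just
        # orders of magnitude faster, which is why a commodity hash
        # makes for weak spam protection.
        nonce = 0
        while True:
            trial = struct.pack(">Q", nonce) + payload
            digest = hashlib.sha256(hashlib.sha256(trial).digest()).digest()
            if int.from_bytes(digest[:8], "big") < target:
                return nonce
            nonce += 1

    # Lower target = more expected work; this value is arbitrary.
    print(pow_nonce(b"hello", 2 ** 50))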


For 2, the system is just broken. There's no PoW cost that is both acceptable for a legitimate user sending a message (say, 10 minutes of proof of work) and high enough to actually reduce spam. If someone sends me a message every 10 minutes the service is unusable, but making the proof of work more expensive makes it unusable for legitimate senders too. This is why the original proof-of-work scheme for email was broken as well.


Someone has to know your public key to send you a message in the first place.


Nah, they can be sniffed.


Regarding PoW: I think that there are two improvements that could be implemented.

1. Allow sending messages without PoW to friends. If you want to send a message to a stranger, you still need to do PoW. That should not happen often, so the PoW bar could be higher.

2. Mobile clients probably won't be able to perform adequate PoW, so there should be an option to delegate PoW to some server for money. That's not a protocol issue, though.

2.1. Maybe instead of paying for PoW, it would be better to send money directly to the receiver. That would complicate the protocol, though.


How does this compare with Jami - https://jami.net/ ?


Re: point 1. The bittorrent protocol somehow manages to get around this problem. I wonder if that mechanism could be used here.


> Since most consumer devices won't allow for incoming tcp connections

I am curious: why do you say that?


Spent a lot of time/money this election cycle betting on the spread between PredictIt and 538. I did act more conservatively by only making a bet when it made sense given all 3 versions of 538's model.

ROI around 12% (not including 5% withdrawal fee). Expect it to go a few points higher, given that called elections are still trading at 90c but PredictIt won't close those markets due to ongoing litigation.


> not including 5% withdrawal fee

The withdrawal fee they have is a confounding variable when interpreting the prices of the options on PredictIt.

I've seen many slam-dunk contracts trading at $0.96 or $0.97 a share. At first I thought it would be a good idea to buy as many as the platform would allow me, and get the guaranteed ~4% return. But then I remembered the withdrawal fee and realized I'd still lose. And then I realized that any other potential buyer would do the same.
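Concretely, with a $0.97 share and the 5% withdrawal fee:

    price = 0.97           # cost of a "slam dunk" share
    payout = 1.00          # value at resolution
    withdrawal_fee = 0.05  # taken when cashing out
    net = payout * (1 - withdrawal_fee)
    print(round(net - price, 2))  # -0.02: the sure thing is a loss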

I suppose if you already have money deposited there, and don't know where else to park it, then buying these contracts would be the in-universe equivalent of "parking it in treasuries." But I'd expect the pricing to become more accurate if that 5% transaction cost were eliminated.


Right, the 5% withdrawal fee only applies when you withdraw, meaning you could still make a profit continuously buying 95%+ markets.

The real reasons I think these often don’t follow what you expect are:

- Lots of people set sell orders at 90-99 since closing times are often uncertain and there are other opportunities that at least appear to be more profitable.

- As you diverge from 50/50, it gets cheaper and cheaper for crazy people to buy up the other side of a bet. This is not always unprofitable, either; they just have to sell it to a greater fool. This is with PredictIt’s $850 limit, at least.


I should add to my comment: a lot of the markets I was looking at (months ago) were based on this election. So once I bought in, the money would be tied up for months. Even when I won, after taking into account the withdrawal fee, it would have been a loss. (EDIT: Assuming I deposited the money for that specific market, and it wasn't already on the platform facing down a future 5% fee.)

If there are $0.95 markets that resolve in the near future, then yeah, I'd roll my money from one to the next. But now I'm wondering if those are more likely to be correctly-priced. (Because other "investors" are thinking the same thing, right?) So now I'm just picking up nickels in front of a steamroller.

I can't remember the exact markets where I saw all these 95-cent shares, but the odds of them failing were far below 1/20. Things like Hillary Clinton winning the Democratic nomination, or California voting for Trump. These are the same markets that, within a month of closing, were at 1Y/99N.

So any market that's going to resolve very soon, and is still at 5/95, probably represents closer to a true 1/20 odds, and now it's just regular old gambling :)


Many crypto exchanges like Poloniex and FTX had futures contracts for the election which paid $1 on expiration without any withdrawal fee. There are more places to trade now than PredictIt, although PredictIt has far more esoteric election markets than what’s available in crypto.


I'm aware; I'm just hesitant due to the counterparty/smart-contract security risk. In a few years, when some of these platforms have more of a track record, I'll probably start using them. Seems like Polymarket has done pretty well this cycle.


Why? There's no need to use some game-theoretic machine for getting data when you can just get it direct from the source.


You're comparing two scenarios, one in which you know all the facts, and one in which you don't.

In the dice toss scenario, we know everything relevant. In the election scenario, we don't.

A model like this is attempting to say "these are the rules we think exist. Based on the rules, and assuming the data is off by some random distribution, here's what we think could happen".

What different forecasters disagree about is what the rules are. For example, the relevance of certain demographic characteristics and the potential variance between polling (conducted prior to the election) and actual election results.

There are a huge number of assumptions, and forecasters disagree on those assumptions. We have very little historical data (scientific polling is relatively recent), and even with complete historical data, future elections do not always conform to past elections.
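A stripped-down sketch of that kind of model, with completely made-up numbers (real forecasts like 538's add correlated state errors, demographics, polling-house effects, and more):

    import random

    def win_probability(poll_margin, assumed_poll_error, runs=10_000):
        # "The rules we think exist": true margin = polled margin plus
        # an error drawn from an assumed distribution. Forecasters
        # mostly disagree about that distribution, not the arithmetic.
        wins = sum(poll_margin + random.gauss(0, assumed_poll_error) > 0
                   for _ in range(runs))
        return wins / runs

    # Same poll, different error assumption -> very different forecast.
    print(win_probability(poll_margin=2.0, assumed_poll_error=3.0))
    print(win_probability(poll_margin=2.0, assumed_poll_error=6.0))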


I will veer this off into the dreaded political territory even though this is mostly a technical discussion.

The Democratic Party proved it was not as progressive as they thought when Sanders lost the primary. The reality is, the country as a whole is not that liberal either, regardless of what these pollsters are asking people. You think the party is youthful and ready for progressive ideas, but alas, the party wholly rejected an amazingly progressive candidate in Sanders. You think everyone’s super pissed about the coronavirus handling, police brutality, and healthcare, but alas, you find out people associate BLM protests with crime, the virus with China, and socialism with unfair wealth redistribution. We can keep learning this the hard way, I guess; this is America after all.

It’s important the technical discussions are happening this time around, because there was virtually none the last time. The post mortems for these forecasts being wrong again should be a death knell for accumulating bad data. I’m certain the models are good, but I’m not certain the data is.

Anyway, if you want my hot take, the conditional forecasting is to save their ass on election night from being embarrassingly wrong again. Imagine writing a giant if-statement that looked something like ‘and if(imWrong) changeMyAnswer’.


> Anyway, if you want my hot take, the conditional forecasting is to save their ass on election night from being embarrassingly wrong again.

Well, Nate Silver wrote a critically acclaimed book about why these types of forecasts are more useful (and accurate) in reality, because they account for uncertainty. He has been doing this for years, ever since he wrote similar algorithms to help bookies pick odds for sporting events, so I think your hot take isn’t based in any world of facts or knowledge on this.

Don’t trust a forecaster that says with certainty that a certain candidate will win, unless they have also bet their life’s earnings on it. Showing your statistical confidence level isn’t a bad thing.


I think it’s certainly more grounded in reality if you realize 538 is basically finished if they miss the mark again.

If you listen to what they say, they admit they were not able to measure for the no-college male demographic in 2016, or in other words, they couldn’t model identity politics. Why couldn’t they do that? I’m not sure, but they are certain they can this time around because they saw the 2016 data and now believe they have more complete data to not make the same mistake again.

They are looking at elections as if there are hundreds of millions of elections happening every day and the data speaks for itself. No, sorry, there are very few elections to extrapolate from the way they are doing it, and you really need to do sociopolitical analysis of things like a demographic identity bloc (no-college whites that feel some way about things) to get at the real undercurrents that can sway an election.

Lastly, it doesn’t take a genius to sit there at 10pm on election night and go ‘well if Florida and Michigan went this way, then probably so will these other states in flux’. ‘Our forecast becomes more accurate as we get the actual poll closing numbers on election night’, ah I see, you’re all geniuses, I should have known.

Anyways, we’ll know soon enough.


> If you listen to what they say, they admit they were not able to measure for the no-college male demographic in 2016, or in other words, they couldn’t model identity politics. Why couldn’t they do that? I’m not sure,

You seem to have a fundamental misunderstanding of what FiveThirtyEight is trying to model, versus what pollsters are trying to model with the numbers they publish that FiveThirtyEight consumes. The kind of demographic weighting you're complaining about FiveThirtyEight being bad at is something the pollsters do, and is outside the scope of FiveThirtyEight's forecasting models.


> If you listen to what they say, they admit they were not able to measure for the no-college male demographic in 2016, or in other words, they couldn’t model identity politics. Why couldn’t they do that? I’m not sure, but they are certain they can this time around because they saw the 2016 data and now believe they have more complete data to not make the same mistake again.

I think you possibly misunderstand what 538 _do_ a bit. Their data is based on polling, so they can only work on what the pollsters do. Historically, pollsters didn't pay that much attention to education, beyond using income or class as a proxy for it; one middle-class white man was pretty much like another. This worked quite well historically, but no longer does (and it's not just a US phenomenon; it was also a contributor to polling problems for Brexit, notably).

In their current model, 538 assume a higher rate of uncertainty than last time round; also, some pollsters now model education. But really there's not that much they can do about stuff that pollsters don't ask about.


No, I don’t think so. If you build a model out of pollsters asking stupid questions, you deserve some blame.

I’ve got some basketball statistics to populate 538’s model if they’re interested. LeBron did pretty well this season; hopefully they can correlate that with the black vote.

Their model is not transparent on any level, because if they make it transparent, we’d easily be able to see why it’s ridiculous.


> I think it’s certainly more grounded in reality if you realize 538 is basically finished if they miss the mark again.

What does missing the mark mean, though? In 2016 they gave Donald roughly a 30% chance of winning, and Hillary a 70% chance. Does that mean they were wrong? Not really, because that's how probabilistic forecasting works - they stated their uncertainty: they were 70% confident that Hillary would win, but thought there was a 30% chance Donald would win.


> The Democratic Party proved it was not as progressive as they thought when Sanders lost the primary.

The FiveThirtyEight forecast for the Democratic primary [1] gave Biden the highest chance of winning for most of the process. He did have a steep drop in the month before Super Tuesday (followed by an equally steep rebound), but still, I wouldn't say the forecast was especially bad. That said, polling is always worse for primaries than general elections, since there are more candidates and fewer voters.

[1] https://projects.fivethirtyeight.com/2020-primary-forecast/


Could this just be a result of low sampling by the author? If you reduce a sample to only include some tiny edge case, the resulting data points are going to be weird in random ways.


No, there's enough data to determine all between-state error correlations.

Edit: Why the downvote? Each of the between-state correlations can be calculated from 40,000 datapoints.
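A sketch of the calculation, assuming you have the model's simulation draws (random placeholder data here):

    import numpy as np

    # Suppose `draws` holds simulated vote-margin errors:
    # 40,000 simulation runs x 51 states (placeholder data below).
    rng = np.random.default_rng(0)
    draws = rng.standard_normal((40_000, 51))

    # All pairwise between-state error correlations at once:
    corr = np.corrcoef(draws, rowvar=False)  # 51 x 51 matrix
    print(corr.shape, corr[0, 1])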


Not really possible, because we don't know how much carbon PoW generates. You could naively just assume it generates the average amount (which is what some analysts have done), but I doubt that would be accurate.


The S&P500 is not an index of the entire economy. It's heavily weighted towards the tech sector.

The hardest hit businesses (retail, restaurants, service) were small parts of the S&P500 because they usually don't have the economies of scale that tech has, and therefore don't produce the mega-caps that dominate the S&P500.


it's supposed to be a broad index representative of most if not all publicly traded industries. tech is heavily represented because it's become the largest sector as far as total market cap goes.

there are equal-weighted indexes that trade - the equal-weighted one is not nearly as recovered as the market-cap-weighted one, but it has recovered significantly nonetheless. energy, transports, utilities, and certain retail companies have been doing well.
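The difference in one toy calculation (made-up market caps and returns):

    # Cap-weighted vs equal-weighted returns for a three-stock "index".
    market_caps = [2000, 50, 30]    # one mega-cap, two smaller firms
    returns = [0.20, -0.40, -0.40]  # tech up, retail/services down

    total_cap = sum(market_caps)
    cap_weighted = sum(c / total_cap * r
                       for c, r in zip(market_caps, returns))
    equal_weighted = sum(returns) / len(returns)

    print(round(cap_weighted, 3))    # 0.177: index looks healthy
    print(round(equal_weighted, 3))  # -0.2: typical company is hurting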

