Not really sure how this is supposed to be low risk -- I think they're omitting the actual interesting parts of the strategy (that are essential to actually making money).
If you want to be the first or close to the first in the queue on a price level, you're there before it becomes big. So then your "risk-free" closing strategy doesn't work, because if you get traded against, you will have to close out your trade at a loss, since the price level is gone (because it wasn't big yet).
But if you wait until the level becomes big, you will no longer be at the front of the queue, so the strategy doesn't work.
If the ask is at $8.03 and you are first in queue at $8.01, all the orders at $8.03 and $8.02 have to be filled before you get filled, so you have plenty of time to cancel your order.
Assuming you mean bid. If the size is only 10 @ $8.03 and 1 each @ $8.02 and $8.01, then someone looking to sell 100 at market price is a problem: the sell order will sweep through multiple levels after filling yours, and it all happens at the same time.
It depends on what you mean by "untradeable". If the market or regulator decides that the stock cannot be traded at all, you might end up paying a lot of money in fees for years even if you were right.
In some markets, such as Hong Kong, instead of being delisted, suspected frauds are often halted (the stock cannot be traded). So you might end up paying the borrow fee for years, even though you were right about the company being a fraud.
I'm really surprised how much a single search query costs them in compute/infrastructure (1.25 US cents). That buys more than a minute of compute time on Hetzner's top VPS.
What's happening behind the scenes during a query?
Getting the compute costs down will probably be essential for growth, since otherwise they'll have to keep prices high ($19/month) to pay for their infrastructure.
It is not compute that is killing them -- it is the Google and Bing APIs they use for many searches, which charge a lot. In your Kagi account you can see exactly how many searches you have done and how much you have cost them (and it could easily be more than your $10 a month if you do a lot of searching).
Happy paying customer here, and agreed. Their cost per search feels very high to me, but I don't know enough about the search space to know what is normal.
I'd be surprised if that isn't their eventual goal, but I don't think it's really possible to deliver the kind of general-purpose engine they want with a team of ~5-10. Hopefully if they're successful and grow they can expand their own indexes.
I'm guessing the query itself is a very minor share of the total cost, with most of it coming from crawling, data movement, storage, redundancy, resilience, etc.
The issue is that the requester is asking for 329,000 pages, and courts apparently previously ruled that a rate of 500 pages/month is reasonable for freedom of information requests.
And it looks like narrower requests for documents would be completed faster: "If Plaintiff decides to request fewer records, then FDA will be able to complete its processing at an earlier date."
But shouldn't requests like this still complete in a timely manner? If the government uses data that falls under freedom of information but effectively doesn't allow people to go over it, isn't that a perversion of the idea of freedom of information?
An alternative, of course, is to treat requests made by multiple people as more important: if we assume 1,000 people requested these documents, it would take less than a month. But IMHO that's a huge slippery slope, where information can effectively be hidden by inflating its size.
The issue is that not all of the data is subject to a freedom of information request, because it may contain PII, trade secrets, etc. Therefore the data must be reviewed to remove information that is not releasable through a FOIA request.
If the government can't distinguish between sensitive and nonsensitive data then that sounds like something that should be fixed instead of being used as an excuse.
So let Pfizer submit requests for redaction. They have the resources to run a trial with ~65,000 people in it, they can throw some lawyers at that problem and get it fixed pronto.
Say: you have two weeks to submit any requests. After that, everything gets released. It's your trade secrets after all.
Clearly, niceties like medical privacy are completely irrelevant when it comes to COVID vaccination. The government already went there and a thousand miles beyond, so who cares. Replace the names with numbers and call it a day. The consequences of bad things being hidden in this data are drastically higher than a few people being deanonymized.
Eh, I'm not sure I agree with that[0], but even if I did, the government skirting (what you think is) the law once certainly should not be a reason for it to do so again in the future. As it stands, they are legally required to redact PII.
If someone actually wants to understand how/why the vaccine was approved, the most sensible option IMO would be to tailor the FOIA Request more narrowly. Request the summary section: it should lay out the rationale, without diving into specific details that might need redaction. The only other alternatives I see are lobbying congress to either a) allocate more money for FOIA or b) make some categories/documents releasable as-is.
OTOH, asking for a third of a million pages--many of which no one has real intention of reading--does seem like a good publicity stunt.
[0] If participating in a clinical trial gets (potentially embarrassing) personal details leaked, we won't have nearly as many volunteers next time.
It's not really surprising that invalid code doesn't mix well with compiler optimizations.
The correct way to deal with this here is probably either _NOT_ using nullptr, and instead using a special null object (similar to the end iterator); or using pointers instead of references (because you really want pointer behaviour).
Their fix is insane, and not something I'd want in production code. Unless you carefully vet your code and compiler version, you should never rely on undefined behaviour.
(There are some cases in the C standard that are technically undefined but that all compiler vendors have essentially agreed to handle the same way, so those are fine. IIRC unions are a common example.)
If you mean type punning: That is fine in C (chapter 6.5.2.3 of the C11 Standard even mentions "type punning" in footnote 95), but not in C++, where it's undefined behavior.
I would just use memcpy. It works even when C code is compiled as C++, and modern compilers know what memcpy does and can optimize the call into a plain move operation: http://blog.regehr.org/archives/959
I'm not sure where you got your numbers for China's debt from. The IMF says that China's debt has decreased (from 26% to 19% of GDP), while the US's has stayed roughly the same (~100%):
General government gross debt (percent of GDP):

  Country          2012  2013  2014  2015
  China              26    23    20    19
  United States     102   104   106   107