
Not really sure how this is supposed to be low risk -- I think they're omitting the actual interesting parts of the strategy (that are essential to actually making money).

If you want to be the first or close to the first in the queue on a price level, you're there before it becomes big. So then your "risk-free" closing strategy doesn't work, because if you get traded against, you will have to close out your trade at a loss, since the price level is gone (because it wasn't big yet).

But if you wait until the level becomes big, you will no longer be at the front of the queue, so the strategy doesn't work.


If the ask is at $8.03 and you are first in the queue at $8.01, all the orders at $8.03 and $8.02 have to be filled before yours, so you have plenty of time to cancel your order.


Assuming you mean bid. If the size is only 10 @ 8.03 and 1 each at 8.02 and 8.01, then someone looking to sell 100 at market is a problem. The sell order will sweep through many levels after filling yours, and it will all happen at the same time.


Apparently that doesn’t happen often enough to make the strategy unprofitable


The point is that you can’t assume you can pull your order


It depends what you mean by "untradeable". If the market or regulator decides that the stock cannot be traded at all, you might end up paying a lot of money in fees for years even if you were right.

https://www.reuters.com/article/muddy-waters-asia-short-sell...


In some markets, such as Hong Kong, instead of being delisted, suspected frauds are often halted (the stock cannot be traded). So you might end up paying the borrow fee for years, even though you were right about the company being a fraud.

See https://www.reuters.com/article/muddy-waters-asia-short-sell...


I'm really surprised how much a single search query is costing them in terms of compute/infrastructure (1.25 US cents). That buys you more than a minute of compute time on the top vps on hetzner. What's happening behind the scenes during a query?

Getting the compute costs down will probably be a major obstacle to growth, since they'll have to keep prices high (19 USD/month) to pay for their infrastructure.


It is not compute that is killing them - it is using the Google and Bing APIs for many searches. They charge a lot for those. In your Kagi account you can see exactly how many searches you have done and how much you have cost them (and it could easily be more than your $10 a month if you do a lot of searching).


They wrote about it before. They use various paid third-party APIs to power their own search. I guess that is the majority of the cost.


Happy paying customer here, and agreed. Their cost per search feels very high to me, but I don't know enough about the search space to know what is normal.


Some of that comes from using Bing and Google search APIs. They have their own index as well but can't fully rely on it yet for all queries.


Is the end goal to wean themselves off those APIs and onto their own index, or will the search space always be dominated by a few major upstream players?


I'd be surprised if that isn't their eventual goal, but I don't think it's really possible to deliver the kind of general-purpose engine they want with a team of ~5-10. Hopefully if they're successful and grow they can expand their own indexes.

You can try them at https://www.teclis.com/ and https://tinygem.org/.


I guess it depends on how much traction they get, both by consumers and also with websites.

AFAIK some websites actively or passively block everyone except Google.


Much of that will be the crawling and indexing, which has to be done regardless of the number of users, right?

So costs per user will go down with the number of users as this fixed cost is shared between more requests/users.


I'm guessing the query itself is a very minor percentage of the total cost, with most of it coming from crawling, data movement, storage, redundancy and resilience, etc.


The issue is that the requester is asking for 329,000 pages, and courts have apparently previously ruled that a rate of 500 pages/month is reasonable for freedom of information requests. At that rate, the full release would take roughly 55 years.

And looks like more narrow requests for documents would be completed faster: "If Plaintiff decides to request fewer records, then FDA will be able to complete its processing at an earlier date."


But shouldn't requests like this still complete in a timely manner? If the government relies on data that is subject to freedom of information but effectively doesn't let people examine it, isn't that a perversion of the idea of freedom of information?

An alternative, of course, would be to treat requests made by multiple people as more important: if we assume 1000 people requested these documents, it would take less than a month. But IMHO that's a huge slippery slope, where information can effectively be hidden by inflating its size.


The issue is that not all of the data is subject to a freedom of information request, because it may contain PII, trade secrets, etc. The data must therefore be reviewed to remove information not releasable through a FOIA request.


If the government can't distinguish between sensitive and nonsensitive data then that sounds like something that should be fixed instead of being used as an excuse.


They can distinguish it... at a rate of around 500 pages per month.


So let Pfizer submit requests for redaction. They have the resources to run a trial with ~65,000 people in it, they can throw some lawyers at that problem and get it fixed pronto.

Say: you have two weeks to submit any requests. After that, everything gets released. They're your trade secrets, after all.


The issue is that's not just Pfizer's trade secrets; the documents may also contain PII (e.g., medical history) of those 65,000 people.


Clearly, niceties like medical privacy are completely irrelevant when it comes to COVID vaccination; the government already went there and a thousand miles beyond, so who cares. Replace the names with numbers and call it a day. The consequences of bad things being hidden in this data are drastically higher than a few people being deanonymized.


Eh, I'm not sure I agree with that[0], but even if I did, the government skirting (what you think is) the law once certainly should not be a reason for it to do so again in the future. As it stands, they are legally required to redact PII.

If someone actually wants to understand how/why the vaccine was approved, the most sensible option IMO would be to tailor the FOIA Request more narrowly. Request the summary section: it should lay out the rationale, without diving into specific details that might need redaction. The only other alternatives I see are lobbying congress to either a) allocate more money for FOIA or b) make some categories/documents releasable as-is.

OTOH, asking for a third of a million pages--many of which no one has any real intention of reading--does seem like a good publicity stunt.

[0] If participating in a clinical trial gets (potentially embarrassing) personal details leaked, we won't have nearly as many volunteers next time.


You stop to refuel in the middle of the trip, rather than making it nonstop.


What will you use multicore support for?


It's not really surprising that invalid code doesn't play well with compiler optimizations.

The correct fix here is probably either _NOT_ using nullptr and instead using a special null object (similar to the end iterator), or using pointers instead of references (because you really want pointer behaviour).

Their fix is insane, and not something I'd want in production code. Unless you carefully vet your code and compiler version, you should never rely on undefined behaviour.

(There are some cases in the C standard that are technically undefined but that all compiler vendors have essentially agreed to handle the same way; those are fine. IIRC unions are a common example.)


> IIRC unions are a common example.

If you mean type punning: that is fine in C (section 6.5.2.3 of the C11 standard even mentions "type punning" in footnote 95), but not in C++, where it's undefined behavior.

I would just use memcpy. It works even when C code is compiled as C++, and modern compilers know what memcpy does and can optimize the call into a move operation: http://blog.regehr.org/archives/959


Type punning is not fine in C99 - you need to use a correctly typed pointer or a char pointer.


You're right. Type punning via pointer cast, e.g.

  int i = ...
  float f = *(float*) &i;
is not fine (violates strict aliasing). Type punning via union, e.g.

  union { int i; float f; } u;
  u.i = ...
  float f = u.f;
is, though (but only in C).

Just use memcpy :-)


The author should just be glad that his code didn't launch any missiles.


John Regehr did a contest in 2012: Craziest Compiler Output due to Undefined Behavior: http://blog.regehr.org/archives/759

No launched missiles though.


I'm not sure where you got your numbers for China's debt. The IMF says China's debt has decreased (from 26% to 19% of GDP), while the US's has stayed roughly the same (~100%):

  General government gross debt [Percent of GDP]
  Country        Year  2012 2013 2014 2015
  China                  26   23   20   19
  United States         102  104  106  107
http://www.imf.org/external/pubs/ft/weo/2014/01/weodata/weor...


OCaml can do this:

  let do_something (thing : Thing.t) (logging : [`With_logging | `No_logging])  =
  ...
Even better, it can infer the type from how you use it (for example, matching on it):

  let do_something thing logging =
    let logging_as_bool = 
      match logging with
      | `With_logging -> true
      | `No_logging -> false
    in
    ....

