The hashtag index is not going to be any bigger than the tweets storage though, and may be significantly smaller, if not by an order of magnitude (even a binary one). Assuming something like a common SQL database is used for storage, there would be a tags table (one row per unique tag: the tag string plus a numeric identifier, indexed by both, which bloats the size a bit but it'll still be small) and a link table (one row per tag in a message, containing the tag and message IDs). Even if using 64-bit IDs because you don't want to fall over at 2 thousand million messages (or 4, if your DB supports unsigned types or you start at MININT instead of zero or one), that structure is going to consume about 32 bytes per tag per message (plus some static overheads, and a little more for non-leaf index pages of course). In theory this could be the same size as the messages table or even larger (if most messages contain many small tags), but in practice it is going to be significantly smaller.
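To make the two-table shape concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are my own assumptions for illustration, not anything prescribed above; SQLite's INTEGER columns stand in for the 64-bit IDs discussed.

```python
import sqlite3

# Hypothetical sketch of the tags table + link table structure.
# Names (tags, message_tags, tag_id, message_id) are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per unique tag; indexed by id (the PK) and by the
    -- tag string (the UNIQUE constraint creates that second index).
    CREATE TABLE tags (
        tag_id  INTEGER PRIMARY KEY,    -- numeric identifier
        tag     TEXT NOT NULL UNIQUE    -- the tag string itself
    );
    -- One row per tag occurrence in a message: just the two IDs.
    CREATE TABLE message_tags (
        tag_id     INTEGER NOT NULL REFERENCES tags(tag_id),
        message_id INTEGER NOT NULL,
        PRIMARY KEY (tag_id, message_id)  -- supports tag -> messages lookup
    );
""")

# Record that message 1 contains the tag "example".
conn.execute("INSERT INTO tags (tag) VALUES (?)", ("example",))
tag_id = conn.execute("SELECT tag_id FROM tags WHERE tag = ?",
                      ("example",)).fetchone()[0]
conn.execute("INSERT INTO message_tags VALUES (?, ?)", (tag_id, 1))

# Find all messages carrying a given tag via the link table.
rows = conn.execute("""
    SELECT mt.message_id
    FROM message_tags mt
    JOIN tags t ON t.tag_id = mt.tag_id
    WHERE t.tag = ?
""", ("example",)).fetchall()
print(rows)  # -> [(1,)]
```

The per-occurrence cost is just the two-ID link row plus its index entries, which is where the rough 32-bytes-per-tag-per-message figure above comes from.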
Yes, this would be big enough to need specifically factoring into a real implementation design. But it would not be big enough to invalidate the proposed idea so I understand leaving it off, at least initially, to simplify the thought experiment.
Similarly, to support a message responding to, or quoting, a single other message, you only need one or two NULLable ID references: 16 bytes per message, which will likely be dwarfed by the message text in the average case. Given that it likely makes sense to use something like SQL Server's compression options for data like this, the average load imposed will be much smaller than 16 bytes/message.
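A quick back-of-envelope check of that "dwarfed by the message text" claim, assuming 8-byte IDs; the average message length used here is an illustrative assumption, not a figure from the discussion above:

```python
# Two NULLable 64-bit references per message: reply-to and quoted-message.
ID_BYTES = 8
ref_bytes = 2 * ID_BYTES       # 16 bytes/message before any compression

avg_message_bytes = 140        # assumed average text size, for illustration

overhead = ref_bytes / avg_message_bytes
print(f"{ref_bytes} bytes/message, ~{overhead:.0%} of the average message")
# -> 16 bytes/message, ~11% of the average message
```

Even before row compression kicks in (NULLs and small values compress well), the fixed reference columns are a minor fraction of the per-message storage.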
We are fiddling, fairly insignificantly (measurably, but not massively), with the constants a & b in O(a + bN) here, so the storage problem is still essentially of order O(N) [where N is the total length of the stored messages].