Assuming here you mean something like a classic REST-alike "/events" endpoint which returns a bunch of stuff that's changed since the last time you requested it.
In that case, yes: as the number of events per page grows, the per-event HTTP transaction overhead of polling amortizes toward zero.
But now you have a bunch of extra things which will impact your latency:
- The third-party service does more work preparing the payload, so the earliest event in the batch no longer hits the wire right away
- Related: someone might be holding a lock on event 63 of 100, so every event behind it has to wait before it can hit the wire
- In your application code, you may have to read the entire response before you can validate it or do anything with it (at least, this goes for APIs which speak JSON, where the document isn't parseable until it's complete)
- You probably have to commit your transaction for the previous page of events before you can start your next request. Otherwise, whichever side of the network is keeping tabs on your current cursor into the list, that cursor may end up in the wrong place. Oops!
- If more events arrive during the time it takes you to fetch a page than will fit on a page, you can never catch up; the backlog grows without bound
- An error anywhere in the super-HTTP-transaction (network, user code...) now delays an entire page of updates rather than just one
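For what it's worth, the "commit the cursor only after processing the page" ordering can be sketched like this (all names are hypothetical, and `fetch_page` stands in for a real `GET /events` round trip to a third party):

```python
# Sketch of cursor-based polling where the cursor advances only after the
# whole page has been processed. fetch_page stands in for an HTTP call to
# a hypothetical /events endpoint.

def fetch_page(events, cursor, page_size=100):
    """Stand-in for GET /events?after=<cursor>: returns a page and the new cursor."""
    page = events[cursor:cursor + page_size]
    return page, cursor + len(page)

def poll_once(events, cursor, handle):
    """One polling round. If handle() raises mid-page, the cursor is never
    advanced, so the whole page is re-delivered next round instead of
    being silently skipped."""
    page, next_cursor = fetch_page(events, cursor)
    for event in page:
        handle(event)
    return next_cursor  # the "commit": only safe to persist at this point
```

The flip side is exactly the error-handling point: one failure anywhere in the page delays all hundred events, not just the one that failed.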
It's possible to remove the sequential-ness constraint from our hypothetical "/events", but not without introducing other fun new problems.