Yes, but is it being used that way? Everything that I've seen has indicated that service providers use this law to play it both ways: they actively control what users see through algorithms, but then claim that they are (largely) not responsible for the moderation of questionable content. IOW, they are not just a passive distribution point for information, and, in many cases, actively control the flow of information. Doesn't that intrinsically make them a publisher?
Yes, most hosts have and enforce (though imperfectly) content rules, which they would be disincentivized from doing without CDA 230.
> Everything that I've seen has indicated that service providers use this law to play it both ways: they actively control what users see through algorithms, but then claim that they are (largely) not responsible for the moderation of questionable content.
That's not playing it both ways.
> IOW, they are not just a passive distribution point for information, and, in many cases, actively control the flow of information.
Yes, the entire point of CDA 230 is to allow them to actively moderate without becoming strictly liable for all content, since without that allowance they would be disincentivized from moderating. Removing CDA 230 would not encourage active moderation; it would make active moderation a gateway to unmanageable liability.
> Doesn't that intrinsically make them a publisher?
Yes, without CDA 230 it would. That would restore the pre-CDA 230 rule of "if you moderate content at all, you must succeed in capturing every bit of user-submitted illegal content or be fully liable as if you had deliberately originated it yourself" (with the added challenge that there are more content regulations now than when the CDA was adopted). That rule does not promote moderation; it promotes either no moderation or no user-content hosting at all, leaving moderated hosting to operators outside of US jurisdiction.