Hacker News | brianolson's comments

OP is a link to the atproto site because it got a major new revision within the last week



Does BlueSky have a legal representative in Brazil?


BlueSky doesn't have any legal trouble yet, so it doesn't need a representative.


A lot of Brazilian users are posting "I am using a VPN to access X" on BlueSky, so the judge will probably order BlueSky to turn over their IP addresses (like he did in orders to X) so they can be fined $9,000 a day.


To be fair, that order would only matter if Bluesky is logging and storing user IPs. I don't know of any technical reason for needing to track that based on the AT Protocol, they could avoid the entire problem by not tracking that data (assuming they currently do).


The judge would probably force the legal rep to log IP addresses the next time those users access BlueSky, under threat of jail time and frozen bank accounts, like he did for Twitter's rep.


Is it not a problem to you that a judge would go after legal counsel personally based on how the counsel represents the will of their client?

Personally, I see that as a very serious problem. It's one thing if it can be proven that legal counsel knowingly breaks the law. It's entirely different if the lawyer simply disagrees with the judge. Judges shouldn't be able to threaten counsel with jail time or seizure of assets simply because the judge disagrees with the legal argument a client wishes to make.


You probably need to store user IPs to moderate.

At least on Mastodon, every instance stores IPs, so...


Are IP addresses really helpful in moderation? Moderation usually pertains to the content itself, meaning you would want to block certain content from being posted regardless of the IP that sends a request. Maybe you extend moderation to include banning users, but at that point you have more specific data than IP addresses (like usernames or user IDs).

You can target a list of suspected spammer IPs, though that's often a losing game of cat and mouse where you end up missing some spammers and catching legitimate users in the crossfire.


I know that for Instagram/Threads, IP is very key to their moderation. Once they ban a user with that IP, any other account from that IP will also be banned (or something like it), it's fundamental to avoid ban evasion.

For Mastodon, all of this is manual, so it will be up to moderators on the server....


Give it a little time.


Who will the justice send a censorship request to?


They won't, cos it's STILL not censorship, much though the right-of-center machinery would love everyone to believe it is.


Ok, removal request then.


Bluesky has contact information on its site.

But the Brazilian justice system is already in contact with them. As long as they offer a place for the courts to send orders and comply with them, as they have done so far, they won't have any problems.


what keeps them from banning bsky too in the coming months?


Compliance with court orders?


Because Bsky will censor. They're celebrating X's refusal to censor (and the resulting ban) with all these posts boasting about the numbers.


Isn't Bluesky (I know nothing about it: heard about it for the first time on HN following the X ban in totalitarian Brazil) decentralized?


In theory it is, but the only site that implements their protocol is bluesky.social.


By not enabling blatantly harmful misinformation accounts to proliferate on the platform, like X/Twitter did.

And of course, by complying with the law and court orders.


Maybe don't finish! Maybe you did the interesting part and that's enough.

Is there something about finishing that's interesting? Some value intrinsic to you in finishing?


Rust async/await is less nice than Go's goroutines. There are things you can't do, and weird rules around Rust async code. Every Go chan allows multiple readers and multiple writers, but the Rust stdlib and tokio channels default to single-consumer (mpsc) queues.


Bitcoin is still wasting gigawatts every year, but later blockchains are literally a million times more efficient. If the current generations of AI turn out to be useful, maybe someone will figure out an efficiency optimization.


It's not. It's not about efficiency. Compute-per-watt will certainly be better in other systems. This is about pushing a small system as fast as possible because it's easier to program for a small system. A few problems are 'embarrassingly parallel', but lots have substantial overhead as parallelism increases so running each core as fast as possible is a win for some problems.


Going way back: a science fair in high school landed me summer internships that rolled over into my first job out of college. (The "Science and Engineering Fair" project was building robots with microcontrollers.) I think it was the proof that I could do that kind of work in a self-directed way that made them notice me.


There are a bunch of chemical pathways for turning plant matter into jet fuel:

https://afdc.energy.gov/fuels/sustainable_aviation_fuel.html


I have heard many times: Google hires good engineers not to _do_ things, but to _not do things elsewhere_. (Even if 80% of Google engineers are 'productively' working on products Google cares about, that's thousands of grade-A engineers spinning their wheels; and given the rate Google discontinues and replaces its own software, 80% is certainly high.)


I have propagated this meme. It's only partially true.

SREs at Google are generally an insanely good use of Google's resources in keeping actually profitable services (like ads, search, cloud, etc.) running and running at a standard that I think few non-Googlers on this forum really have a concept of.

But the bulk of SWEs are working on developing things which are not part of those Product Areas. I actually don't think the discontinuation thing is the main problem. The problem is that outside of Search, Ads & Cloud, pretty much nothing else at Google is really a profitable revenue generator. But you don't need >100k engineers to make the profitable stuff run. There's only so many ways to sling ads and report on them and build the infra for them.

But Google's conundrum is that if they just focused on those areas, someone would eventually come along with something that would eat their lunch. So in the past Google would prefer to a) hire those people and shower money on them to stop them from doing that and b) farm out a bazillion "bets" and projects to try their hand at that stuff in the hopes of striking a vein of gold again, like they did with AdWords 20+ years ago.

Cancelling projects is the byproduct of this continual search for new gold veins.

Also a lot is changing now. SV execs seem to have made a gentleman's agreement among each other to tighten the labour market.

Anyways, I think Meta is in more trouble than Google, long run. They're in a much riskier, shakier position.


> Also a lot is changing now. SV execs seem to have made a gentleman's agreement among each other to tighten the labour market.

Nah. They tried that and there was an anti-trust case.

I think this time everyone is 1) recovering from COVID and Inflationary Money Printing, and 2) not hiring more because AI is coming and they'd rather hold their breath.


but as always: Are you sure your data doesn't fit in PostgreSQL? You should probably try PostgreSQL first.


The more I deal with scalable relational DBMSes (Spanner in particular), the more I doubt their usefulness even at large scale. Relational + fully-consistent are always at odds with scalability. It seems like either you have a classic NoSQL use case and use that, or you shard at the application level rather than in the database.

Could someone share a use case where you truly benefited from migrating from Postgres/MySQL to Spanner/Citus/Cockroach, and there was no better solution? I'd like my hunch to be wrong.


That really should be the first heuristic for almost any systems-design problem: if you can afford to buy a big enough pair of machines to fit your data in hot-swap Postgres, then just do that.

don't bother with mongo or mysql or dynamo or cassandra or bigtable or spanner or ... until your lack of profitability or size means you can't afford to just use postgres.


What do you mean "can't afford to just use postgres"? I thought postgres cost per query in many cases is cheaper than competitors.


> What do you mean "can't afford to just use postgres"?

if I have 10TB of hot data, can I afford two machines with 10TB of RAM each? How about 100TB?

> I thought postgres cost per query in many cases is cheaper than competitors.

that's not really a useful metric without size/latency/etc. attached to it. Being cheap at 0.1 qps might be fine for a YC company, but that's no good for my successful company, etc.


Makes sense. Thanks for the response!


I wonder what the costs are compared to Aurora. I haven't been on AWS for a while but loved the service when I was.

