Hacker News | kdbg's comments

Only tangentially related, but I'm a Canadian who has been on a US cell provider (AT&T) for over a decade now because it's cheaper, especially when I used to spend a lot more time roaming in the US. The number of Canadian companies that fail silently when sending SMS to US numbers is too damn high.

My bank is one of those, with Verified by Visa. Thankfully I've figured out that using the Voice option instead of Text will work, but that silent failure is still really annoying.


I'm not a lawyer so maybe I'm misunderstanding something, but the plaintiff is WhatsApp, not the journalists. This isn't really about holding NSO Group accountable for hacking journalists at all.

The fact that journalists were compromised seems only incidental; the ruling is about whether or not NSO Group "exceeded authorization" on WhatsApp by sending the Pegasus installation vector through WhatsApp to the victims, not whether they were unauthorized in accessing the victims. It's a bit of a subtle nuance, but I think it's important.

Quoting the judgement itself:

> The court reasoned that, because all Whatsapp users are authorized to send messages, defendants did not act without authorization by sending their messages, even though the messages contained spyware. Instead, the court held that the complaint’s allegations supported only an "exceeds authorization" theory.

> The nub of the fight here is semantic. Essentially, the issue is whether sending the Pegasus installation vector actually did exceed authorized access. Defendants argue that it passed through the Whatsapp servers just like any other message would, and that any information that was 'obtained' was obtained from the target users' devices (i.e., their cell phones), rather than from the Whatsapp servers themselves

> [...removing more detailed defendant argument...]

> For their part, plaintiffs point to section (a)(2) itself, which imposes liability on whoever "accesses a computer" in excess of authorized access, and "thereby obtains information from any protected computer" pointing to the word "any"

> [...]

> As the parties clarified at the hearing, while the WIS does obtain information directly from the target users’ devices, it also obtains information about the target users' device via Whatsapp servers.

Adding a little more detail that comes from the prior dockets and isn't in the judgement directly: NSO Group scripted up a fake Whatsapp client that could send messages the official application wouldn't be able to send, and used it to send messages that provided information about the target users' devices. Because the fake client is doing something the real client cannot do (and fake clients are prohibited by the terms), they exceeded authorization.

Think about that for a moment and what it can mean. I doubt I'm the only person here who has ever made an alternative client for something before. WhatsApp (that I recall) does not claim that the fake client abused any vulnerabilities to get information, just that it was a fake client, and that was sufficient. Though I should note that there were some redacted parts in this area that could be relevant.

I dunno. The CFAA is a pretty vague law that has had these very broad applications in the past, so I'm not actually surprised; I was just kinda hopeful to see that rolled back a bit after the Van Buren case a few years ago, when the Supreme Court pushed back somewhat against the broad interpretations that allowed ToS violations to become CFAA violations.

Edit: Adding a link to the judgement for anyone interested: https://storage.courtlistener.com/recap/gov.uscourts.cand.35...

Edit2: And CourtListener if you want to read the other dockets that include the arguments from both sides (with redactions) https://www.courtlistener.com/docket/16395340/facebook-inc-v...


> I doubt I'm the only person here who has ever made an alternative client for something before.

I've been on both sides of the issue, both authoring unofficial clients and battling abusive unofficial clients to services I run. The truth is, complete carte blanche for either side is untenable. 99.99% of well-behaved clients are tacitly ignored; I'm not against those that deliver malware or bypass rate limiting having their day in court.


Laws need to be clear about where the line is, though. If circumventing rate limiting is illegal, then that should be explicit, including the criteria used to determine that a service is in fact rate limited in such a legally binding manner. As it is, an API is available but somehow is not considered public (criteria unclear), and thus engaging with it in certain ways (criteria unclear) is out of bounds.

If we want using a service to perpetrate a crime to itself be an additional crime, then that should be made explicit. In the (unlikely) event that NSO wasn't actually perpetrating any crimes against the end users, then that fact is probably what needs to be fixed.


Given the nature of who the stakeholders are, the neatest way to achieve an end is to target authorization. It focuses on the how instead of the who or what.

This reduces embarrassment for stakeholders, protects sources and methods, and sends a message.

The law is as broad as can be. If it were a US National instead of NSO Group, some crazy calculation of damages would be used to extract a plea in lieu of a thousand months in prison.


The CFAA is definitely ripe for reform. It wouldn't be hard to argue it's broad and vague, with an overarching sweep that catches online behaviors that could easily be classified as benign.


I don't think users of WhatsApp would have standing against people hacking WhatsApp to get their data.

WhatsApp owns the systems, so it's up to WhatsApp to sue.


The thing of value isn’t in WhatsApp in this case.

You can’t sue a dude for stealing someone else's screwdriver to break into your home with. Your tort is the act against you.


What?

So if someone robs a bank and empties my safety deposit box I can't sue them because it was the bank that had the money, not me?


Well, haven't you heard? The issue with your analogy is: you don't own your data.

(One might argue that it's similar with "your" money in the bank, but that's not the point.)


Different scenario. The bank is a bailee: it has a duty of care for property in its possession that you retain ownership of.

You can sue the thief for stealing your property and the bank for negligent bailment. Same concept as a valet crashing your car.


If someone steals the ownership registry the bank maintains regarding the deposit boxes, that may be the better analogy. Or the list of owners and box numbers. Clearly this is information the bank controls, not the individual.


> fake client to send some messages that the original application wouldn't be able to send which provide information about the target users' device

> I doubt I'm the only person here who has ever made an alternative client for something before

I think the distinction here for "exceeds authorisation" is pretty apparent. I don't read this judgement as being damning for people wanting to make their own clients.

They made a third party client for deliberately malicious purposes. If you go ahead and make a discord client with the intention of spamming or otherwise causing harm to its users, I think it's completely reasonable for you to get in trouble for that.


> with the intention of spamming or otherwise causing harm to its users

That sounds hopelessly ambiguous to me. What if Google decides that making use of yt-dlp is causing harm to them? What are the criteria here?

We wanted email spam to be illegal and so it was explicitly made illegal. We wanted robocalling to be illegal and so it was explicitly made illegal. In such cases we have (reasonably) clear criteria for what is and is not permitted.


Author of the site here (though not this specific post).

Any chance you could take a screenshot of what you're seeing? The other commenter mentioned the contrast of comments in code blocks, which I've already noted to fix.


> Any chance you could take a screenshot of what you're seeing?

I don't think it's a rendering issue, but sure: https://postimg.cc/ygsNzMhX

I looked into the CSS, and removing the "line-height:1.15" makes it massively more readable for me personally. I have no idea about the science of human perception, but I think the font is too "dense" with that reduced line spacing. (It's hard to self-observe, but I believe my eyes are slipping off between lines. Character width might be a factor too.)

(To clarify, my issue is with the main text itself, not code blocks.)


Curious what type of prompting you do on the LLM?

I run a Markov chain bot in a Twitch chat; it has some great moments. I tried using an LLM for a while and would include recent chat in the prompting, but never really got results that came across as terribly humorous. I could prompt engineer a bit to tell it some specifics about the types of jokes to build, but the LLM just tended to always follow the same format.


I'm actually not following the model's fine-tuned/desired prompt format at all; I am operating in purely pattern-completion mode. The first text the LLM sees is alternating lines of input and response examples that look like what it will be getting from the IRC client front end, written in the tone I want it to respond in and giving some information about itself. Then I just tack the IRC chat history plus the input onto those example chat pre-prompt lines. Nothing but single lines separated by newlines, with newline as the stop token. No instructions, nothing meta or system or the like.

But that's also configurable by users. They can invoke any pre-prompt they want by a command passing a URL with a .txt file.
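As a rough sketch, that kind of raw pattern-completion prompt could be assembled like this (the chat lines, names, and format here are all hypothetical, not the bot's actual pre-prompt):

```python
def build_prompt(examples, history, user_line):
    """Assemble a raw completion prompt: few-shot example lines first,
    then recent chat history, then the line to respond to. No system
    prompt, no chat template -- pure pattern completion. The trailing
    newline leaves the model positioned to complete the bot's next line,
    with newline as the stop token."""
    lines = list(examples) + list(history) + [user_line, ""]
    return "\n".join(lines)

# Hypothetical pre-prompt: alternating example inputs and responses
examples = [
    "<viewer1> hey bot, how are you",
    "<bot> living my best life in a terminal window",
]
history = ["<viewer2> that last run was painful"]
prompt = build_prompt(examples, history, "<viewer2> any tips, bot?")
print(prompt)
```

Because the pre-prompt is just leading text rather than a system message, swapping it out for a user-supplied .txt file is trivial.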


Reminds me a little of a stored XSS I read about last year.

https://tttang-com.translate.goog/archive/1880/?_x_tr_sl=aut...

Had that same root cause of not having the mime.types file in the container, leading to server-side sniffing of the MIME type for the Content-Type header.

It's just a bit interesting what impact such a file can have.
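A toy illustration of why that missing file matters: with no extension-to-type mapping available, a server that falls back to sniffing the body will happily label attacker-supplied bytes as HTML. The `naive_sniff` function below is a made-up stand-in for server-side sniffing, not the actual logic from either writeup:

```python
import mimetypes

# With a MIME database available, the extension maps to a safe type.
pdf_type, _ = mimetypes.guess_type("report.pdf")
print(pdf_type)  # application/pdf

# Without a known mapping, some servers fall back to content sniffing.
# An upload whose bytes merely *look* like HTML then gets served as
# text/html, and the script executes in the victim's browser.
payload = b"<html><script>alert(document.cookie)</script></html>"

def naive_sniff(body: bytes) -> str:
    """Hypothetical stand-in for server-side MIME sniffing."""
    if body.lstrip().lower().startswith((b"<html", b"<!doctype html")):
        return "text/html"
    return "application/octet-stream"

print(naive_sniff(payload))  # text/html -> stored XSS
```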


You can, but maybe not in the "standard" way.

The standard way being trying to measure the precise timing differences between requests. The smaller the difference, the more requests are needed to level things out, and that becomes pretty impractical quickly, but it is still possible in some situations.

If you actually wanted to do a timing attack on the web, you'd probably want to do something like a "Timeless Timing Attack" [0]. At a high level, the idea is to measure relative timing differences rather than the precise difference: answering which request completes faster rather than how much faster.

The specific attack from the paper takes advantage of HTTP/2 multiplexing to send two requests within a single packet, ensuring they arrive at the same time, then uses the response order to determine which was processed faster. It still requires making multiple requests to smooth out the data, just not as many, since you're only interested in the relative completion order.

It's not practical everywhere, but it's more practical for the web than the traditional technique.

[0] https://www.usenix.org/conference/usenixsecurity20/presentat...
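The single-packet HTTP/2 trick needs a real network stack, but the core idea of deciding by per-pair ordering instead of absolute times can be sketched with a toy simulation. All timing numbers here are made up: a 1ms secret-dependent difference buried in 2ms of jitter.

```python
import random

def simulate_request(secret_dependent: bool) -> float:
    """Toy server: ~5ms of processing plus jitter, with an extra 1ms
    when the secret-dependent code path runs (hypothetical numbers)."""
    base = 5.0 + random.gauss(0, 2.0)   # network/processing jitter
    return base + (1.0 if secret_dependent else 0.0)

def relative_timing_attack(trials: int = 1001) -> bool:
    """Decide which endpoint is slower by majority vote over the
    per-pair ordering, ignoring absolute timings entirely."""
    slower_wins = sum(
        simulate_request(True) > simulate_request(False)
        for _ in range(trials)
    )
    return slower_wins > trials / 2   # True => slow path detected

random.seed(0)
print(relative_timing_attack())
```

Each individual pair is noisy (the jitter dwarfs the 1ms signal), but the ordering is biased in the right direction, so the majority vote converges with far fewer samples than estimating the absolute difference would need.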


Thanks for this, was trying to wrap my head around it a bit and you broke it down nicely.


The Binja decompiler is more-or-less fine. It's not as mature as IDA's or Ghidra's, but it's not a bad decompiler.

Though for me the big selling point of Binja is the intermediate languages (ILs). High-Level IL is the decompiler output, but you also get Low-Level and Medium-Level ILs as steps between assembly and source. If the decompiler output is a bit funky, you can look at the ILs to get a better idea of what is happening. The ILs are also just much nicer to read than plain assembly, so I tend to use them a lot.

It's a feature that isn't really matched on any other platform. Ghidra and IDA both have a single IL that is more machine-readable, compared to Binja's human-readable ones.


Pretty sure that price is only for your first year if you sign up with a code/link from one of their creators.

The advertised rates are currently $5/month or $50/year.


First, just a high-level overview of how it would work as an app/consumer, which is not terribly centralized outside of requiring browser support:

Passkeys are an open standard, and they basically are just public/private key pairs with a wrapper. Creating a credential is just a call to `navigator.credentials.create` with options indicating a PublicKeyCredential type, that it should be a client-side discoverable key, and a user ID to associate with it (plus some optional info). It gives you back some meta-info and the public key to store.

For a login flow, you similarly call `navigator.credentials.get`, indicating you want one of those discoverable keys to be used, along with a challenge. The browser returns a signature to you along with the key info (that user ID from creation), and you are responsible for verifying that the challenge was appropriately signed.
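On the server side, that verification step boils down to ordinary public-key signature checking. Here's a simplified sketch using the `cryptography` package; real WebAuthn signs authenticatorData concatenated with a hash of clientDataJSON rather than the raw challenge, and generating the key pair locally stands in for what the authenticator does on-device:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: navigator.credentials.create() yields a public key.
# (Generated locally here; in reality the private key never leaves
# the authenticator.)
authenticator_key = ec.generate_private_key(ec.SECP256R1())
stored_public_key = authenticator_key.public_key()  # server stores this

# Login: the server issues a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it (simplified; done on-device)...
signature = authenticator_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies against the stored public key.
# verify() raises InvalidSignature on failure.
stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("signature valid")
```

Nothing in that flow contacts a third party, which is the point the crypto-side argument is making.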

---

So, on the actual crypto side, nothing about it requires any centralization; there is no required phoning home to a remote source. On the creation/storage/retrieval side, the WebAuthn Authenticator Model is defined as part of the standard, so anyone can implement it. I don't know enough about how you'd register as such an authenticator, but Dashlane already supports passkeys, so it is possible for a third party to do so; Bitwarden and 1Password are also working on it.

So my understanding is that self-hosting is more just a matter of giving others time to implement the necessary components, not there being any restriction on who can do this.

