drum55's comments

If it’s constant current, the “refresh rate” is infinite, or zero, depending on how you look at it.

Didn’t realize how they actually function; looks like I need some new lights.

Do you go by the smell of the executable or just general vibes? Nobody has ever reviewed even a tiny fraction of the software they run, closed source or open source.

It’s a false sense of security, more or less. If an application wants to talk to a C2, it doesn’t have to make its own connection at all; it can proxy through something already allowed, or tunnel through DNS. Those juicy cryptocurrency keys? Pop open Safari with them in the URL and they’re sent to the malicious actor instantly. If you’re owned, Little Snitch does nothing at all for you except give you the impression that you’re not.

Especially in this case, where the attackers could've proxied you to their malicious servers through npp's good/trusted servers.

This is far too cynical a take. Little Snitch might not save you from well-established malware on your machine, but it will certainly hamper attempts to get payloads and exploits onto your machine in the first place.

I find it difficult to believe that there are levels of cooperation between different companies that would allow this to work.

Source: I have worked for the same company for longer than the internet has existed.


My example is “living off the land”: Safari already has access to everything, so open it and use it to communicate. It needs no permissions and bypasses Little Snitch entirely.

Ah, I was thinking of non-web apps.

You have worked for the same company for >55 years? That's wild. Can you share the industry?

IBM, although I consider the internet and ARPANET different things.

Like saying PSTN and fiber are different things.


That's at the very least harder and less likely; security is not all or nothing.

You can get Intel NUCs way cheaper if you look around, or the Lenovo mini PCs. Small clusters will never beat a decent CPU, but you can probably make a cluster of old mini PCs for less than the price of one.


So obviously it depends on goals; I absolutely agree that up to a surprisingly high point, the best way to get the most compute is to buy the most powerful single machine that will do what you want off eBay or wherever. My goal is the opposite: I specifically want to get my hands dirty figuring out how to provision, manage, and operate a large number of hosts, even if they're so low-resource as to be effectively worthless individually and still weak when clustered. To that end, Digi-Key claims to be willing to sell me a rather large number of Pi Zero 2 Ws at $15 each - https://www.digikey.com/en/products/detail/raspberry-pi/SC11... - and even cheap used boxes on eBay seem to start at double that. Obviously you need a little more to actually run them, but I believe the only thing you really need is a USB cable; a single host computer can then provide power and boot them over USB.


Just be aware that you’ll be stuck with WiFi, or spending far more on Ethernet adapters for them. If you want cheap with Ethernet, there are other devices supported by Armbian in the same price range. It’s fun for sure to have a huge number of very cheap machines; I have an old ARM cluster made of Odroid boards for that purpose.


I actually think I can run networking over the USB connections to the head node anyway, though WiFi is also fine.

Actually, you raise a good point: I should spend some time browsing the Armbian supported hardware list...


Wanted to reply to you directly also to increase the chance you see this because I think I had exactly the same intrusive thought as you, and actually built such a cluster recently. Would love to hear what you think: https://x.com/andreer/status/2007509694374691230/photo/1


I think that's amazing, and surprisingly nice looking physically. Now you've gone and reminded me that Plan 9 is an option, which is kinda another tangent I didn't need :), but IIRC Plan 9 is really low-resource so it might be a good fit (aside from it lending itself to distributed computing). Have you written up the build anywhere?


No, not beyond what I already put in the Twitter thread. I wanted to wait until I had some cool distributed software running on it too, but then I ran into trouble with the Plan 9 WiFi drivers for the RPi being unstable, so I'm still working on fixing that. It does serve as a great test bed for that purpose too, as with 8 nodes I can get much more reliable data about how often the driver gets stuck.


I feel you, but something like an R730 or 7810 with a pair of E5-2690 v4 and 128GB RAM can be had for under $400. Not the most power efficient, but you'd have to run it quite a while to make up the difference in energy cost. Plus there's way less work in getting it all set up.


eBay is hyper-aggressive about fingerprinting; they will catch things like that trivially. Browsers leak all sorts of information, like what sockets are open on localhost, and making yourself look like an actual person is very challenging when someone is motivated to detect you.


LLMs don't need browser automation though. Multimodal models with vision input can operate a real computer with "real" user inputs over USB, where the computer itself returns a real, plausible browser fingerprint because it is a real browser being operated by something that behaves humanly.


But will they behave like the same user did in the past? I would guess there is a lot of difference between how a bot accesses pages and how the real user has historically accessed them: opening multiple tabs at a time, how long it takes to go through the next set, how they navigate, and so on.

There might be a lot of modelling that could be done simply based on the words used in searches and the behaviour of opening pages, all trivially tracked to the user's logged-in session.


> But will they behave like the same user did in the past? I would guess there is a lot of difference between how a bot accesses pages and how the real user has historically accessed them: opening multiple tabs at a time, how long it takes to go through the next set, how they navigate, and so on.

That would be making an assumption that a device and/or account maps 1:1 to a specific human. It does not. People share accounts, share devices, and ask others for one-off help ("hey can you finish buying this for me while I deal with $[whatever our kid just did]", this kind of stuff).


Sure, the cost of that goes way up though, especially if it has to emulate real world inputs like a mouse, type in a way that’s plausible, and browse a website in a way that’s not always the direct happy path.


It’s more or less an entire Apple Watch acting as a secure element, or used to be in the older models.


At the chip level there’s no difference as far as I’m aware; you just have 9 bits per byte rather than 8 bits per byte physically on the module. More chips but not different chips.


> you just have 9 bits per byte rather than 8 bits per byte physically on the module. More chips but not different chips.

For those who aren't well versed in the construction of memory modules: take a look at your DDR4 memory module, you'll see 8 identical chips per side if it's a non-ECC module, and 9 identical chips per side if it's an ECC module. That's because, for every byte, each bit is stored in a separate chip; the address and command buses are connected in parallel to all of them, while each chip gets a separate data line on the memory bus. For non-ECC memory modules, the data line which would be used for the parity/ECC bit is simply not connected, while on ECC memory modules, it's connected to the 9th chip.

(For DDR5, things are a bit different, since each memory module is split in two halves, with each half having 4 or 5 chips per side, but the principle is the same.)
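
To make the bus-width arithmetic concrete, here is a trivial sketch of my own; it assumes the common x8 chip organization described above:

    # DDR4 DIMM data bus width per rank, assuming x8 DRAM chips
    chips = {"non-ECC": 8, "ECC": 9}
    for kind, n in chips.items():
        print(f"{kind}: {n} chips x 8 bits = {n * 8}-bit data bus")   # 64 vs 72 bits

Same chips and same address/command wiring either way; the ECC module just populates the ninth position and uses the extra eight data lines.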


The "cost" of executing the JavaScript proof of work is fairly irrelevant, the whole concept just doesn't make sense with a pessimistic inspection. Anubis requires the users to do an irrelevant amount of sha256 hashes in slow javascript, where a scraper can do it much faster in native code; simply game over. It's the same reason we don't use hashcash for email, the amount of proof of work a user will tolerate is much lower than the amount a professional can apply. If this tool provides any benefit, it's due to it being obscure and non standard.

When reviewing it I noticed that the author carried the common misunderstanding that "difficulty" in proof of work is simply the number of leading zero bytes in a hash, which limits the granularity to powers of 256. I realize that some of this is the cost of working in JavaScript, but the hottest code path seems to be written extremely inefficiently.

    for (; ;) {
        const hashBuffer = await calculateSHA256(data + nonce);
        const hashArray = new Uint8Array(hashBuffer);

        let isValid = true;
        for (let i = 0; i < requiredZeroBytes; i++) {
          if (hashArray[i] !== 0) {
            isValid = false;
            break;
          }
        }
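        // (quote truncated: presumably the nonce is returned when isValid, otherwise incremented and the loop repeats)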
It wouldn’t be an exaggeration to say that a native implementation of this, with even a hair of optimization, could reduce the “proof of work” to being less time-intensive than the SSL handshake.
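
To put rough numbers behind that, here is a quick sketch of my own (not Anubis code) in Python, whose hashlib already calls a native SHA-256 under the hood; the two-zero-byte difficulty and the challenge string are assumptions picked for illustration:

    import hashlib
    import time

    def solve(data: str, required_zero_bytes: int = 2) -> int:
        # Brute-force a nonce so that SHA-256(data + nonce) starts with the required zero bytes.
        target = b"\x00" * required_zero_bytes
        nonce = 0
        while True:
            digest = hashlib.sha256((data + str(nonce)).encode()).digest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    start = time.perf_counter()
    nonce = solve("example-challenge")   # ~65,000 attempts on average for two zero bytes
    print(f"nonce {nonce} found in {(time.perf_counter() - start) * 1000:.0f} ms")

Even with the Python interpreter in the loop this finishes in well under a second on a laptop; a tight C or GPU implementation only widens the gap between what a scraper pays and what a browser user has to sit through.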


That is not a productive way of thinking about it, because it will lead you to the conclusion that all you need is a smarter proof of work algorithm. One that's GPU-resistant, ASIC-resistant, and native code resistant. That's not the case.

Proof of work can't function as a counter-abuse challenge even if you assume that the attackers have no advantage over the legitimate users (e.g. both are running exactly the same JS implementation of the challenge). The economics just can't work. The core problem is that the attackers pay in CPU time, which is fungible and incredibly cheap, while the real users pay in user-observable latency which is hellishly expensive.
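
As a back-of-the-envelope illustration of that asymmetry (every number here is an assumption: a challenge tuned to cost about one CPU-second, and bulk compute at roughly $0.03 per core-hour):

    # What does a 1-second proof-of-work challenge cost each side?
    challenge_cpu_seconds = 1.0      # assumed time to solve one challenge
    price_per_core_hour = 0.03       # assumed bulk/cloud price in USD

    scraper_cost = price_per_core_hour * challenge_cpu_seconds / 3600
    print(f"scraper: ~${scraper_cost:.7f} per page, ~{1 / scraper_cost:,.0f} pages per dollar")
    print("legitimate user: the same 1 second, paid as visible page-load latency")

At those assumed prices a scraper gets on the order of a hundred thousand page fetches per dollar, while every real visitor eats the full second; the deterrent lands almost entirely on the wrong party.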


They do use SubtleCrypto digest [0] in secure contexts, which does the hashing natively.

Specifically for Firefox [1] they switch to the JavaScript fallback because that's actually faster [2] (probably because of overhead):

> One of the biggest sources of lag in Firefox has been eliminated: the use of WebCrypto. Now whenever Anubis detects the client is using Firefox (or Pale Moon), it will swap over to a pure-JS implementation of SHA-256 for speed.

[0] https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypt...

[1] https://github.com/TecharoHQ/anubis/blob/main/web/js/algorit...

[2] https://github.com/TecharoHQ/anubis/releases/tag/v1.22.0


If you can optimize it, I would love that as a pull request! I am not a JS expert.


> but the hottest code path seems to be written extremely inefficiently.

Why is this inefficient?


That's surprising; it's at least casually known that they're bioaccumulative to some extent. I've joked to the techs before about gadolinium eventually accumulating enough that it wouldn't be necessary anymore if you got scanned frequently enough. Realistically though, in any situation where you're getting contrast, you're probably at a lot more risk from whatever they've found than from the contrast agent.


I had to have contrast to diagnose a simple cyst, which is entirely asymptomatic and was discovered by accident in the background of a cardiac MRI (family history of SCD, but my own heart is fine).

You're making me feel lucky about what was otherwise a very unpleasant experience!


Yes.

A chemist gave a great talk about this at a big MRI conference (ISMRM) in Paris 10ish years ago. His explanation was that gad behaves a lot like iron does in the body. It deposits where iron does and like iron it lacks a metabolic route for removal (though menstruating females lose iron).

He stated that deposition was entirely predictable. However the harm caused is still debated.

The article here says ‘Dr Wagner theorized that nanoparticle formation could trigger a disproportionate immune response, with affected cells sending distress signals that intensify the body’s reaction.’

Emphasis on ‘theorised’.

Deposition is discussed in the below link, and the comparison with iron is briefly mentioned.

https://pmc.ncbi.nlm.nih.gov/articles/PMC10791848/


Nah, they used it on me when I cracked a toe. If I’d known it might be this dangerous, I’d have gone without the contrast agent.


Based on what I've read I'm quite sure a cracked toe is way more dangerous than a contrast agent.


Maybe, but I was taking an immense amount of vitamin C as prescribed by the doc to bootstrap the healing process.

So this reveals two issues to me:

1. In general, side effects of the contrast agent are not communicated properly. If I had known, I might have asked: hey, can you do the analysis without the agent?

2. There’s no recommendation to avoid vitamin C prior to and right after the MRI, heightening the risk.


Maybe donate some plasma afterward. There was a study about firefighters exposed to microplastics that had a statistical reduction after regular donations.

Pretty much just diluting it out of your system.


Materials like these accumulate in other parts of your body, like bones. Letting some blood out is not gonna change it.


That’s nothing to do with static electricity; it’s capacitive coupling through the safety capacitors in the power supply. The chassis sits at 90 VAC or so as a result. It’s not a safety issue; the capacitors are there for FCC compliance on emitted noise.


Is this generally true for laptops / phones?

I've often wondered why I can tell by touch whether a device is charging or not from the slight "vibration" sensation I get when gently touching the case.


For ungrounded / 2-prong outlet devices, yeah.

It's often noticeable if you have a point contact of metal against your skin; sharp edge / screw / speaker grill, etc. Once you have decent coupling between your body and the laptop, you won't feel the tingle / zap.

They're called Y-caps if you want to delve deeper into them and their use in power supplies.
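
For a sense of scale, here's a rough calculation with assumed but typical values (two equal 2.2 nF Y-caps on a 120 V / 60 Hz line):

    import math

    mains_v = 120.0     # V RMS (assumed)
    freq = 60.0         # Hz (assumed)
    y_cap = 2.2e-9      # F, a typical Y-capacitor value

    # Equal caps from line->chassis and neutral->chassis form a divider,
    # so the unloaded chassis floats near half the mains voltage.
    print(f"chassis floats near {mains_v / 2:.0f} V RMS")

    # If you ground the chassis through your body, the current is limited
    # by the line-side cap's impedance: I = V * 2*pi*f*C
    touch_current = mains_v * 2 * math.pi * freq * y_cap
    print(f"touch current on the order of {touch_current * 1e6:.0f} uA")

That works out to roughly 100 microamps: enough to feel as a buzz on a sharp metal edge, but far below anything hazardous, which is why it's a nuisance rather than a fault.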


I get that too. I was wondering if it was just me.

