I bought a large format e-reader for the opposite reason - being able to read and study from large format textbooks while on long train journeys or in hotel rooms (or even camping). It handles stuff from arxiv fine too.
I really like my Boox Max, as it means that I can read textbooks at a good size without reflowing. It still holds charge for several weeks at a time after about 7 years.
I wish I'd had it at university instead of 1000+ page hardback calculus textbooks.
Is that not a problem with how people are using CVEs, scoring them and attaching value to them, rather than with whether a CVE should be assigned in the first place? A CVE is simply a number and some data about a vulnerability, so that the community knows they are all talking about the same issue.
Even if you need to be root to edit the files, it is still a deviation from the design or reasonably expected behaviour of that interface, so it is still a bug and should still get a CVE. It should either be fixed or, failing that, documented as 'won't fix' and put on the radar of anyone building an application on top of it. Someone building the next Plesk or cPanel or similar management system should at least know to filter their input and not let it reach the dangerous config file.
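To make that concrete, here's a minimal sketch (in Python; the control-panel helper is hypothetical, though address=/domain/ip is real dnsmasq syntax) of what filtering user input before it ever reaches the config file could look like:

    import ipaddress
    import re

    # Conservative hostname pattern: dot-separated labels of letters, digits and
    # hyphens, so newlines or extra option syntax can never be smuggled in.
    HOSTNAME_RE = re.compile(
        r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?"
        r"(\.[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?)*$"
    )

    def add_host_override(config_path, hostname, ip):
        """Hypothetical panel helper that appends an address= line to a dnsmasq config."""
        if not HOSTNAME_RE.match(hostname):
            raise ValueError("refusing suspicious hostname: %r" % hostname)
        ipaddress.ip_address(ip)  # raises ValueError if it isn't a valid address
        with open(config_path, "a") as f:
            f.write("address=/%s/%s\n" % (hostname, ip))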
Re: Harassment - Can't the project release a statement saying that the bug writeup is low quality and unable to be reproduced? Anyone ignoring that without question and using it as evidence that the project is bad without proof is putting way too much value in CVEs, and the fault is their own.
It's a bug, sure. The V in CVE is for "vulnerability", which is why people treat CVEs as more than just bugs.
If every bug got a CVE, practically every commit would get one and they'd be even less useful than they are now.
At that point, why not just use commit hashes for CVEs and get rid of the system entirely if we're going to say every bug should get a CVE?
> Re: Harassment - Can't the project release a statement saying that the bug writeup is low quality and unable to be reproduced?
If your suggested response to a human DoS is "why can't the humans just do more work and write more difficult-to-word-correctly communication", then you're not understanding the problem.
If you are wasting time wording communication, then are you doing it wrong?
I imagine the response would be looking at it briefly, seeing if it looks dangerous or reproducible and getting an AI to return a templated "PoC or GTFO" response.
The mere existence of a CVE doesn't tell anyone whether a bug is valid or not, and security reports should be handled in the same way regardless of whether one exists. For some odd reason people have attached value to having your name logged beside CVEs, despite it not telling you anything.
"human communication is easy, just have an AI say 'buzz off' and the conversation partner and other strangers will always respond respectfully, I don't know why so many people complain about lack of spoons or other social issues".
Thanks doctor, you just solved my anxiety.
I broadly agree that having templates does lower the amount of human effort and emotional labor required, but trust me, it's not a silver bullet; even hitting someone with a template takes spoons.
I don't really care that CVEs in theory are apparently entirely without meaning and created for nonexistent bugs; we're talking about the reality of how they're perceived and used.
Like, I'm saying "Issuing garbage such that 100 people have to read it and then figure out what to do is bad, we should instead have a higher bar for the initial issuing part so 1 or 2 people have to actually read it, and 100 people can save some time. We should call out issuing garbage as bad behavior to hopefully reduce it in the future".
You're apparently disagreeing with that and saying "But reading is easy, and the thing is meaningless anyway so this real harm that actually happens is totally fine. We should keep issuing as much garbage as we can, the numbers don't mean anything. It's better to make a pile of garbage and stress the entire system such that no one values or trusts it than to add any amount of vetting or criticism over creating garbage"
idk, I guess we're probably actually on the same page and you're just arguing for arguing's sake because you think you can be a pedant and be technically correct about CVEs.
Tell me if I got a wrong read there and you have a more concrete point I'm missing?
But that's not what happened here. These are memory corruption bugs. Probably not meaningful ones, but in the subset of bugs that are generally considered vulnerabilities.
It's more complicated than that though. For security, the whole context has to be considered.
Like for example, look at the linked CVE-2025-12200, "NULL pointer dereference parsing config file"...
Please, explain a single dnsmasq setup where someone is somehow constructing a config file such that it both takes in untrusted input and this NPE is the difference between it being secure and being DoS'd or otherwise insecure. If you can even conjure up a plausible hypothetical way this could happen, I'd love to hear it, because this just seems so impossible to me.
This seems firmly in the realm of issuing CVEs for "post-quantum crypto may not be safe from unknown alien attacks".
> Is that not a problem with how people are using CVEs, scoring them and attaching value to them
Well, yes, it is. But if that's the way the market is going to game the scoring/value system it's (mis)using, then it behooves a project that wants to be successful to play the same game and push back when the scoring unfairly penalizes it.
Basically dnsmasq doesn't really have much of a choice here. Someone found a config parser bug and tried to make a big deal out of it, so someone else (which has to be dnsmasq or a defender) needs to explain why it's not a big deal.
Some product decides not to use it. Someone loses a contract supporting it. Someone doesn't get a job because their work isn't favored anymore.
I think you're trying to invoke a frame where because dnsmasq is "open source" that it isn't subject to market forces or doesn't define value in a market-sensitive way. And... it is, and it does.
Free software hippies may be communists at heart but they still need to win on a capitalist battlefield.
From memory, online and offline transactions are usually split out by BIN (the first six digits of the card number).
The BIN will tell you which bank was the issuer and which class of card you have, like standard or premium, though most readers probably don't take that into account beyond the card scheme and card type associated with the range that the individual BIN is in. Many banks will have multiple BINs for the same card type if they are large.
Credit / online debit / offline debit usually get different ranges. The reader gets a list of the ranges when it updates and they don't change super often. Offline readers can be configured to reject cards with a number in an online only range.
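As a rough sketch of the lookup a reader might do against its downloaded range list (the ranges below are invented, not real issuer data):

    # Invented example ranges: (low BIN, high BIN, card class, allowed mode)
    BIN_RANGES = [
        (400000, 400999, "credit", "offline_ok"),
        (401000, 401999, "online debit", "online_only"),
        (402000, 402999, "offline debit", "offline_ok"),
    ]

    def classify(pan):
        bin6 = int(pan[:6])
        for low, high, card_class, mode in BIN_RANGES:
            if low <= bin6 <= high:
                return card_class, mode
        return "unknown", "unknown"

    card_class, mode = classify("4010001234567890")
    if mode == "online_only":
        print("an offline-only reader would decline this card")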
It's usually based on the chip settings. Rules aren't as simple as "always online" or "never offline"; an issuer can e.g. convey that they'd prefer online transactions for certain types of payments, while offline is ok for others, via relatively complex configuration of the chip application.
Before that, there was the service code on the magnetic stripe, which also can convey things like "online only" or "domestic use only".
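For illustration, here is a partial decoder of that three-digit service code; the meanings are from memory of ISO 7813, deliberately incomplete, and should be treated as approximate:

    # Partial, approximate meanings of the three service code digits.
    DIGIT1 = {"1": "international use", "2": "international, prefer chip",
              "5": "national (domestic) use only", "6": "national only, prefer chip"}
    DIGIT2 = {"0": "normal authorization", "2": "online authorization required"}
    DIGIT3 = {"0": "no restrictions, PIN required", "1": "no restrictions",
              "2": "goods and services only (no cash)"}

    def describe(service_code):
        d1, d2, d3 = service_code
        return " / ".join((DIGIT1.get(d1, "?"), DIGIT2.get(d2, "?"), DIGIT3.get(d3, "?")))

    # "international, prefer chip / online authorization required / no restrictions"
    print(describe("221"))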
The BIN is only involved in risk management on the terminal's side: many of these in-flight terminals accept deferred online transactions, which means that, even though they're completely offline, they take the risk of accepting an online-only card. (For truly offline-capable cards, the risk is often with the issuing bank.)
That type of risk management can benefit from knowing what type of card it is, and prepaid cards are often seen as riskier (because customers might intentionally drain them before a flight). Of course, debit and credit cards can also be empty/marked as stolen, but these are marginally harder to get and replace.
Yep, you are completely correct; people don't realise how complex the chip is - it has what you'd legitimately recognise as an operating system! It can also be reprogrammed over the wire; if your chip and PIN is taking a bit toooo long, that might be what's happening.
You're correct on the risk spread. I wasn't confident last night (I'm not totally versed on the terminals) but looked it up. As I understand it, if you choose to accept offline-only payments then you accept the risk of the transaction failing. If it's the issuer's choice, they own the risk.
My employer in the UK had a machine in around 2014, but it was only used for sales of their own products to employees.
It put all the transaction risk onto the employer, and had a high per-use fee, but since they only had these 'stock clearance' sales to employees once a year it was fine.
I used to have an online Maestro card (was Solo, now known as Debit Mastercard) and an offline card (was Switch, now also known as Debit Mastercard) from a UK bank, due to having two current accounts there.
The offline card was from a current account with an overdraft and also worked as a cheque guarantee card, for cheques up to £250 under the (discontinued ~2011) cheque guarantee scheme[0], and had a special hologram on the back. The retailer would watch you sign the cheque and write details about you, the card and any CCTV etc. on the back of the cheque. I imagine the offline behaviour of the card was similar, and was a carry-over from that.
The online card was from a basic account with no overdraft facility and acted a bit like a prepaid debit card.
Often you will find a vulnerability in some software and then search for companies using it. You can use Google or Shodan to do the searching, but perhaps ingested LLM data could also work.
In the simplest case if you get remote code execution in SuperServer9000 (made up product) and that has a banner on error / status pages that reads "Powered with pride by SuperServer9000 version 2.1", then you could just search for that string (or part of it) and use your remote code execution bug against any sites that come up.
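Purely as an illustrative sketch (SuperServer9000 and its banner are the made-up example above; the candidate hosts would come from a search engine or Shodan export):

    import requests

    BANNER = "Powered with pride by SuperServer9000"  # made-up banner from the example above
    candidates = ["https://example.com", "https://example.org"]  # e.g. exported search results

    for url in candidates:
        try:
            resp = requests.get(url + "/status", timeout=5)
        except requests.RequestException:
            continue
        if BANNER in resp.text:
            print(url, "appears to run SuperServer9000")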
It can get behaviour-based or more complicated than that, though, or rely on information that an LLM has ingested about a company from public sources.
Then either grab data and sell it or sell your access to a broker or whatever else.
Would it even be possible to enumerate all edge cases and test all the permutations of them in non-trivial codebases or interconnected systems? How do you know when you have all of the edge cases?
With fuzzing you can randomly generate bad input that passes all of the test cases you have already written, by whatever method you have been using, but still causes the application to crash or behave badly. This may mean that there are more tests you could write that would catch the issue related to the fuzz case, or the fuzz case itself could be used as a test.
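A minimal sketch of that loop, with a stand-in parser (the real target would be whatever your application actually exposes):

    import random

    def parse_config(data):
        """Stand-in for the code under test: naive 'key=value' parser."""
        text = data.decode("utf-8", errors="replace")
        return dict(line.split("=", 1) for line in text.splitlines() if line)

    def random_input(max_len=256):
        length = random.randrange(max_len)
        return bytes(random.randrange(256) for _ in range(length))

    for i in range(100_000):
        data = random_input()
        try:
            parse_config(data)
        except Exception as exc:
            # A line without '=' makes dict() blow up: a failing case that
            # none of the hand-written tests covered.
            print("case", i, "crashed:", repr(exc), "input:", repr(data))
            break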
Using probability, you can get to 90%, 99%, 99.999%, or whatever confidence level you need that the software is unaffected by bugs, based on the input size / number of fuzz test cases. In many non-critical situations the goal may not be 100% but 'statistically very unlikely, with a known probability and error'.
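For instance, if a bug is hit by a random case with probability p, the chance of missing it across N independent cases is (1 - p)^N, which gives the number of cases needed for a target confidence:

    import math

    def cases_needed(p, confidence):
        # Smallest N with 1 - (1 - p)**N >= confidence
        return math.ceil(math.log(1 - confidence) / math.log(1 - p))

    print(cases_needed(1e-4, 0.99))      # roughly 46,000 cases
    print(cases_needed(1e-4, 0.99999))   # roughly 115,000 cases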
It is highly recommended to configure two or more DNS servers in case one is down.
I would count not configuring at least two as 'user error'. Many systems require you to enter a primary and alternate server in order to save a configuration.
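For example, on a machine that still uses a plain /etc/resolv.conf, that could be as simple as listing two resolvers run by different operators:

    # /etc/resolv.conf
    nameserver 1.1.1.1
    nameserver 9.9.9.9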
The default setting on most computers seems to be: use the (wifi) router. I suppose telcos like that because it keeps the number of DNS requests down. So I wouldn't necessarily see it as user error.
The funny part with that is that sites like cloudflare say "Oh, yeah, just use 1.0.0.1 as your alternate", when, in reality, it should be an entirely different service.