melchizedek6809's comments | Hacker News

I'm kind of wondering where this sentiment is coming from that LLMs are stealing open-source code. Isn't it just the same as someone learning programming by reading and working on open-source code and then writing closed software with that knowledge? Where is the difference? Or is it just closed models that are problematic, and would open models like Llama/DeepSeek be acceptable?


Why can't the server send unsolicited messages in JSON-RPC? I've implemented bidirectional JSON-RPC in multiple projects and things work just fine, even going as far as sharing most of the code.
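
For illustration, a minimal sketch (the notify helper is my own invention, not from the spec): nothing in the JSON-RPC 2.0 framing says which side of the connection may produce a request, so a server can push a request, or a fire-and-forget notification, whenever it likes:

    // A JSON-RPC 2.0 request is just this shape; the spec doesn't care
    // which side of the connection produces it.
    interface JsonRpcRequest {
      jsonrpc: "2.0";
      method: string;
      params?: unknown[] | Record<string, unknown>;
      id?: number | string; // omit the id and it's a notification: no reply expected
    }

    // A "server" pushing an unsolicited notification over, e.g., a WebSocket:
    function notify(
      send: (data: string) => void,
      method: string,
      params?: unknown[] | Record<string, unknown>,
    ): void {
      const msg: JsonRpcRequest = { jsonrpc: "2.0", method, params };
      send(JSON.stringify(msg));
    }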


Yep, the web server and client can both act as JSON-RPC servers and clients. I've used this pattern before too with Web Workers, where the main thread acts as both client (sending requests to the worker) and server (fielding requests from the worker).
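
Roughly what that wiring can look like (a sketch with made-up names, assuming a dedicated worker): the same attach function runs on both sides, which is also where most of the code sharing comes from.

    // Shared between main thread and worker: dispatch incoming JSON-RPC
    // messages to local handlers, and hand back a function for outgoing calls.
    type Handler = (params: unknown) => unknown;
    type Port = {
      postMessage(m: unknown): void;
      onmessage: ((e: MessageEvent) => void) | null;
    };

    function attachRpc(port: Port, handlers: Record<string, Handler>) {
      let nextId = 0;
      const pending = new Map<number, (result: unknown) => void>();

      port.onmessage = (e: MessageEvent) => {
        const msg = e.data;
        if (msg.method) {                 // incoming request: we act as the server
          const result = handlers[msg.method]?.(msg.params);
          if (msg.id !== undefined)
            port.postMessage({ jsonrpc: "2.0", id: msg.id, result });
        } else if (pending.has(msg.id)) { // incoming response: we were the client
          pending.get(msg.id)!(msg.result);
          pending.delete(msg.id);
        }
      };

      return (method: string, params?: unknown) => // outgoing request
        new Promise((resolve) => {
          const id = nextId++;
          pending.set(id, resolve);
          port.postMessage({ jsonrpc: "2.0", id, method, params });
        });
    }

    // Main thread: const call = attachRpc(new Worker("worker.js"), { ping: () => "pong" });
    // Worker:      const call = attachRpc(self as unknown as Port, { sum: (p) => p });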


That particular border actually does get controlled these days; I got held up at the German-Polish border just a week or so ago:

https://www.bmi.bund.de/SharedDocs/pressemitteilungen/EN/202...

But of course your overall point stands.


They're just checking for illegal immigration though, right? Not like they'll care about GPUs going across the border.


Seemed that way; in my case at least, they only really checked the passports. Doubt they'd notice or care if one brought a trunkload of GPUs across the border.


Since it allows accepting incoming TCP connections, this should allow HTTP servers to run within the browser, although listening directly on port 80/443 might not be supported everywhere. I can't see it mentioned in the spec, but from what I remember, on most *nix systems only root can listen on ports below 1024 (though I might be mistaken, it's been a while).
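
As a rough sketch of what that could look like, assuming the TCPServerSocket shape from the current Direct Sockets draft (availability is limited, e.g. to Isolated Web Apps in Chromium, and the API may still change):

    // Minimal stubs; types for the Direct Sockets draft aren't in lib.dom.d.ts yet.
    declare class TCPSocket {
      opened: Promise<{ readable: ReadableStream<Uint8Array>;
                        writable: WritableStream<Uint8Array> }>;
    }
    declare class TCPServerSocket {
      constructor(localAddress: string, options?: { localPort?: number });
      opened: Promise<{ readable: ReadableStream<TCPSocket> }>;
    }

    async function serve(): Promise<void> {
      // A high port; binding 80/443 may well be refused by the OS or the browser.
      const server = new TCPServerSocket("0.0.0.0", { localPort: 8080 });
      const { readable: connections } = await server.opened;
      const reader = connections.getReader();
      for (;;) {
        const result = await reader.read(); // each chunk is one accepted connection
        if (result.done) break;
        const { writable } = await result.value.opened;
        const writer = writable.getWriter();
        await writer.write(new TextEncoder().encode(
          "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi"));
        await writer.close();
      }
    }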


Memory64 is supported by a lot of runtimes now, although it isn't fully standardized yet (see https://github.com/WebAssembly/proposals). No idea how reliable the implementations actually are, since I haven't needed that much memory yet.
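
One way to probe a given runtime for it, the kind of check libraries like wasm-feature-detect do: validate a tiny module that declares an i64-indexed memory (bytes reconstructed from the spec, so double-check before relying on them):

    // A 13-byte module whose memory section sets the memory64 limits
    // flag (0x04); validation succeeds only where Memory64 is known.
    const hasMemory64 = WebAssembly.validate(new Uint8Array([
      0x00, 0x61, 0x73, 0x6d, // "\0asm" magic
      0x01, 0x00, 0x00, 0x00, // binary version 1
      0x05, 0x03, 0x01,       // memory section, 3 bytes long, 1 entry
      0x04, 0x00,             // limits: flags 0x04 = i64 index, min 0 pages
    ]));
    console.log(hasMemory64 ? "Memory64 available" : "Memory64 not supported");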


Not sure I agree with the conclusion of that article. According to it, only two screen readers don't support nested labels. I couldn't find statistics on how prevalent those two are, but there are plenty of alternative screen readers that might well support nested labels, since they're not mentioned there (I've mainly heard of JAWS, which isn't covered at all). So this doesn't seem to be an inherent limitation of assistive technology, just a bug in some (popular?) screen readers.


Voice Control and Dragon NaturallySpeaking aren't screen readers: they're voice-command software. They're designed for people with mobility problems, not vision problems. There's no inherent limitation here that couldn't be solved by bug fixes, but they're the two major pieces of assistive tech in that sector, so they can't be dismissed without dismissing the people who need that functionality.


Fair enough. So to test things out, I enabled Voice Control and checked whether it makes a difference how the elements are arranged:

At least with Chrome, it does not make a difference! It correctly determined the label and I could just tell it to click on that particular checkbox.

Since Dragon NaturallySpeaking doesn't seem to have a trial, and its shop page is broken so you can't even order it, I can't give it a test. But that article's advice seems rather questionable to me.


That feels somewhat like building your website for IE6 because some people still use it.


The article is right. A nested input with an explicit for attribute on its label is always better than one without.

Maybe not for the screen readers of today, but maybe for the screen readers of tomorrow.
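
To make that belt-and-suspenders version concrete (the id is made up): nest the input inside the label and also link them via for/id, so tools that ignore nesting still see the explicit association.

    // Builds: <label for="newsletter"><input type="checkbox" id="newsletter"> Subscribe</label>
    const label = document.createElement("label");
    label.htmlFor = "newsletter";   // explicit for/id association
    const box = document.createElement("input");
    box.type = "checkbox";
    box.id = "newsletter";          // matches the label's for attribute
    label.append(box, " Subscribe");
    document.body.append(label);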


> Some old processors had page fault handling in hardware. This sucked rocks, and Linus yelled at the PowerPC guys somewhat extensively about this 20 years ago.

Wait, wasn't it the other way around? I might be mistaken, but wasn't one of the problems with PowerPC that it only really had a TLB, and the kernel had to walk the page tables in software?

AFAIK on x86 the page fault handler is only invoked when a page isn't marked present, so that the kernel can allocate a new page or load it from mass storage; apart from that, walking the page tables is done in hardware.

It's been a while, and I only really dabbled in 32-bit protected mode a decade or so ago, so I might be misremembering.
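
For reference, a sketch of what the x86-64 hardware walker does on a TLB miss (illustrative only: large pages and permission bits are ignored, and readPhys stands in for the walker's physical memory reads):

    // Four-level x86-64 translation: 9 bits of virtual address per level,
    // 8-byte entries, each table 4 KiB. The "hardware" part is that no
    // instructions run; a state machine issues these reads itself.
    const PRESENT = 1n; // bit 0 of every page-table entry
    function translate(cr3: bigint, vaddr: bigint,
                       readPhys: (paddr: bigint) => bigint): bigint {
      let table = cr3 & ~0xfffn; // physical base of the top-level table (PML4)
      for (const shift of [39n, 30n, 21n, 12n]) {
        const index = (vaddr >> shift) & 0x1ffn;
        const entry = readPhys(table + index * 8n); // base is 4K-aligned, so no carry: just bit concatenation
        if (!(entry & PRESENT)) throw new Error("page fault: not present");
        table = entry & 0x000ffffffffff000n; // next table, or the final frame
      }
      return table | (vaddr & 0xfffn); // frame base + page offset
    }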


MIPS had a fully classic RISC MMU: TLB only. A TLB miss resulted in a fault, which the OS handled by walking whatever data structure it chose for memory mapping. It had the KSEG regions, which let the (virtually mapped) kernel easily access physically mapped memory for the walk. Not sure if this changed with MIPS64, though KSEGs would have been much less costly in terms of address space there.

PPC (at least the classic, server-style variants; Book-E embedded parts used software-managed TLBs instead) had a more complicated setup where a TLB miss did a hash table lookup in hardware. If that missed as well, it faulted to the kernel to do the full walk. The trick PPC used was that the page fault handler ran with paging disabled entirely, so it could access physical memory directly while handling the miss; no KSEGs necessary.

No idea how SPARC handled this, but x86/x86-64/ARM all do this entirely in hardware, though in practice it is really microcode.


Can you provide some citation for the claim that x86-64 (assuming something modern like AMD Zen or an Intel post-Skylake P-core) does page-table walking/TLB filling in microcode, instead of the fairly obvious state machine that can walk as quickly as the cache hierarchy can deliver the table entries? Well, maybe give it a full cycle of latency to process each response and decide the next step, though I don't remember any addition being required to generate the address of the next level's page-table entry, so the bit of combinational logic controlling the cache's read port might fit in the margins between the port's data-out latches becoming valid and the address-in latches' setup deadline.

