It is indeed badly aliased. The technique demonstrated does not take into account antialiasing in the initial render, which causes this issue. There are ways to improve it, but I would advise against this approach in general since it doesn't handle these edge cases well.


x86-64 introduced a `syscall` instruction to allow syscalls with lower overhead than going through interrupts. I don't know of any reason to prefer `int 80h` over `syscall` when the latter is available. For documentation, see for example https://www.felixcloutier.com/x86/syscall
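As a quick illustration, here is a sketch of making a raw syscall from Python on x86-64 Linux, going through glibc's syscall(2) wrapper (which, on that architecture, enters the kernel via the `syscall` instruction rather than `int 0x80`). The syscall number used here is x86-64-specific:

```python
import ctypes

# glibc's generic syscall(2) entry point; on x86-64 Linux it uses
# the `syscall` instruction internally.
libc = ctypes.CDLL(None, use_errno=True)

SYS_write = 1  # x86-64 Linux only; syscall numbers differ per architecture
msg = b"hello from a raw syscall\n"

# Equivalent to write(1, msg, len(msg)), routed through the raw entry point.
libc.syscall(SYS_write, 1, msg, len(msg))
```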


While AMD's syscall and Intel's sysenter can provide much higher performance than the old "int" instructions, both have been designed very badly, as explained by Linus himself in many places. It is extremely easy to use them in ways that do not work correctly, because of subtle bugs.

It is actually quite puzzling why both the Intel and the AMD designers were so incompetent in specifying a "syscall" instruction, when well-designed instructions of this kind had been included in many other CPU ISAs for decades.

When not using an established operating system, where the implementation of "syscall" has been tested for many years and hopefully all bugs have been removed, there may be a reason to use the "int" instruction to transition into privileged mode, because it is relatively foolproof and requires a minimal amount of supporting code.

Now Intel has specified FRED, a new mechanism for handling interrupts, exceptions and system calls, which does not have any of the defects of "int", "syscall" and "sysenter".

The first CPU implementing FRED should be Intel Panther Lake, to be launched by the end of this year. Surprisingly, when Intel recently gave a presentation about Panther Lake, not a word was said about FRED, even though it is expected to be Panther Lake's greatest innovation.

I hope that the Panther Lake implementation of FRED is not buggy, which could force Intel to disable it and postpone its introduction to a future CPU, as they have done many times in the past. For instance, the "sysenter" instruction was intended to be introduced in the Intel Pentium Pro by the end of 1995, but because of bugs it was disabled and not documented until the Pentium II, in mid-1997, where it finally worked.


32-bit x86 also has sysenter/sysexit.


Only on Intel. AMD has had its own "syscall" instead of Intel's "sysenter" since the K6, so x86-64 inherited that.

AMD's "syscall" corrects some defects of Intel's "sysenter", but unfortunately it introduces some new defects.

Details can be found in the Linux documentation, in comments by Linus Torvalds about the use of these instructions in the kernel.


There was a Minesweeper on here that used a SAT solver, but I cannot find it at the moment. As I recall, it never had any issue with resolving the board quickly. I think it dynamically decided where the mines would be as you played: if you clicked a square that could be a mine, it would be a mine, except, I believe, when there were no safe squares left to open.

(Edit: Here it is! https://pwmarcz.pl/kaboom/ And the write-up: https://pwmarcz.pl/blog/kaboom/ )

This is similar in spirit to my take on the game: https://magnushoff.com/articles/minesweeper/

Unfortunately, not being familiar with SAT solvers, my implementation can grind to a halt in some configurations :)
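As far as I understand the SAT approach, each unopened square gets a boolean variable ("this square is a mine") and each revealed number becomes an exactly-k constraint over its hidden neighbours; the solver can then tell you whether a given square can still be a mine. A rough sketch using the python-sat package, with a naive exactly-k encoding (this is just the general idea, not the Kaboom code):

```python
from itertools import combinations
from pysat.solvers import Glucose3  # pip install python-sat

def exactly_k(lits, k):
    """Naive CNF for 'exactly k of these literals are true'.
    Fine for Minesweeper, where a constraint covers at most 8 squares."""
    clauses = []
    # At least k: every subset of n-k+1 literals must contain a true one.
    for subset in combinations(lits, len(lits) - k + 1):
        clauses.append(list(subset))
    # At most k: every subset of k+1 literals must contain a false one.
    for subset in combinations(lits, k + 1):
        clauses.append([-lit for lit in subset])
    return clauses

# Toy board: one revealed "2" whose five hidden neighbours are
# variables 1..5 ("variable i is true" = "square i is a mine").
clauses = exactly_k([1, 2, 3, 4, 5], 2)

with Glucose3(bootstrap_with=clauses) as solver:
    for square in [1, 2, 3, 4, 5]:
        can_be_mine = solver.solve(assumptions=[square])
        can_be_safe = solver.solve(assumptions=[-square])
        print(square, "mine possible:", can_be_mine, "safe possible:", can_be_safe)
```

With answers to "can this square be a mine?" and "can it be safe?", the game can always pick the adversarial option when you click.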


I wonder if one would learn to play faster with this kind of minesweeper.

I find that in a lot of repetitive learning you have a very noisy signal: you don't know whether you succeeded because of luck or because you did something right.

This variant takes out the luck part.


It's more a matter of your personal preference and previous exposure to different languages. The way Rust reads is one of its super strengths in my book. I also really enjoyed Standard ML in university, and Rust picks up some of that (via OCaml).


Adding some detail to this: With three buffers, you have one front-buffer (what's currently visible on screen) and two back-buffers. Let's call them A, B and C, respectively. This lets you work on the next frame in, say, B, and when it's ready, you queue it up for presentation. At the right time, then, the roles of the buffers will be switched, making B the front-buffer and A a back-buffer.

The third buffer comes into play if you want to start working on the next frame _before_ the switch has occurred. So you start drawing in C, and if the flip time arrives in the meantime, the display system can still flip A and B. In this case, triple buffering gave you a head start with drawing the frame in C.

Going further, if you complete the frame in C before the A/B switch has happened, you queue up C as the next frame instead of B. Then you can start working on the next frame again in B. With this scheme, there is no point in having more than three buffers.
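A toy model of this flip logic, with names and structure of my own choosing, just to make the buffer roles concrete:

```python
class TripleBuffering:
    def __init__(self):
        self.front = "A"        # currently on screen
        self.pending = None     # finished frame queued for the next flip
        self.free = ["B", "C"]  # buffers available for drawing

    def finish_frame(self):
        """Finish rendering into a free buffer and queue it for presentation."""
        buf = self.free.pop(0)
        if self.pending is not None:
            # A newer finished frame replaces the queued one,
            # which goes back into the free pool.
            self.free.append(self.pending)
        self.pending = buf

    def flip(self):
        """At presentation time, the pending buffer becomes the front buffer."""
        if self.pending is not None:
            self.free.append(self.front)
            self.front, self.pending = self.pending, None

tb = TripleBuffering()
tb.finish_frame()  # B is queued
tb.finish_frame()  # C finishes before the flip, so C replaces B in the queue
tb.flip()          # C goes on screen; A is free for drawing again
print(tb.front, tb.pending, tb.free)  # C None ['B', 'A']
```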


Exact integers don't seem to be its strong suit. Can it even represent 3 exactly?

Running code from the linked notebook (https://github.com/AdamScherlis/notebooks-python/blob/main/m...), I can see that a 32-bit representation of the number 3 decodes to the following float: 2.999999983422908

(This is from running `decode(encode(3, 32))`)


Yeah I think any numbers away from 0 or infinity aren’t its strong suit.


That's true for a normal binary encoding of integers, but I think we should understand the question in the context of the post: how many bits are required in iterated-log coding?


Empirically, it seems to grow more like 2*log2(n)+1. A handwavy argument can be made that the first bit serves to distinguish the positive values from the negative ones, and after that, on average, every second bit only adds more precision to values that are already distinguishable or out of range, but doesn't help with values whose representations share the same prefix. I don't know how to make that airtight, though...


Cute! I wonder if it would be amenable to use as a variable-width encoding for, say, DCT coefficients in a JPEG-like codec?


I have realized that there is a big design space here; I recently did a write-up of my take, Id30: 30 bits of information encoded base-32 into six chars, e.g. bpv3uq, zvaec2 or rfmbyz, with some handling of ambiguous chars on decoding.

https://magnushoff.com/blog/id30/
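For a flavour of the mechanics, here is a rough sketch. The alphabet and the ambiguous-character mapping below are Crockford-style placeholders, not the actual Id30 choices (those are in the post; for one thing, the real alphabet evidently includes 'u'):

```python
import secrets

# Placeholder Crockford-style alphabet: NOT the actual Id30 alphabet.
ALPHABET = "0123456789abcdefghjkmnpqrstvwxyz"  # 32 symbols, 5 bits each
AMBIGUOUS = str.maketrans("ilo", "110")        # i/l -> 1, o -> 0 on decode

def encode_id30(n):
    """Encode 30 bits as six base-32 characters."""
    assert 0 <= n < 2**30
    chars = []
    for _ in range(6):  # 6 chars x 5 bits = 30 bits
        chars.append(ALPHABET[n & 0b11111])
        n >>= 5
    return "".join(reversed(chars))

def decode_id30(s):
    """Decode, normalising easily-confused characters first."""
    s = s.lower().translate(AMBIGUOUS)
    n = 0
    for c in s:
        n = (n << 5) | ALPHABET.index(c)
    return n

fresh = encode_id30(secrets.randbits(30))  # a new random id
assert decode_id30(fresh.upper()) == decode_id30(fresh)
```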



Worth reading the entire series, IMO. They are also nicely linked with "next part" at the bottom (and this linking is not unique to that volta/ispc series of blog posts on Matt's blog).

