Not as described. But I would appreciate a connection with certain brands that have helped me solve a problem in the past.
I wish I could go to the same shop and tell them "this was the solution back then, give me the latest iteration".
For example: Moto G phone, Brother laser printer, Gap jeans, Geox laceless shoes. Don't waste any money trying to sell to me, just tell me what you have when I need to replace the previous one.
Great to hear! Yes, the idea is to have a more direct connection to brands/shops, so you know when the products you really want are arriving, and the products you don't want just don't keep appearing on every social platform, since we're not interested. Thanks for the feedback!
Sonic Pi is SuperCollider, but using Ruby instead of the default sclang language. Overtone is similar (and possibly originally by the same developer, iirc?) but using Clojure, and is also missing from the list.
Yeah, those are some glaring omissions - not including Sam Aaron's work makes me distrust the whole list. Sonic Pi is fundamental for teaching kids music and programming, and Overtone is just mind-blowing - I've watched people DJ music while evaling things in Emacs, and it looked sick.
The article is missing this motivation paragraph, taken from the blog index:
> Graphics APIs and shader languages have significantly increased in complexity over the past decade. It’s time to start discussing how to strip down the abstractions to simplify development, improve performance, and prepare for future GPU workloads.
Meaning ... SSDs initially reused IDE/SATA interfaces, which had inherent bottlenecks because those standards were designed for spinning disks.
To fully realize SSD performance, a new transport had to be built from the ground up, one that eliminated those legacy assumptions, constraints and complexities.
Thanks, I had trouble figuring out what the article was about, lost in all the "here's how I used AI and had the article screened by industry insiders".
I read that whole (single) paragraph as “I made really, really, really sure I didn’t violate any NDAs by doing these things to confirm everything had a public source”
I was lost when it suddenly jumped from a long retrospective on GPUs to talking about "my allocator API" in the next paragraph, with no segue or justification.
Haha, instead of making them read an AI-coauthored blog post (which, obviously, they didn't do), he could have asked them interesting questions like, "Do better graphics make better games?" or "If you could change anything about the platforms' technology, what would it be?"
> The demo at the top has some bad noise issues when the light is in small gaps, at least on my phone (which I don't think the article acknowledges).
Right at the end:
> The random jitter ensures that pixels next to each other don’t end up in the same band. This makes the result a little grainy which isn’t great. But I think looks better than banding… This is an aspect of the demo that I’m still not satisfied with, so if you have ideas for how to improve it please tell me!
Not only that. There is an inherent aliasing effect with this method which is very apparent when the light is close to the wall.
I implemented a similar algorithm myself, and had the same issue. I did find a solution without that particular aliasing, but with its own tradeoffs. So, I guess I should write it up some time as a blog post.
AFAIK (I have a similar soft-shadow system based on SDFs) the reason the noise issues occur in small gaps is that the distance values become small there, so the steps become small and you start ending up in artifact land. The workaround for this is to enforce a minimum step size of perhaps 0.5 - 2.0 pixels (depending on the quality of your SDF) so you don't get trapped like that - the author probably knows this, but it's not done in their sample code.
Small step sizes are doubly bad because low-spec shader models like WebGL and D3D9 have a limitation on the number of loop iterations, so no matter how powerful your GPU is the step loop will terminate somewhat early and produce results that don't resemble the ground truth.
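A minimal sketch of that clamp, written in Python rather than shader code (the function name, parameters, and defaults are illustrative, not taken from the article):

```python
def march_shadow(scene_sdf, origin, direction, max_dist,
                 min_step=1.0, max_steps=64):
    """Sphere-trace from a pixel toward the light in 2D.

    scene_sdf(p) returns the signed distance (in pixels) from point p to the
    nearest occluder; direction is assumed normalized. min_step is the clamp
    described above: without it, tiny SDF values in narrow gaps stall the
    march. max_steps mirrors the loop caps that low-spec shader targets
    impose anyway."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t, origin[1] + direction[1] * t)
        d = scene_sdf(p)
        if d <= 0.0:
            return 0.0              # inside an occluder: fully shadowed
        t += max(d, min_step)       # never advance by less than min_step
        if t >= max_dist:
            return 1.0              # reached the light unobstructed
    return 1.0                      # iteration cap hit; treat as lit
```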
However I don't have any issues with the demo in the middle (the hard shadows). So the artifacting has to be from the soft shadow rules, or from the "few extra tweaks".
The primary force behind real soft shadows is obviously that real lights are not point sources. I wonder how much worse the performance would be if, instead of the first two (kinda hacky) soft shadow rules, we replaced the light with maybe five lights representing random points in a small circular light source. Maybe you'd get too much banding unless you used many more light sources, but at the very least it would be an interesting comparison to justify using the approximation.
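For what it's worth, a rough sketch of what that comparison could look like, assuming some hard_shadow(point, light_pos) visibility function (e.g. the march sketched above) that returns 1.0 for lit and 0.0 for shadowed:

```python
import math
import random

def area_light_visibility(hard_shadow, point, light_center, light_radius, samples=5):
    """Average hard-shadow visibility over random sample points on a disc light.

    With only a handful of samples the penumbra comes out noisy or banded,
    which is exactly the trade-off in question; raising `samples` smooths it
    at a roughly linear cost in marches per pixel."""
    total = 0.0
    for _ in range(samples):
        r = light_radius * math.sqrt(random.random())   # uniform over the disc area
        a = random.random() * 2.0 * math.pi
        sample_pos = (light_center[0] + r * math.cos(a),
                      light_center[1] + r * math.sin(a))
        total += hard_shadow(point, sample_pos)
    return total / samples
```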
Same goes for a few of the other images too, but not all of them.
The article would probably benefit from having figure captions below each image stating whether the image is interactive or not.
Or, as an alternative to figure captions about interactivity, show some kind of symbol in one corner of each interactive image. In that case, the intro should also explain that symbol and what it means before the first image that carries it.
I was surprised that the podcast Stuff You Should Know now advertises a short-form video provider, but I couldn't explain exactly why. Maybe this sheds some light on my concerns.
As a player: What's the lag? Does it depend on the game and the gesture?
As a developer: I'd like to implement a "game" which would be ideal for Dynamicland (tens of cards with ID stickers on the corners), but this might be a simpler platform to set up and use. Would that be possible with the board as sold?
Also curious about latency. In the past I've worked around latency using video sensors for high-bandwidth high-latency features, then literally glued a contact mic to my interface to get low latency tap detection. How does the Board hide latency?
This isn't a solution either. Not sure why you think it is. Here's how I name files, just as an example:
Meditationes de Prima Philosophia - GTNB•0023306 (2007) - Descartes, René (aut)
Meditations on First Philosophy - 9780203417621 (2013) - Descartes, René (aut); Haldane, Elizabeth (trl); Ross, G. R. T. (trl) & Tweyman, Stanley (edt,wfw)
Where and how should I put a URI in there, especially considering that URIs at minimum need a colon (:), which is a problematic character in filenames on NTFS/HFS/APFS/XFS? It's not exactly disallowed, but it creates an alternate data stream or resource fork or some shit, and so it doesn't behave as you would expect. If Standard Ebooks just started numbering their books, then I'd slap the STBK• in front of the number and use that. They're not in Worldcat, or I could use OCLC numbers (but it shouldn't be other people's job to keep the catalog of their own books).
- they don’t need to do anything to conform to your arbitrary organization choices
- hashes are as long or short as you need them to be
- publication timestamp is in every ebook’s metadata, is almost guaranteed to be unique, monotonically increases, and has actual semantic meaning compared to an isbn or oclc
>they don’t need to do anything to conform to your arbitrary organization choices
They don't need to. It'd be smart. It's not "arbitrary". It's fucking library science.
>hashes are as long or short as you need them to be
Hashes might uniquely identify a computer file, but they don't uniquely identify an edition/release of a published book. Some jackass on libgen decides to tweak a single byte, now it has a new hash... but it's not a new edition.
>publication timestamp is in every ebook’s metadata
As someone who takes a look at every internal opf file, no... they're not in every ebook.
You're suggesting I go to the extra trouble of doing a job they could do easily, when I can only do it poorly, and I don't know why... because the first person to respond was a dumbass and thought I was attacking him? I swear, 99% of humans are still monkeys.
You don't need to hash file contents (though that is often a useful thing to do). You can hash e.g. the URL that was earlier claimed to be the canonical identifier. Running it through your favorite hash function fixes your complaints about file names (choose your favorite hash function such that it is not too long and only outputs allowed characters).
Ah. The URL, so I can substitute one thing that's difficult for humans to read with another, both of which are excessively long and opaque by design.
>choose your favorite hash function such that it is not too long
An ISBN's 13 digits is about as long as is tolerable. Any time there is a list of authors six names long (academic titles) along with a subtitle, it's very easy to bump up against the maximum filename length.
This isn't a problem I can solve on my own; I'm just trying to bring attention to it. My solution thus far is to just avoid publishers who are so unprofessional as to not provide numbers. It's not tough: Project Gutenberg does it. Anyone can do it. If you're some amateur whose entire catalog is 8 published books, you say "this book is 1, and this book is 2" etc., and it's a done deal. Again, I don't expect anyone to use ISBNs (in the US, you have to pay for them unless you're one of the big 5 publishing houses), but just use your own numbers for god's sake.
Hashes are not excessively long unless you choose to make them so. They might be opaque/random if you want, or they might not. "Remove all special characters and keep only the first 5 characters with space padding" is a string hash function. "Keep only the first 5 vowels with space padding" is a string hash function.
Here's a friendly AI-generated hash function to give you an opaque 13-digit number, if you're into that (a rough sketch; the URL in the usage comment is just a placeholder):
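```python
import hashlib

def opaque_13_digit_id(text: str) -> str:
    """Illustrative sketch: derive a stable, opaque 13-digit ID from any string,
    e.g. a book's canonical URL. The same input always yields the same ID;
    collisions are possible in principle but vanishingly unlikely for a
    personal-library-sized catalog."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return f"{int.from_bytes(digest[:8], 'big') % 10**13:013d}"

# Usage (placeholder URL, not a real catalog entry):
# opaque_13_digit_id("https://standardebooks.org/ebooks/some-book")
```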
It looks like their ebook sources are all published in git repos online, so you could check out the repos, get the timestamp of the initial commits, and do a monotonic ID on that if you wanted. You could also contribute the change back to them if you think it's something others would benefit from.
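A rough sketch of that, assuming the repos are already cloned into one local directory (the path in the usage comment and the ID scheme are just placeholders):

```python
import subprocess
from pathlib import Path

def initial_commit_timestamp(repo: Path) -> int:
    """Unix timestamp of the oldest commit reachable from the default branch."""
    out = subprocess.run(
        ["git", "-C", str(repo), "log", "--reverse", "--format=%ct"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.splitlines()[0])

def monotonic_ids(repos_dir: Path) -> dict:
    """Assign 1, 2, 3, ... to repos ordered by their initial-commit timestamp."""
    repos = sorted(
        (p for p in repos_dir.iterdir() if (p / ".git").exists()),
        key=initial_commit_timestamp,
    )
    return {repo.name: i for i, repo in enumerate(repos, start=1)}

# Usage (placeholder path):
# monotonic_ids(Path("~/standardebooks-repos").expanduser())
```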