This is awesome! I’ve given up learning graphics programming in the past due to the fragmented ecosystem of libraries. It just felt overwhelming. This seems like exactly what I’ve been missing.
This is amazing!! I've found myself writing Selenium scripts to automate tasks for my dad's job (things such as getting a name from a spreadsheet, putting that name in a website's search box, and from there repeating the same actions for 100s of names), and it saved him a ton of time. Making browser automation more accessible by just showing the machine how to do it will definitely make lots of people's lives easier. Can't wait to mess around with it.
Interesting, sounds like the kind of stuff we're excited about helping with! Would love to hear what you think once you try it, email is in my bio if you're open to a chat.
I was at a talk given by the president of Embraer-X (that's an Embraer subsidiary that looks into these riskier ventures), and he got into the details of how they validated their business plan of using these vehicles for urban transportation.
They offered helicopter rides to one of Rio's major airports and charged R$100 per passenger (according to him, that's an accurate estimate of future prices once they make their eVTOL), taking a loss just to see if there was demand. They got booked a full year in advance.
I have faced this myself, and the aimless excitement can create a lot of frustration in new programmers. Programming is a unique tool in the sense that it has the ability to glaringly show the programmer's lack (or wealth) of vision.
I have been journaling for 2 years on and off. In the beginning it had extremely positive effects on my development as a person.
After a while the sense of effectiveness that came from writing constantly started to fizzle out, and I'd feel like I was going in circles in my writing. For a while, I wouldn't write, and I'd feel guilty and attribute negative aspects of my life to my lack of writing, which led me back to daily writing. This was a recurring cycle in my life.
After a while I just concluded that the valuable thing journaling teaches you is the importance of exploring your ideas and thoughts. Doing it religiously or with harsh constraints is no good. There must be time to explore and to just do things for a bit - until some sort of critical mass is reached or some idea shows up that you want to discuss.
I think what confuses a lot of people is what one means by "journaling". If it's just writing about your day, I don't see that much utility in it. In my experience, effective journaling ends up being another name for writing and reflecting without a fixed objective.
I find it very interesting that you mention looking at the history of ISAs first in order to understand the current iteration of the technology.
I was reading the RISC-V privileged ISA recently and the number of seemingly arbitrary registers and behaviours that must be implemented to support a UNIX-like OS is crazy, and that got me thinking about the history behind everything the hardware must provide in order to support the OS.
But thank you for the pointers, I'll definitely use this.
The "ISA" mentioned above is the "Industry Standard Architecture", the 8/16bit bus used by PCs and PC clones back in the day, not "Instruction Set Architecture (x86, ARM, RISC-V, etc):
https://en.wikipedia.org/wiki/Industry_Standard_Architecture
It doesn't really answer my question, but from what I've seen in the TOC I'd say it's equivalent to an introductory course on computer architecture + computer systems and some cryptography as well. Kind of an introduction (don't get me wrong with the word 'introduction', it covers a decent amount of material) to the most important concepts and technologies that guide computers and the internet.
Yeah, as I've read other responses to my post I've been able to better define my difficulties in understanding CPU-GPU communication. I was having a hard time separating the MMIO concept from the communications protocol that ties together all of these devices (based on what you've explained that'd be PCIe). I actually haven't learned about PCIe as of yet, so the way you've introduced the concept has set me up to further look into it, thanks.
Yeah, your explanation really hits the nail on the head regarding what I was trying to understand - MMIO coupled with the master/slave bus dynamics going on. It's clear to me now that my knowledge gap resides in not knowing enough about interconnects. Thanks a lot!
I do wonder, why weren't interconnects emphasized more in the courses I took? All I've seen were just oversimplified pictures of the process. Your explanation goes just deep enough into the lower-level aspects of the process to allow me to piece it together.
> Yeah, your explanation really hits the nail on the head regarding what I was trying to understand
:)
There's a lot of knowledge, acronyms, and BS out there. E.g. there's no need to discuss PCIe here. It's much easier, and more enjoyable, to cultivate a simple understanding of the fundamentals. Build up from there. Reduce it down to your own simple model.
Interconnects focus on the transfer of data between components in the system. Topics like topology, switching/routing, and performance come into play. But, for the purposes of the simple model described above, all you really need to grasp is topology. I.e., how are things connected and where is data flowing?
The memory model is another extension to the simple model described above. Both the CPU and the GPU have access to DRAM memory (shared memory). The CPU can send transactions to the GPU, and the GPU can interrupt the CPU. These are all different paths through the system.

But remember that we described a very specific order of events that need to happen for shared memory communication between CPU and GPU. E.g. (1) the CPU sends transactions to the DDR controller to store some information in DRAM memory, and then (2) sends a transaction to the GPU to inform the GPU that it can now (3) send transactions to the DDR controller to retrieve that information from DRAM memory. But what if (1) and (3) happen much faster than (2)? The GPU will get old data, not the new data that was written by the CPU. Managing this ordering of events in the system is what the memory model is all about. What if shared memory exists not only in DRAM memory but also in caches elsewhere in the system?
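To make that race concrete, here's a rough C sketch of the (1)→(2) path. Everything here is invented for illustration (the buffer address, the doorbell register, the single-word payload), and it ignores caches entirely; the point is only where the ordering guarantee has to be enforced.

```c
#include <stdint.h>

/* Hypothetical addresses, purely for illustration. A real driver would
 * get the shared buffer and the GPU's register block from the OS/bus. */
#define SHARED_BUF   ((volatile uint32_t *)0x80000000u)  /* DRAM buffer both sides can see */
#define GPU_DOORBELL ((volatile uint32_t *)0xF0000000u)  /* MMIO register on the GPU */

static void cpu_sends_data_to_gpu(uint32_t payload)
{
    /* (1) CPU stores the data in shared DRAM. */
    SHARED_BUF[0] = payload;

    /* Without a barrier here, the store to DRAM and the doorbell write
     * below could be reordered or sit in a write buffer, so the GPU's
     * read in step (3) could return stale data - the (1) vs (3) race. */
    __sync_synchronize();   /* full memory barrier (GCC/Clang builtin) */

    /* (2) "Ring the doorbell": an MMIO write telling the GPU that new
     * data is ready to be fetched from DRAM. */
    *GPU_DOORBELL = 1u;
}
```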
Edit:
Back to your question of "how a CPU tells a GPU to start executing a program"...
In the simple model, you could imagine something like: (1) CPU stores the shader program in DRAM memory. (2) CPU writes a GPU register informing the GPU of what address the shader program is located at in DRAM memory. (3) CPU also informs the GPU of the size of the shader program. (4) GPU loads the shader program from DRAM memory. (5) GPU starts executing shader program.
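Sketching those five steps in C, still within the simple model: the register offsets and addresses below are made up (real GPUs expose their register block through the bus, e.g. PCIe BARs, and the layout is vendor-specific), but the shape of the sequence is the same.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical GPU register block and word offsets, for illustration only. */
#define GPU_REGS           ((volatile uint32_t *)0xF0000000u)
#define REG_SHADER_ADDR_LO 0   /* low 32 bits of the program's DRAM address */
#define REG_SHADER_ADDR_HI 1   /* high 32 bits */
#define REG_SHADER_SIZE    2   /* program size in bytes */
#define REG_START          3   /* writing 1 kicks off execution */

static void launch_shader(uint8_t *dram_code_buf,      /* CPU mapping of a GPU-visible DRAM buffer */
                          uint64_t code_dram_addr,     /* address of that buffer as the GPU sees it */
                          const uint8_t *shader, uint32_t size)
{
    /* (1) CPU stores the shader program in DRAM memory. */
    memcpy(dram_code_buf, shader, size);
    __sync_synchronize();   /* make the program visible before telling the GPU about it */

    /* (2) CPU writes GPU registers with the program's DRAM address... */
    GPU_REGS[REG_SHADER_ADDR_LO] = (uint32_t)code_dram_addr;
    GPU_REGS[REG_SHADER_ADDR_HI] = (uint32_t)(code_dram_addr >> 32);

    /* (3) ...and its size. */
    GPU_REGS[REG_SHADER_SIZE] = size;

    /* (4)+(5) GPU loads the program from DRAM and starts executing it. */
    GPU_REGS[REG_START] = 1u;
}
```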
Wow. Just skimmed across that chapter and that looks like a great resource. No wonder I couldn't find it in any of my searching sessions, I'd never think a book titled like that would cover hardware concepts so extensively. This will definitely help me in understanding buses better. Thank you.