I’d suggest revisiting voice recognition — it works quite well for me in the same use case.
I also like to take walks, sometimes listening to podcasts. The stock iOS voice recognition (the microphone button on the keyboard, not Siri) is quite good; I usually just talk into the phone without looking at the output. After the walk, I format and clean up the notes to fix any errors.
That's fair - I live in a bustling city and get self-conscious about people being able to hear me dictate texts and such. I know it's really effective these days, though.
That is obvious, but how does it change the big O? (Why would any manual implementation have a better big O than the existing arithmetic implementation in CPython?)
I think they meant that a naive implementation of an algorithm may not actually be bounded by the big-O the author intended, because the code calls other functions whose costs are "hidden" from the programmer.
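A small illustration of that point (my own example, not from the thread): Python's `in` operator on a list is itself a linear scan, so a loop that looks O(n) on the page can silently be O(n²).

```python
def dedupe_naive(items):
    """Looks like one O(n) loop, but `x not in out` is a hidden
    O(n) scan of the list, making the whole thing O(n^2)."""
    out = []
    for x in items:
        if x not in out:  # hidden linear search
            out.append(x)
    return out

def dedupe_fast(items):
    """Same logic, but set membership is O(1) on average,
    so the loop really is O(n)."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(dedupe_naive([3, 1, 3, 2, 1]))  # → [3, 1, 2]
print(dedupe_fast([3, 1, 3, 2, 1]))   # → [3, 1, 2]
```

Both return the same result; only the hidden cost of the membership test differs.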
Loading books with programs like calibre [1] lets you convert EPUB to MOBI (the Kindle format) seamlessly before transferring. In my experience this works perfectly.
It should be noted that the graph-embedding tasks described are only a small subset of the tasks that GNNs solve. Many (if not most) graph-learning techniques focus on more "local" tasks, such as node classification or edge prediction.
His prerequisite first-year course [0] actually matches your description better, and even has students designing their own CPUs on FPGAs. I feel this course leans more toward his own research interests, and it already assumes the basic "breadth" you speak of.