Hacker News | new | past | comments | ask | show | jobs | submit — chsasank's comments

fast.ai course is the best: https://course.fast.ai/ Comes with its own framework and is deeply practical.


This is basically a fork of llama.cpp. I created a PR to see the diff and added my comments on it: https://github.com/ggerganov/llama.cpp/pull/4543

One thing that caught my interest is this line from their readme:

> PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers.

Apple's Metal/M3 is perfect for this because the CPU and GPU share memory, so no data transfers are needed at all. Check out mlx from Apple: https://github.com/ml-explore/mlx
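The hot/cold split the readme describes can be sketched in a few lines. This is a toy illustration, not PowerInfer's actual code: the function names, the frequency threshold, and the plain-Python matvec are all made up here; in the real engine the "hot" rows would live on the GPU and the "cold" rows on the CPU.

```python
# Toy sketch of a hot/cold neuron split (illustrative only, not PowerInfer's API).
def partition_neurons(activation_freq, threshold=0.5):
    """Split neuron indices by how often they activate (hypothetical threshold)."""
    hot = [i for i, f in enumerate(activation_freq) if f >= threshold]
    cold = [i for i, f in enumerate(activation_freq) if f < threshold]
    return hot, cold

def hybrid_matvec(W, x, hot, cold):
    """Compute y = W @ x row by row. In the real engine the hot rows would run
    on the GPU and the cold rows on the CPU; here both run on the CPU and only
    the scheduling split is illustrated."""
    y = [0.0] * len(W)
    for i in hot + cold:
        y[i] = sum(w * v for w, v in zip(W[i], x))
    return y

W = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
freq = [0.9, 0.1, 0.7]          # made-up activation frequencies per neuron
hot, cold = partition_neurons(freq)
y = hybrid_matvec(W, [1.0, 1.0], hot, cold)
```

On unified-memory hardware like Apple's M-series, both partitions read the same physical memory, which is exactly why the CPU-GPU transfer cost in this scheme disappears.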


Let me start by declaring a conflict of interest: I work at one of the aforementioned AI startups, qure.ai. Bear with my long comment.

AI is starting to revolutionise radiology and imaging, just not in the ways we think. You might imagine radiologists being replaced by some automatic algorithm, after which we stop training radiologists altogether. That is not going to happen anytime soon. Besides, there's not much to gain by doing it: if a hospital already has trained radiologists, replacing them with AI is pretty dumb IMO.

AI is instead revolutionising imaging in a different way. Whenever we imagine AI for radiology, we probably picture a dark room, scanners and films. I urge you to imagine the patient instead, and the point of care. Imaging is one of the best diagnostics out there: non-invasive, and you can actually see what is happening inside the body without opening it up. Are we training enough radiologists to support this diagnostic panacea? In other words, is imaging limited by the supply of radiologists?

The data does suggest a lack of radiologists, especially in low- and middle-income countries.[1] Most of the world's population lives in these countries, where hospitals can afford CT or X-ray scanners (at least pre-owned ones) but can't afford a radiologist on premises. In India, there are roughly 10 radiologists per million people.[2] (For comparison, the US has ~10x more.) Are enough imaging exams being ordered by these 10 radiologists? What is the point of 'enhancing' or 'replacing' them?

So, coming to my point: AI will create new care pathways and will revolutionise imaging by allowing more scans to be ordered. This is happening as we speak. In March 2021, the WHO released guidelines saying that AI can be used as an alternative to human readers for X-rays in tuberculosis (TB) screening.[3] It turns out AI is both more sensitive and more specific than human readers (see table 4 in [3]). Because TB is not a 'rich country disease', nobody noticed this, likely the author included. Does this directive hurt radiologists? Nope, because there are none to be hurt: most TB cases are in rural areas, and no radiologist will travel to some random nowhere village in Vietnam. This means more X-rays can be ordered and more patients treated, all without taking on the burden of training ultra-specialists for 10 years.

References:

1. https://twitter.com/mattlungrenMD/status/1382355232601079811

2. https://health.economictimes.indiatimes.com/news/industry/th...

3. https://apps.who.int/iris/bitstream/handle/10665/340255/9789...


See this for how it actually works in code: https://chsasank.github.io/concurrent-programming-trio-tutor...


How different is podman from docker?


This is the classic paper that rediscovered back-propagation. Conceptually, back-propagation is quite simple: it is just a repeated application of the chain rule. However, the results of applying backprop to multi-layer neural networks have been spectacular. This paper reads like a very brief tutorial on deep learning.
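To make "backprop = repeated chain rule" concrete, here is a made-up two-weight scalar network (not anything from the paper): the analytic gradients obtained by chaining derivatives can be sanity-checked against central finite differences.

```python
import math

# Toy two-weight "network": y = w2 * tanh(w1 * x). Hypothetical example,
# only meant to illustrate the repeated chain rule.
def forward(x, w1, w2):
    h = math.tanh(w1 * x)   # hidden activation
    return h, w2 * h        # (hidden, output)

def grads(x, w1, w2):
    h, _ = forward(x, w1, w2)
    # Chain rule: dy/dw2 = h;  dy/dw1 = (dy/dh) * (dh/dw1) = w2 * (1 - h^2) * x
    return w2 * (1.0 - h * h) * x, h

# Check against central finite differences
x, w1, w2, eps = 0.5, 0.3, -1.2, 1e-6
g1, g2 = grads(x, w1, w2)
n1 = (forward(x, w1 + eps, w2)[1] - forward(x, w1 - eps, w2)[1]) / (2 * eps)
n2 = (forward(x, w1, w2 + eps)[1] - forward(x, w1, w2 - eps)[1]) / (2 * eps)
assert abs(g1 - n1) < 1e-5 and abs(g2 - n2) < 1e-5
```

The same chaining, applied layer by layer and vectorised, is all that backprop does for a deep network.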


In 1955, four computer scientists wrote the following proposal for a workshop to lay the groundwork for Artificial Intelligence. The workshop is considered the founding event of AI as a field. Ideas from the proposal remain highly relevant to this day.

The yellow highlights/annotations are my own; you can disable them. I also added side notes with my thoughts and the connections I could trace to more modern AI ideas.


Ok somebody changed the title. That is awesome. thanks!


stderr is not missing.

> The discussion of I/O in §3 above seems to imply that every file used by a program must be opened or created by the program in order to get a file descriptor for the file. Programs executed by the Shell, however, start off with two open files which have file descriptors 0 and 1. As such a program begins execution, file 1 is open for writing, and is best understood as the standard output file. Except under circumstances indicated below, this file is the user’s typewriter. Thus programs which wish to write informative or diagnostic information ordinarily use file descriptor 1. Conversely, file 0 starts off open for reading, and programs which wish to read messages typed by the user usually read this file.


That's stdin & stdout. stderr is file descriptor 2.
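A quick way to see all three descriptors from Python (a minimal sketch using the stdlib os module; the message strings are arbitrary):

```python
import os

# By convention, file descriptors 0, 1, 2 are stdin, stdout, and stderr.
os.write(1, b"informative output via fd 1 (stdout)\n")
os.write(2, b"diagnostics via fd 2 (stderr)\n")  # the descriptor the quoted passage doesn't mention
```

Redirecting fd 1 (`prog > out.txt`) leaves fd 2 pointed at the terminal, which is exactly why diagnostics get their own descriptor.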


you're right! thanks for clarifying!


Great question. I never thought of this!

