Hacker News | qrios's comments

A simple skill Markdown file for Claude Code was enough to let it use the local Wolfram Kernel.

Even the documentation search is available:

```bash
/Applications/Wolfram.app/Contents/MacOS/WolframKernel -noprompt -run '
Needs["DocumentationSearch`"];
result = SearchDocumentation["query term"];
Print[Column[Take[result, UpTo[10]]]];
Exit[]'
```


Works on my computer: RTX 3090, CUDA 12.6

Interesting project! I haven't really worked with Vulkan myself yet, hence my question: how is the code compiled and then loaded onto the GPU?

Or is the entire code always compiled in the REPL and then uploaded, with only the existing data addresses being updated?


Thanks for trying it! :)

Each gpu_* call emits SPIR-V and dispatches via Vulkan compute. Data stays resident in VRAM between calls — no round-trips to CPU unless you need the result.

No thread_id exposed. The runtime handles thread indexing internally — gpu_add(a, b) means "one thread per element, each does a[i] + b[i]." Workgroup sizing and dispatch dimensions are automatic.

The tradeoff: you can't write custom kernels with shared memory or warp-level ops. OctoFlow targets the 80% of GPU work that's embarrassingly parallel. For the other 20% you still want CUDA/Vulkan directly.
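Not OctoFlow code, but a minimal Python sketch of the implicit-indexing model described above, with the GPU dispatch replaced by a sequential loop (the function name mirrors the `gpu_add` mentioned in the thread; everything else is illustrative):

```python
def gpu_add(a, b):
    """CPU model of the implicit elementwise dispatch:
    one 'thread' per index i, each computing a[i] + b[i].
    On the GPU this loop becomes a SPIR-V compute dispatch
    with automatic workgroup sizing; here it is just a
    sequential stand-in to show the semantics."""
    assert len(a) == len(b), "elementwise ops require equal lengths"
    return [a[i] + b[i] for i in range(len(a))]

print(gpu_add([1, 2, 3], [10, 20, 30]))
```

The point of the model: there is no user-visible thread ID, so anything that needs shared memory or cross-thread communication falls outside it, as noted above.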

Cheers


Very interesting! I'll definitely give it a try. However, the documentation link[1] isn't working at the moment (404).

[1] https://crux-ecosystem.github.io/MOL/



Via certificate invalidation, or by turning off certificate pinning for the services?



Same for me, with an ISP in Austria.


Sorry. Maybe it's a US-only government site. Unfortunate.



> … what human intelligence is because intelligence is best described as a side-effect of consciousness …

Are "human intelligence" and "intelligence" the same thing?

And: How does one become conscious before being intelligent?

Or: If intelligence is a side-effect, how often does this side-effect go unobserved?

Xor: What if an intelligent being denies being conscious?


"Low-hanging" is relative, at least from my perspective. A significant part of my work involves cleaning up structured and unstructured data.

An example: more than ten years ago a friend of mine was fascinated by the German edition of the book "A Cultural History of Physics" by Károly Simonyi. He scanned the book (600+ pages) and created a PDF with (nearly) the same layout.

Against my advice he used Adobe tools for it instead of creating an EPUB or something like DocBook.

The PDF looks great, but the text inside is unusable as training data for a small LLM. Lines from the two columns are interleaved, and spaces are scattered randomly, which is particularly troublesome because mathematical formulas often appear inline in the text.

After many attempts (with regexes and LLMs), I gave up, rendered each page as an image, and had a large LLM extract the text.
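The render-and-extract pipeline can be sketched roughly like this (assumptions: PyMuPDF for page rendering, and `extract_text_via_llm` is a hypothetical stand-in for whatever vision-capable LLM API you use; the prompt wording is illustrative only):

```python
def page_prompt(page_no: int) -> str:
    # Illustrative prompt for the vision model.
    return (f"Extract the plain text of page {page_no}. "
            "Read the two columns in order and preserve spacing "
            "inside mathematical formulas.")

def extract_text_via_llm(png_bytes: bytes, prompt: str) -> str:
    # Hypothetical stand-in: call your vision-capable LLM here.
    raise NotImplementedError

def extract_book(pdf_path: str, out_path: str) -> None:
    import fitz  # PyMuPDF, assumed installed (pip install pymupdf)
    doc = fitz.open(pdf_path)
    with open(out_path, "w", encoding="utf-8") as out:
        for i, page in enumerate(doc, start=1):
            pix = page.get_pixmap(dpi=300)   # render the page as an image
            png = pix.tobytes("png")
            out.write(extract_text_via_llm(png, page_prompt(i)) + "\n\n")
```

Per-page rendering sidesteps the broken text layer entirely: the model sees what a human reader sees, column order and formulas included.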

