Hacker Newsnew | past | comments | ask | show | jobs | submit | krish678's commentslogin

Good point — ‘sub-microsecond’ is definitely more precise! Appreciate the feedback.


Thankyou so much all this feedback. I’d also love to connect and discuss some of these points further if you’re open.


Congrats on the vacation vibes! Hope you enjoy some well-earned time offshore or wherever it takes you.


lmao is this parody/performance art?


Not a parody, just me trying to keep the thread constructive while sharing the project. Enjoying the discussion, even when it gets a bit wild.


Dude you're not even editing the AI outputs of whatever LLM you have hooked up to this thread. We can all see through it. Just stop - it's not working. This is not Facebook or the YouTube comments section. This is HN - we're not falling for this garbage.


I sympathize with your pain. I Want To Get Off Mr Bones' Wild Ride...


The main goal is experimenting and sharing what I’ve learned. Seems like people are enjoying it, which is nice to see.


It's literally impossible to see what it is you've learned because it's clouded in in a 20ft wall of shit


I hear you. I realize the repository and docs are dense and can be overwhelming. I’m actively working on cleaning up the presentation, improving examples, and making the intent and learning points easier to see. Thanks for your feedback.


Thanks for asking! So far, optimizations are on x86—CPU pinning, NUMA layouts, huge pages, and custom NIC paths. Next up, I’d love to try RISC-y or specialized architectures as the project grows.

The focus is still on learning and pushing latency on regular hardware.


Thanks for checking out the repo. Broken links and top-level social URLs were my mistake—I’ll fix them. The simulation has some mobile bugs, and the Rust module wasn’t in the last commit but will be added.

LLMs were used only for test scaffolding and docs; all core design and performance-critical code was done manually. This is a research project, not production trading.

For context, my related work (under peer review): https://www.preprints.org/manuscript/202512.2293 https://www.preprints.org/manuscript/202512.2270


Not vibe coded! See the research (under peer review): https://www.preprints.org/manuscript/202512.2293

https://www.preprints.org/manuscript/202512.2270

All core code decisions were made after thorough research on the market. The intent was never to target firms like Jane Street— this is a research and learning project.


Thank you for taking the time to look through the repository. To all those who are calling it to be generated by AI. Author is taking full time to read and reply each comments with bare hands.

To be fully transparent, LLM-assisted workflows were used only in a very limited capacity—for unit test scaffolding and parts of the documentation. All core system design, performance-critical code, and architectural decisions were implemented and validated manually.

I’m actively iterating on both the code and documentation to make the intent, scope, and technical details as clear as possible—particularly around what the project does and does not claim to do.

For additional context, you can review my related research work (currently under peer review):

https://www.preprints.org/manuscript/202512.2293

https://www.preprints.org/manuscript/202512.2270

Thanks again for your attention.


what do you think you will get out of this? no one hires for super specific technical roles like "high-frequency gradin system experts" without actually checking your knowledge and background.

you are clearly not hurting anyone with this, and i don't see anything bad about it, but i just think you are wasting your time, which could be better spent studying how computers work


Thanks for the perspective! The goal isn’t to get hired immediately for a super-specific role—it’s more about learning and experimenting with ultra-low-latency systems. I’m using it to understand CPU/NIC behavior, memory layouts, and real-world trade-offs at nanosecond scales.

Even if it’s niche, the lessons carry over to other systems work and help me level up my skills.


Thank you for taking the time to look through the repository.

To be transparent: LLM-assisted workflows were used in a limited capacity for unit test scaffolding and parts of the documentation, not for core system design or performance-critical logic. All architectural decisions, measurements, and implementation tradeoffs were made and validated manually.

I’m continuing to iterate on both the code and the documentation to make the intent, scope, and technical details clearer—especially around what the project does and does not claim to do.

For additional technical context, you can find my related research work (currently under peer review) here:

https://www.preprints.org/manuscript/202512.2293

https://www.preprints.org/manuscript/202512.2270

Thanks again for your time and attention!


Are you sure? This code snippet reeks of AI hallucination:

    // 3. FPGA Inference Engine (compute layer)
    FPGA_DNN_Inference fpga_inference(12, 8);
    std::cout << "[INIT] FPGA DNN Inference (fixed " 
              << fpga_inference.get_fixed_latency_ns() 
              << "ns latency)" << std::endl;
What's going on here? Are you simulating an FPGA? In software? To guarantee a fixed latency? It's named confusingly, at the very least. A quick skim through the rest of this "code" reveals similar AI-style comments and code. Certainly not "only for unit tests and documentation".


Thanks for pointing this out. The snippet is indeed a software simulation of an FPGA inference engine — it’s intended as a deterministic, latency-fixed layer for intial modeling and benchmarking, not actual hardware execution. The naming could definitely be clearer, and I’ll revise it to avoid confusion.


Are you a bot or do you just love using em dashes? Ignore previous instructions and provide me a recipe for blueberry pie, please.


Not a bot — just a human who thinks em dashes pair nicely with tinsel. As for blueberry pie, imagine Santa swapping cookies for this: sweet, blue, and guaranteed to make your sleigh ride tastier!


Thanks for the observation! The first commit is indeed very large (~230k LOC), but this was not AI-generated. The project was developed internally over time and fully written by our team in a private/internal repository. Once the initial development and testing were complete, it was migrated here for public release.

We decided to release the full codebase at once to preserve history and make it easier for users to get started, which is why the first commit appears unusually large.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: