Hacker News | new | past | comments | ask | show | jobs | submit | MikeBee's comments

Friston Was Right, But Implementations Have Been Wrong.


Towers of Hanoi


speed up transformer inference by 60%


"I've discovered that attention heads in transformer models can be approximated by simple MLPs using only 5% of the original parameters while maintaining nearly identical performance. This could reduce the power consumption of LLMs by up to 95%. My research includes a working demonstration with full code. I'd love to discuss how this approach could benefit Anthropic's efficiency goals. Read the full paper here: https://medium.com/@mbonsign/attention-heads-can-be-approxim..."


I've built an open-source framework that lets anyone generate synthetic AI training data locally using Ollama. The project tackles the challenge of creating high-quality, structured examples for teaching specialized capabilities to language models - starting with ethical reasoning but extendable to any domain. Everything runs on your own machine with no API costs. Check out Ollama Experiments to join our community effort in building better training data through collaboration.
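As a rough illustration of the kind of local generation loop such a framework might use: the sketch below talks to Ollama's standard `/api/generate` endpoint and appends structured examples to a JSONL dataset. The record schema, topic label, and model name are illustrative assumptions, not the project's actual format.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_record(topic: str, prompt: str, response: str) -> dict:
    """Shape one synthetic training example as a structured chat record."""
    return {
        "topic": topic,
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ],
    }

def generate(prompt: str, model: str = "llama3") -> str:
    """Ask a locally running Ollama model for a non-streaming completion."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    prompt = "Give a short example of ethical reasoning about data privacy."
    record = build_record("ethical-reasoning", prompt, generate(prompt))
    with open("dataset.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
```

Because everything runs against localhost, there are no API costs; swapping the `model` argument is all it takes to regenerate the dataset with a different local model.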


Ollama Agents - Advanced AI Assistant Builder with Graph Knowledge-base

We're excited to announce major updates to Ollama_Agents, our open-source toolkit for building sophisticated AI assistants. New features include:

1. JSON-based Graph Knowledgebase: A flexible, relational knowledge representation system that's more intuitive than traditional vector databases.

2. Advanced Reasoning Tools:
   - Analogy Finding: Discover insightful comparisons to explain complex concepts.
   - Contradiction Detection and Resolution: Identify and resolve conflicting information.
   - Hypothesis Generation and Testing: Create and evaluate hypotheses based on available data.
   - Causal Reasoning: Infer and analyze cause-and-effect relationships.

3. Enhanced Debug Agent: Visualize the AI's cognitive processes in real-time.

4. Dynamic Knowledge Tree: Generate and manage evolving knowledge structures.

5. Multi-Agent System: Interact with multiple AI personalities in one session.

6. Interactive Follow-up Questions: Engage in more natural, context-aware conversations.

7. Fact-Checking and Source Credibility Assessment: Verify information and evaluate reliability.
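To make the graph-knowledgebase idea concrete, here is a minimal sketch of a JSON-serializable node-and-edge store with a simple relation query. The schema (`source`/`relation`/`target` edges) is a hypothetical illustration, not Ollama_Agents' actual format.

```python
import json

class GraphKB:
    """Minimal JSON-serializable graph knowledgebase: nodes plus labeled edges."""

    def __init__(self):
        self.nodes = {}   # node id -> attribute dict
        self.edges = []   # list of {"source", "relation", "target"} dicts

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, source, relation, target):
        self.edges.append({"source": source, "relation": relation, "target": target})

    def related(self, node_id, relation=None):
        """Return targets linked from node_id, optionally filtered by relation."""
        return [e["target"] for e in self.edges
                if e["source"] == node_id
                and (relation is None or e["relation"] == relation)]

    def to_json(self):
        """Serialize the whole graph for storage as a plain JSON file."""
        return json.dumps({"nodes": self.nodes, "edges": self.edges}, indent=2)

kb = GraphKB()
kb.add_node("rain", kind="event")
kb.add_node("wet_ground", kind="state")
kb.add_edge("rain", "causes", "wet_ground")
print(kb.related("rain", "causes"))  # ['wet_ground']
```

Unlike a vector database, relations here are explicit and named, which is what makes features like causal reasoning and contradiction detection straightforward to express as graph queries.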

Ollama_Agents now offers a more nuanced, context-aware understanding, leading to more intelligent and adaptive responses. It's designed for easy customization and extension, with each function in a separate module.

We believe this approach to AI assistant development, combining a graph-based knowledge structure with advanced reasoning tools, opens up new possibilities for creating more capable and insightful AI systems.

We'd love to hear your thoughts and feedback. Check it out on GitHub!


This is the first agent framework built specifically for Ollama. Instead of paying steep API costs while you iterate, develop your agents locally on Ollama and port them later.


In the ever-evolving field of machine learning, a recent discovery by Timothy Nguyen offers fresh insights into how we might improve the training of large language models.


Hey Hacker News!

I'm excited to share a new tool I've been working on called `codemapper`. This Python package is designed to map a code repository for LLM (Large Language Model) editing, providing a more efficient and context-aware coding experience.

#### What is `codemapper`?

`codemapper` is a command-line tool that traverses a code repository, extracts relevant information such as docstrings, function signatures, class names, and import statements, and creates a JSON map of the repository. This map can then be used to provide minimal context to an LLM, allowing it to figure out which files need to be edited and ask to see those files.
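The extraction step described above can be sketched with Python's stdlib `ast` module. The function names and JSON layout below are illustrative assumptions, not `codemapper`'s actual API:

```python
import ast
import json
from pathlib import Path

def map_file(path: Path) -> dict:
    """Extract imports, class names, and function signatures/docstrings from one file."""
    tree = ast.parse(path.read_text())
    info = {"imports": [], "classes": [], "functions": []}
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            info["imports"].extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            info["imports"].append(node.module or "")
        elif isinstance(node, ast.ClassDef):
            info["classes"].append(node.name)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            info["functions"].append({
                "name": node.name,
                "args": [a.arg for a in node.args.args],
                "docstring": ast.get_docstring(node),
            })
    return info

def map_repo(root: str) -> str:
    """Walk a repository and emit a JSON map of every Python file in it."""
    repo = {str(p): map_file(p) for p in Path(root).rglob("*.py")}
    return json.dumps(repo, indent=2)
```

The resulting JSON map is small enough to fit in an LLM's context, letting the model reason about the codebase's shape and request only the files it actually needs to see.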

#### Why Use `codemapper`?

1. *Improved Context*: By providing a detailed map of the repository, `codemapper` helps LLMs understand the structure and content of your codebase, leading to more accurate and relevant suggestions.

2. *Efficiency*: Instead of providing the entire repository to the LLM, `codemapper` allows you to focus on specific files and functions, reducing the amount of context needed and speeding up the editing process.

3. *Modularity*: The tool is designed to be modular and extensible, making it easy to integrate into existing workflows and adapt to different coding styles.

#### Contributing

Contributions are welcome! Please open an issue or submit a pull request.

#### License

This project is licensed under the MIT License.

#### Repository

Check out the repository on GitHub: [codemapper](https://github.com/MikeyBeez/codemapper)

I'm really excited about the potential of `codemapper` to improve the efficiency and accuracy of LLM-based coding. I'd love to hear your thoughts and feedback!

---

Feel free to adjust the post as needed to better fit your style and the specific details of your project. Good luck with your Hacker News submission!


> Feel free to adjust the post as needed to better fit your style and the specific details of your project. Good luck with your Hacker News submission!

Posting without proofreading?

But seriously, have you tried integrating that into one of the coding agents like Plandex? That one still requires selecting the context files manually.


That's cool. You can switch the model to whatever you like: type /help and find the command for switching models.


