The service runs on secure cloud infrastructure and processes code in-memory during PR reviews - we don't permanently store any source code. We use enterprise-grade LLMs (can't disclose specific models due to licensing) and implement context-aware analysis without fine-tuning on customer code.
When we say "learning", we mean analyzing the codebase context during PR reviews to understand patterns and relationships, not training or building persistent knowledge bases. This ensures both privacy and effectiveness.
We're working on open-sourcing parts of the implementation - will share more soon!
Good questions! Code review and generation are quite different tasks. Review is about pattern recognition and consistency checking, while generation requires understanding business logic and system design.
LlamaPReview works best at:
- Spotting potential issues (like off-by-one errors)
- Identifying patterns across the codebase
- Maintaining coding standards
For complex architectural decisions, it serves as an assistant rather than a replacement, saving senior developers time so they can focus their attention where it matters most.
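To make the first bullet above concrete, here is a purely hypothetical example (not taken from any real review) of the kind of off-by-one error an automated pass can flag:

```python
# Hypothetical illustration of an off-by-one error an automated review can catch:
# the loop stops one element early, so the last item is never processed.
items = ["a", "b", "c"]
for i in range(len(items) - 1):  # bug: should be range(len(items))
    print(items[i])              # "c" is never printed
```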
Thanks for raising this important question. We don't store any code in our database, but we do call SaaS LLM APIs (e.g. GPT/Claude/Mistral) to perform the PR review - during that step we necessarily send code to those hosted LLMs for analysis. That is why our privacy policy mentions "collecting users' code".
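For anyone curious about the data flow, here is a minimal, hypothetical sketch of what "sending code to a hosted LLM for analysis" can look like. It assumes the OpenAI Python client and a placeholder model name, and it is not our actual implementation:

```python
# Hedged sketch of the review data flow: the PR diff is sent to a hosted LLM
# API and the response is returned; nothing is written to disk or a database.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_diff(diff_text: str) -> str:
    """Ask the model for review comments on a unified diff, kept only in memory."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name, not a statement about what we use
        messages=[
            {"role": "system", "content": "You are a code reviewer. Point out bugs, style issues, and risky changes."},
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content
```

The key point is that the code only exists in memory for the duration of the API call.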
Interesting perspective on the timing of feedback. We chose PR reviews because they're a natural integration point where developers already expect feedback, and it's when context is most complete. However, we're exploring ways to provide earlier feedback without being intrusive.
The key is finding the right balance between immediate assistance and allowing developers to maintain their flow. Would love to hear more about your experiences with different feedback timing approaches.
Thanks for mentioning PR Agent. While there are several tools in this space, LlamaPReview focuses on deep codebase understanding and context-aware reviews (more advanced capabilities are still evolving). We'd love to hear about your experiences and what specific features you find most valuable in code review tools.
Thanks for raising this question. Currently, we're offering a free tier to gather community feedback and improve the service. We use enterprise-grade LLMs to ensure high-quality reviews while maintaining reasonable operational costs. Our focus is on building a valuable tool for developers first, and we'll be transparent about any future pricing changes.
The "learning" process involves analyzing your codebase's context during PR reviews - we don't train on your data (we even will not save them but only calculate in memory). Instead, we use advanced context retrieval to understand:
- Project structure and architecture
- Coding patterns and conventions
- Dependencies and relationships between components
This allows us to provide more relevant and context-aware reviews while maintaining data privacy (some advanced features are still under development).
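As a rough illustration of what context retrieval can mean in practice, here is a hypothetical sketch (not our production code) that gathers the files touched by a PR plus the local modules they import, so the model sees the surrounding structure without anything being persisted:

```python
# Hypothetical sketch of PR context retrieval: start from the changed files
# and pull in first-level local imports as additional review context.
import ast
from pathlib import Path

def related_files(changed: list[Path], repo_root: Path) -> set[Path]:
    """Return the changed files plus the local modules they import."""
    context = set(changed)
    local_modules = {p.stem: p for p in repo_root.rglob("*.py")}
    for path in changed:
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    if alias.name in local_modules:
                        context.add(local_modules[alias.name])
            elif isinstance(node, ast.ImportFrom) and node.module in local_modules:
                context.add(local_modules[node.module])
    return context
```

Everything here lives in memory for the duration of the review; the real retrieval goes further (conventions, dependencies across components), but the principle is the same.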
Great points about code review reliability. LlamaPReview is designed to be a complementary tool for senior developers, not a replacement for human review. Here's our approach:
1. It helps save senior developers' time by handling routine checks and providing initial insights
2. It analyzes the entire codebase context to provide more meaningful reviews
3. It's particularly useful for identifying patterns and relationships across the codebase
The goal is to make human reviewers more efficient, allowing them to focus on complex architectural decisions and critical business logic. We've seen positive results from both open-source and commercial projects using this approach.