Good question! While it could be implemented as a ChatGPT plugin, OpenAstra offers some key advantages:
1. Visual Features - We maintain useful UI elements from traditional API tools (request/response viewers, environment management) within your chat responses.
2. Privacy & Control - You can self-host it and use any OpenAI-compatible LLM (including local models), keeping sensitive API traffic within your infrastructure.
The core idea is to build a complete API discovery and testing platform that happens to use chat as its primary interaction model.
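On the self-hosting point: any OpenAI-compatible server works because the client only needs a base URL and the standard chat-completions schema. A minimal sketch of what such a request looks like (the local URL, model name, and prompt below are placeholders, not OpenAstra's actual configuration):

```python
import json
import urllib.request

# Point at any OpenAI-compatible endpoint, e.g. a local server.
BASE_URL = "http://localhost:11434/v1"  # placeholder for a local deployment

payload = {
    "model": "llama3",  # whatever model the local server hosts
    "messages": [
        {"role": "user", "content": "List the endpoints in this API spec."}
    ],
}

req = urllib.request.Request(
    url=f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # responses follow the OpenAI schema
```

Because the request never has to leave your network, the same code works unchanged whether the model is hosted locally or by a cloud provider.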
Thanks for the feedback! Let me explain with a concrete example:
Currently, with tools like Postman, testing an API endpoint requires you to:
1. Read through API documentation
2. Manually construct request parameters
3. Navigate through multiple UI sections to set up headers/auth
4. Format and validate JSON payloads
With OpenAstra, you can simply type "Send a POST request to create a new user with email test@example.com" and it handles all the above setup automatically. It's particularly useful when:
- Exploring new APIs without reading extensive docs
- Running quick tests without navigating complex UIs
- Helping team members who aren't familiar with API tooling
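To make that concrete, here is a rough sketch of what the chat instruction above might expand to. The endpoint path and payload shape are hypothetical; in practice the request would be derived from the API's documentation or spec:

```python
import json

# The natural-language instruction from the user
prompt = "Send a POST request to create a new user with email test@example.com"

# A sketch of the fully-formed request the assistant would construct.
# The URL is an assumed placeholder; headers, method, and body are
# inferred from the instruction and the API's spec.
generated_request = {
    "method": "POST",
    "url": "https://api.example.com/users",  # assumed endpoint
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps({"email": "test@example.com"}),
}
```

The point is that the header setup, JSON formatting, and endpoint lookup in steps 1-4 all collapse into the one-line prompt.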
Think of it as having an assistant who understands the API documentation and handles the technical setup, letting you focus on what you want to test. Hope this helps!
Yes, absolutely. The goal is to come as close as possible to GitHub Copilot in terms of DX. Will add a comparison table soon with other players in the space like continue.dev, TabbyML, etc.
Hi @lgrammel, my intention was not to undermine the contributions made by you and others. I've added the full list of original contributors back to the repo. The commit link is http://tinyurl.com/2uzdefak.
Really wish both completion and chat could be run against the same LLM endpoint. Most of these seem to be doing their own incompatible Docker thing, which is annoying.
While testing internally, Mistral worked well. But these models are just starting points. Will add support for models like WaveCoder-Ultra-6.7B, WizardCoder-33B, Magicoder-S-DS-6.7B, etc. soon.
Yes, well, you've basically missed the entirety of the British Empire, most of whom have at some point in not-too-recent history called a toilet a privy. It's an older term, but still very well recognised.