I like this. I am imagining a companion extension for Chrome/Firefox that uses you-get as a backend to implement it seamlessly. Forward-thinking idea: imagine going on YouTube and having the you-get extension bypass the YouTube player and play the content directly, without ads. And when I say YouTube, I could just as well mean any other platform.
This is surely useful right now. I wonder what will happen to all the nice X11 tools once Wayland (hopefully soon) becomes the gold standard. There are options to enable X11 behaviors in Wayland, but I guess that is just a fallback to the insecure implementation.
### Added
- New `model_tokens.json` file containing token limits for various Ollama models.
- Dynamic token limit updating based on selected model in options.
- Automatic loading of model-specific token limits from `model_tokens.json`.
- Chunking and recursive summarization for long pages (a rough sketch follows this list).
- Better handling of Markdown returned by the model.
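A minimal sketch of the chunk-and-recurse idea, not the extension's actual code: `summarize(text)` stands in for whatever call is made to the Ollama API, and the chunk size is a placeholder derived from the 4096-token default.

```js
// Chunked, recursive summarization sketch. `summarize` is a placeholder for
// the extension's Ollama call; chunk size and recursion strategy are assumptions.
async function summarizeLongPage(text, summarize, maxChars = 4096 * 3) {
  // Short enough for a single pass: summarize directly.
  if (text.length <= maxChars) {
    return summarize(text);
  }
  // Split into chunks that each fit and summarize them one by one.
  const partials = [];
  for (let i = 0; i < text.length; i += maxChars) {
    partials.push(await summarize(text.slice(i, i + maxChars)));
  }
  // Recursively summarize the joined partial summaries until one pass suffices.
  return summarizeLongPage(partials.join('\n\n'), summarize, maxChars);
}
```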
### Changed
- Updated `manifest.json` to include `model_tokens.json` as a web accessible resource.
- Modified `options.js` to handle dynamic token limit updates (a sketch follows this list):
- Added `loadModelTokens()` function to fetch model token data.
- Added `updateTokenLimit()` function to update token limit based on selected model.
- Updated `restoreOptions()` function to incorporate dynamic token limit updating.
- Added event listener for model selection changes.
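A minimal sketch of how these pieces might fit together in `options.js`, assuming `model_tokens.json` is a flat `{ "model-name": limit }` map and that the options page exposes a model `<select>` and a token-limit `<input>`; the element ids and the exact JSON shape are guesses, not taken from the actual code:

```js
// options.js (sketch) -- element ids and JSON shape are illustrative assumptions.
const DEFAULT_TOKEN_LIMIT = 4096;
let modelTokens = {};

// Fetch the bundled model_tokens.json (exposed as a web accessible resource).
async function loadModelTokens() {
  const response = await fetch(chrome.runtime.getURL('model_tokens.json'));
  modelTokens = await response.json();
}

// Look up the limit for the selected model, falling back to the default.
function updateTokenLimit() {
  const model = document.getElementById('modelSelect').value;
  document.getElementById('tokenLimit').value =
    modelTokens[model] ?? DEFAULT_TOKEN_LIMIT;
}

// Restore saved options, then apply the model-specific limit and
// keep it in sync with the model selection.
async function restoreOptions() {
  await loadModelTokens();
  updateTokenLimit();
  document.getElementById('modelSelect')
    .addEventListener('change', updateTokenLimit);
}

document.addEventListener('DOMContentLoaded', restoreOptions);
```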
### Improved
- User experience on the options page, with automatic token limit updates.
- Flexibility in handling different models and their respective token limits.
### Fixed
- Potential issues with incorrect token limits for different models.
I applied (for now) a pre-filled table with a 4096-token default limit. Users can also raise or lower the limit directly from the UI now. Added chunking and recursive summarization too.
Personally I use llama3.1:8b or mistral-nemo:latest, which have a decent context window (even if it is usually smaller than the commercial ones). I am also working on a token calculator / content-division method, but it is very early; a rough sketch of the idea is below.
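To give an idea of the direction (this is only a naive illustration, not the actual method; the 4-characters-per-token ratio is a common rule of thumb and the reserved-token budget is an arbitrary placeholder):

```js
// Very rough token estimate: ~4 characters per token is a common rule of
// thumb for English text; real tokenizers differ per model.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Decide how many chunks a page needs for a given context window,
// leaving some room for the prompt and the generated summary.
function chunkCount(text, contextTokens = 4096, reservedTokens = 1024) {
  const usable = contextTokens - reservedTokens;
  return Math.max(1, Math.ceil(estimateTokens(text) / usable));
}
```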
https://garden.tcsenpai.com/bookmarks/ai/ai-convos-notes/gem...