I've been playing around with
https://github.com/zphang/minimal-llama/ and
https://github.com/tloen/alpaca-lora/blob/main/finetune.py, and wanted to create a simple UI where you can just paste in your training text, tweak a few parameters, and quickly finetune the model on a modern consumer GPU.
To prepare the data, simply separate each training sample with two blank lines.
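For illustration, here's a minimal sketch of how pasted text could be split into training samples under that convention (two blank lines between samples means at least three consecutive newlines); the function name `split_samples` is hypothetical, not part of the repo:

```python
import re

def split_samples(raw_text: str) -> list[str]:
    # Hypothetical helper: samples separated by two blank lines,
    # i.e. three or more consecutive newlines (allowing stray whitespace
    # on the blank lines), with empty chunks discarded.
    parts = re.split(r"\n\s*\n\s*\n", raw_text)
    return [p.strip() for p in parts if p.strip()]

text = "First sample.\nStill first.\n\n\nSecond sample.\n\n\nThird sample.\n"
print(split_samples(text))
# → ['First sample.\nStill first.', 'Second sample.', 'Third sample.']
```

Single blank lines stay inside a sample, so multi-paragraph examples survive intact.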
There's also an inference tab, so you can test how the tuned model behaves.
This is my first foray into the world of LLM finetuning, Python, Torch, Transformers, LoRA, PEFT, and Gradio.
Enjoy!