Why use Fine-tuning?
All large language models work on a token system; a token is roughly four characters of English text. Every model has a token limit, so you can't always fit enough context or examples into a single prompt.
For more complex generations, you also often need to show the AI enough examples for it to understand what you are trying to achieve. These examples, collected into a dataset, can range from around 50 to hundreds of thousands, and a larger, well-curated dataset helps ensure quality output. For these kinds of tasks, a fine-tuned model will typically outperform prompting alone.
As most large language model providers charge for the tokens used in a prompt, fine-tuning can also save you a lot of money. Once your examples are baked into the model through fine-tuning, you no longer need to resend them on every request; for example, if a prompt carries 20 examples of roughly 100 tokens each, that is about 2,000 extra tokens on every single call that a fine-tuned model avoids.
Learn About JSONL Datasets
You can fine-tune models directly in Riku with no code. We even have a JSONL dataset builder to help you take your outputs and put them in the right format without the stress. Making fine-tuning accessible is one of our main goals.
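To give a rough idea of what the format looks like, a JSONL dataset is simply one JSON object per line, most commonly a prompt and completion pair. The sketch below is a minimal, hypothetical Python example of building such a file by hand; it is not the Riku dataset builder, and the separator and stop-sequence conventions shown are assumptions that vary between providers.

```python
import json

# Hypothetical example pairs -- in practice these would be your own
# inputs and the outputs you want the model to learn to produce.
examples = [
    {"prompt": "Write a tagline for a coffee shop", "completion": "Wake up to something better."},
    {"prompt": "Write a tagline for a gym", "completion": "Stronger every single day."},
]

# JSONL is just one JSON object per line.
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        record = {
            # A separator at the end of the prompt and a stop sequence after the
            # completion are common conventions; exact requirements differ by provider.
            "prompt": example["prompt"] + "\n\n###\n\n",
            "completion": " " + example["completion"] + " END",
        }
        f.write(json.dumps(record) + "\n")
```

Because each line is an independent training example, the same format scales cleanly from 50 examples to hundreds of thousands.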