We understand how important fine-tuning is. Fine-tuning is a way to teach a model your task by training it on your own data, rather than squeezing everything into the prompt. By providing a fine-tune with a lot of data, you can get output quality that prompt engineering with only 2-3 examples could never match. The problem is, most people new to the AI arena will not understand how to compile their datasets to the point of having a successful fine-tune.
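To make the dataset point concrete, here is a minimal sketch of what a fine-tuning dataset can look like, using the JSONL prompt/completion format that OpenAI's fine-tuning API accepts. The example rows and the `###` separator are illustrative assumptions, not a real dataset.

```python
import json

# Illustrative training examples (hypothetical data, not a real dataset).
# Each line of the JSONL file is one prompt/completion pair.
examples = [
    {"prompt": "Classify the sentiment: I love this product!\n\n###\n\n",
     "completion": " positive"},
    {"prompt": "Classify the sentiment: The delivery was late.\n\n###\n\n",
     "completion": " negative"},
]

# Write one JSON object per line -- the JSONL format fine-tuning APIs expect.
with open("finetune_dataset.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

A real fine-tune would use hundreds of such pairs or more; the format stays the same, only the volume grows.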
Whilst we're not solving the dataset compiling problem in this blog post, we are solving the problem of importing your fine-tuned models directly into Riku to use alongside all of the other large language models we support. If you have made a fine-tune with OpenAI, AI21 or Cohere, we have you covered: bringing these fine-tunes into Riku is now possible with just a few clicks.
You can read all about the ability to import your fine-tunes into Riku from our documentation. There are instructions available for OpenAI, AI21, and Cohere so you cannot really go wrong! Once imported into Riku, you can use these fine-tuned models as part of prompt chains or more complicated workflows. You can even call these fine-tuned models from the single endpoint for use within your own applications.
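As a rough sketch of what calling an imported fine-tune from your own application could look like, here is a hypothetical HTTP request to a single endpoint. The URL, field names, and model identifier are all placeholder assumptions for illustration; see the documentation for the actual endpoint details.

```python
import json
import urllib.request

# Placeholder endpoint and key -- NOT Riku's actual API; assumptions for illustration.
API_URL = "https://api.example.com/v1/run"
API_KEY = "YOUR_API_KEY"

# Hypothetical payload: the model name would be whatever identifier
# your imported fine-tune was given.
payload = {
    "model": "my-finetuned-model",
    "prompt": "Classify the sentiment: The onboarding was painless.",
    "max_tokens": 50,
}

# Build the POST request; sending it requires a real endpoint and key.
request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment with real credentials
```

The point is that once imported, a fine-tune is called the same way as any other model: one request, one response.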
Let us know what fine-tunes you have created and how they help solve problems for your business. If you are interested in all things AI and playing with the best selection of large language models in a single place, sign up for Riku today at https://riku.ai.