Import a model to h2oGPT
Once the model has been fine-tuned using H2O LLM Studio, you can use h2oGPT to query, summarize, and chat with it.
The most common way to get the model from H2O LLM Studio over to h2oGPT is to publish it to HuggingFace and import it from there. However, if your data is sensitive, you can also download the model locally to your machine and then import it directly into h2oGPT.
You can use any of the following methods:
- Publish the model to HuggingFace and import the model from HuggingFace
- Download the model and import it to h2oGPT by specifying the local folder path
- Download the model and upload it to h2oGPT using the file upload option on the UI
- Pull a model from a GitHub repository or a resolved web link
Steps
1. Publish the model to HuggingFace or download the model locally. If you opt to download the model, make sure you extract the downloaded .zip file.
2. Use the following command to import it into h2oGPT:
python generate.py --base_model=[link_or_path_to_folder]
Examples
- From HuggingFace: python generate.py --base_model=HuggingFaceH4/zephyr-7b-beta
- From a local file: python generate.py --base_model=zephyr-7b-beta.Q5_K_M.gguf
- From a repository: python generate.py --base_model=TheBloke/zephyr-7B-beta-AWQ
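Importing a locally downloaded model (the second method listed above) works the same way: pass the folder you extracted from the downloaded .zip file to --base_model. The path in this sketch is only a hypothetical placeholder; replace it with the actual location of your extracted model.
python generate.py --base_model=/path/to/extracted_model_folder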
For more information, see the h2oGPT documentation.