H2O LLM Studio performance

This page lists the speed and performance metrics of H2O LLM Studio measured on different hardware setups.

The following metrics were measured.

  • Hardware setup: The type and number of computing devices used to train the model.
  • LLM backbone: The underlying architecture of the language model. For more information, see LLM backbone.
  • Quantization: A technique used to reduce the size and memory requirements of the model. For more information, see Quantization. (A minimal configuration sketch follows this list.)
  • Train: The amount of time it took to train the model (hh:mm:ss).
  • Validation: The amount of time it took to validate the model (hh:mm:ss).
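
The nf4 entries in the tables below refer to 4-bit NormalFloat quantization, while bfloat16 rows load the backbone without quantization. As a point of reference only, the following is a minimal sketch of how a backbone such as h2oai/h2ogpt-4096-llama2-7b is typically loaded with nf4 quantization using Hugging Face transformers and bitsandbytes; it is illustrative and not taken from H2O LLM Studio's source code.

```python
# Minimal sketch (not H2O LLM Studio internals): load a backbone with
# nf4 (4-bit NormalFloat) quantization via transformers + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "h2oai/h2ogpt-4096-llama2-7b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```
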
| Hardware setup | LLM backbone | Quantization | Train (hh:mm:ss) | Validation (hh:mm:ss) |
|---|---|---|---|---|
| 8xA10G | h2oai/h2ogpt-4096-llama2-7b | bfloat16 | 11:35 | 3:32 |
| 4xA10G | h2oai/h2ogpt-4096-llama2-7b | bfloat16 | 21:13 | 06:35 |
| 2xA10G | h2oai/h2ogpt-4096-llama2-7b | bfloat16 | 37:04 | 12:21 |
| 1xA10G | h2oai/h2ogpt-4096-llama2-7b | bfloat16 | 1:25:29 | 15:50 |
| 8xA10G | h2oai/h2ogpt-4096-llama2-7b | nf4 | 14:26 | 06:13 |
| 4xA10G | h2oai/h2ogpt-4096-llama2-7b | nf4 | 26:55 | 11:59 |
| 2xA10G | h2oai/h2ogpt-4096-llama2-7b | nf4 | 48:24 | 23:37 |
| 1xA10G | h2oai/h2ogpt-4096-llama2-7b | nf4 | 1:26:59 | 42:17 |
| 8xA10G | h2oai/h2ogpt-4096-llama2-13b | bfloat16 | OOM | OOM |
| 4xA10G | h2oai/h2ogpt-4096-llama2-13b | bfloat16 | OOM | OOM |
| 2xA10G | h2oai/h2ogpt-4096-llama2-13b | bfloat16 | OOM | OOM |
| 1xA10G | h2oai/h2ogpt-4096-llama2-13b | bfloat16 | OOM | OOM |
| 8xA10G | h2oai/h2ogpt-4096-llama2-13b | nf4 | 25:07 | 10:58 |
| 4xA10G | h2oai/h2ogpt-4096-llama2-13b | nf4 | 48:43 | 21:25 |
| 2xA10G | h2oai/h2ogpt-4096-llama2-13b | nf4 | 1:30:45 | 42:06 |
| 1xA10G | h2oai/h2ogpt-4096-llama2-13b | nf4 | 2:44:36 | 1:14:20 |
| 8xA10G | h2oai/h2ogpt-4096-llama2-70b | nf4 | OOM | OOM |
| 4xA10G | h2oai/h2ogpt-4096-llama2-70b | nf4 | OOM | OOM |
| 2xA10G | h2oai/h2ogpt-4096-llama2-70b | nf4 | OOM | OOM |
| 1xA10G | h2oai/h2ogpt-4096-llama2-70b | nf4 | OOM | OOM |

| Hardware setup | LLM backbone | Quantization | Train (hh:mm:ss) | Validation (hh:mm:ss) |
|---|---|---|---|---|
| 4xA100 80GB | h2oai/h2ogpt-4096-llama2-7b | bfloat16 | 7:04 | 3:55 |
| 2xA100 80GB | h2oai/h2ogpt-4096-llama2-7b | bfloat16 | 13:14 | 7:23 |
| 1xA100 80GB | h2oai/h2ogpt-4096-llama2-7b | bfloat16 | 23:36 | 13:25 |
| 4xA100 80GB | h2oai/h2ogpt-4096-llama2-7b | nf4 | 9:44 | 6:30 |
| 2xA100 80GB | h2oai/h2ogpt-4096-llama2-7b | nf4 | 18:34 | 12:16 |
| 1xA100 80GB | h2oai/h2ogpt-4096-llama2-7b | nf4 | 34:06 | 21:51 |
| 4xA100 80GB | h2oai/h2ogpt-4096-llama2-13b | bfloat16 | 11:46 | 5:56 |
| 2xA100 80GB | h2oai/h2ogpt-4096-llama2-13b | bfloat16 | 21:54 | 11:17 |
| 1xA100 80GB | h2oai/h2ogpt-4096-llama2-13b | bfloat16 | 39:10 | 18:55 |
| 4xA100 80GB | h2oai/h2ogpt-4096-llama2-13b | nf4 | 16:51 | 10:35 |
| 2xA100 80GB | h2oai/h2ogpt-4096-llama2-13b | nf4 | 32:05 | 21:00 |
| 1xA100 80GB | h2oai/h2ogpt-4096-llama2-13b | nf4 | 59:11 | 36:53 |
| 4xA100 80GB | h2oai/h2ogpt-4096-llama2-70b | nf4 | 1:13:33 | 46:02 |
| 2xA100 80GB | h2oai/h2ogpt-4096-llama2-70b | nf4 | 2:20:44 | 1:33:42 |
| 1xA100 80GB | h2oai/h2ogpt-4096-llama2-70b | nf4 | 4:23:57 | 2:44:51 |
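
To compare rows, it can help to convert the hh:mm:ss values into seconds and compute the multi-GPU scaling efficiency. The sketch below does this for the A10G, llama2-7b, bfloat16 training times from the table above; the helper function and variable names are illustrative.

```python
# Convert the table's h:mm:ss / mm:ss strings to seconds and report how
# training scales from 1 to N GPUs for one configuration from the tables.
def to_seconds(t: str) -> int:
    parts = [int(p) for p in t.split(":")]
    while len(parts) < 3:      # pad "mm:ss" to "hh:mm:ss"
        parts.insert(0, 0)
    h, m, s = parts
    return h * 3600 + m * 60 + s

# Train times for h2oai/h2ogpt-4096-llama2-7b, bfloat16, on A10G (from the table above).
train_times = {1: "1:25:29", 2: "37:04", 4: "21:13", 8: "11:35"}

baseline = to_seconds(train_times[1])
for gpus, t in sorted(train_times.items()):
    seconds = to_seconds(t)
    speedup = baseline / seconds
    efficiency = speedup / gpus
    print(f"{gpus}xA10G: {seconds:5d}s  speedup {speedup:.2f}x  efficiency {efficiency:.0%}")
```
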
Note: The runtimes were gathered using the default parameters, shown below.
```yaml
architecture:
    backbone_dtype: int4
    force_embedding_gradients: false
    gradient_checkpointing: true
    intermediate_dropout: 0.0
    pretrained: true
    pretrained_weights: ''
augmentation:
    random_parent_probability: 0.0
    skip_parent_probability: 0.0
    token_mask_probability: 0.0
dataset:
    add_eos_token_to_answer: true
    add_eos_token_to_prompt: true
    add_eos_token_to_system: true
    answer_column: output
    chatbot_author: H2O.ai
    chatbot_name: h2oGPT
    data_sample: 1.0
    data_sample_choice:
    - Train
    - Validation
    limit_chained_samples: false
    mask_prompt_labels: true
    parent_id_column: None
    personalize: false
    prompt_column:
    - instruction
    system_column: None
    text_answer_separator: <|answer|>
    text_prompt_start: <|prompt|>
    text_system_start: <|system|>
    train_dataframe: /data/user/oasst/train_full.pq
    validation_dataframe: None
    validation_size: 0.01
    validation_strategy: automatic
environment:
    compile_model: false
    find_unused_parameters: false
    gpus:
    - '0'
    - '1'
    - '2'
    - '3'
    - '4'
    - '5'
    - '6'
    - '7'
    huggingface_branch: main
    mixed_precision: true
    number_of_workers: 8
    seed: -1
    trust_remote_code: true
    use_fsdp: false
experiment_name: default-8-a10g
llm_backbone: h2oai/h2ogpt-4096-llama2-7b
logging:
    logger: None
    neptune_project: ''
output_directory: /output/...
prediction:
    batch_size_inference: 0
    do_sample: false
    max_length_inference: 256
    metric: BLEU
    metric_gpt_model: gpt-3.5-turbo-0301
    metric_gpt_template: general
    min_length_inference: 2
    num_beams: 1
    num_history: 4
    repetition_penalty: 1.2
    stop_tokens: ''
    temperature: 0.3
    top_k: 0
    top_p: 1.0
problem_type: text_causal_language_modeling
tokenizer:
    add_prompt_answer_tokens: false
    max_length: 512
    max_length_answer: 256
    max_length_prompt: 256
    padding_quantile: 1.0
    use_fast: true
training:
    batch_size: 2
    differential_learning_rate: 1.0e-05
    differential_learning_rate_layers: []
    drop_last_batch: true
    epochs: 1
    evaluate_before_training: false
    evaluation_epochs: 1.0
    grad_accumulation: 1
    gradient_clip: 0.0
    learning_rate: 0.0001
    lora: true
    lora_alpha: 16
    lora_dropout: 0.05
    lora_r: 4
    lora_target_modules: ''
    loss_function: TokenAveragedCrossEntropy
    optimizer: AdamW
    save_checkpoint: "last"
    schedule: Cosine
    train_validation_data: false
    warmup_epochs: 0.0
    weight_decay: 0.0
```
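
As an illustration of how these defaults combine, the sketch below reads such a YAML file and derives the effective (global) batch size as batch_size × grad_accumulation × number of GPUs. The file name is a placeholder, and the snippet is not part of the H2O LLM Studio API.

```python
# Illustrative only: load an exported config (the path is a placeholder)
# and derive the effective global batch size from the defaults shown above.
import yaml  # PyYAML

with open("default-8-a10g.yaml") as f:  # hypothetical file containing the YAML above
    cfg = yaml.safe_load(f)

n_gpus = len(cfg["environment"]["gpus"])                  # 8 in the default setup
batch_size = cfg["training"]["batch_size"]                # 2 per device
grad_accumulation = cfg["training"]["grad_accumulation"]  # 1

effective_batch_size = batch_size * grad_accumulation * n_gpus
print(f"Effective batch size: {effective_batch_size}")    # 2 * 1 * 8 = 16
```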
