h2ogpte package

Submodules

h2ogpte.h2ogpte module

class h2ogpte.h2ogpte.H2OGPTE(address: str, api_key: str | None = None, token_provider: TokenProvider | None = None, verify: bool | str = True, strict_version_check: bool = False)

Bases: object

Connect to and interact with an h2oGPTe server.

INITIAL_WAIT_INTERVAL = 0.1
MAX_WAIT_INTERVAL = 1.0
TIMEOUT = 3600.0
WAIT_BACKOFF_FACTOR = 1.4
answer_question(question: str, system_prompt: str | None = '', pre_prompt_query: str | None = None, prompt_query: str | None = None, text_context_list: List[str] | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, chat_conversation: List[Tuple[str, str]] | None = None, pii_settings: Dict | None = None, timeout: float | None = None, **kwargs: Any) Answer

Send a message and get a response from an LLM.

Note: This method is only recommended if you are passing a chat conversation or for low-volume testing. For general chat with an LLM, we recommend session.query() for higher throughput in multi-user environments. The following code sample shows the recommended method:

# Establish a chat session
chat_session_id = client.create_chat_session()
# Connect to the chat session
with client.connect(chat_session_id) as session:
    # Send a basic query and print the reply
    reply = session.query("Hello", timeout=60)
    print(reply.content)

Format of inputs content:

"{pre_prompt_query}"""
{text_context_list}
"""\n{prompt_query}{chat_conversation}{question}"
Args:
question:

Text query to send to the LLM.

text_context_list:

List of raw text strings to be included as context; will be converted to a single string as "\n\n".join(text_context_list).
system_prompt:

Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto for the model default, or None for h2oGPTe default. Defaults to ‘’ for no system prompt.

pre_prompt_query:

Text that is prepended before the contextual document chunks in text_context_list. Only used if text_context_list is provided.

prompt_query:

Text that is appended after the contextual document chunks in text_context_list. Only used if text_context_list is provided.

llm:

Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0) — The seed for the random number generator, only used if temperature > 0; seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
response_format (str, default: "text") — Output type, one of ["text", "json_object", "json_code"].
guided_json (str, default: "") — If specified, the output will follow the JSON schema.
guided_regex (str, default: "") — If specified, the output will follow the regex pattern.
guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
guided_grammar (str, default: "") — If specified, the output will follow the context-free grammar.
guided_whitespace_pattern (str, default: "") — If specified, will override the default whitespace pattern for guided JSON decoding.

chat_conversation:

List of tuples of (human, bot) conversation turns that will be prepended to the (question, None) pair for the query.

pii_settings:

PII Settings.

timeout:

Timeout in seconds.

kwargs:
Dictionary of kwargs to pass to h2oGPT. Not recommended, see https://github.com/h2oai/h2ogpt for source code. Valid keys:

h2ogpt_key: str = ""
chat_conversation: list[tuple[str, str]] | None = None
docs_ordering_type: str | None = "best_near_prompt"
max_input_tokens: int = -1
docs_token_handling: str = "split_or_merge"
docs_joiner: str = "\n\n"
image_file: Union[str, list] = None

Returns:

Answer: The response text and any errors.

Raises:

TimeoutError: If response isn’t completed in timeout seconds.
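For a one-off call without a chat session, a minimal sketch (assumes `client` is an already-authenticated H2OGPTE instance; the helper names and llm_args values are illustrative, not part of the API):

```python
def build_context(chunks):
    # text_context_list is flattened with "\n\n".join(...) before being
    # sent to the LLM; this mirrors that behavior for inspection.
    return "\n\n".join(chunks)

def ask_with_context(client, question, chunks):
    # One-shot Q&A over raw text chunks, with deterministic sampling.
    return client.answer_question(
        question=question,
        text_context_list=chunks,
        llm_args={"temperature": 0, "max_new_tokens": 256},
        timeout=120,
    )
```

Typical usage would then be answer = ask_with_context(client, "Who wrote the report?", chunks) followed by print(answer.content).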

cancel_job(job_id: str) Result

Stops a specific job from running on the server.

Args:
job_id:

String id of the job to cancel.

Returns:

Result: Status of canceling the job.

connect(chat_session_id: str) Session

Create and participate in a chat session.

This is a live connection to the h2oGPTe server, scoped to a specific chat session on top of a single collection of documents. Users will find all questions and responses from this session in a single chat history in the UI.

Args:
chat_session_id:

ID of the chat session to connect to.

Returns:

Session: Live chat session connection with an LLM.

count_assets() ObjectCount

Counts the number of objects owned by the user.

Returns:

ObjectCount: The count of chat sessions, collections, and documents.

count_chat_sessions() int

Counts the number of chat sessions owned by the user.

Returns:

int: The count of chat sessions owned by the user.

count_chat_sessions_for_collection(collection_id: str) int

Counts the number of chat sessions in a specific collection.

Args:
collection_id:

String id of the collection to count chat sessions for.

Returns:

int: The count of chat sessions in that collection.

count_collections() int

Counts the number of collections owned by the user.

Returns:

int: The count of collections owned by the user.

count_documents() int

Counts the number of documents accessed by the user.

Returns:

int: The count of documents accessed by the user.

count_documents_in_collection(collection_id: str) int

Counts the number of documents in a specific collection.

Args:
collection_id:

String id of the collection to count documents for.

Returns:

int: The number of documents in that collection.

count_documents_owned_by_me() int

Counts the number of documents owned by the user.

Returns:

int: The count of documents owned by the user.

count_prompt_templates() int

Counts the number of prompt templates.

Returns:

int: The count of prompt templates.

count_question_reply_feedback() int

Counts the user's questions and replies that have feedback.

Returns:

int: The count of questions and replies that have user feedback.

create_chat_session(collection_id: str | None = None) str

Creates a new chat session for asking questions (of documents).

Args:
collection_id:

String id of the collection to chat with. If None, chat with LLM directly.

Returns:

str: The ID of the newly created chat session.
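The create/connect/query flow above can be wrapped in a small helper; a sketch, assuming `client` is an authenticated H2OGPTE instance (the helper name is illustrative):

```python
def one_shot_chat(client, question, collection_id=None, timeout=60):
    # Create a session (optionally tied to a collection), connect to it,
    # and return the reply text for a single question.
    chat_session_id = client.create_chat_session(collection_id)
    with client.connect(chat_session_id) as session:
        return session.query(question, timeout=timeout).content
```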

create_chat_session_on_default_collection() str

Creates a new chat session for asking questions of documents on the default collection.

Returns:

str: The ID of the newly created chat session.

create_collection(name: str, description: str, embedding_model: str | None = None, prompt_template_id: str | None = None, collection_settings: dict | None = None) str

Creates a new collection.

Args:
name:

Name of the collection.

description:

Description of the collection

embedding_model:

Embedding model to use. Call list_embedding_models() for a list of options.

prompt_template_id:

ID of the prompt template to get the prompts from. None to fall back to system defaults.

collection_settings:

(Optional) Dictionary with key/value pairs to configure certain collection specific settings like pii_settings or max_tokens_per_chunk.

Returns:

str: The ID of the newly created collection.
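A sketch of creating a collection with collection_settings (assumes `client` is an authenticated H2OGPTE instance; treat the exact settings keys, such as max_tokens_per_chunk, as server-version dependent):

```python
def create_chunked_collection(client, name, max_tokens_per_chunk=320):
    # Create a collection whose documents will be chunked with a custom
    # token budget per chunk, per the collection_settings description above.
    return client.create_collection(
        name=name,
        description=f"Collection '{name}' with {max_tokens_per_chunk}-token chunks",
        collection_settings={"max_tokens_per_chunk": max_tokens_per_chunk},
    )
```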

create_prompt_template(name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None) str

Create a new prompt template

Args:
name:

Name of the prompt template

description:

Description of the prompt template

lang:

Language code

system_prompt:

System Prompt

pre_prompt_query:

Text that is prepended before the contextual document chunks.

prompt_query:

Text that is prepended to the user's message.

hyde_no_rag_llm_prompt_extension:

LLM prompt extension.

pre_prompt_summary:

Prompt that goes before each large piece of text to summarize

prompt_summary:

Prompt that goes after each large piece of text to summarize.

system_prompt_reflection:

System Prompt for self-reflection

pre_prompt_reflection:

Deprecated; ignored.

prompt_reflection:

Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer

auto_gen_description_prompt:

prompt to create a description of the collection.

auto_gen_document_summary_pre_prompt_summary:

pre_prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_summary_prompt_summary:

prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_sample_questions_prompt:

prompt to create sample questions for a freshly imported document (if enabled).

default_sample_questions:

default sample questions in case there are no auto-generated sample questions.

image_batch_image_prompt:

Prompt for each image batch for vision models.

image_batch_final_prompt:

Prompt to reduce all answers from each image batch for vision models.

Returns:

str: The ID of the newly created prompt template.

create_tag(tag_name: str) str

Creates a new tag.

Args:
tag_name:

String representing the tag to create.

Returns:

str: The ID of the created tag.

delete_chat_messages(chat_message_ids: Iterable[str]) Result

Deletes specific chat messages.

Args:
chat_message_ids:

List of string ids of chat messages to delete from the system.

Returns:

Result: Status of the delete job.

delete_chat_sessions(chat_session_ids: Iterable[str]) Result

Deletes chat sessions and related messages.

Args:
chat_session_ids:

List of string ids of chat sessions to delete from the system.

Returns:

Result: Status of the delete job.

delete_collections(collection_ids: Iterable[str], timeout: float | None = None)

Deletes collections from the environment.

Documents in the collection will not be deleted.

Args:
collection_ids:

List of string ids of collections to delete from the system.

timeout:

Timeout in seconds.

delete_document_summaries(summaries_ids: Iterable[str]) Result

Deletes document summaries.

Args:
summaries_ids:

List of string ids of document summaries to delete from the system.

Returns:

Result: Status of the delete job.

delete_documents(document_ids: Iterable[str], timeout: float | None = None)

Deletes documents from the system.

Args:
document_ids:

List of string ids to delete from the system and all collections.

timeout:

Timeout in seconds.

delete_documents_from_collection(collection_id: str, document_ids: Iterable[str], timeout: float | None = None)

Removes documents from a collection.

See Also: H2OGPTE.delete_documents for completely removing the document from the environment.

Args:
collection_id:

String id of the collection to remove documents from.

document_ids:

List of string ids to remove from the collection.

timeout:

Timeout in seconds.

delete_prompt_templates(ids: Iterable[str]) Result

Deletes prompt templates

Args:
ids:

List of string ids of prompt templates to delete from the system.

Returns:

Result: Status of the delete job.

delete_upload(upload_id: str) str

Delete a file previously uploaded with the “upload” method.

See Also:

upload: Upload the files into the system to then be ingested into a collection.
ingest_uploads: Add the uploaded files to a collection.

Args:
upload_id:

ID of a file to remove

Returns:

str: The ID of the removed upload.

Raises:

Exception: The delete upload request was unsuccessful.

download_document(destination_directory: str, destination_file_name: str, document_id: str) Path

Downloads a document to a local system directory.

Args:
destination_directory:

Destination directory to save file into.

destination_file_name:

Destination file name.

document_id:

Document ID.

Returns:

Path: Path of downloaded document
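A batch-download sketch (assumes `client` is an authenticated H2OGPTE instance; naming each file after its document ID is an arbitrary illustrative choice, not an API requirement):

```python
def download_documents(client, document_ids, dest_dir):
    # Download each document into dest_dir, naming the local file after
    # the document ID; returns the list of local paths.
    return [
        client.download_document(dest_dir, f"{doc_id}.pdf", doc_id)
        for doc_id in document_ids
    ]
```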

download_reference_highlighting(message_id: str, destination_directory: str, output_type: str = 'combined') list

Get PDFs with reference highlighting

Args:
message_id:

ID of the message to get references from

destination_directory:

Destination directory to save files into.

output_type: str, one of:

"combined": Generates a PDF file for each source document, with all relevant chunks highlighted in each respective file. This option consolidates all highlights for each source document into a single PDF, making it easy to view all highlights related to that document at once.

"split": Generates a separate PDF file for each chunk, with only the relevant chunk highlighted in each file. This option is useful for focusing on individual sections without interference from other parts of the text. The output file names will be in the format "{document_id}_{chunk_id}.pdf".

Returns:

list[Path]: List of paths of downloaded documents with highlighting

encode_for_retrieval(chunks: List[str], embedding_model: str | None = None) List[List[float]]

Encode texts for semantic searching.

See Also: H2OGPTE.match for getting a list of chunks that semantically match each encoded text.

Args:
chunks:

List of strings of texts to be encoded.

embedding_model:

Embedding model to use. Call list_embedding_models() for a list of options.

Returns:

List of lists of floats: each inner list is the embedding of the corresponding input text.
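The returned vectors can be compared with a standard cosine similarity; a sketch (the ranking helper assumes `client` is an authenticated H2OGPTE instance, and its name is illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors, as returned by
    # encode_for_retrieval.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(client, query, corpus):
    # Encode the query together with the corpus in one call, then rank
    # corpus texts by cosine similarity to the query embedding.
    vectors = client.encode_for_retrieval([query] + corpus)
    q, rest = vectors[0], vectors[1:]
    scored = zip(corpus, (cosine(q, v) for v in rest))
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

For matching against an indexed collection rather than ad-hoc texts, see H2OGPTE.match as noted above.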

extract_data(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_extract: str | None = None, prompt_extract: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, pii_settings: Dict | None = None, timeout: float | None = None, **kwargs: Any) ExtractionAnswer

Extract information from one or more contexts using an LLM.

pre_prompt_extract and prompt_extract variables must be used together. If these variables are not set, the input texts will be summarized into bullet points.

Format of extract content:

"{pre_prompt_extract}"""
{text_context_list}
"""\n{prompt_extract}"

Examples:

extract = h2ogpte.extract_data(
    text_context_list=chunks,
    pre_prompt_extract="Pay attention and look at all people. Your job is to collect their names.\n",
    prompt_extract="List all people's names as JSON.",
)
Args:
text_context_list:

List of raw text strings to extract data from.

system_prompt:

Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto or None for the model default. Defaults to ‘’ for no system prompt.

pre_prompt_extract:

Text that is prepended before the list of texts. If not set, the inputs will be summarized.

prompt_extract:

Text that is appended after the list of texts. If not set, the inputs will be summarized.

llm:

Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0) — The seed for the random number generator, only used if temperature > 0; seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
response_format (str, default: "text") — Output type, one of ["text", "json_object", "json_code"].
guided_json (str, default: "") — If specified, the output will follow the JSON schema.
guided_regex (str, default: "") — If specified, the output will follow the regex pattern.
guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
guided_grammar (str, default: "") — If specified, the output will follow the context-free grammar.
guided_whitespace_pattern (str, default: "") — If specified, will override the default whitespace pattern for guided JSON decoding.

pii_settings:

PII Settings.

timeout:

Timeout in seconds.

kwargs:
Dictionary of kwargs to pass to h2oGPT. Not recommended, see https://github.com/h2oai/h2ogpt for source code. Valid keys:

h2ogpt_key: str = ""
chat_conversation: list[tuple[str, str]] | None = None
docs_ordering_type: str | None = "best_near_prompt"
max_input_tokens: int = -1
docs_token_handling: str = "split_or_merge"
docs_joiner: str = "\n\n"
image_file: Union[str, list] = None

Returns:

ExtractionAnswer: The list of text responses and any errors.

Raises:

TimeoutError: If response isn’t completed in timeout seconds.

get_chat_session_prompt_template(chat_session_id: str) PromptTemplate | None

Get the prompt template for a chat_session

Args:
chat_session_id:

ID of the chat session

Returns:

str: ID of the prompt template.

get_chat_session_questions(chat_session_id: str, limit: int) List[SuggestedQuestion]

List suggested questions

Args:
chat_session_id:

ID of the chat session to return suggested questions for.

limit:

How many questions to return.

Returns:

List: A list of questions.

get_chunks(collection_id: str, chunk_ids: Iterable[int]) List[Chunk]

Get the text of specific chunks in a collection.

Args:
collection_id:

String id of the collection to search in.

chunk_ids:

List of ints for the chunks to return. Chunks are indexed starting at 1.

Returns:

list of Chunk: The text of each requested chunk.

Raises:

Exception: One or more chunks could not be found.

get_collection(collection_id: str) Collection

Get metadata about a collection.

Args:
collection_id:

String id of the collection to search for.

Returns:

Collection: Metadata about the collection.

Raises:

KeyError: The collection was not found.

get_collection_for_chat_session(chat_session_id: str) Collection

Get metadata about the collection of a chat session.

Args:
chat_session_id:

String id of the chat session to search for.

Returns:

Collection: Metadata about the collection.

get_collection_prompt_template(collection_id: str) PromptTemplate | None

Get the prompt template for a collection

Args:
collection_id:

ID of the collection

Returns:

str: ID of the prompt template.

get_collection_questions(collection_id: str, limit: int) List[SuggestedQuestion]

List suggested questions

Args:
collection_id:

ID of the collection to return suggested questions for.

limit:

How many questions to return.

Returns:

List: A list of questions.

get_default_collection() CollectionInfo

Get the default collection, to be used with collection-scoped API keys.

Returns:

CollectionInfo: Default collection info.

get_document(document_id: str) Document

Fetches information about a specific document.

Args:
document_id:

String id of the document.

Returns:

Document: Metadata about the Document.

Raises:

KeyError: The document was not found.

get_job(job_id: str) Job

Fetches information about a specific job.

Args:
job_id:

String id of the job.

Returns:

Job: Metadata about the Job.

get_llm_names() List[str]

Lists names of available LLMs in the environment.

Returns:

list of string: Name of each available model.
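Since available models vary by environment, a common pattern is to pick the first preferred model the server actually offers; a sketch (assumes `client` is an authenticated H2OGPTE instance, and the helper name is illustrative):

```python
def pick_llm(client, preferred):
    # Return the first model from `preferred` that the server offers,
    # falling back to the server's first (default) model.
    names = client.get_llm_names()
    for name in preferred:
        if name in names:
            return name
    return names[0]
```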

get_llm_performance_by_llm(interval: str) List[LLMPerformance]
get_llm_usage_24h() float
get_llm_usage_24h_by_llm() List[LLMUsage]
get_llm_usage_24h_with_limits() LLMUsageLimit
get_llm_usage_6h() float
get_llm_usage_6h_by_llm() List[LLMUsage]
get_llm_usage_by_llm(interval: str) List[LLMUsage]
get_llm_usage_with_limits(interval: str) LLMUsageLimit
get_llms() List[dict]

Lists metadata information about available LLMs in the environment.

Returns:

list of dict (string, ANY): Name and details about each available model.

get_meta() Meta

Returns information about the environment and the user.

Returns:

Meta: Details about the version and license of the environment and the user’s name and email.

get_prompt_template(id: str | None = None) PromptTemplate

Get a prompt template

Args:
id:

String id of the prompt template to retrieve or None for default

Returns:

PromptTemplate: The prompt template.

Raises:

KeyError: The prompt template was not found.

get_scheduler_stats() SchedulerStats

Count the number of global, pending jobs on the server.

Returns:

SchedulerStats: The queue length for number of jobs.

get_tag(tag_name: str) Tag

Returns an existing tag.

Args:
tag_name:

String name of the tag to retrieve.

Returns:

Tag: The requested tag.

Raises:

KeyError: The tag was not found.

get_vision_capable_llm_names() List[str]

Lists names of available vision-capable multi-modal LLMs (that can natively handle images as input) in the environment.

Returns:

list of string: Name of each available model.

import_collection_into_collection(collection_id: str, src_collection_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, copy_document: bool = False, ocr_model: str = 'auto', timeout: float | None = None)

Import all documents from a collection into an existing collection

Args:
collection_id:

Collection ID to add documents to.

src_collection_id:

Collection ID to import documents from.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

copy_document:

Whether to save a new copy of the document

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds.

import_document_into_collection(collection_id: str, document_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, copy_document: bool = False, ocr_model: str = 'auto', timeout: float | None = None)

Import an already stored document to an existing collection

Args:
collection_id:

Collection ID to add documents to.

document_id:

Document ID to add.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

copy_document:

Whether to save a new copy of the document

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds.

ingest_from_azure_blob_storage(collection_id: str, container: str, path: str | List[str], account_name: str, credentials: AzureKeyCredential | AzureSASCredential | None = None, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None, audio_input_language: str = 'auto', ocr_model: str = 'auto')

Add files from the Azure Blob Storage into a collection.

Args:
collection_id:

String id of the collection to add the ingested documents into.

container:

Name of the Azure Blob Storage container.

path:

Path or list of paths to files or directories within an Azure Blob Storage container. Examples: file1, dir1/file2, dir3/dir4/

account_name:

Name of a storage account

credentials:

The object with Azure credentials. If the object is not provided, only a public container will be accessible.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

timeout:

Timeout in seconds

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

ingest_from_file_system(collection_id: str, root_dir: str, glob: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, audio_input_language: str = 'auto', ocr_model: str = 'auto', timeout: float | None = None)

Add files from the local system into a collection.

Args:
collection_id:

String id of the collection to add the ingested documents into.

root_dir:

String path of where to look for files.

glob:

String of the glob pattern used to match files in the root directory.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds

ingest_from_gcs(collection_id: str, url: str | List[str], credentials: GCSServiceAccountCredential | None = None, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None, audio_input_language: str = 'auto', ocr_model: str = 'auto')

Add files from the Google Cloud Storage into a collection.

Args:
collection_id:

String id of the collection to add the ingested documents into.

url:

The path or list of paths of GCS files or directories. Examples: gs://bucket/file, gs://bucket/../dir/

credentials:

The object holding a path to a JSON key of Google Cloud service account. If the object is not provided, only public buckets will be accessible.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

timeout:

Timeout in seconds

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

ingest_from_s3(collection_id: str, url: str | List[str], region: str = 'us-east-1', credentials: S3Credential | None = None, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None, audio_input_language: str = 'auto', ocr_model: str = 'auto')

Add files from the AWS S3 storage into a collection.

Args:
collection_id:

String id of the collection to add the ingested documents into.

url:

The path or list of paths of S3 files or directories. Examples: s3://bucket/file, s3://bucket/../dir/

region:

The name of the region used for interaction with AWS services.

credentials:

The object with S3 credentials. If the object is not provided, only public buckets will be accessible.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

timeout:

Timeout in seconds

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

ingest_uploads(collection_id: str, upload_ids: Iterable[str], gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None, audio_input_language: str = 'auto', ocr_model: str = 'auto')

Add uploaded documents into a specific collection.

See Also:

upload: Upload the files into the system to then be ingested into a collection.
delete_upload: Delete an uploaded file.

Args:
collection_id:

String id of the collection to add the ingested documents into.

upload_ids:

List of string ids of each uploaded document to add to the collection.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds
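A sketch of the full upload-then-ingest flow (assumes `client` is an authenticated H2OGPTE instance; the upload(file_name, file_object) call shape is assumed from the upload method referenced above):

```python
import os

def add_local_files(client, collection_id, paths, timeout=600):
    # Upload each local file, then ingest the whole batch into the
    # collection in one ingest_uploads call.
    upload_ids = []
    for path in paths:
        with open(path, "rb") as fh:
            upload_ids.append(client.upload(os.path.basename(path), fh))
    client.ingest_uploads(collection_id, upload_ids, timeout=timeout)
    return upload_ids
```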

ingest_website(collection_id: str, url: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, follow_links: bool = False, audio_input_language: str = 'auto', ocr_model: str = 'auto', timeout: float | None = None)

Crawl and ingest a URL into a collection.

The web page or document linked from this URL will be imported.

Args:
collection_id:

String id of the collection to add the ingested documents into.

url:

String of the url to crawl.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

follow_links:

Whether to also import all web pages linked from this URL. External links will be ignored. Links to other pages on the same domain will be followed as long as they are at the same level or below the URL you specify. Each page will be transformed into a PDF document.

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds

list_all_tags() List[Tag]

Lists all existing tags.

Returns:

List of Tags: List of existing tags.

list_chat_message_meta_part(message_id: str, info_type: str) ChatMessageMeta

Fetch one chat message meta information.

Args:
message_id:

ID of the message to fetch metadata for.

info_type:

Metadata type to fetch. Valid choices are: “self_reflection”, “usage_stats”, “prompt_raw”, “llm_only”, “hyde1”

Returns:

ChatMessageMeta: Metadata information about the chat message.

list_chat_message_references(message_id: str) List[ChatMessageReference]

Fetch metadata for references of a chat message.

References are only available for messages sent from an LLM; an empty list will be returned for messages sent by the user.

Args:
message_id:

String id of the message to get references for.

Returns:

list of ChatMessageReference: Metadata including the document name, polygon information, and score.

list_chat_messages(chat_session_id: str, offset: int, limit: int) List[ChatMessage]

Fetch chat message and metadata for messages in a chat session.

Messages without a reply_to are from the end user; messages with a reply_to are from an LLM and are responses to a specific user message.

Args:
chat_session_id:

String id of the chat session to filter by.

offset:

How many chat messages to skip before returning.

limit:

How many chat messages to return.

Returns:

list of ChatMessage: Text and metadata for chat messages.
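To fetch all messages rather than a single page, the offset/limit pair can drive a simple pagination loop; a sketch (assumes `client` is an authenticated H2OGPTE instance, and the helper name is illustrative):

```python
def iter_chat_messages(client, chat_session_id, page_size=100):
    # Page through every message in a session using offset/limit,
    # stopping when a page comes back empty.
    offset = 0
    while True:
        batch = client.list_chat_messages(chat_session_id, offset, page_size)
        if not batch:
            return
        yield from batch
        offset += len(batch)
```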

list_chat_messages_full(chat_session_id: str, offset: int, limit: int) List[ChatMessageFull]

Fetch chat message and metadata for messages in a chat session.

Messages without a reply_to are from the end user; messages with a reply_to are from an LLM and are responses to a specific user message.

Args:
chat_session_id:

String id of the chat session to filter by.

offset:

How many chat messages to skip before returning.

limit:

How many chat messages to return.

Returns:

list of ChatMessageFull: Text and metadata for chat messages.

list_chat_sessions_for_collection(collection_id: str, offset: int, limit: int) List[ChatSessionForCollection]

Fetch chat session metadata for chat sessions in a collection.

Args:
collection_id:

String id of the collection to filter by.

offset:

How many chat sessions to skip before returning.

limit:

How many chat sessions to return.

Returns:

list of ChatSessionForCollection: Metadata about each chat session including the latest message.

list_collection_permissions(collection_id: str) List[Permission]

Returns a list of access permissions for a given collection.

The returned list of permissions denotes who has access to the collection and their access level.

Args:
collection_id:

ID of the collection to inspect.

Returns:

list of Permission: Sharing permissions list for the given collection.

list_collections_for_document(document_id: str, offset: int, limit: int) List[CollectionInfo]

Fetch metadata about each collection the document is a part of.

At this time, each document will only be available in a single collection.

Args:
document_id:

String id of the document to search for.

offset:

How many collections to skip before returning.

limit:

How many collections to return.

Returns:

list of CollectionInfo: Metadata about each collection.

list_document_chunks(document_id: str, collection_id: str | None = None) List[SearchResult]

Returns all chunks for a specific document.

Args:
document_id:

ID of the document.

collection_id:

ID of the collection the document belongs to. If not specified, an arbitrary collection containing the document is chosen.

Returns:

list of SearchResult: The document, text, score and related information of the chunk.

list_documents_from_tags(collection_id: str, tags: List[str]) List[Document]

Lists documents that have the specified set of tags within a collection.

Args:
collection_id:

String id of the collection to find documents in.

tags:

List of Strings representing the tags to retrieve documents for.

Returns:

List of Documents: All the documents with the specified tags.

list_documents_in_collection(collection_id: str, offset: int, limit: int) List[DocumentInfo]

Fetch document metadata for documents in a collection.

Args:
collection_id:

String id of the collection to filter by.

offset:

How many documents to skip before returning.

limit:

How many documents to return.

Returns:

list of DocumentInfo: Metadata about each document.

list_embedding_models() List[str]
list_jobs() List[Job]

List the user’s jobs.

Returns:

list of Job:

list_list_chat_message_meta(message_id: str) List[ChatMessageMeta]

Fetch chat message meta information.

Args:
message_id:

Message id for which the metadata should be fetched.

Returns:

list of ChatMessageMeta: Metadata about the chat message.

list_prompt_permissions(prompt_id: str) List[Permission]

Returns a list of access permissions for a given prompt template.

The returned list of permissions denotes who has access to the prompt template and their access level.

Args:
prompt_id:

ID of the prompt template to inspect.

Returns:

list of Permission: Sharing permissions list for the given prompt template.

list_question_reply_feedback_data(offset: int, limit: int) List[QuestionReplyData]

Fetch user’s questions and answers that have feedback.

Questions and answers with metadata and feedback information.

Args:
offset:

How many conversations to skip before returning.

limit:

How many conversations to return.

Returns:

list of QuestionReplyData: Metadata about questions and answers.

list_recent_chat_sessions(offset: int, limit: int) List[ChatSessionInfo]

Fetch user’s chat session metadata sorted by last update time.

Chats across all collections will be accessed.

Args:
offset:

How many chat sessions to skip before returning.

limit:

How many chat sessions to return.

Returns:

list of ChatSessionInfo: Metadata about each chat session including the latest message.

list_recent_collections(offset: int, limit: int) List[CollectionInfo]

Fetch user’s collection metadata sorted by last update time.

Args:
offset:

How many collections to skip before returning.

limit:

How many collections to return.

Returns:

list of CollectionInfo: Metadata about each collection.

list_recent_collections_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[CollectionInfo]

Fetch user’s collection metadata sorted by last update time.

Args:
offset:

How many collections to skip before returning.

limit:

How many collections to return.

sort_column:

Sort column.

ascending:

When True, return sorted by sort_column in ascending order.

Returns:

list of CollectionInfo: Metadata about each collection.

list_recent_document_summaries(document_id: str, offset: int, limit: int) List[ProcessedDocument]

Fetches recent document summaries/extractions/transformations.

Args:
document_id:

document ID for which to return summaries

offset:

How many summaries to skip before returning summaries.

limit:

How many summaries to return.

Returns:

list of ProcessedDocument: The processed document summaries/extractions/transformations.

list_recent_documents(offset: int, limit: int) List[DocumentInfo]

Fetch user’s document metadata sorted by last update time.

All documents owned by the user, regardless of collection, are accessed.

Args:
offset:

How many documents to skip before returning.

limit:

How many documents to return.

Returns:

list of DocumentInfo: Metadata about each document.

list_recent_documents_with_summaries(offset: int, limit: int) List[DocumentInfoSummary]

Fetch user’s document metadata sorted by last update time, including the latest document summary.

All documents owned by the user, regardless of collection, are accessed.

Args:
offset:

How many documents to skip before returning.

limit:

How many documents to return.

Returns:

list of DocumentInfoSummary: Metadata about each document.

list_recent_documents_with_summaries_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[DocumentInfoSummary]

Fetch user’s document metadata sorted by last update time, including the latest document summary.

All documents owned by the user, regardless of collection, are accessed.

Args:
offset:

How many documents to skip before returning.

limit:

How many documents to return.

sort_column:

Sort column.

ascending:

When True, return sorted by sort_column in ascending order.

Returns:

list of DocumentInfoSummary: Metadata about each document.

list_recent_prompt_templates(offset: int, limit: int) List[PromptTemplate]

Fetch user’s prompt templates sorted by last update time.

Args:
offset:

How many prompt templates to skip before returning.

limit:

How many prompt templates to return.

Returns:

list of PromptTemplate: set of prompts

list_recent_prompt_templates_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[PromptTemplate]

Fetch user’s prompt templates sorted by last update time.

Args:
offset:

How many prompt templates to skip before returning.

limit:

How many prompt templates to return.

sort_column:

Sort column.

ascending:

When True, return sorted by sort_column in ascending order.

Returns:

list of PromptTemplate: set of prompts

list_upload() List[str]

List pending file uploads to the H2OGPTE backend.

Uploaded files are not yet accessible and need to be ingested into a collection.

See Also:

upload: Upload the files into the system to then be ingested into a collection.
ingest_uploads: Add the uploaded files to a collection.
delete_upload: Delete an uploaded file.

Returns:

List[str]: The pending upload ids to be used in ingest jobs.

Raises:

Exception: The upload list request was unsuccessful.

list_users(offset: int, limit: int) List[User]

List system users.

Returns a list of all registered users of the system. A registered user is a user that has logged in at least once.

Args:
offset:

How many users to skip before returning.

limit:

How many users to return.

Returns:

list of User: Metadata about each user.

make_collection_private(collection_id: str)

Make a collection private

Once a collection is private, other users will no longer be able to access chat history or documents related to the collection.

Args:
collection_id:

ID of the collection to make private.

make_collection_public(collection_id: str)

Make a collection public

Once a collection is public, it will be accessible to all authenticated users of the system.

Args:
collection_id:

ID of the collection to make public.

match_chunks(collection_id: str, vectors: List[List[float]], topics: List[str], offset: int, limit: int, cut_off: float = 0, width: int = 0) List[SearchResult]

Find chunks related to a message using semantic search.

Chunks are sorted by relevance and similarity score to the message.

See Also: H2OGPTE.encode_for_retrieval to create vectors from messages.

Args:
collection_id:

ID of the collection to search within.

vectors:

A list of vectorized messages for running semantic search.

topics:

A list of document_ids used to filter which documents in the collection to search.

offset:

How many chunks to skip before returning chunks.

limit:

How many chunks to return.

cut_off:

Exclude matches with distances higher than this cut off.

width:

How many chunks before and after a match to return - not implemented.

Returns:

list of SearchResult: The document, text, score and related information of the chunk.
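The ranking match_chunks performs server-side (sort by similarity, drop matches past cut_off, truncate to the limit) can be illustrated locally with cosine distance. A toy sketch, not the server's implementation:

```python
import math
from typing import List, Tuple

def cosine_distance(a: List[float], b: List[float]) -> float:
    """1 - cosine similarity; 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def rank_chunks(query_vec: List[float],
                chunk_vecs: List[Tuple[str, List[float]]],
                cut_off: float = 1.0,
                limit: int = 10) -> List[str]:
    """Sort chunk ids by distance to the query vector, excluding matches above cut_off."""
    scored = [(cosine_distance(query_vec, v), cid) for cid, v in chunk_vecs]
    scored = [(d, cid) for d, cid in scored if d <= cut_off]
    scored.sort()
    return [cid for _, cid in scored[:limit]]
```

With the real API, the query vectors would come from H2OGPTE.encode_for_retrieval and the ranking happens on the server.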

process_document(document_id: str, system_prompt: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, max_num_chunks: int | None = None, sampling_strategy: str | None = None, pages: List[int] | None = None, schema: Dict[str, Any] | None = None, keep_intermediate_results: bool | None = None, pii_settings: Dict | None = None, meta_data_to_include: Dict[str, bool] | None = None, timeout: float | None = None) ProcessedDocument

Processes a document to create either a global or piecewise summary/extraction/transformation.

Effective prompt created (excluding the system prompt):

"{pre_prompt_summary}
"""
{text from document}
"""
{prompt_summary}"
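The prompt layout above can be sketched as a small string-building helper. A minimal illustration (the actual pre_prompt_summary and prompt_summary defaults are environment-specific):

```python
def build_summary_prompt(pre_prompt_summary: str,
                         document_text: str,
                         prompt_summary: str) -> str:
    """Mirror the effective prompt layout: pre-prompt, triple-quoted text, post-prompt."""
    return f'{pre_prompt_summary}\n"""\n{document_text}\n"""\n{prompt_summary}'
```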
Args:
document_id:

String id of the document to create a summary from.

system_prompt:

System Prompt

pre_prompt_summary:

Prompt that goes before each large piece of text to summarize

prompt_summary:

Prompt that goes after each large piece of text to summarize

image_batch_image_prompt:

Prompt for each image batch for vision models

image_batch_final_prompt:

Prompt to reduce all answers from each image batch for vision models

llm:

LLM to use

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0) — The seed for the random number generator when sampling during generation (if temperature > 0 or top_k > 1 or top_p < 1); seed=0 picks a random seed.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
response_format (str, default: “text”) — Output type, one of [“text”, “json_object”, “json_code”].
guided_json (str, default: “”) — If specified, the output will follow the JSON schema.
guided_regex (str, default: “”) — If specified, the output will follow the regex pattern.
guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
guided_grammar (str, default: “”) — If specified, the output will follow the context-free grammar.
guided_whitespace_pattern (str, default: “”) — If specified, overrides the default whitespace pattern for guided JSON decoding.
enable_vision (str, default: “auto”) — Controls vision mode: send images to the LLM in addition to text chunks. Only applies if models that support vision are available; use get_vision_capable_llm_names() to see the list. One of [“on”, “off”, “auto”].
visible_vision_models (List[str], default: [“auto”]) — Controls which vision model to use when processing images. Use get_vision_capable_llm_names() to see the list. Must provide exactly one model; [“auto”] for automatic.

max_num_chunks:

Max limit of chunks to send to the summarizer

sampling_strategy:

How to sample if the document has more chunks than max_num_chunks. Options are “auto”, “uniform”, “first”, “first+last”, default is “auto” (a hybrid of them all).

pages:

List of specific pages (of the ingested document in PDF form) to use from the document. 1-based indexing.

schema:

Optional JSON schema to use for guided json generation.

keep_intermediate_results:

Whether to keep intermediate results. Default: disabled. If disabled, further LLM calls are applied to the intermediate results until one global summary is obtained: map+reduce (i.e., summary). If enabled, the results’ content will be a list of strings (the results of applying the LLM to different pieces of document context): map (i.e., extract).

pii_settings:

PII Settings.

meta_data_to_include:

A dictionary containing flags that indicate whether each piece of document metadata is to be included as part of the context given to the LLM. Only used if enable_vision is disabled. Default is:

{
    “name”: True,
    “text”: True,
    “page”: True,
    “captions”: True,
    “uri”: False,
    “connector”: False,
    “original_mtime”: False,
    “age”: False,
    “score”: False,
}

timeout:

Amount of time in seconds to allow the request to run. The default is 86400 seconds.

Returns:

ProcessedDocument: Processed document. The content is either a string (keep_intermediate_results=False) or a list of strings (keep_intermediate_results=True).

Raises:

TimeoutError: The request did not complete in time. SessionError: No summary or extraction created. Document wasn’t part of a collection, or LLM timed out, etc.
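The keep_intermediate_results behavior described above can be sketched locally. A toy map/reduce with a stand-in llm callable (names hypothetical, not the server implementation):

```python
from typing import Callable, List, Union

def process_chunks(chunks: List[str],
                   llm: Callable[[str], str],
                   keep_intermediate_results: bool = False) -> Union[str, List[str]]:
    """Map an LLM call over each chunk; optionally reduce to one global result."""
    partials = [llm(chunk) for chunk in chunks]   # map step (extract)
    if keep_intermediate_results:
        return partials                           # list of strings, one per piece
    while len(partials) > 1:                      # reduce step (summary)
        merged = "\n".join(partials)
        partials = [llm(merged)]
    return partials[0]                            # single global string
```

This mirrors why the ProcessedDocument content is a str when keep_intermediate_results=False and a list of str when it is True.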

reset_collection_prompt_settings(collection_id: str) str

Reset the prompt settings for a given collection.

Args:
collection_id:

ID of the collection to update.

Returns:

str: ID of the updated collection.

search_chunks(collection_id: str, query: str, topics: List[str], offset: int, limit: int) List[SearchResult]

Find chunks related to a message using lexical search.

Chunks are sorted by relevance and similarity score to the message.

Args:
collection_id:

ID of the collection to search within.

query:

Question or imperative from the end user to search a collection for.

topics:

A list of document_ids used to filter which documents in the collection to search.

offset:

How many chunks to skip before returning chunks.

limit:

How many chunks to return.

Returns:

list of SearchResult: The document, text, score and related information of the chunk.

set_chat_message_votes(chat_message_id: str, votes: int) Result

Change the vote value of a chat message.

Set the exact value of a vote for a chat message. Any message type can be updated, but only votes on LLM responses will be visible in the UI. The expected values are 0: unvoted, -1: dislike, 1: like. Values outside this set will not be viewable in the UI.

Args:
chat_message_id:

ID of a chat message, any message can be used but only LLM responses will be visible in the UI.

votes:

Integer value for the message. Only -1 and 1 will be visible in the UI as dislike and like respectively.

Returns:

Result: The status of the update.

Raises:

Exception: The update request was unsuccessful.
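Since only -1, 0, and 1 render in the UI, it can help to clamp values before calling set_chat_message_votes. A hypothetical guard:

```python
def normalize_vote(value: int) -> int:
    """Clamp an arbitrary vote value to the UI-visible set {-1, 0, 1}."""
    if value > 0:
        return 1
    if value < 0:
        return -1
    return 0
```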

set_chat_session_collection(chat_session_id: str, collection_id: str | None) str

Set the collection for a chat session

Args:
chat_session_id:

ID of the chat session

collection_id:

ID of the collection, or None to chat with the LLM only.

Returns:

str: ID of the updated chat session

set_chat_session_prompt_template(chat_session_id: str, prompt_template_id: str | None) str

Set the prompt template for a chat_session

Args:
chat_session_id:

ID of the chat session

prompt_template_id:

ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.

Returns:

str: ID of the updated chat session

set_collection_prompt_template(collection_id: str, prompt_template_id: str | None, strict_check: bool = False) str

Set the prompt template for a collection

Args:
collection_id:

ID of the collection to update.

prompt_template_id:

ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.

strict_check:

whether to check that the collection’s embedding model and the prompt template are optimally compatible

Returns:

str: ID of the updated collection.

share_collection(collection_id: str, permission: Permission) ShareResponseStatus

Share a collection to a user.

The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be shared.

Args:
collection_id:

ID of the collection to share.

permission:

Defines the rule for sharing, i.e. permission level.

Returns:

ShareResponseStatus: Status of share request.

share_prompt(prompt_id: str, permission: Permission) ShareResponseStatus

Share a prompt template to a user.

Args:
prompt_id:

ID of the prompt template to share.

permission:

Defines the rule for sharing, i.e. permission level.

Returns:

ShareResponseStatus: Status of share request.

summarize_content(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, pii_settings: Dict | None = None, timeout: float | None = None, **kwargs: Any) Answer

Summarize one or more contexts using an LLM.

Effective prompt created (excluding the system prompt):

"{pre_prompt_summary}
"""
{text_context_list}
"""
{prompt_summary}"
Args:
text_context_list:

List of raw text strings to be summarized.

system_prompt:

Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto for the model default or None for h2oGPTe defaults. Defaults to ‘’ for no system prompt.

pre_prompt_summary:

Text that is prepended before the list of texts. The default can be customized per environment, but the standard default is "In order to write a concise single-paragraph or bulleted list summary, pay attention to the following text:\n"

prompt_summary:

Text that is appended after the list of texts. The default can be customized per environment, but the standard default is "Using only the text above, write a condensed and concise summary of key results (preferably as bullet points):\n"

llm:

Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0) — The seed for the random number generator; only used if temperature > 0. seed=0 picks a random number for each call, seed > 0 is fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
response_format (str, default: “text”) — Output type, one of [“text”, “json_object”, “json_code”].
guided_json (str, default: “”) — If specified, the output will follow the JSON schema.
guided_regex (str, default: “”) — If specified, the output will follow the regex pattern.
guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
guided_grammar (str, default: “”) — If specified, the output will follow the context-free grammar.
guided_whitespace_pattern (str, default: “”) — If specified, overrides the default whitespace pattern for guided JSON decoding.

pii_settings:

PII Settings.

timeout:

Timeout in seconds.

kwargs:
Dictionary of kwargs to pass to h2oGPT. Not recommended, see https://github.com/h2oai/h2ogpt for source code. Valid keys:

h2ogpt_key: str = “”
chat_conversation: list[tuple[str, str]] | None = None
docs_ordering_type: str | None = “best_near_prompt”
max_input_tokens: int = -1
docs_token_handling: str = “split_or_merge”
docs_joiner: str = “\n\n”
image_file: Union[str, list] = None

Returns:

Answer: The response text and any errors.

Raises:

TimeoutError: If response isn’t completed in timeout seconds.

summarize_document(*args, **kwargs) DocumentSummary
tag_document(document_id: str, tag_name: str) str

Adds a tag to a document.

Args:
document_id:

String id of the document to attach the tag to.

tag_name:

String representing the tag to attach.

Returns:

String: The id of the newly created tag.

unshare_collection(collection_id: str, permission: Permission) ShareResponseStatus

Remove sharing of a collection to a user.

The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be un-shared. When un-sharing, specifying the Permission’s user is sufficient.

Args:
collection_id:

ID of the collection to un-share.

permission:

Defines the user for which collection access is revoked.

Returns:

ShareResponseStatus: Status of share request.

unshare_collection_for_all(collection_id: str) ShareResponseStatus

Remove sharing of a collection to all other users but the original owner

Args:
collection_id:

ID of the collection to un-share.

Returns:

ShareResponseStatus: Status of share request.

unshare_prompt(prompt_id: str, permission: Permission) ShareResponseStatus

Remove sharing of a prompt template to a user.

Args:
prompt_id:

ID of the prompt template to un-share.

permission:

Defines the user for which prompt template access is revoked.

Returns:

ShareResponseStatus: Status of share request.

unshare_prompt_for_all(prompt_id: str) ShareResponseStatus

Remove sharing of a prompt template to all other users but the original owner

Args:
prompt_id:

ID of the prompt template to un-share.

Returns:

ShareResponseStatus: Status of share request.

untag_document(document_id: str, tag_name: str) str

Removes an existing tag from a document.

Args:
document_id:

String id of the document to remove the tag from.

tag_name:

String representing the tag to remove.

Returns:

String: The id of the removed tag.

update_collection(collection_id: str, name: str, description: str) str

Update the metadata for a given collection.

All variables are required. You can use h2ogpte.get_collection(<id>).name or description to get the existing values if you only want to change one or the other.

Args:
collection_id:

ID of the collection to update.

name:

New name of the collection, this is required.

description:

New description of the collection, this is required.

Returns:

str: ID of the updated collection.

update_collection_rag_type(collection_id: str, name: str, description: str, rag_type: str) str

Update the metadata for a given collection.

All variables are required. You can use h2ogpte.get_collection(<id>).name or description to get the existing values if you only want to change one or the other.

Args:
collection_id:

ID of the collection to update.

name:

New name of the collection, this is required.

description:

New description of the collection, this is required.

rag_type: str, one of:

"auto": Automatically select the best rag_type.

"llm_only": LLM Only. Answer the query without any supporting document contexts. Requires 1 LLM call.

"rag": RAG (Retrieval Augmented Generation). Use supporting document contexts to answer the query. Requires 1 LLM call.

"hyde1": LLM Only + RAG composite. HyDE RAG (Hypothetical Document Embedding). Use the ‘LLM Only’ response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.

"hyde2": HyDE + RAG composite. Use the ‘HyDE RAG’ response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.

"rag+": Summary RAG. Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.

"all_data": All Data RAG. Like Summary RAG, but includes all document chunks. Uses recursive summarization to overcome LLM context limits. Can require several LLM calls.

Returns:

str: ID of the updated collection.
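As a quick reference, the LLM-call cost of each rag_type above can be captured in a small lookup. A sketch, with "several" (document-size dependent) represented as None:

```python
# Minimum LLM calls per query for each rag_type, per the descriptions above.
# None means "several" (depends on document size).
RAG_TYPE_LLM_CALLS = {
    "llm_only": 1,
    "rag": 1,
    "hyde1": 2,
    "hyde2": 3,
    "rag+": None,
    "all_data": None,
}

def validate_rag_type(rag_type: str) -> str:
    """Accept 'auto' or any explicit rag_type; raise otherwise."""
    if rag_type != "auto" and rag_type not in RAG_TYPE_LLM_CALLS:
        raise ValueError(f"unknown rag_type: {rag_type!r}")
    return rag_type
```

Such a guard, run client-side before update_collection_rag_type, gives an earlier and clearer error than a failed server call.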

update_prompt_template(id: str, name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None) str

Update a prompt template

Args:
id:

String ID of the prompt template to update

name:

Name of the prompt template

description:

Description of the prompt template

lang:

Language code

system_prompt:

System Prompt

pre_prompt_query:

Text that is prepended before the contextual document chunks.

prompt_query:

Text that is added to the beginning of the user’s message.

hyde_no_rag_llm_prompt_extension:

LLM prompt extension.

pre_prompt_summary:

Prompt that goes before each large piece of text to summarize

prompt_summary:

Prompt that goes after each large piece of text to summarize

system_prompt_reflection:

System Prompt for self-reflection

pre_prompt_reflection:

Deprecated; ignored.

prompt_reflection:

Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer

auto_gen_description_prompt:

prompt to create a description of the collection.

auto_gen_document_summary_pre_prompt_summary:

pre_prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_summary_prompt_summary:

prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_sample_questions_prompt:

prompt to create sample questions for a freshly imported document (if enabled).

default_sample_questions:

default sample questions in case there are no auto-generated sample questions.

image_batch_image_prompt:

Prompt for each image batch for vision models

image_batch_final_prompt:

Prompt to reduce all answers from each image batch for vision models

Returns:

str: The ID of the updated prompt template.

update_question_reply_feedback(reply_id: str, expected_answer: str, user_comment: str)

Update feedback for a specific answer to a question.

Args:
reply_id:

UUID of the reply.

expected_answer:

Expected answer.

user_comment:

User comment.

Returns:

None

update_tag(tag_name: str, description: str, format: str) str

Updates a tag.

Args:
tag_name:

String representing the tag to update.

description:

String describing the tag.

format:

String representing the format of the tag.

Returns:

String: The id of the updated tag.

upload(file_name: str, file: Any) str

Upload a file to the H2OGPTE backend.

Uploaded files are not yet accessible and need to be ingested into a collection.

See Also:

ingest_uploads: Add the uploaded files to a collection. delete_upload: Delete uploaded file

Args:
file_name:

What to name the file on the server, must include file extension.

file:

File object to upload, often an opened file from with open(…) as f.

Returns:

str: The upload id to be used in ingest jobs.

Raises:

Exception: The upload request was unsuccessful.
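A minimal sketch of the upload step (the helper name upload_files is hypothetical; it wraps the upload() call documented above, and the returned ids would then be passed to ingest_uploads):

```python
from pathlib import Path
from typing import List

def upload_files(client, paths: List[str]) -> List[str]:
    """Upload each local file and return the upload ids for a later ingest job."""
    upload_ids = []
    for path in paths:
        p = Path(path)
        with p.open("rb") as f:
            # Server-side file name must include the extension.
            upload_ids.append(client.upload(p.name, f))
    return upload_ids
```

Afterwards, a call such as client.ingest_uploads(collection_id, upload_ids) makes the files available in a collection.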

h2ogpte.h2ogpte.marshal(d)
h2ogpte.h2ogpte.unmarshal(s: str)
h2ogpte.h2ogpte.unmarshal_dict(d: dict)

h2ogpte.session module

class h2ogpte.session.Session(address: str, chat_session_id: str, client: H2OGPTE = None, prompt_template_id: str | None = None, open_timeout: int = 10, close_timeout: int = 10, max_connect_retries: int = 10, connect_retry_delay: int = 0.5, connect_retry_max_delay: int = 60)

Bases: object

Create and participate in a chat session.

This is a live connection to the h2oGPTe server contained to a specific chat session on top of a single collection of documents. Users will find all questions and responses in this session in a single chat history in the UI.

See Also:

H2OGPTE.connect: To initialize a session on an existing connection.

Args:
address:

Full URL of the h2oGPTe server to connect to.

chat_session_id:

The ID of the chat session the queries should be sent to.

client:

Set to the value of H2OGPTE client object used to perform other calls to the system.

Examples:

# Example 1: Best practice, create a session using the H2OGPTE module
with h2ogpte.connect(chat_session_id) as session:
    answer1 = session.query('How many paper clips were shipped to Scranton?', timeout=10)
    answer2 = session.query('Did David Brent co-sign the contract with Initech?', timeout=10)

# Example 2: Connect and disconnect manually
session = Session(
    address=address,
    client=client,
    chat_session_id=chat_session_id
)
session.connect()
answer = session.query("Are there any dogs in the documents?")
session.disconnect()
connect()

Connect to an h2oGPTe server.

This is primarily an internal function, used when a session is created in a with statement via the H2OGPTE.connect() function.

property connection: ClientConnection
disconnect()

Disconnect from an h2oGPTe server.

This is primarily an internal function, used when a session is created in a with statement via the H2OGPTE.connect() function.

query(message: str, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, self_reflection_config: Dict[str, Any] | None = None, rag_config: Dict[str, Any] | None = None, include_chat_history: bool | None = False, tags: List[str] | None = None, timeout: float | None = None, retries: int = 3, callback: Callable[[ChatMessage | PartialChatMessage], None] | None = None) ChatMessage | None

Retrieval-augmented generation for a query on a collection.

Finds chunks in the collection relevant to the query using similarity scores. Sends these and any additional instructions to an LLM.

Format of questions or imperatives:

"{pre_prompt_query}
"""
{similar_context_chunks}
"""
{prompt_query}{message}"
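As an illustrative sketch (not the server's actual implementation), the documented template can be rendered like this; the sample values for the pre/post prompts and chunks are examples only:

```python
def assemble_prompt(pre_prompt_query, similar_context_chunks, prompt_query, message):
    """Illustrative only: mirrors the documented prompt template."""
    context = "\n".join(similar_context_chunks)
    return f'{pre_prompt_query}\n"""\n{context}\n"""\n{prompt_query}{message}'

prompt = assemble_prompt(
    "Pay attention and remember the information below.\n",
    ["Chunk A.", "Chunk B."],
    "According to only the information in the document sources provided within the context above, ",
    "how many offices are listed?",
)
```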
Args:
message:

Query or instruction from the end user to the LLM.

system_prompt:

Text sent to models that support system prompts. Gives the model overall context for how to respond. Use auto or None for the model default; defaults to '' for no system prompt.

pre_prompt_query:

Text that is prepended before the contextual document chunks. The default can be customized per environment, but the standard default is "Pay attention and remember the information below, which will help to answer the question or imperative after the context ends.\n"

prompt_query:

Text that is prepended to the user's message, after the context chunks. The default can be customized per environment, but the standard default is "According to only the information in the document sources provided within the context above, "

image_batch_final_prompt:

Prompt used to reduce the answers from all image batches into a final answer, for vision models.

image_batch_image_prompt:

Prompt applied to each image batch, for vision models.

pre_prompt_summary:

Not used, use H2OGPTE.process_document to summarize.

prompt_summary:

Not used, use H2OGPTE.process_document to summarize.

llm:

Name or index of the LLM to send the query to. Use H2OGPTE.get_llms() to see all available options. The default is the first model (index 0).

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, Most creative: 1 top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering. top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. seed (int, default: 0) — The seed for the random number generator when sampling during generation (if temp>0 or top_k>1 or top_p<1), seed=0 picks a random seed. repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty. max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction. min_max_new_tokens (int, default: 512) — minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc. response_format (str, default: “text”) — Output type, one of [“text”, “json_object”, “json_code”]. guided_json (str, default: “”) — If specified, the output will follow the JSON schema. guided_regex (str, default: “”) — If specified, the output will follow the regex pattern. guided_choice (Optional[List[str]], default: None — If specified, the output will be exactly one of the choices. guided_grammar (str, default: “”) — If specified, the output will follow the context free grammar. guided_whitespace_pattern (str, default: “”) — If specified, will override the default whitespace pattern for guided json decoding. enable_vision (str, default: “auto”) - Controls vision mode, send images to the LLM in addition to text chunks. Only if have models that support vision, use get_vision_capable_llm_names() to see list. One of [“on”, “off”, “auto”]. visible_vision_models (List[str], default: [“auto”]) - Controls which vision model to use when processing images. 
Use get_vision_capable_llm_names() to see list. Must provide exactly one model. [“auto”] for automatic.
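A sketch of building an llm_args dictionary with some of the keys documented above; the specific values (seed, token limits, schema) are illustrative choices, not recommendations:

```python
import json

# Illustrative llm_args using keys documented above.
llm_args = {
    "temperature": 0.0,                 # deterministic decoding
    "top_p": 1.0,
    "seed": 42,                         # fixed seed for reproducibility
    "max_new_tokens": 512,
    "response_format": "json_object",   # ask the LLM for JSON output
    "guided_json": json.dumps({         # optional schema for guided decoding
        "type": "object",
        "properties": {"answer": {"type": "string"}},
    }),
}

# With a live session (not created here):
# reply = session.query("List the parties to the contract.", llm_args=llm_args, timeout=60)
```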

self_reflection_config:

Dictionary of arguments for self-reflection; can contain the following string-to-string mappings:

llm_reflection: str

"gpt-4-0613" or "" to disable reflection

prompt_reflection: str

‘Here’s the prompt and the response: """Prompt:\n%s\n"""\n\n""" Response:\n%s\n"""\n\nWhat is the quality of the response for the given prompt? Respond with a score ranging from Score: 0/10 (worst) to Score: 10/10 (best), and give a brief explanation why.'

system_prompt_reflection: str

""

llm_args_reflection: str

"{}"
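A sketch of a self_reflection_config dictionary using the documented keys; the reflection model name is an example and availability depends on your environment:

```python
import json

self_reflection_config = {
    # Model used to grade the main response; "" disables reflection.
    "llm_reflection": "gpt-4-0613",
    # System prompt for the reflection call (empty by default).
    "system_prompt_reflection": "",
    # Extra llm_args for the reflection call, serialized as a JSON string.
    "llm_args_reflection": json.dumps({"temperature": 0}),
}

# With a live session (not created here):
# reply = session.query("Summarize the findings.", self_reflection_config=self_reflection_config)
```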

rag_config:

Dictionary of arguments to control RAG (retrieval-augmented generation) types. Can contain the following key/value pairs:

rag_type: str, one of:

"auto": Automatically select the best rag_type.

"llm_only": LLM Only. Answer the query without any supporting document contexts. Requires 1 LLM call.

"rag": RAG (Retrieval Augmented Generation). Use supporting document contexts to answer the query. Requires 1 LLM call.

"hyde1": LLM Only + RAG composite. HyDE RAG (Hypothetical Document Embedding). Uses the 'LLM Only' response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.

"hyde2": HyDE + RAG composite. Uses the 'HyDE RAG' response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.

"rag+": Summary RAG. Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.

"all_data": All Data RAG. Like Summary RAG, but includes all document chunks. Uses recursive summarization to overcome LLM context limits. Can require several LLM calls.

hyde_no_rag_llm_prompt_extension: str

Add this prompt to every user’s prompt, when generating answers to be used for subsequent retrieval during HyDE. Only used when rag_type is “hyde1” or “hyde2”. example: '\nKeep the answer brief, and list the 5 most relevant key words at the end.'

num_neighbor_chunks_to_include: int

Number of neighboring chunks to include for every retrieved relevant chunk. Helps to keep surrounding context together. Only enabled for rag_type “rag+”. Defaults to 1.

meta_data_to_include:

A dictionary containing flags that indicate whether each piece of document metadata is to be included as part of the context for a chat with a collection. Default is:

{"name": True, "text": True, "page": True, "captions": True, "uri": False, "connector": False, "original_mtime": False, "age": False, "score": False}
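Putting the rag_config keys above together in one dictionary; the values chosen here (Summary RAG, extra neighbor chunks, exposing URIs) are illustrative, not defaults:

```python
# Illustrative rag_config combining the documented keys.
rag_config = {
    "rag_type": "rag+",                   # Summary RAG
    "num_neighbor_chunks_to_include": 2,  # keep more surrounding context together
    "meta_data_to_include": {
        "name": True, "text": True, "page": True, "captions": True,
        "uri": True,   # changed from the default False to surface sources
        "connector": False, "original_mtime": False, "age": False, "score": False,
    },
}

# With a live session (not created here):
# reply = session.query("What does section 4 cover?", rag_config=rag_config, timeout=120)
```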

include_chat_history:

Whether to include chat history, i.e., previous questions and answers for the current chat session, with each new chat request. Disable if deterministic answers are required for a given question.

tags:

A list of tags from which to pull the context for RAG.

timeout:

Amount of time in seconds to allow the request to run. The default is 1000 seconds.

retries:

Number of retries to attempt when the request hits a network issue. The default is 3.

callback:

Function for processing partial messages, used for streaming responses to an end user.
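A minimal streaming-callback sketch: the callback receives PartialChatMessage objects as tokens stream in and a final ChatMessage at the end. Both expose a .content attribute, which this sketch relies on; exact partial-message semantics may vary by version:

```python
class StreamPrinter:
    """Collects and prints streamed message content as it arrives."""

    def __init__(self):
        self.parts = []

    def __call__(self, message):
        # Works for both PartialChatMessage and the final ChatMessage,
        # since both carry a .content attribute.
        self.parts.append(message.content)
        print(message.content, end="", flush=True)

# With a live session (not created here):
# printer = StreamPrinter()
# reply = session.query("Summarize the report.", callback=printer, timeout=120)
```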

Returns:

ChatMessage: The response text and details about the response from the LLM. For example:

ChatMessage(
    id='XXX',
    content='The information provided in the context...',
    reply_to='YYY',
    votes=0,
    created_at=datetime.datetime(2023, 10, 24, 20, 12, 34, 875026),
    type_list=[],
)
Raises:

TimeoutError: The request did not complete in time.

h2ogpte.session.deserialize(response: str) ChatResponse | ChatAcknowledgement
h2ogpte.session.serialize(request: ChatRequest) str

h2ogpte.types module

class h2ogpte.types.Answer(*, content: str, error: str, prompt_raw: str = '', llm: str, input_tokens: int = 0, output_tokens: int = 0, origin: str = 'N/A')

Bases: BaseModel

content: str
error: str
input_tokens: int
llm: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'error': FieldInfo(annotation=str, required=True), 'input_tokens': FieldInfo(annotation=int, required=False, default=0), 'llm': FieldInfo(annotation=str, required=True), 'origin': FieldInfo(annotation=str, required=False, default='N/A'), 'output_tokens': FieldInfo(annotation=int, required=False, default=0), 'prompt_raw': FieldInfo(annotation=str, required=False, default='')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

origin: str
output_tokens: int
prompt_raw: str
class h2ogpte.types.ChatAcknowledgement(t: str, session_id: str, correlation_id: str, message_id: str)

Bases: object

correlation_id: str
message_id: str
session_id: str
t: str
class h2ogpte.types.ChatMessage(*, id: str, content: str, reply_to: str | None = None, votes: int, created_at: datetime, type_list: List[str] | None = None, error: str | None = None)

Bases: BaseModel

content: str
created_at: datetime
error: str | None
id: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'created_at': FieldInfo(annotation=datetime, required=True), 'error': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True), 'reply_to': FieldInfo(annotation=Union[str, NoneType], required=False), 'type_list': FieldInfo(annotation=Union[List[str], NoneType], required=False), 'votes': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

reply_to: str | None
type_list: List[str] | None
votes: int
class h2ogpte.types.ChatMessageFull(*, id: str, username: str | None = None, content: str, reply_to: str | None = None, votes: int, created_at: datetime, type_list: List[ChatMessageMeta] | None = [], has_references: bool, error: str | None = None)

Bases: BaseModel

content: str
created_at: datetime
error: str | None
has_references: bool
id: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'created_at': FieldInfo(annotation=datetime, required=True), 'error': FieldInfo(annotation=Union[str, NoneType], required=False), 'has_references': FieldInfo(annotation=bool, required=True), 'id': FieldInfo(annotation=str, required=True), 'reply_to': FieldInfo(annotation=Union[str, NoneType], required=False), 'type_list': FieldInfo(annotation=Union[List[ChatMessageMeta], NoneType], required=False, default=[]), 'username': FieldInfo(annotation=Union[str, NoneType], required=False), 'votes': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

reply_to: str | None
type_list: List[ChatMessageMeta] | None
username: str | None
votes: int
class h2ogpte.types.ChatMessageMeta(*, message_type: str, content: str)

Bases: BaseModel

content: str
message_type: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'message_type': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

class h2ogpte.types.ChatMessageReference(*, document_id: str, document_name: str, chunk_id: int, pages: str, score: float, collection_id: str)

Bases: BaseModel

chunk_id: int
collection_id: str
document_id: str
document_name: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'chunk_id': FieldInfo(annotation=int, required=True), 'collection_id': FieldInfo(annotation=str, required=True), 'document_id': FieldInfo(annotation=str, required=True), 'document_name': FieldInfo(annotation=str, required=True), 'pages': FieldInfo(annotation=str, required=True), 'score': FieldInfo(annotation=float, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

pages: str
score: float
class h2ogpte.types.ChatRequest(t: str, mode: str, session_id: str, correlation_id: str, body: str, system_prompt: str | None, pre_prompt_query: str | None, prompt_query: str | None, pre_prompt_summary: str | None, prompt_summary: str | None, llm: str | int | NoneType, llm_args: str | None, self_reflection_config: str | None, rag_config: str | None, include_chat_history: bool | None = False, tags: List[str] | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None)

Bases: object

body: str
correlation_id: str
image_batch_final_prompt: str | None = None
image_batch_image_prompt: str | None = None
include_chat_history: bool | None = False
llm: str | int | None
llm_args: str | None
mode: str
pre_prompt_query: str | None
pre_prompt_summary: str | None
prompt_query: str | None
prompt_summary: str | None
rag_config: str | None
self_reflection_config: str | None
session_id: str
system_prompt: str | None
t: str
tags: List[str] | None = None
class h2ogpte.types.ChatResponse(t: str, session_id: str, message_id: str, reply_to_id: str, body: str, error: str)

Bases: object

body: str
error: str
message_id: str
reply_to_id: str
session_id: str
t: str
class h2ogpte.types.ChatSessionCount(*, chat_session_count: int)

Bases: BaseModel

chat_session_count: int
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'chat_session_count': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

class h2ogpte.types.ChatSessionForCollection(*, id: str, latest_message_content: str | None = None, updated_at: datetime)

Bases: BaseModel

id: str
latest_message_content: str | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=str, required=True), 'latest_message_content': FieldInfo(annotation=Union[str, NoneType], required=False), 'updated_at': FieldInfo(annotation=datetime, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

updated_at: datetime
class h2ogpte.types.ChatSessionInfo(*, id: str, latest_message_content: str | None = None, collection_id: str | None, collection_name: str | None, prompt_template_id: str | None = None, updated_at: datetime)

Bases: BaseModel

collection_id: str | None
collection_name: str | None
id: str
latest_message_content: str | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'collection_id': FieldInfo(annotation=Union[str, NoneType], required=True), 'collection_name': FieldInfo(annotation=Union[str, NoneType], required=True), 'id': FieldInfo(annotation=str, required=True), 'latest_message_content': FieldInfo(annotation=Union[str, NoneType], required=False), 'prompt_template_id': FieldInfo(annotation=Union[str, NoneType], required=False), 'updated_at': FieldInfo(annotation=datetime, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

prompt_template_id: str | None
updated_at: datetime
class h2ogpte.types.Chunk(*, text: str, id: int, name: str, size: int, pages: str)

Bases: BaseModel

id: int
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=int, required=True), 'name': FieldInfo(annotation=str, required=True), 'pages': FieldInfo(annotation=str, required=True), 'size': FieldInfo(annotation=int, required=True), 'text': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
pages: str
size: int
text: str
class h2ogpte.types.Chunks(*, result: List[Chunk])

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'result': FieldInfo(annotation=List[Chunk], required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

result: List[Chunk]
class h2ogpte.types.Collection(*, id: str, name: str, description: str, document_count: int, document_size: int, created_at: datetime, updated_at: datetime, username: str, rag_type: str | None = None, embedding_model: str | None = None, prompt_template_id: str | None = None, collection_settings: dict | None = None, is_public: bool)

Bases: BaseModel

collection_settings: dict | None
created_at: datetime
description: str
document_count: int
document_size: int
embedding_model: str | None
id: str
is_public: bool
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'collection_settings': FieldInfo(annotation=Union[dict, NoneType], required=False), 'created_at': FieldInfo(annotation=datetime, required=True), 'description': FieldInfo(annotation=str, required=True), 'document_count': FieldInfo(annotation=int, required=True), 'document_size': FieldInfo(annotation=int, required=True), 'embedding_model': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True), 'is_public': FieldInfo(annotation=bool, required=True), 'name': FieldInfo(annotation=str, required=True), 'prompt_template_id': FieldInfo(annotation=Union[str, NoneType], required=False), 'rag_type': FieldInfo(annotation=Union[str, NoneType], required=False), 'updated_at': FieldInfo(annotation=datetime, required=True), 'username': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
prompt_template_id: str | None
rag_type: str | None
updated_at: datetime
username: str
class h2ogpte.types.CollectionCount(*, collection_count: int)

Bases: BaseModel

collection_count: int
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'collection_count': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

class h2ogpte.types.CollectionInfo(*, id: str, name: str, description: str, document_count: int, document_size: int, updated_at: datetime, user_count: int, is_public: bool, username: str, sessions_count: int)

Bases: BaseModel

description: str
document_count: int
document_size: int
id: str
is_public: bool
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'description': FieldInfo(annotation=str, required=True), 'document_count': FieldInfo(annotation=int, required=True), 'document_size': FieldInfo(annotation=int, required=True), 'id': FieldInfo(annotation=str, required=True), 'is_public': FieldInfo(annotation=bool, required=True), 'name': FieldInfo(annotation=str, required=True), 'sessions_count': FieldInfo(annotation=int, required=True), 'updated_at': FieldInfo(annotation=datetime, required=True), 'user_count': FieldInfo(annotation=int, required=True), 'username': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
sessions_count: int
updated_at: datetime
user_count: int
username: str
class h2ogpte.types.ConfigItem(*, key_name: str, string_value: str, value_type: str, can_overwrite: bool)

Bases: BaseModel

can_overwrite: bool
key_name: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'can_overwrite': FieldInfo(annotation=bool, required=True), 'key_name': FieldInfo(annotation=str, required=True), 'string_value': FieldInfo(annotation=str, required=True), 'value_type': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

string_value: str
value_type: str
class h2ogpte.types.Document(*, id: str, name: str, type: str, size: int, page_count: int, pii_settings: dict | None = None, connector: str | None = None, uri: str | None = None, original_type: str | None = None, original_mtime: datetime | None = None, meta_data_dict: dict | None = None, status: Status, created_at: datetime, updated_at: datetime)

Bases: BaseModel

connector: str | None
created_at: datetime
id: str
meta_data_dict: dict | None
model_config: ClassVar[ConfigDict] = {'use_enum_values': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'connector': FieldInfo(annotation=Union[str, NoneType], required=False), 'created_at': FieldInfo(annotation=datetime, required=True), 'id': FieldInfo(annotation=str, required=True), 'meta_data_dict': FieldInfo(annotation=Union[dict, NoneType], required=False), 'name': FieldInfo(annotation=str, required=True), 'original_mtime': FieldInfo(annotation=Union[datetime, NoneType], required=False), 'original_type': FieldInfo(annotation=Union[str, NoneType], required=False), 'page_count': FieldInfo(annotation=int, required=True), 'pii_settings': FieldInfo(annotation=Union[dict, NoneType], required=False), 'size': FieldInfo(annotation=int, required=True), 'status': FieldInfo(annotation=Status, required=True), 'type': FieldInfo(annotation=str, required=True), 'updated_at': FieldInfo(annotation=datetime, required=True), 'uri': FieldInfo(annotation=Union[str, NoneType], required=False)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
original_mtime: datetime | None
original_type: str | None
page_count: int
pii_settings: dict | None
size: int
status: Status
type: str
updated_at: datetime
uri: str | None
class h2ogpte.types.DocumentCount(*, document_count: int)

Bases: BaseModel

document_count: int
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'document_count': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

class h2ogpte.types.DocumentInfo(*, id: str, username: str, name: str, type: str, size: int, page_count: int, pii_settings: dict | None = None, connector: str | None = None, uri: str | None = None, original_type: str | None = None, meta_data_dict: dict | None = None, status: Status, updated_at: datetime)

Bases: BaseModel

connector: str | None
id: str
meta_data_dict: dict | None
model_config: ClassVar[ConfigDict] = {'use_enum_values': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'connector': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True), 'meta_data_dict': FieldInfo(annotation=Union[dict, NoneType], required=False), 'name': FieldInfo(annotation=str, required=True), 'original_type': FieldInfo(annotation=Union[str, NoneType], required=False), 'page_count': FieldInfo(annotation=int, required=True), 'pii_settings': FieldInfo(annotation=Union[dict, NoneType], required=False), 'size': FieldInfo(annotation=int, required=True), 'status': FieldInfo(annotation=Status, required=True), 'type': FieldInfo(annotation=str, required=True), 'updated_at': FieldInfo(annotation=datetime, required=True), 'uri': FieldInfo(annotation=Union[str, NoneType], required=False), 'username': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
original_type: str | None
page_count: int
pii_settings: dict | None
size: int
status: Status
type: str
updated_at: datetime
uri: str | None
username: str
class h2ogpte.types.DocumentInfoSummary(*, id: str, username: str, name: str, type: str, size: int, page_count: int, pii_settings: dict | None = None, connector: str | None = None, uri: str | None = None, original_type: str | None = None, meta_data_dict: dict | None = None, status: Status, updated_at: datetime, usage_stats: str | None, summary: str | None, summary_parameters: str | None)

Bases: BaseModel

connector: str | None
id: str
meta_data_dict: dict | None
model_config: ClassVar[ConfigDict] = {'use_enum_values': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'connector': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True), 'meta_data_dict': FieldInfo(annotation=Union[dict, NoneType], required=False), 'name': FieldInfo(annotation=str, required=True), 'original_type': FieldInfo(annotation=Union[str, NoneType], required=False), 'page_count': FieldInfo(annotation=int, required=True), 'pii_settings': FieldInfo(annotation=Union[dict, NoneType], required=False), 'size': FieldInfo(annotation=int, required=True), 'status': FieldInfo(annotation=Status, required=True), 'summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'summary_parameters': FieldInfo(annotation=Union[str, NoneType], required=True), 'type': FieldInfo(annotation=str, required=True), 'updated_at': FieldInfo(annotation=datetime, required=True), 'uri': FieldInfo(annotation=Union[str, NoneType], required=False), 'usage_stats': FieldInfo(annotation=Union[str, NoneType], required=True), 'username': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
original_type: str | None
page_count: int
pii_settings: dict | None
size: int
status: Status
summary: str | None
summary_parameters: str | None
type: str
updated_at: datetime
uri: str | None
usage_stats: str | None
username: str
class h2ogpte.types.DocumentSummary(*, id: str, content: str, error: str, document_id: str, kwargs: str, created_at: datetime, usage_stats: str | None = None)

Bases: ProcessedDocument

content: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'created_at': FieldInfo(annotation=datetime, required=True), 'document_id': FieldInfo(annotation=str, required=True), 'error': FieldInfo(annotation=str, required=True), 'id': FieldInfo(annotation=str, required=True), 'kwargs': FieldInfo(annotation=str, required=True), 'usage_stats': FieldInfo(annotation=Union[str, NoneType], required=False)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

class h2ogpte.types.ExtractionAnswer(*, content: List[str], error: str, llm: str, input_tokens: int = 0, output_tokens: int = 0)

Bases: BaseModel

content: List[str]
error: str
input_tokens: int
llm: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=List[str], required=True), 'error': FieldInfo(annotation=str, required=True), 'input_tokens': FieldInfo(annotation=int, required=False, default=0), 'llm': FieldInfo(annotation=str, required=True), 'output_tokens': FieldInfo(annotation=int, required=False, default=0)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

output_tokens: int
class h2ogpte.types.Identifier(*, id: str, error: str | None = None)

Bases: BaseModel

error: str | None
id: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'error': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

exception h2ogpte.types.InvalidArgumentError

Bases: Exception

class h2ogpte.types.Job(*, id: str, name: str, passed: float, failed: float, progress: float, completed: bool, canceled: bool, date: datetime, kind: JobKind, statuses: List[JobStatus], errors: List[str], last_update_date: datetime, duration: str, duration_seconds: float)

Bases: BaseModel

canceled: bool
completed: bool
date: datetime
duration: str
duration_seconds: float
errors: List[str]
failed: float
id: str
kind: JobKind
last_update_date: datetime
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'canceled': FieldInfo(annotation=bool, required=True), 'completed': FieldInfo(annotation=bool, required=True), 'date': FieldInfo(annotation=datetime, required=True), 'duration': FieldInfo(annotation=str, required=True), 'duration_seconds': FieldInfo(annotation=float, required=True), 'errors': FieldInfo(annotation=List[str], required=True), 'failed': FieldInfo(annotation=float, required=True), 'id': FieldInfo(annotation=str, required=True), 'kind': FieldInfo(annotation=JobKind, required=True), 'last_update_date': FieldInfo(annotation=datetime, required=True), 'name': FieldInfo(annotation=str, required=True), 'passed': FieldInfo(annotation=float, required=True), 'progress': FieldInfo(annotation=float, required=True), 'statuses': FieldInfo(annotation=List[JobStatus], required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
passed: float
progress: float
statuses: List[JobStatus]
class h2ogpte.types.JobKind(value)

Bases: str, Enum

An enumeration.

DeleteCollectionsJob = 'DeleteCollectionsJob'
DeleteDocumentsFromCollectionJob = 'DeleteDocumentsFromCollectionJob'
DeleteDocumentsJob = 'DeleteDocumentsJob'
DocumentSummaryJob = 'DocumentProcessJob'
ImportCollectionIntoCollectionJob = 'ImportCollectionIntoCollectionJob'
ImportDocumentIntoCollectionJob = 'ImportDocumentIntoCollectionJob'
IndexFilesJob = 'IndexFilesJob'
IngestFromCloudStorageJob = 'IngestFromCloudStorageJob'
IngestFromFileSystemJob = 'IngestFromFileSystemJob'
IngestUploadsJob = 'IngestUploadsJob'
IngestWebsiteJob = 'IngestWebsiteJob'
NoOpJob = 'NoOpJob'
UpdateCollectionStatsJob = 'UpdateCollectionStatsJob'
class h2ogpte.types.JobStatus(*, id: str, status: str)

Bases: BaseModel

id: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=str, required=True), 'status': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

status: str
class h2ogpte.types.LLMPerformance(*, llm_name: str, call_count: int, input_tokens: int, output_tokens: int, tokens_per_second: float, time_to_first_token: float)

Bases: BaseModel

call_count: int
input_tokens: int
llm_name: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'call_count': FieldInfo(annotation=int, required=True), 'input_tokens': FieldInfo(annotation=int, required=True), 'llm_name': FieldInfo(annotation=str, required=True), 'output_tokens': FieldInfo(annotation=int, required=True), 'time_to_first_token': FieldInfo(annotation=float, required=True), 'tokens_per_second': FieldInfo(annotation=float, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

output_tokens: int
time_to_first_token: float
tokens_per_second: float
class h2ogpte.types.LLMUsage(*, llm_name: str, llm_cost: float, call_count: int, input_tokens: int, output_tokens: int)

Bases: BaseModel

call_count: int
input_tokens: int
llm_cost: float
llm_name: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'call_count': FieldInfo(annotation=int, required=True), 'input_tokens': FieldInfo(annotation=int, required=True), 'llm_cost': FieldInfo(annotation=float, required=True), 'llm_name': FieldInfo(annotation=str, required=True), 'output_tokens': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

output_tokens: int
class h2ogpte.types.LLMUsageLimit(*, current: float, max_allowed_24h: float, cost_unit: str, interval: str | None = None)

Bases: BaseModel

cost_unit: str
current: float
interval: str | None
max_allowed_24h: float
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'cost_unit': FieldInfo(annotation=str, required=True), 'current': FieldInfo(annotation=float, required=True), 'interval': FieldInfo(annotation=Union[str, NoneType], required=False), 'max_allowed_24h': FieldInfo(annotation=float, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

class h2ogpte.types.Meta(*, version: str, build: str, username: str, email: str, license_expired: bool, license_expiry_date: str, global_configs: List[ConfigItem], picture: str | None)

Bases: BaseModel

build: str
email: str
global_configs: List[ConfigItem]
license_expired: bool
license_expiry_date: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'build': FieldInfo(annotation=str, required=True), 'email': FieldInfo(annotation=str, required=True), 'global_configs': FieldInfo(annotation=List[ConfigItem], required=True), 'license_expired': FieldInfo(annotation=bool, required=True), 'license_expiry_date': FieldInfo(annotation=str, required=True), 'picture': FieldInfo(annotation=Union[str, NoneType], required=True), 'username': FieldInfo(annotation=str, required=True), 'version': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

picture: str | None
username: str
version: str
class h2ogpte.types.ObjectCount(*, chat_session_count: int, collection_count: int, document_count: int)

Bases: BaseModel

chat_session_count: int
collection_count: int
document_count: int
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'chat_session_count': FieldInfo(annotation=int, required=True), 'collection_count': FieldInfo(annotation=int, required=True), 'document_count': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

exception h2ogpte.types.ObjectNotFoundError

Bases: Exception

class h2ogpte.types.PartialChatMessage(*, id: str, content: str, reply_to: str | None = None)

Bases: BaseModel

content: str
id: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'id': FieldInfo(annotation=str, required=True), 'reply_to': FieldInfo(annotation=Union[str, NoneType], required=False)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

reply_to: str | None
class h2ogpte.types.Permission(*, username: str)

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'username': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

username: str
class h2ogpte.types.ProcessedDocument(*, id: str, content: str | List[str], error: str, document_id: str, kwargs: str, created_at: datetime, usage_stats: str | None = None)

Bases: BaseModel

content: str | List[str]
created_at: datetime
document_id: str
error: str
id: str
kwargs: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=Union[str, List[str]], required=True), 'created_at': FieldInfo(annotation=datetime, required=True), 'document_id': FieldInfo(annotation=str, required=True), 'error': FieldInfo(annotation=str, required=True), 'id': FieldInfo(annotation=str, required=True), 'kwargs': FieldInfo(annotation=str, required=True), 'usage_stats': FieldInfo(annotation=Union[str, NoneType], required=False)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

usage_stats: str | None
class h2ogpte.types.PromptTemplate(*, is_default: bool, id: str | None, name: str, description: str | None, lang: str | None, system_prompt: str | None, pre_prompt_query: str | None, prompt_query: str | None, hyde_no_rag_llm_prompt_extension: str | None, pre_prompt_summary: str | None, prompt_summary: str | None, system_prompt_reflection: str | None, pre_prompt_reflection: str | None, prompt_reflection: str | None, auto_gen_description_prompt: str | None, auto_gen_document_summary_pre_prompt_summary: str | None, auto_gen_document_summary_prompt_summary: str | None, auto_gen_document_sample_questions_prompt: str | None, default_sample_questions: List[str] | None, created_at: datetime | None, user_id: str | None = '', username: str | None = '', user_count: int | None = -1, image_batch_image_prompt: str | None, image_batch_final_prompt: str | None)

Bases: BaseModel

auto_gen_description_prompt: str | None
auto_gen_document_sample_questions_prompt: str | None
auto_gen_document_summary_pre_prompt_summary: str | None
auto_gen_document_summary_prompt_summary: str | None
created_at: datetime | None
default_sample_questions: List[str] | None
description: str | None
hyde_no_rag_llm_prompt_extension: str | None
id: str | None
image_batch_final_prompt: str | None
image_batch_image_prompt: str | None
is_default: bool
lang: str | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'auto_gen_description_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'auto_gen_document_sample_questions_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'auto_gen_document_summary_pre_prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'auto_gen_document_summary_prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'created_at': FieldInfo(annotation=Union[datetime, NoneType], required=True), 'default_sample_questions': FieldInfo(annotation=Union[List[str], NoneType], required=True), 'description': FieldInfo(annotation=Union[str, NoneType], required=True), 'hyde_no_rag_llm_prompt_extension': FieldInfo(annotation=Union[str, NoneType], required=True), 'id': FieldInfo(annotation=Union[str, NoneType], required=True), 'image_batch_final_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'image_batch_image_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'is_default': FieldInfo(annotation=bool, required=True), 'lang': FieldInfo(annotation=Union[str, NoneType], required=True), 'name': FieldInfo(annotation=str, required=True), 'pre_prompt_query': FieldInfo(annotation=Union[str, NoneType], required=True), 'pre_prompt_reflection': FieldInfo(annotation=Union[str, NoneType], required=True), 'pre_prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_query': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_reflection': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'system_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'system_prompt_reflection': FieldInfo(annotation=Union[str, NoneType], required=True), 'user_count': FieldInfo(annotation=Union[int, NoneType], required=False, default=-1), 'user_id': FieldInfo(annotation=Union[str, NoneType], required=False, 
default=''), 'username': FieldInfo(annotation=Union[str, NoneType], required=False, default='')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
pre_prompt_query: str | None
pre_prompt_reflection: str | None
pre_prompt_summary: str | None
prompt_query: str | None
prompt_reflection: str | None
prompt_summary: str | None
system_prompt: str | None
system_prompt_reflection: str | None
user_count: int | None
user_id: str | None
username: str | None
class h2ogpte.types.PromptTemplateCount(*, prompt_template_count: int)

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'prompt_template_count': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

prompt_template_count: int
class h2ogpte.types.QuestionReplyData(*, question_content: str, reply_content: str, question_id: str, reply_id: str, llm: str | None, system_prompt: str | None, pre_prompt_query: str | None, prompt_query: str | None, pre_prompt_summary: str | None, prompt_summary: str | None, rag_config: str | None, collection_documents: List[str] | None, votes: int, expected_answer: str | None, user_comment: str | None, collection_id: str | None, collection_name: str | None, response_created_at_time: str, prompt_template_id: str | None = None, include_chat_history: bool | None = None)

Bases: BaseModel

collection_documents: List[str] | None
collection_id: str | None
collection_name: str | None
expected_answer: str | None
include_chat_history: bool | None
llm: str | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'collection_documents': FieldInfo(annotation=Union[List[str], NoneType], required=True), 'collection_id': FieldInfo(annotation=Union[str, NoneType], required=True), 'collection_name': FieldInfo(annotation=Union[str, NoneType], required=True), 'expected_answer': FieldInfo(annotation=Union[str, NoneType], required=True), 'include_chat_history': FieldInfo(annotation=Union[bool, NoneType], required=False), 'llm': FieldInfo(annotation=Union[str, NoneType], required=True), 'pre_prompt_query': FieldInfo(annotation=Union[str, NoneType], required=True), 'pre_prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_query': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_template_id': FieldInfo(annotation=Union[str, NoneType], required=False), 'question_content': FieldInfo(annotation=str, required=True), 'question_id': FieldInfo(annotation=str, required=True), 'rag_config': FieldInfo(annotation=Union[str, NoneType], required=True), 'reply_content': FieldInfo(annotation=str, required=True), 'reply_id': FieldInfo(annotation=str, required=True), 'response_created_at_time': FieldInfo(annotation=str, required=True), 'system_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'user_comment': FieldInfo(annotation=Union[str, NoneType], required=True), 'votes': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

pre_prompt_query: str | None
pre_prompt_summary: str | None
prompt_query: str | None
prompt_summary: str | None
prompt_template_id: str | None
question_content: str
question_id: str
rag_config: str | None
reply_content: str
reply_id: str
response_created_at_time: str
system_prompt: str | None
user_comment: str | None
votes: int
class h2ogpte.types.QuestionReplyDataCount(*, question_reply_data_count: int)

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'question_reply_data_count': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

question_reply_data_count: int
class h2ogpte.types.Result(*, status: Status)

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {'use_enum_values': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'status': FieldInfo(annotation=Status, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

status: Status
class h2ogpte.types.SchedulerStats(*, queue_length: int)

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'queue_length': FieldInfo(annotation=int, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

queue_length: int
class h2ogpte.types.SearchResult(*, id: int, topic: str, name: str, text: str, size: int, pages: str, score: float)

Bases: BaseModel

id: int
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=int, required=True), 'name': FieldInfo(annotation=str, required=True), 'pages': FieldInfo(annotation=str, required=True), 'score': FieldInfo(annotation=float, required=True), 'size': FieldInfo(annotation=int, required=True), 'text': FieldInfo(annotation=str, required=True), 'topic': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
pages: str
score: float
size: int
text: str
topic: str
class h2ogpte.types.SearchResults(*, result: List[SearchResult])

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'result': FieldInfo(annotation=List[SearchResult], required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

result: List[SearchResult]
exception h2ogpte.types.SessionError

Bases: Exception

class h2ogpte.types.ShareResponseStatus(*, status: str)

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'status': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

status: str
class h2ogpte.types.Status(value)

Bases: str, Enum

An enumeration.

Canceled = 'canceled'
Completed = 'completed'
Failed = 'failed'
Queued = 'queued'
Running = 'running'
Scheduled = 'scheduled'
Unknown = 'unknown'
class h2ogpte.types.SuggestedQuestion(*, question: str)

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'question': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

question: str
class h2ogpte.types.Tag(*, id: str, name: str, description: str | None = None, format: str | None = None)

Bases: BaseModel

description: str | None
format: str | None
id: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'description': FieldInfo(annotation=Union[str, NoneType], required=False), 'format': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True), 'name': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

name: str
exception h2ogpte.types.UnauthorizedError

Bases: Exception

class h2ogpte.types.User(*, username: str)

Bases: BaseModel

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'username': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

username: str

Module contents

h2oGPTe - AI for documents and more

h2ogpte is the Python client library for H2O.ai’s Enterprise h2oGPTe, a RAG (Retrieval-Augmented Generation) based platform built on top of many open-source software components such as h2oGPT, hnswlib, Torch, Transformers, Golang, Python, k8s, Docker, PyMuPDF, DocTR, and many more.

h2oGPTe is designed to help organizations improve their business using generative AI. It focuses on scaling as your organization expands the number of use cases, users, and documents and has the goal of being your one stop for integrating any model or LLM functionality into your business.

Main Features

  • Contextualize chat with your own data using RAG (Retrieval-Augmented Generation)

  • Scalable backend and frontend, multi-user, high throughput

  • Fully containerized with Kubernetes

  • Multi-modal support for text, images, and audio

  • Highly customizable prompting for:
    • talk to LLM

    • talk to document

    • talk to collection of documents

    • talk to every page of a collection (Map/Reduce), summary, extraction

  • LLM agnostic, choose the model you need for your use case

class h2ogpte.H2OGPTE(address: str, api_key: str | None = None, token_provider: TokenProvider | None = None, verify: bool | str = True, strict_version_check: bool = False)

Bases: object

Connect to and interact with an h2oGPTe server.

INITIAL_WAIT_INTERVAL = 0.1
MAX_WAIT_INTERVAL = 1.0
TIMEOUT = 3600.0
WAIT_BACKOFF_FACTOR = 1.4
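These constants imply an exponential backoff when the client polls the server: waits start at INITIAL_WAIT_INTERVAL, grow by WAIT_BACKOFF_FACTOR, and are capped at MAX_WAIT_INTERVAL. A sketch of the resulting schedule (the real client's loop may differ in detail):

```python
# Documented client constants
INITIAL_WAIT_INTERVAL = 0.1
MAX_WAIT_INTERVAL = 1.0
WAIT_BACKOFF_FACTOR = 1.4

def wait_intervals(n):
    """Return the first n sleep intervals used while polling the server."""
    interval = INITIAL_WAIT_INTERVAL
    out = []
    for _ in range(n):
        out.append(round(interval, 6))
        # Grow the wait geometrically, but never beyond the cap
        interval = min(interval * WAIT_BACKOFF_FACTOR, MAX_WAIT_INTERVAL)
    return out

print(wait_intervals(5))  # [0.1, 0.14, 0.196, 0.2744, 0.38416]
```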
answer_question(question: str, system_prompt: str | None = '', pre_prompt_query: str | None = None, prompt_query: str | None = None, text_context_list: List[str] | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, chat_conversation: List[Tuple[str, str]] | None = None, pii_settings: Dict | None = None, timeout: float | None = None, **kwargs: Any) Answer

Send a message and get a response from an LLM.

Note: This method is only recommended if you are passing a chat conversation or for low-volume testing. For general chat with an LLM, we recommend session.query() for higher throughput in multi-user environments. The following code sample shows the recommended method:

# Establish a chat session
chat_session_id = client.create_chat_session()
# Connect to the chat session
with client.connect(chat_session_id) as session:
    # Send a basic query and print the reply
    reply = session.query("Hello", timeout=60)
    print(reply.content)

Format of inputs content:

"""
{text_context_list}
"""
{chat_conversation}{question}
Args:
question:

Text query to send to the LLM.

text_context_list:

List of raw text strings to be included; will be converted to a string like this: "\n\n".join(text_context_list)
system_prompt:

Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto for the model default, or None for h2oGPTe default. Defaults to ‘’ for no system prompt.

pre_prompt_query:

Text that is prepended before the contextual document chunks in text_context_list. Only used if text_context_list is provided.

prompt_query:

Text that is appended after the contextual document chunks in text_context_list. Only used if text_context_list is provided.

llm:

Name or index of the LLM to send the query to. Use H2OGPTE.get_llms() to see all available options. The default is to use the first model (0th index).

llm_args:
Dictionary of kwargs to pass to the LLM. Valid keys:

  • temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
  • seed (int, default: 0) — The seed for the random number generator; only used if temperature > 0. seed=0 picks a random number for each call, while seed > 0 is fixed.
  • top_k (int, default: 1) — The number of highest-probability vocabulary tokens to keep for top-k filtering.
  • top_p (float, default: 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
  • repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
  • max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
  • min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for the content of the prompt, docs, etc.
  • response_format (str, default: "text") — Output type, one of ["text", "json_object", "json_code"].
  • guided_json (str, default: "") — If specified, the output will follow the JSON schema.
  • guided_regex (str, default: "") — If specified, the output will follow the regex pattern.
  • guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
  • guided_grammar (str, default: "") — If specified, the output will follow the context-free grammar.
  • guided_whitespace_pattern (str, default: "") — If specified, will override the default whitespace pattern for guided JSON decoding.

chat_conversation:

List of (human, bot) conversation tuples that will be prepended to the (question, None) pair for this query.

pii_settings:

PII Settings.

timeout:

Timeout in seconds.

kwargs:
Dictionary of kwargs to pass to h2oGPT. Not recommended, see https://github.com/h2oai/h2ogpt for source code. Valid keys:

  • h2ogpt_key: str = ""
  • chat_conversation: list[tuple[str, str]] | None = None
  • docs_ordering_type: str | None = "best_near_prompt"
  • max_input_tokens: int = -1
  • docs_token_handling: str = "split_or_merge"
  • docs_joiner: str = "\n\n"
  • image_file: Union[str, list] = None

Returns:

Answer: The response text and any errors.

Raises:

TimeoutError: If the response is not completed within timeout seconds.
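For example, a hedged sketch of an llm_args dictionary requesting deterministic, schema-constrained JSON output. The keys come from the documented list above; whether guided decoding is honored depends on the deployed backend LLM:

```python
import json

# JSON schema the output should follow (guided_json takes a schema string)
schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

llm_args = {
    "temperature": 0,              # most deterministic
    "max_new_tokens": 256,         # cap the reply length
    "response_format": "json_object",
    "guided_json": json.dumps(schema),
}

# Hypothetical use (requires a live h2oGPTe server and an H2OGPTE client):
# answer = client.answer_question(
#     question="Extract the person's name and age.",
#     text_context_list=["Alice is 34 years old."],
#     llm_args=llm_args,
# )
print(llm_args["response_format"])  # json_object
```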

cancel_job(job_id: str) Result

Stops a specific job from running on the server.

Args:
job_id:

String id of the job to cancel.

Returns:

Result: Status of canceling the job.

connect(chat_session_id: str) Session

Create and participate in a chat session.

This is a live connection to the H2OGPTE server scoped to a specific chat session on top of a single collection of documents. Users will find all questions and responses from this session in a single chat history in the UI.

Args:
chat_session_id:

ID of the chat session to connect to.

Returns:

Session: Live chat session connection with an LLM.

count_assets() ObjectCount

Counts number of objects owned by the user.

Returns:

ObjectCount: The count of chat sessions, collections, and documents.

count_chat_sessions() int

Counts number of chat sessions owned by the user.

Returns:

int: The count of chat sessions owned by the user.

count_chat_sessions_for_collection(collection_id: str) int

Counts number of chat sessions in a specific collection.

Args:
collection_id:

String id of the collection to count chat sessions for.

Returns:

int: The count of chat sessions in that collection.

count_collections() int

Counts number of collections owned by the user.

Returns:

int: The count of collections owned by the user.

count_documents() int

Counts number of documents accessed by the user.

Returns:

int: The count of documents accessed by the user.

count_documents_in_collection(collection_id: str) int

Counts the number of documents in a specific collection.

Args:
collection_id:

String id of the collection to count documents for.

Returns:

int: The number of documents in that collection.

count_documents_owned_by_me() int

Counts number of documents owned by the user.

Returns:

int: The count of documents owned by the user.

count_prompt_templates() int

Counts number of prompt templates.

Returns:

int: The count of prompt templates.

count_question_reply_feedback() int

Counts the user’s questions and replies that have feedback.

Returns:

int: The count of questions and replies that have user feedback.

create_chat_session(collection_id: str | None = None) str

Creates a new chat session for asking questions (of documents).

Args:
collection_id:

String id of the collection to chat with. If None, chat with LLM directly.

Returns:

str: The ID of the newly created chat session.

create_chat_session_on_default_collection() str

Creates a new chat session for asking questions of documents on the default collection.

Returns:

str: The ID of the newly created chat session.

create_collection(name: str, description: str, embedding_model: str | None = None, prompt_template_id: str | None = None, collection_settings: dict | None = None) str

Creates a new collection.

Args:
name:

Name of the collection.

description:

Description of the collection.

embedding_model:

Embedding model to use. Call list_embedding_models() to see the list of options.

prompt_template_id:

ID of the prompt template to get the prompts from. None to fall back to system defaults.

collection_settings:

(Optional) Dictionary with key/value pairs to configure certain collection-specific settings such as pii_settings or max_tokens_per_chunk.

Returns:

str: The ID of the newly created collection.
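A hedged sketch of using collection_settings: it is a plain dictionary, and max_tokens_per_chunk is one key mentioned in the docstring above. The full set of supported keys and values depends on the server version, so treat this as illustrative only:

```python
# Collection settings are a plain dictionary; the key shown here follows
# the docstring above, but supported keys vary by server version.
collection_settings = {"max_tokens_per_chunk": 320}

# Hypothetical call (requires a live h2oGPTe server and an H2OGPTE client):
# collection_id = client.create_collection(
#     name="contracts",
#     description="Vendor contracts for Q&A",
#     collection_settings=collection_settings,
# )
print(collection_settings["max_tokens_per_chunk"])  # 320
```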

create_prompt_template(name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None) str

Create a new prompt template.

Args:
name:

Name of the prompt template

description:

Description of the prompt template

lang:

Language code

system_prompt:

System Prompt

pre_prompt_query:

Text that is prepended before the contextual document chunks.

prompt_query:

Text that is appended after the contextual document chunks.

hyde_no_rag_llm_prompt_extension:

LLM prompt extension.

pre_prompt_summary:

Prompt that goes before each large piece of text to summarize

prompt_summary:

Prompt that goes after each large piece of text to summarize

system_prompt_reflection:

System Prompt for self-reflection

pre_prompt_reflection:

deprecated - ignored

prompt_reflection:

Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer

auto_gen_description_prompt:

prompt to create a description of the collection.

auto_gen_document_summary_pre_prompt_summary:

pre_prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_summary_prompt_summary:

prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_sample_questions_prompt:

prompt to create sample questions for a freshly imported document (if enabled).

default_sample_questions:

default sample questions in case there are no auto-generated sample questions.

image_batch_final_prompt:

Prompt for each image batch for vision models

image_batch_image_prompt:

Prompt to reduce all answers each image batch for vision models

Returns:

str: The ID of the newly created prompt template.

create_tag(tag_name: str) str

Creates a new tag.

Args:
tag_name:

String representing the tag to create.

Returns:

str: The ID of the created tag.

delete_chat_messages(chat_message_ids: Iterable[str]) Result

Deletes specific chat messages.

Args:
chat_message_ids:

List of string ids of chat messages to delete from the system.

Returns:

Result: Status of the delete job.

delete_chat_sessions(chat_session_ids: Iterable[str]) Result

Deletes chat sessions and related messages.

Args:
chat_session_ids:

List of string ids of chat sessions to delete from the system.

Returns:

Result: Status of the delete job.

delete_collections(collection_ids: Iterable[str], timeout: float | None = None)

Deletes collections from the environment.

Documents in the collection will not be deleted.

Args:
collection_ids:

List of string ids of collections to delete from the system.

timeout:

Timeout in seconds.

delete_document_summaries(summaries_ids: Iterable[str]) Result

Deletes document summaries.

Args:
summaries_ids:

List of string ids of document summaries to delete from the system.

Returns:

Result: Status of the delete job.

delete_documents(document_ids: Iterable[str], timeout: float | None = None)

Deletes documents from the system.

Args:
document_ids:

List of string ids to delete from the system and all collections.

timeout:

Timeout in seconds.

delete_documents_from_collection(collection_id: str, document_ids: Iterable[str], timeout: float | None = None)

Removes documents from a collection.

See Also: H2OGPTE.delete_documents for completely removing the document from the environment.

Args:
collection_id:

String of the collection to remove documents from.

document_ids:

List of string ids to remove from the collection.

timeout:

Timeout in seconds.

delete_prompt_templates(ids: Iterable[str]) Result

Deletes prompt templates

Args:
ids:

List of string ids of prompt templates to delete from the system.

Returns:

Result: Status of the delete job.

delete_upload(upload_id: str) str

Delete a file previously uploaded with the “upload” method.

See Also:

upload: Upload the files into the system to then be ingested into a collection.

ingest_uploads: Add the uploaded files to a collection.

Args:
upload_id:

ID of a file to remove

Returns:

str: The upload id of the removed file.

Raises:

Exception: The delete upload request was unsuccessful.

download_document(destination_directory: str, destination_file_name: str, document_id: str) Path

Downloads a document to a local system directory.

Args:
destination_directory:

Destination directory to save file into.

destination_file_name:

Destination file name.

document_id:

Document ID.

Returns:

Path: Path of downloaded document

download_reference_highlighting(message_id: str, destination_directory: str, output_type: str = 'combined') list

Get PDFs with reference highlighting

Args:
message_id:

ID of the message to get references from

destination_directory:

Destination directory to save files into.

output_type: str, one of

"combined": Generates a PDF file for each source document, with all relevant chunks highlighted in each respective file. This option consolidates all highlights for each source document into a single PDF, making it easy to view all highlights related to that document at once.

"split": Generates a separate PDF file for each chunk, with only the relevant chunk highlighted in each file. This option is useful for focusing on individual sections without interference from other parts of the text. The output file names will be in the format "{document_id}_{chunk_id}.pdf".

Returns:

list[Path]: List of paths of downloaded documents with highlighting
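A hedged sketch combining this with list_chat_message_references (documented further below) to skip messages that have no references; the helper name is illustrative and client is assumed to be an authenticated H2OGPTE instance:

```python
def save_highlighted_references(client, message_id, out_dir):
    """Download highlighted source PDFs for a message, if it has any
    references. Returns a (possibly empty) list of file paths.
    """
    # User messages and unreferenced replies have no references.
    if not client.list_chat_message_references(message_id):
        return []
    return client.download_reference_highlighting(
        message_id, out_dir, output_type="combined"
    )
```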

encode_for_retrieval(chunks: List[str], embedding_model: str | None = None) List[List[float]]

Encode texts for semantic searching.

See Also: H2OGPTE.match for getting a list of chunks that semantically match each encoded text.

Args:
chunks:

List of strings of texts to be encoded.

embedding_model:

Embedding model to use. Call list_embedding_models() for a list of options.

Returns:

List of lists of floats: Each inner list is the encoding of the corresponding input text.

extract_data(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_extract: str | None = None, prompt_extract: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, pii_settings: Dict | None = None, timeout: float | None = None, **kwargs: Any) ExtractionAnswer

Extract information from one or more contexts using an LLM.

pre_prompt_extract and prompt_extract variables must be used together. If these variables are not set, the input texts will be summarized into bullet points.

Format of extract content:

"{pre_prompt_extract}"""
{text_context_list}
"""\n{prompt_extract}"

Examples:

extract = h2ogpte.extract_data(
    text_context_list=chunks,
    pre_prompt_extract="Pay attention and look at all people. Your job is to collect their names.\n",
    prompt_extract="List all people's names as JSON.",
)
Args:
text_context_list:

List of raw text strings to extract data from.

system_prompt:

Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto or None for the model default. Defaults to ‘’ for no system prompt.

pre_prompt_extract:

Text that is prepended before the list of texts. If not set, the inputs will be summarized.

prompt_extract:

Text that is appended after the list of texts. If not set, the inputs will be summarized.

llm:

Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.

seed (int, default: 0) — The seed for the random number generator, only used if temperature > 0. seed=0 picks a random number for each call; seed > 0 is fixed.

top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k filtering.

top_p (float, default: 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.

repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.

max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.

min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.

response_format (str, default: "text") — Output type, one of ["text", "json_object", "json_code"].

guided_json (str, default: "") — If specified, the output will follow the JSON schema.

guided_regex (str, default: "") — If specified, the output will follow the regex pattern.

guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.

guided_grammar (str, default: "") — If specified, the output will follow the context-free grammar.

guided_whitespace_pattern (str, default: "") — If specified, will override the default whitespace pattern for guided JSON decoding.

pii_settings:

PII Settings.

timeout:

Timeout in seconds.

kwargs:
Dictionary of kwargs to pass to h2oGPT. Not recommended, see https://github.com/h2oai/h2ogpt for source code. Valid keys:

h2ogpt_key: str = ""

chat_conversation: list[tuple[str, str]] | None = None

docs_ordering_type: str | None = "best_near_prompt"

max_input_tokens: int = -1

docs_token_handling: str = "split_or_merge"

docs_joiner: str = "\n\n"

image_file: Union[str, list] = None

Returns:

ExtractionAnswer: The list of text responses and any errors.

Raises:

TimeoutError: If response isn’t completed in timeout seconds.
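Building on the example above, a hedged sketch of structured extraction using the response_format key from the llm_args list (the helper name and prompts are illustrative; client is assumed to be an authenticated H2OGPTE instance):

```python
def extract_names_as_json(client, chunks):
    """Ask the LLM to return people's names from each chunk as JSON.

    Setting response_format="json_object" asks the LLM for machine-
    parseable output; temperature=0 keeps the extraction deterministic.
    """
    return client.extract_data(
        text_context_list=chunks,
        pre_prompt_extract="Collect the names of all people mentioned.\n",
        prompt_extract="List all people's names as JSON.",
        llm_args={"response_format": "json_object", "temperature": 0},
    )
```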

get_chat_session_prompt_template(chat_session_id: str) PromptTemplate | None

Get the prompt template for a chat_session

Args:
chat_session_id:

ID of the chat session

Returns:

PromptTemplate: The prompt template of the chat session, or None if not set.

get_chat_session_questions(chat_session_id: str, limit: int) List[SuggestedQuestion]

List suggested questions

Args:
chat_session_id:

A chat session ID of which to return the suggested questions

limit:

How many questions to return.

Returns:

List: A list of questions.

get_chunks(collection_id: str, chunk_ids: Iterable[int]) List[Chunk]

Get the text of specific chunks in a collection.

Args:
collection_id:

String id of the collection to search in.

chunk_ids:

List of ints for the chunks to return. Chunks are indexed starting at 1.

Returns:

list of Chunk: The text of each requested chunk.

Raises:

Exception: One or more chunks could not be found.

get_collection(collection_id: str) Collection

Get metadata about a collection.

Args:
collection_id:

String id of the collection to search for.

Returns:

Collection: Metadata about the collection.

Raises:

KeyError: The collection was not found.

get_collection_for_chat_session(chat_session_id: str) Collection

Get metadata about the collection of a chat session.

Args:
chat_session_id:

String id of the chat session to search for.

Returns:

Collection: Metadata about the collection.

get_collection_prompt_template(collection_id: str) PromptTemplate | None

Get the prompt template for a collection

Args:
collection_id:

ID of the collection

Returns:

str: ID of the prompt template.

get_collection_questions(collection_id: str, limit: int) List[SuggestedQuestion]

List suggested questions

Args:
collection_id:

A collection ID of which to return the suggested questions

limit:

How many questions to return.

Returns:

List: A list of questions.

get_default_collection() CollectionInfo

Get the default collection, to be used for collection API keys.

Returns:

CollectionInfo: Default collection info.

get_document(document_id: str) Document

Fetches information about a specific document.

Args:
document_id:

String id of the document.

Returns:

Document: Metadata about the Document.

Raises:

KeyError: The document was not found.

get_job(job_id: str) Job

Fetches information about a specific job.

Args:
job_id:

String id of the job.

Returns:

Job: Metadata about the Job.

get_llm_names() List[str]

Lists names of available LLMs in the environment.

Returns:

list of string: Name of each available model.

get_llm_performance_by_llm(interval: str) List[LLMPerformance]
get_llm_usage_24h() float
get_llm_usage_24h_by_llm() List[LLMUsage]
get_llm_usage_24h_with_limits() LLMUsageLimit
get_llm_usage_6h() float
get_llm_usage_6h_by_llm() List[LLMUsage]
get_llm_usage_by_llm(interval: str) List[LLMUsage]
get_llm_usage_with_limits(interval: str) LLMUsageLimit
get_llms() List[dict]

Lists metadata information about available LLMs in the environment.

Returns:

list of dict (string, ANY): Name and details about each available model.

get_meta() Meta

Returns information about the environment and the user.

Returns:

Meta: Details about the version and license of the environment and the user’s name and email.

get_prompt_template(id: str | None = None) PromptTemplate

Get a prompt template

Args:
id:

String id of the prompt template to retrieve or None for default

Returns:

PromptTemplate: prompts

Raises:

KeyError: The prompt template was not found.

get_scheduler_stats() SchedulerStats

Count the number of global, pending jobs on the server.

Returns:

SchedulerStats: The queue length for number of jobs.

get_tag(tag_name: str) Tag

Returns an existing tag.

Args:
tag_name:

String name of the tag to retrieve.

Returns:

Tag: The requested tag.

Raises:

KeyError: The tag was not found.

get_vision_capable_llm_names() List[str]

Lists names of available vision-capable multi-modal LLMs (that can natively handle images as input) in the environment.

Returns:

list of string: Name of each available model.

import_collection_into_collection(collection_id: str, src_collection_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, copy_document: bool = False, ocr_model: str = 'auto', timeout: float | None = None)

Import all documents from a collection into an existing collection

Args:
collection_id:

Collection ID to add documents to.

src_collection_id:

Collection ID to import documents from.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

copy_document:

Whether to save a new copy of the document

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds.

import_document_into_collection(collection_id: str, document_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, copy_document: bool = False, ocr_model: str = 'auto', timeout: float | None = None)

Import an already stored document to an existing collection

Args:
collection_id:

Collection ID to add documents to.

document_id:

Document ID to add.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

copy_document:

Whether to save a new copy of the document

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds.

ingest_from_azure_blob_storage(collection_id: str, container: str, path: str | List[str], account_name: str, credentials: AzureKeyCredential | AzureSASCredential | None = None, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None, audio_input_language: str = 'auto', ocr_model: str = 'auto')

Add files from the Azure Blob Storage into a collection.

Args:
collection_id:

String id of the collection to add the ingested documents into.

container:

Name of the Azure Blob Storage container.

path:

Path or list of paths to files or directories within an Azure Blob Storage container. Examples: file1, dir1/file2, dir3/dir4/

account_name:

Name of a storage account

credentials:

The object with Azure credentials. If the object is not provided, only a public container will be accessible.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

timeout:

Timeout in seconds

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

ingest_from_file_system(collection_id: str, root_dir: str, glob: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, audio_input_language: str = 'auto', ocr_model: str = 'auto', timeout: float | None = None)

Add files from the local system into a collection.

Args:
collection_id:

String id of the collection to add the ingested documents into.

root_dir:

String path of where to look for files.

glob:

String of the glob pattern used to match files in the root directory.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds

ingest_from_gcs(collection_id: str, url: str | List[str], credentials: GCSServiceAccountCredential | None = None, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None, audio_input_language: str = 'auto', ocr_model: str = 'auto')

Add files from the Google Cloud Storage into a collection.

Args:
collection_id:

String id of the collection to add the ingested documents into.

url:

The path or list of paths of GCS files or directories. Examples: gs://bucket/file, gs://bucket/../dir/

credentials:

The object holding a path to a JSON key of Google Cloud service account. If the object is not provided, only public buckets will be accessible.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

timeout:

Timeout in seconds

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

ingest_from_s3(collection_id: str, url: str | List[str], region: str = 'us-east-1', credentials: S3Credential | None = None, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None, audio_input_language: str = 'auto', ocr_model: str = 'auto')

Add files from the AWS S3 storage into a collection.

Args:
collection_id:

String id of the collection to add the ingested documents into.

url:

The path or list of paths of S3 files or directories. Examples: s3://bucket/file, s3://bucket/../dir/

region:

The name of the region used for interaction with AWS services.

credentials:

The object with S3 credentials. If the object is not provided, only public buckets will be accessible.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

timeout:

Timeout in seconds

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

ingest_uploads(collection_id: str, upload_ids: Iterable[str], gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None, audio_input_language: str = 'auto', ocr_model: str = 'auto')

Add uploaded documents into a specific collection.

See Also:

upload: Upload the files into the system to then be ingested into a collection.

delete_upload: Delete uploaded file.

Args:
collection_id:

String id of the collection to add the ingested documents into.

upload_ids:

List of string ids of each uploaded document to add to the collection.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds

ingest_website(collection_id: str, url: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, follow_links: bool = False, audio_input_language: str = 'auto', ocr_model: str = 'auto', timeout: float | None = None)

Crawl and ingest a URL into a collection.

The web page or document linked from this URL will be imported.

Args:
collection_id:

String id of the collection to add the ingested documents into.

url:

String of the url to crawl.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

follow_links:

Whether to import all web pages linked from this URL. External links will be ignored. Links to other pages on the same domain will be followed as long as they are at the same level or below the URL you specify. Each page will be transformed into a PDF document.

audio_input_language:

Language of audio files. Defaults to “auto” language detection. Pass empty string to see choices.

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds
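A hedged sketch of ingesting several pages into one collection, one ingest job per URL (the helper name is illustrative; client is assumed to be an authenticated H2OGPTE instance):

```python
def ingest_urls(client, collection_id, urls, follow_links=False):
    """Crawl each URL into the collection.

    With follow_links=False only the page itself is imported; setting
    it to True also imports same-domain pages at or below each URL.
    """
    for url in urls:
        client.ingest_website(collection_id, url, follow_links=follow_links)
    return len(urls)
```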

list_all_tags() List[Tag]

Lists all existing tags.

Returns:

List of Tags: List of existing tags.

list_chat_message_meta_part(message_id: str, info_type: str) ChatMessageMeta

Fetch one chat message meta information.

Args:
message_id:

Message id to which the metadata should be pulled.

info_type:

Metadata type to fetch. Valid choices are: “self_reflection”, “usage_stats”, “prompt_raw”, “llm_only”, “hyde1”

Returns:

ChatMessageMeta: Metadata information about the chat message.

list_chat_message_references(message_id: str) List[ChatMessageReference]

Fetch metadata for references of a chat message.

References are only available for messages sent from an LLM; an empty list will be returned for messages sent by the user.

Args:
message_id:

String id of the message to get references for.

Returns:

list of ChatMessageReference: Metadata including the document name, polygon information, and score.

list_chat_messages(chat_session_id: str, offset: int, limit: int) List[ChatMessage]

Fetch chat message and metadata for messages in a chat session.

Messages without a reply_to are from the end user; messages with a reply_to are from an LLM, in response to a specific user message.

Args:
chat_session_id:

String id of the chat session to filter by.

offset:

How many chat messages to skip before returning.

limit:

How many chat messages to return.

Returns:

list of ChatMessage: Text and metadata for chat messages.

list_chat_messages_full(chat_session_id: str, offset: int, limit: int) List[ChatMessageFull]

Fetch chat message and metadata for messages in a chat session.

Messages without a reply_to are from the end user; messages with a reply_to are from an LLM, in response to a specific user message.

Args:
chat_session_id:

String id of the chat session to filter by.

offset:

How many chat messages to skip before returning.

limit:

How many chat messages to return.

Returns:

list of ChatMessageFull: Text and metadata for chat messages.

list_chat_sessions_for_collection(collection_id: str, offset: int, limit: int) List[ChatSessionForCollection]

Fetch chat session metadata for chat sessions in a collection.

Args:
collection_id:

String id of the collection to filter by.

offset:

How many chat sessions to skip before returning.

limit:

How many chat sessions to return.

Returns:

list of ChatSessionForCollection: Metadata about each chat session including the latest message.

list_collection_permissions(collection_id: str) List[Permission]

Returns a list of access permissions for a given collection.

The returned list of permissions denotes who has access to the collection and their access level.

Args:
collection_id:

ID of the collection to inspect.

Returns:

list of Permission: Sharing permissions list for the given collection.

list_collections_for_document(document_id: str, offset: int, limit: int) List[CollectionInfo]

Fetch metadata about each collection the document is a part of.

At this time, each document will only be available in a single collection.

Args:
document_id:

String id of the document to search for.

offset:

How many collections to skip before returning.

limit:

How many collections to return.

Returns:

list of CollectionInfo: Metadata about each collection.

list_document_chunks(document_id: str, collection_id: str | None = None) List[SearchResult]

Returns all chunks for a specific document.

Args:
document_id:

ID of the document.

collection_id:

ID of the collection the document belongs to. If not specified, an arbitrary collection containing the document is chosen.

Returns:

list of SearchResult: The document, text, score and related information of the chunk.

list_documents_from_tags(collection_id: str, tags: List[str]) List[Document]

Lists documents that have the specified set of tags within a collection.

Args:

collection_id:

String id of the collection to find documents in.

tags:

List of Strings representing the tags to retrieve documents for.

Returns:

List of Documents: All the documents with the specified tags.

list_documents_in_collection(collection_id: str, offset: int, limit: int) List[DocumentInfo]

Fetch document metadata for documents in a collection.

Args:
collection_id:

String id of the collection to filter by.

offset:

How many documents to skip before returning.

limit:

How many documents to return.

Returns:

list of DocumentInfo: Metadata about each document.
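The offset/limit pair above paginates; a hedged sketch of walking every page (the generator name is illustrative; client is assumed to be an authenticated H2OGPTE instance):

```python
def iter_collection_documents(client, collection_id, page_size=100):
    """Yield every DocumentInfo in a collection, paging with
    offset/limit until an empty page is returned.
    """
    offset = 0
    while True:
        page = client.list_documents_in_collection(
            collection_id, offset=offset, limit=page_size
        )
        if not page:
            return
        yield from page
        offset += len(page)
```

The same pattern applies to the other offset/limit listing methods in this class, such as list_chat_messages and list_recent_collections.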

list_embedding_models() List[str]
list_jobs() List[Job]

List the user’s jobs.

Returns:

list of Job: Metadata about each job.

list_list_chat_message_meta(message_id: str) List[ChatMessageMeta]

Fetch chat message meta information.

Args:
message_id:

Message id to which the metadata should be pulled.

Returns:

list of ChatMessageMeta: Metadata about the chat message.

list_prompt_permissions(prompt_id: str) List[Permission]

Returns a list of access permissions for a given prompt template.

The returned list of permissions denotes who has access to the prompt template and their access level.

Args:
prompt_id:

ID of the prompt template to inspect.

Returns:

list of Permission: Sharing permissions list for the given prompt template.

list_question_reply_feedback_data(offset: int, limit: int) List[QuestionReplyData]

Fetch user’s questions and answers that have feedback.

Questions and answers with metadata and feedback information.

Args:
offset:

How many conversations to skip before returning.

limit:

How many conversations to return.

Returns:

list of QuestionReplyData: Metadata about questions and answers.

list_recent_chat_sessions(offset: int, limit: int) List[ChatSessionInfo]

Fetch user’s chat session metadata sorted by last update time.

Chats across all collections will be accessed.

Args:
offset:

How many chat sessions to skip before returning.

limit:

How many chat sessions to return.

Returns:

list of ChatSessionInfo: Metadata about each chat session including the latest message.

list_recent_collections(offset: int, limit: int) List[CollectionInfo]

Fetch user’s collection metadata sorted by last update time.

Args:
offset:

How many collections to skip before returning.

limit:

How many collections to return.

Returns:

list of CollectionInfo: Metadata about each collection.

list_recent_collections_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[CollectionInfo]

Fetch user’s collection metadata sorted by last update time.

Args:
offset:

How many collections to skip before returning.

limit:

How many collections to return.

sort_column:

Sort column.

ascending:

When True, return sorted by sort_column in ascending order.

Returns:

list of CollectionInfo: Metadata about each collection.

list_recent_document_summaries(document_id: str, offset: int, limit: int) List[ProcessedDocument]

Fetches recent document summaries/extractions/transformations

Args:
document_id:

document ID for which to return summaries

offset:

How many summaries to skip before returning summaries.

limit:

How many summaries to return.

Returns:

list of ProcessedDocument: The summaries, extractions, or transformations for the document.

list_recent_documents(offset: int, limit: int) List[DocumentInfo]

Fetch user’s document metadata sorted by last update time.

All documents owned by the user, regardless of collection, are accessed.

Args:
offset:

How many documents to skip before returning.

limit:

How many documents to return.

Returns:

list of DocumentInfo: Metadata about each document.

list_recent_documents_with_summaries(offset: int, limit: int) List[DocumentInfoSummary]

Fetch user’s document metadata sorted by last update time, including the latest document summary.

All documents owned by the user, regardless of collection, are accessed.

Args:
offset:

How many documents to skip before returning.

limit:

How many documents to return.

Returns:

list of DocumentInfoSummary: Metadata about each document.

list_recent_documents_with_summaries_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[DocumentInfoSummary]

Fetch user’s document metadata sorted by last update time, including the latest document summary.

All documents owned by the user, regardless of collection, are accessed.

Args:
offset:

How many documents to skip before returning.

limit:

How many documents to return.

sort_column:

Sort column.

ascending:

When True, return sorted by sort_column in ascending order.

Returns:

list of DocumentInfoSummary: Metadata about each document.

list_recent_prompt_templates(offset: int, limit: int) List[PromptTemplate]

Fetch user’s prompt templates sorted by last update time.

Args:
offset:

How many prompt templates to skip before returning.

limit:

How many prompt templates to return.

Returns:

list of PromptTemplate: set of prompts

list_recent_prompt_templates_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[PromptTemplate]

Fetch user’s prompt templates sorted by last update time.

Args:
offset:

How many prompt templates to skip before returning.

limit:

How many prompt templates to return.

sort_column:

Sort column.

ascending:

When True, return sorted by sort_column in ascending order.

Returns:

list of PromptTemplate: set of prompts

list_upload() List[str]

List pending file uploads to the H2OGPTE backend.

Uploaded files are not yet accessible and need to be ingested into a collection.

See Also:

upload: Upload the files into the system to then be ingested into a collection.

ingest_uploads: Add the uploaded files to a collection.

delete_upload: Delete uploaded file.

Returns:

List[str]: The pending upload ids to be used in ingest jobs.

Raises:

Exception: The upload list request was unsuccessful.

list_users(offset: int, limit: int) List[User]

List system users.

Returns a list of all registered users of the system. A registered user is a user that has logged in at least once.

Args:
offset:

How many users to skip before returning.

limit:

How many users to return.

Returns:

list of User: Metadata about each user.

make_collection_private(collection_id: str)

Make a collection private

Once a collection is private, other users will no longer be able to access chat history or documents related to the collection.

Args:
collection_id:

ID of the collection to make private.

make_collection_public(collection_id: str)

Make a collection public

Once a collection is public, it will be accessible to all authenticated users of the system.

Args:
collection_id:

ID of the collection to make public.

match_chunks(collection_id: str, vectors: List[List[float]], topics: List[str], offset: int, limit: int, cut_off: float = 0, width: int = 0) List[SearchResult]

Find chunks related to a message using semantic search.

Chunks are sorted by relevance and similarity score to the message.

See Also: H2OGPTE.encode_for_retrieval to create vectors from messages.

Args:
collection_id:

ID of the collection to search within.

vectors:

A list of vectorized messages for running semantic search.

topics:

A list of document_ids used to filter which documents in the collection to search.

offset:

How many chunks to skip before returning chunks.

limit:

How many chunks to return.

cut_off:

Exclude matches with distances higher than this cut off.

width:

How many chunks before and after a match to return - not implemented.

Returns:

list of SearchResult: The document, text, score and related information of the chunk.
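Combining the two steps referenced above (vectorize with encode_for_retrieval, then search with match_chunks) can be sketched like this. A minimal sketch: `client` is assumed to be an H2OGPTE instance, and the helper name is illustrative.

```python
def semantic_search(client, collection_id, message, limit=10):
    # Vectorize the message first (see H2OGPTE.encode_for_retrieval),
    # then run semantic search over the collection with those vectors
    vectors = client.encode_for_retrieval([message])
    return client.match_chunks(
        collection_id=collection_id,
        vectors=vectors,
        topics=[],  # no document filter: search the whole collection
        offset=0,
        limit=limit,
    )
```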

process_document(document_id: str, system_prompt: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, max_num_chunks: int | None = None, sampling_strategy: str | None = None, pages: List[int] | None = None, schema: Dict[str, Any] | None = None, keep_intermediate_results: bool | None = None, pii_settings: Dict | None = None, meta_data_to_include: Dict[str, bool] | None = None, timeout: float | None = None) ProcessedDocument

Processes a document to create either a global or piecewise summary/extraction/transformation of the document.

Effective prompt created (excluding the system prompt):

"{pre_prompt_summary}
"""
{text from document}
"""
{prompt_summary}"
Args:
document_id:

String id of the document to create a summary from.

system_prompt:

System Prompt

pre_prompt_summary:

Prompt that goes before each large piece of text to summarize

prompt_summary:

Prompt that goes after each large piece of text to summarize

image_batch_final_prompt:

Prompt used to reduce the answers from all image batches for vision models

image_batch_image_prompt:

Prompt used for each image batch for vision models

llm:

LLM to use

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0) — The seed for the random number generator when sampling during generation (if temp > 0 or top_k > 1 or top_p < 1); seed=0 picks a random seed.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
response_format (str, default: “text”) — Output type, one of [“text”, “json_object”, “json_code”].
guided_json (str, default: “”) — If specified, the output will follow the JSON schema.
guided_regex (str, default: “”) — If specified, the output will follow the regex pattern.
guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
guided_grammar (str, default: “”) — If specified, the output will follow the context-free grammar.
guided_whitespace_pattern (str, default: “”) — If specified, will override the default whitespace pattern for guided JSON decoding.
enable_vision (str, default: “auto”) — Controls vision mode; sends images to the LLM in addition to text chunks. Only usable with models that support vision; use get_vision_capable_llm_names() to see the list. One of [“on”, “off”, “auto”].
visible_vision_models (List[str], default: [“auto”]) — Controls which vision model to use when processing images. Use get_vision_capable_llm_names() to see the list. Must provide exactly one model; [“auto”] for automatic.

max_num_chunks:

Max limit of chunks to send to the summarizer

sampling_strategy:

How to sample if the document has more chunks than max_num_chunks. Options are “auto”, “uniform”, “first”, “first+last”, default is “auto” (a hybrid of them all).

pages:

List of specific pages (of the ingested document in PDF form) to use from the document. 1-based indexing.

schema:

Optional JSON schema to use for guided json generation.

keep_intermediate_results:

Whether to keep intermediate results. Default: disabled. If disabled, further LLM calls are applied to the intermediate results until one global summary is obtained: map+reduce (i.e., summary). If enabled, the results’ content will be a list of strings (the results of applying the LLM to different pieces of document context): map (i.e., extract).

pii_settings:

PII Settings.

meta_data_to_include:

A dictionary containing flags that indicate whether each piece of document metadata is to be included as part of the context given to the LLM. Only used if enable_vision is disabled. Default is:

{“name”: True, “text”: True, “page”: True, “captions”: True, “uri”: False, “connector”: False, “original_mtime”: False, “age”: False, “score”: False}

timeout:

Amount of time in seconds to allow the request to run. The default is 86400 seconds.

Returns:

ProcessedDocument: Processed document. The content is either a string (keep_intermediate_results=False) or a list of strings (keep_intermediate_results=True).

Raises:

TimeoutError: The request did not complete in time. SessionError: No summary or extraction created. Document wasn’t part of a collection, or LLM timed out, etc.
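Getting one global summary for a document can be sketched as follows. A minimal sketch, assuming `client` is an H2OGPTE instance and that the document is already part of a collection; the helper name is illustrative.

```python
def global_summary(client, document_id, llm=None):
    # Request one global map+reduce summary; content is a single string
    # because keep_intermediate_results is disabled
    processed = client.process_document(
        document_id=document_id,
        llm=llm,
        keep_intermediate_results=False,
    )
    return processed.content
```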

reset_collection_prompt_settings(collection_id: str) str

Reset the prompt settings for a given collection.

Args:
collection_id:

ID of the collection to update.

Returns:

str: ID of the updated collection.

search_chunks(collection_id: str, query: str, topics: List[str], offset: int, limit: int) List[SearchResult]

Find chunks related to a message using lexical search.

Chunks are sorted by relevance and similarity score to the message.

Args:
collection_id:

ID of the collection to search within.

query:

Question or imperative from the end user to search a collection for.

topics:

A list of document_ids used to filter which documents in the collection to search.

offset:

How many chunks to skip before returning chunks.

limit:

How many chunks to return.

Returns:

list of SearchResult: The document, text, score and related information of the chunk.

set_chat_message_votes(chat_message_id: str, votes: int) Result

Change the vote value of a chat message.

Set the exact value of a vote for a chat message. Any message type can be updated, but only LLM response votes will be visible in the UI. The expected values are 0: unvoted, -1: dislike, 1: like. Values outside of this range will not be viewable in the UI.

Args:
chat_message_id:

ID of a chat message, any message can be used but only LLM responses will be visible in the UI.

votes:

Integer value for the message. Only -1 and 1 will be visible in the UI as dislike and like respectively.

Returns:

Result: The status of the update.

Raises:

Exception: The update request was unsuccessful.

set_chat_session_collection(chat_session_id: str, collection_id: str | None) str

Set the collection for a chat_session

Args:
chat_session_id:

ID of the chat session

collection_id:

ID of the collection, or None to chat with the LLM only.

Returns:

str: ID of the updated chat session

set_chat_session_prompt_template(chat_session_id: str, prompt_template_id: str | None) str

Set the prompt template for a chat_session

Args:
chat_session_id:

ID of the chat session

prompt_template_id:

ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.

Returns:

str: ID of the updated chat session

set_collection_prompt_template(collection_id: str, prompt_template_id: str | None, strict_check: bool = False) str

Set the prompt template for a collection

Args:
collection_id:

ID of the collection to update.

prompt_template_id:

ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.

strict_check:

whether to check that the collection’s embedding model and the prompt template are optimally compatible

Returns:

str: ID of the updated collection.

share_collection(collection_id: str, permission: Permission) ShareResponseStatus

Share a collection to a user.

The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be shared.

Args:
collection_id:

ID of the collection to share.

permission:

Defines the rule for sharing, i.e. permission level.

Returns:

ShareResponseStatus: Status of share request.

share_prompt(prompt_id: str, permission: Permission) ShareResponseStatus

Share a prompt template to a user.

Args:
prompt_id:

ID of the prompt template to share.

permission:

Defines the rule for sharing, i.e. permission level.

Returns:

ShareResponseStatus: Status of share request.

summarize_content(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, pii_settings: Dict | None = None, timeout: float | None = None, **kwargs: Any) Answer

Summarize one or more contexts using an LLM.

Effective prompt created (excluding the system prompt):

"{pre_prompt_summary}
"""
{text_context_list}
"""
{prompt_summary}"
Args:
text_context_list:

List of raw text strings to be summarized.

system_prompt:

Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto for the model default or None for h2oGPTe defaults. Defaults to ‘’ for no system prompt.

pre_prompt_summary:

Text that is prepended before the list of texts. The default can be customized per environment, but the standard default is "In order to write a concise single-paragraph or bulleted list summary, pay attention to the following text:\n"

prompt_summary:

Text that is appended after the list of texts. The default can be customized per environment, but the standard default is "Using only the text above, write a condensed and concise summary of key results (preferably as bullet points):\n"

llm:

Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0) — The seed for the random number generator, only used if temperature > 0; seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
response_format (str, default: “text”) — Output type, one of [“text”, “json_object”, “json_code”].
guided_json (str, default: “”) — If specified, the output will follow the JSON schema.
guided_regex (str, default: “”) — If specified, the output will follow the regex pattern.
guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
guided_grammar (str, default: “”) — If specified, the output will follow the context-free grammar.
guided_whitespace_pattern (str, default: “”) — If specified, will override the default whitespace pattern for guided JSON decoding.

pii_settings:

PII Settings.

timeout:

Timeout in seconds.

kwargs:
Dictionary of kwargs to pass to h2oGPT. Not recommended, see https://github.com/h2oai/h2ogpt for source code. Valid keys:

h2ogpt_key: str = “”
chat_conversation: list[tuple[str, str]] | None = None
docs_ordering_type: str | None = “best_near_prompt”
max_input_tokens: int = -1
docs_token_handling: str = “split_or_merge”
docs_joiner: str = “\n\n”
image_file: Union[str, list] = None

Returns:

Answer: The response text and any errors.

Raises:

TimeoutError: If response isn’t completed in timeout seconds.
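Summarizing ad-hoc strings without any collection can be sketched like this. A minimal sketch: `client` is assumed to be an H2OGPTE instance, the llm_args values are just example settings, and the helper name is illustrative.

```python
def summarize_texts(client, texts, llm=None):
    # Summarize raw strings directly; returns the Answer's response text
    answer = client.summarize_content(
        text_context_list=texts,
        llm=llm,
        llm_args={"temperature": 0, "max_new_tokens": 512},
    )
    if answer.error:
        raise RuntimeError(answer.error)
    return answer.content
```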

summarize_document(*args, **kwargs) DocumentSummary
tag_document(document_id: str, tag_name: str) str

Adds a tag to a document.

Args:
document_id:

String id of the document to attach the tag to.

tag_name:

String representing the tag to attach.

Returns:

String: The id of the newly created tag.
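Together with untag_document below, tags can be swapped in place. A minimal sketch, assuming `client` is an H2OGPTE instance; the helper name is illustrative.

```python
def retag_document(client, document_id, old_tag, new_tag):
    # Replace one tag with another; both calls return a tag id
    client.untag_document(document_id, old_tag)
    return client.tag_document(document_id, new_tag)
```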

unshare_collection(collection_id: str, permission: Permission) ShareResponseStatus

Remove sharing of a collection to a user.

The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be un-shared. In the case of un-sharing, the Permission’s user is sufficient.

Args:
collection_id:

ID of the collection to un-share.

permission:

Defines the user for which collection access is revoked.

Returns:

ShareResponseStatus: Status of share request.

unshare_collection_for_all(collection_id: str) ShareResponseStatus

Remove sharing of a collection to all other users but the original owner

Args:
collection_id:

ID of the collection to un-share.

Returns:

ShareResponseStatus: Status of share request.

unshare_prompt(prompt_id: str, permission: Permission) ShareResponseStatus

Remove sharing of a prompt template to a user.

Args:
prompt_id:

ID of the prompt template to un-share.

permission:

Defines the user for which collection access is revoked.

Returns:

ShareResponseStatus: Status of share request.

unshare_prompt_for_all(prompt_id: str) ShareResponseStatus

Remove sharing of a prompt template to all other users but the original owner

Args:
prompt_id:

ID of the prompt template to un-share.

Returns:

ShareResponseStatus: Status of share request.

untag_document(document_id: str, tag_name: str) str

Removes an existing tag from a document.

Args:
document_id:

String id of the document to remove the tag from.

tag_name:

String representing the tag to remove.

Returns:

String: The id of the removed tag.

update_collection(collection_id: str, name: str, description: str) str

Update the metadata for a given collection.

All variables are required. You can use h2ogpte.get_collection(<id>).name or description to get the existing values if you only want to change one or the other.

Args:
collection_id:

ID of the collection to update.

name:

New name of the collection, this is required.

description:

New description of the collection, this is required.

Returns:

str: ID of the updated collection.
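Since both fields are required, changing only the name means fetching the current description first, as described above. A minimal sketch, assuming `client` is an H2OGPTE instance; the helper name is illustrative.

```python
def rename_collection(client, collection_id, new_name):
    # Both name and description are required, so fetch the current
    # description before updating only the name
    current = client.get_collection(collection_id)
    return client.update_collection(
        collection_id=collection_id,
        name=new_name,
        description=current.description,
    )
```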

update_collection_rag_type(collection_id: str, name: str, description: str, rag_type: str) str

Update the metadata for a given collection.

All variables are required. You can use h2ogpte.get_collection(<id>).name or description to get the existing values if you only want to change one or the other.

Args:
collection_id:

ID of the collection to update.

name:

New name of the collection, this is required.

description:

New description of the collection, this is required.

rag_type: str, one of:

"auto" Automatically select the best rag_type.

"llm_only" LLM Only - Answer the query without any supporting document contexts. Requires 1 LLM call.

"rag" RAG (Retrieval Augmented Generation) - Use supporting document contexts to answer the query. Requires 1 LLM call.

"hyde1" LLM Only + RAG composite - HyDE RAG (Hypothetical Document Embedding). Use 'LLM Only' response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.

"hyde2" HyDE + RAG composite - Use the 'HyDE RAG' response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.

"rag+" Summary RAG - Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.

"all_data" All Data RAG - Like Summary RAG, but includes all document chunks. Uses recursive summarization to overcome LLM context limits. Can require several LLM calls.

Returns:

str: ID of the updated collection.

update_prompt_template(id: str, name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None) str

Update a prompt template

Args:
id:

String ID of the prompt template to update

name:

Name of the prompt template

description:

Description of the prompt template

lang:

Language code

system_prompt:

System Prompt

pre_prompt_query:

Text that is prepended before the contextual document chunks.

prompt_query:

Text that is appended after the contextual document chunks, just before the user’s message.

hyde_no_rag_llm_prompt_extension:

LLM prompt extension.

pre_prompt_summary:

Prompt that goes before each large piece of text to summarize

prompt_summary:

Prompt that goes after each large piece of text to summarize

system_prompt_reflection:

System Prompt for self-reflection

pre_prompt_reflection:

deprecated - ignored

prompt_reflection:

Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer

auto_gen_description_prompt:

prompt to create a description of the collection.

auto_gen_document_summary_pre_prompt_summary:

pre_prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_summary_prompt_summary:

prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_sample_questions_prompt:

prompt to create sample questions for a freshly imported document (if enabled).

default_sample_questions:

default sample questions in case there are no auto-generated sample questions.

image_batch_final_prompt:

Prompt used to reduce the answers from all image batches for vision models

image_batch_image_prompt:

Prompt used for each image batch for vision models

Returns:

str: The ID of the updated prompt template.

update_question_reply_feedback(reply_id: str, expected_answer: str, user_comment: str)

Update feedback for a specific answer to a question.

Args:
reply_id:

UUID of the reply.

expected_answer:

Expected answer.

user_comment:

User comment.

Returns:

None

update_tag(tag_name: str, description: str, format: str) str

Updates a tag.

Args:
tag_name:

String representing the tag to update.

description:

String describing the tag.

format:

String representing the format of the tag.

Returns:

String: The id of the updated tag.

upload(file_name: str, file: Any) str

Upload a file to the H2OGPTE backend.

Uploaded files are not yet accessible and need to be ingested into a collection.

See Also:

ingest_uploads: Add the uploaded files to a collection. delete_upload: Delete uploaded file

Args:
file_name:

What to name the file on the server, must include file extension.

file:

File object to upload, often an opened file from with open(…) as f.

Returns:

str: The upload id to be used in ingest jobs.

Raises:

Exception: The upload request was unsuccessful.

class h2ogpte.H2OGPTEAsync(address: str, api_key: str | None = None, token_provider: AsyncTokenProvider | None = None, verify: bool | str = True, strict_version_check: bool = False)

Bases: object

Connect to and interact with an h2oGPTe server, via an async interface.

INITIAL_WAIT_INTERVAL = 0.1
MAX_WAIT_INTERVAL = 1.0
TIMEOUT = 3600.0
WAIT_BACKOFF_FACTOR = 1.4
async answer_question(question: str, system_prompt: str | None = '', pre_prompt_query: str | None = None, prompt_query: str | None = None, text_context_list: List[str] | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, chat_conversation: List[Tuple[str, str]] | None = None, pii_settings: Dict | None = None, timeout: float | None = None, **kwargs: Any) Answer

Send a message and get a response from an LLM.

Note: This method is only recommended if you are passing a chat conversation or for low-volume testing. For general chat with an LLM, we recommend session.query() for higher throughput in multi-user environments. The following code sample shows the recommended method:

# Establish a chat session
chat_session_id = await client.create_chat_session()
# Connect to the chat session
async with client.connect(chat_session_id) as session:
    # Send a basic query and print the reply
    reply = await session.query("Hi")
    print(reply.content)

Format of input content:

{text_context_list}
"""\n{chat_conversation}{question}
Args:
question:

Text query to send to the LLM.

text_context_list:

List of raw text strings to be included; will be converted to a single string like “\n\n”.join(text_context_list).
system_prompt:

Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto for the model default, or None for h2oGPTe default. Defaults to ‘’ for no system prompt.

pre_prompt_query:

Text that is prepended before the contextual document chunks in text_context_list. Only used if text_context_list is provided.

prompt_query:

Text that is appended after the contextual document chunks in text_context_list. Only used if text_context_list is provided.

llm:

Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0) — The seed for the random number generator when sampling during generation (if temp > 0 or top_k > 1 or top_p < 1); seed=0 picks a random seed.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
response_format (str, default: “text”) — Output type, one of [“text”, “json_object”, “json_code”].
guided_json (str, default: “”) — If specified, the output will follow the JSON schema.
guided_regex (str, default: “”) — If specified, the output will follow the regex pattern.
guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
guided_grammar (str, default: “”) — If specified, the output will follow the context-free grammar.
guided_whitespace_pattern (str, default: “”) — If specified, will override the default whitespace pattern for guided JSON decoding.

chat_conversation:

List of tuples of (human, bot) conversation that will be prepended to the (question, None) pair for the query.

pii_settings:

PII Settings.

timeout:

Timeout in seconds.

kwargs:
Dictionary of kwargs to pass to h2oGPT. Not recommended, see https://github.com/h2oai/h2ogpt for source code. Valid keys:

h2ogpt_key: str = “”
chat_conversation: list[tuple[str, str]] | None = None
docs_ordering_type: str | None = “best_near_prompt”
max_input_tokens: int = -1
docs_token_handling: str = “split_or_merge”
docs_joiner: str = “\n\n”
image_file: Union[str, list] = None

Returns:

Answer: The response text and any errors.

Raises:

TimeoutError: If response isn’t completed in timeout seconds.
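Driving the async client from synchronous code can be sketched like this. A minimal sketch: `client` is assumed to be an H2OGPTEAsync instance, the conversation history is made up for illustration, and the helper name is illustrative.

```python
import asyncio

async def ask(client, question):
    # One-off question with a short prior conversation; no collection involved
    answer = await client.answer_question(
        question=question,
        chat_conversation=[("Hello", "Hi, how can I help?")],
        timeout=60,
    )
    return answer.content

# From sync code, something like asyncio.run(ask(client, "What is RAG?"))
# would drive the coroutine.
```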

async cancel_job(job_id: str) Result

Stops a specific job from running on the server.

Args:
job_id:

String id of the job to cancel.

Returns:

Result: Status of canceling the job.

connect(chat_session_id: str, rag_type: str | None = None, prompt_template_id: str | None = None) SessionAsync

Create and participate in a chat session. This is a live connection to the H2OGPTE server, scoped to a specific chat session on top of a single collection of documents. Users will find all questions and responses in this session in a single chat history in the UI.

Args:
chat_session_id:

ID of the chat session to connect to.

rag_type:

RAG type to use.

prompt_template_id:

ID of the prompt template to use.

Returns:

Session: Live chat session connection with an LLM.

async count_assets() ObjectCount

Counts number of objects owned by the user.

Returns:

ObjectCount: The count of chat sessions, collections, and documents.

async count_chat_sessions() int

Returns the count of chat sessions owned by the user.

async count_chat_sessions_for_collection(collection_id: str) int

Counts number of chat sessions in a specific collection.

Args:
collection_id:

String id of the collection to count chat sessions for.

Returns:

int: The count of chat sessions in that collection.

async count_collections() int

Counts number of collections owned by the user.

Returns:

int: The count of collections owned by the user.

async count_documents() int

Returns the count of documents accessed by the user.

async count_documents_in_collection(collection_id: str) int

Counts the number of documents in a specific collection.

Args:
collection_id:

String id of the collection to count documents for.

Returns:

int: The number of documents in that collection.

async count_documents_owned_by_me() int

Returns the count of documents owned by the user.

async count_prompt_templates() int

Counts number of prompt templates

Returns:

int: The count of prompt templates

async count_question_reply_feedback() int

Fetch user’s questions and answers count.

Returns:

int: the count of questions and replies.

async create_chat_session(collection_id: str | None = None) str

Creates a new chat session for asking questions (of documents).

Args:
collection_id:

String id of the collection to chat with. If None, chat with LLM directly.

Returns:

str: The ID of the newly created chat session.

async create_chat_session_on_default_collection() str

Creates a new chat session for asking questions of documents on the default collection.

Returns:

str: The ID of the newly created chat session.

async create_collection(name: str, description: str, embedding_model: str | None = None, prompt_template_id: str | None = None, collection_settings: dict | None = None) str

Creates a new collection.

Args:
name:

Name of the collection.

description:

Description of the collection

embedding_model:

Embedding model to use. Call list_embedding_models() to see the list of options.

prompt_template_id:

ID of the prompt template to get the prompts from. None to fall back to system defaults.

collection_settings:

(Optional) Dictionary with key/value pairs to configure certain collection specific settings like pii_settings or max_tokens_per_chunk.

Returns:

str: The ID of the newly created collection.
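Creating a collection and immediately opening a chat session on it can be sketched as follows. A minimal sketch, assuming `client` is an H2OGPTEAsync instance; the helper name is illustrative.

```python
import asyncio

async def new_collection_chat(client, name, description):
    # Create a collection, then open a chat session bound to it
    collection_id = await client.create_collection(name=name, description=description)
    chat_session_id = await client.create_chat_session(collection_id)
    return collection_id, chat_session_id
```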

async create_prompt_template(name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None, image_batch_image_prompt: str | None = None, image_batch_final_prompt: str | None = None) str

Create a new prompt template

Args:
name:

Name of the prompt template

description:

Description of the prompt template

lang:

Language code

system_prompt:

System Prompt

pre_prompt_query:

Text that is prepended before the contextual document chunks.

prompt_query:

Text that is appended after the contextual document chunks, just before the user’s message.

hyde_no_rag_llm_prompt_extension:

LLM prompt extension.

pre_prompt_summary:

Prompt that goes before each large piece of text to summarize

prompt_summary:

Prompt that goes after each large piece of text to summarize

system_prompt_reflection:

System Prompt for self-reflection

pre_prompt_reflection:

Deprecated - ignored

prompt_reflection:

Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer

auto_gen_description_prompt:

prompt to create a description of the collection.

auto_gen_document_summary_pre_prompt_summary:

pre_prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_summary_prompt_summary:

prompt_summary for summary of a freshly imported document (if enabled).

auto_gen_document_sample_questions_prompt:

prompt to create sample questions for a freshly imported document (if enabled).

default_sample_questions:

default sample questions in case there are no auto-generated sample questions.

image_batch_final_prompt:

Prompt used to reduce the answers from all image batches for vision models

image_batch_image_prompt:

Prompt used for each image batch for vision models

Returns:

str: The ID of the newly created prompt template.

async create_tag(tag_name: str) str

Creates a new tag.

Args:
tag_name:

String representing the tag to create.

Returns:

String: The id of the created tag.

async delete_chat_messages(chat_message_ids: Iterable[str]) Result

Deletes specific chat messages.

Args:
chat_message_ids:

List of string ids of chat messages to delete from the system.

Returns:

Result: Status of the delete job.

async delete_chat_sessions(chat_session_ids: Iterable[str]) Result

Deletes chat sessions and related messages.

Args:
chat_session_ids:

List of string ids of chat sessions to delete from the system.

Returns:

Result: Status of the delete job.

async delete_collections(collection_ids: Iterable[str], timeout: float | None = None) Job

Deletes collections from the environment. Documents in the collection will not be deleted.

Args:
collection_ids:

List of string ids of collections to delete from the system.

timeout:

Timeout in seconds.

async delete_document_summaries(summaries_ids: Iterable[str]) Result

Deletes document summaries.

Args:
summaries_ids:

List of string ids of a document summary to delete from the system.

Returns:

Result: Status of the delete job.

async delete_documents(document_ids: Iterable[str], timeout: float | None = None) Job

Deletes documents from the system.

Args:
document_ids:

List of string ids to delete from the system and all collections.

timeout:

Timeout in seconds.

async delete_documents_from_collection(collection_id: str, document_ids: Iterable[str], timeout: float | None = None) Job

Removes documents from a collection.

See Also: H2OGPTE.delete_documents for completely removing the document from the environment.

Args:
collection_id:

String of the collection to remove documents from.

document_ids:

List of string ids to remove from the collection.

timeout:

Timeout in seconds.
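The difference between removing documents from a collection and deleting them outright can be sketched as follows. The `client` instance and the helper names are illustrative; the calls match the signatures documented here:

```python
import asyncio  # used to drive the coroutines, e.g. asyncio.run(...)

async def prune_collection(client, collection_id, document_ids):
    # Remove the documents from this collection only; the documents
    # themselves remain in the system and in any other collections.
    await client.delete_documents_from_collection(
        collection_id, document_ids, timeout=300
    )

async def purge_documents(client, document_ids):
    # Permanently delete the documents from the system and all collections.
    await client.delete_documents(document_ids, timeout=300)
```

Use `prune_collection` when a document is shared across collections; `purge_documents` removes it everywhere and cannot be undone.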

async delete_prompt_templates(ids: Iterable[str]) Result

Deletes prompt templates.

Args:
ids:

List of string ids of prompt templates to delete from the system.

Returns:

Result: Status of the delete job.

async delete_upload(upload_id: str) str

Delete a file previously uploaded with the “upload” method.

See Also:

upload: Upload the files into the system to then be ingested into a collection.
ingest_uploads: Add the uploaded files to a collection.

Args:
upload_id:

ID of a file to remove

Returns:

upload_id: The ID of the removed upload.

Raises:

Exception: The delete upload request was unsuccessful.

async download_document(destination_directory: str | Path, destination_file_name: str, document_id: str) Path

Downloads a document to a local system directory.

Args:
destination_directory:

Destination directory to save file into.

destination_file_name:

Destination file name.

document_id:

Document ID.

Returns:

The Path to the file written to disk.
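A minimal sketch of downloading a stored document; the output directory and file name chosen here are illustrative, and `client` is assumed to be an already-connected async client:

```python
import asyncio
from pathlib import Path

async def save_document(client, document_id: str) -> Path:
    # Create a local destination directory and download the document into it.
    target_dir = Path("downloads")
    target_dir.mkdir(exist_ok=True)
    # The destination file name is illustrative; any name can be chosen.
    return await client.download_document(
        target_dir, f"{document_id}.pdf", document_id
    )

# Example: asyncio.run(save_document(client, "some-document-id"))
```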

async download_reference_highlighting(message_id: str, destination_directory: str, output_type: str = 'combined') list

Get PDFs with reference highlighting

Args:
message_id:

ID of the message to get references from

destination_directory:

Destination directory to save files into.

output_type: str, one of:

"combined": Generates a PDF file for each source document, with all relevant chunks highlighted in each respective file. This option consolidates all highlights for each source document into a single PDF, making it easy to view all highlights related to that document at once.

"split": Generates a separate PDF file for each chunk, with only the relevant chunk highlighted in each file. This option is useful for focusing on individual sections without interference from other parts of the text. The output file names will be in the format "{document_id}_{chunk_id}.pdf".

Returns:

list[Path]: List of paths of downloaded documents with highlighting
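A short sketch of both output modes. The first helper simply mirrors the file-name pattern documented for output_type="split"; the client call and the destination directory name are illustrative:

```python
def split_output_name(document_id: str, chunk_id: int) -> str:
    # File-name pattern documented for output_type="split".
    return f"{document_id}_{chunk_id}.pdf"

async def highlight_references(client, message_id: str) -> list:
    # One combined PDF per source document, with all relevant
    # chunks highlighted in each file.
    return await client.download_reference_highlighting(
        message_id, "highlights", output_type="combined"
    )
```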

async encode_for_retrieval(chunks: Iterable[str], embedding_model: str | None = None) List[List[float]]

Encode texts for semantic searching.

See Also: H2OGPTE.match for getting a list of chunks that semantically match each encoded text.

Args:
chunks:

List of strings of texts to be encoded.

embedding_model:

Embedding model to use. Call list_embedding_models() to see the list of options.

Returns:

List of lists of floats: each inner list is the embedding of the corresponding input text.
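Because the returned vectors are plain lists of floats, they can be compared directly, for example with cosine similarity. The ranking helper below is a sketch that encodes a query together with candidate texts in a single call; `client` is assumed to be an async client instance:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

async def rank_by_similarity(client, query: str, chunks):
    # Encode the query alongside the candidate texts in one call,
    # then rank the texts by similarity to the query.
    vectors = await client.encode_for_retrieval([query, *chunks])
    query_vec, chunk_vecs = vectors[0], vectors[1:]
    scored = zip(chunks, (cosine_similarity(query_vec, v) for v in chunk_vecs))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```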

async extract_data(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_extract: str | None = None, prompt_extract: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, pii_settings: Dict | None = None, timeout: float | None = None, **kwargs: Any) ExtractionAnswer

Extract information from one or more contexts using an LLM. pre_prompt_extract and prompt_extract variables must be used together. If these variables are not set, the inputs texts will be summarized into bullet points. Format of extract content:

"{pre_prompt_extract}"""
{text_context_list}
"""\n{prompt_extract}"

Examples:

extract = h2ogpte.extract_data(
    text_context_list=chunks,
    pre_prompt_extract="Pay attention and look at all people. "
                       "Your job is to collect their names.\n",
    prompt_extract="List all people's names as JSON.",
)
Args:
text_context_list:

List of raw text strings to extract data from.

system_prompt:

Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto or None for the model default. Defaults to ‘’ for no system prompt.

pre_prompt_extract:

Text that is prepended before the list of texts. If not set, the inputs will be summarized.

prompt_extract:

Text that is appended after the list of texts. If not set, the inputs will be summarized.

llm:

Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).

llm_args:
Dictionary of kwargs to pass to the llm. Valid keys:

temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, Most creative: 1
seed (int, default: 0) — The seed for the random number generator; only used if temperature > 0. seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
response_format (str, default: "text") — Output type, one of ["text", "json_object", "json_code"].
guided_json (str, default: "") — If specified, the output will follow the JSON schema.
guided_regex (str, default: "") — If specified, the output will follow the regex pattern.
guided_choice (Optional[List[str]], default: None) — If specified, the output will be exactly one of the choices.
guided_grammar (str, default: "") — If specified, the output will follow the context-free grammar.
guided_whitespace_pattern (str, default: "") — If specified, will override the default whitespace pattern for guided JSON decoding.

pii_settings:

PII Settings.

timeout:

Timeout in seconds.

kwargs:
Dictionary of kwargs to pass to h2oGPT. Not recommended, see https://github.com/h2oai/h2ogpt for source code. Valid keys:

h2ogpt_key: str = ""
chat_conversation: list[tuple[str, str]] | None = None
docs_ordering_type: str | None = "best_near_prompt"
max_input_tokens: int = -1
docs_token_handling: str = "split_or_merge"
docs_joiner: str = "\n\n"
image_file: Union[str, list] = None

Returns:

ExtractionAnswer: The list of text responses and any errors.

Raises:

TimeoutError: If response isn’t completed in timeout seconds.
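Building on the example above, llm_args can constrain extraction output to JSON. A sketch, assuming an async client instance `client`; the prompts and the helper name are illustrative, while temperature and response_format are the documented llm_args keys:

```python
import asyncio

async def extract_names(client, chunks):
    # temperature=0 for deterministic output; response_format="json_object"
    # asks the LLM to emit JSON.
    answer = await client.extract_data(
        text_context_list=chunks,
        pre_prompt_extract="Pay attention and look at all people. "
                           "Your job is to collect their names.\n",
        prompt_extract="List all people's names as JSON.",
        llm_args={"temperature": 0, "response_format": "json_object"},
        timeout=120,
    )
    # ExtractionAnswer holds the list of text responses and any errors.
    return answer
```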

async get_chat_session_prompt_template(chat_session_id: str) PromptTemplate | None

Get the prompt template for a chat session

Args:
chat_session_id:

ID of the chat session

Returns:

PromptTemplate: The prompt template of the chat session, or None if not set.

async get_chat_session_questions(chat_session_id: str, limit: int) List[SuggestedQuestion]

List suggested questions

Args:
chat_session_id:

A chat session ID of which to return the suggested questions

limit:

How many questions to return.

Returns:

List: A list of questions.

async get_chunks(collection_id: str, chunk_ids: Iterable[int]) List[Chunk]

Get the text of specific chunks in a collection.

Args:
collection_id:

String id of the collection to search in.

chunk_ids:

List of ints for the chunks to return. Chunks are indexed starting at 1.

Returns:

List of Chunk: The text of each requested chunk.

Raises:

Exception: One or more chunks could not be found.
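A short sketch of fetching chunk text; note the 1-based chunk indexing. The `text` attribute on Chunk is an assumption not shown in this excerpt, and `client` is an async client instance:

```python
import asyncio

async def chunk_preview(client, collection_id: str, chunk_ids):
    # Chunks are indexed starting at 1; an unknown id raises an exception.
    chunks = await client.get_chunks(collection_id, list(chunk_ids))
    # Assumes each Chunk exposes its payload as a `text` attribute.
    return [chunk.text for chunk in chunks]

# Example: asyncio.run(chunk_preview(client, "collection-id", [1, 2, 3]))
```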

async get_collection(collection_id: str) Collection

Get metadata about a collection.

Args:
collection_id:

String id of the collection to search for.

Returns:

Collection: Metadata about the collection.

Raises:

KeyError: The collection was not found.

async get_collection_for_chat_session(chat_session_id: str) Collection

Get metadata about the collection of a chat session.

Args:
chat_session_id:

String id of the chat session to search for.

Returns:

Collection: Metadata about the collection.

async get_collection_prompt_template(collection_id: str) PromptTemplate | None

Get the prompt template for a collection

Args:
collection_id:

ID of the collection

Returns:

PromptTemplate: The prompt template of the collection, or None if not set.

async get_collection_questions(collection_id: str, limit: int) List[SuggestedQuestion]

List suggested questions

Args:
collection_id:

A collection ID of which to return the suggested questions

limit:

How many questions to return.

Returns:

List: A list of questions.

async get_default_collection() CollectionInfo

Get the default collection, to be used for collection API-keys.

Returns:

CollectionInfo: Default collection info.

async get_document(document_id: str) Document

Fetches information about a specific document.

Args:
document_id:

String id of the document.

Returns:

Document: Metadata about the Document.

Raises:

KeyError: The document was not found.

async get_job(job_id: str) Job

Fetches information about a specific job.

Args:
job_id:

String id of the job.

Returns:

Job: Metadata about the Job.

async get_llm_names() List[str]

Lists names of available LLMs in the environment.

Returns:

list of string: Name of each available model.

async get_llm_performance_by_llm(interval: str) List[LLMPerformance]
async get_llm_usage_24h() float
async get_llm_usage_24h_by_llm() List[LLMUsage]
async get_llm_usage_24h_with_limits() LLMUsageLimit
async get_llm_usage_6h() float
async get_llm_usage_6h_by_llm() List[LLMUsage]
async get_llm_usage_by_llm(interval: str) List[LLMUsage]
async get_llm_usage_with_limits(interval: str) LLMUsageLimit
async get_llms() List[Dict[str, Any]]

Lists metadata information about available LLMs in the environment.

Returns:

list of dict (string, ANY): Name and details about each available model.
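A sketch combining the model-listing calls, for example to prefer a vision-capable model when one is deployed; the helper name is illustrative and `client` is an async client instance:

```python
import asyncio

async def pick_model(client):
    # Prefer a vision-capable model; otherwise fall back to the first LLM.
    vision_names = await client.get_vision_capable_llm_names()
    if vision_names:
        return vision_names[0]
    names = await client.get_llm_names()
    return names[0] if names else None
```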

async get_meta() Meta

Returns various information about the server environment, including the current build version, license information, the user, etc.

async get_prompt_template(id: str | None = None) PromptTemplate

Get a prompt template

Args:
id:

String id of the prompt template to retrieve or None for default

Returns:

PromptTemplate: The requested prompt template.

Raises:

KeyError: The prompt template was not found.

async get_scheduler_stats() SchedulerStats

Count the number of global, pending jobs on the server.

Returns:

SchedulerStats: The number of pending jobs in the queue.

async get_tag(tag_name: str) Tag

Returns an existing tag.

Args:
tag_name:

String representing the name of the tag to retrieve.

Returns:

Tag: The requested tag.

Raises:

KeyError: The tag was not found.

async get_vision_capable_llm_names() List[str]

Lists names of available vision-capable multi-modal LLMs (that can natively handle images as input) in the environment.

Returns:

list of string: Name of each available model.

async import_collection_into_collection(collection_id: str, src_collection_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, ocr_model: str = 'auto', timeout: float | None = None)

Import all documents from a collection into an existing collection

Args:
collection_id:

Collection ID to add documents to.

src_collection_id:

Collection ID to import documents from.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds.

async import_document_into_collection(collection_id: str, document_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, copy_document: bool = False, ocr_model: str = 'auto', timeout: float | None = None)

Import an already stored document to an existing collection

Args:
collection_id:

Collection ID to add documents to.

document_id:

Document ID to add.

gen_doc_summaries:

Whether to auto-generate document summaries (uses LLM)

gen_doc_questions:

Whether to auto-generate sample questions for each document (uses LLM)

copy_document:

Whether to save a new copy of the document

ocr_model:

Which OCR model to use. Pass empty string to see choices.

timeout:

Timeout in seconds.
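A sketch of linking one stored document into a second collection without duplicating the underlying file; the helper name and argument values are illustrative:

```python
import asyncio

async def share_document(client, document_id: str, collection_id: str):
    # copy_document=False reuses the existing stored document rather than
    # saving a new copy; summaries and questions are left disabled here.
    await client.import_document_into_collection(
        collection_id,
        document_id,
        gen_doc_summaries=False,
        gen_doc_questions=False,
        copy_document=False,
        timeout=600,
    )
```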

async ingest_from_azure_blob_storage(collection_id: str, container: str, path: str | List[str], account_name: str, credentials: AzureKeyCredential | AzureSASCredential | None = None, gen_doc_summaries: bool = False, gen_doc_questions: bool =