h2ogpte package
Submodules
h2ogpte.h2ogpte module
- class h2ogpte.h2ogpte.H2OGPTE(address: str, api_key: str | None = None, token_provider: TokenProvider | None = None, verify: bool | str = True, strict_version_check: bool = False)
Bases:
object
Connect to and interact with an h2oGPTe server.
- INITIAL_WAIT_INTERVAL = 0.1
- MAX_WAIT_INTERVAL = 1.0
- TIMEOUT = 3600.0
- WAIT_BACKOFF_FACTOR = 1.4
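For orientation, a minimal construction sketch (the server address and API key below are placeholders, not real credentials); the client object created here is assumed by the usage sketches throughout this page:

from h2ogpte import H2OGPTE

client = H2OGPTE(
    address="https://h2ogpte.example.com",  # placeholder server URL
    api_key="sk-XXXX",  # placeholder API key
)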
- answer_question(question: str, system_prompt: str | None = '', pre_prompt_query: str | None = None, prompt_query: str | None = None, text_context_list: List[str] | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, chat_conversation: List[Tuple[str, str]] | None = None, timeout: float | None = None, **kwargs: Any) Answer
Send a message and get a response from an LLM.
Note: For general chat with an LLM, we recommend session.query() for higher throughput in multi-user environments. The following code sample shows the recommended method:
# Establish a chat session
chat_session_id = client.create_chat_session()
# Connect to the chat session
with client.connect(chat_session_id) as session:
    # Send a basic query and print the reply
    reply = session.query("Hello", timeout=60)
    print(reply.content)
Format of input content:
{text_context_list}
"""
{chat_conversation}{question}
- Args:
- question:
Text query to send to the LLM.
- text_context_list:
List of raw text strings to be included; they will be joined into a single string like "\n\n".join(text_context_list).
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context on how to respond. Use 'auto' for the model default, or None for the h2oGPTe default. Defaults to '' for no system prompt.
- pre_prompt_query:
Text that is prepended before the contextual document chunks in text_context_list. Only used if text_context_list is provided.
- prompt_query:
Text that is appended after the contextual document chunks in text_context_list. Only used if text_context_list is provided.
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0): The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0): The seed for the random number generator; only used if temperature > 0. seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1): The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0): If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07): The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024): Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512): Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- chat_conversation:
List of (human, bot) conversation tuples that will be prepended to the (question, None) pair for this query.
- timeout:
Timeout in seconds.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
Answer: The response text and any errors.
- Raises:
TimeoutError: If response isn’t completed in timeout seconds.
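A minimal usage sketch, assuming client is an H2OGPTE instance as constructed above and the context strings are supplied by the caller:

answer = client.answer_question(
    question="What was the total revenue?",
    text_context_list=["Q1 revenue was $10M.", "Q2 revenue was $12M."],
    llm_args={"temperature": 0, "max_new_tokens": 256},
    timeout=120,
)
print(answer.content)  # the response text; answer.error holds any error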
- cancel_job(job_id: str) Result
Stops a specific job from running on the server.
- Args:
- job_id:
String id of the job to cancel.
- Returns:
Result: Status of canceling the job.
- connect(chat_session_id: str) Session
Create and participate in a chat session.
This is a live connection to the H2OGPTE server contained to a specific chat session on top of a single collection of documents. Users will find all questions and responses in this session in a single chat history in the UI.
- Args:
- chat_session_id:
ID of the chat session to connect to.
- Returns:
Session: Live chat session connection with an LLM.
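A short workflow sketch, assuming collection_id refers to an existing collection:

chat_session_id = client.create_chat_session(collection_id)
with client.connect(chat_session_id) as session:
    reply = session.query("Summarize the key findings.", timeout=60)
    print(reply.content)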
- count_assets() ObjectCount
Counts number of objects owned by the user.
- Returns:
ObjectCount: The count of chat sessions, collections, and documents.
- count_chat_sessions() int
Counts number of chat sessions owned by the user.
- Returns:
int: The count of chat sessions owned by the user.
- count_chat_sessions_for_collection(collection_id: str) int
Counts number of chat sessions in a specific collection.
- Args:
- collection_id:
String id of the collection to count chat sessions for.
- Returns:
int: The count of chat sessions in that collection.
- count_collections() int
Counts number of collections owned by the user.
- Returns:
int: The count of collections owned by the user.
- count_documents() int
Counts number of documents accessed by the user.
- Returns:
int: The count of documents accessed by the user.
- count_documents_in_collection(collection_id: str) int
Counts the number of documents in a specific collection.
- Args:
- collection_id:
String id of the collection to count documents for.
- Returns:
int: The number of documents in that collection.
- count_documents_owned_by_me() int
Counts number of documents owned by the user.
- Returns:
int: The count of documents owned by the user.
- count_prompt_templates() int
Counts number of prompt templates
- Returns:
int: The count of prompt templates
- count_question_reply_feedback() int
Counts the user's questions and replies that have feedback.
- Returns:
int: The count of questions and replies that have user feedback.
- create_chat_session(collection_id: str | None = None) str
Creates a new chat session for asking questions (of documents).
- Args:
- collection_id:
String id of the collection to chat with. If None, chat with LLM directly.
- Returns:
str: The ID of the newly created chat session.
- create_chat_session_on_default_collection() str
Creates a new chat session for asking questions of documents on the default collection.
- Returns:
str: The ID of the newly created chat session.
- create_collection(name: str, description: str, embedding_model: str | None = None, prompt_template_id: str | None = None) str
Creates a new collection.
- Args:
- name:
Name of the collection.
- description:
Description of the collection.
- embedding_model:
Embedding model to use. Call list_embedding_models() to see the options.
- prompt_template_id:
ID of the prompt template to get the prompts from. None to fall back to system defaults.
- Returns:
str: The ID of the newly created collection.
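A minimal sketch; the name and description are arbitrary examples:

collection_id = client.create_collection(
    name="Contracts",
    description="Signed vendor contracts",
)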
- create_prompt_template(name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None) str
Create a new prompt template
- Args:
- name:
Name of the prompt template
- description:
Description of the prompt template
- lang:
Language code
- system_prompt:
System Prompt
- pre_prompt_query:
Text that is prepended before the contextual document chunks.
- prompt_query:
Text that is appended after the contextual document chunks, immediately before the user's message.
- hyde_no_rag_llm_prompt_extension:
LLM prompt extension.
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- system_prompt_reflection:
System Prompt for self-reflection
- pre_prompt_reflection:
deprecated - ignored
- prompt_reflection:
Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer
- auto_gen_description_prompt:
prompt to create a description of the collection.
- auto_gen_document_summary_pre_prompt_summary:
pre_prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_summary_prompt_summary:
prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_sample_questions_prompt:
prompt to create sample questions for a freshly imported document (if enabled).
- default_sample_questions:
default sample questions in case there are no auto-generated sample questions.
- Returns:
str: The ID of the newly created prompt template.
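A sketch that creates a template and attaches it to a collection with set_collection_prompt_template (documented below); the names and prompt text are arbitrary examples:

template_id = client.create_prompt_template(
    name="terse-answers",
    description="Answers in a single sentence",
    system_prompt="You are a terse assistant. Answer in one sentence.",
)
client.set_collection_prompt_template(collection_id, template_id)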
- delete_chat_messages(chat_message_ids: Iterable[str]) Result
Deletes specific chat messages.
- Args:
- chat_message_ids:
List of string ids of chat messages to delete from the system.
- Returns:
Result: Status of the delete job.
- delete_chat_sessions(chat_session_ids: Iterable[str]) Result
Deletes chat sessions and related messages.
- Args:
- chat_session_ids:
List of string ids of chat sessions to delete from the system.
- Returns:
Result: Status of the delete job.
- delete_collections(collection_ids: Iterable[str], timeout: float | None = None)
Deletes collections from the environment.
Documents in the collection will not be deleted.
- Args:
- collection_ids:
List of string ids of collections to delete from the system.
- timeout:
Timeout in seconds.
- delete_document_summaries(summaries_ids: Iterable[str]) Result
Deletes document summaries.
- Args:
- summaries_ids:
List of string ids of a document summary to delete from the system.
- Returns:
Result: Status of the delete job.
- delete_documents(document_ids: Iterable[str], timeout: float | None = None)
Deletes documents from the system.
- Args:
- document_ids:
List of string ids to delete from the system and all collections.
- timeout:
Timeout in seconds.
- delete_documents_from_collection(collection_id: str, document_ids: Iterable[str], timeout: float | None = None)
Removes documents from a collection.
See Also: H2OGPTE.delete_documents for completely removing the document from the environment.
- Args:
- collection_id:
String of the collection to remove documents from.
- document_ids:
List of string ids to remove from the collection.
- timeout:
Timeout in seconds.
- delete_prompt_templates(ids: Iterable[str]) Result
Deletes prompt templates
- Args:
- ids:
List of string ids of prompt templates to delete from the system.
- Returns:
Result: Status of the delete job.
- delete_upload(upload_id: str) str
Delete a file previously uploaded with the “upload” method.
- See Also:
upload: Upload the files into the system to then be ingested into a collection.
ingest_uploads: Add the uploaded files to a collection.
- Args:
- upload_id:
ID of a file to remove
- Returns:
str: The upload id of the removed file.
- Raises:
Exception: The delete upload request was unsuccessful.
- download_document(destination_directory: str, destination_file_name: str, document_id: str) Path
Downloads a document to a local system directory.
- Args:
- destination_directory:
Destination directory to save file into.
- destination_file_name:
Destination file name.
- document_id:
Document ID.
- Returns:
Path: Path of downloaded document
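A minimal sketch; the destination directory is assumed to exist and the file name is arbitrary:

path = client.download_document(
    destination_directory="downloads",
    destination_file_name="contract.pdf",
    document_id=document_id,
)
print(path)  # local path of the downloaded file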
- encode_for_retrieval(chunks: List[str], embedding_model: str | None = None) List[List[float]]
Encode texts for semantic searching.
See Also: H2OGPTE.match for getting a list of chunks that semantically match each encoded text.
- Args:
- chunks:
List of strings of texts to be encoded.
- embedding_model:
Embedding model to use. Call list_embedding_models() to see the options.
- Returns:
List of lists of floats: each inner list is the encoding of the corresponding original text.
- extract_data(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_extract: str | None = None, prompt_extract: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, **kwargs: Any) ExtractionAnswer
Extract information from one or more contexts using an LLM.
pre_prompt_extract and prompt_extract must be used together. If these variables are not set, the input texts will be summarized into bullet points.
Format of extract content:
"{pre_prompt_extract}""" {text_context_list} """\n{prompt_extract}"
Examples:
extract = h2ogpte.extract_data(
    text_context_list=chunks,
    pre_prompt_extract="Pay attention and look at all people. Your job is to collect their names.\n",
    prompt_extract="List all people's names as JSON.",
)
- Args:
- text_context_list:
List of raw text strings to extract data from.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context on how to respond. Use 'auto' or None for the model default. Defaults to '' for no system prompt.
- pre_prompt_extract:
Text that is prepended before the list of texts. If not set, the inputs will be summarized.
- prompt_extract:
Text that is appended after the list of texts. If not set, the inputs will be summarized.
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0): The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0): The seed for the random number generator; only used if temperature > 0. seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1): The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0): If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07): The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024): Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512): Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
ExtractionAnswer: The list of text responses and any errors.
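Continuing the example above, a sketch of reading the result; it assumes the returned ExtractionAnswer exposes the list of text responses as its content attribute:

extract = client.extract_data(
    text_context_list=chunks,
    pre_prompt_extract="Pay attention and look at all people. Your job is to collect their names.\n",
    prompt_extract="List all people's names as JSON.",
)
for response in extract.content:  # assumption: one text response per LLM call
    print(response)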
- get_chat_session_prompt_template(chat_session_id: str) PromptTemplate | None
Get the prompt template for a chat session.
- Args:
- chat_session_id:
ID of the chat session
- Returns:
PromptTemplate: The prompt template of the chat session, or None if not set.
- get_chat_session_questions(chat_session_id: str, limit: int) List[SuggestedQuestion]
List suggested questions
- Args:
- chat_session_id:
ID of the chat session for which to return suggested questions.
- limit:
How many questions to return.
- Returns:
List: A list of questions.
- get_chunks(collection_id: str, chunk_ids: Iterable[int]) List[Chunk]
Get the text of specific chunks in a collection.
- Args:
- collection_id:
String id of the collection to search in.
- chunk_ids:
List of ints for the chunks to return. Chunks are indexed starting at 1.
- Returns:
list of Chunk: The text of each chunk.
- Raises:
Exception: One or more chunks could not be found.
- get_collection(collection_id: str) Collection
Get metadata about a collection.
- Args:
- collection_id:
String id of the collection to search for.
- Returns:
Collection: Metadata about the collection.
- Raises:
KeyError: The collection was not found.
- get_collection_for_chat_session(chat_session_id: str) Collection
Get metadata about the collection of a chat session.
- Args:
- chat_session_id:
String id of the chat session to search for.
- Returns:
Collection: Metadata about the collection.
- get_collection_prompt_template(collection_id: str) PromptTemplate | None
Get the prompt template for a collection
- Args:
- collection_id:
ID of the collection
- Returns:
PromptTemplate: The prompt template of the collection, or None if not set.
- get_collection_questions(collection_id: str, limit: int) List[SuggestedQuestion]
List suggested questions
- Args:
- collection_id:
ID of the collection for which to return suggested questions.
- limit:
How many questions to return.
- Returns:
List: A list of questions.
- get_default_collection() CollectionInfo
Get the default collection, to be used for collection API-keys.
- Returns:
CollectionInfo: Default collection info.
- get_document(document_id: str) Document
Fetches information about a specific document.
- Args:
- document_id:
String id of the document.
- Returns:
Document: Metadata about the Document.
- Raises:
KeyError: The document was not found.
- get_job(job_id: str) Job
Fetches information about a specific job.
- Args:
- job_id:
String id of the job.
- Returns:
Job: Metadata about the Job.
- get_llm_names() List[str]
Lists names of available LLMs in the environment.
- Returns:
list of string: Name of each available model.
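A sketch of picking a model by name for subsequent calls:

names = client.get_llm_names()
answer = client.answer_question(question="Hello", llm=names[0])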
- get_llm_usage_24h() float
- get_llm_usage_24h_with_limits() LLMUsageLimit
- get_llm_usage_6h() float
- get_llm_usage_with_limits(interval: str) LLMUsageLimit
- get_llms() List[dict]
Lists metadata information about available LLMs in the environment.
- Returns:
list of dict (string, ANY): Name and details about each available model.
- get_meta() Meta
Returns information about the environment and the user.
- Returns:
Meta: Details about the version and license of the environment and the user’s name and email.
- get_prompt_template(id: str | None = None) PromptTemplate
Get a prompt template
- Args:
- id:
String id of the prompt template to retrieve or None for default
- Returns:
PromptTemplate: prompts
- Raises:
KeyError: The prompt template was not found.
- get_scheduler_stats() SchedulerStats
Count the number of global, pending jobs on the server.
- Returns:
SchedulerStats: The queue length for number of jobs.
- import_collection_into_collection(collection_id: str, src_collection_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Import all documents from a collection into an existing collection
- Args:
- collection_id:
Collection ID to add documents to.
- src_collection_id:
Collection ID to import documents from.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds.
- import_document_into_collection(collection_id: str, document_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Import an already stored document to an existing collection
- Args:
- collection_id:
Collection ID to add documents to.
- document_id:
Document ID to add.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds.
- ingest_from_file_system(collection_id: str, root_dir: str, glob: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Add files from the local system into a collection.
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- root_dir:
String path of where to look for files.
- glob:
String of the glob pattern used to match files in the root directory.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds
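A minimal sketch; the directory and glob pattern are arbitrary examples:

client.ingest_from_file_system(
    collection_id=collection_id,
    root_dir="./docs",
    glob="**/*.pdf",
    timeout=1200,
)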
- ingest_uploads(collection_id: str, upload_ids: Iterable[str], gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Add uploaded documents into a specific collection.
- See Also:
upload: Upload the files into the system to then be ingested into a collection. delete_upload: Delete uploaded file
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- upload_ids:
List of string ids of each uploaded document to add to the collection.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds
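A sketch of the upload-then-ingest workflow using the upload method documented below; report.pdf is a placeholder file name:

with open("report.pdf", "rb") as f:
    upload_id = client.upload("report.pdf", f)
client.ingest_uploads(collection_id, [upload_id], timeout=600)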
- ingest_website(collection_id: str, url: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, follow_links: bool = False, timeout: float | None = None)
Crawl and ingest a website into a collection.
The web page at this URL will be imported.
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- url:
String of the url to crawl.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- follow_links:
Whether to import all web pages linked from this URL. External links will be ignored. Links to other pages on the same domain will be followed as long as they are at the same level or below the URL you specify. Each page will be transformed into a PDF document.
- timeout:
Timeout in seconds
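A minimal sketch; the URL is a placeholder:

client.ingest_website(
    collection_id=collection_id,
    url="https://docs.example.com",
    follow_links=True,
    timeout=1200,
)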
- list_chat_message_meta_part(message_id: str, info_type: str) ChatMessageMeta
Fetch one piece of chat message meta information.
- Args:
- message_id:
Message id for which the metadata should be pulled.
- info_type:
Metadata type to fetch. Valid choices are: “self_reflection”, “usage_stats”, “prompt_raw”, “llm_only”, “rag”, “hyde1”, “hyde2”
- Returns:
ChatMessageMeta: Metadata information about the chat message.
- list_chat_message_references(message_id: str) List[ChatMessageReference]
Fetch metadata for references of a chat message.
References are only available for messages sent from an LLM; an empty list will be returned for messages sent by the user.
- Args:
- message_id:
String id of the message to get references for.
- Returns:
list of ChatMessageReference: Metadata including the document name, polygon information, and score.
- list_chat_messages(chat_session_id: str, offset: int, limit: int) List[ChatMessage]
Fetch chat message and metadata for messages in a chat session.
Messages without a reply_to are from the end user; messages with a reply_to are from an LLM, in response to a specific user message.
- Args:
- chat_session_id:
String id of the chat session to filter by.
- offset:
How many chat messages to skip before returning.
- limit:
How many chat messages to return.
- Returns:
list of ChatMessage: Text and metadata for chat messages.
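A sketch of paging through a session's history; reply_to distinguishes user messages from LLM responses as described above:

messages = client.list_chat_messages(chat_session_id, offset=0, limit=20)
for m in messages:
    speaker = "user" if m.reply_to is None else "llm"
    print(f"[{speaker}] {m.content}")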
- list_chat_messages_full(chat_session_id: str, offset: int, limit: int) List[ChatMessageFull]
Fetch chat message and metadata for messages in a chat session.
Messages without a reply_to are from the end user; messages with a reply_to are from an LLM, in response to a specific user message.
- Args:
- chat_session_id:
String id of the chat session to filter by.
- offset:
How many chat messages to skip before returning.
- limit:
How many chat messages to return.
- Returns:
list of ChatMessageFull: Text and metadata for chat messages.
- list_chat_sessions_for_collection(collection_id: str, offset: int, limit: int) List[ChatSessionForCollection]
Fetch chat session metadata for chat sessions in a collection.
- Args:
- collection_id:
String id of the collection to filter by.
- offset:
How many chat sessions to skip before returning.
- limit:
How many chat sessions to return.
- Returns:
list of ChatSessionForCollection: Metadata about each chat session including the latest message.
- list_collection_permissions(collection_id: str) List[Permission]
Returns a list of access permissions for a given collection.
The returned list of permissions denotes who has access to the collection and their access level.
- Args:
- collection_id:
ID of the collection to inspect.
- Returns:
list of Permission: Sharing permissions list for the given collection.
- list_collections_for_document(document_id: str, offset: int, limit: int) List[CollectionInfo]
Fetch metadata about each collection the document is a part of.
At this time, each document will only be available in a single collection.
- Args:
- document_id:
String id of the document to search for.
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- Returns:
list of CollectionInfo: Metadata about each collection.
- list_documents_in_collection(collection_id: str, offset: int, limit: int) List[DocumentInfo]
Fetch document metadata for documents in a collection.
- Args:
- collection_id:
String id of the collection to filter by.
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfo: Metadata about each document.
- list_embedding_models() List[str]
- list_list_chat_message_meta(message_id: str) List[ChatMessageMeta]
Fetch chat message meta information.
- Args:
- message_id:
Message id for which the metadata should be pulled.
- Returns:
list of ChatMessageMeta: Metadata about the chat message.
- list_question_reply_feedback_data(offset: int, limit: int) List[QuestionReplyData]
Fetch user's questions and answers that have feedback.
Questions and answers with metadata and feedback information.
- Args:
- offset:
How many conversations to skip before returning.
- limit:
How many conversations to return.
- Returns:
list of QuestionReplyData: Metadata about questions and answers.
- list_recent_chat_sessions(offset: int, limit: int) List[ChatSessionInfo]
Fetch user’s chat session metadata sorted by last update time.
Chats across all collections will be accessed.
- Args:
- offset:
How many chat sessions to skip before returning.
- limit:
How many chat sessions to return.
- Returns:
list of ChatSessionInfo: Metadata about each chat session including the latest message.
- list_recent_collections(offset: int, limit: int) List[CollectionInfo]
Fetch user’s collection metadata sorted by last update time.
- Args:
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- Returns:
list of CollectionInfo: Metadata about each collection.
- list_recent_collections_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[CollectionInfo]
Fetch user’s collection metadata sorted by last update time.
- Args:
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of CollectionInfo: Metadata about each collection.
- list_recent_document_summaries(document_id: str, offset: int, limit: int) List[DocumentSummary]
Fetches recent document summaries
- Args:
- document_id:
document ID for which to return summaries
- offset:
How many summaries to skip before returning summaries.
- limit:
How many summaries to return.
- list_recent_documents(offset: int, limit: int) List[DocumentInfo]
Fetch user’s document metadata sorted by last update time.
All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfo: Metadata about each document.
- list_recent_documents_with_summaries(offset: int, limit: int) List[DocumentInfoSummary]
Fetch user’s document metadata sorted by last update time, including the latest document summary.
All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfoSummary: Metadata about each document.
- list_recent_documents_with_summaries_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[DocumentInfoSummary]
Fetch user’s document metadata sorted by last update time, including the latest document summary.
All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of DocumentInfoSummary: Metadata about each document.
- list_recent_prompt_templates(offset: int, limit: int) List[PromptTemplate]
Fetch user’s prompt templates sorted by last update time.
- Args:
- offset:
How many prompt templates to skip before returning.
- limit:
How many prompt templates to return.
- Returns:
list of PromptTemplate: set of prompts
- list_recent_prompt_templates_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[PromptTemplate]
Fetch user’s prompt templates sorted by last update time.
- Args:
- offset:
How many prompt templates to skip before returning.
- limit:
How many prompt templates to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of PromptTemplate: set of prompts
- list_upload() List[str]
List pending file uploads to the H2OGPTE backend.
Uploaded files are not yet accessible and need to be ingested into a collection.
- See Also:
upload: Upload the files into the system to then be ingested into a collection.
ingest_uploads: Add the uploaded files to a collection.
delete_upload: Delete uploaded file.
- Returns:
List[str]: The pending upload ids to be used in ingest jobs.
- Raises:
Exception: The upload list request was unsuccessful.
- list_users(offset: int, limit: int) List[User]
List system users.
Returns a list of all registered users of the system. A registered user is a user that has logged in at least once.
- Args:
- offset:
How many users to skip before returning.
- limit:
How many users to return.
- Returns:
list of User: Metadata about each user.
- make_collection_private(collection_id: str)
Make a collection private
Once a collection is private, other users will no longer be able to access chat history or documents related to the collection.
- Args:
- collection_id:
ID of the collection to make private.
- make_collection_public(collection_id: str)
Make a collection public
Once a collection is public, it will be accessible to all authenticated users of the system.
- Args:
- collection_id:
ID of the collection to make public.
- match_chunks(collection_id: str, vectors: List[List[float]], topics: List[str], offset: int, limit: int, cut_off: float = 0, width: int = 0) List[SearchResult]
Find chunks related to a message using semantic search.
Chunks are sorted by relevance and similarity score to the message.
See Also: H2OGPTE.encode_for_retrieval to create vectors from messages.
- Args:
- collection_id:
ID of the collection to search within.
- vectors:
A list of vectorized messages for running semantic search.
- topics:
A list of document_ids used to filter which documents in the collection to search.
- offset:
How many chunks to skip before returning chunks.
- limit:
How many chunks to return.
- cut_off:
Exclude matches with distances higher than this cut off.
- width:
How many chunks before and after a match to return - not implemented.
- Returns:
list of SearchResult: The document, text, score and related information of the chunk.
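A sketch combining encode_for_retrieval and match_chunks for semantic search; passing an empty topics list to search all documents in the collection is an assumption:

vectors = client.encode_for_retrieval(["Who signed the lease?"])
results = client.match_chunks(
    collection_id=collection_id,
    vectors=vectors,
    topics=[],  # assumption: empty list applies no document filter
    offset=0,
    limit=5,
)
for result in results:
    print(result)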
- reset_collection_prompt_settings(collection_id: str) str
Reset the prompt settings for a given collection.
- Args:
- collection_id:
ID of the collection to update.
- Returns:
str: ID of the updated collection.
- search_chunks(collection_id: str, query: str, topics: List[str], offset: int, limit: int) List[SearchResult]
Find chunks related to a message using lexical search.
Chunks are sorted by relevance and similarity score to the message.
- Args:
- collection_id:
ID of the collection to search within.
- query:
Question or imperative from the end user to search a collection for.
- topics:
A list of document_ids used to filter which documents in the collection to search.
- offset:
How many chunks to skip before returning chunks.
- limit:
How many chunks to return.
- Returns:
list of SearchResult: The document, text, score and related information of the chunk.
- set_chat_message_votes(chat_message_id: str, votes: int) Result
Change the vote value of a chat message.
Set the exact value of a vote for a chat message. Any message type can be updated, but only LLM response votes will be visible in the UI. The expectation is 0: unvoted, -1: dislike, 1: like. Values outside of this will not be viewable in the UI.
- Args:
- chat_message_id:
ID of a chat message, any message can be used but only LLM responses will be visible in the UI.
- votes:
Integer value for the message. Only -1 and 1 will be visible in the UI as dislike and like respectively.
- Returns:
Result: The status of the update.
- Raises:
Exception: The vote update request was unsuccessful.
- set_chat_session_prompt_template(chat_session_id: str, prompt_template_id: str | None) str
Set the prompt template for a chat session.
- Args:
- chat_session_id:
ID of the chat session
- prompt_template_id:
ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.
- Returns:
str: ID of the updated chat session
- set_collection_prompt_template(collection_id: str, prompt_template_id: str | None, strict_check: bool = False) str
Set the prompt template for a collection
- Args:
- collection_id:
ID of the collection to update.
- prompt_template_id:
ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.
- strict_check:
whether to check that the collection’s embedding model and the prompt template are optimally compatible
- Returns:
str: ID of the updated collection.
- share_collection(collection_id: str, permission: Permission) ShareResponseStatus
Share a collection with a user.
The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be shared.
- Args:
- collection_id:
ID of the collection to share.
- permission:
Defines the rule for sharing, i.e. permission level.
- Returns:
ShareResponseStatus: Status of share request.
- summarize_content(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, **kwargs: Any) Answer
Summarize one or more contexts using an LLM.
Effective prompt created (excluding the system prompt):
"{pre_prompt_summary} """ {text_context_list} """ {prompt_summary}"
- Args:
- text_context_list:
List of raw text strings to be summarized.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context on how to respond. Use 'auto' for the model default, or None for h2oGPTe defaults. Defaults to '' for no system prompt.
- pre_prompt_summary:
Text that is prepended before the list of texts. The default can be customized per environment, but the standard default is
"In order to write a concise single-paragraph or bulleted list summary, pay attention to the following text:\n"
- prompt_summary:
Text that is appended after the list of texts. The default can be customized per environment, but the standard default is
"Using only the text above, write a condensed and concise summary of key results (preferably as bullet points):\n"
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0): The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0): The seed for the random number generator; only used if temperature > 0. seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1): The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0): If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07): The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024): Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512): Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
Answer: The response text and any errors.
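A minimal sketch, assuming chunks is a list of strings to condense:

answer = client.summarize_content(
    text_context_list=chunks,
    llm_args={"max_new_tokens": 512},
)
print(answer.content)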
- summarize_document(document_id: str, system_prompt: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, max_num_chunks: int | None = None, sampling_strategy: str | None = None, timeout: float | None = None) DocumentSummary
Creates a summary of a document.
Effective prompt created (excluding the system prompt):
"{pre_prompt_summary} """ {text from document} """ {prompt_summary}"
- Args:
- document_id:
String id of the document to create a summary from.
- system_prompt:
System Prompt
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- llm:
LLM to use
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0): The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
top_k (int, default: 1): The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0): If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0): The seed for the random number generator when sampling during generation (if temp > 0 or top_k > 1 or top_p < 1); seed=0 picks a random seed.
repetition_penalty (float, default: 1.07): The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024): Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512): Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- max_num_chunks:
Max limit of chunks to send to the summarizer
- sampling_strategy:
How to sample if the document has more chunks than max_num_chunks. Options are “auto”, “uniform”, “first”, “first+last”, default is “auto” (a hybrid of them all).
- timeout:
Amount of time in seconds to allow the request to run. The default is 86400 seconds.
- Returns:
DocumentSummary: Summary of the document
- Raises:
TimeoutError: The request did not complete in time.
SessionError: No summary created. Document wasn't part of a collection, or LLM timed out, etc.
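A minimal sketch; it assumes the returned DocumentSummary exposes the generated text as a content attribute:

summary = client.summarize_document(
    document_id=document_id,
    max_num_chunks=64,
    sampling_strategy="auto",
    timeout=600,
)
print(summary.content)  # assumption: summary text is in .content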
- unshare_collection(collection_id: str, permission: Permission) ShareResponseStatus
Remove sharing of a collection from a user.
The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be un-shared. In the case of un-sharing, the Permission's user is sufficient.
- Args:
- collection_id:
ID of the collection to un-share.
- permission:
Defines the user for which collection access is revoked.
- Returns:
ShareResponseStatus: Status of share request.
- unshare_collection_for_all(collection_id: str) ShareResponseStatus
Remove sharing of a collection from all other users except the original owner.
- Args:
- collection_id:
ID of the collection to un-share.
- Returns:
ShareResponseStatus: Status of share request.
- update_collection(collection_id: str, name: str, description: str) str
Update the metadata for a given collection.
All variables are required. You can use h2ogpte.get_collection(<id>).name or description to get the existing values if you only want to change one or the other.
- Args:
- collection_id:
ID of the collection to update.
- name:
New name of the collection, this is required.
- description:
New description of the collection, this is required.
- Returns:
str: ID of the updated collection.
- update_collection_rag_type(collection_id: str, name: str, description: str, rag_type: str) str
Update the metadata for a given collection.
All variables are required. You can use h2ogpte.get_collection(<id>).name or description to get the existing values if you only want to change one or the other.
- Args:
- collection_id:
ID of the collection to update.
- name:
New name of the collection, this is required.
- description:
New description of the collection, this is required.
- rag_type:
String, one of the following options:
"llm_only"
LLM Only (no RAG) - Generates a response to answer the user's query without any supporting document contexts. Requires 1 LLM call.
"rag"
RAG (Retrieval Augmented Generation) - RAG with neural/lexical hybrid search using the user's query to find relevant contexts from a collection for generating a response. Requires 1 LLM call.
"hyde1"
HyDE RAG (Hypothetical Document Embedding) - Like RAG, but uses the LLM Only response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.
"hyde2"
HyDE RAG+ (Combined HyDE+RAG) - Like RAG, but uses the HyDE RAG response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.
"rag+"
RAG+ - Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.
- Returns:
str: ID of the updated collection.
- update_prompt_template(id: str, name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None) str
Update a prompt template
- Args:
- id:
String ID of the prompt template to update
- name:
Name of the prompt template
- description:
Description of the prompt template
- lang:
Language code
- system_prompt:
System Prompt
- pre_prompt_query:
Text that is prepended before the contextual document chunks.
- prompt_query:
Text that is appended after the contextual document chunks, immediately before the user's message.
- hyde_no_rag_llm_prompt_extension:
LLM prompt extension.
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- system_prompt_reflection:
System Prompt for self-reflection
- pre_prompt_reflection:
deprecated - ignored
- prompt_reflection:
Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer
- auto_gen_description_prompt:
prompt to create a description of the collection.
- auto_gen_document_summary_pre_prompt_summary:
pre_prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_summary_prompt_summary:
prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_sample_questions_prompt:
prompt to create sample questions for a freshly imported document (if enabled).
- default_sample_questions:
default sample questions in case there are no auto-generated sample questions.
- Returns:
str: The ID of the updated prompt template.
- upload(file_name: str, file: Any) str
Upload a file to the H2OGPTE backend.
Uploaded files are not yet accessible and need to be ingested into a collection.
- See Also:
ingest_uploads: Add the uploaded files to a collection.
delete_upload: Delete uploaded file.
- Args:
- file_name:
What to name the file on the server, must include file extension.
- file:
File object to upload, often an opened file from with open(…) as f.
- Returns:
str: The upload id to be used in ingest jobs.
- Raises:
Exception: The upload request was unsuccessful.
- h2ogpte.h2ogpte.marshal(d)
- h2ogpte.h2ogpte.unmarshal(s: str)
h2ogpte.session module
- class h2ogpte.session.Session(address: str, chat_session_id: str, client: H2OGPTE = None, prompt_template_id: str | None = None)
Bases:
object
Create and participate in a chat session.
This is a live connection to the h2oGPTe server contained to a specific chat session on top of a single collection of documents. Users will find all questions and responses in this session in a single chat history in the UI.
- See Also:
H2OGPTE.connect: To initialize a session on an existing connection.
- Args:
- address:
Full URL of the h2oGPTe server to connect to.
- chat_session_id:
The ID of the chat session the queries should be sent to.
- client:
Set to the value of H2OGPTE client object used to perform other calls to the system.
Examples:
# Example 1: Best practice, create a session using the H2OGPTE module
with h2ogpte.connect(chat_session_id) as session:
    answer1 = session.query(
        'How many paper clips were shipped to Scranton?', timeout=10)
    answer2 = session.query(
        'Did David Brent co-sign the contract with Initech?', timeout=10)

# Example 2: Connect and disconnect manually
session = Session(
    address=address,
    client=client,
    chat_session_id=chat_session_id,
)
session.connect()
answer = session.query("Are there any dogs in the documents?")
session.disconnect()
- connect()
Connect to an h2oGPTe server.
This is primarily an internal function used when users create a session using a with-statement via the H2OGPTE.connect() function.
- property connection: ClientConnection
- disconnect()
Disconnect from an h2oGPTe server.
This is primarily an internal function used when users create a session using a with-statement via the H2OGPTE.connect() function.
- query(message: str, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, self_reflection_config: Dict[str, Any] | None = None, rag_config: Dict[str, Any] | None = None, timeout: float | None = None, callback: Callable[[ChatMessage | PartialChatMessage], None] | None = None) ChatMessage | None
Retrieval-augmented generation for a query on a collection.
Finds a collection of chunks relevant to the query using similarity scores. Sends these and any additional instructions to an LLM.
Format of questions or imperatives:
"{pre_prompt_query} """ {similar_context_chunks} """ {prompt_query}{message}"
- Args:
- message:
Query or instruction from the end user to the LLM.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context on how to respond. Use 'auto' or None for the model default. Defaults to '' for no system prompt.
- pre_prompt_query:
Text that is prepended before the contextual document chunks. The default can be customized per environment, but the standard default is
"Pay attention and remember the information below, which will help to answer the question or imperative after the context ends.\n"
- prompt_query:
Text that is appended after the contextual document chunks, immediately before the user's message. The default can be customized per environment, but the standard default is "According to only the information in the document sources provided within the context above, "
- pre_prompt_summary:
Not yet used, use H2OGPTE.summarize_content
- prompt_summary:
Not yet used, use H2OGPTE.summarize_content
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0): The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
top_k (int, default: 1): The number of highest probability vocabulary tokens to keep for top-k filtering.
top_p (float, default: 1.0): If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0): The seed for the random number generator when sampling during generation (if temp > 0 or top_k > 1 or top_p < 1); seed=0 picks a random seed.
repetition_penalty (float, default: 1.07): The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024): Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512): Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- self_reflection_config:
Dictionary of arguments for self-reflection. Can contain the following string:string mappings:
- llm_reflection: str
e.g. "gpt-4-0613", or "" to disable reflection
- prompt_reflection: str
e.g. 'Here's the prompt and the response:\n"""Prompt:\n%s\n"""\n\n"""Response:\n%s\n"""\n\nWhat is the quality of the response for the given prompt? Respond with a score ranging from Score: 0/10 (worst) to Score: 10/10 (best), and give a brief explanation why.'
- system_prompt_reflection: str
e.g. ""
- llm_args_reflection: str
e.g. "{}"
- rag_config:
Dictionary of arguments to control RAG (retrieval-augmented-generation) types. Can contain the following key/value pairs:
- rag_type: str
One of the following options:
"llm_only"
LLM Only (no RAG) - Generates a response to answer the user's query without any supporting document contexts. Requires 1 LLM call.
"rag"
RAG (Retrieval Augmented Generation) - RAG with neural/lexical hybrid search using the user's query to find relevant contexts from a collection for generating a response. Requires 1 LLM call.
"hyde1"
HyDE RAG (Hypothetical Document Embedding) - Like RAG, but uses the LLM Only response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.
"hyde2"
HyDE RAG+ (Combined HyDE+RAG) - Like RAG, but uses the HyDE RAG response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.
"rag+"
RAG+ - Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.
- hyde_no_rag_llm_prompt_extension: str
Add this prompt to every user's prompt when generating answers to be used for subsequent retrieval during HyDE. Only used when rag_type is "hyde1" or "hyde2". Example:
'\nKeep the answer brief, and list the 5 most relevant key words at the end.'
- num_neighbor_chunks_to_include: int
Number of neighboring chunks to include for every retrieved relevant chunk. Helps to keep surrounding context together. Only enabled for rag_type “rag+”. Defaults to 1.
- timeout:
Amount of time in seconds to allow the request to run. The default is 1000 seconds.
- callback:
Function for processing partial messages, used for streaming responses to an end user.
- Returns:
ChatMessage: The response text and details about the response from the LLM. For example:
ChatMessage(
    id='XXX',
    content='The information provided in the context...',
    reply_to='YYY',
    votes=0,
    created_at=datetime.datetime(2023, 10, 24, 20, 12, 34, 875026),
    type_list=[],
)
- Raises:
TimeoutError: The request did not complete in time.
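A streaming sketch using the callback parameter; it assumes PartialChatMessage (referenced in the signature above) is importable from h2ogpte.types and carries incremental text in its content attribute:

from h2ogpte.types import PartialChatMessage

def on_message(message):
    # partial messages stream in as the LLM generates tokens
    if isinstance(message, PartialChatMessage):
        print(message.content, end="", flush=True)

with client.connect(chat_session_id) as session:
    session.query("List the key risks.", callback=on_message, timeout=120)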
- h2ogpte.session.deserialize(response: str) ChatResponse | ChatAcknowledgement
- h2ogpte.session.serialize(request: ChatRequest) str
h2ogpte.types module
- class h2ogpte.types.Answer(*, content: str, error: str, prompt_raw: str = '', llm: str, input_tokens: int = 0, output_tokens: int = 0, origin: str = 'N/A')
Bases:
BaseModel
- content: str
- error: str
- input_tokens: int
- llm: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'error': FieldInfo(annotation=str, required=True), 'input_tokens': FieldInfo(annotation=int, required=False, default=0), 'llm': FieldInfo(annotation=str, required=True), 'origin': FieldInfo(annotation=str, required=False, default='N/A'), 'output_tokens': FieldInfo(annotation=int, required=False, default=0), 'prompt_raw': FieldInfo(annotation=str, required=False, default='')}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- origin: str
- output_tokens: int
- prompt_raw: str
- class h2ogpte.types.ChatAcknowledgement(t: str, session_id: str, correlation_id: str, message_id: str)
Bases:
object
- correlation_id: str
- message_id: str
- session_id: str
- t: str
- class h2ogpte.types.ChatMessage(*, id: str, content: str, reply_to: str | None = None, votes: int, created_at: datetime, type_list: List[str] | None = None, error: str | None = None)
Bases:
BaseModel
- content: str
- created_at: datetime
- error: str | None
- id: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'created_at': FieldInfo(annotation=datetime, required=True), 'error': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True), 'reply_to': FieldInfo(annotation=Union[str, NoneType], required=False), 'type_list': FieldInfo(annotation=Union[List[str], NoneType], required=False), 'votes': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- reply_to: str | None
- type_list: List[str] | None
- votes: int
- class h2ogpte.types.ChatMessageFull(*, id: str, username: str | None = None, content: str, reply_to: str | None = None, votes: int, created_at: datetime, type_list: List[ChatMessageMeta] | None = [], error: str | None = None)
Bases:
BaseModel
- content: str
- created_at: datetime
- error: str | None
- id: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'created_at': FieldInfo(annotation=datetime, required=True), 'error': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True), 'reply_to': FieldInfo(annotation=Union[str, NoneType], required=False), 'type_list': FieldInfo(annotation=Union[List[ChatMessageMeta], NoneType], required=False, default=[]), 'username': FieldInfo(annotation=Union[str, NoneType], required=False), 'votes': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- reply_to: str | None
- type_list: List[ChatMessageMeta] | None
- username: str | None
- votes: int
- class h2ogpte.types.ChatMessageMeta(*, message_type: str, content: str)
Bases:
BaseModel
- content: str
- message_type: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'message_type': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class h2ogpte.types.ChatMessageReference(*, document_id: str, document_name: str, chunk_id: int, pages: str, score: float)
Bases:
BaseModel
- chunk_id: int
- document_id: str
- document_name: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'chunk_id': FieldInfo(annotation=int, required=True), 'document_id': FieldInfo(annotation=str, required=True), 'document_name': FieldInfo(annotation=str, required=True), 'pages': FieldInfo(annotation=str, required=True), 'score': FieldInfo(annotation=float, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- pages: str
- score: float
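These types are plain Pydantic v2 models, so they can be constructed from keyword arguments or parsed from API JSON. A minimal sketch (all field values below are illustrative placeholders):
from h2ogpte.types import ChatMessageReference

ref = ChatMessageReference(
    document_id="doc-123",      # illustrative placeholder IDs
    document_name="report.pdf",
    chunk_id=1,
    pages="4",
    score=0.87,
)
print(ref.model_dump())  # Pydantic v2: dict of field name -> value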
- class h2ogpte.types.ChatRequest(t: str, mode: str, session_id: str, correlation_id: str, body: str, system_prompt: str | None, pre_prompt_query: str | None, prompt_query: str | None, pre_prompt_summary: str | None, prompt_summary: str | None, llm: str | int | NoneType, llm_args: str | None, self_reflection_config: str | None, rag_config: str | None)
Bases:
object
- body: str
- correlation_id: str
- llm: str | int | None
- llm_args: str | None
- mode: str
- pre_prompt_query: str | None
- pre_prompt_summary: str | None
- prompt_query: str | None
- prompt_summary: str | None
- rag_config: str | None
- self_reflection_config: str | None
- session_id: str
- system_prompt: str | None
- t: str
- class h2ogpte.types.ChatResponse(t: str, session_id: str, message_id: str, reply_to_id: str, body: str, error: str)
Bases:
object
- body: str
- error: str
- message_id: str
- reply_to_id: str
- session_id: str
- t: str
- class h2ogpte.types.ChatSessionCount(*, chat_session_count: int)
Bases:
BaseModel
- chat_session_count: int
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'chat_session_count': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class h2ogpte.types.ChatSessionForCollection(*, id: str, latest_message_content: str | None = None, updated_at: datetime)
Bases:
BaseModel
- id: str
- latest_message_content: str | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=str, required=True), 'latest_message_content': FieldInfo(annotation=Union[str, NoneType], required=False), 'updated_at': FieldInfo(annotation=datetime, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- updated_at: datetime
- class h2ogpte.types.ChatSessionInfo(*, id: str, latest_message_content: str | None = None, collection_id: str | None, collection_name: str | None, prompt_template_id: str | None = None, updated_at: datetime)
Bases:
BaseModel
- collection_id: str | None
- collection_name: str | None
- id: str
- latest_message_content: str | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'collection_id': FieldInfo(annotation=Union[str, NoneType], required=True), 'collection_name': FieldInfo(annotation=Union[str, NoneType], required=True), 'id': FieldInfo(annotation=str, required=True), 'latest_message_content': FieldInfo(annotation=Union[str, NoneType], required=False), 'prompt_template_id': FieldInfo(annotation=Union[str, NoneType], required=False), 'updated_at': FieldInfo(annotation=datetime, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- prompt_template_id: str | None
- updated_at: datetime
- class h2ogpte.types.Chunk(*, text: str)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'text': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- text: str
- class h2ogpte.types.Chunks(*, result: List[Chunk])
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'result': FieldInfo(annotation=List[Chunk], required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class h2ogpte.types.Collection(*, id: str, name: str, description: str, document_count: int, document_size: int, created_at: datetime, updated_at: datetime, username: str, rag_type: str | None = None, embedding_model: str | None = None, prompt_template_id: str | None = None, is_public: bool)
Bases:
BaseModel
- created_at: datetime
- description: str
- document_count: int
- document_size: int
- embedding_model: str | None
- id: str
- is_public: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'created_at': FieldInfo(annotation=datetime, required=True), 'description': FieldInfo(annotation=str, required=True), 'document_count': FieldInfo(annotation=int, required=True), 'document_size': FieldInfo(annotation=int, required=True), 'embedding_model': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True), 'is_public': FieldInfo(annotation=bool, required=True), 'name': FieldInfo(annotation=str, required=True), 'prompt_template_id': FieldInfo(annotation=Union[str, NoneType], required=False), 'rag_type': FieldInfo(annotation=Union[str, NoneType], required=False), 'updated_at': FieldInfo(annotation=datetime, required=True), 'username': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- name: str
- prompt_template_id: str | None
- rag_type: str | None
- updated_at: datetime
- username: str
- class h2ogpte.types.CollectionCount(*, collection_count: int)
Bases:
BaseModel
- collection_count: int
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'collection_count': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class h2ogpte.types.CollectionInfo(*, id: str, name: str, description: str, document_count: int, document_size: int, updated_at: datetime, user_count: int, is_public: bool, username: str, sessions_count: int)
Bases:
BaseModel
- description: str
- document_count: int
- document_size: int
- id: str
- is_public: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'description': FieldInfo(annotation=str, required=True), 'document_count': FieldInfo(annotation=int, required=True), 'document_size': FieldInfo(annotation=int, required=True), 'id': FieldInfo(annotation=str, required=True), 'is_public': FieldInfo(annotation=bool, required=True), 'name': FieldInfo(annotation=str, required=True), 'sessions_count': FieldInfo(annotation=int, required=True), 'updated_at': FieldInfo(annotation=datetime, required=True), 'user_count': FieldInfo(annotation=int, required=True), 'username': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- name: str
- sessions_count: int
- updated_at: datetime
- user_count: int
- username: str
- class h2ogpte.types.ConfigItem(*, key_name: str, string_value: str, value_type: str, can_overwrite: bool)
Bases:
BaseModel
- can_overwrite: bool
- key_name: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'can_overwrite': FieldInfo(annotation=bool, required=True), 'key_name': FieldInfo(annotation=str, required=True), 'string_value': FieldInfo(annotation=str, required=True), 'value_type': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- string_value: str
- value_type: str
- class h2ogpte.types.Document(*, id: str, name: str, type: str, size: int, page_count: int, status: Status, created_at: datetime, updated_at: datetime)
Bases:
BaseModel
- created_at: datetime
- id: str
- model_config: ClassVar[ConfigDict] = {'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'created_at': FieldInfo(annotation=datetime, required=True), 'id': FieldInfo(annotation=str, required=True), 'name': FieldInfo(annotation=str, required=True), 'page_count': FieldInfo(annotation=int, required=True), 'size': FieldInfo(annotation=int, required=True), 'status': FieldInfo(annotation=Status, required=True), 'type': FieldInfo(annotation=str, required=True), 'updated_at': FieldInfo(annotation=datetime, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- name: str
- page_count: int
- size: int
- type: str
- updated_at: datetime
- class h2ogpte.types.DocumentCount(*, document_count: int)
Bases:
BaseModel
- document_count: int
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'document_count': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class h2ogpte.types.DocumentInfo(*, id: str, username: str, name: str, type: str, size: int, page_count: int, status: Status, updated_at: datetime)
Bases:
BaseModel
- id: str
- model_config: ClassVar[ConfigDict] = {'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=str, required=True), 'name': FieldInfo(annotation=str, required=True), 'page_count': FieldInfo(annotation=int, required=True), 'size': FieldInfo(annotation=int, required=True), 'status': FieldInfo(annotation=Status, required=True), 'type': FieldInfo(annotation=str, required=True), 'updated_at': FieldInfo(annotation=datetime, required=True), 'username': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- name: str
- page_count: int
- size: int
- type: str
- updated_at: datetime
- username: str
- class h2ogpte.types.DocumentInfoSummary(*, id: str, username: str, name: str, type: str, size: int, page_count: int, status: Status, updated_at: datetime, usage_stats: str | None, summary: str | None, summary_parameters: str | None)
Bases:
BaseModel
- id: str
- model_config: ClassVar[ConfigDict] = {'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=str, required=True), 'name': FieldInfo(annotation=str, required=True), 'page_count': FieldInfo(annotation=int, required=True), 'size': FieldInfo(annotation=int, required=True), 'status': FieldInfo(annotation=Status, required=True), 'summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'summary_parameters': FieldInfo(annotation=Union[str, NoneType], required=True), 'type': FieldInfo(annotation=str, required=True), 'updated_at': FieldInfo(annotation=datetime, required=True), 'usage_stats': FieldInfo(annotation=Union[str, NoneType], required=True), 'username': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- name: str
- page_count: int
- size: int
- summary: str | None
- summary_parameters: str | None
- type: str
- updated_at: datetime
- usage_stats: str | None
- username: str
- class h2ogpte.types.DocumentSummary(*, id: str, content: str, error: str, document_id: str, kwargs: str, created_at: datetime, usage_stats: str | None = None)
Bases:
BaseModel
- content: str
- created_at: datetime
- document_id: str
- error: str
- id: str
- kwargs: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'created_at': FieldInfo(annotation=datetime, required=True), 'document_id': FieldInfo(annotation=str, required=True), 'error': FieldInfo(annotation=str, required=True), 'id': FieldInfo(annotation=str, required=True), 'kwargs': FieldInfo(annotation=str, required=True), 'usage_stats': FieldInfo(annotation=Union[str, NoneType], required=False)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- usage_stats: str | None
- class h2ogpte.types.ExtractionAnswer(*, content: List[str], error: str, llm: str, input_tokens: int = 0, output_tokens: int = 0)
Bases:
BaseModel
- content: List[str]
- error: str
- input_tokens: int
- llm: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=List[str], required=True), 'error': FieldInfo(annotation=str, required=True), 'input_tokens': FieldInfo(annotation=int, required=False, default=0), 'llm': FieldInfo(annotation=str, required=True), 'output_tokens': FieldInfo(annotation=int, required=False, default=0)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- output_tokens: int
- class h2ogpte.types.Identifier(*, id: str, error: str | None = None)
Bases:
BaseModel
- error: str | None
- id: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'error': FieldInfo(annotation=Union[str, NoneType], required=False), 'id': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- exception h2ogpte.types.InvalidArgumentError
Bases:
Exception
- class h2ogpte.types.Job(*, id: str, name: str, passed: float, failed: float, progress: float, completed: bool, canceled: bool, date: datetime, kind: JobKind, statuses: List[JobStatus], errors: List[str], last_update_date: datetime, duration: str, duration_seconds: float)
Bases:
BaseModel
- canceled: bool
- completed: bool
- date: datetime
- duration: str
- duration_seconds: float
- errors: List[str]
- failed: float
- id: str
- last_update_date: datetime
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'canceled': FieldInfo(annotation=bool, required=True), 'completed': FieldInfo(annotation=bool, required=True), 'date': FieldInfo(annotation=datetime, required=True), 'duration': FieldInfo(annotation=str, required=True), 'duration_seconds': FieldInfo(annotation=float, required=True), 'errors': FieldInfo(annotation=List[str], required=True), 'failed': FieldInfo(annotation=float, required=True), 'id': FieldInfo(annotation=str, required=True), 'kind': FieldInfo(annotation=JobKind, required=True), 'last_update_date': FieldInfo(annotation=datetime, required=True), 'name': FieldInfo(annotation=str, required=True), 'passed': FieldInfo(annotation=float, required=True), 'progress': FieldInfo(annotation=float, required=True), 'statuses': FieldInfo(annotation=List[JobStatus], required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- name: str
- passed: float
- progress: float
- class h2ogpte.types.JobKind(value)
Bases:
str
,Enum
An enumeration.
- DeleteCollectionsJob = 'DeleteCollectionsJob'
- DeleteDocumentsFromCollectionJob = 'DeleteDocumentsFromCollectionJob'
- DeleteDocumentsJob = 'DeleteDocumentsJob'
- DocumentSummaryJob = 'DocumentSummaryJob'
- ImportCollectionIntoCollectionJob = 'ImportCollectionIntoCollectionJob'
- ImportDocumentIntoCollectionJob = 'ImportDocumentIntoCollectionJob'
- IndexFilesJob = 'IndexFilesJob'
- IngestFromFileSystemJob = 'IngestFromFileSystemJob'
- IngestUploadsJob = 'IngestUploadsJob'
- IngestWebsiteJob = 'IngestWebsiteJob'
- NoOpJob = 'NoOpJob'
- UpdateCollectionStatsJob = 'UpdateCollectionStatsJob'
- class h2ogpte.types.JobStatus(*, id: str, status: str)
Bases:
BaseModel
- id: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=str, required=True), 'status': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- status: str
- class h2ogpte.types.LLMUsage(*, llm_name: str, llm_cost: float)
Bases:
BaseModel
- llm_cost: float
- llm_name: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'llm_cost': FieldInfo(annotation=float, required=True), 'llm_name': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class h2ogpte.types.LLMUsageLimit(*, current: float, max_allowed_24h: float, cost_unit: str, interval: str | None = None)
Bases:
BaseModel
- cost_unit: str
- current: float
- interval: str | None
- max_allowed_24h: float
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'cost_unit': FieldInfo(annotation=str, required=True), 'current': FieldInfo(annotation=float, required=True), 'interval': FieldInfo(annotation=Union[str, NoneType], required=False), 'max_allowed_24h': FieldInfo(annotation=float, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class h2ogpte.types.Meta(*, version: str, build: str, username: str, email: str, license_expired: bool, license_expiry_date: str, global_configs: List[ConfigItem], picture: str | None)
Bases:
BaseModel
- build: str
- email: str
- global_configs: List[ConfigItem]
- license_expired: bool
- license_expiry_date: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'build': FieldInfo(annotation=str, required=True), 'email': FieldInfo(annotation=str, required=True), 'global_configs': FieldInfo(annotation=List[ConfigItem], required=True), 'license_expired': FieldInfo(annotation=bool, required=True), 'license_expiry_date': FieldInfo(annotation=str, required=True), 'picture': FieldInfo(annotation=Union[str, NoneType], required=True), 'username': FieldInfo(annotation=str, required=True), 'version': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- picture: str | None
- username: str
- version: str
- class h2ogpte.types.ObjectCount(*, chat_session_count: int, collection_count: int, document_count: int)
Bases:
BaseModel
- chat_session_count: int
- collection_count: int
- document_count: int
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'chat_session_count': FieldInfo(annotation=int, required=True), 'collection_count': FieldInfo(annotation=int, required=True), 'document_count': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- exception h2ogpte.types.ObjectNotFoundError
Bases:
Exception
- class h2ogpte.types.PartialChatMessage(*, id: str, content: str, reply_to: str | None = None)
Bases:
BaseModel
- content: str
- id: str
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True), 'id': FieldInfo(annotation=str, required=True), 'reply_to': FieldInfo(annotation=Union[str, NoneType], required=False)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- reply_to: str | None
- class h2ogpte.types.Permission(*, username: str)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'username': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- username: str
- class h2ogpte.types.PromptTemplate(*, is_default: bool, id: str | None, name: str, description: str | None, lang: str | None, system_prompt: str | None, pre_prompt_query: str | None, prompt_query: str | None, hyde_no_rag_llm_prompt_extension: str | None, pre_prompt_summary: str | None, prompt_summary: str | None, system_prompt_reflection: str | None, pre_prompt_reflection: str | None, prompt_reflection: str | None, auto_gen_description_prompt: str | None, auto_gen_document_summary_pre_prompt_summary: str | None, auto_gen_document_summary_prompt_summary: str | None, auto_gen_document_sample_questions_prompt: str | None, default_sample_questions: List[str] | None, created_at: datetime | None)
Bases:
BaseModel
- auto_gen_description_prompt: str | None
- auto_gen_document_sample_questions_prompt: str | None
- auto_gen_document_summary_pre_prompt_summary: str | None
- auto_gen_document_summary_prompt_summary: str | None
- created_at: datetime | None
- default_sample_questions: List[str] | None
- description: str | None
- hyde_no_rag_llm_prompt_extension: str | None
- id: str | None
- is_default: bool
- lang: str | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'auto_gen_description_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'auto_gen_document_sample_questions_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'auto_gen_document_summary_pre_prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'auto_gen_document_summary_prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'created_at': FieldInfo(annotation=Union[datetime, NoneType], required=True), 'default_sample_questions': FieldInfo(annotation=Union[List[str], NoneType], required=True), 'description': FieldInfo(annotation=Union[str, NoneType], required=True), 'hyde_no_rag_llm_prompt_extension': FieldInfo(annotation=Union[str, NoneType], required=True), 'id': FieldInfo(annotation=Union[str, NoneType], required=True), 'is_default': FieldInfo(annotation=bool, required=True), 'lang': FieldInfo(annotation=Union[str, NoneType], required=True), 'name': FieldInfo(annotation=str, required=True), 'pre_prompt_query': FieldInfo(annotation=Union[str, NoneType], required=True), 'pre_prompt_reflection': FieldInfo(annotation=Union[str, NoneType], required=True), 'pre_prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_query': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_reflection': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'system_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'system_prompt_reflection': FieldInfo(annotation=Union[str, NoneType], required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- name: str
- pre_prompt_query: str | None
- pre_prompt_reflection: str | None
- pre_prompt_summary: str | None
- prompt_query: str | None
- prompt_reflection: str | None
- prompt_summary: str | None
- system_prompt: str | None
- system_prompt_reflection: str | None
- class h2ogpte.types.PromptTemplateCount(*, prompt_template_count: int)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'prompt_template_count': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- prompt_template_count: int
- class h2ogpte.types.QuestionReplyData(*, question_content: str, reply_content: str, question_id: str, reply_id: str, llm: str | None, system_prompt: str | None, pre_prompt_query: str | None, prompt_query: str | None, pre_prompt_summary: str | None, prompt_summary: str | None, rag_config: str | None, collection_documents: List[str] | None, votes: int, expected_answer: str | None, user_comment: str | None, collection_id: str | None, collection_name: str | None, response_created_at_time: str, prompt_template_id: str | None = None)
Bases:
BaseModel
- collection_documents: List[str] | None
- collection_id: str | None
- collection_name: str | None
- expected_answer: str | None
- llm: str | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'collection_documents': FieldInfo(annotation=Union[List[str], NoneType], required=True), 'collection_id': FieldInfo(annotation=Union[str, NoneType], required=True), 'collection_name': FieldInfo(annotation=Union[str, NoneType], required=True), 'expected_answer': FieldInfo(annotation=Union[str, NoneType], required=True), 'llm': FieldInfo(annotation=Union[str, NoneType], required=True), 'pre_prompt_query': FieldInfo(annotation=Union[str, NoneType], required=True), 'pre_prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_query': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_summary': FieldInfo(annotation=Union[str, NoneType], required=True), 'prompt_template_id': FieldInfo(annotation=Union[str, NoneType], required=False), 'question_content': FieldInfo(annotation=str, required=True), 'question_id': FieldInfo(annotation=str, required=True), 'rag_config': FieldInfo(annotation=Union[str, NoneType], required=True), 'reply_content': FieldInfo(annotation=str, required=True), 'reply_id': FieldInfo(annotation=str, required=True), 'response_created_at_time': FieldInfo(annotation=str, required=True), 'system_prompt': FieldInfo(annotation=Union[str, NoneType], required=True), 'user_comment': FieldInfo(annotation=Union[str, NoneType], required=True), 'votes': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- pre_prompt_query: str | None
- pre_prompt_summary: str | None
- prompt_query: str | None
- prompt_summary: str | None
- prompt_template_id: str | None
- question_content: str
- question_id: str
- rag_config: str | None
- reply_content: str
- reply_id: str
- response_created_at_time: str
- system_prompt: str | None
- user_comment: str | None
- votes: int
- class h2ogpte.types.QuestionReplyDataCount(*, question_reply_data_count: int)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'question_reply_data_count': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question_reply_data_count: int
- class h2ogpte.types.Result(*, status: Status)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {'use_enum_values': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'status': FieldInfo(annotation=Status, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class h2ogpte.types.SchedulerStats(*, queue_length: int)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'queue_length': FieldInfo(annotation=int, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- queue_length: int
- class h2ogpte.types.SearchResult(*, id: int, topic: str, name: str, text: str, size: int, pages: str, score: float)
Bases:
BaseModel
- id: int
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'id': FieldInfo(annotation=int, required=True), 'name': FieldInfo(annotation=str, required=True), 'pages': FieldInfo(annotation=str, required=True), 'score': FieldInfo(annotation=float, required=True), 'size': FieldInfo(annotation=int, required=True), 'text': FieldInfo(annotation=str, required=True), 'topic': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- name: str
- pages: str
- score: float
- size: int
- text: str
- topic: str
- class h2ogpte.types.SearchResults(*, result: List[SearchResult])
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'result': FieldInfo(annotation=List[SearchResult], required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- result: List[SearchResult]
- exception h2ogpte.types.SessionError
Bases:
Exception
- class h2ogpte.types.Status(value)
Bases:
str
,Enum
An enumeration.
- Canceled = 'canceled'
- Completed = 'completed'
- Failed = 'failed'
- Queued = 'queued'
- Running = 'running'
- Scheduled = 'scheduled'
- Unknown = 'unknown'
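Since Status (like JobKind) subclasses both str and Enum, its members compare equal to their raw string values, which is convenient when polling jobs. A small sketch:
from h2ogpte.types import Status

status = Status.Completed
print(status == "completed")                      # True: str-backed enum member
print(status in (Status.Queued, Status.Running))  # False once the job has finished
Note that models declaring model_config = {'use_enum_values': True} (such as Document and Result) store the raw string rather than the enum member.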
- class h2ogpte.types.SuggestedQuestion(*, question: str)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'question': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question: str
- exception h2ogpte.types.UnauthorizedError
Bases:
Exception
- class h2ogpte.types.User(*, username: str)
Bases:
BaseModel
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'username': FieldInfo(annotation=str, required=True)}
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- username: str
Module contents
h2oGPTe - AI for documents and more
h2ogpte is the Python client library for H2O.ai’s Enterprise h2oGPTe, a RAG (Retrieval-Augmented Generation) based platform built on top of many open-source software components such as h2oGPT, hnswlib, Torch, Transformers, Golang, Python, k8s, Docker, PyMuPDF, DocTR, and many more.
h2oGPTe is designed to help organizations improve their business using generative AI. It focuses on scaling as your organization expands the number of use cases, users, and documents, with the goal of being your one-stop shop for integrating any model or LLM functionality into your business.
Main Features
Contextualize chat with your own data using RAG (Retrieval-Augmented Generation)
Scalable backend and frontend, multi-user, high throughput
Fully containerized with Kubernetes
Multi-modal support for text, images, and audio
- Highly customizable prompting for:
talk to LLM
talk to document
talk to collection of documents
talk to every page of a collection (Map/Reduce), summary, extraction
LLM agnostic, choose the model you need for your use case
- class h2ogpte.H2OGPTE(address: str, api_key: str | None = None, token_provider: TokenProvider | None = None, verify: bool | str = True, strict_version_check: bool = False)
Bases:
object
Connect to and interact with an h2oGPTe server.
- INITIAL_WAIT_INTERVAL = 0.1
- MAX_WAIT_INTERVAL = 1.0
- TIMEOUT = 3600.0
- WAIT_BACKOFF_FACTOR = 1.4
- answer_question(question: str, system_prompt: str | None = '', pre_prompt_query: str | None = None, prompt_query: str | None = None, text_context_list: List[str] | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, chat_conversation: List[Tuple[str, str]] | None = None, timeout: float | None = None, **kwargs: Any) Answer
Send a message and get a response from an LLM.
Note: For general chat with an LLM, we recommend session.query() for higher throughput in multi-user environments. The following code sample shows the recommended method:
# Establish a chat session
chat_session_id = client.create_chat_session()
# Connect to the chat session
with client.connect(chat_session_id) as session:
    # Send a basic query and print the reply
    reply = session.query("Hello", timeout=60)
    print(reply.content)
Format of input content:
{text_context_list}
"""
{chat_conversation}{question}
- Args:
- question:
Text query to send to the LLM.
- text_context_list:
List of raw text strings to be included; will be converted to a single string like "\n\n".join(text_context_list).
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context for how to respond. Use auto for the model default, or None for the h2oGPTe default. Defaults to ‘’ for no system prompt.
- pre_prompt_query:
Text that is prepended before the contextual document chunks in text_context_list. Only used if text_context_list is provided.
- prompt_query:
Text that is appended after the contextual document chunks in text_context_list. Only used if text_context_list is provided.
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0) — The seed for the random number generator; only used if temperature > 0. seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- chat_conversation:
List of (human, bot) conversation tuples that will be prepended to the (question, None) pair for the query.
- timeout:
Timeout in seconds.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
Answer: The response text and any errors.
- Raises:
TimeoutError: If response isn’t completed in timeout seconds.
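For illustration, a minimal call grounded in ad-hoc context. This assumes a client constructed as in the session example above, and that the returned Answer exposes content and error fields, per the Returns description:
answer = client.answer_question(
    question="What was revenue in 2023?",
    text_context_list=[
        "Revenue in 2022 was $8M.",
        "Revenue in 2023 was $10M.",
    ],
    llm=0,                                               # first available model
    llm_args={"temperature": 0, "max_new_tokens": 256},
    timeout=120,
)
if answer.error:
    print("Error:", answer.error)
else:
    print(answer.content)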
- cancel_job(job_id: str) Result
Stops a specific job from running on the server.
- Args:
- job_id:
String id of the job to cancel.
- Returns:
Result: Status of canceling the job.
- connect(chat_session_id: str) Session
Create and participate in a chat session.
This is a live connection to the h2oGPTe server, scoped to a specific chat session on top of a single collection of documents. Users will find all questions and responses from this session in a single chat history in the UI.
- Args:
- chat_session_id:
ID of the chat session to connect to.
- Returns:
Session: Live chat session connection with an LLM.
- count_assets() ObjectCount
Counts number of objects owned by the user.
- Returns:
ObjectCount: The count of chat sessions, collections, and documents.
- count_chat_sessions() int
Counts number of chat sessions owned by the user.
- Returns:
int: The count of chat sessions owned by the user.
- count_chat_sessions_for_collection(collection_id: str) int
Counts number of chat sessions in a specific collection.
- Args:
- collection_id:
String id of the collection to count chat sessions for.
- Returns:
int: The count of chat sessions in that collection.
- count_collections() int
Counts number of collections owned by the user.
- Returns:
int: The count of collections owned by the user.
- count_documents() int
Counts number of documents accessed by the user.
- Returns:
int: The count of documents accessed by the user.
- count_documents_in_collection(collection_id: str) int
Counts the number of documents in a specific collection.
- Args:
- collection_id:
String id of the collection to count documents for.
- Returns:
int: The number of documents in that collection.
- count_documents_owned_by_me() int
Counts number of documents owned by the user.
- Returns:
int: The count of documents owned by the user.
- count_prompt_templates() int
Counts the number of prompt templates.
- Returns:
int: The count of prompt templates.
- count_question_reply_feedback() int
Counts the user’s questions and replies that have feedback.
- Returns:
int: The count of questions and replies that have user feedback.
- create_chat_session(collection_id: str | None = None) str
Creates a new chat session for asking questions (of documents).
- Args:
- collection_id:
String id of the collection to chat with. If None, chat with LLM directly.
- Returns:
str: The ID of the newly created chat session.
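A sketch tying this to connect(); the collection ID is a placeholder:
chat_session_id = client.create_chat_session(collection_id="my-collection-id")
with client.connect(chat_session_id) as session:
    reply = session.query("Summarize the key findings.", timeout=120)
    print(reply.content)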
- create_chat_session_on_default_collection() str
Creates a new chat session for asking questions of documents on the default collection.
- Returns:
str: The ID of the newly created chat session.
- create_collection(name: str, description: str, embedding_model: str | None = None, prompt_template_id: str | None = None) str
Creates a new collection.
- Args:
- name:
Name of the collection.
- description:
Description of the collection.
- embedding_model:
Embedding model to use. Call list_embedding_models() to see the list of options.
- prompt_template_id:
ID of the prompt template to get the prompts from. None to fall back to system defaults.
- Returns:
str: The ID of the newly created collection.
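For example, a minimal collection using the default embedding model and system prompts:
collection_id = client.create_collection(
    name="Quarterly Reports",
    description="Financial reports for RAG-based Q&A",
)
print(client.get_collection(collection_id).document_count)  # 0 until documents are ingested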
- create_prompt_template(name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None) str
Create a new prompt template
- Args:
- name:
Name of the prompt template
- description:
Description of the prompt template
- lang:
Language code
- system_prompt:
System Prompt
- pre_prompt_query:
Text that is prepended before the contextual document chunks.
- prompt_query:
Text that is prepended to the user’s message, after the contextual document chunks.
- hyde_no_rag_llm_prompt_extension:
LLM prompt extension.
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- system_prompt_reflection:
System Prompt for self-reflection
- pre_prompt_reflection:
deprecated - ignored
- prompt_reflection:
Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer
- auto_gen_description_prompt:
prompt to create a description of the collection.
- auto_gen_document_summary_pre_prompt_summary:
pre_prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_summary_prompt_summary:
prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_sample_questions_prompt:
prompt to create sample questions for a freshly imported document (if enabled).
- default_sample_questions:
default sample questions in case there are no auto-generated sample questions.
- Returns:
str: The ID of the newly created prompt template.
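A sketch creating a simple template and reading it back with get_prompt_template():
template_id = client.create_prompt_template(
    name="concise-qa",
    description="Short answers grounded strictly in the provided context",
    system_prompt="You are a concise assistant. Answer only from the provided context.",
)
template = client.get_prompt_template(template_id)
print(template.name, template.is_default)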
- delete_chat_messages(chat_message_ids: Iterable[str]) Result
Deletes specific chat messages.
- Args:
- chat_message_ids:
List of string ids of chat messages to delete from the system.
- Returns:
Result: Status of the delete job.
- delete_chat_sessions(chat_session_ids: Iterable[str]) Result
Deletes chat sessions and related messages.
- Args:
- chat_session_ids:
List of string ids of chat sessions to delete from the system.
- Returns:
Result: Status of the delete job.
- delete_collections(collection_ids: Iterable[str], timeout: float | None = None)
Deletes collections from the environment.
Documents in the collection will not be deleted.
- Args:
- collection_ids:
List of string ids of collections to delete from the system.
- timeout:
Timeout in seconds.
- delete_document_summaries(summaries_ids: Iterable[str]) Result
Deletes document summaries.
- Args:
- summaries_ids:
List of string ids of document summaries to delete from the system.
- Returns:
Result: Status of the delete job.
- delete_documents(document_ids: Iterable[str], timeout: float | None = None)
Deletes documents from the system.
- Args:
- document_ids:
List of string ids to delete from the system and all collections.
- timeout:
Timeout in seconds.
- delete_documents_from_collection(collection_id: str, document_ids: Iterable[str], timeout: float | None = None)
Removes documents from a collection.
See Also: H2OGPTE.delete_documents for completely removing the document from the environment.
- Args:
- collection_id:
String of the collection to remove documents from.
- document_ids:
List of string ids to remove from the collection.
- timeout:
Timeout in seconds.
- delete_prompt_templates(ids: Iterable[str]) Result
Deletes prompt templates
- Args:
- ids:
List of string ids of prompt templates to delete from the system.
- Returns:
Result: Status of the delete job.
- delete_upload(upload_id: str) str
Delete a file previously uploaded with the “upload” method.
- See Also:
upload: Upload the files into the system to then be ingested into a collection.
ingest_uploads: Add the uploaded files to a collection.
- Args:
- upload_id:
ID of a file to remove
- Returns:
upload_id: The upload ID of the removed file.
- Raises:
Exception: The delete upload request was unsuccessful.
- download_document(destination_directory: str, destination_file_name: str, document_id: str) Path
Downloads a document to a local system directory.
- Args:
- destination_directory:
Destination directory to save file into.
- destination_file_name:
Destination file name.
- document_id:
Document ID.
- Returns:
Path: Path of the downloaded document.
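For illustration (the document ID and paths are placeholders):
path = client.download_document(
    destination_directory="/tmp/downloads",
    destination_file_name="report.pdf",
    document_id="my-document-id",
)
print(path)  # pathlib.Path to the saved file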
- encode_for_retrieval(chunks: List[str], embedding_model: str | None = None) List[List[float]]
Encode texts for semantic searching.
See Also: H2OGPTE.match for getting a list of chunks that semantically match each encoded text.
- Args:
- chunks:
List of strings of texts to be encoded.
- embedding_model:
Embedding model to use. Call list_embedding_models() to see the list of options.
- Returns:
List of list of floats: Each inner list is the embedding of the corresponding input text.
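A small sketch; the length of each inner list is the embedding model’s dimensionality:
embeddings = client.encode_for_retrieval(
    chunks=["What is our refund policy?", "How do I reset my password?"],
)
print(len(embeddings), len(embeddings[0]))  # number of inputs, embedding dimension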
- extract_data(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_extract: str | None = None, prompt_extract: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, **kwargs: Any) ExtractionAnswer
Extract information from one or more contexts using an LLM.
pre_prompt_extract and prompt_extract must be used together. If these variables are not set, the input texts will be summarized into bullet points.
Format of extract content:
{pre_prompt_extract}"""
{text_context_list}
"""
{prompt_extract}
Examples:
extract = h2ogpte.extract_data(
    text_context_list=chunks,
    pre_prompt_extract="Pay attention and look at all people. Your job is to collect their names.\n",
    prompt_extract="List all people's names as JSON.",
)
- Args:
- text_context_list:
List of raw text strings to extract data from.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto or None for the model default. Defaults to ‘’ for no system prompt.
- pre_prompt_extract:
Text that is prepended before the list of texts. If not set, the inputs will be summarized.
- prompt_extract:
Text that is appended after the list of texts. If not set, the inputs will be summarized.
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0) — The seed for the random number generator; only used if temperature > 0. seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
ExtractionAnswer: The list of text responses and any errors.
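Building on the example above, a sketch of consuming the returned ExtractionAnswer (fields per the h2ogpte.types.ExtractionAnswer model documented earlier):
for item in extract.content:   # one extracted string per (map) step
    print(item)
if extract.error:
    print("Error:", extract.error)
print(extract.llm, extract.input_tokens, extract.output_tokens)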
- get_chat_session_prompt_template(chat_session_id: str) PromptTemplate | None
Get the prompt template for a chat session.
- Args:
- chat_session_id:
ID of the chat session
- Returns:
PromptTemplate | None: The prompt template, or None if no template is set.
- get_chat_session_questions(chat_session_id: str, limit: int) List[SuggestedQuestion]
List suggested questions
- Args:
- chat_session_id:
ID of the chat session for which to return the suggested questions.
- limit:
How many questions to return.
- Returns:
List: A list of questions.
- get_chunks(collection_id: str, chunk_ids: Iterable[int]) List[Chunk]
Get the text of specific chunks in a collection.
- Args:
- collection_id:
String id of the collection to search in.
- chunk_ids:
List of ints for the chunks to return. Chunks are indexed starting at 1.
- Returns:
list of Chunk: The text of each requested chunk.
- Raises:
Exception: One or more chunks could not be found.
- get_collection(collection_id: str) Collection
Get metadata about a collection.
- Args:
- collection_id:
String id of the collection to search for.
- Returns:
Collection: Metadata about the collection.
- Raises:
KeyError: The collection was not found.
- get_collection_for_chat_session(chat_session_id: str) Collection
Get metadata about the collection of a chat session.
- Args:
- chat_session_id:
String id of the chat session to search for.
- Returns:
Collection: Metadata about the collection.
- get_collection_prompt_template(collection_id: str) PromptTemplate | None
Get the prompt template for a collection
- Args:
- collection_id:
ID of the collection
- Returns:
PromptTemplate | None: The prompt template, or None if no template is set.
- get_collection_questions(collection_id: str, limit: int) List[SuggestedQuestion]
List suggested questions
- Args:
- collection_id:
ID of the collection for which to return the suggested questions.
- limit:
How many questions to return.
- Returns:
List: A list of questions.
- get_default_collection() CollectionInfo
Get the default collection, to be used for collection API-keys.
- Returns:
CollectionInfo: Default collection info.
- get_document(document_id: str) Document
Fetches information about a specific document.
- Args:
- document_id:
String id of the document.
- Returns:
Document: Metadata about the Document.
- Raises:
KeyError: The document was not found.
- get_job(job_id: str) Job
Fetches information about a specific job.
- Args:
- job_id:
String id of the job.
- Returns:
Job: Metadata about the Job.
- get_llm_names() List[str]
Lists names of available LLMs in the environment.
- Returns:
list of string: Name of each available model.
- get_llm_usage_24h() float
- get_llm_usage_24h_with_limits() LLMUsageLimit
- get_llm_usage_6h() float
- get_llm_usage_with_limits(interval: str) LLMUsageLimit
- get_llms() List[dict]
Lists metadata information about available LLMs in the environment.
- Returns:
list of dict (string, ANY): Name and details about each available model.
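A sketch for selecting a model by name instead of hard-coding an index; the preferred model name here is an assumption, adjust it to what your environment reports:
names = client.get_llm_names()
preferred = "h2oai/h2ogpt-4096-llama2-70b-chat"  # hypothetical deployment name
llm = preferred if preferred in names else 0     # fall back to the first model
answer = client.answer_question(question="Hello", llm=llm)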
- get_meta() Meta
Returns information about the environment and the user.
- Returns:
Meta: Details about the version and license of the environment and the user’s name and email.
- get_prompt_template(id: str | None = None) PromptTemplate
Get a prompt template
- Args:
- id:
String id of the prompt template to retrieve or None for default
- Returns:
PromptTemplate: The prompt template and its prompts.
- Raises:
KeyError: The prompt template was not found.
- get_scheduler_stats() SchedulerStats
Counts the number of global, pending jobs on the server.
- Returns:
SchedulerStats: The number of pending jobs in the queue.
- import_collection_into_collection(collection_id: str, src_collection_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Import all documents from a collection into an existing collection
- Args:
- collection_id:
Collection ID to add documents to.
- src_collection_id:
Collection ID to import documents from.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds.
- import_document_into_collection(collection_id: str, document_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Import an already stored document to an existing collection
- Args:
- collection_id:
Collection ID to add documents to.
- document_id:
Document ID to add.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds.
- ingest_from_file_system(collection_id: str, root_dir: str, glob: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Add files from the local system into a collection.
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- root_dir:
String path of where to look for files.
- glob:
String of the glob pattern used to match files in the root directory.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds
- ingest_uploads(collection_id: str, upload_ids: Iterable[str], gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Add uploaded documents into a specific collection.
- See Also:
upload: Upload the files into the system to then be ingested into a collection.
delete_upload: Delete an uploaded file.
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- upload_ids:
List of string ids of each uploaded document to add to the collection.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds
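A sketch of the full upload-then-ingest workflow. The upload() method is only referenced in “See Also” above, so its signature here (file name plus file object, returning an upload ID) is an assumption:
with open("report.pdf", "rb") as f:
    upload_id = client.upload("report.pdf", f)  # assumed signature, per See Also
client.ingest_uploads(
    collection_id="my-collection-id",           # placeholder collection ID
    upload_ids=[upload_id],
    timeout=600,
)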
- ingest_website(collection_id: str, url: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, follow_links: bool = False, timeout: float | None = None)
Crawl and ingest a website into a collection.
The web page linked from this URL will be imported.
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- url:
String of the url to crawl.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- follow_links:
Whether to also import all web pages linked from this URL. External links will be ignored. Links to other pages on the same domain will be followed as long as they are at the same level or below the URL you specify. Each page will be transformed into a PDF document.
- timeout:
Timeout in seconds
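Example: a minimal sketch (hypothetical URL; assumes a connected client and an existing collection_id):
# Crawl a site and ingest same-domain pages at or below the URL
client.ingest_website(
    collection_id=collection_id,
    url="https://example.com/docs",
    follow_links=True,
)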
- list_chat_message_meta_part(message_id: str, info_type: str) ChatMessageMeta
Fetch one chat message meta information.
- Args:
- message_id:
Message id for which the metadata should be pulled.
- info_type:
Metadata type to fetch. Valid choices are: “self_reflection”, “usage_stats”, “prompt_raw”, “llm_only”, “rag”, “hyde1”, “hyde2”
- Returns:
ChatMessageMeta: Metadata information about the chat message.
- list_chat_message_references(message_id: str) List[ChatMessageReference]
Fetch metadata for references of a chat message.
References are only available for messages sent from an LLM; an empty list will be returned for messages sent by the user.
- Args:
- message_id:
String id of the message to get references for.
- Returns:
list of ChatMessageReference: Metadata including the document name, polygon information, and score.
- list_chat_messages(chat_session_id: str, offset: int, limit: int) List[ChatMessage]
Fetch chat message and metadata for messages in a chat session.
Messages without a reply_to are from the end user; messages with a reply_to are from an LLM, in response to a specific user message.
- Args:
- chat_session_id:
String id of the chat session to filter by.
- offset:
How many chat messages to skip before returning.
- limit:
How many chat messages to return.
- Returns:
list of ChatMessage: Text and metadata for chat messages.
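Example: paging through a session's messages (hypothetical session id; assumes ChatMessage exposes a content attribute):
offset = 0
while True:
    messages = client.list_chat_messages(chat_session_id, offset=offset, limit=20)
    if not messages:
        break
    for message in messages:
        print(message.content)
    offset += len(messages)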
- list_chat_messages_full(chat_session_id: str, offset: int, limit: int) List[ChatMessageFull]
Fetch chat message and metadata for messages in a chat session.
Messages without a reply_to are from the end user; messages with a reply_to are from an LLM, in response to a specific user message.
- Args:
- chat_session_id:
String id of the chat session to filter by.
- offset:
How many chat messages to skip before returning.
- limit:
How many chat messages to return.
- Returns:
list of ChatMessageFull: Text and metadata for chat messages.
- list_chat_sessions_for_collection(collection_id: str, offset: int, limit: int) List[ChatSessionForCollection]
Fetch chat session metadata for chat sessions in a collection.
- Args:
- collection_id:
String id of the collection to filter by.
- offset:
How many chat sessions to skip before returning.
- limit:
How many chat sessions to return.
- Returns:
list of ChatSessionForCollection: Metadata about each chat session including the latest message.
- list_collection_permissions(collection_id: str) List[Permission]
Returns a list of access permissions for a given collection.
The returned list of permissions denotes who has access to the collection and their access level.
- Args:
- collection_id:
ID of the collection to inspect.
- Returns:
list of Permission: Sharing permissions list for the given collection.
- list_collections_for_document(document_id: str, offset: int, limit: int) List[CollectionInfo]
Fetch metadata about each collection the document is a part of.
At this time, each document will only be available in a single collection.
- Args:
- document_id:
String id of the document to search for.
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- Returns:
list of CollectionInfo: Metadata about each collection.
- list_documents_in_collection(collection_id: str, offset: int, limit: int) List[DocumentInfo]
Fetch document metadata for documents in a collection.
- Args:
- collection_id:
String id of the collection to filter by.
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfo: Metadata about each document.
- list_embedding_models() List[str]
- list_list_chat_message_meta(message_id: str) List[ChatMessageMeta]
Fetch chat message meta information.
- Args:
- message_id:
Message id for which the metadata should be pulled.
- Returns:
list of ChatMessageMeta: Metadata about the chat message.
- list_question_reply_feedback_data(offset: int, limit: int) List[QuestionReplyData]
Fetch user’s questions and answers that have feedback.
Questions and answers with metadata and feedback information.
- Args:
- offset:
How many conversations to skip before returning.
- limit:
How many conversations to return.
- Returns:
list of QuestionReplyData: Metadata about questions and answers.
- list_recent_chat_sessions(offset: int, limit: int) List[ChatSessionInfo]
Fetch user’s chat session metadata sorted by last update time.
Chats across all collections will be accessed.
- Args:
- offset:
How many chat sessions to skip before returning.
- limit:
How many chat sessions to return.
- Returns:
list of ChatSessionInfo: Metadata about each chat session including the latest message.
- list_recent_collections(offset: int, limit: int) List[CollectionInfo]
Fetch user’s collection metadata sorted by last update time.
- Args:
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- Returns:
list of CollectionInfo: Metadata about each collection.
- list_recent_collections_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[CollectionInfo]
Fetch user’s collection metadata sorted by last update time.
- Args:
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of CollectionInfo: Metadata about each collection.
- list_recent_document_summaries(document_id: str, offset: int, limit: int) List[DocumentSummary]
Fetches recent document summaries
- Args:
- document_id:
document ID for which to return summaries
- offset:
How many summaries to skip before returning summaries.
- limit:
How many summaries to return.
- Returns:
list of DocumentSummary: Metadata and content of each summary.
- list_recent_documents(offset: int, limit: int) List[DocumentInfo]
Fetch user’s document metadata sorted by last update time.
All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfo: Metadata about each document.
- list_recent_documents_with_summaries(offset: int, limit: int) List[DocumentInfoSummary]
Fetch user’s document metadata sorted by last update time, including the latest document summary.
All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfoSummary: Metadata about each document.
- list_recent_documents_with_summaries_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[DocumentInfoSummary]
Fetch user’s document metadata sorted by last update time, including the latest document summary.
All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of DocumentInfoSummary: Metadata about each document.
- list_recent_prompt_templates(offset: int, limit: int) List[PromptTemplate]
Fetch user’s prompt templates sorted by last update time.
- Args:
- offset:
How many prompt templates to skip before returning.
- limit:
How many prompt templates to return.
- Returns:
list of PromptTemplate: set of prompts
- list_recent_prompt_templates_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[PromptTemplate]
Fetch user’s prompt templates sorted by last update time.
- Args:
- offset:
How many prompt templates to skip before returning.
- limit:
How many prompt templates to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of PromptTemplate: set of prompts
- list_upload() List[str]
List pending file uploads to the H2OGPTE backend.
Uploaded files are not yet accessible and need to be ingested into a collection.
- See Also:
upload: Upload the files into the system to then be ingested into a collection. ingest_uploads: Add the uploaded files to a collection. delete_upload: Delete uploaded file
- Returns:
List[str]: The pending upload ids to be used in ingest jobs.
- Raises:
Exception: The upload list request was unsuccessful.
- list_users(offset: int, limit: int) List[User]
List system users.
Returns a list of all registered users of the system; a registered user is a user that has logged in at least once.
- Args:
- offset:
How many users to skip before returning.
- limit:
How many users to return.
- Returns:
list of User: Metadata about each user.
- make_collection_private(collection_id: str)
Make a collection private
Once a collection is private, other users will no longer be able to access chat history or documents related to the collection.
- Args:
- collection_id:
ID of the collection to make private.
- make_collection_public(collection_id: str)
Make a collection public
Once a collection is public, it will be accessible to all authenticated users of the system.
- Args:
- collection_id:
ID of the collection to make public.
- match_chunks(collection_id: str, vectors: List[List[float]], topics: List[str], offset: int, limit: int, cut_off: float = 0, width: int = 0) List[SearchResult]
Find chunks related to a message using semantic search.
Chunks are sorted by relevance and similarity score to the message.
See Also: H2OGPTE.encode_for_retrieval to create vectors from messages.
- Args:
- collection_id:
ID of the collection to search within.
- vectors:
A list of vectorized messages for running semantic search.
- topics:
A list of document_ids used to filter which documents in the collection to search.
- offset:
How many chunks to skip before returning chunks.
- limit:
How many chunks to return.
- cut_off:
Exclude matches with distances higher than this cut off.
- width:
How many chunks before and after a match to return - not implemented.
- Returns:
list of SearchResult: The document, text, score and related information of the chunk.
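Example: a minimal semantic-search sketch (hypothetical query; assumes a connected client and that SearchResult exposes text and score attributes):
# Encode the query, then match it against the collection
vectors = client.encode_for_retrieval(["How do I reset my password?"])
results = client.match_chunks(
    collection_id=collection_id,
    vectors=vectors,
    topics=[],  # assumption: an empty list applies no document filter
    offset=0,
    limit=5,
)
for result in results:
    print(result.score, result.text)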
- reset_collection_prompt_settings(collection_id: str) str
Reset the prompt settings for a given collection.
- Args:
- collection_id:
ID of the collection to update.
- Returns:
str: ID of the updated collection.
- search_chunks(collection_id: str, query: str, topics: List[str], offset: int, limit: int) List[SearchResult]
Find chunks related to a message using lexical search.
Chunks are sorted by relevance and similarity score to the message.
- Args:
- collection_id:
ID of the collection to search within.
- query:
Question or imperative from the end user to search a collection for.
- topics:
A list of document_ids used to filter which documents in the collection to search.
- offset:
How many chunks to skip before returning chunks.
- limit:
How many chunks to return.
- Returns:
list of SearchResult: The document, text, score and related information of the chunk.
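Example: a minimal lexical-search sketch (hypothetical query; assumes a connected client and an existing collection_id):
results = client.search_chunks(
    collection_id=collection_id,
    query="quarterly revenue",
    topics=[],  # assumption: an empty list applies no document filter
    offset=0,
    limit=5,
)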
- set_chat_message_votes(chat_message_id: str, votes: int) Result
Change the vote value of a chat message.
Set the exact value of a vote for a chat message. Any message type can be updated, but only LLM response votes will be visible in the UI. The expectation is 0: unvoted, -1: dislike, 1: like. Values outside of this range will not be viewable in the UI.
- Args:
- chat_message_id:
ID of a chat message, any message can be used but only LLM responses will be visible in the UI.
- votes:
Integer value for the message. Only -1 and 1 will be visible in the UI as dislike and like respectively.
- Returns:
Result: The status of the update.
- Raises:
Exception: The vote update request was unsuccessful.
- set_chat_session_prompt_template(chat_session_id: str, prompt_template_id: str | None) str
Set the prompt template for a chat session
- Args:
- chat_session_id:
ID of the chat session
- prompt_template_id:
ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.
- Returns:
str: ID of the updated chat session
- set_collection_prompt_template(collection_id: str, prompt_template_id: str | None, strict_check: bool = False) str
Set the prompt template for a collection
- Args:
- collection_id:
ID of the collection to update.
- prompt_template_id:
ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.
- strict_check:
whether to check that the collection’s embedding model and the prompt template are optimally compatible
- Returns:
str: ID of the updated collection.
- share_collection(collection_id: str, permission: Permission) ShareResponseStatus
Share a collection to a user.
The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be shared.
- Args:
- collection_id:
ID of the collection to share.
- permission:
Defines the rule for sharing, i.e. permission level.
- Returns:
ShareResponseStatus: Status of share request.
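Example: a minimal sharing sketch (hypothetical username; assumes Permission is importable from h2ogpte.types and identifies the target user):
from h2ogpte.types import Permission

status = client.share_collection(collection_id, Permission(username="alice"))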
- summarize_content(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, **kwargs: Any) Answer
Summarize one or more contexts using an LLM.
Effective prompt created (excluding the system prompt):
"{pre_prompt_summary} """ {text_context_list} """ {prompt_summary}"
- Args:
- text_context_list:
List of raw text strings to be summarized.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto for the model default or None for h2oGPTe defaults. Defaults to ‘’ for no system prompt.
- pre_prompt_summary:
Text that is prepended before the list of texts. The default can be customized per environment, but the standard default is
"In order to write a concise single-paragraph or bulleted list summary, pay attention to the following text:\n"
- prompt_summary:
Text that is appended after the list of texts. The default can be customized per environment, but the standard default is
"Using only the text above, write a condensed and concise summary of key results (preferably as bullet points):\n"
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, Most creative: 1
seed (int, default: 0) — The seed for the random number generator, only used if temperature > 0, seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
Answer: The response text and any errors.
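Example: a minimal sketch (assumes a connected client and that Answer exposes a content attribute):
answer = client.summarize_content(
    text_context_list=["First chunk of text ...", "Second chunk of text ..."],
    llm=0,  # first available model
)
print(answer.content)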
- summarize_document(document_id: str, system_prompt: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, max_num_chunks: int | None = None, sampling_strategy: str | None = None, timeout: float | None = None) DocumentSummary
Creates a summary of a document.
Effective prompt created (excluding the system prompt):
"{pre_prompt_summary} """ {text from document} """ {prompt_summary}"
- Args:
- document_id:
String id of the document to create a summary from.
- system_prompt:
System Prompt
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- llm:
LLM to use
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, Most creative: 1
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0) — The seed for the random number generator when sampling during generation (if temp>0 or top_k>1 or top_p<1), seed=0 picks a random seed.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- max_num_chunks:
Max limit of chunks to send to the summarizer
- sampling_strategy:
How to sample if the document has more chunks than max_num_chunks. Options are “auto”, “uniform”, “first”, “first+last”, default is “auto” (a hybrid of them all).
- timeout:
Amount of time in seconds to allow the request to run. The default is 86400 seconds.
- Returns:
DocumentSummary: Summary of the document
- Raises:
TimeoutError: The request did not complete in time. SessionError: No summary created. Document wasn’t part of a collection, or LLM timed out, etc.
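Example: a minimal sketch (hypothetical document id; assumes DocumentSummary exposes the summary text as a content attribute):
summary = client.summarize_document(
    document_id=document_id,
    max_num_chunks=100,
    sampling_strategy="auto",
)
print(summary.content)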
- unshare_collection(collection_id: str, permission: Permission) ShareResponseStatus
Remove sharing of a collection to a user.
The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be un-shared. In the case of un-sharing, the Permission’s user is sufficient.
- Args:
- collection_id:
ID of the collection to un-share.
- permission:
Defines the user for which collection access is revoked.
- Returns:
ShareResponseStatus: Status of share request.
- unshare_collection_for_all(collection_id: str) ShareResponseStatus
Remove sharing of a collection to all other users but the original owner.
- Args:
- collection_id:
ID of the collection to un-share.
- Returns:
ShareResponseStatus: Status of share request.
- update_collection(collection_id: str, name: str, description: str) str
Update the metadata for a given collection.
All variables are required. You can use h2ogpte.get_collection(<id>).name or description to get the existing values if you only want to change one or the other.
- Args:
- collection_id:
ID of the collection to update.
- name:
New name of the collection, this is required.
- description:
New description of the collection, this is required.
- Returns:
str: ID of the updated collection.
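Example: changing only the description while keeping the current name (assumes a connected client and an existing collection_id):
collection = client.get_collection(collection_id)
client.update_collection(
    collection_id,
    name=collection.name,  # keep the existing name
    description="Updated description",
)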
- update_collection_rag_type(collection_id: str, name: str, description: str, rag_type: str) str
Update the metadata for a given collection.
All variables are required. You can use h2ogpte.get_collection(<id>).name or description to get the existing values if you only want to change one or the other.
- Args:
- collection_id:
ID of the collection to update.
- name:
New name of the collection, this is required.
- description:
New description of the collection, this is required.
- rag_type:
str, one of:
"llm_only"
LLM Only (no RAG) - Generates a response to answer the user’s query without any supporting document contexts. Requires 1 LLM call.
"rag"
RAG (Retrieval Augmented Generation) - RAG with neural/lexical hybrid search using the user’s query to find relevant contexts from a collection for generating a response. Requires 1 LLM call.
"hyde1"
HyDE RAG (Hypothetical Document Embedding) - Like RAG, but uses the LLM Only response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.
"hyde2"
HyDE RAG+ (Combined HyDE+RAG) - Like RAG, but uses the HyDE RAG response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.
"rag+"
RAG+ - Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.
- Returns:
str: ID of the updated collection.
- update_prompt_template(id: str, name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None) str
Update a prompt template
- Args:
- id:
String ID of the prompt template to update
- name:
Name of the prompt template
- description:
Description of the prompt template
- lang:
Language code
- system_prompt:
System Prompt
- pre_prompt_query:
Text that is prepended before the contextual document chunks.
- prompt_query:
Text that is appended to the beginning of the user’s message.
- hyde_no_rag_llm_prompt_extension:
LLM prompt extension.
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- system_prompt_reflection:
System Prompt for self-reflection
- pre_prompt_reflection:
deprecated - ignored
- prompt_reflection:
Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer
- auto_gen_description_prompt:
prompt to create a description of the collection.
- auto_gen_document_summary_pre_prompt_summary:
pre_prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_summary_prompt_summary:
prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_sample_questions_prompt:
prompt to create sample questions for a freshly imported document (if enabled).
- default_sample_questions:
default sample questions in case there are no auto-generated sample questions.
- Returns:
str: The ID of the updated prompt template.
- upload(file_name: str, file: Any) str
Upload a file to the H2OGPTE backend.
Uploaded files are not yet accessible and need to be ingested into a collection.
- See Also:
ingest_uploads: Add the uploaded files to a collection. delete_upload: Delete uploaded file
- Args:
- file_name:
What to name the file on the server, must include file extension.
- file:
File object to upload, often an opened file from with open(…) as f.
- Returns:
str: The upload id to be used in ingest jobs.
- Raises:
Exception: The upload request was unsuccessful.
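Example: a minimal sketch of the upload lifecycle (hypothetical file name; assumes a connected client):
with open("notes.txt", "rb") as f:
    upload_id = client.upload("notes.txt", f)
# Pending uploads can be inspected and deleted before ingestion
print(client.list_upload())
client.delete_upload(upload_id)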
- class h2ogpte.H2OGPTEAsync(address: str, api_key: str | None = None, token_provider: AsyncTokenProvider | None = None, verify: bool | str = True, strict_version_check: bool = False)
Bases:
object
Connect to and interact with an h2oGPTe server, via an async interface.
- INITIAL_WAIT_INTERVAL = 0.1
- MAX_WAIT_INTERVAL = 1.0
- TIMEOUT = 3600.0
- WAIT_BACKOFF_FACTOR = 1.4
- async answer_question(question: str, system_prompt: str | None = '', pre_prompt_query: str | None = None, prompt_query: str | None = None, text_context_list: List[str] | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, chat_conversation: List[Tuple[str, str]] | None = None, timeout: float | None = None, **kwargs: Any) Answer
Send a message and get a response from an LLM.
Note: For general chat with an LLM, we recommend session.query() for higher throughput in multi-user environments. The following code sample shows the recommended method:
# Establish a chat session
chat_session_id = client.create_chat_session()
# Connect to the chat session
with client.connect(chat_session_id) as session:
    # Send a basic query and print the reply
    reply = session.query("Hello", timeout=60)
    print(reply.content)
Format of input content:
{text_context_list} """\n{chat_conversation}{question}
- Args:
- question:
Text query to send to the LLM.
- text_context_list:
List of raw text strings to be included, will be converted to a string like this: "\n\n".join(text_context_list)
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto for the model default, or None for h2oGPTe default. Defaults to ‘’ for no system prompt.
- pre_prompt_query:
Text that is prepended before the contextual document chunks in text_context_list. Only used if text_context_list is provided.
- prompt_query:
Text that is appended after the contextual document chunks in text_context_list. Only used if text_context_list is provided.
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, Most creative: 1
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0) — The seed for the random number generator when sampling during generation (if temp>0 or top_k>1 or top_p<1), seed=0 picks a random seed.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- chat_conversation:
List of tuples for (human, bot) conversation that will be prepended to a (question, None) case for a query.
- timeout:
Timeout in seconds.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
Answer: The response text and any errors.
- Raises:
TimeoutError: If response isn’t completed in timeout seconds.
- async cancel_job(job_id: str) Result
Stops a specific job from running on the server.
- Args:
- job_id:
String id of the job to cancel.
- Returns:
Result: Status of canceling the job.
- connect(chat_session_id: str, rag_type: str | None = None, prompt_template_id: str | None = None) SessionAsync
Create and participate in a chat session. This is a live connection to the H2OGPTE server contained to a specific chat session on top of a single collection of documents. Users will find all questions and responses in this session in a single chat history in the UI.
- Args:
- chat_session_id:
ID of the chat session to connect to.
- rag_type:
RAG type to use.
- prompt_template_id:
ID of the prompt template to use.
- Returns:
Session: Live chat session connection with an LLM.
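Example: a minimal async sketch (hypothetical address and key; assumes SessionAsync works as an async context manager and its query method is awaitable):
import asyncio

from h2ogpte import H2OGPTEAsync

async def main() -> None:
    client = H2OGPTEAsync(address="https://h2ogpte.example.com", api_key="...")
    chat_session_id = await client.create_chat_session()
    async with client.connect(chat_session_id) as session:
        reply = await session.query("Hello", timeout=60)
        print(reply.content)

asyncio.run(main())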
- async count_assets() ObjectCount
Counts number of objects owned by the user.
- Returns:
ObjectCount: The count of chat sessions, collections, and documents.
- async count_chat_sessions() int
Returns the count of chat sessions owned by the user.
- async count_chat_sessions_for_collection(collection_id: str) int
Counts number of chat sessions in a specific collection.
- Args:
- collection_id:
String id of the collection to count chat sessions for.
- Returns:
int: The count of chat sessions in that collection.
- async count_collections() int
Counts number of collections owned by the user.
- Returns:
int: The count of collections owned by the user.
- async count_documents() int
Returns the count of documents accessed by the user.
- async count_documents_in_collection(collection_id: str) int
Counts the number of documents in a specific collection.
- Args:
- collection_id:
String id of the collection to count documents for.
- Returns:
int: The number of documents in that collection.
- async count_documents_owned_by_me() int
Returns the counts of documents owned by the user.
- async count_prompt_templates() int
Counts number of prompt templates
- Returns:
int: The count of prompt templates
- async count_question_reply_feedback() int
Fetch the count of the user’s questions and answers.
- Returns:
int: the count of questions and replies.
- async create_chat_session(collection_id: str | None = None) str
Creates a new chat session for asking questions (of documents).
- Args:
- collection_id:
String id of the collection to chat with. If None, chat with LLM directly.
- Returns:
str: The ID of the newly created chat session.
- async create_chat_session_on_default_collection() str
Creates a new chat session for asking questions of documents on the default collection.
- Returns:
str: The ID of the newly created chat session.
- async create_collection(name: str, description: str, embedding_model: str | None = None, prompt_template_id: str | None = None) str
Creates a new collection.
- Args:
- name:
Name of the collection.
- description:
Description of the collection
- embedding_model:
Embedding model to use. Call list_embedding_models() for a list of options.
- prompt_template_id:
ID of the prompt template to get the prompts from. None to fall back to system defaults.
- Returns:
str: The ID of the newly created collection.
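Example: a minimal sketch, to be run inside an async function (hypothetical values):
collection_id = await client.create_collection(
    name="Contracts",
    description="Scanned contract PDFs",
)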
- async create_prompt_template(name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None) str
Create a new prompt template
- Args:
- name:
Name of the prompt template
- description:
Description of the prompt template
- lang:
Language code
- system_prompt:
System Prompt
- pre_prompt_query:
Text that is prepended before the contextual document chunks.
- prompt_query:
Text that is appended to the beginning of the user’s message.
- hyde_no_rag_llm_prompt_extension:
LLM prompt extension.
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- system_prompt_reflection:
System Prompt for self-reflection
- pre_prompt_reflection:
Deprecated - ignored
- prompt_reflection:
Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer
- auto_gen_description_prompt:
prompt to create a description of the collection.
- auto_gen_document_summary_pre_prompt_summary:
pre_prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_summary_prompt_summary:
prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_sample_questions_prompt:
prompt to create sample questions for a freshly imported document (if enabled).
- default_sample_questions:
default sample questions in case there are no auto-generated sample questions.
- Returns:
str: The ID of the newly created prompt template.
- async delete_chat_messages(chat_message_ids: Iterable[str]) Result
Deletes specific chat messages.
- Args:
- chat_message_ids:
List of string ids of chat messages to delete from the system.
- Returns:
Result: Status of the delete job.
- async delete_chat_sessions(chat_session_ids: Iterable[str]) Result
Deletes chat sessions and related messages.
- Args:
- chat_session_ids:
List of string ids of chat sessions to delete from the system.
- Returns:
Result: Status of the delete job.
- async delete_collections(collection_ids: Iterable[str], timeout: float | None = None) Job
Deletes collections from the environment. Documents in the collection will not be deleted.
- Args:
- collection_ids:
List of string ids of collections to delete from the system.
- timeout:
Timeout in seconds.
- async delete_document_summaries(summaries_ids: Iterable[str]) Result
Deletes document summaries.
- Args:
- summaries_ids:
List of string ids of a document summary to delete from the system.
- Returns:
Result: Status of the delete job.
- async delete_documents(document_ids: Iterable[str], timeout: float | None = None) Job
Deletes documents from the system.
- Args:
- document_ids:
List of string ids to delete from the system and all collections.
- timeout:
Timeout in seconds.
- async delete_documents_from_collection(collection_id: str, document_ids: Iterable[str], timeout: float | None = None) Job
Removes documents from a collection.
See Also: H2OGPTE.delete_documents for completely removing the document from the environment.
- Args:
- collection_id:
String of the collection to remove documents from.
- document_ids:
List of string ids to remove from the collection.
- timeout:
Timeout in seconds.
- async delete_prompt_templates(ids: Iterable[str]) Result
Deletes prompt templates
- Args:
- ids:
List of string ids of prompt templates to delete from the system.
- Returns:
Result: Status of the delete job.
- async delete_upload(upload_id: str) str
Delete a file previously uploaded with the “upload” method.
- See Also:
upload: Upload the files into the system to then be ingested into a collection. ingest_uploads: Add the uploaded files to a collection.
- Args:
- upload_id:
ID of a file to remove
- Returns:
upload_id: The upload id of the removed file.
- Raises:
Exception: The delete upload request was unsuccessful.
- async download_document(destination_directory: str | Path, destination_file_name: str, document_id: str) Path
Downloads a document to a local system directory.
- Args:
- destination_directory:
Destination directory to save file into.
- destination_file_name:
Destination file name.
- document_id:
Document ID.
- Returns:
The Path to the file written to disk.
- async encode_for_retrieval(chunks: Iterable[str], embedding_model: str | None = None) List[List[float]]
Encode texts for semantic searching.
See Also: H2OGPTE.match for getting a list of chunks that semantically match each encoded text.
- Args:
- chunks:
List of strings of texts to be encoded.
- embedding_model:
Embedding model to use. Call list_embedding_models() for a list of options.
- Returns:
list of list of floats: Each inner list is the encoding of one original text.
- async extract_data(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_extract: str | None = None, prompt_extract: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, **kwargs: Any) ExtractionAnswer
Extract information from one or more contexts using an LLM. pre_prompt_extract and prompt_extract variables must be used together. If these variables are not set, the input texts will be summarized into bullet points. Format of extract content:
"{pre_prompt_extract}""" {text_context_list} """\n{prompt_extract}"
Examples:
extract = h2ogpte.extract_data(
    text_context_list=chunks,
    pre_prompt_extract="Pay attention and look at all people. Your job is to collect their names.\n",
    prompt_extract="List all people's names as JSON.",
)
- Args:
- text_context_list:
List of raw text strings to extract data from.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto or None for the model default. Defaults to ‘’ for no system prompt.
- pre_prompt_extract:
Text that is prepended before the list of texts. If not set, the inputs will be summarized.
- prompt_extract:
Text that is appended after the list of texts. If not set, the inputs will be summarized.
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, Most creative: 1
seed (int, default: 0) — The seed for the random number generator, only used if temperature > 0, seed=0 will pick a random number for each call, seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
ExtractionAnswer: The list of text responses and any errors.
- async get_chat_session_prompt_template(chat_session_id: str) PromptTemplate | None
Get the prompt template for a chat_session
- Args:
- chat_session_id:
ID of the chat session
- Returns:
PromptTemplate: The prompt template, or None if the system default is used.
- async get_chat_session_questions(chat_session_id: str, limit: int) List[SuggestedQuestion]
List suggested questions
- Args:
- chat_session_id:
A chat session ID of which to return the suggested questions
- limit:
How many questions to return.
- Returns:
List: A list of questions.
- async get_chunks(collection_id: str, chunk_ids: Iterable[int]) List[Chunk]
Get the text of specific chunks in a collection.
- Args:
- collection_id:
String id of the collection to search in.
- chunk_ids:
List of ints for the chunks to return. Chunks are indexed starting at 1.
- Returns:
list of Chunk: The text of each chunk.
- Raises:
Exception: One or more chunks could not be found.
- async get_collection(collection_id: str) Collection
Get metadata about a collection.
- Args:
- collection_id:
String id of the collection to search for.
- Returns:
Collection: Metadata about the collection.
- Raises:
KeyError: The collection was not found.
- async get_collection_for_chat_session(chat_session_id: str) Collection
Get metadata about the collection of a chat session.
- Args:
- chat_session_id:
String id of the chat session to search for.
- Returns:
Collection: Metadata about the collection.
- async get_collection_prompt_template(collection_id: str) PromptTemplate | None
Get the prompt template for a collection
- Args:
- collection_id:
ID of the collection
- Returns:
PromptTemplate: The prompt template, or None if the system default is used.
- async get_collection_questions(collection_id: str, limit: int) List[SuggestedQuestion]
List suggested questions
- Args:
- collection_id:
A collection ID of which to return the suggested questions
- limit:
How many questions to return.
- Returns:
List: A list of questions.
- async get_default_collection() CollectionInfo
Get the default collection, to be used for collection API-keys.
- Returns:
CollectionInfo: Default collection info.
- async get_document(document_id: str) Document
Fetches information about a specific document.
- Args:
- document_id:
String id of the document.
- Returns:
Document: Metadata about the Document.
- Raises:
KeyError: The document was not found.
- async get_job(job_id: str) Job
Fetches information about a specific job.
- Args:
- job_id:
String id of the job.
- Returns:
Job: Metadata about the Job.
- async get_llm_names() List[str]
Lists names of available LLMs in the environment.
- Returns:
list of string: Name of each available model.
- async get_llm_usage_24h() float
- async get_llm_usage_24h_with_limits() LLMUsageLimit
- async get_llm_usage_6h() float
- async get_llm_usage_with_limits(interval: str) LLMUsageLimit
- async get_llms() List[Dict[str, Any]]
Lists metadata information about available LLMs in the environment.
- Returns:
list of dict (string, ANY): Name and details about each available model.
- async get_meta() Meta
Returns various information about the server environment, including the current build version, license information, the user, etc.
- async get_prompt_template(id: str | None = None) PromptTemplate
Get a prompt template
- Args:
- id:
String id of the prompt template to retrieve or None for default
- Returns:
PromptTemplate: prompts
- Raises:
KeyError: The prompt template was not found.
- async get_scheduler_stats() SchedulerStats
Count the number of global, pending jobs on the server.
- Returns:
SchedulerStats: The queue length for number of jobs.
- async import_collection_into_collection(collection_id: str, src_collection_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Import all documents from a collection into an existing collection
- Args:
- collection_id:
Collection ID to add documents to.
- src_collection_id:
Collection ID to import documents from.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds.
- async import_document_into_collection(collection_id: str, document_id: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None)
Import an already stored document into an existing collection
- Args:
- collection_id:
Collection ID to add documents to.
- document_id:
Document ID to add.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds.
- async ingest_from_file_system(collection_id: str, root_dir: str, glob: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None) Job
Add files from the local system into a collection.
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- root_dir:
String path of where to look for files.
- glob:
String of the glob pattern used to match files in the root directory.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds
- async ingest_uploads(collection_id: str, upload_ids: Iterable[str], gen_doc_summaries: bool = False, gen_doc_questions: bool = False, timeout: float | None = None) Job
Add uploaded documents into a specific collection.
- See Also:
upload: Upload the files into the system to then be ingested into a collection. delete_upload: Delete uploaded file.
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- upload_ids:
List of string ids of each uploaded document to add to the collection.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- timeout:
Timeout in seconds
- async ingest_website(collection_id: str, url: str, gen_doc_summaries: bool = False, gen_doc_questions: bool = False, follow_links: bool = False, timeout: float | None = None) Job
Crawl and ingest a website into a collection.
The web page at this URL will be imported.
- Args:
- collection_id:
String id of the collection to add the ingested documents into.
- url:
String of the url to crawl.
- gen_doc_summaries:
Whether to auto-generate document summaries (uses LLM)
- gen_doc_questions:
Whether to auto-generate sample questions for each document (uses LLM)
- follow_links:
Whether to also import all web pages linked from this URL. External links will be ignored. Links to other pages on the same domain will be followed as long as they are at the same level or below the URL you specify. Each page will be transformed into a PDF document.
- timeout:
Timeout in seconds
- async list_chat_message_meta_part(message_id: str, info_type: str) ChatMessageMeta
Fetch one chat message meta information.
- Args:
- message_id:
Message id for which the metadata should be pulled.
- info_type:
Metadata type to fetch. Valid choices are: “self_reflection”, “usage_stats”, “prompt_raw”, “llm_only”, “rag”, “hyde1”, “hyde2”
- Returns:
ChatMessageMeta: Metadata information about the chat message.
- async list_chat_message_references(message_id: str) List[ChatMessageReference]
Fetch metadata for references of a chat message.
References are only available for messages sent from an LLM; an empty list will be returned for messages sent by the user.
- Args:
- message_id:
String id of the message to get references for.
- Returns:
list of ChatMessageReference: Metadata including the document name, polygon information, and score.
- async list_chat_messages(chat_session_id: str, offset: int, limit: int) List[ChatMessage]
Fetch chat message and metadata for messages in a chat session.
Messages without a reply_to are from the end user; messages with a reply_to are from an LLM, in response to a specific user message.
- Args:
- chat_session_id:
String id of the chat session to filter by.
- offset:
How many chat messages to skip before returning.
- limit:
How many chat messages to return.
- Returns:
list of ChatMessage: Text and metadata for chat messages.
- async list_chat_messages_full(chat_session_id: str, offset: int, limit: int) List[ChatMessageFull]
Fetch chat message and metadata for messages in a chat session.
Messages without a reply_to are from the end user; messages with a reply_to are from an LLM, in response to a specific user message.
- Args:
- chat_session_id:
String id of the chat session to filter by.
- offset:
How many chat messages to skip before returning.
- limit:
How many chat messages to return.
- Returns:
list of ChatMessageFull: Text and metadata for chat messages.
- async list_chat_sessions_for_collection(collection_id: str, offset: int, limit: int) List[ChatSessionForCollection]
Fetch chat session metadata for chat sessions in a collection.
- Args:
- collection_id:
String id of the collection to filter by.
- offset:
How many chat sessions to skip before returning.
- limit:
How many chat sessions to return.
- Returns:
list of ChatSessionForCollection: Metadata about each chat session including the latest message.
- async list_collection_permissions(collection_id: str) List[Permission]
Returns a list of access permissions for a given collection.
The returned list of permissions denotes who has access to the collection and their access level.
- Args:
- collection_id:
ID of the collection to inspect.
- Returns:
list of Permission: Sharing permissions list for the given collection.
- async list_collections_for_document(document_id: str, offset: int, limit: int) List[CollectionInfo]
Fetch metadata about each collection the document is a part of. At this time, each document will only be available in a single collection.
- Args:
- document_id:
String id of the document to search for.
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- Returns:
list of CollectionInfo: Metadata about each collection.
- async list_documents_in_collection(collection_id: str, offset: int, limit: int) List[DocumentInfo]
Fetch document metadata for documents in a collection.
- Args:
- collection_id:
String id of the collection to filter by.
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfo: Metadata about each document.
- async list_embedding_models() List[str]
- async list_list_chat_message_meta(message_id: str) List[ChatMessageMeta]
Fetch chat message meta information.
- Args:
- message_id:
Message id for which the metadata should be pulled.
- Returns:
list of ChatMessageMeta: Metadata about the chat message.
- async list_question_reply_feedback_data(offset: int, limit: int) List[QuestionReplyData]
Fetch user’s questions and answers.
Questions and answers with metadata.
- Args:
- offset:
How many conversations to skip before returning.
- limit:
How many conversations to return.
- Returns:
list of QuestionReplyData: Metadata about questions and answers.
- async list_recent_chat_sessions(offset: int, limit: int) List[ChatSessionInfo]
Fetch user’s chat session metadata sorted by last update time. Chats across all collections will be accessed.
- Args:
- offset:
How many chat sessions to skip before returning.
- limit:
How many chat sessions to return.
- Returns:
list of ChatSessionInfo: Metadata about each chat session including the latest message.
- async list_recent_collections(offset: int, limit: int) List[CollectionInfo]
Fetch user’s collection metadata sorted by last update time.
- Args:
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- Returns:
list of CollectionInfo: Metadata about each collection.
- async list_recent_collections_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[CollectionInfo]
Fetch user’s collection metadata sorted by last update time.
- Args:
- offset:
How many collections to skip before returning.
- limit:
How many collections to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of CollectionInfo: Metadata about each collection.
- async list_recent_document_summaries(document_id: str, offset: int, limit: int) List[DocumentSummary]
Fetches recent document summaries
- Args:
- document_id:
document ID for which to return summaries
- offset:
How many summaries to skip before returning summaries.
- limit:
How many summaries to return.
- Returns:
list of DocumentSummary: Metadata and content of each summary.
- async list_recent_documents(offset: int, limit: int) List[DocumentInfo]
Fetch user’s document metadata sorted by last update time. All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfo: Metadata about each document.
- async list_recent_documents_with_summaries(offset: int, limit: int) List[DocumentInfoSummary]
Fetch user’s document metadata sorted by last update time, including the latest document summary. All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- Returns:
list of DocumentInfoSummary: Metadata about each document.
- async list_recent_documents_with_summaries_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[DocumentInfoSummary]
Fetch user’s document metadata sorted by last update time, including the latest document summary. All documents owned by the user, regardless of collection, are accessed.
- Args:
- offset:
How many documents to skip before returning.
- limit:
How many documents to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of DocumentInfoSummary: Metadata about each document.
- async list_recent_prompt_templates(offset: int, limit: int) List[PromptTemplate]
Fetch user’s prompt templates sorted by last update time.
- Args:
- offset:
How many prompt templates to skip before returning.
- limit:
How many prompt templates to return.
- Returns:
list of PromptTemplate: set of prompts
- async list_recent_prompt_templates_sort(offset: int, limit: int, sort_column: str, ascending: bool) List[PromptTemplate]
Fetch user’s prompt templates sorted by last update time.
- Args:
- offset:
How many prompt templates to skip before returning.
- limit:
How many prompt templates to return.
- sort_column:
Sort column.
- ascending:
When True, return sorted by sort_column in ascending order.
- Returns:
list of PromptTemplate: set of prompts
- async list_upload() List[str]
List pending file uploads to the H2OGPTE backend. Uploaded files are not yet accessible and need to be ingested into a collection.
- See Also:
upload: Upload the files into the system to then be ingested into a collection. ingest_uploads: Add the uploaded files to a collection. delete_upload: Delete uploaded file
- Returns:
List[str]: The pending upload ids to be used in ingest jobs.
- Raises:
Exception: The upload list request was unsuccessful.
- async list_users(offset: int, limit: int) List[User]
List system users.
Returns a list of all registered users of the system; a registered user is a user that has logged in at least once.
- Args:
- offset:
How many users to skip before returning.
- limit:
How many users to return.
- Returns:
list of User: Metadata about each user.
- async make_collection_private(collection_id: str)
Make a collection private. Once a collection is private, other users will no longer be able to access chat history or documents related to the collection.
- Args:
- collection_id:
ID of the collection to make private.
- async make_collection_public(collection_id: str) None
Make a collection public. Once a collection is public, it will be accessible to all authenticated users of the system.
- Args:
- collection_id:
ID of the collection to make public.
- async match_chunks(collection_id: str, vectors: List[List[float]], topics: List[str], offset: int, limit: int, cut_off: float = 0, width: int = 0) List[SearchResult]
Find chunks related to a message using semantic search. Chunks are sorted by relevance and similarity score to the message. See Also: H2OGPTE.encode_for_retrieval to create vectors from messages.
- Args:
- collection_id:
ID of the collection to search within.
- vectors:
A list of vectorized messages for running semantic search.
- topics:
A list of document_ids used to filter which documents in the collection to search.
- offset:
How many chunks to skip before returning chunks.
- limit:
How many chunks to return.
- cut_off:
Exclude matches with distances higher than this cut off.
- width:
How many chunks before and after a match to return - not implemented.
- Returns:
list of SearchResult: The document, text, score and related information of the chunk.
- async reset_collection_prompt_settings(collection_id: str) str
Reset the prompt settings for a given collection.
- Args:
- collection_id:
ID of the collection to update.
- Returns:
str: ID of the updated collection.
- async search_chunks(collection_id: str, query: str, topics: List[str], offset: int, limit: int) List[SearchResult]
Find chunks related to a message using lexical search. Chunks are sorted by relevance and similarity score to the message.
- Args:
- collection_id:
ID of the collection to search within.
- query:
Question or imperative from the end user to search a collection for.
- topics:
A list of document_ids used to filter which documents in the collection to search.
- offset:
How many chunks to skip before returning chunks.
- limit:
How many chunks to return.
- Returns:
list of SearchResult: The document, text, score and related information of the chunk.
- async set_chat_message_votes(chat_message_id: str, votes: int) Result
Change the vote value of a chat message. Set the exact value of a vote for a chat message. Any message type can be updated, but only LLM response votes will be visible in the UI. The expectation is 0: unvoted, -1: dislike, 1: like. Values outside of this range will not be viewable in the UI.
- Args:
- chat_message_id:
ID of a chat message, any message can be used but only LLM responses will be visible in the UI.
- votes:
Integer value for the message. Only -1 and 1 will be visible in the UI as dislike and like respectively.
- Returns:
Result: The status of the update.
- Raises:
Exception: The vote update request was unsuccessful.
- async set_chat_session_prompt_template(chat_session_id: str, prompt_template_id: str | None) str
Set the prompt template for a chat session
- Args:
- chat_session_id:
ID of the chat session
- prompt_template_id:
ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.
- Returns:
str: ID of the updated chat session
- async set_collection_prompt_template(collection_id: str, prompt_template_id: str | None, strict_check: bool = False) str
Set the prompt template for a collection
- Args:
- collection_id:
ID of the collection to update.
- prompt_template_id:
ID of the prompt template to get the prompts from. None to delete and fall back to system defaults.
- strict_check:
whether to check that the collection’s embedding model and the prompt template are optimally compatible
- Returns:
str: ID of the updated collection.
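For instance, a collection's template can be swapped in and later reset to system defaults (a sketch, assuming an async client, a collection_id, and a known prompt_template_id):
updated = await client.set_collection_prompt_template(
    collection_id,
    prompt_template_id,
    strict_check=True,  # verify embedding model/template compatibility
)
# Fall back to the system defaults again:
await client.reset_collection_prompt_settings(collection_id)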
- async share_collection(collection_id: str, permission: SharePermission) ShareResponseStatus
Share a collection with a user.
The permission attribute defines the level of access and who can access the collection; the collection_id attribute denotes the collection to be shared.
- Args:
- collection_id:
ID of the collection to share.
- permission:
Defines the rule for sharing, i.e. permission level.
- Returns:
ShareResponseStatus: Status of share request.
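A sketch of a share call; the import path and the SharePermission field used here are assumptions, not confirmed by this page:
from h2ogpte.types import SharePermission  # import path assumed

status = await client.share_collection(
    collection_id=collection_id,
    permission=SharePermission(username="colleague@example.com"),  # field assumed
)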
- async summarize_content(text_context_list: List[str] | None = None, system_prompt: str = '', pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, **kwargs: Any) Answer
Summarize one or more contexts using an LLM.
Format of summary content:
"{pre_prompt_summary}""" {text_context_list} """\n{prompt_summary}"
- Args:
- text_context_list:
List of raw text strings to be summarized.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto for the model default or None for h2oGPTe defaults. Defaults to ‘’ for no system prompt.
- pre_prompt_summary:
Text that is prepended before the list of texts. The default can be customized per environment, but the standard default is
"In order to write a concise single-paragraph or bulleted list summary, pay attention to the following text:\n"
- prompt_summary:
Text that is appended after the list of texts. The default can be customized per environment, but the standard default is
"Using only the text above, write a condensed and concise summary of key results (preferably as bullet points):\n"
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0) — The seed for the random number generator; only used if temperature > 0. seed=0 will pick a random number for each call; seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- kwargs:
Dictionary of kwargs to pass to h2oGPT.
- Returns:
Answer: The response text and any errors.
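A sketch that summarizes two ad-hoc passages deterministically (assumes an async client, and that Answer exposes a content attribute):
answer = await client.summarize_content(
    text_context_list=[
        "Q3 shipments rose 12% quarter over quarter.",
        "Returns fell to 1.4% of units shipped.",
    ],
    llm=0,  # first available model
    llm_args={"temperature": 0, "max_new_tokens": 512},
)
print(answer.content)  # attribute assumed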
- async summarize_document(document_id: str, system_prompt: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, max_num_chunks: int | None = None, sampling_strategy: str | None = None, timeout: float | None = None) DocumentSummary
Creates a summary of a document.
- Args:
- document_id:
String id of the document to create a summary from.
- system_prompt:
System Prompt
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- llm:
LLM to use
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
seed (int, default: 0) — The seed for the random number generator; only used if temperature > 0. seed=0 will pick a random number for each call; seed > 0 will be fixed.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- max_num_chunks:
Max limit of chunks to send to the summarizer
- sampling_strategy:
How to sample if the document has more chunks than max_num_chunks. Options are “auto”, “uniform”, “first”, “first+last”, default is “auto” (a hybrid of them all).
- timeout:
Amount of time in seconds to allow the request to run. The default is 86400 seconds.
- Returns:
DocumentSummary: Summary of the document
- Raises:
TimeoutError: The request did not complete in time.
SessionError: No summary created. Document wasn’t part of a collection, or LLM timed out, etc.
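A sketch for a long document that caps how many chunks reach the summarizer (assumes an async client and a document_id already in a collection; the content attribute on DocumentSummary is an assumption):
try:
    summary = await client.summarize_document(
        document_id,
        max_num_chunks=50,
        sampling_strategy="first+last",  # favor the opening and closing chunks
        timeout=600,
    )
    print(summary.content)  # attribute assumed
except TimeoutError:
    print("Summarization did not finish in time.")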
- async unshare_collection(collection_id: str, permission: SharePermission) ShareResponseStatus
Remove sharing of a collection from a user. The permission attribute defines the user whose access is revoked; the collection_id attribute denotes the collection to be un-shared. In the case of un-sharing, the Permission’s user is sufficient.
- Args:
- collection_id:
ID of the collection to un-share.
- permission:
Defines the user for which collection access is revoked.
- Returns:
ShareResponseStatus: Status of share request.
- async unshare_collection_for_all(collection_id: str) ShareResponseStatus
Remove sharing of a collection from all users except the original owner.
- Args:
- collection_id:
ID of the collection to un-share.
- Returns:
ShareResponseStatus: Status of share request.
- async update_collection(collection_id: str, name: str, description: str) str
Update the metadata for a given collection. All variables are required. You can use h2ogpte.get_collection(<id>).name or .description to get the existing values if you only want to change one or the other.
- Args:
- collection_id:
ID of the collection to update.
- name:
New name of the collection, this is required.
- description:
New description of the collection, this is required.
- Returns:
str: ID of the updated collection.
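Because both fields are required, changing just one means reading back the other first, as the note above suggests; a minimal sketch, assuming an async client:
current = await client.get_collection(collection_id)
await client.update_collection(
    collection_id,
    name=current.name,  # unchanged
    description="2024 shipping records, refreshed quarterly",
)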
- async update_collection_rag_type(collection_id: str, name: str, description: str, rag_type) str
Update the metadata for a given collection. All variables are required. You can use h2ogpte.get_collection(<id>).name or .description to get the existing values if you only want to change one or the other.
- Args:
- collection_id:
ID of the collection to update.
- name:
New name of the collection, this is required.
- description:
New description of the collection, this is required.
- rag_type: str, one of
"llm_only"
LLM Only (no RAG) - Generates a response to answer the user’s query without any supporting document contexts. Requires 1 LLM call.
"rag"
RAG (Retrieval Augmented Generation) - RAG with neural/lexical hybrid search using the user’s query to find relevant contexts from a collection for generating a response. Requires 1 LLM call.
"hyde1"
HyDE RAG (Hypothetical Document Embedding) - Like RAG, but uses the LLM Only response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.
"hyde2"
HyDE RAG+ (Combined HyDE+RAG) - Like RAG, but uses the HyDE RAG response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.
"rag+"
RAG+ - Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.
- Returns:
str: ID of the updated collection.
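A sketch that switches an existing collection to HyDE retrieval while keeping its name and description (assumes an async client):
current = await client.get_collection(collection_id)
updated = await client.update_collection_rag_type(
    collection_id,
    name=current.name,
    description=current.description,
    rag_type="hyde1",  # 2 LLM calls per query, per the list above
)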
- async update_prompt_template(id: str, name: str, description: str | None = None, lang: str | None = None, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, hyde_no_rag_llm_prompt_extension: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, system_prompt_reflection: str | None = None, pre_prompt_reflection: str | None = None, prompt_reflection: str | None = None, auto_gen_description_prompt: str | None = None, auto_gen_document_summary_pre_prompt_summary: str | None = None, auto_gen_document_summary_prompt_summary: str | None = None, auto_gen_document_sample_questions_prompt: str | None = None, default_sample_questions: List[str] | None = None) str
Update a prompt template
- Args:
- id:
String ID of the prompt template to update
- name:
Name of the prompt template
- description:
Description of the prompt template
- lang:
Language code
- system_prompt:
System Prompt
- pre_prompt_query:
Text that is prepended before the contextual document chunks.
- prompt_query:
Text that is prepended to the beginning of the user’s message, after the contextual document chunks.
- hyde_no_rag_llm_prompt_extension:
LLM prompt extension.
- pre_prompt_summary:
Prompt that goes before each large piece of text to summarize
- prompt_summary:
Prompt that goes after each large piece of text to summarize
- system_prompt_reflection:
System Prompt for self-reflection
- pre_prompt_reflection:
Deprecated - ignored
- prompt_reflection:
Template for self-reflection, must contain two occurrences of %s for full previous prompt (including system prompt, document related context and prompts if applicable, and user prompts) and answer
- auto_gen_description_prompt:
prompt to create a description of the collection.
- auto_gen_document_summary_pre_prompt_summary:
pre_prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_summary_prompt_summary:
prompt_summary for summary of a freshly imported document (if enabled).
- auto_gen_document_sample_questions_prompt:
prompt to create sample questions for a freshly imported document (if enabled).
- default_sample_questions:
default sample questions in case there are no auto-generated sample questions.
- Returns:
str: The ID of the updated prompt template.
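A sketch that renames a template and adjusts its summarization prompts (assumes an async client and an existing template id; whether omitted optional fields are preserved or reset is not specified here):
template_id = await client.update_prompt_template(
    id=template_id,
    name="Quarterly report prompts",
    pre_prompt_summary="Pay attention to the following text:\n",
    prompt_summary="Write a concise bulleted summary of key results:\n",
)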
- async upload(file_name: str, file: IO[bytes] | bytes) str
Upload a file to the H2OGPTE backend. Uploaded files are not yet accessible and need to be ingested into a collection.
- See Also:
ingest_uploads: Add the uploaded files to a collection.
delete_upload: Delete an uploaded file.
- Args:
- file_name:
What to name the file on the server, must include file extension.
- file:
File object to upload, often an opened file from with open(…) as f.
- Returns:
str: The upload id to be used in ingest jobs.
- Raises:
Exception: The upload request was unsuccessful.
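Upload and ingestion chained together (a sketch; ingest_uploads is documented elsewhere in this module, and its argument order is assumed here):
# Upload a local PDF, then ingest it into an existing collection.
with open("report.pdf", "rb") as f:
    upload_id = await client.upload("report.pdf", f)
await client.ingest_uploads(collection_id, [upload_id])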
- class h2ogpte.Session(address: str, chat_session_id: str, client: H2OGPTE = None, prompt_template_id: str | None = None)
Bases:
object
Create and participate in a chat session.
This is a live connection to the h2oGPTe server contained to a specific chat session on top of a single collection of documents. Users will find all questions and responses in this session in a single chat history in the UI.
- See Also:
H2OGPTE.connect: To initialize a session on an existing connection.
- Args:
- address:
Full URL of the h2oGPTe server to connect to.
- chat_session_id:
The ID of the chat session the queries should be sent to.
- client:
The H2OGPTE client object used to perform other calls to the system.
Examples:
# Example 1: Best practice, create a session using the H2OGPTE module
with h2ogpte.connect(chat_session_id) as session:
    answer1 = session.query('How many paper clips were shipped to Scranton?', timeout=10)
    answer2 = session.query('Did David Brent co-sign the contract with Initech?', timeout=10)

# Example 2: Connect and disconnect manually
session = Session(
    address=address,
    client=client,
    chat_session_id=chat_session_id
)
session.connect()
answer = session.query("Are there any dogs in the documents?")
session.disconnect()
- connect()
Connect to an h2oGPTe server.
This is primarily an internal function, used when a session is created in a with block via the H2OGPTE.connect() function.
- property connection: ClientConnection
- disconnect()
Disconnect from an h2oGPTe server.
This is primarily an internal function, used when a session is created in a with block via the H2OGPTE.connect() function.
- query(message: str, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, self_reflection_config: Dict[str, Any] | None = None, rag_config: Dict[str, Any] | None = None, timeout: float | None = None, callback: Callable[[ChatMessage | PartialChatMessage], None] | None = None) ChatMessage | None
Retrieval-augmented generation for a query on a collection.
Finds a collection of chunks relevant to the query using similarity scores. Sends these and any additional instructions to an LLM.
Format of questions or imperatives:
"{pre_prompt_query} """ {similar_context_chunks} """ {prompt_query}{message}"
- Args:
- message:
Query or instruction from the end user to the LLM.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto or None for the model default. Defaults to ‘’ for no system prompt.
- pre_prompt_query:
Text that is prepended before the contextual document chunks. The default can be customized per environment, but the standard default is
"Pay attention and remember the information below, which will help to answer the question or imperative after the context ends.\n"
- prompt_query:
Text that is prepended to the beginning of the user’s message, after the contextual document chunks. The default can be customized per environment, but the standard default is “According to only the information in the document sources provided within the context above, ”
- pre_prompt_summary:
Not yet used, use H2OGPTE.summarize_content
- prompt_summary:
Not yet used, use H2OGPTE.summarize_content
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0) — The seed for the random number generator when sampling during generation (if temperature > 0 or top_k > 1 or top_p < 1); seed=0 picks a random seed.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- self_reflection_config:
Dictionary of arguments for self-reflection, can contain the following string:string mappings:
- llm_reflection: str
"gpt-4-0613", or "" to disable reflection
- prompt_reflection: str
'Here’s the prompt and the response:\n"""Prompt:\n%s\n"""\n\n"""Response:\n%s\n"""\n\nWhat is the quality of the response for the given prompt? Respond with a score ranging from Score: 0/10 (worst) to Score: 10/10 (best), and give a brief explanation why.'
- system_prompt_reflection: str
""
- llm_args_reflection: str
"{}"
- rag_config:
Dictionary of arguments to control RAG (retrieval-augmented generation) types. Can contain the following key/value pairs:
- rag_type: str, one of
"llm_only"
LLM Only (no RAG) - Generates a response to answer the user’s query without any supporting document contexts. Requires 1 LLM call.
"rag"
RAG (Retrieval Augmented Generation) - RAG with neural/lexical hybrid search using the user’s query to find relevant contexts from a collection for generating a response. Requires 1 LLM call.
"hyde1"
HyDE RAG (Hypothetical Document Embedding) - Like RAG, but uses the LLM Only response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.
"hyde2"
HyDE RAG+ (Combined HyDE+RAG) - Like RAG, but uses the HyDE RAG response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.
"rag+"
RAG+ - Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.
- hyde_no_rag_llm_prompt_extension: str
Add this prompt to every user’s prompt, when generating answers to be used for subsequent retrieval during HyDE. Only used when rag_type is “hyde1” or “hyde2”. example:
'\nKeep the answer brief, and list the 5 most relevant key words at the end.'
- num_neighbor_chunks_to_include: int
Number of neighboring chunks to include for every retrieved relevant chunk. Helps to keep surrounding context together. Only enabled for rag_type “rag+”. Defaults to 1.
- timeout:
Amount of time in seconds to allow the request to run. The default is 1000 seconds.
- callback:
Function for processing partial messages, used for streaming responses to an end user.
- Returns:
ChatMessage: The response text and details about the response from the LLM. For example:
ChatMessage(
    id='XXX',
    content='The information provided in the context...',
    reply_to='YYY',
    votes=0,
    created_at=datetime.datetime(2023, 10, 24, 20, 12, 34, 875026),
    type_list=[],
)
- Raises:
TimeoutError: The request did not complete in time.
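Streaming with the callback parameter might look like the following sketch; the import path for PartialChatMessage is an assumption:
from h2ogpte.types import PartialChatMessage  # import path assumed

def stream(msg):
    # Partial messages carry incremental text as the LLM generates; the
    # final ChatMessage is also returned by query() itself.
    if isinstance(msg, PartialChatMessage):
        print(msg.content, end="", flush=True)

with client.connect(chat_session_id) as session:
    reply = session.query("Summarize the key findings.", timeout=120, callback=stream)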
- class h2ogpte.SessionAsync(chat_session_id: str, client: H2OGPTEAsync, prompt_template_id: str | None = None)
Bases:
object
Create and participate in a chat session.
This is a live connection to the h2oGPTe server contained to a specific chat session on top of a single collection of documents. Users will find all questions and responses in this session in a single chat history in the UI.
- See Also:
H2OGPTE.connect: To initialize a session on an existing connection.
- Args:
- chat_session_id:
The ID of the chat session the queries should be sent to.
- client:
The H2OGPTEAsync client object used to perform other calls to the system.
- Examples:
async with h2ogpte.connect(_chat_session_id) as session:
    answer1 = await session.query(
        'How many paper clips were shipped to Scranton?'
    )
    answer2 = await session.query(
        'Did David Brent co-sign the contract with Initech?'
    )
- async query(message: str, *, system_prompt: str | None = None, pre_prompt_query: str | None = None, prompt_query: str | None = None, pre_prompt_summary: str | None = None, prompt_summary: str | None = None, llm: str | int | None = None, llm_args: Dict[str, Any] | None = None, self_reflection_config: Dict[str, Any] | None = None, rag_config: Dict[str, Any] | None = None, timeout: float | None = None, callback: Callable[[ChatMessage], None] | None = None) ChatMessage
Retrieval-augmented generation for a query on a collection.
Finds a collection of chunks relevant to the query using similarity scores. Sends these and any additional instructions to an LLM.
Format of questions or imperatives:
"{pre_prompt_query} """ {similar_context_chunks} """ {prompt_query}{message}"
- Args:
- message:
Query or instruction from the end user to the LLM.
- system_prompt:
Text sent to models which support system prompts. Gives the model overall context in how to respond. Use auto or None for the model default. Defaults to ‘’ for no system prompt.
- pre_prompt_query:
Text that is prepended before the contextual document chunks. The default can be customized per environment, but the standard default is
"Pay attention and remember the information below, which will help to answer the question or imperative after the context ends.\n"
- prompt_query:
Text that is prepended to the beginning of the user’s message, after the contextual document chunks. The default can be customized per environment, but the standard default is “According to only the information in the document sources provided within the context above, ”
- pre_prompt_summary:
Not yet used, use H2OGPTE.summarize_content
- prompt_summary:
Not yet used, use H2OGPTE.summarize_content
- llm:
Name or index of LLM to send the query. Use H2OGPTE.get_llms() to see all available options. Default value is to use the first model (0th index).
- llm_args:
- Dictionary of kwargs to pass to the llm. Valid keys:
temperature (float, default: 0) — The value used to modulate the next token probabilities. Most deterministic: 0, most creative: 1.
top_k (int, default: 1) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
top_p (float, default: 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
seed (int, default: 0) — The seed for the random number generator when sampling during generation (if temperature > 0 or top_k > 1 or top_p < 1); seed=0 picks a random seed.
repetition_penalty (float, default: 1.07) — The parameter for repetition penalty. 1.0 means no penalty.
max_new_tokens (int, default: 1024) — Maximum number of new tokens to generate. This limit applies to each (map+reduce) step during summarization and each (map) step during extraction.
min_max_new_tokens (int, default: 512) — Minimum value for max_new_tokens when auto-adjusting for content of prompt, docs, etc.
- self_reflection_config:
Dictionary of arguments for self-reflection, can contain the following string:string mappings:
- llm_reflection: str
"gpt-4-0613", or "" to disable reflection
- prompt_reflection: str
'Here’s the prompt and the response:\n"""Prompt:\n%s\n"""\n\n"""Response:\n%s\n"""\n\nWhat is the quality of the response for the given prompt? Respond with a score ranging from Score: 0/10 (worst) to Score: 10/10 (best), and give a brief explanation why.'
- system_prompt_reflection: str
""
- llm_args_reflection: str
"{}"
- rag_config:
Dictionary of arguments to control RAG (retrieval-augmented generation) types. Can contain the following key/value pairs:
- rag_type: str, one of
"llm_only"
LLM Only (no RAG) - Generates a response to answer the user’s query without any supporting document contexts. Requires 1 LLM call.
"rag"
RAG (Retrieval Augmented Generation) - RAG with neural/lexical hybrid search using the user’s query to find relevant contexts from a collection for generating a response. Requires 1 LLM call.
"hyde1"
HyDE RAG (Hypothetical Document Embedding) - Like RAG, but uses the LLM Only response to find relevant contexts from a collection for generating a response. Requires 2 LLM calls.
"hyde2"
HyDE RAG+ (Combined HyDE+RAG) - Like RAG, but uses the HyDE RAG response to find relevant contexts from a collection for generating a response. Requires 3 LLM calls.
"rag+"
RAG+ - Like RAG, but uses more context and recursive summarization to overcome LLM context limits. Keeps all retrieved chunks, puts them in order, adds neighboring chunks, then uses the summary API to get the answer. Can require several LLM calls.
- hyde_no_rag_llm_prompt_extension: str
Add this prompt to every user’s prompt, when generating answers to be used for subsequent retrieval during HyDE. Only used when rag_type is “hyde1” or “hyde2”. example:
'\nKeep the answer brief, and list the 5 most relevant key words at the end.'
- num_neighbor_chunks_to_include: int
Number of neighboring chunks to include for every retrieved relevant chunk. Helps to keep surrounding context together. Only enabled for rag_type “rag+”. Defaults to 1.
- timeout:
Amount of time in seconds to allow the request to run. The default is 1000 seconds.
- callback:
Function for processing partial messages, used for streaming responses to an end user.
- Returns:
ChatMessage: The response text and details about the response from the LLM. For example:
ChatMessage(
    id='XXX',
    content='The information provided in the context...',
    reply_to='YYY',
    votes=0,
    created_at=datetime.datetime(2023, 10, 24, 20, 12, 34, 875026),
    type_list=[],
)
- Raises:
TimeoutError: The request did not complete in time.
- property websocket: WebSocketClientProtocol
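An async sketch combining llm_args and rag_config from the lists above (assumes an open async session):
reply = await session.query(
    "Which contracts mention Initech?",
    llm_args={"temperature": 0.2, "seed": 42},
    rag_config={"rag_type": "rag+", "num_neighbor_chunks_to_include": 2},
    timeout=120,
)
print(reply.content)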