Source: https://api.python.langchain.com/en/latest/langchain_api_reference.html
chains.llm_requests.LLMRequestsChain Chain that requests a URL and then uses an LLM to parse results.
chains.llm_summarization_checker.base.LLMSummarizationCheckerChain Chain for question-answering with self-verification.
chains.mapreduce.MapReduceChain Map-reduce chain.
chains.moderation.OpenAIModerationChain Pass input through a moderation endpoint.
chains.natbot.base.NatBotChain Implement an LLM driven browser.
chains.natbot.crawler.Crawler() A crawler for web pages.
chains.natbot.crawler.ElementInViewPort A typed dictionary containing information about elements in the viewport.
chains.openai_functions.citation_fuzzy_match.FactWithEvidence Class representing a single statement.
chains.openai_functions.citation_fuzzy_match.QuestionAnswer A question and its answer as a list of facts, each of which should have a source.
chains.openai_functions.openapi.SimpleRequestChain Chain for making a simple request to an API endpoint.
chains.openai_functions.qa_with_structure.AnswerWithSources An answer to the question, with sources.
chains.prompt_selector.BasePromptSelector Base class for prompt selectors.
chains.prompt_selector.ConditionalPromptSelector Prompt collection that goes through conditionals.
chains.qa_generation.base.QAGenerationChain Base class for question-answer generation chains.
chains.qa_with_sources.base.BaseQAWithSourcesChain Question answering chain with sources over documents.
chains.qa_with_sources.base.QAWithSourcesChain Question answering with sources over documents.
chains.qa_with_sources.loading.LoadingCallable(...) Interface for loading the combine documents chain.
chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain Question-answering with sources over an index.
chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain Question-answering with sources over a vector database.
chains.query_constructor.base.StructuredQueryOutputParser Output parser that parses a structured query.
chains.query_constructor.ir.Comparator(value) Enumerator of the comparison operators.
chains.query_constructor.ir.Comparison A comparison to a value.
chains.query_constructor.ir.Expr Base class for all expressions.
chains.query_constructor.ir.FilterDirective A filtering expression.
chains.query_constructor.ir.Operation A logical operation over other directives.
chains.query_constructor.ir.Operator(value) Enumerator of the operations.
chains.query_constructor.ir.StructuredQuery A structured query.
chains.query_constructor.ir.Visitor() Defines interface for IR translation using visitor pattern.
chains.query_constructor.parser.ISO8601Date A date in ISO 8601 format (YYYY-MM-DD).
chains.query_constructor.schema.AttributeInfo Information about a data source attribute.
chains.retrieval_qa.base.BaseRetrievalQA Base class for question-answering chains.
chains.retrieval_qa.base.RetrievalQA Chain for question-answering against an index.
chains.retrieval_qa.base.VectorDBQA Chain for question-answering against a vector database.
chains.router.base.MultiRouteChain Use a single chain to route an input to one of multiple candidate chains.
chains.router.base.Route(destination, ...) Create new instance of Route(destination, next_inputs).
chains.router.base.RouterChain Chain that outputs the name of a destination chain and the inputs to it.
chains.router.embedding_router.EmbeddingRouterChain Chain that uses embeddings to route between options.
chains.router.llm_router.LLMRouterChain A router chain that uses an LLM chain to perform routing.
chains.router.llm_router.RouterOutputParser Parser for output of router chain in the multi-prompt chain.
chains.router.multi_prompt.MultiPromptChain A multi-route chain that uses an LLM router chain to choose amongst prompts.
chains.router.multi_retrieval_qa.MultiRetrievalQAChain A multi-route chain that uses an LLM router chain to choose amongst retrieval QA chains.
chains.sequential.SequentialChain Chain where the outputs of one chain feed directly into the next.
chains.sequential.SimpleSequentialChain Simple chain where the outputs of one step feed directly into the next.
chains.sql_database.query.SQLInput Input for a SQL Chain.
chains.sql_database.query.SQLInputWithTables Input for a SQL Chain.
chains.transform.TransformChain Chain that transforms the chain output.
Functions¶
chains.combine_documents.reduce.acollapse_docs(...) Execute a collapse function on a set of documents and merge their metadatas.
chains.combine_documents.reduce.collapse_docs(...) Execute a collapse function on a set of documents and merge their metadatas.
chains.combine_documents.reduce.split_list_of_docs(...) Split Documents into subsets that each meet a cumulative length constraint.
chains.combine_documents.stuff.create_stuff_documents_chain(...) Create a chain for passing a list of Documents to a model.
chains.ernie_functions.base.convert_python_function_to_ernie_function(...) Convert a Python function to an Ernie function-calling API compatible dict.
chains.ernie_functions.base.convert_to_ernie_function(...) Convert a raw function/class to an Ernie function.
chains.ernie_functions.base.create_ernie_fn_chain(...) [Legacy] Create an LLM chain that uses Ernie functions.
chains.ernie_functions.base.create_ernie_fn_runnable(...) Create a runnable sequence that uses Ernie functions.
chains.ernie_functions.base.create_structured_output_chain(...) [Legacy] Create an LLMChain that uses an Ernie function to get a structured output.
chains.ernie_functions.base.create_structured_output_runnable(...) Create a runnable that uses an Ernie function to get a structured output.
chains.ernie_functions.base.get_ernie_output_parser(...) Get the appropriate function output parser given the user functions.
chains.example_generator.generate_example(...) Return another example given a list of examples for a prompt.
chains.graph_qa.cypher.construct_schema(...) Filter the schema based on included or excluded types.
chains.graph_qa.cypher.extract_cypher(text) Extract Cypher code from a text.
chains.graph_qa.falkordb.extract_cypher(text) Extract Cypher code from a text.
chains.graph_qa.neptune_cypher.extract_cypher(text) Extract Cypher code from text using a regex.
chains.graph_qa.neptune_cypher.trim_query(query) Trim the query to only include Cypher keywords.
chains.graph_qa.neptune_cypher.use_simple_prompt(llm) Decide whether to use the simple prompt.
chains.history_aware_retriever.create_history_aware_retriever(...) Create a chain that takes conversation history and returns documents.
chains.loading.load_chain(path, **kwargs) Unified method for loading a chain from LangChainHub or local fs.
chains.loading.load_chain_from_config(...) Load chain from Config Dict.
chains.openai_functions.base.convert_python_function_to_openai_function(...) Convert a Python function to an OpenAI function-calling API compatible dict.
chains.openai_functions.base.convert_to_openai_function(...) Convert a raw function/class to an OpenAI function.
chains.openai_functions.base.create_openai_fn_chain(...) [Legacy] Create an LLM chain that uses OpenAI functions.
chains.openai_functions.base.create_openai_fn_runnable(...) Create a runnable sequence that uses OpenAI functions.
chains.openai_functions.base.create_structured_output_chain(...) [Legacy] Create an LLMChain that uses an OpenAI function to get a structured output.
chains.openai_functions.base.create_structured_output_runnable(...) Create a runnable that uses an OpenAI function to get a structured output.
chains.openai_functions.base.get_openai_output_parser(...) Get the appropriate function output parser given the user functions.
chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm) Create a citation fuzzy match chain.
chains.openai_functions.extraction.create_extraction_chain(...) Create a chain that extracts information from a passage.
chains.openai_functions.extraction.create_extraction_chain_pydantic(...) Create a chain that extracts information from a passage using a pydantic schema.
chains.openai_functions.openapi.get_openapi_chain(spec) Create a chain for querying an API from an OpenAPI spec.
chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec) Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm) Create a question answering chain that returns an answer with sources.
chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(...) Create a question answering chain that returns an answer with sources.
chains.openai_functions.tagging.create_tagging_chain(...) Create a chain that extracts information from a passage.
chains.openai_functions.tagging.create_tagging_chain_pydantic(...) Create a chain that extracts information from a passage.
chains.openai_functions.utils.get_llm_kwargs(...) Return the kwargs for the LLMChain constructor.
chains.openai_tools.extraction.create_extraction_chain_pydantic(...) Create a chain that extracts information from a passage.
chains.prompt_selector.is_chat_model(llm) Check if the language model is a chat model.
chains.prompt_selector.is_llm(llm) Check if the language model is an LLM.
chains.qa_with_sources.loading.load_qa_with_sources_chain(llm) Load a question answering with sources chain.
chains.query_constructor.base.construct_examples(...) Construct examples from input-output pairs.
chains.query_constructor.base.fix_filter_directive(...) Fix an invalid filter directive.
chains.query_constructor.base.get_query_constructor_prompt(...) Create a query construction prompt.
chains.query_constructor.base.load_query_constructor_chain(...) Load a query constructor chain.
chains.query_constructor.base.load_query_constructor_runnable(...) Load a query constructor runnable chain.
chains.query_constructor.parser.get_parser([...]) Return a parser for the query language.
chains.query_constructor.parser.v_args(...) Dummy decorator for when lark is not installed.
chains.retrieval.create_retrieval_chain(...) Create a retrieval chain that retrieves documents and then passes them on.
chains.sql_database.query.create_sql_query_chain(llm, db) Create a chain that generates SQL queries.
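The retrieval and document-combining helpers listed above compose naturally. A minimal sketch, assuming llm (any chat model) and retriever (any BaseRetriever) already exist; the prompt wording is illustrative only:

    from langchain.chains import create_retrieval_chain
    from langchain.chains.combine_documents import create_stuff_documents_chain
    from langchain_core.prompts import ChatPromptTemplate

    # The combine-documents chain receives the retrieved documents as {context}.
    prompt = ChatPromptTemplate.from_messages([
        ("system", "Answer the question using only this context:\n\n{context}"),
        ("human", "{input}"),
    ])

    combine_docs_chain = create_stuff_documents_chain(llm, prompt)      # llm is assumed
    rag_chain = create_retrieval_chain(retriever, combine_docs_chain)   # retriever is assumed

    result = rag_chain.invoke({"input": "What is LangChain?"})
    print(result["answer"])  # the result dict also carries the retrieved "context"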

langchain.embeddings¶
Embedding models are wrappers around embedding models from different APIs and services. Embedding models can be LLMs or not.
Class hierarchy:

    Embeddings --> <name>Embeddings  # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings

Classes¶
embeddings.cache.CacheBackedEmbeddings(...) Interface for caching results from embedding models.
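A minimal sketch of CacheBackedEmbeddings wrapping an underlying embedder so repeated texts are served from a local byte store instead of the API; OpenAIEmbeddings and the cache directory are assumptions, and any Embeddings implementation would do:

    from langchain.embeddings import CacheBackedEmbeddings
    from langchain.storage import LocalFileStore
    from langchain_community.embeddings import OpenAIEmbeddings

    underlying = OpenAIEmbeddings()              # assumes OPENAI_API_KEY is set
    store = LocalFileStore("./embedding_cache")  # hypothetical cache directory

    cached_embedder = CacheBackedEmbeddings.from_bytes_store(
        underlying, store, namespace=underlying.model  # namespace avoids cache collisions
    )
    # The second occurrence of identical text is answered from the store.
    vectors = cached_embedder.embed_documents(["hello world", "hello world"])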
Functions¶

langchain.evaluation¶
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of LangChain primitives such as language models and chains.
Loading an evaluator
To load an evaluator, you can use the load_evaluators or load_evaluator functions with the names of the evaluators to load.

    from langchain.evaluation import load_evaluator

    evaluator = load_evaluator("qa")
    evaluator.evaluate_strings(
        prediction="We sold more than 40,000 units last week",
        input="How many units did we sell last week?",
        reference="We sold 32,378 units",
    )

The evaluator must be one of EvaluatorType.
Datasets
To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the name of the dataset to load.

    from langchain.evaluation import load_dataset

    ds = load_dataset("llm-math")

Some common use cases for evaluation include:
Grading the accuracy of a response against ground truth answers: QAEvalChain
Comparing the output of two models: PairwiseStringEvalChain, or LabeledPairwiseStringEvalChain when there is additionally a reference label.
Judging the efficacy of an agent's tool usage: TrajectoryEvalChain
Checking whether an output complies with a set of criteria: CriteriaEvalChain, or LabeledCriteriaEvalChain when there is additionally a reference label.
Computing semantic difference between a prediction and reference: EmbeddingDistanceEvalChain, or between two predictions: PairwiseEmbeddingDistanceEvalChain
Measuring the string distance between a prediction and reference: StringDistanceEvalChain, or between two predictions: PairwiseStringDistanceEvalChain
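For instance, the string-distance evaluator loads the same way as the "qa" evaluator shown earlier; this sketch assumes the optional rapidfuzz dependency is installed:

    from langchain.evaluation import load_evaluator

    evaluator = load_evaluator("string_distance")
    result = evaluator.evaluate_strings(
        prediction="We sold 32,378 units",
        reference="We sold 32378 units",
    )
    print(result["score"])  # normalized edit distance; lower means more similar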
Low-level API
These evaluators implement one of the following interfaces:
StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or llm agents, or comparing outputs on similar inputs.
AgentTrajectoryEvaluator: Evaluate the full sequence of actions taken by an agent.
These interfaces enable easier composability and usage within a higher level evaluation framework.

Classes¶
evaluation.agents.trajectory_eval_chain.TrajectoryEval A named tuple containing the score and reasoning for a trajectory.
evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain A chain for evaluating ReAct style agents.
evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser Trajectory output parser.
evaluation.comparison.eval_chain.LabeledPairwiseStringEvalChain A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs, with labeled preferences.
evaluation.comparison.eval_chain.PairwiseStringEvalChain A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
evaluation.comparison.eval_chain.PairwiseStringResultOutputParser A parser for the output of the PairwiseStringEvalChain.
evaluation.criteria.eval_chain.Criteria(value) A Criteria to evaluate.
evaluation.criteria.eval_chain.CriteriaEvalChain LLM Chain for evaluating runs against criteria.
evaluation.criteria.eval_chain.CriteriaResultOutputParser A parser for the output of the CriteriaEvalChain.
evaluation.criteria.eval_chain.LabeledCriteriaEvalChain Criteria evaluation chain that requires references.
evaluation.embedding_distance.base.EmbeddingDistance(value) Embedding Distance Metric.
evaluation.embedding_distance.base.EmbeddingDistanceEvalChain Use embedding distances to score semantic difference between a prediction and reference.
evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain Use embedding distances to score semantic difference between two predictions.
evaluation.exact_match.base.ExactMatchStringEvaluator(*) Compute an exact match between the prediction and the reference.
evaluation.parsing.base.JsonEqualityEvaluator([...]) Evaluates whether the prediction is equal to the reference after parsing.
evaluation.parsing.base.JsonValidityEvaluator(...) Evaluates whether the prediction is valid JSON.
evaluation.parsing.json_distance.JsonEditDistanceEvaluator([...]) An evaluator that calculates the edit distance between JSON strings.
evaluation.parsing.json_schema.JsonSchemaEvaluator(...) An evaluator that validates a JSON prediction against a JSON schema reference.
evaluation.qa.eval_chain.ContextQAEvalChain LLM Chain for evaluating QA without ground truth, based on context.
evaluation.qa.eval_chain.CotQAEvalChain LLM Chain for evaluating QA using chain of thought reasoning.
evaluation.qa.eval_chain.QAEvalChain LLM Chain for evaluating question answering.
evaluation.qa.generate_chain.QAGenerateChain LLM Chain for generating examples for question answering.
evaluation.regex_match.base.RegexMatchStringEvaluator(*) Compute a regex match between the prediction and the reference.
evaluation.schema.AgentTrajectoryEvaluator() Interface for evaluating agent trajectories.
evaluation.schema.EvaluatorType(value[, ...]) The types of the evaluators.
evaluation.schema.LLMEvalChain A base class for evaluators that use an LLM.
evaluation.schema.PairwiseStringEvaluator() Compare the output of two models (or two outputs of the same model).
evaluation.schema.StringEvaluator() Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
evaluation.scoring.eval_chain.LabeledScoreStringEvalChain A chain for scoring the output of a model on a scale of 1-10.
evaluation.scoring.eval_chain.ScoreStringEvalChain A chain for scoring the output of a model on a scale of 1-10.
evaluation.scoring.eval_chain.ScoreStringResultOutputParser A parser for the output of the ScoreStringEvalChain.
evaluation.string_distance.base.PairwiseStringDistanceEvalChain Compute string edit distances between two predictions.
evaluation.string_distance.base.StringDistance(value) Distance metric to use.
evaluation.string_distance.base.StringDistanceEvalChain Compute string distances between the prediction and the reference.
Functions¶
evaluation.comparison.eval_chain.resolve_pairwise_criteria(...) Resolve the criteria for the pairwise evaluator.
evaluation.criteria.eval_chain.resolve_criteria(...) Resolve the criteria to evaluate.
evaluation.loading.load_dataset(uri) Load a dataset from the LangChainDatasets on HuggingFace.
evaluation.loading.load_evaluator(evaluator, *) Load the requested evaluation chain specified by a string.
evaluation.loading.load_evaluators(evaluators, *) Load evaluators specified by a list of evaluator types.
evaluation.scoring.eval_chain.resolve_criteria(...) Resolve the criteria for the pairwise evaluator.

langchain.hub¶
Interface with the LangChain Hub.
Functions¶
hub.pull(owner_repo_commit, *[, api_url, ...]) Pull an object from the hub and return it as a LangChain object.
hub.push(repo_full_name, object, *[, ...]) Push an object to the hub and return the URL it can be viewed at in a browser.

langchain.indexes¶
Code to support various indexing workflows.
Provides code to:
Create knowledge graphs from data.
Support indexing workflows from LangChain data loaders to vectorstores.
For indexing workflows, this code is used to avoid writing duplicated content into the vectorstore and to avoid over-writing content if it's unchanged.
Importantly, this keeps working even if the content being written is derived via a set of transformations from some source content (e.g., indexing child documents that were derived from parent documents by chunking).
Classes¶
indexes.base.RecordManager(namespace) An abstract base class representing the interface for a record manager.
indexes.graph.GraphIndexCreator Functionality to create a graph index.
indexes.vectorstore.VectorStoreIndexWrapper Wrapper around a vectorstore for easy access.
indexes.vectorstore.VectorstoreIndexCreator Logic for creating indexes.
Functions¶
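A minimal sketch of such a deduplicating workflow. The index helper and SQLRecordManager are not part of the class listing above, so treat the import paths as an assumption about recent 0.1.x releases; docs (a list of Documents) and vectorstore (a VectorStore supporting deletes) are assumed to exist:

    from langchain.indexes import SQLRecordManager, index  # assumed import path

    # The record manager tracks content hashes, so unchanged documents are skipped.
    record_manager = SQLRecordManager(
        "my_docs_namespace",
        db_url="sqlite:///record_manager_cache.sql",  # hypothetical location
    )
    record_manager.create_schema()

    # "incremental" cleanup deletes stale derived content as sources change.
    index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source")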

langchain.memory¶
Memory maintains Chain state, incorporating context from past runs.
Class hierarchy for Memory:

    BaseMemory --> BaseChatMemory --> <name>Memory  # Examples: ZepMemory, MotorheadMemory

Main helpers: BaseChatMessageHistory
Chat Message History stores the chat message history in different stores.
Class hierarchy for ChatMessageHistory:

    BaseChatMessageHistory --> <name>ChatMessageHistory  # Example: ZepChatMessageHistory

Main helpers: AIMessage, BaseMessage, HumanMessage

Classes¶
memory.buffer.ConversationBufferMemory Buffer for storing conversation memory.
memory.buffer.ConversationStringBufferMemory Buffer for storing conversation memory.
memory.buffer_window.ConversationBufferWindowMemory Buffer for storing conversation memory inside a limited size window.
memory.chat_memory.BaseChatMemory Abstract base class for chat memory.
memory.combined.CombinedMemory Combining multiple memories' data together.
memory.entity.BaseEntityStore Abstract base class for Entity store.
memory.entity.ConversationEntityMemory Entity extractor & summarizer memory.
memory.entity.InMemoryEntityStore In-memory Entity store.
memory.entity.RedisEntityStore Redis-backed Entity store.
memory.entity.SQLiteEntityStore SQLite-backed Entity store.
memory.entity.UpstashRedisEntityStore Upstash Redis backed Entity store.
memory.kg.ConversationKGMemory Knowledge graph conversation memory.
memory.motorhead_memory.MotorheadMemory Chat message memory backed by Motorhead service.
memory.readonly.ReadOnlySharedMemory A memory wrapper that is read-only and cannot be changed.
memory.simple.SimpleMemory Simple memory for storing context or other information that shouldn't ever change between prompts.
memory.summary.ConversationSummaryMemory Conversation summarizer to chat memory.
memory.summary.SummarizerMixin Mixin for summarizer.
memory.summary_buffer.ConversationSummaryBufferMemory Buffer with summarizer for storing conversation memory.
memory.token_buffer.ConversationTokenBufferMemory Conversation chat memory with token limit.
memory.vectorstore.VectorStoreRetrieverMemory VectorStoreRetriever-backed memory.
memory.zep_memory.ZepMemory Persist your chain history to the Zep MemoryStore.
Functions¶
memory.utils.get_prompt_input_key(inputs, ...) Get the prompt input key.
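A minimal sketch of the buffer memory listed above, exercised outside any chain:

    from langchain.memory import ConversationBufferMemory

    memory = ConversationBufferMemory()
    memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})

    # A chain would inject this under its memory key ("history" by default).
    print(memory.load_memory_variables({}))
    # {'history': 'Human: Hi there\nAI: Hello! How can I help?'}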

langchain.model_laboratory¶
Experiment with different models.
Classes¶
model_laboratory.ModelLaboratory(chains[, names]) Experiment with different models.

langchain.output_parsers¶
OutputParser classes parse the output of an LLM call.
Class hierarchy:

    BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser  # Examples: ListOutputParser, PydanticOutputParser

Main helpers: Serializable, Generation, PromptValue

Classes¶
output_parsers.boolean.BooleanOutputParser Parse the output of an LLM call to a boolean.
output_parsers.combining.CombiningOutputParser Combine multiple output parsers into one.
output_parsers.datetime.DatetimeOutputParser Parse the output of an LLM call to a datetime.
output_parsers.enum.EnumOutputParser Parse an output that is one of a set of values.
output_parsers.ernie_functions.JsonKeyOutputFunctionsParser Parse an output as an element of the JSON object.
output_parsers.ernie_functions.JsonOutputFunctionsParser Parse an output as a JSON object.
output_parsers.ernie_functions.OutputFunctionsParser Parse an output that is one of a set of values.
output_parsers.ernie_functions.PydanticAttrOutputFunctionsParser Parse an output as an attribute of a pydantic object.
output_parsers.ernie_functions.PydanticOutputFunctionsParser Parse an output as a pydantic object.
output_parsers.fix.OutputFixingParser Wraps a parser and tries to fix parsing errors.
output_parsers.openai_functions.JsonKeyOutputFunctionsParser Parse an output as an element of the JSON object.
output_parsers.openai_functions.JsonOutputFunctionsParser Parse an output as a JSON object.
output_parsers.openai_functions.OutputFunctionsParser Parse an output that is one of a set of values.
output_parsers.openai_functions.PydanticAttrOutputFunctionsParser Parse an output as an attribute of a pydantic object.
output_parsers.openai_functions.PydanticOutputFunctionsParser Parse an output as a pydantic object.
output_parsers.openai_tools.JsonOutputKeyToolsParser Parse tools from an OpenAI response.
output_parsers.openai_tools.JsonOutputToolsParser Parse tools from an OpenAI response.
output_parsers.openai_tools.PydanticToolsParser Parse tools from an OpenAI response.
output_parsers.pandas_dataframe.PandasDataFrameOutputParser Parse an output using Pandas DataFrame format.
output_parsers.pydantic.PydanticOutputParser Parse an output using a pydantic model.
output_parsers.rail_parser.GuardrailsOutputParser Parse the output of an LLM call using Guardrails.
output_parsers.regex.RegexParser Parse the output of an LLM call using a regex.
output_parsers.regex_dict.RegexDictParser Parse the output of an LLM call into a dictionary using a regex.
output_parsers.retry.RetryOutputParser Wraps a parser and tries to fix parsing errors.
output_parsers.retry.RetryWithErrorOutputParser Wraps a parser and tries to fix parsing errors.
output_parsers.structured.ResponseSchema A schema for a response from a structured output parser.
output_parsers.structured.StructuredOutputParser Parse the output of an LLM call to a structured output.
output_parsers.yaml.YamlOutputParser Parse YAML output using a pydantic model.
Functions¶
output_parsers.loading.load_output_parser(config) Load an output parser.
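A minimal sketch of PydanticOutputParser from the list above; the JSON string stands in for a real LLM response, and the pydantic_v1 import is the compatibility path langchain 0.1.x re-exports:

    from langchain.output_parsers import PydanticOutputParser
    from langchain_core.pydantic_v1 import BaseModel, Field

    class Joke(BaseModel):
        setup: str = Field(description="question that sets up the joke")
        punchline: str = Field(description="answer that resolves the joke")

    parser = PydanticOutputParser(pydantic_object=Joke)

    # Splice this into the prompt so the model knows the expected schema.
    format_instructions = parser.get_format_instructions()

    # Parsing a stand-in model response into a typed object:
    joke = parser.parse('{"setup": "Why did the chicken cross the road?", '
                        '"punchline": "To get to the other side."}')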

langchain.prompts¶
A prompt is the input to the model. It is often constructed from multiple components. Prompt classes and functions make constructing and working with prompts easy.
Class hierarchy:

    BasePromptTemplate --> PipelinePromptTemplate
                           StringPromptTemplate --> PromptTemplate
                                                    FewShotPromptTemplate
                                                    FewShotPromptWithTemplates
    BaseChatPromptTemplate --> AutoGPTPrompt
                               ChatPromptTemplate --> AgentScratchPadChatPromptTemplate

    BaseMessagePromptTemplate --> MessagesPlaceholder
                                  BaseStringMessagePromptTemplate --> ChatMessagePromptTemplate
                                                                      HumanMessagePromptTemplate
                                                                      AIMessagePromptTemplate
                                                                      SystemMessagePromptTemplate

    PromptValue --> StringPromptValue
                    ChatPromptValue

Classes¶
prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector Select and order examples based on ngram overlap score (sentence_bleu score).
Functions¶
prompts.example_selector.ngram_overlap.ngram_overlap_score(...) Compute the ngram overlap score of source and example as a sentence_bleu score.

langchain.retrievers¶
A Retriever returns Documents given a text query. It is more general than a vector store: a retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:

    BaseRetriever --> <name>Retriever  # Examples: ArxivRetriever, MergerRetriever

Main helpers: Document, Serializable, Callbacks, CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun

Classes¶
retrievers.contextual_compression.ContextualCompressionRetriever Retriever that wraps a base retriever and compresses the results.
retrievers.document_compressors.base.BaseDocumentCompressor Base class for document compressors.
retrievers.document_compressors.base.DocumentCompressorPipeline Document compressor that uses a pipeline of Transformers.
retrievers.document_compressors.chain_extract.LLMChainExtractor Document compressor that uses an LLM chain to extract the relevant parts of documents.
retrievers.document_compressors.chain_extract.NoOutputParser Parse outputs that could return a null string of some sort.
retrievers.document_compressors.chain_filter.LLMChainFilter Filter that drops documents that aren't relevant to the query.
retrievers.document_compressors.cohere_rerank.CohereRerank Document compressor that uses the Cohere Rerank API.
retrievers.document_compressors.embeddings_filter.EmbeddingsFilter Document compressor that uses embeddings to drop documents unrelated to the query.
retrievers.ensemble.EnsembleRetriever Retriever that ensembles multiple retrievers.
retrievers.merger_retriever.MergerRetriever Retriever that merges the results of multiple retrievers.
retrievers.multi_query.LineList List of lines.
retrievers.multi_query.LineListOutputParser Output parser for a list of lines.
retrievers.multi_query.MultiQueryRetriever Given a query, use an LLM to write a set of queries.
retrievers.multi_vector.MultiVectorRetriever Retrieve from a set of multiple embeddings for the same document.
retrievers.multi_vector.SearchType(value[, ...]) Enumerator of the types of search to perform.
retrievers.parent_document_retriever.ParentDocumentRetriever Retrieve small chunks, then retrieve their parent documents.
retrievers.re_phraser.RePhraseQueryRetriever Given a query, use an LLM to re-phrase it.
retrievers.self_query.base.SelfQueryRetriever Retriever that uses a vector store and an LLM to generate the vector store queries.
retrievers.self_query.chroma.ChromaTranslator() Translate Chroma internal query language elements to valid filters.
retrievers.self_query.dashvector.DashvectorTranslator() Logic for converting internal query language elements to valid filters.
retrievers.self_query.deeplake.DeepLakeTranslator() Translate DeepLake internal query language elements to valid filters.
retrievers.self_query.elasticsearch.ElasticsearchTranslator() Translate Elasticsearch internal query language elements to valid filters.
retrievers.self_query.milvus.MilvusTranslator() Translate Milvus internal query language elements to valid filters.
retrievers.self_query.mongodb_atlas.MongoDBAtlasTranslator() Translate Mongo internal query language elements to valid filters.
retrievers.self_query.myscale.MyScaleTranslator([...]) Translate MyScale internal query language elements to valid filters.
retrievers.self_query.opensearch.OpenSearchTranslator() Translate OpenSearch internal query domain-specific language elements to valid filters.
retrievers.self_query.pinecone.PineconeTranslator() Translate Pinecone internal query language elements to valid filters.
retrievers.self_query.qdrant.QdrantTranslator(...) Translate Qdrant internal query language elements to valid filters.
retrievers.self_query.redis.RedisTranslator(schema) Visitor for translating structured queries to Redis filter expressions.
retrievers.self_query.supabase.SupabaseVectorTranslator() Translate LangChain filters to Supabase PostgREST filters.
retrievers.self_query.timescalevector.TimescaleVectorTranslator() Translate the internal query language elements to valid filters.
retrievers.self_query.vectara.VectaraTranslator() Translate Vectara internal query language elements to valid filters.
retrievers.self_query.weaviate.WeaviateTranslator() Translate Weaviate internal query language elements to valid filters.
retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever Retriever that combines embedding similarity with recency in retrieving values.
retrievers.web_research.LineList List of questions.
retrievers.web_research.QuestionListOutputParser Output parser for a list of numbered questions.
retrievers.web_research.SearchQueries Search queries to research for the user's goal.
retrievers.web_research.WebResearchRetriever Google Search API retriever.
Functions¶
retrievers.document_compressors.chain_extract.default_get_input(...) Return the compression chain input.
retrievers.document_compressors.chain_filter.default_get_input(...) Return the compression chain input.
retrievers.self_query.deeplake.can_cast_to_float(string) Check if a string can be cast to a float.
retrievers.self_query.milvus.process_value(value) Convert a value to a string and add double quotes if it is a string.
retrievers.self_query.vectara.process_value(value) Convert a value to a string and add single quotes if it is a string.
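A minimal sketch combining ContextualCompressionRetriever and EmbeddingsFilter from the listing above; embeddings and base_retriever are assumed to exist:

    from langchain.retrievers import ContextualCompressionRetriever
    from langchain.retrievers.document_compressors import EmbeddingsFilter

    # Drop retrieved documents whose similarity to the query falls below the threshold.
    compressor = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.75)

    retriever = ContextualCompressionRetriever(
        base_compressor=compressor,
        base_retriever=base_retriever,
    )
    docs = retriever.get_relevant_documents("What did the report say about churn?")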

langchain.runnables¶
Classes¶
runnables.hub.HubRunnable An instance of a runnable stored in the LangChain Hub.
runnables.openai_functions.OpenAIFunction A function description for ChatOpenAI.
runnables.openai_functions.OpenAIFunctionsRouter A runnable that routes to the selected function.

langchain.smith¶
LangSmith utilities.
This module provides utilities for connecting to LangSmith. For more information on LangSmith, see the LangSmith documentation.
Evaluation
LangSmith helps you evaluate Chains and other language model application components using a number of LangChain evaluators. An example of this is shown below, assuming you've created a LangSmith dataset called <my_dataset_name>:

    from langsmith import Client
    from langchain_community.chat_models import ChatOpenAI
    from langchain.chains import LLMChain
    from langchain.smith import RunEvalConfig, run_on_dataset

    # Chains may have memory. Passing in a constructor function lets the
    # evaluation framework avoid cross-contamination between runs.
    def construct_chain():
        llm = ChatOpenAI(temperature=0)
        chain = LLMChain.from_string(
            llm,
            "What's the answer to {your_input_key}"
        )
        return chain
"What's the answer to {your_input_key}" ) return chain # Load off-the-shelf evaluators via config or the EvaluatorType (string or enum) evaluation_config = RunEvalConfig( evaluators=[ "qa", # "Correctness" against a reference answer "embedding_distance", RunEvalConfig.Criteria("helpfulness"), RunEvalConfig.Criteria({ "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?" }), ] ) client = Client() run_on_dataset( client, "<my_dataset_name>", construct_chain, evaluation=evaluation_config, ) You can also create custom evaluators by subclassing the StringEvaluator or LangSmith’s RunEvaluator classes. from typing import Optional from langchain.evaluation import StringEvaluator class MyStringEvaluator(StringEvaluator): @property def requires_input(self) -> bool: return False @property def requires_reference(self) -> bool: return True @property def evaluation_name(self) -> str: return "exact_match" def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict: return {"score": prediction == reference} evaluation_config = RunEvalConfig( custom_evaluators = [MyStringEvaluator()], ) run_on_dataset( client, "<my_dataset_name>", construct_chain, evaluation=evaluation_config, ) Primary Functions arun_on_dataset: Asynchronous function to evaluate a chain, agent, or other LangChain component over a dataset. run_on_dataset: Function to evaluate a chain, agent, or other LangChain component over a dataset.
RunEvalConfig: Class representing the configuration for running evaluation. You can select evaluators by EvaluatorType or config, or you can pass in custom_evaluators.

Classes¶
smith.evaluation.config.EvalConfig Configuration for a given run evaluator.
smith.evaluation.config.RunEvalConfig Configuration for a run evaluation.
smith.evaluation.config.SingleKeyEvalConfig Configuration for a run evaluator that only requires a single key.
smith.evaluation.progress.ProgressBarCallback(total) A simple progress bar for the console.
smith.evaluation.runner_utils.EvalError(...) Your architecture raised an error.
smith.evaluation.runner_utils.InputFormatError Raised when the input format is invalid.
smith.evaluation.runner_utils.TestResult A dictionary of the results of a single test run.
smith.evaluation.string_run_evaluator.ChainStringRunMapper Extract items to evaluate from the run object from a chain.
smith.evaluation.string_run_evaluator.LLMStringRunMapper Extract items to evaluate from the run object.
smith.evaluation.string_run_evaluator.StringExampleMapper Map an example, or row in the dataset, to the inputs of an evaluation.
smith.evaluation.string_run_evaluator.StringRunEvaluatorChain Evaluate Run and optional examples.
smith.evaluation.string_run_evaluator.StringRunMapper Extract items to evaluate from the run object.
smith.evaluation.string_run_evaluator.ToolStringRunMapper Map an input to the tool.
Functions¶
smith.evaluation.name_generation.random_name() Generate a random name.
smith.evaluation.runner_utils.arun_on_dataset(...) Run the Chain or language model on a dataset and store traces to the specified project name.
smith.evaluation.runner_utils.run_on_dataset(...) Run the Chain or language model on a dataset and store traces to the specified project name.

langchain.storage¶
Implementations of key-value stores and storage helpers.
This module provides implementations of various key-value stores that conform to a simple key-value interface. The primary goal of these storages is to support the implementation of caching.
Classes¶
storage.encoder_backed.EncoderBackedStore(...) Wraps a store with key and value encoders/decoders.
storage.file_system.LocalFileStore(root_path) BaseStore interface that works on the local file system.
storage.in_memory.InMemoryBaseStore() In-memory implementation of the BaseStore using a dictionary.
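A minimal sketch of the BaseStore interface via LocalFileStore; note that the file store reads and writes raw bytes, and the root path is hypothetical:

    from langchain.storage import LocalFileStore

    store = LocalFileStore("./kv_cache")  # hypothetical root path

    store.mset([("alpha", b"1"), ("beta", b"2")])
    print(store.mget(["alpha", "beta", "gamma"]))  # [b'1', b'2', None]
    print(sorted(store.yield_keys()))              # ['alpha', 'beta']
    store.mdelete(["alpha"])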

langchain.text_splitter¶
Text Splitters are classes for splitting text.
Class hierarchy:

    BaseDocumentTransformer --> TextSplitter --> <name>TextSplitter  # Example: CharacterTextSplitter
                                                 RecursiveCharacterTextSplitter --> <name>TextSplitter

Note: MarkdownHeaderTextSplitter and HTMLHeaderTextSplitter do not derive from TextSplitter.
Main helpers: Document, Tokenizer, Language, LineType, HeaderType

Classes¶
text_splitter.CharacterTextSplitter([...]) Splitting text that looks at characters.
text_splitter.ElementType Element type as typed dict.
text_splitter.HTMLHeaderTextSplitter(...[, ...]) Splitting HTML files based on specified headers.
text_splitter.HeaderType Header type as typed dict.
text_splitter.Language(value[, names, ...]) Enum of the programming languages.
text_splitter.LatexTextSplitter(**kwargs) Attempts to split the text along LaTeX-formatted layout elements.
text_splitter.LineType Line type as typed dict.
text_splitter.MarkdownHeaderTextSplitter(...) Splitting markdown files based on specified headers.
text_splitter.MarkdownTextSplitter(**kwargs) Attempts to split the text along Markdown-formatted headings.
text_splitter.NLTKTextSplitter([separator, ...]) Splitting text using the NLTK package.
text_splitter.PythonCodeTextSplitter(**kwargs) Attempts to split the text along Python syntax.
text_splitter.RecursiveCharacterTextSplitter([...]) Splitting text by recursively looking at characters.
text_splitter.SentenceTransformersTokenTextSplitter([...]) Splitting text to tokens using a sentence model tokenizer.
text_splitter.SpacyTextSplitter([separator, ...]) Splitting text using the spaCy package.
text_splitter.TextSplitter(chunk_size, ...) Interface for splitting text into chunks.
text_splitter.TokenTextSplitter([...]) Splitting text to tokens using a model tokenizer.
text_splitter.Tokenizer(chunk_overlap, ...) Tokenizer data class.
Functions¶
text_splitter.split_text_on_tokens(*, text, ...) Split incoming text and return chunks using a tokenizer.
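A minimal sketch of the recursive splitter, the usual default for plain text:

    from langchain.text_splitter import RecursiveCharacterTextSplitter

    splitter = RecursiveCharacterTextSplitter(
        chunk_size=100,    # maximum characters per chunk
        chunk_overlap=20,  # characters shared between neighboring chunks
    )

    long_text = "Text splitters try paragraph, sentence, and word boundaries in turn. " * 10
    chunks = splitter.split_text(long_text)
    docs = splitter.create_documents([long_text])  # same split, wrapped as Documents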

langchain.tools¶
Tools are classes that an Agent uses to interact with the world. Each tool has a description; the agent uses the description to choose the right tool for the job.
Class hierarchy:

    ToolMetaclass --> BaseTool --> <name>Tool  # Examples: AIPluginTool, BaseGraphQLTool
                                   <name>      # Examples: BraveSearch, HumanInputRun

Main helpers: CallbackManagerForToolRun, AsyncCallbackManagerForToolRun

Classes¶
tools.retriever.RetrieverInput Input to the retriever.
Functions¶
tools.render.render_text_description(tools) Render the tool name and description in plain text.
tools.render.render_text_description_and_args(tools) Render the tool name, description, and args in plain text.
tools.retriever.create_retriever_tool(...) Create a tool to do retrieval of documents.
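A minimal sketch of turning a retriever into an agent tool; the retriever object is assumed, and the name and description are hypothetical:

    from langchain.tools.retriever import create_retriever_tool

    tool = create_retriever_tool(
        retriever,  # any BaseRetriever, assumed to exist
        name="search_company_docs",
        description="Searches internal company documents and returns relevant passages.",
    )
    # An agent reads the description above when deciding whether to call this tool.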

Source: https://api.python.langchain.com/en/latest/core_api_reference.html
langchain_core 0.1.4¶

langchain_core.agents¶
Classes¶
agents.AgentAction A full description of an action for an ActionAgent to execute.
agents.AgentActionMessageLog Override init to support instantiation by position for backward compat.
agents.AgentFinish The final return value of an ActionAgent.
agents.AgentStep The result of running an AgentAction.
Functions¶

langchain_core.beta¶
Classes¶
beta.runnables.context.Context() Context for a runnable.
beta.runnables.context.ContextGet Get a context value.
beta.runnables.context.ContextSet Set a context value.
beta.runnables.context.PrefixContext([prefix]) Context for a runnable with a prefix.
Functions¶
beta.runnables.context.aconfig_with_context(...) Asynchronously patch a runnable config with context getters and setters.
beta.runnables.context.config_with_context(...) Patch a runnable config with context getters and setters.

langchain_core.caches¶
Classes¶
caches.BaseCache() Base interface for cache.

langchain_core.callbacks¶
Classes¶
callbacks.base.AsyncCallbackHandler() Async callback handler that handles callbacks from LangChain.
callbacks.base.BaseCallbackHandler() Base callback handler that handles callbacks from LangChain.
callbacks.base.BaseCallbackManager(handlers) Base callback manager that handles callbacks from LangChain.
callbacks.base.CallbackManagerMixin() Mixin for callback manager.
callbacks.base.ChainManagerMixin() Mixin for chain callbacks.
callbacks.base.LLMManagerMixin() Mixin for LLM callbacks.
callbacks.base.RetrieverManagerMixin() Mixin for Retriever callbacks.
callbacks.base.RunManagerMixin() Mixin for run manager.
callbacks.base.ToolManagerMixin() Mixin for tool callbacks.
callbacks.manager.AsyncCallbackManager(handlers) Async callback manager that handles callbacks from LangChain.
callbacks.manager.AsyncCallbackManagerForChainGroup(...) Async callback manager for the chain group.
callbacks.manager.AsyncCallbackManagerForChainRun(*, ...) Async callback manager for chain run.
callbacks.manager.AsyncCallbackManagerForLLMRun(*, ...) Async callback manager for LLM run.
callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, ...) Async callback manager for retriever run.
callbacks.manager.AsyncCallbackManagerForToolRun(*, ...) Async callback manager for tool run.
callbacks.manager.AsyncParentRunManager(*, ...) Async Parent Run Manager.
callbacks.manager.AsyncRunManager(*, run_id, ...) Async Run Manager.
callbacks.manager.BaseRunManager(*, run_id, ...) Base class for run manager (a bound callback manager).
callbacks.manager.CallbackManager(handlers) Callback manager that handles callbacks from LangChain.
callbacks.manager.CallbackManagerForChainGroup(...) Callback manager for the chain group.
callbacks.manager.CallbackManagerForChainRun(*, ...) Callback manager for chain run.
callbacks.manager.CallbackManagerForLLMRun(*, ...) Callback manager for LLM run.
callbacks.manager.CallbackManagerForRetrieverRun(*, ...) Callback manager for retriever run.
callbacks.manager.CallbackManagerForToolRun(*, ...) Callback manager for tool run.
callbacks.manager.ParentRunManager(*, ...[, ...]) Sync Parent Run Manager.
callbacks.manager.RunManager(*, run_id, ...) Sync Run Manager.
callbacks.stdout.StdOutCallbackHandler([color]) Callback Handler that prints to std out.
callbacks.streaming_stdout.StreamingStdOutCallbackHandler() Callback handler for streaming.
Functions¶
callbacks.manager.ahandle_event(handlers, ...) Generic event handler for AsyncCallbackManager.
callbacks.manager.atrace_as_chain_group(...) Get an async callback manager for a chain group in a context manager.
callbacks.manager.handle_event(handlers, ...) Generic event handler for CallbackManager.
callbacks.manager.trace_as_chain_group(...) Get a callback manager for a chain group in a context manager.
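A minimal sketch of a custom handler built on BaseCallbackHandler; only the hooks you override fire, and the chain in the final comment is assumed:

    from langchain_core.callbacks.base import BaseCallbackHandler

    class TokenPrinter(BaseCallbackHandler):
        """Print tokens as a streaming model emits them."""

        def on_llm_new_token(self, token: str, **kwargs) -> None:
            print(token, end="", flush=True)

    # Handlers are passed per invocation through the config dict:
    # chain.invoke(inputs, config={"callbacks": [TokenPrinter()]})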

langchain_core.chat_history¶
Classes¶
chat_history.BaseChatMessageHistory() Abstract base class for storing chat message history.

langchain_core.chat_sessions¶
Classes¶
chat_sessions.ChatSession A chat session represents a single conversation, channel, or other group of messages.

langchain_core.documents¶
Classes¶
documents.base.Document Class for storing a piece of text and associated metadata.
documents.transformers.BaseDocumentTransformer() Abstract base class for document transformation systems.

langchain_core.embeddings¶
Classes¶
embeddings.Embeddings() Interface for embedding models.

langchain_core.example_selectors¶
Logic for selecting examples to include in prompts.
Classes¶
example_selectors.base.BaseExampleSelector() Interface for selecting examples to include in prompts.
example_selectors.length_based.LengthBasedExampleSelector Select examples based on length.
example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector ExampleSelector that selects examples based on Max Marginal Relevance.
example_selectors.semantic_similarity.SemanticSimilarityExampleSelector Example selector that selects examples based on semantic similarity.
Functions¶
example_selectors.semantic_similarity.sorted_values(values) Return a list of values in dict sorted by key.

langchain_core.exceptions¶
Classes¶
exceptions.LangChainException General LangChain exception.
exceptions.OutputParserException(error[, ...]) Exception that output parsers should raise to signify a parsing error.
exceptions.TracerException Base class for exceptions in the tracers module.

langchain_core.language_models¶
Classes¶
language_models.base.BaseLanguageModel Abstract base class for interfacing with language models.
language_models.chat_models.BaseChatModel Base class for Chat models.
language_models.chat_models.SimpleChatModel Simple Chat Model.
language_models.llms.BaseLLM Base LLM abstract interface.
language_models.llms.LLM Base LLM abstract class.
Functions¶
language_models.chat_models.agenerate_from_stream(stream) Async generate from a stream.
language_models.chat_models.generate_from_stream(stream) Generate from a stream.
language_models.llms.create_base_retry_decorator(...) Create a retry decorator for a given LLM and provided list of error types.
language_models.llms.get_prompts(params, prompts) Get prompts that are already cached.
language_models.llms.update_cache(...) Update the cache and get the LLM output.

langchain_core.load¶
Serialization and deserialization.
Classes¶
load.load.Reviver([secrets_map, ...]) Reviver for JSON objects.
load.serializable.BaseSerialized Base class for serialized objects.
load.serializable.Serializable Serializable base class.
load.serializable.SerializedConstructor Serialized constructor.
load.serializable.SerializedNotImplemented Serialized not implemented.
load.serializable.SerializedSecret Serialized secret.
Functions¶
load.dump.default(obj) Return a default value for a Serializable object or a SerializedNotImplemented object.
load.dump.dumpd(obj) Return a JSON dict representation of an object.
load.dump.dumps(obj, *[, pretty]) Return a JSON string representation of an object.
load.load.load(obj, *[, secrets_map, ...]) Revive a LangChain class from a JSON object.
load.load.loads(text, *[, secrets_map, ...]) Revive a LangChain class from a JSON string.
load.serializable.to_json_not_implemented(obj) Serialize a "not implemented" object.
load.serializable.try_neq_default(value, ...) Try to determine if a value is different from the default.
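A minimal sketch of a serialization round trip with dumps and loads, using a prompt template as the Serializable object:

    from langchain_core.load import dumps, loads
    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Tell me a joke about {topic}")

    text = dumps(prompt, pretty=True)  # JSON string representation
    revived = loads(text)              # revive the object from JSON

    assert revived.format(topic="cats") == prompt.format(topic="cats")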

langchain_core.memory¶
Classes¶
memory.BaseMemory Abstract base class for memory in Chains.

langchain_core.messages¶
Classes¶
messages.ai.AIMessage A Message from an AI.
messages.ai.AIMessageChunk A Message chunk from an AI.
messages.base.BaseMessage The base abstract Message class.
messages.base.BaseMessageChunk A Message chunk, which can be concatenated with other Message chunks.
messages.chat.ChatMessage A Message that can be assigned an arbitrary speaker (i.e. role).
messages.chat.ChatMessageChunk A Chat Message chunk.
messages.function.FunctionMessage A Message for passing the result of executing a function back to a model.
messages.function.FunctionMessageChunk A Function Message chunk.
messages.human.HumanMessage A Message from a human.
messages.human.HumanMessageChunk A Human Message chunk.
messages.system.SystemMessage A Message for priming AI behavior, usually passed in as the first of a sequence of input messages.
messages.system.SystemMessageChunk A System Message chunk.
messages.tool.ToolMessage A Message for passing the result of executing a tool back to a model.
messages.tool.ToolMessageChunk A Tool Message chunk.
Functions¶
messages.base.merge_content(first_content, ...) Merge two message contents.
messages.base.message_to_dict(message) Convert a Message to a dictionary.
messages.base.messages_to_dict(messages) Convert a sequence of Messages to a list of dictionaries.
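A minimal sketch of the message classes and dict converters above, e.g. for persisting a conversation as JSON:

    from langchain_core.messages import AIMessage, HumanMessage, messages_to_dict

    history = [
        HumanMessage(content="What is LangChain?"),
        AIMessage(content="A framework for building LLM applications."),
    ]
    as_dicts = messages_to_dict(history)  # list of plain dicts, JSON-serializable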

langchain_core.output_parsers¶
Classes¶
output_parsers.base.BaseGenerationOutputParser Base class to parse the output of an LLM call.
output_parsers.base.BaseLLMOutputParser() Abstract base class for parsing the outputs of a model.
output_parsers.base.BaseOutputParser Base class to parse the output of an LLM call.
output_parsers.json.JsonOutputParser Parse the output of an LLM call to a JSON object.
output_parsers.json.SimpleJsonOutputParser Alias of JsonOutputParser.
output_parsers.list.CommaSeparatedListOutputParser Parse the output of an LLM call to a comma-separated list.
output_parsers.list.ListOutputParser Parse the output of an LLM call to a list.
output_parsers.list.MarkdownListOutputParser Parse a markdown list.
output_parsers.list.NumberedListOutputParser Parse a numbered list.
output_parsers.string.StrOutputParser OutputParser that parses LLMResult into the top likely string.
output_parsers.transform.BaseCumulativeTransformOutputParser Base class for an output parser that can handle streaming input.
output_parsers.transform.BaseTransformOutputParser Base class for an output parser that can handle streaming input.
output_parsers.xml.XMLOutputParser Parse an output using XML format.
Functions¶
output_parsers.json.parse_and_check_json_markdown(...) Parse a JSON string from a Markdown string and check that it contains the expected keys.
output_parsers.json.parse_json_markdown(...) Parse a JSON string from a Markdown string.
output_parsers.json.parse_partial_json(s, *) Parse a JSON string that may be missing closing braces.
output_parsers.list.droplastn(iter, n) Drop the last n elements of an iterator.
output_parsers.xml.nested_element(path, elem) Get a nested element from a path.

langchain_core.outputs¶
Classes¶
outputs.chat_generation.ChatGeneration A single chat generation output.
outputs.chat_generation.ChatGenerationChunk A ChatGeneration chunk, which can be concatenated with other ChatGeneration chunks.
outputs.chat_result.ChatResult Class that contains all results for a single chat model call.
outputs.generation.Generation A single text generation output.
outputs.generation.GenerationChunk A Generation chunk, which can be concatenated with other Generation chunks.
outputs.llm_result.LLMResult Class that contains all results for a batched LLM call.
outputs.run_info.RunInfo Class that contains metadata for a single execution of a Chain or model.

langchain_core.prompt_values¶
Classes¶
prompt_values.ChatPromptValue Chat prompt value.
prompt_values.ChatPromptValueConcrete Chat prompt value which explicitly lists out the message types it accepts.
prompt_values.PromptValue Base abstract class for inputs to any language model.
prompt_values.StringPromptValue String prompt value.

langchain_core.prompts¶
A prompt is the input to the model. It is often constructed from multiple components. Prompt classes and functions make constructing and working with prompts easy.
Class hierarchy:

    BasePromptTemplate --> PipelinePromptTemplate
                           StringPromptTemplate --> PromptTemplate
                                                    FewShotPromptTemplate
                                                    FewShotPromptWithTemplates
    BaseChatPromptTemplate --> AutoGPTPrompt
                               ChatPromptTemplate --> AgentScratchPadChatPromptTemplate

    BaseMessagePromptTemplate --> MessagesPlaceholder
                                  BaseStringMessagePromptTemplate --> ChatMessagePromptTemplate
                                                                      HumanMessagePromptTemplate
                                                                      AIMessagePromptTemplate
                                                                      SystemMessagePromptTemplate

Classes¶
prompts.base.BasePromptTemplate Base class for all prompt templates, returning a prompt.
prompts.chat.AIMessagePromptTemplate AI message prompt template.
prompts.chat.BaseChatPromptTemplate Base class for chat prompt templates.
prompts.chat.BaseMessagePromptTemplate Base class for message prompt templates.
prompts.chat.BaseStringMessagePromptTemplate Base class for message prompt templates that use a string prompt template.
prompts.chat.ChatMessagePromptTemplate Chat message prompt template.
prompts.chat.ChatPromptTemplate A prompt template for chat models.
prompts.chat.HumanMessagePromptTemplate Human message prompt template.
prompts.chat.MessagesPlaceholder Prompt template that assumes the variable is already a list of messages.
prompts.chat.SystemMessagePromptTemplate System message prompt template.
prompts.few_shot.FewShotChatMessagePromptTemplate Chat prompt template that supports few-shot examples.
prompts.few_shot.FewShotPromptTemplate Prompt template that contains few-shot examples.
prompts.few_shot_with_templates.FewShotPromptWithTemplates Prompt template that contains few-shot examples.
prompts.pipeline.PipelinePromptTemplate A prompt template for composing multiple prompt templates together.
prompts.prompt.PromptTemplate A prompt template for a language model.
prompts.string.StringPromptTemplate String prompt that exposes the format method, returning a prompt.
Functions¶
prompts.base.format_document(doc, prompt) Format a document into a string based on a prompt template.
prompts.loading.load_prompt(path) Unified method for loading a prompt from LangChainHub or local fs.
prompts.loading.load_prompt_from_config(config) Load prompt from Config Dict.
prompts.string.check_valid_template(...) Check that template string is valid.
prompts.string.get_template_variables(...) Get the variables from the template.
prompts.string.jinja2_formatter(template, ...) Format a template using jinja2.
prompts.string.validate_jinja2(template, ...) Validate that the input variables are valid for the template.
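A minimal sketch of composing a chat prompt from the classes above:

    from langchain_core.prompts import ChatPromptTemplate

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a concise assistant."),
        ("human", "{question}"),
    ])

    messages = prompt.format_messages(question="What does LCEL stand for?")
    # -> [SystemMessage(...), HumanMessage(...)], ready to pass to a chat model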
https://api.python.langchain.com/en/latest/core_api_reference.html
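For illustration, a minimal LCEL composition built from the primitives listed above; the chat model class is an assumption (any BaseChatModel works) and requires its own API key:

from langchain_community.chat_models import ChatOpenAI  # assumption: any chat model works here
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI()  # assumption: OPENAI_API_KEY is set in the environment
chain = prompt | model | StrOutputParser()  # the | operator builds a RunnableSequence

chain.invoke({"topic": "bears"})                       # sync
# await chain.ainvoke({"topic": "bears"})              # async
# chain.batch([{"topic": "bears"}, {"topic": "owls"}]) # batch
# for chunk in chain.stream({"topic": "bears"}):       # streaming
#     print(chunk, end="")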
da65ef07fff5-9
runnables.base.RunnableParallel A runnable that runs a mapping of runnables in parallel, and returns a mapping of their outputs. runnables.base.RunnableSequence A sequence of runnables, where the output of each is the input of the next. runnables.base.RunnableSerializable A Runnable that can be serialized to JSON. runnables.branch.RunnableBranch A Runnable that selects which branch to run based on a condition. runnables.config.ContextThreadPoolExecutor([...]) ThreadPoolExecutor that copies the context to the child thread. runnables.config.EmptyDict Empty dict type. runnables.config.RunnableConfig Configuration for a Runnable. runnables.configurable.DynamicRunnable A Serializable Runnable that can be dynamically configured. runnables.configurable.RunnableConfigurableAlternatives A Runnable that can be dynamically configured. runnables.configurable.RunnableConfigurableFields A Runnable that can be dynamically configured. runnables.configurable.StrEnum(value[, ...]) A string enum. runnables.fallbacks.RunnableWithFallbacks A Runnable that can fall back to other Runnables if it fails. runnables.graph.Edge(source, target) Create new instance of Edge(source, target) runnables.graph.Graph(nodes, edges) runnables.graph.Node(id, data) Create new instance of Node(id, data) runnables.graph_draw.AsciiCanvas(cols, lines) Class for drawing in ASCII. runnables.graph_draw.VertexViewer(name) Class to define vertex box boundaries that will be accounted for during graph building by grandalf. runnables.history.RunnableWithMessageHistory A runnable that manages chat message history for another runnable.
https://api.python.langchain.com/en/latest/core_api_reference.html
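For illustration, a small sketch of how RunnableParallel and RunnableSequence compose, using RunnableLambda for the steps:

from langchain_core.runnables import RunnableLambda, RunnableParallel

add_one = RunnableLambda(lambda x: x + 1)
mul_two = RunnableLambda(lambda x: x * 2)

# RunnableParallel runs both branches on the same input and returns a dict
both = RunnableParallel(plus_one=add_one, times_two=mul_two)
both.invoke(3)  # -> {'plus_one': 4, 'times_two': 6}

# The | operator chains runnables into a RunnableSequence
seq = add_one | mul_two
seq.invoke(3)  # -> 8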
da65ef07fff5-10
runnables.passthrough.RunnableAssign A runnable that assigns key-value pairs to Dict[str, Any] inputs. runnables.passthrough.RunnablePassthrough A runnable to pass through inputs unchanged or with additional keys. runnables.passthrough.RunnablePick A runnable that picks keys from Dict[str, Any] inputs. runnables.retry.RunnableRetry Retry a Runnable if it fails. runnables.router.RouterInput A Router input. runnables.router.RouterRunnable A runnable that routes to a set of runnables based on Input['key']. runnables.utils.AddableDict Dictionary that can be added to another dictionary. runnables.utils.ConfigurableField(id[, ...]) A field that can be configured by the user. runnables.utils.ConfigurableFieldMultiOption(id, ...) A field that can be configured by the user with multiple default values. runnables.utils.ConfigurableFieldSingleOption(id, ...) A field that can be configured by the user with a default value. runnables.utils.ConfigurableFieldSpec(id, ...) A field that can be configured by the user. runnables.utils.FunctionNonLocals() Get the nonlocal variables accessed by a function. runnables.utils.GetLambdaSource() Get the source code of a lambda function. runnables.utils.IsFunctionArgDict() Check if the first argument of a function is a dict. runnables.utils.IsLocalDict(name, keys) Check if a name is a local dict. runnables.utils.NonLocals() Get nonlocal variables accessed. runnables.utils.SupportsAdd(*args, **kwargs) Protocol for objects that support addition. Functions¶ runnables.base.chain() Decorate a function to make it a Runnable.
https://api.python.langchain.com/en/latest/core_api_reference.html
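A sketch of RunnablePassthrough.assign and RunnablePick on dict inputs:

from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_core.runnables.passthrough import RunnablePick

# Pass the input dict through unchanged while adding a computed key
enriched = RunnablePassthrough.assign(doubled=RunnableLambda(lambda d: d["x"] * 2))
enriched.invoke({"x": 2})  # -> {'x': 2, 'doubled': 4}

# Pick a single key back out of the dict output
(enriched | RunnablePick("doubled")).invoke({"x": 2})  # -> 4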
da65ef07fff5-11
runnables.base.chain() Decorate a function to make it a Runnable. runnables.base.coerce_to_runnable(thing) Coerce a runnable-like object into a Runnable. runnables.config.acall_func_with_variable_args(...) Call function that may optionally accept a run_manager and/or config. runnables.config.call_func_with_variable_args(...) Call function that may optionally accept a run_manager and/or config. runnables.config.ensure_config([config]) Ensure that a config is a dict with all keys present. runnables.config.get_async_callback_manager_for_config(config) Get an async callback manager for a config. runnables.config.get_callback_manager_for_config(config) Get a callback manager for a config. runnables.config.get_config_list(config, length) Get a list of configs from a single config or a list of configs. runnables.config.get_executor_for_config(config) Get an executor for a config. runnables.config.merge_configs(*configs) Merge multiple configs into one. runnables.config.patch_config(config, *[, ...]) Patch a config with new values. runnables.config.run_in_executor(...) Run a function in an executor. runnables.configurable.make_options_spec(...) Make a ConfigurableFieldSpec for a ConfigurableFieldSingleOption or ConfigurableFieldMultiOption. runnables.configurable.prefix_config_spec(...) Prefix the id of a ConfigurableFieldSpec. runnables.graph_draw.draw(vertices, edges) Build a DAG and draw it in ASCII. runnables.passthrough.aidentity(x) An async identity function runnables.passthrough.identity(x) An identity function runnables.utils.aadd(addables) Asynchronously add a sequence of addable objects together.
https://api.python.langchain.com/en/latest/core_api_reference.html
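The chain() decorator listed above turns a plain function into a Runnable; a minimal sketch:

from langchain_core.runnables.base import chain

@chain
def to_upper(text: str) -> str:
    return text.upper()

to_upper.invoke("hello")    # -> "HELLO"
to_upper.batch(["a", "b"])  # -> ["A", "B"]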
da65ef07fff5-12
runnables.utils.accepts_config(callable) Check if a callable accepts a config argument. runnables.utils.accepts_context(callable) Check if a callable accepts a context argument. runnables.utils.accepts_run_manager(callable) Check if a callable accepts a run_manager argument. runnables.utils.add(addables) Add a sequence of addable objects together. runnables.utils.gated_coro(semaphore, coro) Run a coroutine with a semaphore. runnables.utils.gather_with_concurrency(n, ...) Gather coroutines with a limit on the number of concurrent coroutines. runnables.utils.get_function_first_arg_dict_keys(func) Get the keys of the first argument of a function if it is a dict. runnables.utils.get_function_nonlocals(func) Get the nonlocal variables accessed by a function. runnables.utils.get_lambda_source(func) Get the source code of a lambda function. runnables.utils.get_unique_config_specs(specs) Get the unique config specs from a sequence of config specs. runnables.utils.indent_lines_after_first(...) Indent all lines of text after the first line. langchain_core.stores¶ Classes¶ stores.BaseStore() Abstract interface for a key-value store. langchain_core.tools¶ Base implementation for tools or skills. Classes¶ tools.BaseTool Interface LangChain tools must implement. tools.SchemaAnnotationError Raised when 'args_schema' is missing or has an incorrect type annotation. tools.StructuredTool Tool that can operate on any number of inputs. tools.Tool Tool that takes in function or coroutine directly. tools.ToolException An optional exception that a tool throws when an execution error occurs.
https://api.python.langchain.com/en/latest/core_api_reference.html
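A sketch of building a tool from a function with StructuredTool; the multiply function is hypothetical:

from langchain_core.tools import StructuredTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

multiply_tool = StructuredTool.from_function(func=multiply)
multiply_tool.run({"a": 2, "b": 3})  # -> 6
# Name, description, and args schema are inferred from the function
multiply_tool.name, multiply_tool.description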
da65ef07fff5-13
Functions¶ tools.create_schema_from_function(...) Create a pydantic schema from a function's signature. tools.tool(*args[, return_direct, ...]) Make tools out of functions; can be used with or without arguments. langchain_core.tracers¶ Classes¶ tracers.base.BaseTracer(**kwargs) Base interface for tracers. tracers.evaluation.EvaluatorCallbackHandler(...) A tracer that runs a run evaluator whenever a run is persisted. tracers.langchain.LangChainTracer([...]) An implementation of the SharedTracer that POSTs to the LangChain endpoint. tracers.log_stream.LogEntry A single entry in the run log. tracers.log_stream.LogStreamCallbackHandler(*) A tracer that streams run logs to a stream. tracers.log_stream.RunLog(*ops, state) A run log. tracers.log_stream.RunLogPatch(*ops) A patch to the run log. tracers.log_stream.RunState State of the run. tracers.root_listeners.RootListenersTracer(*, ...) A tracer that calls listeners on run start, end, and error. tracers.run_collector.RunCollectorCallbackHandler([...]) A tracer that collects all nested runs in a list. tracers.schemas.Run Run schema for the V2 API in the Tracer. tracers.stdout.ConsoleCallbackHandler(**kwargs) Tracer that prints to the console. tracers.stdout.FunctionCallbackHandler(...) Tracer that calls a function with a single str parameter. Functions¶ tracers.context.collect_runs() Collect all run traces in context. tracers.context.register_configure_hook(...) Register a configure hook. tracers.context.tracing_v2_enabled([...]) Instruct LangChain to log all runs in context to LangSmith.
https://api.python.langchain.com/en/latest/core_api_reference.html
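ConsoleCallbackHandler from the tracer classes above can be attached to any runnable invocation to print the run tree; a minimal sketch:

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.stdout import ConsoleCallbackHandler

runnable = RunnableLambda(lambda x: x + 1)
# Prints the traced run (inputs, outputs, timing) to the console
runnable.invoke(1, config={"callbacks": [ConsoleCallbackHandler()]})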
da65ef07fff5-14
tracers.evaluation.wait_for_all_evaluators() Wait for all evaluators to finish. tracers.langchain.get_client() Get the client. tracers.langchain.log_error_once(method, ...) Log an error once. tracers.langchain.wait_for_all_tracers() Wait for all tracers to finish. tracers.stdout.elapsed(run) Get the elapsed time of a run. tracers.stdout.try_json_stringify(obj, fallback) Try to stringify an object to JSON. langchain_core.utils¶ Utility functions for LangChain. These functions do not depend on any other LangChain module. Classes¶ utils.aiter.NoLock() Dummy lock that provides the proper interface but no protection. utils.aiter.Tee(iterable[, n, lock]) Create n separate asynchronous iterators over iterable. utils.aiter.atee alias of Tee utils.formatting.StrictFormatter() A subclass of formatter that checks for extra keys. utils.iter.NoLock() Dummy lock that provides the proper interface but no protection. utils.iter.Tee(iterable[, n, lock]) Create n separate iterators over iterable. utils.iter.safetee alias of Tee Functions¶ utils.aiter.py_anext(iterator[, default]) Pure-Python implementation of anext() for testing purposes. utils.aiter.tee_peer(iterator, buffer, ...) An individual iterator of a tee() utils.env.env_var_is_set(env_var) Check if an environment variable is set. utils.env.get_from_dict_or_env(data, key, ...) Get a value from a dictionary or an environment variable. utils.env.get_from_env(key, env_key[, default]) Get a value from an environment variable.
https://api.python.langchain.com/en/latest/core_api_reference.html
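A sketch of the env helpers; the variable name here is hypothetical:

from langchain_core.utils.env import env_var_is_set, get_from_env

env_var_is_set("MY_SERVICE_KEY")  # -> True or False
# Falls back to the default when MY_SERVICE_KEY is unset
api_key = get_from_env("api_key", "MY_SERVICE_KEY", default="not-set")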
da65ef07fff5-15
utils.html.extract_sub_links(raw_html, url, *) Extract all links from a raw HTML string and convert them into absolute paths. utils.html.find_all_links(raw_html, *[, pattern]) Extract all links from a raw HTML string. utils.input.get_bolded_text(text) Get bolded text. utils.input.get_color_mapping(items[, ...]) Get a mapping from items to a supported color. utils.input.get_colored_text(text, color) Get colored text. utils.input.print_text(text[, color, end, file]) Print text with highlighting and no end characters. utils.iter.batch_iterate(size, iterable) Utility batching function. utils.iter.tee_peer(iterator, buffer, peers, ...) An individual iterator of a tee() utils.json_schema.dereference_refs(schema_obj, *) Try to substitute $refs in JSON Schema. utils.loading.try_load_from_hub(path, ...) Load configuration from hub. utils.pydantic.get_pydantic_major_version() Get the major version of Pydantic. utils.strings.comma_list(items) Convert a list to a comma-separated string. utils.strings.stringify_dict(data) Stringify a dictionary. utils.strings.stringify_value(val) Stringify a value. utils.utils.build_extra_kwargs(extra_kwargs, ...) Build extra kwargs from values and extra_kwargs. utils.utils.check_package_version(package[, ...]) Check the version of a package. utils.utils.convert_to_secret_str(value) Convert a string to a SecretStr if needed. utils.utils.get_pydantic_field_names(...) Get field names, including aliases, for a pydantic class. utils.utils.guard_import(module_name, *[, ...]) Dynamically imports a module and raises a helpful exception if the module is not installed.
https://api.python.langchain.com/en/latest/core_api_reference.html
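The string helpers are small conveniences; for example (outputs approximate):

from langchain_core.utils.strings import comma_list, stringify_dict

comma_list([1, 2, 3])             # -> "1, 2, 3"
stringify_dict({"a": 1, "b": 2})  # -> one "key: value" pair per line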
da65ef07fff5-16
Dynamically imports a module and raises a helpful exception if the module is not installed. utils.utils.mock_now(dt_value) Context manager for mocking out datetime.now() in unit tests. utils.utils.raise_for_status_with_text(response) Raise an error with the response text. utils.utils.xor_args(*arg_groups) Validate specified keyword args are mutually exclusive. langchain_core.vectorstores¶ Classes¶ vectorstores.VectorStore() Interface for vector store. vectorstores.VectorStoreRetriever Base Retriever class for VectorStore.
https://api.python.langchain.com/en/latest/core_api_reference.html
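A sketch of the VectorStore and VectorStoreRetriever interface; FAISS and FakeEmbeddings are assumptions (any VectorStore subclass and Embeddings implementation work, and FAISS needs the faiss package installed):

from langchain_community.embeddings import FakeEmbeddings  # assumption: stand-in embeddings
from langchain_community.vectorstores import FAISS         # assumption: any VectorStore works

store = FAISS.from_texts(["hello world", "goodbye world"], FakeEmbeddings(size=16))
retriever = store.as_retriever(search_kwargs={"k": 1})  # a VectorStoreRetriever
retriever.get_relevant_documents("hello")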
174a9775d84c-0
langchain_mistralai 0.0.1¶ langchain_mistralai.chat_models¶ Classes¶ chat_models.ChatMistralAI A chat model that uses the MistralAI API. Functions¶ chat_models.acompletion_with_retry(llm[, ...]) Use tenacity to retry the async completion call.
https://api.python.langchain.com/en/latest/mistralai_api_reference.html
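A minimal usage sketch; the model name is an assumption, and the key is assumed to be read from the MISTRAL_API_KEY environment variable:

from langchain_core.messages import HumanMessage
from langchain_mistralai.chat_models import ChatMistralAI

llm = ChatMistralAI(model="mistral-tiny")  # assumption: model name
llm.invoke([HumanMessage(content="Say hello in French.")])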
773160f07d12-0
langchain_together 0.0.1¶ langchain_together.embeddings¶ Classes¶ embeddings.TogetherEmbeddings TogetherEmbeddings embedding model.
https://api.python.langchain.com/en/latest/together_api_reference.html
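A minimal usage sketch; the model name is an assumption, and TOGETHER_API_KEY is assumed to be set in the environment:

from langchain_together.embeddings import TogetherEmbeddings

emb = TogetherEmbeddings(model="togethercomputer/m2-bert-80M-8k-retrieval")  # assumption: model name
emb.embed_query("hello world")               # -> one embedding vector
emb.embed_documents(["doc one", "doc two"])  # -> one vector per document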
1de1ac18a345-0
langchain_google_genai 0.0.5¶ langchain_google_genai.chat_models¶ Classes¶ chat_models.ChatGoogleGenerativeAI Google Generative AI Chat models API. chat_models.ChatGoogleGenerativeAIError Custom exception class for errors associated with the Google GenAI API. langchain_google_genai.embeddings¶ Classes¶ embeddings.GoogleGenerativeAIEmbeddings Google Generative AI Embeddings. langchain_google_genai.llms¶ Classes¶ llms.GoogleGenerativeAI Google GenerativeAI models.
https://api.python.langchain.com/en/latest/google_genai_api_reference.html
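A minimal usage sketch; the model names are assumptions, and GOOGLE_API_KEY is assumed to be set in the environment:

from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings

llm = ChatGoogleGenerativeAI(model="gemini-pro")  # assumption: model name
llm.invoke("Write a haiku about the sea.")

emb = GoogleGenerativeAIEmbeddings(model="models/embedding-001")  # assumption: model name
emb.embed_query("hello")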
a785f16cecd8-0
langchain_core.caches.BaseCache¶ class langchain_core.caches.BaseCache[source]¶ Base interface for cache. Methods __init__() clear(**kwargs) Clear cache that can take additional keyword arguments. lookup(prompt, llm_string) Look up based on prompt and llm_string. update(prompt, llm_string, return_val) Update cache based on prompt and llm_string. __init__()¶ abstract clear(**kwargs: Any) → None[source]¶ Clear cache that can take additional keyword arguments. abstract lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶ Look up based on prompt and llm_string. abstract update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶ Update cache based on prompt and llm_string.
https://api.python.langchain.com/en/latest/caches/langchain_core.caches.BaseCache.html
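A minimal in-memory subclass sketch showing the three abstract methods; the class name is hypothetical:

from typing import Any, Optional, Sequence

from langchain_core.caches import BaseCache
from langchain_core.outputs import Generation


class DictCache(BaseCache):
    """Toy cache keyed on (prompt, llm_string)."""

    def __init__(self) -> None:
        self._store: dict = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: Sequence[Generation]) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()

Such a cache would typically be installed globally, e.g. via langchain.globals.set_llm_cache(DictCache()).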
91229f9d1a2c-0
langchain.chains.base.Chain¶ class langchain.chains.base.Chain[source]¶ Bases: RunnableSerializable[Dict[str, Any], Dict[str, Any]], ABC Abstract base class for creating structured sequences of calls to components. Chains should be used to encode a sequence of calls to components like models, document retrievers, other chains, etc., and provide a simple interface to this sequence. The Chain interface makes it easy to create apps that are: Stateful: add Memory to any Chain to give it state, Observable: pass Callbacks to a Chain to execute additional functionality, like logging, outside the main sequence of component calls, Composable: the Chain API is flexible enough that it is easy to combine Chains with other components, including other Chains. The main methods exposed by chains are: __call__: Chains are callable. The __call__ method is the primary way to execute a Chain. This takes inputs as a dictionary and returns a dictionary output. run: A convenience method that takes inputs as args/kwargs and returns the output as a string or object. This method can only be used for a subset of chains and cannot return as rich of an output as __call__. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[langchain_core.callbacks.base.BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Optional[Union[List[langchain_core.callbacks.base.BaseCallbackHandler], langchain_core.callbacks.base.BaseCallbackManager]] = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
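For illustration, a toy custom Chain subclass (the class is hypothetical) implementing the abstract pieces: input_keys, output_keys, and _call:

from typing import Any, Dict, List, Optional

from langchain.chains.base import Chain
from langchain_core.callbacks.manager import CallbackManagerForChainRun


class EchoChain(Chain):
    """Toy chain that upper-cases its single input."""

    @property
    def input_keys(self) -> List[str]:
        return ["text"]

    @property
    def output_keys(self) -> List[str]:
        return ["shout"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        return {"shout": inputs["text"].upper()}


EchoChain().invoke({"text": "hello"})  # -> {'text': 'hello', 'shout': 'HELLO'}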
91229f9d1a2c-1
starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param memory: Optional[langchain_core.memory.BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the global verbose value, accessible via langchain.globals.get_verbose().
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-2
accessible via langchain.globals.get_verbose(). __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any][source]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-3
Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any][source]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-4
Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any][source]¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]][source]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-5
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example

# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."

assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶ Assigns new fields to the dict output of this runnable. Returns a new runnable. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-6
Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input – The input to the runnable. config – The config to use for the runnable. diff – Whether to yield diffs between each step, or the current state. with_streamed_output_list – Whether to yield the streamed_output list. include_names – Only include logs with these names. include_types – Only include logs with these types. include_tags – Only include logs with these tags. exclude_names – Exclude logs with these names. exclude_types – Exclude logs with these types. exclude_tags – Exclude logs with these tags. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-7
Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-8
Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict[source]¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example

chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}

classmethod from_orm(obj: Any) → Model¶ get_graph(config: Optional[RunnableConfig] = None) → Graph¶ Return a graph representation of this runnable. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel][source]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-9
methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows getting an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶ Get the name of the runnable. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel][source]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows getting an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶ invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any][source]¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-10
purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-11
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶ Pick keys from the dict output of this runnable. Returns a new runnable. pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶ Compose this runnable with another object to create a RunnableSequence. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str][source]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str][source]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-12
method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example

# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."

save(file_path: Union[Path, str]) → None[source]¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example

chain.save(file_path="path/chain.yaml")

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-13
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
91229f9d1a2c-14
fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain_core.runnables.utils.Input]¶ The type of input this runnable accepts specified as a type annotation.
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
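A sketch of with_retry and with_fallbacks on a deliberately flaky function:

from langchain_core.runnables import RunnableLambda

attempts = {"n": 0}

def flaky(x: int) -> int:
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ValueError("transient failure")
    return x

# Retries up to 3 attempts; jitter disabled to keep the sketch fast
safe = RunnableLambda(flaky).with_retry(wait_exponential_jitter=False, stop_after_attempt=3)
safe.invoke(7)  # succeeds on the third attempt -> 7

# Alternatively, fall back to another runnable on failure
guarded = RunnableLambda(flaky).with_fallbacks([RunnableLambda(lambda x: -1)])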
91229f9d1a2c-15
property OutputType: Type[langchain_core.runnables.utils.Output]¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. abstract property input_keys: List[str]¶ Keys expected to be in the chain input. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} name: Optional[str] = None¶ The name of the runnable. Used for debugging and tracing. abstract property output_keys: List[str]¶ Keys expected to be in the chain output. property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using Chain¶ BabyAGI User Guide BabyAGI with Tools SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base Custom chain
https://api.python.langchain.com/en/latest/chains/langchain.chains.base.Chain.html
ba967fb49299-0
langchain.chains.natbot.crawler.ElementInViewPort¶ class langchain.chains.natbot.crawler.ElementInViewPort[source]¶ A typed dictionary containing information about elements in the viewport. node_index: str¶ backend_node_id: int¶ node_name: Optional[str]¶ node_value: Optional[str]¶ node_meta: List[str]¶ is_clickable: bool¶ origin_x: int¶ origin_y: int¶ center_x: int¶ center_y: int¶
https://api.python.langchain.com/en/latest/chains/langchain.chains.natbot.crawler.ElementInViewPort.html
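Since ElementInViewPort is a TypedDict, instances are plain dicts; a hypothetical example:

from langchain.chains.natbot.crawler import ElementInViewPort

el: ElementInViewPort = {
    "node_index": "1",
    "backend_node_id": 42,
    "node_name": "button",
    "node_value": None,
    "node_meta": [],
    "is_clickable": True,
    "origin_x": 0,
    "origin_y": 0,
    "center_x": 10,
    "center_y": 10,
}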
f3cd3b2c3064-0
langchain.chains.combine_documents.reduce.ReduceDocumentsChain¶ class langchain.chains.combine_documents.reduce.ReduceDocumentsChain[source]¶ Bases: BaseCombineDocumentsChain Combine documents by recursively reducing them. This involves two chains: combine_documents_chain and collapse_documents_chain. combine_documents_chain is ALWAYS provided. This is the final chain that is called. We pass all previous results to this chain, and the output of this chain is returned as a final result. collapse_documents_chain is used if the documents passed in are too many to all be passed to combine_documents_chain in one go. In this case, collapse_documents_chain is called recursively on groups of documents as large as are allowed. Example

from langchain.chains import (
    StuffDocumentsChain, LLMChain, ReduceDocumentsChain
)
from langchain_core.prompts import PromptTemplate
from langchain_community.llms import OpenAI

# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
# details.
document_prompt = PromptTemplate(
    input_variables=["page_content"],
    template="{page_content}"
)
document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template(
    "Summarize this content: {context}"
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
combine_documents_chain = StuffDocumentsChain(
    llm_chain=llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name
)
chain = ReduceDocumentsChain(
    combine_documents_chain=combine_documents_chain,
)
# If we wanted to, we could also pass in collapse_documents_chain
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
f3cd3b2c3064-1
# which is specifically aimed at collapsing documents BEFORE
# the final call.
prompt = PromptTemplate.from_template(
    "Collapse this content: {context}"
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
collapse_documents_chain = StuffDocumentsChain(
    llm_chain=llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name
)
chain = ReduceDocumentsChain(
    combine_documents_chain=combine_documents_chain,
    collapse_documents_chain=collapse_documents_chain,
)

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param collapse_documents_chain: Optional[BaseCombineDocumentsChain] = None¶ Chain to use to collapse documents if needed until they can all fit. If None, will use the combine_documents_chain. This is typically a StuffDocumentsChain. param combine_documents_chain: BaseCombineDocumentsChain [Required]¶ Final chain to call to combine documents. This is typically a StuffDocumentsChain. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
f3cd3b2c3064-2
and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param token_max: int = 3000¶ The maximum number of tokens to group documents into. For example, if set to 3000 then documents will be grouped into chunks of no greater than 3000 tokens before trying to combine them into a smaller chunk. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the global verbose value, accessible via langchain.globals.get_verbose(). __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
f3cd3b2c3064-3
Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
f3cd3b2c3064-4
e.g., if the underlying runnable uses an API which supports a batch mode. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acombine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
f3cd3b2c3064-5
Async combine multiple documents recursively. Parameters docs – List of documents to combine, assumed that each one is less than token_max. token_max – Recursively creates groups of documents less than this number of tokens. callbacks – Callbacks to be passed through **kwargs – additional parameters to be passed to LLM calls (like other input variables besides the documents) Returns The first element returned is the single string output. The second element returned is a dictionary of other keys to return. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
f3cd3b2c3064-6
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example

# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."

assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶ Assigns new fields to the dict output of this runnable. Returns a new runnable. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
f3cd3b2c3064-7
Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input – The input to the runnable. config – The config to use for the runnable. diff – Whether to yield diffs between each step, or the current state. with_streamed_output_list – Whether to yield the streamed_output list. include_names – Only include logs with these names. include_types – Only include logs with these types. include_tags – Only include logs with these tags. exclude_names – Exclude logs with these names. exclude_types – Exclude logs with these types. exclude_tags – Exclude logs with these tags. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream.
https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
f3cd3b2c3064-8
Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. combine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶ Combine multiple documents recursively. Parameters docs – List of documents to combine, assumed that each one is less than token_max. token_max – Recursively creates groups of documents less than this number of tokens. callbacks – Callbacks to be passed through **kwargs – additional parameters to be passed to LLM calls (like other input variables besides the documents) Returns The first element returned is the single string output. The second element returned is a dictionary of other keys to return. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts, specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
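Illustrative only: configurable_fields is more commonly applied to a model runnable than to a document-combining chain itself. A sketch, assuming the openai integration is installed:

from langchain.chat_models import ChatOpenAI
from langchain_core.runnables import ConfigurableField

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM temperature",
        description="Sampling temperature used by the model",
    )
)

# The marked field can then be overridden per call through the config:
model.invoke("Pick a random number", config={"configurable": {"llm_temperature": 0.9}})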
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choosing which fields to include, exclude and change. Parameters include – fields to include in the new model exclude – fields to exclude from the new model; as with values, this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects the Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_orm(obj: Any) → Model¶ get_graph(config: Optional[RunnableConfig] = None) → Graph¶ Return a graph representation of this runnable. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows getting an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”]. get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶ Get the name of the runnable. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output of the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with.
This method allows getting an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶ invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’ and ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, with include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps(). classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶ Pick keys from the dict output of this runnable. Returns a new runnable. pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶ Compose this runnable with another object to create a RunnableSequence. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prompt_length(docs: List[Document], **kwargs: Any) → Optional[int]¶ Return the prompt length given the documents passed in. This can be used by a caller to determine whether passing in a list of documents would exceed a certain prompt length. This is useful when trying to ensure that the size of a prompt remains below a certain context limit. Parameters docs – List[Document], a list of documents to use to calculate the total prompt length. Returns Returns None if the method does not depend on the prompt length, otherwise the length of the prompt in tokens. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects the Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain_core.runnables.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Type[langchain_core.runnables.utils.Output]¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} name: Optional[str] = None¶ The name of the runnable. Used for debugging and tracing. property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using ReduceDocumentsChain¶ Set env var OPENAI_API_KEY or load from a .env file
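A hedged construction sketch; the model and prompt choices here are placeholders, not the only supported configuration:

from langchain.chains import LLMChain, ReduceDocumentsChain, StuffDocumentsChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

# Inner chain that merges one batch of documents into a single string.
prompt = PromptTemplate.from_template("Summarize the following:\n\n{context}")
llm_chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)
combine_chain = StuffDocumentsChain(
    llm_chain=llm_chain, document_variable_name="context"
)

# combine_docs applies combine_chain recursively until the batched
# documents fit under token_max.
reduce_chain = ReduceDocumentsChain(
    combine_documents_chain=combine_chain,
    token_max=3000,
)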
langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain¶ class langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain[source]¶ Bases: Chain Chain for question-answering against a graph by generating nGQL statements. Security note: Make sure that the database connection uses credentials that are narrowly scoped to include only the necessary permissions. Failure to do so may result in data corruption or loss, since the calling code may attempt commands that would result in deletion or mutation of data if appropriately prompted, or in reading sensitive data if such data is present in the database. The best way to guard against such negative outcomes is to (as appropriate) limit the permissions granted to the credentials used with this tool. See https://python.langchain.com/docs/security for more information. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see Callback docs for full details. param graph: NebulaGraph [Required]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param ngql_generation_chain: LLMChain [Required]¶ param qa_chain: LLMChain [Required]¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the global verbose value, accessible via langchain.globals.get_verbose(). __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response.
If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO-bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain.
Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list.
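A small sketch of apply, assuming chain is a configured NebulaGraphQAChain (its single input key is query):

results = chain.apply([
    {"query": "Who directed The Godfather II?"},
    {"query": "Which movies did Al Pacino star in?"},
])
# -> one output dict per input dict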
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..."
# -> "The temperature in Boise is..." assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶ Assigns new fields to the dict output of this runnable. Returns a new runnable. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. Parameters input – The input to the runnable. config – The config to use for the runnable. diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list. include_names – Only include logs with these names. include_types – Only include logs with these types. include_tags – Only include logs with these tags. exclude_names – Exclude logs with these names. exclude_types – Exclude logs with these types. exclude_tags – Exclude logs with these tags. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO-bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts, specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config.
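Illustrative: the returned pydantic model can be dumped as JSON Schema for inspection (chain is a placeholder for a configured instance):

schema = chain.config_schema(include=["tags", "metadata"])
print(schema.schema_json(indent=2))  # pydantic v1 JSON Schema dump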
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values. copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choosing which fields to include, exclude and change. Parameters include – fields to include in the new model exclude – fields to exclude from the new model; as with values, this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects the Chain._chain_type property to be implemented and for memory to be null.
Parameters **kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, *, qa_prompt: BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you must use to construct an answer.\nThe provided information is authoritative, you must never doubt it or try to use your internal knowledge to correct it.\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\nIf the provided information is empty, say that you don't know the answer.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:"), ngql_prompt: BasePromptTemplate = PromptTemplate(input_variables=['question', 'schema'], template="Task:Generate NebulaGraph Cypher statement to query a graph database.\n\nInstructions:\n\nFirst, generate cypher then convert it to NebulaGraph Cypher dialect(rather than standard):\n1. it requires explicit label specification only when referring to node properties: v.`Foo`.name\n2. note explicit label specification is not needed for edge properties, so it's e.name instead of e.`Bar`.name\n3. it uses double equals sign for comparison: `==` rather than `=`\nFor instance:\n```diff\n< MATCH (p:person)-[e:directed]->(m:movie) WHERE m.name = 'The Godfather II'\n< RETURN p.name, e.year, m.name;\n---\n> MATCH (p:`person`)-[e:directed]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'\n> RETURN p.`person`.`name`, e.year, m.`movie`.`name`;\n```\n\n
Use only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\nDo not include any text except the generated Cypher statement.\n\nThe question is:\n{question}"), **kwargs: Any) → NebulaGraphQAChain[source]¶
Initialize from LLM. classmethod from_orm(obj: Any) → Model¶ get_graph(config: Optional[RunnableConfig] = None) → Graph¶ Return a graph representation of this runnable. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows getting an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”]. get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶ Get the name of the runnable. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output of the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows getting an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
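A usage sketch, assuming a running NebulaGraph instance with the nebula3 client installed; the connection values and space name are placeholders:

from langchain.chains import NebulaGraphQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import NebulaGraph

graph = NebulaGraph(
    space="langchain",      # placeholder graph space
    username="root",
    password="nebula",
    address="127.0.0.1",
    port=9669,
)

chain = NebulaGraphQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
chain.run("Who played in The Godfather II?")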
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’ and ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, with include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶ Pick keys from the dict output of this runnable. Returns a new runnable. pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶ Compose this runnable with another object to create a RunnableSequence. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs.
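A sketch of the pick method listed above; assumes the chain's output key is result:

only_answer = chain.pick("result")
# only_answer.invoke({"query": "..."}) now returns just the answer value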
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶ Save the chain. Expects the Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions.
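A hedged sketch of with_retry; the exception type shown is illustrative, and chain is a placeholder for a configured instance:

retrying_chain = chain.with_retry(
    retry_if_exception_type=(TimeoutError,),  # illustrative; default is (Exception,)
    wait_exponential_jitter=True,
    stop_after_attempt=3,
)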
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain_core.runnables.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Type[langchain_core.runnables.utils.Output]¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} name: Optional[str] = None¶ The name of the runnable. Used for debugging and tracing. property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using NebulaGraphQAChain¶ NebulaGraphQAChain
langchain.chains.combine_documents.reduce.acollapse_docs¶ async langchain.chains.combine_documents.reduce.acollapse_docs(docs: List[Document], combine_document_func: AsyncCombineDocsProtocol, **kwargs: Any) → Document[source]¶ Execute a collapse function on a set of documents and merge their metadata. Parameters docs – A list of Documents to combine. combine_document_func – A function that takes in a list of Documents and optionally additional keyword parameters and combines them into a single string. **kwargs – Arbitrary additional keyword params to pass to the combine_document_func. Returns A single Document with the output of combine_document_func for the page content and the combined metadata of all the input documents. All metadata values are strings, and where there are overlapping keys across documents the values are joined by “, “.
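A hedged async sketch, assuming combine_chain is a configured StuffDocumentsChain (a placeholder) whose acombine_docs returns a (text, extra_outputs) tuple:

import asyncio
from langchain.chains.combine_documents.reduce import acollapse_docs
from langchain.schema import Document

async def collapse(docs):
    async def _combine(batch, **kwargs):
        # Adapt the chain's (text, extra_outputs) return to a plain string.
        text, _ = await combine_chain.acombine_docs(batch, **kwargs)
        return text

    merged = await acollapse_docs(docs, _combine)
    return merged  # one Document; overlapping metadata values joined by ", "

# asyncio.run(collapse([Document(page_content="..."), Document(page_content="...")]))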
langchain.chains.moderation.OpenAIModerationChain¶ class langchain.chains.moderation.OpenAIModerationChain[source]¶ Bases: Chain Pass input through a moderation endpoint. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.chains import OpenAIModerationChain moderation = OpenAIModerationChain() Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see Callback docs for full details. param error: bool = False¶ Whether or not to error if bad content was found. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param model_name: Optional[str] = None¶ Moderation model name to use. param openai_api_key: Optional[str] = None¶ param openai_organization: Optional[str] = None¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the global verbose value, accessible via langchain.globals.get_verbose(). __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
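A short usage sketch; assumes OPENAI_API_KEY is set in the environment. With error=False (the default), flagged text is reported in the chain's output rather than raising:

from langchain.chains import OpenAIModerationChain

moderation = OpenAIModerationChain()
print(moderation.run("This is a perfectly harmless sentence."))
# -> the input text, unchanged, when nothing is flagged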