by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
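For instance (a minimal sketch; the template and variable names are illustrative), partial() pre-fills some variables so later calls only supply the rest:
.. code-block:: python
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")
# Pre-fill one variable; the partial template now only expects `topic`.
partial_prompt = prompt.partial(adjective="funny")
print(partial_prompt.format(topic="cats"))
# -> Tell me a funny joke about cats.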
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
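As a hedged sketch (the lambda below is a toy stand-in for a real model), pipe() is equivalent to composing with the | operator:
.. code-block:: python
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda
prompt = PromptTemplate.from_template("Summarize: {text}")
# A toy "model" that upper-cases the formatted prompt string.
fake_model = RunnableLambda(lambda value: value.to_string().upper())
chain = prompt.pipe(fake_model)  # same as `prompt | fake_model`
print(chain.invoke({"text": "hello"}))
# -> SUMMARIZE: HELLO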
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
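A minimal sketch with toy runnables (stand-ins for real models), showing the fallback taking over when the primary raises:
.. code-block:: python
from langchain_core.runnables import RunnableLambda
def always_fails(text: str) -> str:
    raise ValueError("primary failed")
primary = RunnableLambda(always_fails)
backup = RunnableLambda(lambda text: f"fallback handled: {text}")
chain = primary.with_fallbacks([backup])
print(chain.invoke("hi"))
# -> fallback handled: hi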
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
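A brief sketch (printing in place of real tracing) of attaching listeners:
.. code-block:: python
from langchain_core.runnables import RunnableLambda
chain = RunnableLambda(lambda x: x * 2).with_listeners(
    on_start=lambda run: print("start:", run.id),
    on_end=lambda run: print("end:", run.outputs),
)
chain.invoke(3)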
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
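For example (a sketch with a deliberately flaky function), the call below fails twice, is retried, and succeeds on the third attempt:
.. code-block:: python
from langchain_core.runnables import RunnableLambda
attempts = {"count": 0}
def flaky(x: int) -> int:
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ValueError("transient error")
    return x + 1
chain = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(ValueError,),
    stop_after_attempt=3,
)
print(chain.invoke(1))  # -> 2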
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using StringPromptTemplate¶
Plug-and-Plai
Wikibase Agent
SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
Custom Agent with PlugIn Retrieval
Custom agent with tool retrieval
Custom prompt template
Connecting to a Feature Store
langchain_core.prompts.few_shot.FewShotPromptTemplate¶
class langchain_core.prompts.few_shot.FewShotPromptTemplate[source]¶
Bases: _FewShotPromptTemplateMixin, StringPromptTemplate
Prompt template that contains few shot examples.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_prompt: PromptTemplate [Required]¶
PromptTemplate used to format an individual example.
param example_selector: Any = None¶
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
param example_separator: str = '\n\n'¶
String separator used to join the prefix, the examples, and suffix.
param examples: Optional[List[dict]] = None¶
Examples to format into the prompt.
Either this or example_selector should be provided.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
param prefix: str = ''¶
A prompt template string to put before the examples.
param suffix: str [Required]¶
A prompt template string to put after the examples.
param template_format: Union[Literal['f-string'], Literal['jinja2']] = 'f-string'¶
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
param validate_template: bool = False¶
Whether or not to try validating the template.
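Putting the fields above together, a minimal construction sketch (the example contents are illustrative):
.. code-block:: python
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate.from_template("Q: {question}\nA: {answer}")
prompt = FewShotPromptTemplate(
    examples=[
        {"question": "2 + 2?", "answer": "4"},
        {"question": "3 + 3?", "answer": "6"},
    ],
    example_prompt=example_prompt,
    prefix="Answer the question.",
    suffix="Q: {input}\nA:",
    input_variables=["input"],
)
print(prompt.format(input="4 + 4?"))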
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
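A short sketch of consuming the log stream, using a prompt template as the runnable (any Runnable works the same way):
.. code-block:: python
import asyncio
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate.from_template("Hello {name}")
async def main() -> None:
    # With diff=True (the default) each item is a RunLogPatch;
    # applying the patches in order reconstructs the run state.
    async for patch in prompt.astream_log({"name": "world"}):
        print(patch)
asyncio.run(main())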
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
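For instance (a minimal sketch), batching a prompt template formats each input in parallel and preserves input order:
.. code-block:: python
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate.from_template("Translate to French: {text}")
values = prompt.batch([{"text": "hello"}, {"text": "goodbye"}])
print([v.to_string() for v in values])
# -> ['Translate to French: hello', 'Translate to French: goodbye']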
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
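Neither signature carries a docstring here; as a hedged sketch, configurable_alternatives lets a config value swap in a different prompt at invocation time:
.. code-block:: python
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import ConfigurableField
joke = PromptTemplate.from_template("Tell me a joke about {topic}")
poem = PromptTemplate.from_template("Write a poem about {topic}")
prompt = joke.configurable_alternatives(
    ConfigurableField(id="prompt_style"),
    default_key="joke",
    poem=poem,
)
value = prompt.with_config(
    configurable={"prompt_style": "poem"}
).invoke({"topic": "cats"})
print(value.to_string())  # -> Write a poem about cats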
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
**kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue¶
Create a PromptValue from the prompt inputs.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
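For a prompt template, invoke() formats the input dict into a PromptValue (a minimal sketch):
.. code-block:: python
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate.from_template("Hello {name}")
value = prompt.invoke({"name": "world"})  # returns a PromptValue
print(value.to_string())
# -> Hello world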
classmethod is_lc_serializable() → bool[source]¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
save(file_path: Union[Path, str]) → None[source]¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using FewShotPromptTemplate¶
Select by maximal marginal relevance (MMR)
Select by n-gram overlap
langchain_core.prompts.few_shot_with_templates.FewShotPromptWithTemplates¶
class langchain_core.prompts.few_shot_with_templates.FewShotPromptWithTemplates[source]¶
Bases: StringPromptTemplate
Prompt template that contains few shot examples.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_prompt: langchain_core.prompts.prompt.PromptTemplate [Required]¶
PromptTemplate used to format an individual example.
param example_selector: Any = None¶
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
param example_separator: str = '\n\n'¶
String separator used to join the prefix, the examples, and suffix.
param examples: Optional[List[dict]] = None¶
Examples to format into the prompt.
Either this or example_selector should be provided.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
param prefix: Optional[langchain_core.prompts.string.StringPromptTemplate] = None¶
A PromptTemplate to put before the examples.
param suffix: langchain_core.prompts.string.StringPromptTemplate [Required]¶
A PromptTemplate to put after the examples.
param template_format: str = 'f-string'¶
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
param validate_template: bool = False¶
Whether or not to try validating the template.
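Unlike FewShotPromptTemplate, the prefix and suffix here are themselves prompt templates and may declare their own variables. A minimal sketch (contents illustrative):
.. code-block:: python
from langchain_core.prompts import PromptTemplate
from langchain_core.prompts.few_shot_with_templates import FewShotPromptWithTemplates
example_prompt = PromptTemplate.from_template("Q: {question}\nA: {answer}")
prompt = FewShotPromptWithTemplates(
    examples=[{"question": "2 + 2?", "answer": "4"}],
    example_prompt=example_prompt,
    prefix=PromptTemplate.from_template("You are helping {user}."),
    suffix=PromptTemplate.from_template("Q: {input}\nA:"),
    input_variables=["user", "input"],
)
print(prompt.format(user="Ada", input="3 + 3?"))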
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue¶
Create a PromptValue from the prompt inputs.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
save(file_path: Union[Path, str]) → None[source]¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_core.prompts.base.format_document¶
langchain_core.prompts.base.format_document(doc: Document, prompt: BasePromptTemplate) → str[source]¶
Format a document into a string based on a prompt template.
First, this pulls information from the document from two sources:
page_content: This takes the information from document.page_content
and assigns it to a variable named page_content.
metadata: This takes information from document.metadata and assigns
it to variables of the same name.
Those variables are then passed into the prompt to produce a formatted string.
Parameters
doc – Document, the page_content and metadata will be used to create
the final string.
prompt – BasePromptTemplate, will be used to format the page_content
and metadata into the final string.
Returns
The document formatted as a string.
Example
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate
doc = Document(page_content="This is a joke", metadata={"page": "1"})
prompt = PromptTemplate.from_template("Page {page}: {page_content}")
format_document(doc, prompt)
>>> "Page 1: This is a joke"
Examples using format_document¶
First we add a step to load memory
langchain_core.prompts.pipeline.PipelinePromptTemplate¶
class langchain_core.prompts.pipeline.PipelinePromptTemplate[source]¶
Bases: BasePromptTemplate
A prompt template for composing multiple prompt templates together.
This can be useful when you want to reuse parts of prompts.
A PipelinePrompt consists of two main parts:
final_prompt: This is the final prompt that is returned
pipeline_prompts: This is a list of tuples, consisting of a string (name) and a Prompt Template.
Each PromptTemplate will be formatted and then passed
to future prompt templates as a variable with
the same name as name.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param final_prompt: langchain_core.prompts.base.BasePromptTemplate [Required]¶
The final prompt that is returned.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
param pipeline_prompts: List[Tuple[str, langchain_core.prompts.base.BasePromptTemplate]] [Required]¶
A list of tuples, consisting of a string (name) and a Prompt Template.
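Putting this together, a minimal sketch (template contents are illustrative): each named pipeline prompt is formatted first, then passed into final_prompt as a variable of that name.
.. code-block:: python
from langchain_core.prompts import PromptTemplate
from langchain_core.prompts.pipeline import PipelinePromptTemplate
final_prompt = PromptTemplate.from_template("{introduction}\n\n{start}")
introduction = PromptTemplate.from_template("You are impersonating {person}.")
start = PromptTemplate.from_template("Q: {question}\nA:")
pipeline_prompt = PipelinePromptTemplate(
    final_prompt=final_prompt,
    pipeline_prompts=[("introduction", introduction), ("start", start)],
)
# input_variables is computed from the sub-prompts: person and question.
print(pipeline_prompt.format(person="Ada Lovelace", question="Who are you?"))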
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue[source]¶
Create Prompt Value.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
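A minimal sketch with stand-in runnables (the always-failing function is contrived to trigger the fallback):
.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    def primary(x: str) -> str:
        raise ValueError("primary failed")

    safe = RunnableLambda(primary).with_fallbacks(
        [RunnableLambda(lambda x: f"fallback handled: {x}")]
    )
    safe.invoke("hi")  # -> "fallback handled: hi"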
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
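A sketch of attaching listeners; the run.id and run.outputs fields follow the tracing Run schema described above, though the exact field contents can vary by version:
.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    chain = RunnableLambda(lambda x: x * 2).with_listeners(
        on_start=lambda run: print("starting run", run.id),
        on_end=lambda run: print("outputs:", run.outputs),
    )
    chain.invoke(21)  # prints from both listeners, returns 42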
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
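For example (a sketch; the flaky function fails twice before succeeding):
.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    attempts = []

    def flaky(x: int) -> int:
        attempts.append(x)
        if len(attempts) < 3:
            raise RuntimeError("transient error")
        return x + 1

    resilient = RunnableLambda(flaky).with_retry(
        retry_if_exception_type=(RuntimeError,),
        stop_after_attempt=3,
    )
    resilient.invoke(1)  # -> 2, on the third attempt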
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_core.prompts.loading.load_prompt_from_config¶
langchain_core.prompts.loading.load_prompt_from_config(config: dict) → BasePromptTemplate[source]¶
Load prompt from Config Dict.
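A small sketch, assuming the standard prompt serialization keys (_type, input_variables, template):
.. code-block:: python

    from langchain_core.prompts.loading import load_prompt_from_config

    config = {
        "_type": "prompt",
        "input_variables": ["topic"],
        "template": "Tell me a joke about {topic}",
    }
    prompt = load_prompt_from_config(config)
    prompt.format(topic="bears")  # -> "Tell me a joke about bears"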
langchain_core.prompts.base.BasePromptTemplate¶
class langchain_core.prompts.base.BasePromptTemplate[source]¶
Bases: RunnableSerializable[Dict, PromptValue], ABC
Base class for all prompt templates, returning a prompt.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[langchain_core.output_parsers.base.BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
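For example, formatting several inputs at once (a cheap, IO-free stand-in for a real batched workload):
.. code-block:: python

    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Hello, {name}!")
    values = prompt.batch([{"name": "Ada"}, {"name": "Grace"}])
    [v.to_string() for v in values]  # -> ["Hello, Ada!", "Hello, Grace!"]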
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict[source]¶
Return dictionary representation of prompt.
abstract format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_prompt(**kwargs: Any) → PromptValue[source]¶
Create Prompt Value.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel][source]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
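A quick sketch of inspecting the schema; the exact JSON-schema layout depends on the pydantic version in use:
.. code-block:: python

    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Hello {name}")
    schema = prompt.get_input_schema()
    schema.schema()  # JSON schema with a string "name" property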
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
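For a prompt template this maps a dict of variables to a PromptValue, e.g.:
.. code-block:: python

    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Summarize: {text}")
    value = prompt.invoke({"text": "LangChain composes runnables."})
    value.to_string()  # -> "Summarize: LangChain composes runnables."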
classmethod is_lc_serializable() → bool[source]¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate[source]¶
Return a partial of the prompt template.
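A sketch of partial() with a callable value (the callable is re-evaluated each time the prompt is formatted):
.. code-block:: python

    from datetime import date
    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("{greeting}! Today is {today}.")
    partial_prompt = prompt.partial(today=lambda: date.today().isoformat())
    partial_prompt.format(greeting="Hello")  # only `greeting` is still required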
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
save(file_path: Union[Path, str]) → None[source]¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using BasePromptTemplate¶
Custom chain
langchain_core.prompts.chat.BaseChatPromptTemplate¶
class langchain_core.prompts.chat.BaseChatPromptTemplate[source]¶
Bases: BasePromptTemplate, ABC
Base class for chat prompt templates.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the chat template into a string.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
formatted string
abstract format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format kwargs into a list of messages.
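ChatPromptTemplate is the most common concrete subclass; a sketch of format_messages():
.. code-block:: python

    from langchain_core.prompts import ChatPromptTemplate

    chat_prompt = ChatPromptTemplate.from_messages(
        [("system", "You are a {profession}."), ("human", "{question}")]
    )
    chat_prompt.format_messages(profession="historian", question="Who built Rome?")
    # -> [SystemMessage(...), HumanMessage(...)]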
format_prompt(**kwargs: Any) → PromptValue[source]¶
Format prompt. Should return a PromptValue.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
PromptValue.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_core.prompts.string.check_valid_template¶
langchain_core.prompts.string.check_valid_template(template: str, template_format: str, input_variables: List[str]) → None[source]¶
Check that template string is valid.
Parameters
template – The template string.
template_format – The template format. Should be one of “f-string” or “jinja2”.
input_variables – The input variables.
Raises
ValueError – If the template format is not supported.
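For example:
.. code-block:: python

    from langchain_core.prompts.string import check_valid_template

    # Returns None: the declared variables match the f-string placeholders.
    check_valid_template(
        "Tell me a {adjective} joke about {content}",
        "f-string",
        ["adjective", "content"],
    )

    # Raises ValueError: "jinja3" is not a supported template format.
    check_valid_template("{{ x }}", "jinja3", ["x"])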
langchain_core.prompts.chat.SystemMessagePromptTemplate¶
class langchain_core.prompts.chat.SystemMessagePromptTemplate[source]¶
Bases: BaseStringMessagePromptTemplate
System message prompt template.
This is a message that is not sent to the user.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain_core.prompts.string.StringPromptTemplate [Required]¶
String prompt template.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', partial_variables: Optional[Dict[str, Any]] = None, **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
partial_variables –
A dictionary of variables that can be used to partially fill in the template. For example, if the template is
"{variable1} {variable2}", and partial_variables is
{"variable1": "foo"}, then the final prompt will be
"foo {variable2}".
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
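For instance (a small sketch; format() returns a SystemMessage):
.. code-block:: python

    from langchain_core.prompts import SystemMessagePromptTemplate

    tmpl = SystemMessagePromptTemplate.from_template(
        "You are a {role} assistant. Answer in {language}.",
        partial_variables={"language": "English"},
    )
    tmpl.format(role="helpful")
    # -> SystemMessage(content="You are a helpful assistant. Answer in English.")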
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using SystemMessagePromptTemplate¶
Anthropic
🚅 LiteLLM
Konko
OpenAI
Google Cloud Platform Vertex AI PaLM
JinaChat
Figma
Set env var OPENAI_API_KEY or load from a .env file:
CAMEL Role-Playing Autonomous Cooperative Agents
Code writing
langchain_core.prompts.chat.AIMessagePromptTemplate¶
class langchain_core.prompts.chat.AIMessagePromptTemplate[source]¶
Bases: BaseStringMessagePromptTemplate
AI message prompt template. This is a message sent from the AI.
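A minimal usage sketch, e.g. for templating prior AI turns in a few-shot chat prompt:
.. code-block:: python

    from langchain_core.prompts import AIMessagePromptTemplate

    ai_tmpl = AIMessagePromptTemplate.from_template("The answer is {answer}.")
    ai_tmpl.format(answer="42")  # -> AIMessage(content="The answer is 42.")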
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain_core.prompts.string.StringPromptTemplate [Required]¶
String prompt template.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', partial_variables: Optional[Dict[str, Any]] = None, **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
partial_variables –
A dictionary of variables that can be used to partially fill in the template. For example, if the template is
"{variable1} {variable2}", and partial_variables is
{"variable1": "foo"}, then the final prompt will be
"foo {variable2}".
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using AIMessagePromptTemplate¶
Anthropic
🚅 LiteLLM
Konko
OpenAI
JinaChat
Figma
langchain_core.prompts.chat.ChatMessagePromptTemplate¶
class langchain_core.prompts.chat.ChatMessagePromptTemplate[source]¶
Bases: BaseStringMessagePromptTemplate
Chat message prompt template.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain_core.prompts.string.StringPromptTemplate [Required]¶
String prompt template.
param role: str [Required]¶
Role of the message.
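The role is attached to the formatted ChatMessage; a small sketch:
.. code-block:: python

    from langchain_core.prompts import ChatMessagePromptTemplate

    tmpl = ChatMessagePromptTemplate.from_template(
        "May the {subject} be with you", role="Jedi"
    )
    tmpl.format(subject="force")
    # -> ChatMessage(content="May the force be with you", role="Jedi")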
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', partial_variables: Optional[Dict[str, Any]] = None, **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
partial_variables –
A dictionary of variables that can be used to partially fill in the template. For example, if the template is
"{variable1} {variable2}", and partial_variables is
{"variable1": "foo"}, then the final prompt will be
"foo {variable2}".
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using ChatMessagePromptTemplate¶
Types of `MessagePromptTemplate`
langchain_experimental.open_clip.open_clip.OpenCLIPEmbeddings¶
class langchain_experimental.open_clip.open_clip.OpenCLIPEmbeddings[source]¶
Bases: BaseModel, Embeddings
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param checkpoint: str = 'laion2b_s32b_b79k'¶
param model: Any = None¶
param model_name: str = 'ViT-H-14'¶
param preprocess: Any = None¶
param tokenizer: Any = None¶
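Examples
A minimal usage sketch. It assumes the open_clip_torch, torch, and pillow packages are installed, that the default ViT-H-14 / laion2b_s32b_b79k weights can be downloaded, and that the image path is hypothetical:
from langchain_experimental.open_clip import OpenCLIPEmbeddings

clip_embd = OpenCLIPEmbeddings()  # or pass model_name=... and checkpoint=...

# Text and images are embedded into the same vector space,
# so the results can be compared with e.g. cosine similarity.
text_vectors = clip_embd.embed_documents(["a photo of a cat", "a photo of a dog"])
image_vectors = clip_embd.embed_image(["path/to/cat.jpg"])
query_vector = clip_embd.embed_query("kitten")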
async aembed_documents(texts: List[str]) → List[List[float]]¶
Asynchronously embed search docs.
async aembed_query(text: str) → List[float]¶
Asynchronously embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | https://api.python.langchain.com/en/latest/open_clip/langchain_experimental.open_clip.open_clip.OpenCLIPEmbeddings.html |
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
embed_image(uris: List[str]) → List[List[float]][source]¶
Embed images, given as a list of local image file paths or URIs.
embed_query(text: str) → List[float][source]¶
Embed query text.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | https://api.python.langchain.com/en/latest/open_clip/langchain_experimental.open_clip.open_clip.OpenCLIPEmbeddings.html |
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | https://api.python.langchain.com/en/latest/open_clip/langchain_experimental.open_clip.open_clip.OpenCLIPEmbeddings.html |
langchain_community.storage.exceptions.InvalidKeyException¶
class langchain_community.storage.exceptions.InvalidKeyException[source]¶
Raised when a key is invalid; e.g., uses incorrect characters. | https://api.python.langchain.com/en/latest/storage/langchain_community.storage.exceptions.InvalidKeyException.html |
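A minimal sketch of where this exception surfaces; it assumes a store that validates its keys, such as LocalFileStore, which rejects keys containing characters outside its allowed set (the root path below is hypothetical):
from langchain.storage import LocalFileStore
from langchain_community.storage.exceptions import InvalidKeyException

store = LocalFileStore("/tmp/file_store")
try:
    store.mset([("bad key!", b"value")])  # space and "!" are not allowed
except InvalidKeyException as exc:
    print(f"Rejected invalid key: {exc}")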
langchain.storage.file_system.LocalFileStore¶
class langchain.storage.file_system.LocalFileStore(root_path: Union[str, Path])[source]¶
BaseStore interface that works on the local file system.
Examples
Create a LocalFileStore instance and perform operations on it:
from langchain.storage import LocalFileStore
# Instantiate the LocalFileStore with the root path
file_store = LocalFileStore("/path/to/root")
# Set values for keys
file_store.mset([("key1", b"value1"), ("key2", b"value2")])
# Get values for keys
values = file_store.mget(["key1", "key2"]) # Returns [b"value1", b"value2"]
# Delete keys
file_store.mdelete(["key1"])
# Iterate over keys
for key in file_store.yield_keys():
    print(key)
Implement the BaseStore interface for the local file system.
Parameters
root_path (Union[str, Path]) – The root path of the file store. All keys are
interpreted as paths relative to this root.
Methods
__init__(root_path)
Implement the BaseStore interface for the local file system.
mdelete(keys)
Delete the given keys and their associated values.
mget(keys)
Get the values associated with the given keys.
mset(key_value_pairs)
Set the values for the given keys.
yield_keys([prefix])
Get an iterator over keys that match the given prefix.
__init__(root_path: Union[str, Path]) → None[source]¶
Implement the BaseStore interface for the local file system.
Parameters
root_path (Union[str, Path]) – The root path of the file store. All keys are
interpreted as paths relative to this root.
mdelete(keys: Sequence[str]) → None[source]¶
Delete the given keys and their associated values.
Parameters | https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html |
keys (Sequence[str]) – A sequence of keys to delete.
Returns
None
mget(keys: Sequence[str]) → List[Optional[bytes]][source]¶
Get the values associated with the given keys.
Parameters
keys – A sequence of keys.
Returns
A sequence of optional values associated with the keys.
If a key is not found, the corresponding value will be None.
mset(key_value_pairs: Sequence[Tuple[str, bytes]]) → None[source]¶
Set the values for the given keys.
Parameters
key_value_pairs – A sequence of key-value pairs.
Returns
None
yield_keys(prefix: Optional[str] = None) → Iterator[str][source]¶
Get an iterator over keys that match the given prefix.
Parameters
prefix (Optional[str]) – The prefix to match.
Returns
An iterator over keys that match the given prefix.
Return type
Iterator[str]
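Since keys are interpreted as paths relative to the root, slash-separated keys map to subdirectories, and yield_keys can filter on such a prefix. A minimal sketch (paths hypothetical):
from langchain.storage import LocalFileStore

store = LocalFileStore("/tmp/file_store")
store.mset([("docs/a", b"1"), ("docs/b", b"2"), ("misc/c", b"3")])

# Only keys under the "docs" prefix are yielded.
print(sorted(store.yield_keys(prefix="docs")))  # ['docs/a', 'docs/b']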
Examples using LocalFileStore¶
Caching | https://api.python.langchain.com/en/latest/storage/langchain.storage.file_system.LocalFileStore.html |
langchain_community.storage.redis.RedisStore¶
class langchain_community.storage.redis.RedisStore(*, client: Any = None, redis_url: Optional[str] = None, client_kwargs: Optional[dict] = None, ttl: Optional[int] = None, namespace: Optional[str] = None)[source]¶
BaseStore implementation using Redis as the underlying store.
Examples
Create a RedisStore instance and perform operations on it:
# Instantiate the RedisStore with a Redis connection
from langchain_community.storage import RedisStore
from langchain_community.utilities.redis import get_client
client = get_client('redis://localhost:6379')
redis_store = RedisStore(client)
# Set values for keys
redis_store.mset([("key1", b"value1"), ("key2", b"value2")])
# Get values for keys
values = redis_store.mget(["key1", "key2"])
# [b"value1", b"value2"]
# Delete keys
redis_store.mdelete(["key1"])
# Iterate over keys
for key in redis_store.yield_keys():
    print(key)
Initialize the RedisStore with a Redis connection.
Must provide either a Redis client or a redis_url with optional client_kwargs.
Parameters
client – A Redis connection instance
redis_url – URL of the Redis server, e.g. "redis://localhost:6379"
client_kwargs – Keyword arguments to pass to the Redis client
ttl – time to expire keys in seconds, if provided;
if None, keys will never expire
namespace – if provided, all keys will be prefixed with this namespace
Methods
__init__(*[, client, redis_url, ...])
Initialize the RedisStore with a Redis connection.
mdelete(keys)
Delete the given keys.
mget(keys)
Get the values associated with the given keys.
mset(key_value_pairs)
Set the given key-value pairs.
yield_keys(*[, prefix]) | https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html |
Yield keys in the store.
__init__(*, client: Any = None, redis_url: Optional[str] = None, client_kwargs: Optional[dict] = None, ttl: Optional[int] = None, namespace: Optional[str] = None) → None[source]¶
Initialize the RedisStore with a Redis connection.
Must provide either a Redis client or a redis_url with optional client_kwargs.
Parameters
client – A Redis connection instance
redis_url – URL of the Redis server, e.g. "redis://localhost:6379"
client_kwargs – Keyword arguments to pass to the Redis client
ttl – time to expire keys in seconds, if provided;
if None, keys will never expire
namespace – if provided, all keys will be prefixed with this namespace
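As an alternative to passing a client, the store can be constructed directly from a URL. A minimal sketch using ttl and namespace (the URL and values are hypothetical):
from langchain_community.storage import RedisStore

store = RedisStore(
    redis_url="redis://localhost:6379",
    ttl=3600,           # keys expire after one hour
    namespace="cache",  # all keys are stored under this prefix
)
store.mset([("greeting", b"hello")])
print(store.mget(["greeting"]))  # [b"hello"]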
mdelete(keys: Sequence[str]) → None[source]¶
Delete the given keys.
mget(keys: Sequence[str]) → List[Optional[bytes]][source]¶
Get the values associated with the given keys.
mset(key_value_pairs: Sequence[Tuple[str, bytes]]) → None[source]¶
Set the given key-value pairs.
yield_keys(*, prefix: Optional[str] = None) → Iterator[str][source]¶
Yield keys in the store.
Examples using RedisStore¶
Caching | https://api.python.langchain.com/en/latest/storage/langchain_community.storage.redis.RedisStore.html |
langchain_community.storage.upstash_redis.UpstashRedisByteStore¶
class langchain_community.storage.upstash_redis.UpstashRedisByteStore(*, client: Any = None, url: Optional[str] = None, token: Optional[str] = None, ttl: Optional[int] = None, namespace: Optional[str] = None)[source]¶
BaseStore implementation using Upstash Redis
as the underlying store to store raw bytes.
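Examples
Create an UpstashRedisByteStore instance and perform operations on it. A minimal sketch; the URL and token are placeholders, and the upstash-redis package is assumed to be installed:
from langchain_community.storage import UpstashRedisByteStore

store = UpstashRedisByteStore(
    url="https://<your-db>.upstash.io",  # UPSTASH_REDIS_REST_URL
    token="<your-token>",                # UPSTASH_REDIS_REST_TOKEN
    ttl=600,                             # optional expiry in seconds
)
store.mset([("key1", b"value1")])
print(store.mget(["key1"]))  # [b"value1"]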
Methods
__init__(*[, client, url, token, ttl, namespace])
mdelete(keys)
Delete the given keys.
mget(keys)
Get the values associated with the given keys.
mset(key_value_pairs)
Set the given key-value pairs.
yield_keys(*[, prefix])
Yield keys in the store.
__init__(*, client: Any = None, url: Optional[str] = None, token: Optional[str] = None, ttl: Optional[int] = None, namespace: Optional[str] = None) → None[source]¶
mdelete(keys: Sequence[str]) → None[source]¶
Delete the given keys.
mget(keys: Sequence[str]) → List[Optional[bytes]][source]¶
Get the values associated with the given keys.
mset(key_value_pairs: Sequence[Tuple[str, bytes]]) → None[source]¶
Set the given key-value pairs.
yield_keys(*, prefix: Optional[str] = None) → Iterator[str][source]¶
Yield keys in the store. | https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html |
langchain_community.storage.upstash_redis.UpstashRedisStore¶
class langchain_community.storage.upstash_redis.UpstashRedisStore(*, client: Any = None, url: Optional[str] = None, token: Optional[str] = None, ttl: Optional[int] = None, namespace: Optional[str] = None)[source]¶
[Deprecated] BaseStore implementation using Upstash Redis
as the underlying store to store strings.
Deprecated in favor of the more generic UpstashRedisByteStore.
Notes
Deprecated since version 0.0.335: Use UpstashRedisByteStore instead.
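A minimal migration sketch (credentials are placeholders): UpstashRedisByteStore exposes the same interface but stores bytes rather than str values:
from langchain_community.storage import UpstashRedisByteStore

store = UpstashRedisByteStore(
    url="<UPSTASH_REDIS_REST_URL>",
    token="<UPSTASH_REDIS_REST_TOKEN>",
)
store.mset([("key1", b"value1")])  # note: bytes values, not str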
Initialize the UpstashRedisStore with HTTP API.
Must provide either an Upstash Redis client or a url.
Parameters
client – An Upstash Redis instance
url – the Upstash Redis REST URL (UPSTASH_REDIS_REST_URL)
token – the Upstash Redis REST token (UPSTASH_REDIS_REST_TOKEN)
ttl – time to expire keys in seconds, if provided;
if None, keys will never expire
namespace – if provided, all keys will be prefixed with this namespace
Methods
__init__(*[, client, url, token, ttl, namespace])
Initialize the UpstashRedisStore with HTTP API.
mdelete(keys)
Delete the given keys.
mget(keys)
Get the values associated with the given keys.
mset(key_value_pairs)
Set the given key-value pairs.
yield_keys(*[, prefix])
Yield keys in the store.
__init__(*, client: Any = None, url: Optional[str] = None, token: Optional[str] = None, ttl: Optional[int] = None, namespace: Optional[str] = None) → None¶
Initialize the UpstashRedisStore with HTTP API. | https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisStore.html |
Must provide either an Upstash Redis client or a url.
Parameters
client – An Upstash Redis instance
url – the Upstash Redis REST URL (UPSTASH_REDIS_REST_URL)
token – the Upstash Redis REST token (UPSTASH_REDIS_REST_TOKEN)
ttl – time to expire keys in seconds, if provided;
if None, keys will never expire
namespace – if provided, all keys will be prefixed with this namespace
mdelete(keys: Sequence[str]) → None¶
Delete the given keys.
mget(keys: Sequence[str]) → List[Optional[str]]¶
Get the values associated with the given keys.
mset(key_value_pairs: Sequence[Tuple[str, str]]) → None¶
Set the given key-value pairs.
yield_keys(*, prefix: Optional[str] = None) → Iterator[str]¶
Yield keys in the store. | https://api.python.langchain.com/en/latest/storage/langchain_community.storage.upstash_redis.UpstashRedisStore.html |
langchain.storage.encoder_backed.EncoderBackedStore¶
class langchain.storage.encoder_backed.EncoderBackedStore(store: BaseStore[str, Any], key_encoder: Callable[[K], str], value_serializer: Callable[[V], bytes], value_deserializer: Callable[[Any], V])[source]¶
Wraps a store with key and value encoders/decoders.
Example that uses JSON for encoding/decoding:
import json

def key_encoder(key: int) -> str:
    return json.dumps(key)

def value_serializer(value: float) -> bytes:
    # The store's declared signature serializes values to bytes.
    return json.dumps(value).encode("utf-8")

def value_deserializer(serialized_value: bytes) -> float:
    return json.loads(serialized_value)

# Create an instance of the abstract store (MyCustomStore is a placeholder
# for any concrete BaseStore implementation)
abstract_store = MyCustomStore()

# Create an instance of the encoder-backed store
store = EncoderBackedStore(
    store=abstract_store,
    key_encoder=key_encoder,
    value_serializer=value_serializer,
    value_deserializer=value_deserializer,
)

# Use the encoder-backed store methods
store.mset([(1, 3.14), (2, 2.718)])
values = store.mget([1, 2])  # Retrieves [3.14, 2.718]
store.mdelete([1, 2])  # Deletes the keys 1 and 2
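For a runnable variant of the example above, the wrapped store can be any concrete BaseStore; a minimal sketch using InMemoryStore:
import json

from langchain.storage import EncoderBackedStore, InMemoryStore

store = EncoderBackedStore(
    store=InMemoryStore(),
    key_encoder=lambda key: json.dumps(key),
    value_serializer=lambda value: json.dumps(value).encode("utf-8"),
    value_deserializer=lambda blob: json.loads(blob),
)
store.mset([(1, 3.14), (2, 2.718)])
print(store.mget([1, 2]))  # [3.14, 2.718]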
Initialize an EncodedStore.
Methods
__init__(store, key_encoder, ...)
Initialize an EncodedStore.
mdelete(keys)
Delete the given keys and their associated values.
mget(keys)
Get the values associated with the given keys.
mset(key_value_pairs)
Set the values for the given keys.
yield_keys(*[, prefix])
Get an iterator over keys that match the given prefix. | https://api.python.langchain.com/en/latest/storage/langchain.storage.encoder_backed.EncoderBackedStore.html |
__init__(store: BaseStore[str, Any], key_encoder: Callable[[K], str], value_serializer: Callable[[V], bytes], value_deserializer: Callable[[Any], V]) → None[source]¶
Initialize an EncodedStore.
mdelete(keys: Sequence[K]) → None[source]¶
Delete the given keys and their associated values.
mget(keys: Sequence[K]) → List[Optional[V]][source]¶
Get the values associated with the given keys.
mset(key_value_pairs: Sequence[Tuple[K, V]]) → None[source]¶
Set the values for the given keys.
yield_keys(*, prefix: Optional[str] = None) → Union[Iterator[K], Iterator[str]][source]¶
Get an iterator over keys that match the given prefix. | https://api.python.langchain.com/en/latest/storage/langchain.storage.encoder_backed.EncoderBackedStore.html |
langchain.storage.in_memory.InMemoryBaseStore¶
class langchain.storage.in_memory.InMemoryBaseStore[source]¶
In-memory implementation of the BaseStore using a dictionary.
store¶
The underlying dictionary that stores
the key-value pairs.
Type
Dict[str, Any]
Examples
from langchain.storage import InMemoryStore
store = InMemoryStore()
store.mset([('key1', 'value1'), ('key2', 'value2')])
store.mget(['key1', 'key2'])
# ['value1', 'value2']
store.mdelete(['key1'])
list(store.yield_keys())
# ['key2']
list(store.yield_keys(prefix='k'))
# ['key2']
Initialize an empty store.
Methods
__init__()
Initialize an empty store.
mdelete(keys)
Delete the given keys and their associated values.
mget(keys)
Get the values associated with the given keys.
mset(key_value_pairs)
Set the values for the given keys.
yield_keys([prefix])
Get an iterator over keys that match the given prefix.
__init__() → None[source]¶
Initialize an empty store.
mdelete(keys: Sequence[str]) → None[source]¶
Delete the given keys and their associated values.
Parameters
keys (Sequence[str]) – A sequence of keys to delete.
mget(keys: Sequence[str]) → List[Optional[V]][source]¶
Get the values associated with the given keys.
Parameters
keys (Sequence[str]) – A sequence of keys.
Returns
A sequence of optional values associated with the keys.
If a key is not found, the corresponding value will be None.
mset(key_value_pairs: Sequence[Tuple[str, V]]) → None[source]¶
Set the values for the given keys.
Parameters | https://api.python.langchain.com/en/latest/storage/langchain.storage.in_memory.InMemoryBaseStore.html |
key_value_pairs (Sequence[Tuple[str, V]]) – A sequence of key-value pairs.
Returns
None
yield_keys(prefix: Optional[str] = None) → Iterator[str][source]¶
Get an iterator over keys that match the given prefix.
Parameters
prefix (str, optional) – The prefix to match. Defaults to None.
Returns
An iterator over keys that match the given prefix.
Return type
Iterator[str] | https://api.python.langchain.com/en/latest/storage/langchain.storage.in_memory.InMemoryBaseStore.html |