# Migrating

## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28/23

In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`. This migration has already started, but we are remaining backwards compatible until 7/28. On that date, we will remove functionality from `langchain`. Read more about the motivation and the progress [here](https://github.com/langchain-ai/langchain/discussions/8043).

### Migrating to `langchain_experimental`

We are moving any experimental components of LangChain, or components with vulnerability issues, into `langchain_experimental`. This guide covers how to migrate.

### Installation

Previously: `pip install -U langchain`

Now (only if you want to access things in experimental): `pip install -U langchain langchain_experimental`

### Things in `langchain.experimental`

Previously: `from langchain.experimental import ...`

Now: `from langchain_experimental import ...`

### PALChain

Previously: `from langchain.chains import PALChain`

Now: `from langchain_experimental.pal_chain import PALChain`

### SQLDatabaseChain

Previously: `from langchain.chains import SQLDatabaseChain`

Now: `from langchain_experimental.sql import SQLDatabaseChain`

Alternatively, if you are just interested in using the query generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb):

`from langchain.chains import create_sql_query_chain`

### `load_prompt` for Python files

Note: this only applies if you want to load Python files as prompts. If you want to load json/yaml files, no change is needed.

Previously: `from langchain.prompts import load_prompt`

Now: `from langchain_experimental.prompts import load_prompt`
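As a quick reference, the post-migration imports from the sections above can be combined as follows. This is a minimal sketch that assumes both packages are installed (`pip install -U langchain langchain_experimental`):

```python
# Post-migration import locations (assumes langchain and langchain_experimental are installed).
from langchain_experimental.pal_chain import PALChain
from langchain_experimental.sql import SQLDatabaseChain
from langchain_experimental.prompts import load_prompt  # only needed for Python-file prompts

# The query-generation-only helper stays in the core `langchain` package.
from langchain.chains import create_sql_query_chain
```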
C:\Users\wesla\CodePilotAI\repositories\langchain\MIGRATE.md
# 🦜️🔗 LangChain

⚡ Build context-aware reasoning applications ⚡

[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)

Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com). [LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications. Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.

## Quick Install

With pip:

```bash
pip install langchain
```

With conda:

```bash
conda install langchain -c conda-forge
```

## 🤔 What is LangChain?

**LangChain** is a framework for developing applications powered by language models. It enables applications that:

- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

This framework consists of several parts.

- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.

The LangChain libraries themselves are made up of several different packages.

- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.

![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/img/langchain_stack.png "LangChain Architecture Overview")

## 🧱 What can you build with LangChain?

**❓ Retrieval augmented generation**

- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)

**💬 Analyzing structured data**

- [Documentation](https://python.langchain.com/docs/use_cases/qa_structured/sql)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain/tree/master/templates/sql-llama2)

**🤖 Chatbots**

- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)

And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.

## 🚀 How does LangChain help?

The main value props of the LangChain libraries are:

1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks

Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.

Components fall into the following **modules**:

**📃 Model I/O:** This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.

**📚 Retrieval:** Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.

**🤖 Agents:** Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
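To make the component and chain concepts concrete, here is a minimal sketch of composing components with the LangChain Expression Language (LCEL). It assumes `langchain-openai` is installed and an `OPENAI_API_KEY` is set in the environment; the model name is only an example:

```python
# Minimal LCEL sketch: prompt -> chat model -> string output parser.
# Assumes `pip install langchain langchain-openai` and OPENAI_API_KEY in the environment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)  # example model name
chain = prompt | llm | StrOutputParser()  # the LCEL pipe composes the components

print(chain.invoke({"text": "LangChain is a framework for developing LLM applications."}))
```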
## 📖 Documentation

Please see [here](https://python.langchain.com) for full documentation, which includes:

- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
- [Use case](https://python.langchain.com/docs/use_cases/qa_structured/sql) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/adapters/openai)
- [LangSmith](https://python.langchain.com/docs/langsmith/), [LangServe](https://python.langchain.com/docs/langserve), and [LangChain Template](https://python.langchain.com/docs/templates/) overviews
- [Reference](https://api.python.langchain.com): full API docs

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation. For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).

## 🌟 Contributors

[![langchain contributors](https://contrib.rocks/image?repo=langchain-ai/langchain&max=2000)](https://github.com/langchain-ai/langchain/graphs/contributors)
C:\Users\wesla\CodePilotAI\repositories\langchain\README.md
# Security Policy

## Reporting a Vulnerability

Please report security vulnerabilities by email to `[email protected]`. This email is an alias to a subset of our maintainers, and will ensure the issue is promptly triaged and acted upon as needed.
C:\Users\wesla\CodePilotAI\repositories\langchain\SECURITY.md
# Dev container

This project includes a [dev container](https://containers.dev/), which lets you use a container as a full-featured dev environment.

You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).

## GitHub Codespaces

[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)

You may use the button above, or follow these steps to open this repo in a Codespace:

1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.

For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

## VS Code Dev Containers

[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)

Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:

```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
```

Then you will have a local cloned repo where you can contribute and then create pull requests.

If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.

Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:

1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).

2. Open a locally cloned copy of the code:

   - Fork and Clone this repository to your local filesystem.
   - Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
   - Select the cloned copy of this folder, wait for the container to start, and try things out!

You can learn more in the [Dev Containers documentation](https://code.visualstudio.com/docs/devcontainers/containers).

## Tips and tricks

* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
C:\Users\wesla\CodePilotAI\repositories\langchain\.devcontainer\README.md
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [email protected]. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
C:\Users\wesla\CodePilotAI\repositories\langchain\.github\CODE_OF_CONDUCT.md
# Contributing to LangChain

Hi there! Thank you for even being interested in contributing to LangChain. As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.

To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).
C:\Users\wesla\CodePilotAI\repositories\langchain\.github\CONTRIBUTING.md
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
  - Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
  - **Description:** a description of the change
  - **Issue:** the issue # it fixes, if applicable
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- [ ] **Add tests and docs**: If you're adding a new integration, please include
  1. a test for the integration, preferably unit tests that do not rely on network access, and
  2. an example notebook showing its use. It lives in the `docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:

- Make sure optional dependencies are imported within a function (see the sketch after this list).
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.
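Below is a hedged sketch of the optional-dependency guideline above: the extra package is imported inside the function that needs it, so the base install never requires it. The package name `somesdk` and the surrounding function are placeholders, not a real integration:

```python
from typing import Any


def analyze_sentiment(text: str) -> Any:
    """Example-only helper showing the lazy optional-import pattern."""
    try:
        # Imported here, not at module top level, so `somesdk` stays optional.
        import somesdk  # placeholder package name
    except ImportError as exc:
        raise ImportError(
            "Could not import somesdk. Please install it with `pip install somesdk`."
        ) from exc
    return somesdk.sentiment(text)  # placeholder API call
```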
C:\Users\wesla\CodePilotAI\repositories\langchain\.github\PULL_REQUEST_TEMPLATE.md
# LangChain cookbook

Example code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than contained in the [main documentation](https://python.langchain.com).

Notebook | Description
:- | :-
[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a SQL database using an open source llm (llama2), specifically demonstrated on an SQLite database containing rosters.
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
[analyze_document.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/analyze_document.ipynb) | Analyze a single long document.
[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement autogpt, a language model, with langchain primitives such as llms, prompttemplates, vectorstores, embeddings, and tools.
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.
[baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement babyagi, an ai agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vectorstores/model providers.
[baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the babyagi notebook with an agent that has access to tools, aiming to obtain more reliable information.
[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
[causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (cpal) chain, which improves upon the program-aided language (pal) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools
[forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
[generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
[gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
[hugginggpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/hugginggpt.ipynb) | Implement hugginggpt, a system that connects language models like chatgpt with the machine learning community via hugging face.
[hypothetical_document_embeddin...](https://github.com/langchain-ai/langchain/tree/master/cookbook/hypothetical_document_embeddings.ipynb) | Improve document indexing with hypothetical document embeddings (hyde), an embedding technique that generates and embeds hypothetical answers to queries.
[learned_prompt_optimization.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/learned_prompt_optimization.ipynb) | Automatically enhance language model prompts by injecting specific terms using reinforcement learning, which can be used to personalize responses based on user preferences.
[llm_bash.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_bash.ipynb) | Perform simple filesystem commands using language learning models (llms) and a bash process.
[llm_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_checker.ipynb) | Create a self-checking chain using the llmcheckerchain function.
[llm_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_math.ipynb) | Solve complex word math problems using language models and python repls.
[llm_summarization_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_summarization_checker.ipynb) | Check the accuracy of text summaries, with the option to run the checker multiple times for improved results.
[llm_symbolic_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_symbolic_math.ipynb) | Solve algebraic equations with the help of llms (language learning models) and sympy, a python library for symbolic mathematics.
[meta_prompt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/meta_prompt.ipynb) | Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly.
[multi_modal_output_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_output_agent.ipynb) | Generate multi-modal outputs, specifically images and text.
[multi_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_player_dnd.ipynb) | Simulate multi-player dungeons & dragons games, with a custom function determining the speaking schedule of the agents.
[multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
[multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
[myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.
[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating openai functions into a retrieval pipeline.
[openai_v1_cookbook.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_v1_cookbook.ipynb) | Explore new functionality released alongside the V1 release of the OpenAI Python library.
[petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.
[plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.
[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented-generation (rag) on a PostgreSQL database using pgvector.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
[smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement a smartllmchain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
[tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.
[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the Twitter algorithm with the help of gpt4 and activeloop's deep lake.
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
C:\Users\wesla\CodePilotAI\repositories\langchain\cookbook\README.md
# LangChain Documentation

For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/documentation)
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\README.md
-e ../libs/langchain
-e ../libs/community
-e ../libs/core
urllib3==1.26.18
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\vercel_requirements.txt
-e libs/experimental
-e libs/langchain
-e libs/core
-e libs/community
pydantic<2
autodoc_pydantic==1.8.0
myst_parser
nbsphinx==0.8.9
sphinx>=5
sphinx-autobuild==2021.3.14
sphinx_rtd_theme==1.0.0
sphinx-typlog-theme==0.8.0
sphinx-panels
toml
myst_nb
sphinx_copybutton
pydata-sphinx-theme==0.13.1
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\api_reference\requirements.txt
Copyright (c) 2007-2023 The scikit-learn developers.
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\api_reference\templates\COPYRIGHT.txt
Copyright (c) 2007-2023 The scikit-learn developers.
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\api_reference\themes\COPYRIGHT.txt
# Security

LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.

## Best Practices

When building such applications developers should remember to follow good security practices:

* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.

Risks of not doing so include, but are not limited to:

* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.

Example scenarios with mitigation strategies:

* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.

If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.

## Reporting a Vulnerability

Please report security vulnerabilities by email to [email protected]. This will ensure the issue is promptly triaged and acted upon as needed.

## Enterprise solutions

LangChain may offer enterprise solutions for customers who have additional security requirements. Please contact us at [email protected].
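As a hedged illustration of the database mitigation described above, LangChain's `SQLDatabase` wrapper can be told which tables to expose when it is constructed. The connection URI, read-only role, and table names below are example values, not recommendations for any particular system:

```python
# Sketch: connect with a low-privilege, read-only role and expose only the tables
# the agent actually needs. The URI, role, and table names are example values.
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri(
    "postgresql+psycopg2://readonly_user:password@localhost:5432/appdb",
    include_tables=["products", "orders"],  # hide every other table from the LLM
)
print(db.get_usable_table_names())
```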
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\security.md
# Debugging

If you're building with LLMs, at some point something will break, and you'll need to debug. A model call will fail, or the model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.

Here are a few different tools and functionalities to aid in debugging.

## Tracing

Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [WandB](/docs/integrations/providers/wandb_tracing) are the most comprehensive solutions for debugging. These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.

For anyone building production-grade LLM applications, we highly recommend using a platform like this.

![Screenshot of the LangSmith debugging interface showing an AgentExecutor run with input and output details, and a run tree visualization.](../../static/img/run_details.png "LangSmith Debugging Interface")

## `set_debug` and `set_verbose`

If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a Chain run. There are a number of ways to enable printing at varying degrees of verbosity.

Let's suppose we have a simple agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4", temperature=0)
tools = load_tools(["ddg-search", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
```

```python
agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?")
```

<CodeOutputBlock lang="python">

```
'The director of the 2023 film Oppenheimer is Christopher Nolan and he is approximately 19345 days old in 2023.'
```

</CodeOutputBlock>

### `set_debug(True)`

Setting the global `debug` flag will cause all LangChain components with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs.

```python
from langchain.globals import set_debug

set_debug(True)
agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?")
```

<details>
<summary>Console output</summary>
<CodeOutputBlock lang="python">

```
[chain/start] [1:RunTypeEnum.chain:AgentExecutor] Entering Chain run with input:
{ "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?" }
[chain/start] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain] Entering Chain run with input:
{ "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "agent_scratchpad": "", "stop": [ "\nObservation:", "\n\tObservation:" ] }
[llm/start] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain > 3:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input:
{ "prompts": [ "Human: Answer the following questions as best you can. You have access to the following tools:\n\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events.
Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\nThought:" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain > 3:RunTypeEnum.llm:ChatOpenAI] [5.53s] Exiting LLM run with output: { "generations": [ [ { "text": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 206, "completion_tokens": 71, "total_tokens": 277 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain] [5.53s] Exiting Chain run with output: { "text": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 4:RunTypeEnum.tool:duckduckgo_search] Entering Tool run with input: "Director of the 2023 film Oppenheimer and their age" [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 4:RunTypeEnum.tool:duckduckgo_search] [1.51s] Exiting Tool run with output: "Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. 
Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age." [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "agent_scratchpad": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:", "stop": [ "\nObservation:", "\n\tObservation:" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain > 6:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Answer the following questions as best you can. You have access to the following tools:\n\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... 
In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain > 6:RunTypeEnum.llm:ChatOpenAI] [4.46s] Exiting LLM run with output: { "generations": [ [ { "text": "The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 550, "completion_tokens": 39, "total_tokens": 589 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain] [4.46s] Exiting Chain run with output: { "text": "The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 7:RunTypeEnum.tool:duckduckgo_search] Entering Tool run with input: "Christopher Nolan age" [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 7:RunTypeEnum.tool:duckduckgo_search] [1.33s] Exiting Tool run with output: "Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as "Dunkirk," "Inception," "Interstellar," and the "Dark Knight" trilogy, has spent the last three years living in Oppenheimer's world, writing ..." 
[chain/start] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "agent_scratchpad": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\nThought:", "stop": [ "\nObservation:", "\n\tObservation:" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain > 9:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Answer the following questions as best you can. You have access to the following tools:\n\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. 
Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. 
Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\nThought:" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain > 9:RunTypeEnum.llm:ChatOpenAI] [2.69s] Exiting LLM run with output: { "generations": [ [ { "text": "Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 868, "completion_tokens": 46, "total_tokens": 914 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain] [2.69s] Exiting Chain run with output: { "text": "Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator] Entering Tool run with input: "52*365" [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain] Entering Chain run with input: { "question": "52*365" } [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "question": "52*365", "stop": [ "```output" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain > 13:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Translate a math problem into a expression that can be executed using Python's numexpr library. 
Use the output of running this code to answer the question.\n\nQuestion: ${Question with math problem.}\n```text\n${single line mathematical expression that solves the problem}\n```\n...numexpr.evaluate(text)...\n```output\n${Output of running the code}\n```\nAnswer: ${Answer}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate(\"37593 * 67\")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate(\"37593**(1/5)\")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: 52*365" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain > 13:RunTypeEnum.llm:ChatOpenAI] [2.89s] Exiting LLM run with output: { "generations": [ [ { "text": "```text\n52*365\n```\n...numexpr.evaluate(\"52*365\")...\n", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "```text\n52*365\n```\n...numexpr.evaluate(\"52*365\")...\n", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 203, "completion_tokens": 19, "total_tokens": 222 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain] [2.89s] Exiting Chain run with output: { "text": "```text\n52*365\n```\n...numexpr.evaluate(\"52*365\")...\n" } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain] [2.90s] Exiting Chain run with output: { "answer": "Answer: 18980" } [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator] [2.90s] Exiting Tool run with output: "Answer: 18980" [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "agent_scratchpad": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. 
Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\nThought:Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365\nObservation: Answer: 18980\nThought:", "stop": [ "\nObservation:", "\n\tObservation:" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain > 15:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Answer the following questions as best you can. You have access to the following tools:\n\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... 
Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\nThought:Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365\nObservation: Answer: 18980\nThought:" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain > 15:RunTypeEnum.llm:ChatOpenAI] [3.52s] Exiting LLM run with output: { "generations": [ [ { "text": "I now know the final answer\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "I now know the final answer\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 926, "completion_tokens": 43, "total_tokens": 969 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain] [3.52s] Exiting Chain run with output: { "text": "I now know the final answer\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days." 
} [chain/end] [1:RunTypeEnum.chain:AgentExecutor] [21.96s] Exiting Chain run with output: { "output": "The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days." } 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.' ``` </CodeOutputBlock> </details> ### `set_verbose(True)` Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic. ```python from langchain.globals import set_verbose set_verbose(True) agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?") ``` <details> <summary>Console output</summary> <CodeOutputBlock lang="python"> ``` > Entering new AgentExecutor chain... > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought: > Finished chain. First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: "Director of the 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. 
You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: "Director of the 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought: > Finished chain. The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: "Christopher Nolan birth date" Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. 
Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about "the man who ... Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: "Director of the 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: "Christopher Nolan birth date" Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. 
The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about "the man who ... Thought: > Finished chain. Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days. Action: Calculator Action Input: (2023 - 1970) * 365 > Entering new LLMMathChain chain... (2023 - 1970) * 365 > Entering new LLMChain chain... Prompt after formatting: Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question. Question: ${Question with math problem.} ```text ${single line mathematical expression that solves the problem} ``` ...numexpr.evaluate(text)... ```output ${Output of running the code} ``` Answer: ${Answer} Begin. Question: What is 37593 * 67? ```text 37593 * 67 ``` ...numexpr.evaluate("37593 * 67")... ```output 2518731 ``` Answer: 2518731 Question: 37593^(1/5) ```text 37593**(1/5) ``` ...numexpr.evaluate("37593**(1/5)")... ```output 8.222831614237718 ``` Answer: 8.222831614237718 Question: (2023 - 1970) * 365 > Finished chain. ```text (2023 - 1970) * 365 ``` ...numexpr.evaluate("(2023 - 1970) * 365")... Answer: 19345 > Finished chain. Observation: Answer: 19345 Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: "Director of the 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... 
J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: "Christopher Nolan birth date" Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about "the man who ... Thought:Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days. Action: Calculator Action Input: (2023 - 1970) * 365 Observation: Answer: 19345 Thought: > Finished chain. I now know the final answer Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days. > Finished chain. 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days.' ``` </CodeOutputBlock> </details> ### `Chain(..., verbose=True)` You can also scope verbosity down to a single object, in which case only the inputs and outputs to that object are printed (along with any additional callbacks calls made specifically by that object). ```python # Passing verbose=True to initialize_agent will pass that along to the AgentExecutor (which is a Chain). agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, ) agent.run("Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?") ``` <details> <summary>Console output</summary> <CodeOutputBlock lang="python"> ``` > Entering new AgentExecutor chain... First, I need to find out who directed the film Oppenheimer in 2023 and their birth date. Then, I can calculate their age in years and days. Action: duckduckgo_search Action Input: "Director of 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". A Review of Christopher Nolan's new film 'Oppenheimer' , the story of the man who fathered the Atomic Bomb. Cillian Murphy leads an all star cast ... Release Date: July 21, 2023. Director ... For his new film, "Oppenheimer," starring Cillian Murphy and Emily Blunt, director Christopher Nolan set out to build an entire 1940s western town. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: "Christopher Nolan birth date" Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content โ†’ Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. Date of Birth: 30 July 1970 . ... Christopher Nolan is a British-American film director, producer, and screenwriter. His films have grossed more than US$5 billion worldwide, and have garnered 11 Academy Awards from 36 nominations. ... Thought:Christopher Nolan was born on July 30, 1970. Now I can calculate his age in years and then in days. Action: Calculator Action Input: {"operation": "subtract", "operands": [2023, 1970]} Observation: Answer: 53 Thought:Christopher Nolan is 53 years old in 2023. Now I need to calculate his age in days. Action: Calculator Action Input: {"operation": "multiply", "operands": [53, 365]} Observation: Answer: 19345 Thought:I now know the final answer Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan. 
He is 53 years old in 2023, which is approximately 19345 days.

> Finished chain.

'The director of the 2023 film Oppenheimer is Christopher Nolan. He is 53 years old in 2023, which is approximately 19345 days.'
```

</CodeOutputBlock>

</details>

## Other callbacks

`Callbacks` are what we use to execute any functionality within a component outside of the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.

See here for more info on [Callbacks](/docs/modules/callbacks/), how to use them, and how to customize them.
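For example, a custom handler only needs to subclass the base callback handler and override the events it cares about. The sketch below is a minimal illustration, not part of the library: the handler name and printed messages are made up for this example, while `on_llm_start` and `on_llm_end` are standard hooks on the callback interface.

```python
from langchain.callbacks.base import BaseCallbackHandler


class SimpleLoggingHandler(BaseCallbackHandler):
    """Minimal custom handler: print a line whenever an LLM call starts or ends."""

    def on_llm_start(self, serialized, prompts, **kwargs):
        # Fired right before the LLM is called with one or more prompts.
        print(f"LLM starting with {len(prompts)} prompt(s)")

    def on_llm_end(self, response, **kwargs):
        # Fired once the LLM call returns.
        print("LLM finished")


# Attach it like any other callback, e.g.:
# agent.run("your question here", callbacks=[SimpleLoggingHandler()])
```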
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\guides\debugging.md
.md
# Pydantic compatibility

- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)
- Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time

## LangChain Pydantic migration plan

As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.

* Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
* During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).

Users can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but they must avoid mixing v1 and v2 code for LangChain.

Below are two examples showing how to avoid mixing pydantic v1 and v2 code: in the case of inheritance and in the case of passing objects to LangChain.

**Example 1: Extending via inheritance**

**YES**

```python
from langchain_core.tools import BaseTool
from pydantic.v1 import Field, validator


class CustomTool(BaseTool):  # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(self, *args, **kwargs):
        return "hello"

    @validator('x')  # v1 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1


CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```

Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errors.

**NO**

```python
from langchain_core.tools import BaseTool
from pydantic import Field, field_validator  # pydantic v2


class CustomTool(BaseTool):  # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(self, *args, **kwargs):
        return "hello"

    @field_validator('x')  # v2 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1


CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```

**Example 2: Passing objects to LangChain**

**YES**

```python
from langchain_core.tools import Tool
from pydantic.v1 import BaseModel, Field  # <-- Uses v1 namespace


class CalculatorInput(BaseModel):
    question: str = Field()


Tool.from_function(  # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput
)
```

**NO**

```python
from langchain_core.tools import Tool
from pydantic import BaseModel, Field  # <-- Uses v2 namespace


class CalculatorInput(BaseModel):
    question: str = Field()


Tool.from_function(  # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput
)
```
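If you are unsure which major version of Pydantic is installed in a given environment, a quick runtime check can help you decide whether to pin or to start a partial migration. A minimal sketch, relying only on Pydantic's public `VERSION` string (present in both v1 and v2):

```python
import pydantic

# The major component of the version string distinguishes v1 from v2.
major = int(pydantic.VERSION.split(".")[0])
if major >= 2:
    print("Pydantic v2 installed: import v1 primitives from the pydantic.v1 namespace for LangChain objects")
else:
    print("Pydantic v1 installed: plain pydantic imports are fine with LangChain objects")
```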
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\guides\pydantic_compatibility.md
.md
# LLMonitor

>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.

<video controls width='100%' >
  <source src='https://llmonitor.com/videos/demo-annotated.mp4'/>
</video>

## Setup

Create an account on [llmonitor.com](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs), then copy your new app's `tracking id`.

Once you have it, set it as an environment variable by running:

```bash
export LLMONITOR_APP_ID="..."
```

If you'd prefer not to set an environment variable, you can pass the key directly when initializing the callback handler:

```python
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler(app_id="...")
```

## Usage with LLM/Chat models

```python
from langchain_openai import OpenAI
from langchain_openai import ChatOpenAI
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()

llm = OpenAI(
    callbacks=[handler],
)

chat = ChatOpenAI(callbacks=[handler])

llm("Tell me a joke")
```

## Usage with chains and agents

Make sure to pass the callback handler to the `run` method so that all related chains and LLM calls are correctly tracked.

It is also recommended to pass `agent_name` in the metadata to be able to distinguish between agents in the dashboard.

Example:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
from langchain.agents import OpenAIFunctionsAgent, AgentExecutor, tool
from langchain.callbacks import LLMonitorCallbackHandler

llm = ChatOpenAI(temperature=0)
handler = LLMonitorCallbackHandler()

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]

prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=SystemMessage(
        content="You are a very powerful assistant, but bad at calculating lengths of words."
    )
)

agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, verbose=True)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    metadata={"agent_name": "WordCount"}  # <- recommended, assign a custom name
)
agent_executor.run("how many letters in the word educa?", callbacks=[handler])
```

Another example:

```python
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain_openai import OpenAI
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    metadata={"agent_name": "GirlfriendAgeFinder"}  # <- recommended, assign a custom name
)

agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
    callbacks=[handler],
)
```

## User Tracking

User tracking allows you to identify your users, track their cost, conversations and more.

```python
from langchain.callbacks.llmonitor_callback import LLMonitorCallbackHandler, identify

with identify("user-123"):
    llm("Tell me a joke")

with identify("user-456", user_props={"email": "[email protected]"}):
    agent.run("Who is Leo DiCaprio's girlfriend?")
```

## Support

For any questions or issues with the integration, you can reach out to the LLMonitor team on [Discord](http://discord.com/invite/8PafSG58kK) or via [email](mailto:[email protected]).
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\callbacks\llmonitor.md
.md
# Streamlit

> **[Streamlit](https://streamlit.io/) is a faster way to build and share data apps.**
> Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front-end experience required.
> See more examples at [streamlit.io/generative-ai](https://streamlit.io/generative-ai).

[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/streamlit-agent?quickstart=1)

In this guide we will demonstrate how to use `StreamlitCallbackHandler` to display the thoughts and actions of an agent in an interactive Streamlit app. Try it out with the running app below using the MRKL agent:

<iframe loading="lazy" src="https://langchain-mrkl.streamlit.app/?embed=true&embed_options=light_theme"
    style={{ width: 100 + '%', border: 'none', marginBottom: 1 + 'rem', height: 600 }}
    allow="camera;clipboard-read;clipboard-write;"
></iframe>

## Installation and Setup

```bash
pip install langchain streamlit
```

You can run `streamlit hello` to load a sample app and validate your install succeeded. See full instructions in Streamlit's [Getting started documentation](https://docs.streamlit.io/library/get-started).

## Display thoughts and actions

To create a `StreamlitCallbackHandler`, you just need to provide a parent container to render the output.

```python
from langchain_community.callbacks import StreamlitCallbackHandler
import streamlit as st

st_callback = StreamlitCallbackHandler(st.container())
```

Additional keyword arguments to customize the display behavior are described in the [API reference](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html).

### Scenario 1: Using an Agent with Tools

The primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). You can create an agent in your Streamlit app and simply pass the `StreamlitCallbackHandler` to `agent.run()` in order to visualize the thoughts and actions live in your app.

```python
import streamlit as st
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent, load_tools
from langchain_community.callbacks import StreamlitCallbackHandler
from langchain_openai import OpenAI

llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["ddg-search"])
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

if prompt := st.chat_input():
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent_executor.invoke(
            {"input": prompt}, {"callbacks": [st_callback]}
        )
        st.write(response["output"])
```

**Note:** You will need to set `OPENAI_API_KEY` for the above app code to run successfully. The easiest way to do this is via [Streamlit secrets.toml](https://docs.streamlit.io/library/advanced-features/secrets-management), or any other local ENV management tool.

### Additional scenarios

Currently `StreamlitCallbackHandler` is geared towards use with a LangChain Agent Executor. Support for additional agent types, use directly with Chains, etc. will be added in the future.

You may also be interested in using [StreamlitChatMessageHistory](/docs/integrations/memory/streamlit_chat_message_history) for LangChain.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\callbacks\streamlit.md
.txt
1/22/23, 6:30 PM - User 1: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks! 1/22/23, 8:24 PM - User 2: Goodmorning! $50 is too low. 1/23/23, 2:59 AM - User 1: How much do you want? 1/23/23, 3:00 AM - User 2: Online is at least $100 1/23/23, 3:01 AM - User 2: Here is $129 1/23/23, 3:01 AM - User 2: <Media omitted> 1/23/23, 3:01 AM - User 1: Im not interested in this bag. Im interested in the blue one! 1/23/23, 3:02 AM - User 1: I thought you were selling the blue one! 1/23/23, 3:18 AM - User 2: No Im sorry it was my mistake, the blue one is not for sale 1/23/23, 3:19 AM - User 1: Oh no worries! Bye 1/23/23, 3:19 AM - User 2: Bye! 1/23/23, 3:22_AM - User 1: And let me know if anything changes
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\document_loaders\example_data\whatsapp_chat.txt
.txt
application.json 1023495323659816971/ applications/ avatar.gif user.json events-2023-00000-of-00001.json events-2023-00000-of-00001.json events-2023-00000-of-00001.json events-2023-00000-of-00001.json analytics/ modeling/ reporting/ tns/ channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv c1000084973275058257/ c1000108836771856496/ c1004874234339794977/ c1004874234339794979/ c1004874234339794981/ c1004874234339794982/ c1005785616165896283/ c1011447733393043628/ c1011548022905249822/ c1011650063027687575/ c1011714070182895727/ c1013930263950135346/ c1013930396829884426/ c1014957294745829479/ c1014961384821366794/ c1014974864370712696/ 
c1019288541592817785/ c1024947790767464478/ c1027257686858932255/ c1027927867989962814/ c1032151840999100436/ c1032575808826523662/ c1037561178286739466/ c1038097349660135474/ c1038097372695236729/ c1038689169351913544/ c1038692122452312125/ c1039957371381887049/ c1040989617157066782/ c1047165096452960316/ c1047565374645870743/ c1050225908914589716/ c1050226593668284416/ c1050227353311248404/ c1051632794427723827/ c1052599046717591632/ c1052615516981821531/ c1056285083520217149/ c105765859191975936/ c1061166503753416735/ c1062024667105341502/ c1066640566621835284/ c1070018538758221874/ c1072944049788555314/ c1075121707033042985/ c1075438954632990820/ c1077238309320929342/ c1081432695315386418/ c1082169962157838366/ c1084011585871282256/ c1084352082812878928/ c1085149531437535343/ c1086944178086359060/ c1093214985557123223/ c1093215227555876914/ c1093930791794393089/ c1096323263161978891/ c1096489741710532730/ c1097000752653795358/ c278566343836565505/ c279692806442844161/ c280973436971515906/ c283812709789859851/ c343944376055103488/ c486935104384532502/ c531543370041131008/ c538158613252800512/ c572384192571113512/ c619960843878268950/ c661268593870372876/ c661394153778970624/ c663302088226373632/ c669957895257063445/ c670218237891313664/ c673160333661306880/ c674693947800420363/ c674694138129678375/ c743425228952305695/ c754627904406814770/ c754638493875044503/ c757205803651301436/ c759232323710484531/ c771802926372093973/ c783240623582609416/ c783244379115880448/ c801744322788982814/ c810514969892225024/ c816983218434605057/ c830184175176122389/ c830679381033877564/ c831172308395622480/ c849582819105177650/ c860977555875430492/ c867042653401251880/ c868094992986550322/ c868917941184376842/ c905007686976946176/ c909600839717511211/ c909600931816018031/ c923095048931905557/ c924877027180417035/ c938491245347631114/ c938743368375214110/ c969876184185860107/ c969945714056642580/ c969948939728093214/ c981037338517966889/ c984120044478939146/ c985958948085592064/ c990816829993811978/ c993402018901266436/ c993782366948565102/ c993843360752226364/ c994556806644899870/ index.json audit-log.json guild.json audit-log.json guild.json audit-log.json bans.json channels.json emoji.json guild.json icon.jpeg webhooks.json audit-log.json guild.json audit-log.json bans.json channels.json emoji.json guild.json webhooks.json audit-log.json guild.json audit-log.json bans.json channels.json emoji.json guild.json icon.png webhooks.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json 1024120160740716544/ 102860784329052160/ 1032575808826523659/ 1038097195422978059/ 1039583521112600638/ 1050224141732687912/ 1069661049827111054/ 267624335836053506/ 278285146518716417/ 486935104384532500/ 531303890453397522/ 669880381649977354/ 727016164215226450/ 743099584242516037/ 753173158198116402/ 830184174198718474/ 860977555293470772/ 887994159741427712/ 909600839717511208/ 974519864045756446/ index.json account/ activities_e/ activities_w/ activity/ messages/ programs/ README.txt servers/
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\document_loaders\example_data\fake_discord_data\output.txt
.md
# Remembrall This page covers how to use the [Remembrall](https://remembrall.dev) ecosystem within LangChain. ## What is Remembrall? Remembrall gives your language model long-term memory, retrieval augmented generation, and complete observability with just a few lines of code. ![Screenshot of the Remembrall dashboard showing request statistics and model interactions.](/img/RemembrallDashboard.png "Remembrall Dashboard Interface") It works as a light-weight proxy on top of your OpenAI calls and simply augments the context of the chat calls at runtime with relevant facts that have been collected. ## Setup To get started, [sign in with Github on the Remembrall platform](https://remembrall.dev/login) and copy your [API key from the settings page](https://remembrall.dev/dashboard/settings). Any request that you send with the modified `openai_api_base` (see below) and Remembrall API key will automatically be tracked in the Remembrall dashboard. You **never** have to share your OpenAI key with our platform and this information is **never** stored by the Remembrall systems. To do this, we need to install the following dependencies: ```bash pip install -U langchain-openai ``` ### Enable Long Term Memory In addition to setting the `openai_api_base` and Remembrall API key via `x-gp-api-key`, you should specify a UID to maintain memory for. This will usually be a unique user identifier (like email). ```python from langchain_openai import ChatOpenAI chat_model = ChatOpenAI(openai_api_base="https://remembrall.dev/api/openai/v1", model_kwargs={ "headers":{ "x-gp-api-key": "remembrall-api-key-here", "x-gp-remember": "[email protected]", } }) chat_model.predict("My favorite color is blue.") import time; time.sleep(5) # wait for system to save fact via auto save print(chat_model.predict("What is my favorite color?")) ``` ### Enable Retrieval Augmented Generation First, create a document context in the [Remembrall dashboard](https://remembrall.dev/dashboard/spells). Paste in the document texts or upload documents as PDFs to be processed. Save the Document Context ID and insert it as shown below. ```python from langchain_openai import ChatOpenAI chat_model = ChatOpenAI(openai_api_base="https://remembrall.dev/api/openai/v1", model_kwargs={ "headers":{ "x-gp-api-key": "remembrall-api-key-here", "x-gp-context": "document-context-id-goes-here", } }) print(chat_model.predict("This is a question that can be answered with my document.")) ```
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\memory\remembrall.md
.md
# Airtable >[Airtable](https://en.wikipedia.org/wiki/Airtable) is a cloud collaboration service. `Airtable` is a spreadsheet-database hybrid, with the features of a database but applied to a spreadsheet. > The fields in an Airtable table are similar to cells in a spreadsheet, but have types such as 'checkbox', > 'phone number', and 'drop-down list', and can reference file attachments like images. >Users can create a database, set up column types, add records, link tables to one another, collaborate, sort records > and publish views to external websites. ## Installation and Setup ```bash pip install pyairtable ``` * Get your [API key](https://support.airtable.com/docs/creating-and-using-api-keys-and-access-tokens). * Get the [ID of your base](https://airtable.com/developers/web/api/introduction). * Get the [table ID from the table url](https://www.highviewapps.com/kb/where-can-i-find-the-airtable-base-id-and-table-id/#:~:text=Both%20the%20Airtable%20Base%20ID,URL%20that%20begins%20with%20tbl). ## Document Loader ```python from langchain_community.document_loaders import AirtableLoader ``` See an [example](/docs/integrations/document_loaders/airtable).
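As a quick orientation, here is a minimal, hedged sketch of loading records from a table. The token and ID values are placeholders, and the positional argument order follows the linked document loader example:

```python
from langchain_community.document_loaders import AirtableLoader

# Placeholder credentials - substitute your own personal access token,
# base ID, and table ID gathered in the steps above.
api_key = "patXXXXXXXXXXXXXX"
base_id = "appXXXXXXXXXXXXXX"
table_id = "tblXXXXXXXXXXXXXX"

loader = AirtableLoader(api_key, table_id, base_id)
docs = loader.load()  # each Airtable record becomes one Document
print(len(docs))
```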
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\airtable.md
.md
# AwaDB >[AwaDB](https://github.com/awa-ai/awadb) is an AI Native database for the search and storage of embedding vectors used by LLM Applications. ## Installation and Setup ```bash pip install awadb ``` ## Vector Store ```python from langchain_community.vectorstores import AwaDB ``` See a [usage example](/docs/integrations/vectorstores/awadb). ## Text Embedding Model ```python from langchain_community.embeddings import AwaEmbeddings ``` See a [usage example](/docs/integrations/text_embedding/awadb).
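For orientation, a minimal sketch that wires the two pieces together. The sample texts and query are placeholders; see the linked examples for the full walkthrough:

```python
from langchain_community.embeddings import AwaEmbeddings
from langchain_community.vectorstores import AwaDB

# Placeholder texts; AwaDB stores the vectors in a local data directory.
texts = [
    "AwaDB is an AI-native database for embedding vectors.",
    "LangChain integrates with AwaDB as a vector store.",
]

db = AwaDB.from_texts(texts, embedding=AwaEmbeddings())
print(db.similarity_search("What is AwaDB?", k=1))
```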
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\awadb.md
.md
# Baseten

[Baseten](https://baseten.co) provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.

As a model inference platform, Baseten is a `Provider` in the LangChain ecosystem. The Baseten integration currently implements a single `Component`, LLMs, but more are planned!

Baseten lets you run both open source models like Llama 2 or Mistral and proprietary or fine-tuned models on dedicated GPUs. If you're used to a provider like OpenAI, using Baseten has a few differences:

* Rather than paying per token, you pay per minute of GPU used.
* Every model on Baseten uses [Truss](https://truss.baseten.co/welcome), our open-source model packaging framework, for maximum customizability.
* While we have some [OpenAI ChatCompletions-compatible models](https://docs.baseten.co/api-reference/openai), you can define your own I/O spec with Truss.

You can learn more about Baseten in [our docs](https://docs.baseten.co/) or read on for LangChain-specific info.

## Setup: LangChain + Baseten

You'll need two things to use Baseten models with LangChain:

- A [Baseten account](https://baseten.co)
- An [API key](https://docs.baseten.co/observability/api-keys)

Export your API key as an environment variable called `BASETEN_API_KEY`.

```sh
export BASETEN_API_KEY="paste_your_api_key_here"
```

## Component guide: LLMs

Baseten integrates with LangChain through the [LLM component](https://python.langchain.com/docs/integrations/llms/baseten), which provides a standardized and interoperable interface for models that are deployed on your Baseten workspace.

You can deploy foundation models like Mistral and Llama 2 with one click from the [Baseten model library](https://app.baseten.co/explore/) or if you have your own model, [deploy it with Truss](https://truss.baseten.co/welcome).

In this example, we'll work with Mistral 7B. [Deploy Mistral 7B here](https://app.baseten.co/explore/mistral_7b_instruct) and follow along with the deployed model's ID, found in the model dashboard.

To use this module, you must:

* Export your Baseten API key as the environment variable BASETEN_API_KEY
* Get the model ID for your model from your Baseten dashboard
* Identify the model deployment ("production" for all model library models)

[Learn more](https://docs.baseten.co/deploy/lifecycle) about model IDs and deployments.

Production deployment (standard for model library models)

```python
from langchain_community.llms import Baseten

mistral = Baseten(model="MODEL_ID", deployment="production")
mistral("What is the Mistral wind?")
```

Development deployment

```python
from langchain_community.llms import Baseten

mistral = Baseten(model="MODEL_ID", deployment="development")
mistral("What is the Mistral wind?")
```

Other published deployment

```python
from langchain_community.llms import Baseten

mistral = Baseten(model="MODEL_ID", deployment="DEPLOYMENT_ID")
mistral("What is the Mistral wind?")
```

Streaming LLM output, chat completions, embeddings models, and more are all supported on the Baseten platform and coming soon to our LangChain integration. Contact us at [[email protected]](mailto:[email protected]) with any questions about using Baseten with LangChain.
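Because the Baseten LLM implements the standard LangChain interface, it can be dropped into chains like any other model. A minimal sketch using the deployed model from above (the prompt text is illustrative):

```python
from langchain_core.prompts import PromptTemplate
from langchain_community.llms import Baseten

mistral = Baseten(model="MODEL_ID", deployment="production")
prompt = PromptTemplate.from_template("Answer in one sentence: {question}")

# Compose the prompt and model with the LangChain expression syntax.
chain = prompt | mistral
print(chain.invoke({"question": "What is the Mistral wind?"}))
```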
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\baseten.md
.md
# BREEBS (Open Knowledge)

[BREEBS](https://www.breebs.com/) is an open collaborative knowledge platform.
Anybody can create a Breeb, a knowledge capsule based on PDFs stored on a Google Drive folder.
A Breeb can be used by any LLM/chatbot to improve its expertise, reduce hallucinations and give access to sources.
Behind the scenes, Breebs implements several Retrieval Augmented Generation (RAG) models to seamlessly provide useful context at each iteration.

## List of available Breebs

To get the full list of Breebs, including their key (breeb_key) and description: https://breebs.promptbreeders.com/web/listbreebs.
Dozens of Breebs have already been created by the community and are freely available for use. They cover a wide range of expertise, from organic chemistry to mythology, as well as tips on seduction and decentralized finance.

## Creating a new Breeb

To generate a new Breeb, simply compile PDF files in a publicly shared Google Drive folder and initiate the creation process on the [BREEBS website](https://www.breebs.com/) by clicking the "Create Breeb" button. You can currently include up to 120 files, with a total character limit of 15 million.

## Retriever

```python
from langchain.retrievers import BreebsRetriever
```

## Example

[See usage example (Retrieval & ConversationalRetrievalChain)](https://python.langchain.com/docs/integrations/retrievers/breebs)
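As a quick illustration, a hedged sketch of querying a Breeb through the retriever. The `breeb_key` value and the query are placeholders; pick any key from the public list above:

```python
from langchain.retrievers import BreebsRetriever

# "Parivoyage" is an example breeb_key from the public list.
retriever = BreebsRetriever("Parivoyage")
docs = retriever.get_relevant_documents("What are some unusual things to do in Paris?")
print(docs[0].page_content[:200])
```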
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\breebs.md
.md
Databricks
==========

The [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform.

Databricks embraces the LangChain ecosystem in various ways:

1. Databricks connector for the SQLDatabase Chain: SQLDatabase.from_databricks() provides an easy way to query your data on Databricks through LangChain
2. Databricks MLflow integrates with LangChain: Tracking and serving LangChain applications with fewer steps
3. Databricks as an LLM provider: Deploy your fine-tuned LLMs on Databricks via serving endpoints or cluster driver proxy apps, and query them as langchain.llms.Databricks
4. Databricks Dolly: Databricks open-sourced Dolly which allows for commercial use, and can be accessed through the Hugging Face Hub

Databricks connector for the SQLDatabase Chain
----------------------------------------------
You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.

Databricks MLflow integrates with LangChain
-------------------------------------------

MLflow is an open-source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/integrations/providers/mlflow_tracking) for details about MLflow's integration with LangChain.

Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details.

Databricks MLflow makes it more convenient to develop LangChain applications on Databricks. For MLflow tracking, you don't need to set the tracking uri. For MLflow Model Serving, you can save LangChain Chains in the MLflow langchain flavor, and then register and serve the Chain with a few clicks on Databricks, with credentials securely managed by MLflow Model Serving.

Databricks External Models
--------------------------

[Databricks External Models](https://docs.databricks.com/generative-ai/external-models/index.html) is a service that is designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests. The following example creates an endpoint that serves OpenAI's GPT-4 model and generates a chat response from it:

```python
from langchain_community.chat_models import ChatDatabricks
from langchain_core.messages import HumanMessage
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")
name = "chat"
client.create_endpoint(
    name=name,
    config={
        "served_entities": [
            {
                "name": "test",
                "external_model": {
                    "name": "gpt-4",
                    "provider": "openai",
                    "task": "llm/v1/chat",
                    "openai_config": {
                        "openai_api_key": "{{secrets/<scope>/<key>}}",
                    },
                },
            }
        ],
    },
)
chat = ChatDatabricks(endpoint=name, temperature=0.1)
print(chat([HumanMessage(content="hello")]))
# -> content='Hello! How can I assist you today?'
``` Databricks Foundation Model APIs -------------------------------- [Databricks Foundation Model APIs](https://docs.databricks.com/machine-learning/foundation-models/index.html) allow you to access and query state-of-the-art open source models from dedicated serving endpoints. With Foundation Model APIs, developers can quickly and easily build applications that leverage a high-quality generative AI model without maintaining their own model deployment. The following example uses the `databricks-bge-large-en` endpoint to generate embeddings from text: ```python from langchain_community.embeddings import DatabricksEmbeddings embeddings = DatabricksEmbeddings(endpoint="databricks-bge-large-en") print(embeddings.embed_query("hello")[:3]) # -> [0.051055908203125, 0.007221221923828125, 0.003879547119140625, ...] ``` Databricks as an LLM provider ----------------------------- The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks#wrapping-a-serving-endpoint-custom-model) demonstrates how to serve a custom model that has been registered by MLflow as a Databricks endpoint. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development. Databricks Vector Search ------------------------ Databricks Vector Search is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database. With Vector Search, you can create auto-updating vector search indexes from Delta tables managed by Unity Catalog and query them with a simple API to return the most similar vectors. See the notebook [Databricks Vector Search](/docs/integrations/vectorstores/databricks_vector_search) for instructions to use it with LangChain.
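As a complement to the SQLDatabase connector section above, here is a hedged sketch of wrapping a Databricks SQL warehouse or cluster with the SQLDatabase wrapper. The catalog and schema names are placeholders, and the sketch assumes the code runs in a Databricks notebook (or with host and token configured in the environment):

```python
from langchain_community.utilities import SQLDatabase

# Inside a Databricks notebook the host and token are picked up automatically;
# elsewhere, pass host=... and api_token=... (plus warehouse_id or cluster_id) explicitly.
db = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")

print(db.get_usable_table_names())
print(db.run("SELECT 1"))
```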
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\databricks.md
.md
# Fireworks

This page covers how to use [Fireworks](https://fireworks.ai/) models within LangChain.

## Installation and setup

- Install the Fireworks integration package.

  ```bash
  pip install langchain-fireworks
  ```

- Get a Fireworks API key by signing up at [fireworks.ai](https://fireworks.ai).
- Authenticate by setting the FIREWORKS_API_KEY environment variable.

## Authentication

There are two ways to authenticate using your Fireworks API key:

1. Setting the `FIREWORKS_API_KEY` environment variable.

    ```python
    import os

    os.environ["FIREWORKS_API_KEY"] = "<KEY>"
    ```

2. Setting the `fireworks_api_key` field in the Fireworks LLM module.

    ```python
    from langchain_fireworks import Fireworks

    llm = Fireworks(fireworks_api_key="<KEY>")
    ```

## Using the Fireworks LLM module

Fireworks integrates with LangChain through the LLM module. In this example, we will work with the mixtral-8x7b-instruct model.

```python
from langchain_fireworks import Fireworks

llm = Fireworks(
    fireworks_api_key="<KEY>",
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    max_tokens=256)
llm("Name 3 sports.")
```

For a more detailed walkthrough, see [here](/docs/integrations/llms/Fireworks).
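Since the Fireworks LLM follows the standard LangChain interface, it composes with prompts like any other model. A minimal sketch (assumes `FIREWORKS_API_KEY` is set in the environment; the prompt is illustrative):

```python
from langchain_core.prompts import PromptTemplate
from langchain_fireworks import Fireworks

llm = Fireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    max_tokens=256,
)
prompt = PromptTemplate.from_template("Name three {topic}.")

# Chain the prompt into the model and run it once.
chain = prompt | llm
print(chain.invoke({"topic": "winter sports"}))
```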
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\fireworks.md
.md
# Marqo

This page covers how to use the Marqo ecosystem within LangChain.

### **What is Marqo?**

Marqo is a tensor search engine that uses embeddings stored in in-memory HNSW indexes to achieve cutting-edge search speeds. Marqo can scale to hundred-million document indexes with horizontal index sharding and allows for async and non-blocking data upload and search. Marqo uses the latest machine learning models from PyTorch, Hugging Face, OpenAI and more. You can start with a pre-configured model or bring your own. The built-in ONNX support and conversion allows for faster inference and higher throughput on both CPU and GPU.

Because Marqo includes its own inference, your documents can have a mix of text and images, and you can bring Marqo indexes with data from your other systems into the LangChain ecosystem without having to worry about your embeddings being compatible.

Deployment of Marqo is flexible; you can get started yourself with our docker image or [contact us about our managed cloud offering!](https://www.marqo.ai/pricing)

To run Marqo locally with our docker image, [see our getting started.](https://docs.marqo.ai/latest/)

## Installation and Setup

- Install the Python SDK with `pip install marqo`

## Wrappers

### VectorStore

There exists a wrapper around Marqo indexes, allowing you to use them within the vectorstore framework. Marqo lets you select from a range of models for generating embeddings and exposes some preprocessing configurations.

The Marqo vectorstore can also work with existing multimodal indexes where your documents have a mix of images and text; for more information refer to [our documentation](https://docs.marqo.ai/latest/#multi-modal-and-cross-modal-search). Note that instantiating the Marqo vectorstore with an existing multimodal index will disable the ability to add any new documents to it via the LangChain vectorstore `add_texts` method.

To import this vectorstore:
```python
from langchain_community.vectorstores import Marqo
```

For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo).
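For orientation, a hedged sketch against a local Marqo container. The URL and index name are assumptions, and since Marqo generates the embeddings itself, no embedding model is passed in:

```python
import marqo
from langchain_community.vectorstores import Marqo

# Assumes a Marqo server started via the getting-started docker image.
client = marqo.Client(url="http://localhost:8882")
client.create_index("langchain-demo")

vectorstore = Marqo(client, index_name="langchain-demo")
vectorstore.add_texts(["Marqo handles embedding generation on the server side."])
print(vectorstore.similarity_search("Who creates the embeddings?", k=1))
```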
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\marqo.md
.md
# Predibase

Learn how to use LangChain with models on Predibase.

## Setup

- Create a [Predibase](https://predibase.com/) account and [API key](https://docs.predibase.com/sdk-guide/intro).
- Install the Predibase Python client with `pip install predibase`
- Use your API key to authenticate

### LLM

Predibase integrates with LangChain by implementing the LLM module. You can see a short example below or a full notebook under LLM > Integrations > Predibase.

```python
import os
os.environ["PREDIBASE_API_TOKEN"] = "{PREDIBASE_API_TOKEN}"

from langchain_community.llms import Predibase

model = Predibase(model="vicuna-13b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"))

response = model("Can you recommend me a nice dry wine?")
print(response)
```
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\predibase.md
.md
# PubMed

>[PubMed®](https://pubmed.ncbi.nlm.nih.gov/) by `The National Center for Biotechnology Information, National Library of Medicine`
> comprises more than 35 million citations for biomedical literature from `MEDLINE`, life science journals, and online books.
> Citations may include links to full text content from `PubMed Central` and publisher web sites.

## Setup

You need to install a Python package.

```bash
pip install xmltodict
```

### Retriever

See a [usage example](/docs/integrations/retrievers/pubmed).

```python
from langchain.retrievers import PubMedRetriever
```

### Document Loader

See a [usage example](/docs/integrations/document_loaders/pubmed).

```python
from langchain_community.document_loaders import PubMedLoader
```
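A short, hedged sketch of both components together; the queries and the document cap are placeholders:

```python
from langchain.retrievers import PubMedRetriever
from langchain_community.document_loaders import PubMedLoader

# Retrieve citations relevant to a free-text query.
retriever = PubMedRetriever()
print(retriever.get_relevant_documents("covid vaccination")[0].metadata)

# Load a fixed number of citations as Documents; load_max_docs caps the fetch.
loader = PubMedLoader("chatgpt", load_max_docs=2)
docs = loader.load()
print(len(docs))
```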
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\pubmed.md
.md
# Shale Protocol

[Shale Protocol](https://shaleprotocol.com) provides production-ready inference APIs for open LLMs. It's a Plug & Play API as it's hosted on a highly scalable GPU cloud infrastructure.

Our free tier supports up to 1K daily requests per key as we want to eliminate the barrier for anyone to start building genAI apps with LLMs.

With Shale Protocol, developers/researchers can create apps and explore the capabilities of open LLMs at no cost.

This page covers how Shale-Serve API can be incorporated with LangChain.

As of June 2023, the API supports Vicuna-13B by default. We are going to support more LLMs such as Falcon-40B in future releases.

## How to

### 1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the "Shale Bot" on our Discord. No credit card is required and there is no free trial. It's a forever free tier with a 1K limit per day per API key.

### 2. Use https://shale.live/v1 as an OpenAI API drop-in replacement

For example:
```python
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

import os
os.environ['OPENAI_API_BASE'] = "https://shale.live/v1"
os.environ['OPENAI_API_KEY'] = "ENTER YOUR API KEY"

llm = OpenAI()

template = """Question: {question}

# Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\shaleprotocol.md
.md
# Vearch

[Vearch](https://github.com/vearch/vearch) is a scalable distributed system for efficient similarity search of deep learning vectors.

## Installation and Setup

The Vearch Python SDK enables you to use Vearch locally. It can be installed easily with `pip install vearch`.

## Vectorstore

Vearch can also be used as a vector store. Most details are in [this notebook](/docs/integrations/vectorstores/vearch).

```python
from langchain_community.vectorstores import Vearch
```
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\vearch.md
.md
# Portkey

>[Portkey](https://docs.portkey.ai/overview/introduction) is a platform designed to streamline the deployment
> and management of Generative AI applications.
> It provides comprehensive features for monitoring, managing models,
> and improving the performance of your AI applications.

## LLMOps for Langchain

Portkey brings production readiness to Langchain. With Portkey, you can

- [x] view detailed **metrics & logs** for all requests,
- [x] enable **semantic cache** to reduce latency & costs,
- [x] implement automatic **retries & fallbacks** for failed requests,
- [x] add **custom tags** to requests for better tracking and analysis, and [more](https://docs.portkey.ai).

### Using Portkey with Langchain

Using Portkey is as simple as just choosing which Portkey features you want, enabling them via `headers=Portkey.Config` and passing it in your LLM calls.

To start, get your Portkey API key by [signing up here](https://app.portkey.ai/login). (Click the profile icon on the top left, then click on "Copy API Key")

For OpenAI, a simple integration with logging feature would look like this:

```python
from langchain_openai import OpenAI
from langchain_community.utilities import Portkey

# Add the Portkey API Key from your account
headers = Portkey.Config(
    api_key = "<PORTKEY_API_KEY>"
)

llm = OpenAI(temperature=0.9, headers=headers)
llm.predict("What would be a good company name for a company that makes colorful socks?")
```

Your logs will be captured on your [Portkey dashboard](https://app.portkey.ai).

A common Portkey X Langchain use case is to **trace a chain or an agent** and view all the LLM calls originating from that request.

### **Tracing Chains & Agents**

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import OpenAI
from langchain_community.utilities import Portkey

# Add the Portkey API Key from your account
headers = Portkey.Config(
    api_key = "<PORTKEY_API_KEY>",
    trace_id = "fef659"
)

llm = OpenAI(temperature=0, headers=headers)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
```

**You can see the requests' logs along with the trace id on Portkey dashboard:**

<img src="/img/portkey-dashboard.gif" height="250"/>
<img src="/img/portkey-tracing.png" height="250"/>

## Advanced Features

1. **Logging:** Log all your LLM requests automatically by sending them through Portkey. Each request log contains `timestamp`, `model name`, `total cost`, `request time`, `request json`, `response json`, and additional Portkey features.
2. **Tracing:** A trace id can be passed along with each request and is visible on the logs on the Portkey dashboard. You can also set a **distinct trace id** for each request. You can [append user feedback](https://docs.portkey.ai/key-features/feedback-api) to a trace id as well.
3. **Caching:** Respond to previously served customers' queries from cache instead of sending them again to OpenAI. Match exact strings OR semantically similar strings. Cache can save costs and reduce latencies by 20x.
4. **Retries:** Automatically reprocess any unsuccessful API requests **up to 5** times. Uses an **exponential backoff** strategy, which spaces out retry attempts to prevent network overload.
5. **Tagging:** Track and audit each user interaction in high detail with predefined tags.
| Feature | Config Key | Value (Type) | Required/Optional |
| -- | -- | -- | -- |
| API Key | `api_key` | API Key (`string`) | ✅ Required |
| [Tracing Requests](https://docs.portkey.ai/key-features/request-tracing) | `trace_id` | Custom `string` | ❔ Optional |
| [Automatic Retries](https://docs.portkey.ai/key-features/automatic-retries) | `retry_count` | `integer` [1,2,3,4,5] | ❔ Optional |
| [Enabling Cache](https://docs.portkey.ai/key-features/request-caching) | `cache` | `simple` OR `semantic` | ❔ Optional |
| Cache Force Refresh | `cache_force_refresh` | `True` | ❔ Optional |
| Set Cache Expiry | `cache_age` | `integer` (in seconds) | ❔ Optional |
| [Add User](https://docs.portkey.ai/key-features/custom-metadata) | `user` | `string` | ❔ Optional |
| [Add Organisation](https://docs.portkey.ai/key-features/custom-metadata) | `organisation` | `string` | ❔ Optional |
| [Add Environment](https://docs.portkey.ai/key-features/custom-metadata) | `environment` | `string` | ❔ Optional |
| [Add Prompt (version/id/string)](https://docs.portkey.ai/key-features/custom-metadata) | `prompt` | `string` | ❔ Optional |

## **Enabling all Portkey Features:**

```py
headers = Portkey.Config(

    # Mandatory
    api_key="<PORTKEY_API_KEY>",

    # Cache Options
    cache="semantic",
    cache_force_refresh="True",
    cache_age=1729,

    # Advanced
    retry_count=5,
    trace_id="langchain_agent",

    # Metadata
    environment="production",
    user="john",
    organisation="acme",
    prompt="Frost"
)
```

For detailed information on each feature and how to use it, [please refer to the Portkey docs](https://docs.portkey.ai). If you have any questions or need further assistance, [reach out to us on Twitter](https://twitter.com/portkeyai).
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\portkey\index.md
.md
--- sidebar_class_name: hidden --- # LangSmith [LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents to help you move from prototype to production. Check out the [interactive walkthrough](/docs/langsmith/walkthrough) to get started. For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/). For tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow, check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include: - Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)). - Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)). - How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)). - How to fine-tune an LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)). - How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb))
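To start sending traces from a LangChain application, the usual minimal setup is a pair of environment variables. A hedged sketch follows; the API key value is a placeholder, and the walkthrough linked above covers the details:

```python
import os

# Placeholder key; create one on the LangSmith settings page.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"

# Any chain, agent, or LLM call made after this point is traced to LangSmith.
```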
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\langsmith\index.md
.txt
What I Worked On February 2021 Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep. The first programs I tried writing were on the IBM 1401 that our school district used for what was then called "data processing." This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines โ€” CPU, disk drives, printer, card reader โ€” sitting up on a raised floor under bright fluorescent lights. The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in the card reader and press a button to load the program into memory and run it. The result would ordinarily be to print something on the spectacularly loud printer. I was puzzled by the 1401. I couldn't figure out what to do with it. And in retrospect there's not much I could have done with it. The only form of input to programs was data stored on punched cards, and I didn't have any data stored on punched cards. The only other option was to do things that didn't rely on any input, like calculate approximations of pi, but I didn't know enough math to do anything interesting of that type. So I'm not surprised I can't remember any programs I wrote, because they can't have done much. My clearest memory is of the moment I learned it was possible for programs not to terminate, when one of mine didn't. On a machine without time-sharing, this was a social as well as a technical error, as the data center manager's expression made clear. With microcomputers, everything changed. Now you could have a computer sitting right in front of you, on a desk, that could respond to your keystrokes as it was running instead of just churning through a stack of punch cards and then stopping. [1] The first of my friends to get a microcomputer built it himself. It was sold as a kit by Heathkit. I remember vividly how impressed and envious I felt watching him sitting in front of it, typing programs right into the computer. Computers were expensive in those days and it took me years of nagging before I convinced my father to buy one, a TRS-80, in about 1980. The gold standard then was the Apple II, but a TRS-80 was good enough. This was when I really started programming. I wrote simple games, a program to predict how high my model rockets would fly, and a word processor that my father used to write at least one book. There was only room in memory for about 2 pages of text, so he'd write 2 pages at a time and then print them out, but it was a lot better than a typewriter. Though I liked programming, I didn't plan to study it in college. In college I was going to study philosophy, which sounded much more powerful. It seemed, to my naive high school self, to be the study of the ultimate truths, compared to which the things studied in other fields would be mere domain knowledge. What I discovered when I got to college was that the other fields took up so much of the space of ideas that there wasn't much left for these supposed ultimate truths. 
All that seemed left for philosophy were edge cases that people in other fields felt could safely be ignored. I couldn't have put this into words when I was 18. All I knew at the time was that I kept taking philosophy courses and they kept being boring. So I decided to switch to AI. AI was in the air in the mid 1980s, but there were two things especially that made me want to work on it: a novel by Heinlein called The Moon is a Harsh Mistress, which featured an intelligent computer called Mike, and a PBS documentary that showed Terry Winograd using SHRDLU. I haven't tried rereading The Moon is a Harsh Mistress, so I don't know how well it has aged, but when I read it I was drawn entirely into its world. It seemed only a matter of time before we'd have Mike, and when I saw Winograd using SHRDLU, it seemed like that time would be a few years at most. All you had to do was teach SHRDLU more words. There weren't any classes in AI at Cornell then, not even graduate classes, so I started trying to teach myself. Which meant learning Lisp, since in those days Lisp was regarded as the language of AI. The commonly used programming languages then were pretty primitive, and programmers' ideas correspondingly so. The default language at Cornell was a Pascal-like language called PL/I, and the situation was similar elsewhere. Learning Lisp expanded my concept of a program so fast that it was years before I started to have a sense of where the new limits were. This was more like it; this was what I had expected college to do. It wasn't happening in a class, like it was supposed to, but that was ok. For the next couple years I was on a roll. I knew what I was going to do. For my undergraduate thesis, I reverse-engineered SHRDLU. My God did I love working on that program. It was a pleasing bit of code, but what made it even more exciting was my belief โ€” hard to imagine now, but not unique in 1985 โ€” that it was already climbing the lower slopes of intelligence. I had gotten into a program at Cornell that didn't make you choose a major. You could take whatever classes you liked, and choose whatever you liked to put on your degree. I of course chose "Artificial Intelligence." When I got the actual physical diploma, I was dismayed to find that the quotes had been included, which made them read as scare-quotes. At the time this bothered me, but now it seems amusingly accurate, for reasons I was about to discover. I applied to 3 grad schools: MIT and Yale, which were renowned for AI at the time, and Harvard, which I'd visited because Rich Draves went there, and was also home to Bill Woods, who'd invented the type of parser I used in my SHRDLU clone. Only Harvard accepted me, so that was where I went. I don't remember the moment it happened, or if there even was a specific moment, but during the first year of grad school I realized that AI, as practiced at the time, was a hoax. By which I mean the sort of AI in which a program that's told "the dog is sitting on the chair" translates this into some formal representation and adds it to the list of things it knows. What these programs really showed was that there's a subset of natural language that's a formal language. But a very proper subset. It was clear that there was an unbridgeable gap between what they could do and actually understanding natural language. It was not, in fact, simply a matter of teaching SHRDLU more words. That whole way of doing AI, with explicit data structures representing concepts, was not going to work. 
Its brokenness did, as so often happens, generate a lot of opportunities to write papers about various band-aids that could be applied to it, but it was never going to get us Mike. So I looked around to see what I could salvage from the wreckage of my plans, and there was Lisp. I knew from experience that Lisp was interesting for its own sake and not just for its association with AI, even though that was the main reason people cared about it at the time. So I decided to focus on Lisp. In fact, I decided to write a book about Lisp hacking. It's scary to think how little I knew about Lisp hacking when I started writing that book. But there's nothing like writing a book about something to help you learn it. The book, On Lisp, wasn't published till 1993, but I wrote much of it in grad school. Computer Science is an uneasy alliance between two halves, theory and systems. The theory people prove things, and the systems people build things. I wanted to build things. I had plenty of respect for theory โ€” indeed, a sneaking suspicion that it was the more admirable of the two halves โ€” but building things seemed so much more exciting. The problem with systems work, though, was that it didn't last. Any program you wrote today, no matter how good, would be obsolete in a couple decades at best. People might mention your software in footnotes, but no one would actually use it. And indeed, it would seem very feeble work. Only people with a sense of the history of the field would even realize that, in its time, it had been good. There were some surplus Xerox Dandelions floating around the computer lab at one point. Anyone who wanted one to play around with could have one. I was briefly tempted, but they were so slow by present standards; what was the point? No one else wanted one either, so off they went. That was what happened to systems work. I wanted not just to build things, but to build things that would last. In this dissatisfied state I went in 1988 to visit Rich Draves at CMU, where he was in grad school. One day I went to visit the Carnegie Institute, where I'd spent a lot of time as a kid. While looking at a painting there I realized something that might seem obvious, but was a big surprise to me. There, right on the wall, was something you could make that would last. Paintings didn't become obsolete. Some of the best ones were hundreds of years old. And moreover this was something you could make a living doing. Not as easily as you could by writing software, of course, but I thought if you were really industrious and lived really cheaply, it had to be possible to make enough to survive. And as an artist you could be truly independent. You wouldn't have a boss, or even need to get research funding. I had always liked looking at paintings. Could I make them? I had no idea. I'd never imagined it was even possible. I knew intellectually that people made art โ€” that it didn't just appear spontaneously โ€” but it was as if the people who made it were a different species. They either lived long ago or were mysterious geniuses doing strange things in profiles in Life magazine. The idea of actually being able to make art, to put that verb before that noun, seemed almost miraculous. That fall I started taking art classes at Harvard. Grad students could take classes in any department, and my advisor, Tom Cheatham, was very easy going. If he even knew about the strange classes I was taking, he never said anything. 
So now I was in a PhD program in computer science, yet planning to be an artist, yet also genuinely in love with Lisp hacking and working away at On Lisp. In other words, like many a grad student, I was working energetically on multiple projects that were not my thesis. I didn't see a way out of this situation. I didn't want to drop out of grad school, but how else was I going to get out? I remember when my friend Robert Morris got kicked out of Cornell for writing the internet worm of 1988, I was envious that he'd found such a spectacular way to get out of grad school. Then one day in April 1990 a crack appeared in the wall. I ran into professor Cheatham and he asked if I was far enough along to graduate that June. I didn't have a word of my dissertation written, but in what must have been the quickest bit of thinking in my life, I decided to take a shot at writing one in the 5 weeks or so that remained before the deadline, reusing parts of On Lisp where I could, and I was able to respond, with no perceptible delay "Yes, I think so. I'll give you something to read in a few days." I picked applications of continuations as the topic. In retrospect I should have written about macros and embedded languages. There's a whole world there that's barely been explored. But all I wanted was to get out of grad school, and my rapidly written dissertation sufficed, just barely. Meanwhile I was applying to art schools. I applied to two: RISD in the US, and the Accademia di Belli Arti in Florence, which, because it was the oldest art school, I imagined would be good. RISD accepted me, and I never heard back from the Accademia, so off to Providence I went. I'd applied for the BFA program at RISD, which meant in effect that I had to go to college again. This was not as strange as it sounds, because I was only 25, and art schools are full of people of different ages. RISD counted me as a transfer sophomore and said I had to do the foundation that summer. The foundation means the classes that everyone has to take in fundamental subjects like drawing, color, and design. Toward the end of the summer I got a big surprise: a letter from the Accademia, which had been delayed because they'd sent it to Cambridge England instead of Cambridge Massachusetts, inviting me to take the entrance exam in Florence that fall. This was now only weeks away. My nice landlady let me leave my stuff in her attic. I had some money saved from consulting work I'd done in grad school; there was probably enough to last a year if I lived cheaply. Now all I had to do was learn Italian. Only stranieri (foreigners) had to take this entrance exam. In retrospect it may well have been a way of excluding them, because there were so many stranieri attracted by the idea of studying art in Florence that the Italian students would otherwise have been outnumbered. I was in decent shape at painting and drawing from the RISD foundation that summer, but I still don't know how I managed to pass the written exam. I remember that I answered the essay question by writing about Cezanne, and that I cranked up the intellectual level as high as I could to make the most of my limited vocabulary. [2] I'm only up to age 25 and already there are such conspicuous patterns. Here I was, yet again about to attend some august institution in the hopes of learning about some prestigious subject, and yet again about to be disappointed. 
The students and faculty in the painting department at the Accademia were the nicest people you could imagine, but they had long since arrived at an arrangement whereby the students wouldn't require the faculty to teach anything, and in return the faculty wouldn't require the students to learn anything. And at the same time all involved would adhere outwardly to the conventions of a 19th century atelier. We actually had one of those little stoves, fed with kindling, that you see in 19th century studio paintings, and a nude model sitting as close to it as possible without getting burned. Except hardly anyone else painted her besides me. The rest of the students spent their time chatting or occasionally trying to imitate things they'd seen in American art magazines. Our model turned out to live just down the street from me. She made a living from a combination of modelling and making fakes for a local antique dealer. She'd copy an obscure old painting out of a book, and then he'd take the copy and maltreat it to make it look old. [3] While I was a student at the Accademia I started painting still lives in my bedroom at night. These paintings were tiny, because the room was, and because I painted them on leftover scraps of canvas, which was all I could afford at the time. Painting still lives is different from painting people, because the subject, as its name suggests, can't move. People can't sit for more than about 15 minutes at a time, and when they do they don't sit very still. So the traditional m.o. for painting people is to know how to paint a generic person, which you then modify to match the specific person you're painting. Whereas a still life you can, if you want, copy pixel by pixel from what you're seeing. You don't want to stop there, of course, or you get merely photographic accuracy, and what makes a still life interesting is that it's been through a head. You want to emphasize the visual cues that tell you, for example, that the reason the color changes suddenly at a certain point is that it's the edge of an object. By subtly emphasizing such things you can make paintings that are more realistic than photographs not just in some metaphorical sense, but in the strict information-theoretic sense. [4] I liked painting still lives because I was curious about what I was seeing. In everyday life, we aren't consciously aware of much we're seeing. Most visual perception is handled by low-level processes that merely tell your brain "that's a water droplet" without telling you details like where the lightest and darkest points are, or "that's a bush" without telling you the shape and position of every leaf. This is a feature of brains, not a bug. In everyday life it would be distracting to notice every leaf on every bush. But when you have to paint something, you have to look more closely, and when you do there's a lot to see. You can still be noticing new things after days of trying to paint something people usually take for granted, just as you can after days of trying to write an essay about something people usually take for granted. This is not the only way to paint. I'm not 100% sure it's even a good way to paint. But it seemed a good enough bet to be worth trying. Our teacher, professor Ulivi, was a nice guy. He could see I worked hard, and gave me a good grade, which he wrote down in a sort of passport each student had. But the Accademia wasn't teaching me anything except Italian, and my money was running out, so at the end of the first year I went back to the US. 
I wanted to go back to RISD, but I was now broke and RISD was very expensive, so I decided to get a job for a year and then return to RISD the next fall. I got one at a company called Interleaf, which made software for creating documents. You mean like Microsoft Word? Exactly. That was how I learned that low end software tends to eat high end software. But Interleaf still had a few years to live yet. [5] Interleaf had done something pretty bold. Inspired by Emacs, they'd added a scripting language, and even made the scripting language a dialect of Lisp. Now they wanted a Lisp hacker to write things in it. This was the closest thing I've had to a normal job, and I hereby apologize to my boss and coworkers, because I was a bad employee. Their Lisp was the thinnest icing on a giant C cake, and since I didn't know C and didn't want to learn it, I never understood most of the software. Plus I was terribly irresponsible. This was back when a programming job meant showing up every day during certain working hours. That seemed unnatural to me, and on this point the rest of the world is coming around to my way of thinking, but at the time it caused a lot of friction. Toward the end of the year I spent much of my time surreptitiously working on On Lisp, which I had by this time gotten a contract to publish. The good part was that I got paid huge amounts of money, especially by art student standards. In Florence, after paying my part of the rent, my budget for everything else had been $7 a day. Now I was getting paid more than 4 times that every hour, even when I was just sitting in a meeting. By living cheaply I not only managed to save enough to go back to RISD, but also paid off my college loans. I learned some useful things at Interleaf, though they were mostly about what not to do. I learned that it's better for technology companies to be run by product people than sales people (though sales is a real skill and people who are good at it are really good at it), that it leads to bugs when code is edited by too many people, that cheap office space is no bargain if it's depressing, that planned meetings are inferior to corridor conversations, that big, bureaucratic customers are a dangerous source of money, and that there's not much overlap between conventional office hours and the optimal time for hacking, or conventional offices and the optimal place for it. But the most important thing I learned, and which I used in both Viaweb and Y Combinator, is that the low end eats the high end: that it's good to be the "entry level" option, even though that will be less prestigious, because if you're not, someone else will be, and will squash you against the ceiling. Which in turn means that prestige is a danger sign. When I left to go back to RISD the next fall, I arranged to do freelance work for the group that did projects for customers, and this was how I survived for the next several years. When I came back to visit for a project later on, someone told me about a new thing called HTML, which was, as he described it, a derivative of SGML. Markup language enthusiasts were an occupational hazard at Interleaf and I ignored him, but this HTML thing later became a big part of my life. In the fall of 1992 I moved back to Providence to continue at RISD. The foundation had merely been intro stuff, and the Accademia had been a (very civilized) joke. Now I was going to see what real art school was like. But alas it was more like the Accademia than not. 
Better organized, certainly, and a lot more expensive, but it was now becoming clear that art school did not bear the same relationship to art that medical school bore to medicine. At least not the painting department. The textile department, which my next door neighbor belonged to, seemed to be pretty rigorous. No doubt illustration and architecture were too. But painting was post-rigorous. Painting students were supposed to express themselves, which to the more worldly ones meant to try to cook up some sort of distinctive signature style. A signature style is the visual equivalent of what in show business is known as a "schtick": something that immediately identifies the work as yours and no one else's. For example, when you see a painting that looks like a certain kind of cartoon, you know it's by Roy Lichtenstein. So if you see a big painting of this type hanging in the apartment of a hedge fund manager, you know he paid millions of dollars for it. That's not always why artists have a signature style, but it's usually why buyers pay a lot for such work. [6] There were plenty of earnest students too: kids who "could draw" in high school, and now had come to what was supposed to be the best art school in the country, to learn to draw even better. They tended to be confused and demoralized by what they found at RISD, but they kept going, because painting was what they did. I was not one of the kids who could draw in high school, but at RISD I was definitely closer to their tribe than the tribe of signature style seekers. I learned a lot in the color class I took at RISD, but otherwise I was basically teaching myself to paint, and I could do that for free. So in 1993 I dropped out. I hung around Providence for a bit, and then my college friend Nancy Parmet did me a big favor. A rent-controlled apartment in a building her mother owned in New York was becoming vacant. Did I want it? It wasn't much more than my current place, and New York was supposed to be where the artists were. So yes, I wanted it! [7] Asterix comics begin by zooming in on a tiny corner of Roman Gaul that turns out not to be controlled by the Romans. You can do something similar on a map of New York City: if you zoom in on the Upper East Side, there's a tiny corner that's not rich, or at least wasn't in 1993. It's called Yorkville, and that was my new home. Now I was a New York artist โ€” in the strictly technical sense of making paintings and living in New York. I was nervous about money, because I could sense that Interleaf was on the way down. Freelance Lisp hacking work was very rare, and I didn't want to have to program in another language, which in those days would have meant C++ if I was lucky. So with my unerring nose for financial opportunity, I decided to write another book on Lisp. This would be a popular book, the sort of book that could be used as a textbook. I imagined myself living frugally off the royalties and spending all my time painting. (The painting on the cover of this book, ANSI Common Lisp, is one that I painted around this time.) The best thing about New York for me was the presence of Idelle and Julian Weber. Idelle Weber was a painter, one of the early photorealists, and I'd taken her painting class at Harvard. I've never known a teacher more beloved by her students. Large numbers of former students kept in touch with her, including me. After I moved to New York I became her de facto studio assistant. She liked to paint on big, square canvases, 4 to 5 feet on a side. 
One day in late 1994 as I was stretching one of these monsters there was something on the radio about a famous fund manager. He wasn't that much older than me, and was super rich. The thought suddenly occurred to me: why don't I become rich? Then I'll be able to work on whatever I want. Meanwhile I'd been hearing more and more about this new thing called the World Wide Web. Robert Morris showed it to me when I visited him in Cambridge, where he was now in grad school at Harvard. It seemed to me that the web would be a big deal. I'd seen what graphical user interfaces had done for the popularity of microcomputers. It seemed like the web would do the same for the internet. If I wanted to get rich, here was the next train leaving the station. I was right about that part. What I got wrong was the idea. I decided we should start a company to put art galleries online. I can't honestly say, after reading so many Y Combinator applications, that this was the worst startup idea ever, but it was up there. Art galleries didn't want to be online, and still don't, not the fancy ones. That's not how they sell. I wrote some software to generate web sites for galleries, and Robert wrote some to resize images and set up an http server to serve the pages. Then we tried to sign up galleries. To call this a difficult sale would be an understatement. It was difficult to give away. A few galleries let us make sites for them for free, but none paid us. Then some online stores started to appear, and I realized that except for the order buttons they were identical to the sites we'd been generating for galleries. This impressive-sounding thing called an "internet storefront" was something we already knew how to build. So in the summer of 1995, after I submitted the camera-ready copy of ANSI Common Lisp to the publishers, we started trying to write software to build online stores. At first this was going to be normal desktop software, which in those days meant Windows software. That was an alarming prospect, because neither of us knew how to write Windows software or wanted to learn. We lived in the Unix world. But we decided we'd at least try writing a prototype store builder on Unix. Robert wrote a shopping cart, and I wrote a new site generator for stores โ€” in Lisp, of course. We were working out of Robert's apartment in Cambridge. His roommate was away for big chunks of time, during which I got to sleep in his room. For some reason there was no bed frame or sheets, just a mattress on the floor. One morning as I was lying on this mattress I had an idea that made me sit up like a capital L. What if we ran the software on the server, and let users control it by clicking on links? Then we'd never have to write anything to run on users' computers. We could generate the sites on the same server we'd serve them from. Users wouldn't need anything more than a browser. This kind of software, known as a web app, is common now, but at the time it wasn't clear that it was even possible. To find out, we decided to try making a version of our store builder that you could control through the browser. A couple days later, on August 12, we had one that worked. The UI was horrible, but it proved you could build a whole store through the browser, without any client software or typing anything into the command line on the server. Now we felt like we were really onto something. I had visions of a whole new generation of software working this way. You wouldn't need versions, or ports, or any of that crap. 
At Interleaf there had been a whole group called Release Engineering that seemed to be at least as big as the group that actually wrote the software. Now you could just update the software right on the server. We started a new company we called Viaweb, after the fact that our software worked via the web, and we got $10,000 in seed funding from Idelle's husband Julian. In return for that and doing the initial legal work and giving us business advice, we gave him 10% of the company. Ten years later this deal became the model for Y Combinator's. We knew founders needed something like this, because we'd needed it ourselves. At this stage I had a negative net worth, because the thousand dollars or so I had in the bank was more than counterbalanced by what I owed the government in taxes. (Had I diligently set aside the proper proportion of the money I'd made consulting for Interleaf? No, I had not.) So although Robert had his graduate student stipend, I needed that seed funding to live on. We originally hoped to launch in September, but we got more ambitious about the software as we worked on it. Eventually we managed to build a WYSIWYG site builder, in the sense that as you were creating pages, they looked exactly like the static ones that would be generated later, except that instead of leading to static pages, the links all referred to closures stored in a hash table on the server. It helped to have studied art, because the main goal of an online store builder is to make users look legit, and the key to looking legit is high production values. If you get page layouts and fonts and colors right, you can make a guy running a store out of his bedroom look more legit than a big company. (If you're curious why my site looks so old-fashioned, it's because it's still made with this software. It may look clunky today, but in 1996 it was the last word in slick.) In September, Robert rebelled. "We've been working on this for a month," he said, "and it's still not done." This is funny in retrospect, because he would still be working on it almost 3 years later. But I decided it might be prudent to recruit more programmers, and I asked Robert who else in grad school with him was really good. He recommended Trevor Blackwell, which surprised me at first, because at that point I knew Trevor mainly for his plan to reduce everything in his life to a stack of notecards, which he carried around with him. But Rtm was right, as usual. Trevor turned out to be a frighteningly effective hacker. It was a lot of fun working with Robert and Trevor. They're the two most independent-minded people I know, and in completely different ways. If you could see inside Rtm's brain it would look like a colonial New England church, and if you could see inside Trevor's it would look like the worst excesses of Austrian Rococo. We opened for business, with 6 stores, in January 1996. It was just as well we waited a few months, because although we worried we were late, we were actually almost fatally early. There was a lot of talk in the press then about ecommerce, but not many people actually wanted online stores. [8] There were three main parts to the software: the editor, which people used to build sites and which I wrote, the shopping cart, which Robert wrote, and the manager, which kept track of orders and statistics, and which Trevor wrote. In its time, the editor was one of the best general-purpose site builders. 
I kept the code tight and didn't have to integrate with any other software except Robert's and Trevor's, so it was quite fun to work on. If all I'd had to do was work on this software, the next 3 years would have been the easiest of my life. Unfortunately I had to do a lot more, all of it stuff I was worse at than programming, and the next 3 years were instead the most stressful. There were a lot of startups making ecommerce software in the second half of the 90s. We were determined to be the Microsoft Word, not the Interleaf. Which meant being easy to use and inexpensive. It was lucky for us that we were poor, because that caused us to make Viaweb even more inexpensive than we realized. We charged $100 a month for a small store and $300 a month for a big one. This low price was a big attraction, and a constant thorn in the sides of competitors, but it wasn't because of some clever insight that we set the price low. We had no idea what businesses paid for things. $300 a month seemed like a lot of money to us. We did a lot of things right by accident like that. For example, we did what's now called "doing things that don't scale," although at the time we would have described it as "being so lame that we're driven to the most desperate measures to get users." The most common of which was building stores for them. This seemed particularly humiliating, since the whole raison d'etre of our software was that people could use it to make their own stores. But anything to get users. We learned a lot more about retail than we wanted to know. For example, that if you could only have a small image of a man's shirt (and all images were small then by present standards), it was better to have a closeup of the collar than a picture of the whole shirt. The reason I remember learning this was that it meant I had to rescan about 30 images of men's shirts. My first set of scans were so beautiful too. Though this felt wrong, it was exactly the right thing to be doing. Building stores for users taught us about retail, and about how it felt to use our software. I was initially both mystified and repelled by "business" and thought we needed a "business person" to be in charge of it, but once we started to get users, I was converted, in much the same way I was converted to fatherhood once I had kids. Whatever users wanted, I was all theirs. Maybe one day we'd have so many users that I couldn't scan their images for them, but in the meantime there was nothing more important to do. Another thing I didn't get at the time is that growth rate is the ultimate test of a startup. Our growth rate was fine. We had about 70 stores at the end of 1996 and about 500 at the end of 1997. I mistakenly thought the thing that mattered was the absolute number of users. And that is the thing that matters in the sense that that's how much money you're making, and if you're not making enough, you might go out of business. But in the long term the growth rate takes care of the absolute number. If we'd been a startup I was advising at Y Combinator, I would have said: Stop being so stressed out, because you're doing fine. You're growing 7x a year. Just don't hire too many more people and you'll soon be profitable, and then you'll control your own destiny. Alas I hired lots more people, partly because our investors wanted me to, and partly because that's what startups did during the Internet Bubble. A company with just a handful of employees would have seemed amateurish. 
So we didn't reach breakeven until about when Yahoo bought us in the summer of 1998. Which in turn meant we were at the mercy of investors for the entire life of the company. And since both we and our investors were noobs at startups, the result was a mess even by startup standards. It was a huge relief when Yahoo bought us. In principle our Viaweb stock was valuable. It was a share in a business that was profitable and growing rapidly. But it didn't feel very valuable to me; I had no idea how to value a business, but I was all too keenly aware of the near-death experiences we seemed to have every few months. Nor had I changed my grad student lifestyle significantly since we started. So when Yahoo bought us it felt like going from rags to riches. Since we were going to California, I bought a car, a yellow 1998 VW GTI. I remember thinking that its leather seats alone were by far the most luxurious thing I owned. The next year, from the summer of 1998 to the summer of 1999, must have been the least productive of my life. I didn't realize it at the time, but I was worn out from the effort and stress of running Viaweb. For a while after I got to California I tried to continue my usual m.o. of programming till 3 in the morning, but fatigue combined with Yahoo's prematurely aged culture and grim cube farm in Santa Clara gradually dragged me down. After a few months it felt disconcertingly like working at Interleaf. Yahoo had given us a lot of options when they bought us. At the time I thought Yahoo was so overvalued that they'd never be worth anything, but to my astonishment the stock went up 5x in the next year. I hung on till the first chunk of options vested, then in the summer of 1999 I left. It had been so long since I'd painted anything that I'd half forgotten why I was doing this. My brain had been entirely full of software and men's shirts for 4 years. But I had done this to get rich so I could paint, I reminded myself, and now I was rich, so I should go paint. When I said I was leaving, my boss at Yahoo had a long conversation with me about my plans. I told him all about the kinds of pictures I wanted to paint. At the time I was touched that he took such an interest in me. Now I realize it was because he thought I was lying. My options at that point were worth about $2 million a month. If I was leaving that kind of money on the table, it could only be to go and start some new startup, and if I did, I might take people with me. This was the height of the Internet Bubble, and Yahoo was ground zero of it. My boss was at that moment a billionaire. Leaving then to start a new startup must have seemed to him an insanely, and yet also plausibly, ambitious plan. But I really was quitting to paint, and I started immediately. There was no time to lose. I'd already burned 4 years getting rich. Now when I talk to founders who are leaving after selling their companies, my advice is always the same: take a vacation. That's what I should have done, just gone off somewhere and done nothing for a month or two, but the idea never occurred to me. So I tried to paint, but I just didn't seem to have any energy or ambition. Part of the problem was that I didn't know many people in California. I'd compounded this problem by buying a house up in the Santa Cruz Mountains, with a beautiful view but miles from anywhere. 
I stuck it out for a few more months, then in desperation I went back to New York, where unless you understand about rent control you'll be surprised to hear I still had my apartment, sealed up like a tomb of my old life. Idelle was in New York at least, and there were other people trying to paint there, even though I didn't know any of them. When I got back to New York I resumed my old life, except now I was rich. It was as weird as it sounds. I resumed all my old patterns, except now there were doors where there hadn't been. Now when I was tired of walking, all I had to do was raise my hand, and (unless it was raining) a taxi would stop to pick me up. Now when I walked past charming little restaurants I could go in and order lunch. It was exciting for a while. Painting started to go better. I experimented with a new kind of still life where I'd paint one painting in the old way, then photograph it and print it, blown up, on canvas, and then use that as the underpainting for a second still life, painted from the same objects (which hopefully hadn't rotted yet). Meanwhile I looked for an apartment to buy. Now I could actually choose what neighborhood to live in. Where, I asked myself and various real estate agents, is the Cambridge of New York? Aided by occasional visits to actual Cambridge, I gradually realized there wasn't one. Huh. Around this time, in the spring of 2000, I had an idea. It was clear from our experience with Viaweb that web apps were the future. Why not build a web app for making web apps? Why not let people edit code on our server through the browser, and then host the resulting applications for them? [9] You could run all sorts of services on the servers that these applications could use just by making an API call: making and receiving phone calls, manipulating images, taking credit card payments, etc. I got so excited about this idea that I couldn't think about anything else. It seemed obvious that this was the future. I didn't particularly want to start another company, but it was clear that this idea would have to be embodied as one, so I decided to move to Cambridge and start it. I hoped to lure Robert into working on it with me, but there I ran into a hitch. Robert was now a postdoc at MIT, and though he'd made a lot of money the last time I'd lured him into working on one of my schemes, it had also been a huge time sink. So while he agreed that it sounded like a plausible idea, he firmly refused to work on it. Hmph. Well, I'd do it myself then. I recruited Dan Giffin, who had worked for Viaweb, and two undergrads who wanted summer jobs, and we got to work trying to build what it's now clear is about twenty companies and several open-source projects worth of software. The language for defining applications would of course be a dialect of Lisp. But I wasn't so naive as to assume I could spring an overt Lisp on a general audience; we'd hide the parentheses, like Dylan did. By then there was a name for the kind of company Viaweb was, an "application service provider," or ASP. This name didn't last long before it was replaced by "software as a service," but it was current for long enough that I named this new company after it: it was going to be called Aspra. I started working on the application builder, Dan worked on network infrastructure, and the two undergrads worked on the first two services (images and phone calls). 
But about halfway through the summer I realized I really didn't want to run a company โ€” especially not a big one, which it was looking like this would have to be. I'd only started Viaweb because I needed the money. Now that I didn't need money anymore, why was I doing this? If this vision had to be realized as a company, then screw the vision. I'd build a subset that could be done as an open-source project. Much to my surprise, the time I spent working on this stuff was not wasted after all. After we started Y Combinator, I would often encounter startups working on parts of this new architecture, and it was very useful to have spent so much time thinking about it and even trying to write some of it. The subset I would build as an open-source project was the new Lisp, whose parentheses I now wouldn't even have to hide. A lot of Lisp hackers dream of building a new Lisp, partly because one of the distinctive features of the language is that it has dialects, and partly, I think, because we have in our minds a Platonic form of Lisp that all existing dialects fall short of. I certainly did. So at the end of the summer Dan and I switched to working on this new dialect of Lisp, which I called Arc, in a house I bought in Cambridge. The following spring, lightning struck. I was invited to give a talk at a Lisp conference, so I gave one about how we'd used Lisp at Viaweb. Afterward I put a postscript file of this talk online, on paulgraham.com, which I'd created years before using Viaweb but had never used for anything. In one day it got 30,000 page views. What on earth had happened? The referring urls showed that someone had posted it on Slashdot. [10] Wow, I thought, there's an audience. If I write something and put it on the web, anyone can read it. That may seem obvious now, but it was surprising then. In the print era there was a narrow channel to readers, guarded by fierce monsters known as editors. The only way to get an audience for anything you wrote was to get it published as a book, or in a newspaper or magazine. Now anyone could publish anything. This had been possible in principle since 1993, but not many people had realized it yet. I had been intimately involved with building the infrastructure of the web for most of that time, and a writer as well, and it had taken me 8 years to realize it. Even then it took me several years to understand the implications. It meant there would be a whole new generation of essays. [11] In the print era, the channel for publishing essays had been vanishingly small. Except for a few officially anointed thinkers who went to the right parties in New York, the only people allowed to publish essays were specialists writing about their specialties. There were so many essays that had never been written, because there had been no way to publish them. Now they could be, and I was going to write them. [12] I've worked on several different things, but to the extent there was a turning point where I figured out what to work on, it was when I started publishing essays online. From then on I knew that whatever else I did, I'd always write essays too. I knew that online essays would be a marginal medium at first. Socially they'd seem more like rants posted by nutjobs on their GeoCities sites than the genteel and beautifully typeset compositions published in The New Yorker. But by this point I knew enough to find that encouraging instead of discouraging. 
One of the most conspicuous patterns I've noticed in my life is how well it has worked, for me at least, to work on things that weren't prestigious. Still life has always been the least prestigious form of painting. Viaweb and Y Combinator both seemed lame when we started them. I still get the glassy eye from strangers when they ask what I'm writing, and I explain that it's an essay I'm going to publish on my web site. Even Lisp, though prestigious intellectually in something like the way Latin is, also seems about as hip. It's not that unprestigious types of work are good per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it's a sign both that there's something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren't prestigious doesn't guarantee you're on the right track, it at least guarantees you're not on the most common type of wrong one. Over the next several years I wrote lots of essays about all kinds of different topics. O'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office. One night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn't know but would probably like. One of the guests was someone I didn't know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out. Jessica was in charge of marketing at a Boston investment bank. This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And how colorful their stories were. So she decided to compile a book of interviews with startup founders. When the bank had financial problems and she had to fire half her staff, she started looking for a new job. In early 2005 she interviewed for a marketing job at a Boston VC firm. It took them weeks to make up their minds, and during this time I started telling her about all the things that needed to be fixed about venture capital. They should make a larger number of smaller investments instead of a handful of giant ones, they should be funding younger, more technical founders instead of MBAs, they should let the founders remain as CEO, and so on. One of my tricks for writing essays had always been to give talks. The prospect of having to stand up in front of a group of people and tell them something that won't waste their time is a great spur to the imagination. When the Harvard Computer Society, the undergrad computer club, asked me to give a talk, I decided I would tell them how to start a startup. Maybe they'd be able to avoid the worst of the mistakes we'd made. So I gave this talk, in the course of which I told them that the best sources of seed funding were successful startup founders, because then they'd be sources of advice too. 
Whereupon it seemed they were all looking expectantly at me. Horrified at the prospect of having my inbox flooded by business plans (if I'd only known), I blurted out "But not me!" and went on with the talk. But afterward it occurred to me that I should really stop procrastinating about angel investing. I'd been meaning to since Yahoo bought us, and now it was 7 years later and I still hadn't done one angel investment. Meanwhile I had been scheming with Robert and Trevor about projects we could work on together. I missed working with them, and it seemed like there had to be something we could collaborate on. As Jessica and I were walking home from dinner on March 11, at the corner of Garden and Walker streets, these three threads converged. Screw the VCs who were taking so long to make up their minds. We'd start our own investment firm and actually implement the ideas we'd been talking about. I'd fund it, and Jessica could quit her job and work for it, and we'd get Robert and Trevor as partners too. [13] Once again, ignorance worked in our favor. We had no idea how to be angel investors, and in Boston in 2005 there were no Ron Conways to learn from. So we just made what seemed like the obvious choices, and some of the things we did turned out to be novel. There are multiple components to Y Combinator, and we didn't figure them all out at once. The part we got first was to be an angel firm. In those days, those two words didn't go together. There were VC firms, which were organized companies with people whose job it was to make investments, but they only did big, million dollar investments. And there were angels, who did smaller investments, but these were individuals who were usually focused on other things and made investments on the side. And neither of them helped founders enough in the beginning. We knew how helpless founders were in some respects, because we remembered how helpless we'd been. For example, one thing Julian had done for us that seemed to us like magic was to get us set up as a company. We were fine writing fairly difficult software, but actually getting incorporated, with bylaws and stock and all that stuff, how on earth did you do that? Our plan was not only to make seed investments, but to do for startups everything Julian had done for us. YC was not organized as a fund. It was cheap enough to run that we funded it with our own money. That went right by 99% of readers, but professional investors are thinking "Wow, that means they got all the returns." But once again, this was not due to any particular insight on our part. We didn't know how VC firms were organized. It never occurred to us to try to raise a fund, and if it had, we wouldn't have known where to start. [14] The most distinctive thing about YC is the batch model: to fund a bunch of startups all at once, twice a year, and then to spend three months focusing intensively on trying to help them. That part we discovered by accident, not merely implicitly but explicitly due to our ignorance about investing. We needed to get experience as investors. What better way, we thought, than to fund a whole bunch of startups at once? We knew undergrads got temporary jobs at tech companies during the summer. Why not organize a summer program where they'd start startups instead? We wouldn't feel guilty for being in a sense fake investors, because they would in a similar sense be fake founders. 
So while we probably wouldn't make much money out of it, we'd at least get to practice being investors on them, and they for their part would probably have a more interesting summer than they would working at Microsoft. We'd use the building I owned in Cambridge as our headquarters. We'd all have dinner there once a week โ€” on tuesdays, since I was already cooking for the thursday diners on thursdays โ€” and after dinner we'd bring in experts on startups to give talks. We knew undergrads were deciding then about summer jobs, so in a matter of days we cooked up something we called the Summer Founders Program, and I posted an announcement on my site, inviting undergrads to apply. I had never imagined that writing essays would be a way to get "deal flow," as investors call it, but it turned out to be the perfect source. [15] We got 225 applications for the Summer Founders Program, and we were surprised to find that a lot of them were from people who'd already graduated, or were about to that spring. Already this SFP thing was starting to feel more serious than we'd intended. We invited about 20 of the 225 groups to interview in person, and from those we picked 8 to fund. They were an impressive group. That first batch included reddit, Justin Kan and Emmett Shear, who went on to found Twitch, Aaron Swartz, who had already helped write the RSS spec and would a few years later become a martyr for open access, and Sam Altman, who would later become the second president of YC. I don't think it was entirely luck that the first batch was so good. You had to be pretty bold to sign up for a weird thing like the Summer Founders Program instead of a summer job at a legit place like Microsoft or Goldman Sachs. The deal for startups was based on a combination of the deal we did with Julian ($10k for 10%) and what Robert said MIT grad students got for the summer ($6k). We invested $6k per founder, which in the typical two-founder case was $12k, in return for 6%. That had to be fair, because it was twice as good as the deal we ourselves had taken. Plus that first summer, which was really hot, Jessica brought the founders free air conditioners. [16] Fairly quickly I realized that we had stumbled upon the way to scale startup funding. Funding startups in batches was more convenient for us, because it meant we could do things for a lot of startups at once, but being part of a batch was better for the startups too. It solved one of the biggest problems faced by founders: the isolation. Now you not only had colleagues, but colleagues who understood the problems you were facing and could tell you how they were solving them. As YC grew, we started to notice other advantages of scale. The alumni became a tight community, dedicated to helping one another, and especially the current batch, whose shoes they remembered being in. We also noticed that the startups were becoming one another's customers. We used to refer jokingly to the "YC GDP," but as YC grows this becomes less and less of a joke. Now lots of startups get their initial set of customers almost entirely from among their batchmates. I had not originally intended YC to be a full-time job. I was going to do three things: hack, write essays, and work on YC. As YC grew, and I grew more excited about it, it started to take up a lot more than a third of my attention. But for the first few years I was still able to work on other things. In the summer of 2006, Robert and I started working on a new version of Arc. 
This one was reasonably fast, because it was compiled into Scheme. To test this new Arc, I wrote Hacker News in it. It was originally meant to be a news aggregator for startup founders and was called Startup News, but after a few months I got tired of reading about nothing but startups. Plus it wasn't startup founders we wanted to reach. It was future startup founders. So I changed the name to Hacker News and the topic to whatever engaged one's intellectual curiosity. HN was no doubt good for YC, but it was also by far the biggest source of stress for me. If all I'd had to do was select and help founders, life would have been so easy. And that implies that HN was a mistake. Surely the biggest source of stress in one's work should at least be something close to the core of the work. Whereas I was like someone who was in pain while running a marathon not from the exertion of running, but because I had a blister from an ill-fitting shoe. When I was dealing with some urgent problem during YC, there was about a 60% chance it had to do with HN, and a 40% chance it had to do with everything else combined. [17] As well as HN, I wrote all of YC's internal software in Arc. But while I continued to work a good deal in Arc, I gradually stopped working on Arc, partly because I didn't have time to, and partly because it was a lot less attractive to mess around with the language now that we had all this infrastructure depending on it. So now my three projects were reduced to two: writing essays and working on YC. YC was different from other kinds of work I've done. Instead of deciding for myself what to work on, the problems came to me. Every 6 months there was a new batch of startups, and their problems, whatever they were, became our problems. It was very engaging work, because their problems were quite varied, and the good founders were very effective. If you were trying to learn the most you could about startups in the shortest possible time, you couldn't have picked a better way to do it. There were parts of the job I didn't like. Disputes between cofounders, figuring out when people were lying to us, fighting with people who maltreated the startups, and so on. But I worked hard even at the parts I didn't like. I was haunted by something Kevin Hale once said about companies: "No one works harder than the boss." He meant it both descriptively and prescriptively, and it was the second part that scared me. I wanted YC to be good, so if how hard I worked set the upper bound on how hard everyone else worked, I'd better work very hard. One day in 2010, when he was visiting California for interviews, Robert Morris did something astonishing: he offered me unsolicited advice. I can only remember him doing that once before. One day at Viaweb, when I was bent over double from a kidney stone, he suggested that it would be a good idea for him to take me to the hospital. That was what it took for Rtm to offer unsolicited advice. So I remember his exact words very clearly. "You know," he said, "you should make sure Y Combinator isn't the last cool thing you do." At the time I didn't understand what he meant, but gradually it dawned on me that he was saying I should quit. This seemed strange advice, because YC was doing great. But if there was one thing rarer than Rtm offering advice, it was Rtm being wrong. So this set me thinking. It was true that on my current trajectory, YC would be the last thing I did, because it was only taking up more of my attention. 
It had already eaten Arc, and was in the process of eating essays too. Either YC was my life's work or I'd have to leave eventually. And it wasn't, so I would. In the summer of 2012 my mother had a stroke, and the cause turned out to be a blood clot caused by colon cancer. The stroke destroyed her balance, and she was put in a nursing home, but she really wanted to get out of it and back to her house, and my sister and I were determined to help her do it. I used to fly up to Oregon to visit her regularly, and I had a lot of time to think on those flights. On one of them I realized I was ready to hand YC over to someone else. I asked Jessica if she wanted to be president, but she didn't, so we decided we'd try to recruit Sam Altman. We talked to Robert and Trevor and we agreed to make it a complete changing of the guard. Up till that point YC had been controlled by the original LLC we four had started. But we wanted YC to last for a long time, and to do that it couldn't be controlled by the founders. So if Sam said yes, we'd let him reorganize YC. Robert and I would retire, and Jessica and Trevor would become ordinary partners. When we asked Sam if he wanted to be president of YC, initially he said no. He wanted to start a startup to make nuclear reactors. But I kept at it, and in October 2013 he finally agreed. We decided he'd take over starting with the winter 2014 batch. For the rest of 2013 I left running YC more and more to Sam, partly so he could learn the job, and partly because I was focused on my mother, whose cancer had returned. She died on January 15, 2014. We knew this was coming, but it was still hard when it did. I kept working on YC till March, to help get that batch of startups through Demo Day, then I checked out pretty completely. (I still talk to alumni and to new startups working on things I'm interested in, but that only takes a few hours a week.) What should I do next? Rtm's advice hadn't included anything about that. I wanted to do something completely different, so I decided I'd paint. I wanted to see how good I could get if I really focused on it. So the day after I stopped working on YC, I started painting. I was rusty and it took a while to get back into shape, but it was at least completely engaging. [18] I spent most of the rest of 2014 painting. I'd never been able to work so uninterruptedly before, and I got to be better than I had been. Not good enough, but better. Then in November, right in the middle of a painting, I ran out of steam. Up till that point I'd always been curious to see how the painting I was working on would turn out, but suddenly finishing this one seemed like a chore. So I stopped working on it and cleaned my brushes and haven't painted since. So far anyway. I realize that sounds rather wimpy. But attention is a zero sum game. If you can choose what to work on, and you choose a project that's not the best one (or at least a good one) for you, then it's getting in the way of another project that is. And at 50 there was some opportunity cost to screwing around. I started writing essays again, and wrote a bunch of new ones over the next few months. I even wrote a couple that weren't about startups. Then in March 2015 I started working on Lisp again. The distinctive thing about Lisp is that its core is a language defined by writing an interpreter in itself. It wasn't originally intended as a programming language in the ordinary sense. It was meant to be a formal model of computation, an alternative to the Turing machine. 
If you want to write an interpreter for a language in itself, what's the minimum set of predefined operators you need? The Lisp that John McCarthy invented, or more accurately discovered, is an answer to that question. [19] McCarthy didn't realize this Lisp could even be used to program computers till his grad student Steve Russell suggested it. Russell translated McCarthy's interpreter into IBM 704 machine language, and from that point Lisp started also to be a programming language in the ordinary sense. But its origins as a model of computation gave it a power and elegance that other languages couldn't match. It was this that attracted me in college, though I didn't understand why at the time. McCarthy's 1960 Lisp did nothing more than interpret Lisp expressions. It was missing a lot of things you'd want in a programming language. So these had to be added, and when they were, they weren't defined using McCarthy's original axiomatic approach. That wouldn't have been feasible at the time. McCarthy tested his interpreter by hand-simulating the execution of programs. But it was already getting close to the limit of interpreters you could test that way โ€” indeed, there was a bug in it that McCarthy had overlooked. To test a more complicated interpreter, you'd have had to run it, and computers then weren't powerful enough. Now they are, though. Now you could continue using McCarthy's axiomatic approach till you'd defined a complete programming language. And as long as every change you made to McCarthy's Lisp was a discoveredness-preserving transformation, you could, in principle, end up with a complete language that had this quality. Harder to do than to talk about, of course, but if it was possible in principle, why not try? So I decided to take a shot at it. It took 4 years, from March 26, 2015 to October 12, 2019. It was fortunate that I had a precisely defined goal, or it would have been hard to keep at it for so long. I wrote this new Lisp, called Bel, in itself in Arc. That may sound like a contradiction, but it's an indication of the sort of trickery I had to engage in to make this work. By means of an egregious collection of hacks I managed to make something close enough to an interpreter written in itself that could actually run. Not fast, but fast enough to test. I had to ban myself from writing essays during most of this time, or I'd never have finished. In late 2015 I spent 3 months writing essays, and when I went back to working on Bel I could barely understand the code. Not so much because it was badly written as because the problem is so convoluted. When you're working on an interpreter written in itself, it's hard to keep track of what's happening at what level, and errors can be practically encrypted by the time you get them. So I said no more essays till Bel was done. But I told few people about Bel while I was working on it. So for years it must have seemed that I was doing nothing, when in fact I was working harder than I'd ever worked on anything. Occasionally after wrestling for hours with some gruesome bug I'd check Twitter or HN and see someone asking "Does Paul Graham still code?" Working on Bel was hard but satisfying. I worked on it so intensively that at any given time I had a decent chunk of the code in my head and could write more there. I remember taking the boys to the coast on a sunny day in 2015 and figuring out how to deal with some problem involving continuations while I watched them play in the tide pools. It felt like I was doing life right. 
I remember that because I was slightly dismayed at how novel it felt. The good news is that I had more moments like this over the next few years. In the summer of 2016 we moved to England. We wanted our kids to see what it was like living in another country, and since I was a British citizen by birth, that seemed the obvious choice. We only meant to stay for a year, but we liked it so much that we still live there. So most of Bel was written in England. In the fall of 2019, Bel was finally finished. Like McCarthy's original Lisp, it's a spec rather than an implementation, although like McCarthy's Lisp it's a spec expressed as code. Now that I could write essays again, I wrote a bunch about topics I'd had stacked up. I kept writing essays through 2020, but I also started to think about other things I could work on. How should I choose what to do? Well, how had I chosen what to work on in the past? I wrote an essay for myself to answer that question, and I was surprised how long and messy the answer turned out to be. If this surprised me, who'd lived it, then I thought perhaps it would be interesting to other people, and encouraging to those with similarly messy lives. So I wrote a more detailed version for others to read, and this is the last sentence of it. Notes [1] My experience skipped a step in the evolution of computers: time-sharing machines with interactive OSes. I went straight from batch processing to microcomputers, which made microcomputers seem all the more exciting. [2] Italian words for abstract concepts can nearly always be predicted from their English cognates (except for occasional traps like polluzione). It's the everyday words that differ. So if you string together a lot of abstract concepts with a few simple verbs, you can make a little Italian go a long way. [3] I lived at Piazza San Felice 4, so my walk to the Accademia went straight down the spine of old Florence: past the Pitti, across the bridge, past Orsanmichele, between the Duomo and the Baptistery, and then up Via Ricasoli to Piazza San Marco. I saw Florence at street level in every possible condition, from empty dark winter evenings to sweltering summer days when the streets were packed with tourists. [4] You can of course paint people like still lives if you want to, and they're willing. That sort of portrait is arguably the apex of still life painting, though the long sitting does tend to produce pained expressions in the sitters. [5] Interleaf was one of many companies that had smart people and built impressive technology, and yet got crushed by Moore's Law. In the 1990s the exponential growth in the power of commodity (i.e. Intel) processors rolled up high-end, special-purpose hardware and software companies like a bulldozer. [6] The signature style seekers at RISD weren't specifically mercenary. In the art world, money and coolness are tightly coupled. Anything expensive comes to be seen as cool, and anything seen as cool will soon become equally expensive. [7] Technically the apartment wasn't rent-controlled but rent-stabilized, but this is a refinement only New Yorkers would know or care about. The point is that it was really cheap, less than half market price. [8] Most software you can launch as soon as it's done. But when the software is an online store builder and you're hosting the stores, if you don't have any users yet, that fact will be painfully obvious. 
So before we could launch publicly we had to launch privately, in the sense of recruiting an initial set of users and making sure they had decent-looking stores. [9] We'd had a code editor in Viaweb for users to define their own page styles. They didn't know it, but they were editing Lisp expressions underneath. But this wasn't an app editor, because the code ran when the merchants' sites were generated, not when shoppers visited them. [10] This was the first instance of what is now a familiar experience, and so was what happened next, when I read the comments and found they were full of angry people. How could I claim that Lisp was better than other languages? Weren't they all Turing complete? People who see the responses to essays I write sometimes tell me how sorry they feel for me, but I'm not exaggerating when I reply that it has always been like this, since the very beginning. It comes with the territory. An essay must tell readers things they don't already know, and some people dislike being told such things. [11] People put plenty of stuff on the internet in the 90s of course, but putting something online is not the same as publishing it online. Publishing online means you treat the online version as the (or at least a) primary version. [12] There is a general lesson here that our experience with Y Combinator also teaches: Customs continue to constrain you long after the restrictions that caused them have disappeared. Customary VC practice had once, like the customs about publishing essays, been based on real constraints. Startups had once been much more expensive to start, and proportionally rare. Now they could be cheap and common, but the VCs' customs still reflected the old world, just as customs about writing essays still reflected the constraints of the print era. Which in turn implies that people who are independent-minded (i.e. less influenced by custom) will have an advantage in fields affected by rapid change (where customs are more likely to be obsolete). Here's an interesting point, though: you can't always predict which fields will be affected by rapid change. Obviously software and venture capital will be, but who would have predicted that essay writing would be? [13] Y Combinator was not the original name. At first we were called Cambridge Seed. But we didn't want a regional name, in case someone copied us in Silicon Valley, so we renamed ourselves after one of the coolest tricks in the lambda calculus, the Y combinator. I picked orange as our color partly because it's the warmest, and partly because no VC used it. In 2005 all the VCs used staid colors like maroon, navy blue, and forest green, because they were trying to appeal to LPs, not founders. The YC logo itself is an inside joke: the Viaweb logo had been a white V on a red circle, so I made the YC logo a white Y on an orange square. [14] YC did become a fund for a couple years starting in 2009, because it was getting so big I could no longer afford to fund it personally. But after Heroku got bought we had enough money to go back to being self-funded. [15] I've never liked the term "deal flow," because it implies that the number of new startups at any given time is fixed. This is not only false, but it's the purpose of YC to falsify it, by causing startups to be founded that would not otherwise have existed. [16] She reports that they were all different shapes and sizes, because there was a run on air conditioners and she had to get whatever she could, but that they were all heavier than she could carry now. 
[17] Another problem with HN was a bizarre edge case that occurs when you both write essays and run a forum. When you run a forum, you're assumed to see if not every conversation, at least every conversation involving you. And when you write essays, people post highly imaginative misinterpretations of them on forums. Individually these two phenomena are tedious but bearable, but the combination is disastrous. You actually have to respond to the misinterpretations, because the assumption that you're present in the conversation means that not responding to any sufficiently upvoted misinterpretation reads as a tacit admission that it's correct. But that in turn encourages more; anyone who wants to pick a fight with you senses that now is their chance. [18] The worst thing about leaving YC was not working with Jessica anymore. We'd been working on YC almost the whole time we'd known each other, and we'd neither tried nor wanted to separate it from our personal lives, so leaving was like pulling up a deeply rooted tree. [19] One way to get more precise about the concept of invented vs discovered is to talk about space aliens. Any sufficiently advanced alien civilization would certainly know about the Pythagorean theorem, for example. I believe, though with less certainty, that they would also know about the Lisp in McCarthy's 1960 paper. But if so there's no reason to suppose that this is the limit of the language that might be known to them. Presumably aliens need numbers and errors and I/O too. So it seems likely there exists at least one path out of McCarthy's Lisp along which discoveredness is preserved. Thanks to Trevor Blackwell, John Collison, Patrick Collison, Daniel Gackle, Ralph Hazell, Jessica Livingston, Robert Morris, and Harj Taggar for reading drafts of this.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\modules\paul_graham_essay.txt
.txt
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russiaโ€™s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament โ€œLight will win over darkness.โ€ The Ukrainian Ambassador to the United States is here tonight. Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. Throughout our history weโ€™ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. They keep moving. And the costs and the threats to America and the world keep rising. Thatโ€™s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. The United States is a member along with 29 other nations. It matters. American diplomacy matters. American resolve matters. Putinโ€™s latest attack on Ukraine was premeditated and unprovoked. He rejected repeated efforts at diplomacy. He thought the West and NATO wouldnโ€™t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. We prepared extensively and carefully. We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. I spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. We countered Russiaโ€™s lies with truth. And now that he has acted the free world is holding him accountable. Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. Together with our allies โ€“we are right now enforcing powerful economic sanctions. We are cutting off Russiaโ€™s largest banks from the international financial system. Preventing Russiaโ€™s central bank from defending the Russian Ruble making Putinโ€™s $630 Billion โ€œwar fundโ€ worthless. We are choking off Russiaโ€™s access to technology that will sap its economic strength and weaken its military for years to come. Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. The U.S. 
Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights โ€“ further isolating Russia โ€“ and adding an additional squeeze โ€“on their economy. The Ruble has lost 30% of its value. The Russian stock market has lost 40% of its value and trading remains suspended. Russiaโ€™s economy is reeling and Putin alone is to blame. Together with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. We are giving more than $1 Billion in direct assistance to Ukraine. And we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. Let me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. Our forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies โ€“ in the event that Putin decides to keep moving west. For that purpose weโ€™ve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. As I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power. And we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. Putin has unleashed violence and chaos. But while he may make gains on the battlefield โ€“ he will pay a continuing high price over the long run. And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. To all Americans, I will be honest with you, as Iโ€™ve always promised. A Russian dictator, invading a foreign country, has costs around the world. And Iโ€™m taking robust action to make sure the pain of our sanctions is targeted at Russiaโ€™s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. These steps will help blunt gas prices here at home. And I know the news about whatโ€™s happening can seem alarming. But I want you to know that we are going to be okay. When the history of this era is written Putinโ€™s war on Ukraine will have left Russia weaker and the rest of the world stronger. While it shouldnโ€™t have taken something so terrible for people around the world to see whatโ€™s at stake now everyone sees it clearly. We see the unity among leaders of nations and a more unified Europe a more unified West. And we see unity among the people who are gathering in cities in large crowds around the world even in Russia to demonstrate their support for Ukraine. In the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. This is a real test. 
Itโ€™s going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. To our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. Putin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people. He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. Thatโ€™s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis. It fueled our efforts to vaccinate the nation and combat COVID-19. It delivered immediate economic relief for tens of millions of Americans. Helped put food on their table, keep a roof over their heads, and cut the cost of health insurance. And as my Dad used to say, it gave people a little breathing room. And unlike the $2 Trillion tax cut passed in the previous administration that benefitted the top 1% of Americans, the American Rescue Plan helped working peopleโ€”and left no one behind. And it worked. It created jobs. Lots of jobs. In factโ€”our economy created over 6.5 Million new jobs just last year, more jobs created in one year than ever before in the history of America. Our economy grew at a rate of 5.7% last year, the strongest growth in nearly 40 years, the first step in bringing fundamental change to an economy that hasnโ€™t worked for the working people of this nation for too long. For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down. Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. America used to have the best roads, bridges, and airports on Earth. Now our infrastructure is ranked 13th in the world. We wonโ€™t be able to compete for the jobs of the 21st Century if we donโ€™t fix that. Thatโ€™s why it was so important to pass the Bipartisan Infrastructure Lawโ€”the most sweeping investment to rebuild America in history. This was a bipartisan effort, and I want to thank the members of both parties who worked to make it happen. Weโ€™re done talking about infrastructure weeks. Weโ€™re going to have an infrastructure decade. It is going to transform America and put us on a path to win the economic competition of the 21st Century that we face with the rest of the worldโ€”particularly with China. As Iโ€™ve told Xi Jinping, it is never a good bet to bet against the American people. 
Weโ€™ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. And weโ€™ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice. Weโ€™ll build a national network of 500,000 electric vehicle charging stations, begin to replace poisonous lead pipesโ€”so every childโ€”and every Americanโ€”has clean water to drink at home and at school, provide affordable high-speed internet for every Americanโ€”urban, suburban, rural, and tribal communities. 4,000 projects have already been announced. And tonight, Iโ€™m announcing that this year we will start fixing over 65,000 miles of highway and 1,500 bridges in disrepair. When we use taxpayer dollars to rebuild America โ€“ we are going to Buy American: buy American products to support American jobs. The federal government spends about $600 Billion a year to keep the country safe and secure. Thereโ€™s been a law on the books for almost a century to make sure taxpayersโ€™ dollars support American jobs and businesses. Every Administration says theyโ€™ll do it, but we are actually doing it. We will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. But to compete for the best jobs of the future, we also need to level the playing field with China and other competitors. Thatโ€™s why it is so important to pass the Bipartisan Innovation Act sitting in Congress that will make record investments in emerging technologies and American manufacturing. Let me give you one example of why itโ€™s so important to pass it. If you travel 20 miles east of Columbus, Ohio, youโ€™ll find 1,000 empty acres of land. It wonโ€™t look like much, but if you stop and look closely, youโ€™ll see a โ€œField of dreams,โ€ the ground on which Americaโ€™s future will be built. This is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor โ€œmega siteโ€. Up to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. Some of the most sophisticated manufacturing in the world to make computer chips the size of a fingertip that power the world and our everyday lives. Smartphones. The Internet. Technology we have yet to invent. But thatโ€™s just the beginning. Intelโ€™s CEO, Pat Gelsinger, who is here tonight, told me they are ready to increase their investment from $20 billion to $100 billion. That would be one of the biggest investments in manufacturing in American history. And all theyโ€™re waiting for is for you to pass this bill. So letโ€™s not wait any longer. Send it to my desk. Iโ€™ll sign it. And we will really take off. And Intel is not alone. Thereโ€™s something happening in America. Just look around and youโ€™ll see an amazing story. The rebirth of the pride that comes from stamping products โ€œMade In America.โ€ The revitalization of American manufacturing. Companies are choosing to build new factories here, when just a few years ago, they would have built them overseas. Thatโ€™s what is happening. Ford is investing $11 billion to build electric vehicles, creating 11,000 jobs across the country. GM is making the largest investment in its historyโ€”$7 billion to build electric vehicles, creating 4,000 jobs in Michigan. All told, we created 369,000 new manufacturing jobs in America just last year. 
Powered by people Iโ€™ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, whoโ€™s here with us tonight. As Ohio Senator Sherrod Brown says, โ€œItโ€™s time to bury the label โ€œRust Belt.โ€ Itโ€™s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. Inflation is robbing them of the gains they might otherwise feel. I get it. Thatโ€™s why my top priority is getting prices under control. Look, our economy roared back faster than most predicted, but the pandemic meant that businesses had a hard time hiring enough workers to keep up production in their factories. The pandemic also disrupted global supply chains. When factories close, it takes longer to make goods and get them from the warehouse to the store, and prices go up. Look at cars. Last year, there werenโ€™t enough semiconductors to make all the cars that people wanted to buy. And guess what, prices of automobiles went up. Soโ€”we have a choice. One way to fight inflation is to drive down wages and make Americans poorer. I have a better plan to fight inflation. Lower your costs, not your wages. Make more cars and semiconductors in America. More infrastructure and innovation in America. More goods moving faster and cheaper in America. More jobs where you can earn a good living in America. And instead of relying on foreign supply chains, letโ€™s make it in America. Economists call it โ€œincreasing the productive capacity of our economy.โ€ I call it building a better America. My plan to fight inflation will lower your costs and lower the deficit. 17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And hereโ€™s the plan: First โ€“ cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis. He and his Dad both have Type 1 diabetes, which means they need insulin every day. Insulin costs about $10 a vial to make. But drug companies charge families like Joshua and his Dad up to 30 times more. I spoke with Joshuaโ€™s mom. Imagine what itโ€™s like to look at your child who needs insulin and have no idea how youโ€™re going to pay for it. What it does to your dignity, your ability to look your child in the eye, to be the parent you expect to be. Joshua is here with us tonight. Yesterday was his birthday. Happy birthday, buddy. For Joshua, and for the 200,000 other young people with Type 1 diabetes, letโ€™s cap the cost of insulin at $35 a month so everyone can afford it. Drug companies will still do very well. And while weโ€™re at it let Medicare negotiate lower prices for prescription drugs, like the VA already does. Look, the American Rescue Plan is helping millions of families on Affordable Care Act plans save $2,400 a year on their health care premiums. Letโ€™s close the coverage gap and make those savings permanent. Second โ€“ cut energy costs for families an average of $500 a year by combatting climate change. Letโ€™s provide investments and tax credits to weatherize your homes and businesses to be energy efficient and you get a tax credit; double Americaโ€™s clean energy production in solar, wind, and so much more; lower the price of electric vehicles, saving you another $80 a month because youโ€™ll never have to pay at the gas pump again. Third โ€“ cut the cost of child care. 
Many families pay up to $14,000 a year for child care per child. Middle-class and working families shouldnโ€™t have to pay more than 7% of their income for care of young children. My plan will cut the cost in half for most families and help parents, including millions of women, who left the workforce during the pandemic because they couldnโ€™t afford child care, to be able to get back to work. My plan doesnโ€™t stop there. It also includes home and long-term care. More affordable housing. And Pre-K for every 3- and 4-year-old. All of these will lower costs. And under my plan, nobody earning less than $400,000 a year will pay an additional penny in new taxes. Nobody. The one thing all Americans agree on is that the tax system is not fair. We have to fix it. Iโ€™m not looking to punish anyone. But letโ€™s make sure corporations and the wealthiest Americans start paying their fair share. Just last year, 55 Fortune 500 corporations earned $40 billion in profits and paid zero dollars in federal income tax. Thatโ€™s simply not fair. Thatโ€™s why Iโ€™ve proposed a 15% minimum tax rate for corporations. We got more than 130 countries to agree on a global minimum tax rate so companies canโ€™t get out of paying their taxes at home by shipping jobs and factories overseas. Thatโ€™s why Iโ€™ve proposed closing loopholes so the very wealthy donโ€™t pay a lower tax rate than a teacher or a firefighter. So thatโ€™s my plan. It will grow the economy and lower costs for families. So what are we waiting for? Letโ€™s get this done. And while youโ€™re at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. My plan will not only lower costs to give families a fair shot, it will lower the deficit. The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted. But in my administration, the watchdogs have been welcomed back. Weโ€™re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans. And tonight, Iโ€™m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. Iโ€™m a capitalist, but capitalism without competition isnโ€™t capitalism. Itโ€™s exploitationโ€”and it drives up prices. When corporations donโ€™t have to compete, their profits go up, your prices go up, and small businesses and family farmers and ranchers go under. We see it happening with ocean carriers moving goods in and out of America. During the pandemic, these foreign-owned companies raised prices by as much as 1,000% and made record profits. Tonight, Iโ€™m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. Weโ€™ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. 
Letโ€™s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Letโ€™s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jillโ€”our First Lady who teaches full-timeโ€”calls Americaโ€™s best-kept secret: community colleges. And letโ€™s pass the PRO Act when a majority of workers want to form a unionโ€”they shouldnโ€™t be stopped. When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we havenโ€™t done in a long time: build a better America. For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. And I know youโ€™re tired, frustrated, and exhausted. But I also know this. Because of the progress weโ€™ve made, because of your resilience and the tools we have, tonight I can say we are moving forward safely, back to more normal routines. Weโ€™ve reached a new moment in the fight against COVID-19, with severe cases down to a level not seen since last July. Just a few days ago, the Centers for Disease Control and Preventionโ€”the CDCโ€”issued new mask guidelines. Under these new guidelines, most Americans in most of the country can now be mask free. And based on the projections, more of the country will reach that point across the next couple of weeks. Thanks to the progress we have made this past year, COVID-19 need no longer control our lives. I know some are talking about โ€œliving with COVID-19โ€. Tonight โ€“ I say that we will never just accept living with COVID-19. We will continue to combat the virus as we do other diseases. And because this is a virus that mutates and spreads, we will stay on guard. Here are four common sense steps as we move forward safely. First, stay protected with vaccines and treatments. We know how incredibly effective vaccines are. If youโ€™re vaccinated and boosted you have the highest degree of protection. We will never give up on vaccinating more Americans. Now, I know parents with kids under 5 are eager to see a vaccine authorized for their children. The scientists are working hard to get that done and weโ€™ll be ready with plenty of vaccines when they do. Weโ€™re also ready with anti-viral treatments. If you get COVID-19, the Pfizer pill reduces your chances of ending up in the hospital by 90%. Weโ€™ve ordered more of these pills than anyone in the world. And Pfizer is working overtime to get us 1 Million pills this month and more than double that next month. And weโ€™re launching the โ€œTest to Treatโ€ initiative so people can get tested at a pharmacy, and if theyโ€™re positive, receive antiviral pills on the spot at no cost. If youโ€™re immunocompromised or have some other vulnerability, we have treatments and free high-quality masks. Weโ€™re leaving no one behind or ignoring anyoneโ€™s needs as we move forward. And on testing, we have made hundreds of millions of tests available for you to order for free. Even if you already ordered free tests tonight, I am announcing that you can order more from covidtests.gov starting next week. Second โ€“ we must prepare for new variants. Over the past year, weโ€™ve gotten much better at detecting new variants. If necessary, weโ€™ll be able to deploy new vaccines within 100 days instead of many more months or years. And, if Congress provides the funds we need, weโ€™ll have new stockpiles of tests, masks, and pills ready if needed. 
I cannot promise a new variant wonโ€™t come. But I can promise you weโ€™ll do everything within our power to be ready if it does. Third โ€“ we can end the shutdown of schools and businesses. We have the tools we need. Itโ€™s time for Americans to get back to work and fill our great downtowns again. People working from home can feel safe to begin to return to the office. Weโ€™re doing that here in the federal government. The vast majority of federal workers will once again work in person. Our schools are open. Letโ€™s keep it that way. Our kids need to be in school. And with 75% of adult Americans fully vaccinated and hospitalizations down by 77%, most Americans can remove their masks, return to work, stay in the classroom, and move forward safely. We achieved this because we provided free vaccines, treatments, tests, and masks. Of course, continuing this costs money. I will soon send Congress a request. The vast majority of Americans have used these tools and may want to again, so I expect Congress to pass it quickly. Fourth, we will continue vaccinating the world. Weโ€™ve sent 475 Million vaccine doses to 112 countries, more than any other nation. And we wonโ€™t stop. We have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. Letโ€™s use this moment to reset. Letโ€™s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. Letโ€™s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. We canโ€™t change how divided weโ€™ve been. But we can change how we move forwardโ€”on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans whoโ€™d grown up on the same streets they later chose to patrol as police officers. I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. Iโ€™ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers whoโ€™ll walk the beat, whoโ€™ll know the neighborhood, and who can restore trust and safety. So letโ€™s not abandon our streets. Or choose between safety and equal justice. Letโ€™s come together to protect our communities, restore trust, and hold law enforcement accountable. Thatโ€™s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. Thatโ€™s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruptionโ€”trusted messengers breaking the cycle of violence and trauma and giving young people hope. We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities. I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe. And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at homeโ€”they have no serial numbers and canโ€™t be traced. 
And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon? Ban assault weapons and high-capacity magazines. Repeal the liability shield that makes gun manufacturers the only industry in America that canโ€™t be sued. These laws donโ€™t infringe on the Second Amendment. They save lives. The most fundamental right in America is the right to vote โ€“ and to have it counted. And itโ€™s under assault. In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youโ€™re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, Iโ€™d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyerโ€”an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nationโ€™s top legal minds, who will continue Justice Breyerโ€™s legacy of excellence. A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since sheโ€™s been nominated, sheโ€™s received a broad range of supportโ€”from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, weโ€™ve installed new technology like cutting-edge scanners to better detect drug smuggling. Weโ€™ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. Weโ€™re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. Weโ€™re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. We can do all this while keeping lit the torch of liberty that has led generations of immigrants to this landโ€”my forefathers and so many of yours. Provide a pathway to citizenship for Dreamers, those on temporary status, farm workers, and essential workers. Revise our laws so businesses have the workers they need and families donโ€™t wait decades to reunite. Itโ€™s not only the right thing to doโ€”itโ€™s the economically smart thing to do. Thatโ€™s why immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce. Letโ€™s get it done once and for all. Advancing liberty and justice also requires protecting the rights of women. The constitutional right affirmed in Roe v. Wadeโ€”standing precedent for half a centuryโ€”is under attack as never before. If we want to go forwardโ€”not backwardโ€”we must protect access to health care. Preserve a womanโ€™s right to choose. And letโ€™s continue to advance maternal health care in America. And for our LGBTQ+ Americans, letโ€™s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. 
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isnโ€™t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, weโ€™ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight Iโ€™m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery. Get rid of outdated rules that stop doctors from prescribing treatments. And stop the flow of illicit drugs by working with state and local law enforcement to go after traffickers. If youโ€™re suffering from addiction, know you are not alone. I believe in recovery, and I celebrate the 23 million Americans in recovery. Second, letโ€™s take on mental health. Especially among our children, whose lives and education have been turned upside down. The American Rescue Plan gave schools money to hire teachers and help students make up for lost learning. I urge every parent to make sure your school does just that. And we can all play a partโ€”sign up to be a tutor or a mentor. Children were also struggling before the pandemic. Bullying, violence, trauma, and the harms of social media. As Frances Haugen, who is here with us tonight, has shown, we must hold social media platforms accountable for the national experiment theyโ€™re conducting on our children for profit. Itโ€™s time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children. And letโ€™s get all Americans the mental health services they need. More people they can turn to for help, and full parity between physical and mental health care. Third, support our veterans. Veterans are the best of us. Iโ€™ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers. One was stationed at bases and breathing in toxic smoke from โ€œburn pitsโ€ that incinerated wastes of warโ€”medical and hazard material, jet fuel, and more. When they came home, many of the worldโ€™s fittest and best trained warriors were never the same. Headaches. Numbness. Dizziness. A cancer that would put them in a flag-draped coffin. I know. One of those soldiers was my son Major Beau Biden. We donโ€™t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But Iโ€™m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. Stationed near Baghdad, just yards from burn pits the size of football fields. Heathโ€™s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. 
But cancer from prolonged exposure to burn pits ravaged Heathโ€™s lungs and body. Danielle says Heath was a fighter to the very end. He didnโ€™t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielleโ€”we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, Iโ€™m announcing weโ€™re expanding eligibility to veterans suffering from nine respiratory cancers. Iโ€™m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. And fourth, letโ€™s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in Americaโ€“second only to heart disease. Last month, I announced our plan to supercharge the Cancer Moonshot that President Obama asked me to lead six years ago. Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. More support for patients and families. To get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. Itโ€™s based on DARPAโ€”the Defense Department project that led to the Internet, GPS, and so much more. ARPA-H will have a singular purposeโ€”to drive breakthroughs in cancer, Alzheimerโ€™s, diabetes, and more. A unity agenda for the nation. We can do this. My fellow Americansโ€”tonight , we have gathered in a sacred spaceโ€”the citadel of our democracy. In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. We have fought for freedom, expanded liberty, defeated totalitarianism and terror. And built the strongest, freest, and most prosperous nation the world has ever known. Now is the hour. Our moment of responsibility. Our test of resolve and conscience, of history itself. It is in this moment that our character is formed. Our purpose is found. Our future is forged. Well I know this nation. We will meet the test. To protect freedom and liberty, to expand fairness and opportunity. We will save democracy. As hard as these times have been, I am more optimistic about America today than I have been my whole life. Because I see the future that is within our grasp. Because I know there is simply nothing beyond our capacity. We are the only nation on Earth that has always turned every crisis we have faced into an opportunity. The only nation that can be defined by a single word: possibilities. So on this night, in our 245th year as a nation, I have come to report on the State of the Union. And my report is this: the State of the Union is strongโ€”because you, the American people, are strong. We are stronger today than we were a year ago. And we will be stronger a year from now than we are today. Now is our moment to meet and overcome the challenges of our time. And we will, as one people. One America. The United States of America. May God bless you all. May God protect our troops.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\modules\state_of_the_union.txt
.txt
Tell me a {adjective} joke about {content}.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\modules\model_io\prompts\simple_template.txt
.md
# Contributing to langchain-cli Update CLI versions with `poe bump` to ensure that version commands display correctly.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\cli\CONTRIBUTING.md
.md
# `langchain` **Usage**: ```console $ langchain [OPTIONS] COMMAND [ARGS]... ``` **Options**: * `--help`: Show this message and exit. * `-v, --version`: Print current CLI version. **Commands**: * `app`: Manage LangChain apps * `serve`: Start the LangServe app, whether it's a... * `template`: Develop installable templates. ## `langchain app` Manage LangChain apps **Usage**: ```console $ langchain app [OPTIONS] COMMAND [ARGS]... ``` **Options**: * `--help`: Show this message and exit. **Commands**: * `add`: Adds the specified template to the current... * `new`: Create a new LangServe application. * `remove`: Removes the specified package from the... * `serve`: Starts the LangServe app. ### `langchain app add` Adds the specified template to the current LangServe app. e.g.: langchain app add extraction-openai-functions langchain app add git+ssh://[email protected]/efriis/simple-pirate.git **Usage**: ```console $ langchain app add [OPTIONS] [DEPENDENCIES]... ``` **Arguments**: * `[DEPENDENCIES]...`: The dependency to add **Options**: * `--api-path TEXT`: API paths to add * `--project-dir PATH`: The project directory * `--repo TEXT`: Install templates from a specific github repo instead * `--branch TEXT`: Install templates from a specific branch * `--help`: Show this message and exit. ### `langchain app new` Create a new LangServe application. **Usage**: ```console $ langchain app new [OPTIONS] NAME ``` **Arguments**: * `NAME`: The name of the folder to create [required] **Options**: * `--package TEXT`: Packages to seed the project with * `--help`: Show this message and exit. ### `langchain app remove` Removes the specified package from the current LangServe app. **Usage**: ```console $ langchain app remove [OPTIONS] API_PATHS... ``` **Arguments**: * `API_PATHS...`: The API paths to remove [required] **Options**: * `--help`: Show this message and exit. ### `langchain app serve` Starts the LangServe app. **Usage**: ```console $ langchain app serve [OPTIONS] ``` **Options**: * `--port INTEGER`: The port to run the server on * `--host TEXT`: The host to run the server on * `--app TEXT`: The app to run, e.g. `app.server:app` * `--help`: Show this message and exit. ## `langchain serve` Start the LangServe app, whether it's a template or an app. **Usage**: ```console $ langchain serve [OPTIONS] ``` **Options**: * `--port INTEGER`: The port to run the server on * `--host TEXT`: The host to run the server on * `--help`: Show this message and exit. ## `langchain template` Develop installable templates. **Usage**: ```console $ langchain template [OPTIONS] COMMAND [ARGS]... ``` **Options**: * `--help`: Show this message and exit. **Commands**: * `new`: Creates a new template package. * `serve`: Starts a demo app for this template. ### `langchain template new` Creates a new template package. **Usage**: ```console $ langchain template new [OPTIONS] NAME ``` **Arguments**: * `NAME`: The name of the folder to create [required] **Options**: * `--with-poetry / --no-poetry`: Don't run poetry install [default: no-poetry] * `--help`: Show this message and exit. ### `langchain template serve` Starts a demo app for this template. **Usage**: ```console $ langchain template serve [OPTIONS] ``` **Options**: * `--port INTEGER`: The port to run the server on * `--host TEXT`: The host to run the server on * `--help`: Show this message and exit.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\cli\DOCS.md
.md
# langchain-cli This package implements the official CLI for LangChain. Right now, it is most useful for getting started with LangChain Templates! [CLI Docs](https://github.com/langchain-ai/langchain/blob/master/libs/cli/DOCS.md) [LangServe Templates Quickstart](https://github.com/langchain-ai/langchain/blob/master/templates/README.md)
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\cli\README.md
.md
# __package_name__ This package contains the LangChain integration with __ModuleName__ ## Installation ```bash pip install -U __package_name__ ``` And you should configure credentials by setting the following environment variables: * TODO: fill this out ## Chat Models `Chat__ModuleName__` class exposes chat models from __ModuleName__. ```python from __module_name__ import Chat__ModuleName__ llm = Chat__ModuleName__() llm.invoke("Sing a ballad of LangChain.") ``` ## Embeddings `__ModuleName__Embeddings` class exposes embeddings from __ModuleName__. ```python from __module_name__ import __ModuleName__Embeddings embeddings = __ModuleName__Embeddings() embeddings.embed_query("What is the meaning of life?") ``` ## LLMs `__ModuleName__LLM` class exposes LLMs from __ModuleName__. ```python from __module_name__ import __ModuleName__LLM llm = __ModuleName__LLM() llm.invoke("The meaning of life is") ```
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\cli\langchain_cli\integration_template\README.md
.md
# __package_name__ TODO: What does this package do ## Environment Setup TODO: What environment variables need to be set (if any) ## Usage To use this package, you should first have the LangChain CLI installed: ```shell pip install -U langchain-cli ``` To create a new LangChain project and install this as the only package, you can do: ```shell langchain app new my-app --package __package_name__ ``` If you want to add this to an existing project, you can just run: ```shell langchain app add __package_name__ ``` And add the following code to your `server.py` file: ```python __app_route_code__ ``` (Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section. ```shell export LANGCHAIN_TRACING_V2=true export LANGCHAIN_API_KEY=<your-api-key> export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default" ``` If you are inside this directory, then you can spin up a LangServe instance directly by running: ```shell langchain serve ``` This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000) We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/__package_name__/playground](http://127.0.0.1:8000/__package_name__/playground) We can access the template from code with: ```python from langserve.client import RemoteRunnable runnable = RemoteRunnable("http://localhost:8000/__package_name__") ```
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\cli\langchain_cli\package_template\README.md
.md
# __app_name__ ## Installation Install the LangChain CLI if you haven't yet: ```bash pip install -U langchain-cli ``` ## Adding packages ```bash # adding packages from # https://github.com/langchain-ai/langchain/tree/master/templates langchain app add $PROJECT_NAME # adding custom GitHub repo packages langchain app add --repo $OWNER/$REPO # or with whole git string (supports other git providers): # langchain app add git+https://github.com/hwchase17/chain-of-verification # with a custom api mount point (defaults to `/{package_name}`) langchain app add $PROJECT_NAME --api-path=/my/custom/path/rag ``` Note: you remove packages by their API path ```bash langchain app remove my/custom/path/rag ``` ## Setup LangSmith (Optional) LangSmith will help us trace, monitor and debug LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section. ```shell export LANGCHAIN_TRACING_V2=true export LANGCHAIN_API_KEY=<your-api-key> export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default" ``` ## Launch LangServe ```bash langchain serve ``` ## Running in Docker This project folder includes a Dockerfile that allows you to easily build and host your LangServe app. ### Building the Image To build the image, simply run: ```shell docker build . -t my-langserve-app ``` If you tag your image with something other than `my-langserve-app`, note it for use in the next step. ### Running the Image Locally To run the image, you'll need to include any environment variables necessary for your application. In the example below, we inject the `OPENAI_API_KEY` environment variable with the value set in your local environment (`$OPENAI_API_KEY`). We also expose port 8080 with the `-p 8080:8080` option. ```shell docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -p 8080:8080 my-langserve-app ```
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\cli\langchain_cli\project_template\README.md
.md
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\cli\langchain_cli\project_template\packages\README.md
.md
# ๐Ÿฆœ๏ธ๐Ÿง‘โ€๐Ÿคโ€๐Ÿง‘ LangChain Community [![Downloads](https://static.pepy.tech/badge/langchain_community/month)](https://pepy.tech/project/langchain_community) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) ## Quick Install ```bash pip install langchain-community ``` ## What is it? LangChain Community contains third-party integrations that implement the base interfaces defined in LangChain Core, making them ready-to-use in any LangChain application. For full documentation see the [API reference](https://api.python.langchain.com/en/stable/community_api_reference.html). ![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](../../docs/static/img/langchain_stack.png "LangChain Framework Overview") ## ๐Ÿ“• Releases & Versioning `langchain-community` is currently on version `0.0.x` All changes will be accompanied by a patch version increase. ## ๐Ÿ’ Contributing As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation. For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).
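To illustrate how these third-party integrations slot into the base interfaces from LangChain Core, here is a minimal sketch (not taken from the official docs): it loads a local text file with a community document loader and calls a community chat model. The file path and the locally running Ollama server are assumptions made for the example, not requirements of the package.

```python
# Minimal sketch: community integrations implement the core interfaces, so they can
# be used like any other LangChain component. Assumes `pip install langchain-community`,
# a local text file, and (for the chat part) a locally running Ollama server.
from langchain_community.chat_models import ChatOllama
from langchain_community.document_loaders import TextLoader

# Document loaders implement the BaseLoader interface from langchain-core.
docs = TextLoader("state_of_the_union.txt").load()  # hypothetical local file
print(docs[0].page_content[:100])

# Chat models implement the Runnable interface, so `.invoke()` behaves the same
# as for any other chat model in the ecosystem.
llm = ChatOllama(model="llama2")
print(llm.invoke("Summarize the state of the union in one sentence.").content)
```

Because both classes implement the shared core interfaces, they can be dropped into any chain, retriever, or agent that expects those abstractions.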
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\README.md
.txt
[05.05.23, 15:48:11] James: Hi here [11/8/21, 9:41:32 AM] User name: Message 123 1/23/23, 3:19 AM - User 2: Bye! 1/23/23, 3:22_AM - User 1: And let me know if anything changes [1/24/21, 12:41:03 PM] ~ User name 2: Of course! [2023/5/4, 16:13:23] ~ User 2: See you! 7/19/22, 11:32โ€ฏPM - User 1: Hello 7/20/22, 11:32โ€ฏam - User 2: Goodbye 4/20/23, 9:42โ€ฏam - User 3: <Media omitted> 6/29/23, 12:16โ€ฏam - User 4: This message was deleted
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\examples\whatsapp_chat.txt
.txt
[05.05.23, 15:48:11] James: Hi here [11/8/21, 9:41:32 AM] User name: Message 123 1/23/23, 3:19 AM - User 2: Bye! 1/23/23, 3:22_AM - User 1: And let me know if anything changes [1/24/21, 12:41:03 PM] ~ User name 2: Of course! [2023/5/4, 16:13:23] ~ User 2: See you! 7/19/22, 11:32โ€ฏPM - User 1: Hello 7/20/22, 11:32โ€ฏam - User 2: Goodbye 4/20/23, 9:42โ€ฏam - User 3: <Media omitted> 6/29/23, 12:16โ€ฏam - User 4: This message was deleted
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\integration_tests\examples\whatsapp_chat.txt
.txt
Sharks are a group of elasmobranch fish characterized by a cartilaginous skeleton, five to seven gill slits on the sides of the head, and pectoral fins that are not fused to the head. Modern sharks are classified within the clade Selachimorpha (or Selachii) and are the sister group to the Batoidea (rays and kin). Some sources extend the term "shark" as an informal category including extinct members of Chondrichthyes (cartilaginous fish) with a shark-like morphology, such as hybodonts and xenacanths. Shark-like chondrichthyans such as Cladoselache and Doliodus first appeared in the Devonian Period (419-359 Ma), though some fossilized chondrichthyan-like scales are as old as the Late Ordovician (458-444 Ma). The oldest modern sharks (selachians) are known from the Early Jurassic, about 200 Ma. Sharks range in size from the small dwarf lanternshark (Etmopterus perryi), a deep sea species that is only 17 centimetres (6.7 in) in length, to the whale shark (Rhincodon typus), the largest fish in the world, which reaches approximately 12 metres (40 ft) in length. They are found in all seas and are common to depths up to 2,000 metres (6,600 ft). They generally do not live in freshwater, although there are a few known exceptions, such as the bull shark and the river shark, which can be found in both seawater and freshwater.[3] Sharks have a covering of dermal denticles that protects their skin from damage and parasites in addition to improving their fluid dynamics. They have numerous sets of replaceable teeth. Several species are apex predators, which are organisms that are at the top of their food chain. Select examples include the tiger shark, blue shark, great white shark, mako shark, thresher shark, and hammerhead shark. Sharks are caught by humans for shark meat or shark fin soup. Many shark populations are threatened by human activities. Since 1970, shark populations have been reduced by 71%, mostly from overfishing.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\integration_tests\vectorstores\fixtures\sharks.txt
.txt
[8/15/23, 9:12:33 AM] Dr. Feather: โ€ŽMessages and calls are end-to-end encrypted. No one outside of this chat, not even WhatsApp, can read or listen to them. [8/15/23, 9:12:43 AM] Dr. Feather: I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature! โ€Ž[8/15/23, 9:12:48 AM] Dr. Feather: โ€Žimage omitted [8/15/23, 9:13:15 AM] Jungle Jane: That's stunning! Were you able to observe its behavior? โ€Ž[8/15/23, 9:13:23 AM] Dr. Feather: โ€Žimage omitted [8/15/23, 9:14:02 AM] Dr. Feather: Yes, it seemed quite social with other macaws. They're known for their playful nature. [8/15/23, 9:14:15 AM] Jungle Jane: How's the research going on parrot communication? โ€Ž[8/15/23, 9:14:30 AM] Dr. Feather: โ€Žimage omitted [8/15/23, 9:14:50 AM] Dr. Feather: It's progressing well. We're learning so much about how they use sound and color to communicate. [8/15/23, 9:15:10 AM] Jungle Jane: That's fascinating! Can't wait to read your paper on it. [8/15/23, 9:15:20 AM] Dr. Feather: Thank you! I'll send you a draft soon. [8/15/23, 9:25:16 PM] Jungle Jane: Looking forward to it! Keep up the great work.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\chat_loaders\data\whatsapp_chat.txt
.md
--- anArray: one - two - three tags: 'onetag', 'twotag' ] --- A document with frontmatter that isn't valid.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\document_loaders\sample_documents\obsidian\bad_frontmatter.md
.md
--- tags: journal/entry, obsidian --- No other content than the frontmatter.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\document_loaders\sample_documents\obsidian\frontmatter.md
.md
### Description #recipes #dessert #cookies A document with HR elements that might trip up a front matter parser: --- ### Ingredients - 3/4 cup (170g) **unsalted butter**, slightly softened toย room temperature. - 1 and 1/2 cupsย (180g) **confectionersโ€™ย sugar** ---
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\document_loaders\sample_documents\obsidian\no_frontmatter.md
.md
A markdown document with no additional metadata.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\document_loaders\sample_documents\obsidian\no_metadata.md
.md
--- aFloat: 13.12345 anInt: 15 aBool: true aString: string value anArray: - one - two - three aDict: dictId1: '58417' dictId2: 1500 tags: [ 'onetag', 'twotag' ] --- # Tags ()#notatag #12345 #read something #tagWithCases - #tag-with-dash #tag_with_underscore #tag/with/nesting # Dataview Here is some data in a [dataview1:: a value] line. Here is even more data in a (dataview2:: another value) line. dataview3:: more data notdataview4: this is not a field notdataview5: this is not a field # Text content https://example.com/blog/#not-a-tag
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\document_loaders\sample_documents\obsidian\tags_and_frontmatter.md
.md
--- aString: {{var}} anArray: - element - {{varElement}} aDict: dictId1: 'val' dictId2: '{{varVal}}' tags: [ 'tag', '{{varTag}}' ] --- Frontmatter contains template variables.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\document_loaders\sample_documents\obsidian\template_var_frontmatter.md
.txt
Error reading file
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\examples\example-non-utf8.txt
.txt
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\community\tests\unit_tests\examples\example-utf8.txt
.md
# ๐Ÿฆœ๐ŸŽ๏ธ LangChain Core [![Downloads](https://static.pepy.tech/badge/langchain_core/month)](https://pepy.tech/project/langchain_core) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) ## Quick Install ```bash pip install langchain-core ``` ## What is it? LangChain Core contains the base abstractions that power the rest of the LangChain ecosystem. These abstractions are designed to be as modular and simple as possible. Examples of these abstractions include those for language models, document loaders, embedding models, vectorstores, retrievers, and more. The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem. For full documentation see the [API reference](https://api.python.langchain.com/en/stable/core_api_reference.html). ## 1๏ธโƒฃ Core Interface: Runnables The concept of a Runnable is central to LangChain Core โ€“ it is the interface that most LangChain Core components implement, giving them - a common invocation interface (invoke, batch, stream, etc.) - built-in utilities for retries, fallbacks, schemas and runtime configurability - easy deployment with [LangServe](https://github.com/langchain-ai/langserve) For more check out the [runnable docs](https://python.langchain.com/docs/expression_language/interface). Examples of components that implement the interface include: LLMs, Chat Models, Prompts, Retrievers, Tools, Output Parsers. You can use LangChain Core objects in two ways: 1. **imperative**, ie. call them directly, eg. `model.invoke(...)` 2. **declarative**, with LangChain Expression Language (LCEL) 3. or a mix of both! eg. one of the steps in your LCEL sequence can be a custom function | Feature | Imperative | Declarative | | --------- | ------------------------------- | -------------- | | Syntax | All of Python | LCEL | | Tracing | โœ… โ€“ Automatic | โœ… โ€“ Automatic | | Parallel | โœ… โ€“ with threads or coroutines | โœ… โ€“ Automatic | | Streaming | โœ… โ€“ by yielding | โœ… โ€“ Automatic | | Async | โœ… โ€“ by writing async functions | โœ… โ€“ Automatic | ## โšก๏ธ What is LangChain Expression Language? LangChain Expression Language (LCEL) is a _declarative language_ for composing LangChain Core runnables into sequences (or DAGs), covering the most common patterns when building with LLMs. LangChain Core compiles LCEL sequences to an _optimized execution plan_, with automatic parallelization, streaming, tracing, and async support. For more check out the [LCEL docs](https://python.langchain.com/docs/expression_language/). ![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](../../docs/static/img/langchain_stack.png "LangChain Framework Overview") For more advanced use cases, also check out [LangGraph](https://github.com/langchain-ai/langgraph), which is a graph-based runner for cyclic and recursive LLM workflows. ## ๐Ÿ“• Releases & Versioning `langchain-core` is currently on version `0.1.x`. As `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception for this is anything in `langchain_core.beta`. The reason for `langchain_core.beta` is that given the rate of change of the field, being able to move quickly is still a priority, and this module is our attempt to do so. 
Minor version increases will occur for: - Breaking changes for any public interfaces NOT in `langchain_core.beta` Patch version increases will occur for: - Bug fixes - New features - Any changes to private interfaces - Any changes to `langchain_core.beta` ## ๐Ÿ’ Contributing As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation. For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/). ## โ›ฐ๏ธ Why build on top of LangChain Core? The whole LangChain ecosystem is built on top of LangChain Core, so you're in good company when building on top of it. Some of the benefits: - **Modularity**: LangChain Core is designed around abstractions that are independent of each other, and not tied to any specific model provider. - **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps. - **Battle-tested**: LangChain Core components have the largest install base in the LLM ecosystem, and are used in production by many companies. - **Community**: LangChain Core is developed in the open, and we welcome contributions from the community.
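As a concrete, provider-free illustration of the Runnable interface and LCEL composition described earlier in this README, here is a minimal sketch that uses only `langchain-core`; the lambda below is a stand-in for a real model, so it runs without any credentials.

```python
# Minimal LCEL sketch built only from langchain-core primitives. The "model" is a
# RunnableLambda that echoes the formatted prompt, standing in for a real chat model.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")

# Stand-in model: receives the formatted prompt value and returns a string.
fake_model = RunnableLambda(lambda prompt_value: f"Echo: {prompt_value.to_string()}")

# Declarative composition with LCEL: every step implements the Runnable interface.
chain = prompt | fake_model | StrOutputParser()

print(chain.invoke({"topic": "parrots"}))                     # single input
print(chain.batch([{"topic": "bears"}, {"topic": "geese"}]))  # several inputs
```

The same `chain` object also exposes `.stream(...)` and the async variants (`ainvoke`, `abatch`, `astream`) with no extra code, which is the practical payoff of composing with LCEL.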
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\core\README.md
.txt
Question: {question} Answer:
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\core\tests\unit_tests\prompt_file.txt
.txt
Question: {question} Answer:
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\core\tests\unit_tests\data\prompt_file.txt
.txt
Error reading file
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\core\tests\unit_tests\examples\example-non-utf8.txt
.txt
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\core\tests\unit_tests\examples\example-utf8.txt
.txt
Tell me a {adjective} joke about {content}.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\core\tests\unit_tests\examples\simple_template.txt
.md
# ๐Ÿฆœ๏ธ๐Ÿงช LangChain Experimental This package holds experimental LangChain code, intended for research and experimental uses. > [!WARNING] > Portions of the code in this package may be dangerous if not properly deployed > in a sandboxed environment. Please be wary of deploying experimental code > to production unless you've taken appropriate precautions and > have already discussed it with your security team. Some of the code here may be marked with security notices. However, given the exploratory and experimental nature of the code in this package, the lack of a security notice on a piece of code does not mean that the code in question does not require additional security considerations in order to be safe to use.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\experimental\README.md
.md
# Causal program-aided language (CPAL) chain See https://github.com/langchain-ai/langchain/pull/6255 for background on this chain.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\experimental\langchain_experimental\cpal\README.md
.md
# ๐Ÿฆœ๏ธ๐Ÿ”— LangChain โšก Building applications with LLMs through composability โšก [![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases) [![lint](https://github.com/langchain-ai/langchain/actions/workflows/lint.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/lint.yml) [![test](https://github.com/langchain-ai/langchain/actions/workflows/test.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/test.yml) [![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS) [![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain) [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain) [![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain) [![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain) [![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues) Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs). To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com). [LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications. Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team. ## Quick Install `pip install langchain` or `pip install langsmith && conda install langchain -c conda-forge` ## ๐Ÿค” What is this? Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge. This library aims to assist in the development of those types of applications. 
Common examples of these applications include: **โ“ Question Answering over specific documents** - [Documentation](https://python.langchain.com/docs/use_cases/question_answering/) - End-to-end Example: [Question Answering over Notion Database](https://github.com/hwchase17/notion-qa) **๐Ÿ’ฌ Chatbots** - [Documentation](https://python.langchain.com/docs/use_cases/chatbots/) - End-to-end Example: [Chat-LangChain](https://github.com/langchain-ai/chat-langchain) **๐Ÿค– Agents** - [Documentation](https://python.langchain.com/docs/modules/agents/) - End-to-end Example: [GPT+WolframAlpha](https://huggingface.co./spaces/JavaFXpert/Chat-GPT-LangChain) ## ๐Ÿ“– Documentation Please see [here](https://python.langchain.com) for full documentation on: - Getting started (installation, setting up the environment, simple examples) - How-To examples (demos, integrations, helper functions) - Reference (full API docs) - Resources (high-level explanation of core concepts) ## ๐Ÿš€ What can this help with? There are six main areas that LangChain is designed to help with. These are, in increasing order of complexity: **๐Ÿ“ƒ LLMs and Prompts:** This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs. **๐Ÿ”— Chains:** Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. **๐Ÿ“š Data Augmented Generation:** Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources. **๐Ÿค– Agents:** Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents. **๐Ÿง  Memory:** Memory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory. **๐Ÿง Evaluation:** [BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this. For more information on these concepts, please see our [full documentation](https://python.langchain.com). ## ๐Ÿ’ Contributing As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation. For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).
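To make the "LLMs and Prompts" and "Chains" areas above concrete, here is a minimal sketch that wires a prompt template to a model; `FakeListLLM` is a test utility that returns canned responses, used here only so the example runs without an API key and stands in for a real provider model.

```python
# Minimal chain sketch: prompt template -> LLM. FakeListLLM returns canned responses,
# so no provider credentials are needed; swap in a real LLM for actual use.
from langchain.prompts import PromptTemplate
from langchain_community.llms.fake import FakeListLLM

prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
llm = FakeListLLM(responses=["Why did the chicken cross the road? To get to the other side."])

# Compose the prompt and the model into a chain and run it on a single input.
chain = prompt | llm
print(chain.invoke({"adjective": "funny", "content": "chickens"}))
```

Swapping `FakeListLLM` for a real provider model leaves the rest of the chain unchanged.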
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\README.md
.txt
Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False". Here are some examples: === Checked Assertions: """ - The sky is red: False - Water is made of lava: False - The sun is a star: True """ Result: False === Checked Assertions: """ - The sky is blue: True - Water is wet: True - The sun is a star: True """ Result: True === Checked Assertions: """ - The sky is blue - True - Water is made of lava- False - The sun is a star - True """ Result: False === Checked Assertions:""" {checked_assertions} """ Result:
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\langchain\chains\llm_summarization_checker\prompts\are_all_true_prompt.txt
.txt
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story. Here is a bullet point list of facts: """ {assertions} """ For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\langchain\chains\llm_summarization_checker\prompts\check_facts.txt
.txt
Given some text, extract a list of facts from the text. Format your output as a bulleted list. Text: """ {summary} """ Facts:
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\langchain\chains\llm_summarization_checker\prompts\create_facts.txt
.txt
Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction. Checked Assertions: """ {checked_assertions} """ Original Summary: """ {summary} """ Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary. Summary:
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\langchain\chains\llm_summarization_checker\prompts\revise_summary.txt
.md
# Langchain Tests [This guide has moved to the docs](https://python.langchain.com/docs/contributing/testing)
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\tests\README.md
.txt
[05.05.23, 15:48:11] James: Hi here [11/8/21, 9:41:32 AM] User name: Message 123 1/23/23, 3:19 AM - User 2: Bye! 1/23/23, 3:22_AM - User 1: And let me know if anything changes [1/24/21, 12:41:03 PM] ~ User name 2: Of course! [2023/5/4, 16:13:23] ~ User 2: See you! 7/19/22, 11:32โ€ฏPM - User 1: Hello 7/20/22, 11:32โ€ฏam - User 2: Goodbye 4/20/23, 9:42โ€ฏam - User 3: <Media omitted> 6/29/23, 12:16โ€ฏam - User 4: This message was deleted
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\tests\integration_tests\examples\whatsapp_chat.txt
.txt
Question: {question} Answer:
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\tests\unit_tests\data\prompt_file.txt
.txt
Error reading file
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\tests\unit_tests\examples\example-non-utf8.txt
.txt
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\langchain\tests\unit_tests\examples\example-utf8.txt
.md
# langchain-ai21 This package contains the LangChain integrations for [AI21](https://docs.ai21.com/) through their [AI21](https://pypi.org/project/ai21/) SDK. ## Installation and Setup - Install the AI21 partner package ```bash pip install langchain-ai21 ``` - Get an AI21 api key and set it as an environment variable (`AI21_API_KEY`) ## Chat Models This package contains the `ChatAI21` class, which is the recommended way to interface with AI21 Chat models. To use, install the requirements, and configure your environment. ```bash export AI21_API_KEY=your-api-key ``` Then initialize ```python from langchain_core.messages import HumanMessage from langchain_ai21.chat_models import ChatAI21 chat = ChatAI21(model="j2-ultra") messages = [HumanMessage(content="Hello from AI21")] chat.invoke(messages) ``` ## LLMs You can use AI21's generative AI models as Langchain LLMs: ```python from langchain.prompts import PromptTemplate from langchain_ai21 import AI21LLM llm = AI21LLM(model="j2-ultra") template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) chain = prompt | llm question = "Which scientist discovered relativity?" print(chain.invoke({"question": question})) ``` ## Embeddings You can use AI21's embeddings models as: ### Query ```python from langchain_ai21 import AI21Embeddings embeddings = AI21Embeddings() embeddings.embed_query("Hello! This is some query") ``` ### Document ```python from langchain_ai21 import AI21Embeddings embeddings = AI21Embeddings() embeddings.embed_documents(["Hello! This is document 1", "And this is document 2!"]) ```
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\ai21\README.md
.md
# langchain-airbyte This package contains the LangChain integration with Airbyte ## Installation ```bash pip install -U langchain-airbyte ``` The integration package doesn't have any global environment variables that need to be set, but some integrations (e.g. `source-github`) may need credentials passed in. ## Document Loaders `AirbyteLoader` class exposes a single document loader for Airbyte sources. ```python from langchain_airbyte import AirbyteLoader loader = AirbyteLoader( source="source-faker", stream="users", config={"count": 100}, ) docs = loader.load() ```
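Documents can also be consumed lazily, which helps with large streams. A minimal sketch, assuming the same `source-faker` configuration as above and that the loader exposes the standard `lazy_load()` iterator:

```python
from langchain_airbyte import AirbyteLoader

loader = AirbyteLoader(
    source="source-faker",
    stream="users",
    config={"count": 100},
)

# Iterate over records without materializing the whole stream in memory
for doc in loader.lazy_load():
    print(doc.page_content[:80])
```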
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\airbyte\README.md
.md
# langchain-anthropic

This package contains the LangChain integration for Anthropic's generative models.

## Installation

```bash
pip install -U langchain-anthropic
```

## Chat Models

| API Model Name     | Model Family   |
| ------------------ | -------------- |
| claude-instant-1.2 | Claude Instant |
| claude-2.1         | Claude         |
| claude-2.0         | Claude         |

To use, you should have an Anthropic API key configured. Initialize the model as:

```python
from langchain_anthropic import ChatAnthropicMessages
from langchain_core.messages import AIMessage, HumanMessage

model = ChatAnthropicMessages(model="claude-2.1", temperature=0, max_tokens=1024)
```

### Define the input message

`message = HumanMessage(content="What is the capital of France?")`

### Generate a response using the model

`response = model.invoke([message])`

For a more detailed walkthrough see [here](https://python.langchain.com/docs/integrations/chat/anthropic).
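### Use within a chain

The model also composes with LCEL primitives. A minimal sketch (the prompt and question below are illustrative, and a valid `ANTHROPIC_API_KEY` is assumed):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropicMessages

# Small prompt -> model -> parser chain
prompt = ChatPromptTemplate.from_messages(
    [("system", "You answer concisely."), ("human", "{question}")]
)
model = ChatAnthropicMessages(model="claude-2.1", temperature=0, max_tokens=1024)
chain = prompt | model | StrOutputParser()

print(chain.invoke({"question": "What is the capital of France?"}))
```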
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\anthropic\README.md
.md
This package has moved! https://github.com/langchain-ai/langchain-datastax/tree/main/libs/astradb
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\astradb\README.md
.md
# langchain-elasticsearch

This package contains the LangChain integration with Elasticsearch.

## Installation

```bash
pip install -U langchain-elasticsearch
```

TODO: document how to get the cloud id and API key

## Usage

The `ElasticsearchStore` class exposes the connection to the Elasticsearch vector store.

```python
from langchain_elasticsearch import ElasticsearchStore

embeddings = ... # use a LangChain Embeddings class

vectorstore = ElasticsearchStore(
    es_cloud_id="your-cloud-id",
    es_api_key="your-api-key",
    index_name="your-index-name",
    embeddings=embeddings,
)
```
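Once the store is constructed, documents can be indexed and queried through the generic vector-store interface. A minimal sketch continuing from the `vectorstore` created above (the texts and query are illustrative):

```python
# Index a few example texts; they are embedded with the configured Embeddings class
vectorstore.add_texts(
    [
        "LangChain integrates with Elasticsearch.",
        "Vector stores enable semantic search over documents.",
    ]
)

# Run a similarity search against the index
docs = vectorstore.similarity_search("How do I do semantic search?", k=1)
print(docs[0].page_content)
```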
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\elasticsearch\README.md
.md
# langchain-exa This package contains the LangChain integrations for Exa Cloud generative models. ## Installation ```bash pip install -U langchain-exa ``` ## Exa Search Retriever You can retrieve search results as follows ```python from langchain_exa import ExaSearchRetriever exa_api_key = "YOUR API KEY" # Create a new instance of the ExaSearchRetriever exa = ExaSearchRetriever(exa_api_key=exa_api_key) # Search for a query and save the results results = exa.get_relevant_documents(query="What is the capital of France?") # Print the results print(results) ``` ## Exa Search Results You can run the ExaSearchResults module as follows ```python from langchain_exa import ExaSearchResults # Initialize the ExaSearchResults tool search_tool = ExaSearchResults(exa_api_key="YOUR API KEY") # Perform a search query search_results = search_tool._run( query="When was the last time the New York Knicks won the NBA Championship?", num_results=5, text_contents_options=True, highlights=True ) print("Search Results:", search_results) ``` ## Exa Find Similar Results You can run the ExaFindSimilarResults module as follows ```python from langchain_exa import ExaFindSimilarResults # Initialize the ExaFindSimilarResults tool find_similar_tool = ExaFindSimilarResults(exa_api_key="YOUR API KEY") # Find similar results based on a URL similar_results = find_similar_tool._run( url="http://espn.com", num_results=5, text_contents_options=True, highlights=True ) print("Similar Results:", similar_results) ```
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\exa\README.md
.md
# LangChain-Fireworks

This is the partner package connecting Fireworks.ai and LangChain. Fireworks strives to provide good support for LangChain use cases, so if you run into any issues, please let us know. You can reach out to us [in our Discord channel](https://discord.com/channels/1137072072808472616/).

## Installation

To use the `langchain-fireworks` package, follow these installation steps:

```bash
pip install langchain-fireworks
```

## Basic usage

### Setting up

1. Sign in to [Fireworks AI](http://fireworks.ai/) to obtain an API Key to access the models, and make sure it is set as the `FIREWORKS_API_KEY` environment variable.

   Once you've signed in and obtained an API key, follow these steps to set the `FIREWORKS_API_KEY` environment variable:
   - **Linux/macOS:** Open your terminal and execute the following command:

     ```bash
     export FIREWORKS_API_KEY='your_api_key'
     ```

     **Note:** To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.

   - **Windows:** For Command Prompt, use:

     ```cmd
     set FIREWORKS_API_KEY=your_api_key
     ```

2. Set up your model using a model id. If the model is not set, the default model is `fireworks-llama-v2-7b-chat`. See the full, most up-to-date model list on [fireworks.ai](https://fireworks.ai/models).

```python
import getpass
import os

from langchain_fireworks import Fireworks

# Initialize a Fireworks model
llm = Fireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    base_url="https://api.fireworks.ai/inference/v1/completions",
)
```

### Calling the Model Directly

You can call the model directly with string prompts to get completions.

```python
# Single prompt
output = llm.invoke("Who's the best quarterback in the NFL?")
print(output)
```

```python
# Calling multiple prompts
output = llm.generate(
    [
        "Who's the best cricket player in 2016?",
        "Who's the best basketball player in the league?",
    ]
)
print(output.generations)
```

## Advanced usage

### Tool use: LangChain Agent + Fireworks function calling model

Please check out how to teach the Fireworks function calling model to use a [calculator here](https://github.com/fw-ai/cookbook/blob/main/examples/function_calling/fireworks_langchain_tool_usage.ipynb).

Fireworks focuses on delivering the best experience for fast model inference as well as tool use. You can check out [our blog](https://fireworks.ai/blog/firefunction-v1-gpt-4-level-function-calling) for more details on how it compares to GPT-4: the punchline is that it is on par with GPT-4 for function calling use cases, but it is much faster and cheaper.

### RAG: LangChain agent + Fireworks function calling model + MongoDB + Nomic AI embeddings

Please check out the [cookbook here](https://github.com/fw-ai/cookbook/blob/main/examples/rag/mongodb_agent.ipynb) for an end-to-end flow.
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\fireworks\README.md
.md
This package has moved! https://github.com/langchain-ai/langchain-google/tree/main/libs/genai
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\google-genai\README.md
.md
This package has moved! https://github.com/langchain-ai/langchain-google/tree/main/libs/vertexai
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\google-vertexai\README.md
.md
# langchain-groq ## Welcome to Groq! ๐Ÿš€ At Groq, we've developed the world's first Language Processing Unitโ„ข, or LPU. The Groq LPU has a deterministic, single core streaming architecture that sets the standard for GenAI inference speed with predictable and repeatable performance for any given workload. Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can: * Achieve uncompromised low latency and performance for real-time AI and HPC inferences ๐Ÿ”ฅ * Know the exact performance and compute time for any given workload ๐Ÿ”ฎ * Take advantage of our cutting-edge technology to stay ahead of the competition ๐Ÿ’ช Want more Groq? Check out our [website](https://groq.com) for more resources and join our [Discord community](https://discord.gg/JvNsBDKeCG) to connect with our developers! ## Installation and Setup Install the integration package: ```bash pip install langchain-groq ``` Request an [API key](https://wow.groq.com) and set it as an environment variable ```bash export GROQ_API_KEY=gsk_... ``` ## Chat Model See a [usage example](https://python.langchain.com/docs/integrations/chat/groq). ## Development To develop the `langchain-groq` package, you'll need to follow these instructions: ### Install dev dependencies ```bash poetry install --with test,test_integration,lint,codespell ``` ### Build the package ```bash poetry build ``` ### Run unit tests Unit tests live in `tests/unit_tests` and SHOULD NOT require an internet connection or a valid API KEY. Run unit tests with ```bash make tests ``` ### Run integration tests Integration tests live in `tests/integration_tests` and require a connection to the Groq API and a valid API KEY. ```bash make integration_tests ``` ### Lint & Format Run additional tests and linters to ensure your code is up to standard. ```bash make lint spell_check check_imports ```
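## Example

As a supplement to the Chat Model section above, here is a minimal hedged sketch. It assumes `GROQ_API_KEY` is set and that the model id below is currently available on Groq (check their docs for up-to-date ids):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq

# Model name below is illustrative; see the Groq documentation for current model ids
chat = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")

prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a concise assistant."), ("human", "{question}")]
)
chain = prompt | chat

print(chain.invoke({"question": "Explain LPUs in one sentence."}).content)
```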
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\groq\README.md
.md
# langchain-ibm

This package provides the integration between LangChain and IBM watsonx.ai through the `ibm-watsonx-ai` SDK.

## Installation

To use the `langchain-ibm` package, follow these installation steps:

```bash
pip install langchain-ibm
```

## Usage

### Setting up

To use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:

1. **Obtain an API Key:** For more details on how to create and manage an API key, refer to IBM's [documentation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui).
2. **Set the API Key as an Environment Variable:** For security reasons, it's recommended not to hard-code your API key directly in your scripts. Instead, set it up as an environment variable. You can use the following code to prompt for the API key and set it as an environment variable:

```python
import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key
```

Alternatively, you can set the environment variable in your terminal.

- **Linux/macOS:** Open your terminal and execute the following command:

  ```bash
  export WATSONX_APIKEY='your_ibm_api_key'
  ```

  To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.

- **Windows:** For Command Prompt, use:

  ```cmd
  set WATSONX_APIKEY=your_ibm_api_key
  ```

### Loading the model

You might need to adjust model parameters for different models or tasks. For more details on the parameters, refer to IBM's [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#metanames.GenTextParamsMetaNames).

```python
parameters = {
    "decoding_method": "sample",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}
```

Initialize the WatsonxLLM class with the previously set parameters.

```python
from langchain_ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="PASTE THE CHOSEN MODEL_ID HERE",
    url="PASTE YOUR URL HERE",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```

**Note:**

- You must provide a `project_id` or `space_id`. For more information refer to IBM's [documentation](https://www.ibm.com/docs/en/watsonx-as-a-service?topic=projects).
- Depending on the region of your provisioned service instance, use one of the urls described [here](https://ibm.github.io/watsonx-ai-python-sdk/setup_cloud.html#authentication).
- You need to specify the model you want to use for inferencing through `model_id`. You can find the list of available models [here](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#ibm_watsonx_ai.foundation_models.utils.enums.ModelTypes).

Alternatively you can use Cloud Pak for Data credentials. For more details, refer to IBM's [documentation](https://ibm.github.io/watsonx-ai-python-sdk/setup_cpd.html).

```python
watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```

### Create a Chain

Create `PromptTemplate` objects which will be responsible for creating a random question.

```python
from langchain.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "
prompt = PromptTemplate.from_template(template)
```

Provide a topic and run the LLMChain.
```python from langchain.chains import LLMChain llm_chain = LLMChain(prompt=prompt, llm=watsonx_llm) response = llm_chain.invoke("dog") print(response) ``` ### Calling the Model Directly To obtain completions, you can call the model directly using a string prompt. ```python # Calling a single prompt response = watsonx_llm.invoke("Who is man's best friend?") print(response) ``` ```python # Calling multiple prompts response = watsonx_llm.generate( [ "The fastest dog in the world?", "Describe your chosen dog breed", ] ) print(response) ``` ### Streaming the Model output You can stream the model output. ```python for chunk in watsonx_llm.stream( "Describe your favorite breed of dog and why it is your favorite." ): print(chunk, end="") ```
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\ibm\README.md
.md
# langchain-mistralai This package contains the LangChain integrations for [MistralAI](https://docs.mistral.ai) through their [mistralai](https://pypi.org/project/mistralai/) SDK. ## Installation ```bash pip install -U langchain-mistralai ``` ## Chat Models This package contains the `ChatMistralAI` class, which is the recommended way to interface with MistralAI models. To use, install the requirements, and configure your environment. ```bash export MISTRAL_API_KEY=your-api-key ``` Then initialize ```python from langchain_core.messages import HumanMessage from langchain_mistralai.chat_models import ChatMistralAI chat = ChatMistralAI(model="mistral-small") messages = [HumanMessage(content="say a brief hello")] chat.invoke(messages) ``` `ChatMistralAI` also supports async and streaming functionality: ```python # For async... await chat.ainvoke(messages) # For streaming... for chunk in chat.stream(messages): print(chunk.content, end="", flush=True) ``` ## Embeddings With `MistralAIEmbeddings`, you can directly use the default model 'mistral-embed', or set a different one if available. ### Choose model `embedding.model = 'mistral-embed'` ### Simple query `res_query = embedding.embed_query("The test information")` ### Documents `res_document = embedding.embed_documents(["test1", "another test"])`
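### Complete example

Putting the snippets above together, a minimal end-to-end sketch (it assumes `MISTRAL_API_KEY` is set in the environment; the texts are illustrative):

```python
from langchain_mistralai import MistralAIEmbeddings

# Defaults to the 'mistral-embed' model; override via the `model` field if needed
embedding = MistralAIEmbeddings()

res_query = embedding.embed_query("The test information")
res_document = embedding.embed_documents(["test1", "another test"])

# One vector for the query, one vector per input document
print(len(res_query), len(res_document))
```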
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\mistralai\README.md
.md
# langchain-mongodb

# Installation

```bash
pip install -U langchain-mongodb
```

# Usage

- See [integrations doc](../../../docs/docs/integrations/vectorstores/mongodb.ipynb) for more in-depth usage instructions.
- See [Getting Started with the LangChain Integration](https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/langchain/#get-started-with-the-langchain-integration) for a walkthrough on using your first LangChain implementation with MongoDB Atlas.

## Using MongoDBAtlasVectorSearch

```python
import os

from pymongo import MongoClient

from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

# Pull MongoDB Atlas URI from environment variables
MONGODB_ATLAS_CLUSTER_URI = os.environ.get("MONGODB_ATLAS_CLUSTER_URI")

DB_NAME = "langchain_db"
COLLECTION_NAME = "test"
ATLAS_VECTOR_SEARCH_INDEX_NAME = "index_name"

# Create the vector search via `from_connection_string`
vector_search = MongoDBAtlasVectorSearch.from_connection_string(
    MONGODB_ATLAS_CLUSTER_URI,
    DB_NAME + "." + COLLECTION_NAME,
    OpenAIEmbeddings(disallowed_special=()),
    index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
)

# Initialize the MongoDB python client and get a handle on the collection
client = MongoClient(MONGODB_ATLAS_CLUSTER_URI)
MONGODB_COLLECTION = client[DB_NAME][COLLECTION_NAME]

# Create the vector search via instantiation
vector_search_2 = MongoDBAtlasVectorSearch(
    collection=MONGODB_COLLECTION,
    embeddings=OpenAIEmbeddings(disallowed_special=()),
    index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
)
```
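Once documents have been indexed (see the integrations doc linked above), the store can be queried. A minimal sketch continuing from the `vector_search` instance created above (the query is illustrative):

```python
# Semantic search over the indexed collection
results = vector_search.similarity_search(
    query="What is LangChain?",
    k=3,
)
for doc in results:
    print(doc.page_content)
```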
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\mongodb\README.md
.md
# langchain-nomic

This package contains the LangChain integration with Nomic.

## Installation

```bash
pip install -U langchain-nomic
```

And you should configure credentials by setting the following environment variables:

* `NOMIC_API_KEY`: your nomic API key

## Embeddings

The `NomicEmbeddings` class exposes embeddings from Nomic.

```python
from langchain_nomic import NomicEmbeddings

embeddings = NomicEmbeddings()

embeddings.embed_query("What is the meaning of life?")
```
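Documents can be embedded in batch as well. A short sketch, assuming `NOMIC_API_KEY` is set as above (the texts are illustrative):

```python
from langchain_nomic import NomicEmbeddings

embeddings = NomicEmbeddings()

# Embed several documents at once; returns one vector per input text
vectors = embeddings.embed_documents(
    ["Nomic builds embedding models.", "LangChain provides a common interface."]
)
print(len(vectors), len(vectors[0]))
```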
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\nomic\README.md
.md
# langchain-nvidia-ai-endpoints The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for chat models and embeddings powered by the [NVIDIA AI Foundation Model](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) playground environment. > [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to hosted endpoints for generative AI models like Llama-2, SteerLM, Mistral, etc. Using the API, you can query live endpoints available on the [NVIDIA GPU Cloud (NGC)](https://catalog.ngc.nvidia.com/ai-foundation-models) to get quick results from a DGX-hosted cloud compute environment. All models are source-accessible and can be deployed on your own compute cluster. Below is an example on how to use some common functionality surrounding text-generative and embedding models ## Installation ```python %pip install -U --quiet langchain-nvidia-ai-endpoints ``` ## Setup **To get started:** 1. Create a free account with the [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc. 2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`. 3. Select the `API` option and click `Generate Key`. 4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints. ```python import getpass import os if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"): nvidia_api_key = getpass.getpass("Enter your NVIDIA AIPLAY API key: ") assert nvidia_api_key.startswith("nvapi-"), f"{nvidia_api_key[:5]}... is not a valid key" os.environ["NVIDIA_API_KEY"] = nvidia_api_key ``` ```python ## Core LC Chat Interface from langchain_nvidia_ai_endpoints import ChatNVIDIA llm = ChatNVIDIA(model="mixtral_8x7b") result = llm.invoke("Write a ballad about LangChain.") print(result.content) ``` ## Stream, Batch, and Async These models natively support streaming, and as is the case with all LangChain LLMs they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples. ```python print(llm.batch(["What's 2*3?", "What's 2*6?"])) # Or via the async API # await llm.abatch(["What's 2*3?", "What's 2*6?"]) ``` ```python for chunk in llm.stream("How far can a seagull fly in one day?"): # Show the token separations print(chunk.content, end="|") ``` ```python async for chunk in llm.astream("How long does it take for monarch butterflies to migrate?"): print(chunk.content, end="|") ``` ## Supported models Querying `available_models` will still give you all of the other models offered by your API credentials. The `playground_` prefix is optional. ```python list(llm.available_models) # ['playground_llama2_13b', # 'playground_llama2_code_13b', # 'playground_clip', # 'playground_fuyu_8b', # 'playground_mistral_7b', # 'playground_nvolveqa_40k', # 'playground_yi_34b', # 'playground_nemotron_steerlm_8b', # 'playground_nv_llama2_rlhf_70b', # 'playground_llama2_code_34b', # 'playground_mixtral_8x7b', # 'playground_neva_22b', # 'playground_steerlm_llama_70b', # 'playground_nemotron_qa_8b', # 'playground_sdxl'] ``` ## Model types All of these models above are supported and can be accessed via `ChatNVIDIA`. Some model types support unique prompting techniques and chat messages. We will review a few important ones below. 
**To find out more about a specific model, please navigate to the API section of an AI Foundation Model [as linked here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/codellama-13b/api).**

### General Chat

Models such as `llama2_13b` and `mixtral_8x7b` are good all-around models that you can use with any LangChain chat messages. Example below.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful AI assistant named Fred."),
        ("user", "{input}")
    ]
)
chain = (
    prompt
    | ChatNVIDIA(model="llama2_13b")
    | StrOutputParser()
)

for txt in chain.stream({"input": "What's your name?"}):
    print(txt, end="")
```

### Code Generation

These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured code tasks. An example of this is `llama2_code_13b`.

```python
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert coding AI. Respond only in valid python; no narration whatsoever."),
        ("user", "{input}")
    ]
)
chain = (
    prompt
    | ChatNVIDIA(model="llama2_code_13b")
    | StrOutputParser()
)

for txt in chain.stream({"input": "How do I solve this fizz buzz problem?"}):
    print(txt, end="")
```

## Steering LLMs

> [SteerLM-optimized models](https://developer.nvidia.com/blog/announcing-steerlm-a-simple-and-practical-technique-to-customize-llms-during-inference/) support "dynamic steering" of model outputs at inference time.

This lets you "control" the complexity, verbosity, and creativity of the model via integer labels on a scale from 0 to 9. Under the hood, these are passed as a special type of assistant message to the model.

The "steer" models support this type of input, such as `steerlm_llama_70b`.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="steerlm_llama_70b")
# Try making it uncreative and not verbose
complex_result = llm.invoke(
    "What's a PB&J?",
    labels={"creativity": 0, "complexity": 3, "verbosity": 0}
)
print("Un-creative\n")
print(complex_result.content)

# Try making it very creative and verbose
print("\n\nCreative\n")
creative_result = llm.invoke(
    "What's a PB&J?",
    labels={"creativity": 9, "complexity": 3, "verbosity": 9}
)
print(creative_result.content)
```

#### Use within LCEL

The labels are passed as invocation params. You can `bind` these to the LLM using its `bind` method to include them within a declarative, functional chain. Below is an example.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful AI assistant named Fred."),
        ("user", "{input}")
    ]
)
chain = (
    prompt
    | ChatNVIDIA(model="steerlm_llama_70b").bind(labels={"creativity": 9, "complexity": 0, "verbosity": 9})
    | StrOutputParser()
)

for txt in chain.stream({"input": "Why is a PB&J?"}):
    print(txt, end="")
```

## Multimodal

NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over.

These models also accept `labels`, similar to the Steering LLMs above. In addition to `creativity`, `complexity`, and `verbosity`, these models support a `quality` toggle.

An example model supporting multimodal inputs is `playground_neva_22b`.
These models accept LangChain's standard image formats. Below are examples. ```python import requests image_url = "https://picsum.photos/seed/kitten/300/200" image_content = requests.get(image_url).content ``` Initialize the model like so: ```python from langchain_nvidia_ai_endpoints import ChatNVIDIA llm = ChatNVIDIA(model="playground_neva_22b") ``` #### Passing an image as a URL ```python from langchain_core.messages import HumanMessage llm.invoke( [ HumanMessage(content=[ {"type": "text", "text": "Describe this image:"}, {"type": "image_url", "image_url": {"url": image_url}}, ]) ]) ``` ```python ### You can specify the labels for steering here as well. You can try setting a low verbosity, for instance from langchain_core.messages import HumanMessage llm.invoke( [ HumanMessage(content=[ {"type": "text", "text": "Describe this image:"}, {"type": "image_url", "image_url": {"url": image_url}}, ]) ], labels={ "creativity": 0, "quality": 9, "complexity": 0, "verbosity": 0 } ) ``` #### Passing an image as a base64 encoded string ```python import base64 b64_string = base64.b64encode(image_content).decode('utf-8') llm.invoke( [ HumanMessage(content=[ {"type": "text", "text": "Describe this image:"}, {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64_string}"}}, ]) ]) ``` #### Directly within the string The NVIDIA API uniquely accepts images as base64 images inlined within <img> HTML tags. While this isn't interoperable with other LLMs, you can directly prompt the model accordingly. ```python base64_with_mime_type = f"data:image/png;base64,{b64_string}" llm.invoke( f'What\'s in this image?\n<img src="{base64_with_mime_type}" />' ) ``` ## RAG: Context models NVIDIA also has Q&A models that support a special "context" chat message containing retrieved context (such as documents within a RAG chain). This is useful to avoid prompt-injecting the model. **Note:** Only "user" (human) and "context" chat messages are supported for these models, not system or AI messages useful in conversational flows. The `_qa_` models like `nemotron_qa_8b` support this. ```python from langchain_nvidia_ai_endpoints import ChatNVIDIA from langchain_core.prompts import ChatPromptTemplate from langchain_core.output_parsers import StrOutputParser from langchain_core.messages import ChatMessage prompt = ChatPromptTemplate.from_messages( [ ChatMessage(role="context", content="Parrots and Cats have signed the peace accord."), ("user", "{input}") ] ) llm = ChatNVIDIA(model="nemotron_qa_8b") chain = ( prompt | llm | StrOutputParser() ) chain.invoke({"input": "What was signed?"}) ``` ## Embeddings You can also connect to embeddings models through this package. Below is an example: ``` from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings embedder = NVIDIAEmbeddings(model="nvolveqa_40k") embedder.embed_query("What's the temperature today?") embedder.embed_documents([ "The temperature is 42 degrees.", "Class is dismissed at 9 PM." ]) ``` By default the embedding model will use the "passage" type for documents and "query" type for queries, but you can fix this on the instance. ```python query_embedder = NVIDIAEmbeddings(model="nvolveqa_40k", model_type="query") doc_embeddder = NVIDIAEmbeddings(model="nvolveqa_40k", model_type="passage") ```
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\nvidia-ai-endpoints\README.md
.md
# langchain-nvidia-trt
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\nvidia-trt\README.md
.md
# langchain-openai

This package contains the LangChain integrations for OpenAI through their `openai` SDK.

## Installation and Setup

- Install the LangChain partner package

```bash
pip install langchain-openai
```

- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`)

## LLM

See a [usage example](http://python.langchain.com/docs/integrations/llms/openai).

```python
from langchain_openai import OpenAI
```

If you are using a model hosted on `Azure`, you should use a different wrapper for that:

```python
from langchain_openai import AzureOpenAI
```

For a more detailed walkthrough of the `Azure` wrapper, see [here](http://python.langchain.com/docs/integrations/llms/azure_openai)

## Chat model

See a [usage example](http://python.langchain.com/docs/integrations/chat/openai).

```python
from langchain_openai import ChatOpenAI
```

If you are using a model hosted on `Azure`, you should use a different wrapper for that:

```python
from langchain_openai import AzureChatOpenAI
```

For a more detailed walkthrough of the `Azure` wrapper, see [here](http://python.langchain.com/docs/integrations/chat/azure_chat_openai)

## Text Embedding Model

See a [usage example](http://python.langchain.com/docs/integrations/text_embedding/openai)

```python
from langchain_openai import OpenAIEmbeddings
```

If you are using a model hosted on `Azure`, you should use a different wrapper for that:

```python
from langchain_openai import AzureOpenAIEmbeddings
```

For a more detailed walkthrough of the `Azure` wrapper, see [here](https://python.langchain.com/docs/integrations/text_embedding/azureopenai)
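As a brief illustration of the chat model wrapper above, here is a minimal sketch. It assumes `OPENAI_API_KEY` is set, and the model name is illustrative:

```python
from langchain_openai import ChatOpenAI

# Model name is illustrative; any available OpenAI chat model id works here
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

response = llm.invoke("Say hello in one short sentence.")
print(response.content)
```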
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\openai\README.md
.md
# langchain-pinecone This package contains the LangChain integration with Pinecone. ## Installation ```bash pip install -U langchain-pinecone ``` And you should configure credentials by setting the following environment variables: - `PINECONE_API_KEY` - `PINECONE_INDEX_NAME` ## Usage The `PineconeVectorStore` class exposes the connection to the Pinecone vector store. ```python from langchain_pinecone import PineconeVectorStore embeddings = ... # use a LangChain Embeddings class vectorstore = PineconeVectorStore(embeddings=embeddings) ```
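Continuing from the `vectorstore` above (the index is taken from `PINECONE_INDEX_NAME`), a minimal hedged sketch of indexing and querying via the generic vector-store interface (texts and query are illustrative):

```python
# Add a few texts to the configured index
vectorstore.add_texts(
    ["Pinecone is a vector database.", "LangChain wraps many vector stores."]
)

# Query by semantic similarity
docs = vectorstore.similarity_search("What is Pinecone?", k=1)
print(docs[0].page_content)
```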
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\pinecone\README.md
.md
# langchain-robocorp This package contains the LangChain integrations for [Robocorp](https://github.com/robocorp/robocorp). ## Installation ```bash pip install -U langchain-robocorp ``` ## Action Server Toolkit See [ActionServerToolkit](https://python.langchain.com/docs/integrations/toolkits/robocorp) for detailed documentation.
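As a hedged sketch of wiring the toolkit into an agent-ready tool list: the local Action Server URL is an assumption, and the exact constructor arguments may differ, so treat the linked docs as authoritative.

```python
from langchain_robocorp import ActionServerToolkit

# URL below assumes a locally running Robocorp Action Server (assumption)
toolkit = ActionServerToolkit(url="http://localhost:8080")

# Expose the server's actions as LangChain tools
tools = toolkit.get_tools()
for tool in tools:
    print(tool.name)
```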
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\robocorp\README.md
.md
# langchain-together
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\partners\together\README.md
.md
# ๐Ÿฆœโœ‚๏ธ LangChain Text Splitters [![Downloads](https://static.pepy.tech/badge/langchain_text_splitters/month)](https://pepy.tech/project/langchain_text_splitters) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) ## Quick Install ```bash pip install langchain-text-splitters ``` ## What is it? LangChain Text Splitters contains utilities for splitting into chunks a wide variety of text documents. For full documentation see the [API reference](https://api.python.langchain.com/en/stable/text_splitters_api_reference.html) and the [Text Splitters](https://python.langchain.com/docs/modules/data_connection/document_transformers/) module in the main docs. ## ๐Ÿ“• Releases & Versioning `langchain-text-splitters` is currently on version `0.0.x`. Minor version increases will occur for: - Breaking changes for any public interfaces NOT marked `beta` Patch version increases will occur for: - Bug fixes - New features - Any changes to private interfaces - Any changes to `beta` features ## ๐Ÿ’ Contributing As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation. For detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).
C:\Users\wesla\CodePilotAI\repositories\langchain\libs\text-splitters\README.md
.md
# LangChain Templates

LangChain Templates are the easiest and fastest way to build a production-ready LLM application.
These templates serve as a set of reference architectures for a wide variety of popular LLM use cases.
They are all in a standard format which makes it easy to deploy them with [LangServe](https://github.com/langchain-ai/langserve).

🚩 We will be releasing a hosted version of LangServe for one-click deployments of LangChain applications. [Sign up here](https://airtable.com/app0hN6sd93QcKubv/shrAjst60xXa6quV2) to get on the waitlist.

## Quick Start

To use, first install the LangChain CLI.

```shell
pip install -U langchain-cli
```

Next, create a new LangChain project:

```shell
langchain app new my-app
```

This will create a new directory called `my-app` with two folders:

- `app`: This is where LangServe code will live
- `packages`: This is where your chains or agents will live

To pull in an existing template as a package, you first need to go into your new project:

```shell
cd my-app
```

And you can then add a template as a project.
In this getting started guide, we will add a simple `pirate-speak` project.
All this project does is convert user input into pirate speak.

```shell
langchain app add pirate-speak
```

This will pull in the specified template into `packages/pirate-speak`.

You will then be prompted if you want to install it.
This is the equivalent of running `pip install -e packages/pirate-speak`.
You should generally accept this (or run that same command afterwards).
We install it with `-e` so that if you modify the template at all (which you likely will) the changes are updated.

After that, it will ask you if you want to generate route code for this project.
This is code you need to add to your app to start using this chain.
If we accept, we will see the following code generated:

```shell
from pirate_speak.chain import chain as pirate_speak_chain

add_routes(app, pirate_speak_chain, path="/pirate-speak")
```

You can now edit the template you pulled down.
You can change the code files in `packages/pirate-speak` to use a different model, different prompt, different logic.
Note that the above code snippet always expects the final chain to be importable as `from pirate_speak.chain import chain`, so you should either keep the structure of the package similar enough to respect that or be prepared to update that code snippet.

Once you have done as much of that as you want, in order to have LangServe use this project you need to modify `app/server.py`.
Specifically, you should add the above code snippet to `app/server.py` so that file looks like:

```python
from fastapi import FastAPI
from langserve import add_routes
from pirate_speak.chain import chain as pirate_speak_chain

app = FastAPI()

add_routes(app, pirate_speak_chain, path="/pirate-speak")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

For this particular application, we will use OpenAI as the LLM, so we need to export our OpenAI API key:

```shell
export OPENAI_API_KEY=sk-...
``` You can then spin up production-ready endpoints, along with a playground, by running: ```shell langchain serve ``` This now gives a fully deployed LangServe application. For example, you get a playground out-of-the-box at [http://127.0.0.1:8000/pirate-speak/playground/](http://127.0.0.1:8000/pirate-speak/playground/): ![Screenshot of the LangServe Playground interface with input and output fields demonstrating pirate speak conversion.](docs/playground.png "LangServe Playground Interface") Access API documentation at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) ![Screenshot of the API documentation interface showing available endpoints for the pirate-speak application.](docs/docs.png "API Documentation Interface") Use the LangServe python or js SDK to interact with the API as if it were a regular [Runnable](https://python.langchain.com/docs/expression_language/). ```python from langserve import RemoteRunnable api = RemoteRunnable("http://127.0.0.1:8000/pirate-speak") api.invoke({"text": "hi"}) ``` That's it for the quick start! You have successfully downloaded your first template and deployed it with LangServe. ## Additional Resources ### [Index of Templates](docs/INDEX.md) Explore the many templates available to use - from advanced RAG to agents. ### [Contributing](docs/CONTRIBUTING.md) Want to contribute your own template? It's pretty easy! These instructions walk through how to do that. ### [Launching LangServe from a Package](docs/LAUNCHING_PACKAGE.md) You can also launch LangServe from a package directly (without having to create a new project). These instructions cover how to do that.
C:\Users\wesla\CodePilotAI\repositories\langchain\templates\README.md
.md
# anthropic-iterative-search

This template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.

It is heavily inspired by [this notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/long_context/wikipedia-search-cookbook.ipynb).

## Environment Setup

Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package anthropic-iterative-search
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add anthropic-iterative-search
```

And add the following code to your `server.py` file:

```python
from anthropic_iterative_search import chain as anthropic_iterative_search_chain

add_routes(app, anthropic_iterative_search_chain, path="/anthropic-iterative-search")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by running:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/anthropic-iterative-search/playground](http://127.0.0.1:8000/anthropic-iterative-search/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/anthropic-iterative-search")
```
C:\Users\wesla\CodePilotAI\repositories\langchain\templates\anthropic-iterative-search\README.md
.md
# basic-critique-revise

Iteratively generate schema candidates and revise them based on errors.

## Environment Setup

This template uses OpenAI function calling, so you will need to set the `OPENAI_API_KEY` environment variable in order to use this template.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package basic-critique-revise
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add basic-critique-revise
```

And add the following code to your `server.py` file:

```python
from basic_critique_revise import chain as basic_critique_revise_chain

add_routes(app, basic_critique_revise_chain, path="/basic-critique-revise")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by running:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/basic-critique-revise/playground](http://127.0.0.1:8000/basic-critique-revise/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/basic-critique-revise")
```
C:\Users\wesla\CodePilotAI\repositories\langchain\templates\basic-critique-revise\README.md
.md
# Bedrock JCVD 🕺🥋

## Overview

LangChain template that uses [Anthropic's Claude on Amazon Bedrock](https://aws.amazon.com/bedrock/claude/) to behave like JCVD.

> I am the Fred Astaire of Chatbots! 🕺

![Animated GIF of Jean-Claude Van Damme dancing.](https://media.tenor.com/CVp9l7g3axwAAAAj/jean-claude-van-damme-jcvd.gif "Jean-Claude Van Damme Dancing")

## Environment Setup

### AWS Credentials

This template uses [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html), the AWS SDK for Python, to call [Amazon Bedrock](https://aws.amazon.com/bedrock/). You **must** configure both AWS credentials *and* an AWS Region in order to make requests.

> For information on how to do this, see [AWS Boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) (Developer Guide > Credentials).

### Foundation Models

By default, this template uses [Anthropic's Claude v2](https://aws.amazon.com/about-aws/whats-new/2023/08/claude-2-foundation-model-anthropic-amazon-bedrock/) (`anthropic.claude-v2`).

> To request access to a specific model, check out the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) (Model access)

To use a different model, set the environment variable `BEDROCK_JCVD_MODEL_ID`. A list of base models is available in the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html) (Use the API > API operations > Run inference > Base Model IDs).

> The full list of available models (including base and [custom models](https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html)) is available in the [Amazon Bedrock Console](https://docs.aws.amazon.com/bedrock/latest/userguide/using-console.html) under **Foundation Models** or by calling [`aws bedrock list-foundation-models`](https://docs.aws.amazon.com/cli/latest/reference/bedrock/list-foundation-models.html).

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package bedrock-jcvd
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add bedrock-jcvd
```

And add the following code to your `server.py` file:

```python
from bedrock_jcvd import chain as bedrock_jcvd_chain

add_routes(app, bedrock_jcvd_chain, path="/bedrock-jcvd")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by running:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs).
We can also access the playground at [http://127.0.0.1:8000/bedrock-jcvd/playground](http://127.0.0.1:8000/bedrock-jcvd/playground) ![Screenshot of the LangServe Playground interface with an example input and output demonstrating a Jean-Claude Van Damme voice imitation.](jcvd_langserve.png "JCVD Playground")
C:\Users\wesla\CodePilotAI\repositories\langchain\templates\bedrock-jcvd\README.md