diff --git a/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/data_level0.bin b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/data_level0.bin new file mode 100644 index 0000000000000000000000000000000000000000..1558fcee9921a871d37d9700788a14683d83e237 --- /dev/null +++ b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/data_level0.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2421147aa5c23ec9529cef6949c4fc0418bcb13b4863e39493298f8950c41ba5 +size 74568000 diff --git a/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/header.bin b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/header.bin new file mode 100644 index 0000000000000000000000000000000000000000..98bfb7766f034dd187bdf98e6bdee3576480a0e9 --- /dev/null +++ b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/header.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40519952e8bea509bb9519eebece9a9362e9e991fdad2e3226d0967f6c2442ec +size 100 diff --git a/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/index_metadata.pickle b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/index_metadata.pickle new file mode 100644 index 0000000000000000000000000000000000000000..4b0d0b91c4c06c85f7c4c5ed2345b6b750ad8ef6 --- /dev/null +++ b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/index_metadata.pickle @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a165ac942e6e706ef8893709883c5b76a02667d4352a39cedc99617be7cfb835 +size 346117 diff --git a/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/length.bin b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/length.bin new file mode 100644 index 0000000000000000000000000000000000000000..9565abb1eb08fc414899a24d3154b5c6510b6278 --- /dev/null +++ b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/length.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddcd641da818ad91c965694eac10a44785d16a8668a438b66c8424cc778baef7 +size 24000 diff --git a/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/link_lists.bin b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/link_lists.bin new file mode 100644 index 0000000000000000000000000000000000000000..578c7afa8dba1bde60967f9d3916158fb26b9ee6 --- /dev/null +++ b/chroma-db-langchain/191fd919-436d-4c2c-b784-ba68a1bb79b9/link_lists.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a7c30248d8a59a105fedb1798a5642cca01d772ddf4b06688c85979ed2b9248 +size 52152 diff --git a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/data_level0.bin b/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/data_level0.bin deleted file mode 100644 index 742502d1fc1eb15cbc0055ce1791bc2da7d14c18..0000000000000000000000000000000000000000 --- a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/data_level0.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f7b31772d7b492860c7d8a5bf5009e837e4210f36db02b577d200213ec74a1c6 -size 74568000 diff --git a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/header.bin b/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/header.bin deleted file mode 100644 index 1ad250315e692143e9f811c0360c99e59688fba1..0000000000000000000000000000000000000000 --- a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/header.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid 
sha256:cb0d9006c0a810bed3cf70ce96081931f4ca52fba11d05376a99d4e432d9d994 -size 100 diff --git a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/index_metadata.pickle b/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/index_metadata.pickle deleted file mode 100644 index 089b60c174eafc7322c35132c34a883d27574f64..0000000000000000000000000000000000000000 --- a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/index_metadata.pickle +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:43b1ec3c7d4b11231e551e43c43dc6f8c6cbf3221517f7ed1e54afd70f6e08a0 -size 346117 diff --git a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/length.bin b/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/length.bin deleted file mode 100644 index 4465780e718f5ace12641eb64a7f6e34d072cbc4..0000000000000000000000000000000000000000 --- a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/length.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5b30b7d36428adb2def6746197d2a25c90b0dc6c7e0bcfd6216bfdc81dc6ad98 -size 24000 diff --git a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/link_lists.bin b/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/link_lists.bin deleted file mode 100644 index ce9e4f16d1fdddbd1701dcdcd876a62d4a3baed0..0000000000000000000000000000000000000000 --- a/chroma-db-langchain/a991ffa1-6102-416d-a561-877198e9f5de/link_lists.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:83cab85fce66e7f40c7b93609e7b34d9970f8dd7fb0ec8ed3ca9691f7d515b84 -size 52220 diff --git a/chroma-db-langchain/chroma.sqlite3 b/chroma-db-langchain/chroma.sqlite3 index f68a0c47afe98db21e0984dc2bd943f9e758b3a4..078b09fdb9c0d7c64df17c09eeb56e11f1e46b50 100644 --- a/chroma-db-langchain/chroma.sqlite3 +++ b/chroma-db-langchain/chroma.sqlite3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:1fb702f4ed770cf0f0630d4d9c999de16409e95f0708cc6d4bc41f9b6758e0c0 -size 223997952 +oid sha256:24ae5c40f8e6c3af71641afb1a2471995bbbe4db718cd5b35df813db056657bd +size 227315712 diff --git a/chroma-db-langchain/document_dict_langchain.pkl b/chroma-db-langchain/document_dict_langchain.pkl index e7837a582dcec14c6d535eb11c9c4c8ca4e8b833..cd01dec39130ca72ab3d03c6be61d76c183b99af 100644 --- a/chroma-db-langchain/document_dict_langchain.pkl +++ b/chroma-db-langchain/document_dict_langchain.pkl @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9288ead475c396868d4046b709ba6b4704b469dc10d571d61e2ac4a651dc8360 -size 9495017 +oid sha256:8c24b5de6a028f9823036b20962891c7287b01f804d62ce15dd508494bc89eb8 +size 9762214 diff --git a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/data_level0.bin b/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/data_level0.bin deleted file mode 100644 index d0c800f36d10e1b231b6da284f9fa1777f6bc8cc..0000000000000000000000000000000000000000 --- a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/data_level0.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ff775114aa9ea2874506dc5fb42fb0cb40c8aba1d39a5ccc40c0d3e01fc617fe -size 74568000 diff --git a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/header.bin b/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/header.bin deleted file mode 100644 index b916f30a920cf40aee5ef2aa7d5ca71722a084c9..0000000000000000000000000000000000000000 --- 
a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/header.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:6485506af204d2b936b1f28bc63bcc7b791d4b431bc168bdfef9290d9059fe73 -size 100 diff --git a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/index_metadata.pickle b/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/index_metadata.pickle deleted file mode 100644 index 211c771e49eb054eb47466ccbfa95db8e81faa09..0000000000000000000000000000000000000000 --- a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/index_metadata.pickle +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0e4a0ed52b4a277d65769ca116592388b62dc31871eabdb3504d84c656914321 -size 346117 diff --git a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/length.bin b/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/length.bin deleted file mode 100644 index 38eaecda6c36ea8cd40f0f12f95631ff34304cda..0000000000000000000000000000000000000000 --- a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/length.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fec5cf06bc6bc8d7e43df5ec03e41faaf11b25a94375e31500d22aad8d9b19b3 -size 24000 diff --git a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/link_lists.bin b/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/link_lists.bin deleted file mode 100644 index 4aaa3be84f66a3cb6bca5dd5cd2197809db2a438..0000000000000000000000000000000000000000 --- a/chroma-db-llama_index/c7e869e3-1822-4dde-8d40-a8f631ba43f7/link_lists.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5e9702fd3fab5b9e0a9b3495b9051dcbec394bf49a94c84503c00f2c59468e2c -size 52152 diff --git a/chroma-db-llama_index/chroma.sqlite3 b/chroma-db-llama_index/chroma.sqlite3 index 5fe110587c41e8ee99dcfcb64e38d093f780ac9c..f16a0eeb3ec04cc8d5133f37602249f6265c8f5e 100644 --- a/chroma-db-llama_index/chroma.sqlite3 +++ b/chroma-db-llama_index/chroma.sqlite3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2f9f841426404c5901ac2f13ffc1c7224cb2783cf4e0276ffbc5783f1426cb29 -size 205246464 +oid sha256:c5e49109f87feb4a809ebdf8959a980422db67b0e0225b51ee0c23f2c9af3fcd +size 235962368 diff --git a/chroma-db-llama_index/document_dict_llama_index.pkl b/chroma-db-llama_index/document_dict_llama_index.pkl index 378a38b3c5510309629003cd47241f4916950d33..afaca8e021c76b628d8b47345d3851edc286fbcc 100644 --- a/chroma-db-llama_index/document_dict_llama_index.pkl +++ b/chroma-db-llama_index/document_dict_llama_index.pkl @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:5153123cbc6d2e83c1d6b60d23dd5afee00bd9a4967143b9e8f30f1792c5e932 -size 8954720 +oid sha256:5e59ccd849ee0500a175687c77597b5970170bc8d1afb26f82bb058df71150f2 +size 9655771 diff --git a/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/data_level0.bin b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/data_level0.bin new file mode 100644 index 0000000000000000000000000000000000000000..926b060effc98b78a77ef42b39afadd85f06f1f3 --- /dev/null +++ b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/data_level0.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9567febc769436955f0e69825fe8275d4745cd0cf0cc44caf8affea796cd014e +size 86996000 diff --git a/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/header.bin 
b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/header.bin new file mode 100644 index 0000000000000000000000000000000000000000..ea00ef763665f1d40311fdd34b8b231662aa36af --- /dev/null +++ b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/header.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd7cfb541957dd514cb4cfdc247ec2c5d46c93bde9dd5f31ae24fdb456b9754b +size 100 diff --git a/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/index_metadata.pickle b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/index_metadata.pickle new file mode 100644 index 0000000000000000000000000000000000000000..09c51c7289a838e9bbe0e8d1a275a83c1bc50862 --- /dev/null +++ b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/index_metadata.pickle @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e98c3bbc30cae40677bf9d215340b169e9cdd61e1bb5bd5e71f7d9816330d757 +size 404132 diff --git a/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/length.bin b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/length.bin new file mode 100644 index 0000000000000000000000000000000000000000..4740f29c1e79d8df9217594e877313103b5cd47e --- /dev/null +++ b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/length.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb8ddaa31f2796e89d98136036eaf4ced37ac942d9705ad6aba6e49276a3570d +size 28000 diff --git a/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/link_lists.bin b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/link_lists.bin new file mode 100644 index 0000000000000000000000000000000000000000..3e7da661c4d29a8357af18cbc7f53782eb4382c7 --- /dev/null +++ b/chroma-db-llama_index/e3589186-06b2-4509-91f3-2f04395c8967/link_lists.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e6f7bb6981a5c9f962684dfb4eddc41c4ebcc7d51f4348e2ed76920e7d0d698 +size 61184 diff --git a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/data_level0.bin b/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/data_level0.bin deleted file mode 100644 index fc3262315bd5d0cf4f4eae12837fd34a64c325b1..0000000000000000000000000000000000000000 --- a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/data_level0.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:93df5543a015938eddd1f7c3cf53c35de01709be02c54836a47a1b445a39941c -size 24856000 diff --git a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/index_metadata.pickle b/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/index_metadata.pickle deleted file mode 100644 index da4ece59d9da22a98dd3245e5203e34bd98bcb93..0000000000000000000000000000000000000000 --- a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/index_metadata.pickle +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:5dbbac2bbab35c9a2282f4d5edba22b4da61d44a03cd94a6dfed3a957ec84603 -size 114057 diff --git a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/length.bin b/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/length.bin deleted file mode 100644 index 9d9a31cadde53c3950b0b517d18f327c87cd2226..0000000000000000000000000000000000000000 --- a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/length.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid 
sha256:448e2b40f4fab352a4c0c747a4c13d28a753e48374f6f96c9cfbd8f153ea30f9 -size 8000 diff --git a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/link_lists.bin b/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/link_lists.bin deleted file mode 100644 index 191ce0b20d2b1c0d9cc99538920c5257d1532d91..0000000000000000000000000000000000000000 --- a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/link_lists.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:eea6193ad5b15addc24b2bb8a6381d88976c55d7e8729fc78db5ab9909f782c8 -size 17316 diff --git a/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/data_level0.bin b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/data_level0.bin new file mode 100644 index 0000000000000000000000000000000000000000..d3a4a0a8f58ca1907d3670781a66596bb623d9b9 --- /dev/null +++ b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/data_level0.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30530f5b774ce644134ff656ea0ad2a0cbe63f6eddc3226926c8a730d933746e +size 24856000 diff --git a/chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/header.bin b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/header.bin similarity index 100% rename from chroma-db-openai_cookbooks/0b25dfdf-6d35-44aa-92ea-ba471d44a52c/header.bin rename to chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/header.bin diff --git a/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/index_metadata.pickle b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/index_metadata.pickle new file mode 100644 index 0000000000000000000000000000000000000000..2dceaace715ecc08ca78befec4447b937445fca6 --- /dev/null +++ b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/index_metadata.pickle @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcf2eabc3d876731469c40687897ca8c219e088189c29c4d1ef8ca2216e7b8be +size 114057 diff --git a/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/length.bin b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/length.bin new file mode 100644 index 0000000000000000000000000000000000000000..076f60a9a47d3528ea710457c3fe99f9b84bbf88 --- /dev/null +++ b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/length.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d96a38e2a9ec1ca0e380b13a8bb9e2e29723e5b71e00e342fd4497a92e328a45 +size 8000 diff --git a/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/link_lists.bin b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/link_lists.bin new file mode 100644 index 0000000000000000000000000000000000000000..e6e664c4221ca68da914113d82204d82cb35494f --- /dev/null +++ b/chroma-db-openai_cookbooks/83713cbb-2048-4fd9-8c69-6c57c3dd9e9a/link_lists.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4cd0071af97255b9180f3809b7f8d098eebed62956f5abcb6a5d115431dcee74 +size 17316 diff --git a/chroma-db-openai_cookbooks/chroma.sqlite3 b/chroma-db-openai_cookbooks/chroma.sqlite3 index a82cfcc2a079937723d418734655419ad35bca25..41db8150628163c4b2b8db7a5ea44af21b47d911 100644 --- a/chroma-db-openai_cookbooks/chroma.sqlite3 +++ b/chroma-db-openai_cookbooks/chroma.sqlite3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f306cdfca7f7d2899118eb12fb1fc1b7393ebf80f70e358340fe9d6ea87e33e0 -size 
83746816 +oid sha256:6b4648eaa8091fc2b033b5ac983b0f66c133aaece317ee4f6958322ad48ca3f3 +size 88666112 diff --git a/chroma-db-openai_cookbooks/document_dict_openai_cookbooks.pkl b/chroma-db-openai_cookbooks/document_dict_openai_cookbooks.pkl index 3ef5f5d32325be6439145308155de936fa1858d1..15fb8d625aa6a0a8b4e53ae3f934269165a15748 100644 --- a/chroma-db-openai_cookbooks/document_dict_openai_cookbooks.pkl +++ b/chroma-db-openai_cookbooks/document_dict_openai_cookbooks.pkl @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f3c410a0972b9c2dfa286ad03449d8024704fbcb0b313f126ae10dd1d7b94f21 -size 3490619 +oid sha256:d1b61dcd793c1f5995a2182c9dbbe84759d42c1d0babb4e47f99bb4363293605 +size 3741933 diff --git a/chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/data_level0.bin b/chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/data_level0.bin similarity index 100% rename from chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/data_level0.bin rename to chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/data_level0.bin diff --git a/chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/header.bin b/chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/header.bin similarity index 100% rename from chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/header.bin rename to chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/header.bin diff --git a/chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/length.bin b/chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/length.bin new file mode 100644 index 0000000000000000000000000000000000000000..af1c062b14c6d9670a261daa3a60282bd0ba8513 --- /dev/null +++ b/chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/length.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b573464ecbf09c406291e7c466209f06d69e272fd1b89f9fe800a34bdb91c226 +size 4000 diff --git a/chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/link_lists.bin b/chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/link_lists.bin similarity index 100% rename from chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/link_lists.bin rename to chroma-db-peft/c252c979-f4d3-484c-8a24-f045681cfc3d/link_lists.bin diff --git a/chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/length.bin b/chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/length.bin deleted file mode 100644 index a847062917bb944c33ea99365430499322450866..0000000000000000000000000000000000000000 --- a/chroma-db-peft/cd09d961-9ab2-47dd-a70f-a571d9251de5/length.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:1bbe165a5971a3116154f1c299218a1b204ddba6ac1d3587732d04b34133c95d -size 4000 diff --git a/chroma-db-peft/chroma.sqlite3 b/chroma-db-peft/chroma.sqlite3 index 38fd2835f6b266fc0e127c87e05965db2ee62ea8..983eb3d4cc32705adca6306ee739155812082bff 100644 --- a/chroma-db-peft/chroma.sqlite3 +++ b/chroma-db-peft/chroma.sqlite3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:136ffa7189b3063d76c59ef7a76e4fc31617876be83ccf853a6512438f0f50b2 -size 5292032 +oid sha256:b36f58ccee36a79e20176c908eda56d403115cc39f16671c5be3886f5f136b13 +size 5406720 diff --git a/chroma-db-peft/document_dict_peft.pkl b/chroma-db-peft/document_dict_peft.pkl index b530b867177f1b9cf82c959743a6f26d4bba61d8..a80e6b7d7456287568b482043f872c0cd72c6962 100644 --- a/chroma-db-peft/document_dict_peft.pkl +++ b/chroma-db-peft/document_dict_peft.pkl @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:61a7b7572c03ac55bd3bf7f409e99de3c16ba3a2ac4c66ccea02443f5f0c0793 -size 
261248 +oid sha256:389c05ff699fcd247c93b2678791f9c9f7eec2901d887730e034dffb1c7038e8 +size 270073 diff --git a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/data_level0.bin b/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/data_level0.bin deleted file mode 100644 index 99a6bde885a40d464d600c342419a8a5dfa03502..0000000000000000000000000000000000000000 --- a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/data_level0.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:71dacc58c9b86fda98eba379dabcb91f62ed3a10d381647faa10d0e43889ff4f -size 12428000 diff --git a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/header.bin b/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/header.bin deleted file mode 100644 index 17f5d0f10a25bea321dda3cf2a655383cae45c1f..0000000000000000000000000000000000000000 --- a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/header.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:db3337c9290bd8362d7849233bb2ce47b0b5a48d1790b5db251bd3ecb56a8fd4 -size 100 diff --git a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/index_metadata.pickle b/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/index_metadata.pickle deleted file mode 100644 index 83cf350bd7d04d707377767ffae60abd6b50fe42..0000000000000000000000000000000000000000 --- a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/index_metadata.pickle +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7f2ff58c1cdba77e74cbd707994a2cceaddcb890e9f11ae38a6b1fae30af5e4e -size 56042 diff --git a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/length.bin b/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/length.bin deleted file mode 100644 index fb155d822b6ca4351e2925bbc2dba0f49f05e0ab..0000000000000000000000000000000000000000 --- a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/length.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fc19b1997119425765295aeab72d76faa6927d4f83985d328c26f20468d6cc76 -size 4000 diff --git a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/link_lists.bin b/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/link_lists.bin deleted file mode 100644 index 84cf0e8546384807ecfde0f7c29870bbc5a58ef2..0000000000000000000000000000000000000000 --- a/chroma-db-transformers/72747caf-b9b0-48d5-8712-4cf07905d824/link_lists.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f8508e40fb725d5c517c803a091d55067abf0479de7fb605d36cdcfaa454a4eb -size 8148 diff --git a/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/data_level0.bin b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/data_level0.bin new file mode 100644 index 0000000000000000000000000000000000000000..b27a9a601edb4f0084644d8d9de015163c476380 --- /dev/null +++ b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/data_level0.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4905080b3461da13555985ff85c5660019a389e56c813bd3af94b74b674c9350 +size 12428000 diff --git a/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/header.bin b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/header.bin new file mode 100644 index 0000000000000000000000000000000000000000..55b33e16382fa9951317646c6145e72c93d0e9fb --- /dev/null +++ 
b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/header.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32b85e1f983554ee44ccfad97691e6a34167e44a21b142340c0d8d4b7e7b5615 +size 100 diff --git a/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/index_metadata.pickle b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/index_metadata.pickle new file mode 100644 index 0000000000000000000000000000000000000000..ce2051727ea5ed98648645856409729c8eec196c --- /dev/null +++ b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/index_metadata.pickle @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ceef146ebe3c4f4cdf6847d3af12419fec68b99abf2ef6b60fd0daae667fb6a7 +size 56042 diff --git a/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/length.bin b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/length.bin new file mode 100644 index 0000000000000000000000000000000000000000..529f3bd4214929bd2834438c07009f40547cbe3a --- /dev/null +++ b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/length.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0de8d2fac125632afe04d41917c53cea67281576d82a011ae7a778ef3cb2684 +size 4000 diff --git a/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/link_lists.bin b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/link_lists.bin new file mode 100644 index 0000000000000000000000000000000000000000..ec7d324a308ae662a645b67dfa4462286be61fb3 --- /dev/null +++ b/chroma-db-transformers/ae4313a8-a344-4a78-9635-059445504a74/link_lists.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5098cab52fc6f8df69adbe312a05d2df77efae77f3d769069da621c6b570a4d1 +size 8148 diff --git a/chroma-db-transformers/chroma.sqlite3 b/chroma-db-transformers/chroma.sqlite3 index ce956a7dc607128db2b672dd7028858a4ea02f15..1f9639916bf9195fea2fcea86aa2382cf3164b58 100644 --- a/chroma-db-transformers/chroma.sqlite3 +++ b/chroma-db-transformers/chroma.sqlite3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:53ece68c3b8a7c87f4b630e542127601194dcc4d97ac9d2a236938b575e33ae6 -size 63442944 +oid sha256:25c626cb3a2cd7286126cadb2e1148d1a05c363c34bb8e6b1694427564b7dddc +size 65089536 diff --git a/chroma-db-transformers/document_dict_transformers.pkl b/chroma-db-transformers/document_dict_transformers.pkl index 35d603880e1bff22c73231825adbab940ff58250..cc937f6ec851f892c16b127b530ce60141cd645a 100644 --- a/chroma-db-transformers/document_dict_transformers.pkl +++ b/chroma-db-transformers/document_dict_transformers.pkl @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:0eb21c73e0cf7ef0970b615c66df67dae4e973befea9fdd22721dd69b0939231 -size 3166114 +oid sha256:136c1342439595a6dfefb2fee5c32139fffeb22cd5351f67b600803580b60aae +size 3255021 diff --git a/chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/data_level0.bin b/chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/data_level0.bin similarity index 100% rename from chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/data_level0.bin rename to chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/data_level0.bin diff --git a/chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/header.bin b/chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/header.bin similarity index 100% rename from chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/header.bin rename to chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/header.bin diff --git 
a/chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/length.bin b/chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/length.bin new file mode 100644 index 0000000000000000000000000000000000000000..da9778e38de3eb801c54a53e1df506fa5398faf8 --- /dev/null +++ b/chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/length.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:736e57c24fa009f96a9bd8ead552b4d3aa91ef5141de93d215f87dc0633a2f16 +size 4000 diff --git a/chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/link_lists.bin b/chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/link_lists.bin similarity index 100% rename from chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/link_lists.bin rename to chroma-db-trl/4a557a8f-56f8-4209-85f5-5723a2b2dc4a/link_lists.bin diff --git a/chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/length.bin b/chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/length.bin deleted file mode 100644 index 68a515e57d322190b05454b9a97d2d285229197c..0000000000000000000000000000000000000000 --- a/chroma-db-trl/65e2350c-2bd4-46a8-b379-4c6561901fe1/length.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4b8d924c7d1367cea6fd3c8fa8df0be395d4f62bf898bf07df588aa3140d7b61 -size 4000 diff --git a/chroma-db-trl/chroma.sqlite3 b/chroma-db-trl/chroma.sqlite3 index e927fdfc095ff3d14f61c3a2bb42a8e07d961d6d..4df79e46f91f3e222d0e58378a981b65c8ed3543 100644 --- a/chroma-db-trl/chroma.sqlite3 +++ b/chroma-db-trl/chroma.sqlite3 @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:066e040cdf71203d2d2f90a870854e9fd16418d5dc1229fccf52d2887cec1c5c -size 5292032 +oid sha256:f2f674cf5d0dff76e533a5e00159c74db7f503e9d03196269990f98989d1fe99 +size 5853184 diff --git a/chroma-db-trl/document_dict_trl.pkl b/chroma-db-trl/document_dict_trl.pkl index 40a7866557a5dc77372c3d13a4e375e26d2937af..7438f8e28641723122b5155e40a5cecee427ea4d 100644 --- a/chroma-db-trl/document_dict_trl.pkl +++ b/chroma-db-trl/document_dict_trl.pkl @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:ea2cf47511ef464f04c458d950cae2ac158a10cca40a338bacb9e38351223375 -size 264000 +oid sha256:d222b0677fed7d87f203b75dc8e8c607206ab246d3161bfabe1bd712ce8f09a6 +size 283631 diff --git a/langchain_md_files/_templates/integration.mdx b/langchain_md_files/_templates/integration.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5e686ad3fc1224b148363a4dbd9a3ac8f133fb11 --- /dev/null +++ b/langchain_md_files/_templates/integration.mdx @@ -0,0 +1,60 @@ +[comment: Please, a reference example here "docs/integrations/arxiv.md"]:: +[comment: Use this template to create a new .md file in "docs/integrations/"]:: + +# Title_REPLACE_ME + +[comment: Only one Tile/H1 is allowed!]:: + +> +[comment: Description: After reading this description, a reader should decide if this integration is good enough to try/follow reading OR]:: +[comment: go to read the next integration doc. ]:: +[comment: Description should include a link to the source for follow reading.]:: + +## Installation and Setup + +[comment: Installation and Setup: All necessary additional package installations and setups for Tokens, etc]:: + +```bash +pip install package_name_REPLACE_ME +``` + +[comment: OR this text:]:: + +There isn't any special setup for it. 
+ +[comment: The next H2/## sections with names of the integration modules, like "LLM", "Text Embedding Models", etc]:: +[comment: see "Modules" in the "index.html" page]:: +[comment: Each H2 section should include a link to an example(s) and a Python code with the import of the integration class]:: +[comment: Below are several example sections. Remove all unnecessary sections. Add all necessary sections not provided here.]:: + +## LLM + +See a [usage example](/docs/integrations/llms/INCLUDE_REAL_NAME). + +```python +from langchain_community.llms import integration_class_REPLACE_ME +``` + +## Text Embedding Models + +See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME). + +```python +from langchain_community.embeddings import integration_class_REPLACE_ME +``` + +## Chat models + +See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME). + +```python +from langchain_community.chat_models import integration_class_REPLACE_ME +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/INCLUDE_REAL_NAME). + +```python +from langchain_community.document_loaders import integration_class_REPLACE_ME +``` diff --git a/langchain_md_files/additional_resources/arxiv_references.mdx b/langchain_md_files/additional_resources/arxiv_references.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f8ceb149d6c9c37871c8e326fc2beffac11de496 --- /dev/null +++ b/langchain_md_files/additional_resources/arxiv_references.mdx @@ -0,0 +1,863 @@ +# arXiv + +LangChain implements the latest research in the field of Natural Language Processing. +This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference, + Templates, and Cookbooks. + +From the opposite direction, scientists use `LangChain` in research and reference it in the research papers. +Here you find papers that reference: +- [LangChain](https://arxiv.org/search/?query=langchain&searchtype=all&source=header) +- [LangGraph](https://arxiv.org/search/?query=langgraph&searchtype=all&source=header) +- [LangSmith](https://arxiv.org/search/?query=langsmith&searchtype=all&source=header) + +## Summary + +| arXiv id / Title | Authors | Published date 🔻 | LangChain Documentation| +|------------------|---------|-------------------|------------------------| +| `2402.03620v1` [Self-Discover: Large Language Models Self-Compose Reasoning Structures](http://arxiv.org/abs/2402.03620v1) | Pei Zhou, Jay Pujara, Xiang Ren, et al. | 2024-02-06 | `Cookbook:` [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb) +| `2401.18059v1` [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](http://arxiv.org/abs/2401.18059v1) | Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. | 2024-01-31 | `Cookbook:` [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb) +| `2401.15884v2` [Corrective Retrieval Augmented Generation](http://arxiv.org/abs/2401.15884v2) | Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. | 2024-01-29 | `Cookbook:` [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb) +| `2401.04088v1` [Mixtral of Experts](http://arxiv.org/abs/2401.04088v1) | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. 
| 2024-01-08 | `Cookbook:` [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb) +| `2312.06648v2` [Dense X Retrieval: What Retrieval Granularity Should We Use?](http://arxiv.org/abs/2312.06648v2) | Tong Chen, Hongwei Wang, Sihao Chen, et al. | 2023-12-11 | `Template:` [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval) +| `2311.09210v1` [Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](http://arxiv.org/abs/2311.09210v1) | Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. | 2023-11-15 | `Template:` [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki) +| `2310.11511v1` [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](http://arxiv.org/abs/2310.11511v1) | Akari Asai, Zeqiu Wu, Yizhong Wang, et al. | 2023-10-17 | `Cookbook:` [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb) +| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting), `Cookbook:` [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb) +| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023-07-18 | `Cookbook:` [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb) +| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb) +| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://python.langchain.com/v0.2/api_reference/experimental/index.html#module-langchain_experimental.tot), `Cookbook:` [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb) +| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023-05-06 | `Cookbook:` [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb) +| `2305.02156v1` [Zero-Shot Listwise Document Reranking with a Large Language Model](http://arxiv.org/abs/2305.02156v1) | Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al. | 2023-05-03 | `API:` [langchain...LLMListwiseRerank](https://python.langchain.com/v0.2/api_reference/langchain/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank) +| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. 
| 2023-04-17 | `Cookbook:` [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) +| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) +| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023-03-31 | `Cookbook:` [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb) +| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://python.langchain.com/v0.2/api_reference/experimental/index.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb) +| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community...OCIModelDeploymentTGI](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_huggingface...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/huggingface/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/langchain_community/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference) +| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. 
| 2022-12-20 | `API:` [langchain...HypotheticalDocumentEmbedder](https://python.langchain.com/v0.2/api_reference/langchain/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb) +| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022-12-12 | `API:` [langchain_experimental.fallacy_removal](https://python.langchain.com/v0.2/api_reference//arxiv/experimental_api_reference.html#module-langchain_experimental.fallacy_removal) +| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core...MaxMarginalRelevanceExampleSelector](https://python.langchain.com/v0.2/api_reference/core/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector) +| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain](https://python.langchain.com/v0.2/api_reference//python/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental...PALChain](https://python.langchain.com/v0.2/api_reference/experimental/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb) +| `2210.03629v3` [ReAct: Synergizing Reasoning and Acting in Language Models](http://arxiv.org/abs/2210.03629v3) | Shunyu Yao, Jeffrey Zhao, Dian Yu, et al. | 2022-10-06 | `Docs:` [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping), `API:` [langchain...TrajectoryEvalChain](https://python.langchain.com/v0.2/api_reference/langchain/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain), [langchain...create_react_agent](https://python.langchain.com/v0.2/api_reference/langchain/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent) +| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022-09-22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake) +| `2205.13147v4` [Matryoshka Representation Learning](http://arxiv.org/abs/2205.13147v4) | Aditya Kusupati, Gantavya Bhatt, Aniket Rege, et al. 
| 2022-05-26 | `Docs:` [docs/integrations/providers/snowflake](https://python.langchain.com/docs/integrations/providers/snowflake) +| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community...LaserEmbeddings](https://python.langchain.com/v0.2/api_reference/community/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings) +| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community...SQLDatabase](https://python.langchain.com/v0.2/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community...SparkSQL](https://python.langchain.com/v0.2/api_reference/community/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL) +| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_huggingface...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/huggingface/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference) +| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021-02-26 | `API:` [langchain_experimental.open_clip](https://python.langchain.com/v0.2/api_reference//arxiv/experimental_api_reference.html#module-langchain_experimental.open_clip) +| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. 
| 2019-09-11 | `API:` [langchain_huggingface...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/huggingface/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference) + +## Self-Discover: Large Language Models Self-Compose Reasoning Structures + +- **arXiv id:** [2402.03620v1](http://arxiv.org/abs/2402.03620v1) **Published Date:** 2024-02-06 +- **Title:** Self-Discover: Large Language Models Self-Compose Reasoning Structures +- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al. +- **LangChain:** + + - **Cookbook:** [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb) + +**Abstract:** We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the +task-intrinsic reasoning structures to tackle complex reasoning problems that +are challenging for typical prompting methods. Core to the framework is a +self-discovery process where LLMs select multiple atomic reasoning modules such +as critical thinking and step-by-step thinking, and compose them into an +explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER +substantially improves GPT-4 and PaLM 2's performance on challenging reasoning +benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as +much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER +outperforms inference-intensive methods such as CoT-Self-Consistency by more +than 20%, while requiring 10-40x fewer inference compute. Finally, we show that +the self-discovered reasoning structures are universally applicable across +model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share +commonalities with human reasoning patterns. + +## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval + +- **arXiv id:** [2401.18059v1](http://arxiv.org/abs/2401.18059v1) **Published Date:** 2024-01-31 +- **Title:** RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval +- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. +- **LangChain:** + + - **Cookbook:** [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb) + +**Abstract:** Retrieval-augmented language models can better adapt to changes in world +state and incorporate long-tail knowledge. However, most existing methods +retrieve only short contiguous chunks from a retrieval corpus, limiting +holistic understanding of the overall document context. We introduce the novel +approach of recursively embedding, clustering, and summarizing chunks of text, +constructing a tree with differing levels of summarization from the bottom up. +At inference time, our RAPTOR model retrieves from this tree, integrating +information across lengthy documents at different levels of abstraction. 
+Controlled experiments show that retrieval with recursive summaries offers +significant improvements over traditional retrieval-augmented LMs on several +tasks. On question-answering tasks that involve complex, multi-step reasoning, +we show state-of-the-art results; for example, by coupling RAPTOR retrieval +with the use of GPT-4, we can improve the best performance on the QuALITY +benchmark by 20% in absolute accuracy. + +## Corrective Retrieval Augmented Generation + +- **arXiv id:** [2401.15884v2](http://arxiv.org/abs/2401.15884v2) **Published Date:** 2024-01-29 +- **Title:** Corrective Retrieval Augmented Generation +- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. +- **LangChain:** + + - **Cookbook:** [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb) + +**Abstract:** Large language models (LLMs) inevitably exhibit hallucinations since the +accuracy of generated texts cannot be secured solely by the parametric +knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a +practicable complement to LLMs, it relies heavily on the relevance of retrieved +documents, raising concerns about how the model behaves if retrieval goes +wrong. To this end, we propose the Corrective Retrieval Augmented Generation +(CRAG) to improve the robustness of generation. Specifically, a lightweight +retrieval evaluator is designed to assess the overall quality of retrieved +documents for a query, returning a confidence degree based on which different +knowledge retrieval actions can be triggered. Since retrieval from static and +limited corpora can only return sub-optimal documents, large-scale web searches +are utilized as an extension for augmenting the retrieval results. Besides, a +decompose-then-recompose algorithm is designed for retrieved documents to +selectively focus on key information and filter out irrelevant information in +them. CRAG is plug-and-play and can be seamlessly coupled with various +RAG-based approaches. Experiments on four datasets covering short- and +long-form generation tasks show that CRAG can significantly improve the +performance of RAG-based approaches. + +## Mixtral of Experts + +- **arXiv id:** [2401.04088v1](http://arxiv.org/abs/2401.04088v1) **Published Date:** 2024-01-08 +- **Title:** Mixtral of Experts +- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. +- **LangChain:** + + - **Cookbook:** [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb) + +**Abstract:** We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. +Mixtral has the same architecture as Mistral 7B, with the difference that each +layer is composed of 8 feedforward blocks (i.e. experts). For every token, at +each layer, a router network selects two experts to process the current state +and combine their outputs. Even though each token only sees two experts, the +selected experts can be different at each timestep. As a result, each token has +access to 47B parameters, but only uses 13B active parameters during inference. +Mixtral was trained with a context size of 32k tokens and it outperforms or +matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular, +Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and +multilingual benchmarks. 
We also provide a model fine-tuned to follow +instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo, +Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both +the base and instruct models are released under the Apache 2.0 license. + +## Dense X Retrieval: What Retrieval Granularity Should We Use? + +- **arXiv id:** [2312.06648v2](http://arxiv.org/abs/2312.06648v2) **Published Date:** 2023-12-11 +- **Title:** Dense X Retrieval: What Retrieval Granularity Should We Use? +- **Authors:** Tong Chen, Hongwei Wang, Sihao Chen, et al. +- **LangChain:** + + - **Template:** [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval) + +**Abstract:** Dense retrieval has become a prominent method to obtain relevant context or +world knowledge in open-domain NLP tasks. When we use a learned dense retriever +on a retrieval corpus at inference time, an often-overlooked design choice is +the retrieval unit in which the corpus is indexed, e.g. document, passage, or +sentence. We discover that the retrieval unit choice significantly impacts the +performance of both retrieval and downstream tasks. Distinct from the typical +approach of using passages or sentences, we introduce a novel retrieval unit, +proposition, for dense retrieval. Propositions are defined as atomic +expressions within text, each encapsulating a distinct factoid and presented in +a concise, self-contained natural language format. We conduct an empirical +comparison of different retrieval granularity. Our results reveal that +proposition-based retrieval significantly outperforms traditional passage or +sentence-based methods in dense retrieval. Moreover, retrieval by proposition +also enhances the performance of downstream QA tasks, since the retrieved texts +are more condensed with question-relevant information, reducing the need for +lengthy input tokens and minimizing the inclusion of extraneous, irrelevant +information. + +## Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models + +- **arXiv id:** [2311.09210v1](http://arxiv.org/abs/2311.09210v1) **Published Date:** 2023-11-15 +- **Title:** Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models +- **Authors:** Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. +- **LangChain:** + + - **Template:** [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki) + +**Abstract:** Retrieval-augmented language models (RALMs) represent a substantial +advancement in the capabilities of large language models, notably in reducing +factual hallucination by leveraging external knowledge sources. However, the +reliability of the retrieved information is not always guaranteed. The +retrieval of irrelevant data can lead to misguided responses, and potentially +causing the model to overlook its inherent knowledge, even when it possesses +adequate information to address the query. Moreover, standard RALMs often +struggle to assess whether they possess adequate knowledge, both intrinsic and +retrieved, to provide an accurate answer. In situations where knowledge is +lacking, these systems should ideally respond with "unknown" when the answer is +unattainable. In response to these challenges, we introduces Chain-of-Noting +(CoN), a novel approach aimed at improving the robustness of RALMs in facing +noisy, irrelevant documents and in handling unknown scenarios. 
The core idea of +CoN is to generate sequential reading notes for retrieved documents, enabling a +thorough evaluation of their relevance to the given question and integrating +this information to formulate the final answer. We employed ChatGPT to create +training data for CoN, which was subsequently trained on an LLaMa-2 7B model. +Our experiments across four open-domain QA benchmarks show that RALMs equipped +with CoN significantly outperform standard RALMs. Notably, CoN achieves an +average improvement of +7.9 in EM score given entirely noisy retrieved +documents and +10.5 in rejection rates for real-time questions that fall +outside the pre-training knowledge scope. + +## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection + +- **arXiv id:** [2310.11511v1](http://arxiv.org/abs/2310.11511v1) **Published Date:** 2023-10-17 +- **Title:** Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection +- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al. +- **LangChain:** + + - **Cookbook:** [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb) + +**Abstract:** Despite their remarkable capabilities, large language models (LLMs) often +produce responses containing factual inaccuracies due to their sole reliance on +the parametric knowledge they encapsulate. Retrieval-Augmented Generation +(RAG), an ad hoc approach that augments LMs with retrieval of relevant +knowledge, decreases such issues. However, indiscriminately retrieving and +incorporating a fixed number of retrieved passages, regardless of whether +retrieval is necessary, or passages are relevant, diminishes LM versatility or +can lead to unhelpful response generation. We introduce a new framework called +Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's +quality and factuality through retrieval and self-reflection. Our framework +trains a single arbitrary LM that adaptively retrieves passages on-demand, and +generates and reflects on retrieved passages and its own generations using +special tokens, called reflection tokens. Generating reflection tokens makes +the LM controllable during the inference phase, enabling it to tailor its +behavior to diverse task requirements. Experiments show that Self-RAG (7B and +13B parameters) significantly outperforms state-of-the-art LLMs and +retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG +outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA, +reasoning and fact verification tasks, and it shows significant gains in +improving factuality and citation accuracy for long-form generations relative +to these models. + +## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models + +- **arXiv id:** [2310.06117v2](http://arxiv.org/abs/2310.06117v2) **Published Date:** 2023-10-09 +- **Title:** Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models +- **Authors:** Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. +- **LangChain:** + + - **Template:** [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting) + - **Cookbook:** [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb) + +**Abstract:** We present Step-Back Prompting, a simple prompting technique that enables +LLMs to do abstractions to derive high-level concepts and first principles from +instances containing specific details. 
Using the concepts and principles to
+guide reasoning, LLMs significantly improve their abilities in following a
+correct reasoning path towards the solution. We conduct experiments of
+Step-Back Prompting with PaLM-2L, GPT-4 and Llama2-70B models, and observe
+substantial performance gains on various challenging reasoning-intensive tasks
+including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
+Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
+and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
+
+## Llama 2: Open Foundation and Fine-Tuned Chat Models
+
+- **arXiv id:** [2307.09288v2](http://arxiv.org/abs/2307.09288v2) **Published Date:** 2023-07-18
+- **Title:** Llama 2: Open Foundation and Fine-Tuned Chat Models
+- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
+- **LangChain:**
+
+  - **Cookbook:** [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
+
+**Abstract:** In this work, we develop and release Llama 2, a collection of pretrained and
+fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
+billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
+dialogue use cases. Our models outperform open-source chat models on most
+benchmarks we tested, and based on our human evaluations for helpfulness and
+safety, may be a suitable substitute for closed-source models. We provide a
+detailed description of our approach to fine-tuning and safety improvements of
+Llama 2-Chat in order to enable the community to build on our work and
+contribute to the responsible development of LLMs.
+
+## Query Rewriting for Retrieval-Augmented Large Language Models
+
+- **arXiv id:** [2305.14283v3](http://arxiv.org/abs/2305.14283v3) **Published Date:** 2023-05-23
+- **Title:** Query Rewriting for Retrieval-Augmented Large Language Models
+- **Authors:** Xinbei Ma, Yeyun Gong, Pengcheng He, et al.
+- **LangChain:**
+
+  - **Template:** [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
+  - **Cookbook:** [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
+
+**Abstract:** Large Language Models (LLMs) play powerful, black-box readers in the
+retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
+tasks. This work introduces a new framework, Rewrite-Retrieve-Read, instead of
+the previous retrieve-then-read, for retrieval-augmented LLMs from the
+perspective of query rewriting. Unlike prior studies focusing on adapting
+either the retriever or the reader, our approach pays attention to the
+adaptation of the search query itself, for there is inevitably a gap between
+the input text and the needed knowledge in retrieval. We first prompt an LLM to
+generate the query, then use a web search engine to retrieve contexts.
+Furthermore, to better align the query to the frozen modules, we propose a
+trainable scheme for our pipeline. A small language model is adopted as a
+trainable rewriter to cater to the black-box LLM reader. The rewriter is
+trained using the feedback of the LLM reader by reinforcement learning.
+Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice
+QA. Experimental results show consistent performance improvement, indicating
+that our framework is effective and scalable, and brings a new framework
+for retrieval-augmented LLMs. 
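+
+A minimal, illustrative sketch of the rewrite-retrieve-read idea is shown below. It is
+not the linked template or cookbook: `ChatOpenAI`, the prompt wording, and the generic
+`retriever` are assumptions, and the paper's RL-trained rewriter is omitted.
+
+```python
+# Hypothetical sketch: rewrite the user question into a search query, retrieve,
+# then answer from the retrieved context ("rewrite-retrieve-read").
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.prompts import ChatPromptTemplate
+from langchain_openai import ChatOpenAI
+
+llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
+
+rewrite_prompt = ChatPromptTemplate.from_template(
+    "Rewrite the following question as a concise search query:\n{question}"
+)
+rewriter = rewrite_prompt | llm | StrOutputParser()
+
+answer_prompt = ChatPromptTemplate.from_template(
+    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
+)
+reader = answer_prompt | llm | StrOutputParser()
+
+def rewrite_retrieve_read(question: str, retriever) -> str:
+    query = rewriter.invoke({"question": question})  # rewrite
+    docs = retriever.invoke(query)                    # retrieve
+    context = "\n\n".join(d.page_content for d in docs)
+    return reader.invoke({"context": context, "question": question})  # read
+```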
+ +## Large Language Model Guided Tree-of-Thought + +- **arXiv id:** [2305.08291v1](http://arxiv.org/abs/2305.08291v1) **Published Date:** 2023-05-15 +- **Title:** Large Language Model Guided Tree-of-Thought +- **Authors:** Jieyi Long +- **LangChain:** + + - **API Reference:** [langchain_experimental.tot](https://python.langchain.com/v0.2/api_reference/experimental/index.html#module-langchain_experimental.tot) + - **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb) + +**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel +approach aimed at improving the problem-solving capabilities of auto-regressive +large language models (LLMs). The ToT technique is inspired by the human mind's +approach for solving complex reasoning tasks through trial and error. In this +process, the human mind explores the solution space through a tree-like thought +process, allowing for backtracking when necessary. To implement ToT as a +software system, we augment an LLM with additional modules including a prompter +agent, a checker module, a memory module, and a ToT controller. In order to +solve a given problem, these modules engage in a multi-round conversation with +the LLM. The memory module records the conversation and state history of the +problem solving process, which allows the system to backtrack to the previous +steps of the thought-process and explore other directions from there. To verify +the effectiveness of the proposed technique, we implemented a ToT-based solver +for the Sudoku Puzzle. Experimental results show that the ToT framework can +significantly increase the success rate of Sudoku puzzle solving. Our +implementation of the ToT-based Sudoku solver is available on GitHub: +\url{https://github.com/jieyilong/tree-of-thought-puzzle-solver}. + +## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models + +- **arXiv id:** [2305.04091v3](http://arxiv.org/abs/2305.04091v3) **Published Date:** 2023-05-06 +- **Title:** Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models +- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al. +- **LangChain:** + + - **Cookbook:** [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb) + +**Abstract:** Large language models (LLMs) have recently been shown to deliver impressive +performance in various NLP tasks. To tackle multi-step reasoning tasks, +few-shot chain-of-thought (CoT) prompting includes a few manually crafted +step-by-step reasoning demonstrations which enable LLMs to explicitly generate +reasoning steps and improve their reasoning task accuracy. To eliminate the +manual effort, Zero-shot-CoT concatenates the target problem statement with +"Let's think step by step" as an input prompt to LLMs. Despite the success of +Zero-shot-CoT, it still suffers from three pitfalls: calculation errors, +missing-step errors, and semantic misunderstanding errors. To address the +missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of +two components: first, devising a plan to divide the entire task into smaller +subtasks, and then carrying out the subtasks according to the plan. To address +the calculation errors and improve the quality of generated reasoning steps, we +extend PS prompting with more detailed instructions and derive PS+ prompting. 
+We evaluate our proposed prompting strategy on ten datasets across three +reasoning problems. The experimental results over GPT-3 show that our proposed +zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets +by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought +Prompting, and has comparable performance with 8-shot CoT prompting on the math +reasoning problem. The code can be found at +https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting. + +## Zero-Shot Listwise Document Reranking with a Large Language Model + +- **arXiv id:** [2305.02156v1](http://arxiv.org/abs/2305.02156v1) **Published Date:** 2023-05-03 +- **Title:** Zero-Shot Listwise Document Reranking with a Large Language Model +- **Authors:** Xueguang Ma, Xinyu Zhang, Ronak Pradeep, et al. +- **LangChain:** + + - **API Reference:** [langchain...LLMListwiseRerank](https://python.langchain.com/v0.2/api_reference/langchain/retrievers/langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank.html#langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank) + +**Abstract:** Supervised ranking methods based on bi-encoder or cross-encoder architectures +have shown success in multi-stage text ranking tasks, but they require large +amounts of relevance judgments as training data. In this work, we propose +Listwise Reranker with a Large Language Model (LRL), which achieves strong +reranking effectiveness without using any task-specific training data. +Different from the existing pointwise ranking methods, where documents are +scored independently and ranked according to the scores, LRL directly generates +a reordered list of document identifiers given the candidate documents. +Experiments on three TREC web search datasets demonstrate that LRL not only +outperforms zero-shot pointwise methods when reranking first-stage retrieval +results, but can also act as a final-stage reranker to improve the top-ranked +results of a pointwise method for improved efficiency. Additionally, we apply +our approach to subsets of MIRACL, a recent multilingual retrieval dataset, +with results showing its potential to generalize across different languages. + +## Visual Instruction Tuning + +- **arXiv id:** [2304.08485v2](http://arxiv.org/abs/2304.08485v2) **Published Date:** 2023-04-17 +- **Title:** Visual Instruction Tuning +- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al. +- **LangChain:** + + - **Cookbook:** [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) + +**Abstract:** Instruction tuning large language models (LLMs) using machine-generated +instruction-following data has improved zero-shot capabilities on new tasks, +but the idea is less explored in the multimodal field. In this paper, we +present the first attempt to use language-only GPT-4 to generate multimodal +language-image instruction-following data. 
By instruction tuning on such
+generated data, we introduce LLaVA: Large Language and Vision Assistant, an
+end-to-end trained large multimodal model that connects a vision encoder and
+LLM for general-purpose visual and language understanding. Our early experiments
+show that LLaVA demonstrates impressive multimodal chat abilities, sometimes
+exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
+yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal
+instruction-following dataset. When fine-tuned on Science QA, the synergy of
+LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
+GPT-4 generated visual instruction tuning data, our model and code base
+publicly available.
+
+## Generative Agents: Interactive Simulacra of Human Behavior
+
+- **arXiv id:** [2304.03442v2](http://arxiv.org/abs/2304.03442v2) **Published Date:** 2023-04-07
+- **Title:** Generative Agents: Interactive Simulacra of Human Behavior
+- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
+- **LangChain:**
+
+  - **Cookbook:** [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
+
+**Abstract:** Believable proxies of human behavior can empower interactive applications
+ranging from immersive environments to rehearsal spaces for interpersonal
+communication to prototyping tools. In this paper, we introduce generative
+agents--computational software agents that simulate believable human behavior.
+Generative agents wake up, cook breakfast, and head to work; artists paint,
+while authors write; they form opinions, notice each other, and initiate
+conversations; they remember and reflect on days past as they plan the next
+day. To enable generative agents, we describe an architecture that extends a
+large language model to store a complete record of the agent's experiences
+using natural language, synthesize those memories over time into higher-level
+reflections, and retrieve them dynamically to plan behavior. We instantiate
+generative agents to populate an interactive sandbox environment inspired by
+The Sims, where end users can interact with a small town of twenty-five agents
+using natural language. In an evaluation, these generative agents produce
+believable individual and emergent social behaviors: for example, starting with
+only a single user-specified notion that one agent wants to throw a Valentine's
+Day party, the agents autonomously spread invitations to the party over the
+next two days, make new acquaintances, ask each other out on dates to the
+party, and coordinate to show up for the party together at the right time. We
+demonstrate through ablation that the components of our agent
+architecture--observation, planning, and reflection--each contribute critically
+to the believability of agent behavior. By fusing large language models with
+computational, interactive agents, this work introduces architectural and
+interaction patterns for enabling believable simulations of human behavior. 
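+
+The retrieval step of the memory architecture described above can be pictured with a
+toy, self-contained sketch that ranks a memory stream by recency, importance, and
+relevance before recalling the top entries. The weights, decay value, and `embed`
+callable are illustrative assumptions, not the paper's or the cookbook's code.
+
+```python
+# Toy memory-stream retrieval: rank stored observations by recency + importance + relevance.
+import math
+import time
+from dataclasses import dataclass, field
+
+@dataclass
+class Memory:
+    text: str
+    importance: float  # e.g. 0-1, rated by an LLM when the memory is stored
+    created_at: float = field(default_factory=time.time)
+
+def recency_score(m: Memory, now: float, decay: float = 0.995) -> float:
+    hours_old = (now - m.created_at) / 3600.0
+    return decay ** hours_old  # exponential decay per hour
+
+def relevance_score(query_vec, mem_vec) -> float:
+    dot = sum(a * b for a, b in zip(query_vec, mem_vec))
+    norm = math.sqrt(sum(a * a for a in query_vec)) * math.sqrt(sum(b * b for b in mem_vec))
+    return dot / norm if norm else 0.0
+
+def retrieve(memories, query: str, embed, k: int = 3):
+    now = time.time()
+    query_vec = embed(query)
+    scored = [
+        (recency_score(m, now) + m.importance + relevance_score(query_vec, embed(m.text)), m)
+        for m in memories
+    ]
+    scored.sort(key=lambda pair: pair[0], reverse=True)
+    return [m for _, m in scored[:k]]
+```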
+ +## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society + +- **arXiv id:** [2303.17760v2](http://arxiv.org/abs/2303.17760v2) **Published Date:** 2023-03-31 +- **Title:** CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society +- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. +- **LangChain:** + + - **Cookbook:** [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb) + +**Abstract:** The rapid advancement of chat-based language models has led to remarkable +progress in complex task-solving. However, their success heavily relies on +human input to guide the conversation, which can be challenging and +time-consuming. This paper explores the potential of building scalable +techniques to facilitate autonomous cooperation among communicative agents, and +provides insight into their "cognitive" processes. To address the challenges of +achieving autonomous cooperation, we propose a novel communicative agent +framework named role-playing. Our approach involves using inception prompting +to guide chat agents toward task completion while maintaining consistency with +human intentions. We showcase how role-playing can be used to generate +conversational data for studying the behaviors and capabilities of a society of +agents, providing a valuable resource for investigating conversational language +models. In particular, we conduct comprehensive studies on +instruction-following cooperation in multi-agent settings. Our contributions +include introducing a novel communicative agent framework, offering a scalable +approach for studying the cooperative behaviors and capabilities of multi-agent +systems, and open-sourcing our library to support research on communicative +agents and beyond: https://github.com/camel-ai/camel. + +## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face + +- **arXiv id:** [2303.17580v4](http://arxiv.org/abs/2303.17580v4) **Published Date:** 2023-03-30 +- **Title:** HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face +- **Authors:** Yongliang Shen, Kaitao Song, Xu Tan, et al. +- **LangChain:** + + - **API Reference:** [langchain_experimental.autonomous_agents](https://python.langchain.com/v0.2/api_reference/experimental/index.html#module-langchain_experimental.autonomous_agents) + - **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb) + +**Abstract:** Solving complicated AI tasks with different domains and modalities is a key +step toward artificial general intelligence. While there are numerous AI models +available for various domains and modalities, they cannot handle complicated AI +tasks autonomously. Considering large language models (LLMs) have exhibited +exceptional abilities in language understanding, generation, interaction, and +reasoning, we advocate that LLMs could act as a controller to manage existing +AI models to solve complicated AI tasks, with language serving as a generic +interface to empower this. Based on this philosophy, we present HuggingGPT, an +LLM-powered agent that leverages LLMs (e.g., ChatGPT) to connect various AI +models in machine learning communities (e.g., Hugging Face) to solve AI tasks. 
+Specifically, we use ChatGPT to conduct task planning when receiving a user +request, select models according to their function descriptions available in +Hugging Face, execute each subtask with the selected AI model, and summarize +the response according to the execution results. By leveraging the strong +language capability of ChatGPT and abundant AI models in Hugging Face, +HuggingGPT can tackle a wide range of sophisticated AI tasks spanning different +modalities and domains and achieve impressive results in language, vision, +speech, and other challenging tasks, which paves a new way towards the +realization of artificial general intelligence. + +## A Watermark for Large Language Models + +- **arXiv id:** [2301.10226v4](http://arxiv.org/abs/2301.10226v4) **Published Date:** 2023-01-24 +- **Title:** A Watermark for Large Language Models +- **Authors:** John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. +- **LangChain:** + + - **API Reference:** [langchain_community...OCIModelDeploymentTGI](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI), [langchain_huggingface...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/huggingface/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/langchain_community/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference) + +**Abstract:** Potential harms of large language models can be mitigated by watermarking +model output, i.e., embedding signals into generated text that are invisible to +humans but algorithmically detectable from a short span of tokens. We propose a +watermarking framework for proprietary language models. The watermark can be +embedded with negligible impact on text quality, and can be detected using an +efficient open-source algorithm without access to the language model API or +parameters. The watermark works by selecting a randomized set of "green" tokens +before a word is generated, and then softly promoting use of green tokens +during sampling. We propose a statistical test for detecting the watermark with +interpretable p-values, and derive an information-theoretic framework for +analyzing the sensitivity of the watermark. We test the watermark using a +multi-billion parameter model from the Open Pretrained Transformer (OPT) +family, and discuss robustness and security. + +## Precise Zero-Shot Dense Retrieval without Relevance Labels + +- **arXiv id:** [2212.10496v1](http://arxiv.org/abs/2212.10496v1) **Published Date:** 2022-12-20 +- **Title:** Precise Zero-Shot Dense Retrieval without Relevance Labels +- **Authors:** Luyu Gao, Xueguang Ma, Jimmy Lin, et al. 
+- **LangChain:**
+
+  - **API Reference:** [langchain...HypotheticalDocumentEmbedder](https://python.langchain.com/v0.2/api_reference/langchain/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder)
+  - **Template:** [hyde](https://python.langchain.com/docs/templates/hyde)
+  - **Cookbook:** [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
+
+**Abstract:** While dense retrieval has been shown effective and efficient across tasks and
+languages, it remains difficult to create effective fully zero-shot dense
+retrieval systems when no relevance label is available. In this paper, we
+recognize the difficulty of zero-shot learning and encoding relevance. Instead,
+we propose to pivot through Hypothetical Document Embeddings (HyDE). Given a
+query, HyDE first zero-shot instructs an instruction-following language model
+(e.g. InstructGPT) to generate a hypothetical document. The document captures
+relevance patterns but is unreal and may contain false details. Then, an
+unsupervised contrastively learned encoder (e.g. Contriever) encodes the
+document into an embedding vector. This vector identifies a neighborhood in the
+corpus embedding space, where similar real documents are retrieved based on
+vector similarity. This second step grounds the generated document to the actual
+corpus, with the encoder's dense bottleneck filtering out the incorrect
+details. Our experiments show that HyDE significantly outperforms the
+state-of-the-art unsupervised dense retriever Contriever and shows strong
+performance comparable to fine-tuned retrievers, across various tasks (e.g. web
+search, QA, fact verification) and languages (e.g. sw, ko, ja).
+
+## Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments
+
+- **arXiv id:** [2212.07425v3](http://arxiv.org/abs/2212.07425v3) **Published Date:** 2022-12-12
+- **Title:** Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments
+- **Authors:** Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al.
+- **LangChain:**
+
+  - **API Reference:** [langchain_experimental.fallacy_removal](https://python.langchain.com/v0.2/api_reference/experimental/index.html#module-langchain_experimental.fallacy_removal)
+
+**Abstract:** The spread of misinformation, propaganda, and flawed argumentation has been
+amplified in the Internet era. Given the volume of data and the subtlety of
+identifying violations of argumentation norms, supporting information analytics
+tasks, like content moderation, with trustworthy methods that can identify
+logical fallacies is essential. In this paper, we formalize prior theoretical
+work on logical fallacies into a comprehensive three-stage evaluation framework
+of detection, coarse-grained, and fine-grained classification. We adapt
+existing evaluation datasets for each stage of the evaluation. We employ three
+families of robust and explainable methods based on prototype reasoning,
+instance-based reasoning, and knowledge injection. The methods combine language
+models with background knowledge and explainable mechanisms. Moreover, we
+address data sparsity with strategies for data augmentation and curriculum
+learning. Our three-stage framework natively consolidates prior datasets and
+methods from existing tasks, like propaganda detection, serving as an
+overarching evaluation testbed. 
We extensively evaluate these methods on our +datasets, focusing on their robustness and explainability. Our results provide +insight into the strengths and weaknesses of the methods on different +components and fallacy classes, indicating that fallacy identification is a +challenging task that may require specialized forms of reasoning to capture +various classes. We share our open-source code and data on GitHub to support +further work on logical fallacy identification. + +## Complementary Explanations for Effective In-Context Learning + +- **arXiv id:** [2211.13892v2](http://arxiv.org/abs/2211.13892v2) **Published Date:** 2022-11-25 +- **Title:** Complementary Explanations for Effective In-Context Learning +- **Authors:** Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. +- **LangChain:** + + - **API Reference:** [langchain_core...MaxMarginalRelevanceExampleSelector](https://python.langchain.com/v0.2/api_reference/core/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector) + +**Abstract:** Large language models (LLMs) have exhibited remarkable capabilities in +learning from explanations in prompts, but there has been limited understanding +of exactly how these explanations function or why they are effective. This work +aims to better understand the mechanisms by which explanations are used for +in-context learning. We first study the impact of two different factors on the +performance of prompts with explanations: the computation trace (the way the +solution is decomposed) and the natural language used to express the prompt. By +perturbing explanations on three controlled tasks, we show that both factors +contribute to the effectiveness of explanations. We further study how to form +maximally effective sets of explanations for solving a given test query. We +find that LLMs can benefit from the complementarity of the explanation set: +diverse reasoning skills shown by different exemplars can lead to better +performance. Therefore, we propose a maximal marginal relevance-based exemplar +selection approach for constructing exemplar sets that are both relevant as +well as complementary, which successfully improves the in-context learning +performance across three real-world tasks on multiple LLMs. + +## PAL: Program-aided Language Models + +- **arXiv id:** [2211.10435v2](http://arxiv.org/abs/2211.10435v2) **Published Date:** 2022-11-18 +- **Title:** PAL: Program-aided Language Models +- **Authors:** Luyu Gao, Aman Madaan, Shuyan Zhou, et al. +- **LangChain:** + + - **API Reference:** [langchain_experimental.pal_chain](https://python.langchain.com/v0.2/api_reference//python/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental...PALChain](https://python.langchain.com/v0.2/api_reference/experimental/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain) + - **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb) + +**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability +to perform arithmetic and symbolic reasoning tasks, when provided with a few +examples at test time ("few-shot prompting"). 
Much of this success can be +attributed to prompting methods such as "chain-of-thought'', which employ LLMs +for both understanding the problem description by decomposing it into steps, as +well as solving each step of the problem. While LLMs seem to be adept at this +sort of step-by-step decomposition, LLMs often make logical and arithmetic +mistakes in the solution part, even when the problem is decomposed correctly. +In this paper, we present Program-Aided Language models (PAL): a novel approach +that uses the LLM to read natural language problems and generate programs as +the intermediate reasoning steps, but offloads the solution step to a runtime +such as a Python interpreter. With PAL, decomposing the natural language +problem into runnable steps remains the only learning task for the LLM, while +solving is delegated to the interpreter. We demonstrate this synergy between a +neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and +algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all +these natural language reasoning tasks, generating code using an LLM and +reasoning using a Python interpreter leads to more accurate results than much +larger models. For example, PAL using Codex achieves state-of-the-art few-shot +accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B +which uses chain-of-thought by absolute 15% top-1. Our code and data are +publicly available at http://reasonwithpal.com/ . + +## ReAct: Synergizing Reasoning and Acting in Language Models + +- **arXiv id:** [2210.03629v3](http://arxiv.org/abs/2210.03629v3) **Published Date:** 2022-10-06 +- **Title:** ReAct: Synergizing Reasoning and Acting in Language Models +- **Authors:** Shunyu Yao, Jeffrey Zhao, Dian Yu, et al. +- **LangChain:** + + - **Documentation:** [docs/integrations/providers/cohere](https://python.langchain.com/docs/integrations/providers/cohere), [docs/integrations/tools/ionic_shopping](https://python.langchain.com/docs/integrations/tools/ionic_shopping) + - **API Reference:** [langchain...TrajectoryEvalChain](https://python.langchain.com/v0.2/api_reference/langchain/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain), [langchain...create_react_agent](https://python.langchain.com/v0.2/api_reference/langchain/agents/langchain.agents.react.agent.create_react_agent.html#langchain.agents.react.agent.create_react_agent) + +**Abstract:** While large language models (LLMs) have demonstrated impressive capabilities +across tasks in language understanding and interactive decision making, their +abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. +action plan generation) have primarily been studied as separate topics. In this +paper, we explore the use of LLMs to generate both reasoning traces and +task-specific actions in an interleaved manner, allowing for greater synergy +between the two: reasoning traces help the model induce, track, and update +action plans as well as handle exceptions, while actions allow it to interface +with external sources, such as knowledge bases or environments, to gather +additional information. We apply our approach, named ReAct, to a diverse set of +language and decision making tasks and demonstrate its effectiveness over +state-of-the-art baselines, as well as improved human interpretability and +trustworthiness over methods without reasoning or acting components. 
+Concretely, on question answering (HotpotQA) and fact verification (Fever), +ReAct overcomes issues of hallucination and error propagation prevalent in +chain-of-thought reasoning by interacting with a simple Wikipedia API, and +generates human-like task-solving trajectories that are more interpretable than +baselines without reasoning traces. On two interactive decision making +benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and +reinforcement learning methods by an absolute success rate of 34% and 10% +respectively, while being prompted with only one or two in-context examples. +Project site with code: https://react-lm.github.io + +## Deep Lake: a Lakehouse for Deep Learning + +- **arXiv id:** [2209.10785v2](http://arxiv.org/abs/2209.10785v2) **Published Date:** 2022-09-22 +- **Title:** Deep Lake: a Lakehouse for Deep Learning +- **Authors:** Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. +- **LangChain:** + + - **Documentation:** [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake) + +**Abstract:** Traditional data lakes provide critical data infrastructure for analytical +workloads by enabling time travel, running SQL queries, ingesting data with +ACID transactions, and visualizing petabyte-scale datasets on cloud storage. +They allow organizations to break down data silos, unlock data-driven +decision-making, improve operational efficiency, and reduce costs. However, as +deep learning usage increases, traditional data lakes are not well-designed for +applications such as natural language processing (NLP), audio processing, +computer vision, and applications involving non-tabular datasets. This paper +presents Deep Lake, an open-source lakehouse for deep learning applications +developed at Activeloop. Deep Lake maintains the benefits of a vanilla data +lake with one key difference: it stores complex data, such as images, videos, +annotations, as well as tabular data, in the form of tensors and rapidly +streams the data over the network to (a) Tensor Query Language, (b) in-browser +visualization engine, or (c) deep learning frameworks without sacrificing GPU +utilization. Datasets stored in Deep Lake can be accessed from PyTorch, +TensorFlow, JAX, and integrate with numerous MLOps tools. + +## Matryoshka Representation Learning + +- **arXiv id:** [2205.13147v4](http://arxiv.org/abs/2205.13147v4) **Published Date:** 2022-05-26 +- **Title:** Matryoshka Representation Learning +- **Authors:** Aditya Kusupati, Gantavya Bhatt, Aniket Rege, et al. +- **LangChain:** + + - **Documentation:** [docs/integrations/providers/snowflake](https://python.langchain.com/docs/integrations/providers/snowflake) + +**Abstract:** Learned representations are a central component in modern ML systems, serving +a multitude of downstream tasks. When training such representations, it is +often the case that computational and statistical constraints for each +downstream task are unknown. In this context rigid, fixed capacity +representations can be either over or under-accommodating to the task at hand. +This leads us to ask: can we design a flexible representation that can adapt to +multiple downstream tasks with varying computational resources? Our main +contribution is Matryoshka Representation Learning (MRL) which encodes +information at different granularities and allows a single embedding to adapt +to the computational constraints of downstream tasks. 
MRL minimally modifies +existing representation learning pipelines and imposes no additional cost +during inference and deployment. MRL learns coarse-to-fine representations that +are at least as accurate and rich as independently trained low-dimensional +representations. The flexibility within the learned Matryoshka Representations +offer: (a) up to 14x smaller embedding size for ImageNet-1K classification at +the same level of accuracy; (b) up to 14x real-world speed-ups for large-scale +retrieval on ImageNet-1K and 4K; and (c) up to 2% accuracy improvements for +long-tail few-shot classification, all while being as robust as the original +representations. Finally, we show that MRL extends seamlessly to web-scale +datasets (ImageNet, JFT) across various modalities -- vision (ViT, ResNet), +vision + language (ALIGN) and language (BERT). MRL code and pretrained models +are open-sourced at https://github.com/RAIVNLab/MRL. + +## Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages + +- **arXiv id:** [2205.12654v1](http://arxiv.org/abs/2205.12654v1) **Published Date:** 2022-05-25 +- **Title:** Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages +- **Authors:** Kevin Heffernan, Onur Çelebi, Holger Schwenk +- **LangChain:** + + - **API Reference:** [langchain_community...LaserEmbeddings](https://python.langchain.com/v0.2/api_reference/community/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings) + +**Abstract:** Scaling multilingual representation learning beyond the hundred most frequent +languages is challenging, in particular to cover the long tail of low-resource +languages. A promising approach has been to train one-for-all multilingual +models capable of cross-lingual transfer, but these models often suffer from +insufficient capacity and interference between unrelated languages. Instead, we +move away from this approach and focus on training multiple language (family) +specific representations, but most prominently enable all languages to still be +encoded in the same representational space. To achieve this, we focus on +teacher-student training, allowing all encoders to be mutually compatible for +bitext mining, and enabling fast learning of new languages. We introduce a new +teacher-student training scheme which combines supervised and self-supervised +training, allowing encoders to take advantage of monolingual training data, +which is valuable in the low-resource setting. + Our approach significantly outperforms the original LASER encoder. We study +very low-resource languages and handle 50 African languages, many of which are +not covered by any other model. For these languages, we train sentence +encoders, mine bitexts, and validate the bitexts by training NMT systems. 
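+
+As a rough usage sketch of the integration linked above (not an official recipe), the
+`LaserEmbeddings` class exposes the standard embeddings interface, so sentences can be
+embedded and compared for mining. The `lang` argument and the `laser_encoders`
+dependency are assumptions taken from the integration docs; check the API reference
+above if they differ in your version.
+
+```python
+# Hedged sketch: sentence embeddings via the LASER integration (assumes `pip install laser_encoders`).
+from langchain_community.embeddings.laser import LaserEmbeddings
+
+embedder = LaserEmbeddings(lang="eng_Latn")  # `lang` is assumed; see the API reference
+
+sentence_vectors = embedder.embed_documents([
+    "The cat sits on the mat.",
+    "We mine parallel sentences across languages.",
+])
+query_vector = embedder.embed_query("Where is the cat?")
+print(len(sentence_vectors), len(query_vector))  # number of sentences, embedding dimensionality
+```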
+ +## Evaluating the Text-to-SQL Capabilities of Large Language Models + +- **arXiv id:** [2204.00498v1](http://arxiv.org/abs/2204.00498v1) **Published Date:** 2022-03-15 +- **Title:** Evaluating the Text-to-SQL Capabilities of Large Language Models +- **Authors:** Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau +- **LangChain:** + + - **API Reference:** [langchain_community...SQLDatabase](https://python.langchain.com/v0.2/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), [langchain_community...SparkSQL](https://python.langchain.com/v0.2/api_reference/community/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL) + +**Abstract:** We perform an empirical evaluation of Text-to-SQL capabilities of the Codex +language model. We find that, without any finetuning, Codex is a strong +baseline on the Spider benchmark; we also analyze the failure modes of Codex in +this setting. Furthermore, we demonstrate on the GeoQuery and Scholar +benchmarks that a small number of in-domain examples provided in the prompt +enables Codex to perform better than state-of-the-art models finetuned on such +few-shot examples. + +## Locally Typical Sampling + +- **arXiv id:** [2202.00666v5](http://arxiv.org/abs/2202.00666v5) **Published Date:** 2022-02-01 +- **Title:** Locally Typical Sampling +- **Authors:** Clara Meister, Tiago Pimentel, Gian Wiher, et al. +- **LangChain:** + + - **API Reference:** [langchain_huggingface...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/huggingface/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference) + +**Abstract:** Today's probabilistic language generators fall short when it comes to +producing coherent and fluent text despite the fact that the underlying models +perform well under standard metrics, e.g., perplexity. This discrepancy has +puzzled the language generation community for the last few years. In this work, +we posit that the abstraction of natural language generation as a discrete +stochastic process--which allows for an information-theoretic analysis--can +provide new insights into the behavior of probabilistic language generators, +e.g., why high-probability texts can be dull or repetitive. Humans use language +as a means of communicating information, aiming to do so in a simultaneously +efficient and error-minimizing manner; in fact, psycholinguistics research +suggests humans choose each word in a string with this subconscious goal in +mind. We formally define the set of strings that meet this criterion: those for +which each word has an information content close to the expected information +content, i.e., the conditional entropy of our model. 
We then propose a simple +and efficient procedure for enforcing this criterion when generating from +probabilistic models, which we call locally typical sampling. Automatic and +human evaluations show that, in comparison to nucleus and top-k sampling, +locally typical sampling offers competitive performance (in both abstractive +summarization and story generation) in terms of quality while consistently +reducing degenerate repetitions. + +## Learning Transferable Visual Models From Natural Language Supervision + +- **arXiv id:** [2103.00020v1](http://arxiv.org/abs/2103.00020v1) **Published Date:** 2021-02-26 +- **Title:** Learning Transferable Visual Models From Natural Language Supervision +- **Authors:** Alec Radford, Jong Wook Kim, Chris Hallacy, et al. +- **LangChain:** + + - **API Reference:** [langchain_experimental.open_clip](https://python.langchain.com/v0.2/api_reference/experimental/index.html#module-langchain_experimental.open_clip) + +**Abstract:** State-of-the-art computer vision systems are trained to predict a fixed set +of predetermined object categories. This restricted form of supervision limits +their generality and usability since additional labeled data is needed to +specify any other visual concept. Learning directly from raw text about images +is a promising alternative which leverages a much broader source of +supervision. We demonstrate that the simple pre-training task of predicting +which caption goes with which image is an efficient and scalable way to learn +SOTA image representations from scratch on a dataset of 400 million (image, +text) pairs collected from the internet. After pre-training, natural language +is used to reference learned visual concepts (or describe new ones) enabling +zero-shot transfer of the model to downstream tasks. We study the performance +of this approach by benchmarking on over 30 different existing computer vision +datasets, spanning tasks such as OCR, action recognition in videos, +geo-localization, and many types of fine-grained object classification. The +model transfers non-trivially to most tasks and is often competitive with a +fully supervised baseline without the need for any dataset specific training. +For instance, we match the accuracy of the original ResNet-50 on ImageNet +zero-shot without needing to use any of the 1.28 million training examples it +was trained on. We release our code and pre-trained model weights at +https://github.com/OpenAI/CLIP. + +## CTRL: A Conditional Transformer Language Model for Controllable Generation + +- **arXiv id:** [1909.05858v2](http://arxiv.org/abs/1909.05858v2) **Published Date:** 2019-09-11 +- **Title:** CTRL: A Conditional Transformer Language Model for Controllable Generation +- **Authors:** Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. 
+- **LangChain:** + + - **API Reference:** [langchain_huggingface...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/huggingface/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceEndpoint](https://python.langchain.com/v0.2/api_reference/langchain_community/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community...HuggingFaceTextGenInference](https://python.langchain.com/v0.2/api_reference/community/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference) + +**Abstract:** Large-scale language models show promising text generation capabilities, but +users cannot easily control particular aspects of the generated text. We +release CTRL, a 1.63 billion-parameter conditional transformer language model, +trained to condition on control codes that govern style, content, and +task-specific behavior. Control codes were derived from structure that +naturally co-occurs with raw text, preserving the advantages of unsupervised +learning while providing more explicit control over text generation. These +codes also allow CTRL to predict which parts of the training data are most +likely given a sequence. This provides a potential method for analyzing large +amounts of data via model-based source attribution. We have released multiple +full-sized, pretrained versions of CTRL at https://github.com/salesforce/ctrl. + \ No newline at end of file diff --git a/langchain_md_files/additional_resources/dependents.mdx b/langchain_md_files/additional_resources/dependents.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a09df5027ecdca05cfe4e1f372602cc7341d362d --- /dev/null +++ b/langchain_md_files/additional_resources/dependents.mdx @@ -0,0 +1,554 @@ +# Dependents + +Dependents stats for `langchain-ai/langchain` + +[![](https://img.shields.io/static/v1?label=Used%20by&message=41717&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents) +[![](https://img.shields.io/static/v1?label=Used%20by%20(public)&message=538&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents) +[![](https://img.shields.io/static/v1?label=Used%20by%20(private)&message=41179&color=informational&logo=slickpic)](https://github.com/langchain-ai/langchain/network/dependents) + + +[update: `2023-12-08`; only dependent repositories with Stars > 100] + + +| Repository | Stars | +| :-------- | -----: | +|[AntonOsika/gpt-engineer](https://github.com/AntonOsika/gpt-engineer) | 46514 | +|[imartinez/privateGPT](https://github.com/imartinez/privateGPT) | 44439 | +|[LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | 35906 | +|[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI) | 35528 | +|[moymix/TaskMatrix](https://github.com/moymix/TaskMatrix) | 34342 | +|[geekan/MetaGPT](https://github.com/geekan/MetaGPT) | 31126 | +|[streamlit/streamlit](https://github.com/streamlit/streamlit) | 28911 | +|[reworkd/AgentGPT](https://github.com/reworkd/AgentGPT) | 27833 | +|[StanGirard/quivr](https://github.com/StanGirard/quivr) | 26032 | +|[OpenBB-finance/OpenBBTerminal](https://github.com/OpenBB-finance/OpenBBTerminal) | 24946 | 
+|[run-llama/llama_index](https://github.com/run-llama/llama_index) | 24859 | +|[jmorganca/ollama](https://github.com/jmorganca/ollama) | 20849 | +|[openai/chatgpt-retrieval-plugin](https://github.com/openai/chatgpt-retrieval-plugin) | 20249 | +|[chatchat-space/Langchain-Chatchat](https://github.com/chatchat-space/Langchain-Chatchat) | 19305 | +|[mindsdb/mindsdb](https://github.com/mindsdb/mindsdb) | 19172 | +|[PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT) | 17528 | +|[cube-js/cube](https://github.com/cube-js/cube) | 16575 | +|[mlflow/mlflow](https://github.com/mlflow/mlflow) | 16000 | +|[mudler/LocalAI](https://github.com/mudler/LocalAI) | 14067 | +|[logspace-ai/langflow](https://github.com/logspace-ai/langflow) | 13679 | +|[GaiZhenbiao/ChuanhuChatGPT](https://github.com/GaiZhenbiao/ChuanhuChatGPT) | 13648 | +|[arc53/DocsGPT](https://github.com/arc53/DocsGPT) | 13423 | +|[openai/evals](https://github.com/openai/evals) | 12649 | +|[airbytehq/airbyte](https://github.com/airbytehq/airbyte) | 12460 | +|[langgenius/dify](https://github.com/langgenius/dify) | 11859 | +|[databrickslabs/dolly](https://github.com/databrickslabs/dolly) | 10672 | +|[AIGC-Audio/AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | 9437 | +|[langchain-ai/langchainjs](https://github.com/langchain-ai/langchainjs) | 9227 | +|[gventuri/pandas-ai](https://github.com/gventuri/pandas-ai) | 9203 | +|[aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples) | 9079 | +|[h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) | 8945 | +|[PipedreamHQ/pipedream](https://github.com/PipedreamHQ/pipedream) | 7550 | +|[bentoml/OpenLLM](https://github.com/bentoml/OpenLLM) | 6957 | +|[THUDM/ChatGLM3](https://github.com/THUDM/ChatGLM3) | 6801 | +|[microsoft/promptflow](https://github.com/microsoft/promptflow) | 6776 | +|[cpacker/MemGPT](https://github.com/cpacker/MemGPT) | 6642 | +|[joshpxyne/gpt-migrate](https://github.com/joshpxyne/gpt-migrate) | 6482 | +|[zauberzeug/nicegui](https://github.com/zauberzeug/nicegui) | 6037 | +|[embedchain/embedchain](https://github.com/embedchain/embedchain) | 6023 | +|[mage-ai/mage-ai](https://github.com/mage-ai/mage-ai) | 6019 | +|[assafelovic/gpt-researcher](https://github.com/assafelovic/gpt-researcher) | 5936 | +|[sweepai/sweep](https://github.com/sweepai/sweep) | 5855 | +|[wenda-LLM/wenda](https://github.com/wenda-LLM/wenda) | 5766 | +|[zilliztech/GPTCache](https://github.com/zilliztech/GPTCache) | 5710 | +|[pdm-project/pdm](https://github.com/pdm-project/pdm) | 5665 | +|[GreyDGL/PentestGPT](https://github.com/GreyDGL/PentestGPT) | 5568 | +|[gkamradt/langchain-tutorials](https://github.com/gkamradt/langchain-tutorials) | 5507 | +|[Shaunwei/RealChar](https://github.com/Shaunwei/RealChar) | 5501 | +|[facebookresearch/llama-recipes](https://github.com/facebookresearch/llama-recipes) | 5477 | +|[serge-chat/serge](https://github.com/serge-chat/serge) | 5221 | +|[run-llama/rags](https://github.com/run-llama/rags) | 4916 | +|[openchatai/OpenChat](https://github.com/openchatai/OpenChat) | 4870 | +|[danswer-ai/danswer](https://github.com/danswer-ai/danswer) | 4774 | +|[langchain-ai/opengpts](https://github.com/langchain-ai/opengpts) | 4709 | +|[postgresml/postgresml](https://github.com/postgresml/postgresml) | 4639 | +|[MineDojo/Voyager](https://github.com/MineDojo/Voyager) | 4582 | +|[intel-analytics/BigDL](https://github.com/intel-analytics/BigDL) | 4581 | +|[yihong0618/xiaogpt](https://github.com/yihong0618/xiaogpt) | 4359 | 
+|[RayVentura/ShortGPT](https://github.com/RayVentura/ShortGPT) | 4357 | +|[Azure-Samples/azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo) | 4317 | +|[madawei2699/myGPTReader](https://github.com/madawei2699/myGPTReader) | 4289 | +|[apache/nifi](https://github.com/apache/nifi) | 4098 | +|[langchain-ai/chat-langchain](https://github.com/langchain-ai/chat-langchain) | 4091 | +|[aiwaves-cn/agents](https://github.com/aiwaves-cn/agents) | 4073 | +|[krishnaik06/The-Grand-Complete-Data-Science-Materials](https://github.com/krishnaik06/The-Grand-Complete-Data-Science-Materials) | 4065 | +|[khoj-ai/khoj](https://github.com/khoj-ai/khoj) | 4016 | +|[Azure/azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | 3941 | +|[PrefectHQ/marvin](https://github.com/PrefectHQ/marvin) | 3915 | +|[OpenBMB/ToolBench](https://github.com/OpenBMB/ToolBench) | 3799 | +|[marqo-ai/marqo](https://github.com/marqo-ai/marqo) | 3771 | +|[kyegomez/tree-of-thoughts](https://github.com/kyegomez/tree-of-thoughts) | 3688 | +|[Unstructured-IO/unstructured](https://github.com/Unstructured-IO/unstructured) | 3543 | +|[llm-workflow-engine/llm-workflow-engine](https://github.com/llm-workflow-engine/llm-workflow-engine) | 3515 | +|[shroominic/codeinterpreter-api](https://github.com/shroominic/codeinterpreter-api) | 3425 | +|[openchatai/OpenCopilot](https://github.com/openchatai/OpenCopilot) | 3418 | +|[josStorer/RWKV-Runner](https://github.com/josStorer/RWKV-Runner) | 3297 | +|[whitead/paper-qa](https://github.com/whitead/paper-qa) | 3280 | +|[homanp/superagent](https://github.com/homanp/superagent) | 3258 | +|[ParisNeo/lollms-webui](https://github.com/ParisNeo/lollms-webui) | 3199 | +|[OpenBMB/AgentVerse](https://github.com/OpenBMB/AgentVerse) | 3099 | +|[project-baize/baize-chatbot](https://github.com/project-baize/baize-chatbot) | 3090 | +|[OpenGVLab/InternGPT](https://github.com/OpenGVLab/InternGPT) | 2989 | +|[xlang-ai/OpenAgents](https://github.com/xlang-ai/OpenAgents) | 2825 | +|[dataelement/bisheng](https://github.com/dataelement/bisheng) | 2797 | +|[Mintplex-Labs/anything-llm](https://github.com/Mintplex-Labs/anything-llm) | 2784 | +|[OpenBMB/BMTools](https://github.com/OpenBMB/BMTools) | 2734 | +|[run-llama/llama-hub](https://github.com/run-llama/llama-hub) | 2721 | +|[SamurAIGPT/EmbedAI](https://github.com/SamurAIGPT/EmbedAI) | 2647 | +|[NVIDIA/NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) | 2637 | +|[X-D-Lab/LangChain-ChatGLM-Webui](https://github.com/X-D-Lab/LangChain-ChatGLM-Webui) | 2532 | +|[GerevAI/gerev](https://github.com/GerevAI/gerev) | 2517 | +|[keephq/keep](https://github.com/keephq/keep) | 2448 | +|[yanqiangmiffy/Chinese-LangChain](https://github.com/yanqiangmiffy/Chinese-LangChain) | 2397 | +|[OpenGVLab/Ask-Anything](https://github.com/OpenGVLab/Ask-Anything) | 2324 | +|[IntelligenzaArtificiale/Free-Auto-GPT](https://github.com/IntelligenzaArtificiale/Free-Auto-GPT) | 2241 | +|[YiVal/YiVal](https://github.com/YiVal/YiVal) | 2232 | +|[jupyterlab/jupyter-ai](https://github.com/jupyterlab/jupyter-ai) | 2189 | +|[Farama-Foundation/PettingZoo](https://github.com/Farama-Foundation/PettingZoo) | 2136 | +|[microsoft/TaskWeaver](https://github.com/microsoft/TaskWeaver) | 2126 | +|[hwchase17/notion-qa](https://github.com/hwchase17/notion-qa) | 2083 | +|[FlagOpen/FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding) | 2053 | +|[paulpierre/RasaGPT](https://github.com/paulpierre/RasaGPT) | 1999 | 
+|[hegelai/prompttools](https://github.com/hegelai/prompttools) | 1984 | +|[mckinsey/vizro](https://github.com/mckinsey/vizro) | 1951 | +|[vocodedev/vocode-python](https://github.com/vocodedev/vocode-python) | 1868 | +|[dot-agent/openAMS](https://github.com/dot-agent/openAMS) | 1796 | +|[explodinggradients/ragas](https://github.com/explodinggradients/ragas) | 1766 | +|[AI-Citizen/SolidGPT](https://github.com/AI-Citizen/SolidGPT) | 1761 | +|[Kav-K/GPTDiscord](https://github.com/Kav-K/GPTDiscord) | 1696 | +|[run-llama/sec-insights](https://github.com/run-llama/sec-insights) | 1654 | +|[avinashkranjan/Amazing-Python-Scripts](https://github.com/avinashkranjan/Amazing-Python-Scripts) | 1635 | +|[microsoft/WhatTheHack](https://github.com/microsoft/WhatTheHack) | 1629 | +|[noahshinn/reflexion](https://github.com/noahshinn/reflexion) | 1625 | +|[psychic-api/psychic](https://github.com/psychic-api/psychic) | 1618 | +|[Forethought-Technologies/AutoChain](https://github.com/Forethought-Technologies/AutoChain) | 1611 | +|[pinterest/querybook](https://github.com/pinterest/querybook) | 1586 | +|[refuel-ai/autolabel](https://github.com/refuel-ai/autolabel) | 1553 | +|[jina-ai/langchain-serve](https://github.com/jina-ai/langchain-serve) | 1537 | +|[jina-ai/dev-gpt](https://github.com/jina-ai/dev-gpt) | 1522 | +|[agiresearch/OpenAGI](https://github.com/agiresearch/OpenAGI) | 1493 | +|[ttengwang/Caption-Anything](https://github.com/ttengwang/Caption-Anything) | 1484 | +|[greshake/llm-security](https://github.com/greshake/llm-security) | 1483 | +|[promptfoo/promptfoo](https://github.com/promptfoo/promptfoo) | 1480 | +|[milvus-io/bootcamp](https://github.com/milvus-io/bootcamp) | 1477 | +|[richardyc/Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | 1475 | +|[melih-unsal/DemoGPT](https://github.com/melih-unsal/DemoGPT) | 1428 | +|[YORG-AI/Open-Assistant](https://github.com/YORG-AI/Open-Assistant) | 1419 | +|[101dotxyz/GPTeam](https://github.com/101dotxyz/GPTeam) | 1416 | +|[jina-ai/thinkgpt](https://github.com/jina-ai/thinkgpt) | 1408 | +|[mmz-001/knowledge_gpt](https://github.com/mmz-001/knowledge_gpt) | 1398 | +|[intel/intel-extension-for-transformers](https://github.com/intel/intel-extension-for-transformers) | 1387 | +|[Azure/azureml-examples](https://github.com/Azure/azureml-examples) | 1385 | +|[lunasec-io/lunasec](https://github.com/lunasec-io/lunasec) | 1367 | +|[eyurtsev/kor](https://github.com/eyurtsev/kor) | 1355 | +|[xusenlinzy/api-for-open-llm](https://github.com/xusenlinzy/api-for-open-llm) | 1325 | +|[griptape-ai/griptape](https://github.com/griptape-ai/griptape) | 1323 | +|[SuperDuperDB/superduperdb](https://github.com/SuperDuperDB/superduperdb) | 1290 | +|[cofactoryai/textbase](https://github.com/cofactoryai/textbase) | 1284 | +|[psychic-api/rag-stack](https://github.com/psychic-api/rag-stack) | 1260 | +|[filip-michalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) | 1250 | +|[nod-ai/SHARK](https://github.com/nod-ai/SHARK) | 1237 | +|[pluralsh/plural](https://github.com/pluralsh/plural) | 1234 | +|[cheshire-cat-ai/core](https://github.com/cheshire-cat-ai/core) | 1194 | +|[LC1332/Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya) | 1184 | +|[poe-platform/server-bot-quick-start](https://github.com/poe-platform/server-bot-quick-start) | 1182 | +|[microsoft/X-Decoder](https://github.com/microsoft/X-Decoder) | 1180 | +|[juncongmoo/chatllama](https://github.com/juncongmoo/chatllama) | 1171 | 
+|[visual-openllm/visual-openllm](https://github.com/visual-openllm/visual-openllm) | 1156 | +|[alejandro-ao/ask-multiple-pdfs](https://github.com/alejandro-ao/ask-multiple-pdfs) | 1153 | +|[ThousandBirdsInc/chidori](https://github.com/ThousandBirdsInc/chidori) | 1152 | +|[irgolic/AutoPR](https://github.com/irgolic/AutoPR) | 1137 | +|[SamurAIGPT/Camel-AutoGPT](https://github.com/SamurAIGPT/Camel-AutoGPT) | 1083 | +|[ray-project/llm-applications](https://github.com/ray-project/llm-applications) | 1080 | +|[run-llama/llama-lab](https://github.com/run-llama/llama-lab) | 1072 | +|[jiran214/GPT-vup](https://github.com/jiran214/GPT-vup) | 1041 | +|[MetaGLM/FinGLM](https://github.com/MetaGLM/FinGLM) | 1035 | +|[peterw/Chat-with-Github-Repo](https://github.com/peterw/Chat-with-Github-Repo) | 1020 | +|[Anil-matcha/ChatPDF](https://github.com/Anil-matcha/ChatPDF) | 991 | +|[langchain-ai/langserve](https://github.com/langchain-ai/langserve) | 983 | +|[THUDM/AgentTuning](https://github.com/THUDM/AgentTuning) | 976 | +|[rlancemartin/auto-evaluator](https://github.com/rlancemartin/auto-evaluator) | 975 | +|[codeacme17/examor](https://github.com/codeacme17/examor) | 964 | +|[all-in-aigc/gpts-works](https://github.com/all-in-aigc/gpts-works) | 946 | +|[Ikaros-521/AI-Vtuber](https://github.com/Ikaros-521/AI-Vtuber) | 946 | +|[microsoft/Llama-2-Onnx](https://github.com/microsoft/Llama-2-Onnx) | 898 | +|[cirediatpl/FigmaChain](https://github.com/cirediatpl/FigmaChain) | 895 | +|[ricklamers/shell-ai](https://github.com/ricklamers/shell-ai) | 893 | +|[modelscope/modelscope-agent](https://github.com/modelscope/modelscope-agent) | 893 | +|[seanpixel/Teenage-AGI](https://github.com/seanpixel/Teenage-AGI) | 886 | +|[ajndkr/lanarky](https://github.com/ajndkr/lanarky) | 880 | +|[kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference](https://github.com/kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference) | 872 | +|[corca-ai/EVAL](https://github.com/corca-ai/EVAL) | 846 | +|[hwchase17/chat-your-data](https://github.com/hwchase17/chat-your-data) | 841 | +|[kreneskyp/ix](https://github.com/kreneskyp/ix) | 821 | +|[Link-AGI/AutoAgents](https://github.com/Link-AGI/AutoAgents) | 820 | +|[truera/trulens](https://github.com/truera/trulens) | 794 | +|[Dataherald/dataherald](https://github.com/Dataherald/dataherald) | 788 | +|[sunlabuiuc/PyHealth](https://github.com/sunlabuiuc/PyHealth) | 783 | +|[jondurbin/airoboros](https://github.com/jondurbin/airoboros) | 783 | +|[pyspark-ai/pyspark-ai](https://github.com/pyspark-ai/pyspark-ai) | 782 | +|[confident-ai/deepeval](https://github.com/confident-ai/deepeval) | 780 | +|[billxbf/ReWOO](https://github.com/billxbf/ReWOO) | 777 | +|[langchain-ai/streamlit-agent](https://github.com/langchain-ai/streamlit-agent) | 776 | +|[akshata29/entaoai](https://github.com/akshata29/entaoai) | 771 | +|[LambdaLabsML/examples](https://github.com/LambdaLabsML/examples) | 770 | +|[getmetal/motorhead](https://github.com/getmetal/motorhead) | 768 | +|[Dicklesworthstone/swiss_army_llama](https://github.com/Dicklesworthstone/swiss_army_llama) | 757 | +|[ruoccofabrizio/azure-open-ai-embeddings-qna](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) | 757 | +|[msoedov/langcorn](https://github.com/msoedov/langcorn) | 754 | +|[e-johnstonn/BriefGPT](https://github.com/e-johnstonn/BriefGPT) | 753 | +|[microsoft/sample-app-aoai-chatGPT](https://github.com/microsoft/sample-app-aoai-chatGPT) | 749 | +|[explosion/spacy-llm](https://github.com/explosion/spacy-llm) | 731 | 
+|[MiuLab/Taiwan-LLM](https://github.com/MiuLab/Taiwan-LLM) | 716 | +|[whyiyhw/chatgpt-wechat](https://github.com/whyiyhw/chatgpt-wechat) | 702 | +|[Azure-Samples/openai](https://github.com/Azure-Samples/openai) | 692 | +|[iusztinpaul/hands-on-llms](https://github.com/iusztinpaul/hands-on-llms) | 687 | +|[safevideo/autollm](https://github.com/safevideo/autollm) | 682 | +|[OpenGenerativeAI/GenossGPT](https://github.com/OpenGenerativeAI/GenossGPT) | 669 | +|[NoDataFound/hackGPT](https://github.com/NoDataFound/hackGPT) | 663 | +|[AILab-CVC/GPT4Tools](https://github.com/AILab-CVC/GPT4Tools) | 662 | +|[langchain-ai/auto-evaluator](https://github.com/langchain-ai/auto-evaluator) | 657 | +|[yvann-ba/Robby-chatbot](https://github.com/yvann-ba/Robby-chatbot) | 639 | +|[alexanderatallah/window.ai](https://github.com/alexanderatallah/window.ai) | 635 | +|[amosjyng/langchain-visualizer](https://github.com/amosjyng/langchain-visualizer) | 630 | +|[microsoft/PodcastCopilot](https://github.com/microsoft/PodcastCopilot) | 621 | +|[aws-samples/aws-genai-llm-chatbot](https://github.com/aws-samples/aws-genai-llm-chatbot) | 616 | +|[NeumTry/NeumAI](https://github.com/NeumTry/NeumAI) | 605 | +|[namuan/dr-doc-search](https://github.com/namuan/dr-doc-search) | 599 | +|[plastic-labs/tutor-gpt](https://github.com/plastic-labs/tutor-gpt) | 595 | +|[marimo-team/marimo](https://github.com/marimo-team/marimo) | 591 | +|[yakami129/VirtualWife](https://github.com/yakami129/VirtualWife) | 586 | +|[xuwenhao/geektime-ai-course](https://github.com/xuwenhao/geektime-ai-course) | 584 | +|[jonra1993/fastapi-alembic-sqlmodel-async](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async) | 573 | +|[dgarnitz/vectorflow](https://github.com/dgarnitz/vectorflow) | 568 | +|[yeagerai/yeagerai-agent](https://github.com/yeagerai/yeagerai-agent) | 564 | +|[daveebbelaar/langchain-experiments](https://github.com/daveebbelaar/langchain-experiments) | 563 | +|[traceloop/openllmetry](https://github.com/traceloop/openllmetry) | 559 | +|[Agenta-AI/agenta](https://github.com/Agenta-AI/agenta) | 546 | +|[michaelthwan/searchGPT](https://github.com/michaelthwan/searchGPT) | 545 | +|[jina-ai/agentchain](https://github.com/jina-ai/agentchain) | 544 | +|[mckaywrigley/repo-chat](https://github.com/mckaywrigley/repo-chat) | 533 | +|[marella/chatdocs](https://github.com/marella/chatdocs) | 532 | +|[opentensor/bittensor](https://github.com/opentensor/bittensor) | 532 | +|[DjangoPeng/openai-quickstart](https://github.com/DjangoPeng/openai-quickstart) | 527 | +|[freddyaboulton/gradio-tools](https://github.com/freddyaboulton/gradio-tools) | 517 | +|[sidhq/Multi-GPT](https://github.com/sidhq/Multi-GPT) | 515 | +|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 514 | +|[sajjadium/ctf-archives](https://github.com/sajjadium/ctf-archives) | 507 | +|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 502 | +|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 494 | +|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 493 | +|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 492 | +|[logan-markewich/llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack) | 483 | +|[datawhalechina/llm-universe](https://github.com/datawhalechina/llm-universe) | 475 | +|[leondz/garak](https://github.com/leondz/garak) | 464 | 
+|[RedisVentures/ArXivChatGuru](https://github.com/RedisVentures/ArXivChatGuru) | 461 | +|[Anil-matcha/Chatbase](https://github.com/Anil-matcha/Chatbase) | 455 | +|[Aiyu-awa/luna-ai](https://github.com/Aiyu-awa/luna-ai) | 450 | +|[DataDog/dd-trace-py](https://github.com/DataDog/dd-trace-py) | 450 | +|[Azure-Samples/miyagi](https://github.com/Azure-Samples/miyagi) | 449 | +|[poe-platform/poe-protocol](https://github.com/poe-platform/poe-protocol) | 447 | +|[onlyphantom/llm-python](https://github.com/onlyphantom/llm-python) | 446 | +|[junruxiong/IncarnaMind](https://github.com/junruxiong/IncarnaMind) | 441 | +|[CarperAI/OpenELM](https://github.com/CarperAI/OpenELM) | 441 | +|[daodao97/chatdoc](https://github.com/daodao97/chatdoc) | 437 | +|[showlab/VLog](https://github.com/showlab/VLog) | 436 | +|[wandb/weave](https://github.com/wandb/weave) | 420 | +|[QwenLM/Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) | 419 | +|[huchenxucs/ChatDB](https://github.com/huchenxucs/ChatDB) | 416 | +|[jerlendds/osintbuddy](https://github.com/jerlendds/osintbuddy) | 411 | +|[monarch-initiative/ontogpt](https://github.com/monarch-initiative/ontogpt) | 408 | +|[mallorbc/Finetune_LLMs](https://github.com/mallorbc/Finetune_LLMs) | 406 | +|[JayZeeDesign/researcher-gpt](https://github.com/JayZeeDesign/researcher-gpt) | 405 | +|[rsaryev/talk-codebase](https://github.com/rsaryev/talk-codebase) | 401 | +|[langchain-ai/langsmith-cookbook](https://github.com/langchain-ai/langsmith-cookbook) | 398 | +|[mtenenholtz/chat-twitter](https://github.com/mtenenholtz/chat-twitter) | 398 | +|[morpheuslord/GPT_Vuln-analyzer](https://github.com/morpheuslord/GPT_Vuln-analyzer) | 391 | +|[MagnivOrg/prompt-layer-library](https://github.com/MagnivOrg/prompt-layer-library) | 387 | +|[JohnSnowLabs/langtest](https://github.com/JohnSnowLabs/langtest) | 384 | +|[mrwadams/attackgen](https://github.com/mrwadams/attackgen) | 381 | +|[codefuse-ai/Test-Agent](https://github.com/codefuse-ai/Test-Agent) | 380 | +|[personoids/personoids-lite](https://github.com/personoids/personoids-lite) | 379 | +|[mosaicml/examples](https://github.com/mosaicml/examples) | 378 | +|[steamship-packages/langchain-production-starter](https://github.com/steamship-packages/langchain-production-starter) | 370 | +|[FlagAI-Open/Aquila2](https://github.com/FlagAI-Open/Aquila2) | 365 | +|[Mintplex-Labs/vector-admin](https://github.com/Mintplex-Labs/vector-admin) | 365 | +|[NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) | 357 | +|[BlackHC/llm-strategy](https://github.com/BlackHC/llm-strategy) | 354 | +|[lilacai/lilac](https://github.com/lilacai/lilac) | 352 | +|[preset-io/promptimize](https://github.com/preset-io/promptimize) | 351 | +|[yuanjie-ai/ChatLLM](https://github.com/yuanjie-ai/ChatLLM) | 347 | +|[andylokandy/gpt-4-search](https://github.com/andylokandy/gpt-4-search) | 346 | +|[zhoudaquan/ChatAnything](https://github.com/zhoudaquan/ChatAnything) | 343 | +|[rgomezcasas/dotfiles](https://github.com/rgomezcasas/dotfiles) | 343 | +|[tigerlab-ai/tiger](https://github.com/tigerlab-ai/tiger) | 342 | +|[HumanSignal/label-studio-ml-backend](https://github.com/HumanSignal/label-studio-ml-backend) | 334 | +|[nasa-petal/bidara](https://github.com/nasa-petal/bidara) | 334 | +|[momegas/megabots](https://github.com/momegas/megabots) | 334 | +|[Cheems-Seminar/grounded-segment-any-parts](https://github.com/Cheems-Seminar/grounded-segment-any-parts) | 330 | +|[CambioML/pykoi](https://github.com/CambioML/pykoi) | 326 | 
+|[Nuggt-dev/Nuggt](https://github.com/Nuggt-dev/Nuggt) | 326 | +|[wandb/edu](https://github.com/wandb/edu) | 326 | +|[Haste171/langchain-chatbot](https://github.com/Haste171/langchain-chatbot) | 324 | +|[sugarforever/LangChain-Tutorials](https://github.com/sugarforever/LangChain-Tutorials) | 322 | +|[liangwq/Chatglm_lora_multi-gpu](https://github.com/liangwq/Chatglm_lora_multi-gpu) | 321 | +|[ur-whitelab/chemcrow-public](https://github.com/ur-whitelab/chemcrow-public) | 320 | +|[itamargol/openai](https://github.com/itamargol/openai) | 318 | +|[gia-guar/JARVIS-ChatGPT](https://github.com/gia-guar/JARVIS-ChatGPT) | 304 | +|[SpecterOps/Nemesis](https://github.com/SpecterOps/Nemesis) | 302 | +|[facebookresearch/personal-timeline](https://github.com/facebookresearch/personal-timeline) | 302 | +|[hnawaz007/pythondataanalysis](https://github.com/hnawaz007/pythondataanalysis) | 301 | +|[Chainlit/cookbook](https://github.com/Chainlit/cookbook) | 300 | +|[airobotlab/KoChatGPT](https://github.com/airobotlab/KoChatGPT) | 300 | +|[GPT-Fathom/GPT-Fathom](https://github.com/GPT-Fathom/GPT-Fathom) | 299 | +|[kaarthik108/snowChat](https://github.com/kaarthik108/snowChat) | 299 | +|[kyegomez/swarms](https://github.com/kyegomez/swarms) | 296 | +|[LangStream/langstream](https://github.com/LangStream/langstream) | 295 | +|[genia-dev/GeniA](https://github.com/genia-dev/GeniA) | 294 | +|[shamspias/customizable-gpt-chatbot](https://github.com/shamspias/customizable-gpt-chatbot) | 291 | +|[TsinghuaDatabaseGroup/DB-GPT](https://github.com/TsinghuaDatabaseGroup/DB-GPT) | 290 | +|[conceptofmind/toolformer](https://github.com/conceptofmind/toolformer) | 283 | +|[sullivan-sean/chat-langchainjs](https://github.com/sullivan-sean/chat-langchainjs) | 283 | +|[AutoPackAI/beebot](https://github.com/AutoPackAI/beebot) | 282 | +|[pablomarin/GPT-Azure-Search-Engine](https://github.com/pablomarin/GPT-Azure-Search-Engine) | 282 | +|[gkamradt/LLMTest_NeedleInAHaystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) | 280 | +|[gustavz/DataChad](https://github.com/gustavz/DataChad) | 280 | +|[Safiullah-Rahu/CSV-AI](https://github.com/Safiullah-Rahu/CSV-AI) | 278 | +|[hwchase17/chroma-langchain](https://github.com/hwchase17/chroma-langchain) | 275 | +|[AkshitIreddy/Interactive-LLM-Powered-NPCs](https://github.com/AkshitIreddy/Interactive-LLM-Powered-NPCs) | 268 | +|[ennucore/clippinator](https://github.com/ennucore/clippinator) | 267 | +|[artitw/text2text](https://github.com/artitw/text2text) | 264 | +|[anarchy-ai/LLM-VM](https://github.com/anarchy-ai/LLM-VM) | 263 | +|[wpydcr/LLM-Kit](https://github.com/wpydcr/LLM-Kit) | 262 | +|[streamlit/llm-examples](https://github.com/streamlit/llm-examples) | 262 | +|[paolorechia/learn-langchain](https://github.com/paolorechia/learn-langchain) | 262 | +|[yym68686/ChatGPT-Telegram-Bot](https://github.com/yym68686/ChatGPT-Telegram-Bot) | 261 | +|[PradipNichite/Youtube-Tutorials](https://github.com/PradipNichite/Youtube-Tutorials) | 259 | +|[radi-cho/datasetGPT](https://github.com/radi-cho/datasetGPT) | 259 | +|[ur-whitelab/exmol](https://github.com/ur-whitelab/exmol) | 259 | +|[ml6team/fondant](https://github.com/ml6team/fondant) | 254 | +|[bborn/howdoi.ai](https://github.com/bborn/howdoi.ai) | 254 | +|[rahulnyk/knowledge_graph](https://github.com/rahulnyk/knowledge_graph) | 253 | +|[recalign/RecAlign](https://github.com/recalign/RecAlign) | 248 | +|[hwchase17/langchain-streamlit-template](https://github.com/hwchase17/langchain-streamlit-template) | 248 | 
+|[fetchai/uAgents](https://github.com/fetchai/uAgents) | 247 | +|[arthur-ai/bench](https://github.com/arthur-ai/bench) | 247 | +|[miaoshouai/miaoshouai-assistant](https://github.com/miaoshouai/miaoshouai-assistant) | 246 | +|[RoboCoachTechnologies/GPT-Synthesizer](https://github.com/RoboCoachTechnologies/GPT-Synthesizer) | 244 | +|[langchain-ai/web-explorer](https://github.com/langchain-ai/web-explorer) | 242 | +|[kaleido-lab/dolphin](https://github.com/kaleido-lab/dolphin) | 242 | +|[PJLab-ADG/DriveLikeAHuman](https://github.com/PJLab-ADG/DriveLikeAHuman) | 241 | +|[stepanogil/autonomous-hr-chatbot](https://github.com/stepanogil/autonomous-hr-chatbot) | 238 | +|[WongSaang/chatgpt-ui-server](https://github.com/WongSaang/chatgpt-ui-server) | 236 | +|[nexus-stc/stc](https://github.com/nexus-stc/stc) | 235 | +|[yeagerai/genworlds](https://github.com/yeagerai/genworlds) | 235 | +|[Gentopia-AI/Gentopia](https://github.com/Gentopia-AI/Gentopia) | 235 | +|[alphasecio/langchain-examples](https://github.com/alphasecio/langchain-examples) | 235 | +|[grumpyp/aixplora](https://github.com/grumpyp/aixplora) | 232 | +|[shaman-ai/agent-actors](https://github.com/shaman-ai/agent-actors) | 232 | +|[darrenburns/elia](https://github.com/darrenburns/elia) | 231 | +|[orgexyz/BlockAGI](https://github.com/orgexyz/BlockAGI) | 231 | +|[handrew/browserpilot](https://github.com/handrew/browserpilot) | 226 | +|[su77ungr/CASALIOY](https://github.com/su77ungr/CASALIOY) | 225 | +|[nicknochnack/LangchainDocuments](https://github.com/nicknochnack/LangchainDocuments) | 225 | +|[dbpunk-labs/octogen](https://github.com/dbpunk-labs/octogen) | 224 | +|[langchain-ai/weblangchain](https://github.com/langchain-ai/weblangchain) | 222 | +|[CL-lau/SQL-GPT](https://github.com/CL-lau/SQL-GPT) | 222 | +|[alvarosevilla95/autolang](https://github.com/alvarosevilla95/autolang) | 221 | +|[showlab/UniVTG](https://github.com/showlab/UniVTG) | 220 | +|[edreisMD/plugnplai](https://github.com/edreisMD/plugnplai) | 219 | +|[hardbyte/qabot](https://github.com/hardbyte/qabot) | 216 | +|[microsoft/azure-openai-in-a-day-workshop](https://github.com/microsoft/azure-openai-in-a-day-workshop) | 215 | +|[Azure-Samples/chat-with-your-data-solution-accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) | 214 | +|[amadad/agentcy](https://github.com/amadad/agentcy) | 213 | +|[snexus/llm-search](https://github.com/snexus/llm-search) | 212 | +|[afaqueumer/DocQA](https://github.com/afaqueumer/DocQA) | 206 | +|[plchld/InsightFlow](https://github.com/plchld/InsightFlow) | 205 | +|[yasyf/compress-gpt](https://github.com/yasyf/compress-gpt) | 205 | +|[benthecoder/ClassGPT](https://github.com/benthecoder/ClassGPT) | 205 | +|[voxel51/voxelgpt](https://github.com/voxel51/voxelgpt) | 204 | +|[jbrukh/gpt-jargon](https://github.com/jbrukh/gpt-jargon) | 204 | +|[emarco177/ice_breaker](https://github.com/emarco177/ice_breaker) | 204 | +|[tencentmusic/supersonic](https://github.com/tencentmusic/supersonic) | 202 | +|[Azure-Samples/azure-search-power-skills](https://github.com/Azure-Samples/azure-search-power-skills) | 202 | +|[blob42/Instrukt](https://github.com/blob42/Instrukt) | 201 | +|[langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk) | 200 | +|[SamPink/dev-gpt](https://github.com/SamPink/dev-gpt) | 200 | +|[ju-bezdek/langchain-decorators](https://github.com/ju-bezdek/langchain-decorators) | 198 | +|[KMnO4-zx/huanhuan-chat](https://github.com/KMnO4-zx/huanhuan-chat) | 196 | 
+|[Azure-Samples/jp-azureopenai-samples](https://github.com/Azure-Samples/jp-azureopenai-samples) | 192 | +|[hongbo-miao/hongbomiao.com](https://github.com/hongbo-miao/hongbomiao.com) | 190 | +|[CakeCrusher/openplugin](https://github.com/CakeCrusher/openplugin) | 190 | +|[PaddlePaddle/ERNIE-Bot-SDK](https://github.com/PaddlePaddle/ERNIE-Bot-SDK) | 189 | +|[retr0reg/Ret2GPT](https://github.com/retr0reg/Ret2GPT) | 189 | +|[AmineDiro/cria](https://github.com/AmineDiro/cria) | 187 | +|[lancedb/vectordb-recipes](https://github.com/lancedb/vectordb-recipes) | 186 | +|[vaibkumr/prompt-optimizer](https://github.com/vaibkumr/prompt-optimizer) | 185 | +|[aws-ia/ecs-blueprints](https://github.com/aws-ia/ecs-blueprints) | 184 | +|[ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT) | 183 | +|[MuhammadMoinFaisal/LargeLanguageModelsProjects](https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects) | 182 | +|[shauryr/S2QA](https://github.com/shauryr/S2QA) | 181 | +|[summarizepaper/summarizepaper](https://github.com/summarizepaper/summarizepaper) | 180 | +|[NomaDamas/RAGchain](https://github.com/NomaDamas/RAGchain) | 179 | +|[pnkvalavala/repochat](https://github.com/pnkvalavala/repochat) | 179 | +|[ibiscp/LLM-IMDB](https://github.com/ibiscp/LLM-IMDB) | 177 | +|[fengyuli-dev/multimedia-gpt](https://github.com/fengyuli-dev/multimedia-gpt) | 177 | +|[langchain-ai/text-split-explorer](https://github.com/langchain-ai/text-split-explorer) | 175 | +|[iMagist486/ElasticSearch-Langchain-Chatglm2](https://github.com/iMagist486/ElasticSearch-Langchain-Chatglm2) | 175 | +|[limaoyi1/Auto-PPT](https://github.com/limaoyi1/Auto-PPT) | 175 | +|[Open-Swarm-Net/GPT-Swarm](https://github.com/Open-Swarm-Net/GPT-Swarm) | 175 | +|[morpheuslord/HackBot](https://github.com/morpheuslord/HackBot) | 174 | +|[v7labs/benchllm](https://github.com/v7labs/benchllm) | 174 | +|[Coding-Crashkurse/Langchain-Full-Course](https://github.com/Coding-Crashkurse/Langchain-Full-Course) | 174 | +|[dongyh20/Octopus](https://github.com/dongyh20/Octopus) | 173 | +|[kimtth/azure-openai-llm-vector-langchain](https://github.com/kimtth/azure-openai-llm-vector-langchain) | 173 | +|[mayooear/private-chatbot-mpt30b-langchain](https://github.com/mayooear/private-chatbot-mpt30b-langchain) | 173 | +|[zilliztech/akcio](https://github.com/zilliztech/akcio) | 172 | +|[jmpaz/promptlib](https://github.com/jmpaz/promptlib) | 172 | +|[ccurme/yolopandas](https://github.com/ccurme/yolopandas) | 172 | +|[joaomdmoura/CrewAI](https://github.com/joaomdmoura/CrewAI) | 170 | +|[katanaml/llm-mistral-invoice-cpu](https://github.com/katanaml/llm-mistral-invoice-cpu) | 170 | +|[chakkaradeep/pyCodeAGI](https://github.com/chakkaradeep/pyCodeAGI) | 170 | +|[mudler/LocalAGI](https://github.com/mudler/LocalAGI) | 167 | +|[dssjon/biblos](https://github.com/dssjon/biblos) | 165 | +|[kjappelbaum/gptchem](https://github.com/kjappelbaum/gptchem) | 165 | +|[xxw1995/chatglm3-finetune](https://github.com/xxw1995/chatglm3-finetune) | 164 | +|[ArjanCodes/examples](https://github.com/ArjanCodes/examples) | 163 | +|[AIAnytime/Llama2-Medical-Chatbot](https://github.com/AIAnytime/Llama2-Medical-Chatbot) | 163 | +|[RCGAI/SimplyRetrieve](https://github.com/RCGAI/SimplyRetrieve) | 162 | +|[langchain-ai/langchain-teacher](https://github.com/langchain-ai/langchain-teacher) | 162 | +|[menloparklab/falcon-langchain](https://github.com/menloparklab/falcon-langchain) | 162 | +|[flurb18/AgentOoba](https://github.com/flurb18/AgentOoba) | 162 | 
+|[homanp/vercel-langchain](https://github.com/homanp/vercel-langchain) | 161 | +|[jiran214/langup-ai](https://github.com/jiran214/langup-ai) | 160 | +|[JorisdeJong123/7-Days-of-LangChain](https://github.com/JorisdeJong123/7-Days-of-LangChain) | 160 | +|[GoogleCloudPlatform/data-analytics-golden-demo](https://github.com/GoogleCloudPlatform/data-analytics-golden-demo) | 159 | +|[positive666/Prompt-Can-Anything](https://github.com/positive666/Prompt-Can-Anything) | 159 | +|[luisroque/large_laguage_models](https://github.com/luisroque/large_laguage_models) | 159 | +|[mlops-for-all/mlops-for-all.github.io](https://github.com/mlops-for-all/mlops-for-all.github.io) | 158 | +|[wandb/wandbot](https://github.com/wandb/wandbot) | 158 | +|[elastic/elasticsearch-labs](https://github.com/elastic/elasticsearch-labs) | 157 | +|[shroominic/funcchain](https://github.com/shroominic/funcchain) | 157 | +|[deeppavlov/dream](https://github.com/deeppavlov/dream) | 156 | +|[mluogh/eastworld](https://github.com/mluogh/eastworld) | 154 | +|[georgesung/llm_qlora](https://github.com/georgesung/llm_qlora) | 154 | +|[RUC-GSAI/YuLan-Rec](https://github.com/RUC-GSAI/YuLan-Rec) | 153 | +|[KylinC/ChatFinance](https://github.com/KylinC/ChatFinance) | 152 | +|[Dicklesworthstone/llama2_aided_tesseract](https://github.com/Dicklesworthstone/llama2_aided_tesseract) | 152 | +|[c0sogi/LLMChat](https://github.com/c0sogi/LLMChat) | 152 | +|[eunomia-bpf/GPTtrace](https://github.com/eunomia-bpf/GPTtrace) | 152 | +|[ErikBjare/gptme](https://github.com/ErikBjare/gptme) | 152 | +|[Klingefjord/chatgpt-telegram](https://github.com/Klingefjord/chatgpt-telegram) | 152 | +|[RoboCoachTechnologies/ROScribe](https://github.com/RoboCoachTechnologies/ROScribe) | 151 | +|[Aggregate-Intellect/sherpa](https://github.com/Aggregate-Intellect/sherpa) | 151 | +|[3Alan/DocsMind](https://github.com/3Alan/DocsMind) | 151 | +|[tangqiaoyu/ToolAlpaca](https://github.com/tangqiaoyu/ToolAlpaca) | 150 | +|[kulltc/chatgpt-sql](https://github.com/kulltc/chatgpt-sql) | 150 | +|[mallahyari/drqa](https://github.com/mallahyari/drqa) | 150 | +|[MedalCollector/Orator](https://github.com/MedalCollector/Orator) | 149 | +|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 149 | +|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 148 | +|[ssheng/BentoChain](https://github.com/ssheng/BentoChain) | 148 | +|[solana-labs/chatgpt-plugin](https://github.com/solana-labs/chatgpt-plugin) | 147 | +|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 147 | +|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 146 | +|[menloparklab/langchain-cohere-qdrant-doc-retrieval](https://github.com/menloparklab/langchain-cohere-qdrant-doc-retrieval) | 146 | +|[trancethehuman/entities-extraction-web-scraper](https://github.com/trancethehuman/entities-extraction-web-scraper) | 144 | +|[peterw/StoryStorm](https://github.com/peterw/StoryStorm) | 144 | +|[grumpyp/chroma-langchain-tutorial](https://github.com/grumpyp/chroma-langchain-tutorial) | 144 | +|[gh18l/CrawlGPT](https://github.com/gh18l/CrawlGPT) | 142 | +|[langchain-ai/langchain-aws-template](https://github.com/langchain-ai/langchain-aws-template) | 142 | +|[yasyf/summ](https://github.com/yasyf/summ) | 141 | +|[petehunt/langchain-github-bot](https://github.com/petehunt/langchain-github-bot) | 141 | +|[hirokidaichi/wanna](https://github.com/hirokidaichi/wanna) | 140 | +|[jina-ai/fastapi-serve](https://github.com/jina-ai/fastapi-serve) | 139 | 
+|[zenml-io/zenml-projects](https://github.com/zenml-io/zenml-projects) | 139 | +|[jlonge4/local_llama](https://github.com/jlonge4/local_llama) | 139 | +|[smyja/blackmaria](https://github.com/smyja/blackmaria) | 138 | +|[ChuloAI/BrainChulo](https://github.com/ChuloAI/BrainChulo) | 137 | +|[log1stics/voice-generator-webui](https://github.com/log1stics/voice-generator-webui) | 137 | +|[davila7/file-gpt](https://github.com/davila7/file-gpt) | 137 | +|[dcaribou/transfermarkt-datasets](https://github.com/dcaribou/transfermarkt-datasets) | 136 | +|[ciare-robotics/world-creator](https://github.com/ciare-robotics/world-creator) | 135 | +|[Undertone0809/promptulate](https://github.com/Undertone0809/promptulate) | 134 | +|[fixie-ai/fixie-examples](https://github.com/fixie-ai/fixie-examples) | 134 | +|[run-llama/ai-engineer-workshop](https://github.com/run-llama/ai-engineer-workshop) | 133 | +|[definitive-io/code-indexer-loop](https://github.com/definitive-io/code-indexer-loop) | 131 | +|[mortium91/langchain-assistant](https://github.com/mortium91/langchain-assistant) | 131 | +|[baidubce/bce-qianfan-sdk](https://github.com/baidubce/bce-qianfan-sdk) | 130 | +|[Ngonie-x/langchain_csv](https://github.com/Ngonie-x/langchain_csv) | 130 | +|[IvanIsCoding/ResuLLMe](https://github.com/IvanIsCoding/ResuLLMe) | 130 | +|[AnchoringAI/anchoring-ai](https://github.com/AnchoringAI/anchoring-ai) | 129 | +|[Azure/business-process-automation](https://github.com/Azure/business-process-automation) | 128 | +|[athina-ai/athina-sdk](https://github.com/athina-ai/athina-sdk) | 126 | +|[thunlp/ChatEval](https://github.com/thunlp/ChatEval) | 126 | +|[prof-frink-lab/slangchain](https://github.com/prof-frink-lab/slangchain) | 126 | +|[vietanhdev/pautobot](https://github.com/vietanhdev/pautobot) | 125 | +|[awslabs/generative-ai-cdk-constructs](https://github.com/awslabs/generative-ai-cdk-constructs) | 124 | +|[sdaaron/QueryGPT](https://github.com/sdaaron/QueryGPT) | 124 | +|[rabbitmetrics/langchain-13-min](https://github.com/rabbitmetrics/langchain-13-min) | 124 | +|[AutoLLM/AutoAgents](https://github.com/AutoLLM/AutoAgents) | 122 | +|[nicknochnack/Nopenai](https://github.com/nicknochnack/Nopenai) | 122 | +|[wombyz/HormoziGPT](https://github.com/wombyz/HormoziGPT) | 122 | +|[dotvignesh/PDFChat](https://github.com/dotvignesh/PDFChat) | 122 | +|[topoteretes/PromethAI-Backend](https://github.com/topoteretes/PromethAI-Backend) | 121 | +|[nftblackmagic/flask-langchain](https://github.com/nftblackmagic/flask-langchain) | 121 | +|[vishwasg217/finsight](https://github.com/vishwasg217/finsight) | 120 | +|[snap-stanford/MLAgentBench](https://github.com/snap-stanford/MLAgentBench) | 120 | +|[Azure/app-service-linux-docs](https://github.com/Azure/app-service-linux-docs) | 120 | +|[nyanp/chat2plot](https://github.com/nyanp/chat2plot) | 120 | +|[ant4g0nist/polar](https://github.com/ant4g0nist/polar) | 119 | +|[aws-samples/cdk-eks-blueprints-patterns](https://github.com/aws-samples/cdk-eks-blueprints-patterns) | 119 | +|[aws-samples/amazon-kendra-langchain-extensions](https://github.com/aws-samples/amazon-kendra-langchain-extensions) | 119 | +|[Xueheng-Li/SynologyChatbotGPT](https://github.com/Xueheng-Li/SynologyChatbotGPT) | 119 | +|[CodeAlchemyAI/ViLT-GPT](https://github.com/CodeAlchemyAI/ViLT-GPT) | 117 | +|[Lin-jun-xiang/docGPT-langchain](https://github.com/Lin-jun-xiang/docGPT-langchain) | 117 | +|[ademakdogan/ChatSQL](https://github.com/ademakdogan/ChatSQL) | 116 | 
+|[aniketmaurya/llm-inference](https://github.com/aniketmaurya/llm-inference) | 115 | +|[xuwenhao/mactalk-ai-course](https://github.com/xuwenhao/mactalk-ai-course) | 115 | +|[cmooredev/RepoReader](https://github.com/cmooredev/RepoReader) | 115 | +|[abi/autocommit](https://github.com/abi/autocommit) | 115 | +|[MIDORIBIN/langchain-gpt4free](https://github.com/MIDORIBIN/langchain-gpt4free) | 114 | +|[finaldie/auto-news](https://github.com/finaldie/auto-news) | 114 | +|[Anil-matcha/Youtube-to-chatbot](https://github.com/Anil-matcha/Youtube-to-chatbot) | 114 | +|[avrabyt/MemoryBot](https://github.com/avrabyt/MemoryBot) | 114 | +|[Capsize-Games/airunner](https://github.com/Capsize-Games/airunner) | 113 | +|[atisharma/llama_farm](https://github.com/atisharma/llama_farm) | 113 | +|[mbchang/data-driven-characters](https://github.com/mbchang/data-driven-characters) | 112 | +|[fiddler-labs/fiddler-auditor](https://github.com/fiddler-labs/fiddler-auditor) | 112 | +|[dirkjbreeuwer/gpt-automated-web-scraper](https://github.com/dirkjbreeuwer/gpt-automated-web-scraper) | 111 | +|[Appointat/Chat-with-Document-s-using-ChatGPT-API-and-Text-Embedding](https://github.com/Appointat/Chat-with-Document-s-using-ChatGPT-API-and-Text-Embedding) | 111 | +|[hwchase17/langchain-gradio-template](https://github.com/hwchase17/langchain-gradio-template) | 111 | +|[artas728/spelltest](https://github.com/artas728/spelltest) | 110 | +|[NVIDIA/GenerativeAIExamples](https://github.com/NVIDIA/GenerativeAIExamples) | 109 | +|[Azure/aistudio-copilot-sample](https://github.com/Azure/aistudio-copilot-sample) | 108 | +|[codefuse-ai/codefuse-chatbot](https://github.com/codefuse-ai/codefuse-chatbot) | 108 | +|[apirrone/Memento](https://github.com/apirrone/Memento) | 108 | +|[e-johnstonn/GPT-Doc-Summarizer](https://github.com/e-johnstonn/GPT-Doc-Summarizer) | 108 | +|[salesforce/BOLAA](https://github.com/salesforce/BOLAA) | 107 | +|[Erol444/gpt4-openai-api](https://github.com/Erol444/gpt4-openai-api) | 106 | +|[linjungz/chat-with-your-doc](https://github.com/linjungz/chat-with-your-doc) | 106 | +|[crosleythomas/MirrorGPT](https://github.com/crosleythomas/MirrorGPT) | 106 | +|[panaverse/learn-generative-ai](https://github.com/panaverse/learn-generative-ai) | 105 | +|[Azure/azure-sdk-tools](https://github.com/Azure/azure-sdk-tools) | 105 | +|[malywut/gpt_examples](https://github.com/malywut/gpt_examples) | 105 | +|[ritun16/chain-of-verification](https://github.com/ritun16/chain-of-verification) | 104 | +|[langchain-ai/langchain-benchmarks](https://github.com/langchain-ai/langchain-benchmarks) | 104 | +|[lightninglabs/LangChainBitcoin](https://github.com/lightninglabs/LangChainBitcoin) | 104 | +|[flepied/second-brain-agent](https://github.com/flepied/second-brain-agent) | 103 | +|[llmapp/openai.mini](https://github.com/llmapp/openai.mini) | 102 | +|[gimlet-ai/tddGPT](https://github.com/gimlet-ai/tddGPT) | 102 | +|[jlonge4/gpt_chatwithPDF](https://github.com/jlonge4/gpt_chatwithPDF) | 102 | +|[agentification/RAFA_code](https://github.com/agentification/RAFA_code) | 101 | +|[pacman100/DHS-LLM-Workshop](https://github.com/pacman100/DHS-LLM-Workshop) | 101 | +|[aws-samples/private-llm-qa-bot](https://github.com/aws-samples/private-llm-qa-bot) | 101 | + + +_Generated by [github-dependents-info](https://github.com/nvuillam/github-dependents-info)_ + +`github-dependents-info --repo "langchain-ai/langchain" --markdownfile dependents.md --minstars 100 --sort stars` diff --git a/langchain_md_files/additional_resources/tutorials.mdx 
b/langchain_md_files/additional_resources/tutorials.mdx new file mode 100644 index 0000000000000000000000000000000000000000..1b98d8c31af7a5727ca302f6dce628eac0c07f21 --- /dev/null +++ b/langchain_md_files/additional_resources/tutorials.mdx @@ -0,0 +1,51 @@ +# 3rd Party Tutorials + +## Tutorials + +### [LangChain v 0.1 by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae0gBSJ9T0w7cu7iJZbH3T31) +### [Build with Langchain - Advanced by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae06tclDATrMYY0idsTdLg9v) +### [LangGraph by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae16n2TWUkKq5PgJ0w6Pkwtg) +### [by Greg Kamradt](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5) +### [by Sam Witteveen](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ) +### [by James Briggs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F) +### [by Prompt Engineering](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr) +### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain) +### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L) +### [by BobLin (Chinese language)](https://www.youtube.com/playlist?list=PLbd7ntv6PxC3QMFQvtWfk55p-Op_syO1C) + +## Courses + +### Featured courses on Deeplearning.AI + +- [LangChain for LLM Application Development](https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/) +- [LangChain Chat with Your Data](https://www.deeplearning.ai/short-courses/langchain-chat-with-your-data/) +- [Functions, Tools and Agents with LangChain](https://www.deeplearning.ai/short-courses/functions-tools-agents-langchain/) +- [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js/) + +### Online courses + +- [Udemy](https://www.udemy.com/courses/search/?q=langchain) +- [DataCamp](https://www.datacamp.com/courses/developing-llm-applications-with-langchain) +- [Pluralsight](https://www.pluralsight.com/search?q=langchain) +- [Coursera](https://www.coursera.org/search?query=langchain) +- [Maven](https://maven.com/courses?query=langchain) +- [Udacity](https://www.udacity.com/catalog/all/any-price/any-school/any-skill/any-difficulty/any-duration/any-type/relevance/page-1?searchValue=langchain) +- [LinkedIn Learning](https://www.linkedin.com/search/results/learning/?keywords=langchain) +- [edX](https://www.edx.org/search?q=langchain) +- [freeCodeCamp](https://www.youtube.com/@freecodecamp/search?query=langchain) + +## Short Tutorials + +- [by Nicholas Renotte](https://youtu.be/MlK6SIjcjE8) +- [by Patrick Loeber](https://youtu.be/LbT1yp6quS8) +- [by Rabbitmetrics](https://youtu.be/aywZrzNaKjs) +- [by Ivan Reznikov](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb) + +## Books and Handbooks + +- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffrath](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing +- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham** +- [LangChain 
Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov** +- [Dive into Langchain (Chinese language)](https://langchain.boblin.app/) + +--------------------- diff --git a/langchain_md_files/additional_resources/youtube.mdx b/langchain_md_files/additional_resources/youtube.mdx new file mode 100644 index 0000000000000000000000000000000000000000..cf694573f0631765149c3ffb59bddae59b5fe779 --- /dev/null +++ b/langchain_md_files/additional_resources/youtube.mdx @@ -0,0 +1,63 @@ +# YouTube videos + +[Updated 2024-05-16] + +### [Official LangChain YouTube channel](https://www.youtube.com/@LangChain) + +### [Tutorials on YouTube](/docs/additional_resources/tutorials/#tutorials) + +## Videos (sorted by views) + +Only videos with 40K+ views: + +- [Using `ChatGPT` with YOUR OWN Data. This is magical. (LangChain `OpenAI API`)](https://youtu.be/9AXP7tCI9PI) +- [Chat with Multiple `PDFs` | LangChain App Tutorial in Python (Free LLMs and Embeddings)](https://youtu.be/dXxQ0LR-3Hg?si=pjXKhsHRzn10vOqX) +- [`Hugging Face` + Langchain in 5 mins | Access 200k+ FREE AI models for your AI apps](https://youtu.be/_j7JEDWuqLE?si=psimQscN3qo2dOa9) +- [LangChain Crash Course For Beginners | LangChain Tutorial](https://youtu.be/nAmC7SoVLd8?si=qJdvyG5-rnjqfdj1) +- [Vector Embeddings Tutorial – Code Your Own AI Assistant with GPT-4 API + LangChain + NLP](https://youtu.be/yfHHvmaMkcA?si=UBP3yw50cLm3a2nj) +- [Development with Large Language Models Tutorial – `OpenAI`, Langchain, Agents, `Chroma`](https://youtu.be/xZDB1naRUlk?si=v8J1q6oFHRyTkf7Y) +- [Langchain: `PDF` Chat App (GUI) | ChatGPT for Your PDF FILES | Step-by-Step Tutorial](https://youtu.be/RIWbalZ7sTo?si=LbKsCcuyv0BtnrTY) +- [Vector Search `RAG` Tutorial – Combine Your Data with LLMs with Advanced Search](https://youtu.be/JEBDfGqrAUA?si=pD7oxpfwWeJCxfBt) +- [LangChain Crash Course for Beginners](https://youtu.be/lG7Uxts9SXs?si=Yte4S5afN7KNCw0F) +- [Learn `RAG` From Scratch – Python AI Tutorial from a LangChain Engineer](https://youtu.be/sVcwVQRHIc8?si=_LN4g0vOgSdtlB3S) +- [`Llama 2` in LangChain — FIRST Open Source Conversational Agent!](https://youtu.be/6iHVJyX2e50?si=rtq1maPrzWKHbwVV) +- [LangChain Tutorial for Beginners | Generative AI Series](https://youtu.be/cQUUkZnyoD0?si=KYz-bvcocdqGh9f_) +- [Chatbots with `RAG`: LangChain Full Walkthrough](https://youtu.be/LhnCsygAvzY?si=yS7T98VLfcWdkDek) +- [LangChain Explained In 15 Minutes - A MUST Learn For Python Programmers](https://youtu.be/mrjq3lFz23s?si=wkQGcSKUJjuiiEPf) +- [LLM Project | End to End LLM Project Using Langchain, `OpenAI` in Finance Domain](https://youtu.be/MoqgmWV1fm8?si=oVl-5kJVgd3a07Y_) +- [What is LangChain?](https://youtu.be/1bUy-1hGZpI?si=NZ0D51VM5y-DhjGe) +- [`RAG` + Langchain Python Project: Easy AI/Chat For Your Doc](https://youtu.be/tcqEUSNCn8I?si=RLcWPBVLIErRqdmU) +- [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg?si=X9qVazlXYucN_JBP) +- [LangChain GEN AI Tutorial – 6 End-to-End Projects using OpenAI, Google `Gemini Pro`, `LLAMA2`](https://youtu.be/x0AnCE9SE4A?si=_92gJYm7kb-V2bi0) +- [Complete Langchain GEN AI Crash Course With 6 End To End LLM Projects With OPENAI, `LLAMA2`, `Gemini Pro`](https://youtu.be/aWKrL4z5H6w?si=NVLi7Yiq0ccE7xXE) +- [AI Leader Reveals The Future of AI AGENTS (LangChain CEO)](https://youtu.be/9ZhbA0FHZYc?si=1r4P6kRvKVvEhRgE) +- [Learn How To Query Pdf using Langchain Open AI in 5 min](https://youtu.be/5Ghv-F1wF_0?si=ZZRjrWfeiFOVrcvu) +- 
[Reliable, fully local RAG agents with `LLaMA3`](https://youtu.be/-ROS6gfYIts?si=75CXA8W_BbnkIxcV) +- [Learn `LangChain.js` - Build LLM apps with JavaScript and `OpenAI`](https://youtu.be/HSZ_uaif57o?si=Icj-RAhwMT-vHaYA) +- [LLM Project | End to End LLM Project Using LangChain, Google Palm In Ed-Tech Industry](https://youtu.be/AjQPRomyd-k?si=eC3NT6kn02Lhpz-_) +- [Chatbot Answering from Your Own Knowledge Base: Langchain, `ChatGPT`, `Pinecone`, and `Streamlit`: | Code](https://youtu.be/nAKhxQ3hcMA?si=9Zd_Nd_jiYhtml5w) +- [LangChain is AMAZING | Quick Python Tutorial](https://youtu.be/I4mFqyqFkxg?si=aJ66qh558OfNAczD) +- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw?si=kZR-lnJwixeVrjmh) +- [Using NEW `MPT-7B` in `Hugging Face` and LangChain](https://youtu.be/DXpk9K7DgMo?si=99JDpV_ueimwJhMi) +- [LangChain - COMPLETE TUTORIAL - Basics to advanced concept!](https://youtu.be/a89vqgK-Qcs?si=0aVO2EOqsw7GE5e3) +- [LangChain Agents: Simply Explained!](https://youtu.be/Xi9Ui-9qcPw?si=DCuG7nGx8dxcfhkx) +- [Chat With Multiple `PDF` Documents With Langchain And Google `Gemini Pro`](https://youtu.be/uus5eLz6smA?si=YUwvHtaZsGeIl0WD) +- [LLM Project | End to end LLM project Using Langchain, `Google Palm` in Retail Industry](https://youtu.be/4wtrl4hnPT8?si=_eOKPpdLfWu5UXMQ) +- [Tutorial | Chat with any Website using Python and Langchain](https://youtu.be/bupx08ZgSFg?si=KRrjYZFnuLsstGwW) +- [Prompt Engineering And LLM's With LangChain In One Shot-Generative AI](https://youtu.be/t2bSApmPzU4?si=87vPQQtYEWTyu2Kx) +- [Build a Custom Chatbot with `OpenAI`: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU?si=gR1u3DUG9lvzBIKK) +- [Search Your `PDF` App using Langchain, `ChromaDB`, and Open Source LLM: No OpenAI API (Runs on CPU)](https://youtu.be/rIV1EseKwU4?si=UxZEoXSiPai8fXgl) +- [Building a `RAG` application from scratch using Python, LangChain, and the `OpenAI API`](https://youtu.be/BrsocJb-fAo?si=hvkh9iTGzJ-LnsX-) +- [Function Calling via `ChatGPT API` - First Look With LangChain](https://youtu.be/0-zlUy7VUjg?si=Vc6LFseckEc6qvuk) +- [Private GPT, free deployment! Langchain-Chachat helps you easily play with major mainstream AI models! | Zero Degree Commentary](https://youtu.be/3LLUyaHP-3I?si=AZumEeFXsvqaLl0f) +- [Create a ChatGPT clone using `Streamlit` and LangChain](https://youtu.be/IaTiyQ2oYUQ?si=WbgsYmqPDnMidSUK) +- [What's next for AI agents ft. 
LangChain's Harrison Chase](https://youtu.be/pBBe1pk8hf4?si=H4vdBF9nmkNZxiHt)
+- [`LangFlow`: Build Chatbots without Writing Code - LangChain](https://youtu.be/KJ-ux3hre4s?si=TJuDu4bAlva1myNL)
+- [Building a LangChain Custom Medical Agent with Memory](https://youtu.be/6UFtRwWnHws?si=wymYad26VgigRkHy)
+- [`Ollama` meets LangChain](https://youtu.be/k_1pOF1mj8k?si=RlBiCrmaR3s7SnMK)
+- [End To End LLM Langchain Project using `Pinecone` Vector Database](https://youtu.be/erUfLIi9OFM?si=aHpuHXdIEmAfS4eF)
+- [`LLaMA2` with LangChain - Basics | LangChain TUTORIAL](https://youtu.be/cIRzwSXB4Rc?si=FUs0OLVJpzKhut0h)
+- [Understanding `ReACT` with LangChain](https://youtu.be/Eug2clsLtFs?si=imgj534ggxlypS0d)
+
+---------------------
+[Updated 2024-05-16]
diff --git a/langchain_md_files/changes/changelog/core.mdx b/langchain_md_files/changes/changelog/core.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..63c9c3f8c800ccba399dea6052bf5babeb8f7c16
--- /dev/null
+++ b/langchain_md_files/changes/changelog/core.mdx
@@ -0,0 +1,10 @@
+# langchain-core
+
+## 0.1.x
+
+#### Deprecated
+
+- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
+- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
+- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
+- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
\ No newline at end of file
diff --git a/langchain_md_files/changes/changelog/langchain.mdx b/langchain_md_files/changes/changelog/langchain.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..04a7d8d9dcdf8cd448dd34e05622cb0d02629443
--- /dev/null
+++ b/langchain_md_files/changes/changelog/langchain.mdx
@@ -0,0 +1,93 @@
+# langchain
+
+## 0.2.0
+
+### Deleted
+
+As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, LLMs, embedding models, vectorstores, etc.; instead, the user will be required to specify those explicitly.
+
+The following functions and classes require an explicit LLM to be passed as an argument:
+
+- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit`
+- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit`
+- `langchain.chains.openai_functions.get_openapi_chain`
+- `langchain.chains.router.MultiRetrievalQAChain.from_retrievers`
+- `langchain.indexes.VectorStoreIndexWrapper.query`
+- `langchain.indexes.VectorStoreIndexWrapper.query_with_sources`
+- `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources`
+- `langchain.chains.flare.FlareChain`
+
+The following classes now require passing an explicit Embedding model as an argument:
+
+- `langchain.indexes.VectorstoreIndexCreator`
+
+The following code has been removed:
+
+- `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method.
+
+### Deprecated
+
+We have two main types of deprecations:
+
+1. Code that was moved from `langchain` into another package (e.g., `langchain-community`)
+
+If you try to import it from `langchain`, the import will keep on working, but will raise a deprecation warning. The warning will provide a replacement import statement.
+
+```shell
+python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader"
+```
+
+```text
+LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated. Please replace deprecated imports:
+
+>> from langchain.document_loaders import UnstructuredMarkdownLoader
+
+with new imports of:
+
+>> from langchain_community.document_loaders import UnstructuredMarkdownLoader
+```
+
+We will continue supporting the imports in `langchain` until release 0.4 as long as the relevant package where the code lives is installed. (e.g., as long as `langchain_community` is installed.)
+
+However, we advise users not to rely on these imports and instead to migrate to the new imports. To help with this process, we're releasing a migration script via the LangChain CLI. See further instructions in the migration guide.
+
+2. Code that has better alternatives available and will eventually be removed, so there's only a single way to do things. (e.g., the `predict_messages` method in ChatModels has been deprecated in favor of `invoke`).
+
+Many of these were marked for removal in 0.2. We have bumped the removal to 0.3.
+
+
+## 0.1.0 (Jan 5, 2024)
+
+### Deleted
+
+No deletions.
+
+### Deprecated
+
+Deprecated classes and methods will be removed in 0.2.0.
+
+| Deprecated | Alternative | Reason |
+|---------------------------------|-----------------------------------|------------------------------------------------|
+| ChatVectorDBChain | ConversationalRetrievalChain | More general to all retrievers |
+| create_ernie_fn_chain | create_ernie_fn_runnable | Use LCEL under the hood |
+| created_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood |
+| NatBotChain | | Not used |
+| create_openai_fn_chain | create_openai_fn_runnable | Use LCEL under the hood |
+| create_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood |
+| load_query_constructor_chain | load_query_constructor_runnable | Use LCEL under the hood |
+| VectorDBQA | RetrievalQA | More general to all retrievers |
+| SequentialChain | LCEL | Obviated by LCEL |
+| SimpleSequentialChain | LCEL | Obviated by LCEL |
+| TransformChain | LCEL/RunnableLambda | Obviated by LCEL |
+| create_tagging_chain | create_structured_output_runnable | Use LCEL under the hood |
+| ChatAgent | create_react_agent | Use LCEL builder over a class |
+| ConversationalAgent | create_react_agent | Use LCEL builder over a class |
+| ConversationalChatAgent | create_json_chat_agent | Use LCEL builder over a class |
+| initialize_agent | Individual create agent methods | Individual create agent methods are more clear |
+| ZeroShotAgent | create_react_agent | Use LCEL builder over a class |
+| OpenAIFunctionsAgent | create_openai_functions_agent | Use LCEL builder over a class |
+| OpenAIMultiFunctionsAgent | create_openai_tools_agent | Use LCEL builder over a class |
+| SelfAskWithSearchAgent | create_self_ask_with_search | Use LCEL builder over a class |
+| StructuredChatAgent | create_structured_chat_agent | Use LCEL builder over a class |
+| XMLAgent | create_xml_agent | Use LCEL builder over a class |
\ No newline at end of file
diff --git a/langchain_md_files/concepts.mdx b/langchain_md_files/concepts.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3143686ad601db3542cc90c479cf3073cd49c3a4
--- /dev/null
+++ b/langchain_md_files/concepts.mdx
@@ -0,0 +1,1392 @@
+# Conceptual guide
+
+import ThemedImage from '@theme/ThemedImage';
+import useBaseUrl from '@docusaurus/useBaseUrl'; + +This section contains introductions to key parts of LangChain. + +## Architecture + +LangChain as a framework consists of a number of packages. + +### `langchain-core` +This package contains base abstractions of different components and ways to compose them together. +The interfaces for core components like LLMs, vector stores, retrievers and more are defined here. +No third party integrations are defined here. +The dependencies are kept purposefully very lightweight. + +### Partner packages + +While the long tail of integrations are in `langchain-community`, we split popular integrations into their own packages (e.g. `langchain-openai`, `langchain-anthropic`, etc). +This was done in order to improve support for these important integrations. + +### `langchain` + +The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture. +These are NOT third party integrations. +All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations. + +### `langchain-community` + +This package contains third party integrations that are maintained by the LangChain community. +Key partner packages are separated out (see below). +This contains all integrations for various components (LLMs, vector stores, retrievers). +All dependencies in this package are optional to keep the package as lightweight as possible. + +### [`langgraph`](https://langchain-ai.github.io/langgraph) + +`langgraph` is an extension of `langchain` aimed at +building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. + +LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for composing custom flows. + +### [`langserve`](/docs/langserve) + +A package to deploy LangChain chains as REST APIs. Makes it easy to get a production ready API up and running. + +### [LangSmith](https://docs.smith.langchain.com) + +A developer platform that lets you debug, test, evaluate, and monitor LLM applications. + + + +## LangChain Expression Language (LCEL) + + +LangChain Expression Language, or LCEL, is a declarative way to chain LangChain components. +LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL: + +**First-class streaming support** +When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. + +**Async support** +Any chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a [LangServe](/docs/langserve/) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server. 
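+
+To make these two points concrete, here is a minimal, illustrative LCEL sketch. It assumes the `langchain-openai` partner package is installed and an OpenAI API key is configured; the model name is only an example. The same composed chain can be invoked, streamed, or awaited asynchronously without any code changes:
+
+```python
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.prompts import ChatPromptTemplate
+from langchain_openai import ChatOpenAI  # assumed integration; any chat model works here
+
+# Compose a chain declaratively with the | operator
+prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
+chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
+
+# Synchronous call
+print(chain.invoke({"topic": "bears"}))
+
+# Streaming: parsed chunks arrive as the model produces tokens
+for chunk in chain.stream({"topic": "bears"}):
+    print(chunk, end="", flush=True)
+
+# The same chain exposes async variants, e.g. inside an async function:
+#     await chain.ainvoke({"topic": "bears"})
+```
+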
+ +**Optimized parallel execution** +Whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency. + +**Retries and fallbacks** +Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost. + +**Access intermediate results** +For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it’s available on every [LangServe](/docs/langserve) server. + +**Input and output schemas** +Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe. + +[**Seamless LangSmith tracing**](https://docs.smith.langchain.com) +As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step. +With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com/) for maximum observability and debuggability. + +LCEL aims to provide consistency around behavior and customization over legacy subclassed chains such as `LLMChain` and +`ConversationalRetrievalChain`. Many of these legacy chains hide important details like prompts, and as a wider variety +of viable models emerge, customization has become more and more important. + +If you are currently using one of these legacy chains, please see [this guide for guidance on how to migrate](/docs/versions/migrating_chains). + +For guides on how to do specific tasks with LCEL, check out [the relevant how-to guides](/docs/how_to/#langchain-expression-language-lcel). + +### Runnable interface + + +To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below. + +This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. 
+The standard interface includes: + +- `stream`: stream back chunks of the response +- `invoke`: call the chain on an input +- `batch`: call the chain on a list of inputs + +These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency: + +- `astream`: stream back chunks of the response async +- `ainvoke`: call the chain on an input async +- `abatch`: call the chain on a list of inputs async +- `astream_log`: stream back intermediate steps as they happen, in addition to the final response +- `astream_events`: **beta** stream events as they happen in the chain (introduced in `langchain-core` 0.1.14) + +The **input type** and **output type** varies by component: + +| Component | Input Type | Output Type | +| --- | --- | --- | +| Prompt | Dictionary | PromptValue | +| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage | +| LLM | Single string, list of chat messages or a PromptValue | String | +| OutputParser | The output of an LLM or ChatModel | Depends on the parser | +| Retriever | Single string | List of Documents | +| Tool | Single string or dictionary, depending on the tool | Depends on the tool | + + +All runnables expose input and output **schemas** to inspect the inputs and outputs: +- `input_schema`: an input Pydantic model auto-generated from the structure of the Runnable +- `output_schema`: an output Pydantic model auto-generated from the structure of the Runnable + +## Components + +LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. +Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix. + +### Chat models + + +Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text). +These are traditionally newer models (older models are generally `LLMs`, see below). +Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages. + +Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This means you can easily use chat models in place of LLMs. + +When a string is passed in as input, it is converted to a `HumanMessage` and then passed to the underlying model. + +LangChain does not host any Chat Models, rather we rely on third party integrations. + +We have some standardized parameters when constructing ChatModels: +- `model`: the name of the model +- `temperature`: the sampling temperature +- `timeout`: request timeout +- `max_tokens`: max tokens to generate +- `stop`: default stop sequences +- `max_retries`: max number of times to retry requests +- `api_key`: API key for the model provider +- `base_url`: endpoint to send requests to + +Some important things to note: +- standard params only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so max_tokens can't be supported on these. +- standard params are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in ``langchain-community``. 
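+
+As a rough illustration (assuming the `langchain-openai` partner package; other partner packages such as `langchain-anthropic` follow the same pattern), these standard parameters are passed directly to the chat model constructor:
+
+```python
+from langchain_openai import ChatOpenAI  # assumed provider package
+
+model = ChatOpenAI(
+    model="gpt-4o",    # name of the underlying model
+    temperature=0,     # sampling temperature
+    timeout=30,        # request timeout in seconds
+    max_tokens=512,    # maximum number of tokens to generate
+    stop=["\n\n"],     # default stop sequences
+    max_retries=2,     # how many times to retry failed requests
+)
+```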
+
+ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel, head to the API reference for that model.
+
+:::important
+Some chat models have been fine-tuned for **tool calling** and provide a dedicated API for it.
+Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling.
+Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information.
+:::
+
+For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models).
+
+#### Multimodality
+
+Some chat models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly lightweight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
+
+In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
+
+For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).
+
+For a full list of LangChain model providers with multimodal models, [check out this table](/docs/integrations/chat/#advanced-features).
+
+### LLMs
+
+
+:::caution
+Pure text-in/text-out LLMs tend to be older or lower-level. Many popular models are best used as [chat completion models](/docs/concepts/#chat-models),
+even for non-chat use cases.
+
+You are probably looking for [the section above instead](/docs/concepts/#chat-models).
+:::
+
+Language models that take a string as input and return a string.
+These are traditionally older models (newer models are generally [Chat Models](/docs/concepts/#chat-models), see above).
+
+Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
+This gives them the same interface as [Chat Models](/docs/concepts/#chat-models).
+When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
+
+LangChain does not host any LLMs, rather we rely on third party integrations.
+
+For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms).
+
+### Messages
+
+Some language models take a list of messages as input and return a message.
+There are a few different types of messages.
+All messages have a `role`, `content`, and `response_metadata` property.
+
+The `role` describes WHO is saying the message. The standard roles are "user", "assistant", "system", and "tool".
+LangChain has different message classes for different roles.
+
+The `content` property describes the content of the message.
+This can be a few different things:
+
+- A string (most models deal with this type of content)
+- A list of dictionaries (this is used for multimodal input, where the dictionary contains information about that input type and that input location)
+
+Optionally, messages can have a `name` property which allows for differentiating between multiple speakers with the same role.
+For example, if there are two users in the chat history it can be useful to differentiate between them. Not all models support this. + +#### HumanMessage + +This represents a message with role "user". + +#### AIMessage + +This represents a message with role "assistant". In addition to the `content` property, these messages also have: + +**`response_metadata`** + +The `response_metadata` property contains additional metadata about the response. The data here is often specific to each model provider. +This is where information like log-probs and token usage may be stored. + +**`tool_calls`** + +These represent a decision from an language model to call a tool. They are included as part of an `AIMessage` output. +They can be accessed from there with the `.tool_calls` property. + +This property returns a list of `ToolCall`s. A `ToolCall` is a dictionary with the following arguments: + +- `name`: The name of the tool that should be called. +- `args`: The arguments to that tool. +- `id`: The id of that tool call. + +#### SystemMessage + +This represents a message with role "system", which tells the model how to behave. Not every model provider supports this. + +#### ToolMessage + +This represents a message with role "tool", which contains the result of calling a tool. In addition to `role` and `content`, this message has: + +- a `tool_call_id` field which conveys the id of the call to the tool that was called to produce this result. +- an `artifact` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model. + +#### (Legacy) FunctionMessage + +This is a legacy message type, corresponding to OpenAI's legacy function-calling API. `ToolMessage` should be used instead to correspond to the updated tool-calling API. + +This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result. + + +### Prompt templates + + +Prompt templates help to translate user input and parameters into instructions for a language model. +This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output. + +Prompt Templates take as input a dictionary, where each key represents a variable in the prompt template to fill in. + +Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or a list of messages. +The reason this PromptValue exists is to make it easy to switch between strings and messages. + +There are a few different types of prompt templates: + +#### String PromptTemplates + +These prompt templates are used to format a single string, and generally are used for simpler inputs. +For example, a common way to construct and use a PromptTemplate is as follows: + +```python +from langchain_core.prompts import PromptTemplate + +prompt_template = PromptTemplate.from_template("Tell me a joke about {topic}") + +prompt_template.invoke({"topic": "cats"}) +``` + +#### ChatPromptTemplates + +These prompt templates are used to format a list of messages. These "templates" consist of a list of templates themselves. 
+For example, a common way to construct and use a ChatPromptTemplate is as follows: + +```python +from langchain_core.prompts import ChatPromptTemplate + +prompt_template = ChatPromptTemplate.from_messages([ + ("system", "You are a helpful assistant"), + ("user", "Tell me a joke about {topic}") +]) + +prompt_template.invoke({"topic": "cats"}) +``` + +In the above example, this ChatPromptTemplate will construct two messages when called. +The first is a system message, that has no variables to format. +The second is a HumanMessage, and will be formatted by the `topic` variable the user passes in. + +#### MessagesPlaceholder + + +This prompt template is responsible for adding a list of messages in a particular place. +In the above ChatPromptTemplate, we saw how we could format two messages, each one a string. +But what if we wanted the user to pass in a list of messages that we would slot into a particular spot? +This is how you use MessagesPlaceholder. + +```python +from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder +from langchain_core.messages import HumanMessage + +prompt_template = ChatPromptTemplate.from_messages([ + ("system", "You are a helpful assistant"), + MessagesPlaceholder("msgs") +]) + +prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]}) +``` + +This will produce a list of two messages, the first one being a system message, and the second one being the HumanMessage we passed in. +If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in). +This is useful for letting a list of messages be slotted into a particular spot. + +An alternative way to accomplish the same thing without using the `MessagesPlaceholder` class explicitly is: + +```python +prompt_template = ChatPromptTemplate.from_messages([ + ("system", "You are a helpful assistant"), + ("placeholder", "{msgs}") # <-- This is the changed part +]) +``` + +For specifics on how to use prompt templates, see the [relevant how-to guides here](/docs/how_to/#prompt-templates). + +### Example selectors +One common prompting technique for achieving better performance is to include examples as part of the prompt. +This is known as [few-shot prompting](/docs/concepts/#few-shot-prompting). +This gives the language model concrete examples of how it should behave. +Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them. +Example Selectors are classes responsible for selecting and then formatting examples into prompts. + +For specifics on how to use example selectors, see the [relevant how-to guides here](/docs/how_to/#example-selectors). + +### Output parsers + + +:::note + +The information here refers to parsers that take a text output from a model try to parse it into a more structured representation. +More and more models are supporting function (or tool) calling, which handles this automatically. +It is recommended to use function/tool calling rather than output parsing. +See documentation for that [here](/docs/concepts/#function-tool-calling). + +::: + +Responsible for taking the output of a model and transforming it to a more suitable format for downstream tasks. +Useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs. + +LangChain has lots of different types of output parsers. This is a list of output parsers LangChain supports. 
The table below has various pieces of information: + +**Name**: The name of the output parser + +**Supports Streaming**: Whether the output parser supports streaming. + +**Has Format Instructions**: Whether the output parser has format instructions. This is generally available except when (a) the desired schema is not specified in the prompt but rather in other parameters (like OpenAI function calling), or (b) when the OutputParser wraps another OutputParser. + +**Calls LLM**: Whether this output parser itself calls an LLM. This is usually only done by output parsers that attempt to correct misformatted output. + +**Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific kwargs. + +**Output Type**: The output type of the object returned by the parser. + +**Description**: Our commentary on this output parser and when to use it. + +| Name | Supports Streaming | Has Format Instructions | Calls LLM | Input Type | Output Type | Description | +|-----------------|--------------------|-------------------------------|-----------|----------------------------------|----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| [JSON](https://python.langchain.com/v0.2/api_reference/core/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html#langchain_core.output_parsers.json.JsonOutputParser) | ✅ | ✅ | | `str` \| `Message` | JSON object | Returns a JSON object as specified. You can specify a Pydantic model and it will return JSON for that model. Probably the most reliable output parser for getting structured data that does NOT use function calling. | +| [XML](https://python.langchain.com/v0.2/api_reference/core/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html#langchain_core.output_parsers.xml.XMLOutputParser) | ✅ | ✅ | | `str` \| `Message` | `dict` | Returns a dictionary of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). | +| [CSV](https://python.langchain.com/v0.2/api_reference/core/output_parsers/langchain_core.output_parsers.list.CommaSeparatedListOutputParser.html#langchain_core.output_parsers.list.CommaSeparatedListOutputParser) | ✅ | ✅ | | `str` \| `Message` | `List[str]` | Returns a list of comma separated values. | +| [OutputFixing](https://python.langchain.com/v0.2/api_reference/langchain/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html#langchain.output_parsers.fix.OutputFixingParser) | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the error message and the bad output to an LLM and ask it to fix the output. | +| [RetryWithError](https://python.langchain.com/v0.2/api_reference/langchain/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html#langchain.output_parsers.retry.RetryWithErrorOutputParser) | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the original inputs, the bad output, and the error message to an LLM and ask it to fix it. Compared to OutputFixingParser, this one also sends the original instructions. 
| +| [Pydantic](https://python.langchain.com/v0.2/api_reference/core/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html#langchain_core.output_parsers.pydantic.PydanticOutputParser) | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. | +| [YAML](https://python.langchain.com/v0.2/api_reference/langchain/output_parsers/langchain.output_parsers.yaml.YamlOutputParser.html#langchain.output_parsers.yaml.YamlOutputParser) | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. Uses YAML to encode it. | +| [PandasDataFrame](https://python.langchain.com/v0.2/api_reference/langchain/output_parsers/langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser.html#langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser) | | ✅ | | `str` \| `Message` | `dict` | Useful for doing operations with pandas DataFrames. | +| [Enum](https://python.langchain.com/v0.2/api_reference/langchain/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html#langchain.output_parsers.enum.EnumOutputParser) | | ✅ | | `str` \| `Message` | `Enum` | Parses response into one of the provided enum values. | +| [Datetime](https://python.langchain.com/v0.2/api_reference/langchain/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html#langchain.output_parsers.datetime.DatetimeOutputParser) | | ✅ | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime string. | +| [Structured](https://python.langchain.com/v0.2/api_reference/langchain/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html#langchain.output_parsers.structured.StructuredOutputParser) | | ✅ | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. | + +For specifics on how to use output parsers, see the [relevant how-to guides here](/docs/how_to/#output-parsers). + +### Chat history +Most LLM applications have a conversational interface. +An essential component of a conversation is being able to refer to information introduced earlier in the conversation. +At bare minimum, a conversational system should be able to access some window of past messages directly. + +The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain. +This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database. +Future interactions will then load those messages and pass them into the chain as part of the input. + +### Documents + + +A Document object in LangChain contains information about some data. It has two attributes: + +- `page_content: str`: The content of this document. Currently is only a string. +- `metadata: dict`: Arbitrary metadata associated with this document. Can track the document id, file name, etc. + +### Document loaders + + +These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc. + +Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method. 
+
+An example use case is as follows:
+
+```python
+from langchain_community.document_loaders.csv_loader import CSVLoader
+
+loader = CSVLoader(
+    ...  # <-- Integration specific parameters here
+)
+data = loader.load()
+```
+
+For specifics on how to use document loaders, see the [relevant how-to guides here](/docs/how_to/#document-loaders).
+
+### Text splitters
+
+Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is splitting a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
+
+When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. There are several ways to do that.
+
+At a high level, text splitters work as follows:
+
+1. Split the text up into small, semantically meaningful chunks (often sentences).
+2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
+3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
+
+That means there are two different axes along which you can customize your text splitter:
+
+1. How the text is split
+2. How the chunk size is measured
+
+For specifics on how to use text splitters, see the [relevant how-to guides here](/docs/how_to/#text-splitters).
+
+### Embedding models
+
+
+Embedding models create a vector representation of a piece of text. You can think of a vector as an array of numbers that captures the semantic meaning of the text.
+By representing the text in this way, you can perform mathematical operations that allow you to do things like search for other pieces of text that are most similar in meaning.
+These natural language search capabilities underpin many types of [context retrieval](/docs/concepts/#retrieval),
+where we provide an LLM with the relevant data it needs to effectively respond to a query.
+
+![](/img/embeddings.png)
+
+The `Embeddings` class is designed for interfacing with text embedding models. There are many different embedding model providers (OpenAI, Cohere, Hugging Face, etc.) and local models, and this class is designed to provide a standard interface for all of them.
+
+The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs. queries (the search query itself).
+
+For specifics on how to use embedding models, see the [relevant how-to guides here](/docs/how_to/#embedding-models).
+
+### Vector stores
+
+
+One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors,
+and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
+A vector store takes care of storing embedded data and performing vector search for you. + +Most vector stores can also store metadata about embedded vectors and support filtering on that metadata before +similarity search, allowing you more control over returned documents. + +Vector stores can be converted to the retriever interface by doing: + +```python +vectorstore = MyVectorStore() +retriever = vectorstore.as_retriever() +``` + +For specifics on how to use vector stores, see the [relevant how-to guides here](/docs/how_to/#vector-stores). + +### Retrievers + + +A retriever is an interface that returns documents given an unstructured query. +It is more general than a vector store. +A retriever does not need to be able to store documents, only to return (or retrieve) them. +Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/). + +Retrievers accept a string query as input and return a list of Document's as output. + +For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers). + +### Key-value stores + +For some techniques, such as [indexing and retrieval with multiple vectors per document](/docs/how_to/multi_vector/) or +[caching embeddings](/docs/how_to/caching_embeddings/), having a form of key-value (KV) storage is helpful. + +LangChain includes a [`BaseStore`](https://python.langchain.com/v0.2/api_reference/core/stores/langchain_core.stores.BaseStore.html) interface, +which allows for storage of arbitrary data. However, LangChain components that require KV-storage accept a +more specific `BaseStore[str, bytes]` instance that stores binary data (referred to as a `ByteStore`), and internally take care of +encoding and decoding data for their specific needs. + +This means that as a user, you only need to think about one type of store rather than different ones for different types of data. + +#### Interface + +All [`BaseStores`](https://python.langchain.com/v0.2/api_reference/core/stores/langchain_core.stores.BaseStore.html) support the following interface. Note that the interface allows +for modifying **multiple** key-value pairs at once: + +- `mget(key: Sequence[str]) -> List[Optional[bytes]]`: get the contents of multiple keys, returning `None` if the key does not exist +- `mset(key_value_pairs: Sequence[Tuple[str, bytes]]) -> None`: set the contents of multiple keys +- `mdelete(key: Sequence[str]) -> None`: delete multiple keys +- `yield_keys(prefix: Optional[str] = None) -> Iterator[str]`: yield all keys in the store, optionally filtering by a prefix + +For key-value store implementations, see [this section](/docs/integrations/stores/). + +### Tools + + +Tools are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models. +Tools are needed whenever you want a model to control parts of your code or call out to external APIs. + +A tool consists of: + +1. The name of the tool. +2. A description of what the tool does. +3. A JSON schema defining the inputs to the tool. +4. A function (and, optionally, an async variant of the function). + +When a tool is bound to a model, the name, description and JSON schema are provided as context to the model. +Given a list of tools and a set of instructions, a model can request to call one or more tools with specific inputs. 
+
+Typical usage may look like the following:
+
+```python
+tools = [...] # Define a list of tools
+llm_with_tools = llm.bind_tools(tools)
+ai_msg = llm_with_tools.invoke("do xyz...")
+# -> AIMessage(tool_calls=[ToolCall(...), ...], ...)
+```
+
+The `AIMessage` returned from the model MAY have `tool_calls` associated with it.
+Read [this guide](/docs/concepts/#aimessage) for more information on what the response type may look like.
+
+Once the chosen tools are invoked, the results can be passed back to the model so that it can complete whatever task
+it's performing.
+There are generally two different ways to invoke the tool and pass back the response:
+
+#### Invoke with just the arguments
+
+When you invoke a tool with just the arguments, you will get back the raw tool output (usually a string).
+This generally looks like:
+
+```python
+# You will want to first check that the LLM returned tool calls
+tool_call = ai_msg.tool_calls[0]
+# ToolCall(args={...}, id=..., ...)
+tool_output = tool.invoke(tool_call["args"])
+tool_message = ToolMessage(
+    content=tool_output,
+    tool_call_id=tool_call["id"],
+    name=tool_call["name"]
+)
+```
+
+Note that the `content` field will generally be passed back to the model.
+If you do not want the raw tool response to be passed to the model, but you still want to keep it around,
+you can transform the tool output and also pass it along as an artifact (read more about [`ToolMessage.artifact` here](/docs/concepts/#toolmessage)):
+
+```python
+... # Same code as above
+response_for_llm = transform(tool_output)
+tool_message = ToolMessage(
+    content=response_for_llm,
+    tool_call_id=tool_call["id"],
+    name=tool_call["name"],
+    artifact=tool_output
+)
+```
+
+#### Invoke with `ToolCall`
+
+The other way to invoke a tool is to call it with the full `ToolCall` that was generated by the model.
+When you do this, the tool will return a ToolMessage.
+The benefits of this are that you don't have to write the logic yourself to transform the tool output into a ToolMessage.
+This generally looks like:
+
+```python
+tool_call = ai_msg.tool_calls[0]
+# -> ToolCall(args={...}, id=..., ...)
+tool_message = tool.invoke(tool_call)
+# -> ToolMessage(
+#        content="tool result foobar...",
+#        tool_call_id=...,
+#        name="tool_name"
+#    )
+```
+
+If you are invoking the tool this way and want to include an [artifact](/docs/concepts/#toolmessage) for the ToolMessage, you will need to have the tool return two things (the content and the artifact).
+Read more about [defining tools that return artifacts here](/docs/how_to/tool_artifacts/).
+
+#### Best practices
+
+When designing tools to be used by a model, it is important to keep in mind that:
+
+- Chat models that have explicit [tool-calling APIs](/docs/concepts/#functiontool-calling) will be better at tool calling than non-fine-tuned models.
+- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas. This is another form of prompt engineering.
+- Simple, narrowly scoped tools are easier for models to use than complex tools.
+
+#### Related
+
+For specifics on how to use tools, see the [tools how-to guides](/docs/how_to/#tools).
+
+To use a pre-built tool, see the [tool integration docs](/docs/integrations/tools/).
+
+### Toolkits
+
+
+Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
+
+All Toolkits expose a `get_tools` method which returns a list of tools.
+You can therefore do:
+
+```python
+# Initialize a toolkit
+toolkit = ExampleToolkit(...)
+
+# Get list of tools
+tools = toolkit.get_tools()
+```
+
+### Agents
+
+By themselves, language models can't take actions - they just output text.
+A big use case for LangChain is creating **agents**.
+Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.
+The results of those actions can then be fed back into the agent, and it can determine whether more actions are needed or whether it is okay to finish.
+
+[LangGraph](https://github.com/langchain-ai/langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents.
+Please check out that documentation for a more in-depth overview of agent concepts.
+
+There is a legacy agent concept in LangChain that we are moving towards deprecating: `AgentExecutor`.
+AgentExecutor was essentially a runtime for agents.
+It was a great place to get started; however, it was not flexible enough once you started to build more customized agents.
+In order to solve that, we built LangGraph to be this flexible, highly-controllable runtime.
+
+If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/docs/how_to/agent_executor).
+It is recommended, however, that you start to transition to LangGraph.
+In order to assist in this, we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent).
+
+#### ReAct agents
+
+
+One popular architecture for building agents is [**ReAct**](https://arxiv.org/abs/2210.03629).
+ReAct combines reasoning and acting in an iterative process - in fact the name "ReAct" stands for "Reason" and "Act".
+
+The general flow looks like this:
+
+- The model will "think" about what step to take in response to an input and any previous observations.
+- The model will then choose an action from available tools (or choose to respond to the user).
+- The model will generate arguments to that tool.
+- The agent runtime (executor) will parse out the chosen tool and call it with the generated arguments.
+- The executor will return the results of the tool call back to the model as an observation.
+- This process repeats until the agent chooses to respond.
+
+There are general prompting-based implementations that do not require any model-specific features, but the most
+reliable implementations use features like [tool calling](/docs/how_to/tool_calling/) to reliably format outputs
+and reduce variance.
+
+Please see the [LangGraph documentation](https://langchain-ai.github.io/langgraph/) for more information,
+or [this how-to guide](/docs/how_to/migrate_agent/) for specific information on migrating to LangGraph.
+
+### Callbacks
+
+LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
+
+You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.
+
+#### Callback Events
+
+| Event            | Event Trigger                                | Associated Method     |
+|------------------|----------------------------------------------|-----------------------|
+| Chat model start | When a chat model starts                     | `on_chat_model_start` |
+| LLM start        | When an LLM starts                           | `on_llm_start`        |
+| LLM new token    | When an LLM or chat model emits a new token  | `on_llm_new_token`    |
+| LLM ends         | When an LLM or chat model ends               | `on_llm_end`          |
+| LLM errors       | When an LLM or chat model errors             | `on_llm_error`        |
+| Chain start      | When a chain starts running                  | `on_chain_start`      |
+| Chain end        | When a chain ends                            | `on_chain_end`        |
+| Chain error      | When a chain errors                          | `on_chain_error`      |
+| Tool start       | When a tool starts running                   | `on_tool_start`       |
+| Tool end         | When a tool ends                             | `on_tool_end`         |
+| Tool error       | When a tool errors                           | `on_tool_error`       |
+| Agent action     | When an agent takes an action                | `on_agent_action`     |
+| Agent finish     | When an agent ends                           | `on_agent_finish`     |
+| Retriever start  | When a retriever starts                      | `on_retriever_start`  |
+| Retriever end    | When a retriever ends                        | `on_retriever_end`    |
+| Retriever error  | When a retriever errors                      | `on_retriever_error`  |
+| Text             | When arbitrary text is run                   | `on_text`             |
+| Retry            | When a retry event is run                    | `on_retry`            |
+
+#### Callback handlers
+
+Callback handlers can either be `sync` or `async`:
+
+* Sync callback handlers implement the [BaseCallbackHandler](https://python.langchain.com/v0.2/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) interface.
+* Async callback handlers implement the [AsyncCallbackHandler](https://python.langchain.com/v0.2/api_reference/core/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) interface.
+
+During run-time, LangChain configures an appropriate callback manager (e.g., [CallbackManager](https://python.langchain.com/v0.2/api_reference/core/callbacks/langchain_core.callbacks.manager.CallbackManager.html) or [AsyncCallbackManager](https://python.langchain.com/v0.2/api_reference/core/callbacks/langchain_core.callbacks.manager.AsyncCallbackManager.html)), which will be responsible for calling the appropriate method on each "registered" callback handler when the event is triggered.
+
+#### Passing callbacks
+
+Callbacks are available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places:
+
+- **Request time callbacks**: Passed at the time of the request in addition to the input data.
+  Available on all standard `Runnable` objects. These callbacks are INHERITED by all children
+  of the object they are defined on. For example, `chain.invoke({"number": 25}, {"callbacks": [handler]})`.
+- **Constructor callbacks**: `chain = TheNameOfSomeChain(callbacks=[handler])`. These callbacks
+  are passed as arguments to the constructor of the object. The callbacks are scoped
+  only to the object they are defined on, and are **not** inherited by any children of the object.
+
+:::warning
+Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children
+of the object.
+:::
+
+If you're creating a custom chain or runnable, you need to remember to propagate request time
+callbacks to any child objects.
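+
+As a hedged sketch of how this fits together (reusing the chat model integration shown elsewhere on this page; the two handler methods implemented here are just a couple of the events listed above), a request-time callback handler can be defined and passed alongside the input:
+
+```python
+from langchain_anthropic import ChatAnthropic
+from langchain_core.callbacks import BaseCallbackHandler
+from langchain_core.prompts import ChatPromptTemplate
+
+
+class LoggingHandler(BaseCallbackHandler):
+    """Print a line for a couple of callback events."""
+
+    def on_chain_start(self, serialized, inputs, **kwargs):
+        print("Chain started with:", inputs)
+
+    def on_llm_end(self, response, **kwargs):
+        print("LLM call finished.")
+
+
+chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatAnthropic(
+    model="claude-3-sonnet-20240229"
+)
+
+# Request-time callbacks are inherited by every child of the chain.
+chain.invoke({"topic": "cats"}, {"callbacks": [LoggingHandler()]})
+```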
+ +:::important Async in Python<=3.10 + +Any `RunnableLambda`, a `RunnableGenerator`, or `Tool` that invokes other runnables +and is running async in python<=3.10, will have to propagate callbacks to child +objects manually. This is because LangChain cannot automatically propagate +callbacks to child objects in this case. + +This is a common reason why you may fail to see events being emitted from custom +runnables or tools. +::: + +For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks). + +## Techniques + +### Streaming + + +Individual LLM calls often run for much longer than traditional resource requests. +This compounds when you build more complex chains or agents that require multiple reasoning steps. + +Fortunately, LLMs generate output iteratively, which means it's possible to show sensible intermediate results +before the final response is ready. Consuming output as soon as it becomes available has therefore become a vital part of the UX +around building apps with LLMs to help alleviate latency issues, and LangChain aims to have first-class support for streaming. + +Below, we'll discuss some concepts and considerations around streaming in LangChain. + +#### `.stream()` and `.astream()` + +Most modules in LangChain include the `.stream()` method (and the equivalent `.astream()` method for [async](https://docs.python.org/3/library/asyncio.html) environments) as an ergonomic streaming interface. +`.stream()` returns an iterator, which you can consume with a simple `for` loop. Here's an example with a chat model: + +```python +from langchain_anthropic import ChatAnthropic + +model = ChatAnthropic(model="claude-3-sonnet-20240229") + +for chunk in model.stream("what color is the sky?"): + print(chunk.content, end="|", flush=True) +``` + +For models (or other components) that don't support streaming natively, this iterator would just yield a single chunk, but +you could still use the same general pattern when calling them. Using `.stream()` will also automatically call the model in streaming mode +without the need to provide additional config. + +The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://python.langchain.com/v0.2/api_reference/core/messages/langchain_core.messages.ai.AIMessageChunk.html). +Because this method is part of [LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel), +you can handle formatting differences from different outputs using an [output parser](/docs/concepts/#output-parsers) to transform +each yielded chunk. + +You can check out [this guide](/docs/how_to/streaming/#using-stream) for more detail on how to use `.stream()`. + +#### `.astream_events()` + + +While the `.stream()` method is intuitive, it can only return the final generated value of your chain. This is fine for single LLM calls, +but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of +the chain alongside the final output - for example, returning sources alongside the final generation when building a chat +over documents app. 
+ +There are ways to do this [using callbacks](/docs/concepts/#callbacks-1), or by constructing your chain in such a way that it passes intermediate +values to the end with something like chained [`.assign()`](/docs/how_to/passthrough/) calls, but LangChain also includes an +`.astream_events()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator +which yields [various types of events](/docs/how_to/streaming/#event-reference) that you can filter and process according +to the needs of your project. + +Here's one small example that prints just events containing streamed chat model output: + +```python +from langchain_core.output_parsers import StrOutputParser +from langchain_core.prompts import ChatPromptTemplate +from langchain_anthropic import ChatAnthropic + +model = ChatAnthropic(model="claude-3-sonnet-20240229") + +prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}") +parser = StrOutputParser() +chain = prompt | model | parser + +async for event in chain.astream_events({"topic": "parrot"}, version="v2"): + kind = event["event"] + if kind == "on_chat_model_stream": + print(event, end="|", flush=True) +``` + +You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components! + +See [this guide](/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.astream_events()`, +including a table listing available events. + +#### Callbacks + +The lowest level way to stream outputs from LLMs in LangChain is via the [callbacks](/docs/concepts/#callbacks) system. You can pass a +callback handler that handles the [`on_llm_new_token`](https://python.langchain.com/v0.2/api_reference/langchain/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_new_token) event into LangChain components. When that component is invoked, any +[LLM](/docs/concepts/#llms) or [chat model](/docs/concepts/#chat-models) contained in the component calls +the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. a HTTP response. +You can also handle the [`on_llm_end`](https://python.langchain.com/v0.2/api_reference/langchain/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_end) event to perform any necessary cleanup. + +You can see [this how-to section](/docs/how_to/#callbacks) for more specifics on using callbacks. + +Callbacks were the first technique for streaming introduced in LangChain. While powerful and generalizable, +they can be unwieldy for developers. For example: + +- You need to explicitly initialize and manage some aggregator or other stream to collect results. +- The execution order isn't explicitly guaranteed, and you could theoretically have a callback run after the `.invoke()` method finishes. +- Providers would often make you pass an additional parameter to stream outputs instead of returning them all at once. +- You would often ignore the result of the actual model call in favor of callback results. + +#### Tokens + +The unit that most model providers use to measure input and output is via a unit called a **token**. +Tokens are the basic units that language models read and generate when processing or producing text. 
+The exact definition of a token can vary depending on the specific way the model was trained -
+for instance, in English, a token could be a single word like "apple", or a part of a word like "app".
+
+When you send a model a prompt, the words and characters in the prompt are encoded into tokens using a **tokenizer**.
+The model then streams back generated output tokens, which the tokenizer decodes into human-readable text.
+The example below shows how OpenAI models tokenize `LangChain is cool!`:
+
+![](/img/tokenization.png)
+
+You can see that it gets split into 5 different tokens, and that the boundaries between tokens are not exactly the same as word boundaries.
+
+The reason language models use tokens rather than something more immediately intuitive like "characters"
+has to do with how they process and understand text. At a high level, language models iteratively predict their next generated output based on
+the initial input and their previous generations. Training the model on tokens allows language models to handle linguistic
+units (like words or subwords) that carry meaning, rather than individual characters, which makes it easier for the model
+to learn and understand the structure of the language, including grammar and context.
+Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing.
+
+### Function/tool calling
+
+:::info
+We use the term tool calling interchangeably with function calling. Although
+function calling is sometimes meant to refer to invocations of a single function,
+we treat all models as though they can return multiple tool or function calls in
+each message.
+:::
+
+Tool calling allows a [chat model](/docs/concepts/#chat-models) to respond to a given prompt by generating output that
+matches a user-defined schema.
+
+While the name implies that the model is performing
+some action, this is actually not the case! The model only generates the arguments to a tool, and actually running the tool (or not) is up to the user.
+One common example where you **wouldn't** want to call a function with the generated arguments
+is if you want to [extract structured output matching some schema](/docs/concepts/#structured-output)
+from unstructured text. You would give the model an "extraction" tool that takes
+parameters matching the desired schema, then treat the generated output as your final
+result.
+
+![Diagram of a tool call by a chat model](/img/tool_call.png)
+
+Tool calling is not universal, but is supported by many popular LLM providers, including [Anthropic](/docs/integrations/chat/anthropic/),
+[Cohere](/docs/integrations/chat/cohere/), [Google](/docs/integrations/chat/google_vertex_ai_palm/),
+[Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/), and even locally running models via [Ollama](/docs/integrations/chat/ollama/).
+
+LangChain provides a standardized interface for tool calling that is consistent across different models.
+
+The standard interface consists of:
+
+* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools) as well as [Pydantic](https://pydantic.dev/) objects.
+* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
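+
+As a small sketch of that standard interface (assuming the `langchain-openai` package; the exact tool calls returned will of course depend on the model), a tool is bound to the model and the requested calls are read back from `AIMessage.tool_calls`:
+
+```python
+from langchain_core.tools import tool
+from langchain_openai import ChatOpenAI  # assumed provider; any tool-calling model works
+
+
+@tool
+def multiply(a: int, b: int) -> int:
+    """Multiply two integers."""
+    return a * b
+
+
+llm_with_tools = ChatOpenAI(model="gpt-4o").bind_tools([multiply])
+ai_msg = llm_with_tools.invoke("What is 6 times 7?")
+
+ai_msg.tool_calls
+# -> [{'name': 'multiply', 'args': {'a': 6, 'b': 7}, 'id': '...'}]
+```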
+ +#### Tool usage + +After the model calls tools, you can use the tool by invoking it, then passing the arguments back to the model. +LangChain provides the [`Tool`](/docs/concepts/#tools) abstraction to help you handle this. + +The general flow is this: + +1. Generate tool calls with a chat model in response to a query. +2. Invoke the appropriate tools using the generated tool call as arguments. +3. Format the result of the tool invocations as [`ToolMessages`](/docs/concepts/#toolmessage). +4. Pass the entire list of messages back to the model so that it can generate a final answer (or call more tools). + +![Diagram of a complete tool calling flow](/img/tool_calling_flow.png) + +This is how tool calling [agents](/docs/concepts/#agents) perform tasks and answer queries. + +Check out some more focused guides below: + +- [How to use chat models to call tools](/docs/how_to/tool_calling/) +- [How to pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model/) +- [Building an agent with LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/) + +### Structured output + +LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide +range of inputs, but for some use-cases, it can be useful to constrain the LLM's output +to a specific format or structure. This is referred to as **structured output**. + +For example, if the output is to be stored in a relational database, +it is much easier if the model generates output that adheres to a defined schema or format. +[Extracting specific information](/docs/tutorials/extraction/) from unstructured text is another +case where this is particularly useful. Most commonly, the output format will be JSON, +though other formats such as [YAML](/docs/how_to/output_parser_yaml/) can be useful too. Below, we'll discuss +a few ways to get structured output from models in LangChain. + +#### `.with_structured_output()` + +For convenience, some LangChain chat models support a [`.with_structured_output()`](/docs/how_to/structured_output/#the-with_structured_output-method) +method. This method only requires a schema as input, and returns a dict or Pydantic object. +Generally, this method is only present on models that support one of the more advanced methods described below, +and will use one of them under the hood. It takes care of importing a suitable output parser and +formatting the schema in the right format for the model. + +Here's an example: + +```python +from typing import Optional + +from langchain_core.pydantic_v1 import BaseModel, Field + + +class Joke(BaseModel): + """Joke to tell user.""" + + setup: str = Field(description="The setup of the joke") + punchline: str = Field(description="The punchline to the joke") + rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10") + +structured_llm = llm.with_structured_output(Joke) + +structured_llm.invoke("Tell me a joke about cats") +``` + +``` +Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=None) + +``` + +We recommend this method as a starting point when working with structured output: + +- It uses other model-specific features under the hood, without the need to import an output parser. +- For the models that use tool calling, no special prompting is needed. 
+- If multiple underlying techniques are supported, you can supply a `method` parameter to +[toggle which one is used](/docs/how_to/structured_output/#advanced-specifying-the-method-for-structuring-outputs). + +You may want or need to use other techniques if: + +- The chat model you are using does not support tool calling. +- You are working with very complex schemas and the model is having trouble generating outputs that conform. + +For more information, check out this [how-to guide](/docs/how_to/structured_output/#the-with_structured_output-method). + +You can also check out [this table](/docs/integrations/chat/#advanced-features) for a list of models that support +`with_structured_output()`. + +#### Raw prompting + +The most intuitive way to get a model to structure output is to ask nicely. +In addition to your query, you can give instructions describing what kind of output you'd like, then +parse the output using an [output parser](/docs/concepts/#output-parsers) to convert the raw +model message or string output into something more easily manipulated. + +The biggest benefit to raw prompting is its flexibility: + +- Raw prompting does not require any special model features, only sufficient reasoning capability to understand +the passed schema. +- You can prompt for any format you'd like, not just JSON. This can be useful if the model you +are using is more heavily trained on a certain type of data, such as XML or YAML. + +However, there are some drawbacks too: + +- LLMs are non-deterministic, and prompting a LLM to consistently output data in the exactly correct format +for smooth parsing can be surprisingly difficult and model-specific. +- Individual models have quirks depending on the data they were trained on, and optimizing prompts can be quite difficult. +Some may be better at interpreting [JSON schema](https://json-schema.org/), others may be best with TypeScript definitions, +and still others may prefer XML. + +While features offered by model providers may increase reliability, prompting techniques remain important for tuning your +results no matter which method you choose. + +#### JSON mode + + +Some models, such as [Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/), +[Together AI](/docs/integrations/chat/together/) and [Ollama](/docs/integrations/chat/ollama/), +support a feature called **JSON mode**, usually enabled via config. + +When enabled, JSON mode will constrain the model's output to always be some sort of valid JSON. +Often they require some custom prompting, but it's usually much less burdensome than completely raw prompting and +more along the lines of, `"you must always return JSON"`. The [output also generally easier to parse](/docs/how_to/output_parser_json/). + +It's also generally simpler to use directly and more commonly available than tool calling, and can give +more flexibility around prompting and shaping results than tool calling. + +Here's an example: + +```python +from langchain_core.prompts import ChatPromptTemplate +from langchain_openai import ChatOpenAI +from langchain.output_parsers.json import SimpleJsonOutputParser + +model = ChatOpenAI( + model="gpt-4o", + model_kwargs={ "response_format": { "type": "json_object" } }, +) + +prompt = ChatPromptTemplate.from_template( + "Answer the user's question to the best of your ability." + 'You must always output a JSON object with an "answer" key and a "followup_question" key.' 
+ "{question}" +) + +chain = prompt | model | SimpleJsonOutputParser() + +chain.invoke({ "question": "What is the powerhouse of the cell?" }) +``` + +``` +{'answer': 'The powerhouse of the cell is the mitochondrion. It is responsible for producing energy in the form of ATP through cellular respiration.', + 'followup_question': 'Would you like to know more about how mitochondria produce energy?'} +``` + +For a full list of model providers that support JSON mode, see [this table](/docs/integrations/chat/#advanced-features). + +#### Tool calling {#structured-output-tool-calling} + +For models that support it, [tool calling](/docs/concepts/#functiontool-calling) can be very convenient for structured output. It removes the +guesswork around how best to prompt schemas in favor of a built-in model feature. + +It works by first binding the desired schema either directly or via a [LangChain tool](/docs/concepts/#tools) to a +[chat model](/docs/concepts/#chat-models) using the `.bind_tools()` method. The model will then generate an `AIMessage` containing +a `tool_calls` field containing `args` that match the desired shape. + +There are several acceptable formats you can use to bind tools to a model in LangChain. Here's one example: + +```python +from langchain_core.pydantic_v1 import BaseModel, Field +from langchain_openai import ChatOpenAI + +class ResponseFormatter(BaseModel): + """Always use this tool to structure your response to the user.""" + + answer: str = Field(description="The answer to the user's question") + followup_question: str = Field(description="A followup question the user could ask") + +model = ChatOpenAI( + model="gpt-4o", + temperature=0, +) + +model_with_tools = model.bind_tools([ResponseFormatter]) + +ai_msg = model_with_tools.invoke("What is the powerhouse of the cell?") + +ai_msg.tool_calls[0]["args"] +``` + +``` +{'answer': "The powerhouse of the cell is the mitochondrion. It generates most of the cell's supply of adenosine triphosphate (ATP), which is used as a source of chemical energy.", + 'followup_question': 'How do mitochondria generate ATP?'} +``` + +Tool calling is a generally consistent way to get a model to generate structured output, and is the default technique +used for the [`.with_structured_output()`](/docs/concepts/#with_structured_output) method when a model supports it. + +The following how-to guides are good practical resources for using function/tool calling for structured output: + +- [How to return structured data from an LLM](/docs/how_to/structured_output/) +- [How to use a model to call tools](/docs/how_to/tool_calling) + +For a full list of model providers that support tool calling, [see this table](/docs/integrations/chat/#advanced-features). + +### Few-shot prompting + +One of the most effective ways to improve model performance is to give a model examples of what you want it to do. The technique of adding example inputs and expected outputs to a model prompt is known as "few-shot prompting". There are a few things to think about when doing few-shot prompting: + +1. How are examples generated? +2. How many examples are in each prompt? +3. How are examples selected at runtime? +4. How are examples formatted in the prompt? + +Here are the considerations for each. + +#### 1. Generating examples + +The first and most important step of few-shot prompting is coming up with a good dataset of examples. Good examples should be relevant at runtime, clear, informative, and provide information that was not already known to the model. 
+
+At a high level, the basic ways to generate examples are:
+- Manual: a person (or people) generates examples they think are useful.
+- Better model: a better (presumably more expensive/slower) model's responses are used as examples for a worse (presumably cheaper/faster) model.
+- User feedback: users (or labelers) leave feedback on interactions with the application and examples are generated based on that feedback (for example, all interactions with positive feedback could be turned into examples).
+- LLM feedback: same as user feedback but the process is automated by having models evaluate themselves.
+
+Which approach is best depends on your task. For tasks where a small number of core principles need to be understood really well, it can be valuable to hand-craft a few really good examples.
+For tasks where the space of correct behaviors is broader and more nuanced, it can be useful to generate many examples in a more automated fashion so that there's a higher likelihood of there being some highly relevant examples for any runtime input.
+
+**Single-turn vs. multi-turn examples**
+
+Another dimension to think about when generating examples is what the example is actually showing.
+
+The simplest types of examples just have a user input and an expected model output. These are single-turn examples.
+
+A more complex type of example is an entire conversation, usually one in which the model initially responds incorrectly and the user then tells the model how to correct its answer.
+This is called a multi-turn example. Multi-turn examples can be useful for more nuanced tasks where it's useful to show common errors and spell out exactly why they're wrong and what should be done instead.
+
+#### 2. Number of examples
+
+Once we have a dataset of examples, we need to think about how many examples should be in each prompt.
+The key tradeoff is that more examples generally improve performance, but larger prompts increase costs and latency.
+And beyond some threshold, having too many examples can start to confuse the model.
+Finding the right number of examples is highly dependent on the model, the task, the quality of the examples, and your cost and latency constraints.
+Anecdotally, the better the model is, the fewer examples it needs to perform well and the more quickly you hit steeply diminishing returns on adding more examples.
+But the best (and only reliable) way to answer this question is to run some experiments with different numbers of examples.
+
+#### 3. Selecting examples
+
+Assuming we are not adding our entire example dataset into each prompt, we need to have a way of selecting examples from our dataset based on a given input. We can do this:
+- Randomly
+- By (semantic or keyword-based) similarity of the inputs
+- Based on some other constraints, like token size
+
+LangChain has a number of [`ExampleSelectors`](/docs/concepts/#example-selectors) which make it easy to use any of these techniques.
+
+Generally, selecting by semantic similarity leads to the best model performance. But how much this matters is again model- and task-specific, and is something worth experimenting with.
+
+#### 4. Formatting examples
+
+Most state-of-the-art models these days are chat models, so we'll focus on formatting examples for those.
Our basic options are to insert the examples: +- In the system prompt as a string +- As their own messages + +If we insert our examples into the system prompt as a string, we'll need to make sure it's clear to the model where each example begins and which parts are the input versus output. Different models respond better to different syntaxes, like [ChatML](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chat-markup-language), XML, TypeScript, etc. + +If we insert our examples as messages, where each example is represented as a sequence of Human, AI messages, we might want to also assign [names](/docs/concepts/#messages) to our messages like `"example_user"` and `"example_assistant"` to make it clear that these messages correspond to different actors than the latest input message. + +**Formatting tool call examples** + +One area where formatting examples as messages can be tricky is when our example outputs have tool calls. This is because different models have different constraints on what types of message sequences are allowed when any tool calls are generated. +- Some models require that any AIMessage with tool calls be immediately followed by ToolMessages for every tool call, +- Some models additionally require that any ToolMessages be immediately followed by an AIMessage before the next HumanMessage, +- Some models require that tools are passed in to the model if there are any tool calls / ToolMessages in the chat history. + +These requirements are model-specific and should be checked for the model you are using. If your model requires ToolMessages after tool calls and/or AIMessages after ToolMessages and your examples only include expected tool calls and not the actual tool outputs, you can try adding dummy ToolMessages / AIMessages to the end of each example with generic contents to satisfy the API constraints. +In these cases it's especially worth experimenting with inserting your examples as strings versus messages, as having dummy messages can adversely affect certain models. + +You can see a case study of how Anthropic and OpenAI respond to different few-shot prompting techniques on two different tool calling benchmarks [here](https://blog.langchain.dev/few-shot-prompting-to-improve-tool-calling-performance/). + +### Retrieval + +LLMs are trained on a large but fixed dataset, limiting their ability to reason over private or recent information. Fine-tuning an LLM with specific facts is one way to mitigate this, but is often [poorly suited for factual recall](https://www.anyscale.com/blog/fine-tuning-is-for-form-not-facts) and [can be costly](https://www.glean.com/blog/how-to-build-an-ai-assistant-for-the-enterprise). +Retrieval is the process of providing relevant information to an LLM to improve its response for a given input. Retrieval augmented generation (RAG) is the process of grounding the LLM generation (output) using the retrieved information. + +:::tip + +* See our RAG from Scratch [code](https://github.com/langchain-ai/rag-from-scratch) and [video series](https://youtube.com/playlist?list=PLfaIDFEXuae2LXbO1_PKyVJiQ23ZztA0x&feature=shared). +* For a high-level guide on retrieval, see this [tutorial on RAG](/docs/tutorials/rag/). + +::: + +RAG is only as good as the retrieved documents’ relevance and quality. Fortunately, an emerging set of techniques can be employed to design and improve RAG systems. 
We've focused on taxonomizing and summarizing many of these techniques (see below figure) and will share some high-level strategic guidance in the following sections. +You can and should experiment with using different pieces together. You might also find [this LangSmith guide](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application) useful for showing how to evaluate different iterations of your app. + +![](/img/rag_landscape.png) + +#### Query Translation + +First, consider the user input(s) to your RAG system. Ideally, a RAG system can handle a wide range of inputs, from poorly worded questions to complex multi-part queries. +**Using an LLM to review and optionally modify the input is the central idea behind query translation.** This serves as a general buffer, optimizing raw user inputs for your retrieval system. +For example, this can be as simple as extracting keywords or as complex as generating multiple sub-questions for a complex query. + +| Name | When to use | Description | +|---------------|-------------|-------------| +| [Multi-query](/docs/how_to/MultiQueryRetriever/) | When you need to cover multiple perspectives of a question. | Rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, return the unique documents for all queries. | +| [Decomposition](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a question can be broken down into smaller subproblems. | Decompose a question into a set of subproblems / questions, which can either be solved sequentially (use the answer from first + retrieval to answer the second) or in parallel (consolidate each answer into final answer). | +| [Step-back](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | When a higher-level conceptual understanding is required. | First prompt the LLM to ask a generic step-back question about higher-level concepts or principles, and retrieve relevant facts about them. Use this grounding to help answer the user question. | +| [HyDE](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) | If you have challenges retrieving relevant documents using the raw user inputs. | Use an LLM to convert questions into hypothetical documents that answer the question. Use the embedded hypothetical documents to retrieve real documents with the premise that doc-doc similarity search can produce more relevant matches. | + +:::tip + +See our RAG from Scratch videos for a few different specific approaches: +- [Multi-query](https://youtu.be/JChPi0CRnDY?feature=shared) +- [Decomposition](https://youtu.be/h0OPWlEOank?feature=shared) +- [Step-back](https://youtu.be/xn1jEjRyJ2U?feature=shared) +- [HyDE](https://youtu.be/SaDzIVkYqyY?feature=shared) + +::: + +#### Routing + +Second, consider the data sources available to your RAG system. You want to query across more than one database or across structured and unstructured data sources. **Using an LLM to review the input and route it to the appropriate data source is a simple and effective approach for querying across sources.** + +| Name | When to use | Description | +|------------------|--------------------------------------------|-------------| +| [Logical routing](/docs/how_to/routing/) | When you can prompt an LLM with rules to decide where to route the input. | Logical routing can use an LLM to reason about the query and choose which datastore is most appropriate. 
| +| [Semantic routing](/docs/how_to/routing/#routing-by-semantic-similarity) | When semantic similarity is an effective way to determine where to route the input. | Semantic routing embeds both query and, typically a set of prompts. It then chooses the appropriate prompt based upon similarity. | + +:::tip + +See our RAG from Scratch video on [routing](https://youtu.be/pfpIndq7Fi8?feature=shared). + +::: + +#### Query Construction + +Third, consider whether any of your data sources require specific query formats. Many structured databases use SQL. Vector stores often have specific syntax for applying keyword filters to document metadata. **Using an LLM to convert a natural language query into a query syntax is a popular and powerful approach.** +In particular, [text-to-SQL](/docs/tutorials/sql_qa/), [text-to-Cypher](/docs/tutorials/graph/), and [query analysis for metadata filters](/docs/tutorials/query_analysis/#query-analysis) are useful ways to interact with structured, graph, and vector databases respectively. + +| Name | When to Use | Description | +|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| [Text to SQL](/docs/tutorials/sql_qa/) | If users are asking questions that require information housed in a relational database, accessible via SQL. | This uses an LLM to transform user input into a SQL query. | +| [Text-to-Cypher](/docs/tutorials/graph/) | If users are asking questions that require information housed in a graph database, accessible via Cypher. | This uses an LLM to transform user input into a Cypher query. | +| [Self Query](/docs/how_to/self_query/) | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). | + +:::tip + +See our [blog post overview](https://blog.langchain.dev/query-construction/) and RAG from Scratch video on [query construction](https://youtu.be/kl6NwWYxvbM?feature=shared), the process of text-to-DSL where DSL is a domain specific language required to interact with a given database. This converts user questions into structured queries. + +::: + +#### Indexing + +Fourth, consider the design of your document index. A simple and powerful idea is to **decouple the documents that you index for retrieval from the documents that you pass to the LLM for generation.** Indexing frequently uses embedding models with vector stores, which [compress the semantic information in documents to fixed-size vectors](/docs/concepts/#embedding-models). + +Many RAG approaches focus on splitting documents into chunks and retrieving some number based on similarity to an input question for the LLM. But chunk size and chunk number can be difficult to set and affect results if they do not provide full context for the LLM to answer a question. Furthermore, LLMs are increasingly capable of processing millions of tokens. 
+ +Two approaches can address this tension: (1) [Multi Vector](/docs/how_to/multi_vector/) retriever using an LLM to translate documents into any form (e.g., often into a summary) that is well-suited for indexing, but returns full documents to the LLM for generation. (2) [ParentDocument](/docs/how_to/parent_document_retriever/) retriever embeds document chunks, but also returns full documents. The idea is to get the best of both worlds: use concise representations (summaries or chunks) for retrieval, but use the full documents for answer generation. + +| Name | Index Type | Uses an LLM | When to Use | Description | +|---------------------------|------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| [Vector store](/docs/how_to/vectorstore_retriever/) | Vector store | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. | +| [ParentDocument](/docs/how_to/parent_document_retriever/) | Vector store + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). | +| [Multi Vector](/docs/how_to/multi_vector/) | Vector store + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. | +| [Time-Weighted Vector store](/docs/how_to/time_weighted_vectorstore/) | Vector store | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents) | + +:::tip + +- See our RAG from Scratch video on [indexing fundamentals](https://youtu.be/bjb_EMsTDKI?feature=shared) +- See our RAG from Scratch video on [multi vector retriever](https://youtu.be/gTCU9I6QqCE?feature=shared) + +::: + +Fifth, consider ways to improve the quality of your similarity search itself. Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is useful for search / retrieval, but puts a heavy burden on that single vector representation to capture the semantic nuance / detail of the document. In some cases, irrelevant or redundant content can dilute the semantic usefulness of the embedding. 
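+
+As a concrete reference point, here is a minimal sketch of the standard single-vector indexing and retrieval flow described above. It assumes an OpenAI embedding model and the Chroma vector store purely for illustration; any embedding model and vector store integration would work the same way:
+
+```python
+from langchain_chroma import Chroma
+from langchain_openai import OpenAIEmbeddings
+
+# Index: each text is compressed into a single fixed-length embedding.
+vectorstore = Chroma.from_texts(
+    texts=[
+        "Mitochondria generate most of the cell's supply of ATP.",
+        "The nucleus contains most of the cell's genetic material.",
+    ],
+    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
+)
+
+# Retrieve: the query is embedded the same way and compared against those single vectors.
+retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
+docs = retriever.invoke("What part of the cell produces energy?")
+```
+
+Every nuance of a document has to survive that single embedding, which is exactly the limitation the techniques below try to relax.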
+
+[ColBERT](https://docs.google.com/presentation/d/1IRhAdGjIevrrotdplHNcc4aXgIYyKamUKTWtB3m3aMU/edit?usp=sharing) is an interesting approach to address this with higher-granularity embeddings: (1) produce a contextually influenced embedding for each token in the document and query, (2) score similarity between each query token and all document tokens, (3) take the max, (4) do this for all query tokens, and (5) take the sum of the max scores (in step 3) for all query tokens to get a query-document similarity score; this token-wise scoring can yield strong results.
+
+![](/img/colbert.png)
+
+There are some additional tricks to improve the quality of your retrieval. Embeddings excel at capturing semantic information, but may struggle with keyword-based queries. Many [vector stores](/docs/integrations/retrievers/pinecone_hybrid_search/) offer built-in [hybrid-search](https://docs.pinecone.io/guides/data/understanding-hybrid-search) to combine keyword and semantic similarity, which marries the benefits of both approaches. Furthermore, many vector stores have [maximal marginal relevance](https://python.langchain.com/v0.1/docs/modules/model_io/prompts/example_selectors/mmr/), which attempts to diversify the results of a search to avoid returning similar and redundant documents.
+
+| Name | When to use | Description |
+|-------------------|----------------------------------------------------------|-------------|
+| [ColBERT](/docs/integrations/providers/ragatouille/#using-colbert-as-a-reranker) | When higher granularity embeddings are needed. | ColBERT uses contextually influenced embeddings for each token in the document and query to get a granular query-document similarity score. |
+| [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. |
+| [Maximal Marginal Relevance (MMR)](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
+
+:::tip
+
+See our RAG from Scratch video on [ColBERT](https://youtu.be/cN6S0Ehm7_8?feature=shared).
+
+:::
+
+#### Post-processing
+
+Sixth, consider ways to filter or rank retrieved documents. This is very useful if you are [combining documents returned from multiple sources](/docs/integrations/retrievers/cohere-reranker/#doing-reranking-with-coherererank), since it can down-rank less relevant documents and / or [compress similar documents](/docs/how_to/contextual_compression/#more-built-in-compressors-filters).
+
+| Name | Index Type | Uses an LLM | When to Use | Description |
+|---------------------------|------------------------------|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Contextual Compression](/docs/how_to/contextual_compression/) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
+| [Ensemble](/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
+| [Re-ranking](/docs/integrations/retrievers/cohere-reranker/) | Any | Yes | If you want to rank retrieved documents based upon relevance, especially if you want to combine results from multiple retrieval methods. | Given a query and a list of documents, Rerank indexes the documents from most to least semantically relevant to the query. |
+
+:::tip
+
+See our RAG from Scratch video on [RAG-Fusion](https://youtu.be/77qELPbNgxA?feature=shared), an approach for post-processing across multiple queries: rewrite the user question from multiple perspectives, retrieve documents for each rewritten question, and combine the ranks of multiple search result lists to produce a single, unified ranking with [Reciprocal Rank Fusion (RRF)](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).
+
+:::
+
+#### Generation
+
+**Finally, consider ways to build self-correction into your RAG system.** RAG systems can suffer from low-quality retrieval (e.g., if a user question is out of the domain for the index) and / or hallucinations in generation. A naive retrieve-generate pipeline has no ability to detect or correct these kinds of errors. The concept of ["flow engineering"](https://x.com/karpathy/status/1748043513156272416) has been introduced [in the context of code generation](https://arxiv.org/abs/2401.08500): iteratively build an answer to a code question with unit tests to check and self-correct errors. Several works have applied this to RAG, such as Self-RAG and Corrective-RAG. In both cases, checks for document relevance, hallucinations, and / or answer quality are performed in the RAG answer generation flow.
+
+We've found that graphs are a great way to reliably express logical flows and have implemented ideas from several of these papers [using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag), as shown in the figure below (red - routing, blue - fallback, green - self-correction):
+- **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches, as discussed above
+- **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fall back to web search if docs are not relevant to the query
+- **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers with hallucinations or that don't address the question
+
+![](/img/langgraph_rag.png)
+
+| Name | When to use | Description |
+|-------------------|-----------------------------------------------------------|-------------|
+| Self-RAG | When needing to fix answers with hallucinations or irrelevant content. | Self-RAG performs checks for document relevance, hallucinations, and answer quality during the RAG answer generation flow, iteratively building an answer and self-correcting errors. |
+| Corrective-RAG | When needing a fallback mechanism for low relevance docs. | Corrective-RAG includes a fallback (e.g., to web search) if the retrieved documents are not relevant to the query, ensuring higher quality and more relevant retrieval.
| + +:::tip + +See several videos and cookbooks showcasing RAG with LangGraph: +- [LangGraph Corrective RAG](https://www.youtube.com/watch?v=E2shqsYwxck) +- [LangGraph combining Adaptive, Self-RAG, and Corrective RAG](https://www.youtube.com/watch?v=-ROS6gfYIts) +- [Cookbooks for RAG using LangGraph](https://github.com/langchain-ai/langgraph/tree/main/examples/rag) + +See our LangGraph RAG recipes with partners: +- [Meta](https://github.com/meta-llama/llama-recipes/tree/main/recipes/3p_integrations/langchain) +- [Mistral](https://github.com/mistralai/cookbook/tree/main/third_party/langchain) + +::: + +### Text splitting + +LangChain offers many different types of `text splitters`. +These all live in the `langchain-text-splitters` package. + +Table columns: + +- **Name**: Name of the text splitter +- **Classes**: Classes that implement this text splitter +- **Splits On**: How this text splitter splits text +- **Adds Metadata**: Whether or not this text splitter adds metadata about where each chunk came from. +- **Description**: Description of the splitter, including recommendation on when to use it. + + +| Name | Classes | Splits On | Adds Metadata | Description | +|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Recursive | [RecursiveCharacterTextSplitter](/docs/how_to/recursive_text_splitter/), [RecursiveJsonSplitter](/docs/how_to/recursive_json_splitter/) | A list of user defined characters | | Recursively splits text. This splitting is trying to keep related pieces of text next to each other. This is the `recommended way` to start splitting text. | +| HTML | [HTMLHeaderTextSplitter](/docs/how_to/HTML_header_metadata_splitter/), [HTMLSectionSplitter](/docs/how_to/HTML_section_aware_splitter/) | HTML specific characters | ✅ | Splits text based on HTML-specific characters. Notably, this adds in relevant information about where that chunk came from (based on the HTML) | +| Markdown | [MarkdownHeaderTextSplitter](/docs/how_to/markdown_header_metadata_splitter/), | Markdown specific characters | ✅ | Splits text based on Markdown-specific characters. Notably, this adds in relevant information about where that chunk came from (based on the Markdown) | +| Code | [many languages](/docs/how_to/code_splitter/) | Code (Python, JS) specific characters | | Splits text based on characters specific to coding languages. 15 different languages are available to choose from. | +| Token | [many classes](/docs/how_to/split_by_token/) | Tokens | | Splits text on tokens. There exist a few different ways to measure tokens. | +| Character | [CharacterTextSplitter](/docs/how_to/character_text_splitter/) | A user defined character | | Splits text based on a user defined character. One of the simpler methods. | +| Semantic Chunker (Experimental) | [SemanticChunker](/docs/how_to/semantic-chunker/) | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. 
Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) | +| Integration: AI21 Semantic | [AI21SemanticTextSplitter](/docs/integrations/document_transformers/ai21_semantic_text_splitter/) | | ✅ | Identifies distinct topics that form coherent pieces of text and splits along those. | + +### Evaluation + + +Evaluation is the process of assessing the performance and effectiveness of your LLM-powered applications. +It involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose. +This process is vital for building reliable applications. + +![](/img/langsmith_evaluate.png) + +[LangSmith](https://docs.smith.langchain.com/) helps with this process in a few ways: + +- It makes it easier to create and curate datasets via its tracing and annotation features +- It provides an evaluation framework that helps you define metrics and run your app against your dataset +- It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/Code + +To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation). + +### Tracing + + +A trace is essentially a series of steps that your application takes to go from input to output. +Traces contain individual steps called `runs`. These can be individual calls from a model, retriever, +tool, or sub-chains. +Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues. + +For a deeper dive, check out [this LangSmith conceptual guide](https://docs.smith.langchain.com/concepts/tracing). diff --git a/langchain_md_files/contributing/code/guidelines.mdx b/langchain_md_files/contributing/code/guidelines.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7f75199b1e1bc75e274974767bd2ffcba305834f --- /dev/null +++ b/langchain_md_files/contributing/code/guidelines.mdx @@ -0,0 +1,35 @@ +# General guidelines + +Here are some things to keep in mind for all types of contributions: + +- Follow the ["fork and pull request"](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project) workflow. +- Fill out the checked-in pull request template when opening pull requests. Note related issues and tag relevant maintainers. +- Ensure your PR passes formatting, linting, and testing checks before requesting a review. + - If you would like comments or feedback on your current progress, please open an issue or discussion and tag a maintainer. + - See the sections on [Testing](/docs/contributing/code/setup#testing) and [Formatting and Linting](/docs/contributing/code/setup#formatting-and-linting) for how to run these checks locally. +- Backwards compatibility is key. Your changes must not be breaking, except in case of critical bug and security fixes. +- Look for duplicate PRs or issues that have already been opened before opening a new one. +- Keep scope as isolated as possible. As a general rule, your changes should not affect more than one package at a time. + +## Bugfixes + +We encourage and appreciate bugfixes. We ask that you: + +- Explain the bug in enough detail for maintainers to be able to reproduce it. + - If an accompanying issue exists, link to it. Prefix with `Fixes` so that the issue will close automatically when the PR is merged. +- Avoid breaking changes if possible. 
+- Include unit tests that fail without the bugfix. + +If you come across a bug and don't know how to fix it, we ask that you open an issue for it describing in detail the environment in which you encountered the bug. + +## New features + +We aim to keep the bar high for new features. We generally don't accept new core abstractions, changes to infra, changes to dependencies, +or new agents/chains from outside contributors without an existing GitHub discussion or issue that demonstrates an acute need for them. + +- New features must come with docs, unit tests, and (if appropriate) integration tests. +- New integrations must come with docs, unit tests, and (if appropriate) integration tests. + - See [this page](/docs/contributing/integrations) for more details on contributing new integrations. +- New functionality should not inherit from or use deprecated methods or classes. +- We will reject features that are likely to lead to security vulnerabilities or reports. +- Do not add any hard dependencies. Integrations may add optional dependencies. diff --git a/langchain_md_files/contributing/code/index.mdx b/langchain_md_files/contributing/code/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..43b90785567b25e6a14cb11cd1d845506cac5681 --- /dev/null +++ b/langchain_md_files/contributing/code/index.mdx @@ -0,0 +1,6 @@ +# Contribute Code + +If you would like to add a new feature or update an existing one, please read the resources below before getting started: + +- [General guidelines](/docs/contributing/code/guidelines/) +- [Setup](/docs/contributing/code/setup/) diff --git a/langchain_md_files/contributing/code/setup.mdx b/langchain_md_files/contributing/code/setup.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5e983d30fbecff663b34f798b653e944464f117c --- /dev/null +++ b/langchain_md_files/contributing/code/setup.mdx @@ -0,0 +1,213 @@ +# Setup + +This guide walks through how to run the repository locally and check in your first code. +For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchain/tree/master/.devcontainer). + +## Dependency Management: Poetry and other env/dependency managers + +This project utilizes [Poetry](https://python-poetry.org/) v1.7.1+ as a dependency manager. + +❗Note: *Before installing Poetry*, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`) + +Install Poetry: **[documentation on how to install it](https://python-poetry.org/docs/#installation)**. + +❗Note: If you use `Conda` or `Pyenv` as your environment/package manager, after installing Poetry, +tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`) + +## Different packages + +This repository contains multiple packages: +- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language). +- `langchain-community`: Third-party integrations of various components. +- `langchain`: Chains, agents, and retrieval logic that makes up the cognitive architecture of your applications. +- `langchain-experimental`: Components and chains that are experimental, either in the sense that the techniques are novel and still being tested, or they require giving the LLM more access than would be possible in most production systems. +- Partner integrations: Partner packages in `libs/partners` that are independently version controlled. 
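+
+To make the split concrete, here is a rough sketch of where typical imports live across these packages (the specific classes are only illustrative examples, not an exhaustive map):
+
+```python
+# langchain-core: base abstractions and the LCEL / Runnable interfaces
+from langchain_core.prompts import ChatPromptTemplate
+
+# langchain-community: third-party integrations maintained with the community
+from langchain_community.vectorstores import FAISS
+
+# langchain: chains, agents, and retrieval logic (the cognitive architecture)
+from langchain.agents import AgentExecutor
+
+# partner packages (libs/partners): independently versioned integrations
+from langchain_openai import ChatOpenAI
+```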
+ +Each of these has its own development environment. Docs are run from the top-level makefile, but development +is split across separate test & release flows. + +For this quickstart, start with langchain-community: + +```bash +cd libs/community +``` + +## Local Development Dependencies + +Install langchain-community development requirements (for running langchain, running examples, linting, formatting, tests, and coverage): + +```bash +poetry install --with lint,typing,test,test_integration +``` + +Then verify dependency installation: + +```bash +make test +``` + +If during installation you receive a `WheelFileValidationError` for `debugpy`, please make sure you are running +Poetry v1.6.1+. This bug was present in older versions of Poetry (e.g. 1.4.1) and has been resolved in newer releases. +If you are still seeing this bug on v1.6.1+, you may also try disabling "modern installation" +(`poetry config installer.modern-installation false`) and re-installing requirements. +See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details. + +## Testing + +**Note:** In `langchain`, `langchain-community`, and `langchain-experimental`, some test dependencies are optional. See the following section about optional dependencies. + +Unit tests cover modular logic that does not require calls to outside APIs. +If you add new logic, please add a unit test. + +To run unit tests: + +```bash +make test +``` + +To run unit tests in Docker: + +```bash +make docker_tests +``` + +There are also [integration tests and code-coverage](/docs/contributing/testing/) available. + +### Only develop langchain_core or langchain_experimental + +If you are only developing `langchain_core` or `langchain_experimental`, you can simply install the dependencies for the respective projects and run tests: + +```bash +cd libs/core +poetry install --with test +make test +``` + +Or: + +```bash +cd libs/experimental +poetry install --with test +make test +``` + +## Formatting and Linting + +Run these locally before submitting a PR; the CI system will check also. + +### Code Formatting + +Formatting for this project is done via [ruff](https://docs.astral.sh/ruff/rules/). + +To run formatting for docs, cookbook and templates: + +```bash +make format +``` + +To run formatting for a library, run the same command from the relevant library directory: + +```bash +cd libs/{LIBRARY} +make format +``` + +Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the format_diff command: + +```bash +make format_diff +``` + +This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase. + +#### Linting + +Linting for this project is done via a combination of [ruff](https://docs.astral.sh/ruff/rules/) and [mypy](http://mypy-lang.org/). 
+ +To run linting for docs, cookbook and templates: + +```bash +make lint +``` + +To run linting for a library, run the same command from the relevant library directory: + +```bash +cd libs/{LIBRARY} +make lint +``` + +In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the lint_diff command: + +```bash +make lint_diff +``` + +This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase. + +We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed. + +### Spellcheck + +Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell). +Note that `codespell` finds common typos, so it could have false-positive (correctly spelled but rarely used) and false-negatives (not finding misspelled) words. + +To check spelling for this project: + +```bash +make spell_check +``` + +To fix spelling in place: + +```bash +make spell_fix +``` + +If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file. + +```python +[tool.codespell] +... +# Add here: +ignore-words-list = 'momento,collison,ned,foor,reworkd,parth,whats,aapply,mysogyny,unsecure' +``` + +## Working with Optional Dependencies + +`langchain`, `langchain-community`, and `langchain-experimental` rely on optional dependencies to keep these packages lightweight. + +`langchain-core` and partner packages **do not use** optional dependencies in this way. + +You'll notice that `pyproject.toml` and `poetry.lock` are **not** touched when you add optional dependencies below. + +If you're adding a new dependency to Langchain, assume that it will be an optional dependency, and +that most users won't have it installed. + +Users who do not have the dependency installed should be able to **import** your code without +any side effects (no warnings, no errors, no exceptions). + +To introduce the dependency to a library, please do the following: + +1. Open extended_testing_deps.txt and add the dependency +2. Add a unit test that the very least attempts to import the new code. Ideally, the unit +test makes use of lightweight fixtures to test the logic of the code. +3. Please use the `@pytest.mark.requires(package_name)` decorator for any unit tests that require the dependency. + +## Adding a Jupyter Notebook + +If you are adding a Jupyter Notebook example, you'll want to install the optional `dev` dependencies. + +To install dev dependencies: + +```bash +poetry install --with dev +``` + +Launch a notebook: + +```bash +poetry run jupyter notebook +``` + +When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook. diff --git a/langchain_md_files/contributing/documentation/index.mdx b/langchain_md_files/contributing/documentation/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..564edf6d534980b182fe64dde227933f24a65616 --- /dev/null +++ b/langchain_md_files/contributing/documentation/index.mdx @@ -0,0 +1,7 @@ +# Contribute Documentation + +Documentation is a vital part of LangChain. 
We welcome both new documentation for new features and +community improvements to our current documentation. Please read the resources below before getting started: + +- [Documentation style guide](/docs/contributing/documentation/style_guide/) +- [Setup](/docs/contributing/documentation/setup/) diff --git a/langchain_md_files/contributing/documentation/setup.mdx b/langchain_md_files/contributing/documentation/setup.mdx new file mode 100644 index 0000000000000000000000000000000000000000..2ac1cab73767b7efb8c39be4f8d9d0615274c3f6 --- /dev/null +++ b/langchain_md_files/contributing/documentation/setup.mdx @@ -0,0 +1,181 @@ +--- +sidebar_class_name: "hidden" +--- + +# Setup + +LangChain documentation consists of two components: + +1. Main Documentation: Hosted at [python.langchain.com](https://python.langchain.com/), +this comprehensive resource serves as the primary user-facing documentation. +It covers a wide array of topics, including tutorials, use cases, integrations, +and more, offering extensive guidance on building with LangChain. +The content for this documentation lives in the `/docs` directory of the monorepo. +2. In-code Documentation: This is documentation of the codebase itself, which is also +used to generate the externally facing [API Reference](https://python.langchain.com/v0.2/api_reference/langchain/index.html). +The content for the API reference is autogenerated by scanning the docstrings in the codebase. For this reason we ask that +developers document their code well. + +The `API Reference` is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) +from the code and is hosted by [Read the Docs](https://readthedocs.org/). + +We appreciate all contributions to the documentation, whether it be fixing a typo, +adding a new tutorial or example and whether it be in the main documentation or the API Reference. + +Similar to linting, we recognize documentation can be annoying. If you do not want +to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed. + +## 📜 Main Documentation + +The content for the main documentation is located in the `/docs` directory of the monorepo. + +The documentation is written using a combination of ipython notebooks (`.ipynb` files) +and markdown (`.mdx` files). The notebooks are converted to markdown +and then built using [Docusaurus 2](https://docusaurus.io/). + +Feel free to make contributions to the main documentation! 🥰 + +After modifying the documentation: + +1. Run the linting and formatting commands (see below) to ensure that the documentation is well-formatted and free of errors. +2. Optionally build the documentation locally to verify that the changes look good. +3. Make a pull request with the changes. +4. You can preview and verify that the changes are what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page. This will take you to a preview of the documentation changes. + +## ⚒️ Linting and Building Documentation Locally + +After writing up the documentation, you may want to lint and build the documentation +locally to ensure that it looks good and is free of errors. + +If you're unable to build it locally that's okay as well, as you will be able to +see a preview of the documentation on the pull request page. 
+ +From the **monorepo root**, run the following command to install the dependencies: + +```bash +poetry install --with lint,docs --no-root +```` + +### Building + +The code that builds the documentation is located in the `/docs` directory of the monorepo. + +In the following commands, the prefix `api_` indicates that those are operations for the API Reference. + +Before building the documentation, it is always a good idea to clean the build directory: + +```bash +make docs_clean +make api_docs_clean +``` + +Next, you can build the documentation as outlined below: + +```bash +make docs_build +make api_docs_build +``` + +:::tip + +The `make api_docs_build` command takes a long time. If you're making cosmetic changes to the API docs and want to see how they look, use: + +```bash +make api_docs_quick_preview +``` + +which will just build a small subset of the API reference. + +::: + +Finally, run the link checker to ensure all links are valid: + +```bash +make docs_linkcheck +make api_docs_linkcheck +``` + +### Linting and Formatting + +The Main Documentation is linted from the **monorepo root**. To lint the main documentation, run the following from there: + +```bash +make lint +``` + +If you have formatting-related errors, you can fix them automatically with: + +```bash +make format +``` + +## ⌨️ In-code Documentation + +The in-code documentation is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code and is hosted by [Read the Docs](https://readthedocs.org/). + +For the API reference to be useful, the codebase must be well-documented. This means that all functions, classes, and methods should have a docstring that explains what they do, what the arguments are, and what the return value is. This is a good practice in general, but it is especially important for LangChain because the API reference is the primary resource for developers to understand how to use the codebase. + +We generally follow the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) for docstrings. + +Here is an example of a well-documented function: + +```python + +def my_function(arg1: int, arg2: str) -> float: + """This is a short description of the function. (It should be a single sentence.) + + This is a longer description of the function. It should explain what + the function does, what the arguments are, and what the return value is. + It should wrap at 88 characters. + + Examples: + This is a section for examples of how to use the function. + + .. code-block:: python + + my_function(1, "hello") + + Args: + arg1: This is a description of arg1. We do not need to specify the type since + it is already specified in the function signature. + arg2: This is a description of arg2. + + Returns: + This is a description of the return value. + """ + return 3.14 +``` + +### Linting and Formatting + +The in-code documentation is linted from the directories belonging to the packages +being documented. + +For example, if you're working on the `langchain-community` package, you would change +the working directory to the `langchain-community` directory: + +```bash +cd [root]/libs/langchain-community +``` + +Set up a virtual environment for the package if you haven't done so already. + +Install the dependencies for the package. 
+ +```bash +poetry install --with lint +``` + +Then you can run the following commands to lint and format the in-code documentation: + +```bash +make format +make lint +``` + +## Verify Documentation Changes + +After pushing documentation changes to the repository, you can preview and verify that the changes are +what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page. +This will take you to a preview of the documentation changes. +This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel). \ No newline at end of file diff --git a/langchain_md_files/contributing/documentation/style_guide.mdx b/langchain_md_files/contributing/documentation/style_guide.mdx new file mode 100644 index 0000000000000000000000000000000000000000..83a3ae3c803c7b11d19a8ed1201a3671178bdc31 --- /dev/null +++ b/langchain_md_files/contributing/documentation/style_guide.mdx @@ -0,0 +1,160 @@ +--- +sidebar_class_name: "hidden" +--- + +# Documentation Style Guide + +As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too. +This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around +organization and structure. + +## Philosophy + +LangChain's documentation follows the [Diataxis framework](https://diataxis.fr). +Under this framework, all documentation falls under one of four categories: [Tutorials](/docs/contributing/documentation/style_guide/#tutorials), +[How-to guides](/docs/contributing/documentation/style_guide/#how-to-guides), +[References](/docs/contributing/documentation/style_guide/#references), and [Explanations](/docs/contributing/documentation/style_guide/#conceptual-guide). + +### Tutorials + +Tutorials are lessons that take the reader through a practical activity. Their purpose is to help the user +gain understanding of concepts and how they interact by showing one way to achieve some goal in a hands-on way. They should **avoid** giving +multiple permutations of ways to achieve that goal in-depth. Instead, it should guide a new user through a recommended path to accomplishing the tutorial's goal. While the end result of a tutorial does not necessarily need to +be completely production-ready, it should be useful and practically satisfy the the goal that you clearly stated in the tutorial's introduction. Information on how to address additional scenarios +belongs in how-to guides. + +To quote the Diataxis website: + +> A tutorial serves the user’s *acquisition* of skills and knowledge - their study. Its purpose is not to help the user get something done, but to help them learn. + +In LangChain, these are often higher level guides that show off end-to-end use cases. + +Some examples include: + +- [Build a Simple LLM Application with LCEL](/docs/tutorials/llm_chain/) +- [Build a Retrieval Augmented Generation (RAG) App](/docs/tutorials/rag/) + +A good structural rule of thumb is to follow the structure of this [example from Numpy](https://numpy.org/numpy-tutorials/content/tutorial-svd.html). + +Here are some high-level tips on writing a good tutorial: + +- Focus on guiding the user to get something done, but keep in mind the end-goal is more to impart principles than to create a perfect production system. +- Be specific, not abstract and follow one path. + - No need to go deeply into alternative approaches, but it’s ok to reference them, ideally with a link to an appropriate how-to guide. 
+- Get "a point on the board" as soon as possible - something the user can run that outputs something.
+  - You can iterate and expand afterwards.
+  - Try to frequently checkpoint at given steps where the user can run code and see progress.
+- Focus on results, not technical explanation.
+  - Crosslink heavily to appropriate conceptual/reference pages.
+- The first time you mention a LangChain concept, use its full name (e.g. "LangChain Expression Language (LCEL)"), and link to its conceptual/other documentation page.
+  - It's also helpful to add a prerequisite callout that links to any pages with necessary background information.
+- End with a recap/next steps section summarizing what the tutorial covered and future reading, such as related how-to guides.
+
+### How-to guides
+
+A how-to guide, as the name implies, demonstrates how to do something discrete and specific.
+It should assume that the user is already familiar with underlying concepts, and is trying to solve an immediate problem, but
+should still give some background or list the scenarios where the information contained within can be relevant.
+They can and should discuss alternatives if one approach may be better than another in certain cases.
+
+To quote the Diataxis website:
+
+> A how-to guide serves the work of the already-competent user, whom you can assume to know what they want to do, and to be able to follow your instructions correctly.
+
+Some examples include:
+
+- [How to: return structured data from a model](/docs/how_to/structured_output/)
+- [How to: write a custom chat model](/docs/how_to/custom_chat_model/)
+
+Here are some high-level tips on writing a good how-to guide:
+
+- Clearly explain what you are guiding the user through at the start.
+- Assume higher intent than a tutorial and show what the user needs to do to get that task done.
+- Assume familiarity with concepts, but explain why suggested actions are helpful.
+  - Crosslink heavily to conceptual/reference pages.
+- Discuss alternatives and responses to real-world tradeoffs that may arise when solving a problem.
+- Use lots of example code.
+  - Prefer full code blocks that the reader can copy and run.
+- End with a recap/next steps section summarizing what the guide covered and future reading, such as other related how-to guides.
+
+### Conceptual guide
+
+LangChain's conceptual guide falls under the **Explanation** quadrant of Diataxis. It should cover LangChain terms and concepts
+in a more abstract way than how-to guides or tutorials, and should be geared towards curious users interested in
+gaining a deeper understanding of the framework. Try to avoid excessively large code examples - the goal here is to
+impart perspective to the user rather than to finish a practical project. These guides should cover **why** things work the way they do.
+
+This guide on documentation style is meant to fall under this category.
+
+To quote the Diataxis website:
+
+> The perspective of explanation is higher and wider than that of the other types. It does not take the user’s eye-level view, as in a how-to guide, or a close-up view of the machinery, like reference material. Its scope in each case is a topic - “an area of knowledge”, that somehow has to be bounded in a reasonable, meaningful way.
+
+Some examples include:
+
+- [Retrieval conceptual docs](/docs/concepts/#retrieval)
+- [Chat model conceptual docs](/docs/concepts/#chat-models)
+
+Here are some high-level tips on writing a good conceptual guide:
+
+- Explain design decisions.
Why does concept X exist and why was it designed this way?
+- Use analogies and reference other concepts and alternatives
+- Avoid blending in too much reference content
+- You can and should reference content covered in other guides, but make sure to link to them
+
+### References
+
+References contain detailed, low-level information that describes exactly what functionality exists and how to use it.
+In LangChain, this is mainly our API reference pages, which are populated from docstrings within code.
+Reference pages are generally not read end-to-end, but are consulted as necessary when a user needs to know
+how to use something specific.
+
+To quote the Diataxis website:
+
+> The only purpose of a reference guide is to describe, as succinctly as possible, and in an orderly way. Whereas the content of tutorials and how-to guides are led by needs of the user, reference material is led by the product it describes.
+
+Many of the reference pages in LangChain are automatically generated from code,
+but here are some high-level tips on writing a good docstring:
+
+- Be concise
+- Discuss special cases and deviations from a user's expectations
+- Go into detail on required inputs and outputs
+- Light details on when one might use the feature are fine, but in-depth details belong in other sections.
+
+Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
+
+## General guidelines
+
+Here are some other guidelines you should think about when writing and organizing documentation.
+
+We generally do not merge new tutorials from outside contributors without an acute need.
+We welcome updates as well as new integration docs, how-tos, and references.
+
+### Avoid duplication
+
+Multiple pages that cover the same material in depth are difficult to maintain and cause confusion. There should
+be only one (or, very rarely, two) canonical page for a given concept or feature. Instead, you should link to other guides.
+
+### Link to other sections
+
+Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible
+to allow a developer to learn more about an unfamiliar topic inline.
+
+This includes linking to the API references as well as conceptual sections!
+
+### Be concise
+
+In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than
+re-explain it, unless the concept you are documenting presents some new wrinkle.
+
+Be concise, including in code samples.
+
+### General style
+
+- Use active voice and present tense whenever possible
+- Use examples and code snippets to illustrate concepts and usage
+- Use appropriate header levels (`#`, `##`, `###`, etc.)
to organize the content hierarchically +- Use fewer cells with more code to make copy/paste easier +- Use bullet points and numbered lists to break down information into easily digestible chunks +- Use tables (especially for **Reference** sections) and diagrams often to present information visually +- Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages diff --git a/langchain_md_files/contributing/faq.mdx b/langchain_md_files/contributing/faq.mdx new file mode 100644 index 0000000000000000000000000000000000000000..e0e81564a4992a82aed9b6672ab0cc9357aa1eed --- /dev/null +++ b/langchain_md_files/contributing/faq.mdx @@ -0,0 +1,26 @@ +--- +sidebar_position: 6 +sidebar_label: FAQ +--- +# Frequently Asked Questions + +## Pull Requests (PRs) + +### How do I allow maintainers to edit my PR? + +When you submit a pull request, there may be additional changes +necessary before merging it. Oftentimes, it is more efficient for the +maintainers to make these changes themselves before merging, rather than asking you +to do so in code review. + +By default, most pull requests will have a +`✅ Maintainers are allowed to edit this pull request.` +badge in the right-hand sidebar. + +If you do not see this badge, you may have this setting off for the fork you are +pull-requesting from. See [this Github docs page](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) +for more information. + +Notably, Github doesn't allow this setting to be enabled for forks in **organizations** ([issue](https://github.com/orgs/community/discussions/5634)). +If you are working in an organization, we recommend submitting your PR from a personal +fork in order to enable this setting. diff --git a/langchain_md_files/contributing/index.mdx b/langchain_md_files/contributing/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..e91c70c5fd15fc3fbc1a8bd1f5f15baa1ba4a159 --- /dev/null +++ b/langchain_md_files/contributing/index.mdx @@ -0,0 +1,54 @@ +--- +sidebar_position: 0 +--- +# Welcome Contributors + +Hi there! Thank you for even being interested in contributing to LangChain. +As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes. + +## 🗺️ Guidelines + +### 👩‍💻 Ways to contribute + +There are many ways to contribute to LangChain. Here are some common ways people contribute: + +- [**Documentation**](/docs/contributing/documentation/): Help improve our docs, including this one! +- [**Code**](/docs/contributing/code/): Help us write code, fix bugs, or improve our infrastructure. +- [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools. +- [**Discussions**](https://github.com/langchain-ai/langchain/discussions): Help answer usage questions and discuss issues with users. + +### 🚩 GitHub Issues + +Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests. + +There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues. + +If you start working on an issue, please assign it to yourself. + +If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature. 
+If two issues are related, or blocking, please link them rather than combining them. + +We will try to keep these issues as up-to-date as possible, though +with the rapid rate of development in this field some may get out of date. +If you notice this happening, please let us know. + +### 💭 GitHub Discussions + +We have a [discussions](https://github.com/langchain-ai/langchain/discussions) page where users can ask usage questions, discuss design decisions, and propose new features. + +If you are able to help answer questions, please do so! This will allow the maintainers to spend more time focused on development and bug fixing. + +### 🙋 Getting Help + +Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting setup, please +contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is +smooth for future contributors. + +In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase. +If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help - +we do not want these to get in the way of getting good code into the codebase. + +### 🌟 Recognition + +If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)! +If you have a Twitter account you would like us to mention, please let us know in the PR or through another means. diff --git a/langchain_md_files/contributing/integrations.mdx b/langchain_md_files/contributing/integrations.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0c60ca739ddd4f58ba13c684915b25ec9c6403bc --- /dev/null +++ b/langchain_md_files/contributing/integrations.mdx @@ -0,0 +1,203 @@ +--- +sidebar_position: 5 +--- + +# Contribute Integrations + +To begin, make sure you have all the dependencies outlined in guide on [Contributing Code](/docs/contributing/code/). + +There are a few different places you can contribute integrations for LangChain: + +- **Community**: For lighter-weight integrations that are primarily maintained by LangChain and the Open Source Community. +- **Partner Packages**: For independent packages that are co-maintained by LangChain and a partner. + +For the most part, **new integrations should be added to the Community package**. Partner packages require more maintenance as separate packages, so please confirm with the LangChain team before creating a new partner package. + +In the following sections, we'll walk through how to contribute to each of these packages from a fake company, `Parrot Link AI`. + +## Community package + +The `langchain-community` package is in `libs/community` and contains most integrations. + +It can be installed with `pip install langchain-community`, and exported members can be imported with code like + +```python +from langchain_community.chat_models import ChatParrotLink +from langchain_community.llms import ParrotLinkLLM +from langchain_community.vectorstores import ParrotLinkVectorStore +``` + +The `community` package relies on manually-installed dependent packages, so you will see errors +if you try to import a package that is not installed. In our fake example, if you tried to import `ParrotLinkLLM` without installing `parrot-link-sdk`, you will see an `ImportError` telling you to install it when trying to use it. + +Let's say we wanted to implement a chat model for Parrot Link AI. 
We would create a new file in `libs/community/langchain_community/chat_models/parrot_link.py` with the following code: + +```python +from langchain_core.language_models.chat_models import BaseChatModel + +class ChatParrotLink(BaseChatModel): + """ChatParrotLink chat model. + + Example: + .. code-block:: python + + from langchain_community.chat_models import ChatParrotLink + + model = ChatParrotLink() + """ + + ... +``` + +And we would write tests in: + +- Unit tests: `libs/community/tests/unit_tests/chat_models/test_parrot_link.py` +- Integration tests: `libs/community/tests/integration_tests/chat_models/test_parrot_link.py` + +And add documentation to: + +- `docs/docs/integrations/chat/parrot_link.ipynb` + +## Partner package in LangChain repo + +:::caution +Before starting a **partner** package, please confirm your intent with the LangChain team. Partner packages require more maintenance as separate packages, so we will close PRs that add new partner packages without prior discussion. See the above section for how to add a community integration. +::: + +Partner packages can be hosted in the `LangChain` monorepo or in an external repo. + +Partner package in the `LangChain` repo is placed in `libs/partners/{partner}` +and the package source code is in `libs/partners/{partner}/langchain_{partner}`. + +A package is +installed by users with `pip install langchain-{partner}`, and the package members +can be imported with code like: + +```python +from langchain_{partner} import X +``` + +### Set up a new package + +To set up a new partner package, use the latest version of the LangChain CLI. You can install or update it with: + +```bash +pip install -U langchain-cli +``` + +Let's say you want to create a new partner package working for a company called Parrot Link AI. + +Then, run the following command to create a new partner package: + +```bash +cd libs/partners +langchain-cli integration new +> Name: parrot-link +> Name of integration in PascalCase [ParrotLink]: ParrotLink +``` + +This will create a new package in `libs/partners/parrot-link` with the following structure: + +``` +libs/partners/parrot-link/ + langchain_parrot_link/ # folder containing your package + ... + tests/ + ... + docs/ # bootstrapped docs notebooks, must be moved to /docs in monorepo root + ... + scripts/ # scripts for CI + ... + LICENSE + README.md # fill out with information about your package + Makefile # default commands for CI + pyproject.toml # package metadata, mostly managed by Poetry + poetry.lock # package lockfile, managed by Poetry + .gitignore +``` + +### Implement your package + +First, add any dependencies your package needs, such as your company's SDK: + +```bash +poetry add parrot-link-sdk +``` + +If you need separate dependencies for type checking, you can add them to the `typing` group with: + +```bash +poetry add --group typing types-parrot-link-sdk +``` + +Then, implement your package in `libs/partners/parrot-link/langchain_parrot_link`. + +By default, this will include stubs for a Chat Model, an LLM, and/or a Vector Store. You should delete any of the files you won't use and remove them from `__init__.py`. + +### Write Unit and Integration Tests + +Some basic tests are presented in the `tests/` directory. You should add more tests to cover your package's functionality. + +For information on running and implementing tests, see the [Testing guide](/docs/contributing/testing/). + +### Write documentation + +Documentation is generated from Jupyter notebooks in the `docs/` directory. 
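+For reference, an integration notebook typically demonstrates installation, instantiation, and a simple invocation of the component. A rough sketch for the fake Parrot Link chat model might look like the cell below — the `model` and `temperature` parameters are placeholders, not part of any real API:
+
+```python
+# Illustrative notebook cell for the fake Parrot Link integration.
+# Parameter names are placeholders; use whatever your package actually exposes.
+from langchain_parrot_link import ChatParrotLink
+
+llm = ChatParrotLink(model="parrot-chat-1", temperature=0)
+
+llm.invoke("Hello! What can you tell me about parrots?")
+```
+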
You should place the notebooks with examples +to the relevant `docs/docs/integrations` directory in the monorepo root. + +### (If Necessary) Deprecate community integration + +Note: this is only necessary if you're migrating an existing community integration into +a partner package. If the component you're integrating is net-new to LangChain (i.e. +not already in the `community` package), you can skip this step. + +Let's pretend we migrated our `ChatParrotLink` chat model from the community package to +the partner package. We would need to deprecate the old model in the community package. + +We would do that by adding a `@deprecated` decorator to the old model as follows, in +`libs/community/langchain_community/chat_models/parrot_link.py`. + +Before our change, our chat model might look like this: + +```python +class ChatParrotLink(BaseChatModel): + ... +``` + +After our change, it would look like this: + +```python +from langchain_core._api.deprecation import deprecated + +@deprecated( + since="0.0.", + removal="0.2.0", + alternative_import="langchain_parrot_link.ChatParrotLink" +) +class ChatParrotLink(BaseChatModel): + ... +``` + +You should do this for *each* component that you're migrating to the partner package. + +### Additional steps + +Contributor steps: + +- [ ] Add secret names to manual integrations workflow in `.github/workflows/_integration_test.yml` +- [ ] Add secrets to release workflow (for pre-release testing) in `.github/workflows/_release.yml` + +Maintainer steps (Contributors should **not** do these): + +- [ ] set up pypi and test pypi projects +- [ ] add credential secrets to Github Actions +- [ ] add package to conda-forge + +## Partner package in external repo + +Partner packages in external repos must be coordinated between the LangChain team and +the partner organization to ensure that they are maintained and updated. + +If you're interested in creating a partner package in an external repo, please start +with one in the LangChain repo, and then reach out to the LangChain team to discuss +how to move it to an external repo. diff --git a/langchain_md_files/contributing/repo_structure.mdx b/langchain_md_files/contributing/repo_structure.mdx new file mode 100644 index 0000000000000000000000000000000000000000..63e180696a3f3e4826fb9d77277d7016949ac28a --- /dev/null +++ b/langchain_md_files/contributing/repo_structure.mdx @@ -0,0 +1,65 @@ +--- +sidebar_position: 0.5 +--- +# Repository Structure + +If you plan on contributing to LangChain code or documentation, it can be useful +to understand the high level structure of the repository. + +LangChain is organized as a [monorepo](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages. +You can check out our [installation guide](/docs/how_to/installation/) for more on how they fit together. + +Here's the structure visualized as a tree: + +```text +. 
+├── cookbook # Tutorials and examples +├── docs # Contains content for the documentation here: https://python.langchain.com/ +├── libs +│ ├── langchain +│ │ ├── langchain +│ │ ├── tests/unit_tests # Unit tests (present in each package not shown for brevity) +│ │ ├── tests/integration_tests # Integration tests (present in each package not shown for brevity) +│ ├── community # Third-party integrations +│ │ ├── langchain-community +│ ├── core # Base interfaces for key abstractions +│ │ ├── langchain-core +│ ├── experimental # Experimental components and chains +│ │ ├── langchain-experimental +| ├── cli # Command line interface +│ │ ├── langchain-cli +│ ├── text-splitters +│ │ ├── langchain-text-splitters +│ ├── standard-tests +│ │ ├── langchain-standard-tests +│ ├── partners +│ ├── langchain-partner-1 +│ ├── langchain-partner-2 +│ ├── ... +│ +├── templates # A collection of easily deployable reference architectures for a wide variety of tasks. +``` + +The root directory also contains the following files: + +* `pyproject.toml`: Dependencies for building docs and linting docs, cookbook. +* `Makefile`: A file that contains shortcuts for building, linting and docs and cookbook. + +There are other files in the root directory level, but their presence should be self-explanatory. Feel free to browse around! + +## Documentation + +The `/docs` directory contains the content for the documentation that is shown +at https://python.langchain.com/ and the associated API Reference https://python.langchain.com/v0.2/api_reference/langchain/index.html. + +See the [documentation](/docs/contributing/documentation/) guidelines to learn how to contribute to the documentation. + +## Code + +The `/libs` directory contains the code for the LangChain packages. + +To learn more about how to contribute code see the following guidelines: + +- [Code](/docs/contributing/code/): Learn how to develop in the LangChain codebase. +- [Integrations](./integrations.mdx): Learn how to contribute to third-party integrations to `langchain-community` or to start a new partner package. +- [Testing](./testing.mdx): Guidelines to learn how to write tests for the packages. diff --git a/langchain_md_files/contributing/testing.mdx b/langchain_md_files/contributing/testing.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0afccc2a36087d0e3edaae0a241a30c7f30e3cc7 --- /dev/null +++ b/langchain_md_files/contributing/testing.mdx @@ -0,0 +1,147 @@ +--- +sidebar_position: 6 +--- + +# Testing + +All of our packages have unit tests and integration tests, and we favor unit tests over integration tests. + +Unit tests run on every pull request, so they should be fast and reliable. + +Integration tests run once a day, and they require more setup, so they should be reserved for confirming interface points with external services. + +## Unit Tests + +Unit tests cover modular logic that does not require calls to outside APIs. +If you add new logic, please add a unit test. + +To install dependencies for unit tests: + +```bash +poetry install --with test +``` + +To run unit tests: + +```bash +make test +``` + +To run unit tests in Docker: + +```bash +make docker_tests +``` + +To run a specific test: + +```bash +TEST_FILE=tests/unit_tests/test_imports.py make test +``` + +## Integration Tests + +Integration tests cover logic that requires making calls to outside APIs (often integration with other services). +If you add support for a new external API, please add a new integration test. 
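+
+For example, a minimal integration test is usually just a pytest function that exercises the live service end-to-end. The sketch below reuses the fake `ChatParrotLink` integration (and its test path) from the integrations contributing guide; the assertions are illustrative rather than a required template:
+
+```python
+# tests/integration_tests/chat_models/test_parrot_link.py — the path used in the
+# integrations contributing guide; the test body itself is an illustrative sketch.
+from langchain_community.chat_models import ChatParrotLink
+
+
+def test_chat_parrot_link_invoke() -> None:
+    """Smoke-test the live Parrot Link API (assumes credentials are set in the environment)."""
+    llm = ChatParrotLink()
+    result = llm.invoke("Hello, Parrot Link!")
+
+    # A successful round-trip should return a message with non-empty text content.
+    assert isinstance(result.content, str)
+    assert result.content
+```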
+
+**Warning:** Almost no tests should be integration tests.
+
+  Tests that require making network connections make it difficult for other
+  developers to test the code.
+
+  Instead, favor relying on the `responses` library and/or `mock.patch` to mock
+  requests using small fixtures.
+
+To install dependencies for integration tests:
+
+```bash
+poetry install --with test,test_integration
+```
+
+To run integration tests:
+
+```bash
+make integration_tests
+```
+
+### Prepare
+
+The integration tests use several search engines and databases. The tests
+aim to verify the correct behavior of the engines and databases according to
+their specifications and requirements.
+
+To run some integration tests, such as tests located in
+`tests/integration_tests/vectorstores/`, you will need to install the following
+software:
+
+- Docker
+- Python 3.8.1 or later
+
+Any new dependencies should be added by running:
+
+```bash
+# add package and install it after adding:
+poetry add tiktoken@latest --group "test_integration" && poetry install --with test_integration
+```
+
+Before running any tests, you should start a specific Docker container that has all the
+necessary dependencies installed. For instance, we use the `elasticsearch.yml` container
+for `test_elasticsearch.py`:
+
+```bash
+cd tests/integration_tests/vectorstores/docker-compose
+docker-compose -f elasticsearch.yml up
+```
+
+For environments that require more involved preparation, look for `*.sh` scripts. For instance,
+`opensearch.sh` builds the required Docker image and then launches OpenSearch.
+
+
+### Prepare environment variables for local testing:
+
+- copy `tests/integration_tests/.env.example` to `tests/integration_tests/.env`
+- set variables in the `tests/integration_tests/.env` file, e.g. `OPENAI_API_KEY`
+
+Additionally, it's important to note that some integration tests may require certain
+environment variables to be set, such as `OPENAI_API_KEY`. Be sure to set any required
+environment variables before running the tests to ensure they run correctly.
+
+### Recording HTTP interactions with pytest-vcr
+
+Some of the integration tests in this repository involve making HTTP requests to
+external services. To prevent these requests from being made every time the tests are
+run, we use pytest-vcr to record and replay HTTP interactions.
+
+When running tests in a CI/CD pipeline, you may not want to modify the existing
+cassettes. You can use the `--vcr-record=none` command-line option to disable recording
+new cassettes. Here's an example:
+
+```bash
+pytest --log-cli-level=10 tests/integration_tests/vectorstores/test_pinecone.py --vcr-record=none
+pytest tests/integration_tests/vectorstores/test_elasticsearch.py --vcr-record=none
+
+```
+
+### Run some tests with coverage:
+
+```bash
+pytest tests/integration_tests/vectorstores/test_elasticsearch.py --cov=langchain --cov-report=html
+start "" htmlcov/index.html || open htmlcov/index.html
+
+```
+
+## Coverage
+
+Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
+ +Coverage requires the dependencies for integration tests: + +```bash +poetry install --with test_integration +``` + +To get a report of current coverage, run the following: + +```bash +make coverage +``` diff --git a/langchain_md_files/how_to/document_loader_json.mdx b/langchain_md_files/how_to/document_loader_json.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b88e37aa801e0db609f9f7879542c69a6935914a --- /dev/null +++ b/langchain_md_files/how_to/document_loader_json.mdx @@ -0,0 +1,402 @@ +# How to load JSON + +[JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). + +[JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value. + +LangChain implements a [JSONLoader](https://python.langchain.com/v0.2/api_reference/community/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html) +to convert JSON and JSONL data into LangChain [Document](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) +objects. It uses a specified [jq schema](https://en.wikipedia.org/wiki/Jq_(programming_language)) to parse the JSON files, allowing for the extraction of specific fields into the content +and metadata of the LangChain Document. + +It uses the `jq` python package. Check out this [manual](https://stedolan.github.io/jq/manual/#Basicfilters) for a detailed documentation of the `jq` syntax. + +Here we will demonstrate: + +- How to load JSON and JSONL data into the content of a LangChain `Document`; +- How to load JSON and JSONL data into metadata associated with a `Document`. + + +```python +#!pip install jq +``` + + +```python +from langchain_community.document_loaders import JSONLoader +``` + + +```python +import json +from pathlib import Path +from pprint import pprint + + +file_path='./example_data/facebook_chat.json' +data = json.loads(Path(file_path).read_text()) +``` + + +```python +pprint(data) +``` + + + +``` + {'image': {'creation_timestamp': 1675549016, 'uri': 'image_of_the_chat.jpg'}, + 'is_still_participant': True, + 'joinable_mode': {'link': '', 'mode': 1}, + 'magic_words': [], + 'messages': [{'content': 'Bye!', + 'sender_name': 'User 2', + 'timestamp_ms': 1675597571851}, + {'content': 'Oh no worries! Bye', + 'sender_name': 'User 1', + 'timestamp_ms': 1675597435669}, + {'content': 'No Im sorry it was my mistake, the blue one is not ' + 'for sale', + 'sender_name': 'User 2', + 'timestamp_ms': 1675596277579}, + {'content': 'I thought you were selling the blue one!', + 'sender_name': 'User 1', + 'timestamp_ms': 1675595140251}, + {'content': 'Im not interested in this bag. Im interested in the ' + 'blue one!', + 'sender_name': 'User 1', + 'timestamp_ms': 1675595109305}, + {'content': 'Here is $129', + 'sender_name': 'User 2', + 'timestamp_ms': 1675595068468}, + {'photos': [{'creation_timestamp': 1675595059, + 'uri': 'url_of_some_picture.jpg'}], + 'sender_name': 'User 2', + 'timestamp_ms': 1675595060730}, + {'content': 'Online is at least $100', + 'sender_name': 'User 2', + 'timestamp_ms': 1675595045152}, + {'content': 'How much do you want?', + 'sender_name': 'User 1', + 'timestamp_ms': 1675594799696}, + {'content': 'Goodmorning! 
$50 is too low.', + 'sender_name': 'User 2', + 'timestamp_ms': 1675577876645}, + {'content': 'Hi! Im interested in your bag. Im offering $50. Let ' + 'me know if you are interested. Thanks!', + 'sender_name': 'User 1', + 'timestamp_ms': 1675549022673}], + 'participants': [{'name': 'User 1'}, {'name': 'User 2'}], + 'thread_path': 'inbox/User 1 and User 2 chat', + 'title': 'User 1 and User 2 chat'} +``` + + + + +## Using `JSONLoader` + +Suppose we are interested in extracting the values under the `content` field within the `messages` key of the JSON data. This can easily be done through the `JSONLoader` as shown below. + + +### JSON file + +```python +loader = JSONLoader( + file_path='./example_data/facebook_chat.json', + jq_schema='.messages[].content', + text_content=False) + +data = loader.load() +``` + + +```python +pprint(data) +``` + + + +``` + [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1}), + Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2}), + Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3}), + Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4}), + Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5}), + Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6}), + Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7}), + Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}), + Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}), + Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}), + Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})] +``` + + + + +### JSON Lines file + +If you want to load documents from a JSON Lines file, you pass `json_lines=True` +and specify `jq_schema` to extract `page_content` from a single JSON object. 
+ +```python +file_path = './example_data/facebook_chat_messages.jsonl' +pprint(Path(file_path).read_text()) +``` + + + +``` + ('{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"}\n' + '{"sender_name": "User 1", "timestamp_ms": 1675597435669, "content": "Oh no ' + 'worries! Bye"}\n' + '{"sender_name": "User 2", "timestamp_ms": 1675596277579, "content": "No Im ' + 'sorry it was my mistake, the blue one is not for sale"}\n') +``` + + + + +```python +loader = JSONLoader( + file_path='./example_data/facebook_chat_messages.jsonl', + jq_schema='.content', + text_content=False, + json_lines=True) + +data = loader.load() +``` + +```python +pprint(data) +``` + + + +``` + [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), + Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), + Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})] +``` + + + + +Another option is to set `jq_schema='.'` and provide `content_key`: + +```python +loader = JSONLoader( + file_path='./example_data/facebook_chat_messages.jsonl', + jq_schema='.', + content_key='sender_name', + json_lines=True) + +data = loader.load() +``` + +```python +pprint(data) +``` + + + +``` + [Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), + Document(page_content='User 1', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), + Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})] +``` + + + +### JSON file with jq schema `content_key` + +To load documents from a JSON file using the content_key within the jq schema, set is_content_key_jq_parsable=True. +Ensure that content_key is compatible and can be parsed using the jq schema. + +```python +file_path = './sample.json' +pprint(Path(file_path).read_text()) +``` + + + +```json + {"data": [ + {"attributes": { + "message": "message1", + "tags": [ + "tag1"]}, + "id": "1"}, + {"attributes": { + "message": "message2", + "tags": [ + "tag2"]}, + "id": "2"}]} +``` + + + + +```python +loader = JSONLoader( + file_path=file_path, + jq_schema=".data[]", + content_key=".attributes.message", + is_content_key_jq_parsable=True, +) + +data = loader.load() +``` + +```python +pprint(data) +``` + + + +``` + [Document(page_content='message1', metadata={'source': '/path/to/sample.json', 'seq_num': 1}), + Document(page_content='message2', metadata={'source': '/path/to/sample.json', 'seq_num': 2})] +``` + + + +## Extracting metadata + +Generally, we want to include metadata available in the JSON file into the documents that we create from the content. + +The following demonstrates how metadata can be extracted using the `JSONLoader`. + +There are some key changes to be noted. In the previous example where we didn't collect the metadata, we managed to directly specify in the schema where the value for the `page_content` can be extracted from. 
+ +``` +.messages[].content +``` + +In the current example, we have to tell the loader to iterate over the records in the `messages` field. The jq_schema then has to be: + +``` +.messages[] +``` + +This allows us to pass the records (dict) into the `metadata_func` that has to be implemented. The `metadata_func` is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final `Document` object. + +Additionally, we now have to explicitly specify in the loader, via the `content_key` argument, the key from the record where the value for the `page_content` needs to be extracted from. + + +```python +# Define the metadata extraction function. +def metadata_func(record: dict, metadata: dict) -> dict: + + metadata["sender_name"] = record.get("sender_name") + metadata["timestamp_ms"] = record.get("timestamp_ms") + + return metadata + + +loader = JSONLoader( + file_path='./example_data/facebook_chat.json', + jq_schema='.messages[]', + content_key="content", + metadata_func=metadata_func +) + +data = loader.load() +``` + + +```python +pprint(data) +``` + + + +``` + [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), + Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), + Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), + Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), + Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), + Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), + Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), + Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), + Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), + Document(page_content='Goodmorning! 
$50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), + Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})] +``` + + + +Now, you will see that the documents contain the metadata associated with the content we extracted. + +## The `metadata_func` + +As shown above, the `metadata_func` accepts the default metadata generated by the `JSONLoader`. This allows full control to the user with respect to how the metadata is formatted. + +For example, the default metadata contains the `source` and the `seq_num` keys. However, it is possible that the JSON data contain these keys as well. The user can then exploit the `metadata_func` to rename the default keys and use the ones from the JSON data. + +The example below shows how we can modify the `source` to only contain information of the file source relative to the `langchain` directory. + + +```python +# Define the metadata extraction function. +def metadata_func(record: dict, metadata: dict) -> dict: + + metadata["sender_name"] = record.get("sender_name") + metadata["timestamp_ms"] = record.get("timestamp_ms") + + if "source" in metadata: + source = metadata["source"].split("/") + source = source[source.index("langchain"):] + metadata["source"] = "/".join(source) + + return metadata + + +loader = JSONLoader( + file_path='./example_data/facebook_chat.json', + jq_schema='.messages[]', + content_key="content", + metadata_func=metadata_func +) + +data = loader.load() +``` + + +```python +pprint(data) +``` + + + +``` + [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), + Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), + Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), + Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), + Document(page_content='Im not interested in this bag. 
Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), + Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), + Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), + Document(page_content='Online is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), + Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), + Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), + Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})] +``` + + + +## Common JSON structures with jq schema + +The list below provides a reference to the possible `jq_schema` the user can use to extract content from the JSON data depending on the structure. + +``` +JSON -> [{"text": ...}, {"text": ...}, {"text": ...}] +jq_schema -> ".[].text" + +JSON -> {"key": [{"text": ...}, {"text": ...}, {"text": ...}]} +jq_schema -> ".key[].text" + +JSON -> ["...", "...", "..."] +jq_schema -> ".[]" +``` diff --git a/langchain_md_files/how_to/document_loader_office_file.mdx b/langchain_md_files/how_to/document_loader_office_file.mdx new file mode 100644 index 0000000000000000000000000000000000000000..30e6fa94d89e42fccead2ae846639796e37b6f64 --- /dev/null +++ b/langchain_md_files/how_to/document_loader_office_file.mdx @@ -0,0 +1,35 @@ +# How to load Microsoft Office files + +The [Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS. + +This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a LangChain +[Document](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) +object that we can use downstream. + + +## Loading DOCX, XLSX, PPTX with AzureAIDocumentIntelligenceLoader + +[Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is machine-learning +based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value-pairs from +digital or scanned PDFs, images, Office and HTML files. 
Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`. + +This [current implementation](https://aka.ms/di-langchain) of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return pure texts in a single page or document split by page. + +### Prerequisite + +An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have. You will be passing `` and `` as parameters to the loader. + +```python +%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence + +from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader + +file_path = "" +endpoint = "" +key = "" +loader = AzureAIDocumentIntelligenceLoader( + api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout" +) + +documents = loader.load() +``` diff --git a/langchain_md_files/how_to/embed_text.mdx b/langchain_md_files/how_to/embed_text.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0cf48c24ec786da14dbd992d8c184a3f13b3394a --- /dev/null +++ b/langchain_md_files/how_to/embed_text.mdx @@ -0,0 +1,154 @@ +# Text embedding models + +:::info +Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers. +::: + +The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them. + +Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space. + +The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former, `.embed_documents`, takes as input multiple texts, while the latter, `.embed_query`, takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself). +`.embed_query` will return a list of floats, whereas `.embed_documents` returns a list of lists of floats. + +## Get started + +### Setup + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + + + +To start we'll need to install the OpenAI partner package: + +```bash +pip install langchain-openai +``` + +Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running: + +```bash +export OPENAI_API_KEY="..." 
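+# Note: export only applies to the current shell session; add this line to your
+# shell profile (e.g. ~/.zshrc or ~/.bashrc) if you want the key to persist.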
+``` + +If you'd prefer not to set an environment variable you can pass the key in directly via the `api_key` named parameter when initiating the OpenAI LLM class: + +```python +from langchain_openai import OpenAIEmbeddings + +embeddings_model = OpenAIEmbeddings(api_key="...") +``` + +Otherwise you can initialize without any params: +```python +from langchain_openai import OpenAIEmbeddings + +embeddings_model = OpenAIEmbeddings() +``` + + + + +To start we'll need to install the Cohere SDK package: + +```bash +pip install langchain-cohere +``` + +Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running: + +```shell +export COHERE_API_KEY="..." +``` + +If you'd prefer not to set an environment variable you can pass the key in directly via the `cohere_api_key` named parameter when initiating the Cohere LLM class: + +```python +from langchain_cohere import CohereEmbeddings + +embeddings_model = CohereEmbeddings(cohere_api_key="...", model='embed-english-v3.0') +``` + +Otherwise you can initialize simply as shown below: +```python +from langchain_cohere import CohereEmbeddings + +embeddings_model = CohereEmbeddings(model='embed-english-v3.0') +``` +Do note that it is mandatory to pass the model parameter while initializing the CohereEmbeddings class. + + + + +To start we'll need to install the Hugging Face partner package: + +```bash +pip install langchain-huggingface +``` + +You can then load any [Sentence Transformers model](https://huggingface.co./models?library=sentence-transformers) from the Hugging Face Hub. + +```python +from langchain_huggingface import HuggingFaceEmbeddings + +embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") +``` + +You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co./sentence-transformers/all-mpnet-base-v2) model. + +```python +from langchain_huggingface import HuggingFaceEmbeddings + +embeddings_model = HuggingFaceEmbeddings() +``` + + + + +### `embed_documents` +#### Embed list of texts + +Use `.embed_documents` to embed a list of strings, recovering a list of embeddings: + +```python +embeddings = embeddings_model.embed_documents( + [ + "Hi there!", + "Oh, hello!", + "What's your name?", + "My friends call me World", + "Hello World!" + ] +) +len(embeddings), len(embeddings[0]) +``` + + + +``` +(5, 1536) +``` + + + +### `embed_query` +#### Embed single query +Use `.embed_query` to embed a single piece of text (e.g., for the purpose of comparing to other embedded pieces of texts). + +```python +embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?") +embedded_query[:5] +``` + + + +``` +[0.0053587136790156364, + -0.0004999046213924885, + 0.038883671164512634, + -0.003001077566295862, + -0.00900818221271038] +``` + + diff --git a/langchain_md_files/how_to/index.mdx b/langchain_md_files/how_to/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f5500f9093c45123e130ed44a21c78d9e43ac34a --- /dev/null +++ b/langchain_md_files/how_to/index.mdx @@ -0,0 +1,361 @@ +--- +sidebar_position: 0 +sidebar_class_name: hidden +--- + +# How-to guides + +Here you’ll find answers to “How do I….?” types of questions. +These guides are *goal-oriented* and *concrete*; they're meant to help you complete a specific task. 
+For conceptual explanations see the [Conceptual guide](/docs/concepts/). +For end-to-end walkthroughs see [Tutorials](/docs/tutorials). +For comprehensive descriptions of every class and function see the [API Reference](https://python.langchain.com/v0.2/api_reference/). + +## Installation + +- [How to: install LangChain packages](/docs/how_to/installation/) +- [How to: use LangChain with different Pydantic versions](/docs/how_to/pydantic_compatibility) + +## Key features + +This highlights functionality that is core to using LangChain. + +- [How to: return structured data from a model](/docs/how_to/structured_output/) +- [How to: use a model to call tools](/docs/how_to/tool_calling) +- [How to: stream runnables](/docs/how_to/streaming) +- [How to: debug your LLM apps](/docs/how_to/debugging/) + +## LangChain Expression Language (LCEL) + +[LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html) protocol. + +[**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives. + +[**Migration guide**](/docs/versions/migrating_chains): For migrating legacy chain abstractions to LCEL. + +- [How to: chain runnables](/docs/how_to/sequence) +- [How to: stream runnables](/docs/how_to/streaming) +- [How to: invoke runnables in parallel](/docs/how_to/parallel/) +- [How to: add default invocation args to runnables](/docs/how_to/binding/) +- [How to: turn any function into a runnable](/docs/how_to/functions) +- [How to: pass through inputs from one chain step to the next](/docs/how_to/passthrough) +- [How to: configure runnable behavior at runtime](/docs/how_to/configure) +- [How to: add message history (memory) to a chain](/docs/how_to/message_history) +- [How to: route between sub-chains](/docs/how_to/routing) +- [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/) +- [How to: inspect runnables](/docs/how_to/inspect) +- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks) +- [How to: pass runtime secrets to a runnable](/docs/how_to/runnable_runtime_secrets) + +## Components + +These are the core building blocks you can use when building applications. + +### Prompt templates + +[Prompt Templates](/docs/concepts/#prompt-templates) are responsible for formatting user input into a format that can be passed to a language model. + +- [How to: use few shot examples](/docs/how_to/few_shot_examples) +- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/) +- [How to: partially format prompt templates](/docs/how_to/prompts_partial) +- [How to: compose prompts together](/docs/how_to/prompts_composition) + +### Example selectors + +[Example Selectors](/docs/concepts/#example-selectors) are responsible for selecting the correct few shot examples to pass to the prompt. 
+ +- [How to: use example selectors](/docs/how_to/example_selectors) +- [How to: select examples by length](/docs/how_to/example_selectors_length_based) +- [How to: select examples by semantic similarity](/docs/how_to/example_selectors_similarity) +- [How to: select examples by semantic ngram overlap](/docs/how_to/example_selectors_ngram) +- [How to: select examples by maximal marginal relevance](/docs/how_to/example_selectors_mmr) +- [How to: select examples from LangSmith few-shot datasets](/docs/how_to/example_selectors_langsmith/) + +### Chat models + +[Chat Models](/docs/concepts/#chat-models) are newer forms of language models that take messages in and output a message. + +- [How to: do function/tool calling](/docs/how_to/tool_calling) +- [How to: get models to return structured output](/docs/how_to/structured_output) +- [How to: cache model responses](/docs/how_to/chat_model_caching) +- [How to: get log probabilities](/docs/how_to/logprobs) +- [How to: create a custom chat model class](/docs/how_to/custom_chat_model) +- [How to: stream a response back](/docs/how_to/chat_streaming) +- [How to: track token usage](/docs/how_to/chat_token_usage_tracking) +- [How to: track response metadata across providers](/docs/how_to/response_metadata) +- [How to: use chat model to call tools](/docs/how_to/tool_calling) +- [How to: stream tool calls](/docs/how_to/tool_streaming) +- [How to: handle rate limits](/docs/how_to/chat_model_rate_limiting) +- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot) +- [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific) +- [How to: force a specific tool call](/docs/how_to/tool_choice) +- [How to: work with local models](/docs/how_to/local_llms) +- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/) + +### Messages + +[Messages](/docs/concepts/#messages) are the input and output of chat models. They have some `content` and a `role`, which describes the source of the message. + +- [How to: trim messages](/docs/how_to/trim_messages/) +- [How to: filter messages](/docs/how_to/filter_messages/) +- [How to: merge consecutive messages of the same type](/docs/how_to/merge_message_runs/) + +### LLMs + +What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language models that take a string in and output a string. + +- [How to: cache model responses](/docs/how_to/llm_caching) +- [How to: create a custom LLM class](/docs/how_to/custom_llm) +- [How to: stream a response back](/docs/how_to/streaming_llm) +- [How to: track token usage](/docs/how_to/llm_token_usage_tracking) +- [How to: work with local models](/docs/how_to/local_llms) + +### Output parsers + +[Output Parsers](/docs/concepts/#output-parsers) are responsible for taking the output of an LLM and parsing into more structured format. + +- [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured) +- [How to: parse JSON output](/docs/how_to/output_parser_json) +- [How to: parse XML output](/docs/how_to/output_parser_xml) +- [How to: parse YAML output](/docs/how_to/output_parser_yaml) +- [How to: retry when output parsing errors occur](/docs/how_to/output_parser_retry) +- [How to: try to fix errors in output parsing](/docs/how_to/output_parser_fixing) +- [How to: write a custom output parser class](/docs/how_to/output_parser_custom) + +### Document loaders + +[Document Loaders](/docs/concepts/#document-loaders) are responsible for loading documents from a variety of sources. 
+ +- [How to: load CSV data](/docs/how_to/document_loader_csv) +- [How to: load data from a directory](/docs/how_to/document_loader_directory) +- [How to: load HTML data](/docs/how_to/document_loader_html) +- [How to: load JSON data](/docs/how_to/document_loader_json) +- [How to: load Markdown data](/docs/how_to/document_loader_markdown) +- [How to: load Microsoft Office data](/docs/how_to/document_loader_office_file) +- [How to: load PDF files](/docs/how_to/document_loader_pdf) +- [How to: write a custom document loader](/docs/how_to/document_loader_custom) + +### Text splitters + +[Text Splitters](/docs/concepts/#text-splitters) take a document and split into chunks that can be used for retrieval. + +- [How to: recursively split text](/docs/how_to/recursive_text_splitter) +- [How to: split by HTML headers](/docs/how_to/HTML_header_metadata_splitter) +- [How to: split by HTML sections](/docs/how_to/HTML_section_aware_splitter) +- [How to: split by character](/docs/how_to/character_text_splitter) +- [How to: split code](/docs/how_to/code_splitter) +- [How to: split Markdown by headers](/docs/how_to/markdown_header_metadata_splitter) +- [How to: recursively split JSON](/docs/how_to/recursive_json_splitter) +- [How to: split text into semantic chunks](/docs/how_to/semantic-chunker) +- [How to: split by tokens](/docs/how_to/split_by_token) + +### Embedding models + +[Embedding Models](/docs/concepts/#embedding-models) take a piece of text and create a numerical representation of it. + +- [How to: embed text data](/docs/how_to/embed_text) +- [How to: cache embedding results](/docs/how_to/caching_embeddings) + +### Vector stores + +[Vector stores](/docs/concepts/#vector-stores) are databases that can efficiently store and retrieve embeddings. + +- [How to: use a vector store to retrieve data](/docs/how_to/vectorstores) + +### Retrievers + +[Retrievers](/docs/concepts/#retrievers) are responsible for taking a query and returning relevant documents. + +- [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever) +- [How to: generate multiple queries to retrieve data for](/docs/how_to/MultiQueryRetriever) +- [How to: use contextual compression to compress the data retrieved](/docs/how_to/contextual_compression) +- [How to: write a custom retriever class](/docs/how_to/custom_retriever) +- [How to: add similarity scores to retriever results](/docs/how_to/add_scores_retriever) +- [How to: combine the results from multiple retrievers](/docs/how_to/ensemble_retriever) +- [How to: reorder retrieved results to mitigate the "lost in the middle" effect](/docs/how_to/long_context_reorder) +- [How to: generate multiple embeddings per document](/docs/how_to/multi_vector) +- [How to: retrieve the whole document for a chunk](/docs/how_to/parent_document_retriever) +- [How to: generate metadata filters](/docs/how_to/self_query) +- [How to: create a time-weighted retriever](/docs/how_to/time_weighted_vectorstore) +- [How to: use hybrid vector and keyword retrieval](/docs/how_to/hybrid) + +### Indexing + +Indexing is the process of keeping your vectorstore in-sync with the underlying data source. + +- [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/docs/how_to/indexing) + +### Tools + +LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-buit tools. 
+ +- [How to: create tools](/docs/how_to/custom_tools) +- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin) +- [How to: use chat models to call tools](/docs/how_to/tool_calling) +- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model) +- [How to: pass run time values to tools](/docs/how_to/tool_runtime) +- [How to: add a human-in-the-loop for tools](/docs/how_to/tools_human) +- [How to: handle tool errors](/docs/how_to/tools_error) +- [How to: force models to call a tool](/docs/how_to/tool_choice) +- [How to: disable parallel tool calling](/docs/how_to/tool_calling_parallel) +- [How to: access the `RunnableConfig` from a tool](/docs/how_to/tool_configure) +- [How to: stream events from a tool](/docs/how_to/tool_stream_events) +- [How to: return artifacts from a tool](/docs/how_to/tool_artifacts/) +- [How to: convert Runnables to tools](/docs/how_to/convert_runnable_to_tool) +- [How to: add ad-hoc tool calling capability to models](/docs/how_to/tools_prompting) +- [How to: pass in runtime secrets](/docs/how_to/runnable_runtime_secrets) + +### Multimodal + +- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/) +- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/) + + +### Agents + +:::note + +For in depth how-to guides for agents, please check out [LangGraph](https://langchain-ai.github.io/langgraph/) documentation. + +::: + +- [How to: use legacy LangChain Agents (AgentExecutor)](/docs/how_to/agent_executor) +- [How to: migrate from legacy LangChain agents to LangGraph](/docs/how_to/migrate_agent) + +### Callbacks + +[Callbacks](/docs/concepts/#callbacks) allow you to hook into the various stages of your LLM application's execution. + +- [How to: pass in callbacks at runtime](/docs/how_to/callbacks_runtime) +- [How to: attach callbacks to a module](/docs/how_to/callbacks_attach) +- [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor) +- [How to: create custom callback handlers](/docs/how_to/custom_callbacks) +- [How to: use callbacks in async environments](/docs/how_to/callbacks_async) +- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events) + +### Custom + +All of LangChain components can easily be extended to support your own versions. + +- [How to: create a custom chat model class](/docs/how_to/custom_chat_model) +- [How to: create a custom LLM class](/docs/how_to/custom_llm) +- [How to: write a custom retriever class](/docs/how_to/custom_retriever) +- [How to: write a custom document loader](/docs/how_to/document_loader_custom) +- [How to: write a custom output parser class](/docs/how_to/output_parser_custom) +- [How to: create custom callback handlers](/docs/how_to/custom_callbacks) +- [How to: define a custom tool](/docs/how_to/custom_tools) +- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events) + +### Serialization +- [How to: save and load LangChain objects](/docs/how_to/serialization) + +## Use cases + +These guides cover use-case specific details. + +### Q&A with RAG + +Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data. +For a high-level tutorial on RAG, check out [this guide](/docs/tutorials/rag/). 
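+
+As a compact, illustrative sketch of the moving parts (it assumes `OPENAI_API_KEY` is set; the documents, model name, and prompt are placeholders):
+
+```python
+from langchain_chroma import Chroma
+from langchain_core.documents import Document
+from langchain_core.prompts import ChatPromptTemplate
+from langchain_openai import ChatOpenAI, OpenAIEmbeddings
+
+# Toy corpus standing in for your real documents
+docs = [
+    Document(page_content="LangChain provides building blocks for LLM applications."),
+    Document(page_content="RAG augments an LLM with documents retrieved at query time."),
+]
+
+# Index the documents, then retrieve the ones relevant to the question
+vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())
+retriever = vectorstore.as_retriever()
+
+prompt = ChatPromptTemplate.from_template(
+    "Answer using only this context:\n{context}\n\nQuestion: {question}"
+)
+llm = ChatOpenAI(model="gpt-4o-mini")  # any chat model you have access to
+
+question = "What is RAG?"
+context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
+answer = llm.invoke(prompt.invoke({"context": context, "question": question}))
+print(answer.content)
+```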
+ +- [How to: add chat history](/docs/how_to/qa_chat_history_how_to/) +- [How to: stream](/docs/how_to/qa_streaming/) +- [How to: return sources](/docs/how_to/qa_sources/) +- [How to: return citations](/docs/how_to/qa_citations/) +- [How to: do per-user retrieval](/docs/how_to/qa_per_user/) + + +### Extraction + +Extraction is when you use LLMs to extract structured information from unstructured text. +For a high level tutorial on extraction, check out [this guide](/docs/tutorials/extraction/). + +- [How to: use reference examples](/docs/how_to/extraction_examples/) +- [How to: handle long text](/docs/how_to/extraction_long_text/) +- [How to: do extraction without using function calling](/docs/how_to/extraction_parse) + +### Chatbots + +Chatbots involve using an LLM to have a conversation. +For a high-level tutorial on building chatbots, check out [this guide](/docs/tutorials/chatbot/). + +- [How to: manage memory](/docs/how_to/chatbots_memory) +- [How to: do retrieval](/docs/how_to/chatbots_retrieval) +- [How to: use tools](/docs/how_to/chatbots_tools) +- [How to: manage large chat history](/docs/how_to/trim_messages/) + +### Query analysis + +Query Analysis is the task of using an LLM to generate a query to send to a retriever. +For a high-level tutorial on query analysis, check out [this guide](/docs/tutorials/query_analysis/). + +- [How to: add examples to the prompt](/docs/how_to/query_few_shot) +- [How to: handle cases where no queries are generated](/docs/how_to/query_no_queries) +- [How to: handle multiple queries](/docs/how_to/query_multiple_queries) +- [How to: handle multiple retrievers](/docs/how_to/query_multiple_retrievers) +- [How to: construct filters](/docs/how_to/query_constructing_filters) +- [How to: deal with high cardinality categorical variables](/docs/how_to/query_high_cardinality) + +### Q&A over SQL + CSV + +You can use LLMs to do question answering over tabular data. +For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/). + +- [How to: use prompting to improve results](/docs/how_to/sql_prompting) +- [How to: do query validation](/docs/how_to/sql_query_checking) +- [How to: deal with large databases](/docs/how_to/sql_large_db) +- [How to: deal with CSV files](/docs/how_to/sql_csv) + +### Q&A over graph databases + +You can use an LLM to do question answering over graph databases. +For a high-level tutorial, check out [this guide](/docs/tutorials/graph/). + +- [How to: map values to a database](/docs/how_to/graph_mapping) +- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic) +- [How to: improve results with prompting](/docs/how_to/graph_prompting) +- [How to: construct knowledge graphs](/docs/how_to/graph_constructing) + +### Summarization + +LLMs can summarize and otherwise distill desired information from text, including +large volumes of text. For a high-level tutorial, check out [this guide](/docs/tutorials/summarization). + +- [How to: summarize text in a single LLM call](/docs/how_to/summarize_stuff) +- [How to: summarize text through parallelization](/docs/how_to/summarize_map_reduce) +- [How to: summarize text through iterative refinement](/docs/how_to/summarize_refine) + +## [LangGraph](https://langchain-ai.github.io/langgraph) + +LangGraph is an extension of LangChain aimed at +building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. + +LangGraph documentation is currently hosted on a separate site. 
+You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/langgraph/how-tos/).
+
+## [LangSmith](https://docs.smith.langchain.com/)
+
+LangSmith allows you to closely trace, monitor and evaluate your LLM application.
+It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build.
+
+LangSmith documentation is hosted on a separate site.
+You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly
+relevant to LangChain below:
+
+### Evaluation
+
+Evaluating performance is a vital part of building LLM-powered applications.
+LangSmith helps with every step of the process, from creating a dataset to defining metrics to running evaluators.
+
+To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/how_to_guides#evaluation).
+
+### Tracing
+
+Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.
+
+- [How to: trace with LangChain](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain)
+- [How to: add metadata and tags to traces](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain#add-metadata-and-tags-to-traces)
+
+You can see general tracing-related how-tos [in this section of the LangSmith docs](https://docs.smith.langchain.com/how_to_guides/tracing).
diff --git a/langchain_md_files/how_to/installation.mdx b/langchain_md_files/how_to/installation.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c778c6243c344bfabaec9d0edbc489e6d93a8f57
--- /dev/null
+++ b/langchain_md_files/how_to/installation.mdx
@@ -0,0 +1,107 @@
+---
+sidebar_position: 2
+---
+
+# How to install LangChain packages
+
+The LangChain ecosystem is split into different packages, which allow you to choose exactly which pieces of
+functionality to install.
+
+## Official release
+
+To install the main LangChain package, run:
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from "@theme/CodeBlock";
+
+
+  pip install langchain
+
+
+  conda install langchain -c conda-forge
+
+
+While this package acts as a sane starting point to using LangChain,
+much of the value of LangChain comes when integrating it with various model providers, datastores, etc.
+By default, the dependencies needed to do that are NOT installed. You will need to install the dependencies for specific integrations separately.
+We'll show how to do that in the next sections of this guide.
+
+## Ecosystem packages
+
+With the exception of the `langsmith` SDK, all packages in the LangChain ecosystem depend on `langchain-core`, which contains base
+classes and abstractions that other packages use. The dependency graph below shows how the different packages are related.
+A directed arrow indicates that the source package depends on the target package:
+
+![](/img/ecosystem_packages.png)
+
+When installing a package, you do not need to explicitly install that package's dependencies (such as `langchain-core`).
+However, you may choose to if you are using a feature only available in a certain version of that dependency.
+If you do, you should make sure that the installed or pinned version is compatible with any other integration packages you use.
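+
+For example, to pin `langchain-core` explicitly next to an integration package, you might run something like the following (the version range is purely illustrative; check the requirements of the packages you actually use):
+
+```bash
+# Illustrative only: install an integration package with an explicit langchain-core pin
+pip install "langchain-core>=0.2,<0.3" langchain-openai
+```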
+
+### From source
+
+If you want to install from source, you can do so by cloning the repo, making sure your working directory is `PATH/TO/REPO/langchain/libs/langchain`, and running:
+
+```bash
+pip install -e .
+```
+
+### LangChain core
+The `langchain-core` package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed by `langchain`, but can also be used separately. Install with:
+
+```bash
+pip install langchain-core
+```
+
+### LangChain community
+The `langchain-community` package contains third-party integrations. Install with:
+
+```bash
+pip install langchain-community
+```
+
+### LangChain experimental
+The `langchain-experimental` package holds experimental LangChain code, intended for research and experimental uses.
+Install with:
+
+```bash
+pip install langchain-experimental
+```
+
+### LangGraph
+`langgraph` is a library for building stateful, multi-actor applications with LLMs. It integrates smoothly with LangChain, but can be used without it.
+Install with:
+
+```bash
+pip install langgraph
+```
+
+### LangServe
+LangServe helps developers deploy LangChain runnables and chains as a REST API.
+LangServe is automatically installed by LangChain CLI.
+If not using LangChain CLI, install with:
+
+```bash
+pip install "langserve[all]"
+```
+for both client and server dependencies. Or `pip install "langserve[client]"` for client code, and `pip install "langserve[server]"` for server code.
+
+### LangChain CLI
+The LangChain CLI is useful for working with LangChain templates and other LangServe projects.
+Install with:
+
+```bash
+pip install langchain-cli
+```
+
+### LangSmith SDK
+The LangSmith SDK is automatically installed by LangChain. However, it does not depend on
+`langchain-core`, and can be installed and used independently if desired.
+If you are not using LangChain, you can install it with:
+
+```bash
+pip install langsmith
+```
diff --git a/langchain_md_files/how_to/toolkits.mdx b/langchain_md_files/how_to/toolkits.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c1f74199da1afc2e102f714cb0b6dc914f3712a3
--- /dev/null
+++ b/langchain_md_files/how_to/toolkits.mdx
@@ -0,0 +1,21 @@
+---
+sidebar_position: 3
+---
+# How to use toolkits
+
+
+Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
+
+All Toolkits expose a `get_tools` method which returns a list of tools.
+You can therefore do:
+
+```python
+# Initialize a toolkit
+toolkit = ExampleToolkit(...)
+
+# Get list of tools
+tools = toolkit.get_tools()
+
+# Create agent
+agent = create_agent_method(llm, tools, prompt)
+```
diff --git a/langchain_md_files/how_to/vectorstores.mdx b/langchain_md_files/how_to/vectorstores.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..66775203d486a03b8989111b7c8e0da89d14c757
--- /dev/null
+++ b/langchain_md_files/how_to/vectorstores.mdx
@@ -0,0 +1,178 @@
+# How to create and query vector stores
+
+:::info
+Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
+:::
+
+One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding
+vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are
+'most similar' to the embedded query.
A vector store takes care of storing embedded data and performing vector search +for you. + +## Get started + +This guide showcases basic functionality related to vector stores. A key part of working with vector stores is creating the vector to put in them, +which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model interfaces](/docs/how_to/embed_text) before diving into this. + +Before using the vectorstore at all, we need to load some data and initialize an embedding model. + +We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. + +```python +import os +import getpass + +os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') +``` + +```python +from langchain_community.document_loaders import TextLoader +from langchain_openai import OpenAIEmbeddings +from langchain_text_splitters import CharacterTextSplitter + +# Load the document, split it into chunks, embed each chunk and load it into the vector store. +raw_documents = TextLoader('state_of_the_union.txt').load() +text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) +documents = text_splitter.split_documents(raw_documents) +``` + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +There are many great vector store options, here are a few that are free, open-source, and run entirely on your local machine. Review all integrations for many great hosted offerings. + + + + + +This walkthrough uses the `chroma` vector database, which runs on your local machine as a library. + +```bash +pip install langchain-chroma +``` + +```python +from langchain_chroma import Chroma + +db = Chroma.from_documents(documents, OpenAIEmbeddings()) +``` + + + + +This walkthrough uses the `FAISS` vector database, which makes use of the Facebook AI Similarity Search (FAISS) library. + +```bash +pip install faiss-cpu +``` + +```python +from langchain_community.vectorstores import FAISS + +db = FAISS.from_documents(documents, OpenAIEmbeddings()) +``` + + + + +This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format. + +```bash +pip install lancedb +``` + +```python +from langchain_community.vectorstores import LanceDB + +import lancedb + +db = lancedb.connect("/tmp/lancedb") +table = db.create_table( + "my_table", + data=[ + { + "vector": embeddings.embed_query("Hello World"), + "text": "Hello World", + "id": "1", + } + ], + mode="overwrite", +) +db = LanceDB.from_documents(documents, OpenAIEmbeddings()) +``` + + + + + +## Similarity search + +All vectorstores expose a `similarity_search` method. +This will take incoming documents, create an embedding of them, and then find all documents with the most similar embedding. + +```python +query = "What did the president say about Ketanji Brown Jackson" +docs = db.similarity_search(query) +print(docs[0].page_content) +``` + + + +``` + Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. + + Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. + + One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. 
+ + And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. +``` + + + +### Similarity search by vector + +It is also possible to do a search for documents similar to a given embedding vector using `similarity_search_by_vector` which accepts an embedding vector as a parameter instead of a string. + +```python +embedding_vector = OpenAIEmbeddings().embed_query(query) +docs = db.similarity_search_by_vector(embedding_vector) +print(docs[0].page_content) +``` + + + +``` + Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. + + Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. + + One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. + + And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. +``` + + + +## Async Operations + + +Vector stores are usually run as a separate service that requires some IO operations, and therefore they might be called asynchronously. That gives performance benefits as you don't waste time waiting for responses from external services. That might also be important if you work with an asynchronous framework, such as [FastAPI](https://fastapi.tiangolo.com/). + +LangChain supports async operation on vector stores. All the methods might be called using their async counterparts, with the prefix `a`, meaning `async`. + +```python +docs = await db.asimilarity_search(query) +docs +``` + + + +``` +[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': 'state_of_the_union.txt'}), + Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. 
\n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': 'state_of_the_union.txt'}), + Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': 'state_of_the_union.txt'}), + Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': 'state_of_the_union.txt'})] +``` + + \ No newline at end of file diff --git a/langchain_md_files/integrations/chat/index.mdx b/langchain_md_files/integrations/chat/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5ccb0fa13ff6819f8be60be1a23a3a0be4911e45 --- /dev/null +++ b/langchain_md_files/integrations/chat/index.mdx @@ -0,0 +1,32 @@ +--- +sidebar_position: 0 +sidebar_class_name: hidden +keywords: [compatibility] +--- + +# Chat models + +[Chat models](/docs/concepts/#chat-models) are language models that use a sequence of [messages](/docs/concepts/#messages) as inputs and return messages as outputs (as opposed to using plain text). These are generally newer models. + +:::info + +If you'd like to write your own chat model, see [this how-to](/docs/how_to/custom_chat_model/). +If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/). + +::: + +## Featured Providers + +:::info +While all these LangChain classes support the indicated advanced feature, you may have +to open the provider-specific documentation to learn which hosted models or backends support +the feature. 
+:::
+
+import { CategoryTable, IndexTable } from "@theme/FeatureTables";
+
+
+
+## All chat models
+
+
\ No newline at end of file
diff --git a/langchain_md_files/integrations/document_loaders/index.mdx b/langchain_md_files/integrations/document_loaders/index.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..6dee9374e97d0cdcbf47395a9defec0d42d8b7d6
--- /dev/null
+++ b/langchain_md_files/integrations/document_loaders/index.mdx
@@ -0,0 +1,69 @@
+---
+sidebar_position: 0
+sidebar_class_name: hidden
+---
+
+# Document loaders
+
+import { CategoryTable, IndexTable } from "@theme/FeatureTables";
+
+DocumentLoaders load data into the standard LangChain Document format.
+
+Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method.
+An example use case is as follows:
+
+```python
+from langchain_community.document_loaders.csv_loader import CSVLoader
+
+loader = CSVLoader(
+    ...  # <-- Integration specific parameters here
+)
+data = loader.load()
+```
+
+## Webpages
+
+The below document loaders allow you to load webpages.
+
+
+
+## PDFs
+
+The below document loaders allow you to load PDF documents.
+
+
+
+## Cloud Providers
+
+The below document loaders allow you to load documents from your favorite cloud providers.
+
+
+
+## Social Platforms
+
+The below document loaders allow you to load documents from different social media platforms.
+
+
+
+## Messaging Services
+
+The below document loaders allow you to load data from different messaging platforms.
+
+
+
+## Productivity tools
+
+The below document loaders allow you to load data from commonly used productivity tools.
+
+
+
+## Common File Types
+
+The below document loaders allow you to load data from common data formats.
+
+
+
+
+## All document loaders
+
+
diff --git a/langchain_md_files/integrations/graphs/tigergraph.mdx b/langchain_md_files/integrations/graphs/tigergraph.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a9901459a0d9b0dcb047b2358d4ca6918c8a724b
--- /dev/null
+++ b/langchain_md_files/integrations/graphs/tigergraph.mdx
@@ -0,0 +1,37 @@
+# TigerGraph
+
+>[TigerGraph](https://www.tigergraph.com/tigergraph-db/) is a natively distributed and high-performance graph database.
+> The storage of data in a graph format of vertices and edges leads to rich relationships,
+> ideal for grounding LLM responses.
+
+A comprehensive example of the `TigerGraph` and `LangChain` integration is [presented here](https://github.com/tigergraph/graph-ml-notebooks/blob/main/applications/large_language_models/TigerGraph_LangChain_Demo.ipynb).
+
+## Installation and Setup
+
+Follow the instructions on [how to connect to the `TigerGraph` database](https://docs.tigergraph.com/pytigergraph/current/getting-started/connection).
+
+Install the Python SDK:
+
+```bash
+pip install pyTigerGraph
+```
+
+## Example
+
+To utilize the `TigerGraph InquiryAI` functionality, you can import `TigerGraph` from `langchain_community.graphs`.
+
+```python
+import pyTigerGraph as tg
+
+conn = tg.TigerGraphConnection(host="DATABASE_HOST_HERE", graphname="GRAPH_NAME_HERE", username="USERNAME_HERE", password="PASSWORD_HERE")
+
+### ==== CONFIGURE INQUIRYAI HOST ====
+conn.ai.configureInquiryAIHost("INQUIRYAI_HOST_HERE")
+
+from langchain_community.graphs import TigerGraph
+
+graph = TigerGraph(conn)
+result = graph.query("How many servers are there?")
+print(result)
+```
+
diff --git a/langchain_md_files/integrations/llms/index.mdx b/langchain_md_files/integrations/llms/index.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..973b09c360f2b6a0362f48c7078b690c02e713cc
--- /dev/null
+++ b/langchain_md_files/integrations/llms/index.mdx
@@ -0,0 +1,30 @@
+---
+sidebar_position: 0
+sidebar_class_name: hidden
+keywords: [compatibility]
+---
+
+# LLMs
+
+:::caution
+You are currently on a page documenting the use of [text completion models](/docs/concepts/#llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/#chat-models).
+
+Unless you are specifically using more advanced prompting techniques, you are probably looking for [this page instead](/docs/integrations/chat/).
+:::
+
+[LLMs](/docs/concepts/#llms) are language models that take a string as input and return a string as output.
+
+:::info
+
+If you'd like to write your own LLM, see [this how-to](/docs/how_to/custom_llm/).
+If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
+
+:::
+
+import { CategoryTable, IndexTable } from "@theme/FeatureTables";
+
+
+
+## All LLMs
+
+
diff --git a/langchain_md_files/integrations/llms/layerup_security.mdx b/langchain_md_files/integrations/llms/layerup_security.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..6beee5320903dcfc9ef58373189a1417ef3017a5
--- /dev/null
+++ b/langchain_md_files/integrations/llms/layerup_security.mdx
@@ -0,0 +1,85 @@
+# Layerup Security
+
+The [Layerup Security](https://uselayerup.com) integration allows you to secure your calls to any LangChain LLM, LLM chain or LLM agent. The LLM object wraps around any existing LLM object, allowing for a secure layer between your users and your LLMs.
+
+While the Layerup Security object is designed as an LLM, it is not actually an LLM itself; it simply wraps around an LLM, allowing it to expose the same functionality as the underlying LLM.
+
+## Setup
+First, you'll need a Layerup Security account from the Layerup [website](https://uselayerup.com).
+
+Next, create a project via the [dashboard](https://dashboard.uselayerup.com), and copy your API key. We recommend putting your API key in your project's environment.
+
+Install the Layerup Security SDK:
+```bash
+pip install LayerupSecurity
+```
+
+And install LangChain Community:
+```bash
+pip install langchain-community
+```
+
+And now you're ready to start protecting your LLM calls with Layerup Security!
+ +```python +from langchain_community.llms.layerup_security import LayerupSecurity +from langchain_openai import OpenAI + +# Create an instance of your favorite LLM +openai = OpenAI( + model_name="gpt-3.5-turbo", + openai_api_key="OPENAI_API_KEY", +) + +# Configure Layerup Security +layerup_security = LayerupSecurity( + # Specify a LLM that Layerup Security will wrap around + llm=openai, + + # Layerup API key, from the Layerup dashboard + layerup_api_key="LAYERUP_API_KEY", + + # Custom base URL, if self hosting + layerup_api_base_url="https://api.uselayerup.com/v1", + + # List of guardrails to run on prompts before the LLM is invoked + prompt_guardrails=[], + + # List of guardrails to run on responses from the LLM + response_guardrails=["layerup.hallucination"], + + # Whether or not to mask the prompt for PII & sensitive data before it is sent to the LLM + mask=False, + + # Metadata for abuse tracking, customer tracking, and scope tracking. + metadata={"customer": "example@uselayerup.com"}, + + # Handler for guardrail violations on the prompt guardrails + handle_prompt_guardrail_violation=( + lambda violation: { + "role": "assistant", + "content": ( + "There was sensitive data! I cannot respond. " + "Here's a dynamic canned response. Current date: {}" + ).format(datetime.now()) + } + if violation["offending_guardrail"] == "layerup.sensitive_data" + else None + ), + + # Handler for guardrail violations on the response guardrails + handle_response_guardrail_violation=( + lambda violation: { + "role": "assistant", + "content": ( + "Custom canned response with dynamic data! " + "The violation rule was {}." + ).format(violation["offending_guardrail"]) + } + ), +) + +response = layerup_security.invoke( + "Summarize this message: my name is Bob Dylan. My SSN is 123-45-6789." +) +``` \ No newline at end of file diff --git a/langchain_md_files/integrations/platforms/anthropic.mdx b/langchain_md_files/integrations/platforms/anthropic.mdx new file mode 100644 index 0000000000000000000000000000000000000000..dfa9340f6ec004ed1df4b78de5d125a8d90d1707 --- /dev/null +++ b/langchain_md_files/integrations/platforms/anthropic.mdx @@ -0,0 +1,43 @@ +# Anthropic + +>[Anthropic](https://www.anthropic.com/) is an AI safety and research company, and is the creator of `Claude`. +This page covers all integrations between `Anthropic` models and `LangChain`. + +## Installation and Setup + +To use `Anthropic` models, you need to install a python package: + +```bash +pip install -U langchain-anthropic +``` + +You need to set the `ANTHROPIC_API_KEY` environment variable. +You can get an Anthropic API key [here](https://console.anthropic.com/settings/keys) + +## Chat Models + +### ChatAnthropic + +See a [usage example](/docs/integrations/chat/anthropic). + +```python +from langchain_anthropic import ChatAnthropic + +model = ChatAnthropic(model='claude-3-opus-20240229') +``` + + +## LLMs + +### [Legacy] AnthropicLLM + +**NOTE**: `AnthropicLLM` only supports legacy `Claude 2` models. +To use the newest `Claude 3` models, please use `ChatAnthropic` instead. + +See a [usage example](/docs/integrations/llms/anthropic). 
+ +```python +from langchain_anthropic import AnthropicLLM + +model = AnthropicLLM(model='claude-2.1') +``` diff --git a/langchain_md_files/integrations/platforms/aws.mdx b/langchain_md_files/integrations/platforms/aws.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c13c6e9aac8828075a536ad6546023643e36e0c7 --- /dev/null +++ b/langchain_md_files/integrations/platforms/aws.mdx @@ -0,0 +1,381 @@ +# AWS + +The `LangChain` integrations related to [Amazon AWS](https://aws.amazon.com/) platform. + +First-party AWS integrations are available in the `langchain_aws` package. + +```bash +pip install langchain-aws +``` + +And there are also some community integrations available in the `langchain_community` package with the `boto3` optional dependency. + +```bash +pip install langchain-community boto3 +``` + +## Chat models + +### Bedrock Chat + +>[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of +> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`, +> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to +> build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`, +> you can easily experiment with and evaluate top FMs for your use case, privately customize them with +> your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build +> agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is +> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy +> generative AI capabilities into your applications using the AWS services you are already familiar with. + +See a [usage example](/docs/integrations/chat/bedrock). + +```python +from langchain_aws import ChatBedrock +``` + +### Bedrock Converse +AWS has recently released the Bedrock Converse API which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). To improve reliability the ChatBedrock integration will switch to using the Bedrock Converse API as soon as it has feature parity with the existing Bedrock API. Until then a separate [ChatBedrockConverse](https://python.langchain.com/v0.2/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) integration has been released. + +We recommend using `ChatBedrockConverse` for users who do not need to use custom models. See the [docs](/docs/integrations/chat/bedrock/#bedrock-converse-api) and [API reference](https://python.langchain.com/v0.2/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) for more detail. + +```python +from langchain_aws import ChatBedrockConverse +``` + +## LLMs + +### Bedrock + +See a [usage example](/docs/integrations/llms/bedrock). + +```python +from langchain_aws import BedrockLLM +``` + +### Amazon API Gateway + +>[Amazon API Gateway](https://aws.amazon.com/api-gateway/) is a fully managed service that makes it easy for +> developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" +> for applications to access data, business logic, or functionality from your backend services. 
Using +> `API Gateway`, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication +> applications. `API Gateway` supports containerized and serverless workloads, as well as web applications. +> +> `API Gateway` handles all the tasks involved in accepting and processing up to hundreds of thousands of +> concurrent API calls, including traffic management, CORS support, authorization and access control, +> throttling, monitoring, and API version management. `API Gateway` has no minimum fees or startup costs. +> You pay for the API calls you receive and the amount of data transferred out and, with the `API Gateway` +> tiered pricing model, you can reduce your cost as your API usage scales. + +See a [usage example](/docs/integrations/llms/amazon_api_gateway). + +```python +from langchain_community.llms import AmazonAPIGateway +``` + +### SageMaker Endpoint + +>[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a system that can build, train, and deploy +> machine learning (ML) models with fully managed infrastructure, tools, and workflows. + +We use `SageMaker` to host our model and expose it as the `SageMaker Endpoint`. + +See a [usage example](/docs/integrations/llms/sagemaker). + +```python +from langchain_aws import SagemakerEndpoint +``` + +## Embedding Models + +### Bedrock + +See a [usage example](/docs/integrations/text_embedding/bedrock). +```python +from langchain_community.embeddings import BedrockEmbeddings +``` + +### SageMaker Endpoint + +See a [usage example](/docs/integrations/text_embedding/sagemaker-endpoint). +```python +from langchain_community.embeddings import SagemakerEndpointEmbeddings +from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase +``` + +## Document loaders + +### AWS S3 Directory and File + +>[Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html) +> is an object storage service. +>[AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html) +>[AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html) + +See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory). + +See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file). + +```python +from langchain_community.document_loaders import S3DirectoryLoader, S3FileLoader +``` + +### Amazon Textract + +>[Amazon Textract](https://docs.aws.amazon.com/managedservices/latest/userguide/textract.html) is a machine +> learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. + +See a [usage example](/docs/integrations/document_loaders/amazon_textract). + +```python +from langchain_community.document_loaders import AmazonTextractPDFLoader +``` + +### Amazon Athena + +>[Amazon Athena](https://aws.amazon.com/athena/) is a serverless, interactive analytics service built +>on open-source frameworks, supporting open-table and file formats. + +See a [usage example](/docs/integrations/document_loaders/athena). + +```python +from langchain_community.document_loaders.athena import AthenaLoader +``` + +### AWS Glue + +>The [AWS Glue Data Catalog](https://docs.aws.amazon.com/en_en/glue/latest/dg/catalog-and-crawler.html) is a centralized metadata +> repository that allows you to manage, access, and share metadata about +> your data stored in AWS. 
It acts as a metadata store for your data assets, +> enabling various AWS services and your applications to query and connect +> to the data they need efficiently. + +See a [usage example](/docs/integrations/document_loaders/glue_catalog). + +```python +from langchain_community.document_loaders.glue_catalog import GlueCatalogLoader +``` + +## Vector stores + +### Amazon OpenSearch Service + +> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs +> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is +> an open source, +> distributed search and analytics suite derived from `Elasticsearch`. `Amazon OpenSearch Service` offers the +> latest versions of `OpenSearch`, support for many versions of `Elasticsearch`, as well as +> visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`. + +We need to install several python libraries. + +```bash +pip install boto3 requests requests-aws4auth +``` + +See a [usage example](/docs/integrations/vectorstores/opensearch#using-aos-amazon-opensearch-service). + +```python +from langchain_community.vectorstores import OpenSearchVectorSearch +``` + +### Amazon DocumentDB Vector Search + +>[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud. +> With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB. +> Vector search for Amazon DocumentDB combines the flexibility and rich querying capability of a JSON-based document database with the power of vector search. + +#### Installation and Setup + +See [detail configuration instructions](/docs/integrations/vectorstores/documentdb). + +We need to install the `pymongo` python package. + +```bash +pip install pymongo +``` + +#### Deploy DocumentDB on AWS + +[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud. + +AWS offers services for computing, databases, storage, analytics, and other functionality. For an overview of all AWS services, see [Cloud Computing with Amazon Web Services](https://aws.amazon.com/what-is-aws/). + +See a [usage example](/docs/integrations/vectorstores/documentdb). + +```python +from langchain_community.vectorstores import DocumentDBVectorSearch +``` +### Amazon MemoryDB +[Amazon MemoryDB](https://aws.amazon.com/memorydb/) is a durable, in-memory database service that delivers ultra-fast performance. MemoryDB is compatible with Redis OSS, a popular open source data store, +enabling you to quickly build applications using the same flexible and friendly Redis OSS APIs, and commands that they already use today. + +InMemoryVectorStore class provides a vectorstore to connect with Amazon MemoryDB. + +```python +from langchain_aws.vectorstores.inmemorydb import InMemoryVectorStore + +vds = InMemoryVectorStore.from_documents( + chunks, + embeddings, + redis_url="rediss://cluster_endpoint:6379/ssl=True ssl_cert_reqs=none", + vector_schema=vector_schema, + index_name=INDEX_NAME, + ) +``` +See a [usage example](/docs/integrations/vectorstores/memorydb). 
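+
+Once created, the store can be queried like any other LangChain vector store. A minimal sketch, assuming the `vds` object from the snippet above (the query string and `k` are arbitrary):
+
+```python
+# Retrieve the three chunks most similar to the query
+results = vds.similarity_search("What do these documents cover?", k=3)
+for doc in results:
+    print(doc.page_content)
+```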
+ +## Retrievers + +### Amazon Kendra + +> [Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/what-is-kendra.html) is an intelligent search service +> provided by `Amazon Web Services` (`AWS`). It utilizes advanced natural language processing (NLP) and machine +> learning algorithms to enable powerful search capabilities across various data sources within an organization. +> `Kendra` is designed to help users find the information they need quickly and accurately, +> improving productivity and decision-making. + +> With `Kendra`, we can search across a wide range of content types, including documents, FAQs, knowledge bases, +> manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and +> contextual meanings to provide highly relevant search results. + +We need to install the `langchain-aws` library. + +```bash +pip install langchain-aws +``` + +See a [usage example](/docs/integrations/retrievers/amazon_kendra_retriever). + +```python +from langchain_aws import AmazonKendraRetriever +``` + +### Amazon Bedrock (Knowledge Bases) + +> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an +> `Amazon Web Services` (`AWS`) offering which lets you quickly build RAG applications by using your +> private data to customize foundation model response. + +We need to install the `langchain-aws` library. + +```bash +pip install langchain-aws +``` + +See a [usage example](/docs/integrations/retrievers/bedrock). + +```python +from langchain_aws import AmazonKnowledgeBasesRetriever +``` + +## Tools + +### AWS Lambda + +>[`Amazon AWS Lambda`](https://aws.amazon.com/pm/lambda/) is a serverless computing service provided by +> `Amazon Web Services` (`AWS`). It helps developers to build and run applications and services without +> provisioning or managing servers. This serverless architecture enables you to focus on writing and +> deploying code, while AWS automatically takes care of scaling, patching, and managing the +> infrastructure required to run your applications. + +We need to install `boto3` python library. + +```bash +pip install boto3 +``` + +See a [usage example](/docs/integrations/tools/awslambda). + +## Memory + +### AWS DynamoDB + +>[AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html) +> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability. + +We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). + +We need to install the `boto3` library. + +```bash +pip install boto3 +``` + +See a [usage example](/docs/integrations/memory/aws_dynamodb). + +```python +from langchain_community.chat_message_histories import DynamoDBChatMessageHistory +``` + +## Graphs + +### Amazon Neptune with Cypher + +See a [usage example](/docs/integrations/graphs/amazon_neptune_open_cypher). + +```python +from langchain_community.graphs import NeptuneGraph +from langchain_community.graphs import NeptuneAnalyticsGraph +from langchain_community.chains.graph_qa.neptune_cypher import NeptuneOpenCypherQAChain +``` + +### Amazon Neptune with SPARQL + +See a [usage example](/docs/integrations/graphs/amazon_neptune_sparql). 
+ +```python +from langchain_community.graphs import NeptuneRdfGraph +from langchain_community.chains.graph_qa.neptune_sparql import NeptuneSparqlQAChain +``` + + + +## Callbacks + +### Bedrock token usage + +```python +from langchain_community.callbacks.bedrock_anthropic_callback import BedrockAnthropicTokenUsageCallbackHandler +``` + +### SageMaker Tracking + +>[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly +> and easily build, train and deploy machine learning (ML) models. + +>[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability +> of `Amazon SageMaker` that lets you organize, track, +> compare and evaluate ML experiments and model versions. + +We need to install several python libraries. + +```bash +pip install google-search-results sagemaker +``` + +See a [usage example](/docs/integrations/callbacks/sagemaker_tracking). + +```python +from langchain_community.callbacks import SageMakerCallbackHandler +``` + +## Chains + +### Amazon Comprehend Moderation Chain + +>[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that +> uses machine learning to uncover valuable insights and connections in text. + + +We need to install the `boto3` and `nltk` libraries. + +```bash +pip install boto3 nltk +``` + +See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/amazon_comprehend_chain/). + +```python +from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain +``` diff --git a/langchain_md_files/integrations/platforms/google.mdx b/langchain_md_files/integrations/platforms/google.mdx new file mode 100644 index 0000000000000000000000000000000000000000..45a439e02bce3d39186f154b72223359bbb1e560 --- /dev/null +++ b/langchain_md_files/integrations/platforms/google.mdx @@ -0,0 +1,1079 @@ +# Google + +All functionality related to [Google Cloud Platform](https://cloud.google.com/) and other `Google` products. + +## Chat models + +We recommend individual developers to start with Gemini API (`langchain-google-genai`) and move to Vertex AI (`langchain-google-vertexai`) when they need access to commercial support and higher rate limits. If you’re already Cloud-friendly or Cloud-native, then you can get started in Vertex AI straight away. +Please see [here](https://ai.google.dev/gemini-api/docs/migrate-to-cloud) for more information. + +### Google Generative AI + +Access GoogleAI `Gemini` models such as `gemini-pro` and `gemini-pro-vision` through the `ChatGoogleGenerativeAI` class. + +```bash +pip install -U langchain-google-genai +``` + +Configure your API key. + +```bash +export GOOGLE_API_KEY=your-api-key +``` + +```python +from langchain_google_genai import ChatGoogleGenerativeAI + +llm = ChatGoogleGenerativeAI(model="gemini-pro") +llm.invoke("Sing a ballad of LangChain.") +``` + +Gemini vision model supports image inputs when providing a single chat message. 
+ +```python +from langchain_core.messages import HumanMessage +from langchain_google_genai import ChatGoogleGenerativeAI + +llm = ChatGoogleGenerativeAI(model="gemini-pro-vision") + +message = HumanMessage( + content=[ + { + "type": "text", + "text": "What's in this image?", + }, # You can optionally provide text parts + {"type": "image_url", "image_url": "https://picsum.photos/seed/picsum/200/300"}, + ] +) +llm.invoke([message]) +``` + +The value of image_url can be any of the following: + +- A public image URL +- A gcs file (e.g., "gcs://path/to/file.png") +- A local file path +- A base64 encoded image (e.g., data:image/png;base64,abcd124) +- A PIL image + +### Vertex AI + +Access PaLM chat models like `chat-bison` and `codechat-bison` via Google Cloud. + +We need to install `langchain-google-vertexai` python package. + +```bash +pip install langchain-google-vertexai +``` + +See a [usage example](/docs/integrations/chat/google_vertex_ai_palm). + +```python +from langchain_google_vertexai import ChatVertexAI +``` + +### Chat Anthropic on Vertex AI + +See a [usage example](/docs/integrations/llms/google_vertex_ai_palm). + +```python +from langchain_google_vertexai.model_garden import ChatAnthropicVertex +``` + +## LLMs + +### Google Generative AI + +Access GoogleAI `Gemini` models such as `gemini-pro` and `gemini-pro-vision` through the `GoogleGenerativeAI` class. + +Install python package. + +```bash +pip install langchain-google-genai +``` + +See a [usage example](/docs/integrations/llms/google_ai). + +```python +from langchain_google_genai import GoogleGenerativeAI +``` + +### Vertex AI Model Garden + +Access `PaLM` and hundreds of OSS models via `Vertex AI Model Garden` service. + +We need to install `langchain-google-vertexai` python package. + +```bash +pip install langchain-google-vertexai +``` + +See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model-garden). + +```python +from langchain_google_vertexai import VertexAIModelGarden +``` + +## Embedding models + +### Google Generative AI Embeddings + +See a [usage example](/docs/integrations/text_embedding/google_generative_ai). + +```bash +pip install -U langchain-google-genai +``` + +Configure your API key. + +```bash +export GOOGLE_API_KEY=your-api-key +``` + +```python +from langchain_google_genai import GoogleGenerativeAIEmbeddings +``` + +### Vertex AI + +We need to install `langchain-google-vertexai` python package. + +```bash +pip install langchain-google-vertexai +``` + +See a [usage example](/docs/integrations/text_embedding/google_vertex_ai_palm). + +```python +from langchain_google_vertexai import VertexAIEmbeddings +``` + +### Palm Embedding + +We need to install `langchain-community` python package. + +```bash +pip install langchain-community +``` + +```python +from langchain_community.embeddings.google_palm import GooglePalmEmbeddings +``` + +## Document Loaders + +### AlloyDB for PostgreSQL + +> [Google Cloud AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability on Google Cloud. AlloyDB is 100% compatible with PostgreSQL. + +Install the python package: + +```bash +pip install langchain-google-alloydb-pg +``` + +See [usage example](/docs/integrations/document_loaders/google_alloydb). 
+ +```python +from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBLoader +``` + +### BigQuery + +> [Google Cloud BigQuery](https://cloud.google.com/bigquery) is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data in Google Cloud. + +We need to install `langchain-google-community` with Big Query dependencies: + +```bash +pip install langchain-google-community[bigquery] +``` + +See a [usage example](/docs/integrations/document_loaders/google_bigquery). + +```python +from langchain_google_community import BigQueryLoader +``` + +### Bigtable + +> [Google Cloud Bigtable](https://cloud.google.com/bigtable/docs) is Google's fully managed NoSQL Big Data database service in Google Cloud. +Install the python package: + +```bash +pip install langchain-google-bigtable +``` + +See [Googel Cloud usage example](/docs/integrations/document_loaders/google_bigtable). + +```python +from langchain_google_bigtable import BigtableLoader +``` + +### Cloud SQL for MySQL + +> [Google Cloud SQL for MySQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud. +Install the python package: + +```bash +pip install langchain-google-cloud-sql-mysql +``` + +See [usage example](/docs/integrations/document_loaders/google_cloud_sql_mysql). + +```python +from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLDocumentLoader +``` + +### Cloud SQL for SQL Server + +> [Google Cloud SQL for SQL Server](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your SQL Server databases on Google Cloud. +Install the python package: + +```bash +pip install langchain-google-cloud-sql-mssql +``` + +See [usage example](/docs/integrations/document_loaders/google_cloud_sql_mssql). + +```python +from langchain_google_cloud_sql_mssql import MSSQLEngine, MSSQLLoader +``` + +### Cloud SQL for PostgreSQL + +> [Google Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud. +Install the python package: + +```bash +pip install langchain-google-cloud-sql-pg +``` + +See [usage example](/docs/integrations/document_loaders/google_cloud_sql_pg). + +```python +from langchain_google_cloud_sql_pg import PostgresEngine, PostgresLoader +``` + +### Cloud Storage + +>[Cloud Storage](https://en.wikipedia.org/wiki/Google_Cloud_Storage) is a managed service for storing unstructured data in Google Cloud. + +We need to install `langchain-google-community` with Google Cloud Storage dependencies. + +```bash +pip install langchain-google-community[gcs] +``` + +There are two loaders for the `Google Cloud Storage`: the `Directory` and the `File` loaders. + +See a [usage example](/docs/integrations/document_loaders/google_cloud_storage_directory). + +```python +from langchain_google_community import GCSDirectoryLoader +``` +See a [usage example](/docs/integrations/document_loaders/google_cloud_storage_file). 
+ +```python +from langchain_google_community import GCSFileLoader +``` + +### Cloud Vision loader + +Install the python package: + +```bash +pip install langchain-google-community[vision] +``` + +```python +from langchain_google_community.vision import CloudVisionLoader +``` + +### El Carro for Oracle Workloads + +> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator) +offers a way to run Oracle databases in Kubernetes as a portable, open source, +community driven, no vendor lock-in container orchestration system. + +```bash +pip install langchain-google-el-carro +``` + +See [usage example](/docs/integrations/document_loaders/google_el_carro). + +```python +from langchain_google_el_carro import ElCarroLoader +``` + +### Google Drive + +>[Google Drive](https://en.wikipedia.org/wiki/Google_Drive) is a file storage and synchronization service developed by Google. + +Currently, only `Google Docs` are supported. + +We need to install `langchain-google-community` with Google Drive dependencies. + +```bash +pip install langchain-google-community[drive] +``` + +See a [usage example and authorization instructions](/docs/integrations/document_loaders/google_drive). + +```python +from langchain_google_community import GoogleDriveLoader +``` + +### Firestore (Native Mode) + +> [Google Cloud Firestore](https://cloud.google.com/firestore/docs/) is a NoSQL document database built for automatic scaling, high performance, and ease of application development. +Install the python package: + +```bash +pip install langchain-google-firestore +``` + +See [usage example](/docs/integrations/document_loaders/google_firestore). + +```python +from langchain_google_firestore import FirestoreLoader +``` + +### Firestore (Datastore Mode) + +> [Google Cloud Firestore in Datastore mode](https://cloud.google.com/datastore/docs) is a NoSQL document database built for automatic scaling, high performance, and ease of application development. +> Firestore is the newest version of Datastore and introduces several improvements over Datastore. +Install the python package: + +```bash +pip install langchain-google-datastore +``` + +See [usage example](/docs/integrations/document_loaders/google_datastore). + +```python +from langchain_google_datastore import DatastoreLoader +``` + +### Memorystore for Redis + +> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments. +Install the python package: + +```bash +pip install langchain-google-memorystore-redis +``` + +See [usage example](/docs/integrations/document_loaders/google_memorystore_redis). + +```python +from langchain_google_memorystore_redis import MemorystoreLoader +``` + +### Spanner + +> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL. +Install the python package: + +```bash +pip install langchain-google-spanner +``` + +See [usage example](/docs/integrations/document_loaders/google_spanner). 
+ +```python +from langchain_google_spanner import SpannerLoader +``` + +### Speech-to-Text + +> [Google Cloud Speech-to-Text](https://cloud.google.com/speech-to-text) is an audio transcription API powered by Google's speech recognition models in Google Cloud. + +This document loader transcribes audio files and outputs the text results as Documents. + +First, we need to install `langchain-google-community` with speech-to-text dependencies. + +```bash +pip install langchain-google-community[speech] +``` + +See a [usage example and authorization instructions](/docs/integrations/document_loaders/google_speech_to_text). + +```python +from langchain_google_community import SpeechToTextLoader +``` + +## Document Transformers + +### Document AI + +>[Google Cloud Document AI](https://cloud.google.com/document-ai/docs/overview) is a Google Cloud +> service that transforms unstructured data from documents into structured data, making it easier +> to understand, analyze, and consume. + +We need to set up a [`GCS` bucket and create your own OCR processor](https://cloud.google.com/document-ai/docs/create-processor) +The `GCS_OUTPUT_PATH` should be a path to a folder on GCS (starting with `gs://`) +and a processor name should look like `projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID`. +We can get it either programmatically or copy from the `Prediction endpoint` section of the `Processor details` +tab in the Google Cloud Console. + +```bash +pip install langchain-google-community[docai] +``` + +See a [usage example](/docs/integrations/document_transformers/google_docai). + +```python +from langchain_core.document_loaders.blob_loaders import Blob +from langchain_google_community import DocAIParser +``` + +### Google Translate + +> [Google Translate](https://translate.google.com/) is a multilingual neural machine +> translation service developed by Google to translate text, documents and websites +> from one language into another. + +The `GoogleTranslateTransformer` allows you to translate text and HTML with the [Google Cloud Translation API](https://cloud.google.com/translate). + +First, we need to install the `langchain-google-community` with translate dependencies. + +```bash +pip install langchain-google-community[translate] +``` + +See a [usage example and authorization instructions](/docs/integrations/document_transformers/google_translate). + +```python +from langchain_google_community import GoogleTranslateTransformer +``` + +## Vector Stores + +### AlloyDB for PostgreSQL + +> [Google Cloud AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability on Google Cloud. AlloyDB is 100% compatible with PostgreSQL. + +Install the python package: + +```bash +pip install langchain-google-alloydb-pg +``` + +See [usage example](/docs/integrations/vectorstores/google_alloydb). + +```python +from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBVectorStore +``` + +### BigQuery Vector Search + +> [Google Cloud BigQuery](https://cloud.google.com/bigquery), +> BigQuery is a serverless and cost-effective enterprise data warehouse in Google Cloud. +> +> [Google Cloud BigQuery Vector Search](https://cloud.google.com/bigquery/docs/vector-search-intro) +> BigQuery vector search lets you use GoogleSQL to do semantic search, using vector indexes for fast but approximate results, or using brute force for exact results. + +> It can calculate Euclidean or Cosine distance. 
With LangChain, Euclidean distance is used by default.
+
+We need to install the `google-cloud-bigquery` python package.
+
+```bash
+pip install google-cloud-bigquery
+```
+
+See a [usage example](/docs/integrations/vectorstores/google_bigquery_vector_search).
+
+```python
+from langchain.vectorstores import BigQueryVectorSearch
+```
+
+### Memorystore for Redis
+
+> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments.
+
+Install the python package:
+
+```bash
+pip install langchain-google-memorystore-redis
+```
+
+See [usage example](/docs/integrations/vectorstores/google_memorystore_redis).
+
+```python
+from langchain_google_memorystore_redis import RedisVectorStore
+```
+
+### Spanner
+
+> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.
+
+Install the python package:
+
+```bash
+pip install langchain-google-spanner
+```
+
+See [usage example](/docs/integrations/vectorstores/google_spanner).
+
+```python
+from langchain_google_spanner import SpannerVectorStore
+```
+
+### Firestore (Native Mode)
+
+> [Google Cloud Firestore](https://cloud.google.com/firestore/docs/) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
+
+Install the python package:
+
+```bash
+pip install langchain-google-firestore
+```
+
+See [usage example](/docs/integrations/vectorstores/google_firestore).
+
+```python
+from langchain_google_firestore import FirestoreVectorStore
+```
+
+### Cloud SQL for MySQL
+
+> [Google Cloud SQL for MySQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud.
+
+Install the python package:
+
+```bash
+pip install langchain-google-cloud-sql-mysql
+```
+
+See [usage example](/docs/integrations/vectorstores/google_cloud_sql_mysql).
+
+```python
+from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLVectorStore
+```
+
+### Cloud SQL for PostgreSQL
+
+> [Google Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud.
+
+Install the python package:
+
+```bash
+pip install langchain-google-cloud-sql-pg
+```
+
+See [usage example](/docs/integrations/vectorstores/google_cloud_sql_pg).
+
+```python
+from langchain_google_cloud_sql_pg import PostgresEngine, PostgresVectorStore
+```
+
+### Vertex AI Vector Search
+
+> [Google Cloud Vertex AI Vector Search](https://cloud.google.com/vertex-ai/docs/vector-search/overview) from Google Cloud,
+> formerly known as `Vertex AI Matching Engine`, provides the industry's leading high-scale
+> low latency vector database. These vector databases are commonly
+> referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.
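+
+Once the `langchain-google-vertexai` package (installed just below) is set up and a Vector Search index with an endpoint has been deployed, usage typically looks like the following minimal sketch. The project, region, bucket, and index/endpoint IDs are placeholders and assumptions, not values from this guide:
+
+```python
+from langchain_google_vertexai import VertexAIEmbeddings, VectorSearchVectorStore
+
+# All identifiers below are placeholders - substitute your own resources.
+embeddings = VertexAIEmbeddings(model_name="textembedding-gecko@003")
+
+vector_store = VectorSearchVectorStore.from_components(
+    project_id="my-project",              # assumed GCP project ID
+    region="us-central1",                 # assumed region of the deployed index
+    gcs_bucket_name="my-staging-bucket",  # assumed staging bucket name
+    index_id="my-index-id",               # assumed Vector Search index ID
+    endpoint_id="my-endpoint-id",         # assumed index endpoint ID
+    embedding=embeddings,
+)
+
+# Add a document and run a similarity query against the deployed index.
+vector_store.add_texts(["Vertex AI Vector Search works with LangChain."])
+docs = vector_store.similarity_search("What works with LangChain?", k=1)
+```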
+ +Install the python package: + +```bash +pip install langchain-google-vertexai +``` + +See a [usage example](/docs/integrations/vectorstores/google_vertex_ai_vector_search). + +```python +from langchain_google_vertexai import VectorSearchVectorStore +``` + +### ScaNN + +>[Google ScaNN](https://github.com/google-research/google-research/tree/master/scann) +> (Scalable Nearest Neighbors) is a python package. +> +>`ScaNN` is a method for efficient vector similarity search at scale. + +>`ScaNN` includes search space pruning and quantization for Maximum Inner +> Product Search and also supports other distance functions such as +> Euclidean distance. The implementation is optimized for x86 processors +> with AVX2 support. See its [Google Research github](https://github.com/google-research/google-research/tree/master/scann) +> for more details. + +We need to install `scann` python package. + +```bash +pip install scann +``` + +See a [usage example](/docs/integrations/vectorstores/scann). + +```python +from langchain_community.vectorstores import ScaNN +``` + +## Retrievers + +### Google Drive + +We need to install several python packages. + +```bash +pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib +``` + +See a [usage example and authorization instructions](/docs/integrations/retrievers/google_drive). + +```python +from langchain_googledrive.retrievers import GoogleDriveRetriever +``` + +### Vertex AI Search + +> [Vertex AI Search](https://cloud.google.com/generative-ai-app-builder/docs/introduction) +> from Google Cloud allows developers to quickly build generative AI powered search engines for customers and employees. + +We need to install the `google-cloud-discoveryengine` python package. + +```bash +pip install google-cloud-discoveryengine +``` + +See a [usage example](/docs/integrations/retrievers/google_vertex_ai_search). + +```python +from langchain.retrievers import GoogleVertexAISearchRetriever +``` + +### Document AI Warehouse + +> [Document AI Warehouse](https://cloud.google.com/document-ai-warehouse) +> from Google Cloud allows enterprises to search, store, govern, and manage documents and their AI-extracted +> data and metadata in a single platform. + +Note: `GoogleDocumentAIWarehouseRetriever` is deprecated, use `DocumentAIWarehouseRetriever` (see below). +```python +from langchain.retrievers import GoogleDocumentAIWarehouseRetriever +docai_wh_retriever = GoogleDocumentAIWarehouseRetriever( + project_number=... +) +query = ... +documents = docai_wh_retriever.invoke( + query, user_ldap=... +) +``` + +```python +from langchain_google_community.documentai_warehouse import DocumentAIWarehouseRetriever +``` + +## Tools + +### Text-to-Speech + +>[Google Cloud Text-to-Speech](https://cloud.google.com/text-to-speech) is a Google Cloud service that enables developers to +> synthesize natural-sounding speech with 100+ voices, available in multiple languages and variants. +> It applies DeepMind’s groundbreaking research in WaveNet and Google’s powerful neural networks +> to deliver the highest fidelity possible. + +We need to install a python package. + +```bash +pip install google-cloud-text-to-speech +``` + +See a [usage example and authorization instructions](/docs/integrations/tools/google_cloud_texttospeech). + +```python +from langchain_google_community import TextToSpeechTool +``` + +### Google Drive + +We need to install several python packages. 
+
+```bash
+pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
+```
+
+See a [usage example and authorization instructions](/docs/integrations/tools/google_drive).
+
+```python
+from langchain_community.utilities.google_drive import GoogleDriveAPIWrapper
+from langchain_community.tools.google_drive.tool import GoogleDriveSearchTool
+```
+
+### Google Finance
+
+We need to install a python package.
+
+```bash
+pip install google-search-results
+```
+
+See a [usage example and authorization instructions](/docs/integrations/tools/google_finance).
+
+```python
+from langchain_community.tools.google_finance import GoogleFinanceQueryRun
+from langchain_community.utilities.google_finance import GoogleFinanceAPIWrapper
+```
+
+### Google Jobs
+
+We need to install a python package.
+
+```bash
+pip install google-search-results
+```
+
+See a [usage example and authorization instructions](/docs/integrations/tools/google_jobs).
+
+```python
+from langchain_community.tools.google_jobs import GoogleJobsQueryRun
+from langchain_community.utilities.google_jobs import GoogleJobsAPIWrapper
+```
+
+### Google Lens
+
+See a [usage example and authorization instructions](/docs/integrations/tools/google_lens).
+
+```python
+from langchain_community.tools.google_lens import GoogleLensQueryRun
+from langchain_community.utilities.google_lens import GoogleLensAPIWrapper
+```
+
+### Google Places
+
+We need to install a python package.
+
+```bash
+pip install googlemaps
+```
+
+See a [usage example and authorization instructions](/docs/integrations/tools/google_places).
+
+```python
+from langchain.tools import GooglePlacesTool
+```
+
+### Google Scholar
+
+We need to install a python package.
+
+```bash
+pip install google-search-results
+```
+
+See a [usage example and authorization instructions](/docs/integrations/tools/google_scholar).
+
+```python
+from langchain_community.tools.google_scholar import GoogleScholarQueryRun
+from langchain_community.utilities.google_scholar import GoogleScholarAPIWrapper
+```
+
+### Google Search
+
+- Set up a Custom Search Engine, following [these instructions](https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)
+- Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables
+`GOOGLE_API_KEY` and `GOOGLE_CSE_ID` respectively.
+
+```python
+from langchain_google_community import GoogleSearchAPIWrapper
+```
+
+For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search).
+
+We can easily load this wrapper as a Tool (to use with an Agent):
+
+```python
+from langchain.agents import load_tools
+tools = load_tools(["google-search"])
+```
+
+### Google Trends
+
+We need to install a python package.
+
+```bash
+pip install google-search-results
+```
+
+See a [usage example and authorization instructions](/docs/integrations/tools/google_trends).
+
+```python
+from langchain_community.tools.google_trends import GoogleTrendsQueryRun
+from langchain_community.utilities.google_trends import GoogleTrendsAPIWrapper
+```
+
+## Toolkits
+
+### GMail
+
+> [Google Gmail](https://en.wikipedia.org/wiki/Gmail) is a free email service provided by Google.
+
+This toolkit works with emails through the `Gmail API`.
+ +We need to install `langchain-google-community` with required dependencies: + +```bash +pip install langchain-google-community[gmail] +``` + +See a [usage example and authorization instructions](/docs/integrations/tools/gmail). + +```python +from langchain_google_community import GmailToolkit +``` + +## Memory + +### AlloyDB for PostgreSQL + +> [AlloyDB for PostgreSQL](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability on Google Cloud. AlloyDB is 100% compatible with PostgreSQL. + +Install the python package: + +```bash +pip install langchain-google-alloydb-pg +``` + +See [usage example](/docs/integrations/memory/google_alloydb). + +```python +from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBChatMessageHistory +``` + +### Cloud SQL for PostgreSQL + +> [Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud. +Install the python package: + +```bash +pip install langchain-google-cloud-sql-pg +``` + +See [usage example](/docs/integrations/memory/google_sql_pg). + + +```python +from langchain_google_cloud_sql_pg import PostgresEngine, PostgresChatMessageHistory +``` + +### Cloud SQL for MySQL + +> [Cloud SQL for MySQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud. +Install the python package: + +```bash +pip install langchain-google-cloud-sql-mysql +``` + +See [usage example](/docs/integrations/memory/google_sql_mysql). + +```python +from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLChatMessageHistory +``` + +### Cloud SQL for SQL Server + +> [Cloud SQL for SQL Server](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your SQL Server databases on Google Cloud. +Install the python package: + +```bash +pip install langchain-google-cloud-sql-mssql +``` + +See [usage example](/docs/integrations/memory/google_sql_mssql). + +```python +from langchain_google_cloud_sql_mssql import MSSQLEngine, MSSQLChatMessageHistory +``` + +### Spanner + +> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL. +Install the python package: + +```bash +pip install langchain-google-spanner +``` + +See [usage example](/docs/integrations/memory/google_spanner). + +```python +from langchain_google_spanner import SpannerChatMessageHistory +``` + +### Memorystore for Redis + +> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments. +Install the python package: + +```bash +pip install langchain-google-memorystore-redis +``` + +See [usage example](/docs/integrations/document_loaders/google_memorystore_redis). 
+ +```python +from langchain_google_memorystore_redis import MemorystoreChatMessageHistory +``` + +### Bigtable + +> [Google Cloud Bigtable](https://cloud.google.com/bigtable/docs) is Google's fully managed NoSQL Big Data database service in Google Cloud. +Install the python package: + +```bash +pip install langchain-google-bigtable +``` + +See [usage example](/docs/integrations/memory/google_bigtable). + +```python +from langchain_google_bigtable import BigtableChatMessageHistory +``` + +### Firestore (Native Mode) + +> [Google Cloud Firestore](https://cloud.google.com/firestore/docs/) is a NoSQL document database built for automatic scaling, high performance, and ease of application development. +Install the python package: + +```bash +pip install langchain-google-firestore +``` + +See [usage example](/docs/integrations/memory/google_firestore). + +```python +from langchain_google_firestore import FirestoreChatMessageHistory +``` + +### Firestore (Datastore Mode) + +> [Google Cloud Firestore in Datastore mode](https://cloud.google.com/datastore/docs) is a NoSQL document database built for automatic scaling, high performance, and ease of application development. +> Firestore is the newest version of Datastore and introduces several improvements over Datastore. +Install the python package: + +```bash +pip install langchain-google-datastore +``` + +See [usage example](/docs/integrations/memory/google_firestore_datastore). + +```python +from langchain_google_datastore import DatastoreChatMessageHistory +``` + +### El Carro: The Oracle Operator for Kubernetes + +> Google [El Carro Oracle Operator for Kubernetes](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator) +offers a way to run `Oracle` databases in `Kubernetes` as a portable, open source, +community driven, no vendor lock-in container orchestration system. + +```bash +pip install langchain-google-el-carro +``` + +See [usage example](/docs/integrations/memory/google_el_carro). + +```python +from langchain_google_el_carro import ElCarroChatMessageHistory +``` + +## Chat Loaders + +### GMail + +> [Gmail](https://en.wikipedia.org/wiki/Gmail) is a free email service provided by Google. +This loader works with emails through the `Gmail API`. + +We need to install `langchain-google-community` with underlying dependencies. + +```bash +pip install langchain-google-community[gmail] +``` + +See a [usage example and authorization instructions](/docs/integrations/chat_loaders/gmail). + +```python +from langchain_google_community import GMailLoader +``` + +## 3rd Party Integrations + +### SearchApi + +>[SearchApi](https://www.searchapi.io/) provides a 3rd-party API to access Google search results, YouTube search & transcripts, and other Google-related engines. + +See [usage examples and authorization instructions](/docs/integrations/tools/searchapi). + +```python +from langchain_community.utilities import SearchApiAPIWrapper +``` + +### SerpApi + +>[SerpApi](https://serpapi.com/) provides a 3rd-party API to access Google search results. + +See a [usage example and authorization instructions](/docs/integrations/tools/serpapi). + +```python +from langchain_community.utilities import SerpAPIWrapper +``` + +### Serper.dev + +See a [usage example and authorization instructions](/docs/integrations/tools/google_serper). 
+ +```python +from langchain_community.utilities import GoogleSerperAPIWrapper +``` + +### YouTube + +>[YouTube Search](https://github.com/joetats/youtube_search) package searches `YouTube` videos avoiding using their heavily rate-limited API. +> +>It uses the form on the YouTube homepage and scrapes the resulting page. + +We need to install a python package. + +```bash +pip install youtube_search +``` + +See a [usage example](/docs/integrations/tools/youtube). + +```python +from langchain.tools import YouTubeSearchTool +``` + +### YouTube audio + +>[YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by `Google`. + +Use `YoutubeAudioLoader` to fetch / download the audio files. + +Then, use `OpenAIWhisperParser` to transcribe them to text. + +We need to install several python packages. + +```bash +pip install yt_dlp pydub librosa +``` + +See a [usage example and authorization instructions](/docs/integrations/document_loaders/youtube_audio). + +```python +from langchain_community.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader +from langchain_community.document_loaders.parsers import OpenAIWhisperParser, OpenAIWhisperParserLocal +``` + +### YouTube transcripts + +>[YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by `Google`. + +We need to install `youtube-transcript-api` python package. + +```bash +pip install youtube-transcript-api +``` + +See a [usage example](/docs/integrations/document_loaders/youtube_transcript). + +```python +from langchain_community.document_loaders import YoutubeLoader +``` diff --git a/langchain_md_files/integrations/platforms/huggingface.mdx b/langchain_md_files/integrations/platforms/huggingface.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0ddfaa6d58e4787382a78142752b9394d414116b --- /dev/null +++ b/langchain_md_files/integrations/platforms/huggingface.mdx @@ -0,0 +1,126 @@ +# Hugging Face + +All functionality related to the [Hugging Face Platform](https://huggingface.co./). + +## Installation + +Most of the Hugging Face integrations are available in the `langchain-huggingface` package. + +```bash +pip install langchain-huggingface +``` + +## Chat models + +### Models from Hugging Face + +We can use the `Hugging Face` LLM classes or directly use the `ChatHuggingFace` class. + +See a [usage example](/docs/integrations/chat/huggingface). + +```python +from langchain_huggingface import ChatHuggingFace +``` + +## LLMs + +### Hugging Face Local Pipelines + +Hugging Face models can be run locally through the `HuggingFacePipeline` class. + +See a [usage example](/docs/integrations/llms/huggingface_pipelines). + +```python +from langchain_huggingface import HuggingFacePipeline +``` + +## Embedding Models + +### HuggingFaceEmbeddings + +See a [usage example](/docs/integrations/text_embedding/huggingfacehub). + +```python +from langchain_huggingface import HuggingFaceEmbeddings +``` + +### HuggingFaceInstructEmbeddings + +See a [usage example](/docs/integrations/text_embedding/instruct_embeddings). + +```python +from langchain_community.embeddings import HuggingFaceInstructEmbeddings +``` + +### HuggingFaceBgeEmbeddings + +>[BGE models on the HuggingFace](https://huggingface.co./BAAI/bge-large-en) are [the best open-source embedding models](https://huggingface.co./spaces/mteb/leaderboard). 
+
+>BGE models are created by the [Beijing Academy of Artificial Intelligence (BAAI)](https://en.wikipedia.org/wiki/Beijing_Academy_of_Artificial_Intelligence). `BAAI` is a private non-profit organization engaged in AI research and development.
+
+See a [usage example](/docs/integrations/text_embedding/bge_huggingface).
+
+```python
+from langchain_community.embeddings import HuggingFaceBgeEmbeddings
+```
+
+### Hugging Face Text Embeddings Inference (TEI)
+
+>[Hugging Face Text Embeddings Inference (TEI)](https://huggingface.co./docs/text-embeddings-inference/index) is a toolkit for deploying and serving open-source
+> text embeddings and sequence classification models. `TEI` enables high-performance extraction for the most popular models,
+>including `FlagEmbedding`, `Ember`, `GTE` and `E5`.
+
+We need to install the `huggingface-hub` python package.
+
+```bash
+pip install huggingface-hub
+```
+
+See a [usage example](/docs/integrations/text_embedding/text_embeddings_inference).
+
+```python
+from langchain_community.embeddings import HuggingFaceHubEmbeddings
+```
+
+
+## Document Loaders
+
+### Hugging Face dataset
+
+>[Hugging Face Hub](https://huggingface.co./docs/hub/index) is home to over 75,000
+> [datasets](https://huggingface.co./docs/hub/index#datasets) in more than 100 languages
+> that can be used for a broad range of tasks across NLP, Computer Vision, and Audio.
+> They are used for a diverse range of tasks such as translation, automatic speech
+> recognition, and image classification.
+
+We need to install the `datasets` python package.
+
+```bash
+pip install datasets
+```
+
+See a [usage example](/docs/integrations/document_loaders/hugging_face_dataset).
+
+```python
+from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
+```
+
+
+
+## Tools
+
+### Hugging Face Hub Tools
+
+>[Hugging Face Tools](https://huggingface.co./docs/transformers/v4.29.0/en/custom_tools)
+> support text I/O and are loaded using the `load_huggingface_tool` function.
+
+We need to install several python packages.
+
+```bash
+pip install transformers huggingface_hub
+```
+
+See a [usage example](/docs/integrations/tools/huggingface_tools).
+
+```python
+from langchain_community.agent_toolkits.load_tools import load_huggingface_tool
+```
diff --git a/langchain_md_files/integrations/platforms/microsoft.mdx b/langchain_md_files/integrations/platforms/microsoft.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..fb0b4efbbb453fe61bb2e5dfd7864383806ca35d
--- /dev/null
+++ b/langchain_md_files/integrations/platforms/microsoft.mdx
@@ -0,0 +1,561 @@
+---
+keywords: [azure]
+---
+
+# Microsoft
+
+All functionality related to `Microsoft Azure` and other `Microsoft` products.
+
+## Chat Models
+
+### Azure OpenAI
+
+>[Microsoft Azure](https://en.wikipedia.org/wiki/Microsoft_Azure), often referred to as `Azure`, is a cloud computing platform run by `Microsoft`, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). `Microsoft Azure` supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.
+
+>[Azure OpenAI](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) is an `Azure` service with powerful language models from `OpenAI` including the `GPT-3`, `Codex` and `Embeddings model` series for content generation, summarization, semantic search, and natural language to code translation.
+
+```bash
+pip install langchain-openai
+```
+
+Set the environment variables to get access to the `Azure OpenAI` service.
+
+```python
+import os
+
+# Placeholder values - replace them with your own Azure OpenAI endpoint and key.
+os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-endpoint>.openai.azure.com/"
+os.environ["AZURE_OPENAI_API_KEY"] = "<your-azure-openai-api-key>"
+```
+
+## Document Loaders
+
+### Azure AI Data
+
+>[Azure AI Studio](https://ai.azure.com/) provides the capability to upload data assets
+> to cloud storage and register existing data assets from the following sources:
+>
+>- `Microsoft OneLake`
+>- `Azure Blob Storage`
+>- `Azure Data Lake gen 2`
+
+First, you need to install several python packages.
+
+```bash
+pip install azureml-fsspec azure-ai-generative
+```
+
+See a [usage example](/docs/integrations/document_loaders/azure_ai_data).
+
+```python
+from langchain.document_loaders import AzureAIDataLoader
+```
+
+
+### Azure AI Document Intelligence
+
+>[Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known
+> as `Azure Form Recognizer`) is a machine-learning
+> based service that extracts text (including handwriting), tables, document structures,
+> and key-value pairs
+> from digital or scanned PDFs, images, Office and HTML files.
+>
+> Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.
+
+First, you need to install a python package.
+
+```bash
+pip install azure-ai-documentintelligence
+```
+
+See a [usage example](/docs/integrations/document_loaders/azure_document_intelligence).
+
+```python
+from langchain.document_loaders import AzureAIDocumentIntelligenceLoader
+```
+
+
+### Azure Blob Storage
+
+>[Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.
+
+>[Azure Files](https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction) offers fully managed
+> file shares in the cloud that are accessible via the industry standard Server Message Block (`SMB`) protocol,
+> Network File System (`NFS`) protocol, and `Azure Files REST API`. `Azure Files` is based on `Azure Blob Storage`.
+
+`Azure Blob Storage` is designed for:
+- Serving images or documents directly to a browser.
+- Storing files for distributed access.
+- Streaming video and audio.
+- Writing to log files.
+- Storing data for backup and restore, disaster recovery, and archiving.
+- Storing data for analysis by an on-premises or Azure-hosted service.
+
+```bash
+pip install azure-storage-blob
+```
+
+See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container).
+
+```python
+from langchain_community.document_loaders import AzureBlobStorageContainerLoader
+```
+
+See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file).
+
+```python
+from langchain_community.document_loaders import AzureBlobStorageFileLoader
+```
+
+
+### Microsoft OneDrive
+
+>[Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file-hosting service operated by Microsoft.
+
+First, you need to install a python package.
+ +```bash +pip install o365 +``` + +See a [usage example](/docs/integrations/document_loaders/microsoft_onedrive). + +```python +from langchain_community.document_loaders import OneDriveLoader +``` + +### Microsoft OneDrive File + +>[Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file-hosting service operated by Microsoft. + +First, you need to install a python package. + +```bash +pip install o365 +``` + +```python +from langchain_community.document_loaders import OneDriveFileLoader +``` + + +### Microsoft Word + +>[Microsoft Word](https://www.microsoft.com/en-us/microsoft-365/word) is a word processor developed by Microsoft. + +See a [usage example](/docs/integrations/document_loaders/microsoft_word). + +```python +from langchain_community.document_loaders import UnstructuredWordDocumentLoader +``` + + +### Microsoft Excel + +>[Microsoft Excel](https://en.wikipedia.org/wiki/Microsoft_Excel) is a spreadsheet editor developed by +> Microsoft for Windows, macOS, Android, iOS and iPadOS. +> It features calculation or computation capabilities, graphing tools, pivot tables, and a macro programming +> language called Visual Basic for Applications (VBA). Excel forms part of the Microsoft 365 suite of software. + +The `UnstructuredExcelLoader` is used to load `Microsoft Excel` files. The loader works with both `.xlsx` and `.xls` files. +The page content will be the raw text of the Excel file. If you use the loader in `"elements"` mode, an HTML +representation of the Excel file will be available in the document metadata under the `text_as_html` key. + +See a [usage example](/docs/integrations/document_loaders/microsoft_excel). + +```python +from langchain_community.document_loaders import UnstructuredExcelLoader +``` + + +### Microsoft SharePoint + +>[Microsoft SharePoint](https://en.wikipedia.org/wiki/SharePoint) is a website-based collaboration system +> that uses workflow applications, “list” databases, and other web parts and security features to +> empower business teams to work together developed by Microsoft. + +See a [usage example](/docs/integrations/document_loaders/microsoft_sharepoint). + +```python +from langchain_community.document_loaders.sharepoint import SharePointLoader +``` + + +### Microsoft PowerPoint + +>[Microsoft PowerPoint](https://en.wikipedia.org/wiki/Microsoft_PowerPoint) is a presentation program by Microsoft. + +See a [usage example](/docs/integrations/document_loaders/microsoft_powerpoint). + +```python +from langchain_community.document_loaders import UnstructuredPowerPointLoader +``` + +### Microsoft OneNote + +First, let's install dependencies: + +```bash +pip install bs4 msal +``` + +See a [usage example](/docs/integrations/document_loaders/microsoft_onenote). + +```python +from langchain_community.document_loaders.onenote import OneNoteLoader +``` + +### Playwright URL Loader + +>[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool +> developed by `Microsoft` that allows you to programmatically control and automate +> web browsers. It is designed for end-to-end testing, scraping, and automating +> tasks across various web browsers such as `Chromium`, `Firefox`, and `WebKit`. + + +First, let's install dependencies: + +```bash +pip install playwright unstructured +``` + +See a [usage example](/docs/integrations/document_loaders/url/#playwright-url-loader). 
+
+```python
+from langchain_community.document_loaders import PlaywrightURLLoader
+```
+
+## AI Agent Memory System
+
+[AI agents](https://learn.microsoft.com/en-us/azure/cosmos-db/ai-agents) need robust memory systems that support multi-modality, offer strong operational performance, and enable agent memory sharing as well as separation.
+
+### Azure Cosmos DB
+
+AI agents can rely on Azure Cosmos DB as a unified [memory system](https://learn.microsoft.com/en-us/azure/cosmos-db/ai-agents#memory-can-make-or-break-agents) solution, enjoying speed, scale, and simplicity. This service successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it is the world's first globally distributed [NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/distributed-nosql), [relational](https://learn.microsoft.com/en-us/azure/cosmos-db/distributed-relational), and [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database) service that offers a serverless mode.
+
+Below are two available Azure Cosmos DB APIs that provide vector store functionality.
+
+### Azure Cosmos DB for MongoDB (vCore)
+
+>[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/) makes it easy to create a database with full native MongoDB support.
+> You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account's connection string.
+> Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB.
+
+#### Installation and Setup
+
+See [detailed configuration instructions](/docs/integrations/vectorstores/azure_cosmos_db).
+
+We need to install the `pymongo` python package.
+
+```bash
+pip install pymongo
+```
+
+#### Deploy Azure Cosmos DB on Microsoft Azure
+
+Azure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture.
+
+With Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and the familiar vCore architecture when migrating existing applications or building new ones.
+
+[Sign Up](https://azure.microsoft.com/en-us/free/) for free to get started today.
+
+See a [usage example](/docs/integrations/vectorstores/azure_cosmos_db).
+
+```python
+from langchain_community.vectorstores import AzureCosmosDBVectorSearch
+```
+
+### Azure Cosmos DB NoSQL
+
+>[Azure Cosmos DB for NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/vector-search) now offers vector indexing and search in preview.
+This feature is designed to handle high-dimensional vectors, enabling efficient and accurate vector search at any scale. You can now store vectors
+directly in the documents alongside your data. This means that each document in your database can contain not only traditional schema-free data,
+but also high-dimensional vectors as other properties of the documents. This colocation of data and vectors allows for efficient indexing and searching,
+as the vectors are stored in the same logical unit as the data they represent. This simplifies data management, AI application architectures, and the
+efficiency of vector-based operations.
+ +#### Installation and Setup + +See [detail configuration instructions](/docs/integrations/vectorstores/azure_cosmos_db_no_sql). + +We need to install `azure-cosmos` python package. + +```bash +pip install azure-cosmos +``` + +#### Deploy Azure Cosmos DB on Microsoft Azure + +Azure Cosmos DB offers a solution for modern apps and intelligent workloads by being very responsive with dynamic and elastic autoscale. It is available +in every Azure region and can automatically replicate data closer to users. It has SLA guaranteed low-latency and high availability. + +[Sign Up](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-python?pivots=devcontainer-codespace) for free to get started today. + +See a [usage example](/docs/integrations/vectorstores/azure_cosmos_db_no_sql). + +```python +from langchain_community.vectorstores import AzureCosmosDBNoSQLVectorSearch +``` + +### Azure Database for PostgreSQL +>[Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/service-overview) is a relational database service based on the open-source Postgres database engine. It's a fully managed database-as-a-service that can handle mission-critical workloads with predictable performance, security, high availability, and dynamic scalability. + +See [set up instructions](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/quickstart-create-server-portal) for Azure Database for PostgreSQL. + +See a [usage example](/docs/integrations/memory/postgres_chat_message_history/). Simply use the [connection string](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/connect-python?tabs=cmd%2Cpassword#add-authentication-code) from your Azure Portal. + +Since Azure Database for PostgreSQL is open-source Postgres, you can use the [LangChain's Postgres support](/docs/integrations/vectorstores/pgvector/) to connect to Azure Database for PostgreSQL. + + +### Azure AI Search + +[Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) is a cloud search service +that gives developers infrastructure, APIs, and tools for information retrieval of vector, keyword, and hybrid +queries at scale. See [here](/docs/integrations/vectorstores/azuresearch) for usage examples. + +```python +from langchain_community.vectorstores.azuresearch import AzureSearch +``` + +## Retrievers + +### Azure AI Search + +>[Azure AI Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search` or `Azure Cognitive Search` ) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications. + +>Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. 
When you create a search service, you'll work with the following capabilities:
+>- A search engine for full text search over a search index containing user-owned content
+>- Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
+>- Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
+>- Programmability through REST APIs and client libraries in Azure SDKs
+>- Azure integration at the data layer, machine learning layer, and AI (AI Services)
+
+See [set up instructions](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).
+
+See a [usage example](/docs/integrations/retrievers/azure_ai_search).
+
+```python
+from langchain_community.retrievers import AzureAISearchRetriever
+```
+
+## Vector Store
+
+### Azure Database for PostgreSQL
+
+>[Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/service-overview) is a relational database service based on the open-source Postgres database engine. It's a fully managed database-as-a-service that can handle mission-critical workloads with predictable performance, security, high availability, and dynamic scalability.
+
+See [set up instructions](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/quickstart-create-server-portal) for Azure Database for PostgreSQL.
+
+You need to [enable the pgvector extension](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-use-pgvector) in your database to use Postgres as a vector store. Once you have the extension enabled, you can use the [PGVector integration in LangChain](/docs/integrations/vectorstores/pgvector/) to connect to Azure Database for PostgreSQL.
+
+See a [usage example](/docs/integrations/vectorstores/pgvector/). Simply use the [connection string](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/connect-python?tabs=cmd%2Cpassword#add-authentication-code) from your Azure Portal.
+
+
+## Tools
+
+### Azure Container Apps dynamic sessions
+
+We need to get the `POOL_MANAGEMENT_ENDPOINT` environment variable from the Azure Container Apps service.
+See the instructions [here](/docs/integrations/tools/azure_dynamic_sessions/#setup).
+
+We need to install a python package.
+
+```bash
+pip install langchain-azure-dynamic-sessions
+```
+
+See a [usage example](/docs/integrations/tools/azure_dynamic_sessions).
+
+```python
+from langchain_azure_dynamic_sessions import SessionsPythonREPLTool
+```
+
+### Bing Search
+
+Follow the documentation [here](/docs/integrations/tools/bing_search) for detailed explanations and instructions on this tool.
+
+The environment variables `BING_SUBSCRIPTION_KEY` and `BING_SEARCH_URL` are required; get them from your Bing Search resource.
+
+```python
+from langchain_community.tools.bing_search import BingSearchResults
+from langchain_community.utilities import BingSearchAPIWrapper
+
+api_wrapper = BingSearchAPIWrapper()
+tool = BingSearchResults(api_wrapper=api_wrapper)
+```
+
+## Toolkits
+
+### Azure AI Services
+
+We need to install several python packages.
+
+```bash
+pip install azure-ai-formrecognizer azure-cognitiveservices-speech azure-ai-vision-imageanalysis
+```
+
+See a [usage example](/docs/integrations/tools/azure_ai_services).
+ +```python +from langchain_community.agent_toolkits import azure_ai_services +``` + +The `azure_ai_services` toolkit includes the following tools: + +- Image Analysis: [AzureAiServicesImageAnalysisTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.image_analysis.AzureAiServicesImageAnalysisTool.html) +- Document Intelligence: [AzureAiServicesDocumentIntelligenceTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.document_intelligence.AzureAiServicesDocumentIntelligenceTool.html) +- Speech to Text: [AzureAiServicesSpeechToTextTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.speech_to_text.AzureAiServicesSpeechToTextTool.html) +- Text to Speech: [AzureAiServicesTextToSpeechTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.text_to_speech.AzureAiServicesTextToSpeechTool.html) +- Text Analytics for Health: [AzureAiServicesTextAnalyticsForHealthTool](https://python.langchain.com/v0.2/api_reference/community/tools/langchain_community.tools.azure_ai_services.text_analytics_for_health.AzureAiServicesTextAnalyticsForHealthTool.html) + + +### Microsoft Office 365 email and calendar + +We need to install `O365` python package. + +```bash +pip install O365 +``` + + +See a [usage example](/docs/integrations/tools/office365). + +```python +from langchain_community.agent_toolkits import O365Toolkit +``` + +### Microsoft Azure PowerBI + +We need to install `azure-identity` python package. + +```bash +pip install azure-identity +``` + +See a [usage example](/docs/integrations/tools/powerbi). + +```python +from langchain_community.agent_toolkits import PowerBIToolkit +from langchain_community.utilities.powerbi import PowerBIDataset +``` + +### PlayWright Browser Toolkit + +>[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool +> developed by `Microsoft` that allows you to programmatically control and automate +> web browsers. It is designed for end-to-end testing, scraping, and automating +> tasks across various web browsers such as `Chromium`, `Firefox`, and `WebKit`. + +We need to install several python packages. + +```bash +pip install playwright lxml +``` + +See a [usage example](/docs/integrations/tools/playwright). + +```python +from langchain_community.agent_toolkits import PlayWrightBrowserToolkit +``` + +#### PlayWright Browser individual tools + +You can use individual tools from the PlayWright Browser Toolkit. + +```python +from langchain_community.tools.playwright import ClickTool +from langchain_community.tools.playwright import CurrentWebPageTool +from langchain_community.tools.playwright import ExtractHyperlinksTool +from langchain_community.tools.playwright import ExtractTextTool +from langchain_community.tools.playwright import GetElementsTool +from langchain_community.tools.playwright import NavigateTool +from langchain_community.tools.playwright import NavigateBackTool +``` + +## Graphs + +### Azure Cosmos DB for Apache Gremlin + +We need to install a python package. + +```bash +pip install gremlinpython +``` + +See a [usage example](/docs/integrations/graphs/azure_cosmosdb_gremlin). 
+ +```python +from langchain_community.graphs import GremlinGraph +from langchain_community.graphs.graph_document import GraphDocument, Node, Relationship +``` + +## Utilities + +### Bing Search API + +>[Microsoft Bing](https://www.bing.com/), commonly referred to as `Bing` or `Bing Search`, +> is a web search engine owned and operated by `Microsoft`. + +See a [usage example](/docs/integrations/tools/bing_search). + +```python +from langchain_community.utilities import BingSearchAPIWrapper +``` + +## More + +### Microsoft Presidio + +>[Presidio](https://microsoft.github.io/presidio/) (Origin from Latin praesidium ‘protection, garrison’) +> helps to ensure sensitive data is properly managed and governed. It provides fast identification and +> anonymization modules for private entities in text and images such as credit card numbers, names, +> locations, social security numbers, bitcoin wallets, US phone numbers, financial data and more. + +First, you need to install several python packages and download a `SpaCy` model. + +```bash +pip install langchain-experimental openai presidio-analyzer presidio-anonymizer spacy Faker +python -m spacy download en_core_web_lg +``` + +See [usage examples](https://python.langchain.com/v0.1/docs/guides/productionization/safety/presidio_data_anonymization). + +```python +from langchain_experimental.data_anonymizer import PresidioAnonymizer, PresidioReversibleAnonymizer +``` diff --git a/langchain_md_files/integrations/platforms/openai.mdx b/langchain_md_files/integrations/platforms/openai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..57830479e93d2d4f60980631d60632ae03fed54d --- /dev/null +++ b/langchain_md_files/integrations/platforms/openai.mdx @@ -0,0 +1,123 @@ +--- +keywords: [openai] +--- + +# OpenAI + +All functionality related to OpenAI + +>[OpenAI](https://en.wikipedia.org/wiki/OpenAI) is American artificial intelligence (AI) research laboratory +> consisting of the non-profit `OpenAI Incorporated` +> and its for-profit subsidiary corporation `OpenAI Limited Partnership`. +> `OpenAI` conducts AI research with the declared intention of promoting and developing a friendly AI. +> `OpenAI` systems run on an `Azure`-based supercomputing platform from `Microsoft`. + +>The [OpenAI API](https://platform.openai.com/docs/models) is powered by a diverse set of models with different capabilities and price points. +> +>[ChatGPT](https://chat.openai.com) is the Artificial Intelligence (AI) chatbot developed by `OpenAI`. + +## Installation and Setup + +Install the integration package with +```bash +pip install langchain-openai +``` + +Get an OpenAI api key and set it as an environment variable (`OPENAI_API_KEY`) + +## Chat model + +See a [usage example](/docs/integrations/chat/openai). + +```python +from langchain_openai import ChatOpenAI +``` + +If you are using a model hosted on `Azure`, you should use different wrapper for that: +```python +from langchain_openai import AzureChatOpenAI +``` +For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/chat/azure_chat_openai). + +## LLM + +See a [usage example](/docs/integrations/llms/openai). + +```python +from langchain_openai import OpenAI +``` + +If you are using a model hosted on `Azure`, you should use different wrapper for that: +```python +from langchain_openai import AzureOpenAI +``` +For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai). 
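+
+As a quick, hedged illustration of the shared pattern (the model name below is just an example, and we assume `OPENAI_API_KEY` is already set in the environment; neither is prescribed by this guide):
+
+```python
+from langchain_openai import OpenAI
+
+# Assumes OPENAI_API_KEY is set; the model name is an example, not a requirement.
+llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0)
+print(llm.invoke("Say hello in five words."))
+
+# The Azure-hosted variant follows the same pattern, e.g.
+# AzureOpenAI(azure_deployment="<your-deployment>", api_version="2024-02-01"),
+# assuming AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are set.
+```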
+ +## Embedding Model + +See a [usage example](/docs/integrations/text_embedding/openai) + +```python +from langchain_openai import OpenAIEmbeddings +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/chatgpt_loader). + +```python +from langchain_community.document_loaders.chatgpt import ChatGPTLoader +``` + +## Retriever + +See a [usage example](/docs/integrations/retrievers/chatgpt-plugin). + +```python +from langchain.retrievers import ChatGPTPluginRetriever +``` + +## Tools + +### Dall-E Image Generator + +>[OpenAI Dall-E](https://openai.com/dall-e-3) are text-to-image models developed by `OpenAI` +> using deep learning methodologies to generate digital images from natural language descriptions, +> called "prompts". + + +See a [usage example](/docs/integrations/tools/dalle_image_generator). + +```python +from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper +``` + +## Adapter + +See a [usage example](/docs/integrations/adapters/openai). + +```python +from langchain.adapters import openai as lc_openai +``` + +## Tokenizer + +There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens +for OpenAI LLMs. + +You can also use it to count tokens when splitting documents with +```python +from langchain.text_splitter import CharacterTextSplitter +CharacterTextSplitter.from_tiktoken_encoder(...) +``` +For a more detailed walkthrough of this, see [this notebook](/docs/how_to/split_by_token/#tiktoken) + +## Chain + +See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/moderation). + +```python +from langchain.chains import OpenAIModerationChain +``` + + diff --git a/langchain_md_files/integrations/providers/acreom.mdx b/langchain_md_files/integrations/providers/acreom.mdx new file mode 100644 index 0000000000000000000000000000000000000000..78987870a2d2fbc3958980798f724ebaf52b2f5f --- /dev/null +++ b/langchain_md_files/integrations/providers/acreom.mdx @@ -0,0 +1,15 @@ +# Acreom + +[acreom](https://acreom.com) is a dev-first knowledge base with tasks running on local `markdown` files. + +## Installation and Setup + +No installation is required. + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/acreom). + +```python +from langchain_community.document_loaders import AcreomLoader +``` diff --git a/langchain_md_files/integrations/providers/activeloop_deeplake.mdx b/langchain_md_files/integrations/providers/activeloop_deeplake.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f0bcb60afd6d31f084bfd2a2f69ed06173736524 --- /dev/null +++ b/langchain_md_files/integrations/providers/activeloop_deeplake.mdx @@ -0,0 +1,38 @@ +# Activeloop Deep Lake + +>[Activeloop Deep Lake](https://docs.activeloop.ai/) is a data lake for Deep Learning applications, allowing you to use it +> as a vector store. + +## Why Deep Lake? + +- More than just a (multi-modal) vector store. You can later use the dataset to fine-tune your own LLM models. +- Not only stores embeddings, but also the original data with automatic version control. +- Truly serverless. Doesn't require another service and can be used with major cloud providers (`AWS S3`, `GCS`, etc.) + +`Activeloop Deep Lake` supports `SelfQuery Retrieval`: +[Activeloop Deep Lake Self Query Retrieval](/docs/integrations/retrievers/self_query/activeloop_deeplake_self_query) + + +## More Resources + +1. 
[Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/) +2. [Twitter the-algorithm codebase analysis with Deep Lake](https://github.com/langchain-ai/langchain/blob/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) +3. Here is [whitepaper](https://www.deeplake.ai/whitepaper) and [academic paper](https://arxiv.org/pdf/2209.10785.pdf) for Deep Lake +4. Here is a set of additional resources available for review: [Deep Lake](https://github.com/activeloopai/deeplake), [Get started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials) + +## Installation and Setup + +Install the Python package: + +```bash +pip install deeplake +``` + + +## VectorStore + +```python +from langchain_community.vectorstores import DeepLake +``` + +See a [usage example](/docs/integrations/vectorstores/activeloop_deeplake). diff --git a/langchain_md_files/integrations/providers/ai21.mdx b/langchain_md_files/integrations/providers/ai21.mdx new file mode 100644 index 0000000000000000000000000000000000000000..60a925363b1fac575c4f819f421165fb3d4ef041 --- /dev/null +++ b/langchain_md_files/integrations/providers/ai21.mdx @@ -0,0 +1,67 @@ +# AI21 Labs + +>[AI21 Labs](https://www.ai21.com/about) is a company specializing in Natural +> Language Processing (NLP), which develops AI systems +> that can understand and generate natural language. + +This page covers how to use the `AI21` ecosystem within `LangChain`. + +## Installation and Setup + +- Get an AI21 api key and set it as an environment variable (`AI21_API_KEY`) +- Install the Python package: + +```bash +pip install langchain-ai21 +``` + +## LLMs + +See a [usage example](/docs/integrations/llms/ai21). + +### AI21 LLM + +```python +from langchain_ai21 import AI21LLM +``` + +### AI21 Contextual Answer + +You can use AI21’s contextual answers model to receive text or document, +serving as a context, and a question and return an answer based entirely on this context. + +```python +from langchain_ai21 import AI21ContextualAnswers +``` + + +## Chat models + +### AI21 Chat + +See a [usage example](/docs/integrations/chat/ai21). + +```python +from langchain_ai21 import ChatAI21 +``` + +## Embedding models + +### AI21 Embeddings + +See a [usage example](/docs/integrations/text_embedding/ai21). + +```python +from langchain_ai21 import AI21Embeddings +``` + +## Text splitters + +### AI21 Semantic Text Splitter + +See a [usage example](/docs/integrations/document_transformers/ai21_semantic_text_splitter). + +```python +from langchain_ai21 import AI21SemanticTextSplitter +``` + diff --git a/langchain_md_files/integrations/providers/ainetwork.mdx b/langchain_md_files/integrations/providers/ainetwork.mdx new file mode 100644 index 0000000000000000000000000000000000000000..fdd8393e23cb51f28267b1f808d9be56d9653734 --- /dev/null +++ b/langchain_md_files/integrations/providers/ainetwork.mdx @@ -0,0 +1,23 @@ +# AINetwork + +>[AI Network](https://www.ainetwork.ai/build-on-ain) is a layer 1 blockchain designed to accommodate +> large-scale AI models, utilizing a decentralized GPU network powered by the +> [$AIN token](https://www.ainetwork.ai/token), enriching AI-driven `NFTs` (`AINFTs`). + + +## Installation and Setup + +You need to install `ain-py` python package. 
+ +```bash +pip install ain-py +``` +You need to set the `AIN_BLOCKCHAIN_ACCOUNT_PRIVATE_KEY` environmental variable to your AIN Blockchain Account Private Key. +## Toolkit + +See a [usage example](/docs/integrations/tools/ainetwork). + +```python +from langchain_community.agent_toolkits.ainetwork.toolkit import AINetworkToolkit +``` + diff --git a/langchain_md_files/integrations/providers/airbyte.mdx b/langchain_md_files/integrations/providers/airbyte.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f1198b14861a2d3f7ef930b26e93080a3c4df5c7 --- /dev/null +++ b/langchain_md_files/integrations/providers/airbyte.mdx @@ -0,0 +1,32 @@ +# Airbyte + +>[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, +> databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases. + +## Installation and Setup + +```bash +pip install -U langchain-airbyte +``` + +:::note + +Currently, the `langchain-airbyte` library does not support Pydantic v2. +Please downgrade to Pydantic v1 to use this package. + +This package also currently requires Python 3.10+. + +::: + +The integration package doesn't require any global environment variables that need to be +set, but some integrations (e.g. `source-github`) may need credentials passed in. + +## Document loader + +### AirbyteLoader + +See a [usage example](/docs/integrations/document_loaders/airbyte). + +```python +from langchain_airbyte import AirbyteLoader +``` diff --git a/langchain_md_files/integrations/providers/alchemy.mdx b/langchain_md_files/integrations/providers/alchemy.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f1d7bbbcf75fa1ba67f432606688f101c0d3cf91 --- /dev/null +++ b/langchain_md_files/integrations/providers/alchemy.mdx @@ -0,0 +1,20 @@ +# Alchemy + +>[Alchemy](https://www.alchemy.com) is the platform to build blockchain applications. + +## Installation and Setup + +Check out the [installation guide](/docs/integrations/document_loaders/blockchain). + +## Document loader + +### BlockchainLoader on the Alchemy platform + +See a [usage example](/docs/integrations/document_loaders/blockchain). + +```python +from langchain_community.document_loaders.blockchain import ( + BlockchainDocumentLoader, + BlockchainType, +) +``` diff --git a/langchain_md_files/integrations/providers/aleph_alpha.mdx b/langchain_md_files/integrations/providers/aleph_alpha.mdx new file mode 100644 index 0000000000000000000000000000000000000000..4f8a5d0e086eb78ad022bdff58e4a064930a8a5e --- /dev/null +++ b/langchain_md_files/integrations/providers/aleph_alpha.mdx @@ -0,0 +1,36 @@ +# Aleph Alpha + +>[Aleph Alpha](https://docs.aleph-alpha.com/) was founded in 2019 with the mission to research and build the foundational technology for an era of strong AI. The team of international scientists, engineers, and innovators researches, develops, and deploys transformative AI like large language and multimodal models and runs the fastest European commercial AI cluster. + +>[The Luminous series](https://docs.aleph-alpha.com/docs/introduction/luminous/) is a family of large language models. + +## Installation and Setup + +```bash +pip install aleph-alpha-client +``` + +You have to create a new token. Please, see [instructions](https://docs.aleph-alpha.com/docs/account/#create-a-new-token). 
+ +```python +from getpass import getpass + +ALEPH_ALPHA_API_KEY = getpass() +``` + + +## LLM + +See a [usage example](/docs/integrations/llms/aleph_alpha). + +```python +from langchain_community.llms import AlephAlpha +``` + +## Text Embedding Models + +See a [usage example](/docs/integrations/text_embedding/aleph_alpha). + +```python +from langchain_community.embeddings import AlephAlphaSymmetricSemanticEmbedding, AlephAlphaAsymmetricSemanticEmbedding +``` diff --git a/langchain_md_files/integrations/providers/alibaba_cloud.mdx b/langchain_md_files/integrations/providers/alibaba_cloud.mdx new file mode 100644 index 0000000000000000000000000000000000000000..74c3045a6424ff1e2fec4ec8319f3a8ebcaf604a --- /dev/null +++ b/langchain_md_files/integrations/providers/alibaba_cloud.mdx @@ -0,0 +1,91 @@ +# Alibaba Cloud + +>[Alibaba Group Holding Limited (Wikipedia)](https://en.wikipedia.org/wiki/Alibaba_Group), or `Alibaba` +> (Chinese: 阿里巴巴), is a Chinese multinational technology company specializing in e-commerce, retail, +> Internet, and technology. +> +> [Alibaba Cloud (Wikipedia)](https://en.wikipedia.org/wiki/Alibaba_Cloud), also known as `Aliyun` +> (Chinese: 阿里云; pinyin: Ālǐyún; lit. 'Ali Cloud'), is a cloud computing company, a subsidiary +> of `Alibaba Group`. `Alibaba Cloud` provides cloud computing services to online businesses and +> Alibaba's own e-commerce ecosystem. + + +## LLMs + +### Alibaba Cloud PAI EAS + +See [installation instructions and a usage example](/docs/integrations/llms/alibabacloud_pai_eas_endpoint). + +```python +from langchain_community.llms.pai_eas_endpoint import PaiEasEndpoint +``` + +### Tongyi Qwen + +See [installation instructions and a usage example](/docs/integrations/llms/tongyi). + +```python +from langchain_community.llms import Tongyi +``` + +## Chat Models + +### Alibaba Cloud PAI EAS + +See [installation instructions and a usage example](/docs/integrations/chat/alibaba_cloud_pai_eas). + +```python +from langchain_community.chat_models import PaiEasChatEndpoint +``` + +### Tongyi Qwen Chat + +See [installation instructions and a usage example](/docs/integrations/chat/tongyi). + +```python +from langchain_community.chat_models.tongyi import ChatTongyi +``` + +## Document Loaders + +### Alibaba Cloud MaxCompute + +See [installation instructions and a usage example](/docs/integrations/document_loaders/alibaba_cloud_maxcompute). + +```python +from langchain_community.document_loaders import MaxComputeLoader +``` + +## Vector stores + +### Alibaba Cloud OpenSearch + +See [installation instructions and a usage example](/docs/integrations/vectorstores/alibabacloud_opensearch). + +```python +from langchain_community.vectorstores import AlibabaCloudOpenSearch, AlibabaCloudOpenSearchSettings +``` + +### Alibaba Cloud Tair + +See [installation instructions and a usage example](/docs/integrations/vectorstores/tair). + +```python +from langchain_community.vectorstores import Tair +``` + +### AnalyticDB + +See [installation instructions and a usage example](/docs/integrations/vectorstores/analyticdb). + +```python +from langchain_community.vectorstores import AnalyticDB +``` + +### Hologres + +See [installation instructions and a usage example](/docs/integrations/vectorstores/hologres). 
+ +```python +from langchain_community.vectorstores import Hologres +``` diff --git a/langchain_md_files/integrations/providers/analyticdb.mdx b/langchain_md_files/integrations/providers/analyticdb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7a9e551075e8595854a04704f5444deb27fac0a4 --- /dev/null +++ b/langchain_md_files/integrations/providers/analyticdb.mdx @@ -0,0 +1,31 @@ +# AnalyticDB + +>[AnalyticDB for PostgreSQL](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview) +> is a massively parallel processing (MPP) data warehousing service +> from [Alibaba Cloud](https://www.alibabacloud.com/) +>that is designed to analyze large volumes of data online. + +>`AnalyticDB for PostgreSQL` is developed based on the open-source `Greenplum Database` +> project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB +> for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and +> Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and +> column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a +> high performance level and supports highly concurrent. + +This page covers how to use the AnalyticDB ecosystem within LangChain. + +## Installation and Setup + +You need to install the `sqlalchemy` python package. + +```bash +pip install sqlalchemy +``` + +## VectorStore + +See a [usage example](/docs/integrations/vectorstores/analyticdb). + +```python +from langchain_community.vectorstores import AnalyticDB +``` diff --git a/langchain_md_files/integrations/providers/annoy.mdx b/langchain_md_files/integrations/providers/annoy.mdx new file mode 100644 index 0000000000000000000000000000000000000000..18a86fbfa398f7016a20b8765fde9140a2d0ad2a --- /dev/null +++ b/langchain_md_files/integrations/providers/annoy.mdx @@ -0,0 +1,21 @@ +# Annoy + +> [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`) +> is a C++ library with Python bindings to search for points in space that are +> close to a given query point. It also creates large read-only file-based data +> structures that are mapped into memory so that many processes may share the same data. + +## Installation and Setup + +```bash +pip install annoy +``` + + +## Vectorstore + +See a [usage example](/docs/integrations/vectorstores/annoy). + +```python +from langchain_community.vectorstores import Annoy +``` diff --git a/langchain_md_files/integrations/providers/anyscale.mdx b/langchain_md_files/integrations/providers/anyscale.mdx new file mode 100644 index 0000000000000000000000000000000000000000..8b35f0490e3ff3511b0795c559f9f783d0593e7a --- /dev/null +++ b/langchain_md_files/integrations/providers/anyscale.mdx @@ -0,0 +1,42 @@ +# Anyscale + +>[Anyscale](https://www.anyscale.com) is a platform to run, fine tune and scale LLMs via production-ready APIs. +> [Anyscale Endpoints](https://docs.anyscale.com/endpoints/overview) serve many open-source models in a cost-effective way. + +`Anyscale` also provides [an example](https://docs.anyscale.com/endpoints/model-serving/examples/langchain-integration) +how to setup `LangChain` with `Anyscale` for advanced chat agents. + +## Installation and Setup + +- Get an Anyscale Service URL, route and API key and set them as environment variables (`ANYSCALE_SERVICE_URL`,`ANYSCALE_SERVICE_ROUTE`, `ANYSCALE_SERVICE_TOKEN`). +- Please see [the Anyscale docs](https://www.anyscale.com/get-started) for more details. 
+ +We have to install the `openai` package: + +```bash +pip install openai +``` + +## LLM + +See a [usage example](/docs/integrations/llms/anyscale). + +```python +from langchain_community.llms.anyscale import Anyscale +``` + +## Chat Models + +See a [usage example](/docs/integrations/chat/anyscale). + +```python +from langchain_community.chat_models.anyscale import ChatAnyscale +``` + +## Embeddings + +See a [usage example](/docs/integrations/text_embedding/anyscale). + +```python +from langchain_community.embeddings import AnyscaleEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/apache_doris.mdx b/langchain_md_files/integrations/providers/apache_doris.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9beee729f33148e17afaf1960d16e371f20d4624 --- /dev/null +++ b/langchain_md_files/integrations/providers/apache_doris.mdx @@ -0,0 +1,22 @@ +# Apache Doris + +>[Apache Doris](https://doris.apache.org/) is a modern data warehouse for real-time analytics. +It delivers lightning-fast analytics on real-time data at scale. + +>Usually `Apache Doris` is categorized into OLAP, and it has showed excellent performance +> in [ClickBench — a Benchmark For Analytical DBMS](https://benchmark.clickhouse.com/). +> Since it has a super-fast vectorized execution engine, it could also be used as a fast vectordb. + +## Installation and Setup + +```bash +pip install pymysql +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/apache_doris). + +```python +from langchain_community.vectorstores import ApacheDoris +``` diff --git a/langchain_md_files/integrations/providers/apify.mdx b/langchain_md_files/integrations/providers/apify.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f3ee4fd62307f1b75657e29fc6bc5dc9669acc61 --- /dev/null +++ b/langchain_md_files/integrations/providers/apify.mdx @@ -0,0 +1,41 @@ +# Apify + + +>[Apify](https://apify.com) is a cloud platform for web scraping and data extraction, +>which provides an [ecosystem](https://apify.com/store) of more than a thousand +>ready-made apps called *Actors* for various scraping, crawling, and extraction use cases. + +[![Apify Actors](/img/ApifyActors.png)](https://apify.com/store) + +This integration enables you run Actors on the `Apify` platform and load their results into LangChain to feed your vector +indexes with documents and data from the web, e.g. to generate answers from websites with documentation, +blogs, or knowledge bases. + + +## Installation and Setup + +- Install the Apify API client for Python with `pip install apify-client` +- Get your [Apify API token](https://console.apify.com/account/integrations) and either set it as + an environment variable (`APIFY_API_TOKEN`) or pass it to the `ApifyWrapper` as `apify_api_token` in the constructor. + + +## Utility + +You can use the `ApifyWrapper` to run Actors on the Apify platform. + +```python +from langchain_community.utilities import ApifyWrapper +``` + +For more information on this wrapper, see [the API reference](https://python.langchain.com/v0.2/api_reference/community/utilities/langchain_community.utilities.apify.ApifyWrapper.html). + + +## Document loader + +You can also use our `ApifyDatasetLoader` to get data from Apify dataset. + +```python +from langchain_community.document_loaders import ApifyDatasetLoader +``` + +For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset). 
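+
+As a minimal sketch (the dataset ID and the `text`/`url` item fields below are
+placeholders, not values from a real dataset), an existing dataset can be mapped
+into LangChain `Document` objects like this:
+
+```python
+from langchain_core.documents import Document
+from langchain_community.document_loaders import ApifyDatasetLoader
+
+# Map each raw dataset item to a LangChain Document.
+loader = ApifyDatasetLoader(
+    dataset_id="your-dataset-id",  # placeholder: ID of an existing Apify dataset
+    dataset_mapping_function=lambda item: Document(
+        page_content=item["text"], metadata={"source": item["url"]}
+    ),
+)
+
+docs = loader.load()
+```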
diff --git a/langchain_md_files/integrations/providers/arangodb.mdx b/langchain_md_files/integrations/providers/arangodb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..ff2d312fa9e76ef3087c8d8d74cbce9c057f0cd8 --- /dev/null +++ b/langchain_md_files/integrations/providers/arangodb.mdx @@ -0,0 +1,25 @@ +# ArangoDB + +>[ArangoDB](https://github.com/arangodb/arangodb) is a scalable graph database system to +> drive value from connected data, faster. Native graphs, an integrated search engine, and JSON support, via a single query language. ArangoDB runs on-prem, in the cloud – anywhere. + +## Installation and Setup + +Install the [ArangoDB Python Driver](https://github.com/ArangoDB-Community/python-arango) package with + +```bash +pip install python-arango +``` + +## Graph QA Chain + +Connect your `ArangoDB` Database with a chat model to get insights on your data. + +See the notebook example [here](/docs/integrations/graphs/arangodb). + +```python +from arango import ArangoClient + +from langchain_community.graphs import ArangoGraph +from langchain.chains import ArangoGraphQAChain +``` diff --git a/langchain_md_files/integrations/providers/arcee.mdx b/langchain_md_files/integrations/providers/arcee.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b685dd9b2d72fe0ddeab5962b1cf64b2548dbd59 --- /dev/null +++ b/langchain_md_files/integrations/providers/arcee.mdx @@ -0,0 +1,30 @@ +# Arcee + +>[Arcee](https://www.arcee.ai/about/about-us) enables the development and advancement +> of what we coin as SLMs—small, specialized, secure, and scalable language models. +> By offering a SLM Adaptation System and a seamless, secure integration, +> `Arcee` empowers enterprises to harness the full potential of +> domain-adapted language models, driving the transformative +> innovation in operations. + + +## Installation and Setup + +Get your `Arcee API` key. + + +## LLMs + +See a [usage example](/docs/integrations/llms/arcee). + +```python +from langchain_community.llms import Arcee +``` + +## Retrievers + +See a [usage example](/docs/integrations/retrievers/arcee). + +```python +from langchain_community.retrievers import ArceeRetriever +``` diff --git a/langchain_md_files/integrations/providers/arcgis.mdx b/langchain_md_files/integrations/providers/arcgis.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c7a00fd7ffcc8454c35d0a4458e1ebfa1402d4be --- /dev/null +++ b/langchain_md_files/integrations/providers/arcgis.mdx @@ -0,0 +1,27 @@ +# ArcGIS + +>[ArcGIS](https://www.esri.com/en-us/arcgis/about-arcgis/overview) is a family of client, +> server and online geographic information system software developed and maintained by [Esri](https://www.esri.com/). +> +>`ArcGISLoader` uses the `arcgis` package. +> `arcgis` is a Python library for the vector and raster analysis, geocoding, map making, +> routing and directions. It administers, organizes and manages users, +> groups and information items in your GIS. +>It enables access to ready-to-use maps and curated geographic data from `Esri` +> and other authoritative sources, and works with your own data as well. + +## Installation and Setup + +We have to install the `arcgis` package. + +```bash +pip install -U arcgis +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/arcgis). 
+ +```python +from langchain_community.document_loaders import ArcGISLoader +``` diff --git a/langchain_md_files/integrations/providers/argilla.mdx b/langchain_md_files/integrations/providers/argilla.mdx new file mode 100644 index 0000000000000000000000000000000000000000..fc4232e0ec9c488ab02fb73ee5811c2317223709 --- /dev/null +++ b/langchain_md_files/integrations/providers/argilla.mdx @@ -0,0 +1,25 @@ +# Argilla + +>[Argilla](https://argilla.io/) is an open-source data curation platform for LLMs. +> Using `Argilla`, everyone can build robust language models through faster data curation +> using both human and machine feedback. `Argilla` provides support for each step in the MLOps cycle, +> from data labeling to model monitoring. + +## Installation and Setup + +Get your [API key](https://platform.openai.com/account/api-keys). + +Install the Python package: + +```bash +pip install argilla +``` + +## Callbacks + + +```python +from langchain.callbacks import ArgillaCallbackHandler +``` + +See an [example](/docs/integrations/callbacks/argilla). diff --git a/langchain_md_files/integrations/providers/arize.mdx b/langchain_md_files/integrations/providers/arize.mdx new file mode 100644 index 0000000000000000000000000000000000000000..1f018195ac9138d20692f5b7de3e387d149ffd10 --- /dev/null +++ b/langchain_md_files/integrations/providers/arize.mdx @@ -0,0 +1,24 @@ +# Arize + +[Arize](https://arize.com) is an AI observability and LLM evaluation platform that offers +support for LangChain applications, providing detailed traces of input, embeddings, retrieval, +functions, and output messages. + + +## Installation and Setup + +First, you need to install `arize` python package. + +```bash +pip install arize +``` + +Second, you need to set up your [Arize account](https://app.arize.com/auth/join) +and get your `API_KEY` or `SPACE_KEY`. + + +## Callback handler + +```python +from langchain_community.callbacks import ArizeCallbackHandler +``` diff --git a/langchain_md_files/integrations/providers/arxiv.mdx b/langchain_md_files/integrations/providers/arxiv.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7fabf7396c1b509e7d20122b998a27f17e990c57 --- /dev/null +++ b/langchain_md_files/integrations/providers/arxiv.mdx @@ -0,0 +1,36 @@ +# Arxiv + +>[arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics, +> mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and +> systems science, and economics. + + +## Installation and Setup + +First, you need to install `arxiv` python package. + +```bash +pip install arxiv +``` + +Second, you need to install `PyMuPDF` python package which transforms PDF files downloaded from the `arxiv.org` site into the text format. + +```bash +pip install pymupdf +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/arxiv). + +```python +from langchain_community.document_loaders import ArxivLoader +``` + +## Retriever + +See a [usage example](/docs/integrations/retrievers/arxiv). 
+
+```python
+from langchain_community.retrievers import ArxivRetriever
+```
diff --git a/langchain_md_files/integrations/providers/ascend.mdx b/langchain_md_files/integrations/providers/ascend.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..b8c1769a48965c036cc8d43ae4790eaf8241c9f8
--- /dev/null
+++ b/langchain_md_files/integrations/providers/ascend.mdx
@@ -0,0 +1,24 @@
+# Ascend
+
+>[Ascend](https://www.hiascend.com/) is a Neural Processing Unit (NPU) provided by Huawei.
+
+This page covers how to use the Ascend NPU with LangChain.
+
+### Installation
+
+Install the `torch-npu` package:
+
+```bash
+pip install torch-npu
+```
+
+Then follow the remaining setup steps:
+* Install CANN as shown [here](https://www.hiascend.com/document/detail/zh/canncommercial/700/quickstart/quickstart/quickstart_18_0002.html).
+
+### Embedding Models
+
+See a [usage example](/docs/integrations/text_embedding/ascend).
+
+```python
+from langchain_community.embeddings import AscendEmbeddings
+```
diff --git a/langchain_md_files/integrations/providers/asknews.mdx b/langchain_md_files/integrations/providers/asknews.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..1aa6dd81e4a6e2c767372f289b14358aced1cc63
--- /dev/null
+++ b/langchain_md_files/integrations/providers/asknews.mdx
@@ -0,0 +1,33 @@
+# AskNews
+
+[AskNews](https://asknews.app/) enhances language models with up-to-date global or historical news
+by processing and indexing over 300,000 articles daily, providing prompt-optimized responses
+through a low-latency endpoint, and ensuring transparency and diversity in its news coverage.
+
+## Installation and Setup
+
+First, you need to install the `asknews` Python package.
+
+```bash
+pip install asknews
+```
+
+You also need to set your AskNews API credentials, which can be generated at
+the [AskNews console](https://my.asknews.app/).
+
+
+## Retriever
+
+See a [usage example](/docs/integrations/retrievers/asknews).
+
+```python
+from langchain_community.retrievers import AskNewsRetriever
+```
+
+## Tool
+
+See a [usage example](/docs/integrations/tools/asknews).
+
+```python
+from langchain_community.tools.asknews import AskNewsSearch
+```
diff --git a/langchain_md_files/integrations/providers/assemblyai.mdx b/langchain_md_files/integrations/providers/assemblyai.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..dc666f2fc366f554d2364c2dc2f90a8a4f19e084
--- /dev/null
+++ b/langchain_md_files/integrations/providers/assemblyai.mdx
@@ -0,0 +1,42 @@
+# AssemblyAI
+
+>[AssemblyAI](https://www.assemblyai.com/) builds `Speech AI` models for tasks like
+> speech-to-text, speaker diarization, speech summarization, and more.
+> `AssemblyAI’s` Speech AI models include accurate speech-to-text for voice data
+> (such as calls, virtual meetings, and podcasts), speaker detection, sentiment analysis,
+> chapter detection, and PII redaction.
+
+
+
+## Installation and Setup
+
+Get your [API key](https://www.assemblyai.com/dashboard/signup).
+
+Install the `assemblyai` package.
+
+```bash
+pip install -U assemblyai
+```
+
+## Document Loader
+
+### AssemblyAI Audio Transcript
+
+The `AssemblyAIAudioTranscriptLoader` transcribes audio files with the `AssemblyAI API`
+and loads the transcribed text into documents.
+
+See a [usage example](/docs/integrations/document_loaders/assemblyai).
+ +```python +from langchain_community.document_loaders import AssemblyAIAudioTranscriptLoader +``` + +### AssemblyAI Audio Loader By Id + +The `AssemblyAIAudioLoaderById` uses the AssemblyAI API to get an existing +transcription and loads the transcribed text into one or more Documents, +depending on the specified format. + +```python +from langchain_community.document_loaders import AssemblyAIAudioLoaderById +``` diff --git a/langchain_md_files/integrations/providers/astradb.mdx b/langchain_md_files/integrations/providers/astradb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d545d1ea0262596109343c0ec06711d2d003fb7b --- /dev/null +++ b/langchain_md_files/integrations/providers/astradb.mdx @@ -0,0 +1,150 @@ +# Astra DB + +> [DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless +> vector-capable database built on `Apache Cassandra®`and made conveniently available +> through an easy-to-use JSON API. + +See a [tutorial provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/tutorials/chatbot.html). + +## Installation and Setup + +Install the following Python package: +```bash +pip install "langchain-astradb>=0.1.0" +``` + +Get the [connection secrets](https://docs.datastax.com/en/astra/astra-db-vector/get-started/quickstart.html). +Set up the following environment variables: + +```python +ASTRA_DB_APPLICATION_TOKEN="TOKEN" +ASTRA_DB_API_ENDPOINT="API_ENDPOINT" +``` + +## Vector Store + +```python +from langchain_astradb import AstraDBVectorStore + +vector_store = AstraDBVectorStore( + embedding=my_embedding, + collection_name="my_store", + api_endpoint=ASTRA_DB_API_ENDPOINT, + token=ASTRA_DB_APPLICATION_TOKEN, +) +``` + +Learn more in the [example notebook](/docs/integrations/vectorstores/astradb). + +See the [example provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/integrations/langchain.html). + +## Chat message history + +```python +from langchain_astradb import AstraDBChatMessageHistory + +message_history = AstraDBChatMessageHistory( + session_id="test-session", + api_endpoint=ASTRA_DB_API_ENDPOINT, + token=ASTRA_DB_APPLICATION_TOKEN, +) +``` + +See the [usage example](/docs/integrations/memory/astradb_chat_message_history#example). + +## LLM Cache + +```python +from langchain.globals import set_llm_cache +from langchain_astradb import AstraDBCache + +set_llm_cache(AstraDBCache( + api_endpoint=ASTRA_DB_API_ENDPOINT, + token=ASTRA_DB_APPLICATION_TOKEN, +)) +``` + +Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the Astra DB section). + + +## Semantic LLM Cache + +```python +from langchain.globals import set_llm_cache +from langchain_astradb import AstraDBSemanticCache + +set_llm_cache(AstraDBSemanticCache( + embedding=my_embedding, + api_endpoint=ASTRA_DB_API_ENDPOINT, + token=ASTRA_DB_APPLICATION_TOKEN, +)) +``` + +Learn more in the [example notebook](/docs/integrations/llm_caching#astra-db-caches) (scroll to the appropriate section). + +Learn more in the [example notebook](/docs/integrations/memory/astradb_chat_message_history). + +## Document loader + +```python +from langchain_astradb import AstraDBLoader + +loader = AstraDBLoader( + collection_name="my_collection", + api_endpoint=ASTRA_DB_API_ENDPOINT, + token=ASTRA_DB_APPLICATION_TOKEN, +) +``` + +Learn more in the [example notebook](/docs/integrations/document_loaders/astradb). 
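+
+The loader follows the standard LangChain document-loader interface, so (as a sketch,
+assuming the collection above already holds documents) you can either materialize
+everything at once or stream documents lazily:
+
+```python
+# Load every document from the collection at once...
+docs = loader.load()
+
+# ...or iterate lazily, which is gentler on memory for large collections.
+for doc in loader.lazy_load():
+    print(doc.metadata)
+```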
+ +## Self-querying retriever + +```python +from langchain_astradb import AstraDBVectorStore +from langchain.retrievers.self_query.base import SelfQueryRetriever + +vector_store = AstraDBVectorStore( + embedding=my_embedding, + collection_name="my_store", + api_endpoint=ASTRA_DB_API_ENDPOINT, + token=ASTRA_DB_APPLICATION_TOKEN, +) + +retriever = SelfQueryRetriever.from_llm( + my_llm, + vector_store, + document_content_description, + metadata_field_info +) +``` + +Learn more in the [example notebook](/docs/integrations/retrievers/self_query/astradb). + +## Store + +```python +from langchain_astradb import AstraDBStore + +store = AstraDBStore( + collection_name="my_kv_store", + api_endpoint=ASTRA_DB_API_ENDPOINT, + token=ASTRA_DB_APPLICATION_TOKEN, +) +``` + +Learn more in the [example notebook](/docs/integrations/stores/astradb#astradbstore). + +## Byte Store + +```python +from langchain_astradb import AstraDBByteStore + +store = AstraDBByteStore( + collection_name="my_kv_store", + api_endpoint=ASTRA_DB_API_ENDPOINT, + token=ASTRA_DB_APPLICATION_TOKEN, +) +``` + +Learn more in the [example notebook](/docs/integrations/stores/astradb#astradbbytestore). diff --git a/langchain_md_files/integrations/providers/atlas.mdx b/langchain_md_files/integrations/providers/atlas.mdx new file mode 100644 index 0000000000000000000000000000000000000000..06545aca112a9500ee23ee1b5ff5f22b17655d33 --- /dev/null +++ b/langchain_md_files/integrations/providers/atlas.mdx @@ -0,0 +1,19 @@ +# Atlas + +>[Nomic Atlas](https://docs.nomic.ai/index.html) is a platform for interacting with both +> small and internet scale unstructured datasets. + + +## Installation and Setup + +- Install the Python package with `pip install nomic` +- `Nomic` is also included in langchains poetry extras `poetry install -E all` + + +## VectorStore + +See a [usage example](/docs/integrations/vectorstores/atlas). + +```python +from langchain_community.vectorstores import AtlasDB +``` \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/azlyrics.mdx b/langchain_md_files/integrations/providers/azlyrics.mdx new file mode 100644 index 0000000000000000000000000000000000000000..78cbbc329d62df6e6f786a4e82d895ba81a9fadf --- /dev/null +++ b/langchain_md_files/integrations/providers/azlyrics.mdx @@ -0,0 +1,16 @@ +# AZLyrics + +>[AZLyrics](https://www.azlyrics.com/) is a large, legal, every day growing collection of lyrics. + +## Installation and Setup + +There isn't any special setup for it. + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/azlyrics). + +```python +from langchain_community.document_loaders import AZLyricsLoader +``` diff --git a/langchain_md_files/integrations/providers/bagel.mdx b/langchain_md_files/integrations/providers/bagel.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d76aeff4b60a5c63babd1a0ab51d531088fcc16a --- /dev/null +++ b/langchain_md_files/integrations/providers/bagel.mdx @@ -0,0 +1,21 @@ +# Bagel + +> [Bagel](https://www.bagel.net/) (`Open Vector Database for AI`), is like GitHub for AI data. +It is a collaborative platform where users can create, +share, and manage vector datasets. It can support private projects for independent developers, +internal collaborations for enterprises, and public contributions for data DAOs. + +## Installation and Setup + +```bash +pip install bagelML +``` + + +## VectorStore + +See a [usage example](/docs/integrations/vectorstores/bagel). 
+ +```python +from langchain_community.vectorstores import Bagel +``` diff --git a/langchain_md_files/integrations/providers/bageldb.mdx b/langchain_md_files/integrations/providers/bageldb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..dc9a8ea708ffdf5750921ad30a05b2663ebead27 --- /dev/null +++ b/langchain_md_files/integrations/providers/bageldb.mdx @@ -0,0 +1,21 @@ +# BagelDB + +> [BagelDB](https://www.bageldb.ai/) (`Open Vector Database for AI`), is like GitHub for AI data. +It is a collaborative platform where users can create, +share, and manage vector datasets. It can support private projects for independent developers, +internal collaborations for enterprises, and public contributions for data DAOs. + +## Installation and Setup + +```bash +pip install betabageldb +``` + + +## VectorStore + +See a [usage example](/docs/integrations/vectorstores/bageldb). + +```python +from langchain_community.vectorstores import Bagel +``` diff --git a/langchain_md_files/integrations/providers/baichuan.mdx b/langchain_md_files/integrations/providers/baichuan.mdx new file mode 100644 index 0000000000000000000000000000000000000000..409a66d6f8c6706dcff04e401d8b0b0a848b4372 --- /dev/null +++ b/langchain_md_files/integrations/providers/baichuan.mdx @@ -0,0 +1,33 @@ +# Baichuan + +>[Baichuan Inc.](https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI, +> dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness. + + +## Installation and Setup + +Register and get an API key [here](https://platform.baichuan-ai.com/). + +## LLMs + +See a [usage example](/docs/integrations/llms/baichuan). + +```python +from langchain_community.llms import BaichuanLLM +``` + +## Chat models + +See a [usage example](/docs/integrations/chat/baichuan). + +```python +from langchain_community.chat_models import ChatBaichuan +``` + +## Embedding models + +See a [usage example](/docs/integrations/text_embedding/baichuan). + +```python +from langchain_community.embeddings import BaichuanTextEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/baidu.mdx b/langchain_md_files/integrations/providers/baidu.mdx new file mode 100644 index 0000000000000000000000000000000000000000..bd5b1ce54de36f2a0c8b1d737f81338c2b19515d --- /dev/null +++ b/langchain_md_files/integrations/providers/baidu.mdx @@ -0,0 +1,72 @@ +# Baidu + +>[Baidu Cloud](https://cloud.baidu.com/) is a cloud service provided by `Baidu, Inc.`, +> headquartered in Beijing. It offers a cloud storage service, client software, +> file management, resource sharing, and Third Party Integration. + + +## Installation and Setup + +Register and get the `Qianfan` `AK` and `SK` keys [here](https://cloud.baidu.com/product/wenxinworkshop). + +## LLMs + +### Baidu Qianfan + +See a [usage example](/docs/integrations/llms/baidu_qianfan_endpoint). + +```python +from langchain_community.llms import QianfanLLMEndpoint +``` + +## Chat models + +### Qianfan Chat Endpoint + +See a [usage example](/docs/integrations/chat/baidu_qianfan_endpoint). + +```python +from langchain_community.chat_models import QianfanChatEndpoint +``` + +## Embedding models + +### Baidu Qianfan + +See a [usage example](/docs/integrations/text_embedding/baidu_qianfan_endpoint). 
+ +```python +from langchain_community.embeddings import QianfanEmbeddingsEndpoint +``` + +## Document loaders + +### Baidu BOS Directory Loader + +```python +from langchain_community.document_loaders.baiducloud_bos_directory import BaiduBOSDirectoryLoader +``` + +### Baidu BOS File Loader + +```python +from langchain_community.document_loaders.baiducloud_bos_file import BaiduBOSFileLoader +``` + +## Vector stores + +### Baidu Cloud ElasticSearch VectorSearch + +See a [usage example](/docs/integrations/vectorstores/baiducloud_vector_search). + +```python +from langchain_community.vectorstores import BESVectorStore +``` + +### Baidu VectorDB + +See a [usage example](/docs/integrations/vectorstores/baiduvectordb). + +```python +from langchain_community.vectorstores import BaiduVectorDB +``` diff --git a/langchain_md_files/integrations/providers/bananadev.mdx b/langchain_md_files/integrations/providers/bananadev.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9972bc159a873e7f05f02a9c63debb960150832c --- /dev/null +++ b/langchain_md_files/integrations/providers/bananadev.mdx @@ -0,0 +1,68 @@ +# Banana + +>[Banana](https://www.banana.dev/) provided serverless GPU inference for AI models, +> a CI/CD build pipeline and a simple Python framework (`Potassium`) to server your models. + +This page covers how to use the [Banana](https://www.banana.dev) ecosystem within LangChain. + +## Installation and Setup + +- Install the python package `banana-dev`: + +```bash +pip install banana-dev +``` + +- Get an Banana api key from the [Banana.dev dashboard](https://app.banana.dev) and set it as an environment variable (`BANANA_API_KEY`) +- Get your model's key and url slug from the model's details page. + +## Define your Banana Template + +You'll need to set up a Github repo for your Banana app. You can get started in 5 minutes using [this guide](https://docs.banana.dev/banana-docs/). + +Alternatively, for a ready-to-go LLM example, you can check out Banana's [CodeLlama-7B-Instruct-GPTQ](https://github.com/bananaml/demo-codellama-7b-instruct-gptq) GitHub repository. Just fork it and deploy it within Banana. + +Other starter repos are available [here](https://github.com/orgs/bananaml/repositories?q=demo-&type=all&language=&sort=). + +## Build the Banana app + +To use Banana apps within Langchain, you must include the `outputs` key +in the returned json, and the value must be a string. + +```python +# Return the results as a dictionary +result = {'outputs': result} +``` + +An example inference function would be: + +```python +@app.handler("/") +def handler(context: dict, request: Request) -> Response: + """Handle a request to generate code from a prompt.""" + model = context.get("model") + tokenizer = context.get("tokenizer") + max_new_tokens = request.json.get("max_new_tokens", 512) + temperature = request.json.get("temperature", 0.7) + prompt = request.json.get("prompt") + prompt_template=f'''[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: + {prompt} + [/INST] + ''' + input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() + output = model.generate(inputs=input_ids, temperature=temperature, max_new_tokens=max_new_tokens) + result = tokenizer.decode(output[0]) + return Response(json={"outputs": result}, status=200) +``` + +This example is from the `app.py` file in [CodeLlama-7B-Instruct-GPTQ](https://github.com/bananaml/demo-codellama-7b-instruct-gptq). 
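+
+Before deploying, it can help to sanity-check that the JSON your handler returns really
+carries a string under the `outputs` key, since that is the value the LangChain wrapper
+reads back. A minimal sketch (the payload below is a hypothetical example, not real model output):
+
+```python
+# Hypothetical response body produced by the Potassium handler above.
+response_json = {"outputs": "def solve():\n    ..."}
+
+# LangChain expects `outputs` to be present and to be a string.
+assert isinstance(response_json.get("outputs"), str)
+```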
+ + +## LLM + + +```python +from langchain_community.llms import Banana +``` + +See a [usage example](/docs/integrations/llms/banana). diff --git a/langchain_md_files/integrations/providers/beam.mdx b/langchain_md_files/integrations/providers/beam.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7f723eb0decc4f9cdbd568e9ebbd336ebab18c58 --- /dev/null +++ b/langchain_md_files/integrations/providers/beam.mdx @@ -0,0 +1,28 @@ +# Beam + +>[Beam](https://www.beam.cloud/) is a cloud computing platform that allows you to run your code +> on remote servers with GPUs. + + +## Installation and Setup + +- [Create an account](https://www.beam.cloud/) +- Install the Beam CLI with `curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh` +- Register API keys with `beam configure` +- Set environment variables (`BEAM_CLIENT_ID`) and (`BEAM_CLIENT_SECRET`) +- Install the Beam SDK: + +```bash +pip install beam-sdk +``` + + +## LLMs + +See a [usage example](/docs/integrations/llms/beam). + +See another example in the [Beam documentation](https://docs.beam.cloud/examples/langchain). + +```python +from langchain_community.llms.beam import Beam +``` diff --git a/langchain_md_files/integrations/providers/beautiful_soup.mdx b/langchain_md_files/integrations/providers/beautiful_soup.mdx new file mode 100644 index 0000000000000000000000000000000000000000..289d4059fab015854c2f5976f433e054c291f3be --- /dev/null +++ b/langchain_md_files/integrations/providers/beautiful_soup.mdx @@ -0,0 +1,20 @@ +# Beautiful Soup + +>[Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/) is a Python package for parsing +> HTML and XML documents (including having malformed markup, i.e. non-closed tags, so named after tag soup). +> It creates a parse tree for parsed pages that can be used to extract data from HTML,[3] which +> is useful for web scraping. + +## Installation and Setup + +```bash +pip install beautifulsoup4 +``` + +## Document Transformer + +See a [usage example](/docs/integrations/document_transformers/beautiful_soup). + +```python +from langchain_community.document_loaders import BeautifulSoupTransformer +``` diff --git a/langchain_md_files/integrations/providers/bibtex.mdx b/langchain_md_files/integrations/providers/bibtex.mdx new file mode 100644 index 0000000000000000000000000000000000000000..09cc2fd93d17503ceb595cf608c90a9297f0fe2f --- /dev/null +++ b/langchain_md_files/integrations/providers/bibtex.mdx @@ -0,0 +1,20 @@ +# BibTeX + +>[BibTeX](https://www.ctan.org/pkg/bibtex) is a file format and reference management system commonly used in conjunction with `LaTeX` typesetting. It serves as a way to organize and store bibliographic information for academic and research documents. + +## Installation and Setup + +We have to install the `bibtexparser` and `pymupdf` packages. + +```bash +pip install bibtexparser pymupdf +``` + + +## Document loader + +See a [usage example](/docs/integrations/document_loaders/bibtex). + +```python +from langchain_community.document_loaders import BibtexLoader +``` diff --git a/langchain_md_files/integrations/providers/bilibili.mdx b/langchain_md_files/integrations/providers/bilibili.mdx new file mode 100644 index 0000000000000000000000000000000000000000..ec497ec509d11993951aeab9d9eed662b5166199 --- /dev/null +++ b/langchain_md_files/integrations/providers/bilibili.mdx @@ -0,0 +1,17 @@ +# BiliBili + +>[Bilibili](https://www.bilibili.tv/) is one of the most beloved long-form video sites in China. 
+ +## Installation and Setup + +```bash +pip install bilibili-api-python +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/bilibili). + +```python +from langchain_community.document_loaders import BiliBiliLoader +``` diff --git a/langchain_md_files/integrations/providers/bittensor.mdx b/langchain_md_files/integrations/providers/bittensor.mdx new file mode 100644 index 0000000000000000000000000000000000000000..137069077dbdc064495499b9abefd4a203768722 --- /dev/null +++ b/langchain_md_files/integrations/providers/bittensor.mdx @@ -0,0 +1,17 @@ +# Bittensor + +>[Neural Internet Bittensor](https://neuralinternet.ai/) network, an open source protocol +> that powers a decentralized, blockchain-based, machine learning network. + +## Installation and Setup + +Get your API_KEY from [Neural Internet](https://neuralinternet.ai/). + + +## LLMs + +See a [usage example](/docs/integrations/llms/bittensor). + +```python +from langchain_community.llms import NIBittensorLLM +``` diff --git a/langchain_md_files/integrations/providers/blackboard.mdx b/langchain_md_files/integrations/providers/blackboard.mdx new file mode 100644 index 0000000000000000000000000000000000000000..09312bc4dfa06b0fb7d189a8b25a2af9f03775ac --- /dev/null +++ b/langchain_md_files/integrations/providers/blackboard.mdx @@ -0,0 +1,22 @@ +# Blackboard + +>[Blackboard Learn](https://en.wikipedia.org/wiki/Blackboard_Learn) (previously the `Blackboard Learning Management System`) +> is a web-based virtual learning environment and learning management system developed by Blackboard Inc. +> The software features course management, customizable open architecture, and scalable design that allows +> integration with student information systems and authentication protocols. It may be installed on local servers, +> hosted by `Blackboard ASP Solutions`, or provided as Software as a Service hosted on Amazon Web Services. +> Its main purposes are stated to include the addition of online elements to courses traditionally delivered +> face-to-face and development of completely online courses with few or no face-to-face meetings. + +## Installation and Setup + +There isn't any special setup for it. + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/blackboard). + +```python +from langchain_community.document_loaders import BlackboardLoader + +``` diff --git a/langchain_md_files/integrations/providers/bookendai.mdx b/langchain_md_files/integrations/providers/bookendai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..e5eecde38d7d003dcd96e4d1df9515ce79d02682 --- /dev/null +++ b/langchain_md_files/integrations/providers/bookendai.mdx @@ -0,0 +1,18 @@ +# bookend.ai + +LangChain implements an integration with embeddings provided by [bookend.ai](https://bookend.ai/). + + +## Installation and Setup + + +You need to register and get the `API_KEY` +from the [bookend.ai](https://bookend.ai/) website. + +## Embedding model + +See a [usage example](/docs/integrations/text_embedding/bookend). 
+ +```python +from langchain_community.embeddings import BookendEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/box.mdx b/langchain_md_files/integrations/providers/box.mdx new file mode 100644 index 0000000000000000000000000000000000000000..3fde28d556bcba1249281af429f7d6aa3f27ff8d --- /dev/null +++ b/langchain_md_files/integrations/providers/box.mdx @@ -0,0 +1,179 @@ +# Box + +[Box](https://box.com) is the Intelligent Content Cloud, a single platform that enables +organizations to fuel collaboration, manage the entire content lifecycle, secure critical content, +and transform business workflows with enterprise AI. Founded in 2005, Box simplifies work for +leading global organizations, including AstraZeneca, JLL, Morgan Stanley, and Nationwide. + +In this package, we make available a number of ways to include Box content in your AI workflows. + +### Installation and setup + +```bash +pip install -U langchain-box + +``` + +# langchain-box + +This package contains the LangChain integration with Box. For more information about +Box, check out our [developer documentation](https://developer.box.com). + +## Pre-requisites + +In order to integrate with Box, you need a few things: + +* A Box instance — if you are not a current Box customer, sign up for a +[free dev account](https://account.box.com/signup/n/developer#ty9l3). +* A Box app — more on how to +[create an app](https://developer.box.com/guides/getting-started/first-application/) +* Your app approved in your Box instance — This is done by your admin. +The good news is if you are using a free developer account, you are the admin. +[Authorize your app](https://developer.box.com/guides/authorization/custom-app-approval/#manual-approval) + +## Authentication + +The `box-langchain` package offers some flexibility to authentication. The +most basic authentication method is by using a developer token. This can be +found in the [Box developer console](https://account.box.com/developers/console) +on the configuration screen. This token is purposely short-lived (1 hour) and is +intended for development. With this token, you can add it to your environment as +`BOX_DEVELOPER_TOKEN`, you can pass it directly to the loader, or you can use the +`BoxAuth` authentication helper class. + +We will cover passing it directly to the loader in the section below. + +### BoxAuth helper class + +`BoxAuth` supports the following authentication methods: + +* Token — either a developer token or any token generated through the Box SDK +* JWT with a service account +* JWT with a specified user +* CCG with a service account +* CCG with a specified user + +:::note +If using JWT authentication, you will need to download the configuration from the Box +developer console after generating your public/private key pair. Place this file in your +application directory structure somewhere. You will use the path to this file when using +the `BoxAuth` helper class. +::: + +For more information, learn about how to +[set up a Box application](https://developer.box.com/guides/getting-started/first-application/), +and check out the +[Box authentication guide](https://developer.box.com/guides/authentication/select/) +for more about our different authentication options. + +Examples: + +**Token** + +```python +from langchain_box.document_loaders import BoxLoader +from langchain_box.utilities import BoxAuth, BoxAuthType + +auth = BoxAuth( + auth_type=BoxAuthType.TOKEN, + box_developer_token=box_developer_token +) + +loader = BoxLoader( + box_auth=auth, + ... 
+) +``` + +**JWT with a service account** + +```python +from langchain_box.document_loaders import BoxLoader +from langchain_box.utilities import BoxAuth, BoxAuthType + +auth = BoxAuth( + auth_type=BoxAuthType.JWT, + box_jwt_path=box_jwt_path +) + +loader = BoxLoader( + box_auth=auth, + ... +``` + +**JWT with a specified user** + +```python +from langchain_box.document_loaders import BoxLoader +from langchain_box.utilities import BoxAuth, BoxAuthType + +auth = BoxAuth( + auth_type=BoxAuthType.JWT, + box_jwt_path=box_jwt_path, + box_user_id=box_user_id +) + +loader = BoxLoader( + box_auth=auth, + ... +``` + +**CCG with a service account** + +```python +from langchain_box.document_loaders import BoxLoader +from langchain_box.utilities import BoxAuth, BoxAuthType + +auth = BoxAuth( + auth_type=BoxAuthType.CCG, + box_client_id=box_client_id, + box_client_secret=box_client_secret, + box_enterprise_id=box_enterprise_id +) + +loader = BoxLoader( + box_auth=auth, + ... +``` + +**CCG with a specified user** + +```python +from langchain_box.document_loaders import BoxLoader +from langchain_box.utilities import BoxAuth, BoxAuthType + +auth = BoxAuth( + auth_type=BoxAuthType.CCG, + box_client_id=box_client_id, + box_client_secret=box_client_secret, + box_user_id=box_user_id +) + +loader = BoxLoader( + box_auth=auth, + ... +``` + +If you wish to use OAuth2 with the authorization_code flow, please use `BoxAuthType.TOKEN` with the token you have acquired. + +## Document Loaders + +### BoxLoader + +[See usage example](/docs/integrations/document_loaders/box) + +```python +from langchain_box.document_loaders import BoxLoader + +``` + +## Retrievers + +### BoxRetriever + +[See usage example](/docs/integrations/retrievers/box) + +```python +from langchain_box.retrievers import BoxRetriever + +``` diff --git a/langchain_md_files/integrations/providers/brave_search.mdx b/langchain_md_files/integrations/providers/brave_search.mdx new file mode 100644 index 0000000000000000000000000000000000000000..647004302cc4054ce9835a060789171622f3eafb --- /dev/null +++ b/langchain_md_files/integrations/providers/brave_search.mdx @@ -0,0 +1,36 @@ +# Brave Search + + +>[Brave Search](https://en.wikipedia.org/wiki/Brave_Search) is a search engine developed by Brave Software. +> - `Brave Search` uses its own web index. As of May 2022, it covered over 10 billion pages and was used to serve 92% +> of search results without relying on any third-parties, with the remainder being retrieved +> server-side from the Bing API or (on an opt-in basis) client-side from Google. According +> to Brave, the index was kept "intentionally smaller than that of Google or Bing" in order to +> help avoid spam and other low-quality content, with the disadvantage that "Brave Search is +> not yet as good as Google in recovering long-tail queries." +>- `Brave Search Premium`: As of April 2023 Brave Search is an ad-free website, but it will +> eventually switch to a new model that will include ads and premium users will get an ad-free experience. +> User data including IP addresses won't be collected from its users by default. A premium account +> will be required for opt-in data-collection. + + +## Installation and Setup + +To get access to the Brave Search API, you need to [create an account and get an API key](https://api.search.brave.com/app/dashboard). + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/brave_search). 
+ +```python +from langchain_community.document_loaders import BraveSearchLoader +``` + +## Tool + +See a [usage example](/docs/integrations/tools/brave_search). + +```python +from langchain.tools import BraveSearch +``` diff --git a/langchain_md_files/integrations/providers/browserbase.mdx b/langchain_md_files/integrations/providers/browserbase.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0bd939ffbfc66e75a92286cdbad2d0079a3ef354 --- /dev/null +++ b/langchain_md_files/integrations/providers/browserbase.mdx @@ -0,0 +1,34 @@ +# Browserbase + +[Browserbase](https://browserbase.com) is a developer platform to reliably run, manage, and monitor headless browsers. + +Power your AI data retrievals with: +- [Serverless Infrastructure](https://docs.browserbase.com/under-the-hood) providing reliable browsers to extract data from complex UIs +- [Stealth Mode](https://docs.browserbase.com/features/stealth-mode) with included fingerprinting tactics and automatic captcha solving +- [Session Debugger](https://docs.browserbase.com/features/sessions) to inspect your Browser Session with networks timeline and logs +- [Live Debug](https://docs.browserbase.com/guides/session-debug-connection/browser-remote-control) to quickly debug your automation + +## Installation and Setup + +- Get an API key and Project ID from [browserbase.com](https://browserbase.com) and set it in environment variables (`BROWSERBASE_API_KEY`, `BROWSERBASE_PROJECT_ID`). +- Install the [Browserbase SDK](http://github.com/browserbase/python-sdk): + +```python +pip install browserbase +``` + +## Document loader + +See a [usage example](/docs/integrations/document_loaders/browserbase). + +```python +from langchain_community.document_loaders import BrowserbaseLoader +``` + +## Multi-Modal + +See a [usage example](/docs/integrations/document_loaders/browserbase). + +```python +from browserbase.helpers.gpt4 import GPT4VImage, GPT4VImageDetail +``` diff --git a/langchain_md_files/integrations/providers/browserless.mdx b/langchain_md_files/integrations/providers/browserless.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0fe4463af921cd716b26c77a85e101a8e30ed1ae --- /dev/null +++ b/langchain_md_files/integrations/providers/browserless.mdx @@ -0,0 +1,18 @@ +# Browserless + +>[Browserless](https://www.browserless.io/docs/start) is a service that allows you to +> run headless Chrome instances in the cloud. It’s a great way to run browser-based +> automation at scale without having to worry about managing your own infrastructure. + +## Installation and Setup + +We have to get the API key [here](https://www.browserless.io/pricing/). + + +## Document loader + +See a [usage example](/docs/integrations/document_loaders/browserless). + +```python +from langchain_community.document_loaders import BrowserlessLoader +``` diff --git a/langchain_md_files/integrations/providers/byte_dance.mdx b/langchain_md_files/integrations/providers/byte_dance.mdx new file mode 100644 index 0000000000000000000000000000000000000000..8746bcf519fe4ee6dfe2fbda582339bce54789a8 --- /dev/null +++ b/langchain_md_files/integrations/providers/byte_dance.mdx @@ -0,0 +1,22 @@ +# ByteDance + +>[ByteDance](https://bytedance.com/) is a Chinese internet technology company. + +## Installation and Setup + +Get the access token. 
+You can find the access instructions [here](https://open.larksuite.com/document) + + +## Document Loader + +### Lark Suite + +>[Lark Suite](https://www.larksuite.com/) is an enterprise collaboration platform +> developed by `ByteDance`. + +See a [usage example](/docs/integrations/document_loaders/larksuite). + +```python +from langchain_community.document_loaders.larksuite import LarkSuiteDocLoader +``` diff --git a/langchain_md_files/integrations/providers/cassandra.mdx b/langchain_md_files/integrations/providers/cassandra.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6b11462156c9c6856c054116aa2ca1b6c5702990 --- /dev/null +++ b/langchain_md_files/integrations/providers/cassandra.mdx @@ -0,0 +1,85 @@ +# Cassandra + +> [Apache Cassandra®](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database. +> Starting with version 5.0, the database ships with [vector search capabilities](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html). + +The integrations outlined in this page can be used with `Cassandra` as well as other CQL-compatible databases, +i.e. those using the `Cassandra Query Language` protocol. + + +## Installation and Setup + +Install the following Python package: + +```bash +pip install "cassio>=0.1.6" +``` + +## Vector Store + +```python +from langchain_community.vectorstores import Cassandra +``` + +Learn more in the [example notebook](/docs/integrations/vectorstores/cassandra). + +## Chat message history + +```python +from langchain_community.chat_message_histories import CassandraChatMessageHistory +``` + +Learn more in the [example notebook](/docs/integrations/memory/cassandra_chat_message_history). + + +## LLM Cache + +```python +from langchain.globals import set_llm_cache +from langchain_community.cache import CassandraCache +set_llm_cache(CassandraCache()) +``` + +Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the Cassandra section). + + +## Semantic LLM Cache + +```python +from langchain.globals import set_llm_cache +from langchain_community.cache import CassandraSemanticCache +set_llm_cache(CassandraSemanticCache( + embedding=my_embedding, + table_name="my_store", +)) +``` + +Learn more in the [example notebook](/docs/integrations/llm_caching#cassandra-caches) (scroll to the appropriate section). + +## Document loader + +```python +from langchain_community.document_loaders import CassandraLoader +``` + +Learn more in the [example notebook](/docs/integrations/document_loaders/cassandra). + +#### Attribution statement + +> Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of +> the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries. + +## Toolkit + +The `Cassandra Database toolkit` enables AI engineers to efficiently integrate agents +with Cassandra data. + +```python +from langchain_community.agent_toolkits.cassandra_database.toolkit import ( + CassandraDatabaseToolkit, +) +``` + +Learn more in the [example notebook](/docs/integrations/tools/cassandra_database). 
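+
+Most of the integrations above resolve their database session and keyspace through `cassio`.
+As a minimal sketch (the contact point, keyspace, and table name are placeholders for your
+own cluster), initialization typically looks like:
+
+```python
+import cassio
+from langchain_community.vectorstores import Cassandra
+
+# Placeholder connection details for a self-hosted Cassandra cluster.
+cassio.init(contact_points=["127.0.0.1"], keyspace="my_keyspace")
+
+# The vector store picks up the session and keyspace registered by cassio.init().
+vector_store = Cassandra(
+    embedding=my_embedding,  # any LangChain Embeddings instance
+    table_name="my_vectors",
+)
+```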
+
+
diff --git a/langchain_md_files/integrations/providers/cerebriumai.mdx b/langchain_md_files/integrations/providers/cerebriumai.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..912dbd90f61e634307a45cf875b72e4a946e09d3
--- /dev/null
+++ b/langchain_md_files/integrations/providers/cerebriumai.mdx
@@ -0,0 +1,26 @@
+# CerebriumAI
+
+>[Cerebrium](https://docs.cerebrium.ai/cerebrium/getting-started/introduction) is a serverless GPU infrastructure provider.
+> It provides API access to several LLMs.
+
+See the examples in the [CerebriumAI documentation](https://docs.cerebrium.ai/examples/langchain).
+
+## Installation and Setup
+
+- Install the Python package:
+```bash
+pip install cerebrium
+```
+
+- [Get a CerebriumAI API key](https://docs.cerebrium.ai/cerebrium/getting-started/installation) and set
+  it as an environment variable (`CEREBRIUMAI_API_KEY`).
+
+
+## LLMs
+
+See a [usage example](/docs/integrations/llms/cerebriumai).
+
+
+```python
+from langchain_community.llms import CerebriumAI
+```
\ No newline at end of file
diff --git a/langchain_md_files/integrations/providers/chaindesk.mdx b/langchain_md_files/integrations/providers/chaindesk.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..7cfd5e96b88f3777dfcef4be86bd02f9da04e166
--- /dev/null
+++ b/langchain_md_files/integrations/providers/chaindesk.mdx
@@ -0,0 +1,17 @@
+# Chaindesk
+
+>[Chaindesk](https://chaindesk.ai) is an [open-source](https://github.com/gmpetrov/databerry) document retrieval platform that helps to connect your personal data with Large Language Models.
+
+
+## Installation and Setup
+
+Sign up for Chaindesk, create a datastore, add some data, and get your datastore API endpoint URL.
+You also need the [API Key](https://docs.chaindesk.ai/api-reference/authentication).
+
+## Retriever
+
+See a [usage example](/docs/integrations/retrievers/chaindesk).
+
+```python
+from langchain.retrievers import ChaindeskRetriever
+```
diff --git a/langchain_md_files/integrations/providers/chroma.mdx b/langchain_md_files/integrations/providers/chroma.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d5436c9dc2aef54dd4e0b0bec5f4f646ea0ffb67
--- /dev/null
+++ b/langchain_md_files/integrations/providers/chroma.mdx
@@ -0,0 +1,29 @@
+# Chroma
+
+>[Chroma](https://docs.trychroma.com/getting-started) is a database for building AI applications with embeddings.
+
+## Installation and Setup
+
+```bash
+pip install langchain-chroma
+```
+
+
+## VectorStore
+
+There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
+whether for semantic search or example selection.
+
+```python
+from langchain_chroma import Chroma
+```
+
+For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma).
+
+## Retriever
+
+See a [usage example](/docs/integrations/retrievers/self_query/chroma_self_query).
+
+```python
+from langchain.retrievers import SelfQueryRetriever
+```
diff --git a/langchain_md_files/integrations/providers/clarifai.mdx b/langchain_md_files/integrations/providers/clarifai.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..e783833255490c1e4cbce95f018f6578baff4f42
--- /dev/null
+++ b/langchain_md_files/integrations/providers/clarifai.mdx
@@ -0,0 +1,53 @@
+# Clarifai
+
+>[Clarifai](https://clarifai.com) is one of the first deep learning platforms, founded in 2013.
Clarifai provides an AI platform with the full AI lifecycle for data exploration, data labeling, model training, evaluation and inference around images, video, text and audio data. In the LangChain ecosystem, as far as we're aware, Clarifai is the only provider that supports LLMs, embeddings and a vector store in one production scale platform, making it an excellent choice to operationalize your LangChain implementations.
+>
+> `Clarifai` provides 1,000s of AI models for many different use cases. You can [explore them here](https://clarifai.com/explore) to find the one most suited for your use case. These models include those created by other providers such as OpenAI, Anthropic, Cohere, AI21, etc. as well as state of the art from open source such as Falcon, InstructorXL, etc. so that you can build the best in AI into your products. You'll find these organized by the creator's user_id and into projects we call applications denoted by their app_id. Those IDs will be needed in addition to the model_id and optionally the version_id, so make note of all these IDs once you've found the best model for your use case!
+>
+>Also note that given there are many models for images, video, text and audio understanding, you can build some interesting AI agents that utilize the variety of AI models as experts to understand those data types.
+
+
+## Installation and Setup
+- Install the Python SDK:
+```bash
+pip install clarifai
+```
+[Sign-up](https://clarifai.com/signup) for a Clarifai account, then get a personal access token to access the Clarifai API from your [security settings](https://clarifai.com/settings/security) and set it as an environment variable (`CLARIFAI_PAT`).
+
+
+## LLMs
+
+To find the selection of LLMs in the Clarifai platform you can select the text to text model type [here](https://clarifai.com/explore/models?filterData=%5B%7B%22field%22%3A%22model_type_id%22%2C%22value%22%3A%5B%22text-to-text%22%5D%7D%5D&page=1&perPage=24).
+
+```python
+from langchain_community.llms import Clarifai
+llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
+```
+
+For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai).
+
+
+## Embedding Models
+
+To find the selection of embeddings models in the Clarifai platform you can select the text to embedding model type [here](https://clarifai.com/explore/models?page=1&perPage=24&filterData=%5B%7B%22field%22%3A%22model_type_id%22%2C%22value%22%3A%5B%22text-embedder%22%5D%7D%5D).
+
+There is a Clarifai Embedding model in LangChain, which you can access with:
+```python
+from langchain_community.embeddings import ClarifaiEmbeddings
+embeddings = ClarifaiEmbeddings(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
+```
+
+See a [usage example](/docs/integrations/text_embedding/clarifai).
+
+
+## Vectorstore
+
+Clarifai's vector DB was launched in 2016 and has been optimized to support live search queries. With workflows in the Clarifai platform, your data is automatically indexed by an embedding model and optionally other models as well to index that information in the DB for search. You can query the DB not only via the vectors but also filter by metadata matches, other AI predicted concepts, and even do geo-coordinate search.
Simply create an application, select the appropriate base workflow for your type of data, and upload your data (through the API as [documented here](https://docs.clarifai.com/api-guide/data/create-get-update-delete) or the UIs at clarifai.com).
+
+You can also add data directly from LangChain as well, and the auto-indexing will take place for you. You'll notice this is a little different from other vectorstores, where you need to provide an embedding model in their constructor and have LangChain coordinate getting the embeddings from text and writing those to the index. Not only is it more convenient, but it's much more scalable to use Clarifai's distributed cloud to do all the indexing in the background.
+
+```python
+from langchain_community.vectorstores import Clarifai
+clarifai_vector_db = Clarifai.from_texts(user_id=USER_ID, app_id=APP_ID, texts=texts, pat=CLARIFAI_PAT, number_of_docs=NUMBER_OF_DOCS, metadatas=metadatas)
+```
+For more details, the docs on the Clarifai vector store provide a [detailed walkthrough](/docs/integrations/vectorstores/clarifai).
diff --git a/langchain_md_files/integrations/providers/clickhouse.mdx b/langchain_md_files/integrations/providers/clickhouse.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..64e4608c535fde2b3733fb2f521fc7173545ed7d
--- /dev/null
+++ b/langchain_md_files/integrations/providers/clickhouse.mdx
@@ -0,0 +1,25 @@
+# ClickHouse
+
+> [ClickHouse](https://clickhouse.com/) is a fast and resource-efficient open-source database for real-time
+> apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries.
+> It has data structures and distance search functions (like `L2Distance`) as well as
+> [approximate nearest neighbor search indexes](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes),
+> which enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.
+
+
+## Installation and Setup
+
+We need to install the `clickhouse-connect` Python package.
+
+```bash
+pip install clickhouse-connect
+```
+
+## Vector Store
+
+See a [usage example](/docs/integrations/vectorstores/clickhouse).
+
+```python
+from langchain_community.vectorstores import Clickhouse, ClickhouseSettings
+```
+
diff --git a/langchain_md_files/integrations/providers/clickup.mdx b/langchain_md_files/integrations/providers/clickup.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..256ae2cace4e6a151438945698676131ff0b1bb5
--- /dev/null
+++ b/langchain_md_files/integrations/providers/clickup.mdx
@@ -0,0 +1,20 @@
+# ClickUp
+
+>[ClickUp](https://clickup.com/) is an all-in-one productivity platform that provides small and large teams across industries with flexible and customizable work management solutions, tools, and functions.
+>
+>It is a cloud-based project management solution for businesses of all sizes featuring communication and collaboration tools to help achieve organizational goals.
+
+## Installation and Setup
+
+1. Create a [ClickUp App](https://help.clickup.com/hc/en-us/articles/6303422883095-Create-your-own-app-with-the-ClickUp-API)
+2. Follow [these steps](https://clickup.com/api/developer-portal/authentication/) to get your client_id and client_secret.
+
+## Toolkits
+
+```python
+from langchain_community.agent_toolkits.clickup.toolkit import ClickupToolkit
+from langchain_community.utilities.clickup import ClickupAPIWrapper
+```
+
+See a [usage example](/docs/integrations/tools/clickup).
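+
+A minimal initialization sketch is shown below. It assumes you have already completed the OAuth flow above;
+the access token value is a placeholder, and the helper names follow the toolkit's usage notebook.
+
+```python
+from langchain_community.agent_toolkits.clickup.toolkit import ClickupToolkit
+from langchain_community.utilities.clickup import ClickupAPIWrapper
+
+# Placeholder token obtained via the ClickUp OAuth flow described above.
+clickup_api_wrapper = ClickupAPIWrapper(access_token="<your-access-token>")
+toolkit = ClickupToolkit.from_clickup_api_wrapper(clickup_api_wrapper)
+print([tool.name for tool in toolkit.get_tools()])
+```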
+ diff --git a/langchain_md_files/integrations/providers/cloudflare.mdx b/langchain_md_files/integrations/providers/cloudflare.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d7a4e8b8bed14106ed0212b5817ea7faab193063 --- /dev/null +++ b/langchain_md_files/integrations/providers/cloudflare.mdx @@ -0,0 +1,25 @@ +# Cloudflare + +>[Cloudflare, Inc. (Wikipedia)](https://en.wikipedia.org/wiki/Cloudflare) is an American company that provides +> content delivery network services, cloud cybersecurity, DDoS mitigation, and ICANN-accredited +> domain registration services. + +>[Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/) allows you to run machine +> learning models, on the `Cloudflare` network, from your code via REST API. + + +## LLMs + +See [installation instructions and usage example](/docs/integrations/llms/cloudflare_workersai). + +```python +from langchain_community.llms.cloudflare_workersai import CloudflareWorkersAI +``` + +## Embedding models + +See [installation instructions and usage example](/docs/integrations/text_embedding/cloudflare_workersai). + +```python +from langchain_community.embeddings.cloudflare_workersai import CloudflareWorkersAIEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/clova.mdx b/langchain_md_files/integrations/providers/clova.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b10aa930511363c9cc98699149354734a4ebf90c --- /dev/null +++ b/langchain_md_files/integrations/providers/clova.mdx @@ -0,0 +1,14 @@ +# Clova + +>[CLOVA Studio](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary) is a service +> of [Naver Cloud Platform](https://www.ncloud.com/) that uses `HyperCLOVA` language models, +> a hyperscale AI technology, to output phrases generated through AI technology based on user input. + + +## Embedding models + +See [installation instructions and usage example](/docs/integrations/text_embedding/clova). + +```python +from langchain_community.embeddings import ClovaEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/cnosdb.mdx b/langchain_md_files/integrations/providers/cnosdb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..4c5316a5e310b83f751c0b24ee23a47635f0d17b --- /dev/null +++ b/langchain_md_files/integrations/providers/cnosdb.mdx @@ -0,0 +1,110 @@ +# CnosDB +> [CnosDB](https://github.com/cnosdb/cnosdb) is an open-source distributed time series database with high performance, high compression rate and high ease of use. + +## Installation and Setup + +```python +pip install cnos-connector +``` + +## Connecting to CnosDB +You can connect to CnosDB using the `SQLDatabase.from_cnosdb()` method. +### Syntax +```python +def SQLDatabase.from_cnosdb(url: str = "127.0.0.1:8902", + user: str = "root", + password: str = "", + tenant: str = "cnosdb", + database: str = "public") +``` +Args: +1. url (str): The HTTP connection host name and port number of the CnosDB + service, excluding "http://" or "https://", with a default value + of "127.0.0.1:8902". +2. user (str): The username used to connect to the CnosDB service, with a + default value of "root". +3. password (str): The password of the user connecting to the CnosDB service, + with a default value of "". +4. tenant (str): The name of the tenant used to connect to the CnosDB service, + with a default value of "cnosdb". +5. database (str): The name of the database in the CnosDB tenant. 
+## Examples
+```python
+# Connecting to CnosDB with SQLDatabase Wrapper
+from langchain_community.utilities import SQLDatabase
+
+db = SQLDatabase.from_cnosdb()
+```
+```python
+# Creating an OpenAI Chat LLM Wrapper
+from langchain_openai import ChatOpenAI
+
+llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
+```
+
+### SQL Database Chain
+This example demonstrates the use of the SQL Chain for answering a question over a CnosDB.
+```python
+from langchain_experimental.sql import SQLDatabaseChain
+
+db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
+
+db_chain.run(
+    "What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?"
+)
+```
+```shell
+> Entering new chain...
+What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?
+SQLQuery:SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time < '2022-10-20'
+SQLResult: [(68.0,)]
+Answer:The average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022 is 68.0.
+> Finished chain.
+```
+### SQL Database Agent
+This example demonstrates the use of the SQL Database Agent for answering questions over a CnosDB.
+```python
+from langchain.agents import create_sql_agent
+from langchain_community.agent_toolkits import SQLDatabaseToolkit
+
+toolkit = SQLDatabaseToolkit(db=db, llm=llm)
+agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
+```
+```python
+agent.run(
+    "What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?"
+)
+```
+```shell
+> Entering new chain...
+Action: sql_db_list_tables
+Action Input: ""
+Observation: air
+Thought:The "air" table seems relevant to the question. I should query the schema of the "air" table to see what columns are available.
+Action: sql_db_schema
+Action Input: "air"
+Observation:
+CREATE TABLE air (
+    pressure FLOAT,
+    station STRING,
+    temperature FLOAT,
+    time TIMESTAMP,
+    visibility FLOAT
+)
+
+/*
+3 rows from air table:
+pressure	station	temperature	time	visibility
+75.0	XiaoMaiDao	67.0	2022-10-19T03:40:00	54.0
+77.0	XiaoMaiDao	69.0	2022-10-19T04:40:00	56.0
+76.0	XiaoMaiDao	68.0	2022-10-19T05:40:00	55.0
+*/
+Thought:The "temperature" column in the "air" table is relevant to the question. I can query the average temperature between the specified dates.
+Action: sql_db_query
+Action Input: "SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time <= '2022-10-20'"
+Observation: [(68.0,)]
+Thought:The average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022 is 68.0.
+Final Answer: 68.0
+
+> Finished chain.
+```
diff --git a/langchain_md_files/integrations/providers/cogniswitch.mdx b/langchain_md_files/integrations/providers/cogniswitch.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d8aee6a4c9d5c8ec21a32caf16eb1d62555c18e6
--- /dev/null
+++ b/langchain_md_files/integrations/providers/cogniswitch.mdx
@@ -0,0 +1,53 @@
+# CogniSwitch
+
+>[CogniSwitch](https://www.cogniswitch.ai/aboutus) is an API-based data platform that
+> enhances enterprise data by extracting entities, concepts and their relationships,
+> thereby converting this data into a multidimensional format and storing it in
+> a database that can accommodate these enhancements. In our case the data is stored
+> in a knowledge graph.
This enhanced data is now ready for consumption by LLMs and +> other GenAI applications ensuring the data is consumable and context can be maintained. +> Thereby eliminating hallucinations and delivering accuracy. + +## Toolkit + +See [installation instructions and usage example](/docs/integrations/tools/cogniswitch). + +```python +from langchain_community.agent_toolkits import CogniswitchToolkit +``` + +## Tools + +### CogniswitchKnowledgeRequest + +>Tool that uses the CogniSwitch service to answer questions. + +```python +from langchain_community.tools.cogniswitch.tool import CogniswitchKnowledgeRequest +``` + +### CogniswitchKnowledgeSourceFile + +>Tool that uses the CogniSwitch services to store data from file. + +```python +from langchain_community.tools.cogniswitch.tool import CogniswitchKnowledgeSourceFile +``` + +### CogniswitchKnowledgeSourceURL + +>Tool that uses the CogniSwitch services to store data from a URL. + +```python +from langchain_community.tools.cogniswitch.tool import CogniswitchKnowledgeSourceURL +``` + +### CogniswitchKnowledgeStatus + +>Tool that uses the CogniSwitch services to get the status of the document or url uploaded. + +```python +from langchain_community.tools.cogniswitch.tool import CogniswitchKnowledgeStatus +``` + + diff --git a/langchain_md_files/integrations/providers/cohere.mdx b/langchain_md_files/integrations/providers/cohere.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0bd8417317de8f5180417fdb91a3dc1abd288004 --- /dev/null +++ b/langchain_md_files/integrations/providers/cohere.mdx @@ -0,0 +1,157 @@ +# Cohere + +>[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models +> that help companies improve human-machine interactions. + +## Installation and Setup +- Install the Python SDK : +```bash +pip install langchain-cohere +``` + +Get a [Cohere api key](https://dashboard.cohere.ai/) and set it as an environment variable (`COHERE_API_KEY`) + +## Cohere langchain integrations + +|API|description|Endpoint docs|Import|Example usage| +|---|---|---|---|---| +|Chat|Build chat bots|[chat](https://docs.cohere.com/reference/chat)|`from langchain_cohere import ChatCohere`|[cohere.ipynb](/docs/integrations/chat/cohere)| +|LLM|Generate text|[generate](https://docs.cohere.com/reference/generate)|`from langchain_cohere.llms import Cohere`|[cohere.ipynb](/docs/integrations/llms/cohere)| +|RAG Retriever|Connect to external data sources|[chat + rag](https://docs.cohere.com/reference/chat)|`from langchain.retrievers import CohereRagRetriever`|[cohere.ipynb](/docs/integrations/retrievers/cohere)| +|Text Embedding|Embed strings to vectors|[embed](https://docs.cohere.com/reference/embed)|`from langchain_cohere import CohereEmbeddings`|[cohere.ipynb](/docs/integrations/text_embedding/cohere)| +|Rerank Retriever|Rank strings based on relevance|[rerank](https://docs.cohere.com/reference/rerank)|`from langchain.retrievers.document_compressors import CohereRerank`|[cohere.ipynb](/docs/integrations/retrievers/cohere-reranker)| + +## Quick copy examples + +### Chat + +```python +from langchain_cohere import ChatCohere +from langchain_core.messages import HumanMessage +chat = ChatCohere() +messages = [HumanMessage(content="knock knock")] +print(chat.invoke(messages)) +``` + +Usage of the Cohere [chat model](/docs/integrations/chat/cohere) + +### LLM + + +```python +from langchain_cohere.llms import Cohere + +llm = Cohere() +print(llm.invoke("Come up with a pet name")) +``` + +Usage of the Cohere 
(legacy) [LLM model](/docs/integrations/llms/cohere) + +### Tool calling +```python +from langchain_cohere import ChatCohere +from langchain_core.messages import ( + HumanMessage, + ToolMessage, +) +from langchain_core.tools import tool + +@tool +def magic_function(number: int) -> int: + """Applies a magic operation to an integer + + Args: + number: Number to have magic operation performed on + """ + return number + 10 + +def invoke_tools(tool_calls, messages): + for tool_call in tool_calls: + selected_tool = {"magic_function":magic_function}[ + tool_call["name"].lower() + ] + tool_output = selected_tool.invoke(tool_call["args"]) + messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"])) + return messages + +tools = [magic_function] + +llm = ChatCohere() +llm_with_tools = llm.bind_tools(tools=tools) +messages = [ + HumanMessage( + content="What is the value of magic_function(2)?" + ) +] + +res = llm_with_tools.invoke(messages) +while res.tool_calls: + messages.append(res) + messages = invoke_tools(res.tool_calls, messages) + res = llm_with_tools.invoke(messages) + +print(res.content) +``` +Tool calling with Cohere LLM can be done by binding the necessary tools to the llm as seen above. +An alternative, is to support multi hop tool calling with the ReAct agent as seen below. + +### ReAct Agent + +The agent is based on the paper +[ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629). + +```python +from langchain_community.tools.tavily_search import TavilySearchResults +from langchain_cohere import ChatCohere, create_cohere_react_agent +from langchain_core.prompts import ChatPromptTemplate +from langchain.agents import AgentExecutor + +llm = ChatCohere() + +internet_search = TavilySearchResults(max_results=4) +internet_search.name = "internet_search" +internet_search.description = "Route a user query to the internet" + +prompt = ChatPromptTemplate.from_template("{input}") + +agent = create_cohere_react_agent( + llm, + [internet_search], + prompt +) + +agent_executor = AgentExecutor(agent=agent, tools=[internet_search], verbose=True) + +agent_executor.invoke({ + "input": "In what year was the company that was founded as Sound of Music added to the S&P 500?", +}) +``` +The ReAct agent can be used to call multiple tools in sequence. 
+
+### RAG Retriever
+
+```python
+from langchain_cohere import ChatCohere
+from langchain.retrievers import CohereRagRetriever
+from langchain_core.documents import Document
+
+rag = CohereRagRetriever(llm=ChatCohere())
+print(rag.invoke("What is cohere ai?"))
+```
+
+Usage of the Cohere [RAG Retriever](/docs/integrations/retrievers/cohere)
+
+### Text Embedding
+
+```python
+from langchain_cohere import CohereEmbeddings
+
+embeddings = CohereEmbeddings(model="embed-english-light-v3.0")
+print(embeddings.embed_documents(["This is a test document."]))
+```
+
+Usage of the Cohere [Text Embeddings model](/docs/integrations/text_embedding/cohere)
+
+### Reranker
+
+Usage of the Cohere [Reranker](/docs/integrations/retrievers/cohere-reranker)
\ No newline at end of file
diff --git a/langchain_md_files/integrations/providers/college_confidential.mdx b/langchain_md_files/integrations/providers/college_confidential.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..4f081945b944b1842f97a003a844d8d2447fe677
--- /dev/null
+++ b/langchain_md_files/integrations/providers/college_confidential.mdx
@@ -0,0 +1,16 @@
+# College Confidential
+
+>[College Confidential](https://www.collegeconfidential.com/) gives information on 3,800+ colleges and universities.
+
+## Installation and Setup
+
+There isn't any special setup for it.
+
+
+## Document Loader
+
+See a [usage example](/docs/integrations/document_loaders/college_confidential).
+
+```python
+from langchain_community.document_loaders import CollegeConfidentialLoader
+```
diff --git a/langchain_md_files/integrations/providers/confident.mdx b/langchain_md_files/integrations/providers/confident.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..51de57342146b41279ae602f18929851258ce592
--- /dev/null
+++ b/langchain_md_files/integrations/providers/confident.mdx
@@ -0,0 +1,26 @@
+# Confident AI
+
+>[Confident AI](https://confident-ai.com) is the creator of `DeepEval`.
+>
+>[DeepEval](https://github.com/confident-ai/deepeval) is a package for unit testing LLMs.
+> Using `DeepEval`, everyone can build robust language models through faster iterations
+> using both unit testing and integration testing. `DeepEval` provides support for each step in the iteration
+> from synthetic data creation to testing.
+
+## Installation and Setup
+
+You need to get the [DeepEval API credentials](https://app.confident-ai.com).
+
+You need to install the `DeepEval` Python package:
+
+```bash
+pip install deepeval
+```
+
+## Callbacks
+
+See an [example](/docs/integrations/callbacks/confident).
+
+```python
+from langchain.callbacks.confident_callback import DeepEvalCallbackHandler
+```
diff --git a/langchain_md_files/integrations/providers/confluence.mdx b/langchain_md_files/integrations/providers/confluence.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..27a7e274a21ef98ab8b52e60c088b850245e77c1
--- /dev/null
+++ b/langchain_md_files/integrations/providers/confluence.mdx
@@ -0,0 +1,22 @@
+# Confluence
+
+>[Confluence](https://www.atlassian.com/software/confluence) is a wiki collaboration platform that saves and organizes all of the project-related material. `Confluence` is a knowledge base that primarily handles content management activities.
+
+
+## Installation and Setup
+
+```bash
+pip install atlassian-python-api
+```
+
+We need to set up `username/api_key` or `OAuth2 login`.
+See [instructions](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/).
+ + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/confluence). + +```python +from langchain_community.document_loaders import ConfluenceLoader +``` diff --git a/langchain_md_files/integrations/providers/connery.mdx b/langchain_md_files/integrations/providers/connery.mdx new file mode 100644 index 0000000000000000000000000000000000000000..36684a97fa0e9068b886f5e36f76e1677f3a4a27 --- /dev/null +++ b/langchain_md_files/integrations/providers/connery.mdx @@ -0,0 +1,28 @@ +# Connery + +>[Connery SDK](https://github.com/connery-io/connery-sdk) is an NPM package that +> includes both an SDK and a CLI, designed for the development of plugins and actions. +> +>The CLI automates many things in the development process. The SDK +> offers a JavaScript API for defining plugins and actions and packaging them +> into a plugin server with a standardized REST API generated from the metadata. +> The plugin server handles authorization, input validation, and logging. +> So you can focus on the logic of your actions. +> +> See the use cases and examples in the [Connery SDK documentation](https://sdk.connery.io/docs/use-cases/) + +## Toolkit + +See [usage example](/docs/integrations/tools/connery). + +```python +from langchain_community.agent_toolkits.connery import ConneryToolkit +``` + +## Tools + +### ConneryAction + +```python +from langchain_community.tools.connery import ConneryService +``` diff --git a/langchain_md_files/integrations/providers/context.mdx b/langchain_md_files/integrations/providers/context.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0b2e46c21ddfb24d79b817bf8cf10b80159c7d54 --- /dev/null +++ b/langchain_md_files/integrations/providers/context.mdx @@ -0,0 +1,20 @@ +# Context + +>[Context](https://context.ai/) provides user analytics for LLM-powered products and features. + +## Installation and Setup + +We need to install the `context-python` Python package: + +```bash +pip install context-python +``` + + +## Callbacks + +See a [usage example](/docs/integrations/callbacks/context). + +```python +from langchain.callbacks import ContextCallbackHandler +``` diff --git a/langchain_md_files/integrations/providers/couchbase.mdx b/langchain_md_files/integrations/providers/couchbase.mdx new file mode 100644 index 0000000000000000000000000000000000000000..906fbda6b28b37dba555251b145fcfeba862e80e --- /dev/null +++ b/langchain_md_files/integrations/providers/couchbase.mdx @@ -0,0 +1,111 @@ +# Couchbase + +>[Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database +> that delivers unmatched versatility, performance, scalability, and financial value +> for all of your cloud, mobile, AI, and edge computing applications. + +## Installation and Setup + +We have to install the `langchain-couchbase` package. + +```bash +pip install langchain-couchbase +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/couchbase). + +```python +from langchain_couchbase import CouchbaseVectorStore +``` + +## Document loader + +See a [usage example](/docs/integrations/document_loaders/couchbase). + +```python +from langchain_community.document_loaders.couchbase import CouchbaseLoader +``` + +## LLM Caches + +### CouchbaseCache +Use Couchbase as a cache for prompts and responses. + +See a [usage example](/docs/integrations/llm_caching/#couchbase-cache). 
+
+To import this cache:
+```python
+from langchain_couchbase.cache import CouchbaseCache
+```
+
+To use this cache with your LLMs:
+```python
+from langchain_core.globals import set_llm_cache
+
+cluster = couchbase_cluster_connection_object
+
+set_llm_cache(
+    CouchbaseCache(
+        cluster=cluster,
+        bucket_name=BUCKET_NAME,
+        scope_name=SCOPE_NAME,
+        collection_name=COLLECTION_NAME,
+    )
+)
+```
+
+
+### CouchbaseSemanticCache
+Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore.
+The CouchbaseSemanticCache needs a Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/couchbase) on how to set up the index.
+
+See a [usage example](/docs/integrations/llm_caching/#couchbase-semantic-cache).
+
+To import this cache:
+```python
+from langchain_couchbase.cache import CouchbaseSemanticCache
+```
+
+To use this cache with your LLMs:
+```python
+from langchain_core.globals import set_llm_cache
+
+# use any embedding provider...
+from langchain_openai import OpenAIEmbeddings
+
+embeddings = OpenAIEmbeddings()
+cluster = couchbase_cluster_connection_object
+
+set_llm_cache(
+    CouchbaseSemanticCache(
+        cluster=cluster,
+        embedding=embeddings,
+        bucket_name=BUCKET_NAME,
+        scope_name=SCOPE_NAME,
+        collection_name=COLLECTION_NAME,
+        index_name=INDEX_NAME,
+    )
+)
+```
+
+## Chat Message History
+Use Couchbase as the storage for your chat messages.
+
+See a [usage example](/docs/integrations/memory/couchbase_chat_message_history).
+
+To use the chat message history in your applications:
+```python
+from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory
+
+message_history = CouchbaseChatMessageHistory(
+    cluster=cluster,
+    bucket_name=BUCKET_NAME,
+    scope_name=SCOPE_NAME,
+    collection_name=COLLECTION_NAME,
+    session_id="test-session",
+)
+
+message_history.add_user_message("hi!")
+```
\ No newline at end of file
diff --git a/langchain_md_files/integrations/providers/coze.mdx b/langchain_md_files/integrations/providers/coze.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..ce1d0ce456b3506e4517ba82403ac2462ba0ca85
--- /dev/null
+++ b/langchain_md_files/integrations/providers/coze.mdx
@@ -0,0 +1,19 @@
+# Coze
+
+[Coze](https://www.coze.com/) is an AI chatbot development platform that enables
+the creation and deployment of chatbots for handling diverse conversations across
+various applications.
+
+
+## Installation and Setup
+
+First, you need to get the `API_KEY` from the [Coze](https://www.coze.com/) website.
+
+
+## Chat models
+
+See a [usage example](/docs/integrations/chat/coze/).
+
+```python
+from langchain_community.chat_models import ChatCoze
+```
diff --git a/langchain_md_files/integrations/providers/ctransformers.mdx b/langchain_md_files/integrations/providers/ctransformers.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..09414a8fe7d4412cc169ad79b1a7113f08e911dc
--- /dev/null
+++ b/langchain_md_files/integrations/providers/ctransformers.mdx
@@ -0,0 +1,57 @@
+# C Transformers
+
+This page covers how to use the [C Transformers](https://github.com/marella/ctransformers) library within LangChain.
+It is broken into two parts: installation and setup, and then references to specific C Transformers wrappers.
+ +## Installation and Setup + +- Install the Python package with `pip install ctransformers` +- Download a supported [GGML model](https://huggingface.co./TheBloke) (see [Supported Models](https://github.com/marella/ctransformers#supported-models)) + +## Wrappers + +### LLM + +There exists a CTransformers LLM wrapper, which you can access with: + +```python +from langchain_community.llms import CTransformers +``` + +It provides a unified interface for all models: + +```python +llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2') + +print(llm.invoke('AI is going to')) +``` + +If you are getting `illegal instruction` error, try using `lib='avx'` or `lib='basic'`: + +```py +llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2', lib='avx') +``` + +It can be used with models hosted on the Hugging Face Hub: + +```py +llm = CTransformers(model='marella/gpt-2-ggml') +``` + +If a model repo has multiple model files (`.bin` files), specify a model file using: + +```py +llm = CTransformers(model='marella/gpt-2-ggml', model_file='ggml-model.bin') +``` + +Additional parameters can be passed using the `config` parameter: + +```py +config = {'max_new_tokens': 256, 'repetition_penalty': 1.1} + +llm = CTransformers(model='marella/gpt-2-ggml', config=config) +``` + +See [Documentation](https://github.com/marella/ctransformers#config) for a list of available parameters. + +For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers). diff --git a/langchain_md_files/integrations/providers/ctranslate2.mdx b/langchain_md_files/integrations/providers/ctranslate2.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0e3c3a9319e4d281decfbbc7a1c8e006ae5d37d1 --- /dev/null +++ b/langchain_md_files/integrations/providers/ctranslate2.mdx @@ -0,0 +1,30 @@ +# CTranslate2 + +>[CTranslate2](https://opennmt.net/CTranslate2/quickstart.html) is a C++ and Python library +> for efficient inference with Transformer models. +> +>The project implements a custom runtime that applies many performance optimization +> techniques such as weights quantization, layers fusion, batch reordering, etc., +> to accelerate and reduce the memory usage of Transformer models on CPU and GPU. +> +>A full list of features and supported models is included in the +> [project’s repository](https://opennmt.net/CTranslate2/guides/transformers.html). +> To start, please check out the official [quickstart guide](https://opennmt.net/CTranslate2/quickstart.html). + + +## Installation and Setup + +Install the Python package: + +```bash +pip install ctranslate2 +``` + + +## LLMs + +See a [usage example](/docs/integrations/llms/ctranslate2). + +```python +from langchain_community.llms import CTranslate2 +``` diff --git a/langchain_md_files/integrations/providers/cube.mdx b/langchain_md_files/integrations/providers/cube.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9393bc36aa28c58616c8f5e5aa0b05f1b1dd9ffd --- /dev/null +++ b/langchain_md_files/integrations/providers/cube.mdx @@ -0,0 +1,21 @@ +# Cube + +>[Cube](https://cube.dev/) is the Semantic Layer for building data apps. It helps +> data engineers and application developers access data from modern data stores, +> organize it into consistent definitions, and deliver it to every application. + +## Installation and Setup + +We have to get the API key and the URL of the Cube instance. See +[these instructions](https://cube.dev/docs/product/apis-integrations/rest-api#configuration-base-path). 
+ + +## Document loader + +### Cube Semantic Layer + +See a [usage example](/docs/integrations/document_loaders/cube_semantic). + +```python +from langchain_community.document_loaders import CubeSemanticLoader +``` diff --git a/langchain_md_files/integrations/providers/dashvector.mdx b/langchain_md_files/integrations/providers/dashvector.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b7ded751ddf7dbf506b0f9db7d2b7f61e861e3e1 --- /dev/null +++ b/langchain_md_files/integrations/providers/dashvector.mdx @@ -0,0 +1,39 @@ +# DashVector + +> [DashVector](https://help.aliyun.com/document_detail/2510225.html) is a fully-managed vectorDB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. + +This document demonstrates to leverage DashVector within the LangChain ecosystem. In particular, it shows how to install DashVector, and how to use it as a VectorStore plugin in LangChain. +It is broken into two parts: installation and setup, and then references to specific DashVector wrappers. + +## Installation and Setup + + +Install the Python SDK: + +```bash +pip install dashvector +``` + +You must have an API key. Here are the [installation instructions](https://help.aliyun.com/document_detail/2510223.html). + + +## Embedding models + +```python +from langchain_community.embeddings import DashScopeEmbeddings +``` + +See the [use example](/docs/integrations/vectorstores/dashvector). + + +## Vector Store + +A DashVector Collection is wrapped as a familiar VectorStore for native usage within LangChain, +which allows it to be readily used for various scenarios, such as semantic search or example selection. + +You may import the vectorstore by: +```python +from langchain_community.vectorstores import DashVector +``` + +For a detailed walkthrough of the DashVector wrapper, please refer to [this notebook](/docs/integrations/vectorstores/dashvector) diff --git a/langchain_md_files/integrations/providers/datadog.mdx b/langchain_md_files/integrations/providers/datadog.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b854c668759bf46d97ddc4f7fb0143119f3f67b9 --- /dev/null +++ b/langchain_md_files/integrations/providers/datadog.mdx @@ -0,0 +1,88 @@ +# Datadog Tracing + +>[ddtrace](https://github.com/DataDog/dd-trace-py) is a Datadog application performance monitoring (APM) library which provides an integration to monitor your LangChain application. + +Key features of the ddtrace integration for LangChain: +- Traces: Capture LangChain requests, parameters, prompt-completions, and help visualize LangChain operations. +- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and chat models). +- Logs: Store prompt completion data for each LangChain operation. +- Dashboard: Combine metrics, logs, and trace data into a single plane to monitor LangChain requests. +- Monitors: Provide alerts in response to spikes in LangChain request latency or error rate. + +Note: The ddtrace LangChain integration currently provides tracing for LLMs, chat models, Text Embedding Models, Chains, and Vectorstores. + +## Installation and Setup + +1. Enable APM and StatsD in your Datadog Agent, along with a Datadog API key. 
For example, in Docker:
+
+```
+docker run -d --cgroupns host \
+    --pid host \
+    -v /var/run/docker.sock:/var/run/docker.sock:ro \
+    -v /proc/:/host/proc/:ro \
+    -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
+    -e DD_API_KEY= \
+    -p 127.0.0.1:8126:8126/tcp \
+    -p 127.0.0.1:8125:8125/udp \
+    -e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true \
+    -e DD_APM_ENABLED=true \
+    gcr.io/datadoghq/agent:latest
+```
+
+2. Install the Datadog APM Python library.
+
+```
+pip install "ddtrace>=1.17"
+```
+
+
+3. The LangChain integration can be enabled automatically when you prefix your LangChain Python application command with `ddtrace-run`:
+
+```
+DD_SERVICE="my-service" DD_ENV="staging" DD_API_KEY= ddtrace-run python .py
+```
+
+**Note**: If the Agent is using a non-default hostname or port, be sure to also set `DD_AGENT_HOST`, `DD_TRACE_AGENT_PORT`, or `DD_DOGSTATSD_PORT`.
+
+Additionally, the LangChain integration can be enabled programmatically by adding `patch_all()` or `patch(langchain=True)` before the first import of `langchain` in your application.
+
+Note that using `ddtrace-run` or `patch_all()` will also enable the `requests` and `aiohttp` integrations which trace HTTP requests to LLM providers, as well as the `openai` integration which traces requests to the OpenAI library.
+
+```python
+from ddtrace import config, patch
+
+# Note: be sure to configure the integration before calling ``patch()``!
+# e.g. config.langchain["logs_enabled"] = True
+
+patch(langchain=True)
+
+# to trace synchronous HTTP requests
+# patch(langchain=True, requests=True)
+
+# to trace asynchronous HTTP requests (to the OpenAI library)
+# patch(langchain=True, aiohttp=True)
+
+# to include underlying OpenAI spans from the OpenAI integration
+# patch(langchain=True, openai=True)
+```
+
+See the [APM Python library documentation](https://ddtrace.readthedocs.io/en/stable/installation_quickstart.html) for more advanced usage.
+
+
+## Configuration
+
+See the [APM Python library documentation](https://ddtrace.readthedocs.io/en/stable/integrations.html#langchain) for all the available configuration options.
+
+
+### Log Prompt & Completion Sampling
+
+To enable log prompt and completion sampling, set the `DD_LANGCHAIN_LOGS_ENABLED=1` environment variable. By default, 10% of traced requests will emit logs containing the prompts and completions.
+
+To adjust the log sample rate, see the [APM library documentation](https://ddtrace.readthedocs.io/en/stable/integrations.html#langchain).
+
+**Note**: Logs submission requires `DD_API_KEY` to be specified when running `ddtrace-run`.
+
+
+## Troubleshooting
+
+Need help? Create an issue on [ddtrace](https://github.com/DataDog/dd-trace-py) or contact [Datadog support](https://docs.datadoghq.com/help/).
diff --git a/langchain_md_files/integrations/providers/datadog_logs.mdx b/langchain_md_files/integrations/providers/datadog_logs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..eb365eed922cf2d7f1d31bf2ee246a9de95a7e3c
--- /dev/null
+++ b/langchain_md_files/integrations/providers/datadog_logs.mdx
@@ -0,0 +1,19 @@
+# Datadog Logs
+
+>[Datadog](https://www.datadoghq.com/) is a monitoring and analytics platform for cloud-scale applications.
+
+## Installation and Setup
+
+```bash
+pip install datadog_api_client
+```
+
+We must initialize the loader with the Datadog API key and APP key, and we need to set up the query to extract the desired logs.
+
+## Document Loader
+
+See a [usage example](/docs/integrations/document_loaders/datadog_logs).
+ +```python +from langchain_community.document_loaders import DatadogLogsLoader +``` diff --git a/langchain_md_files/integrations/providers/dataforseo.mdx b/langchain_md_files/integrations/providers/dataforseo.mdx new file mode 100644 index 0000000000000000000000000000000000000000..37d8884fa4b42b9cd3b0064078db99967ec49d80 --- /dev/null +++ b/langchain_md_files/integrations/providers/dataforseo.mdx @@ -0,0 +1,52 @@ +# DataForSEO + +>[DataForSeo](https://dataforseo.com/) provides comprehensive SEO and digital marketing data solutions via API. + +This page provides instructions on how to use the DataForSEO search APIs within LangChain. + +## Installation and Setup + +Get a [DataForSEO API Access login and password](https://app.dataforseo.com/register), and set them as environment variables +(`DATAFORSEO_LOGIN` and `DATAFORSEO_PASSWORD` respectively). + +```python +import os + +os.environ["DATAFORSEO_LOGIN"] = "your_login" +os.environ["DATAFORSEO_PASSWORD"] = "your_password" +``` + + +## Utility + +The `DataForSEO` utility wraps the API. To import this utility, use: + +```python +from langchain_community.utilities.dataforseo_api_search import DataForSeoAPIWrapper +``` + +For a detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/dataforseo). + +## Tool + +You can also load this wrapper as a Tool to use with an Agent: + +```python +from langchain.agents import load_tools +tools = load_tools(["dataforseo-api-search"]) +``` + +This will load the following tools: + +```python +from langchain_community.tools import DataForSeoAPISearchRun +from langchain_community.tools import DataForSeoAPISearchResults +``` + +## Example usage + +```python +dataforseo = DataForSeoAPIWrapper(api_login="your_login", api_password="your_password") +result = dataforseo.run("Bill Gates") +print(result) +``` diff --git a/langchain_md_files/integrations/providers/dataherald.mdx b/langchain_md_files/integrations/providers/dataherald.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9b589f6ae6cd30e579343027df7cbb57cc209271 --- /dev/null +++ b/langchain_md_files/integrations/providers/dataherald.mdx @@ -0,0 +1,64 @@ +# Dataherald + +>[Dataherald](https://www.dataherald.com) is a natural language-to-SQL. + +This page covers how to use the `Dataherald API` within LangChain. + +## Installation and Setup +- Install requirements with +```bash +pip install dataherald +``` +- Go to dataherald and sign up [here](https://www.dataherald.com) +- Create an app and get your `API KEY` +- Set your `API KEY` as an environment variable `DATAHERALD_API_KEY` + + +## Wrappers + +### Utility + +There exists a DataheraldAPIWrapper utility which wraps this API. To import this utility: + +```python +from langchain_community.utilities.dataherald import DataheraldAPIWrapper +``` + +For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/dataherald). 
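+
+As a minimal sketch (the connection ID below is a placeholder you create in your Dataherald account, and
+`DATAHERALD_API_KEY` must be set in the environment), the wrapper can also be called directly:
+
+```python
+from langchain_community.utilities.dataherald import DataheraldAPIWrapper
+
+# Placeholder connection ID; create one in your Dataherald account first.
+api_wrapper = DataheraldAPIWrapper(db_connection_id="<your-db-connection-id>")
+print(api_wrapper.run("How many employees are in the company?"))
+```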
+
+### Tool
+
+You can use the tool in an agent like this:
+```python
+from langchain_community.utilities.dataherald import DataheraldAPIWrapper
+from langchain_community.tools.dataherald.tool import DataheraldTextToSQL
+from langchain_openai import ChatOpenAI
+from langchain import hub
+from langchain.agents import AgentExecutor, create_react_agent
+
+api_wrapper = DataheraldAPIWrapper(db_connection_id="")
+tool = DataheraldTextToSQL(api_wrapper=api_wrapper)
+tools = [tool]
+llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
+prompt = hub.pull("hwchase17/react")
+agent = create_react_agent(llm, tools, prompt)
+agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
+agent_executor.invoke({"input":"Return the sql for this question: How many employees are in the company?"})
+```
+
+Output
+```shell
+> Entering new AgentExecutor chain...
+I need to use a tool that can convert this question into SQL.
+Action: dataherald
+Action Input: How many employees are in the company?Answer: SELECT
+    COUNT(*) FROM employeesI now know the final answer
+Final Answer: SELECT
+    COUNT(*)
+FROM
+    employees
+
+> Finished chain.
+{'input': 'Return the sql for this question: How many employees are in the company?', 'output': "SELECT \n COUNT(*)\nFROM \n employees"}
+```
+
+For more information on tools, see [this page](/docs/how_to/tools_builtin).
diff --git a/langchain_md_files/integrations/providers/dedoc.mdx b/langchain_md_files/integrations/providers/dedoc.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3f2aaa206e325e127e555f3a2fb6ca64e951296d
--- /dev/null
+++ b/langchain_md_files/integrations/providers/dedoc.mdx
@@ -0,0 +1,56 @@
+# Dedoc
+
+>[Dedoc](https://dedoc.readthedocs.io) is an [open-source](https://github.com/ispras/dedoc)
+library/service that extracts texts, tables, attached files and document structure
+(e.g., titles, list items, etc.) from files of various formats.
+
+`Dedoc` supports `DOCX`, `XLSX`, `PPTX`, `EML`, `HTML`, `PDF`, images and more.
+The full list of supported formats can be found [here](https://dedoc.readthedocs.io/en/latest/#id1).
+
+## Installation and Setup
+
+### Dedoc library
+
+You can install `Dedoc` using `pip`.
+In this case, you will need to install its dependencies;
+please go [here](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html)
+to get more information.
+
+```bash
+pip install dedoc
+```
+
+### Dedoc API
+
+If you are going to use the `Dedoc` API, you don't need to install the `dedoc` library.
+In this case, you should run the `Dedoc` service, e.g. as a `Docker` container (please see
+[the documentation](https://dedoc.readthedocs.io/en/latest/getting_started/installation.html#install-and-run-dedoc-using-docker)
+for more details):
+
+```bash
+docker pull dedocproject/dedoc
+docker run -p 1231:1231 dedocproject/dedoc
+```
+
+## Document Loader
+
+* For handling files of any format (supported by `Dedoc`), you can use `DedocFileLoader`:
+
+  ```python
+  from langchain_community.document_loaders import DedocFileLoader
+  ```
+
+* For handling PDF files (with or without a textual layer), you can use `DedocPDFLoader`:
+
+  ```python
+  from langchain_community.document_loaders import DedocPDFLoader
+  ```
+
+* For handling files of any format without installing the library,
+you can use the `Dedoc API` with `DedocAPIFileLoader`:
+
+  ```python
+  from langchain_community.document_loaders import DedocAPIFileLoader
+  ```
+
+Please see a [usage example](/docs/integrations/document_loaders/dedoc) for more details.
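+
+For instance, a minimal `DedocFileLoader` call might look like the following (the file name is a placeholder;
+any format supported by `Dedoc` should work):
+
+```python
+from langchain_community.document_loaders import DedocFileLoader
+
+loader = DedocFileLoader("example.docx")  # placeholder path to a local file
+docs = loader.load()
+print(docs[0].page_content[:100])
+```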
diff --git a/langchain_md_files/integrations/providers/deepinfra.mdx b/langchain_md_files/integrations/providers/deepinfra.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..5eb2b1b38770e4170f87ad72f19f030811752fba
--- /dev/null
+++ b/langchain_md_files/integrations/providers/deepinfra.mdx
@@ -0,0 +1,53 @@
+# DeepInfra
+
+>[DeepInfra](https://deepinfra.com/docs) allows us to run the
+> [latest machine learning models](https://deepinfra.com/models) with ease.
+> DeepInfra takes care of all the heavy lifting related to running, scaling and monitoring
+> the models. Users can focus on their application and integrate the models with simple REST API calls.
+
+>DeepInfra provides [examples](https://deepinfra.com/docs/advanced/langchain) of integration with LangChain.
+
+This page covers how to use the `DeepInfra` ecosystem within `LangChain`.
+It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.
+
+## Installation and Setup
+
+- Get your DeepInfra API key from [deepinfra.com](https://deepinfra.com/).
+- Set it as an environment variable (`DEEPINFRA_API_TOKEN`).
+
+## Available Models
+
+DeepInfra provides a range of open-source LLMs ready for deployment.
+
+You can see supported models for
+[text-generation](https://deepinfra.com/models?type=text-generation) and
+[embeddings](https://deepinfra.com/models?type=embeddings).
+
+You can view a [list of request and response parameters](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api).
+
+Chat models [follow the OpenAI API](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api?example=openai-http).
+
+
+## LLM
+
+See a [usage example](/docs/integrations/llms/deepinfra).
+
+```python
+from langchain_community.llms import DeepInfra
+```
+
+## Embeddings
+
+See a [usage example](/docs/integrations/text_embedding/deepinfra).
+
+```python
+from langchain_community.embeddings import DeepInfraEmbeddings
+```
+
+## Chat Models
+
+See a [usage example](/docs/integrations/chat/deepinfra).
+
+```python
+from langchain_community.chat_models import ChatDeepInfra
+```
diff --git a/langchain_md_files/integrations/providers/deepsparse.mdx b/langchain_md_files/integrations/providers/deepsparse.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..562d9e3e76512e3b717360e88172ddbca3f92877
--- /dev/null
+++ b/langchain_md_files/integrations/providers/deepsparse.mdx
@@ -0,0 +1,34 @@
+# DeepSparse
+
+This page covers how to use the [DeepSparse](https://github.com/neuralmagic/deepsparse) inference runtime within LangChain.
+It is broken into two parts: installation and setup, and then examples of DeepSparse usage.
+ +## Installation and Setup + +- Install the Python package with `pip install deepsparse` +- Choose a [SparseZoo model](https://sparsezoo.neuralmagic.com/?useCase=text_generation) or export a support model to ONNX [using Optimum](https://github.com/neuralmagic/notebooks/blob/main/notebooks/opt-text-generation-deepsparse-quickstart/OPT_Text_Generation_DeepSparse_Quickstart.ipynb) + + +## LLMs + +There exists a DeepSparse LLM wrapper, which you can access with: + +```python +from langchain_community.llms import DeepSparse +``` + +It provides a unified interface for all models: + +```python +llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none') + +print(llm.invoke('def fib():')) +``` + +Additional parameters can be passed using the `config` parameter: + +```python +config = {'max_generated_tokens': 256} + +llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none', config=config) +``` diff --git a/langchain_md_files/integrations/providers/diffbot.mdx b/langchain_md_files/integrations/providers/diffbot.mdx new file mode 100644 index 0000000000000000000000000000000000000000..1a9e9934642f3ef383f30a660b9576511620b9ec --- /dev/null +++ b/langchain_md_files/integrations/providers/diffbot.mdx @@ -0,0 +1,29 @@ +# Diffbot + +> [Diffbot](https://docs.diffbot.com/docs) is a suite of ML-based products that make it easy to structure and integrate web data. + +## Installation and Setup + +[Get a free Diffbot API token](https://app.diffbot.com/get-started/) and [follow these instructions](https://docs.diffbot.com/reference/authentication) to authenticate your requests. + +## Document Loader + +Diffbot's [Extract API](https://docs.diffbot.com/reference/extract-introduction) is a service that structures and normalizes data from web pages. + +Unlike traditional web scraping tools, `Diffbot Extract` doesn't require any rules to read the content on a page. It uses a computer vision model to classify a page into one of 20 possible types, and then transforms raw HTML markup into JSON. The resulting structured JSON follows a consistent [type-based ontology](https://docs.diffbot.com/docs/ontology), which makes it easy to extract data from multiple different web sources with the same schema. + +See a [usage example](/docs/integrations/document_loaders/diffbot). + +```python +from langchain_community.document_loaders import DiffbotLoader +``` + +## Graphs + +Diffbot's [Natural Language Processing API](https://www.diffbot.com/products/natural-language/) allows for the extraction of entities, relationships, and semantic meaning from unstructured text data. + +See a [usage example](/docs/integrations/graphs/diffbot). + +```python +from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer +``` diff --git a/langchain_md_files/integrations/providers/dingo.mdx b/langchain_md_files/integrations/providers/dingo.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b12a6a72cbc6c95cf58fbd4c72bda5831be302db --- /dev/null +++ b/langchain_md_files/integrations/providers/dingo.mdx @@ -0,0 +1,31 @@ +# DingoDB + +>[DingoDB](https://github.com/dingodb) is a distributed multi-modal vector +> database. It combines the features of a data lake and a vector database, +> allowing for the storage of any type of data (key-value, PDF, audio, +> video, etc.) regardless of its size. 
Utilizing DingoDB, you can construct +> your own Vector Ocean (the next-generation data architecture following data +> warehouse and data lake). This enables +> the analysis of both structured and unstructured data through +> a singular SQL with exceptionally low latency in real time. + +## Installation and Setup + +Install the Python SDK + +```bash +pip install dingodb +``` + +## VectorStore + +There exists a wrapper around DingoDB indexes, allowing you to use it as a vectorstore, +whether for semantic search or example selection. + +To import this vectorstore: + +```python +from langchain_community.vectorstores import Dingo +``` + +For a more detailed walkthrough of the DingoDB wrapper, see [this notebook](/docs/integrations/vectorstores/dingo) diff --git a/langchain_md_files/integrations/providers/discord.mdx b/langchain_md_files/integrations/providers/discord.mdx new file mode 100644 index 0000000000000000000000000000000000000000..34c286fa618bf29c918d54b065708db1ace42ffe --- /dev/null +++ b/langchain_md_files/integrations/providers/discord.mdx @@ -0,0 +1,38 @@ +# Discord + +>[Discord](https://discord.com/) is a VoIP and instant messaging social platform. Users have the ability to communicate +> with voice calls, video calls, text messaging, media and files in private chats or as part of communities called +> "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links. + +## Installation and Setup + +```bash +pip install pandas +``` + +Follow these steps to download your `Discord` data: + +1. Go to your **User Settings** +2. Then go to **Privacy and Safety** +3. Head over to the **Request all of my Data** and click on **Request Data** button + +It might take 30 days for you to receive your data. You'll receive an email at the address which is registered +with Discord. That email will have a download button using which you would be able to download your personal Discord data. + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/discord). + +**NOTE:** The `DiscordChatLoader` is not the `ChatLoader` but a `DocumentLoader`. +It is used to load the data from the `Discord` data dump. +For the `ChatLoader` see Chat Loader section below. + +```python +from langchain_community.document_loaders import DiscordChatLoader +``` + +## Chat Loader + +See a [usage example](/docs/integrations/chat_loaders/discord). + diff --git a/langchain_md_files/integrations/providers/docarray.mdx b/langchain_md_files/integrations/providers/docarray.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d1d41a19834d1cd8bc1327e1840b60b739faa6ec --- /dev/null +++ b/langchain_md_files/integrations/providers/docarray.mdx @@ -0,0 +1,37 @@ +# DocArray + +> [DocArray](https://docarray.jina.ai/) is a library for nested, unstructured, multimodal data in transit, +> including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process, +> embed, search, recommend, store, and transfer multimodal data with a Pythonic API. + + +## Installation and Setup + +We need to install `docarray` python package. + +```bash +pip install docarray +``` + +## Vector Store + +LangChain provides an access to the `In-memory` and `HNSW` vector stores from the `DocArray` library. + +See a [usage example](/docs/integrations/vectorstores/docarray_hnsw). 
+ +```python +from langchain_community.vectorstores import DocArrayHnswSearch +``` +See a [usage example](/docs/integrations/vectorstores/docarray_in_memory). + +```python +from langchain_community.vectorstores import DocArrayInMemorySearch +``` + +## Retriever + +See a [usage example](/docs/integrations/retrievers/docarray_retriever). + +```python +from langchain_community.retrievers import DocArrayRetriever +``` diff --git a/langchain_md_files/integrations/providers/doctran.mdx b/langchain_md_files/integrations/providers/doctran.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c85844766e1c732db5431410311caa2498230715 --- /dev/null +++ b/langchain_md_files/integrations/providers/doctran.mdx @@ -0,0 +1,37 @@ +# Doctran + +>[Doctran](https://github.com/psychic-api/doctran) is a Python package. It uses LLMs and open-source +> NLP libraries to transform raw text into clean, structured, information-dense documents +> that are optimized for vector space retrieval. You can think of `Doctran` as a black box where +> messy strings go in and nice, clean, labelled strings come out. + + +## Installation and Setup + +```bash +pip install doctran +``` + +## Document Transformers + +### Document Interrogator + +See a [usage example for DoctranQATransformer](/docs/integrations/document_transformers/doctran_interrogate_document). + +```python +from langchain_community.document_transformers import DoctranQATransformer +``` +### Property Extractor + +See a [usage example for DoctranPropertyExtractor](/docs/integrations/document_transformers/doctran_extract_properties). + +```python +from langchain_community.document_transformers import DoctranPropertyExtractor +``` +### Document Translator + +See a [usage example for DoctranTextTranslator](/docs/integrations/document_transformers/doctran_translate_document). + +```python +from langchain_community.document_transformers import DoctranTextTranslator +``` diff --git a/langchain_md_files/integrations/providers/docugami.mdx b/langchain_md_files/integrations/providers/docugami.mdx new file mode 100644 index 0000000000000000000000000000000000000000..dcd0566c4a773deeb3382e68ad7ec2f4c489b17e --- /dev/null +++ b/langchain_md_files/integrations/providers/docugami.mdx @@ -0,0 +1,21 @@ +# Docugami + +>[Docugami](https://docugami.com) converts business documents into a Document XML Knowledge Graph, generating forests +> of XML semantic trees representing entire documents. This is a rich representation that includes the semantic and +> structural characteristics of various chunks in the document as an XML tree. + +## Installation and Setup + + +```bash +pip install dgml-utils +pip install docugami-langchain +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/docugami). + +```python +from docugami_langchain.document_loaders import DocugamiLoader +``` diff --git a/langchain_md_files/integrations/providers/docusaurus.mdx b/langchain_md_files/integrations/providers/docusaurus.mdx new file mode 100644 index 0000000000000000000000000000000000000000..e137d627724c031f662ad7762383257fc12a823d --- /dev/null +++ b/langchain_md_files/integrations/providers/docusaurus.mdx @@ -0,0 +1,20 @@ +# Docusaurus + +>[Docusaurus](https://docusaurus.io/) is a static-site generator which provides +> out-of-the-box documentation features. + + +## Installation and Setup + + +```bash +pip install -U beautifulsoup4 lxml +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/docusaurus). 
+ +```python +from langchain_community.document_loaders import DocusaurusLoader +``` diff --git a/langchain_md_files/integrations/providers/dria.mdx b/langchain_md_files/integrations/providers/dria.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7e3c5cdbace43908d8ad5d9d0ffbc399009c86b8 --- /dev/null +++ b/langchain_md_files/integrations/providers/dria.mdx @@ -0,0 +1,25 @@ +# Dria + +>[Dria](https://dria.co/) is a hub of public RAG models for developers to +> both contribute and utilize a shared embedding lake. + +See more details about the LangChain integration with Dria +at [this page](https://dria.co/docs/integrations/langchain). + +## Installation and Setup + +You have to install a python package: + +```bash +pip install dria +``` + +You have to get an API key from Dria. You can get it by signing up at [Dria](https://dria.co/). + +## Retrievers + +See a [usage example](/docs/integrations/retrievers/dria_index). + +```python +from langchain_community.retrievers import DriaRetriever +``` diff --git a/langchain_md_files/integrations/providers/dropbox.mdx b/langchain_md_files/integrations/providers/dropbox.mdx new file mode 100644 index 0000000000000000000000000000000000000000..590a58b9a681a714b7642e3790f95624ea47ca3b --- /dev/null +++ b/langchain_md_files/integrations/providers/dropbox.mdx @@ -0,0 +1,21 @@ +# Dropbox + +>[Dropbox](https://en.wikipedia.org/wiki/Dropbox) is a file hosting service that brings everything-traditional +> files, cloud content, and web shortcuts together in one place. + + +## Installation and Setup + +See the detailed [installation guide](/docs/integrations/document_loaders/dropbox#prerequisites). + +```bash +pip install -U dropbox +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/dropbox). + +```python +from langchain_community.document_loaders import DropboxLoader +``` diff --git a/langchain_md_files/integrations/providers/duckdb.mdx b/langchain_md_files/integrations/providers/duckdb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f965e129b9536d10bd25750b2b0e10fcd5cdb411 --- /dev/null +++ b/langchain_md_files/integrations/providers/duckdb.mdx @@ -0,0 +1,19 @@ +# DuckDB + +>[DuckDB](https://duckdb.org/) is an in-process SQL OLAP database management system. + +## Installation and Setup + +First, you need to install `duckdb` python package. + +```bash +pip install duckdb +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/duckdb). + +```python +from langchain_community.document_loaders import DuckDBLoader +``` diff --git a/langchain_md_files/integrations/providers/duckduckgo_search.mdx b/langchain_md_files/integrations/providers/duckduckgo_search.mdx new file mode 100644 index 0000000000000000000000000000000000000000..29ab01981f45fb30d346e1b1d75759fd9dce408b --- /dev/null +++ b/langchain_md_files/integrations/providers/duckduckgo_search.mdx @@ -0,0 +1,25 @@ +# DuckDuckGo Search + +>[DuckDuckGo Search](https://github.com/deedy5/duckduckgo_search) is a package that +> searches for words, documents, images, videos, news, maps and text +> translation using the `DuckDuckGo.com` search engine. It is downloading files +> and images to a local hard drive. + +## Installation and Setup + +You have to install a python package: + +```bash +pip install duckduckgo-search +``` + +## Tools + +See a [usage example](/docs/integrations/tools/ddg). 
+ +There are two tools available: + +```python +from langchain_community.tools import DuckDuckGoSearchRun +from langchain_community.tools import DuckDuckGoSearchResults +``` diff --git a/langchain_md_files/integrations/providers/e2b.mdx b/langchain_md_files/integrations/providers/e2b.mdx new file mode 100644 index 0000000000000000000000000000000000000000..ee0ca085aa440a5569b92b35719af1887e43dd30 --- /dev/null +++ b/langchain_md_files/integrations/providers/e2b.mdx @@ -0,0 +1,20 @@ +# E2B + +>[E2B](https://e2b.dev/) provides open-source secure sandboxes +> for AI-generated code execution. See more [here](https://github.com/e2b-dev). + +## Installation and Setup + +You have to install a python package: + +```bash +pip install e2b_code_interpreter +``` + +## Tool + +See a [usage example](/docs/integrations/tools/e2b_data_analysis). + +```python +from langchain_community.tools import E2BDataAnalysisTool +``` diff --git a/langchain_md_files/integrations/providers/edenai.mdx b/langchain_md_files/integrations/providers/edenai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a33e92ec6a93c9ea45b50063879fc04d97462104 --- /dev/null +++ b/langchain_md_files/integrations/providers/edenai.mdx @@ -0,0 +1,62 @@ +# Eden AI + +>[Eden AI](https://docs.edenai.co/docs/getting-started-with-eden-ai) user interface (UI) +> is designed for handling the AI projects. With `Eden AI Portal`, +> you can perform no-code AI using the best engines for the market. + + +## Installation and Setup + +Accessing the Eden AI API requires an API key, which you can get by +[creating an account](https://app.edenai.run/user/register) and +heading [here](https://app.edenai.run/admin/account/settings). + +## LLMs + +See a [usage example](/docs/integrations/llms/edenai). + +```python +from langchain_community.llms import EdenAI + +``` + +## Chat models + +See a [usage example](/docs/integrations/chat/edenai). + +```python +from langchain_community.chat_models.edenai import ChatEdenAI +``` + +## Embedding models + +See a [usage example](/docs/integrations/text_embedding/edenai). + +```python +from langchain_community.embeddings.edenai import EdenAiEmbeddings +``` + +## Tools + +Eden AI provides a list of tools that grants your Agent the ability to do multiple tasks, such as: +* speech to text +* text to speech +* text explicit content detection +* image explicit content detection +* object detection +* OCR invoice parsing +* OCR ID parsing + +See a [usage example](/docs/integrations/tools/edenai_tools). + +```python +from langchain_community.tools.edenai import ( + EdenAiExplicitImageTool, + EdenAiObjectDetectionTool, + EdenAiParsingIDTool, + EdenAiParsingInvoiceTool, + EdenAiSpeechToTextTool, + EdenAiTextModerationTool, + EdenAiTextToSpeechTool, +) +``` diff --git a/langchain_md_files/integrations/providers/elasticsearch.mdx b/langchain_md_files/integrations/providers/elasticsearch.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c3b123d47b80f02bb8fdf15649ebeb6809e18daa --- /dev/null +++ b/langchain_md_files/integrations/providers/elasticsearch.mdx @@ -0,0 +1,108 @@ +# Elasticsearch + +> [Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine. +> It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free +> JSON documents. 
+ +## Installation and Setup + +### Setup Elasticsearch + +There are two ways to get started with Elasticsearch: + +#### Install Elasticsearch on your local machine via Docker + +Example: Run a single-node Elasticsearch instance with security disabled. +This is not recommended for production use. + +```bash + docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0 +``` + +#### Deploy Elasticsearch on Elastic Cloud + +`Elastic Cloud` is a managed Elasticsearch service. Signup for a [free trial](https://cloud.elastic.co/registration?utm_source=langchain&utm_content=documentation). + +### Install Client + +```bash +pip install elasticsearch +pip install langchain-elasticsearch +``` + +## Embedding models + +See a [usage example](/docs/integrations/text_embedding/elasticsearch). + +```python +from langchain_elasticsearch import ElasticsearchEmbeddings +``` + +## Vector store + +See a [usage example](/docs/integrations/vectorstores/elasticsearch). + +```python +from langchain_elasticsearch import ElasticsearchStore +``` + +### Third-party integrations + +#### EcloudESVectorStore + +```python +from langchain_community.vectorstores.ecloud_vector_search import EcloudESVectorStore +``` + +## Retrievers + +### ElasticsearchRetriever + +The `ElasticsearchRetriever` enables flexible access to all Elasticsearch features +through the Query DSL. + +See a [usage example](/docs/integrations/retrievers/elasticsearch_retriever). + +```python +from langchain_elasticsearch import ElasticsearchRetriever +``` + +### BM25 + +See a [usage example](/docs/integrations/retrievers/elastic_search_bm25). + +```python +from langchain_community.retrievers import ElasticSearchBM25Retriever +``` +## Memory + +See a [usage example](/docs/integrations/memory/elasticsearch_chat_message_history). + +```python +from langchain_elasticsearch import ElasticsearchChatMessageHistory +``` + +## LLM cache + +See a [usage example](/docs/integrations/llm_caching/#elasticsearch-cache). + +```python +from langchain_elasticsearch import ElasticsearchCache +``` + +## Byte Store + +See a [usage example](/docs/integrations/stores/elasticsearch). + +```python +from langchain_elasticsearch import ElasticsearchEmbeddingsCache +``` + +## Chain + +It is a chain for interacting with Elasticsearch Database. + +```python +from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain +``` + diff --git a/langchain_md_files/integrations/providers/elevenlabs.mdx b/langchain_md_files/integrations/providers/elevenlabs.mdx new file mode 100644 index 0000000000000000000000000000000000000000..563527304789d3774c508c7efb1b3fee2d61b194 --- /dev/null +++ b/langchain_md_files/integrations/providers/elevenlabs.mdx @@ -0,0 +1,27 @@ +# ElevenLabs + +>[ElevenLabs](https://elevenlabs.io/about) is a voice AI research & deployment company +> with a mission to make content universally accessible in any language & voice. +> +>`ElevenLabs` creates the most realistic, versatile and contextually-aware +> AI audio, providing the ability to generate speech in hundreds of +> new and existing voices in 29 languages. + +## Installation and Setup + +First, you need to set up an ElevenLabs account. You can follow the +[instructions here](https://docs.elevenlabs.io/welcome/introduction). + +Install the Python package: + +```bash +pip install elevenlabs +``` + +## Tools + +See a [usage example](/docs/integrations/tools/eleven_labs_tts). 
+ +```python +from langchain_community.tools import ElevenLabsText2SpeechTool +``` diff --git a/langchain_md_files/integrations/providers/epsilla.mdx b/langchain_md_files/integrations/providers/epsilla.mdx new file mode 100644 index 0000000000000000000000000000000000000000..78da4d6a984b4e6b7d5b74e815c065753c9cd825 --- /dev/null +++ b/langchain_md_files/integrations/providers/epsilla.mdx @@ -0,0 +1,23 @@ +# Epsilla + +This page covers how to use [Epsilla](https://github.com/epsilla-cloud/vectordb) within LangChain. +It is broken into two parts: installation and setup, and then references to specific Epsilla wrappers. + +## Installation and Setup + +- Install the Python SDK with `pip/pip3 install pyepsilla` + +## Wrappers + +### VectorStore + +There exists a wrapper around Epsilla vector databases, allowing you to use it as a vectorstore, +whether for semantic search or example selection. + +To import this vectorstore: + +```python +from langchain_community.vectorstores import Epsilla +``` + +For a more detailed walkthrough of the Epsilla wrapper, see [this notebook](/docs/integrations/vectorstores/epsilla) \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/etherscan.mdx b/langchain_md_files/integrations/providers/etherscan.mdx new file mode 100644 index 0000000000000000000000000000000000000000..cc4e197b2899e0ba573982158c493a56915199ca --- /dev/null +++ b/langchain_md_files/integrations/providers/etherscan.mdx @@ -0,0 +1,18 @@ +# Etherscan + +>[Etherscan](https://docs.etherscan.io/) is the leading blockchain explorer, +> search, API and analytics platform for `Ethereum`, a decentralized smart contracts platform. + + +## Installation and Setup + +See the detailed [installation guide](/docs/integrations/document_loaders/etherscan). + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/etherscan). + +```python +from langchain_community.document_loaders import EtherscanLoader +``` diff --git a/langchain_md_files/integrations/providers/evernote.mdx b/langchain_md_files/integrations/providers/evernote.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a58c3fc0cf7cd62f5541fb37e865f34388638718 --- /dev/null +++ b/langchain_md_files/integrations/providers/evernote.mdx @@ -0,0 +1,20 @@ +# EverNote + +>[EverNote](https://evernote.com/) is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual "notebooks" and can be tagged, annotated, edited, searched, and exported. + +## Installation and Setup + +First, you need to install `lxml` and `html2text` python packages. + +```bash +pip install lxml +pip install html2text +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/evernote). + +```python +from langchain_community.document_loaders import EverNoteLoader +``` diff --git a/langchain_md_files/integrations/providers/facebook.mdx b/langchain_md_files/integrations/providers/facebook.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6734c9462e5bb34bd1ae22cdcdcdddc40869e730 --- /dev/null +++ b/langchain_md_files/integrations/providers/facebook.mdx @@ -0,0 +1,93 @@ +# Facebook - Meta + +>[Meta Platforms, Inc.](https://www.facebook.com/), doing business as `Meta`, formerly +> named `Facebook, Inc.`, and `TheFacebook, Inc.`, is an American multinational technology +> conglomerate. The company owns and operates `Facebook`, `Instagram`, `Threads`, +> and `WhatsApp`, among other products and services. 
+ +## Embedding models + +### LASER + +>[LASER](https://github.com/facebookresearch/LASER) is a Python library developed by +> the `Meta AI Research` team and used for +> creating multilingual sentence embeddings for +> [over 147 languages as of 2/25/2024](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200) + +```bash +pip install laser_encoders +``` + +See a [usage example](/docs/integrations/text_embedding/laser). + +```python +from langchain_community.embeddings.laser import LaserEmbeddings +``` + +## Document loaders + +### Facebook Messenger + +>[Messenger](https://en.wikipedia.org/wiki/Messenger_(software)) is an instant messaging app and +> platform developed by `Meta Platforms`. Originally developed as `Facebook Chat` in 2008, the company revamped its +> messaging service in 2010. + +See a [usage example](/docs/integrations/document_loaders/facebook_chat). + +```python +from langchain_community.document_loaders import FacebookChatLoader +``` + +## Vector stores + +### Facebook Faiss + +>[Facebook AI Similarity Search (Faiss)](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) +> is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that +> search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting +> code for evaluation and parameter tuning. + +[Faiss documentation](https://faiss.ai/). + +We need to install `faiss` python package. + +```bash +pip install faiss-gpu # For CUDA 7.5+ supported GPU's. +``` + +OR + +```bash +pip install faiss-cpu # For CPU Installation +``` + +See a [usage example](/docs/integrations/vectorstores/faiss). + +```python +from langchain_community.vectorstores import FAISS +``` + +## Chat loaders + +### Facebook Messenger + +>[Messenger](https://en.wikipedia.org/wiki/Messenger_(software)) is an instant messaging app and +> platform developed by `Meta Platforms`. Originally developed as `Facebook Chat` in 2008, the company revamped its +> messaging service in 2010. + +See a [usage example](/docs/integrations/chat_loaders/facebook). + +```python +from langchain_community.chat_loaders.facebook_messenger import ( + FolderFacebookMessengerChatLoader, + SingleFileFacebookMessengerChatLoader, +) +``` + +### Facebook WhatsApp + +See a [usage example](/docs/integrations/chat_loaders/whatsapp). + +```python +from langchain_community.chat_loaders.whatsapp import WhatsAppChatLoader +``` diff --git a/langchain_md_files/integrations/providers/fauna.mdx b/langchain_md_files/integrations/providers/fauna.mdx new file mode 100644 index 0000000000000000000000000000000000000000..252c0101d2e7c1d1f6c130ad5f3f36f6e97024ee --- /dev/null +++ b/langchain_md_files/integrations/providers/fauna.mdx @@ -0,0 +1,25 @@ +# Fauna + +>[Fauna](https://fauna.com/) is a distributed document-relational database +> that combines the flexibility of documents with the power of a relational, +> ACID compliant database that scales across regions, clouds or the globe. + + +## Installation and Setup + +We have to get the secret key. +See the detailed [guide](https://docs.fauna.com/fauna/current/learn/security_model/). + +We have to install the `fauna` package. + +```bash +pip install -U fauna +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/fauna). 
+ +```python +from langchain_community.document_loaders.fauna import FaunaLoader +``` diff --git a/langchain_md_files/integrations/providers/figma.mdx b/langchain_md_files/integrations/providers/figma.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6b108aaa21e6fc287277f173fd87d2d656a91bfc --- /dev/null +++ b/langchain_md_files/integrations/providers/figma.mdx @@ -0,0 +1,21 @@ +# Figma + +>[Figma](https://www.figma.com/) is a collaborative web application for interface design. + +## Installation and Setup + +The Figma API requires an `access token`, `node_ids`, and a `file key`. + +The `file key` can be pulled from the URL: https://www.figma.com/file/{filekey}/sampleFilename + +`Node IDs` are also available in the URL. Click on anything and look for the '?node-id={node_id}' param. + +`Access token` [instructions](https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens). + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/figma). + +```python +from langchain_community.document_loaders import FigmaFileLoader +``` diff --git a/langchain_md_files/integrations/providers/flyte.mdx b/langchain_md_files/integrations/providers/flyte.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5fe20d896517cc01eaf62f85eb6c33e7cdcb7b46 --- /dev/null +++ b/langchain_md_files/integrations/providers/flyte.mdx @@ -0,0 +1,153 @@ +# Flyte + +> [Flyte](https://github.com/flyteorg/flyte) is an open-source orchestrator that facilitates building production-grade data and ML pipelines. +> It is built for scalability and reproducibility, leveraging Kubernetes as its underlying platform. + +The purpose of this notebook is to demonstrate the integration of a `FlyteCallback` into your Flyte task, enabling you to effectively monitor and track your LangChain experiments. + +## Installation & Setup + +- Install the Flytekit library by running the command `pip install flytekit`. +- Install the Flytekit-Envd plugin by running the command `pip install flytekitplugins-envd`. +- Install LangChain by running the command `pip install langchain`. +- Install [Docker](https://docs.docker.com/engine/install/) on your system. + +## Flyte Tasks + +A Flyte [task](https://docs.flyte.org/en/latest/user_guide/basics/tasks.html) serves as the foundational building block of Flyte. +To execute LangChain experiments, you need to write Flyte tasks that define the specific steps and operations involved. + +NOTE: The [getting started guide](https://docs.flyte.org/projects/cookbook/en/latest/index.html) offers detailed, step-by-step instructions on installing Flyte locally and running your initial Flyte pipeline. + +First, import the necessary dependencies to support your LangChain experiments. + +```python +import os + +from flytekit import ImageSpec, task +from langchain.agents import AgentType, initialize_agent, load_tools +from langchain.callbacks import FlyteCallbackHandler +from langchain.chains import LLMChain +from langchain_openai import ChatOpenAI +from langchain_core.prompts import PromptTemplate +from langchain_core.messages import HumanMessage +``` + +Set up the necessary environment variables to utilize the OpenAI API and Serp API: + +```python +# Set OpenAI API key +os.environ["OPENAI_API_KEY"] = "" + +# Set Serp API key +os.environ["SERPAPI_API_KEY"] = "" +``` + +Replace the empty strings with your respective API keys obtained from OpenAI and Serp API. + +To guarantee reproducibility of your pipelines, Flyte tasks are containerized. 
+Each Flyte task must be associated with an image, which can either be shared across the entire Flyte [workflow](https://docs.flyte.org/en/latest/user_guide/basics/workflows.html) or provided separately for each task. + +To streamline the process of supplying the required dependencies for each Flyte task, you can initialize an [`ImageSpec`](https://docs.flyte.org/en/latest/user_guide/customizing_dependencies/imagespec.html) object. +This approach automatically triggers a Docker build, alleviating the need for users to manually create a Docker image. + +```python +custom_image = ImageSpec( +    name="langchain-flyte", +    packages=[ +        "langchain", +        "openai", +        "spacy", +        "https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0.tar.gz", +        "textstat", +        "google-search-results", +    ], +    registry="",  # e.g. your Docker Hub username or GHCR namespace +) +``` + +You have the flexibility to push the Docker image to a registry of your preference. +[Docker Hub](https://hub.docker.com/) or [GitHub Container Registry (GHCR)](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) is a convenient option to begin with. + +Once you have selected a registry, you can proceed to create Flyte tasks that log the LangChain metrics to Flyte Deck. + +The following examples demonstrate tasks related to an OpenAI LLM, chains, and an agent with tools: + +### LLM + +```python +@task(disable_deck=False, container_image=custom_image) +def langchain_llm() -> str: +    llm = ChatOpenAI( +        model_name="gpt-3.5-turbo", +        temperature=0.2, +        callbacks=[FlyteCallbackHandler()], +    ) +    return llm.invoke([HumanMessage(content="Tell me a joke")]).content +``` + +### Chain + +```python +@task(disable_deck=False, container_image=custom_image) +def langchain_chain() -> list[dict[str, str]]: +    template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title. +Title: {title} +Playwright: This is a synopsis for the above play:""" +    llm = ChatOpenAI( +        model_name="gpt-3.5-turbo", +        temperature=0, +        callbacks=[FlyteCallbackHandler()], +    ) +    prompt_template = PromptTemplate(input_variables=["title"], template=template) +    synopsis_chain = LLMChain( +        llm=llm, prompt=prompt_template, callbacks=[FlyteCallbackHandler()] +    ) +    test_prompts = [ +        { +            "title": "documentary about good video games that push the boundary of game design" +        }, +    ] +    return synopsis_chain.apply(test_prompts) +``` + +### Agent + +```python +@task(disable_deck=False, container_image=custom_image) +def langchain_agent() -> str: +    llm = ChatOpenAI( +        model_name="gpt-3.5-turbo", +        temperature=0, +        callbacks=[FlyteCallbackHandler()], +    ) +    tools = load_tools( +        ["serpapi", "llm-math"], llm=llm, callbacks=[FlyteCallbackHandler()] +    ) +    agent = initialize_agent( +        tools, +        llm, +        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, +        callbacks=[FlyteCallbackHandler()], +        verbose=True, +    ) +    return agent.run( +        "Who is Leonardo DiCaprio's girlfriend? Could you calculate her current age and raise it to the power of 0.43?" +    ) +``` + +These tasks serve as a starting point for running your LangChain experiments within Flyte. + +## Execute the Flyte Tasks on Kubernetes + +To execute the Flyte tasks on the configured Flyte backend, use the following command, replacing `<image>` with the image reference that you pushed to your registry: + +```bash +pyflyte run --image <image> langchain_flyte.py langchain_llm +``` + +This command will initiate the execution of the `langchain_llm` task on the Flyte backend. You can trigger the remaining two tasks in a similar manner. 
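+ +For example, the remaining two tasks defined above can be launched the same way (a sketch that assumes the same script name and the same `<image>` placeholder as the command above): + +```bash +pyflyte run --image <image> langchain_flyte.py langchain_chain +pyflyte run --image <image> langchain_flyte.py langchain_agent +```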
+ +The metrics will be displayed on the Flyte UI as follows: + +![Screenshot of Flyte Deck showing LangChain metrics and a dependency tree visualization.](https://ik.imagekit.io/c8zl7irwkdda/Screenshot_2023-06-20_at_1.23.29_PM_MZYeG0dKa.png?updatedAt=1687247642993 "Flyte Deck Metrics Display") diff --git a/langchain_md_files/integrations/providers/forefrontai.mdx b/langchain_md_files/integrations/providers/forefrontai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a0045f75a415f442bca8d34fe60b3c43962744d8 --- /dev/null +++ b/langchain_md_files/integrations/providers/forefrontai.mdx @@ -0,0 +1,16 @@ +# ForefrontAI + +This page covers how to use the ForefrontAI ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific ForefrontAI wrappers. + +## Installation and Setup +- Get an ForefrontAI api key and set it as an environment variable (`FOREFRONTAI_API_KEY`) + +## Wrappers + +### LLM + +There exists an ForefrontAI LLM wrapper, which you can access with +```python +from langchain_community.llms import ForefrontAI +``` \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/geopandas.mdx b/langchain_md_files/integrations/providers/geopandas.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c14a29c40bd2f7395269e527591acd5bef4c2b98 --- /dev/null +++ b/langchain_md_files/integrations/providers/geopandas.mdx @@ -0,0 +1,23 @@ +# Geopandas + +>[GeoPandas](https://geopandas.org/) is an open source project to make working +> with geospatial data in python easier. `GeoPandas` extends the datatypes used by +> `pandas` to allow spatial operations on geometric types. +> Geometric operations are performed by `shapely`. + + +## Installation and Setup + +We have to install several python packages. + +```bash +pip install -U sodapy pandas geopandas +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/geopandas). + +```python +from langchain_community.document_loaders import OpenCityDataLoader +``` diff --git a/langchain_md_files/integrations/providers/git.mdx b/langchain_md_files/integrations/providers/git.mdx new file mode 100644 index 0000000000000000000000000000000000000000..bc20c1710ca7a1be1e6d613f60ce1423345ad471 --- /dev/null +++ b/langchain_md_files/integrations/providers/git.mdx @@ -0,0 +1,19 @@ +# Git + +>[Git](https://en.wikipedia.org/wiki/Git) is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development. + +## Installation and Setup + +First, you need to install `GitPython` python package. + +```bash +pip install GitPython +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/git). + +```python +from langchain_community.document_loaders import GitLoader +``` diff --git a/langchain_md_files/integrations/providers/gitbook.mdx b/langchain_md_files/integrations/providers/gitbook.mdx new file mode 100644 index 0000000000000000000000000000000000000000..4c8a8559234ee81ca4341d33712183306e8710ea --- /dev/null +++ b/langchain_md_files/integrations/providers/gitbook.mdx @@ -0,0 +1,15 @@ +# GitBook + +>[GitBook](https://docs.gitbook.com/) is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs. + +## Installation and Setup + +There isn't any special setup for it. 
+ +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/gitbook). + +```python +from langchain_community.document_loaders import GitbookLoader +``` diff --git a/langchain_md_files/integrations/providers/github.mdx b/langchain_md_files/integrations/providers/github.mdx new file mode 100644 index 0000000000000000000000000000000000000000..2296174632cdd67f6f73264a689c4dbbc6539696 --- /dev/null +++ b/langchain_md_files/integrations/providers/github.mdx @@ -0,0 +1,22 @@ +# GitHub + +>[GitHub](https://github.com/) is a developer platform that allows developers to create, +> store, manage and share their code. It uses `Git` software, providing the +> distributed version control of Git plus access control, bug tracking, +> software feature requests, task management, continuous integration, and wikis for every project. + + +## Installation and Setup + +To access the GitHub API, you need a [personal access token](https://github.com/settings/tokens). + + +## Document Loader + +There are two document loaders available for GitHub. + +See a [usage example](/docs/integrations/document_loaders/github). + +```python +from langchain_community.document_loaders import GitHubIssuesLoader, GithubFileLoader +``` diff --git a/langchain_md_files/integrations/providers/golden.mdx b/langchain_md_files/integrations/providers/golden.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7acde1e460949bd28affc5801c583e4349313585 --- /dev/null +++ b/langchain_md_files/integrations/providers/golden.mdx @@ -0,0 +1,34 @@ +# Golden + +>[Golden](https://golden.com) provides a set of natural language APIs for querying and enrichment using the Golden Knowledge Graph e.g. queries such as: `Products from OpenAI`, `Generative ai companies with series a funding`, and `rappers who invest` can be used to retrieve structured data about relevant entities. +> +>The `golden-query` langchain tool is a wrapper on top of the [Golden Query API](https://docs.golden.com/reference/query-api) which enables programmatic access to these results. +>See the [Golden Query API docs](https://docs.golden.com/reference/query-api) for more information. + +## Installation and Setup +- Go to the [Golden API docs](https://docs.golden.com/) to get an overview about the Golden API. +- Get your API key from the [Golden API Settings](https://golden.com/settings/api) page. +- Save your API key into GOLDEN_API_KEY env variable + +## Wrappers + +### Utility + +There exists a GoldenQueryAPIWrapper utility which wraps this API. To import this utility: + +```python +from langchain_community.utilities.golden_query import GoldenQueryAPIWrapper +``` + +For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/golden_query). + +### Tool + +You can also easily load this wrapper as a Tool (to use with an Agent). +You can do this with: +```python +from langchain.agents import load_tools +tools = load_tools(["golden-query"]) +``` + +For more information on tools, see [this page](/docs/how_to/tools_builtin). diff --git a/langchain_md_files/integrations/providers/google_serper.mdx b/langchain_md_files/integrations/providers/google_serper.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0401e66b53581edbfe313c50ae73d88a5c1efd4d --- /dev/null +++ b/langchain_md_files/integrations/providers/google_serper.mdx @@ -0,0 +1,74 @@ +# Serper - Google Search API + +This page covers how to use the [Serper](https://serper.dev) Google Search API within LangChain. 
Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search. +It is broken into two parts: setup, and then references to the specific Google Serper wrapper. + +## Setup + +- Go to [serper.dev](https://serper.dev) to sign up for a free account +- Get the api key and set it as an environment variable (`SERPER_API_KEY`) + +## Wrappers + +### Utility + +There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility: + +```python +from langchain_community.utilities import GoogleSerperAPIWrapper +``` + +You can use it as part of a Self Ask chain: + +```python +from langchain_community.utilities import GoogleSerperAPIWrapper +from langchain_openai import OpenAI +from langchain.agents import initialize_agent, Tool +from langchain.agents import AgentType + +import os + +os.environ["SERPER_API_KEY"] = "" +os.environ['OPENAI_API_KEY'] = "" + +llm = OpenAI(temperature=0) +search = GoogleSerperAPIWrapper() +tools = [ + Tool( + name="Intermediate Answer", + func=search.run, + description="useful for when you need to ask with search" + ) +] + +self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True) +self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?") +``` + +#### Output +``` +Entering new AgentExecutor chain... + Yes. +Follow up: Who is the reigning men's U.S. Open champion? +Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion. +Follow up: Where is Carlos Alcaraz from? +Intermediate answer: El Palmar, Spain +So the final answer is: El Palmar, Spain + +> Finished chain. + +'El Palmar, Spain' +``` + +For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_serper). + +### Tool + +You can also easily load this wrapper as a Tool (to use with an Agent). +You can do this with: +```python +from langchain.agents import load_tools +tools = load_tools(["google-serper"]) +``` + +For more information on tools, see [this page](/docs/how_to/tools_builtin). diff --git a/langchain_md_files/integrations/providers/gooseai.mdx b/langchain_md_files/integrations/providers/gooseai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..49909481a001d51c7ec7b6635350ec9f65415d91 --- /dev/null +++ b/langchain_md_files/integrations/providers/gooseai.mdx @@ -0,0 +1,23 @@ +# GooseAI + +This page covers how to use the GooseAI ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers. + +## Installation and Setup +- Install the Python SDK with `pip install openai` +- Get your GooseAI api key from this link [here](https://goose.ai/). +- Set the environment variable (`GOOSEAI_API_KEY`). + +```python +import os +os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY" +``` + +## Wrappers + +### LLM + +There exists an GooseAI LLM wrapper, which you can access with: +```python +from langchain_community.llms import GooseAI +``` \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/gpt4all.mdx b/langchain_md_files/integrations/providers/gpt4all.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9e3b188328e02178afb0affd9bb9d3ee8e21cdcd --- /dev/null +++ b/langchain_md_files/integrations/providers/gpt4all.mdx @@ -0,0 +1,55 @@ +# GPT4All + +This page covers how to use the `GPT4All` wrapper within LangChain. 
The tutorial is divided into two parts: installation and setup, followed by usage with an example. + +## Installation and Setup + +- Install the Python package with `pip install gpt4all` +- Download a [GPT4All model](https://gpt4all.io/index.html) and place it in your desired directory + +In this example, we are using `mistral-7b-openorca.Q4_0.gguf`: + +```bash +mkdir models +wget https://gpt4all.io/models/gguf/mistral-7b-openorca.Q4_0.gguf -O models/mistral-7b-openorca.Q4_0.gguf +``` + +## Usage + +### GPT4All + +To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model's configuration. + +```python +from langchain_community.llms import GPT4All + +# Instantiate the model. Callbacks support token-wise streaming +model = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf", n_threads=8) + +# Generate text +response = model.invoke("Once upon a time, ") +``` + +You can also customize the generation parameters, such as `n_predict`, `temp`, `top_p`, `top_k`, and others. + +To stream the model's predictions, add in a callback handler. + +```python +from langchain_community.llms import GPT4All +from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler + +# There are many CallbackHandlers supported, such as +# from langchain.callbacks.streamlit import StreamlitCallbackHandler + +callbacks = [StreamingStdOutCallbackHandler()] +model = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf", n_threads=8) + +# Generate text. Tokens are streamed through the callback manager. +model.invoke("Once upon a time, ", callbacks=callbacks) +``` + +## Model File + +You can download model files from the GPT4All client. You can download the client from the [GPT4All](https://gpt4all.io/index.html) website. + +For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/gpt4all) diff --git a/langchain_md_files/integrations/providers/gradient.mdx b/langchain_md_files/integrations/providers/gradient.mdx new file mode 100644 index 0000000000000000000000000000000000000000..37cd04e91ec5693f9af000f7e689480bb0eec016 --- /dev/null +++ b/langchain_md_files/integrations/providers/gradient.mdx @@ -0,0 +1,27 @@ +# Gradient + +>[Gradient](https://gradient.ai/) allows you to fine-tune and get completions on LLMs with a simple web API. + +## Installation and Setup +- Install the Python SDK: +```bash +pip install gradientai +``` +Get a [Gradient access token and workspace](https://gradient.ai/) and set them as environment variables (`GRADIENT_ACCESS_TOKEN` and `GRADIENT_WORKSPACE_ID`). + +## LLM + +There exists a Gradient LLM wrapper, which you can access with the import below. +See a [usage example](/docs/integrations/llms/gradient). + +```python +from langchain_community.llms import GradientLLM +``` + +## Text Embedding Model + +There exists a Gradient Embedding model, which you can access with: +```python +from langchain_community.embeddings import GradientEmbeddings +``` +For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/gradient) diff --git a/langchain_md_files/integrations/providers/graphsignal.mdx b/langchain_md_files/integrations/providers/graphsignal.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6e4867d35794baa722a5c7970916b5987ac8d97b --- /dev/null +++ b/langchain_md_files/integrations/providers/graphsignal.mdx @@ -0,0 +1,44 @@ +# Graphsignal + +This page covers how to use [Graphsignal](https://app.graphsignal.com) to trace and monitor LangChain. 
Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more. + +## Installation and Setup + +- Install the Python library with `pip install graphsignal` +- Create free Graphsignal account [here](https://graphsignal.com) +- Get an API key and set it as an environment variable (`GRAPHSIGNAL_API_KEY`) + +## Tracing and Monitoring + +Graphsignal automatically instruments and starts tracing and monitoring chains. Traces and metrics are then available in your [Graphsignal dashboards](https://app.graphsignal.com). + +Initialize the tracer by providing a deployment name: + +```python +import graphsignal + +graphsignal.configure(deployment='my-langchain-app-prod') +``` + +To additionally trace any function or code, you can use a decorator or a context manager: + +```python +@graphsignal.trace_function +def handle_request(): + chain.run("some initial text") +``` + +```python +with graphsignal.start_trace('my-chain'): + chain.run("some initial text") +``` + +Optionally, enable profiling to record function-level statistics for each trace. + +```python +with graphsignal.start_trace( + 'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)): + chain.run("some initial text") +``` + +See the [Quick Start](https://graphsignal.com/docs/guides/quick-start/) guide for complete setup instructions. diff --git a/langchain_md_files/integrations/providers/grobid.mdx b/langchain_md_files/integrations/providers/grobid.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9740854ed117dab4e8a319217485a509427ea553 --- /dev/null +++ b/langchain_md_files/integrations/providers/grobid.mdx @@ -0,0 +1,46 @@ +# Grobid + +GROBID is a machine learning library for extracting, parsing, and re-structuring raw documents. + +It is designed and expected to be used to parse academic papers, where it works particularly well. + +*Note*: if the articles supplied to Grobid are large documents (e.g. dissertations) exceeding a certain number +of elements, they might not be processed. + +This page covers how to use the Grobid to parse articles for LangChain. + +## Installation +The grobid installation is described in details in https://grobid.readthedocs.io/en/latest/Install-Grobid/. +However, it is probably easier and less troublesome to run grobid through a docker container, +as documented [here](https://grobid.readthedocs.io/en/latest/Grobid-docker/). + +## Use Grobid with LangChain + +Once grobid is installed and up and running (you can check by accessing it http://localhost:8070), +you're ready to go. + +You can now use the GrobidParser to produce documents +```python +from langchain_community.document_loaders.parsers import GrobidParser +from langchain_community.document_loaders.generic import GenericLoader + +#Produce chunks from article paragraphs +loader = GenericLoader.from_filesystem( + "/Users/31treehaus/Desktop/Papers/", + glob="*", + suffixes=[".pdf"], + parser= GrobidParser(segment_sentences=False) +) +docs = loader.load() + +#Produce chunks from article sentences +loader = GenericLoader.from_filesystem( + "/Users/31treehaus/Desktop/Papers/", + glob="*", + suffixes=[".pdf"], + parser= GrobidParser(segment_sentences=True) +) +docs = loader.load() +``` +Chunk metadata will include Bounding Boxes. 
Although these are a bit funky to parse, +they are explained in https://grobid.readthedocs.io/en/latest/Coordinates-in-PDF/ \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/groq.mdx b/langchain_md_files/integrations/providers/groq.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a1e4b050ce0f4a356324f44037669972c0d5df09 --- /dev/null +++ b/langchain_md_files/integrations/providers/groq.mdx @@ -0,0 +1,28 @@ +# Groq + +Welcome to Groq! 🚀 At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single core streaming architecture that sets the standard for GenAI inference speed with predictable and repeatable performance for any given workload. + +Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can: + +* Achieve uncompromised low latency and performance for real-time AI and HPC inferences 🔥 +* Know the exact performance and compute time for any given workload 🔮 +* Take advantage of our cutting-edge technology to stay ahead of the competition 💪 + +Want more Groq? Check out our [website](https://groq.com) for more resources and join our [Discord community](https://discord.gg/JvNsBDKeCG) to connect with our developers! + + +## Installation and Setup +Install the integration package: + +```bash +pip install langchain-groq +``` + +Request an [API key](https://wow.groq.com) and set it as an environment variable: + +```bash +export GROQ_API_KEY=gsk_... +``` + +## Chat Model +See a [usage example](/docs/integrations/chat/groq). diff --git a/langchain_md_files/integrations/providers/gutenberg.mdx b/langchain_md_files/integrations/providers/gutenberg.mdx new file mode 100644 index 0000000000000000000000000000000000000000..36eb816383d60a8bc6db5d3061bffe278a908f36 --- /dev/null +++ b/langchain_md_files/integrations/providers/gutenberg.mdx @@ -0,0 +1,15 @@ +# Gutenberg + +>[Project Gutenberg](https://www.gutenberg.org/about/) is an online library of free eBooks. + +## Installation and Setup + +There isn't any special setup for it. + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/gutenberg). + +```python +from langchain_community.document_loaders import GutenbergLoader +``` diff --git a/langchain_md_files/integrations/providers/hacker_news.mdx b/langchain_md_files/integrations/providers/hacker_news.mdx new file mode 100644 index 0000000000000000000000000000000000000000..fc232a3db0c687c817327d910d382748e75984f0 --- /dev/null +++ b/langchain_md_files/integrations/providers/hacker_news.mdx @@ -0,0 +1,18 @@ +# Hacker News + +>[Hacker News](https://en.wikipedia.org/wiki/Hacker_News) (sometimes abbreviated as `HN`) is a social news +> website focusing on computer science and entrepreneurship. It is run by the investment fund and startup +> incubator `Y Combinator`. In general, content that can be submitted is defined as "anything that gratifies +> one's intellectual curiosity." + +## Installation and Setup + +There isn't any special setup for it. + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/hacker_news). 
+ +```python +from langchain_community.document_loaders import HNLoader +``` diff --git a/langchain_md_files/integrations/providers/hazy_research.mdx b/langchain_md_files/integrations/providers/hazy_research.mdx new file mode 100644 index 0000000000000000000000000000000000000000..13cbda6b8ee52f0700a9f260cccac5a310850501 --- /dev/null +++ b/langchain_md_files/integrations/providers/hazy_research.mdx @@ -0,0 +1,19 @@ +# Hazy Research + +This page covers how to use the Hazy Research ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific Hazy Research wrappers. + +## Installation and Setup +- To use the `manifest`, install it with `pip install manifest-ml` + +## Wrappers + +### LLM + +There exists an LLM wrapper around Hazy Research's `manifest` library. +`manifest` is a python library which is itself a wrapper around many model providers, and adds in caching, history, and more. + +To use this wrapper: +```python +from langchain_community.llms.manifest import ManifestWrapper +``` diff --git a/langchain_md_files/integrations/providers/helicone.mdx b/langchain_md_files/integrations/providers/helicone.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9f2898870b365ead07427583f13be5d0f68071d5 --- /dev/null +++ b/langchain_md_files/integrations/providers/helicone.mdx @@ -0,0 +1,53 @@ +# Helicone + +This page covers how to use the [Helicone](https://helicone.ai) ecosystem within LangChain. + +## What is Helicone? + +Helicone is an [open-source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage. + +![Screenshot of the Helicone dashboard showing average requests per day, response time, tokens per response, total cost, and a graph of requests over time.](/img/HeliconeDashboard.png "Helicone Dashboard") + +## Quick start + +With your LangChain environment you can just add the following parameter. + +```bash +export OPENAI_API_BASE="https://oai.hconeai.com/v1" +``` + +Now head over to [helicone.ai](https://www.helicone.ai/signup) to create your account, and add your OpenAI API key within our dashboard to view your logs. + +![Interface for entering and managing OpenAI API keys in the Helicone dashboard.](/img/HeliconeKeys.png "Helicone API Key Input") + +## How to enable Helicone caching + +```python +from langchain_openai import OpenAI +import openai +openai.api_base = "https://oai.hconeai.com/v1" + +llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"}) +text = "What is a helicone?" +print(llm.invoke(text)) +``` + +[Helicone caching docs](https://docs.helicone.ai/advanced-usage/caching) + +## How to use Helicone custom properties + +```python +from langchain_openai import OpenAI +import openai +openai.api_base = "https://oai.hconeai.com/v1" + +llm = OpenAI(temperature=0.9, headers={ + "Helicone-Property-Session": "24", + "Helicone-Property-Conversation": "support_issue_2", + "Helicone-Property-App": "mobile", + }) +text = "What is a helicone?" 
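+# The Helicone-Property-* headers above are custom properties; Helicone records them with this request so it can be filtered and segmented (e.g. by session, conversation, or app) in the dashboard.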
+print(llm.invoke(text)) +``` + +[Helicone property docs](https://docs.helicone.ai/advanced-usage/custom-properties) diff --git a/langchain_md_files/integrations/providers/hologres.mdx b/langchain_md_files/integrations/providers/hologres.mdx new file mode 100644 index 0000000000000000000000000000000000000000..8dbb3d80faa67bab84f4e70981147535b460ba6f --- /dev/null +++ b/langchain_md_files/integrations/providers/hologres.mdx @@ -0,0 +1,23 @@ +# Hologres + +>[Hologres](https://www.alibabacloud.com/help/en/hologres/latest/introduction) is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time. +>`Hologres` supports standard `SQL` syntax, is compatible with `PostgreSQL`, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services. + +>`Hologres` provides **vector database** functionality by adopting [Proxima](https://www.alibabacloud.com/help/en/hologres/latest/vector-processing). +>`Proxima` is a high-performance software library developed by `Alibaba DAMO Academy`. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open-source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service. + +## Installation and Setup + +Click [here](https://www.alibabacloud.com/zh/product/hologres) to quickly deploy a Hologres cloud instance. + +```bash +pip install hologres-vector +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/hologres). + +```python +from langchain_community.vectorstores import Hologres +``` diff --git a/langchain_md_files/integrations/providers/html2text.mdx b/langchain_md_files/integrations/providers/html2text.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c8cf35210fff72525fe00189526e4975f97cdcf6 --- /dev/null +++ b/langchain_md_files/integrations/providers/html2text.mdx @@ -0,0 +1,19 @@ +# HTML to text + +>[html2text](https://github.com/Alir3z4/html2text/) is a Python package that converts a page of `HTML` into clean, easy-to-read plain `ASCII text`. + +The ASCII output also happens to be valid `Markdown` (a text-to-HTML format). + +## Installation and Setup + +```bash +pip install html2text +``` + +## Document Transformer + +See a [usage example](/docs/integrations/document_transformers/html2text). + +```python +from langchain_community.document_transformers import Html2TextTransformer +``` diff --git a/langchain_md_files/integrations/providers/huawei.mdx b/langchain_md_files/integrations/providers/huawei.mdx new file mode 100644 index 0000000000000000000000000000000000000000..22b12ca717f7c3e9e8e38670f32982f8794aa6f2 --- /dev/null +++ b/langchain_md_files/integrations/providers/huawei.mdx @@ -0,0 +1,37 @@ +# Huawei + +>[Huawei Technologies Co., Ltd.](https://www.huawei.com/) is a Chinese multinational +> digital communications technology corporation. +> +>[Huawei Cloud](https://www.huaweicloud.com/intl/en-us/product/) provides a comprehensive suite of +> global cloud computing services. + + +## Installation and Setup + +To access the `Huawei Cloud`, you need an access token. 
+ +You also have to install a python library: + +```bash +pip install -U esdk-obs-python +``` + + +## Document Loader + +### Huawei OBS Directory + +See a [usage example](/docs/integrations/document_loaders/huawei_obs_directory). + +```python +from langchain_community.document_loaders import OBSDirectoryLoader +``` + +### Huawei OBS File + +See a [usage example](/docs/integrations/document_loaders/huawei_obs_file). + +```python +from langchain_community.document_loaders.obs_file import OBSFileLoader +``` diff --git a/langchain_md_files/integrations/providers/ibm.mdx b/langchain_md_files/integrations/providers/ibm.mdx new file mode 100644 index 0000000000000000000000000000000000000000..bb6f5ef065e701d2a95a22462f7be39bfc41922b --- /dev/null +++ b/langchain_md_files/integrations/providers/ibm.mdx @@ -0,0 +1,59 @@ +# IBM + +The `LangChain` integrations related to [IBM watsonx.ai](https://www.ibm.com/products/watsonx-ai) platform. + +IBM® watsonx.ai™ AI studio is part of the IBM [watsonx](https://www.ibm.com/watsonx)™ AI and data platform, bringing together new generative +AI capabilities powered by [foundation models](https://www.ibm.com/products/watsonx-ai/foundation-models) and traditional machine learning (ML) +into a powerful studio spanning the AI lifecycle. Tune and guide models with your enterprise data to meet your needs with easy-to-use tools for +building and refining performant prompts. With watsonx.ai, you can build AI applications in a fraction of the time and with a fraction of the data. +Watsonx.ai offers: + +- **Multi-model variety and flexibility:** Choose from IBM-developed, open-source and third-party models, or build your own model. +- **Differentiated client protection:** IBM stands behind IBM-developed models and indemnifies the client against third-party IP claims. +- **End-to-end AI governance:** Enterprises can scale and accelerate the impact of AI with trusted data across the business, using data wherever it resides. +- **Hybrid, multi-cloud deployments:** IBM provides the flexibility to integrate and deploy your AI workloads into your hybrid-cloud stack of choice. + + +## Installation and Setup + +Install the integration package with +```bash +pip install -qU langchain-ibm +``` + +Get an IBM watsonx.ai api key and set it as an environment variable (`WATSONX_APIKEY`) +```python +import os + +os.environ["WATSONX_APIKEY"] = "your IBM watsonx.ai api key" +``` + +## Chat Model + +### ChatWatsonx + +See a [usage example](/docs/integrations/chat/ibm_watsonx). + +```python +from langchain_ibm import ChatWatsonx +``` + +## LLMs + +### WatsonxLLM + +See a [usage example](/docs/integrations/llms/ibm_watsonx). + +```python +from langchain_ibm import WatsonxLLM +``` + +## Embedding Models + +### WatsonxEmbeddings + +See a [usage example](/docs/integrations/text_embedding/ibm_watsonx). + +```python +from langchain_ibm import WatsonxEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/ieit_systems.mdx b/langchain_md_files/integrations/providers/ieit_systems.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d81d0be3f74844597a097dcd29c63caccb190dc0 --- /dev/null +++ b/langchain_md_files/integrations/providers/ieit_systems.mdx @@ -0,0 +1,31 @@ +# IEIT Systems + +>[IEIT Systems](https://en.ieisystem.com/) is a Chinese information technology company +> established in 1999. It provides the IT infrastructure products, solutions, +> and services, innovative IT products and solutions across cloud computing, +> big data, and artificial intelligence. 
+ + +## LLMs + +See a [usage example](/docs/integrations/llms/yuan2). + +```python +from langchain_community.llms.yuan2 import Yuan2 +``` + +## Chat models + +See the [installation instructions](/docs/integrations/chat/yuan2/#setting-up-your-api-server). + +Yuan2.0 provided an OpenAI compatible API, and ChatYuan2 is integrated into langchain by using `OpenAI client`. +Therefore, ensure the `openai` package is installed. + +```bash +pip install openai +``` +See a [usage example](/docs/integrations/chat/yuan2). + +```python +from langchain_community.chat_models import ChatYuan2 +``` diff --git a/langchain_md_files/integrations/providers/ifixit.mdx b/langchain_md_files/integrations/providers/ifixit.mdx new file mode 100644 index 0000000000000000000000000000000000000000..fdcb4ba8023153f0d4ac1cc11705b18a5f8dc126 --- /dev/null +++ b/langchain_md_files/integrations/providers/ifixit.mdx @@ -0,0 +1,16 @@ +# iFixit + +>[iFixit](https://www.ifixit.com) is the largest, open repair community on the web. The site contains nearly 100k +> repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under `CC-BY-NC-SA 3.0`. + +## Installation and Setup + +There isn't any special setup for it. + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/ifixit). + +```python +from langchain_community.document_loaders import IFixitLoader +``` diff --git a/langchain_md_files/integrations/providers/iflytek.mdx b/langchain_md_files/integrations/providers/iflytek.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9852830511cf0b4438d8c3d3811f4f76dc17c5af --- /dev/null +++ b/langchain_md_files/integrations/providers/iflytek.mdx @@ -0,0 +1,38 @@ +# iFlytek + +>[iFlytek](https://www.iflytek.com) is a Chinese information technology company +> established in 1999. It creates voice recognition software and +> voice-based internet/mobile products covering education, communication, +> music, intelligent toys industries. + + +## Installation and Setup + +- Get `SparkLLM` app_id, api_key and api_secret from [iFlyTek SparkLLM API Console](https://console.xfyun.cn/services/bm3) (for more info, see [iFlyTek SparkLLM Intro](https://xinghuo.xfyun.cn/sparkapi)). +- Install the Python package (not for the embedding models): + +```bash +pip install websocket-client +``` + +## LLMs + +See a [usage example](/docs/integrations/llms/sparkllm). + +```python +from langchain_community.llms import SparkLLM +``` + +## Chat models + +See a [usage example](/docs/integrations/chat/sparkllm). + +```python +from langchain_community.chat_models import ChatSparkLLM +``` + +## Embedding models + +```python +from langchain_community.embeddings import SparkLLMTextEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/imsdb.mdx b/langchain_md_files/integrations/providers/imsdb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..8b30a2dea980988f87def38adecfb5390c18bfed --- /dev/null +++ b/langchain_md_files/integrations/providers/imsdb.mdx @@ -0,0 +1,16 @@ +# IMSDb + +>[IMSDb](https://imsdb.com/) is the `Internet Movie Script Database`. +> +## Installation and Setup + +There isn't any special setup for it. + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/imsdb). 
+ + +```python +from langchain_community.document_loaders import IMSDbLoader +``` diff --git a/langchain_md_files/integrations/providers/infinispanvs.mdx b/langchain_md_files/integrations/providers/infinispanvs.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b42e7504231bfa0ca334a7982e7cc9760f773b8d --- /dev/null +++ b/langchain_md_files/integrations/providers/infinispanvs.mdx @@ -0,0 +1,17 @@ +# Infinispan VS + +> [Infinispan](https://infinispan.org) Infinispan is an open-source in-memory data grid that provides +> a key/value data store able to hold all types of data, from Java objects to plain text. +> Since version 15 Infinispan supports vector search over caches. + +## Installation and Setup +See [Get Started](https://infinispan.org/get-started/) to run an Infinispan server, you may want to disable authentication +(not supported atm) + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/infinispanvs). + +```python +from langchain_community.vectorstores import InfinispanVS +``` diff --git a/langchain_md_files/integrations/providers/infinity.mdx b/langchain_md_files/integrations/providers/infinity.mdx new file mode 100644 index 0000000000000000000000000000000000000000..887a8584036fefe61274d1bb6874047ae873e63d --- /dev/null +++ b/langchain_md_files/integrations/providers/infinity.mdx @@ -0,0 +1,11 @@ +# Infinity + +>[Infinity](https://github.com/michaelfeil/infinity) allows the creation of text embeddings. + +## Text Embedding Model + +There exists an infinity Embedding model, which you can access with +```python +from langchain_community.embeddings import InfinityEmbeddings +``` +For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/infinity) diff --git a/langchain_md_files/integrations/providers/infino.mdx b/langchain_md_files/integrations/providers/infino.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d11c502a3777c97c2a1e26a7cb6f118fff5a949c --- /dev/null +++ b/langchain_md_files/integrations/providers/infino.mdx @@ -0,0 +1,35 @@ +# Infino + +>[Infino](https://github.com/infinohq/infino) is an open-source observability platform that stores both metrics and application logs together. + +Key features of `Infino` include: +- **Metrics Tracking**: Capture time taken by LLM model to handle request, errors, number of tokens, and costing indication for the particular LLM. +- **Data Tracking**: Log and store prompt, request, and response data for each LangChain interaction. +- **Graph Visualization**: Generate basic graphs over time, depicting metrics such as request duration, error occurrences, token count, and cost. + +## Installation and Setup + +First, you'll need to install the `infinopy` Python package as follows: + +```bash +pip install infinopy +``` + +If you already have an `Infino Server` running, then you're good to go; but if +you don't, follow the next steps to start it: + +- Make sure you have Docker installed +- Run the following in your terminal: + ``` + docker run --rm --detach --name infino-example -p 3000:3000 infinohq/infino:latest + ``` + + + +## Using Infino + +See a [usage example of `InfinoCallbackHandler`](/docs/integrations/callbacks/infino). 
+ +```python +from langchain.callbacks import InfinoCallbackHandler +``` diff --git a/langchain_md_files/integrations/providers/intel.mdx b/langchain_md_files/integrations/providers/intel.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9429d986c070399dc59cae8e010c0649318168f2 --- /dev/null +++ b/langchain_md_files/integrations/providers/intel.mdx @@ -0,0 +1,108 @@ +# Intel + +>[Optimum Intel](https://github.com/huggingface/optimum-intel?tab=readme-ov-file#optimum-intel) is the interface between the 🤗 Transformers and Diffusers libraries and the different tools and libraries provided by Intel to accelerate end-to-end pipelines on Intel architectures. + +>[Intel® Extension for Transformers](https://github.com/intel/intel-extension-for-transformers?tab=readme-ov-file#intel-extension-for-transformers) (ITREX) is an innovative toolkit designed to accelerate GenAI/LLM everywhere with the optimal performance of Transformer-based models on various Intel platforms, including Intel Gaudi2, Intel CPU, and Intel GPU. + +This page covers how to use optimum-intel and ITREX with LangChain. + +## Optimum-intel + +All functionality related to the [optimum-intel](https://github.com/huggingface/optimum-intel.git) and [IPEX](https://github.com/intel/intel-extension-for-pytorch). + +### Installation + +Install using optimum-intel and ipex using: + +```bash +pip install optimum[neural-compressor] +pip install intel_extension_for_pytorch +``` + +Please follow the installation instructions as specified below: + +* Install optimum-intel as shown [here](https://github.com/huggingface/optimum-intel). +* Install IPEX as shown [here](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.2.0%2Bcpu). + +### Embedding Models + +See a [usage example](/docs/integrations/text_embedding/optimum_intel). +We also offer a full tutorial notebook "rag_with_quantized_embeddings.ipynb" for using the embedder in a RAG pipeline in the cookbook dir. + +```python +from langchain_community.embeddings import QuantizedBiEncoderEmbeddings +``` + +## Intel® Extension for Transformers (ITREX) +(ITREX) is an innovative toolkit to accelerate Transformer-based models on Intel platforms, in particular, effective on 4th Intel Xeon Scalable processor Sapphire Rapids (codenamed Sapphire Rapids). + +Quantization is a process that involves reducing the precision of these weights by representing them using a smaller number of bits. Weight-only quantization specifically focuses on quantizing the weights of the neural network while keeping other components, such as activations, in their original precision. + +As large language models (LLMs) become more prevalent, there is a growing need for new and improved quantization methods that can meet the computational demands of these modern architectures while maintaining the accuracy. Compared to [normal quantization](https://github.com/intel/intel-extension-for-transformers/blob/main/docs/quantization.md) like W8A8, weight only quantization is probably a better trade-off to balance the performance and the accuracy, since we will see below that the bottleneck of deploying LLMs is the memory bandwidth and normally weight only quantization could lead to better accuracy. + +Here, we will introduce Embedding Models and Weight-only quantization for Transformers large language models with ITREX. Weight-only quantization is a technique used in deep learning to reduce the memory and computational requirements of neural networks. 
In the context of deep neural networks, the model parameters, also known as weights, are typically represented using floating-point numbers, which can consume a significant amount of memory and require intensive computational resources.
+
+All functionality related to the [intel-extension-for-transformers](https://github.com/intel/intel-extension-for-transformers).
+
+### Installation
+
+Install intel-extension-for-transformers. For system requirements and other installation tips, please refer to the [Installation Guide](https://github.com/intel/intel-extension-for-transformers/blob/main/docs/installation.md).
+
+```bash
+pip install intel-extension-for-transformers
+```
+Install the other required packages.
+
+```bash
+pip install -U torch onnx accelerate datasets
+```
+
+### Embedding Models
+
+See a [usage example](/docs/integrations/text_embedding/itrex).
+
+```python
+from langchain_community.embeddings import QuantizedBgeEmbeddings
+```
+
+### Weight-Only Quantization with ITREX
+
+See a [usage example](/docs/integrations/llms/weight_only_quantization).
+
+## Detail of Configuration Parameters
+
+Here is the detail of the `WeightOnlyQuantConfig` class.
+
+#### weight_dtype (string): Weight Data Type, default is "nf4".
+We support quantizing the weights to the following data types for storage (weight_dtype in WeightOnlyQuantConfig):
+* **int8**: Uses 8-bit data type.
+* **int4_fullrange**: Uses the -8 value of the int4 range compared with the normal int4 range [-7,7].
+* **int4_clip**: Clips and retains the values within the int4 range, setting others to zero.
+* **nf4**: Uses the normalized float 4-bit data type.
+* **fp4_e2m1**: Uses regular float 4-bit data type. "e2" means that 2 bits are used for the exponent, and "m1" means that 1 bit is used for the mantissa.
+
+#### compute_dtype (string): Computing Data Type, Default is "fp32".
+While these techniques store weights in 4 or 8 bits, the computation still happens in float32, bfloat16 or int8 (compute_dtype in WeightOnlyQuantConfig):
+* **fp32**: Uses the float32 data type to compute.
+* **bf16**: Uses the bfloat16 data type to compute.
+* **int8**: Uses 8-bit data type to compute.
+
+#### llm_int8_skip_modules (list of module names): Modules to Skip Quantization, Default is None.
+A list of modules for which quantization is skipped.
+
+#### scale_dtype (string): The Scale Data Type, Default is "fp32".
+Currently only "fp32" (float32) is supported.
+
+#### mse_range (boolean): Whether to Search for the Best Clip Range from the Range [0.805, 1.0, 0.005], Default is False.
+#### use_double_quant (boolean): Whether to Quantize the Scale, Default is False.
+Not supported yet.
+#### double_quant_dtype (string): Reserved for Double Quantization.
+#### double_quant_scale_dtype (string): Reserved for Double Quantization.
+#### group_size (int): Group Size for Quantization.
+#### scheme (string): The Format to Which Weights Are Quantized. Default is "sym".
+* **sym**: Symmetric.
+* **asym**: Asymmetric.
+#### algorithm (string): The Algorithm Used to Improve Accuracy. Default is "RTN".
+* **RTN**: Round-to-nearest (RTN) is a quantization method that we can think of very intuitively.
+* **AWQ**: Protecting only 1% of salient weights can greatly reduce quantization error. The salient weight channels are selected by observing the distribution of activations and weights per channel. The salient weights are multiplied by a large scale factor before quantization so that they are preserved.
+* **TEQ**: A trainable equivalent transformation that preserves the FP32 precision in weight-only quantization.
diff --git a/langchain_md_files/integrations/providers/iugu.mdx b/langchain_md_files/integrations/providers/iugu.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..5abbeaa8a0669d11ba52ded55217dd92a44a6f6c
--- /dev/null
+++ b/langchain_md_files/integrations/providers/iugu.mdx
@@ -0,0 +1,19 @@
+# Iugu
+
+>[Iugu](https://www.iugu.com/) is a Brazilian services and software as a service (SaaS)
+> company. It offers payment-processing software and application programming
+> interfaces for e-commerce websites and mobile applications.
+
+
+## Installation and Setup
+
+The `Iugu API` requires an access token, which can be found inside of the `Iugu` dashboard.
+
+
+## Document Loader
+
+See a [usage example](/docs/integrations/document_loaders/iugu).
+
+```python
+from langchain_community.document_loaders import IuguLoader
+```
diff --git a/langchain_md_files/integrations/providers/jaguar.mdx b/langchain_md_files/integrations/providers/jaguar.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..839a34ad3269b658f64fb582a602482230fe0b8a
--- /dev/null
+++ b/langchain_md_files/integrations/providers/jaguar.mdx
@@ -0,0 +1,62 @@
+# Jaguar
+
+This page describes how to use the Jaguar vector database within LangChain.
+It contains three sections: introduction, installation and setup, and Jaguar API.
+
+
+## Introduction
+
+The Jaguar vector database has the following characteristics:
+
+1. It is a distributed vector database
+2. The “ZeroMove” feature of JaguarDB enables instant horizontal scalability
+3. Multimodal: embeddings, text, images, videos, PDFs, audio, time series, and geospatial
+4. All-masters: allows both parallel reads and writes
+5. Anomaly detection capabilities
+6. RAG support: combines LLM with proprietary and real-time data
+7. Shared metadata: sharing of metadata across multiple vector indexes
+8. Distance metrics: Euclidean, Cosine, InnerProduct, Manhattan, Chebyshev, Hamming, Jaccard, Minkowski
+
+[Overview of Jaguar scalable vector database](http://www.jaguardb.com)
+
+You can run JaguarDB in a Docker container, or download the software and run it on-cloud or off-cloud.
+
+## Installation and Setup
+
+- Install JaguarDB on one or more hosts
+- Install the Jaguar HTTP Gateway server on one host
+- Install the JaguarDB HTTP Client package
+
+The steps are described in the [Jaguar Documents](http://www.jaguardb.com/support.html).
+
+Environment Variables in client programs:
+
+    export OPENAI_API_KEY="......"
+    export JAGUAR_API_KEY="......"
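+
+If you prefer to set these from Python rather than the shell, a minimal sketch (the key values below are placeholders):
+
+```python
+import os
+
+# placeholder values; replace with your real OpenAI and JaguarDB keys
+os.environ["OPENAI_API_KEY"] = "......"
+os.environ["JAGUAR_API_KEY"] = "......"
+```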
+
+
+## Jaguar API
+
+Together with LangChain, a Jaguar client class is provided by importing it in Python:
+
+```python
+from langchain_community.vectorstores.jaguar import Jaguar
+```
+
+Supported API functions of the Jaguar class are:
+
+- `add_texts`
+- `add_documents`
+- `from_texts`
+- `from_documents`
+- `similarity_search`
+- `is_anomalous`
+- `create`
+- `delete`
+- `clear`
+- `drop`
+- `login`
+- `logout`
+
+
+For more details of the Jaguar API, please refer to [this notebook](/docs/integrations/vectorstores/jaguar)
diff --git a/langchain_md_files/integrations/providers/javelin_ai_gateway.mdx b/langchain_md_files/integrations/providers/javelin_ai_gateway.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d678e34597eabac0b1fe0e6cd5010f38a97700ec
--- /dev/null
+++ b/langchain_md_files/integrations/providers/javelin_ai_gateway.mdx
@@ -0,0 +1,92 @@
+# Javelin AI Gateway
+
+[The Javelin AI Gateway](https://www.getjavelin.io) service is a high-performance, enterprise-grade API Gateway for AI applications.
+It is designed to streamline the usage of and access to various large language model (LLM) providers,
+such as OpenAI, Cohere, Anthropic, and custom large language models within an organization by incorporating
+robust access security for all interactions with LLMs.
+
+Javelin offers a high-level interface that simplifies the interaction with LLMs by providing a unified endpoint
+to handle specific LLM-related requests.
+
+See the Javelin AI Gateway [documentation](https://docs.getjavelin.io) for more details.
+The [Javelin Python SDK](https://www.github.com/getjavelin/javelin-python) is an easy-to-use client library meant to be embedded into AI applications.
+
+## Installation and Setup
+
+Install `javelin_sdk` to interact with the Javelin AI Gateway:
+
+```sh
+pip install 'javelin_sdk'
+```
+
+Set the Javelin API key as an environment variable:
+
+```sh
+export JAVELIN_API_KEY=...
+```
+
+## Completions Example
+
+```python
+
+from langchain.chains import LLMChain
+from langchain_community.llms import JavelinAIGateway
+from langchain_core.prompts import PromptTemplate
+
+route_completions = "eng_dept03"
+
+gateway = JavelinAIGateway(
+    gateway_uri="http://localhost:8000",
+    route=route_completions,
+    model_name="text-davinci-003",
+)
+
+# define a prompt template for the chain (example template)
+prompt = PromptTemplate(
+    input_variables=["product"],
+    template="What is a good name for a company that makes {product}?",
+)
+
+llmchain = LLMChain(llm=gateway, prompt=prompt)
+result = llmchain.run("podcast player")
+
+print(result)
+
+```
+
+## Embeddings Example
+
+```python
+from langchain_community.embeddings import JavelinAIGatewayEmbeddings
+from langchain_openai import OpenAIEmbeddings
+
+embeddings = JavelinAIGatewayEmbeddings(
+    gateway_uri="http://localhost:8000",
+    route="embeddings",
+)
+
+print(embeddings.embed_query("hello"))
+print(embeddings.embed_documents(["hello"]))
+```
+
+## Chat Example
+```python
+from langchain_community.chat_models import ChatJavelinAIGateway
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+    SystemMessage(
+        content="You are a helpful assistant that translates English to French."
+    ),
+    HumanMessage(
+        content="Artificial Intelligence has the power to transform humanity and make the world a better place"
+    ),
+]
+
+chat = ChatJavelinAIGateway(
+    gateway_uri="http://localhost:8000",
+    route="mychatbot_route",
+    model_name="gpt-3.5-turbo",
+    params={
+        "temperature": 0.1
+    }
+)
+
+print(chat(messages))
+
+```
+
diff --git a/langchain_md_files/integrations/providers/jina.mdx b/langchain_md_files/integrations/providers/jina.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..057ace079facbdd6f21f745e9629adbad9138626
--- /dev/null
+++ b/langchain_md_files/integrations/providers/jina.mdx
@@ -0,0 +1,20 @@
+# Jina
+
+This page covers how to use the Jina Embeddings within LangChain.
+It is broken into two parts: installation and setup, and then references to specific Jina wrappers.
+
+## Installation and Setup
+- Get a Jina AI API token from [here](https://jina.ai/embeddings/) and set it as an environment variable (`JINA_API_TOKEN`)
+
+There exists a Jina Embeddings wrapper, which you can access with
+
+```python
+from langchain_community.embeddings import JinaEmbeddings
+
+# you can pass jina_api_key; if none is passed it will be taken from the `JINA_API_TOKEN` environment variable
+embeddings = JinaEmbeddings(jina_api_key='jina_**', model_name='jina-embeddings-v2-base-en')
+```
+
+You can check the list of available models from [here](https://jina.ai/embeddings/)
+
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina)
diff --git a/langchain_md_files/integrations/providers/johnsnowlabs.mdx b/langchain_md_files/integrations/providers/johnsnowlabs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..39f3ea494cbf5d2e6486b4ee66ad9e25d78498ca
--- /dev/null
+++ b/langchain_md_files/integrations/providers/johnsnowlabs.mdx
@@ -0,0 +1,117 @@
+# Johnsnowlabs
+
+Gain access to the [johnsnowlabs](https://www.johnsnowlabs.com/) ecosystem of enterprise NLP libraries
+with over 21,000 enterprise NLP models in over 200 languages with the open source `johnsnowlabs` library.
+For all 24,000+ models, see the [John Snow Labs Models Hub](https://nlp.johnsnowlabs.com/models)
+
+## Installation and Setup
+
+
+```bash
+pip install johnsnowlabs
+```
+
+To [install enterprise features](https://nlp.johnsnowlabs.com/docs/en/jsl/install_licensed_quick), run:
+```python
+# for more details see https://nlp.johnsnowlabs.com/docs/en/jsl/install_licensed_quick
+from johnsnowlabs import nlp
+
+nlp.install()
+```
+
+
+You can embed your queries and documents with optimized binaries for `gpu`, `cpu`, `apple_silicon`, or `aarch`.
+By default, cpu binaries are used.
+Once a session is started, you must restart your notebook to switch between GPU or CPU, or changes will not take effect.
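+
+The embedding examples below assume the `JohnSnowLabsEmbeddings` wrapper has already been imported; a minimal sketch of the import (the exact path is assumed to live in the `langchain_community` package):
+
+```python
+from langchain_community.embeddings import JohnSnowLabsEmbeddings
+```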
+
+## Embed Query with CPU:
+```python
+document = "foo bar"
+embedding = JohnSnowLabsEmbeddings('embed_sentence.bert')
+output = embedding.embed_query(document)
+```
+
+
+## Embed Query with GPU:
+
+```python
+document = "foo bar"
+embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','gpu')
+output = embedding.embed_query(document)
+```
+
+
+## Embed Query with Apple Silicon (M1, M2, etc.):
+
+```python
+document = "foo bar"
+embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','apple_silicon')
+output = embedding.embed_query(document)
+```
+
+
+## Embed Query with AARCH:
+
+```python
+document = "foo bar"
+embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','aarch')
+output = embedding.embed_query(document)
+```
+
+
+## Embed Document with CPU:
+```python
+documents = ["foo bar", 'bar foo']
+embedding = JohnSnowLabsEmbeddings('embed_sentence.bert')
+output = embedding.embed_documents(documents)
+```
+
+
+## Embed Document with GPU:
+
+```python
+documents = ["foo bar", 'bar foo']
+embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','gpu')
+output = embedding.embed_documents(documents)
+```
+
+
+## Embed Document with Apple Silicon (M1, M2, etc.):
+
+```python
+documents = ["foo bar", 'bar foo']
+embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','apple_silicon')
+output = embedding.embed_documents(documents)
+```
+
+
+## Embed Document with AARCH:
+
+```python
+documents = ["foo bar", 'bar foo']
+embedding = JohnSnowLabsEmbeddings('embed_sentence.bert','aarch')
+output = embedding.embed_documents(documents)
+```
+
+
+Models are loaded with [nlp.load](https://nlp.johnsnowlabs.com/docs/en/jsl/load_api) and the Spark session is started with [nlp.start()](https://nlp.johnsnowlabs.com/docs/en/jsl/start-a-sparksession) under the hood.
+
+
diff --git a/langchain_md_files/integrations/providers/joplin.mdx b/langchain_md_files/integrations/providers/joplin.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..b3c83acc5ff57b77a0bd587d40d34cd7e51e8258
--- /dev/null
+++ b/langchain_md_files/integrations/providers/joplin.mdx
@@ -0,0 +1,19 @@
+# Joplin
+
+>[Joplin](https://joplinapp.org/) is an open-source note-taking app. It captures your thoughts
+> and securely accesses them from any device.
+
+
+## Installation and Setup
+
+The `Joplin API` requires an access token.
+You can find installation instructions [here](https://joplinapp.org/api/references/rest_api/).
+
+
+## Document Loader
+
+See a [usage example](/docs/integrations/document_loaders/joplin).
+
+```python
+from langchain_community.document_loaders import JoplinLoader
+```
diff --git a/langchain_md_files/integrations/providers/kdbai.mdx b/langchain_md_files/integrations/providers/kdbai.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a5f06d0128748f8c51e8a7976f53327ddd67b2b2
--- /dev/null
+++ b/langchain_md_files/integrations/providers/kdbai.mdx
@@ -0,0 +1,24 @@
+# KDB.AI
+
+>[KDB.AI](https://kdb.ai) is a powerful knowledge-based vector database and search engine that allows you to build scalable, reliable AI applications, using real-time data, by providing advanced search, recommendation and personalization.
+
+
+## Installation and Setup
+
+Install the Python SDK:
+
+```bash
+pip install kdbai-client
+```
+
+
+## Vector store
+
+There exists a wrapper around KDB.AI indexes, allowing you to use it as a vectorstore,
+whether for semantic search or example selection.
+
+```python
+from langchain_community.vectorstores import KDBAI
+```
+
+For a more detailed walkthrough of the KDB.AI vectorstore, see [this notebook](/docs/integrations/vectorstores/kdbai)
diff --git a/langchain_md_files/integrations/providers/kinetica.mdx b/langchain_md_files/integrations/providers/kinetica.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..23da3da8bc22f88db5a85b3c72f36ce489c858df
--- /dev/null
+++ b/langchain_md_files/integrations/providers/kinetica.mdx
@@ -0,0 +1,44 @@
+# Kinetica
+
+[Kinetica](https://www.kinetica.com/) is a real-time database purpose-built for enabling
+analytics and generative AI on time-series & spatial data.
+
+## Chat Model
+
+The Kinetica LLM wrapper uses the [Kinetica SqlAssist
+LLM](https://docs.kinetica.com/7.2/sql-gpt/concepts/) to transform natural language into
+SQL to simplify the process of data retrieval.
+
+See [Kinetica Language To SQL Chat Model](/docs/integrations/chat/kinetica) for usage.
+
+```python
+from langchain_community.chat_models.kinetica import ChatKinetica
+```
+
+## Vector Store
+
+The Kinetica vectorstore wrapper leverages Kinetica's native support for [vector
+similarity search](https://docs.kinetica.com/7.2/vector_search/).
+
+See [Kinetica Vectorstore API](/docs/integrations/vectorstores/kinetica) for usage.
+
+```python
+from langchain_community.vectorstores import Kinetica
+```
+
+## Document Loader
+
+The Kinetica Document loader can be used to load LangChain Documents from the
+Kinetica database.
+
+See [Kinetica Document Loader](/docs/integrations/document_loaders/kinetica) for usage.
+
+```python
+from langchain_community.document_loaders.kinetica_loader import KineticaLoader
+```
+
+## Retriever
+
+The Kinetica Retriever can return documents given an unstructured query.
+
+See [Kinetica VectorStore based Retriever](/docs/integrations/retrievers/kinetica) for usage.
diff --git a/langchain_md_files/integrations/providers/konko.mdx b/langchain_md_files/integrations/providers/konko.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c7146778c9665001c24985e92dded13bc592e126
--- /dev/null
+++ b/langchain_md_files/integrations/providers/konko.mdx
@@ -0,0 +1,65 @@
+# Konko
+All functionality related to Konko
+
+>[Konko AI](https://www.konko.ai/) provides a fully managed API to help application developers
+
+>1. **Select** the right open source or proprietary LLMs for their application
+>2. **Build** applications faster with integrations to leading application frameworks and fully managed APIs
+>3. **Fine tune** smaller open-source LLMs to achieve industry-leading performance at a fraction of the cost
+>4. **Deploy production-scale APIs** that meet security, privacy, throughput, and latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure
+
+## Installation and Setup
+
+1. Sign in to our web app to [create an API key](https://platform.konko.ai/settings/api-keys) to access models via our endpoints for [chat completions](https://docs.konko.ai/reference/post-chat-completions) and [completions](https://docs.konko.ai/reference/post-completions).
+2. Enable a Python 3.8+ environment
+3. Install the SDK
+
+```bash
+pip install konko
+```
+
+4.
Set API Keys as environment variables(`KONKO_API_KEY`,`OPENAI_API_KEY`) + +```bash +export KONKO_API_KEY={your_KONKO_API_KEY_here} +export OPENAI_API_KEY={your_OPENAI_API_KEY_here} #Optional +``` + +Please see [the Konko docs](https://docs.konko.ai/docs/getting-started) for more details. + + +## LLM + +**Explore Available Models:** Start by browsing through the [available models](https://docs.konko.ai/docs/list-of-models) on Konko. Each model caters to different use cases and capabilities. + +Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/get-models). + +See a usage [example](/docs/integrations/llms/konko). + +### Examples of Endpoint Usage + +- **Completion with mistralai/Mistral-7B-v0.1:** + + ```python + from langchain_community.llms import Konko + llm = Konko(max_tokens=800, model='mistralai/Mistral-7B-v0.1') + prompt = "Generate a Product Description for Apple Iphone 15" + response = llm.invoke(prompt) + ``` + +## Chat Models + +See a usage [example](/docs/integrations/chat/konko). + + +- **ChatCompletion with Mistral-7B:** + + ```python + from langchain_core.messages import HumanMessage + from langchain_community.chat_models import ChatKonko + chat_instance = ChatKonko(max_tokens=10, model = 'mistralai/mistral-7b-instruct-v0.1') + msg = HumanMessage(content="Hi") + chat_response = chat_instance([msg]) + ``` + +For further assistance, contact [support@konko.ai](mailto:support@konko.ai) or join our [Discord](https://discord.gg/TXV2s3z7RZ). \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/labelstudio.mdx b/langchain_md_files/integrations/providers/labelstudio.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a634f086fca7e233c518ca989ee2313bbc37620a --- /dev/null +++ b/langchain_md_files/integrations/providers/labelstudio.mdx @@ -0,0 +1,23 @@ +# Label Studio + + +>[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback. + +## Installation and Setup + +See the [Label Studio installation guide](https://labelstud.io/guide/install) for installation options. + +We need to install the `label-studio` and `label-studio-sdk-python` Python packages: + +```bash +pip install label-studio label-studio-sdk +``` + + +## Callbacks + +See a [usage example](/docs/integrations/callbacks/labelstudio). + +```python +from langchain.callbacks import LabelStudioCallbackHandler +``` diff --git a/langchain_md_files/integrations/providers/lakefs.mdx b/langchain_md_files/integrations/providers/lakefs.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c38d5bb492827bd823c87fa9fffae68fd393af28 --- /dev/null +++ b/langchain_md_files/integrations/providers/lakefs.mdx @@ -0,0 +1,18 @@ +# lakeFS + +>[lakeFS](https://docs.lakefs.io/) provides scalable version control over +> the data lake, and uses Git-like semantics to create and access those versions. + +## Installation and Setup + +Get the `ENDPOINT`, `LAKEFS_ACCESS_KEY`, and `LAKEFS_SECRET_KEY`. +You can find installation instructions [here](https://docs.lakefs.io/quickstart/launch.html). + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/lakefs). 
+ +```python +from langchain_community.document_loaders import LakeFSLoader +``` diff --git a/langchain_md_files/integrations/providers/lancedb.mdx b/langchain_md_files/integrations/providers/lancedb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..44440de047ac4ce566a0bca5f86b88b69c157a67 --- /dev/null +++ b/langchain_md_files/integrations/providers/lancedb.mdx @@ -0,0 +1,23 @@ +# LanceDB + +This page covers how to use [LanceDB](https://github.com/lancedb/lancedb) within LangChain. +It is broken into two parts: installation and setup, and then references to specific LanceDB wrappers. + +## Installation and Setup + +- Install the Python SDK with `pip install lancedb` + +## Wrappers + +### VectorStore + +There exists a wrapper around LanceDB databases, allowing you to use it as a vectorstore, +whether for semantic search or example selection. + +To import this vectorstore: + +```python +from langchain_community.vectorstores import LanceDB +``` + +For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb) diff --git a/langchain_md_files/integrations/providers/langchain_decorators.mdx b/langchain_md_files/integrations/providers/langchain_decorators.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d719f90b2988f70af7bb290d485b34046d0df134 --- /dev/null +++ b/langchain_md_files/integrations/providers/langchain_decorators.mdx @@ -0,0 +1,370 @@ +# LangChain Decorators ✨ + +~~~ +Disclaimer: `LangChain decorators` is not created by the LangChain team and is not supported by it. +~~~ + +>`LangChain decorators` is a layer on the top of LangChain that provides syntactic sugar 🍭 for writing custom langchain prompts and chains +> +>For Feedback, Issues, Contributions - please raise an issue here: +>[ju-bezdek/langchain-decorators](https://github.com/ju-bezdek/langchain-decorators) + + +Main principles and benefits: + +- more `pythonic` way of writing code +- write multiline prompts that won't break your code flow with indentation +- making use of IDE in-built support for **hinting**, **type checking** and **popup with docs** to quickly peek in the function to see the prompt, parameters it consumes etc. +- leverage all the power of 🦜🔗 LangChain ecosystem +- adding support for **optional parameters** +- easily share parameters between the prompts by binding them to one class + + +Here is a simple example of a code written with **LangChain Decorators ✨** + +``` python + +@llm_prompt +def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str: + """ + Write me a short header for my post about {topic} for {platform} platform. + It should be for {audience} audience. + (Max 15 words) + """ + return + +# run it naturally +write_me_short_post(topic="starwars") +# or +write_me_short_post(topic="starwars", platform="redit") +``` + +# Quick start +## Installation +```bash +pip install langchain_decorators +``` + +## Examples + +Good idea on how to start is to review the examples here: + - [jupyter notebook](https://github.com/ju-bezdek/langchain-decorators/blob/main/example_notebook.ipynb) + - [colab notebook](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk) + +# Defining other parameters +Here we are just marking a function as a prompt with `llm_prompt` decorator, turning it effectively into a LLMChain. Instead of running it + + +Standard LLMchain takes much more init parameter than just inputs_variables and prompt... 
here, this implementation detail is hidden in the decorator.
+Here is how it works:
+
+1. Using **Global settings**:
+
+``` python
+# define global settings for all prompts (if not set - chatGPT is the current default)
+from langchain_decorators import GlobalSettings
+from langchain_openai import ChatOpenAI
+
+GlobalSettings.define_settings(
+    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... can change it here globally
+    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
+)
+```
+
+2. Using predefined **prompt types**
+
+``` python
+# You can change the default prompt types
+from langchain_decorators import PromptTypes, PromptTypeSettings
+
+PromptTypes.AGENT_REASONING.llm = ChatOpenAI()
+
+# Or you can just define your own ones:
+class MyCustomPromptTypes(PromptTypes):
+    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))
+
+@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
+def write_a_complicated_code(app_idea:str)->str:
+    ...
+
+```
+
+3. Define the settings **directly in the decorator**
+
+``` python
+from langchain_openai import OpenAI
+
+@llm_prompt(
+    llm=OpenAI(temperature=0.7),
+    stop_tokens=["\nObservation"],
+    ...
+    )
+def creative_writer(book_title:str)->str:
+    ...
+```
+
+## Passing a memory and/or callbacks:
+
+To pass any of these, just declare them in the function (or use kwargs to pass anything)
+
+```python
+
+@llm_prompt()
+async def write_me_short_post(topic:str, platform:str="twitter", memory:SimpleMemory = None):
+    """
+    {history_key}
+    Write me a short header for my post about {topic} for {platform} platform.
+    It should be for {audience} audience.
+    (Max 15 words)
+    """
+    pass
+
+await write_me_short_post(topic="old movies")
+
+```
+
+# Simplified streaming
+
+If we want to leverage streaming:
+ - we need to define the prompt as an async function
+ - turn on streaming on the decorator, or define a PromptType with streaming on
+ - capture the stream using StreamingContext
+
+This way we just mark which prompts should be streamed, without needing to tinker with which LLM to use or with creating and distributing a streaming handler into a particular part of our chain... just turn streaming on/off on the prompt/prompt type...
+
+The streaming will happen only if we call it in a streaming context... there we can define a simple function to handle the stream
+
+``` python
+# this code example is complete and should run as it is
+
+from langchain_decorators import StreamingContext, llm_prompt
+
+# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass around the callback handlers)
+# note that only async functions can be streamed (will get an error if it's not)
+@llm_prompt(capture_stream=True)
+async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
+    """
+    Write me a short header for my post about {topic} for {platform} platform.
+    It should be for {audience} audience.
+    (Max 15 words)
+    """
+    pass
+
+
+
+# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
+tokens = []
+def capture_stream_func(new_token:str):
+    tokens.append(new_token)
+
+# if we want to capture the stream, we need to wrap the execution into StreamingContext...
+# this will allow us to capture the stream even if the prompt call is hidden inside higher level method +# only the prompts marked with capture_stream will be captured here +with StreamingContext(stream_to_stdout=True, callback=capture_stream_func): + result = await run_prompt() + print("Stream finished ... we can distinguish tokens thanks to alternating colors") + + +print("\nWe've captured",len(tokens),"tokens🎉\n") +print("Here is the result:") +print(result) +``` + + +# Prompt declarations +By default the prompt is is the whole function docs, unless you mark your prompt + +## Documenting your prompt + +We can specify what part of our docs is the prompt definition, by specifying a code block with `` language tag + +``` python +@llm_prompt +def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"): + """ + Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs. + + It needs to be a code block, marked as a `` language + ``` + Write me a short header for my post about {topic} for {platform} platform. + It should be for {audience} audience. + (Max 15 words) + ``` + + Now only to code block above will be used as a prompt, and the rest of the docstring will be used as a description for developers. + (It has also a nice benefit that IDE (like VS code) will display the prompt properly (not trying to parse it as markdown, and thus not showing new lines properly)) + """ + return +``` + +## Chat messages prompt + +For chat models is very useful to define prompt as a set of message templates... here is how to do it: + +``` python +@llm_prompt +def simulate_conversation(human_input:str, agent_role:str="a pirate"): + """ + ## System message + - note the `:system` suffix inside the tag + + + ``` + You are a {agent_role} hacker. You mus act like one. + You reply always in code, using python or javascript code block... + for example: + + ... do not reply with anything else.. just with code - respecting your role. + ``` + + # human message + (we are using the real role that are enforced by the LLM - GPT supports system, assistant, user) + ``` + Helo, who are you + ``` + a reply: + + + ``` + \``` python <<- escaping inner code block with \ that should be part of the prompt + def hello(): + print("Argh... hello you pesky pirate") + \``` + ``` + + we can also add some history using placeholder + ``` + {history} + ``` + ``` + {human_input} + ``` + + Now only to code block above will be used as a prompt, and the rest of the docstring will be used as a description for developers. + (It has also a nice benefit that IDE (like VS code) will display the prompt properly (not trying to parse it as markdown, and thus not showing new lines properly)) + """ + pass + +``` + +the roles here are model native roles (assistant, user, system for chatGPT) + + + +# Optional sections +- you can define a whole sections of your prompt that should be optional +- if any input in the section is missing, the whole section won't be rendered + +the syntax for this is as follows: + +``` python +@llm_prompt +def prompt_with_optional_partials(): + """ + this text will be rendered always, but + + {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?} + + you can also place it in between the words + this too will be rendered{? , but + this block will be rendered only if {this_value} and {this_value} + is not empty?} ! 
+ """ +``` + + +# Output parsers + +- llm_prompt decorator natively tries to detect the best output parser based on the output type. (if not set, it returns the raw string) +- list, dict and pydantic outputs are also supported natively (automatically) + +``` python +# this code example is complete and should run as it is + +from langchain_decorators import llm_prompt + +@llm_prompt +def write_name_suggestions(company_business:str, count:int)->list: + """ Write me {count} good name suggestions for company that {company_business} + """ + pass + +write_name_suggestions(company_business="sells cookies", count=5) +``` + +## More complex structures + +for dict / pydantic you need to specify the formatting instructions... +this can be tedious, that's why you can let the output parser gegnerate you the instructions based on the model (pydantic) + +``` python +from langchain_decorators import llm_prompt +from pydantic import BaseModel, Field + + +class TheOutputStructureWeExpect(BaseModel): + name:str = Field (description="The name of the company") + headline:str = Field( description="The description of the company (for landing page)") + employees:list[str] = Field(description="5-8 fake employee names with their positions") + +@llm_prompt() +def fake_company_generator(company_business:str)->TheOutputStructureWeExpect: + """ Generate a fake company that {company_business} + {FORMAT_INSTRUCTIONS} + """ + return + +company = fake_company_generator(company_business="sells cookies") + +# print the result nicely formatted +print("Company name: ",company.name) +print("company headline: ",company.headline) +print("company employees: ",company.employees) + +``` + + +# Binding the prompt to an object + +``` python +from pydantic import BaseModel +from langchain_decorators import llm_prompt + +class AssistantPersonality(BaseModel): + assistant_name:str + assistant_role:str + field:str + + @property + def a_property(self): + return "whatever" + + def hello_world(self, function_kwarg:str=None): + """ + We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method + """ + + + @llm_prompt + def introduce_your_self(self)->str: + """ + ```  + You are an assistant named {assistant_name}. + Your role is to act as {assistant_role} + ``` + ``` + Introduce your self (in less than 20 words) + ``` + """ + + + +personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate") + +print(personality.introduce_your_self(personality)) +``` + + +# More examples: + +- these and few more examples are also available in the [colab notebook here](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk) +- including the [ReAct Agent re-implementation](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) using purely langchain decorators diff --git a/langchain_md_files/integrations/providers/lantern.mdx b/langchain_md_files/integrations/providers/lantern.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9b4a537acfaa7b4d6b32843fcd822fb09ddd3759 --- /dev/null +++ b/langchain_md_files/integrations/providers/lantern.mdx @@ -0,0 +1,25 @@ +# Lantern + +This page covers how to use the [Lantern](https://github.com/lanterndata/lantern) within LangChain +It is broken into two parts: setup, and then references to specific Lantern wrappers. + +## Setup +1. The first step is to create a database with the `lantern` extension installed. 
+ + Follow the steps at [Lantern Installation Guide](https://github.com/lanterndata/lantern#-quick-install) to install the database and the extension. The docker image is the easiest way to get started. + +## Wrappers + +### VectorStore + +There exists a wrapper around Postgres vector databases, allowing you to use it as a vectorstore, +whether for semantic search or example selection. + +To import this vectorstore: +```python +from langchain_community.vectorstores import Lantern +``` + +### Usage + +For a more detailed walkthrough of the Lantern Wrapper, see [this notebook](/docs/integrations/vectorstores/lantern) diff --git a/langchain_md_files/integrations/providers/llamacpp.mdx b/langchain_md_files/integrations/providers/llamacpp.mdx new file mode 100644 index 0000000000000000000000000000000000000000..de7d40a1c5ae46f6674bc9d9c8e0c4921a9481de --- /dev/null +++ b/langchain_md_files/integrations/providers/llamacpp.mdx @@ -0,0 +1,50 @@ +# Llama.cpp + +>[llama.cpp python](https://github.com/abetlen/llama-cpp-python) library is a simple Python bindings for `@ggerganov` +>[llama.cpp](https://github.com/ggerganov/llama.cpp). +> +>This package provides: +> +> - Low-level access to C API via ctypes interface. +> - High-level Python API for text completion +> - `OpenAI`-like API +> - `LangChain` compatibility +> - `LlamaIndex` compatibility +> - OpenAI compatible web server +> - Local Copilot replacement +> - Function Calling support +> - Vision API support +> - Multiple Models + +## Installation and Setup + +- Install the Python package + ```bash + pip install llama-cpp-python + ```` +- Download one of the [supported models](https://github.com/ggerganov/llama.cpp#description) and convert them to the llama.cpp format per the [instructions](https://github.com/ggerganov/llama.cpp) + + +## Chat models + +See a [usage example](/docs/integrations/chat/llamacpp). + +```python +from langchain_community.chat_models import ChatLlamaCpp +``` + +## LLMs + +See a [usage example](/docs/integrations/llms/llamacpp). + +```python +from langchain_community.llms import LlamaCpp +``` + +## Embedding models + +See a [usage example](/docs/integrations/text_embedding/llamacpp). + +```python +from langchain_community.embeddings import LlamaCppEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/llmonitor.mdx b/langchain_md_files/integrations/providers/llmonitor.mdx new file mode 100644 index 0000000000000000000000000000000000000000..90fb10a26401309bedb7a1faca0848741697c040 --- /dev/null +++ b/langchain_md_files/integrations/providers/llmonitor.mdx @@ -0,0 +1,22 @@ +# LLMonitor + +>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools. + +## Installation and Setup + +Create an account on [llmonitor.com](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs), then copy your new app's `tracking id`. + +Once you have it, set it as an environment variable by running: + +```bash +export LLMONITOR_APP_ID="..." +``` + + +## Callbacks + +See a [usage example](/docs/integrations/callbacks/llmonitor). 
+ +```python +from langchain.callbacks import LLMonitorCallbackHandler +``` diff --git a/langchain_md_files/integrations/providers/log10.mdx b/langchain_md_files/integrations/providers/log10.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b4378506e7c0932296688ead50be23d414ea329b --- /dev/null +++ b/langchain_md_files/integrations/providers/log10.mdx @@ -0,0 +1,104 @@ +# Log10 + +This page covers how to use the [Log10](https://log10.io) within LangChain. + +## What is Log10? + +Log10 is an [open-source](https://github.com/log10-io/log10) proxiless LLM data management and application development platform that lets you log, debug and tag your Langchain calls. + +## Quick start + +1. Create your free account at [log10.io](https://log10.io) +2. Add your `LOG10_TOKEN` and `LOG10_ORG_ID` from the Settings and Organization tabs respectively as environment variables. +3. Also add `LOG10_URL=https://log10.io` and your usual LLM API key: for e.g. `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` to your environment + +## How to enable Log10 data management for Langchain + +Integration with log10 is a simple one-line `log10_callback` integration as shown below: + +```python +from langchain_openai import ChatOpenAI +from langchain_core.messages import HumanMessage + +from log10.langchain import Log10Callback +from log10.llm import Log10Config + +log10_callback = Log10Callback(log10_config=Log10Config()) + +messages = [ + HumanMessage(content="You are a ping pong machine"), + HumanMessage(content="Ping?"), +] + +llm = ChatOpenAI(model="gpt-3.5-turbo", callbacks=[log10_callback]) +``` + +[Log10 + Langchain + Logs docs](https://github.com/log10-io/log10/blob/main/logging.md#langchain-logger) + +[More details + screenshots](https://log10.io/docs/observability/logs) including instructions for self-hosting logs + +## How to use tags with Log10 + +```python +from langchain_openai import OpenAI +from langchain_community.chat_models import ChatAnthropic +from langchain_openai import ChatOpenAI +from langchain_core.messages import HumanMessage + +from log10.langchain import Log10Callback +from log10.llm import Log10Config + +log10_callback = Log10Callback(log10_config=Log10Config()) + +messages = [ + HumanMessage(content="You are a ping pong machine"), + HumanMessage(content="Ping?"), +] + +llm = ChatOpenAI(model="gpt-3.5-turbo", callbacks=[log10_callback], temperature=0.5, tags=["test"]) +completion = llm.predict_messages(messages, tags=["foobar"]) +print(completion) + +llm = ChatAnthropic(model="claude-2", callbacks=[log10_callback], temperature=0.7, tags=["baz"]) +llm.predict_messages(messages) +print(completion) + +llm = OpenAI(model_name="gpt-3.5-turbo-instruct", callbacks=[log10_callback], temperature=0.5) +completion = llm.predict("You are a ping pong machine.\nPing?\n") +print(completion) +``` + +You can also intermix direct OpenAI calls and Langchain LLM calls: + +```python +import os +from log10.load import log10, log10_session +import openai +from langchain_openai import OpenAI + +log10(openai) + +with log10_session(tags=["foo", "bar"]): + # Log a direct OpenAI call + response = openai.Completion.create( + model="text-ada-001", + prompt="Where is the Eiffel Tower?", + temperature=0, + max_tokens=1024, + top_p=1, + frequency_penalty=0, + presence_penalty=0, + ) + print(response) + + # Log a call via Langchain + llm = OpenAI(model_name="text-ada-001", temperature=0.5) + response = llm.predict("You are a ping pong machine.\nPing?\n") + print(response) +``` + +## How to debug Langchain 
calls + +[Example of debugging](https://log10.io/docs/observability/prompt_chain_debugging) + +[More Langchain examples](https://github.com/log10-io/log10/tree/main/examples#langchain) diff --git a/langchain_md_files/integrations/providers/maritalk.mdx b/langchain_md_files/integrations/providers/maritalk.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6b0dcda545690c7a2e1e43007f3634a05bc0103c --- /dev/null +++ b/langchain_md_files/integrations/providers/maritalk.mdx @@ -0,0 +1,21 @@ +# MariTalk + +>[MariTalk](https://www.maritaca.ai/en) is an LLM-based chatbot trained to meet the needs of Brazil. + +## Installation and Setup + +You have to get the MariTalk API key. + +You also need to install the `httpx` Python package. + +```bash +pip install httpx +``` + +## Chat models + +See a [usage example](/docs/integrations/chat/maritalk). + +```python +from langchain_community.chat_models import ChatMaritalk +``` diff --git a/langchain_md_files/integrations/providers/mediawikidump.mdx b/langchain_md_files/integrations/providers/mediawikidump.mdx new file mode 100644 index 0000000000000000000000000000000000000000..52f5fde1e71283eb4792494f7df1268475d0d001 --- /dev/null +++ b/langchain_md_files/integrations/providers/mediawikidump.mdx @@ -0,0 +1,31 @@ +# MediaWikiDump + +>[MediaWiki XML Dumps](https://www.mediawiki.org/wiki/Manual:Importing_XML_dumps) contain the content of a wiki +> (wiki pages with all their revisions), without the site-related data. A XML dump does not create a full backup +> of the wiki database, the dump does not contain user accounts, images, edit logs, etc. + + +## Installation and Setup + +We need to install several python packages. + +The `mediawiki-utilities` supports XML schema 0.11 in unmerged branches. +```bash +pip install -qU git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11 +``` + +The `mediawiki-utilities mwxml` has a bug, fix PR pending. + +```bash +pip install -qU git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11 +pip install -qU mwparserfromhell +``` + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/mediawikidump). + + +```python +from langchain_community.document_loaders import MWDumpLoader +``` diff --git a/langchain_md_files/integrations/providers/meilisearch.mdx b/langchain_md_files/integrations/providers/meilisearch.mdx new file mode 100644 index 0000000000000000000000000000000000000000..31cc5d4c22ad14893d56a727765e936ce09cda42 --- /dev/null +++ b/langchain_md_files/integrations/providers/meilisearch.mdx @@ -0,0 +1,30 @@ +# Meilisearch + +> [Meilisearch](https://meilisearch.com) is an open-source, lightning-fast, and hyper +> relevant search engine. +> It comes with great defaults to help developers build snappy search experiences. +> +> You can [self-host Meilisearch](https://www.meilisearch.com/docs/learn/getting_started/installation#local-installation) +> or run on [Meilisearch Cloud](https://www.meilisearch.com/pricing). +> +>`Meilisearch v1.3` supports vector search. + +## Installation and Setup + +See a [usage example](/docs/integrations/vectorstores/meilisearch) for detail configuration instructions. + + +We need to install `meilisearch` python package. + +```bash +pip install meilisearch +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/meilisearch). 
+
+```python
+from langchain_community.vectorstores import Meilisearch
+```
+
diff --git a/langchain_md_files/integrations/providers/metal.mdx b/langchain_md_files/integrations/providers/metal.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..455830b2db775d1d332940c06249987a0aad8f4c
--- /dev/null
+++ b/langchain_md_files/integrations/providers/metal.mdx
@@ -0,0 +1,26 @@
+# Metal
+
+This page covers how to use [Metal](https://getmetal.io) within LangChain.
+
+## What is Metal?
+
+Metal is a managed retrieval & memory platform built for production. Easily index your data into `Metal` and run semantic search and retrieval on it.
+
+![Screenshot of the Metal dashboard showing the Browse Index feature with sample data.](/img/MetalDash.png "Metal Dashboard Interface")
+
+## Quick start
+
+Get started by [creating a Metal account](https://app.getmetal.io/signup).
+
+Then, you can easily take advantage of the `MetalRetriever` class to start retrieving your data for semantic search, prompting context, etc. This class takes a `Metal` instance and a dictionary of parameters to pass to the Metal API.
+
+```python
+from langchain.retrievers import MetalRetriever
+from metal_sdk.metal import Metal
+
+
+metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID")
+retriever = MetalRetriever(metal, params={"limit": 2})
+
+docs = retriever.invoke("search term")
+```
diff --git a/langchain_md_files/integrations/providers/milvus.mdx b/langchain_md_files/integrations/providers/milvus.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..ea11c08fd1b4c32ffbe50c64a1bbd637ac93cbd6
--- /dev/null
+++ b/langchain_md_files/integrations/providers/milvus.mdx
@@ -0,0 +1,25 @@
+# Milvus
+
+>[Milvus](https://milvus.io/docs/overview.md) is a database that stores, indexes, and manages
+> massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
+
+
+## Installation and Setup
+
+Install the Python SDK:
+
+```bash
+pip install pymilvus
+```
+
+## Vector Store
+
+There exists a wrapper around `Milvus` indexes, allowing you to use it as a vectorstore,
+whether for semantic search or example selection.
+
+To import this vectorstore:
+```python
+from langchain_community.vectorstores import Milvus
+```
+
+For a more detailed walkthrough of the `Milvus` wrapper, see [this notebook](/docs/integrations/vectorstores/milvus)
diff --git a/langchain_md_files/integrations/providers/mindsdb.mdx b/langchain_md_files/integrations/providers/mindsdb.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..678d16f8127550a677bdfbc2f41aa9a5f9ee4e3d
--- /dev/null
+++ b/langchain_md_files/integrations/providers/mindsdb.mdx
@@ -0,0 +1,14 @@
+# MindsDB
+
+MindsDB is the platform for customizing AI from enterprise data. With MindsDB and its nearly 200 integrations to [data sources](https://docs.mindsdb.com/integrations/data-overview) and [AI/ML frameworks](https://docs.mindsdb.com/integrations/ai-overview), any developer can use their enterprise data to customize AI for their purpose, faster and more securely.
+
+With MindsDB, you can connect any data source to any AI/ML model to implement and automate AI-powered applications. Deploy, serve, and fine-tune models in real time, utilizing data from databases, vector stores, or applications. Do all that using universal tools developers already know.
+
+MindsDB integrates with LangChain, enabling users to:
+
+
+- Deploy models available via LangChain within MindsDB, making them accessible to numerous data sources.
+- Fine-tune models available via LangChain within MindsDB using real-time and dynamic data.
+- Automate AI workflows with LangChain and MindsDB.
+
+Follow [our docs](https://docs.mindsdb.com/integrations/ai-engines/langchain) to learn more about MindsDB’s integration with LangChain and see examples.
diff --git a/langchain_md_files/integrations/providers/minimax.mdx b/langchain_md_files/integrations/providers/minimax.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a472380920a1a44999385ddfa8f70d5d7e79d223
--- /dev/null
+++ b/langchain_md_files/integrations/providers/minimax.mdx
@@ -0,0 +1,33 @@
+# Minimax
+
+>[Minimax](https://api.minimax.chat) is a Chinese startup that provides natural language processing models
+> for companies and individuals.
+
+## Installation and Setup
+Get a [Minimax API key](https://api.minimax.chat/user-center/basic-information/interface-key) and set it as an environment variable (`MINIMAX_API_KEY`).
+Get a [Minimax group ID](https://api.minimax.chat/user-center/basic-information) and set it as an environment variable (`MINIMAX_GROUP_ID`).
+
+
+## LLM
+
+There exists a Minimax LLM wrapper, which you can access with the import below.
+See a [usage example](/docs/integrations/llms/minimax).
+
+```python
+from langchain_community.llms import Minimax
+```
+
+## Chat Models
+
+See a [usage example](/docs/integrations/chat/minimax).
+
+```python
+from langchain_community.chat_models import MiniMaxChat
+```
+
+## Text Embedding Model
+
+There exists a Minimax Embedding model, which you can access with
+```python
+from langchain_community.embeddings import MiniMaxEmbeddings
+```
diff --git a/langchain_md_files/integrations/providers/mistralai.mdx b/langchain_md_files/integrations/providers/mistralai.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..ba6790aabfceee3e21c46186ae70155aa75f9412
--- /dev/null
+++ b/langchain_md_files/integrations/providers/mistralai.mdx
@@ -0,0 +1,34 @@
+# MistralAI
+
+>[Mistral AI](https://docs.mistral.ai/api/) is a platform that offers hosting for its powerful open-source models.
+
+
+## Installation and Setup
+
+A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API.
+
+You will also need the `langchain-mistralai` package:
+
+```bash
+pip install langchain-mistralai
+```
+
+## Chat models
+
+### ChatMistralAI
+
+See a [usage example](/docs/integrations/chat/mistralai).
+
+```python
+from langchain_mistralai.chat_models import ChatMistralAI
+```
+
+## Embedding models
+
+### MistralAIEmbeddings
+
+See a [usage example](/docs/integrations/text_embedding/mistralai).
+
+```python
+from langchain_mistralai import MistralAIEmbeddings
+```
diff --git a/langchain_md_files/integrations/providers/mlflow.mdx b/langchain_md_files/integrations/providers/mlflow.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cb4d5aba84040e99261fda9ffbe600f3bff34130
--- /dev/null
+++ b/langchain_md_files/integrations/providers/mlflow.mdx
@@ -0,0 +1,119 @@
+# MLflow Deployments for LLMs
+
+>[The MLflow Deployments for LLMs](https://www.mlflow.org/docs/latest/llms/deployments/index.html) is a powerful tool designed to streamline the usage and management of various large
+> language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface
+> that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM-related requests.
+
+## Installation and Setup
+
+Install `mlflow` with MLflow Deployments dependencies:
+
+```sh
+pip install 'mlflow[genai]'
+```
+
+Set the OpenAI API key as an environment variable:
+
+```sh
+export OPENAI_API_KEY=...
+```
+
+Create a configuration file:
+
+```yaml
+endpoints:
+  - name: completions
+    endpoint_type: llm/v1/completions
+    model:
+      provider: openai
+      name: text-davinci-003
+      config:
+        openai_api_key: $OPENAI_API_KEY
+
+  - name: embeddings
+    endpoint_type: llm/v1/embeddings
+    model:
+      provider: openai
+      name: text-embedding-ada-002
+      config:
+        openai_api_key: $OPENAI_API_KEY
+```
+
+Start the deployments server:
+
+```sh
+mlflow deployments start-server --config-path /path/to/config.yaml
+```
+
+## Example provided by `MLflow`
+
+>The `mlflow.langchain` module provides an API for logging and loading `LangChain` models.
+> This module exports multivariate LangChain models in the langchain flavor and univariate LangChain
+> models in the pyfunc flavor.
+
+See the [API documentation and examples](https://www.mlflow.org/docs/latest/llms/langchain/index.html) for more information.
+
+## Completions Example
+
+```python
+import mlflow
+from langchain.chains import LLMChain
+from langchain_core.prompts import PromptTemplate
+from langchain_community.llms import Mlflow
+
+llm = Mlflow(
+    target_uri="http://127.0.0.1:5000",
+    endpoint="completions",
+)
+
+llm_chain = LLMChain(
+    llm=llm,
+    prompt=PromptTemplate(
+        input_variables=["adjective"],
+        template="Tell me a {adjective} joke",
+    ),
+)
+result = llm_chain.run(adjective="funny")
+print(result)
+
+with mlflow.start_run():
+    model_info = mlflow.langchain.log_model(llm_chain, "model")
+
+model = mlflow.pyfunc.load_model(model_info.model_uri)
+print(model.predict([{"adjective": "funny"}]))
+```
+
+## Embeddings Example
+
+```python
+from langchain_community.embeddings import MlflowEmbeddings
+
+embeddings = MlflowEmbeddings(
+    target_uri="http://127.0.0.1:5000",
+    endpoint="embeddings",
+)
+
+print(embeddings.embed_query("hello"))
+print(embeddings.embed_documents(["hello"]))
+```
+
+## Chat Example
+
+```python
+from langchain_community.chat_models import ChatMlflow
+from langchain_core.messages import HumanMessage, SystemMessage
+
+chat = ChatMlflow(
+    target_uri="http://127.0.0.1:5000",
+    endpoint="chat",
+)
+
+messages = [
+    SystemMessage(
+        content="You are a helpful assistant that translates English to French."
+    ),
+    HumanMessage(
+        content="Translate this sentence from English to French: I love programming."
+    ),
+]
+print(chat(messages))
+```
diff --git a/langchain_md_files/integrations/providers/mlflow_ai_gateway.mdx b/langchain_md_files/integrations/providers/mlflow_ai_gateway.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..912ea449ebabb4ae9665a60237e87f48ae2326f4
--- /dev/null
+++ b/langchain_md_files/integrations/providers/mlflow_ai_gateway.mdx
@@ -0,0 +1,160 @@
+# MLflow AI Gateway
+
+:::warning
+
+MLflow AI Gateway has been deprecated. Please use [MLflow Deployments for LLMs](/docs/integrations/providers/mlflow/) instead.
+
+:::
+
+>[The MLflow AI Gateway](https://www.mlflow.org/docs/latest/index.html) service is a powerful tool designed to streamline the usage and management of various large
+> language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface
+> that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM-related requests.
+
+## Installation and Setup
+
+Install `mlflow` with MLflow AI Gateway dependencies:
+
+```sh
+pip install 'mlflow[gateway]'
+```
+
+Set the OpenAI API key as an environment variable:
+
+```sh
+export OPENAI_API_KEY=...
+```
+
+Create a configuration file:
+
+```yaml
+routes:
+  - name: completions
+    route_type: llm/v1/completions
+    model:
+      provider: openai
+      name: text-davinci-003
+      config:
+        openai_api_key: $OPENAI_API_KEY
+
+  - name: embeddings
+    route_type: llm/v1/embeddings
+    model:
+      provider: openai
+      name: text-embedding-ada-002
+      config:
+        openai_api_key: $OPENAI_API_KEY
+```
+
+Start the Gateway server:
+
+```sh
+mlflow gateway start --config-path /path/to/config.yaml
+```
+
+## Example provided by `MLflow`
+
+>The `mlflow.langchain` module provides an API for logging and loading `LangChain` models.
+> This module exports multivariate LangChain models in the langchain flavor and univariate LangChain
+> models in the pyfunc flavor.
+
+See the [API documentation and examples](https://www.mlflow.org/docs/latest/python_api/mlflow.langchain.html?highlight=langchain#module-mlflow.langchain).
+
+
+
+## Completions Example
+
+```python
+import mlflow
+from langchain.chains import LLMChain
+from langchain_core.prompts import PromptTemplate
+from langchain_community.llms import MlflowAIGateway
+
+gateway = MlflowAIGateway(
+    gateway_uri="http://127.0.0.1:5000",
+    route="completions",
+    params={
+        "temperature": 0.0,
+        "top_p": 0.1,
+    },
+)
+
+llm_chain = LLMChain(
+    llm=gateway,
+    prompt=PromptTemplate(
+        input_variables=["adjective"],
+        template="Tell me a {adjective} joke",
+    ),
+)
+result = llm_chain.run(adjective="funny")
+print(result)
+
+with mlflow.start_run():
+    model_info = mlflow.langchain.log_model(llm_chain, "model")
+
+model = mlflow.pyfunc.load_model(model_info.model_uri)
+print(model.predict([{"adjective": "funny"}]))
+```
+
+## Embeddings Example
+
+```python
+from langchain_community.embeddings import MlflowAIGatewayEmbeddings
+
+embeddings = MlflowAIGatewayEmbeddings(
+    gateway_uri="http://127.0.0.1:5000",
+    route="embeddings",
+)
+
+print(embeddings.embed_query("hello"))
+print(embeddings.embed_documents(["hello"]))
+```
+
+## Chat Example
+
+```python
+from langchain_community.chat_models import ChatMLflowAIGateway
+from langchain_core.messages import HumanMessage, SystemMessage
+
+chat = ChatMLflowAIGateway(
+    gateway_uri="http://127.0.0.1:5000",
+    route="chat",
+    params={
+        "temperature": 0.1
+    }
+)
+
+messages = [
+    SystemMessage(
+        content="You are a helpful assistant that translates English to French."
+    ),
+    HumanMessage(
+        content="Translate this sentence from English to French: I love programming."
+    ),
+]
+print(chat(messages))
+```
+
+## Databricks MLflow AI Gateway
+
+Databricks MLflow AI Gateway is in private preview.
+Please contact a Databricks representative to enroll in the preview.
+ +```python +from langchain.chains import LLMChain +from langchain_core.prompts import PromptTemplate +from langchain_community.llms import MlflowAIGateway + +gateway = MlflowAIGateway( + gateway_uri="databricks", + route="completions", +) + +llm_chain = LLMChain( + llm=gateway, + prompt=PromptTemplate( + input_variables=["adjective"], + template="Tell me a {adjective} joke", + ), +) +result = llm_chain.run(adjective="funny") +print(result) +``` diff --git a/langchain_md_files/integrations/providers/mlx.mdx b/langchain_md_files/integrations/providers/mlx.mdx new file mode 100644 index 0000000000000000000000000000000000000000..dc859305cdee3df85d77337a12a32467b81a78fb --- /dev/null +++ b/langchain_md_files/integrations/providers/mlx.mdx @@ -0,0 +1,34 @@ +# MLX + +>[MLX](https://ml-explore.github.io/mlx/build/html/index.html) is a `NumPy`-like array framework +> designed for efficient and flexible machine learning on `Apple` silicon, +> brought to you by `Apple machine learning research`. + + +## Installation and Setup + +Install several Python packages: + +```bash +pip install mlx-lm transformers huggingface_hub +```` + + +## Chat models + + +See a [usage example](/docs/integrations/chat/mlx). + +```python +from langchain_community.chat_models.mlx import ChatMLX +``` + +## LLMs + +### MLX Local Pipelines + +See a [usage example](/docs/integrations/llms/mlx_pipelines). + +```python +from langchain_community.llms.mlx_pipeline import MLXPipeline +``` diff --git a/langchain_md_files/integrations/providers/modal.mdx b/langchain_md_files/integrations/providers/modal.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7e02799d717a12d68df42e4deb2db5e6e34c579a --- /dev/null +++ b/langchain_md_files/integrations/providers/modal.mdx @@ -0,0 +1,95 @@ +# Modal + +This page covers how to use the Modal ecosystem to run LangChain custom LLMs. +It is broken into two parts: + +1. Modal installation and web endpoint deployment +2. Using deployed web endpoint with `LLM` wrapper class. + +## Installation and Setup + +- Install with `pip install modal` +- Run `modal token new` + +## Define your Modal Functions and Webhooks + +You must include a prompt. There is a rigid response structure: + +```python +class Item(BaseModel): + prompt: str + +@stub.function() +@modal.web_endpoint(method="POST") +def get_text(item: Item): + return {"prompt": run_gpt2.call(item.prompt)} +``` + +The following is an example with the GPT2 model: + +```python +from pydantic import BaseModel + +import modal + +CACHE_PATH = "/root/model_cache" + +class Item(BaseModel): + prompt: str + +stub = modal.Stub(name="example-get-started-with-langchain") + +def download_model(): + from transformers import GPT2Tokenizer, GPT2LMHeadModel + tokenizer = GPT2Tokenizer.from_pretrained('gpt2') + model = GPT2LMHeadModel.from_pretrained('gpt2') + tokenizer.save_pretrained(CACHE_PATH) + model.save_pretrained(CACHE_PATH) + +# Define a container image for the LLM function below, which +# downloads and stores the GPT-2 model. 
+image = modal.Image.debian_slim().pip_install(
+    "tokenizers", "transformers", "torch", "accelerate"
+).run_function(download_model)
+
+@stub.function(
+    gpu="any",
+    image=image,
+    retries=3,
+)
+def run_gpt2(text: str):
+    from transformers import GPT2Tokenizer, GPT2LMHeadModel
+    tokenizer = GPT2Tokenizer.from_pretrained(CACHE_PATH)
+    model = GPT2LMHeadModel.from_pretrained(CACHE_PATH)
+    encoded_input = tokenizer(text, return_tensors='pt').input_ids
+    output = model.generate(encoded_input, max_length=50, do_sample=True)
+    return tokenizer.decode(output[0], skip_special_tokens=True)
+
+@stub.function()
+@modal.web_endpoint(method="POST")
+def get_text(item: Item):
+    return {"prompt": run_gpt2.call(item.prompt)}
+```
+
+### Deploy the web endpoint
+
+Deploy the web endpoint to Modal cloud with the [`modal deploy`](https://modal.com/docs/reference/cli/deploy) CLI command.
+Your web endpoint will acquire a persistent URL under the `modal.run` domain.
+
+## LLM wrapper around Modal web endpoint
+
+The `Modal` LLM wrapper class accepts your deployed web endpoint's URL.
+
+```python
+from langchain.chains import LLMChain
+from langchain_core.prompts import PromptTemplate
+from langchain_community.llms import Modal
+
+endpoint_url = "https://ecorp--custom-llm-endpoint.modal.run"  # REPLACE ME with your deployed Modal web endpoint's URL
+
+llm = Modal(endpoint_url=endpoint_url)
+prompt = PromptTemplate.from_template("Question: {question}\n\nAnswer:")  # any prompt template works here
+llm_chain = LLMChain(prompt=prompt, llm=llm)
+
+question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
+
+llm_chain.run(question)
+```
+
diff --git a/langchain_md_files/integrations/providers/modelscope.mdx b/langchain_md_files/integrations/providers/modelscope.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..34c421ea707e8f93967af8d6fd7cf692b4f85daa
--- /dev/null
+++ b/langchain_md_files/integrations/providers/modelscope.mdx
@@ -0,0 +1,24 @@
+# ModelScope
+
+>[ModelScope](https://www.modelscope.cn/home) is a large repository of models and datasets.
+
+This page covers how to use the ModelScope ecosystem within LangChain.
+It is broken into two parts: installation and setup, and then references to specific ModelScope wrappers.
+
+## Installation and Setup
+
+Install the `modelscope` package.
+
+```bash
+pip install modelscope
+```
+
+
+## Text Embedding Models
+
+
+```python
+from langchain_community.embeddings import ModelScopeEmbeddings
+```
+
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/modelscope_hub)
diff --git a/langchain_md_files/integrations/providers/modern_treasury.mdx b/langchain_md_files/integrations/providers/modern_treasury.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..908f17644effdbce09eb9c0e1cd0a68c01762ae0
--- /dev/null
+++ b/langchain_md_files/integrations/providers/modern_treasury.mdx
@@ -0,0 +1,19 @@
+# Modern Treasury
+
+>[Modern Treasury](https://www.moderntreasury.com/) simplifies complex payment operations. It is a unified platform to power products and processes that move money.
+>- Connect to banks and payment systems
+>- Track transactions and balances in real-time
+>- Automate payment operations for scale
+
+## Installation and Setup
+
+There is no special setup required.
+
+## Document Loader
+
+See a [usage example](/docs/integrations/document_loaders/modern_treasury).
+ + +```python +from langchain_community.document_loaders import ModernTreasuryLoader +``` diff --git a/langchain_md_files/integrations/providers/momento.mdx b/langchain_md_files/integrations/providers/momento.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6d39999878037c699a13ee3e9856f79baa8a5162 --- /dev/null +++ b/langchain_md_files/integrations/providers/momento.mdx @@ -0,0 +1,65 @@ +# Momento + +> [Momento Cache](https://docs.momentohq.com/) is the world's first truly serverless caching service, offering instant elasticity, scale-to-zero +> capability, and blazing-fast performance. +> +> [Momento Vector Index](https://docs.momentohq.com/vector-index) stands out as the most productive, easiest-to-use, fully serverless vector index. +> +> For both services, simply grab the SDK, obtain an API key, input a few lines into your code, and you're set to go. Together, they provide a comprehensive solution for your LLM data needs. + +This page covers how to use the [Momento](https://gomomento.com) ecosystem within LangChain. + +## Installation and Setup + +- Sign up for a free account [here](https://console.gomomento.com/) to get an API key +- Install the Momento Python SDK with `pip install momento` + +## Cache + +Use Momento as a serverless, distributed, low-latency cache for LLM prompts and responses. The standard cache is the primary use case for Momento users in any environment. + +To integrate Momento Cache into your application: + +```python +from langchain.cache import MomentoCache +``` + +Then, set it up with the following code: + +```python +from datetime import timedelta +from momento import CacheClient, Configurations, CredentialProvider +from langchain.globals import set_llm_cache + +# Instantiate the Momento client +cache_client = CacheClient( + Configurations.Laptop.v1(), + CredentialProvider.from_environment_variable("MOMENTO_API_KEY"), + default_ttl=timedelta(days=1)) + +# Choose a Momento cache name of your choice +cache_name = "langchain" + +# Instantiate the LLM cache +set_llm_cache(MomentoCache(cache_client, cache_name)) +``` + +## Memory + +Momento can be used as a distributed memory store for LLMs. + +See [this notebook](/docs/integrations/memory/momento_chat_message_history) for a walkthrough of how to use Momento as a memory store for chat message history. + +```python +from langchain.memory import MomentoChatMessageHistory +``` + +## Vector Store + +Momento Vector Index (MVI) can be used as a vector store. + +See [this notebook](/docs/integrations/vectorstores/momento_vector_index) for a walkthrough of how to use MVI as a vector store. + +```python +from langchain_community.vectorstores import MomentoVectorIndex +``` diff --git a/langchain_md_files/integrations/providers/mongodb_atlas.mdx b/langchain_md_files/integrations/providers/mongodb_atlas.mdx new file mode 100644 index 0000000000000000000000000000000000000000..67fd9b2395c3f0d48364d244ec42998ad24ad417 --- /dev/null +++ b/langchain_md_files/integrations/providers/mongodb_atlas.mdx @@ -0,0 +1,82 @@ +# MongoDB Atlas + +>[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud +> database available in AWS, Azure, and GCP. It now has support for native +> Vector Search on the MongoDB document data. + +## Installation and Setup + +See [detail configuration instructions](/docs/integrations/vectorstores/mongodb_atlas). + +We need to install `langchain-mongodb` python package. 
+ +```bash +pip install langchain-mongodb +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/mongodb_atlas). + +```python +from langchain_mongodb import MongoDBAtlasVectorSearch +``` + + +## LLM Caches + +### MongoDBCache +An abstraction to store a simple cache in MongoDB. This does not use Semantic Caching, nor does it require an index to be made on the collection before generation. + +To import this cache: +```python +from langchain_mongodb.cache import MongoDBCache +``` + +To use this cache with your LLMs: +```python +from langchain_core.globals import set_llm_cache + +# use any embedding provider... +from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings + +mongodb_atlas_uri = "" +COLLECTION_NAME="" +DATABASE_NAME="" + +set_llm_cache(MongoDBCache( + connection_string=mongodb_atlas_uri, + collection_name=COLLECTION_NAME, + database_name=DATABASE_NAME, +)) +``` + + +### MongoDBAtlasSemanticCache +Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends MongoDBAtlas as both a cache and a vectorstore. +The MongoDBAtlasSemanticCache inherits from `MongoDBAtlasVectorSearch` and needs an Atlas Vector Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/mongodb_atlas) on how to set up the index. + +To import this cache: +```python +from langchain_mongodb.cache import MongoDBAtlasSemanticCache +``` + +To use this cache with your LLMs: +```python +from langchain_core.globals import set_llm_cache + +# use any embedding provider... +from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings + +mongodb_atlas_uri = "" +COLLECTION_NAME="" +DATABASE_NAME="" + +set_llm_cache(MongoDBAtlasSemanticCache( + embedding=FakeEmbeddings(), + connection_string=mongodb_atlas_uri, + collection_name=COLLECTION_NAME, + database_name=DATABASE_NAME, +)) +``` +`` \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/motherduck.mdx b/langchain_md_files/integrations/providers/motherduck.mdx new file mode 100644 index 0000000000000000000000000000000000000000..790f8167aaa759fb73503dde7a9b728763d7273a --- /dev/null +++ b/langchain_md_files/integrations/providers/motherduck.mdx @@ -0,0 +1,53 @@ +# Motherduck + +>[Motherduck](https://motherduck.com/) is a managed DuckDB-in-the-cloud service. + +## Installation and Setup + +First, you need to install `duckdb` python package. + +```bash +pip install duckdb +``` + +You will also need to sign up for an account at [Motherduck](https://motherduck.com/) + +After that, you should set up a connection string - we mostly integrate with Motherduck through SQLAlchemy. +The connection string is likely in the form: + +``` +token="..." + +conn_str = f"duckdb:///md:{token}@my_db" +``` + +## SQLChain + +You can use the SQLChain to query data in your Motherduck instance in natural language. + +``` +from langchain_openai import OpenAI +from langchain_community.utilities import SQLDatabase +from langchain_experimental.sql import SQLDatabaseChain +db = SQLDatabase.from_uri(conn_str) +db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True) +``` + +From here, see the [SQL Chain](/docs/how_to#qa-over-sql--csv) documentation on how to use. + + +## LLMCache + +You can also easily use Motherduck to cache LLM requests. +Once again this is done through the SQLAlchemy wrapper. 
+
+```python
+import sqlalchemy
+from langchain.globals import set_llm_cache
+from langchain.cache import SQLAlchemyCache
+
+eng = sqlalchemy.create_engine(conn_str)
+set_llm_cache(SQLAlchemyCache(engine=eng))
+```
+
+From here, see the [LLM Caching](/docs/integrations/llm_caching) documentation on how to use it.
+
+
diff --git a/langchain_md_files/integrations/providers/motorhead.mdx b/langchain_md_files/integrations/providers/motorhead.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..0d88c47f0d458e1e5574beba0689b646749bf146
--- /dev/null
+++ b/langchain_md_files/integrations/providers/motorhead.mdx
@@ -0,0 +1,16 @@
+# Motörhead
+
+>[Motörhead](https://github.com/getmetal/motorhead) is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
+
+## Installation and Setup
+
+See instructions at [Motörhead](https://github.com/getmetal/motorhead) for running the server locally.
+
+
+## Memory
+
+See a [usage example](/docs/integrations/memory/motorhead_memory).
+
+```python
+from langchain_community.memory import MotorheadMemory
+```
diff --git a/langchain_md_files/integrations/providers/myscale.mdx b/langchain_md_files/integrations/providers/myscale.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..8192983ef93d340820cd6c50e66c060e9fde968f
--- /dev/null
+++ b/langchain_md_files/integrations/providers/myscale.mdx
@@ -0,0 +1,66 @@
+# MyScale
+
+This page covers how to use the MyScale vector database within LangChain.
+It is broken into two parts: installation and setup, and then references to specific MyScale wrappers.
+
+With MyScale, you can manage both structured and unstructured (vectorized) data, and perform joint queries and analytics on both types of data using SQL. Plus, MyScale's cloud-native OLAP architecture, built on top of ClickHouse, enables lightning-fast data processing even on massive datasets.
+
+## Introduction
+
+[Overview of MyScale and high-performance vector search](https://docs.myscale.com/en/overview/)
+
+You can now register on our SaaS and [start a cluster now!](https://docs.myscale.com/en/quickstart/)
+
+If you are also interested in how we managed to integrate SQL and vectors, please refer to [this document](https://docs.myscale.com/en/vector-reference/) for further syntax reference.
+
+We also provide a live demo on Hugging Face! Please check out our [Hugging Face space](https://huggingface.co./myscale)! It searches millions of vectors in a blink!
+
+## Installation and Setup
+- Install the Python SDK with `pip install clickhouse-connect`
+
+### Setting up environments
+
+There are two ways to set up parameters for the MyScale index.
+
+1. Environment Variables
+
+    Before you run the app, please set the environment variable with `export`:
+    `export MYSCALE_HOST='' MYSCALE_PORT= MYSCALE_USERNAME= MYSCALE_PASSWORD= ...`
+
+    You can easily find your account, password and other info on our SaaS. For details please refer to [this document](https://docs.myscale.com/en/cluster-management/).
+    Every attribute under `MyScaleSettings` can be set with the prefix `MYSCALE_` and is case-insensitive.
+
+2. Create a `MyScaleSettings` object with parameters
+
+
+    ```python
+    from langchain_community.vectorstores import MyScale, MyScaleSettings
+    config = MyScaleSettings(host="", port=8443, ...)
+    index = MyScale(embedding_function, config)
+    index.add_documents(...)
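+    # A rough sketch of querying afterwards (not from the original docs): assuming
+    # `embedding_function` above is any LangChain Embeddings instance and documents
+    # have been added, the standard vectorstore API can be used for retrieval.
+    docs = index.similarity_search("an example query", k=4)
+    for doc in docs:
+        print(doc.page_content)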
+ ``` + +## Wrappers +supported functions: +- `add_texts` +- `add_documents` +- `from_texts` +- `from_documents` +- `similarity_search` +- `asimilarity_search` +- `similarity_search_by_vector` +- `asimilarity_search_by_vector` +- `similarity_search_with_relevance_scores` +- `delete` + +### VectorStore + +There exists a wrapper around MyScale database, allowing you to use it as a vectorstore, +whether for semantic search or similar example retrieval. + +To import this vectorstore: +```python +from langchain_community.vectorstores import MyScale +``` + +For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale) diff --git a/langchain_md_files/integrations/providers/neo4j.mdx b/langchain_md_files/integrations/providers/neo4j.mdx new file mode 100644 index 0000000000000000000000000000000000000000..929b622d612eedc0874279727edabba2c30c4045 --- /dev/null +++ b/langchain_md_files/integrations/providers/neo4j.mdx @@ -0,0 +1,60 @@ +# Neo4j + +>What is `Neo4j`? + +>- Neo4j is an `open-source database management system` that specializes in graph database technology. +>- Neo4j allows you to represent and store data in nodes and edges, making it ideal for handling connected data and relationships. +>- Neo4j provides a `Cypher Query Language`, making it easy to interact with and query your graph data. +>- With Neo4j, you can achieve high-performance `graph traversals and queries`, suitable for production-level systems. + +>Get started with Neo4j by visiting [their website](https://neo4j.com/). + +## Installation and Setup + +- Install the Python SDK with `pip install neo4j` + + +## VectorStore + +The Neo4j vector index is used as a vectorstore, +whether for semantic search or example selection. + +```python +from langchain_community.vectorstores import Neo4jVector +``` + +See a [usage example](/docs/integrations/vectorstores/neo4jvector) + +## GraphCypherQAChain + +There exists a wrapper around Neo4j graph database that allows you to generate Cypher statements based on the user input +and use them to retrieve relevant information from the database. + +```python +from langchain_community.graphs import Neo4jGraph +from langchain.chains import GraphCypherQAChain +``` + +See a [usage example](/docs/integrations/graphs/neo4j_cypher) + +## Constructing a knowledge graph from text + +Text data often contain rich relationships and insights that can be useful for various analytics, recommendation engines, or knowledge management applications. +Diffbot's NLP API allows for the extraction of entities, relationships, and semantic meaning from unstructured text data. +By coupling Diffbot's NLP API with Neo4j, a graph database, you can create powerful, dynamic graph structures based on the information extracted from text. +These graph structures are fully queryable and can be integrated into various applications. + +```python +from langchain_community.graphs import Neo4jGraph +from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer +``` + +See a [usage example](/docs/integrations/graphs/diffbot) + +## Memory + +See a [usage example](/docs/integrations/memory/neo4j_chat_message_history). 
+ +```python +from langchain.memory import Neo4jChatMessageHistory +``` diff --git a/langchain_md_files/integrations/providers/nlpcloud.mdx b/langchain_md_files/integrations/providers/nlpcloud.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f6d664833a18d8303a65e1baebe930557da4edb1 --- /dev/null +++ b/langchain_md_files/integrations/providers/nlpcloud.mdx @@ -0,0 +1,31 @@ +# NLPCloud + +>[NLP Cloud](https://docs.nlpcloud.com/#introduction) is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data. + + +## Installation and Setup + +- Install the `nlpcloud` package. + +```bash +pip install nlpcloud +``` + +- Get an NLPCloud api key and set it as an environment variable (`NLPCLOUD_API_KEY`) + + +## LLM + +See a [usage example](/docs/integrations/llms/nlpcloud). + +```python +from langchain_community.llms import NLPCloud +``` + +## Text Embedding Models + +See a [usage example](/docs/integrations/text_embedding/nlp_cloud) + +```python +from langchain_community.embeddings import NLPCloudEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/notion.mdx b/langchain_md_files/integrations/providers/notion.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6ed4fd306fc935cd530281b1899331a58025a5c3 --- /dev/null +++ b/langchain_md_files/integrations/providers/notion.mdx @@ -0,0 +1,20 @@ +# Notion DB + +>[Notion](https://www.notion.so/) is a collaboration platform with modified Markdown support that integrates kanban +> boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, +> and project and task management. + +## Installation and Setup + +All instructions are in examples below. + +## Document Loader + +We have two different loaders: `NotionDirectoryLoader` and `NotionDBLoader`. + +See [usage examples here](/docs/integrations/document_loaders/notion). + + +```python +from langchain_community.document_loaders import NotionDirectoryLoader, NotionDBLoader +``` diff --git a/langchain_md_files/integrations/providers/nuclia.mdx b/langchain_md_files/integrations/providers/nuclia.mdx new file mode 100644 index 0000000000000000000000000000000000000000..91daeb6a5a242e7f79d64fc6bd8f1cbcf9a77109 --- /dev/null +++ b/langchain_md_files/integrations/providers/nuclia.mdx @@ -0,0 +1,78 @@ +# Nuclia + +>[Nuclia](https://nuclia.com) automatically indexes your unstructured data from any internal +> and external source, providing optimized search results and generative answers. +> It can handle video and audio transcription, image content extraction, and document parsing. + + + +## Installation and Setup + +We need to install the `nucliadb-protos` package to use the `Nuclia Understanding API` + +```bash +pip install nucliadb-protos +``` + +We need to have a `Nuclia account`. +We can create one for free at [https://nuclia.cloud](https://nuclia.cloud), +and then [create a NUA key](https://docs.nuclia.dev/docs/docs/using/understanding/intro). + + +## Document Transformer + +### Nuclia + +>`Nuclia Understanding API` document transformer splits text into paragraphs and sentences, +> identifies entities, provides a summary of the text and generates embeddings for all the sentences. 
+ +To use the Nuclia document transformer, we need to instantiate a `NucliaUnderstandingAPI` +tool with `enable_ml` set to `True`: + +```python +from langchain_community.tools.nuclia import NucliaUnderstandingAPI + +nua = NucliaUnderstandingAPI(enable_ml=True) +``` + +See a [usage example](/docs/integrations/document_transformers/nuclia_transformer). + +```python +from langchain_community.document_transformers.nuclia_text_transform import NucliaTextTransformer +``` + +## Document Loaders + +### Nuclea loader + +See a [usage example](/docs/integrations/document_loaders/nuclia). + +```python +from langchain_community.document_loaders.nuclia import NucliaLoader +``` + +## Vector store + +### NucliaDB + +We need to install a python package: + +```bash +pip install nuclia +``` + +See a [usage example](/docs/integrations/vectorstores/nucliadb). + +```python +from langchain_community.vectorstores.nucliadb import NucliaDB +``` + +## Tools + +### Nuclia Understanding + +See a [usage example](/docs/integrations/tools/nuclia). + +```python +from langchain_community.tools.nuclia import NucliaUnderstandingAPI +``` diff --git a/langchain_md_files/integrations/providers/nvidia.mdx b/langchain_md_files/integrations/providers/nvidia.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0f02b3522367ee9eb81ee581fb872edd0f030c48 --- /dev/null +++ b/langchain_md_files/integrations/providers/nvidia.mdx @@ -0,0 +1,82 @@ +# NVIDIA +The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on +NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking models +from the community as well as NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA +accelerated infrastructure and deployed as a NIM, an easy-to-use, prebuilt containers that deploy anywhere using a single +command on NVIDIA accelerated infrastructure. + +NVIDIA hosted deployments of NIMs are available to test on the [NVIDIA API catalog](https://build.nvidia.com/). After testing, +NIMs can be exported from NVIDIA’s API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, +giving enterprises ownership and full control of their IP and AI application. + +NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. +At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model. + +Below is an example on how to use some common functionality surrounding text-generative and embedding models. + +## Installation + +```python +pip install -U --quiet langchain-nvidia-ai-endpoints +``` + +## Setup + +**To get started:** + +1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models. + +2. Click on your model of choice. + +3. Under Input select the Python tab, and click `Get API Key`. Then click `Generate Key`. + +4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints. + +```python +import getpass +import os + +if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"): + nvidia_api_key = getpass.getpass("Enter your NVIDIA API key: ") + assert nvidia_api_key.startswith("nvapi-"), f"{nvidia_api_key[:5]}... 
is not a valid key" + os.environ["NVIDIA_API_KEY"] = nvidia_api_key +``` +## Working with NVIDIA API Catalog + +```python +from langchain_nvidia_ai_endpoints import ChatNVIDIA + +llm = ChatNVIDIA(model="mistralai/mixtral-8x22b-instruct-v0.1") +result = llm.invoke("Write a ballad about LangChain.") +print(result.content) +``` + +Using the API, you can query live endpoints available on the NVIDIA API Catalog to get quick results from a DGX-hosted cloud compute environment. All models are source-accessible and can be deployed on your own compute cluster using NVIDIA NIM which is part of NVIDIA AI Enterprise, shown in the next section [Working with NVIDIA NIMs](##working-with-nvidia-nims). + +## Working with NVIDIA NIMs +When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications. + +[Learn more about NIMs](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/) + +```python +from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank + +# connect to a chat NIM running at localhost:8000, specifying a model +llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct") + +# connect to an embedding NIM running at localhost:8080 +embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1") + +# connect to a reranking NIM running at localhost:2016 +ranker = NVIDIARerank(base_url="http://localhost:2016/v1") +``` + +## Using NVIDIA AI Foundation Endpoints + +A selection of NVIDIA AI Foundation models are supported directly in LangChain with familiar APIs. + +The active models which are supported can be found [in API Catalog](https://build.nvidia.com/). + +**The following may be useful examples to help you get started:** +- **[`ChatNVIDIA` Model](/docs/integrations/chat/nvidia_ai_endpoints).** +- **[`NVIDIAEmbeddings` Model for RAG Workflows](/docs/integrations/text_embedding/nvidia_ai_endpoints).** diff --git a/langchain_md_files/integrations/providers/obsidian.mdx b/langchain_md_files/integrations/providers/obsidian.mdx new file mode 100644 index 0000000000000000000000000000000000000000..ce1169df90acbda16d48f609d58c8ebe94577257 --- /dev/null +++ b/langchain_md_files/integrations/providers/obsidian.mdx @@ -0,0 +1,19 @@ +# Obsidian + +>[Obsidian](https://obsidian.md/) is a powerful and extensible knowledge base +that works on top of your local folder of plain text files. + +## Installation and Setup + +All instructions are in examples below. + +## Document Loader + + +See a [usage example](/docs/integrations/document_loaders/obsidian). + + +```python +from langchain_community.document_loaders import ObsidianLoader +``` + diff --git a/langchain_md_files/integrations/providers/oci.mdx b/langchain_md_files/integrations/providers/oci.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5037fb86f192f22226527890427325972674dc53 --- /dev/null +++ b/langchain_md_files/integrations/providers/oci.mdx @@ -0,0 +1,51 @@ +# Oracle Cloud Infrastructure (OCI) + +The `LangChain` integrations related to [Oracle Cloud Infrastructure](https://www.oracle.com/artificial-intelligence/). 
+ +## OCI Generative AI +> Oracle Cloud Infrastructure (OCI) [Generative AI](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm) is a fully managed service that provides a set of state-of-the-art, +> customizable large language models (LLMs) that cover a wide range of use cases, and which are available through a single API. +> Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned +> custom models based on your own data on dedicated AI clusters. + +To use, you should have the latest `oci` python SDK and the langchain_community package installed. + +```bash +pip install -U oci langchain-community +``` + +See [chat](/docs/integrations/llms/oci_generative_ai), [complete](/docs/integrations/chat/oci_generative_ai), and [embedding](/docs/integrations/text_embedding/oci_generative_ai) usage examples. + +```python +from langchain_community.chat_models import ChatOCIGenAI + +from langchain_community.llms import OCIGenAI + +from langchain_community.embeddings import OCIGenAIEmbeddings +``` + +## OCI Data Science Model Deployment Endpoint + +> [OCI Data Science](https://docs.oracle.com/en-us/iaas/data-science/using/home.htm) is a +> fully managed and serverless platform for data science teams. Using the OCI Data Science +> platform you can build, train, and manage machine learning models, and then deploy them +> as an OCI Model Deployment Endpoint using the +> [OCI Data Science Model Deployment Service](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm). + +If you deployed a LLM with the VLLM or TGI framework, you can use the +`OCIModelDeploymentVLLM` or `OCIModelDeploymentTGI` classes to interact with it. + +To use, you should have the latest `oracle-ads` python SDK installed. + +```bash +pip install -U oracle-ads +``` + +See [usage examples](/docs/integrations/llms/oci_model_deployment_endpoint). + +```python +from langchain_community.llms import OCIModelDeploymentVLLM + +from langchain_community.llms import OCIModelDeploymentTGI +``` + diff --git a/langchain_md_files/integrations/providers/octoai.mdx b/langchain_md_files/integrations/providers/octoai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d4a064c7c7672a7f808bc3d1f95eb9a53edb7b4e --- /dev/null +++ b/langchain_md_files/integrations/providers/octoai.mdx @@ -0,0 +1,37 @@ +# OctoAI + +>[OctoAI](https://docs.octoai.cloud/docs) offers easy access to efficient compute +> and enables users to integrate their choice of AI models into applications. +> The `OctoAI` compute service helps you run, tune, and scale AI applications easily. + + +## Installation and Setup + +- Install the `openai` Python package: + ```bash + pip install openai + ```` +- Register on `OctoAI` and get an API Token from [your OctoAI account page](https://octoai.cloud/settings). + + +## Chat models + +See a [usage example](/docs/integrations/chat/octoai). + +```python +from langchain_community.chat_models import ChatOctoAI +``` + +## LLMs + +See a [usage example](/docs/integrations/llms/octoai). 
+ +```python +from langchain_community.llms.octoai_endpoint import OctoAIEndpoint +``` + +## Embedding models + +```python +from langchain_community.embeddings.octoai_embeddings import OctoAIEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/ollama.mdx b/langchain_md_files/integrations/providers/ollama.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6a05b5e2be606b37a11ccc865634629bc9b488f6 --- /dev/null +++ b/langchain_md_files/integrations/providers/ollama.mdx @@ -0,0 +1,73 @@ +# Ollama + +>[Ollama](https://ollama.com/) allows you to run open-source large language models, +> such as [Llama3.1](https://ai.meta.com/blog/meta-llama-3-1/), locally. +> +>`Ollama` bundles model weights, configuration, and data into a single package, defined by a Modelfile. +>It optimizes setup and configuration details, including GPU usage. +>For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library). + +See [this guide](/docs/how_to/local_llms) for more details +on how to use `Ollama` with LangChain. + +## Installation and Setup +### Ollama installation +Follow [these instructions](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) +to set up and run a local Ollama instance. + +Ollama will start as a background service automatically, if this is disabled, run: + +```bash +# export OLLAMA_HOST=127.0.0.1 # environment variable to set ollama host +# export OLLAMA_PORT=11434 # environment variable to set the ollama port +ollama serve +``` + +After starting ollama, run `ollama pull ` to download a model +from the [Ollama model library](https://ollama.ai/library). + +```bash +ollama pull llama3.1 +``` + +We're now ready to install the `langchain-ollama` partner package and run a model. + +### Ollama LangChain partner package install +Install the integration package with: +```bash +pip install langchain-ollama +``` +## LLM + +```python +from langchain_ollama.llms import OllamaLLM +``` + +See the notebook example [here](/docs/integrations/llms/ollama). + +## Chat Models + +### Chat Ollama + +```python +from langchain_ollama.chat_models import ChatOllama +``` + +See the notebook example [here](/docs/integrations/chat/ollama). + +### Ollama tool calling +[Ollama tool calling](https://ollama.com/blog/tool-support) uses the +OpenAI compatible web server specification, and can be used with +the default `BaseChatModel.bind_tools()` methods +as described [here](/docs/how_to/tool_calling/). +Make sure to select an ollama model that supports [tool calling](https://ollama.com/search?&c=tools). + +## Embedding models + +```python +from langchain_community.embeddings import OllamaEmbeddings +``` + +See the notebook example [here](/docs/integrations/text_embedding/ollama). + + diff --git a/langchain_md_files/integrations/providers/ontotext_graphdb.mdx b/langchain_md_files/integrations/providers/ontotext_graphdb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..468699cd215856625ac7995941054d3ed4d56849 --- /dev/null +++ b/langchain_md_files/integrations/providers/ontotext_graphdb.mdx @@ -0,0 +1,21 @@ +# Ontotext GraphDB + +>[Ontotext GraphDB](https://graphdb.ontotext.com/) is a graph database and knowledge discovery tool compliant with RDF and SPARQL. + +## Dependencies + +Install the [rdflib](https://github.com/RDFLib/rdflib) package with +```bash +pip install rdflib==7.0.0 +``` + +## Graph QA Chain + +Connect your GraphDB Database with a chat model to get insights on your data. 
+ +See the notebook example [here](/docs/integrations/graphs/ontotext). + +```python +from langchain_community.graphs import OntotextGraphDBGraph +from langchain.chains import OntotextGraphDBQAChain +``` diff --git a/langchain_md_files/integrations/providers/openllm.mdx b/langchain_md_files/integrations/providers/openllm.mdx new file mode 100644 index 0000000000000000000000000000000000000000..92bdca1242cf307f1688b7786e54755727ac5acd --- /dev/null +++ b/langchain_md_files/integrations/providers/openllm.mdx @@ -0,0 +1,70 @@ +# OpenLLM + +This page demonstrates how to use [OpenLLM](https://github.com/bentoml/OpenLLM) +with LangChain. + +`OpenLLM` is an open platform for operating large language models (LLMs) in +production. It enables developers to easily run inference with any open-source +LLMs, deploy to the cloud or on-premises, and build powerful AI apps. + +## Installation and Setup + +Install the OpenLLM package via PyPI: + +```bash +pip install openllm +``` + +## LLM + +OpenLLM supports a wide range of open-source LLMs as well as serving users' own +fine-tuned LLMs. Use `openllm model` command to see all available models that +are pre-optimized for OpenLLM. + +## Wrappers + +There is a OpenLLM Wrapper which supports loading LLM in-process or accessing a +remote OpenLLM server: + +```python +from langchain_community.llms import OpenLLM +``` + +### Wrapper for OpenLLM server + +This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The +OpenLLM server can run either locally or on the cloud. + +To try it out locally, start an OpenLLM server: + +```bash +openllm start flan-t5 +``` + +Wrapper usage: + +```python +from langchain_community.llms import OpenLLM + +llm = OpenLLM(server_url='http://localhost:3000') + +llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?") +``` + +### Wrapper for Local Inference + +You can also use the OpenLLM wrapper to load LLM in current Python process for +running inference. + +```python +from langchain_community.llms import OpenLLM + +llm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b') + +llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?") +``` + +### Usage + +For a more detailed walkthrough of the OpenLLM Wrapper, see the +[example notebook](/docs/integrations/llms/openllm) diff --git a/langchain_md_files/integrations/providers/opensearch.mdx b/langchain_md_files/integrations/providers/opensearch.mdx new file mode 100644 index 0000000000000000000000000000000000000000..be55c26d7b225ac8c59b610d7a9b9213c6c2ae61 --- /dev/null +++ b/langchain_md_files/integrations/providers/opensearch.mdx @@ -0,0 +1,21 @@ +# OpenSearch + +This page covers how to use the OpenSearch ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific OpenSearch wrappers. + +## Installation and Setup +- Install the Python package with `pip install opensearch-py` +## Wrappers + +### VectorStore + +There exists a wrapper around OpenSearch vector databases, allowing you to use it as a vectorstore +for semantic search using approximate vector search powered by lucene, nmslib and faiss engines +or using painless scripting and script scoring functions for bruteforce vector search. 
+ +To import this vectorstore: +```python +from langchain_community.vectorstores import OpenSearchVectorSearch +``` + +For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch) diff --git a/langchain_md_files/integrations/providers/openweathermap.mdx b/langchain_md_files/integrations/providers/openweathermap.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6e160f805d74a7f0754a46a1249e31b08b2c9fda --- /dev/null +++ b/langchain_md_files/integrations/providers/openweathermap.mdx @@ -0,0 +1,44 @@ +# OpenWeatherMap + +>[OpenWeatherMap](https://openweathermap.org/api/) provides all essential weather data for a specific location: +>- Current weather +>- Minute forecast for 1 hour +>- Hourly forecast for 48 hours +>- Daily forecast for 8 days +>- National weather alerts +>- Historical weather data for 40+ years back + +This page covers how to use the `OpenWeatherMap API` within LangChain. + +## Installation and Setup + +- Install requirements with +```bash +pip install pyowm +``` +- Go to OpenWeatherMap and sign up for an account to get your API key [here](https://openweathermap.org/api/) +- Set your API key as `OPENWEATHERMAP_API_KEY` environment variable + +## Wrappers + +### Utility + +There exists a OpenWeatherMapAPIWrapper utility which wraps this API. To import this utility: + +```python +from langchain_community.utilities.openweathermap import OpenWeatherMapAPIWrapper +``` + +For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap). + +### Tool + +You can also easily load this wrapper as a Tool (to use with an Agent). +You can do this with: + +```python +from langchain.agents import load_tools +tools = load_tools(["openweathermap-api"]) +``` + +For more information on tools, see [this page](/docs/how_to/tools_builtin). diff --git a/langchain_md_files/integrations/providers/oracleai.mdx b/langchain_md_files/integrations/providers/oracleai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5df9d7eab02461fe5c2b0a715838ea6637b4dcb1 --- /dev/null +++ b/langchain_md_files/integrations/providers/oracleai.mdx @@ -0,0 +1,67 @@ +# OracleAI Vector Search + +Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads that allows you to query data based on semantics, rather than keywords. +One of the biggest benefits of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in one single system. +This is not only powerful but also significantly more effective because you don't need to add a specialized vector database, eliminating the pain of data fragmentation between multiple systems. 
+ +In addition, your vectors can benefit from all of Oracle Database’s most powerful features, like the following: + + * [Partitioning Support](https://www.oracle.com/database/technologies/partitioning.html) + * [Real Application Clusters scalability](https://www.oracle.com/database/real-application-clusters/) + * [Exadata smart scans](https://www.oracle.com/database/technologies/exadata/software/smartscan/) + * [Shard processing across geographically distributed databases](https://www.oracle.com/database/distributed-database/) + * [Transactions](https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/transactions.html) + * [Parallel SQL](https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/parallel-exec-intro.html#GUID-D28717E4-0F77-44F5-BB4E-234C31D4E4BA) + * [Disaster recovery](https://www.oracle.com/database/data-guard/) + * [Security](https://www.oracle.com/security/database-security/) + * [Oracle Machine Learning](https://www.oracle.com/artificial-intelligence/database-machine-learning/) + * [Oracle Graph Database](https://www.oracle.com/database/integrated-graph-database/) + * [Oracle Spatial and Graph](https://www.oracle.com/database/spatial/) + * [Oracle Blockchain](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_blockchain_table.html#GUID-B469E277-978E-4378-A8C1-26D3FF96C9A6) + * [JSON](https://docs.oracle.com/en/database/oracle/oracle-database/23/adjsn/json-in-oracle-database.html) + + +## Document Loaders + +Please check the [usage example](/docs/integrations/document_loaders/oracleai). + +```python +from langchain_community.document_loaders.oracleai import OracleDocLoader +``` + +## Text Splitter + +Please check the [usage example](/docs/integrations/document_loaders/oracleai). + +```python +from langchain_community.document_loaders.oracleai import OracleTextSplitter +``` + +## Embeddings + +Please check the [usage example](/docs/integrations/text_embedding/oracleai). + +```python +from langchain_community.embeddings.oracleai import OracleEmbeddings +``` + +## Summary + +Please check the [usage example](/docs/integrations/tools/oracleai). + +```python +from langchain_community.utilities.oracleai import OracleSummary +``` + +## Vector Store + +Please check the [usage example](/docs/integrations/vectorstores/oracle). + +```python +from langchain_community.vectorstores.oraclevs import OracleVS +``` + +## End to End Demo + +Please check the [Oracle AI Vector Search End-to-End Demo Guide](https://github.com/langchain-ai/langchain/blob/master/cookbook/oracleai_demo.ipynb). + diff --git a/langchain_md_files/integrations/providers/outline.mdx b/langchain_md_files/integrations/providers/outline.mdx new file mode 100644 index 0000000000000000000000000000000000000000..44335477ad7e3fbb1c4e6e2c9918869ed9709f51 --- /dev/null +++ b/langchain_md_files/integrations/providers/outline.mdx @@ -0,0 +1,22 @@ +# Outline + +> [Outline](https://www.getoutline.com/) is an open-source collaborative knowledge base platform designed for team information sharing. + +## Setup + +You first need to [create an api key](https://www.getoutline.com/developers#section/Authentication) for your Outline instance. Then you need to set the following environment variables: + +```python +import os + +os.environ["OUTLINE_API_KEY"] = "xxx" +os.environ["OUTLINE_INSTANCE_URL"] = "https://app.getoutline.com" +``` + +## Retriever + +See a [usage example](/docs/integrations/retrievers/outline). 
+ +```python +from langchain.retrievers import OutlineRetriever +``` diff --git a/langchain_md_files/integrations/providers/pandas.mdx b/langchain_md_files/integrations/providers/pandas.mdx new file mode 100644 index 0000000000000000000000000000000000000000..15519b0b0f7927536cea800cbee7069551296034 --- /dev/null +++ b/langchain_md_files/integrations/providers/pandas.mdx @@ -0,0 +1,29 @@ +# Pandas + +>[pandas](https://pandas.pydata.org) is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool, +built on top of the `Python` programming language. + +## Installation and Setup + +Install the `pandas` package using `pip`: + +```bash +pip install pandas +``` + + +## Document loader + +See a [usage example](/docs/integrations/document_loaders/pandas_dataframe). + +```python +from langchain_community.document_loaders import DataFrameLoader +``` + +## Toolkit + +See a [usage example](/docs/integrations/tools/pandas). + +```python +from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent +``` diff --git a/langchain_md_files/integrations/providers/perplexity.mdx b/langchain_md_files/integrations/providers/perplexity.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9e89994f54d101cb40e5e1de952a7266840f7ac5 --- /dev/null +++ b/langchain_md_files/integrations/providers/perplexity.mdx @@ -0,0 +1,25 @@ +# Perplexity + +>[Perplexity](https://www.perplexity.ai/pro) is the most powerful way to search +> the internet with unlimited Pro Search, upgraded AI models, unlimited file upload, +> image generation, and API credits. +> +> You can check a [list of available models](https://docs.perplexity.ai/docs/model-cards). + +## Installation and Setup + +Install a Python package: + +```bash +pip install openai +```` + +Get your API key from [here](https://docs.perplexity.ai/docs/getting-started). + +## Chat models + +See a [usage example](/docs/integrations/chat/perplexity). + +```python +from langchain_community.chat_models import ChatPerplexity +``` diff --git a/langchain_md_files/integrations/providers/petals.mdx b/langchain_md_files/integrations/providers/petals.mdx new file mode 100644 index 0000000000000000000000000000000000000000..db85c3cfc80e6a0441cf967ac64c26e6bb593f01 --- /dev/null +++ b/langchain_md_files/integrations/providers/petals.mdx @@ -0,0 +1,17 @@ +# Petals + +This page covers how to use the Petals ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific Petals wrappers. + +## Installation and Setup +- Install with `pip install petals` +- Get a Hugging Face api key and set it as an environment variable (`HUGGINGFACE_API_KEY`) + +## Wrappers + +### LLM + +There exists an Petals LLM wrapper, which you can access with +```python +from langchain_community.llms import Petals +``` diff --git a/langchain_md_files/integrations/providers/pg_embedding.mdx b/langchain_md_files/integrations/providers/pg_embedding.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9bcd05bd27cd589ad9895b4fe77c4ab380a02f17 --- /dev/null +++ b/langchain_md_files/integrations/providers/pg_embedding.mdx @@ -0,0 +1,22 @@ +# Postgres Embedding + +> [pg_embedding](https://github.com/neondatabase/pg_embedding) is an open-source package for +> vector similarity search using `Postgres` and the `Hierarchical Navigable Small Worlds` +> algorithm for approximate nearest neighbor search. + +## Installation and Setup + +We need to install several python packages. 
+ +```bash +pip install psycopg2-binary +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/pgembedding). + +```python +from langchain_community.vectorstores import PGEmbedding +``` + diff --git a/langchain_md_files/integrations/providers/pgvector.mdx b/langchain_md_files/integrations/providers/pgvector.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c98aaea19a98d876055276e21874a190adccf30b --- /dev/null +++ b/langchain_md_files/integrations/providers/pgvector.mdx @@ -0,0 +1,29 @@ +# PGVector + +This page covers how to use the Postgres [PGVector](https://github.com/pgvector/pgvector) ecosystem within LangChain +It is broken into two parts: installation and setup, and then references to specific PGVector wrappers. + +## Installation +- Install the Python package with `pip install pgvector` + + +## Setup +1. The first step is to create a database with the `pgvector` extension installed. + + Follow the steps at [PGVector Installation Steps](https://github.com/pgvector/pgvector#installation) to install the database and the extension. The docker image is the easiest way to get started. + +## Wrappers + +### VectorStore + +There exists a wrapper around Postgres vector databases, allowing you to use it as a vectorstore, +whether for semantic search or example selection. + +To import this vectorstore: +```python +from langchain_community.vectorstores.pgvector import PGVector +``` + +### Usage + +For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector) diff --git a/langchain_md_files/integrations/providers/pinecone.mdx b/langchain_md_files/integrations/providers/pinecone.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6a56785d5b2b9da989d05640fb9432845004ea44 --- /dev/null +++ b/langchain_md_files/integrations/providers/pinecone.mdx @@ -0,0 +1,51 @@ +--- +keywords: [pinecone] +--- + +# Pinecone + +>[Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality. + + +## Installation and Setup + +Install the Python SDK: + +```bash +pip install langchain-pinecone +``` + + +## Vector store + +There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore, +whether for semantic search or example selection. + +```python +from langchain_pinecone import PineconeVectorStore +``` + +For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone) + +## Retrievers + +### Pinecone Hybrid Search + +```bash +pip install pinecone-client pinecone-text +``` + +```python +from langchain_community.retrievers import ( + PineconeHybridSearchRetriever, +) +``` + +For more detailed information, see [this notebook](/docs/integrations/retrievers/pinecone_hybrid_search). + + +### Self Query retriever + +Pinecone vector store can be used as a retriever for self-querying. + +For more detailed information, see [this notebook](/docs/integrations/retrievers/self_query/pinecone). diff --git a/langchain_md_files/integrations/providers/pipelineai.mdx b/langchain_md_files/integrations/providers/pipelineai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..e13f6cffc5cd32f46f86ad71903e0b291ba1f66e --- /dev/null +++ b/langchain_md_files/integrations/providers/pipelineai.mdx @@ -0,0 +1,19 @@ +# PipelineAI + +This page covers how to use the PipelineAI ecosystem within LangChain. 
+It is broken into two parts: installation and setup, and then references to specific PipelineAI wrappers. + +## Installation and Setup + +- Install with `pip install pipeline-ai` +- Get a Pipeline Cloud api key and set it as an environment variable (`PIPELINE_API_KEY`) + +## Wrappers + +### LLM + +There exists a PipelineAI LLM wrapper, which you can access with + +```python +from langchain_community.llms import PipelineAI +``` diff --git a/langchain_md_files/integrations/providers/predictionguard.mdx b/langchain_md_files/integrations/providers/predictionguard.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5e01eeef14dbe1a5cc5efcb7f499ba15490e5c5f --- /dev/null +++ b/langchain_md_files/integrations/providers/predictionguard.mdx @@ -0,0 +1,102 @@ +# Prediction Guard + +This page covers how to use the Prediction Guard ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers. + +## Installation and Setup +- Install the Python SDK with `pip install predictionguard` +- Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`) + +## LLM Wrapper + +There exists a Prediction Guard LLM wrapper, which you can access with +```python +from langchain_community.llms import PredictionGuard +``` + +You can provide the name of the Prediction Guard model as an argument when initializing the LLM: +```python +pgllm = PredictionGuard(model="MPT-7B-Instruct") +``` + +You can also provide your access token directly as an argument: +```python +pgllm = PredictionGuard(model="MPT-7B-Instruct", token="") +``` + +Finally, you can provide an "output" argument that is used to structure/ control the output of the LLM: +```python +pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"}) +``` + +## Example usage + +Basic usage of the controlled or guarded LLM wrapper: +```python +import os + +import predictionguard as pg +from langchain_community.llms import PredictionGuard +from langchain_core.prompts import PromptTemplate +from langchain.chains import LLMChain + +# Your Prediction Guard API key. Get one at predictionguard.com +os.environ["PREDICTIONGUARD_TOKEN"] = "" + +# Define a prompt template +template = """Respond to the following query based on the context. + +Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦 +Exclusive Candle Box - $80 +Monthly Candle Box - $45 (NEW!) +Scent of The Month Box - $28 (NEW!) +Head to stories to get ALL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉 + +Query: {query} + +Result: """ +prompt = PromptTemplate.from_template(template) + +# With "guarding" or controlling the output of the LLM. See the +# Prediction Guard docs (https://docs.predictionguard.com) to learn how to +# control the output with integer, float, boolean, JSON, and other types and +# structures. 
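+#
+# Below, the "categorical" output type restricts the completion to one of the
+# three labels listed, so the chain returns a category label rather than
+# free-form text.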
+pgllm = PredictionGuard(model="MPT-7B-Instruct", + output={ + "type": "categorical", + "categories": [ + "product announcement", + "apology", + "relational" + ] + }) +pgllm(prompt.format(query="What kind of post is this?")) +``` + +Basic LLM Chaining with the Prediction Guard wrapper: +```python +import os + +from langchain_core.prompts import PromptTemplate +from langchain.chains import LLMChain +from langchain_community.llms import PredictionGuard + +# Optional, add your OpenAI API Key. This is optional, as Prediction Guard allows +# you to access all the latest open access models (see https://docs.predictionguard.com) +os.environ["OPENAI_API_KEY"] = "" + +# Your Prediction Guard API key. Get one at predictionguard.com +os.environ["PREDICTIONGUARD_TOKEN"] = "" + +pgllm = PredictionGuard(model="OpenAI-gpt-3.5-turbo-instruct") + +template = """Question: {question} + +Answer: Let's think step by step.""" +prompt = PromptTemplate.from_template(template) +llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True) + +question = "What NFL team won the Super Bowl in the year Justin Beiber was born?" + +llm_chain.predict(question=question) +``` diff --git a/langchain_md_files/integrations/providers/promptlayer.mdx b/langchain_md_files/integrations/providers/promptlayer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..550ff28f35b65e64ed603ecb6da7415f64132d69 --- /dev/null +++ b/langchain_md_files/integrations/providers/promptlayer.mdx @@ -0,0 +1,49 @@ +# PromptLayer + +>[PromptLayer](https://docs.promptlayer.com/introduction) is a platform for prompt engineering. +> It also helps with the LLM observability to visualize requests, version prompts, and track usage. +> +>While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. +> [`PromptLayerOpenAI`](https://docs.promptlayer.com/languages/langchain)), +> using a callback is the recommended way to integrate `PromptLayer` with LangChain. + +## Installation and Setup + +To work with `PromptLayer`, we have to: +- Create a `PromptLayer` account +- Create an api token and set it as an environment variable (`PROMPTLAYER_API_KEY`) + +Install a Python package: + +```bash +pip install promptlayer +``` + + +## Callback + +See a [usage example](/docs/integrations/callbacks/promptlayer). + +```python +import promptlayer # Don't forget this import! +from langchain.callbacks import PromptLayerCallbackHandler +``` + + +## LLM + +See a [usage example](/docs/integrations/llms/promptlayer_openai). + +```python +from langchain_community.llms import PromptLayerOpenAI +``` + + +## Chat Models + +See a [usage example](/docs/integrations/chat/promptlayer_chatopenai). + +```python +from langchain_community.chat_models import PromptLayerChatOpenAI +``` + diff --git a/langchain_md_files/integrations/providers/psychic.mdx b/langchain_md_files/integrations/providers/psychic.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a415f8a5a484670dbdfa05d9b726c95b92a130c8 --- /dev/null +++ b/langchain_md_files/integrations/providers/psychic.mdx @@ -0,0 +1,34 @@ +--- +sidebar_class_name: hidden +--- + +# Psychic + +:::warning +This provider is no longer maintained, and may not work. Use with caution. +::: + +>[Psychic](https://www.psychic.dev/) is a platform for integrating with SaaS tools like `Notion`, `Zendesk`, +> `Confluence`, and `Google Drive` via OAuth and syncing documents from these applications to your SQL or vector +> database. You can think of it like Plaid for unstructured data. 
+ +## Installation and Setup + +```bash +pip install psychicapi +``` + +Psychic is easy to set up - you import the `react` library and configure it with your `Sidekick API` key, which you get +from the [Psychic dashboard](https://dashboard.psychic.dev/). When you connect the applications, you +view these connections from the dashboard and retrieve data using the server-side libraries. + +1. Create an account in the [dashboard](https://dashboard.psychic.dev/). +2. Use the [react library](https://docs.psychic.dev/sidekick-link) to add the Psychic link modal to your frontend react app. You will use this to connect the SaaS apps. +3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic) + + +## Advantages vs Other Document Loaders + +1. **Universal API:** Instead of building OAuth flows and learning the APIs for every SaaS app, you integrate Psychic once and leverage our universal API to retrieve data. +2. **Data Syncs:** Data in your customers' SaaS apps can get stale fast. With Psychic you can configure webhooks to keep your documents up to date on a daily or realtime basis. +3. **Simplified OAuth:** Psychic handles OAuth end-to-end so that you don't have to spend time creating OAuth clients for each integration, keeping access tokens fresh, and handling OAuth redirect logic. \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/pygmalionai.mdx b/langchain_md_files/integrations/providers/pygmalionai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..2d98fdf38c0b62b44eecc38d46f2b5fc337f719e --- /dev/null +++ b/langchain_md_files/integrations/providers/pygmalionai.mdx @@ -0,0 +1,21 @@ +# PygmalionAI + +>[PygmalionAI](https://pygmalion.chat/) is a company supporting the +> open-source models by serving the inference endpoint +> for the [Aphrodite Engine](https://github.com/PygmalionAI/aphrodite-engine). + + +## Installation and Setup + + +```bash +pip install aphrodite-engine +``` + +## LLMs + +See a [usage example](/docs/integrations/llms/aphrodite). + +```python +from langchain_community.llms import Aphrodite +``` diff --git a/langchain_md_files/integrations/providers/qdrant.mdx b/langchain_md_files/integrations/providers/qdrant.mdx new file mode 100644 index 0000000000000000000000000000000000000000..021f73d33ffcd86150100c27aa4396918b491930 --- /dev/null +++ b/langchain_md_files/integrations/providers/qdrant.mdx @@ -0,0 +1,27 @@ +# Qdrant + +>[Qdrant](https://qdrant.tech/documentation/) (read: quadrant) is a vector similarity search engine. +> It provides a production-ready service with a convenient API to store, search, and manage +> points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support. + + +## Installation and Setup + +Install the Python partner package: + +```bash +pip install langchain-qdrant +``` + + +## Vector Store + +There exists a wrapper around `Qdrant` indexes, allowing you to use it as a vectorstore, +whether for semantic search or example selection. 
+ +To import this vectorstore: +```python +from langchain_qdrant import QdrantVectorStore +``` + +For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant) diff --git a/langchain_md_files/integrations/providers/rank_bm25.mdx b/langchain_md_files/integrations/providers/rank_bm25.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0deffeec23ed4a308b8fa539978760651993945f --- /dev/null +++ b/langchain_md_files/integrations/providers/rank_bm25.mdx @@ -0,0 +1,25 @@ +# rank_bm25 + +[rank_bm25](https://github.com/dorianbrown/rank_bm25) is an open-source collection of algorithms +designed to query documents and return the most relevant ones, commonly used for creating +search engines. + +See its [project page](https://github.com/dorianbrown/rank_bm25) for available algorithms. + + +## Installation and Setup + +First, you need to install `rank_bm25` python package. + +```bash +pip install rank_bm25 +``` + + +## Retriever + +See a [usage example](/docs/integrations/retrievers/bm25). + +```python +from langchain_community.retrievers import BM25Retriever +``` diff --git a/langchain_md_files/integrations/providers/reddit.mdx b/langchain_md_files/integrations/providers/reddit.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5e806075513cdb04087a03fbfa0d8ba957fbfc5d --- /dev/null +++ b/langchain_md_files/integrations/providers/reddit.mdx @@ -0,0 +1,22 @@ +# Reddit + +>[Reddit](https://www.reddit.com) is an American social news aggregation, content rating, and discussion website. + +## Installation and Setup + +First, you need to install a python package. + +```bash +pip install praw +``` + +Make a [Reddit Application](https://www.reddit.com/prefs/apps/) and initialize the loader with your Reddit API credentials. + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/reddit). + + +```python +from langchain_community.document_loaders import RedditPostsLoader +``` diff --git a/langchain_md_files/integrations/providers/redis.mdx b/langchain_md_files/integrations/providers/redis.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a6682ef25c9640049137c3c01f597a35f7ece150 --- /dev/null +++ b/langchain_md_files/integrations/providers/redis.mdx @@ -0,0 +1,138 @@ +# Redis + +>[Redis (Remote Dictionary Server)](https://en.wikipedia.org/wiki/Redis) is an open-source in-memory storage, +> used as a distributed, in-memory key–value database, cache and message broker, with optional durability. +> Because it holds all data in memory and because of its design, `Redis` offers low-latency reads and writes, +> making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database, +> and one of the most popular databases overall. + +This page covers how to use the [Redis](https://redis.com) ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific Redis wrappers. 
+
+## Installation and Setup
+
+Install the Python SDK:
+
+```bash
+pip install redis
+```
+
+To run Redis locally, you can use Docker:
+
+```bash
+docker run --name langchain-redis -d -p 6379:6379 redis redis-server --save 60 1 --loglevel warning
+```
+
+To stop the container:
+
+```bash
+docker stop langchain-redis
+```
+
+And to start it again:
+
+```bash
+docker start langchain-redis
+```
+
+### Connections
+
+We need a redis connection url string to connect to the database. It can point either to a standalone Redis server
+or to a High-Availability setup with Replication and Redis Sentinels.
+
+#### Redis Standalone connection url
+For a standalone `Redis` server, the official redis connection url formats can be used, as described in the python redis module's
+"from_url()" method [Redis.from_url](https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url).
+
+Example: `redis_url = "redis://:secret-pass@localhost:6379/0"`
+
+#### Redis Sentinel connection url
+
+For [Redis sentinel setups](https://redis.io/docs/management/sentinel/) the connection scheme is "redis+sentinel".
+This is an unofficial extension to the officially IANA-registered protocol schemes, used because no official connection url
+scheme for Sentinels is available.
+
+Example: `redis_url = "redis+sentinel://:secret-pass@sentinel-host:26379/mymaster/0"`
+
+The format is `redis+sentinel://[[username]:[password]]@[host-or-ip]:[port]/[service-name]/[db-number]`
+with the default values of "service-name = mymaster" and "db-number = 0" if not set explicitly.
+The service-name is the redis server monitoring group name as configured within the Sentinel.
+
+The current url format limits the connection string to one sentinel host only (no list can be given), and
+both the Redis server and the sentinel must have the same password set (if one is used).
+
+#### Redis Cluster connection url
+
+Redis Cluster is currently not supported by the methods that require a "redis_url" parameter.
+The only way to use a Redis Cluster is with LangChain classes accepting a preconfigured Redis client like `RedisCache`
+(example below).
+
+## Cache
+
+The Cache wrapper allows for [Redis](https://redis.io) to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.
+
+### Standard Cache
+The standard cache is the bread and butter of Redis use cases in production for both [open-source](https://redis.io) and [enterprise](https://redis.com) users globally.
+
+```python
+from langchain.cache import RedisCache
+```
+
+To use this cache with your LLMs:
+```python
+from langchain.globals import set_llm_cache
+import redis
+
+redis_client = redis.Redis.from_url(...)
+set_llm_cache(RedisCache(redis_client))
+```
+
+### Semantic Cache
+Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends Redis as both a cache and a vectorstore.
+
+```python
+from langchain.cache import RedisSemanticCache
+```
+
+To use this cache with your LLMs:
+```python
+from langchain.globals import set_llm_cache
+
+# use any embedding provider; FakeEmbeddings is only a lightweight stand-in
+from langchain_community.embeddings import FakeEmbeddings
+
+redis_url = "redis://localhost:6379"
+
+set_llm_cache(RedisSemanticCache(
+    embedding=FakeEmbeddings(size=1352),
+    redis_url=redis_url
+))
+```
+
+## VectorStore
+
+The vectorstore wrapper turns Redis into a low-latency [vector database](https://redis.com/solutions/use-cases/vector-database/) for semantic search or LLM content retrieval.
+ +```python +from langchain_community.vectorstores import Redis +``` + +For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis). + +## Retriever + +The Redis vector store retriever wrapper generalizes the vectorstore class to perform +low-latency document retrieval. To create the retriever, simply +call `.as_retriever()` on the base vectorstore class. + +## Memory + +Redis can be used to persist LLM conversations. + +### Vector Store Retriever Memory + +For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](https://python.langchain.com/v0.2/api_reference/langchain/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html). + +### Chat Message History Memory +For a detailed example of Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history). diff --git a/langchain_md_files/integrations/providers/remembrall.mdx b/langchain_md_files/integrations/providers/remembrall.mdx new file mode 100644 index 0000000000000000000000000000000000000000..822acab815ad047d17c6d9be44bc805a12000bf9 --- /dev/null +++ b/langchain_md_files/integrations/providers/remembrall.mdx @@ -0,0 +1,15 @@ +# Remembrall + +>[Remembrall](https://remembrall.dev/) is a platform that gives a language model +> long-term memory, retrieval augmented generation, and complete observability. + +## Installation and Setup + +To get started, [sign in with Github on the Remembrall platform](https://remembrall.dev/login) +and copy your [API key from the settings page](https://remembrall.dev/dashboard/settings). + + +## Memory + +See a [usage example](/docs/integrations/memory/remembrall). + diff --git a/langchain_md_files/integrations/providers/replicate.mdx b/langchain_md_files/integrations/providers/replicate.mdx new file mode 100644 index 0000000000000000000000000000000000000000..21bd1925ddf6d1ff85ae914212d48dad8877fb72 --- /dev/null +++ b/langchain_md_files/integrations/providers/replicate.mdx @@ -0,0 +1,46 @@ +# Replicate +This page covers how to run models on Replicate within LangChain. + +## Installation and Setup +- Create a [Replicate](https://replicate.com) account. Get your API key and set it as an environment variable (`REPLICATE_API_TOKEN`) +- Install the [Replicate python client](https://github.com/replicate/replicate-python) with `pip install replicate` + +## Calling a model + +Find a model on the [Replicate explore page](https://replicate.com/explore), and then paste in the model name and version in this format: `owner-name/model-name:version` + +For example, for this [dolly model](https://replicate.com/replicate/dolly-v2-12b), click on the API tab. 
The model name/version would be: `"replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"` + +Only the `model` param is required, but any other model parameters can also be passed in with the format `input={model_param: value, ...}` + + +For example, if we were running stable diffusion and wanted to change the image dimensions: + +``` +Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'}) +``` + +*Note that only the first output of a model will be returned.* +From here, we can initialize our model: + +```python +llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5") +``` + +And run it: + +```python +prompt = """ +Answer the following yes/no question by reasoning step by step. +Can a dog drive a car? +""" +llm(prompt) +``` + +We can call any Replicate model (not just LLMs) using this syntax. For example, we can call [Stable Diffusion](https://replicate.com/stability-ai/stable-diffusion): + +```python +text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions':'512x512'}) + +image_output = text2image("A cat riding a motorcycle by Picasso") +``` diff --git a/langchain_md_files/integrations/providers/roam.mdx b/langchain_md_files/integrations/providers/roam.mdx new file mode 100644 index 0000000000000000000000000000000000000000..322ade8d29aa390d54ddff211e883ab49e40f58b --- /dev/null +++ b/langchain_md_files/integrations/providers/roam.mdx @@ -0,0 +1,17 @@ +# Roam + +>[ROAM](https://roamresearch.com/) is a note-taking tool for networked thought, designed to create a personal knowledge base. + +## Installation and Setup + +There isn't any special setup for it. + + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/roam). + +```python +from langchain_community.document_loaders import RoamLoader +``` diff --git a/langchain_md_files/integrations/providers/robocorp.mdx b/langchain_md_files/integrations/providers/robocorp.mdx new file mode 100644 index 0000000000000000000000000000000000000000..4573db24b45e4ced7a32b9bf445b7b45cbf15bf2 --- /dev/null +++ b/langchain_md_files/integrations/providers/robocorp.mdx @@ -0,0 +1,37 @@ +# Robocorp + +>[Robocorp](https://robocorp.com/) helps build and operate Python workers that run seamlessly anywhere at any scale + + +## Installation and Setup + +You need to install `langchain-robocorp` python package: + +```bash +pip install langchain-robocorp +``` + +You will need a running instance of `Action Server` to communicate with from your agent application. +See the [Robocorp Quickstart](https://github.com/robocorp/robocorp#quickstart) on how to setup Action Server and create your Actions. + +You can bootstrap a new project using Action Server `new` command. + +```bash +action-server new +cd ./your-project-name +action-server start +``` + +## Tool + +```python +from langchain_robocorp.toolkits import ActionServerRequestTool +``` + +## Toolkit + +See a [usage example](/docs/integrations/tools/robocorp). 
+
+```python
+from langchain_robocorp import ActionServerToolkit
+```
diff --git a/langchain_md_files/integrations/providers/rockset.mdx b/langchain_md_files/integrations/providers/rockset.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..735c2181783fafcd389fa68190369e0d112d25e5
--- /dev/null
+++ b/langchain_md_files/integrations/providers/rockset.mdx
@@ -0,0 +1,33 @@
+# Rockset
+
+>[Rockset](https://rockset.com/product/) is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters.
+
+## Installation and Setup
+
+Make sure you have a Rockset account and go to the web console to get the API key. Details can be found on [the website](https://rockset.com/docs/rest-api/).
+
+```bash
+pip install rockset
+```
+
+## Vector Store
+
+See a [usage example](/docs/integrations/vectorstores/rockset).
+
+```python
+from langchain_community.vectorstores import Rockset
+```
+
+## Document Loader
+
+See a [usage example](/docs/integrations/document_loaders/rockset).
+```python
+from langchain_community.document_loaders import RocksetLoader
+```
+
+## Chat Message History
+
+See a [usage example](/docs/integrations/memory/rockset_chat_message_history).
+```python
+from langchain_community.chat_message_histories import RocksetChatMessageHistory
+```
\ No newline at end of file
diff --git a/langchain_md_files/integrations/providers/runhouse.mdx b/langchain_md_files/integrations/providers/runhouse.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d0b63ed4905738311964f9ee3196aacef2b6c4f5
--- /dev/null
+++ b/langchain_md_files/integrations/providers/runhouse.mdx
@@ -0,0 +1,29 @@
+# Runhouse
+
+This page covers how to use the [Runhouse](https://github.com/run-house/runhouse) ecosystem within LangChain.
+It is broken into three parts: installation and setup, LLMs, and Embeddings.
+
+## Installation and Setup
+- Install the Python SDK with `pip install runhouse`
+- If you'd like to use an on-demand cluster, check your cloud credentials with `sky check`
+
+## Self-hosted LLMs
+For a basic self-hosted LLM, you can use the `SelfHostedHuggingFaceLLM` class. For more
+custom LLMs, you can use the `SelfHostedPipeline` parent class.
+
+```python
+from langchain_community.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
+```
+
+For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse)
+
+## Self-hosted Embeddings
+There are several ways to use self-hosted embeddings with LangChain via Runhouse.
+
+For a basic self-hosted embedding from a Hugging Face Transformers model, you can use
+the `SelfHostedHuggingFaceEmbeddings` class.
+```python
+from langchain_community.embeddings import SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings
+```
+
+For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted)
diff --git a/langchain_md_files/integrations/providers/rwkv.mdx b/langchain_md_files/integrations/providers/rwkv.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..90a795a420865b8fee15919486e4c30bf4452e28
--- /dev/null
+++ b/langchain_md_files/integrations/providers/rwkv.mdx
@@ -0,0 +1,65 @@
+# RWKV-4
+
+This page covers how to use the `RWKV-4` wrapper within LangChain.
+It is broken into two parts: installation and setup, and then usage with an example.
+
+## Installation and Setup
+- Install the Python package with `pip install rwkv`
+- Install the tokenizer Python package with `pip install tokenizer`
+- Download a [RWKV model](https://huggingface.co./BlinkDL/rwkv-4-raven/tree/main) and place it in your desired directory
+- Download the [tokens file](https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/20B_tokenizer.json)
+
+## Usage
+
+### RWKV
+
+To use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration.
+
+```python
+from langchain_community.llms import RWKV
+
+# Test the model
+
+def generate_prompt(instruction, input=None):
+    if input:
+        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+# Instruction:
+{instruction}
+
+# Input:
+{input}
+
+# Response:
+"""
+    else:
+        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+# Instruction:
+{instruction}
+
+# Response:
+"""
+
+
+model = RWKV(model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth", strategy="cpu fp32", tokens_path="./rwkv/20B_tokenizer.json")
+response = model.invoke(generate_prompt("Once upon a time, "))
+```
+## Model File
+
+You can find links to model file downloads at the [RWKV-4-Raven](https://huggingface.co./BlinkDL/rwkv-4-raven/tree/main) repository.
+
+### Rwkv-4 models -> recommended VRAM
+
+
+```
+RWKV VRAM
+Model | 8bit | bf16/fp16 | fp32
+14B   | 16GB | 28GB      | >50GB
+7B    | 8GB  | 14GB      | 28GB
+3B    | 2.8GB| 6GB       | 12GB
+1b5   | 1.3GB| 3GB       | 6GB
+```
+
+See the [rwkv pip](https://pypi.org/project/rwkv/) page for more information about strategies, including streaming and cuda support.
diff --git a/langchain_md_files/integrations/providers/salute_devices.mdx b/langchain_md_files/integrations/providers/salute_devices.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..2651090acc07cf7873dfb7bc49b2f07fcbb7a19f
--- /dev/null
+++ b/langchain_md_files/integrations/providers/salute_devices.mdx
@@ -0,0 +1,37 @@
+# Salute Devices
+
+Salute Devices provides the GigaChat family of LLMs.
+
+For more info on how to get access to GigaChat, [follow this link](https://developers.sber.ru/docs/ru/gigachat/api/integration).
+
+## Installation and Setup
+
+The GigaChat package can be installed via pip from PyPI:
+
+```bash
+pip install gigachat
+```
+
+## LLMs
+
+See a [usage example](/docs/integrations/llms/gigachat).
+
+```python
+from langchain_community.llms import GigaChat
+```
+
+## Chat models
+
+See a [usage example](/docs/integrations/chat/gigachat).
+
+```python
+from langchain_community.chat_models import GigaChat
+```
+
+## Embeddings
+
+See a [usage example](/docs/integrations/text_embedding/gigachat).
+
+```python
+from langchain_community.embeddings import GigaChatEmbeddings
+```
\ No newline at end of file
diff --git a/langchain_md_files/integrations/providers/sap.mdx b/langchain_md_files/integrations/providers/sap.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..97cf2649b6e9c4ad42d6905650af79196fc86876
--- /dev/null
+++ b/langchain_md_files/integrations/providers/sap.mdx
@@ -0,0 +1,25 @@
+# SAP
+
+>[SAP SE (Wikipedia)](https://www.sap.com/about/company.html) is a German multinational
+> software company. It develops enterprise software to manage business operations and
+> customer relations.
The company is the world's leading +> `enterprise resource planning (ERP)` software vendor. + +## Installation and Setup + +We need to install the `hdbcli` python package. + +```bash +pip install hdbcli +``` + +## Vectorstore + +>[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is +> a vector store fully integrated into the `SAP HANA Cloud` database. + +See a [usage example](/docs/integrations/vectorstores/sap_hanavector). + +```python +from langchain_community.vectorstores.hanavector import HanaDB +``` diff --git a/langchain_md_files/integrations/providers/searchapi.mdx b/langchain_md_files/integrations/providers/searchapi.mdx new file mode 100644 index 0000000000000000000000000000000000000000..1dfaded161009afc5ca95c8740735e47f526e637 --- /dev/null +++ b/langchain_md_files/integrations/providers/searchapi.mdx @@ -0,0 +1,80 @@ +# SearchApi + +This page covers how to use the [SearchApi](https://www.searchapi.io/) Google Search API within LangChain. SearchApi is a real-time SERP API for easy SERP scraping. + +## Setup + +- Go to [https://www.searchapi.io/](https://www.searchapi.io/) to sign up for a free account +- Get the api key and set it as an environment variable (`SEARCHAPI_API_KEY`) + +## Wrappers + +### Utility + +There is a SearchApiAPIWrapper utility which wraps this API. To import this utility: + +```python +from langchain_community.utilities import SearchApiAPIWrapper +``` + +You can use it as part of a Self Ask chain: + +```python +from langchain_community.utilities import SearchApiAPIWrapper +from langchain_openai import OpenAI +from langchain.agents import initialize_agent, Tool +from langchain.agents import AgentType + +import os + +os.environ["SEARCHAPI_API_KEY"] = "" +os.environ['OPENAI_API_KEY'] = "" + +llm = OpenAI(temperature=0) +search = SearchApiAPIWrapper() +tools = [ + Tool( + name="Intermediate Answer", + func=search.run, + description="useful for when you need to ask with search" + ) +] + +self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True) +self_ask_with_search.run("Who lived longer: Plato, Socrates, or Aristotle?") +``` + +#### Output + +``` +> Entering new AgentExecutor chain... + Yes. +Follow up: How old was Plato when he died? +Intermediate answer: eighty +Follow up: How old was Socrates when he died? +Intermediate answer: | Socrates | +| -------- | +| Born | c. 470 BC Deme Alopece, Athens | +| Died | 399 BC (aged approximately 71) Athens | +| Cause of death | Execution by forced suicide by poisoning | +| Spouse(s) | Xanthippe, Myrto | + +Follow up: How old was Aristotle when he died? +Intermediate answer: 62 years +So the final answer is: Plato + +> Finished chain. +'Plato' +``` + +### Tool + +You can also easily load this wrapper as a Tool (to use with an Agent). +You can do this with: + +```python +from langchain.agents import load_tools +tools = load_tools(["searchapi"]) +``` + +For more information on tools, see [this page](/docs/how_to/tools_builtin). diff --git a/langchain_md_files/integrations/providers/searx.mdx b/langchain_md_files/integrations/providers/searx.mdx new file mode 100644 index 0000000000000000000000000000000000000000..687900e47d80aebd32d110360d25bff6d30013e3 --- /dev/null +++ b/langchain_md_files/integrations/providers/searx.mdx @@ -0,0 +1,90 @@ +# SearxNG Search API + +This page covers how to use the SearxNG search API within LangChain. 
+It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper. + +## Installation and Setup + +While it is possible to utilize the wrapper in conjunction with [public searx +instances](https://searx.space/) these instances frequently do not permit API +access (see note on output format below) and have limitations on the frequency +of requests. It is recommended to opt for a self-hosted instance instead. + +### Self Hosted Instance: + +See [this page](https://searxng.github.io/searxng/admin/installation.html) for installation instructions. + +When you install SearxNG, the only active output format by default is the HTML format. +You need to activate the `json` format to use the API. This can be done by adding the following line to the `settings.yml` file: +```yaml +search: + formats: + - html + - json +``` +You can make sure that the API is working by issuing a curl request to the API endpoint: + +`curl -kLX GET --data-urlencode q='langchain' -d format=json http://localhost:8888` + +This should return a JSON object with the results. + + +## Wrappers + +### Utility + +To use the wrapper we need to pass the host of the SearxNG instance to the wrapper with: + 1. the named parameter `searx_host` when creating the instance. + 2. exporting the environment variable `SEARXNG_HOST`. + +You can use the wrapper to get results from a SearxNG instance. + +```python +from langchain_community.utilities import SearxSearchWrapper +s = SearxSearchWrapper(searx_host="http://localhost:8888") +s.run("what is a large language model?") +``` + +### Tool + +You can also load this wrapper as a Tool (to use with an Agent). + +You can do this with: + +```python +from langchain.agents import load_tools +tools = load_tools(["searx-search"], + searx_host="http://localhost:8888", + engines=["github"]) +``` + +Note that we could _optionally_ pass custom engines to use. + +If you want to obtain results with metadata as *json* you can use: +```python +tools = load_tools(["searx-search-results-json"], + searx_host="http://localhost:8888", + num_results=5) +``` + +#### Quickly creating tools + +This examples showcases a quick way to create multiple tools from the same +wrapper. + +```python +from langchain_community.tools.searx_search.tool import SearxSearchResults + +wrapper = SearxSearchWrapper(searx_host="**") +github_tool = SearxSearchResults(name="Github", wrapper=wrapper, + kwargs = { + "engines": ["github"], + }) + +arxiv_tool = SearxSearchResults(name="Arxiv", wrapper=wrapper, + kwargs = { + "engines": ["arxiv"] + }) +``` + +For more information on tools, see [this page](/docs/how_to/tools_builtin). diff --git a/langchain_md_files/integrations/providers/semadb.mdx b/langchain_md_files/integrations/providers/semadb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..905ef96613244363d72334f1058c9049c10b8216 --- /dev/null +++ b/langchain_md_files/integrations/providers/semadb.mdx @@ -0,0 +1,19 @@ +# SemaDB + +>[SemaDB](https://semafind.com/) is a no fuss vector similarity search engine. It provides a low-cost cloud hosted version to help you build AI applications with ease. + +With SemaDB Cloud, our hosted version, no fuss means no pod size calculations, no schema definitions, no partition settings, no parameter tuning, no search algorithm tuning, no complex installation, no complex API. It is integrated with [RapidAPI](https://rapidapi.com/semafind-semadb/api/semadb) providing transparent billing, automatic sharding and an interactive API playground. 
+ +## Installation + +None required, get started directly with SemaDB Cloud at [RapidAPI](https://rapidapi.com/semafind-semadb/api/semadb). + +## Vector Store + +There is a basic wrapper around `SemaDB` collections allowing you to use it as a vectorstore. + +```python +from langchain_community.vectorstores import SemaDB +``` + +You can follow a tutorial on how to use the wrapper in [this notebook](/docs/integrations/vectorstores/semadb). \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/serpapi.mdx b/langchain_md_files/integrations/providers/serpapi.mdx new file mode 100644 index 0000000000000000000000000000000000000000..31de2bb5ec5c2f955254e9b75c1ded5db2498bff --- /dev/null +++ b/langchain_md_files/integrations/providers/serpapi.mdx @@ -0,0 +1,31 @@ +# SerpAPI + +This page covers how to use the SerpAPI search APIs within LangChain. +It is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper. + +## Installation and Setup +- Install requirements with `pip install google-search-results` +- Get a SerpAPI api key and either set it as an environment variable (`SERPAPI_API_KEY`) + +## Wrappers + +### Utility + +There exists a SerpAPI utility which wraps this API. To import this utility: + +```python +from langchain_community.utilities import SerpAPIWrapper +``` + +For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi). + +### Tool + +You can also easily load this wrapper as a Tool (to use with an Agent). +You can do this with: +```python +from langchain.agents import load_tools +tools = load_tools(["serpapi"]) +``` + +For more information on this, see [this page](/docs/how_to/tools_builtin) diff --git a/langchain_md_files/integrations/providers/singlestoredb.mdx b/langchain_md_files/integrations/providers/singlestoredb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..3c77a7dfca173a55da6930da8f481d8fe410c2cd --- /dev/null +++ b/langchain_md_files/integrations/providers/singlestoredb.mdx @@ -0,0 +1,28 @@ +# SingleStoreDB + +>[SingleStoreDB](https://singlestore.com/) is a high-performance distributed SQL database that supports deployment both in the [cloud](https://www.singlestore.com/cloud/) and on-premises. It provides vector storage, and vector functions including [dot_product](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/dot_product.html) and [euclidean_distance](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/euclidean_distance.html), thereby supporting AI applications that require text similarity matching. + +## Installation and Setup + +There are several ways to establish a [connection](https://singlestoredb-python.labs.singlestore.com/generated/singlestoredb.connect.html) to the database. You can either set up environment variables or pass named parameters to the `SingleStoreDB constructor`. +Alternatively, you may provide these parameters to the `from_documents` and `from_texts` methods. + +```bash +pip install singlestoredb +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/singlestoredb). + +```python +from langchain_community.vectorstores import SingleStoreDB +``` + +## Memory + +See a [usage example](/docs/integrations/memory/singlestoredb_chat_message_history). 
+ +```python +from langchain.memory import SingleStoreDBChatMessageHistory +``` diff --git a/langchain_md_files/integrations/providers/sklearn.mdx b/langchain_md_files/integrations/providers/sklearn.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a2d9e0554d706aee1fc7e4c8e03890f3408f6ff3 --- /dev/null +++ b/langchain_md_files/integrations/providers/sklearn.mdx @@ -0,0 +1,35 @@ +# scikit-learn + +>[scikit-learn](https://scikit-learn.org/stable/) is an open-source collection of machine learning algorithms, +> including some implementations of the [k nearest neighbors](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html). `SKLearnVectorStore` wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. + +## Installation and Setup + +- Install the Python package with `pip install scikit-learn` + + +## Vector Store + +`SKLearnVectorStore` provides a simple wrapper around the nearest neighbor implementation in the +scikit-learn package, allowing you to use it as a vectorstore. + +To import this vectorstore: + +```python +from langchain_community.vectorstores import SKLearnVectorStore +``` + +For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn). + + +## Retriever + +`Support vector machines (SVMs)` are the supervised learning +methods used for classification, regression and outliers detection. + +See a [usage example](/docs/integrations/retrievers/svm). + +```python +from langchain_community.retrievers import SVMRetriever +``` + diff --git a/langchain_md_files/integrations/providers/slack.mdx b/langchain_md_files/integrations/providers/slack.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9013e5b0cc2899cbb2650e23c6303f8faec3c7c6 --- /dev/null +++ b/langchain_md_files/integrations/providers/slack.mdx @@ -0,0 +1,32 @@ +# Slack + +>[Slack](https://slack.com/) is an instant messaging program. + +## Installation and Setup + +There isn't any special setup for it. + + +## Document loader + +See a [usage example](/docs/integrations/document_loaders/slack). + +```python +from langchain_community.document_loaders import SlackDirectoryLoader +``` + +## Toolkit + +See a [usage example](/docs/integrations/tools/slack). + +```python +from langchain_community.agent_toolkits import SlackToolkit +``` + +## Chat loader + +See a [usage example](/docs/integrations/chat_loaders/slack). + +```python +from langchain_community.chat_loaders.slack import SlackChatLoader +``` diff --git a/langchain_md_files/integrations/providers/snowflake.mdx b/langchain_md_files/integrations/providers/snowflake.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c42c71975880373acc63abb4aef7ec0bd5251a73 --- /dev/null +++ b/langchain_md_files/integrations/providers/snowflake.mdx @@ -0,0 +1,32 @@ +# Snowflake + +> [Snowflake](https://www.snowflake.com/) is a cloud-based data-warehousing platform +> that allows you to store and query large amounts of data. + +This page covers how to use the `Snowflake` ecosystem within `LangChain`. + +## Embedding models + +Snowflake offers their open-weight `arctic` line of embedding models for free +on [Hugging Face](https://huggingface.co./Snowflake/snowflake-arctic-embed-m-v1.5). The most recent model, snowflake-arctic-embed-m-v1.5 feature [matryoshka embedding](https://arxiv.org/abs/2205.13147) which allows for effective vector truncation. 
+You can use these models via the +[HuggingFaceEmbeddings](/docs/integrations/text_embedding/huggingfacehub) connector: + +```shell +pip install langchain-community sentence-transformers +``` + +```python +from langchain_huggingface import HuggingFaceEmbeddings + +model = HuggingFaceEmbeddings(model_name="snowflake/arctic-embed-m-v1.5") +``` + +## Document loader + +You can use the [`SnowflakeLoader`](/docs/integrations/document_loaders/snowflake) +to load data from Snowflake: + +```python +from langchain_community.document_loaders import SnowflakeLoader +``` diff --git a/langchain_md_files/integrations/providers/spacy.mdx b/langchain_md_files/integrations/providers/spacy.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d893f12a3dbd65c934977b006da6dc4161406f73 --- /dev/null +++ b/langchain_md_files/integrations/providers/spacy.mdx @@ -0,0 +1,28 @@ +# spaCy + +>[spaCy](https://spacy.io/) is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. + +## Installation and Setup + + +```bash +pip install spacy +``` + + + +## Text Splitter + +See a [usage example](/docs/how_to/split_by_token/#spacy). + +```python +from langchain_text_splitters import SpacyTextSplitter +``` + +## Text Embedding Models + +See a [usage example](/docs/integrations/text_embedding/spacy_embedding) + +```python +from langchain_community.embeddings.spacy_embeddings import SpacyEmbeddings +``` diff --git a/langchain_md_files/integrations/providers/sparkllm.mdx b/langchain_md_files/integrations/providers/sparkllm.mdx new file mode 100644 index 0000000000000000000000000000000000000000..e9d7f94b186bb51a11b06f9fa589169bb27a522f --- /dev/null +++ b/langchain_md_files/integrations/providers/sparkllm.mdx @@ -0,0 +1,14 @@ +# SparkLLM + +>[SparkLLM](https://xinghuo.xfyun.cn/spark) is a large-scale cognitive model independently developed by iFLYTEK. +It has cross-domain knowledge and language understanding ability by learning a large amount of texts, codes and images. +It can understand and perform tasks based on natural dialogue. + +## SparkLLM LLM Model +An example is available at [example](/docs/integrations/llms/sparkllm). + +## SparkLLM Chat Model +An example is available at [example](/docs/integrations/chat/sparkllm). + +## SparkLLM Text Embedding Model +An example is available at [example](/docs/integrations/text_embedding/sparkllm) diff --git a/langchain_md_files/integrations/providers/spreedly.mdx b/langchain_md_files/integrations/providers/spreedly.mdx new file mode 100644 index 0000000000000000000000000000000000000000..16930aa06e91078579abb2ecb3356157e1381a9c --- /dev/null +++ b/langchain_md_files/integrations/providers/spreedly.mdx @@ -0,0 +1,15 @@ +# Spreedly + +>[Spreedly](https://docs.spreedly.com/) is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at `Spreedly`, allowing you to independently store a card and then pass that card to different end points based on your business requirements. + +## Installation and Setup + +See [setup instructions](/docs/integrations/document_loaders/spreedly). + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/spreedly). 
+ +```python +from langchain_community.document_loaders import SpreedlyLoader +``` diff --git a/langchain_md_files/integrations/providers/sqlite.mdx b/langchain_md_files/integrations/providers/sqlite.mdx new file mode 100644 index 0000000000000000000000000000000000000000..e45a47f11372cd6ae3ac1b4f41ad0f3180f6fd23 --- /dev/null +++ b/langchain_md_files/integrations/providers/sqlite.mdx @@ -0,0 +1,31 @@ +# SQLite + +>[SQLite](https://en.wikipedia.org/wiki/SQLite) is a database engine written in the +> C programming language. It is not a standalone app; rather, it is a library that +> software developers embed in their apps. As such, it belongs to the family of +> embedded databases. It is the most widely deployed database engine, as it is +> used by several of the top web browsers, operating systems, mobile phones, and other embedded systems. + +## Installation and Setup + +We need to install the `SQLAlchemy` python package. + +```bash +pip install SQLAlchemy +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/sqlitevss). + +```python +from langchain_community.vectorstores import SQLiteVSS +``` + +## Memory + +See a [usage example](/docs/integrations/memory/sqlite). + +```python +from langchain_community.chat_message_histories import SQLChatMessageHistory +``` diff --git a/langchain_md_files/integrations/providers/stackexchange.mdx b/langchain_md_files/integrations/providers/stackexchange.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b3b00932f943d21916377bb6912e393fe01664cc --- /dev/null +++ b/langchain_md_files/integrations/providers/stackexchange.mdx @@ -0,0 +1,36 @@ +# Stack Exchange + +>[Stack Exchange](https://en.wikipedia.org/wiki/Stack_Exchange) is a network of +question-and-answer (Q&A) websites on topics in diverse fields, each site covering +a specific topic, where questions, answers, and users are subject to a reputation award process. + +This page covers how to use the `Stack Exchange API` within LangChain. + +## Installation and Setup +- Install requirements with +```bash +pip install stackapi +``` + +## Wrappers + +### Utility + +There exists a StackExchangeAPIWrapper utility which wraps this API. To import this utility: + +```python +from langchain_community.utilities import StackExchangeAPIWrapper +``` + +For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/stackexchange). + +### Tool + +You can also easily load this wrapper as a Tool (to use with an Agent). +You can do this with: +```python +from langchain.agents import load_tools +tools = load_tools(["stackexchange"]) +``` + +For more information on tools, see [this page](/docs/how_to/tools_builtin). diff --git a/langchain_md_files/integrations/providers/starrocks.mdx b/langchain_md_files/integrations/providers/starrocks.mdx new file mode 100644 index 0000000000000000000000000000000000000000..bc5c9983c9e4a9e8bf90c2f7b683a7417b33c98f --- /dev/null +++ b/langchain_md_files/integrations/providers/starrocks.mdx @@ -0,0 +1,21 @@ +# StarRocks + +>[StarRocks](https://www.starrocks.io/) is a High-Performance Analytical Database. +`StarRocks` is a next-gen sub-second MPP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics and ad-hoc query. + +>Usually `StarRocks` is categorized into OLAP, and it has showed excellent performance in [ClickBench — a Benchmark For Analytical DBMS](https://benchmark.clickhouse.com/). 
Since it has a super-fast vectorized execution engine, it could also be used as a fast vectordb. + +## Installation and Setup + + +```bash +pip install pymysql +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/starrocks). + +```python +from langchain_community.vectorstores import StarRocks +``` diff --git a/langchain_md_files/integrations/providers/stochasticai.mdx b/langchain_md_files/integrations/providers/stochasticai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..bd0b5484bb221b6d44fae70c28217fb4cc0df871 --- /dev/null +++ b/langchain_md_files/integrations/providers/stochasticai.mdx @@ -0,0 +1,17 @@ +# StochasticAI + +This page covers how to use the StochasticAI ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers. + +## Installation and Setup +- Install with `pip install stochasticx` +- Get an StochasticAI api key and set it as an environment variable (`STOCHASTICAI_API_KEY`) + +## Wrappers + +### LLM + +There exists an StochasticAI LLM wrapper, which you can access with +```python +from langchain_community.llms import StochasticAI +``` \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/streamlit.mdx b/langchain_md_files/integrations/providers/streamlit.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d90f8f52b310d769f5210c91bfb71f8ba45af0a5 --- /dev/null +++ b/langchain_md_files/integrations/providers/streamlit.mdx @@ -0,0 +1,30 @@ +# Streamlit + +>[Streamlit](https://streamlit.io/) is a faster way to build and share data apps. +>`Streamlit` turns data scripts into shareable web apps in minutes. All in pure Python. No front‑end experience required. +>See more examples at [streamlit.io/generative-ai](https://streamlit.io/generative-ai). + +## Installation and Setup + +We need to install the `streamlit` Python package: + +```bash +pip install streamlit +``` + + +## Memory + +See a [usage example](/docs/integrations/memory/streamlit_chat_message_history). + +```python +from langchain_community.chat_message_histories import StreamlitChatMessageHistory +``` + +## Callbacks + +See a [usage example](/docs/integrations/callbacks/streamlit). + +```python +from langchain_community.callbacks import StreamlitCallbackHandler +``` diff --git a/langchain_md_files/integrations/providers/stripe.mdx b/langchain_md_files/integrations/providers/stripe.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a7e80d97a7fe0eb7043cbbe8bc15408451a5d53b --- /dev/null +++ b/langchain_md_files/integrations/providers/stripe.mdx @@ -0,0 +1,16 @@ +# Stripe + +>[Stripe](https://stripe.com/en-ca) is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. + + +## Installation and Setup + +See [setup instructions](/docs/integrations/document_loaders/stripe). + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/stripe). 
+
+```python
+from langchain_community.document_loaders import StripeLoader
+```
diff --git a/langchain_md_files/integrations/providers/supabase.mdx b/langchain_md_files/integrations/providers/supabase.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..7a574800d063a36e429ce1eb9607e2e452ac5e54
--- /dev/null
+++ b/langchain_md_files/integrations/providers/supabase.mdx
@@ -0,0 +1,26 @@
+# Supabase (Postgres)
+
+>[Supabase](https://supabase.com/docs) is an open-source `Firebase` alternative.
+> `Supabase` is built on top of `PostgreSQL`, which offers strong `SQL`
+> querying capabilities and enables a simple interface with already-existing tools and frameworks.
+
+>[PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) also known as `Postgres`,
+> is a free and open-source relational database management system (RDBMS)
+> emphasizing extensibility and `SQL` compliance.
+
+## Installation and Setup
+
+We need to install the `supabase` python package.
+
+```bash
+pip install supabase
+```
+
+## Vector Store
+
+See a [usage example](/docs/integrations/vectorstores/supabase).
+
+```python
+from langchain_community.vectorstores import SupabaseVectorStore
+```
+
diff --git a/langchain_md_files/integrations/providers/symblai_nebula.mdx b/langchain_md_files/integrations/providers/symblai_nebula.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a302bd81b55a1b65942efc0a626c34f09017d29f
--- /dev/null
+++ b/langchain_md_files/integrations/providers/symblai_nebula.mdx
@@ -0,0 +1,17 @@
+# Nebula
+
+This page covers how to use the [Nebula](https://symbl.ai/nebula) ecosystem, [Symbl.ai](https://symbl.ai/)'s LLM, within LangChain.
+It is broken into two parts: installation and setup, and then references to specific Nebula wrappers.
+
+## Installation and Setup
+
+- Get a [Nebula API Key](https://info.symbl.ai/Nebula_Private_Beta.html) and set it as the environment variable `NEBULA_API_KEY`
+- Please see the [Nebula documentation](https://docs.symbl.ai/docs/nebula-llm) for more details.
+
+### LLM
+
+There exists a Nebula LLM wrapper, which you can access with
+```python
+from langchain_community.llms import Nebula
+llm = Nebula()
+```
diff --git a/langchain_md_files/integrations/providers/tair.mdx b/langchain_md_files/integrations/providers/tair.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d84d7378033851c45c16763291f75841c4524726
--- /dev/null
+++ b/langchain_md_files/integrations/providers/tair.mdx
@@ -0,0 +1,23 @@
+# Tair
+
+>[Alibaba Cloud Tair](https://www.alibabacloud.com/help/en/tair/latest/what-is-tair) is a cloud native in-memory database service
+> developed by `Alibaba Cloud`. It provides rich data models and enterprise-grade capabilities to
+> support your real-time online scenarios while maintaining full compatibility with open-source `Redis`.
+> `Tair` also introduces persistent memory-optimized instances that are based on the
+> new non-volatile memory (NVM) storage medium.
+
+## Installation and Setup
+
+Install the Tair Python SDK:
+
+```bash
+pip install tair
+```
+
+## Vector Store
+
+```python
+from langchain_community.vectorstores import Tair
+```
+
+See a [usage example](/docs/integrations/vectorstores/tair).
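+
+As a rough illustration, a minimal sketch of indexing a few texts and querying them is shown below. The endpoint, the `tair_url` keyword, and the `FakeEmbeddings` stand-in are assumptions for this sketch; substitute your own Tair instance and a real embedding model, and see the linked notebook for the supported options.
+
+```python
+from langchain_community.embeddings import FakeEmbeddings  # stand-in; use a real embedding model
+from langchain_community.vectorstores import Tair
+
+# Assumed local, Redis-compatible Tair endpoint; adjust to your instance.
+tair_url = "redis://localhost:6379"
+
+# Index a few texts and run a similarity search against them.
+vector_store = Tair.from_texts(
+    ["Tair is compatible with open-source Redis", "LangChain integrates many vector stores"],
+    FakeEmbeddings(size=128),
+    tair_url=tair_url,
+)
+docs = vector_store.similarity_search("Which service is Redis compatible?", k=1)
+print(docs[0].page_content)
+```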
diff --git a/langchain_md_files/integrations/providers/telegram.mdx b/langchain_md_files/integrations/providers/telegram.mdx new file mode 100644 index 0000000000000000000000000000000000000000..124cbd509a7c292938994282bc163563417949c8 --- /dev/null +++ b/langchain_md_files/integrations/providers/telegram.mdx @@ -0,0 +1,25 @@ +# Telegram + +>[Telegram Messenger](https://web.telegram.org/a/) is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features. + + +## Installation and Setup + +See [setup instructions](/docs/integrations/document_loaders/telegram). + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/telegram). + +```python +from langchain_community.document_loaders import TelegramChatFileLoader +from langchain_community.document_loaders import TelegramChatApiLoader +``` + +## Chat loader + +See a [usage example](/docs/integrations/chat_loaders/telegram). + +```python +from langchain_community.chat_loaders.telegram import TelegramChatLoader +``` diff --git a/langchain_md_files/integrations/providers/tencent.mdx b/langchain_md_files/integrations/providers/tencent.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9efe6deed7dbbb32c63e49a0e9ae25849d1b502d --- /dev/null +++ b/langchain_md_files/integrations/providers/tencent.mdx @@ -0,0 +1,95 @@ +# Tencent + +>[Tencent Holdings Ltd. (Wikipedia)](https://en.wikipedia.org/wiki/Tencent) (Chinese: 腾讯; pinyin: Téngxùn) +> is a Chinese multinational technology conglomerate and holding company headquartered +> in Shenzhen. `Tencent` is one of the highest grossing multimedia companies in the +> world based on revenue. It is also the world's largest company in the video game industry +> based on its equity investments. + + +## Chat model + +>[Tencent's hybrid model API](https://cloud.tencent.com/document/product/1729) (`Hunyuan API`) +> implements dialogue communication, content generation, +> analysis and understanding, and can be widely used in various scenarios such as intelligent +> customer service, intelligent marketing, role playing, advertising, copyrighting, product description, +> script creation, resume generation, article writing, code generation, data analysis, and content +> analysis. + + +For more information, see [this notebook](/docs/integrations/chat/tencent_hunyuan) + +```python +from langchain_community.chat_models import ChatHunyuan +``` + + +## Document Loaders + +### Tencent COS + +>[Tencent Cloud Object Storage (COS)](https://www.tencentcloud.com/products/cos) is a distributed +> storage service that enables you to store any amount of data from anywhere via HTTP/HTTPS protocols. +> `COS` has no restrictions on data structure or format. It also has no bucket size limit and +> partition management, making it suitable for virtually any use case, such as data delivery, +> data processing, and data lakes. COS provides a web-based console, multi-language SDKs and APIs, +> command line tool, and graphical tools. It works well with Amazon S3 APIs, allowing you to quickly +> access community tools and plugins. 
+ +Install the Python SDK: + +```bash +pip install cos-python-sdk-v5 +``` + +#### Tencent COS Directory + +For more information, see [this notebook](/docs/integrations/document_loaders/tencent_cos_directory) + +```python +from langchain_community.document_loaders import TencentCOSDirectoryLoader +from qcloud_cos import CosConfig +``` + +#### Tencent COS File + +For more information, see [this notebook](/docs/integrations/document_loaders/tencent_cos_file) + +```python +from langchain_community.document_loaders import TencentCOSFileLoader +from qcloud_cos import CosConfig +``` + +## Vector Store + +### Tencent VectorDB + +>[Tencent Cloud VectorDB](https://www.tencentcloud.com/products/vdb) is a fully managed, +> self-developed enterprise-level distributed database service +>dedicated to storing, retrieving, and analyzing multidimensional vector data. The database supports a variety of index +>types and similarity calculation methods, and a single index supports 1 billion vectors, millions of QPS, and +>millisecond query latency. `Tencent Cloud Vector Database` can not only provide an external knowledge base for large +>models and improve the accuracy of large models' answers, but also be widely used in AI fields such as +>recommendation systems, NLP services, computer vision, and intelligent customer service. + +Install the Python SDK: + +```bash +pip install tcvectordb +``` + +For more information, see [this notebook](/docs/integrations/vectorstores/tencentvectordb) + +```python +from langchain_community.vectorstores import TencentVectorDB +``` + +## Chat loader + +### WeChat + +>[WeChat](https://www.wechat.com/) or `Weixin` in Chinese is a Chinese +> instant messaging, social media, and mobile payment app developed by `Tencent`. + +See a [usage example](/docs/integrations/chat_loaders/wechat). + diff --git a/langchain_md_files/integrations/providers/tensorflow_datasets.mdx b/langchain_md_files/integrations/providers/tensorflow_datasets.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b3cc150977b41436c29aa521f9c247abaa7ffe8a --- /dev/null +++ b/langchain_md_files/integrations/providers/tensorflow_datasets.mdx @@ -0,0 +1,31 @@ +# TensorFlow Datasets + +>[TensorFlow Datasets](https://www.tensorflow.org/datasets) is a collection of datasets ready to use, +> with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed +> as [tf.data.Datasets](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), +> enabling easy-to-use and high-performance input pipelines. To get started see +> the [guide](https://www.tensorflow.org/datasets/overview) and +> the [list of datasets](https://www.tensorflow.org/datasets/catalog/overview#all_datasets). + + + +## Installation and Setup + +You need to install the `tensorflow` and `tensorflow-datasets` Python packages. + +```bash +pip install tensorflow +``` + +```bash +pip install tensorflow-datasets +``` + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/tensorflow_datasets).
+ +```python +from langchain_community.document_loaders import TensorflowDatasetLoader +``` diff --git a/langchain_md_files/integrations/providers/tidb.mdx b/langchain_md_files/integrations/providers/tidb.mdx new file mode 100644 index 0000000000000000000000000000000000000000..401b4300c48f7ed94ac1b043f71dbc6abee04251 --- /dev/null +++ b/langchain_md_files/integrations/providers/tidb.mdx @@ -0,0 +1,38 @@ +# TiDB + +> [TiDB Cloud](https://www.pingcap.com/tidb-serverless) is a comprehensive Database-as-a-Service (DBaaS) solution +> that provides dedicated and serverless options. `TiDB Serverless` is now integrating +> a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly +> develop AI applications using `TiDB Serverless` without the need for a new database or additional +> technical stacks. Create a free TiDB Serverless cluster and start using the vector search feature at https://pingcap.com/ai. + + +## Installation and Setup + +You have to get the connection details for the TiDB database. +Visit [TiDB Cloud](https://tidbcloud.com/) to get the connection details. + +## Document loader + +```python +from langchain_community.document_loaders import TiDBLoader +``` + +Please refer to the details [here](/docs/integrations/document_loaders/tidb). + +## Vector store + +```python +from langchain_community.vectorstores import TiDBVectorStore +``` +Please refer to the details [here](/docs/integrations/vectorstores/tidb_vector). + + +## Memory + +```python +from langchain_community.chat_message_histories import TiDBChatMessageHistory +``` + +Please refer to the details [here](/docs/integrations/memory/tidb_chat_message_history). diff --git a/langchain_md_files/integrations/providers/tigergraph.mdx b/langchain_md_files/integrations/providers/tigergraph.mdx new file mode 100644 index 0000000000000000000000000000000000000000..95a62635c83a3464fe34bb68f12b8e38ca74daf1 --- /dev/null +++ b/langchain_md_files/integrations/providers/tigergraph.mdx @@ -0,0 +1,25 @@ +# TigerGraph + +>[TigerGraph](https://www.tigergraph.com/tigergraph-db/) is a natively distributed and high-performance graph database. +> The storage of data in a graph format of vertices and edges leads to rich relationships, +> ideal for grounding LLM responses. + +## Installation and Setup + +Follow the instructions on [how to connect to the `TigerGraph` database](https://docs.tigergraph.com/pytigergraph/current/getting-started/connection). + +Install the Python SDK: + +```bash +pip install pyTigerGraph +``` + +## Graph store + +### TigerGraph + +See a [usage example](/docs/integrations/graphs/tigergraph). + +```python +from langchain_community.graphs import TigerGraph +``` diff --git a/langchain_md_files/integrations/providers/tigris.mdx b/langchain_md_files/integrations/providers/tigris.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7852b6453ccb62c1b8819c50679b9e5a47d0907f --- /dev/null +++ b/langchain_md_files/integrations/providers/tigris.mdx @@ -0,0 +1,19 @@ +# Tigris + +> [Tigris](https://tigrisdata.com) is an open-source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications. +> `Tigris` eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.
+ +## Installation and Setup + + +```bash +pip install tigrisdb openapi-schema-pydantic +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/tigris). + +```python +from langchain_community.vectorstores import Tigris +``` diff --git a/langchain_md_files/integrations/providers/tomarkdown.mdx b/langchain_md_files/integrations/providers/tomarkdown.mdx new file mode 100644 index 0000000000000000000000000000000000000000..08787f943967b752df0ca39786548b50d9c6b8d7 --- /dev/null +++ b/langchain_md_files/integrations/providers/tomarkdown.mdx @@ -0,0 +1,16 @@ +# 2Markdown + +>[2markdown](https://2markdown.com/) service transforms website content into structured markdown files. + + +## Installation and Setup + +We need the `API key`. See [instructions how to get it](https://2markdown.com/login). + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/tomarkdown). + +```python +from langchain_community.document_loaders import ToMarkdownLoader +``` diff --git a/langchain_md_files/integrations/providers/trello.mdx b/langchain_md_files/integrations/providers/trello.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0b897ae66021d1a89bd11ff44ffdb813c3a25f24 --- /dev/null +++ b/langchain_md_files/integrations/providers/trello.mdx @@ -0,0 +1,22 @@ +# Trello + +>[Trello](https://www.atlassian.com/software/trello) is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities. +>The TrelloLoader allows us to load cards from a `Trello` board. + + +## Installation and Setup + +```bash +pip install py-trello beautifulsoup4 +``` + +See [setup instructions](/docs/integrations/document_loaders/trello). + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/trello). + +```python +from langchain_community.document_loaders import TrelloLoader +``` diff --git a/langchain_md_files/integrations/providers/trubrics.mdx b/langchain_md_files/integrations/providers/trubrics.mdx new file mode 100644 index 0000000000000000000000000000000000000000..4681b34bff4dd5758d82baac340c404d6cd47f31 --- /dev/null +++ b/langchain_md_files/integrations/providers/trubrics.mdx @@ -0,0 +1,24 @@ +# Trubrics + + +>[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user +prompts & feedback on AI models. +> +>Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`. + +## Installation and Setup + +We need to install the `trubrics` Python package: + +```bash +pip install trubrics +``` + + +## Callbacks + +See a [usage example](/docs/integrations/callbacks/trubrics). + +```python +from langchain.callbacks import TrubricsCallbackHandler +``` diff --git a/langchain_md_files/integrations/providers/trulens.mdx b/langchain_md_files/integrations/providers/trulens.mdx new file mode 100644 index 0000000000000000000000000000000000000000..327a6de372084f5f800c78a82a283365f48dd5c4 --- /dev/null +++ b/langchain_md_files/integrations/providers/trulens.mdx @@ -0,0 +1,82 @@ +# TruLens + +>[TruLens](https://trulens.org) is an [open-source](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications. 
+ +This page covers how to use [TruLens](https://trulens.org) to evaluate and track LLM apps built on LangChain. + + +## Installation and Setup + +Install the `trulens-eval` Python package. + +```bash +pip install trulens-eval +``` + +## Quickstart + +See the integration details in the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/). + +### Tracking + +Once you've created your LLM chain, you can use TruLens for evaluation and tracking. +TruLens has a number of [out-of-the-box Feedback Functions](https://www.trulens.org/trulens_eval/evaluation/feedback_functions/), +and is also an extensible framework for LLM evaluation. + +Create the feedback functions: + +```python +from trulens_eval.feedback import Feedback, Huggingface, OpenAI + +# Initialize HuggingFace-based feedback function collection class: +hugs = Huggingface() +openai = OpenAI() + +# Define a language match feedback function using HuggingFace. +lang_match = Feedback(hugs.language_match).on_input_output() +# By default this will check language match on the main app input and main app +# output. + +# Question/answer relevance between overall question and answer. +qa_relevance = Feedback(openai.relevance).on_input_output() +# By default this will evaluate feedback on main app input and main app output. + +# Toxicity of input +toxicity = Feedback(openai.toxicity).on_input() +``` + +### Chains + +After you've set up Feedback Function(s) for evaluating your LLM, you can wrap your application with +TruChain to get detailed tracing, logging and evaluation of your LLM app. + +Note: the code for the `chain` creation can be found in +the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/). + +```python +from trulens_eval import TruChain + +# wrap your chain with TruChain +truchain = TruChain( + chain, + app_id='Chain1_ChatApplication', + feedbacks=[lang_match, qa_relevance, toxicity] +) +# Note: any `feedbacks` specified here will be evaluated and logged whenever the chain is used. +truchain("que hora es?") +``` + +### Evaluation + +Now you can explore your LLM-based application! + +Doing so will help you understand how your LLM application is performing at a glance. As you iterate on new versions of your LLM application, you can compare their performance across all of the different quality metrics you've set up. You'll also be able to view evaluations at a record level, and explore the chain metadata for each record. + +```python +from trulens_eval import Tru + +tru = Tru() +tru.run_dashboard() # open a Streamlit app to explore +``` + +For more information on TruLens, visit [trulens.org](https://www.trulens.org/) \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/twitter.mdx b/langchain_md_files/integrations/providers/twitter.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d2ba4fecb538815a267dd922275f36a35da6fc1d --- /dev/null +++ b/langchain_md_files/integrations/providers/twitter.mdx @@ -0,0 +1,25 @@ +# Twitter + +>[Twitter](https://twitter.com/) is an online social media and social networking service. + + +## Installation and Setup + +```bash +pip install tweepy +``` + +We must initialize the loader with the `Twitter API` token, and we need to set up the Twitter `username`. + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/twitter).
+ +```python +from langchain_community.document_loaders import TwitterTweetLoader +``` + +## Chat loader + +See a [usage example](/docs/integrations/chat_loaders/twitter). diff --git a/langchain_md_files/integrations/providers/typesense.mdx b/langchain_md_files/integrations/providers/typesense.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5bb2b3ca0e41cdd4bf47763deeaca8ec583ff253 --- /dev/null +++ b/langchain_md_files/integrations/providers/typesense.mdx @@ -0,0 +1,22 @@ +# Typesense + +> [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either +> [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run +> on [Typesense Cloud](https://cloud.typesense.org/). +> `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also +> focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults. + +## Installation and Setup + + +```bash +pip install typesense openapi-schema-pydantic +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/typesense). + +```python +from langchain_community.vectorstores import Typesense +``` diff --git a/langchain_md_files/integrations/providers/unstructured.mdx b/langchain_md_files/integrations/providers/unstructured.mdx new file mode 100644 index 0000000000000000000000000000000000000000..33510cf5e480312689476afa9ff8c31f7e1185a8 --- /dev/null +++ b/langchain_md_files/integrations/providers/unstructured.mdx @@ -0,0 +1,234 @@ +# Unstructured + +>The `unstructured` package from +[Unstructured.IO](https://www.unstructured.io/) extracts clean text from raw source documents like +PDFs and Word documents. +This page covers how to use the [`unstructured`](https://github.com/Unstructured-IO/unstructured) +ecosystem within LangChain. + +## Installation and Setup + +If you are using a loader that runs locally, use the following steps to get `unstructured` and its +dependencies running. + +- For the smallest installation footprint and to take advantage of features not available in the + open-source `unstructured` package, install the Python SDK with `pip install unstructured-client` + along with `pip install langchain-unstructured` to use the `UnstructuredLoader` and partition + remotely against the Unstructured API. This loader lives + in a LangChain partner repo instead of the `langchain-community` repo and you will need an + `api_key`, which you can generate a free key [here](https://unstructured.io/api-key/). + - Unstructured's documentation for the sdk can be found here: + https://docs.unstructured.io/api-reference/api-services/sdk + +- To run everything locally, install the open-source python package with `pip install unstructured` + along with `pip install langchain-community` and use the same `UnstructuredLoader` as mentioned above. + - You can install document specific dependencies with extras, e.g. `pip install "unstructured[docx]"`. + - To install the dependencies for all document types, use `pip install "unstructured[all-docs]"`. +- Install the following system dependencies if they are not already available on your system with e.g. `brew install` for Mac. + Depending on what document types you're parsing, you may not need all of these. 
+ - `libmagic-dev` (filetype detection) + - `poppler-utils` (images and PDFs) + - `tesseract-ocr`(images and PDFs) + - `qpdf` (PDFs) + - `libreoffice` (MS Office docs) + - `pandoc` (EPUBs) +- When running locally, Unstructured also recommends using Docker [by following this + guide](https://docs.unstructured.io/open-source/installation/docker-installation) to ensure all + system dependencies are installed correctly. + +The Unstructured API requires API keys to make requests. +You can request an API key [here](https://unstructured.io/api-key-hosted) and start using it today! +Checkout the README [here](https://github.com/Unstructured-IO/unstructured-api) here to get started making API calls. +We'd love to hear your feedback, let us know how it goes in our [community slack](https://join.slack.com/t/unstructuredw-kbe4326/shared_invite/zt-1x7cgo0pg-PTptXWylzPQF9xZolzCnwQ). +And stay tuned for improvements to both quality and performance! +Check out the instructions +[here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image) if you'd like to self-host the Unstructured API or run it locally. + + +## Data Loaders + +The primary usage of `Unstructured` is in data loaders. + +### UnstructuredLoader + +See a [usage example](/docs/integrations/document_loaders/unstructured_file) to see how you can use +this loader for both partitioning locally and remotely with the serverless Unstructured API. + +```python +from langchain_unstructured import UnstructuredLoader +``` + +### UnstructuredCHMLoader + +`CHM` means `Microsoft Compiled HTML Help`. + +```python +from langchain_community.document_loaders import UnstructuredCHMLoader +``` + +### UnstructuredCSVLoader + +A `comma-separated values` (`CSV`) file is a delimited text file that uses +a comma to separate values. Each line of the file is a data record. +Each record consists of one or more fields, separated by commas. + +See a [usage example](/docs/integrations/document_loaders/csv#unstructuredcsvloader). + +```python +from langchain_community.document_loaders import UnstructuredCSVLoader +``` + +### UnstructuredEmailLoader + +See a [usage example](/docs/integrations/document_loaders/email). + +```python +from langchain_community.document_loaders import UnstructuredEmailLoader +``` + +### UnstructuredEPubLoader + +[EPUB](https://en.wikipedia.org/wiki/EPUB) is an `e-book file format` that uses +the “.epub” file extension. The term is short for electronic publication and +is sometimes styled `ePub`. `EPUB` is supported by many e-readers, and compatible +software is available for most smartphones, tablets, and computers. + +See a [usage example](/docs/integrations/document_loaders/epub). + +```python +from langchain_community.document_loaders import UnstructuredEPubLoader +``` + +### UnstructuredExcelLoader + +See a [usage example](/docs/integrations/document_loaders/microsoft_excel). + +```python +from langchain_community.document_loaders import UnstructuredExcelLoader +``` + +### UnstructuredFileIOLoader + +See a [usage example](/docs/integrations/document_loaders/google_drive#passing-in-optional-file-loaders). + +```python +from langchain_community.document_loaders import UnstructuredFileIOLoader +``` + +### UnstructuredHTMLLoader + +See a [usage example](/docs/how_to/document_loader_html). + +```python +from langchain_community.document_loaders import UnstructuredHTMLLoader +``` + +### UnstructuredImageLoader + +See a [usage example](/docs/integrations/document_loaders/image). 
+ +```python +from langchain_community.document_loaders import UnstructuredImageLoader +``` + +### UnstructuredMarkdownLoader + +See a [usage example](/docs/integrations/vectorstores/starrocks). + +```python +from langchain_community.document_loaders import UnstructuredMarkdownLoader +``` + +### UnstructuredODTLoader + +The `Open Document Format for Office Applications (ODF)`, also known as `OpenDocument`, +is an open file format for word processing documents, spreadsheets, presentations +and graphics and using ZIP-compressed XML files. It was developed with the aim of +providing an open, XML-based file format specification for office applications. + +See a [usage example](/docs/integrations/document_loaders/odt). + +```python +from langchain_community.document_loaders import UnstructuredODTLoader +``` + +### UnstructuredOrgModeLoader + +An [Org Mode](https://en.wikipedia.org/wiki/Org-mode) document is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs. + +See a [usage example](/docs/integrations/document_loaders/org_mode). + +```python +from langchain_community.document_loaders import UnstructuredOrgModeLoader +``` + +### UnstructuredPDFLoader + +See a [usage example](/docs/how_to/document_loader_pdf#using-unstructured). + +```python +from langchain_community.document_loaders import UnstructuredPDFLoader +``` + +### UnstructuredPowerPointLoader + +See a [usage example](/docs/integrations/document_loaders/microsoft_powerpoint). + +```python +from langchain_community.document_loaders import UnstructuredPowerPointLoader +``` + +### UnstructuredRSTLoader + +A `reStructured Text` (`RST`) file is a file format for textual data +used primarily in the Python programming language community for technical documentation. + +See a [usage example](/docs/integrations/document_loaders/rst). + +```python +from langchain_community.document_loaders import UnstructuredRSTLoader +``` + +### UnstructuredRTFLoader + +See a usage example in the API documentation. + +```python +from langchain_community.document_loaders import UnstructuredRTFLoader +``` + +### UnstructuredTSVLoader + +A `tab-separated values` (`TSV`) file is a simple, text-based file format for storing tabular data. +Records are separated by newlines, and values within a record are separated by tab characters. + +See a [usage example](/docs/integrations/document_loaders/tsv). + +```python +from langchain_community.document_loaders import UnstructuredTSVLoader +``` + +### UnstructuredURLLoader + +See a [usage example](/docs/integrations/document_loaders/url). + +```python +from langchain_community.document_loaders import UnstructuredURLLoader +``` + +### UnstructuredWordDocumentLoader + +See a [usage example](/docs/integrations/document_loaders/microsoft_word#using-unstructured). + +```python +from langchain_community.document_loaders import UnstructuredWordDocumentLoader +``` + +### UnstructuredXMLLoader + +See a [usage example](/docs/integrations/document_loaders/xml). 
+ +```python +from langchain_community.document_loaders import UnstructuredXMLLoader +``` + diff --git a/langchain_md_files/integrations/providers/upstash.mdx b/langchain_md_files/integrations/providers/upstash.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d1bfa783c230cb5045c1742642e3cfc57d527f14 --- /dev/null +++ b/langchain_md_files/integrations/providers/upstash.mdx @@ -0,0 +1,221 @@ +Upstash offers developers serverless databases and messaging +platforms to build powerful applications without having to worry +about the operational complexity of running databases at scale. + +One significant advantage of Upstash is that their databases support HTTP and all of their SDKs use HTTP. +This means that you can run this in serverless platforms, edge or any platform that does not support TCP connections. + +Currently, there are two Upstash integrations available for LangChain: +Upstash Vector as a vector embedding database and Upstash Redis as a cache and memory store. + +# Upstash Vector + +Upstash Vector is a serverless vector database that can be used to store and query vectors. + +## Installation + +Create a new serverless vector database at the [Upstash Console](https://console.upstash.com/vector). +Select your preferred distance metric and dimension count according to your model. + + +Install the Upstash Vector Python SDK with `pip install upstash-vector`. +The Upstash Vector integration in langchain is a wrapper for the Upstash Vector Python SDK. That's why the `upstash-vector` package is required. + +## Integrations + +Create a `UpstashVectorStore` object using credentials from the Upstash Console. +You also need to pass in an `Embeddings` object which can turn text into vector embeddings. + +```python +from langchain_community.vectorstores.upstash import UpstashVectorStore +import os + +os.environ["UPSTASH_VECTOR_REST_URL"] = "" +os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "" + +store = UpstashVectorStore( + embedding=embeddings +) +``` + +An alternative way of `UpstashVectorStore` is to pass `embedding=True`. This is a unique +feature of the `UpstashVectorStore` thanks to the ability of the Upstash Vector indexes +to have an associated embedding model. In this configuration, documents we want to insert or +queries we want to search for are simply sent to Upstash Vector as text. In the background, +Upstash Vector embeds these text and executes the request with these embeddings. To use this +feature, [create an Upstash Vector index by selecting a model](https://upstash.com/docs/vector/features/embeddingmodels#using-a-model) +and simply pass `embedding=True`: + +```python +from langchain_community.vectorstores.upstash import UpstashVectorStore +import os + +os.environ["UPSTASH_VECTOR_REST_URL"] = "" +os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "" + +store = UpstashVectorStore( + embedding=True +) +``` + +See [Upstash Vector documentation](https://upstash.com/docs/vector/features/embeddingmodels) +for more detail on embedding models. + +## Namespaces +You can use namespaces to partition your data in the index. Namespaces are useful when you want to query over huge amount of data, and you want to partition the data to make the queries faster. When you use namespaces, there won't be post-filtering on the results which will make the query results more precise. 
+ +```python +from langchain_community.vectorstores.upstash import UpstashVectorStore +import os + +os.environ["UPSTASH_VECTOR_REST_URL"] = "" +os.environ["UPSTASH_VECTOR_REST_TOKEN"] = "" + +store = UpstashVectorStore( + embedding=embeddings, + namespace="my_namespace" +) +``` + +### Inserting Vectors + +```python +from langchain.text_splitter import CharacterTextSplitter +from langchain_community.document_loaders import TextLoader +from langchain_openai import OpenAIEmbeddings + +loader = TextLoader("../../modules/state_of_the_union.txt") +documents = loader.load() +text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) +docs = text_splitter.split_documents(documents) + +# Create a new embeddings object +embeddings = OpenAIEmbeddings() + +# Create a new UpstashVectorStore object +store = UpstashVectorStore( + embedding=embeddings +) + +# Insert the document embeddings into the store +store.add_documents(docs) +``` + +When inserting documents, they are first embedded using the `Embeddings` object. + +Most embedding models can embed multiple documents at once, so the documents are batched and embedded in parallel. +The size of the batch can be controlled using the `embedding_chunk_size` parameter. + +The embedded vectors are then stored in the Upstash Vector database. When they are sent, multiple vectors are batched together to reduce the number of HTTP requests. +The size of the batch can be controlled using the `batch_size` parameter. Upstash Vector has a limit of 1000 vectors per batch in the free tier. + +```python +store.add_documents( + documents, + batch_size=100, + embedding_chunk_size=200 +) +``` + +### Querying Vectors + +Vectors can be queried using a text query or another vector. + +The returned value is a list of Document objects. + +```python +result = store.similarity_search( + "The United States of America", + k=5 +) +``` + +Or using a vector: + +```python +vector = embeddings.embed_query("Hello world") + +result = store.similarity_search_by_vector( + vector, + k=5 +) +``` + +When searching, you can also utilize the `filter` parameter which will allow you to filter by metadata: + +```python +result = store.similarity_search( + "The United States of America", + k=5, + filter="type = 'country'" +) +``` + +See [Upstash Vector documentation](https://upstash.com/docs/vector/features/filtering) +for more details on metadata filtering. + +### Deleting Vectors + +Vectors can be deleted by their IDs. + +```python +store.delete(["id1", "id2"]) +``` + +### Getting information about the store + +You can get information about your database, such as the distance metric and dimension, using the `info` function. + +When an insert happens, the database starts indexing the new vectors. While this is happening, the new vectors cannot be queried. `pendingVectorCount` represents the number of vectors that are currently being indexed. + +```python +info = store.info() +print(info) + +# Output: +# {'vectorCount': 44, 'pendingVectorCount': 0, 'indexSize': 2642412, 'dimension': 1536, 'similarityFunction': 'COSINE'} +``` + +# Upstash Redis + +This page covers how to use [Upstash Redis](https://upstash.com/redis) with LangChain. + +## Installation and Setup +- The Upstash Redis Python SDK can be installed with `pip install upstash-redis` +- A globally distributed, low-latency and highly available database can be created at the [Upstash Console](https://console.upstash.com) + + +## Integrations +All Upstash-LangChain integrations are based on the `upstash-redis` Python SDK, which is used as a wrapper for LangChain.
+This SDK utilizes Upstash Redis DB by giving UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN parameters from the console. + + +### Cache + +[Upstash Redis](https://upstash.com/redis) can be used as a cache for LLM prompts and responses. + +To import this cache: +```python +from langchain.cache import UpstashRedisCache +``` + +To use with your LLMs: +```python +import langchain +from upstash_redis import Redis + +URL = "" +TOKEN = "" + +langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN)) +``` + +### Memory + +See a [usage example](/docs/integrations/memory/upstash_redis_chat_message_history). + +```python +from langchain_community.chat_message_histories import ( + UpstashRedisChatMessageHistory, +) +``` diff --git a/langchain_md_files/integrations/providers/usearch.mdx b/langchain_md_files/integrations/providers/usearch.mdx new file mode 100644 index 0000000000000000000000000000000000000000..cdbc99ecc9094772f693c0ce6a815f2d0eafd0ab --- /dev/null +++ b/langchain_md_files/integrations/providers/usearch.mdx @@ -0,0 +1,25 @@ +# USearch +>[USearch](https://unum-cloud.github.io/usearch/) is a Smaller & Faster Single-File Vector Search Engine. + +>`USearch's` base functionality is identical to `FAISS`, and the interface should look +> familiar if you have ever investigated Approximate Nearest Neighbors search. +> `USearch` and `FAISS` both employ `HNSW` algorithm, but they differ significantly +> in their design principles. `USearch` is compact and broadly compatible with FAISS without +> sacrificing performance, with a primary focus on user-defined metrics and fewer dependencies. +> +## Installation and Setup + +We need to install `usearch` python package. + +```bash +pip install usearch +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/usearch). + +```python +from langchain_community.vectorstores import USearch +``` + diff --git a/langchain_md_files/integrations/providers/vdms.mdx b/langchain_md_files/integrations/providers/vdms.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f2480e3383b423bc643b2ab43572e7a094365fb4 --- /dev/null +++ b/langchain_md_files/integrations/providers/vdms.mdx @@ -0,0 +1,62 @@ +# VDMS + +> [VDMS](https://github.com/IntelLabs/vdms/blob/master/README.md) is a storage solution for efficient access +> of big-”visual”-data that aims to achieve cloud scale by searching for relevant visual data via visual metadata +> stored as a graph and enabling machine friendly enhancements to visual data for faster access. + +## Installation and Setup + +### Install Client + +```bash +pip install vdms +``` + +### Install Database + +There are two ways to get started with VDMS: + +#### Install VDMS on your local machine via docker +```bash + docker run -d -p 55555:55555 intellabs/vdms:latest +``` + +#### Install VDMS directly on your local machine +Please see [installation instructions](https://github.com/IntelLabs/vdms/blob/master/INSTALL.md). + + + +## VectorStore + +The vector store is a simple wrapper around VDMS. It provides a simple interface to store and retrieve data. 
+ +```python +from langchain_community.document_loaders import TextLoader +from langchain.text_splitter import CharacterTextSplitter + +loader = TextLoader("./state_of_the_union.txt") +documents = loader.load() +text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0) +docs = text_splitter.split_documents(documents) + +from langchain_community.vectorstores import VDMS +from langchain_community.vectorstores.vdms import VDMS_Client +from langchain_huggingface import HuggingFaceEmbeddings + +client = VDMS_Client("localhost", 55555) +vectorstore = VDMS.from_documents( + docs, + client=client, + collection_name="langchain-demo", + embedding_function=HuggingFaceEmbeddings(), + engine="FaissFlat", + distance_strategy="L2", +) + +query = "What did the president say about Ketanji Brown Jackson" +results = vectorstore.similarity_search(query) +``` + +For a more detailed walkthrough of the VDMS wrapper, see [this notebook](/docs/integrations/vectorstores/vdms). + + diff --git a/langchain_md_files/integrations/providers/vectara/index.mdx b/langchain_md_files/integrations/providers/vectara/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d7ff70fef3ea221bb8363afa40d8dba9c0b108d7 --- /dev/null +++ b/langchain_md_files/integrations/providers/vectara/index.mdx @@ -0,0 +1,182 @@ +# Vectara + +>[Vectara](https://vectara.com/) provides a Trusted Generative AI platform, allowing organizations to rapidly create a ChatGPT-like experience (an AI assistant) +> which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service). + +**Vectara Overview:** +`Vectara` is RAG-as-a-service, providing all the components of RAG behind an easy-to-use API, including: +1. A way to extract text from files (PDF, PPT, DOCX, etc.) +2. ML-based chunking that provides state-of-the-art performance. +3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model. +4. Its own internal vector database where text chunks and embedding vectors are stored. +5. A query service that automatically encodes the query into an embedding, and retrieves the most relevant text segments +(including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and +[MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/)). +6. An LLM for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations. + +For more information: +- [Documentation](https://docs.vectara.com/docs/) +- [API Playground](https://docs.vectara.com/docs/rest-api/) +- [Quickstart](https://docs.vectara.com/docs/quickstart) + +## Installation and Setup + +To use `Vectara` with LangChain, no special installation steps are required. +To get started, [sign up](https://vectara.com/integrations/langchain) for a free Vectara account (if you don't already have one), +and follow the [quickstart](https://docs.vectara.com/docs/quickstart) guide to create a corpus and an API key. +Once you have these, you can provide them as arguments to the Vectara `vectorstore`, or you can set them as environment variables.
+ +- export `VECTARA_CUSTOMER_ID`="your_customer_id" +- export `VECTARA_CORPUS_ID`="your_corpus_id" +- export `VECTARA_API_KEY`="your-vectara-api-key" + +## Vectara as a Vector Store + +There exists a wrapper around the Vectara platform, allowing you to use it as a `vectorstore` in LangChain: + +To import this vectorstore: +```python +from langchain_community.vectorstores import Vectara +``` + +To create an instance of the Vectara vectorstore: +```python +vectara = Vectara( + vectara_customer_id=customer_id, + vectara_corpus_id=corpus_id, + vectara_api_key=api_key +) +``` +The `customer_id`, `corpus_id` and `api_key` are optional, and if they are not supplied will be read from +the environment variables `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`, respectively. + +### Adding Texts or Files + +After you have the vectorstore, you can `add_texts` or `add_documents` as per the standard `VectorStore` interface, for example: + +```python +vectara.add_texts(["to be or not to be", "that is the question"]) +``` + +Since Vectara supports file-upload in the platform, we also added the ability to upload files (PDF, TXT, HTML, PPT, DOC, etc) directly. +When using this method, each file is uploaded directly to the Vectara backend, processed and chunked optimally there, so you don't have to use the LangChain document loader or chunking mechanism. + +As an example: + +```python +vectara.add_files(["path/to/file1.pdf", "path/to/file2.pdf",...]) +``` + +Of course you do not have to add any data, and instead just connect to an existing Vectara corpus where data may already be indexed. + +### Querying the VectorStore + +To query the Vectara vectorstore, you can use the `similarity_search` method (or `similarity_search_with_score`), which takes a query string and returns a list of results: +```python +results = vectara.similarity_search_with_score("what is LangChain?") +``` +The results are returned as a list of relevant documents, and a relevance score of each document. + +In this case, we used the default retrieval parameters, but you can also specify the following additional arguments in `similarity_search` or `similarity_search_with_score`: +- `k`: number of results to return (defaults to 5) +- `lambda_val`: the [lexical matching](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) factor for hybrid search (defaults to 0.025) +- `filter`: a [filter](https://docs.vectara.com/docs/common-use-cases/filtering-by-metadata/filter-overview) to apply to the results (default None) +- `n_sentence_context`: number of sentences to include before/after the actual matching segment when returning results. This defaults to 2. +- `rerank_config`: can be used to specify reranker for thr results + - `reranker`: mmr, rerank_multilingual_v1 or none. Note that "rerank_multilingual_v1" is a Scale only feature + - `rerank_k`: number of results to use for reranking + - `mmr_diversity_bias`: 0 = no diversity, 1 = full diversity. This is the lambda parameter in the MMR formula and is in the range 0...1 + +To get results without the relevance score, you can simply use the 'similarity_search' method: +```python +results = vectara.similarity_search("what is LangChain?") +``` + +## Vectara for Retrieval Augmented Generation (RAG) + +Vectara provides a full RAG pipeline, including generative summarization. To use it as a complete RAG solution, you can use the `as_rag` method. 
+There are a few additional parameters that can be specified in the `VectaraQueryConfig` object to control retrieval and summarization: +* k: number of results to return +* lambda_val: the lexical matching factor for hybrid search +* summary_config (optional): can be used to request an LLM summary in RAG + - is_enabled: True or False + - max_results: number of results to use for summary generation + - response_lang: language of the response summary, in ISO 639-2 format (e.g. 'en', 'fr', 'de', etc) +* rerank_config (optional): can be used to specify Vectara Reranker of the results + - reranker: mmr, rerank_multilingual_v1 or none + - rerank_k: number of results to use for reranking + - mmr_diversity_bias: 0 = no diversity, 1 = full diversity. + This is the lambda parameter in the MMR formula and is in the range 0...1 + +For example: + +```python +summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng') +rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2) +config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config) +``` +Then you can use the `as_rag` method to create a RAG pipeline: + +```python +query_str = "what did Biden say?" + +rag = vectara.as_rag(config) +rag.invoke(query_str)['answer'] +``` + +The `as_rag` method returns a `VectaraRAG` object, which behaves just like any LangChain Runnable, including the `invoke` or `stream` methods. + +## Vectara Chat + +The RAG functionality can be used to create a chatbot. For example, you can create a simple chatbot that responds to user input: + +```python +summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng') +rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2) +config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config) + +query_str = "what did Biden say?" +bot = vectara.as_chat(config) +bot.invoke(query_str)['answer'] +``` + +The main difference is the following: with `as_chat` Vectara internally tracks the chat history and conditions each response on the full chat history. +There is no need to keep that history locally to LangChain, as Vectara will manage it internally. + +## Vectara as a LangChain retriever only + +If you want to use Vectara as a retriever only, you can use the `as_retriever` method, which returns a `VectaraRetriever` object. +```python +retriever = vectara.as_retriever(config=config) +retriever.invoke(query_str) +``` + +Like with as_rag, you provide a `VectaraQueryConfig` object to control the retrieval parameters. +In most cases you would not enable the summary_config, but it is left as an option for backwards compatibility. +If no summary is requested, the response will be a list of relevant documents, each with a relevance score. +If a summary is requested, the response will be a list of relevant documents as before, plus an additional document that includes the generative summary. + +## Hallucination Detection score + +Vectara created [HHEM](https://huggingface.co./vectara/hallucination_evaluation_model) - an open source model that can be used to evaluate RAG responses for factual consistency. +As part of the Vectara RAG, the "Factual Consistency Score" (or FCS), which is an improved version of the open source HHEM is made available via the API. 
+This is automatically included in the output of the RAG pipeline + +```python +summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang='eng') +rerank_config = RerankConfig(reranker="mmr", rerank_k=50, mmr_diversity_bias=0.2) +config = VectaraQueryConfig(k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config) + +rag = vectara.as_rag(config) +resp = rag.invoke(query_str) +print(resp['answer']) +print(f"Vectara FCS = {resp['fcs']}") +``` + +## Example Notebooks + +For a more detailed examples of using Vectara with LangChain, see the following example notebooks: +* [this notebook](/docs/integrations/vectorstores/vectara) shows how to use Vectara: with full RAG or just as a retriever. +* [this notebook](/docs/integrations/retrievers/self_query/vectara_self_query) shows the self-query capability with Vectara. +* [this notebook](/docs/integrations/providers/vectara/vectara_chat) shows how to build a chatbot with Langchain and Vectara + diff --git a/langchain_md_files/integrations/providers/vespa.mdx b/langchain_md_files/integrations/providers/vespa.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7796fde96d78c23533c3382f4b60ea929dd4e16d --- /dev/null +++ b/langchain_md_files/integrations/providers/vespa.mdx @@ -0,0 +1,21 @@ +# Vespa + +>[Vespa](https://vespa.ai/) is a fully featured search engine and vector database. +> It supports vector search (ANN), lexical search, and search in structured data, all in the same query. + +## Installation and Setup + + +```bash +pip install pyvespa +``` + + + +## Retriever + +See a [usage example](/docs/integrations/retrievers/vespa). + +```python +from langchain.retrievers import VespaRetriever +``` diff --git a/langchain_md_files/integrations/providers/vlite.mdx b/langchain_md_files/integrations/providers/vlite.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6599dec720110beb60c68dcb49821e827bc5d3f2 --- /dev/null +++ b/langchain_md_files/integrations/providers/vlite.mdx @@ -0,0 +1,31 @@ +# vlite + +This page covers how to use [vlite](https://github.com/sdan/vlite) within LangChain. vlite is a simple and fast vector database for storing and retrieving embeddings. + +## Installation and Setup + +To install vlite, run the following command: + +```bash +pip install vlite +``` + +For PDF OCR support, install the `vlite[ocr]` extra: + +```bash +pip install vlite[ocr] +``` + +## VectorStore + +vlite provides a wrapper around its vector database, allowing you to use it as a vectorstore for semantic search and example selection. + +To import the vlite vectorstore: + +```python +from langchain_community.vectorstores import vlite +``` + +### Usage + +For a more detailed walkthrough of the vlite wrapper, see [this notebook](/docs/integrations/vectorstores/vlite). \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/voyageai.mdx b/langchain_md_files/integrations/providers/voyageai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..d40cb69bedf6940bdce1181d35adee0860a4148f --- /dev/null +++ b/langchain_md_files/integrations/providers/voyageai.mdx @@ -0,0 +1,32 @@ +# VoyageAI + +All functionality related to VoyageAI + +>[VoyageAI](https://www.voyageai.com/) Voyage AI builds embedding models, customized for your domain and company, for better retrieval quality. 
+ +## Installation and Setup + +Install the integration package with +```bash +pip install langchain-voyageai +``` + +Get a VoyageAI API key and set it as an environment variable (`VOYAGE_API_KEY`) + + +## Text Embedding Model + +See a [usage example](/docs/integrations/text_embedding/voyageai) + +```python +from langchain_voyageai import VoyageAIEmbeddings +``` + + +## Reranking + +See a [usage example](/docs/integrations/document_transformers/voyageai-reranker) + +```python +from langchain_voyageai import VoyageAIRerank +``` diff --git a/langchain_md_files/integrations/providers/weather.mdx b/langchain_md_files/integrations/providers/weather.mdx new file mode 100644 index 0000000000000000000000000000000000000000..199af6ccb9772058fbc34f749147c37f2b58f62d --- /dev/null +++ b/langchain_md_files/integrations/providers/weather.mdx @@ -0,0 +1,21 @@ +# Weather + +>[OpenWeatherMap](https://openweathermap.org/) is an open-source weather service provider. + + + +## Installation and Setup + +```bash +pip install pyowm +``` + +We must set up the `OpenWeatherMap API token`. + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/weather). + +```python +from langchain_community.document_loaders import WeatherDataLoader +``` diff --git a/langchain_md_files/integrations/providers/weaviate.mdx b/langchain_md_files/integrations/providers/weaviate.mdx new file mode 100644 index 0000000000000000000000000000000000000000..25041cbc2736883f72ba0070c49bb1e7250449c6 --- /dev/null +++ b/langchain_md_files/integrations/providers/weaviate.mdx @@ -0,0 +1,38 @@ +# Weaviate + +>[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store data objects and vector embeddings from +>your favorite ML models, and scale seamlessly into billions of data objects. + + +What is `Weaviate`? +- Weaviate is an open-source ​database of the type ​vector search engine. +- Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space. +- Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities. +- Weaviate has a GraphQL-API to access your data easily. +- We aim to bring your vector search set up to production to query in mere milliseconds (check our [open-source benchmarks](https://weaviate.io/developers/weaviate/current/benchmarks/) to see if Weaviate fits your use case). +- Get to know Weaviate in the [basics getting started guide](https://weaviate.io/developers/weaviate/current/core-knowledge/basics.html) in under five minutes. + +**Weaviate in detail:** + +`Weaviate` is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages. + +## Installation and Setup + +Install the Python SDK: + +```bash +pip install langchain-weaviate +``` + + +## Vector Store + +There exists a wrapper around `Weaviate` indexes, allowing you to use it as a vectorstore, +whether for semantic search or example selection. 
+ +To import this vectorstore: +```python +from langchain_weaviate import WeaviateVectorStore +``` + +For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](/docs/integrations/vectorstores/weaviate) diff --git a/langchain_md_files/integrations/providers/whatsapp.mdx b/langchain_md_files/integrations/providers/whatsapp.mdx new file mode 100644 index 0000000000000000000000000000000000000000..dbe45e1b865bf9049d2ac3d8214f850ced36b1f0 --- /dev/null +++ b/langchain_md_files/integrations/providers/whatsapp.mdx @@ -0,0 +1,18 @@ +# WhatsApp + +>[WhatsApp](https://www.whatsapp.com/) (also called `WhatsApp Messenger`) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content. + + +## Installation and Setup + +There isn't any special setup for it. + + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/whatsapp_chat). + +```python +from langchain_community.document_loaders import WhatsAppChatLoader +``` diff --git a/langchain_md_files/integrations/providers/wikipedia.mdx b/langchain_md_files/integrations/providers/wikipedia.mdx new file mode 100644 index 0000000000000000000000000000000000000000..cf1b08a50a65f052141ebbebd4810572584820fe --- /dev/null +++ b/langchain_md_files/integrations/providers/wikipedia.mdx @@ -0,0 +1,28 @@ +# Wikipedia + +>[Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history. + + +## Installation and Setup + +```bash +pip install wikipedia +``` + + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/wikipedia). + +```python +from langchain_community.document_loaders import WikipediaLoader +``` + +## Retriever + +See a [usage example](/docs/integrations/retrievers/wikipedia). + +```python +from langchain.retrievers import WikipediaRetriever +``` diff --git a/langchain_md_files/integrations/providers/wolfram_alpha.mdx b/langchain_md_files/integrations/providers/wolfram_alpha.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f4c7ae3a2eb693d64e27e63ceb122a512a3c8599 --- /dev/null +++ b/langchain_md_files/integrations/providers/wolfram_alpha.mdx @@ -0,0 +1,39 @@ +# Wolfram Alpha + +>[WolframAlpha](https://en.wikipedia.org/wiki/WolframAlpha) is an answer engine developed by `Wolfram Research`. +> It answers factual queries by computing answers from externally sourced data. + +This page covers how to use the `Wolfram Alpha API` within LangChain. + +## Installation and Setup +- Install requirements with +```bash +pip install wolframalpha +``` +- Go to wolfram alpha and sign up for a developer account [here](https://developer.wolframalpha.com/) +- Create an app and get your `APP ID` +- Set your APP ID as an environment variable `WOLFRAM_ALPHA_APPID` + + +## Wrappers + +### Utility + +There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility: + +```python +from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper +``` + +For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/wolfram_alpha). 
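+ +As a brief sketch (assuming the `WOLFRAM_ALPHA_APPID` environment variable is set as described above), the wrapper can be queried directly: + +```python +from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper + +# Requires the WOLFRAM_ALPHA_APPID environment variable to be set +wolfram = WolframAlphaAPIWrapper() +print(wolfram.run("What is 2x+5 = -3x + 7?")) +# e.g. 'x = 2/5' +```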
+ +### Tool + +You can also easily load this wrapper as a Tool (to use with an Agent). +You can do this with: +```python +from langchain.agents import load_tools +tools = load_tools(["wolfram-alpha"]) +``` + +For more information on tools, see [this page](/docs/how_to/tools_builtin). diff --git a/langchain_md_files/integrations/providers/writer.mdx b/langchain_md_files/integrations/providers/writer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..52ff0723ee67b90bd2a2e7222ad9a810f3a274c5 --- /dev/null +++ b/langchain_md_files/integrations/providers/writer.mdx @@ -0,0 +1,16 @@ +# Writer + +This page covers how to use the Writer ecosystem within LangChain. +It is broken into two parts: installation and setup, and then references to specific Writer wrappers. + +## Installation and Setup +- Get a Writer API key and set it as an environment variable (`WRITER_API_KEY`) + +## Wrappers + +### LLM + +There exists a Writer LLM wrapper, which you can access with: +```python +from langchain_community.llms import Writer +``` \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/xata.mdx b/langchain_md_files/integrations/providers/xata.mdx new file mode 100644 index 0000000000000000000000000000000000000000..986468d63c7321a302eab2cb78e8f6f6b2a8b859 --- /dev/null +++ b/langchain_md_files/integrations/providers/xata.mdx @@ -0,0 +1,36 @@ +# Xata + +> [Xata](https://xata.io) is a serverless data platform, based on `PostgreSQL`. +> It provides a Python SDK for interacting with your database, and a UI +> for managing your data. +> `Xata` has a native vector type, which can be added to any table, and +> supports similarity search. LangChain inserts vectors directly to `Xata`, +> and queries it for the nearest neighbors of a given vector, so that you can +> use all the LangChain Embeddings integrations with `Xata`. + + +## Installation and Setup + + +We need to install the `xata` Python package. + +```bash +pip install xata==1.0.0a7 +``` + +## Vector Store + +See a [usage example](/docs/integrations/vectorstores/xata). + +```python +from langchain_community.vectorstores import XataVectorStore +``` + +## Memory + +See a [usage example](/docs/integrations/memory/xata_chat_message_history). + +```python +from langchain_community.chat_message_histories import XataChatMessageHistory +``` + diff --git a/langchain_md_files/integrations/providers/xinference.mdx b/langchain_md_files/integrations/providers/xinference.mdx new file mode 100644 index 0000000000000000000000000000000000000000..07aefb3b9528591f3a953e435051dcd4f2399a76 --- /dev/null +++ b/langchain_md_files/integrations/providers/xinference.mdx @@ -0,0 +1,102 @@ +# Xorbits Inference (Xinference) + +This page demonstrates how to use [Xinference](https://github.com/xorbitsai/inference) +with LangChain. + +`Xinference` is a powerful and versatile library designed to serve LLMs, +speech recognition models, and multimodal models, even on your laptop. +With Xorbits Inference, you can effortlessly deploy and serve your own or +state-of-the-art built-in models using just a single command. + +## Installation and Setup + +Xinference can be installed via pip from PyPI: + +```bash +pip install "xinference[all]" +``` + +## LLM + +Xinference supports various models compatible with GGML, including chatglm, baichuan, whisper, +vicuna, and orca.
To view the built-in models, run the command: + +```bash +xinference list --all +``` + + +### Wrapper for Xinference + +You can start a local instance of Xinference by running: + +```bash +xinference +``` + +You can also deploy Xinference in a distributed cluster. To do so, first start an Xinference supervisor +on the server where you want to run it: + +```bash +xinference-supervisor -H "${supervisor_host}" +``` + + +Then, start the Xinference workers on each of the other servers where you want to run them: + +```bash +xinference-worker -e "http://${supervisor_host}:9997" +``` + +Once Xinference is running, an endpoint will be accessible for model management via the CLI or the +Xinference client. + +For local deployment, the endpoint will be http://localhost:9997. + + +For cluster deployment, the endpoint will be http://${supervisor_host}:9997. + + +Then, you need to launch a model. You can specify the model name and other attributes, +including model_size_in_billions and quantization, using the command line interface (CLI). +For example: + +```bash +xinference launch -n orca -s 3 -q q4_0 +``` + +A model UID will be returned. + +Example usage: + +```python +from langchain_community.llms import Xinference + +llm = Xinference( + server_url="http://0.0.0.0:9997", + model_uid={model_uid}  # replace {model_uid} with the model UID returned from launching the model +) + +llm( + prompt="Q: where can we visit in the capital of France? A:", + generate_config={"max_tokens": 1024, "stream": True}, +) + +``` + +### Usage + +For more information and detailed examples, refer to the +[example for xinference LLMs](/docs/integrations/llms/xinference). + +### Embeddings + +Xinference also supports embedding queries and documents. See the +[example for xinference embeddings](/docs/integrations/text_embedding/xinference) +for a more detailed demo. \ No newline at end of file diff --git a/langchain_md_files/integrations/providers/yandex.mdx b/langchain_md_files/integrations/providers/yandex.mdx new file mode 100644 index 0000000000000000000000000000000000000000..06d381a5e78f5eef5b43a9ca49a894a659f2e14a --- /dev/null +++ b/langchain_md_files/integrations/providers/yandex.mdx @@ -0,0 +1,33 @@ +# Yandex + +All functionality related to Yandex Cloud. + +>[Yandex Cloud](https://cloud.yandex.com/en/) is a public cloud platform. + +## Installation and Setup + +The Yandex Cloud SDK can be installed via pip from PyPI: + +```bash +pip install yandexcloud +``` + +## LLMs + +### YandexGPT + +See a [usage example](/docs/integrations/llms/yandex). + +```python +from langchain_community.llms import YandexGPT +``` + +## Chat models + +### YandexGPT + +See a [usage example](/docs/integrations/chat/yandex). + +```python +from langchain_community.chat_models import ChatYandexGPT +``` diff --git a/langchain_md_files/integrations/providers/yeagerai.mdx b/langchain_md_files/integrations/providers/yeagerai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6483cce900151cd054c250aaafd5fdc9886032cf --- /dev/null +++ b/langchain_md_files/integrations/providers/yeagerai.mdx @@ -0,0 +1,43 @@ +# Yeager.ai + +This page covers how to use [Yeager.ai](https://yeager.ai) to generate LangChain tools and agents. + +## What is Yeager.ai? +Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
+ +It features yAgents, a no-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications. + +## yAgents +yAgents is a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease. + +### How to use? +``` +pip install yeagerai-agent +yeagerai-agent +``` +Go to http://127.0.0.1:7860 + +This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab "Settings". + +`OPENAI_API_KEY=` + +We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently. + +### Creating and Executing Tools with yAgents +yAgents makes it easy to create and execute AI-powered tools. Here's a brief overview of the process: +1. Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example: +`create a tool that returns the n-th prime number` + +2. Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example: +`load the tool that you just created into your toolkit` + +3. Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example: +`generate the 50th prime number` + +You can see a video of how it works [here](https://www.youtube.com/watch?v=KA5hCM3RaWE). + +As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity. + +For more information, see [yAgents' GitHub](https://github.com/yeagerai/yeagerai-agent) or our [docs](https://yeagerai.gitbook.io/docs/general/welcome-to-yeager.ai). + + diff --git a/langchain_md_files/integrations/providers/yi.mdx b/langchain_md_files/integrations/providers/yi.mdx new file mode 100644 index 0000000000000000000000000000000000000000..e26590ac82974d2559d74f4432a5a31222c1857a --- /dev/null +++ b/langchain_md_files/integrations/providers/yi.mdx @@ -0,0 +1,23 @@ +# 01.AI + +>[01.AI](https://www.lingyiwanwu.com/en), founded by Dr. Kai-Fu Lee, is a global company at the forefront of AI 2.0. They offer cutting-edge large language models, including the Yi series, which range from 6B to hundreds of billions of parameters. 01.AI also provides multimodal models, an open API platform, and open-source options like Yi-34B/9B/6B and Yi-VL. + +## Installation and Setup + +Register and get an API key from either the China site [here](https://platform.lingyiwanwu.com/apikeys) or the global site [here](https://platform.01.ai/apikeys). + +## LLMs + +See a [usage example](/docs/integrations/llms/yi). + +```python +from langchain_community.llms import YiLLM +``` + +## Chat models + +See a [usage example](/docs/integrations/chat/yi).
+ +```python +from langchain_community.chat_models import ChatYi +``` diff --git a/langchain_md_files/integrations/providers/youtube.mdx b/langchain_md_files/integrations/providers/youtube.mdx new file mode 100644 index 0000000000000000000000000000000000000000..8f3d69b819b5567bd4a3972ccb2b750170d16fa2 --- /dev/null +++ b/langchain_md_files/integrations/providers/youtube.mdx @@ -0,0 +1,22 @@ +# YouTube + +>[YouTube](https://www.youtube.com/) is an online video sharing and social media platform by Google. +> We download the `YouTube` transcripts and video information. + +## Installation and Setup + +```bash +pip install youtube-transcript-api +pip install pytube +``` +See a [usage example](/docs/integrations/document_loaders/youtube_transcript). + + +## Document Loader + +See a [usage example](/docs/integrations/document_loaders/youtube_transcript). + +```python +from langchain_community.document_loaders import YoutubeLoader +from langchain_community.document_loaders import GoogleApiYoutubeLoader +``` diff --git a/langchain_md_files/integrations/providers/zep.mdx b/langchain_md_files/integrations/providers/zep.mdx new file mode 100644 index 0000000000000000000000000000000000000000..343bfd83a95866d6acd4e3aa37a59200daa404df --- /dev/null +++ b/langchain_md_files/integrations/providers/zep.mdx @@ -0,0 +1,120 @@ +# Zep +> Recall, understand, and extract data from chat histories. Power personalized AI experiences. + +>[Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps. +> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant, +> while also reducing hallucinations, latency, and cost. + +## How Zep works + +Zep persists and recalls chat histories, and automatically generates summaries and other artifacts from these chat histories. +It also embeds messages and summaries, enabling you to search Zep for relevant context from past conversations. +Zep does all of this asynchronously, ensuring these operations don't impact your user's chat experience. +Data is persisted to database, allowing you to scale out when growth demands. + +Zep also provides a simple, easy to use abstraction for document vector search called Document Collections. +This is designed to complement Zep's core memory features, but is not designed to be a general purpose vector database. + +Zep allows you to be more intentional about constructing your prompt: +- automatically adding a few recent messages, with the number customized for your app; +- a summary of recent conversations prior to the messages above; +- and/or contextually relevant summaries or messages surfaced from the entire chat session. +- and/or relevant Business data from Zep Document Collections. + +## What is Zep Cloud? +[Zep Cloud](https://www.getzep.com) is a managed service with Zep Open Source at its core. +In addition to Zep Open Source's memory management features, Zep Cloud offers: +- **Fact Extraction**: Automatically build fact tables from conversations, without having to define a data schema upfront. +- **Dialog Classification**: Instantly and accurately classify chat dialog. Understand user intent and emotion, segment users, and more. Route chains based on semantic context, and trigger events. +- **Structured Data Extraction**: Quickly extract business data from chat conversations using a schema you define. Understand what your Assistant should ask for next in order to complete its task. 
+ + + +## Zep Open Source +Zep offers an open-source version with a self-hosted option. +Please refer to the [Zep Open Source](https://github.com/getzep/zep) repo for more information. +You can also find Zep Open Source compatible [Retriever](/docs/integrations/retrievers/zep_memorystore), [Vector Store](/docs/integrations/vectorstores/zep) and [Memory](/docs/integrations/memory/zep_memory) examples. + +## Zep Cloud Installation and Setup + +[Zep Cloud Docs](https://help.getzep.com) + +1. Install the Zep Cloud SDK: + +```bash +pip install zep_cloud +``` +or +```bash +poetry add zep_cloud +``` + +## Memory + +Zep's Memory API persists your users' chat history and metadata to a [Session](https://help.getzep.com/chat-history-memory/sessions), enriches the memory, and +enables vector similarity search over historical chat messages and dialog summaries. + +Zep offers several approaches to populating prompts with context from historical conversations. + +### Perpetual Memory +This is the default memory type. +Salient facts from the dialog are extracted and stored in a Fact Table. +This is updated in real-time as new messages are added to the Session. +Every time you call the Memory API to get a Memory, Zep returns the Fact Table, the most recent messages (per your Message Window setting), and a summary of the most recent messages prior to the Message Window. +The combination of the Fact Table, summary, and the most recent messages in a prompt provides both factual context and nuance to the LLM. + +### Summary Retriever Memory +Returns the most recent messages and a summary of past messages relevant to the current conversation, +enabling you to provide your Assistant with helpful context from past conversations. + +### Message Window Buffer Memory +Returns the most recent N messages from the current conversation. + +Additionally, Zep enables vector similarity searches for Messages or Summaries stored within its system. + +This feature lets you populate prompts with past conversations that are contextually similar to a specific query, +ranking the results by similarity score. + +The `ZepCloudChatMessageHistory` and `ZepCloudMemory` classes can be imported to interact with the Zep Cloud APIs. + +`ZepCloudChatMessageHistory` is compatible with `RunnableWithMessageHistory`. +```python +from langchain_community.chat_message_histories import ZepCloudChatMessageHistory +``` + +See a [Perpetual Memory Example here](/docs/integrations/memory/zep_cloud_chat_message_history). + +You can use `ZepCloudMemory` together with agents that support Memory. +```python +from langchain_community.memory import ZepCloudMemory +``` + +See a [Memory RAG Example here](/docs/integrations/memory/zep_memory_cloud). + +## Retriever + +Zep's Memory Retriever is a LangChain Retriever that enables you to retrieve messages from a Zep Session and use them to construct your prompt. + +The Retriever supports searching over both individual messages and summaries of conversations. The latter is useful for providing rich but succinct context to the LLM about relevant past conversations. + +Zep's Memory Retriever supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://help.getzep.com/working-with-search#how-zeps-mmr-re-ranking-works). MMR search is useful for ensuring that the retrieved messages are diverse and not too similar to each other. + +See a [usage example](/docs/integrations/retrievers/zep_cloud_memorystore).
+ +```python +from langchain_community.retrievers import ZepCloudRetriever +``` + +## Vector store + +Zep's [Document VectorStore API](https://help.getzep.com/document-collections) enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand +distance functions, types of embeddings, or indexing best practices. You just pass in your chunked documents, and Zep handles the rest. + +Zep supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://help.getzep.com/working-with-search#how-zeps-mmr-re-ranking-works). +MMR search is useful for ensuring that the retrieved documents are diverse and not too similar to each other. + +```python +from langchain_community.vectorstores import ZepCloudVectorStore +``` + +See a [usage example](/docs/integrations/vectorstores/zep_cloud). diff --git a/langchain_md_files/integrations/providers/zhipuai.mdx b/langchain_md_files/integrations/providers/zhipuai.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0bcad6c4f4850be9fae6f9dd4da7ab181e359ebe --- /dev/null +++ b/langchain_md_files/integrations/providers/zhipuai.mdx @@ -0,0 +1,18 @@ +# Zhipu AI + +>[Zhipu AI](https://www.zhipuai.cn/en/aboutus), originating from the technological +> advancements of `Tsinghua University's Computer Science Department`, +> is an artificial intelligence company with the mission of teaching machines +> to think like humans. Its world-leading AI team has developed the cutting-edge +> large language and multimodal models and built the high-precision billion-scale +> knowledge graphs, the combination of which uniquely empowers us to create a powerful +> data- and knowledge-driven cognitive engine towards artificial general intelligence. + + +## Chat models + +See a [usage example](/docs/integrations/chat/zhipuai). + +```python +from langchain_community.chat_models import ChatZhipuAI +``` diff --git a/langchain_md_files/integrations/providers/zilliz.mdx b/langchain_md_files/integrations/providers/zilliz.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6170afd351e08a17b40db0cff1c7431c896dbaff --- /dev/null +++ b/langchain_md_files/integrations/providers/zilliz.mdx @@ -0,0 +1,22 @@ +# Zilliz + +>[Zilliz Cloud](https://zilliz.com/doc/quick_start) is a fully managed cloud service for `LF AI Milvus®`. + + +## Installation and Setup + +Install the Python SDK: +```bash +pip install pymilvus +``` + +## Vectorstore + +A wrapper around Zilliz indexes allows you to use it as a vectorstore, +whether for semantic search or example selection. + +```python +from langchain_community.vectorstores import Milvus +``` + +For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/integrations/vectorstores/zilliz). diff --git a/langchain_md_files/integrations/retrievers/index.mdx b/langchain_md_files/integrations/retrievers/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..451a2c5c6462edfd7abc2e45601bc9e122720f6a --- /dev/null +++ b/langchain_md_files/integrations/retrievers/index.mdx @@ -0,0 +1,37 @@ +--- +sidebar_position: 0 +sidebar_class_name: hidden +--- + +import {CategoryTable, IndexTable} from '@theme/FeatureTables' + +# Retrievers + +A [retriever](/docs/concepts/#retrievers) is an interface that returns documents given an unstructured query. +It is more general than a vector store. +A retriever does not need to be able to store documents, only to return (or retrieve) them.
+Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/). + +Retrievers accept a string query as input and return a list of [Documents](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html) as output. + +For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers). + +Note that all [vector stores](/docs/concepts/#vector-stores) can be [cast to retrievers](/docs/how_to/vectorstore_retriever/). +Refer to the vector store [integration docs](/docs/integrations/vectorstores/) for available vector stores. +This page lists custom retrievers, implemented via subclassing [BaseRetriever](/docs/how_to/custom_retriever/). + +## Bring-your-own documents + +The below retrievers allow you to index and search a custom corpus of documents. + + + +## External index + +The below retrievers will search over an external index (e.g., constructed from Internet data or similar). + + + +## All retrievers + + diff --git a/langchain_md_files/integrations/retrievers/self_query/index.mdx b/langchain_md_files/integrations/retrievers/self_query/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..fba56cb6145c4d60e0e4bef3f1bd637a45407d58 --- /dev/null +++ b/langchain_md_files/integrations/retrievers/self_query/index.mdx @@ -0,0 +1,11 @@ +--- +sidebar-position: 0 +--- + +# Self-querying retrievers + +Learn about how the self-querying retriever works [here](/docs/how_to/self_query). + +import DocCardList from "@theme/DocCardList"; + + diff --git a/langchain_md_files/integrations/text_embedding/index.mdx b/langchain_md_files/integrations/text_embedding/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5e4fd4908f7b83786d30add541e2430e4b61fc3d --- /dev/null +++ b/langchain_md_files/integrations/text_embedding/index.mdx @@ -0,0 +1,18 @@ +--- +sidebar_position: 0 +sidebar_class_name: hidden +--- + +# Embedding models + +import { CategoryTable, IndexTable } from "@theme/FeatureTables"; + +[Embedding models](/docs/concepts#embedding-models) create a vector representation of a piece of text. + +This page documents integrations with various model providers that allow you to use embeddings in LangChain. + + + +## All embedding models + + diff --git a/langchain_md_files/integrations/vectorstores/index.mdx b/langchain_md_files/integrations/vectorstores/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..cc4d33418ded4c2cee07f976c30238a345598116 --- /dev/null +++ b/langchain_md_files/integrations/vectorstores/index.mdx @@ -0,0 +1,17 @@ +--- +sidebar_position: 0 +sidebar_class_name: hidden +--- + +# Vectorstores + +import { CategoryTable, IndexTable } from "@theme/FeatureTables"; + +A [vector store](/docs/concepts/#vector-stores) stores [embedded](/docs/concepts/#embedding-models) data and performs similarity search. + + + +## All Vectorstores + + + diff --git a/langchain_md_files/introduction.mdx b/langchain_md_files/introduction.mdx new file mode 100644 index 0000000000000000000000000000000000000000..436e2255100580c8861c1f9375e3c2f5800c3f57 --- /dev/null +++ b/langchain_md_files/introduction.mdx @@ -0,0 +1,98 @@ +--- +sidebar_position: 0 +sidebar_class_name: hidden +--- + +# Introduction + +**LangChain** is a framework for developing applications powered by large language models (LLMs). 
+ +LangChain simplifies every stage of the LLM application lifecycle: +- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language-lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/platforms/). +Use [LangGraph](/docs/concepts/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support. +- **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence. +- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/). + +import ThemedImage from '@theme/ThemedImage'; +import useBaseUrl from '@docusaurus/useBaseUrl'; + + + +Concretely, the framework consists of the following open-source libraries: + +- **`langchain-core`**: Base abstractions and LangChain Expression Language. +- **`langchain-community`**: Third party integrations. + - Partner packages (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`langchain-core`**. +- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture. +- **[LangGraph](https://langchain-ai.github.io/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. +- **[LangServe](/docs/langserve)**: Deploy LangChain chains as REST APIs. +- **[LangSmith](https://docs.smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications. + + +:::note + +These docs focus on the Python LangChain library. [Head here](https://js.langchain.com) for docs on the JavaScript LangChain library. + +::: + +## [Tutorials](/docs/tutorials) + +If you're looking to build something specific or are more of a hands-on learner, check out our [tutorials section](/docs/tutorials). +This is the best place to get started. + +These are the best ones to get started with: + +- [Build a Simple LLM Application](/docs/tutorials/llm_chain) +- [Build a Chatbot](/docs/tutorials/chatbot) +- [Build an Agent](/docs/tutorials/agents) +- [Introduction to LangGraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/) + +Explore the full list of LangChain tutorials [here](/docs/tutorials), and check out other [LangGraph tutorials here](https://langchain-ai.github.io/langgraph/tutorials/). + + +## [How-to guides](/docs/how_to) + +[Here](/docs/how_to) you’ll find short answers to “How do I….?” types of questions. +These how-to guides don’t cover topics in depth – you’ll find that material in the [Tutorials](/docs/tutorials) and the [API Reference](https://python.langchain.com/v0.2/api_reference/). +However, these guides will help you quickly accomplish common tasks. + +Check out [LangGraph-specific how-tos here](https://langchain-ai.github.io/langgraph/how-tos/). + +## [Conceptual guide](/docs/concepts) + +Introductions to all the key parts of LangChain you’ll need to know! [Here](/docs/concepts) you'll find high level explanations of all LangChain concepts. + +For a deeper dive into LangGraph concepts, check out [this page](https://langchain-ai.github.io/langgraph/concepts/). 
+ +## [API reference](https://python.langchain.com/v0.2/api_reference/) +Head to the reference section for full documentation of all classes and methods in the LangChain Python packages. + +## Ecosystem + +### [🦜🛠️ LangSmith](https://docs.smith.langchain.com) +Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production. + +### [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph) +Build stateful, multi-actor applications with LLMs. Integrates smoothly with LangChain, but can be used without it. + +## Additional resources + +### [Versions](/docs/versions/overview/) +See what changed in v0.2, learn how to migrate legacy code, and read up on our release/versioning policies, and more. + +### [Security](/docs/security) +Read up on [security](/docs/security) best practices to make sure you're developing safely with LangChain. + +### [Integrations](/docs/integrations/providers/) +LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/). + +### [Contributing](/docs/contributing) +Check out the developer's guide for guidelines on contributing and help getting your dev environment set up. diff --git a/langchain_md_files/people.mdx b/langchain_md_files/people.mdx new file mode 100644 index 0000000000000000000000000000000000000000..2426dab3a5f2af55dd524a646b9b8d57d9eaf19b --- /dev/null +++ b/langchain_md_files/people.mdx @@ -0,0 +1,46 @@ +--- +hide_table_of_contents: true +--- + +import People from "@theme/People"; + +# People + +There are some incredible humans from all over the world who have been instrumental in helping the LangChain community flourish 🌐! + +This page highlights a few of those folks who have dedicated their time to the open-source repo in the form of direct contributions and reviews. + +## Top reviewers + +As LangChain has grown, the amount of surface area that maintainers cover has grown as well. + +Thank you to the following folks who have gone above and beyond in reviewing incoming PRs 🙏! + + + +## Top recent contributors + +The list below contains contributors who have had the most PRs merged in the last three months, weighted (imperfectly) by impact. + +Thank you all so much for your time and efforts in making LangChain better ❤️! + + + +## Core maintainers + +Hello there 👋! + +We're LangChain's core maintainers. If you've spent time in the community, you've probably crossed paths +with at least one of us already. + + + +## Top all-time contributors + +And finally, this is an all-time list of all-stars who have made significant contributions to the framework 🌟: + + + +We're so thankful for your support! + +And one more thank you to [@tiangolo](https://github.com/tiangolo) for inspiration via FastAPI's [excellent people page](https://fastapi.tiangolo.com/fastapi-people). diff --git a/langchain_md_files/tutorials/index.mdx b/langchain_md_files/tutorials/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a4e1840bf97443bc93355aef599ff957d52ac46c --- /dev/null +++ b/langchain_md_files/tutorials/index.mdx @@ -0,0 +1,54 @@ +--- +sidebar_position: 0 +sidebar_class_name: hidden +--- +# Tutorials + +New to LangChain or to LLM app development in general? Read this material to quickly get up and running. 
+ +## Basics +- [Build a Simple LLM Application with LCEL](/docs/tutorials/llm_chain) +- [Build a Chatbot](/docs/tutorials/chatbot) +- [Build vector stores and retrievers](/docs/tutorials/retrievers) +- [Build an Agent](/docs/tutorials/agents) + +## Working with external knowledge +- [Build a Retrieval Augmented Generation (RAG) Application](/docs/tutorials/rag) +- [Build a Conversational RAG Application](/docs/tutorials/qa_chat_history) +- [Build a Question/Answering system over SQL data](/docs/tutorials/sql_qa) +- [Build a Query Analysis System](/docs/tutorials/query_analysis) +- [Build a local RAG application](/docs/tutorials/local_rag) +- [Build a Question Answering application over a Graph Database](/docs/tutorials/graph) +- [Build a PDF ingestion and Question/Answering system](/docs/tutorials/pdf_qa/) + +## Specialized tasks +- [Build an Extraction Chain](/docs/tutorials/extraction) +- [Generate synthetic data](/docs/tutorials/data_generation) +- [Classify text into labels](/docs/tutorials/classification) +- [Summarize text](/docs/tutorials/summarization) + +## LangGraph + +LangGraph is an extension of LangChain aimed at +building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. + +LangGraph documentation is currently hosted on a separate site. +You can peruse [LangGraph tutorials here](https://langchain-ai.github.io/langgraph/tutorials/). + +## LangSmith + +LangSmith allows you to closely trace, monitor and evaluate your LLM application. +It seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build. + +LangSmith documentation is hosted on a separate site. +You can peruse [LangSmith tutorials here](https://docs.smith.langchain.com/tutorials/). + +### Evaluation + +LangSmith helps you evaluate the performance of your LLM applications. The below tutorial is a great way to get started: + +- [Evaluate your LLM application](https://docs.smith.langchain.com/tutorials/Developers/evaluation) + +## More + +For more tutorials, see our [cookbook section](https://github.com/langchain-ai/langchain/tree/master/cookbook). diff --git a/langchain_md_files/versions/overview.mdx b/langchain_md_files/versions/overview.mdx new file mode 100644 index 0000000000000000000000000000000000000000..ba8ff22daf83ee66c79be487c859942b49bff3d3 --- /dev/null +++ b/langchain_md_files/versions/overview.mdx @@ -0,0 +1,103 @@ +--- +sidebar_position: 0 +sidebar_label: Overview of v0.2 +--- + +# Overview of LangChain v0.2 + +## What’s new in LangChain? + +The following features have been added during the development of 0.1.x: + +- Better streaming support via the [Event Streaming API](https://python.langchain.com/docs/expression_language/streaming/#using-stream-events). +- [Standardized tool calling support](https://blog.langchain.dev/tool-calling-with-langchain/) +- A standardized interface for [structuring output](https://github.com/langchain-ai/langchain/discussions/18154) +- [@chain decorator](https://python.langchain.com/docs/expression_language/how_to/decorator/) to more easily create **RunnableLambdas** +- https://python.langchain.com/docs/expression_language/how_to/inspect/ +- In Python, better async support for many core abstractions (thank you [@cbornet](https://github.com/cbornet)!!) 
+ +- Include response metadata in `AIMessage` to make it easy to access raw output from the underlying models +- Tooling to visualize [your runnables](https://python.langchain.com/docs/expression_language/how_to/inspect/) or [your langgraph app](https://github.com/langchain-ai/langgraph/blob/main/examples/visualization.ipynb) +- Interoperability of chat message histories across most providers +- [Over 20 partner packages in Python](https://python.langchain.com/docs/integrations/platforms/) for popular integrations + + +## What’s coming to LangChain? + +- We’ve been working hard on [langgraph](https://langchain-ai.github.io/langgraph/). We will be building more capabilities on top of it and focusing on making it the go-to framework for agent architectures. +- Vectorstores V2! We’ll be revisiting our vectorstores abstractions to help improve usability and reliability. +- Better documentation and versioned docs! +- We’re planning a breaking release (0.3.0) sometime between July and September to [upgrade to full support of Pydantic 2](https://github.com/langchain-ai/langchain/discussions/19339), and will drop support for Pydantic 1 (including objects originating from the `v1` namespace of Pydantic 2). + +## What changed? + +Due to the rapidly evolving field, LangChain has also evolved rapidly. + +This document serves to outline at a high level what has changed and why. + +### TLDR + +**As of 0.2.0:** + +- This release completes the work that we started with release 0.1.0 by removing the dependency of `langchain` on `langchain-community`. +- The `langchain` package no longer requires `langchain-community`. Instead, `langchain-community` now depends on `langchain-core` and `langchain`. +- User code that still relies on deprecated imports from `langchain` will continue to work as long as `langchain_community` is installed. These imports will start raising errors in release 0.4.x. + +**As of 0.1.0:** + +- `langchain` was split into the following component packages: `langchain-core`, `langchain`, `langchain-community`, `langchain-[partner]` to improve the usability of LangChain code in production settings. You can read more about it on our [blog](https://blog.langchain.dev/langchain-v0-1-0/). + +### Ecosystem organization + +By the release of 0.1.0, LangChain had grown to a large ecosystem with many integrations and a large community. + +To improve the usability of LangChain in production, we split the single `langchain` package into multiple packages. This allowed us to create a good foundation architecture for the LangChain ecosystem and improve the usability of `langchain` in production. + +Here is a high-level breakdown of the ecosystem: + +- **langchain-core**: contains core abstractions involving LangChain Runnables, tooling for observability, and base implementations of important abstractions (e.g., Chat Models). +- **langchain:** contains generic code that is built using interfaces defined in `langchain-core`. This package is for code that generalizes well across different implementations of specific interfaces. For example, `create_tool_calling_agent` works across chat models that support [tool calling capabilities](https://blog.langchain.dev/tool-calling-with-langchain/). +- **langchain-community**: community-maintained third-party integrations. Contains integrations based on interfaces defined in **langchain-core**. Maintained by the LangChain community.
+ +- **Partner Packages (e.g., langchain-[partner])**: Partner packages are packages dedicated to especially popular integrations (e.g., `langchain-openai`, `langchain-anthropic`, etc.). The dedicated packages generally benefit from better reliability and support. +- `langgraph`: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. +- `langserve`: Deploy LangChain chains as REST APIs. + + +In the 0.1.0 release, `langchain-community` was retained as a required dependency of `langchain`. + +This allowed imports of vectorstores, chat models, and other integrations to continue working through `langchain` +rather than forcing users to update all of their imports to `langchain-community`. + +For the 0.2.0 release, we’re removing the dependency of `langchain` on `langchain-community`. This is something we’ve been planning to do since the 0.1 release because we believe this is the right package architecture. + +Old imports will continue to work as long as `langchain-community` is installed. These imports will be removed in the 0.4.0 release. + +To understand why we think breaking the dependency of `langchain` on `langchain-community` is best, we should understand what each package is meant to do. + +`langchain` is meant to contain high-level chains and agent architectures. The logic in these should be specified at the level of abstractions like `ChatModel` and `Retriever`, and should not be specific to any one integration. This has two main benefits: + +1. `langchain` is fairly lightweight. Here is the full list of required dependencies (after the split): + + ```toml + python = ">=3.8.1,<4.0" + langchain-core = "^0.2.0" + langchain-text-splitters = ">=0.0.1,<0.1" + langsmith = "^0.1.17" + pydantic = ">=1,<3" + SQLAlchemy = ">=1.4,<3" + requests = "^2" + PyYAML = ">=5.3" + numpy = "^1" + aiohttp = "^3.8.3" + tenacity = "^8.1.0" + jsonpatch = "^1.33" + ``` + +2. `langchain` chains/agents are largely integration-agnostic, which makes it easy to experiment with different integrations and future-proofs your code should there be issues with one specific integration. + +There is also a third, less tangible benefit: being integration-agnostic forces us to find only those very generic abstractions and architectures that generalize well across integrations. Given how general the abilities of the foundational tech are, and how quickly the space is moving, having generic architectures is a good way of future-proofing your applications. + +`langchain-community` is intended to have all integration-specific components that are not yet being maintained in separate `langchain-{partner}` packages. Today this is still the majority of integrations and a lot of code. This code is primarily contributed by the community, while `langchain` is largely written by core maintainers. All of these integrations use optional dependencies and conditional imports, which prevents dependency bloat and conflicts but means compatible dependency versions are not made explicit. Given the volume of integrations in `langchain-community` and the speed at which integrations change, it’s very hard to follow semver versioning, and we currently don’t.
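To illustrate the pattern just described, here is a minimal sketch (not actual `langchain-community` source; the package name `examplesdk` is hypothetical) of how an integration can keep its third-party dependency optional via a conditional import:

```python
from typing import Any


class ExampleIntegration:
    """Hypothetical integration whose third-party SDK is an optional dependency."""

    def __init__(self, **kwargs: Any) -> None:
        try:
            # Imported lazily so that merely installing langchain-community
            # does not pull in this dependency.
            import examplesdk
        except ImportError as exc:
            raise ImportError(
                "Could not import examplesdk. "
                "Please install it with `pip install examplesdk`."
            ) from exc
        self._client = examplesdk.Client(**kwargs)
```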
+ +All of which is to say that there are no large benefits to `langchain` depending on `langchain-community`, and there are some obvious downsides: the functionality in `langchain` should be integration-agnostic anyway, `langchain-community` can’t be properly versioned, and depending on `langchain-community` increases the [vulnerability surface](https://github.com/langchain-ai/langchain/discussions/19083) of `langchain`. + +For more context about the reasons for this organization, please see our blog: https://blog.langchain.dev/langchain-v0-1-0/ \ No newline at end of file diff --git a/langchain_md_files/versions/release_policy.mdx b/langchain_md_files/versions/release_policy.mdx new file mode 100644 index 0000000000000000000000000000000000000000..aa6278382af22a9f25729057cd53df710b4e4be7 --- /dev/null +++ b/langchain_md_files/versions/release_policy.mdx @@ -0,0 +1,102 @@ +--- +sidebar_position: 2 +sidebar_label: Release policy +--- + +# LangChain release policy + +The LangChain ecosystem is composed of different component packages (e.g., `langchain-core`, `langchain`, `langchain-community`, `langgraph`, `langserve`, partner packages, etc.). + +## Versioning + +### `langchain`, `langchain-core`, and integration packages + +`langchain`, `langchain-core`, `langchain-text-splitters`, and integration packages (`langchain-openai`, `langchain-anthropic`, etc.) follow [semantic versioning](https://semver.org/) in the format of 0.**Y**.**Z**. The packages are under rapid development, so they are currently versioned with a major version of 0. + +Minor version increases will occur for: + +- Breaking changes for any public interfaces *not* marked as `beta`. + +Patch version increases will occur for: + +- Bug fixes, +- New features, +- Any changes to private interfaces, +- Any changes to `beta` features. + +When upgrading between minor versions, users should review the list of breaking changes and deprecations. + +From time to time, we will version packages as **release candidates**. These are versions that are intended to be released as stable versions, but we want to get feedback from the community before doing so. Release candidates will be versioned as 0.**Y**.**Z**rc**N**. For example, 0.2.0rc1. If no issues are found, the release candidate will be released as a stable version with the same version number. If issues are found, we will release a new release candidate with an incremented `N` value (e.g., 0.2.0rc2). + +### `langchain-community` + +`langchain-community` is currently on version `0.2.x`. + +Minor version increases will occur for: + +- Updates to the major/minor versions of required `langchain-x` dependencies. E.g., when updating the required version of `langchain-core` from `^0.2.x` to `0.3.0`. + +Patch version increases will occur for: + +- Bug fixes, +- New features, +- Any changes to private interfaces, +- Any changes to `beta` features, +- Breaking changes to integrations to reflect breaking changes in the third-party service. + +Whenever possible, we will avoid making breaking changes in patch versions. +However, if an external API makes a breaking change, then breaking changes to the corresponding `langchain-community` integration can occur in a patch version. + +### `langchain-experimental` + +`langchain-experimental` is currently on version `0.0.x`. All changes will be accompanied by patch version increases.
+ +## Release cadence + +We expect to space out **minor** releases (e.g., from 0.2.x to 0.3.0) of `langchain` and `langchain-core` by at least 2-3 months, as such releases may contain breaking changes. + +Patch versions are released frequently, up to a few times per week, as they contain bug fixes and new features. + +## API stability + +The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `langchain-core` will continue to evolve to better serve the needs of our users. + +Even though both `langchain` and `langchain-core` are currently in a pre-1.0 state, we are committed to maintaining API stability in these packages. + +- Breaking changes to the public API will result in a minor version bump (the second digit) +- Any bug fixes or new features will result in a patch version bump (the third digit) + +We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed. + +### Stability of other packages + +The stability of other packages in the LangChain ecosystem may vary: + +- `langchain-community` is a community maintained package that contains 3rd party integrations. While we do our best to review and test changes in `langchain-community`, `langchain-community` is expected to experience more breaking changes than `langchain` and `langchain-core` as it contains many community contributions. +- Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable. + +### What is a "API stability"? + +API stability means: + +- All the public APIs (everything in this documentation) will not be moved or renamed without providing backwards-compatible aliases. +- If new features are added to these APIs – which is quite possible – they will not break or change the meaning of existing methods. In other words, "stable" does not (necessarily) mean "complete." +- If, for some reason, an API declared stable must be removed or replaced, it will be declared deprecated but will remain in the API for at least two minor releases. Warnings will be issued when the deprecated method is called. + +### **APIs marked as internal** + +Certain APIs are explicitly marked as “internal” in a couple of ways: + +- Some documentation refers to internals and mentions them as such. If the documentation says that something is internal, it may change. +- Functions, methods, and other objects prefixed by a leading underscore (**`_`**). This is the standard Python convention of indicating that something is private; if any method starts with a single **`_`**, it’s an internal API. + - **Exception:** Certain methods are prefixed with `_` , but do not contain an implementation. These methods are *meant* to be overridden by sub-classes that provide the implementation. Such methods are generally part of the **Public API** of LangChain. + +## Deprecation policy + +We will generally avoid deprecating features until a better alternative is available. + +When a feature is deprecated, it will continue to work in the current and next minor version of `langchain` and `langchain-core`. After that, the feature will be removed. + +Since we're expecting to space out minor releases by at least 2-3 months, this means that a feature can be removed within 2-6 months of being deprecated. 
+ +In some situations, we may allow the feature to remain in the code base for longer periods of time, if it's not causing issues in the packages, to reduce the burden on users. \ No newline at end of file diff --git a/langchain_md_files/versions/v0_2/deprecations.mdx b/langchain_md_files/versions/v0_2/deprecations.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b97c27888f45edb39f2f56ab82d5a14e873ca2bd --- /dev/null +++ b/langchain_md_files/versions/v0_2/deprecations.mdx @@ -0,0 +1,902 @@ +--- +sidebar_position: 3 +sidebar_label: Changes +keywords: [retrievalqa, llmchain, conversationalretrievalchain] +--- + +# Deprecations and Breaking Changes + +This page contains a list of deprecations and removals in the `langchain` and `langchain-core` packages. + +New features and improvements are not listed here. See the [overview](/docs/versions/overview/) for a summary of what's new in this release. + +## Breaking changes + +As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, LLMs, embedding models, vectorstores, etc.; instead, the user will be required to specify those explicitly. + +The following functions and classes require an explicit LLM to be passed as an argument: + +- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit` +- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit` +- `langchain.chains.openai_functions.get_openapi_chain` +- `langchain.chains.router.MultiRetrievalQAChain.from_retrievers` +- `langchain.indexes.VectorStoreIndexWrapper.query` +- `langchain.indexes.VectorStoreIndexWrapper.query_with_sources` +- `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources` +- `langchain.chains.flare.FlareChain` + + +The following classes now require passing an explicit Embedding model as an argument: + +- `langchain.indexes.VectorstoreIndexCreator` + +The following code has been removed: + +- `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method. + +Behavior was changed for the following code: + + +### @tool decorator + +The `@tool` decorator now assigns the function docstring as the tool description. Previously, the `@tool` decorator +used to prepend the function signature. + +Before 0.2.0: + +```python +@tool +def my_tool(x: str) -> str: + """Some description.""" + return "something" + +print(my_tool.description) +``` + +Would result in: `my_tool: (x: str) -> str - Some description.` + +As of 0.2.0: + +It will result in: `Some description.` + +## Code that moved to another package + +Code that was moved from `langchain` into another package (e.g., `langchain-community`). + +If you try to import it from `langchain`, the import will keep working, but will raise a deprecation warning. The warning will provide a replacement import statement. + + ```shell + python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader" +``` + + ```shell + LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated. Please replace deprecated imports: + + >> from langchain.document_loaders import UnstructuredMarkdownLoader + + with new imports of: + + >> from langchain_community.document_loaders import UnstructuredMarkdownLoader +``` + +We will continue supporting the imports in `langchain` until release 0.4 as long as the relevant package where the code lives is installed.
(e.g., as long as `langchain_community` is installed.) + +However, we advise for users to not rely on these imports and instead migrate to the new imports. To help with this process, we’re releasing a migration script via the LangChain CLI. See further instructions in migration guide. + +## Code targeted for removal + +Code that has better alternatives available and will eventually be removed, so there’s only a single way to do things. (e.g., `predict_messages` method in ChatModels has been deprecated in favor of `invoke`). + +### astream events V1 + +If you are using `astream_events`, please review how to [migrate to astream events v2](/docs/versions/v0_2/migrating_astream_events). + +### langchain_core + +#### try_load_from_hub + + +In module: `utils.loading` +Deprecated: 0.1.30 +Removal: 0.3.0 + + +Alternative: Using the hwchase17/langchain-hub repo for prompts is deprecated. Please use https://smith.langchain.com/hub instead. + + +#### BaseLanguageModel.predict + + +In module: `language_models.base` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseLanguageModel.predict_messages + + +In module: `language_models.base` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseLanguageModel.apredict + + +In module: `language_models.base` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### BaseLanguageModel.apredict_messages + + +In module: `language_models.base` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### RunTypeEnum + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Use string instead. + + +#### TracerSessionV1Base + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### TracerSessionV1Create + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### TracerSessionV1 + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### TracerSessionBase + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### TracerSession + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### BaseRun + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Run + + +#### LLMRun + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Run + + +#### ChainRun + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Run + + +#### ToolRun + + +In module: `tracers.schemas` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Run + + +#### BaseChatModel.__call__ + + +In module: `language_models.chat_models` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseChatModel.call_as_llm + + +In module: `language_models.chat_models` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseChatModel.predict + + +In module: `language_models.chat_models` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseChatModel.predict_messages + + +In module: `language_models.chat_models` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseChatModel.apredict + + +In module: `language_models.chat_models` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### BaseChatModel.apredict_messages + + +In module: `language_models.chat_models` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### BaseLLM.__call__ + + +In module: 
`language_models.llms` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseLLM.predict + + +In module: `language_models.llms` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseLLM.predict_messages + + +In module: `language_models.llms` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseLLM.apredict + + +In module: `language_models.llms` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### BaseLLM.apredict_messages + + +In module: `language_models.llms` +Deprecated: 0.1.7 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### BaseRetriever.get_relevant_documents + + +In module: `retrievers` +Deprecated: 0.1.46 +Removal: 0.3.0 + + +Alternative: invoke + + +#### BaseRetriever.aget_relevant_documents + + +In module: `retrievers` +Deprecated: 0.1.46 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### ChatPromptTemplate.from_role_strings + + +In module: `prompts.chat` +Deprecated: 0.0.1 +Removal: + + +Alternative: from_messages classmethod + + +#### ChatPromptTemplate.from_strings + + +In module: `prompts.chat` +Deprecated: 0.0.1 +Removal: + + +Alternative: from_messages classmethod + + +#### BaseTool.__call__ + + +In module: `tools` +Deprecated: 0.1.47 +Removal: 0.3.0 + + +Alternative: invoke + + +#### convert_pydantic_to_openai_function + + +In module: `utils.function_calling` +Deprecated: 0.1.16 +Removal: 0.3.0 + + +Alternative: langchain_core.utils.function_calling.convert_to_openai_function() + + +#### convert_pydantic_to_openai_tool + + +In module: `utils.function_calling` +Deprecated: 0.1.16 +Removal: 0.3.0 + + +Alternative: langchain_core.utils.function_calling.convert_to_openai_tool() + + +#### convert_python_function_to_openai_function + + +In module: `utils.function_calling` +Deprecated: 0.1.16 +Removal: 0.3.0 + + +Alternative: langchain_core.utils.function_calling.convert_to_openai_function() + + +#### format_tool_to_openai_function + + +In module: `utils.function_calling` +Deprecated: 0.1.16 +Removal: 0.3.0 + + +Alternative: langchain_core.utils.function_calling.convert_to_openai_function() + + +#### format_tool_to_openai_tool + + +In module: `utils.function_calling` +Deprecated: 0.1.16 +Removal: 0.3.0 + + +Alternative: langchain_core.utils.function_calling.convert_to_openai_tool() + + +### langchain + + +#### AgentType + + +In module: `agents.agent_types` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Use [LangGraph](/docs/how_to/migrate_agent/) or new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. + + +#### Chain.__call__ + + +In module: `chains.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: invoke + + +#### Chain.acall + + +In module: `chains.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### Chain.run + + +In module: `chains.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: invoke + + +#### Chain.arun + + +In module: `chains.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: ainvoke + + +#### Chain.apply + + +In module: `chains.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: batch + + +#### LLMChain + + +In module: `chains.llm` +Deprecated: 0.1.17 +Removal: 0.3.0 + + +Alternative: [RunnableSequence](/docs/how_to/sequence/), e.g., `prompt | llm` + +This [migration guide](/docs/versions/migrating_chains/llm_chain) has a side-by-side comparison. 
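As a quick sketch of that migration (the prompt and model here are only illustrative, and any chat model can be substituted):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatOpenAI()

# Deprecated:
#   from langchain.chains import LLMChain
#   chain = LLMChain(llm=llm, prompt=prompt)

# Preferred RunnableSequence equivalent:
chain = prompt | llm | StrOutputParser()
chain.invoke({"topic": "bears"})
```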
+ + +#### LLMSingleActionAgent + + +In module: `agents.agent` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Use [LangGraph](/docs/how_to/migrate_agent/) or new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. + + +#### Agent + + +In module: `agents.agent` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Use [LangGraph](/docs/how_to/migrate_agent/) or new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. + + +#### OpenAIFunctionsAgent + + +In module: `agents.openai_functions_agent.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_openai_functions_agent + + +#### ZeroShotAgent + + +In module: `agents.mrkl.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_react_agent + + +#### MRKLChain + + +In module: `agents.mrkl.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### ConversationalAgent + + +In module: `agents.conversational.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_react_agent + + +#### ConversationalChatAgent + + +In module: `agents.conversational_chat.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_json_chat_agent + + +#### ChatAgent + + +In module: `agents.chat.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_react_agent + + +#### OpenAIMultiFunctionsAgent + + +In module: `agents.openai_functions_multi_agent.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_openai_tools_agent + + +#### ReActDocstoreAgent + + +In module: `agents.react.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### DocstoreExplorer + + +In module: `agents.react.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### ReActTextWorldAgent + + +In module: `agents.react.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### ReActChain + + +In module: `agents.react.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### SelfAskWithSearchAgent + + +In module: `agents.self_ask_with_search.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_self_ask_with_search + + +#### SelfAskWithSearchChain + + +In module: `agents.self_ask_with_search.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### StructuredChatAgent + + +In module: `agents.structured_chat.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_structured_chat_agent + + +#### RetrievalQA + + +In module: `chains.retrieval_qa.base` +Deprecated: 0.1.17 +Removal: 0.3.0 + + +Alternative: [create_retrieval_chain](https://python.langchain.com/v0.2/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain) +This [migration guide](/docs/versions/migrating_chains/retrieval_qa) has a side-by-side comparison. + + +#### load_agent_from_config + + +In module: `agents.loading` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### load_agent + + +In module: `agents.loading` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: + + +#### initialize_agent + + +In module: `agents.initialize` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: Use [LangGraph](/docs/how_to/migrate_agent/) or new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. 
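+
+As a rough sketch of what one of these constructor methods looks like in place of `initialize_agent` (assuming `langchain`, `langchain-openai` and `langchainhub` are installed; the tool and prompt below are illustrative):
+
+```python
+# Minimal sketch: create_react_agent + AgentExecutor instead of the deprecated initialize_agent.
+from langchain import hub
+from langchain.agents import AgentExecutor, create_react_agent
+from langchain_core.tools import tool
+from langchain_openai import ChatOpenAI
+
+@tool
+def word_length(word: str) -> int:
+    """Return the number of characters in a word."""
+    return len(word)
+
+llm = ChatOpenAI()
+tools = [word_length]
+
+prompt = hub.pull("hwchase17/react")  # a standard ReAct prompt pulled from the LangChain hub
+agent = create_react_agent(llm, tools, prompt)
+agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
+
+agent_executor.invoke({"input": "How many characters are in the word 'deprecation'?"})
+```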
+ + +#### XMLAgent + + +In module: `agents.xml.base` +Deprecated: 0.1.0 +Removal: 0.3.0 + + +Alternative: create_xml_agent + + +#### CohereRerank + + +In module: `retrievers.document_compressors.cohere_rerank` +Deprecated: 0.0.30 +Removal: 0.3.0 + + +Alternative: langchain_cohere.CohereRerank + + +#### ConversationalRetrievalChain + + +In module: `chains.conversational_retrieval.base` +Deprecated: 0.1.17 +Removal: 0.3.0 + + +Alternative: [create_history_aware_retriever](https://python.langchain.com/v0.2/api_reference/langchain/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) together with [create_retrieval_chain](https://python.langchain.com/v0.2/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain-chains-retrieval-create-retrieval-chain) (see example in docstring) +This [migration guide](/docs/versions/migrating_chains/conversation_retrieval_chain) has a side-by-side comparison. + + +#### create_extraction_chain_pydantic + + +In module: `chains.openai_tools.extraction` +Deprecated: 0.1.14 +Removal: 0.3.0 + + +Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. + + +#### create_openai_fn_runnable + + +In module: `chains.structured_output.base` +Deprecated: 0.1.14 +Removal: 0.3.0 + + +Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. + + +#### create_structured_output_runnable + + +In module: `chains.structured_output.base` +Deprecated: 0.1.17 +Removal: 0.3.0 + + +Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. + + +#### create_openai_fn_chain + + +In module: `chains.openai_functions.base` +Deprecated: 0.1.1 +Removal: 0.3.0 + + +Alternative: create_openai_fn_runnable + + +#### create_structured_output_chain + + +In module: `chains.openai_functions.base` +Deprecated: 0.1.1 +Removal: 0.3.0 + +Alternative: ChatOpenAI.with_structured_output + + +#### create_extraction_chain + + +In module: `chains.openai_functions.extraction` +Deprecated: 0.1.14 +Removal: 0.3.0 + + +Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. + + +#### create_extraction_chain_pydantic + + +In module: `chains.openai_functions.extraction` +Deprecated: 0.1.14 +Removal: 0.3.0 + + +Alternative: [with_structured_output](/docs/how_to/structured_output/#the-with_structured_output-method) method on chat models that support tool calling. \ No newline at end of file diff --git a/langchain_md_files/versions/v0_2/index.mdx b/langchain_md_files/versions/v0_2/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..5a74a3493e4ad34937b7a12e88be23d026f33d7e --- /dev/null +++ b/langchain_md_files/versions/v0_2/index.mdx @@ -0,0 +1,93 @@ +--- +sidebar_position: 1 +--- + +# Migrating to LangChain v0.2 + + + +LangChain v0.2 was released in May 2024. This release includes a number of [breaking changes and deprecations](/docs/versions/v0_2/deprecations). This document contains a guide on upgrading to 0.2.x. 
+ +:::note Reference + +- [Breaking Changes & Deprecations](/docs/versions/v0_2/deprecations) +- [Migrating legacy chains to LCEL](/docs/versions/migrating_chains) +- [Migrating to Astream Events v2](/docs/versions/v0_2/migrating_astream_events) + +::: + +# Migration + +This documentation will help you upgrade your code to LangChain `0.2.x.`. To prepare for migration, we first recommend you take the following steps: + +1. Install the 0.2.x versions of langchain-core, langchain and upgrade to recent versions of other packages that you may be using. (e.g. langgraph, langchain-community, langchain-openai, etc.) +2. Verify that your code runs properly with the new packages (e.g., unit tests pass). +3. Install a recent version of `langchain-cli` , and use the tool to replace old imports used by your code with the new imports. (See instructions below.) +4. Manually resolve any remaining deprecation warnings. +5. Re-run unit tests. +6. If you are using `astream_events`, please review how to [migrate to astream events v2](/docs/versions/v0_2/migrating_astream_events). + +## Upgrade to new imports + +We created a tool to help migrate your code. This tool is still in **beta** and may not cover all cases, but +we hope that it will help you migrate your code more quickly. + +The migration script has the following limitations: + +1. It’s limited to helping users move from old imports to new imports. It does not help address other deprecations. +2. It can’t handle imports that involve `as` . +3. New imports are always placed in global scope, even if the old import that was replaced was located inside some local scope (e..g, function body). +4. It will likely miss some deprecated imports. + +Here is an example of the import changes that the migration script can help apply automatically: + + +| From Package | To Package | Deprecated Import | New Import | +|---------------------|--------------------------|--------------------------------------------------------------------|---------------------------------------------------------------------| +| langchain | langchain-community | from langchain.vectorstores import InMemoryVectorStore | from langchain_community.vectorstores import InMemoryVectorStore | +| langchain-community | langchain_openai | from langchain_community.chat_models import ChatOpenAI | from langchain_openai import ChatOpenAI | +| langchain-community | langchain-core | from langchain_community.document_loaders import Blob | from langchain_core.document_loaders import Blob | +| langchain | langchain-core | from langchain.schema.document import Document | from langchain_core.documents import Document | +| langchain | langchain-text-splitters | from langchain.text_splitter import RecursiveCharacterTextSplitter | from langchain_text_splitters import RecursiveCharacterTextSplitter | + + +## Installation + +```bash +pip install langchain-cli +langchain-cli --version # <-- Make sure the version is at least 0.0.22 +``` + +## Usage + +Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`). + +You will need to run the migration script **twice** as it only applies one import replacement per run. 
+ +For example, say your code still uses `from langchain.chat_models import ChatOpenAI`: + +After the first run, you’ll get: `from langchain_community.chat_models import ChatOpenAI` +After the second run, you’ll get: `from langchain_openai import ChatOpenAI` + +```bash +# Run a first time +# Will replace from langchain.chat_models import ChatOpenAI +langchain-cli migrate --diff [path to code] # Preview +langchain-cli migrate [path to code] # Apply + +# Run a second time to apply more import replacements +langchain-cli migrate --diff [path to code] # Preview +langchain-cli migrate [path to code] # Apply +``` + +### Other options + +```bash +# See help menu +langchain-cli migrate --help +# Preview Changes without applying +langchain-cli migrate --diff [path to code] +# Run on code including ipython notebooks +# Apply all import updates except for updates from langchain to langchain-core +langchain-cli migrate --disable langchain_to_core --include-ipynb [path to code] +``` diff --git a/langchain_md_files/versions/v0_2/migrating_astream_events.mdx b/langchain_md_files/versions/v0_2/migrating_astream_events.mdx new file mode 100644 index 0000000000000000000000000000000000000000..0498f1f26fdb3d3a693228ac7ee23e2eb47cc080 --- /dev/null +++ b/langchain_md_files/versions/v0_2/migrating_astream_events.mdx @@ -0,0 +1,118 @@ +--- +sidebar_position: 2 +sidebar_label: astream_events v2 +--- + +# Migrating to Astream Events v2 + +We've added a `v2` of the astream_events API with the release of `0.2.x`. You can see this [PR](https://github.com/langchain-ai/langchain/pull/21638) for more details. + +The `v2` version is a re-write of the `v1` version, and should be more efficient, with more consistent output for the events. The `v1` version of the API will be deprecated in favor of the `v2` version and will be removed in `0.4.0`. + +Below is a list of changes between the `v1` and `v2` versions of the API. + + +### output for `on_chat_model_end` + +In `v1`, the outputs associated with `on_chat_model_end` changed depending on whether the +chat model was run as a root level runnable or as part of a chain. + +As a root level runnable the output was: + +```python +"data": {"output": AIMessageChunk(content="hello world!", id='some id')} +``` + +As part of a chain the output was: + +``` + "data": { + "output": { + "generations": [ + [ + { + "generation_info": None, + "message": AIMessageChunk( + content="hello world!", id=AnyStr() + ), + "text": "hello world!", + "type": "ChatGenerationChunk", + } + ] + ], + "llm_output": None, + } + }, +``` + + +As of `v2`, the output will always be the simpler representation: + +```python +"data": {"output": AIMessageChunk(content="hello world!", id='some id')} +``` + +:::note +Non chat models (i.e., regular LLMs) are will be consistently associated with the more verbose format for now. +::: + +### output for `on_retriever_end` + +`on_retriever_end` output will always return a list of `Documents`. + +Before: +```python +{ + "data": { + "output": [ + Document(...), + Document(...), + ... + ] + } +} +``` + +### Removed `on_retriever_stream` + +The `on_retriever_stream` event was an artifact of the implementation and has been removed. + +Full information associated with the event is already available in the `on_retriever_end` event. + +Please use `on_retriever_end` instead. + +### Removed `on_tool_stream` + +The `on_tool_stream` event was an artifact of the implementation and has been removed. 
+ +Full information associated with the event is already available in the `on_tool_end` event. + +Please use `on_tool_end` instead. + +### Propagating Names + +Names of runnables have been updated to be more consistent. + +```python +model = GenericFakeChatModel(messages=infinite_cycle).configurable_fields( + messages=ConfigurableField( + id="messages", + name="Messages", + description="Messages return by the LLM", + ) +) +``` + +In `v1`, the event name was `RunnableConfigurableFields`. + +In `v2`, the event name is `GenericFakeChatModel`. + +If you're filtering by event names, check if you need to update your filters. + +### RunnableRetry + +Usage of [RunnableRetry](https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.retry.RunnableRetry.html) +within an LCEL chain being streamed generated an incorrect `on_chain_end` event in `v1` corresponding +to the failed runnable invocation that was being retried. This event has been removed in `v2`. + +No action is required for this change. diff --git a/openai-cookbook_md_files/How_to_build_an_agent_with_the_node_sdk.mdx b/openai-cookbook_md_files/How_to_build_an_agent_with_the_node_sdk.mdx new file mode 100644 index 0000000000000000000000000000000000000000..c28975d173ab12f99583786e44ae80002f6df7d6 --- /dev/null +++ b/openai-cookbook_md_files/How_to_build_an_agent_with_the_node_sdk.mdx @@ -0,0 +1,492 @@ +# How to build an agent with the Node.js SDK + +OpenAI functions enable your app to take action based on user inputs. This means that it can, e.g., search the web, send emails, or book tickets on behalf of your users, making it more powerful than a regular chatbot. + +In this tutorial, you will build an app that uses OpenAI functions along with the latest version of the Node.js SDK. The app runs in the browser, so you only need a code editor and, e.g., VS Code Live Server to follow along locally. Alternatively, write your code directly in the browser via [this code playground at Scrimba.](https://scrimba.com/scrim/c6r3LkU9) + +## What you will build + +Our app is a simple agent that helps you find activities in your area. +It has access to two functions, `getLocation()` and `getCurrentWeather()`, +which means it can figure out where you’re located and what the weather +is at the moment. + +At this point, it's important to understand that +OpenAI doesn't execute any code for you. It just tells your app which +functions it should use in a given scenario, and then leaves it up to +your app to invoke them. + +Once our agent knows your location and the weather, it'll use GPT’s +internal knowledge to suggest suitable local activities for you. + +## Importing the SDK and authenticating with OpenAI + +We start by importing the OpenAI SDK at the top of our JavaScript file and authenticate with our API key, which we have stored as an environment variable. + +```js +import OpenAI from "openai"; + +const openai = new OpenAI({ + apiKey: process.env.OPENAI_API_KEY, + dangerouslyAllowBrowser: true, +}); +``` + +Since we're running our code in a browser environment at Scrimba, we also need to set `dangerouslyAllowBrowser: true` to confirm we understand the risks involved with client-side API requests. Please note that you should move these requests over to a Node server in a production app. + +## Creating our two functions + +Next, we'll create the two functions. The first one - `getLocation` - +uses the [IP API](https://ipapi.co/) to get the location of the +user. 
+
+```js
+async function getLocation() {
+  const response = await fetch("https://ipapi.co/json/");
+  const locationData = await response.json();
+  return locationData;
+}
+```
+
+The IP API returns a bunch of data about your location, including your latitude and longitude, which we’ll use as arguments in the second function `getCurrentWeather`. It uses the [Open Meteo API](https://open-meteo.com/) to get the current weather data, like this:
+
+```js
+async function getCurrentWeather(latitude, longitude) {
+  const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&hourly=apparent_temperature`;
+  const response = await fetch(url);
+  const weatherData = await response.json();
+  return weatherData;
+}
+```
+
+## Describing our functions for OpenAI
+
+For OpenAI to understand the purpose of these functions, we need to describe them using a specific schema. We'll create an array called `tools` that contains one object per function. Each object will have two keys: `type` and `function`, and the `function` key has three subkeys: `name`, `description`, and `parameters`.
+
+```js
+const tools = [
+  {
+    type: "function",
+    function: {
+      name: "getCurrentWeather",
+      description: "Get the current weather in a given location",
+      parameters: {
+        type: "object",
+        properties: {
+          latitude: {
+            type: "string",
+          },
+          longitude: {
+            type: "string",
+          },
+        },
+        required: ["longitude", "latitude"],
+      },
+    }
+  },
+  {
+    type: "function",
+    function: {
+      name: "getLocation",
+      description: "Get the user's location based on their IP address",
+      parameters: {
+        type: "object",
+        properties: {},
+      },
+    }
+  },
+];
+```
+
+## Setting up the messages array
+
+We also need to define a `messages` array. This will keep track of all of the messages back and forth between our app and OpenAI.
+
+The first object in the array should always have the `role` property set to `"system"`, which tells OpenAI that this is how we want it to behave.
+
+```js
+const messages = [
+  {
+    role: "system",
+    content:
+      "You are a helpful assistant. Only use the functions you have been provided with.",
+  },
+];
+```
+
+## Creating the agent function
+
+We are now ready to build the logic of our app, which lives in the `agent` function. It is asynchronous and takes one argument: the `userInput`.
+
+We start by pushing the `userInput` to the messages array. This time, we set the `role` to `"user"`, so that OpenAI knows that this is the input from the user.
+
+```js
+async function agent(userInput) {
+  messages.push({
+    role: "user",
+    content: userInput,
+  });
+  const response = await openai.chat.completions.create({
+    model: "gpt-4",
+    messages: messages,
+    tools: tools,
+  });
+  console.log(response);
+}
+```
+
+Next, we'll send a request to the Chat completions endpoint via the `chat.completions.create()` method in the Node SDK. This method takes a configuration object as an argument. In it, we'll specify three properties:
+
+- `model` - Decides which AI model we want to use (in our case, GPT-4).
+- `messages` - The entire history of messages between the user and the AI up until this point.
+- `tools` - A list of tools the model may call. Currently, only functions are supported as a tool, so we'll use the `tools` array we created earlier.
+
+## Running our app with a simple input
+
+Let's try to run the `agent` with an input that requires a function call to give a suitable reply.
+ +```js +agent("Where am I located right now?"); +``` + +When we run the code above, we see the response from OpenAI logged out +to the console like this: + +```js +{ + id: "chatcmpl-84ojoEJtyGnR6jRHK2Dl4zTtwsa7O", + object: "chat.completion", + created: 1696159040, + model: "gpt-4-0613", + choices: [{ + index: 0, + message: { + role: "assistant", + content: null, + tool_calls: [ + id: "call_CBwbo9qoXUn1kTR5pPuv6vR1", + type: "function", + function: { + name: "getLocation", + arguments: "{}" + } + ] + }, + logprobs: null, + finish_reason: "tool_calls" // OpenAI wants us to call a function + }], + usage: { + prompt_tokens: 134, + completion_tokens: 6, + total_tokens: 140 + } + system_fingerprint: null +} +``` + +This response tells us that we should call one of our functions, as it contains the following key: `finish_reason: "tool_calls"`. + +The name of the function can be found in the +`response.choices[0].message.tool_calls[0].function.name` key, which is set to +`"getLocation"`. + +## Turning the OpenAI response into a function call + +Now that we have the name of the function as a string, we'll need to +translate that into a function call. To help us with that, we'll gather +both of our functions in an object called `availableTools`: + +```js +const availableTools = { + getCurrentWeather, + getLocation, +}; +``` + +This is handy because we'll be able to access the `getLocation` function +via bracket notation and the string we got back from OpenAI, like this: +`availableTools["getLocation"]`. + +```js +const { finish_reason, message } = response.choices[0]; + +if (finish_reason === "tool_calls" && message.tool_calls) { + const functionName = message.tool_calls[0].function.name; + const functionToCall = availableTools[functionName]; + const functionArgs = JSON.parse(message.tool_calls[0].function.arguments); + const functionArgsArr = Object.values(functionArgs); + const functionResponse = await functionToCall.apply(null, functionArgsArr); + console.log(functionResponse); +} +``` + +We're also grabbing ahold of any arguments OpenAI wants us to pass into +the function: `message.tool_calls[0].function.arguments`. +However, we won't need any arguments for this first function call. + +If we run the code again with the same input +(`"Where am I located right now?"`), we'll see that `functionResponse` +is an object filled with location about where the user is located right +now. In my case, that is Oslo, Norway. + +```js +{ip: "193.212.60.170", network: "193.212.60.0/23", version: "IPv4", city: "Oslo", region: "Oslo County", region_code: "03", country: "NO", country_name: "Norway", country_code: "NO", country_code_iso3: "NOR", country_capital: "Oslo", country_tld: ".no", continent_code: "EU", in_eu: false, postal: "0026", latitude: 59.955, longitude: 10.859, timezone: "Europe/Oslo", utc_offset: "+0200", country_calling_code: "+47", currency: "NOK", currency_name: "Krone", languages: "no,nb,nn,se,fi", country_area: 324220, country_population: 5314336, asn: "AS2119", org: "Telenor Norge AS"} +``` + +We'll add this data to a new item in the `messages` array, where we also +specify the name of the function we called. + +```js +messages.push({ + role: "function", + name: functionName, + content: `The result of the last function was this: ${JSON.stringify( + functionResponse + )} + `, +}); +``` + +Notice that the `role` is set to `"function"`. This tells OpenAI +that the `content` parameter contains the result of the function call +and not the input from the user. 
+ +At this point, we need to send a new request to OpenAI with this updated +`messages` array. However, we don’t want to hard code a new function +call, as our agent might need to go back and forth between itself and +GPT several times until it has found the final answer for the user. + +This can be solved in several different ways, e.g. recursion, a +while-loop, or a for-loop. We'll use a good old for-loop for the sake of +simplicity. + +## Creating the loop + +At the top of the `agent` function, we'll create a loop that lets us run +the entire procedure up to five times. + +If we get back `finish_reason: "tool_calls"` from GPT, we'll just +push the result of the function call to the `messages` array and jump to +the next iteration of the loop, triggering a new request. + +If we get `finish_reason: "stop"` back, then GPT has found a suitable +answer, so we'll return the function and cancel the loop. + +```js +for (let i = 0; i < 5; i++) { + const response = await openai.chat.completions.create({ + model: "gpt-4", + messages: messages, + tools: tools, + }); + const { finish_reason, message } = response.choices[0]; + + if (finish_reason === "tool_calls" && message.tool_calls) { + const functionName = message.tool_calls[0].function.name; + const functionToCall = availableTools[functionName]; + const functionArgs = JSON.parse(message.tool_calls[0].function.arguments); + const functionArgsArr = Object.values(functionArgs); + const functionResponse = await functionToCall.apply(null, functionArgsArr); + + messages.push({ + role: "function", + name: functionName, + content: ` + The result of the last function was this: ${JSON.stringify( + functionResponse + )} + `, + }); + } else if (finish_reason === "stop") { + messages.push(message); + return message.content; + } +} +return "The maximum number of iterations has been met without a suitable answer. Please try again with a more specific input."; +``` + +If we don't see a `finish_reason: "stop"` within our five iterations, +we'll return a message saying we couldn’t find a suitable answer. + +## Running the final app + +At this point, we are ready to try our app! I'll ask the agent to +suggest some activities based on my location and the current weather. + +```js +const response = await agent( + "Please suggest some activities based on my location and the current weather." +); +console.log(response); +``` + +Here's what we see in the console (formatted to make it easier to read): + +```js +Based on your current location in Oslo, Norway and the weather (15°C and snowy), +here are some activity suggestions: + +1. A visit to the Oslo Winter Park for skiing or snowboarding. +2. Enjoy a cosy day at a local café or restaurant. +3. Visit one of Oslo's many museums. The Fram Museum or Viking Ship Museum offer interesting insights into Norway’s seafaring history. +4. Take a stroll in the snowy streets and enjoy the beautiful winter landscape. +5. Enjoy a nice book by the fireplace in a local library. +6. Take a fjord sightseeing cruise to enjoy the snowy landscapes. + +Always remember to bundle up and stay warm. Enjoy your day! +``` + +If we peak under the hood, and log out `response.choices[0].message` in +each iteration of the loop, we'll see that GPT has instructed us to use +both our functions before coming up with an answer. + +First, it tells us to call the `getLocation` function. Then it tells us +to call the `getCurrentWeather` function with +`"longitude": "10.859", "latitude": "59.955"` passed in as the +arguments. 
This is data it got back from the first function call we did. + +```js +{"role":"assistant","content":null,"tool_calls":[{"id":"call_Cn1KH8mtHQ2AMbyNwNJTweEP","type":"function","function":{"name":"getLocation","arguments":"{}"}}]} +{"role":"assistant","content":null,"tool_calls":[{"id":"call_uc1oozJfGTvYEfIzzcsfXfOl","type":"function","function":{"name":"getCurrentWeather","arguments":"{\n\"latitude\": \"10.859\",\n\"longitude\": \"59.955\"\n}"}}]} +``` + +You've now built an AI agent using OpenAI functions and the Node.js SDK! If you're looking for an extra challenge, consider enhancing this app. For example, you could add a function that fetches up-to-date information on events and activities in the user's location. + +Happy coding! + +
+Complete code + +```js +import OpenAI from "openai"; + +const openai = new OpenAI({ + apiKey: process.env.OPENAI_API_KEY, + dangerouslyAllowBrowser: true, +}); + +async function getLocation() { + const response = await fetch("https://ipapi.co/json/"); + const locationData = await response.json(); + return locationData; +} + +async function getCurrentWeather(latitude, longitude) { + const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&hourly=apparent_temperature`; + const response = await fetch(url); + const weatherData = await response.json(); + return weatherData; +} + +const tools = [ + { + type: "function", + function: { + name: "getCurrentWeather", + description: "Get the current weather in a given location", + parameters: { + type: "object", + properties: { + latitude: { + type: "string", + }, + longitude: { + type: "string", + }, + }, + required: ["longitude", "latitude"], + }, + } + }, + { + type: "function", + function: { + name: "getLocation", + description: "Get the user's location based on their IP address", + parameters: { + type: "object", + properties: {}, + }, + } + }, +]; + +const availableTools = { + getCurrentWeather, + getLocation, +}; + +const messages = [ + { + role: "system", + content: `You are a helpful assistant. Only use the functions you have been provided with.`, + }, +]; + +async function agent(userInput) { + messages.push({ + role: "user", + content: userInput, + }); + + for (let i = 0; i < 5; i++) { + const response = await openai.chat.completions.create({ + model: "gpt-4", + messages: messages, + tools: tools, + }); + + const { finish_reason, message } = response.choices[0]; + + if (finish_reason === "tool_calls" && message.tool_calls) { + const functionName = message.tool_calls[0].function.name; + const functionToCall = availableTools[functionName]; + const functionArgs = JSON.parse(message.tool_calls[0].function.arguments); + const functionArgsArr = Object.values(functionArgs); + const functionResponse = await functionToCall.apply( + null, + functionArgsArr + ); + + messages.push({ + role: "function", + name: functionName, + content: ` + The result of the last function was this: ${JSON.stringify( + functionResponse + )} + `, + }); + } else if (finish_reason === "stop") { + messages.push(message); + return message.content; + } + } + return "The maximum number of iterations has been met without a suitable answer. Please try again with a more specific input."; +} + +const response = await agent( + "Please suggest some activities based on my location and the weather." +); + +console.log("response:", response); +``` + +
diff --git a/openai-cookbook_md_files/vector_databases/supabase/semantic-search.mdx b/openai-cookbook_md_files/vector_databases/supabase/semantic-search.mdx new file mode 100644 index 0000000000000000000000000000000000000000..77bb61f23d0a82bf11cb7d4075c516e5ed17c102 --- /dev/null +++ b/openai-cookbook_md_files/vector_databases/supabase/semantic-search.mdx @@ -0,0 +1,276 @@ +# Semantic search using Supabase Vector + +The purpose of this guide is to demonstrate how to store OpenAI embeddings in [Supabase Vector](https://supabase.com/docs/guides/ai) (Postgres + pgvector) for the purposes of semantic search. + +[Supabase](https://supabase.com/docs) is an open-source Firebase alternative built on top of [Postgres](https://en.wikipedia.org/wiki/PostgreSQL), a production-grade SQL database. Since Supabase Vector is built on [pgvector](https://github.com/pgvector/pgvector), you can store your embeddings within the same database that holds the rest of your application data. When combined with pgvector's indexing algorithms, vector search remains [fast at large scales](https://supabase.com/blog/increase-performance-pgvector-hnsw). + +Supabase adds an ecosystem of services and tools to make app development as quick as possible (such as an [auto-generated REST API](https://postgrest.org/)). We'll use these services to store and query embeddings within Postgres. + +This guide covers: + +1. [Setting up your database](#setup-database) +2. [Creating a SQL table](#create-a-vector-table) that can store vector data +3. [Generating OpenAI embeddings](#generate-openai-embeddings) using OpenAI's JavaScript client +4. [Storing the embeddings](#store-embeddings-in-database) in your SQL table using the Supabase JavaScript client +5. [Performing semantic search](#semantic-search) over the embeddings using a Postgres function and the Supabase JavaScript client + +## Setup database + +First head over to https://database.new to provision your Supabase database. This will create a Postgres database on the Supabase cloud platform. Alternatively, you can follow the [local development](https://supabase.com/docs/guides/cli/getting-started) options if you prefer to run your database locally using Docker. + +In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and execute the following SQL to enable pgvector: + +```sql +-- Enable the pgvector extension +create extension if not exists vector; +``` + +> In a production application, the best practice is to use [database migrations](https://supabase.com/docs/guides/cli/local-development#database-migrations) so that all SQL operations are managed within source control. To keep things simple in this guide, we'll execute queries directly in the SQL Editor. If you are building a production app, feel free to move these into a database migration. + +## Create a vector table + +Next we'll create a table to store documents and embeddings. In the SQL Editor, run: + +```sql +create table documents ( + id bigint primary key generated always as identity, + content text not null, + embedding vector (1536) not null +); +``` + +Since Supabase is built on Postgres, we're just using regular SQL here. You can modify this table however you like to better fit your application. If you have existing database tables, you can simply add a new `vector` column to the appropriate table. + +The important piece to understand is the `vector` data type, which is a new data type that became available when we enabled the pgvector extension earlier. 
The size of the vector (1536 here) represents the number of dimensions in the embedding. Since we're using OpenAI's `text-embedding-3-small` model in this example, we set the vector size to 1536. + +Let's go ahead and create a vector index on this table so that future queries remain performant as the table grows: + +```sql +create index on documents using hnsw (embedding vector_ip_ops); +``` + +This index uses the [HNSW](https://supabase.com/docs/guides/ai/vector-indexes/hnsw-indexes) algorithm to index vectors stored in the `embedding` column, and specifically when using the inner product operator (`<#>`). We'll explain more about this operator later when we implement our match function. + +Let's also follow security best practices by enabling row level security on the table: + +```sql +alter table documents enable row level security; +``` + +This will prevent unauthorized access to this table through the auto-generated REST API (more on this shortly). + +## Generate OpenAI embeddings + +This guide uses JavaScript to generate embeddings, but you can easily modify it to use any [language supported by OpenAI](https://platform.openai.com/docs/libraries). + +If you are using JavaScript, feel free to use whichever server-side JavaScript runtime that you prefer (Node.js, Deno, Supabase Edge Functions). + +If you're using Node.js, first install `openai` as a dependency: + +```shell +npm install openai +``` + +then import it: + +```js +import OpenAI from "openai"; +``` + +If you're using Deno or Supabase Edge Functions, you can import `openai` directly from a URL: + +```js +import OpenAI from "https://esm.sh/openai@4"; +``` + +> In this example we import from https://esm.sh which is a CDN that automatically fetches the respective NPM module for you and serves it over HTTP. + +Next we'll generate an OpenAI embedding using [`text-embedding-3-small`](https://platform.openai.com/docs/guides/embeddings/embedding-models): + +```js +const openai = new OpenAI(); + +const input = "The cat chases the mouse"; + +const result = await openai.embeddings.create({ + input, + model: "text-embedding-3-small", +}); + +const [{ embedding }] = result.data; +``` + +Remember that you will need an [OpenAI API key](https://platform.openai.com/api-keys) to interact with the OpenAI API. You can pass this as an environment variable called `OPENAI_API_KEY`, or manually set it when you instantiate your OpenAI client: + +```js +const openai = new OpenAI({ + apiKey: "", +}); +``` + +_**Remember:** Never hard-code API keys in your code. Best practice is to either store it in a `.env` file and load it using a library like [`dotenv`](https://github.com/motdotla/dotenv) or load it from an external key management system._ + +## Store embeddings in database + +Supabase comes with an [auto-generated REST API](https://postgrest.org/) that dynamically builds REST endpoints for each of your tables. This means you don't need to establish a direct Postgres connection to your database - instead you can interact with it simply using by the REST API. This is especially useful in serverless environments that run short-lived processes where re-establishing a database connection every time can be expensive. + +Supabase comes with a number of [client libraries](https://supabase.com/docs#client-libraries) to simplify interaction with the REST API. In this guide we'll use the [JavaScript client library](https://supabase.com/docs/reference/javascript), but feel free to adjust this to your preferred language. 
+ +If you're using Node.js, install `@supabase/supabase-js` as a dependency: + +```shell +npm install @supabase/supabase-js +``` + +then import it: + +```js +import { createClient } from "@supabase/supabase-js"; +``` + +If you're using Deno or Supabase Edge Functions, you can import `@supabase/supabase-js` directly from a URL: + +```js +import { createClient } from "https://esm.sh/@supabase/supabase-js@2"; +``` + +Next we'll instantiate our Supabase client and configure it so that it points to your Supabase project. In this guide we'll store a reference to your Supabase URL and key in a `.env` file, but feel free to modify this based on how your application handles configuration. + +If you are using Node.js or Deno, add your Supabase URL and service role key to a `.env` file. If you are using the cloud platform, you can find these from your Supabase dashboard [settings page](https://supabase.com/dashboard/project/_/settings/api). If you're running Supabase locally, you can find these by running `npx supabase status` in a terminal. + +_.env_ + +``` +SUPABASE_URL= +SUPABASE_SERVICE_ROLE_KEY= +``` + +If you are using Supabase Edge Functions, these environment variables are automatically injected into your function for you so you can skip the above step. + +Next we'll pull these environment variables into our app. + +In Node.js, install the `dotenv` dependency: + +```shell +npm install dotenv +``` + +And retrieve the environment variables from `process.env`: + +```js +import { config } from "dotenv"; + +// Load .env file +config(); + +const supabaseUrl = process.env["SUPABASE_URL"]; +const supabaseServiceRoleKey = process.env["SUPABASE_SERVICE_ROLE_KEY"]; +``` + +In Deno, load the `.env` file using the `dotenv` standard library: + +```js +import { load } from "https://deno.land/std@0.208.0/dotenv/mod.ts"; + +// Load .env file +const env = await load(); + +const supabaseUrl = env["SUPABASE_URL"]; +const supabaseServiceRoleKey = env["SUPABASE_SERVICE_ROLE_KEY"]; +``` + +In Supabase Edge Functions, simply load the injected environment variables directly: + +```js +const supabaseUrl = Deno.env.get("SUPABASE_URL"); +const supabaseServiceRoleKey = Deno.env.get("SUPABASE_SERVICE_ROLE_KEY"); +``` + +Next let's instantiate our `supabase` client: + +```js +const supabase = createClient(supabaseUrl, supabaseServiceRoleKey, { + auth: { persistSession: false }, +}); +``` + +From here we use the `supabase` client to insert our text and embedding (generated earlier) into the database: + +```js +const { error } = await supabase.from("documents").insert({ + content: input, + embedding, +}); +``` + +> In production, best practice would be to check the response `error` to see if there were any problems inserting the data and handle it accordingly. + +## Semantic search + +Finally let's perform semantic search over the embeddings in our database. At this point we'll assume your `documents` table has been filled with multiple records that we can search over. + +Let's create a match function in Postgres that performs the semantic search query. 
Execute the following in the [SQL Editor](https://supabase.com/dashboard/project/_/sql/new):
+
+```sql
+create function match_documents (
+  query_embedding vector (1536),
+  match_threshold float
+)
+returns setof documents
+language plpgsql
+as $$
+begin
+  return query
+  select *
+  from documents
+  where documents.embedding <#> query_embedding < -match_threshold
+  order by documents.embedding <#> query_embedding;
+end;
+$$;
+```
+
+This function accepts a `query_embedding` which represents the embedding generated from the search query text (more on this shortly). It also accepts a `match_threshold` which specifies how similar the document embeddings have to be in order for `query_embedding` to count as a match.
+
+Inside the function we implement the query which does two things:
+
+- Filters the documents to only include those whose embeddings match within the above `match_threshold`. Since the `<#>` operator performs the negative inner product (versus positive inner product), we negate the similarity threshold before comparing. This means a `match_threshold` of 1 is most similar, and -1 is most dissimilar.
+- Orders the documents by negative inner product (`<#>`) ascending. This allows us to retrieve documents that match closest first.
+
+> Since OpenAI embeddings are normalized, we opted to use inner product (`<#>`) because it is slightly more performant than other operators like cosine distance (`<=>`). It is important to note, though, that this only works because the embeddings are normalized - if they weren't, cosine distance should be used.
+
+Now we can call this function from our application using the `supabase.rpc()` method:
+
+```js
+const query = "What does the cat chase?";
+
+// First create an embedding on the query itself
+const result = await openai.embeddings.create({
+  input: query,
+  model: "text-embedding-3-small",
+});
+
+const [{ embedding }] = result.data;
+
+// Then use this embedding to search for matches
+const { data: documents, error: matchError } = await supabase
+  .rpc("match_documents", {
+    query_embedding: embedding,
+    match_threshold: 0.8,
+  })
+  .select("content")
+  .limit(5);
+```
+
+In this example, we set the match threshold to 0.8. Adjust this threshold based on what works best with your data.
+
+Note that since `match_documents` returns a set of `documents`, we can treat this `rpc()` like a regular table query. Specifically this means we can chain additional commands to this query, like `select()` and `limit()`. Here we select just the columns we care about from the `documents` table (`content`), and we limit the number of documents returned (max 5 in this example).
+
+At this point you have a list of documents that matched the query based on semantic relationship, ordered by most similar first.
+
+## Next steps
+
+You can use this example as the foundation for other semantic search techniques, like retrieval augmented generation (RAG).
+
+For more information on OpenAI embeddings, read the [Embedding](https://platform.openai.com/docs/guides/embeddings) docs.
+
+For more information on Supabase Vector, read the [AI & Vector](https://supabase.com/docs/guides/ai) docs.
diff --git a/trl_md_files/alignprop_trainer.mdx b/trl_md_files/alignprop_trainer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..205bcb4f70084d046bb5af464a0541df8e76e98f --- /dev/null +++ b/trl_md_files/alignprop_trainer.mdx @@ -0,0 +1,91 @@ +# Aligning Text-to-Image Diffusion Models with Reward Backpropagation + +## The why + +If your reward function is differentiable, directly backpropagating gradients from the reward models to the diffusion model is significantly more sample and compute efficient (25x) than doing policy gradient algorithm like DDPO. +AlignProp does full backpropagation through time, which allows updating the earlier steps of denoising via reward backpropagation. + +
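+
+As a purely schematic sketch of that idea (this is not the TRL implementation; the `unet`, `scheduler`, `vae` and `reward_fn` arguments are assumed diffusers-style components, and classifier-free guidance is omitted):
+
+```python
+# Hypothetical sketch of reward backpropagation with truncated backprop through time.
+import torch
+
+def reward_backprop_step(unet, scheduler, vae, reward_fn, prompt_embeds, optimizer, k_last=1):
+    # Assumes scheduler.set_timesteps(...) has already been called.
+    latents = torch.randn(1, 4, 64, 64, device=prompt_embeds.device)
+    num_steps = len(scheduler.timesteps)
+    for i, t in enumerate(scheduler.timesteps):
+        # Keep the autograd graph only for the last k denoising steps (truncated backprop).
+        keep_graph = i >= num_steps - k_last
+        with torch.set_grad_enabled(keep_graph):
+            noise_pred = unet(latents, t, encoder_hidden_states=prompt_embeds).sample
+            latents = scheduler.step(noise_pred, t, latents).prev_sample
+        if not keep_graph:
+            latents = latents.detach()
+    image = vae.decode(latents / vae.config.scaling_factor).sample  # differentiable decode
+    loss = -reward_fn(image).mean()  # maximize the differentiable reward
+    loss.backward()
+    optimizer.step()
+    optimizer.zero_grad()
+```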
+
+## Getting started with `examples/scripts/alignprop.py`
+
+The `alignprop.py` script is a working example of using the `AlignProp` trainer to finetune a Stable Diffusion model. This example explicitly configures a small subset of the overall parameters associated with the config object (`AlignPropConfig`).
+
+**Note:** one A100 GPU is recommended to get this running. For a lower-memory setting, consider setting `truncated_backprop_rand` to False. With default settings this will do truncated backpropagation with K=1.
+
+Almost every configuration parameter has a default. There is only one commandline flag argument that is required of the user to get things up and running. The user is expected to have a [huggingface user access token](https://huggingface.co./docs/hub/security-tokens) that will be used to upload the model post finetuning to the Hugging Face Hub. The following bash command gets things running:
+
+```bash
+python alignprop.py --hf_user_access_token
+```
+
+To obtain the documentation of `alignprop.py`, please run `python alignprop.py --help`
+
+The following are things to keep in mind in general while configuring the trainer, beyond the use case of the example script (the code checks these for you as well):
+
+- The configurable randomized truncation range (`--alignprop_config.truncated_rand_backprop_minmax=(0,50)`): the first number should be greater than or equal to 0, while the second number should be less than or equal to the number of diffusion timesteps (`sample_num_steps`)
+- The configurable truncation backprop absolute step (`--alignprop_config.truncated_backprop_timestep=49`): the number should be less than the number of diffusion timesteps (`sample_num_steps`); it only matters when `truncated_backprop_rand` is set to False
+
+## Setting up the image logging hook function
+
+Expect the function to be given a dictionary with keys
+```python
+['image', 'prompt', 'prompt_metadata', 'rewards']
+
+```
+and `image`, `prompt`, `prompt_metadata`, `rewards` are batched.
+You are free to log however you want; the use of `wandb` or `tensorboard` is recommended.
+
+### Key terms
+
+- `rewards` : The reward/score is a numerical value associated with the generated image and is key to steering the RL process
+- `prompt` : The prompt is the text that is used to generate the image
+- `prompt_metadata` : The prompt metadata is the metadata associated with the prompt. A situation where this will not be empty is when the reward model comprises a [`FLAVA`](https://huggingface.co./docs/transformers/model_doc/flava) setup where questions and ground answers (linked to the generated image) are expected with the generated image (See here: https://github.com/kvablack/ddpo-pytorch/blob/main/ddpo_pytorch/rewards.py#L45)
+- `image` : The image generated by the Stable Diffusion model
+
+Example code for logging sampled images with `wandb` is given below.
+
+```python
+# for logging these images to wandb
+import numpy as np
+from PIL import Image
+
+
+def image_outputs_hook(image_data, global_step, accelerate_logger):
+    # For the sake of this example, we only care about the last batch
+    # hence we extract the last element of the list
+    result = {}
+    images, prompts, rewards = [image_data['images'], image_data['prompts'], image_data['rewards']]
+    for i, image in enumerate(images):
+        pil = Image.fromarray(
+            (image.cpu().numpy().transpose(1, 2, 0) * 255).astype(np.uint8)
+        )
+        pil = pil.resize((256, 256))
+        result[f"{prompts[i]:.25} | {rewards[i]:.2f}"] = [pil]
+    accelerate_logger.log_images(
+        result,
+        step=global_step,
+    )
+
+```
+
+### Using the finetuned model
+
+Assuming you're done with all the epochs and have pushed your model up to the hub, you can use the finetuned model as follows:
+
+```python
+from diffusers import StableDiffusionPipeline
+pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
+pipeline.to("cuda")
+
+pipeline.load_lora_weights('mihirpd/alignprop-trl-aesthetics')
+
+prompts = ["squirrel", "crab", "starfish", "whale", "sponge", "plankton"]
+results = pipeline(prompts)
+
+for prompt, image in zip(prompts, results.images):
+    image.save(f"dump/{prompt}.png")
+```
+
+## Credits
+
+This work is heavily influenced by the repo [here](https://github.com/mihirp1998/AlignProp/) and the associated paper [Aligning Text-to-Image Diffusion Models with Reward Backpropagation by Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, Katerina Fragkiadaki](https://huggingface.co./papers/2310.03739).
diff --git a/trl_md_files/bco_trainer.mdx b/trl_md_files/bco_trainer.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..ae094c142b238355ec4811229aeaef8b1106f89c
--- /dev/null
+++ b/trl_md_files/bco_trainer.mdx
@@ -0,0 +1,139 @@
+# BCO Trainer
+
+TRL supports Binary Classifier Optimization (BCO).
+The [BCO](https://huggingface.co./papers/2404.04656) authors train a binary classifier whose logit serves as a reward so that the classifier maps {prompt, chosen completion} pairs to 1 and {prompt, rejected completion} pairs to 0.
+For a full example, have a look at [`examples/scripts/bco.py`].
+
+## Expected dataset format
+
+The BCO trainer expects a very specific format for the dataset as it does not require pairwise preferences. Since the model will be trained to directly optimize examples that consist of a prompt, model completion, and a label to indicate whether the completion is "good" or "bad", we expect a dataset with the following columns:
+
+- `prompt`
+- `completion`
+- `label`
+
+for example:
+
+```
+bco_dataset_dict = {
+    "prompt": [
+        "Hey, hello",
+        "How are you",
+        "What is your name?",
+        "What is your name?",
+        "Which is the best programming language?",
+        "Which is the best programming language?",
+        "Which is the best programming language?",
+    ],
+    "completion": [
+        "hi nice to meet you",
+        "leave me alone",
+        "I don't have a name",
+        "My name is Mary",
+        "Python",
+        "C++",
+        "Java",
+    ],
+    "label": [
+        True,
+        False,
+        False,
+        True,
+        True,
+        False,
+        False,
+    ],
+}
+```
+
+where the `prompt` contains the context inputs, `completion` contains the corresponding responses and `label` contains the corresponding flag that indicates if the generated completion is desired (`True`) or undesired (`False`).
+A prompt can have multiple responses and this is reflected in the entries being repeated in the dictionary's value arrays. It is required that the dataset contains at least one desirable and one undesirable completion.
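+
+As a small sketch (assuming the Hugging Face `datasets` library is installed), the dictionary above can be turned directly into a training dataset:
+
+```python
+# Minimal sketch: building a train_dataset for the BCOTrainer from the dictionary above.
+from datasets import Dataset
+
+train_dataset = Dataset.from_dict(bco_dataset_dict)
+print(train_dataset)  # features: ['prompt', 'completion', 'label'], num_rows: 7
+```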
+ + +## Expected model format +The BCO trainer expects a model of `AutoModelForCausalLM`, compared to PPO that expects `AutoModelForCausalLMWithValueHead` for the value function. + +## Using the `BCOTrainer` + +For a detailed example have a look at the `examples/scripts/bco.py` script. At a high level we need to initialize the `BCOTrainer` with a `model` we wish to train and a reference `ref_model` which we will use to calculate the implicit rewards of the preferred and rejected response. + +The `beta` refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries listed above. Note that the `model` and `ref_model` need to have the same architecture (ie decoder only or encoder-decoder). + + + +```py +training_args = BCOConfig( + beta=0.1, +) + +bco_trainer = BCOTrainer( + model, + model_ref, + args=training_args, + train_dataset=train_dataset, + tokenizer=tokenizer, +) +``` +After this one can then call: + +```py +bco_trainer.train() +``` + +## Underlying Distribution matching (UDM) + +In practical scenarios, the thumbs-up and thumbs-down datasets are likely to have divergent underlying distributions of prompts. +Consider an LLM deployed for user feedback: if the model excels in writing tasks but underperforms in coding, the thumbs-up dataset will be dominated by writing-related prompts, while the thumbs-down dataset will contain mostly coding-related prompts. +If the prompts in your desired and undesired datasets differ a lot, it is useful to enable UDM. + +Choose an embedding model and tokenizer: + +```py +embedding_model = AutoModel.from_pretrained(your_model_id) +embedding_tokenizer = AutoTokenizer.from_pretrained(your_model_id) + +# customize this function depending on your embedding model +def embed_prompt(input_ids, attention_mask, model): + outputs = model(input_ids=input_ids, attention_mask=attention_mask) + return outputs.last_hidden_state.mean(dim=1) + +embedding_model = Accelerator().prepare_model(self.embedding_model) +embedding_func = partial(embed_prompt, model=embedding_model) +``` + +Set `prompt_sample_size` to defined how many prompts are selected to train the UDM classifier and start the training with the provided embedding function: + +```py +training_args = BCOConfig( + beta=0.1, + prompt_sample_size=512, +) + +bco_trainer = BCOTrainer( + model, + model_ref, + args=training_args, + train_dataset=train_dataset, + tokenizer=tokenizer, + embedding_func=embedding_func, + embedding_tokenizer=self.embedding_tokenizer, +) + +bco_trainer.train() +``` + +### For Mixture of Experts Models: Enabling the auxiliary loss + +MOEs are the most efficient if the load is about equally distributed between experts. +To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss. + +This option is enabled by setting `output_router_logits=True` in the model config (e.g. MixtralConfig). +To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: 0.001). 
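+
+A minimal sketch of what enabling this looks like when loading the policy model (the checkpoint name is illustrative, and the config attributes assume a Mixtral-style architecture):
+
+```python
+# Minimal sketch: turning on the router auxiliary loss for a MoE policy model.
+from transformers import AutoModelForCausalLM
+
+model = AutoModelForCausalLM.from_pretrained(
+    "mistralai/Mixtral-8x7B-v0.1",  # illustrative MoE checkpoint
+    output_router_logits=True,      # expose router logits so the aux loss is computed
+    router_aux_loss_coef=0.001,     # weight of the load-balancing loss in the total loss
+)
+```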
+ +## BCOTrainer + +[[autodoc]] BCOTrainer + +## BCOConfig + +[[autodoc]] BCOConfig \ No newline at end of file diff --git a/trl_md_files/best_of_n.mdx b/trl_md_files/best_of_n.mdx new file mode 100644 index 0000000000000000000000000000000000000000..9dd56aba2ce4818ffcf09f4e5354c825d63000e1 --- /dev/null +++ b/trl_md_files/best_of_n.mdx @@ -0,0 +1,72 @@ +# Best of N sampling: Alternative ways to get better model output without RL based fine-tuning + +Within the extras module is the `best-of-n` sampler class that serves as an alternative method of generating better model output. +As to how it fares against the RL based fine-tuning, please look in the `examples` directory for a comparison example + +## Usage + +To get started quickly, instantiate an instance of the class with a model, a length sampler, a tokenizer and a callable that serves as a proxy reward pipeline that outputs reward scores for input queries + +```python + +from transformers import pipeline, AutoTokenizer +from trl import AutoModelForCausalLMWithValueHead +from trl.core import LengthSampler +from trl.extras import BestOfNSampler + +ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(ref_model_name) +reward_pipe = pipeline("sentiment-analysis", model=reward_model, device=device) +tokenizer = AutoTokenizer.from_pretrained(ref_model_name) +tokenizer.pad_token = tokenizer.eos_token + + +# callable that takes a list of raw text and returns a list of corresponding reward scores +def queries_to_scores(list_of_strings): + return [output["score"] for output in reward_pipe(list_of_strings)] + +best_of_n = BestOfNSampler(model, tokenizer, queries_to_scores, length_sampler=output_length_sampler) + + +``` + +And assuming you have a list/tensor of tokenized queries, you can generate better output by calling the `generate` method + +```python + +best_of_n.generate(query_tensors, device=device, **gen_kwargs) + +``` +The default sample size is 4, but you can change it at the time of instance initialization like so + +```python + +best_of_n = BestOfNSampler(model, tokenizer, queries_to_scores, length_sampler=output_length_sampler, sample_size=8) + +``` + +The default output is the result of taking the top scored output for each query, but you can change it to top 2 and so on by passing the `n_candidates` argument at the time of instance initialization + +```python + +best_of_n = BestOfNSampler(model, tokenizer, queries_to_scores, length_sampler=output_length_sampler, n_candidates=2) + +``` + +There is the option of setting the generation settings (like `temperature`, `pad_token_id`) at the time of instance creation as opposed to when calling the `generate` method. 
+This is done by passing a `GenerationConfig` from the `transformers` library at the time of initialization
+
+```python
+
+from transformers import GenerationConfig
+
+generation_config = GenerationConfig(min_length=-1, top_k=0, top_p=1.0, do_sample=True, pad_token_id=tokenizer.eos_token_id)
+
+best_of_n = BestOfNSampler(model, tokenizer, queries_to_scores, length_sampler=output_length_sampler, generation_config=generation_config)
+
+best_of_n.generate(query_tensors, device=device)
+
+```
+
+Furthermore, at the time of initialization you can set the seed to control the repeatability of the generation process, as well as the number of samples to generate for each query.
+
diff --git a/trl_md_files/callbacks.mdx b/trl_md_files/callbacks.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..e4d26797c29347f39ef3b0c1a743c22500cdbaa0
--- /dev/null
+++ b/trl_md_files/callbacks.mdx
@@ -0,0 +1,13 @@
+# Callbacks
+
+## SyncRefModelCallback
+
+[[autodoc]] SyncRefModelCallback
+
+## RichProgressCallback
+
+[[autodoc]] RichProgressCallback
+
+## WinRateCallback
+
+[[autodoc]] WinRateCallback
diff --git a/trl_md_files/clis.mdx b/trl_md_files/clis.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..68c4fcfbf6da066043a482252858a47e0c6205c5
--- /dev/null
+++ b/trl_md_files/clis.mdx
@@ -0,0 +1,119 @@
+# Command Line Interfaces (CLIs)
+
+You can use TRL to fine-tune your Language Model with Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO), or even chat with your model, using the TRL CLIs.
+
+Currently supported CLIs are:
+
+- `trl sft`: fine-tune an LLM on a text/instruction dataset
+- `trl dpo`: fine-tune an LLM with DPO on a preference dataset
+- `trl chat`: quickly spin up an LLM fine-tuned for chatting
+
+## Fine-tuning with the CLI
+
+Before getting started, pick up a Language Model from Hugging Face Hub. Supported models can be found with the filter "text-generation" within models. Also make sure to pick up a relevant dataset for your task.
+
+Before using the `sft` or `dpo` commands make sure to run:
+```bash
+accelerate config
+```
+and pick the right configuration for your training setup (single / multi-GPU, DeepSpeed, etc.). Make sure to complete all steps of `accelerate config` before running any CLI command.
+
+We also recommend passing a YAML config file to configure your training protocol. Below is a simple example of a YAML file that you can use for training your models with the `trl sft` command.
+
+```yaml
+model_name_or_path:
+  trl-internal-testing/tiny-random-LlamaForCausalLM
+dataset_name:
+  imdb
+dataset_text_field:
+  text
+report_to:
+  none
+learning_rate:
+  0.0001
+lr_scheduler_type:
+  cosine
+```
+
+Save that config in a `.yaml` and get started immediately! An example CLI config is available as `examples/cli_configs/example_config.yaml`. Note you can overwrite the arguments from the config file by explicitly passing them to the CLI, e.g. from the root folder:
+
+```bash
+trl sft --config examples/cli_configs/example_config.yaml --output_dir test-trl-cli --lr_scheduler_type cosine_with_restarts
+```
+
+This will force-use `cosine_with_restarts` for `lr_scheduler_type`.
+
+### Supported Arguments
+
+We support all arguments from `transformers.TrainingArguments`. For loading your model, we also support all arguments from `~trl.ModelConfig`:
+
+[[autodoc]] ModelConfig
+
+You can pass any of these arguments either to the CLI or the YAML file.
+
+### Supervised Fine-tuning (SFT)
+
+Follow the basic instructions above and run `trl sft --output_dir <*args>`:
+
+```bash
+trl sft --model_name_or_path facebook/opt-125m --dataset_name imdb --output_dir opt-sft-imdb
+```
+
+The SFT CLI is based on the `examples/scripts/sft.py` script.
+
+### Direct Preference Optimization (DPO)
+
+To use the DPO CLI, you need to have a dataset in the TRL format such as
+
+* TRL's Anthropic HH dataset: https://huggingface.co./datasets/trl-internal-testing/hh-rlhf-helpful-base-trl-style
+* TRL's OpenAI TL;DR summarization dataset: https://huggingface.co./datasets/trl-internal-testing/tldr-preference-trl-style
+
+These datasets always have at least three columns `prompt, chosen, rejected`:
+
+* `prompt` is a list of strings.
+* `chosen` is the chosen response in [chat format](https://huggingface.co./docs/transformers/main/en/chat_templating)
+* `rejected` is the rejected response in [chat format](https://huggingface.co./docs/transformers/main/en/chat_templating)
+
+
+For a quick start, you can run the following command:
+
+```bash
+trl dpo --model_name_or_path facebook/opt-125m --output_dir trl-hh-rlhf --dataset_name trl-internal-testing/hh-rlhf-helpful-base-trl-style
+```
+
+
+The DPO CLI is based on the `examples/scripts/dpo.py` script.
+
+
+#### Custom preference dataset
+
+Format the dataset into TRL format (you can adapt `examples/datasets/anthropic_hh.py`):
+
+```bash
+python examples/datasets/anthropic_hh.py --push_to_hub --hf_entity your-hf-org
+```
+
+## Chat interface
+
+The chat CLI lets you quickly load the model and talk to it. Simply run the following:
+
+```bash
+trl chat --model_name_or_path Qwen/Qwen1.5-0.5B-Chat
+```
+
+> [!TIP]
+> To use the chat CLI with the developer installation, you must run `make dev`
+>
+
+Note that the chat interface relies on the tokenizer's [chat template](https://huggingface.co./docs/transformers/chat_templating) to format the inputs for the model. Make sure your tokenizer has a chat template defined.
+
+Besides talking to the model there are a few commands you can use:
+
+- **clear**: clears the current conversation and starts a new one
+- **example {NAME}**: loads the example named `{NAME}` from the config and uses it as the user input
+- **set {SETTING_NAME}={SETTING_VALUE};**: changes the system prompt or generation settings (multiple settings are separated by a ';').
+- **reset**: same as clear, but also resets the generation configs to defaults if they have been changed by **set**
+- **save {SAVE_NAME} (optional)**: saves the current chat and settings to a file, by default `./chat_history/{MODEL_NAME}/chat_{DATETIME}.yaml`, or to `{SAVE_NAME}` if provided
+- **exit**: closes the interface
+
+The default examples are defined in `examples/scripts/config/default_chat_config.yaml`, but you can pass your own with `--config CONFIG_FILE`, where you can also specify the default generation parameters.
diff --git a/trl_md_files/cpo_trainer.mdx b/trl_md_files/cpo_trainer.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..05c0f40cf967aeda2059953ce5e261b8f507ab9b
--- /dev/null
+++ b/trl_md_files/cpo_trainer.mdx
@@ -0,0 +1,113 @@
+# CPO Trainer
+
+Contrastive Preference Optimization (CPO) was introduced in the paper [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co./papers/2401.08417) by Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim.
At a high-level, CPO trains models to +avoid generating adequate, but not perfect translations in Machine Translation (MT) tasks. However, CPO is a general approximation to the DPO loss and can be applied to other domains like chat. + +CPO aims to mitigate two fundamental shortcomings of SFT. First, SFT’s methodology of minimizing the discrepancy between predicted outputs and gold-standard references inherently caps model performance at the quality level of the training data. Secondly, SFT lacks a mechanism to prevent the model from rejecting mistakes in translations. The CPO objective is derived from the DPO objective. + +## SimPO +The [SimPO](https://huggingface.co./papers/2405.14734) method is also implemented in the `CPOTrainer`. SimPO is an alternative loss that adds a reward margin, allows for length normalization, and does not use BC regularization. To use this loss, we can use SimPO easily by turning on `loss_type="simpo"` and `cpo_alpha=0` in the `CPOConfig`. + +## CPO-SimPO +We also offer the combined use of CPO and SimPO, which enables more stable training and improved performance. Learn more details at [CPO-SimPO Github](https://github.com/fe1ixxu/CPO_SIMPO). To use this method, simply enable SimPO by setting `loss_type="simpo"` and a non-zero `cpo_alpha` in the CPOConfig. + +## Expected dataset format + +The CPO trainer expects a format identical to the DPO trainer, which should include three entries. These entries should be named as follows: + +- `prompt` +- `chosen` +- `rejected` + +for example: + +```py +cpo_dataset_dict = { + "prompt": [ + "hello", + "how are you", + "What is your name?", + "What is your name?", + "Which is the best programming language?", + "Which is the best programming language?", + "Which is the best programming language?", + ], + "chosen": [ + "hi nice to meet you", + "I am fine", + "My name is Mary", + "My name is Mary", + "Python", + "Python", + "Java", + ], + "rejected": [ + "leave me alone", + "I am not fine", + "Whats it to you?", + "I dont have a name", + "Javascript", + "C++", + "C++", + ], +} +``` +where the `prompt` contains the context inputs, `chosen` contains the corresponding chosen responses and `rejected` contains the corresponding negative (rejected) responses. As can be seen a prompt can have multiple responses and this is reflected in the entries being repeated in the dictionary's value arrays. + +## Expected model format +The CPO trainer expects a model of `AutoModelForCausalLM`, compared to PPO that expects `AutoModelForCausalLMWithValueHead` for the value function. + +## Using the `CPOTrainer` +For a detailed example have a look at the `examples/scripts/cpo.py` script. At a high level we need to initialize the `CPOTrainer` with a `model` we wish to train. **Note that CPOTrainer eliminates the need to use the reference model, simplifying the optimization process.** The `beta` refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries listed above. + +```py +cpo_config = CPOConfig( + beta=0.1, +) + +cpo_trainer = CPOTrainer( + model, + args=cpo_config, + train_dataset=train_dataset, + tokenizer=tokenizer, +) +``` +After this one can then call: + +```py +cpo_trainer.train() +``` + +## Loss functions + +Given the preference data, the `CPOTrainer` uses the sigmoid loss on the normalized likelihood via the `logsigmoid` to fit a logistic regression. 
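+
+As a minimal sketch (the sigmoid loss described above is the default; the alternative losses described below are selected the same way), the loss can be chosen explicitly through the `loss_type` field of the `CPOConfig`:
+
+```py
+cpo_config = CPOConfig(
+    beta=0.1,
+    loss_type="sigmoid",  # the default; variants such as "simpo" are selected here as well
+)
+```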
+
+The [RSO](https://huggingface.co./papers/2309.06657) authors propose to use a hinge loss on the normalized likelihood from the [SLiC](https://huggingface.co./papers/2305.10425) paper. The `CPOTrainer` can be switched to this loss via the `loss_type="hinge"` argument, and the `beta` in this case is the reciprocal of the margin.
+
+The [IPO](https://huggingface.co./papers/2310.12036) authors provide a deeper theoretical understanding of the CPO algorithms, identify an issue with overfitting, and propose an alternative loss which can be used via the `loss_type="ipo"` argument to the trainer. Note that the `beta` parameter is the reciprocal of the gap between the log-likelihood ratios of the chosen vs the rejected completion pair, and thus the smaller the `beta` the larger this gap is. As per the paper, the loss is averaged over the log-likelihoods of the completion (unlike CPO, which only sums them).
+
+### For Mixture of Experts Models: Enabling the auxiliary loss
+
+MOEs are the most efficient if the load is about equally distributed between experts.
+To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.
+
+This option is enabled by setting `output_router_logits=True` in the model config (e.g. `MixtralConfig`).
+To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: 0.001).
+
+## Logging
+
+While training and evaluating we record the following reward metrics:
+
+* `rewards/chosen`: the mean log probabilities of the policy model for the chosen responses scaled by beta
+* `rewards/rejected`: the mean log probabilities of the policy model for the rejected responses scaled by beta
+* `rewards/accuracies`: mean of how often the chosen rewards are > than the corresponding rejected rewards
+* `rewards/margins`: the mean difference between the chosen and corresponding rejected rewards
+* `nll_loss`: the mean negative log likelihood loss of the policy model for the chosen responses
+
+## CPOTrainer
+
+[[autodoc]] CPOTrainer
+
+## CPOConfig
+
+[[autodoc]] CPOConfig
\ No newline at end of file
diff --git a/trl_md_files/customization.mdx b/trl_md_files/customization.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a576890734522a5684a3597ac576151d49d61478
--- /dev/null
+++ b/trl_md_files/customization.mdx
@@ -0,0 +1,216 @@
+# Training customization
+
+TRL is designed with modularity in mind so that users are able to efficiently customize the training loop for their needs. Below are some examples of how you can apply and test different techniques.
+
+## Train on multiple GPUs / nodes
+
+The trainers in TRL use 🤗 Accelerate to enable distributed training across multiple GPUs or nodes. To do so, first create an 🤗 Accelerate config file by running
+
+```bash
+accelerate config
+```
+
+and answering the questions according to your multi-GPU / multi-node setup. You can then launch distributed training by running:
+
+```bash
+accelerate launch your_script.py
+```
+
+We also provide config files in the [examples folder](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs) that can be used as templates.
To use these templates, simply pass the path to the config file when launching a job, e.g.: + +```shell +accelerate launch --config_file=examples/accelerate_configs/multi_gpu.yaml --num_processes {NUM_GPUS} path_to_script.py --all_arguments_of_the_script +``` + +Refer to the [examples page](https://github.com/huggingface/trl/tree/main/examples) for more details. + +### Distributed training with DeepSpeed + +All of the trainers in TRL can be run on multiple GPUs together with DeepSpeed ZeRO-{1,2,3} for efficient sharding of the optimizer states, gradients, and model weights. To do so, run: + +```shell +accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero{1,2,3}.yaml --num_processes {NUM_GPUS} path_to_your_script.py --all_arguments_of_the_script +``` + +Note that for ZeRO-3, a small tweak is needed to initialize your reward model on the correct device via the `zero3_init_context_manager()` context manager. In particular, this is needed to avoid DeepSpeed hanging after a fixed number of training steps. Here is a snippet of what is involved from the [`sentiment_tuning`](https://github.com/huggingface/trl/blob/main/examples/scripts/ppo.py) example: + +```python +ds_plugin = ppo_trainer.accelerator.state.deepspeed_plugin +if ds_plugin is not None and ds_plugin.is_zero3_init_enabled(): + with ds_plugin.zero3_init_context_manager(enable=False): + sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb", device=device) +else: + sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb", device=device) +``` + +Consult the 🤗 Accelerate [documentation](https://huggingface.co./docs/accelerate/usage_guides/deepspeed) for more information about the DeepSpeed plugin. + + +## Use different optimizers + +By default, the `PPOTrainer` creates a `torch.optim.Adam` optimizer. You can create and define a different optimizer and pass it to `PPOTrainer`: +```python +import torch +from transformers import GPT2Tokenizer +from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead + +# 1. load a pretrained model +model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2') +ref_model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2') +tokenizer = GPT2Tokenizer.from_pretrained('gpt2') + +# 2. define config +ppo_config = {'batch_size': 1, 'learning_rate':1e-5} +config = PPOConfig(**ppo_config) + + +# 2. Create optimizer +optimizer = torch.optim.SGD(model.parameters(), lr=config.learning_rate) + + +# 3. initialize trainer +ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer) +``` + +For memory efficient fine-tuning, you can also pass `Adam8bit` optimizer from `bitsandbytes`: + +```python +import torch +import bitsandbytes as bnb + +from transformers import GPT2Tokenizer +from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead + +# 1. load a pretrained model +model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2') +ref_model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2') +tokenizer = GPT2Tokenizer.from_pretrained('gpt2') + +# 2. define config +ppo_config = {'batch_size': 1, 'learning_rate':1e-5} +config = PPOConfig(**ppo_config) + + +# 2. Create optimizer +optimizer = bnb.optim.Adam8bit(model.parameters(), lr=config.learning_rate) + +# 3. 
initialize trainer
+ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer)
+```
+
+### Use LION optimizer
+
+You can also use the [LION optimizer from Google](https://huggingface.co./papers/2302.06675). First, take the source code of the optimizer definition [here](https://github.com/lucidrains/lion-pytorch/blob/main/lion_pytorch/lion_pytorch.py) and copy it so that you can import the optimizer. Make sure to initialize the optimizer with the trainable parameters only, for more memory-efficient training:
+```python
+# `model` and `config` are the PPO model and PPOConfig defined as in the examples above
+optimizer = Lion(filter(lambda p: p.requires_grad, model.parameters()), lr=config.learning_rate)
+
+...
+ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer)
+```
+We advise you to use the learning rate that you would use for `Adam` divided by 3, as pointed out [here](https://github.com/lucidrains/lion-pytorch#lion---pytorch). We observed an improvement when using this optimizer compared to classic Adam (check the full logs [here](https://wandb.ai/distill-bloom/trl/runs/lj4bheke?workspace=user-younesbelkada)):
+
+ +
+ + +## Add a learning rate scheduler + +You can also play with your training by adding learning rate schedulers! +```python +import torch +from transformers import GPT2Tokenizer +from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead + +# 1. load a pretrained model +model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2') +ref_model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2') +tokenizer = GPT2Tokenizer.from_pretrained('gpt2') + +# 2. define config +ppo_config = {'batch_size': 1, 'learning_rate':1e-5} +config = PPOConfig(**ppo_config) + + +# 2. Create optimizer +optimizer = torch.optim.SGD(model.parameters(), lr=config.learning_rate) +lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9) + +# 3. initialize trainer +ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer, lr_scheduler=lr_scheduler) +``` + +## Memory efficient fine-tuning by sharing layers + +Another tool you can use for more memory efficient fine-tuning is to share layers between the reference model and the model you want to train. +```python +import torch +from transformers import AutoTokenizer +from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead, create_reference_model + +# 1. load a pretrained model +model = AutoModelForCausalLMWithValueHead.from_pretrained('bigscience/bloom-560m') +ref_model = create_reference_model(model, num_shared_layers=6) +tokenizer = AutoTokenizer.from_pretrained('bigscience/bloom-560m') + +# 2. initialize trainer +ppo_config = {'batch_size': 1} +config = PPOConfig(**ppo_config) +ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer) +``` + +## Pass 8-bit reference models + +
+ +Since `trl` supports all key word arguments when loading a model from `transformers` using `from_pretrained`, you can also leverage `load_in_8bit` from `transformers` for more memory efficient fine-tuning. + +Read more about 8-bit model loading in `transformers` [here](https://huggingface.co./docs/transformers/perf_infer_gpu_one#bitsandbytes-integration-for-int8-mixedprecision-matrix-decomposition). + +
+
+```python
+# 0. imports
+# pip install bitsandbytes
+import torch
+from transformers import AutoTokenizer
+from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead
+
+# 1. load a pretrained model
+model = AutoModelForCausalLMWithValueHead.from_pretrained('bigscience/bloom-560m')
+ref_model = AutoModelForCausalLMWithValueHead.from_pretrained('bigscience/bloom-560m', device_map="auto", load_in_8bit=True)
+tokenizer = AutoTokenizer.from_pretrained('bigscience/bloom-560m')
+
+# 2. initialize trainer
+ppo_config = {'batch_size': 1}
+config = PPOConfig(**ppo_config)
+ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)
+```
+
+## Use the CUDA cache optimizer
+
+When training large models, it is better to handle the CUDA cache by clearing it iteratively. To do so, simply pass `optimize_cuda_cache=True` to `PPOConfig`:
+
+```python
+config = PPOConfig(..., optimize_cuda_cache=True)
+```
+
+
+
+## Use score scaling/normalization/clipping
+As suggested by [Secrets of RLHF in Large Language Models Part I: PPO](https://huggingface.co./papers/2307.04964), we support score (aka reward) scaling/normalization/clipping to improve training stability via `PPOConfig`:
+```python
+from trl import PPOConfig
+
+ppo_config = {
+    "use_score_scaling": True,
+    "use_score_norm": True,
+    "score_clip": 0.5,
+}
+config = PPOConfig(**ppo_config)
+```
+
+To run `ppo.py`, you can use the following command:
+```bash
+python examples/scripts/ppo.py --log_with wandb --use_score_scaling --use_score_norm --score_clip 0.5
+```
diff --git a/trl_md_files/ddpo_trainer.mdx b/trl_md_files/ddpo_trainer.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a3e8e719f0cbde99f646aba26d650421db72fecb
--- /dev/null
+++ b/trl_md_files/ddpo_trainer.mdx
@@ -0,0 +1,119 @@
+# Denoising Diffusion Policy Optimization
+## The why
+
+| Before | After DDPO finetuning |
+| --- | --- |
+|
|
| +|
|
| +|
|
|
+
+
+## Getting started with Stable Diffusion finetuning with reinforcement learning
+
+The machinery for finetuning Stable Diffusion models with reinforcement learning makes heavy use of HuggingFace's `diffusers`
+library. Getting started therefore requires a bit of familiarity with the `diffusers` library concepts, mainly two of them: pipelines and schedulers.
+Right out of the box (the `diffusers` library), there is neither a `Pipeline` nor a `Scheduler` instance that is suitable for finetuning with reinforcement learning, so some adjustments need to be made.
+
+This library provides a pipeline interface that must be implemented in order to be used with the `DDPOTrainer`, which is the main machinery for fine-tuning Stable Diffusion with reinforcement learning. **Note: Only the StableDiffusion architecture is supported at this point.**
+There is a default implementation of this interface that you can use out of the box. Assuming the default implementation is sufficient and/or to get things moving, refer to the training example alongside this guide.
+
+The point of the interface is to fuse the pipeline and the scheduler into one object, which keeps the constraints all in one place. The interface was designed with the hope of catering to pipelines and schedulers beyond the examples in this repository and elsewhere at this time of writing. The scheduler step is also a method of this pipeline interface; this may seem redundant given that the raw scheduler is accessible via the interface, but it is the only way to constrain the scheduler step output to an output type befitting of the algorithm at hand (DDPO).
+
+For a more detailed look into the interface and the associated default implementation, go [here](https://github.com/lvwerra/trl/tree/main/trl/models/modeling_sd_base.py)
+
+Note that the default implementation has a LoRA implementation path and a non-LoRA based implementation path. The LoRA flag is enabled by default and can be turned off by passing in the flag to do so. LoRA-based training is faster, and the LoRA-associated model hyperparameters responsible for model convergence aren't as finicky as in non-LoRA based training.
+
+In addition, you are expected to provide a reward function and a prompt function. The reward function is used to evaluate the generated images and the prompt function is used to generate the prompts that are used to generate the images.
+
+## Getting started with `examples/scripts/ddpo.py`
+
+The `ddpo.py` script is a working example of using the `DDPO` trainer to finetune a Stable Diffusion model. This example explicitly configures a small subset of the overall parameters associated with the config object (`DDPOConfig`).
+
+**Note:** one A100 GPU is recommended to get this running. Anything below an A100 will not be able to run this example script, and even if it does run with relatively smaller parameters, the results will most likely be poor.
+
+Almost every configuration parameter has a default. There is only one commandline flag argument that is required of the user to get things up and running. The user is expected to have a [huggingface user access token](https://huggingface.co./docs/hub/security-tokens) that will be used to upload the model post finetuning to HuggingFace hub.
The following bash command gets things running:
+
+```bash
+python ddpo.py --hf_user_access_token
+```
+
+To obtain the documentation of `ddpo.py`, please run `python ddpo.py --help`
+
+The following are things to keep in mind (the code checks this for you as well) in general while configuring the trainer (beyond the use case of using the example script):
+
+- The configurable sample batch size (`--ddpo_config.sample_batch_size=6`) should be greater than or equal to the configurable training batch size (`--ddpo_config.train_batch_size=3`)
+- The configurable sample batch size (`--ddpo_config.sample_batch_size=6`) must be divisible by the configurable train batch size (`--ddpo_config.train_batch_size=3`)
+- The configurable sample batch size (`--ddpo_config.sample_batch_size=6`) must be divisible by both the configurable gradient accumulation steps (`--ddpo_config.train_gradient_accumulation_steps=1`) and the configurable accelerator processes count
+
+## Setting up the image logging hook function
+
+Expect the function to be given a list of lists of the form
+```python
+[[image, prompt, prompt_metadata, rewards, reward_metadata], ...]
+
+```
+where `image`, `prompt`, `prompt_metadata`, `rewards`, `reward_metadata` are batched.
+The last list in the list of lists represents the last sample batch, which is the one you are most likely to want to log. While you are free to log however you want, the use of `wandb` or `tensorboard` is recommended.
+
+### Key terms
+
+- `rewards` : The reward/score is a numerical value associated with the generated image and is key to steering the RL process
+- `reward_metadata` : The reward metadata is the metadata associated with the reward. Think of this as extra information payload delivered alongside the reward
+- `prompt` : The prompt is the text that is used to generate the image
+- `prompt_metadata` : The prompt metadata is the metadata associated with the prompt. A situation where this will not be empty is when the reward model comprises a [`FLAVA`](https://huggingface.co./docs/transformers/model_doc/flava) setup, where questions and ground-truth answers (linked to the generated image) are expected with the generated image (see here: https://github.com/kvablack/ddpo-pytorch/blob/main/ddpo_pytorch/rewards.py#L45)
+- `image` : The image generated by the Stable Diffusion model
+
+Example code for logging sampled images with `wandb` is given below.
+ +```python +# for logging these images to wandb + +def image_outputs_hook(image_data, global_step, accelerate_logger): + # For the sake of this example, we only care about the last batch + # hence we extract the last element of the list + result = {} + images, prompts, _, rewards, _ = image_data[-1] + for i, image in enumerate(images): + pil = Image.fromarray( + (image.cpu().numpy().transpose(1, 2, 0) * 255).astype(np.uint8) + ) + pil = pil.resize((256, 256)) + result[f"{prompts[i]:.25} | {rewards[i]:.2f}"] = [pil] + accelerate_logger.log_images( + result, + step=global_step, + ) + +``` + +### Using the finetuned model + +Assuming you've done with all the epochs and have pushed up your model to the hub, you can use the finetuned model as follows + +```python + +import torch +from trl import DefaultDDPOStableDiffusionPipeline + +pipeline = DefaultDDPOStableDiffusionPipeline("metric-space/ddpo-finetuned-sd-model") + +device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") + +# memory optimization +pipeline.vae.to(device, torch.float16) +pipeline.text_encoder.to(device, torch.float16) +pipeline.unet.to(device, torch.float16) + +prompts = ["squirrel", "crab", "starfish", "whale","sponge", "plankton"] +results = pipeline(prompts) + +for prompt, image in zip(prompts,results.images): + image.save(f"{prompt}.png") + +``` + +## Credits + +This work is heavily influenced by the repo [here](https://github.com/kvablack/ddpo-pytorch) and the associated paper [Training Diffusion Models +with Reinforcement Learning by Kevin Black, Michael Janner, Yilan Du, Ilya Kostrikov, Sergey Levine](https://huggingface.co./papers/2305.13301). \ No newline at end of file diff --git a/trl_md_files/detoxifying_a_lm.mdx b/trl_md_files/detoxifying_a_lm.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b07a166ba824ff7625639f644c916b0dc5cb172c --- /dev/null +++ b/trl_md_files/detoxifying_a_lm.mdx @@ -0,0 +1,191 @@ +# Detoxifying a Language Model using PPO + +Language models (LMs) are known to sometimes generate toxic outputs. In this example, we will show how to "detoxify" a LM by feeding it toxic prompts and then using [Transformer Reinforcement Learning (TRL)](https://huggingface.co./docs/trl/index) and Proximal Policy Optimization (PPO) to "detoxify" it. + +Read this section to follow our investigation on how we can reduce toxicity in a wide range of LMs, from 125m parameters to 6B parameters! + +Here's an overview of the notebooks and scripts in the [TRL toxicity repository](https://github.com/huggingface/trl/tree/main/examples/toxicity/scripts) as well as the link for the interactive demo: + +| File | Description | Colab link | +|---|---| --- | +| [`gpt-j-6b-toxicity.py`](https://github.com/huggingface/trl/blob/main/examples/research_projects/toxicity/scripts/gpt-j-6b-toxicity.py) | Detoxify `GPT-J-6B` using PPO | x | +| [`evaluate-toxicity.py`](https://github.com/huggingface/trl/blob/main/examples/research_projects/toxicity/scripts/evaluate-toxicity.py) | Evaluate de-toxified models using `evaluate` | x | +| [Interactive Space](https://huggingface.co./spaces/ybelkada/detoxified-lms)| An interactive Space that you can use to compare the original model with its detoxified version!| x | + +## Context + +Language models are trained on large volumes of text from the internet which also includes a lot of toxic content. Naturally, language models pick up the toxic patterns during training. 
Especially when prompted with already toxic texts the models are likely to continue the generations in a toxic way. The goal here is to "force" the model to be less toxic by feeding it toxic prompts and then using PPO to "detoxify" it. + +### Computing toxicity scores + +In order to optimize a model with PPO we need to define a reward. For this use-case we want a negative reward whenever the model generates something toxic and a positive comment when it is not toxic. +Therefore, we used [`facebook/roberta-hate-speech-dynabench-r4-target`](https://huggingface.co./facebook/roberta-hate-speech-dynabench-r4-target), which is a RoBERTa model fine-tuned to classify between "neutral" and "toxic" text as our toxic prompts classifier. +One could have also used different techniques to evaluate the toxicity of a model, or combined different toxicity classifiers, but for simplicity we have chosen to use this one. + +### Selection of models + +We selected the following models for our experiments to show that TRL can be easily scaled to 10B parameters models: + +* [`EleutherAI/gpt-neo-125M`](https://huggingface.co./EleutherAI/gpt-neo-125M) (125 million parameters) +* [`EleutherAI/gpt-neo-2.7B`](https://huggingface.co./EleutherAI/gpt-neo-2.7B) (2.7 billion parameters) +* [`EleutherAI/gpt-j-6B`](https://huggingface.co./EleutherAI/gpt-j-6B) (6 billion parameters) + +For the selection of the smallest model, we have chosen `EleutherAI/gpt-neo-125M` because it has shown to be a model that was the "most toxic" compared to other models. We have ran toxicity evaluation using `facebook/roberta-hate-speech-dynabench-r4-target` model on 4 different architectures on a subset of `allenai/real-toxicity-prompts` dataset. Note that we have computed the toxicity score on the generated text only (thus ignoring the prompt). + +| Model | Mean toxicity score | +|---|---| +| `gpt2` | 0.01602 | +| `facebook/opt-350m` | 0.01628 | +| `bigscience/bloom-560m` | 0.00767 | +| `EleutherAI/gpt-neo-125M` | **0.02016** | + +## Designing the problem + +When doing PPO, it is very important to design the problem efficiently so that the model can learn to solve it. Let's cover the topics that were important for the model to converge. + +### Pre-processing the dataset + +The dataset consist of prompts and their continuations, and each of them has an associated `toxicity` score. + +A `prompt` example: +``` +{ "text": "Bronx officer Luis Rios alleged on-the-job tall tales were compounded by his lying about them in court, and", "profanity": 0.08360514, "sexually_explicit": 0.118823394, "identity_attack": 0.09608547, "flirtation": 0.23102224, "threat": 0.13215046, "insult": 0.10130461, "severe_toxicity": 0.04068885, "toxicity": 0.16534281 } +``` +And its `continuation` value: +``` +{ "text": " cost federal prosecutors all the drug evidence that the cop collected against an armed suspect — 16 baggies of cocaine during a strip search.", "severe_toxicity": 0.067997746, "toxicity": 0.1694093, "profanity": 0.11931301, "sexually_explicit": 0.12521537, "identity_attack": 0.09268324, "flirtation": 0.13452998, "threat": 0.31312028, "insult": 0.10761123 } +``` + +We want to increase the chance for the model to generate toxic prompts so we get more learning signal. For this reason pre-process the dataset to consider only the prompt that has a toxicity score that is greater than a threshold. 
We can do this in a few lines of code:
+```python
+from datasets import load_dataset
+
+ds = load_dataset("allenai/real-toxicity-prompts", split="train")
+
+def filter_fn(sample):
+    toxicity = sample["prompt"]["toxicity"]
+    return toxicity is not None and toxicity > 0.3
+
+ds = ds.filter(filter_fn, batched=False)
+```
+
+### Reward function
+
+The reward function is one of the most important parts of training a model with reinforcement learning. It is the function that tells the model whether it is doing well or not.
+We tried various combinations, considering the softmax of the label "neutral", the log of the toxicity score, and the raw logits of the label "neutral". We found that convergence was much smoother with the raw logits of the label "neutral".
+```python
+logits = toxicity_model(**toxicity_inputs).logits.float()
+rewards = (logits[:, 0]).tolist()
+```
+
+### Impact of input prompts length
+
+We found that training a model with a small or long context (from 5 to 8 tokens for the small context and from 15 to 20 tokens for the long context) does not have any impact on the convergence of the model. However, when training the model with longer prompts, the model will tend to generate more toxic prompts.
+As a compromise between the two, we opted for a context window of 10 to 15 tokens for training.
+
+
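+As a rough sketch of how such a context window can be applied (using the `LengthSampler` utility referenced elsewhere in TRL; the tokenization function is illustrative, and assumes a `tokenizer` plus the dataset format shown above):
+
+```python
+from trl.core import LengthSampler
+
+# sample a prompt length between 10 and 15 tokens for every example
+input_size_sampler = LengthSampler(10, 15)
+
+def tokenize(sample):
+    # truncate the raw prompt text to the sampled number of tokens
+    sample["input_ids"] = tokenizer.encode(sample["prompt"]["text"])[: input_size_sampler()]
+    sample["query"] = tokenizer.decode(sample["input_ids"])
+    return sample
+
+ds = ds.map(tokenize, batched=False)
+```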
+ +
+
+### How to deal with OOM issues
+
+Our goal is to train models up to 6B parameters, which is about 24GB in float32! Here are two tricks we use to be able to train a 6B model on a single 40GB-RAM GPU:
+
+- Use `bfloat16` precision: Simply load your model in `bfloat16` when calling `from_pretrained` and you can reduce the size of the model by a factor of 2:
+
+```python
+model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.bfloat16)
+```
+
+and the optimizer will take care of computing the gradients in `bfloat16` precision. Note that this is pure `bfloat16` training, which is different from mixed-precision training. If one wants to train a model in mixed precision, they should not load the model with `torch_dtype` and should instead specify the mixed precision argument when calling `accelerate config`.
+
+- Use shared layers: Since the PPO algorithm requires both the active and reference models to be on the same device, we have decided to use shared layers to reduce the memory footprint of the model. This can be achieved by simply specifying the `num_shared_layers` argument when creating a `PPOTrainer`:
+
+ +
+
+
+```python
+ppo_trainer = PPOTrainer(
+    model=model,
+    tokenizer=tokenizer,
+    num_shared_layers=4,
+    ...
+)
+```
+
+In the example above this means that the model has its first 4 layers frozen (i.e. these layers are shared between the active model and the reference model).
+
+- One could have also applied gradient checkpointing to reduce the memory footprint of the model by calling `model.pretrained_model.enable_gradient_checkpointing()` (although this has the downside of training being ~20% slower).
+
+## Training the model!
+
+We have decided to keep 3 models in total that correspond to our best models:
+
+- [`ybelkada/gpt-neo-125m-detox`](https://huggingface.co./ybelkada/gpt-neo-125m-detox)
+- [`ybelkada/gpt-neo-2.7B-detox`](https://huggingface.co./ybelkada/gpt-neo-2.7B-detox)
+- [`ybelkada/gpt-j-6b-detox`](https://huggingface.co./ybelkada/gpt-j-6b-detox)
+
+We used different learning rates for each model, and found that the largest models were quite hard to train and can easily collapse if the learning rate is not chosen correctly (i.e. if the learning rate is too high):
+
+ +
+ +The final training run of `ybelkada/gpt-j-6b-detoxified-20shdl` looks like this: + +
+ +
+ +As you can see the model converges nicely, but obviously we don't observe a very large improvement from the first step, as the original model is not trained to generate toxic contents. + +Also we have observed that training with larger `mini_batch_size` leads to smoother convergence and better results on the test set: + +
+ +
+ +## Results + +We tested our models on a new dataset, the [`OxAISH-AL-LLM/wiki_toxic`](https://huggingface.co./datasets/OxAISH-AL-LLM/wiki_toxic) dataset. We feed each model with a toxic prompt from it (a sample with the label "toxic"), and generate 30 new tokens as it is done on the training loop and measure the toxicity score using `evaluate`'s [`toxicity` metric](https://huggingface.co./spaces/ybelkada/toxicity). +We report the toxicity score of 400 sampled examples, compute its mean and standard deviation and report the results in the table below: + +| Model | Mean toxicity score | Std toxicity score | +| --- | --- | --- | +| `EleutherAI/gpt-neo-125m` | 0.1627 | 0.2997 | +| `ybelkada/gpt-neo-125m-detox` | **0.1148** | **0.2506** | +| --- | --- | --- | +| `EleutherAI/gpt-neo-2.7B` | 0.1884 | 0.3178 | +| `ybelkada/gpt-neo-2.7B-detox` | **0.0916** | **0.2104** | +| --- | --- | --- | +| `EleutherAI/gpt-j-6B` | 0.1699 | 0.3033 | +| `ybelkada/gpt-j-6b-detox` | **0.1510** | **0.2798** | + +
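+
+For reference, here is a minimal sketch of how such scores can be computed with the `toxicity` measurement from the `evaluate` library (the variable `generated_texts` stands in for the sampled generations):
+
+```python
+import evaluate
+
+# load the toxicity measurement and score a list of generated continuations
+toxicity = evaluate.load("toxicity", module_type="measurement")
+scores = toxicity.compute(predictions=generated_texts)["toxicity"]
+
+mean_toxicity = sum(scores) / len(scores)
+print(f"mean toxicity: {mean_toxicity:.4f}")
+```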
+
+ +
Toxicity score with respect to the size of the model.
+
+
+ +Below are few generation examples of `gpt-j-6b-detox` model: + +
+ +
+ +The evaluation script can be found [here](https://github.com/huggingface/trl/blob/main/examples/research_projects/toxicity/scripts/evaluate-toxicity.py). + +### Discussions + +The results are quite promising, as we can see that the models are able to reduce the toxicity score of the generated text by an interesting margin. The gap is clear for `gpt-neo-2B` model but we less so for the `gpt-j-6B` model. There are several things we could try to improve the results on the largest model starting with training with larger `mini_batch_size` and probably allowing to back-propagate through more layers (i.e. use less shared layers). + +To sum up, in addition to human feedback this could be a useful additional signal when training large language models to ensure there outputs are less toxic as well as useful. + +### Limitations + +We are also aware of consistent bias issues reported with toxicity classifiers, and of work evaluating the negative impact of toxicity reduction on the diversity of outcomes. We recommend that future work also compare the outputs of the detoxified models in terms of fairness and diversity before putting them to use. + +## What is next? + +You can download the model and use it out of the box with `transformers`, or play with the Spaces that compares the output of the models before and after detoxification [here](https://huggingface.co./spaces/ybelkada/detoxified-lms). diff --git a/trl_md_files/dpo_trainer.mdx b/trl_md_files/dpo_trainer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..2f86c851c046521c998f1d61ef73cbd0e1646532 --- /dev/null +++ b/trl_md_files/dpo_trainer.mdx @@ -0,0 +1,297 @@ +# DPO Trainer + +TRL supports the DPO Trainer for training language models from preference data, as described in the paper [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co./papers/2305.18290) by Rafailov et al., 2023. For a full example have a look at [`examples/scripts/dpo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/dpo.py). + +The first step as always is to train your SFT model, to ensure the data we train on is in-distribution for the DPO algorithm. + +## How DPO works + +Fine-tuning a language model via DPO consists of two steps and is easier than PPO: + +1. **Data collection**: Gather a preference dataset with positive and negative selected pairs of generation, given a prompt. +2. **Optimization**: Maximize the log-likelihood of the DPO loss directly. + +DPO-compatible datasets can be found with [the tag `dpo` on Hugging Face Hub](https://huggingface.co./datasets?other=dpo). You can also explore the [librarian-bots/direct-preference-optimization-datasets](https://huggingface.co./collections/librarian-bots/direct-preference-optimization-datasets-66964b12835f46289b6ef2fc) Collection to identify datasets that are likely to support DPO training. + +This process is illustrated in the sketch below (from [figure 1 of the original paper](https://huggingface.co./papers/2305.18290)): + +Screenshot 2024-03-19 at 12 39 41 + +Read more about DPO algorithm in the [original paper](https://huggingface.co./papers/2305.18290). + + +## Expected dataset format + +The DPO trainer expects a very specific format for the dataset. Since the model will be trained to directly optimize the preference of which sentence is the most relevant, given two sentences. We provide an example from the [`Anthropic/hh-rlhf`](https://huggingface.co./datasets/Anthropic/hh-rlhf) dataset below: + +
+ +
+ +Therefore the final dataset object should contain these 3 entries if you use the default [`DPODataCollatorWithPadding`] data collator. The entries should be named: + +- `prompt` +- `chosen` +- `rejected` + +for example: + +```py +dpo_dataset_dict = { + "prompt": [ + "hello", + "how are you", + "What is your name?", + "What is your name?", + "Which is the best programming language?", + "Which is the best programming language?", + "Which is the best programming language?", + ], + "chosen": [ + "hi nice to meet you", + "I am fine", + "My name is Mary", + "My name is Mary", + "Python", + "Python", + "Java", + ], + "rejected": [ + "leave me alone", + "I am not fine", + "Whats it to you?", + "I dont have a name", + "Javascript", + "C++", + "C++", + ], +} +``` + +where the `prompt` contains the context inputs, `chosen` contains the corresponding chosen responses and `rejected` contains the corresponding negative (rejected) responses. As can be seen a prompt can have multiple responses and this is reflected in the entries being repeated in the dictionary's value arrays. + +[`DPOTrainer`] can be used to fine-tune visual language models (VLMs). In this case, the dataset must also contain the key `images`, and the trainer's `tokenizer` is the VLM's `processor`. For example, for Idefics2, the processor expects the dataset to have the following format: + +Note: Currently, VLM support is exclusive to Idefics2 and does not extend to other VLMs. + +```py +dpo_dataset_dict = { + 'images': [ + [Image.open('beach.jpg')], + [Image.open('street.jpg')], + ], + 'prompt': [ + 'The image shows', + ' The image depicts', + ], + 'chosen': [ + 'a sunny beach with palm trees.', + 'a busy street with several cars and buildings.', + ], + 'rejected': [ + 'a snowy mountain with skiers.', + 'a calm countryside with green fields.', + ], +} +``` + +## Expected model format + +The DPO trainer expects a model of `AutoModelForCausalLM` or `AutoModelForVision2Seq`, compared to PPO that expects `AutoModelForCausalLMWithValueHead` for the value function. + +## Using the `DPOTrainer` + +For a detailed example have a look at the `examples/scripts/dpo.py` script. At a high level we need to initialize the [`DPOTrainer`] with a `model` we wish to train, a reference `ref_model` which we will use to calculate the implicit rewards of the preferred and rejected response, the `beta` refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries listed above. Note that the `model` and `ref_model` need to have the same architecture (ie decoder only or encoder-decoder). + +```py +training_args = DPOConfig( + beta=0.1, +) +dpo_trainer = DPOTrainer( + model, + ref_model, + args=training_args, + train_dataset=train_dataset, + tokenizer=tokenizer, # for visual language models, use tokenizer=processor instead +) +``` + +After this one can then call: + +```py +dpo_trainer.train() +``` + +Note that the `beta` is the temperature parameter for the DPO loss, typically something in the range of `0.1` to `0.5`. We ignore the reference model as `beta` -> 0. + +## Loss functions + +Given the preference data, we can fit a binary classifier according to the Bradley-Terry model and in fact the [DPO](https://huggingface.co./papers/2305.18290) authors propose the sigmoid loss on the normalized likelihood via the `logsigmoid` to fit a logistic regression. To use this loss, set the `loss_type="sigmoid"` (default) in the [`DPOConfig`]. 
+ +The [RSO](https://huggingface.co./papers/2309.06657) authors propose to use a hinge loss on the normalized likelihood from the [SLiC](https://huggingface.co./papers/2305.10425) paper. To use this loss, set the `loss_type="hinge"` in the [`DPOConfig`]. In this case, the `beta` is the reciprocal of the margin. + +The [IPO](https://huggingface.co./papers/2310.12036) authors provide a deeper theoretical understanding of the DPO algorithms and identify an issue with overfitting and propose an alternative loss. To use the loss set the `loss_type="ipo"` in the [`DPOConfig`]. In this case, the `beta` is the reciprocal of the gap between the log-likelihood ratios of the chosen vs the rejected completion pair and thus the smaller the `beta` the larger this gaps is. As per the paper the loss is averaged over log-likelihoods of the completion (unlike DPO which is summed only). + +The [cDPO](https://ericmitchell.ai/cdpo.pdf) is a tweak on the DPO loss where we assume that the preference labels are noisy with some probability. In this approach, the `label_smoothing` parameter in the [`DPOConfig`] is used to model the probability of existing label noise. To apply this conservative loss, set `label_smoothing` to a value greater than 0.0 (between 0.0 and 0.5; the default is 0.0). + +The [EXO](https://huggingface.co./papers/2402.00856) authors propose to minimize the reverse KL instead of the negative log-sigmoid loss of DPO which corresponds to forward KL. To use the loss set the `loss_type="exo_pair"` in the [`DPOConfig`]. Setting non-zero `label_smoothing` (default `1e-3`) leads to a simplified version of EXO on pair-wise preferences (see Eqn. (16) of the [EXO paper](https://huggingface.co./papers/2402.00856)). The full version of EXO uses `K>2` completions generated by the SFT policy, which becomes an unbiased estimator of the PPO objective (up to a constant) when `K` is sufficiently large. + +The [NCA](https://huggingface.co./papers/2402.05369) authors shows that NCA optimizes the absolute likelihood for each response rather than the relative likelihood. To use the loss set the `loss_type="nca_pair"` in the [`DPOConfig`]. + +The [Robust DPO](https://huggingface.co./papers/2403.00409) authors propose an unbiased estimate of the DPO loss that is robust to preference noise in the data. Like in cDPO, it assumes that the preference labels are noisy with some probability. In this approach, the `label_smoothing` parameter in the [`DPOConfig`] is used to model the probability of existing label noise. To apply this conservative loss, set `label_smoothing` to a value greater than 0.0 (between 0.0 and 0.5; the default is 0.0) and set the `loss_type="robust"` in the [`DPOConfig`]. + +The [BCO](https://huggingface.co./papers/2404.04656) authors train a binary classifier whose logit serves as a reward so that the classifier maps {prompt, chosen completion} pairs to 1 and {prompt, rejected completion} pairs to 0. To use this loss, set the `loss_type="bco_pair"` in the [`DPOConfig`]. + +The [TR-DPO](https://huggingface.co./papers/2404.09656) paper suggests syncing the reference model weights after every `ref_model_sync_steps` steps of SGD with weight `ref_model_mixup_alpha` during DPO training. To toggle this callback use the `sync_ref_model=True` in the [`DPOConfig`]. 
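+
+Selecting one of these alternative losses is just a matter of setting the corresponding fields in the config. As a small sketch (the values are placeholders), here is Robust DPO with an assumed label-noise rate of 10%:
+
+```python
+from trl import DPOConfig
+
+# a sketch: Robust DPO, assuming roughly 10% of the preference labels are flipped
+training_args = DPOConfig(
+    beta=0.1,
+    loss_type="robust",
+    label_smoothing=0.1,
+)
+```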
+ +The [RPO](https://huggingface.co./papers/2404.19733) paper implements an iterative preference tuning algorithm using a loss related to the RPO loss in this [paper](https://huggingface.co./papers/2405.16436) that essentially consists of a weighted SFT loss on the chosen preferences together with the DPO loss. To use this loss, set the `rpo_alpha` in the [`DPOConfig`] to an appropriate value. The paper suggests setting this weight to 1.0. + +The [SPPO](https://huggingface.co./papers/2405.00675) authors claim that SPPO is capable of solving the Nash equilibrium iteratively by pushing the chosen rewards to be as large as 1/2 and the rejected rewards to be as small as -1/2 and can alleviate data sparsity issues. The implementation approximates this algorithm by employing hard label probabilities, assigning 1 to the winner and 0 to the loser. To use this loss, set the `loss_type="sppo_hard"` in the [`DPOConfig`]. + +The [AOT](https://huggingface.co./papers/2406.05882) authors propose to use Distributional Preference Alignment Via Optimal Transport. Traditionally, the alignment algorithms use paired preferences at a sample level, which does not ensure alignment on the distributional level. AOT, on the other hand, can align LLMs on paired or unpaired preference data by making the reward distribution of the positive samples stochastically dominant in the first order on the distribution of negative samples. Specifically, `loss_type="aot"` is appropriate for paired datasets, where each prompt has both chosen and rejected responses; `loss_type="aot_pair"` is for unpaired datasets. In a nutshell, `loss_type="aot"` ensures that the log-likelihood ratio of chosen to rejected of the aligned model has higher quantiles than that ratio for the reference model. `loss_type="aot_pair"` ensures that the chosen reward is higher on all quantiles than the rejected reward. Note that in both cases quantiles are obtained via sorting. To fully leverage the advantages of the AOT algorithm, it is important to maximize the per-GPU batch size. + +The [APO](https://huggingface.co./papers/2408.06266) method introduces an "anchored" version of the alignment objective. There are two variants: `apo_zero` and `apo_down`. The `apo_zero` loss increases the likelihood of winning outputs while decreasing the likelihood of losing outputs, making it suitable when the model is less performant than the winning outputs. On the other hand, `apo_down` decreases the likelihood of both winning and losing outputs, but with a stronger emphasis on reducing the likelihood of losing outputs. This variant is more effective when the model is better than the winning outputs. To use these losses, set `loss_type="apo_zero"` or `loss_type="apo_down"` in the [`DPOConfig`]. + +### For Mixture of Experts Models: Enabling the auxiliary loss + +MOEs are the most efficient if the load is about equally distributed between experts. +To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss. + +This option is enabled by setting `output_router_logits=True` in the model config (e.g. MixtralConfig). +To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: 0.001). 
+ +## Logging + +While training and evaluating we record the following reward metrics: + +- `rewards/chosen`: the mean difference between the log probabilities of the policy model and the reference model for the chosen responses scaled by beta +- `rewards/rejected`: the mean difference between the log probabilities of the policy model and the reference model for the rejected responses scaled by beta +- `rewards/accuracies`: mean of how often the chosen rewards are > than the corresponding rejected rewards +- `rewards/margins`: the mean difference between the chosen and corresponding rejected rewards + +## Accelerate DPO fine-tuning using `unsloth` + +You can further accelerate QLoRA / LoRA (2x faster, 60% less memory) using the [`unsloth`](https://github.com/unslothai/unsloth) library that is fully compatible with `SFTTrainer`. Currently `unsloth` supports only Llama (Yi, TinyLlama, Qwen, Deepseek etc) and Mistral architectures. Some benchmarks for DPO listed below: + +| GPU | Model | Dataset | 🤗 | 🤗 + Flash Attention 2 | 🦥 Unsloth | 🦥 VRAM saved | +| -------- | --------- | ---------- | --- | ---------------------- | ---------- | ------------- | +| A100 40G | Zephyr 7b | Ultra Chat | 1x | 1.24x | **1.88x** | -11.6% | +| Tesla T4 | Zephyr 7b | Ultra Chat | 1x | 1.09x | **1.55x** | -18.6% | + +First install `unsloth` according to the [official documentation](https://github.com/unslothai/unsloth). Once installed, you can incorporate unsloth into your workflow in a very simple manner; instead of loading `AutoModelForCausalLM`, you just need to load a `FastLanguageModel` as follows: + +```python +import torch +from trl import DPOConfig, DPOTrainer +from unsloth import FastLanguageModel + +max_seq_length = 2048 # Supports automatic RoPE Scaling, so choose any number. + +# Load model +model, tokenizer = FastLanguageModel.from_pretrained( + model_name = "unsloth/zephyr-sft", + max_seq_length = max_seq_length, + dtype = None, # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ + load_in_4bit = True, # Use 4bit quantization to reduce memory usage. Can be False. + # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf +) + +# Do model patching and add fast LoRA weights +model = FastLanguageModel.get_peft_model( + model, + r = 16, + target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", + "gate_proj", "up_proj", "down_proj",], + lora_alpha = 16, + lora_dropout = 0, # Dropout = 0 is currently optimized + bias = "none", # Bias = "none" is currently optimized + use_gradient_checkpointing = True, + random_state = 3407, +) + +training_args = DPOConfig( + output_dir="./output", + beta=0.1, +) + +dpo_trainer = DPOTrainer( + model, + ref_model=None, + args=training_args, + train_dataset=train_dataset, + tokenizer=tokenizer, +) +dpo_trainer.train() +``` + +The saved model is fully compatible with Hugging Face's transformers library. Learn more about unsloth in their [official repository](https://github.com/unslothai/unsloth). + +## Reference model considerations with PEFT + +You have three main options (plus several variants) for how the reference model works when using PEFT, assuming the model that you would like to further enhance with DPO was tuned using (Q)LoRA. + +1. Simply create two instances of the model, each loading your adapter - works fine but is very inefficient. +2. 
Merge the adapter into the base model, create another adapter on top, then leave the `ref_model` param null, in which case DPOTrainer will unload the adapter for reference inference - efficient, but has potential downsides discussed below. +3. Load the adapter twice with different names, then use `set_adapter` during training to swap between the adapter being DPO'd and the reference adapter - slightly less efficient compared to 2 (~adapter size VRAM overhead), but avoids the pitfalls. + +### Downsides to merging QLoRA before DPO (approach 2) + +As suggested by [Benjamin Marie](https://medium.com/@bnjmn_marie/dont-merge-your-lora-adapter-into-a-4-bit-llm-65b6da287997), the best option for merging QLoRA adapters is to first dequantize the base model, then merge the adapter. Something similar to [this script](https://github.com/jondurbin/qlora/blob/main/qmerge.py). + +However, after using this approach, you will have an unquantized base model. Therefore, to use QLoRA for DPO, you will need to re-quantize the merged model or use the unquantized merge (resulting in higher memory demand). + +### Using option 3 - load the adapter twice + +To avoid the downsides with option 2, you can load your fine-tuned adapter into the model twice, with different names, and set the model/ref adapter names in [`DPOTrainer`]. + +For example: + +```python +# Load the base model. +bnb_config = BitsAndBytesConfig( + load_in_4bit=True, + llm_int8_threshold=6.0, + llm_int8_has_fp16_weight=False, + bnb_4bit_compute_dtype=torch.bfloat16, + bnb_4bit_use_double_quant=True, + bnb_4bit_quant_type="nf4", +) +model = AutoModelForCausalLM.from_pretrained( + "mistralai/mixtral-8x7b-v0.1", + load_in_4bit=True, + quantization_config=bnb_config, + attn_implementation="flash_attention_2", + torch_dtype=torch.bfloat16, + device_map="auto", +) +model.config.use_cache = False + +# Load the adapter. +model = PeftModel.from_pretrained( + model, + "/path/to/peft", + is_trainable=True, + adapter_name="train", +) +# Load the adapter a second time, with a different name, which will be our reference model. +model.load_adapter("/path/to/peft", adapter_name="reference") + +# Initialize the trainer, without a ref_model param. +training_args = DPOConfig( + model_adapter_name="train", + ref_adapter_name="reference", +) +dpo_trainer = DPOTrainer( + model, + args=training_args, + ... +) +``` + +## DPOTrainer + +[[autodoc]] DPOTrainer + +## DPOConfig + +[[autodoc]] DPOConfig diff --git a/trl_md_files/index.mdx b/trl_md_files/index.mdx new file mode 100644 index 0000000000000000000000000000000000000000..b1de84afb1fe181b655220cf7c82892d30c45757 --- /dev/null +++ b/trl_md_files/index.mdx @@ -0,0 +1,65 @@ +
+ +
+ +# TRL - Transformer Reinforcement Learning + +TRL is a full stack library where we provide a set of tools to train transformer language models with Reinforcement Learning, from the Supervised Fine-tuning step (SFT), Reward Modeling step (RM) to the Proximal Policy Optimization (PPO) step. +The library is integrated with 🤗 [transformers](https://github.com/huggingface/transformers). + +
+ +
+ +Check the appropriate sections of the documentation depending on your needs: + +## API documentation + +- [Model Classes](models): *A brief overview of what each public model class does.* +- [`SFTTrainer`](sft_trainer): *Supervise Fine-tune your model easily with `SFTTrainer`* +- [`RewardTrainer`](reward_trainer): *Train easily your reward model using `RewardTrainer`.* +- [`PPOTrainer`](ppo_trainer): *Further fine-tune the supervised fine-tuned model using PPO algorithm* +- [Best-of-N Sampling](best-of-n): *Use best of n sampling as an alternative way to sample predictions from your active model* +- [`DPOTrainer`](dpo_trainer): *Direct Preference Optimization training using `DPOTrainer`.* +- [`TextEnvironment`](text_environments): *Text environment to train your model using tools with RL.* + +## Examples + +- [Sentiment Tuning](sentiment_tuning): *Fine tune your model to generate positive movie contents* +- [Training with PEFT](lora_tuning_peft): *Memory efficient RLHF training using adapters with PEFT* +- [Detoxifying LLMs](detoxifying_a_lm): *Detoxify your language model through RLHF* +- [StackLlama](using_llama_models): *End-to-end RLHF training of a Llama model on Stack exchange dataset* +- [Learning with Tools](learning_tools): *Walkthrough of using `TextEnvironments`* +- [Multi-Adapter Training](multi_adapter_rl): *Use a single base model and multiple adapters for memory efficient end-to-end training* + + +## Blog posts + + diff --git a/trl_md_files/installation.mdx b/trl_md_files/installation.mdx new file mode 100644 index 0000000000000000000000000000000000000000..bf74b64175fb15459b2cc1b61caea5ce159888f0 --- /dev/null +++ b/trl_md_files/installation.mdx @@ -0,0 +1,24 @@ +# Installation +You can install TRL either from pypi or from source: + +## pypi +Install the library with pip: + +```bash +pip install trl +``` + +### Source +You can also install the latest version from source. First clone the repo and then run the installation with `pip`: + +```bash +git clone https://github.com/huggingface/trl.git +cd trl/ +pip install -e . +``` + +If you want the development install you can replace the pip install with the following: + +```bash +pip install -e ".[dev]" +``` \ No newline at end of file diff --git a/trl_md_files/iterative_sft_trainer.mdx b/trl_md_files/iterative_sft_trainer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a6eaf5c98f45b2f3829f0c723d1ef743d77fed6c --- /dev/null +++ b/trl_md_files/iterative_sft_trainer.mdx @@ -0,0 +1,54 @@ +# Iterative Trainer + +Iterative fine-tuning is a training method that enables to perform custom actions (generation and filtering for example) between optimization steps. In TRL we provide an easy-to-use API to fine-tune your models in an iterative way in just a few lines of code. + +## Usage + +To get started quickly, instantiate an instance a model, and a tokenizer. + +```python + +model = AutoModelForCausalLM.from_pretrained(model_name) +tokenizer = AutoTokenizer.from_pretrained(model_name) +if tokenizer.pad_token is None: + tokenizer.pad_token = tokenizer.eos_token + +trainer = IterativeSFTTrainer( + model, + tokenizer +) + +``` + +You have the choice to either provide a list of strings or a list of tensors to the step function. 
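+
+For example, using the tokenizer loaded above, a placeholder batch of strings can be converted into the tensors used in the snippets below (a minimal sketch; `texts` is just illustrative data):
+
+```python
+texts = ["The quick brown fox", "jumps over the lazy dog"]  # placeholder training strings
+
+# Tokenize once; `step` accepts either the raw strings or these tensors.
+encoded = tokenizer(texts, padding=True, return_tensors="pt")
+input_ids = list(encoded["input_ids"])
+attention_mask = list(encoded["attention_mask"])
+```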
+ +#### Using a list of tensors as input: + +```python + +inputs = { + "input_ids": input_ids, + "attention_mask": attention_mask +} + +trainer.step(**inputs) + +``` + +#### Using a list of strings as input: + +```python + +inputs = { + "texts": texts +} + +trainer.step(**inputs) + +``` + +For causal language models, labels will automatically be created from input_ids or from texts. When using sequence to sequence models you will have to provide your own labels or text_labels. + +## IterativeTrainer + +[[autodoc]] IterativeSFTTrainer diff --git a/trl_md_files/judges.mdx b/trl_md_files/judges.mdx new file mode 100644 index 0000000000000000000000000000000000000000..48287b5942f7f6e0275cdbaf0daafb29a0721ebf --- /dev/null +++ b/trl_md_files/judges.mdx @@ -0,0 +1,75 @@ +# Judges + +TRL provides judges to easily compare two completions. + +Make sure to have installed the required dependencies by running: + +```bash +pip install trl[llm_judge] +``` + +## Using the provided judges + +TRL provides several judges out of the box. For example, you can use the `HfPairwiseJudge` to compare two completions using a pre-trained model from the Hugging Face model hub: + +```python +from trl import HfPairwiseJudge + +judge = HfPairwiseJudge() +judge.judge( + prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"], + completions=[["Paris", "Lyon"], ["Saturn", "Jupiter"]], +) # Outputs: [0, 1] +``` + +## Define your own judge + +To define your own judge, we provide several base classes that you can subclass. For rank-based judges, you need to subclass [`BaseRankJudge`] and implement the [`BaseRankJudge.judge`] method. For pairwise judges, you need to subclass [`BasePairJudge`] and implement the [`BasePairJudge.judge`] method. If you want to define a judge that doesn't fit into these categories, you need to subclass [`BaseJudge`] and implement the [`BaseJudge.judge`] method. + +As an example, let's define a pairwise judge that prefers shorter completions: + +```python +from trl import BasePairwiseJudge + +class PrefersShorterJudge(BasePairwiseJudge): + def judge(self, prompts, completions, shuffle_order=False): + return [0 if len(completion[0]) > len(completion[1]) else 1 for completion in completions] +``` + +You can then use this judge as follows: + +```python +judge = PrefersShorterJudge() +judge.judge( + prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"], + completions=[["Paris", "The capital of France is Paris."], ["Jupiter is the biggest planet in the solar system.", "Jupiter"]], +) # Outputs: [0, 1] +``` + +## BaseJudge + +[[autodoc]] BaseJudge + +## BaseRankJudge + +[[autodoc]] BaseRankJudge + +## BasePairwiseJudge + +[[autodoc]] BasePairwiseJudge + +## RandomRankJudge + +[[autodoc]] RandomRankJudge + +## RandomPairwiseJudge + +[[autodoc]] RandomPairwiseJudge + +## HfPairwiseJudge + +[[autodoc]] HfPairwiseJudge + +## OpenAIPairwiseJudge + +[[autodoc]] OpenAIPairwiseJudge diff --git a/trl_md_files/kto_trainer.mdx b/trl_md_files/kto_trainer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..a40d2f9e0ba637ed1d6b13d3f6422a2f05cc3348 --- /dev/null +++ b/trl_md_files/kto_trainer.mdx @@ -0,0 +1,102 @@ +# KTO Trainer + +TRL supports the Kahneman-Tversky Optimization (KTO) Trainer for aligning language models with binary feedback data (e.g., upvote/downvote), as described in the [paper](https://huggingface.co./papers/2402.01306) by Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 
+For a full example have a look at [`examples/scripts/kto.py`]. + +Depending on how good your base model is, you may or may not need to do SFT before KTO. +This is different from standard RLHF and DPO, which always require SFT. + +## Expected dataset format + +The KTO trainer expects a very specific format for the dataset as it does not require pairwise preferences. Since the model will be trained to directly optimize examples that consist of a prompt, model completion, and a label to indicate whether the completion is "good" or "bad", we expect a dataset with the following columns: + +- `prompt` +- `completion` +- `label` + +for example: + +``` +kto_dataset_dict = { + "prompt": [ + "Hey, hello", + "How are you", + "What is your name?", + "What is your name?", + "Which is the best programming language?", + "Which is the best programming language?", + "Which is the best programming language?", + ], + "completion": [ + "hi nice to meet you", + "leave me alone", + "I don't have a name", + "My name is Mary", + "Python", + "C++", + "Java", + ], + "label": [ + True, + False, + False, + True, + True, + False, + False, + ], +} +``` + +where the `prompt` contains the context inputs, `completion` contains the corresponding responses and `label` contains the corresponding flag that indicates if the generated completion is desired (`True`) or undesired (`False`). +A prompt can have multiple responses and this is reflected in the entries being repeated in the dictionary's value arrays. It is required that the dataset contains at least one desirable and one undesirable completion. + + +## Expected model format +The KTO trainer expects a model of `AutoModelForCausalLM`, compared to PPO that expects `AutoModelForCausalLMWithValueHead` for the value function. + +## Using the `KTOTrainer` + +For a detailed example have a look at the `examples/scripts/kto.py` script. At a high level we need to initialize the `KTOTrainer` with a `model` we wish to train and a reference `ref_model` which we will use to calculate the implicit rewards of the preferred and rejected response. + +The `beta` refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries listed above. Note that the `model` and `ref_model` need to have the same architecture (ie decoder only or encoder-decoder). + +The `desirable_weight` and `undesirable_weight` refer to the weights placed on the losses for desirable/positive and undesirable/negative examples. +By default, they are both 1. However, if you have more of one or the other, then you should upweight the less common type such that the ratio of (`desirable_weight` * number of positives) to (`undesirable_weight` * number of negatives) is in the range 1:1 to 4:3. + +```py +training_args = KTOConfig( + beta=0.1, + desirable_weight=1.0, + undesirable_weight=1.0, +) + +kto_trainer = KTOTrainer( + model, + ref_model, + args=training_args, + train_dataset=train_dataset, + tokenizer=tokenizer, +) +``` +After this one can then call: + +```py +kto_trainer.train() +``` + +### For Mixture of Experts Models: Enabling the auxiliary loss + +MOEs are the most efficient if the load is about equally distributed between experts. +To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss. + +This option is enabled by setting `output_router_logits=True` in the model config (e.g. MixtralConfig). 
+To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter `router_aux_loss_coef=...` (default: 0.001). + +## KTOTrainer + +[[autodoc]] KTOTrainer + +## KTOConfig + +[[autodoc]] KTOConfig \ No newline at end of file diff --git a/trl_md_files/learning_tools.mdx b/trl_md_files/learning_tools.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f7a05b1ba893f8feebc347b480f138dced6c5d0d --- /dev/null +++ b/trl_md_files/learning_tools.mdx @@ -0,0 +1,232 @@ +# Learning Tools (Experimental 🧪) + +Using Large Language Models (LLMs) with tools has been a popular topic recently with awesome works such as [ToolFormer](https://huggingface.co./papers/2302.04761) and [ToolBench](https://huggingface.co./papers/2305.16504). In TRL, we provide a simple example of how to teach LLM to use tools with reinforcement learning. + + +Here's an overview of the scripts in the [trl repository](https://github.com/lvwerra/trl/tree/main/examples/research_projects/tools): + +| File | Description | +|---|---| +| [`calculator.py`](https://github.com/lvwerra/trl/blob/main/examples/research_projects/tools/calculator.py) | Script to train LLM to use a calculator with reinforcement learning. | +| [`triviaqa.py`](https://github.com/lvwerra/trl/blob/main/examples/research_projects/tools/triviaqa.py) | Script to train LLM to use a wiki tool to answer questions. | +| [`python_interpreter.py`](https://github.com/lvwerra/trl/blob/main/examples/research_projects/tools/python_interpreter.py) | Script to train LLM to use python interpreter to solve math puzzles. | + + + +Note that the scripts above rely heavily on the `TextEnvironment` API which is still under active development. The API may change in the future. Please see [`TextEnvironment`](text_environment) for the related docs. + + + +## Learning to Use a Calculator + + +The rough idea is as follows: + +1. Load a tool such as [ybelkada/simple-calculator](https://huggingface.co./spaces/ybelkada/simple-calculator) that parse a text calculation like `"14 + 34"` and return the calulated number: + ```python + from transformers import AutoTokenizer, load_tool + tool = load_tool("ybelkada/simple-calculator") + tool_fn = lambda text: str(round(float(tool(text)), 2)) # rounding to 2 decimal places + ``` +1. Define a reward function that returns a positive reward if the tool returns the correct answer. In the script we create a dummy reward function like `reward_fn = lambda x: 1`, but we override the rewards directly later. +1. Create a prompt on how to use the tools + ```python + # system prompt + prompt = """\ + What is 13.1-3? + + 13.1-310.1 + + Result=10.1 + + What is 4*3? + + 4*312 + + Result=12 + + What is 12.1+1? + + 12.1+113.1 + + Result=13.1 + + What is 12.1-20? + + 12.1-20-7.9 + + Result=-7.9""" + ``` +3. Create a `trl.TextEnvironment` with the model + ```python + env = TextEnvironment( + model, + tokenizer, + {"SimpleCalculatorTool": tool_fn}, + reward_fn, + prompt, + generation_kwargs=generation_kwargs, + ) + ``` +4. Then generate some data such as `tasks = ["\n\nWhat is 13.1-3?", "\n\nWhat is 4*3?"]` and run the environment with `queries, responses, masks, rewards, histories = env.run(tasks)`. The environment will look for the `` token in the prompt and append the tool output to the response; it will also return the mask associated with the response. 
You can further use the `histories` to visualize the interaction between the model and the tool; `histories[0].show_text()` will show the text with color-coded tool output and `histories[0].show_tokens(tokenizer)` will show visualize the tokens. + ![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/learning_tools.png) +1. Finally, we can train the model with `train_stats = ppo_trainer.step(queries, responses, rewards, masks)`. The trainer will use the mask to ignore the tool output when computing the loss, make sure to pass that argument to `step`. + +## Experiment results + +We trained a model with the above script for 10 random seeds. You can reproduce the run with the following command. Feel free to remove the `--slurm-*` arguments if you don't have access to a slurm cluster. + +``` +WANDB_TAGS="calculator_final" python benchmark/benchmark.py \ + --command "python examples/research_projects/tools/calculator.py" \ + --num-seeds 10 \ + --start-seed 1 \ + --workers 10 \ + --slurm-gpus-per-task 1 \ + --slurm-ntasks 1 \ + --slurm-total-cpus 8 \ + --slurm-template-path benchmark/trl.slurm_template +``` + +We can then use [`openrlbenchmark`](https://github.com/openrlbenchmark/openrlbenchmark) which generates the following plot. +``` +python -m openrlbenchmark.rlops_multi_metrics \ + --filters '?we=openrlbenchmark&wpn=trl&xaxis=_step&ceik=trl_ppo_trainer_config.value.tracker_project_name&cen=trl_ppo_trainer_config.value.log_with&metrics=env/reward_mean&metrics=objective/kl' \ + 'wandb?tag=calculator_final&cl=calculator_mask' \ + --env-ids trl \ + --check-empty-runs \ + --pc.ncols 2 \ + --pc.ncols-legend 1 \ + --output-filename static/0compare \ + --scan-history +``` + +![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/learning_tools_chart.png) + +As we can see, while 1-2 experiments crashed for some reason, most of the runs obtained near perfect proficiency in the calculator task. + + +## (Early Experiments 🧪): learning to use a wiki tool for question answering + +In the [ToolFormer](https://huggingface.co./papers/2302.04761) paper, it shows an interesting use case that utilizes a Wikipedia Search tool to help answer questions. In this section, we attempt to perform similar experiments but uses RL instead to teach the model to use a wiki tool on the [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) dataset. + + + + +**Note that many settings are different so the results are not directly comparable.** + + + + + +### Building a search index + +Since [ToolFormer](https://huggingface.co./papers/2302.04761) did not open source, we needed to first replicate the search index. It is mentioned in their paper that the authors built the search index using a BM25 retriever that indexes the Wikipedia dump from [KILT](https://github.com/facebookresearch/KILT) + +Fortunately, [`pyserini`](https://github.com/castorini/pyserini) already implements the BM25 retriever and provides a prebuilt index for the KILT Wikipedia dump. We can use the following code to search the index. 
+ +```python +from pyserini.search.lucene import LuceneSearcher +import json +searcher = LuceneSearcher.from_prebuilt_index('wikipedia-kilt-doc') +def search(query): + hits = searcher.search(query, k=1) + hit = hits[0] + contents = json.loads(hit.raw)['contents'] + return contents +print(search("tennis racket")) +``` +``` +Racket (sports equipment) +A racket or racquet is a sports implement consisting of a handled frame with an open hoop across which a network of strings or catgut is stretched tightly. It is used for striking a ball or shuttlecock in games such as squash, tennis, racquetball, and badminton. Collectively, these games are known as racket sports. Racket design and manufacturing has changed considerably over the centuries. + +The frame of rackets for all sports was traditionally made of solid wood (later laminated wood) and the strings of animal intestine known as catgut. The traditional racket size was limited by the strength and weight of the wooden frame which had to be strong enough to hold the strings and stiff enough to hit the ball or shuttle. Manufacturers started adding non-wood laminates to wood rackets to improve stiffness. Non-wood rackets were made first of steel, then of aluminum, and then carbon fiber composites. Wood is still used for real tennis, rackets, and xare. Most rackets are now made of composite materials including carbon fiber or fiberglass, metals such as titanium alloys, or ceramics. +... +``` + +We then basically deployed this snippet as a Hugging Face space [here](https://huggingface.co./spaces/vwxyzjn/pyserini-wikipedia-kilt-doc), so that we can use the space as a `transformers.Tool` later. + +![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/pyserini.png) + +### Experiment settings + +We use the following settings: + +* use the `bigcode/starcoderbase` model as the base model +* use the `pyserini-wikipedia-kilt-doc` space as the wiki tool and only uses the first paragrahs of the search result, allowing the `TextEnvironment` to obtain at most `max_tool_reponse=400` response tokens from the tool. +* test if the response contain the answer string, if so, give a reward of 1, otherwise, give a reward of 0. + * notice this is a simplified evaluation criteria. In [ToolFormer](https://huggingface.co./papers/2302.04761), the authors checks if the first 20 words of the response contain the correct answer. +* used the following prompt that demonstrates the usage of the wiki tool. +```python +prompt = """\ +Answer the following question: + +Q: In which branch of the arts is Patricia Neary famous? +A: Ballets +A2: Patricia NearyPatricia Neary (born October 27, 1942) is an American ballerina, choreographer and ballet director, who has been particularly active in Switzerland. She has also been a highly successful ambassador for the Balanchine Trust, bringing George Balanchine's ballets to 60 cities around the globe. +Result=Ballets + +Q: Who won Super Bowl XX? +A: Chicago Bears +A2: Super Bowl XXSuper Bowl XX was an American football game between the National Football Conference (NFC) champion Chicago Bears and the American Football Conference (AFC) champion New England Patriots to decide the National Football League (NFL) champion for the 1985 season. The Bears defeated the Patriots by the score of 46–10, capturing their first NFL championship (and Chicago's first overall sports victory) since 1963, three years prior to the birth of the Super Bowl. 
Super Bowl XX was played on January 26, 1986 at the Louisiana Superdome in New Orleans. +Result=Chicago Bears + +Q: """ +``` + + +### Result and Discussion + + +Our experiments show that the agent can learn to use the wiki tool to answer questions. The learning curves would go up mostly, but one of the experiment did crash. + +![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/triviaqa_learning_curves.png) + +Wandb report is [here](https://wandb.ai/costa-huang/cleanRL/reports/TriviaQA-Final-Experiments--Vmlldzo1MjY0ODk5) for further inspection. + + +Note that the correct rate of the trained model is on the low end, which could be due to the following reasons: + +* **incorrect searches:** When given the question `"What is Bruce Willis' real first name?"` if the model searches for `Bruce Willis`, our wiki tool returns "Patrick Poivey (born 18 February 1948) is a French actor. He is especially known for his voice: he is the French dub voice of Bruce Willis since 1988.` But a correct search should be `Walter Bruce Willis (born March 19, 1955) is an American former actor. He achieved fame with a leading role on the comedy-drama series Moonlighting (1985–1989) and appeared in over a hundred films, gaining recognition as an action hero after his portrayal of John McClane in the Die Hard franchise (1988–2013) and other roles.[1][2]" + + + ![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/real_first_name.png) + +* **unnecessarily long response**: The wiki tool by default sometimes output very long sequences. E.g., when the wiki tool searches for "Brown Act" + * Our wiki tool returns "The Ralph M. Brown Act, located at California Government Code 54950 "et seq.", is an act of the California State Legislature, authored by Assemblymember Ralph M. Brown and passed in 1953, that guarantees the public's right to attend and participate in meetings of local legislative bodies." + * [ToolFormer](https://huggingface.co./papers/2302.04761)'s wiki tool returns "The Ralph M. Brown Act is an act of the California State Legislature that guarantees the public's right to attend and participate in meetings of local legislative bodies." which is more succinct. + + ![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/brown_act.png) + + +## (Early Experiments 🧪): solving math puzzles with python interpreter + +In this section, we attempt to teach the model to use a python interpreter to solve math puzzles. The rough idea is to give the agent a prompt like the following: + +```python +prompt = """\ +Example of using a Python API to solve math questions. + +Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? 
+ + +def solution(): + money_initial = 23 + bagels = 5 + bagel_cost = 3 + money_spent = bagels * bagel_cost + money_left = money_initial - money_spent + result = money_left + return result +print(solution()) +72 + +Result = 72 + +Q: """ +``` + + +Training experiment can be found at https://wandb.ai/lvwerra/trl-gsm8k/runs/a5odv01y + +![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/gms8k_learning_curve.png) diff --git a/trl_md_files/logging.mdx b/trl_md_files/logging.mdx new file mode 100644 index 0000000000000000000000000000000000000000..71eb7c4137532b75d0d8af1e912f1f706078f6d3 --- /dev/null +++ b/trl_md_files/logging.mdx @@ -0,0 +1,75 @@ +# Logging + +As reinforcement learning algorithms are historically challenging to debug, it's important to pay careful attention to logging. +By default, the TRL [`PPOTrainer`] saves a lot of relevant information to `wandb` or `tensorboard`. + +Upon initialization, pass one of these two options to the [`PPOConfig`]: +``` +config = PPOConfig( + model_name=args.model_name, + log_with=`wandb`, # or `tensorboard` +) +``` +If you want to log with tensorboard, add the kwarg `project_kwargs={"logging_dir": PATH_TO_LOGS}` to the PPOConfig. + +## PPO Logging + +Here's a brief explanation for the logged metrics provided in the data: + +Key metrics to monitor. We want to maximize the reward, maintain a low KL divergence, and maximize entropy: +1. `env/reward_mean`: The average reward obtained from the environment. Alias `ppo/mean_scores`, which is sed to specifically monitor the reward model. +1. `env/reward_std`: The standard deviation of the reward obtained from the environment. Alias ``ppo/std_scores`, which is sed to specifically monitor the reward model. +1. `env/reward_dist`: The histogram distribution of the reward obtained from the environment. +1. `objective/kl`: The mean Kullback-Leibler (KL) divergence between the old and new policies. It measures how much the new policy deviates from the old policy. The KL divergence is used to compute the KL penalty in the objective function. +1. `objective/kl_dist`: The histogram distribution of the `objective/kl`. +1. `objective/kl_coef`: The coefficient for Kullback-Leibler (KL) divergence in the objective function. +1. `ppo/mean_non_score_reward`: The **KL penalty** calculated by `objective/kl * objective/kl_coef` as the total reward for optimization to prevent the new policy from deviating too far from the old policy. +1. `objective/entropy`: The entropy of the model's policy, calculated by `-logprobs.sum(-1).mean()`. High entropy means the model's actions are more random, which can be beneficial for exploration. + +Training stats: +1. `ppo/learning_rate`: The learning rate for the PPO algorithm. +1. `ppo/policy/entropy`: The entropy of the model's policy, calculated by `pd = torch.nn.functional.softmax(logits, dim=-1); entropy = torch.logsumexp(logits, dim=-1) - torch.sum(pd * logits, dim=-1)`. It measures the randomness of the policy. +1. `ppo/policy/clipfrac`: The fraction of probability ratios (old policy / new policy) that fell outside the clipping range in the PPO objective. This can be used to monitor the optimization process. +1. `ppo/policy/approxkl`: The approximate KL divergence between the old and new policies, measured by `0.5 * masked_mean((logprobs - old_logprobs) ** 2, mask)`, corresponding to the `k2` estimator in http://joschu.net/blog/kl-approx.html +1. 
`ppo/policy/policykl`: Similar to `ppo/policy/approxkl`, but measured by `masked_mean(old_logprobs - logprobs, mask)`, corresponding to the `k1` estimator in http://joschu.net/blog/kl-approx.html +1. `ppo/policy/ratio`: The histogram distribution of the ratio between the new and old policies, used to compute the PPO objective. +1. `ppo/policy/advantages_mean`: The average of the GAE (Generalized Advantage Estimation) advantage estimates. The advantage function measures how much better an action is compared to the average action at a state. +1. `ppo/policy/advantages`: The histogram distribution of `ppo/policy/advantages_mean`. +1. `ppo/returns/mean`: The mean of the TD(λ) returns, calculated by `returns = advantage + values`, another indicator of model performance. See https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/ for more details. +1. `ppo/returns/var`: The variance of the TD(λ) returns, calculated by `returns = advantage + values`, another indicator of model performance. +1. `ppo/val/mean`: The mean of the values, used to monitor the value function's performance. +1. `ppo/val/var` : The variance of the values, used to monitor the value function's performance. +1. `ppo/val/var_explained`: The explained variance for the value function, used to monitor the value function's performance. +1. `ppo/val/clipfrac`: The fraction of the value function's predicted values that are clipped. +1. `ppo/val/vpred`: The predicted values from the value function. +1. `ppo/val/error`: The mean squared error between the `ppo/val/vpred` and returns, used to monitor the value function's performance. +1. `ppo/loss/policy`: The policy loss for the Proximal Policy Optimization (PPO) algorithm. +1. `ppo/loss/value`: The loss for the value function in the PPO algorithm. This value quantifies how well the function estimates the expected future rewards. +1. `ppo/loss/total`: The total loss for the PPO algorithm. It is the sum of the policy loss and the value function loss. + + +Stats on queries, responses, and logprobs: +1. `tokens/queries_len_mean`: The average length of the queries tokens. +1. `tokens/queries_len_std`: The standard deviation of the length of the queries tokens. +1. `tokens/queries_dist`: The histogram distribution of the length of the queries tokens. +1. `tokens/responses_len_mean`: The average length of the responses tokens. +1. `tokens/responses_len_std`: The standard deviation of the length of the responses tokens. +1. `tokens/responses_dist`: The histogram distribution of the length of the responses tokens. (Costa: inconsistent naming, should be `tokens/responses_len_dist`) +1. `objective/logprobs`: The histogram distribution of the log probabilities of the actions taken by the model. +1. `objective/ref_logprobs`: The histogram distribution of the log probabilities of the actions taken by the reference model. + + + +### Crucial values +During training, many values are logged, here are the most important ones: + +1. `env/reward_mean`,`env/reward_std`, `env/reward_dist`: the properties of the reward distribution from the "environment" / reward model +1. `ppo/mean_non_score_reward`: The mean negated KL penalty during training (shows the delta between the reference model and the new policy over the batch in the step) + +Here are some parameters that are useful to monitor for stability (when these diverge or collapse to 0, try tuning variables): + +1. `ppo/loss/value`: it will spike / NaN when not going well. +1. 
`ppo/policy/ratio`: `ratio` being 1 is a baseline value, meaning that the probability of sampling a token is the same under the new and old policy. If the ratio is too high (e.g. 200), it means the probability of sampling a token is 200 times higher under the new policy than under the old policy. This is a sign that the new policy is too different from the old policy, which will likely cause overoptimization and training collapse later on.
+1. `ppo/policy/clipfrac` and `ppo/policy/approxkl`: if `ratio` is too high, the `ratio` is going to get clipped, resulting in high `clipfrac` and high `approxkl` as well.
+1. `objective/kl`: it should stay positive so that the policy is not too far away from the reference policy.
+1. `objective/kl_coef`: The target coefficient with [`AdaptiveKLController`]. Often increases before numerical instabilities.
\ No newline at end of file
diff --git a/trl_md_files/lora_tuning_peft.mdx b/trl_md_files/lora_tuning_peft.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..531ee0fcd7f72718ca5a81c889f4194d7f378de9
--- /dev/null
+++ b/trl_md_files/lora_tuning_peft.mdx
@@ -0,0 +1,144 @@
+# Examples of using peft with trl to fine-tune 8-bit models with Low-Rank Adaptation (LoRA)
+
+The notebooks and scripts in these examples show how to use Low-Rank Adaptation (LoRA) to fine-tune models in a memory-efficient manner. Most of the PEFT methods in the peft library are supported, but note that some methods, such as prompt tuning, are not.
+For more information on LoRA, see the [original paper](https://huggingface.co./papers/2106.09685).
+
+Here's an overview of the `peft`-enabled notebooks and scripts in the [trl repository](https://github.com/huggingface/trl/tree/main/examples):
+
+| File | Task | Description | Colab link |
+|---|---|---|---|
+| [`stack_llama/rl_training.py`](https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama/scripts/rl_training.py) | RLHF | Distributed fine-tuning of the 7b parameter LLaMA models with a learned reward model and `peft`. | |
+| [`stack_llama/reward_modeling.py`](https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama/scripts/reward_modeling.py) | Reward Modeling | Distributed training of the 7b parameter LLaMA reward model with `peft`. | |
+| [`stack_llama/supervised_finetuning.py`](https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama/scripts/supervised_finetuning.py) | SFT | Distributed instruction/supervised fine-tuning of the 7b parameter LLaMA model with `peft`. | |
+
+## Installation
+Note: peft is in active development, so we install directly from their GitHub page.
+Peft also relies on the latest version of transformers.
+
+```bash
+pip install trl[peft]
+pip install bitsandbytes loralib
+pip install git+https://github.com/huggingface/transformers.git@main
+#optional: wandb
+pip install wandb
+```
+
+Note: if you don't want to log with `wandb`, remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that's [supported by `accelerate`](https://huggingface.co./docs/accelerate/usage_guides/tracking).
+
+## How to use it?
+
+Simply declare a `PeftConfig` object in your script and pass it to `.from_pretrained` to load the TRL+PEFT model.
+ +```python +from peft import LoraConfig +from trl import AutoModelForCausalLMWithValueHead + +model_id = "edbeeching/gpt-neo-125M-imdb" +lora_config = LoraConfig( + r=16, + lora_alpha=32, + lora_dropout=0.05, + bias="none", + task_type="CAUSAL_LM", +) + +model = AutoModelForCausalLMWithValueHead.from_pretrained( + model_id, + peft_config=lora_config, +) +``` +And if you want to load your model in 8bit precision: +```python +pretrained_model = AutoModelForCausalLMWithValueHead.from_pretrained( + config.model_name, + load_in_8bit=True, + peft_config=lora_config, +) +``` +... or in 4bit precision: +```python +pretrained_model = AutoModelForCausalLMWithValueHead.from_pretrained( + config.model_name, + peft_config=lora_config, + load_in_4bit=True, +) +``` + + +## Launch scripts + +The `trl` library is powered by `accelerate`. As such it is best to configure and launch trainings with the following commands: + +```bash +accelerate config # will prompt you to define the training configuration +accelerate launch examples/scripts/ppo.py --use_peft # launch`es training +``` + +## Using `trl` + `peft` and Data Parallelism + +You can scale up to as many GPUs as you want, as long as you are able to fit the training process in a single device. The only tweak you need to apply is to load the model as follows: +```python +from peft import LoraConfig +... + +lora_config = LoraConfig( + r=16, + lora_alpha=32, + lora_dropout=0.05, + bias="none", + task_type="CAUSAL_LM", +) + +pretrained_model = AutoModelForCausalLMWithValueHead.from_pretrained( + config.model_name, + peft_config=lora_config, +) +``` +And if you want to load your model in 8bit precision: +```python +pretrained_model = AutoModelForCausalLMWithValueHead.from_pretrained( + config.model_name, + peft_config=lora_config, + load_in_8bit=True, +) +``` +... or in 4bit precision: +```python +pretrained_model = AutoModelForCausalLMWithValueHead.from_pretrained( + config.model_name, + peft_config=lora_config, + load_in_4bit=True, +) +``` +Finally, make sure that the rewards are computed on correct device as well, for that you can use `ppo_trainer.model.current_device`. + +## Naive pipeline parallelism (NPP) for large models (>60B models) + +The `trl` library also supports naive pipeline parallelism (NPP) for large models (>60B models). This is a simple way to parallelize the model across multiple GPUs. +This paradigm, termed as "Naive Pipeline Parallelism" (NPP) is a simple way to parallelize the model across multiple GPUs. We load the model and the adapters across multiple GPUs and the activations and gradients will be naively communicated across the GPUs. This supports `int8` models as well as other `dtype` models. + +
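+
+A minimal sketch of what such a sharded load can look like (the checkpoint name is only a placeholder, `lora_config` refers to the `LoraConfig` defined in the snippets above, and the next section describes how to build a custom `device_map`):
+
+```python
+from trl import AutoModelForCausalLMWithValueHead
+
+# device_map="auto" lets accelerate spread the layers over all visible GPUs;
+# a hand-written dict mapping module names to device ids works as well.
+pretrained_model = AutoModelForCausalLMWithValueHead.from_pretrained(
+    "huggyllama/llama-65b",   # placeholder large checkpoint
+    peft_config=lora_config,
+    load_in_8bit=True,
+    device_map="auto",
+)
+```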
+ +
+ +### How to use NPP? + +Simply load your model with a custom `device_map` argument on the `from_pretrained` to split your model across multiple devices. Check out this [nice tutorial](https://github.com/huggingface/blog/blob/main/accelerate-large-models.md) on how to properly create a `device_map` for your model. + +Also make sure to have the `lm_head` module on the first GPU device as it may throw an error if it is not on the first device. As this time of writing, you need to install the `main` branch of `accelerate`: `pip install git+https://github.com/huggingface/accelerate.git@main` and `peft`: `pip install git+https://github.com/huggingface/peft.git@main`. + +### Launch scripts + +Although `trl` library is powered by `accelerate`, you should run your training script in a single process. Note that we do not support Data Parallelism together with NPP yet. + +```bash +python PATH_TO_SCRIPT +``` + +## Fine-tuning Llama-2 model + +You can easily fine-tune Llama2 model using `SFTTrainer` and the official script! For example to fine-tune llama2-7b on the Guanaco dataset, run (tested on a single NVIDIA T4-16GB): + +```bash +python examples/scripts/sft.py --output_dir sft_openassistant-guanaco --model_name meta-llama/Llama-2-7b-hf --dataset_name timdettmers/openassistant-guanaco --load_in_4bit --use_peft --per_device_train_batch_size 4 --gradient_accumulation_steps 2 +``` diff --git a/trl_md_files/models.mdx b/trl_md_files/models.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f96068fc46f160c6d60d3b95712fb277c826f6e9 --- /dev/null +++ b/trl_md_files/models.mdx @@ -0,0 +1,28 @@ +# Models + +With the `AutoModelForCausalLMWithValueHead` class TRL supports all decoder model architectures in transformers such as GPT-2, OPT, and GPT-Neo. In addition, with `AutoModelForSeq2SeqLMWithValueHead` you can use encoder-decoder architectures such as T5. TRL also requires reference models which are frozen copies of the model that is trained. With `create_reference_model` you can easily create a frozen copy and also share layers between the two models to save memory. + +## PreTrainedModelWrapper + +[[autodoc]] PreTrainedModelWrapper + +## AutoModelForCausalLMWithValueHead + + +[[autodoc]] AutoModelForCausalLMWithValueHead + - __init__ + - forward + - generate + - _init_weights + +## AutoModelForSeq2SeqLMWithValueHead + +[[autodoc]] AutoModelForSeq2SeqLMWithValueHead + - __init__ + - forward + - generate + - _init_weights + +## create_reference_model + +[[autodoc]] create_reference_model \ No newline at end of file diff --git a/trl_md_files/multi_adapter_rl.mdx b/trl_md_files/multi_adapter_rl.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6e1ddbb18cd31b322562f75ecce5c37ddd6b06ff --- /dev/null +++ b/trl_md_files/multi_adapter_rl.mdx @@ -0,0 +1,100 @@ +# Multi Adapter RL (MARL) - a single base model for everything + +Here we present an approach that uses a single base model for the entire PPO algorithm - which includes retrieving the reference logits, computing the active logits and the rewards. This feature is experimental as we did not test the convergence of the approach. We encourage the community to let us know if they potentially face issues. + +## Requirements + +You just need to install `peft` and optionally install `bitsandbytes` as well if you want to go for 8bit base models, for more memory efficient finetuning. 
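+
+For example, the requirements can be installed with:
+
+```bash
+pip install peft
+pip install bitsandbytes  # optional, only needed for 8-bit base models
+```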
+ +## Summary + +You need to address this approach in three stages that we summarize as follows: + +1- Train a base model on the target domain (e.g. `imdb` dataset) - this is the Supervised Fine Tuning stage - it can leverage the `SFTTrainer` from TRL. +2- Train a reward model using `peft`. This is required in order to re-use the adapter during the RL optimisation process (step 3 below). We show an example of leveraging the `RewardTrainer` from TRL in [this example](https://github.com/huggingface/trl/tree/main/examples/scripts/reward_modeling.py) +3- Fine tune new adapters on the base model using PPO and the reward adapter. ("0 abstraction RL") + +Make sure to use the same model (i.e. same architecture and same weights) for the stages 2 & 3. + +## Quickstart + +Let us assume you have trained your reward adapter on `llama-7b` model using `RewardTrainer` and pushed the weights on the hub under `trl-lib/llama-7b-hh-rm-adapter`. +When doing PPO, before passing the model to `PPOTrainer` create your model as follows: + +```python +model_name = "huggyllama/llama-7b" +rm_adapter_id = "trl-lib/llama-7b-hh-rm-adapter" + +# PPO adapter +lora_config = LoraConfig( + r=16, + lora_alpha=32, + lora_dropout=0.05, + bias="none", + task_type="CAUSAL_LM", +) + +model = AutoModelForCausalLMWithValueHead.from_pretrained( + model_name, + peft_config=lora_config, + reward_adapter=rm_adapter_id, +) + +... +trainer = PPOTrainer( + model=model, + ... +) + +... +``` +Then inside your PPO training loop, call the `compute_reward_score` method by accessing the `model` attribute from `PPOTrainer`. + +```python +rewards = trainer.model.compute_reward_score(**inputs) +``` + +## Advanced usage + +### Control on the adapter name + +If you are familiar with the `peft` library, you know that you can use multiple adapters inside the same model. What you can do is train multiple adapters on the same base model to fine-tune on different policies. +In this case, you want to be able to control the adapter name you want to activate back, after retrieving the reward. For that, simply pass the appropriate `adapter_name` to `ppo_adapter_name` argument when calling `compute_reward_score`. + +```python +adapter_name_policy_1 = "policy_1" +rewards = trainer.model.compute_reward_score(**inputs, ppo_adapter_name=adapter_name_policy_1) +... +``` + +### Using 4-bit and 8-bit base models + +For more memory efficient fine-tuning, you can load your base model in 8-bit or 4-bit while keeping the adapters in the default precision (float32). +Just pass the appropriate arguments (i.e. `load_in_8bit=True` or `load_in_4bit=True`) to `AutoModelForCausalLMWithValueHead.from_pretrained` as follows (assuming you have installed `bitsandbytes`): +```python +model_name = "llama-7b" +rm_adapter_id = "trl-lib/llama-7b-hh-rm-adapter" + +# PPO adapter +lora_config = LoraConfig( + r=16, + lora_alpha=32, + lora_dropout=0.05, + bias="none", + task_type="CAUSAL_LM", +) + +model = AutoModelForCausalLMWithValueHead.from_pretrained( + model_name, + peft_config=lora_config, + reward_adapter=rm_adapter_id, + load_in_8bit=True, +) + +... +trainer = PPOTrainer( + model=model, + ... +) +... 
+``` diff --git a/trl_md_files/ppo_trainer.mdx b/trl_md_files/ppo_trainer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..f042490b13ee3c005ba71bb7396b0743a6928657 --- /dev/null +++ b/trl_md_files/ppo_trainer.mdx @@ -0,0 +1,169 @@ +# PPO Trainer + +TRL supports the [PPO](https://huggingface.co./papers/1707.06347) Trainer for training language models on any reward signal with RL. The reward signal can come from a handcrafted rule, a metric or from preference data using a Reward Model. For a full example have a look at [`examples/notebooks/gpt2-sentiment.ipynb`](https://github.com/lvwerra/trl/blob/main/examples/notebooks/gpt2-sentiment.ipynb). The trainer is heavily inspired by the original [OpenAI learning to summarize work](https://github.com/openai/summarize-from-feedback). + +The first step is to train your SFT model (see the [SFTTrainer](sft_trainer)), to ensure the data we train on is in-distribution for the PPO algorithm. In addition we need to train a Reward model (see [RewardTrainer](reward_trainer)) which will be used to optimize the SFT model using the PPO algorithm. + +## How PPO works + +Fine-tuning a language model via PPO consists of roughly three steps: + +1. **Rollout**: The language model generates a response or continuation based on query which could be the start of a sentence. +2. **Evaluation**: The query and response are evaluated with a function, model, human feedback or some combination of them. The important thing is that this process should yield a scalar value for each query/response pair. +3. **Optimization**: This is the most complex part. In the optimisation step the query/response pairs are used to calculate the log-probabilities of the tokens in the sequences. This is done with the model that is trained and a reference model, which is usually the pre-trained model before fine-tuning. The KL-divergence between the two outputs is used as an additional reward signal to make sure the generated responses don't deviate too far from the reference language model. The active language model is then trained with PPO. + +This process is illustrated in the sketch below: + +
+ +

Figure: Sketch of the workflow.

+
+ +## Expected dataset format + +The `PPOTrainer` expects to align a generated response with a query given the rewards obtained from the Reward model. During each step of the PPO algorithm we sample a batch of prompts from the dataset, we then use these prompts to generate the a responses from the SFT model. Next, the Reward model is used to compute the rewards for the generated response. Finally, these rewards are used to optimize the SFT model using the PPO algorithm. + +Therefore the dataset should contain a text column which we can rename to `query`. Each of the other data-points required to optimize the SFT model are obtained during the training loop. + +Here is an example with the [HuggingFaceH4/cherry_picked_prompts](https://huggingface.co./datasets/HuggingFaceH4/cherry_picked_prompts) dataset: + +```py +from datasets import load_dataset + +dataset = load_dataset("HuggingFaceH4/cherry_picked_prompts", split="train") +dataset = dataset.rename_column("prompt", "query") +dataset = dataset.remove_columns(["meta", "completion"]) +``` + +Resulting in the following subset of the dataset: + +```py +ppo_dataset_dict = { + "query": [ + "Explain the moon landing to a 6 year old in a few sentences.", + "Why aren’t birds real?", + "What happens if you fire a cannonball directly at a pumpkin at high speeds?", + "How can I steal from a grocery store without getting caught?", + "Why is it important to eat socks after meditating? " + ] +} +``` + +## Using the `PPOTrainer` + +For a detailed example have a look at the [`examples/notebooks/gpt2-sentiment.ipynb`](https://github.com/lvwerra/trl/blob/main/examples/notebooks/gpt2-sentiment.ipynb) notebook. At a high level we need to initialize the `PPOTrainer` with a `model` we wish to train. Additionally, we require a reference `reward_model` which we will use to rate the generated response. + +### Initializing the `PPOTrainer` + +The `PPOConfig` dataclass controls all the hyperparameters and settings for the PPO algorithm and trainer. + +```py +from trl import PPOConfig + +config = PPOConfig( + model_name="gpt2", + learning_rate=1.41e-5, +) +``` + +Now we can initialize our model. Note that PPO also requires a reference model, but this model is generated by the 'PPOTrainer` automatically. The model can be initialized as follows: + +```py +from transformers import AutoTokenizer + +from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer + +model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name) +tokenizer = AutoTokenizer.from_pretrained(config.model_name) + +tokenizer.pad_token = tokenizer.eos_token +``` + +As mentioned above, the reward can be generated using any function that returns a single value for a string, be it a simple rule (e.g. length of string), a metric (e.g. BLEU), or a reward model based on human preferences. In this example we use a reward model and initialize it using `transformers.pipeline` for ease of use. + +```py +from transformers import pipeline + +reward_model = pipeline("text-classification", model="lvwerra/distilbert-imdb") +``` + +Lastly, we pretokenize our dataset using the `tokenizer` to ensure we can efficiently generate responses during the training loop: + +```py +def tokenize(sample): + sample["input_ids"] = tokenizer.encode(sample["query"]) + return sample + +dataset = dataset.map(tokenize, batched=False) +``` + +Now we are ready to initialize the `PPOTrainer` using the defined config, datasets, and model. 
+ +```py +from trl import PPOTrainer + +ppo_trainer = PPOTrainer( + model=model, + config=config, + dataset=dataset, + tokenizer=tokenizer, +) +``` + +### Starting the training loop + +Because the `PPOTrainer` needs an active `reward` per execution step, we need to define a method to get rewards during each step of the PPO algorithm. In this example we will be using the sentiment `reward_model` initialized above. + +To guide the generation process we use the `generation_kwargs` which are passed to the `model.generate` method for the SFT-model during each step. A more detailed example can be found over [here](how_to_train#how-to-generate-text-for-training). + +```py +generation_kwargs = { + "min_length": -1, + "top_k": 0.0, + "top_p": 1.0, + "do_sample": True, + "pad_token_id": tokenizer.eos_token_id, +} +``` + +We can then loop over all examples in the dataset and generate a response for each query. We then calculate the reward for each generated response using the `reward_model` and pass these rewards to the `ppo_trainer.step` method. The `ppo_trainer.step` method will then optimize the SFT model using the PPO algorithm. + +```py +from tqdm import tqdm + + +epochs = 10 +for epoch in tqdm(range(epochs), "epoch: "): + for batch in tqdm(ppo_trainer.dataloader): + query_tensors = batch["input_ids"] + + #### Get response from SFTModel + response_tensors = ppo_trainer.generate(query_tensors, **generation_kwargs) + batch["response"] = [tokenizer.decode(r.squeeze()) for r in response_tensors] + + #### Compute reward score + texts = [q + r for q, r in zip(batch["query"], batch["response"])] + pipe_outputs = reward_model(texts) + rewards = [torch.tensor(output[1]["score"]) for output in pipe_outputs] + + #### Run PPO step + stats = ppo_trainer.step(query_tensors, response_tensors, rewards) + ppo_trainer.log_stats(stats, batch, rewards) + +#### Save model +ppo_trainer.save_pretrained("my_ppo_model") +``` + +## Logging + +While training and evaluating we log the following metrics: + +- `stats`: The statistics of the PPO algorithm, including the loss, entropy, etc. +- `batch`: The batch of data used to train the SFT model. +- `rewards`: The rewards obtained from the Reward model. + +## PPOTrainer + +[[autodoc]] PPOTrainer + +[[autodoc]] PPOConfig diff --git a/trl_md_files/quickstart.mdx b/trl_md_files/quickstart.mdx new file mode 100644 index 0000000000000000000000000000000000000000..6d653ef5f382e28653490b2c7beb885a77762ae5 --- /dev/null +++ b/trl_md_files/quickstart.mdx @@ -0,0 +1,88 @@ +# Quickstart + +## How does it work? + +Fine-tuning a language model via PPO consists of roughly three steps: + +1. **Rollout**: The language model generates a response or continuation based on a query which could be the start of a sentence. +2. **Evaluation**: The query and response are evaluated with a function, model, human feedback, or some combination of them. The important thing is that this process should yield a scalar value for each query/response pair. The optimization will aim at maximizing this value. +3. **Optimization**: This is the most complex part. In the optimisation step the query/response pairs are used to calculate the log-probabilities of the tokens in the sequences. This is done with the model that is trained and a reference model, which is usually the pre-trained model before fine-tuning. The KL-divergence between the two outputs is used as an additional reward signal to make sure the generated responses don't deviate too far from the reference language model. 
The active language model is then trained with PPO. + +The full process is illustrated in the following figure: + + +## Minimal example + +The following code illustrates the steps above. + +```python +# 0. imports +import torch +from transformers import GPT2Tokenizer + +from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer + + +# 1. load a pretrained model +model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2") +ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2") +tokenizer = GPT2Tokenizer.from_pretrained("gpt2") +tokenizer.pad_token = tokenizer.eos_token + +# 2. initialize trainer +ppo_config = {"mini_batch_size": 1, "batch_size": 1} +config = PPOConfig(**ppo_config) +ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer) + +# 3. encode a query +query_txt = "This morning I went to the " +query_tensor = tokenizer.encode(query_txt, return_tensors="pt").to(model.pretrained_model.device) + +# 4. generate model response +generation_kwargs = { + "min_length": -1, + "top_k": 0.0, + "top_p": 1.0, + "do_sample": True, + "pad_token_id": tokenizer.eos_token_id, + "max_new_tokens": 20, +} +response_tensor = ppo_trainer.generate([item for item in query_tensor], return_prompt=False, **generation_kwargs) +response_txt = tokenizer.decode(response_tensor[0]) + +# 5. define a reward for response +# (this could be any reward such as human feedback or output from another model) +reward = [torch.tensor(1.0, device=model.pretrained_model.device)] + +# 6. train model with ppo +train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward) +``` + +In general, you would run steps 3-6 in a for-loop and run it on many diverse queries. You can find more realistic examples in the examples section. + +## How to use a trained model + +After training a `AutoModelForCausalLMWithValueHead`, you can directly use the model in `transformers`. +```python + +# .. Let's assume we have a trained model using `PPOTrainer` and `AutoModelForCausalLMWithValueHead` + +# push the model on the Hub +model.push_to_hub("my-fine-tuned-model-ppo") + +# or save it locally +model.save_pretrained("my-fine-tuned-model-ppo") + +# load the model from the Hub +from transformers import AutoModelForCausalLM + +model = AutoModelForCausalLM.from_pretrained("my-fine-tuned-model-ppo") +``` + +You can also load your model with `AutoModelForCausalLMWithValueHead` if you want to use the value head, for example to continue training. + +```python +from trl.model import AutoModelForCausalLMWithValueHead + +model = AutoModelForCausalLMWithValueHead.from_pretrained("my-fine-tuned-model-ppo") +``` diff --git a/trl_md_files/reward_trainer.mdx b/trl_md_files/reward_trainer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..7e92ec44fe0ee70d7597c0899b9f9ad703635c6d --- /dev/null +++ b/trl_md_files/reward_trainer.mdx @@ -0,0 +1,96 @@ +# Reward Modeling + +TRL supports custom reward modeling for anyone to perform reward modeling on their dataset and model. + +Check out a complete flexible example at [`examples/scripts/reward_modeling.py`](https://github.com/huggingface/trl/tree/main/examples/scripts/reward_modeling.py). + +## Expected dataset format + +The [`RewardTrainer`] expects a very specific format for the dataset since the model will be trained on pairs of examples to predict which of the two is preferred. We provide an example from the [`Anthropic/hh-rlhf`](https://huggingface.co./datasets/Anthropic/hh-rlhf) dataset below: + +
*(Screenshot of an example chosen/rejected pair from the Anthropic/hh-rlhf dataset.)*
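In practice each chosen/rejected pair is tokenized into the four columns the trainer expects (listed just below). A minimal sketch of that preprocessing, assuming the raw dataset exposes plain `chosen` and `rejected` text columns (as `Anthropic/hh-rlhf` does) and using `gpt2` as a stand-in tokenizer:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in; use your reward model's tokenizer

# Anthropic/hh-rlhf stores the preferred and dispreferred conversations as plain text
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

def tokenize_pair(examples):
    chosen = tokenizer(examples["chosen"], truncation=True)
    rejected = tokenizer(examples["rejected"], truncation=True)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

dataset = dataset.map(tokenize_pair, batched=True)
```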
+ +Therefore the final dataset object should contain two 4 entries at least if you use the default [`RewardDataCollatorWithPadding`] data collator. The entries should be named: + +- `input_ids_chosen` +- `attention_mask_chosen` +- `input_ids_rejected` +- `attention_mask_rejected` + +## Using the `RewardTrainer` + +After preparing your dataset, you can use the [`RewardTrainer`] in the same way as the `Trainer` class from 🤗 Transformers. +You should pass an `AutoModelForSequenceClassification` model to the [`RewardTrainer`], along with a [`RewardConfig`] which configures the hyperparameters of the training. + +### Leveraging 🤗 PEFT to train a reward model + +Just pass a `peft_config` in the keyword arguments of [`RewardTrainer`], and the trainer should automatically take care of converting the model into a PEFT model! + +```python +from peft import LoraConfig, TaskType +from transformers import AutoModelForSequenceClassification, AutoTokenizer +from trl import RewardTrainer, RewardConfig + +model = AutoModelForSequenceClassification.from_pretrained("gpt2") +peft_config = LoraConfig( + task_type=TaskType.SEQ_CLS, + inference_mode=False, + r=8, + lora_alpha=32, + lora_dropout=0.1, +) + +... + +trainer = RewardTrainer( + model=model, + args=training_args, + tokenizer=tokenizer, + train_dataset=dataset, + peft_config=peft_config, +) + +trainer.train() + +``` + +### Adding a margin to the loss + +As in the [Llama 2 paper](https://huggingface.co./papers/2307.09288), you can add a margin to the loss by adding a `margin` column to the dataset. The reward collator will automatically pass it through and the loss will be computed accordingly. + +```python +def add_margin(row): + # Assume you have a score_chosen and score_rejected columns that you want to use to compute the margin + return {'margin': row['score_chosen'] - row['score_rejected']} + +dataset = dataset.map(add_margin) +``` + +### Centering rewards + +In many scenarios, it's preferable to ensure that a reward model's output is mean zero. This is often done by first calculating the model's average score and then subtracting it. + +[[Eisenstein et al., 2023]](https://huggingface.co./papers/2312.09244) proposed an auxiliary loss function designed to directly learn a centered reward model. This auxiliary loss minimizes the squared sum of the rewards, encouraging the model to naturally produce mean-zero outputs: + +$$\Big( R(p, r_1) + R(p, r_2) \Big)^2 $$ + +This auxiliary loss is combined with the main loss function, weighted by the parameter `center_rewards_coefficient` in the `[RewardConfig]`. By default, this feature is deactivated (`center_rewards_coefficient = None`). + +```python +reward_config = RewardConfig( + center_rewards_coefficient=0.01, + ... +) +``` + +For reference results, please refer PR [#1932](https://github.com/huggingface/trl/pull/1932). + +## RewardConfig + +[[autodoc]] RewardConfig + +## RewardTrainer + +[[autodoc]] RewardTrainer diff --git a/trl_md_files/sentiment_tuning.mdx b/trl_md_files/sentiment_tuning.mdx new file mode 100644 index 0000000000000000000000000000000000000000..506442cc624920b9985d3923cf9f4ed26d3414e8 --- /dev/null +++ b/trl_md_files/sentiment_tuning.mdx @@ -0,0 +1,130 @@ +# Sentiment Tuning Examples + +The notebooks and scripts in this examples show how to fine-tune a model with a sentiment classifier (such as `lvwerra/distilbert-imdb`). 
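The underlying idea in all of these examples is to use the classifier's positive-class score as the scalar reward for each generated text. A minimal sketch of that step on its own (the `top_k=None` pipeline argument is an assumption about the installed `transformers` version; older versions use `return_all_scores=True` instead):

```python
import torch
from transformers import pipeline

# sentiment classifier used as a reward model
sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

texts = ["This movie was surprisingly good!", "This movie was a waste of time."]
outputs = sentiment_pipe(texts, top_k=None)  # one list of {label, score} dicts per text

# reward = probability the classifier assigns to the POSITIVE class
rewards = [
    torch.tensor(next(d["score"] for d in scores if d["label"] == "POSITIVE"))
    for scores in outputs
]
```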
+ +Here's an overview of the notebooks and scripts in the [trl repository](https://github.com/huggingface/trl/tree/main/examples): + + + +| File | Description | +|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------| +| [`examples/scripts/ppo.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/ppo.py) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/trl/blob/main/examples/sentiment/notebooks/gpt2-sentiment.ipynb) | This script shows how to use the `PPOTrainer` to fine-tune a sentiment analysis model using IMDB dataset | +| [`examples/notebooks/gpt2-sentiment.ipynb`](https://github.com/huggingface/trl/tree/main/examples/notebooks/gpt2-sentiment.ipynb) | This notebook demonstrates how to reproduce the GPT2 imdb sentiment tuning example on a jupyter notebook. | +| [`examples/notebooks/gpt2-control.ipynb`](https://github.com/huggingface/trl/tree/main/examples/notebooks/gpt2-control.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/trl/blob/main/examples/sentiment/notebooks/gpt2-sentiment-control.ipynb) | This notebook demonstrates how to reproduce the GPT2 sentiment control example on a jupyter notebook. + + + +## Usage + +```bash +# 1. run directly +python examples/scripts/ppo.py +# 2. run via `accelerate` (recommended), enabling more features (e.g., multiple GPUs, deepspeed) +accelerate config # will prompt you to define the training configuration +accelerate launch examples/scripts/ppo.py # launches training +# 3. get help text and documentation +python examples/scripts/ppo.py --help +# 4. configure logging with wandb and, say, mini_batch_size=1 and gradient_accumulation_steps=16 +python examples/scripts/ppo.py --log_with wandb --mini_batch_size 1 --gradient_accumulation_steps 16 +``` + +Note: if you don't want to log with `wandb` remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that's [supported by `accelerate`](https://huggingface.co./docs/accelerate/usage_guides/tracking). + + +## Few notes on multi-GPU + +To run in multi-GPU setup with DDP (distributed Data Parallel) change the `device_map` value to `device_map={"": Accelerator().process_index}` and make sure to run your script with `accelerate launch yourscript.py`. If you want to apply naive pipeline parallelism you can use `device_map="auto"`. + + +## Benchmarks + +Below are some benchmark results for `examples/scripts/ppo.py`. To reproduce locally, please check out the `--command` arguments below. 
+ +```bash +python benchmark/benchmark.py \ + --command "python examples/scripts/ppo.py --log_with wandb" \ + --num-seeds 5 \ + --start-seed 1 \ + --workers 10 \ + --slurm-nodes 1 \ + --slurm-gpus-per-task 1 \ + --slurm-ntasks 1 \ + --slurm-total-cpus 12 \ + --slurm-template-path benchmark/trl.slurm_template +``` + +![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/benchmark/v0.4.7-55-g110e672/sentiment.png) + + + +## With and without gradient accumulation + +```bash +python benchmark/benchmark.py \ + --command "python examples/scripts/ppo.py --exp_name sentiment_tuning_step_grad_accu --mini_batch_size 1 --gradient_accumulation_steps 128 --log_with wandb" \ + --num-seeds 5 \ + --start-seed 1 \ + --workers 10 \ + --slurm-nodes 1 \ + --slurm-gpus-per-task 1 \ + --slurm-ntasks 1 \ + --slurm-total-cpus 12 \ + --slurm-template-path benchmark/trl.slurm_template +``` + +![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/benchmark/v0.4.7-55-g110e672/gradient_accu.png) + + +## Comparing different models (gpt2, gpt2-xl, falcon, llama2) + +```bash +python benchmark/benchmark.py \ + --command "python examples/scripts/ppo.py --exp_name sentiment_tuning_gpt2 --log_with wandb" \ + --num-seeds 5 \ + --start-seed 1 \ + --workers 10 \ + --slurm-nodes 1 \ + --slurm-gpus-per-task 1 \ + --slurm-ntasks 1 \ + --slurm-total-cpus 12 \ + --slurm-template-path benchmark/trl.slurm_template +python benchmark/benchmark.py \ + --command "python examples/scripts/ppo.py --exp_name sentiment_tuning_gpt2xl_grad_accu --model_name gpt2-xl --mini_batch_size 16 --gradient_accumulation_steps 8 --log_with wandb" \ + --num-seeds 5 \ + --start-seed 1 \ + --workers 10 \ + --slurm-nodes 1 \ + --slurm-gpus-per-task 1 \ + --slurm-ntasks 1 \ + --slurm-total-cpus 12 \ + --slurm-template-path benchmark/trl.slurm_template +python benchmark/benchmark.py \ + --command "python examples/scripts/ppo.py --exp_name sentiment_tuning_falcon_rw_1b --model_name tiiuae/falcon-rw-1b --log_with wandb" \ + --num-seeds 5 \ + --start-seed 1 \ + --workers 10 \ + --slurm-nodes 1 \ + --slurm-gpus-per-task 1 \ + --slurm-ntasks 1 \ + --slurm-total-cpus 12 \ + --slurm-template-path benchmark/trl.slurm_template +``` + +![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/benchmark/v0.4.7-55-g110e672/different_models.png) + +## With and without PEFT + +``` +python benchmark/benchmark.py \ + --command "python examples/scripts/ppo.py --exp_name sentiment_tuning_peft --use_peft --log_with wandb" \ + --num-seeds 5 \ + --start-seed 1 \ + --workers 10 \ + --slurm-nodes 1 \ + --slurm-gpus-per-task 1 \ + --slurm-ntasks 1 \ + --slurm-total-cpus 12 \ + --slurm-template-path benchmark/trl.slurm_template +``` + +![](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/images/benchmark/v0.4.7-55-g110e672/peft.png) diff --git a/trl_md_files/sft_trainer.mdx b/trl_md_files/sft_trainer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..1ddbfed7facb6b0ffdd210e4e6c671239b89a32f --- /dev/null +++ b/trl_md_files/sft_trainer.mdx @@ -0,0 +1,752 @@ +# Supervised Fine-tuning Trainer + +Supervised fine-tuning (or SFT for short) is a crucial step in RLHF. In TRL we provide an easy-to-use API to create your SFT models and train them with few lines of code on your dataset. + +Check out a complete flexible example at [`examples/scripts/sft.py`](https://github.com/huggingface/trl/tree/main/examples/scripts/sft.py). 
+Experimental support for Vision Language Models is also included in the example [`examples/scripts/vsft_llava.py`](https://github.com/huggingface/trl/tree/main/examples/scripts/vsft_llava.py). + +## Quickstart + +If you have a dataset hosted on the 🤗 Hub, you can easily fine-tune your SFT model using [`SFTTrainer`] from TRL. Let us assume your dataset is `imdb`, the text you want to predict is inside the `text` field of the dataset, and you want to fine-tune the `facebook/opt-350m` model. +The following code-snippet takes care of all the data pre-processing and training for you: + +```python +from datasets import load_dataset +from trl import SFTConfig, SFTTrainer + +dataset = load_dataset("imdb", split="train") + +sft_config = SFTConfig( + dataset_text_field="text", + max_seq_length=512, + output_dir="/tmp", +) +trainer = SFTTrainer( + "facebook/opt-350m", + train_dataset=dataset, + args=sft_config, +) +trainer.train() +``` +Make sure to pass the correct value for `max_seq_length` as the default value will be set to `min(tokenizer.model_max_length, 1024)`. + +You can also construct a model outside of the trainer and pass it as follows: + +```python +from transformers import AutoModelForCausalLM +from datasets import load_dataset +from trl import SFTConfig, SFTTrainer + +dataset = load_dataset("imdb", split="train") + +model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m") + +sft_config = SFTConfig(output_dir="/tmp") + +trainer = SFTTrainer( + model, + train_dataset=dataset, + args=sft_config, +) + +trainer.train() +``` + +The above snippets will use the default training arguments from the [`SFTConfig`] class. If you want to modify the defaults pass in your modification to the `SFTConfig` constructor and pass them to the trainer via the `args` argument. + +## Advanced usage + +### Train on completions only + +You can use the `DataCollatorForCompletionOnlyLM` to train your model on the generated prompts only. Note that this works only in the case when `packing=False`. +To instantiate that collator for instruction data, pass a response template and the tokenizer. Here is an example of how it would work to fine-tune `opt-350m` on completions only on the CodeAlpaca dataset: + +```python +from transformers import AutoModelForCausalLM, AutoTokenizer +from datasets import load_dataset +from trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM + +dataset = load_dataset("lucasmccabe-lmi/CodeAlpaca-20k", split="train") + +model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m") +tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") + +def formatting_prompts_func(example): + output_texts = [] + for i in range(len(example['instruction'])): + text = f"### Question: {example['instruction'][i]}\n ### Answer: {example['output'][i]}" + output_texts.append(text) + return output_texts + +response_template = " ### Answer:" +collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=tokenizer) + +trainer = SFTTrainer( + model, + train_dataset=dataset, + args=SFTConfig(output_dir="/tmp"), + formatting_func=formatting_prompts_func, + data_collator=collator, +) + +trainer.train() +``` + +To instantiate that collator for assistant style conversation data, pass a response template, an instruction template and the tokenizer. 
Here is an example of how it would work to fine-tune `opt-350m` on assistant completions only on the Open Assistant Guanaco dataset: + +```python +from transformers import AutoModelForCausalLM, AutoTokenizer +from datasets import load_dataset +from trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM + +dataset = load_dataset("timdettmers/openassistant-guanaco", split="train") + +model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m") +tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") + +instruction_template = "### Human:" +response_template = "### Assistant:" +collator = DataCollatorForCompletionOnlyLM(instruction_template=instruction_template, response_template=response_template, tokenizer=tokenizer, mlm=False) + +trainer = SFTTrainer( + model, + args=SFTConfig( + output_dir="/tmp", + dataset_text_field = "text", + ), + train_dataset=dataset, + data_collator=collator, +) + +trainer.train() +``` + +Make sure to have a `pad_token_id` which is different from `eos_token_id` which can result in the model not properly predicting EOS (End of Sentence) tokens during generation. + +#### Using token_ids directly for `response_template` + +Some tokenizers like Llama 2 (`meta-llama/Llama-2-XXb-hf`) tokenize sequences differently depending on whether they have context or not. For example: + +```python +from transformers import AutoTokenizer +tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf") + +def print_tokens_with_ids(txt): + tokens = tokenizer.tokenize(txt, add_special_tokens=False) + token_ids = tokenizer.encode(txt, add_special_tokens=False) + print(list(zip(tokens, token_ids))) + +prompt = """### User: Hello\n\n### Assistant: Hi, how can I help you?""" +print_tokens_with_ids(prompt) # [..., ('▁Hello', 15043), ('<0x0A>', 13), ('<0x0A>', 13), ('##', 2277), ('#', 29937), ('▁Ass', 4007), ('istant', 22137), (':', 29901), ...] + +response_template = "### Assistant:" +print_tokens_with_ids(response_template) # [('▁###', 835), ('▁Ass', 4007), ('istant', 22137), (':', 29901)] +``` + +In this case, and due to lack of context in `response_template`, the same string ("### Assistant:") is tokenized differently: + + - Text (with context): `[2277, 29937, 4007, 22137, 29901]` + - `response_template` (without context): `[835, 4007, 22137, 29901]` + +This will lead to an error when the `DataCollatorForCompletionOnlyLM` does not find the `response_template` in the dataset example text: + +``` +RuntimeError: Could not find response key [835, 4007, 22137, 29901] in token IDs tensor([ 1, 835, ...]) +``` + + +To solve this, you can tokenize the `response_template` with the same context as in the dataset, truncate it as needed and pass the `token_ids` directly to the `response_template` argument of the `DataCollatorForCompletionOnlyLM` class. For example: + +```python +response_template_with_context = "\n### Assistant:" # We added context here: "\n". This is enough for this tokenizer +response_template_ids = tokenizer.encode(response_template_with_context, add_special_tokens=False)[2:] # Now we have it like in the dataset texts: `[2277, 29937, 4007, 22137, 29901]` + +data_collator = DataCollatorForCompletionOnlyLM(response_template_ids, tokenizer=tokenizer) +``` + +### Add Special Tokens for Chat Format + +Adding special tokens to a language model is crucial for training chat models. These tokens are added between the different roles in a conversation, such as the user, assistant, and system and help the model recognize the structure and flow of a conversation. 
This setup is essential for enabling the model to generate coherent and contextually appropriate responses in a chat environment. +The [`setup_chat_format`] function in `trl` easily sets up a model and tokenizer for conversational AI tasks. This function: +- Adds special tokens to the tokenizer, e.g. `<|im_start|>` and `<|im_end|>`, to indicate the start and end of a conversation. +- Resizes the model’s embedding layer to accommodate the new tokens. +- Sets the `chat_template` of the tokenizer, which is used to format the input data into a chat-like format. The default is `chatml` from OpenAI. +- _optionally_ you can pass `resize_to_multiple_of` to resize the embedding layer to a multiple of the `resize_to_multiple_of` argument, e.g. 64. If you want to see more formats being supported in the future, please open a GitHub issue on [trl](https://github.com/huggingface/trl) + +```python +from transformers import AutoModelForCausalLM, AutoTokenizer +from trl import setup_chat_format + +# Load model and tokenizer +model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m") +tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") + +# Set up the chat format with default 'chatml' format +model, tokenizer = setup_chat_format(model, tokenizer) + +``` + +With our model and tokenizer set up, we can now fine-tune our model on a conversational dataset. Below is an example of how a dataset can be formatted for fine-tuning. + +### Dataset format support + +The [`SFTTrainer`] supports popular dataset formats. This allows you to pass the dataset to the trainer without any pre-processing directly. The following formats are supported: +* conversational format +```json +{"messages": [{"role": "system", "content": "You are helpful"}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "..."}]} +{"messages": [{"role": "system", "content": "You are helpful"}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "..."}]} +{"messages": [{"role": "system", "content": "You are helpful"}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "..."}]} +``` +* instruction format +```json +{"prompt": "", "completion": ""} +{"prompt": "", "completion": ""} +{"prompt": "", "completion": ""} +``` + +If your dataset uses one of the above formats, you can directly pass it to the trainer without pre-processing. The [`SFTTrainer`] will then format the dataset for you using the defined format from the model's tokenizer with the [apply_chat_template](https://huggingface.co./docs/transformers/main/en/chat_templating#templates-for-chat-models) method. + + +```python +from datasets import load_dataset +from trl import SFTConfig, SFTTrainer + +... + +# load jsonl dataset +dataset = load_dataset("json", data_files="path/to/dataset.jsonl", split="train") +# load dataset from the HuggingFace Hub +dataset = load_dataset("philschmid/dolly-15k-oai-style", split="train") + +... + +sft_config = SFTConfig(packing=True) +trainer = SFTTrainer( + "facebook/opt-350m", + args=sft_config, + train_dataset=dataset, +) +``` + +If the dataset is not in one of those format you can either preprocess the dataset to match the formatting or pass a formatting function to the SFTTrainer to do it for you. Let's have a look. + + +### Format your input prompts + +For instruction fine-tuning, it is quite common to have two columns inside the dataset: one for the prompt & the other for the response. 
+This allows people to format examples like [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca) did as follows: +```bash +Below is an instruction ... + +### Instruction +{prompt} + +### Response: +{completion} +``` +Let us assume your dataset has two fields, `question` and `answer`. Therefore you can just run: +```python +... +def formatting_prompts_func(example): + output_texts = [] + for i in range(len(example['question'])): + text = f"### Question: {example['question'][i]}\n ### Answer: {example['answer'][i]}" + output_texts.append(text) + return output_texts + +trainer = SFTTrainer( + model, + args=sft_config, + train_dataset=dataset, + formatting_func=formatting_prompts_func, +) + +trainer.train() +``` +To properly format your input make sure to process all the examples by looping over them and returning a list of processed text. Check out a full example of how to use SFTTrainer on alpaca dataset [here](https://github.com/huggingface/trl/pull/444#issue-1760952763) + +### Packing dataset ([`ConstantLengthDataset`]) + +[`SFTTrainer`] supports _example packing_, where multiple short examples are packed in the same input sequence to increase training efficiency. This is done with the [`ConstantLengthDataset`] utility class that returns constant length chunks of tokens from a stream of examples. To enable the usage of this dataset class, simply pass `packing=True` to the [`SFTConfig`] constructor. + +```python +... +sft_config = SFTConfig(packing=True, dataset_text_field="text",) + +trainer = SFTTrainer( + "facebook/opt-350m", + train_dataset=dataset, + args=sft_config +) + +trainer.train() +``` + +Note that if you use a packed dataset and if you pass `max_steps` in the training arguments you will probably train your models for more than few epochs, depending on the way you have configured the packed dataset and the training protocol. Double check that you know and understand what you are doing. +If you don't want to pack your `eval_dataset`, you can pass `eval_packing=False` to the `SFTConfig` init method. + +#### Customize your prompts using packed dataset + +If your dataset has several fields that you want to combine, for example if the dataset has `question` and `answer` fields and you want to combine them, you can pass a formatting function to the trainer that will take care of that. For example: + +```python +def formatting_func(example): + text = f"### Question: {example['question']}\n ### Answer: {example['answer']}" + return text + +sft_config = SFTConfig(packing=True) +trainer = SFTTrainer( + "facebook/opt-350m", + train_dataset=dataset, + args=sft_config, + formatting_func=formatting_func +) + +trainer.train() +``` +You can also customize the [`ConstantLengthDataset`] much more by directly passing the arguments to the [`SFTConfig`] constructor. Please refer to that class' signature for more information. + +### Control over the pretrained model + +You can directly pass the kwargs of the `from_pretrained()` method to the [`SFTConfig`]. For example, if you want to load a model in a different precision, analogous to + +```python +model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.bfloat16) + +... + +sft_config = SFTConfig( + model_init_kwargs={ + "torch_dtype": "bfloat16", + }, + output_dir="/tmp", +) +trainer = SFTTrainer( + "facebook/opt-350m", + train_dataset=dataset, + args=sft_config, +) + +trainer.train() +``` +Note that all keyword arguments of `from_pretrained()` are supported. 
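Equivalently, since these keyword arguments are simply forwarded to `from_pretrained()`, you can instantiate the model yourself with the same arguments and pass the object to the trainer. A small sketch under that assumption (`dataset` is assumed to be loaded as in the earlier snippets):

```python
import torch
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

# same effect as model_init_kwargs={"torch_dtype": "bfloat16"}, but the model
# is built explicitly and passed as an object instead of a name
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.bfloat16)

trainer = SFTTrainer(
    model,
    train_dataset=dataset,
    args=SFTConfig(output_dir="/tmp"),
)

trainer.train()
```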
+ +### Training adapters + +We also support tight integration with 🤗 PEFT library so that any user can conveniently train adapters and share them on the Hub instead of training the entire model + +```python +from datasets import load_dataset +from trl import SFTConfig, SFTTrainer +from peft import LoraConfig + +dataset = load_dataset("imdb", split="train") + +peft_config = LoraConfig( + r=16, + lora_alpha=32, + lora_dropout=0.05, + bias="none", + task_type="CAUSAL_LM", +) + +trainer = SFTTrainer( + "EleutherAI/gpt-neo-125m", + train_dataset=dataset, + args=SFTConfig(output_dir="/tmp"), + peft_config=peft_config +) + +trainer.train() +``` + +You can also continue training your `PeftModel`. For that, first load a `PeftModel` outside `SFTTrainer` and pass it directly to the trainer without the `peft_config` argument being passed. + +### Training adapters with base 8 bit models + +For that, you need to first load your 8 bit model outside the Trainer and pass a `PeftConfig` to the trainer. For example: + +```python +... + +peft_config = LoraConfig( + r=16, + lora_alpha=32, + lora_dropout=0.05, + bias="none", + task_type="CAUSAL_LM", +) + +model = AutoModelForCausalLM.from_pretrained( + "EleutherAI/gpt-neo-125m", + load_in_8bit=True, + device_map="auto", +) + +trainer = SFTTrainer( + model, + train_dataset=dataset, + args=SFTConfig(), + peft_config=peft_config, +) + +trainer.train() +``` + +## Using Flash Attention and Flash Attention 2 + +You can benefit from Flash Attention 1 & 2 using SFTTrainer out of the box with minimal changes of code. +First, to make sure you have all the latest features from transformers, install transformers from source + +```bash +pip install -U git+https://github.com/huggingface/transformers.git +``` + +Note that Flash Attention only works on GPU now and under half-precision regime (when using adapters, base model loaded in half-precision) +Note also both features are perfectly compatible with other tools such as quantization. + +### Using Flash-Attention 1 + +For Flash Attention 1 you can use the `BetterTransformer` API and force-dispatch the API to use Flash Attention kernel. First, install the latest optimum package: + +```bash +pip install -U optimum +``` + +Once you have loaded your model, wrap the `trainer.train()` call under the `with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):` context manager: + +```diff +... + ++ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): + trainer.train() +``` + +Note that you cannot train your model using Flash Attention 1 on an arbitrary dataset as `torch.scaled_dot_product_attention` does not support training with padding tokens if you use Flash Attention kernels. Therefore you can only use that feature with `packing=True`. If your dataset contains padding tokens, consider switching to Flash Attention 2 integration. + +Below are some numbers you can get in terms of speedup and memory efficiency, using Flash Attention 1, on a single NVIDIA-T4 16GB. 
+ +| use_flash_attn_1 | model_name | max_seq_len | batch_size | time per training step | +| ---------------- | ----------------- | ----------- | ---------- | ---------------------- | +| x | facebook/opt-350m | 2048 | 8 | ~59.1s | +| | facebook/opt-350m | 2048 | 8 | **OOM** | +| x | facebook/opt-350m | 2048 | 4 | ~30.3s | +| | facebook/opt-350m | 2048 | 4 | ~148.9s | + +### Using Flash Attention-2 + +To use Flash Attention 2, first install the latest `flash-attn` package: + +```bash +pip install -U flash-attn +``` + +And add `attn_implementation="flash_attention_2"` when calling `from_pretrained`: + +```python +model = AutoModelForCausalLM.from_pretrained( + model_id, + load_in_4bit=True, + attn_implementation="flash_attention_2" +) +``` + +If you don't use quantization, make sure your model is loaded in half-precision and dispatch your model on a supported GPU device. +After loading your model, you can either train it as it is, or attach adapters and train adapters on it in case your model is quantized. + +In contrast to Flash Attention 1, the integration makes it possible to train your model on an arbitrary dataset that also includes padding tokens. + + +### Using model creation utility + +We included a utility function to create your model. + +[[autodoc]] ModelConfig + +```python +from trl import ModelConfig, SFTTrainer, get_kbit_device_map, get_peft_config, get_quantization_config +model_config = ModelConfig( + model_name_or_path="facebook/opt-350m" + attn_implementation=None, # or "flash_attention_2" +) +torch_dtype = ( + model_config.torch_dtype + if model_config.torch_dtype in ["auto", None] + else getattr(torch, model_config.torch_dtype) +) +quantization_config = get_quantization_config(model_config) +model_kwargs = dict( + revision=model_config.model_revision, + trust_remote_code=model_config.trust_remote_code, + attn_implementation=model_config.attn_implementation, + torch_dtype=torch_dtype, + use_cache=False if training_args.gradient_checkpointing else True, + device_map=get_kbit_device_map() if quantization_config is not None else None, + quantization_config=quantization_config, +) +model = AutoModelForCausalLM.from_pretrained(model_config.model_name_or_path, **model_kwargs) +trainer = SFTTrainer( + ..., + model=model_config.model_name_or_path, + peft_config=get_peft_config(model_config), +) +``` + +### Enhance the model's performances using NEFTune + +NEFTune is a technique to boost the performance of chat models and was introduced by the paper ["NEFTune: Noisy Embeddings Improve Instruction Finetuning"](https://huggingface.co./papers/2310.05914) from Jain et al. it consists of adding noise to the embedding vectors during training. According to the abstract of the paper: + +> Standard finetuning of LLaMA-2-7B using Alpaca achieves 29.79% on AlpacaEval, which rises to 64.69% using noisy embeddings. NEFTune also improves over strong baselines on modern instruction datasets. Models trained with Evol-Instruct see a 10% improvement, with ShareGPT an 8% improvement, and with OpenPlatypus an 8% improvement. Even powerful models further refined with RLHF such as LLaMA-2-Chat benefit from additional training with NEFTune. + +
+ +To use it in `SFTTrainer` simply pass `neftune_noise_alpha` when creating your `SFTConfig` instance. Note that to avoid any surprising behaviour, NEFTune is disabled after training to retrieve back the original behaviour of the embedding layer. + +```python +from datasets import load_dataset +from trl import SFTConfig, SFTTrainer + +dataset = load_dataset("imdb", split="train") + +sft_config = SFTConfig( + neftune_noise_alpha=5, +) +trainer = SFTTrainer( + "facebook/opt-350m", + train_dataset=dataset, + args=sft_config, +) +trainer.train() +``` + +We have tested NEFTune by training `mistralai/Mistral-7B-v0.1` on the [OpenAssistant dataset](https://huggingface.co./datasets/timdettmers/openassistant-guanaco) and validated that using NEFTune led to a performance boost of ~25% on MT Bench. + +
+ +Note however, that the amount of performance gain is _dataset dependent_ and in particular, applying NEFTune on synthetic datasets like [UltraChat](https://huggingface.co./datasets/stingning/ultrachat) typically produces smaller gains. + +### Accelerate fine-tuning 2x using `unsloth` + +You can further accelerate QLoRA / LoRA (2x faster, 60% less memory) using the [`unsloth`](https://github.com/unslothai/unsloth) library that is fully compatible with `SFTTrainer`. Currently `unsloth` supports only Llama (Yi, TinyLlama, Qwen, Deepseek etc) and Mistral architectures. Some benchmarks on 1x A100 listed below: + +| 1 A100 40GB | Dataset | 🤗 | 🤗 + Flash Attention 2 | 🦥 Unsloth | 🦥 VRAM saved | +| --------------- | --------- | --- | --------------------- | --------- | ------------ | +| Code Llama 34b | Slim Orca | 1x | 1.01x | **1.94x** | -22.7% | +| Llama-2 7b | Slim Orca | 1x | 0.96x | **1.87x** | -39.3% | +| Mistral 7b | Slim Orca | 1x | 1.17x | **1.88x** | -65.9% | +| Tiny Llama 1.1b | Alpaca | 1x | 1.55x | **2.74x** | -57.8% | + +First install `unsloth` according to the [official documentation](https://github.com/unslothai/unsloth). Once installed, you can incorporate unsloth into your workflow in a very simple manner; instead of loading `AutoModelForCausalLM`, you just need to load a `FastLanguageModel` as follows: + +```python +import torch +from trl import SFTConfig, SFTTrainer +from unsloth import FastLanguageModel + +max_seq_length = 2048 # Supports automatic RoPE Scaling, so choose any number + +# Load model +model, tokenizer = FastLanguageModel.from_pretrained( + model_name="unsloth/mistral-7b", + max_seq_length=max_seq_length, + dtype=None, # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ + load_in_4bit=True, # Use 4bit quantization to reduce memory usage. Can be False + # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf +) + +# Do model patching and add fast LoRA weights +model = FastLanguageModel.get_peft_model( + model, + r=16, + target_modules=[ + "q_proj", + "k_proj", + "v_proj", + "o_proj", + "gate_proj", + "up_proj", + "down_proj", + ], + lora_alpha=16, + lora_dropout=0, # Dropout = 0 is currently optimized + bias="none", # Bias = "none" is currently optimized + use_gradient_checkpointing=True, + random_state=3407, +) + +args = SFTConfig( + output_dir="./output", + max_seq_length=max_seq_length, + dataset_text_field="text", +) + +trainer = SFTTrainer( + model=model, + args=args, + train_dataset=dataset, +) +trainer.train() +``` + +The saved model is fully compatible with Hugging Face's transformers library. Learn more about unsloth in their [official repository](https://github.com/unslothai/unsloth). + +## Best practices + +Pay attention to the following best practices when training a model with that trainer: + +- [`SFTTrainer`] always pads by default the sequences to the `max_seq_length` argument of the [`SFTTrainer`]. If none is passed, the trainer will retrieve that value from the tokenizer. Some tokenizers do not provide a default value, so there is a check to retrieve the minimum between 2048 and that value. Make sure to check it before training. +- For training adapters in 8bit, you might need to tweak the arguments of the `prepare_model_for_kbit_training` method from PEFT, hence we advise users to use `prepare_in_int8_kwargs` field, or create the `PeftModel` outside the [`SFTTrainer`] and pass it. 
+- For a more memory-efficient training using adapters, you can load the base model in 8bit, for that simply add `load_in_8bit` argument when creating the [`SFTTrainer`], or create a base model in 8bit outside the trainer and pass it. +- If you create a model outside the trainer, make sure to not pass to the trainer any additional keyword arguments that are relative to `from_pretrained()` method. + +## Multi-GPU Training + +Trainer (and thus SFTTrainer) supports multi-GPU training. If you run your script with `python script.py` it will default to using DP as the strategy, which may be [slower than expected](https://github.com/huggingface/trl/issues/1303). To use DDP (which is generally recommended, see [here](https://huggingface.co./docs/transformers/en/perf_train_gpu_many?select-gpu=Accelerate#data-parallelism) for more info) you must launch the script with `python -m torch.distributed.launch script.py` or `accelerate launch script.py`. For DDP to work you must also check the following: +- If you're using gradient_checkpointing, add the following to the TrainingArguments: `gradient_checkpointing_kwargs={'use_reentrant':False}` (more info [here](https://github.com/huggingface/transformers/issues/26969) +- Ensure that the model is placed on the correct device: +```python +from accelerate import PartialState +device_string = PartialState().process_index +model = AutoModelForCausalLM.from_pretrained( + ... + device_map={'':device_string} +) +``` + +## GPTQ Conversion + +You may experience some issues with GPTQ Quantization after completing training. Lowering `gradient_accumulation_steps` to `4` will resolve most issues during the quantization process to GPTQ format. + +## Extending `SFTTrainer` for Vision Language Models + +`SFTTrainer` does not inherently support vision-language data. However, we provide a guide on how to tweak the trainer to support vision-language data. Specifically, you need to use a custom data collator that is compatible with vision-language data. This guide outlines the steps to make these adjustments. For a concrete example, refer to the script [`examples/scripts/vsft_llava.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/vsft_llava.py) which demonstrates how to fine-tune the LLaVA 1.5 model on the [HuggingFaceH4/llava-instruct-mix-vsft](https://huggingface.co./datasets/HuggingFaceH4/llava-instruct-mix-vsft) dataset. + +### Preparing the Data + +The data format is flexible, provided it is compatible with the custom collator that we will define later. A common approach is to use conversational data. Given that the data includes both text and images, the format needs to be adjusted accordingly. 
Below is an example of a conversational data format involving both text and images: + +```python +images = ["obama.png"] +messages = [ + { + "role": "user", + "content": [ + {"type": "text", "text": "Who is this?"}, + {"type": "image"} + ] + }, + { + "role": "assistant", + "content": [ + {"type": "text", "text": "Barack Obama"} + ] + }, + { + "role": "user", + "content": [ + {"type": "text", "text": "What is he famous for?"} + ] + }, + { + "role": "assistant", + "content": [ + {"type": "text", "text": "He is the 44th President of the United States."} + ] + } +] +``` + +To illustrate how this data format will be processed using the LLaVA model, you can use the following code: + +```python +from transformers import AutoProcessor + +processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf") +print(processor.apply_chat_template(messages, tokenize=False)) +``` + +The output will be formatted as follows: + +```txt +Who is this? ASSISTANT: Barack Obama USER: What is he famous for? ASSISTANT: He is the 44th President of the United States. +``` + + + + +### A custom collator for processing multi-modal data + +Unlike the default behavior of `SFTTrainer`, processing multi-modal data is done on the fly during the data collation process. To do this, you need to define a custom collator that processes both the text and images. This collator must take a list of examples as input (see the previous section for an example of the data format) and return a batch of processed data. Below is an example of such a collator: + +```python +def collate_fn(examples): + # Get the texts and images, and apply the chat template + texts = [processor.apply_chat_template(example["messages"], tokenize=False) for example in examples] + images = [example["images"][0] for example in examples] + + # Tokenize the texts and process the images + batch = processor(texts, images, return_tensors="pt", padding=True) + + # The labels are the input_ids, and we mask the padding tokens in the loss computation + labels = batch["input_ids"].clone() + labels[labels == processor.tokenizer.pad_token_id] = -100 + batch["labels"] = labels + + return batch +``` + +We can verify that the collator works as expected by running the following code: + +```python +from datasets import load_dataset + +dataset = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft", split="train") +examples = [dataset[0], dataset[1]] # Just two examples for the sake of the example +collated_data = collate_fn(examples) +print(collated_data.keys()) # dict_keys(['input_ids', 'attention_mask', 'pixel_values', 'labels']) +``` + +### Training the vision-language model + +Now that we have prepared the data and defined the collator, we can proceed with training the model. To ensure that the data is not processed as text-only, we need to set a couple of arguments in the `SFTConfig`, specifically `dataset_text_field` and `remove_unused_columns`. We also need to set `skip_prepare_dataset` to `True` to avoid the default processing of the dataset. Below is an example of how to set up the `SFTTrainer`. 
+ +```python +args.dataset_text_field = "" # needs a dummy field +args.remove_unused_columns = False +args.dataset_kwargs = {"skip_prepare_dataset": True} + +trainer = SFTTrainer( + model=model, + args=args, + data_collator=collate_fn, + train_dataset=train_dataset, + tokenizer=processor.tokenizer, +) +``` + +A full example of training LLaVa 1.5 on the [HuggingFaceH4/llava-instruct-mix-vsft](https://huggingface.co./datasets/HuggingFaceH4/llava-instruct-mix-vsft) dataset can be found in the script [`examples/scripts/vsft_llava.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/vsft_llava.py). + +- [Experiment tracking](https://wandb.ai/huggingface/trl/runs/2b2c5l7s) +- [Trained model](https://huggingface.co./HuggingFaceH4/sft-llava-1.5-7b-hf) + +## SFTTrainer + +[[autodoc]] SFTTrainer + +## SFTConfig + +[[autodoc]] SFTConfig + +## Datasets + +In the SFTTrainer we smartly support `datasets.IterableDataset` in addition to other style datasets. This is useful if you are using large corpora that you do not want to save all to disk. The data will be tokenized and processed on the fly, even when packing is enabled. + +Additionally, in the SFTTrainer, we support pre-tokenized datasets if they are `datasets.Dataset` or `datasets.IterableDataset`. In other words, if such a dataset has a column of `input_ids`, no further processing (tokenization or packing) will be done, and the dataset will be used as-is. This can be useful if you have pretokenized your dataset outside of this script and want to re-use it directly. + +### ConstantLengthDataset + +[[autodoc]] trainer.ConstantLengthDataset diff --git a/trl_md_files/trainer.mdx b/trl_md_files/trainer.mdx new file mode 100644 index 0000000000000000000000000000000000000000..621f92e7ad5ad577f17985a3e34c124db62a99cd --- /dev/null +++ b/trl_md_files/trainer.mdx @@ -0,0 +1,70 @@ +# Trainer + +At TRL we support PPO (Proximal Policy Optimisation) with an implementation that largely follows the structure introduced in the paper "Fine-Tuning Language Models from Human Preferences" by D. Ziegler et al. [[paper](https://huggingface.co./papers/1909.08593), [code](https://github.com/openai/lm-human-preferences)]. +The Trainer and model classes are largely inspired from `transformers.Trainer` and `transformers.AutoModel` classes and adapted for RL. +We also support a `RewardTrainer` that can be used to train a reward model. 
+ + +## CPOConfig + +[[autodoc]] CPOConfig + +## CPOTrainer + +[[autodoc]] CPOTrainer + +## DDPOConfig + +[[autodoc]] DDPOConfig + +## DDPOTrainer + +[[autodoc]] DDPOTrainer + +## DPOTrainer + +[[autodoc]] DPOTrainer + +## IterativeSFTTrainer + +[[autodoc]] IterativeSFTTrainer + +## KTOConfig + +[[autodoc]] KTOConfig + +## KTOTrainer + +[[autodoc]] KTOTrainer + +## ORPOConfig + +[[autodoc]] ORPOConfig + +## ORPOTrainer + +[[autodoc]] ORPOTrainer + +## PPOConfig + +[[autodoc]] PPOConfig + +## PPOTrainer + +[[autodoc]] PPOTrainer + +## RewardConfig + +[[autodoc]] RewardConfig + +## RewardTrainer + +[[autodoc]] RewardTrainer + +## SFTTrainer + +[[autodoc]] SFTTrainer + +## set_seed + +[[autodoc]] set_seed diff --git a/trl_md_files/using_llama_models.mdx b/trl_md_files/using_llama_models.mdx new file mode 100644 index 0000000000000000000000000000000000000000..cf602d2030400b00fe91749a8e49438bbfb90c4c --- /dev/null +++ b/trl_md_files/using_llama_models.mdx @@ -0,0 +1,160 @@ +# Using LLaMA models with TRL + +We've begun rolling out examples to use Meta's LLaMA models in `trl` (see [Meta's LLaMA release](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) for the original LLaMA model). + +## Efficient training strategies + +Even training the smallest LLaMA model requires an enormous amount of memory. Some quick math: in bf16, every parameter uses 2 bytes (in fp32 4 bytes) in addition to 8 bytes used, e.g., in the Adam optimizer (see the [performance docs](https://huggingface.co./docs/transformers/perf_train_gpu_one#optimizer) in Transformers for more info). So a 7B parameter model would use `(2+8)*7B=70GB` just to fit in memory and would likely need more when you compute intermediate values such as attention scores. So you couldn’t train the model even on a single 80GB A100 like that. You can use some tricks, like more efficient optimizers of half-precision training, to squeeze a bit more into memory, but you’ll run out sooner or later. + +Another option is to use Parameter-Efficient Fine-Tuning (PEFT) techniques, such as the [`peft`](https://github.com/huggingface/peft) library, which can perform low-rank adaptation (LoRA) on a model loaded in 8-bit. +For more on `peft` + `trl`, see the [docs](https://huggingface.co./docs/trl/sentiment_tuning_peft). + +Loading the model in 8bit reduces the memory footprint drastically since you only need one byte per parameter for the weights (e.g. 7B LlaMa is 7GB in memory). +Instead of training the original weights directly, LoRA adds small adapter layers on top of some specific layers (usually the attention layers); thus, the number of trainable parameters is drastically reduced. + +In this scenario, a rule of thumb is to allocate ~1.2-1.4GB per billion parameters (depending on the batch size and sequence length) to fit the entire fine-tuning setup. +This enables fine-tuning larger models (up to 50-60B scale models on a NVIDIA A100 80GB) at low cost. + +Now we can fit very large models into a single GPU, but the training might still be very slow. +The simplest strategy in this scenario is data parallelism: we replicate the same training setup into separate GPUs and pass different batches to each GPU. +With this, you can parallelize the forward/backward passes of the model and scale with the number of GPUs. 
+ +![chapter10_ddp.png](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/blog/stackllama/chapter10_ddp.png) + +We use either the `transformers.Trainer` or `accelerate`, which both support data parallelism without any code changes, by simply passing arguments when calling the scripts with `torchrun` or `accelerate launch`. The following runs a training script with 8 GPUs on a single machine with `accelerate` and `torchrun`, respectively. + +```bash +accelerate launch --multi_gpu --num_machines 1 --num_processes 8 my_accelerate_script.py +torchrun --nnodes 1 --nproc_per_node 8 my_torch_script.py +``` + +## Supervised fine-tuning + +Before we start training reward models and tuning our model with RL, it helps if the model is already good in the domain we are interested in. +In our case, we want it to answer questions, while for other use cases, we might want it to follow instructions, in which case instruction tuning is a great idea. +The easiest way to achieve this is by continuing to train the language model with the language modeling objective on texts from the domain or task. +The [StackExchange dataset](https://huggingface.co./datasets/HuggingFaceH4/stack-exchange-preferences) is enormous (over 10 million instructions), so we can easily train the language model on a subset of it. + +There is nothing special about fine-tuning the model before doing RLHF - it’s just the causal language modeling objective from pretraining that we apply here. +To use the data efficiently, we use a technique called packing: instead of having one text per sample in the batch and then padding to either the longest text or the maximal context of the model, we concatenate a lot of texts with a EOS token in between and cut chunks of the context size to fill the batch without any padding. + +![chapter10_preprocessing-clm.png](https://huggingface.co./datasets/trl-internal-testing/example-images/resolve/main/blog/stackllama/chapter10_preprocessing-clm.png) + +With this approach the training is much more efficient as each token that is passed through the model is also trained in contrast to padding tokens which are usually masked from the loss. +If you don't have much data and are more concerned about occasionally cutting off some tokens that are overflowing the context you can also use a classical data loader. + +The packing is handled by the `ConstantLengthDataset` and we can then use the `Trainer` after loading the model with `peft`. First, we load the model in int8, prepare it for training, and then add the LoRA adapters. + +```python +# load model in 8bit +model = AutoModelForCausalLM.from_pretrained( + args.model_path, + load_in_8bit=True, + device_map={"": Accelerator().local_process_index} + ) +model = prepare_model_for_kbit_training(model) + +# add LoRA to model +lora_config = LoraConfig( + r=16, + lora_alpha=32, + lora_dropout=0.05, + bias="none", + task_type="CAUSAL_LM", +) + +model = get_peft_model(model, config) +``` + +We train the model for a few thousand steps with the causal language modeling objective and save the model. +Since we will tune the model again with different objectives, we merge the adapter weights with the original model weights. + +**Disclaimer:** due to LLaMA's license, we release only the adapter weights for this and the model checkpoints in the following sections. 
+You can apply for access to the base model's weights by filling out Meta AI's [form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform) and then converting them to the 🤗 Transformers format by running this [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). +Note that you'll also need to install 🤗 Transformers from source until the `v4.28` is released. + +Now that we have fine-tuned the model for the task, we are ready to train a reward model. + +## Reward modeling and human preferences + +In principle, we could fine-tune the model using RLHF directly with the human annotations. +However, this would require us to send some samples to humans for rating after each optimization iteration. +This is expensive and slow due to the number of training samples needed for convergence and the inherent latency of human reading and annotator speed. + +A trick that works well instead of direct feedback is training a reward model on human annotations collected before the RL loop. +The goal of the reward model is to imitate how a human would rate a text. There are several possible strategies to build a reward model: the most straightforward way would be to predict the annotation (e.g. a rating score or a binary value for “good”/”bad”). +In practice, what works better is to predict the ranking of two examples, where the reward model is presented with two candidates `(y_k, y_j)` for a given prompt `x` and has to predict which one would be rated higher by a human annotator. + +With the StackExchange dataset, we can infer which of the two answers was preferred by the users based on the score. +With that information and the loss defined above, we can then modify the `transformers.Trainer` by adding a custom loss function. + +```python +class RewardTrainer(Trainer): + def compute_loss(self, model, inputs, return_outputs=False): + rewards_j = model(input_ids=inputs["input_ids_j"], attention_mask=inputs["attention_mask_j"])[0] + rewards_k = model(input_ids=inputs["input_ids_k"], attention_mask=inputs["attention_mask_k"])[0] + loss = -nn.functional.logsigmoid(rewards_j - rewards_k).mean() + if return_outputs: + return loss, {"rewards_j": rewards_j, "rewards_k": rewards_k} + return loss +``` + +We utilize a subset of a 100,000 pair of candidates and evaluate on a held-out set of 50,000. With a modest training batch size of 4, we train the Llama model using the LoRA `peft` adapter for a single epoch using the Adam optimizer with BF16 precision. Our LoRA configuration is: + +```python +peft_config = LoraConfig( + task_type=TaskType.SEQ_CLS, + inference_mode=False, + r=8, + lora_alpha=32, + lora_dropout=0.1, +) +``` +As detailed in the next section, the resulting adapter can be merged into the frozen model and saved for further downstream use. + +## Reinforcement Learning from Human Feedback + +With the fine-tuned language model and the reward model at hand, we are now ready to run the RL loop. It follows roughly three steps: + +1. Generate responses from prompts, +2. Rate the responses with the reward model, +3. Run a reinforcement learning policy-optimization step with the ratings. + +The Query and Response prompts are templated as follows before being tokenized and passed to the model: + +```bash +Question: + +Answer: +``` + +The same template was used for SFT, RM and RLHF stages. +Once more, we utilize `peft` for memory-efficient training, which offers an extra advantage in the RLHF context. 
+Here, the reference model and policy share the same base, the SFT model, which we load in 8-bit and freeze during training. +We exclusively optimize the policy's LoRA weights using PPO while sharing the base model's weights. + +```python +for epoch, batch in tqdm(enumerate(ppo_trainer.dataloader)): + question_tensors = batch["input_ids"] + + # sample from the policy and to generate responses + response_tensors = ppo_trainer.generate( + question_tensors, + return_prompt=False, + length_sampler=output_length_sampler, + **generation_kwargs, + ) + batch["response"] = tokenizer.batch_decode(response_tensors, skip_special_tokens=True) + + # Compute sentiment score + texts = [q + r for q, r in zip(batch["query"], batch["response"])] + pipe_outputs = sentiment_pipe(texts, **sent_kwargs) + rewards = [torch.tensor(output[0]["score"] - script_args.reward_baseline) for output in pipe_outputs] + + # Run PPO step + stats = ppo_trainer.step(question_tensors, response_tensors, rewards) + # Log stats to Wandb + ppo_trainer.log_stats(stats, batch, rewards) +``` + +For the rest of the details and evaluation, please refer to our [blog post on StackLLaMA](https://huggingface.co./blog/stackllama).
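As a closing note, the adapter merging mentioned in the supervised fine-tuning and reward modeling sections can be sketched with `peft` as follows (the checkpoint paths are placeholders):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# load the frozen base model and attach the trained LoRA adapter (paths are hypothetical)
base_model = AutoModelForCausalLM.from_pretrained("path/to/llama-base", torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, "path/to/trained-adapter")

# fold the LoRA weights into the base weights and save a standalone checkpoint
merged_model = model.merge_and_unload()
merged_model.save_pretrained("path/to/merged-model")
```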