{"tokens": 1869, "doc_id": "f2a017cd-c6a2-4611-b722-10951ad23a91", "name": "Welcome to LlamaIndex 🦙 !", "url": "https://docs.llamaindex.ai/en/stable/index", "retrieve_doc": true, "source": "llama_index", "content": "\n\n# Welcome to LlamaIndex 🦙 !\n\nLlamaIndex is a framework for building context-augmented generative AI applications with [LLMs](https://en.wikipedia.org/wiki/Large_language_model).\n\n
\n\n- [Introduction](#introduction)\n\n What is context augmentation? How does LlamaIndex help?\n\n- [Use cases](#use-cases)\n\n What kind of apps can you build with LlamaIndex? Who should use it?\n\n- [Getting started](#getting-started)\n\n Get started in Python or TypeScript in just 5 lines of code!\n\n- [LlamaCloud](#llamacloud)\n\n Managed services for LlamaIndex including [LlamaParse](https://docs.cloud.llamaindex.ai/llamaparse/getting_started), the world's best document parser.\n\n- [Community](#community)\n\n Get help and meet collaborators on Discord, Twitter, LinkedIn, and learn how to contribute to the project.\n\n- [Related projects](#related-projects)\n\n Check out our library of connectors, readers, and other integrations at [LlamaHub](https://llamahub.ai) as well as demos and starter apps like [create-llama](https://www.npmjs.com/package/create-llama).\n\n
\n\n## Introduction\n\n### What is context augmentation?\n\nLLMs offer a natural language interface between humans and data. LLMs come pre-trained on huge amounts of publicly available data, but they are not trained on **your** data. Your data may be private or specific to the problem you're trying to solve. It's behind APIs, in SQL databases, or trapped in PDFs and slide decks.\n\nContext augmentation makes your data available to the LLM to solve the problem at hand. LlamaIndex provides the tools to build any context-augmentation use case, from prototype to production. Our tools allow you to ingest, parse, index and process your data and quickly implement complex query workflows combining data access with LLM prompting.\n\nThe most popular example of context augmentation is [Retrieval-Augmented Generation or RAG](./getting_started/concepts.md), which combines context with LLMs at inference time.\n\n### LlamaIndex is the Data Framework for Context-Augmented LLM Apps\n\nLlamaIndex imposes no restriction on how you use LLMs. You can use LLMs as auto-complete, chatbots, semi-autonomous agents, and more. It just makes using them easier. We provide tools like:\n\n- **Data connectors** ingest your existing data from their native source and format. These could be APIs, PDFs, SQL, and (much) more.\n- **Data indexes** structure your data in intermediate representations that are easy and performant for LLMs to consume.\n- **Engines** provide natural language access to your data. For example:\n    - Query engines are powerful interfaces for question-answering (e.g. a RAG pipeline).\n    - Chat engines are conversational interfaces for multi-message, \"back and forth\" interactions with your data.\n- **Agents** are LLM-powered knowledge workers augmented by tools, from simple helper functions to API integrations and more.\n- **Observability/Evaluation** integrations that enable you to rigorously experiment, evaluate, and monitor your app in a virtuous cycle.\n\n## Use cases\n\nSome popular use cases for LlamaIndex and context augmentation in general include:\n\n- [Question-Answering](./use_cases/q_and_a/index.md) (Retrieval-Augmented Generation aka RAG)\n- [Chatbots](./use_cases/chatbots.md)\n- [Document Understanding and Data Extraction](./use_cases/extraction.md)\n- [Autonomous Agents](./use_cases/agents.md) that can perform research and take actions\n- [Multi-modal applications](./use_cases/multimodal.md) that combine text, images, and other data types\n- [Fine-tuning](./use_cases/fine_tuning.md) models on data to improve performance\n\nCheck out our [use cases](./use_cases/index.md) documentation for more examples and links to tutorials.\n\n### 👨‍👩‍👧‍👦 Who is LlamaIndex for?\n\nLlamaIndex provides tools for beginners, advanced users, and everyone in between.\n\nOur high-level API allows beginner users to use LlamaIndex to ingest and query their data in 5 lines of code.\n\nFor more complex applications, our lower-level APIs allow advanced users to customize and extend any module—data connectors, indices, retrievers, query engines, reranking modules—to fit their needs.\n\n## Getting Started\n\nLlamaIndex is available in Python (these docs) and [TypeScript](https://ts.llamaindex.ai/). If you're not sure where to start, we recommend reading [how to read these docs](./getting_started/reading.md), which will point you to the right place based on your experience level.\n\n### 30 second quickstart\n\nSet an environment variable called `OPENAI_API_KEY` with an [OpenAI API key](https://platform.openai.com/api-keys). Install the Python library:\n\n```bash\npip install llama-index\n```\n\nPut some documents in a folder called `data`, then ask questions about them with our famous 5-line starter:\n\n```python\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n\ndocuments = SimpleDirectoryReader(\"data\").load_data()\nindex = VectorStoreIndex.from_documents(documents)\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"Some question about the data should go here\")\nprint(response)\n```\n\nIf any part of this trips you up, don't worry! Check out our more comprehensive starter tutorials using [remote APIs like OpenAI](./getting_started/starter_example.md) or [any model that runs on your laptop](./getting_started/starter_example_local.md).\n\n
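Once the starter is working, a common next step is to persist the index so your documents aren't re-parsed and re-embedded on every run. A minimal sketch, assuming the default local storage backend (the `./storage` path is an arbitrary choice, not a requirement):\n\n```python\nimport os.path\n\nfrom llama_index.core import (\n    VectorStoreIndex,\n    SimpleDirectoryReader,\n    StorageContext,\n    load_index_from_storage,\n)\n\nPERSIST_DIR = \"./storage\"  # arbitrary location for the saved index\n\nif not os.path.exists(PERSIST_DIR):\n    # first run: build the index and save it to disk\n    documents = SimpleDirectoryReader(\"data\").load_data()\n    index = VectorStoreIndex.from_documents(documents)\n    index.storage_context.persist(persist_dir=PERSIST_DIR)\nelse:\n    # later runs: reload the saved index instead of re-embedding\n    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)\n    index = load_index_from_storage(storage_context)\n```\n\n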
## LlamaCloud\n\nIf you're an enterprise developer, check out [**LlamaCloud**](https://llamaindex.ai/enterprise). It is an end-to-end managed service for data parsing, ingestion, indexing, and retrieval, allowing you to get production-quality data for your production LLM application. It's available both hosted on our servers and as a self-hosted solution.\n\n### LlamaParse\n\nLlamaParse is our state-of-the-art document parsing solution. It's available as part of LlamaCloud and also as a self-serve API. You can [sign up](https://cloud.llamaindex.ai/) and parse up to 1000 pages/day for free, or enter a credit card for unlimited parsing. [Learn more](https://llamaindex.ai/enterprise).\n\n## Community\n\nNeed help? Have a feature suggestion? Join the LlamaIndex community:\n\n- [Twitter](https://twitter.com/llama_index)\n- [Discord](https://discord.gg/dGcwcsnxhU)\n- [LinkedIn](https://www.linkedin.com/company/llamaindex/)\n\n### Getting the library\n\n- LlamaIndex Python\n    - [LlamaIndex Python Github](https://github.com/run-llama/llama_index)\n    - [Python Docs](https://docs.llamaindex.ai/) (what you're reading now)\n    - [LlamaIndex on PyPI](https://pypi.org/project/llama-index/)\n- LlamaIndex.TS (TypeScript/JavaScript package):\n    - [LlamaIndex.TS Github](https://github.com/run-llama/LlamaIndexTS)\n    - [TypeScript Docs](https://ts.llamaindex.ai/)\n    - [LlamaIndex.TS on npm](https://www.npmjs.com/package/llamaindex)\n\n### Contributing\n\nWe are open-source and always welcome contributions to the project! Check out our [contributing guide](./CONTRIBUTING.md) for full details on how to extend the core library or add an integration with a third party like an LLM, a vector store, an agent tool and more.\n\n## Related projects\n\nThere's more to the LlamaIndex universe! Check out some of our other projects:\n\n- [LlamaHub](https://llamahub.ai) | A large (and growing!) 
collection of custom data connectors\n- [SEC Insights](https://secinsights.ai) | A LlamaIndex-powered application for financial research\n- [create-llama](https://www.npmjs.com/package/create-llama) | A CLI tool to quickly scaffold LlamaIndex projects"} {"tokens": 979, "doc_id": "4ce1a9a2-e91a-47ae-9cbe-0566b5db3acb", "name": "Building an LLM application", "url": "https://docs.llamaindex.ai/en/stable/understanding/index", "retrieve_doc": true, "source": "llama_index", "content": "# Building an LLM application\n\nWelcome to the beginning of Understanding LlamaIndex. This is a series of short, bite-sized tutorials on every stage of building an LLM application to get you acquainted with how to use LlamaIndex before diving into more advanced and subtle strategies. If you're an experienced programmer new to LlamaIndex, this is the place to start.\n\n## Key steps in building an LLM application\n\n!!! tip\n    If you've already read our [high-level concepts](../getting_started/concepts.md) page you'll recognize several of these steps.\n\nThis tutorial has two main parts: **Building a RAG pipeline** and **Building an agent**, with some smaller sections before and after. Here's what to expect:\n\n- **[Using LLMs](./using_llms/using_llms.md)**: hit the ground running by getting started working with LLMs. We'll show you how to use any of our [dozens of supported LLMs](../module_guides/models/llms/modules/), whether via remote API calls or running locally on your machine.\n\n- **Building a RAG pipeline**: Retrieval-Augmented Generation (RAG) is a key technique for getting your data into an LLM, and a component of more sophisticated agentic systems. We'll show you how to build a full-featured RAG pipeline that can answer questions about your data. This includes:\n\n    - **[Loading & Ingestion](./loading/loading.md)**: Getting your data from wherever it lives, whether that's unstructured text, PDFs, databases, or APIs to other applications. LlamaIndex has hundreds of connectors to every data source over at [LlamaHub](https://llamahub.ai/).\n\n    - **[Indexing and Embedding](./indexing/indexing.md)**: Once you've got your data there are an infinite number of ways to structure access to that data to ensure your application is always working with the most relevant data. LlamaIndex has a huge number of these strategies built-in and can help you select the best ones.\n\n    - **[Storing](./storing/storing.md)**: You will probably find it more efficient to store your data in indexed form, or pre-processed summaries provided by an LLM, often in a specialized database known as a `Vector Store` (see below). You can also store your indexes, metadata and more.\n\n    - **[Querying](./querying/querying.md)**: Every indexing strategy has a corresponding querying strategy and there are lots of ways to improve the relevance, speed and accuracy of what you retrieve and what the LLM does with it before returning it to you, including turning it into structured responses that can be consumed by an API.\n\n- **Building an agent**: agents are LLM-powered knowledge workers that can interact with the world via a set of tools. Those tools can be RAG engines like the ones you learned how to build in the previous section, or any arbitrary code. 
This tutorial includes:\n\n    - **[Building a basic agent](./agent/basic_agent.md)**: We show you how to build a simple agent that can interact with the world via a set of tools.\n\n    - **[Using local models with agents](./agent/local_models.md)**: Agents can be built to use local models, which can be important for performance or privacy reasons.\n\n    - **[Adding RAG to an agent](./agent/rag_agent.md)**: The RAG pipelines you built in the previous tutorial can be used as a tool by an agent, giving your agent powerful information-retrieval capabilities.\n\n    - **[Adding other tools](./agent/tools.md)**: Let's add more sophisticated tools to your agent, such as API integrations.\n\n- **[Putting it all together](./putting_it_all_together/index.md)**: whether you are building question answering, chatbots, an API, or an autonomous agent, we show you how to get your application into production.\n\n- **[Tracing and debugging](./tracing_and_debugging/tracing_and_debugging.md)**: also called **observability**, it's especially important with LLM applications to be able to look into the inner workings of what's going on to help you debug problems and spot places to improve.\n\n- **[Evaluating](./evaluating/evaluating.md)**: every strategy has pros and cons, and a key part of building, shipping and evolving your application is evaluating whether each change has improved it in terms of accuracy, performance, clarity, cost and more. Reliably evaluating your changes is a crucial part of LLM application development.\n\n## Let's get started!\n\nReady to dive in? Head to [using LLMs](./using_llms/using_llms.md)."} {"tokens": 182, "doc_id": "5b64e132-a551-4e6f-9c95-2606810cae8c", "name": "Privacy and Security", "url": "https://docs.llamaindex.ai/en/stable/understanding/using_llms/privacy", "retrieve_doc": true, "source": "llama_index", "content": "# Privacy and Security\n\nBy default, LlamaIndex sends your data to OpenAI for generating embeddings and natural language responses. However, it is important to note that this can be configured according to your preferences. LlamaIndex provides the flexibility to use your own embedding model or run a large language model locally if desired.\n\n## Data Privacy\n\nRegarding data privacy, when using LlamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. Each service other than OpenAI has its own policies as well.\n\n## Vector stores\n\nLlamaIndex offers modules that connect your indexes to external vector stores for storing embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LlamaIndex does not assume responsibility for how it handles or uses your data. By default, LlamaIndex stores your embeddings locally."} {"tokens": 869, "doc_id": "7be87819-70df-4a9c-b558-ea795bb332d3", "name": "Using LLMs", "url": "https://docs.llamaindex.ai/en/stable/understanding/using_llms/using_llms", "retrieve_doc": true, "source": "llama_index", "content": "# Using LLMs\n\n!!! 
tip\n    For a list of our supported LLMs and a comparison of their functionality, check out our [LLM module guide](../../module_guides/models/llms.md).\n\nOne of the first steps when building an LLM-based application is choosing which LLM to use; you can also use more than one if you wish.\n\nLLMs are used at multiple stages of your pipeline:\n\n- During **Indexing** you may use an LLM to determine the relevance of data (whether to index it at all) or you may use an LLM to summarize the raw data and index the summaries instead.\n- During **Querying** LLMs can be used in two ways:\n    - During **Retrieval** (fetching data from your index) LLMs can be given an array of options (such as multiple different indices) and make decisions about where best to find the information you're looking for. An agentic LLM can also use _tools_ at this stage to query different data sources.\n    - During **Response Synthesis** (turning the retrieved data into an answer) an LLM can combine answers to multiple sub-queries into a single coherent answer, or it can transform data, such as from unstructured text to JSON or another programmatic output format.\n\nLlamaIndex provides a single interface to a large number of different LLMs, allowing you to pass in any LLM you choose to any stage of the pipeline. It can be as simple as this:\n\n```python\nfrom llama_index.llms.openai import OpenAI\n\nresponse = OpenAI().complete(\"Paul Graham is \")\nprint(response)\n```\n\nUsually, you will instantiate an LLM and pass it to `Settings`, which you then pass to other stages of the pipeline, as in this example:\n\n```python\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.core import Settings\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n\nSettings.llm = OpenAI(temperature=0.2, model=\"gpt-4\")\n\ndocuments = SimpleDirectoryReader(\"data\").load_data()\nindex = VectorStoreIndex.from_documents(\n    documents,\n)\n```\n\nIn this case, you've instantiated OpenAI and customized it to use the `gpt-4` model instead of the default `gpt-3.5-turbo`, and also modified the `temperature`. The `VectorStoreIndex` will now use gpt-4 to answer questions when querying.\n\n!!! tip\n    `Settings` is a bundle of configuration data that you pass into different parts of LlamaIndex. You can [learn more about Settings](../../module_guides/supporting_modules/settings.md) and how to customize it.\n\n## Available LLMs\n\nWe support integrations with OpenAI, Hugging Face, PaLM, and more. Check out our [module guide to LLMs](../../module_guides/models/llms.md) for a full list, including how to run a local model.\n\n!!! tip\n    A general note on privacy and LLMs can be found on the [privacy page](./privacy.md).\n\n### Using a local LLM\n\nLlamaIndex doesn't just support hosted LLM APIs; you can also [run a model such as Llama2 locally](https://replicate.com/blog/run-llama-locally).\n\nFor example, if you have [Ollama](https://github.com/ollama/ollama) installed and running:\n\n```python\nfrom llama_index.llms.ollama import Ollama\nfrom llama_index.core import Settings\n\nSettings.llm = Ollama(model=\"llama2\", request_timeout=60.0)\n```\n\nSee the [custom LLM How-To](../../module_guides/models/llms/usage_custom.md) for more details.\n\n## Prompts\n\nBy default, LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. 
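These defaults can be overridden. As a minimal sketch (assuming an `index` built as above; `PromptTemplate` and the query engine's `update_prompts` method are part of the core API, but check the prompt guide below for the exact prompt keys in your version):\n\n```python\nfrom llama_index.core import PromptTemplate\n\n# a custom QA prompt; {context_str} and {query_str} are the standard\n# template variables filled in at query time\nqa_prompt = PromptTemplate(\n    \"Context: {context_str} Given the context, answer as briefly as possible: {query_str}\"\n)\n\nquery_engine = index.as_query_engine()\nquery_engine.update_prompts(\n    {\"response_synthesizer:text_qa_template\": qa_prompt}\n)\n```\n\n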
If you want to go further, you can [customize the prompts](../../module_guides/models/prompts/index.md) in depth."} {"tokens": 363, "doc_id": "888d853a-1b0c-4456-b289-be9ed2c89c2a", "name": "LlamaHub", "url": "https://docs.llamaindex.ai/en/stable/understanding/loading/llamahub", "retrieve_doc": true, "source": "llama_index", "content": "# LlamaHub\n\nOur data connectors are offered through [LlamaHub](https://llamahub.ai/) 🦙.\nLlamaHub contains a registry of open-source data connectors that you can easily plug into any LlamaIndex application (+ Agent Tools, and Llama Packs).\n\n![](../../_static/data_connectors/llamahub.png)\n\n## Usage Pattern\n\nGet started with:\n\n```python\nfrom llama_index.readers.google import GoogleDocsReader\n\nloader = GoogleDocsReader()\ndocuments = loader.load_data(document_ids=[...])\n```\n\n## Built-in connector: SimpleDirectoryReader\n\n`SimpleDirectoryReader` can parse a wide range of file types including `.md`, `.pdf`, `.jpg`, `.png`, and `.docx`, as well as audio and video. It is available directly as part of LlamaIndex:\n\n```python\nfrom llama_index.core import SimpleDirectoryReader\n\ndocuments = SimpleDirectoryReader(\"./data\").load_data()\n```\n\n## Available connectors\n\nBrowse [LlamaHub](https://llamahub.ai/) directly to see the hundreds of connectors available, including:\n\n- [Notion](https://developers.notion.com/) (`NotionPageReader`)\n- [Google Docs](https://developers.google.com/docs/api) (`GoogleDocsReader`)\n- [Slack](https://api.slack.com/) (`SlackReader`)\n- [Discord](https://discord.com/developers/docs/intro) (`DiscordReader`)\n- [Apify Actors](https://llamahub.ai/l/apify-actor) (`ApifyActor`). Can crawl the web, scrape webpages, extract text content, download files including `.pdf`, `.jpg`, `.png`, `.docx`, etc."} {"tokens": 1418, "doc_id": "88e2611e-eb6e-43c2-97bf-9252717a0a56", "name": "Loading Data (Ingestion)", "url": "https://docs.llamaindex.ai/en/stable/understanding/loading/loading", "retrieve_doc": true, "source": "llama_index", "content": "# Loading Data (Ingestion)\n\nBefore your chosen LLM can act on your data, you first need to process the data and load it. This has parallels to data cleaning/feature engineering pipelines in the ML world, or ETL pipelines in the traditional data setting.\n\nThis ingestion pipeline typically consists of three main stages:\n\n1. Load the data\n2. Transform the data\n3. Index and store the data\n\nWe cover indexing/storage in [future](../indexing/indexing.md) [sections](../storing/storing.md). In this guide we'll mostly talk about loaders and transformations.\n\n## Loaders\n\nBefore your chosen LLM can act on your data you need to load it. The way LlamaIndex does this is via data connectors, also called a `Reader`. Data connectors ingest data from different data sources and format the data into `Document` objects. A `Document` is a collection of data (currently text, and in future, images and audio) and metadata about that data.\n\n### Loading using SimpleDirectoryReader\n\nThe easiest reader to use is our SimpleDirectoryReader, which creates documents out of every file in a given directory. It is built into LlamaIndex and can read a variety of formats including Markdown, PDFs, Word documents, PowerPoint decks, images, audio and video.\n\n```python\nfrom llama_index.core import SimpleDirectoryReader\n\ndocuments = SimpleDirectoryReader(\"./data\").load_data()\n```\n\n
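If you only want some of those files, the reader can be narrowed down. A minimal sketch using its `recursive` and `required_exts` options:\n\n```python\nfrom llama_index.core import SimpleDirectoryReader\n\n# walk subdirectories too, but only pick up Markdown and PDF files\nreader = SimpleDirectoryReader(\n    \"./data\",\n    recursive=True,\n    required_exts=[\".md\", \".pdf\"],\n)\ndocuments = reader.load_data()\n```\n\n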
### Using Readers from LlamaHub\n\nBecause there are so many possible places to get data, they are not all built-in. Instead, you download them from our registry of data connectors, [LlamaHub](llamahub.md).\n\nIn this example we use the [DatabaseReader](https://llamahub.ai/l/readers/llama-index-readers-database) connector, which runs a query against a SQL database and returns every row of the results as a `Document`:\n\n```python\nimport os\n\nfrom llama_index.readers.database import DatabaseReader\n\nreader = DatabaseReader(\n    scheme=os.getenv(\"DB_SCHEME\"),\n    host=os.getenv(\"DB_HOST\"),\n    port=os.getenv(\"DB_PORT\"),\n    user=os.getenv(\"DB_USER\"),\n    password=os.getenv(\"DB_PASS\"),\n    dbname=os.getenv(\"DB_NAME\"),\n)\n\nquery = \"SELECT * FROM users\"\ndocuments = reader.load_data(query=query)\n```\n\nThere are hundreds of connectors to use on [LlamaHub](https://llamahub.ai)!\n\n### Creating Documents directly\n\nInstead of using a loader, you can also create a `Document` directly.\n\n```python\nfrom llama_index.core import Document\n\ndoc = Document(text=\"text\")\n```\n\n## Transformations\n\nAfter the data is loaded, you then need to process and transform it before putting it into a storage system. These transformations include chunking, extracting metadata, and embedding each chunk. This is necessary to make sure that the data can be retrieved and used optimally by the LLM.\n\nTransformation input/outputs are `Node` objects (a `Document` is a subclass of a `Node`). Transformations can also be stacked and reordered.\n\nWe have both a high-level and lower-level API for transforming documents.\n\n### High-Level Transformation API\n\nIndexes have a `.from_documents()` method which accepts an array of Document objects and will correctly parse and chunk them up. However, sometimes you will want greater control over how your documents are split up.\n\n```python\nfrom llama_index.core import VectorStoreIndex\n\nvector_index = VectorStoreIndex.from_documents(documents)\nvector_index.as_query_engine()\n```\n\nUnder the hood, this splits your Document into Node objects, which are similar to Documents (they contain text and metadata) but have a relationship to their parent Document.\n\nIf you want to customize core components, like the text splitter, through this abstraction you can pass in a custom `transformations` list or apply them to the global `Settings`:\n\n```python\nfrom llama_index.core.node_parser import SentenceSplitter\n\ntext_splitter = SentenceSplitter(chunk_size=512, chunk_overlap=10)\n\n# global\nfrom llama_index.core import Settings\n\nSettings.text_splitter = text_splitter\n\n# per-index\nindex = VectorStoreIndex.from_documents(\n    documents, transformations=[text_splitter]\n)\n```\n\n### Lower-Level Transformation API\n\nYou can also define these steps explicitly.\n\nYou can do this either by using our transformation modules (text splitters, metadata extractors, etc.) as standalone components, or by composing them in our declarative [Transformation Pipeline interface](../../module_guides/loading/ingestion_pipeline/index.md).\n\n
Let's walk through the steps below.\n\n#### Splitting Your Documents into Nodes\n\nA key step to process your documents is to split them into \"chunks\"/Node objects. The key idea is to process your data into bite-sized pieces that can be retrieved / fed to the LLM.\n\nLlamaIndex has support for a wide range of [text splitters](../../module_guides/loading/node_parsers/modules.md), ranging from paragraph/sentence/token-based splitters to file-based splitters for formats like HTML and JSON.\n\nThese can be [used on their own or as part of an ingestion pipeline](../../module_guides/loading/node_parsers/index.md).\n\n```python\nfrom llama_index.core import SimpleDirectoryReader\nfrom llama_index.core.ingestion import IngestionPipeline\nfrom llama_index.core.node_parser import TokenTextSplitter\n\ndocuments = SimpleDirectoryReader(\"./data\").load_data()\n\npipeline = IngestionPipeline(transformations=[TokenTextSplitter(), ...])\n\nnodes = pipeline.run(documents=documents)\n```\n\n### Adding Metadata\n\nYou can also choose to add metadata to your documents and nodes. This can be done either manually or with [automatic metadata extractors](../../module_guides/loading/documents_and_nodes/usage_metadata_extractor.md) (see the sketch below).\n\nHere are guides on 1) [how to customize Documents](../../module_guides/loading/documents_and_nodes/usage_documents.md), and 2) [how to customize Nodes](../../module_guides/loading/documents_and_nodes/usage_nodes.md).\n\n```python\ndocument = Document(\n    text=\"text\",\n    metadata={\"filename\": \"\", \"category\": \"\"},\n)\n```\n\n
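As a minimal sketch of automatic extraction (assuming the `TitleExtractor` from `llama_index.core.extractors`; extractors make LLM calls, so they use whatever LLM is configured in `Settings`):\n\n```python\nfrom llama_index.core.extractors import TitleExtractor\nfrom llama_index.core.ingestion import IngestionPipeline\nfrom llama_index.core.node_parser import SentenceSplitter\n\n# split first, then attach an inferred document title to each node's metadata\npipeline = IngestionPipeline(\n    transformations=[SentenceSplitter(), TitleExtractor()]\n)\nnodes = pipeline.run(documents=documents)\n```\n\n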
### Adding Embeddings\n\nTo insert a node into a vector index, it should have an embedding. See our [ingestion pipeline](../../module_guides/loading/ingestion_pipeline/index.md) or our [embeddings guide](../../module_guides/models/embeddings.md) for more details.\n\n### Creating and passing Nodes directly\n\nIf you want to, you can create nodes directly and pass a list of them to an indexer:\n\n```python\nfrom llama_index.core.schema import TextNode\n\nnode1 = TextNode(text=\"\", id_=\"\")\nnode2 = TextNode(text=\"\", id_=\"\")\n\nindex = VectorStoreIndex([node1, node2])\n```"} {"tokens": 581, "doc_id": "81066675-5d92-4073-853a-02f7605ce032", "name": "Evaluating", "url": "https://docs.llamaindex.ai/en/stable/understanding/evaluating/evaluating", "retrieve_doc": true, "source": "llama_index", "content": "# Evaluating\n\nEvaluation and benchmarking are crucial concepts in LLM development. To improve the performance of an LLM app (RAG, agents), you must have a way to measure it.\n\nLlamaIndex offers key modules to measure the quality of generated results. We also offer key modules to measure retrieval quality. You can learn more about how evaluation works in LlamaIndex in our [module guides](../../module_guides/evaluating/index.md).\n\n## Response Evaluation\n\nDoes the response match the retrieved context? Does it also match the query? Does it match the reference answer or guidelines? Here's a simple example that evaluates a single response for Faithfulness, i.e. whether the response is aligned to the context, such as being free from hallucinations:\n\n```python\nfrom llama_index.core import VectorStoreIndex\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.core.evaluation import FaithfulnessEvaluator\n\n# create llm\nllm = OpenAI(model=\"gpt-4\", temperature=0.0)\n\n# build index\n...\nvector_index = VectorStoreIndex(...)\n\n# define evaluator\nevaluator = FaithfulnessEvaluator(llm=llm)\n\n# query index\nquery_engine = vector_index.as_query_engine()\nresponse = query_engine.query(\n    \"What battles took place in New York City in the American Revolution?\"\n)\neval_result = evaluator.evaluate_response(response=response)\nprint(str(eval_result.passing))\n```\n\nThe response contains both the response and the source from which the response was generated; the evaluator compares them and determines if the response is faithful to the source.\n\nYou can learn more in our module guides about [response evaluation](../../module_guides/evaluating/usage_pattern.md).\n\n## Retrieval Evaluation\n\nAre the retrieved sources relevant to the query? This is a simple example that evaluates a single retrieval:\n\n```python\nfrom llama_index.core.evaluation import RetrieverEvaluator\n\n# define retriever somewhere (e.g. from index)\n# retriever = index.as_retriever(similarity_top_k=2)\nretriever = ...\n\nretriever_evaluator = RetrieverEvaluator.from_metric_names(\n    [\"mrr\", \"hit_rate\"], retriever=retriever\n)\n\nretriever_evaluator.evaluate(\n    query=\"query\", expected_ids=[\"node_id1\", \"node_id2\"]\n)\n```\n\nThis compares what was retrieved for the query to a set of nodes that were expected to be retrieved.\n\nIn reality you would want to evaluate a whole batch of retrievals; you can learn how to do this in our module guide on [retrieval evaluation](../../module_guides/evaluating/usage_pattern_retrieval.md).\n\n
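As a minimal sketch of the batch version (assuming `generate_question_context_pairs` from the evaluation module and the async `aevaluate_dataset` method; check the module guide for the exact API in your version):\n\n```python\nimport asyncio\n\nfrom llama_index.core.evaluation import generate_question_context_pairs\n\n# build (question, expected-node) pairs from your own nodes using an LLM\nqa_dataset = generate_question_context_pairs(\n    nodes, llm=llm, num_questions_per_chunk=2\n)\n\n# evaluate every query in the dataset in one go\neval_results = asyncio.run(retriever_evaluator.aevaluate_dataset(qa_dataset))\n```\n\n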
## Related concepts\n\nYou may be interested in [analyzing the cost of your application](cost_analysis/index.md) if you are making calls to a hosted, remote LLM."} {"tokens": 492, "doc_id": "94a22f57-ea69-4559-926d-77f80c448b7e", "name": "Usage Pattern", "url": "https://docs.llamaindex.ai/en/stable/understanding/evaluating/cost_analysis/usage_pattern", "retrieve_doc": true, "source": "llama_index", "content": "# Usage Pattern\n\n## Estimating LLM and Embedding Token Counts\n\nIn order to measure LLM and Embedding token counts, you'll need to\n\n1. Set up `MockLLM` and `MockEmbedding` objects\n\n```python\nfrom llama_index.core.llms import MockLLM\nfrom llama_index.core import MockEmbedding\n\nllm = MockLLM(max_tokens=256)\nembed_model = MockEmbedding(embed_dim=1536)\n```\n\n2. Set up the `TokenCountingHandler` callback\n\n```python\nimport tiktoken\nfrom llama_index.core.callbacks import CallbackManager, TokenCountingHandler\n\ntoken_counter = TokenCountingHandler(\n    tokenizer=tiktoken.encoding_for_model(\"gpt-3.5-turbo\").encode\n)\n\ncallback_manager = CallbackManager([token_counter])\n```\n\n3. Add them to the global `Settings`\n\n```python\nfrom llama_index.core import Settings\n\nSettings.llm = llm\nSettings.embed_model = embed_model\nSettings.callback_manager = callback_manager\n```\n\n4. Construct an Index\n\n```python\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n\ndocuments = SimpleDirectoryReader(\n    \"./docs/examples/data/paul_graham\"\n).load_data()\n\nindex = VectorStoreIndex.from_documents(documents)\n```\n\n5. Measure the counts!\n\n```python\nprint(\n    \"Embedding Tokens: \",\n    token_counter.total_embedding_token_count,\n    \"\\n\",\n    \"LLM Prompt Tokens: \",\n    token_counter.prompt_llm_token_count,\n    \"\\n\",\n    \"LLM Completion Tokens: \",\n    token_counter.completion_llm_token_count,\n    \"\\n\",\n    \"Total LLM Token Count: \",\n    token_counter.total_llm_token_count,\n    \"\\n\",\n)\n\n# reset counts\ntoken_counter.reset_counts()\n```\n\n6. Run a query, measure again\n\n```python\nquery_engine = index.as_query_engine()\n\nresponse = query_engine.query(\"query\")\n\nprint(\n    \"Embedding Tokens: \",\n    token_counter.total_embedding_token_count,\n    \"\\n\",\n    \"LLM Prompt Tokens: \",\n    token_counter.prompt_llm_token_count,\n    \"\\n\",\n    \"LLM Completion Tokens: \",\n    token_counter.completion_llm_token_count,\n    \"\\n\",\n    \"Total LLM Token Count: \",\n    token_counter.total_llm_token_count,\n    \"\\n\",\n)\n```"} {"tokens": 885, "doc_id": "20ea3cb9-4145-4805-887e-7c48f1333c04", "name": "Cost Analysis", "url": "https://docs.llamaindex.ai/en/stable/understanding/evaluating/cost_analysis/index", "retrieve_doc": true, "source": "llama_index", "content": "# Cost Analysis\n\n## Concept\n\nEach call to an LLM will cost some amount of money - for instance, OpenAI's gpt-3.5-turbo costs $0.002 / 1k tokens. The cost of building an index and querying depends on\n\n- the type of LLM used\n- the type of data structure used\n- parameters used during building\n- parameters used during querying\n\nThe cost of building and querying each index is a TODO in the reference documentation. In the meantime, we provide the following information:\n\n1. A high-level overview of the cost structure of the indices.\n2. A token predictor that you can use directly within LlamaIndex!\n\n### Overview of Cost Structure\n\n#### Indices with no LLM calls\n\nThe following indices don't require LLM calls at all during building (0 cost):\n\n- `SummaryIndex`\n- `SimpleKeywordTableIndex` - uses a regex keyword extractor to extract keywords from each document\n- `RAKEKeywordTableIndex` - uses a RAKE keyword extractor to extract keywords from each document\n\n#### Indices with LLM calls\n\nThe following indices do require LLM calls during build time:\n\n- `TreeIndex` - use LLM to hierarchically summarize the text to build the tree\n- `KeywordTableIndex` - use LLM to extract keywords from each document\n\n### Query Time\n\nThere will always be >= 1 LLM call during query time, in order to synthesize the final answer.\nSome indices contain cost tradeoffs between index building and querying. `SummaryIndex`, for instance,\nis free to build, but running a query over a summary index (without filtering or embedding lookups) will\ncall the LLM $N$ times.\n\nHere are some notes regarding each of the indices:\n\n- `SummaryIndex`: by default requires $N$ LLM calls, where $N$ is the number of nodes.\n- `TreeIndex`: by default requires $\\log(N)$ LLM calls, where $N$ is the number of leaf nodes.\n    - Setting `child_branch_factor=2` will be more expensive than the default `child_branch_factor=1` (polynomial vs logarithmic), because we traverse 2 children instead of just 1 for each parent node.\n- `KeywordTableIndex`: by default requires an LLM call to extract query keywords.\n    - Can do `index.as_retriever(retriever_mode=\"simple\")` or `index.as_retriever(retriever_mode=\"rake\")` to also use regex/RAKE keyword extractors on your query text.\n- `VectorStoreIndex`: by default, requires one LLM call per query. If you increase the `similarity_top_k` or `chunk_size`, or change the `response_mode`, then this number will increase.\n\n
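To turn measured or predicted token counts into dollars, multiply by your model's prices. A minimal sketch (the per-1k-token prices below are placeholders, not current OpenAI rates):\n\n```python\n# hypothetical prices per 1k tokens; look up your model's real rates\nPROMPT_PRICE_PER_1K = 0.0015\nCOMPLETION_PRICE_PER_1K = 0.002\n\n\ndef estimate_llm_cost(token_counter) -> float:\n    \"\"\"Rough dollar estimate from a TokenCountingHandler's totals.\"\"\"\n    prompt_cost = token_counter.prompt_llm_token_count / 1000 * PROMPT_PRICE_PER_1K\n    completion_cost = (\n        token_counter.completion_llm_token_count / 1000 * COMPLETION_PRICE_PER_1K\n    )\n    return prompt_cost + completion_cost\n```\n\n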
## Usage Pattern\n\nLlamaIndex offers token **predictors** to predict token usage of LLM and embedding calls.\nThis allows you to estimate your costs during 1) index construction, and 2) index querying, before\nany respective LLM calls are made.\n\nTokens are counted using the `TokenCountingHandler` callback. See the [example notebook](../../../examples/callbacks/TokenCountingHandler.ipynb) for details on the setup.\n\n### Using MockLLM\n\nTo predict token usage of LLM calls, import and instantiate the MockLLM as shown below. The `max_tokens` parameter is used as a \"worst case\" prediction, where each LLM response will contain exactly that number of tokens. If `max_tokens` is not specified, then it will simply predict back the prompt.\n\n```python\nfrom llama_index.core.llms import MockLLM\nfrom llama_index.core import Settings\n\n# use a mock llm globally\nSettings.llm = MockLLM(max_tokens=256)\n```\n\nYou can then use this predictor during both index construction and querying.\n\n### Using MockEmbedding\n\nYou may also predict the token usage of embedding calls with `MockEmbedding`.\n\n```python\nfrom llama_index.core import MockEmbedding\nfrom llama_index.core import Settings\n\n# use a mock embedding globally\nSettings.embed_model = MockEmbedding(embed_dim=1536)\n```\n\n## Full Usage Pattern\n\nRead about the [full usage pattern](./usage_pattern.md) for more details!"} {"tokens": 710, "doc_id": "90154ae9-1d90-4442-a9b3-5bedaba0074c", "name": "Agents with local models", "url": "https://docs.llamaindex.ai/en/stable/understanding/agent/local_models", "retrieve_doc": true, "source": "llama_index", "content": "# Agents with local models\n\nIf you're happy using OpenAI or another remote model, you can skip this section, but many people are interested in using models they run themselves. The easiest way to do this is via the great work of our friends at [Ollama](https://ollama.com/), who provide a simple-to-use client that will download, install and run a [growing range of models](https://ollama.com/library) for you.\n\n## Install Ollama\n\nThey provide a one-click installer for Mac, Linux and Windows on their [home page](https://ollama.com/).\n\n## Pick and run a model\n\nSince we're going to be doing agentic work, we'll need a very capable model, but the largest models are hard to run on a laptop. We think `mixtral 8x7b` is a good balance between power and resources, but `llama3` is another great option. You can run Mixtral by running\n\n```bash\nollama run mixtral:8x7b\n```\n\nThe first time you run this command, it will also automatically download and install the model for you, which can take a while.\n\n## Switch to local agent\n\nTo switch to Mixtral, you'll need to bring in the Ollama integration:\n\n```bash\npip install llama-index-llms-ollama\n```\n\nThen modify your dependencies to bring in Ollama instead of OpenAI:\n\n```python\nfrom llama_index.llms.ollama import Ollama\n```\n\nAnd finally initialize Mixtral as your LLM instead:\n\n```python\nllm = Ollama(model=\"mixtral:8x7b\", request_timeout=120.0)\n```\n\n## Ask the question again\n\n```python\nresponse = agent.chat(\"What is 20+(2*4)? Calculate step by step.\")\n```\n\nThe exact output looks different from OpenAI (it makes a mistake the first time it tries), but Mixtral gets the right answer:\n\n```\nThought: The current language of the user is: English. The user wants to calculate the value of 20+(2*4). 
I need to break down this task into subtasks and use appropriate tools to solve each subtask.\nAction: multiply\nAction Input: {'a': 2, 'b': 4}\nObservation: 8\nThought: The user has calculated the multiplication part of the expression, which is (2*4), and got 8 as a result. Now I need to add this value to 20 by using the 'add' tool.\nAction: add\nAction Input: {'a': 20, 'b': 8}\nObservation: 28\nThought: The user has calculated the sum of 20+(2*4) and got 28 as a result. Now I can answer without using any more tools.\nAnswer: The solution to the expression 20+(2*4) is 28.\nThe solution to the expression 20+(2*4) is 28.\n```\n\nCheck the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/2_local_agent.py) to see what this final code looks like.\n\nYou can now continue the rest of the tutorial with a local model if you prefer. We'll keep using OpenAI as we move on to [adding RAG to your agent](./rag_agent.md)."} {"tokens": 971, "doc_id": "9830872c-c9b8-4b01-9518-9a1fa6c14821", "name": "Adding RAG to an agent", "url": "https://docs.llamaindex.ai/en/stable/understanding/agent/rag_agent", "retrieve_doc": true, "source": "llama_index", "content": "# Adding RAG to an agent\n\nTo demonstrate using RAG engines as a tool in an agent, we're going to create a very simple RAG query engine. Our source data is going to be the [Wikipedia page about the 2023 Canadian federal budget](https://en.wikipedia.org/wiki/2023_Canadian_federal_budget) that we've [printed as a PDF](https://www.dropbox.com/scl/fi/rop435rax7mn91p3r8zj3/2023_canadian_budget.pdf?rlkey=z8j6sab5p6i54qa9tr39a43l7&dl=0).\n\n## Bring in new dependencies\n\nTo read the PDF and index it, we'll need a few new dependencies. They were installed along with the rest of LlamaIndex, so we just need to import them:\n\n```python\nfrom llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings\n```\n\n## Add LLM to settings\n\nWe were previously passing the LLM directly, but now we need to use it in multiple places, so we'll add it to the global settings.\n\n```python\nSettings.llm = OpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n```\n\nPlace this line near the top of the file; you can delete the other `llm` assignment.\n\n## Load and index documents\n\nWe'll now do 3 things in quick succession: we'll load the PDF from a folder called \"data\", index and embed it using the `VectorStoreIndex`, and then create a query engine from that index:\n\n```python\ndocuments = SimpleDirectoryReader(\"./data\").load_data()\nindex = VectorStoreIndex.from_documents(documents)\nquery_engine = index.as_query_engine()\n```\n\nWe can run a quick smoke test to make sure the engine is working:\n\n```python\nresponse = query_engine.query(\n    \"What was the total amount of the 2023 Canadian federal budget?\"\n)\nprint(response)\n```\n\nThe response is fast:\n\n```\nThe total amount of the 2023 Canadian federal budget was $496.9 billion.\n```\n\n## Add a query engine tool\n\nThis requires one more import:\n\n```python\nfrom llama_index.core.tools import QueryEngineTool\n```\n\nNow we turn our query engine into a tool by supplying the appropriate metadata (for the Python functions, this was being automatically extracted from their docstrings, so we didn't need to add it):\n\n```python\nbudget_tool = QueryEngineTool.from_defaults(\n    query_engine,\n    name=\"canadian_budget_2023\",\n    description=\"A RAG engine with some basic facts about the 2023 Canadian federal budget.\",\n)\n```\n\nWe modify our agent by adding this engine to our array of tools (we also remove the 
`llm` parameter, since it's now provided by settings):\n\n```python\nagent = ReActAgent.from_tools(\n    [multiply_tool, add_tool, budget_tool], verbose=True\n)\n```\n\n## Ask a question using multiple tools\n\nThis is kind of a silly question; we'll ask something more useful later:\n\n```python\nresponse = agent.chat(\n    \"What is the total amount of the 2023 Canadian federal budget multiplied by 3? Go step by step, using a tool to do any math.\"\n)\n\nprint(response)\n```\n\nWe get a perfect answer:\n\n```\nThought: The current language of the user is English. I need to use the tools to help me answer the question.\nAction: canadian_budget_2023\nAction Input: {'input': 'total'}\nObservation: $496.9 billion\nThought: I need to multiply the total amount of the 2023 Canadian federal budget by 3.\nAction: multiply\nAction Input: {'a': 496.9, 'b': 3}\nObservation: 1490.6999999999998\nThought: I can answer without using any more tools. I'll use the user's language to answer\nAnswer: The total amount of the 2023 Canadian federal budget multiplied by 3 is $1,490.70 billion.\nThe total amount of the 2023 Canadian federal budget multiplied by 3 is $1,490.70 billion.\n```\n\nAs usual, you can check the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/3_rag_agent.py) to see this code all together.\n\nExcellent! Your agent can now use any arbitrarily advanced query engine to help answer questions. You can also add as many different RAG engines as you need to consult different data sources. Next, we'll look at how we can answer more advanced questions [using LlamaParse](./llamaparse.md)."} {"tokens": 559, "doc_id": "8df3083f-e2ae-48de-b70c-82b0213e5af4", "name": "Enhancing with LlamaParse", "url": "https://docs.llamaindex.ai/en/stable/understanding/agent/llamaparse", "retrieve_doc": true, "source": "llama_index", "content": "# Enhancing with LlamaParse\n\nIn the previous example we asked a very basic question of our document, about the total amount of the budget. Let's instead ask a more complicated question about a specific fact in the document:\n\n```python\ndocuments = SimpleDirectoryReader(\"./data\").load_data()\nindex = VectorStoreIndex.from_documents(documents)\nquery_engine = index.as_query_engine()\n\nresponse = query_engine.query(\n    \"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?\"\n)\nprint(response)\n```\n\nWe unfortunately get an unhelpful answer:\n\n```\nThe budget allocated funds to a new green investments tax credit, but the exact amount was not specified in the provided context information.\n```\n\nThis is bad, because we happen to know the exact number is in the document! But the PDF is complicated, with tables and a multi-column layout, and the LLM is missing the answer. Luckily, we can use LlamaParse to help us out.\n\nFirst, you need a LlamaCloud API key. You can [get one for free](https://cloud.llamaindex.ai/) by signing up for LlamaCloud. Then put it in your `.env` file just like your OpenAI key:\n\n```bash\nLLAMA_CLOUD_API_KEY=llx-xxxxx\n```\n\nNow you're ready to use LlamaParse in your code. Let's bring it in as an import:\n\n```python\nfrom llama_parse import LlamaParse\n```\n\nAnd let's put in a second attempt to parse and query the file (note that this uses `documents2`, `index2`, etc.) 
and see if we get a better answer to the exact same question:\n\n```python\ndocuments2 = LlamaParse(result_type=\"markdown\").load_data(\n    \"./data/2023_canadian_budget.pdf\"\n)\nindex2 = VectorStoreIndex.from_documents(documents2)\nquery_engine2 = index2.as_query_engine()\n\nresponse2 = query_engine2.query(\n    \"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?\"\n)\nprint(response2)\n```\n\nWe do!\n\n```\n$20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.\n```\n\nYou can always check [the repo](https://github.com/run-llama/python-agents-tutorial/blob/main/4_llamaparse.py) to see what this code looks like.\n\nAs you can see, parsing quality makes a big difference to what the LLM can understand, even for relatively simple questions. Next let's see how [memory](./memory.md) can help us with more complex questions."} {"tokens": 793, "doc_id": "c8371e03-8cc7-4a36-b589-27a79fad6c81", "name": "Memory", "url": "https://docs.llamaindex.ai/en/stable/understanding/agent/memory", "retrieve_doc": true, "source": "llama_index", "content": "# Memory\n\nWe've now made several additions and subtractions to our code. To make it clear what we're using, you can see [the current code for our agent](https://github.com/run-llama/python-agents-tutorial/blob/main/5_memory.py) in the repo. It's using OpenAI for the LLM and LlamaParse to enhance parsing.\n\nWe've also added 3 questions in a row. Let's see how the agent handles them:\n\n```python\nresponse = agent.chat(\n    \"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?\"\n)\n\nprint(response)\n\nresponse = agent.chat(\n    \"How much was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget?\"\n)\n\nprint(response)\n\nresponse = agent.chat(\n    \"How much was the total of those two allocations added together? Use a tool to answer any questions.\"\n)\n\nprint(response)\n```\n\nThis demonstrates a powerful feature of agents in LlamaIndex: memory. Let's see what the output looks like:\n\n```\nStarted parsing the file under job_id cac11eca-45e0-4ea9-968a-25f1ac9b8f99\nThought: The current language of the user is English. I need to use a tool to help me answer the question.\nAction: canadian_budget_2023\nAction Input: {'input': 'How much was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?'}\nObservation: $20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.\nThought: I can answer without using any more tools. I'll use the user's language to answer\nAnswer: $20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.\n$20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.\nThought: The current language of the user is: English. I need to use a tool to help me answer the question.\nAction: canadian_budget_2023\nAction Input: {'input': 'How much was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget?'}\nObservation: $13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.\nThought: I can answer without using any more tools. I'll use the user's language to answer\nAnswer: $13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.\n$13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.\nThought: The current language of the user is: English. I need to use a tool to help me answer the question.\nAction: add\nAction Input: {'a': 20, 'b': 13}\nObservation: 33\nThought: I can answer without using any more tools. I'll use the user's language to answer\nAnswer: The total of the allocations for the tax credit to promote investment in green technologies and the means-tested dental care program in the 2023 Canadian federal budget is $33 billion.\nThe total of the allocations for the tax credit to promote investment in green technologies and the means-tested dental care program in the 2023 Canadian federal budget is $33 billion.\n```\n\nThe agent remembers that it already has the budget allocations from previous questions, and can answer a contextual question like \"add those two allocations together\" without needing to specify which allocations exactly. It even correctly uses the addition tool to sum up the numbers.\n\n
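That context lives in the agent's chat history. As a minimal sketch of inspecting it (assuming the agent exposes a `chat_history` property, which is handy when debugging what the agent actually remembers):\n\n```python\n# print every message the agent is carrying between turns\nfor message in agent.chat_history:\n    print(f\"{message.role}: {message.content}\")\n```\n\n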
Having demonstrated how memory helps, let's [add some more complex tools](./tools.md) to our agent."} {"tokens": 983, "doc_id": "105b26c9-8f71-4dbb-915e-3c10c5105353", "name": "Adding other tools", "url": "https://docs.llamaindex.ai/en/stable/understanding/agent/tools", "retrieve_doc": true, "source": "llama_index", "content": "# Adding other tools\n\nNow that you've built a capable agent, we hope you're excited about all it can do. The core of expanding agent capabilities is the tools available, and we have good news: [LlamaHub](https://llamahub.ai) from LlamaIndex has hundreds of integrations, including [dozens of existing agent tools](https://llamahub.ai/?tab=tools) that you can use right away. We'll show you how to use one of the existing tools, and also how to build and contribute your own.\n\n## Using an existing tool from LlamaHub\n\nFor our example, we're going to use the [Yahoo Finance tool](https://llamahub.ai/l/tools/llama-index-tools-yahoo-finance?from=tools) from LlamaHub. It provides a set of six agent tools that look up a variety of information about stock ticker symbols.\n\nFirst we need to install the tool:\n\n```bash\npip install llama-index-tools-yahoo-finance\n```\n\nThen we can set up our dependencies. This is exactly the same as our previous examples, except for the final import:\n\n```python\nfrom dotenv import load_dotenv\n\nload_dotenv()\nfrom llama_index.core.agent import ReActAgent\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.core.tools import FunctionTool\nfrom llama_index.core import Settings\nfrom llama_index.tools.yahoo_finance import YahooFinanceToolSpec\n```\n\nTo show how custom tools and LlamaHub tools can work together, we'll include the code from our previous examples that defines the \"multiply\" tool. 
We'll also take this opportunity to set up the LLM:\n\n```python\n# settings\nSettings.llm = OpenAI(model=\"gpt-4o\", temperature=0)\n\n\n# function tools\ndef multiply(a: float, b: float) -> float:\n \"\"\"Multiply two numbers and returns the product\"\"\"\n return a * b\n\n\nmultiply_tool = FunctionTool.from_defaults(fn=multiply)\n\n\ndef add(a: float, b: float) -> float:\n \"\"\"Add two numbers and returns the sum\"\"\"\n return a + b\n\n\nadd_tool = FunctionTool.from_defaults(fn=add)\n```\n\nNow we'll do the new step, which is to fetch the array of tools:\n\n```python\nfinance_tools = YahooFinanceToolSpec().to_tool_list()\n```\n\nThis is just a regular array, so we can use Python's `extend` method to add our own tools to the mix:\n\n```python\nfinance_tools.extend([multiply_tool, add_tool])\n```\n\nThen we set up the agent as usual, and ask a question:\n\n```python\nagent = ReActAgent.from_tools(finance_tools, verbose=True)\n\nresponse = agent.chat(\"What is the current price of NVDA?\")\n\nprint(response)\n```\n\nThe response is very wordy, so we've truncated it:\n\n```\nThought: The current language of the user is English. I need to use a tool to help me answer the question.\nAction: stock_basic_info\nAction Input: {'ticker': 'NVDA'}\nObservation: Info:\n{'address1': '2788 San Tomas Expressway'\n...\n'currentPrice': 135.58\n...}\nThought: I have obtained the current price of NVDA from the stock basic info.\nAnswer: The current price of NVDA (NVIDIA Corporation) is $135.58.\nThe current price of NVDA (NVIDIA Corporation) is $135.58.\n```\n\nPerfect! As you can see, using existing tools is a snap.\n\nAs always, you can check [the repo](https://github.com/run-llama/python-agents-tutorial/blob/main/6_tools.py) to see this code all in one place.\n\n## Building and contributing your own tools\n\nWe love open source contributions of new tools! You can see an example of [what the code of the Yahoo finance tool looks like](https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/tools/llama-index-tools-yahoo-finance/llama_index/tools/yahoo_finance/base.py):\n* A class that extends `BaseToolSpec`\n* A set of arbitrary Python functions\n* A `spec_functions` list that maps the functions to the tool's API\n\nOnce you've got a tool working, follow our [contributing guide](https://github.com/run-llama/llama_index/blob/main/CONTRIBUTING.md#2--contribute-a-pack-reader-tool-or-dataset-formerly-from-llama-hub) for instructions on correctly setting metadata and submitting a pull request.\n\nCongratulations! You've completed our guide to building agents with LlamaIndex. We can't wait to see what use-cases you build!"} {"tokens": 1197, "doc_id": "e539dfa2-9a44-42a8-aa53-598e47a4b591", "name": "Building a basic agent", "url": "https://docs.llamaindex.ai/en/stable/understanding/agent/basic_agent", "retrieve_doc": true, "source": "llama_index", "content": "# Building a basic agent\n\nIn LlamaIndex, an agent is a semi-autonomous piece of software powered by an LLM that is given a task and executes a series of steps towards solving that task. It is given a set of tools, which can be anything from arbitrary functions up to full LlamaIndex query engines, and it selects the best available tool to complete each step. 
When each step is completed, the agent judges whether the task is now complete, in which case it returns a result to the user, or whether it needs to take another step, in which case it loops back to the start.\n\n![agent flow](./agent_flow.png)\n\n## Getting started\n\nYou can find all of this code in [the tutorial repo](https://github.com/run-llama/python-agents-tutorial).\n\nTo avoid conflicts and keep things clean, we'll start a new Python virtual environment. You can use any virtual environment manager, but we'll use `poetry` here:\n\n```bash\npoetry init\npoetry shell\n```\n\nAnd then we'll install the LlamaIndex library and some other dependencies that will come in handy:\n\n```bash\npip install llama-index python-dotenv\n```\n\nIf any of this gives you trouble, check out our more detailed [installation guide](../getting_started/installation/).\n\n## OpenAI Key\n\nOur agent will be powered by OpenAI's `GPT-3.5-Turbo` LLM, so you'll need an [API key](https://platform.openai.com/). Once you have your key, you can put it in a `.env` file in the root of your project:\n\n```bash\nOPENAI_API_KEY=sk-proj-xxxx\n```\n\nIf you don't want to use OpenAI, we'll show you how to use other models later.\n\n## Bring in dependencies\n\nWe'll start by importing the components of LlamaIndex we need, as well as loading the environment variables from our `.env` file:\n\n```python\nfrom dotenv import load_dotenv\n\nload_dotenv()\nfrom llama_index.core.agent import ReActAgent\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.core.tools import FunctionTool\n```\n\n## Create basic tools\n\nFor this simple example we'll be creating two tools: one that knows how to multiply numbers together, and one that knows how to add them.\n\n```python\ndef multiply(a: float, b: float) -> float:\n    \"\"\"Multiply two numbers and returns the product\"\"\"\n    return a * b\n\n\nmultiply_tool = FunctionTool.from_defaults(fn=multiply)\n\n\ndef add(a: float, b: float) -> float:\n    \"\"\"Add two numbers and returns the sum\"\"\"\n    return a + b\n\n\nadd_tool = FunctionTool.from_defaults(fn=add)\n```\n\nAs you can see, these are regular vanilla Python functions. The docstring comments provide metadata to the agent about what the tool does: if your LLM is having trouble figuring out which tool to use, these docstrings are what you should tweak first.\n\nAfter each function is defined we create `FunctionTool` objects from these functions, which wrap them in a way that the agent can understand.\n\n
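If the docstring alone isn't steering the agent well, `from_defaults` also accepts explicit metadata. A minimal sketch using its `name` and `description` parameters:\n\n```python\nmultiply_tool = FunctionTool.from_defaults(\n    fn=multiply,\n    name=\"multiply\",\n    description=\"Multiply two numbers and return the product.\",\n)\n```\n\n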
Use a tool to calculate every step.\")\n```\n\nThis should give you output similar to the following:\n\n```\nThought: The current language of the user is: English. I need to use a tool to help me answer the question.\nAction: multiply\nAction Input: {'a': 2, 'b': 4}\nObservation: 8\nThought: I need to add 20 to the result of the multiplication.\nAction: add\nAction Input: {'a': 20, 'b': 8}\nObservation: 28\nThought: I can answer without using any more tools. I'll use the user's language to answer\nAnswer: The result of 20 + (2 * 4) is 28.\nThe result of 20 + (2 * 4) is 28.\n```\n\nAs you can see, the agent picks the correct tools one after the other and combines the answers to give the final result. Check the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/1_basic_agent.py) to see what the final code should look like.\n\nCongratulations! You've built the most basic kind of agent. Next you can find out how to use [local models](./local_models.md) or skip to [adding RAG to your agent](./rag_agent.md)."} {"tokens": 1069, "doc_id": "37983b44-ac28-44e2-b2a8-455df06ee13b", "name": "Storing", "url": "https://docs.llamaindex.ai/en/stable/understanding/storing/storing", "retrieve_doc": true, "source": "llama_index", "content": "# Storing\n\nOnce you have data [loaded](../loading/loading.md) and [indexed](../indexing/indexing.md), you will probably want to store it to avoid the time and cost of re-indexing it. By default, your indexed data is stored only in memory.\n\n## Persisting to disk\n\nThe simplest way to store your indexed data is to use the built-in `.persist()` method of every Index, which writes all the data to disk at the location specified. This works for any type of index.\n\n```python\nindex.storage_context.persist(persist_dir=\"<persist_dir>\")\n```\n\nHere is an example of a Composable Graph:\n\n```python\ngraph.root_index.storage_context.persist(persist_dir=\"<persist_dir>\")\n```\n\nYou can then avoid re-loading and re-indexing your data by loading the persisted index like this:\n\n```python\nfrom llama_index.core import StorageContext, load_index_from_storage\n\n# rebuild storage context\nstorage_context = StorageContext.from_defaults(persist_dir=\"<persist_dir>\")\n\n# load index\nindex = load_index_from_storage(storage_context)\n```\n\n!!! tip\n Important: if you had initialized your index with custom `transformations`, `embed_model`, etc., you will need to pass in the same options during `load_index_from_storage`, or have it set as the [global settings](../../module_guides/supporting_modules/settings.md).\n\n## Using Vector Stores\n\nAs discussed in [indexing](../indexing/indexing.md), one of the most common types of Index is the VectorStoreIndex. The API calls to create the embeddings in a VectorStoreIndex can be expensive in terms of time and money, so you will want to store them to avoid having to constantly re-index things.\n\nLlamaIndex supports a [huge number of vector stores](../../module_guides/storing/vector_stores.md) which vary in architecture, complexity and cost. 
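To see the earlier tip in practice: if an index was built with a non-default embedding model, set the same model again before reloading it. A minimal sketch, with `my_embed_model` standing in for whatever model you used at build time:

```python
from llama_index.core import Settings, StorageContext, load_index_from_storage

# re-apply the same embedding model that was used when the index was built
Settings.embed_model = my_embed_model  # stand-in for your configured model

storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>")
index = load_index_from_storage(storage_context)
```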
In this example we'll be using Chroma, an open-source vector store.\n\nFirst you will need to install chroma:\n\n```\npip install chromadb\n```\n\nTo use Chroma to store the embeddings from a VectorStoreIndex, you need to:\n\n- initialize the Chroma client\n- create a Collection to store your data in Chroma\n- assign Chroma as the `vector_store` in a `StorageContext`\n- initialize your VectorStoreIndex using that StorageContext\n\nHere's what that looks like, with a sneak peek at actually querying the data:\n\n```python\nimport chromadb\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom llama_index.vector_stores.chroma import ChromaVectorStore\nfrom llama_index.core import StorageContext\n\n# load some documents\ndocuments = SimpleDirectoryReader(\"./data\").load_data()\n\n# initialize client, setting path to save data\ndb = chromadb.PersistentClient(path=\"./chroma_db\")\n\n# create collection\nchroma_collection = db.get_or_create_collection(\"quickstart\")\n\n# assign chroma as the vector_store to the context\nvector_store = ChromaVectorStore(chroma_collection=chroma_collection)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n\n# create your index\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n\n# create a query engine and query\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What is the meaning of life?\")\nprint(response)\n```\n\nIf you've already created and stored your embeddings, you'll want to load them directly without loading your documents or creating a new VectorStoreIndex:\n\n```python\nimport chromadb\nfrom llama_index.core import VectorStoreIndex\nfrom llama_index.vector_stores.chroma import ChromaVectorStore\nfrom llama_index.core import StorageContext\n\n# initialize client\ndb = chromadb.PersistentClient(path=\"./chroma_db\")\n\n# get collection\nchroma_collection = db.get_or_create_collection(\"quickstart\")\n\n# assign chroma as the vector_store to the context\nvector_store = ChromaVectorStore(chroma_collection=chroma_collection)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n\n# load your index from stored vectors\nindex = VectorStoreIndex.from_vector_store(\n vector_store, storage_context=storage_context\n)\n\n# create a query engine\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What is llama2?\")\nprint(response)\n```\n\n!!! 
tip\n We have a [more thorough example of using Chroma](../../examples/vector_stores/ChromaIndexDemo.ipynb) if you want to go deeper on this store.\n\n### You're ready to query!\n\nNow you have loaded data, indexed it, and stored that index, you're ready to [query your data](../querying/querying.md).\n\n## Inserting Documents or Nodes\n\nIf you've already created an index, you can add new documents to your index using the `insert` method.\n\n```python\nfrom llama_index.core import VectorStoreIndex\n\nindex = VectorStoreIndex([])\nfor doc in documents:\n index.insert(doc)\n```\n\nSee the [document management how-to](../../module_guides/indexing/document_management.md) for more details on managing documents and an example notebook."} {"tokens": 397, "doc_id": "5f60c10c-560d-47ff-87c3-228f49a478c0", "name": "Tracing and Debugging", "url": "https://docs.llamaindex.ai/en/stable/understanding/tracing_and_debugging/tracing_and_debugging", "retrieve_doc": true, "source": "llama_index", "content": "# Tracing and Debugging\n\nDebugging and tracing the operation of your application is key to understanding and optimizing it. LlamaIndex provides a variety of ways to do this.\n\n## Basic logging\n\nThe simplest possible way to look into what your application is doing is to turn on debug logging. That can be done anywhere in your application like this:\n\n```python\nimport logging\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nlogging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n```\n\n## Callback handler\n\nLlamaIndex provides callbacks to help debug, track, and trace the inner workings of the library. Using the callback manager, as many callbacks as needed can be added.\n\nIn addition to logging data related to events, you can also track the duration and number of occurrences\nof each event.\n\nFurthermore, a trace map of events is also recorded, and callbacks can use this data however they want. For example, the `LlamaDebugHandler` will, by default, print the trace of events after most operations.\n\nYou can get a simple callback handler like this:\n\n```python\nimport llama_index.core\n\nllama_index.core.set_global_handler(\"simple\")\n```\n\nYou can also learn how to [build you own custom callback handler](../../module_guides/observability/callbacks/index.md).\n\n## Observability\n\nLlamaIndex provides **one-click observability** to allow you to build principled LLM applications in a production setting.\n\nThis feature allows you to seamlessly integrate the LlamaIndex library with powerful observability/evaluation tools offered by our partners. Configure a variable once, and you'll be able to do things like the following:\n\n- View LLM/prompt inputs/outputs\n- Ensure that the outputs of any component (LLMs, embeddings) are performing as expected\n- View call traces for both indexing and querying\n\nTo learn more, check out our [observability docs](../../module_guides/observability/index.md)"} {"tokens": 899, "doc_id": "5b253e54-efac-4382-b5a5-7462cefcbce2", "name": "Indexing", "url": "https://docs.llamaindex.ai/en/stable/understanding/indexing/indexing", "retrieve_doc": true, "source": "llama_index", "content": "# Indexing\n\nWith your data loaded, you now have a list of Document objects (or a list of Nodes). It's time to build an `Index` over these objects so you can start querying them.\n\n## What is an Index?\n\nIn LlamaIndex terms, an `Index` is a data structure composed of `Document` objects, designed to enable querying by an LLM. 
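For example, a handful of `Document` objects becomes a queryable index in a couple of lines; a minimal sketch using the in-memory defaults:

```python
from llama_index.core import Document, VectorStoreIndex

# Document objects are the raw material an Index is built from
documents = [
    Document(text="LlamaIndex builds indexes over Document objects."),
    Document(text="Indexes are then queried by an LLM."),
]
index = VectorStoreIndex.from_documents(documents)
```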
Your Index is designed to be complementary to your querying strategy.\n\nLlamaIndex offers several different index types. We'll cover the two most common here.\n\n## Vector Store Index\n\nA `VectorStoreIndex` is by far the most frequent type of Index you'll encounter. The Vector Store Index takes your Documents and splits them up into Nodes. It then creates `vector embeddings` of the text of every node, ready to be queried by an LLM.\n\n### What is an embedding?\n\n`Vector embeddings` are central to how LLM applications function.\n\nA `vector embedding`, often just called an embedding, is a **numerical representation of the semantics, or meaning of your text**. Two pieces of text with similar meanings will have mathematically similar embeddings, even if the actual text is quite different.\n\nThis mathematical relationship enables **semantic search**, where a user provides query terms and LlamaIndex can locate text that is related to the **meaning of the query terms** rather than simple keyword matching. This is a big part of how Retrieval-Augmented Generation works, and how LLMs function in general.\n\nThere are [many types of embeddings](../../module_guides/models/embeddings.md), and they vary in efficiency, effectiveness and computational cost. By default LlamaIndex uses `text-embedding-ada-002`, which is the default embedding used by OpenAI. If you are using different LLMs you will often want to use different embeddings.\n\n### Vector Store Index embeds your documents\n\nVector Store Index turns all of your text into embeddings using an API from your LLM; this is what is meant when we say it \"embeds your text\". If you have a lot of text, generating embeddings can take a long time since it involves many round-trip API calls.\n\nWhen you want to search your embeddings, your query is itself turned into a vector embedding, and then a mathematical operation is carried out by VectorStoreIndex to rank all the embeddings by how semantically similar they are to your query.\n\n### Top K Retrieval\n\nOnce the ranking is complete, VectorStoreIndex returns the most-similar embeddings as their corresponding chunks of text. The number of embeddings it returns is known as `k`, so the parameter controlling how many embeddings to return is known as `top_k`. This whole type of search is often referred to as \"top-k semantic retrieval\" for this reason.\n\nTop-k retrieval is the simplest form of querying a vector index; you will learn about more complex and subtler strategies when you read the [querying](../querying/querying.md) section.\n\n### Using Vector Store Index\n\nTo use the Vector Store Index, pass it the list of Documents you created during the loading stage:\n\n```python\nfrom llama_index.core import VectorStoreIndex\n\nindex = VectorStoreIndex.from_documents(documents)\n```\n\n!!! tip\n `from_documents` also takes an optional argument `show_progress`. Set it to `True` to display a progress bar during index construction.\n\nYou can also choose to build an index over a list of Node objects directly:\n\n```python\nfrom llama_index.core import VectorStoreIndex\n\nindex = VectorStoreIndex(nodes)\n```\n\nWith your text indexed, it is now technically ready for [querying](../querying/querying.md)! However, embedding all your text can be time-consuming and, if you are using a hosted LLM, it can also be expensive. 
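The expense is in the embedding API calls themselves; the similarity ranking afterwards is cheap, local math. A tiny self-contained illustration with made-up vectors (plain Python, not LlamaIndex code):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (
        math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    )

# toy 3-dimensional "embeddings"; real models use hundreds of dimensions
query = [0.9, 0.1, 0.3]
chunks = {
    "chunk about llamas": [0.8, 0.2, 0.4],
    "chunk about taxes": [0.1, 0.9, 0.2],
}

# top-k retrieval is just "sort by similarity, keep the first k"
ranked = sorted(
    chunks, key=lambda c: cosine_similarity(query, chunks[c]), reverse=True
)
print(ranked[:1])  # ['chunk about llamas']
```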
To save time and money you will want to [store your embeddings](../storing/storing.md) first.\n\n## Summary Index\n\nA Summary Index is a simpler form of Index best suited to queries where, as the name suggests, you are trying to generate a summary of the text in your Documents. It simply stores all of the Documents and returns all of them to your query engine.\n\n## Further Reading\n\nIf your data is a set of interconnected concepts (in computer science terms, a \"graph\") then you may be interested in our [knowledge graph index](../../examples/index_structs/knowledge_graph/KnowledgeGraphDemo.ipynb)."} {"tokens": 1494, "doc_id": "92a2e347-69c9-4c40-85bf-65093eb36b46", "name": "Querying", "url": "https://docs.llamaindex.ai/en/stable/understanding/querying/querying", "retrieve_doc": true, "source": "llama_index", "content": "# Querying\n\nNow you've loaded your data, built an index, and stored that index for later, you're ready to get to the most significant part of an LLM application: querying.\n\nAt its simplest, querying is just a prompt call to an LLM: it can be a question and get an answer, or a request for summarization, or a much more complex instruction.\n\nMore complex querying could involve repeated/chained prompt + LLM calls, or even a reasoning loop across multiple components.\n\n## Getting started\n\nThe basis of all querying is the `QueryEngine`. The simplest way to get a QueryEngine is to get your index to create one for you, like this:\n\n```python\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\n \"Write an email to the user given their background information.\"\n)\nprint(response)\n```\n\n## Stages of querying\n\nHowever, there is more to querying than initially meets the eye. Querying consists of three distinct stages:\n\n- **Retrieval** is when you find and return the most relevant documents for your query from your `Index`. As previously discussed in [indexing](../indexing/indexing.md), the most common type of retrieval is \"top-k\" semantic retrieval, but there are many other retrieval strategies.\n- **Postprocessing** is when the `Node`s retrieved are optionally reranked, transformed, or filtered, for instance by requiring that they have specific metadata such as keywords attached.\n- **Response synthesis** is when your query, your most-relevant data and your prompt are combined and sent to your LLM to return a response.\n\n!!! tip\n You can find out about [how to attach metadata to documents](../../module_guides/loading/documents_and_nodes/usage_documents.md) and [nodes](../../module_guides/loading/documents_and_nodes/usage_nodes.md).\n\n## Customizing the stages of querying\n\nLlamaIndex features a low-level composition API that gives you granular control over your querying.\n\nIn this example, we customize our retriever to use a different number for `top_k` and add a post-processing step that requires that the retrieved nodes reach a minimum similarity score to be included. 
This would give you a lot of data when you have relevant results but potentially no data if you have nothing relevant.\n\n```python\nfrom llama_index.core import VectorStoreIndex, get_response_synthesizer\nfrom llama_index.core.retrievers import VectorIndexRetriever\nfrom llama_index.core.query_engine import RetrieverQueryEngine\nfrom llama_index.core.postprocessor import SimilarityPostprocessor\n\n# build index\nindex = VectorStoreIndex.from_documents(documents)\n\n# configure retriever\nretriever = VectorIndexRetriever(\n index=index,\n similarity_top_k=10,\n)\n\n# configure response synthesizer\nresponse_synthesizer = get_response_synthesizer()\n\n# assemble query engine\nquery_engine = RetrieverQueryEngine(\n retriever=retriever,\n response_synthesizer=response_synthesizer,\n node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],\n)\n\n# query\nresponse = query_engine.query(\"What did the author do growing up?\")\nprint(response)\n```\n\nYou can also add your own retrieval, response synthesis, and overall query logic, by implementing the corresponding interfaces.\n\nFor a full list of implemented components and the supported configurations, check out our [reference docs](../../api_reference/index.md).\n\nLet's go into more detail about customizing each step:\n\n### Configuring retriever\n\n```python\nretriever = VectorIndexRetriever(\n index=index,\n similarity_top_k=10,\n)\n```\n\nThere are a huge variety of retrievers that you can learn about in our [module guide on retrievers](../../module_guides/querying/retriever/index.md).\n\n### Configuring node postprocessors\n\nWe support advanced `Node` filtering and augmentation that can further improve the relevancy of the retrieved `Node` objects.\nThis can help reduce the time/number of LLM calls/cost or improve response quality.\n\nFor example:\n\n- `KeywordNodePostprocessor`: filters nodes by `required_keywords` and `exclude_keywords`.\n- `SimilarityPostprocessor`: filters nodes by setting a threshold on the similarity score (thus only supported by embedding-based retrievers)\n- `PrevNextNodePostprocessor`: augments retrieved `Node` objects with additional relevant context based on `Node` relationships.\n\nThe full list of node postprocessors is documented in the [Node Postprocessor Reference](../../api_reference/postprocessor/index.md).\n\nTo configure the desired node postprocessors:\n\n```python\nnode_postprocessors = [\n KeywordNodePostprocessor(\n required_keywords=[\"Combinator\"], exclude_keywords=[\"Italy\"]\n )\n]\nquery_engine = RetrieverQueryEngine.from_args(\n retriever, node_postprocessors=node_postprocessors\n)\nresponse = query_engine.query(\"What did the author do growing up?\")\n```\n\n### Configuring response synthesis\n\nAfter a retriever fetches relevant nodes, a `BaseSynthesizer` synthesizes the final response by combining the information.\n\nYou can configure it via\n\n```python\nquery_engine = RetrieverQueryEngine.from_args(\n retriever, response_mode=response_mode\n)\n```\n\nRight now, we support the following options:\n\n- `default`: \"create and refine\" an answer by sequentially going through each retrieved `Node`;\n This makes a separate LLM call per Node. Good for more detailed answers.\n- `compact`: \"compact\" the prompt during each LLM call by stuffing as\n many `Node` text chunks that can fit within the maximum prompt size. 
If there are\n too many chunks to stuff in one prompt, \"create and refine\" an answer by going through\n multiple prompts.\n- `tree_summarize`: Given a set of `Node` objects and the query, recursively construct a tree\n and return the root node as the response. Good for summarization purposes.\n- `no_text`: Only runs the retriever to fetch the nodes that would have been sent to the LLM,\n without actually sending them. Then can be inspected by checking `response.source_nodes`.\n The response object is covered in more detail in Section 5.\n- `accumulate`: Given a set of `Node` objects and the query, apply the query to each `Node` text\n chunk while accumulating the responses into an array. Returns a concatenated string of all\n responses. Good for when you need to run the same query separately against each text\n chunk.\n\n## Structured Outputs\n\nYou may want to ensure your output is structured. See our [Query Engines + Pydantic Outputs](../../module_guides/querying/structured_outputs/query_engine.md) to see how to extract a Pydantic object from a query engine class.\n\nAlso make sure to check out our entire [Structured Outputs](../../module_guides/querying/structured_outputs/index.md) guide.\n\n## Creating your own Query Pipeline\n\nIf you want to design complex query flows, you can compose your own query pipeline across many different modules, from prompts/LLMs/output parsers to retrievers to response synthesizers to your own custom components.\n\nTake a look at our [Query Pipelines Module Guide](../../module_guides/querying/pipeline/index.md) for more details."} {"tokens": 399, "doc_id": "906509df-1a70-4ab8-9df2-68aee062407c", "name": "Putting It All Together", "url": "https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/index", "retrieve_doc": true, "source": "llama_index", "content": "# Putting It All Together\n\nCongratulations! You've loaded your data, indexed it, stored your index, and queried your index. Now you've got to ship something to production. We can show you how to do that!\n\n- In [Q&A Patterns](q_and_a.md) we'll go into some of the more advanced and subtle ways you can build a query engine beyond the basics.\n - The [terms definition tutorial](q_and_a/terms_definitions_tutorial.md) is a detailed, step-by-step tutorial on creating a subtle query application including defining your prompts and supporting images as input.\n - We have a guide to [creating a unified query framework over your indexes](../../examples/retrievers/reciprocal_rerank_fusion.ipynb) which shows you how to run queries across multiple indexes.\n - And also over [structured data like SQL](structured_data.md)\n- We have a guide on [how to build a chatbot](chatbots/building_a_chatbot.md)\n- We talk about [building agents in LlamaIndex](agents.md)\n- We have a complete guide to using [property graphs for indexing and retrieval](../../module_guides/indexing/lpg_index_guide.md)\n- And last but not least we show you how to build [a full stack web application](apps/index.md) using LlamaIndex\n\nLlamaIndex also provides some tools / project templates to help you build a full-stack template. 
For instance, [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama) spins up a full-stack scaffold for you.\n\nCheck out our [Full-Stack Projects](../../community/full_stack_projects.md) page for more details.\n\nWe also have the [`llamaindex-cli rag` CLI tool](../../getting_started/starter_tools/rag_cli.md) that combines some of the above concepts into an easy-to-use tool for chatting with files from your terminal!"} {"tokens": 1084, "doc_id": "bf31b6c1-15db-4298-aacf-793390f87cb0", "name": "Agents", "url": "https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/agents", "retrieve_doc": true, "source": "llama_index", "content": "# Agents\n\nPutting together an agent in LlamaIndex can be done by defining a set of tools and providing them to our ReActAgent implementation. We're using it here with OpenAI, but it can be used with any sufficiently capable LLM:\n\n```python\nfrom llama_index.core.tools import FunctionTool\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.core.agent import ReActAgent\n\n\n# define sample Tool\ndef multiply(a: int, b: int) -> int:\n \"\"\"Multiply two integers and return the product\"\"\"\n return a * b\n\n\nmultiply_tool = FunctionTool.from_defaults(fn=multiply)\n\n# initialize llm\nllm = OpenAI(model=\"gpt-3.5-turbo-0613\")\n\n# initialize ReAct agent\nagent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)\n```\n\nThese tools can be Python functions as shown above, or they can be LlamaIndex query engines:\n\n```python\nfrom llama_index.core.tools import QueryEngineTool, ToolMetadata\n\nquery_engine_tools = [\n QueryEngineTool(\n query_engine=sql_agent,\n metadata=ToolMetadata(\n name=\"sql_agent\", description=\"Agent that can execute SQL queries.\"\n ),\n ),\n]\n\nagent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True)\n```\n\nYou can learn more in our [Agent Module Guide](../../module_guides/deploying/agents/index.md).\n\n## Native OpenAIAgent\n\nWe have an `OpenAIAgent` implementation built on the [OpenAI API for function calling](https://openai.com/blog/function-calling-and-other-api-updates) that allows you to rapidly build agents:\n\n- [OpenAIAgent](../../examples/agent/openai_agent.ipynb)\n- [OpenAIAgent with Query Engine Tools](../../examples/agent/openai_agent_with_query_engine.ipynb)\n- [OpenAIAgent Query Planning](../../examples/agent/openai_agent_query_plan.ipynb)\n- [OpenAI Assistant](../../examples/agent/openai_assistant_agent.ipynb)\n- [OpenAI Assistant Cookbook](../../examples/agent/openai_assistant_query_cookbook.ipynb)\n- [Forced Function Calling](../../examples/agent/openai_forced_function_call.ipynb)\n- [Parallel Function Calling](../../examples/agent/openai_agent_parallel_function_calling.ipynb)\n- [Context Retrieval](../../examples/agent/openai_agent_context_retrieval.ipynb)\n\n## Agentic Components within LlamaIndex\n\nLlamaIndex provides core modules capable of automated reasoning for different use cases over your data, which makes them essentially Agents. 
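For comparison with the ReAct setup above, the native `OpenAIAgent` can be dropped in with the same tools. A minimal sketch, assuming the `llama-index-agent-openai` package is installed and reusing `multiply_tool` and `llm` from the example above:

```python
from llama_index.agent.openai import OpenAIAgent

# the OpenAI function-calling API picks the tool instead of a ReAct loop
agent = OpenAIAgent.from_tools([multiply_tool], llm=llm, verbose=True)
response = agent.chat("What is 12 * 11?")
print(response)
```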
Some of these core modules are shown below along with example tutorials.\n\n**SubQuestionQueryEngine for Multi Document Analysis**\n\n- [Sub Question Query Engine (Intro)](../../examples/query_engine/sub_question_query_engine.ipynb)\n- [10Q Analysis (Uber)](../../examples/usecases/10q_sub_question.ipynb)\n- [10K Analysis (Uber and Lyft)](../../examples/usecases/10k_sub_question.ipynb)\n\n**Query Transformations**\n\n- [How-To](../../optimizing/advanced_retrieval/query_transformations.md)\n- [Multi-Step Query Decomposition](../../examples/query_transformations/HyDEQueryTransformDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_transformations/HyDEQueryTransformDemo.ipynb))\n\n**Routing**\n\n- [Usage](../../module_guides/querying/router/index.md)\n- [Router Query Engine Guide](../../examples/query_engine/RouterQueryEngine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/RouterQueryEngine.ipynb))\n\n**LLM Reranking**\n\n- [Second Stage Processing How-To](../../module_guides/querying/node_postprocessors/index.md)\n- [LLM Reranking Guide (Great Gatsby)](../../examples/node_postprocessor/LLMReranker-Gatsby.ipynb)\n\n**Chat Engines**\n\n- [Chat Engines How-To](../../module_guides/deploying/chat_engines/index.md)\n\n## Using LlamaIndex as a Tool within an Agent Framework\n\nLlamaIndex can be used as a Tool within agent frameworks, including LangChain and ChatGPT. These integrations are described below.\n\n### LangChain\n\nWe have deep integrations with LangChain.\nLlamaIndex query engines can be easily packaged as Tools to be used within a LangChain agent, and LlamaIndex can also be used as a memory module / retriever. Check out our guides/tutorials below!\n\n**Resources**\n\n- [Building a Chatbot Tutorial](chatbots/building_a_chatbot.md)\n- [OnDemandLoaderTool Tutorial](../../examples/tools/OnDemandLoaderTool.ipynb)\n\n### ChatGPT\n\nLlamaIndex can be used as a ChatGPT retrieval plugin (we have a TODO to develop a more general plugin as well).\n\n**Resources**\n\n- [LlamaIndex ChatGPT Retrieval Plugin](https://github.com/openai/chatgpt-retrieval-plugin#llamaindex)"} {"tokens": 5652, "doc_id": "8dada3ca-6484-4531-8f3d-cf97f6b9fcd9", "name": "A Guide to Extracting Terms and Definitions", "url": "https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/q_and_a/terms_definitions_tutorial", "retrieve_doc": true, "source": "llama_index", "content": "# A Guide to Extracting Terms and Definitions\n\nLlama Index has many use cases (semantic search, summarization, etc.) that are well documented. However, this doesn't mean we can't apply Llama Index to very specific use cases!\n\nIn this tutorial, we will go through the design process of using Llama Index to extract terms and definitions from text, while allowing users to query those terms later. Using [Streamlit](https://streamlit.io/), we can provide an easy way to build a frontend for running and testing all of this, and quickly iterate with our design.\n\nThis tutorial assumes you have Python 3.9+ and the following packages installed:\n\n- llama-index\n- streamlit\n\nAt the base level, our objective is to take text from a document, extract terms and definitions, and then provide a way for users to query that knowledge base of terms and definitions. 
The tutorial will go over features from both Llama Index and Streamlit, and hopefully provide some interesting solutions for common problems that come up.\n\nThe final version of this tutorial can be found [here](https://github.com/abdulasiraj/A-Guide-to-Extracting-Terms-and-Definitions) and a live hosted demo is available on [Huggingface Spaces](https://huggingface.co./spaces/Nobody4591/Llama_Index_Term_Extractor).\n\n## Uploading Text\n\nStep one is giving users a way to input text manually. Let’s write some code using Streamlit to provide the interface for this! Use the following code and launch the app with `streamlit run app.py`.\n\n```python\nimport streamlit as st\n\nst.title(\"🦙 Llama Index Term Extractor 🦙\")\n\ndocument_text = st.text_area(\"Enter raw text\")\nif st.button(\"Extract Terms and Definitions\") and document_text:\n with st.spinner(\"Extracting...\"):\n extracted_terms = document_text # this is a placeholder!\n st.write(extracted_terms)\n```\n\nSuper simple, right? But you'll notice that the app doesn't do anything useful yet. To use llama_index, we also need to set up our OpenAI LLM. There are a bunch of possible settings for the LLM, so we can let the user figure out what's best. We should also let the user set the prompt that will extract the terms (which will also help us debug what works best).\n\n## LLM Settings\n\nThis next step introduces some tabs to our app, to separate it into different panes that provide different features. Let's create a tab for LLM settings and for uploading text:\n\n```python\nimport os\nimport streamlit as st\n\nDEFAULT_TERM_STR = (\n \"Make a list of terms and definitions that are defined in the context, \"\n \"with one pair on each line. \"\n \"If a term is missing its definition, use your best judgment. \"\n \"Write each line as follows:\\nTerm: <term> Definition: <definition>\"\n)\n\nst.title(\"🦙 Llama Index Term Extractor 🦙\")\n\nsetup_tab, upload_tab = st.tabs([\"Setup\", \"Upload/Extract Terms\"])\n\nwith setup_tab:\n st.subheader(\"LLM Setup\")\n api_key = st.text_input(\"Enter your OpenAI API key here\", type=\"password\")\n llm_name = st.selectbox(\"Which LLM?\", [\"gpt-3.5-turbo\", \"gpt-4\"])\n model_temperature = st.slider(\n \"LLM Temperature\", min_value=0.0, max_value=1.0, step=0.1\n )\n term_extract_str = st.text_area(\n \"The query to extract terms and definitions with.\",\n value=DEFAULT_TERM_STR,\n )\n\nwith upload_tab:\n st.subheader(\"Extract and Query Definitions\")\n document_text = st.text_area(\"Enter raw text\")\n if st.button(\"Extract Terms and Definitions\") and document_text:\n with st.spinner(\"Extracting...\"):\n extracted_terms = document_text # this is a placeholder!\n st.write(extracted_terms)\n```\n\nNow our app has two tabs, which really helps with the organization. 
You'll also notice I added a default prompt to extract terms -- you can change this later once you try extracting some terms, it's just the prompt I arrived at after experimenting a bit.\n\nSpeaking of extracting terms, it's time to add some functions to do just that!\n\n## Extracting and Storing Terms\n\nNow that we are able to define LLM settings and input text, we can try using Llama Index to extract the terms from text for us!\n\nWe can add the following functions to both initialize our LLM, as well as use it to extract terms from the input text.\n\n```python\nfrom llama_index.core import Document, SummaryIndex, load_index_from_storage\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.core import Settings\n\n\ndef get_llm(llm_name, model_temperature, api_key, max_tokens=256):\n os.environ[\"OPENAI_API_KEY\"] = api_key\n return OpenAI(\n temperature=model_temperature, model=llm_name, max_tokens=max_tokens\n )\n\n\ndef extract_terms(\n documents, term_extract_str, llm_name, model_temperature, api_key\n):\n llm = get_llm(llm_name, model_temperature, api_key, max_tokens=1024)\n\n temp_index = SummaryIndex.from_documents(\n documents,\n )\n query_engine = temp_index.as_query_engine(\n response_mode=\"tree_summarize\", llm=llm\n )\n terms_definitions = str(query_engine.query(term_extract_str))\n terms_definitions = [\n x\n for x in terms_definitions.split(\"\\n\")\n if x and \"Term:\" in x and \"Definition:\" in x\n ]\n # parse the text into a dict\n terms_to_definition = {\n x.split(\"Definition:\")[0]\n .split(\"Term:\")[-1]\n .strip(): x.split(\"Definition:\")[-1]\n .strip()\n for x in terms_definitions\n }\n return terms_to_definition\n```\n\nNow, using the new functions, we can finally extract our terms!\n\n```python\n...\nwith upload_tab:\n st.subheader(\"Extract and Query Definitions\")\n document_text = st.text_area(\"Enter raw text\")\n if st.button(\"Extract Terms and Definitions\") and document_text:\n with st.spinner(\"Extracting...\"):\n extracted_terms = extract_terms(\n [Document(text=document_text)],\n term_extract_str,\n llm_name,\n model_temperature,\n api_key,\n )\n st.write(extracted_terms)\n```\n\nThere's a lot going on now, so let's take a moment to go over what is happening.\n\n`get_llm()` instantiates the LLM based on the user configuration from the setup tab: it sets the API key in the environment and passes the chosen model name and temperature to `OpenAI`.\n\n`extract_terms()` is where all the good stuff happens. First, we call `get_llm()` with `max_tokens=1024`, since we don't want to limit the model too much when it is extracting our terms and definitions (the default is 256 if not set). When documents are indexed by Llama Index, they are broken into chunks (also called nodes) if they are large; the chunk size used for this is controlled by the global `Settings`.\n\nNext, we create a temporary summary index and pass in our llm. A summary index will read every single piece of text in our index, which is perfect for extracting terms. Finally, we use our pre-defined query text to extract terms, using `response_mode=\"tree_summarize\"`. This response mode will generate a tree of summaries from the bottom up, where each parent summarizes its children. Finally, the top of the tree is returned, which will contain all our extracted terms and definitions.\n\nLastly, we do some minor post processing. 
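Concretely, a single well-formed output line parses like this (a toy walk-through of the dict comprehension above):

```python
line = "Term: bunnyhug Definition: A hoodie, in Canadian Prairies slang."

# everything between "Term:" and "Definition:" is the term
term = line.split("Definition:")[0].split("Term:")[-1].strip()
# everything after "Definition:" is the definition
definition = line.split("Definition:")[-1].strip()

print({term: definition})
# {'bunnyhug': 'A hoodie, in Canadian Prairies slang.'}
```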
We assume the model followed instructions and put a term/definition pair on each line. If a line is missing the `Term:` or `Definition:` labels, we skip it. Then, we convert this to a dictionary for easy storage!\n\n## Saving Extracted Terms\n\nNow that we can extract terms, we need to put them somewhere so that we can query for them later. A `VectorStoreIndex` should be a perfect choice for now! But in addition, our app should also keep track of which terms are inserted into the index so that we can inspect them later. Using `st.session_state`, we can store the current list of terms in a session dict, unique to each user!\n\nFirst things first though, let's add a feature to initialize a global vector index and another function to insert the extracted terms.\n\n```python\nfrom llama_index.core import Settings, VectorStoreIndex\n\n...\nif \"all_terms\" not in st.session_state:\n st.session_state[\"all_terms\"] = {}\n...\n\n\ndef insert_terms(terms_to_definition):\n for term, definition in terms_to_definition.items():\n doc = Document(text=f\"Term: {term}\\nDefinition: {definition}\")\n st.session_state[\"llama_index\"].insert(doc)\n\n\n@st.cache_resource\ndef initialize_index(llm_name, model_temperature, api_key):\n \"\"\"Create the VectorStoreIndex object.\"\"\"\n Settings.llm = get_llm(llm_name, model_temperature, api_key)\n\n index = VectorStoreIndex([])\n\n return index\n\n\n...\n\nwith upload_tab:\n st.subheader(\"Extract and Query Definitions\")\n if st.button(\"Initialize Index and Reset Terms\"):\n st.session_state[\"llama_index\"] = initialize_index(\n llm_name, model_temperature, api_key\n )\n st.session_state[\"all_terms\"] = {}\n\n if \"llama_index\" in st.session_state:\n st.markdown(\n \"Either upload an image/screenshot of a document, or enter the text manually.\"\n )\n document_text = st.text_area(\"Or enter raw text\")\n if st.button(\"Extract Terms and Definitions\") and document_text:\n st.session_state[\"terms\"] = {}\n terms_docs = {}\n with st.spinner(\"Extracting...\"):\n terms_docs.update(\n extract_terms(\n [Document(text=document_text)],\n term_extract_str,\n llm_name,\n model_temperature,\n api_key,\n )\n )\n st.session_state[\"terms\"].update(terms_docs)\n\n if \"terms\" in st.session_state and st.session_state[\"terms\"]:\n st.markdown(\"Extracted terms\")\n st.json(st.session_state[\"terms\"])\n\n if st.button(\"Insert terms?\"):\n with st.spinner(\"Inserting terms\"):\n insert_terms(st.session_state[\"terms\"])\n st.session_state[\"all_terms\"].update(st.session_state[\"terms\"])\n st.session_state[\"terms\"] = {}\n st.experimental_rerun()\n```\n\nNow you are really starting to leverage the power of Streamlit! Let's start with the code under the upload tab. We added a button to initialize the vector index, and we store it in the global Streamlit state dictionary, as well as resetting the currently extracted terms. Then, after extracting terms from the input text, we store the extracted terms in the global state again and give the user a chance to review them before inserting. If the insert button is pressed, then we call our insert terms function, update our global tracking of inserted terms, and remove the most recently extracted terms from the session state.\n\n## Querying for Extracted Terms/Definitions\n\nWith the terms and definitions extracted and saved, how can we use them? And how will the user even remember what's previously been saved?? 
We can simply add some more tabs to the app to handle these features.\n\n```python\n...\nsetup_tab, terms_tab, upload_tab, query_tab = st.tabs(\n [\"Setup\", \"All Terms\", \"Upload/Extract Terms\", \"Query Terms\"]\n)\n...\nwith terms_tab:\n st.subheader(\"Current Extracted Terms and Definitions\")\n st.json(st.session_state[\"all_terms\"])\n...\nwith query_tab:\n st.subheader(\"Query for Terms/Definitions!\")\n st.markdown(\n (\n \"The LLM will attempt to answer your query, and augment its answers using the terms/definitions you've inserted. \"\n \"If a term is not in the index, it will answer using its internal knowledge.\"\n )\n )\n if st.button(\"Initialize Index and Reset Terms\", key=\"init_index_2\"):\n st.session_state[\"llama_index\"] = initialize_index(\n llm_name, model_temperature, api_key\n )\n st.session_state[\"all_terms\"] = {}\n\n if \"llama_index\" in st.session_state:\n query_text = st.text_input(\"Ask about a term or definition:\")\n if query_text:\n query_text = (\n query_text\n + \"\\nIf you can't find the answer, answer the query with the best of your knowledge.\"\n )\n with st.spinner(\"Generating answer...\"):\n response = (\n st.session_state[\"llama_index\"]\n .as_query_engine(\n similarity_top_k=5, response_mode=\"compact\"\n )\n .query(query_text)\n )\n st.markdown(str(response))\n```\n\nWhile this is mostly basic, some important things to note:\n\n- Our initialize button has the same text as our other button. Streamlit will complain about this, so we provide a unique key instead.\n- Some additional text has been added to the query! This is to try and compensate for times when the index does not have the answer.\n- In our index query, we've specified two options:\n - `similarity_top_k=5` means the index will fetch the top 5 closest matching terms/definitions to the query.\n - `response_mode=\"compact\"` means as much text as possible from the 5 matching terms/definitions will be used in each LLM call. Without this, the index would make at least 5 calls to the LLM, which can slow things down for the user.\n\n## Dry Run Test\n\nWell, actually I hope you've been testing as we went. But now, let's try one complete test.\n\n1. Refresh the app\n2. Enter your LLM settings\n3. Head over to the query tab\n4. Ask the following: `What is a bunnyhug?`\n5. The app should give some nonsense response. If you didn't know, a bunnyhug is another word for a hoodie, used by people from the Canadian Prairies!\n6. Let's add this definition to the app. Open the upload tab and enter the following text: `A bunnyhug is a common term used to describe a hoodie. This term is used by people from the Canadian Prairies.`\n7. Click the extract button. After a few moments, the app should display the correctly extracted term/definition. Click the insert term button to save it!\n8. If we open the terms tab, the term and definition we just extracted should be displayed\n9. Go back to the query tab and try asking what a bunnyhug is. Now, the answer should be correct!\n\n## Improvement #1 - Create a Starting Index\n\nWith our base app working, it might feel like a lot of work to build up a useful index. What if we gave the user some kind of starting point to show off the app's query capabilities? We can do just that! 
First, let's make a small change to our app so that we save the index to disk after every upload:\n\n```python\ndef insert_terms(terms_to_definition):\n for term, definition in terms_to_definition.items():\n doc = Document(text=f\"Term: {term}\\nDefinition: {definition}\")\n st.session_state[\"llama_index\"].insert(doc)\n # TEMPORARY - save to disk\n st.session_state[\"llama_index\"].storage_context.persist()\n```\n\nNow, we need a document to extract from! The repository for this project used the Wikipedia page on New York City, and you can find the text [here](https://github.com/jerryjliu/llama_index/blob/main/examples/test_wiki/data/nyc_text.txt).\n\nIf you paste the text into the upload tab and run it (it may take some time), we can insert the extracted terms. Make sure to also copy the text for the extracted terms into a notepad or similar before inserting into the index! We will need them in a second.\n\nAfter inserting, remove the line of code we used to save the index to disk. With a starting index now saved, we can modify our `initialize_index` function to rebuild the storage context from the default persist directory and load the index (note the extra `StorageContext` import):\n\n```python\nfrom llama_index.core import StorageContext\n\n\n@st.cache_resource\ndef initialize_index(llm_name, model_temperature, api_key):\n \"\"\"Load the Index object.\"\"\"\n Settings.llm = get_llm(llm_name, model_temperature, api_key)\n\n # \"./storage\" is the default location used by persist() above\n storage_context = StorageContext.from_defaults(persist_dir=\"./storage\")\n index = load_index_from_storage(storage_context)\n\n return index\n```\n\nDid you remember to save that giant list of extracted terms in a notepad? Now when our app initializes, we want to pass in the default terms that are in the index to our global terms state:\n\n```python\n...\nif \"all_terms\" not in st.session_state:\n st.session_state[\"all_terms\"] = DEFAULT_TERMS\n...\n```\n\nRepeat the above anywhere where we were previously resetting the `all_terms` values.\n\n## Improvement #2 - (Refining) Better Prompts\n\nIf you play around with the app a bit now, you might notice that it stopped following our prompt! Remember, we added to our `query_str` variable that if the term/definition could not be found, answer to the best of its knowledge. But now if you try asking about random terms (like bunnyhug!), it may or may not follow those instructions.\n\nThis is due to the concept of \"refining\" answers in Llama Index. Since we are querying across the top 5 matching results, sometimes all the results do not fit in a single prompt! OpenAI models typically have a max input size of 4097 tokens. So, Llama Index accounts for this by breaking up the matching results into chunks that will fit into the prompt. After Llama Index gets an initial answer from the first API call, it sends the next chunk to the API, along with the previous answer, and asks the model to refine that answer.\n\nSo, the refine process seems to be messing with our results! Rather than appending extra instructions to the `query_str`, remove that, and Llama Index will let us provide our own custom prompts! Let's create those now, using the [default prompts](https://github.com/jerryjliu/llama_index/blob/main/llama_index/prompts/default_prompts.py) and [chat specific prompts](https://github.com/jerryjliu/llama_index/blob/main/llama_index/prompts/chat_prompts.py) as a guide. 
Using a new file `constants.py`, let's create some new query templates:\n\n```python\nfrom llama_index.core import (\n PromptTemplate,\n SelectorPromptTemplate,\n ChatPromptTemplate,\n)\nfrom llama_index.core.prompts.utils import is_chat_model\nfrom llama_index.core.llms import ChatMessage, MessageRole\n\n# Text QA templates\nDEFAULT_TEXT_QA_PROMPT_TMPL = (\n \"Context information is below. \\n\"\n \"---------------------\\n\"\n \"{context_str}\"\n \"\\n---------------------\\n\"\n \"Given the context information answer the following question \"\n \"(if you don't know the answer, use the best of your knowledge): {query_str}\\n\"\n)\nTEXT_QA_TEMPLATE = PromptTemplate(DEFAULT_TEXT_QA_PROMPT_TMPL)\n\n# Refine templates\nDEFAULT_REFINE_PROMPT_TMPL = (\n \"The original question is as follows: {query_str}\\n\"\n \"We have provided an existing answer: {existing_answer}\\n\"\n \"We have the opportunity to refine the existing answer \"\n \"(only if needed) with some more context below.\\n\"\n \"------------\\n\"\n \"{context_msg}\\n\"\n \"------------\\n\"\n \"Given the new context and using the best of your knowledge, improve the existing answer. \"\n \"If you can't improve the existing answer, just repeat it again.\"\n)\nDEFAULT_REFINE_PROMPT = PromptTemplate(DEFAULT_REFINE_PROMPT_TMPL)\n\nCHAT_REFINE_PROMPT_TMPL_MSGS = [\n ChatMessage(content=\"{query_str}\", role=MessageRole.USER),\n ChatMessage(content=\"{existing_answer}\", role=MessageRole.ASSISTANT),\n ChatMessage(\n content=\"We have the opportunity to refine the above answer \"\n \"(only if needed) with some more context below.\\n\"\n \"------------\\n\"\n \"{context_msg}\\n\"\n \"------------\\n\"\n \"Given the new context and using the best of your knowledge, improve the existing answer. \"\n \"If you can't improve the existing answer, just repeat it again.\",\n role=MessageRole.USER,\n ),\n]\n\nCHAT_REFINE_PROMPT = ChatPromptTemplate(CHAT_REFINE_PROMPT_TMPL_MSGS)\n\n# refine prompt selector\nREFINE_TEMPLATE = SelectorPromptTemplate(\n default_template=DEFAULT_REFINE_PROMPT,\n conditionals=[(is_chat_model, CHAT_REFINE_PROMPT)],\n)\n```\n\nThat seems like a lot of code, but it's not too bad! If you looked at the default prompts, you might have noticed that there are default prompts, and prompts specific to chat models. Continuing that trend, we do the same for our custom prompts. Then, using a prompt selector, we can combine both prompts into a single object. If the LLM being used is a chat model (ChatGPT, GPT-4), then the chat prompts are used. Otherwise, use the normal prompt templates.\n\nAnother thing to note is that we only defined one QA template. 
In a chat model, this will be converted to a single \"human\" message.\n\nSo, now we can import these prompts into our app and use them during the query.\n\n```python\nfrom constants import REFINE_TEMPLATE, TEXT_QA_TEMPLATE\n\n...\nif \"llama_index\" in st.session_state:\n query_text = st.text_input(\"Ask about a term or definition:\")\n if query_text:\n query_text = query_text # Notice we removed the old instructions\n with st.spinner(\"Generating answer...\"):\n response = (\n st.session_state[\"llama_index\"]\n .as_query_engine(\n similarity_top_k=5,\n response_mode=\"compact\",\n text_qa_template=TEXT_QA_TEMPLATE,\n refine_template=REFINE_TEMPLATE,\n )\n .query(query_text)\n )\n st.markdown(str(response))\n...\n```\n\nIf you experiment a bit more with queries, hopefully you notice that the responses follow our instructions a little better now!\n\n## Improvement #3 - Image Support\n\nLlama Index also supports images! Using Llama Index, we can upload images of documents (papers, letters, etc.), and Llama Index handles extracting the text. We can leverage this to also allow users to upload images of their documents and extract terms and definitions from them.\n\nIf you get an import error about PIL, install it using `pip install Pillow` first.\n\n```python\nfrom PIL import Image\nfrom llama_index.core import SimpleDirectoryReader\nfrom llama_index.readers.file import ImageReader\n\n\n@st.cache_resource\ndef get_file_extractor():\n image_parser = ImageReader(keep_image=True, parse_text=True)\n file_extractor = {\n \".jpg\": image_parser,\n \".png\": image_parser,\n \".jpeg\": image_parser,\n }\n return file_extractor\n\n\nfile_extractor = get_file_extractor()\n...\nwith upload_tab:\n st.subheader(\"Extract and Query Definitions\")\n if st.button(\"Initialize Index and Reset Terms\", key=\"init_index_1\"):\n st.session_state[\"llama_index\"] = initialize_index(\n llm_name, model_temperature, api_key\n )\n st.session_state[\"all_terms\"] = DEFAULT_TERMS\n\n if \"llama_index\" in st.session_state:\n st.markdown(\n \"Either upload an image/screenshot of a document, or enter the text manually.\"\n )\n uploaded_file = st.file_uploader(\n \"Upload an image/screenshot of a document:\",\n type=[\"png\", \"jpg\", \"jpeg\"],\n )\n document_text = st.text_area(\"Or enter raw text\")\n if st.button(\"Extract Terms and Definitions\") and (\n uploaded_file or document_text\n ):\n st.session_state[\"terms\"] = {}\n terms_docs = {}\n with st.spinner(\"Extracting (images may be slow)...\"):\n if document_text:\n terms_docs.update(\n extract_terms(\n [Document(text=document_text)],\n term_extract_str,\n llm_name,\n model_temperature,\n api_key,\n )\n )\n if uploaded_file:\n Image.open(uploaded_file).convert(\"RGB\").save(\"temp.png\")\n img_reader = SimpleDirectoryReader(\n input_files=[\"temp.png\"], file_extractor=file_extractor\n )\n img_docs = img_reader.load_data()\n os.remove(\"temp.png\")\n terms_docs.update(\n extract_terms(\n img_docs,\n term_extract_str,\n llm_name,\n model_temperature,\n api_key,\n )\n )\n st.session_state[\"terms\"].update(terms_docs)\n\n if \"terms\" in st.session_state and st.session_state[\"terms\"]:\n st.markdown(\"Extracted terms\")\n st.json(st.session_state[\"terms\"])\n\n if st.button(\"Insert terms?\"):\n with st.spinner(\"Inserting terms\"):\n insert_terms(st.session_state[\"terms\"])\n st.session_state[\"all_terms\"].update(st.session_state[\"terms\"])\n st.session_state[\"terms\"] = {}\n st.experimental_rerun()\n```\n\nHere, we added the option to upload a file using Streamlit. 
Then the image is opened and saved to disk (this seems hacky but it keeps things simple). Then we pass the image path to the reader, extract the documents/text, and remove our temp image file.\n\nNow that we have the documents, we can call `extract_terms()` the same as before.\n\n## Conclusion/TLDR\n\nIn this tutorial, we covered a ton of information, while solving some common issues and problems along the way:\n\n- Using different indexes for different use cases (List vs. Vector index)\n- Storing global state values with Streamlit's `session_state` concept\n- Customizing internal prompts with Llama Index\n- Reading text from images with Llama Index\n\nThe final version of this tutorial can be found [here](https://github.com/abdulasiraj/A-Guide-to-Extracting-Terms-and-Definitions) and a live hosted demo is available on [Huggingface Spaces](https://huggingface.co./spaces/Nobody4591/Llama_Index_Term_Extractor)."} {"tokens": 1871, "doc_id": "86e843c6-0a02-4475-84f3-0daaee761aeb", "name": "Q&A patterns", "url": "https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/q_and_a/index", "retrieve_doc": true, "source": "llama_index", "content": "# Q&A patterns\n\n## Semantic Search\n\nThe most basic example usage of LlamaIndex is through semantic search. We provide a simple in-memory vector store for you to get started, but you can also choose to use any one of our [vector store integrations](../../community/integrations/vector_stores.md):\n\n```python\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n\ndocuments = SimpleDirectoryReader(\"data\").load_data()\nindex = VectorStoreIndex.from_documents(documents)\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What did the author do growing up?\")\nprint(response)\n```\n\n**Tutorials**\n\n- [Starter Tutorial](../../getting_started/starter_example.md)\n- [Basic Usage Pattern](../querying/querying.md)\n\n**Guides**\n\n- [Example](../../examples/vector_stores/SimpleIndexDemo.ipynb) ([Notebook](https://github.com/run-llama/llama_index/tree/main/docs../../examples/vector_stores/SimpleIndexDemo.ipynb))\n\n## Summarization\n\nA summarization query requires the LLM to iterate through many if not most documents in order to synthesize an answer.\nFor instance, a summarization query could look like one of the following:\n\n- \"What is a summary of this collection of text?\"\n- \"Give me a summary of person X's experience with the company.\"\n\nIn general, a summary index would be suited for this use case. 
A summary index by default goes through all the data.\n\nEmpirically, setting `response_mode=\"tree_summarize\"` also leads to better summarization results.\n\n```python\nindex = SummaryIndex.from_documents(documents)\n\nquery_engine = index.as_query_engine(response_mode=\"tree_summarize\")\nresponse = query_engine.query(\"<summarization_query>\")\n```\n\n## Queries over Structured Data\n\nLlamaIndex supports queries over structured data, whether that's a Pandas DataFrame or a SQL Database.\n\nHere are some relevant resources:\n\n**Tutorials**\n\n- [Guide on Text-to-SQL](structured_data.md)\n\n**Guides**\n\n- [SQL Guide (Core)](../../examples/index_structs/struct_indices/SQLIndexDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/index_structs/struct_indices/SQLIndexDemo.ipynb))\n- [Pandas Demo](../../examples/query_engine/pandas_query_engine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/pandas_query_engine.ipynb))\n\n## Routing over Heterogeneous Data\n\nLlamaIndex also supports routing over heterogeneous data sources with `RouterQueryEngine` - for instance, if you want to \"route\" a query to an\nunderlying Document or a sub-index.\n\nTo do this, first build the sub-indices over different data sources.\nThen construct the corresponding query engines, and give each query engine a description to obtain a `QueryEngineTool`.\n\n```python\nfrom llama_index.core import TreeIndex, VectorStoreIndex\nfrom llama_index.core.tools import QueryEngineTool\n\n...\n\n# define sub-indices\nindex1 = VectorStoreIndex.from_documents(notion_docs)\nindex2 = VectorStoreIndex.from_documents(slack_docs)\n\n# define query engines and tools\ntool1 = QueryEngineTool.from_defaults(\n query_engine=index1.as_query_engine(),\n description=\"Use this query engine to do...\",\n)\ntool2 = QueryEngineTool.from_defaults(\n query_engine=index2.as_query_engine(),\n description=\"Use this query engine for something else...\",\n)\n```\n\nThen, we define a `RouterQueryEngine` over them.\nBy default, this uses an `LLMSingleSelector` as the router, which uses the LLM to choose the best sub-index to route the query to, given the descriptions.\n\n```python\nfrom llama_index.core.query_engine import RouterQueryEngine\n\nquery_engine = RouterQueryEngine.from_defaults(\n query_engine_tools=[tool1, tool2]\n)\n\nresponse = query_engine.query(\n \"In Notion, give me a summary of the product roadmap.\"\n)\n```\n\n**Guides**\n\n- [Router Query Engine Guide](../../examples/query_engine/RouterQueryEngine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/RouterQueryEngine.ipynb))\n\n## Compare/Contrast Queries\n\nYou can explicitly perform compare/contrast queries with a **query transformation** module within a ComposableGraph.\n\n```python\nfrom llama_index.core.query.query_transform.base import DecomposeQueryTransform\n\n# pass in the LLM to use for decomposition (e.g. your configured Settings.llm)\ndecompose_transform = DecomposeQueryTransform(\n llm, verbose=True\n)\n```\n\nThis module will help break down a complex query into a simpler one over your existing index structure.\n\n**Guides**\n\n- [Query Transformations](../../optimizing/advanced_retrieval/query_transformations.md)\n\nYou can also rely on the LLM to _infer_ whether to perform compare/contrast queries (see Multi Document Queries below).\n\n## Multi Document Queries\n\nBesides the explicit synthesis/routing flows described above, LlamaIndex can support more general multi-document queries as well.\nIt can do this 
## Multi Document Queries

Besides the explicit synthesis/routing flows described above, LlamaIndex can support more general multi-document queries as well.
It can do this through our `SubQuestionQueryEngine` class. Given a query, this query engine will generate a "query plan" containing
sub-queries against sub-documents before synthesizing the final answer.

To do this, first define an index for each document/data source, and wrap it with a `QueryEngineTool` (similar to above):

```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata

query_engine_tools = [
    QueryEngineTool(
        query_engine=sept_engine,
        metadata=ToolMetadata(
            name="sept_22",
            description="Provides information about Uber quarterly financials ending September 2022",
        ),
    ),
    QueryEngineTool(
        query_engine=june_engine,
        metadata=ToolMetadata(
            name="june_22",
            description="Provides information about Uber quarterly financials ending June 2022",
        ),
    ),
    QueryEngineTool(
        query_engine=march_engine,
        metadata=ToolMetadata(
            name="march_22",
            description="Provides information about Uber quarterly financials ending March 2022",
        ),
    ),
]
```

Then, we define a `SubQuestionQueryEngine` over these tools:

```python
from llama_index.core.query_engine import SubQuestionQueryEngine

query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools
)
```

This query engine can execute any number of sub-queries against any subset of query engine tools before synthesizing the final answer.
This makes it especially well-suited for compare/contrast queries across documents as well as queries pertaining to a specific document.

**Guides**

- [Sub Question Query Engine (Intro)](../../examples/query_engine/sub_question_query_engine.ipynb)
- [10Q Analysis (Uber)](../../examples/usecases/10q_sub_question.ipynb)
- [10K Analysis (Uber and Lyft)](../../examples/usecases/10k_sub_question.ipynb)

## Multi-Step Queries

LlamaIndex can also support iterative multi-step queries. Given a complex query, it breaks the query down into an initial subquestion,
and sequentially generates further subquestions based on the returned answers until the final answer is reached.

For instance, given a question "Who was in the first batch of the accelerator program the author started?",
the module will first decompose the query into a simpler initial question "What was the accelerator program the author started?",
query the index, and then ask follow-up questions.

**Guides**

- [Query Transformations](../../optimizing/advanced_retrieval/query_transformations.md)
- [Multi-Step Query Decomposition](../../examples/query_transformations/HyDEQueryTransformDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_transformations/HyDEQueryTransformDemo.ipynb))
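A minimal sketch of one way to wire this up, using `StepDecomposeQueryTransform` with the `MultiStepQueryEngine` wrapper (exact import paths may vary slightly across versions; `index` is assumed to be an existing index, and the `index_summary` string is a placeholder to adapt):

```python
from llama_index.core.indices.query.query_transform.base import (
    StepDecomposeQueryTransform,
)
from llama_index.core.query_engine import MultiStepQueryEngine
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4")
step_decompose_transform = StepDecomposeQueryTransform(llm=llm, verbose=True)

# each step: generate a subquestion, answer it, feed the answer into the next step
query_engine = MultiStepQueryEngine(
    query_engine=index.as_query_engine(llm=llm),
    query_transform=step_decompose_transform,
    index_summary="Essays by the author about his life and work",
)

response = query_engine.query(
    "Who was in the first batch of the accelerator program the author started?"
)
```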
## Temporal Queries

LlamaIndex can support queries that require an understanding of time. It can do this in two ways:

- Decide whether the query requires utilizing temporal relationships between nodes (prev/next relationships) in order to retrieve additional context to answer the question.
- Sort by recency and filter outdated context.

**Guides**

- [Postprocessing Guide](../../module_guides/querying/node_postprocessors/node_postprocessors.md)
- [Prev/Next Postprocessing](../../examples/node_postprocessor/PrevNextPostprocessorDemo.ipynb)
- [Recency Postprocessing](../../examples/node_postprocessor/RecencyPostprocessorDemo.ipynb)

## Additional Resources

- [A Guide to Extracting Terms and Definitions](q_and_a/terms_definitions_tutorial.md)
- [SEC 10k Analysis](https://medium.com/@jerryjliu98/how-unstructured-and-llamaindex-can-help-bring-the-power-of-llms-to-your-own-data-3657d063e30d)

# Airbyte SQL Index Guide

We will show how to generate SQL queries on a Snowflake db generated by Airbyte.

```python
# Uncomment to enable debugging.

# import logging
# import sys

# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```

### Airbyte ingestion

Here we show how to ingest data from Github into a Snowflake db using Airbyte.

```python
from IPython.display import Image

Image(filename="img/airbyte_1.png")
```

Let's create a new connection. Here we will be dumping our Zendesk tickets into a Snowflake db.

```python
Image(filename="img/github_1.png")
```

```python
Image(filename="img/github_2.png")
```

```python
Image(filename="img/snowflake_1.png")
```

```python
Image(filename="img/snowflake_2.png")
```

Choose the streams you want to sync.

```python
Image(filename="img/airbyte_7.png")
```

```python
Image(filename="img/github_3.png")
```

Sync your data.

```python
Image(filename="img/airbyte_9.png")
```

```python
Image(filename="img/airbyte_8.png")
```

### Snowflake-SQLAlchemy version fix

Hack to make snowflake-sqlalchemy work despite incompatible sqlalchemy versions.

Taken from https://github.com/snowflakedb/snowflake-sqlalchemy/issues/380#issuecomment-1470762025
```python
# Hack to make snowflake-sqlalchemy work until they patch it


def snowflake_sqlalchemy_20_monkey_patches():
    import sqlalchemy.util.compat

    # make strings always return unicode strings
    sqlalchemy.util.compat.string_types = (str,)
    sqlalchemy.types.String.RETURNS_UNICODE = True

    import snowflake.sqlalchemy.snowdialect

    snowflake.sqlalchemy.snowdialect.SnowflakeDialect.returns_unicode_strings = (
        True
    )

    # make has_table() support the `info_cache` kwarg
    def has_table(self, connection, table_name, schema=None, info_cache=None):
        """Checks if the table exists."""
        return self._has_object(connection, "TABLE", table_name, schema)

    snowflake.sqlalchemy.snowdialect.SnowflakeDialect.has_table = has_table


# usage: call this function before creating an engine:
try:
    snowflake_sqlalchemy_20_monkey_patches()
except Exception as e:
    raise ValueError("Please run `pip install snowflake-sqlalchemy`") from e
```

### Define database

We pass the Snowflake uri to the SQL db constructor. The angle-bracketed fields are placeholders for your own credentials and resources:

```python
snowflake_uri = "snowflake://<user>:<password>@<account>/<database>/<schema>?warehouse=<warehouse>&role=<role>"
```

First we try connecting with sqlalchemy to check the db works.

```python
from sqlalchemy import select, create_engine, MetaData, Table

# view current table
engine = create_engine(snowflake_uri)
metadata = MetaData(bind=None)
table = Table("ZENDESK_TICKETS", metadata, autoload=True, autoload_with=engine)
stmt = select(table.columns)


with engine.connect() as connection:
    results = connection.execute(stmt).fetchone()
    print(results)
    print(results.keys())
```

    /var/folders/dx/n9yhm8p9039b5bgmgjqy46y40000gn/T/ipykernel_57673/3609487787.py:6: RemovedIn20Warning: Deprecated API features detected! These feature(s) are not compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to updating applications, ensure requirements files are pinned to "sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
      table = Table(

    (False, 'test case', '[]', datetime.datetime(2022, 7, 18, 16, 59, 13, tzinfo=<UTC>), 'test to', None, None, 'question', '{\n "channel": "web",\n "source": {\n "from": {},\n "rel": null,\n "to": {}\n }\n}', True, datetime.datetime(2022, 7, 18, 18, 1, 37, tzinfo=<UTC>), None, '[]', None, 134, None, 1658167297, 'test case', None, '[]', False, '{\n "score": "offered"\n}', 360786799676, 'low', '[]', 'https://d3v-airbyte.zendesk.com/api/v2/tickets/134.json', '[]', 360000358316, 360000084116, '[]', None, '[]', 360033549136, True, None, False, 'new', 360786799676, 'abd39a87-b1f9-4390-bf8b-cf3c288b1f74', datetime.datetime(2023, 6, 9, 0, 25, 23, 501000, tzinfo=pytz.FixedOffset(-420)), datetime.datetime(2023, 6, 9, 0, 38, 20, 440000, tzinfo=<UTC>), '6577ef036668746df889983970579a55', '02522a2b2726fb0a03bb19f2d8d9524d')
    RMKeyView(['from_messaging_channel', 'subject', 'email_cc_ids', 'created_at', 'description', 'custom_status_id', 'external_id', 'type', 'via', 'allow_attachments', 'updated_at', 'problem_id', 'follower_ids', 'due_at', 'id', 'assignee_id', 'generated_timestamp', 'raw_subject', 'forum_topic_id', 'custom_fields', 'allow_channelback', 'satisfaction_rating', 'submitter_id', 'priority', 'collaborator_ids', 'url', 'tags', 'brand_id', 'ticket_form_id', 'sharing_agreement_ids', 'group_id', 'followup_ids', 'organization_id', 'is_public', 'recipient', 'has_incidents', 'status', 'requester_id', '_airbyte_ab_id', '_airbyte_emitted_at', '_airbyte_normalized_at', '_airbyte_zendesk_tickets_hashid', '_airbyte_unique_key'])

### Define SQL DB

Once we have defined the SQLDatabase, we can wrap it in a query engine to query it.
If we know what tables we want to use we can use `NLSQLTableQueryEngine`.
This will generate a SQL query on the specified tables.

```python
from llama_index import SQLDatabase

# You can specify table filters during engine creation.
# sql_database = SQLDatabase(engine, include_tables=["github_issues","github_comments",
\"github_users\"])\n\nsql_database = SQLDatabase(engine)\n```\n\n### Synthesize Query\n\nWe then show a natural language query, which is translated to a SQL query under the hood with our text-to-SQL prompt.\n\n\n```python\nfrom llama_index.indices.struct_store.sql_query import NLSQLTableQueryEngine\nfrom IPython.display import Markdown, display\n\nquery_engine = NLSQLTableQueryEngine(\n sql_database=sql_database,\n tables=[\"github_issues\", \"github_comments\", \"github_users\"],\n)\nquery_str = \"Which issues have the most comments? Give the top 10 and use a join on url.\"\nresponse = query_engine.query(query_str)\ndisplay(Markdown(f\"{response}\"))\n```\n\n\n The top 10 issues with the most comments, based on a join on url, are 'Proof of concept parallel source stream reading implementation for MySQL', 'Remove noisy logging for `LegacyStateManager`', 'Track stream status in source', 'Source Google Analytics v4: - add pk and lookback window', 'Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', '📝 Update outdated docs urls in metadata files', 'Fix emitted intermediate state for initial incremental non-CDC syncs', 'source-postgres : Add logic to handle xmin wraparound', ':bug: Source HubSpot: fix cast string as boolean using string comparison', and 'Fix db-lib JdbcUtils.java to accept JDBC parameters with = sign.'.\n\n\n\n```python\n# You can also get only the SQL query result.\n\nquery_engine = NLSQLTableQueryEngine(\n sql_database=sql_database,\n synthesize_response=False,\n tables=[\"github_issues\", \"github_comments\", \"github_users\"],\n)\nresponse = query_engine.query(query_str)\ndisplay(Markdown(f\"{response}\"))\n```\n\n\n[('Proof of concept parallel source stream reading implementation for MySQL', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 104), ('Remove noisy logging for `LegacyStateManager`', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 39), ('Track stream status in source', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 35), ('Source Google Analytics v4: - add pk and lookback window', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 29), ('Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 28), ('📝 Update outdated docs urls in metadata files', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 26), ('Fix emitted intermediate state for initial incremental non-CDC syncs', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 25), ('source-postgres : Add logic to handle xmin wraparound', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 24), (':bug: Source HubSpot: fix cast string as boolean using string comparison', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 24), ('Fix db-lib JdbcUtils.java to accept 
JDBC parameters with = sign.', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 22)]\n\n\n\n```python\n# You can also get the original SQL query\nsql_query = response.metadata[\"sql_query\"]\ndisplay(Markdown(f\"{sql_query}\"))\n```\n\n\nSELECT gi.title, gi.url, gc.issue_url, COUNT(*) AS comment_count \nFROM github_issues gi \nJOIN github_comments gc ON gi.url = gc.issue_url \nGROUP BY gi.title, gi.url, gc.issue_url \nORDER BY comment_count DESC \nLIMIT 10;\n\n\nWe can also use LLM prediction to figure out what tables to use.\n\nWe first need to create an ObjectIndex of SQLTableSchema. In this case we only pass in the table names.\nThe query engine will fetch the relevant table schema at query time.\n\n\n```python\nfrom llama_index.indices.struct_store.sql_query import (\n SQLTableRetrieverQueryEngine,\n)\nfrom llama_index.objects import (\n SQLTableNodeMapping,\n ObjectIndex,\n SQLTableSchema,\n)\nfrom llama_index import VectorStoreIndex\n\ntable_node_mapping = SQLTableNodeMapping(sql_database)\nall_table_names = sql_database.get_usable_table_names()\ntable_schema_objs = []\nfor table_name in all_table_names:\n table_schema_objs.append(SQLTableSchema(table_name=table_name))\n\nobj_index = ObjectIndex.from_objects(\n table_schema_objs,\n table_node_mapping,\n VectorStoreIndex,\n)\ntable_retriever_query_engine = SQLTableRetrieverQueryEngine(\n sql_database, obj_index.as_retriever(similarity_top_k=1)\n)\nresponse = query_engine.query(query_str)\n\ndisplay(Markdown(f\"{response}\"))\nsql_query = response.metadata[\"sql_query\"]\ndisplay(Markdown(f\"{sql_query}\"))\n```\n\n /Users/hongyishi/Documents/GitHub/gpt_index/.venv/lib/python3.11/site-packages/langchain/sql_database.py:279: UserWarning: This method is deprecated - please use `get_usable_table_names`.\n warnings.warn(\n\n\n\n[('Proof of concept parallel source stream reading implementation for MySQL', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 104), ('Remove noisy logging for `LegacyStateManager`', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 39), ('Track stream status in source', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 35), ('Source Google Analytics v4: - add pk and lookback window', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 29), ('Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 28), ('📝 Update outdated docs urls in metadata files', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 26), ('Fix emitted intermediate state for initial incremental non-CDC syncs', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 25), ('source-postgres : Add logic to handle xmin wraparound', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 24), (':bug: Source HubSpot: fix cast string as boolean using string comparison', 
'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 24), ('Fix db-lib JdbcUtils.java to accept JDBC parameters with = sign.', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 22)]

SELECT gi.title, gi.url, gc.issue_url, COUNT(*) AS comment_count 
FROM github_issues gi 
JOIN github_comments gc ON gi.url = gc.issue_url 
GROUP BY gi.title, gi.url, gc.issue_url 
ORDER BY comment_count DESC 
LIMIT 10;

# Structured Data

# A Guide to LlamaIndex + Structured Data

A lot of modern data systems depend on structured data, such as a Postgres DB or a Snowflake data warehouse.
LlamaIndex provides a lot of advanced features, powered by LLMs, both to create structured data from
unstructured data and to analyze this structured data through augmented text-to-SQL capabilities.

**NOTE:** Any Text-to-SQL application should be aware that executing
arbitrary SQL queries can be a security risk. It is recommended to
take precautions as needed, such as using restricted roles, read-only
databases, sandboxing, etc.

This guide helps walk through each of these capabilities. Specifically, we cover the following topics:

- **Setup**: Defining our example SQL table.
- **Building our Table Index**: How to go from a SQL database to a table schema index.
- **Using natural language SQL queries**: How to query our SQL database using natural language.

We will walk through a toy example table which contains city/population/country information.
A notebook for this tutorial is [available here](../../examples/index_structs/struct_indices/SQLIndexDemo.ipynb).

## Setup

First, we use SQLAlchemy to set up a simple sqlite db:

```python
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Integer,
    select,
    column,
)

engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()
```

We then create a toy `city_stats` table:

```python
# create city SQL table
table_name = "city_stats"
city_stats_table = Table(
    table_name,
    metadata_obj,
    Column("city_name", String(16), primary_key=True),
    Column("population", Integer),
    Column("country", String(16), nullable=False),
)
metadata_obj.create_all(engine)
```

Now it's time to insert some datapoints!

If you want to fill this table by inferring structured datapoints
from unstructured data, take a look at the section below.
Otherwise, you can choose to directly populate this table:

```python
from sqlalchemy import insert

rows = [
    {"city_name": "Toronto", "population": 2731571, "country": "Canada"},
    {"city_name": "Tokyo", "population": 13929286, "country": "Japan"},
    {"city_name": "Berlin", "population": 600000, "country": "Germany"},
]
for row in rows:
    stmt = insert(city_stats_table).values(**row)
    with engine.begin() as connection:
        cursor = connection.execute(stmt)
```

Finally, we can wrap the SQLAlchemy engine with our SQLDatabase wrapper;
this allows the db to be used within LlamaIndex:

```python
from llama_index.core import SQLDatabase

sql_database = SQLDatabase(engine, include_tables=["city_stats"])
```

## Natural language SQL

Once we have constructed our SQL database, we can use the NLSQLTableQueryEngine to
construct natural language queries that are synthesized into SQL queries.

Note that we need to specify the tables we want to use with this query engine.
If we don't, the query engine will pull all the schema context, which could
overflow the context window of the LLM.

```python
from llama_index.core.query_engine import NLSQLTableQueryEngine

query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    tables=["city_stats"],
)
query_str = "Which city has the highest population?"
response = query_engine.query(query_str)
```

This query engine should be used in any case where you can specify the tables you want
to query over beforehand, or where the total size of all the table schemas plus the rest of
the prompt fits in your context window.

## Building our Table Index

If we don't know ahead of time which table we would like to use, and the total size of
the table schemas overflows your context window size, we should store the table schemas
in an index so that during query time we can retrieve the right schema.

The way we can do this is using the SQLTableNodeMapping object, which takes in a
SQLDatabase and produces a Node object for each SQLTableSchema object passed
into the ObjectIndex constructor.

```python
from llama_index.core import VectorStoreIndex
from llama_index.core.objects import (
    SQLTableNodeMapping,
    ObjectIndex,
    SQLTableSchema,
)

table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = [
    SQLTableSchema(table_name="city_stats"),
    ...,
]  # one SQLTableSchema for each table

obj_index = ObjectIndex.from_objects(
    table_schema_objs,
    table_node_mapping,
    VectorStoreIndex,
)
```

Here you can see we define our table_node_mapping, and a single SQLTableSchema with the
"city_stats" table name. We pass these into the ObjectIndex constructor, along with the
VectorStoreIndex class definition we want to use. This will give us a VectorStoreIndex where
each Node contains table schema and other context information. You can also add any additional
context information you'd like.

```python
# manually set extra context text
city_stats_text = (
    "This table gives information regarding the population and country of a given city.\n"
    "The user will query with codewords, where 'foo' corresponds to population and 'bar' "
    "corresponds to city."
)

table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = [
    SQLTableSchema(table_name="city_stats", context_str=city_stats_text)
]
```

## Using natural language SQL queries

Once we have defined our table schema index obj_index, we can construct a SQLTableRetrieverQueryEngine
by passing in our SQLDatabase, and a retriever constructed from our object index.

```python
from llama_index.core.indices.struct_store import SQLTableRetrieverQueryEngine

query_engine = SQLTableRetrieverQueryEngine(
    sql_database, obj_index.as_retriever(similarity_top_k=1)
)
response = query_engine.query("Which city has the highest population?")
print(response)
```

Now when we query the retriever query engine, it will retrieve the relevant table schema
and synthesize a SQL query and a response from the results of that query.
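Since the SQL statement is generated by the LLM, it is worth inspecting it rather than trusting the answer blindly. As in the Airbyte guide above, the generated SQL is available in the response metadata; a minimal sketch:

```python
# the natural-language answer
print(response)

# the SQL statement that was actually generated and executed
print(response.metadata["sql_query"])
```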
## Concluding Thoughts

This is it for now! We're constantly looking for ways to improve our structured data support.
If you have any questions, let us know in [our Discord](https://discord.gg/dGcwcsnxhU).

Relevant Resources:

- [Airbyte SQL Index Guide](./structured_data/Airbyte_demo.ipynb)

# How to Build a Chatbot

LlamaIndex serves as a bridge between your data and Large Language Models (LLMs), providing a toolkit that enables you to establish a query interface around your data for a variety of tasks, such as question-answering and summarization.

In this tutorial, we'll walk you through building a context-augmented chatbot using a [Data Agent](https://gpt-index.readthedocs.io/en/stable/core_modules/agent_modules/agents/root.html). This agent, powered by LLMs, is capable of intelligently executing tasks over your data. The end result is a chatbot agent equipped with a robust set of data interface tools provided by LlamaIndex to answer queries about your data.

**Note**: This tutorial builds upon initial work on creating a query interface over SEC 10-K filings - [check it out here](https://medium.com/@jerryjliu98/how-unstructured-and-llamaindex-can-help-bring-the-power-of-llms-to-your-own-data-3657d063e30d).

### Context

In this guide, we'll build a "10-K Chatbot" that uses raw UBER 10-K HTML filings from Dropbox.
Users can interact with the chatbot to ask questions related to the 10-K filings.\n\n### Preparation\n\n```python\nimport os\nimport openai\n\nos.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\nopenai.api_key = os.environ[\"OPENAI_API_KEY\"]\n\nimport nest_asyncio\n\nnest_asyncio.apply()\n```\n\n### Ingest Data\n\nLet's first download the raw 10-k files, from 2019-2022.\n\n```\n# NOTE: the code examples assume you're operating within a Jupyter notebook.\n# download files\n!mkdir data\n!wget \"https://www.dropbox.com/s/948jr9cfs7fgj99/UBER.zip?dl=1\" -O data/UBER.zip\n!unzip data/UBER.zip -d data\n```\n\nTo parse the HTML files into formatted text, we use the [Unstructured](https://github.com/Unstructured-IO/unstructured) library. Thanks to [LlamaHub](https://llamahub.ai/), we can directly integrate with Unstructured, allowing conversion of any text into a Document format that LlamaIndex can ingest.\n\nFirst we install the necessary packages:\n\n```\n!pip install llama-hub unstructured\n```\n\nThen we can use the `UnstructuredReader` to parse the HTML files into a list of `Document` objects.\n\n```python\nfrom llama_index.readers.file import UnstructuredReader\nfrom pathlib import Path\n\nyears = [2022, 2021, 2020, 2019]\n\nloader = UnstructuredReader()\ndoc_set = {}\nall_docs = []\nfor year in years:\n year_docs = loader.load_data(\n file=Path(f\"./data/UBER/UBER_{year}.html\"), split_documents=False\n )\n # insert year metadata into each year\n for d in year_docs:\n d.metadata = {\"year\": year}\n doc_set[year] = year_docs\n all_docs.extend(year_docs)\n```\n\n### Setting up Vector Indices for each year\n\nWe first setup a vector index for each year. Each vector index allows us\nto ask questions about the 10-K filing of a given year.\n\nWe build each index and save it to disk.\n\n```python\n# initialize simple vector indices\nfrom llama_index.core import VectorStoreIndex, StorageContext\nfrom llama_index.core import Settings\n\nSettings.chunk_size = 512\nindex_set = {}\nfor year in years:\n storage_context = StorageContext.from_defaults()\n cur_index = VectorStoreIndex.from_documents(\n doc_set[year],\n storage_context=storage_context,\n )\n index_set[year] = cur_index\n storage_context.persist(persist_dir=f\"./storage/{year}\")\n```\n\nTo load an index from disk, do the following\n\n```python\n# Load indices from disk\nfrom llama_index.core import load_index_from_storage\n\nindex_set = {}\nfor year in years:\n storage_context = StorageContext.from_defaults(\n persist_dir=f\"./storage/{year}\"\n )\n cur_index = load_index_from_storage(\n storage_context,\n )\n index_set[year] = cur_index\n```\n\n### Setting up a Sub Question Query Engine to Synthesize Answers Across 10-K Filings\n\nSince we have access to documents of 4 years, we may not only want to ask questions regarding the 10-K document of a given year, but ask questions that require analysis over all 10-K filings.\n\nTo address this, we can use a [Sub Question Query Engine](https://gpt-index.readthedocs.io/en/stable/examples/query_engine/sub_question_query_engine.html). It decomposes a query into subqueries, each answered by an individual vector index, and synthesizes the results to answer the overall query.\n\nLlamaIndex provides some wrappers around indices (and query engines) so that they can be used by query engines and agents. 
First we define a `QueryEngineTool` for each vector index.\nEach tool has a name and a description; these are what the LLM agent sees to decide which tool to choose.\n\n```python\nfrom llama_index.core.tools import QueryEngineTool, ToolMetadata\n\nindividual_query_engine_tools = [\n QueryEngineTool(\n query_engine=index_set[year].as_query_engine(),\n metadata=ToolMetadata(\n name=f\"vector_index_{year}\",\n description=f\"useful for when you want to answer queries about the {year} SEC 10-K for Uber\",\n ),\n )\n for year in years\n]\n```\n\nNow we can create the Sub Question Query Engine, which will allow us to synthesize answers across the 10-K filings. We pass in the `individual_query_engine_tools` we defined above, as well as an `llm` that will be used to run the subqueries.\n\n```python\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.core.query_engine import SubQuestionQueryEngine\n\nquery_engine = SubQuestionQueryEngine.from_defaults(\n query_engine_tools=individual_query_engine_tools,\n llm=OpenAI(model=\"gpt-3.5-turbo\"),\n)\n```\n\n### Setting up the Chatbot Agent\n\nWe use a LlamaIndex Data Agent to setup the outer chatbot agent, which has access to a set of Tools. Specifically, we will use an OpenAIAgent, that takes advantage of OpenAI API function calling. We want to use the separate Tools we defined previously for each index (corresponding to a given year), as well as a tool for the sub question query engine we defined above.\n\nFirst we define a `QueryEngineTool` for the sub question query engine:\n\n```python\nquery_engine_tool = QueryEngineTool(\n query_engine=query_engine,\n metadata=ToolMetadata(\n name=\"sub_question_query_engine\",\n description=\"useful for when you want to answer queries that require analyzing multiple SEC 10-K documents for Uber\",\n ),\n)\n```\n\nThen, we combine the Tools we defined above into a single list of tools for the agent:\n\n```python\ntools = individual_query_engine_tools + [query_engine_tool]\n```\n\nFinally, we call `OpenAIAgent.from_tools` to create the agent, passing in the list of tools we defined above.\n\n```python\nfrom llama_index.agent.openai import OpenAIAgent\n\nagent = OpenAIAgent.from_tools(tools, verbose=True)\n```\n\n### Testing the Agent\n\nWe can now test the agent with various queries.\n\nIf we test it with a simple \"hello\" query, the agent does not use any Tools.\n\n```python\nresponse = agent.chat(\"hi, i am bob\")\nprint(str(response))\n```\n\n```\nHello Bob! How can I assist you today?\n```\n\nIf we test it with a query regarding the 10-k of a given year, the agent will use\nthe relevant vector index Tool.\n\n```python\nresponse = agent.chat(\n \"What were some of the biggest risk factors in 2020 for Uber?\"\n)\nprint(str(response))\n```\n\n```\n=== Calling Function ===\nCalling function: vector_index_2020 with args: {\n \"input\": \"biggest risk factors\"\n}\nGot output: The biggest risk factors mentioned in the context are:\n1. The adverse impact of the COVID-19 pandemic and actions taken to mitigate it on the business.\n2. The potential reclassification of drivers as employees, workers, or quasi-employees instead of independent contractors.\n3. Intense competition in the mobility, delivery, and logistics industries, with low-cost alternatives and well-capitalized competitors.\n4. The need to lower fares or service fees and offer driver incentives and consumer discounts to remain competitive.\n5. Significant losses incurred and the uncertainty of achieving profitability.\n6. 
The risk of not attracting or maintaining a critical mass of platform users.\n7. Operational, compliance, and cultural challenges related to the workplace culture and forward-leaning approach.\n8. The potential negative impact of international investments and the challenges of conducting business in foreign countries.\n9. Risks associated with operational and compliance challenges, localization, laws and regulations, competition, social acceptance, technological compatibility, improper business practices, liability uncertainty, managing international operations, currency fluctuations, cash transactions, tax consequences, and payment fraud.\n========================\nSome of the biggest risk factors for Uber in 2020 were:\n\n1. The adverse impact of the COVID-19 pandemic and actions taken to mitigate it on the business.\n2. The potential reclassification of drivers as employees, workers, or quasi-employees instead of independent contractors.\n3. Intense competition in the mobility, delivery, and logistics industries, with low-cost alternatives and well-capitalized competitors.\n4. The need to lower fares or service fees and offer driver incentives and consumer discounts to remain competitive.\n5. Significant losses incurred and the uncertainty of achieving profitability.\n6. The risk of not attracting or maintaining a critical mass of platform users.\n7. Operational, compliance, and cultural challenges related to the workplace culture and forward-leaning approach.\n8. The potential negative impact of international investments and the challenges of conducting business in foreign countries.\n9. Risks associated with operational and compliance challenges, localization, laws and regulations, competition, social acceptance, technological compatibility, improper business practices, liability uncertainty, managing international operations, currency fluctuations, cash transactions, tax consequences, and payment fraud.\n\nThese risk factors highlight the challenges and uncertainties that Uber faced in 2020.\n```\n\nFinally, if we test it with a query to compare/contrast risk factors across years,\nthe agent will use the Sub Question Query Engine Tool.\n\n```python\ncross_query_str = \"Compare/contrast the risk factors described in the Uber 10-K across years. 
Give answer in bullet points.\"\n\nresponse = agent.chat(cross_query_str)\nprint(str(response))\n```\n\n```\n=== Calling Function ===\nCalling function: sub_question_query_engine with args: {\n \"input\": \"Compare/contrast the risk factors described in the Uber 10-K across years\"\n}\nGenerated 4 sub questions.\n[vector_index_2022] Q: What are the risk factors described in the 2022 SEC 10-K for Uber?\n[vector_index_2021] Q: What are the risk factors described in the 2021 SEC 10-K for Uber?\n[vector_index_2020] Q: What are the risk factors described in the 2020 SEC 10-K for Uber?\n[vector_index_2019] Q: What are the risk factors described in the 2019 SEC 10-K for Uber?\n[vector_index_2021] A: The risk factors described in the 2021 SEC 10-K for Uber include the adverse impact of the COVID-19 pandemic on their business, the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses incurred by the company, the importance of attracting and maintaining a critical mass of platform users, and the ongoing legal challenges regarding driver classification.\n[vector_index_2020] A: The risk factors described in the 2020 SEC 10-K for Uber include the adverse impact of the COVID-19 pandemic on their business, the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses and the uncertainty of achieving profitability, the importance of attracting and retaining a critical mass of drivers and users, and the challenges associated with their workplace culture and operational compliance.\n[vector_index_2022] A: The risk factors described in the 2022 SEC 10-K for Uber include the potential adverse effect on their business if drivers were classified as employees instead of independent contractors, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive in certain markets, the company's history of significant losses and the expectation of increased operating expenses in the future, and the potential impact on their platform if they are unable to attract or maintain a critical mass of drivers, consumers, merchants, shippers, and carriers.\n[vector_index_2019] A: The risk factors described in the 2019 SEC 10-K for Uber include the loss of their license to operate in London, the complexity of their business and operating model due to regulatory uncertainties, the potential for additional regulations for their other products in the Other Bets segment, the evolving laws and regulations regarding the development and deployment of autonomous vehicles, and the increasing number of data protection and privacy laws around the world. 
Additionally, there are legal proceedings, litigation, claims, and government investigations that Uber is involved in, which could impose a burden on management and employees and come with defense costs or unfavorable rulings.\nGot output: The risk factors described in the Uber 10-K reports across the years include the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses incurred by the company, the importance of attracting and maintaining a critical mass of platform users, and the ongoing legal challenges regarding driver classification. Additionally, there are specific risk factors mentioned in each year's report, such as the adverse impact of the COVID-19 pandemic in 2020 and 2021, the loss of their license to operate in London in 2019, and the evolving laws and regulations regarding autonomous vehicles in 2019. Overall, while there are some similarities in the risk factors mentioned, there are also specific factors that vary across the years.\n========================\n=== Calling Function ===\nCalling function: vector_index_2022 with args: {\n \"input\": \"risk factors\"\n}\nGot output: Some of the risk factors mentioned in the context include the potential adverse effect on the business if drivers were classified as employees instead of independent contractors, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive, the company's history of significant losses and the expectation of increased operating expenses, the impact of future pandemics or disease outbreaks on the business and financial results, and the potential harm to the business due to economic conditions and their effect on discretionary consumer spending.\n========================\n=== Calling Function ===\nCalling function: vector_index_2021 with args: {\n \"input\": \"risk factors\"\n}\nGot output: The COVID-19 pandemic and the impact of actions to mitigate the pandemic have adversely affected and may continue to adversely affect parts of our business. Our business would be adversely affected if Drivers were classified as employees, workers or quasi-employees instead of independent contractors. The mobility, delivery, and logistics industries are highly competitive, with well-established and low-cost alternatives that have been available for decades, low barriers to entry, low switching costs, and well-capitalized competitors in nearly every major geographic region. To remain competitive in certain markets, we have in the past lowered, and may continue to lower, fares or service fees, and we have in the past offered, and may continue to offer, significant Driver incentives and consumer discounts and promotions. We have incurred significant losses since inception, including in the United States and other major markets. We expect our operating expenses to increase significantly in the foreseeable future, and we may not achieve or maintain profitability. 
If we are unable to attract or maintain a critical mass of Drivers, consumers, merchants, shippers, and carriers, whether as a result of competition or other factors, our platform will become less appealing to platform users.\n========================\n=== Calling Function ===\nCalling function: vector_index_2020 with args: {\n \"input\": \"risk factors\"\n}\nGot output: The risk factors mentioned in the context include the adverse impact of the COVID-19 pandemic on the business, the potential reclassification of drivers as employees, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive, the company's history of significant losses and potential future expenses, the importance of attracting and maintaining a critical mass of platform users, and the operational and cultural challenges faced by the company.\n========================\n=== Calling Function ===\nCalling function: vector_index_2019 with args: {\n \"input\": \"risk factors\"\n}\nGot output: The risk factors mentioned in the context include competition with local companies, differing levels of social acceptance, technological compatibility issues, exposure to improper business practices, legal uncertainty, difficulties in managing international operations, fluctuations in currency exchange rates, regulations governing local currencies, tax consequences, financial accounting burdens, difficulties in implementing financial systems, import and export restrictions, political and economic instability, public health concerns, reduced protection for intellectual property rights, limited influence over minority-owned affiliates, and regulatory complexities. These risk factors could adversely affect the international operations, business, financial condition, and operating results of the company.\n========================\nHere is a comparison of the risk factors described in the Uber 10-K reports across years:\n\n2022 Risk Factors:\n- Potential adverse effect if drivers were classified as employees instead of independent contractors.\n- Highly competitive nature of the mobility, delivery, and logistics industries.\n- Need to lower fares or service fees to remain competitive.\n- History of significant losses and expectation of increased operating expenses.\n- Impact of future pandemics or disease outbreaks on the business and financial results.\n- Potential harm to the business due to economic conditions and their effect on discretionary consumer spending.\n\n2021 Risk Factors:\n- Adverse impact of the COVID-19 pandemic and actions to mitigate it on the business.\n- Potential reclassification of drivers as employees instead of independent contractors.\n- Highly competitive nature of the mobility, delivery, and logistics industries.\n- Need to lower fares or service fees and offer incentives to remain competitive.\n- History of significant losses and uncertainty of achieving profitability.\n- Importance of attracting and maintaining a critical mass of platform users.\n\n2020 Risk Factors:\n- Adverse impact of the COVID-19 pandemic on the business.\n- Potential reclassification of drivers as employees.\n- Highly competitive nature of the mobility, delivery, and logistics industries.\n- Need to lower fares or service fees to remain competitive.\n- History of significant losses and potential future expenses.\n- Importance of attracting and maintaining a critical mass of platform users.\n- Operational and cultural challenges faced by the company.\n\n2019 
Risk Factors:\n- Competition with local companies.\n- Differing levels of social acceptance.\n- Technological compatibility issues.\n- Exposure to improper business practices.\n- Legal uncertainty.\n- Difficulties in managing international operations.\n- Fluctuations in currency exchange rates.\n- Regulations governing local currencies.\n- Tax consequences.\n- Financial accounting burdens.\n- Difficulties in implementing financial systems.\n- Import and export restrictions.\n- Political and economic instability.\n- Public health concerns.\n- Reduced protection for intellectual property rights.\n- Limited influence over minority-owned affiliates.\n- Regulatory complexities.\n\nThese comparisons highlight both common and unique risk factors that Uber faced in different years.\n```\n\n### Setting up the Chatbot Loop\n\nNow that we have the chatbot setup, it only takes a few more steps to setup a basic interactive loop to chat with our SEC-augmented chatbot!\n\n```python\nagent = OpenAIAgent.from_tools(tools) # verbose=False by default\n\nwhile True:\n text_input = input(\"User: \")\n if text_input == \"exit\":\n break\n response = agent.chat(text_input)\n print(f\"Agent: {response}\")\n```\n\nHere's an example of the loop in action:\n\n```\nUser: What were some of the legal proceedings against Uber in 2022?\nAgent: In 2022, Uber faced several legal proceedings. Some of the notable ones include:\n\n1. Petition against Proposition 22: A petition was filed in California alleging that Proposition 22, which classifies app-based drivers as independent contractors, is unconstitutional.\n\n2. Lawsuit by Massachusetts Attorney General: The Massachusetts Attorney General filed a lawsuit against Uber, claiming that drivers should be classified as employees and entitled to protections under wage and labor laws.\n\n3. Allegations by New York Attorney General: The New York Attorney General made allegations against Uber regarding the misclassification of drivers and related employment violations.\n\n4. Swiss social security rulings: Swiss social security rulings classified Uber drivers as employees, which could have implications for Uber's operations in Switzerland.\n\n5. 
Class action lawsuits in Australia: Uber faced class action lawsuits in Australia, with allegations that the company conspired to harm participants in the taxi, hire-car, and limousine industries.\n\nIt's important to note that the outcomes of these legal proceedings are uncertain and may vary.\n\nUser:\n\n```\n\n### Notebook\n\nTake a look at our [corresponding notebook](../../../examples/agent/Chatbot_SEC.ipynb)."} {"tokens": 3667, "doc_id": "874edc9f-5575-4c23-a772-908223caa446", "name": "A Guide to Building a Full-Stack Web App with LLamaIndex", "url": "https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/apps/fullstack_app_guide", "retrieve_doc": true, "source": "llama_index", "content": "# A Guide to Building a Full-Stack Web App with LLamaIndex\n\nLlamaIndex is a python library, which means that integrating it with a full-stack web application will be a little different than what you might be used to.\n\nThis guide seeks to walk through the steps needed to create a basic API service written in python, and how this interacts with a TypeScript+React frontend.\n\nAll code examples here are available from the [llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react) in the flask_react folder.\n\nThe main technologies used in this guide are as follows:\n\n- python3.11\n- llama_index\n- flask\n- typescript\n- react\n\n## Flask Backend\n\nFor this guide, our backend will use a [Flask](https://flask.palletsprojects.com/en/2.2.x/) API server to communicate with our frontend code. If you prefer, you can also easily translate this to a [FastAPI](https://fastapi.tiangolo.com/) server, or any other python server library of your choice.\n\nSetting up a server using Flask is easy. You import the package, create the app object, and then create your endpoints. Let's create a basic skeleton for the server first:\n\n```python\nfrom flask import Flask\n\napp = Flask(__name__)\n\n\n@app.route(\"/\")\ndef home():\n return \"Hello World!\"\n\n\nif __name__ == \"__main__\":\n app.run(host=\"0.0.0.0\", port=5601)\n```\n\n_flask_demo.py_\n\nIf you run this file (`python flask_demo.py`), it will launch a server on port 5601. If you visit `http://localhost:5601/`, you will see the \"Hello World!\" text rendered in your browser. Nice!\n\nThe next step is deciding what functions we want to include in our server, and to start using LlamaIndex.\n\nTo keep things simple, the most basic operation we can provide is querying an existing index. 
Using the [paul graham essay](https://github.com/jerryjliu/llama_index/blob/main/examples/paul_graham_essay/data/paul_graham_essay.txt) from LlamaIndex, create a `documents` folder and download+place the essay text file inside of it.

### Basic Flask - Handling User Index Queries

Now, let's write some code to initialize our index:

```python
import os
from llama_index.core import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    StorageContext,
    load_index_from_storage,
)

# NOTE: for local testing only, do NOT deploy with your key hardcoded
os.environ["OPENAI_API_KEY"] = "your key here"

index = None


def initialize_index():
    global index
    index_dir = "./.index"
    if os.path.exists(index_dir):
        # load the previously persisted index from disk
        storage_context = StorageContext.from_defaults(persist_dir=index_dir)
        index = load_index_from_storage(storage_context)
    else:
        # build the index from scratch and persist it for next time
        documents = SimpleDirectoryReader("./documents").load_data()
        storage_context = StorageContext.from_defaults()
        index = VectorStoreIndex.from_documents(
            documents, storage_context=storage_context
        )
        storage_context.persist(index_dir)
```

This function will initialize our index. If we call this just before starting the flask server in the `main` function, then our index will be ready for user queries!

Our query endpoint will accept `GET` requests with the query text as a parameter. Here's what the full endpoint function will look like:

```python
from flask import request


@app.route("/query", methods=["GET"])
def query_index():
    global index
    query_text = request.args.get("text", None)
    if query_text is None:
        return (
            "No text found, please include a ?text=blah parameter in the URL",
            400,
        )
    query_engine = index.as_query_engine()
    response = query_engine.query(query_text)
    return str(response), 200
```

Now, we've introduced a few new concepts to our server:

- a new `/query` endpoint, defined by the function decorator
- a new import from flask, `request`, which is used to get parameters from the request
- if the `text` parameter is missing, then we return an error message and an appropriate HTTP response code
- otherwise, we query the index, and return the response as a string

A full query example that you can test in your browser might look something like this: `http://localhost:5601/query?text=what did the author do growing up` (once you press enter, the browser will convert the spaces into "%20" characters).

Things are looking pretty good! We now have a functional API. Using your own documents, you can easily provide an interface for any application to call the flask API and get answers to queries.
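You can also hit the endpoint from any HTTP client. A quick sketch using Python's `requests` library (assuming the server is running locally on port 5601):

```python
import requests

# query the running Flask server; requests handles URL-encoding the text
response = requests.get(
    "http://localhost:5601/query",
    params={"text": "What did the author do growing up?"},
)
print(response.status_code)
print(response.text)
```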
### Advanced Flask - Handling User Document Uploads

Things are looking pretty cool, but how can we take this a step further? What if we want to allow users to build their own indexes by uploading their own documents? Have no fear, Flask can handle it all :muscle:.

To let users upload documents, we have to take some extra precautions. Instead of querying an existing index, the index will become **mutable**. If you have many users adding to the same index, we need to think about how to handle concurrency. Our Flask server is threaded, which means multiple users can ping the server with requests which will be handled at the same time.

One option might be to create an index for each user or group, and store and fetch things from S3. But for this example, we will assume there is one locally stored index that users are interacting with.

To handle concurrent uploads and ensure sequential inserts into the index, we can use the `BaseManager` class from Python's `multiprocessing.managers` module to provide sequential access to the index using a separate server and locks. This sounds scary, but it's not so bad! We will just move all our index operations (initializing, querying, inserting) into the `BaseManager` "index_server", which will be called from our Flask server.

Here's a basic example of what our `index_server.py` will look like after we've moved our code:

```python
import os
from multiprocessing import Lock
from multiprocessing.managers import BaseManager
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Document

# NOTE: for local testing only, do NOT deploy with your key hardcoded
os.environ["OPENAI_API_KEY"] = "your key here"

index = None
lock = Lock()


def initialize_index():
    global index

    with lock:
        # same as before ...
        pass


def query_index(query_text):
    global index
    query_engine = index.as_query_engine()
    response = query_engine.query(query_text)
    return str(response)


if __name__ == "__main__":
    # init the global index
    print("initializing index...")
    initialize_index()

    # setup server
    # NOTE: you might want to handle the password in a less hardcoded way
    manager = BaseManager(("", 5602), b"password")
    manager.register("query_index", query_index)
    server = manager.get_server()

    print("starting server...")
    server.serve_forever()
```

_index_server.py_

So, we've moved our functions, introduced the `Lock` object which ensures sequential access to the global index, registered our single function in the server, and started the server on port 5602 with the password `password`.

Then, we can adjust our flask code as follows:

```python
from multiprocessing.managers import BaseManager
from flask import Flask, request

app = Flask(__name__)

# initialize manager connection
# NOTE: you might want to handle the password in a less hardcoded way
manager = BaseManager(("", 5602), b"password")
manager.register("query_index")
manager.connect()


@app.route("/query", methods=["GET"])
def query_index():
    query_text = request.args.get("text", None)
    if query_text is None:
        return (
            "No text found, please include a ?text=blah parameter in the URL",
            400,
        )
    response = manager.query_index(query_text)._getvalue()
    return str(response), 200


@app.route("/")
def home():
    return "Hello World!"


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5601)
```

_flask_demo.py_

The two main changes are connecting to our existing `BaseManager` server and registering the functions, as well as calling the function through the manager in the `/query` endpoint.

One special thing to note is that `BaseManager` servers don't return objects quite as we expect. To resolve the return value into its original object, we call the `_getvalue()` function.

If we allow users to upload their own documents, we should probably remove the Paul Graham essay from the documents folder, so let's do that first. Then, let's add an endpoint to upload files!
First, let's define our Flask endpoint function:\n\n```python\n...\nmanager.register(\"insert_into_index\")\n...\n\n\n@app.route(\"/uploadFile\", methods=[\"POST\"])\ndef upload_file():\n global manager\n if \"file\" not in request.files:\n return \"Please send a POST request with a file\", 400\n\n filepath = None\n try:\n uploaded_file = request.files[\"file\"]\n filename = secure_filename(uploaded_file.filename)\n filepath = os.path.join(\"documents\", os.path.basename(filename))\n uploaded_file.save(filepath)\n\n if request.form.get(\"filename_as_doc_id\", None) is not None:\n manager.insert_into_index(filepath, doc_id=filename)\n else:\n manager.insert_into_index(filepath)\n except Exception as e:\n # cleanup temp file\n if filepath is not None and os.path.exists(filepath):\n os.remove(filepath)\n return \"Error: {}\".format(str(e)), 500\n\n # cleanup temp file\n if filepath is not None and os.path.exists(filepath):\n os.remove(filepath)\n\n return \"File inserted!\", 200\n```\n\nNot too bad! You will notice that we write the file to disk. We could skip this if we only accept basic file formats like `txt` files, but written to disk we can take advantage of LlamaIndex's `SimpleDirectoryReader` to take care of a bunch of more complex file formats. Optionally, we also use a second `POST` argument to either use the filename as a doc_id or let LlamaIndex generate one for us. This will make more sense once we implement the frontend.\n\nWith these more complicated requests, I also suggest using a tool like [Postman](https://www.postman.com/downloads/?utm_source=postman-home). Examples of using postman to test our endpoints are in the [repository for this project](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react/postman_examples).\n\nLastly, you'll notice we added a new function to the manager. Let's implement that inside `index_server.py`:\n\n```python\ndef insert_into_index(doc_text, doc_id=None):\n global index\n document = SimpleDirectoryReader(input_files=[doc_text]).load_data()[0]\n if doc_id is not None:\n document.doc_id = doc_id\n\n with lock:\n index.insert(document)\n index.storage_context.persist()\n\n\n...\nmanager.register(\"insert_into_index\", insert_into_index)\n...\n```\n\nEasy! If we launch both the `index_server.py` and then the `flask_demo.py` python files, we have a Flask API server that can handle multiple requests to insert documents into a vector index and respond to user queries!\n\nTo support some functionality in the frontend, I've adjusted what some responses look like from the Flask API, as well as added some functionality to keep track of which documents are stored in the index (LlamaIndex doesn't currently support this in a user-friendly way, but we can augment it ourselves!). Lastly, I had to add CORS support to the server using the `Flask-cors` python package.\n\nCheck out the complete `flask_demo.py` and `index_server.py` scripts in the [repository](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react) for the final minor changes, the`requirements.txt` file, and a sample `Dockerfile` to help with deployment.\n\n## React Frontend\n\nGenerally, React and Typescript are one of the most popular libraries and languages for writing webapps today. 
## React Frontend

React and TypeScript are among the most popular libraries and languages for writing webapps today. This guide assumes you are familiar with how these tools work, because otherwise this guide would triple in length :smile:.

In the [repository](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react), the frontend code is organized inside of the `react_frontend` folder.

The most relevant part of the frontend will be the `src/apis` folder. This is where we make calls to the Flask server, supporting the following queries:

- `/query` -- make a query to the existing index
- `/uploadFile` -- upload a file to the flask server for insertion into the index
- `/getDocuments` -- list the current document titles and a portion of their texts

Using these three queries, we can build a robust frontend that allows users to upload and keep track of their files, query the index, and view the query response and information about which text nodes were used to form the response.

### fetchDocuments.tsx

This file contains the function to, you guessed it, fetch the list of current documents in the index. The code is as follows:

```typescript
export type Document = {
  id: string;
  text: string;
};

const fetchDocuments = async (): Promise<Document[]> => {
  const response = await fetch("http://localhost:5601/getDocuments", {
    mode: "cors",
  });

  if (!response.ok) {
    return [];
  }

  const documentList = (await response.json()) as Document[];
  return documentList;
};

export default fetchDocuments;
```

As you can see, we make a query to the Flask server (here, it assumes running on localhost). Notice that we need to include the `mode: 'cors'` option, as we are making an external request.

Then, we check if the response was ok, and if so, get the response json and return it. Here, the response json is a list of `Document` objects that are defined in the same file.

### queryIndex.tsx

This file sends the user query to the flask server, and gets the response back, as well as details about which nodes in our index provided the response.

```typescript
export type ResponseSources = {
  text: string;
  doc_id: string;
  start: number;
  end: number;
  similarity: number;
};

export type QueryResponse = {
  text: string;
  sources: ResponseSources[];
};

const queryIndex = async (query: string): Promise<QueryResponse> => {
  const queryURL = new URL("http://localhost:5601/query");
  queryURL.searchParams.append("text", query);

  const response = await fetch(queryURL, { mode: "cors" });
  if (!response.ok) {
    return { text: "Error in query", sources: [] };
  }

  const queryResponse = (await response.json()) as QueryResponse;

  return queryResponse;
};

export default queryIndex;
```

This is similar to the `fetchDocuments.tsx` file, with the main difference being we include the query text as a parameter in the URL. Then, we check if the response is ok and return it with the appropriate TypeScript type.

### insertDocument.tsx

Probably the most complex API call is uploading a document.
The function here accepts a file object and constructs a `POST` request using `FormData`.

The actual response text is not used in the app, but could be utilized to provide some user feedback on whether the file uploaded successfully.

```typescript
const insertDocument = async (file: File) => {
  const formData = new FormData();
  formData.append("file", file);
  formData.append("filename_as_doc_id", "true");

  const response = await fetch("http://localhost:5601/uploadFile", {
    mode: "cors",
    method: "POST",
    body: formData,
  });

  const responseText = await response.text();
  return responseText;
};

export default insertDocument;
```

### All the Other Frontend Good-ness

And that pretty much wraps up the frontend portion! The rest of the React frontend code is some pretty basic React components, and my best attempt to make it look at least a little nice :smile:.

I encourage you to read the rest of the [codebase](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react/react_frontend) and submit any PRs for improvements!

## Conclusion

This guide has covered a ton of information. We went from a basic "Hello World" Flask server written in Python, to a fully functioning LlamaIndex-powered backend, and showed how to connect that to a frontend application.

As you can see, we can easily augment and wrap the services provided by LlamaIndex (like the little external document tracker) to help provide a good user experience on the frontend.

You could take this and add many features (multi-index/user support, saving objects into S3, adding a Pinecone vector server, etc.). And when you build an app after reading this, be sure to share the final result in the Discord! Good Luck! :muscle:"}

{"tokens": 182, "doc_id": "d4157c1a-a595-4350-9ba4-63e0e92e2984", "name": "Full-Stack Web Application", "url": "https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/apps/index", "retrieve_doc": true, "source": "llama_index", "content": "# Full-Stack Web Application

LlamaIndex can be integrated into a downstream full-stack web application. It can be used in a backend server (such as Flask), packaged into a Docker container, and/or directly used in a framework such as Streamlit.

We provide tutorials and resources to help you get started in this area:

- [Fullstack Application Guide](./fullstack_app_guide.md) shows you how to build an app with LlamaIndex as an API and a TypeScript+React frontend
- [Fullstack Application with Delphic](./fullstack_with_delphic.md) walks you through using LlamaIndex with a production-ready web app starter template called Delphic.
- The [LlamaIndex Starter Pack](https://github.com/logan-markewich/llama_index_starter_pack) provides very basic Flask, Streamlit, and Docker examples for LlamaIndex."}

{"tokens": 7293, "doc_id": "d380d740-f28f-467b-ae53-b9b4e17404fe", "name": "A Guide to Building a Full-Stack LlamaIndex Web App with Delphic", "url": "https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/apps/fullstack_with_delphic", "retrieve_doc": true, "source": "llama_index", "content": "# A Guide to Building a Full-Stack LlamaIndex Web App with Delphic

This guide seeks to walk you through using LlamaIndex with a production-ready web app starter template called [Delphic](https://github.com/JSv4/Delphic).
All code examples here are available from the [Delphic](https://github.com/JSv4/Delphic) repo.

## What We're Building

Here's a quick demo of the out-of-the-box functionality of Delphic:

https://user-images.githubusercontent.com/5049984/233236432-aa4980b6-a510-42f3-887a-81485c9644e6.mp4

## Architectural Overview

Delphic leverages the LlamaIndex Python library to let users create their own document collections they can then query in a responsive frontend.

We chose a stack that provides a responsive, robust mix of technologies that can (1) orchestrate complex Python processing tasks while providing (2) a modern, responsive frontend and (3) a secure backend to build additional functionality upon.

The core libraries are:

1. [Django](https://www.djangoproject.com/)
2. [Django Channels](https://channels.readthedocs.io/en/stable/)
3. [Django Ninja](https://django-ninja.rest-framework.com/)
4. [Redis](https://redis.io/)
5. [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html)
6. [LlamaIndex](https://gpt-index.readthedocs.io/en/latest/)
7. [Langchain](https://python.langchain.com/en/latest/index.html)
8. [React](https://github.com/facebook/react)
9. Docker & Docker Compose

Thanks to this modern stack built on the super stable Django web framework, the starter Delphic app boasts a streamlined developer experience, built-in authentication and user management, asynchronous vector store processing, and web-socket-based query connections for a responsive UI. In addition, our frontend is built with TypeScript and is based on MUI React for a responsive and modern user interface.

## System Requirements

Celery doesn't work on Windows. It may be deployable with Windows Subsystem for Linux, but configuring that is beyond the scope of this tutorial. For this reason, we recommend you only follow this tutorial if you're running Linux or OSX. You will need Docker and Docker Compose installed to deploy the application. Local development will require node version manager (nvm).

## Django Backend

### Project Directory Overview

The Delphic application has a structured backend directory organization that follows common Django project conventions. From the repo root, in the `./delphic` subfolder, the main folders are:

1. `contrib`: This directory contains custom modifications or additions to Django's built-in `contrib` apps.
2. `indexes`: This directory contains the core functionality related to document indexing and LLM integration. It includes:

- `admin.py`: Django admin configuration for the app
- `apps.py`: Application configuration
- `models.py`: Contains the app's database models
- `migrations`: Directory containing database schema migrations for the app
- `signals.py`: Defines any signals for the app
- `tests.py`: Unit tests for the app

3. `tasks`: This directory contains tasks for asynchronous processing using Celery. The `index_tasks.py` file includes the tasks for creating vector indexes.
4. `users`: This directory is dedicated to user management.
5. `utils`: This directory contains utility modules and functions that are used across the application, such as custom storage backends, path helpers, and collection-related utilities.

### Database Models

The Delphic application has two core models: `Document` and `Collection`. These models represent the central entities the application deals with when indexing and querying documents using LLMs.
They're defined in [`./delphic/indexes/models.py`](https://github.com/JSv4/Delphic/blob/main/delphic/indexes/models.py).

1. `Collection`:

- `api_key`: A foreign key that links a collection to an API key. This helps associate jobs with the source API key.
- `title`: A character field that provides a title for the collection.
- `description`: A text field that provides a description of the collection.
- `status`: A character field that stores the processing status of the collection, utilizing the `CollectionStatus` enumeration.
- `created`: A datetime field that records when the collection was created.
- `modified`: A datetime field that records the last modification time of the collection.
- `model`: A file field that stores the model associated with the collection.
- `processing`: A boolean field that indicates if the collection is currently being processed.

2. `Document`:

- `collection`: A foreign key that links a document to a collection. This represents the relationship between documents and collections.
- `file`: A file field that stores the uploaded document file.
- `description`: A text field that provides a description of the document.
- `created`: A datetime field that records when the document was created.
- `modified`: A datetime field that records the last modification time of the document.

These models provide a solid foundation for collections of documents and the indexes created from them with LlamaIndex.
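In condensed form, the two models look roughly like the sketch below. Note this is an illustrative reconstruction from the field descriptions above, with field options (max lengths, upload paths, the exact foreign key target, etc.) elided; see `delphic/indexes/models.py` in the repo for the real definitions.

```python
# Illustrative sketch only - the real field options and foreign key
# targets live in delphic/indexes/models.py in the Delphic repo.
from django.db import models


class Collection(models.Model):
    api_key = models.ForeignKey(
        "api.APIKey", null=True, blank=True, on_delete=models.SET_NULL
    )  # hypothetical target model name
    title = models.CharField(max_length=255)
    description = models.TextField()
    status = models.CharField(max_length=32)  # a CollectionStatus value
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)
    model = models.FileField(upload_to="collection_models/")
    processing = models.BooleanField(default=False)


class Document(models.Model):
    collection = models.ForeignKey(Collection, on_delete=models.CASCADE)
    file = models.FileField(upload_to="documents/")
    description = models.TextField()
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)
```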
### Django Ninja API

Django Ninja is a web framework for building APIs with Django and Python 3.7+ type hints. It provides a simple, intuitive, and expressive way of defining API endpoints, leveraging Python's type hints to automatically generate input validation, serialization, and documentation.

In the Delphic repo, the [`./config/api/endpoints.py`](https://github.com/JSv4/Delphic/blob/main/config/api/endpoints.py) file contains the API routes and logic for the API endpoints. Now, let's briefly address the purpose of each endpoint in the `endpoints.py` file:

1. `/heartbeat`: A simple GET endpoint to check if the API is up and running. Returns `True` if the API is accessible. This is helpful for Kubernetes setups that expect to be able to query your container to ensure it's up and running.

2. `/collections/create`: A POST endpoint to create a new `Collection`. Accepts form parameters such as `title`, `description`, and a list of `files`. Creates a new `Collection` and `Document` instances for each file, and schedules a Celery task to create an index.

```python
@collections_router.post("/create")
async def create_collection(
    request,
    title: str = Form(...),
    description: str = Form(...),
    files: list[UploadedFile] = File(...),
):
    key = None if getattr(request, "auth", None) is None else request.auth
    if key is not None:
        key = await key

    collection_instance = Collection(
        api_key=key,
        title=title,
        description=description,
        status=CollectionStatusEnum.QUEUED,
    )

    await sync_to_async(collection_instance.save)()

    for uploaded_file in files:
        doc_data = uploaded_file.file.read()
        doc_file = ContentFile(doc_data, uploaded_file.name)
        document = Document(collection=collection_instance, file=doc_file)
        await sync_to_async(document.save)()

    create_index.si(collection_instance.id).apply_async()

    return await sync_to_async(CollectionModelSchema)(...)
```

3. `/collections/query`: A POST endpoint to query a document collection using the LLM. Accepts a JSON payload containing `collection_id` and `query_str`, and returns a response generated by querying the collection. We don't actually use this endpoint in our chat GUI (we use a websocket - see below), but you could build an app that integrates with this REST endpoint to query a specific collection.

```python
@collections_router.post(
    "/query",
    response=CollectionQueryOutput,
    summary="Ask a question of a document collection",
)
def query_collection_view(
    request: HttpRequest, query_input: CollectionQueryInput
):
    collection_id = query_input.collection_id
    query_str = query_input.query_str
    response = query_collection(collection_id, query_str)
    return {"response": response}
```

4. `/collections/available`: A GET endpoint that returns a list of all collections created with the user's API key. The output is serialized using the `CollectionModelSchema`.

```python
@collections_router.get(
    "/available",
    response=list[CollectionModelSchema],
    summary="Get a list of all of the collections created with my api_key",
)
async def get_my_collections_view(request: HttpRequest):
    key = None if getattr(request, "auth", None) is None else request.auth
    if key is not None:
        key = await key

    collections = Collection.objects.filter(api_key=key)

    return [{...} async for collection in collections]
```

5. `/collections/{collection_id}/add_file`: A POST endpoint to add a file to an existing collection. Accepts a `collection_id` path parameter, and form parameters such as `file` and `description`. Adds the file as a `Document` instance associated with the specified collection.

```python
@collections_router.post(
    "/{collection_id}/add_file", summary="Add a file to a collection"
)
async def add_file_to_collection(
    request,
    collection_id: int,
    file: UploadedFile = File(...),
    description: str = Form(...),
):
    collection = await sync_to_async(Collection.objects.get)(id=collection_id)
```
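As a quick illustration of consuming the REST API from a script, the sketch below POSTs to the query endpoint described in item 3. The `/api` URL prefix, port, and auth header format here are assumptions for illustration only; check the Delphic repo's API configuration for the actual mount point and authentication scheme.

```python
# Hypothetical smoke test for the /collections/query REST endpoint.
# The URL prefix and auth header are assumptions - consult the repo's
# config/api setup for the real values.
import requests

resp = requests.post(
    "http://localhost:8000/api/collections/query",
    json={"collection_id": 1, "query_str": "What are these documents about?"},
    headers={"Authorization": "your-api-key"},
)
print(resp.json()["response"])
```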
### Intro to Websockets

WebSockets are a communication protocol that enables bidirectional and full-duplex communication between a client and a server over a single, long-lived connection. The WebSocket protocol is designed to work over the same ports as HTTP and HTTPS (ports 80 and 443, respectively) and uses a similar handshake process to establish a connection. Once the connection is established, data can be sent in both directions as "frames" without the need to reestablish the connection each time, unlike traditional HTTP requests.

There are several reasons to use WebSockets, particularly when working with code that takes a long time to load into memory but is quick to run once loaded:

1. **Performance**: WebSockets eliminate the overhead associated with opening and closing multiple connections for each request, reducing latency.
2. **Efficiency**: WebSockets allow for real-time communication without the need for polling, resulting in more efficient use of resources and better responsiveness.
3. **Scalability**: WebSockets can handle a large number of simultaneous connections, making it ideal for applications that require high concurrency.

In the case of the Delphic application, using WebSockets makes sense as the LLMs can be expensive to load into memory. By establishing a WebSocket connection, the LLM can remain loaded in memory, allowing subsequent requests to be processed quickly without the need to reload the model each time.

The ASGI configuration file [`./config/asgi.py`](https://github.com/JSv4/Delphic/blob/main/config/asgi.py) defines how the application should handle incoming connections, using the Django Channels `ProtocolTypeRouter` to route connections based on their protocol type. In this case, we have two protocol types: "http" and "websocket".

The "http" protocol type uses the standard Django ASGI application to handle HTTP requests, while the "websocket" protocol type uses a custom `TokenAuthMiddleware` to authenticate WebSocket connections. The `URLRouter` within the `TokenAuthMiddleware` defines a URL pattern for the `CollectionQueryConsumer`, which is responsible for handling WebSocket connections related to querying document collections.

```python
application = ProtocolTypeRouter(
    {
        "http": get_asgi_application(),
        "websocket": TokenAuthMiddleware(
            URLRouter(
                [
                    re_path(
                        r"ws/collections/(?P<collection_id>\w+)/query/$",
                        CollectionQueryConsumer.as_asgi(),
                    ),
                ]
            )
        ),
    }
)
```

This configuration allows clients to establish WebSocket connections with the Delphic application to efficiently query document collections using the LLMs, without the need to reload the models for each request.

### Websocket Handler

The `CollectionQueryConsumer` class in [`config/api/websockets/queries.py`](https://github.com/JSv4/Delphic/blob/main/config/api/websockets/queries.py) is responsible for handling WebSocket connections related to querying document collections. It inherits from the `AsyncWebsocketConsumer` class provided by Django Channels.

The `CollectionQueryConsumer` class has three main methods:

1. `connect`: Called when a WebSocket is handshaking as part of the connection process.
2. `disconnect`: Called when a WebSocket closes for any reason.
3. `receive`: Called when the server receives a message from the WebSocket.

#### Websocket connect listener

The `connect` method is responsible for establishing the connection, extracting the collection ID from the connection path, loading the collection model, and accepting the connection.

```python
async def connect(self):
    try:
        self.collection_id = extract_connection_id(self.scope["path"])
        self.index = await load_collection_model(self.collection_id)
        await self.accept()

    except ValueError as e:
        await self.accept()
        await self.close(code=4000)
    except Exception as e:
        pass
```

#### Websocket disconnect listener

The `disconnect` method is empty in this case, as there are no additional actions to be taken when the WebSocket is closed.

#### Websocket receive listener

The `receive` method is responsible for processing incoming messages from the WebSocket. It takes the incoming message, decodes it, and then queries the loaded collection model using the provided query.
The response is then formatted as a\nmarkdown string and sent back to the client over the WebSocket connection.\n\n```python\nasync def receive(self, text_data):\n text_data_json = json.loads(text_data)\n\n if self.index is not None:\n query_str = text_data_json[\"query\"]\n modified_query_str = f\"Please return a nicely formatted markdown string to this request:\\n\\n{query_str}\"\n query_engine = self.index.as_query_engine()\n response = query_engine.query(modified_query_str)\n\n markdown_response = f\"## Response\\n\\n{response}\\n\\n\"\n if response.source_nodes:\n markdown_sources = (\n f\"## Sources\\n\\n{response.get_formatted_sources()}\"\n )\n else:\n markdown_sources = \"\"\n\n formatted_response = f\"{markdown_response}{markdown_sources}\"\n\n await self.send(json.dumps({\"response\": formatted_response}, indent=4))\n else:\n await self.send(\n json.dumps(\n {\"error\": \"No index loaded for this connection.\"}, indent=4\n )\n )\n```\n\nTo load the collection model, the `load_collection_model` function is used, which can be found\nin [`delphic/utils/collections.py`](https://github.com/JSv4/Delphic/blob/main/delphic/utils/collections.py). This\nfunction retrieves the collection object with the given collection ID, checks if a JSON file for the collection model\nexists, and if not, creates one. Then, it sets up the `LLM` and `Settings` before loading\nthe `VectorStoreIndex` using the cache file.\n\n```python\nfrom llama_index.core import Settings\n\n\nasync def load_collection_model(collection_id: str | int) -> VectorStoreIndex:\n \"\"\"\n Load the Collection model from cache or the database, and return the index.\n\n Args:\n collection_id (Union[str, int]): The ID of the Collection model instance.\n\n Returns:\n VectorStoreIndex: The loaded index.\n\n This function performs the following steps:\n 1. Retrieve the Collection object with the given collection_id.\n 2. Check if a JSON file with the name '/cache/model_{collection_id}.json' exists.\n 3. If the JSON file doesn't exist, load the JSON from the Collection.model FileField and save it to\n '/cache/model_{collection_id}.json'.\n 4. 
Call VectorStoreIndex.load_from_disk with the cache_file_path.
    """
    # Retrieve the Collection object
    collection = await Collection.objects.aget(id=collection_id)
    logger.info(f"load_collection_model() - loaded collection {collection_id}")

    # Make sure there's a model
    if collection.model.name:
        logger.info("load_collection_model() - Setup local json index file")

        # Check if the JSON file exists
        cache_dir = Path(settings.BASE_DIR) / "cache"
        cache_file_path = cache_dir / f"model_{collection_id}.json"
        if not cache_file_path.exists():
            cache_dir.mkdir(parents=True, exist_ok=True)
            with collection.model.open("rb") as model_file:
                with cache_file_path.open(
                    "w+", encoding="utf-8"
                ) as cache_file:
                    cache_file.write(model_file.read().decode("utf-8"))

        # define LLM
        logger.info(
            f"load_collection_model() - Setup Settings with tokens {settings.MAX_TOKENS} and "
            f"model {settings.MODEL_NAME}"
        )
        Settings.llm = OpenAI(
            temperature=0, model="gpt-3.5-turbo", max_tokens=512
        )

        # Call VectorStoreIndex.load_from_disk
        logger.info("load_collection_model() - Load llama index")
        index = VectorStoreIndex.load_from_disk(
            cache_file_path,
        )
        logger.info(
            "load_collection_model() - Llamaindex loaded and ready for query..."
        )

    else:
        logger.error(
            f"load_collection_model() - collection {collection_id} has no model!"
        )
        raise ValueError("No model exists for this collection!")

    return index
```
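With the consumer and model loader in place, any WebSocket-capable client can query a collection. Here's a minimal Python sketch using the third-party `websockets` package (`pip install websockets`); the collection ID and auth token are placeholders you'd replace with real values from your instance:

```python
# Minimal sketch of querying a Delphic collection over the websocket.
# Assumes a local instance; COLLECTION_ID and AUTH_TOKEN are placeholders.
import asyncio
import json

import websockets

COLLECTION_ID = 1
AUTH_TOKEN = "your-token-here"


async def ask(query: str) -> str:
    uri = (
        f"ws://localhost:8000/ws/collections/{COLLECTION_ID}/query/"
        f"?token={AUTH_TOKEN}"
    )
    async with websockets.connect(uri) as ws:
        # the consumer expects a JSON payload with a "query" key
        await ws.send(json.dumps({"query": query}))
        reply = json.loads(await ws.recv())
        return reply.get("response") or reply.get("error", "")


print(asyncio.run(ask("What are these documents about?")))
```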
## React Frontend

### Overview

We chose to use TypeScript, React and Material-UI (MUI) for the Delphic project's frontend for a couple of reasons. First, as the most popular component library (MUI) for the most popular frontend framework (React), this choice makes this project accessible to a huge community of developers. Second, React is, at this point, a stable and generally well-liked framework that delivers valuable abstractions in the form of its virtual DOM and is, in our opinion, pretty easy to learn, again making it accessible.

### Frontend Project Structure

The frontend can be found in the [`/frontend`](https://github.com/JSv4/Delphic/tree/main/frontend) directory of the repo, with the React-related components being in `/frontend/src`. You'll notice there is a Dockerfile in the `frontend` directory and several folders and files related to configuring our frontend web server — [nginx](https://www.nginx.com/).

The `/frontend/src/App.tsx` file serves as the entry point of the application. It defines the main components, such as the login form, the drawer layout, and the collection create modal. The main components are conditionally rendered based on whether the user is logged in and has an authentication token.

The DrawerLayout2 component is defined in the `DrawerLayour2.tsx` file. This component manages the layout of the application and provides the navigation and main content areas.

Since the application is relatively simple, we can get away with not using a complex state management solution like Redux and just use React's useState hooks.

### Grabbing Collections from the Backend

The collections available to the logged-in user are retrieved and displayed in the DrawerLayout2 component. The process can be broken down into the following steps:

1. Initializing state variables:

```tsx
const [collections, setCollections] = useState([]);
const [loading, setLoading] = useState(true);
```

Here, we initialize two state variables: `collections` to store the list of collections and `loading` to track whether the collections are being fetched.

2. Collections are fetched for the logged-in user with the `fetchCollections()` function:

```tsx
const fetchCollections = async () => {
  try {
    const accessToken = localStorage.getItem("accessToken");
    if (accessToken) {
      const response = await getMyCollections(accessToken);
      setCollections(response.data);
    }
  } catch (error) {
    console.error(error);
  } finally {
    setLoading(false);
  }
};
```

The `fetchCollections` function retrieves the collections for the logged-in user by calling the `getMyCollections` API function with the user's access token. It then updates the `collections` state with the retrieved data and sets the `loading` state to `false` to indicate that fetching is complete.

### Displaying Collections

The latest collections are displayed in the drawer like this:

```tsx
<List>
  {collections.map((collection) => (
    <div key={collection.id}>
      <ListItem disablePadding>
        <ListItemButton
          disabled={
            collection.status !== CollectionStatus.COMPLETE ||
            !collection.has_model
          }
          onClick={() => handleCollectionClick(collection)}
          selected={
            selectedCollection && selectedCollection.id === collection.id
          }
        >
          <ListItemText primary={collection.title} />
          {collection.status === CollectionStatus.RUNNING ? (
            <CircularProgress
              size={24}
              style={{ position: "absolute", right: 16 }}
            />
          ) : null}
        </ListItemButton>
      </ListItem>
    </div>
  ))}
</List>
```

You'll notice that the `disabled` property of a collection's `ListItemButton` is set based on whether the collection's status is not `CollectionStatus.COMPLETE` or the collection does not have a model (`!collection.has_model`). If either of these conditions is true, the button is disabled, preventing users from selecting an incomplete or model-less collection. Where the CollectionStatus is RUNNING, we also show a loading wheel over the button.

In a separate `useEffect` hook, we check if any collection in the `collections` state has a status of `CollectionStatus.RUNNING` or `CollectionStatus.QUEUED`. If so, we set up an interval to repeatedly call the `fetchCollections` function every 15 seconds (15,000 milliseconds) to update the collection statuses. This way, the application periodically checks for completed collections, and the UI is updated accordingly when the processing is done.

```tsx
useEffect(() => {
  let interval: NodeJS.Timeout;
  if (
    collections.some(
      (collection) =>
        collection.status === CollectionStatus.RUNNING ||
        collection.status === CollectionStatus.QUEUED
    )
  ) {
    interval = setInterval(() => {
      fetchCollections();
    }, 15000);
  }
  return () => clearInterval(interval);
}, [collections]);
```

### Chat View Component

The `ChatView` component in `frontend/src/chat/ChatView.tsx` is responsible for handling and displaying a chat interface for a user to interact with a collection. The component establishes a WebSocket connection to communicate in real-time with the server, sending and receiving messages.

Key features of the `ChatView` component include:

1. Establishing and managing the WebSocket connection with the server.
2. Displaying messages from the user and the server in a chat-like format.
3. Handling user input to send messages to the server.
4. Updating the messages state and UI based on received messages from the server.
5. Displaying connection status and errors, such as loading messages, connecting to the server, or encountering errors while loading a collection.

Together, all of this allows users to interact with their selected collection with a very smooth, low-latency experience.

#### Chat Websocket Client

The WebSocket connection in the `ChatView` component is used to establish real-time communication between the client and the server. The WebSocket connection is set up and managed in the `ChatView` component as follows:

First, we want to initialize the WebSocket reference:

```tsx
const websocket = useRef<WebSocket | null>(null);
```

A `websocket` reference is created using `useRef`, which holds the WebSocket object that will be used for communication. `useRef` is a hook in React that allows you to create a mutable reference object that persists across renders. It is particularly useful when you need to hold a reference to a mutable object, such as a WebSocket connection, without causing unnecessary re-renders.

In the `ChatView` component, the WebSocket connection needs to be established and maintained throughout the lifetime of the component, and it should not trigger a re-render when the connection state changes. By using `useRef`, you ensure that the WebSocket connection is kept as a reference, and the component only re-renders when there are actual state changes, such as updating messages or displaying errors.

The `setupWebsocket` function is responsible for establishing the WebSocket connection and setting up event handlers to handle different WebSocket events.

Overall, the setupWebsocket function looks like this:

```tsx
const setupWebsocket = () => {
  setConnecting(true);
  // Here, a new WebSocket object is created using the specified URL, which includes the
  // selected collection's ID and the user's authentication token.

  websocket.current = new WebSocket(
    `ws://localhost:8000/ws/collections/${selectedCollection.id}/query/?token=${authToken}`,
  );

  websocket.current.onopen = (event) => {
    //...
  };

  websocket.current.onmessage = (event) => {
    //...
  };

  websocket.current.onclose = (event) => {
    //...
  };

  websocket.current.onerror = (event) => {
    //...
  };

  return () => {
    websocket.current?.close();
  };
};
```

Notice in a bunch of places we trigger updates to the GUI based on the information from the web socket client.

When the component first opens and we try to establish a connection, the `onopen` listener is triggered. In the callback, the component updates the states to reflect that the connection is established, any previous errors are cleared, and no messages are awaiting responses:

```tsx
websocket.current.onopen = (event) => {
  setError(false);
  setConnecting(false);
  setAwaitingMessage(false);

  console.log("WebSocket connected:", event);
};
```

`onmessage` is triggered when a new message is received from the server through the WebSocket connection.
In the callback, the received data is parsed and the `messages` state is updated with the new message from the server:

```tsx
websocket.current.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log("WebSocket message received:", data);
  setAwaitingMessage(false);

  if (data.response) {
    // Update the messages state with the new message from the server
    setMessages((prevMessages) => [
      ...prevMessages,
      {
        sender_id: "server",
        message: data.response,
        timestamp: new Date().toLocaleTimeString(),
      },
    ]);
  }
};
```

`onclose` is triggered when the WebSocket connection is closed. In the callback, the component checks for a specific close code (`4000`) to display a warning toast and update the component states accordingly. It also logs the close event:

```tsx
websocket.current.onclose = (event) => {
  if (event.code === 4000) {
    toast.warning(
      "Selected collection's model is unavailable. Was it created properly?",
    );
    setError(true);
    setConnecting(false);
    setAwaitingMessage(false);
  }
  console.log("WebSocket closed:", event);
};
```

Finally, `onerror` is triggered when an error occurs with the WebSocket connection. In the callback, the component updates the states to reflect the error and logs the error event:

```tsx
websocket.current.onerror = (event) => {
  setError(true);
  setConnecting(false);
  setAwaitingMessage(false);

  console.error("WebSocket error:", event);
};
```

#### Rendering our Chat Messages

In the `ChatView` component, the layout is determined using CSS styling and Material-UI components. The main layout consists of a container with a `flex` display and a column-oriented `flexDirection`. This ensures that the content within the container is arranged vertically.

There are three primary sections within the layout:

1. The chat messages area: This section takes up most of the available space and displays a list of messages exchanged between the user and the server. It has an overflow-y set to 'auto', which allows scrolling when the content overflows the available space. The messages are rendered using the `ChatMessage` component for each message and a `ChatMessageLoading` component to show the loading state while waiting for a server response.
2. The divider: A Material-UI `Divider` component is used to separate the chat messages area from the input area, creating a clear visual distinction between the two sections.
3. The input area: This section is located at the bottom and allows the user to type and send messages. It contains a `TextField` component from Material-UI, which is set to accept multiline input with a maximum of 2 rows. The input area also includes a `Button` component to send the message. The user can either click the "Send" button or press "Enter" on their keyboard to send the message.

The user inputs accepted in the `ChatView` component are text messages that the user types in the `TextField`. The component processes these text inputs and sends them to the server through the WebSocket connection.

## Deployment

### Prerequisites

To deploy the app, you're going to need Docker and Docker Compose installed.
If you're on Ubuntu or another common Linux distribution, DigitalOcean has a [great Docker tutorial](https://www.digitalocean.com/community/tutorial_collections/how-to-install-and-use-docker) and another great tutorial for [Docker Compose](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-20-04) you can follow. If those don't work for you, try the [official docker documentation.](https://docs.docker.com/engine/install/)

### Build and Deploy

The project is based on django-cookiecutter, and it's pretty easy to get it deployed on a VM and configured to serve HTTPS traffic for a specific domain. The configuration is somewhat involved, however, not because of this project, but because configuring your certificates, DNS, etc. is just a fairly involved topic.

For the purposes of this guide, let's just get running locally. Perhaps we'll release a guide on production deployment. In the meantime, check out the [Django Cookiecutter project docs](https://cookiecutter-django.readthedocs.io/en/latest/deployment-with-docker.html) for starters.

This guide assumes your goal is to get the application up and running for use. If you want to develop, most likely you won't want to launch the compose stack with the `--profile fullstack` flag and will instead want to launch the React frontend using the Node development server.

To deploy, first clone the repo:

```commandline
git clone https://github.com/yourusername/delphic.git
```

Change into the project directory:

```commandline
cd delphic
```

Copy the sample environment files:

```commandline
mkdir -p ./.envs/.local/
cp -a ./docs/sample_envs/local/.frontend ./frontend
cp -a ./docs/sample_envs/local/.django ./.envs/.local
cp -a ./docs/sample_envs/local/.postgres ./.envs/.local
```

Edit the `.django` and `.postgres` configuration files to include your OpenAI API key and set a unique password for your database user. You can also set the response token limit in the .django file or switch which OpenAI model you want to use. GPT-4 is supported, assuming you're authorized to access it.

Build the docker compose stack with the `--profile fullstack` flag:

```commandline
sudo docker-compose --profile fullstack -f local.yml build
```

The fullstack profile instructs compose to build a docker container from the frontend folder and this will be launched along with all of the needed, backend containers. It takes a long time to build a production React container, however, so we don't recommend you develop this way. Follow the [instructions in the project readme.md](https://github.com/JSv4/Delphic#development) for development environment setup instructions.

Finally, bring up the application:

```commandline
sudo docker-compose -f local.yml up
```

Now, visit `localhost:3000` in your browser to see the frontend, and use the Delphic application locally.

## Using the Application

### Setup Users

In order to actually use the application (at the moment, we intend to make it possible to share certain models with unauthenticated users), you need a login. You can use either a superuser or non-superuser. In either case, someone needs to first create a superuser using the console:

**Why set up a Django superuser?** A Django superuser has all the permissions in the application and can manage all aspects of the system, including creating, modifying, and deleting users, collections, and other data.
Setting up a superuser allows you to fully control and manage the application.

**How to create a Django superuser:**

1. Run the following command to create a superuser:

```commandline
sudo docker-compose -f local.yml run django python manage.py createsuperuser
```

2. You will be prompted to provide a username, email address, and password for the superuser. Enter the required information.

**How to create additional users using Django admin:**

1. Start your Delphic application locally following the deployment instructions.
2. Visit the Django admin interface by navigating to `http://localhost:8000/admin` in your browser.
3. Log in with the superuser credentials you created earlier.
4. Click on "Users" under the "Authentication and Authorization" section.
5. Click on the "Add user +" button in the top right corner.
6. Enter the required information for the new user, such as username and password. Click "Save" to create the user.
7. To grant the new user additional permissions or make them a superuser, click on their username in the user list, scroll down to the "Permissions" section, and configure their permissions accordingly. Save your changes."}

{"tokens": 1587, "doc_id": "cca8c307-c42d-4470-a08e-55c98322f75b", "name": "Get References from PDFs", "url": "https://docs.llamaindex.ai/en/stable/examples/citation/pdf_page_reference", "retrieve_doc": true, "source": "llama_index", "content": "# Get References from PDFs

This guide shows you how to use LlamaIndex to get in-line page number citations in the response (and the response is streamed).

This is a simple combination of using the page number metadata in our PDF loader along with our indexing/query abstractions to use this information.

"Open

If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.

```python
%pip install llama-index-llms-openai
```

```python
!pip install llama-index
```

```python
from llama_index.core import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    download_loader,
    RAKEKeywordTableIndex,
)
```

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
```

Download Data

```python
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/lyft_2021.pdf' -O 'data/10k/lyft_2021.pdf'
```

Load document and build index

```python
reader = SimpleDirectoryReader(input_files=["./data/10k/lyft_2021.pdf"])
data = reader.load_data()
```

```python
index = VectorStoreIndex.from_documents(data)
```

```python
query_engine = index.as_query_engine(streaming=True, similarity_top_k=3)
```

Stream response with page citation

```python
response = query_engine.query(
    "What was the impact of COVID? Show statements in bullet form and show"
    " page reference after each statement."
)
response.print_response_stream()
```

    
    • The ongoing COVID-19 pandemic continues to impact communities in the United States, Canada and globally (page 6). 
\n • The pandemic and related responses caused decreased demand for our platform leading to decreased revenues as well as decreased earning opportunities for drivers on our platform (page 6).\n • Our business continues to be impacted by the COVID-19 pandemic (page 6).\n • The exact timing and pace of the recovery remain uncertain (page 6).\n • The extent to which our operations will continue to be impacted by the pandemic will depend largely on future developments, which are highly uncertain and cannot be accurately predicted (page 6).\n • An increase in cases due to variants of the virus has caused many businesses to delay employees returning to the office (page 6).\n • We anticipate that continued social distancing, altered consumer behavior, reduced travel and commuting, and expected corporate cost cutting will be significant challenges for us (page 6).\n • We have adopted multiple measures, including, but not limited, to establishing new health and safety requirements for ridesharing and updating workplace policies (page 6).\n • We have had to take certain cost-cutting measures, including lay-offs, furloughs and salary reductions, which may have adversely affect employee morale, our culture and our ability to attract and retain employees (page 18).\n • The ultimate impact of the COVID-19 pandemic on our users, customers, employees, business, operations and financial performance depends on many factors that are not within our control (page 18).\n\nInspect source nodes\n\n\n```python\nfor node in response.source_nodes:\n print(\"-----\")\n text_fmt = node.node.get_content().strip().replace(\"\\n\", \" \")[:1000]\n print(f\"Text:\\t {text_fmt} ...\")\n print(f\"Metadata:\\t {node.node.metadata}\")\n print(f\"Score:\\t {node.score:.3f}\")\n```\n\n -----\n Text:\t Impact of COVID-19 to our BusinessThe ongoing COVID-19 pandemic continues to impact communities in the United States, Canada and globally. Since the pandemic began in March 2020,governments and private businesses - at the recommendation of public health officials - have enacted precautions to mitigate the spread of the virus, including travelrestrictions and social distancing measures in many regions of the United States and Canada, and many enterprises have instituted and maintained work from homeprograms and limited the number of employees on site. Beginning in the middle of March 2020, the pandemic and these related responses caused decreased demand for ourplatform leading to decreased revenues as well as decreased earning opportunities for drivers on our platform. Our business continues to be impacted by the COVID-19pandemic. Although we have seen some signs of demand improving, particularly compared to the dema ...\n Metadata:\t {'page_label': '6', 'file_name': 'lyft_2021.pdf'}\n Score:\t 0.821\n -----\n Text:\t will continue to be impacted by the pandemic will depend largely on future developments, which are highly uncertain and cannot beaccurately predicted, including new information which may emerge concerning COVID-19 variants and the severity of the pandemic and actions by government authoritiesand private businesses to contain the pandemic or recover from its impact, among other things. For example, an increase in cases due to variants of the virus has causedmany businesses to delay employees returning to the office. 
Even as travel restrictions and shelter-in-place orders are modified or lifted, we anticipate that continued socialdistancing, altered consu mer behavior, reduced travel and commuting, and expected corporate cost cutting will be significant challenges for us. The strength and duration ofthese challenges cannot b e presently estimated.In response to the COVID-19 pandemic, we have adopted multiple measures, including, but not limited, to establishing ne ...\n Metadata:\t {'page_label': '56', 'file_name': 'lyft_2021.pdf'}\n Score:\t 0.808\n -----\n Text:\t storing unrented and returned vehicles. These impacts to the demand for and operations of the different rental programs have and may continue to adversely affectour business, financial condi tion and results of operation.• The COVID-19 pandemic may delay or prevent us, or our current or prospective partners and suppliers, from being able to test, develop or deploy autonomousvehicle-related technology, including through direct impacts of the COVID-19 virus on employee and contractor health; reduced consumer demand forautonomous vehicle travel resulting from an overall reduced demand for travel; shelter-in-place orders by local, state or federal governments negatively impactingoperations, including our ability to test autonomous vehicle-related technology; impacts to the supply chains of our current or prospective partners and suppliers;or economic impacts limiting our or our current or prospective partners’ or suppliers’ ability to expend resources o ...\n Metadata:\t {'page_label': '18', 'file_name': 'lyft_2021.pdf'}\n Score:\t 0.805"} {"tokens": 1654, "doc_id": "fbb928bd-56a2-4df4-bd74-23b68502d3d0", "name": "Auto-Retrieval from a Weaviate Vector Database", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/WeaviateIndex_auto_retriever", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Auto-Retrieval from a Weaviate Vector Database\n\nThis guide shows how to perform **auto-retrieval** in LlamaIndex with [Weaviate](https://weaviate.io/). \n\nThe Weaviate vector database supports a set of [metadata filters](https://weaviate.io/developers/weaviate/search/filters) in addition to a query string for semantic search. Given a natural language query, we first use a Large Language Model (LLM) to infer a set of metadata filters as well as the right query string to pass to the vector database (either can also be blank). This overall query bundle is then executed against the vector database.\n\nThis allows for more dynamic, expressive forms of retrieval beyond top-k semantic search. The relevant context for a given query may only require filtering on a metadata tag, or require a joint combination of filtering + semantic search within the filtered set, or just raw semantic search.\n\n## Setup \n\nWe first define imports and define an empty Weaviate collection.\n\nIf you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-weaviate\n```\n\n\n```python\n!pip install llama-index weaviate-client\n```\n\n\n```python\nimport logging\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\nlogging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n```\n\nWe will be using GPT-4 for its reasoning capabilities to infer the metadata filters. 
Depending on your use case, `"gpt-3.5-turbo"` can work as well.


```python
# set up OpenAI
import os
import getpass
import openai

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
openai.api_key = os.environ["OPENAI_API_KEY"]
```


```python
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core.settings import Settings

Settings.llm = OpenAI(model="gpt-4")
Settings.embed_model = OpenAIEmbedding()
```

This Notebook uses Weaviate in [Embedded mode](https://weaviate.io/developers/weaviate/installation/embedded), which is supported on Linux and macOS.

If you prefer to try out Weaviate's fully managed service, [Weaviate Cloud Services (WCS)](https://weaviate.io/developers/weaviate/installation/weaviate-cloud-services), you can enable the code in the comments.


```python
import weaviate
from weaviate.embedded import EmbeddedOptions

# Connect to Weaviate client in embedded mode
client = weaviate.connect_to_embedded()

# Enable this code if you want to use Weaviate Cloud Services instead of Embedded mode.
"""
import weaviate

# cloud
cluster_url = ""
api_key = ""

client = weaviate.connect_to_wcs(cluster_url=cluster_url,
    auth_credentials=weaviate.auth.AuthApiKey(api_key),
)

# local
# client = weaviate.connect_to_local()
"""
```

## Defining Some Sample Data

We insert some sample nodes containing text chunks into the vector database. Note that each `TextNode` not only contains the text, but also metadata e.g. `category` and `country`. These metadata fields will get converted/stored as such in the underlying vector db.


```python
from llama_index.core.schema import TextNode

nodes = [
    TextNode(
        text=(
            "Michael Jordan is a retired professional basketball player,"
            " widely regarded as one of the greatest basketball players of all"
            " time."
        ),
        metadata={
            "category": "Sports",
            "country": "United States",
        },
    ),
    TextNode(
        text=(
            "Angelina Jolie is an American actress, filmmaker, and"
            " humanitarian. She has received numerous awards for her acting"
            " and is known for her philanthropic work."
        ),
        metadata={
            "category": "Entertainment",
            "country": "United States",
        },
    ),
    TextNode(
        text=(
            "Elon Musk is a business magnate, industrial designer, and"
            " engineer. He is the founder, CEO, and lead designer of SpaceX,"
            " Tesla, Inc., Neuralink, and The Boring Company."
        ),
        metadata={
            "category": "Business",
            "country": "United States",
        },
    ),
    TextNode(
        text=(
            "Rihanna is a Barbadian singer, actress, and businesswoman. She"
            " has achieved significant success in the music industry and is"
            " known for her versatile musical style."
        ),
        metadata={
            "category": "Music",
            "country": "Barbados",
        },
    ),
    TextNode(
        text=(
            "Cristiano Ronaldo is a Portuguese professional footballer who is"
            " considered one of the greatest football players of all time. He"
            " has won numerous awards and set multiple records during his"
            " career."
        ),
        metadata={
            "category": "Sports",
            "country": "Portugal",
        },
    ),
]
```

## Build Vector Index with Weaviate Vector Store

Here we load the data into the vector store. As mentioned above, both the text and metadata for each node will get converted into corresponding representations in Weaviate.
We can now run semantic queries and also metadata filtering on this data from Weaviate.


```python
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.weaviate import WeaviateVectorStore

vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name="LlamaIndex_filter"
)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
```


```python
index = VectorStoreIndex(nodes, storage_context=storage_context)
```

## Define `VectorIndexAutoRetriever`

We define our core `VectorIndexAutoRetriever` module. The module takes in `VectorStoreInfo`, which contains a structured description of the vector store collection and the metadata filters it supports. This information will then be used in the auto-retrieval prompt where the LLM infers metadata filters.


```python
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.vector_stores.types import MetadataInfo, VectorStoreInfo


vector_store_info = VectorStoreInfo(
    content_info="brief biography of celebrities",
    metadata_info=[
        MetadataInfo(
            name="category",
            type="str",
            description=(
                "Category of the celebrity, one of [Sports, Entertainment,"
                " Business, Music]"
            ),
        ),
        MetadataInfo(
            name="country",
            type="str",
            description=(
                "Country of the celebrity, one of [United States, Barbados,"
                " Portugal]"
            ),
        ),
    ],
)

retriever = VectorIndexAutoRetriever(
    index, vector_store_info=vector_store_info
)
```

## Running over some sample data

We try running over some sample data. Note how metadata filters are inferred - this helps with more precise retrieval!


```python
response = retriever.retrieve("Tell me about celebrities from United States")
```


```python
print(response[0])
```


```python
response = retriever.retrieve(
    "Tell me about Sports celebrities from United States"
)
```


```python
print(response[0])
```"}

{"tokens": 1016, "doc_id": "517359e3-f4af-44bb-8076-2f6f8f27505c", "name": "Weaviate Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/WeaviateIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": ""Open

# Weaviate Vector Store

If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.

```python
%pip install llama-index-vector-stores-weaviate
```

```python
!pip install llama-index
```

#### Creating a Weaviate Client

```python
import os
import openai

os.environ["OPENAI_API_KEY"] = ""
openai.api_key = os.environ["OPENAI_API_KEY"]
```

```python
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```

```python
import weaviate
```

```python
# cloud
cluster_url = ""
api_key = ""

client = weaviate.connect_to_wcs(
    cluster_url=cluster_url,
    auth_credentials=weaviate.auth.AuthApiKey(api_key),
)

# local
# client = weaviate.connect_to_local()
```

#### Load documents, build the VectorStoreIndex

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.weaviate import WeaviateVectorStore
from IPython.display import Markdown, display
```

Download Data

```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```

```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
```

```python
from llama_index.core import StorageContext

# If you want to load the index later, be sure to give it a name!
vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name="LlamaIndex"
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)

# NOTE: you may also choose to define an index_name manually.
# index_name = "test_prefix"
# vector_store = WeaviateVectorStore(weaviate_client=client, index_name=index_name)
```

#### Query Index

```python
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
```

```python
display(Markdown(f"{response}"))
```

## Loading the index

Here, we use the same index name as when we created the initial index. This stops it from being auto-generated and allows us to easily connect back to it.

```python
cluster_url = ""
api_key = ""

client = weaviate.connect_to_wcs(
    cluster_url=cluster_url,
    auth_credentials=weaviate.auth.AuthApiKey(api_key),
)

# local
# client = weaviate.connect_to_local()
```

```python
vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name="LlamaIndex"
)

loaded_index = VectorStoreIndex.from_vector_store(vector_store)
```

```python
# set Logging to DEBUG for more detailed outputs
query_engine = loaded_index.as_query_engine()
response = query_engine.query("What happened at interleaf?")
display(Markdown(f"{response}"))
```

## Metadata Filtering

Let's insert a dummy document, and try to filter so that only that document is returned.

```python
from llama_index.core import Document

doc = Document.example()
print(doc.metadata)
print("-----")
print(doc.text[:100])
```

```python
loaded_index.insert(doc)
```

```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

filters = MetadataFilters(
    filters=[ExactMatchFilter(key="filename", value="README.md")]
)
query_engine = loaded_index.as_query_engine(filters=filters)
response = query_engine.query("What is the name of the file?")
display(Markdown(f"{response}"))
```

# Deleting the index completely

You can delete the index created by the vector store using the `delete_index` function

```python
vector_store.delete_index()
```

```python
vector_store.delete_index()  # calling the function again does nothing
```

# Connection Termination

You must ensure your client connections are closed:

```python
client.close()
```"}

{"tokens": 1471, "doc_id": "26824bda-cde2-4903-9be7-f5288b216ca2", "name": "Neo4j vector store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/Neo4jVectorDemo", "retrieve_doc": true, "source": "llama_index", "content": ""Open

# Neo4j vector store

If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.

```python
%pip install llama-index-vector-stores-neo4jvector
```

```python
!pip install llama-index
```

```python
import os
import openai

os.environ["OPENAI_API_KEY"] = "OPENAI_API_KEY"
\"OPENAI_API_KEY\"\nopenai.api_key = os.environ[\"OPENAI_API_KEY\"]\n```\n\n## Initiate Neo4j vector wrapper\n\n\n```python\nfrom llama_index.vector_stores.neo4jvector import Neo4jVectorStore\n\nusername = \"neo4j\"\npassword = \"pleaseletmein\"\nurl = \"bolt://localhost:7687\"\nembed_dim = 1536\n\nneo4j_vector = Neo4jVectorStore(username, password, url, embed_dim)\n```\n\n## Load documents, build the VectorStoreIndex\n\n\n```python\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom IPython.display import Markdown, display\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n --2023-12-14 18:44:00-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.110.133, ...\n Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.\n HTTP request sent, awaiting response... 200 OK\n Length: 75042 (73K) [text/plain]\n Saving to: ‘data/paul_graham/paul_graham_essay.txt’\n \n data/paul_graham/pa 100%[===================>] 73,28K --.-KB/s in 0,03s \n \n 2023-12-14 18:44:00 (2,16 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]\n \n\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham\").load_data()\n```\n\n\n```python\nfrom llama_index.core import StorageContext\n\nstorage_context = StorageContext.from_defaults(vector_store=neo4j_vector)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n\n```python\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What happened at interleaf?\")\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nAt Interleaf, they added a scripting language inspired by Emacs and made it a dialect of Lisp. They were looking for a Lisp hacker to write things in this scripting language. The author of the text worked at Interleaf and mentioned that their Lisp was the thinnest icing on a giant C cake. The author also mentioned that they didn't know C and didn't want to learn it, so they never understood most of the software at Interleaf. Additionally, the author admitted to being a bad employee and spending much of their time working on a separate project called On Lisp.\n\n\n## Hybrid search\n\nHybrid search uses a combination of keyword and vector search\nIn order to use hybrid search, you need to set the `hybrid_search` to `True`\n\n\n```python\nneo4j_vector_hybrid = Neo4jVectorStore(\n username, password, url, embed_dim, hybrid_search=True\n)\n```\n\n\n```python\nstorage_context = StorageContext.from_defaults(\n vector_store=neo4j_vector_hybrid\n)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What happened at interleaf?\")\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nAt Interleaf, they added a scripting language inspired by Emacs and made it a dialect of Lisp. They were looking for a Lisp hacker to write things in this scripting language. The author of the essay worked at Interleaf but didn't understand most of the software because he didn't know C and didn't want to learn it. 
He also mentioned that their Lisp was the thinnest icing on a giant C cake. The author admits to being a bad employee and spending much of his time working on a contract to publish On Lisp.\n\n\n## Load existing vector index\n\nIn order to connect to an existing vector index, you need to define the `index_name` and `text_node_property` parameters:\n\n- index_name: name of the existing vector index (default is `vector`)\n- text_node_property: name of the property that contains the text value (default is `text`)\n\n\n```python\nindex_name = \"existing_index\"\ntext_node_property = \"text\"\nexisting_vector = Neo4jVectorStore(\n username,\n password,\n url,\n embed_dim,\n index_name=index_name,\n text_node_property=text_node_property,\n)\n\nloaded_index = VectorStoreIndex.from_vector_store(existing_vector)\n```\n\n## Customizing responses\n\nYou can customize the retrieved information from the knowledge graph using the `retrieval_query` parameter.\n\nThe retrieval query must return the following four columns:\n\n* text:str - The text of the returned document\n* score:str - similarity score\n* id:str - node id\n* metadata: Dict - dictionary with additional metadata (must contain `_node_type` and `_node_content` keys)\n\n\n```python\nretrieval_query = (\n \"RETURN 'Interleaf hired Tomaz' AS text, score, node.id AS id, \"\n \"{author: 'Tomaz', _node_type:node._node_type, _node_content:node._node_content} AS metadata\"\n)\nneo4j_vector_retrieval = Neo4jVectorStore(\n username, password, url, embed_dim, retrieval_query=retrieval_query\n)\n```\n\n\n```python\nloaded_index = VectorStoreIndex.from_vector_store(\n neo4j_vector_retrieval\n).as_query_engine()\nresponse = loaded_index.query(\"What happened at interleaf?\")\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nInterleaf hired Tomaz."} {"tokens": 1782, "doc_id": "dea54c67-9b5e-47f0-adcc-c00da6a46c2f", "name": "S3/R2 Storage", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/SimpleIndexOnS3", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# S3/R2 Storage\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n!pip install llama-index\n```\n\n\n```python\nimport logging\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\nlogging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n\nfrom llama_index.core import (\n VectorStoreIndex,\n SimpleDirectoryReader,\n load_index_from_storage,\n StorageContext,\n)\nfrom IPython.display import Markdown, display\n```\n\n INFO:numexpr.utils:Note: NumExpr detected 32 cores but \"NUMEXPR_MAX_THREADS\" not set, so enforcing safe limit of 8.\n Note: NumExpr detected 32 cores but \"NUMEXPR_MAX_THREADS\" not set, so enforcing safe limit of 8.\n INFO:numexpr.utils:NumExpr defaulting to 8 threads.\n NumExpr defaulting to 8 threads.\n\n\n /home/hua/code/llama_index/.hermit/python/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. 
See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n from .autonotebook import tqdm as notebook_tqdm\n\n\n\n```python\nimport dotenv\nimport s3fs\nimport os\n\ndotenv.load_dotenv(\"../../../.env\")\n\nAWS_KEY = os.environ[\"AWS_ACCESS_KEY_ID\"]\nAWS_SECRET = os.environ[\"AWS_SECRET_ACCESS_KEY\"]\nR2_ACCOUNT_ID = os.environ[\"R2_ACCOUNT_ID\"]\n\nassert AWS_KEY is not None and AWS_KEY != \"\"\n\ns3 = s3fs.S3FileSystem(\n key=AWS_KEY,\n secret=AWS_SECRET,\n endpoint_url=f\"https://{R2_ACCOUNT_ID}.r2.cloudflarestorage.com\",\n s3_additional_kwargs={\"ACL\": \"public-read\"},\n)\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\nprint(len(documents))\n```\n\n 1\n\n\n\n```python\nindex = VectorStoreIndex.from_documents(documents, fs=s3)\n```\n\n INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens\n > [build_index_from_nodes] Total LLM token usage: 0 tokens\n INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 20729 tokens\n > [build_index_from_nodes] Total embedding token usage: 20729 tokens\n\n\n\n```python\n# save index to disk\nindex.set_index_id(\"vector_index\")\nindex.storage_context.persist(\"llama-index/storage_demo\", fs=s3)\n```\n\n\n```python\ns3.listdir(\"llama-index/storage_demo\")\n```\n\n\n\n\n [{'Key': 'llama-index/storage_demo/docstore.json',\n 'LastModified': datetime.datetime(2023, 5, 14, 20, 23, 53, 213000, tzinfo=tzutc()),\n 'ETag': '\"3993f79a6f7cf908a8e53450a2876cf0\"',\n 'Size': 107529,\n 'StorageClass': 'STANDARD',\n 'type': 'file',\n 'size': 107529,\n 'name': 'llama-index/storage_demo/docstore.json'},\n {'Key': 'llama-index/storage_demo/index_store.json',\n 'LastModified': datetime.datetime(2023, 5, 14, 20, 23, 53, 783000, tzinfo=tzutc()),\n 'ETag': '\"5b084883bf0b08e3c2b979af7c16be43\"',\n 'Size': 3105,\n 'StorageClass': 'STANDARD',\n 'type': 'file',\n 'size': 3105,\n 'name': 'llama-index/storage_demo/index_store.json'},\n {'Key': 'llama-index/storage_demo/vector_store.json',\n 'LastModified': datetime.datetime(2023, 5, 14, 20, 23, 54, 232000, tzinfo=tzutc()),\n 'ETag': '\"75535cf22c23bcd8ead21b8a52e9517a\"',\n 'Size': 829290,\n 'StorageClass': 'STANDARD',\n 'type': 'file',\n 'size': 829290,\n 'name': 'llama-index/storage_demo/vector_store.json'}]\n\n\n\n\n```python\n# load index from s3\nsc = StorageContext.from_defaults(\n persist_dir=\"llama-index/storage_demo\", fs=s3\n)\n```\n\n\n```python\nindex2 = load_index_from_storage(sc, \"vector_index\")\n```\n\n INFO:llama_index.indices.loading:Loading indices with ids: ['vector_index']\n Loading indices with ids: ['vector_index']\n\n\n\n```python\nindex2.docstore.docs.keys()\n```\n\n\n\n\n dict_keys(['f8891670-813b-4cfa-9025-fcdc8ba73449', '985a2c69-9da5-40cf-ba30-f984921187c1', 'c55f077c-0bfb-4036-910c-6fd5f26f7372', 'b47face6-f25b-4381-bb8d-164f179d6888', '16304ef7-2378-4776-b86d-e8ed64c8fb58', '62dfdc7a-6a2f-4d5f-9033-851fbc56c14a', 'a51ef189-3924-494b-84cf-e23df673e29c', 'f94aca2b-34ac-4ec4-ac41-d31cd3b7646f', 'ad89e2fb-e0fc-4615-a380-8245bd6546af', '3dbba979-ca08-4321-b4de-be5236ac2e11', '634b2d6d-0bff-4384-898f-b521470db8ac', 'ee9551ba-7a44-493d-997b-8eeab9c04e25', 'b21fe2b5-d8e3-4895-8424-fa9e3da76711', 
'bd2609e8-8b52-49e8-8ee7-41b64b3ce9e1', 'a08b739e-efd9-4a61-8517-c4f9cea8cf7d', '8d4babaf-37f1-454a-8be4-b67e1b8e428f', '05389153-4567-4e53-a2ea-bc3e020ee1b2', 'd29531a5-c5d2-4e1d-ab99-56f2b4bb7f37', '2ccb3c63-3407-4acf-b5bb-045caa588bbc', 'a0b1bebb-3dcd-4bf8-9ebb-a4cd2cb82d53', '21517b34-6c1b-4607-bf89-7ab59b85fba6', 'f2487d52-1e5e-4482-a182-218680ef306e', '979998ce-39ee-41bc-a9be-b3ed68d7c304', '3e658f36-a13e-407a-8624-0adf9e842676'])"} {"tokens": 1815, "doc_id": "865b355c-a71d-4252-8033-1aa6c567ae16", "name": "Rockset Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/RocksetIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Rockset Vector Store\n\nAs a real-time search and analytics database, Rockset uses indexing to deliver scalable and performant personalization, product search, semantic search, chatbot applications, and more.\nSince Rockset is purpose-built for real-time, you can build these responsive applications on constantly updating, streaming data. \nBy integrating Rockset with LlamaIndex, you can easily use LLMs on your own real-time data for production-ready vector search applications.\n\nWe'll walk through a demonstration of how to use Rockset as a vector store in LlamaIndex. \n\n## Tutorial\nIn this example, we'll use OpenAI's `text-embedding-ada-002` model to generate embeddings and Rockset as vector store to store embeddings.\nWe'll ingest text from a file and ask questions about the content.\n\n### Setting Up Your Environment\n1. Create a [collection](https://rockset.com/docs/collections) from the Rockset console with the [Write API](https://rockset.com/docs/write-api/) as your source.\nName your collection `llamaindex_demo`. Configure the following [ingest transformation](https://rockset.com/docs/ingest-transformation) \nwith [`VECTOR_ENFORCE`](https://rockset.com/docs/vector-functions) to define your embeddings field and take advantage of performance and storage optimizations:\n```sql\nSELECT \n _input.* EXCEPT(_meta), \n VECTOR_ENFORCE(\n _input.embedding,\n 1536,\n 'float'\n ) as embedding\nFROM _input\n```\n\n2. Create an [API key](https://rockset.com/docs/iam) from the Rockset console and set the `ROCKSET_API_KEY` environment variable.\nFind your API server [here](http://rockset.com/docs/rest-api#introduction) and set the `ROCKSET_API_SERVER` environment variable. \nSet the `OPENAI_API_KEY` environment variable.\n\n3. Install the dependencies.\n```shell\npip3 install llama_index rockset \n```\n\n4. LlamaIndex allows you to ingest data from a variety of sources. \nFor this example, we'll read from a text file named `constitution.txt`, which is a transcript of the American Constitution, found [here](https://www.archives.gov/founding-docs/constitution-transcript). 
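\n\nIf you'd rather set the required keys from Python than from your shell, a minimal sketch (the values are placeholders; the variable names are the ones step 2 above references):\n\n```python\nimport os\n\n# placeholders - substitute your real values\nos.environ[\"ROCKSET_API_KEY\"] = \"<your Rockset API key>\"\nos.environ[\"ROCKSET_API_SERVER\"] = \"<your API server, e.g. https://api.use1a1.rockset.com>\"\nos.environ[\"OPENAI_API_KEY\"] = \"<your OpenAI API key>\"\n```\n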
\n\n### Data ingestion \nUse LlamaIndex's `SimpleDirectoryReader` class to convert the text file to a list of `Document` objects.\n\n\n```python\n%pip install llama-index-llms-openai\n%pip install llama-index-vector-stores-rocksetdb\n```\n\n\n```python\nfrom llama_index.core import SimpleDirectoryReader\n\ndocs = SimpleDirectoryReader(\n input_files=[\"{path to}/constitution.txt\"]\n).load_data()\n```\n\nInstantiate the LLM and set it on the global `Settings`.\n\n\n```python\nfrom llama_index.core import Settings\nfrom llama_index.llms.openai import OpenAI\n\nSettings.llm = OpenAI(temperature=0.8, model=\"gpt-3.5-turbo\")\n```\n\nInstantiate the vector store and storage context.\n\n\n```python\nfrom llama_index.core import StorageContext\nfrom llama_index.vector_stores.rocksetdb import RocksetVectorStore\n\nvector_store = RocksetVectorStore(collection=\"llamaindex_demo\")\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n```\n\nAdd documents to the `llamaindex_demo` collection and create an index.\n\n\n```python\nfrom llama_index.core import VectorStoreIndex\n\nindex = VectorStoreIndex.from_documents(\n docs,\n storage_context=storage_context,\n)\n```\n\n### Querying\nAsk a question about your document and generate a response.\n\n\n```python\nresponse = index.as_query_engine().query(\"What is the duty of the president?\")\n\nprint(str(response))\n```\n\n\nRun the program.\n```text\n$ python3 main.py\nThe duty of the president is to faithfully execute the Office of President of the United States, preserve, protect and defend the Constitution of the United States, serve as the Commander in Chief of the Army and Navy, grant reprieves and pardons for offenses against the United States (except in cases of impeachment), make treaties and appoint ambassadors and other public ministers, take care that the laws be faithfully executed, and commission all the officers of the United States.\n```\n\n## Metadata Filtering\nMetadata filtering allows you to retrieve relevant documents that match specific filters.\n\n1. Add nodes to your vector store and create an index.\n\n\n```python\nfrom llama_index.vector_stores.rocksetdb import RocksetVectorStore\nfrom llama_index.core import VectorStoreIndex, StorageContext\nfrom llama_index.core.vector_stores.types import NodeWithEmbedding\nfrom llama_index.core.schema import TextNode\n\nnodes = [\n NodeWithEmbedding(\n node=TextNode(\n text=\"Apples are blue\",\n metadata={\"type\": \"fruit\"},\n ),\n embedding=[],\n )\n]\nindex = VectorStoreIndex(\n nodes,\n storage_context=StorageContext.from_defaults(\n vector_store=RocksetVectorStore(collection=\"llamaindex_demo\")\n ),\n)\n```\n\n2. Define metadata filters.\n\n\n```python\nfrom llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters\n\nfilters = MetadataFilters(\n filters=[ExactMatchFilter(key=\"type\", value=\"fruit\")]\n)\n```\n\n3. 
Retrieve relevant documents that satisfy the filters.\n\n\n```python\nretriever = index.as_retriever(filters=filters)\nretriever.retrieve(\"What colors are apples?\")\n```\n\n## Creating an Index from an Existing Collection\nYou can create indices with data from existing collections.\n\n\n```python\nfrom llama_index.core import VectorStoreIndex\nfrom llama_index.vector_stores.rocksetdb import RocksetVectorStore\n\nvector_store = RocksetVectorStore(collection=\"llamaindex_demo\")\n\nindex = VectorStoreIndex.from_vector_store(vector_store)\n```\n\n## Creating an Index from a New Collection\nYou can also create a new Rockset collection to use as a vector store.\n\n\n```python\nfrom llama_index.vector_stores.rocksetdb import RocksetVectorStore\n\nvector_store = RocksetVectorStore.with_new_collection(\n collection=\"llamaindex_demo\", # name of new collection\n dimensions=1536, # specifies length of vectors in ingest transformation (optional)\n # other RocksetVectorStore args\n)\n\nindex = VectorStoreIndex(\n nodes,\n storage_context=StorageContext.from_defaults(vector_store=vector_store),\n)\n```\n\n## Configuration\n* **collection**: Name of the collection to query (required).\n\n```python\nRocksetVectorStore(collection=\"my_collection\")\n```\n\n* **workspace**: Name of the workspace containing the collection. Defaults to `\"commons\"`.\n```python\nRocksetVectorStore(workspace=\"my_workspace\")\n```\n\n* **api_key**: The API key to use to authenticate Rockset requests. Ignored if `client` is passed in. Defaults to the `ROCKSET_API_KEY` environment variable.\n```python\nRocksetVectorStore(api_key=\"\")\n```\n\n* **api_server**: The API server to use for Rockset requests. Ignored if `client` is passed in. Defaults to the `ROCKSET_API_SERVER` environment variable, or `\"https://api.use1a1.rockset.com\"` if `ROCKSET_API_SERVER` is not set.\n```python\nfrom rockset import Regions\nRocksetVectorStore(api_server=Regions.euc1a1)\n```\n\n* **client**: Rockset client object to use to execute Rockset requests. If not specified, a client object is internally constructed with the `api_key` parameter (or `ROCKSET_API_KEY` environment variable) and the `api_server` parameter (or `ROCKSET_API_SERVER` environment variable).\n```python\nfrom rockset import RocksetClient\nRocksetVectorStore(client=RocksetClient(api_key=\"\"))\n```\n\n* **embedding_col**: The name of the database field containing embeddings. Defaults to `\"embedding\"`.\n```python\nRocksetVectorStore(embedding_col=\"my_embedding\")\n```\n\n* **metadata_col**: The name of the database field containing node data. Defaults to `\"metadata\"`.\n```python\nRocksetVectorStore(metadata_col=\"node\")\n```\n\n* **distance_func**: The metric to measure vector relationship. Defaults to cosine similarity.\n```python\nRocksetVectorStore(distance_func=RocksetVectorStore.DistanceFunc.DOT_PRODUCT)\n```"} {"tokens": 865, "doc_id": "bcde3b34-3303-4885-afb1-8654f27e3176", "name": "Databricks Vector Search", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/DatabricksVectorSearchDemo", "retrieve_doc": true, "source": "llama_index", "content": "# Databricks Vector Search\n\nDatabricks Vector Search is a vector database that is built into the Databricks Intelligence Platform and integrated with its governance and productivity tools. Full docs here: https://docs.databricks.com/en/generative-ai/vector-search.html\n\nInstall llama-index and databricks-vectorsearch. 
You must be inside a Databricks runtime to use the Vector Search python client.\n\n\n```python\n%pip install llama-index llama-index-vector-stores-databricks\n%pip install databricks-vectorsearch\n```\n\nImport databricks dependencies\n\n\n```python\nfrom databricks.vector_search.client import (\n VectorSearchIndex,\n VectorSearchClient,\n)\n```\n\nImport LlamaIndex dependencies\n\n\n```python\nfrom llama_index.core import (\n VectorStoreIndex,\n SimpleDirectoryReader,\n ServiceContext,\n StorageContext,\n)\nfrom llama_index.vector_stores.databricks import DatabricksVectorSearch\n```\n\nLoad example data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\nRead the data\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\nprint(f\"Total documents: {len(documents)}\")\nprint(f\"First document, id: {documents[0].doc_id}\")\nprint(f\"First document, hash: {documents[0].hash}\")\nprint(\n \"First document, text\"\n f\" ({len(documents[0].text)} characters):\\n{'='*20}\\n{documents[0].text[:360]} ...\"\n)\n```\n\nCreate a Databricks Vector Search endpoint which will serve the index\n\n\n```python\n# Create a vector search endpoint\nclient = VectorSearchClient()\nclient.create_endpoint(\n name=\"llamaindex_dbx_vector_store_test_endpoint\", endpoint_type=\"STANDARD\"\n)\n```\n\nCreate the Databricks Vector Search index, and build it from the documents\n\n\n```python\n# Create a vector search index\n# it must be placed inside a Unity Catalog-enabled schema\n\n# We'll use self-managed embeddings (i.e. managed by LlamaIndex) rather than a Databricks-managed index\ndatabricks_index = client.create_direct_access_index(\n endpoint_name=\"llamaindex_dbx_vector_store_test_endpoint\",\n index_name=\"my_catalog.my_schema.my_test_table\",\n primary_key=\"my_primary_key_name\",\n embedding_dimension=1536, # match the embeddings model dimension you're going to use\n embedding_vector_column=\"my_embedding_vector_column_name\", # you name this anything you want - it'll be picked up by the LlamaIndex class\n schema={\n \"my_primary_key_name\": \"string\",\n \"my_embedding_vector_column_name\": \"array\",\n \"text\": \"string\", # one column must match the text_column in the DatabricksVectorSearch instance created below; this will hold the raw node text,\n \"doc_id\": \"string\", # one column must contain the reference document ID (this will be populated by LlamaIndex automatically)\n # add any other metadata you may have in your nodes (Databricks Vector Search supports metadata filtering)\n # NOTE THAT THESE FIELDS MUST BE ADDED EXPLICITLY TO BE USED FOR METADATA FILTERING\n },\n)\n\ndatabricks_vector_store = DatabricksVectorSearch(\n index=databricks_index,\n text_column=\"text\",\n columns=None, # YOU MUST ALSO RECORD YOUR METADATA FIELD NAMES HERE\n) # text_column is required for self-managed embeddings\nstorage_context = StorageContext.from_defaults(\n vector_store=databricks_vector_store\n)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\nQuery the index\n\n\n```python\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"Why did the author choose to work on AI?\")\n\nprint(response.response)\n```"} {"tokens": 6073, "doc_id": "8cd97a8d-9e7a-41df-96a0-cefafcfa1282", "name": "Postgres Vector Store", "url": 
"https://docs.llamaindex.ai/en/stable/examples/vector_stores/postgres", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Postgres Vector Store\nIn this notebook we are going to show how to use [Postgresql](https://www.postgresql.org) and [pgvector](https://github.com/pgvector/pgvector) to perform vector searches in LlamaIndex\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-postgres\n```\n\n\n```python\n!pip install llama-index\n```\n\nRunning the following cell will install Postgres with PGVector in Colab.\n\n\n```python\n!sudo apt update\n!echo | sudo apt install -y postgresql-common\n!echo | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh\n!echo | sudo apt install postgresql-15-pgvector\n!sudo service postgresql start\n!sudo -u postgres psql -c \"ALTER USER postgres PASSWORD 'password';\"\n!sudo -u postgres psql -c \"CREATE DATABASE vector_db;\"\n```\n\n\n```python\n# import logging\n# import sys\n\n# Uncomment to see debug logs\n# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\n# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n\nfrom llama_index.core import SimpleDirectoryReader, StorageContext\nfrom llama_index.core import VectorStoreIndex\nfrom llama_index.vector_stores.postgres import PGVectorStore\nimport textwrap\nimport openai\n```\n\n### Setup OpenAI\nThe first step is to configure the openai key. It will be used to created embeddings for the documents loaded into the index\n\n\n```python\nimport os\n\nos.environ[\"OPENAI_API_KEY\"] = \"\"\nopenai.api_key = os.environ[\"OPENAI_API_KEY\"]\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n --2024-03-14 02:56:30-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...\n Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n HTTP request sent, awaiting response... 200 OK\n Length: 75042 (73K) [text/plain]\n Saving to: ‘data/paul_graham/paul_graham_essay.txt’\n \n data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.001s \n \n 2024-03-14 02:56:30 (72.2 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]\n \n\n\n### Loading documents\nLoad the documents stored in the `data/paul_graham/` using the SimpleDirectoryReader\n\n\n```python\ndocuments = SimpleDirectoryReader(\"./data/paul_graham\").load_data()\nprint(\"Document ID:\", documents[0].doc_id)\n```\n\n Document ID: 1306591e-cc2d-430b-a74c-03ae7105ecab\n\n\n### Create the Database\nUsing an existing postgres running at localhost, create the database we'll be using.\n\n\n```python\nimport psycopg2\n\nconnection_string = \"postgresql://postgres:password@localhost:5432\"\ndb_name = \"vector_db\"\nconn = psycopg2.connect(connection_string)\nconn.autocommit = True\n\nwith conn.cursor() as c:\n c.execute(f\"DROP DATABASE IF EXISTS {db_name}\")\n c.execute(f\"CREATE DATABASE {db_name}\")\n```\n\n### Create the index\nHere we create an index backed by Postgres using the documents loaded previously. 
PGVectorStore takes a few arguments.\n\n\n```python\nfrom sqlalchemy import make_url\n\nurl = make_url(connection_string)\nvector_store = PGVectorStore.from_params(\n database=db_name,\n host=url.host,\n password=url.password,\n port=url.port,\n user=url.username,\n table_name=\"paul_graham_essay\",\n embed_dim=1536, # openai embedding dimension\n)\n\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context, show_progress=True\n)\nquery_engine = index.as_query_engine()\n```\n\n (Node-parsing and embedding progress output elided.)\n\n### Metadata filters\n\nThe notebook next downloads a CSV of git commit history from the TimescaleDB project to ‘data/git_commits/commit_history.csv’ (download output elided) and loads it:\n\n\n```python\nimport csv\n\nwith open(\"data/git_commits/commit_history.csv\", \"r\") as f:\n commits = list(csv.DictReader(f))\n\nprint(commits[0])\nprint(len(commits))\n```\n\n {'commit': '44e41c12ab25e36c202f58e068ced262eadc8d16', 'author': 'Lakshmi Narayanan Sreethar <lakshmi@timescale.com>', 'date': 'Tue Sep 5 21:03:21 2023 +0530', 'change summary': 'Fix segfault in set_integer_now_func', 'change details': 'When an invalid function oid is passed to set_integer_now_func, it finds out that the function oid is invalid but before throwing the error, it calls ReleaseSysCache on an invalid tuple causing a segfault. Fixed that by removing the invalid call to ReleaseSysCache. Fixes #6037 '}\n 4167\n\n\n#### Add nodes with custom metadata\n\n\n```python\n# Create TextNode for each of the first 100 commits\nfrom llama_index.core.schema import TextNode\nfrom datetime import datetime\nimport re\n\nnodes = []\ndates = set()\nauthors = set()\nfor commit in commits[:100]:\n author_email = commit[\"author\"].split(\"<\")[1][:-1]\n commit_date = datetime.strptime(\n commit[\"date\"], \"%a %b %d %H:%M:%S %Y %z\"\n ).strftime(\"%Y-%m-%d\")\n commit_text = commit[\"change summary\"]\n if commit[\"change details\"]:\n commit_text += \"\\n\\n\" + commit[\"change details\"]\n fixes = re.findall(r\"#(\\d+)\", commit_text, re.IGNORECASE)\n nodes.append(\n TextNode(\n text=commit_text,\n metadata={\n \"commit_date\": commit_date,\n \"author\": author_email,\n \"fixes\": fixes,\n },\n )\n )\n dates.add(commit_date)\n authors.add(author_email)\n\nprint(nodes[0])\nprint(min(dates), \"to\", max(dates))\nprint(authors)\n```\n\n Node ID: 69513543-dee5-4c65-b4b8-39295f11e669\n Text: Fix segfault in set_integer_now_func When an invalid function\n oid is passed to set_integer_now_func, it finds out that the function\n oid is invalid but before throwing the error, it calls ReleaseSysCache\n on an invalid tuple causing a segfault. Fixed that by removing the\n invalid call to ReleaseSysCache. 
Fixes #6037\n 2023-03-22 to 2023-09-05\n {'rafia.sabih@gmail.com', 'erik@timescale.com', 'jguthrie@timescale.com', 'sven@timescale.com', '36882414+akuzm@users.noreply.github.com', 'me@noctarius.com', 'satish.8483@gmail.com', 'nikhil@timescale.com', 'konstantina@timescale.com', 'dmitry@timescale.com', 'mats@timescale.com', 'jan@timescale.com', 'lakshmi@timescale.com', 'fabriziomello@gmail.com', 'engel@sero-systems.de'}\n\n\n\n```python\nvector_store = PGVectorStore.from_params(\n database=db_name,\n host=url.host,\n password=url.password,\n port=url.port,\n user=url.username,\n table_name=\"metadata_filter_demo3\",\n embed_dim=1536, # openai embedding dimension\n)\n\nindex = VectorStoreIndex.from_vector_store(vector_store=vector_store)\nindex.insert_nodes(nodes)\n```\n\n\n```python\nprint(index.as_query_engine().query(\"How did Lakshmi fix the segfault?\"))\n```\n\n Lakshmi fixed the segfault by removing the invalid call to ReleaseSysCache that was causing the issue.\n\n\n#### Apply metadata filters\n\nNow we can filter by commit author or by date when retrieving nodes.\n\n\n```python\nfrom llama_index.core.vector_stores.types import (\n MetadataFilter,\n MetadataFilters,\n)\n\nfilters = MetadataFilters(\n filters=[\n MetadataFilter(key=\"author\", value=\"mats@timescale.com\"),\n MetadataFilter(key=\"author\", value=\"sven@timescale.com\"),\n ],\n condition=\"or\",\n)\n\nretriever = index.as_retriever(\n similarity_top_k=10,\n filters=filters,\n)\n\nretrieved_nodes = retriever.retrieve(\"What is this software project about?\")\n\nfor node in retrieved_nodes:\n print(node.node.metadata)\n```\n\n {'commit_date': '2023-08-07', 'author': 'mats@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-27', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-07-13', 'author': 'mats@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-07', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-30', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-15', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-23', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-10', 'author': 'mats@timescale.com', 'fixes': []}\n {'commit_date': '2023-07-25', 'author': 'mats@timescale.com', 'fixes': ['5892']}\n {'commit_date': '2023-08-21', 'author': 'sven@timescale.com', 'fixes': []}\n\n\n\n```python\nfilters = MetadataFilters(\n filters=[\n MetadataFilter(key=\"commit_date\", value=\"2023-08-15\", operator=\">=\"),\n MetadataFilter(key=\"commit_date\", value=\"2023-08-25\", operator=\"<=\"),\n ],\n condition=\"and\",\n)\n\nretriever = index.as_retriever(\n similarity_top_k=10,\n filters=filters,\n)\n\nretrieved_nodes = retriever.retrieve(\"What is this software project about?\")\n\nfor node in retrieved_nodes:\n print(node.node.metadata)\n```\n\n {'commit_date': '2023-08-23', 'author': 'erik@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-17', 'author': 'konstantina@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-15', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}\n {'commit_date': '2023-08-15', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}\n {'commit_date': '2023-08-24', 'author': 'lakshmi@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-15', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-23', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-21', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': 
'2023-08-20', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-21', 'author': 'sven@timescale.com', 'fixes': []}\n\n\n#### Apply nested filters\n\nIn the above examples, we combined multiple filters using AND or OR. We can also combine multiple sets of filters.\n\ne.g. in SQL:\n```sql\nWHERE (commit_date >= '2023-08-01' AND commit_date <= '2023-08-15') AND (author = 'mats@timescale.com' OR author = 'sven@timescale.com')\n```\n\n\n```python\nfilters = MetadataFilters(\n filters=[\n MetadataFilters(\n filters=[\n MetadataFilter(\n key=\"commit_date\", value=\"2023-08-01\", operator=\">=\"\n ),\n MetadataFilter(\n key=\"commit_date\", value=\"2023-08-15\", operator=\"<=\"\n ),\n ],\n condition=\"and\",\n ),\n MetadataFilters(\n filters=[\n MetadataFilter(key=\"author\", value=\"mats@timescale.com\"),\n MetadataFilter(key=\"author\", value=\"sven@timescale.com\"),\n ],\n condition=\"or\",\n ),\n ],\n condition=\"and\",\n)\n\nretriever = index.as_retriever(\n similarity_top_k=10,\n filters=filters,\n)\n\nretrieved_nodes = retriever.retrieve(\"What is this software project about?\")\n\nfor node in retrieved_nodes:\n print(node.node.metadata)\n```\n\n {'commit_date': '2023-08-07', 'author': 'mats@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-07', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-15', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-10', 'author': 'mats@timescale.com', 'fixes': []}\n\n\nThe above can be simplified by using the IN operator. `PGVectorStore` supports `in`, `nin`, and `contains` for comparing an element with a list.\n\n\n```python\nfilters = MetadataFilters(\n filters=[\n MetadataFilter(key=\"commit_date\", value=\"2023-08-01\", operator=\">=\"),\n MetadataFilter(key=\"commit_date\", value=\"2023-08-15\", operator=\"<=\"),\n MetadataFilter(\n key=\"author\",\n value=[\"mats@timescale.com\", \"sven@timescale.com\"],\n operator=\"in\",\n ),\n ],\n condition=\"and\",\n)\n\nretriever = index.as_retriever(\n similarity_top_k=10,\n filters=filters,\n)\n\nretrieved_nodes = retriever.retrieve(\"What is this software project about?\")\n\nfor node in retrieved_nodes:\n print(node.node.metadata)\n```\n\n {'commit_date': '2023-08-07', 'author': 'mats@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-07', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-15', 'author': 'sven@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-10', 'author': 'mats@timescale.com', 'fixes': []}\n\n\n\n```python\n# Same thing, with NOT IN\nfilters = MetadataFilters(\n filters=[\n MetadataFilter(key=\"commit_date\", value=\"2023-08-01\", operator=\">=\"),\n MetadataFilter(key=\"commit_date\", value=\"2023-08-15\", operator=\"<=\"),\n MetadataFilter(\n key=\"author\",\n value=[\"mats@timescale.com\", \"sven@timescale.com\"],\n operator=\"nin\",\n ),\n ],\n condition=\"and\",\n)\n\nretriever = index.as_retriever(\n similarity_top_k=10,\n filters=filters,\n)\n\nretrieved_nodes = retriever.retrieve(\"What is this software project about?\")\n\nfor node in retrieved_nodes:\n print(node.node.metadata)\n```\n\n {'commit_date': '2023-08-09', 'author': 'me@noctarius.com', 'fixes': ['5805']}\n {'commit_date': '2023-08-15', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}\n {'commit_date': '2023-08-15', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}\n {'commit_date': '2023-08-11', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}\n {'commit_date': 
'2023-08-09', 'author': 'konstantina@timescale.com', 'fixes': ['5923', '5680', '5774', '5786', '5906', '5912']}\n {'commit_date': '2023-08-03', 'author': 'dmitry@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-03', 'author': 'dmitry@timescale.com', 'fixes': ['5908']}\n {'commit_date': '2023-08-01', 'author': 'nikhil@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-10', 'author': 'konstantina@timescale.com', 'fixes': []}\n {'commit_date': '2023-08-10', 'author': '36882414+akuzm@users.noreply.github.com', 'fixes': []}\n\n\n\n```python\n# CONTAINS\nfilters = MetadataFilters(\n filters=[\n MetadataFilter(key=\"fixes\", value=\"5680\", operator=\"contains\"),\n ]\n)\n\nretriever = index.as_retriever(\n similarity_top_k=10,\n filters=filters,\n)\n\nretrieved_nodes = retriever.retrieve(\"How did these commits fix the issue?\")\nfor node in retrieved_nodes:\n print(node.node.metadata)\n```\n\n {'commit_date': '2023-08-09', 'author': 'konstantina@timescale.com', 'fixes': ['5923', '5680', '5774', '5786', '5906', '5912']}\n\n\n### PgVector Query Options\n\n#### IVFFlat Probes\n\nSpecify the number of [IVFFlat probes](https://github.com/pgvector/pgvector?tab=readme-ov-file#query-options) (1 by default).\n\nWhen retrieving from the index, you can specify an appropriate number of IVFFlat probes (higher is better for recall, lower is better for speed).\n\n\n```python\nretriever = index.as_retriever(\n vector_store_query_mode=\"hybrid\",\n similarity_top_k=5,\n vector_store_kwargs={\"ivfflat_probes\": 10},\n)\n```\n\n#### HNSW EF Search\n\nSpecify the size of the dynamic [candidate list](https://github.com/pgvector/pgvector?tab=readme-ov-file#query-options-1) for search (40 by default).\n\n\n```python\nretriever = index.as_retriever(\n vector_store_query_mode=\"hybrid\",\n similarity_top_k=5,\n vector_store_kwargs={\"hnsw_ef_search\": 300},\n)\n```"} {"tokens": 697, "doc_id": "49a5cb74-0878-4b87-af18-82a6100409db", "name": "DashVector Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/DashvectorIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# DashVector Vector Store\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-dashvector\n```\n\n\n```python\n!pip install llama-index\n```\n\n\n```python\nimport logging\nimport sys\nimport os\n\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\nlogging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n```\n\n#### Creating a DashVector Collection\n\n\n```python\nimport dashvector\n```\n\n\n```python\napi_key = os.environ[\"DASHVECTOR_API_KEY\"]\nclient = dashvector.Client(api_key=api_key)\n```\n\n\n```python\n# dimensions are for text-embedding-ada-002\nclient.create(\"llama-demo\", dimension=1536)\n```\n\n\n\n\n {\"code\": 0, \"message\": \"\", \"requests_id\": \"82b969d2-2568-4e18-b0dc-aa159b503c84\"}\n\n\n\n\n```python\n# fetch the collection created above\ndashvector_collection = client.get(\"llama-demo\")\n```\n\n#### Download Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n#### Load documents, build the DashVectorStore and VectorStoreIndex\n\n\n```python\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom llama_index.vector_stores.dashvector import DashVectorStore\nfrom IPython.display import 
Markdown, display\n```\n\n INFO:numexpr.utils:Note: NumExpr detected 12 cores but \"NUMEXPR_MAX_THREADS\" not set, so enforcing safe limit of 8.\n Note: NumExpr detected 12 cores but \"NUMEXPR_MAX_THREADS\" not set, so enforcing safe limit of 8.\n INFO:numexpr.utils:NumExpr defaulting to 8 threads.\n NumExpr defaulting to 8 threads.\n\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham\").load_data()\n```\n\n\n```python\n# initialize without metadata filter\nfrom llama_index.core import StorageContext\n\nvector_store = DashVectorStore(dashvector_collection)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n#### Query Index\n\n\n```python\n# set Logging to DEBUG for more detailed outputs\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What did the author do growing up?\")\n```\n\n\n```python\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nThe author worked on writing and programming outside of school. They wrote short stories and tried writing programs on the IBM 1401 computer. They also built a microcomputer and started programming on it, writing simple games and a word processor."} {"tokens": 871, "doc_id": "49aafced-ea43-4b11-a230-f031f3453b6b", "name": "MyScale Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/MyScaleIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# MyScale Vector Store\nIn this notebook we are going to show a quick demo of using the MyScaleVectorStore.\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-myscale\n```\n\n\n```python\n!pip install llama-index\n```\n\n#### Creating a MyScale Client\n\n\n```python\nimport logging\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\nlogging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n```\n\n\n```python\nfrom os import environ\nimport clickhouse_connect\n\nenviron[\"OPENAI_API_KEY\"] = \"sk-*\"\n\n# initialize client\nclient = clickhouse_connect.get_client(\n host=\"YOUR_CLUSTER_HOST\",\n port=8443,\n username=\"YOUR_USERNAME\",\n password=\"YOUR_CLUSTER_PASSWORD\",\n)\n```\n\n#### Load documents, build and store the VectorStoreIndex with MyScaleVectorStore\n\nHere we will use a set of Paul Graham essays to provide the text to turn into embeddings, store in a ``MyScaleVectorStore`` and query to find context for our LLM QnA loop.\n\n\n```python\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom llama_index.vector_stores.myscale import MyScaleVectorStore\nfrom IPython.display import Markdown, display\n```\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"../data/paul_graham\").load_data()\nprint(\"Document ID:\", documents[0].doc_id)\nprint(\"Number of Documents: \", len(documents))\n```\n\n Document ID: a5f2737c-ed18-4e5d-ab9a-75955edb816d\n Number of Documents: 1\n\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\nYou can process your files individually using [SimpleDirectoryReader](/examples/data_connectors/simple_directory_reader.ipynb):\n\n\n```python\nloader = 
SimpleDirectoryReader(\"./data/paul_graham/\")\ndocuments = loader.load_data()\nfor file in loader.input_files:\n print(file)\n # Here is where you would do any preprocessing\n```\n\n ../data/paul_graham/paul_graham_essay.txt\n\n\n\n```python\n# initialize with metadata filter and store indexes\nfrom llama_index.core import StorageContext\n\nfor document in documents:\n document.metadata = {\"user_id\": \"123\", \"favorite_color\": \"blue\"}\nvector_store = MyScaleVectorStore(myscale_client=client)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n#### Query Index\n\nNow MyScale vector store supports filter search and hybrid search\n\nYou can learn more about [query_engine](/module_guides/deploying/query_engine/index.md) and [retriever](/module_guides/querying/retriever/index.md).\n\n\n```python\nimport textwrap\n\nfrom llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters\n\n# set Logging to DEBUG for more detailed outputs\nquery_engine = index.as_query_engine(\n filters=MetadataFilters(\n filters=[\n ExactMatchFilter(key=\"user_id\", value=\"123\"),\n ]\n ),\n similarity_top_k=2,\n vector_store_query_mode=\"hybrid\",\n)\nresponse = query_engine.query(\"What did the author learn?\")\nprint(textwrap.fill(str(response), 100))\n```\n\n#### Clear All Indexes\n\n\n```python\nfor document in documents:\n index.delete_ref_doc(document.doc_id)\n```"} {"tokens": 5379, "doc_id": "15e961c0-5b7c-4bf7-b87e-8ba5d415e63f", "name": "Redis Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/RedisIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Redis Vector Store\n\nIn this notebook we are going to show a quick demo of using the RedisVectorStore.\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install -U llama-index llama-index-vector-stores-redis llama-index-embeddings-cohere llama-index-embeddings-openai\n```\n\n\n```python\nimport os\nimport getpass\nimport sys\nimport logging\nimport textwrap\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\n\n# Uncomment to see debug logs\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\n\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom llama_index.vector_stores.redis import RedisVectorStore\n```\n\n### Start Redis\n\nThe easiest way to start Redis is using the [Redis Stack](https://hub.docker.com/r/redis/redis-stack) docker image or\nquickly signing up for a [FREE Redis Cloud](https://redis.com/try-free) instance.\n\nTo follow every step of this tutorial, launch the image as follows:\n\n```bash\ndocker run --name redis-vecdb -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest\n```\n\nThis will also launch the RedisInsight UI on port 8001 which you can view at http://localhost:8001.\n\n\n### Setup OpenAI\nLets first begin by adding the openai api key. 
This will allow us to access openai for embeddings and to use chatgpt.\n\n\n```python\noai_api_key = getpass.getpass(\"OpenAI API Key:\")\nos.environ[\"OPENAI_API_KEY\"] = oai_api_key\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n --2024-04-10 19:35:33-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8003::154, 2606:50c0:8000::154, 2606:50c0:8002::154, ...\n Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8003::154|:443... connected.\n HTTP request sent, awaiting response... 200 OK\n Length: 75042 (73K) [text/plain]\n Saving to: ‘data/paul_graham/paul_graham_essay.txt’\n \n data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.03s \n \n 2024-04-10 19:35:33 (2.15 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]\n \n\n\n### Read in a dataset\nHere we will use a set of Paul Graham essays to provide the text to turn into embeddings, store in a ``RedisVectorStore`` and query to find context for our LLM QnA loop.\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham\").load_data()\nprint(\n \"Document ID:\",\n documents[0].id_,\n \"Document Filename:\",\n documents[0].metadata[\"file_name\"],\n)\n```\n\n Document ID: 7056f7ba-3513-4ef4-9792-2bd28040aaed Document Filename: paul_graham_essay.txt\n\n\n### Initialize the default Redis Vector Store\n\nNow we have our documents prepared, we can initialize the Redis Vector Store with **default** settings. This will allow us to store our vectors in Redis and create an index for real-time search.\n\n\n```python\nfrom llama_index.core import StorageContext\nfrom redis import Redis\n\n# create a Redis client connection\nredis_client = Redis.from_url(\"redis://localhost:6379\")\n\n# create the vector store wrapper\nvector_store = RedisVectorStore(redis_client=redis_client, overwrite=True)\n\n# load storage context\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n\n# build and load index from documents and storage context\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n# index = VectorStoreIndex.from_vector_store(vector_store=vector_store)\n```\n\n 19:39:17 llama_index.vector_stores.redis.base INFO Using default RedisVectorStore schema.\n 19:39:19 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n 19:39:19 llama_index.vector_stores.redis.base INFO Added 22 documents to index llama_index\n\n\n### Query the default vector store\n\nNow that we have our data stored in the index, we can ask questions against the index.\n\nThe index will use the data as the knowledge base for an LLM. The default setting for as_query_engine() utilizes OpenAI embeddings and GPT as the language model. 
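Spelled out, those defaults are roughly equivalent to the following (a sketch; the exact default model names depend on your llama-index version):\n\n\n```python\nfrom llama_index.core import Settings\nfrom llama_index.llms.openai import OpenAI\nfrom llama_index.embeddings.openai import OpenAIEmbedding\n\n# mirrors the implicit defaults - both call out to OpenAI\nSettings.llm = OpenAI(model=\"gpt-3.5-turbo\")\nSettings.embed_model = OpenAIEmbedding(model=\"text-embedding-ada-002\")\n```\n\n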
Therefore, an OpenAI key is required unless you opt for a customized or local language model.\n\nBelow we will test searches against our index and then full RAG with an LLM.\n\n\n```python\nquery_engine = index.as_query_engine()\nretriever = index.as_retriever()\n```\n\n\n```python\nresult_nodes = retriever.retrieve(\"What did the author learn?\")\nfor node in result_nodes:\n print(node)\n```\n\n 19:39:22 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n 19:39:22 llama_index.vector_stores.redis.base INFO Querying index llama_index with filters *\n 19:39:22 llama_index.vector_stores.redis.base INFO Found 2 results for query with id ['llama_index/vector_adb6b7ce-49bb-4961-8506-37082c02a389', 'llama_index/vector_e39be1fe-32d0-456e-b211-4efabd191108']\n Node ID: adb6b7ce-49bb-4961-8506-37082c02a389\n Text: What I Worked On February 2021 Before college the two main\n things I worked on, outside of school, were writing and programming. I\n didn't write essays. I wrote what beginning writers were supposed to\n write then, and probably still are: short stories. My stories were\n awful. They had hardly any plot, just characters with strong feelings,\n which I ...\n Score: 0.820\n \n Node ID: e39be1fe-32d0-456e-b211-4efabd191108\n Text: Except for a few officially anointed thinkers who went to the\n right parties in New York, the only people allowed to publish essays\n were specialists writing about their specialties. There were so many\n essays that had never been written, because there had been no way to\n publish them. Now they could be, and I was going to write them. [12]\n I've wor...\n Score: 0.819\n \n\n\n\n```python\nresponse = query_engine.query(\"What did the author learn?\")\nprint(textwrap.fill(str(response), 100))\n```\n\n 19:39:25 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n 19:39:25 llama_index.vector_stores.redis.base INFO Querying index llama_index with filters *\n 19:39:25 llama_index.vector_stores.redis.base INFO Found 2 results for query with id ['llama_index/vector_adb6b7ce-49bb-4961-8506-37082c02a389', 'llama_index/vector_e39be1fe-32d0-456e-b211-4efabd191108']\n 19:39:27 httpx INFO HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n The author learned that working on things that weren't prestigious often led to valuable discoveries\n and indicated the right kind of motives. Despite the lack of initial prestige, pursuing such work\n could be a sign of genuine potential and appropriate motivations, steering clear of the common\n pitfall of being driven solely by the desire to impress others.\n\n\n\n```python\nresult_nodes = retriever.retrieve(\"What was a hard moment for the author?\")\nfor node in result_nodes:\n print(node)\n```\n\n 19:39:27 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n 19:39:27 llama_index.vector_stores.redis.base INFO Querying index llama_index with filters *\n 19:39:27 llama_index.vector_stores.redis.base INFO Found 2 results for query with id ['llama_index/vector_adb6b7ce-49bb-4961-8506-37082c02a389', 'llama_index/vector_e39be1fe-32d0-456e-b211-4efabd191108']\n Node ID: adb6b7ce-49bb-4961-8506-37082c02a389\n Text: What I Worked On February 2021 Before college the two main\n things I worked on, outside of school, were writing and programming. I\n didn't write essays. I wrote what beginning writers were supposed to\n write then, and probably still are: short stories. My stories were\n awful. 
They had hardly any plot, just characters with strong feelings,\n which I ...\n Score: 0.802\n \n Node ID: e39be1fe-32d0-456e-b211-4efabd191108\n Text: Except for a few officially anointed thinkers who went to the\n right parties in New York, the only people allowed to publish essays\n were specialists writing about their specialties. There were so many\n essays that had never been written, because there had been no way to\n publish them. Now they could be, and I was going to write them. [12]\n I've wor...\n Score: 0.799\n \n\n\n\n```python\nresponse = query_engine.query(\"What was a hard moment for the author?\")\nprint(textwrap.fill(str(response), 100))\n```\n\n 19:39:29 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n 19:39:29 llama_index.vector_stores.redis.base INFO Querying index llama_index with filters *\n 19:39:29 llama_index.vector_stores.redis.base INFO Found 2 results for query with id ['llama_index/vector_adb6b7ce-49bb-4961-8506-37082c02a389', 'llama_index/vector_e39be1fe-32d0-456e-b211-4efabd191108']\n 19:39:31 httpx INFO HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n A hard moment for the author was when one of his programs on the IBM 1401 mainframe didn't\n terminate, leading to a technical error and an uncomfortable situation with the data center manager.\n\n\n\n```python\nindex.vector_store.delete_index()\n```\n\n 19:39:34 llama_index.vector_stores.redis.base INFO Deleting index llama_index\n\n\n### Use a custom index schema\n\nIn most use cases, you need the ability to customize the underlying index configuration\nand specification. For example, this is handy for defining specific metadata filters you wish to enable.\n\nWith Redis, this is as simple as defining an index schema object\n(from file or dict) and passing it through to the vector store client wrapper.\n\nFor this example, we will:\n1. switch the embedding model to [Cohere](https://cohere.com)\n2. add an additional metadata field for the document `updated_at` timestamp\n3. 
index the existing `file_name` metadata field\n\n\n```python\nfrom llama_index.core.settings import Settings\nfrom llama_index.embeddings.cohere import CohereEmbedding\n\n# set up Cohere Key\nco_api_key = getpass.getpass(\"Cohere API Key:\")\nos.environ[\"CO_API_KEY\"] = co_api_key\n\n# set llamaindex to use Cohere embeddings\nSettings.embed_model = CohereEmbedding()\n```\n\n\n```python\nfrom redisvl.schema import IndexSchema\n\n\ncustom_schema = IndexSchema.from_dict(\n {\n # customize basic index specs\n \"index\": {\n \"name\": \"paul_graham\",\n \"prefix\": \"essay\",\n \"key_separator\": \":\",\n },\n # customize fields that are indexed\n \"fields\": [\n # required fields for llamaindex\n {\"type\": \"tag\", \"name\": \"id\"},\n {\"type\": \"tag\", \"name\": \"doc_id\"},\n {\"type\": \"text\", \"name\": \"text\"},\n # custom metadata fields\n {\"type\": \"numeric\", \"name\": \"updated_at\"},\n {\"type\": \"tag\", \"name\": \"file_name\"},\n # custom vector field definition for cohere embeddings\n {\n \"type\": \"vector\",\n \"name\": \"vector\",\n \"attrs\": {\n \"dims\": 1024,\n \"algorithm\": \"hnsw\",\n \"distance_metric\": \"cosine\",\n },\n },\n ],\n }\n)\n```\n\n\n```python\ncustom_schema.index\n```\n\n\n\n\n IndexInfo(name='paul_graham', prefix='essay', key_separator=':', storage_type=)\n\n\n\n\n```python\ncustom_schema.fields\n```\n\n\n\n\n {'id': TagField(name='id', type='tag', path=None, attrs=TagFieldAttributes(sortable=False, separator=',', case_sensitive=False, withsuffixtrie=False)),\n 'doc_id': TagField(name='doc_id', type='tag', path=None, attrs=TagFieldAttributes(sortable=False, separator=',', case_sensitive=False, withsuffixtrie=False)),\n 'text': TextField(name='text', type='text', path=None, attrs=TextFieldAttributes(sortable=False, weight=1, no_stem=False, withsuffixtrie=False, phonetic_matcher=None)),\n 'updated_at': NumericField(name='updated_at', type='numeric', path=None, attrs=NumericFieldAttributes(sortable=False)),\n 'file_name': TagField(name='file_name', type='tag', path=None, attrs=TagFieldAttributes(sortable=False, separator=',', case_sensitive=False, withsuffixtrie=False)),\n 'vector': HNSWVectorField(name='vector', type='vector', path=None, attrs=HNSWVectorFieldAttributes(dims=1024, algorithm=, datatype=, distance_metric=, initial_cap=None, m=16, ef_construction=200, ef_runtime=10, epsilon=0.01))}\n\n\n\nLearn more about [schema and index design](https://redisvl.com) with redis.\n\n\n```python\nfrom datetime import datetime\n\n\ndef date_to_timestamp(date_string: str) -> int:\n date_format: str = \"%Y-%m-%d\"\n return int(datetime.strptime(date_string, date_format).timestamp())\n\n\n# iterate through documents and add new field\nfor document in documents:\n document.metadata[\"updated_at\"] = date_to_timestamp(\n document.metadata[\"last_modified_date\"]\n )\n```\n\n\n```python\nvector_store = RedisVectorStore(\n schema=custom_schema, # provide customized schema\n redis_client=redis_client,\n overwrite=True,\n)\n\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n\n# build and load index from documents and storage context\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n 19:40:05 httpx INFO HTTP Request: POST https://api.cohere.ai/v1/embed \"HTTP/1.1 200 OK\"\n 19:40:06 httpx INFO HTTP Request: POST https://api.cohere.ai/v1/embed \"HTTP/1.1 200 OK\"\n 19:40:06 httpx INFO HTTP Request: POST https://api.cohere.ai/v1/embed \"HTTP/1.1 200 OK\"\n 19:40:06 
llama_index.vector_stores.redis.base INFO Added 22 documents to index paul_graham\n\n\n### Query the vector store and filter on metadata\nNow that we have additional metadata indexed in Redis, let's try some queries with filters.\n\n\n```python\nfrom llama_index.core.vector_stores import (\n MetadataFilters,\n MetadataFilter,\n ExactMatchFilter,\n)\n\nretriever = index.as_retriever(\n similarity_top_k=3,\n filters=MetadataFilters(\n filters=[\n ExactMatchFilter(key=\"file_name\", value=\"paul_graham_essay.txt\"),\n MetadataFilter(\n key=\"updated_at\",\n value=date_to_timestamp(\"2023-01-01\"),\n operator=\">=\",\n ),\n MetadataFilter(\n key=\"text\",\n value=\"learn\",\n operator=\"text_match\",\n ),\n ],\n condition=\"and\",\n ),\n)\n```\n\n\n```python\nresult_nodes = retriever.retrieve(\"What did the author learn?\")\n\nfor node in result_nodes:\n print(node)\n```\n\n 19:40:22 httpx INFO HTTP Request: POST https://api.cohere.ai/v1/embed \"HTTP/1.1 200 OK\"\n\n\n 19:40:22 llama_index.vector_stores.redis.base INFO Querying index paul_graham with filters ((@file_name:{paul_graham_essay\.txt} @updated_at:[1672549200 +inf]) @text:(learn))\n 19:40:22 llama_index.vector_stores.redis.base INFO Found 3 results for query with id ['essay:0df3b734-ecdb-438e-8c90-f21a8c80f552', 'essay:01108c0d-140b-4dcc-b581-c38b7df9251e', 'essay:ced36463-ac36-46b0-b2d7-935c1b38b781']\n Node ID: 0df3b734-ecdb-438e-8c90-f21a8c80f552\n Text: All that seemed left for philosophy were edge cases that people\n in other fields felt could safely be ignored. I couldn't have put\n this into words when I was 18. All I knew at the time was that I kept\n taking philosophy courses and they kept being boring. So I decided to\n switch to AI. AI was in the air in the mid 1980s, but there were two\n things...\n Score: 0.410\n \n Node ID: 01108c0d-140b-4dcc-b581-c38b7df9251e\n Text: It was not, in fact, simply a matter of teaching SHRDLU more\n words. That whole way of doing AI, with explicit data structures\n representing concepts, was not going to work. Its brokenness did, as\n so often happens, generate a lot of opportunities to write papers\n about various band-aids that could be applied to it, but it was never\n going to get us ...\n Score: 0.390\n \n Node ID: ced36463-ac36-46b0-b2d7-935c1b38b781\n Text: Grad students could take classes in any department, and my\n advisor, Tom Cheatham, was very easy going. If he even knew about the\n strange classes I was taking, he never said anything. So now I was in\n a PhD program in computer science, yet planning to be an artist, yet\n also genuinely in love with Lisp hacking and working away at On Lisp.\n In other...\n Score: 0.389\n \n\n\n
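The same building blocks compose with an \"or\" condition as well. A small sketch with hypothetical filter values, against the same index:\n\n```python\n# Hypothetical example: match nodes whose text mentions \"paint\" OR that were\n# updated on/after 2023-01-01. Same retriever API as the \"and\" example above.\nor_retriever = index.as_retriever(\n    similarity_top_k=3,\n    filters=MetadataFilters(\n        filters=[\n            MetadataFilter(key=\"text\", value=\"paint\", operator=\"text_match\"),\n            MetadataFilter(\n                key=\"updated_at\",\n                value=date_to_timestamp(\"2023-01-01\"),\n                operator=\">=\",\n            ),\n        ],\n        condition=\"or\",\n    ),\n)\nor_nodes = or_retriever.retrieve(\"What did the author learn?\")\n```\n\n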
### Restoring from an existing index in Redis\nRestoring from an index requires a Redis connection client (or URL), `overwrite=False`, and the same schema object used before. (This can be offloaded to a YAML file for convenience using `.to_yaml()`.)\n\n\n```python\ncustom_schema.to_yaml(\"paul_graham.yaml\")\n```\n\n\n```python\nvector_store = RedisVectorStore(\n schema=IndexSchema.from_yaml(\"paul_graham.yaml\"),\n redis_client=redis_client,\n)\nindex = VectorStoreIndex.from_vector_store(vector_store=vector_store)\n```\n\n 19:40:28 redisvl.index.index INFO Index already exists, not overwriting.\n\n\n**In the near future** -- we will implement a convenience method to load just using an index name:\n```python\nRedisVectorStore.from_existing_index(index_name=\"paul_graham\", redis_client=redis_client)\n```\n\n### Deleting documents or index completely\n\nSometimes it may be useful to delete documents or the entire index. This can be done using the `delete` and `delete_index` methods.\n\n\n```python\ndocument_id = documents[0].doc_id\ndocument_id\n```\n\n\n\n\n '7056f7ba-3513-4ef4-9792-2bd28040aaed'\n\n\n\n\n```python\nprint(\"Number of documents before deleting\", redis_client.dbsize())\nvector_store.delete(document_id)\nprint(\"Number of documents after deleting\", redis_client.dbsize())\n```\n\n Number of documents before deleting 22\n 19:40:32 llama_index.vector_stores.redis.base INFO Deleted 22 documents from index paul_graham\n Number of documents after deleting 0\n\n\nHowever, the Redis index still exists (with no associated documents) for continuous upsert.\n\n\n```python\nvector_store.index_exists()\n```\n\n\n\n\n True\n\n\n\n\n```python\n# now let's delete the index entirely\n# this will delete all the documents and the index\nvector_store.delete_index()\n```\n\n 19:40:37 llama_index.vector_stores.redis.base INFO Deleting index paul_graham\n\n\n\n```python\nprint(\"Number of documents after deleting\", redis_client.dbsize())\n```\n\n Number of documents after deleting 0\n\n\n### Troubleshooting\n\nIf you get an empty query result, there are a couple of issues to check:\n\n#### Schema\n\nUnlike other vector stores, Redis expects users to explicitly define the schema for the index. This is for a few reasons:\n1. Redis is used for many use cases, including real-time vector search, but also for standard document storage/retrieval, caching, messaging, pub/sub, session management, and more. Not all attributes on records need to be indexed for search. This is partly for efficiency and partly to minimize user foot guns.\n2. All index schemas, when using Redis & LlamaIndex, must include the following fields at a minimum: `id`, `doc_id`, `text`, and `vector`.\n\nInstantiate your `RedisVectorStore` with the default schema (assumes OpenAI embeddings), or with a custom schema (see above).\n\n#### Prefix issues\n\nRedis expects all records to have a key prefix that segments the keyspace into \"partitions\"\nfor potentially different applications, use cases, and clients.\n\nMake sure that the chosen `prefix`, as part of the index schema, is consistent across your code (tied to a specific index).\n\nTo see what prefix your index was created with, you can run `FT.INFO <index name>` in the Redis CLI and look under `index_definition` => `prefixes`.\n\n#### Data vs Index\nRedis treats the records in the dataset and the index as different entities. This gives you more flexibility in performing updates, upserts, and index schema migrations.\n\nIf you have an existing index and want to make sure it's dropped, you can run `FT.DROPINDEX <index name>` in the Redis CLI. 
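\n\nThe same checks work from Python through redis-py's search commands, so you don't have to drop into the CLI. A small sketch against the index above:\n\n```python\n# Inspect the index definition (including key prefixes) straight from Python.\ninfo = redis_client.ft(\"paul_graham\").info()\nprint(info[\"index_definition\"])\n\n# Drop only the index definition -- the equivalent of FT.DROPINDEX without DD.\n# redis_client.ft(\"paul_graham\").dropindex(delete_documents=False)\n```\n\n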
Note that `FT.DROPINDEX` will *not* drop your actual data unless you also pass the `DD` flag.\n\n#### Empty queries when using metadata\n\nIf you add metadata to the index *after* it has already been created and then try to query over that metadata, your queries will come back empty.\n\nRedis indexes fields at index creation time only (similar to how it indexes the prefixes, above), so new metadata fields require recreating the index."} {"tokens": 4703, "doc_id": "5672d8aa-3d43-4ec7-8ec7-748c41e153e7", "name": "Simple Vector Store - Async Index Creation", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/AsyncIndexCreationDemo", "retrieve_doc": true, "source": "llama_index", "content": "# Simple Vector Store - Async Index Creation\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-readers-wikipedia\n```\n\n\n```python\n!pip install llama-index\n```\n\n\n```python\nimport time\n\n# Helps asyncio run within Jupyter\nimport nest_asyncio\n\nnest_asyncio.apply()\n\n# My OpenAI Key\nimport os\n\nos.environ[\"OPENAI_API_KEY\"] = \"[YOUR_API_KEY]\"\n```\n\n\n```python\nfrom llama_index.core import VectorStoreIndex\n\nfrom llama_index.readers.wikipedia import WikipediaReader\n\nloader = WikipediaReader()\ndocuments = loader.load_data(\n pages=[\n \"Berlin\",\n \"Santiago\",\n \"Moscow\",\n \"Tokyo\",\n \"Jakarta\",\n \"Cairo\",\n \"Bogota\",\n \"Shanghai\",\n \"Damascus\",\n ]\n)\n```\n\n\n```python\nlen(documents)\n```\n\n\n\n\n 9\n\n\n\n9 Wikipedia articles downloaded as documents\n\n\n```python\nstart_time = time.perf_counter()\nindex = VectorStoreIndex.from_documents(documents)\nduration = time.perf_counter() - start_time\nprint(duration)\n```\n\n INFO:root:> [build_index_from_documents] Total LLM token usage: 0 tokens\n INFO:root:> [build_index_from_documents] Total embedding token usage: 142295 tokens\n\n\n 7.691995083000052\n\n\nStandard index creation took 7.69 seconds\n\n\n```python\nstart_time = time.perf_counter()\nindex = VectorStoreIndex.from_documents(documents, use_async=True)\nduration = time.perf_counter() - start_time\nprint(duration)\n```\n\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=245 request_id=314b145a07f65fd34e707f633cc1a444 response_code=200\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=432 request_id=bb9e796d0b8f9c2365b68de8a56009ff response_code=200\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=433 request_id=7a94707fe2f8916e9cdd8276a5748207 response_code=200\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=499 request_id=cda679215293c3a13ed57c2eae3dc582 response_code=200\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=527 request_id=5e1c3e74aa3f9f950e4035f81a0f0a15 response_code=200\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=585 request_id=81983fe76eab95f73f82df881ff7b2d9 response_code=200\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=574 request_id=702a182b54a29a33719205f722378c8e response_code=200\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=575 request_id=d1df11775c59a3ba403dda253081f8eb response_code=200\n INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=575 request_id=47929f13469569527505b51958cd8e71 response_code=200\n INFO:root:> [build_index_from_documents] Total LLM token usage: 0 tokens\n INFO:root:> [build_index_from_documents] Total embedding token usage: 142295 tokens\n\n\n 2.3730635830000892\n\n\nAsync index creation took 2.37 seconds
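\n\nThe async build parallelizes the embedding batches, which is where the roughly 3x speedup over the synchronous build comes from. The same pattern applies on the query side; a minimal sketch, assuming an async context (e.g. this notebook, where `nest_asyncio` is applied):\n\n```python\n# Hypothetical async query: `aquery` is the awaitable counterpart of `query`.\nasync def ask(question: str):\n    query_engine = index.as_query_engine()\n    return await query_engine.aquery(question)\n```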
\n\n```python\nquery_engine = index.as_query_engine()\nquery_engine.query(\"What is the etymology of Jakarta?\")\n```\n\n INFO:root:> [query] Total LLM token usage: 4075 tokens\n INFO:root:> [query] Total embedding token usage: 8 tokens\n\n\n\n\n\n Response(response=\"\\n\\nThe name 'Jakarta' is derived from the word Jayakarta (Devanagari: जयकर्त) which is ultimately derived from the Sanskrit जय jaya (victorious), and कृत krta (accomplished, acquired), thus Jayakarta translates as 'victorious deed', 'complete act' or 'complete victory'. It was named for the Muslim troops of Fatahillah which successfully defeated and drove the Portuguese away from the city in 1527. Before it was called Jayakarta, the city was known as 'Sunda Kelapa'. Tomé Pires, a Portuguese apothecary wrote the name of the city on his magnum opus as Jacatra or Jacarta during his journey to East Indies. The city is located in a low-lying area ranging from −2 to 91 m (−7 to 299 ft) with an average elevation of 8 m (26 ft) above sea level with historically extensive swampy areas. Some parts of the city have been constructed on reclaimed tidal flats that occur around the area. Thirteen rivers flow through Jakarta, including the Ciliwung River, Kalibaru, Pesanggra\", source_nodes=[SourceNode(source_text=\"Jakarta (; Indonesian pronunciation: [dʒaˈkarta] (listen)), officially the Special Capital Region of Jakarta (Indonesian: Daerah Khusus Ibukota Jakarta), is the capital and largest city of Indonesia. Lying on the northwest coast of Java, the world's most populous island, Jakarta is the largest city in Southeast Asia and serves as the diplomatic capital of ASEAN.\\nThe city is the economic, cultural, and political centre of Indonesia. It possesses a province-level status and has a population of 10,562,088 as of mid-2021. Although Jakarta extends over only 664.01 km2 (256.38 sq mi) and thus has the smallest area of any Indonesian province, its metropolitan area covers 9,957.08 km2 (3,844.45 sq mi), which includes the satellite cities Bogor, Depok, Tangerang, South Tangerang, and Bekasi, and has an estimated population of 35 million as of 2021, making it the largest urban area in Indonesia and the second-largest in the world (after Tokyo). Jakarta ranks first among the Indonesian provinces in the human development index. Jakarta's business and employment opportunities, along with its ability to offer a potentially higher standard of living compared to other parts of the country, have attracted migrants from across the Indonesian archipelago, making it a melting pot of numerous cultures.\\nJakarta is one of the oldest continuously inhabited cities in Southeast Asia. Established in the fourth century as Sunda Kelapa, the city became an important trading port for the Sunda Kingdom. At one time, it was the de facto capital of the Dutch East Indies, when it was known as Batavia. 
Jakarta was officially a city within West Java until 1960 when its official status was changed to a province with special capital region distinction. As a province, its government consists of five administrative cities and one administrative regency. Jakarta is an alpha world city and is the seat of the ASEAN secretariat. Financial institutions such as the Bank of Indonesia, Indonesia Stock Exchange, and corporate headquarters of numerous Indonesian companies and multinational corporations are located in the city. In 2021, the city's GRP PPP was estimated at US$602.946 billion.\\nJakarta's main challenges include rapid urban growth, ecological breakdown, gridlocked traffic, congestion, and flooding. Jakarta is sinking up to 17 cm (6.7 inches) annually, which coupled with the rising of sea levels, has made the city more prone to flooding. Hence, it is one of the fastest-sinking capitals in the world. In response to these challenges, in August 2019, President Joko Widodo announced that the capital of Indonesia would be moved from Jakarta to the planned city of Nusantara, in the province of East Kalimantan on the island of Borneo.\\n\\n\\n== Name ==\\n\\nJakarta has been home to multiple settlements. Below is the list of names used during its existence:\\n\\nSunda Kelapa (397–1527)\\nJayakarta (1527–1619)\\nBatavia (1619–1942)\\nDjakarta (1942–1972)\\nJakarta (1972–present)The name 'Jakarta' is derived from the word Jayakarta (Devanagari: जयकर्त) which is ultimately derived from the Sanskrit जय jaya (victorious), and कृत krta (accomplished, acquired), thus Jayakarta translates as 'victorious deed', 'complete act' or 'complete victory'. It was named for the Muslim troops of Fatahillah which successfully defeated and drove the Portuguese away from the city in 1527. Before it was called Jayakarta, the city was known as 'Sunda Kelapa'. Tomé Pires, a Portuguese apothecary wrote the name of the city on his magnum opus as Jacatra or Jacarta during his journey to East Indies. \\nIn the 17th century, the city was known as Koningin van het Oosten (Queen of the Orient), a name that was given for the urban beauty of downtown Batavia's canals, mansions and ordered city layout. After expanding to the south in the 19th century, this nickname came to be more associated with the suburbs (e.g. Menteng and the area around Merdeka Square), with their wide lanes, green spaces and villas. During the Japanese occupation, the city was renamed as Jakaruta Tokubetsu-shi (ジャカルタ特別市, Jakarta Special City).\\n\\n\\n== History ==\\n\\n\\n=== Precolonial era ===\\n\\nThe north coast area of western Java including Jakarta was the location of prehistoric Buni culture that flourished from 400 BC to 100 AD. The area in and around modern Jakarta was part of the 4th-century Sundanese kingdom of Tarumanagara, one of the oldest Hindu kingdoms in Indonesia. The area of North Jakarta around Tugu became a populated settlement in the early 5th century. The Tugu inscription (probably written around 417 AD) discovered in Batutumbuh hamlet, Tugu village, Koja, North Jakarta, mentions that King Purnawarman of Tarumanagara undertook hydraulic projects; the irrigation and water drainage project of the Chandrabhaga river and the Gomati river near his capital. Following the decline of Tarumanagara, its territories, including the Jakarta area, became part of the Hindu Kingdom of Sunda. From the 7th to the early 13th century, the port of Sunda was under the Srivijaya maritime empire. 
According to the Chinese source, Chu-fan-chi, written circa 1225, Chou Ju-kua reported in the early 13th century that Srivijaya still ruled Sumatra, the Malay peninsula and western Java (Sunda). The source says the port of Sunda is strategic and thriving, mentioning pepper from Sunda as among the best in quality. The people worked in agriculture, and their houses were built on wooden piles. The harbour area became known as Sunda Kelapa, (Sundanese: ᮞᮥᮔ᮪ᮓ ᮊᮨᮜᮕ) and by the 14th century, it was an important trading port for the Sunda Kingdom.\\nThe first European fleet, four Portuguese ships from Malacca, arrived in 1513 while looking for a route for spices. The Sunda Kingdom made an alliance treaty with the Portuguese by allowing them to build a port in 1522 to defend against the rising power of Demak Sultanate from central Java. In 1527, Fatahillah, a Javanese general from Demak attacked and conquered Sunda Kelapa, driving out the Portuguese. Sunda Kelapa was renamed Jayakarta, and became a fiefdom of the Banten Sultanate, which became a major Southeast Asian trading centre.\\nThrough the relationship with Prince Jayawikarta of the Banten Sultanate, Dutch ships arrived in 1596. In 1602, the British East India Company's first voyage, commanded by Sir James Lancaster, arrived in Aceh and sailed on to Banten where they were allowed to build a trading post. This site became the centre of British trade in the Indonesian archipelago until 1682. Jayawikarta is thought to have made trading connections with the British merchants, rivals of the Dutch, by allowing them to build houses directly across from the Dutch buildings in 1615.\\n\\n\\n=== Colonial era ===\\n\\nWhen relations between Prince Jayawikarta and the Dutch deteriorated, his soldiers attacked the Dutch fortress. His army and the British, however, were defeated by the Dutch, in part owing to the timely arrival of Jan Pieterszoon Coen. The Dutch burned the British fort and forced them to retreat on their ships. The victory consolidated Dutch power, and they renamed the city Batavia in 1619.\\n\\nCommercial opportunities in the city attracted native and especially Chinese and Arab immigrants. This sudden population increase created burdens on the city. Tensions grew as the colonial government tried to restrict Chinese migration through deportations. Following a revolt, 5,000 Chinese were massacred by the Dutch and natives on 9 October 1740, and the following year, Chinese inhabitants were moved to Glodok outside the city walls. At the beginning of the 19th century, around 400 Arabs and Moors lived in Batavia, a number that changed little during the following decades. Among the commodities traded were fabrics, mainly imported cotton, batik and clothing worn by Arab communities.The city began to expand further south as epidemics in 1835 and 1870 forced residents to move away from the port. The Koningsplein, now Merdeka Square was completed in 1818, the housing park of Menteng was started in 1913, and Kebayoran Baru was the last Dutch-built residential area. By 1930, Batavia had more than 500,000 inhabitants, including 37,067 Europeans.On 5 March 1942, the Japanese captured Batavia from Dutch control, and the city was named Jakarta (Jakarta Special City (ジャカルタ特別市, Jakaruta tokubetsu-shi), under the special status that was assigned to the city). After the war, the Dutch name Batavia was internationally recognised until full Indonesian independence on 27 December 1949. 
The city, now renamed Jakarta, was officially proclaimed the national capital of Indonesia.\\n\\n\\n=== Independence era ===\\n\\nAfter World War II ended, Indonesian nationalists declared independence on 17 August 1945, and the government of Jakarta City was changed into the Jakarta National Administration in the following month. During the Indonesian National Revolution, Indonesian Republicans withdrew from Allied-occupied Jakarta and established their capital in Yogyakarta.\\nAfter securing full independence, Jakarta again became the national capital in 1950. With Jakarta selected to host the 1962 Asian Games, Soekarno, envisaging Jakarta as a great international city, instigated large government-funded projects with openly nationalistic and modernist architecture. Projects included a cloverleaf interchange, a major boulevard (Jalan MH Thamrin-Sudirman), monuments such as The National Monument, Hotel Indonesia, a shopping centre, and a new building intended to be the headquarters of CONEFO. In October 1965, Jakarta was the site of an abortive coup attempt in which six top generals were killed, precipitating a violent anti-communist purge which killed at least 500,000 people, including some ethnic Chinese. The event marked the beginning of Suharto's New Order. The first government was led by a mayor until the end of 1960 when the office was changed to that of a governor. The last mayor of Jakarta was Soediro until he was replaced by Soemarno Sosroatmodjo as governor. Based on law No. 5 of 1974 relating to regional governments, Jakarta was confirmed as the capital of Indonesia and one of the country's then 26 provinces.In 1966, Jakarta was declared a 'special capital region' (Daerah Khusus Ibukota), with a status equivalent to that of a province. Lieutenant General Ali Sadikin served as governor from 1966 to 1977; he rehabilitated roads and bridges, encouraged the arts, built hospitals and a large number of schools. He cleared out slum dwellers for new development projects — some for the benefit of the Suharto family,— and attempted to eliminate rickshaws and ban street vendors. He began control of migration to the city to stem overcrowding and poverty. Foreign investment contributed to a real estate boom that transformed the face of Jakarta. The boom ended with the 1997 Asian financial crisis, putting Jakarta at the centre of violence, protest, and political manoeuvring.\\nAfter three decades in power, support for President Suharto began to wane. Tensions peaked when four students were shot dead at Trisakti University by security forces. Four days of riots and violence in 1998 ensued that killed an estimated 1,200, and destroyed or damaged 6,000 buildings, forcing Suharto to resign. Much of the rioting targeted Chinese Indonesians. In the post-Suharto era, Jakarta has remained the focal point of democratic change in Indonesia. Jemaah Islamiah-connected bombings occurred almost annually in the city between 2000 and 2005, with another in 2009. In August 2007, Jakarta held its first-ever election to choose a governor as part of a nationwide decentralisation program that allows direct local elections in several areas. Previously, governors were elected by the city's legislative body.During the Jokowi presidency, the Government adopted a plan to move Indonesia's capital to East Kalimantan.Between 2016 and 2017, a series of terrorist attacks rocked Jakarta with scenes of multiple suicide bombings and gunfire. 
In suspicion to its links, the Islamic State, the perpetrator led by Abu Bakr al-Baghdadi claimed responsibility for the attacks.\\n\\n\\n== Geography ==\\n\\nJakarta covers 699.5 km2 (270.1 sq mi), the smallest among any Indonesian provinces. However, its metropolitan area covers 6,392 km2 (2,468 sq mi), which extends into two of the bordering provinces of West Java and Banten. The Greater Jakarta area includes three bordering regencies (Bekasi Regency, Tangerang Regency and Bogor Regency) and five adjacent cities (Bogor, Depok, Bekasi, Tangerang and South Tangerang).\\n\\nJakarta is situated on the northwest coast of Java, at the mouth of the Ciliwung River on Jakarta Bay, an inlet of the Java Sea. It is strategically located near the Sunda Strait. The northern part of Jakarta is plain land, some areas of which are below sea level, and subject to frequent flooding. The southern parts of the city are hilly. It is one of only two Asian capital cities located in the southern hemisphere (along with East Timor's Dili). Officially, the area of the Jakarta Special District is 662 km2 (256 sq mi) of land area and 6,977 km2 (2,694 sq mi) of sea area. The Thousand Islands, which are administratively a part of Jakarta, are located in Jakarta Bay, north of the city.\\nJakarta lies in a low and flat alluvial plain, ranging from −2 to 91 m (−7 to 299 ft) with an average elevation of 8 m (26 ft) above sea level with historically extensive swampy areas. Some parts of the city have been constructed on reclaimed tidal flats that occur around the area. Thirteen rivers flow through Jakarta. They are Ciliwung River, Kalibaru, Pesanggrahan, Cipinang, Angke River, Maja, Mookervart, Krukut, Buaran, West Tarum, Cakung, Petukangan, Sunter River and Grogol River. They flow from the Puncak highlands to the south of the city, then across the city northwards towards the Java Sea. 
The Ciliwung River divides the city into the western and eastern districts.\nThese rivers, combined with the wet season rains and insufficient", doc_id='eeb6ef32-c857-44e2-b0c5-dff6e29a9cd7', extra_info=None, node_info={'start': 0, 'end': 13970}, similarity=0.8701780916463354)], extra_info=None)"} {"tokens": 7707, "doc_id": "59cdaf61-ecef-4925-95d6-66a712b22cbc", "name": "Azure AI Search", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/AzureAISearchIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "# Azure AI Search\n\n## Basic Example\n\nIn this notebook, we take a Paul Graham essay, split it into chunks, embed it using an Azure OpenAI embedding model, load it into an Azure AI Search index, and then query it.\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n!pip install llama-index\n!pip install wget\n%pip install llama-index-vector-stores-azureaisearch\n%pip install azure-search-documents==11.4.0\n%pip install llama-index-embeddings-azure-openai\n%pip install llama-index-llms-azure-openai\n```\n\n\n```python\nimport logging\nimport sys\nfrom azure.core.credentials import AzureKeyCredential\nfrom azure.search.documents import SearchClient\nfrom azure.search.documents.indexes import SearchIndexClient\nfrom IPython.display import Markdown, display\nfrom llama_index.core import (\n SimpleDirectoryReader,\n StorageContext,\n VectorStoreIndex,\n)\nfrom llama_index.core.settings import Settings\n\nfrom llama_index.llms.azure_openai import AzureOpenAI\nfrom llama_index.embeddings.azure_openai import AzureOpenAIEmbedding\nfrom llama_index.vector_stores.azureaisearch import AzureAISearchVectorStore\nfrom llama_index.vector_stores.azureaisearch import (\n IndexManagement,\n MetadataIndexFieldType,\n)\n```\n\n## Setup Azure OpenAI\n\n\n```python\naoai_api_key = \"YOUR_AZURE_OPENAI_API_KEY\"\naoai_endpoint = \"YOUR_AZURE_OPENAI_ENDPOINT\"\naoai_api_version = \"2023-05-15\"\n\nllm = AzureOpenAI(\n model=\"YOUR_AZURE_OPENAI_COMPLETION_MODEL_NAME\",\n deployment_name=\"YOUR_AZURE_OPENAI_COMPLETION_DEPLOYMENT_NAME\",\n api_key=aoai_api_key,\n azure_endpoint=aoai_endpoint,\n api_version=aoai_api_version,\n)\n\n# You need to deploy your own embedding model as well as your own chat completion model\nembed_model = AzureOpenAIEmbedding(\n model=\"YOUR_AZURE_OPENAI_EMBEDDING_MODEL_NAME\",\n deployment_name=\"YOUR_AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME\",\n api_key=aoai_api_key,\n azure_endpoint=aoai_endpoint,\n api_version=aoai_api_version,\n)\n```\n\n## Setup Azure AI Search\n\n\n```python\nsearch_service_api_key = \"YOUR-AZURE-SEARCH-SERVICE-ADMIN-KEY\"\nsearch_service_endpoint = \"YOUR-AZURE-SEARCH-SERVICE-ENDPOINT\"\nsearch_service_api_version = \"2023-11-01\"\ncredential = AzureKeyCredential(search_service_api_key)\n\n\n# Index name to use\nindex_name = \"llamaindex-vector-demo\"\n\n# Use index client to demonstrate creating an index\nindex_client = SearchIndexClient(\n endpoint=search_service_endpoint,\n credential=credential,\n)\n\n# Use search client to demonstrate using an existing index\nsearch_client = SearchClient(\n endpoint=search_service_endpoint,\n index_name=index_name,\n credential=credential,\n)\n```\n\n## Create Index (if it does not exist)\n\nDemonstrates creating a vector index named \"llamaindex-vector-demo\" if one doesn't exist. 
The index has the following fields:\n| Field Name | OData Type | \n|------------|---------------------------| \n| id | `Edm.String` | \n| chunk | `Edm.String` | \n| embedding | `Collection(Edm.Single)` | \n| metadata | `Edm.String` | \n| doc_id | `Edm.String` | \n| author | `Edm.String` | \n| theme | `Edm.String` | \n| director | `Edm.String` | \n\n\n```python\nmetadata_fields = {\n \"author\": \"author\",\n \"theme\": (\"topic\", MetadataIndexFieldType.STRING),\n \"director\": \"director\",\n}\n\nvector_store = AzureAISearchVectorStore(\n search_or_index_client=index_client,\n filterable_metadata_field_keys=metadata_fields,\n index_name=index_name,\n index_management=IndexManagement.CREATE_IF_NOT_EXISTS,\n id_field_key=\"id\",\n chunk_field_key=\"chunk\",\n embedding_field_key=\"embedding\",\n embedding_dimensionality=1536,\n metadata_string_field_key=\"metadata\",\n doc_id_field_key=\"doc_id\",\n language_analyzer=\"en.lucene\",\n vector_algorithm_type=\"exhaustiveKnn\",\n)\n```\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n### Loading documents\nLoad the documents stored in the `data/paul_graham/` using the SimpleDirectoryReader\n\n\n```python\n# Load documents\ndocuments = SimpleDirectoryReader(\"../data/paul_graham/\").load_data()\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n\nSettings.llm = llm\nSettings.embed_model = embed_model\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n\n```python\n# Query Data\nquery_engine = index.as_query_engine(similarity_top_k=3)\nresponse = query_engine.query(\"What did the author do growing up?\")\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nThe author engaged in writing and programming activities during their formative years. They initially wrote short stories and later transitioned to programming on the IBM 1401 using an early version of Fortran. Subsequently, with the advent of microcomputers, the author began programming on a TRS-80, writing simple games, a rocket flight prediction program, and a word processor.\n\n\n\n```python\nresponse = query_engine.query(\n \"What did the author learn?\",\n)\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nThe author learned that the study of philosophy in college did not live up to their expectations, as they found the courses to be boring and lacking in ultimate truths. 
This led them to switch their focus to AI, which was influenced by a novel featuring an intelligent computer and a PBS documentary showcasing advanced technology.\n\n\n## Use Existing Index\n\n\n```python\nindex_name = \"llamaindex-vector-demo\"\n\nmetadata_fields = {\n \"author\": \"author\",\n \"theme\": (\"topic\", MetadataIndexFieldType.STRING),\n \"director\": \"director\",\n}\nvector_store = AzureAISearchVectorStore(\n search_or_index_client=search_client,\n filterable_metadata_field_keys=metadata_fields,\n index_management=IndexManagement.VALIDATE_INDEX,\n id_field_key=\"id\",\n chunk_field_key=\"chunk\",\n embedding_field_key=\"embedding\",\n embedding_dimensionality=1536,\n metadata_string_field_key=\"metadata\",\n doc_id_field_key=\"doc_id\",\n)\n```\n\n\n```python\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex.from_documents(\n [],\n storage_context=storage_context,\n)\n```\n\n\n```python\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What was a hard moment for the author?\")\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nThe author faced a challenging moment when he couldn't figure out what to do with the early computer he had access to in 9th grade. This was due to the limited options for input and the lack of knowledge in math to do anything interesting with the available resources.\n\n\n\n```python\nresponse = query_engine.query(\"Who is the author?\")\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nPaul Graham\n\n\n\n```python\nimport time\n\nquery_engine = index.as_query_engine(streaming=True)\nresponse = query_engine.query(\"What happened at interleaf?\")\n\nstart_time = time.time()\n\ntoken_count = 0\nfor token in response.response_gen:\n print(token, end=\"\")\n token_count += 1\n\ntime_elapsed = time.time() - start_time\ntokens_per_second = token_count / time_elapsed\n\nprint(f\"\\n\\nStreamed output at {tokens_per_second} tokens/s\")\n```\n\n The author worked at Interleaf, where they learned several lessons, including the importance of product-focused leadership in technology companies, the drawbacks of code being edited by too many people, the limitations of conventional office hours for optimal hacking, and the risks associated with bureaucratic customers. 
Additionally, the author discovered the concept that the low end tends to dominate the high end, and that being the \"entry level\" option can be advantageous.\n \n Streamed output at 99.40073103089465 tokens/s\n\n\n## Adding a document to existing index\n\n\n```python\nresponse = query_engine.query(\"What colour is the sky?\")\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nBlue\n\n\n\n```python\nfrom llama_index.core import Document\n\nindex.insert_nodes([Document(text=\"The sky is indigo today\")])\n```\n\n\n```python\nresponse = query_engine.query(\"What colour is the sky?\")\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nThe sky is indigo today.\n\n\n## Filtering\n\n\n```python\nfrom llama_index.core.schema import TextNode\n\nnodes = [\n TextNode(\n text=\"The Shawshank Redemption\",\n metadata={\n \"author\": \"Stephen King\",\n \"theme\": \"Friendship\",\n },\n ),\n TextNode(\n text=\"The Godfather\",\n metadata={\n \"director\": \"Francis Ford Coppola\",\n \"theme\": \"Mafia\",\n },\n ),\n TextNode(\n text=\"Inception\",\n metadata={\n \"director\": \"Christopher Nolan\",\n },\n ),\n]\n```\n\n\n```python\nindex.insert_nodes(nodes)\n```\n\n\n```python\nfrom llama_index.core.vector_stores.types import (\n MetadataFilters,\n ExactMatchFilter,\n)\n\n\nfilters = MetadataFilters(\n filters=[ExactMatchFilter(key=\"theme\", value=\"Mafia\")]\n)\n\nretriever = index.as_retriever(filters=filters)\nretriever.retrieve(\"What is inception about?\")\n```\n\n\n\n\n [NodeWithScore(node=TextNode(id_='049f00de-13be-4af3-ab56-8c16352fe799', embedding=None, metadata={'director': 'Francis Ford Coppola', 'theme': 'Mafia'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='ad2a08d4364262546db9711b915348d43e0ccc41bd8c3c41775e133624e1fa1b', text='The Godfather', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.8120511)]\n\n\n\n## Query Mode\nFour query modes are supported: DEFAULT (vector search), SPARSE, HYBRID, and SEMANTIC_HYBRID.\n\n### Perform a Vector Search\n\n\n```python\nfrom llama_index.core.vector_stores.types import VectorStoreQueryMode\n\ndefault_retriever = index.as_retriever(\n vector_store_query_mode=VectorStoreQueryMode.DEFAULT\n)\nresponse = default_retriever.retrieve(\"What is inception about?\")\n\n# Loop through each NodeWithScore in the response\nfor node_with_score in response:\n node = node_with_score.node # The TextNode object\n score = node_with_score.score # The similarity score\n chunk_id = node.id_ # The chunk ID\n\n # Extract the relevant metadata from the node\n file_name = node.metadata.get(\"file_name\", \"Unknown\")\n file_path = node.metadata.get(\"file_path\", \"Unknown\")\n\n # Extract the text content from the node\n text_content = node.text if node.text else \"No content available\"\n\n # Print the results in a user-friendly format\n print(f\"Score: {score}\")\n print(f\"File Name: {file_name}\")\n print(f\"Id: {chunk_id}\")\n print(\"\\nExtracted Content:\")\n print(text_content)\n print(\"\\n\" + \"=\" * 40 + \" End of Result \" + \"=\" * 40 + \"\\n\")\n```\n\n Score: 0.8748552\n File Name: Unknown\n Id: bae0df75-ff37-4725-b659-b9fd8bf2ef3c\n \n Extracted Content:\n Inception\n \n ======================================== End of Result ========================================\n \n Score: 0.8155207\n File Name: paul_graham_essay.txt\n Id: ae5aee85-a083-4141-bf75-bbb872f53760\n \n Extracted Content:\n It's not that 
unprestigious types of work are good per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it's a sign both that there's something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren't prestigious doesn't guarantee you're on the right track, it at least guarantees you're not on the most common type of wrong one.\n \n Over the next several years I wrote lots of essays about all kinds of different topics. O'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.\n \n One night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn't know but would probably like. One of the guests was someone I didn't know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out.\n \n Jessica was in charge of marketing at a Boston investment bank. This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And how colorful their stories were. So she decided to compile a book of interviews with startup founders.\n \n When the bank had financial problems and she had to fire half her staff, she started looking for a new job. In early 2005 she interviewed for a marketing job at a Boston VC firm. It took them weeks to make up their minds, and during this time I started telling her about all the things that needed to be fixed about venture capital. They should make a larger number of smaller investments instead of a handful of giant ones, they should be funding younger, more technical founders instead of MBAs, they should let the founders remain as CEO, and so on.\n \n One of my tricks for writing essays had always been to give talks. The prospect of having to stand up in front of a group of people and tell them something that won't waste their time is a great spur to the imagination. When the Harvard Computer Society, the undergrad computer club, asked me to give a talk, I decided I would tell them how to start a startup. Maybe they'd be able to avoid the worst of the mistakes we'd made.\n \n So I gave this talk, in the course of which I told them that the best sources of seed funding were successful startup founders, because then they'd be sources of advice too. Whereupon it seemed they were all looking expectantly at me. Horrified at the prospect of having my inbox flooded by business plans (if I'd only known), I blurted out \"But not me!\" and went on with the talk. But afterward it occurred to me that I should really stop procrastinating about angel investing. I'd been meaning to since Yahoo bought us, and now it was 7 years later and I still hadn't done one angel investment.\n \n Meanwhile I had been scheming with Robert and Trevor about projects we could work on together. 
I missed working with them, and it seemed like there had to be something we could collaborate on.\n \n As Jessica and I were walking home from dinner on March 11, at the corner of Garden and Walker streets, these three threads converged. Screw the VCs who were taking so long to make up their minds. We'd start our own investment firm and actually implement the ideas we'd been talking about. I'd fund it, and Jessica could quit her job and work for it, and we'd get Robert and Trevor as partners too. [13]\n \n Once again, ignorance worked in our favor. We had no idea how to be angel investors, and in Boston in 2005 there were no Ron Conways to learn from. So we just made what seemed like the obvious choices, and some of the things we did turned out to be novel.\n \n There are multiple components to Y Combinator, and we didn't figure them all out at once. The part we got first was to be an angel firm.\n \n ======================================== End of Result ========================================\n \n\n\n### Perform a Hybrid Search\n\n\n```python\nfrom llama_index.core.vector_stores.types import VectorStoreQueryMode\n\nhybrid_retriever = index.as_retriever(\n vector_store_query_mode=VectorStoreQueryMode.HYBRID\n)\nhybrid_retriever.retrieve(\"What is inception about?\")\n```\n\n\n\n\n [NodeWithScore(node=TextNode(id_='bae0df75-ff37-4725-b659-b9fd8bf2ef3c', embedding=None, metadata={'director': 'Christopher Nolan'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='9792a1fd7d2e1a08f1b1d70a597357bb6b68d69ed5685117eaa37ac9e9a3565e', text='Inception', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.03181818127632141),\n NodeWithScore(node=TextNode(id_='ae5aee85-a083-4141-bf75-bbb872f53760', embedding=None, metadata={'file_path': '..\\\\data\\\\paul_graham\\\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={: RelatedNodeInfo(node_id='627552ee-116a-4132-a7d3-7e7232f75866', node_type=, metadata={'file_path': '..\\\\data\\\\paul_graham\\\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, hash='0a59e1ce8e50a67680a5669164f79e524087270ce183a3971fcd18ac4cad1fa0'), : RelatedNodeInfo(node_id='24a1d375-31e3-492c-ac02-5091e3572e3f', node_type=, metadata={'file_path': '..\\\\data\\\\paul_graham\\\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, hash='51c474a12ac8e9748258b2c7bbe77bb7c8bf35b775ed44f016057a0aa8b0bd76'), : RelatedNodeInfo(node_id='196569e0-2b10-4ba3-8263-a69fb78dd98c', node_type=, metadata={}, hash='192082e7ba84b8c5e2a64bd1d422c6c503189fc3ba325bb3e6e8bdb43db03fbb')}, hash='a3ea638857f1daadf7af967322480f97e1235dac3ee7d72b8024670785df8810', text='It\\'s not that unprestigious types of work are good 
per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it\\'s a sign both that there\\'s something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren\\'t prestigious doesn\\'t guarantee you\\'re on the right track, it at least guarantees you\\'re not on the most common type of wrong one.\\n\\nOver the next several years I wrote lots of essays about all kinds of different topics. O\\'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.\\n\\nOne night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn\\'t know but would probably like. One of the guests was someone I didn\\'t know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out.\\n\\nJessica was in charge of marketing at a Boston investment bank. This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And how colorful their stories were. So she decided to compile a book of interviews with startup founders.\\n\\nWhen the bank had financial problems and she had to fire half her staff, she started looking for a new job. In early 2005 she interviewed for a marketing job at a Boston VC firm. It took them weeks to make up their minds, and during this time I started telling her about all the things that needed to be fixed about venture capital. They should make a larger number of smaller investments instead of a handful of giant ones, they should be funding younger, more technical founders instead of MBAs, they should let the founders remain as CEO, and so on.\\n\\nOne of my tricks for writing essays had always been to give talks. The prospect of having to stand up in front of a group of people and tell them something that won\\'t waste their time is a great spur to the imagination. When the Harvard Computer Society, the undergrad computer club, asked me to give a talk, I decided I would tell them how to start a startup. Maybe they\\'d be able to avoid the worst of the mistakes we\\'d made.\\n\\nSo I gave this talk, in the course of which I told them that the best sources of seed funding were successful startup founders, because then they\\'d be sources of advice too. Whereupon it seemed they were all looking expectantly at me. Horrified at the prospect of having my inbox flooded by business plans (if I\\'d only known), I blurted out \"But not me!\" and went on with the talk. But afterward it occurred to me that I should really stop procrastinating about angel investing. I\\'d been meaning to since Yahoo bought us, and now it was 7 years later and I still hadn\\'t done one angel investment.\\n\\nMeanwhile I had been scheming with Robert and Trevor about projects we could work on together. 
I missed working with them, and it seemed like there had to be something we could collaborate on.\\n\\nAs Jessica and I were walking home from dinner on March 11, at the corner of Garden and Walker streets, these three threads converged. Screw the VCs who were taking so long to make up their minds. We\\'d start our own investment firm and actually implement the ideas we\\'d been talking about. I\\'d fund it, and Jessica could quit her job and work for it, and we\\'d get Robert and Trevor as partners too. [13]\\n\\nOnce again, ignorance worked in our favor. We had no idea how to be angel investors, and in Boston in 2005 there were no Ron Conways to learn from. So we just made what seemed like the obvious choices, and some of the things we did turned out to be novel.\\n\\nThere are multiple components to Y Combinator, and we didn\\'t figure them all out at once. The part we got first was to be an angel firm.', start_char_idx=45670, end_char_idx=50105, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.03009207174181938)]\n\n\n\n### Perform a Hybrid Search with Semantic Reranking\nThis mode incorporates semantic reranking to hybrid search results to improve search relevance. \n\nPlease see this link for further details: https://learn.microsoft.com/azure/search/semantic-search-overview\n\n\n```python\nhybrid_retriever = index.as_retriever(\n vector_store_query_mode=VectorStoreQueryMode.SEMANTIC_HYBRID\n)\nhybrid_retriever.retrieve(\"What is inception about?\")\n```\n\n\n\n\n [NodeWithScore(node=TextNode(id_='bae0df75-ff37-4725-b659-b9fd8bf2ef3c', embedding=None, metadata={'director': 'Christopher Nolan'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='9792a1fd7d2e1a08f1b1d70a597357bb6b68d69ed5685117eaa37ac9e9a3565e', text='Inception', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=2.3949906826019287),\n NodeWithScore(node=TextNode(id_='fc9782a2-c255-4265-a618-3a864abe598d', embedding=None, metadata={'file_path': '..\\\\data\\\\paul_graham\\\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={: RelatedNodeInfo(node_id='627552ee-116a-4132-a7d3-7e7232f75866', node_type=, metadata={'file_path': '..\\\\data\\\\paul_graham\\\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, hash='0a59e1ce8e50a67680a5669164f79e524087270ce183a3971fcd18ac4cad1fa0'), : RelatedNodeInfo(node_id='94d87013-ea3d-4a9c-982a-dde5ff219983', node_type=, metadata={'file_path': '..\\\\data\\\\paul_graham\\\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, hash='f28897170c6b61162069af9ee83dc11e13fa0f6bf6efaa7b3911e6ad9093da84'), : 
RelatedNodeInfo(node_id='dc3852e5-4c1e-484e-9e65-f17084d3f7b4', node_type=, metadata={}, hash='deaee6d5c992dbf757876957aa9112a42d30a636c6c83d81fcfac4aaf2d24dee')}, hash='a3b31e5ec2b5d4a9b3648de310c8a5962c17afdb800ea0e16faa47956607866d', text='And at the same time all involved would adhere outwardly to the conventions of a 19th century atelier. We actually had one of those little stoves, fed with kindling, that you see in 19th century studio paintings, and a nude model sitting as close to it as possible without getting burned. Except hardly anyone else painted her besides me. The rest of the students spent their time chatting or occasionally trying to imitate things they\\'d seen in American art magazines.\\n\\nOur model turned out to live just down the street from me. She made a living from a combination of modelling and making fakes for a local antique dealer. She\\'d copy an obscure old painting out of a book, and then he\\'d take the copy and maltreat it to make it look old. [3]\\n\\nWhile I was a student at the Accademia I started painting still lives in my bedroom at night. These paintings were tiny, because the room was, and because I painted them on leftover scraps of canvas, which was all I could afford at the time. Painting still lives is different from painting people, because the subject, as its name suggests, can\\'t move. People can\\'t sit for more than about 15 minutes at a time, and when they do they don\\'t sit very still. So the traditional m.o. for painting people is to know how to paint a generic person, which you then modify to match the specific person you\\'re painting. Whereas a still life you can, if you want, copy pixel by pixel from what you\\'re seeing. You don\\'t want to stop there, of course, or you get merely photographic accuracy, and what makes a still life interesting is that it\\'s been through a head. You want to emphasize the visual cues that tell you, for example, that the reason the color changes suddenly at a certain point is that it\\'s the edge of an object. By subtly emphasizing such things you can make paintings that are more realistic than photographs not just in some metaphorical sense, but in the strict information-theoretic sense. [4]\\n\\nI liked painting still lives because I was curious about what I was seeing. In everyday life, we aren\\'t consciously aware of much we\\'re seeing. Most visual perception is handled by low-level processes that merely tell your brain \"that\\'s a water droplet\" without telling you details like where the lightest and darkest points are, or \"that\\'s a bush\" without telling you the shape and position of every leaf. This is a feature of brains, not a bug. In everyday life it would be distracting to notice every leaf on every bush. But when you have to paint something, you have to look more closely, and when you do there\\'s a lot to see. You can still be noticing new things after days of trying to paint something people usually take for granted, just as you can after days of trying to write an essay about something people usually take for granted.\\n\\nThis is not the only way to paint. I\\'m not 100% sure it\\'s even a good way to paint. But it seemed a good enough bet to be worth trying.\\n\\nOur teacher, professor Ulivi, was a nice guy. He could see I worked hard, and gave me a good grade, which he wrote down in a sort of passport each student had. 
But the Accademia wasn\\'t teaching me anything except Italian, and my money was running out, so at the end of the first year I went back to the US.\\n\\nI wanted to go back to RISD, but I was now broke and RISD was very expensive, so I decided to get a job for a year and then return to RISD the next fall. I got one at a company called Interleaf, which made software for creating documents. You mean like Microsoft Word? Exactly. That was how I learned that low end software tends to eat high end software. But Interleaf still had a few years to live yet. [5]\\n\\nInterleaf had done something pretty bold. Inspired by Emacs, they\\'d added a scripting language, and even made the scripting language a dialect of Lisp. Now they wanted a Lisp hacker to write things in it. This was the closest thing I\\'ve had to a normal job, and I hereby apologize to my boss and coworkers, because I was a bad employee. Their Lisp was the thinnest icing on a giant C cake, and since I didn\\'t know C and didn\\'t want to learn it, I never understood most of the software. Plus I was terribly irresponsible. This was back when a programming job meant showing up every day during certain working hours.', start_char_idx=14179, end_char_idx=18443, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=1.0986518859863281)]
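\n\nFor completeness: the one mode not demonstrated above, `SPARSE`, runs a keyword-only full-text query with no vector component. A minimal sketch following the same retriever pattern (hypothetical, not part of the original notebook):\n\n```python\n# Keyword-only retrieval; no embedding is computed for the query.\nsparse_retriever = index.as_retriever(\n    vector_store_query_mode=VectorStoreQueryMode.SPARSE\n)\nsparse_retriever.retrieve(\"What is inception about?\")\n```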
https://qdrant.tech/documentation/concepts/filtering/\n\nfilters = Filter(\n should=[\n Filter(\n must=[\n FieldCondition(\n key=\"fruit\",\n match=MatchValue(value=\"apple\"),\n ),\n FieldCondition(\n key=\"city\",\n match=MatchValue(value=\"Tokyo\"),\n ),\n ]\n ),\n Filter(\n must=[\n FieldCondition(\n key=\"fruit\",\n match=MatchValue(value=\"grape\"),\n ),\n FieldCondition(\n key=\"city\",\n match=MatchValue(value=\"Toronto\"),\n ),\n ]\n ),\n ]\n)\n\nretriever = index.as_retriever(vector_store_kwargs={\"qdrant_filters\": filters})\n\nresponse = retriever.retrieve(\"Who makes grapes?\")\nfor node in response:\n print(\"node\", node.score)\n print(\"node\", node.text)\n print(\"node\", node.metadata)\n```"} {"tokens": 1120, "doc_id": "fc2bf4ad-a7eb-471e-8807-ecc7b2f3b871", "name": "Pinecone Vector Store - Hybrid Search", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/PineconeIndexDemo-Hybrid", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Pinecone Vector Store - Hybrid Search\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-pinecone\n```\n\n\n```python\n!pip install llama-index>=0.9.31 pinecone-client>=3.0.0 \"transformers[torch]\"\n```\n\n#### Creating a Pinecone Index\n\n\n```python\nimport logging\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\nlogging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n```\n\n\n```python\nfrom pinecone import Pinecone, ServerlessSpec\n```\n\n\n```python\nimport os\n\nos.environ[\n \"PINECONE_API_KEY\"\n] = #\"\"\nos.environ[\n \"OPENAI_API_KEY\"\n] = \"sk-...\"\n\napi_key = os.environ[\"PINECONE_API_KEY\"]\n\npc = Pinecone(api_key=api_key)\n```\n\n\n```python\n# delete if needed\n# pc.delete_index(\"quickstart\")\n```\n\n\n```python\n# dimensions are for text-embedding-ada-002\n# NOTE: needs dotproduct for hybrid search\n\npc.create_index(\n name=\"quickstart\",\n dimension=1536,\n metric=\"dotproduct\",\n spec=ServerlessSpec(cloud=\"aws\", region=\"us-west-2\"),\n)\n\n# If you need to create a PodBased Pinecone index, you could alternatively do this:\n#\n# from pinecone import Pinecone, PodSpec\n#\n# pc = Pinecone(api_key='xxx')\n#\n# pc.create_index(\n# \t name='my-index',\n# \t dimension=1536,\n# \t metric='cosine',\n# \t spec=PodSpec(\n# \t\t environment='us-east1-gcp',\n# \t\t pod_type='p1.x1',\n# \t\t pods=1\n# \t )\n# )\n#\n```\n\n\n```python\npinecone_index = pc.Index(\"quickstart\")\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n#### Load documents, build the PineconeVectorStore\n\n\n```python\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom llama_index.vector_stores.pinecone import PineconeVectorStore\nfrom IPython.display import Markdown, display\n```\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\n```\n\n\n```python\n# set add_sparse_vector=True to compute sparse vectors during upsert\nfrom llama_index.core import StorageContext\n\nif \"OPENAI_API_KEY\" not in os.environ:\n raise EnvironmentError(f\"Environment variable OPENAI_API_KEY is not set\")\n\nvector_store = PineconeVectorStore(\n pinecone_index=pinecone_index,\n add_sparse_vector=True,\n)\nstorage_context = 
StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n\n\n\n Upserted vectors: 0%| | 0/22 [00:00<?, ?it/s]\n\n\n```python\nquery_engine = index.as_query_engine(vector_store_query_mode=\"hybrid\")\nresponse = query_engine.query(\"What happened at Viaweb?\")\ndisplay(Markdown(f\"<b>{response}</b>\"))\n```\n\n\nAt Viaweb, Lisp was used as a programming language. The speaker gave a talk at a Lisp conference about how Lisp was used at Viaweb, and afterward, the talk gained a lot of attention when it was posted online. This led to a realization that publishing essays online could reach a wider audience than traditional print media. The speaker also wrote a collection of essays, which was later published as a book called \"Hackers & Painters.\""} {"tokens": 699, "doc_id": "2688c29a-6436-4251-a97e-d38741b7a804", "name": "Elasticsearch", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/Elasticsearch_demo", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Elasticsearch\n\n> [Elasticsearch](http://www.github.com/elastic/elasticsearch) is a search database that supports full-text and vector search. \n\n\n## Basic Example\n\n\nIn this basic example, we take a Paul Graham essay, split it into chunks, embed it using an open-source embedding model, load it into Elasticsearch, and then query it. For an example using different retrieval strategies see [Elasticsearch Vector Store](https://docs.llamaindex.ai/en/stable/examples/vector_stores/ElasticsearchIndexDemo/).\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install -qU llama-index-vector-stores-elasticsearch llama-index-embeddings-huggingface llama-index\n```\n\n\n```python\n# import\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom llama_index.vector_stores.elasticsearch import ElasticsearchStore\nfrom llama_index.core import StorageContext\n```\n\n\n```python\n# set up OpenAI\nimport os\nimport getpass\n\nos.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget -nv 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n 2024-05-13 15:10:43 URL:https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt [75042/75042] -> \"data/paul_graham/paul_graham_essay.txt\" [1]\n\n\n\n```python\nfrom llama_index.embeddings.huggingface import HuggingFaceEmbedding\nfrom llama_index.core import Settings\n\n# define embedding function\nSettings.embed_model = HuggingFaceEmbedding(\n model_name=\"BAAI/bge-small-en-v1.5\"\n)\n```\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\n\n# define index\nvector_store = ElasticsearchStore(\n es_url=\"http://localhost:9200\", # see Elasticsearch Vector Store for more authentication options\n index_name=\"paul_graham_essay\",\n)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n\n```python\n# Query Data\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What did the author do growing up?\")\nprint(response)\n```\n\n The author worked on writing and programming 
outside of school. They wrote short stories and tried writing programs on an IBM 1401 computer. They also built a microcomputer kit and started programming on it, writing simple games and a word processor."} {"tokens": 2090, "doc_id": "41b7c5e0-53b5-40ec-bc06-3fb09db6e847", "name": "Firestore Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/FirestoreVectorStore", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Firestore Vector Store\n\n# Google Firestore (Native Mode)\n\n> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore's LlamaIndex integrations.\n\nThis notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to store vectors and query them using the `FirestoreVectorStore` class.\n\n## Before You Begin\n\nTo run this notebook, you will need to do the following:\n\n* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n* [Enable the Firestore API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com)\n* [Create a Firestore database](https://cloud.google.com/firestore/docs/manage-databases)\n\nAfter you have confirmed access to the database in the runtime environment of this notebook, fill in the values below and run the cell before running the example scripts.\n\n## Library Installation\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙. For this notebook, we will also install `llama-index-embeddings-huggingface` to embed documents with a local Hugging Face model.\n\n\n```python\n%pip install --quiet llama-index\n%pip install --quiet llama-index-vector-stores-firestore llama-index-embeddings-huggingface\n```\n\n### ☁ Set Your Google Cloud Project\nSet your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n\nIf you don't know your project ID, try the following:\n\n* Run `gcloud config list`.\n* Run `gcloud projects list`.\n* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).\n\n\n```python\n# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n\nPROJECT_ID = \"YOUR_PROJECT_ID\" # @param {type:\"string\"}\n\n# Set the project id\n!gcloud config set project {PROJECT_ID}\n```\n\n### 🔐 Authentication\n\nAuthenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n\n- If you are using Colab to run this notebook, use the cell below and continue.\n- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).\n\n\n```python\nfrom google.colab import auth\n\nauth.authenticate_user()\n```\n\n# Basic Usage\n\n### Initialize FirestoreVectorStore\n\n`FirestoreVectorStore` allows you to load data into Firestore and query it.\n\n\n```python\n# @markdown Please specify a source for demo purposes.\nCOLLECTION_NAME = \"test_collection\"\n```\n\n\n```python\nfrom llama_index.core import SimpleDirectoryReader\n\n# Load documents and build index\ndocuments = SimpleDirectoryReader(\n \"../../examples/data/paul_graham\"\n).load_data()\n```\n\n\n```python\nfrom llama_index.embeddings.huggingface import HuggingFaceEmbedding\nfrom llama_index.core import Settings\n\n# Set the embedding model; this is a local model\nembed_model = 
HuggingFaceEmbedding(model_name=\"BAAI/bge-small-en-v1.5\")\n```\n\n\n```python\nfrom llama_index.core import VectorStoreIndex\nfrom llama_index.core import StorageContext, ServiceContext\n\nfrom llama_index.vector_stores.firestore import FirestoreVectorStore\n\n# Create a Firestore vector store\nstore = FirestoreVectorStore(collection_name=COLLECTION_NAME)\n\nstorage_context = StorageContext.from_defaults(vector_store=store)\nservice_context = ServiceContext.from_defaults(\n llm=None, embed_model=embed_model\n)\n\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context, service_context=service_context\n)\n```\n\n /var/folders/mh/cqn7wzgs3j79rbg243_gfcx80000gn/T/ipykernel_29666/1668628626.py:10: DeprecationWarning: Call to deprecated class method from_defaults. (ServiceContext is deprecated, please use `llama_index.settings.Settings` instead.) -- Deprecated since version 0.10.0.\n service_context = ServiceContext.from_defaults(llm=None, embed_model=embed_model)\n\n\n LLM is explicitly disabled. Using MockLLM.\n\n\n### Perform search\n\nYou can use the `FirestoreVectorStore` to perform similarity searches on the vectors you have stored. This is useful for finding similar documents or text.\n\n\n```python\nquery_engine = index.as_query_engine()\nres = query_engine.query(\"What did the author do growing up?\")\nprint(str(res.source_nodes[0].text))\n```\n\n None\n What I Worked On\n \n February 2021\n \n Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.\n \n The first programs I tried writing were on the IBM 1401 that our school district used for what was then called \"data processing.\" This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.\n \n The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in the card reader and press a button to load the program into memory and run it. The result would ordinarily be to print something on the spectacularly loud printer.\n \n I was puzzled by the 1401. I couldn't figure out what to do with it. And in retrospect there's not much I could have done with it. The only form of input to programs was data stored on punched cards, and I didn't have any data stored on punched cards. The only other option was to do things that didn't rely on any input, like calculate approximations of pi, but I didn't know enough math to do anything interesting of that type. So I'm not surprised I can't remember any programs I wrote, because they can't have done much. My clearest memory is of the moment I learned it was possible for programs not to terminate, when one of mine didn't. On a machine without time-sharing, this was a social as well as a technical error, as the data center manager's expression made clear.\n \n With microcomputers, everything changed. 
Now you could have a computer sitting right in front of you, on a desk, that could respond to your keystrokes as it was running instead of just churning through a stack of punch cards and then stopping. [1]\n \n The first of my friends to get a microcomputer built it himself. It was sold as a kit by Heathkit. I remember vividly how impressed and envious I felt watching him sitting in front of it, typing programs right into the computer.\n \n Computers were expensive in those days and it took me years of nagging before I convinced my father to buy one, a TRS-80, in about 1980. The gold standard then was the Apple II, but a TRS-80 was good enough. This was when I really started programming. I wrote simple games, a program to predict how high my model rockets would fly, and a word processor that my father used to write at least one book. There was only room in memory for about 2 pages of text, so he'd write 2 pages at a time and then print them out, but it was a lot better than a typewriter.\n \n Though I liked programming, I didn't plan to study it in college. In college I was going to study philosophy, which sounded much more powerful. It seemed, to my naive high school self, to be the study of the ultimate truths, compared to which the things studied in other fields would be mere domain knowledge. What I discovered when I got to college was that the other fields took up so much of the space of ideas that there wasn't much left for these supposed ultimate truths. All that seemed left for philosophy were edge cases that people in other fields felt could safely be ignored.\n \n I couldn't have put this into words when I was 18. All I knew at the time was that I kept taking philosophy courses and they kept being boring. So I decided to switch to AI.\n \n AI was in the air in the mid 1980s, but there were two things especially that made me want to work on it: a novel by Heinlein called The Moon is a Harsh Mistress, which featured an intelligent computer called Mike, and a PBS documentary that showed Terry Winograd using SHRDLU. 
I haven't tried rereading The Moon is a Harsh Mistress, so I don't know how well it has aged, but when I read it I was drawn entirely into its world.\n\n\nYou can apply pre-filtering to the search results by specifying a `filters` argument.\n\n\n```python\nfrom llama_index.core.vector_stores.types import (\n MetadataFilters,\n ExactMatchFilter,\n MetadataFilter,\n)\n\nfilters = MetadataFilters(\n filters=[MetadataFilter(key=\"author\", value=\"Paul Graham\")]\n)\nquery_engine = index.as_query_engine(filters=filters)\nres = query_engine.query(\"What did the author do growing up?\")\nprint(str(res.source_nodes[0].text))\n```"} {"tokens": 11494, "doc_id": "c1f3da6b-ebd5-4d21-b8cb-9912b3d62b55", "name": "set up Fireworks.ai Key", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/MongoDBAtlasVectorSearchRAGFireworks", "retrieve_doc": false, "source": "llama_index", "content": "```python\n!pip install -q llama-index llama-index-vector-stores-mongodb llama-index-embeddings-fireworks==0.1.2 llama-index-llms-fireworks\n!pip install -q pymongo datasets pandas\n```\n\n\n```python\n# set up Fireworks.ai Key\nimport os\nimport getpass\n\nfw_api_key = getpass.getpass(\"Fireworks API Key:\")\nos.environ[\"FIREWORKS_API_KEY\"] = fw_api_key\n```\n\n\n```python\nfrom datasets import load_dataset\nimport pandas as pd\n\n# https://huggingface.co./datasets/AIatMongoDB/whatscooking.restaurants\ndataset = load_dataset(\"AIatMongoDB/whatscooking.restaurants\")\n\n# Convert the dataset to a pandas dataframe\ndataset_df = pd.DataFrame(dataset[\"train\"])\n\ndataset_df.head(5)\n```\n\n /mnt/disks/data/llama_index/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n from .autonotebook import tqdm as notebook_tqdm\n\n\n\n\n\n
|   | restaurant_id | attributes | cuisine | DogsAllowed | embedding | OutdoorSeating | borough | address | _id | name | menu | TakeOut | location | PriceRange | HappyHour | review_count | sponsored | stars |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 40366661 | {'Alcohol': ''none'', 'Ambience': '{'romantic'... | Tex-Mex | None | [-0.14520384, 0.018315623, -0.018330636, -0.10... | True | Manhattan | {'building': '627', 'coord': [-73.975980999999... | {'$oid': '6095a34a7c34416a90d3206b'} | Baby Bo'S Burritos | None | True | {'coordinates': [-73.97598099999999, 40.745132... | 1.0 | None | 10 | NaN | 2.5 |
| 1 | 40367442 | {'Alcohol': ''beer_and_wine'', 'Ambience': '{'... | American | True | [-0.11977468, -0.02157107, 0.0038846824, -0.09... | True | Staten Island | {'building': '17', 'coord': [-74.1350211, 40.6... | {'$oid': '6095a34a7c34416a90d3209e'} | Buddy'S Wonder Bar | [Grilled cheese sandwich, Baked potato, Lasagn... | True | {'coordinates': [-74.1350211, 40.6369042], 'ty... | 2.0 | None | 62 | NaN | 3.5 |
| 2 | 40364610 | {'Alcohol': ''none'', 'Ambience': '{'touristy'... | American | None | [-0.1004329, -0.014882699, -0.033005167, -0.09... | True | Staten Island | {'building': '37', 'coord': [-74.138263, 40.54... | {'$oid': '6095a34a7c34416a90d31ff6'} | Great Kills Yacht Club | [Mozzarella sticks, Mushroom swiss burger, Spi... | True | {'coordinates': [-74.138263, 40.546681], 'type... | 1.0 | None | 72 | NaN | 4.0 |
| 3 | 40365288 | {'Alcohol': None, 'Ambience': '{'touristy': Fa... | American | None | [-0.11735515, -0.0397448, -0.0072645755, -0.09... | True | Manhattan | {'building': '842', 'coord': [-73.970637000000... | {'$oid': '6095a34a7c34416a90d32017'} | Keats Restaurant | [French fries, Chicken pot pie, Mac & cheese, ... | True | {'coordinates': [-73.97063700000001, 40.751495... | 2.0 | True | 149 | NaN | 4.0 |
| 4 | 40363151 | {'Alcohol': None, 'Ambience': None, 'BYOB': No... | Bakery | None | [-0.096541286, -0.009661355, 0.04402167, -0.12... | True | Manhattan | {'building': '120', 'coord': [-73.9998042, 40.... | {'$oid': '6095a34a7c34416a90d31fbd'} | Olive'S | [doughnuts, chocolate chip cookies, chocolate ... | True | {'coordinates': [-73.9998042, 40.7251256], 'ty... | 1.0 | None | 7 | NaN | 5.0 |
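
Several of the columns previewed above (`attributes`, `address`, `menu`, `location`, `_id`) hold nested dicts or lists, while `Document` metadata values must be primitives (str, int, float, or None). The conversion cell below therefore JSON-encodes those fields one by one; a generic helper expressing the same idea might look like the sketch here (`jsonify_non_primitives` is a hypothetical name, not part of the original notebook):

```python
import json


def jsonify_non_primitives(record: dict) -> dict:
    """Return a copy of `record` where every non-primitive value is
    JSON-encoded, since Document metadata only accepts str/int/float/None."""
    primitives = (str, int, float, type(None))
    return {
        key: value if isinstance(value, primitives) else json.dumps(value)
        for key, value in record.items()
    }


# e.g. jsonify_non_primitives({"menu": ["burrito"], "stars": 2.5})
# -> {"menu": '["burrito"]', "stars": 2.5}
```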
\n\n\n\n\n```python\nfrom llama_index.core.settings import Settings\nfrom llama_index.llms.fireworks import Fireworks\nfrom llama_index.embeddings.fireworks import FireworksEmbedding\n\nembed_model = FireworksEmbedding(\n embed_batch_size=512,\n model_name=\"nomic-ai/nomic-embed-text-v1.5\",\n api_key=fw_api_key,\n)\nllm = Fireworks(\n temperature=0,\n model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n api_key=fw_api_key,\n)\n\nSettings.llm = llm\nSettings.embed_model = embed_model\n```\n\n\n```python\nimport json\nfrom llama_index.core import Document\nfrom llama_index.core.schema import MetadataMode\n\n# Convert the DataFrame to a JSON string representation\ndocuments_json = dataset_df.to_json(orient=\"records\")\n# Load the JSON string into a Python list of dictionaries\ndocuments_list = json.loads(documents_json)\n\nllama_documents = []\n\nfor document in documents_list:\n # Value for metadata must be one of (str, int, float, None)\n document[\"name\"] = json.dumps(document[\"name\"])\n document[\"cuisine\"] = json.dumps(document[\"cuisine\"])\n document[\"attributes\"] = json.dumps(document[\"attributes\"])\n document[\"menu\"] = json.dumps(document[\"menu\"])\n document[\"borough\"] = json.dumps(document[\"borough\"])\n document[\"address\"] = json.dumps(document[\"address\"])\n document[\"PriceRange\"] = json.dumps(document[\"PriceRange\"])\n document[\"HappyHour\"] = json.dumps(document[\"HappyHour\"])\n document[\"review_count\"] = json.dumps(document[\"review_count\"])\n document[\"TakeOut\"] = json.dumps(document[\"TakeOut\"])\n # these two fields are not relevant to the question we want to answer,\n # so I will skip it for now\n del document[\"embedding\"]\n del document[\"location\"]\n\n # Create a Document object with the text and excluded metadata for llm and embedding models\n llama_document = Document(\n text=json.dumps(document),\n metadata=document,\n metadata_template=\"{key}=>{value}\",\n text_template=\"Metadata: {metadata_str}\\n-----\\nContent: {content}\",\n )\n\n llama_documents.append(llama_document)\n\n# Observing an example of what the LLM and Embedding model receive as input\nprint(\n \"\\nThe LLM sees this: \\n\",\n llama_documents[0].get_content(metadata_mode=MetadataMode.LLM),\n)\nprint(\n \"\\nThe Embedding model sees this: \\n\",\n llama_documents[0].get_content(metadata_mode=MetadataMode.EMBED),\n)\n```\n\n \n The LLM sees this: \n Metadata: restaurant_id=>40366661\n attributes=>{\"Alcohol\": \"'none'\", \"Ambience\": \"{'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}\", \"BYOB\": null, \"BestNights\": null, \"BikeParking\": null, \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": null, \"BusinessParking\": \"None\", \"Caters\": \"True\", \"DriveThru\": null, \"GoodForDancing\": null, \"GoodForKids\": \"True\", \"GoodForMeal\": null, \"HasTV\": \"True\", \"Music\": null, \"NoiseLevel\": \"'average'\", \"RestaurantsAttire\": \"'casual'\", \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"True\", \"RestaurantsTableService\": \"False\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"'free'\"}\n cuisine=>\"Tex-Mex\"\n DogsAllowed=>None\n OutdoorSeating=>True\n borough=>\"Manhattan\"\n address=>{\"building\": \"627\", \"coord\": [-73.975981, 40.745132], \"street\": \"2 Avenue\", \"zipcode\": \"10016\"}\n _id=>{'$oid': '6095a34a7c34416a90d3206b'}\n name=>\"Baby Bo'S 
Burritos\"\n menu=>null\n TakeOut=>true\n PriceRange=>1.0\n HappyHour=>null\n review_count=>10\n sponsored=>None\n stars=>2.5\n -----\n Content: {\"restaurant_id\": \"40366661\", \"attributes\": \"{\\\"Alcohol\\\": \\\"'none'\\\", \\\"Ambience\\\": \\\"{'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}\\\", \\\"BYOB\\\": null, \\\"BestNights\\\": null, \\\"BikeParking\\\": null, \\\"BusinessAcceptsBitcoin\\\": null, \\\"BusinessAcceptsCreditCards\\\": null, \\\"BusinessParking\\\": \\\"None\\\", \\\"Caters\\\": \\\"True\\\", \\\"DriveThru\\\": null, \\\"GoodForDancing\\\": null, \\\"GoodForKids\\\": \\\"True\\\", \\\"GoodForMeal\\\": null, \\\"HasTV\\\": \\\"True\\\", \\\"Music\\\": null, \\\"NoiseLevel\\\": \\\"'average'\\\", \\\"RestaurantsAttire\\\": \\\"'casual'\\\", \\\"RestaurantsDelivery\\\": \\\"True\\\", \\\"RestaurantsGoodForGroups\\\": \\\"True\\\", \\\"RestaurantsReservations\\\": \\\"True\\\", \\\"RestaurantsTableService\\\": \\\"False\\\", \\\"WheelchairAccessible\\\": \\\"True\\\", \\\"WiFi\\\": \\\"'free'\\\"}\", \"cuisine\": \"\\\"Tex-Mex\\\"\", \"DogsAllowed\": null, \"OutdoorSeating\": true, \"borough\": \"\\\"Manhattan\\\"\", \"address\": \"{\\\"building\\\": \\\"627\\\", \\\"coord\\\": [-73.975981, 40.745132], \\\"street\\\": \\\"2 Avenue\\\", \\\"zipcode\\\": \\\"10016\\\"}\", \"_id\": {\"$oid\": \"6095a34a7c34416a90d3206b\"}, \"name\": \"\\\"Baby Bo'S Burritos\\\"\", \"menu\": \"null\", \"TakeOut\": \"true\", \"PriceRange\": \"1.0\", \"HappyHour\": \"null\", \"review_count\": \"10\", \"sponsored\": null, \"stars\": 2.5}\n \n The Embedding model sees this: \n Metadata: restaurant_id=>40366661\n attributes=>{\"Alcohol\": \"'none'\", \"Ambience\": \"{'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}\", \"BYOB\": null, \"BestNights\": null, \"BikeParking\": null, \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": null, \"BusinessParking\": \"None\", \"Caters\": \"True\", \"DriveThru\": null, \"GoodForDancing\": null, \"GoodForKids\": \"True\", \"GoodForMeal\": null, \"HasTV\": \"True\", \"Music\": null, \"NoiseLevel\": \"'average'\", \"RestaurantsAttire\": \"'casual'\", \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"True\", \"RestaurantsTableService\": \"False\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"'free'\"}\n cuisine=>\"Tex-Mex\"\n DogsAllowed=>None\n OutdoorSeating=>True\n borough=>\"Manhattan\"\n address=>{\"building\": \"627\", \"coord\": [-73.975981, 40.745132], \"street\": \"2 Avenue\", \"zipcode\": \"10016\"}\n _id=>{'$oid': '6095a34a7c34416a90d3206b'}\n name=>\"Baby Bo'S Burritos\"\n menu=>null\n TakeOut=>true\n PriceRange=>1.0\n HappyHour=>null\n review_count=>10\n sponsored=>None\n stars=>2.5\n -----\n Content: {\"restaurant_id\": \"40366661\", \"attributes\": \"{\\\"Alcohol\\\": \\\"'none'\\\", \\\"Ambience\\\": \\\"{'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}\\\", \\\"BYOB\\\": null, \\\"BestNights\\\": null, \\\"BikeParking\\\": null, \\\"BusinessAcceptsBitcoin\\\": null, \\\"BusinessAcceptsCreditCards\\\": null, \\\"BusinessParking\\\": \\\"None\\\", \\\"Caters\\\": \\\"True\\\", \\\"DriveThru\\\": null, \\\"GoodForDancing\\\": null, 
\\\"GoodForKids\\\": \\\"True\\\", \\\"GoodForMeal\\\": null, \\\"HasTV\\\": \\\"True\\\", \\\"Music\\\": null, \\\"NoiseLevel\\\": \\\"'average'\\\", \\\"RestaurantsAttire\\\": \\\"'casual'\\\", \\\"RestaurantsDelivery\\\": \\\"True\\\", \\\"RestaurantsGoodForGroups\\\": \\\"True\\\", \\\"RestaurantsReservations\\\": \\\"True\\\", \\\"RestaurantsTableService\\\": \\\"False\\\", \\\"WheelchairAccessible\\\": \\\"True\\\", \\\"WiFi\\\": \\\"'free'\\\"}\", \"cuisine\": \"\\\"Tex-Mex\\\"\", \"DogsAllowed\": null, \"OutdoorSeating\": true, \"borough\": \"\\\"Manhattan\\\"\", \"address\": \"{\\\"building\\\": \\\"627\\\", \\\"coord\\\": [-73.975981, 40.745132], \\\"street\\\": \\\"2 Avenue\\\", \\\"zipcode\\\": \\\"10016\\\"}\", \"_id\": {\"$oid\": \"6095a34a7c34416a90d3206b\"}, \"name\": \"\\\"Baby Bo'S Burritos\\\"\", \"menu\": \"null\", \"TakeOut\": \"true\", \"PriceRange\": \"1.0\", \"HappyHour\": \"null\", \"review_count\": \"10\", \"sponsored\": null, \"stars\": 2.5}\n\n\n\n```python\nllama_documents[0]\n```\n\n\n\n\n Document(id_='93d3f08d-85f3-494d-a057-19bc834abc29', embedding=None, metadata={'restaurant_id': '40366661', 'attributes': '{\"Alcohol\": \"\\'none\\'\", \"Ambience\": \"{\\'romantic\\': False, \\'intimate\\': False, \\'classy\\': False, \\'hipster\\': False, \\'divey\\': False, \\'touristy\\': False, \\'trendy\\': False, \\'upscale\\': False, \\'casual\\': False}\", \"BYOB\": null, \"BestNights\": null, \"BikeParking\": null, \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": null, \"BusinessParking\": \"None\", \"Caters\": \"True\", \"DriveThru\": null, \"GoodForDancing\": null, \"GoodForKids\": \"True\", \"GoodForMeal\": null, \"HasTV\": \"True\", \"Music\": null, \"NoiseLevel\": \"\\'average\\'\", \"RestaurantsAttire\": \"\\'casual\\'\", \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"True\", \"RestaurantsTableService\": \"False\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"\\'free\\'\"}', 'cuisine': '\"Tex-Mex\"', 'DogsAllowed': None, 'OutdoorSeating': True, 'borough': '\"Manhattan\"', 'address': '{\"building\": \"627\", \"coord\": [-73.975981, 40.745132], \"street\": \"2 Avenue\", \"zipcode\": \"10016\"}', '_id': {'$oid': '6095a34a7c34416a90d3206b'}, 'name': '\"Baby Bo\\'S Burritos\"', 'menu': 'null', 'TakeOut': 'true', 'PriceRange': '1.0', 'HappyHour': 'null', 'review_count': '10', 'sponsored': None, 'stars': 2.5}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='{\"restaurant_id\": \"40366661\", \"attributes\": \"{\\\\\"Alcohol\\\\\": \\\\\"\\'none\\'\\\\\", \\\\\"Ambience\\\\\": \\\\\"{\\'romantic\\': False, \\'intimate\\': False, \\'classy\\': False, \\'hipster\\': False, \\'divey\\': False, \\'touristy\\': False, \\'trendy\\': False, \\'upscale\\': False, \\'casual\\': False}\\\\\", \\\\\"BYOB\\\\\": null, \\\\\"BestNights\\\\\": null, \\\\\"BikeParking\\\\\": null, \\\\\"BusinessAcceptsBitcoin\\\\\": null, \\\\\"BusinessAcceptsCreditCards\\\\\": null, \\\\\"BusinessParking\\\\\": \\\\\"None\\\\\", \\\\\"Caters\\\\\": \\\\\"True\\\\\", \\\\\"DriveThru\\\\\": null, \\\\\"GoodForDancing\\\\\": null, \\\\\"GoodForKids\\\\\": \\\\\"True\\\\\", \\\\\"GoodForMeal\\\\\": null, \\\\\"HasTV\\\\\": \\\\\"True\\\\\", \\\\\"Music\\\\\": null, \\\\\"NoiseLevel\\\\\": \\\\\"\\'average\\'\\\\\", \\\\\"RestaurantsAttire\\\\\": \\\\\"\\'casual\\'\\\\\", \\\\\"RestaurantsDelivery\\\\\": \\\\\"True\\\\\", \\\\\"RestaurantsGoodForGroups\\\\\": \\\\\"True\\\\\", 
\\\\\"RestaurantsReservations\\\\\": \\\\\"True\\\\\", \\\\\"RestaurantsTableService\\\\\": \\\\\"False\\\\\", \\\\\"WheelchairAccessible\\\\\": \\\\\"True\\\\\", \\\\\"WiFi\\\\\": \\\\\"\\'free\\'\\\\\"}\", \"cuisine\": \"\\\\\"Tex-Mex\\\\\"\", \"DogsAllowed\": null, \"OutdoorSeating\": true, \"borough\": \"\\\\\"Manhattan\\\\\"\", \"address\": \"{\\\\\"building\\\\\": \\\\\"627\\\\\", \\\\\"coord\\\\\": [-73.975981, 40.745132], \\\\\"street\\\\\": \\\\\"2 Avenue\\\\\", \\\\\"zipcode\\\\\": \\\\\"10016\\\\\"}\", \"_id\": {\"$oid\": \"6095a34a7c34416a90d3206b\"}, \"name\": \"\\\\\"Baby Bo\\'S Burritos\\\\\"\", \"menu\": \"null\", \"TakeOut\": \"true\", \"PriceRange\": \"1.0\", \"HappyHour\": \"null\", \"review_count\": \"10\", \"sponsored\": null, \"stars\": 2.5}', start_char_idx=None, end_char_idx=None, text_template='Metadata: {metadata_str}\\n-----\\nContent: {content}', metadata_template='{key}=>{value}', metadata_seperator='\\n')\n\n\n\n\n```python\nfrom llama_index.core.node_parser import SentenceSplitter\n\nparser = SentenceSplitter()\nnodes = parser.get_nodes_from_documents(llama_documents)\n# 25k nodes takes about 10 minutes, will trim it down to 2.5k\nnew_nodes = nodes[:2500]\n\n# There are 25k documents, so we need to do batching. Fortunately LlamaIndex provides good batching\n# for embedding models, and we are going to rely on the __call__ method for the model to handle this\nnode_embeddings = embed_model(new_nodes)\n```\n\n\n```python\nfor idx, n in enumerate(new_nodes):\n n.embedding = node_embeddings[idx].embedding\n if \"_id\" in n.metadata:\n del n.metadata[\"_id\"]\n```\n\nEnsure your databse, collection and vector store index is setup on MongoDB Atlas for the collection or the following step won't work appropriately on MongoDB.\n\n\n - For assistance with database cluster setup and obtaining the URI, refer to this [guide](https://www.mongodb.com/docs/guides/atlas/cluster/) for setting up a MongoDB cluster, and this [guide](https://www.mongodb.com/docs/guides/atlas/connection-string/) to get your connection string. \n\n - Once you have successfully created a cluster, create the database and collection within the MongoDB Atlas cluster by clicking “+ Create Database”. The database will be named movies, and the collection will be named movies_records.\n\n - Creating a vector search index within the movies_records collection is essential for efficient document retrieval from MongoDB into our development environment. 
To achieve this, refer to the official [guide](https://www.mongodb.com/docs/atlas/atlas-vector-search/create-index/) on vector search index creation.\n\n\n\n\n```python\nimport pymongo\n\n\ndef get_mongo_client(mongo_uri):\n \"\"\"Establish a connection to MongoDB.\"\"\"\n try:\n client = pymongo.MongoClient(mongo_uri)\n print(\"Connection to MongoDB successful\")\n return client\n except pymongo.errors.ConnectionFailure as e:\n print(f\"Connection failed: {e}\")\n return None\n\n\n# read the MongoDB Atlas connection string\nimport os\nimport getpass\n\nmongo_uri = getpass.getpass(\"MONGO_URI:\")\nif not mongo_uri:\n print(\"MONGO_URI not set\")\n\nmongo_client = get_mongo_client(mongo_uri)\n\nDB_NAME = \"whatscooking\"\nCOLLECTION_NAME = \"restaurants\"\n\ndb = mongo_client[DB_NAME]\ncollection = db[COLLECTION_NAME]\n```\n\n Connection to MongoDB successful\n\n\n\n```python\n# To ensure we are working with a fresh collection\n# delete any existing records in the collection\ncollection.delete_many({})\n```\n\n\n\n\n DeleteResult({'n': 0, 'electionId': ObjectId('7fffffff00000000000001ce'), 'opTime': {'ts': Timestamp(1708970193, 3), 't': 462}, 'ok': 1.0, '$clusterTime': {'clusterTime': Timestamp(1708970193, 3), 'signature': {'hash': b'\\x9a3H8\\xa1\\x1b\\xb6\\xbb\\xa9\\xc3x\\x17\\x1c\\xeb\\xe9\\x03\\xaa\\xf8\\xf17', 'keyId': 7294687148333072386}}, 'operationTime': Timestamp(1708970193, 3)}, acknowledged=True)\n\n\n\n\n```python\nfrom llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch\n\nvector_store = MongoDBAtlasVectorSearch(\n mongo_client,\n db_name=DB_NAME,\n collection_name=COLLECTION_NAME,\n index_name=\"vector_index\",\n)\nvector_store.add(new_nodes)\n```\n\nNow make sure you create the Atlas vector search index with the same name used above (`vector_index`) before querying.\n\n\n```python\nfrom llama_index.core import VectorStoreIndex, StorageContext\n\nindex = VectorStoreIndex.from_vector_store(vector_store)\n```\n\n\n```python\n%pip install -q matplotlib\n```\n\n Note: you may need to restart the kernel to use updated packages.\n\n\n\n```python\nimport pprint\nfrom llama_index.core.response.notebook_utils import display_response\n\nquery_engine = index.as_query_engine()\n\nquery = \"search query: Anything that doesn't have alcohol in it\"\n\nresponse = query_engine.query(query)\ndisplay_response(response)\npprint.pprint(response.source_nodes)\n```\n\n\n**`Final Response:`** Based on the context provided, two restaurant options that don't serve alcohol are:\n\n1. \"Academy Restauraunt\" in Brooklyn, which serves American cuisine and has a variety of dishes such as Mozzarella sticks, Cheeseburger, Baked potato, Breadsticks, Caesar salad, Chicken parmesan, Pigs in a blanket, Chicken soup, Mac & cheese, Mushroom swiss burger, Spaghetti with meatballs, and Mashed potatoes.\n\n2. \"Gabriel'S Bar & Grill\" in Manhattan, which specializes in Italian cuisine and offers dishes like Cheese Ravioli, Neapolitan Pizza, assorted gelato, Vegetarian Baked Ziti, Vegetarian Broccoli Pizza, Lasagna, Buca Trio Platter, Spinach Ravioli, Pasta with ricotta cheese, Spaghetti, Fried calamari, and Alfredo Pizza.\n\nBoth restaurants offer outdoor seating, are kid-friendly, and have a casual dress code. 
They also provide take-out service and have happy hour promotions.\n\n\n [NodeWithScore(node=TextNode(id_='5405e68c-19f2-4a65-95d7-f880fa6a8deb', embedding=None, metadata={'restaurant_id': '40385767', 'attributes': '{\"Alcohol\": \"u\\'beer_and_wine\\'\", \"Ambience\": \"{\\'touristy\\': False, \\'hipster\\': False, \\'romantic\\': False, \\'divey\\': False, \\'intimate\\': None, \\'trendy\\': None, \\'upscale\\': False, \\'classy\\': False, \\'casual\\': True}\", \"BYOB\": null, \"BestNights\": \"{\\'monday\\': False, \\'tuesday\\': False, \\'friday\\': True, \\'wednesday\\': False, \\'thursday\\': False, \\'sunday\\': False, \\'saturday\\': True}\", \"BikeParking\": \"True\", \"BusinessAcceptsBitcoin\": \"False\", \"BusinessAcceptsCreditCards\": \"True\", \"BusinessParking\": \"{\\'garage\\': False, \\'street\\': False, \\'validated\\': False, \\'lot\\': True, \\'valet\\': False}\", \"Caters\": \"True\", \"DriveThru\": null, \"GoodForDancing\": \"False\", \"GoodForKids\": \"True\", \"GoodForMeal\": \"{\\'dessert\\': False, \\'latenight\\': False, \\'lunch\\': True, \\'dinner\\': True, \\'brunch\\': False, \\'breakfast\\': False}\", \"HasTV\": \"True\", \"Music\": \"{\\'dj\\': False, \\'background_music\\': False, \\'no_music\\': False, \\'jukebox\\': False, \\'live\\': False, \\'video\\': False, \\'karaoke\\': False}\", \"NoiseLevel\": \"u\\'average\\'\", \"RestaurantsAttire\": \"u\\'casual\\'\", \"RestaurantsDelivery\": \"None\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"True\", \"RestaurantsTableService\": \"True\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"u\\'free\\'\"}', 'cuisine': '\"American\"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '\"Brooklyn\"', 'address': '{\"building\": \"69\", \"coord\": [-73.9757464, 40.687295], \"street\": \"Lafayette Avenue\", \"zipcode\": \"11217\"}', 'name': '\"Academy Restauraunt\"', 'menu': '[\"Mozzarella sticks\", \"Cheeseburger\", \"Baked potato\", \"Breadsticks\", \"Caesar salad\", \"Chicken parmesan\", \"Pigs in a blanket\", \"Chicken soup\", \"Mac & cheese\", \"Mushroom swiss burger\", \"Spaghetti with meatballs\", \"Mashed potatoes\"]', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '173', 'sponsored': None, 'stars': 4.5}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={: RelatedNodeInfo(node_id='bbfc4bf5-d9c3-4f3b-8c1f-ddcf94f3b5df', node_type=, metadata={'restaurant_id': '40385767', 'attributes': '{\"Alcohol\": \"u\\'beer_and_wine\\'\", \"Ambience\": \"{\\'touristy\\': False, \\'hipster\\': False, \\'romantic\\': False, \\'divey\\': False, \\'intimate\\': None, \\'trendy\\': None, \\'upscale\\': False, \\'classy\\': False, \\'casual\\': True}\", \"BYOB\": null, \"BestNights\": \"{\\'monday\\': False, \\'tuesday\\': False, \\'friday\\': True, \\'wednesday\\': False, \\'thursday\\': False, \\'sunday\\': False, \\'saturday\\': True}\", \"BikeParking\": \"True\", \"BusinessAcceptsBitcoin\": \"False\", \"BusinessAcceptsCreditCards\": \"True\", \"BusinessParking\": \"{\\'garage\\': False, \\'street\\': False, \\'validated\\': False, \\'lot\\': True, \\'valet\\': False}\", \"Caters\": \"True\", \"DriveThru\": null, \"GoodForDancing\": \"False\", \"GoodForKids\": \"True\", \"GoodForMeal\": \"{\\'dessert\\': False, \\'latenight\\': False, \\'lunch\\': True, \\'dinner\\': True, \\'brunch\\': False, \\'breakfast\\': False}\", \"HasTV\": \"True\", \"Music\": \"{\\'dj\\': False, \\'background_music\\': False, \\'no_music\\': False, 
\\'jukebox\\': False, \\'live\\': False, \\'video\\': False, \\'karaoke\\': False}\", \"NoiseLevel\": \"u\\'average\\'\", \"RestaurantsAttire\": \"u\\'casual\\'\", \"RestaurantsDelivery\": \"None\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"True\", \"RestaurantsTableService\": \"True\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"u\\'free\\'\"}', 'cuisine': '\"American\"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '\"Brooklyn\"', 'address': '{\"building\": \"69\", \"coord\": [-73.9757464, 40.687295], \"street\": \"Lafayette Avenue\", \"zipcode\": \"11217\"}', '_id': {'$oid': '6095a34a7c34416a90d322d1'}, 'name': '\"Academy Restauraunt\"', 'menu': '[\"Mozzarella sticks\", \"Cheeseburger\", \"Baked potato\", \"Breadsticks\", \"Caesar salad\", \"Chicken parmesan\", \"Pigs in a blanket\", \"Chicken soup\", \"Mac & cheese\", \"Mushroom swiss burger\", \"Spaghetti with meatballs\", \"Mashed potatoes\"]', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '173', 'sponsored': None, 'stars': 4.5}, hash='df7870b3103572b05e98091e4d4b52b238175eb08558831b621b6832c0472c2e'), : RelatedNodeInfo(node_id='5fbb14fe-c8a8-4c4c-930d-2e07e4f77b47', node_type=, metadata={'restaurant_id': '40377111', 'attributes': '{\"Alcohol\": null, \"Ambience\": null, \"BYOB\": null, \"BestNights\": null, \"BikeParking\": \"True\", \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": \"False\", \"BusinessParking\": \"{\\'garage\\': False, \\'street\\': True, \\'validated\\': False, \\'lot\\': False, \\'valet\\': False}\", \"Caters\": null, \"DriveThru\": \"True\", \"GoodForDancing\": null, \"GoodForKids\": null, \"GoodForMeal\": null, \"HasTV\": null, \"Music\": null, \"NoiseLevel\": null, \"RestaurantsAttire\": null, \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": null, \"RestaurantsReservations\": null, \"RestaurantsTableService\": null, \"WheelchairAccessible\": null, \"WiFi\": null}', 'cuisine': '\"American\"', 'DogsAllowed': None, 'OutdoorSeating': None, 'borough': '\"Manhattan\"', 'address': '{\"building\": \"1207\", \"coord\": [-73.9592644, 40.8088612], \"street\": \"Amsterdam Avenue\", \"zipcode\": \"10027\"}', '_id': {'$oid': '6095a34a7c34416a90d321d6'}, 'name': '\"Amsterdam Restaurant & Tapas Lounge\"', 'menu': '[\"Green salad\", \"Cheddar Biscuits\", \"Lasagna\", \"Chicken parmesan\", \"Chicken soup\", \"Pigs in a blanket\", \"Caesar salad\", \"French fries\", \"Baked potato\", \"Mushroom swiss burger\", \"Grilled cheese sandwich\", \"Fried chicken\"]', 'TakeOut': 'true', 'PriceRange': '1.0', 'HappyHour': 'null', 'review_count': '6', 'sponsored': None, 'stars': 5.0}, hash='1261332dd67be495d0639f41b5f6462f87a41aabe20367502ef28074bf13e561'), : RelatedNodeInfo(node_id='10ad1a23-3237-4b68-808d-58fd7b7e5cb6', node_type=, metadata={}, hash='bc64dca2f9210693c3d5174aec305f25b68d080be65a0ae52f9a560f99992bb0')}, text='{\"restaurant_id\": \"40385767\", \"attributes\": \"{\\\\\"Alcohol\\\\\": \\\\\"u\\'beer_and_wine\\'\\\\\", \\\\\"Ambience\\\\\": \\\\\"{\\'touristy\\': False, \\'hipster\\': False, \\'romantic\\': False, \\'divey\\': False, \\'intimate\\': None, \\'trendy\\': None, \\'upscale\\': False, \\'classy\\': False, \\'casual\\': True}\\\\\", \\\\\"BYOB\\\\\": null, \\\\\"BestNights\\\\\": \\\\\"{\\'monday\\': False, \\'tuesday\\': False, \\'friday\\': True, \\'wednesday\\': False, \\'thursday\\': False, \\'sunday\\': False, \\'saturday\\': True}\\\\\", \\\\\"BikeParking\\\\\": \\\\\"True\\\\\", 
\\\\\"BusinessAcceptsBitcoin\\\\\": \\\\\"False\\\\\", \\\\\"BusinessAcceptsCreditCards\\\\\": \\\\\"True\\\\\", \\\\\"BusinessParking\\\\\": \\\\\"{\\'garage\\': False, \\'street\\': False, \\'validated\\': False, \\'lot\\': True, \\'valet\\': False}\\\\\", \\\\\"Caters\\\\\": \\\\\"True\\\\\", \\\\\"DriveThru\\\\\": null, \\\\\"GoodForDancing\\\\\": \\\\\"False\\\\\", \\\\\"GoodForKids\\\\\": \\\\\"True\\\\\", \\\\\"GoodForMeal\\\\\": \\\\\"{\\'dessert\\': False, \\'latenight\\': False, \\'lunch\\': True, \\'dinner\\': True, \\'brunch\\': False, \\'breakfast\\': False}\\\\\", \\\\\"HasTV\\\\\": \\\\\"True\\\\\", \\\\\"Music\\\\\": \\\\\"{\\'dj\\': False, \\'background_music\\': False, \\'no_music\\': False, \\'jukebox\\': False, \\'live\\': False, \\'video\\': False, \\'karaoke\\': False}\\\\\", \\\\\"NoiseLevel\\\\\": \\\\\"u\\'average\\'\\\\\", \\\\\"RestaurantsAttire\\\\\": \\\\\"u\\'casual\\'\\\\\", \\\\\"RestaurantsDelivery\\\\\": \\\\\"None\\\\\", \\\\\"RestaurantsGoodForGroups\\\\\": \\\\\"True\\\\\", \\\\\"RestaurantsReservations\\\\\": \\\\\"True\\\\\", \\\\\"RestaurantsTableService\\\\\": \\\\\"True\\\\\", \\\\\"WheelchairAccessible\\\\\": \\\\\"True\\\\\", \\\\\"WiFi\\\\\": \\\\\"u\\'free\\'\\\\\"}\", \"cuisine\": \"\\\\\"American\\\\\"\", \"DogsAllowed\": true, \"OutdoorSeating\": true, \"borough\": \"\\\\\"Brooklyn\\\\\"\",', start_char_idx=0, end_char_idx=1415, text_template='Metadata: {metadata_str}\\n-----\\nContent: {content}', metadata_template='{key}=>{value}', metadata_seperator='\\n'), score=0.7296431064605713),\n NodeWithScore(node=TextNode(id_='9cd153ba-2ab8-40aa-90f0-9da5ae24c632', embedding=None, metadata={'restaurant_id': '40392690', 'attributes': '{\"Alcohol\": \"u\\'full_bar\\'\", \"Ambience\": \"{\\'touristy\\': None, \\'hipster\\': True, \\'romantic\\': False, \\'divey\\': False, \\'intimate\\': None, \\'trendy\\': True, \\'upscale\\': None, \\'classy\\': True, \\'casual\\': True}\", \"BYOB\": \"False\", \"BestNights\": \"{\\'monday\\': False, \\'tuesday\\': False, \\'friday\\': True, \\'wednesday\\': False, \\'thursday\\': False, \\'sunday\\': False, \\'saturday\\': False}\", \"BikeParking\": \"True\", \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": \"True\", \"BusinessParking\": \"{\\'garage\\': False, \\'street\\': True, \\'validated\\': False, \\'lot\\': False, \\'valet\\': False}\", \"Caters\": \"True\", \"DriveThru\": \"False\", \"GoodForDancing\": \"False\", \"GoodForKids\": \"True\", \"GoodForMeal\": \"{\\'dessert\\': None, \\'latenight\\': None, \\'lunch\\': True, \\'dinner\\': True, \\'brunch\\': False, \\'breakfast\\': False}\", \"HasTV\": \"False\", \"Music\": \"{\\'dj\\': False, \\'background_music\\': False, \\'no_music\\': False, \\'jukebox\\': False, \\'live\\': False, \\'video\\': False, \\'karaoke\\': False}\", \"NoiseLevel\": \"u\\'average\\'\", \"RestaurantsAttire\": \"\\'casual\\'\", \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"False\", \"RestaurantsTableService\": \"True\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"\\'free\\'\"}', 'cuisine': '\"Italian\"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '\"Manhattan\"', 'address': '{\"building\": \"11\", \"coord\": [-73.9828696, 40.7693649], \"street\": \"West 60 Street\", \"zipcode\": \"10023\"}', 'name': '\"Gabriel\\'S Bar & Grill\"', 'menu': '[\"Cheese Ravioli\", \"Neapolitan Pizza\", \"assorted gelato\", \"Vegetarian Baked Ziti\", \"Vegetarian Broccoli Pizza\", \"Lasagna\", \"Buca Trio 
Platter\", \"Spinach Ravioli\", \"Pasta with ricotta cheese\", \"Spaghetti\", \"Fried calimari\", \"Alfredo Pizza\"]', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '333', 'sponsored': None, 'stars': 4.0}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={: RelatedNodeInfo(node_id='77584933-8286-4277-bc56-bed76adcfd37', node_type=, metadata={'restaurant_id': '40392690', 'attributes': '{\"Alcohol\": \"u\\'full_bar\\'\", \"Ambience\": \"{\\'touristy\\': None, \\'hipster\\': True, \\'romantic\\': False, \\'divey\\': False, \\'intimate\\': None, \\'trendy\\': True, \\'upscale\\': None, \\'classy\\': True, \\'casual\\': True}\", \"BYOB\": \"False\", \"BestNights\": \"{\\'monday\\': False, \\'tuesday\\': False, \\'friday\\': True, \\'wednesday\\': False, \\'thursday\\': False, \\'sunday\\': False, \\'saturday\\': False}\", \"BikeParking\": \"True\", \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": \"True\", \"BusinessParking\": \"{\\'garage\\': False, \\'street\\': True, \\'validated\\': False, \\'lot\\': False, \\'valet\\': False}\", \"Caters\": \"True\", \"DriveThru\": \"False\", \"GoodForDancing\": \"False\", \"GoodForKids\": \"True\", \"GoodForMeal\": \"{\\'dessert\\': None, \\'latenight\\': None, \\'lunch\\': True, \\'dinner\\': True, \\'brunch\\': False, \\'breakfast\\': False}\", \"HasTV\": \"False\", \"Music\": \"{\\'dj\\': False, \\'background_music\\': False, \\'no_music\\': False, \\'jukebox\\': False, \\'live\\': False, \\'video\\': False, \\'karaoke\\': False}\", \"NoiseLevel\": \"u\\'average\\'\", \"RestaurantsAttire\": \"\\'casual\\'\", \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"False\", \"RestaurantsTableService\": \"True\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"\\'free\\'\"}', 'cuisine': '\"Italian\"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '\"Manhattan\"', 'address': '{\"building\": \"11\", \"coord\": [-73.9828696, 40.7693649], \"street\": \"West 60 Street\", \"zipcode\": \"10023\"}', '_id': {'$oid': '6095a34b7c34416a90d3243a'}, 'name': '\"Gabriel\\'S Bar & Grill\"', 'menu': '[\"Cheese Ravioli\", \"Neapolitan Pizza\", \"assorted gelato\", \"Vegetarian Baked Ziti\", \"Vegetarian Broccoli Pizza\", \"Lasagna\", \"Buca Trio Platter\", \"Spinach Ravioli\", \"Pasta with ricotta cheese\", \"Spaghetti\", \"Fried calimari\", \"Alfredo Pizza\"]', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '333', 'sponsored': None, 'stars': 4.0}, hash='c4dcc57a697cd2fe3047a280573c0f54bc5236e1d5af2228737af77613c9dbf7'), : RelatedNodeInfo(node_id='6e1ead27-3679-48fb-b160-b47db523a3ce', node_type=, metadata={'restaurant_id': '40392496', 'attributes': '{\"Alcohol\": \"u\\'none\\'\", \"Ambience\": \"{\\'touristy\\': False, \\'hipster\\': False, \\'romantic\\': False, \\'intimate\\': None, \\'trendy\\': False, \\'upscale\\': False, \\'classy\\': False, \\'casual\\': True}\", \"BYOB\": null, \"BestNights\": null, \"BikeParking\": \"True\", \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": null, \"BusinessParking\": \"{\\'garage\\': False, \\'street\\': True, \\'validated\\': False, \\'lot\\': False, \\'valet\\': False}\", \"Caters\": \"False\", \"DriveThru\": null, \"GoodForDancing\": null, \"GoodForKids\": \"True\", \"GoodForMeal\": \"{\\'dessert\\': False, \\'latenight\\': False, \\'lunch\\': True, \\'dinner\\': True, \\'brunch\\': None, \\'breakfast\\': False}\", \"HasTV\": \"True\", \"Music\": 
null, \"NoiseLevel\": \"u\\'average\\'\", \"RestaurantsAttire\": \"u\\'casual\\'\", \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": \"False\", \"RestaurantsReservations\": \"False\", \"RestaurantsTableService\": \"True\", \"WheelchairAccessible\": null, \"WiFi\": \"\\'free\\'\"}', 'cuisine': '\"English\"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '\"Manhattan\"', 'address': '{\"building\": \"253\", \"coord\": [-74.0034571, 40.736351], \"street\": \"West 11 Street\", \"zipcode\": \"10014\"}', '_id': {'$oid': '6095a34b7c34416a90d32435'}, 'name': '\"Tartine\"', 'menu': 'null', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '436', 'sponsored': None, 'stars': 4.5}, hash='146bffad5c816926ec1008d966caab7c0df675251ccca5de860f8a2160bb7a34'), : RelatedNodeInfo(node_id='6640911b-3d8e-4bad-a016-4c3d91444b0c', node_type=, metadata={}, hash='39984a7534d6755344f0887e0d6a200eaab562a7dc492afe292040c0022282bd')}, text='{\"restaurant_id\": \"40392690\", \"attributes\": \"{\\\\\"Alcohol\\\\\": \\\\\"u\\'full_bar\\'\\\\\", \\\\\"Ambience\\\\\": \\\\\"{\\'touristy\\': None, \\'hipster\\': True, \\'romantic\\': False, \\'divey\\': False, \\'intimate\\': None, \\'trendy\\': True, \\'upscale\\': None, \\'classy\\': True, \\'casual\\': True}\\\\\", \\\\\"BYOB\\\\\": \\\\\"False\\\\\", \\\\\"BestNights\\\\\": \\\\\"{\\'monday\\': False, \\'tuesday\\': False, \\'friday\\': True, \\'wednesday\\': False, \\'thursday\\': False, \\'sunday\\': False, \\'saturday\\': False}\\\\\", \\\\\"BikeParking\\\\\": \\\\\"True\\\\\", \\\\\"BusinessAcceptsBitcoin\\\\\": null, \\\\\"BusinessAcceptsCreditCards\\\\\": \\\\\"True\\\\\", \\\\\"BusinessParking\\\\\": \\\\\"{\\'garage\\': False, \\'street\\': True, \\'validated\\': False, \\'lot\\': False, \\'valet\\': False}\\\\\", \\\\\"Caters\\\\\": \\\\\"True\\\\\", \\\\\"DriveThru\\\\\": \\\\\"False\\\\\", \\\\\"GoodForDancing\\\\\": \\\\\"False\\\\\", \\\\\"GoodForKids\\\\\": \\\\\"True\\\\\", \\\\\"GoodForMeal\\\\\": \\\\\"{\\'dessert\\': None, \\'latenight\\': None, \\'lunch\\': True, \\'dinner\\': True, \\'brunch\\': False, \\'breakfast\\': False}\\\\\", \\\\\"HasTV\\\\\": \\\\\"False\\\\\", \\\\\"Music\\\\\": \\\\\"{\\'dj\\': False, \\'background_music\\': False, \\'no_music\\': False, \\'jukebox\\': False, \\'live\\': False, \\'video\\': False, \\'karaoke\\': False}\\\\\", \\\\\"NoiseLevel\\\\\": \\\\\"u\\'average\\'\\\\\", \\\\\"RestaurantsAttire\\\\\": \\\\\"\\'casual\\'\\\\\", \\\\\"RestaurantsDelivery\\\\\": \\\\\"True\\\\\", \\\\\"RestaurantsGoodForGroups\\\\\": \\\\\"True\\\\\", \\\\\"RestaurantsReservations\\\\\": \\\\\"False\\\\\", \\\\\"RestaurantsTableService\\\\\": \\\\\"True\\\\\", \\\\\"WheelchairAccessible\\\\\": \\\\\"True\\\\\", \\\\\"WiFi\\\\\": \\\\\"\\'free\\'\\\\\"}\", \"cuisine\": \"\\\\\"Italian\\\\\"\", \"DogsAllowed\": true, \"OutdoorSeating\": true,', start_char_idx=0, end_char_idx=1382, text_template='Metadata: {metadata_str}\\n-----\\nContent: {content}', metadata_template='{key}=>{value}', metadata_seperator='\\n'), score=0.7284677028656006)]"} {"tokens": 331, "doc_id": "03902cf5-1771-4ffa-8b80-70cdbd298acf", "name": "Amazon Neptune - Neptune Analytics vector store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/AmazonNeptuneVectorDemo", "retrieve_doc": true, "source": "llama_index", "content": "# Amazon Neptune - Neptune Analytics vector store\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install 
llama-index-vector-stores-neptune\n```\n\n## Initiate Neptune Analytics vector wrapper\n\n\n```python\nfrom llama_index.vector_stores.neptune import NeptuneAnalyticsVectorStore\n\ngraph_identifier = \"\"\nembed_dim = 1536\n\nneptune_vector_store = NeptuneAnalyticsVectorStore(\n graph_identifier=graph_identifier, embedding_dimension=1536\n)\n```\n\n## Load documents, build the VectorStoreIndex\n\n\n```python\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader\nfrom IPython.display import Markdown, display\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham\").load_data()\n```\n\n\n```python\nfrom llama_index.core import StorageContext\n\nstorage_context = StorageContext.from_defaults(\n vector_store=neptune_vector_store\n)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n\n```python\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What happened at interleaf?\")\ndisplay(Markdown(f\"{response}\"))\n```"} {"tokens": 4140, "doc_id": "6f2ca851-bcf4-4783-9f1b-f6858a6d730c", "name": "Simple Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/SimpleIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Simple Vector Store\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n!pip install llama-index\n```\n\n\n```python\nimport os\nimport openai\n\nos.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\nopenai.api_key = os.environ[\"OPENAI_API_KEY\"]\n```\n\n#### Load documents, build the VectorStoreIndex\n\n\n```python\nimport nltk\n\nnltk.download(\"stopwords\")\n```\n\n [nltk_data] Downloading package stopwords to\n [nltk_data] /Users/jerryliu/nltk_data...\n [nltk_data] Package stopwords is already up-to-date!\n\n\n\n\n\n True\n\n\n\n\n```python\nimport llama_index.core\n```\n\n [nltk_data] Downloading package stopwords to /Users/jerryliu/Programmi\n [nltk_data] ng/gpt_index/.venv/lib/python3.10/site-\n [nltk_data] packages/llama_index/core/_static/nltk_cache...\n [nltk_data] Unzipping corpora/stopwords.zip.\n [nltk_data] Downloading package punkt to /Users/jerryliu/Programming/g\n [nltk_data] pt_index/.venv/lib/python3.10/site-\n [nltk_data] packages/llama_index/core/_static/nltk_cache...\n [nltk_data] Unzipping tokenizers/punkt.zip.\n\n\n\n```python\nimport logging\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\nlogging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n\nfrom llama_index.core import (\n VectorStoreIndex,\n SimpleDirectoryReader,\n load_index_from_storage,\n StorageContext,\n)\nfrom IPython.display import Markdown, display\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n --2024-02-12 13:21:13-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 
185.199.110.133, 185.199.111.133, 185.199.108.133, ...\n Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.\n HTTP request sent, awaiting response... 200 OK\n Length: 75042 (73K) [text/plain]\n Saving to: ‘data/paul_graham/paul_graham_essay.txt’\n \n data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.02s \n \n 2024-02-12 13:21:13 (4.76 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]\n \n\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\n```\n\n\n```python\nindex = VectorStoreIndex.from_documents(documents)\n```\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n\n\n\n```python\n# save index to disk\nindex.set_index_id(\"vector_index\")\nindex.storage_context.persist(\"./storage\")\n```\n\n\n```python\n# rebuild storage context\nstorage_context = StorageContext.from_defaults(persist_dir=\"storage\")\n# load index\nindex = load_index_from_storage(storage_context, index_id=\"vector_index\")\n```\n\n INFO:llama_index.core.indices.loading:Loading indices with ids: ['vector_index']\n Loading indices with ids: ['vector_index']\n\n\n#### Query Index\n\n\n```python\n# set Logging to DEBUG for more detailed outputs\nquery_engine = index.as_query_engine(response_mode=\"tree_summarize\")\nresponse = query_engine.query(\"What did the author do growing up?\")\n```\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n\n\n\n```python\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nThe author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later transitioned to working with microcomputers, starting with a kit-built microcomputer and eventually acquiring a TRS-80. They wrote simple games, a program to predict rocket heights, and even a word processor. Although the author initially planned to study philosophy in college, they eventually switched to studying AI.\n\n\n**Query Index with SVM/Linear Regression**\n\nUse Karpathy's [SVM-based](https://twitter.com/karpathy/status/1647025230546886658?s=20) approach. Set query as positive example, all other datapoints as negative examples, and then fit a hyperplane.\n\n\n```python\nquery_modes = [\n \"svm\",\n \"linear_regression\",\n \"logistic_regression\",\n]\nfor query_mode in query_modes:\n # set Logging to DEBUG for more detailed outputs\n query_engine = index.as_query_engine(vector_store_query_mode=query_mode)\n response = query_engine.query(\"What did the author do growing up?\")\n print(f\"Query mode: {query_mode}\")\n display(Markdown(f\"{response}\"))\n```\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n\n\n /Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/sklearn/svm/_classes.py:31: FutureWarning: The default value of `dual` will change from `True` to `'auto'` in 1.5. 
Set the value of `dual` explicitly to suppress the warning.\n warnings.warn(\n\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n Query mode: svm\n\n\n\nThe author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later got a microcomputer and started programming on it, writing simple games and a word processor. They initially planned to study philosophy in college but ended up switching to AI.\n\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n\n\n /Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/sklearn/svm/_classes.py:31: FutureWarning: The default value of `dual` will change from `True` to `'auto'` in 1.5. Set the value of `dual` explicitly to suppress the warning.\n warnings.warn(\n\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n Query mode: linear_regression\n\n\n\nThe author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later got a microcomputer and started programming on it, writing simple games and a word processor. They initially planned to study philosophy in college but ended up switching to AI.\n\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n\n\n /Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/sklearn/svm/_classes.py:31: FutureWarning: The default value of `dual` will change from `True` to `'auto'` in 1.5. Set the value of `dual` explicitly to suppress the warning.\n warnings.warn(\n\n\n INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n Query mode: logistic_regression\n\n\n\nThe author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later got a microcomputer and started programming on it, writing simple games and a word processor. They initially planned to study philosophy in college but eventually switched to AI.\n\n\n\n```python\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nThe author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later got a microcomputer and started programming on it, writing simple games and a word processor. They initially planned to study philosophy in college but eventually switched to AI.\n\n\n\n```python\nprint(response.source_nodes[0].text)\n```\n\n What I Worked On\n \n February 2021\n \n Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.\n \n The first programs I tried writing were on the IBM 1401 that our school district used for what was then called \"data processing.\" This was in 9th grade, so I was 13 or 14. 
The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.\n \n The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in the card reader and press a button to load the program into memory and run it. The result would ordinarily be to print something on the spectacularly loud printer.\n \n I was puzzled by the 1401. I couldn't figure out what to do with it. And in retrospect there's not much I could have done with it. The only form of input to programs was data stored on punched cards, and I didn't have any data stored on punched cards. The only other option was to do things that didn't rely on any input, like calculate approximations of pi, but I didn't know enough math to do anything interesting of that type. So I'm not surprised I can't remember any programs I wrote, because they can't have done much. My clearest memory is of the moment I learned it was possible for programs not to terminate, when one of mine didn't. On a machine without time-sharing, this was a social as well as a technical error, as the data center manager's expression made clear.\n \n With microcomputers, everything changed. Now you could have a computer sitting right in front of you, on a desk, that could respond to your keystrokes as it was running instead of just churning through a stack of punch cards and then stopping. [1]\n \n The first of my friends to get a microcomputer built it himself. It was sold as a kit by Heathkit. I remember vividly how impressed and envious I felt watching him sitting in front of it, typing programs right into the computer.\n \n Computers were expensive in those days and it took me years of nagging before I convinced my father to buy one, a TRS-80, in about 1980. The gold standard then was the Apple II, but a TRS-80 was good enough. This was when I really started programming. I wrote simple games, a program to predict how high my model rockets would fly, and a word processor that my father used to write at least one book. There was only room in memory for about 2 pages of text, so he'd write 2 pages at a time and then print them out, but it was a lot better than a typewriter.\n \n Though I liked programming, I didn't plan to study it in college. In college I was going to study philosophy, which sounded much more powerful. It seemed, to my naive high school self, to be the study of the ultimate truths, compared to which the things studied in other fields would be mere domain knowledge. What I discovered when I got to college was that the other fields took up so much of the space of ideas that there wasn't much left for these supposed ultimate truths. All that seemed left for philosophy were edge cases that people in other fields felt could safely be ignored.\n \n I couldn't have put this into words when I was 18. All I knew at the time was that I kept taking philosophy courses and they kept being boring. So I decided to switch to AI.\n \n AI was in the air in the mid 1980s, but there were two things especially that made me want to work on it: a novel by Heinlein called The Moon is a Harsh Mistress, which featured an intelligent computer called Mike, and a PBS documentary that showed Terry Winograd using SHRDLU. 
I haven't tried rereading The Moon is a Harsh Mistress, so I don't know how well it has aged, but when I read it I was drawn entirely into its world. It seemed only a matter of time before we'd have Mike, and when I saw Winograd using SHRDLU, it seemed like that time would be a few years at most.\n\n\n**Query Index with custom embedding string**\n\n\n```python\nfrom llama_index.core import QueryBundle\n```\n\n\n```python\nquery_bundle = QueryBundle(\n    query_str=\"What did the author do growing up?\",\n    custom_embedding_strs=[\"The author grew up painting.\"],\n)\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(query_bundle)\n```\n\n    INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n    HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n    INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n    HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n\n\n\n```python\ndisplay(Markdown(f\"{response}\"))\n```\n\n\nThe context does not provide information about what the author did growing up.\n\n\n**Use maximum marginal relevance**\n\nInstead of ranking vectors purely by similarity, MMR adds diversity to the results by penalizing documents that are similar to ones already retrieved. A lower `mmr_threshold` increases diversity.\n\n\n```python\nquery_engine = index.as_query_engine(\n    vector_store_query_mode=\"mmr\", vector_store_kwargs={\"mmr_threshold\": 0.2}\n)\nresponse = query_engine.query(\"What did the author do growing up?\")\n```\n\n    INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n    HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n    INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n    HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n\n\n#### Get Sources\n\n\n```python\nprint(response.get_formatted_sources())\n```\n\n    > Source (Doc id: c4118521-8f55-4a4d-819a-2db546b6491e): What I Worked On\n    \n    February 2021\n    \n    Before college the two main things I worked on, outside of schoo...\n    \n    > Source (Doc id: 74f77233-e4fe-4389-9820-76dd9f765af6): Which meant being easy to use and inexpensive. 
It was lucky for us that we were poor, because tha...\n\n\n#### Query Index with Filters\n\nWe can also filter our queries using metadata.\n\n\n```python\nfrom llama_index.core import Document\n\ndoc = Document(text=\"target\", metadata={\"tag\": \"target\"})\n\nindex.insert(doc)\n```\n\n    INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n    HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n\n\n\n```python\nfrom llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters\n\nfilters = MetadataFilters(\n    filters=[ExactMatchFilter(key=\"tag\", value=\"target\")]\n)\n\nretriever = index.as_retriever(\n    similarity_top_k=20,\n    filters=filters,\n)\n\nsource_nodes = retriever.retrieve(\"What did the author do growing up?\")\n```\n\n    INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n    HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n\n\n\n```python\n# retrieves only our target node, even though we set the top k to 20\nprint(len(source_nodes))\n```\n\n    1\n\n\n\n```python\nprint(source_nodes[0].text)\nprint(source_nodes[0].metadata)\n```\n\n    target\n    {'tag': 'target'}"} {"tokens": 43097, "doc_id": "264a5660-6484-4a24-b74f-50ba42fa1223", "name": "Opensearch Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/OpensearchDemo", "retrieve_doc": false, "source": "llama_index", "content": "# Opensearch Vector Store\n\nElasticsearch only supports Lucene indices, so only Opensearch is supported.\n\n**Note on setup**: We set up a local Opensearch instance following this doc: https://opensearch.org/docs/1.0/\n\nIf you run into SSL issues, try the following `docker run` command instead: \n```\ndocker run -p 9200:9200 -p 9600:9600 -e \"discovery.type=single-node\" -e \"plugins.security.disabled=true\" opensearchproject/opensearch:1.0.1\n```\n\nReference: https://github.com/opensearch-project/OpenSearch/issues/1598\n\nDownload Data\n\n\n```python\n%pip install llama-index-readers-elasticsearch\n%pip install llama-index-vector-stores-opensearch\n%pip install llama-index-embeddings-ollama\n```\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n\n```python\nfrom os import getenv\nfrom llama_index.core import SimpleDirectoryReader\nfrom llama_index.vector_stores.opensearch import (\n    OpensearchVectorStore,\n    OpensearchVectorClient,\n)\nfrom llama_index.core import VectorStoreIndex, StorageContext\n\n# http endpoint for your cluster (opensearch required for vector index usage)\nendpoint = getenv(\"OPENSEARCH_ENDPOINT\", \"http://localhost:9200\")\n# index to demonstrate the VectorStore impl\nidx = getenv(\"OPENSEARCH_INDEX\", \"gpt-index-demo\")\n# load some sample data\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\n```\n\n    /Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. 
See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n from .autonotebook import tqdm as notebook_tqdm\n\n\n\n```python\n# OpensearchVectorClient stores text in this field by default\ntext_field = \"content\"\n# OpensearchVectorClient stores embeddings in this field by default\nembedding_field = \"embedding\"\n# OpensearchVectorClient encapsulates logic for a\n# single opensearch index with vector search enabled\nclient = OpensearchVectorClient(\n endpoint, idx, 1536, embedding_field=embedding_field, text_field=text_field\n)\n# initialize vector store\nvector_store = OpensearchVectorStore(client)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n# initialize an index using our sample data and the client we just created\nindex = VectorStoreIndex.from_documents(\n documents=documents, storage_context=storage_context\n)\n```\n\n\n```python\n# run query\nquery_engine = index.as_query_engine()\nres = query_engine.query(\"What did the author do growing up?\")\nres.response\n```\n\n INFO:root:> [query] Total LLM token usage: 29628 tokens\n INFO:root:> [query] Total embedding token usage: 8 tokens\n\n\n\n\n\n '\\n\\nThe author grew up writing short stories, programming on an IBM 1401, and building a computer kit from Heathkit. They also wrote programs for a TRS-80, such as games, a program to predict model rocket flight, and a word processor. After years of nagging, they convinced their father to buy a TRS-80, and they wrote simple games, a program to predict how high their model rockets would fly, and a word processor that their father used to write at least one book. In college, they studied philosophy and AI, and wrote a book about Lisp hacking. They also took art classes and applied to art schools, and experimented with computer graphics and animation, exploring the use of algorithms to create art. Additionally, they experimented with machine learning algorithms, such as using neural networks to generate art, and exploring the use of numerical values to create art. They also took classes in fundamental subjects like drawing, color, and design, and applied to two art schools, RISD in the US, and the Accademia di Belli Arti in Florence. They were accepted to RISD, and while waiting to hear back from the Accademia, they learned Italian and took the entrance exam in Florence. 
They eventually graduated from RISD'\n\n\n\nThe OpenSearch vector store supports [filter-context queries](https://opensearch.org/docs/latest/query-dsl/query-filter-context/).\n\n\n```python\nfrom llama_index.core import Document\nfrom llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter\nimport regex as re\n```\n\n\n```python\n# Split the text into paragraphs.\ntext_chunks = documents[0].text.split(\"\\n\\n\")\n\n# Create a document for each footnote\nfootnotes = [\n    Document(\n        text=chunk,\n        id=documents[0].doc_id,\n        metadata={\"is_footnote\": bool(re.search(r\"^\\s*\\[\\d+\\]\\s*\", chunk))},\n    )\n    for chunk in text_chunks\n    if bool(re.search(r\"^\\s*\\[\\d+\\]\\s*\", chunk))\n]\n```\n\n\n```python\n# Insert the footnotes into the index\nfor f in footnotes:\n    index.insert(f)\n```\n\n\n```python\n# Create a query engine that only searches certain footnotes.\nfootnote_query_engine = index.as_query_engine(\n    filters=MetadataFilters(\n        filters=[\n            ExactMatchFilter(\n                key=\"term\", value='{\"metadata.is_footnote\": \"true\"}'\n            ),\n            ExactMatchFilter(\n                key=\"query_string\",\n                value='{\"query\": \"content: space AND content: lisp\"}',\n            ),\n        ]\n    )\n)\n\nres = footnote_query_engine.query(\n    \"What did the author say about space aliens and lisp?\"\n)\nres.response\n```\n\n\n\n\n    \"The author believes that any sufficiently advanced alien civilization would know about the Pythagorean theorem and possibly also about Lisp in McCarthy's 1960 paper.\"\n\n\n\n## Use reader to check out what VectorStoreIndex just created in our index\n\nThe reader works with Elasticsearch too, as it just uses the basic search features.\n\n\n```python\n# create a reader to check out the index used in previous section.\nfrom llama_index.readers.elasticsearch import ElasticsearchReader\n\nrdr = ElasticsearchReader(endpoint, idx)\n# set embedding_field optionally to read embedding data from the elasticsearch index\ndocs = rdr.load_data(text_field, embedding_field=embedding_field)\n# docs have embeddings in them\nprint(\"embedding dimension:\", len(docs[0].embedding))\n# full document is stored in metadata\nprint(\"all fields in index:\", docs[0].metadata.keys())\n```\n\n    embedding dimension: 1536\n    all fields in index: dict_keys(['content', 'embedding'])\n\n\n\n```python\n# we can check out how the text was chunked by the `GPTOpensearchIndex`\nprint(\"total number of chunks created:\", len(docs))\n```\n\n    total number of chunks created: 10\n\n\n\n```python\n# search index using standard elasticsearch query DSL\ndocs = rdr.load_data(text_field, {\"query\": {\"match\": {text_field: \"Lisp\"}}})\nprint(\"chunks that mention Lisp:\", len(docs))\ndocs = rdr.load_data(text_field, {\"query\": {\"match\": {text_field: \"Yahoo\"}}})\nprint(\"chunks that mention Yahoo:\", len(docs))\n```\n\n    chunks that mention Lisp: 10\n    chunks that mention Yahoo: 8\n\n\n## Hybrid query for opensearch vector store\n\nHybrid query has been supported since OpenSearch 2.10. It combines vector search with full-text search, and is useful when you want to match specific text while also ranking results by vector similarity. You can find more details: https://opensearch.org/docs/latest/query-dsl/compound/hybrid/.
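\n\nBefore running a hybrid query, the search pipeline below must exist on the cluster. The next section shows the raw REST call; as a convenience, here is a minimal sketch of creating the same pipeline from Python with the `requests` library (an assumption, any HTTP client works), pointed at the local cluster used in this notebook:\n\n```python\nimport requests\n\n# Same pipeline body as the PUT request shown in the next section:\n# min-max score normalization plus a weighted harmonic mean combination.\npipeline = {\n    \"description\": \"Post processor for hybrid search\",\n    \"phase_results_processors\": [\n        {\n            \"normalization-processor\": {\n                \"normalization\": {\"technique\": \"min_max\"},\n                \"combination\": {\n                    \"technique\": \"harmonic_mean\",\n                    \"parameters\": {\"weights\": [0.3, 0.7]},\n                },\n            }\n        }\n    ],\n}\n\nresp = requests.put(\n    \"http://localhost:9200/_search/pipeline/hybrid-search-pipeline\",\n    json=pipeline,\n)\nresp.raise_for_status()\n```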
\n\n### Prepare Search Pipeline\n\nCreate a new [search pipeline](https://opensearch.org/docs/latest/search-plugins/search-pipelines/creating-search-pipeline/) with [score normalization and weighted harmonic mean combination](https://opensearch.org/docs/latest/search-plugins/search-pipelines/normalization-processor/).\n\n```\nPUT /_search/pipeline/hybrid-search-pipeline\n{\n  \"description\": \"Post processor for hybrid search\",\n  \"phase_results_processors\": [\n    {\n      \"normalization-processor\": {\n        \"normalization\": {\n          \"technique\": \"min_max\"\n        },\n        \"combination\": {\n          \"technique\": \"harmonic_mean\",\n          \"parameters\": {\n            \"weights\": [\n              0.3,\n              0.7\n            ]\n          }\n        }\n      }\n    }\n  ]\n}\n```\n\n### Initialize an OpenSearch client and vector store supporting hybrid query, with search pipeline details\n\n\n```python\nfrom os import getenv\nfrom llama_index.vector_stores.opensearch import (\n    OpensearchVectorStore,\n    OpensearchVectorClient,\n)\n\n# http endpoint for your cluster (opensearch required for vector index usage)\nendpoint = getenv(\"OPENSEARCH_ENDPOINT\", \"http://localhost:9200\")\n# index to demonstrate the VectorStore impl\nidx = getenv(\"OPENSEARCH_INDEX\", \"auto_retriever_movies\")\n\n# OpensearchVectorClient stores text in this field by default\ntext_field = \"content\"\n# OpensearchVectorClient stores embeddings in this field by default\nembedding_field = \"embedding\"\n# OpensearchVectorClient encapsulates logic for a single opensearch\n# index with vector search enabled and a hybrid search pipeline attached\nclient = OpensearchVectorClient(\n    endpoint,\n    idx,\n    4096,  # dimension of the llama2 embeddings used below\n    embedding_field=embedding_field,\n    text_field=text_field,\n    search_pipeline=\"hybrid-search-pipeline\",\n)\n\nfrom llama_index.embeddings.ollama import OllamaEmbedding\n\nembed_model = OllamaEmbedding(model_name=\"llama2\")\n\n# initialize vector store\nvector_store = OpensearchVectorStore(client)\n```\n\n### Prepare the index\n\n\n```python\nfrom llama_index.core.schema import TextNode\nfrom llama_index.core import VectorStoreIndex, StorageContext\n\n\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n\nnodes = [\n    TextNode(\n        text=\"The Shawshank Redemption\",\n        metadata={\n            \"author\": \"Stephen King\",\n            \"theme\": \"Friendship\",\n        },\n    ),\n    TextNode(\n        text=\"The Godfather\",\n        metadata={\n            \"director\": \"Francis Ford Coppola\",\n            \"theme\": \"Mafia\",\n        },\n    ),\n    TextNode(\n        text=\"Inception\",\n        metadata={\n            \"director\": \"Christopher Nolan\",\n        },\n    ),\n]\n\nindex = VectorStoreIndex(\n    nodes, storage_context=storage_context, embed_model=embed_model\n)\n```\n\n    LLM is explicitly disabled. 
Using MockLLM.\n\n\n### Search the index with hybrid query by specifying the vector store query mode: VectorStoreQueryMode.HYBRID with filters\n\n\n```python\nfrom llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters\nfrom llama_index.core.vector_stores.types import VectorStoreQueryMode\n\nfilters = MetadataFilters(\n    filters=[\n        ExactMatchFilter(\n            key=\"term\", value='{\"metadata.theme.keyword\": \"Mafia\"}'\n        )\n    ]\n)\n\nretriever = index.as_retriever(\n    filters=filters, vector_store_query_mode=VectorStoreQueryMode.HYBRID\n)\n\nresult = retriever.retrieve(\"What is inception about?\")\n\nprint(result)\n```\n\n    query_strWhat is inception about?\n    query_modehybrid\n    {'size': 2, 'query': {'hybrid': {'queries': [{'bool': {'must': {'match': {'content': {'query': 'What is inception about?'}}}, 'filter': [{'term': {'metadata.theme.keyword': 'Mafia'}}]}}, {'script_score': {'query': {'bool': {'filter': [{'term': {'metadata.theme.keyword': 'Mafia'}}]}}, 'script': {'source': \"1/(1.0 + l2Squared(params.query_value, doc['embedding']))\", 'params': {'field': 'embedding', 'query_value': [0.41321834921836853, 0.18020285665988922, 2.5630273818969727, 1.490068793296814, ... (remaining values of the 4096-dimensional query embedding elided) ...
-1.8709479570388794, -1.1289294958114624, -0.515167772769928, -2.6569807529449463, -0.5510454177856445, 0.5140765309333801, 1.0727870464324951, -3.140223741531372, -1.4549286365509033, -0.038322318345308304, 2.3005473613739014, 0.41218411922454834, 0.1405603587627411, 2.579385995864868, 1.7039129734039307, 3.0319645404815674, 2.222633123397827, 0.48473167419433594, 0.39313510060310364, 1.5743176937103271, -17.08769416809082, 2.6103098392486572, -0.29352328181266785, 1.4871758222579956, -0.920323371887207, -1.261200189590454, -1.8815630674362183, -0.3742014169692993, 1.928483486175537, 0.8734447956085205, -0.7256561517715454, -0.19480429589748383, 0.4971783757209778, 0.0454951710999012, 1.5309410095214844, -1.8724687099456787, 0.2753872573375702, -0.05526876077055931, 2.019657850265503, -0.542966902256012, 2.5979809761047363, -1.5759060382843018, -2.0966858863830566, -1.2429949045181274, 0.8074167966842651, 1.6995701789855957, 2.364717483520508, -0.006171206012368202, -0.40523213148117065, 0.6031554937362671, -0.9142636656761169, -0.6844136118888855, -0.5789420008659363, -1.1073524951934814, 1.050377607345581, -0.22426076233386993, -4.312420845031738, 0.3582805097103119, 1.566651463508606, -1.0100003480911255, -2.445319652557373, 0.49360424280166626, -6.209681510925293, -3.5924978256225586, -2.6305131912231445, -3.0619750022888184, 3.185960292816162, 1.714870572090149, 1.8870161771774292, -2.1056036949157715, -1.3087836503982544, -0.397480309009552, 1.4927351474761963, -0.7130331993103027, 1.486342191696167, 0.3299499750137329, -2.418793201446533, 1.9932200908660889, 1.4768792390823364, -3.0037782192230225, -0.042862553149461746, 1.1720788478851318, 1.5001466274261475, -2.5495569705963135, -0.622663676738739, 0.7934010028839111, -1.1974726915359497, 0.36095690727233887, 0.19274689257144928, -3.497694730758667, -0.40920042991638184, 0.2558222711086273, -0.17489388585090637, -0.4993809461593628, -0.7705931067466736, -2.4662959575653076, 1.9247642755508423, 1.998637080192566, -1.9849026203155518, -1.5978630781173706, 1.7272976636886597, 2.1162023544311523, 3.836690902709961, -0.5702705979347229, 0.4890395998954773, -5.1495490074157715, -0.40522921085357666, 1.9576873779296875, -1.508880376815796, 1.41094970703125, -0.024070236831903458, -1.3425319194793701, 0.2499399334192276, -1.9436883926391602, -0.20083169639110565, -1.6973903179168701, 1.8585814237594604, 2.0651111602783203, -0.6890871524810791, 1.9258447885513306, 0.14739713072776794, -1.3216526508331299, -0.5668810606002808, -0.1970759779214859, 0.4085139334201813, 0.5241521000862122, -0.5185426473617554, 0.8455533981323242, 0.05106530711054802, -1.0309116840362549, 1.3577605485916138, 0.8617386817932129, -0.9283434748649597, -0.02036425843834877, -0.091877780854702, 0.5626043677330017, 0.9166983366012573, -1.6653329133987427, 0.6513411402702332, -2.0065479278564453, -0.25614944100379944, -1.7404941320419312, -0.14202706515789032, -1.8889561891555786, 0.7946772575378418, -2.131476402282715, 0.28767019510269165, -1.7267996072769165, -1.376927375793457, 0.305580735206604, -2.189678192138672, -0.012310806661844254, 3.2107341289520264, -0.5365090370178223, -2.4642841815948486, 0.8017498254776001, -0.3184514045715332, 0.7495277523994446, -0.4395090341567993, -1.053176760673523, 1.0031729936599731, 0.5520432591438293, 5.518334865570068, -0.260230153799057, 0.4129876494407654, -2.2801108360290527, 3.3234267234802246, -1.100612759590149, -0.1636020541191101, 0.5297877192497253, 1.1526376008987427, -0.6702059507369995, 0.11144405603408813, 
1.4567251205444336, 2.211238384246826, 2.1231586933135986, -0.014792595990002155, 0.46270355582237244, -1.7553074359893799, -2.412024736404419, 0.5752195715904236, 1.0785473585128784, 1.4434525966644287, -0.36577677726745605, -0.9827273488044739, 0.22377555072307587, -3.826702833175659, -5.461728572845459, 2.8441531658172607, 0.05543769150972366, 1.0848572254180908, -2.3073110580444336, 1.1464284658432007, 6.840386390686035, 0.29163652658462524, 1.5096409320831299, 2.230553150177002, 0.03037729486823082, -0.03491774573922157, 3.0144357681274414, 2.0182530879974365, 0.1928826868534088, -0.42632055282592773, -1.7087998390197754, 0.8260899186134338, 1.0113804340362549, 2.360093832015991, -1.62473464012146, 1.5085432529449463, 2.578317642211914, 1.6136786937713623, -0.507075309753418, -2.3402822017669678, -0.07098083198070526, -1.3340305089950562, 0.19177654385566711, 1.1059727668762207, -1.3988288640975952, 0.6980583667755127, 0.04762393608689308, 2.205963373184204, 0.6097983121871948, 1.472859501838684, -0.8065006136894226, 0.8260449171066284, 0.6911891102790833, 0.7354405522346497, -1.020797848701477, 4.069032192230225, 1.1546580791473389, -1.3901289701461792, 4.088425159454346, 3.3327560424804688, -0.8147938847541809, -0.38041025400161743, -0.8002570867538452, -0.630027174949646, 0.1984773576259613, -0.5009771585464478, -2.725576400756836, -1.0677473545074463, -2.1194536685943604, 1.0863295793533325, 0.945219099521637, 0.8743425011634827, -1.5595207214355469, -3.2554945945739746, -0.059346023947000504, 1.5163980722427368, -2.4665417671203613, 1.6798737049102783, 0.13040810823440552, -1.8379839658737183, 1.0731821060180664, 3.5579402446746826, 1.2822164297103882, 1.2544536590576172, 0.21311433613300323, 1.0679103136062622, -7.644961833953857, -2.2976572513580322, -0.4696504473686218, -1.1461831331253052, 3.8370931148529053, -2.6373353004455566, -1.022015929222107, 1.944838523864746, -3.4792752265930176, 0.189581036567688, -1.4959508180618286, -0.8203619718551636, -0.8752302527427673, 1.1455988883972168, 1.394754409790039, 1.8890148401260376, 2.469120502471924, 6.615213394165039, -0.35686182975769043, -1.6679184436798096, 1.335914969444275, 0.8345732688903809, 2.998810291290283, 0.8350005149841309, -2.185638904571533, -0.9935243129730225, -0.5063812136650085, -1.023371934890747, -0.4569719731807709, 0.48809340596199036, -0.211369127035141, -1.0023069381713867, 0.6931540369987488, 1.9162567853927612, 2.1354031562805176, -0.9595145583152771, 1.6526645421981812, 1.8041722774505615, 0.6410518288612366, 0.7370561361312866, 0.6615729928016663, -1.5644463300704956, -1.0673896074295044, 6.431417465209961, -0.4807921350002289, 1.4150999784469604, -1.295664668083191, -3.4887518882751465, 1.5428330898284912, -2.5802090167999268, 2.689826488494873, -0.4622426927089691, -0.6111890077590942, 1.1808655261993408, 1.1734328269958496, -2.2830307483673096, -0.5659275054931641, 1.628258466720581, 1.4238611459732056, 0.9177718758583069, 2.57635498046875, -3.0586097240448, -0.1409277319908142, 0.13823434710502625, -0.35203301906585693, 0.9506719708442688, -6.526653289794922, 0.15715323388576508, 0.33741283416748047, 0.5778661966323853, 0.24446435272693634, -0.25828683376312256, -0.26176297664642334, -1.556192398071289, 1.7496039867401123, -2.566568613052368, -3.633755922317505, 5.877347469329834, 0.3881169557571411, 0.9792211651802063, 3.0303914546966553, -0.4234387278556824, -1.7467732429504395, -0.9940581917762756, 0.1604217141866684, 0.20533810555934906, -0.5118659734725952, 0.39175254106521606, 
-0.026054779067635536, -0.7470361590385437, -0.6664057970046997, 1.940830945968628, -1.7012990713119507, 0.010794420726597309, -1.8053219318389893, -1.4483990669250488, -0.9939783811569214, -2.142918586730957, -0.28726959228515625, -0.30280768871307373, -1.08336341381073, 3.519355535507202, -0.7694765329360962, 0.6794494390487671, 0.02129749022424221, 0.1468917429447174, -0.4394078552722931, 0.8040274381637573, -2.1332905292510986, 0.4357454776763916, -0.5084906816482544, 0.21598032116889954, -1.1935497522354126, 1.5270665884017944, 0.7274636030197144, 0.8407641649246216, 0.17818698287010193, 1.8959418535232544, 0.3077866733074188, 2.65822172164917, 1.8515098094940186, -0.32973712682724, 1.8853545188903809, -1.4277201890945435, -0.45664528012275696, 0.7097566723823547, 0.2476370483636856, 0.24467945098876953, -0.106924869120121, 1.5753772258758545, -0.9077993631362915, -0.2776675224304199, -0.6028621792793274, 0.3361768126487732, -1.9260371923446655, -1.4828319549560547, 2.7104969024658203, -0.32213327288627625, 1.046871542930603, -0.9400041103363037, -0.6073606014251709, 1.6994292736053467, -0.9165927767753601, -2.3352160453796387, -0.3473537862300873, -0.7119798064231873, -0.6926193237304688, 2.8489246368408203, -0.30154967308044434, -2.3563122749328613, -0.3843422830104828, 1.1836661100387573, -1.1338986158370972, -0.24423880875110626, 1.418196678161621, 0.5400394797325134, -0.015927601605653763, 0.7847772836685181, 0.2918948531150818, -2.478797435760498, 0.2756686806678772, 1.1419461965560913, 0.49127107858657837, -0.022380413487553596, -0.5809372663497925, -1.8818861246109009, -0.7043084502220154, -1.4923875331878662, 2.190058708190918, 1.125563144683838, -1.7257450819015503, 0.05809423327445984, -1.231887698173523, 2.4990298748016357, -0.6314716935157776, -0.03669692575931549, -2.2064425945281982, 1.5907856225967407, 0.4585913121700287, -1.45792555809021, -2.0502560138702393, 0.7699311971664429, -2.784538984298706, -0.9140456318855286, -0.3700370490550995, -0.8979235291481018, 0.44210389256477356, 1.0474436283111572, 1.779616355895996, 0.45078784227371216, -0.2973509728908539, -1.472576379776001, 2.0638420581817627, 0.6984675526618958, 0.28762000799179077, 3.2471299171447754, 3.79997181892395, 0.4689188301563263, 0.7657003998756409, -1.3535739183425903, 0.15177389979362488, -1.9707564115524292, -1.5294809341430664, 1.4862594604492188, -0.8001325130462646, -1.247962236404419, -1.176222562789917, -0.3547532260417938, 0.2978862226009369, 1.9624965190887451, 0.9902192950248718, -0.44017648696899414, -1.2257494926452637, -1.7168676853179932, 1.678995966911316, 0.45041409134864807, 0.29381826519966125, 0.24676980078220367, 1.4098718166351318, -0.23116594552993774, 2.851227283477783, -3.352517604827881, -1.870121717453003, 1.268830418586731, -2.901238441467285, 0.22949352860450745, 2.0386269092559814, -0.9146790504455566, -0.050751615315675735, 0.650490403175354, 0.688125729560852, -0.08217889070510864, 0.12222655117511749, -1.7349051237106323, -2.401493787765503, 0.755092978477478, 0.785330593585968, 2.030148506164551, -3.0832223892211914, -2.0020861625671387, 0.1970643252134323, -0.43846940994262695, 3.0661580562591553, -2.440918445587158, 0.255910187959671, -0.20022796094417572, -1.2181930541992188, -0.7898653745651245, -2.447021722793579, -2.7120091915130615, 1.023439884185791, 0.13306495547294617, 11.38375473022461, 0.4095974266529083, -3.126375436782837, 0.15059468150138855, 1.005212664604187, -0.6362734436988831, 1.8042926788330078, -0.544600784778595, 1.324157476425171, 
-0.1720346063375473, -0.48226967453956604, -0.6386629343032837, 0.7932955026626587, -1.0307537317276, -0.030334221199154854, -1.6885836124420166, 0.02540210448205471, 0.15673278272151947, 1.2310541868209839, 3.1716957092285156, 2.6241445541381836, 0.3046095371246338, 1.2929836511611938, 0.7420481443405151, 0.321260005235672, 0.669034481048584, -0.11876273900270462, 1.3900645971298218, -0.39547765254974365, -0.9423073530197144, -1.440240502357483, -2.7683916091918945, 0.5916474461555481, 0.22705861926078796, 2.289206027984619, -1.529347538948059, 3.0293784141540527, 1.585314154624939, -0.3475581705570221, -0.8158438205718994, -1.2707141637802124, 1.52529776096344, -0.4399953782558441, 0.7977296710014343, 2.15421724319458, 0.2029402256011963, 0.8182349801063538, -0.9828463792800903, -2.102130651473999, -0.7536905407905579, -0.6563103795051575, -0.8859535455703735, -2.16115140914917, 0.68268883228302, -0.8431786894798279, 1.6845060586929321, -3.457179546356201, -1.0305430889129639, 2.1177175045013428, 2.186978816986084, -0.7495031952857971, 0.4233001470565796, 1.7131890058517456, 2.653705358505249, -1.5412851572036743, 2.0931594371795654, -1.8673100471496582, 3.362546443939209, 0.37147626280784607, 2.6393561363220215, 0.5956027507781982, 3.8806629180908203, -0.8557716608047485, -1.8126965761184692, -0.6422334909439087, -0.4170646071434021, 0.07015134394168854, 1.601213812828064, 1.7752736806869507, -1.563095211982727, -1.842039942741394, 0.8949403166770935, 0.8213114738464355, 2.104454517364502, 1.5621185302734375, 1.983998417854309, 0.27188044786453247, -1.123093843460083, -0.42603784799575806, -4.802127838134766, -0.9244204163551331, -2.459841012954712, -2.634511709213257, -2.607050657272339, 0.3619783818721771, -1.8253533840179443, 2.1136412620544434, -1.0142664909362793, -0.35461071133613586, -0.08565346151590347, 1.2730433940887451, 1.4445371627807617, -2.562166213989258, -1.6224087476730347, -0.7401191592216492, -1.8183948993682861, -6.947819709777832, -2.958055257797241, -1.1326404809951782, 2.521576166152954, -0.7198857069015503, -0.19349172711372375, -2.5632424354553223, -1.1360121965408325, 1.7425504922866821, -2.3327488899230957, -0.3639349937438965, -0.7618690133094788, -0.06379194557666779, -2.3073813915252686, 0.694584846496582, 0.344064325094223, -1.2303060293197632, 1.2927721738815308, 0.06000807508826256, 0.40601813793182373, -0.8971396088600159, 0.519196629524231, -1.4103238582611084, -3.390002489089966, -1.5444581508636475, 0.7764025926589966, -1.286615014076233, -0.9456934928894043, -0.6860343217849731, -0.7364029288291931, 1.5457088947296143, 1.6128982305526733, 1.287780523300171, 1.6489148139953613, 1.67617928981781, 0.10088522732257843, -1.2689849138259888, 0.8049256205558777, -0.8268434405326843, 0.8534346222877502, 3.2546145915985107, -0.7334981560707092, -0.42363929748535156, -2.0192339420318604, 0.18278534710407257, -0.30329200625419617, -1.6454986333847046, 0.5611382126808167, 0.9428885579109192, 3.467724323272705, -1.7720670700073242, 3.3134148120880127, 0.8287512063980103, -0.6391113996505737, 0.5302921533584595, 3.3955209255218506, 1.8526530265808105, -5.831977367401123, -0.5608792901039124, -0.52732914686203, 1.1519194841384888, -3.8111307621002197, -1.112129807472229, -2.193333148956299, 3.558131456375122, -0.38883766531944275, -1.2926342487335205, -1.7179244756698608, 3.0252881050109863, -0.30636560916900635, -0.6726535558700562, -2.0738301277160645, 1.0538036823272705, -0.6432257890701294, -0.621713399887085, -1.2236216068267822, 0.47444531321525574, 
-1.533075213432312, 1.503252625465393, 1.7952961921691895, 2.1736719608306885, -0.3828437328338623, -0.4795142114162445, -0.7193837761878967, 1.4456597566604614, -0.02563435025513172, 0.5546603202819824, -1.2607388496398926, 1.1237564086914062, 2.7446420192718506, -1.68074369430542, -1.4911751747131348, 0.6633965373039246, 0.19930459558963776, 3.66977596282959, -2.2398242950439453, -0.29390445351600647, 0.2560953199863434, 0.26830923557281494, -2.39227032661438, 3.228013038635254, 1.5378494262695312, -0.4504263997077942, -2.826124668121338, 1.7755171060562134, 0.5379474759101868, 0.37574896216392517, 0.9193552136421204, 1.2337709665298462, -0.7457429766654968, 0.3981378376483917, 1.9126510620117188, -1.457673192024231, -1.840986967086792, -1.0645390748977661, -0.1767304390668869, 1.188957691192627, 1.2876298427581787, -0.8412945866584778, -0.25044959783554077, -1.0699965953826904, 0.009314493276178837, 0.47715994715690613, -1.6440861225128174, -0.5907453298568726, -1.049324631690979, 1.0390734672546387, 0.6445403099060059, 0.833937406539917, -0.355325847864151, 0.0994211733341217, -0.0302878487855196, 0.12409967184066772, -0.3736986219882965, 2.322896718978882, -0.07213949412107468, -0.041175637394189835, 0.15898191928863525, -1.2797447443008423, -1.7271647453308105, 1.1250183582305908, 0.053053118288517, 0.21516209840774536, -0.62578946352005, 1.643478512763977, 1.5589592456817627, 0.5566443800926208, -0.18252010643482208, 0.5588923096656799, -2.417508125305176, 1.536683440208435, 2.6799542903900146, 3.126356363296509, -1.7247638702392578, 0.7768693566322327, 0.15074074268341064, -0.7899144291877747, -0.1392408013343811, -1.8526852130889893, 0.03772513195872307, -0.5075445771217346, 0.2553730010986328, -0.8452396988868713, -0.804675817489624, 0.20948508381843567, 0.608883261680603, -0.43253928422927856, 2.2517855167388916, 1.1470715999603271, 0.057494793087244034, -1.487905502319336, -0.018844403326511383, -0.5127835273742676, -0.9914013743400574, 0.30636391043663025, 0.7900062203407288, 0.5838981866836548, -0.16234219074249268, -0.3470565378665924, -0.21970994770526886, 1.412819504737854, -2.344581365585327, 0.09724771976470947, -0.5757020711898804, 1.2181626558303833, -0.944413959980011, -0.6563422083854675, -0.5654497146606445, 2.407801628112793, 0.08510265499353409, 2.0938544273376465, 0.08230669051408768, 2.0056731700897217, -0.9489847421646118, -1.7223788499832153, -1.7133234739303589, -3.278630018234253, 1.6658223867416382, 0.10414383560419083, -0.5931969881057739, 0.6423833966255188, -2.9353301525115967, 3.526261568069458, -1.666553258895874, 0.9492028951644897, 0.667405366897583, -0.8604920506477356, 1.2735933065414429, -0.24551275372505188, 0.6441431045532227, -0.38227733969688416, -0.4630293846130371, 1.4358162879943848, 1.0937228202819824, 1.9490225315093994, 0.0740886926651001, 0.4029659032821655, -1.6319000720977783, 1.2711639404296875, -0.5974065661430359, -2.6834018230438232, 1.8502169847488403, 0.6386227607727051, 2.590479612350464, -0.49917230010032654, -2.5988664627075195, 1.9030545949935913, -0.3349710702896118, -2.7176058292388916, -1.4044554233551025, -2.1542625427246094, 0.39269959926605225, -0.3015066385269165, 0.15509101748466492, -1.8539525270462036, 3.4868879318237305, -1.4078190326690674, -3.222374200820923, -1.1986515522003174, -1.1208950281143188, 0.6884583830833435, -0.7585988640785217, 0.1059669777750969, 0.04318329319357872, -4.913561820983887, -0.05187537521123886, 3.5694751739501953, -1.9946166276931763, 0.014335528947412968, 0.04705454036593437, 
1.4365737438201904, -1.2839676141738892, -0.04703819751739502, 0.6318968534469604, -0.4648891091346741, 0.28053349256515503, -2.2494683265686035, 0.8773587346076965, 3.2937123775482178, 0.461525559425354, 4.590155601501465, -0.9878007173538208, -0.08247177302837372, -0.43144866824150085, -1.0715477466583252, 1.6967984437942505, -3.3572113513946533, -0.6096997261047363, 1.3075783252716064, -2.2616846561431885, 4.197009086608887, -0.4991415739059448, 0.6471449732780457, 0.4552414119243622, 1.0929334163665771, -1.582084059715271, -0.5286394357681274, -0.5518680810928345, 0.7354360818862915, -0.2584633231163025, -0.08173595368862152, -0.5867318511009216, -1.8880888223648071, -1.814834713935852, 1.7573798894882202, 3.9596621990203857, 1.5880887508392334, 0.7259516716003418, 1.955574631690979, 0.3088712990283966, -1.7798328399658203, 1.4348945617675781, 0.8652783036231995, -0.11939241737127304, -0.42505839467048645, -0.5959363579750061, 1.7220964431762695, 2.022887706756592, 2.318899631500244, -1.0285959243774414, 0.5574663877487183, 1.8598313331604004, 2.340881824493408, -1.114876627922058, -2.9373958110809326, -0.3807956278324127, 0.9138448238372803, 0.09876017272472382, 0.736687958240509, 0.6977685689926147, -0.6091060638427734, -2.6238436698913574, 1.2243366241455078, 1.5129908323287964, 0.9895787239074707, 0.01610621064901352, -0.7177698612213135, -0.586176872253418, -0.8468607664108276, -2.300959348678589, -0.276903361082077, -0.4521595537662506, -0.39529210329055786, 2.112332344055176, -2.060443162918091, -3.177922248840332, -0.5120137333869934, 0.10933879762887955, 0.11730089783668518, 0.25420263409614563, -0.34655097126960754, -2.9007911682128906, 0.003339624498039484, 0.3639955520629883, -1.388902187347412, 1.4442331790924072, -0.861194372177124, 0.16477303206920624, 2.8582944869995117, -3.2511274814605713, -0.9999625086784363, -1.9750611782073975, 0.20032551884651184, -0.7910523414611816, 1.3464692831039429, 0.4899722933769226, -2.324185609817505, 2.6362833976745605, -2.167820453643799, -1.1179255247116089, 0.26357337832450867, 2.388129949569702, -0.3871464133262634, 2.541254758834839, -1.5910060405731201, -0.1521669179201126, 2.4372799396514893, 0.49059635400772095, 0.143768772482872, -0.2824336290359497, -0.07930364459753036, 0.18067769706249237, -1.5470519065856934, 0.8585227131843567, -1.7051506042480469, 0.2304743379354477, 1.2718594074249268, -2.262291193008423, 0.6345257759094238, 1.7309871912002563, -1.0747532844543457, 0.8628502488136292, -1.0308325290679932, 1.6426581144332886, -0.1179797425866127, 2.114360809326172, 0.4001002311706543, 1.3091498613357544, -0.5761996507644653, 1.7613424062728882, -0.9532261490821838, 1.8100963830947876, -0.551224946975708, 1.0943084955215454, 1.995148777961731, -0.2399289757013321, -2.8592641353607178, 0.8448318839073181, 1.438583254814148, -0.7680769562721252, 0.12946569919586182, 0.7584971189498901, 2.126793622970581, -0.8385722637176514, -1.3371894359588623, -0.8095458149909973, 2.117802619934082, 1.1792303323745728, -3.2345151901245117, -0.5444381237030029, 2.1084394454956055, -2.4026038646698, 0.18834252655506134, -1.2292487621307373, 0.12423299252986908, -2.0310535430908203, 0.3255136013031006, 0.2849785387516022, -2.3633954524993896, -0.6746733784675598, -0.34001630544662476, -0.25642478466033936, -1.6001611948013306, 0.8522850871086121, 1.7623180150985718, -0.1964396983385086, -1.2936173677444458, -1.528385877609253, -1.102852702140808, 0.7027903199195862, -2.311084747314453, 0.06160559877753258, -5.711217403411865, 
3.7049355506896973, 0.27026474475860596, -0.921119213104248, 1.6805181503295898, 2.0733914375305176, -4.135998725891113, -0.9561137557029724, -0.6454806327819824, 0.55885910987854, -1.0215628147125244, -0.13304831087589264, -0.3172632157802582, -2.785482168197632, -0.3236042857170105, 2.439117908477783, 0.8945889472961426, -1.3276289701461792, 0.032644569873809814, 1.6577787399291992, 1.7553662061691284, -1.7791880369186401, 2.0067660808563232, -0.878115713596344, -0.22848550975322723, -0.07382026314735413, 0.6028909087181091, 0.9232040643692017, -0.7443209886550903, -1.1945438385009766, -0.5014027953147888, -0.6027995944023132, -0.9855751991271973, 0.7716651558876038, -1.7220836877822876, 0.5988412499427795, 0.6560685038566589, -1.4718652963638306, -0.09454447776079178, 0.39460813999176025, -1.0219866037368774, 0.16089311242103577, 1.2402374744415283, -3.279120922088623, -1.513095736503601, -1.7908998727798462, 1.5655872821807861, -0.9766507148742676, -0.3568771481513977, -0.6989377737045288, -2.275606870651245, -1.1739453077316284, 0.8857262134552002, 0.21379457414150238, 0.3872324228286743, 2.8312325477600098, 3.370190143585205, -1.2276592254638672, 2.5217015743255615, -2.6147425174713135, -1.7975482940673828, 0.2604275345802307, -0.9670408964157104, 1.0740933418273926, 0.0881202444434166, 0.3878750503063202, 3.7241787910461426, 2.5294928550720215, -1.554567813873291, 1.5883101224899292, 0.021601477637887, 0.7833694815635681, 0.7324634194374084, -1.0129834413528442, -1.7750601768493652, -1.6069577932357788, -0.00898703746497631, 0.6159497499465942, -0.21028690040111542, 1.0078929662704468, -1.3044366836547852, 5.082554340362549, 1.0289592742919922, -2.395045757293701, 2.4680073261260986, -0.2351224273443222, -1.6476593017578125, 0.38624653220176697, 0.2908729910850525, -0.40109455585479736, 1.2395310401916504, 1.575451135635376, -2.466839075088501, -1.930911898612976, -0.30898579955101013, 1.0600224733352661, 2.474728584289551, -0.5231278538703918, -1.1781158447265625, 2.0308663845062256, 0.27654165029525757, -1.2232980728149414, 1.4704314470291138, -0.700169563293457, -2.6749267578125, -1.2611212730407715, -1.5050514936447144, -0.9820262789726257, 1.3202519416809082, 1.7085771560668945, 2.4008524417877197, 0.5397467017173767, -2.5096402168273926, 1.4448264837265015, -2.4320006370544434, -0.6138431429862976, -0.7960938811302185, -0.8046653866767883, 0.36194565892219543, 1.4644893407821655, -0.36692118644714355, -0.3842164874076843, 0.9461280703544617, -0.394505113363266, -2.6483609676361084, -1.1774756908416748, 0.20689310133457184, -0.6184566020965576, -0.5069551467895508, 1.5505434274673462, 0.313493013381958, -0.9208681583404541, -0.5244215130805969, -0.07132044434547424, -1.0078376531600952, -0.3041566014289856, -2.9547841548919678, 0.13732536137104034, 1.058887243270874, 0.623813271522522, 1.536534070968628, 0.710353434085846, -2.091754198074341, 0.3863103687763214, -2.146207332611084, -0.2651400566101074, 0.3908107578754425, -2.1654295921325684, -0.4906494915485382, 2.2715344429016113, 0.7958000302314758, -0.3529462516307831, 0.023320848122239113, -0.6318991780281067, 0.7415646910667419, -1.5158635377883911, -1.92628014087677, 0.3778543174266815, -1.0284225940704346, 0.3418554365634918, -0.4106570780277252, 0.29304441809654236, -2.428920269012451, -0.12348226457834244, -0.34103113412857056, 0.02815360762178898, 1.9101290702819824, -1.278517246246338, -0.7780016660690308, 1.8167794942855835, 2.5061824321746826, 1.2782561779022217, -1.0568351745605469, 0.6961120367050171, 
0.6501976847648621, -2.756662130355835, -1.0097459554672241, -0.9929289221763611, 0.9298126101493835, 2.3535094261169434, 27.893369674682617, 0.9989926815032959, 1.635241150856018, 0.3050057590007782, -0.11045846343040466, 0.48667430877685547, 1.4059665203094482, 2.3953042030334473, 0.24139665067195892, 1.2205312252044678, 1.4274930953979492, 1.1422854661941528, -1.2699135541915894, 0.38328030705451965, 2.3638064861297607, -0.2291434407234192, 3.1154348850250244, 0.5472202301025391, -0.10703212767839432, -1.256062626838684, -0.8193093538284302, 1.7242975234985352, -2.0377373695373535, 1.5178602933883667, 0.7586110830307007, -1.773211121559143, 0.90008145570755, 1.244199275970459, 1.8370442390441895, -1.6146992444992065, -0.5313140153884888, -0.8352211117744446, -0.28806909918785095, 2.07943058013916, -2.1276118755340576, 4.714601039886475, 0.08501234650611877, -1.0854072570800781, 0.45539429783821106, 0.02574874833226204, -0.7017617225646973, 0.271499365568161, -1.543891429901123, 1.1715095043182373, -4.165060520172119, -3.5382204055786133, -0.959351122379303, 0.586280107498169, -0.664473831653595, 0.24653545022010803, -1.3207391500473022, 1.1021311283111572, 0.8513509631156921, -0.22090765833854675, -1.2186039686203003, 0.6458785533905029, 0.068841353058815, -0.9462994337081909, -0.736159086227417, 2.489241361618042, 1.08546781539917, 0.17249566316604614, 0.00963551551103592, -2.0986745357513428, -0.18537047505378723, -1.241287112236023, 0.9592534899711609, -0.43631333112716675, 1.8670296669006348, -1.1359080076217651, 2.3669395446777344, -1.5876514911651611, -1.8304880857467651, 0.8184749484062195, 0.7685567736625671, 0.8345807194709778, 0.01114408578723669, 0.7298959493637085, -0.7284532785415649, -0.5363021492958069, -0.9247578978538513, -2.17104172706604, -0.6724880933761597, 2.363757848739624, 0.08590041846036911, 2.059079170227051, -2.2278695106506348, 3.668748140335083, 0.8368174433708191, 1.6728285551071167, -1.9286187887191772, -0.7129634618759155, -0.18277931213378906, 1.9877017736434937, -1.999313473701477, 0.6556553244590759, 2.9140737056732178, -0.3444043695926666, -0.4161573648452759, -1.4394901990890503, 1.290708065032959, 0.2468632608652115, -0.8644528388977051, 0.022347690537571907, -0.46164897084236145, 2.0218238830566406, 0.6671098470687866, 1.6139602661132812, 3.657604217529297, 2.271261692047119, 2.3326733112335205, 0.3738059401512146, 0.35563138127326965, -1.510993242263794, -0.29949405789375305, -1.237746238708496, -1.174346923828125, 0.6250507235527039, 0.5889301896095276, 0.03296980261802673, 0.5837801694869995, -1.3075876235961914, 2.2138357162475586, 0.8216298222541809, -0.16598419845104218, -0.3695119023323059, -0.1725255250930786, 0.7056125998497009, 0.5911400318145752, -1.3572112321853638, -1.7939324378967285, -0.346815824508667, 2.936661958694458, -1.8363295793533325, -2.0917155742645264, 1.1098142862319946, -1.650669813156128, 3.2686774730682373, -0.9288081526756287, 0.2646131217479706, 1.261751413345337, -2.543142557144165, 6.293051719665527, -2.597097873687744, -1.2042756080627441, -2.097094774246216, -1.8804082870483398, 0.9535214304924011, 1.670982837677002, 1.003290057182312, 4.251725196838379, 1.2506277561187744, 1.150233507156372, -1.8020832538604736, -0.3403712511062622, -0.8620516061782837, -1.283129334449768, -0.3915810286998749, 2.7018449306488037, -0.10127142071723938, -0.00876553077250719, 7.760560989379883, -2.298708438873291, 1.0014913082122803, -0.7197350263595581, 0.8198022842407227, 0.5770737528800964, -0.6671212315559387, 
-1.9607622623443604, -3.9859671592712402, 0.8894888162612915, 0.3556593656539917, -1.2468639612197876, -0.42202192544937134, -0.8496314287185669, 2.4973671436309814, 1.2184630632400513, -1.3097401857376099, -1.4257316589355469, -0.8838949799537659, 2.522961378097534, 1.0242716073989868, 1.1449272632598877, 1.494399070739746, 1.3268615007400513, 0.7323814630508423, 0.5462021827697754, -4.27741813659668, -0.5482227206230164, 0.6894055604934692, -1.457056999206543, -1.8107671737670898, 1.7643498182296753, -1.6268867254257202, -1.6463972330093384, 0.7533250451087952, -1.5215373039245605, 0.7346979975700378, -0.3701346814632416, -0.0226410161703825, -0.6458364725112915, -1.3796308040618896, -0.3815940320491791, 6.269187927246094, 2.289961338043213, -0.9773929715156555, -0.249546617269516, -1.6514405012130737, 0.867066502571106, 0.22829703986644745, -0.4617983400821686, 3.3042094707489014, 0.9521559476852417, -0.695234477519989, 2.962653398513794, -0.8236230611801147, 0.20833659172058105, 0.5054753422737122, 0.15649761259555817, 0.3403320610523224, -0.32528480887413025, -1.026519775390625, -0.8924757242202759, -1.8446648120880127, 2.6933515071868896, 1.8860138654708862, 0.46468058228492737, 0.48231080174446106, -0.8378691077232361, -1.9460488557815552, -1.1861300468444824, 0.7595608234405518, -1.095468521118164, 1.4308674335479736, 0.328189879655838, -2.451094388961792, -2.8908376693725586, -0.4236178398132324, -1.6981369256973267, 0.07236644625663757, -0.9503749012947083, 0.8383578658103943, 1.0358505249023438, 0.7380673885345459, 2.28603196144104, -1.8723185062408447, 0.5223669409751892, -0.011290911585092545, -0.7238665223121643, -1.6246486902236938, -2.181584596633911, 1.508367657661438, -0.6955671310424805, -6.630421161651611, 1.5550339221954346, 0.05992800369858742, 0.9386507272720337, -2.148855209350586, -2.04305100440979, 1.38173246383667, -1.2380393743515015, -3.3567206859588623, -1.3756507635116577, -0.2942374348640442, -4.111190319061279, 0.32021233439445496, -2.2395267486572266, -0.8271233439445496, -0.5836808085441589, 1.9801377058029175, -0.9668284058570862, 1.8952913284301758, 1.645387053489685, -0.14554183185100555, 1.147283911705017, -3.311444044113159, -0.201595276594162, -0.5542925596237183, 1.3598580360412598, 0.26370614767074585, 0.023029671981930733, -0.921843409538269, -2.9373505115509033, -0.2886929214000702, 0.4618637263774872, -1.1411409378051758, 2.7564940452575684, -2.9174437522888184, -0.6974139213562012, 2.123971462249756, -1.2719080448150635, -0.05564053729176521, -2.2673184871673584, -0.12627746164798737, -0.7531415820121765, 0.538124680519104, 0.9171910285949707, 0.16229069232940674, -1.6697087287902832, -0.15993909537792206, -1.8202638626098633, -0.1887633353471756, -0.7874069213867188, -1.3994258642196655, -0.3914186656475067, -2.069002389907837, 0.14583337306976318, 0.13571859896183014, 1.0151398181915283, -1.4915581941604614, -0.05901025980710983, -0.1938810497522354, 0.3131210207939148, -0.16058966517448425, -0.9250679016113281, -14.631373405456543, 0.9575139880180359, 3.1770806312561035, 1.2021996974945068, -0.6654183268547058, 3.9404962062835693, -0.7658974528312683, 2.7717905044555664, -1.520410418510437, 0.3642917275428772, -0.7192654609680176, 1.9125748872756958, 0.9570345878601074, -0.09266321361064911, -0.38360461592674255, 1.738484263420105, -3.2710161209106445, -1.7709176540374756, -2.0774242877960205, -0.3601045608520508, 0.5720903277397156, -0.699288010597229, 0.10553744435310364, -0.18496277928352356, 0.7611597180366516, -1.770328402519226, 
-2.7276382446289062, 1.824327826499939, -2.353358745574951, -0.402118444442749, 1.1608465909957886, 0.7886192798614502, -0.9140638113021851, -1.318404197692871, -0.4397779405117035, 2.865103006362915, -0.0457182377576828, -0.7885135412216187, 0.9373155236244202, -2.107434034347534, -0.38358789682388306, -0.3919948637485504, 2.923556327819824, -4.701347827911377, -0.7249741554260254, -0.9489683508872986, 1.0044702291488647, -0.11666374653577805, -1.3404510021209717, 0.5153619647026062, 0.04754114896059036, -0.19456803798675537, 1.3827818632125854, -2.0031208992004395, -1.289810299873352, 3.416640520095825, -2.449042797088623, 0.9355893135070801, 1.6686389446258545, 0.7991522550582886, -0.563110888004303, 1.418690800666809, -0.8917520642280579, 2.360565185546875, 2.634204626083374, 1.5688698291778564, -0.45071038603782654, -3.2660880088806152, -1.4052941799163818, 1.387974500656128, -0.23124323785305023, -1.476924180984497, 0.5204784870147705, 0.34926602244377136, -2.4898107051849365, -1.7497012615203857, 0.7724961042404175, -0.0890677198767662, 0.13224686682224274, 1.2534589767456055, 0.045317936688661575, 0.06332586705684662, 3.345268726348877, 0.8872537612915039, 0.6012753248214722, -0.6033196449279785, -0.5802770256996155, 0.3494185507297516, -1.682992935180664, -1.1012550592422485, 0.5895649790763855, 2.7002875804901123, 1.0863090753555298, -1.7454692125320435, -1.0909974575042725, 1.7235828638076782, 1.070810079574585, 0.9742421507835388, 0.06108007952570915, 1.931785225868225, -2.0204646587371826, -2.1400067806243896, -1.0201374292373657, 1.1510684490203857, -1.5037842988967896, -0.27043673396110535, 0.22798877954483032, -0.21005190908908844, 1.2690585851669312, 0.7277141213417053, 0.5758188366889954, -0.5459479689598083, -2.0902504920959473, -2.0736305713653564, -0.7945910096168518, -1.9498969316482544, -2.2743165493011475, 0.13061034679412842, -0.47374510765075684, -1.5163371562957764, 2.2691502571105957, 0.6805631518363953, 1.4631695747375488, 1.3238294124603271, -0.6621432304382324, -0.8533355593681335, 3.7632603645324707, 3.0241312980651855, -8.06316089630127, 1.8399620056152344, -0.852032482624054, 1.584251046180725, 0.41511836647987366, 0.22672411799430847, -0.26263105869293213, -3.6368632316589355, 0.926706075668335, 1.6890989542007446, 1.4503737688064575, -0.7642179131507874, -0.8178099989891052, 1.9415658712387085, -2.3238351345062256, 0.21372850239276886, 6.099509239196777, 4.171093463897705, 1.5177711248397827, -1.1565263271331787, 0.9976243376731873, -0.4523465931415558, 0.013580133207142353, 0.12584920227527618, 0.2991982400417328, 0.6719919443130493, -0.3317100703716278, -1.9753837585449219, -0.007987353019416332, 1.5750924348831177, -1.1654324531555176, 0.29240575432777405, -1.4655816555023193, -3.045579195022583, -2.5024802684783936, -0.40280434489250183, -0.7322313189506531, 0.10708696395158768, -2.0583841800689697, -1.045668601989746, -1.9754096269607544, -0.20613901317119598, 1.688043236732483, -0.06682968884706497, -2.257188081741333, -3.6643080711364746, -0.20721864700317383, -0.31327947974205017, -3.6634974479675293, -0.1695028841495514, -0.4593466520309448, 1.0550178289413452, -0.31605079770088196, 0.33697763085365295, 1.8109651803970337, -0.39704281091690063, 1.5428825616836548, 0.0765533298254013, -0.7723068594932556, -0.008361696265637875, -0.027305293828248978, 0.9093282222747803, 1.4793466329574585, -0.09230943024158478, 0.2398260086774826, 1.9512848854064941, 2.1526379585266113, -1.1372538805007935, -0.9880079030990601, 0.05866040289402008, 
1.6449939012527466, 1.2967973947525024, -2.3071162700653076, 0.43727558851242065, -1.2817187309265137, -0.026710188016295433, 0.18430902063846588, 1.378725290298462, -0.9239446520805359, 0.27773207426071167, 0.3913203775882721, -0.4901234805583954, -1.6399188041687012, -0.12080557644367218, 0.7691868543624878, 0.1709577590227127, 0.10396196693181992, -2.130411386489868, -2.179257392883301, 0.7922729253768921, 0.27633994817733765, -1.7050774097442627, 0.6258018612861633, -2.0217652320861816, 0.6698062419891357, -0.8379725813865662, -1.3636385202407837, -0.9972206354141235, 0.7543817162513733, 0.05158863589167595, -2.257720470428467, 0.442294716835022, -1.8589301109313965, -0.500280499458313, 0.25550076365470886, -3.839138984680176, 0.4164075553417206, -1.7582212686538696, 1.8491343259811401, 0.320035457611084, 1.887444257736206, 3.1942121982574463, 0.1120339184999466, -0.5607714056968689, -0.1297776848077774, -0.8522632122039795, -3.525956153869629, -1.5982003211975098, 2.4504852294921875, 2.46470046043396, -0.8185501098632812, -0.5449082255363464, 2.8579764366149902, -0.044694188982248306, 1.0574771165847778, 1.4608573913574219, 1.3664439916610718, 0.7093403935432434, -2.4899682998657227, -1.9996600151062012, 0.4483301341533661, 1.8011810779571533, -0.9083479046821594, 0.1403864026069641, 1.2353026866912842, 1.4890071153640747, 0.5965154767036438, -2.2207891941070557, -0.386689692735672, 1.0173559188842773, 0.3317832052707672, 1.242241621017456, 8.096700668334961, -1.3860564231872559, -0.48307186365127563, 2.5056164264678955, -4.412651538848877, 1.4777299165725708, 1.2915771007537842, -0.3042348027229309, 1.3734688758850098, -1.0148760080337524, 0.29798030853271484, 1.5803537368774414, 1.6444553136825562, 0.5807373523712158, 2.011157512664795, 2.430384874343872, -0.001317560556344688, -0.37967628240585327, -2.5261998176574707, 3.2119202613830566, 1.7307785749435425, 2.321204900741577, -3.089421510696411, -1.120242714881897, -2.4553184509277344, 2.1926932334899902, -1.463491678237915, -0.39328238368034363, 4.166314601898193, -0.6354401707649231, 1.4693533182144165, 1.5991348028182983, -0.22541369497776031, 0.7343212962150574, 0.1794258952140808, -2.6583163738250732, 0.0027457335963845253, 1.6476435661315918, 1.0695385932922363, 0.8916047811508179, -2.3013198375701904, -1.501152515411377, 1.6795622110366821, 0.7713955044746399, 0.4782435894012451, 0.23006942868232727, 2.595839500427246, 0.2424996942281723, -0.5558034777641296, -0.04674000293016434, -0.6988910436630249, -0.429269403219223, -0.1290259063243866, 0.3222062587738037, 1.017810344696045, -0.5098836421966553, -3.4084291458129883, 0.3000796139240265, 0.7957308888435364, 0.7062281370162964, 1.6956732273101807, 0.5430508852005005, -0.3600875437259674, -1.298385739326477, 1.9226042032241821, 1.5142651796340942, -3.1519079208374023, -0.7966042160987854, -0.27132460474967957, -0.5806691646575928, 2.560450792312622, 1.5697822570800781, -0.4995734989643097, 0.29847368597984314, 0.07077287137508392, -0.12948045134544373, -3.5200178623199463, 0.6674454212188721, -1.3807265758514404, -0.4995282292366028, 1.9198191165924072, 0.5224218964576721, 2.4898221492767334, 11.09000015258789, 0.9179505705833435, -1.7494560480117798, 1.579803466796875, -2.7534961700439453, -1.3340791463851929, 1.9154255390167236, -0.01608842983841896, 0.821875810623169, -0.2625766098499298, 1.5072975158691406, -0.713702380657196, -1.4145824909210205, -1.5109056234359741, 2.1455888748168945, -1.419687271118164, -0.5414632558822632, 1.4491149187088013, 1.5224276781082153, 
0.8204352855682373, -1.070623755455017, 0.46470969915390015, -0.006221574731171131, -0.18256701529026031, 2.493424892425537, -0.49038708209991455, 0.42922085523605347, 0.873096227645874, -0.31695419549942017, 2.991065740585327, -1.3125733137130737, 0.5723339319229126, 0.2613622844219208, -1.9564348459243774, 2.178072452545166, -1.5708738565444946, 0.8963414430618286, 1.5022779703140259, 2.5450186729431152, -0.292618989944458, 0.15747855603694916, 2.1199207305908203, 0.21814104914665222, -0.8757757544517517, 0.07445792108774185, 0.07510267198085785, -0.5053762197494507, 0.7606169581413269, -3.169386625289917, -1.1002830266952515, 1.8861533403396606, 2.0080013275146484, -1.7342684268951416, -1.1598358154296875, -0.7158825993537903, -0.1937912255525589, -2.8064157962799072, 0.755673348903656, 8.499192237854004, -0.7812408804893494, 1.57917058467865, -3.151332139968872, -1.9226319789886475, -1.5604653358459473, 0.5534848570823669, 3.228034496307373, -1.6294361352920532, -0.27278730273246765, -0.867935061454773, 2.1341497898101807, 1.1075159311294556, 0.7477016448974609, 2.5511136054992676, -1.5523147583007812, -0.9242894053459167, 0.8773165941238403, 1.6915799379348755, -1.1594383716583252, 0.23813001811504364, -1.4064743518829346, -1.6849969625473022, -2.9580302238464355, -2.5688488483428955, -1.1904170513153076, -3.782924175262451, 0.7100740671157837, -1.3624398708343506, -0.9443717002868652, -0.5225216746330261, -0.09034554660320282, -2.3202784061431885, -0.23590344190597534, -1.5452443361282349, 1.2575849294662476, 1.4288854598999023, 1.638762354850769, -1.7967208623886108, 1.0915971994400024, 0.9493638873100281, 1.095393419265747, 0.8215399980545044, -0.2051163911819458, 2.168558359146118, -1.6670429706573486, -0.049629729241132736, 2.85097599029541, -0.4837287664413452, 0.6502736210823059, -2.374113082885742, 0.7011888027191162, -1.978821039199829, -0.15510064363479614, 0.4679356813430786, 1.8866007328033447, 2.520395278930664, -1.1996338367462158, 0.7295427322387695, 0.9605655074119568, 0.05692993104457855, 0.7287044525146484, 3.7953286170959473, 2.68047833442688, 0.4475618600845337, 0.5628949999809265, 0.4778791069984436, -0.5932527184486389, 1.836578130722046, 1.5961389541625977, 1.3328230381011963, -0.7625845670700073, 0.964162290096283, 1.548017978668213, 0.9993221759796143, -1.4471023082733154, 1.100744366645813, -1.5122473239898682, -0.6169258952140808, 3.0650243759155273, -1.7722645998001099, -0.18872833251953125, -1.5391753911972046, 0.2957899868488312, -0.3034318685531616, 0.7158978581428528, 11.45010757446289, -0.970210611820221, -0.5953302979469299, 0.5357429385185242, -1.7459461688995361, 0.6572960615158081, 0.5218455195426941, -0.251964807510376, 1.4631516933441162, 4.249364376068115, -1.0942943096160889, -0.9652121067047119, -1.0656694173812866, -1.9772387742996216, -1.6469305753707886, -1.335737705230713, -1.819305658340454, 0.03515125438570976, -0.6280084848403931, 2.1817753314971924, 1.5289617776870728, 2.5101521015167236, -0.6491972208023071, -8.361392974853516, 0.06266439706087112, -2.3298821449279785, 0.3874412477016449, -0.23243151605129242, -3.78399658203125, 0.6930876970291138, 0.44730332493782043, -0.9292389750480652, -1.092700481414795, 1.0822983980178833, 0.38801273703575134, -2.0460126399993896, -0.28162679076194763, 0.9888787269592285, 0.05821562930941582, 3.9159140586853027, 0.17979349195957184, 1.6432956457138062, -0.40627729892730713]}}}}]}}}\n [NodeWithScore(node=TextNode(id_='657e40fb-497c-4c1a-8524-6351adbe990f', embedding=None, metadata={'director': 
'Francis Ford Coppola', 'theme': 'Mafia'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='81cf4b9e847ba42e83fc401e31af8e17d629f0d5cf9c0c320ec7ac69dd0257e1', text='The Godfather', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.5), NodeWithScore(node=TextNode(id_='fc548a8e-5a1e-4392-bdce-08f8cb888c3f', embedding=None, metadata={'director': 'Francis Ford Coppola', 'theme': 'Mafia'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='81cf4b9e847ba42e83fc401e31af8e17d629f0d5cf9c0c320ec7ac69dd0257e1', text='The Godfather', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.0005)]"} {"tokens": 717, "doc_id": "277582b6-e0c5-4a16-82ab-5bb10a14f24f", "name": "Astra DB", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/AstraDBIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Astra DB\n\n>[DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Apache Cassandra and accessed through an easy-to-use JSON API.\n\nTo run this notebook you need a DataStax Astra DB instance running in the cloud (you can get one for free at [datastax.com](https://astra.datastax.com)).\n\nYou should ensure you have `llama-index` and `astrapy` installed:\n\n\n```python\n%pip install llama-index-vector-stores-astra-db\n```\n\n\n```python\n!pip install llama-index\n!pip install \"astrapy>=0.6.0\"\n```\n\n### Please provide database connection parameters and secrets:\n\n\n```python\nimport os\nimport getpass\n\napi_endpoint = input(\n \"\\nPlease enter your Database Endpoint URL (e.g. 'https://4bc...datastax.com'):\"\n)\n\ntoken = getpass.getpass(\n \"\\nPlease enter your 'Database Administrator' Token (e.g. 'AstraCS:...'):\"\n)\n\nos.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\n \"\\nPlease enter your OpenAI API Key (e.g. 
'sk-...'):\"\n)\n```\n\n### Import needed package dependencies:\n\n\n```python\nfrom llama_index.core import (\n VectorStoreIndex,\n SimpleDirectoryReader,\n StorageContext,\n)\nfrom llama_index.vector_stores.astra_db import AstraDBVectorStore\n```\n\n### Load some example data:\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n### Read the data:\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\nprint(f\"Total documents: {len(documents)}\")\nprint(f\"First document, id: {documents[0].doc_id}\")\nprint(f\"First document, hash: {documents[0].hash}\")\nprint(\n \"First document, text\"\n f\" ({len(documents[0].text)} characters):\\n{'='*20}\\n{documents[0].text[:360]} ...\"\n)\n```\n\n### Create the Astra DB Vector Store object:\n\n\n```python\nastra_db_store = AstraDBVectorStore(\n token=token,\n api_endpoint=api_endpoint,\n collection_name=\"astra_v_table\",\n embedding_dimension=1536,\n)\n```\n\n### Build the Index from the Documents:\n\n\n```python\nstorage_context = StorageContext.from_defaults(vector_store=astra_db_store)\n\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n### Query using the index:\n\n\n```python\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"Why did the author choose to work on AI?\")\n\nprint(response.response)\n```"} {"tokens": 1273, "doc_id": "b5f2abfe-6ee9-425f-848f-35d721aac12f", "name": "DocArray InMemory Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/DocArrayInMemoryIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# DocArray InMemory Vector Store\n\n[DocArrayInMemoryVectorStore](https://docs.docarray.org/user_guide/storing/index_in_memory/) is a document index provided by [Docarray](https://github.com/docarray/docarray) that stores documents in memory. 
It is a great starting point for small datasets, where you may not want to launch a database server.\n\n\n\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-docarray\n```\n\n\n```python\n!pip install llama-index\n```\n\n\n```python\nimport os\nimport sys\nimport logging\nimport textwrap\n\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\n\n# stop huggingface warnings\nos.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n\n# Uncomment to see debug logs\n# logging.basicConfig(stream=sys.stdout, level=logging.INFO)\n# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n\nfrom llama_index.core import (\n GPTVectorStoreIndex,\n SimpleDirectoryReader,\n Document,\n)\nfrom llama_index.vector_stores.docarray import DocArrayInMemoryVectorStore\nfrom IPython.display import Markdown, display\n```\n\n\n```python\nimport os\n\nos.environ[\"OPENAI_API_KEY\"] = \"\"\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\nprint(\n \"Document ID:\",\n documents[0].doc_id,\n \"Document Hash:\",\n documents[0].doc_hash,\n)\n```\n\n Document ID: 1c21062a-50a3-4133-a0b1-75f837a953e5 Document Hash: 77ae91ab542f3abb308c4d7c77c9bc4c9ad0ccd63144802b7cbe7e1bb3a4094e\n\n\n## Initialization and indexing\n\n\n```python\nfrom llama_index.core import StorageContext\n\n\nvector_store = DocArrayInMemoryVectorStore()\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = GPTVectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n## Querying\n\n\n```python\n# set Logging to DEBUG for more detailed outputs\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"What did the author do growing up?\")\nprint(textwrap.fill(str(response), 100))\n```\n\n Token indices sequence length is longer than the specified maximum sequence length for this model (1830 > 1024). Running this sequence through the model will result in indexing errors\n\n\n Growing up, the author wrote short stories, programmed on an IBM 1401, and nagged his father to buy\n him a TRS-80 microcomputer. He wrote simple games, a program to predict how high his model rockets\n would fly, and a word processor. He also studied philosophy in college, but switched to AI after\n becoming bored with it. He then took art classes at Harvard and applied to art schools, eventually\n attending RISD.\n\n\n\n```python\nresponse = query_engine.query(\"What was a hard moment for the author?\")\nprint(textwrap.fill(str(response), 100))\n```\n\n A hard moment for the author was when he realized that the AI programs of the time were a hoax and\n that there was an unbridgeable gap between what they could do and actually understanding natural\n language. 
He had invested a lot of time and energy into learning about AI and was disappointed to\n find out that it was not going to get him the results he had hoped for.\n\n\n## Querying with filters\n\n\n```python\nfrom llama_index.core.schema import TextNode\n\nnodes = [\n TextNode(\n text=\"The Shawshank Redemption\",\n metadata={\n \"author\": \"Stephen King\",\n \"theme\": \"Friendship\",\n },\n ),\n TextNode(\n text=\"The Godfather\",\n metadata={\n \"director\": \"Francis Ford Coppola\",\n \"theme\": \"Mafia\",\n },\n ),\n TextNode(\n text=\"Inception\",\n metadata={\n \"director\": \"Christopher Nolan\",\n },\n ),\n]\n```\n\n\n```python\nfrom llama_index.core import StorageContext\n\n\nvector_store = DocArrayInMemoryVectorStore()\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n\nindex = GPTVectorStoreIndex(nodes, storage_context=storage_context)\n```\n\n\n```python\nfrom llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters\n\n\nfilters = MetadataFilters(\n filters=[ExactMatchFilter(key=\"theme\", value=\"Mafia\")]\n)\n\nretriever = index.as_retriever(filters=filters)\nretriever.retrieve(\"What is inception about?\")\n```\n\n\n\n\n [NodeWithScore(node=Node(text='director: Francis Ford Coppola\\ntheme: Mafia\\n\\nThe Godfather', doc_id='41c99963-b200-4ce6-a9c4-d06ffeabdbc5', embedding=None, doc_hash='b770e43e6a94854a22dc01421d3d9ef6a94931c2b8dbbadf4fdb6eb6fbe41010', extra_info=None, node_info=None, relationships={<DocumentRelationship.SOURCE: '1'>: 'None'}), score=0.7681788983417586)]"} {"tokens": 1380, "doc_id": "d6073bd0-7493-4449-9204-d6983f8c7ee8", "name": "Supabase Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/SupabaseVectorIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "# Supabase Vector Store\nIn this notebook we are going to show how to use [Vecs](https://supabase.github.io/vecs/) to perform vector searches in LlamaIndex.\nSee [this guide](https://supabase.github.io/vecs/hosting/) for instructions on hosting a database on Supabase.\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-supabase\n```\n\n\n```python\n!pip install llama-index\n```\n\n\n```python\nimport logging\nimport sys\n\n# Uncomment to see debug logs\n# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\n# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n\nfrom llama_index.core import SimpleDirectoryReader, Document, StorageContext\nfrom llama_index.core import VectorStoreIndex\nfrom llama_index.vector_stores.supabase import SupabaseVectorStore\nimport textwrap\n```\n\n### Setup OpenAI\nThe first step is to configure the OpenAI key. 
It will be used to create embeddings for the documents loaded into the index.\n\n\n```python\nimport os\n\nos.environ[\"OPENAI_API_KEY\"] = \"[your_openai_api_key]\"\n```\n\nDownload Data\n\n\n```python\n!mkdir -p 'data/paul_graham/'\n!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n### Loading documents\nLoad the documents stored in `./data/paul_graham/` using the SimpleDirectoryReader.\n\n\n```python\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\nprint(\n \"Document ID:\",\n documents[0].doc_id,\n \"Document Hash:\",\n documents[0].doc_hash,\n)\n```\n\n Document ID: fb056993-ee9e-4463-80b4-32cf9509d1d8 Document Hash: 77ae91ab542f3abb308c4d7c77c9bc4c9ad0ccd63144802b7cbe7e1bb3a4094e\n\n\n### Create an index backed by Supabase's vector store\nThis will work with all Postgres providers that support pgvector.\nIf the collection does not exist, we will attempt to create a new collection.\n\n> Note: you need to pass in the embedding dimension if not using OpenAI's text-embedding-ada-002, e.g. `vector_store = SupabaseVectorStore(..., dimension=...)`\n\n\n```python\nvector_store = SupabaseVectorStore(\n postgres_connection_string=(\n \"postgresql://:@:/\"\n ),\n collection_name=\"base_demo\",\n)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n### Query the index\nWe can now ask questions using our index.\n\n\n```python\nquery_engine = index.as_query_engine()\nresponse = query_engine.query(\"Who is the author?\")\n```\n\n /Users/suo/miniconda3/envs/llama/lib/python3.9/site-packages/vecs/collection.py:182: UserWarning: Query does not have a covering index for cosine_distance. See Collection.create_index\n warnings.warn(\n
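This warning comes from the underlying vecs client: the collection has no vector index yet, so queries fall back to a sequential scan. One way to build the index is with the vecs client directly. A minimal sketch, assuming the same (placeholder) connection string and the `base_demo` collection created above; `create_client`, `get_or_create_collection`, and `create_index` are part of the vecs package, not LlamaIndex:

```python
import vecs

# connect with the same Postgres connection string used for the vector store
# (placeholder credentials as above; substitute your own)
vx = vecs.create_client("postgresql://:@:/")
docs = vx.get_or_create_collection(name="base_demo", dimension=1536)
docs.create_index()  # builds an index for the default cosine-distance measure
```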
\n\n```python\nprint(textwrap.fill(str(response), 100))\n```\n\n The author of this text is Paul Graham.\n\n\n\n```python\nresponse = query_engine.query(\"What did the author do growing up?\")\n```\n\n\n```python\nprint(textwrap.fill(str(response), 100))\n```\n\n The author grew up writing essays, learning Italian, exploring Florence, painting people, working\n with computers, attending RISD, living in a rent-stabilized apartment, building an online store\n builder, editing Lisp expressions, publishing essays online, writing essays, painting still life,\n working on spam filters, cooking for groups, and buying a building in Cambridge.\n\n\n## Using metadata filters\n\n\n```python\nfrom llama_index.core.schema import TextNode\n\nnodes = [\n TextNode(\n **{\n \"text\": \"The Shawshank Redemption\",\n \"metadata\": {\n \"author\": \"Stephen King\",\n \"theme\": \"Friendship\",\n },\n }\n ),\n TextNode(\n **{\n \"text\": \"The Godfather\",\n \"metadata\": {\n \"director\": \"Francis Ford Coppola\",\n \"theme\": \"Mafia\",\n },\n }\n ),\n TextNode(\n **{\n \"text\": \"Inception\",\n \"metadata\": {\n \"director\": \"Christopher Nolan\",\n },\n }\n ),\n]\n```\n\n\n```python\nvector_store = SupabaseVectorStore(\n postgres_connection_string=(\n \"postgresql://:@:/\"\n ),\n collection_name=\"metadata_filters_demo\",\n)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex(nodes, storage_context=storage_context)\n```\n\nDefine metadata filters\n\n\n```python\nfrom llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters\n\nfilters = MetadataFilters(\n filters=[ExactMatchFilter(key=\"theme\", value=\"Mafia\")]\n)\n```\n\nRetrieve from vector store with filters\n\n\n```python\nretriever = index.as_retriever(filters=filters)\nretriever.retrieve(\"What is inception about?\")\n```\n\n\n\n\n [NodeWithScore(node=Node(text='The Godfather', doc_id='f837ed85-aacb-4552-b88a-7c114a5be15d', embedding=None, doc_hash='f8ee912e238a39fe2e620fb232fa27ade1e7f7c819b6d5b9cb26f3dddc75b6c0', extra_info={'theme': 'Mafia', 'director': 'Francis Ford Coppola'}, node_info={'_node_type': '1'}, relationships={}), score=0.20671339734643313)]"} {"tokens": 2521, "doc_id": "0e56587e-68ec-4f86-8d22-c6f4380d8265", "name": "Milvus Vector Store With Hybrid Retrieval", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/MilvusHybridIndexDemo", "retrieve_doc": true, "source": "llama_index", "content": "# Milvus Vector Store With Hybrid Retrieval\n\nIn this notebook we are going to show a quick demo of using the MilvusVectorStore with hybrid retrieval. (The Milvus version should be 2.4.0 or higher.)\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-milvus\n```\n\nBGE-M3 from FlagEmbedding is used as the default sparse embedding method, so it needs to be installed along with llama-index.\n\n\n```python\n! pip install llama-index\n! pip install FlagEmbedding\n```\n
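Hybrid retrieval relies on the sparse-vector search that Milvus introduced in 2.4.0, so it is worth confirming what your server is running before going further. A minimal sketch, assuming a default local deployment on port 19530 (the pymilvus `connections`/`utility` helpers shown here are standard, but check your client version if the call signatures differ):

```python
from pymilvus import connections, utility

# connect to the Milvus instance (default local deployment assumed)
connections.connect(uri="http://localhost:19530")
print(utility.get_server_version())  # hybrid search needs v2.4.0 or newer
```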
\n\n```python\nimport logging\nimport sys\n\n# Uncomment to see debug logs\n# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\n# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n\nfrom llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Document\nfrom llama_index.vector_stores.milvus import MilvusVectorStore\nfrom IPython.display import Markdown, display\nimport textwrap\n```\n\n### Setup OpenAI\nLet's first begin by adding the OpenAI API key. This will allow us to access OpenAI for embeddings and to use ChatGPT.\n\n\n```python\nimport openai\n\nopenai.api_key = \"sk-\"\n```\n\nDownload Data\n\n\n```python\n! mkdir -p 'data/paul_graham/'\n! wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'\n```\n\n --2024-04-25 17:44:59-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...\n Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.\n HTTP request sent, awaiting response... 200 OK\n Length: 75042 (73K) [text/plain]\n Saving to: ‘data/paul_graham/paul_graham_essay.txt’\n \n data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.07s \n \n 2024-04-25 17:45:00 (994 KB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]\n \n\n\n### Generate our data\nWith our LLM set, let's start using the Milvus Index. As a first example, let's generate a document from the file found in the `data/paul_graham/` folder. In this folder there is a single essay from Paul Graham titled `What I Worked On`. To generate the documents we will use the SimpleDirectoryReader.\n\n\n```python\n# load documents\ndocuments = SimpleDirectoryReader(\"./data/paul_graham/\").load_data()\n\nprint(\"Document ID:\", documents[0].doc_id)\n```\n\n Document ID: ca3f5dbc-f772-41da-9a4f-bb4884691793\n
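One detail worth checking before building the index: the `dim` argument passed to MilvusVectorStore below must match the output dimensionality of the embedding model LlamaIndex is configured to use. A minimal sketch of pinning the embedding model explicitly (text-embedding-ada-002 is the LlamaIndex default and produces 1536-dimensional vectors, so this is only needed if your setup differs):

```python
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding

# pin the embedding model so its output dimension matches dim=1536 below
Settings.embed_model = OpenAIEmbedding(model="text-embedding-ada-002")
```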
\n\n### Create an index across the data\nNow that we have a document, we can create an index and insert the document. For the index we will use a MilvusVectorStore. MilvusVectorStore takes in a few arguments:\n\n- `uri (str, optional)`: The URI to connect to, comes in the form of \"http://address:port\". Defaults to \"http://localhost:19530\".\n- `token (str, optional)`: The token for log in. Empty if not using rbac, if using rbac it will most likely be \"username:password\". Defaults to \"\".\n- `collection_name (str, optional)`: The name of the collection where data will be stored. Defaults to \"llamalection\".\n- `dim (int, optional)`: The dimension of the embeddings. If it is not provided, collection creation will be done on first insert. Defaults to None.\n- `embedding_field (str, optional)`: The name of the embedding field for the collection, defaults to DEFAULT_EMBEDDING_KEY.\n- `doc_id_field (str, optional)`: The name of the doc_id field for the collection, defaults to DEFAULT_DOC_ID_KEY.\n- `similarity_metric (str, optional)`: The similarity metric to use, currently supports IP and L2. Defaults to \"IP\".\n- `consistency_level (str, optional)`: Which consistency level to use for a newly created collection. Defaults to \"Strong\".\n- `overwrite (bool, optional)`: Whether to overwrite an existing collection with the same name. Defaults to False.\n- `text_key (str, optional)`: Which key the text is stored under in the passed collection. Used when bringing your own collection. Defaults to None.\n- `index_config (dict, optional)`: The configuration used for building the Milvus index. Defaults to None.\n- `search_config (dict, optional)`: The configuration used for searching the Milvus index. Note that this must be compatible with the index type specified by index_config. Defaults to None.\n- `batch_size (int)`: Configures the number of documents processed in one batch when inserting data into Milvus. Defaults to DEFAULT_BATCH_SIZE.\n- `enable_sparse (bool)`: A boolean flag indicating whether to enable support for sparse embeddings for hybrid retrieval. Defaults to False.\n- `sparse_embedding_function (BaseSparseEmbeddingFunction, optional)`: If enable_sparse is True, this object should be provided to convert text to a sparse embedding (see the sketch after this list).\n- `hybrid_ranker (str)`: Specifies the type of ranker used in hybrid search queries. Currently only supports ['RRFRanker','WeightedRanker']. Defaults to \"RRFRanker\".\n- `hybrid_ranker_params (dict)`: Configuration parameters for the hybrid ranker.\n    - For \"RRFRanker\", it should include:\n        - 'k' (int): A parameter used in Reciprocal Rank Fusion (RRF). This value is used to calculate the rank scores as part of the RRF algorithm, which combines multiple ranking strategies into a single score to improve search relevance.\n    - For \"WeightedRanker\", it should include:\n        - 'weights' (list of float): A list of exactly two weights: the weight for the dense embedding component, and the weight for the sparse embedding component. These weights are used to adjust the importance of the dense and sparse components of the embeddings in the hybrid retrieval process.\n\n    Defaults to an empty dictionary, implying that the ranker will operate with its predefined default settings.\n
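If you would rather not rely on the default BGE-M3 sparse embedder, `sparse_embedding_function` accepts your own implementation. The sketch below shows the expected shape, assuming `BaseSparseEmbeddingFunction` is importable from `llama_index.vector_stores.milvus.utils` and that implementations supply `encode_queries` and `encode_documents`, each returning one `{dimension_index: weight}` dict per input text. The hashed bag-of-words encoding is a toy stand-in for a real sparse model:

```python
from typing import Dict, List

from llama_index.vector_stores.milvus.utils import BaseSparseEmbeddingFunction


class ToySparseEmbedding(BaseSparseEmbeddingFunction):
    """Toy sparse encoder: hashed bag-of-words weights (illustration only).

    Note: Python's string hash is salted per process, so this toy encoding
    is only stable within a single run; a real sparse model avoids that.
    """

    def _encode(self, texts: List[str]) -> List[Dict[int, float]]:
        results = []
        for text in texts:
            weights: Dict[int, float] = {}
            for token in text.lower().split():
                idx = hash(token) % 100_000  # toy vocabulary of 100k slots
                weights[idx] = weights.get(idx, 0.0) + 1.0
            results.append(weights)
        return results

    def encode_queries(self, queries: List[str]) -> List[Dict[int, float]]:
        return self._encode(queries)

    def encode_documents(self, documents: List[str]) -> List[Dict[int, float]]:
        return self._encode(documents)
```

To use it, you would pass `sparse_embedding_function=ToySparseEmbedding()` together with `enable_sparse=True` when constructing the MilvusVectorStore in the next cell.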
\nNow, let's begin creating a MilvusVectorStore for hybrid retrieval. We need to set `enable_sparse` to True to enable sparse embedding generation, and we also need to configure the RRFRanker for reranking. For more details, please refer to [Milvus Reranking](https://milvus.io/docs/reranking.md).\n\n\n```python\n# Create an index over the documents\nfrom llama_index.core import StorageContext\nimport os\n\n\nvector_store = MilvusVectorStore(\n dim=1536,\n overwrite=True,\n enable_sparse=True,\n hybrid_ranker=\"RRFRanker\",\n hybrid_ranker_params={\"k\": 60},\n)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex.from_documents(\n documents, storage_context=storage_context\n)\n```\n\n Sparse embedding function is not provided, using default.\n\n# Qdrant Vector Store - Metadata Filter\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-qdrant\n```\n\n\n```python\n!pip install llama-index qdrant_client\n```\n\nBuild the Qdrant VectorStore Client\n\n\n```python\nimport qdrant_client\nfrom llama_index.core import VectorStoreIndex\nfrom llama_index.vector_stores.qdrant import QdrantVectorStore\n\nclient = qdrant_client.QdrantClient(\n # you can use :memory: mode for fast and light-weight experiments,\n # it does not require to have Qdrant deployed anywhere\n # but requires qdrant-client >= 1.1.1\n location=\":memory:\"\n # otherwise set Qdrant instance address with:\n # uri=\"http://:\"\n # set API KEY for Qdrant Cloud\n # api_key=\"\",\n)\n```\n\nBuild the QdrantVectorStore and create a Qdrant Index\n\n\n```python\nfrom llama_index.core.schema import TextNode\n\nnodes = [\n TextNode(\n text=\"The Shawshank Redemption\",\n metadata={\n \"author\": \"Stephen King\",\n \"theme\": \"Friendship\",\n \"year\": 1994,\n },\n ),\n TextNode(\n text=\"The Godfather\",\n metadata={\n \"director\": \"Francis Ford Coppola\",\n \"theme\": \"Mafia\",\n \"year\": 1972,\n },\n ),\n TextNode(\n text=\"Inception\",\n metadata={\n \"director\": \"Christopher Nolan\",\n \"theme\": \"Fiction\",\n \"year\": 2010,\n },\n ),\n TextNode(\n text=\"To Kill a Mockingbird\",\n metadata={\n \"author\": \"Harper Lee\",\n \"theme\": \"Mafia\",\n \"year\": 1960,\n },\n ),\n TextNode(\n text=\"1984\",\n metadata={\n \"author\": \"George Orwell\",\n \"theme\": \"Totalitarianism\",\n \"year\": 1949,\n },\n ),\n TextNode(\n text=\"The Great Gatsby\",\n metadata={\n \"author\": \"F. Scott Fitzgerald\",\n \"theme\": \"The American Dream\",\n \"year\": 1925,\n },\n ),\n TextNode(\n text=\"Harry Potter and the Sorcerer's Stone\",\n metadata={\n \"author\": \"J.K. 
Rowling\",\n \"theme\": \"Fiction\",\n \"year\": 1997,\n },\n ),\n]\n```\n\n\n```python\nimport os\n\nfrom llama_index.core import StorageContext\n\n\nos.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\n\n\nvector_store = QdrantVectorStore(\n client=client, collection_name=\"test_collection_1\"\n)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\nindex = VectorStoreIndex(nodes, storage_context=storage_context)\n```\n\nDefine metadata filters\n\n\n```python\nfrom llama_index.core.vector_stores import (\n MetadataFilter,\n MetadataFilters,\n FilterOperator,\n)\n\nfilters = MetadataFilters(\n filters=[\n MetadataFilter(key=\"theme\", operator=FilterOperator.EQ, value=\"Mafia\"),\n ]\n)\n```\n\nRetrieve from vector store with filters\n\n\n```python\nretriever = index.as_retriever(filters=filters)\nretriever.retrieve(\"What is inception about?\")\n```\n\n [FieldCondition(key='theme', match=MatchValue(value='Mafia'), range=None, geo_bounding_box=None, geo_radius=None, geo_polygon=None, values_count=None)]\n\n\n\n\n\n [NodeWithScore(node=TextNode(id_='050c085d-6d91-4080-9fd6-3f874a528970', embedding=None, metadata={'director': 'Francis Ford Coppola', 'theme': 'Mafia', 'year': 1972}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='bfa890174187ddaed4876803691ed605463de599f5493f095a03b8d83364f1ef', text='The Godfather', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.7620959333946706),\n NodeWithScore(node=TextNode(id_='11d0043a-aba3-4ffe-84cb-3f17988759be', embedding=None, metadata={'author': 'Harper Lee', 'theme': 'Mafia', 'year': 1960}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='3475334d04bbe4606cb77728d5dc0784f16c8db3f190f3692e6310906c821927', text='To Kill a Mockingbird', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.7340329162691743)]\n\n\n\nMultiple Metadata Filters with `AND` condition\n\n\n```python\nfrom llama_index.core.vector_stores import FilterOperator, FilterCondition\n\nfilters = MetadataFilters(\n filters=[\n MetadataFilter(key=\"theme\", value=\"Fiction\"),\n MetadataFilter(key=\"year\", value=1997, operator=FilterOperator.GT),\n ],\n condition=FilterCondition.AND,\n)\n\nretriever = index.as_retriever(filters=filters)\nretriever.retrieve(\"Harry Potter?\")\n```\n\n [FieldCondition(key='theme', match=MatchValue(value='Fiction'), range=None, geo_bounding_box=None, geo_radius=None, geo_polygon=None, values_count=None)]\n [FieldCondition(key='theme', match=MatchValue(value='Fiction'), range=None, geo_bounding_box=None, geo_radius=None, geo_polygon=None, values_count=None), FieldCondition(key='year', match=None, range=Range(lt=None, gt=1997.0, gte=None, lte=None), geo_bounding_box=None, geo_radius=None, geo_polygon=None, values_count=None)]\n\n\n\n\n\n [NodeWithScore(node=TextNode(id_='1be42402-518f-4e88-9860-12cfec9f5ed2', embedding=None, metadata={'director': 'Christopher Nolan', 'theme': 'Fiction', 'year': 2010}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='7937eb153ccc78a3329560f37d90466ba748874df6b0303b3b8dd3c732aa7688', text='Inception', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), 
score=0.7649987694994126)]\n\n\n\nUse keyword arguments specific to Qdrant\n\n\n```python\nretriever = index.as_retriever(\n vector_store_kwargs={\"filter\": {\"theme\": \"Mafia\"}}\n)\nretriever.retrieve(\"What is inception about?\")\n```\n\n\n\n\n [NodeWithScore(node=TextNode(id_='1be42402-518f-4e88-9860-12cfec9f5ed2', embedding=None, metadata={'director': 'Christopher Nolan', 'theme': 'Fiction', 'year': 2010}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='7937eb153ccc78a3329560f37d90466ba748874df6b0303b3b8dd3c732aa7688', text='Inception', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.841150534139415),\n NodeWithScore(node=TextNode(id_='ee4d3b32-7675-49bc-bc49-04011d62cf7c', embedding=None, metadata={'author': 'J.K. Rowling', 'theme': 'Fiction', 'year': 1997}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='1b24f5e9fb6f18cc893e833af8d5f28ff805a6361fc0838a3015c287510d29a3', text=\"Harry Potter and the Sorcerer's Stone\", start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n'), score=0.7661930751179629)]"} {"tokens": 964, "doc_id": "86a3cc64-83fa-4d46-9ce3-b8670eef0d31", "name": "Bagel Vector Store", "url": "https://docs.llamaindex.ai/en/stable/examples/vector_stores/BagelAutoRetriever", "retrieve_doc": true, "source": "llama_index", "content": "\"Open\n\n# Bagel Vector Store\n\nIf you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.\n\n\n```python\n%pip install llama-index-vector-stores-bagel\n%pip install llama-index\n%pip install bagelML\n```\n\n\n```python\nimport logging\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\nlogging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n```\n\n\n```python\n# set up OpenAI\nimport os\nimport getpass\n\nos.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\nimport openai\n\nopenai.api_key = os.environ[\"OPENAI_API_KEY\"]\n```\n\n\n```python\nimport os\n\n# Set environment variable\nos.environ[\"BAGEL_API_KEY\"] = getpass.getpass(\"Bagel API Key:\")\n```\n\n\n```python\nimport bagel\nfrom bagel import Settings\n```\n\n\n```python\nserver_settings = Settings(\n bagel_api_impl=\"rest\", bagel_server_host=\"api.bageldb.ai\"\n)\n\nclient = bagel.Client(server_settings)\n\ncollection = client.get_or_create_cluster(\n \"testing_embeddings_3\", embedding_model=\"custom\", dimension=1536\n)\n```\n\n\n```python\nfrom llama_index.core import VectorStoreIndex, StorageContext\nfrom llama_index.vector_stores.bagel import BagelVectorStore\n```\n\n\n```python\nfrom llama_index.core.schema import TextNode\n\nnodes = [\n TextNode(\n text=(\n \"Michael Jordan is a retired professional basketball player,\"\n \" widely regarded as one of the greatest basketball players of all\"\n \" time.\"\n ),\n metadata={\n \"category\": \"Sports\",\n \"country\": \"United States\",\n },\n ),\n TextNode(\n text=(\n \"Angelina Jolie is an American actress, filmmaker, and\"\n \" humanitarian. She has received numerous awards for her acting\"\n \" and is known for her philanthropic work.\"\n ),\n metadata={\n \"category\": \"Entertainment\",\n \"country\": \"United States\",\n },\n ),\n TextNode(\n text=(\n \"Elon Musk is a business magnate, industrial designer, and\"\n \" engineer. 
He is the founder, CEO, and lead designer of SpaceX,\"\n \" Tesla, Inc., Neuralink, and The Boring Company.\"\n ),\n metadata={\n \"category\": \"Business\",\n \"country\": \"United States\",\n },\n ),\n TextNode(\n text=(\n \"Rihanna is a Barbadian singer, actress, and businesswoman. She\"\n \" has achieved significant success in the music industry and is\"\n \" known for her versatile musical style.\"\n ),\n metadata={\n \"category\": \"Music\",\n \"country\": \"Barbados\",\n },\n ),\n TextNode(\n text=(\n \"Cristiano Ronaldo is a Portuguese professional footballer who is\"\n \" considered one of the greatest football players of all time. He\"\n \" has won numerous awards and set multiple records during his\"\n \" career.\"\n ),\n metadata={\n \"category\": \"Sports\",\n \"country\": \"Portugal\",\n },\n ),\n]\n```\n\n\n```python\nvector_store = BagelVectorStore(collection=collection)\nstorage_context = StorageContext.from_defaults(vector_store=vector_store)\n```\n\n\n```python\nindex = VectorStoreIndex(nodes, storage_context=storage_context)\n```\n\n\n```python\nfrom llama_index.core.retrievers import VectorIndexAutoRetriever\nfrom llama_index.core.vector_stores import MetadataInfo, VectorStoreInfo\n\n\nvector_store_info = VectorStoreInfo(\n content_info=\"brief biography of celebrities\",\n metadata_info=[\n MetadataInfo(\n name=\"category\",\n type=\"str\",\n description=(\n \"Category of the celebrity, one of [Sports, Entertainment,\"\n \" Business, Music]\"\n ),\n ),\n MetadataInfo(\n name=\"country\",\n type=\"str\",\n description=(\n \"Country of the celebrity, one of [United States, Barbados,\"\n \" Portugal]\"\n ),\n ),\n ],\n)\nretriever = VectorIndexAutoRetriever(\n index, vector_store_info=vector_store_info\n)\n```\n\n\n```python\nretriever.retrieve(\"celebrity\")\n```"}
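A natural follow-up: `VectorIndexAutoRetriever` uses the LLM plus the `VectorStoreInfo` schema above to infer both a rewritten query string and metadata filters from the natural-language input, so a more pointed prompt exercises the filtering path. A small usage sketch (the query wording is illustrative; results come back as standard `NodeWithScore` objects):

```python
# a query that should lead the auto-retriever to infer category="Sports"
results = retriever.retrieve("celebrities who played professional sports")
for node_with_score in results:
    print(node_with_score.node.metadata, "->", node_with_score.node.text[:60])
```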