# Welcome to LlamaIndex 🦙 !
LlamaIndex is a framework for building context-augmented generative AI applications with [LLMs](https://en.wikipedia.org/wiki/Large_language_model).
<div class="grid cards" markdown>
- <span style="font-size: 200%">[Introduction](#introduction)</span>
What is context augmentation? How does LlamaIndex help?
- <span style="font-size: 200%">[Use cases](#use-cases)</span>
What kind of apps can you build with LlamaIndex? Who should use it?
- <span style="font-size: 200%">[Getting started](#getting-started)</span>
Get started in Python or TypeScript in just 5 lines of code!
- <span style="font-size: 200%">[LlamaCloud](#llamacloud)</span>
Managed services for LlamaIndex including [LlamaParse](https://docs.cloud.llamaindex.ai/llamaparse/getting_started), the world's best document parser.
- <span style="font-size: 200%">[Community](#community)</span>
Get help and meet collaborators on Discord, Twitter, LinkedIn, and learn how to contribute to the project.
- <span style="font-size: 200%">[Related projects](#related-projects)</span>
Check out our library of connectors, readers, and other integrations at [LlamaHub](https://llamahub.ai) as well as demos and starter apps like [create-llama](https://www.npmjs.com/package/create-llama).
</div>
## Introduction
### What is context augmentation?
LLMs offer a natural language interface between humans and data. LLMs come pre-trained on huge amounts of publicly available data, but they are not trained on **your** data. Your data may be private or specific to the problem you're trying to solve. It's behind APIs, in SQL databases, or trapped in PDFs and slide decks.
Context augmentation makes your data available to the LLM to solve the problem at hand. LlamaIndex provides the tools to build any context augmentation use case, from prototype to production. Our tools allow you to ingest, parse, index and process your data, and to quickly implement complex query workflows that combine data access with LLM prompting.
The most popular example of context-augmentation is [Retrieval-Augmented Generation or RAG](./getting_started/concepts.md), which combines context with LLMs at inference time.
### LlamaIndex is the Data Framework for Context-Augmented LLM Apps
LlamaIndex imposes no restriction on how you use LLMs. You can use LLMs as auto-complete, chatbots, semi-autonomous agents, and more. It just makes using them easier. We provide tools like:
- **Data connectors** ingest your existing data from their native source and format. These could be APIs, PDFs, SQL, and (much) more.
- **Data indexes** structure your data in intermediate representations that are easy and performant for LLMs to consume.
- **Engines** provide natural language access to your data (see the sketch after this list). For example:
    - Query engines are powerful interfaces for question-answering (e.g. a RAG pipeline).
    - Chat engines are conversational interfaces for multi-message, "back and forth" interactions with your data.
- **Agents** are LLM-powered knowledge workers augmented by tools, from simple helper functions to API integrations and more.
- **Observability/Evaluation** integrations that enable you to rigorously experiment, evaluate, and monitor your app in a virtuous cycle.
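To make the engine distinction concrete, here is a minimal sketch contrasting the two (assuming an `OPENAI_API_KEY` is set and a `data` folder of documents, as in the quickstart below; the questions are illustrative):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load and index some local documents.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Query engine: one-shot question answering (a RAG pipeline).
query_engine = index.as_query_engine()
print(query_engine.query("What does the document say about pricing?"))

# Chat engine: multi-message, "back and forth" conversation over the same data.
chat_engine = index.as_chat_engine()
print(chat_engine.chat("Summarize the document."))
print(chat_engine.chat("Now compress that summary into one sentence."))
```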
## Use cases
Some popular use cases for LlamaIndex and context augmentation in general include:
- [Question-Answering](./use_cases/q_and_a/index.md) (Retrieval-Augmented Generation aka RAG)
- [Chatbots](./use_cases/chatbots.md)
- [Document Understanding and Data Extraction](./use_cases/extraction.md)
- [Autonomous Agents](./use_cases/agents.md) that can perform research and take actions
- [Multi-modal applications](./use_cases/multimodal.md) that combine text, images, and other data types
- [Fine-tuning](./use_cases/fine_tuning.md) models on data to improve performance
Check out our [use cases](./use_cases/index.md) documentation for more examples and links to tutorials.
### 👨‍👩‍👧‍👦 Who is LlamaIndex for?
LlamaIndex provides tools for beginners, advanced users, and everyone in between.
Our high-level API allows beginner users to use LlamaIndex to ingest and query their data in 5 lines of code.
For more complex applications, our lower-level APIs allow advanced users to customize and extend any module—data connectors, indices, retrievers, query engines, reranking modules—to fit their needs.
## Getting Started
LlamaIndex is available in Python (these docs) and [TypeScript](https://ts.llamaindex.ai/). If you're not sure where to start, we recommend reading [how to read these docs](./getting_started/reading.md), which will point you to the right place based on your experience level.
### 30 second quickstart
Set an environment variable called `OPENAI_API_KEY` with an [OpenAI API key](https://platform.openai.com/api-keys). Install the Python library:
```bash
pip install llama-index
```
Put some documents in a folder called `data`, then ask questions about them with our famous 5-line starter:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)
```
If any part of this trips you up, don't worry! Check out our more comprehensive starter tutorials using [remote APIs like OpenAI](./getting_started/starter_example.md) or [any model that runs on your laptop](./getting_started/starter_example_local.md).
## LlamaCloud
If you're an enterprise developer, check out [**LlamaCloud**](https://llamaindex.ai/enterprise). It is an end-to-end managed service for data parsing, ingestion, indexing, and retrieval, allowing you to get production-quality data for your production LLM application. It's available both hosted on our servers and as a self-hosted solution.
### LlamaParse
LlamaParse is our state-of-the-art document parsing solution. It's available as part of LlamaCloud and also available as a self-serve API. You can [sign up](https://cloud.llamaindex.ai/) and parse up to 1000 pages/day for free, or enter a credit card for unlimited parsing. [Learn more](https://llamaindex.ai/enterprise).
## Community
Need help? Have a feature suggestion? Join the LlamaIndex community:
- [Twitter](https://twitter.com/llama_index)
- [Discord](https://discord.gg/dGcwcsnxhU)
- [LinkedIn](https://www.linkedin.com/company/llamaindex/)
### Getting the library
- LlamaIndex Python
- [LlamaIndex Python Github](https://github.com/run-llama/llama_index)
- [Python Docs](https://docs.llamaindex.ai/) (what you're reading now)
- [LlamaIndex on PyPi](https://pypi.org/project/llama-index/)
- LlamaIndex.TS (TypeScript/JavaScript package):
- [LlamaIndex.TS Github](https://github.com/run-llama/LlamaIndexTS)
- [TypeScript Docs](https://ts.llamaindex.ai/)
- [LlamaIndex.TS on npm](https://www.npmjs.com/package/llamaindex)
### Contributing
We are open-source and always welcome contributions to the project! Check out our [contributing guide](./CONTRIBUTING.md) for full details on how to extend the core library or add an integration to a third party like an LLM, a vector store, an agent tool and more.
## Related projects
There's more to the LlamaIndex universe! Check out some of our other projects:
- [LlamaHub](https://llamahub.ai) | A large (and growing!) collection of custom data connectors
- [SEC Insights](https://secinsights.ai) | A LlamaIndex-powered application for financial research
- [create-llama](https://www.npmjs.com/package/create-llama) | A CLI tool to quickly scaffold LlamaIndex projects
# Building an LLM application
Welcome to the beginning of Understanding LlamaIndex. This is a series of short, bite-sized tutorials on every stage of building an LLM application to get you acquainted with how to use LlamaIndex before diving into more advanced and subtle strategies. If you're an experienced programmer new to LlamaIndex, this is the place to start.
## Key steps in building an LLM application
!!! tip
    If you've already read our [high-level concepts](../getting_started/concepts.md) page you'll recognize several of these steps.
This tutorial has two main parts: **Building a RAG pipeline** and **Building an agent**, with some smaller sections before and after. Here's what to expect:
- **[Using LLMs](./using_llms/using_llms.md)**: hit the ground running by getting started working with LLMs. We'll show you how to use any of our [dozens of supported LLMs](../module_guides/models/llms/modules/), whether via remote API calls or running locally on your machine.
- **Building a RAG pipeline**: Retrieval-Augmented Generation (RAG) is a key technique for getting your data into an LLM, and a component of more sophisticated agentic systems. We'll show you how to build a full-featured RAG pipeline that can answer questions about your data. This includes:
- **[Loading & Ingestion](./loading/loading.md)**: Getting your data from wherever it lives, whether that's unstructured text, PDFs, databases, or APIs to other applications. LlamaIndex has hundreds of connectors to every data source over at [LlamaHub](https://llamahub.ai/).
- **[Indexing and Embedding](./indexing/indexing.md)**: Once you've got your data there are an infinite number of ways to structure access to that data to ensure your applications is always working with the most relevant data. LlamaIndex has a huge number of these strategies built-in and can help you select the best ones.
- **[Storing](./storing/storing.md)**: You will probably find it more efficient to store your data in indexed form, or pre-processed summaries provided by an LLM, often in a specialized database known as a `Vector Store` (see below). You can also store your indexes, metadata and more.
- **[Querying](./querying/querying.md)**: Every indexing strategy has a corresponding querying strategy and there are lots of ways to improve the relevance, speed and accuracy of what you retrieve and what the LLM does with it before returning it to you, including turning it into structured responses such as an API.
- **Building an agent**: agents are LLM-powered knowledge workers that can interact with the world via a set of tools. Those tools can be RAG engines such as you learned how to build in the previous section, or any arbitrary code. This tutorial includes:
    - **[Building a basic agent](./agent/basic_agent.md)**: We show you how to build a simple agent that can interact with the world via a set of tools.
    - **[Using local models with agents](./agent/local_models.md)**: Agents can be built to use local models, which can be important for performance or privacy reasons.
    - **[Adding RAG to an agent](./agent/rag_agent.md)**: The RAG pipelines you built in the previous tutorial can be used as a tool by an agent, giving your agent powerful information-retrieval capabilities.
    - **[Adding other tools](./agent/tools.md)**: Let's add more sophisticated tools to your agent, such as API integrations.
- **[Putting it all together](./putting_it_all_together/index.md)**: whether you are building question & answering, chatbots, an API, or an autonomous agent, we show you how to get your application into production.
- **[Tracing and debugging](./tracing_and_debugging/tracing_and_debugging.md)**: also called **observability**, it's especially important with LLM applications to be able to look into the inner workings of what's going on to help you debug problems and spot places to improve.
- **[Evaluating](./evaluating/evaluating.md)**: every strategy has pros and cons and a key part of building, shipping and evolving your application is evaluating whether your change has improved your application in terms of accuracy, performance, clarity, cost and more. Reliably evaluating your changes is a crucial part of LLM application development.
## Let's get started!
Ready to dive in? Head to [using LLMs](./using_llms/using_llms.md).
# Privacy and Security
By default, LlamaIndex sends your data to OpenAI for generating embeddings and natural language responses. However, this can be configured according to your preferences: LlamaIndex provides the flexibility to use your own embedding model or to run a large language model locally if desired.
## Data Privacy
Regarding data privacy, when using LlamaIndex with OpenAI, the privacy details and handling of your data are subject to OpenAI's policies. Each custom service other than OpenAI likewise has its own policies.
## Vector stores
LlamaIndex offers modules to connect with other vector stores within indexes to store embeddings. It is worth noting that each vector store has its own privacy policies and practices, and LlamaIndex does not assume responsibility for how it handles or uses your data. By default, LlamaIndex stores your embeddings locally.
# Using LLMs
!!! tip
    For a list of our supported LLMs and a comparison of their functionality, check out our [LLM module guide](../../module_guides/models/llms.md).
One of the first steps in building an LLM-based application is choosing which LLM to use; you can also use more than one if you wish.
LLMs are used at multiple different stages of your pipeline:
- During **Indexing** you may use an LLM to determine the relevance of data (whether to index it at all) or you may use an LLM to summarize the raw data and index the summaries instead.
- During **Querying** LLMs can be used in two ways:
    - During **Retrieval** (fetching data from your index) LLMs can be given an array of options (such as multiple different indices) and make decisions about where best to find the information you're looking for. An agentic LLM can also use _tools_ at this stage to query different data sources.
    - During **Response Synthesis** (turning the retrieved data into an answer) an LLM can combine answers to multiple sub-queries into a single coherent answer, or it can transform data, such as from unstructured text to JSON or another programmatic output format.
LlamaIndex provides a single interface to a large number of different LLMs, allowing you to pass in any LLM you choose to any stage of the pipeline. It can be as simple as this:
```python
from llama_index.llms.openai import OpenAI
response = OpenAI().complete("Paul Graham is ")
print(response)
```
Usually, you will instantiate an LLM and pass it to `Settings`, which you then pass to other stages of the pipeline, as in this example:
```python
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
Settings.llm = OpenAI(temperature=0.2, model="gpt-4")
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(
documents,
)
```
In this case, you've instantiated OpenAI and customized it to use the `gpt-4` model instead of the default `gpt-3.5-turbo`, and also modified the `temperature`. The `VectorStoreIndex` will now use gpt-4 to answer questions when querying.
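To close the loop, you can then query the index as usual; the query engine picks up the LLM from `Settings` (the question here is just an illustration):

```python
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```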
!!! tip
    The `Settings` is a bundle of configuration data that you pass into different parts of LlamaIndex. You can [learn more about Settings](../../module_guides/supporting_modules/settings.md) and how to customize it.
## Available LLMs
We support integrations with OpenAI, Hugging Face, PaLM, and more. Check out our [module guide to LLMs](../../module_guides/models/llms.md) for a full list, including how to run a local model.
!!! tip
    A general note on privacy and LLMs can be found on the [privacy page](./privacy.md).
### Using a local LLM
LlamaIndex doesn't just support hosted LLM APIs; you can also [run a local model such as Llama2](https://replicate.com/blog/run-llama-locally).
For example, if you have [Ollama](https://github.com/ollama/ollama) installed and running:
```python
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings
Settings.llm = Ollama(model="llama2", request_timeout=60.0)
```
See the [custom LLM's How-To](../../module_guides/models/llms/usage_custom.md) for more details.
## Prompts
By default, LlamaIndex comes with a great set of built-in, battle-tested prompts that handle the tricky work of getting a specific LLM to correctly handle and format data. This is one of the biggest benefits of using LlamaIndex. If you want to, you can [customize the prompts](../../module_guides/models/prompts/index.md).
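As a taste of what customization looks like, here is a minimal sketch that swaps in a custom question-answering prompt (assuming an `index` built as in the earlier examples; the prompt text is illustrative, while `{context_str}` and `{query_str}` are the variables the built-in QA prompt expects):

```python
from llama_index.core import PromptTemplate

# A custom QA prompt; keep the {context_str} and {query_str} variables.
qa_template = PromptTemplate(
    "Answer the question using only the context below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Question: {query_str}\n"
    "Answer: "
)
query_engine = index.as_query_engine(text_qa_template=qa_template)
```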
# LlamaHub
Our data connectors are offered through [LlamaHub](https://llamahub.ai/) 🦙.
LlamaHub contains a registry of open-source data connectors that you can easily plug into any LlamaIndex application (+ Agent Tools, and Llama Packs).
![](../../_static/data_connectors/llamahub.png)
## Usage Pattern
Get started with:
```python
# GoogleDocsReader ships as a separate package: pip install llama-index-readers-google
from llama_index.readers.google import GoogleDocsReader
loader = GoogleDocsReader()
documents = loader.load_data(document_ids=[...])
```
## Built-in connector: SimpleDirectoryReader
`SimpleDirectoryReader` can parse a wide range of file types including `.md`, `.pdf`, `.jpg`, `.png`, `.docx`, as well as audio and video types. It is available directly as part of LlamaIndex:
```python
from llama_index.core import SimpleDirectoryReader
documents = SimpleDirectoryReader("./data").load_data()
```
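If you only want a subset of the files, here is a sketch using two of `SimpleDirectoryReader`'s real parameters (the extension list is just an example):

```python
from llama_index.core import SimpleDirectoryReader

# Only load Markdown and PDF files, recursing into subdirectories.
documents = SimpleDirectoryReader(
    "./data", required_exts=[".md", ".pdf"], recursive=True
).load_data()
```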
## Available connectors
Browse [LlamaHub](https://llamahub.ai/) directly to see the hundreds of connectors available, including:
- [Notion](https://developers.notion.com/) (`NotionPageReader`)
- [Google Docs](https://developers.google.com/docs/api) (`GoogleDocsReader`)
- [Slack](https://api.slack.com/) (`SlackReader`)
- [Discord](https://discord.com/developers/docs/intro) (`DiscordReader`)
- [Apify Actors](https://llamahub.ai/l/apify-actor) (`ApifyActor`), which can crawl the web, scrape webpages, extract text content, and download files including `.pdf`, `.jpg`, `.png`, `.docx`, etc.
# Loading Data (Ingestion)
Before your chosen LLM can act on your data, you first need to process the data and load it. This has parallels to data cleaning/feature engineering pipelines in the ML world, or ETL pipelines in the traditional data setting.
This ingestion pipeline typically consists of three main stages:
1. Load the data
2. Transform the data
3. Index and store the data
We cover indexing/storage in [future](../indexing/indexing.md) [sections](../storing/storing.md). In this guide we'll mostly talk about loaders and transformations.
## Loaders
Before your chosen LLM can act on your data you need to load it. The way LlamaIndex does this is via data connectors, also called `Reader`s. Data connectors ingest data from different data sources and format the data into `Document` objects. A `Document` is a collection of data (currently text, and in future, images and audio) and metadata about that data.
### Loading using SimpleDirectoryReader
The easiest reader to use is our `SimpleDirectoryReader`, which creates documents out of every file in a given directory. It is built into LlamaIndex and can read a variety of formats including Markdown, PDFs, Word documents, PowerPoint decks, images, audio and video.
```python
from llama_index.core import SimpleDirectoryReader
documents = SimpleDirectoryReader("./data").load_data()
```
### Using Readers from LlamaHub
Because there are so many possible places to get data, they are not all built-in. Instead, you download them from our registry of data connectors, [LlamaHub](llamahub.md).
In this example LlamaIndex downloads and installs the connector called [DatabaseReader](https://llamahub.ai/l/readers/llama-index-readers-database), which runs a query against a SQL database and returns every row of the results as a `Document`:
```python
import os

from llama_index.readers.database import DatabaseReader
reader = DatabaseReader(
scheme=os.getenv("DB_SCHEME"),
host=os.getenv("DB_HOST"),
port=os.getenv("DB_PORT"),
user=os.getenv("DB_USER"),
password=os.getenv("DB_PASS"),
dbname=os.getenv("DB_NAME"),
)
query = "SELECT * FROM users"
documents = reader.load_data(query=query)
```
There are hundreds of connectors to use on [LlamaHub](https://llamahub.ai)!
### Creating Documents directly
Instead of using a loader, you can also create a `Document` directly.
```python
from llama_index.core import Document
doc = Document(text="text")
```
## Transformations
After the data is loaded, you then need to process and transform it before putting it into a storage system. These transformations include chunking, extracting metadata, and embedding each chunk. This is necessary to make sure that the data can be retrieved and used optimally by the LLM.
Transformation input/outputs are `Node` objects (a `Document` is a subclass of a `Node`). Transformations can also be stacked and reordered.
We have both a high-level and lower-level API for transforming documents.
### High-Level Transformation API
Indexes have a `.from_documents()` method which accepts an array of Document objects and will correctly parse and chunk them up. However, sometimes you will want greater control over how your documents are split up.
```python
from llama_index.core import VectorStoreIndex
vector_index = VectorStoreIndex.from_documents(documents)
query_engine = vector_index.as_query_engine()
```
Under the hood, this splits your Document into Node objects, which are similar to Documents (they contain text and metadata) but have a relationship to their parent Document.
If you want to customize core components, like the text splitter, through this abstraction you can pass in a custom `transformations` list or apply to the global `Settings`:
```python
from llama_index.core.node_parser import SentenceSplitter
text_splitter = SentenceSplitter(chunk_size=512, chunk_overlap=10)
# global
from llama_index.core import Settings
Settings.text_splitter = text_splitter
# per-index
index = VectorStoreIndex.from_documents(
documents, transformations=[text_splitter]
)
```
### Lower-Level Transformation API
You can also define these steps explicitly.
You can do this by either using our transformation modules (text splitters, metadata extractors, etc.) as standalone components, or compose them in our declarative [Transformation Pipeline interface](../../module_guides/loading/ingestion_pipeline/index.md).
Let's walk through the steps below.
#### Splitting Your Documents into Nodes
A key step to process your documents is to split them into "chunks"/Node objects. The key idea is to process your data into bite-sized pieces that can be retrieved / fed to the LLM.
LlamaIndex has support for a wide range of [text splitters](../../module_guides/loading/node_parsers/modules.md), ranging from paragraph/sentence/token based splitters to file-based splitters like HTML, JSON.
These can be [used on their own or as part of an ingestion pipeline](../../module_guides/loading/node_parsers/index.md).
```python
from llama_index.core import SimpleDirectoryReader
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import TokenTextSplitter
documents = SimpleDirectoryReader("./data").load_data()
pipeline = IngestionPipeline(transformations=[TokenTextSplitter(), ...])
nodes = pipeline.run(documents=documents)
```
### Adding Metadata
You can also choose to add metadata to your documents and nodes. This can be done either manually or with [automatic metadata extractors](../../module_guides/loading/documents_and_nodes/usage_metadata_extractor.md).
Here are guides on 1) [how to customize Documents](../../module_guides/loading/documents_and_nodes/usage_documents.md), and 2) [how to customize Nodes](../../module_guides/loading/documents_and_nodes/usage_nodes.md).
```python
document = Document(
text="text",
metadata={"filename": "<doc_file_name>", "category": "<category>"},
)
```
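For automatic extraction, here is a sketch that pairs a splitter with one of the built-in extractors (assuming `documents` are loaded and an LLM is configured in `Settings`):

```python
from llama_index.core.extractors import TitleExtractor
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter

# Split first, then let an LLM-powered extractor add a document title
# to each node's metadata.
pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(), TitleExtractor()]
)
nodes = pipeline.run(documents=documents)
```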
### Adding Embeddings
To insert a node into a vector index, it should have an embedding. See our [ingestion pipeline](../../module_guides/loading/ingestion_pipeline/index.md) or our [embeddings guide](../../module_guides/models/embeddings.md) for more details.
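As a sketch of what that looks like with an ingestion pipeline (assuming `documents` are loaded and an `OPENAI_API_KEY` is set; any supported embedding model can be swapped in):

```python
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding

# An embedding model can be the final transformation, so each node
# comes out of the pipeline with its embedding attached.
pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(), OpenAIEmbedding()]
)
nodes = pipeline.run(documents=documents)
```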
### Creating and passing Nodes directly
If you want to, you can create Nodes directly and pass a list of them straight to an indexer:
```python
from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode
node1 = TextNode(text="<text_chunk>", id_="<node_id>")
node2 = TextNode(text="<text_chunk>", id_="<node_id>")
index = VectorStoreIndex([node1, node2])
```
# Evaluating
Evaluation and benchmarking are crucial concepts in LLM development. To improve the performance of an LLM app (RAG, agents), you must have a way to measure it.
LlamaIndex offers key modules to measure the quality of generated results. We also offer key modules to measure retrieval quality. You can learn more about how evaluation works in LlamaIndex in our [module guides](../../module_guides/evaluating/index.md).
## Response Evaluation
Does the response match the retrieved context? Does it also match the query? Does it match the reference answer or guidelines? Here's a simple example that evaluates a single response for Faithfulness, i.e. whether the response is aligned to the context, such as being free from hallucinations:
```python
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import FaithfulnessEvaluator
# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)
# build index
...
vector_index = VectorStoreIndex(...)
# define evaluator
evaluator = FaithfulnessEvaluator(llm=llm)
# query index
query_engine = vector_index.as_query_engine()
response = query_engine.query(
"What battles took place in New York City in the American Revolution?"
)
eval_result = evaluator.evaluate_response(response=response)
print(str(eval_result.passing))
```
The response object contains both the generated answer and the source from which it was generated; the evaluator compares them and determines whether the response is faithful to the source.
You can learn more in our module guides about [response evaluation](../../module_guides/evaluating/usage_pattern.md).
## Retrieval Evaluation
Are the retrieved sources relevant to the query? This is a simple example that evaluates a single retrieval:
```python
from llama_index.core.evaluation import RetrieverEvaluator
# define retriever somewhere (e.g. from index)
# retriever = index.as_retriever(similarity_top_k=2)
retriever = ...
retriever_evaluator = RetrieverEvaluator.from_metric_names(
["mrr", "hit_rate"], retriever=retriever
)
retriever_evaluator.evaluate(
query="query", expected_ids=["node_id1", "node_id2"]
)
```
This compares what was retrieved for the query to a set of nodes that were expected to be retrieved.
In reality you would want to evaluate a whole batch of retrievals; you can learn how to do this in our module guide on [retrieval evaluation](../../module_guides/evaluating/usage_pattern_retrieval.md).
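As a hedged sketch of the batch pattern (`nodes` and `llm` are assumed to already exist, and the `await` needs an async context):

```python
from llama_index.core.evaluation import generate_question_context_pairs

# Build a synthetic (question, expected-nodes) dataset from your nodes,
# then evaluate every query in one call.
qa_dataset = generate_question_context_pairs(
    nodes, llm=llm, num_questions_per_chunk=2
)
eval_results = await retriever_evaluator.aevaluate_dataset(qa_dataset)
```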
## Related concepts
You may be interested in [analyzing the cost of your application](cost_analysis/index.md) if you are making calls to a hosted, remote LLM.
# Usage Pattern
## Estimating LLM and Embedding Token Counts
In order to measure LLM and Embedding token counts, you'll need to:
1. Set up `MockLLM` and `MockEmbedding` objects
```python
from llama_index.core.llms import MockLLM
from llama_index.core import MockEmbedding
llm = MockLLM(max_tokens=256)
embed_model = MockEmbedding(embed_dim=1536)
```
2. Set up the `TokenCountingHandler` callback
```python
import tiktoken
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
token_counter = TokenCountingHandler(
tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
callback_manager = CallbackManager([token_counter])
```
3. Add them to the global `Settings`
```python
from llama_index.core import Settings
Settings.llm = llm
Settings.embed_model = embed_model
Settings.callback_manager = callback_manager
```
4. Construct an Index
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader(
"./docs/examples/data/paul_graham"
).load_data()
index = VectorStoreIndex.from_documents(documents)
```
5. Measure the counts!
```python
print(
"Embedding Tokens: ",
token_counter.total_embedding_token_count,
"\n",
"LLM Prompt Tokens: ",
token_counter.prompt_llm_token_count,
"\n",
"LLM Completion Tokens: ",
token_counter.completion_llm_token_count,
"\n",
"Total LLM Token Count: ",
token_counter.total_llm_token_count,
"\n",
)
# reset counts
token_counter.reset_counts()
```
6. Run a query, measure again
```python
query_engine = index.as_query_engine()
response = query_engine.query("query")
print(
"Embedding Tokens: ",
token_counter.total_embedding_token_count,
"\n",
"LLM Prompt Tokens: ",
token_counter.prompt_llm_token_count,
"\n",
"LLM Completion Tokens: ",
token_counter.completion_llm_token_count,
"\n",
"Total LLM Token Count: ",
token_counter.total_llm_token_count,
"\n",
)
```
# Cost Analysis
## Concept
Each call to an LLM will cost some amount of money - for instance, OpenAI's gpt-3.5-turbo costs $0.002 / 1k tokens. The cost of building an index and querying depends on:
- the type of LLM used
- the type of data structure used
- parameters used during building
- parameters used during querying
The cost of building and querying each index is a TODO in the reference documentation. In the meantime, we provide the following information:
1. A high-level overview of the cost structure of the indices.
2. A token predictor that you can use directly within LlamaIndex!
### Overview of Cost Structure
#### Indices with no LLM calls
The following indices don't require LLM calls at all during building (0 cost):
- `SummaryIndex`
- `SimpleKeywordTableIndex` - uses a regex keyword extractor to extract keywords from each document
- `RAKEKeywordTableIndex` - uses a RAKE keyword extractor to extract keywords from each document
#### Indices with LLM calls
The following indices do require LLM calls during build time:
- `TreeIndex` - use LLM to hierarchically summarize the text to build the tree
- `KeywordTableIndex` - use LLM to extract keywords from each document
### Query Time
There will always be >= 1 LLM call during query time, in order to synthesize the final answer.
Some indices contain cost tradeoffs between index building and querying. `SummaryIndex`, for instance, is free to build, but running a query over a summary index (without filtering or embedding lookups) will call the LLM `N` times.
Here are some notes regarding each of the indices:
- `SummaryIndex`: by default requires `N` LLM calls, where `N` is the number of nodes.
- `TreeIndex`: by default requires `log(N)` LLM calls, where `N` is the number of leaf nodes.
    - Setting `child_branch_factor=2` will be more expensive than the default `child_branch_factor=1` (polynomial vs logarithmic), because we traverse 2 children instead of just 1 for each parent node.
- `KeywordTableIndex`: by default requires an LLM call to extract query keywords.
    - Can do `index.as_retriever(retriever_mode="simple")` or `index.as_retriever(retriever_mode="rake")` to also use regex/RAKE keyword extractors on your query text.
- `VectorStoreIndex`: by default, requires one LLM call per query. If you increase the `similarity_top_k` or `chunk_size`, or change the `response_mode`, then this number will increase.
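To turn call counts and token counts into a dollar figure, the arithmetic is a simple multiply-and-sum. Here is a sketch with purely hypothetical prices and counts:

```python
# Illustrative only: the prices below are assumptions, not current rates.
prompt_tokens = 150_000
completion_tokens = 20_000
price_per_1k_prompt = 0.002  # $ per 1k prompt tokens (hypothetical)
price_per_1k_completion = 0.002  # $ per 1k completion tokens (hypothetical)

cost = (
    prompt_tokens * price_per_1k_prompt
    + completion_tokens * price_per_1k_completion
) / 1000
print(f"Estimated cost: ${cost:.2f}")  # Estimated cost: $0.34
```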
## Usage Pattern
LlamaIndex offers token **predictors** to predict token usage of LLM and embedding calls.
This allows you to estimate your costs during 1) index construction, and 2) index querying, before
any respective LLM calls are made.
Tokens are counted using the `TokenCountingHandler` callback. See the [example notebook](../../../examples/callbacks/TokenCountingHandler.ipynb) for details on the setup.
### Using MockLLM
To predict token usage of LLM calls, import and instantiate the MockLLM as shown below. The `max_tokens` parameter is used as a "worst case" prediction, where each LLM response will contain exactly that number of tokens. If `max_tokens` is not specified, then it will simply predict back the prompt.
```python
from llama_index.core.llms import MockLLM
from llama_index.core import Settings
# use a mock llm globally
Settings.llm = MockLLM(max_tokens=256)
```
You can then use this predictor during both index construction and querying.
### Using MockEmbedding
You may also predict the token usage of embedding calls with `MockEmbedding`.
```python
from llama_index.core import MockEmbedding
from llama_index.core import Settings
# use a mock embedding globally
Settings.embed_model = MockEmbedding(embed_dim=1536)
```
## Usage Pattern
Read about the [full usage pattern](./usage_pattern.md) for more details!
# Agents with local models
If you're happy using OpenAI or another remote model, you can skip this section, but many people are interested in using models they run themselves. The easiest way to do this is via the great work of our friends at [Ollama](https://ollama.com/), who provide a simple-to-use client that will download, install and run a [growing range of models](https://ollama.com/library) for you.
## Install Ollama
They provide a one-click installer for Mac, Linux and Windows on their [home page](https://ollama.com/).
## Pick and run a model
Since we're going to be doing agentic work, we'll need a very capable model, but the largest models are hard to run on a laptop. We think `mixtral:8x7b` is a good balance between power and resources, but `llama3` is another great option. You can run Mixtral by running
```bash
ollama run mixtral:8x7b
```
The first time you run this, Ollama will automatically download and install the model for you, which can take a while.
## Switch to local agent
To switch to Mixtral, you'll need to bring in the Ollama integration:
```bash
pip install llama-index-llms-ollama
```
Then modify your dependencies to bring in Ollama instead of OpenAI:
```python
from llama_index.llms.ollama import Ollama
```
And finally initialize Mixtral as your LLM instead:
```python
llm = Ollama(model="mixtral:8x7b", request_timeout=120.0)
```
## Ask the question again
```python
response = agent.chat("What is 20+(2*4)? Calculate step by step.")
```
The exact output looks different from OpenAI (it makes a mistake the first time it tries), but Mixtral gets the right answer:
```
Thought: The current language of the user is: English. The user wants to calculate the value of 20+(2*4). I need to break down this task into subtasks and use appropriate tools to solve each subtask.
Action: multiply
Action Input: {'a': 2, 'b': 4}
Observation: 8
Thought: The user has calculated the multiplication part of the expression, which is (2*4), and got 8 as a result. Now I need to add this value to 20 by using the 'add' tool.
Action: add
Action Input: {'a': 20, 'b': 8}
Observation: 28
Thought: The user has calculated the sum of 20+(2*4) and got 28 as a result. Now I can answer without using any more tools.
Answer: The solution to the expression 20+(2*4) is 28.
The solution to the expression 20+(2*4) is 28.
```
Check the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/2_local_agent.py) to see what this final code looks like.
You can now continue the rest of the tutorial with a local model if you prefer. We'll keep using OpenAI as we move on to [adding RAG to your agent](./rag_agent.md).
# Adding RAG to an agent
To demonstrate using RAG engines as a tool in an agent, we're going to create a very simple RAG query engine. Our source data is going to be the [Wikipedia page about the 2023 Canadian federal budget](https://en.wikipedia.org/wiki/2023_Canadian_federal_budget) that we've [printed as a PDF](https://www.dropbox.com/scl/fi/rop435rax7mn91p3r8zj3/2023_canadian_budget.pdf?rlkey=z8j6sab5p6i54qa9tr39a43l7&dl=0).
## Bring in new dependencies
To read the PDF and index it, we'll need a few new dependencies. They were installed along with the rest of LlamaIndex, so we just need to import them:
```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
```
## Add LLM to settings
We were previously passing the LLM directly, but now we need to use it in multiple places, so we'll add it to the global settings.
```python
Settings.llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
```
Place this line near the top of the file; you can delete the other `llm` assignment.
## Load and index documents
We'll now do 3 things in quick succession: we'll load the PDF from a folder called "data", index and embed it using the `VectorStoreIndex`, and then create a query engine from that index:
```python
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
```
We can run a quick smoke-test to make sure the engine is working:
```python
response = query_engine.query(
"What was the total amount of the 2023 Canadian federal budget?"
)
print(response)
```
The response is fast:
```
The total amount of the 2023 Canadian federal budget was $496.9 billion.
```
## Add a query engine tool
This requires one more import:
```python
from llama_index.core.tools import QueryEngineTool
```
Now we turn our query engine into a tool by supplying the appropriate metadata (for the Python functions, this was automatically extracted from the docstrings, so we didn't need to add it):
```python
budget_tool = QueryEngineTool.from_defaults(
query_engine,
name="canadian_budget_2023",
description="A RAG engine with some basic facts about the 2023 Canadian federal budget.",
)
```
We modify our agent by adding this engine to our array of tools (we also remove the `llm` parameter, since it's now provided by settings):
```python
agent = ReActAgent.from_tools(
[multiply_tool, add_tool, budget_tool], verbose=True
)
```
## Ask a question using multiple tools
This is kind of a silly question; we'll ask something more useful later:
```python
response = agent.chat(
"What is the total amount of the 2023 Canadian federal budget multiplied by 3? Go step by step, using a tool to do any math."
)
print(response)
```
We get a perfect answer:
```
Thought: The current language of the user is English. I need to use the tools to help me answer the question.
Action: canadian_budget_2023
Action Input: {'input': 'total'}
Observation: $496.9 billion
Thought: I need to multiply the total amount of the 2023 Canadian federal budget by 3.
Action: multiply
Action Input: {'a': 496.9, 'b': 3}
Observation: 1490.6999999999998
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: The total amount of the 2023 Canadian federal budget multiplied by 3 is $1,490.70 billion.
The total amount of the 2023 Canadian federal budget multiplied by 3 is $1,490.70 billion.
```
As usual, you can check the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/3_rag_agent.py) to see this code all together.
Excellent! Your agent can now use any arbitrarily advanced query engine to help answer questions. You can also add as many different RAG engines as you need to consult different data sources. Next, we'll look at how we can answer more advanced questions [using LlamaParse](./llamaparse.md).
# Enhancing with LlamaParse
In the previous example we asked a very basic question of our document, about the total amount of the budget. Let's instead ask a more complicated question about a specific fact in the document:
```python
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query(
"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?"
)
print(response)
```
We unfortunately get an unhelpful answer:
```
The budget allocated funds to a new green investments tax credit, but the exact amount was not specified in the provided context information.
```
This is bad, because we happen to know the exact number is in the document! But the PDF is complicated, with tables and multi-column layout, and the LLM is missing the answer. Luckily, we can use LlamaParse to help us out.
First, you need a LlamaCloud API key. You can [get one for free](https://cloud.llamaindex.ai/) by signing up for LlamaCloud. Then put it in your `.env` file just like your OpenAI key:
```bash
LLAMA_CLOUD_API_KEY=llx-xxxxx
```
Now you're ready to use LlamaParse in your code. Let's bring it in as an import:
```python
from llama_parse import LlamaParse
```
And let's put in a second attempt to parse and query the file (note that this uses `documents2`, `index2`, etc.) and see if we get a better answer to the exact same question:
```python
documents2 = LlamaParse(result_type="markdown").load_data(
"./data/2023_canadian_budget.pdf"
)
index2 = VectorStoreIndex.from_documents(documents2)
query_engine2 = index2.as_query_engine()
response2 = query_engine2.query(
"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?"
)
print(response2)
```
We do!
```
$20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.
```
You can always check [the repo](https://github.com/run-llama/python-agents-tutorial/blob/main/4_llamaparse.py) to see what this code looks like.
As you can see, parsing quality makes a big difference to what the LLM can understand, even for relatively simple questions. Next let's see how [memory](./memory.md) can help us with more complex questions.
# Memory
We've now made several additions and subtractions to our code. To make it clear what we're using, you can see [the current code for our agent](https://github.com/run-llama/python-agents-tutorial/blob/main/5_memory.py) in the repo. It's using OpenAI for the LLM and LlamaParse to enhance parsing.
We've also added 3 questions in a row. Let's see how the agent handles them:
```python
response = agent.chat(
"How much exactly was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?"
)
print(response)
response = agent.chat(
"How much was allocated to a implement a means-tested dental care program in the 2023 Canadian federal budget?"
)
print(response)
response = agent.chat(
"How much was the total of those two allocations added together? Use a tool to answer any questions."
)
print(response)
```
This is demonstrating a powerful feature of agents in LlamaIndex: memory. Let's see what the output looks like:
```
Started parsing the file under job_id cac11eca-45e0-4ea9-968a-25f1ac9b8f99
Thought: The current language of the user is English. I need to use a tool to help me answer the question.
Action: canadian_budget_2023
Action Input: {'input': 'How much was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget?'}
Observation: $20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: $20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.
$20 billion was allocated to a tax credit to promote investment in green technologies in the 2023 Canadian federal budget.
Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: canadian_budget_2023
Action Input: {'input': 'How much was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget?'}
Observation: $13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: $13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.
$13 billion was allocated to implement a means-tested dental care program in the 2023 Canadian federal budget.
Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: add
Action Input: {'a': 20, 'b': 13}
Observation: 33
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: The total of the allocations for the tax credit to promote investment in green technologies and the means-tested dental care program in the 2023 Canadian federal budget is $33 billion.
The total of the allocations for the tax credit to promote investment in green technologies and the means-tested dental care program in the 2023 Canadian federal budget is $33 billion.
```
The agent remembers that it already has the budget allocations from previous questions, and can answer a contextual question like "add those two allocations together" without needing to specify which allocations exactly. It even correctly uses the other addition tool to sum up the numbers.
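Memory is handled for you by default, but you can also configure it explicitly. Here is a hedged sketch using the built-in chat memory buffer (the token limit is an arbitrary example, and the tools are the ones from the earlier steps):

```python
from llama_index.core.memory import ChatMemoryBuffer

# Cap how much conversation history the agent carries between turns.
memory = ChatMemoryBuffer.from_defaults(token_limit=4096)
agent = ReActAgent.from_tools(
    [multiply_tool, add_tool, budget_tool], memory=memory, verbose=True
)
```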
Having demonstrated how memory helps, let's [add some more complex tools](./tools.md) to our agent.
# Adding other tools
Now that you've built a capable agent, we hope you're excited about all it can do. The core of expanding agent capabilities is the tools available, and we have good news: [LlamaHub](https://llamahub.ai) from LlamaIndex has hundreds of integrations, including [dozens of existing agent tools](https://llamahub.ai/?tab=tools) that you can use right away. We'll show you how to use one of the existing tools, and also how to build and contribute your own.
## Using an existing tool from LlamaHub
For our example, we're going to use the [Yahoo Finance tool](https://llamahub.ai/l/tools/llama-index-tools-yahoo-finance?from=tools) from LlamaHub. It provides a set of six agent tools that look up a variety of information about stock ticker symbols.
First we need to install the tool:
```bash
pip install llama-index-tools-yahoo-finance
```
Then we can set up our dependencies. This is exactly the same as our previous examples, except for the final import:
```python
from dotenv import load_dotenv
load_dotenv()
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from llama_index.core import Settings
from llama_index.tools.yahoo_finance import YahooFinanceToolSpec
```
To show how custom tools and LlamaHub tools can work together, we'll include the code from our previous examples that defines the "multiply" and "add" tools. We'll also take this opportunity to set up the LLM:
```python
# settings
Settings.llm = OpenAI(model="gpt-4o", temperature=0)
# function tools
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product"""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

def add(a: float, b: float) -> float:
    """Add two numbers and return the sum"""
    return a + b

add_tool = FunctionTool.from_defaults(fn=add)
```
Now we'll do the new step, which is to fetch the array of tools:
```python
finance_tools = YahooFinanceToolSpec().to_tool_list()
```
This is just a regular Python list, so we can use its `extend` method to add our own tools to the mix:
```python
finance_tools.extend([multiply_tool, add_tool])
```
Then we set up the agent as usual, and ask a question:
```python
agent = ReActAgent.from_tools(finance_tools, verbose=True)
response = agent.chat("What is the current price of NVDA?")
print(response)
```
The response is very wordy, so we've truncated it:
```
Thought: The current language of the user is English. I need to use a tool to help me answer the question.
Action: stock_basic_info
Action Input: {'ticker': 'NVDA'}
Observation: Info:
{'address1': '2788 San Tomas Expressway'
...
'currentPrice': 135.58
...}
Thought: I have obtained the current price of NVDA from the stock basic info.
Answer: The current price of NVDA (NVIDIA Corporation) is $135.58.
The current price of NVDA (NVIDIA Corporation) is $135.58.
```
Perfect! As you can see, using existing tools is a snap.
As always, you can check [the repo](https://github.com/run-llama/python-agents-tutorial/blob/main/6_tools.py) to see this code all in one place.
## Building and contributing your own tools
We love open source contributions of new tools! You can see an example of [what the code of the Yahoo finance tool looks like](https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/tools/llama-index-tools-yahoo-finance/llama_index/tools/yahoo_finance/base.py):
* A class that extends `BaseToolSpec`
* A set of arbitrary Python functions
* A `spec_functions` list that maps the functions to the tool's API (see the sketch below)
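Here is a minimal sketch of that shape; the weather tool itself is made up for illustration:

```python
from llama_index.core.tools.tool_spec.base import BaseToolSpec

class WeatherToolSpec(BaseToolSpec):
    """A hypothetical tool spec; the class and method are illustrative."""

    spec_functions = ["get_temperature"]

    def get_temperature(self, city: str) -> str:
        """Return the current temperature for a city."""
        # A real implementation would call a weather API here.
        return f"The temperature in {city} is 21 degrees Celsius."

# Convert to agent tools, just like the Yahoo Finance example above.
weather_tools = WeatherToolSpec().to_tool_list()
```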
Once you've got a tool working, follow our [contributing guide](https://github.com/run-llama/llama_index/blob/main/CONTRIBUTING.md#2--contribute-a-pack-reader-tool-or-dataset-formerly-from-llama-hub) for instructions on correctly setting metadata and submitting a pull request.
Congratulations! You've completed our guide to building agents with LlamaIndex. We can't wait to see what use-cases you build!
# Building a basic agent
In LlamaIndex, an agent is a semi-autonomous piece of software powered by an LLM that is given a task and executes a series of steps towards solving that task. It is given a set of tools, which can be anything from arbitrary functions up to full LlamaIndex query engines, and it selects the best available tool to complete each step. When each step is completed, the agent judges whether the task is now complete, in which case it returns a result to the user, or whether it needs to take another step, in which case it loops back to the start.
![agent flow](./agent_flow.png)
## Getting started
You can find all of this code in [the tutorial repo](https://github.com/run-llama/python-agents-tutorial).
To avoid conflicts and keep things clean, we'll start a new Python virtual environment. You can use any virtual environment manager, but we'll use `poetry` here:
```bash
poetry init
poetry shell
```
And then we'll install the LlamaIndex library and some other dependencies that will come in handy:
```bash
pip install llama-index python-dotenv
```
If any of this gives you trouble, check out our more detailed [installation guide](../getting_started/installation/).
## OpenAI Key
Our agent will be powered by OpenAI's `GPT-3.5-Turbo` LLM, so you'll need an [API key](https://platform.openai.com/). Once you have your key, you can put it in a `.env` file in the root of your project:
```bash
OPENAI_API_KEY=sk-proj-xxxx
```
If you don't want to use OpenAI, we'll show you how to use other models later.
## Bring in dependencies
We'll start by importing the components of LlamaIndex we need, as well as loading the environment variables from our `.env` file:
```python
from dotenv import load_dotenv
load_dotenv()
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
```
## Create basic tools
For this simple example we'll be creating two tools: one that knows how to multiply numbers together, and one that knows how to add them.
```python
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product"""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

def add(a: float, b: float) -> float:
    """Add two numbers and return the sum"""
    return a + b

add_tool = FunctionTool.from_defaults(fn=add)
```
As you can see, these are regular vanilla Python functions. The docstring comments provide metadata to the agent about what the tool does: if your LLM is having trouble figuring out which tool to use, these docstrings are what you should tweak first.
After each function is defined we create `FunctionTool` objects from these functions, which wrap them in a way that the agent can understand.
## Initialize the LLM
`GPT-3.5-Turbo` is going to be doing the work today:
```python
llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
```
You could also pick another popular model accessible via API, such as those from [Mistral](../examples/llm/mistralai/), [Claude from Anthropic](../examples/llm/anthropic/) or [Gemini from Google](../examples/llm/gemini/).
## Initialize the agent
Now we create our agent. In this case, this is a [ReAct agent](https://klu.ai/glossary/react-agent-model), a relatively simple but powerful agent. We give it an array containing our two tools, the LLM we just created, and set `verbose=True` so we can see what's going on:
```python
agent = ReActAgent.from_tools([multiply_tool, add_tool], llm=llm, verbose=True)
```
## Ask a question
We explicitly tell it to use a tool, since the question is pretty simple and GPT-3.5 doesn't really need one to get the answer.
```python
response = agent.chat("What is 20+(2*4)? Use a tool to calculate every step.")
```
This should give you output similar to the following:
```
Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: multiply
Action Input: {'a': 2, 'b': 4}
Observation: 8
Thought: I need to add 20 to the result of the multiplication.
Action: add
Action Input: {'a': 20, 'b': 8}
Observation: 28
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: The result of 20 + (2 * 4) is 28.
The result of 20 + (2 * 4) is 28.
```
As you can see, the agent picks the correct tools one after the other and combines the answers to give the final result. Check the [repo](https://github.com/run-llama/python-agents-tutorial/blob/main/1_basic_agent.py) to see what the final code should look like.
Congratulations! You've built the most basic kind of agent. Next you can find out how to use [local models](./local_models.md) or skip to [adding RAG to your agent](./rag_agent.md).
# Storing
Once you have data [loaded](../loading/loading.md) and [indexed](../indexing/indexing.md), you will probably want to store it to avoid the time and cost of re-indexing it. By default, your indexed data is stored only in memory.
## Persisting to disk
The simplest way to store your indexed data is to use the built-in `.persist()` method of every Index, which writes all the data to disk at the location specified. This works for any type of index.
```python
index.storage_context.persist(persist_dir="<persist_dir>")
```
Here is an example of a Composable Graph:
```python
graph.root_index.storage_context.persist(persist_dir="<persist_dir>")
```
You can then avoid re-loading and re-indexing your data by loading the persisted index like this:
```python
from llama_index.core import StorageContext, load_index_from_storage
# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>")
# load index
index = load_index_from_storage(storage_context)
```
!!! tip
Important: if you had initialized your index with a custom `transformations`, `embed_model`, etc., you will need to pass in the same options during `load_index_from_storage`, or have it set as the [global settings](../../module_guides/supporting_modules/settings.md).
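    For instance, here is a minimal sketch assuming the index was built with a custom embedding model (any extra keyword arguments to `load_index_from_storage` are forwarded to the index constructor):

    ```python
    from llama_index.core import StorageContext, load_index_from_storage
    from llama_index.embeddings.openai import OpenAIEmbedding

    # the same (assumed) custom embedding model used when the index was built
    embed_model = OpenAIEmbedding(model="text-embedding-3-small")

    storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>")
    index = load_index_from_storage(storage_context, embed_model=embed_model)
    ```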
## Using Vector Stores
As discussed in [indexing](../indexing/indexing.md), one of the most common types of Index is the VectorStoreIndex. The API calls to create the embeddings in a VectorStoreIndex can be expensive in terms of time and money, so you will want to store them to avoid having to constantly re-index things.
LlamaIndex supports a [huge number of vector stores](../../module_guides/storing/vector_stores.md) which vary in architecture, complexity and cost. In this example we'll be using Chroma, an open-source vector store.
First you will need to install chroma:
```
pip install chromadb
```
To use Chroma to store the embeddings from a VectorStoreIndex, you need to:
- initialize the Chroma client
- create a Collection to store your data in Chroma
- assign Chroma as the `vector_store` in a `StorageContext`
- initialize your VectorStoreIndex using that StorageContext
Here's what that looks like, with a sneak peek at actually querying the data:
```python
import chromadb
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.core import StorageContext
# load some documents
documents = SimpleDirectoryReader("./data").load_data()
# initialize client, setting path to save data
db = chromadb.PersistentClient(path="./chroma_db")
# create collection
chroma_collection = db.get_or_create_collection("quickstart")
# assign chroma as the vector_store to the context
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# create your index
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
# create a query engine and query
query_engine = index.as_query_engine()
response = query_engine.query("What is the meaning of life?")
print(response)
```
If you've already created and stored your embeddings, you'll want to load them directly without loading your documents or creating a new VectorStoreIndex:
```python
import chromadb
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.core import StorageContext
# initialize client
db = chromadb.PersistentClient(path="./chroma_db")
# get collection
chroma_collection = db.get_or_create_collection("quickstart")
# assign chroma as the vector_store to the context
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# load your index from stored vectors
index = VectorStoreIndex.from_vector_store(
vector_store, storage_context=storage_context
)
# create a query engine
query_engine = index.as_query_engine()
response = query_engine.query("What is llama2?")
print(response)
```
!!! tip
We have a [more thorough example of using Chroma](../../examples/vector_stores/ChromaIndexDemo.ipynb) if you want to go deeper on this store.
### You're ready to query!
Now that you have loaded data, indexed it, and stored that index, you're ready to [query your data](../querying/querying.md).
## Inserting Documents or Nodes
If you've already created an index, you can add new documents to your index using the `insert` method.
```python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex([])
for doc in documents:
index.insert(doc)
```
See the [document management how-to](../../module_guides/indexing/document_management.md) for more details on managing documents and an example notebook.

# Tracing and Debugging
Debugging and tracing the operation of your application is key to understanding and optimizing it. LlamaIndex provides a variety of ways to do this.
## Basic logging
The simplest possible way to look into what your application is doing is to turn on debug logging. That can be done anywhere in your application like this:
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
## Callback handler
LlamaIndex provides callbacks to help debug, track, and trace the inner workings of the library. Using the callback manager, as many callbacks as needed can be added.
In addition to logging data related to events, you can also track the duration and number of occurrences
of each event.
Furthermore, a trace map of events is also recorded, and callbacks can use this data however they want. For example, the `LlamaDebugHandler` will, by default, print the trace of events after most operations.
You can get a simple callback handler like this:
```python
import llama_index.core
llama_index.core.set_global_handler("simple")
```
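For more detail than the simple handler provides, here is a sketch of wiring up the `LlamaDebugHandler` mentioned above, attaching it through the global `Settings` object:

```python
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

# print a trace of events (LLM calls, retrievals, etc.) after each operation
debug_handler = LlamaDebugHandler(print_trace_on_end=True)
Settings.callback_manager = CallbackManager([debug_handler])
```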
You can also learn how to [build your own custom callback handler](../../module_guides/observability/callbacks/index.md).
## Observability
LlamaIndex provides **one-click observability** to allow you to build principled LLM applications in a production setting.
This feature allows you to seamlessly integrate the LlamaIndex library with powerful observability/evaluation tools offered by our partners. Configure a variable once, and you'll be able to do things like the following:
- View LLM/prompt inputs/outputs
- Ensure that the outputs of any component (LLMs, embeddings) are performing as expected
- View call traces for both indexing and querying
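For example, here is a sketch of enabling one such integration through the global handler; the handler name and its companion package depend on which partner tool you choose (Arize Phoenix is assumed below):

```python
import llama_index.core

# one-line setup; assumes the matching integration package is installed,
# e.g. `pip install llama-index-callbacks-arize-phoenix`
llama_index.core.set_global_handler("arize_phoenix")
```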
To learn more, check out our [observability docs](../../module_guides/observability/index.md).

# Indexing
With your data loaded, you now have a list of Document objects (or a list of Nodes). It's time to build an `Index` over these objects so you can start querying them.
## What is an Index?
In LlamaIndex terms, an `Index` is a data structure composed of `Document` objects, designed to enable querying by an LLM. Your Index is designed to be complementary to your querying strategy.
LlamaIndex offers several different index types. We'll cover the two most common here.
## Vector Store Index
A `VectorStoreIndex` is by far the most frequent type of Index you'll encounter. The Vector Store Index takes your Documents and splits them up into Nodes. It then creates `vector embeddings` of the text of every node, ready to be queried by an LLM.
### What is an embedding?
`Vector embeddings` are central to how LLM applications function.
A `vector embedding`, often just called an embedding, is a **numerical representation of the semantics, or meaning of your text**. Two pieces of text with similar meanings will have mathematically similar embeddings, even if the actual text is quite different.
This mathematical relationship enables **semantic search**, where a user provides query terms and LlamaIndex can locate text that is related to the **meaning of the query terms** rather than simple keyword matching. This is a big part of how Retrieval-Augmented Generation works, and how LLMs function in general.
There are [many types of embeddings](../../module_guides/models/embeddings.md), and they vary in efficiency, effectiveness and computational cost. By default LlamaIndex uses `text-embedding-ada-002`, which is the default embedding used by OpenAI. If you are using different LLMs you will often want to use different embeddings.
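As a sketch, swapping in a different embedding model is typically a one-line change via the global `Settings` object (this assumes the OpenAI embeddings integration is installed):

```python
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding

# use a newer OpenAI embedding model instead of the default
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
```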
### Vector Store Index embeds your documents
Vector Store Index turns all of your text into embeddings using an API from your LLM; this is what is meant when we say it "embeds your text". If you have a lot of text, generating embeddings can take a long time since it involves many round-trip API calls.
When you want to search your embeddings, your query is itself turned into a vector embedding, and then a mathematical operation is carried out by VectorStoreIndex to rank all the embeddings by how semantically similar they are to your query.
### Top K Retrieval
Once the ranking is complete, VectorStoreIndex returns the most-similar embeddings as their corresponding chunks of text. The number of embeddings it returns is known as `k`, so the parameter controlling how many embeddings to return is known as `top_k`. This whole type of search is often referred to as "top-k semantic retrieval" for this reason.
Top-k retrieval is the simplest form of querying a vector index; you will learn about more complex and subtler strategies when you read the [querying](../querying/querying.md) section.
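For example, once you have an `index` (built as shown in the next section), you can run top-k retrieval directly:

```python
# fetch the 3 most semantically similar chunks for a query
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("What did the author do growing up?")
for node_with_score in nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:100])
```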
### Using Vector Store Index
To use the Vector Store Index, pass it the list of Documents you created during the loading stage:
```python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex.from_documents(documents)
```
!!! tip
`from_documents` also takes an optional argument `show_progress`. Set it to `True` to display a progress bar during index construction.
You can also choose to build an index over a list of Node objects directly:
```python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex(nodes)
```
With your text indexed, it is now technically ready for [querying](../querying/querying.md)! However, embedding all your text can be time-consuming and, if you are using a hosted LLM, it can also be expensive. To save time and money you will want to [store your embeddings](../storing/storing.md) first.
## Summary Index
A Summary Index is a simpler form of Index best suited to queries where, as the name suggests, you are trying to generate a summary of the text in your Documents. It simply stores all of the Documents and returns all of them to your query engine.
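Creating one looks just like creating a vector index:

```python
from llama_index.core import SummaryIndex

index = SummaryIndex.from_documents(documents)
```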
## Further Reading
If your data is a set of interconnected concepts (in computer science terms, a "graph") then you may be interested in our [knowledge graph index](../../examples/index_structs/knowledge_graph/KnowledgeGraphDemo.ipynb).

# Querying
Now that you've loaded your data, built an index, and stored that index for later, you're ready for the most significant part of an LLM application: querying.
At its simplest, querying is just a prompt call to an LLM: it can be a question and get an answer, or a request for summarization, or a much more complex instruction.
More complex querying could involve repeated/chained prompt + LLM calls, or even a reasoning loop across multiple components.
## Getting started
The basis of all querying is the `QueryEngine`. The simplest way to get a QueryEngine is to get your index to create one for you, like this:
```python
query_engine = index.as_query_engine()
response = query_engine.query(
"Write an email to the user given their background information."
)
print(response)
```
## Stages of querying
However, there is more to querying than initially meets the eye. Querying consists of three distinct stages:
- **Retrieval** is when you find and return the most relevant documents for your query from your `Index`. As previously discussed in [indexing](../indexing/indexing.md), the most common type of retrieval is "top-k" semantic retrieval, but there are many other retrieval strategies.
- **Postprocessing** is when the `Node`s retrieved are optionally reranked, transformed, or filtered, for instance by requiring that they have specific metadata such as keywords attached.
- **Response synthesis** is when your query, your most-relevant data and your prompt are combined and sent to your LLM to return a response.
!!! tip
You can find out about [how to attach metadata to documents](../../module_guides/loading/documents_and_nodes/usage_documents.md) and [nodes](../../module_guides/loading/documents_and_nodes/usage_nodes.md).
## Customizing the stages of querying
LlamaIndex features a low-level composition API that gives you granular control over your querying.
In this example, we customize our retriever to use a different number for `top_k` and add a post-processing step that requires that the retrieved nodes reach a minimum similarity score to be included. This would give you a lot of data when you have relevant results but potentially no data if you have nothing relevant.
```python
from llama_index.core import VectorStoreIndex, get_response_synthesizer
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.postprocessor import SimilarityPostprocessor
# build index
index = VectorStoreIndex.from_documents(documents)
# configure retriever
retriever = VectorIndexRetriever(
index=index,
similarity_top_k=10,
)
# configure response synthesizer
response_synthesizer = get_response_synthesizer()
# assemble query engine
query_engine = RetrieverQueryEngine(
retriever=retriever,
response_synthesizer=response_synthesizer,
node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)
# query
response = query_engine.query("What did the author do growing up?")
print(response)
```
You can also add your own retrieval, response synthesis, and overall query logic, by implementing the corresponding interfaces.
For a full list of implemented components and the supported configurations, check out our [reference docs](../../api_reference/index.md).
Let's go into more detail about customizing each step:
### Configuring retriever
```python
retriever = VectorIndexRetriever(
index=index,
similarity_top_k=10,
)
```
There are a huge variety of retrievers that you can learn about in our [module guide on retrievers](../../module_guides/querying/retriever/index.md).
### Configuring node postprocessors
We support advanced `Node` filtering and augmentation that can further improve the relevancy of the retrieved `Node` objects.
This can help reduce the time/number of LLM calls/cost or improve response quality.
For example:
- `KeywordNodePostprocessor`: filters nodes by `required_keywords` and `exclude_keywords`.
- `SimilarityPostprocessor`: filters nodes by setting a threshold on the similarity score (thus only supported by embedding-based retrievers)
- `PrevNextNodePostprocessor`: augments retrieved `Node` objects with additional relevant context based on `Node` relationships.
The full list of node postprocessors is documented in the [Node Postprocessor Reference](../../api_reference/postprocessor/index.md).
To configure the desired node postprocessors:
```python
from llama_index.core.postprocessor import KeywordNodePostprocessor

node_postprocessors = [
KeywordNodePostprocessor(
required_keywords=["Combinator"], exclude_keywords=["Italy"]
)
]
query_engine = RetrieverQueryEngine.from_args(
retriever, node_postprocessors=node_postprocessors
)
response = query_engine.query("What did the author do growing up?")
```
### Configuring response synthesis
After a retriever fetches relevant nodes, a `BaseSynthesizer` synthesizes the final response by combining the information.
You can configure it via
```python
query_engine = RetrieverQueryEngine.from_args(
retriever, response_mode=response_mode
)
```
Right now, we support the following options:
- `default`: "create and refine" an answer by sequentially going through each retrieved `Node`;
This makes a separate LLM call per Node. Good for more detailed answers.
- `compact`: "compact" the prompt during each LLM call by stuffing as
many `Node` text chunks that can fit within the maximum prompt size. If there are
too many chunks to stuff in one prompt, "create and refine" an answer by going through
multiple prompts.
- `tree_summarize`: Given a set of `Node` objects and the query, recursively construct a tree
and return the root node as the response. Good for summarization purposes.
- `no_text`: Only runs the retriever to fetch the nodes that would have been sent to the LLM, without actually sending them. The retrieved nodes can then be inspected by checking `response.source_nodes`.
- `accumulate`: Given a set of `Node` objects and the query, apply the query to each `Node` text
chunk while accumulating the responses into an array. Returns a concatenated string of all
responses. Good for when you need to run the same query separately against each text
chunk.
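These modes can be set directly when creating a query engine from an index. For example:

```python
# build a bottom-up tree of summaries across all retrieved nodes
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("Summarize the documents.")
print(response)
```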
## Structured Outputs
You may want to ensure your output is structured. See our [Query Engines + Pydantic Outputs](../../module_guides/querying/structured_outputs/query_engine.md) to see how to extract a Pydantic object from a query engine class.
Also make sure to check out our entire [Structured Outputs](../../module_guides/querying/structured_outputs/index.md) guide.
## Creating your own Query Pipeline
If you want to design complex query flows, you can compose your own query pipeline across many different modules, from prompts/LLMs/output parsers to retrievers to response synthesizers to your own custom components.
Take a look at our [Query Pipelines Module Guide](../../module_guides/querying/pipeline/index.md) for more details.

# Putting It All Together
Congratulations! You've loaded your data, indexed it, stored your index, and queried your index. Now you've got to ship something to production. We can show you how to do that!
- In [Q&A Patterns](q_and_a.md) we'll go into some of the more advanced and subtle ways you can build a query engine beyond the basics.
- The [terms definition tutorial](q_and_a/terms_definitions_tutorial.md) is a detailed, step-by-step tutorial on creating a subtle query application including defining your prompts and supporting images as input.
- We have a guide to [creating a unified query framework over your indexes](../../examples/retrievers/reciprocal_rerank_fusion.ipynb) which shows you how to run queries across multiple indexes.
- And also over [structured data like SQL](structured_data.md)
- We have a guide on [how to build a chatbot](chatbots/building_a_chatbot.md)
- We talk about [building agents in LlamaIndex](agents.md)
- We have a complete guide to using [property graphs for indexing and retrieval](../../module_guides/indexing/lpg_index_guide.md)
- And last but not least we show you how to build [a full stack web application](apps/index.md) using LlamaIndex
LlamaIndex also provides some tools / project templates to help you build a full-stack template. For instance, [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama) spins up a full-stack scaffold for you.
Check out our [Full-Stack Projects](../../community/full_stack_projects.md) page for more details.
We also have the [`llamaindex-cli rag` CLI tool](../../getting_started/starter_tools/rag_cli.md) that combines some of the above concepts into an easy to use tool for chatting with files from your terminal!

# Agents
Putting together an agent in LlamaIndex can be done by defining a set of tools and providing them to our ReActAgent implementation. We're using it here with OpenAI, but it can be used with any sufficiently capable LLM:
```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent
# define sample Tool
def multiply(a: int, b: int) -> int:
"""Multiply two integers and returns the result integer"""
return a * b
multiply_tool = FunctionTool.from_defaults(fn=multiply)
# initialize llm
llm = OpenAI(model="gpt-3.5-turbo-0613")
# initialize ReAct agent
agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)
```
These tools can be Python functions as shown above, or they can be LlamaIndex query engines:
```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata
query_engine_tools = [
QueryEngineTool(
query_engine=sql_agent,
metadata=ToolMetadata(
name="sql_agent", description="Agent that can execute SQL queries."
),
),
]
agent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True)
```
You can learn more in our [Agent Module Guide](../../module_guides/deploying/agents/index.md).
## Native OpenAIAgent
We have an `OpenAIAgent` implementation built on the [OpenAI API for function calling](https://openai.com/blog/function-calling-and-other-api-updates) that allows you to rapidly build agents:
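As a sketch, constructing one mirrors the ReAct example above (this assumes the `llama-index-agent-openai` and `llama-index-llms-openai` packages are installed, and reuses the `multiply_tool` defined earlier):

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI

# a function-calling agent backed by the OpenAI API
agent = OpenAIAgent.from_tools(
    [multiply_tool], llm=OpenAI(model="gpt-3.5-turbo"), verbose=True
)
response = agent.chat("What is 7 * 6?")
```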
- [OpenAIAgent](../../examples/agent/openai_agent.ipynb)
- [OpenAIAgent with Query Engine Tools](../../examples/agent/openai_agent_with_query_engine.ipynb)
- [OpenAIAgent Query Planning](../../examples/agent/openai_agent_query_plan.ipynb)
- [OpenAI Assistant](../../examples/agent/openai_assistant_agent.ipynb)
- [OpenAI Assistant Cookbook](../../examples/agent/openai_assistant_query_cookbook.ipynb)
- [Forced Function Calling](../../examples/agent/openai_forced_function_call.ipynb)
- [Parallel Function Calling](../../examples/agent/openai_agent_parallel_function_calling.ipynb)
- [Context Retrieval](../../examples/agent/openai_agent_context_retrieval.ipynb)
## Agentic Components within LlamaIndex
LlamaIndex provides core modules capable of automated reasoning for different use cases over your data which makes them essentially Agents. Some of these core modules are shown below along with example tutorials.
**SubQuestionQueryEngine for Multi Document Analysis**
- [Sub Question Query Engine (Intro)](../../examples/query_engine/sub_question_query_engine.ipynb)
- [10Q Analysis (Uber)](../../examples/usecases/10q_sub_question.ipynb)
- [10K Analysis (Uber and Lyft)](../../examples/usecases/10k_sub_question.ipynb)
**Query Transformations**
- [How-To](../../optimizing/advanced_retrieval/query_transformations.md)
- [Multi-Step Query Decomposition](../../examples/query_transformations/HyDEQueryTransformDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_transformations/HyDEQueryTransformDemo.ipynb))
**Routing**
- [Usage](../../module_guides/querying/router/index.md)
- [Router Query Engine Guide](../../examples/query_engine/RouterQueryEngine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/RouterQueryEngine.ipynb))
**LLM Reranking**
- [Second Stage Processing How-To](../../module_guides/querying/node_postprocessors/index.md)
- [LLM Reranking Guide (Great Gatsby)](../../examples/node_postprocessor/LLMReranker-Gatsby.ipynb)
**Chat Engines**
- [Chat Engines How-To](../../module_guides/deploying/chat_engines/index.md)
## Using LlamaIndex as a Tool within an Agent Framework
LlamaIndex can be used as a Tool within an agent framework - including LangChain and ChatGPT. These integrations are described below.
### LangChain
We have deep integrations with LangChain.
LlamaIndex query engines can be easily packaged as Tools to be used within a LangChain agent, and LlamaIndex can also be used as a memory module / retriever. Check out our guides/tutorials below!
**Resources**
- [Building a Chatbot Tutorial](chatbots/building_a_chatbot.md)
- [OnDemandLoaderTool Tutorial](../../examples/tools/OnDemandLoaderTool.ipynb)
### ChatGPT
LlamaIndex can be used as a ChatGPT retrieval plugin (we have a TODO to develop a more general plugin as well).
**Resources**
- [LlamaIndex ChatGPT Retrieval Plugin](https://github.com/openai/chatgpt-retrieval-plugin#llamaindex)

# A Guide to Extracting Terms and Definitions
Llama Index has many use cases (semantic search, summarization, etc.) that are well documented. However, this doesn't mean we can't apply Llama Index to very specific use cases!
In this tutorial, we will go through the design process of using Llama Index to extract terms and definitions from text, while allowing users to query those terms later. Using [Streamlit](https://streamlit.io/), we can provide an easy way to build a frontend for running and testing all of this, and quickly iterate on our design.
This tutorial assumes you have Python 3.9+ and the following packages installed:
- llama-index
- streamlit
At the base level, our objective is to take text from a document, extract terms and definitions, and then provide a way for users to query that knowledge base of terms and definitions. The tutorial will go over features from both Llama Index and Streamlit, and hopefully provide some interesting solutions for common problems that come up.
The final version of this tutorial can be found [here](https://github.com/abdulasiraj/A-Guide-to-Extracting-Terms-and-Definitions) and a live hosted demo is available on [Huggingface Spaces](https://huggingface.co./spaces/Nobody4591/Llama_Index_Term_Extractor).
## Uploading Text
Step one is giving users a way to input text manually. Let’s write some code using Streamlit to provide the interface for this! Use the following code and launch the app with `streamlit run app.py`.
```python
import streamlit as st
st.title("🦙 Llama Index Term Extractor 🦙")
document_text = st.text_area("Enter raw text")
if st.button("Extract Terms and Definitions") and document_text:
with st.spinner("Extracting..."):
extracted_terms = document_text # this is a placeholder!
st.write(extracted_terms)
```
Super simple, right? But you'll notice that the app doesn't do anything useful yet. To use llama_index, we also need to set up our OpenAI LLM. There are a bunch of possible settings for the LLM, so we can let the user figure out what's best. We should also let the user set the prompt that will extract the terms (which will also help us debug what works best).
## LLM Settings
This next step introduces some tabs to our app, to separate it into different panes that provide different features. Let's create a tab for LLM settings and for uploading text:
```python
import os
import streamlit as st
DEFAULT_TERM_STR = (
"Make a list of terms and definitions that are defined in the context, "
"with one pair on each line. "
"If a term is missing it's definition, use your best judgment. "
"Write each line as as follows:\nTerm: <term> Definition: <definition>"
)
st.title("🦙 Llama Index Term Extractor 🦙")
setup_tab, upload_tab = st.tabs(["Setup", "Upload/Extract Terms"])
with setup_tab:
st.subheader("LLM Setup")
api_key = st.text_input("Enter your OpenAI API key here", type="password")
llm_name = st.selectbox("Which LLM?", ["gpt-3.5-turbo", "gpt-4"])
model_temperature = st.slider(
"LLM Temperature", min_value=0.0, max_value=1.0, step=0.1
)
term_extract_str = st.text_area(
"The query to extract terms and definitions with.",
value=DEFAULT_TERM_STR,
)
with upload_tab:
st.subheader("Extract and Query Definitions")
document_text = st.text_area("Enter raw text")
if st.button("Extract Terms and Definitions") and document_text:
with st.spinner("Extracting..."):
extracted_terms = document_text # this is a placeholder!
st.write(extracted_terms)
```
Now our app has two tabs, which really helps with the organization. You'll also notice I added a default prompt to extract terms -- you can change this later once you try extracting some terms; it's just the prompt I arrived at after experimenting a bit.
Speaking of extracting terms, it's time to add some functions to do just that!
## Extracting and Storing Terms
Now that we are able to define LLM settings and input text, we can try using Llama Index to extract the terms from text for us!
We can add the following functions to both initialize our LLM, as well as use it to extract terms from the input text.
```python
from llama_index.core import Document, SummaryIndex, load_index_from_storage
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings
def get_llm(llm_name, model_temperature, api_key, max_tokens=256):
os.environ["OPENAI_API_KEY"] = api_key
return OpenAI(
temperature=model_temperature, model=llm_name, max_tokens=max_tokens
)
def extract_terms(
documents, term_extract_str, llm_name, model_temperature, api_key
):
llm = get_llm(llm_name, model_temperature, api_key, max_tokens=1024)
temp_index = SummaryIndex.from_documents(
documents,
)
query_engine = temp_index.as_query_engine(
response_mode="tree_summarize", llm=llm
)
terms_definitions = str(query_engine.query(term_extract_str))
terms_definitions = [
x
for x in terms_definitions.split("\n")
if x and "Term:" in x and "Definition:" in x
]
# parse the text into a dict
terms_to_definition = {
x.split("Definition:")[0]
.split("Term:")[-1]
.strip(): x.split("Definition:")[-1]
.strip()
for x in terms_definitions
}
return terms_to_definition
```
Now, using the new functions, we can finally extract our terms!
```python
...
with upload_tab:
st.subheader("Extract and Query Definitions")
document_text = st.text_area("Enter raw text")
if st.button("Extract Terms and Definitions") and document_text:
with st.spinner("Extracting..."):
extracted_terms = extract_terms(
[Document(text=document_text)],
term_extract_str,
llm_name,
model_temperature,
api_key,
)
st.write(extracted_terms)
```
There's a lot going on now, let's take a moment to go over what is happening.
`get_llm()` instantiates the LLM based on the user configuration from the setup tab: it sets the OpenAI API key and constructs an `OpenAI` instance with the chosen model name and temperature.
`extract_terms()` is where all the good stuff happens. First, we call `get_llm()` with `max_tokens=1024`, since we don't want to limit the model too much when it is extracting our terms and definitions (the default is 256 if not set). Keep in mind that when documents are indexed by Llama Index, large documents are broken into chunks (also called nodes).
Next, we create a temporary summary index and pass in our llm. A summary index will read every single piece of text in our index, which is perfect for extracting terms. Then, we use our pre-defined query text to extract terms, using `response_mode="tree_summarize"`. This response mode generates a tree of summaries from the bottom up, where each parent summarizes its children. The top of the tree is then returned, and it will contain all our extracted terms and definitions.
Lastly, we do some minor post processing. We assume the model followed instructions and put a term/definition pair on each line. If a line is missing the `Term:` or `Definition:` labels, we skip it. Then, we convert this to a dictionary for easy storage!
## Saving Extracted Terms
Now that we can extract terms, we need to put them somewhere so that we can query for them later. A `VectorStoreIndex` should be a perfect choice for now! But in addition, our app should also keep track of which terms are inserted into the index so that we can inspect them later. Using `st.session_state`, we can store the current list of terms in a session dict, unique to each user!
First things first though, let's add a feature to initialize a global vector index and another function to insert the extracted terms.
```python
from llama_index.core import Settings, VectorStoreIndex
...
if "all_terms" not in st.session_state:
st.session_state["all_terms"] = DEFAULT_TERMS
...
def insert_terms(terms_to_definition):
for term, definition in terms_to_definition.items():
doc = Document(text=f"Term: {term}\nDefinition: {definition}")
st.session_state["llama_index"].insert(doc)
@st.cache_resource
def initialize_index(llm_name, model_temperature, api_key):
"""Create the VectorStoreIndex object."""
Settings.llm = get_llm(llm_name, model_temperature, api_key)
index = VectorStoreIndex([])
    return index
...
with upload_tab:
st.subheader("Extract and Query Definitions")
if st.button("Initialize Index and Reset Terms"):
st.session_state["llama_index"] = initialize_index(
llm_name, model_temperature, api_key
)
st.session_state["all_terms"] = {}
if "llama_index" in st.session_state:
st.markdown(
"Either upload an image/screenshot of a document, or enter the text manually."
)
document_text = st.text_area("Or enter raw text")
if st.button("Extract Terms and Definitions") and (
uploaded_file or document_text
):
st.session_state["terms"] = {}
terms_docs = {}
with st.spinner("Extracting..."):
terms_docs.update(
extract_terms(
[Document(text=document_text)],
term_extract_str,
llm_name,
model_temperature,
api_key,
)
)
st.session_state["terms"].update(terms_docs)
if "terms" in st.session_state and st.session_state["terms"]:
st.markdown("Extracted terms")
st.json(st.session_state["terms"])
if st.button("Insert terms?"):
with st.spinner("Inserting terms"):
insert_terms(st.session_state["terms"])
st.session_state["all_terms"].update(st.session_state["terms"])
st.session_state["terms"] = {}
st.experimental_rerun()
```
Now you are really starting to leverage the power of Streamlit! Let's start with the code under the upload tab. We added a button to initialize the vector index, and we store it in the global Streamlit state dictionary, as well as resetting the currently extracted terms. Then, after extracting terms from the input text, we store the extracted terms in the global state again and give the user a chance to review them before inserting. If the insert button is pressed, then we call our insert terms function, update our global tracking of inserted terms, and remove the most recently extracted terms from the session state.
## Querying for Extracted Terms/Definitions
With the terms and definitions extracted and saved, how can we use them? And how will the user even remember what's previously been saved?? We can simply add some more tabs to the app to handle these features.
```python
...
setup_tab, terms_tab, upload_tab, query_tab = st.tabs(
["Setup", "All Terms", "Upload/Extract Terms", "Query Terms"]
)
...
with terms_tab:
    st.subheader("Current Extracted Terms and Definitions")
    st.json(st.session_state["all_terms"])
...
with query_tab:
st.subheader("Query for Terms/Definitions!")
st.markdown(
(
"The LLM will attempt to answer your query, and augment it's answers using the terms/definitions you've inserted. "
"If a term is not in the index, it will answer using it's internal knowledge."
)
)
if st.button("Initialize Index and Reset Terms", key="init_index_2"):
st.session_state["llama_index"] = initialize_index(
llm_name, model_temperature, api_key
)
st.session_state["all_terms"] = {}
if "llama_index" in st.session_state:
query_text = st.text_input("Ask about a term or definition:")
if query_text:
query_text = (
query_text
+ "\nIf you can't find the answer, answer the query with the best of your knowledge."
)
with st.spinner("Generating answer..."):
response = (
st.session_state["llama_index"]
.as_query_engine(
similarity_top_k=5,
response_mode="compact",
)
.query(query_text)
)
st.markdown(str(response))
```
While this is mostly basic, some important things to note:
- Our initialize button has the same text as our other button. Streamlit will complain about this, so we provide a unique key instead.
- Some additional text has been added to the query! This is to try and compensate for times when the index does not have the answer.
- In our index query, we've specified two options:
- `similarity_top_k=5` means the index will fetch the top 5 closest matching terms/definitions to the query.
- `response_mode="compact"` means as much text as possible from the 5 matching terms/definitions will be used in each LLM call. Without this, the index would make at least 5 calls to the LLM, which can slow things down for the user.
## Dry Run Test
Well, actually I hope you've been testing as we went. But now, let's try one complete test.
1. Refresh the app
2. Enter your LLM settings
3. Head over to the query tab
4. Ask the following: `What is a bunnyhug?`
5. The app should give some nonsense response. If you didn't know, a bunnyhug is another word for a hoodie, used by people from the Canadian Prairies!
6. Let's add this definition to the app. Open the upload tab and enter the following text: `A bunnyhug is a common term used to describe a hoodie. This term is used by people from the Canadian Prairies.`
7. Click the extract button. After a few moments, the app should display the correctly extracted term/definition. Click the insert term button to save it!
8. If we open the terms tab, the term and definition we just extracted should be displayed
9. Go back to the query tab and try asking what a bunnyhug is. Now, the answer should be correct!
## Improvement #1 - Create a Starting Index
With our base app working, it might feel like a lot of work to build up a useful index. What if we gave the user some kind of starting point to show off the app's query capabilities? We can do just that! First, let's make a small change to our app so that we save the index to disk after every upload:
```python
def insert_terms(terms_to_definition):
for term, definition in terms_to_definition.items():
doc = Document(text=f"Term: {term}\nDefinition: {definition}")
st.session_state["llama_index"].insert(doc)
# TEMPORARY - save to disk
st.session_state["llama_index"].storage_context.persist()
```
Now, we need some document to extract from! The repository for this project used the wikipedia page on New York City, and you can find the text [here](https://github.com/jerryjliu/llama_index/blob/main/examples/test_wiki/data/nyc_text.txt).
If you paste the text into the upload tab and run it (it may take some time), we can insert the extracted terms. Make sure to also copy the text for the extracted terms into a notepad or similar before inserting into the index! We will need them in a second.
After inserting, remove the line of code we used to save the index to disk. With a starting index now saved, we can modify our `initialize_index` function to look like this:
```python
from llama_index.core import StorageContext


@st.cache_resource
def initialize_index(llm_name, model_temperature, api_key):
    """Load the Index object."""
    Settings.llm = get_llm(llm_name, model_temperature, api_key)
    # rebuild the storage context pointing at the persisted index
    # (persist() saves to ./storage by default)
    storage_context = StorageContext.from_defaults(persist_dir="./storage")
    index = load_index_from_storage(storage_context)
return index
```
Did you remember to save that giant list of extracted terms in a notepad? Now when our app initializes, we want to pass in the default terms that are in the index to our global terms state:
```python
...
if "all_terms" not in st.session_state:
st.session_state["all_terms"] = DEFAULT_TERMS
...
```
Repeat the above anywhere where we were previously resetting the `all_terms` values.
## Improvement #2 - (Refining) Better Prompts
If you play around with the app a bit now, you might notice that it stopped following our prompt! Remember, we appended an instruction to our `query_str` telling the LLM to answer from the best of its knowledge if the term/definition could not be found. But now if you try asking about random terms (like bunnyhug!), it may or may not follow those instructions.
This is due to the concept of "refining" answers in Llama Index. Since we are querying across the top 5 matching results, sometimes all the results do not fit in a single prompt! OpenAI models typically have a max input size of 4097 tokens. So, Llama Index accounts for this by breaking up the matching results into chunks that will fit into the prompt. After Llama Index gets an initial answer from the first API call, it sends the next chunk to the API, along with the previous answer, and asks the model to refine that answer.
So, the refine process seems to be messing with our results! Rather than appending extra instructions to the `query_str`, remove that, and Llama Index will let us provide our own custom prompts! Let's create those now, using the [default prompts](https://github.com/jerryjliu/llama_index/blob/main/llama_index/prompts/default_prompts.py) and [chat specific prompts](https://github.com/jerryjliu/llama_index/blob/main/llama_index/prompts/chat_prompts.py) as a guide. Using a new file `constants.py`, let's create some new query templates:
```python
from llama_index.core import (
PromptTemplate,
SelectorPromptTemplate,
ChatPromptTemplate,
)
from llama_index.core.prompts.utils import is_chat_model
from llama_index.core.llms import ChatMessage, MessageRole
# Text QA templates
DEFAULT_TEXT_QA_PROMPT_TMPL = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information answer the following question "
"(if you don't know the answer, use the best of your knowledge): {query_str}\n"
)
TEXT_QA_TEMPLATE = PromptTemplate(DEFAULT_TEXT_QA_PROMPT_TMPL)
# Refine templates
DEFAULT_REFINE_PROMPT_TMPL = (
"The original question is as follows: {query_str}\n"
"We have provided an existing answer: {existing_answer}\n"
"We have the opportunity to refine the existing answer "
"(only if needed) with some more context below.\n"
"------------\n"
"{context_msg}\n"
"------------\n"
"Given the new context and using the best of your knowledge, improve the existing answer. "
"If you can't improve the existing answer, just repeat it again."
)
DEFAULT_REFINE_PROMPT = PromptTemplate(DEFAULT_REFINE_PROMPT_TMPL)
CHAT_REFINE_PROMPT_TMPL_MSGS = [
ChatMessage(content="{query_str}", role=MessageRole.USER),
ChatMessage(content="{existing_answer}", role=MessageRole.ASSISTANT),
ChatMessage(
content="We have the opportunity to refine the above answer "
"(only if needed) with some more context below.\n"
"------------\n"
"{context_msg}\n"
"------------\n"
"Given the new context and using the best of your knowledge, improve the existing answer. "
"If you can't improve the existing answer, just repeat it again.",
role=MessageRole.USER,
),
]
CHAT_REFINE_PROMPT = ChatPromptTemplate(CHAT_REFINE_PROMPT_TMPL_MSGS)
# refine prompt selector
REFINE_TEMPLATE = SelectorPromptTemplate(
default_template=DEFAULT_REFINE_PROMPT,
conditionals=[(is_chat_model, CHAT_REFINE_PROMPT)],
)
```
That seems like a lot of code, but it's not too bad! If you looked at the default prompts, you might have noticed that there are default prompts, and prompts specific to chat models. Continuing that trend, we do the same for our custom prompts. Then, using a prompt selector, we can combine both prompts into a single object. If the LLM being used is a chat model (ChatGPT, GPT-4), then the chat prompts are used. Otherwise, use the normal prompt templates.
Another thing to note is that we only defined one QA template. In a chat model, this will be converted to a single "human" message.
So, now we can import these prompts into our app and use them during the query.
```python
from constants import REFINE_TEMPLATE, TEXT_QA_TEMPLATE
...
if "llama_index" in st.session_state:
query_text = st.text_input("Ask about a term or definition:")
if query_text:
query_text = query_text # Notice we removed the old instructions
with st.spinner("Generating answer..."):
response = (
st.session_state["llama_index"]
.as_query_engine(
similarity_top_k=5,
response_mode="compact",
text_qa_template=TEXT_QA_TEMPLATE,
                    refine_template=REFINE_TEMPLATE,
)
.query(query_text)
)
st.markdown(str(response))
...
```
If you experiment a bit more with queries, hopefully you notice that the responses follow our instructions a little better now!
## Improvement #3 - Image Support
Llama Index also supports images! Using Llama Index, we can upload images of documents (papers, letters, etc.), and Llama Index handles extracting the text. We can leverage this to also allow users to upload images of their documents and extract terms and definitions from them.
If you get an import error about PIL, install it using `pip install Pillow` first.
```python
from PIL import Image
from llama_index.core import SimpleDirectoryReader
from llama_index.readers.file import ImageReader
@st.cache_resource
def get_file_extractor():
image_parser = ImageReader(keep_image=True, parse_text=True)
file_extractor = {
".jpg": image_parser,
".png": image_parser,
".jpeg": image_parser,
}
return file_extractor
file_extractor = get_file_extractor()
...
with upload_tab:
st.subheader("Extract and Query Definitions")
if st.button("Initialize Index and Reset Terms", key="init_index_1"):
st.session_state["llama_index"] = initialize_index(
llm_name, model_temperature, api_key
)
st.session_state["all_terms"] = DEFAULT_TERMS
if "llama_index" in st.session_state:
st.markdown(
"Either upload an image/screenshot of a document, or enter the text manually."
)
uploaded_file = st.file_uploader(
"Upload an image/screenshot of a document:",
type=["png", "jpg", "jpeg"],
)
document_text = st.text_area("Or enter raw text")
if st.button("Extract Terms and Definitions") and (
uploaded_file or document_text
):
st.session_state["terms"] = {}
terms_docs = {}
with st.spinner("Extracting (images may be slow)..."):
if document_text:
terms_docs.update(
extract_terms(
[Document(text=document_text)],
term_extract_str,
llm_name,
model_temperature,
api_key,
)
)
if uploaded_file:
Image.open(uploaded_file).convert("RGB").save("temp.png")
img_reader = SimpleDirectoryReader(
input_files=["temp.png"], file_extractor=file_extractor
)
img_docs = img_reader.load_data()
os.remove("temp.png")
terms_docs.update(
extract_terms(
img_docs,
term_extract_str,
llm_name,
model_temperature,
api_key,
)
)
st.session_state["terms"].update(terms_docs)
if "terms" in st.session_state and st.session_state["terms"]:
st.markdown("Extracted terms")
st.json(st.session_state["terms"])
if st.button("Insert terms?"):
with st.spinner("Inserting terms"):
insert_terms(st.session_state["terms"])
st.session_state["all_terms"].update(st.session_state["terms"])
st.session_state["terms"] = {}
st.experimental_rerun()
```
Here, we added the option to upload a file using Streamlit. Then the image is opened and saved to disk (this seems hacky but it keeps things simple). Then we pass the image path to the reader, extract the documents/text, and remove our temp image file.
Now that we have the documents, we can call `extract_terms()` the same as before.
## Conclusion/TLDR
In this tutorial, we covered a ton of information, while solving some common issues and problems along the way:
- Using different indexes for different use cases (Summary vs. Vector index)
- Storing global state values with Streamlit's `session_state` concept
- Customizing internal prompts with Llama Index
- Reading text from images with Llama Index
The final version of this tutorial can be found [here](https://github.com/abdulasiraj/A-Guide-to-Extracting-Terms-and-Definitions) and a live hosted demo is available on [Huggingface Spaces](https://huggingface.co./spaces/Nobody4591/Llama_Index_Term_Extractor).

# Q&A patterns
## Semantic Search
The most basic example usage of LlamaIndex is through semantic search. We provide a simple in-memory vector store for you to get started, but you can also choose to use any one of our [vector store integrations](../../community/integrations/vector_stores.md):
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```
**Tutorials**
- [Starter Tutorial](../../getting_started/starter_example.md)
- [Basic Usage Pattern](../querying/querying.md)
**Guides**
- [Example](../../examples/vector_stores/SimpleIndexDemo.ipynb) ([Notebook](https://github.com/run-llama/llama_index/tree/main/docs/docs/examples/vector_stores/SimpleIndexDemo.ipynb))
## Summarization
A summarization query requires the LLM to iterate through many if not most documents in order to synthesize an answer.
For instance, a summarization query could look like one of the following:
- "What is a summary of this collection of text?"
- "Give me a summary of person X's experience with the company."
In general, a summary index would be suited for this use case. A summary index by default goes through all the data.
Empirically, setting `response_mode="tree_summarize"` also leads to better summarization results.
```python
from llama_index.core import SummaryIndex

index = SummaryIndex.from_documents(documents)
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("<summarization_query>")
```
## Queries over Structured Data
LlamaIndex supports queries over structured data, whether that's a Pandas DataFrame or a SQL Database.
Here are some relevant resources:
**Tutorials**
- [Guide on Text-to-SQL](structured_data.md)
**Guides**
- [SQL Guide (Core)](../../examples/index_structs/struct_indices/SQLIndexDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/index_structs/struct_indices/SQLIndexDemo.ipynb))
- [Pandas Demo](../../examples/query_engine/pandas_query_engine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/pandas_query_engine.ipynb))
## Routing over Heterogeneous Data
LlamaIndex also supports routing over heterogeneous data sources with `RouterQueryEngine` - for instance, if you want to "route" a query to an
underlying Document or a sub-index.
To do this, first build the sub-indices over different data sources.
Then construct the corresponding query engines, and give each query engine a description to obtain a `QueryEngineTool`.
```python
from llama_index.core import TreeIndex, VectorStoreIndex
from llama_index.core.tools import QueryEngineTool
...
# define sub-indices
index1 = VectorStoreIndex.from_documents(notion_docs)
index2 = VectorStoreIndex.from_documents(slack_docs)
# define query engines and tools
tool1 = QueryEngineTool.from_defaults(
query_engine=index1.as_query_engine(),
description="Use this query engine to do...",
)
tool2 = QueryEngineTool.from_defaults(
query_engine=index2.as_query_engine(),
description="Use this query engine for something else...",
)
```
Then, we define a `RouterQueryEngine` over them.
By default, this uses an `LLMSingleSelector` as the router, which uses the LLM to choose the best sub-index to route the query to, given the descriptions.
```python
from llama_index.core.query_engine import RouterQueryEngine
query_engine = RouterQueryEngine.from_defaults(
query_engine_tools=[tool1, tool2]
)
response = query_engine.query(
"In Notion, give me a summary of the product roadmap."
)
```
**Guides**
- [Router Query Engine Guide](../../examples/query_engine/RouterQueryEngine.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_engine/RouterQueryEngine.ipynb))
## Compare/Contrast Queries
You can explicitly perform compare/contrast queries with a **query transformation** module within a ComposableGraph.
```python
from llama_index.core.query.query_transform.base import DecomposeQueryTransform
# assumes an `llm` object has been created earlier, e.g. OpenAI(model="gpt-3.5-turbo")
decompose_transform = DecomposeQueryTransform(llm=llm, verbose=True)
```
This module will help break down a complex query into a simpler one over your existing index structure.
**Guides**
- [Query Transformations](../../optimizing/advanced_retrieval/query_transformations.md)
You can also rely on the LLM to _infer_ whether to perform compare/contrast queries (see Multi Document Queries below).
## Multi Document Queries
Besides the explicit synthesis/routing flows described above, LlamaIndex can support more general multi-document queries as well.
It can do this through our `SubQuestionQueryEngine` class. Given a query, this query engine will generate a "query plan" containing
sub-queries against sub-documents before synthesizing the final answer.
To do this, first define an index for each document/data source, and wrap it with a `QueryEngineTool` (similar to above):
```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata
query_engine_tools = [
QueryEngineTool(
query_engine=sept_engine,
metadata=ToolMetadata(
name="sept_22",
description="Provides information about Uber quarterly financials ending September 2022",
),
),
QueryEngineTool(
query_engine=june_engine,
metadata=ToolMetadata(
name="june_22",
description="Provides information about Uber quarterly financials ending June 2022",
),
),
QueryEngineTool(
query_engine=march_engine,
metadata=ToolMetadata(
name="march_22",
description="Provides information about Uber quarterly financials ending March 2022",
),
),
]
```
Then, we define a `SubQuestionQueryEngine` over these tools:
```python
from llama_index.core.query_engine import SubQuestionQueryEngine
query_engine = SubQuestionQueryEngine.from_defaults(
query_engine_tools=query_engine_tools
)
```
This query engine can execute any number of sub-queries against any subset of query engine tools before synthesizing the final answer.
This makes it especially well-suited for compare/contrast queries across documents as well as queries pertaining to a specific document.
**Guides**
- [Sub Question Query Engine (Intro)](../../examples/query_engine/sub_question_query_engine.ipynb)
- [10Q Analysis (Uber)](../../examples/usecases/10q_sub_question.ipynb)
- [10K Analysis (Uber and Lyft)](../../examples/usecases/10k_sub_question.ipynb)
## Multi-Step Queries
LlamaIndex can also support iterative multi-step queries. Given a complex query, break it down into an initial subquestion,
and sequentially generate subquestions based on returned answers until the final answer is returned.
For instance, given a question "Who was in the first batch of the accelerator program the author started?",
the module will first decompose the query into a simpler initial question "What was the accelerator program the author started?",
query the index, and then ask followup questions.
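A minimal sketch of wiring this up, assuming an existing `index` and the multi-step modules in `llama-index` core:

```python
from llama_index.core.query_engine import MultiStepQueryEngine
from llama_index.core.indices.query.query_transform.base import (
    StepDecomposeQueryTransform,
)

# wrap an ordinary query engine so each step decomposes the query further
step_decompose = StepDecomposeQueryTransform(verbose=True)
query_engine = MultiStepQueryEngine(
    query_engine=index.as_query_engine(),
    query_transform=step_decompose,
    index_summary="Used to answer questions about the author",
)
response = query_engine.query(
    "Who was in the first batch of the accelerator program the author started?"
)
```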
**Guides**
- [Query Transformations](../../optimizing/advanced_retrieval/query_transformations.md)
- [Multi-Step Query Decomposition](../../examples/query_transformations/HyDEQueryTransformDemo.ipynb) ([Notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/docs/examples/query_transformations/HyDEQueryTransformDemo.ipynb))
## Temporal Queries
LlamaIndex can support queries that require an understanding of time. It can do this in two ways:
- Decide whether the query requires utilizing temporal relationships between nodes (prev/next relationships) in order to retrieve additional context to answer the question.
- Sort by recency and filter outdated context.
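As a sketch, the second approach might use a recency postprocessor (this assumes your nodes carry a `date` field in their metadata):

```python
from llama_index.core.postprocessor import FixedRecencyPostprocessor

# keep only the single most recent node among the retrieved results,
# ranked by the "date" field in each node's metadata
postprocessor = FixedRecencyPostprocessor(top_k=1, date_key="date")
query_engine = index.as_query_engine(node_postprocessors=[postprocessor])
```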
**Guides**
- [Postprocessing Guide](../../module_guides/querying/node_postprocessors/node_postprocessors.md)
- [Prev/Next Postprocessing](../../examples/node_postprocessor/PrevNextPostprocessorDemo.ipynb)
- [Recency Postprocessing](../../examples/node_postprocessor/RecencyPostprocessorDemo.ipynb)
## Additional Resources
- [A Guide to Extracting Terms and Definitions](q_and_a/terms_definitions_tutorial.md)
- [SEC 10k Analysis](https://medium.com/@jerryjliu98/how-unstructured-and-llamaindex-can-help-bring-the-power-of-llms-to-your-own-data-3657d063e30d)

# Airbyte SQL Index Guide
We will show how to generate SQL queries on a Snowflake db populated by Airbyte.
```python
# Uncomment to enable debugging.
# import logging
# import sys
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
### Airbyte ingestion
Here we show how to ingest data from Github into a Snowflake db using Airbyte.
```python
from IPython.display import Image
Image(filename="img/airbyte_1.png")
```
Let's create a new connection. Here we will be dumping our Zendesk tickets into a Snowflake db.
```python
Image(filename="img/github_1.png")
```
![png](output_6_0.png)
```python
Image(filename="img/github_2.png")
```
![png](output_7_0.png)
```python
Image(filename="img/snowflake_1.png")
```
![png](output_8_0.png)
```python
Image(filename="img/snowflake_2.png")
```
![png](output_9_0.png)
Choose the streams you want to sync.
```python
Image(filename="img/airbyte_7.png")
```
![png](output_11_0.png)
```python
Image(filename="img/github_3.png")
```
![png](output_12_0.png)
Sync your data.
```python
Image(filename="img/airbyte_9.png")
```
![png](output_14_0.png)
```python
Image(filename="img/airbyte_8.png")
```
![png](output_15_0.png)
### Snowflake-SQLAlchemy version fix
Hack to make snowflake-sqlalchemy work despite incompatible sqlalchemy versions
Taken from https://github.com/snowflakedb/snowflake-sqlalchemy/issues/380#issuecomment-1470762025
```python
# Hack to make snowflake-sqlalchemy work until they patch it
def snowflake_sqlalchemy_20_monkey_patches():
import sqlalchemy.util.compat
# make strings always return unicode strings
sqlalchemy.util.compat.string_types = (str,)
sqlalchemy.types.String.RETURNS_UNICODE = True
import snowflake.sqlalchemy.snowdialect
snowflake.sqlalchemy.snowdialect.SnowflakeDialect.returns_unicode_strings = (
True
)
# make has_table() support the `info_cache` kwarg
import snowflake.sqlalchemy.snowdialect
def has_table(self, connection, table_name, schema=None, info_cache=None):
"""
Checks if the table exists
"""
return self._has_object(connection, "TABLE", table_name, schema)
snowflake.sqlalchemy.snowdialect.SnowflakeDialect.has_table = has_table
# usage: call this function before creating an engine:
try:
    snowflake_sqlalchemy_20_monkey_patches()
except ImportError as e:
    raise ImportError("Please run `pip install snowflake-sqlalchemy`") from e
```
### Define database
We pass the Snowflake uri to the SQL db constructor
```python
snowflake_uri = "snowflake://<user_login_name>:<password>@<account_identifier>/<database_name>/<schema_name>?warehouse=<warehouse_name>&role=<role_name>"
```
First we try connecting with sqlalchemy to check the db works.
```python
from sqlalchemy import select, create_engine, MetaData, Table
# view current table
engine = create_engine(snowflake_uri)
metadata = MetaData(bind=None)
table = Table("ZENDESK_TICKETS", metadata, autoload=True, autoload_with=engine)
stmt = select(table.columns)
with engine.connect() as connection:
results = connection.execute(stmt).fetchone()
print(results)
print(results.keys())
```
/var/folders/dx/n9yhm8p9039b5bgmgjqy46y40000gn/T/ipykernel_57673/3609487787.py:6: RemovedIn20Warning: Deprecated API features detected! These feature(s) are not compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to updating applications, ensure requirements files are pinned to "sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
table = Table(
(False, 'test case', '[]', datetime.datetime(2022, 7, 18, 16, 59, 13, tzinfo=<UTC>), 'test to', None, None, 'question', '{\n "channel": "web",\n "source": {\n "from": {},\n "rel": null,\n "to": {}\n }\n}', True, datetime.datetime(2022, 7, 18, 18, 1, 37, tzinfo=<UTC>), None, '[]', None, 134, None, 1658167297, 'test case', None, '[]', False, '{\n "score": "offered"\n}', 360786799676, 'low', '[]', 'https://d3v-airbyte.zendesk.com/api/v2/tickets/134.json', '[]', 360000358316, 360000084116, '[]', None, '[]', 360033549136, True, None, False, 'new', 360786799676, 'abd39a87-b1f9-4390-bf8b-cf3c288b1f74', datetime.datetime(2023, 6, 9, 0, 25, 23, 501000, tzinfo=pytz.FixedOffset(-420)), datetime.datetime(2023, 6, 9, 0, 38, 20, 440000, tzinfo=<UTC>), '6577ef036668746df889983970579a55', '02522a2b2726fb0a03bb19f2d8d9524d')
RMKeyView(['from_messaging_channel', 'subject', 'email_cc_ids', 'created_at', 'description', 'custom_status_id', 'external_id', 'type', 'via', 'allow_attachments', 'updated_at', 'problem_id', 'follower_ids', 'due_at', 'id', 'assignee_id', 'generated_timestamp', 'raw_subject', 'forum_topic_id', 'custom_fields', 'allow_channelback', 'satisfaction_rating', 'submitter_id', 'priority', 'collaborator_ids', 'url', 'tags', 'brand_id', 'ticket_form_id', 'sharing_agreement_ids', 'group_id', 'followup_ids', 'organization_id', 'is_public', 'recipient', 'has_incidents', 'status', 'requester_id', '_airbyte_ab_id', '_airbyte_emitted_at', '_airbyte_normalized_at', '_airbyte_zendesk_tickets_hashid', '_airbyte_unique_key'])
### Define SQL DB
Once we have defined the SQLDatabase, we can wrap it in a query engine to query it.
If we know what tables we want to use, we can use `NLSQLTableQueryEngine`.
This will generate a SQL query on the specified tables.
```python
from llama_index import SQLDatabase
# You can specify table filters during engine creation.
# sql_database = SQLDatabase(engine, include_tables=["github_issues","github_comments", "github_users"])
sql_database = SQLDatabase(engine)
```
### Synthesize Query
We then show a natural language query, which is translated to a SQL query under the hood with our text-to-SQL prompt.
```python
from llama_index.indices.struct_store.sql_query import NLSQLTableQueryEngine
from IPython.display import Markdown, display
query_engine = NLSQLTableQueryEngine(
sql_database=sql_database,
tables=["github_issues", "github_comments", "github_users"],
)
query_str = "Which issues have the most comments? Give the top 10 and use a join on url."
response = query_engine.query(query_str)
display(Markdown(f"<b>{response}</b>"))
```
<b> The top 10 issues with the most comments, based on a join on url, are 'Proof of concept parallel source stream reading implementation for MySQL', 'Remove noisy logging for `LegacyStateManager`', 'Track stream status in source', 'Source Google Analytics v4: - add pk and lookback window', 'Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', '📝 Update outdated docs urls in metadata files', 'Fix emitted intermediate state for initial incremental non-CDC syncs', 'source-postgres : Add logic to handle xmin wraparound', ':bug: Source HubSpot: fix cast string as boolean using string comparison', and 'Fix db-lib JdbcUtils.java to accept JDBC parameters with = sign.'.</b>
```python
# You can also get only the SQL query result.
query_engine = NLSQLTableQueryEngine(
sql_database=sql_database,
synthesize_response=False,
tables=["github_issues", "github_comments", "github_users"],
)
response = query_engine.query(query_str)
display(Markdown(f"<b>{response}</b>"))
```
<b>[('Proof of concept parallel source stream reading implementation for MySQL', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 104), ('Remove noisy logging for `LegacyStateManager`', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 39), ('Track stream status in source', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 35), ('Source Google Analytics v4: - add pk and lookback window', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 29), ('Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 28), ('📝 Update outdated docs urls in metadata files', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 26), ('Fix emitted intermediate state for initial incremental non-CDC syncs', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 25), ('source-postgres : Add logic to handle xmin wraparound', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 24), (':bug: Source HubSpot: fix cast string as boolean using string comparison', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 24), ('Fix db-lib JdbcUtils.java to accept JDBC parameters with = sign.', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 22)]</b>
```python
# You can also get the original SQL query
sql_query = response.metadata["sql_query"]
display(Markdown(f"<b>{sql_query}</b>"))
```
<b>SELECT gi.title, gi.url, gc.issue_url, COUNT(*) AS comment_count
FROM github_issues gi
JOIN github_comments gc ON gi.url = gc.issue_url
GROUP BY gi.title, gi.url, gc.issue_url
ORDER BY comment_count DESC
LIMIT 10;</b>
We can also use LLM prediction to figure out what tables to use.
We first need to create an ObjectIndex of SQLTableSchema. In this case we only pass in the table names.
The query engine will fetch the relevant table schema at query time.
```python
from llama_index.indices.struct_store.sql_query import (
SQLTableRetrieverQueryEngine,
)
from llama_index.objects import (
SQLTableNodeMapping,
ObjectIndex,
SQLTableSchema,
)
from llama_index import VectorStoreIndex
table_node_mapping = SQLTableNodeMapping(sql_database)
all_table_names = sql_database.get_usable_table_names()
table_schema_objs = []
for table_name in all_table_names:
table_schema_objs.append(SQLTableSchema(table_name=table_name))
obj_index = ObjectIndex.from_objects(
table_schema_objs,
table_node_mapping,
VectorStoreIndex,
)
table_retriever_query_engine = SQLTableRetrieverQueryEngine(
sql_database, obj_index.as_retriever(similarity_top_k=1)
)
response = table_retriever_query_engine.query(query_str)
display(Markdown(f"<b>{response}</b>"))
sql_query = response.metadata["sql_query"]
display(Markdown(f"<b>{sql_query}</b>"))
```
/Users/hongyishi/Documents/GitHub/gpt_index/.venv/lib/python3.11/site-packages/langchain/sql_database.py:279: UserWarning: This method is deprecated - please use `get_usable_table_names`.
warnings.warn(
<b>[('Proof of concept parallel source stream reading implementation for MySQL', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 'https://api.github.com/repos/airbytehq/airbyte/issues/26580', 104), ('Remove noisy logging for `LegacyStateManager`', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 'https://api.github.com/repos/airbytehq/airbyte/issues/27335', 39), ('Track stream status in source', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 'https://api.github.com/repos/airbytehq/airbyte/issues/24971', 35), ('Source Google Analytics v4: - add pk and lookback window', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 'https://api.github.com/repos/airbytehq/airbyte/issues/26283', 29), ('Connector Health: Fixed SAT for marketo, close, chargebee, facebook marketing, paystack, hubspot, pipedrive and marketo', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 'https://api.github.com/repos/airbytehq/airbyte/issues/24802', 28), ('📝 Update outdated docs urls in metadata files', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 'https://api.github.com/repos/airbytehq/airbyte/issues/27420', 26), ('Fix emitted intermediate state for initial incremental non-CDC syncs', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 'https://api.github.com/repos/airbytehq/airbyte/issues/24820', 25), ('source-postgres : Add logic to handle xmin wraparound', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 'https://api.github.com/repos/airbytehq/airbyte/issues/27384', 24), (':bug: Source HubSpot: fix cast string as boolean using string comparison', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 'https://api.github.com/repos/airbytehq/airbyte/issues/26082', 24), ('Fix db-lib JdbcUtils.java to accept JDBC parameters with = sign.', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 'https://api.github.com/repos/airbytehq/airbyte/issues/25386', 22)]</b>
<b>SELECT gi.title, gi.url, gc.issue_url, COUNT(*) AS comment_count
FROM github_issues gi
JOIN github_comments gc ON gi.url = gc.issue_url
GROUP BY gi.title, gi.url, gc.issue_url
ORDER BY comment_count DESC
LIMIT 10;</b>

# Structured Data
# A Guide to LlamaIndex + Structured Data
A lot of modern data systems depend on structured data, such as a Postgres DB or a Snowflake data warehouse.
LlamaIndex provides many advanced features, powered by LLMs, to both create structured data from
unstructured data and analyze this structured data through augmented text-to-SQL capabilities.
**NOTE:** Any Text-to-SQL application should be aware that executing
arbitrary SQL queries can be a security risk. It is recommended to
take precautions as needed, such as using restricted roles, read-only
databases, sandboxing, etc.
This guide helps walk through each of these capabilities. Specifically, we cover the following topics:
- **Setup**: Defining our example SQL table.
- **Building our Table Index**: How to go from a SQL database to a Table Schema Index
- **Using natural language SQL queries**: How to query our SQL database using natural language.
We will walk through a toy example table which contains city/population/country information.
A notebook for this tutorial is [available here](../../examples/index_structs/struct_indices/SQLIndexDemo.ipynb).
## Setup
First, we use SQLAlchemy to set up a simple sqlite db:
```python
from sqlalchemy import (
create_engine,
MetaData,
Table,
Column,
String,
Integer,
select,
column,
)
engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()
```
We then create a toy `city_stats` table:
```python
# create city SQL table
table_name = "city_stats"
city_stats_table = Table(
table_name,
metadata_obj,
Column("city_name", String(16), primary_key=True),
Column("population", Integer),
Column("country", String(16), nullable=False),
)
metadata_obj.create_all(engine)
```
Now it's time to insert some datapoints!
If you want to look into filling into this table by inferring structured datapoints
from unstructured data, take a look at the below section. Otherwise, you can choose
to directly populate this table:
```python
from sqlalchemy import insert
rows = [
{"city_name": "Toronto", "population": 2731571, "country": "Canada"},
{"city_name": "Tokyo", "population": 13929286, "country": "Japan"},
{"city_name": "Berlin", "population": 600000, "country": "Germany"},
]
for row in rows:
stmt = insert(city_stats_table).values(**row)
with engine.begin() as connection:
cursor = connection.execute(stmt)
```
Finally, we can wrap the SQLAlchemy engine with our SQLDatabase wrapper;
this allows the db to be used within LlamaIndex:
```python
from llama_index.core import SQLDatabase
sql_database = SQLDatabase(engine, include_tables=["city_stats"])
```
## Natural language SQL
Once we have constructed our SQL database, we can use the NLSQLTableQueryEngine to
construct natural language queries that are synthesized into SQL queries.
Note that we need to specify the tables we want to use with this query engine.
If we don't, the query engine will pull all the schema context, which could
overflow the context window of the LLM.
```python
from llama_index.core.query_engine import NLSQLTableQueryEngine
query_engine = NLSQLTableQueryEngine(
sql_database=sql_database,
tables=["city_stats"],
)
query_str = "Which city has the highest population?"
response = query_engine.query(query_str)
```
This query engine should be used in any case where you can specify the tables you want
to query over beforehand, or where the total size of all the table schemas plus the rest of
the prompt fits within your context window.
## Building our Table Index
If we don't know ahead of time which table we would like to use, and the total size of
the table schemas overflows the context window, we should store the table schemas
in an index so that we can retrieve the right schema at query time.
The way we can do this is using the SQLTableNodeMapping object, which takes in a
SQLDatabase and produces a Node object for each SQLTableSchema object passed
into the ObjectIndex constructor.
```python
from llama_index.core import VectorStoreIndex
from llama_index.core.objects import (
    SQLTableNodeMapping,
    ObjectIndex,
    SQLTableSchema,
)
table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = [
(SQLTableSchema(table_name="city_stats")),
...,
] # one SQLTableSchema for each table
obj_index = ObjectIndex.from_objects(
table_schema_objs,
table_node_mapping,
VectorStoreIndex,
)
```
Here you can see we define our table_node_mapping, and a single SQLTableSchema with the
"city_stats" table name. We pass these into the ObjectIndex constructor, along with the
VectorStoreIndex class definition we want to use. This will give us a VectorStoreIndex where
each Node contains table schema and other context information. You can also add any additional
context information you'd like.
```python
# manually set extra context text
city_stats_text = (
"This table gives information regarding the population and country of a given city.\n"
"The user will query with codewords, where 'foo' corresponds to population and 'bar'"
"corresponds to city."
)
table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = [
(SQLTableSchema(table_name="city_stats", context_str=city_stats_text))
]
```
## Using natural language SQL queries
Once we have defined our table schema index obj_index, we can construct a SQLTableRetrieverQueryEngine
by passing in our SQLDatabase, and a retriever constructed from our object index.
```python
from llama_index.core.indices.struct_store import SQLTableRetrieverQueryEngine
query_engine = SQLTableRetrieverQueryEngine(
sql_database, obj_index.as_retriever(similarity_top_k=1)
)
response = query_engine.query("Which city has the highest population?")
print(response)
```
Now when we query the retriever query engine, it will retrieve the relevant table schema
and synthesize a SQL query and a response from the results of that query.
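The generated SQL is attached to the response metadata, which is handy for debugging text-to-SQL behavior:
```python
# inspect the SQL that was generated for the natural language question
print(response.metadata["sql_query"])
```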
## Concluding Thoughts
This is it for now! We're constantly looking for ways to improve our structured data support.
If you have any questions let us know in [our Discord](https://discord.gg/dGcwcsnxhU).
Relevant Resources:
- [Airbyte SQL Index Guide](./structured_data/Airbyte_demo.ipynb)

# How to Build a Chatbot
LlamaIndex serves as a bridge between your data and Large Language Models (LLMs), providing a toolkit that enables you to establish a query interface around your data for a variety of tasks, such as question-answering and summarization.
In this tutorial, we'll walk you through building a context-augmented chatbot using a [Data Agent](https://gpt-index.readthedocs.io/en/stable/core_modules/agent_modules/agents/root.html). This agent, powered by LLMs, is capable of intelligently executing tasks over your data. The end result is a chatbot agent equipped with a robust set of data interface tools provided by LlamaIndex to answer queries about your data.
**Note**: This tutorial builds upon initial work on creating a query interface over SEC 10-K filings - [check it out here](https://medium.com/@jerryjliu98/how-unstructured-and-llamaindex-can-help-bring-the-power-of-llms-to-your-own-data-3657d063e30d).
### Context
In this guide, we’ll build a "10-K Chatbot" that uses raw UBER 10-K HTML filings from Dropbox. Users can interact with the chatbot to ask questions related to the 10-K filings.
### Preparation
```python
import os
import openai
os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]
import nest_asyncio
nest_asyncio.apply()
```
### Ingest Data
Let's first download the raw 10-K files from 2019-2022.
```
# NOTE: the code examples assume you're operating within a Jupyter notebook.
# download files
!mkdir data
!wget "https://www.dropbox.com/s/948jr9cfs7fgj99/UBER.zip?dl=1" -O data/UBER.zip
!unzip data/UBER.zip -d data
```
To parse the HTML files into formatted text, we use the [Unstructured](https://github.com/Unstructured-IO/unstructured) library. Thanks to [LlamaHub](https://llamahub.ai/), we can directly integrate with Unstructured, allowing conversion of any text into a Document format that LlamaIndex can ingest.
First we install the necessary packages:
```
!pip install llama-hub unstructured
```
Then we can use the `UnstructuredReader` to parse the HTML files into a list of `Document` objects.
```python
from llama_index.readers.file import UnstructuredReader
from pathlib import Path
years = [2022, 2021, 2020, 2019]
loader = UnstructuredReader()
doc_set = {}
all_docs = []
for year in years:
year_docs = loader.load_data(
file=Path(f"./data/UBER/UBER_{year}.html"), split_documents=False
)
    # insert year metadata into each document
for d in year_docs:
d.metadata = {"year": year}
doc_set[year] = year_docs
all_docs.extend(year_docs)
```
### Setting up Vector Indices for each year
We first setup a vector index for each year. Each vector index allows us
to ask questions about the 10-K filing of a given year.
We build each index and save it to disk.
```python
# initialize simple vector indices
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.core import Settings
Settings.chunk_size = 512
index_set = {}
for year in years:
storage_context = StorageContext.from_defaults()
cur_index = VectorStoreIndex.from_documents(
doc_set[year],
storage_context=storage_context,
)
index_set[year] = cur_index
storage_context.persist(persist_dir=f"./storage/{year}")
```
To load an index from disk, do the following:
```python
# Load indices from disk
from llama_index.core import load_index_from_storage
index_set = {}
for year in years:
storage_context = StorageContext.from_defaults(
persist_dir=f"./storage/{year}"
)
cur_index = load_index_from_storage(
storage_context,
)
index_set[year] = cur_index
```
### Setting up a Sub Question Query Engine to Synthesize Answers Across 10-K Filings
Since we have access to documents of 4 years, we may not only want to ask questions regarding the 10-K document of a given year, but ask questions that require analysis over all 10-K filings.
To address this, we can use a [Sub Question Query Engine](https://gpt-index.readthedocs.io/en/stable/examples/query_engine/sub_question_query_engine.html). It decomposes a query into subqueries, each answered by an individual vector index, and synthesizes the results to answer the overall query.
LlamaIndex provides some wrappers around indices (and query engines) so that they can be used by query engines and agents. First we define a `QueryEngineTool` for each vector index.
Each tool has a name and a description; these are what the LLM agent sees to decide which tool to choose.
```python
from llama_index.core.tools import QueryEngineTool, ToolMetadata
individual_query_engine_tools = [
QueryEngineTool(
query_engine=index_set[year].as_query_engine(),
metadata=ToolMetadata(
name=f"vector_index_{year}",
description=f"useful for when you want to answer queries about the {year} SEC 10-K for Uber",
),
)
for year in years
]
```
Now we can create the Sub Question Query Engine, which will allow us to synthesize answers across the 10-K filings. We pass in the `individual_query_engine_tools` we defined above, as well as an `llm` that will be used to run the subqueries.
```python
from llama_index.llms.openai import OpenAI
from llama_index.core.query_engine import SubQuestionQueryEngine
query_engine = SubQuestionQueryEngine.from_defaults(
query_engine_tools=individual_query_engine_tools,
llm=OpenAI(model="gpt-3.5-turbo"),
)
```
### Setting up the Chatbot Agent
We use a LlamaIndex Data Agent to set up the outer chatbot agent, which has access to a set of Tools. Specifically, we will use an OpenAIAgent, which takes advantage of OpenAI's function calling API. We want to use the separate Tools we defined previously for each index (corresponding to a given year), as well as a tool for the sub question query engine we defined above.
First we define a `QueryEngineTool` for the sub question query engine:
```python
query_engine_tool = QueryEngineTool(
query_engine=query_engine,
metadata=ToolMetadata(
name="sub_question_query_engine",
description="useful for when you want to answer queries that require analyzing multiple SEC 10-K documents for Uber",
),
)
```
Then, we combine the Tools we defined above into a single list of tools for the agent:
```python
tools = individual_query_engine_tools + [query_engine_tool]
```
Finally, we call `OpenAIAgent.from_tools` to create the agent, passing in the list of tools we defined above.
```python
from llama_index.agent.openai import OpenAIAgent
agent = OpenAIAgent.from_tools(tools, verbose=True)
```
### Testing the Agent
We can now test the agent with various queries.
If we test it with a simple "hello" query, the agent does not use any Tools.
```python
response = agent.chat("hi, i am bob")
print(str(response))
```
```
Hello Bob! How can I assist you today?
```
If we test it with a query regarding the 10-k of a given year, the agent will use
the relevant vector index Tool.
```python
response = agent.chat(
"What were some of the biggest risk factors in 2020 for Uber?"
)
print(str(response))
```
```
=== Calling Function ===
Calling function: vector_index_2020 with args: {
"input": "biggest risk factors"
}
Got output: The biggest risk factors mentioned in the context are:
1. The adverse impact of the COVID-19 pandemic and actions taken to mitigate it on the business.
2. The potential reclassification of drivers as employees, workers, or quasi-employees instead of independent contractors.
3. Intense competition in the mobility, delivery, and logistics industries, with low-cost alternatives and well-capitalized competitors.
4. The need to lower fares or service fees and offer driver incentives and consumer discounts to remain competitive.
5. Significant losses incurred and the uncertainty of achieving profitability.
6. The risk of not attracting or maintaining a critical mass of platform users.
7. Operational, compliance, and cultural challenges related to the workplace culture and forward-leaning approach.
8. The potential negative impact of international investments and the challenges of conducting business in foreign countries.
9. Risks associated with operational and compliance challenges, localization, laws and regulations, competition, social acceptance, technological compatibility, improper business practices, liability uncertainty, managing international operations, currency fluctuations, cash transactions, tax consequences, and payment fraud.
========================
Some of the biggest risk factors for Uber in 2020 were:
1. The adverse impact of the COVID-19 pandemic and actions taken to mitigate it on the business.
2. The potential reclassification of drivers as employees, workers, or quasi-employees instead of independent contractors.
3. Intense competition in the mobility, delivery, and logistics industries, with low-cost alternatives and well-capitalized competitors.
4. The need to lower fares or service fees and offer driver incentives and consumer discounts to remain competitive.
5. Significant losses incurred and the uncertainty of achieving profitability.
6. The risk of not attracting or maintaining a critical mass of platform users.
7. Operational, compliance, and cultural challenges related to the workplace culture and forward-leaning approach.
8. The potential negative impact of international investments and the challenges of conducting business in foreign countries.
9. Risks associated with operational and compliance challenges, localization, laws and regulations, competition, social acceptance, technological compatibility, improper business practices, liability uncertainty, managing international operations, currency fluctuations, cash transactions, tax consequences, and payment fraud.
These risk factors highlight the challenges and uncertainties that Uber faced in 2020.
```
Finally, if we test it with a query to compare/contrast risk factors across years,
the agent will use the Sub Question Query Engine Tool.
```python
cross_query_str = "Compare/contrast the risk factors described in the Uber 10-K across years. Give answer in bullet points."
response = agent.chat(cross_query_str)
print(str(response))
```
```
=== Calling Function ===
Calling function: sub_question_query_engine with args: {
"input": "Compare/contrast the risk factors described in the Uber 10-K across years"
}
Generated 4 sub questions.
[vector_index_2022] Q: What are the risk factors described in the 2022 SEC 10-K for Uber?
[vector_index_2021] Q: What are the risk factors described in the 2021 SEC 10-K for Uber?
[vector_index_2020] Q: What are the risk factors described in the 2020 SEC 10-K for Uber?
[vector_index_2019] Q: What are the risk factors described in the 2019 SEC 10-K for Uber?
[vector_index_2021] A: The risk factors described in the 2021 SEC 10-K for Uber include the adverse impact of the COVID-19 pandemic on their business, the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses incurred by the company, the importance of attracting and maintaining a critical mass of platform users, and the ongoing legal challenges regarding driver classification.
[vector_index_2020] A: The risk factors described in the 2020 SEC 10-K for Uber include the adverse impact of the COVID-19 pandemic on their business, the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses and the uncertainty of achieving profitability, the importance of attracting and retaining a critical mass of drivers and users, and the challenges associated with their workplace culture and operational compliance.
[vector_index_2022] A: The risk factors described in the 2022 SEC 10-K for Uber include the potential adverse effect on their business if drivers were classified as employees instead of independent contractors, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive in certain markets, the company's history of significant losses and the expectation of increased operating expenses in the future, and the potential impact on their platform if they are unable to attract or maintain a critical mass of drivers, consumers, merchants, shippers, and carriers.
[vector_index_2019] A: The risk factors described in the 2019 SEC 10-K for Uber include the loss of their license to operate in London, the complexity of their business and operating model due to regulatory uncertainties, the potential for additional regulations for their other products in the Other Bets segment, the evolving laws and regulations regarding the development and deployment of autonomous vehicles, and the increasing number of data protection and privacy laws around the world. Additionally, there are legal proceedings, litigation, claims, and government investigations that Uber is involved in, which could impose a burden on management and employees and come with defense costs or unfavorable rulings.
Got output: The risk factors described in the Uber 10-K reports across the years include the potential reclassification of drivers as employees instead of independent contractors, intense competition in the mobility, delivery, and logistics industries, the need to lower fares and offer incentives to remain competitive, significant losses incurred by the company, the importance of attracting and maintaining a critical mass of platform users, and the ongoing legal challenges regarding driver classification. Additionally, there are specific risk factors mentioned in each year's report, such as the adverse impact of the COVID-19 pandemic in 2020 and 2021, the loss of their license to operate in London in 2019, and the evolving laws and regulations regarding autonomous vehicles in 2019. Overall, while there are some similarities in the risk factors mentioned, there are also specific factors that vary across the years.
========================
=== Calling Function ===
Calling function: vector_index_2022 with args: {
"input": "risk factors"
}
Got output: Some of the risk factors mentioned in the context include the potential adverse effect on the business if drivers were classified as employees instead of independent contractors, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive, the company's history of significant losses and the expectation of increased operating expenses, the impact of future pandemics or disease outbreaks on the business and financial results, and the potential harm to the business due to economic conditions and their effect on discretionary consumer spending.
========================
=== Calling Function ===
Calling function: vector_index_2021 with args: {
"input": "risk factors"
}
Got output: The COVID-19 pandemic and the impact of actions to mitigate the pandemic have adversely affected and may continue to adversely affect parts of our business. Our business would be adversely affected if Drivers were classified as employees, workers or quasi-employees instead of independent contractors. The mobility, delivery, and logistics industries are highly competitive, with well-established and low-cost alternatives that have been available for decades, low barriers to entry, low switching costs, and well-capitalized competitors in nearly every major geographic region. To remain competitive in certain markets, we have in the past lowered, and may continue to lower, fares or service fees, and we have in the past offered, and may continue to offer, significant Driver incentives and consumer discounts and promotions. We have incurred significant losses since inception, including in the United States and other major markets. We expect our operating expenses to increase significantly in the foreseeable future, and we may not achieve or maintain profitability. If we are unable to attract or maintain a critical mass of Drivers, consumers, merchants, shippers, and carriers, whether as a result of competition or other factors, our platform will become less appealing to platform users.
========================
=== Calling Function ===
Calling function: vector_index_2020 with args: {
"input": "risk factors"
}
Got output: The risk factors mentioned in the context include the adverse impact of the COVID-19 pandemic on the business, the potential reclassification of drivers as employees, the highly competitive nature of the mobility, delivery, and logistics industries, the need to lower fares or service fees to remain competitive, the company's history of significant losses and potential future expenses, the importance of attracting and maintaining a critical mass of platform users, and the operational and cultural challenges faced by the company.
========================
=== Calling Function ===
Calling function: vector_index_2019 with args: {
"input": "risk factors"
}
Got output: The risk factors mentioned in the context include competition with local companies, differing levels of social acceptance, technological compatibility issues, exposure to improper business practices, legal uncertainty, difficulties in managing international operations, fluctuations in currency exchange rates, regulations governing local currencies, tax consequences, financial accounting burdens, difficulties in implementing financial systems, import and export restrictions, political and economic instability, public health concerns, reduced protection for intellectual property rights, limited influence over minority-owned affiliates, and regulatory complexities. These risk factors could adversely affect the international operations, business, financial condition, and operating results of the company.
========================
Here is a comparison of the risk factors described in the Uber 10-K reports across years:
2022 Risk Factors:
- Potential adverse effect if drivers were classified as employees instead of independent contractors.
- Highly competitive nature of the mobility, delivery, and logistics industries.
- Need to lower fares or service fees to remain competitive.
- History of significant losses and expectation of increased operating expenses.
- Impact of future pandemics or disease outbreaks on the business and financial results.
- Potential harm to the business due to economic conditions and their effect on discretionary consumer spending.
2021 Risk Factors:
- Adverse impact of the COVID-19 pandemic and actions to mitigate it on the business.
- Potential reclassification of drivers as employees instead of independent contractors.
- Highly competitive nature of the mobility, delivery, and logistics industries.
- Need to lower fares or service fees and offer incentives to remain competitive.
- History of significant losses and uncertainty of achieving profitability.
- Importance of attracting and maintaining a critical mass of platform users.
2020 Risk Factors:
- Adverse impact of the COVID-19 pandemic on the business.
- Potential reclassification of drivers as employees.
- Highly competitive nature of the mobility, delivery, and logistics industries.
- Need to lower fares or service fees to remain competitive.
- History of significant losses and potential future expenses.
- Importance of attracting and maintaining a critical mass of platform users.
- Operational and cultural challenges faced by the company.
2019 Risk Factors:
- Competition with local companies.
- Differing levels of social acceptance.
- Technological compatibility issues.
- Exposure to improper business practices.
- Legal uncertainty.
- Difficulties in managing international operations.
- Fluctuations in currency exchange rates.
- Regulations governing local currencies.
- Tax consequences.
- Financial accounting burdens.
- Difficulties in implementing financial systems.
- Import and export restrictions.
- Political and economic instability.
- Public health concerns.
- Reduced protection for intellectual property rights.
- Limited influence over minority-owned affiliates.
- Regulatory complexities.
These comparisons highlight both common and unique risk factors that Uber faced in different years.
```
### Setting up the Chatbot Loop
Now that we have the chatbot setup, it only takes a few more steps to setup a basic interactive loop to chat with our SEC-augmented chatbot!
```python
agent = OpenAIAgent.from_tools(tools) # verbose=False by default
while True:
text_input = input("User: ")
if text_input == "exit":
break
response = agent.chat(text_input)
print(f"Agent: {response}")
```
Here's an example of the loop in action:
```
User: What were some of the legal proceedings against Uber in 2022?
Agent: In 2022, Uber faced several legal proceedings. Some of the notable ones include:
1. Petition against Proposition 22: A petition was filed in California alleging that Proposition 22, which classifies app-based drivers as independent contractors, is unconstitutional.
2. Lawsuit by Massachusetts Attorney General: The Massachusetts Attorney General filed a lawsuit against Uber, claiming that drivers should be classified as employees and entitled to protections under wage and labor laws.
3. Allegations by New York Attorney General: The New York Attorney General made allegations against Uber regarding the misclassification of drivers and related employment violations.
4. Swiss social security rulings: Swiss social security rulings classified Uber drivers as employees, which could have implications for Uber's operations in Switzerland.
5. Class action lawsuits in Australia: Uber faced class action lawsuits in Australia, with allegations that the company conspired to harm participants in the taxi, hire-car, and limousine industries.
It's important to note that the outcomes of these legal proceedings are uncertain and may vary.
User:
```
### Notebook
Take a look at our [corresponding notebook](../../../examples/agent/Chatbot_SEC.ipynb).

# A Guide to Building a Full-Stack Web App with LlamaIndex
LlamaIndex is a python library, which means that integrating it with a full-stack web application will be a little different than what you might be used to.
This guide seeks to walk through the steps needed to create a basic API service written in python, and how this interacts with a TypeScript+React frontend.
All code examples here are available from the [llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react) in the flask_react folder.
The main technologies used in this guide are as follows:
- python3.11
- llama_index
- flask
- typescript
- react
## Flask Backend
For this guide, our backend will use a [Flask](https://flask.palletsprojects.com/en/2.2.x/) API server to communicate with our frontend code. If you prefer, you can also easily translate this to a [FastAPI](https://fastapi.tiangolo.com/) server, or any other python server library of your choice.
Setting up a server using Flask is easy. You import the package, create the app object, and then create your endpoints. Let's create a basic skeleton for the server first:
```python
from flask import Flask
app = Flask(__name__)
@app.route("/")
def home():
return "Hello World!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5601)
```
_flask_demo.py_
If you run this file (`python flask_demo.py`), it will launch a server on port 5601. If you visit `http://localhost:5601/`, you will see the "Hello World!" text rendered in your browser. Nice!
The next step is deciding what functions we want to include in our server, and to start using LlamaIndex.
To keep things simple, the most basic operation we can provide is querying an existing index. Using the [paul graham essay](https://github.com/jerryjliu/llama_index/blob/main/examples/paul_graham_essay/data/paul_graham_essay.txt) from LlamaIndex, create a documents folder and download+place the essay text file inside of it.
### Basic Flask - Handling User Index Queries
Now, let's write some code to initialize our index:
```python
import os
from llama_index.core import (
SimpleDirectoryReader,
VectorStoreIndex,
StorageContext,
load_index_from_storage,
)
# NOTE: for local testing only, do NOT deploy with your key hardcoded
os.environ["OPENAI_API_KEY"] = "your key here"
index = None
def initialize_index():
    global index
    index_dir = "./.index"
    if os.path.exists(index_dir):
        # load the previously persisted index from disk
        storage_context = StorageContext.from_defaults(persist_dir=index_dir)
        index = load_index_from_storage(storage_context)
    else:
        # build the index from the documents folder and persist it for next time
        documents = SimpleDirectoryReader("./documents").load_data()
        index = VectorStoreIndex.from_documents(documents)
        index.storage_context.persist(index_dir)
```
This function will initialize our index. If we call this just before starting the flask server in the `main` function, then our index will be ready for user queries!
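For example, the bottom of `flask_demo.py` might now look like this (a minimal sketch):
```python
if __name__ == "__main__":
    # build or load the index before the server starts accepting requests
    initialize_index()
    app.run(host="0.0.0.0", port=5601)
```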
Our query endpoint will accept `GET` requests with the query text as a parameter. Here's what the full endpoint function will look like:
```python
from flask import request
@app.route("/query", methods=["GET"])
def query_index():
global index
query_text = request.args.get("text", None)
if query_text is None:
return (
"No text found, please include a ?text=blah parameter in the URL",
400,
)
query_engine = index.as_query_engine()
response = query_engine.query(query_text)
return str(response), 200
```
Now, we've introduced a few new concepts to our server:
- a new `/query` endpoint, defined by the function decorator
- a new import from flask, `request`, which is used to get parameters from the request
- if the `text` parameter is missing, then we return an error message and an appropriate HTTP response code
- otherwise, we query the index, and return the response as a string
A full query example that you can test in your browser might look something like this: `http://localhost:5601/query?text=what did the author do growing up` (once you press enter, the browser will convert the spaces into "%20" characters).
Things are looking pretty good! We now have a functional API. Using your own documents, you can easily provide an interface for any application to call the flask API and get answers to queries.
### Advanced Flask - Handling User Document Uploads
Things are looking pretty cool, but how can we take this a step further? What if we want to allow users to build their own indexes by uploading their own documents? Have no fear, Flask can handle it all :muscle:.
To let users upload documents, we have to take some extra precautions. Instead of querying an existing index, the index will become **mutable**. If many users are adding to the same index, we need to think about how to handle concurrency. Our Flask server is threaded, which means multiple users can ping the server with requests that will be handled at the same time.
One option might be to create an index for each user or group, and store and fetch things from S3. But for this example, we will assume there is one locally stored index that users are interacting with.
To handle concurrent uploads and ensure sequential inserts into the index, we can use python's built-in `BaseManager` (from the `multiprocessing.managers` module) to provide sequential access to the index using a separate server and locks. This sounds scary, but it's not so bad! We will just move all our index operations (initializing, querying, inserting) into the `BaseManager` "index_server", which will be called from our Flask server.
Here's a basic example of what our `index_server.py` will look like after we've moved our code:
```python
import os
from multiprocessing import Lock
from multiprocessing.managers import BaseManager
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Document
# NOTE: for local testing only, do NOT deploy with your key hardcoded
os.environ["OPENAI_API_KEY"] = "your key here"
index = None
lock = Lock()
def initialize_index():
global index
with lock:
# same as before ...
pass
def query_index(query_text):
global index
query_engine = index.as_query_engine()
response = query_engine.query(query_text)
return str(response)
if __name__ == "__main__":
# init the global index
print("initializing index...")
initialize_index()
# setup server
# NOTE: you might want to handle the password in a less hardcoded way
manager = BaseManager(("", 5602), b"password")
manager.register("query_index", query_index)
server = manager.get_server()
print("starting server...")
server.serve_forever()
```
_index_server.py_
So, we've moved our functions, introduced the `Lock` object which ensures sequential access to the global index, registered our single function in the server, and started the server on port 5602 with the password `password`.
Then, we can adjust our flask code as follows:
```python
from multiprocessing.managers import BaseManager
from flask import Flask, request
# initialize manager connection
# NOTE: you might want to handle the password in a less hardcoded way
manager = BaseManager(("", 5602), b"password")
manager.register("query_index")
manager.connect()
@app.route("/query", methods=["GET"])
def query_index():
global index
query_text = request.args.get("text", None)
if query_text is None:
return (
"No text found, please include a ?text=blah parameter in the URL",
400,
)
response = manager.query_index(query_text)._getvalue()
return str(response), 200
@app.route("/")
def home():
return "Hello World!"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5601)
```
_flask_demo.py_
The two main changes are connecting to our existing `BaseManager` server and registering the functions, as well as calling the function through the manager in the `/query` endpoint.
One special thing to note is that `BaseManager` servers don't return objects quite as we expect. To resolve the return value into its original object, we call the `_getvalue()` function.
If we allow users to upload their own documents, we should probably remove the Paul Graham essay from the documents folder, so let's do that first. Then, let's add an endpoint to upload files! First, let's define our Flask endpoint function:
```python
...
# secure_filename (used below) sanitizes the uploaded file's name
from werkzeug.utils import secure_filename

manager.register("insert_into_index")
...
@app.route("/uploadFile", methods=["POST"])
def upload_file():
global manager
if "file" not in request.files:
return "Please send a POST request with a file", 400
filepath = None
try:
uploaded_file = request.files["file"]
filename = secure_filename(uploaded_file.filename)
filepath = os.path.join("documents", os.path.basename(filename))
uploaded_file.save(filepath)
if request.form.get("filename_as_doc_id", None) is not None:
manager.insert_into_index(filepath, doc_id=filename)
else:
manager.insert_into_index(filepath)
except Exception as e:
# cleanup temp file
if filepath is not None and os.path.exists(filepath):
os.remove(filepath)
return "Error: {}".format(str(e)), 500
# cleanup temp file
if filepath is not None and os.path.exists(filepath):
os.remove(filepath)
return "File inserted!", 200
```
Not too bad! You will notice that we write the file to disk. We could skip this if we only accept basic file formats like `txt` files, but by writing it to disk we can take advantage of LlamaIndex's `SimpleDirectoryReader` to take care of a bunch of more complex file formats. Optionally, we also use a second `POST` argument to either use the filename as a doc_id or let LlamaIndex generate one for us. This will make more sense once we implement the frontend.
With these more complicated requests, I also suggest using a tool like [Postman](https://www.postman.com/downloads/?utm_source=postman-home). Examples of using postman to test our endpoints are in the [repository for this project](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react/postman_examples).
Lastly, you'll notice we added a new function to the manager. Let's implement that inside `index_server.py`:
```python
def insert_into_index(filepath, doc_id=None):
    global index
    document = SimpleDirectoryReader(input_files=[filepath]).load_data()[0]
if doc_id is not None:
document.doc_id = doc_id
with lock:
index.insert(document)
index.storage_context.persist()
...
manager.register("insert_into_index", insert_into_index)
...
```
Easy! If we launch both the `index_server.py` and then the `flask_demo.py` python files, we have a Flask API server that can handle multiple requests to insert documents into a vector index and respond to user queries!
To support some functionality in the frontend, I've adjusted what some responses look like from the Flask API, as well as added some functionality to keep track of which documents are stored in the index (LlamaIndex doesn't currently support this in a user-friendly way, but we can augment it ourselves!). Lastly, I had to add CORS support to the server using the `Flask-Cors` python package.
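For illustration, the document-tracking endpoint might look roughly like the following sketch; `get_documents_list` is a hypothetical manager function here (it would be registered on the manager just like `query_index`), so see the repository for the actual implementation:
```python
from flask import jsonify, make_response


@app.route("/getDocuments", methods=["GET"])
def get_documents():
    # hypothetical manager function returning tracked document metadata
    document_list = manager.get_documents_list()._getvalue()
    return make_response(jsonify(document_list)), 200
```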
Check out the complete `flask_demo.py` and `index_server.py` scripts in the [repository](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react) for the final minor changes, the `requirements.txt` file, and a sample `Dockerfile` to help with deployment.
## React Frontend
Generally, React and TypeScript are among the most popular libraries and languages for writing webapps today. This guide will assume you are familiar with how these tools work, because otherwise this guide would triple in length :smile:.
In the [repository](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react), the frontend code is organized inside of the `react_frontend` folder.
The most relevant part of the frontend will be the `src/apis` folder. This is where we make calls to the Flask server, supporting the following queries:
- `/query` -- make a query to the existing index
- `/uploadFile` -- upload a file to the flask server for insertion into the index
- `/getDocuments` -- list the current document titles and a portion of their texts
Using these three queries, we can build a robust frontend that allows users to upload and keep track of their files, query the index, and view the query response and information about which text nodes were used to form the response.
### fetchDocuments.tsx
This file contains the function to, you guessed it, fetch the list of current documents in the index. The code is as follows:
```typescript
export type Document = {
id: string;
text: string;
};
const fetchDocuments = async (): Promise<Document[]> => {
const response = await fetch("http://localhost:5601/getDocuments", {
mode: "cors",
});
if (!response.ok) {
return [];
}
const documentList = (await response.json()) as Document[];
return documentList;
};

export default fetchDocuments;
```
As you can see, we make a query to the Flask server (here, we assume it's running on localhost). Notice that we need to include the `mode: 'cors'` option, as we are making an external request.
Then, we check if the response was ok, and if so, get the response json and return it. Here, the response json is a list of `Document` objects that are defined in the same file.
### queryIndex.tsx
This file sends the user query to the flask server, and gets the response back, as well as details about which nodes in our index provided the response.
```typescript
export type ResponseSources = {
text: string;
doc_id: string;
start: number;
end: number;
similarity: number;
};
export type QueryResponse = {
text: string;
sources: ResponseSources[];
};
const queryIndex = async (query: string): Promise<QueryResponse> => {
  const queryURL = new URL("http://localhost:5601/query");
queryURL.searchParams.append("text", query);
const response = await fetch(queryURL, { mode: "cors" });
if (!response.ok) {
return { text: "Error in query", sources: [] };
}
const queryResponse = (await response.json()) as QueryResponse;
return queryResponse;
};
export default queryIndex;
```
This is similar to the `fetchDocuments.tsx` file, with the main difference being we include the query text as a parameter in the URL. Then, we check if the response is ok and return it with the appropriate typescript type.
### insertDocument.tsx
Probably the most complex API call is uploading a document. The function here accepts a file object and constructs a `POST` request using `FormData`.
The actual response text is not used in the app, but it could be utilized to provide user feedback on whether the file uploaded successfully.
```typescript
const insertDocument = async (file: File) => {
const formData = new FormData();
formData.append("file", file);
formData.append("filename_as_doc_id", "true");
const response = await fetch("http://localhost:5601/uploadFile", {
mode: "cors",
method: "POST",
body: formData,
});
  const responseText = await response.text();
return responseText;
};
export default insertDocument;
```
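To give a flavor of how these helpers come together, here is a hedged sketch of wiring them into simple event handlers; the import paths and handler names are illustrative, not part of the repository:
```typescript
import queryIndex, { QueryResponse } from "./apis/queryIndex";
import insertDocument from "./apis/insertDocument";

// illustrative handlers that a component could call on submit/upload events
const handleQuery = async (queryText: string) => {
  const response: QueryResponse = await queryIndex(queryText);
  console.log(response.text, response.sources);
};

const handleUpload = async (file: File) => {
  const status = await insertDocument(file);
  console.log(status);
};
```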
### All the Other Frontend Good-ness
And that pretty much wraps up the frontend portion! The rest of the react frontend code is some pretty basic react components, and my best attempt to make it look at least a little nice :smile:.
I encourage you to read the rest of the [codebase](https://github.com/logan-markewich/llama_index_starter_pack/tree/main/flask_react/react_frontend) and submit any PRs for improvements!
## Conclusion
This guide has covered a ton of information. We went from a basic "Hello World" Flask server written in python, to a fully functioning LlamaIndex powered backend and how to connect that to a frontend application.
As you can see, we can easily augment and wrap the services provided by LlamaIndex (like the little external document tracker) to help provide a good user experience on the frontend.
You could take this and add many features (multi-index/user support, saving objects into S3, adding a Pinecone vector server, etc.). And when you build an app after reading this, be sure to share the final result in the Discord! Good Luck! :muscle:

# Full-Stack Web Application
LlamaIndex can be integrated into a downstream full-stack web application. It can be used in a backend server (such as Flask), packaged into a Docker container, and/or directly used in a framework such as Streamlit.
We provide tutorials and resources to help you get started in this area:
- [Fullstack Application Guide](./fullstack_app_guide.md) shows you how to build an app with LlamaIndex as an API and a TypeScript+React frontend
- [Fullstack Application with Delphic](./fullstack_with_delphic.md) walks you through using LlamaIndex with a production-ready web app starter template called Delphic.
- The [LlamaIndex Starter Pack](https://github.com/logan-markewich/llama_index_starter_pack) provides very basic flask, streamlit, and docker examples for LlamaIndex. |
# A Guide to Building a Full-Stack LlamaIndex Web App with Delphic
This guide seeks to walk you through using LlamaIndex with a production-ready web app starter template
called [Delphic](https://github.com/JSv4/Delphic). All code examples here are available from
the [Delphic](https://github.com/JSv4/Delphic) repo.
## What We're Building
Here's a quick demo of the out-of-the-box functionality of Delphic:
https://user-images.githubusercontent.com/5049984/233236432-aa4980b6-a510-42f3-887a-81485c9644e6.mp4
## Architectural Overview
Delphic leverages the LlamaIndex Python library to let users create their own document collections, which they can then
query through a responsive frontend.
We chose a stack that provides a responsive, robust mix of technologies that can (1) orchestrate complex python
processing tasks while providing (2) a modern, responsive frontend and (3) a secure backend to build additional
functionality upon.
The core libraries are:
1. [Django](https://www.djangoproject.com/)
2. [Django Channels](https://channels.readthedocs.io/en/stable/)
3. [Django Ninja](https://django-ninja.rest-framework.com/)
4. [Redis](https://redis.io/)
5. [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html)
6. [LlamaIndex](https://gpt-index.readthedocs.io/en/latest/)
7. [Langchain](https://python.langchain.com/en/latest/index.html)
8. [React](https://github.com/facebook/react)
9. Docker & Docker Compose
Thanks to this modern stack built on the super stable Django web framework, the starter Delphic app boasts a streamlined
developer experience, built-in authentication and user management, asynchronous vector store processing, and
web-socket-based query connections for a responsive UI. In addition, our frontend is built with TypeScript and is based
on MUI React for a responsive and modern user interface.
## System Requirements
Celery doesn't work on Windows. It may be deployable with Windows Subsystem for Linux, but configuring that is beyond
the scope of this tutorial. For this reason, we recommend you only follow this tutorial if you're running Linux or macOS.
You will need Docker and Docker Compose installed to deploy the application. Local development will require node version
manager (nvm).
## Django Backend
### Project Directory Overview
The Delphic application has a structured backend directory organization that follows common Django project conventions.
From the repo root, in the `./delphic` subfolder, the main folders are:
1. `contrib`: This directory contains custom modifications or additions to Django's built-in `contrib` apps.
2. `indexes`: This directory contains the core functionality related to document indexing and LLM integration. It
includes:
- `admin.py`: Django admin configuration for the app
- `apps.py`: Application configuration
- `models.py`: Contains the app's database models
- `migrations`: Directory containing database schema migrations for the app
- `signals.py`: Defines any signals for the app
- `tests.py`: Unit tests for the app
3. `tasks`: This directory contains tasks for asynchronous processing using Celery. The `index_tasks.py` file includes
the tasks for creating vector indexes.
4. `users`: This directory is dedicated to user management.
5. `utils`: This directory contains utility modules and functions that are used across the application, such as custom
storage backends, path helpers, and collection-related utilities.
### Database Models
The Delphic application has two core models: `Document` and `Collection`. These models represent the central entities
the application deals with when indexing and querying documents using LLMs. They're defined in
[`./delphic/indexes/models.py`](https://github.com/JSv4/Delphic/blob/main/delphic/indexes/models.py).
1. `Collection`:
- `api_key`: A foreign key that links a collection to an API key. This helps associate jobs with the source API key.
- `title`: A character field that provides a title for the collection.
- `description`: A text field that provides a description of the collection.
- `status`: A character field that stores the processing status of the collection, utilizing the `CollectionStatus`
enumeration.
- `created`: A datetime field that records when the collection was created.
- `modified`: A datetime field that records the last modification time of the collection.
- `model`: A file field that stores the model associated with the collection.
- `processing`: A boolean field that indicates if the collection is currently being processed.
2. `Document`:
- `collection`: A foreign key that links a document to a collection. This represents the relationship between documents
and collections.
- `file`: A file field that stores the uploaded document file.
- `description`: A text field that provides a description of the document.
- `created`: A datetime field that records when the document was created.
- `modified`: A datetime field that records the last modification time of the document.
These models provide a solid foundation for collections of documents and the indexes created from them with LlamaIndex.
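For orientation, here is a condensed, hypothetical sketch of these two models; field options are simplified and the foreign-key target for `api_key` is an assumption, so treat the linked `models.py` as the authoritative definition:

```python
# Simplified sketch only -- see delphic/indexes/models.py for the real code.
from django.db import models


class Collection(models.Model):
    # The exact target model for the API key relation is an assumption here.
    api_key = models.ForeignKey("api.APIKey", on_delete=models.CASCADE, null=True)
    title = models.CharField(max_length=200)
    description = models.TextField()
    status = models.CharField(max_length=20)  # holds a CollectionStatus value
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)
    model = models.FileField(upload_to="collection_models/")
    processing = models.BooleanField(default=False)


class Document(models.Model):
    collection = models.ForeignKey(Collection, on_delete=models.CASCADE)
    file = models.FileField(upload_to="documents/")
    description = models.TextField()
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)
```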
### Django Ninja API
Django Ninja is a web framework for building APIs with Django and Python 3.7+ type hints. It provides a simple,
intuitive, and expressive way of defining API endpoints, leveraging Python’s type hints to automatically generate input
validation, serialization, and documentation.
In the Delphic repo,
the [`./config/api/endpoints.py`](https://github.com/JSv4/Delphic/blob/main/config/api/endpoints.py)
file contains the API routes and logic for the API endpoints. Now, let’s briefly address the purpose of each endpoint
in the `endpoints.py` file:
1. `/heartbeat`: A simple GET endpoint to check if the API is up and running. Returns `True` if the API is accessible.
This is helpful for Kubernetes setups that expect to be able to query your container to ensure it's up and running.
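A minimal, hypothetical version of such an endpoint in Django Ninja might look like the following (the real route lives in `endpoints.py` and is registered on the project's API object):

```python
# Hypothetical sketch -- the actual implementation is in config/api/endpoints.py.
from ninja import NinjaAPI

api = NinjaAPI()


@api.get("/heartbeat")
async def check_heartbeat(request) -> bool:
    # Returns True so orchestrators (e.g. Kubernetes probes) can verify liveness.
    return True
```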
2. `/collections/create`: A POST endpoint to create a new `Collection`. Accepts form parameters such
as `title`, `description`, and a list of `files`. Creates a new `Collection` and `Document` instances for each file,
and schedules a Celery task to create an index.
```python
@collections_router.post("/create")
async def create_collection(
request,
title: str = Form(...),
description: str = Form(...),
files: list[UploadedFile] = File(...),
):
key = None if getattr(request, "auth", None) is None else request.auth
if key is not None:
key = await key
collection_instance = Collection(
api_key=key,
title=title,
description=description,
status=CollectionStatusEnum.QUEUED,
)
await sync_to_async(collection_instance.save)()
for uploaded_file in files:
doc_data = uploaded_file.file.read()
doc_file = ContentFile(doc_data, uploaded_file.name)
document = Document(collection=collection_instance, file=doc_file)
await sync_to_async(document.save)()
create_index.si(collection_instance.id).apply_async()
return await sync_to_async(CollectionModelSchema)(...)
```
3. `/collections/query`: A POST endpoint to query a document collection using the LLM. Accepts a JSON payload
   containing `collection_id` and `query_str`, and returns a response generated by querying the collection. We don't
   actually use this endpoint in our chat GUI (we use a WebSocket; see below), but you could build an app that
   integrates with this REST endpoint to query a specific collection.
```python
@collections_router.post(
"/query",
response=CollectionQueryOutput,
summary="Ask a question of a document collection",
)
def query_collection_view(
request: HttpRequest, query_input: CollectionQueryInput
):
collection_id = query_input.collection_id
query_str = query_input.query_str
response = query_collection(collection_id, query_str)
return {"response": response}
```
4. `/collections/available`: A GET endpoint that returns a list of all collections created with the user's API key. The
output is serialized using the `CollectionModelSchema`.
```python
@collections_router.get(
"/available",
response=list[CollectionModelSchema],
summary="Get a list of all of the collections created with my api_key",
)
async def get_my_collections_view(request: HttpRequest):
key = None if getattr(request, "auth", None) is None else request.auth
if key is not None:
key = await key
collections = Collection.objects.filter(api_key=key)
return [{...} async for collection in collections]
```
5. `/collections/{collection_id}/add_file`: A POST endpoint to add a file to an existing collection. Accepts
a `collection_id` path parameter, and form parameters such as `file` and `description`. Adds the file as a `Document`
instance associated with the specified collection.
```python
@collections_router.post(
"/{collection_id}/add_file", summary="Add a file to a collection"
)
async def add_file_to_collection(
request,
collection_id: int,
file: UploadedFile = File(...),
description: str = Form(...),
):
collection = await sync_to_async(Collection.objects.get)(id=collection_id)
```
### Intro to Websockets
WebSockets are a communication protocol that enables bidirectional and full-duplex communication between a client and a
server over a single, long-lived connection. The WebSocket protocol is designed to work over the same ports as HTTP and
HTTPS (ports 80 and 443, respectively) and uses a similar handshake process to establish a connection. Once the
connection is established, data can be sent in both directions as “frames” without the need to reestablish the
connection each time, unlike traditional HTTP requests.
There are several reasons to use WebSockets, particularly when working with code that takes a long time to load into
memory but is quick to run once loaded:
1. **Performance**: WebSockets eliminate the overhead associated with opening and closing multiple connections for each
request, reducing latency.
2. **Efficiency**: WebSockets allow for real-time communication without the need for polling, resulting in more
efficient use of resources and better responsiveness.
3. **Scalability**: WebSockets can handle a large number of simultaneous connections, making it ideal for applications
that require high concurrency.
In the case of the Delphic application, using WebSockets makes sense as the LLMs can be expensive to load into memory.
By establishing a WebSocket connection, the LLM can remain loaded in memory, allowing subsequent requests to be
processed quickly without the need to reload the model each time.
The ASGI configuration file [`./config/asgi.py`](https://github.com/JSv4/Delphic/blob/main/config/asgi.py) defines how
the application should handle incoming connections, using the Django Channels `ProtocolTypeRouter` to route connections
based on their protocol type. In this case, we have two protocol types: "http" and "websocket".
The “http” protocol type uses the standard Django ASGI application to handle HTTP requests, while the “websocket”
protocol type uses a custom `TokenAuthMiddleware` to authenticate WebSocket connections. The `URLRouter` within
the `TokenAuthMiddleware` defines a URL pattern for the `CollectionQueryConsumer`, which is responsible for handling
WebSocket connections related to querying document collections.
```python
application = ProtocolTypeRouter(
{
"http": get_asgi_application(),
"websocket": TokenAuthMiddleware(
URLRouter(
[
re_path(
r"ws/collections/(?P<collection_id>\w+)/query/$",
CollectionQueryConsumer.as_asgi(),
),
]
)
),
}
)
```
This configuration allows clients to establish WebSocket connections with the Delphic application to efficiently query
document collections using the LLMs, without the need to reload the models for each request.
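To make the wire format concrete, here is a minimal sketch of querying this endpoint from Python with the third-party `websockets` library. The URL shape and token query parameter mirror what the React frontend does later in this guide; treat this as an illustration rather than part of Delphic itself:

```python
# Illustrative client only -- Delphic's real client is the React frontend.
# Requires: pip install websockets
import asyncio
import json

import websockets


async def query_collection(collection_id: int, token: str, query: str) -> str:
    uri = f"ws://localhost:8000/ws/collections/{collection_id}/query/?token={token}"
    async with websockets.connect(uri) as ws:
        # The consumer expects a JSON frame with a "query" key...
        await ws.send(json.dumps({"query": query}))
        # ...and replies with a JSON frame containing a "response" (or "error") key.
        reply = json.loads(await ws.recv())
        return reply.get("response", reply.get("error", ""))


if __name__ == "__main__":
    print(asyncio.run(query_collection(1, "your-api-token", "Summarize this collection")))
```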
### Websocket Handler
The `CollectionQueryConsumer` class
in [`config/api/websockets/queries.py`](https://github.com/JSv4/Delphic/blob/main/config/api/websockets/queries.py) is
responsible for handling WebSocket connections related to querying document collections. It inherits from
the `AsyncWebsocketConsumer` class provided by Django Channels.
The `CollectionQueryConsumer` class has three main methods:
1. `connect`: Called when a WebSocket is handshaking as part of the connection process.
2. `disconnect`: Called when a WebSocket closes for any reason.
3. `receive`: Called when the server receives a message from the WebSocket.
#### Websocket connect listener
The `connect` method is responsible for establishing the connection, extracting the collection ID from the connection
path, loading the collection model, and accepting the connection.
```python
async def connect(self):
try:
self.collection_id = extract_connection_id(self.scope["path"])
self.index = await load_collection_model(self.collection_id)
await self.accept()
except ValueError as e:
await self.accept()
await self.close(code=4000)
except Exception as e:
pass
```
#### Websocket disconnect listener
The `disconnect` method is empty in this case, as there are no additional actions to be taken when the WebSocket is
closed.
#### Websocket receive listener
The `receive` method is responsible for processing incoming messages from the WebSocket. It takes the incoming message,
decodes it, and then queries the loaded collection model using the provided query. The response is then formatted as a
markdown string and sent back to the client over the WebSocket connection.
```python
async def receive(self, text_data):
text_data_json = json.loads(text_data)
if self.index is not None:
query_str = text_data_json["query"]
modified_query_str = f"Please return a nicely formatted markdown string to this request:\n\n{query_str}"
query_engine = self.index.as_query_engine()
response = query_engine.query(modified_query_str)
markdown_response = f"## Response\n\n{response}\n\n"
if response.source_nodes:
markdown_sources = (
f"## Sources\n\n{response.get_formatted_sources()}"
)
else:
markdown_sources = ""
formatted_response = f"{markdown_response}{markdown_sources}"
await self.send(json.dumps({"response": formatted_response}, indent=4))
else:
await self.send(
json.dumps(
{"error": "No index loaded for this connection."}, indent=4
)
)
```
To load the collection model, the `load_collection_model` function is used, which can be found
in [`delphic/utils/collections.py`](https://github.com/JSv4/Delphic/blob/main/delphic/utils/collections.py). This
function retrieves the collection object with the given collection ID, checks if a JSON file for the collection model
exists, and if not, creates one. Then, it sets up the `LLM` and `Settings` before loading
the `VectorStoreIndex` using the cache file.
```python
from llama_index.core import Settings
async def load_collection_model(collection_id: str | int) -> VectorStoreIndex:
"""
Load the Collection model from cache or the database, and return the index.
Args:
collection_id (Union[str, int]): The ID of the Collection model instance.
Returns:
VectorStoreIndex: The loaded index.
This function performs the following steps:
1. Retrieve the Collection object with the given collection_id.
2. Check if a JSON file with the name '/cache/model_{collection_id}.json' exists.
3. If the JSON file doesn't exist, load the JSON from the Collection.model FileField and save it to
'/cache/model_{collection_id}.json'.
4. Call VectorStoreIndex.load_from_disk with the cache_file_path.
"""
# Retrieve the Collection object
collection = await Collection.objects.aget(id=collection_id)
logger.info(f"load_collection_model() - loaded collection {collection_id}")
# Make sure there's a model
if collection.model.name:
logger.info("load_collection_model() - Setup local json index file")
# Check if the JSON file exists
cache_dir = Path(settings.BASE_DIR) / "cache"
cache_file_path = cache_dir / f"model_{collection_id}.json"
if not cache_file_path.exists():
cache_dir.mkdir(parents=True, exist_ok=True)
with collection.model.open("rb") as model_file:
with cache_file_path.open(
"w+", encoding="utf-8"
) as cache_file:
cache_file.write(model_file.read().decode("utf-8"))
# define LLM
logger.info(
f"load_collection_model() - Setup Settings with tokens {settings.MAX_TOKENS} and "
f"model {settings.MODEL_NAME}"
)
Settings.llm = OpenAI(
temperature=0, model="gpt-3.5-turbo", max_tokens=512
)
# Call VectorStoreIndex.load_from_disk
logger.info("load_collection_model() - Load llama index")
index = VectorStoreIndex.load_from_disk(
cache_file_path,
)
logger.info(
"load_collection_model() - Llamaindex loaded and ready for query..."
)
else:
logger.error(
f"load_collection_model() - collection {collection_id} has no model!"
)
raise ValueError("No model exists for this collection!")
return index
```
## React Frontend
### Overview
We chose to use TypeScript, React, and Material-UI (MUI) for the Delphic project’s frontend for a couple of reasons. First,
as the most popular component library (MUI) for the most popular frontend framework (React), this choice makes the
project accessible to a huge community of developers. Second, React is, at this point, a stable and generally well-liked
framework that delivers valuable abstractions in the form of its virtual DOM and is, in our opinion, pretty easy to
learn, again making it accessible.
### Frontend Project Structure
The frontend can be found in the [`/frontend`](https://github.com/JSv4/Delphic/tree/main/frontend) directory of the
repo, with the React-related components in `/frontend/src`. You’ll notice there is a Dockerfile in the `frontend`
directory and several folders and files related to configuring our frontend web
server, [nginx](https://www.nginx.com/).
The `/frontend/src/App.tsx` file serves as the entry point of the application. It defines the main components, such as
the login form, the drawer layout, and the collection create modal. The main components are conditionally rendered based
on whether the user is logged in and has an authentication token.
The DrawerLayout2 component is defined in the `DrawerLayout2.tsx` file. This component manages the layout of the
application and provides the navigation and main content areas.
Since the application is relatively simple, we can get away with not using a complex state management solution like
Redux and just use React’s useState hooks.
### Grabbing Collections from the Backend
The collections available to the logged-in user are retrieved and displayed in the DrawerLayout2 component. The process
can be broken down into the following steps:
1. Initializing state variables:
```tsx
const [collections, setCollections] = useState<CollectionModelSchema[]>([]);
const [loading, setLoading] = useState(true);
```
Here, we initialize two state variables: `collections` to store the list of collections and `loading` to track whether
the collections are being fetched.
2. Collections are fetched for the logged-in user with the `fetchCollections()` function:
```tsx
const fetchCollections = async () => {
  try {
    const accessToken = localStorage.getItem("accessToken");
    if (accessToken) {
      const response = await getMyCollections(accessToken);
      setCollections(response.data);
    }
  } catch (error) {
    console.error(error);
  } finally {
    setLoading(false);
  }
};
```
The `fetchCollections` function retrieves the collections for the logged-in user by calling the `getMyCollections` API
function with the user's access token. It then updates the `collections` state with the retrieved data and sets
the `loading` state to `false` to indicate that fetching is complete.
### Displaying Collections
The latest collections are displayed in the drawer like this:
```tsx
<List>
  {collections.map((collection) => (
    <div key={collection.id}>
      <ListItem disablePadding>
        <ListItemButton
          disabled={
            collection.status !== CollectionStatus.COMPLETE ||
            !collection.has_model
          }
          onClick={() => handleCollectionClick(collection)}
          selected={
            selectedCollection &&
            selectedCollection.id === collection.id
          }
        >
          <ListItemText primary={collection.title} />
          {collection.status === CollectionStatus.RUNNING ? (
            <CircularProgress
              size={24}
              style={{ position: "absolute", right: 16 }}
            />
          ) : null}
        </ListItemButton>
      </ListItem>
    </div>
  ))}
</List>
```
You’ll notice that the `disabled` property of a collection’s `ListItemButton` is set based on whether the collection's
status is not `CollectionStatus.COMPLETE` or the collection does not have a model (`!collection.has_model`). If either
of these conditions is true, the button is disabled, preventing users from selecting an incomplete or model-less
collection. Where the CollectionStatus is RUNNING, we also show a loading wheel over the button.
In a separate `useEffect` hook, we check if any collection in the `collections` state has a status
of `CollectionStatus.RUNNING` or `CollectionStatus.QUEUED`. If so, we set up an interval to repeatedly call
the `fetchCollections` function every 15 seconds (15,000 milliseconds) to update the collection statuses. This way, the
application periodically checks for completed collections, and the UI is updated accordingly when the processing is
done.
```tsx
useEffect(() => {
  let interval: NodeJS.Timeout;
  if (
    collections.some(
      (collection) =>
        collection.status === CollectionStatus.RUNNING ||
        collection.status === CollectionStatus.QUEUED
    )
  ) {
    interval = setInterval(() => {
      fetchCollections();
    }, 15000);
  }
  return () => clearInterval(interval);
}, [collections]);
```
### Chat View Component
The `ChatView` component in `frontend/src/chat/ChatView.tsx` is responsible for handling and displaying a chat interface
for a user to interact with a collection. The component establishes a WebSocket connection to communicate in real-time
with the server, sending and receiving messages.
Key features of the `ChatView` component include:
1. Establishing and managing the WebSocket connection with the server.
2. Displaying messages from the user and the server in a chat-like format.
3. Handling user input to send messages to the server.
4. Updating the messages state and UI based on received messages from the server.
5. Displaying connection status and errors, such as loading messages, connecting to the server, or encountering errors
while loading a collection.
Together, all of this allows users to interact with their selected collection with a very smooth, low-latency
experience.
#### Chat Websocket Client
The WebSocket connection in the `ChatView` component is used to establish real-time communication between the client and
the server. The WebSocket connection is set up and managed in the `ChatView` component as follows:
First, we want to initialize the WebSocket reference:
```tsx
const websocket = useRef<WebSocket | null>(null);
```
A `websocket` reference is created using `useRef`, which holds the WebSocket object that will be used for
communication. `useRef` is a hook in React that allows you to create a mutable reference object that persists across
renders. It is particularly useful when you need to hold a reference to a mutable object, such as a WebSocket
connection, without causing unnecessary re-renders.
In the `ChatView` component, the WebSocket connection needs to be established and maintained throughout the lifetime of
the component, and it should not trigger a re-render when the connection state changes. By using `useRef`, you ensure
that the WebSocket connection is kept as a reference, and the component only re-renders when there are actual state
changes, such as updating messages or displaying errors.
The `setupWebsocket` function is responsible for establishing the WebSocket connection and setting up event handlers to
handle different WebSocket events.
Overall, the `setupWebsocket` function looks like this:
```tsx
const setupWebsocket = () => {
setConnecting(true);
// Here, a new WebSocket object is created using the specified URL, which includes the
// selected collection's ID and the user's authentication token.
websocket.current = new WebSocket(
`ws://localhost:8000/ws/collections/${selectedCollection.id}/query/?token=${authToken}`,
);
websocket.current.onopen = (event) => {
//...
};
websocket.current.onmessage = (event) => {
//...
};
websocket.current.onclose = (event) => {
//...
};
websocket.current.onerror = (event) => {
//...
};
return () => {
websocket.current?.close();
};
};
```
Notice that in a number of places we trigger updates to the GUI based on information from the WebSocket client.
When the component first opens and we try to establish a connection, the `onopen` listener is triggered. In the
callback, the component updates the states to reflect that the connection is established, any previous errors are
cleared, and no messages are awaiting responses:
```tsx
websocket.current.onopen = (event) => {
setError(false);
setConnecting(false);
setAwaitingMessage(false);
console.log("WebSocket connected:", event);
};
```
`onmessage` is triggered when a new message is received from the server through the WebSocket connection. In the
callback, the received data is parsed and the `messages` state is updated with the new message from the server:
```tsx
websocket.current.onmessage = (event) => {
const data = JSON.parse(event.data);
console.log("WebSocket message received:", data);
setAwaitingMessage(false);
if (data.response) {
// Update the messages state with the new message from the server
setMessages((prevMessages) => [
...prevMessages,
{
sender_id: "server",
message: data.response,
timestamp: new Date().toLocaleTimeString(),
},
]);
}
};
```
`onclose` is triggered when the WebSocket connection is closed. In the callback, the component checks for a specific
close code (`4000`) to display a warning toast and update the component states accordingly. It also logs the close
event:
```tsx
websocket.current.onclose = (event) => {
if (event.code === 4000) {
toast.warning(
"Selected collection's model is unavailable. Was it created properly?",
);
setError(true);
setConnecting(false);
setAwaitingMessage(false);
}
console.log("WebSocket closed:", event);
};
```
Finally, `onerror` is triggered when an error occurs with the WebSocket connection. In the callback, the component
updates the states to reflect the error and logs the error event:
```tsx
websocket.current.onerror = (event) => {
setError(true);
setConnecting(false);
setAwaitingMessage(false);
console.error("WebSocket error:", event);
};
```
#### Rendering our Chat Messages
In the `ChatView` component, the layout is determined using CSS styling and Material-UI components. The main layout
consists of a container with a `flex` display and a column-oriented `flexDirection`. This ensures that the content
within the container is arranged vertically.
There are three primary sections within the layout:
1. The chat messages area: This section takes up most of the available space and displays a list of messages exchanged
between the user and the server. It has an overflow-y set to ‘auto’, which allows scrolling when the content
overflows the available space. The messages are rendered using the `ChatMessage` component for each message and
a `ChatMessageLoading` component to show the loading state while waiting for a server response.
2. The divider: A Material-UI `Divider` component is used to separate the chat messages area from the input area,
creating a clear visual distinction between the two sections.
3. The input area: This section is located at the bottom and allows the user to type and send messages. It contains
a `TextField` component from Material-UI, which is set to accept multiline input with a maximum of 2 rows. The input
area also includes a `Button` component to send the message. The user can either click the "Send" button or press "
Enter" on their keyboard to send the message.
The user inputs accepted in the `ChatView` component are text messages that the user types in the `TextField`. The
component processes these text inputs and sends them to the server through the WebSocket connection.
## Deployment
### Prerequisites
To deploy the app, you're going to need Docker and Docker Compose installed. If you're on Ubuntu or another common
Linux distribution, DigitalOcean has
a [great Docker tutorial](https://www.digitalocean.com/community/tutorial_collections/how-to-install-and-use-docker) and
another great tutorial
for [Docker Compose](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-20-04)
you can follow. If those don't work for you, try
the [official docker documentation.](https://docs.docker.com/engine/install/)
### Build and Deploy
The project is based on django-cookiecutter, and it’s pretty easy to get it deployed on a VM and configured to serve
HTTPS traffic for a specific domain. The configuration is somewhat involved, however. That's not because of this
project; configuring your certificates, DNS, and so on is just a fairly involved topic in its own right.
For the purposes of this guide, let’s just get things running locally. Perhaps we’ll release a guide on production deployment. In the meantime, check out
In the meantime, check out
the [Django Cookiecutter project docs](https://cookiecutter-django.readthedocs.io/en/latest/deployment-with-docker.html)
for starters.
This guide assumes your goal is to get the application up and running for use. If you want to develop, most likely you
won’t want to launch the compose stack with the `--profiles fullstack` flag and will instead want to launch the React
frontend using the Node development server.
To deploy, first clone the repo:
```commandline
git clone https://github.com/JSv4/Delphic.git
```
Change into the project directory:
```commandline
cd Delphic
```
Copy the sample environment files:
```commandline
mkdir -p ./.envs/.local/
cp -a ./docs/sample_envs/local/.frontend ./frontend
cp -a ./docs/sample_envs/local/.django ./.envs/.local
cp -a ./docs/sample_envs/local/.postgres ./.envs/.local
```
Edit the `.django` and `.postgres` configuration files to include your OpenAI API key and set a unique password for your
database user. You can also set the response token limit in the `.django` file or switch which OpenAI model you want to
use. GPT-4 is supported, assuming you’re authorized to access it.
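For illustration, the relevant entries in `.django` might look something like the following. The variable names here are assumptions based on the settings referenced elsewhere in this guide (`MAX_TOKENS`, `MODEL_NAME`), so check the sample env files for the authoritative names:

```
OPENAI_API_KEY=sk-...
MODEL_NAME=gpt-3.5-turbo
MAX_TOKENS=512
```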
Build the docker compose stack with the `--profiles fullstack` flag:
```commandline
sudo docker-compose --profiles fullstack -f local.yml build
```
The fullstack flag instructs compose to build a docker container from the frontend folder, and this will be launched
along with all of the needed backend containers. It takes a long time to build a production React container, however,
so we don’t recommend you develop this way. Follow
the [instructions in the project readme.md](https://github.com/JSv4/Delphic#development) for development environment
setup instructions.
Finally, bring up the application:
```commandline
sudo docker-compose -f local.yml up
```
Now, visit `localhost:3000` in your browser to see the frontend, and use the Delphic application locally.
## Using the Application
### Setup Users
In order to actually use the application (at the moment, we intend to make it possible to share certain models with
unauthenticated users), you need a login. You can use either a superuser or non-superuser. In either case, someone needs
to first create a superuser using the console:
**Why set up a Django superuser?** A Django superuser has all the permissions in the application and can manage all
aspects of the system, including creating, modifying, and deleting users, collections, and other data. Setting up a
superuser allows you to fully control and manage the application.
**How to create a Django superuser:**
1. Run the following command to create a superuser:

```commandline
sudo docker-compose -f local.yml run django python manage.py createsuperuser
```

2. You will be prompted to provide a username, email address, and password for the superuser. Enter the required
   information.
**How to create additional users using Django admin:**
1. Start your Delphic application locally following the deployment instructions.
2. Visit the Django admin interface by navigating to `http://localhost:8000/admin` in your browser.
3. Log in with the superuser credentials you created earlier.
4. Click on “Users” under the “Authentication and Authorization” section.
5. Click on the “Add user +” button in the top right corner.
6. Enter the required information for the new user, such as username and password. Click “Save” to create the user.
7. To grant the new user additional permissions or make them a superuser, click on their username in the user list,
scroll down to the “Permissions” section, and configure their permissions accordingly. Save your changes. |
# Get References from PDFs
This guide shows you how to use LlamaIndex to get in-line page number citations in the response (and the response is streamed).
This is a simple combination of using the page number metadata in our PDF loader along with our indexing/query abstractions to use this information.
<a href="https://colab.research.google.com/github/jerryjliu/llama_index/blob/main/docs/docs/examples/citation/pdf_page_reference.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-llms-openai
```
```python
!pip install llama-index
```
```python
from llama_index.core import (
SimpleDirectoryReader,
VectorStoreIndex,
download_loader,
RAKEKeywordTableIndex,
)
```
```python
from llama_index.llms.openai import OpenAI
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
```
Download Data
```python
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/lyft_2021.pdf' -O 'data/10k/lyft_2021.pdf'
```
Load document and build index
```python
reader = SimpleDirectoryReader(input_files=["./data/10k/lyft_2021.pdf"])
data = reader.load_data()
```
```python
index = VectorStoreIndex.from_documents(data)
```
```python
query_engine = index.as_query_engine(streaming=True, similarity_top_k=3)
```
Stream response with page citation
```python
response = query_engine.query(
"What was the impact of COVID? Show statements in bullet form and show"
" page reference after each statement."
)
response.print_response_stream()
```
• The ongoing COVID-19 pandemic continues to impact communities in the United States, Canada and globally (page 6).
• The pandemic and related responses caused decreased demand for our platform leading to decreased revenues as well as decreased earning opportunities for drivers on our platform (page 6).
• Our business continues to be impacted by the COVID-19 pandemic (page 6).
• The exact timing and pace of the recovery remain uncertain (page 6).
• The extent to which our operations will continue to be impacted by the pandemic will depend largely on future developments, which are highly uncertain and cannot be accurately predicted (page 6).
• An increase in cases due to variants of the virus has caused many businesses to delay employees returning to the office (page 6).
• We anticipate that continued social distancing, altered consumer behavior, reduced travel and commuting, and expected corporate cost cutting will be significant challenges for us (page 6).
• We have adopted multiple measures, including, but not limited, to establishing new health and safety requirements for ridesharing and updating workplace policies (page 6).
• We have had to take certain cost-cutting measures, including lay-offs, furloughs and salary reductions, which may have adversely affect employee morale, our culture and our ability to attract and retain employees (page 18).
• The ultimate impact of the COVID-19 pandemic on our users, customers, employees, business, operations and financial performance depends on many factors that are not within our control (page 18).
Inspect source nodes
```python
for node in response.source_nodes:
print("-----")
text_fmt = node.node.get_content().strip().replace("\n", " ")[:1000]
print(f"Text:\t {text_fmt} ...")
print(f"Metadata:\t {node.node.metadata}")
print(f"Score:\t {node.score:.3f}")
```
-----
Text: Impact of COVID-19 to our BusinessThe ongoing COVID-19 pandemic continues to impact communities in the United States, Canada and globally. Since the pandemic began in March 2020,governments and private businesses - at the recommendation of public health officials - have enacted precautions to mitigate the spread of the virus, including travelrestrictions and social distancing measures in many regions of the United States and Canada, and many enterprises have instituted and maintained work from homeprograms and limited the number of employees on site. Beginning in the middle of March 2020, the pandemic and these related responses caused decreased demand for ourplatform leading to decreased revenues as well as decreased earning opportunities for drivers on our platform. Our business continues to be impacted by the COVID-19pandemic. Although we have seen some signs of demand improving, particularly compared to the dema ...
Metadata: {'page_label': '6', 'file_name': 'lyft_2021.pdf'}
Score: 0.821
-----
Text: will continue to be impacted by the pandemic will depend largely on future developments, which are highly uncertain and cannot beaccurately predicted, including new information which may emerge concerning COVID-19 variants and the severity of the pandemic and actions by government authoritiesand private businesses to contain the pandemic or recover from its impact, among other things. For example, an increase in cases due to variants of the virus has causedmany businesses to delay employees returning to the office. Even as travel restrictions and shelter-in-place orders are modified or lifted, we anticipate that continued socialdistancing, altered consu mer behavior, reduced travel and commuting, and expected corporate cost cutting will be significant challenges for us. The strength and duration ofthese challenges cannot b e presently estimated.In response to the COVID-19 pandemic, we have adopted multiple measures, including, but not limited, to establishing ne ...
Metadata: {'page_label': '56', 'file_name': 'lyft_2021.pdf'}
Score: 0.808
-----
Text: storing unrented and returned vehicles. These impacts to the demand for and operations of the different rental programs have and may continue to adversely affectour business, financial condi tion and results of operation.• The COVID-19 pandemic may delay or prevent us, or our current or prospective partners and suppliers, from being able to test, develop or deploy autonomousvehicle-related technology, including through direct impacts of the COVID-19 virus on employee and contractor health; reduced consumer demand forautonomous vehicle travel resulting from an overall reduced demand for travel; shelter-in-place orders by local, state or federal governments negatively impactingoperations, including our ability to test autonomous vehicle-related technology; impacts to the supply chains of our current or prospective partners and suppliers;or economic impacts limiting our or our current or prospective partners’ or suppliers’ ability to expend resources o ...
Metadata: {'page_label': '18', 'file_name': 'lyft_2021.pdf'}
Score: 0.805 |
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/WeaviateIndex_auto_retriever.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Auto-Retrieval from a Weaviate Vector Database
This guide shows how to perform **auto-retrieval** in LlamaIndex with [Weaviate](https://weaviate.io/).
The Weaviate vector database supports a set of [metadata filters](https://weaviate.io/developers/weaviate/search/filters) in addition to a query string for semantic search. Given a natural language query, we first use a Large Language Model (LLM) to infer a set of metadata filters as well as the right query string to pass to the vector database (either can also be blank). This overall query bundle is then executed against the vector database.
This allows for more dynamic, expressive forms of retrieval beyond top-k semantic search. The relevant context for a given query may only require filtering on a metadata tag, or require a joint combination of filtering + semantic search within the filtered set, or just raw semantic search.
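For example, given the query "Tell me about Sports celebrities from United States", the auto-retriever might infer a structured query roughly equivalent to the following. This is a hypothetical illustration of the idea, not code emitted by the library:

```python
# Hypothetical illustration of what the LLM infers from the natural language query.
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

inferred_filters = MetadataFilters(
    filters=[
        ExactMatchFilter(key="category", value="Sports"),
        ExactMatchFilter(key="country", value="United States"),
    ]
)
inferred_query_str = "celebrities"  # the semantic portion of the query
```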
## Setup
We first define imports and define an empty Weaviate collection.
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-weaviate
```
```python
!pip install llama-index weaviate-client
```
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
We will be using GPT-4 for its reasoning capabilities to infer the metadata filters. Depending on your use case, `"gpt-3.5-turbo"` can work as well.
```python
# set up OpenAI
import os
import getpass
import openai
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
openai.api_key = os.environ["OPENAI_API_KEY"]
```
```python
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core.settings import Settings
Settings.llm = OpenAI(model="gpt-4")
Settings.embed_model = OpenAIEmbedding()
```
This Notebook uses Weaviate in [Embedded mode](https://weaviate.io/developers/weaviate/installation/embedded), which is supported on Linux and macOS.
If you prefer to try out Weaviate's fully managed service, [Weaviate Cloud Services (WCS)](https://weaviate.io/developers/weaviate/installation/weaviate-cloud-services), you can enable the code in the comments.
```python
import weaviate
from weaviate.embedded import EmbeddedOptions
# Connect to Weaviate client in embedded mode
client = weaviate.connect_to_embedded()
# Enable this code if you want to use Weaviate Cloud Services instead of Embedded mode.
"""
import weaviate
# cloud
cluster_url = ""
api_key = ""
client = weaviate.connect_to_wcs(cluster_url=cluster_url,
auth_credentials=weaviate.auth.AuthApiKey(api_key),
)
# local
# client = weaviate.connect_to_local()
"""
```
## Defining Some Sample Data
We insert some sample nodes containing text chunks into the vector database. Note that each `TextNode` not only contains the text, but also metadata e.g. `category` and `country`. These metadata fields will get converted/stored as such in the underlying vector db.
```python
from llama_index.core.schema import TextNode
nodes = [
TextNode(
text=(
"Michael Jordan is a retired professional basketball player,"
" widely regarded as one of the greatest basketball players of all"
" time."
),
metadata={
"category": "Sports",
"country": "United States",
},
),
TextNode(
text=(
"Angelina Jolie is an American actress, filmmaker, and"
" humanitarian. She has received numerous awards for her acting"
" and is known for her philanthropic work."
),
metadata={
"category": "Entertainment",
"country": "United States",
},
),
TextNode(
text=(
"Elon Musk is a business magnate, industrial designer, and"
" engineer. He is the founder, CEO, and lead designer of SpaceX,"
" Tesla, Inc., Neuralink, and The Boring Company."
),
metadata={
"category": "Business",
"country": "United States",
},
),
TextNode(
text=(
"Rihanna is a Barbadian singer, actress, and businesswoman. She"
" has achieved significant success in the music industry and is"
" known for her versatile musical style."
),
metadata={
"category": "Music",
"country": "Barbados",
},
),
TextNode(
text=(
"Cristiano Ronaldo is a Portuguese professional footballer who is"
" considered one of the greatest football players of all time. He"
" has won numerous awards and set multiple records during his"
" career."
),
metadata={
"category": "Sports",
"country": "Portugal",
},
),
]
```
## Build Vector Index with Weaviate Vector Store
Here we load the data into the vector store. As mentioned above, both the text and metadata for each node will get converted into corresponding representations in Weaviate. We can now run semantic queries and also metadata filtering on this data from Weaviate.
```python
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.weaviate import WeaviateVectorStore
vector_store = WeaviateVectorStore(
weaviate_client=client, index_name="LlamaIndex_filter"
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
```
```python
index = VectorStoreIndex(nodes, storage_context=storage_context)
```
## Define `VectorIndexAutoRetriever`
We define our core `VectorIndexAutoRetriever` module. The module takes in `VectorStoreInfo`,
which contains a structured description of the vector store collection and the metadata filters it supports.
This information will then be used in the auto-retrieval prompt where the LLM infers metadata filters.
```python
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.vector_stores.types import MetadataInfo, VectorStoreInfo
vector_store_info = VectorStoreInfo(
content_info="brief biography of celebrities",
metadata_info=[
MetadataInfo(
name="category",
type="str",
description=(
"Category of the celebrity, one of [Sports, Entertainment,"
" Business, Music]"
),
),
MetadataInfo(
name="country",
type="str",
description=(
"Country of the celebrity, one of [United States, Barbados,"
" Portugal]"
),
),
],
)
retriever = VectorIndexAutoRetriever(
index, vector_store_info=vector_store_info
)
```
## Running over some sample data
We try running over some sample data. Note how metadata filters are inferred - this helps with more precise retrieval!
```python
response = retriever.retrieve("Tell me about celebrities from United States")
```
```python
print(response[0])
```
```python
response = retriever.retrieve(
"Tell me about Sports celebrities from United States"
)
```
```python
print(response[0])
``` |
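As a further (hypothetical) example, a query that targets a different metadata value follows the same pattern:

```python
# The auto-retriever should infer a `category == "Music"` filter here.
response = retriever.retrieve("Tell me about Music celebrities")
print(response[0])
```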
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/WeaviateIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Weaviate Vector Store
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-weaviate
```
```python
!pip install llama-index
```
#### Creating a Weaviate Client
```python
import os
import openai
os.environ["OPENAI_API_KEY"] = ""
openai.api_key = os.environ["OPENAI_API_KEY"]
```
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
```python
import weaviate
```
```python
# cloud
cluster_url = ""
api_key = ""
client = weaviate.connect_to_wcs(
cluster_url=cluster_url,
auth_credentials=weaviate.auth.AuthApiKey(api_key),
)
# local
# client = weaviate.connect_to_local()
```
#### Load documents, build the VectorStoreIndex
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.weaviate import WeaviateVectorStore
from IPython.display import Markdown, display
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
```
```python
from llama_index.core import StorageContext
# If you want to load the index later, be sure to give it a name!
vector_store = WeaviateVectorStore(
weaviate_client=client, index_name="LlamaIndex"
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
# NOTE: you may also choose to define an index_name manually.
# index_name = "test_prefix"
# vector_store = WeaviateVectorStore(weaviate_client=client, index_name=index_name)
```
#### Query Index
```python
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
```
```python
display(Markdown(f"<b>{response}</b>"))
```
## Loading the index
Here, we use the same index name as when we created the initial index. This stops it from being auto-generated and allows us to easily connect back to it.
```python
cluster_url = ""
api_key = ""
client = weaviate.connect_to_wcs(
cluster_url=cluster_url,
auth_credentials=weaviate.auth.AuthApiKey(api_key),
)
# local
# client = weaviate.connect_to_local()
```
```python
vector_store = WeaviateVectorStore(
weaviate_client=client, index_name="LlamaIndex"
)
loaded_index = VectorStoreIndex.from_vector_store(vector_store)
```
```python
# set Logging to DEBUG for more detailed outputs
query_engine = loaded_index.as_query_engine()
response = query_engine.query("What happened at interleaf?")
display(Markdown(f"<b>{response}</b>"))
```
## Metadata Filtering
Let's insert a dummy document, and try to filter so that only that document is returned.
```python
from llama_index.core import Document
doc = Document.example()
print(doc.metadata)
print("-----")
print(doc.text[:100])
```
```python
loaded_index.insert(doc)
```
```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
filters = MetadataFilters(
filters=[ExactMatchFilter(key="filename", value="README.md")]
)
query_engine = loaded_index.as_query_engine(filters=filters)
response = query_engine.query("What is the name of the file?")
display(Markdown(f"<b>{response}</b>"))
```
# Deleting the index completely
You can delete the index created by the vector store using the `delete_index` function
```python
vector_store.delete_index()
```
```python
vector_store.delete_index() # calling the function again does nothing
```
# Connection Termination
You must ensure your client connections are closed:
```python
client.close()
``` |
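Alternatively, the v4 Python client can be used as a context manager so the connection is closed automatically (shown here with a local client for illustration):

```python
import weaviate

# The connection is closed automatically when the block exits.
with weaviate.connect_to_local() as client:
    ...  # create the vector store, build indexes, and query here
```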
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/Neo4jVectorDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neo4j vector store
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-neo4jvector
```
```python
!pip install llama-index
```
```python
import os
import openai
os.environ["OPENAI_API_KEY"] = "OPENAI_API_KEY"
openai.api_key = os.environ["OPENAI_API_KEY"]
```
## Initiate Neo4j vector wrapper
```python
from llama_index.vector_stores.neo4jvector import Neo4jVectorStore
username = "neo4j"
password = "pleaseletmein"
url = "bolt://localhost:7687"
embed_dim = 1536
neo4j_vector = Neo4jVectorStore(username, password, url, embed_dim)
```
## Load documents, build the VectorStoreIndex
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from IPython.display import Markdown, display
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
--2023-12-14 18:44:00-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75042 (73K) [text/plain]
Saving to: ‘data/paul_graham/paul_graham_essay.txt’
data/paul_graham/pa 100%[===================>] 73,28K --.-KB/s in 0,03s
2023-12-14 18:44:00 (2,16 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
```
```python
from llama_index.core import StorageContext
storage_context = StorageContext.from_defaults(vector_store=neo4j_vector)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
```python
query_engine = index.as_query_engine()
response = query_engine.query("What happened at interleaf?")
display(Markdown(f"<b>{response}</b>"))
```
<b>At Interleaf, they added a scripting language inspired by Emacs and made it a dialect of Lisp. They were looking for a Lisp hacker to write things in this scripting language. The author of the text worked at Interleaf and mentioned that their Lisp was the thinnest icing on a giant C cake. The author also mentioned that they didn't know C and didn't want to learn it, so they never understood most of the software at Interleaf. Additionally, the author admitted to being a bad employee and spending much of their time working on a separate project called On Lisp.</b>
## Hybrid search
Hybrid search uses a combination of keyword and vector search.
In order to use hybrid search, you need to set `hybrid_search` to `True`:
```python
neo4j_vector_hybrid = Neo4jVectorStore(
username, password, url, embed_dim, hybrid_search=True
)
```
```python
storage_context = StorageContext.from_defaults(
vector_store=neo4j_vector_hybrid
)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
query_engine = index.as_query_engine()
response = query_engine.query("What happened at interleaf?")
display(Markdown(f"<b>{response}</b>"))
```
<b>At Interleaf, they added a scripting language inspired by Emacs and made it a dialect of Lisp. They were looking for a Lisp hacker to write things in this scripting language. The author of the essay worked at Interleaf but didn't understand most of the software because he didn't know C and didn't want to learn it. He also mentioned that their Lisp was the thinnest icing on a giant C cake. The author admits to being a bad employee and spending much of his time working on a contract to publish On Lisp.</b>
## Load existing vector index
In order to connect to an existing vector index, you need to define the `index_name` and `text_node_property` parameters:
- `index_name`: name of the existing vector index (default is `vector`)
- `text_node_property`: name of the property that contains the text value (default is `text`)
```python
index_name = "existing_index"
text_node_property = "text"
existing_vector = Neo4jVectorStore(
username,
password,
url,
embed_dim,
index_name=index_name,
text_node_property=text_node_property,
)
loaded_index = VectorStoreIndex.from_vector_store(existing_vector)
```
## Customizing responses
You can customize the retrieved information from the knowledge graph using the `retrieval_query` parameter.
The retrieval query must return the following four columns:
* text:str - The text of the returned document
* score:str - similarity score
* id:str - node id
* metadata: Dict - dictionary with additional metadata (must contain `_node_type` and `_node_content` keys)
```python
retrieval_query = (
"RETURN 'Interleaf hired Tomaz' AS text, score, node.id AS id, "
"{author: 'Tomaz', _node_type:node._node_type, _node_content:node._node_content} AS metadata"
)
neo4j_vector_retrieval = Neo4jVectorStore(
username, password, url, embed_dim, retrieval_query=retrieval_query
)
```
```python
loaded_index = VectorStoreIndex.from_vector_store(
neo4j_vector_retrieval
).as_query_engine()
response = loaded_index.query("What happened at interleaf?")
display(Markdown(f"<b>{response}</b>"))
```
<b>Interleaf hired Tomaz.</b>
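The example above returns a hard-coded string to keep the mechanism obvious. As a rough sketch (assuming your nodes use the default `text` property), a more realistic query returns the stored text itself, plus an illustrative extra metadata field:
```python
# Sketch only: returns the stored node text and adds a hypothetical
# `source` metadata field alongside the required keys.
retrieval_query = (
    "RETURN node.text AS text, score, node.id AS id, "
    "{source: 'neo4j', _node_type: node._node_type, "
    "_node_content: node._node_content} AS metadata"
)
neo4j_vector_custom = Neo4jVectorStore(
    username, password, url, embed_dim, retrieval_query=retrieval_query
)
```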
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/SimpleIndexOnS3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# S3/R2 Storage
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
!pip install llama-index
```
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import (
VectorStoreIndex,
SimpleDirectoryReader,
load_index_from_storage,
StorageContext,
)
from IPython.display import Markdown, display
```
INFO:numexpr.utils:Note: NumExpr detected 32 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
Note: NumExpr detected 32 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
NumExpr defaulting to 8 threads.
/home/hua/code/llama_index/.hermit/python/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
```python
import dotenv
import s3fs
import os
dotenv.load_dotenv("../../../.env")
AWS_KEY = os.environ["AWS_ACCESS_KEY_ID"]
AWS_SECRET = os.environ["AWS_SECRET_ACCESS_KEY"]
R2_ACCOUNT_ID = os.environ["R2_ACCOUNT_ID"]
assert AWS_KEY is not None and AWS_KEY != ""
s3 = s3fs.S3FileSystem(
key=AWS_KEY,
secret=AWS_SECRET,
endpoint_url=f"https://{R2_ACCOUNT_ID}.r2.cloudflarestorage.com",
s3_additional_kwargs={"ACL": "public-read"},
)
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
print(len(documents))
```
1
```python
index = VectorStoreIndex.from_documents(documents, fs=s3)
```
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens
> [build_index_from_nodes] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 20729 tokens
> [build_index_from_nodes] Total embedding token usage: 20729 tokens
```python
# save index to disk
index.set_index_id("vector_index")
index.storage_context.persist("llama-index/storage_demo", fs=s3)
```
```python
s3.listdir("llama-index/storage_demo")
```
[{'Key': 'llama-index/storage_demo/docstore.json',
'LastModified': datetime.datetime(2023, 5, 14, 20, 23, 53, 213000, tzinfo=tzutc()),
'ETag': '"3993f79a6f7cf908a8e53450a2876cf0"',
'Size': 107529,
'StorageClass': 'STANDARD',
'type': 'file',
'size': 107529,
'name': 'llama-index/storage_demo/docstore.json'},
{'Key': 'llama-index/storage_demo/index_store.json',
'LastModified': datetime.datetime(2023, 5, 14, 20, 23, 53, 783000, tzinfo=tzutc()),
'ETag': '"5b084883bf0b08e3c2b979af7c16be43"',
'Size': 3105,
'StorageClass': 'STANDARD',
'type': 'file',
'size': 3105,
'name': 'llama-index/storage_demo/index_store.json'},
{'Key': 'llama-index/storage_demo/vector_store.json',
'LastModified': datetime.datetime(2023, 5, 14, 20, 23, 54, 232000, tzinfo=tzutc()),
'ETag': '"75535cf22c23bcd8ead21b8a52e9517a"',
'Size': 829290,
'StorageClass': 'STANDARD',
'type': 'file',
'size': 829290,
'name': 'llama-index/storage_demo/vector_store.json'}]
```python
# load index from s3
sc = StorageContext.from_defaults(
persist_dir="llama-index/storage_demo", fs=s3
)
```
```python
index2 = load_index_from_storage(sc, "vector_index")
```
INFO:llama_index.indices.loading:Loading indices with ids: ['vector_index']
Loading indices with ids: ['vector_index']
```python
index2.docstore.docs.keys()
```
dict_keys(['f8891670-813b-4cfa-9025-fcdc8ba73449', '985a2c69-9da5-40cf-ba30-f984921187c1', 'c55f077c-0bfb-4036-910c-6fd5f26f7372', 'b47face6-f25b-4381-bb8d-164f179d6888', '16304ef7-2378-4776-b86d-e8ed64c8fb58', '62dfdc7a-6a2f-4d5f-9033-851fbc56c14a', 'a51ef189-3924-494b-84cf-e23df673e29c', 'f94aca2b-34ac-4ec4-ac41-d31cd3b7646f', 'ad89e2fb-e0fc-4615-a380-8245bd6546af', '3dbba979-ca08-4321-b4de-be5236ac2e11', '634b2d6d-0bff-4384-898f-b521470db8ac', 'ee9551ba-7a44-493d-997b-8eeab9c04e25', 'b21fe2b5-d8e3-4895-8424-fa9e3da76711', 'bd2609e8-8b52-49e8-8ee7-41b64b3ce9e1', 'a08b739e-efd9-4a61-8517-c4f9cea8cf7d', '8d4babaf-37f1-454a-8be4-b67e1b8e428f', '05389153-4567-4e53-a2ea-bc3e020ee1b2', 'd29531a5-c5d2-4e1d-ab99-56f2b4bb7f37', '2ccb3c63-3407-4acf-b5bb-045caa588bbc', 'a0b1bebb-3dcd-4bf8-9ebb-a4cd2cb82d53', '21517b34-6c1b-4607-bf89-7ab59b85fba6', 'f2487d52-1e5e-4482-a182-218680ef306e', '979998ce-39ee-41bc-a9be-b3ed68d7c304', '3e658f36-a13e-407a-8624-0adf9e842676'])
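As a quick sanity check, the reloaded index can be queried just like the original (a minimal sketch; the question is illustrative):
```python
# query the index that was reloaded from S3/R2
query_engine = index2.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```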
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/RocksetIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Rockset Vector Store
As a real-time search and analytics database, Rockset uses indexing to deliver scalable and performant personalization, product search, semantic search, chatbot applications, and more.
Since Rockset is purpose-built for real-time, you can build these responsive applications on constantly updating, streaming data.
By integrating Rockset with LlamaIndex, you can easily use LLMs on your own real-time data for production-ready vector search applications.
We'll walk through a demonstration of how to use Rockset as a vector store in LlamaIndex.
## Tutorial
In this example, we'll use OpenAI's `text-embedding-ada-002` model to generate embeddings and Rockset as the vector store to store them.
We'll ingest text from a file and ask questions about the content.
### Setting Up Your Environment
1. Create a [collection](https://rockset.com/docs/collections) from the Rockset console with the [Write API](https://rockset.com/docs/write-api/) as your source.
Name your collection `llamaindex_demo`. Configure the following [ingest transformation](https://rockset.com/docs/ingest-transformation)
with [`VECTOR_ENFORCE`](https://rockset.com/docs/vector-functions) to define your embeddings field and take advantage of performance and storage optimizations:
```sql
SELECT
_input.* EXCEPT(_meta),
VECTOR_ENFORCE(
_input.embedding,
1536,
'float'
) as embedding
FROM _input
```
2. Create an [API key](https://rockset.com/docs/iam) from the Rockset console and set the `ROCKSET_API_KEY` environment variable.
Find your API server [here](http://rockset.com/docs/rest-api#introduction) and set the `ROCKSET_API_SERVER` environment variable.
Set the `OPENAI_API_KEY` environment variable.
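For example (placeholder values; substitute your own key and server):
```shell
export ROCKSET_API_KEY="<your Rockset API key>"
export ROCKSET_API_SERVER="https://api.use1a1.rockset.com"
export OPENAI_API_KEY="<your OpenAI API key>"
```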
3. Install the dependencies.
```shell
pip3 install llama_index rockset
```
4. LlamaIndex allows you to ingest data from a variety of sources.
For this example, we'll read from a text file named `constitution.txt`, which is a transcript of the American Constitution, found [here](https://www.archives.gov/founding-docs/constitution-transcript).
### Data ingestion
Use LlamaIndex's `SimpleDirectoryReader` class to convert the text file to a list of `Document` objects.
```python
%pip install llama-index-llms-openai
%pip install llama-index-vector-stores-rocksetdb
```
```python
from llama_index.core import SimpleDirectoryReader
docs = SimpleDirectoryReader(
    input_files=["{path to}/constitution.txt"]
).load_data()
```
Instantiate the LLM and configure the global settings.
```python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
Settings.llm = OpenAI(temperature=0.8, model="gpt-3.5-turbo")
```
Instantiate the vector store and storage context.
```python
from llama_index.core import StorageContext
from llama_index.vector_stores.rocksetdb import RocksetVectorStore
vector_store = RocksetVectorStore(collection="llamaindex_demo")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
```
Add documents to the `llamaindex_demo` collection and create an index.
```python
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex.from_documents(
docs,
storage_context=storage_context,
)
```
### Querying
Ask a question about your document and generate a response.
```python
response = index.as_query_engine().query("What is the duty of the president?")
print(str(response))
```
Run the program.
```text
$ python3 main.py
The duty of the president is to faithfully execute the Office of President of the United States, preserve, protect and defend the Constitution of the United States, serve as the Commander in Chief of the Army and Navy, grant reprieves and pardons for offenses against the United States (except in cases of impeachment), make treaties and appoint ambassadors and other public ministers, take care that the laws be faithfully executed, and commission all the officers of the United States.
```
## Metadata Filtering
Metadata filtering allows you to retrieve relevant documents that match specific filters.
1. Add nodes to your vector store and create an index.
```python
from llama_index.vector_stores.rocksetdb import RocksetVectorStore
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.core.vector_stores.types import NodeWithEmbedding
from llama_index.core.schema import TextNode
nodes = [
NodeWithEmbedding(
node=TextNode(
text="Apples are blue",
metadata={"type": "fruit"},
),
embedding=[],
)
]
index = VectorStoreIndex(
nodes,
storage_context=StorageContext.from_defaults(
vector_store=RocksetVectorStore(collection="llamaindex_demo")
),
)
```
2. Define metadata filters.
```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
filters = MetadataFilters(
filters=[ExactMatchFilter(key="type", value="fruit")]
)
```
3. Retrieve relevant documents that satisfy the filters.
```python
retriever = index.as_retriever(filters=filters)
retriever.retrieve("What colors are apples?")
```
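To inspect what came back, you can print each retrieved node's text and metadata (a minimal sketch):
```python
# print the content and metadata of each retrieved node
for result in retriever.retrieve("What colors are apples?"):
    print(result.node.get_content(), result.node.metadata)
```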
## Creating an Index from an Existing Collection
You can create indices with data from existing collections.
```python
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.rocksetdb import RocksetVectorStore
vector_store = RocksetVectorStore(collection="llamaindex_demo")
index = VectorStoreIndex.from_vector_store(vector_store)
```
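From here the index behaves like any other; for example, you can query it directly (the question is illustrative):
```python
response = index.as_query_engine().query(
    "What does the first article establish?"
)
print(str(response))
```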
## Creating an Index from a New Collection
You can also create a new Rockset collection to use as a vector store.
```python
from llama_index.vector_stores.rocksetdb import RocksetVectorStore
vector_store = RocksetVectorStore.with_new_collection(
collection="llamaindex_demo", # name of new collection
    dimensions=1536,  # specifies length of vectors in ingest transformation (optional)
# other RocksetVectorStore args
)
index = VectorStoreIndex(
nodes,
storage_context=StorageContext.from_defaults(vector_store=vector_store),
)
```
## Configuration
* **collection**: Name of the collection to query (required).
```python
RocksetVectorStore(collection="my_collection")
```
* **workspace**: Name of the workspace containing the collection. Defaults to `"commons"`.
```python
RocksetVectorStore(workspace="my_workspace")
```
* **api_key**: The API key to use to authenticate Rockset requests. Ignored if `client` is passed in. Defaults to the `ROCKSET_API_KEY` environment variable.
```python
RocksetVectorStore(api_key="<my key>")
```
* **api_server**: The API server to use for Rockset requests. Ignored if `client` is passed in. Defaults to the `ROCKSET_API_SERVER` environment variable, or `"https://api.use1a1.rockset.com"` if `ROCKSET_API_SERVER` is not set.
```python
from rockset import Regions
RocksetVectorStore(api_server=Regions.euc1a1)
```
* **client**: Rockset client object to use to execute Rockset requests. If not specified, a client object is internally constructed with the `api_key` parameter (or `ROCKSET_API_KEY` environment variable) and the `api_server` parameter (or `ROCKSET_API_SERVER` environment variable).
```python
from rockset import RocksetClient
RocksetVectorStore(client=RocksetClient(api_key="<my key>"))
```
* **embedding_col**: The name of the database field containing embeddings. Defaults to `"embedding"`.
```python
RocksetVectorStore(embedding_col="my_embedding")
```
* **metadata_col**: The name of the database field containing node data. Defaults to `"metadata"`.
```python
RocksetVectorStore(metadata_col="node")
```
* **distance_func**: The distance metric used to measure vector similarity. Defaults to cosine similarity.
```python
RocksetVectorStore(distance_func=RocksetVectorStore.DistanceFunc.DOT_PRODUCT)
```
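As a sketch combining several of the options above (the values shown are the documented defaults where applicable):
```python
vector_store = RocksetVectorStore(
    collection="llamaindex_demo",
    workspace="commons",  # default workspace
    embedding_col="embedding",  # default embeddings field
    metadata_col="metadata",  # default node-data field
    distance_func=RocksetVectorStore.DistanceFunc.DOT_PRODUCT,
)
```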
# Databricks Vector Search
Databricks Vector Search is a vector database that is built into the Databricks Intelligence Platform and integrated with its governance and productivity tools. Full docs here: https://docs.databricks.com/en/generative-ai/vector-search.html
Install llama-index and databricks-vectorsearch. You must be inside a Databricks runtime to use the Vector Search python client.
```python
%pip install llama-index llama-index-vector-stores-databricks
%pip install databricks-vectorsearch
```
Import databricks dependencies
```python
from databricks.vector_search.client import (
VectorSearchIndex,
VectorSearchClient,
)
```
Import LlamaIndex dependencies
```python
from llama_index.core import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
)
from llama_index.vector_stores.databricks import DatabricksVectorSearch
```
Load example data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
Read the data
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
print(f"Total documents: {len(documents)}")
print(f"First document, id: {documents[0].doc_id}")
print(f"First document, hash: {documents[0].hash}")
print(
"First document, text"
f" ({len(documents[0].text)} characters):\n{'='*20}\n{documents[0].text[:360]} ..."
)
```
Create a Databricks Vector Search endpoint which will serve the index
```python
# Create a vector search endpoint
client = VectorSearchClient()
client.create_endpoint(
name="llamaindex_dbx_vector_store_test_endpoint", endpoint_type="STANDARD"
)
```
Create the Databricks Vector Search index, and build it from the documents
```python
# Create a vector search index
# it must be placed inside a Unity Catalog-enabled schema
# We'll use self-managed embeddings (i.e. managed by LlamaIndex) rather than a Databricks-managed index
databricks_index = client.create_direct_access_index(
endpoint_name="llamaindex_dbx_vector_store_test_endpoint",
index_name="my_catalog.my_schema.my_test_table",
primary_key="my_primary_key_name",
embedding_dimension=1536, # match the embeddings model dimension you're going to use
    embedding_vector_column="my_embedding_vector_column_name",  # you can name this anything you want - it'll be picked up by the LlamaIndex class
schema={
"my_primary_key_name": "string",
"my_embedding_vector_column_name": "array<double>",
"text": "string", # one column must match the text_column in the DatabricksVectorSearch instance created below; this will hold the raw node text,
"doc_id": "string", # one column must contain the reference document ID (this will be populated by LlamaIndex automatically)
# add any other metadata you may have in your nodes (Databricks Vector Search supports metadata filtering)
# NOTE THAT THESE FIELDS MUST BE ADDED EXPLICITLY TO BE USED FOR METADATA FILTERING
},
)
databricks_vector_store = DatabricksVectorSearch(
index=databricks_index,
text_column="text",
columns=None, # YOU MUST ALSO RECORD YOUR METADATA FIELD NAMES HERE
) # text_column is required for self-managed embeddings
storage_context = StorageContext.from_defaults(
vector_store=databricks_vector_store
)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
Query the index
```python
query_engine = index.as_query_engine()
response = query_engine.query("Why did the author choose to work on AI?")
print(response.response)
```
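Since the schema comments above call out metadata filtering, here is a rough sketch of a filtered query. It assumes you added a hypothetical `category` field to both the index schema and your nodes' metadata (and listed it in `columns`); the field name and value are illustrative:
```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# hypothetical `category` field, added to the schema and node metadata
filters = MetadataFilters(
    filters=[ExactMatchFilter(key="category", value="essay")]
)
query_engine = index.as_query_engine(filters=filters)
response = query_engine.query("Why did the author choose to work on AI?")
print(response.response)
```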
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/postgres.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Postgres Vector Store
In this notebook we are going to show how to use [Postgresql](https://www.postgresql.org) and [pgvector](https://github.com/pgvector/pgvector) to perform vector searches in LlamaIndex
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-postgres
```
```python
!pip install llama-index
```
Running the following cell will install Postgres with PGVector in Colab.
```python
!sudo apt update
!echo | sudo apt install -y postgresql-common
!echo | sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
!echo | sudo apt install postgresql-15-pgvector
!sudo service postgresql start
!sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'password';"
!sudo -u postgres psql -c "CREATE DATABASE vector_db;"
```
```python
# import logging
# import sys
# Uncomment to see debug logs
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import SimpleDirectoryReader, StorageContext
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.postgres import PGVectorStore
import textwrap
import openai
```
### Setup OpenAI
The first step is to configure the OpenAI key. It will be used to create embeddings for the documents loaded into the index.
```python
import os
os.environ["OPENAI_API_KEY"] = "<your key>"
openai.api_key = os.environ["OPENAI_API_KEY"]
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
--2024-03-14 02:56:30-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75042 (73K) [text/plain]
Saving to: ‘data/paul_graham/paul_graham_essay.txt’
data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.001s
2024-03-14 02:56:30 (72.2 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]
### Loading documents
Load the documents stored in the `data/paul_graham/` using the SimpleDirectoryReader
```python
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
print("Document ID:", documents[0].doc_id)
```
Document ID: 1306591e-cc2d-430b-a74c-03ae7105ecab
### Create the Database
Using an existing postgres running at localhost, create the database we'll be using.
```python
import psycopg2
connection_string = "postgresql://postgres:password@localhost:5432"
db_name = "vector_db"
conn = psycopg2.connect(connection_string)
conn.autocommit = True
with conn.cursor() as c:
c.execute(f"DROP DATABASE IF EXISTS {db_name}")
c.execute(f"CREATE DATABASE {db_name}")
```
### Create the index
Here we create an index backed by Postgres using the documents loaded previously. `PGVectorStore` takes a few arguments: the connection details (database, host, port, user, password), the table name, and the embedding dimension.
```python
from sqlalchemy import make_url
url = make_url(connection_string)
vector_store = PGVectorStore.from_params(
database=db_name,
host=url.host,
password=url.password,
port=url.port,
user=url.username,
table_name="paul_graham_essay",
embed_dim=1536, # openai embedding dimension
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context, show_progress=True
)
query_engine = index.as_query_engine()
```
Parsing nodes: 0%| | 0/1 [00:00<?, ?it/s]
Generating embeddings: 0%| | 0/22 [00:00<?, ?it/s]
### Query the index
We can now ask questions using our index.
```python
response = query_engine.query("What did the author do?")
```
```python
print(textwrap.fill(str(response), 100))
```
The author worked on writing and programming before college, initially focusing on writing short
stories and later transitioning to programming on early computers like the IBM 1401 using Fortran.
The author continued programming on microcomputers like the TRS-80, creating simple games and a word
processor. In college, the author initially planned to study philosophy but switched to studying AI
due to a lack of interest in philosophy courses. The author was inspired to work on AI after
encountering works like Heinlein's novel "The Moon is a Harsh Mistress" and seeing Terry Winograd
using SHRDLU in a PBS documentary.
```python
response = query_engine.query("What happened in the mid 1980s?")
```
```python
print(textwrap.fill(str(response), 100))
```
AI was in the air in the mid 1980s, with two main influences that sparked interest in working on it:
a novel by Heinlein called The Moon is a Harsh Mistress, featuring an intelligent computer called
Mike, and a PBS documentary showing Terry Winograd using SHRDLU.
### Querying existing index
```python
vector_store = PGVectorStore.from_params(
database="vector_db",
host="localhost",
password="password",
port=5432,
user="postgres",
table_name="paul_graham_essay",
embed_dim=1536, # openai embedding dimension
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
query_engine = index.as_query_engine()
```
```python
response = query_engine.query("What did the author do?")
```
```python
print(textwrap.fill(str(response), 100))
```
The author worked on writing short stories and programming before college. Initially, the author
wrote short stories and later started programming on an IBM 1401 using an early version of Fortran.
With the introduction of microcomputers, the author's interest in programming grew, leading to
writing simple games, predictive programs, and a word processor. Despite initially planning to study
philosophy in college, the author switched to studying AI due to a lack of interest in philosophy
courses. The author was inspired to work on AI after encountering a novel featuring an intelligent
computer and a PBS documentary showcasing AI technology.
### Hybrid Search
To enable hybrid search, you need to:
1. pass in `hybrid_search=True` when constructing the `PGVectorStore` (and optionally configure `text_search_config` with the desired language)
2. pass in `vector_store_query_mode="hybrid"` when constructing the query engine (this config is passed to the retriever under the hood). You can also optionally set the `sparse_top_k` to configure how many results we should obtain from sparse text search (default is using the same value as `similarity_top_k`).
```python
from sqlalchemy import make_url
url = make_url(connection_string)
hybrid_vector_store = PGVectorStore.from_params(
database=db_name,
host=url.host,
password=url.password,
port=url.port,
user=url.username,
table_name="paul_graham_essay_hybrid_search",
embed_dim=1536, # openai embedding dimension
hybrid_search=True,
text_search_config="english",
)
storage_context = StorageContext.from_defaults(
vector_store=hybrid_vector_store
)
hybrid_index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
```python
hybrid_query_engine = hybrid_index.as_query_engine(
vector_store_query_mode="hybrid", sparse_top_k=2
)
hybrid_response = hybrid_query_engine.query(
"Who does Paul Graham think of with the word schtick"
)
```
/workspaces/llama_index/llama-index-integrations/vector_stores/llama-index-vector-stores-postgres/llama_index/vector_stores/postgres/base.py:571: SAWarning: UserDefinedType REGCONFIG() will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this warning at: https://sqlalche.me/e/20/cprf)
res = session.execute(stmt)
```python
print(hybrid_response)
```
Roy Lichtenstein
#### Improving hybrid search with QueryFusionRetriever
Since the scores for text search and vector search are calculated differently, the nodes that were found only by text search will have a much lower score.
You can often improve hybrid search performance by using `QueryFusionRetriever`, which normalizes the scores from each retriever before fusing them to rank the nodes.
```python
from llama_index.core.response_synthesizers import CompactAndRefine
from llama_index.core.retrievers import QueryFusionRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
vector_retriever = hybrid_index.as_retriever(
vector_store_query_mode="default",
similarity_top_k=5,
)
text_retriever = hybrid_index.as_retriever(
vector_store_query_mode="sparse",
similarity_top_k=5, # interchangeable with sparse_top_k in this context
)
retriever = QueryFusionRetriever(
[vector_retriever, text_retriever],
similarity_top_k=5,
num_queries=1, # set this to 1 to disable query generation
mode="relative_score",
use_async=False,
)
response_synthesizer = CompactAndRefine()
query_engine = RetrieverQueryEngine(
retriever=retriever,
response_synthesizer=response_synthesizer,
)
```
```python
response = query_engine.query(
"Who does Paul Graham think of with the word schtick, and why?"
)
print(response)
```
Paul Graham thinks of Roy Lichtenstein when he uses the word "schtick" because he recognizes paintings resembling a specific type of cartoon style as being created by Roy Lichtenstein.
### Metadata filters
PGVectorStore supports storing metadata in nodes, and filtering based on that metadata during the retrieval step.
#### Download git commits dataset
```python
!mkdir -p 'data/git_commits/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/csv/commit_history.csv' -O 'data/git_commits/commit_history.csv'
```
--2024-03-14 02:56:46-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/csv/commit_history.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1753902 (1.7M) [text/plain]
Saving to: ‘data/git_commits/commit_history.csv’
data/git_commits/co 100%[===================>] 1.67M --.-KB/s in 0.02s
2024-03-14 02:56:46 (106 MB/s) - ‘data/git_commits/commit_history.csv’ saved [1753902/1753902]
```python
import csv
with open("data/git_commits/commit_history.csv", "r") as f:
commits = list(csv.DictReader(f))
print(commits[0])
print(len(commits))
```
{'commit': '44e41c12ab25e36c202f58e068ced262eadc8d16', 'author': 'Lakshmi Narayanan Sreethar<[email protected]>', 'date': 'Tue Sep 5 21:03:21 2023 +0530', 'change summary': 'Fix segfault in set_integer_now_func', 'change details': 'When an invalid function oid is passed to set_integer_now_func, it finds out that the function oid is invalid but before throwing the error, it calls ReleaseSysCache on an invalid tuple causing a segfault. Fixed that by removing the invalid call to ReleaseSysCache. Fixes #6037 '}
4167
#### Add nodes with custom metadata
```python
# Create TextNode for each of the first 100 commits
from llama_index.core.schema import TextNode
from datetime import datetime
import re
nodes = []
dates = set()
authors = set()
for commit in commits[:100]:
author_email = commit["author"].split("<")[1][:-1]
commit_date = datetime.strptime(
commit["date"], "%a %b %d %H:%M:%S %Y %z"
).strftime("%Y-%m-%d")
commit_text = commit["change summary"]
if commit["change details"]:
commit_text += "\n\n" + commit["change details"]
fixes = re.findall(r"#(\d+)", commit_text, re.IGNORECASE)
nodes.append(
TextNode(
text=commit_text,
metadata={
"commit_date": commit_date,
"author": author_email,
"fixes": fixes,
},
)
)
dates.add(commit_date)
authors.add(author_email)
print(nodes[0])
print(min(dates), "to", max(dates))
print(authors)
```
Node ID: 69513543-dee5-4c65-b4b8-39295f11e669
Text: Fix segfault in set_integer_now_func When an invalid function
oid is passed to set_integer_now_func, it finds out that the function
oid is invalid but before throwing the error, it calls ReleaseSysCache
on an invalid tuple causing a segfault. Fixed that by removing the
invalid call to ReleaseSysCache. Fixes #6037
2023-03-22 to 2023-09-05
{'[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]'}
```python
vector_store = PGVectorStore.from_params(
database=db_name,
host=url.host,
password=url.password,
port=url.port,
user=url.username,
table_name="metadata_filter_demo3",
embed_dim=1536, # openai embedding dimension
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
index.insert_nodes(nodes)
```
```python
print(index.as_query_engine().query("How did Lakshmi fix the segfault?"))
```
Lakshmi fixed the segfault by removing the invalid call to ReleaseSysCache that was causing the issue.
#### Apply metadata filters
Now we can filter by commit author or by date when retrieving nodes.
```python
from llama_index.core.vector_stores.types import (
MetadataFilter,
MetadataFilters,
)
filters = MetadataFilters(
filters=[
MetadataFilter(key="author", value="[email protected]"),
MetadataFilter(key="author", value="[email protected]"),
],
condition="or",
)
retriever = index.as_retriever(
similarity_top_k=10,
filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
print(node.node.metadata)
```
{'commit_date': '2023-08-07', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-27', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-07-13', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-07', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-30', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-23', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-10', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-07-25', 'author': '[email protected]', 'fixes': ['5892']}
{'commit_date': '2023-08-21', 'author': '[email protected]', 'fixes': []}
```python
filters = MetadataFilters(
filters=[
MetadataFilter(key="commit_date", value="2023-08-15", operator=">="),
MetadataFilter(key="commit_date", value="2023-08-25", operator="<="),
],
condition="and",
)
retriever = index.as_retriever(
similarity_top_k=10,
filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
print(node.node.metadata)
```
{'commit_date': '2023-08-23', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-17', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-24', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-23', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-21', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-20', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-21', 'author': '[email protected]', 'fixes': []}
#### Apply nested filters
In the above examples, we combined multiple filters using AND or OR. We can also combine multiple sets of filters.
e.g. in SQL:
```sql
WHERE (commit_date >= '2023-08-01' AND commit_date <= '2023-08-15') AND (author = '[email protected]' OR author = '[email protected]')
```
```python
filters = MetadataFilters(
filters=[
MetadataFilters(
filters=[
MetadataFilter(
key="commit_date", value="2023-08-01", operator=">="
),
MetadataFilter(
key="commit_date", value="2023-08-15", operator="<="
),
],
condition="and",
),
MetadataFilters(
filters=[
MetadataFilter(key="author", value="[email protected]"),
MetadataFilter(key="author", value="[email protected]"),
],
condition="or",
),
],
condition="and",
)
retriever = index.as_retriever(
similarity_top_k=10,
filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
print(node.node.metadata)
```
{'commit_date': '2023-08-07', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-07', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-10', 'author': '[email protected]', 'fixes': []}
The above can be simplified by using the IN operator. `PGVectorStore` supports `in`, `nin`, and `contains` for comparing an element with a list.
```python
filters = MetadataFilters(
filters=[
MetadataFilter(key="commit_date", value="2023-08-01", operator=">="),
MetadataFilter(key="commit_date", value="2023-08-15", operator="<="),
MetadataFilter(
key="author",
value=["[email protected]", "[email protected]"],
operator="in",
),
],
condition="and",
)
retriever = index.as_retriever(
similarity_top_k=10,
filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
print(node.node.metadata)
```
{'commit_date': '2023-08-07', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-07', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-10', 'author': '[email protected]', 'fixes': []}
```python
# Same thing, with NOT IN
filters = MetadataFilters(
filters=[
MetadataFilter(key="commit_date", value="2023-08-01", operator=">="),
MetadataFilter(key="commit_date", value="2023-08-15", operator="<="),
MetadataFilter(
key="author",
value=["[email protected]", "[email protected]"],
operator="nin",
),
],
condition="and",
)
retriever = index.as_retriever(
similarity_top_k=10,
filters=filters,
)
retrieved_nodes = retriever.retrieve("What is this software project about?")
for node in retrieved_nodes:
print(node.node.metadata)
```
{'commit_date': '2023-08-09', 'author': '[email protected]', 'fixes': ['5805']}
{'commit_date': '2023-08-15', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-15', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-11', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-09', 'author': '[email protected]', 'fixes': ['5923', '5680', '5774', '5786', '5906', '5912']}
{'commit_date': '2023-08-03', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-03', 'author': '[email protected]', 'fixes': ['5908']}
{'commit_date': '2023-08-01', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-10', 'author': '[email protected]', 'fixes': []}
{'commit_date': '2023-08-10', 'author': '[email protected]', 'fixes': []}
```python
# CONTAINS
filters = MetadataFilters(
filters=[
MetadataFilter(key="fixes", value="5680", operator="contains"),
]
)
retriever = index.as_retriever(
similarity_top_k=10,
filters=filters,
)
retrieved_nodes = retriever.retrieve("How did these commits fix the issue?")
for node in retrieved_nodes:
print(node.node.metadata)
```
{'commit_date': '2023-08-09', 'author': '[email protected]', 'fixes': ['5923', '5680', '5774', '5786', '5906', '5912']}
### PgVector Query Options
#### IVFFlat Probes
Specify the number of [IVFFlat probes](https://github.com/pgvector/pgvector?tab=readme-ov-file#query-options) (1 by default).
When retrieving from the index, you can specify an appropriate number of IVFFlat probes (higher is better for recall, lower is better for speed).
```python
retriever = index.as_retriever(
vector_store_query_mode="hybrid",
similarity_top_k=5,
vector_store_kwargs={"ivfflat_probes": 10},
)
```
#### HNSW EF Search
Specify the size of the dynamic [candidate list](https://github.com/pgvector/pgvector?tab=readme-ov-file#query-options-1) for search (40 by default).
```python
retriever = index.as_retriever(
vector_store_query_mode="hybrid",
similarity_top_k=5,
vector_store_kwargs={"hnsw_ef_search": 300},
)
```
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/DashvectorIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# DashVector Vector Store
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-dashvector
```
```python
!pip install llama-index
```
```python
import logging
import sys
import os
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
#### Creating a DashVector Collection
```python
import dashvector
```
```python
api_key = os.environ["DASHVECTOR_API_KEY"]
client = dashvector.Client(api_key=api_key)
```
```python
# dimensions are for text-embedding-ada-002
client.create("llama-demo", dimension=1536)
```
{"code": 0, "message": "", "requests_id": "82b969d2-2568-4e18-b0dc-aa159b503c84"}
```python
dashvector_collection = client.get("llama-demo")
```
#### Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
#### Load documents, build the DashVectorStore and VectorStoreIndex
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.dashvector import DashVectorStore
from IPython.display import Markdown, display
```
INFO:numexpr.utils:Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
NumExpr defaulting to 8 threads.
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
```
```python
# initialize without metadata filter
from llama_index.core import StorageContext
vector_store = DashVectorStore(dashvector_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
#### Query Index
```python
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
```
```python
display(Markdown(f"<b>{response}</b>"))
```
<b>The author worked on writing and programming outside of school. They wrote short stories and tried writing programs on the IBM 1401 computer. They also built a microcomputer and started programming on it, writing simple games and a word processor.</b>
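To see which chunks grounded the answer, you can inspect the response's source nodes (a minimal sketch using the standard response object):
```python
# inspect the retrieved chunks and their similarity scores
for source_node in response.source_nodes:
    print(source_node.score, source_node.node.get_content()[:100])
```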
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/MyScaleIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# MyScale Vector Store
In this notebook we are going to show a quick demo of using the MyScaleVectorStore.
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-myscale
```
```python
!pip install llama-index
```
#### Creating a MyScale Client
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
```python
from os import environ
import clickhouse_connect
environ["OPENAI_API_KEY"] = "sk-*"
# initialize client
client = clickhouse_connect.get_client(
host="YOUR_CLUSTER_HOST",
port=8443,
username="YOUR_USERNAME",
password="YOUR_CLUSTER_PASSWORD",
)
```
#### Load documents, build and store the VectorStoreIndex with MyScaleVectorStore
Here we will use a set of Paul Graham essays to provide the text to turn into embeddings, store in a ``MyScaleVectorStore`` and query to find context for our LLM QnA loop.
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.myscale import MyScaleVectorStore
from IPython.display import Markdown, display
```
```python
# load documents
documents = SimpleDirectoryReader("../data/paul_graham").load_data()
print("Document ID:", documents[0].doc_id)
print("Number of Documents: ", len(documents))
```
Document ID: a5f2737c-ed18-4e5d-ab9a-75955edb816d
Number of Documents: 1
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
You can process your files individually using [SimpleDirectoryReader](/examples/data_connectors/simple_directory_reader.ipynb):
```python
loader = SimpleDirectoryReader("./data/paul_graham/")
documents = loader.load_data()
for file in loader.input_files:
print(file)
# Here is where you would do any preprocessing
```
../data/paul_graham/paul_graham_essay.txt
```python
# initialize with metadata filter and store indexes
from llama_index.core import StorageContext
for document in documents:
document.metadata = {"user_id": "123", "favorite_color": "blue"}
vector_store = MyScaleVectorStore(myscale_client=client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
#### Query Index
The MyScale vector store supports filtered search and hybrid search.
You can learn more about [query_engine](/module_guides/deploying/query_engine/index.md) and [retriever](/module_guides/querying/retriever/index.md).
```python
import textwrap
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(
filters=MetadataFilters(
filters=[
ExactMatchFilter(key="user_id", value="123"),
]
),
similarity_top_k=2,
vector_store_query_mode="hybrid",
)
response = query_engine.query("What did the author learn?")
print(textwrap.fill(str(response), 100))
```
#### Clear All Indexes
```python
for document in documents:
index.delete_ref_doc(document.doc_id)
```
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/RedisIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Redis Vector Store
In this notebook we are going to show a quick demo of using the RedisVectorStore.
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install -U llama-index llama-index-vector-stores-redis llama-index-embeddings-cohere llama-index-embeddings-openai
```
```python
import os
import getpass
import sys
import logging
import textwrap
import warnings
warnings.filterwarnings("ignore")
# Uncomment to see debug logs
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.redis import RedisVectorStore
```
### Start Redis
The easiest way to start Redis is using the [Redis Stack](https://hub.docker.com/r/redis/redis-stack) docker image or
quickly signing up for a [FREE Redis Cloud](https://redis.com/try-free) instance.
To follow every step of this tutorial, launch the image as follows:
```bash
docker run --name redis-vecdb -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
```
This will also launch the RedisInsight UI on port 8001 which you can view at http://localhost:8001.
### Setup OpenAI
Let's first begin by adding the OpenAI API key. This will allow us to access OpenAI for embeddings and to use ChatGPT.
```python
oai_api_key = getpass.getpass("OpenAI API Key:")
os.environ["OPENAI_API_KEY"] = oai_api_key
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
--2024-04-10 19:35:33-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8003::154, 2606:50c0:8000::154, 2606:50c0:8002::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8003::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75042 (73K) [text/plain]
Saving to: ‘data/paul_graham/paul_graham_essay.txt’
data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.03s
2024-04-10 19:35:33 (2.15 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]
### Read in a dataset
Here we will use a set of Paul Graham essays to provide the text to turn into embeddings, store in a ``RedisVectorStore`` and query to find context for our LLM QnA loop.
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
print(
"Document ID:",
documents[0].id_,
"Document Filename:",
documents[0].metadata["file_name"],
)
```
Document ID: 7056f7ba-3513-4ef4-9792-2bd28040aaed Document Filename: paul_graham_essay.txt
### Initialize the default Redis Vector Store
Now that we have our documents prepared, we can initialize the Redis Vector Store with **default** settings. This will allow us to store our vectors in Redis and create an index for real-time search.
```python
from llama_index.core import StorageContext
from redis import Redis
# create a Redis client connection
redis_client = Redis.from_url("redis://localhost:6379")
# create the vector store wrapper
vector_store = RedisVectorStore(redis_client=redis_client, overwrite=True)
# load storage context
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# build and load index from documents and storage context
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
# index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
```
19:39:17 llama_index.vector_stores.redis.base INFO Using default RedisVectorStore schema.
19:39:19 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
19:39:19 llama_index.vector_stores.redis.base INFO Added 22 documents to index llama_index
### Query the default vector store
Now that we have our data stored in the index, we can ask questions against the index.
The index will use the data as the knowledge base for an LLM. The default setting for `as_query_engine()` utilizes OpenAI embeddings and GPT as the language model. Therefore, an OpenAI key is required unless you opt for a customized or local language model.
Below we will test searches against our index and then full RAG with an LLM.
```python
query_engine = index.as_query_engine()
retriever = index.as_retriever()
```
```python
result_nodes = retriever.retrieve("What did the author learn?")
for node in result_nodes:
print(node)
```
19:39:22 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
19:39:22 llama_index.vector_stores.redis.base INFO Querying index llama_index with filters *
19:39:22 llama_index.vector_stores.redis.base INFO Found 2 results for query with id ['llama_index/vector_adb6b7ce-49bb-4961-8506-37082c02a389', 'llama_index/vector_e39be1fe-32d0-456e-b211-4efabd191108']
Node ID: adb6b7ce-49bb-4961-8506-37082c02a389
Text: What I Worked On February 2021 Before college the two main
things I worked on, outside of school, were writing and programming. I
didn't write essays. I wrote what beginning writers were supposed to
write then, and probably still are: short stories. My stories were
awful. They had hardly any plot, just characters with strong feelings,
which I ...
Score: 0.820
Node ID: e39be1fe-32d0-456e-b211-4efabd191108
Text: Except for a few officially anointed thinkers who went to the
right parties in New York, the only people allowed to publish essays
were specialists writing about their specialties. There were so many
essays that had never been written, because there had been no way to
publish them. Now they could be, and I was going to write them. [12]
I've wor...
Score: 0.819
```python
response = query_engine.query("What did the author learn?")
print(textwrap.fill(str(response), 100))
```
19:39:25 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
19:39:25 llama_index.vector_stores.redis.base INFO Querying index llama_index with filters *
19:39:25 llama_index.vector_stores.redis.base INFO Found 2 results for query with id ['llama_index/vector_adb6b7ce-49bb-4961-8506-37082c02a389', 'llama_index/vector_e39be1fe-32d0-456e-b211-4efabd191108']
19:39:27 httpx INFO HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
The author learned that working on things that weren't prestigious often led to valuable discoveries
and indicated the right kind of motives. Despite the lack of initial prestige, pursuing such work
could be a sign of genuine potential and appropriate motivations, steering clear of the common
pitfall of being driven solely by the desire to impress others.
```python
result_nodes = retriever.retrieve("What was a hard moment for the author?")
for node in result_nodes:
print(node)
```
19:39:27 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
19:39:27 llama_index.vector_stores.redis.base INFO Querying index llama_index with filters *
19:39:27 llama_index.vector_stores.redis.base INFO Found 2 results for query with id ['llama_index/vector_adb6b7ce-49bb-4961-8506-37082c02a389', 'llama_index/vector_e39be1fe-32d0-456e-b211-4efabd191108']
Node ID: adb6b7ce-49bb-4961-8506-37082c02a389
Text: What I Worked On February 2021 Before college the two main
things I worked on, outside of school, were writing and programming. I
didn't write essays. I wrote what beginning writers were supposed to
write then, and probably still are: short stories. My stories were
awful. They had hardly any plot, just characters with strong feelings,
which I ...
Score: 0.802
Node ID: e39be1fe-32d0-456e-b211-4efabd191108
Text: Except for a few officially anointed thinkers who went to the
right parties in New York, the only people allowed to publish essays
were specialists writing about their specialties. There were so many
essays that had never been written, because there had been no way to
publish them. Now they could be, and I was going to write them. [12]
I've wor...
Score: 0.799
```python
response = query_engine.query("What was a hard moment for the author?")
print(textwrap.fill(str(response), 100))
```
19:39:29 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
19:39:29 llama_index.vector_stores.redis.base INFO Querying index llama_index with filters *
19:39:29 llama_index.vector_stores.redis.base INFO Found 2 results for query with id ['llama_index/vector_adb6b7ce-49bb-4961-8506-37082c02a389', 'llama_index/vector_e39be1fe-32d0-456e-b211-4efabd191108']
19:39:31 httpx INFO HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
A hard moment for the author was when one of his programs on the IBM 1401 mainframe didn't
terminate, leading to a technical error and an uncomfortable situation with the data center manager.
```python
index.vector_store.delete_index()
```
19:39:34 llama_index.vector_stores.redis.base INFO Deleting index llama_index
### Use a custom index schema
In most use cases, you need the ability to customize the underlying index configuration
and specification. For example, this is handy for defining specific metadata filters you wish to enable.
With Redis, this is as simple as defining an index schema object
(from file or dict) and passing it through to the vector store client wrapper.
For this example, we will:
1. switch the embedding model to [Cohere](https://cohere.com)
2. add an additional metadata field for the document `updated_at` timestamp
3. index the existing `file_name` metadata field
```python
from llama_index.core.settings import Settings
from llama_index.embeddings.cohere import CohereEmbedding
# set up Cohere Key
co_api_key = getpass.getpass("Cohere API Key:")
os.environ["CO_API_KEY"] = co_api_key
# set llamaindex to use Cohere embeddings
Settings.embed_model = CohereEmbedding()
```
```python
from redisvl.schema import IndexSchema
custom_schema = IndexSchema.from_dict(
{
# customize basic index specs
"index": {
"name": "paul_graham",
"prefix": "essay",
"key_separator": ":",
},
# customize fields that are indexed
"fields": [
# required fields for llamaindex
{"type": "tag", "name": "id"},
{"type": "tag", "name": "doc_id"},
{"type": "text", "name": "text"},
# custom metadata fields
{"type": "numeric", "name": "updated_at"},
{"type": "tag", "name": "file_name"},
# custom vector field definition for cohere embeddings
{
"type": "vector",
"name": "vector",
"attrs": {
"dims": 1024,
"algorithm": "hnsw",
"distance_metric": "cosine",
},
},
],
}
)
```
```python
custom_schema.index
```
IndexInfo(name='paul_graham', prefix='essay', key_separator=':', storage_type=<StorageType.HASH: 'hash'>)
```python
custom_schema.fields
```
{'id': TagField(name='id', type='tag', path=None, attrs=TagFieldAttributes(sortable=False, separator=',', case_sensitive=False, withsuffixtrie=False)),
'doc_id': TagField(name='doc_id', type='tag', path=None, attrs=TagFieldAttributes(sortable=False, separator=',', case_sensitive=False, withsuffixtrie=False)),
'text': TextField(name='text', type='text', path=None, attrs=TextFieldAttributes(sortable=False, weight=1, no_stem=False, withsuffixtrie=False, phonetic_matcher=None)),
'updated_at': NumericField(name='updated_at', type='numeric', path=None, attrs=NumericFieldAttributes(sortable=False)),
'file_name': TagField(name='file_name', type='tag', path=None, attrs=TagFieldAttributes(sortable=False, separator=',', case_sensitive=False, withsuffixtrie=False)),
'vector': HNSWVectorField(name='vector', type='vector', path=None, attrs=HNSWVectorFieldAttributes(dims=1024, algorithm=<VectorIndexAlgorithm.HNSW: 'HNSW'>, datatype=<VectorDataType.FLOAT32: 'FLOAT32'>, distance_metric=<VectorDistanceMetric.COSINE: 'COSINE'>, initial_cap=None, m=16, ef_construction=200, ef_runtime=10, epsilon=0.01))}
Learn more about [schema and index design](https://redisvl.com) with Redis.
```python
from datetime import datetime


def date_to_timestamp(date_string: str) -> int:
    date_format: str = "%Y-%m-%d"
    return int(datetime.strptime(date_string, date_format).timestamp())


# iterate through documents and add new field
for document in documents:
    document.metadata["updated_at"] = date_to_timestamp(
        document.metadata["last_modified_date"]
    )
```
```python
vector_store = RedisVectorStore(
schema=custom_schema, # provide customized schema
redis_client=redis_client,
overwrite=True,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# build and load index from documents and storage context
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
19:40:05 httpx INFO HTTP Request: POST https://api.cohere.ai/v1/embed "HTTP/1.1 200 OK"
19:40:06 httpx INFO HTTP Request: POST https://api.cohere.ai/v1/embed "HTTP/1.1 200 OK"
19:40:06 httpx INFO HTTP Request: POST https://api.cohere.ai/v1/embed "HTTP/1.1 200 OK"
19:40:06 llama_index.vector_stores.redis.base INFO Added 22 documents to index paul_graham
### Query the vector store and filter on metadata
Now that we have additional metadata indexed in Redis, let's try some queries with filters.
```python
from llama_index.core.vector_stores import (
MetadataFilters,
MetadataFilter,
ExactMatchFilter,
)
retriever = index.as_retriever(
similarity_top_k=3,
filters=MetadataFilters(
filters=[
ExactMatchFilter(key="file_name", value="paul_graham_essay.txt"),
MetadataFilter(
key="updated_at",
value=date_to_timestamp("2023-01-01"),
operator=">=",
),
MetadataFilter(
key="text",
value="learn",
operator="text_match",
),
],
condition="and",
),
)
```
```python
result_nodes = retriever.retrieve("What did the author learn?")
for node in result_nodes:
print(node)
```
19:40:22 httpx INFO HTTP Request: POST https://api.cohere.ai/v1/embed "HTTP/1.1 200 OK"
19:40:22 llama_index.vector_stores.redis.base INFO Querying index paul_graham with filters ((@file_name:{paul_graham_essay\.txt} @updated_at:[1672549200 +inf]) @text:(learn))
19:40:22 llama_index.vector_stores.redis.base INFO Found 3 results for query with id ['essay:0df3b734-ecdb-438e-8c90-f21a8c80f552', 'essay:01108c0d-140b-4dcc-b581-c38b7df9251e', 'essay:ced36463-ac36-46b0-b2d7-935c1b38b781']
Node ID: 0df3b734-ecdb-438e-8c90-f21a8c80f552
Text: All that seemed left for philosophy were edge cases that people
in other fields felt could safely be ignored. I couldn't have put
this into words when I was 18. All I knew at the time was that I kept
taking philosophy courses and they kept being boring. So I decided to
switch to AI. AI was in the air in the mid 1980s, but there were two
things...
Score: 0.410
Node ID: 01108c0d-140b-4dcc-b581-c38b7df9251e
Text: It was not, in fact, simply a matter of teaching SHRDLU more
words. That whole way of doing AI, with explicit data structures
representing concepts, was not going to work. Its brokenness did, as
so often happens, generate a lot of opportunities to write papers
about various band-aids that could be applied to it, but it was never
going to get us ...
Score: 0.390
Node ID: ced36463-ac36-46b0-b2d7-935c1b38b781
Text: Grad students could take classes in any department, and my
advisor, Tom Cheatham, was very easy going. If he even knew about the
strange classes I was taking, he never said anything. So now I was in
a PhD program in computer science, yet planning to be an artist, yet
also genuinely in love with Lisp hacking and working away at On Lisp.
In other...
Score: 0.389
### Restoring from an existing index in Redis
Restoring from an existing index requires a Redis connection client (or URL), `overwrite=False`, and the same schema object used before. (For convenience, the schema can be written out to a YAML file using `.to_yaml()`.)
```python
custom_schema.to_yaml("paul_graham.yaml")
```
```python
vector_store = RedisVectorStore(
schema=IndexSchema.from_yaml("paul_graham.yaml"),
redis_client=redis_client,
)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
```
19:40:28 redisvl.index.index INFO Index already exists, not overwriting.
**In the near future**, we will implement a convenience method to load an index using just its name:
```python
RedisVectorStore.from_existing_index(index_name="paul_graham", redis_client=redis_client)
```
### Deleting documents or index completely
Sometimes it may be useful to delete documents or the entire index. This can be done using the `delete` and `delete_index` methods.
```python
document_id = documents[0].doc_id
document_id
```
'7056f7ba-3513-4ef4-9792-2bd28040aaed'
```python
print("Number of documents before deleting", redis_client.dbsize())
vector_store.delete(document_id)
print("Number of documents after deleting", redis_client.dbsize())
```
Number of documents before deleting 22
19:40:32 llama_index.vector_stores.redis.base INFO Deleted 22 documents from index paul_graham
Number of documents after deleting 0
However, the Redis index still exists (with no associated documents) so that documents can continue to be upserted.
```python
vector_store.index_exists()
```
True
```python
# now lets delete the index entirely
# this will delete all the documents and the index
vector_store.delete_index()
```
19:40:37 llama_index.vector_stores.redis.base INFO Deleting index paul_graham
```python
print("Number of documents after deleting", redis_client.dbsize())
```
Number of documents after deleting 0
### Troubleshooting
If you get an empty query result, there are a couple of issues to check:
#### Schema
Unlike other vector stores, Redis expects users to explicitly define the schema for the index. This is for a few reasons:
1. Redis is used for many use cases, including real-time vector search, but also for standard document storage/retrieval, caching, messaging, pub/sub, session management, and more. Not all attributes on records need to be indexed for search. This is partially for efficiency, and partially to minimize user foot guns.
2. All index schemas, when using Redis & LlamaIndex, must include the following fields at a minimum: `id`, `doc_id`, `text`, and `vector`.
Instantiate your `RedisVectorStore` with either the default schema (which assumes OpenAI embeddings) or a custom schema (see above).
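For reference, here is a minimal sketch of a valid schema containing only the required fields; it assumes the default 1536-dimensional OpenAI embeddings and reuses the default index name and key prefix seen earlier in this notebook:
```python
from redisvl.schema import IndexSchema

# minimal sketch: only the four fields LlamaIndex requires
# (dims=1536 assumes the default OpenAI text-embedding-ada-002 model)
minimal_schema = IndexSchema.from_dict(
    {
        "index": {"name": "llama_index", "prefix": "llama_index/vector"},
        "fields": [
            {"type": "tag", "name": "id"},
            {"type": "tag", "name": "doc_id"},
            {"type": "text", "name": "text"},
            {
                "type": "vector",
                "name": "vector",
                "attrs": {
                    "dims": 1536,
                    "algorithm": "flat",
                    "distance_metric": "cosine",
                },
            },
        ],
    }
)
```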
#### Prefix issues
Redis expects all records to have a key prefix that segments the keyspace into "partitions"
for potentially different applications, use cases, and clients.
Make sure that the chosen `prefix`, as part of the index schema, is consistent across your code (tied to a specific index).
To see what prefix your index was created with, you can run `FT.INFO <name of your index>` in the Redis CLI and look under `index_definition` => `prefixes`.
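If you prefer to stay in Python, a rough equivalent using the `redis-py` client from earlier is sketched below (the exact keys of the returned dict may vary across Redis and `redis-py` versions):
```python
# inspect the index definition, which includes its key prefixes
info = redis_client.ft("paul_graham").info()
print(info["index_definition"])  # look for the 'prefixes' entry
```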
#### Data vs Index
Redis treats the records in the dataset and the index as different entities. This gives you more flexibility when performing updates, upserts, and index schema migrations.
If you have an existing index and want to make sure it's dropped, you can run `FT.DROPINDEX <name of your index>` in the Redis CLI. Note that this will *not* drop your actual data unless you pass `DD`.
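The Python equivalent with `redis-py` is sketched below; setting `delete_documents=True` corresponds to passing `DD`:
```python
# drop the index only; pass delete_documents=True to also remove the records
redis_client.ft("paul_graham").dropindex(delete_documents=False)
```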
#### Empty queries when using metadata
If you add metadata to the index *after* it has already been created and then try to query over that metadata, your queries will come back empty.
Redis indexes fields only at index creation time (similar to how it indexes the key prefixes, above).
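The fix is to rebuild the index with a schema that includes the new fields, for example by reusing the pattern shown earlier in this notebook:
```python
# rebuild the index so the newly added metadata fields are actually indexed
vector_store = RedisVectorStore(
    schema=custom_schema,  # schema updated to include the new metadata fields
    redis_client=redis_client,
    overwrite=True,  # drop and recreate the index definition
)
```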
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/AsyncIndexCreationDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Simple Vector Store - Async Index Creation
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-readers-wikipedia
```
```python
!pip install llama-index
```
```python
import time
# Helps asyncio run within Jupyter
import nest_asyncio
nest_asyncio.apply()
# My OpenAI Key
import os
os.environ["OPENAI_API_KEY"] = "[YOUR_API_KEY]"
```
```python
from llama_index.core import VectorStoreIndex
from llama_index.readers.wikipedia import WikipediaReader
loader = WikipediaReader()
documents = loader.load_data(
pages=[
"Berlin",
"Santiago",
"Moscow",
"Tokyo",
"Jakarta",
"Cairo",
"Bogota",
"Shanghai",
"Damascus",
]
)
```
```python
len(documents)
```
9
9 Wikipedia articles downloaded as documents
```python
start_time = time.perf_counter()
index = VectorStoreIndex.from_documents(documents)
duration = time.perf_counter() - start_time
print(duration)
```
INFO:root:> [build_index_from_documents] Total LLM token usage: 0 tokens
INFO:root:> [build_index_from_documents] Total embedding token usage: 142295 tokens
7.691995083000052
Standard index creation took 7.69 seconds
```python
start_time = time.perf_counter()
index = VectorStoreIndex.from_documents(documents, use_async=True)
duration = time.perf_counter() - start_time
print(duration)
```
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=245 request_id=314b145a07f65fd34e707f633cc1a444 response_code=200
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=432 request_id=bb9e796d0b8f9c2365b68de8a56009ff response_code=200
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=433 request_id=7a94707fe2f8916e9cdd8276a5748207 response_code=200
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=499 request_id=cda679215293c3a13ed57c2eae3dc582 response_code=200
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=527 request_id=5e1c3e74aa3f9f950e4035f81a0f0a15 response_code=200
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=585 request_id=81983fe76eab95f73f82df881ff7b2d9 response_code=200
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=574 request_id=702a182b54a29a33719205f722378c8e response_code=200
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=575 request_id=d1df11775c59a3ba403dda253081f8eb response_code=200
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/engines/text-embedding-ada-002/embeddings processing_ms=575 request_id=47929f13469569527505b51958cd8e71 response_code=200
INFO:root:> [build_index_from_documents] Total LLM token usage: 0 tokens
INFO:root:> [build_index_from_documents] Total embedding token usage: 142295 tokens
2.3730635830000892
Async index creation took 2.37 seconds, roughly 3x faster, because the embedding requests are issued concurrently rather than sequentially
```python
query_engine = index.as_query_engine()
query_engine.query("What is the etymology of Jakarta?")
```
INFO:root:> [query] Total LLM token usage: 4075 tokens
INFO:root:> [query] Total embedding token usage: 8 tokens
Response(response="\n\nThe name 'Jakarta' is derived from the word Jayakarta (Devanagari: जयकर्त) which is ultimately derived from the Sanskrit जय jaya (victorious), and कृत krta (accomplished, acquired), thus Jayakarta translates as 'victorious deed', 'complete act' or 'complete victory'. It was named for the Muslim troops of Fatahillah which successfully defeated and drove the Portuguese away from the city in 1527. Before it was called Jayakarta, the city was known as 'Sunda Kelapa'. Tomé Pires, a Portuguese apothecary wrote the name of the city on his magnum opus as Jacatra or Jacarta during his journey to East Indies. The city is located in a low-lying area ranging from −2 to 91 m (−7 to 299 ft) with an average elevation of 8 m (26 ft) above sea level with historically extensive swampy areas. Some parts of the city have been constructed on reclaimed tidal flats that occur around the area. Thirteen rivers flow through Jakarta, including the Ciliwung River, Kalibaru, Pesanggra", source_nodes=[SourceNode(source_text="Jakarta (; Indonesian pronunciation: [dʒaˈkarta] (listen)), officially the Special Capital Region of Jakarta (Indonesian: Daerah Khusus Ibukota Jakarta), is the capital and largest city of Indonesia. Lying on the northwest coast of Java, the world's most populous island, Jakarta is the largest city in Southeast Asia and serves as the diplomatic capital of ASEAN.\nThe city is the economic, cultural, and political centre of Indonesia. It possesses a province-level status and has a population of 10,562,088 as of mid-2021. Although Jakarta extends over only 664.01 km2 (256.38 sq mi) and thus has the smallest area of any Indonesian province, its metropolitan area covers 9,957.08 km2 (3,844.45 sq mi), which includes the satellite cities Bogor, Depok, Tangerang, South Tangerang, and Bekasi, and has an estimated population of 35 million as of 2021, making it the largest urban area in Indonesia and the second-largest in the world (after Tokyo). Jakarta ranks first among the Indonesian provinces in the human development index. Jakarta's business and employment opportunities, along with its ability to offer a potentially higher standard of living compared to other parts of the country, have attracted migrants from across the Indonesian archipelago, making it a melting pot of numerous cultures.\nJakarta is one of the oldest continuously inhabited cities in Southeast Asia. Established in the fourth century as Sunda Kelapa, the city became an important trading port for the Sunda Kingdom. At one time, it was the de facto capital of the Dutch East Indies, when it was known as Batavia. Jakarta was officially a city within West Java until 1960 when its official status was changed to a province with special capital region distinction. As a province, its government consists of five administrative cities and one administrative regency. Jakarta is an alpha world city and is the seat of the ASEAN secretariat. Financial institutions such as the Bank of Indonesia, Indonesia Stock Exchange, and corporate headquarters of numerous Indonesian companies and multinational corporations are located in the city. In 2021, the city's GRP PPP was estimated at US$602.946 billion.\nJakarta's main challenges include rapid urban growth, ecological breakdown, gridlocked traffic, congestion, and flooding. Jakarta is sinking up to 17 cm (6.7 inches) annually, which coupled with the rising of sea levels, has made the city more prone to flooding. Hence, it is one of the fastest-sinking capitals in the world. 
In response to these challenges, in August 2019, President Joko Widodo announced that the capital of Indonesia would be moved from Jakarta to the planned city of Nusantara, in the province of East Kalimantan on the island of Borneo.\n\n\n== Name ==\n\nJakarta has been home to multiple settlements. Below is the list of names used during its existence:\n\nSunda Kelapa (397–1527)\nJayakarta (1527–1619)\nBatavia (1619–1942)\nDjakarta (1942–1972)\nJakarta (1972–present)The name 'Jakarta' is derived from the word Jayakarta (Devanagari: जयकर्त) which is ultimately derived from the Sanskrit जय jaya (victorious), and कृत krta (accomplished, acquired), thus Jayakarta translates as 'victorious deed', 'complete act' or 'complete victory'. It was named for the Muslim troops of Fatahillah which successfully defeated and drove the Portuguese away from the city in 1527. Before it was called Jayakarta, the city was known as 'Sunda Kelapa'. Tomé Pires, a Portuguese apothecary wrote the name of the city on his magnum opus as Jacatra or Jacarta during his journey to East Indies. \nIn the 17th century, the city was known as Koningin van het Oosten (Queen of the Orient), a name that was given for the urban beauty of downtown Batavia's canals, mansions and ordered city layout. After expanding to the south in the 19th century, this nickname came to be more associated with the suburbs (e.g. Menteng and the area around Merdeka Square), with their wide lanes, green spaces and villas. During the Japanese occupation, the city was renamed as Jakaruta Tokubetsu-shi (ジャカルタ特別市, Jakarta Special City).\n\n\n== History ==\n\n\n=== Precolonial era ===\n\nThe north coast area of western Java including Jakarta was the location of prehistoric Buni culture that flourished from 400 BC to 100 AD. The area in and around modern Jakarta was part of the 4th-century Sundanese kingdom of Tarumanagara, one of the oldest Hindu kingdoms in Indonesia. The area of North Jakarta around Tugu became a populated settlement in the early 5th century. The Tugu inscription (probably written around 417 AD) discovered in Batutumbuh hamlet, Tugu village, Koja, North Jakarta, mentions that King Purnawarman of Tarumanagara undertook hydraulic projects; the irrigation and water drainage project of the Chandrabhaga river and the Gomati river near his capital. Following the decline of Tarumanagara, its territories, including the Jakarta area, became part of the Hindu Kingdom of Sunda. From the 7th to the early 13th century, the port of Sunda was under the Srivijaya maritime empire. According to the Chinese source, Chu-fan-chi, written circa 1225, Chou Ju-kua reported in the early 13th century that Srivijaya still ruled Sumatra, the Malay peninsula and western Java (Sunda). The source says the port of Sunda is strategic and thriving, mentioning pepper from Sunda as among the best in quality. The people worked in agriculture, and their houses were built on wooden piles. The harbour area became known as Sunda Kelapa, (Sundanese: ᮞᮥᮔ᮪ᮓ ᮊᮨᮜᮕ) and by the 14th century, it was an important trading port for the Sunda Kingdom.\nThe first European fleet, four Portuguese ships from Malacca, arrived in 1513 while looking for a route for spices. The Sunda Kingdom made an alliance treaty with the Portuguese by allowing them to build a port in 1522 to defend against the rising power of Demak Sultanate from central Java. In 1527, Fatahillah, a Javanese general from Demak attacked and conquered Sunda Kelapa, driving out the Portuguese. 
Sunda Kelapa was renamed Jayakarta, and became a fiefdom of the Banten Sultanate, which became a major Southeast Asian trading centre.\nThrough the relationship with Prince Jayawikarta of the Banten Sultanate, Dutch ships arrived in 1596. In 1602, the British East India Company's first voyage, commanded by Sir James Lancaster, arrived in Aceh and sailed on to Banten where they were allowed to build a trading post. This site became the centre of British trade in the Indonesian archipelago until 1682. Jayawikarta is thought to have made trading connections with the British merchants, rivals of the Dutch, by allowing them to build houses directly across from the Dutch buildings in 1615.\n\n\n=== Colonial era ===\n\nWhen relations between Prince Jayawikarta and the Dutch deteriorated, his soldiers attacked the Dutch fortress. His army and the British, however, were defeated by the Dutch, in part owing to the timely arrival of Jan Pieterszoon Coen. The Dutch burned the British fort and forced them to retreat on their ships. The victory consolidated Dutch power, and they renamed the city Batavia in 1619.\n\nCommercial opportunities in the city attracted native and especially Chinese and Arab immigrants. This sudden population increase created burdens on the city. Tensions grew as the colonial government tried to restrict Chinese migration through deportations. Following a revolt, 5,000 Chinese were massacred by the Dutch and natives on 9 October 1740, and the following year, Chinese inhabitants were moved to Glodok outside the city walls. At the beginning of the 19th century, around 400 Arabs and Moors lived in Batavia, a number that changed little during the following decades. Among the commodities traded were fabrics, mainly imported cotton, batik and clothing worn by Arab communities.The city began to expand further south as epidemics in 1835 and 1870 forced residents to move away from the port. The Koningsplein, now Merdeka Square was completed in 1818, the housing park of Menteng was started in 1913, and Kebayoran Baru was the last Dutch-built residential area. By 1930, Batavia had more than 500,000 inhabitants, including 37,067 Europeans.On 5 March 1942, the Japanese captured Batavia from Dutch control, and the city was named Jakarta (Jakarta Special City (ジャカルタ特別市, Jakaruta tokubetsu-shi), under the special status that was assigned to the city). After the war, the Dutch name Batavia was internationally recognised until full Indonesian independence on 27 December 1949. The city, now renamed Jakarta, was officially proclaimed the national capital of Indonesia.\n\n\n=== Independence era ===\n\nAfter World War II ended, Indonesian nationalists declared independence on 17 August 1945, and the government of Jakarta City was changed into the Jakarta National Administration in the following month. During the Indonesian National Revolution, Indonesian Republicans withdrew from Allied-occupied Jakarta and established their capital in Yogyakarta.\nAfter securing full independence, Jakarta again became the national capital in 1950. With Jakarta selected to host the 1962 Asian Games, Soekarno, envisaging Jakarta as a great international city, instigated large government-funded projects with openly nationalistic and modernist architecture. Projects included a cloverleaf interchange, a major boulevard (Jalan MH Thamrin-Sudirman), monuments such as The National Monument, Hotel Indonesia, a shopping centre, and a new building intended to be the headquarters of CONEFO. 
In October 1965, Jakarta was the site of an abortive coup attempt in which six top generals were killed, precipitating a violent anti-communist purge which killed at least 500,000 people, including some ethnic Chinese. The event marked the beginning of Suharto's New Order. The first government was led by a mayor until the end of 1960 when the office was changed to that of a governor. The last mayor of Jakarta was Soediro until he was replaced by Soemarno Sosroatmodjo as governor. Based on law No. 5 of 1974 relating to regional governments, Jakarta was confirmed as the capital of Indonesia and one of the country's then 26 provinces.In 1966, Jakarta was declared a 'special capital region' (Daerah Khusus Ibukota), with a status equivalent to that of a province. Lieutenant General Ali Sadikin served as governor from 1966 to 1977; he rehabilitated roads and bridges, encouraged the arts, built hospitals and a large number of schools. He cleared out slum dwellers for new development projects — some for the benefit of the Suharto family,— and attempted to eliminate rickshaws and ban street vendors. He began control of migration to the city to stem overcrowding and poverty. Foreign investment contributed to a real estate boom that transformed the face of Jakarta. The boom ended with the 1997 Asian financial crisis, putting Jakarta at the centre of violence, protest, and political manoeuvring.\nAfter three decades in power, support for President Suharto began to wane. Tensions peaked when four students were shot dead at Trisakti University by security forces. Four days of riots and violence in 1998 ensued that killed an estimated 1,200, and destroyed or damaged 6,000 buildings, forcing Suharto to resign. Much of the rioting targeted Chinese Indonesians. In the post-Suharto era, Jakarta has remained the focal point of democratic change in Indonesia. Jemaah Islamiah-connected bombings occurred almost annually in the city between 2000 and 2005, with another in 2009. In August 2007, Jakarta held its first-ever election to choose a governor as part of a nationwide decentralisation program that allows direct local elections in several areas. Previously, governors were elected by the city's legislative body.During the Jokowi presidency, the Government adopted a plan to move Indonesia's capital to East Kalimantan.Between 2016 and 2017, a series of terrorist attacks rocked Jakarta with scenes of multiple suicide bombings and gunfire. In suspicion to its links, the Islamic State, the perpetrator led by Abu Bakr al-Baghdadi claimed responsibility for the attacks.\n\n\n== Geography ==\n\nJakarta covers 699.5 km2 (270.1 sq mi), the smallest among any Indonesian provinces. However, its metropolitan area covers 6,392 km2 (2,468 sq mi), which extends into two of the bordering provinces of West Java and Banten. The Greater Jakarta area includes three bordering regencies (Bekasi Regency, Tangerang Regency and Bogor Regency) and five adjacent cities (Bogor, Depok, Bekasi, Tangerang and South Tangerang).\n\nJakarta is situated on the northwest coast of Java, at the mouth of the Ciliwung River on Jakarta Bay, an inlet of the Java Sea. It is strategically located near the Sunda Strait. The northern part of Jakarta is plain land, some areas of which are below sea level, and subject to frequent flooding. The southern parts of the city are hilly. It is one of only two Asian capital cities located in the southern hemisphere (along with East Timor's Dili). 
Officially, the area of the Jakarta Special District is 662 km2 (256 sq mi) of land area and 6,977 km2 (2,694 sq mi) of sea area. The Thousand Islands, which are administratively a part of Jakarta, are located in Jakarta Bay, north of the city.\nJakarta lies in a low and flat alluvial plain, ranging from −2 to 91 m (−7 to 299 ft) with an average elevation of 8 m (26 ft) above sea level with historically extensive swampy areas. Some parts of the city have been constructed on reclaimed tidal flats that occur around the area. Thirteen rivers flow through Jakarta. They are Ciliwung River, Kalibaru, Pesanggrahan, Cipinang, Angke River, Maja, Mookervart, Krukut, Buaran, West Tarum, Cakung, Petukangan, Sunter River and Grogol River. They flow from the Puncak highlands to the south of the city, then across the city northwards towards the Java Sea. The Ciliwung River divides the city into the western and eastern districts.\nThese rivers, combined with the wet season rains and insufficient", doc_id='eeb6ef32-c857-44e2-b0c5-dff6e29a9cd7', extra_info=None, node_info={'start': 0, 'end': 13970}, similarity=0.8701780916463354)], extra_info=None) |
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/CognitiveSearchIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Azure AI Search
## Basic Example
In this notebook, we take a Paul Graham essay, split it into chunks, embed it using an Azure OpenAI embedding model, load it into an Azure AI Search index, and then query it.
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
!pip install llama-index
!pip install wget
%pip install llama-index-vector-stores-azureaisearch
%pip install azure-search-documents==11.4.0
%pip install llama-index-embeddings-azure-openai
%pip install llama-index-llms-azure-openai
```
```python
import logging
import sys
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from IPython.display import Markdown, display
from llama_index.core import (
SimpleDirectoryReader,
StorageContext,
VectorStoreIndex,
)
from llama_index.core.settings import Settings
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.vector_stores.azureaisearch import AzureAISearchVectorStore
from llama_index.vector_stores.azureaisearch import (
IndexManagement,
MetadataIndexFieldType,
)
```
## Setup Azure OpenAI
```python
aoai_api_key = "YOUR_AZURE_OPENAI_API_KEY"
aoai_endpoint = "YOUR_AZURE_OPENAI_ENDPOINT"
aoai_api_version = "2023-05-15"
llm = AzureOpenAI(
model="YOUR_AZURE_OPENAI_COMPLETION_MODEL_NAME",
deployment_name="YOUR_AZURE_OPENAI_COMPLETION_DEPLOYMENT_NAME",
api_key=aoai_api_key,
azure_endpoint=aoai_endpoint,
api_version=aoai_api_version,
)
# You need to deploy your own embedding model as well as your own chat completion model
embed_model = AzureOpenAIEmbedding(
model="YOUR_AZURE_OPENAI_EMBEDDING_MODEL_NAME",
deployment_name="YOUR_AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME",
api_key=aoai_api_key,
azure_endpoint=aoai_endpoint,
api_version=aoai_api_version,
)
```
## Setup Azure AI Search
```python
search_service_api_key = "YOUR-AZURE-SEARCH-SERVICE-ADMIN-KEY"
search_service_endpoint = "YOUR-AZURE-SEARCH-SERVICE-ENDPOINT"
search_service_api_version = "2023-11-01"
credential = AzureKeyCredential(search_service_api_key)
# Index name to use
index_name = "llamaindex-vector-demo"
# Use index client to demonstrate creating an index
index_client = SearchIndexClient(
endpoint=search_service_endpoint,
credential=credential,
)
# Use search client to demonstrate using an existing index
search_client = SearchClient(
endpoint=search_service_endpoint,
index_name=index_name,
credential=credential,
)
```
## Create Index (if it does not exist)
Demonstrates creating a vector index named "llamaindex-vector-demo" if one doesn't exist. The index has the following fields:
| Field Name | OData Type |
|------------|---------------------------|
| id | `Edm.String` |
| chunk | `Edm.String` |
| embedding | `Collection(Edm.Single)` |
| metadata | `Edm.String` |
| doc_id | `Edm.String` |
| author | `Edm.String` |
| theme | `Edm.String` |
| director | `Edm.String` |
```python
metadata_fields = {
"author": "author",
"theme": ("topic", MetadataIndexFieldType.STRING),
"director": "director",
}
vector_store = AzureAISearchVectorStore(
search_or_index_client=index_client,
filterable_metadata_field_keys=metadata_fields,
index_name=index_name,
index_management=IndexManagement.CREATE_IF_NOT_EXISTS,
id_field_key="id",
chunk_field_key="chunk",
embedding_field_key="embedding",
embedding_dimensionality=1536,
metadata_string_field_key="metadata",
doc_id_field_key="doc_id",
language_analyzer="en.lucene",
vector_algorithm_type="exhaustiveKnn",
)
```
```python
!mkdir -p '../data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O '../data/paul_graham/paul_graham_essay.txt'
```
### Loading documents
Load the documents stored in `../data/paul_graham/` using the `SimpleDirectoryReader`.
```python
# Load documents
documents = SimpleDirectoryReader("../data/paul_graham/").load_data()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
Settings.llm = llm
Settings.embed_model = embed_model
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
```python
# Query Data
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What did the author do growing up?")
display(Markdown(f"<b>{response}</b>"))
```
<b>The author engaged in writing and programming activities during their formative years. They initially wrote short stories and later transitioned to programming on the IBM 1401 using an early version of Fortran. Subsequently, with the advent of microcomputers, the author began programming on a TRS-80, writing simple games, a rocket flight prediction program, and a word processor.</b>
```python
response = query_engine.query(
"What did the author learn?",
)
display(Markdown(f"<b>{response}</b>"))
```
<b>The author learned that the study of philosophy in college did not live up to their expectations, as they found the courses to be boring and lacking in ultimate truths. This led them to switch their focus to AI, which was influenced by a novel featuring an intelligent computer and a PBS documentary showcasing advanced technology.</b>
## Use Existing Index
```python
index_name = "llamaindex-vector-demo"
metadata_fields = {
"author": "author",
"theme": ("topic", MetadataIndexFieldType.STRING),
"director": "director",
}
vector_store = AzureAISearchVectorStore(
search_or_index_client=search_client,
filterable_metadata_field_keys=metadata_fields,
index_management=IndexManagement.VALIDATE_INDEX,
id_field_key="id",
chunk_field_key="chunk",
embedding_field_key="embedding",
embedding_dimensionality=1536,
metadata_string_field_key="metadata",
doc_id_field_key="doc_id",
)
```
```python
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
[],
storage_context=storage_context,
)
```
```python
query_engine = index.as_query_engine()
response = query_engine.query("What was a hard moment for the author?")
display(Markdown(f"<b>{response}</b>"))
```
<b>The author faced a challenging moment when he couldn't figure out what to do with the early computer he had access to in 9th grade. This was due to the limited options for input and the lack of knowledge in math to do anything interesting with the available resources.</b>
```python
response = query_engine.query("Who is the author?")
display(Markdown(f"<b>{response}</b>"))
```
<b>Paul Graham</b>
```python
import time
query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("What happened at interleaf?")
start_time = time.time()
token_count = 0
for token in response.response_gen:
print(token, end="")
token_count += 1
time_elapsed = time.time() - start_time
tokens_per_second = token_count / time_elapsed
print(f"\n\nStreamed output at {tokens_per_second} tokens/s")
```
The author worked at Interleaf, where they learned several lessons, including the importance of product-focused leadership in technology companies, the drawbacks of code being edited by too many people, the limitations of conventional office hours for optimal hacking, and the risks associated with bureaucratic customers. Additionally, the author discovered the concept that the low end tends to dominate the high end, and that being the "entry level" option can be advantageous.
Streamed output at 99.40073103089465 tokens/s
## Adding a document to existing index
```python
response = query_engine.query("What colour is the sky?")
display(Markdown(f"<b>{response}</b>"))
```
<b>Blue</b>
```python
from llama_index.core import Document
index.insert_nodes([Document(text="The sky is indigo today")])
```
```python
response = query_engine.query("What colour is the sky?")
display(Markdown(f"<b>{response}</b>"))
```
<b>The sky is indigo today.</b>
## Filtering
```python
from llama_index.core.schema import TextNode
nodes = [
TextNode(
text="The Shawshank Redemption",
metadata={
"author": "Stephen King",
"theme": "Friendship",
},
),
TextNode(
text="The Godfather",
metadata={
"director": "Francis Ford Coppola",
"theme": "Mafia",
},
),
TextNode(
text="Inception",
metadata={
"director": "Christopher Nolan",
},
),
]
```
```python
index.insert_nodes(nodes)
```
```python
from llama_index.core.vector_stores.types import (
MetadataFilters,
ExactMatchFilter,
)
filters = MetadataFilters(
filters=[ExactMatchFilter(key="theme", value="Mafia")]
)
retriever = index.as_retriever(filters=filters)
retriever.retrieve("What is inception about?")
```
[NodeWithScore(node=TextNode(id_='049f00de-13be-4af3-ab56-8c16352fe799', embedding=None, metadata={'director': 'Francis Ford Coppola', 'theme': 'Mafia'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='ad2a08d4364262546db9711b915348d43e0ccc41bd8c3c41775e133624e1fa1b', text='The Godfather', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.8120511)]
## Query Mode
Four query modes are supported: DEFAULT (vector search), SPARSE, HYBRID, and SEMANTIC_HYBRID. The sections below demonstrate vector, hybrid, and semantic hybrid search, with a sparse-mode sketch at the end.
### Perform a Vector Search
```python
from llama_index.core.vector_stores.types import VectorStoreQueryMode
default_retriever = index.as_retriever(
vector_store_query_mode=VectorStoreQueryMode.DEFAULT
)
response = default_retriever.retrieve("What is inception about?")
# Loop through each NodeWithScore in the response
for node_with_score in response:
node = node_with_score.node # The TextNode object
score = node_with_score.score # The similarity score
chunk_id = node.id_ # The chunk ID
# Extract the relevant metadata from the node
file_name = node.metadata.get("file_name", "Unknown")
file_path = node.metadata.get("file_path", "Unknown")
# Extract the text content from the node
text_content = node.text if node.text else "No content available"
# Print the results in a user-friendly format
print(f"Score: {score}")
print(f"File Name: {file_name}")
print(f"Id: {chunk_id}")
print("\nExtracted Content:")
print(text_content)
print("\n" + "=" * 40 + " End of Result " + "=" * 40 + "\n")
```
Score: 0.8748552
File Name: Unknown
Id: bae0df75-ff37-4725-b659-b9fd8bf2ef3c
Extracted Content:
Inception
======================================== End of Result ========================================
Score: 0.8155207
File Name: paul_graham_essay.txt
Id: ae5aee85-a083-4141-bf75-bbb872f53760
Extracted Content:
It's not that unprestigious types of work are good per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it's a sign both that there's something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren't prestigious doesn't guarantee you're on the right track, it at least guarantees you're not on the most common type of wrong one.
Over the next several years I wrote lots of essays about all kinds of different topics. O'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.
One night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn't know but would probably like. One of the guests was someone I didn't know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out.
Jessica was in charge of marketing at a Boston investment bank. This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And how colorful their stories were. So she decided to compile a book of interviews with startup founders.
When the bank had financial problems and she had to fire half her staff, she started looking for a new job. In early 2005 she interviewed for a marketing job at a Boston VC firm. It took them weeks to make up their minds, and during this time I started telling her about all the things that needed to be fixed about venture capital. They should make a larger number of smaller investments instead of a handful of giant ones, they should be funding younger, more technical founders instead of MBAs, they should let the founders remain as CEO, and so on.
One of my tricks for writing essays had always been to give talks. The prospect of having to stand up in front of a group of people and tell them something that won't waste their time is a great spur to the imagination. When the Harvard Computer Society, the undergrad computer club, asked me to give a talk, I decided I would tell them how to start a startup. Maybe they'd be able to avoid the worst of the mistakes we'd made.
So I gave this talk, in the course of which I told them that the best sources of seed funding were successful startup founders, because then they'd be sources of advice too. Whereupon it seemed they were all looking expectantly at me. Horrified at the prospect of having my inbox flooded by business plans (if I'd only known), I blurted out "But not me!" and went on with the talk. But afterward it occurred to me that I should really stop procrastinating about angel investing. I'd been meaning to since Yahoo bought us, and now it was 7 years later and I still hadn't done one angel investment.
Meanwhile I had been scheming with Robert and Trevor about projects we could work on together. I missed working with them, and it seemed like there had to be something we could collaborate on.
As Jessica and I were walking home from dinner on March 11, at the corner of Garden and Walker streets, these three threads converged. Screw the VCs who were taking so long to make up their minds. We'd start our own investment firm and actually implement the ideas we'd been talking about. I'd fund it, and Jessica could quit her job and work for it, and we'd get Robert and Trevor as partners too. [13]
Once again, ignorance worked in our favor. We had no idea how to be angel investors, and in Boston in 2005 there were no Ron Conways to learn from. So we just made what seemed like the obvious choices, and some of the things we did turned out to be novel.
There are multiple components to Y Combinator, and we didn't figure them all out at once. The part we got first was to be an angel firm.
======================================== End of Result ========================================
### Perform a Hybrid Search
```python
from llama_index.core.vector_stores.types import VectorStoreQueryMode
hybrid_retriever = index.as_retriever(
vector_store_query_mode=VectorStoreQueryMode.HYBRID
)
hybrid_retriever.retrieve("What is inception about?")
```
[NodeWithScore(node=TextNode(id_='bae0df75-ff37-4725-b659-b9fd8bf2ef3c', embedding=None, metadata={'director': 'Christopher Nolan'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='9792a1fd7d2e1a08f1b1d70a597357bb6b68d69ed5685117eaa37ac9e9a3565e', text='Inception', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.03181818127632141),
NodeWithScore(node=TextNode(id_='ae5aee85-a083-4141-bf75-bbb872f53760', embedding=None, metadata={'file_path': '..\\data\\paul_graham\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='627552ee-116a-4132-a7d3-7e7232f75866', node_type=<ObjectType.DOCUMENT: '4'>, metadata={'file_path': '..\\data\\paul_graham\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, hash='0a59e1ce8e50a67680a5669164f79e524087270ce183a3971fcd18ac4cad1fa0'), <NodeRelationship.PREVIOUS: '2'>: RelatedNodeInfo(node_id='24a1d375-31e3-492c-ac02-5091e3572e3f', node_type=<ObjectType.TEXT: '1'>, metadata={'file_path': '..\\data\\paul_graham\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, hash='51c474a12ac8e9748258b2c7bbe77bb7c8bf35b775ed44f016057a0aa8b0bd76'), <NodeRelationship.NEXT: '3'>: RelatedNodeInfo(node_id='196569e0-2b10-4ba3-8263-a69fb78dd98c', node_type=<ObjectType.TEXT: '1'>, metadata={}, hash='192082e7ba84b8c5e2a64bd1d422c6c503189fc3ba325bb3e6e8bdb43db03fbb')}, hash='a3ea638857f1daadf7af967322480f97e1235dac3ee7d72b8024670785df8810', text='It\'s not that unprestigious types of work are good per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it\'s a sign both that there\'s something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren\'t prestigious doesn\'t guarantee you\'re on the right track, it at least guarantees you\'re not on the most common type of wrong one.\n\nOver the next several years I wrote lots of essays about all kinds of different topics. O\'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.\n\nOne night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn\'t know but would probably like. One of the guests was someone I didn\'t know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out.\n\nJessica was in charge of marketing at a Boston investment bank. 
This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And how colorful their stories were. So she decided to compile a book of interviews with startup founders.\n\nWhen the bank had financial problems and she had to fire half her staff, she started looking for a new job. In early 2005 she interviewed for a marketing job at a Boston VC firm. It took them weeks to make up their minds, and during this time I started telling her about all the things that needed to be fixed about venture capital. They should make a larger number of smaller investments instead of a handful of giant ones, they should be funding younger, more technical founders instead of MBAs, they should let the founders remain as CEO, and so on.\n\nOne of my tricks for writing essays had always been to give talks. The prospect of having to stand up in front of a group of people and tell them something that won\'t waste their time is a great spur to the imagination. When the Harvard Computer Society, the undergrad computer club, asked me to give a talk, I decided I would tell them how to start a startup. Maybe they\'d be able to avoid the worst of the mistakes we\'d made.\n\nSo I gave this talk, in the course of which I told them that the best sources of seed funding were successful startup founders, because then they\'d be sources of advice too. Whereupon it seemed they were all looking expectantly at me. Horrified at the prospect of having my inbox flooded by business plans (if I\'d only known), I blurted out "But not me!" and went on with the talk. But afterward it occurred to me that I should really stop procrastinating about angel investing. I\'d been meaning to since Yahoo bought us, and now it was 7 years later and I still hadn\'t done one angel investment.\n\nMeanwhile I had been scheming with Robert and Trevor about projects we could work on together. I missed working with them, and it seemed like there had to be something we could collaborate on.\n\nAs Jessica and I were walking home from dinner on March 11, at the corner of Garden and Walker streets, these three threads converged. Screw the VCs who were taking so long to make up their minds. We\'d start our own investment firm and actually implement the ideas we\'d been talking about. I\'d fund it, and Jessica could quit her job and work for it, and we\'d get Robert and Trevor as partners too. [13]\n\nOnce again, ignorance worked in our favor. We had no idea how to be angel investors, and in Boston in 2005 there were no Ron Conways to learn from. So we just made what seemed like the obvious choices, and some of the things we did turned out to be novel.\n\nThere are multiple components to Y Combinator, and we didn\'t figure them all out at once. The part we got first was to be an angel firm.', start_char_idx=45670, end_char_idx=50105, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.03009207174181938)]
### Perform a Hybrid Search with Semantic Reranking
This mode incorporates semantic reranking to hybrid search results to improve search relevance.
Please see this link for further details: https://learn.microsoft.com/azure/search/semantic-search-overview
```python
hybrid_retriever = index.as_retriever(
vector_store_query_mode=VectorStoreQueryMode.SEMANTIC_HYBRID
)
hybrid_retriever.retrieve("What is inception about?")
```
[NodeWithScore(node=TextNode(id_='bae0df75-ff37-4725-b659-b9fd8bf2ef3c', embedding=None, metadata={'director': 'Christopher Nolan'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='9792a1fd7d2e1a08f1b1d70a597357bb6b68d69ed5685117eaa37ac9e9a3565e', text='Inception', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=2.3949906826019287),
NodeWithScore(node=TextNode(id_='fc9782a2-c255-4265-a618-3a864abe598d', embedding=None, metadata={'file_path': '..\\data\\paul_graham\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='627552ee-116a-4132-a7d3-7e7232f75866', node_type=<ObjectType.DOCUMENT: '4'>, metadata={'file_path': '..\\data\\paul_graham\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, hash='0a59e1ce8e50a67680a5669164f79e524087270ce183a3971fcd18ac4cad1fa0'), <NodeRelationship.PREVIOUS: '2'>: RelatedNodeInfo(node_id='94d87013-ea3d-4a9c-982a-dde5ff219983', node_type=<ObjectType.TEXT: '1'>, metadata={'file_path': '..\\data\\paul_graham\\paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'file_size': 75395, 'creation_date': '2023-12-12', 'last_modified_date': '2023-12-12', 'last_accessed_date': '2024-02-02'}, hash='f28897170c6b61162069af9ee83dc11e13fa0f6bf6efaa7b3911e6ad9093da84'), <NodeRelationship.NEXT: '3'>: RelatedNodeInfo(node_id='dc3852e5-4c1e-484e-9e65-f17084d3f7b4', node_type=<ObjectType.TEXT: '1'>, metadata={}, hash='deaee6d5c992dbf757876957aa9112a42d30a636c6c83d81fcfac4aaf2d24dee')}, hash='a3b31e5ec2b5d4a9b3648de310c8a5962c17afdb800ea0e16faa47956607866d', text='And at the same time all involved would adhere outwardly to the conventions of a 19th century atelier. We actually had one of those little stoves, fed with kindling, that you see in 19th century studio paintings, and a nude model sitting as close to it as possible without getting burned. Except hardly anyone else painted her besides me. The rest of the students spent their time chatting or occasionally trying to imitate things they\'d seen in American art magazines.\n\nOur model turned out to live just down the street from me. She made a living from a combination of modelling and making fakes for a local antique dealer. She\'d copy an obscure old painting out of a book, and then he\'d take the copy and maltreat it to make it look old. [3]\n\nWhile I was a student at the Accademia I started painting still lives in my bedroom at night. These paintings were tiny, because the room was, and because I painted them on leftover scraps of canvas, which was all I could afford at the time. Painting still lives is different from painting people, because the subject, as its name suggests, can\'t move. People can\'t sit for more than about 15 minutes at a time, and when they do they don\'t sit very still. So the traditional m.o. for painting people is to know how to paint a generic person, which you then modify to match the specific person you\'re painting. Whereas a still life you can, if you want, copy pixel by pixel from what you\'re seeing. You don\'t want to stop there, of course, or you get merely photographic accuracy, and what makes a still life interesting is that it\'s been through a head. 
You want to emphasize the visual cues that tell you, for example, that the reason the color changes suddenly at a certain point is that it\'s the edge of an object. By subtly emphasizing such things you can make paintings that are more realistic than photographs not just in some metaphorical sense, but in the strict information-theoretic sense. [4]\n\nI liked painting still lives because I was curious about what I was seeing. In everyday life, we aren\'t consciously aware of much we\'re seeing. Most visual perception is handled by low-level processes that merely tell your brain "that\'s a water droplet" without telling you details like where the lightest and darkest points are, or "that\'s a bush" without telling you the shape and position of every leaf. This is a feature of brains, not a bug. In everyday life it would be distracting to notice every leaf on every bush. But when you have to paint something, you have to look more closely, and when you do there\'s a lot to see. You can still be noticing new things after days of trying to paint something people usually take for granted, just as you can after days of trying to write an essay about something people usually take for granted.\n\nThis is not the only way to paint. I\'m not 100% sure it\'s even a good way to paint. But it seemed a good enough bet to be worth trying.\n\nOur teacher, professor Ulivi, was a nice guy. He could see I worked hard, and gave me a good grade, which he wrote down in a sort of passport each student had. But the Accademia wasn\'t teaching me anything except Italian, and my money was running out, so at the end of the first year I went back to the US.\n\nI wanted to go back to RISD, but I was now broke and RISD was very expensive, so I decided to get a job for a year and then return to RISD the next fall. I got one at a company called Interleaf, which made software for creating documents. You mean like Microsoft Word? Exactly. That was how I learned that low end software tends to eat high end software. But Interleaf still had a few years to live yet. [5]\n\nInterleaf had done something pretty bold. Inspired by Emacs, they\'d added a scripting language, and even made the scripting language a dialect of Lisp. Now they wanted a Lisp hacker to write things in it. This was the closest thing I\'ve had to a normal job, and I hereby apologize to my boss and coworkers, because I was a bad employee. Their Lisp was the thinnest icing on a giant C cake, and since I didn\'t know C and didn\'t want to learn it, I never understood most of the software. Plus I was terribly irresponsible. This was back when a programming job meant showing up every day during certain working hours.', start_char_idx=14179, end_char_idx=18443, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=1.0986518859863281)] |
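### Perform a Sparse Search
The SPARSE mode (keyword-based retrieval) is not demonstrated above; a minimal sketch, assuming the same index and imports as the previous cells:
```python
sparse_retriever = index.as_retriever(
    vector_store_query_mode=VectorStoreQueryMode.SPARSE
)
sparse_retriever.retrieve("What is inception about?")
```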
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/Qdrant_using_qdrant_filters.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Qdrant Vector Store - Default Qdrant Filters
An example of how to use filters from the `qdrant_client` SDK directly in your retriever / query engine.
```python
%pip install llama-index-vector-stores-qdrant
```
```python
!pip3 install llama-index qdrant_client
```
```python
import openai
import qdrant_client
from IPython.display import Markdown, display
from llama_index.core import VectorStoreIndex
from llama_index.core import StorageContext
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client.http.models import Filter, FieldCondition, MatchValue
client = qdrant_client.QdrantClient(location=":memory:")
from llama_index.core.schema import TextNode
nodes = [
TextNode(
text="りんごとは",
metadata={"author": "Tanaka", "fruit": "apple", "city": "Tokyo"},
),
TextNode(
text="Was ist Apfel?",
metadata={"author": "David", "fruit": "apple", "city": "Berlin"},
),
TextNode(
text="Orange like the sun",
metadata={"author": "Jane", "fruit": "orange", "city": "Hong Kong"},
),
TextNode(
text="Grape is...",
metadata={"author": "Jane", "fruit": "grape", "city": "Hong Kong"},
),
TextNode(
text="T-dot > G-dot",
metadata={"author": "George", "fruit": "grape", "city": "Toronto"},
),
TextNode(
text="6ix Watermelons",
metadata={
"author": "George",
"fruit": "watermelon",
"city": "Toronto",
},
),
]
openai.api_key = "YOUR_API_KEY"
vector_store = QdrantVectorStore(
client=client, collection_name="fruit_collection"
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(nodes, storage_context=storage_context)
# Use filters directly from the qdrant_client Python library.
# `should` is a logical OR across sub-filters; each inner `must` is a logical AND,
# so this filter matches (fruit=apple AND city=Tokyo) OR (fruit=grape AND city=Toronto).
# See https://qdrant.tech/documentation/concepts/filtering/ for more examples.
filters = Filter(
should=[
Filter(
must=[
FieldCondition(
key="fruit",
match=MatchValue(value="apple"),
),
FieldCondition(
key="city",
match=MatchValue(value="Tokyo"),
),
]
),
Filter(
must=[
FieldCondition(
key="fruit",
match=MatchValue(value="grape"),
),
FieldCondition(
key="city",
match=MatchValue(value="Toronto"),
),
]
),
]
)
retriever = index.as_retriever(vector_store_kwargs={"qdrant_filters": filters})
response = retriever.retrieve("Who makes grapes?")
for node in response:
print("node", node.score)
print("node", node.text)
print("node", node.metadata)
```
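The same `qdrant_filters` argument also works when retrieval happens inside a query engine, which runs the filtered retrieval and then synthesizes an answer. A short sketch using the index built above:

```python
# the same filters apply when retrieval happens inside a query engine
query_engine = index.as_query_engine(
    vector_store_kwargs={"qdrant_filters": filters}
)
response = query_engine.query("Who makes grapes?")
print(response)
```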
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/PineconeIndexDemo-Hybrid.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Pinecone Vector Store - Hybrid Search
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-pinecone
```
```python
!pip install "llama-index>=0.9.31" "pinecone-client>=3.0.0" "transformers[torch]"
```
#### Creating a Pinecone Index
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
```python
from pinecone import Pinecone, ServerlessSpec
```
```python
import os
os.environ[
    "PINECONE_API_KEY"
] = "<Your Pinecone API key, from app.pinecone.io>"
os.environ[
    "OPENAI_API_KEY"
] = "sk-..."
api_key = os.environ["PINECONE_API_KEY"]
pc = Pinecone(api_key=api_key)
```
```python
# delete if needed
# pc.delete_index("quickstart")
```
```python
# dimensions are for text-embedding-ada-002
# NOTE: needs dotproduct for hybrid search
pc.create_index(
name="quickstart",
dimension=1536,
metric="dotproduct",
spec=ServerlessSpec(cloud="aws", region="us-west-2"),
)
# If you need to create a PodBased Pinecone index, you could alternatively do this:
#
# from pinecone import Pinecone, PodSpec
#
# pc = Pinecone(api_key='xxx')
#
# pc.create_index(
# name='my-index',
# dimension=1536,
# metric='cosine',
# spec=PodSpec(
# environment='us-east1-gcp',
# pod_type='p1.x1',
# pods=1
# )
# )
#
```
```python
pinecone_index = pc.Index("quickstart")
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
#### Load documents, build the PineconeVectorStore
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.pinecone import PineconeVectorStore
from IPython.display import Markdown, display
```
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
```
```python
# set add_sparse_vector=True to compute sparse vectors during upsert
from llama_index.core import StorageContext
if "OPENAI_API_KEY" not in os.environ:
    raise EnvironmentError("Environment variable OPENAI_API_KEY is not set")
vector_store = PineconeVectorStore(
pinecone_index=pinecone_index,
add_sparse_vector=True,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
Upserted vectors: 0%| | 0/22 [00:00<?, ?it/s]
#### Query Index
You may need to wait a minute or two for the index to be ready.
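Rather than guessing, you can poll the index status and block until it reports ready. A small optional sketch using the `Pinecone` client created above:

```python
import time

# poll until the serverless index reports ready
while not pc.describe_index("quickstart").status["ready"]:
    time.sleep(5)
```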
```python
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(vector_store_query_mode="hybrid")
response = query_engine.query("What happened at Viaweb?")
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
```python
display(Markdown(f"<b>{response}</b>"))
```
<b>At Viaweb, Lisp was used as a programming language. The speaker gave a talk at a Lisp conference about how Lisp was used at Viaweb, and afterward, the talk gained a lot of attention when it was posted online. This led to a realization that publishing essays online could reach a wider audience than traditional print media. The speaker also wrote a collection of essays, which was later published as a book called "Hackers & Painters."</b>
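Hybrid queries blend dense (embedding) and sparse (keyword) relevance scores. In LlamaIndex this blend can typically be tuned with an `alpha` parameter on the retriever, where `1.0` is pure dense and `0.0` is pure sparse. A hedged sketch against the index built above:

```python
# alpha weights dense vs. sparse relevance: 1.0 = dense only, 0.0 = sparse only
query_engine = index.as_query_engine(
    vector_store_query_mode="hybrid",
    alpha=0.5,
)
response = query_engine.query("What happened at Viaweb?")
print(response)
```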
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/Elasticsearch_demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Elasticsearch
> [Elasticsearch](http://www.github.com/elastic/elasticsearch) is a search database that supports full-text and vector search.
## Basic Example
In this basic example, we take a Paul Graham essay, split it into chunks, embed it using an open-source embedding model, load it into Elasticsearch, and then query it. For an example using different retrieval strategies, see [Elasticsearch Vector Store](https://docs.llamaindex.ai/en/stable/examples/vector_stores/ElasticsearchIndexDemo/).
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install -qU llama-index-vector-stores-elasticsearch llama-index-embeddings-huggingface llama-index
```
```python
# import
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.elasticsearch import ElasticsearchStore
from llama_index.core import StorageContext
```
```python
# set up OpenAI
import os
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget -nv 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
2024-05-13 15:10:43 URL:https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt [75042/75042] -> "data/paul_graham/paul_graham_essay.txt" [1]
```python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import Settings
# define embedding function
Settings.embed_model = HuggingFaceEmbedding(
model_name="BAAI/bge-small-en-v1.5"
)
```
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
# define index
vector_store = ElasticsearchStore(
es_url="http://localhost:9200", # see Elasticsearch Vector Store for more authentication options
index_name="paul_graham_essay",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
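The example above connects to a local, unauthenticated Elasticsearch instance. `ElasticsearchStore` also accepts cloud and API-key credentials; a sketch with placeholder values (see the Elasticsearch Vector Store example for the full set of options):

```python
# connect to Elastic Cloud instead of a local instance (placeholder credentials)
vector_store = ElasticsearchStore(
    index_name="paul_graham_essay",
    es_cloud_id="<cloud_id>",  # from your Elastic Cloud deployment page
    es_api_key="<api_key>",  # alternatively, es_user= / es_password=
)
```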
```python
# Query Data
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```
The author worked on writing and programming outside of school. They wrote short stories and tried writing programs on an IBM 1401 computer. They also built a microcomputer kit and started programming on it, writing simple games and a word processor.
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/vector_stores/FirestoreVectorStore.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Firestore Vector Store (Google Firestore, Native Mode)
> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore's LlamaIndex integrations.
This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to store vectors and query them using the `FirestoreVectorStore` class.
## Before You Begin
To run this notebook, you will need to do the following:
* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)
* [Enable the Firestore API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com)
* [Create a Firestore database](https://cloud.google.com/firestore/docs/manage-databases)
After confirming access to the database in the runtime environment of this notebook, fill in the following values and run the cell before running the example scripts.
## Library Installation
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙. For this notebook, we will also install `llama-index-embeddings-huggingface` to use a local embedding model.
```python
%pip install --quiet llama-index
%pip install --quiet llama-index-vector-stores-firestore llama-index-embeddings-huggingface
```
### ☁ Set Your Google Cloud Project
Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.
If you don't know your project ID, try the following:
* Run `gcloud config list`.
* Run `gcloud projects list`.
* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113).
```python
# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "YOUR_PROJECT_ID" # @param {type:"string"}
# Set the project id
!gcloud config set project {PROJECT_ID}
```
### 🔐 Authentication
Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.
- If you are using Colab to run this notebook, use the cell below and continue.
- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env).
```python
from google.colab import auth
auth.authenticate_user()
```
# Basic Usage
### Initialize FirestoreVectorStore
`FirestoreVectorStore` allows you to load data into Firestore and query it.
```python
# @markdown Please specify a source for demo purpose.
COLLECTION_NAME = "test_collection"
```
```python
from llama_index.core import SimpleDirectoryReader
# Load documents and build index
documents = SimpleDirectoryReader(
"../../examples/data/paul_graham"
).load_data()
```
```python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import Settings
# Set the embedding model, this is a local model
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```
```python
from llama_index.core import VectorStoreIndex
from llama_index.core import StorageContext, ServiceContext
from llama_index.vector_stores.firestore import FirestoreVectorStore
# Create a Firestore vector store
store = FirestoreVectorStore(collection_name=COLLECTION_NAME)
storage_context = StorageContext.from_defaults(vector_store=store)
service_context = ServiceContext.from_defaults(
llm=None, embed_model=embed_model
)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context, service_context=service_context
)
```
/var/folders/mh/cqn7wzgs3j79rbg243_gfcx80000gn/T/ipykernel_29666/1668628626.py:10: DeprecationWarning: Call to deprecated class method from_defaults. (ServiceContext is deprecated, please use `llama_index.settings.Settings` instead.) -- Deprecated since version 0.10.0.
service_context = ServiceContext.from_defaults(llm=None, embed_model=embed_model)
LLM is explicitly disabled. Using MockLLM.
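As the warning above notes, `ServiceContext` is deprecated. On recent LlamaIndex versions the equivalent setup uses the global `Settings` object; a minimal sketch, assuming the same `documents`, `embed_model`, and `COLLECTION_NAME` as above:

```python
from llama_index.core import Settings, StorageContext, VectorStoreIndex
from llama_index.vector_stores.firestore import FirestoreVectorStore

Settings.llm = None  # explicitly disable the LLM; LlamaIndex falls back to a MockLLM
Settings.embed_model = embed_model

store = FirestoreVectorStore(collection_name=COLLECTION_NAME)
storage_context = StorageContext.from_defaults(vector_store=store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
```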
### Perform search
You can use the `FirestoreVectorStore` to perform similarity searches on the vectors you have stored. This is useful for finding similar documents or text.
```python
query_engine = index.as_query_engine()
res = query_engine.query("What did the author do growing up?")
print(str(res.source_nodes[0].text))
```
None
What I Worked On
February 2021
Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.
The first programs I tried writing were on the IBM 1401 that our school district used for what was then called "data processing." This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.
The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in the card reader and press a button to load the program into memory and run it. The result would ordinarily be to print something on the spectacularly loud printer.
I was puzzled by the 1401. I couldn't figure out what to do with it. And in retrospect there's not much I could have done with it. The only form of input to programs was data stored on punched cards, and I didn't have any data stored on punched cards. The only other option was to do things that didn't rely on any input, like calculate approximations of pi, but I didn't know enough math to do anything interesting of that type. So I'm not surprised I can't remember any programs I wrote, because they can't have done much. My clearest memory is of the moment I learned it was possible for programs not to terminate, when one of mine didn't. On a machine without time-sharing, this was a social as well as a technical error, as the data center manager's expression made clear.
With microcomputers, everything changed. Now you could have a computer sitting right in front of you, on a desk, that could respond to your keystrokes as it was running instead of just churning through a stack of punch cards and then stopping. [1]
The first of my friends to get a microcomputer built it himself. It was sold as a kit by Heathkit. I remember vividly how impressed and envious I felt watching him sitting in front of it, typing programs right into the computer.
Computers were expensive in those days and it took me years of nagging before I convinced my father to buy one, a TRS-80, in about 1980. The gold standard then was the Apple II, but a TRS-80 was good enough. This was when I really started programming. I wrote simple games, a program to predict how high my model rockets would fly, and a word processor that my father used to write at least one book. There was only room in memory for about 2 pages of text, so he'd write 2 pages at a time and then print them out, but it was a lot better than a typewriter.
Though I liked programming, I didn't plan to study it in college. In college I was going to study philosophy, which sounded much more powerful. It seemed, to my naive high school self, to be the study of the ultimate truths, compared to which the things studied in other fields would be mere domain knowledge. What I discovered when I got to college was that the other fields took up so much of the space of ideas that there wasn't much left for these supposed ultimate truths. All that seemed left for philosophy were edge cases that people in other fields felt could safely be ignored.
I couldn't have put this into words when I was 18. All I knew at the time was that I kept taking philosophy courses and they kept being boring. So I decided to switch to AI.
AI was in the air in the mid 1980s, but there were two things especially that made me want to work on it: a novel by Heinlein called The Moon is a Harsh Mistress, which featured an intelligent computer called Mike, and a PBS documentary that showed Terry Winograd using SHRDLU. I haven't tried rereading The Moon is a Harsh Mistress, so I don't know how well it has aged, but when I read it I was drawn entirely into its world.
You can apply pre-filtering to the search results by specifying a `filters` argument.
```python
from llama_index.core.vector_stores.types import (
MetadataFilters,
MetadataFilter,
)
filters = MetadataFilters(
filters=[MetadataFilter(key="author", value="Paul Graham")]
)
query_engine = index.as_query_engine(filters=filters)
res = query_engine.query("What did the author do growing up?")
print(str(res.source_nodes[0].text))
```
```python
!pip install -q llama-index llama-index-vector-stores-mongodb llama-index-embeddings-fireworks==0.1.2 llama-index-llms-fireworks
!pip install -q pymongo datasets pandas
```
```python
# set up Fireworks.ai Key
import os
import getpass
fw_api_key = getpass.getpass("Fireworks API Key:")
os.environ["FIREWORKS_API_KEY"] = fw_api_key
```
```python
from datasets import load_dataset
import pandas as pd
# https://huggingface.co./datasets/AIatMongoDB/whatscooking.restaurants
dataset = load_dataset("AIatMongoDB/whatscooking.restaurants")
# Convert the dataset to a pandas dataframe
dataset_df = pd.DataFrame(dataset["train"])
dataset_df.head(5)
```
/mnt/disks/data/llama_index/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>restaurant_id</th>
<th>attributes</th>
<th>cuisine</th>
<th>DogsAllowed</th>
<th>embedding</th>
<th>OutdoorSeating</th>
<th>borough</th>
<th>address</th>
<th>_id</th>
<th>name</th>
<th>menu</th>
<th>TakeOut</th>
<th>location</th>
<th>PriceRange</th>
<th>HappyHour</th>
<th>review_count</th>
<th>sponsored</th>
<th>stars</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>40366661</td>
<td>{'Alcohol': ''none'', 'Ambience': '{'romantic'...</td>
<td>Tex-Mex</td>
<td>None</td>
<td>[-0.14520384, 0.018315623, -0.018330636, -0.10...</td>
<td>True</td>
<td>Manhattan</td>
<td>{'building': '627', 'coord': [-73.975980999999...</td>
<td>{'$oid': '6095a34a7c34416a90d3206b'}</td>
<td>Baby Bo'S Burritos</td>
<td>None</td>
<td>True</td>
<td>{'coordinates': [-73.97598099999999, 40.745132...</td>
<td>1.0</td>
<td>None</td>
<td>10</td>
<td>NaN</td>
<td>2.5</td>
</tr>
<tr>
<th>1</th>
<td>40367442</td>
<td>{'Alcohol': ''beer_and_wine'', 'Ambience': '{'...</td>
<td>American</td>
<td>True</td>
<td>[-0.11977468, -0.02157107, 0.0038846824, -0.09...</td>
<td>True</td>
<td>Staten Island</td>
<td>{'building': '17', 'coord': [-74.1350211, 40.6...</td>
<td>{'$oid': '6095a34a7c34416a90d3209e'}</td>
<td>Buddy'S Wonder Bar</td>
<td>[Grilled cheese sandwich, Baked potato, Lasagn...</td>
<td>True</td>
<td>{'coordinates': [-74.1350211, 40.6369042], 'ty...</td>
<td>2.0</td>
<td>None</td>
<td>62</td>
<td>NaN</td>
<td>3.5</td>
</tr>
<tr>
<th>2</th>
<td>40364610</td>
<td>{'Alcohol': ''none'', 'Ambience': '{'touristy'...</td>
<td>American</td>
<td>None</td>
<td>[-0.1004329, -0.014882699, -0.033005167, -0.09...</td>
<td>True</td>
<td>Staten Island</td>
<td>{'building': '37', 'coord': [-74.138263, 40.54...</td>
<td>{'$oid': '6095a34a7c34416a90d31ff6'}</td>
<td>Great Kills Yacht Club</td>
<td>[Mozzarella sticks, Mushroom swiss burger, Spi...</td>
<td>True</td>
<td>{'coordinates': [-74.138263, 40.546681], 'type...</td>
<td>1.0</td>
<td>None</td>
<td>72</td>
<td>NaN</td>
<td>4.0</td>
</tr>
<tr>
<th>3</th>
<td>40365288</td>
<td>{'Alcohol': None, 'Ambience': '{'touristy': Fa...</td>
<td>American</td>
<td>None</td>
<td>[-0.11735515, -0.0397448, -0.0072645755, -0.09...</td>
<td>True</td>
<td>Manhattan</td>
<td>{'building': '842', 'coord': [-73.970637000000...</td>
<td>{'$oid': '6095a34a7c34416a90d32017'}</td>
<td>Keats Restaurant</td>
<td>[French fries, Chicken pot pie, Mac & cheese, ...</td>
<td>True</td>
<td>{'coordinates': [-73.97063700000001, 40.751495...</td>
<td>2.0</td>
<td>True</td>
<td>149</td>
<td>NaN</td>
<td>4.0</td>
</tr>
<tr>
<th>4</th>
<td>40363151</td>
<td>{'Alcohol': None, 'Ambience': None, 'BYOB': No...</td>
<td>Bakery</td>
<td>None</td>
<td>[-0.096541286, -0.009661355, 0.04402167, -0.12...</td>
<td>True</td>
<td>Manhattan</td>
<td>{'building': '120', 'coord': [-73.9998042, 40....</td>
<td>{'$oid': '6095a34a7c34416a90d31fbd'}</td>
<td>Olive'S</td>
<td>[doughnuts, chocolate chip cookies, chocolate ...</td>
<td>True</td>
<td>{'coordinates': [-73.9998042, 40.7251256], 'ty...</td>
<td>1.0</td>
<td>None</td>
<td>7</td>
<td>NaN</td>
<td>5.0</td>
</tr>
</tbody>
</table>
</div>
```python
from llama_index.core.settings import Settings
from llama_index.llms.fireworks import Fireworks
from llama_index.embeddings.fireworks import FireworksEmbedding
embed_model = FireworksEmbedding(
embed_batch_size=512,
model_name="nomic-ai/nomic-embed-text-v1.5",
api_key=fw_api_key,
)
llm = Fireworks(
temperature=0,
model="accounts/fireworks/models/mixtral-8x7b-instruct",
api_key=fw_api_key,
)
Settings.llm = llm
Settings.embed_model = embed_model
```
```python
import json
from llama_index.core import Document
from llama_index.core.schema import MetadataMode
# Convert the DataFrame to a JSON string representation
documents_json = dataset_df.to_json(orient="records")
# Load the JSON string into a Python list of dictionaries
documents_list = json.loads(documents_json)
llama_documents = []
for document in documents_list:
# Value for metadata must be one of (str, int, float, None)
document["name"] = json.dumps(document["name"])
document["cuisine"] = json.dumps(document["cuisine"])
document["attributes"] = json.dumps(document["attributes"])
document["menu"] = json.dumps(document["menu"])
document["borough"] = json.dumps(document["borough"])
document["address"] = json.dumps(document["address"])
document["PriceRange"] = json.dumps(document["PriceRange"])
document["HappyHour"] = json.dumps(document["HappyHour"])
document["review_count"] = json.dumps(document["review_count"])
document["TakeOut"] = json.dumps(document["TakeOut"])
    # these two fields are not relevant to the question we want to answer,
    # so we will skip them for now
del document["embedding"]
del document["location"]
# Create a Document object with the text and excluded metadata for llm and embedding models
llama_document = Document(
text=json.dumps(document),
metadata=document,
metadata_template="{key}=>{value}",
text_template="Metadata: {metadata_str}\n-----\nContent: {content}",
)
llama_documents.append(llama_document)
# Observing an example of what the LLM and Embedding model receive as input
print(
"\nThe LLM sees this: \n",
llama_documents[0].get_content(metadata_mode=MetadataMode.LLM),
)
print(
"\nThe Embedding model sees this: \n",
llama_documents[0].get_content(metadata_mode=MetadataMode.EMBED),
)
```
The LLM sees this:
Metadata: restaurant_id=>40366661
attributes=>{"Alcohol": "'none'", "Ambience": "{'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}", "BYOB": null, "BestNights": null, "BikeParking": null, "BusinessAcceptsBitcoin": null, "BusinessAcceptsCreditCards": null, "BusinessParking": "None", "Caters": "True", "DriveThru": null, "GoodForDancing": null, "GoodForKids": "True", "GoodForMeal": null, "HasTV": "True", "Music": null, "NoiseLevel": "'average'", "RestaurantsAttire": "'casual'", "RestaurantsDelivery": "True", "RestaurantsGoodForGroups": "True", "RestaurantsReservations": "True", "RestaurantsTableService": "False", "WheelchairAccessible": "True", "WiFi": "'free'"}
cuisine=>"Tex-Mex"
DogsAllowed=>None
OutdoorSeating=>True
borough=>"Manhattan"
address=>{"building": "627", "coord": [-73.975981, 40.745132], "street": "2 Avenue", "zipcode": "10016"}
_id=>{'$oid': '6095a34a7c34416a90d3206b'}
name=>"Baby Bo'S Burritos"
menu=>null
TakeOut=>true
PriceRange=>1.0
HappyHour=>null
review_count=>10
sponsored=>None
stars=>2.5
-----
Content: {"restaurant_id": "40366661", "attributes": "{\"Alcohol\": \"'none'\", \"Ambience\": \"{'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}\", \"BYOB\": null, \"BestNights\": null, \"BikeParking\": null, \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": null, \"BusinessParking\": \"None\", \"Caters\": \"True\", \"DriveThru\": null, \"GoodForDancing\": null, \"GoodForKids\": \"True\", \"GoodForMeal\": null, \"HasTV\": \"True\", \"Music\": null, \"NoiseLevel\": \"'average'\", \"RestaurantsAttire\": \"'casual'\", \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"True\", \"RestaurantsTableService\": \"False\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"'free'\"}", "cuisine": "\"Tex-Mex\"", "DogsAllowed": null, "OutdoorSeating": true, "borough": "\"Manhattan\"", "address": "{\"building\": \"627\", \"coord\": [-73.975981, 40.745132], \"street\": \"2 Avenue\", \"zipcode\": \"10016\"}", "_id": {"$oid": "6095a34a7c34416a90d3206b"}, "name": "\"Baby Bo'S Burritos\"", "menu": "null", "TakeOut": "true", "PriceRange": "1.0", "HappyHour": "null", "review_count": "10", "sponsored": null, "stars": 2.5}
The Embedding model sees this:
Metadata: restaurant_id=>40366661
attributes=>{"Alcohol": "'none'", "Ambience": "{'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}", "BYOB": null, "BestNights": null, "BikeParking": null, "BusinessAcceptsBitcoin": null, "BusinessAcceptsCreditCards": null, "BusinessParking": "None", "Caters": "True", "DriveThru": null, "GoodForDancing": null, "GoodForKids": "True", "GoodForMeal": null, "HasTV": "True", "Music": null, "NoiseLevel": "'average'", "RestaurantsAttire": "'casual'", "RestaurantsDelivery": "True", "RestaurantsGoodForGroups": "True", "RestaurantsReservations": "True", "RestaurantsTableService": "False", "WheelchairAccessible": "True", "WiFi": "'free'"}
cuisine=>"Tex-Mex"
DogsAllowed=>None
OutdoorSeating=>True
borough=>"Manhattan"
address=>{"building": "627", "coord": [-73.975981, 40.745132], "street": "2 Avenue", "zipcode": "10016"}
_id=>{'$oid': '6095a34a7c34416a90d3206b'}
name=>"Baby Bo'S Burritos"
menu=>null
TakeOut=>true
PriceRange=>1.0
HappyHour=>null
review_count=>10
sponsored=>None
stars=>2.5
-----
Content: {"restaurant_id": "40366661", "attributes": "{\"Alcohol\": \"'none'\", \"Ambience\": \"{'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'divey': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}\", \"BYOB\": null, \"BestNights\": null, \"BikeParking\": null, \"BusinessAcceptsBitcoin\": null, \"BusinessAcceptsCreditCards\": null, \"BusinessParking\": \"None\", \"Caters\": \"True\", \"DriveThru\": null, \"GoodForDancing\": null, \"GoodForKids\": \"True\", \"GoodForMeal\": null, \"HasTV\": \"True\", \"Music\": null, \"NoiseLevel\": \"'average'\", \"RestaurantsAttire\": \"'casual'\", \"RestaurantsDelivery\": \"True\", \"RestaurantsGoodForGroups\": \"True\", \"RestaurantsReservations\": \"True\", \"RestaurantsTableService\": \"False\", \"WheelchairAccessible\": \"True\", \"WiFi\": \"'free'\"}", "cuisine": "\"Tex-Mex\"", "DogsAllowed": null, "OutdoorSeating": true, "borough": "\"Manhattan\"", "address": "{\"building\": \"627\", \"coord\": [-73.975981, 40.745132], \"street\": \"2 Avenue\", \"zipcode\": \"10016\"}", "_id": {"$oid": "6095a34a7c34416a90d3206b"}, "name": "\"Baby Bo'S Burritos\"", "menu": "null", "TakeOut": "true", "PriceRange": "1.0", "HappyHour": "null", "review_count": "10", "sponsored": null, "stars": 2.5}
```python
llama_documents[0]
```
Document(id_='93d3f08d-85f3-494d-a057-19bc834abc29', embedding=None, metadata={'restaurant_id': '40366661', 'attributes': '{"Alcohol": "\'none\'", "Ambience": "{\'romantic\': False, \'intimate\': False, \'classy\': False, \'hipster\': False, \'divey\': False, \'touristy\': False, \'trendy\': False, \'upscale\': False, \'casual\': False}", "BYOB": null, "BestNights": null, "BikeParking": null, "BusinessAcceptsBitcoin": null, "BusinessAcceptsCreditCards": null, "BusinessParking": "None", "Caters": "True", "DriveThru": null, "GoodForDancing": null, "GoodForKids": "True", "GoodForMeal": null, "HasTV": "True", "Music": null, "NoiseLevel": "\'average\'", "RestaurantsAttire": "\'casual\'", "RestaurantsDelivery": "True", "RestaurantsGoodForGroups": "True", "RestaurantsReservations": "True", "RestaurantsTableService": "False", "WheelchairAccessible": "True", "WiFi": "\'free\'"}', 'cuisine': '"Tex-Mex"', 'DogsAllowed': None, 'OutdoorSeating': True, 'borough': '"Manhattan"', 'address': '{"building": "627", "coord": [-73.975981, 40.745132], "street": "2 Avenue", "zipcode": "10016"}', '_id': {'$oid': '6095a34a7c34416a90d3206b'}, 'name': '"Baby Bo\'S Burritos"', 'menu': 'null', 'TakeOut': 'true', 'PriceRange': '1.0', 'HappyHour': 'null', 'review_count': '10', 'sponsored': None, 'stars': 2.5}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text='{"restaurant_id": "40366661", "attributes": "{\\"Alcohol\\": \\"\'none\'\\", \\"Ambience\\": \\"{\'romantic\': False, \'intimate\': False, \'classy\': False, \'hipster\': False, \'divey\': False, \'touristy\': False, \'trendy\': False, \'upscale\': False, \'casual\': False}\\", \\"BYOB\\": null, \\"BestNights\\": null, \\"BikeParking\\": null, \\"BusinessAcceptsBitcoin\\": null, \\"BusinessAcceptsCreditCards\\": null, \\"BusinessParking\\": \\"None\\", \\"Caters\\": \\"True\\", \\"DriveThru\\": null, \\"GoodForDancing\\": null, \\"GoodForKids\\": \\"True\\", \\"GoodForMeal\\": null, \\"HasTV\\": \\"True\\", \\"Music\\": null, \\"NoiseLevel\\": \\"\'average\'\\", \\"RestaurantsAttire\\": \\"\'casual\'\\", \\"RestaurantsDelivery\\": \\"True\\", \\"RestaurantsGoodForGroups\\": \\"True\\", \\"RestaurantsReservations\\": \\"True\\", \\"RestaurantsTableService\\": \\"False\\", \\"WheelchairAccessible\\": \\"True\\", \\"WiFi\\": \\"\'free\'\\"}", "cuisine": "\\"Tex-Mex\\"", "DogsAllowed": null, "OutdoorSeating": true, "borough": "\\"Manhattan\\"", "address": "{\\"building\\": \\"627\\", \\"coord\\": [-73.975981, 40.745132], \\"street\\": \\"2 Avenue\\", \\"zipcode\\": \\"10016\\"}", "_id": {"$oid": "6095a34a7c34416a90d3206b"}, "name": "\\"Baby Bo\'S Burritos\\"", "menu": "null", "TakeOut": "true", "PriceRange": "1.0", "HappyHour": "null", "review_count": "10", "sponsored": null, "stars": 2.5}', start_char_idx=None, end_char_idx=None, text_template='Metadata: {metadata_str}\n-----\nContent: {content}', metadata_template='{key}=>{value}', metadata_seperator='\n')
```python
from llama_index.core.node_parser import SentenceSplitter
parser = SentenceSplitter()
nodes = parser.get_nodes_from_documents(llama_documents)
# 25k nodes takes about 10 minutes, will trim it down to 2.5k
new_nodes = nodes[:2500]
# There are 25k documents, so we need to do batching. Fortunately LlamaIndex provides good batching
# for embedding models, and we are going to rely on the __call__ method for the model to handle this
node_embeddings = embed_model(new_nodes)
```
```python
for idx, n in enumerate(new_nodes):
n.embedding = node_embeddings[idx].embedding
if "_id" in n.metadata:
del n.metadata["_id"]
```
Ensure your database, collection, and vector search index are set up on MongoDB Atlas, or the following step won't work appropriately.
- For assistance with database cluster setup and obtaining the URI, refer to this [guide](https://www.mongodb.com/docs/guides/atlas/cluster/) for setting up a MongoDB cluster, and this [guide](https://www.mongodb.com/docs/guides/atlas/connection-string/) to get your connection string.
- Once you have successfully created a cluster, create the database and collection within the MongoDB Atlas cluster by clicking “+ Create Database”. The database will be named `whatscooking`, and the collection will be named `restaurants`, matching the code below.
- Creating a vector search index within the `restaurants` collection is essential for efficient document retrieval from MongoDB into our development environment. To achieve this, refer to the official [guide](https://www.mongodb.com/docs/atlas/atlas-vector-search/create-index/) on vector search index creation; a sketch of the index definition follows this list.
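For reference, the Atlas vector search index definition for this setup might look like the JSON below. This is a hedged sketch: it assumes the store's default `embedding` field and the 768-dimensional vectors produced by `nomic-ai/nomic-embed-text-v1.5`; create it under the name `vector_index` to match the code that follows.

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 768,
      "similarity": "cosine"
    }
  ]
}
```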
```python
import pymongo
def get_mongo_client(mongo_uri):
"""Establish connection to the MongoDB."""
try:
client = pymongo.MongoClient(mongo_uri)
print("Connection to MongoDB successful")
return client
except pymongo.errors.ConnectionFailure as e:
print(f"Connection failed: {e}")
return None
# set up the MongoDB connection URI
import os
import getpass
mongo_uri = getpass.getpass("MONGO_URI:")
if not mongo_uri:
print("MONGO_URI not set")
mongo_client = get_mongo_client(mongo_uri)
DB_NAME = "whatscooking"
COLLECTION_NAME = "restaurants"
db = mongo_client[DB_NAME]
collection = db[COLLECTION_NAME]
```
Connection to MongoDB successful
```python
# To ensure we are working with a fresh collection
# delete any existing records in the collection
collection.delete_many({})
```
DeleteResult({'n': 0, 'electionId': ObjectId('7fffffff00000000000001ce'), 'opTime': {'ts': Timestamp(1708970193, 3), 't': 462}, 'ok': 1.0, '$clusterTime': {'clusterTime': Timestamp(1708970193, 3), 'signature': {'hash': b'\x9a3H8\xa1\x1b\xb6\xbb\xa9\xc3x\x17\x1c\xeb\xe9\x03\xaa\xf8\xf17', 'keyId': 7294687148333072386}}, 'operationTime': Timestamp(1708970193, 3)}, acknowledged=True)
```python
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch
vector_store = MongoDBAtlasVectorSearch(
mongo_client,
db_name=DB_NAME,
collection_name=COLLECTION_NAME,
index_name="vector_index",
)
vector_store.add(new_nodes)
```
Now make sure you have created the Atlas vector search index with the matching name (`vector_index`, as passed to `MongoDBAtlasVectorSearch` above).
```python
from llama_index.core import VectorStoreIndex, StorageContext
index = VectorStoreIndex.from_vector_store(vector_store)
```
```python
%pip install -q matplotlib
```
Note: you may need to restart the kernel to use updated packages.
```python
import pprint
from llama_index.core.response.notebook_utils import display_response
query_engine = index.as_query_engine()
query = "search query: Anything that doesn't have alcohol in it"
response = query_engine.query(query)
display_response(response)
pprint.pprint(response.source_nodes)
```
**`Final Response:`** Based on the context provided, two restaurant options that don't serve alcohol are:
1. "Academy Restauraunt" in Brooklyn, which serves American cuisine and has a variety of dishes such as Mozzarella sticks, Cheeseburger, Baked potato, Breadsticks, Caesar salad, Chicken parmesan, Pigs in a blanket, Chicken soup, Mac & cheese, Mushroom swiss burger, Spaghetti with meatballs, and Mashed potatoes.
2. "Gabriel'S Bar & Grill" in Manhattan, which specializes in Italian cuisine and offers dishes like Cheese Ravioli, Neapolitan Pizza, assorted gelato, Vegetarian Baked Ziti, Vegetarian Broccoli Pizza, Lasagna, Buca Trio Platter, Spinach Ravioli, Pasta with ricotta cheese, Spaghetti, Fried calamari, and Alfredo Pizza.
Both restaurants offer outdoor seating, are kid-friendly, and have a casual dress code. They also provide take-out service and have happy hour promotions.
[NodeWithScore(node=TextNode(id_='5405e68c-19f2-4a65-95d7-f880fa6a8deb', embedding=None, metadata={'restaurant_id': '40385767', 'attributes': '{"Alcohol": "u\'beer_and_wine\'", "Ambience": "{\'touristy\': False, \'hipster\': False, \'romantic\': False, \'divey\': False, \'intimate\': None, \'trendy\': None, \'upscale\': False, \'classy\': False, \'casual\': True}", "BYOB": null, "BestNights": "{\'monday\': False, \'tuesday\': False, \'friday\': True, \'wednesday\': False, \'thursday\': False, \'sunday\': False, \'saturday\': True}", "BikeParking": "True", "BusinessAcceptsBitcoin": "False", "BusinessAcceptsCreditCards": "True", "BusinessParking": "{\'garage\': False, \'street\': False, \'validated\': False, \'lot\': True, \'valet\': False}", "Caters": "True", "DriveThru": null, "GoodForDancing": "False", "GoodForKids": "True", "GoodForMeal": "{\'dessert\': False, \'latenight\': False, \'lunch\': True, \'dinner\': True, \'brunch\': False, \'breakfast\': False}", "HasTV": "True", "Music": "{\'dj\': False, \'background_music\': False, \'no_music\': False, \'jukebox\': False, \'live\': False, \'video\': False, \'karaoke\': False}", "NoiseLevel": "u\'average\'", "RestaurantsAttire": "u\'casual\'", "RestaurantsDelivery": "None", "RestaurantsGoodForGroups": "True", "RestaurantsReservations": "True", "RestaurantsTableService": "True", "WheelchairAccessible": "True", "WiFi": "u\'free\'"}', 'cuisine': '"American"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '"Brooklyn"', 'address': '{"building": "69", "coord": [-73.9757464, 40.687295], "street": "Lafayette Avenue", "zipcode": "11217"}', 'name': '"Academy Restauraunt"', 'menu': '["Mozzarella sticks", "Cheeseburger", "Baked potato", "Breadsticks", "Caesar salad", "Chicken parmesan", "Pigs in a blanket", "Chicken soup", "Mac & cheese", "Mushroom swiss burger", "Spaghetti with meatballs", "Mashed potatoes"]', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '173', 'sponsored': None, 'stars': 4.5}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='bbfc4bf5-d9c3-4f3b-8c1f-ddcf94f3b5df', node_type=<ObjectType.DOCUMENT: '4'>, metadata={'restaurant_id': '40385767', 'attributes': '{"Alcohol": "u\'beer_and_wine\'", "Ambience": "{\'touristy\': False, \'hipster\': False, \'romantic\': False, \'divey\': False, \'intimate\': None, \'trendy\': None, \'upscale\': False, \'classy\': False, \'casual\': True}", "BYOB": null, "BestNights": "{\'monday\': False, \'tuesday\': False, \'friday\': True, \'wednesday\': False, \'thursday\': False, \'sunday\': False, \'saturday\': True}", "BikeParking": "True", "BusinessAcceptsBitcoin": "False", "BusinessAcceptsCreditCards": "True", "BusinessParking": "{\'garage\': False, \'street\': False, \'validated\': False, \'lot\': True, \'valet\': False}", "Caters": "True", "DriveThru": null, "GoodForDancing": "False", "GoodForKids": "True", "GoodForMeal": "{\'dessert\': False, \'latenight\': False, \'lunch\': True, \'dinner\': True, \'brunch\': False, \'breakfast\': False}", "HasTV": "True", "Music": "{\'dj\': False, \'background_music\': False, \'no_music\': False, \'jukebox\': False, \'live\': False, \'video\': False, \'karaoke\': False}", "NoiseLevel": "u\'average\'", "RestaurantsAttire": "u\'casual\'", "RestaurantsDelivery": "None", "RestaurantsGoodForGroups": "True", "RestaurantsReservations": "True", "RestaurantsTableService": "True", "WheelchairAccessible": "True", "WiFi": "u\'free\'"}', 'cuisine': 
'"American"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '"Brooklyn"', 'address': '{"building": "69", "coord": [-73.9757464, 40.687295], "street": "Lafayette Avenue", "zipcode": "11217"}', '_id': {'$oid': '6095a34a7c34416a90d322d1'}, 'name': '"Academy Restauraunt"', 'menu': '["Mozzarella sticks", "Cheeseburger", "Baked potato", "Breadsticks", "Caesar salad", "Chicken parmesan", "Pigs in a blanket", "Chicken soup", "Mac & cheese", "Mushroom swiss burger", "Spaghetti with meatballs", "Mashed potatoes"]', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '173', 'sponsored': None, 'stars': 4.5}, hash='df7870b3103572b05e98091e4d4b52b238175eb08558831b621b6832c0472c2e'), <NodeRelationship.PREVIOUS: '2'>: RelatedNodeInfo(node_id='5fbb14fe-c8a8-4c4c-930d-2e07e4f77b47', node_type=<ObjectType.TEXT: '1'>, metadata={'restaurant_id': '40377111', 'attributes': '{"Alcohol": null, "Ambience": null, "BYOB": null, "BestNights": null, "BikeParking": "True", "BusinessAcceptsBitcoin": null, "BusinessAcceptsCreditCards": "False", "BusinessParking": "{\'garage\': False, \'street\': True, \'validated\': False, \'lot\': False, \'valet\': False}", "Caters": null, "DriveThru": "True", "GoodForDancing": null, "GoodForKids": null, "GoodForMeal": null, "HasTV": null, "Music": null, "NoiseLevel": null, "RestaurantsAttire": null, "RestaurantsDelivery": "True", "RestaurantsGoodForGroups": null, "RestaurantsReservations": null, "RestaurantsTableService": null, "WheelchairAccessible": null, "WiFi": null}', 'cuisine': '"American"', 'DogsAllowed': None, 'OutdoorSeating': None, 'borough': '"Manhattan"', 'address': '{"building": "1207", "coord": [-73.9592644, 40.8088612], "street": "Amsterdam Avenue", "zipcode": "10027"}', '_id': {'$oid': '6095a34a7c34416a90d321d6'}, 'name': '"Amsterdam Restaurant & Tapas Lounge"', 'menu': '["Green salad", "Cheddar Biscuits", "Lasagna", "Chicken parmesan", "Chicken soup", "Pigs in a blanket", "Caesar salad", "French fries", "Baked potato", "Mushroom swiss burger", "Grilled cheese sandwich", "Fried chicken"]', 'TakeOut': 'true', 'PriceRange': '1.0', 'HappyHour': 'null', 'review_count': '6', 'sponsored': None, 'stars': 5.0}, hash='1261332dd67be495d0639f41b5f6462f87a41aabe20367502ef28074bf13e561'), <NodeRelationship.NEXT: '3'>: RelatedNodeInfo(node_id='10ad1a23-3237-4b68-808d-58fd7b7e5cb6', node_type=<ObjectType.TEXT: '1'>, metadata={}, hash='bc64dca2f9210693c3d5174aec305f25b68d080be65a0ae52f9a560f99992bb0')}, text='{"restaurant_id": "40385767", "attributes": "{\\"Alcohol\\": \\"u\'beer_and_wine\'\\", \\"Ambience\\": \\"{\'touristy\': False, \'hipster\': False, \'romantic\': False, \'divey\': False, \'intimate\': None, \'trendy\': None, \'upscale\': False, \'classy\': False, \'casual\': True}\\", \\"BYOB\\": null, \\"BestNights\\": \\"{\'monday\': False, \'tuesday\': False, \'friday\': True, \'wednesday\': False, \'thursday\': False, \'sunday\': False, \'saturday\': True}\\", \\"BikeParking\\": \\"True\\", \\"BusinessAcceptsBitcoin\\": \\"False\\", \\"BusinessAcceptsCreditCards\\": \\"True\\", \\"BusinessParking\\": \\"{\'garage\': False, \'street\': False, \'validated\': False, \'lot\': True, \'valet\': False}\\", \\"Caters\\": \\"True\\", \\"DriveThru\\": null, \\"GoodForDancing\\": \\"False\\", \\"GoodForKids\\": \\"True\\", \\"GoodForMeal\\": \\"{\'dessert\': False, \'latenight\': False, \'lunch\': True, \'dinner\': True, \'brunch\': False, \'breakfast\': False}\\", \\"HasTV\\": \\"True\\", \\"Music\\": \\"{\'dj\': False, \'background_music\': False, 
\'no_music\': False, \'jukebox\': False, \'live\': False, \'video\': False, \'karaoke\': False}\\", \\"NoiseLevel\\": \\"u\'average\'\\", \\"RestaurantsAttire\\": \\"u\'casual\'\\", \\"RestaurantsDelivery\\": \\"None\\", \\"RestaurantsGoodForGroups\\": \\"True\\", \\"RestaurantsReservations\\": \\"True\\", \\"RestaurantsTableService\\": \\"True\\", \\"WheelchairAccessible\\": \\"True\\", \\"WiFi\\": \\"u\'free\'\\"}", "cuisine": "\\"American\\"", "DogsAllowed": true, "OutdoorSeating": true, "borough": "\\"Brooklyn\\"",', start_char_idx=0, end_char_idx=1415, text_template='Metadata: {metadata_str}\n-----\nContent: {content}', metadata_template='{key}=>{value}', metadata_seperator='\n'), score=0.7296431064605713),
NodeWithScore(node=TextNode(id_='9cd153ba-2ab8-40aa-90f0-9da5ae24c632', embedding=None, metadata={'restaurant_id': '40392690', 'attributes': '{"Alcohol": "u\'full_bar\'", "Ambience": "{\'touristy\': None, \'hipster\': True, \'romantic\': False, \'divey\': False, \'intimate\': None, \'trendy\': True, \'upscale\': None, \'classy\': True, \'casual\': True}", "BYOB": "False", "BestNights": "{\'monday\': False, \'tuesday\': False, \'friday\': True, \'wednesday\': False, \'thursday\': False, \'sunday\': False, \'saturday\': False}", "BikeParking": "True", "BusinessAcceptsBitcoin": null, "BusinessAcceptsCreditCards": "True", "BusinessParking": "{\'garage\': False, \'street\': True, \'validated\': False, \'lot\': False, \'valet\': False}", "Caters": "True", "DriveThru": "False", "GoodForDancing": "False", "GoodForKids": "True", "GoodForMeal": "{\'dessert\': None, \'latenight\': None, \'lunch\': True, \'dinner\': True, \'brunch\': False, \'breakfast\': False}", "HasTV": "False", "Music": "{\'dj\': False, \'background_music\': False, \'no_music\': False, \'jukebox\': False, \'live\': False, \'video\': False, \'karaoke\': False}", "NoiseLevel": "u\'average\'", "RestaurantsAttire": "\'casual\'", "RestaurantsDelivery": "True", "RestaurantsGoodForGroups": "True", "RestaurantsReservations": "False", "RestaurantsTableService": "True", "WheelchairAccessible": "True", "WiFi": "\'free\'"}', 'cuisine': '"Italian"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '"Manhattan"', 'address': '{"building": "11", "coord": [-73.9828696, 40.7693649], "street": "West 60 Street", "zipcode": "10023"}', 'name': '"Gabriel\'S Bar & Grill"', 'menu': '["Cheese Ravioli", "Neapolitan Pizza", "assorted gelato", "Vegetarian Baked Ziti", "Vegetarian Broccoli Pizza", "Lasagna", "Buca Trio Platter", "Spinach Ravioli", "Pasta with ricotta cheese", "Spaghetti", "Fried calimari", "Alfredo Pizza"]', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '333', 'sponsored': None, 'stars': 4.0}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='77584933-8286-4277-bc56-bed76adcfd37', node_type=<ObjectType.DOCUMENT: '4'>, metadata={'restaurant_id': '40392690', 'attributes': '{"Alcohol": "u\'full_bar\'", "Ambience": "{\'touristy\': None, \'hipster\': True, \'romantic\': False, \'divey\': False, \'intimate\': None, \'trendy\': True, \'upscale\': None, \'classy\': True, \'casual\': True}", "BYOB": "False", "BestNights": "{\'monday\': False, \'tuesday\': False, \'friday\': True, \'wednesday\': False, \'thursday\': False, \'sunday\': False, \'saturday\': False}", "BikeParking": "True", "BusinessAcceptsBitcoin": null, "BusinessAcceptsCreditCards": "True", "BusinessParking": "{\'garage\': False, \'street\': True, \'validated\': False, \'lot\': False, \'valet\': False}", "Caters": "True", "DriveThru": "False", "GoodForDancing": "False", "GoodForKids": "True", "GoodForMeal": "{\'dessert\': None, \'latenight\': None, \'lunch\': True, \'dinner\': True, \'brunch\': False, \'breakfast\': False}", "HasTV": "False", "Music": "{\'dj\': False, \'background_music\': False, \'no_music\': False, \'jukebox\': False, \'live\': False, \'video\': False, \'karaoke\': False}", "NoiseLevel": "u\'average\'", "RestaurantsAttire": "\'casual\'", "RestaurantsDelivery": "True", "RestaurantsGoodForGroups": "True", "RestaurantsReservations": "False", "RestaurantsTableService": "True", "WheelchairAccessible": "True", "WiFi": "\'free\'"}', 'cuisine': 
'"Italian"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '"Manhattan"', 'address': '{"building": "11", "coord": [-73.9828696, 40.7693649], "street": "West 60 Street", "zipcode": "10023"}', '_id': {'$oid': '6095a34b7c34416a90d3243a'}, 'name': '"Gabriel\'S Bar & Grill"', 'menu': '["Cheese Ravioli", "Neapolitan Pizza", "assorted gelato", "Vegetarian Baked Ziti", "Vegetarian Broccoli Pizza", "Lasagna", "Buca Trio Platter", "Spinach Ravioli", "Pasta with ricotta cheese", "Spaghetti", "Fried calimari", "Alfredo Pizza"]', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '333', 'sponsored': None, 'stars': 4.0}, hash='c4dcc57a697cd2fe3047a280573c0f54bc5236e1d5af2228737af77613c9dbf7'), <NodeRelationship.PREVIOUS: '2'>: RelatedNodeInfo(node_id='6e1ead27-3679-48fb-b160-b47db523a3ce', node_type=<ObjectType.TEXT: '1'>, metadata={'restaurant_id': '40392496', 'attributes': '{"Alcohol": "u\'none\'", "Ambience": "{\'touristy\': False, \'hipster\': False, \'romantic\': False, \'intimate\': None, \'trendy\': False, \'upscale\': False, \'classy\': False, \'casual\': True}", "BYOB": null, "BestNights": null, "BikeParking": "True", "BusinessAcceptsBitcoin": null, "BusinessAcceptsCreditCards": null, "BusinessParking": "{\'garage\': False, \'street\': True, \'validated\': False, \'lot\': False, \'valet\': False}", "Caters": "False", "DriveThru": null, "GoodForDancing": null, "GoodForKids": "True", "GoodForMeal": "{\'dessert\': False, \'latenight\': False, \'lunch\': True, \'dinner\': True, \'brunch\': None, \'breakfast\': False}", "HasTV": "True", "Music": null, "NoiseLevel": "u\'average\'", "RestaurantsAttire": "u\'casual\'", "RestaurantsDelivery": "True", "RestaurantsGoodForGroups": "False", "RestaurantsReservations": "False", "RestaurantsTableService": "True", "WheelchairAccessible": null, "WiFi": "\'free\'"}', 'cuisine': '"English"', 'DogsAllowed': True, 'OutdoorSeating': True, 'borough': '"Manhattan"', 'address': '{"building": "253", "coord": [-74.0034571, 40.736351], "street": "West 11 Street", "zipcode": "10014"}', '_id': {'$oid': '6095a34b7c34416a90d32435'}, 'name': '"Tartine"', 'menu': 'null', 'TakeOut': 'true', 'PriceRange': '2.0', 'HappyHour': 'true', 'review_count': '436', 'sponsored': None, 'stars': 4.5}, hash='146bffad5c816926ec1008d966caab7c0df675251ccca5de860f8a2160bb7a34'), <NodeRelationship.NEXT: '3'>: RelatedNodeInfo(node_id='6640911b-3d8e-4bad-a016-4c3d91444b0c', node_type=<ObjectType.TEXT: '1'>, metadata={}, hash='39984a7534d6755344f0887e0d6a200eaab562a7dc492afe292040c0022282bd')}, text='{"restaurant_id": "40392690", "attributes": "{\\"Alcohol\\": \\"u\'full_bar\'\\", \\"Ambience\\": \\"{\'touristy\': None, \'hipster\': True, \'romantic\': False, \'divey\': False, \'intimate\': None, \'trendy\': True, \'upscale\': None, \'classy\': True, \'casual\': True}\\", \\"BYOB\\": \\"False\\", \\"BestNights\\": \\"{\'monday\': False, \'tuesday\': False, \'friday\': True, \'wednesday\': False, \'thursday\': False, \'sunday\': False, \'saturday\': False}\\", \\"BikeParking\\": \\"True\\", \\"BusinessAcceptsBitcoin\\": null, \\"BusinessAcceptsCreditCards\\": \\"True\\", \\"BusinessParking\\": \\"{\'garage\': False, \'street\': True, \'validated\': False, \'lot\': False, \'valet\': False}\\", \\"Caters\\": \\"True\\", \\"DriveThru\\": \\"False\\", \\"GoodForDancing\\": \\"False\\", \\"GoodForKids\\": \\"True\\", \\"GoodForMeal\\": \\"{\'dessert\': None, \'latenight\': None, \'lunch\': True, \'dinner\': True, \'brunch\': False, \'breakfast\': False}\\", \\"HasTV\\": 
\\"False\\", \\"Music\\": \\"{\'dj\': False, \'background_music\': False, \'no_music\': False, \'jukebox\': False, \'live\': False, \'video\': False, \'karaoke\': False}\\", \\"NoiseLevel\\": \\"u\'average\'\\", \\"RestaurantsAttire\\": \\"\'casual\'\\", \\"RestaurantsDelivery\\": \\"True\\", \\"RestaurantsGoodForGroups\\": \\"True\\", \\"RestaurantsReservations\\": \\"False\\", \\"RestaurantsTableService\\": \\"True\\", \\"WheelchairAccessible\\": \\"True\\", \\"WiFi\\": \\"\'free\'\\"}", "cuisine": "\\"Italian\\"", "DogsAllowed": true, "OutdoorSeating": true,', start_char_idx=0, end_char_idx=1382, text_template='Metadata: {metadata_str}\n-----\nContent: {content}', metadata_template='{key}=>{value}', metadata_seperator='\n'), score=0.7284677028656006)] |
# Amazon Neptune - Neptune Analytics vector store
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-neptune
```
## Initialize the Neptune Analytics vector wrapper
```python
from llama_index.vector_stores.neptune import NeptuneAnalyticsVectorStore
graph_identifier = ""
embed_dim = 1536
neptune_vector_store = NeptuneAnalyticsVectorStore(
    graph_identifier=graph_identifier, embedding_dimension=embed_dim
)
```
## Load documents, build the VectorStoreIndex
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from IPython.display import Markdown, display
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
```
```python
from llama_index.core import StorageContext
storage_context = StorageContext.from_defaults(
vector_store=neptune_vector_store
)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
```python
query_engine = index.as_query_engine()
response = query_engine.query("What happened at interleaf?")
display(Markdown(f"<b>{response}</b>"))
```
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/SimpleIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Simple Vector Store
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
!pip install llama-index
```
```python
import os
import openai
os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]
```
#### Load documents, build the VectorStoreIndex
```python
import nltk
nltk.download("stopwords")
```
[nltk_data] Downloading package stopwords to
[nltk_data] /Users/jerryliu/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
True
```python
import llama_index.core
```
[nltk_data] Downloading package stopwords to /Users/jerryliu/Programmi
[nltk_data] ng/gpt_index/.venv/lib/python3.10/site-
[nltk_data] packages/llama_index/core/_static/nltk_cache...
[nltk_data] Unzipping corpora/stopwords.zip.
[nltk_data] Downloading package punkt to /Users/jerryliu/Programming/g
[nltk_data] pt_index/.venv/lib/python3.10/site-
[nltk_data] packages/llama_index/core/_static/nltk_cache...
[nltk_data] Unzipping tokenizers/punkt.zip.
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import (
VectorStoreIndex,
SimpleDirectoryReader,
load_index_from_storage,
StorageContext,
)
from IPython.display import Markdown, display
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
--2024-02-12 13:21:13-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75042 (73K) [text/plain]
Saving to: ‘data/paul_graham/paul_graham_essay.txt’
data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.02s
2024-02-12 13:21:13 (4.76 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
```
```python
index = VectorStoreIndex.from_documents(documents)
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
```python
# save index to disk
index.set_index_id("vector_index")
index.storage_context.persist("./storage")
```
```python
# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="storage")
# load index
index = load_index_from_storage(storage_context, index_id="vector_index")
```
INFO:llama_index.core.indices.loading:Loading indices with ids: ['vector_index']
Loading indices with ids: ['vector_index']
#### Query Index
```python
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("What did the author do growing up?")
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
```python
display(Markdown(f"<b>{response}</b>"))
```
<b>The author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later transitioned to working with microcomputers, starting with a kit-built microcomputer and eventually acquiring a TRS-80. They wrote simple games, a program to predict rocket heights, and even a word processor. Although the author initially planned to study philosophy in college, they eventually switched to studying AI.</b>
**Query Index with SVM/Linear Regression**
Use Karpathy's [SVM-based](https://twitter.com/karpathy/status/1647025230546886658?s=20) approach: set the query as the positive example and all other datapoints as negative examples, then fit a hyperplane.
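For intuition, here is a minimal, self-contained sketch of the idea outside of LlamaIndex; the `svm_rank` helper and its hyperparameters are illustrative assumptions, not the library's internals:

```python
import numpy as np
from sklearn import svm


def svm_rank(query_emb: np.ndarray, doc_embs: np.ndarray, top_k: int = 2):
    # The query embedding is the lone positive example; every document
    # embedding is a negative example.
    x = np.concatenate([query_emb[None, :], doc_embs])
    y = np.zeros(len(doc_embs) + 1)
    y[0] = 1
    # dual="auto" needs scikit-learn >= 1.3 and silences the FutureWarning
    # seen in the output below.
    clf = svm.LinearSVC(class_weight="balanced", C=0.1, max_iter=10000, dual="auto")
    clf.fit(x, y)
    # Signed distance to the hyperplane: higher means closer to the query side.
    scores = clf.decision_function(doc_embs)
    return np.argsort(-scores)[:top_k]
```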
```python
query_modes = [
"svm",
"linear_regression",
"logistic_regression",
]
for query_mode in query_modes:
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(vector_store_query_mode=query_mode)
response = query_engine.query("What did the author do growing up?")
print(f"Query mode: {query_mode}")
display(Markdown(f"<b>{response}</b>"))
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
/Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/sklearn/svm/_classes.py:31: FutureWarning: The default value of `dual` will change from `True` to `'auto'` in 1.5. Set the value of `dual` explicitly to suppress the warning.
warnings.warn(
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Query mode: svm
<b>The author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later got a microcomputer and started programming on it, writing simple games and a word processor. They initially planned to study philosophy in college but ended up switching to AI.</b>
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
/Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/sklearn/svm/_classes.py:31: FutureWarning: The default value of `dual` will change from `True` to `'auto'` in 1.5. Set the value of `dual` explicitly to suppress the warning.
warnings.warn(
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Query mode: linear_regression
<b>The author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later got a microcomputer and started programming on it, writing simple games and a word processor. They initially planned to study philosophy in college but ended up switching to AI.</b>
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
/Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/sklearn/svm/_classes.py:31: FutureWarning: The default value of `dual` will change from `True` to `'auto'` in 1.5. Set the value of `dual` explicitly to suppress the warning.
warnings.warn(
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Query mode: logistic_regression
<b>The author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later got a microcomputer and started programming on it, writing simple games and a word processor. They initially planned to study philosophy in college but eventually switched to AI.</b>
```python
display(Markdown(f"<b>{response}</b>"))
```
<b>The author wrote short stories and also worked on programming, specifically on an IBM 1401 computer in 9th grade. They later got a microcomputer and started programming on it, writing simple games and a word processor. They initially planned to study philosophy in college but eventually switched to AI.</b>
```python
print(response.source_nodes[0].text)
```
What I Worked On
February 2021
Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.
The first programs I tried writing were on the IBM 1401 that our school district used for what was then called "data processing." This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.
The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in the card reader and press a button to load the program into memory and run it. The result would ordinarily be to print something on the spectacularly loud printer.
I was puzzled by the 1401. I couldn't figure out what to do with it. And in retrospect there's not much I could have done with it. The only form of input to programs was data stored on punched cards, and I didn't have any data stored on punched cards. The only other option was to do things that didn't rely on any input, like calculate approximations of pi, but I didn't know enough math to do anything interesting of that type. So I'm not surprised I can't remember any programs I wrote, because they can't have done much. My clearest memory is of the moment I learned it was possible for programs not to terminate, when one of mine didn't. On a machine without time-sharing, this was a social as well as a technical error, as the data center manager's expression made clear.
With microcomputers, everything changed. Now you could have a computer sitting right in front of you, on a desk, that could respond to your keystrokes as it was running instead of just churning through a stack of punch cards and then stopping. [1]
The first of my friends to get a microcomputer built it himself. It was sold as a kit by Heathkit. I remember vividly how impressed and envious I felt watching him sitting in front of it, typing programs right into the computer.
Computers were expensive in those days and it took me years of nagging before I convinced my father to buy one, a TRS-80, in about 1980. The gold standard then was the Apple II, but a TRS-80 was good enough. This was when I really started programming. I wrote simple games, a program to predict how high my model rockets would fly, and a word processor that my father used to write at least one book. There was only room in memory for about 2 pages of text, so he'd write 2 pages at a time and then print them out, but it was a lot better than a typewriter.
Though I liked programming, I didn't plan to study it in college. In college I was going to study philosophy, which sounded much more powerful. It seemed, to my naive high school self, to be the study of the ultimate truths, compared to which the things studied in other fields would be mere domain knowledge. What I discovered when I got to college was that the other fields took up so much of the space of ideas that there wasn't much left for these supposed ultimate truths. All that seemed left for philosophy were edge cases that people in other fields felt could safely be ignored.
I couldn't have put this into words when I was 18. All I knew at the time was that I kept taking philosophy courses and they kept being boring. So I decided to switch to AI.
AI was in the air in the mid 1980s, but there were two things especially that made me want to work on it: a novel by Heinlein called The Moon is a Harsh Mistress, which featured an intelligent computer called Mike, and a PBS documentary that showed Terry Winograd using SHRDLU. I haven't tried rereading The Moon is a Harsh Mistress, so I don't know how well it has aged, but when I read it I was drawn entirely into its world. It seemed only a matter of time before we'd have Mike, and when I saw Winograd using SHRDLU, it seemed like that time would be a few years at most.
**Query Index with custom embedding string**
```python
from llama_index.core import QueryBundle
```
```python
query_bundle = QueryBundle(
query_str="What did the author do growing up?",
custom_embedding_strs=["The author grew up painting."],
)
query_engine = index.as_query_engine()
response = query_engine.query(query_bundle)
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
```python
display(Markdown(f"<b>{response}</b>"))
```
<b>The context does not provide information about what the author did growing up.</b>
**Use maximum marginal relevance**
Instead of ranking vectors purely by similarity, maximum marginal relevance adds diversity to the results by penalizing documents that are similar to ones already retrieved, based on <a href="https://www.cs.cmu.edu/~jgc/publication/The_Use_MMR_Diversity_Based_LTMIR_1998.pdf">MMR</a>. A lower `mmr_threshold` increases diversity.
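Concretely, the re-ranking rule is roughly `mmr_threshold * relevance - (1 - mmr_threshold) * redundancy`. A minimal sketch of that rule, assuming a cosine-similarity helper (this mirrors the idea, not LlamaIndex's exact internals):

```python
import numpy as np


def sim(a: np.ndarray, b: np.ndarray) -> float:
    # cosine similarity
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def mmr_select(query, docs, k=2, mmr_threshold=0.2):
    selected, remaining = [], list(range(len(docs)))
    while remaining and len(selected) < k:

        def mmr_score(i):
            relevance = sim(query, docs[i])
            # similarity to the most similar already-selected document
            redundancy = max((sim(docs[i], docs[j]) for j in selected), default=0.0)
            return mmr_threshold * relevance - (1 - mmr_threshold) * redundancy

        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```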
```python
query_engine = index.as_query_engine(
vector_store_query_mode="mmr", vector_store_kwargs={"mmr_threshold": 0.2}
)
response = query_engine.query("What did the author do growing up?")
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
#### Get Sources
```python
print(response.get_formatted_sources())
```
> Source (Doc id: c4118521-8f55-4a4d-819a-2db546b6491e): What I Worked On
February 2021
Before college the two main things I worked on, outside of schoo...
> Source (Doc id: 74f77233-e4fe-4389-9820-76dd9f765af6): Which meant being easy to use and inexpensive. It was lucky for us that we were poor, because tha...
#### Query Index with Filters
We can also filter our queries using metadata
```python
from llama_index.core import Document
doc = Document(text="target", metadata={"tag": "target"})
index.insert(doc)
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
filters = MetadataFilters(
filters=[ExactMatchFilter(key="tag", value="target")]
)
retriever = index.as_retriever(
similarity_top_k=20,
filters=filters,
)
source_nodes = retriever.retrieve("What did the author do growing up?")
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
```python
# retrieves only our target node, even though we set the top k to 20
print(len(source_nodes))
```
1
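Multiple filters can also be combined. A hedged sketch (the second metadata key `lang` is hypothetical, and support for filter conditions varies by vector store):

```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
from llama_index.core.vector_stores.types import FilterCondition

# `lang` is a hypothetical second metadata key for illustration.
filters = MetadataFilters(
    filters=[
        ExactMatchFilter(key="tag", value="target"),
        ExactMatchFilter(key="lang", value="en"),
    ],
    condition=FilterCondition.AND,  # all filters must match
)
retriever = index.as_retriever(filters=filters)
```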
```python
print(source_nodes[0].text)
print(source_nodes[0].metadata)
```
target
{'tag': 'target'} |
43,097 | 264a5660-6484-4a24-b74f-50ba42fa1223 | Opensearch Vector Store | https://docs.llamaindex.ai/en/stable/examples/vector_stores/OpensearchDemo | false | llama_index | <a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/OpensearchDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Opensearch Vector Store
This integration relies on OpenSearch's vector (k-NN) indices; Elasticsearch only supports plain Lucene indices, so only OpenSearch is supported.
**Note on setup**: We set up a local OpenSearch instance following this doc: https://opensearch.org/docs/1.0/
If you run into SSL issues, try the following `docker run` command instead:
```
docker run -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" -e "plugins.security.disabled=true" opensearchproject/opensearch:1.0.1
```
Reference: https://github.com/opensearch-project/OpenSearch/issues/1598
Download Data
```python
%pip install llama-index-readers-elasticsearch
%pip install llama-index-vector-stores-opensearch
%pip install llama-index-embeddings-ollama
```
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
```python
from os import getenv
from llama_index.core import SimpleDirectoryReader
from llama_index.vector_stores.opensearch import (
OpensearchVectorStore,
OpensearchVectorClient,
)
from llama_index.core import VectorStoreIndex, StorageContext
# http endpoint for your cluster (opensearch required for vector index usage)
endpoint = getenv("OPENSEARCH_ENDPOINT", "http://localhost:9200")
# index to demonstrate the VectorStore impl
idx = getenv("OPENSEARCH_INDEX", "gpt-index-demo")
# load some sample data
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
```
/Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
```python
# OpensearchVectorClient stores text in this field by default
text_field = "content"
# OpensearchVectorClient stores embeddings in this field by default
embedding_field = "embedding"
# OpensearchVectorClient encapsulates logic for a
# single opensearch index with vector search enabled
client = OpensearchVectorClient(
endpoint, idx, 1536, embedding_field=embedding_field, text_field=text_field
)
# initialize vector store
vector_store = OpensearchVectorStore(client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# initialize an index using our sample data and the client we just created
index = VectorStoreIndex.from_documents(
documents=documents, storage_context=storage_context
)
```
```python
# run query
query_engine = index.as_query_engine()
res = query_engine.query("What did the author do growing up?")
res.response
```
INFO:root:> [query] Total LLM token usage: 29628 tokens
INFO:root:> [query] Total embedding token usage: 8 tokens
'\n\nThe author grew up writing short stories, programming on an IBM 1401, and building a computer kit from Heathkit. They also wrote programs for a TRS-80, such as games, a program to predict model rocket flight, and a word processor. After years of nagging, they convinced their father to buy a TRS-80, and they wrote simple games, a program to predict how high their model rockets would fly, and a word processor that their father used to write at least one book. In college, they studied philosophy and AI, and wrote a book about Lisp hacking. They also took art classes and applied to art schools, and experimented with computer graphics and animation, exploring the use of algorithms to create art. Additionally, they experimented with machine learning algorithms, such as using neural networks to generate art, and exploring the use of numerical values to create art. They also took classes in fundamental subjects like drawing, color, and design, and applied to two art schools, RISD in the US, and the Accademia di Belli Arti in Florence. They were accepted to RISD, and while waiting to hear back from the Accademia, they learned Italian and took the entrance exam in Florence. They eventually graduated from RISD'
The OpenSearch vector store supports [filter-context queries](https://opensearch.org/docs/latest/query-dsl/query-filter-context/).
```python
from llama_index.core import Document
from llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter
import regex as re
```
```python
# Split the text into paragraphs.
text_chunks = documents[0].text.split("\n\n")
# Create a document for each footnote
footnotes = [
Document(
text=chunk,
id=documents[0].doc_id,
metadata={"is_footnote": bool(re.search(r"^\s*\[\d+\]\s*", chunk))},
)
for chunk in text_chunks
if bool(re.search(r"^\s*\[\d+\]\s*", chunk))
]
```
```python
# Insert the footnotes into the index
for f in footnotes:
index.insert(f)
```
```python
# Create a query engine that only searches certain footnotes.
# Here the OpenSearch vector store treats each filter as raw query DSL:
# the key names an OpenSearch clause type ("term", "query_string") and
# the value is that clause's JSON body.
footnote_query_engine = index.as_query_engine(
    filters=MetadataFilters(
        filters=[
            ExactMatchFilter(
                key="term", value='{"metadata.is_footnote": "true"}'
            ),
            ExactMatchFilter(
                key="query_string",
                value='{"query": "content: space AND content: lisp"}',
            ),
        ]
    )
)
res = footnote_query_engine.query(
    "What did the author say about space aliens and lisp?"
)
res.response
```
"The author believes that any sufficiently advanced alien civilization would know about the Pythagorean theorem and possibly also about Lisp in McCarthy's 1960 paper."
## Use a reader to check out what VectorStoreIndex just created in our index.
The reader works with Elasticsearch too, since it only uses basic search features.
```python
# create a reader to check out the index used in previous section.
from llama_index.readers.elasticsearch import ElasticsearchReader
rdr = ElasticsearchReader(endpoint, idx)
# set embedding_field optionally to read embedding data from the elasticsearch index
docs = rdr.load_data(text_field, embedding_field=embedding_field)
# docs have embeddings in them
print("embedding dimension:", len(docs[0].embedding))
# full document is stored in metadata
print("all fields in index:", docs[0].metadata.keys())
```
embedding dimension: 1536
all fields in index: dict_keys(['content', 'embedding'])
```python
# we can check out how the text was chunked by the index
print("total number of chunks created:", len(docs))
```
total number of chunks created: 10
```python
# search index using standard elasticsearch query DSL
docs = rdr.load_data(text_field, {"query": {"match": {text_field: "Lisp"}}})
print("chunks that mention Lisp:", len(docs))
docs = rdr.load_data(text_field, {"query": {"match": {text_field: "Yahoo"}}})
print("chunks that mention Yahoo:", len(docs))
```
chunks that mention Lisp: 10
chunks that mention Yahoo: 8
## Hybrid query for opensearch vector store
Hybrid queries have been supported since OpenSearch 2.10. A hybrid query combines vector search and text search, which is useful when you want results that both match specific text and rank highly by vector similarity. You can find more details here: https://opensearch.org/docs/latest/query-dsl/compound/hybrid/.
### Prepare Search Pipeline
Create a new [search pipeline](https://opensearch.org/docs/latest/search-plugins/search-pipelines/creating-search-pipeline/) with [score normalization and weighted harmonic mean combination](https://opensearch.org/docs/latest/search-plugins/search-pipelines/normalization-processor/).
```
PUT /_search/pipeline/hybrid-search-pipeline
{
"description": "Post processor for hybrid search",
"phase_results_processors": [
{
"normalization-processor": {
"normalization": {
"technique": "min_max"
},
"combination": {
"technique": "harmonic_mean",
"parameters": {
"weights": [
0.3,
0.7
]
}
}
}
}
]
}
```
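As a small numeric illustration of what the processor above computes (an assumption about the exact semantics; the weights presumably apply in the order of the hybrid sub-queries):

```python
# Min-max normalize each sub-query's scores, then combine per document with
# a weighted harmonic mean (0.3 for the text query, 0.7 for the vector query).
def min_max(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 1.0 for s in scores]


def weighted_harmonic_mean(scores, weights):
    eps = 1e-9  # guard against zero scores
    return sum(weights) / sum(w / max(s, eps) for s, w in zip(scores, weights))


text_scores = min_max([2.1, 1.4, 0.3])
vector_scores = min_max([0.91, 0.88, 0.40])
combined = [
    weighted_harmonic_mean([t, v], [0.3, 0.7])
    for t, v in zip(text_scores, vector_scores)
]
print(combined)
```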
### Initialize an OpenSearch client and vector store supporting hybrid queries, with search pipeline details
```python
from os import getenv
from llama_index.vector_stores.opensearch import (
OpensearchVectorStore,
OpensearchVectorClient,
)
# http endpoint for your cluster (opensearch required for vector index usage)
endpoint = getenv("OPENSEARCH_ENDPOINT", "http://localhost:9200")
# index to demonstrate the VectorStore impl
idx = getenv("OPENSEARCH_INDEX", "auto_retriever_movies")
# OpensearchVectorClient stores text in this field by default
text_field = "content"
# OpensearchVectorClient stores embeddings in this field by default
embedding_field = "embedding"
# OpensearchVectorClient encapsulates logic for a
# single opensearch index with vector search enabled with hybrid search pipeline
client = OpensearchVectorClient(
endpoint,
idx,
4096,
embedding_field=embedding_field,
text_field=text_field,
search_pipeline="hybrid-search-pipeline",
)
from llama_index.embeddings.ollama import OllamaEmbedding
embed_model = OllamaEmbedding(model_name="llama2")
# initialize vector store
vector_store = OpensearchVectorStore(client)
```
### Prepare the index
```python
from llama_index.core.schema import TextNode
from llama_index.core import VectorStoreIndex, StorageContext
storage_context = StorageContext.from_defaults(vector_store=vector_store)
nodes = [
TextNode(
text="The Shawshank Redemption",
metadata={
"author": "Stephen King",
"theme": "Friendship",
},
),
TextNode(
text="The Godfather",
metadata={
"director": "Francis Ford Coppola",
"theme": "Mafia",
},
),
TextNode(
text="Inception",
metadata={
"director": "Christopher Nolan",
},
),
]
index = VectorStoreIndex(
nodes, storage_context=storage_context, embed_model=embed_model
)
```
LLM is explicitly disabled. Using MockLLM.
### Search the index with a hybrid query by specifying the vector store query mode `VectorStoreQueryMode.HYBRID`, with filters
```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
from llama_index.core.vector_stores.types import VectorStoreQueryMode
filters = MetadataFilters(
filters=[
ExactMatchFilter(
key="term", value='{"metadata.theme.keyword": "Mafia"}'
)
]
)
retriever = index.as_retriever(
filters=filters, vector_store_query_mode=VectorStoreQueryMode.HYBRID
)
result = retriever.retrieve("What is inception about?")
print(result)
```
query_str What is inception about?
query_mode hybrid
{'size': 2, 'query': {'hybrid': {'queries': [{'bool': {'must': {'match': {'content': {'query': 'What is inception about?'}}}, 'filter': [{'term': {'metadata.theme.keyword': 'Mafia'}}]}}, {'script_score': {'query': {'bool': {'filter': [{'term': {'metadata.theme.keyword': 'Mafia'}}]}}, 'script': {'source': "1/(1.0 + l2Squared(params.query_value, doc['embedding']))", 'params': {'field': 'embedding', 'query_value': [0.41321834921836853, 0.18020285665988922, ... (4096-dim query embedding elided) ...]}}}]}}}
-1.8724687099456787, 0.2753872573375702, -0.05526876077055931, 2.019657850265503, -0.542966902256012, 2.5979809761047363, -1.5759060382843018, -2.0966858863830566, -1.2429949045181274, 0.8074167966842651, 1.6995701789855957, 2.364717483520508, -0.006171206012368202, -0.40523213148117065, 0.6031554937362671, -0.9142636656761169, -0.6844136118888855, -0.5789420008659363, -1.1073524951934814, 1.050377607345581, -0.22426076233386993, -4.312420845031738, 0.3582805097103119, 1.566651463508606, -1.0100003480911255, -2.445319652557373, 0.49360424280166626, -6.209681510925293, -3.5924978256225586, -2.6305131912231445, -3.0619750022888184, 3.185960292816162, 1.714870572090149, 1.8870161771774292, -2.1056036949157715, -1.3087836503982544, -0.397480309009552, 1.4927351474761963, -0.7130331993103027, 1.486342191696167, 0.3299499750137329, -2.418793201446533, 1.9932200908660889, 1.4768792390823364, -3.0037782192230225, -0.042862553149461746, 1.1720788478851318, 1.5001466274261475, -2.5495569705963135, -0.622663676738739, 0.7934010028839111, -1.1974726915359497, 0.36095690727233887, 0.19274689257144928, -3.497694730758667, -0.40920042991638184, 0.2558222711086273, -0.17489388585090637, -0.4993809461593628, -0.7705931067466736, -2.4662959575653076, 1.9247642755508423, 1.998637080192566, -1.9849026203155518, -1.5978630781173706, 1.7272976636886597, 2.1162023544311523, 3.836690902709961, -0.5702705979347229, 0.4890395998954773, -5.1495490074157715, -0.40522921085357666, 1.9576873779296875, -1.508880376815796, 1.41094970703125, -0.024070236831903458, -1.3425319194793701, 0.2499399334192276, -1.9436883926391602, -0.20083169639110565, -1.6973903179168701, 1.8585814237594604, 2.0651111602783203, -0.6890871524810791, 1.9258447885513306, 0.14739713072776794, -1.3216526508331299, -0.5668810606002808, -0.1970759779214859, 0.4085139334201813, 0.5241521000862122, -0.5185426473617554, 0.8455533981323242, 0.05106530711054802, -1.0309116840362549, 1.3577605485916138, 0.8617386817932129, -0.9283434748649597, -0.02036425843834877, -0.091877780854702, 0.5626043677330017, 0.9166983366012573, -1.6653329133987427, 0.6513411402702332, -2.0065479278564453, -0.25614944100379944, -1.7404941320419312, -0.14202706515789032, -1.8889561891555786, 0.7946772575378418, -2.131476402282715, 0.28767019510269165, -1.7267996072769165, -1.376927375793457, 0.305580735206604, -2.189678192138672, -0.012310806661844254, 3.2107341289520264, -0.5365090370178223, -2.4642841815948486, 0.8017498254776001, -0.3184514045715332, 0.7495277523994446, -0.4395090341567993, -1.053176760673523, 1.0031729936599731, 0.5520432591438293, 5.518334865570068, -0.260230153799057, 0.4129876494407654, -2.2801108360290527, 3.3234267234802246, -1.100612759590149, -0.1636020541191101, 0.5297877192497253, 1.1526376008987427, -0.6702059507369995, 0.11144405603408813, 1.4567251205444336, 2.211238384246826, 2.1231586933135986, -0.014792595990002155, 0.46270355582237244, -1.7553074359893799, -2.412024736404419, 0.5752195715904236, 1.0785473585128784, 1.4434525966644287, -0.36577677726745605, -0.9827273488044739, 0.22377555072307587, -3.826702833175659, -5.461728572845459, 2.8441531658172607, 0.05543769150972366, 1.0848572254180908, -2.3073110580444336, 1.1464284658432007, 6.840386390686035, 0.29163652658462524, 1.5096409320831299, 2.230553150177002, 0.03037729486823082, -0.03491774573922157, 3.0144357681274414, 2.0182530879974365, 0.1928826868534088, -0.42632055282592773, -1.7087998390197754, 0.8260899186134338, 1.0113804340362549, 2.360093832015991, -1.62473464012146, 
1.5085432529449463, 2.578317642211914, 1.6136786937713623, -0.507075309753418, -2.3402822017669678, -0.07098083198070526, -1.3340305089950562, 0.19177654385566711, 1.1059727668762207, -1.3988288640975952, 0.6980583667755127, 0.04762393608689308, 2.205963373184204, 0.6097983121871948, 1.472859501838684, -0.8065006136894226, 0.8260449171066284, 0.6911891102790833, 0.7354405522346497, -1.020797848701477, 4.069032192230225, 1.1546580791473389, -1.3901289701461792, 4.088425159454346, 3.3327560424804688, -0.8147938847541809, -0.38041025400161743, -0.8002570867538452, -0.630027174949646, 0.1984773576259613, -0.5009771585464478, -2.725576400756836, -1.0677473545074463, -2.1194536685943604, 1.0863295793533325, 0.945219099521637, 0.8743425011634827, -1.5595207214355469, -3.2554945945739746, -0.059346023947000504, 1.5163980722427368, -2.4665417671203613, 1.6798737049102783, 0.13040810823440552, -1.8379839658737183, 1.0731821060180664, 3.5579402446746826, 1.2822164297103882, 1.2544536590576172, 0.21311433613300323, 1.0679103136062622, -7.644961833953857, -2.2976572513580322, -0.4696504473686218, -1.1461831331253052, 3.8370931148529053, -2.6373353004455566, -1.022015929222107, 1.944838523864746, -3.4792752265930176, 0.189581036567688, -1.4959508180618286, -0.8203619718551636, -0.8752302527427673, 1.1455988883972168, 1.394754409790039, 1.8890148401260376, 2.469120502471924, 6.615213394165039, -0.35686182975769043, -1.6679184436798096, 1.335914969444275, 0.8345732688903809, 2.998810291290283, 0.8350005149841309, -2.185638904571533, -0.9935243129730225, -0.5063812136650085, -1.023371934890747, -0.4569719731807709, 0.48809340596199036, -0.211369127035141, -1.0023069381713867, 0.6931540369987488, 1.9162567853927612, 2.1354031562805176, -0.9595145583152771, 1.6526645421981812, 1.8041722774505615, 0.6410518288612366, 0.7370561361312866, 0.6615729928016663, -1.5644463300704956, -1.0673896074295044, 6.431417465209961, -0.4807921350002289, 1.4150999784469604, -1.295664668083191, -3.4887518882751465, 1.5428330898284912, -2.5802090167999268, 2.689826488494873, -0.4622426927089691, -0.6111890077590942, 1.1808655261993408, 1.1734328269958496, -2.2830307483673096, -0.5659275054931641, 1.628258466720581, 1.4238611459732056, 0.9177718758583069, 2.57635498046875, -3.0586097240448, -0.1409277319908142, 0.13823434710502625, -0.35203301906585693, 0.9506719708442688, -6.526653289794922, 0.15715323388576508, 0.33741283416748047, 0.5778661966323853, 0.24446435272693634, -0.25828683376312256, -0.26176297664642334, -1.556192398071289, 1.7496039867401123, -2.566568613052368, -3.633755922317505, 5.877347469329834, 0.3881169557571411, 0.9792211651802063, 3.0303914546966553, -0.4234387278556824, -1.7467732429504395, -0.9940581917762756, 0.1604217141866684, 0.20533810555934906, -0.5118659734725952, 0.39175254106521606, -0.026054779067635536, -0.7470361590385437, -0.6664057970046997, 1.940830945968628, -1.7012990713119507, 0.010794420726597309, -1.8053219318389893, -1.4483990669250488, -0.9939783811569214, -2.142918586730957, -0.28726959228515625, -0.30280768871307373, -1.08336341381073, 3.519355535507202, -0.7694765329360962, 0.6794494390487671, 0.02129749022424221, 0.1468917429447174, -0.4394078552722931, 0.8040274381637573, -2.1332905292510986, 0.4357454776763916, -0.5084906816482544, 0.21598032116889954, -1.1935497522354126, 1.5270665884017944, 0.7274636030197144, 0.8407641649246216, 0.17818698287010193, 1.8959418535232544, 0.3077866733074188, 2.65822172164917, 1.8515098094940186, -0.32973712682724, 1.8853545188903809, 
-1.4277201890945435, -0.45664528012275696, 0.7097566723823547, 0.2476370483636856, 0.24467945098876953, -0.106924869120121, 1.5753772258758545, -0.9077993631362915, -0.2776675224304199, -0.6028621792793274, 0.3361768126487732, -1.9260371923446655, -1.4828319549560547, 2.7104969024658203, -0.32213327288627625, 1.046871542930603, -0.9400041103363037, -0.6073606014251709, 1.6994292736053467, -0.9165927767753601, -2.3352160453796387, -0.3473537862300873, -0.7119798064231873, -0.6926193237304688, 2.8489246368408203, -0.30154967308044434, -2.3563122749328613, -0.3843422830104828, 1.1836661100387573, -1.1338986158370972, -0.24423880875110626, 1.418196678161621, 0.5400394797325134, -0.015927601605653763, 0.7847772836685181, 0.2918948531150818, -2.478797435760498, 0.2756686806678772, 1.1419461965560913, 0.49127107858657837, -0.022380413487553596, -0.5809372663497925, -1.8818861246109009, -0.7043084502220154, -1.4923875331878662, 2.190058708190918, 1.125563144683838, -1.7257450819015503, 0.05809423327445984, -1.231887698173523, 2.4990298748016357, -0.6314716935157776, -0.03669692575931549, -2.2064425945281982, 1.5907856225967407, 0.4585913121700287, -1.45792555809021, -2.0502560138702393, 0.7699311971664429, -2.784538984298706, -0.9140456318855286, -0.3700370490550995, -0.8979235291481018, 0.44210389256477356, 1.0474436283111572, 1.779616355895996, 0.45078784227371216, -0.2973509728908539, -1.472576379776001, 2.0638420581817627, 0.6984675526618958, 0.28762000799179077, 3.2471299171447754, 3.79997181892395, 0.4689188301563263, 0.7657003998756409, -1.3535739183425903, 0.15177389979362488, -1.9707564115524292, -1.5294809341430664, 1.4862594604492188, -0.8001325130462646, -1.247962236404419, -1.176222562789917, -0.3547532260417938, 0.2978862226009369, 1.9624965190887451, 0.9902192950248718, -0.44017648696899414, -1.2257494926452637, -1.7168676853179932, 1.678995966911316, 0.45041409134864807, 0.29381826519966125, 0.24676980078220367, 1.4098718166351318, -0.23116594552993774, 2.851227283477783, -3.352517604827881, -1.870121717453003, 1.268830418586731, -2.901238441467285, 0.22949352860450745, 2.0386269092559814, -0.9146790504455566, -0.050751615315675735, 0.650490403175354, 0.688125729560852, -0.08217889070510864, 0.12222655117511749, -1.7349051237106323, -2.401493787765503, 0.755092978477478, 0.785330593585968, 2.030148506164551, -3.0832223892211914, -2.0020861625671387, 0.1970643252134323, -0.43846940994262695, 3.0661580562591553, -2.440918445587158, 0.255910187959671, -0.20022796094417572, -1.2181930541992188, -0.7898653745651245, -2.447021722793579, -2.7120091915130615, 1.023439884185791, 0.13306495547294617, 11.38375473022461, 0.4095974266529083, -3.126375436782837, 0.15059468150138855, 1.005212664604187, -0.6362734436988831, 1.8042926788330078, -0.544600784778595, 1.324157476425171, -0.1720346063375473, -0.48226967453956604, -0.6386629343032837, 0.7932955026626587, -1.0307537317276, -0.030334221199154854, -1.6885836124420166, 0.02540210448205471, 0.15673278272151947, 1.2310541868209839, 3.1716957092285156, 2.6241445541381836, 0.3046095371246338, 1.2929836511611938, 0.7420481443405151, 0.321260005235672, 0.669034481048584, -0.11876273900270462, 1.3900645971298218, -0.39547765254974365, -0.9423073530197144, -1.440240502357483, -2.7683916091918945, 0.5916474461555481, 0.22705861926078796, 2.289206027984619, -1.529347538948059, 3.0293784141540527, 1.585314154624939, -0.3475581705570221, -0.8158438205718994, -1.2707141637802124, 1.52529776096344, -0.4399953782558441, 0.7977296710014343, 
2.15421724319458, 0.2029402256011963, 0.8182349801063538, -0.9828463792800903, -2.102130651473999, -0.7536905407905579, -0.6563103795051575, -0.8859535455703735, -2.16115140914917, 0.68268883228302, -0.8431786894798279, 1.6845060586929321, -3.457179546356201, -1.0305430889129639, 2.1177175045013428, 2.186978816986084, -0.7495031952857971, 0.4233001470565796, 1.7131890058517456, 2.653705358505249, -1.5412851572036743, 2.0931594371795654, -1.8673100471496582, 3.362546443939209, 0.37147626280784607, 2.6393561363220215, 0.5956027507781982, 3.8806629180908203, -0.8557716608047485, -1.8126965761184692, -0.6422334909439087, -0.4170646071434021, 0.07015134394168854, 1.601213812828064, 1.7752736806869507, -1.563095211982727, -1.842039942741394, 0.8949403166770935, 0.8213114738464355, 2.104454517364502, 1.5621185302734375, 1.983998417854309, 0.27188044786453247, -1.123093843460083, -0.42603784799575806, -4.802127838134766, -0.9244204163551331, -2.459841012954712, -2.634511709213257, -2.607050657272339, 0.3619783818721771, -1.8253533840179443, 2.1136412620544434, -1.0142664909362793, -0.35461071133613586, -0.08565346151590347, 1.2730433940887451, 1.4445371627807617, -2.562166213989258, -1.6224087476730347, -0.7401191592216492, -1.8183948993682861, -6.947819709777832, -2.958055257797241, -1.1326404809951782, 2.521576166152954, -0.7198857069015503, -0.19349172711372375, -2.5632424354553223, -1.1360121965408325, 1.7425504922866821, -2.3327488899230957, -0.3639349937438965, -0.7618690133094788, -0.06379194557666779, -2.3073813915252686, 0.694584846496582, 0.344064325094223, -1.2303060293197632, 1.2927721738815308, 0.06000807508826256, 0.40601813793182373, -0.8971396088600159, 0.519196629524231, -1.4103238582611084, -3.390002489089966, -1.5444581508636475, 0.7764025926589966, -1.286615014076233, -0.9456934928894043, -0.6860343217849731, -0.7364029288291931, 1.5457088947296143, 1.6128982305526733, 1.287780523300171, 1.6489148139953613, 1.67617928981781, 0.10088522732257843, -1.2689849138259888, 0.8049256205558777, -0.8268434405326843, 0.8534346222877502, 3.2546145915985107, -0.7334981560707092, -0.42363929748535156, -2.0192339420318604, 0.18278534710407257, -0.30329200625419617, -1.6454986333847046, 0.5611382126808167, 0.9428885579109192, 3.467724323272705, -1.7720670700073242, 3.3134148120880127, 0.8287512063980103, -0.6391113996505737, 0.5302921533584595, 3.3955209255218506, 1.8526530265808105, -5.831977367401123, -0.5608792901039124, -0.52732914686203, 1.1519194841384888, -3.8111307621002197, -1.112129807472229, -2.193333148956299, 3.558131456375122, -0.38883766531944275, -1.2926342487335205, -1.7179244756698608, 3.0252881050109863, -0.30636560916900635, -0.6726535558700562, -2.0738301277160645, 1.0538036823272705, -0.6432257890701294, -0.621713399887085, -1.2236216068267822, 0.47444531321525574, -1.533075213432312, 1.503252625465393, 1.7952961921691895, 2.1736719608306885, -0.3828437328338623, -0.4795142114162445, -0.7193837761878967, 1.4456597566604614, -0.02563435025513172, 0.5546603202819824, -1.2607388496398926, 1.1237564086914062, 2.7446420192718506, -1.68074369430542, -1.4911751747131348, 0.6633965373039246, 0.19930459558963776, 3.66977596282959, -2.2398242950439453, -0.29390445351600647, 0.2560953199863434, 0.26830923557281494, -2.39227032661438, 3.228013038635254, 1.5378494262695312, -0.4504263997077942, -2.826124668121338, 1.7755171060562134, 0.5379474759101868, 0.37574896216392517, 0.9193552136421204, 1.2337709665298462, -0.7457429766654968, 0.3981378376483917, 1.9126510620117188, 
-1.457673192024231, -1.840986967086792, -1.0645390748977661, -0.1767304390668869, 1.188957691192627, 1.2876298427581787, -0.8412945866584778, -0.25044959783554077, -1.0699965953826904, 0.009314493276178837, 0.47715994715690613, -1.6440861225128174, -0.5907453298568726, -1.049324631690979, 1.0390734672546387, 0.6445403099060059, 0.833937406539917, -0.355325847864151, 0.0994211733341217, -0.0302878487855196, 0.12409967184066772, -0.3736986219882965, 2.322896718978882, -0.07213949412107468, -0.041175637394189835, 0.15898191928863525, -1.2797447443008423, -1.7271647453308105, 1.1250183582305908, 0.053053118288517, 0.21516209840774536, -0.62578946352005, 1.643478512763977, 1.5589592456817627, 0.5566443800926208, -0.18252010643482208, 0.5588923096656799, -2.417508125305176, 1.536683440208435, 2.6799542903900146, 3.126356363296509, -1.7247638702392578, 0.7768693566322327, 0.15074074268341064, -0.7899144291877747, -0.1392408013343811, -1.8526852130889893, 0.03772513195872307, -0.5075445771217346, 0.2553730010986328, -0.8452396988868713, -0.804675817489624, 0.20948508381843567, 0.608883261680603, -0.43253928422927856, 2.2517855167388916, 1.1470715999603271, 0.057494793087244034, -1.487905502319336, -0.018844403326511383, -0.5127835273742676, -0.9914013743400574, 0.30636391043663025, 0.7900062203407288, 0.5838981866836548, -0.16234219074249268, -0.3470565378665924, -0.21970994770526886, 1.412819504737854, -2.344581365585327, 0.09724771976470947, -0.5757020711898804, 1.2181626558303833, -0.944413959980011, -0.6563422083854675, -0.5654497146606445, 2.407801628112793, 0.08510265499353409, 2.0938544273376465, 0.08230669051408768, 2.0056731700897217, -0.9489847421646118, -1.7223788499832153, -1.7133234739303589, -3.278630018234253, 1.6658223867416382, 0.10414383560419083, -0.5931969881057739, 0.6423833966255188, -2.9353301525115967, 3.526261568069458, -1.666553258895874, 0.9492028951644897, 0.667405366897583, -0.8604920506477356, 1.2735933065414429, -0.24551275372505188, 0.6441431045532227, -0.38227733969688416, -0.4630293846130371, 1.4358162879943848, 1.0937228202819824, 1.9490225315093994, 0.0740886926651001, 0.4029659032821655, -1.6319000720977783, 1.2711639404296875, -0.5974065661430359, -2.6834018230438232, 1.8502169847488403, 0.6386227607727051, 2.590479612350464, -0.49917230010032654, -2.5988664627075195, 1.9030545949935913, -0.3349710702896118, -2.7176058292388916, -1.4044554233551025, -2.1542625427246094, 0.39269959926605225, -0.3015066385269165, 0.15509101748466492, -1.8539525270462036, 3.4868879318237305, -1.4078190326690674, -3.222374200820923, -1.1986515522003174, -1.1208950281143188, 0.6884583830833435, -0.7585988640785217, 0.1059669777750969, 0.04318329319357872, -4.913561820983887, -0.05187537521123886, 3.5694751739501953, -1.9946166276931763, 0.014335528947412968, 0.04705454036593437, 1.4365737438201904, -1.2839676141738892, -0.04703819751739502, 0.6318968534469604, -0.4648891091346741, 0.28053349256515503, -2.2494683265686035, 0.8773587346076965, 3.2937123775482178, 0.461525559425354, 4.590155601501465, -0.9878007173538208, -0.08247177302837372, -0.43144866824150085, -1.0715477466583252, 1.6967984437942505, -3.3572113513946533, -0.6096997261047363, 1.3075783252716064, -2.2616846561431885, 4.197009086608887, -0.4991415739059448, 0.6471449732780457, 0.4552414119243622, 1.0929334163665771, -1.582084059715271, -0.5286394357681274, -0.5518680810928345, 0.7354360818862915, -0.2584633231163025, -0.08173595368862152, -0.5867318511009216, -1.8880888223648071, -1.814834713935852, 
1.7573798894882202, 3.9596621990203857, 1.5880887508392334, 0.7259516716003418, 1.955574631690979, 0.3088712990283966, -1.7798328399658203, 1.4348945617675781, 0.8652783036231995, -0.11939241737127304, -0.42505839467048645, -0.5959363579750061, 1.7220964431762695, 2.022887706756592, 2.318899631500244, -1.0285959243774414, 0.5574663877487183, 1.8598313331604004, 2.340881824493408, -1.114876627922058, -2.9373958110809326, -0.3807956278324127, 0.9138448238372803, 0.09876017272472382, 0.736687958240509, 0.6977685689926147, -0.6091060638427734, -2.6238436698913574, 1.2243366241455078, 1.5129908323287964, 0.9895787239074707, 0.01610621064901352, -0.7177698612213135, -0.586176872253418, -0.8468607664108276, -2.300959348678589, -0.276903361082077, -0.4521595537662506, -0.39529210329055786, 2.112332344055176, -2.060443162918091, -3.177922248840332, -0.5120137333869934, 0.10933879762887955, 0.11730089783668518, 0.25420263409614563, -0.34655097126960754, -2.9007911682128906, 0.003339624498039484, 0.3639955520629883, -1.388902187347412, 1.4442331790924072, -0.861194372177124, 0.16477303206920624, 2.8582944869995117, -3.2511274814605713, -0.9999625086784363, -1.9750611782073975, 0.20032551884651184, -0.7910523414611816, 1.3464692831039429, 0.4899722933769226, -2.324185609817505, 2.6362833976745605, -2.167820453643799, -1.1179255247116089, 0.26357337832450867, 2.388129949569702, -0.3871464133262634, 2.541254758834839, -1.5910060405731201, -0.1521669179201126, 2.4372799396514893, 0.49059635400772095, 0.143768772482872, -0.2824336290359497, -0.07930364459753036, 0.18067769706249237, -1.5470519065856934, 0.8585227131843567, -1.7051506042480469, 0.2304743379354477, 1.2718594074249268, -2.262291193008423, 0.6345257759094238, 1.7309871912002563, -1.0747532844543457, 0.8628502488136292, -1.0308325290679932, 1.6426581144332886, -0.1179797425866127, 2.114360809326172, 0.4001002311706543, 1.3091498613357544, -0.5761996507644653, 1.7613424062728882, -0.9532261490821838, 1.8100963830947876, -0.551224946975708, 1.0943084955215454, 1.995148777961731, -0.2399289757013321, -2.8592641353607178, 0.8448318839073181, 1.438583254814148, -0.7680769562721252, 0.12946569919586182, 0.7584971189498901, 2.126793622970581, -0.8385722637176514, -1.3371894359588623, -0.8095458149909973, 2.117802619934082, 1.1792303323745728, -3.2345151901245117, -0.5444381237030029, 2.1084394454956055, -2.4026038646698, 0.18834252655506134, -1.2292487621307373, 0.12423299252986908, -2.0310535430908203, 0.3255136013031006, 0.2849785387516022, -2.3633954524993896, -0.6746733784675598, -0.34001630544662476, -0.25642478466033936, -1.6001611948013306, 0.8522850871086121, 1.7623180150985718, -0.1964396983385086, -1.2936173677444458, -1.528385877609253, -1.102852702140808, 0.7027903199195862, -2.311084747314453, 0.06160559877753258, -5.711217403411865, 3.7049355506896973, 0.27026474475860596, -0.921119213104248, 1.6805181503295898, 2.0733914375305176, -4.135998725891113, -0.9561137557029724, -0.6454806327819824, 0.55885910987854, -1.0215628147125244, -0.13304831087589264, -0.3172632157802582, -2.785482168197632, -0.3236042857170105, 2.439117908477783, 0.8945889472961426, -1.3276289701461792, 0.032644569873809814, 1.6577787399291992, 1.7553662061691284, -1.7791880369186401, 2.0067660808563232, -0.878115713596344, -0.22848550975322723, -0.07382026314735413, 0.6028909087181091, 0.9232040643692017, -0.7443209886550903, -1.1945438385009766, -0.5014027953147888, -0.6027995944023132, -0.9855751991271973, 0.7716651558876038, -1.7220836877822876, 
0.5988412499427795, 0.6560685038566589, -1.4718652963638306, -0.09454447776079178, 0.39460813999176025, -1.0219866037368774, 0.16089311242103577, 1.2402374744415283, -3.279120922088623, -1.513095736503601, -1.7908998727798462, 1.5655872821807861, -0.9766507148742676, -0.3568771481513977, -0.6989377737045288, -2.275606870651245, -1.1739453077316284, 0.8857262134552002, 0.21379457414150238, 0.3872324228286743, 2.8312325477600098, 3.370190143585205, -1.2276592254638672, 2.5217015743255615, -2.6147425174713135, -1.7975482940673828, 0.2604275345802307, -0.9670408964157104, 1.0740933418273926, 0.0881202444434166, 0.3878750503063202, 3.7241787910461426, 2.5294928550720215, -1.554567813873291, 1.5883101224899292, 0.021601477637887, 0.7833694815635681, 0.7324634194374084, -1.0129834413528442, -1.7750601768493652, -1.6069577932357788, -0.00898703746497631, 0.6159497499465942, -0.21028690040111542, 1.0078929662704468, -1.3044366836547852, 5.082554340362549, 1.0289592742919922, -2.395045757293701, 2.4680073261260986, -0.2351224273443222, -1.6476593017578125, 0.38624653220176697, 0.2908729910850525, -0.40109455585479736, 1.2395310401916504, 1.575451135635376, -2.466839075088501, -1.930911898612976, -0.30898579955101013, 1.0600224733352661, 2.474728584289551, -0.5231278538703918, -1.1781158447265625, 2.0308663845062256, 0.27654165029525757, -1.2232980728149414, 1.4704314470291138, -0.700169563293457, -2.6749267578125, -1.2611212730407715, -1.5050514936447144, -0.9820262789726257, 1.3202519416809082, 1.7085771560668945, 2.4008524417877197, 0.5397467017173767, -2.5096402168273926, 1.4448264837265015, -2.4320006370544434, -0.6138431429862976, -0.7960938811302185, -0.8046653866767883, 0.36194565892219543, 1.4644893407821655, -0.36692118644714355, -0.3842164874076843, 0.9461280703544617, -0.394505113363266, -2.6483609676361084, -1.1774756908416748, 0.20689310133457184, -0.6184566020965576, -0.5069551467895508, 1.5505434274673462, 0.313493013381958, -0.9208681583404541, -0.5244215130805969, -0.07132044434547424, -1.0078376531600952, -0.3041566014289856, -2.9547841548919678, 0.13732536137104034, 1.058887243270874, 0.623813271522522, 1.536534070968628, 0.710353434085846, -2.091754198074341, 0.3863103687763214, -2.146207332611084, -0.2651400566101074, 0.3908107578754425, -2.1654295921325684, -0.4906494915485382, 2.2715344429016113, 0.7958000302314758, -0.3529462516307831, 0.023320848122239113, -0.6318991780281067, 0.7415646910667419, -1.5158635377883911, -1.92628014087677, 0.3778543174266815, -1.0284225940704346, 0.3418554365634918, -0.4106570780277252, 0.29304441809654236, -2.428920269012451, -0.12348226457834244, -0.34103113412857056, 0.02815360762178898, 1.9101290702819824, -1.278517246246338, -0.7780016660690308, 1.8167794942855835, 2.5061824321746826, 1.2782561779022217, -1.0568351745605469, 0.6961120367050171, 0.6501976847648621, -2.756662130355835, -1.0097459554672241, -0.9929289221763611, 0.9298126101493835, 2.3535094261169434, 27.893369674682617, 0.9989926815032959, 1.635241150856018, 0.3050057590007782, -0.11045846343040466, 0.48667430877685547, 1.4059665203094482, 2.3953042030334473, 0.24139665067195892, 1.2205312252044678, 1.4274930953979492, 1.1422854661941528, -1.2699135541915894, 0.38328030705451965, 2.3638064861297607, -0.2291434407234192, 3.1154348850250244, 0.5472202301025391, -0.10703212767839432, -1.256062626838684, -0.8193093538284302, 1.7242975234985352, -2.0377373695373535, 1.5178602933883667, 0.7586110830307007, -1.773211121559143, 0.90008145570755, 1.244199275970459, 1.8370442390441895, 
-1.6146992444992065, -0.5313140153884888, -0.8352211117744446, -0.28806909918785095, 2.07943058013916, -2.1276118755340576, 4.714601039886475, 0.08501234650611877, -1.0854072570800781, 0.45539429783821106, 0.02574874833226204, -0.7017617225646973, 0.271499365568161, -1.543891429901123, 1.1715095043182373, -4.165060520172119, -3.5382204055786133, -0.959351122379303, 0.586280107498169, -0.664473831653595, 0.24653545022010803, -1.3207391500473022, 1.1021311283111572, 0.8513509631156921, -0.22090765833854675, -1.2186039686203003, 0.6458785533905029, 0.068841353058815, -0.9462994337081909, -0.736159086227417, 2.489241361618042, 1.08546781539917, 0.17249566316604614, 0.00963551551103592, -2.0986745357513428, -0.18537047505378723, -1.241287112236023, 0.9592534899711609, -0.43631333112716675, 1.8670296669006348, -1.1359080076217651, 2.3669395446777344, -1.5876514911651611, -1.8304880857467651, 0.8184749484062195, 0.7685567736625671, 0.8345807194709778, 0.01114408578723669, 0.7298959493637085, -0.7284532785415649, -0.5363021492958069, -0.9247578978538513, -2.17104172706604, -0.6724880933761597, 2.363757848739624, 0.08590041846036911, 2.059079170227051, -2.2278695106506348, 3.668748140335083, 0.8368174433708191, 1.6728285551071167, -1.9286187887191772, -0.7129634618759155, -0.18277931213378906, 1.9877017736434937, -1.999313473701477, 0.6556553244590759, 2.9140737056732178, -0.3444043695926666, -0.4161573648452759, -1.4394901990890503, 1.290708065032959, 0.2468632608652115, -0.8644528388977051, 0.022347690537571907, -0.46164897084236145, 2.0218238830566406, 0.6671098470687866, 1.6139602661132812, 3.657604217529297, 2.271261692047119, 2.3326733112335205, 0.3738059401512146, 0.35563138127326965, -1.510993242263794, -0.29949405789375305, -1.237746238708496, -1.174346923828125, 0.6250507235527039, 0.5889301896095276, 0.03296980261802673, 0.5837801694869995, -1.3075876235961914, 2.2138357162475586, 0.8216298222541809, -0.16598419845104218, -0.3695119023323059, -0.1725255250930786, 0.7056125998497009, 0.5911400318145752, -1.3572112321853638, -1.7939324378967285, -0.346815824508667, 2.936661958694458, -1.8363295793533325, -2.0917155742645264, 1.1098142862319946, -1.650669813156128, 3.2686774730682373, -0.9288081526756287, 0.2646131217479706, 1.261751413345337, -2.543142557144165, 6.293051719665527, -2.597097873687744, -1.2042756080627441, -2.097094774246216, -1.8804082870483398, 0.9535214304924011, 1.670982837677002, 1.003290057182312, 4.251725196838379, 1.2506277561187744, 1.150233507156372, -1.8020832538604736, -0.3403712511062622, -0.8620516061782837, -1.283129334449768, -0.3915810286998749, 2.7018449306488037, -0.10127142071723938, -0.00876553077250719, 7.760560989379883, -2.298708438873291, 1.0014913082122803, -0.7197350263595581, 0.8198022842407227, 0.5770737528800964, -0.6671212315559387, -1.9607622623443604, -3.9859671592712402, 0.8894888162612915, 0.3556593656539917, -1.2468639612197876, -0.42202192544937134, -0.8496314287185669, 2.4973671436309814, 1.2184630632400513, -1.3097401857376099, -1.4257316589355469, -0.8838949799537659, 2.522961378097534, 1.0242716073989868, 1.1449272632598877, 1.494399070739746, 1.3268615007400513, 0.7323814630508423, 0.5462021827697754, -4.27741813659668, -0.5482227206230164, 0.6894055604934692, -1.457056999206543, -1.8107671737670898, 1.7643498182296753, -1.6268867254257202, -1.6463972330093384, 0.7533250451087952, -1.5215373039245605, 0.7346979975700378, -0.3701346814632416, -0.0226410161703825, -0.6458364725112915, -1.3796308040618896, -0.3815940320491791, 
6.269187927246094, 2.289961338043213, -0.9773929715156555, -0.249546617269516, -1.6514405012130737, 0.867066502571106, 0.22829703986644745, -0.4617983400821686, 3.3042094707489014, 0.9521559476852417, -0.695234477519989, 2.962653398513794, -0.8236230611801147, 0.20833659172058105, 0.5054753422737122, 0.15649761259555817, 0.3403320610523224, -0.32528480887413025, -1.026519775390625, -0.8924757242202759, -1.8446648120880127, 2.6933515071868896, 1.8860138654708862, 0.46468058228492737, 0.48231080174446106, -0.8378691077232361, -1.9460488557815552, -1.1861300468444824, 0.7595608234405518, -1.095468521118164, 1.4308674335479736, 0.328189879655838, -2.451094388961792, -2.8908376693725586, -0.4236178398132324, -1.6981369256973267, 0.07236644625663757, -0.9503749012947083, 0.8383578658103943, 1.0358505249023438, 0.7380673885345459, 2.28603196144104, -1.8723185062408447, 0.5223669409751892, -0.011290911585092545, -0.7238665223121643, -1.6246486902236938, -2.181584596633911, 1.508367657661438, -0.6955671310424805, -6.630421161651611, 1.5550339221954346, 0.05992800369858742, 0.9386507272720337, -2.148855209350586, -2.04305100440979, 1.38173246383667, -1.2380393743515015, -3.3567206859588623, -1.3756507635116577, -0.2942374348640442, -4.111190319061279, 0.32021233439445496, -2.2395267486572266, -0.8271233439445496, -0.5836808085441589, 1.9801377058029175, -0.9668284058570862, 1.8952913284301758, 1.645387053489685, -0.14554183185100555, 1.147283911705017, -3.311444044113159, -0.201595276594162, -0.5542925596237183, 1.3598580360412598, 0.26370614767074585, 0.023029671981930733, -0.921843409538269, -2.9373505115509033, -0.2886929214000702, 0.4618637263774872, -1.1411409378051758, 2.7564940452575684, -2.9174437522888184, -0.6974139213562012, 2.123971462249756, -1.2719080448150635, -0.05564053729176521, -2.2673184871673584, -0.12627746164798737, -0.7531415820121765, 0.538124680519104, 0.9171910285949707, 0.16229069232940674, -1.6697087287902832, -0.15993909537792206, -1.8202638626098633, -0.1887633353471756, -0.7874069213867188, -1.3994258642196655, -0.3914186656475067, -2.069002389907837, 0.14583337306976318, 0.13571859896183014, 1.0151398181915283, -1.4915581941604614, -0.05901025980710983, -0.1938810497522354, 0.3131210207939148, -0.16058966517448425, -0.9250679016113281, -14.631373405456543, 0.9575139880180359, 3.1770806312561035, 1.2021996974945068, -0.6654183268547058, 3.9404962062835693, -0.7658974528312683, 2.7717905044555664, -1.520410418510437, 0.3642917275428772, -0.7192654609680176, 1.9125748872756958, 0.9570345878601074, -0.09266321361064911, -0.38360461592674255, 1.738484263420105, -3.2710161209106445, -1.7709176540374756, -2.0774242877960205, -0.3601045608520508, 0.5720903277397156, -0.699288010597229, 0.10553744435310364, -0.18496277928352356, 0.7611597180366516, -1.770328402519226, -2.7276382446289062, 1.824327826499939, -2.353358745574951, -0.402118444442749, 1.1608465909957886, 0.7886192798614502, -0.9140638113021851, -1.318404197692871, -0.4397779405117035, 2.865103006362915, -0.0457182377576828, -0.7885135412216187, 0.9373155236244202, -2.107434034347534, -0.38358789682388306, -0.3919948637485504, 2.923556327819824, -4.701347827911377, -0.7249741554260254, -0.9489683508872986, 1.0044702291488647, -0.11666374653577805, -1.3404510021209717, 0.5153619647026062, 0.04754114896059036, -0.19456803798675537, 1.3827818632125854, -2.0031208992004395, -1.289810299873352, 3.416640520095825, -2.449042797088623, 0.9355893135070801, 1.6686389446258545, 0.7991522550582886, -0.563110888004303, 
1.418690800666809, -0.8917520642280579, 2.360565185546875, 2.634204626083374, 1.5688698291778564, -0.45071038603782654, -3.2660880088806152, -1.4052941799163818, 1.387974500656128, -0.23124323785305023, -1.476924180984497, 0.5204784870147705, 0.34926602244377136, -2.4898107051849365, -1.7497012615203857, 0.7724961042404175, -0.0890677198767662, 0.13224686682224274, 1.2534589767456055, 0.045317936688661575, 0.06332586705684662, 3.345268726348877, 0.8872537612915039, 0.6012753248214722, -0.6033196449279785, -0.5802770256996155, 0.3494185507297516, -1.682992935180664, -1.1012550592422485, 0.5895649790763855, 2.7002875804901123, 1.0863090753555298, -1.7454692125320435, -1.0909974575042725, 1.7235828638076782, 1.070810079574585, 0.9742421507835388, 0.06108007952570915, 1.931785225868225, -2.0204646587371826, -2.1400067806243896, -1.0201374292373657, 1.1510684490203857, -1.5037842988967896, -0.27043673396110535, 0.22798877954483032, -0.21005190908908844, 1.2690585851669312, 0.7277141213417053, 0.5758188366889954, -0.5459479689598083, -2.0902504920959473, -2.0736305713653564, -0.7945910096168518, -1.9498969316482544, -2.2743165493011475, 0.13061034679412842, -0.47374510765075684, -1.5163371562957764, 2.2691502571105957, 0.6805631518363953, 1.4631695747375488, 1.3238294124603271, -0.6621432304382324, -0.8533355593681335, 3.7632603645324707, 3.0241312980651855, -8.06316089630127, 1.8399620056152344, -0.852032482624054, 1.584251046180725, 0.41511836647987366, 0.22672411799430847, -0.26263105869293213, -3.6368632316589355, 0.926706075668335, 1.6890989542007446, 1.4503737688064575, -0.7642179131507874, -0.8178099989891052, 1.9415658712387085, -2.3238351345062256, 0.21372850239276886, 6.099509239196777, 4.171093463897705, 1.5177711248397827, -1.1565263271331787, 0.9976243376731873, -0.4523465931415558, 0.013580133207142353, 0.12584920227527618, 0.2991982400417328, 0.6719919443130493, -0.3317100703716278, -1.9753837585449219, -0.007987353019416332, 1.5750924348831177, -1.1654324531555176, 0.29240575432777405, -1.4655816555023193, -3.045579195022583, -2.5024802684783936, -0.40280434489250183, -0.7322313189506531, 0.10708696395158768, -2.0583841800689697, -1.045668601989746, -1.9754096269607544, -0.20613901317119598, 1.688043236732483, -0.06682968884706497, -2.257188081741333, -3.6643080711364746, -0.20721864700317383, -0.31327947974205017, -3.6634974479675293, -0.1695028841495514, -0.4593466520309448, 1.0550178289413452, -0.31605079770088196, 0.33697763085365295, 1.8109651803970337, -0.39704281091690063, 1.5428825616836548, 0.0765533298254013, -0.7723068594932556, -0.008361696265637875, -0.027305293828248978, 0.9093282222747803, 1.4793466329574585, -0.09230943024158478, 0.2398260086774826, 1.9512848854064941, 2.1526379585266113, -1.1372538805007935, -0.9880079030990601, 0.05866040289402008, 1.6449939012527466, 1.2967973947525024, -2.3071162700653076, 0.43727558851242065, -1.2817187309265137, -0.026710188016295433, 0.18430902063846588, 1.378725290298462, -0.9239446520805359, 0.27773207426071167, 0.3913203775882721, -0.4901234805583954, -1.6399188041687012, -0.12080557644367218, 0.7691868543624878, 0.1709577590227127, 0.10396196693181992, -2.130411386489868, -2.179257392883301, 0.7922729253768921, 0.27633994817733765, -1.7050774097442627, 0.6258018612861633, -2.0217652320861816, 0.6698062419891357, -0.8379725813865662, -1.3636385202407837, -0.9972206354141235, 0.7543817162513733, 0.05158863589167595, -2.257720470428467, 0.442294716835022, -1.8589301109313965, -0.500280499458313, 0.25550076365470886, 
-3.839138984680176, 0.4164075553417206, -1.7582212686538696, 1.8491343259811401, 0.320035457611084, 1.887444257736206, 3.1942121982574463, 0.1120339184999466, -0.5607714056968689, -0.1297776848077774, -0.8522632122039795, -3.525956153869629, -1.5982003211975098, 2.4504852294921875, 2.46470046043396, -0.8185501098632812, -0.5449082255363464, 2.8579764366149902, -0.044694188982248306, 1.0574771165847778, 1.4608573913574219, 1.3664439916610718, 0.7093403935432434, -2.4899682998657227, -1.9996600151062012, 0.4483301341533661, 1.8011810779571533, -0.9083479046821594, 0.1403864026069641, 1.2353026866912842, 1.4890071153640747, 0.5965154767036438, -2.2207891941070557, -0.386689692735672, 1.0173559188842773, 0.3317832052707672, 1.242241621017456, 8.096700668334961, -1.3860564231872559, -0.48307186365127563, 2.5056164264678955, -4.412651538848877, 1.4777299165725708, 1.2915771007537842, -0.3042348027229309, 1.3734688758850098, -1.0148760080337524, 0.29798030853271484, 1.5803537368774414, 1.6444553136825562, 0.5807373523712158, 2.011157512664795, 2.430384874343872, -0.001317560556344688, -0.37967628240585327, -2.5261998176574707, 3.2119202613830566, 1.7307785749435425, 2.321204900741577, -3.089421510696411, -1.120242714881897, -2.4553184509277344, 2.1926932334899902, -1.463491678237915, -0.39328238368034363, 4.166314601898193, -0.6354401707649231, 1.4693533182144165, 1.5991348028182983, -0.22541369497776031, 0.7343212962150574, 0.1794258952140808, -2.6583163738250732, 0.0027457335963845253, 1.6476435661315918, 1.0695385932922363, 0.8916047811508179, -2.3013198375701904, -1.501152515411377, 1.6795622110366821, 0.7713955044746399, 0.4782435894012451, 0.23006942868232727, 2.595839500427246, 0.2424996942281723, -0.5558034777641296, -0.04674000293016434, -0.6988910436630249, -0.429269403219223, -0.1290259063243866, 0.3222062587738037, 1.017810344696045, -0.5098836421966553, -3.4084291458129883, 0.3000796139240265, 0.7957308888435364, 0.7062281370162964, 1.6956732273101807, 0.5430508852005005, -0.3600875437259674, -1.298385739326477, 1.9226042032241821, 1.5142651796340942, -3.1519079208374023, -0.7966042160987854, -0.27132460474967957, -0.5806691646575928, 2.560450792312622, 1.5697822570800781, -0.4995734989643097, 0.29847368597984314, 0.07077287137508392, -0.12948045134544373, -3.5200178623199463, 0.6674454212188721, -1.3807265758514404, -0.4995282292366028, 1.9198191165924072, 0.5224218964576721, 2.4898221492767334, 11.09000015258789, 0.9179505705833435, -1.7494560480117798, 1.579803466796875, -2.7534961700439453, -1.3340791463851929, 1.9154255390167236, -0.01608842983841896, 0.821875810623169, -0.2625766098499298, 1.5072975158691406, -0.713702380657196, -1.4145824909210205, -1.5109056234359741, 2.1455888748168945, -1.419687271118164, -0.5414632558822632, 1.4491149187088013, 1.5224276781082153, 0.8204352855682373, -1.070623755455017, 0.46470969915390015, -0.006221574731171131, -0.18256701529026031, 2.493424892425537, -0.49038708209991455, 0.42922085523605347, 0.873096227645874, -0.31695419549942017, 2.991065740585327, -1.3125733137130737, 0.5723339319229126, 0.2613622844219208, -1.9564348459243774, 2.178072452545166, -1.5708738565444946, 0.8963414430618286, 1.5022779703140259, 2.5450186729431152, -0.292618989944458, 0.15747855603694916, 2.1199207305908203, 0.21814104914665222, -0.8757757544517517, 0.07445792108774185, 0.07510267198085785, -0.5053762197494507, 0.7606169581413269, -3.169386625289917, -1.1002830266952515, 1.8861533403396606, 2.0080013275146484, -1.7342684268951416, -1.1598358154296875, 
-0.7158825993537903, -0.1937912255525589, -2.8064157962799072, 0.755673348903656, 8.499192237854004, -0.7812408804893494, 1.57917058467865, -3.151332139968872, -1.9226319789886475, -1.5604653358459473, 0.5534848570823669, 3.228034496307373, -1.6294361352920532, -0.27278730273246765, -0.867935061454773, 2.1341497898101807, 1.1075159311294556, 0.7477016448974609, 2.5511136054992676, -1.5523147583007812, -0.9242894053459167, 0.8773165941238403, 1.6915799379348755, -1.1594383716583252, 0.23813001811504364, -1.4064743518829346, -1.6849969625473022, -2.9580302238464355, -2.5688488483428955, -1.1904170513153076, -3.782924175262451, 0.7100740671157837, -1.3624398708343506, -0.9443717002868652, -0.5225216746330261, -0.09034554660320282, -2.3202784061431885, -0.23590344190597534, -1.5452443361282349, 1.2575849294662476, 1.4288854598999023, 1.638762354850769, -1.7967208623886108, 1.0915971994400024, 0.9493638873100281, 1.095393419265747, 0.8215399980545044, -0.2051163911819458, 2.168558359146118, -1.6670429706573486, -0.049629729241132736, 2.85097599029541, -0.4837287664413452, 0.6502736210823059, -2.374113082885742, 0.7011888027191162, -1.978821039199829, -0.15510064363479614, 0.4679356813430786, 1.8866007328033447, 2.520395278930664, -1.1996338367462158, 0.7295427322387695, 0.9605655074119568, 0.05692993104457855, 0.7287044525146484, 3.7953286170959473, 2.68047833442688, 0.4475618600845337, 0.5628949999809265, 0.4778791069984436, -0.5932527184486389, 1.836578130722046, 1.5961389541625977, 1.3328230381011963, -0.7625845670700073, 0.964162290096283, 1.548017978668213, 0.9993221759796143, -1.4471023082733154, 1.100744366645813, -1.5122473239898682, -0.6169258952140808, 3.0650243759155273, -1.7722645998001099, -0.18872833251953125, -1.5391753911972046, 0.2957899868488312, -0.3034318685531616, 0.7158978581428528, 11.45010757446289, -0.970210611820221, -0.5953302979469299, 0.5357429385185242, -1.7459461688995361, 0.6572960615158081, 0.5218455195426941, -0.251964807510376, 1.4631516933441162, 4.249364376068115, -1.0942943096160889, -0.9652121067047119, -1.0656694173812866, -1.9772387742996216, -1.6469305753707886, -1.335737705230713, -1.819305658340454, 0.03515125438570976, -0.6280084848403931, 2.1817753314971924, 1.5289617776870728, 2.5101521015167236, -0.6491972208023071, -8.361392974853516, 0.06266439706087112, -2.3298821449279785, 0.3874412477016449, -0.23243151605129242, -3.78399658203125, 0.6930876970291138, 0.44730332493782043, -0.9292389750480652, -1.092700481414795, 1.0822983980178833, 0.38801273703575134, -2.0460126399993896, -0.28162679076194763, 0.9888787269592285, 0.05821562930941582, 3.9159140586853027, 0.17979349195957184, 1.6432956457138062, -0.40627729892730713]}}}}]}}}
[NodeWithScore(node=TextNode(id_='657e40fb-497c-4c1a-8524-6351adbe990f', embedding=None, metadata={'director': 'Francis Ford Coppola', 'theme': 'Mafia'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='81cf4b9e847ba42e83fc401e31af8e17d629f0d5cf9c0c320ec7ac69dd0257e1', text='The Godfather', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.5), NodeWithScore(node=TextNode(id_='fc548a8e-5a1e-4392-bdce-08f8cb888c3f', embedding=None, metadata={'director': 'Francis Ford Coppola', 'theme': 'Mafia'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='81cf4b9e847ba42e83fc401e31af8e17d629f0d5cf9c0c320ec7ac69dd0257e1', text='The Godfather', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.0005)]
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/AstraDBIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Astra DB
>[DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Apache Cassandra and accessed through an easy-to-use JSON API.
To run this notebook you need a DataStax Astra DB instance running in the cloud (you can get one for free at [datastax.com](https://astra.datastax.com)).
You should ensure you have `llama-index` and `astrapy` installed:
```python
%pip install llama-index-vector-stores-astra-db
```
```python
!pip install llama-index
!pip install "astrapy>=0.6.0"
```
### Please provide database connection parameters and secrets:
```python
import os
import getpass
api_endpoint = input(
"\nPlease enter your Database Endpoint URL (e.g. 'https://4bc...datastax.com'):"
)
token = getpass.getpass(
"\nPlease enter your 'Database Administrator' Token (e.g. 'AstraCS:...'):"
)
os.environ["OPENAI_API_KEY"] = getpass.getpass(
"\nPlease enter your OpenAI API Key (e.g. 'sk-...'):"
)
```
### Import needed package dependencies:
```python
from llama_index.core import (
VectorStoreIndex,
SimpleDirectoryReader,
StorageContext,
)
from llama_index.vector_stores.astra_db import AstraDBVectorStore
```
### Load some example data:
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
### Read the data:
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
print(f"Total documents: {len(documents)}")
print(f"First document, id: {documents[0].doc_id}")
print(f"First document, hash: {documents[0].hash}")
print(
"First document, text"
f" ({len(documents[0].text)} characters):\n{'='*20}\n{documents[0].text[:360]} ..."
)
```
### Create the Astra DB Vector Store object:
```python
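# NOTE: embedding_dimension must match your embedding model;
# 1536 corresponds to OpenAI's text-embedding-ada-002, the default used here.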
astra_db_store = AstraDBVectorStore(
token=token,
api_endpoint=api_endpoint,
collection_name="astra_v_table",
embedding_dimension=1536,
)
```
### Build the Index from the Documents:
```python
storage_context = StorageContext.from_defaults(vector_store=astra_db_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
### Query using the index:
```python
query_engine = index.as_query_engine()
response = query_engine.query("Why did the author choose to work on AI?")
print(response.response)
```
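Astra DB also stores node metadata alongside each vector, so retrieval can be narrowed with metadata filters. The snippet below is a minimal sketch, assuming the store supports the standard `MetadataFilters` interface used in the other vector store demos in these docs; it filters on the `file_name` metadata that `SimpleDirectoryReader` attaches by default.

```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# restrict retrieval to nodes that came from a single source file,
# using the file_name metadata added by SimpleDirectoryReader
filters = MetadataFilters(
    filters=[ExactMatchFilter(key="file_name", value="paul_graham_essay.txt")]
)

retriever = index.as_retriever(filters=filters)
nodes = retriever.retrieve("What did the author work on?")
```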
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/DocArrayInMemoryIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# DocArray InMemory Vector Store
[DocArrayInMemoryVectorStore](https://docs.docarray.org/user_guide/storing/index_in_memory/) is a document index provided by [Docarray](https://github.com/docarray/docarray) that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-docarray
```
```python
!pip install llama-index
```
```python
import os
import sys
import logging
import textwrap
import warnings
warnings.filterwarnings("ignore")
# stop huggingface warnings
os.environ["TOKENIZERS_PARALLELISM"] = "false"
# Uncomment to see debug logs
# logging.basicConfig(stream=sys.stdout, level=logging.INFO)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import (
GPTVectorStoreIndex,
SimpleDirectoryReader,
Document,
)
from llama_index.vector_stores.docarray import DocArrayInMemoryVectorStore
from IPython.display import Markdown, display
```
```python
import os
os.environ["OPENAI_API_KEY"] = "<your openai key>"
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
print(
"Document ID:",
documents[0].doc_id,
"Document Hash:",
documents[0].doc_hash,
)
```
Document ID: 1c21062a-50a3-4133-a0b1-75f837a953e5 Document Hash: 77ae91ab542f3abb308c4d7c77c9bc4c9ad0ccd63144802b7cbe7e1bb3a4094e
## Initialization and indexing
```python
from llama_index.core import StorageContext
vector_store = DocArrayInMemoryVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = GPTVectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
## Querying
```python
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(textwrap.fill(str(response), 100))
```
Token indices sequence length is longer than the specified maximum sequence length for this model (1830 > 1024). Running this sequence through the model will result in indexing errors
Growing up, the author wrote short stories, programmed on an IBM 1401, and nagged his father to buy
him a TRS-80 microcomputer. He wrote simple games, a program to predict how high his model rockets
would fly, and a word processor. He also studied philosophy in college, but switched to AI after
becoming bored with it. He then took art classes at Harvard and applied to art schools, eventually
attending RISD.
```python
response = query_engine.query("What was a hard moment for the author?")
print(textwrap.fill(str(response), 100))
```
A hard moment for the author was when he realized that the AI programs of the time were a hoax and
that there was an unbridgeable gap between what they could do and actually understanding natural
language. He had invested a lot of time and energy into learning about AI and was disappointed to
find out that it was not going to get him the results he had hoped for.
## Querying with filters
```python
from llama_index.core.schema import TextNode
nodes = [
TextNode(
text="The Shawshank Redemption",
metadata={
"author": "Stephen King",
"theme": "Friendship",
},
),
TextNode(
text="The Godfather",
metadata={
"director": "Francis Ford Coppola",
"theme": "Mafia",
},
),
TextNode(
text="Inception",
metadata={
"director": "Christopher Nolan",
},
),
]
```
```python
from llama_index.core import StorageContext
vector_store = DocArrayInMemoryVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = GPTVectorStoreIndex(nodes, storage_context=storage_context)
```
```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
filters = MetadataFilters(
filters=[ExactMatchFilter(key="theme", value="Mafia")]
)
retriever = index.as_retriever(filters=filters)
retriever.retrieve("What is inception about?")
```
[NodeWithScore(node=Node(text='director: Francis Ford Coppola\ntheme: Mafia\n\nThe Godfather', doc_id='41c99963-b200-4ce6-a9c4-d06ffeabdbc5', embedding=None, doc_hash='b770e43e6a94854a22dc01421d3d9ef6a94931c2b8dbbadf4fdb6eb6fbe41010', extra_info=None, node_info=None, relationships={<DocumentRelationship.SOURCE: '1'>: 'None'}), score=0.7681788983417586)]
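The same filters can also be applied at the query-engine level, so answer synthesis only sees the matching nodes; a short sketch reusing the `filters` object defined above:

```python
# Apply the metadata filters when building a query engine, so the LLM
# only synthesizes over nodes whose theme is "Mafia".
query_engine = index.as_query_engine(filters=filters)
response = query_engine.query("Which movie is about the Mafia?")
print(textwrap.fill(str(response), 100))
```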
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/SupabaseVectorIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Supabase Vector Store
In this notebook we are going to show how to use [Vecs](https://supabase.github.io/vecs/) to perform vector searches in LlamaIndex.
See [this guide](https://supabase.github.io/vecs/hosting/) for instructions on hosting a database on Supabase.
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-supabase
```
```python
!pip install llama-index
```
```python
import logging
import sys
# Uncomment to see debug logs
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import SimpleDirectoryReader, Document, StorageContext
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.supabase import SupabaseVectorStore
import textwrap
```
### Setup OpenAI
The first step is to configure the OpenAI key. It will be used to create embeddings for the documents loaded into the index.
```python
import os
os.environ["OPENAI_API_KEY"] = "[your_openai_api_key]"
```
Download Data
```python
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
### Loading documents
Load the documents stored in the `./data/paul_graham/` directory using the `SimpleDirectoryReader`.
```python
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
print(
"Document ID:",
documents[0].doc_id,
"Document Hash:",
documents[0].doc_hash,
)
```
Document ID: fb056993-ee9e-4463-80b4-32cf9509d1d8 Document Hash: 77ae91ab542f3abb308c4d7c77c9bc4c9ad0ccd63144802b7cbe7e1bb3a4094e
### Create an index backed by Supabase's vector store.
This will work with all Postgres providers that support pgvector.
If the collection does not exist, we will attempt to create a new collection.
> Note: you need to pass in the embedding dimension if not using OpenAI's text-embedding-ada-002, e.g. `vector_store = SupabaseVectorStore(..., dimension=...)`
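For example, here is a hedged sketch of pairing the store with a 384-dimensional open-source embedding model (the model choice and collection name are illustrative, and `llama-index-embeddings-huggingface` must be installed separately):

```python
# Hedged sketch: a 384-dim local embedding model instead of
# text-embedding-ada-002, so `dimension` must be passed explicitly.
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

vector_store = SupabaseVectorStore(
    postgres_connection_string=(
        "postgresql://<user>:<password>@<host>:<port>/<db_name>"
    ),
    collection_name="base_demo_bge",  # hypothetical collection name
    dimension=384,  # must match the embedding model's output size
)
```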
```python
vector_store = SupabaseVectorStore(
postgres_connection_string=(
"postgresql://<user>:<password>@<host>:<port>/<db_name>"
),
collection_name="base_demo",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
### Query the index
We can now ask questions using our index.
```python
query_engine = index.as_query_engine()
response = query_engine.query("Who is the author?")
```
/Users/suo/miniconda3/envs/llama/lib/python3.9/site-packages/vecs/collection.py:182: UserWarning: Query does not have a covering index for cosine_distance. See Collection.create_index
warnings.warn(
```python
print(textwrap.fill(str(response), 100))
```
The author of this text is Paul Graham.
```python
response = query_engine.query("What did the author do growing up?")
```
```python
print(textwrap.fill(str(response), 100))
```
The author grew up writing essays, learning Italian, exploring Florence, painting people, working
with computers, attending RISD, living in a rent-stabilized apartment, building an online store
builder, editing Lisp expressions, publishing essays online, writing essays, painting still life,
working on spam filters, cooking for groups, and buying a building in Cambridge.
## Using metadata filters
```python
from llama_index.core.schema import TextNode
nodes = [
TextNode(
**{
"text": "The Shawshank Redemption",
"metadata": {
"author": "Stephen King",
"theme": "Friendship",
},
}
),
TextNode(
**{
"text": "The Godfather",
"metadata": {
"director": "Francis Ford Coppola",
"theme": "Mafia",
},
}
),
TextNode(
**{
"text": "Inception",
"metadata": {
"director": "Christopher Nolan",
},
}
),
]
```
```python
vector_store = SupabaseVectorStore(
postgres_connection_string=(
"postgresql://<user>:<password>@<host>:<port>/<db_name>"
),
collection_name="metadata_filters_demo",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(nodes, storage_context=storage_context)
```
Define metadata filters
```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
filters = MetadataFilters(
filters=[ExactMatchFilter(key="theme", value="Mafia")]
)
```
Retrieve from vector store with filters
```python
retriever = index.as_retriever(filters=filters)
retriever.retrieve("What is inception about?")
```
[NodeWithScore(node=Node(text='The Godfather', doc_id='f837ed85-aacb-4552-b88a-7c114a5be15d', embedding=None, doc_hash='f8ee912e238a39fe2e620fb232fa27ade1e7f7c819b6d5b9cb26f3dddc75b6c0', extra_info={'theme': 'Mafia', 'director': 'Francis Ford Coppola'}, node_info={'_node_type': '1'}, relationships={}), score=0.20671339734643313)]
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/MilvusIndexDemo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Milvus Vector Store With Hybrid Retrieval
In this notebook we are going to show a quick demo of using the MilvusVectorStore with hybrid retrieval. (This requires Milvus version 2.4.0 or higher.)
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-milvus
```
BGE-M3 from FlagEmbedding is used as the default sparse embedding method, so it needs to be installed along with llama-index.
```python
! pip install llama-index
! pip install FlagEmbedding
```
```python
import logging
import sys
# Uncomment to see debug logs
# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Document
from llama_index.vector_stores.milvus import MilvusVectorStore
from IPython.display import Markdown, display
import textwrap
```
### Setup OpenAI
Let's first begin by adding the OpenAI API key. This will allow us to access OpenAI for embeddings and to use ChatGPT.
```python
import openai
openai.api_key = "sk-"
```
Download Data
```python
! mkdir -p 'data/paul_graham/'
! wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```
--2024-04-25 17:44:59-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75042 (73K) [text/plain]
Saving to: ‘data/paul_graham/paul_graham_essay.txt’
data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.07s
2024-04-25 17:45:00 (994 KB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]
### Generate our data
With our LLM set, let's start using the Milvus Index. As a first example, let's generate a document from the file found in the `data/paul_graham/` folder. In this folder there is a single essay from Paul Graham titled `What I Worked On`. To generate the documents we will use the SimpleDirectoryReader.
```python
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
print("Document ID:", documents[0].doc_id)
```
Document ID: ca3f5dbc-f772-41da-9a4f-bb4884691793
### Create an index across the data
Now that we have a document, we can create an index and insert the document. For the index we will use a MilvusVectorStore. MilvusVectorStore takes in a few arguments:
- `uri (str, optional)`: The URI to connect to, comes in the form of "http://address:port". Defaults to "http://localhost:19530".
- `token (str, optional)`: The token for log in. Empty if not using rbac, if using rbac it will most likely be "username:password". Defaults to "".
- `collection_name (str, optional)`: The name of the collection where data will be stored. Defaults to "llamalection".
- `dim (int, optional)`: The dimension of the embeddings. If it is not provided, collection creation will be done on first insert. Defaults to None.
- `embedding_field (str, optional)`: The name of the embedding field for the collection, defaults to DEFAULT_EMBEDDING_KEY.
- `doc_id_field (str, optional)`: The name of the doc_id field for the collection, defaults to DEFAULT_DOC_ID_KEY.
- `similarity_metric (str, optional)`: The similarity metric to use, currently supports IP and L2. Defaults to "IP".
- `consistency_level (str, optional)`: Which consistency level to use for a newly created collection. Defaults to "Strong".
- `overwrite (bool, optional)`: Whether to overwrite existing collection with same name. Defaults to False.
- `text_key (str, optional)`: What key text is stored in in the passed collection. Used when bringing your own collection. Defaults to None.
- `index_config (dict, optional)`: The configuration used for building the Milvus index. Defaults to None.
- `search_config (dict, optional)`: The configuration used for searching the Milvus index. Note that this must be compatible with the index type specified by index_config. Defaults to None.
- `batch_size (int)`: Configures the number of documents processed in one batch when inserting data into Milvus. Defaults to DEFAULT_BATCH_SIZE.
- `enable_sparse (bool)`: A boolean flag indicating whether to enable support for sparse embeddings for hybrid retrieval. Defaults to False.
- `sparse_embedding_function (BaseSparseEmbeddingFunction, optional)`: If enable_sparse is True, this object should be provided to convert text to a sparse embedding.
- `hybrid_ranker (str)`: Specifies the type of ranker used in hybrid search queries. Currently only supports ['RRFRanker', 'WeightedRanker']. Defaults to "RRFRanker".
- `hybrid_ranker_params (dict)`: Configuration parameters for the hybrid ranker (see the sketch after this list). Defaults to an empty dictionary, implying that the ranker will operate with its predefined default settings.
    - For "RRFRanker", it should include:
        - `k` (int): A parameter used in Reciprocal Rank Fusion (RRF). This value is used to calculate the rank scores as part of the RRF algorithm, which combines multiple ranking strategies into a single score to improve search relevance.
    - For "WeightedRanker", it should include:
        - `weights` (list of float): A list of exactly two weights: the weight for the dense embedding component and the weight for the sparse embedding component. These weights are used to adjust the importance of the dense and sparse components of the embeddings in the hybrid retrieval process.
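For instance, here is a minimal sketch of a store configured with `WeightedRanker` (the 0.6/0.4 weight split is purely illustrative):

```python
# Hedged sketch: hybrid store using WeightedRanker, weighting the dense
# component at 0.6 and the sparse component at 0.4 (illustrative values).
weighted_store = MilvusVectorStore(
    dim=1536,
    overwrite=True,
    enable_sparse=True,
    hybrid_ranker="WeightedRanker",
    hybrid_ranker_params={"weights": [0.6, 0.4]},
)
```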
Now, let's begin creating a MilvusVectorStore for hybrid retrieval. We need to set `enable_sparse` to True to enable sparse embedding generation, and we also need to configure the RRFRanker for reranking. For more details, please refer to [Milvus Reranking](https://milvus.io/docs/reranking.md).
```python
# Create an index over the documents
from llama_index.core import StorageContext
vector_store = MilvusVectorStore(
dim=1536,
overwrite=True,
enable_sparse=True,
hybrid_ranker="RRFRanker",
hybrid_ranker_params={"k": 60},
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
```
Sparse embedding function is not provided, using default.
Fetching 30 files: 0%| | 0/30 [00:00<?, ?it/s]
----------using 2*GPUs----------
### Query the data
Now that we have our document stored in the index, we can ask questions against the index while enabling hybrid mode by specifying `vector_store_query_mode`. The index will use the data stored in itself as the knowledge base for ChatGPT.
```python
query_engine = index.as_query_engine(vector_store_query_mode="hybrid")
response = query_engine.query("What did the author learn?")
print(textwrap.fill(str(response), 100))
```
The author learned that the field of AI, as practiced at the time, was not as promising as initially
believed. The author realized that the approach of using explicit data structures to represent
concepts in AI was not effective in truly understanding natural language. This led the author to
shift focus from traditional AI to exploring Lisp for its own merits, ultimately deciding to write a
book about Lisp hacking.
```python
response = query_engine.query("What was a hard moment for the author?")
print(textwrap.fill(str(response), 100))
```
Dealing with the stress and pressure related to managing Hacker News was a challenging moment for
the author.
### Customized sparse embedding function
Here, we are using the default sparse embedding function, which utilizes the [BGE-M3](https://arxiv.org/abs/2402.03216) model. Below, we describe how to prepare a customized sparse embedding function.
You will need to create a class similar to ExampleEmbeddingFunction. This class should include methods such as:
- `encode_queries`: This method converts texts into a list of sparse embeddings for queries.
- `encode_documents`: This method converts texts into a list of sparse embeddings for documents.
The format of a sparse embedding is a dictionary, where the key (an integer) represents the dimension and its corresponding value (a float) represents the embedding's magnitude in that dimension (e.g., `{1: 0.5, 2: 0.3}`).
```python
! pip install FlagEmbedding
```
```python
from FlagEmbedding import BGEM3FlagModel
from typing import List
from llama_index.vector_stores.milvus.utils import BaseSparseEmbeddingFunction
class ExampleEmbeddingFunction(BaseSparseEmbeddingFunction):
def __init__(self):
self.model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=False)
def encode_queries(self, queries: List[str]):
outputs = self.model.encode(
queries,
return_dense=False,
return_sparse=True,
return_colbert_vecs=False,
)["lexical_weights"]
return [self._to_standard_dict(output) for output in outputs]
def encode_documents(self, documents: List[str]):
outputs = self.model.encode(
documents,
return_dense=False,
return_sparse=True,
return_colbert_vecs=False,
)["lexical_weights"]
return [self._to_standard_dict(output) for output in outputs]
def _to_standard_dict(self, raw_output):
result = {}
for k in raw_output:
result[int(k)] = raw_output[k]
return result
```
Now we can use this in our hybrid retrieval.
```python
vector_store = MilvusVectorStore(
dim=1536,
overwrite=True,
enable_sparse=True,
sparse_embedding_function=ExampleEmbeddingFunction(),
hybrid_ranker="RRFRanker",
hybrid_ranker_params={"k": 60},
)
```
Fetching 30 files: 0%| | 0/30 [00:00<?, ?it/s]
----------using 2*GPUs----------
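From here, the store backed by the custom sparse embedding function is used exactly like the default one; a short sketch repeating the earlier indexing and hybrid-query steps:

```python
# Index the documents into the custom-sparse store, then query in hybrid mode.
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
query_engine = index.as_query_engine(vector_store_query_mode="hybrid")
response = query_engine.query("What did the author learn?")
print(textwrap.fill(str(response), 100))
```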
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/pinecone_metadata_filter.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Qdrant Vector Store - Metadata Filter
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-qdrant
```
```python
!pip install llama-index qdrant_client
```
Build the Qdrant VectorStore Client
```python
import qdrant_client
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore
client = qdrant_client.QdrantClient(
# you can use :memory: mode for fast and light-weight experiments,
# it does not require to have Qdrant deployed anywhere
# but requires qdrant-client >= 1.1.1
location=":memory:"
# otherwise set Qdrant instance address with:
# uri="http://<host>:<port>"
# set API KEY for Qdrant Cloud
# api_key="<qdrant-api-key>",
)
```
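If you want collections to persist between runs without deploying a server, qdrant-client also supports an on-disk local mode; a minimal sketch (the directory path is illustrative):

```python
# Hedged alternative: on-disk local mode persists collections across runs
# without a Qdrant server (the path here is illustrative).
persistent_client = qdrant_client.QdrantClient(path="./qdrant_local_data")
```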
Build the QdrantVectorStore and create a Qdrant Index
```python
from llama_index.core.schema import TextNode
nodes = [
TextNode(
text="The Shawshank Redemption",
metadata={
"author": "Stephen King",
"theme": "Friendship",
"year": 1994,
},
),
TextNode(
text="The Godfather",
metadata={
"director": "Francis Ford Coppola",
"theme": "Mafia",
"year": 1972,
},
),
TextNode(
text="Inception",
metadata={
"director": "Christopher Nolan",
"theme": "Fiction",
"year": 2010,
},
),
TextNode(
text="To Kill a Mockingbird",
metadata={
"author": "Harper Lee",
"theme": "Mafia",
"year": 1960,
},
),
TextNode(
text="1984",
metadata={
"author": "George Orwell",
"theme": "Totalitarianism",
"year": 1949,
},
),
TextNode(
text="The Great Gatsby",
metadata={
"author": "F. Scott Fitzgerald",
"theme": "The American Dream",
"year": 1925,
},
),
TextNode(
text="Harry Potter and the Sorcerer's Stone",
metadata={
"author": "J.K. Rowling",
"theme": "Fiction",
"year": 1997,
},
),
]
```
```python
import os
from llama_index.core import StorageContext
os.environ["OPENAI_API_KEY"] = "sk-..."
vector_store = QdrantVectorStore(
client=client, collection_name="test_collection_1"
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(nodes, storage_context=storage_context)
```
Define metadata filters
```python
from llama_index.core.vector_stores import (
MetadataFilter,
MetadataFilters,
FilterOperator,
)
filters = MetadataFilters(
filters=[
MetadataFilter(key="theme", operator=FilterOperator.EQ, value="Mafia"),
]
)
```
Retrieve from vector store with filters
```python
retriever = index.as_retriever(filters=filters)
retriever.retrieve("What is inception about?")
```
[FieldCondition(key='theme', match=MatchValue(value='Mafia'), range=None, geo_bounding_box=None, geo_radius=None, geo_polygon=None, values_count=None)]
[NodeWithScore(node=TextNode(id_='050c085d-6d91-4080-9fd6-3f874a528970', embedding=None, metadata={'director': 'Francis Ford Coppola', 'theme': 'Mafia', 'year': 1972}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='bfa890174187ddaed4876803691ed605463de599f5493f095a03b8d83364f1ef', text='The Godfather', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.7620959333946706),
NodeWithScore(node=TextNode(id_='11d0043a-aba3-4ffe-84cb-3f17988759be', embedding=None, metadata={'author': 'Harper Lee', 'theme': 'Mafia', 'year': 1960}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='3475334d04bbe4606cb77728d5dc0784f16c8db3f190f3692e6310906c821927', text='To Kill a Mockingbird', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.7340329162691743)]
Multiple Metadata Filters with `AND` condition
```python
from llama_index.core.vector_stores import FilterOperator, FilterCondition
filters = MetadataFilters(
filters=[
MetadataFilter(key="theme", value="Fiction"),
MetadataFilter(key="year", value=1997, operator=FilterOperator.GT),
],
condition=FilterCondition.AND,
)
retriever = index.as_retriever(filters=filters)
retriever.retrieve("Harry Potter?")
```
[FieldCondition(key='theme', match=MatchValue(value='Fiction'), range=None, geo_bounding_box=None, geo_radius=None, geo_polygon=None, values_count=None)]
[FieldCondition(key='theme', match=MatchValue(value='Fiction'), range=None, geo_bounding_box=None, geo_radius=None, geo_polygon=None, values_count=None), FieldCondition(key='year', match=None, range=Range(lt=None, gt=1997.0, gte=None, lte=None), geo_bounding_box=None, geo_radius=None, geo_polygon=None, values_count=None)]
[NodeWithScore(node=TextNode(id_='1be42402-518f-4e88-9860-12cfec9f5ed2', embedding=None, metadata={'director': 'Christopher Nolan', 'theme': 'Fiction', 'year': 2010}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='7937eb153ccc78a3329560f37d90466ba748874df6b0303b3b8dd3c732aa7688', text='Inception', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.7649987694994126)]
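Filters can likewise be combined with an `OR` condition; a short sketch matching nodes with either theme:

```python
# Combine filters with OR: match nodes whose theme is Mafia or Fiction.
filters = MetadataFilters(
    filters=[
        MetadataFilter(key="theme", value="Mafia"),
        MetadataFilter(key="theme", value="Fiction"),
    ],
    condition=FilterCondition.OR,
)
retriever = index.as_retriever(filters=filters)
retriever.retrieve("What is inception about?")
```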
Use keyword arguments specific to Qdrant
```python
retriever = index.as_retriever(
vector_store_kwargs={"filter": {"theme": "Mafia"}}
)
retriever.retrieve("What is inception about?")
```
[NodeWithScore(node=TextNode(id_='1be42402-518f-4e88-9860-12cfec9f5ed2', embedding=None, metadata={'director': 'Christopher Nolan', 'theme': 'Fiction', 'year': 2010}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='7937eb153ccc78a3329560f37d90466ba748874df6b0303b3b8dd3c732aa7688', text='Inception', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.841150534139415),
NodeWithScore(node=TextNode(id_='ee4d3b32-7675-49bc-bc49-04011d62cf7c', embedding=None, metadata={'author': 'J.K. Rowling', 'theme': 'Fiction', 'year': 1997}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, hash='1b24f5e9fb6f18cc893e833af8d5f28ff805a6361fc0838a3015c287510d29a3', text="Harry Potter and the Sorcerer's Stone", start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.7661930751179629)]
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/BagelAutoRetriever.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Bagel Vector Store
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index-vector-stores-bagel
%pip install llama-index
%pip install bagelML
```
```python
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
```
```python
# set up OpenAI
import os
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
import openai
openai.api_key = os.environ["OPENAI_API_KEY"]
```
```python
import os
# Set environment variable
os.environ["BAGEL_API_KEY"] = getpass.getpass("Bagel API Key:")
```
```python
import bagel
from bagel import Settings
```
```python
server_settings = Settings(
bagel_api_impl="rest", bagel_server_host="api.bageldb.ai"
)
client = bagel.Client(server_settings)
collection = client.get_or_create_cluster(
"testing_embeddings_3", embedding_model="custom", dimension=1536
)
```
```python
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.bagel import BagelVectorStore
```
```python
from llama_index.core.schema import TextNode
nodes = [
TextNode(
text=(
"Michael Jordan is a retired professional basketball player,"
" widely regarded as one of the greatest basketball players of all"
" time."
),
metadata={
"category": "Sports",
"country": "United States",
},
),
TextNode(
text=(
"Angelina Jolie is an American actress, filmmaker, and"
" humanitarian. She has received numerous awards for her acting"
" and is known for her philanthropic work."
),
metadata={
"category": "Entertainment",
"country": "United States",
},
),
TextNode(
text=(
"Elon Musk is a business magnate, industrial designer, and"
" engineer. He is the founder, CEO, and lead designer of SpaceX,"
" Tesla, Inc., Neuralink, and The Boring Company."
),
metadata={
"category": "Business",
"country": "United States",
},
),
TextNode(
text=(
"Rihanna is a Barbadian singer, actress, and businesswoman. She"
" has achieved significant success in the music industry and is"
" known for her versatile musical style."
),
metadata={
"category": "Music",
"country": "Barbados",
},
),
TextNode(
text=(
"Cristiano Ronaldo is a Portuguese professional footballer who is"
" considered one of the greatest football players of all time. He"
" has won numerous awards and set multiple records during his"
" career."
),
metadata={
"category": "Sports",
"country": "Portugal",
},
),
]
```
```python
vector_store = BagelVectorStore(collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
```
```python
index = VectorStoreIndex(nodes, storage_context=storage_context)
```
```python
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.vector_stores import MetadataInfo, VectorStoreInfo
vector_store_info = VectorStoreInfo(
content_info="brief biography of celebrities",
metadata_info=[
MetadataInfo(
name="category",
type="str",
description=(
"Category of the celebrity, one of [Sports, Entertainment,"
" Business, Music]"
),
),
MetadataInfo(
name="country",
type="str",
description=(
"Country of the celebrity, one of [United States, Barbados,"
" Portugal]"
),
),
],
)
retriever = VectorIndexAutoRetriever(
index, vector_store_info=vector_store_info
)
```
```python
retriever.retrieve("celebrity")
``` |
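The auto-retriever can also back a full query engine, so the LLM both infers the metadata filters from the question and synthesizes an answer over the retrieved nodes. A minimal sketch using `RetrieverQueryEngine` from llama-index core:

```python
# Wrap the auto-retriever in a query engine: the LLM infers metadata
# filters from the question, then synthesizes an answer over the results.
from llama_index.core.query_engine import RetrieverQueryEngine

query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query(
    "Tell me about a celebrity from the United States"
)
print(response)
```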