Update README.md

README.md CHANGED
@@ -92,8 +92,77 @@ It created a nice interactive map with tooltips.

## Semantic Search

### Local and Remote Semantic Search with LanceDB

#### Reading a LanceDB index

You can use the text LanceDB index I created for Italy as follows.

Download the index locally to get the fastest results (3.4 GB).
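
If the index directory is published on the Hugging Face Hub, a download sketch could look like the following; the repo id and file layout here are placeholders, not confirmed by this README:

```python
# Hypothetical download sketch: REPO_ID is a placeholder for whichever
# dataset repo actually hosts the "italy_lancedb" directory.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="REPO_ID",                   # placeholder, replace with the real repo
    repo_type="dataset",
    allow_patterns=["italy_lancedb/*"],  # only pull the LanceDB index files
    local_dir=".",
)
```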

Load the model and connect to the local DB:

```python
from model2vec import StaticModel
import numpy as np
import lancedb

model = StaticModel.from_pretrained("minishlab/m2v_multilingual_output")

db = lancedb.connect("italy_lancedb")
table = db.open_table("foursquare")
```

Create the query vector:

```python
query = "ski and snowboard"
query_vector = model.encode(query)  # 1D array of floats
query_vector = np.array(query_vector, dtype=np.float32)
```

Fire the query. On my M3 Max I usually see query times of ~5 ms:

```python
results = (
    table.search(query_vector)
    .limit(10)
    .select(["poi_name", "latitude", "longitude"])
    .to_pandas()
)
# Rows come back already sorted by ascending "_distance"
# (smaller = more similar), so the best match is first.
print(results)
```
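
Since the results carry coordinates, a quick way to inspect them is an interactive map. A minimal sketch, assuming geopandas and folium are installed:

```python
import geopandas as gpd

# Build point geometries from the returned lat/lon columns
gdf = gpd.GeoDataFrame(
    results,
    geometry=gpd.points_from_xy(results["longitude"], results["latitude"]),
    crs="EPSG:4326",
)
gdf.explore()  # interactive map with tooltips, rendered via folium
```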

You can also host the index on S3 or similar object storage, at the cost of much higher query latency (hundreds of milliseconds, but still under 1 s). See the LanceDB docs for more info.
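
Connecting to a remotely hosted index looks the same as the local case; the bucket path below is a placeholder:

```python
import lancedb

# Hypothetical S3 location; replace with wherever you actually host the index.
db = lancedb.connect("s3://my-bucket/italy_lancedb")
table = db.open_table("foursquare")
```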

#### Writing a LanceDB index

```python
import geopandas as gpd
import lancedb

data = gpd.read_parquet("foursquare_places_italy_embeddings.parquet")

# Connect to LanceDB
db = lancedb.connect("italy_lancedb")

# Extract lat/lon from the geometry column
data["latitude"] = data["geometry"].y
data["longitude"] = data["geometry"].x

# Standardize the vector column name to "vector"; rename "name" to avoid conflicts
data = data.rename(columns={"name": "poi_name", "embeddings": "vector"})
data = data[["poi_name", "vector", "latitude", "longitude"]]  # keep only the needed columns

# Create the table
table = db.create_table("foursquare", data=data)

# Create a vector index (optional but HIGHLY recommended).
# Skip accelerator="mps" on a Mac at the tens-of-millions-of-rows scale;
# it only pays off for much larger databases.
table.create_index(vector_column_name="vector", metric="cosine", num_sub_vectors=16)
```
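
As a quick sanity check, assuming the steps above completed, you can reopen the table and confirm the row count:

```python
table = db.open_table("foursquare")
print(table.count_rows())  # should match the number of rows in the parquet file
```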

### Remote Semantic Search with DuckDB

You can even perform semantic search remotely without downloading the whole file.
Even without any index on the data (HNSW, ANN, or similar), a brute-force scan takes around 3 minutes on my machine to query the example file for Italy remotely.
The file weighs ~5 GB and contains 3,029,191 rows. I used https://huggingface.co/minishlab/M2V_multilingual_output as the multilingual embedding model.
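
A minimal sketch of such a brute-force remote query, assuming the parquet file is reachable over HTTPS and stores a raw "name" column plus a list-typed "embeddings" column (the URL and column names are placeholders):

```python
import duckdb
import numpy as np
from model2vec import StaticModel

model = StaticModel.from_pretrained("minishlab/m2v_multilingual_output")
query_vector = model.encode("ski and snowboard").astype(np.float32).tolist()

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")  # allow reading remote files

# Placeholder URL: point it at wherever the parquet file is hosted.
url = "https://example.com/foursquare_places_italy_embeddings.parquet"

# Brute force: compute cosine similarity against every row, keep the top 10.
results = con.execute(
    """
    SELECT name, list_cosine_similarity(embeddings, ?) AS similarity
    FROM read_parquet(?)
    ORDER BY similarity DESC
    LIMIT 10
    """,
    [query_vector, url],
).df()
print(results)
```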