do-me committed · Commit 23c8e51 · verified · 1 parent: 352bf2f

Update README.md

Files changed (1): README.md (+106 −0)
It created a nice interactive map with tooltips.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/QIolrK2nlrENnkWlE6TFh.png)
## Remote Semantic Search

You can even perform semantic search remotely, without downloading the whole file.
Without any index on the data (such as HNSW or another ANN structure), a plain brute-force query of the example file for Italy takes about 3 minutes 14 seconds on my machine when run remotely.
The file weighs ~5 GB and contains 3,029,191 rows. I used https://huggingface.co/minishlab/M2V_multilingual_output for the multilingual embeddings.
I will write a detailed tutorial on the processing in the near future.
```python
import duckdb
from model2vec import StaticModel
import pandas as pd
import numpy as np

model = StaticModel.from_pretrained("minishlab/m2v_multilingual_output")

def search_similar_locations(query_vector, top_k=10, db_path=""):
    """
    Search for locations with similar embedding vectors using cosine similarity.

    Args:
        query_vector (list): The embedding vector to compare against
        top_k (int): Number of similar locations to return
        db_path (str): Path to the Parquet file containing the embeddings and geometries

    Returns:
        pandas.DataFrame: DataFrame containing the top_k most similar locations with their coordinates
    """
    # Convert the query vector to a float32 numpy array
    query_vector = np.array(query_vector).astype(np.float32)

    # Load the vss and spatial extensions (on first use, uncomment the INSTALL lines)
    con = duckdb.connect()
    try:
        # con.execute("INSTALL vss;")
        # con.execute("INSTALL spatial;")
        con.execute("LOAD vss;")
        con.execute("LOAD spatial;")
    except Exception as e:
        print(f"Error loading vss/spatial extensions: {e}")
        con.close()
        return None

    # Define a custom cosine similarity function for use in SQL
    def cosine_similarity(arr1, arr2):
        if arr1 is None or arr2 is None:
            return None

        arr1 = np.array(arr1)
        arr2 = np.array(arr2)

        norm_arr1 = np.linalg.norm(arr1)
        norm_arr2 = np.linalg.norm(arr2)

        if norm_arr1 == 0 or norm_arr2 == 0:
            return 0.0  # handle zero vectors

        return np.dot(arr1, arr2) / (norm_arr1 * norm_arr2)

    # Register the Python UDF: parameter types first, then the return type
    con.create_function("cosine_similarity", cosine_similarity, ["FLOAT[]", "FLOAT[]"], "DOUBLE")

    # Construct the SQL query
    query = f"""
        WITH location_data AS (
            SELECT *,
                   embeddings::FLOAT[] AS embedding_arr  -- cast embeddings to FLOAT[]
            FROM '{db_path}'
            -- LIMIT 1_000
        )
        SELECT
            name,
            -- geometry,
            ST_X(ST_GeomFromWKB(geometry)) AS longitude,
            ST_Y(ST_GeomFromWKB(geometry)) AS latitude,
            cosine_similarity(embedding_arr, ?::FLOAT[]) AS cosine_sim
        FROM location_data
        ORDER BY cosine_sim DESC
        LIMIT {top_k};
    """

    # Execute the query and return the results as a DataFrame
    try:
        result = con.execute(query, parameters=(query_vector,)).df()  # pass parameters as a tuple
        con.close()
        return result
    except Exception as e:
        print(f"Error executing query: {e}")
        con.close()
        return None

# Search for similar locations
results = search_similar_locations(
    query_vector=model.encode("ski and snowboard"),
    top_k=50,
    db_path='hf://datasets/do-me/foursquare_places_100M/foursquare_places_italy_embeddings.parquet'  # can also be a local file
)

results
```
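The returned DataFrame can be post-processed with plain pandas before plotting, e.g. to keep only confident matches. A minimal sketch with made-up rows and a hypothetical 0.5 similarity threshold (the names, coordinates, and scores are illustrative, not real query output):

```python
import pandas as pd

# Hypothetical frame shaped like the query output
results = pd.DataFrame({
    "name": ["Snowpark Dolomiti", "Bar Roma", "Scuola Sci Cortina"],
    "longitude": [11.87, 12.49, 12.14],
    "latitude": [46.54, 41.90, 46.54],
    "cosine_sim": [0.81, 0.12, 0.77],
})

# Keep only confident matches and sort for display
hits = results[results["cosine_sim"] >= 0.5].sort_values("cosine_sim", ascending=False)
print(hits[["name", "cosine_sim"]])
```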
The resulting pandas DataFrame:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/VkDGOHmSthVJtDeyof9KL.png)

Note that the cosine similarity function could also be rewritten to run fully in DuckDB.