Image-based search engine

Community Article Published July 4, 2024


[Diagram: the dataset is embedded with openai/clip-vit-large-patch14 into an embedded dataset; a query image is embedded with the same model and matched against the dataset embeddings with faiss to retrieve the most similar entries]

Introduction

Efficient and accurate image retrieval is key when working with visual data. This blog post is a step-by-step guide to building an image-based search engine using open-source tools. By the end, you'll be able to build a robust, customizable search engine that retrieves the images most similar to a query image.

Embedding the data

Embedding is the core of an image-based search engine: it transforms each image into a vector in a high-dimensional space, so that visually similar images end up close together and can be compared numerically. This is what makes efficient image retrieval possible.
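To make this concrete, here is a toy sketch (the vectors below are made up, not real CLIP embeddings) of comparing two embeddings with cosine similarity, one common way of measuring closeness in embedding space:

import numpy as np

def cosine_similarity(a, b):
    # 1.0 means identical direction, 0.0 means unrelated
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

emb_a = np.array([0.1, 0.9, 0.3])  # toy embedding of image A
emb_b = np.array([0.2, 0.8, 0.4])  # toy embedding of image B
print(cosine_similarity(emb_a, emb_b))  # close to 1 -> the images are similar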

Let's start by installing our libraries. As for loadimg, it is a Python library that I developed to read images and convert them with ease; you can skip it if you want. If you are interested in it and want to contribute to its development, you can check out my GitHub repository.

pip install -qU datasets transformers torch accelerate loadimg faiss-cpu

Then we can move on to loading our dataset:

from datasets import load_dataset

dataset = load_dataset("not-lain/pokemon", split="train")

dataset
>>> Dataset({
    features: ['image', 'text'],
    num_rows: 898
})

After that, let's load our model. I'm using CLIP here, but you can use any other similar model.

import torch
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification # or you can use CLIPProcessor, CLIPModel

device = 'cuda' if torch.cuda.is_available() else 'cpu'

processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-large-patch14", device_map=device)

Now for the most important part: we will embed our dataset and store the embeddings in a new column called embeddings. It is recommended that you use a GPU or another accelerator here, since this is a slow process otherwise.

def embed(batch):
    # process the images in the batch into pixel values
    pixel_values = processor(images=batch["image"], return_tensors="pt")["pixel_values"]
    pixel_values = pixel_values.to(device)
    # no gradient tracking needed at inference time
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values)
    # move the embeddings to the CPU so the datasets library can store them
    batch["embeddings"] = img_emb.cpu()
    return batch

embedded_dataset = dataset.map(embed, batched=True, batch_size=16)

It is recommended that you store your embedded data in a database, be it locally, on the Hugging Face Hub, Pinecone, ChromaDB, or any other alternative, to avoid embedding the dataset again.
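For example, a quick way to keep a local copy is the datasets library's save/load API (a minimal sketch; the folder name is arbitrary):

embedded_dataset.save_to_disk("embedded-pokemon")
# later, reload it without re-embedding:
# from datasets import load_from_disk
# embedded_dataset = load_from_disk("embedded-pokemon")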

💡TIP

Although it is unrelated to our work here, you can also use the same model to create embeddings for text. First process the input, then pass it to the model:

tokens = processor(text="some text here", padding=True, return_tensors="pt").to(device)
text_emb = model.get_text_features(**tokens)

Here, I'm pushing the embedded dataset to the Hugging Face Hub:

embedded_dataset.push_to_hub("not-lain/embedded-pokemon")

Retrieve images

Once your dataset is embedded and stored, we can move on to defining the retrieval logic. First, we need to load the same model that was used in the previous section.

import torch
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification # or you can use CLIPProcessor, CLIPModel

device = 'cuda' if torch.cuda.is_available() else 'cpu'

processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-large-patch14", device_map=device)

We will also need to load our embedded dataset:

from datasets import load_dataset

dataset = load_dataset("not-lain/embedded-pokemon", split="train")

You need to add a Faiss index to the embeddings column to set it up as a similarity search index.

Faiss is a library for efficient similarity search and clustering of dense vectors, and it is particularly useful for large-scale image retrieval tasks. By adding a Faiss index to the embeddings column, you enable fast nearest-neighbor searches, making it possible to retrieve similar images based on their embeddings even when dealing with a large number of images.

dataset = dataset.add_faiss_index("embeddings")
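If your dataset is large, rebuilding the index on every run can be costly. As a sketch (the file name here is arbitrary), the datasets library lets you save the index to disk and load it back later:

dataset.save_faiss_index("embeddings", "pokemon.faiss")
# in a later session, reload the dataset and attach the saved index:
# dataset = load_dataset("not-lain/embedded-pokemon", split="train")
# dataset.load_faiss_index("embeddings", "pokemon.faiss")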

Now to retrieve the most similar images, you will need to first create the embedding of the new image, then retrieve the most similar entries from the dataset.

import numpy as np

def search(query, k: int = 4):
    """embeds a new image and returns the k most similar entries"""

    # embed the new image
    pixel_values = processor(images=query, return_tensors="pt")["pixel_values"]
    pixel_values = pixel_values.to(device)
    img_emb = model.get_image_features(pixel_values)[0]  # a single image, so take the first element
    # convert to numpy because the datasets library does not support torch vectors
    img_emb = img_emb.cpu().detach().numpy()

    # compare our embedded query with the dataset embeddings
    scores, retrieved_examples = dataset.get_nearest_examples(
        "embeddings", img_emb,
        k=k,  # return only the top k results
    )

    return retrieved_examples
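Since CLIP embeds text and images into the same space, you can also query the same index with text. Here is a sketch under that assumption (search_by_text is a helper name I made up, not a library function):

def search_by_text(query: str, k: int = 4):
    """embeds a text query and returns the k most similar images"""
    tokens = processor(text=query, padding=True, return_tensors="pt").to(device)
    text_emb = model.get_text_features(**tokens)[0]  # a single query
    text_emb = text_emb.cpu().detach().numpy()
    scores, retrieved_examples = dataset.get_nearest_examples("embeddings", text_emb, k=k)
    return retrieved_examples

# retrieved_examples = search_by_text("an orange fire lizard")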

Let's test our algorithm. To do this, start by loading an image:

from loadimg import load_img
image = load_img("https://img.pokemondb.net/artwork/large/charmander.jpg")
image

[output: the query image of Charmander]

After that, you can retrieve the most similar entries. The entries are sorted in decreasing order of similarity, with the first entry being the most similar to our input image.

retrieved_examples = search(image)

Let's visualize our results:

import matplotlib.pyplot as plt

# plot the top 4 retrieved images in a 2x2 grid, titled with their names
f, axarr = plt.subplots(2, 2)
for index in range(4):
    i, j = index // 2, index % 2
    axarr[i, j].set_title(retrieved_examples["text"][index])
    axarr[i, j].imshow(retrieved_examples["image"][index])
    axarr[i, j].axis("off")
plt.show()

[output: a 2x2 grid of the four retrieved Pokémon with their names]

Demo

Now let's put everything together in a single application to see our work in action. You might consider the following application as a good reference: https://huggingface.co./spaces/not-lain/image-retriever
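If you want to wire this up yourself, here is a minimal Gradio sketch that reuses the search function from above (the linked Space may be implemented differently):

import gradio as gr

def retrieve(image):
    results = search(image)  # the search function defined earlier
    # pair each retrieved image with its name for the gallery
    return list(zip(results["image"], results["text"]))

demo = gr.Interface(
    fn=retrieve,
    inputs=gr.Image(type="pil"),
    outputs=gr.Gallery(label="most similar images"),
)
demo.launch()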

Acknowledgement

I would like to acknowledge the importance of Pinecone's docs in helping me develop the script I used in this blog post 🌲

I would like to thank everyone in the Hugging Face Discord server who supported my work, especially lunarflu, christopher, and tomaarsen ❤️

If you loved this blog post, consider upvoting it, as this will help me showcase my work 🤗

Finally, if you want to request another blog post, you can contact me on Twitter, email, or LinkedIn ✉️