## Vector database comparison: Weaviate vs Milvus
This page contains a detailed comparison of the Weaviate and Milvus vector databases.
You can also check out my [detailed breakdown of the most popular vector databases here](/blog/vector-databases-compared).
## Deployment Options

| Feature | Weaviate | Milvus |
|---------|----------|--------|
| Local Deployment | ✅ | ✅ |
| Cloud Deployment | ✅ | ❌ |
| On-Premises Deployment | ✅ | ✅ |
## Scalability

| Feature | Weaviate | Milvus |
|---------|----------|--------|
| Horizontal Scaling | ✅ | ✅ |
| Vertical Scaling | ✅ | ❌ |
| Distributed Architecture | ✅ | ✅ |
## Data Management

| Feature | Weaviate | Milvus |
|---------|----------|--------|
| Data Import | ✅ | ✅ |
| Data Update / Deletion | ✅ | ✅ |
| Data Backup / Restore | ✅ | ✅ |
## Security

| Feature | Weaviate | Milvus |
|---------|----------|--------|
| Authentication | ✅ | ✅ |
| Data Encryption | ✅ | ❌ |
| Access Control | ✅ | ✅ |
## Vector Similarity Search

| Feature | Weaviate | Milvus |
|---------|----------|--------|
| Distance Metrics | Cosine, Euclidean, Jaccard | Euclidean, Cosine, Jaccard |
| ANN Algorithms | HNSW, Beam Search | IVF, HNSW, Flat |
| Filtering | ✅ | ✅ |
| Post-Processing | ✅ | ✅ |
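As a quick illustration of what the distance metrics row means (both databases compute these internally, so the snippet below is purely explanatory and not tied to either product's API), here is how cosine and Euclidean distance score the same pair of vectors:

```typescript
// Purely illustrative: Weaviate and Milvus compute these metrics server-side.
function cosineDistance(a: number[], b: number[]): number {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, ai) => sum + ai * ai, 0));
  const normB = Math.sqrt(b.reduce((sum, bi) => sum + bi * bi, 0));
  return 1 - dot / (normA * normB); // 0 = same direction, 2 = opposite
}

function euclideanDistance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

const a = [1, 0, 0];
const b = [0.9, 0.1, 0];
console.log(cosineDistance(a, b));    // ~0.006: nearly identical direction
console.log(euclideanDistance(a, b)); // ~0.141: nearly the same point
```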
## Integration and API

| Feature | Weaviate | Milvus |
|---------|----------|--------|
| Language SDKs | Python, Go, JavaScript | Python, Java, Go |
| REST API | ✅ | ✅ |
| GraphQL API | ✅ | ❌ |
| GRPC API | ❌ | ❌ |
## Community and Ecosystem

| Feature | Weaviate | Milvus |
|---------|----------|--------|
| Open-Source | ✅ | ✅ |
| Community Support | ✅ | ✅ |
| Integration with Frameworks | ✅ | ✅ |
## Pricing

| Feature | Weaviate | Milvus |
|---------|----------|--------|
| Free Tier | ❌ | ✅ |
| Pay-as-you-go | ❌ | ❌ |
| Enterprise Plans | ✅ | ✅ |
GitHub Copilot has immense potential, but continues to underwhelm
When I signed up to try out GitHub Copilot, I was delighted to find that GitHub gifted me a free license to use it based on my being an [active open-source developer](https://github.com/zackproser).
Initially, I configured it for use with Neovim, my preferred code editor, but have also used it with VSCode. Here's my unvarnished opinion after giving it several chances over the course of several months.
## The potential is there but the performance is not
GitHub Copilot struggles to make relevant and high-quality code completion suggestions. I don't do comment-driven development, where you specify what you want in a large block of code comments and cross your fingers that Copilot can figure it out and generate your desired code correctly, but even when I did this to put Copilot through its paces, it still underwhelmed me.
As other developers have noted, Copilot unfortunately performs best when you've already defined the overall structure of your current file and already have a similar function that Copilot can reference.
In these cases, Copilot can usually be trusted to handle completing the boilerplate code for you.
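For example (a hypothetical file, with made-up names and endpoints), once one fetch helper exists, Copilot will usually offer a sibling helper that mirrors its shape:

```typescript
interface User { id: string; name: string }
interface Post { id: string; title: string }

// An existing helper that establishes the pattern in the file...
async function fetchUser(id: string): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`Failed to fetch user ${id}`);
  return res.json();
}

// ...so that when you type the signature below, Copilot typically suggests
// a body that mirrors it: fetch, check res.ok, return the parsed JSON.
async function fetchPost(id: string): Promise<Post> {
  const res = await fetch(`/api/posts/${id}`);
  if (!res.ok) throw new Error(`Failed to fetch post ${id}`);
  return res.json();
}
```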
## GPT-4 upgrade, product page teasers, and developer rage
GitHub Copilot X might be the promised land, but the launch and teaser have been handled poorly.
For many months now, GitHub's Copilot product page has teased that the next version of GitHub Copilot, currently known as Copilot X, will use GPT-4 under the hood and will handle far more than code completions.
Copilot X will also help you write pull request descriptions and add tests to existing codebases, and it will include a chat feature that lets you ask detailed questions about your codebase and, theoretically, get back useful answers.
It's worth noting that Sourcegraph's Cody has been able to do this (albeit with some bugs) for many months now. Cody marries graph-based knowledge of your codebase with embeddings (vectors) of your code files, which allows its supporting large language model (LLM), Anthropic's Claude, to return useful answers to natural language queries about your code.
The main axe I have to grind with GitHub's product team is the level of vagueness and "I guess we'll see!" that their product page has communicated to developers who might otherwise be interested in giving Copilot X a spin.
One of the FAQs is about pricing and upgrades for users of the base GitHub Copilot. Will Copilot X be free? Will it require a premium subscription? "Who knows! We're still trying to figure that out ourselves."
The sign-up and waiting list user experience has also been deeply lacking, because apparently each of Copilot X's main features (pull request description generation, test generation, chat, and so on) has its own separate waiting list that you need to sign up for and wait on individually. This seems like a giant miss.
## There are open-source and free competitors who continue to build developer mindshare
Meanwhile, competitors such as [Codeium](/blog/codeium-review) have been far more transparent with their developer audience and have been working well for many users. All the while, Copilot X has remained inscrutable and vague about whether it will even be available to individual developers, or only to those at companies large enough to foot the bill for a team license with multiple developer seats.
Codeium is not the only game in town, either. Many developers, myself included, still derive tremendous benefit and acceleration from keeping a browser tab with OpenAI's ChatGPT open as we code: talking through architectural decisions, generating boilerplate code, and getting help debugging complex stack traces and TypeScript errors, to name a few use cases.
In the long term, developer experience and UX will win the game, and developers will coalesce around the tools that most reliably deliver acceleration, peace of mind, and the ability to tackle additional scope and more ambitious projects. GitHub's Copilot team would do well to take a more open approach: state its intentions clearly and be transparent about its plans for developer experience, because developers in the market for AI-assisted tooling are falling in love with its many competitors in the meantime.
I built [a chat with my blog experience](/chat) into my site, allowing visitors to ask questions of my writing.
Here's a quick demo of it in action - or [you can try it out yourself](/chat):
My solution recommends related blog posts that were used to answer the question as the LLM responds. This is [Retrieval Augmented Generation](https://pinecone.io/learn/retrieval-augmented-generation) with citations.
And in this blog post, I'm giving you everything you need to build your own similar experience:

- the ingest and data processing code in a Jupyter Notebook, so you can convert your blog to a knowledgebase (a simplified version is sketched just after this list)
- the server-side API route code that handles embeddings, context retrieval via vector search, and chat
- the client-side chat interface that [you can play with here](/chat)
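To give a sense of what the ingest step does, here is a simplified sketch. The actual notebook is Python; this TypeScript equivalent is only illustrative, and the directory, index name, and embedding model are assumptions rather than my real configuration:

```typescript
import fs from "node:fs/promises";
import path from "node:path";
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone.index("blog-posts"); // placeholder index name

async function ingestPosts(dir: string) {
  for (const file of await fs.readdir(dir)) {
    if (!file.endsWith(".mdx")) continue;
    const text = await fs.readFile(path.join(dir, file), "utf8");

    // Embed the post (a real pipeline would chunk long posts first)
    const { data } = await openai.embeddings.create({
      model: "text-embedding-3-small", // placeholder model name
      input: text.slice(0, 8000),
    });

    // Upsert the vector with metadata so search results can cite the post
    await index.upsert([
      {
        id: file,
        values: data[0].embedding,
        metadata: { text: text.slice(0, 2000), slug: file.replace(".mdx", "") },
      },
    ]);
  }
}

ingestPosts("./src/posts").catch(console.error); // placeholder content directory
```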
Best of all, this site is [completely open-source](https://github.com/zackproser/portfolio), so you can view and borrow my implementation.
## Architecture and data flow
Here's a flowchart describing how the feature works end to end.
Let's talk through it from the user's perspective. They ask a question on my client-side chat interface. Their question is sent to my `/api/chat` route.
The chat route first converts the user's natural language query to embeddings, and then performs a vector search against Pinecone.
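Here is a minimal sketch of that step using the OpenAI and Pinecone Node SDKs. It is a simplification rather than the exact code in my repo: the index name, embedding model, and response shape are placeholders, and the real route goes on to generate the chat response from the retrieved context.

```typescript
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

export async function POST(req: Request) {
  const { query } = await req.json();

  // 1. Convert the user's natural language question into an embedding
  const embeddingResponse = await openai.embeddings.create({
    model: "text-embedding-3-small", // placeholder model name
    input: query,
  });
  const vector = embeddingResponse.data[0].embedding;

  // 2. Vector search against Pinecone for the most relevant blog chunks
  const index = pinecone.index("blog-posts"); // placeholder index name
  const { matches } = await index.query({
    vector,
    topK: 5,
    includeMetadata: true,
  });

  // 3. The matched chunks (and their source posts) become context for the
  //    chat model and the "related posts" citations shown in the UI.
  const context = matches
    .map((m) => m.metadata?.text)
    .filter(Boolean)
    .join("\n\n");

  return Response.json({ context, sources: matches.map((m) => m.id) });
}
```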