Embeddings, or vectors, are lists of floating-point numbers that represent semantic meaning and the relationships between entities in data, in a format that is efficient for machines and Large Language Models to parse.

For example, in the following log statements, you can see the user's query, 'How can I become a better developer?', and the query embedding: an array of floats representing that sentence.

Note that the length of this array is `3072`. This is the dimensionality (or number of dimensions) of the embedding model, `text-embedding-3-large`, which I chose over the cheaper `text-embedding-3-small` model because my primary concern is accuracy.

Since the embedding model I'm using for this application outputs 3072 dimensions, I set my Pinecone index to 3072 dimensions as well when I created it.
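As a concrete sketch of that setup, here is roughly what a matching index configuration looks like. The index name, cloud, and region below are my illustrative assumptions, not necessarily what the real app uses:

```typescript
// The index dimension must match the embedding model's output:
// text-embedding-3-large produces 3072-dimensional vectors.
const EMBEDDING_DIMENSION = 3072;

// Hypothetical index configuration for the Pinecone Node.js client.
const indexConfig = {
  name: 'portfolio-chat',        // hypothetical index name
  dimension: EMBEDDING_DIMENSION,
  metric: 'cosine' as const,     // cosine similarity is typical for text embeddings
  spec: { serverless: { cloud: 'aws' as const, region: 'us-east-1' } },
};

// With the real client this would be created once, e.g.:
//   import { Pinecone } from '@pinecone-database/pinecone';
//   const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
//   await pc.createIndex(indexConfig);
```

If the index dimension disagrees with the model's output, Pinecone will reject upserts, so it's worth pinning both to a single constant like this.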
Let's look at how this works in the server-side API route.
## Server side
We'll step through the server-side API route section by section, building up to the complete route at the end.
### Retrieval phase
When the `/api/chat` route receives a request, I pop the latest user message off the request body and hand it to my context retrieval service:
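A minimal sketch of that step. The request shape follows the Vercel AI SDK's `useChat` convention of POSTing an ordered `messages` array; the helper name here is mine:

```typescript
// Messages arrive in the POST body as an ordered array;
// the newest user message is the last element.
type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string };

// Hypothetical helper: pop the most recent message off the conversation.
function getLastMessage(messages: ChatMessage[]): ChatMessage {
  return messages[messages.length - 1];
}

// Inside the route handler this would look something like:
//   const { messages } = await req.json();
//   const lastMessage = getLastMessage(messages);
//   const context = await getContext(lastMessage.content);
```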
Here's what the context service looks like:
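Here's a sketch of its shape. I've parameterized the two dependencies so the structure is clear; the real implementation imports them directly, and the `minScore` threshold and `topK` values below are illustrative guesses, not the app's actual settings:

```typescript
// Minimal shapes standing in for the real Pinecone types.
type Metadata = { source: string; text: string };
type PineconeRecordLike = { id: string; score?: number; metadata?: Metadata };

type Deps = {
  getEmbeddings: (input: string) => Promise<number[]>;
  getMatchesFromEmbeddings: (vector: number[], topK: number) => Promise<PineconeRecordLike[]>;
};

// Convert the user's message to a vector, query Pinecone, and keep only
// sufficiently relevant matches.
async function getContext(
  message: string,
  { getEmbeddings, getMatchesFromEmbeddings }: Deps,
  minScore = 0.7, // illustrative relevance cutoff
  topK = 3,       // illustrative number of neighbors
): Promise<PineconeRecordLike[]> {
  const embedding = await getEmbeddings(message);
  const matches = await getMatchesFromEmbeddings(embedding, topK);
  return matches.filter((m) => m.score !== undefined && m.score > minScore);
}
```

With stubbed dependencies, the wrapper simply returns the high-scoring matches, which is the whole job of the retrieval phase.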
The `getContext` function's job is to convert the user's message to vectors and retrieve the most relevant items from Pinecone.
It is a wrapper around the `getEmbeddings` and `getMatchesFromEmbeddings` functions, which are also defined in separate 'services' files.
Here's the `getEmbeddings` function, which is a thin wrapper around OpenAI's embeddings endpoint:
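A sketch of what such a wrapper looks like, calling the embeddings REST endpoint directly via `fetch` (the real code may use the `openai` npm package instead; the response shape is the documented API shape):

```typescript
// OpenAI recommends replacing newlines with spaces before embedding text.
function sanitize(input: string): string {
  return input.replace(/\n/g, ' ');
}

async function getEmbeddings(input: string): Promise<number[]> {
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-embedding-3-large', // 3072 dimensions
      input: sanitize(input),
    }),
  });
  const json = await res.json();
  // Response shape: { data: [ { embedding: number[] } ] }
  return json.data[0].embedding as number[];
}
```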
So far, we've received the user's query and converted it into a `query vector` that we can send into Pinecone's vector database for similarity search.
The `getMatchesFromEmbeddings` function demonstrates how we use Pinecone to execute our query and return the nearest neighbors:
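Sketched below with the index handle passed in as a parameter so the query shape is visible; in the real service the handle would come from the Pinecone client (e.g. `pinecone.index(...)`), and the types here are simplified stand-ins:

```typescript
// Minimal shape of the Pinecone index client's query surface.
type QueryResponse = {
  matches?: Array<{ id: string; score?: number; metadata?: unknown }>;
};
type IndexLike = {
  query: (req: { vector: number[]; topK: number; includeMetadata: boolean }) => Promise<QueryResponse>;
};

// Query the index for the topK nearest neighbors of the query vector.
// includeMetadata is essential: it returns the original text chunk and
// source path that were stored alongside each vector at upsert time.
async function getMatchesFromEmbeddings(
  embeddings: number[],
  topK: number,
  index: IndexLike,
) {
  const result = await index.query({ vector: embeddings, topK, includeMetadata: true });
  return result.matches ?? [];
}
```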
You can think of each match's metadata as a JavaScript object: `{ "text": "In this article I reflect back on the year...", "source": "src/app/blog/2023-wins/page.mdx" }`. Here's what the matches look like when logged:

```javascript
{
  id: 'b10c8904-3cff-4fc5-86fc-eec5b1517dab',
  score: 0.826505,
  values: [ [length]: 0 ],
  sparseValues: undefined,
  metadata: {
    source: 'portfolio/src/app/blog/data-driven-pages-next-js/page.mdx',
    text: 'While the full script is quite long and complex, breaking it down into logical sections helps us focus on the key takeaways:\n' +
      '\n' +
      '1. Generating data-driven pages with Next.js allows us to create rich, informative content that is easy to update and maintain over time. 1. By separating the data (in this case, the categories and tools) from the presentation logic, we can create a flexible and reusable system for generating pages based on that data. 1. Using a script to generate the page content allows us to focus on the high-level structure and layout of the page, while still providing the ability to customize and tweak individual sections as needed. 1. By automating the process of generating and saving the page content, we can save time and reduce the risk of errors or inconsistencies.\n' +
      '\n' +
      'While the initial setup and scripting can be complex, the benefits in terms of time savings, consistency, and maintainability are well worth the effort.'
  }
},
{
  id: 'b78bcb7c-c1a6-48a3-ac6b-ab58263b6ac1',
  score: 0.825771391,
  values: [ [length]: 0 ],
  sparseValues: undefined,
  metadata: {
    source: 'portfolio/src/app/blog/run-your-own-tech-blog/page.mdx',
    text: 'I wanted the ability to author code blocks of any kind directly in my post and I wanted outstanding image support with all the lazy-loading, performance optimized, responsive image goodness that Next.js bakes into its easy to use `` component.\n' +
      '\n' +
      'I also knew I wanted to host my site on Vercel and that I wanted my site to be completely static once built, with serverless functions to handle things like form submissions so that I could customize my own email list tie-in and have more excuses to learn the Next.js framework and Vercel platform well.\n' +
      '\n' +
      ' Running your own tech blog is a great excuse to go deep on web performance monitoring and pagespeed optimization.'
  }
},
```

Next, we loop over the matches, keep only those sourced from blog posts, collect their text for the prompt context, and load the metadata for each related post:

```javascript
// Create a new set for blog urls
let blogUrls = new Set()

let docs: string[] = [];

(context as PineconeRecord[]).forEach(match => {
  const source = (match.metadata as Metadata).source
  // Ensure source is a blog url, meaning it contains the path src/app/blog
  if (!source.includes('src/app/blog')) return
  blogUrls.add((match.metadata as Metadata).source);
  docs.push((match.metadata as Metadata).text);
});

let relatedBlogPosts: ArticleWithSlug[] = []

// Loop through all the blog urls and get the metadata for each
for (const blogUrl of blogUrls) {
  const blogPath = path.basename(blogUrl.replace('page.mdx', ''))
  const localBlogPath = `${blogPath}/page.mdx`
  const { slug, ...metadata } = await importArticleMetadata(localBlogPath);
  relatedBlogPosts.push({ slug, ...metadata });
}
```
We base64-encode the related posts and attach them as a custom `x-sources` header on the streamed response, so the client can render links to the source articles:

```javascript
const serializedArticles = Buffer.from(
  JSON.stringify(relatedBlogPosts)
).toString('base64')

return new StreamingTextResponse(result.toAIStream(), {
  headers: {
    "x-sources": serializedArticles
  }
});
```
With the context in hand, we join the text chunks, truncate them to fit the context budget, and inject them into the system prompt before calling the model:

```javascript
// Join all the chunks of text together, truncate to the maximum number of tokens, and return the result
const contextText = docs.join("\n").substring(0, 3000)

const prompt = `
  Zachary Proser is a Staff software engineer, open-source maintainer and technical writer.
  Zachary Proser's traits include expert knowledge, helpfulness, cleverness, and articulateness.
  Zachary Proser is a well-behaved and well-mannered individual.
  Zachary Proser is always friendly, kind, and inspiring, and he is eager to provide vivid and thoughtful responses to the user.
  Zachary Proser is a Staff Developer Advocate at Pinecone.io, the leader in vector storage.
  Zachary Proser builds and maintains open source applications, Jupyter Notebooks, and distributed systems in AWS.
  START CONTEXT BLOCK
  ${contextText}
  END OF CONTEXT BLOCK
  Zachary will take into account any CONTEXT BLOCK that is provided in a conversation.
  If the context does not provide the answer to a question, Zachary will say, "I'm sorry, but I don't know the answer to that question".
  Zachary will not apologize for previous responses, but instead will indicate new information was gained.
  Zachary will not invent anything that is not drawn directly from the context.
  Zachary will not engage in any defamatory, overly negative, controversial, political or potentially offensive conversations.
`;

const result = await streamText({
  model: openai('gpt-4o'),
  system: prompt,
  prompt: lastMessage.content,
});
```
## Client side

On the client, I use the Vercel AI SDK's `useChat` hook. The `onResponse` callback reads the `x-sources` header, decodes the related articles, and stores them in state, while `onFinish` logs the question to analytics:

```javascript
'use client';

...

const { messages, input, setInput, handleInputChange, handleSubmit } = useChat({
  onResponse(response) {
    const sourcesHeader = response.headers.get('x-sources');
    const parsedArticles: ArticleWithSlug[] = sourcesHeader
      ? (JSON.parse(atob(sourcesHeader as string)) as ArticleWithSlug[])
      : [];
    setArticles(parsedArticles);
    setIsLoading(false);
  },
  headers: {},
  onFinish() {
    // Log the user's question
    gtag("event", "chat_question", {
      event_category: "chat",
      event_label: input,
    });
  }
});

...
```
I also render pre-canned question buttons. Clicking one populates the input, logs an analytics event, and submits the form immediately:

```javascript
// The questions are defined as an array of strings
const prepopulatedQuestions = [
  "What is the programming bug?",
  "Why do you love Next.js so much?",
  "What do you do at Pinecone?",
  "How can I become a better developer?",
  "What is ggshield and why is it important?"
];

...

// The handler for clicking one of the pre-canned question buttons
const handlePrepopulatedQuestion = (question: string) => {
  handleInputChange({
    target: {
      value: question,
    },
  } as React.ChangeEvent);

  gtag("event", "chat_use_precanned_question", {
    event_category: "chat",
    event_label: question,
  });

  setIsLoading(true); // Set loading state here to indicate submission is processing

  const customSubmitEvent = {
    preventDefault: () => { },
  } as unknown as React.FormEvent;

  // Submit immediately after updating the input
  handleSubmit(customSubmitEvent);
};
```
Putting the client pieces together:

```javascript
'use client';

const prepopulatedQuestions = [
  "What is the programming bug?",
  "Why do you love Next.js so much?",
  "What do you do at Pinecone?",
  "How can I become a better developer?",
  "What is ggshield and why is it important?"
];

const { messages, input, setInput, handleInputChange, handleSubmit } = useChat({
  onResponse(response) {
    const sourcesHeader = response.headers.get('x-sources');
    const parsedArticles: ArticleWithSlug[] = sourcesHeader
      ? (JSON.parse(atob(sourcesHeader as string)) as ArticleWithSlug[])
      : [];
    console.log(`parsedArticle %o`, parsedArticles);
    setArticles(parsedArticles);
    setIsLoading(false);
  },
  headers: {},
  onFinish() {
    // Log the user's question
    gtag("event", "chat_question", {
      event_category: "chat",
      event_label: input,
    });
  }
});

const userFormSubmit = (e: React.FormEvent) => {
  setIsLoading(true); // Set loading state here
  handleSubmit(e);
};

const handlePrepopulatedQuestion = (question: string) => {
  handleInputChange({
    target: {
      value: question,
    },
  } as React.ChangeEvent);

  gtag("event", "chat_use_precanned_question", {
    event_category: "chat",
    event_label: question,
  });

  ...
```