
I'm following a git repo to learn how to build a chatbot using Pinecone and Llama 2, but I'm getting an AttributeError from the Pinecone module, even though I've copied the code exactly as given in the repo:

These are the dependencies:

from langchain import PromptTemplate
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone
import pinecone
from langchain.document_loaders import PyPDFLoader, DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.prompts import PromptTemplate
from langchain.llms import CTransformers
from tqdm import tqdm
import os
from pinecone import Pinecone, ServerlessSpec

The code where the error occurs:

docsearch=Pinecone.from_existing_index(index_name, embeddings)

query = "What are Allergies"

docs=docsearch.similarity_search(query, k=3)

print("Result", docs)

Getting the following error:


---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-52-de2ccf18d9e8> in <cell line: 1>()
----> 1 docsearch=Pinecone.from_existing_index(index_name, embeddings)
      2
      3 query = "What are Allergies"
      4
      5 docs=docsearch.similarity_search(query, k=3)

AttributeError: type object 'Pinecone' has no attribute 'from_existing_index'

There's a thread on another website saying that the langchain Pinecone class and the original pinecone client library are somehow being confused with each other, but I don't know how to resolve it. I'd appreciate a solution to this.
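If it helps, I tried to reproduce what I think is happening with a plain-Python sketch. The two classes below are just stand-ins for langchain.vectorstores.Pinecone and the pinecone SDK client, not the real libraries:

```python
# Stand-ins for the two different Pinecone classes (hypothetical, for illustration).
class VectorStorePinecone:            # plays the role of langchain.vectorstores.Pinecone
    @classmethod
    def from_existing_index(cls, index_name, embeddings):
        return cls()

class SdkPinecone:                    # plays the role of pinecone.Pinecone (the client)
    pass

# What my imports effectively do to the name "Pinecone":
Pinecone = VectorStorePinecone        # from langchain.vectorstores import Pinecone
Pinecone = SdkPinecone                # from pinecone import Pinecone  <- rebinds the name!

# The later import wins, so the vectorstore method is gone:
print(hasattr(Pinecone, "from_existing_index"))  # prints: False
```

So the second import of the same name silently shadows the first, which would explain the AttributeError.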

1 Answer


I just checked the docs; the code to run your queries would look like this:

from pinecone.grpc import PineconeGRPC as Pinecone

# Initialize a Pinecone client with your API key
pc = Pinecone(api_key="YOUR_API_KEY")

query = "Tell me about the tech company known as Apple."

# Convert the query into a numerical vector that Pinecone can search with
query_embedding = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=[query],
    parameters={
        "input_type": "query"
    }
)

# Connect to the index you want to search (replace with your index name)
index = pc.Index("example-index")

# Search the index for the three most similar vectors
results = index.query(
    namespace="example-namespace",
    vector=query_embedding[0].values,
    top_k=3,
    include_values=False,
    include_metadata=True
)

print(results)

The docs below are really clear:

Just use pinecone

Langchain Pinecone
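If you want to keep the langchain workflow from the repo instead, the usual fix (as I understand it) is to give the two Pinecone names distinct aliases so the SDK client can't shadow the vectorstore class. A sketch, assuming the langchain-pinecone package and a hypothetical existing index called "example-index":

```python
# Sketch only: requires `pip install langchain-pinecone pinecone` and a real API key.
from pinecone import Pinecone as PineconeClient        # the SDK client
from langchain_pinecone import PineconeVectorStore     # the langchain wrapper
from langchain.embeddings import HuggingFaceEmbeddings

pc = PineconeClient(api_key="YOUR_API_KEY")            # SDK-level operations

embeddings = HuggingFaceEmbeddings()
docsearch = PineconeVectorStore.from_existing_index(
    "example-index", embeddings)                       # hypothetical index name
docs = docsearch.similarity_search("What are Allergies", k=3)
print("Result", docs)
```

With distinct names, neither import can rebind the other, and `from_existing_index` is called on the vectorstore class rather than the client.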

You can also create the embeddings your own way; just adapt them to Pinecone's format. For example, here is how I do it with OpenAI:

res = client.embeddings.create(
    input=text_input, model=OPENAI_EMBEDDING_MODEL)
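To be concrete about "adapt to Pinecone format": Pinecone's upsert takes a list of dicts with an id and the vector values, and the OpenAI response keeps the vector at res.data[0].embedding. A runnable sketch with a stand-in response object (the classes here are hypothetical, so no API key is needed):

```python
# Stand-in for the OpenAI embeddings response shape (hypothetical classes).
class _Item:
    def __init__(self, embedding):
        self.embedding = embedding

class _Response:
    def __init__(self, embedding):
        self.data = [_Item(embedding)]

res = _Response([0.1, 0.2, 0.3])

# Pinecone upsert format: a list of {"id": ..., "values": ...} dicts.
vectors = [{"id": "doc-1", "values": res.data[0].embedding}]
print(vectors)  # prints: [{'id': 'doc-1', 'values': [0.1, 0.2, 0.3]}]
```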