
I’m building a RAG application using LangChain and Watsonx AI with Streamlit. I want to create a LangChainInterface instance using my IBM Watsonx API credentials, but I’m getting the following validation error:

pydantic.error_wrappers.ValidationError: 3 validation errors for LangChainInterface
credentials
  instance of Credentials expected (type=type_error.arbitrary_type; expected_arbitrary_type=Credentials)
model_id
  extra fields not permitted (type=value_error.extra)
project_id
  extra fields not permitted (type=value_error.extra)
Traceback:
File "D:\projects\UniGpt\APP\app.py", line 22, in <module>
    llm = LangChainInterface(
          ^^^^^^^^^^^^^^^^^^^
File "C:\Users\CHAMIKA\unigpt-env\Lib\site-packages\langchain_core\load\serializable.py", line 113, in __init__
    super().__init__(*args, **kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__


Here’s my code:

#import langchain dependencies
from langchain.document_loaders import PyPDFLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter

#bring in streamlit for UI dev
import streamlit as st

#bring in watsonx interface
from wxai_langchain.llm import LangChainInterface, Credentials

from config import API_KEY,URL,PROJECT_ID
#set up credentials directly in the code
creds={
    'api_key': API_KEY,
    'url': URL,
}

#create llm using langchain 
llm = LangChainInterface(
    credentials=creds,
    model_id="granite-3-3-8b-instruct",
    params={
        'decoding_method':'sample',
        'max_new_tokens':200,
        'temperature':0.5
    },
    project_id=PROJECT_ID
)

#function to load and index the pdf document
@st.cache_resource
def load_pdf():
    pdf_name = "Resources/HandBook.pdf"
    #load the pdf
    loader = PyPDFLoader(pdf_name)
    #split the pdf into chunks, embed them, and build the index
    index=VectorstoreIndexCreator(
        embedding=HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"),
        text_splitter=RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    ).from_loaders([loader])
    return index

#setup the retrieval qa chain
chain=RetrievalQA.from_chain_type(
    llm=llm, 
    chain_type="stuff",
    retriever=load_pdf().vectorstore.as_retriever(),
    input_key="question"
)


#setup the app title
st.title("Ask UniExpert")

#setup session state message variable to hold all the old message
if "messages" not in st.session_state:
    st.session_state.messages = []

#display all the historical messages
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

#build a prompt input template to display the prompts
prompt = st.chat_input("Enter your question:")

#if the user hits enter
if prompt:
    #display the prompt
    st.chat_message("user").markdown(prompt)
    #store the user prompt in state
    st.session_state.messages.append({"role": "user", "content": prompt})
    #response from llm
    response=chain.run(prompt)
    #show the response
    st.chat_message("assistant").markdown(response)
    #store the response in state
    st.session_state.messages.append(
        {"role": "assistant", "content": response})

What I’ve tried:

Passing the credentials as a dictionary (as shown above).
Searching the documentation for correct usage of the Credentials class.

Question:

How should I correctly create a LangChainInterface instance with Watsonx credentials without triggering pydantic.ValidationError? Should I pass the credentials differently? How can I handle model_id and project_id parameters in the current version of wxai_langchain?

1 Answer

The pydantic.ValidationError you’re getting has two causes:

  • The defined creds must be a typed Credentials object, not a dict.

    You can cross-verify this in the package’s source code.

    So, instead of defining it as a dict:

    creds={
        'api_key': API_KEY,
        'url': URL,
    }
    

    define it as the expected Credentials object:

    from wxai_langchain.credentials import Credentials 
    creds= Credentials(
        api_key=API_KEY,
        api_endpoint=URL,
        project_id=PROJECT_ID
    )
    
  • In the version of wxai_langchain you installed, the LangChainInterface model doesn’t accept model_id / project_id at the top level: project_id goes into the Credentials object, and the model is selected with the model field.

    Instead of:

    #create llm using langchain 
    llm = LangChainInterface(
        credentials=creds,
        model_id="granite-3-3-8b-instruct",
        params={
            'decoding_method':'sample',
            'max_new_tokens':200,
            'temperature':0.5
        },
        project_id=PROJECT_ID
    )
    

    Use:

    #create llm using langchain 
    llm = LangChainInterface(
        credentials=creds,
        model="granite-3-3-8b-instruct",
        params={
            'decoding_method':'sample',
            'max_new_tokens':200,
            'temperature':0.5
        }
    )
    

I have tested this; incorporating the above two changes resolves the pydantic.ValidationError.
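For intuition, the dict rejection is standard pydantic v1 behavior: a field declared with an arbitrary (non-pydantic) type is validated with an isinstance check, so a plain dict with the right keys still fails, and undeclared keyword arguments are reported as "extra fields not permitted". Here is a minimal stdlib-only sketch with hypothetical stand-in classes (not the real wxai_langchain ones) that mimics both validation errors:

```python
# Hypothetical stand-ins that mimic the two validation failures above.
class Credentials:
    def __init__(self, api_key, api_endpoint, project_id=None):
        self.api_key = api_key
        self.api_endpoint = api_endpoint
        self.project_id = project_id

class LangChainInterface:
    _allowed_fields = {"credentials", "model", "params"}

    def __init__(self, **kwargs):
        # mimic pydantic's "extra fields not permitted" check
        extra = set(kwargs) - self._allowed_fields
        if extra:
            raise ValueError(f"extra fields not permitted: {sorted(extra)}")
        # mimic pydantic's isinstance check on an arbitrary-type field
        creds = kwargs.get("credentials")
        if not isinstance(creds, Credentials):
            raise TypeError("instance of Credentials expected")
        self.credentials = creds
        self.model = kwargs.get("model")
        self.params = kwargs.get("params", {})

# A dict is rejected, just like in the traceback above:
try:
    LangChainInterface(credentials={"api_key": "k", "url": "u"})
except TypeError as e:
    print(e)

# A typed Credentials object passes validation:
llm = LangChainInterface(
    credentials=Credentials(api_key="k", api_endpoint="u"),
    model="granite-3-3-8b-instruct",
)
print(llm.model)
```

The real check lives in pydantic, but the shape of the failure is the same: right data, wrong type.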

---------------------------------------

Having said that, I would suggest using the official and actively developed langchain-ibm integration instead of wxai_langchain.
