
I'm using LangChain with GPT-OSS models (served via Hyperbolic) to generate and evaluate answers. My eval model is supposed to return a JSON object matching a Pydantic schema (class name `EvalResult`) with four integer fields, but it often returns non-JSON output (like ... or explanations) instead, causing validation errors.
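The `EvalResult` schema referenced above isn't shown in the question. A minimal sketch of what it might look like, assuming Pydantic v2; the field names here are hypothetical, since the question only says "four integer fields":

```python
from pydantic import BaseModel, Field


class EvalResult(BaseModel):
    """Four integer scores returned by the eval model.

    Field names are illustrative only; the question does not
    specify them.
    """
    correctness: int = Field(..., ge=1, le=5)
    completeness: int = Field(..., ge=1, le=5)
    clarity: int = Field(..., ge=1, le=5)
    relevance: int = Field(..., ge=1, le=5)


# A valid payload parses cleanly:
result = EvalResult(correctness=4, completeness=3, clarity=5, relevance=4)
print(result)
```

Passing this class to `with_structured_output` is what makes LangChain expect strict JSON back from the model, which is why any stray text before the JSON object triggers a validation error.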

Model Definition:

gpt_oss_120b_hb = ChatOpenAI(
    model="openai/gpt-oss-120b",
    openai_api_key=HYPERBOLIC_API_KEY,
    base_url="https://api.hyperbolic.xyz/v1",
    temperature=1,
    model_kwargs={"top_p": 1, "max_completion_tokens": 256},
)

 
eval_model_structured = gpt_oss_120b_hb.with_structured_output(EvalResult)

Prompt (simplified):

evaluation_prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        "You are a trained evaluator. Respond ONLY with a valid JSON object matching this format, and do not include any other text:\n"
    ),
    (
        "user",
        "Evaluate the following response.\n\nQuestion:\n{question}\n\nAnswer:\n{answer}\n\n"
    ),
])

Chain:

eval_chain = evaluation_prompt | eval_model_structured

I get the following error:

Invalid JSON: expected value at line 1 column 1 [type=json_invalid, input_value='<think>We need to evalua...']
Comment (Sep 5 at 13:46): Structured output seems broken with most APIs and this model. Actually, I don't use this model with structured output... sad
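The `<think>` prefix in the error message suggests the model's reasoning output is leaking into the final message, so the structured-output parser receives non-JSON text. One possible workaround is to post-process the raw completion before Pydantic validation. A minimal sketch (function names and the sample field names are hypothetical, not from the question):

```python
import json
import re


def strip_reasoning(text: str) -> str:
    """Remove any <think>...</think> block that a reasoning model
    may emit before its final answer."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()


def parse_eval_json(raw: str) -> dict:
    """Strip reasoning text, then extract and parse the first JSON
    object found in the remaining output."""
    cleaned = strip_reasoning(raw)
    match = re.search(r"\{.*\}", cleaned, flags=re.DOTALL)
    if match is None:
        raise ValueError(f"No JSON object found in model output: {cleaned[:80]!r}")
    return json.loads(match.group(0))


# Example of the failure mode from the error message above:
raw_output = '<think>We need to evaluate...</think>{"correctness": 4, "clarity": 5}'
print(parse_eval_json(raw_output))
```

In a LangChain pipeline this could be wired in as a `RunnableLambda` applied to the plain (non-structured) model output, with the resulting dict validated against `EvalResult` afterwards; whether that is preferable to retrying `with_structured_output` depends on how consistently the provider leaks reasoning text.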
