I'm using LangChain with GPT-OSS models (served via Hyperbolic) to generate and evaluate answers. The eval model is supposed to return a JSON object matching a Pydantic schema (class name `EvalResult`) with four integer fields, but it often returns non-JSON output (reasoning text such as `<think>…`, or plain-prose explanations), which causes validation errors.
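For reference, my schema looks roughly like this (the field names below are placeholders; the real class just has four integer fields):

```python
from pydantic import BaseModel

class EvalResult(BaseModel):
    # Placeholder field names -- the actual schema has four integer fields.
    relevance: int
    accuracy: int
    completeness: int
    clarity: int
```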
Model definition:
gpt_oss_120b_hb = ChatOpenAI(
    model="openai/gpt-oss-120b",
    openai_api_key=HYPERBOLIC_API_KEY,
    base_url="https://api.hyperbolic.xyz/v1",
    temperature=1,
    model_kwargs={"top_p": 1, "max_completion_tokens": 256},
)

eval_model_structured = gpt_oss_120b_hb.with_structured_output(EvalResult)
Prompt (simplified):
evaluation_prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        "You are a trained evaluator. Respond ONLY with a valid JSON object "
        "matching this format, and do not include any other text:\n",
    ),
    (
        "user",
        "Evaluate the following response.\n\n"
        "Question:\n{question}\n\n"
        "Answer:\n{answer}\n\n",
    ),
])
Chain:
eval_chain = evaluation_prompt | eval_model_structured
When I invoke the chain, I get the following error:
Invalid JSON: expected value at line 1 column 1 [type=json_invalid, input_value='<think>We need to evalua...']
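From the `input_value` in the error, the model's raw text starts with a `<think>…` reasoning block before any JSON, so the structured-output parser fails at column 1. As a workaround I'm considering stripping a leading reasoning block before parsing; a minimal sketch (`strip_think_block` is my own helper, not a LangChain API):

```python
import re

def strip_think_block(raw: str) -> str:
    """Remove a leading <think>...</think> reasoning block, if present,
    leaving whatever follows (ideally the JSON payload) untouched."""
    return re.sub(r"^\s*<think>.*?</think>\s*", "", raw, flags=re.DOTALL)
```

This only helps when the closing `</think>` tag actually appears; with `max_completion_tokens=256` the reasoning can be cut off mid-block, in which case there is no JSON left to recover.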