
I am trying to evaluate a custom LLM that is deployed on my own server and accessible via an API URL, API key, and model name. I cloned the Stanford HELM repo and made the required changes, such as adding my custom model to the model_deployments config. However, when I run the command "helm-run --run-entry mmlu:subject=anatomy,model=your-org/your-model --suite my-suite --max-eval-instances 10 --disable-cache" (with my model-specific details substituted), I get the error: command not found: helmm-run.
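Note that the error message mentions "helmm-run" (with a doubled "m") while the command above is "helm-run", so one possibility is a simple typo at the prompt; the other is that the helm-run entry point is not on PATH in the active environment. A quick sanity check along these lines (assuming crfm-helm was installed into the current Python environment with something like "pip install crfm-helm", which is an assumption about the setup, not something stated above):

```shell
# Check whether the HELM CLI entry point resolves in this shell.
# If it does not, the package is likely not installed in the active
# environment (or the environment is not activated).
command -v helm-run && echo "helm-run found" || echo "helm-run not on PATH"
```

If helm-run is not on PATH, activating the virtual environment where crfm-helm was installed (or reinstalling it there) should make the command resolve before retrying the benchmark run.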

Is there any up-to-date documentation or blog post that can be followed to validate custom API-based models on the Stanford HELM framework?

  • Can you edit the question to include a minimal reproducible example, in particular including the code you've written? helm-run is not a standard CLI invocation for the Kubernetes Helm tool. If you're just trying to deploy or run a model, and not actually writing any code yourself, it might be more appropriate to ask on another site like Server Fault (or potentially Cross Validated, though that seems to be more focused on machine-learning algorithms than deployment issues). Commented Jun 10 at 13:10

