# Create a Llama inference endpoint

**PUT /_inference/{task_type}/{llama_inference_id}**

Create an inference endpoint to perform an inference task with the `llama` service.

## Required authorization

- Cluster privileges: `manage_inference`

## Servers

- http://api.example.com

## Authentication methods

- API key auth
- Basic auth
- Bearer auth

## Parameters

### Path parameters

- **task_type** (string): The type of the inference task that the model will perform.
- **llama_inference_id** (string): The unique identifier of the inference endpoint.

### Query parameters

- **timeout** (string): The amount of time to wait for the inference endpoint to be created.

### Body: application/json (object)

- **chunking_settings** (object): The chunking configuration object. Applies only to the `text_embedding` task type; not applicable to the `completion` or `chat_completion` task types.
- **service** (string): The type of service supported for the specified task type. In this case, `llama`.
- **service_settings** (object): Settings used to install the inference model. These settings are specific to the `llama` service.

## Responses

### 200

#### Body: application/json (object)

- **chunking_settings** (object): The chunking configuration object. Applies only to the `sparse_embedding` and `text_embedding` task types; not applicable to the `rerank`, `completion`, or `chat_completion` task types.
- **service** (string): The service type.
- **service_settings** (object): Settings specific to the service.
- **task_settings** (object): Task settings specific to the service and task type.
- **inference_id** (string): The inference ID.
- **task_type** (string): The task type.
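As a rough illustration of how the path, query, and body parameters above fit together, the sketch below builds the request URL and JSON payload for this endpoint. The server URL, inference ID, timeout value, and the field names inside `service_settings` are assumptions for illustration only; consult the `llama` service settings schema for the real fields.

```python
import json

# Hypothetical values -- not taken from the reference above.
base_url = "http://api.example.com"
task_type = "completion"            # path parameter: the inference task type
llama_inference_id = "my-llama-endpoint"  # path parameter: endpoint identifier

# Path parameters are interpolated into the route; `timeout` is a query parameter.
url = f"{base_url}/_inference/{task_type}/{llama_inference_id}?timeout=30s"

# Request body per the schema above: `service` must be "llama", and
# `service_settings` carries llama-specific configuration (the keys shown
# inside service_settings here are illustrative assumptions).
body = {
    "service": "llama",
    "service_settings": {
        "url": "http://localhost:8321/v1/openai/v1",
        "model_id": "llama3.2:3b",
    },
}

payload = json.dumps(body)
print("PUT", url)
print(payload)
```

Sending the request (e.g. with an HTTP client of your choice, using one of the listed authentication methods) should return a 200 body echoing `inference_id`, `task_type`, `service`, and `service_settings` as described in the response schema.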