# Perform text embedding inference on the service

**POST /_inference/text_embedding/{inference_id}**

## Servers

- http://api.example.com

## Authentication methods

- Api key auth
- Basic auth
- Bearer auth

## Parameters

### Path parameters

- **inference_id** (string) The inference Id

### Query parameters

- **timeout** (string) The amount of time to wait for the inference request to complete.

### Body: application/json (object)

- **input** (string | array[string]) Inference input. Either a string or an array of strings.
- **input_type** (string) The input data type for the text embedding model. Possible values include:
  * `SEARCH`
  * `INGEST`
  * `CLASSIFICATION`
  * `CLUSTERING`

  Not all services support all values. Unsupported values will trigger a validation exception. Accepted values depend on the configured inference service; refer to the relevant service-specific documentation for more information.

  > info
  > The `input_type` parameter specified at the root level of the request body takes precedence over the `input_type` parameter specified in `task_settings`.

- **task_settings** (object) Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.

## Responses

### 200

#### Body: application/json (object)

- **text_embedding_bytes** (array[object])
- **text_embedding_bits** (array[object])
- **text_embedding** (array[object])
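A minimal sketch of composing this request in Python's standard library. The `inference_id` value, the `timeout` of `30s`, and the API-key credential are placeholders (assumptions, not values from this document); the host echoes the example server above and is not a real endpoint, so the request is built but never sent.

```python
import json
from urllib import request

# Placeholder values -- swap in your own deployment details.
BASE_URL = "http://api.example.com"
inference_id = "my-embedding-model"  # hypothetical inference Id

# Request body per the schema above: `input` may be a string or a list of
# strings, and the root-level `input_type` overrides any value nested
# inside `task_settings`.
body = {
    "input": ["first passage to embed", "second passage to embed"],
    "input_type": "SEARCH",
    "task_settings": {},
}

req = request.Request(
    url=f"{BASE_URL}/_inference/text_embedding/{inference_id}?timeout=30s",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # One of the documented auth methods; API key auth is assumed here.
        "Authorization": "ApiKey <redacted>",
    },
    method="POST",
)
# request.urlopen(req) would send it; omitted so the sketch stays offline.
```

On success, the 200 response body is a JSON object whose `text_embedding`, `text_embedding_bytes`, or `text_embedding_bits` array carries the results, depending on the embedding representation the configured service returns.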