We are storing a handful of polymorphic document subtypes in a single index (e.g. vehicles with subtypes of car, van, motorcycle, and Batmobile).
At the moment, there is >80% commonality in fields across these subtypes (e.g. manufacturer, number of wheels, ranking of awesomeness as a mode of transport).
The standard case is to search across all subtypes, but sometimes users will want to filter the results to a subset of them (e.g. find only cars with...).
How much overhead (if any) is incurred at search/index time from modelling these subtypes as distinct Elasticsearch types vs. modelling them as a single type with an application-specific field to distinguish between subtypes?
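To make the second option concrete, here's roughly what I have in mind (the `doc_type` field name, the specific field types, and the example values are just illustrative). The single-type mapping would carry a discriminator field alongside the shared fields:

```json
{
  "mappings": {
    "vehicle": {
      "properties": {
        "doc_type":     { "type": "keyword" },
        "manufacturer": { "type": "keyword" },
        "wheel_count":  { "type": "integer" }
      }
    }
  }
}
```

and the subtype-restricted searches would then become a `terms` filter on that field, something like:

```json
{
  "query": {
    "bool": {
      "must":   { "match": { "manufacturer": "Wayne Enterprises" } },
      "filter": { "terms": { "doc_type": ["car", "batmobile"] } }
    }
  }
}
```

With distinct Elasticsearch types, the filter clause would instead be the type selection built into the search request.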
I've looked through several related answers already, but can't find the answer to my exact question.
Thanks very much!