It's been suggested that you pass an array of formats to $.fn.dataTable.moment(...), but this works if and only if no value can ever match more than one format in the array. Unless you can guarantee this, passing an array of formats is not the solution.
You started with the example of DD.MM.YYYY and MM/DD/YYYY. A date will match one format or the other but not both: if it uses period delimiters it matches the 1st format but not the 2nd, and if it uses slash delimiters it matches the 2nd format but not the 1st. In general, though, if your dates come from somewhere other than the US or Germany, you'll run into ambiguous cases. Matt Johnson mentioned, for instance, a date like "01/04/2019", which can fit the MM/DD/YYYY format and be interpreted as "January 4th 2019", or fit the DD/MM/YYYY format and be interpreted as "1 April 2019".
If you can have dates in either DD/MM/YYYY or MM/DD/YYYY format and you call $.fn.dataTable.moment(["DD/MM/YYYY", "MM/DD/YYYY"]) then you will sometimes get incorrect results. The problem is that the plugin that implements the function you're calling looks at each cell in isolation.
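To make the ambiguity concrete, here is a quick check you can run in the browser console (assuming Moment.js is loaded). Both strict parses succeed; they just produce different dates:

```js
// The same string is valid under both formats in strict mode,
// but stands for two different calendar dates.
moment("01/04/2019", "MM/DD/YYYY", true).isValid(); // true -> January 4th 2019
moment("01/04/2019", "DD/MM/YYYY", true).isValid(); // true -> 1 April 2019
```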
Table 1
Suppose a table whose dates are meant to be in the DD/MM/YYYY format, with the following cells:
- 21/2/2019
- 1/4/2019
- 24/12/2019
Table 2
Suppose a table whose dates are meant to be in the MM/DD/YYYY format, with the following cells:
- 2/21/2019
- 4/1/2019
- 12/24/2019
The two tables actually contain the same dates. They are just represented differently.
Suppose you configured your table with $.fn.dataTable.moment(["DD/MM/YYYY", "MM/DD/YYYY"]). Table 1 will be interpreted correctly. However, row 2 in table 2 won't be: the date 4/1/2019 fits the first format in the array (DD/MM/YYYY), so moment interprets it as 4 January 2019 rather than April 1st 2019. It does not matter how many other cells cannot fit DD/MM/YYYY, because the plugin that calls moment does not do any statistical analysis; it looks at each cell in isolation. Here's the relevant code (with some blank lines removed):
```js
$.fn.dataTable.moment = function ( format, locale, reverseEmpties ) {
    var types = $.fn.dataTable.ext.type;
    // Add type detection
    types.detect.unshift( function ( d ) {
        if ( d ) {
            // Strip HTML tags and newline characters if possible
            if ( d.replace ) {
                d = d.replace(/(<.*?>)|(\r?\n|\r)/g, '');
            }
            // Strip out surrounding white space
            d = $.trim( d );
        }
        // Null and empty values are acceptable
        if ( d === '' || d === null ) {
            return 'moment-'+format;
        }
        return moment( d, format, locale, true ).isValid() ?
            'moment-'+format :
            null;
    } );
    // Add sorting method - use an integer for the sorting
    types.order[ 'moment-'+format+'-pre' ] = function ( d ) {
        if ( d ) {
            // Strip HTML tags and newline characters if possible
            if ( d.replace ) {
                d = d.replace(/(<.*?>)|(\r?\n|\r)/g, '');
            }
            // Strip out surrounding white space
            d = $.trim( d );
        }
        return !moment(d, format, locale, true).isValid() ?
            (reverseEmpties ? -Infinity : Infinity) :
            parseInt( moment( d, format, locale, true ).format( 'x' ), 10 );
    };
};
```
You could reverse the order of the formats and call $.fn.dataTable.moment(["MM/DD/YYYY", "DD/MM/YYYY"]). Now the 2nd table would be fine, but the same problem would happen in the 1st table.
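You can observe this order dependence with moment alone, since the plugin simply forwards whatever you pass it to moment. A console sketch, assuming Moment.js is loaded; when several formats in the array produce a valid date, moment's documented tie-breaking prefers the one listed first:

```js
// The ambiguous input changes meaning when the array is reversed.
moment("01/04/2019", ["DD/MM/YYYY", "MM/DD/YYYY"], true).format("YYYY-MM-DD"); // "2019-04-01"
moment("01/04/2019", ["MM/DD/YYYY", "DD/MM/YYYY"], true).format("YYYY-MM-DD"); // "2019-01-04"
```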
Ok, what then?
If the backend happens to already contain UTC time stamps, then I'd just send those time stamps to the front end instead of sending localized values. At the stage of rendering a cell that contains a date, I'd have the front end convert the UTC value to a format that makes sense to the user. DataTables would then sort on the basis of the UTC values, which can be compared without ambiguity.
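Here is a minimal sketch of that approach using DataTables' orthogonal data feature. It assumes the server sends an ISO 8601 UTC string in a hypothetical dateUtc field, that Moment.js is loaded, and that #example is your table:

```js
$('#example').DataTable({
    columns: [
        {
            data: 'dateUtc', // hypothetical field, e.g. "2019-04-01T13:45:00Z"
            render: function (data, type) {
                // For display, convert the UTC value to the user's local,
                // human-readable representation.
                if (type === 'display') {
                    return moment.utc(data).local().format('L');
                }
                // For sorting, filtering and type detection, return the raw
                // ISO string, which orders correctly as plain text.
                return data;
            }
        }
        // ... other columns
    ]
});
```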
If the backend does not store its dates as UTC time stamps, I'd redesign it so that it does, and then do what I described in the previous paragraph.
Otherwise, there may be a way to perform, in the front end, a statistical analysis of your table before DataTables tries to render and order it. You could figure out which format is actually in use and then feed that format to DataTables. However, this still seems brittle to me. If the table uses server-side processing, only a small portion of the data is available at a time; if you base the analysis on the first response from the server alone, a later response covering a later portion of the table may disprove the initial assumption. Moreover, there could be cases where all the dates in a table are ambiguous. On a large and unfiltered dataset this may be unlikely, but as soon as users are allowed to filter the dataset down to a subset, they may filter it in a way that leaves every date in that subset ambiguous. I would not deploy an application on the hope that this never happens.
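For illustration only, here is what such a detection pass could look like. detectDateFormat is a hypothetical helper (not part of DataTables or the plugin), and it assumes the date strings of the column are already available client-side, which, as noted above, is not guaranteed under server-side processing:

```js
// Return the single candidate format that every value matches strictly,
// or null when zero or several formats survive -- i.e. the data is
// ambiguous and guessing would be unsafe.
function detectDateFormat(values, candidateFormats) {
    var surviving = candidateFormats.filter(function (format) {
        return values.every(function (value) {
            return moment(value, format, true).isValid();
        });
    });
    return surviving.length === 1 ? surviving[0] : null;
}

// Usage: columnValues stands for the date strings gathered from the column.
var format = detectDateFormat(columnValues, ['DD/MM/YYYY', 'MM/DD/YYYY']);
if (format !== null) {
    $.fn.dataTable.moment(format);
}
```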