Sorting algorithms like insertion sort, quicksort, and mergesort are called comparison sorts because they order elements based purely on how those elements compare to one another, not on their absolute magnitudes. From the perspective of quicksort, mergesort, or insertion sort, the arrays [0, 1, 0, 1, 0] and [0, 100000, 0, 100000, 0] are completely identical, since the algorithm has no way of knowing that 100000 is "much bigger than" 1. The total number of operations performed to sort those arrays will be identical. In fact, if you sort each array with those algorithms and watch the elements move around, you'll find that exactly the same moves get performed.
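To make this concrete, here's a quick sketch (my own, not anything standard) of an insertion sort instrumented to record every swap it performs, by index only. Running it on both arrays shows the operation sequences are identical:

```python
def insertion_sort_trace(arr):
    """Sort a copy of arr, logging each swap by the indices involved."""
    a = list(arr)
    trace = []
    for i in range(1, len(a)):
        j = i
        # Only the relative order of a[j-1] and a[j] matters here;
        # the actual magnitudes never enter into the decision.
        while j > 0 and a[j - 1] > a[j]:
            trace.append(("swap", j - 1, j))
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, trace

_, trace_small = insertion_sort_trace([0, 1, 0, 1, 0])
_, trace_big = insertion_sort_trace([0, 100000, 0, 100000, 0])
print(trace_small == trace_big)  # → True: the exact same swaps, in the same order
```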
If you're sorting integers that fit into a single machine word, the cost of a move is independent of the numeric value stored in that word, and the cost of comparing two elements is likely the same as well. I would therefore predict that you'd see absolutely no difference in the time required to sort these arrays with those algorithms.
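You can test that prediction yourself. Here's a rough timing sketch using Python's built-in sorted (Timsort, a comparison sort); the specific repeat counts and array sizes are arbitrary choices of mine, and exact numbers will vary by machine, but the two times should come out essentially the same:

```python
import timeit

# Same relative order in both arrays, very different magnitudes.
small_values = [0, 1, 0, 1, 0] * 20000
large_values = [0, 100000, 0, 100000, 0] * 20000

t_small = timeit.timeit(lambda: sorted(small_values), number=20)
t_large = timeit.timeit(lambda: sorted(large_values), number=20)
print(f"small values: {t_small:.4f}s, large values: {t_large:.4f}s")
```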
If there were any difference, it would mean that the processor you're using takes different amounts of time to compare or move numbers of different magnitudes. To the best of my knowledge, no real processor architecture does this.
Now, on the other hand, sorting algorithms like counting sort or radix sort are not comparison sorts and do depend on the magnitudes of the numbers you're dealing with: they work either one digit at a time or by distributing elements into an array whose size depends on the largest value in question. These algorithms could take longer to sort the array with the large values, and in that case you should see a difference between the runtimes, provided that the algorithm was implemented well.
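For example, here's a minimal counting sort sketch (my own simplified version, non-negative integers only). Notice that the size of the count array, and therefore the amount of work, is driven by the maximum value: [0, 100000, 0, 100000, 0] forces 100,001 slots, while [0, 1, 0, 1, 0] needs only 2:

```python
def counting_sort(arr):
    """Counting sort for non-negative integers: O(n + k), k = max value."""
    if not arr:
        return []
    k = max(arr)
    counts = [0] * (k + 1)  # 100001 slots for the large array, 2 for the small one
    for x in arr:
        counts[x] += 1
    out = []
    # This pass walks all k + 1 slots, so runtime grows with the magnitudes.
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([0, 100000, 0, 100000, 0]))  # → [0, 0, 0, 100000, 100000]
```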