The validity of your timings depends on how you have implemented the algorithms and how they would be used in "the real world". If you have an application for the algorithms and it will be implemented in Matlab, then there is nothing wrong with your timings, because you are timing the algorithms as they will actually be used. However, if you plan to re-code the algorithms in a lower-level language like C++, you might get significantly different results.
MathWorks has spent a lot of time optimizing the toolboxes and the basic operations in Matlab, so things like matrix multiply, matrix inverse, FFT, SVD, etc. are often as fast as a good C++ implementation. You do not necessarily know which toolbox routines are optimized, though. If your algorithm relies only on highly optimized routines and the competing algorithms rely on less optimized ones, your algorithm may appear better simply because the underlying implementation is better.
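As a rough sketch of what that gap looks like (the variable names are mine and the absolute numbers depend on your machine and Matlab version), compare the built-in matrix multiply, which Matlab hands off to an optimized BLAS library, against a hand-written triple loop that computes the same product:
A = rand(300);
B = rand(300);
tic;
C1 = A*B;                    % built-in multiply, backed by an optimized BLAS library
toc
C2 = zeros(300);
tic;
for i = 1:300
    for j = 1:300
        for k = 1:300
            C2(i,j) = C2(i,j) + A(i,k)*B(k,j);
        end
    end
end
toc
Both versions compute the same product with the same O(n^3) algorithm; the difference is purely in the quality of the implementation (and, in Matlab, the loop version also pays the interpreter overhead described next).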
The other reason there may be differences is that Matlab is an interpreted language. When your program has a loop, the interpreter has to figure out what the code is doing each time through the loop. In contrast, the matrix operations have been compiled ahead of time to machine code and do not have the interpreter overhead. For example, if I run:
start = cputime;
x = zeros(1000,1000);
x = x + 1;        % vectorized: the whole matrix is updated in compiled code
stop = cputime;
stop - start
On my computer, I get 0.02297 seconds. If I run the equivalent version using a loop:
start = cputime;
x = zeros(1000,1000);
for i = 1:1000
    for j = 1:1000
        x(i,j) = x(i,j) + 1;   % each iteration goes through the interpreter
    end
end
stop = cputime;
stop - start
I get 18.175 seconds. (The method mentioned by @Jonas above gives better timings when you need high precision, but here the difference spans enough orders of magnitude that this simple method works well enough.)
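If you do need higher-resolution measurements, a minimal sketch using Matlab's standard tools (tic/toc for elapsed wall-clock time, and timeit in newer releases) might look like this:
tStart = tic;                    % tic/toc measures elapsed wall-clock time
x = zeros(1000,1000);
x = x + 1;
tElapsed = toc(tStart)

f = @() zeros(1000,1000) + 1;    % wrap the operation in a function handle
timeit(f)                        % runs f several times and returns a representative time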
If the competing algorithms do a lot of work inside loops, and yours relies more heavily on built-in functions, your algorithm could be beating the competitors simply because it has less interpreter overhead.
If you plan to use the algorithms only inside Matlab, and interpreter overhead cannot be eliminated from the competitors, it is valid to claim your algorithm is better -- at least for Matlab implementations. If you want to claim a more general result, at the very least you have to show that the interpreter is not the reason for the performance differences. Implementing all the algorithms in a language like C++ removes the interpreter overhead. To have a fair comparison, you also have to make sure you use fast implementations of all the underlying algorithms (e.g., FFT, SVD, matrix multiply). Fortunately, optimized libraries are available for most of the common algorithms in a number of different languages.
Of course, if you can show that the asymptotic complexity of your algorithm is better (in O() notation), that is an indication that it might be better across a wider variety of implementations, though constant factors turn out to be important in real implementations.
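As a toy illustration of why the constants matter (the cost models below are made up for illustration, not measurements of real algorithms): an O(n log n) method with a large constant can still lose to an O(n^2) method with a small constant at the problem sizes you actually run.
n = 10.^(1:6);                   % problem sizes from 10 to 1,000,000
cost_nlogn = 500 * n .* log2(n); % better asymptotics, large hypothetical constant
cost_n2    = 0.05 * n.^2;        % worse asymptotics, small hypothetical constant
[n; cost_nlogn; cost_n2]         % the n^2 model stays cheaper until n reaches a few hundred thousand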