A typical installation of numpy will be dynamically linked against a BLAS library, which provides routines for matrix-matrix and matrix-vector multiplication. For example, when you use np.dot() on a pair of float64 arrays, numpy will call the BLAS dgemm routine in the background. Although these library functions run on the CPU rather than the GPU, they are often multithreaded, and are very finely tuned for performance. A good BLAS implementation, such as MKL or OpenBLAS, will probably be hard to beat in terms of performance, even on the GPU*.
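If you want to confirm which BLAS your numpy build is linked against, `np.show_config()` will print the build information (the exact output format varies between numpy versions). As a minimal sketch, a float64 product like the one below is the kind of call that gets dispatched to dgemm:

```python
import numpy as np

# Print the build/link configuration -- the BLAS/LAPACK sections show
# whether numpy was built against MKL, OpenBLAS, etc.
np.show_config()

# A float64 matrix product is dispatched to the BLAS dgemm routine
a = np.random.rand(2000, 2000)   # float64 by default
b = np.random.rand(2000, 2000)
c = np.dot(a, b)
```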
However, BLAS only supports floating point types (real and complex). If you call np.dot() on integer arrays, numpy falls back to a very simple internal C implementation, which is single-threaded and much slower than a BLAS dot on two floating point arrays.
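You can see the difference with a quick timing sketch along these lines (the absolute numbers will obviously depend on your machine and BLAS, and the array sizes here are just placeholders):

```python
import numpy as np
from timeit import timeit

n = 1000
a_f8 = np.random.rand(n, n)                                  # float64 -> BLAS dgemm
a_i8 = np.random.randint(0, 100, size=(n, n)).astype(np.int64)  # int64 -> naive fallback loop

print("float64:", timeit(lambda: np.dot(a_f8, a_f8), number=10))
print("int64:  ", timeit(lambda: np.dot(a_i8, a_i8), number=10))
```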
Without knowing more about how you conducted those benchmarks, I would bet that a plain call to np.dot() would also comfortably beat your other three methods for float32, complex64 and complex128 arrays, which are the other three types that BLAS supports.
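For reference, the same kind of timing loop over the BLAS-supported dtypes would look something like this (again, only a sketch; the complex arrays here have zero imaginary part, which doesn't matter for timing purposes):

```python
import numpy as np
from timeit import timeit

n = 1000
# sgemm / dgemm / cgemm / zgemm respectively
for dtype in (np.float32, np.float64, np.complex64, np.complex128):
    a = np.random.rand(n, n).astype(dtype)
    t = timeit(lambda: np.dot(a, a), number=10)
    print(dtype.__name__, t)
```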
* One possible way to beat standard BLAS would be to use cuBLAS, which is a BLAS implementation that will run on an NVIDIA GPU. The scikit-cuda library seems to provide Python bindings for it, although I've never used it myself.
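For what it's worth, based on the scikit-cuda documentation the usage would presumably look something like the following. Treat it as an untested sketch, since as I said I haven't used the library myself:

```python
import numpy as np
import pycuda.autoinit                 # initializes a CUDA context
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg

linalg.init()

a = np.random.rand(2000, 2000)         # float64
b = np.random.rand(2000, 2000)

# Transfer to the GPU, multiply with cuBLAS (dgemm), copy the result back
a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)
c_gpu = linalg.dot(a_gpu, b_gpu)
c = c_gpu.get()
```

Bear in mind that the host-to-device and device-to-host transfers aren't free, so for small arrays the copying overhead can easily outweigh any speedup from doing the multiplication on the GPU.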