In one simple word: No.
What wastes a lot of effort is switching between kernel space and user space: each switch does a fair amount of work just to get to the point where the real operation can run. The fewer switches an operation needs, the more efficient it should be.
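If you want to see that split for yourself, getrusage() reports the user and system CPU time a process has accumulated (the same numbers the time command prints as "user" and "sys"). A minimal sketch, with an arbitrary busy loop just so there is something to measure:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Burn some user-space CPU so there is something to report. */
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 100000000UL; i++)
            sum += i;

        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0) {
            /* ru_utime: CPU time spent in user space; ru_stime: in kernel space. */
            printf("user   %ld.%06ld s\n",
                   (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
            printf("system %ld.%06ld s\n",
                   (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
        }
        return 0;
    }

Here nearly all the time ends up in the user column, simply because the program never asks the kernel to do anything after it starts.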
There are operations that are done entirely in kernel space (and there is no safe way to bypass that). For those, spending most of the time in kernel space is the most efficient way to execute them.
There are other operations that must run in user space because the kernel has no service/function that implements them. For those, the more of the time that is spent in user space, the more efficient the operation is.
But someone might re-implement an efficient kernel service in user space with a not-so-efficient algorithm. That increases the user time, yet the result is less efficient than the same service executed in kernel space.
And some other developer might call the kernel to read one byte at a time (switching for every single byte) instead of making the equivalent call to read one megabyte at a time (assuming there is an equivalent call that takes a block instead of a byte).
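A contrived sketch of that difference (the file is whatever you pass on the command line; the 1 MiB buffer size is an arbitrary choice): both functions read the same data, but the first pays the switching cost once per byte and the second once per megabyte.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* One kernel/user switch per byte: millions of switches for a large file. */
    static long count_slow(int fd) {
        char c;
        long total = 0;
        while (read(fd, &c, 1) == 1)
            total++;
        return total;
    }

    /* One kernel/user switch per megabyte: same result, far fewer switches. */
    static long count_fast(int fd) {
        static char buf[1024 * 1024];
        ssize_t got;
        long total = 0;
        while ((got = read(fd, buf, sizeof buf)) > 0)
            total += got;
        return total;
    }

    int main(int argc, char **argv) {
        int fd = open(argc > 1 ? argv[1] : "/dev/null", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        printf("slow: %ld bytes\n", count_slow(fd));
        lseek(fd, 0, SEEK_SET);          /* rewind and read it all again */
        printf("fast: %ld bytes\n", count_fast(fd));
        close(fd);
        return 0;
    }

Run it under time with a large file and watch the "sys" column: the byte-at-a-time pass spends far more time in the kernel for exactly the same amount of useful work.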
And, in the end, most real work is some mix of kernel and user operations. To read a disk block, for example, the kernel should supply the function, and the call should be "fire and forget" until the memory buffer is filled with the contents of the disk block. Accessing process memory (a program array, say) should need no kernel call at all.
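To make that mix concrete, here is a minimal sketch (the 4 KiB buffer size is an arbitrary choice): one read() asks the kernel to fill the buffer, and everything after that, summing the array, runs in plain process memory with no further kernel involvement.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        int fd = open(argc > 1 ? argv[1] : "/dev/null", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Kernel work: one read() call fills the buffer from the file. */
        unsigned char buf[4096];
        ssize_t n = read(fd, buf, sizeof buf);
        close(fd);

        /* User-space work: walking the buffer needs no kernel call at all. */
        unsigned long sum = 0;
        for (ssize_t i = 0; i < n; i++)
            sum += buf[i];
        printf("checksum of first %zd bytes: %lu\n", n, sum);
        return 0;
    }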
So there is no simple way to read efficiency off the user/system time split.