BUT: Can the encoder help to eliminate effects such as Doppler, frequency offset, ...?
No, since the encoder doesn't "see" them. The overall system, including encoder and decoder, can be optimized to deal with such effects.
But honestly, something that corrects a Doppler offset would be called "frequency synchronization", and it works on the digitized signal, not on codewords, so it doesn't have much to do with channel coding.
There are techniques that you find both in decoders and, for example, in equalizers, but those are really shared methods, not "channel coding". My rule of thumb: if it's something in a receiver that optimizes the signal before it's mapped to information, it's digital signal processing; afterwards, it might be channel or source coding.
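To make the "DSP before decoding" side concrete, here is a minimal, hypothetical sketch of frequency synchronization: estimating a carrier offset (e.g. from Doppler) via an FFT peak and derotating the digitized signal. All signal parameters (`fs`, `f_off`, signal length) are made up for illustration; this happens entirely before any channel decoding.

```python
import numpy as np

fs = 1e4                    # sample rate in Hz (assumed for this sketch)
f_off = 123.0               # the unknown carrier offset we want to remove
n = np.arange(4096)
rx = np.exp(2j * np.pi * f_off * n / fs)   # received tone with that offset

# Coarse estimate: pick the FFT bin with the largest magnitude
spectrum = np.fft.fft(rx)
freqs = np.fft.fftfreq(len(rx), d=1 / fs)
f_est = freqs[np.argmax(np.abs(spectrum))]

# Derotate the signal by the estimated offset ("frequency synchronization")
corrected = rx * np.exp(-2j * np.pi * f_est * n / fs)
```

The estimate is only accurate to within one FFT bin (`fs / len(rx)` Hz here); real receivers typically follow such a coarse stage with a fine tracking loop.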
When we talk about improving an encoder in practice, what does that mean? What should be improved?
This is a bit too broad, because nothing is ever optimal for all use cases. Example: one person's block code is "optimal" if it's as large as possible, to come as close as possible to the theoretical limits on coding gain. Another person needs to decode something within a very short time and can't wait for a gigantic codeword to be completely received. These two people will have very different ideas of what an "optimal" code is.
Why does an interleaver have a complicated implementation in practice? How can the complexity be reduced?
Interleavers are typically among the least complex parts of a receiver or transmitter, so I'm not really sure what you're referring to.
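To illustrate just how simple an interleaver can be, here's a sketch of a plain block interleaver: write symbols into a matrix row by row, read them out column by column. The function names and dimensions are mine, purely for illustration; the whole thing is just an index permutation.

```python
# Block interleaver: write row-wise into a rows x cols grid, read column-wise.
def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    # The inverse permutation is the same operation with rows/cols swapped.
    return interleave(bits, cols, rows)

data = list(range(12))
tx = interleave(data, 3, 4)          # spreads adjacent symbols apart
assert deinterleave(tx, 3, 4) == data
```

In hardware this is often just an address counter on a small memory, which is why interleaving itself is rarely a complexity bottleneck (the memory and latency of very large interleavers are a different matter).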
NB: there are often things that can be tackled with knowledge the decoder has; any decision-directed synchronization algorithm is testament to that. In some cases, simple types of ISI might be corrected by a convolutional decoder, but that's really a special case that you can't generally rely on.
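As a hedged sketch of the decision-directed idea: a first-order phase tracker for BPSK that uses the receiver's own hard symbol decisions as its reference, so the "known data" driving the synchronizer comes from the decisions themselves. The loop gain `mu` and the scenario (a constant 0.3 rad phase offset) are assumptions for this toy example.

```python
import numpy as np

def dd_phase_track(rx, mu=0.1):
    """Decision-directed first-order phase tracking for BPSK symbols."""
    phase = 0.0
    out = []
    for sample in rx:
        derot = sample * np.exp(-1j * phase)          # apply current estimate
        decision = 1.0 if derot.real >= 0 else -1.0   # hard BPSK decision
        err = np.angle(derot * decision)              # phase error vs. decision
        phase += mu * err                             # loop update
        out.append(derot)
    return np.array(out), phase

symbols = np.array([1, -1, 1, 1, -1] * 40, dtype=complex)
rx = symbols * np.exp(1j * 0.3)       # channel adds a 0.3 rad phase offset
corrected, final_phase = dd_phase_track(rx)
```

As long as the decisions are mostly correct, the loop converges to the true phase offset; when decisions start failing (low SNR), decision-directed schemes degrade, which is exactly why they only help in some cases.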
Reed-Solomon? It has nothing to do with Reed-Solomon codes.