The vertex shader takes input data from vertex buffers (which is typically in model space), transforms that input, and produces output data in clip space. Later, after the vertex shader stage and clipping, the fixed-function perspective divide (by the output w coordinate) occurs.
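As a minimal sketch of that default path, assuming a single combined model-view-projection matrix in a constant buffer (the buffer and struct names here are made up for illustration):

```hlsl
cbuffer PerObject : register(b0)       // hypothetical constant buffer layout
{
    float4x4 modelViewProjection;      // model -> world -> view -> projection, combined
};

struct VSInput
{
    float3 positionModel : POSITION;   // model-space position from the vertex buffer
};

struct VSOutput
{
    float4 positionClip : SV_Position; // clip-space position consumed by the rasterizer
};

VSOutput MainVS(VSInput input)
{
    VSOutput output;
    // Transform from model space to clip space. The perspective divide by w
    // happens later in fixed-function hardware, not here.
    output.positionClip = mul(float4(input.positionModel, 1.0f), modelViewProjection);
    return output;
}
```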
The pixel shader deals with fragments, not vertices, but when you do make use of coordinates (via SV_Position, for example), those coordinates are in screen space: x and y are in pixels, offset by 0.5 so they land on pixel centers, while z holds the depth value in the zero-to-one range.
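For example, a pixel shader that reads SV_Position can turn those pixel coordinates into normalized zero-to-one coordinates by dividing by the viewport size (the viewport dimensions here are assumed to come from a constant buffer you supply yourself):

```hlsl
cbuffer PerFrame : register(b1)        // hypothetical constant buffer
{
    float2 viewportSize;               // viewport width and height in pixels
};

float4 MainPS(float4 positionScreen : SV_Position) : SV_Target
{
    // positionScreen.xy is in pixels, offset by 0.5 to land on pixel centers;
    // dividing by the viewport size yields normalized [0, 1] coordinates.
    float2 uv = positionScreen.xy / viewportSize;
    return float4(uv, 0.0f, 1.0f);     // visualize the coordinates as a color gradient
}
```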
The hull and domain shaders conventionally operate on input data and produce output data in the same space; because the input control points are generally in model space, the tessellated output is also in model space. The one caveat is that when the domain shader is the last stage before rasterization, its SV_Position output must be in clip space just like a vertex shader's, so the final projection typically happens there.
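To illustrate, here is a sketch of a domain shader for a triangle patch (struct and constant-buffer names are assumptions, and it assumes no geometry shader follows): the interpolation itself works on model-space control points, and the transform to clip space is applied only at the end.

```hlsl
// Patch-constant data produced by the hull shader's constant function.
struct PatchConstants
{
    float edgeTess[3] : SV_TessFactor;
    float insideTess  : SV_InsideTessFactor;
};

struct ControlPoint
{
    float3 positionModel : POSITION;   // model space, as described above
};

struct DomainOutput
{
    float4 positionClip : SV_Position; // clip space, since rasterization comes next
};

cbuffer PerObject : register(b0)       // hypothetical constant buffer
{
    float4x4 modelViewProjection;
};

[domain("tri")]
DomainOutput MainDS(PatchConstants constants,
                    float3 barycentric : SV_DomainLocation,
                    const OutputPatch<ControlPoint, 3> patch)
{
    // Interpolate the model-space control points for this tessellated vertex.
    float3 positionModel = patch[0].positionModel * barycentric.x
                         + patch[1].positionModel * barycentric.y
                         + patch[2].positionModel * barycentric.z;

    DomainOutput output;
    // Final transform to clip space happens here, since no geometry shader follows.
    output.positionClip = mul(float4(positionModel, 1.0f), modelViewProjection);
    return output;
}
```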
That said, these are just canonical defaults. Since you control both the input data and how that data is interpreted, you can place it in any coordinate frame that is useful to you. For example, when faking 2D graphics in a 3D API, it can be convenient to put pixel-coordinate data directly into the vertex buffers and use a very simple, almost no-op vertex shader, as in the sketch below.
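A sketch of such a shader, assuming the viewport size is supplied in a constant buffer: it takes pixel coordinates straight from the vertex buffer and only remaps them to clip space.

```hlsl
cbuffer PerFrame : register(b0)        // hypothetical constant buffer
{
    float2 viewportSize;               // viewport width and height in pixels
};

struct VSOutput
{
    float4 positionClip : SV_Position;
};

VSOutput MainVS(float2 positionPixels : POSITION)
{
    // Remap pixel coordinates ([0, width] x [0, height]) to clip space
    // ([-1, 1] x [-1, 1]), flipping y so it increases downward on screen.
    float2 ndc = positionPixels / viewportSize * 2.0f - 1.0f;

    VSOutput output;
    output.positionClip = float4(ndc.x, -ndc.y, 0.0f, 1.0f); // w = 1, so the divide is a no-op
    return output;
}
```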
There are some aspects of the pipeline outside your programmable control, though. For example, SV_Position in the pixel shader will always be in screen space, with the 0.5 offset. The biggest thing to worry about is actually the output of the vertex shader stage: the rest of the pipeline will assume that output is in clip space, perform clipping, and perform the perspective division by w. Consequently you may need to set w accordingly (often to 1.0) if you want to "avoid" this division.
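As an illustration of "avoiding" the division, here is a sketch of a vertex shader (a common full-screen-pass trick; no vertex buffer needed) that emits positions already in clip space with w fixed at 1.0, so the divide leaves them unchanged:

```hlsl
// Full-screen triangle generated from SV_VertexID alone.
float4 MainVS(uint vertexId : SV_VertexID) : SV_Position
{
    // Three vertices at (-1,-1), (3,-1) and (-1,3) in NDC cover the whole screen.
    float2 ndc = float2((vertexId << 1) & 2, vertexId & 2) * 2.0f - 1.0f;

    // w = 1.0, so the fixed-function divide by w is effectively a no-op.
    return float4(ndc, 0.0f, 1.0f);
}
```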