(Both of my solutions work only if you don't care about changing palettes on the fly using shaders.)
You can use any texture format and do just a simple computation in a shader. The trick is that you have more color information than you need, so you simply discard the information you don't want.
8-bit color is in the format RRRGGGBB, which gives you 8 shades of red, 8 shades of green, and 4 shades of blue.
This solution will work for any RGB(A) color format textures.
sampler2D YourTexture;

float4 mainPS(float2 uvCoord : TEXCOORD0) : COLOR0
{
    // 8 levels -> divide by 7, 4 levels -> divide by 3,
    // so the top level maps back to exactly 1.0
    const float oneOver7 = 1.0 / 7.0;
    const float oneOver3 = 1.0 / 3.0;
    float4 color = tex2D(YourTexture, uvCoord);
    // Quantize: 3 bits for red and green, 2 bits for blue
    float R = floor(color.r * 7.99) * oneOver7;
    float G = floor(color.g * 7.99) * oneOver7;
    float B = floor(color.b * 3.99) * oneOver3;
    return float4(R, G, B, 1);
}
Note: I wrote that off the top of my head, but I'm fairly sure it will compile and work for you.
Another possibility would be to use the D3DFMT_R3G3B2 texture format, which is actually the same as 8-bit graphics. When you put data into this texture, you can use a simple bit operation per byte.
tex[index] = (R & 0xE0) | ((G & 0xE0) >> 3) | (B >> 6);