I'm decompiling a game's shaders with RenderDoc in an effort to understand them, and I encountered this segment:
57: mul [precise(xyz)] r8.xyz, r5.yzxy, r6.zxyz
58: add [precise(xyz)] r7.xyz, r7.xyzx, -r8.xyzx
59: ishl [precise(x)] r0.x, r4.z, l(23)
60: iadd [precise(x)] r0.x, r0.x, l(-20769187434139310000000000000000000.000000)
61: mov [precise(xz)] r4.xz, r2.zzwz
62: and [precise(xyz)] r2.xyz, r4.xyzx, l(0x0000ffff, 0x0000ffff, 0x0000ffff, 0)
63: utof [precise(xyz)] r2.xyz, r2.xyzx
And my first thought is: line 60 can't be right, can it? Not only is it a floating-point literal in an iadd instruction, the value itself is far too large to fit in a 32-bit int.
I dug into the actual bytecode and found the raw immediate 0xF8800000. Those bits do indeed produce that number when read as an IEEE 754 float, but that doesn't help me understand why they're being read as a float in the first place. Lines 59 and 62 show the decompiler knows how to print operands that aren't floats, which makes it even weirder.
So I guess what I'm asking is: what should this turn into in HLSL? Do I need something like asint() or asuint()? Is the negative sign part of the number or not? Every such combination I've tried so far leaves r0.x as something like 4.16914E+09. I don't know whether that's the right answer, but I do know that leaving the constant alone leads to later variables becoming NaNs and INFs, which is almost certainly not the right answer.