I want to convert a value from {integer} to f32:
struct Vector3 {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

for x in -5..5 {
    for y in -5..5 {
        for z in -5..5 {
            let foo: Vector3 = Vector3 { x: x, y: y, z: z };
            // do stuff with foo
        }
    }
}
The compiler chokes on this with a type mismatch error (expected f32, found {integer}). Unfortunately, I cannot simply change Vector3, since I'm feeding a C API with it.
Is there any easy and concise way I can convert x, y and z from {integer} to f32?
I guess there is no built-in conversion from i32 (or {integer}) to f32 because it could be lossy in certain situations. However, in my case the range I'm using is so small that this wouldn't be an issue, so I would like to tell the compiler to convert the value anyway.
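For what it's worth, the standard library only provides From conversions into f32 where they are lossless (e.g. from i16 or u8); an i32 can exceed f32's 24 bits of mantissa, so only the explicit as cast is available there. A small illustration of the difference (16_777_217 is just my example of the smallest positive integer f32 cannot represent exactly):

fn main() {
    let small: i16 = -5;
    let a = f32::from(small);  // lossless conversion, so From is implemented
    let big: i32 = 16_777_217; // 2^24 + 1 does not fit f32's mantissa exactly
    let b = big as f32;        // lossy cast: rounds to 16_777_216.0
    println!("{} {}", a, b);
}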
Interestingly, the following works:
for x in -5..5 {
    let tmp: i32 = x;
    let foo: f32 = tmp as f32;
}
I'm using a lot more than just one foo and one x, so this turns hideous really fast.
Also, this works:
for x in -5i32..5i32 {
    let foo: f32 = x as f32;
    // do stuff with foo here
}
But with my use case this turns into:
for x in -5i32..5i32 {
    for y in -5i32..5i32 {
        for z in -5i32..5i32 {
            let foo: Vector3 = Vector3 {
                x: x as f32,
                y: y as f32,
                z: z as f32,
            };
            // do stuff with foo
        }
    }
}
I think this is pretty unreadable and an unreasonable amount of cruft for a simple conversion.
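The most concise thing I've come up with so far is to hide the casts in a helper constructor. This is just a sketch, assuming Vector3 lives in my own crate so I can add an impl block (methods don't change the struct's layout, so the C API is unaffected), and from_ints is a made-up name:

impl Vector3 {
    // Hypothetical helper that centralizes the integer-to-float casts.
    fn from_ints(x: i32, y: i32, z: i32) -> Vector3 {
        Vector3 {
            x: x as f32,
            y: y as f32,
            z: z as f32,
        }
    }
}

for x in -5..5 {
    for y in -5..5 {
        for z in -5..5 {
            let foo = Vector3::from_ints(x, y, z);
            // do stuff with foo
        }
    }
}

That at least keeps the loop body readable, but it is still boilerplate I would have to repeat for every such struct.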
What am I missing here?
Comments:

- You shouldn't need the i32 annotations there; it should default to i32. Did you try compiling without them (but with the explicit cast to f32)?
- Casting to f32 is the reasonable thing to do (or write some custom iterator that allows for float types of start, end and step - have fun defining the exact semantics :) ).
- The two-step conversion seems unnecessary; you should be able to directly use x as f32.
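For the record, the first comment checks out: an unsuffixed -5..5 falls back to i32 by default, so the cast compiles with no annotations or temporary binding at all:

fn main() {
    for x in -5..5 {             // {integer} defaults to i32 here
        let foo: f32 = x as f32; // cast works directly on the defaulted i32
        println!("{}", foo);
    }
}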