You call the function with n and i.
As long as i*i is smaller than n * 10000, you increase i (by calling the function again with i+1).
Once i*i is greater than or equal to n * 10000, you print i / 100.0.
E.g. you call the function with f1(1,1):
1*10000 >= 1*1 --> f1(1,2);
1*10000 >= 2*2 --> f1(1,3);
1*10000 >= 3*3 --> f1(1,4);
....
1*10000 >= 99*99 --> f1(1,100);
1*10000 <= 100*100 --> printf("%f",i/100.0); which gives: 1
EDIT: another example, where you look for the square root of 8: f1(8,1);
8*10000 >= 1*1 --> f1(8,2);
8*10000 >= 2*2 --> f1(8,3);
8*10000 >= 3*3 --> f1(8,4);
....
8*10000 >= 282*282 --> f1(8,283);
8*10000 <= 283*283 --> printf("%f",i/100.0); which gives: 2.83
and 2.83 * 2.83 = 8.0089
EDIT: you may ask why n*10000; it's because the scale factor determines how small the approximation error gets. E.g. if you use n*100 and i/10 in the sqrt-of-8 example, you get
8*100 <= 29*29 --> 2.9
2.9 * 2.9 = 8.41, which is not as good as the 2.83 from the other example.