I am getting the same MD5 hash value on iOS and Windows, but in the case of Java I am getting a different value.
iOS code for MD5 hashing:
- (NSString *)md5HexDigest:(NSString *)input
{
    // NSUTF16LittleEndianStringEncoding produces UTF-16LE bytes with no BOM.
    NSData *data = [input dataUsingEncoding:NSUTF16LittleEndianStringEncoding];
    unsigned char result[CC_MD5_DIGEST_LENGTH];
    CC_MD5([data bytes], (CC_LONG)[data length], result);
    NSMutableString *ret = [NSMutableString stringWithCapacity:CC_MD5_DIGEST_LENGTH * 2];
    for (int i = 0; i < CC_MD5_DIGEST_LENGTH; i++) {
        [ret appendFormat:@"%02x", result[i]];
    }
    return ret;
}
Windows (C#) code for MD5 hashing:
private static string GetMD5(string text)
{
    // UnicodeEncoding is UTF-16LE, and GetBytes() does not emit a BOM.
    UnicodeEncoding UE = new UnicodeEncoding();
    byte[] hashValue;
    byte[] message = UE.GetBytes(text);
    MD5 hashString = new MD5CryptoServiceProvider();
    string hex = "";
    hashValue = hashString.ComputeHash(message);
    foreach (byte x in hashValue)
    {
        hex += String.Format("{0:x2}", x);
    }
    return hex;
}
Java code for MD5 hashing (tried with UTF-8, UTF-16, and UTF-32, but the result does not match the iOS and Windows output):
public String MD5(String md5) {
    try {
        String dat1 = md5.trim();
        java.security.MessageDigest md = java.security.MessageDigest.getInstance("MD5");
        byte[] array = md.digest(dat1.getBytes("UTF-16"));
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < array.length; ++i) {
            sb.append(Integer.toHexString((array[i] & 0xFF) | 0x100).substring(1, 3));
        }
        System.out.println("Digest (in hex format):: " + sb.toString());
        return sb.toString();
    } catch (java.security.NoSuchAlgorithmException e) {
    } catch (UnsupportedEncodingException e) {
    }
    return null;
}
Thanks.
Answer: In the Java code, use "UTF-16LE" instead of "UTF-16". Java's "UTF-16" charset puts a BOM (0xFEFF or 0xFFFE) at the beginning of the encoded bytes to specify the endianness, so the bytes being hashed differ from the other two platforms. Using "UTF-16BE" or "UTF-16LE" explicitly leaves the BOM out, and UTF-16LE matches what the iOS code (NSUTF16LittleEndianStringEncoding) and the C# code (UnicodeEncoding) actually produce.
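A minimal sketch of the corrected Java method, assuming the goal is to match the iOS and C# output exactly (the class name Md5Util and method name md5Utf16Le are only illustrative):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Util {

    public static String md5Utf16Le(String text) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            // StandardCharsets.UTF_16LE encodes little-endian with no BOM,
            // matching NSUTF16LittleEndianStringEncoding and UnicodeEncoding.
            // Plain "UTF-16" would prepend 0xFE 0xFF and change the digest.
            byte[] digest = md.digest(text.getBytes(StandardCharsets.UTF_16LE));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b & 0xFF));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // Every Java platform is required to provide MD5, so this should not happen.
            throw new IllegalStateException(e);
        }
    }
}

For example, "A".getBytes("UTF-16") returns the four bytes FE FF 00 41 (BOM plus big-endian 'A'), while "A".getBytes(StandardCharsets.UTF_16LE) returns just 41 00, which is what the iOS and C# implementations hash. Note also that the original Java method calls trim() on the input while the iOS and C# versions do not; if an input ever has leading or trailing whitespace, that alone will change the digest.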