I tried to detect edges in an image. For that, I wrote the following code:
import numpy as np
import cv2
import math

img = cv2.imread('download.jpg', 0)  # load as grayscale
img1 = img
x, y = img.shape
print(x, y)

i = 1
while i < (x - 2):
    j = 1
    while j < (y - 2):
        a = int(img[i, j + 1]) - int(img[i, j])  # horizontal difference
        b = int(img[i + 1, j]) - int(img[i, j])  # vertical difference
        c = (a ** 2) + (b ** 2)
        img1[i, j] = math.sqrt(c)  # gradient magnitude
        # img1[i, j] = a
        j += 1
    i += 1
i = 1

print("img")
print(img)
print("img1")
print(img1)
print(i, j)

cv2.imshow("image1", img1)
cv2.imshow("image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Nowhere in the code is img modified. Yet, at the end of the code, the pixel values of img have changed (they are the same as those of img1). Can anybody explain what I am missing?
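The likely culprit is the assignment `img1 = img`: NumPy arrays are not copied on assignment, so both names refer to the same underlying buffer, and every write to `img1` inside the loop also overwrites `img`. A minimal sketch of the behavior, using a small made-up array rather than the original image:

```python
import numpy as np

# Assignment does NOT copy the pixel data: both names refer to
# the same NumPy array, so writing through img1 also changes img.
img = np.array([[10, 20], [30, 40]], dtype=np.uint8)
img1 = img            # alias, not a copy
img1[0, 0] = 99
print(img[0, 0])      # 99 -- img changed too

# An explicit copy keeps the original intact:
img = np.array([[10, 20], [30, 40]], dtype=np.uint8)
img2 = img.copy()     # independent buffer
img2[0, 0] = 99
print(img[0, 0])      # 10 -- original untouched
```

In the posted code, replacing `img1 = img` with `img1 = img.copy()` should leave `img` unchanged.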
As an aside, the whole double loop can be vectorized: `dy, dx = np.gradient(img); magnitude = np.sqrt(dy**2 + dx**2)`.