Here is my approach: first read the image and convert it to grayscale, then use Otsu thresholding to extract the region of interest. After that, find the contours and take the largest by area, which should correspond to the object:
import cv2 # OpenCV
import numpy as np # for array operations
import matplotlib.pyplot as plt # for plotting

im = cv2.imread("example.png") # read the image
imGray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) # convert to grayscale
imGray = cv2.equalizeHist(imGray) # equalize hist, maybe not necessary
imOTSU = cv2.threshold(imGray, 0, 255, cv2.THRESH_OTSU+cv2.THRESH_BINARY_INV)[1] # get otsu with inner as positive
imOTSUOpen = cv2.morphologyEx(imOTSU, cv2.MORPH_OPEN, np.ones((3,3), np.uint8)) # open
contours, _ = cv2.findContours(imOTSUOpen, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # get contours
largestContour = max(contours, key = cv2.contourArea) # get the largest
# get X, Y coordinates
X, Y = largestContour.T
X = X[0]
Y = Y[0]
From here, I played around with quantiles and managed to identify the upper-left and lower-right corners, which is all you need for a bounding rectangle:
plt.figure() # new figure
plt.imshow(im) # show image
plt.axvline(min(X)) # draw vertical line at the minimum x
plt.axhline(max(Y)) # draw horizontal line at the maximum y
upperLeft = (int(np.quantile(X, 0.1)), int(np.quantile(Y, 0.25))) # get quantiles as corner
lowerRight = (int(np.quantile(X, 0.55)), int(np.quantile(Y, 0.9))) # get quantiles as corner
plt.scatter(upperLeft[0], upperLeft[1]) # scatter the corner
plt.scatter(lowerRight[0], lowerRight[1]) # scatter the corner
The plot looks like this:

And now that you have this, it's quite easy to draw the rectangle:
cv2.rectangle(im, (upperLeft[0], upperLeft[1]), (lowerRight[0], lowerRight[1]), (0, 255, 0), 2) # draw rectangle as green
cv2.imwrite("exampleContoured.png", im)

I would still check on Stack Overflow; there should be plenty of examples of protruding contours, and there are definitely more robust ways of solving this.
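For instance, one sturdier variant of the quantile trick is to trim the same symmetric percentile range from both coordinate arrays instead of hand-tuning each corner. This is just a sketch; the helper name and the 5/95 percentiles are my own assumptions, not part of the original code:

```python
import numpy as np

def trimmed_bbox(X, Y, lo=5, hi=95):
    # keep the central [lo, hi] percentile range of the contour points;
    # a thin protrusion contributes few points, so it falls outside the range
    x0, x1 = np.percentile(X, [lo, hi]).astype(int)
    y0, y1 = np.percentile(Y, [lo, hi]).astype(int)
    return (x0, y0), (x1, y1)

# synthetic contour coordinates: a dense blob plus a few protrusion points at x=300
X = np.array([10] * 50 + [100] * 50 + [300] * 2)
Y = np.array([20] * 50 + [80] * 50 + [50] * 2)
upperLeft, lowerRight = trimmed_bbox(X, Y)  # protrusion points are ignored
```

Whether 5/95 is the right range depends on how large the protrusion is relative to the object, so treat the defaults as a starting point.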
Edit 1: another variation, identifying the protrusion and subtracting it from the thresholded mask:
im = cv2.imread("example2.png") # read the image
imGray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) # convert to grayscale
imGray = cv2.equalizeHist(imGray) # equalize hist, maybe not necessary
mask = imGray < 10 # boolean mask of pixels with intensity below 10 (the protrusion)
mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, np.ones((5,5), np.uint8)) # open to remove speckle
mask = cv2.dilate(mask, np.ones((10,10), np.uint8)) # dilate to fully cover the protrusion
imOTSU = cv2.threshold(imGray, 0, 1, cv2.THRESH_OTSU+cv2.THRESH_BINARY_INV)[1] # get otsu with inner as positive
imOTSUOpen = cv2.morphologyEx(imOTSU, cv2.MORPH_OPEN, np.ones((3,3), np.uint8)) # open
imOTSUOpen = cv2.subtract(imOTSUOpen, mask) # subtract the protrusion mask; cv2.subtract saturates at 0, whereas plain uint8 numpy subtraction wraps around
contours, _ = cv2.findContours(imOTSUOpen, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # get contours
largestContour = max(contours, key = cv2.contourArea) # get the largest
imContoured = cv2.drawContours(im.copy(), [largestContour], -1, (0,255,0), 5) # draw the largest contour (drawContours expects a list of contours)
x, y, w, h = cv2.boundingRect(largestContour) # get bounding rect
cv2.rectangle(imContoured, (x, y), (x + w, y + h), (0, 0, 255), 2) # draw rectangle
cv2.imwrite("exampleContoured3.png", imContoured) # save image
The result looks like this, with the contour in green and the bounding rectangle in red:

About the code and how it works:
- get the threshold like before with Otsu
- get the protrusion mask: it is distinct from the other pixels due to its low value, and a bit of morphology (open, then dilate) emphasizes that region
- subtract the mask from the thresholded image, so the protrusion is removed before the contours are extracted
You get the following:

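The subtraction step can be illustrated on a small synthetic example (pure NumPy toy arrays rather than the actual images):

```python
import numpy as np

# toy "threshold" image: a 4x4 object with a 2x3 protrusion attached on the right
otsu = np.zeros((6, 8), int)
otsu[1:5, 1:5] = 1  # the object
otsu[2:4, 5:8] = 1  # the protrusion

# mask covering the protrusion (in the real code this comes from the
# low-intensity pixels plus the open/dilate morphology)
mask = np.zeros_like(otsu)
mask[2:4, 5:8] = 1

# with int arrays, clipping removes the negative values; on uint8 images
# cv2.subtract is safer, since plain uint8 subtraction wraps around
cleaned = np.clip(otsu - mask, 0, 1)  # only the 4x4 object remains
```

After the subtraction, the largest contour of `cleaned` covers only the object, so its bounding rectangle no longer includes the protrusion.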
Again, probably not the best method. I'm not going to check 10-20 images and make sure it works for all of them; even then, future images might still fail. Use my answer as a base for your actual implementation, since you have the domain knowledge and are aware of the limitations of your equipment and task.