
I'm building a CNN that will tell me whether a person has brain damage. I'm planning to use the TensorFlow Inception v3 model and the build_image_data.py script to build the TFRecords.

The dataset is composed of brain scans. Every scan has about 100 images (different head poses and angles). The damage is visible in some images but not in others, so I can't label all images from a scan as damage-positive (or negative): some of them would be labeled wrong (e.g. when the scan is positive for damage but the damage is not visible in a specific image).

Is there a way to label the whole scan as positive/negative and train the network that way? And after training is done, pass a whole scan (not a single image) as input to the network and classify it?


1 Answer


It looks like multiple instance learning (MIL) might be the approach for you. Check out these two papers:

Multiple Instance Learning Convolutional Neural Networks for Object Recognition

Classifying and segmenting microscopy images with deep multiple instance learning
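The core idea shared by these papers is to train on bag-level labels (here, one label per scan) by pooling per-instance predictions into a single bag prediction. A minimal NumPy sketch of that pooling step, assuming you already have per-image damage probabilities from a CNN (the function name and the max/noisy-OR options are illustrative, not taken from either paper):

```python
import numpy as np

def bag_prediction(instance_probs, pooling="max"):
    """Aggregate per-image damage probabilities into one scan-level probability."""
    p = np.asarray(instance_probs, dtype=np.float64)
    if pooling == "max":
        # The scan is positive if any single image looks positive.
        return float(p.max())
    if pooling == "noisy_or":
        # Probability that at least one image is positive, treating images independently.
        return float(1.0 - np.prod(1.0 - p))
    raise ValueError("unknown pooling: %r" % pooling)

# Per-image CNN outputs for one scan (made-up numbers):
probs = [0.05, 0.10, 0.92, 0.08]
print(bag_prediction(probs))  # 0.92
```

During training, the same pooling is applied inside the network so the loss is computed against the scan-level label only; the gradient then flows back through whichever instances drove the bag prediction.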

The second one is implemented by @dancsalo (not sure if he has a Stack Overflow account) here.

It looks like the second paper deals with very large images by breaking them into sub-images but labeling the entire image. So it is like labeling a bag of images with a single label instead of having to make a label for each sub-image. In your case, you might be able to construct a matrix of images, i.e. a 10 image x 10 image master image, for each of the scans...
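That master-image idea can be sketched with NumPy, assuming each scan's ~100 images are same-sized single-channel arrays (the function name and grid shape are illustrative; real scans would need consistent resizing first):

```python
import numpy as np

def tile_scan(images, grid=(10, 10)):
    """Tile a scan's images into one master image, filled row by row."""
    rows, cols = grid
    if len(images) != rows * cols:
        raise ValueError("expected %d images, got %d" % (rows * cols, len(images)))
    h, w = images[0].shape
    master = np.zeros((rows * h, cols * w), dtype=images[0].dtype)
    for idx, img in enumerate(images):
        r, c = divmod(idx, cols)
        master[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
    return master

# Example: 100 fake 64x64 slices -> one 640x640 master image
scan = [np.random.rand(64, 64).astype(np.float32) for _ in range(100)]
master = tile_scan(scan)
print(master.shape)  # (640, 640)
```

The scan-level label then applies to the whole master image, so an ordinary image classifier can be trained on it without per-slice labels.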

Let us know if you do this and if it works well on your data set!
