I guess it has more to do with your application’s implementation. Is it a Java or a C++ (native) application? Is image processing a significant part of your application’s workload? Do you need computing speed or implementation speed? Anyway, I’ve used OpenCV (the C++ implementation) in Java via the JNI and the NDK. The core of my application is image processing, and processing speed matters a lot to me (almost real-time). Everything is written in C++, as I have both desktop and mobile implementations of my imaging system.
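The JNI/NDK split looks roughly like this. A minimal sketch, assuming a hypothetical `NativeProcessor` class and a `libimaging.so` library name (neither is from the original post); the matching C++ function would be built with the NDK and call OpenCV there:

```java
// Java side of a hypothetical JNI bridge. The C++ side (compiled with the NDK)
// implements Java_NativeProcessor_processFrame, converts the byte[] into a
// cv::Mat, runs the OpenCV pipeline, and returns the processed pixels.
class NativeProcessor {
    static {
        // Loads libimaging.so produced by the NDK build (library name is hypothetical).
        System.loadLibrary("imaging");
    }

    // Implemented in C++; receives a raw camera frame and returns processed pixels.
    public native int[] processFrame(byte[] yuvData, int width, int height);
}
```

The native method is only resolved at call time, so the Java side compiles cleanly even before the NDK build exists.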
In the mobile application (for Android), my imaging system is embedded in a Java application. From here, I see processing split into two parts: image acquisition (via the device camera) and image processing (via OpenCV). Image acquisition is done entirely in Java. The work wasn’t trivial; the main problem I ran into was acquiring RGB frames from the camera (which natively delivers YUV), passing them to OpenCV, receiving the results back, and rendering them to the application’s SurfaceView.
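To give an idea of what the YUV-to-RGB step involves: Android preview callbacks hand you NV21 (YUV420SP) byte arrays, and in practice you would let OpenCV do the conversion natively (e.g. `Imgproc.cvtColor` with `COLOR_YUV2RGB_NV21`). A plain-Java sketch of the underlying arithmetic, using the standard BT.601 conversion (the class and method names are just for illustration):

```java
// Minimal NV21 (YUV420SP) -> packed RGB conversion. NV21 stores a full-resolution
// Y plane followed by an interleaved, half-resolution VU plane.
final class Nv21ToRgb {
    static int[] convert(byte[] nv21, int width, int height) {
        int frameSize = width * height;
        int[] rgb = new int[frameSize];
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int y = 0xff & nv21[j * width + i];
                // Each 2x2 block of pixels shares one V and one U sample.
                int uvIndex = frameSize + (j >> 1) * width + (i & ~1);
                int v = (0xff & nv21[uvIndex]) - 128;
                int u = (0xff & nv21[uvIndex + 1]) - 128;
                // BT.601 YUV -> RGB, clamped to [0, 255].
                int r = Math.min(255, Math.max(0, (int) (y + 1.402f * v)));
                int g = Math.min(255, Math.max(0, (int) (y - 0.344f * u - 0.714f * v)));
                int b = Math.min(255, Math.max(0, (int) (y + 1.772f * u)));
                rgb[j * width + i] = (r << 16) | (g << 8) | b;
            }
        }
        return rgb;
    }
}
```

Per-pixel loops like this are exactly why pushing the conversion down into native OpenCV code pays off at near-real-time frame rates.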
Interfacing the camera and OpenCV can indeed be a headache, depending mainly on your device’s capabilities and your SDK versions. After acquiring the image, processing was relatively painless. My whole system had been debugged beforehand on my PC, so I knew exactly what to expect. All the OpenCV functions behaved as they should, so I had almost no problems with this part.
I had also spent a lot of time working in C++, which was a factor in deciding which kind of implementation to choose. By now, the application is pretty much set up; I can add new features, test them on my PC, and update the mobile port fairly quickly.