I'm trying to apply an inverse linear polar transform to some square images:

The result I'd expect is the following (produced with Photoshop):

However, the result I get from OpenCV's linearPolar function is the following:

Part of the image is missing; it shows up as a black slice.
The code I am using is:
```cpp
linearPolar(in, pol,
            Point2f(in.cols / 2, in.rows / 2), in.rows / 2,
            CV_WARP_FILL_OUTLIERS | CV_INTER_LINEAR | CV_WARP_INVERSE_MAP);
```
Where `in` is the input image, given by `imread`.
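For completeness, here is a minimal self-contained version of what I'm running. The filename `square.png` and the display code are just placeholders, not part of my actual program:

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    cv::Mat in = cv::imread("square.png");  // placeholder path to a square input image
    cv::Mat pol;

    // Inverse linear polar transform, centred on the image,
    // with the maximum radius set to half the image height.
    cv::linearPolar(in, pol,
                    cv::Point2f(in.cols / 2.0f, in.rows / 2.0f),
                    in.rows / 2.0,
                    cv::WARP_FILL_OUTLIERS | cv::INTER_LINEAR | cv::WARP_INVERSE_MAP);

    cv::imshow("result", pol);
    cv::waitKey();
    return 0;
}
```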
Am I doing something wrong here?