Channel: OpenCV Q&A Forum - RSS feed

How to use OpenCV 2.4.10 with GPU in the Java environment?

How can I use the GpuMat class in a Java program?

Object detection: slow video processing

I'm running the sample code that comes with the Object Detection model. I made a modification to read a video instead of a webcam. The problem is that the window opens and plays the video, but extremely slowly (it really is very slow; it does not exceed 1 fps, I think).

```
video = cv2.VideoCapture(PATH_TO_VIDEO)
while(video.isOpened()):
    ret, frame = video.read()
    frame_expanded = np.expand_dims(frame, axis=0)

    # Perform the actual detection by running the model with the image as input
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: frame_expanded})

    # Draw the results of the detection (aka 'visualize the results')
    vis_util.visualize_boxes_and_labels_on_image_array(
        frame,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        line_thickness=8,
        min_score_thresh=0.60)

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
```
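A quick first step is to time the capture and the inference separately, to confirm which stage is actually slow (it is usually the `sess.run` call, not OpenCV). A minimal sketch of one timed iteration, reusing `video`, `sess`, and the tensor handles from the code above, so it is not self-contained on its own:

```
import time

t0 = time.time()
ret, frame = video.read()                  # `video` as defined in the question
t1 = time.time()
frame_expanded = np.expand_dims(frame, axis=0)
(boxes, scores, classes, num) = sess.run(  # `sess` and tensors as in the question
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: frame_expanded})
t2 = time.time()
print("read: %.3f s   inference: %.3f s" % (t1 - t0, t2 - t1))
```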

I am building a document scanner application and facing an issue with detecting corner circles

I want the output as in this image: ![image description](/upfiles/1558063487629078.png) I have tried blob detection to find the black circles located at the corners, but it does not detect the black dots. Can anyone suggest a better way to achieve the required output? Actually, I want only the area inside the black dots, inclusive of the black dots. Any references would be great.
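In case it is useful: `SimpleBlobDetector` often misses small dark dots simply because of its default filters, so setting them explicitly is worth a try. A minimal Python sketch, where the file name and the area/circularity numbers are hypothetical values to tune:

```
import cv2
import numpy as np

img = cv2.imread("scan.png")                      # hypothetical scanned page
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 0                              # look for dark blobs
params.filterByCircularity = True
params.minCircularity = 0.7                       # roughly circular
params.filterByArea = True
params.minArea = 30                               # tune to the dot size
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(gray)

# Crop the axis-aligned region spanned by the corner dots (inclusive of the dots)
pts = np.array([k.pt for k in keypoints])
r = max(k.size for k in keypoints) / 2
x0, y0 = (pts.min(axis=0) - r).astype(int)
x1, y1 = (pts.max(axis=0) + r).astype(int)
crop = img[y0:y1, x0:x1]
```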

java: symbol lookup error: undefined symbol: _ZN2cv3Mat20updateContinuityFlagEv

Hello. I am currently working on an image-processing application using OpenCV on Ubuntu 18.04. My C++ project compiles and runs fine. When I compile it with JNI it also compiles successfully, but when I try to run it from a Spring application I get the following error:

**java: symbol lookup error: /home/Demo/libSoFile.so: undefined symbol: _ZN2cv3Mat20updateContinuityFlagEv**

**Details about the libraries - OpenCV 3.4.3**
**O.S - Ubuntu 18.04**
**Java version - openjdk version "1.8.0_212"**

Why is the image not copied to the right part of the collage image?

I'm trying to create a collage image. The following code writes correctly to the left part of the collage image (green), but not to the right.

```
Size collageSize(img1.width + img2.width, std::max(img1.height, img2.height));
Mat collageImg = Mat(collageSize, CV_8UC3, Scalar::all(0));
Mat img1ROI(collageImg, cv::Range(0, img1.rows), cv::Range(0, img1.cols));
rectangle(img1ROI, Rect(0, 0, img1.cols, img1.rows), CV_RGB(0, 255, 0), -1);
Mat img2ROI(collageImg, cv::Range(0, img2.rows), cv::Range(img1.cols, img2.cols));
rectangle(img2ROI, Rect(0, 0, img2.cols, img2.rows), CV_RGB(255, 0, 0), -1);
```

![image description](/upfiles/15580765464891417.jpg)

If I create `img2ROI` as follows:

```
Mat img2ROI(collageImg, Rect(img1.cols, 0, img2.cols, img2.rows));
```

then I get the correct collage image:

![image description](/upfiles/15580768511399438.jpg)

Why doesn't it work with `Range` for the right image?
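For context on the `Range` semantics: `cv::Range(start, end)` spans the indices `[start, end)`, so the second argument is an end index, not a width. `Range(img1.cols, img2.cols)` therefore covers the wrong column span (and is empty whenever `img2.cols <= img1.cols`); the intended span would be `Range(img1.cols, img1.cols + img2.cols)`. A small NumPy sketch of the same layout (Python slicing has the same start/end convention; the sizes are hypothetical):

```
import numpy as np

h1, w1 = 100, 150   # hypothetical size of img1
h2, w2 = 120, 200   # hypothetical size of img2

collage = np.zeros((max(h1, h2), w1 + w2, 3), np.uint8)
collage[0:h1, 0:w1] = (0, 255, 0)          # left ROI: columns [0, w1)
collage[0:h2, w1:w1 + w2] = (0, 0, 255)    # right ROI: columns [w1, w1 + w2), not [w1, w2)
```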

Generate Cropped and Masked IRIS - Eye Image

Hello all, I want to generate the below type of image from a given eye image using OpenCV. ![image description](/upfiles/1558080464492512.png) I am using C# and OpenCV. I can successfully generate the below type of image, but I cannot find a way to generate what I need. ![image description](/upfiles/1558080480215774.png) Any suggestions are welcome. Current code as below:

```
private void SegmentIris()
{
    // Clone the filled contour
    Image InputImageCloneOne = FilledContourForSegmentation.Clone();
    Image InputImageCloneTwo = FilledContourForSegmentation.Clone();
    MCvScalar k = new MCvScalar(255, 255, 255);

    // Draw the circle for the mask in white
    CvInvoke.cvCircle(mask, PupilCenter, OuterBoundaryRadius, IrisConstants.WhiteColor, -1, Emgu.CV.CvEnum.LINE_TYPE.CV_AA, 0);

    // Create the optimised circle using the pupil center and outer iris boundary,
    // so that the circles appear properly around the iris
    if (IsContourDetectionSatisfactory)
    {
        OptimisedIrisBoundaries = FilledContourForSegmentation.Clone();
        CvInvoke.cvCircle(OptimisedIrisBoundaries, PupilCenter, OuterBoundaryRadius, IrisConstants.WhiteColor, 2, Emgu.CV.CvEnum.LINE_TYPE.CV_AA, 0);
    }
    else
    {
        OptimisedIrisBoundaries = ApproximatedPupilImage.Clone();
        CvInvoke.cvCircle(OptimisedIrisBoundaries, PupilCenter, OuterBoundaryRadius, IrisConstants.WhiteColor, 2, Emgu.CV.CvEnum.LINE_TYPE.CV_AA, 0);
    }

    // Now make the mask circle black
    CvInvoke.cvNot(mask, mask);

    // Subtract the input image and the filled contour image over the mask created
    CvInvoke.cvSub(InputImage, InputImageCloneOne, InputImageCloneTwo, mask);

    // Put cloneTwo into the segmented image
    CvInvoke.cvCopy(InputImageCloneTwo, SegmentedIrisImage, new IntPtr(0));
}
```

-thanks
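In case a minimal reference helps: the target output (the iris ring on black, cropped to its bounding box) can be produced by masking with two filled circles and cropping. A Python/OpenCV sketch, where the file name, center, and radii are hypothetical stand-ins for the values your pupil/iris detection already provides:

```
import cv2
import numpy as np

img = cv2.imread("eye.jpg")          # hypothetical input path
cx, cy = 320, 240                    # assumed pupil/iris center
r_pupil, r_iris = 40, 120            # assumed radii

# White ring mask: filled iris circle minus filled pupil circle
mask = np.zeros(img.shape[:2], np.uint8)
cv2.circle(mask, (cx, cy), r_iris, 255, -1)
cv2.circle(mask, (cx, cy), r_pupil, 0, -1)

# Keep only the ring, then crop to the iris bounding box
ring = cv2.bitwise_and(img, img, mask=mask)
crop = ring[cy - r_iris:cy + r_iris, cx - r_iris:cx + r_iris]
cv2.imwrite("iris_ring.png", crop)
```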

I have downgraded Python from 3.7 to 3.5, but my cropping code for image pre-processing doesn't seem to work with the older version

This is my code:

```
import cv2
import numpy as np

img1 = cv2.imread('C:\\Users\\LENOVO\\assertive and mild\\111.jpeg')
img = cv2.imread('C:\\Users\\LENOVO\\assertive and mild\\111.jpeg', 0)
gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)

# Create mask
height, width = 137, 137
mask = np.zeros((height, width), np.uint8)

edges = cv2.Canny(thresh, 100, 200)
#cv2.imshow('detected ', gray)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 10000,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
for i in circles[0, :]:
    i[2] = i[2] + 4
    # Draw on mask
    cv2.circle(mask, (i[0], i[1]), i[2], (255, 255, 255), thickness=-1)

# Copy that image using that mask
masked_data = cv2.bitwise_and(img1, img1, mask=mask)

# Apply Threshold
_, thresh = cv2.threshold(mask, 1, 255, cv2.THRESH_BINARY)

# Find Contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(contours[0])

# Crop masked_data
crop = masked_data[y:y+h, x:x+w]

# Code to close window
cv2.imshow('detected Edge', img1)
cv2.imshow('Cropped face', crop)
cv2.waitKey(0)
cv2.imwrite('C:\\Users\\LENOVO\\assertive and mild\\cropped111.jpeg', crop)
cv2.destroyAllWindows()
```

It gives the following error when run:

```
TypeError
     29 # Find Contour
     30 contours = cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
---> 31 x,y,w,h = cv2.boundingRect(contours[0])
     32
     33 # Crop masked_data

TypeError: Expected cv::UMat for argument 'array'
```

This code was producing the right results on Python 3.7, but for TensorFlow I had to downgrade to Python 3.5, and now it is giving me errors.
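The usual cause of this exact error is the `cv2.findContours` return signature, which changed between OpenCV versions: 3.x returns `(image, contours, hierarchy)` while 2.x and 4.x return `(contours, hierarchy)`. Downgrading Python most likely pulled in a different OpenCV build, so `contours[0]` no longer holds the contour list. A version-robust sketch (the contour list is always the second-to-last item):

```
# Works across OpenCV 2.x, 3.x, and 4.x
result = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = result[-2]
x, y, w, h = cv2.boundingRect(contours[0])
```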

Visualize differences between two images

I have two images and would like to make it obvious where the differences are. I want to add color to the two images such that a user can clearly spot all the differences within a second or two. For example, here are two images with a few differences:

*leftImage.jpg:*

[![first image][1]][1]

*rightImage.jpg:*

[![second image][2]][2]

My current approach to making the differences obvious is to create a mask (the difference between the two images), color it red, and then add it to the images. The goal is to clearly mark all differences with a strong red color. Here is my current code:

```
import cv2

# load images
image1 = cv2.imread("leftImage.jpg")
image2 = cv2.imread("rightImage.jpg")

# compute difference
difference = cv2.subtract(image1, image2)

# color the mask red
Conv_hsv_Gray = cv2.cvtColor(difference, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(Conv_hsv_Gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
difference[mask != 255] = [0, 0, 255]

# add the red mask to the images to make the differences obvious
image1[mask != 255] = [0, 0, 255]
image2[mask != 255] = [0, 0, 255]

# store images
cv2.imwrite('diffOverImage1.png', image1)
cv2.imwrite('diffOverImage2.png', image2)
cv2.imwrite('diff.png', difference)
```

*diff.png:*

[![enter image description here][3]][3]

*diffOverImage1.png*

[![enter image description here][4]][4]

*diffOverImage2.png*

[![enter image description here][5]][5]

**Problem with the current code:** The computed mask shows some differences but not all of them (see, for example, the tiny piece in the upper right corner, or the rope thingy on the blue packet). These differences are shown only very lightly in the computed mask, but they should be clearly red like the other differences.

**Input:** 2 images with some differences.

**Output:** 3 images: the two input images with the differences highlighted (clearly highlighted in a configurable color), and a third image containing only the differences (the mask).

[1]: https://i.stack.imgur.com/lWUlB.jpg
[2]: https://i.stack.imgur.com/gz9Kf.jpg
[3]: https://i.stack.imgur.com/fsQ77.png
[4]: https://i.stack.imgur.com/ZysEk.png
[5]: https://i.stack.imgur.com/Fwk7J.png
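One way to pull in the faint differences: work on the absolute difference (`cv2.absdiff` catches changes in both directions, whereas `cv2.subtract` clips negative values to zero), use a low fixed threshold instead of Otsu, and dilate the mask so thin or faint changes become clearly visible. A minimal sketch along those lines, with the threshold and kernel size as values to tune:

```
import cv2
import numpy as np

image1 = cv2.imread("leftImage.jpg")
image2 = cv2.imread("rightImage.jpg")

diff = cv2.absdiff(image1, image2)                         # symmetric difference
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)  # low fixed threshold
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))         # thicken thin/faint regions

highlight = (0, 0, 255)                                    # configurable color (BGR)
image1[mask == 255] = highlight
image2[mask == 255] = highlight
```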

Storing ORB Keypoints in file

I need to store ORB keypoints and descriptors in a file, and am currently using pickle, but it is slow (~10-15 seconds to load). Is there another way that would allow faster saving and loading? I tried using JSON before but could not serialize the objects (since they are complicated). Could anyone provide insight?
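One approach that tends to load much faster than pickling keypoint objects: keep the descriptors as the NumPy array they already are, flatten each keypoint to plain numbers, and store both with `np.savez`. A sketch, with hypothetical file names:

```
import cv2
import numpy as np

orb = cv2.ORB_create()
img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
kps, des = orb.detectAndCompute(img, None)

# Save: flatten keypoints to (x, y, size, angle, response, octave, class_id)
kp_array = np.array([(k.pt[0], k.pt[1], k.size, k.angle,
                      k.response, k.octave, k.class_id) for k in kps])
np.savez("features.npz", keypoints=kp_array, descriptors=des)

# Load: rebuild cv2.KeyPoint objects from the stored rows
data = np.load("features.npz")
kps2 = [cv2.KeyPoint(x, y, s, a, r, int(o), int(c))
        for x, y, s, a, r, o, c in data["keypoints"]]
des2 = data["descriptors"]
```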

Finding subpixel position of fiducial shapes (Plus, Dot, Square)

Hi All, I am new to OpenCV - actually, so new that I haven't tried to run it yet. But I wanted to figure out whether I can use OpenCV for this before going down the wrong path. I have a grayscale image, about 2000x1500 pixels, with a small shape (about 200x200 pixels) somewhere in the image. The shape is either a square, dot, or plus. The trickiness is that the shape is "fuzzy" (out of focus) and contrast is low. I cannot get a better image of the shape, so crisp edges are definitely out. What I need is an algorithm that (as quickly as possible) detects the center position of this fiducial shape and reports it back to my code. My preference is C++. Any thoughts on whether this can (simply / easily) be done with OpenCV, and if so, how? Thanks!

-- Edit: I could not post a reply yet, so here are some more details: We are currently using Sherlock by Dalsa for this process, and need to move to real time in C++, hence the need to evaluate OpenCV. When looking through documentation and tutorials, it wasn't abundantly clear that OpenCV had simple functionality to detect shapes like I mentioned and give me the subpixel coordinates of the center. All I was able to find were correlation-based algorithms and complex face-recognition stuff; neither would be applicable. Based on eshirima's suggestions, it looks like it will be doable though. I'll look into those more. I'll try to add examples later today or tomorrow so you can weigh in on whether the contrast ratio would present a problem. And to clarify -- no, I am not looking for others to do the work, just trying to evaluate whether I should consider OpenCV or not. Thanks!
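For what it's worth, a common subpixel approach for a single fuzzy, blob-like fiducial is an intensity-weighted centroid: isolate the shape (Otsu thresholding after a blur is one option; the defocus blur does not hurt here) and take the center of mass from the image moments, which naturally comes out with fractional coordinates. A minimal Python sketch (shown in Python for brevity; the same `GaussianBlur`/`threshold`/`moments` calls exist in the C++ API):

```
import cv2

img = cv2.imread("fiducial.png", cv2.IMREAD_GRAYSCALE)  # hypothetical ROI image
blur = cv2.GaussianBlur(img, (9, 9), 0)                 # suppress noise
_, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

m = cv2.moments(bw, binaryImage=True)
cx = m["m10"] / m["m00"]   # subpixel x of the centroid
cy = m["m01"] / m["m00"]   # subpixel y
print("center: (%.3f, %.3f)" % (cx, cy))
```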

Random Forest with categorical features.

Hello, I am trying to use a random forest on mixed data with continuous and categorical features, but I cannot work out how to call the predict function on one of these samples. The data format is below:

> 39, State-gov, 77516, Bachelors, 13, Never-married, Adm-clerical, Not-in-family, White, Male, 2174, 0, 40, United-States, <=50K

I have 35000 records in the data set. Please find the code below:

```
// (the original include list was mangled by the forum; the OpenCV and iostream headers are needed)
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace std;
using namespace cv;
using namespace cv::ml;

int main()
{
    cout << "Loading Data..." << endl;
    Ptr<TrainData> raw_data = TrainData::loadFromCSV(
        "C:/mlpack/samples/mlpack/sample-ml-app/sample-ml-app/data/real.csv",
        0, -1, -1, "ord[0,2,4,10-12]cat[1,3,5-9,13-14]", ',');
    Mat data = raw_data->getSamples();
    Mat labels = raw_data->getResponses();

    auto rtrees = RTrees::create();
    rtrees->setMaxDepth(10);
    rtrees->setMinSampleCount(2);
    rtrees->setUseSurrogates(false);
    rtrees->setMaxCategories(2);
    rtrees->setCalculateVarImportance(false);
    rtrees->setActiveVarCount(0);
    rtrees->setTermCriteria({ cv::TermCriteria::MAX_ITER, 100, 0 });

    cout << "Training Model..." << endl;
    rtrees->train(data, cv::ml::ROW_SAMPLE, labels);
    cout << "Saving Model..." << endl;
    rtrees->save("rt_classifier.xml");

    cout << "Loading Model..." << endl;
    auto rtrees2 = cv::ml::RTrees::create();
    cv::FileStorage read("rt_classifier.xml", cv::FileStorage::READ);
    rtrees2->read(read.root());
    //rtrees2->predict();
    return 0;
}
```

Sample to predict:

> 53, Private, 144361, HS-grad, 9, Married-civ-spouse, Machine-op-inspct, Husband, White, Male, 0, 0, 38, United-States

Can I get any help formatting this data to feed to predict()? Thanks in advance.
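As far as I understand it, `predict()` expects a single-row float matrix encoded the same way the training samples were, so the categorical strings must be mapped to the same numeric codes used at training time (with `loadFromCSV`, the mapping OpenCV chose is kept inside the `TrainData` object rather than something to re-invent). A toy Python sketch of the round trip, with invented data, just to show the shape and dtype `predict` wants:

```
import cv2
import numpy as np

# Invented toy data: column 0 continuous (age), column 1 categorical encoded as 0/1
samples = np.array([[39, 0], [53, 1], [28, 0], [45, 1]], dtype=np.float32)
labels = np.array([0, 1, 0, 1], dtype=np.int32)

rtrees = cv2.ml.RTrees_create()
rtrees.train(samples, cv2.ml.ROW_SAMPLE, labels)

# A sample to predict must be a 1xN float32 row with the same per-column encoding
sample = np.array([[50, 1]], dtype=np.float32)
_, prediction = rtrees.predict(sample)
print(prediction)
```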

Color detection against different backgrounds of a human-machine interface

I want to use a camera to watch the human-machine interface of a PLC so I can fool around during work. In a little test, I used the inRange function to detect some yellow in HSV color space, with (H,S,V) = (10,45,150)~(30,255,255). At first, everything was fine. ![image description](/upfiles/15578287422380823.jpg)![image description](/upfiles/1557828749422916.jpg) However, when I change the background to other colors, everything goes wrong. With a slightly yellow background the colors go weird and the HSV range no longer works. ![image description](/upfiles/15578288255347348.jpg) ![image description](/upfiles/15578288418166313.jpg) And with a green background, you can see that the color of the lights changed. ![image description](/upfiles/1557828922955377.jpg) I think this is a white-balance issue, so I tried to write a white-balance algorithm such as gray-world white balance, but it didn't help. I also disabled the white balance of my camera, but the color shift is still there. What keywords can I search for this problem? ![image description](/upfiles/1557828929943687.jpg)
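Since a global gray-world correction didn't help, one variant worth a try is white-patch correction against a region of the panel that is known to be white or gray: scale each channel so the reference patch becomes neutral before running inRange. A sketch, with a hypothetical file name and patch location:

```
import cv2
import numpy as np

img = cv2.imread("hmi.jpg").astype(np.float32)   # hypothetical frame

# Hypothetical region that is known to be white/gray on the panel
patch = img[10:40, 10:40]
mean_b = patch[..., 0].mean()
mean_g = patch[..., 1].mean()
mean_r = patch[..., 2].mean()

# Scale channels so the reference patch becomes neutral
scale = max(mean_b, mean_g, mean_r)
balanced = img * [scale / mean_b, scale / mean_g, scale / mean_r]
balanced = np.clip(balanced, 0, 255).astype(np.uint8)

hsv = cv2.cvtColor(balanced, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (10, 45, 150), (30, 255, 255))
```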

I want to use OpenCV on Visual Studio

I am Japanese, so my grammar and spelling may be strange. I want to use OpenCV 4.1.0 with Visual Studio 2015, but I do not know how to set it up. Please tell me how to do the setup in detail. If OpenCV 4.1.0 cannot be used with Visual Studio 2015, please tell me which Visual Studio version can use it.

BindingError:_emval_take_value has unknown type N10emscripten11memory_viewIhEE

As the title says, I am getting `BindingError: _emval_take_value has unknown type N10emscripten11memory_viewIhEE`. It happens during execution of this line: `cap.read(src);` (OpenCV.js 4.1.0 build).

How to use imshow when a PNG is loaded in memory

My program receives PNG buffers over a TCP connection, so I have an unsigned character pointer to the PNG file data. I want to show the PNG images with imshow as they are received (i.e., a sort of video stream of PNG images). So far I have not been able to achieve this, since imshow wants a cv::Mat and I don't know how to convert from uchar* (raw PNG file data) to cv::Mat. Thank you.
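The usual route is `imdecode`, which decodes an encoded PNG buffer straight into a matrix; in C++ this is `cv::imdecode(cv::Mat(1, length, CV_8UC1, ptr), cv::IMREAD_COLOR)`. A Python sketch of the same flow, with a file read standing in for the TCP buffer:

```
import cv2
import numpy as np

with open("frame.png", "rb") as f:            # stand-in for the bytes received over TCP
    png_bytes = f.read()

buf = np.frombuffer(png_bytes, np.uint8)      # wrap the raw bytes without copying
img = cv2.imdecode(buf, cv2.IMREAD_COLOR)     # decode the PNG into a BGR image
cv2.imshow("stream", img)
cv2.waitKey(1)
```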

Defect Detection using OpenCV

Dear Members, I am trying to detect defects in an image by comparing the defective image with the original one. So far I have achieved this using the Canny algorithm. Now I have to fill the defective area with color after applying the Canny algorithm. Kindly let me know how to do this. Thanks in advance.
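One common way to fill the detected regions, sketched under the assumption that the Canny output is available as an image: close the edge gaps morphologically so each defect forms a closed region, extract the external contours, and draw them filled (the file names are hypothetical):

```
import cv2

edges = cv2.imread("canny_output.png", cv2.IMREAD_GRAYSCALE)  # assumed Canny result
img = cv2.imread("defective.png")                             # assumed defective image

# Close small gaps so each defect becomes a closed region
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Fill each defect's contour in red ([-2] keeps this OpenCV-version-robust)
contours = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
cv2.drawContours(img, contours, -1, (0, 0, 255), thickness=cv2.FILLED)
```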

Unable to detect identical circles

Greetings everyone. I'm new to OpenCV. I am trying to detect identical circles using HoughCircles / scikit-image. ![image description](https://prnt.sc/nq3yuv) Here is the code I have tried so far:

```
import numpy as np
import matplotlib.pyplot as plt
from skimage import data, color
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter
from skimage.util import img_as_ubyte
from skimage.io import imread
import cv2

cimg = imread("c.png")
gimg = cv2.cvtColor(cimg, cv2.COLOR_BGR2GRAY)
gimg = (255 - gimg)

circles = cv2.HoughCircles(gimg, cv2.HOUGH_GRADIENT, 1, minDist=2,
                           param1=50, param2=30, minRadius=5, maxRadius=30)
print(circles)
if circles is None:
    print('cannot find circle, try parameter tuning')
    exit()
print(circles.shape)

circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    center_x, center_y, radius = i[0], i[1], i[2]
    circy, circx = circle_perimeter(center_y, center_x, radius)
    hitcnt = 0
    for x, y in zip(circx, circy):
        if gimg[y, x] > 125:
            hitcnt = hitcnt + 1
    print("hit:", hitcnt)
    if hitcnt > 255:
        hitcnt = 255
    # draw the outer circle
    print(hitcnt, int(0.3 * radius * 2 * 3.14))
    if hitcnt > int(0.3 * radius * 2 * 3.14):
        cv2.circle(cimg, (i[0], i[1]), i[2], (0, 0, 255), 1)
    else:
        cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 1)
    # draw the center of the circle
    #cv2.circle(cimg,(i[0],i[1]),2,(0,0,255),3)

cv2.imwrite('out.png', cimg)
cv2.imshow('detected circles', cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
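As a side note, `hough_circle` is imported above but never used; for identical circles of roughly known radius, the scikit-image Hough transform lets you fix a narrow radius range and take the N strongest peaks, which can be more robust than `cv2.HoughCircles` with a tiny `minDist`. A sketch, where the radius range and peak count are assumptions to tune:

```
import numpy as np
from skimage.io import imread
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

img = imread("c.png", as_gray=True)        # float image in [0, 1]
edges = canny(1.0 - img, sigma=2)          # invert so dark circles become bright

radii = np.arange(8, 13)                   # assumed radius range in pixels
hspaces = hough_circle(edges, radii)
accums, cx, cy, rad = hough_circle_peaks(hspaces, radii, total_num_peaks=20)
for x, y, r in zip(cx, cy, rad):
    print("circle at (%d, %d), r=%d" % (x, y, r))
```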

I have a system with three cameras and the R and T matrices between C1 & C2 and between C2 & C3. How do I transform a point from the first camera to the third camera?

I have three cameras (C1, C2, C3). I have calibrated C1 & C2 as one stereo pair (System 1) and C2 & C3 as another stereo pair (System 2). So as results, I have the rotation and translation matrices between C1 & C2 and between C2 & C3. After a successful reconstruction, I have one 3D point, say P(X, Y, Z), in System 1. My question is: how do I transform point P into all three camera frames?
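For reference, if a point in the C1 frame maps into C2 as `P2 = R12·P1 + T12` and into C3 as `P3 = R23·P2 + T23`, then substituting gives the composed transform `P3 = (R23·R12)·P1 + (R23·T12 + T23)`, i.e. `R13 = R23·R12` and `T13 = R23·T12 + T23`. A NumPy sketch with placeholder matrices standing in for the stereoCalibrate outputs:

```
import numpy as np

# Placeholder calibration results (replace with the real R/T from each pair)
R12, T12 = np.eye(3), np.array([[0.1], [0.0], [0.0]])
R23, T23 = np.eye(3), np.array([[0.2], [0.0], [0.0]])

# Compose the C1 -> C3 transform
R13 = R23 @ R12
T13 = R23 @ T12 + T23

P1 = np.array([[1.0], [2.0], [5.0]])   # 3D point in the C1 frame
P2 = R12 @ P1 + T12                    # same point in the C2 frame
P3 = R13 @ P1 + T13                    # same point in the C3 frame
```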

imshow as a stream of PNG images

I have a C++ command-line application where PNG images are received over a TCP/IP connection. Can imshow() be used to display them? How do I deal with waitKey(x)? Right now, I create the named window in main() and *am trying* to feed it, with imshow, also from main(). I get one image displayed. The packet receiving runs on a different thread. Thanks!
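HighGUI is generally happiest when all window calls (namedWindow, imshow, waitKey) stay on one thread, so a common pattern is: the network thread decodes each PNG and pushes it onto a queue, while the main thread runs the display loop with waitKey as its pacing call. A Python sketch of the pattern (the C++ shape is the same with std::queue plus a mutex; `read_png_from_socket` is a hypothetical stand-in for the receive code):

```
import queue
import threading
import cv2
import numpy as np

frames = queue.Queue(maxsize=4)

def receiver():
    # Stand-in for the TCP thread: decode each PNG buffer and queue it
    while True:
        png_bytes = read_png_from_socket()   # hypothetical network helper
        img = cv2.imdecode(np.frombuffer(png_bytes, np.uint8), cv2.IMREAD_COLOR)
        frames.put(img)

threading.Thread(target=receiver, daemon=True).start()

while True:                                  # display loop on the main thread
    try:
        img = frames.get(timeout=0.1)
        cv2.imshow("stream", img)
    except queue.Empty:
        pass
    if cv2.waitKey(1) == 27:                 # Esc quits; waitKey also pumps GUI events
        break
```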

How to fix 'receiving only black frames' in Android with OpenCV

I was developing an Augmented Reality feature similar to [inkHunter](http://inkhunter.tattoo) for a mobile application using Python and OpenCV. The code worked as I expected, even though it had some overkill in it. I needed to make an Android app, and I knew I had to convert that Python code to C++ and run it in Android with the NDK, since it is a real-time process. I was able to load the OpenCV libraries into my Android project and pass data between the native class and the MainActivity as well. Then I converted my Python code to C++ (which I am not very familiar with) and ran the project. But it gives me only black frames. The program shows no errors, but I don't get the expected output. I'm using **Android Studio 3.3.2** and **OpenCV4Android 4.1.0**.

I used the *templateMatching* method to detect the input template in the captured frame, then paste a PNG onto the detected area using *alpha blending*, and finally add that area back to the frame using *homography*. This is my code,

**MainActivity.java**

```
public class MainActivity extends AppCompatActivity implements CameraBridgeViewBase.CvCameraViewListener2 {

    private static String TAG = "MainActivity";
    private JavaCameraView javaCameraView;

    // Used to load the 'native-lib' library on application startup.
    static {
        System.loadLibrary("native-lib");
        System.loadLibrary("opencv_java4");
    }

    private Mat mRgba;

    BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case BaseLoaderCallback.SUCCESS: {
                    javaCameraView.enableView();
                    break;
                }
                default: {
                    super.onManagerConnected(status);
                    break;
                }
            }
        }
    };

    private Mat temp, tattoo;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        javaCameraView = (JavaCameraView) findViewById(R.id.java_camera_view);
        javaCameraView.setVisibility(SurfaceView.VISIBLE);
        javaCameraView.setCvCameraViewListener(this);

        AssetManager assetManager = getAssets();
        try {
            InputStream is = assetManager.open("temp.jpg");
            Bitmap bitmap = BitmapFactory.decodeStream(is);
            Bitmap bmp32 = bitmap.copy(Bitmap.Config.ARGB_8888, true);
            temp = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC4);
            Utils.bitmapToMat(bmp32, temp);
        } catch (IOException e) {
            e.printStackTrace();
        }
        try {
            InputStream isTattoo = assetManager.open("tattoo2.png");
            Bitmap bitmapTattoo = BitmapFactory.decodeStream(isTattoo);
            Bitmap bmp32Tattoo = bitmapTattoo.copy(Bitmap.Config.ARGB_8888, true);
            tattoo = new Mat(bitmapTattoo.getHeight(), bitmapTattoo.getWidth(), CvType.CV_8UC4);
            Utils.bitmapToMat(bmp32Tattoo, tattoo);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (javaCameraView != null) {
            javaCameraView.disableView();
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (javaCameraView != null) {
            javaCameraView.disableView();
        }
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (OpenCVLoader.initDebug()) {
            Log.i(TAG, "OpenCV Loaded successfully ! ");
            mLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        } else {
            Log.i(TAG, "OpenCV not loaded ! ");
            OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, this, mLoaderCallback);
        }
    }

    @Override
    public void onCameraViewStarted(int width, int height) {
        mRgba = new Mat(height, width, CvType.CV_8UC4);
    }

    @Override
    public void onCameraViewStopped() {
        mRgba.release();
    }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        mRgba = inputFrame.rgba();
        augmentation(mRgba.getNativeObjAddr(), temp.getNativeObjAddr(), tattoo.getNativeObjAddr());
        return mRgba;
    }

    public native void augmentation(long matAddrRgba, long tempC, long tattooDesign);
}
```

**native-lib.cpp**

```
// (the original include list was mangled by the forum; <jni.h> and the OpenCV headers are needed)
#include <jni.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

extern "C" {

// Alpha Blending using direct pointer access
Mat& alphaBlendDirectAccess(Mat& alpha, Mat& foreground, Mat& background, Mat& outImage)
{
    int numberOfPixels = foreground.rows * foreground.cols * foreground.channels();

    float* fptr = reinterpret_cast<float*>(foreground.data);
    float* bptr = reinterpret_cast<float*>(background.data);
    float* aptr = reinterpret_cast<float*>(alpha.data);
    float* outImagePtr = reinterpret_cast<float*>(outImage.data);

    int i, j;
    for (j = 0; j < numberOfPixels; ++j, outImagePtr++, fptr++, aptr++, bptr++) {
        *outImagePtr = (*fptr) * (*aptr) + (*bptr) * (1 - *aptr);
    }
    return outImage;
}

Mat& alphaBlend(Mat& foreg, Mat& backgg)
{
    // Read background image
    Mat background = backgg; // cropped frame
    Size sizeBackground = background.size();

    // Read in the png foreground asset file that contains both rgb and alpha information
    // Mat foreGroundImage = imread("foreGroundAssetLarge.png", -1); // resized tattoo
    Mat foreGroundImage = foreg;

    // resize the foreGroundImage to background image size
    resize(foreGroundImage, foreGroundImage, Size(sizeBackground.width, sizeBackground.height));

    Mat bgra[4];
    split(foreGroundImage, bgra); // split png foreground

    // Save the foreground RGB content into a single Mat
    vector<Mat> foregroundChannels;
    foregroundChannels.push_back(bgra[0]);
    foregroundChannels.push_back(bgra[1]);
    foregroundChannels.push_back(bgra[2]);
    Mat foreground = Mat::zeros(foreGroundImage.size(), CV_8UC3);
    merge(foregroundChannels, foreground);

    // Save the alpha information into a single Mat
    vector<Mat> alphaChannels;
    alphaChannels.push_back(bgra[3]);
    alphaChannels.push_back(bgra[3]);
    alphaChannels.push_back(bgra[3]);
    Mat alpha = Mat::zeros(foreGroundImage.size(), CV_8UC3);
    merge(alphaChannels, alpha);

    // Convert Mat to float data type
    foreground.convertTo(foreground, CV_32FC3);
    background.convertTo(background, CV_32FC3);
    alpha.convertTo(alpha, CV_32FC3, 1.0 / 255); // keeps the alpha values between 0 and 1

    // Number of iterations to average the performance over
    int numOfIterations = 1; // 1000;

    // Alpha blending using direct Mat access with for loop
    Mat outImage = Mat::zeros(foreground.size(), foreground.type());
    for (int i = 0; i < numOfIterations; i++) {
        // (loop body and the end of this function were lost in the forum formatting;
        //  presumably alphaBlendDirectAccess is called here and the blended image
        //  is converted back to 8-bit and returned)
    }
}

// (the original signature line was lost in the forum formatting; reconstructed from how
//  the function is called and which names its body uses)
Mat applyHomography(Mat& convertedOutImage, Mat& initialFrame, int startX, int startY, int endX, int endY)
{
    // Read in the image.
    Mat im_src = convertedOutImage;
    Size size = im_src.size();

    // Create a vector of points.
    vector<Point2f> pts_src;
    pts_src.push_back(Point2f(0, 0));
    pts_src.push_back(Point2f(size.width - 1, 0));
    pts_src.push_back(Point2f(size.width - 1, size.height - 1));
    pts_src.push_back(Point2f(0, size.height - 1));

    // Destination image
    Mat im_dst = initialFrame;
    vector<Point2f> pts_dst;
    pts_dst.push_back(Point2f(startX, startY));
    pts_dst.push_back(Point2f(endX, startY));
    pts_dst.push_back(Point2f(endX, endY));
    pts_dst.push_back(Point2f(startX, endY));

    Mat im_temp = im_dst.clone();

    // Calculate Homography between source and destination points
    Mat h = findHomography(pts_src, pts_dst);

    // Warp source image
    warpPerspective(im_src, im_temp, h, im_dst.size());

    // Black out polygonal area in destination image.
    fillConvexPoly(im_dst, pts_dst, Scalar(0), LINE_AA);

    // Add warped source image to destination image.
    im_dst = im_dst + im_temp;

    return im_dst;
}

JNIEXPORT void JNICALL Java_com_example_inkmastertest_MainActivity_augmentation(JNIEnv *env, jobject, jlong addrRgba, jlong tempC, jlong tattooDesign);

JNIEXPORT void JNICALL Java_com_example_inkmastertest_MainActivity_augmentation(JNIEnv *env, jobject, jlong addrRgba, jlong tempC, jlong tattooDesign)
{
    Mat& img = *(Mat*) addrRgba;
    Mat target_img = img.clone();
    Mat& template1 = *(Mat*) tempC;
    Mat& tattooDes = *(Mat*) tattooDesign;

    // Contains the description of the match
    typedef struct Match_desc {
        bool init;
        double maxVal;
        Point maxLoc;
        double scale;
        Match_desc() : init(0) {}
    } Match_desc;

    Mat template_mat;
    template_mat = template1;                              // Read image
    cvtColor(template_mat, template_mat, COLOR_BGR2GRAY);  // Convert to Gray
    Canny(template_mat, template_mat, 50, 50 * 4);         // Find edges

    // Find size
    int tW, tH;
    tW = template_mat.cols;
    tH = template_mat.rows;

    Mat target_gray, target_resized, target_edged;
    cvtColor(target_img, target_gray, COLOR_BGR2GRAY);     // Convert to Gray

    const float SCALE_START = 1;
    const float SCALE_END = 0.2;
    const int SCALE_POINTS = 20;

    Match_desc found;
    for (float scale = SCALE_START; scale >= SCALE_END; scale -= (SCALE_START - SCALE_END) / SCALE_POINTS) {
        resize(target_gray, target_resized, Size(0, 0), scale, scale); // Resize

        // Break if target image becomes smaller than template
        if (tW > target_resized.cols || tH > target_resized.rows) break;

        Canny(target_resized, target_edged, 50, 50 * 4);   // Find edges

        // Match template
        Mat result;
        matchTemplate(target_edged, template_mat, result, TM_CCOEFF);
        double maxVal;
        Point maxLoc;
        minMaxLoc(result, NULL, &maxVal, NULL, &maxLoc);

        // If better match found
        if (found.init == false || maxVal > found.maxVal) {
            found.init = true;
            found.maxVal = maxVal;
            found.maxLoc = maxLoc;
            found.scale = scale;
        }
    }

    int startX, startY, endX, endY;
    startX = found.maxLoc.x / found.scale;
    startY = found.maxLoc.y / found.scale;
    endX = (found.maxLoc.x + tW) / found.scale;
    endY = (found.maxLoc.y + tH) / found.scale;

    // draw a bounding box around the detected result and display the image
    rectangle(target_img, Point(startX, startY), Point(endX, endY), Scalar(0, 0, 255), 3);

    Rect myROI(startX, startY, endX, endY);
    Mat cropped = target_img(myROI);
    Mat alphaBlended = alphaBlend(tattooDes, cropped);
    Mat homographyApplied = applyHomography(alphaBlended, target_img, startX, startY, endX, endY);
    img = homographyApplied;
}

}
```

----------

***It would be better if I could skip the homography, but I don't know how to alpha blend images of two different sizes.*** My expected output is to show the input PNG (tattoo2.png) on the detected template area. I would be most grateful if you could help me with this. Kindly let me know if I need to mention anything else.