Channel: OpenCV Q&A Forum - RSS feed
Viewing all 41027 articles

Finger segmentation in fingerphoto

I want to segment the finger and zoom in on the fingertip in a **fingerphoto**. Is there any API to do this?

how to solve problem of low fps in OpenCV

I took this sample code, but I get low FPS while it runs:

```cpp
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>

using namespace std;
using namespace cv;

/** Function prototype */
void detectAndDisplay(Mat frame);

/** Global variables */
String face_cascade_name = "C:\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt.xml";
String eyes_cascade_name = "C:\\opencv\\sources\\data\\haarcascades\\haarcascade_eye.xml";
String nose_cascade_name = "C:\\opencv\\sources\\data\\haarcascades\\haarcascade_mcs_nose.xml";
CascadeClassifier face_cascade;
CascadeClassifier eyes_cascade;
CascadeClassifier nose_cascade;
string window_name = "Capturando - Rosto detectado";
RNG rng(12345);

/** @function main */
int main(int argc, const char** argv)
{
    VideoCapture vcap(0);
    if (!vcap.isOpened()) { cout << "Erro ao abrir o video" << endl; return -1; }
    Mat frame;

    //-- 1. Load the cascade files used to detect the face
    if (!face_cascade.load(face_cascade_name)) { printf("--(!)Erro ao carregar\n"); return -1; }
    if (!eyes_cascade.load(eyes_cascade_name)) { printf("--(!)Erro ao carregar\n"); return -1; }
    if (!nose_cascade.load(nose_cascade_name)) { printf("--(!)Erro ao carregar\n"); return -1; }

    //-- 2. Read the video stream
    while (true)
    {
        vcap >> frame;
        //-- 3. Apply the classifier to the captured frame
        if (!frame.empty()) { detectAndDisplay(frame); }
        else { printf(" --(!) No captured frame -- Break!"); break; }
        int c = waitKey(1);
        if ((char)c == 'c') { break; }
    }
    return 0;
}

/** @function detectAndDisplay */
void detectAndDisplay(Mat frame)
{
    std::vector<Rect> faces;
    Mat frame_gray;

    cvtColor(frame, frame_gray, COLOR_BGR2GRAY);
    equalizeHist(frame_gray, frame_gray);

    //-- Detect faces
    face_cascade.detectMultiScale(frame_gray, faces, 1.1, 2, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));

    for (size_t i = 0; i < faces.size(); i++)
    {
        Point center(faces[i].x + faces[i].width*0.5, faces[i].y + faces[i].height*0.5);
        rectangle(frame, Point(faces[i].x, faces[i].y),
                  Point(faces[i].x + faces[i].width, faces[i].y + faces[i].height),
                  Scalar(255, 0, 0), 3, 8);
        Mat faceROI = frame_gray(faces[i]);
        vector<Rect> eyes;
        vector<Rect> noses;

        //-- Detect eyes and nose inside the face ROI
        eyes_cascade.detectMultiScale(faceROI, eyes, 1.1, 2, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));
        for (size_t j = 0; j < eyes.size(); j++)
        {
            Point center(faces[i].x + eyes[j].x + eyes[j].width*0.5, faces[i].y + eyes[j].y + eyes[j].height*0.5);
            int radius = cvRound((eyes[j].width + eyes[j].height)*0.25);
            circle(frame, center, radius, Scalar(255, 0, 0), 4, 8, 0);
        }
        nose_cascade.detectMultiScale(faceROI, noses, 1.1, 2, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));
        for (size_t j = 0; j < noses.size(); j++)
        {
            Point center(faces[i].x + noses[j].x + noses[j].width*0.5, faces[i].y + noses[j].y + noses[j].height*0.5);
            int radius = cvRound((noses[j].width + noses[j].height)*0.25);
            circle(frame, center, radius, Scalar(0, 0, 255), 4, 8, 0);
        }
    }
    //-- Show what you got
    imshow(window_name, frame);
}
```

While it is running, I get these errors or warnings:

```
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\videoio\src\videoio_registry.cpp (187) cv::`anonymous-namespace'::VideoBackendRegistry::VideoBackendRegistry VIDEOIO: Enabled backends(7, sorted by priority): FFMPEG(1000); GSTREAMER(990); INTEL_MFX(980); MSMF(970); DSHOW(960); CV_IMAGES(950); CV_MJPEG(940)
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\videoio\src\backend_plugin.cpp (340) cv::impl::getPluginCandidates Found 2 plugin(s) for GSTREAMER
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\videoio\src\backend_plugin.cpp (172) cv::impl::DynamicLib::libraryLoad load C:\opencv\build\x64\vc15\bin\opencv_videoio_gstreamer411_64.dll => FAILED
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\videoio\src\backend_plugin.cpp (172) cv::impl::DynamicLib::libraryLoad load opencv_videoio_gstreamer411_64.dll => FAILED
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\core\src\ocl.cpp (888) cv::ocl::haveOpenCL Initialize OpenCL runtime...
```

Undesired Hierarchy Result

Hi to all,

My aim is to classify contours which have no children and no parent. I found my contours with the `CV_RETR_TREE` hierarchy mode. To classify them, I wrote this if statement:

```cpp
if (hierarchy[i][3] == -1 && hierarchy[i][2] == -1)
```

Source image:

![Original Image](/upfiles/15651677449923833.png)

However, it does not classify the contours I wanted. In the image below, the red circle marks the contour type I would like to classify:

![Source image](/upfiles/15651640327836724.png)

As far as I can see this contour has no child and no parent, but when I checked its hierarchy entry I got:

```
[121, 76, 119, -1]
```

It reports a child (contour 119) even though, as far as I can see, there isn't one. What do you think the problem might be? Thank you!
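For reference, each hierarchy entry produced with `RETR_TREE` has the form `[next, previous, first_child, parent]`, so "no child and no parent" is exactly the test `hierarchy[i][2] == -1 && hierarchy[i][3] == -1`. A minimal sketch of that filter, using plain Python lists in place of the `(1, N, 4)` NumPy array `cv2.findContours` returns:

```python
def standalone_contours(hierarchy):
    """Indices of contours with no first child (index 2) and no parent (index 3).

    `hierarchy` is a list of [next, previous, first_child, parent] rows,
    i.e. the inner array of cv2.findContours' hierarchy output.
    """
    return [i for i, h in enumerate(hierarchy) if h[2] == -1 and h[3] == -1]

# The entry from the question: first_child is 119, so this contour is
# rejected -- consistently with the stored hierarchy, if not with the eye.
example = [[121, 76, 119, -1]]
print(standalone_contours(example))  # → []
```

A contour that looks childless but reports a child often contains a tiny inner contour created by one-pixel holes after thresholding; drawing contour 119 on the image (e.g. with `cv2.drawContours`) usually reveals it.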

Opencv4 undefined types

After installing opencv4 dynamically using cmake:

```
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=~/Libs/opencv/dynamic ..
```

I tried to test it on a simple example:

```cpp
#include
#include
#include
#include

int main(int argc, char **argv)
{
    if (argc != 2) {
        std::cout << "usage: DisplayImage.out \n" << std::endl;
        return -1;
    }
    cv::Mat image = cv::imread(std::string{argv[1]});
    if (!image.data) {
        printf("No image data \n");
        return -1;
    }
    cv::rectangle(image, cv::Point(0, 0), cv::Point(112, 112), cv::Scalar(0, 255, 0), 10, cv::LINE_4);
    cv::putText(image, "Opencv", cv::Point(0, 112), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, cv::Scalar(255, 0, 225));
    cv::imshow("Opencv 4", image);
    cv::waitKey(0);
    return 0;
}
```

I get the following errors:

```
error : use of undeclared identifier 'image'
error : no member named 'Point' in namespace 'cv'
error : calling 'waitKey' with incomplete return type 'cv::CV_EXPORTS_W'
```

Basically there are forward declarations of the types, but they do not seem to be defined anywhere.

How to get the degree of a vector?

I am trying to calculate the angle of vectors in degrees, but I don't know where the origin of coordinates is when calculating the angle. So far, I use something like this:

```cpp
degree = 180 * (atan2(vector.back().y - vector.front().y, vector.back().x - vector.front().x)) / M_PI;
```

But the result is always positive, even if the `x` or `y` values are decreasing. My questions are:

- Where is the origin and orientation of the angular 2D plane? Is it the top-left corner?
- How can I correctly calculate the angle of a vector?
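For reference, OpenCV image coordinates put the origin at the top-left corner with `y` increasing downward, and `atan2(dy, dx)` is signed in `(-180°, 180°]`, so the formula above already produces negative angles when `dy < 0`. A small sketch in plain Python (with `p_front`/`p_back` standing in for `vector.front()`/`vector.back()`):

```python
import math

def vector_angle_deg(p_front, p_back):
    """Signed angle of the vector p_front -> p_back, in degrees (-180, 180].

    In image coordinates the origin is the top-left corner and y grows
    downward, so a positive angle means the vector points 'down' on screen.
    """
    dx = p_back[0] - p_front[0]
    dy = p_back[1] - p_front[1]
    return math.degrees(math.atan2(dy, dx))

print(vector_angle_deg((0, 0), (1, 1)))   # ≈ 45.0  (down-right on screen)
print(vector_angle_deg((0, 0), (1, -1)))  # ≈ -45.0 (up-right on screen)
```

If the result is always positive, the usual culprits are swapped front/back points, or an `abs`/`fmod` normalization applied to the angle somewhere downstream.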

Image recognition with a single reference picture

Hey all! After building my app using Vuforia (great stuff, until you have to pay, haha), I was wondering if it is possible to do picture (not facial or object) recognition without a dataset. I know my way around basic programming and am eager to dive in a bit deeper, but I would like some pointers to start. Looking at the data that Vuforia generates from a single image (multiple recognition points based on light spots, to my untrained eye), how would one do the same using OpenCV? Cheers!

Compare segmentation result to ground truth

I have some contours which represent certain objects found with `cv2.findContours()` in a segmentation result. On the other hand, I also have the ground truth data in the same format (polygon points in `[x, y]`). I'm now looking for the best way to measure the intersection/overlap of both polygons, to measure how well the segmentation performed. Thank you in advance!

-- Update --

To add some context, here's the data from one contour found with `cv2.findContours()`:

```
[[[1086 603]],[[1085 605]],[[1078 605]],[[1076 606]],[[1073 606]],[[1071 608]],[[1068 608]],[[1066 610]],[[1065 610]],[[1061 6,3]],[[1060 613]],[[1060 615]],[[1058 616]],[[1058 625]],[[1060 626]],[[1060 636]],[[1061 638]],[[1061 650]],[[1063 651]],[[1063 658]],[[1061 660]],[[1063 661]],[[1063 671]],[[1065 673]],[[1065 676]],[[1066 678]],[[1066 681]],[[1068 683]],[[1068 701]],[[1070 703]],[[1070 713]],[[1075 718]],[[1078 718]],[[1080 720]],[[1101 720]],[[1103 718]],[[1106 718]],[[1110 715]],[[1111 715]],[[1115 711]],[[1116 711]],[[1118 710]],[[1118 708]],[[1120 706]],[[1120 703]],[[1121 701]],[[1121 683]],[[1123 681]],[[1123 670]],[[1121 668]],[[1121 615]],[[1120 613]],[[1120 611]],[[1118 610]],[[1113 610]],[[1111 608]],[[1108 608]],[[1106 606]],[[1103 606]],[[1101 605]],[[1088 605]]]
```

And this is the ground truth data:

```
[[1054, 625], [1070, 719], [1084, 726], [1112, 724], [1125, 716], [1126, 631], [1128, 619], [1132, 610], [1127, 603], [1118, 602], [1107, 600], [1090, 600], [1074, 603], [1059, 607], [1049, 614], [1051, 620]]
```

(Ignore the extra nesting in the first array; this is just to give an example.)
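One common metric for this is intersection over union (IoU). A hedged sketch of the mask-based approach: rasterize each polygon into a binary mask of the image size (with OpenCV that would be `cv2.fillPoly`, shown only as comments here), then IoU is just a count of overlapping pixels. The counting step, demonstrated on plain Python pixel sets:

```python
def iou(pixels_a, pixels_b):
    """Intersection over union of two regions given as sets of (x, y) pixels.

    With OpenCV/NumPy available, the pixel counts would come from masks:
        mask = np.zeros(image_shape, np.uint8)
        cv2.fillPoly(mask, [np.asarray(polygon, np.int32)], 255)
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
    """
    inter = len(pixels_a & pixels_b)
    union = len(pixels_a | pixels_b)
    return inter / union if union else 0.0

# Toy example: two 2x2 squares overlapping in a 1x2 strip.
a = {(x, y) for x in range(0, 2) for y in range(0, 2)}
b = {(x, y) for x in range(1, 3) for y in range(0, 2)}
print(iou(a, b))  # → 0.3333... (2 shared pixels / 6 total)
```

The mask route handles concave contours and self-intersections gracefully; for a purely geometric answer, a polygon-clipping library (e.g. Shapely's `intersection`/`union`) is the usual alternative.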

Opencv4 static linking

I built OpenCV statically:

```
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=~/Libs/opencv/static .. \
    -DBUILD_SHARED_LIBS=OFF \
    -DBUILD_LIST="highgui,imgproc,imgcodecs"
```

After linking a test example, I get the following errors:

```
opencl_core.cpp:-1: error : undefined reference to `dlopen'
opencl_core.cpp:-1: error : undefined reference to `dlsym'
opencv/static/lib/libopencv_core.a(persistence.cpp.o):-1: In function `cv::FileStorage::Impl::release(std::__cxx11::basic_string, std::allocator>*)': file not found
persistence.cpp:-1: error : undefined reference to `gzclose'
```

And many other similar undefined references. I do not use OpenCL at all. My example code is the following:

```cpp
#include
#include
#include
#include

int main(int argc, char **argv)
{
    if (argc != 2) {
        std::cout << "usage: DisplayImage.out \n" << std::endl;
        return -1;
    }
    cv::Mat image = cv::imread(std::string{argv[1]});
    if (!image.data) {
        printf("No image data \n");
        return -1;
    }
    cv::rectangle(image, cv::Point(0, 0), cv::Point(112, 112), cv::Scalar(0, 255, 0), 10, cv::LINE_4);
    cv::putText(image, "Opencv", cv::Point(0, 112), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, cv::Scalar(255, 0, 225));
    cv::imshow("Opencv 4", image);
    cv::waitKey(0);
    return 0;
}
```

Android 3.4.6 build from source

I'm going crazy trying to build an Android version from scratch again. It seems no matter what hoops I jump through or what old Google tools I find, there is always another error right around the corner. Is there a pre-built VM that has all these prerequisites already satisfied? I thought I had it, but now cmake is telling me "Android: List of installed Android targets is empty", and that's probably because I had to go find version 25.2 of the SDK tools, since it didn't like the latest. I tried building 4.x a while ago and couldn't get that to work either. Just crazy frustrating all around; I've been at it for hours.

Crop a Region of Interest; RotatedRect

```cpp
void detect_text(string input)
{
    Mat large = imread(input);
    Mat rgb;
    // downsample and use it for processing
    pyrDown(large, rgb);
    pyrDown(rgb, rgb);
    Mat small;
    cvtColor(rgb, small, CV_BGR2GRAY);
    // morphological gradient
    Mat grad;
    Mat morphKernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    morphologyEx(small, grad, MORPH_GRADIENT, morphKernel);
    // binarize
    Mat bw;
    threshold(grad, bw, 0.0, 255.0, THRESH_BINARY | THRESH_OTSU);
    // connect horizontally oriented regions
    Mat connected;
    morphKernel = getStructuringElement(MORPH_RECT, Size(9, 1));
    morphologyEx(bw, connected, MORPH_CLOSE, morphKernel);
    // find contours
    Mat mask = Mat::zeros(bw.size(), CV_8UC1);
    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    findContours(connected, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
    // filter contours
    for (int idx = 0; idx >= 0; idx = hierarchy[idx][0])
    {
        Rect rect = boundingRect(contours[idx]);
        Mat maskROI(mask, rect);
        maskROI = Scalar(0, 0, 0);
        // fill the contour
        drawContours(mask, contours, idx, Scalar(255, 255, 255), CV_FILLED);
        RotatedRect rrect = minAreaRect(contours[idx]);
        double r = (double)countNonZero(maskROI) / (rrect.size.width * rrect.size.height);
        Scalar color;
        int thickness = 1;
        // assume at least 25% of the area is filled if it contains text
        if (r > 0.25 &&
            (rrect.size.height > 8 && rrect.size.width > 8) // constraints on region size
            // these two conditions alone are not very robust. better to use something
            // like the number of significant peaks in a horizontal projection as a third condition
            )
        {
            thickness = 2;
            color = Scalar(0, 255, 0);
        }
        else
        {
            thickness = 1;
            color = Scalar(0, 0, 255);
        }
        Point2f pts[4];
        rrect.points(pts);
        for (int i = 0; i < 4; i++)
        {
            line(rgb, Point((int)pts[i].x, (int)pts[i].y),
                 Point((int)pts[(i+1)%4].x, (int)pts[(i+1)%4].y), color, thickness);
        }
    }
    imwrite("contdd.jpg", rgb);
}
```

Hi, I have a question about cropping an image with this square-detection/text-recognition function applied to it. Here is the result image: ![image description](/upfiles/15262904184186885.jpg). However, I want to save the text wrapped within each green square individually, like this: ![image description](/upfiles/15262905585196256.png). Can anyone please help me with this?

How can I load multiple images and add bitwise operation and save the images in computer

I am trying to load 600 images from my computer, apply a bitwise operation to each one, and save the resulting images back to disk.
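A minimal sketch of the loop, assuming the 600 images sit in one folder and that `cv2.bitwise_not` is the operation wanted (swap in `cv2.bitwise_and`/`cv2.bitwise_or` with a mask as needed). The path handling runs as-is; the OpenCV calls are left as comments since the folder names here are hypothetical:

```python
from pathlib import Path

def output_path(in_path, out_dir):
    """Mirror an input filename into the output directory."""
    return Path(out_dir) / Path(in_path).name

def process_folder(in_dir, out_dir):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for p in sorted(Path(in_dir).glob("*.png")):  # adjust the glob for .jpg etc.
        dst = output_path(p, out_dir)
        # img = cv2.imread(str(p))
        # result = cv2.bitwise_not(img)   # the bitwise operation of choice
        # cv2.imwrite(str(dst), result)

print(output_path("in/img_001.png", "out"))  # → out/img_001.png (on POSIX)
```

Writing results to a separate directory (rather than overwriting in place) makes the batch easy to re-run while tuning the operation.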

opencv imwrite unable to save images >40MP

Hi, I have a camera capable of 41.5 MP. When I try to save an image from the camera, the file gets created, only the image is 0 KB in size, and no error is caught. If I save only a region of the image, then it is saved correctly (as in the code below). Is there some limit to image size above which an image cannot be saved? (I am using OpenCV 4.0.1 on Windows 10 with VC++ 17.)

```cpp
cv::Mat im = grabCVImage();
cv::Rect roi;
int s = 1200;
roi.x = s;
roi.y = s;
roi.width = s;
roi.height = s;
try {
    imwrite("C:\\Users\\Watts\\Pictures\\alpha.png", im(roi));
}
catch (std::runtime_error& ex) {
    fprintf(stderr, "Exception converting image to PNG format: %s\n", ex.what());
    return 1;
}
```

can't find libpthread_nonshared.a

I am trying to build OpenCV from source. I ran the cmake instructions, and then when I run the make command, it reaches 11% and shows the following error:

```
/usr/bin/ld: cannot find /usr/lib/libpthread_nonshared.a
collect2: error: ld returned 1 exit status
make[2]: *** [modules/core/CMakeFiles/opencv_core.dir/build.make:1435: lib/libopencv_core.so.3.4.4] Error 1
make[1]: *** [CMakeFiles/Makefile2:2342: modules/core/CMakeFiles/opencv_core.dir/all] Error 2
make: *** [Makefile:163: all] Error 2
(base) [hassanalsamahi@D-Link build]$
```

Any help solving this problem would be appreciated.

Scaling intrinsic matrix between two image resolutions

I have two camera images, A (480x640) and B (1080x1920). A and B show the same scene, just at different resolutions. I have the intrinsic matrix for image A and run `solvePnP` on this image to calculate a pose for the camera. I then send this pose to Unity for display. The background image in Unity is image B, and my marker is close but always off-centre. To my knowledge, the intrinsic values for the two camera images aren't quite the same, and this is why my marker is off from a translation perspective. If I manually offset the principal point, I can correct the issue by hand and position the marker perfectly. How can I modify the intrinsic values so that the output of `solvePnP` results in a pose suited for display on image B?
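For reference, the intrinsics scale linearly with the image: multiply `fx` and `cx` by the width ratio and `fy` and `cy` by the height ratio (strictly, the half-pixel-centre convention adds a `(c + 0.5) * s - 0.5` correction to the principal point, which is negligible at these scales). Note that 640x480 and 1920x1080 have different aspect ratios, so if B was produced by cropping as well as resizing, the crop offset must also be subtracted from the principal point. A sketch with made-up intrinsics (assuming zero skew and a pure resize):

```python
def scale_intrinsics(K, size_a, size_b):
    """Scale a 3x3 intrinsic matrix (nested lists) from resolution
    size_a = (w, h) to size_b = (w, h). Assumes B is a pure resize of A."""
    sx = size_b[0] / size_a[0]
    sy = size_b[1] / size_a[1]
    return [
        [K[0][0] * sx, 0.0,          K[0][2] * sx],
        [0.0,          K[1][1] * sy, K[1][2] * sy],
        [0.0,          0.0,          1.0],
    ]

# Hypothetical intrinsics for the 640x480 image A:
K_a = [[600.0, 0.0, 320.0],
       [0.0, 600.0, 240.0],
       [0.0, 0.0, 1.0]]
K_b = scale_intrinsics(K_a, (640, 480), (1920, 1080))
print(K_b[0])  # → [1800.0, 0.0, 960.0]   (sx = 3.0)
print(K_b[1])  # → [0.0, 1350.0, 540.0]   (sy = 2.25)
```

The rotation and translation returned by `solvePnP` describe the camera in world units and do not change with resolution; only the intrinsics (and distortion handling) need rescaling for display on B.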

iOS OpenCV: Image matching with ORB leads to EXC_BAD_ACCESS

I'm fairly new to OpenCV and I'm trying to match images from the camera feed against provided descriptors for the images that should be matched. However, my Objective-C++ code currently crashes with EXC_BAD_ACCESS, which leads me to believe that something has been released or is not present. I just can't quite find the culprit. I'm using OpenCV 4.1.1 for iOS, Xcode 10.3 and iOS 12.4. I'm using Objective-C++ to talk to OpenCV, and Swift for all UI-related things and the rest.

Here's what I've done so far. I've set up the included CvVideoCamera, and the camera image is displayed fine inside a UIImageView (since this works fine with simple template matching, I'll spare you that code). In my OpenCVWrapper.mm, which is also the CvVideoCamera's delegate, I do the following in `processImage:(cv::Mat&)image`:

```cpp
CV_Assert(!image.empty());
cv::ORB *orb = cv::ORB::create();
cv::Mat descriptors, mask;
std::vector<cv::KeyPoint> points;
CV_Assert(!orb->empty());
orb->detectAndCompute(image, mask, points, descriptors);

cv::DescriptorMatcher *matcher = cv::DescriptorMatcher::create(cv::BFMatcher::BRUTEFORCE);
std::vector<std::vector<cv::DMatch>> knnMatches;
NSArray *targetDescriptors = [[self delegate] descriptors];
for (SourceDescriptor *source in targetDescriptors) {
    matcher->knnMatch(descriptors, source.mat.mat, knnMatches, 2);
    float ratioThreshold = 0.f;
    NSMutableArray *innerDistances = [@[] mutableCopy];
    for (NSInteger counter = 0; counter < knnMatches.size(); counter++) {
        const cv::DMatch& bestMatch = knnMatches[counter][0];
        const cv::DMatch& betterMatch = knnMatches[counter][1];
        float finalDistance = bestMatch.distance / betterMatch.distance;
        if (finalDistance <= ratioThreshold) {
            [innerDistances addObject:[NSNumber numberWithFloat:finalDistance]];
        }
    }
    if ([innerDistances count] > 0) {
        NSSortDescriptor *highestToLowest = [NSSortDescriptor sortDescriptorWithKey:@"self" ascending:NO];
        [innerDistances sortUsingDescriptors:[NSArray arrayWithObject:highestToLowest]];
        Result *leresult = [[Result alloc] init];
        NSNumber *score = (NSNumber *)[innerDistances objectAtIndex:0];
        [leresult setScore:[score floatValue]];
        [leresult setSourceDescriptor:source];
        [result addObject:leresult];
    }
}
```

My code crashes at `orb->detectAndCompute()` with an EXC_BAD_ACCESS. As you can see from the code, I'm checking whether `orb` or `image` are empty, which isn't the case. Any help would be greatly appreciated. If any more information needs to be provided, I will gladly do so.
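Two things in the code above are worth noting independently of the crash site. First, `cv::ORB::create()` returns a `cv::Ptr<ORB>`; if that temporary smart pointer is unwrapped into a raw `cv::ORB *` (e.g. via `.get()`), the detector is freed as soon as the temporary dies, which is a classic source of EXC_BAD_ACCESS on the next call, so the result should be kept in a `cv::Ptr<ORB>`. Second, `ratioThreshold = 0.f` rejects every match: Lowe's ratio test normally keeps a match when `best / second_best` is below roughly 0.7-0.8. The ratio-test logic, sketched in plain Python on the distances alone:

```python
def ratio_test(knn_matches, ratio_threshold=0.75):
    """Lowe's ratio test on k=2 knnMatch results.

    knn_matches: list of (best_distance, second_best_distance) pairs.
    Keeps a match when best / second_best is below the threshold,
    i.e. the best match is clearly better than the runner-up.
    """
    kept = []
    for best, second in knn_matches:
        if second > 0 and best / second < ratio_threshold:
            kept.append(best / second)
    return kept

matches = [(10.0, 40.0), (30.0, 32.0), (5.0, 100.0)]
print(ratio_test(matches))  # → [0.25, 0.05]
```

The 0.75 default is the commonly quoted starting point; with a threshold of 0.0 the kept list is always empty, which would make the `[innerDistances count] > 0` branch unreachable even once the crash is fixed.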

VideoWriter with gstreamer pipeline which writes frames in I420 format

I have frames in YUV_I420 format and want to write a correct pipeline for a `VideoWriter` which processes them and writes to a file. Frame shape in BGR format: width=1280, height=720. The YUV_I420 frame shape is width=1280, height=1080, and this is the frame I want to write.

Here is the version of the pipeline which *does not work*:

```python
gstreamer_pipeline = (
    "appsrc caps=video/x-raw,format=I420,width=1280,height=720,framerate=25/1 ! "
    "videoconvert ! video/x-raw,format=I420 ! x264enc ! mp4mux ! filesink location=res.mp4")
writer = cv2.VideoWriter(gstreamer_pipeline, cv2.CAP_GSTREAMER, 25, (1280, 1080), True)
writer.write(frame_I420)
```

I experimented with different input shapes (width, height) in `gstreamer_pipeline` and in `VideoWriter`, and still had no success. Writing frames in BGR format *works just fine*:

```python
gstreamer_pipeline = (
    "appsrc caps=video/x-raw,format=BGR,width=1280,height=720,framerate=25/1 ! "
    "videoconvert ! video/x-raw,format=I420 ! x264enc ! mp4mux ! filesink location=res.mp4")
writer = cv2.VideoWriter(gstreamer_pipeline, cv2.CAP_GSTREAMER, 25, (1280, 720), True)
writer.write(frame_BGR)
```

But I really **want to write frames in I420 format**, without a prior conversion from I420 to BGR. What am I missing? What modifications to the pipeline would you suggest? Is the I420 format supported as an input for appsrc in OpenCV? What are the alternatives?
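For context on the shapes involved: a planar I420 frame of video size width x height is stored as a single-channel buffer of height x 3/2 rows (the full-size Y plane plus two quarter-size chroma planes), which is exactly why a 1280x720 video arrives as a 1280x1080 array. That means the caps, the `VideoWriter` frame size, and the `isColor` flag all have to agree. A hedged sketch; the shape helper runs as-is, while the writer lines are untested and whether appsrc accepts a raw I420 buffer this way depends on the OpenCV/GStreamer build:

```python
def i420_buffer_size(width, height):
    """(rows, cols) of the single-channel uint8 buffer holding one I420 frame:
    full-size Y plane + two (w/2 x h/2) chroma planes = h * 3/2 rows."""
    assert width % 2 == 0 and height % 2 == 0
    return (height * 3 // 2, width)

# For a 1280x720 video this matches the 1280x1080 frame in the question:
print(i420_buffer_size(1280, 720))  # → (1080, 1280)

# Sketch of the writer (requires OpenCV built with GStreamer):
pipeline = (
    "appsrc caps=video/x-raw,format=I420,width=1280,height=720,framerate=25/1 ! "
    "x264enc ! mp4mux ! filesink location=res.mp4")
# writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 25, (1280, 720), False)
# writer.write(frame_I420)  # frame_I420: (1080, 1280) uint8, single channel
```

The key differences from the failing attempt are `isColor=False` (the last `VideoWriter` argument), so OpenCV treats the frame as single-channel rather than expecting BGR, and passing the *video* size (1280, 720) rather than the buffer size. Treat this as a starting point rather than a guaranteed recipe.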

Implement and train YOLO 3 with Opencv and C++

I want to implement and train YOLOv3 on my own dataset using OpenCV and C++, but I can't find an example to start with, or a tutorial explaining how to train YOLO with my own data; all the tutorials I found are in Python and don't use OpenCV. Do you have any example, or an explanation of how to code an object detector with YOLOv3 in OpenCV with C++? I found this tutorial: https://www.learnopencv.com/deep-learning-based-object-detection-using-yolov3-with-opencv-python-c/ and the code for it is here: https://github.com/spmallick/learnopencv, but I can't find the C++ code; I think it exists only in Python. I need your help, and thank you!

After zooming in image, how to get standard mouse cursor back?

I have an application where I draw lots of rectangles on images in OpenCV 4 using Python. Some of the images are very large, so it helps to zoom in on them before selecting regions to crop. The problem is that when you zoom in on an image, you get the "hand" cursor for interacting with the image, and this completely interferes with the mouse callback function I use to crop images. That is, once you zoom, the mouse cursor is permanently toggled to a hand, and clicking and dragging only translates the position of the image in the window (unless I go back out to full size, but then I can't draw rectangles on the zoomed-in image). So my question is: once I've zoomed in, how can I get back to the standard mouse cursor so I can use my standard mouse-button events to draw/drag rectangles?

Here is my mouse-callback function:

```python
def mouse_callback(event, x, y, flags, param):
    global image_to_show, s_x, s_y, e_x, e_y, mouse_pressed
    if event == cv2.EVENT_LBUTTONDOWN:
        mouse_pressed = True
        s_x, s_y = x, y
        image_to_show = np.copy(image)
    elif event == cv2.EVENT_MOUSEMOVE:
        if mouse_pressed:
            image_to_show = np.copy(image)
            cv2.rectangle(image_to_show, (s_x, s_y), (x, y), (255, 255, 255), line_width)
    elif event == cv2.EVENT_LBUTTONUP:
        mouse_pressed = False
        e_x, e_y = x, y
        print(s_x, s_y, e_x, e_y)
```

Note: I have also asked this on Stack Overflow: https://stackoverflow.com/questions/57418021/in-opencv-after-zooming-on-image-how-to-get-back-to-standard-mouse-cursor

64 bit library on Android

As of August 1, 2019, in the Play Store, only applications with 64-bit libraries can be published. I have an App with 32 and 64 bit OpenCV libraries. However, in the Play Store, I can't publish it now because it has 32-bit libraries. Is there any solution?

OpenCV doesn't work with two python versions

The version of Python installed on Ubuntu (16.04.3) was Python 2. I installed Python 3 and then installed the OpenCV library for Python 3, but the result is:

```
>>> import cv2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'cv2'
```

But it is OK in Python 2:

```
>>> import cv2
>>>
```

