Channel: OpenCV Q&A Forum - RSS feed
Viewing all 41027 articles

Sync failed when adding OpenCV as a dependency for my app

Hi, I want to use opencv-4.1.1-android-sdk in my app. My Android Studio version is 3.3.2 and the SDK API level is 28, on Windows. I copied all the libraries under native/libs into my jniLibs folder, then imported a new module by specifying the sdk/java folder, and that succeeded. However, when I open the module settings and add the dependency, I get the following sync error:

    ERROR: Unable to resolve dependency for ':app@debug/compileClasspath': Could not resolve project :opencvjavasupport.
    Affected Modules: app

Any help is very much appreciated. Thanks. YL

Understanding the code

Hi, I am relatively new to coding, so I apologize if my questions seem straightforward to you. I am trying to understand the OpenCV code base so that I can contribute (mainly converting 2D tools to 3D, which would be useful for my machine-learning and medical projects). There is also some extra curiosity, since I like to understand how things work.

1. Take the GaussianBlur method as an example: what happens when I call it from Python? That is, how is the Python code bound to the C++ implementation?
2. If I want to understand the whole GaussianBlur algorithm, how should I browse the C++ sources to find which files are used (methods and also inherited classes)? I am not familiar with navigating C++ projects.
3. This is more of a curiosity question, since I am not familiar with makefiles: at what point is the binding between Python and C++ generated? When I install OpenCV with pip it happens automatically, but I would like to understand the process.

Thanks a lot for your answers! I would appreciate any tutorial; I have googled a lot before asking, of course, but did not find anything that helped on my own.
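As a rough illustration of the GaussianBlur computation asked about above, here is a minimal pure-Python sketch of the normalized 1-D Gaussian kernel that a separable blur convolves along each axis. This mirrors the formula OpenCV documents for `getGaussianKernel`, but it is my own simplified sketch, not OpenCV's actual code:

```python
import math

def gaussian_kernel_1d(ksize, sigma):
    """Normalized 1-D Gaussian kernel of odd length ksize."""
    center = (ksize - 1) / 2.0
    weights = [math.exp(-((i - center) ** 2) / (2.0 * sigma ** 2))
               for i in range(ksize)]
    total = sum(weights)
    return [w / total for w in weights]  # weights sum to 1
```

A separable Gaussian blur applies this kernel along every row, then along every column, which is much cheaper than a full 2-D convolution.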

Obtaining opencv_ffmpeg411_64.dll

Hi there, I have compiled OpenCV 4.1.1 but I cannot find opencv_ffmpeg411_64.dll (as opposed to the similarly named opencv_videoio_ffmpeg411_64.dll) among the compiled files. The build completes successfully, with no errors. Do I need a specific library to build this DLL? Thank you

How to find objectPoints for stereo calibration

I'm trying to do a stereo calibration of two cameras and have a problem finding objectPoints (the first parameter of the `stereoCalibrate` function). As far as I understand, these values are usually obtained from chessboard images, but I use ordinary 3D photos (not of a chessboard) for calibration, for example photos of nature or a street. How do I get objectPoints in my case?
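For reference, with a calibration target the objectPoints are simply the known 3D positions of the pattern corners in the target's own coordinate frame (Z = 0 for a flat chessboard). A minimal sketch of how they are typically generated, assuming a board with `cols` x `rows` inner corners and a known square size (the helper name is mine, not OpenCV's):

```python
def chessboard_object_points(cols, rows, square_size):
    """3D corner coordinates of a flat chessboard, Z = 0, row-major order."""
    return [(x * square_size, y * square_size, 0.0)
            for y in range(rows)
            for x in range(cols)]
```

The same list is passed once per image pair. For arbitrary scenes there is no such known geometry, which is why a target of known physical size is normally required.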

Contour as chain code in OpenCV 4.1.1

Hello, how do I generate a chain code from a contour with OpenCV 4.1.1? Thank you
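A Freeman chain code is easy to derive from the point list that `findContours` returns with `CHAIN_APPROX_NONE`, where consecutive points are 8-connected neighbours. A minimal pure-Python sketch (my own helper, not an OpenCV API; note that direction conventions vary, and here direction 1 means "up-right" in image coordinates, where y grows downward):

```python
# Map the (dx, dy) step between 8-connected neighbours to Freeman
# directions 0..7 (0 = east, counting counter-clockwise with -y as "up").
_FREEMAN = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
            (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def freeman_chain_code(points):
    """Chain code of a contour given as a list of 8-connected (x, y) points."""
    return [_FREEMAN[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
```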

approx=cv2.approxPolyDP(c,0.02,peri,True) TypeError: integer argument expected, got float

```python
import cv2
import imutils
import pytesseract

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

image = cv2.imread("Image path")
image = imutils.resize(image, width=500)
cv2.imshow("Original Image", image)
cv2.waitKey(0)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("1 - Grayscale conversion", gray)
cv2.waitKey(0)

gray = cv2.bilateralFilter(gray, 11, 17, 17)
cv2.imshow("2 - Bilateral Filter", gray)
cv2.waitKey(0)

edged = cv2.Canny(gray, 170, 200)
cv2.imshow("3 - Canny Edge", edged)
cv2.waitKey(0)

cnts, new__ = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
img1 = image.copy()
cv2.drawContours(img1, cnts, -1, (0, 255, 0), 3)
cv2.imshow("4 - All Contours", img1)
cv2.waitKey(0)

cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:30]
NumberPlateCnt = None
img2 = image.copy()
cv2.drawContours(img2, cnts, -1, (0, 255, 0), 3)
cv2.imshow("5 - Top 30 Contours", img2)
cv2.waitKey(0)

count = 0
idx = 7
for c in cnts:
    peri = cv2.arcLength(c, True)
    print(peri)
    approx = cv2.approxPolyDP(c, 0.02, peri, True)  # raises the TypeError in the title
    if len(approx) == 4:
        NumberPlateCnt = approx
        x, y, w, h = cv2.boundingRect(c)
        new_img = image[y:y + h, x:x + w]
        cv2.imwrite("cropped Image-Text/" + str(idx) + '.png', new_img)
        idx += 1
        break
```
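The TypeError in the title comes from the call signature: `cv2.approxPolyDP(curve, epsilon, closed)` takes a single float epsilon, so the fraction and the perimeter must be multiplied together, as in `cv2.approxPolyDP(c, 0.02 * peri, True)`. To illustrate what that epsilon tolerance means, here is a minimal pure-Python sketch of the Ramer-Douglas-Peucker simplification that approxPolyDP implements (my own simplified version, not OpenCV's code):

```python
import math

def _perp_dist(pt, a, b):
    """Perpendicular distance from pt to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:  # degenerate: a == b
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: drop points within epsilon of the chord."""
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:  # farthest point survives; recurse on both halves
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

With a small epsilon, collinear midpoints of a square are removed while the four corners survive, which is exactly why `len(approx) == 4` is used above to spot rectangular plates.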

Frame doesn't receive a FrameGrabber.grab

I'm having an issue where I want to put a FrameGrabber's (camera) image into a Frame variable (frameCapturado). I read the docs, and I think this isn't supposed to fail, because FrameGrabbers work with Frame variables. Any tips to resolve this? ![image description](/upfiles/1571057132131475.png) ![image description](/upfiles/15710571533059327.png) ![image description](/upfiles/15710571713359224.png)

Maximum tablet camera resolution: 1080p giving 640x480

Hello, I have a tablet whose video stream parameters are **1080p 16:9 30fps**, which means the camera resolution is 1920x1080, but when I open the camera with OpenCV it always gives me **640x480**. This is the code I used:

```cpp
VideoCapture capq(0);
if (!capq.isOpened())  // check if we succeeded
{
    return -1;
}
Mat inputImage;
for (;;)
{
    capq >> inputImage;  // get a new frame from camera
    if (waitKey(30) >= 0)
        break;
    while (inputImage.empty())
    {
        std::cout << "Empty frame" << std::endl;
        continue;
    }
    namedWindow("Display window1", WINDOW_AUTOSIZE);
    imshow("Display window1", inputImage);
}
```

I tried the following code to set the camera resolution, but it is not working; it still gives me 640x480:

```cpp
// begin: set camera resolution
int max_width = 1080;
int max_height = 1920;
// Save current resolution
const int current_width = static_cast<int>(capq.get(CAP_PROP_FRAME_WIDTH));
const int current_height = static_cast<int>(capq.get(CAP_PROP_FRAME_HEIGHT));
// Get maximum resolution
capq.set(CAP_PROP_FRAME_WIDTH, 10000);
capq.set(CAP_PROP_FRAME_HEIGHT, 10000);
max_width = static_cast<int>(capq.get(CAP_PROP_FRAME_WIDTH));
max_height = static_cast<int>(capq.get(CAP_PROP_FRAME_HEIGHT));
// Restore resolution
capq.set(CAP_PROP_FRAME_WIDTH, current_width);
capq.set(CAP_PROP_FRAME_HEIGHT, current_height);
cout << "width maximale " << current_width << endl;
cout << "height maximale " << current_height << endl;
// end: set camera resolution
```

Edit 1: I used the code below, and it is working:

```cpp
int max_width = 1080;
int max_height = 1920;
// Save current resolution
const int current_width = static_cast<int>(capq.get(CAP_PROP_FRAME_WIDTH));
const int current_height = static_cast<int>(capq.get(CAP_PROP_FRAME_HEIGHT));
// Get maximum resolution
capq.set(CAP_PROP_FRAME_WIDTH, 1080);
capq.set(CAP_PROP_FRAME_HEIGHT, 1920);
max_width = static_cast<int>(capq.get(CAP_PROP_FRAME_WIDTH));
max_height = static_cast<int>(capq.get(CAP_PROP_FRAME_HEIGHT));
// Restore resolution
capq.set(CAP_PROP_FRAME_WIDTH, max_width);
capq.set(CAP_PROP_FRAME_HEIGHT, max_height);
cout << "width maximale " << current_width << endl;
cout << "height maximale " << current_height << endl;
// end: set camera resolution
```

I need your help, and thank you.

mat.reshape giving me an offset in result

Hi, I have been struggling with the reshape function. I have a big matrix that I convert into a one-column Mat. When I convert the one column back to the original size, the image has an offset and no longer matches the original. Here is my code, more or less (I have simplified it to the part that demonstrates my problem):

**VB.NET code**

```vbnet
frame = my_Original_mat
Dim SingleColumn As Mat = frame.Clone.Reshape(0, frame.Cols * frame.Rows) 'makes it a 1-column mat
Dim Mymat As New Mat
SingleColumn.Col(0).CopyTo(Mymat) 'get the single column back out again
Dim OriginalShape As Mat = Mymat.Reshape(0, frame.Rows)
CvInvoke.Imshow("Original ", frame.Clone)
CvInvoke.Imshow("OriginalShape ", OriginalShape.Clone)
```

![image description](/upfiles/1571063685618789.png) ![image description](/upfiles/15710637115395028.png)

Sorry for the bad images, but I think you can see that the reshaped one is now shifted down. What am I doing wrong? I need a way to simply extract one column at a time from a built-up matrix (I concat into this matrix). Let me know if anything else is needed, and thanks for the help in advance.
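For comparison, the row-major round trip that `Reshape` performs is lossless on a continuous matrix. A pure-Python sketch of the same two steps (helper names are mine):

```python
def to_single_column(mat):
    """Flatten a 2-D list row-major into a list of 1-element rows."""
    return [[v] for row in mat for v in row]

def from_single_column(col, rows):
    """Rebuild a rows x cols 2-D list from the 1-column form."""
    flat = [r[0] for r in col]
    cols = len(flat) // rows
    return [flat[i * cols:(i + 1) * cols] for i in range(rows)]
```

If the equivalent round trip in your code shifts the image, it is worth checking that `frame` is continuous in memory (not an ROI view) and that the row count passed to the second Reshape matches the original.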

ccalib, Multi Camera Calibration always finds too few points

I was following [this](https://docs.opencv.org/master/d2/d1c/tutorial_multi_camera_main.html) tutorial on how to perform a multi-camera calibration. I was able to create a pattern and printed it onto a flat surface. After that I wanted to try the calibration with just one camera and took some photos as mentioned in the guide. This is one sample image: ![Sample image](https://i.imgur.com/vzSz55E.jpg) However, feeding this to MultiCameraCalibration always results in a failure due to too few recognised points. ![scan results](https://i.imgur.com/qH9iXco.jpg) This is an example result from the algorithm. I am using the measured size of the pattern in cm as the input, and I checked that the image list is correct, with the first image being the pattern (which it is).

```cpp
cv::multicalib::MultiCameraCalibration multiCalib(
    cv::multicalib::MultiCameraCalibration::PINHOLE, 1,
    calibrationListFile, 28.75f, 20.0f, 0, minMatches);
```

This is my function call. Even with minMatches set to 1, I'm not able to get a proper solution. Did I misunderstand something? Is this calibration only working with 2+ cameras (which would be pretty obvious)? Did I use the wrong width and height? Regards, Dom!

Which method for object detection at 25 fps, full HD?

So I promised to prototype a model that would do object detection, trained on my own labeled videos, in real time on full-HD video at 25 fps. I have spent quite some time learning Mask R-CNN. Now that the model is running, I have realized that this library is too slow for my usage. I have googled OpenCV, browsed through LearnOpenCV, searched these forums, peeked at the tutorials at opencv.org, etc. I understand that using the DNN module with C++ will let me train my own model and do object detection at some frame rate. Which OpenCV-based method would you choose for training an object detection model to work at 25 fps on full HD?

OpenCV C++ Holistically-Nested Edge Detection

Hello, I am trying to run the pretrained Holistically-Nested Edge Detection model from https://github.com/s9xie/hed in C++. I have found this Python-based example: https://www.pyimagesearch.com/2019/03/04/holistically-nested-edge-detection-with-opencv-and-deep-learning/ and the OpenCV sample in Python: https://github.com/opencv/opencv/blob/master/samples/dnn/edge_detection.py. I also looked up the general approach to custom layers in OpenCV here: https://docs.opencv.org/master/dc/db1/tutorial_dnn_custom_layers.html. But after several hours of trying, I am still not able to get anything to run. Can anyone recommend a good example or tutorial on this matter? Thanks in advance.

Object detection in H.264 videos

Hello, I want to track objects in videos. I understand there are multiple examples; however, it seems all of them provide rectangular coordinates. I would like to get the exact (pixel-level) coordinates of any object. For example, my application needs to replace a ball with another object from a different video, or I may want to change the color of clothes.

Updated OpenCV4Android documentation

The documentation for installing/working with the Android library is INCREDIBLY outdated. If you search for how to install it, you will get a million different methods, all with their own inconsistencies, from building libs that are too large to manually copying libs from the JNI folder. With the new updates to Android Studio, I thought that there would, by now, be a simple, officially endorsed CMake-based solution. It would be incredibly helpful to have such documentation, and I imagine it would not be very difficult to share for someone experienced with integrating the 4.0 libraries. I'm trying desperately to update a project that uses the 2.x libs and an outdated NDK, to support having both the Java functions and native functionality.

Video capture for 2 cameras

Hi, I have a question today: I have 2 cameras and want to use the second one to capture video. I use the same program code and the same USB cable that worked for the first one, but I can't capture video; when I run the program I get this output:

    [ INFO:0] global C:\build\master_winpack-build-win64-vc14\opencv\modules\videoio\src\backend_plugin.cpp (340) cv::impl::getPluginCandidates Found 2 plugin(s) for GSTREAMER
    [ INFO:0] global C:\build\master_winpack-build-win64-vc14\opencv\modules\videoio\src\backend_plugin.cpp (172) cv::impl::DynamicLib::libraryLoad load C:\opencv\build\x64\vc14\bin\opencv_videoio_gstreamer411_64.dll => FAILED
    [ INFO:0] global C:\build\master_winpack-build-win64-vc14\opencv\modules\videoio\src\backend_plugin.cpp (172) cv::impl::DynamicLib::libraryLoad load opencv_videoio_gstreamer411_64.dll => FAILED

And the program code is as below:

```cpp
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    VideoCapture cap(1);
    if (!cap.isOpened())
        return -1;
    while (1)
    {
        Mat frame;
        cap.read(frame);
        imshow("video", frame);
        if (waitKey(30) == 's')
        {
            break;
        }
    }
    return 0;
}
```

I have installed GStreamer, and I can do this with the first camera, but I don't know what this problem is about. Can anyone tell me a solution for this? Thank you.

Can't find the x86 folder in 4.1.2, how do I get it?

My (C++) project is built for 32-bit and I need the OpenCV lib for 32-bit, but I can't find it after extraction, so I tried with CMake. I am an absolute noob with CMake; I used "Visual Studio 15 2017" as the generator and can't find any generated opencv_world.lib. Do I have to configure something else? Do I have to install something for CMake?

Feature extraction 3D

My case is as follows: I have an adjacency matrix (a 0-1 graph data structure); it was originally a mesh with 3D coordinates (X, Y, Z). The idea is that I want to extract features, but I read that I have to obtain interest points first, and then obtain a feature vector for each. Is that correct? So if I would like to read the matrix, how could this be done? I'm only finding examples that read .jpg images; how could it work in my case, and would the process then follow normally, I mean showing the feature points on my object and so on? Hopefully I'm in the correct place to ask this. Thank you

Slide bar not working and unable to draw contours

Hi, I am a beginner with OpenCV, and I am trying to implement a slide bar and draw contours around the masked image. My code is shown below. May I also ask:

1. Sometimes there is a square bracket with a value, e.g. [-1] or [0], behind the function `cv.findContours(mask,cv.RETR_TREE,cv.CHAIN_APPROX_NONE)`. What are these square brackets with values used for?
2. I found `contours, hierarchy = cv.findContours(image, mode, method[, contours[, hierarchy[, offset]]])` on the official OpenCV website. I understand that we can pass image, mode, and method to the function, but what is the meaning of `[, contours[, hierarchy[, offset]]]`?

Thank you for reading.

```python
import numpy as np
import cv2 as cv

def empty(x):
    pass

#lower_blue = np.array([1,80,53])
#upper_blue = np.array([200,205,255])

img = np.zeros((512,512,3),np.uint8)
cv.rectangle(img,(100,100),(277,307),(155,0,0),-1)
#cv.circle(img,(450,450),50,(155,0,0),-1)
#img = cv.imread("orange.jpg")
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)

cv.namedWindow("my_mask")
cv.createTrackbar("LH","my_mask",0,255,empty)
cv.createTrackbar("LS","my_mask",0,255,empty)
cv.createTrackbar("LV","my_mask",0,255,empty)
cv.createTrackbar("UH","my_mask",255,255,empty)
cv.createTrackbar("US","my_mask",255,255,empty)
cv.createTrackbar("UV","my_mask",255,255,empty)

LH = cv.getTrackbarPos("LH", "my_mask")
LS = cv.getTrackbarPos("LS", "my_mask")
LV = cv.getTrackbarPos("LV", "my_mask")
UH = cv.getTrackbarPos("UH", "my_mask")
US = cv.getTrackbarPos("US", "my_mask")
UV = cv.getTrackbarPos("UV", "my_mask")

#lower = np.array([LH,LS,LV])
#upper = np.array([UH,US,UV])
lower = np.array([0,100,100])
upper = np.array([255,255,255])
mask = cv.inRange(hsv, lower, upper)

countours = cv.findContours(mask,cv.RETR_TREE,cv.CHAIN_APPROX_NONE)
#it returns a tuple; every element of the tuple starts with "array", verified by using len()
#Barea = cv.contourArea
#print(len(countours))
#print(countours[2])
#cv.drawContours(img,[countours],-1,(0,255,255),5)

cv.imshow("original",img)
cv.imshow("my_mask",mask)
cv.waitKey(0)
cv.destroyAllWindows()
```
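On question 2 above: in OpenCV's documentation, nested square brackets are the conventional notation for optional parameters, where each one is only meaningful if the ones before it are given. In Python they correspond to arguments with default values. A minimal sketch of the same convention, using a toy function of my own (not OpenCV's):

```python
def find_contours_like(image, mode, method, contours=None, hierarchy=None, offset=(0, 0)):
    """Signature shaped like findContours: the bracketed parameters are optional."""
    # Callers may omit any trailing subset of the optional arguments.
    return {"mode": mode, "method": method, "offset": offset}
```

The trailing `[0]` or `[-1]` you sometimes see after `findContours` simply indexes into its returned tuple, because the number of return values changed between OpenCV versions (three in OpenCV 3, two in OpenCV 4), so indexing picks out the contours regardless of version.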

3D object detection and tracking

Hi, assuming that I have previously implemented a robust natural-feature tracker for planar objects (i.e. pictures) using keypoint extraction, matching, and pose estimation, how do I logically extend this to track 3D objects? Assume that:

- The hardware is a monocular camera (a smartphone, but more likely a desktop with an attached camera during development).
- I control the lighting environment of the objects (so I can limit speculars, etc.).
- The object is rigid.
- The object has distinctive texture and is against a distinctive background.
- I have digitized 3D models of the objects if required.

Both object detection and pose estimation are required. There appear to be many tutorials on 2D NFT tracking on the internet, but none explains how to then extend this to matching keypoints against a 3D model. To be clear, I'm not looking for a prebuilt solution (sure, Vuforia does this). I'm looking for source code or algorithm-level information.

Icon location question

Hey all, I would like to know if it is possible to detect whether an object/icon is closer to point A or point B on a 2D surface. The object or icon is not moving, so it is static. And if so, is it also possible to detect which of points A to J an object or icon is closest to? (Points A to J are in a straight line, like a scale.) At this point I am not looking for information on how this is to be done; I just hope to find out whether it is possible (or not).
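It is possible: once the icon's position is known (for instance, the centroid of its detected contour), the question reduces to comparing Euclidean distances, and that works the same for two reference points or ten. A minimal sketch under that assumption (names are mine):

```python
import math

def nearest_label(icon_xy, labeled_points):
    """Return the label of the reference point closest to the icon's (x, y)."""
    return min(labeled_points,
               key=lambda label: math.dist(icon_xy, labeled_points[label]))
```

For the A-to-J scale case, `labeled_points` would map each label to its known (x, y) position along the line.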

