Channel: OpenCV Q&A Forum - RSS feed

deletion of useless posts

According to a [previous discussion](https://github.com/opencv-infrastructure/answers.opencv.org/issues/45), some questions have already been deleted, and there are [many questions tagged "deleted"](https://answers.opencv.org/questions/scope:all/sort:activity-desc/tags:deleted/page:1/) waiting to be deleted. **Deleted questions remain in the forum database and can be recovered if needed.** This is not an easy process; in particular, it is hard to decide whether a question really does not deserve an answer. I have some suggestions related to this issue:

**1.** In my opinion, upvote any new question that deserves an answer and downvote otherwise. Some users and moderators already do this, but it would be great if every member who reads a question used their vote.

**2.** We can leave a comment "voted for deleting".

**3.** All members can use `flag offensive` for useless posts.

**4.** etc. (Please leave a comment or answer if you have more ideas.)

***I will update this question frequently so it is read by more users, so general messages about the forum are also welcome.***

Sep 20 2020: total 33,544 questions, unanswered 17,559 questions
Sep 22 2020: total 33,500 questions, unanswered 17,500 questions
Sep 27 2020: total 33,374 questions, unanswered 17,250 questions

Trying to read different frames with VideoCapture but getting the same image

Hi all, this is how I try to read and save different frames from a video. My code is below, but I found that I get the same image three times (I'm sure I kept moving in front of the camera lol). Please help me find what I need to do. Thanks in advance!

    #include <opencv2/core.hpp>
    #include <opencv2/videoio.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/imgproc.hpp>
    #include <iostream>
    #include <stdio.h>
    #include <ctime>

    using namespace cv;
    using namespace std;

    int main(int, char**)
    {
        Mat frame;
        VideoCapture cap;
        int deviceID = 0;           // 0 = open default camera
        int apiID = cv::CAP_ANY;    // 0 = autodetect default API
        cap.open(deviceID + apiID);
        if (!cap.isOpened()) {
            cerr << "ERROR! Unable to open camera\n";
            return -1;
        }

        //--- GRAB AND WRITE LOOP
        cout << "Start grabbing" << endl << "Press any key to terminate" << endl;
        int cnt = 0;
        Mat frm_test[3];
        clock_t start, finish;
        start = clock();
        for (;;) {
            // wait for a new frame from camera and store it into 'frame'
            cap.read(frame);
            // check if we succeeded
            if (frame.empty()) {
                cerr << "ERROR! blank frame grabbed\n";
                break;
            }
            finish = clock();
            if ((cnt < 3) && ((finish - start) / CLOCKS_PER_SEC > 1.0)) {
                frm_test[cnt] = frame;
                start = clock();
                cnt++;
            }
            // convert color to gray
            //cvtColor(frame, frame, cv::COLOR_RGB2GRAY);
            // show live and wait for a key with timeout long enough to show images
            imshow("Live", frame);
            if (waitKey(5) >= 0) {
                cout << "Exit requested" << endl;
                destroyAllWindows();
                break;
            }
        }
        // the camera will be deinitialized automatically in VideoCapture destructor
        imshow("frm 1", frm_test[0]);
        imshow("frm 2", frm_test[1]);
        imshow("frm 3", frm_test[2]);
        waitKey(0);
        return 0;
    }
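A likely cause, sketched below under the assumption that the three saved frames alias the capture buffer: `Mat` assignment (`frm_test[cnt] = frame;`) copies only the header, so all three array entries end up sharing the pixel data of the last grabbed frame. Cloning makes an independent deep copy:

    // Deep-copy the current frame so frm_test[cnt] keeps its own pixel data
    // instead of sharing the buffer that cap.read() overwrites on every loop.
    frm_test[cnt] = frame.clone();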

Callback parameters not being updated

Hi, I am using setMouseCallback, with the same usage as https://answers.opencv.org/question/32888/passing-multiple-parameters-with-the-setmousecallback-function/. My structure is as follows:

    struct cbParams_t
    {
        Point pSingle;
        vector<Point> pUpAndpDown;
        Mat cropped;
        int count;
        cbParams_t() : pSingle(), pUpAndpDown(), cropped(), count() {}
    };

But for some reason count does not seem to be updated. The code:

    cbParams_t cbParams;
    setMouseCallback("Img2", onMouseImg2, &cbParams); //1

Inside the callback:

    static void onMouseImg2(int event, int i, int j, int flags, void* param)
    ...
    cbParams_t* pCbParams = (cbParams_t*) param;
    int* pCount = &(pCbParams->count);
    ...
    (*pCount)++;

And painfully, count is still set to 0. Best regards.
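For comparison, here is a minimal self-contained sketch of the same pattern (window name and event choice are illustrative, not taken from the original post). If this version increments, the problem is likely in code not shown, e.g. the callback being registered before the window exists, or `count` being read from a copy of the struct:

    #include <opencv2/core.hpp>
    #include <opencv2/highgui.hpp>
    #include <iostream>

    struct cbParams_t { int count = 0; };

    static void onMouse(int event, int x, int y, int, void* param)
    {
        if (event == cv::EVENT_LBUTTONDOWN) {
            cbParams_t* p = static_cast<cbParams_t*>(param);
            ++p->count;                       // mutates the caller's struct through the pointer
            std::cout << "count = " << p->count << std::endl;
        }
    }

    int main()
    {
        cv::Mat img(480, 640, CV_8UC3, cv::Scalar::all(40));
        cv::namedWindow("Img2");              // the window must exist before the callback is attached
        cbParams_t cbParams;                  // must outlive the event loop
        cv::setMouseCallback("Img2", onMouse, &cbParams);
        cv::imshow("Img2", img);
        cv::waitKey(0);                       // events are only delivered while waitKey runs
        return 0;
    }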

Why is a contour from cv2.findContours() in a shape like [N,1,2]?

Hi, guys,
I am a learner of OpenCV, and I found that the contours returned by cv2.findContours() have a shape like [N,1,2].
Why not use the shape [N,2]?
What is the reason for the middle dimension?
Any idea or answer will be appreciated!
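A short sketch of what the bindings produce. The usual explanation is that the Python bindings map the C++ `std::vector<cv::Point>` through `cv::Mat`, which represents a point list as an N x 1 array of 2-channel entries, hence the extra middle axis; it is easy to drop:

    import cv2
    import numpy as np

    # A toy binary image with one white square
    img = np.zeros((100, 100), np.uint8)
    cv2.rectangle(img, (20, 20), (80, 80), 255, -1)

    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = contours[0]
    print(cnt.shape)            # (4, 1, 2): N points, 1 "column", 2 channels (x, y)

    pts = cnt.reshape(-1, 2)    # or cnt.squeeze(axis=1) for the flat (N, 2) view
    print(pts.shape)            # (4, 2)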

Opencv Python - Detecting lines on a page of a book

I would like to know if there is a way to identify the lines on the pages of a CR book using the OpenCV library in Python. Should I use HSV values to identify them, or another approach such as the HoughLinesP function, etc.? I hope you can help me in this situation. Thank you.
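A minimal sketch of the HoughLinesP route (the file name and all thresholds are placeholders to tune for the actual page photos):

    import cv2
    import numpy as np

    img = cv2.imread("page.jpg")                      # placeholder path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)

    # Probabilistic Hough transform; minLineLength/maxLineGap need tuning
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=200, maxLineGap=10)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

    cv2.imwrite("lines.jpg", img)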

Visual Studio 2019 build error "link.exe" could not be run

Hello, I am trying to build OpenCV with extra modules in Visual Studio but ran into a warning and an error. The error was:

    Error MSB6003 The specified task executable "link.exe" could not be run.
    System.ComponentModel.Win32Exception (0x80004005): Access is denied
       at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)
       at System.Diagnostics.Process.Start()
       at Microsoft.Build.Utilities.ToolTask.ExecuteTool(String pathToTool, String responseFileCommands, String commandLineCommands)
       at Microsoft.Build.CPPTasks.TrackedVCToolTask.TrackerExecuteTool(String pathToTool, String responseFileCommands, String commandLineCommands)
       at Microsoft.Build.CPPTasks.TrackedVCToolTask.ExecuteTool(String pathToTool, String responseFileCommands, String commandLineCommands)
       at Microsoft.Build.Utilities.ToolTask.Execute()
    opencv_test_bioinspired  C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\Microsoft.CppCommon.targets  847

So I believe something is missing in the build, but I'm not sure how to go about fixing it. Thank you for your time!

I wanted to export my satellite image to a particular zoom level

I want to export my satellite image at a particular zoom level so that my object detection can detect the object properly. My code does not let me get into every zoom level and see the object, so I have two options: either I tell my code to look for the object at that zoom level only, or I convert the image itself to a particular zoom level. I saw that cv2 is very useful. Can anyone help me with either of the options? Thanks.
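A minimal sketch of the second option, assuming "zoom level" can be expressed as a plain scale factor on the source image (tile-pyramid zoom levels from a map provider would instead need the provider's tiling math):

    import cv2

    img = cv2.imread("satellite.png")          # placeholder path
    zoom = 2.0                                 # assumed scale factor for the target zoom level

    # Enlarge with cubic interpolation; use INTER_AREA when shrinking instead
    zoomed = cv2.resize(img, None, fx=zoom, fy=zoom, interpolation=cv2.INTER_CUBIC)
    cv2.imwrite("satellite_zoomed.png", zoomed)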

Camera Center from rvecs and tvecs

Hey you people of knowledge, I'm trying to find the camera center, i.e. the origin of the camera coordinate system. As I understand it, the computed rvecs and tvecs are used in the transformation from world coordinates to camera coordinates, so I think I need an additional step to get to the camera center, but I can't find it. Maybe someone could point me in the right direction? Greetings and thanks in advance, Patrick
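For reference, a sketch of the standard relation: rvec/tvec map a world point X into the camera frame via X_cam = R * X + t, so the camera center in world coordinates is C = -R^T * t, the point that maps to the camera origin (the rvec/tvec values below are placeholders):

    import cv2
    import numpy as np

    # rvec, tvec as returned by cv2.solvePnP / cv2.calibrateCamera (placeholders here)
    rvec = np.array([0.1, -0.2, 0.05])
    tvec = np.array([[0.5], [0.1], [2.0]])

    R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation matrix from the rotation vector
    C = -R.T @ tvec                   # camera center in world coordinates
    print(C.ravel())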

Using a GStreamer pipeline in OpenCV: why does my pipeline work when I add a videoconvert element before appsink?

Hello, I am trying to use a GStreamer pipeline in OpenCV. For this purpose, I created the instance of the VideoCapture class as follows:

    cv::VideoCapture cap{"filesrc location=/home/10_Years_Compiling_blue_1280x720.jpg ! "
                         "jpegdec ! "
                         "videoconvert ! "
                         "video/x-raw, format=RGB ! "
                         "videoconvert ! "
                         "video/x-raw, format=UYVY ! "
                         "appsink "};

Then I apply my custom conversion algorithm, and its output is a raw frame in NV12 format. I am trying to send this frame to a GStreamer pipeline using the following VideoWriter instance:

    cv::VideoWriter writer{"appsrc ! "
                           "videoconvert ! "
                           "filesink location=/home/refout.nv12",
                           cv::CAP_GSTREAMER,
                           0,
                           30.0,   // fps
                           cv::Size{WIDTH, HEIGHT},
                           true};

But the resulting refout.nv12 file does not contain any bytes. When I insert a videoconvert element right before the appsink element in the VideoCapture pipeline, I can successfully obtain refout.nv12. So, what is the effect of the "videoconvert" element in this situation? Is there a way to run the specified example successfully without the "videoconvert" element? Thanks.

VFR: obtain FR/duration/timestamp of a given frame?

I have a video which freezes for a few seconds. I know for a fact that the freeze spans a real window of time during which the camera or camera-to-storage system simply failed to save frames, but thanks to a timecode generator on the camera, the camera is able to label the frozen frame with a long duration spanning the freeze, so that a video player knows to pause on the frozen frame during playback in order to maintain realtime playback. Likewise, MediaInfo reports that the video has a very low minimum frame rate of 0.083 FPS, obviously corresponding to the frozen frame. So I know the metadata for the freeze is in there: QT Player knows to pause at that point during playback, and MediaInfo can see the minimum frame rate.

I need to obtain this information during Python cv2 analysis. I know about CAP_PROP_FPS, of course, but that isn't helpful for a VFR video. I know that I can seek to a given timestamp to retrieve a frame with CAP_PROP_POS_MSEC, and if I seek within the frozen period I simply receive the same frame over and over until I seek to some timestamp outside the freeze, so OpenCV can give me the frame for each timestamp. But I want to ask: how long should a *given* frame last? If I simply read frames in sequence via read(), I want to know how long each frame should last. In effect, if I'm emulating playback like QT Player, how do I know how long to pause on each frame, per the variable frame rate and all that? Thanks.
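A sketch of one way to recover per-frame durations, assuming the backend updates CAP_PROP_POS_MSEC to each frame's presentation timestamp as it is read (this is backend-dependent; FFmpeg-based builds generally do):

    import cv2

    cap = cv2.VideoCapture("input.mp4")        # placeholder path

    timestamps = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Timestamp (ms) of the frame just decoded, per the container's PTS
        timestamps.append(cap.get(cv2.CAP_PROP_POS_MSEC))
    cap.release()

    # Duration of frame i = timestamp of frame i+1 minus timestamp of frame i;
    # a long-duration frozen frame shows up as one very large gap.
    durations = [b - a for a, b in zip(timestamps, timestamps[1:])]
    print(max(durations) if durations else None)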

Drone Localization with IR Beacons?

Hello! I am working on a project where I am trying to perform drone localization using IR technology. I am using 3 IR beacons on the ground and one IR camera mounted on the drone. I am hoping to use a Raspberry Pi with OpenCV to help calculate the drone's location in three dimensions, but I have absolutely zero experience with stuff like this. Does this seem like a viable project? Any big hurdles you anticipate? Any info or experience you can share would be great! Thanks! Nathan

Circle ROI, remove square mask border

Hello, I'm trying to replace a detected face in an image with a GPU-modified one. The goal is to crop the face, manipulate it, then replace the original ROI pixels with it. I have no issue using a regular rectangular ROI to do this. However, to be a bit more precise, I'm trying to use a circular ROI. I know you have to create a rectangular mask to do this; here I am using a black square mask. Maybe there is a way to make the mask transparent? I think I'm close, yet when I merge the augmented face onto the image, the modified face portion includes the square black border (from the mask, I presume). How do I eliminate the black square border? Is this possible? I've seen similar questions, but they don't seem to apply. Thank you! See the problem: https://pasteboard.co/JscU9re.png

    int radius = faces[ic].width / 2;
    Mat mask(Size(faces[ic].width, faces[ic].height), CV_8U, Scalar(0)); // all black
    Rect region = Rect(faces[ic].x, faces[ic].y, faces[ic].width, faces[ic].height);
    Mat circ_roi;
    Mat roi(img, region);
    Mat insetImage(img, region);
    circle(mask, Point(radius, radius), radius, Scalar(255), -1);
    bitwise_and(roi, roi, circ_roi, mask); // retain only pixels inside the circle

    // create a mat to store the modified mat from the gpu
    Mat h_result(circ_roi.size(), circ_roi.type());

    // create GPU/device images, same size and type as original host image
    cuda::GpuMat d_crop(circ_roi);
    cuda::GpuMat d_result;

    // create the gaussian filter
    cv::Ptr<cv::cuda::Filter> gauss = cv::cuda::createGaussianFilter(d_crop.type(), d_result.type(), Size(ksize, ksize), 6.0, 6.0);

    // apply the gaussian filter to our cropped image
    gauss->apply(d_crop, d_result);

    // download the result image from device to host
    d_result.download(h_result);

    // leaves the black border around circle :(
    h_result.copyTo(insetImage);

Thanks! Chris
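A sketch of one likely fix, assuming the goal is to copy back only the circular region: `copyTo` accepts the same mask, so the pixels outside the circle in `insetImage` are left untouched instead of being overwritten with black:

    // Copy only the pixels selected by the circular mask back into the image;
    // everything outside the circle keeps the original (unblurred) content.
    h_result.copyTo(insetImage, mask);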

unwrapPhaseMap() does not take ndarray in Python

Hi there, I'm doing some structured light and fringe analysis work and am trying to use the phase unwrapping function cv.phase_unwrapping_PhaseUnwrapping.unwrapPhaseMap in OpenCV (4.4.0) with Python ([doc here](https://docs.opencv.org/master/d8/d83/classcv_1_1phase__unwrapping_1_1PhaseUnwrapping.html#acad1a355e86402cb190956f9a9cbae99)):

> unwrapPhaseMap()
> virtual void cv::phase_unwrapping::PhaseUnwrapping::unwrapPhaseMap(InputArray wrappedPhaseMap, OutputArray unwrappedPhaseMap, InputArray shadowMask = noArray()) pure virtual
> Python: unwrappedPhaseMap = cv.phase_unwrapping_PhaseUnwrapping.unwrapPhaseMap(wrappedPhaseMap[, unwrappedPhaseMap[, shadowMask]])

However, when I try to call the function in Python, a TypeError occurs:

> Exception has occurred: TypeError
> descriptor 'unwrapPhaseMap' for 'cv2.phase_unwrapping_PhaseUnwrapping' objects doesn't apply to a 'numpy.ndarray' object

It looks like the function doesn't take an ndarray as input. I'm assuming it takes cv::Mat, but after some version (3.0?), OpenCV removed cv2.cv and the related fromarray() function that converts an ndarray to cv::Mat. It seems there is no way to use cv::Mat in the current version of OpenCV in Python. Does anyone know how to use the unwrapPhaseMap() function with Python, and is this possibly a legacy issue? Many thanks!
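For what it's worth, the error message suggests the method is being called on the class rather than on an instance: PhaseUnwrapping is abstract, so an object has to be created first through the concrete HistogramPhaseUnwrapping factory. A sketch under the assumption that the 4.4.0 contrib bindings expose the factory and params class under the names below (treat the exact names as unverified):

    import cv2
    import numpy as np

    # Stand-in wrapped phase map; a real one comes from the fringe analysis
    wrapped = np.random.uniform(-np.pi, np.pi, (480, 640)).astype(np.float32)

    # Parameters must match the phase-map size
    params = cv2.phase_unwrapping.HistogramPhaseUnwrapping_Params()
    params.width = wrapped.shape[1]
    params.height = wrapped.shape[0]

    # Create a concrete instance, then call the method on it (not on the class)
    unwrapper = cv2.phase_unwrapping.HistogramPhaseUnwrapping_create(params)
    unwrapped = unwrapper.unwrapPhaseMap(wrapped)
    print(unwrapped.shape)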

how to create an image box in python using tkinter

Hi guys, I am developing a desktop application about face recognition, and I have a problem that I still cannot solve; I am a newbie in Python. I want to create an image box using tkinter, and the image box will display the camera feed. I have tried using cv2.imshow('frame', frame), but that is not actually what I want; I just want the video inside a form, in an image box. How can I solve this?
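A minimal sketch of the usual tkinter pattern, assuming Pillow is available for the numpy-to-Tk image conversion: read a frame with cv2, convert BGR to RGB, wrap it in an ImageTk.PhotoImage, show it in a Label (the "image box"), and reschedule with after():

    import cv2
    import tkinter as tk
    from PIL import Image, ImageTk   # Pillow package

    root = tk.Tk()
    label = tk.Label(root)           # the "image box"
    label.pack()
    cap = cv2.VideoCapture(0)

    def update_frame():
        ok, frame = cap.read()
        if ok:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # tkinter expects RGB
            photo = ImageTk.PhotoImage(Image.fromarray(rgb))
            label.configure(image=photo)
            label.image = photo      # keep a reference so it is not garbage-collected
        root.after(15, update_frame) # repoll roughly every 15 ms

    update_frame()
    root.mainloop()
    cap.release()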

Detecting Charuco with low contrast and few pixels in big picture.

Hi, I am trying to detect a DIN-A4-sized ChArUco board using OpenCV. The pictures were produced by an Azure Kinect with a resolution of 4096 x 3072 px; the board takes up only about 330 x 230 px of that image. You can see the cutout of the board below. It is quite greyish, without much contrast between the white and black areas. I think that is why I only get any results from the board when getting closer and when seeing it basically from the front. Now, my question is: could I somehow get results from that picture? The current detection setup is basically exactly the example code with a 4x4 dictionary. ![image](https://user-images.githubusercontent.com/34018356/94540154-6c20fe80-0246-11eb-8815-09288c906d07.png)
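A sketch of detector parameters that are commonly loosened for small, low-contrast markers; the values are starting points to tune, not verified against this image, and the API names follow the pre-4.7 aruco contrib module:

    import cv2
    import cv2.aruco as aruco

    dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
    params = aruco.DetectorParameters_create()

    # Widen the adaptive-threshold window sweep to cope with the flat, greyish contrast
    params.adaptiveThreshWinSizeMin = 3
    params.adaptiveThreshWinSizeMax = 53
    params.adaptiveThreshWinSizeStep = 5
    params.adaptiveThreshConstant = 5          # lower constant keeps faint squares
    params.minMarkerPerimeterRate = 0.01       # allow very small markers in the 4096px frame

    img = cv2.imread("kinect_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
    corners, ids, rejected = aruco.detectMarkers(img, dictionary, parameters=params)
    print(0 if ids is None else len(ids), "markers found")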

how can I draw contours in a specific region

![image description](/upfiles/16009410253523529.png) I have converted the image to a grayscale image, and the result is shown in the attached image. Is there a way to draw contours and detect the difference between the two sections, or to separate them? Thanks in advance.
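A rough sketch of the usual approach, assuming the two sections differ in brightness: threshold the grayscale image to isolate one section, then run findContours on the mask (the file path is a placeholder):

    import cv2

    gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

    # Otsu picks the split automatically when the two sections have distinct intensities
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(vis, contours, -1, (0, 255, 0), 2)    # outline the separated section
    cv2.imwrite("contours.png", vis)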

Android - access hardware camera with Opencv-python and Kivy

So I'm trying to get OpenCV to work on Android. I copied a script from a GitHub example (https://gist.github.com/ExpandOcean/de261e66949009f44ad2) and compiled it with buildozer. After a lot of SDK torture, I compiled it successfully. Despite having all required permissions, capture doesn't work with either the default or the secondary camera: calling isOpened() on an instance of VideoCapture() always returns False, even though I can use cv2 normally otherwise. This is similar to this issue (https://stackoverflow.com/questions/45659872/use-opencv-cv2-videocapture-in-kivy-with-android-python-for-android); from what I've found, a lack of ffmpeg could be the cause, but I never found confirmation of that. Is there a way to test this and/or work around it?

undistort() and remap() give different results

I was trying to figure out things for my [previous question](https://answers.opencv.org/question/235646/determine-new-coordinates-of-pixel-after-calibration-remap/) but suddenly found out that **undistort()** and **remap()** give different results. Any ideas why that might be, and which result can be considered more "correct"? remap: ![remap](/upfiles/16013794913938054.png) undistort: ![undistort](/upfiles/16013795274629352.png)
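For reference, a sketch of the two calls set up so that they should be equivalent. A common source of a mismatch is passing different new camera matrices in the two paths, since undistort() defaults to the original cameraMatrix; K, dist, and the image below are placeholders standing in for your calibration:

    import cv2
    import numpy as np

    # Placeholders: intrinsics and distortion from your calibration
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist = np.array([-0.2, 0.05, 0, 0, 0])
    img = cv2.imread("input.png")
    h, w = img.shape[:2]

    # Path 1: one-shot undistort (defaults to K as the new camera matrix)
    out1 = cv2.undistort(img, K, dist, None, K)

    # Path 2: explicit maps + remap with the same new camera matrix and interpolation
    map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (w, h), cv2.CV_32FC1)
    out2 = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)

    print(np.abs(out1.astype(int) - out2.astype(int)).max())   # expect ~0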

Extract red boxes, and its inner content, in a photo

![C:\fakepath\aa.jpeg](/upfiles/16013749085278218.jpeg) I have a set of images like this one, and for every image I want a new image composed of only the two red boxes. So my output should drop all the content outside the boxes. Could you please help me do it, by suggesting or writing the code? Many thanks.
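A rough sketch of one approach, assuming the boxes are drawn in a reasonably saturated red: mask the red hue range in HSV, keep the two largest contours, and crop their bounding boxes (all ranges and paths are placeholders to tune):

    import cv2

    img = cv2.imread("aa.jpeg")                     # placeholder path
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Red wraps around the hue axis, so combine two ranges
    lower = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
    mask = lower | upper

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep the two largest blobs, assumed to be the red boxes
    boxes = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    for i, c in enumerate(boxes):
        x, y, w, h = cv2.boundingRect(c)
        cv2.imwrite(f"box_{i}.png", img[y:y + h, x:x + w])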

How to Align two images?

Hi! Hope you're all doing well! I am working on a project to fuse two images (RGB + IR). I am using the FLIR dataset. The RGB image resolution is 1800x1600, whereas the IR images are 640x512. Additionally, the RGB images have a slightly wider angle and contain more of the scene, so when the images go through image fusion, it produces a shadowy output. I am looking to crop the RGB images so their content matches the infrared images. I came across a [medium article](https://medium.com/@aswinvb/how-to-perform-thermal-to-visible-image-registration-c18a34894866) where the author used feature matching to identify and crop the matching part. It works great for daylight scenes; however, it produces very wrong results on night scenes, and I notice the feature matching is not correct there either. I have tried other methods (SURF and SIFT as well), but they don't seem to work well either, unless I am missing something? I am new to CV.

1) Do you know what could be causing this issue?
2) Is there any other straightforward method I can use to crop the RGB images? Because the position of the RGB camera is fixed, maybe cropping by a certain factor, or identifying height/width points, can be used to crop the images (see the sketch below)?

Cropped Image Daylight (1) - https://imgur.com/0r6gQkX
Cropped Image Nightscene (2) - https://imgur.com/PTlFCue

Best Regards
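A sketch of the fixed-crop idea from question 2: since the rig geometry does not change, one daylight pair where feature matching does work can be used once to estimate a crop window, which is then reused for every image, day or night. The crop fractions below are illustrative placeholders, not measured values:

    import cv2

    # Fractions of the RGB frame that correspond to the IR field of view.
    # Illustrative placeholders: estimate them once from a daylight pair
    # where feature matching works, then reuse for all images.
    X0, Y0, X1, Y1 = 0.15, 0.12, 0.85, 0.88

    def crop_rgb_to_ir(rgb, ir_size=(640, 512)):
        h, w = rgb.shape[:2]
        crop = rgb[int(Y0 * h):int(Y1 * h), int(X0 * w):int(X1 * w)]
        # Resize the crop to the IR resolution so the two images align pixel-for-pixel
        return cv2.resize(crop, ir_size, interpolation=cv2.INTER_AREA)

    rgb = cv2.imread("rgb.jpg")       # placeholder path
    aligned = crop_rgb_to_ir(rgb)
    cv2.imwrite("rgb_cropped.jpg", aligned)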