Channel: OpenCV Q&A Forum - RSS feed

executing stitching example throws error: (-2:Unspecified error) OpenCV samples: Can't find required data file: --try_use_gpu in function 'findFile'

Following these steps, I compiled the `stitching.cpp` example:

```
g++ stitching.cpp -o stitching.o -c -Wall -I/usr/include/opencv -I/usr/include/opencv2
g++ stitching.o -o stitching -lopencv_stitching -lopencv_aruco -lopencv_bgsegm \
    -lopencv_bioinspired -lopencv_ccalib -lopencv_dnn_objdetect -lopencv_dnn_superres \
    -lopencv_dpm -lopencv_highgui -lopencv_face -lopencv_freetype -lopencv_fuzzy \
    -lopencv_hdf -lopencv_hfs -lopencv_img_hash -lopencv_line_descriptor -lopencv_quality \
    -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_shape -lopencv_stereo \
    -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_superres -lopencv_optflow \
    -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_text -lopencv_dnn \
    -lopencv_plot -lopencv_ml -lopencv_videostab -lopencv_videoio -lopencv_viz \
    -lopencv_ximgproc -lopencv_video -lopencv_xobjdetect -lopencv_objdetect -lopencv_calib3d \
    -lopencv_imgcodecs -lopencv_features2d -lopencv_flann -lopencv_xphoto -lopencv_photo \
    -lopencv_imgproc -lopencv_core -ldl -lm -lpthread -lrt
```

But while executing the binary I get the following error:

```
[ WARN:0] global ../modules/core/src/utils/samples.cpp (59) findFile cv::samples::findFile('--try_use_gpu') => ''
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.2.0) ../modules/core/src/utils/samples.cpp:62: error: (-2:Unspecified error) OpenCV samples: Can't find required data file: --try_use_gpu in function 'findFile'
Aborted (core dumped)
```
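The error message suggests the binary does not recognize `--try_use_gpu`: in the OpenCV 4.x version of this sample the flag was removed, and any argument the parser does not recognize is treated as an image path and handed to `cv::samples::findFile()`, which then throws. Assuming that reading of the 4.2 sample is right, invoking it without the removed flag should avoid the exception:

```sh
# hypothetical invocation; img1.jpg/img2.jpg stand in for your own images
./stitching img1.jpg img2.jpg --output result.jpg
```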

Make the background of the image transparent using a mask

Greetings. How can I make the background of an image transparent using a mask, not a threshold? The essence of the program is as follows: I get the image and try to cut the object out of the background. I have already implemented this. I found an implementation with threshold here: [stackoverflow](https://stackoverflow.com/questions/40527769/removing-black-background-and-make-transparent-from-grabcut-output-in-python-ope). Here is what I have:

```cpp
struct comparator {
    bool operator()(std::tuple<std::vector<cv::Point>, bool, double> t1,
                    std::tuple<std::vector<cv::Point>, bool, double> t2) {
        return std::get<2>(t1) > std::get<2>(t2);
    }
} comparator;

cv::Mat image = cv::imread("C:\\Users\\Sky\\Downloads\\12.png");
cv::Mat grayImg;

// convert to greyscale
cv::cvtColor(image, grayImg, COLOR_BGRA2GRAY);

// finding threshes
cv::Mat thresh;
cv::threshold(grayImg, thresh, 127, 255, THRESH_BINARY_INV | THRESH_OTSU);

// finding contours
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
findContours(thresh, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));

// finding max contour
std::vector<std::tuple<std::vector<cv::Point>, bool, double>> vec;
for (size_t i = 0; i < contours.size(); ++i) {
    vec.push_back(std::make_tuple(contours.at(i),
                                  cv::isContourConvex(contours.at(i)),
                                  cv::contourArea(contours.at(i))));
}
std::sort(vec.begin(), vec.end(), comparator);
std::tuple<std::vector<cv::Point>, bool, double> maxContour;
maxContour = vec.at(0);

// create mask
cv::Mat mask = Mat::zeros(thresh.size(), CV_8S);
for (size_t i = 0; i < contours.size(); ++i) {
    cv::fillConvexPoly(mask, std::get<0>(vec.at(i)), Scalar(255, 0, 0), 8, 0);
}

// bitwise
cv::Mat res;
cv::bitwise_and(image, image, res, mask);

// show process
imshow("result", res);
imshow("mask", mask);
imshow("canny", thresh);
imshow("source", image);

// create transparent background
Mat dst;
Mat rgb[3];
split(res, rgb);
Mat rgba[4] = { rgb[0], rgb[1], rgb[2] };
merge(rgba, 4, dst);

// save to file transparent and cropped images
imwrite("C:/Documents/1.png", res);
imwrite("C:/Documents/dst.png", dst);

while (true) {
    if (waitKey() == 27) { // wait for 'esc' key press; if 'esc' is pressed, break loop
        std::cout << "esc key is pressed by user";
        break;
    }
}
return 0;
```

*As we can see, most of the image is lost if threshold is used.* 1) Source. 2) Mask. 3) Thresholds. 4) Result without a transparent background. 5) Result with a mask. As you can see, the result with a transparent background has lost color. I would be grateful for any information. ![image description](/upfiles/15947192765352285.png) ![image description](/upfiles/1594719291124665.png) ![image description](/upfiles/15947193103911275.png) ![image description](/upfiles/15947193657983085.png) ![image description](/upfiles/15947193908771268.png)
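A minimal sketch of the usual approach, in Python for brevity: instead of splitting and re-merging channels, write the mask itself into the alpha channel of a 4-channel image and save as PNG. File names here are placeholders, and the mask is assumed to be 8-bit single-channel (CV_8U), the same size as the image:

```python
import cv2

img = cv2.imread("source.png")                        # 3-channel BGR input
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # 8-bit mask, 255 = keep

bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)          # add an alpha channel
bgra[:, :, 3] = mask                                  # background (mask == 0) becomes transparent
cv2.imwrite("result.png", bgra)                       # PNG preserves the alpha channel
```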

Change a particular color in an image (Python)

Hi there, I have an image and I would like to change a particular colour in it. The image contains a set of colours; I used a k-means algorithm to obtain the whole set of colours contained in the image, and after selecting just one colour I would like to change it completely. For instance, I would like this source value RGB_source = (5, 114, 121) to change into RGB_dest = (166, 109, 82). How can I manage this colour change? Thank you.
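A minimal sketch of one way to do it, assuming an exact colour match is enough (remember OpenCV loads images as BGR, so the RGB triples below are reversed; widen the `inRange` bounds for tolerance):

```python
import cv2
import numpy as np

img = cv2.imread("image.png")             # loaded as BGR
src = np.array([121, 114, 5], np.uint8)   # RGB (5, 114, 121) in BGR order
dst = (82, 109, 166)                      # RGB (166, 109, 82) in BGR order

mask = cv2.inRange(img, src, src)         # exact-match mask of the source colour
img[mask > 0] = dst                       # overwrite the matched pixels
cv2.imwrite("recolored.png", img)
```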

Can I change the fps of a video dynamically?

I am using VideoWriter in Python with OpenCV 3.4. I want to change the fps of a video dynamically. Presently I am using `out = cv2.VideoWriter('output.avi', fourcc, 60.0, (848,480))`, where `60.0` is the fps. Is it possible that at some point in my code I change it to 30 fps, and then back to 60 fps or something else at some other part of the code?
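As far as I know, the fps is fixed when a `VideoWriter` is constructed and there is no property to change it afterwards, so a common workaround is to release the writer and reopen it with the new rate, producing segments that are joined afterwards. A sketch, with placeholder file names:

```python
import cv2

fourcc = cv2.VideoWriter_fourcc(*"XVID")
out = cv2.VideoWriter("part_60fps.avi", fourcc, 60.0, (848, 480))
# ... write the 60 fps frames ...
out.release()                       # close the first segment

out = cv2.VideoWriter("part_30fps.avi", fourcc, 30.0, (848, 480))
# ... write the 30 fps frames, then switch back the same way ...
out.release()
```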

VideoCapture.read() freezing when reading from gstreamer

Hello, I've recently encountered an issue where `cv2.VideoCapture.read()` seems to freeze when reading from a GStreamer pipeline. The stream works in a glitchy manner for a few seconds, then it freezes completely. After debugging a bit, I found it always freezes on the `VideoCapture.read()` line. The capture line is:

```python
import cv2

cap_receive = cv2.VideoCapture(
    'udpsrc port=5004 caps="application/x-rtp,media=(string)video,'
    'clock-rate=(int)90000,encoding-name=(string)MP4V-ES" ! '
    'rtpmp4vdepay ! decodebin ! videoconvert ! appsink',
    cv2.CAP_GSTREAMER)
```

The while loop where I read and process the frames (there was more image processing, but even this simple code still 'breaks'):

```python
import datetime

while cap_receive.isOpened():
    status, frame = cap_receive.read()
    if not status:
        print('empty frame')
        break
    timestamp = datetime.datetime.now()
    cv2.putText(frame, timestamp.strftime("%A %d %B %Y %I:%M:%S%p"),
                (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX,
                0.35, (0, 0, 255), 1)
    (flag, encodedImage) = cv2.imencode(".jpg", frame)
    if not flag:
        continue
    yield (b'--frame\r\n'
           b'Content-Type: image/jpeg\r\n\r\n' + bytearray(encodedImage) + b'\r\n')
```

This works fine on a Windows PC capturing directly from a webcam (`cv2.VideoCapture(0)`), but it doesn't work when trying to capture from the GStreamer pipeline on a microcontroller running Debian 10.4. My OpenCV version is 4.1.0. Has anyone else encountered this issue, or does anyone know any workarounds? I also tried `grab()` and `retrieve()` with the same results.
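One thing that might be worth trying, offered as an assumption rather than a known fix: tell `appsink` not to block when the consumer falls behind. `sync`, `drop`, and `max-buffers` are standard GStreamer `appsink` properties:

```python
import cv2

# same pipeline, but appsink drops stale frames instead of blocking the reader
pipeline = ('udpsrc port=5004 caps="application/x-rtp,media=(string)video,'
            'clock-rate=(int)90000,encoding-name=(string)MP4V-ES" ! '
            'rtpmp4vdepay ! decodebin ! videoconvert ! '
            'appsink sync=false drop=true max-buffers=1')
cap_receive = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```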

How to use a keras custom trained classifier's .pb file inside OpenCV DNN module

I'm having trouble using the .pb file generated with Keras. I think this is because I don't have a .pbtxt file, but I can't generate one: for that I would need a .config file and a relevant script (all scripts currently are for SSD and Faster R-CNN).

```python
from tensorflow.keras.applications.vgg16 import VGG16
import cv2

model = VGG16(weights='imagenet')
model.save("vgg")
net = cv2.dnn.readNetFromTensorflow("vgg/saved_model.pb")
```

Error:

```
OpenCV(4.1.2) /io/opencv/modules/dnn/src/tensorflow/tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: vgg/saved_model.pb in function 'ReadTFNetParamsFromBinaryFileOrDi
```

**Tensorflow:** 2.2.0 **OpenCV:** 4.3.0

**Note:** *Of course I will use fine-tuning to train this VGG before saving it; this is just the bare-bones code to reproduce the error.* I am also able to use the classifier successfully if I save it as .h5, convert it to .onnx, and then use it in OpenCV, but I'm wondering how to use it as a .pb. Thanks
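`model.save("vgg")` writes a TF2 SavedModel, while `readNetFromTensorflow` expects a single frozen GraphDef, which would explain the parse failure. A sketch of the usual conversion, assuming TF 2.x (`convert_variables_to_constants_v2` lives in a semi-private module and may move between versions):

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
import cv2

model = tf.keras.applications.VGG16(weights="imagenet")

# wrap the model in a concrete function and fold the variables into constants
full_model = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(full_model)

# serialize the frozen GraphDef to a single .pb file
tf.io.write_graph(frozen_func.graph, ".", "vgg_frozen.pb", as_text=False)

net = cv2.dnn.readNetFromTensorflow("vgg_frozen.pb")  # loads the frozen graph
```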

Image subtraction

I am working on a college project: I am trying to detect damage on the surface of a marble. I am working with OpenCV 4.3.0. I have converted to HSV, carried out erosion and dilation, isolated the marble, and drawn a circle around it. What I would like to try is to save just what is in the circle and then use subtraction with the current image in the circle to highlight the difference or defect. Is it possible to save just the section of a frame inside the circle? The target marble will be rotating so all parts are seen, so its location within the frame will move. Any suggestions welcome; I intend to run this on a Pi 4 in real time.
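A minimal sketch of cutting out the circular region, assuming the circle centre `(cx, cy)` and radius `r` come from your detection step and the circle lies fully inside the frame:

```python
import cv2
import numpy as np

frame = cv2.imread("marble.png")               # placeholder for a captured frame
cx, cy, r = 320, 240, 100                      # assumed circle from your detector

mask = np.zeros(frame.shape[:2], dtype=np.uint8)
cv2.circle(mask, (cx, cy), r, 255, -1)         # filled circle = region of interest
roi = cv2.bitwise_and(frame, frame, mask=mask)

# crop to the circle's bounding box so saved images align for later diffing
crop = roi[cy - r:cy + r, cx - r:cx + r]
cv2.imwrite("reference.png", crop)

# later, with a new crop of the same size:
# diff = cv2.absdiff(crop_now, cv2.imread("reference.png"))
```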

AGAST vs BRISK detector in opencv

After reading the paper on BRISK, I understand that BRISK uses a modified version of AGAST. So my question is: if I use only the BRISK detector (the OpenCV implementation), would it be different from the original AGAST in OpenCV?
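One empirical way to check is to run both detectors on the same image and compare the keypoints; since BRISK's detection stage runs AGAST over a scale pyramid, the outputs are generally not expected to be identical. A sketch, with a placeholder image path:

```python
import cv2

img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

agast = cv2.AgastFeatureDetector_create()
brisk = cv2.BRISK_create()

kp_agast = agast.detect(img, None)
kp_brisk = brisk.detect(img, None)   # BRISK's AGAST-in-scale-space detection stage
print(len(kp_agast), "AGAST keypoints vs", len(kp_brisk), "BRISK keypoints")
```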

Which algorithms are implemented in the cv::reg::Mapper classes?

Hello, I'm working with the cv::reg::Mapper classes and they work quite well, but I really need to know which algorithms are implemented in them, in particular in cv::reg::MapperGradSimilar. Thank you.

Robot programming

Hi, I am a complete beginner in the field of robot programming, and I am unsure which programming language would be a good one to start with. Can anyone help me with my problem?

paper detection on an image taken from an Android phone's camera

I found the code below for paper detection in an image taken from an Android phone's camera, but it does not work in certain cases, e.g. when the image is taken at a crooked angle or when the corners are not distinctly visible. Any help, or a pointer in which direction I should look further, would be great.

```kotlin
fun findContours(src: Mat): ArrayList<MatOfPoint> {
    val grayImage: Mat
    val cannedImage: Mat
    val kernel: Mat = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, Size(9.0, 9.0))
    val dilate: Mat
    val size = Size(src.size().width, src.size().height)
    grayImage = Mat(size, CvType.CV_8UC4)
    cannedImage = Mat(size, CvType.CV_8UC1)
    dilate = Mat(size, CvType.CV_8UC1)

    Imgproc.cvtColor(src, grayImage, Imgproc.COLOR_BGR2GRAY)
    Imgproc.GaussianBlur(grayImage, grayImage, Size(5.0, 5.0), 0.0)
    Imgproc.threshold(grayImage, grayImage, 20.0, 255.0, Imgproc.THRESH_TRIANGLE)
    Imgproc.Canny(grayImage, cannedImage, 75.0, 200.0)
    Imgproc.dilate(cannedImage, dilate, kernel)

    val contours = ArrayList<MatOfPoint>()
    val hierarchy = Mat()
    Imgproc.findContours(dilate, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE)
    contours.sortByDescending { p: MatOfPoint -> Imgproc.contourArea(p) }

    hierarchy.release()
    grayImage.release()
    cannedImage.release()
    kernel.release()
    dilate.release()
    return contours
}

private fun getCorners(contours: ArrayList<MatOfPoint>, size: Size): Corners? {
    val indexTo: Int
    when (contours.size) {
        in 0..5 -> indexTo = contours.size - 1
        else -> indexTo = 4
    }
    for (index in 0..contours.size) {
        if (index in 0..indexTo) {
            val c2f = MatOfPoint2f(*contours[index].toArray())
            val peri = Imgproc.arcLength(c2f, true)
            val approx = MatOfPoint2f()
            Imgproc.approxPolyDP(c2f, approx, 0.03 * peri, true)
            //val area = Imgproc.contourArea(approx)
            val points = approx.toArray().asList()
            var convex = MatOfPoint()
            approx.convertTo(convex, CvType.CV_32S)
            // select the biggest convex 4-corner polygon
            if (points.size == 4 && Imgproc.isContourConvex(convex)) {
                val foundPoints = sortPoints(points)
                return Corners(foundPoints, size)
            }
        } else {
            return null
        }
    }
    return null
}
```

Are minor subversions of OpenCV ABI compatible?

I am building a library that internally uses OpenCV. On my development machine I have OpenCV 3.2 installed, but on some of my target machines the version of OpenCV is 3.3.1 (and I cannot control what version my clients will install on their machines). As far as I know, there is a general rule that two versions of a library with the same major version number should be ABI compatible. So my first question is: does OpenCV conform to this rule? Is the answer different for the C and C++ interfaces? (If so, I would change my code so that it uses only C functions.) (A partial answer I found [here](https://abi-laboratory.pro/?view=timeline&l=opencv): it says OpenCV 3.3.0 is 99.88% backward compatible and 3.3.1 is 95.78% backward compatible, so given that I am using a very limited subset of the entire OpenCV, I think I have good chances that my library will work with both versions.)

If it does (at least for the C interface), then I have a second question:

1. The output of `objdump -p /usr/lib/libopencv_core.so.3.2` shows `SONAME libopencv_core.so.3.2`, which leads to:
2. when my library is built, the result of `ldd libMyLib.so` shows `libopencv_core.so.3.2`.

So why is OpenCV built with a soname indicating version "3.2" and not just "3"? I downloaded the OpenCV sources and built them on my PC. I see `CMAKE_SHARED_LIBRARY_SONAME_CXX_FLAG=-Wl,-soname,` in the generated CMakeVars.txt file. Is there a way to configure CMake so that the library will be built with a soname indicating only the major version "3"? Or maybe there is a way to tell my linker to set the DT_NEEDED field of my library so that its run-time requirement will be relaxed?
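For background, the soname is chosen by OpenCV's own CMake scripts through the `SOVERSION` target property (apparently set to major.minor, which matches the observed `libopencv_core.so.3.2`); the `CMAKE_SHARED_LIBRARY_SONAME_CXX_FLAG` you found only tells CMake how to pass a soname to the linker. A generic CMake illustration of the mechanism, not OpenCV's actual build file:

```cmake
# generic sketch of how CMake picks a shared library's soname
add_library(mylib SHARED mylib.cpp)
set_target_properties(mylib PROPERTIES
    VERSION   3.2.0   # file name: libmylib.so.3.2.0
    SOVERSION 3.2)    # SONAME:    libmylib.so.3.2
# with SOVERSION 3 the SONAME would be libmylib.so.3 instead
```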

Unable to access a Logitech 270 webcam through OpenCV

I'm running Ubuntu 18.04 with OpenCV 2. Upon trying to use the regular `VideoCapture(0)`, an error pops up as follows:

```
ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
```

Upon running `v4l2-ctl -d /dev/video0 --all`, I got the following:

```
Driver Info (not using libv4l2):
    Driver name   : uvcvideo
    Card type     : UVC Camera (046d:0825)
    Bus info      : usb-0000:00:0c.0-2
    Driver version: 5.3.18
    Capabilities  : 0x84A00001
        Video Capture
        Metadata Capture
        Streaming
        Extended Pix Format
        Device Capabilities
    Device Caps   : 0x04200001
        Video Capture
        Streaming
        Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
    Width/Height      : 800/600
    Pixel Format      : 'YUYV'
    Field             : None
    Bytes per Line    : 1600
    Size Image        : 960000
    Colorspace        : sRGB
    Transfer Function : Default (maps to sRGB)
    YCbCr/HSV Encoding: Default (maps to ITU-R 601)
    Quantization      : Default (maps to Limited Range)
    Flags             :
Crop Capability Video Capture:
    Bounds      : Left 0, Top 0, Width 800, Height 600
    Default     : Left 0, Top 0, Width 800, Height 600
    Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 800, Height 600
Selection: crop_bounds, Left 0, Top 0, Width 800, Height 600
Streaming Parameters Video Capture:
    Capabilities     : timeperframe
    Frames per second: 20.000 (20/1)
    Read buffers     : 0
```

I believe the camera input is in the YUYV format. Is there any way I can access the input in this format, or will I have to convert it to another format to do so? Any suggestions at all will be very helpful.
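One option, sketched under the assumption that a reasonably recent `cv2` is available (the property and conversion constants below exist in OpenCV 3.x/4.x; the OpenCV 2 API names differ): grab the raw YUYV buffer and convert it yourself.

```python
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))  # ask V4L2 for YUYV
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)    # hand back the raw buffer, no auto-conversion

ok, raw = cap.read()
if ok:
    # depending on the backend the raw buffer may need reshaping to (h, w, 2) first
    bgr = cv2.cvtColor(raw, cv2.COLOR_YUV2BGR_YUYV)  # explicit YUYV -> BGR
```

Another commonly suggested route is rebuilding OpenCV with libv4l support (`WITH_LIBV4L=ON`), which handles format conversion in the capture layer.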

Liquid detection on surface

Hello All, I am trying to detect whether a part of a surface is wet or not, and whether we can identify the level. The surface can be wet because of any type of liquid, like water, paint, oil, a chemical, etc. What is the best way to detect liquid on a wet surface? Please let me know if there are any algorithms available to achieve this, or please point me to any online documents or tutorials on achieving the same. Thanks in advance, *Shree*

How to extract the profile data from the Mat?

Hi all. I have a question about extracting profile data from a Mat. I would like to define a line and extract the profile data of the pixels belonging to that line. The following image is an example of what I want to do. ![image description](/upfiles/1594876080808416.png) Is it possible to do this in OpenCV? Thank you.
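A minimal sketch of sampling a grayscale profile along a line, with assumed endpoints; in C++ the equivalent can be done with `cv::LineIterator`:

```python
import cv2
import numpy as np

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

p0, p1 = (10, 50), (200, 80)                       # assumed line endpoints (x, y)
n = int(np.hypot(p1[0] - p0[0], p1[1] - p0[1]))    # roughly one sample per pixel
xs = np.linspace(p0[0], p1[0], n).astype(int)
ys = np.linspace(p0[1], p1[1], n).astype(int)
profile = img[ys, xs]                              # intensity values along the line
```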

DNN face detection in UWP/C++: strange output

Hello, I'm using OpenCV and Caffe to perform face detection on some images I receive from a stream. First, I tried with Python:

```python
import cv2

prototxt_file = 'deploy.prototxt'
weights_file = 'res10_300x300_ssd_iter_140000.caffemodel'
dnn = cv2.dnn.readNetFromCaffe(prototxt_file, weights_file)

for image in images:
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0))
    dnn.setInput(blob)
    detections = dnn.forward()
    for i in range(0, detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        box = detections[0, 0, i, 3:7]
        if confidence > 0.5:
            pass  # do something with the detection
```

This works quite well. Now, I want to do the same within a C++ Windows UWP app, so I compiled OpenCV from source for UWP (tried with versions 3.4.1 and 4.3.0). After going through [this example](https://github.com/opencv/opencv/blob/master/samples/dnn/object_detection.cpp#L155-L158), I tried the following:

```cpp
std::string caffeConfigFilePath = "deploy.prototxt";
std::string caffeWeightFilePath = "res10_300x300_ssd_iter_140000.caffemodel";
net = cv::dnn::readNetFromCaffe(caffeConfigFilePath, caffeWeightFilePath);

for (const auto& image : images) {
    cv::Mat imageResized, imageBlob;
    std::vector<cv::Mat> outs;
    cv::resize(image, imageResized, cv::Size(300, 300));
    cv::dnn::blobFromImage(imageResized, imageBlob, 1, cv::Size(300, 300), (104.0, 177.0, 123.0));
    net.setInput(imageBlob, "data");
    net.forward(outs, "detection_out");
    CV_Assert(outs.size() > 0);
    for (size_t k = 0; k < outs.size(); k++) {
        float* data = (float*)outs[k].data;
        for (size_t i = 0; i < outs[k].total(); i += 7) {
            float confidence = data[i + 2];
            if (confidence > 0.5) {
                // do something with the detection
            }
        }
    }
}
```

This gives me very bad results. I get a lot of detections with a confidence of 1.0, covering the entire image. The face itself, however, is not detected. So I thought I might be reading the output wrong. I also tried the code posted with [this question](https://answers.opencv.org/question/195031/problem-with-facedetection-model-from-dnn-module/), but the results are the same. I checked everything I could think of (input images in the right format, model correctly loaded, etc.) but could not identify the error. Since the DNN module is usually not included in an OpenCV UWP build (I had to comment out some lines in the CMakeLists.txt, but then it compiled without errors), can it be that using it is just not possible from a UWP app? What else could be the reason the code works in Python while almost identical code does not work in C++?
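One detail that stands out in the C++ version: `(104.0, 177.0, 123.0)` is not a `cv::Scalar` in C++; the parentheses invoke the comma operator and the expression collapses to the single value `123.0`, so the blob is normalized with only one (wrong) mean value, whereas the Python tuple passes all three. Assuming that is the culprit, the corrected call would look like:

```cpp
// build the mean explicitly as a Scalar so all three channel means are applied
cv::dnn::blobFromImage(imageResized, imageBlob, 1.0, cv::Size(300, 300),
                       cv::Scalar(104.0, 177.0, 123.0));
```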

Can I use OpenCV to run Keras models with Theano as the backend?

I am having a few difficulties using TensorFlow as a backend, because I am having a hard time importing the library. So I changed the backend to Theano, but I am not sure whether I can later use models trained in Keras through Theano with the OpenCV options. If anyone knows, I'd really appreciate it if you could tell me.

How to include the correct directory for OpenCV on a Raspberry Pi?

I installed OpenCV on my Raspberry Pi following this [tutorial](https://qengineering.eu/install-opencv-4.2-on-raspberry-pi-4.html), but the files went to the directory `usr/local/include/opencv4/opencv2` instead of `usr/local/include/opencv2`. And now compilation fails even if I use #include
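A common fix, sketched assuming the pkg-config module installed with OpenCV 4 is named `opencv4`: let pkg-config add the `opencv4` include root to the compiler flags instead of hard-coding the path.

```sh
# pkg-config expands to -I/usr/local/include/opencv4 plus the library flags
g++ main.cpp -o main $(pkg-config --cflags --libs opencv4)
```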

Non rigid registration

Does OpenCV have the means to perform non-rigid registration like the ImageJ BUnwarpJ plugin (2D image registration based on elastic deformations represented by B-splines)?

Undocumented HighGui functions

There are some functions in the HighGui module that lack any explanation/documentation. I'm speaking about `startLoop` and `stopLoop` ([here](https://docs.opencv.org/4.3.0/dc/d46/group__highgui__qt.html)) and `startWindowThread` ([here](https://docs.opencv.org/4.3.0/d7/dfc/group__highgui.html)). These seem to be interesting features, as threading often leads to problems in OpenCV, and it is sometimes useful to attach a processing loop to a window. Do you know what the role of these functions is, and how to use them?
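For `startWindowThread` in particular, a small sketch of how it is typically used; as far as I can tell it only has an effect on some backends (e.g. GTK), and on others it is a no-op:

```python
import cv2

cv2.startWindowThread()          # let HighGui pump window events on its own thread
cv2.namedWindow("preview")
cv2.imshow("preview", cv2.imread("image.png"))
# the window stays responsive without an explicit cv2.waitKey() polling loop
```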