Channel: OpenCV Q&A Forum - RSS feed

Unhandled exception for GpuMat in Visual Studio C++

I want to use CUDA/GPU with OpenCV in Visual Studio, for example `cuda::GpuMat`. I successfully built OpenCV with the extra modules and CUDA enabled, and tried the following code:

```
#include <string>
#include <opencv2/opencv.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/photo/cuda.hpp>

using namespace std;
using namespace cv;

int main() {
    string imageName("input.bmp");

    // CPU version
    Mat image = imread(imageName.c_str(), IMREAD_GRAYSCALE);

    // CUDA version
    cuda::GpuMat imageGPU;
    cuda::GpuMat downloadGPU;
    Mat buff;

    imageGPU.upload(image);
    cuda::fastNlMeansDenoising(imageGPU, downloadGPU, 2.5, 7, 31);
    downloadGPU.download(buff);

    imwrite("gpu.bmp", buff);
    return 0;
}
```

But I get an `unhandled exception` error. I originally downloaded OpenCV to `C:\Users\me\Downloads\opencv`. I then downloaded and built the latest OpenCV with the extra modules and CUDA enabled in `C:\Users\me\Downloads\opencv-master1\opencv-master`.

In `Property Pages->C/C++->General->Additional Include Directories` I have:

```
C:\Users\me\Downloads\opencv\build\include\opencv
C:\Users\me\Downloads\opencv\build\include\opencv2
C:\Users\me\Downloads\opencv\build\include\
```

In `Property Pages->Linker->General->Additional Library Directories` I have:

```
C:\Users\me\Downloads\opencv\build\x64\vc15\lib
```

And in `Property Pages->Linker->Input->Additional Dependencies` I have:

```
opencv_world343d.lib
opencv_world343.lib
```

What else am I supposed to include so I can get `GpuMat` to work properly?

Tonemap result is a black image

Hello, I am trying to apply the Reinhard tonemapper. I want to convert a float array filled with HDR values (component by component), so my code is this:

```
cv::Mat imageHdr(cv::Size2i(_width, _height), CV_64FC3, (float*)_pixelsHdr);
imageHdr.convertTo(imageHdr, CV_32FC3);

cv::Ptr<cv::TonemapReinhard> reinhardTonemaper = cv::createTonemapReinhard(2.2f, 0.f, 0.f, 0.f);
reinhardTonemaper->process(imageHdr, _imageLdr);

_imageLdr = _imageLdr * 255;
_imageLdr.convertTo(_imageLdr, CV_8UC3);
```

But the result is a completely black image, whereas when I copy imageHdr directly into _imageLdr (which is also a cv::Mat) the image looks fine. Do you have any ideas about how I can find and fix the problem? Thanks for the help :)
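
A minimal Python sketch of the same pipeline, only as a reference point for what the tonemapper expects (a float32, 3-channel HDR image) and how the roughly-[0,1] output is usually scaled; the file name and gamma value are assumptions, not the poster's data:

```
import cv2
import numpy as np

# hypothetical HDR input file, loaded as floating-point data
hdr = cv2.imread('memorial.hdr', cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
hdr = hdr.astype(np.float32)                       # Tonemap expects CV_32FC3

tonemap = cv2.createTonemapReinhard(gamma=2.2)
ldr = tonemap.process(hdr)                         # float output, roughly in [0, 1]
ldr = np.clip(ldr * 255, 0, 255).astype(np.uint8)  # scale to 8-bit for saving/display
cv2.imwrite('ldr.png', ldr)
```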

Cropping vertical contours, getting bounding box borders instead of area

I am trying to extract the columns of a table in an image. I have managed to successfully identify the vertical regions of interest, as shown here: ![image description](https://i.stack.imgur.com/6u4c5m.jpg)

My problem is that when I try to extract and save those regions of interest, I get the 6 vertical lines of the border of the bounding rectangle, as opposed to the region in between them. This is the code I am using:

```
import cv2
import numpy as np

image = cv2.imread('x.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,50))
vertical_mask = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=1)

cnts = cv2.findContours(vertical_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
for c in cnts:
    cv2.drawContours(image, [c], -1, (36,255,12), -1)

idx = 0
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    idx += 1
    new_img = image[y:y+h, x:x+w]
    cv2.imwrite(str(idx) + '.png', new_img)

cv2.imshow("im.png", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

This is the image of the right-most border; as you can see, there is some text: ![image description](https://i.stack.imgur.com/ArFPd.png)

Any ideas as to what might be going on?
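
A minimal sketch, not the poster's exact code: after finding the vertical separators, sort their bounding boxes left to right and crop the area *between* consecutive separators (the column content) instead of the separators themselves. 'x.png' is the same hypothetical input as above.

```
import cv2

image = cv2.imread('x.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 50))
mask = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]

boxes = sorted(cv2.boundingRect(c) for c in cnts)        # left-to-right by x
for i, (left, right) in enumerate(zip(boxes, boxes[1:])):
    x1, _, w1, _ = left
    x2, _, _, _ = right
    column = image[:, x1 + w1:x2]                        # region between two separators
    cv2.imwrite('column_%d.png' % i, column)
```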

Trackbars sometimes fail to show up

I have been using cv2's trackbar GUI functionality in Python for a couple of months now, with no problems until today. I had been using OpenCV 4.2.0, but switched to the most recent version (4.3.x) today after these problems began. Upgrading the version did not fix the problems. I can't provide much code because of IP, but the problem is not reproducible anyway - the code runs fine on my coworker's computer, and occasionally works on mine today. Like I said, before today everything worked fine on this exact same code. No changes were made today and it just started having trouble.

```
windowName = "Whatever you want to call the window"
cv2.namedWindow(windowName, cv2.WINDOW_NORMAL)
cv2.moveWindow(windowName, 0, 0)
```

Normally, this causes my image window with trackbars to open at a reasonable size, bound to the top-left corner of my screen. Starting today, the window is still bound to the corner, but is very strangely sized (long and slender in the vertical direction). Some trackbar title text is visible, and the image is visible but obviously very skewed. Stretching out the window or hitting maximize makes the window & image normal, but when I do that, there are no trackbars present - just all the trackbar titles in a useless arrangement. Any ideas?

![The way the trackbar window naturally opens](/upfiles/15954512768767667.png)

![The trackbar window after maximizing](/upfiles/15954512969320256.png)

opencv_videoio_gstreamer420_64.dll load failed +WINRT error

I'm trying to make a simple-ish image recognition and processing program in OpenCV 4.2.0 and C++ in VS2019. The problem is: it worked fine up to an **undefined moment**, and then suddenly I stopped seeing any image output at all in the windows that OpenCV creates. It should capture images from the webcam and do all the processing, but all it shows now is a blank window, as if there is no image at all. However, it passes error checking and all processing, and the frame logging reports every step as successful (e.g. image captured, processing done, etc.). The camera capture **is** definitely running, as I can see the FPS change in the frequency of the log output when the camera FPS changes (when I point the camera at a light source, the FPS rises and therefore the "frame captured" message is written more frequently). When I check the init output in the console, I see the following messages:
```
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\videoio\src\videoio_registry.cpp (187) cv::`anonymous-namespace'::VideoBackendRegistry::VideoBackendRegistry VIDEOIO: Enabled backends(7, sorted by priority): FFMPEG(1000); GSTREAMER(990); INTEL_MFX(980); MSMF(970); DSHOW(960); CV_IMAGES(950); CV_MJPEG(940)
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\videoio\src\backend_plugin.cpp (353) cv::impl::getPluginCandidates Found 2 plugin(s) for GSTREAMER
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\videoio\src\backend_plugin.cpp (172) cv::impl::DynamicLib::libraryLoad load C:\opencv\build\x64\vc15\bin\opencv_videoio_gstreamer420_64.dll => FAILED
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\videoio\src\backend_plugin.cpp (172) cv::impl::DynamicLib::libraryLoad load opencv_videoio_gstreamer420_64.dll => FAILED
```
In the Visual Studio debug console, there is also one error line among the module load lines:
```
Exception thrown at 0x00007FFD4F37A839 (KernelBase.dll) in oCVCookies.exe: WinRT originate error - 0xC00D36B3 : 'The stream number provided was invalid.'.
```
From what I can tell, it tries to use opencv_videoio_gstreamer420_64.dll, but there is no such library in the OpenCV folder. But somehow it worked before. And it's not related to the code itself either, as when I create dummy code that ONLY opens the capture, captures a frame and shows it, I encounter the same problem. Again, it worked **before a certain moment**, but it just stopped working after an unknown event. Reinstalling OpenCV did not help. The dummy code I used was something like the following:
```
Mat camframe;
VideoCapture cap;
cap.open(CAP_ANY, 0);
for (;;) {
    cap.read(camframe);
    imshow("feed", camframe);
    waitKey(1);
}
```

cv2.imread() gives None, but the path/cwd is correct

Hello, I had this problem before and it resolved itself without any clear reason: I had my Jupyter Notebook tab open, didn't change anything, and then it worked. I have already read articles and other threads on this, but couldn't find a solution; just look at my screenshot:

```
PATH = "savefig/plotXBIC_singlecell/01.png"
img = cv2.imread(PATH)
print(img)
print(path.exists(PATH))
```

Output:

```
None
True
```

![image description](/upfiles/15949332709427477.png)

On a PyImageSearch blog post I read that this can happen with badly encoded images, such as some JPEGs. But that doesn't make sense here: it already worked a few hours ago, and I can display the image with plt.imshow(). (Not with cv2.imshow(), which crashes my kernel.)
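
A minimal diagnostic sketch (it reuses the relative path from the question): check the working directory, try the absolute path, and decode the file's raw bytes with imdecode, which sidesteps some path-handling problems.

```
import os
import cv2
import numpy as np

PATH = "savefig/plotXBIC_singlecell/01.png"
print(os.getcwd())                                  # where the relative path is resolved from
print(os.path.abspath(PATH), os.path.exists(PATH))

img = cv2.imread(os.path.abspath(PATH))
if img is None and os.path.exists(PATH):
    # fall back to decoding the file's bytes directly
    data = np.fromfile(PATH, dtype=np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)

print(None if img is None else img.shape)
```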

ImportError: dlopen failed: "/data/data/ravag3r.imagecomparison.imagecomparison/files/app/_python_bundle/site-packages/cv2/cv2.so" is 64-bit instead of 32-bit

I just built an ImageComparison application developed with the Kivy framework. The application was built successfully by buildozer, but it closes instantly when opened, showing this error:

```
ImportError: dlopen failed: "/data/data/ravag3r.imagecomparison.imagecomparison/files/app/_python_bundle/site-packages/cv2/cv2.so" is 64-bit instead of 32-bit
```

How do I save a video multiple times in python

I am trying to save the video multiple times. If I use the first writer (the one created before the loop), it saves the video, but only once. And with the code below, it doesn't save the video at all.

```
import cv2
import time

cap = cv2.VideoCapture("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov")

width = int(cap.get(3))
height = int(cap.get(4))
fcc = cv2.VideoWriter_fourcc(*'XVID')
writer = cv2.VideoWriter('test.avi', fcc, 60.0, (width, height))
recording = False

while(1):
    ret, frame = cap.read()
    hms = time.strftime('%H:%M:%S', time.localtime())
    cv2.putText(frame, str(hms), (0, 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255))
    cv2.imshow('frame', frame)

    k = cv2.waitKey(30) & 0xff
    if k == ord('r'):
        path = 'test_' + str(hms) + '.avi'
        writer = cv2.VideoWriter(path, fcc, 60.0, (width, height))
    if recording:
        writer.write(frame)
    if k == ord('e'):
        print('record end')
        writer.release()

cap.release()
cv2.destroyAllWindows()
```
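
A minimal sketch (not the poster's code) of starting a new recording each time 'r' is pressed and stopping it with 'e'; every press of 'r' opens a fresh VideoWriter, so several files can be written in one run. The source URL and the 60 fps value are the same assumptions as in the question.

```
import cv2
import time

cap = cv2.VideoCapture("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov")
width, height = int(cap.get(3)), int(cap.get(4))
fcc = cv2.VideoWriter_fourcc(*'XVID')
writer = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)

    k = cv2.waitKey(30) & 0xff
    if k == ord('r') and writer is None:
        name = 'test_' + time.strftime('%H_%M_%S') + '.avi'
        writer = cv2.VideoWriter(name, fcc, 60.0, (width, height))
    if writer is not None:
        writer.write(frame)
    if k == ord('e') and writer is not None:
        writer.release()             # finish the current file
        writer = None                # ready for the next 'r'
    if k == 27:                      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```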

Conversion JPEG to raw image

Hi all, I have a requirement to convert a JPEG image to a raw image for further processing. Could you please guide me on how I can do this with OpenCV? Thanks in advance, Abhishek
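
A minimal Python sketch, assuming "raw image" means the decoded, uncompressed pixel buffer; the file names are placeholders. cv2.imread() already decodes the JPEG into a raw BGR array, which can then be written out as plain bytes (or handed to whatever processing follows).

```
import cv2

img = cv2.imread('input.jpg')            # hypothetical file name
if img is not None:                      # imread decodes the JPEG into raw BGR pixels
    print(img.shape, img.dtype)          # e.g. (height, width, 3) uint8
    img.tofile('input.raw')              # write the raw interleaved BGR bytes, no header
```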

accelerate OpenCV functions on gpu through CUDA

Dear all, I'm working on an image-processing Python algorithm using OpenCV. Since I have a lot of images to process, I'm trying to accelerate it on an NVIDIA Jetson card using CUDA. During the last two months I've tried many, many different solutions found on the Internet. The best solution I've found is to translate my Python algorithm into C++ and then use the accelerated functions provided in cv::cuda. **Is that the fastest solution?**

But I still need to accelerate other functions (cv::undistort today, but it could be others in the near future). **How can I accelerate OpenCV functions using CUDA?**

I'm currently working on building my own accelerated version of the undistort function from the source code, but it is more complex than I thought. **How can I easily build an OpenCV function after modifying it?**

Thanks,
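
A minimal Python sketch of one common way to move undistortion to the GPU, assuming an OpenCV build whose Python bindings expose cv2.cuda: compute the remap tables once on the CPU with initUndistortRectifyMap(), then apply them per frame with cv2.cuda.remap(). The camera matrix, distortion coefficients and frame below are made-up placeholder values.

```
import cv2
import numpy as np

K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])   # hypothetical intrinsics
dist = np.array([-0.2, 0.05, 0., 0., 0.])                            # hypothetical distortion
size = (1280, 720)

# CPU: build the undistortion maps once
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, size, cv2.CV_32FC1)
gmap1 = cv2.cuda_GpuMat(); gmap1.upload(map1)
gmap2 = cv2.cuda_GpuMat(); gmap2.upload(map2)

# GPU: apply the maps to each frame (synthetic frame here in place of a real capture)
frame = np.zeros((720, 1280, 3), np.uint8)
gframe = cv2.cuda_GpuMat(); gframe.upload(frame)
gout = cv2.cuda.remap(gframe, gmap1, gmap2, cv2.INTER_LINEAR)
undistorted = gout.download()
```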

Accelerate undistort() with parallelism on GPU

Dear all, I'm currently using the OpenCV accelerated functions from cv::cuda. I also want to use the **undistort()** function, which is already parallelised on the CPU. However, this function still takes too much time, and I want to compile it for GPU use. I'm trying to make my own version of this function, but I got stuck on **getInitUndistortRectifyMapComputer()**. This function calls itself (through the dispatch macro) in **undistort.dispatch.cpp**:

```
namespace {
Ptr getInitUndistortRectifyMapComputer(Size _size, Mat &_map1, Mat &_map2, int _m1type,
                                       const double* _ir, Matx33d &_matTilt, double _u0, double _v0,
                                       double _fx, double _fy, double _k1, double _k2, double _p1,
                                       double _p2, double _k3, double _k4, double _k5, double _k6,
                                       double _s1, double _s2, double _s3, double _s4)
{
    CV_INSTRUMENT_REGION();
    CV_CPU_DISPATCH(getInitUndistortRectifyMapComputer,
                    (_size, _map1, _map2, _m1type, _ir, _matTilt, _u0, _v0, _fx, _fy,
                     _k1, _k2, _p1, _p2, _k3, _k4, _k5, _k6, _s1, _s2, _s3, _s4),
                    CV_CPU_DISPATCH_MODES_ALL);
}
} // namespace
```

I cannot find where the function is actually implemented. Also, studying **ParallelLoopBody**, **parallel_for_** and **CV_CPU_DISPATCH** led me nowhere.

**How can I find the source code so I can compile my own version of the function?**

**Is there another way to get the same result using other** (OpenCV or not) **stuff?**

Thanks,

NMSBoxes output with top_k parameter

Hi all, the `top_k` parameter in `cv2.dnn.NMSBoxes` corresponds to the maximum number of bounding boxes to return, right? Like if we know how many objects we expect in the image. If that's the case, I get an unexpected output when setting top_k to a value > 0. With the default value (< 0), I get a sensible set of bounding boxes (on the left below). However, with the same parameters except top_k set to an actual value, here 4, I get only the top 2 detections and not the top 4. Why is that? The bounding boxes with indices 4, 100, 72 and 17 are indeed the ones I want, so I can use top_k < 0 and select the top 4 afterwards, but I thought setting top_k in the NMS could stop the NMS earlier and thus save time. I am using:

- opencv-python-headless 4.3.0.36 (installed via pip)
- python 3.7.4 on a win10 machine

Thanks!

![image description](/upfiles/1595511577150060.png)
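
A minimal sketch of the workaround described in the question (made-up boxes and scores, not the poster's detections): run NMS with the default top_k and slice the first K surviving indices afterwards.

```
import cv2
import numpy as np

# made-up boxes ([x, y, w, h]) and confidence scores, purely for illustration
boxes = [[10, 10, 50, 80], [12, 11, 50, 80], [200, 40, 60, 60], [205, 42, 60, 60]]
scores = [0.9, 0.85, 0.8, 0.75]
K = 2

idx = cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)          # score_threshold, nms_threshold
keep = [int(i) for i in np.array(idx).reshape(-1)[:K]]   # first K surviving indices
print(keep)
```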

how to use pyopencv_to function

I am trying to create a wrapper with pybind11 so that part of the code is written in C++, and I need to use `pyopencv_to` inside my function, but when I build my project I get this error: `error: ‘pyopencv_to’ was not declared in this scope pyopencv_to(frame, frame_gpu);`. How can I use the pyopencv_to function?

```
#include <torch/script.h>   // One-stop header.
#include <pybind11/pybind11.h>
#include <opencv2/core/cuda.hpp>

at::Tensor gpumat2torch(PyObject* frame) {
    cv::cuda::GpuMat frame_gpu;
    pyopencv_to(frame, frame_gpu);

    at::ScalarType torch_type = get_torch_type(frame_gpu.type());
    auto options = torch::TensorOptions().dtype(torch_type).device(torch::kCUDA);

    std::vector<int64_t> sizes = {1,
                                  static_cast<int64_t>(frame_gpu.channels()),
                                  static_cast<int64_t>(frame_gpu.rows),
                                  static_cast<int64_t>(frame_gpu.cols)};

    return torch::from_blob(frame_gpu.data, sizes, options);
}

PYBIND11_MODULE(gpumat, m) {
    m.doc() = "gpumat2torch function!";
    m.def("gpumat2torch", &gpumat2torch, "A function to convert GpuMat to CudaTorch");
}
```

Enabling Address Sanitizer for OpenCV builds

Is there a way to enable Address Sanitizer for OpenCV builds? I've tried adding `OPENCV_EXTRA_CXX_FLAGS="-fsanitize=address"`; however, I receive configuration errors claiming:

> Compiler doesn't support baseline optimization flags

I've also tried adding custom build configs like the below:

```
# AddressSanitizer
set(CMAKE_C_FLAGS_ASAN
    "-fsanitize=address -g -O1"
    CACHE STRING "Flags used by the C compiler during AddressSanitizer builds."
    FORCE)
set(CMAKE_CXX_FLAGS_ASAN
    "-fsanitize=address -g -O1"
    CACHE STRING "Flags used by the C++ compiler during AddressSanitizer builds."
    FORCE)
```

This makes it further, but upon linking libopencv_core.so I get the following error:

```
`_ZZN2cv10AutoBufferIiLm4EEixEmE19__PRETTY_FUNCTION__' referenced in section `.data.rel.local..LASAN0' of CMakeFiles/opencv_core.dir/src/umatrix.cpp.o: defined in discarded section `.rodata._ZZN2cv10AutoBufferIiLm4EEixEmE19__PRETTY_FUNCTION__[_ZZN2cv10AutoBufferIiLm4EEixEmE15__cv_check__137]' of CMakeFiles/opencv_core.dir/src/umatrix.cpp.o
`_ZZN2cv10AutoBufferINS_5RangeELm136EEixEmE19__PRETTY_FUNCTION__' referenced in section `.data.rel.local..LASAN0' of CMakeFiles/opencv_core.dir/src/umatrix.cpp.o: defined in discarded section `.rodata._ZZN2cv10AutoBufferINS_5RangeELm136EEixEmE19__PRETTY_FUNCTION__[_ZZN2cv10AutoBufferINS_5RangeELm136EEixEmE15__cv_check__137]' of CMakeFiles/opencv_core.dir/src/umatrix.cpp.o
collect2: error: ld returned 1 exit status
modules/core/CMakeFiles/opencv_core.dir/build.make:1614: recipe for target 'lib/libopencv_core.so.3.4.10' failed
make[2]: *** [lib/libopencv_core.so.3.4.10] Error 1
CMakeFiles/Makefile2:2258: recipe for target 'modules/core/CMakeFiles/opencv_core.dir/all' failed
make[1]: *** [modules/core/CMakeFiles/opencv_core.dir/all] Error 2
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2
```

Thanks for your time!

Compiling OpenCv with CUDA support in Windows 10

Attempting to compile OpenCV with CUDA support in Windows 10. CUDA is installed correctly. I keep getting the following error(s) when compiling in Visual Studio. Can anyone suggest next troubleshooting steps?

```
Determining if the include file pthread.h exists failed with the following output:
Change Dir: D:/opencv-master/opencv-master/build/CMakeFiles/CMakeTmp

Run Build Command(s):C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/MSBuild/15.0/Bin/MSBuild.exe cmTC_8068f.vcxproj /p:Configuration=Debug /p:Platform=x64 /p:VisualStudioVersion=15.0 /v:m &&
Microsoft (R) Build Engine version 15.9.21+g9802d43bc3 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

Using triplet "x64-windows" from "D:\vcpkg-master\vcpkg-master\installed\x64-windows\"
Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27042 for x64
Copyright (C) Microsoft Corporation. All rights reserved.

cl /c /I"D:\vcpkg-master\vcpkg-master\installed\x64-windows\include" /Zi /W3 /WX- /diagnostics:classic /MP /Od /Ob0 /Oi /D WIN32 /D _WINDOWS /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /D "CMAKE_INTDIR=\"Debug\"" /D _MBCS /Gm- /RTC1 /MDd /GS /Gy /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /Fo"cmTC_8068f.dir\Debug\\" /Fd"cmTC_8068f.dir\Debug\vc141.pdb" /Gd /TC /errorReport:queue /bigobj "D:\opencv-master\opencv-master\build\CMakeFiles\CMakeTmp\CheckIncludeFile.c"
CheckIncludeFile.c
D:\opencv-master\opencv-master\build\CMakeFiles\CMakeTmp\CheckIncludeFile.c(1): fatal error C1083: Cannot open include file: 'pthread.h': No such file or directory [D:\opencv-master\opencv-master\build\CMakeFiles\CMakeTmp\cmTC_8068f.vcxproj]
```

Can VideoCapture open a multicast video stream over the UDP protocol?

I need an example of using VideoCapture() with a multicast video stream over the UDP protocol as the source, the same way that VLC media player works with the URL udp://@239.1.1.100:5001. Is this possible?
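
A minimal sketch, assuming an OpenCV build with the FFMPEG backend: pass the UDP URL straight to VideoCapture. The multicast address is the one from the question; whether the '@' form is accepted depends on the backend, so plain 'udp://239.1.1.100:5001' is a common variant to try as well.

```
import cv2

cap = cv2.VideoCapture('udp://@239.1.1.100:5001', cv2.CAP_FFMPEG)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('stream', frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```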

Getting java.lang.UnsatisfiedLinkError in an Android project

couldn't find "libopencv_java.so"

How to save video using thread

I am making a program that records a video for 10 seconds when an object is detected.
If 'out' is created before the 'while' loop, the video is saved only once.
When I use the code below, the .avi file is created but nothing is saved to it.
Is this the right way to use threads, or is there another way?

```
import cv2
import numpy as np
import time
import threading

def thread():
    global recording, out
    recording = True
    print("recording start")
    time.sleep(10)
    recording = False
    out.release()
    print("recording end")

global recording, out
cap = cv2.VideoCapture("rtsp://128.1.1.6/profile2/media.smp")
fgbg = cv2.createBackgroundSubtractorMOG2(varThreshold=200, detectShadows=0)
fps = 60
width = int(cap.get(3))
height = int(cap.get(4))
fcc = cv2.VideoWriter_fourcc(*'XVID')
recording = False

while(1):
    ret, frame = cap.read()
    hms = time.strftime('%H:%M:%S', time.localtime())
    hmss = time.strftime('%H_%M_%S', time.localtime())

    fgmask = fgbg.apply(frame)
    nlabels, labels, stats, centroids = cv2.connectedComponentsWithStats(fgmask)
    cv2.putText(frame, str(hms), (0, 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255))

    for index, centroid in enumerate(centroids):
        if stats[index][0] == 0 and stats[index][1] == 0:
            continue
        if np.any(np.isnan(centroid)):
            continue

        x, y, width, height, area = stats[index]
        centerX, centerY = int(centroid[0]), int(centroid[1])

        if area > 50:  # minimum size detected
            cv2.circle(frame, (centerX, centerY), 1, (0, 255, 0), 2)
            cv2.rectangle(frame, (x, y), (x + width, y + height), (0, 0, 255))
            cv2.putText(frame, str(area), (centerX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255))

        # detected size
        if area > 1000 and not recording:
            t1 = threading.Thread(target=thread)
            path = 'E:\\test_' + str(hmss) + '.avi'
            out = cv2.VideoWriter(path, fcc, fps, (width, height))
            recording = True
            print(str(hms))
            t1.start()

    cv2.imshow('mask', fgmask)
    cv2.imshow('frame', frame)

    if recording:
        out.write(frame)

    k = cv2.waitKey(1) & 0xff

cap.release()
out.release()
cv2.destroyAllWindows()
```
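
A minimal sketch (not the poster's code) of one thread-free alternative: when motion is detected, open a new VideoWriter and keep writing frames in the main loop until a 10-second deadline passes. The camera index, fps and the `motion_detected(frame)` helper are placeholders for the poster's own setup and background-subtraction logic.

```
import time
import cv2

def motion_detected(frame):
    return False                                 # placeholder for the real detection test

cap = cv2.VideoCapture(0)
fcc = cv2.VideoWriter_fourcc(*'XVID')
size = (int(cap.get(3)), int(cap.get(4)))
writer, record_until = None, 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None and motion_detected(frame):
        writer = cv2.VideoWriter(time.strftime('clip_%H_%M_%S.avi'), fcc, 20.0, size)
        record_until = time.time() + 10          # record for 10 seconds
    if writer is not None:
        writer.write(frame)
        if time.time() >= record_until:
            writer.release()                     # clip finished; wait for the next detection
            writer = None
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == 27:              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```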

Find first non-black pixel

I have a very simple input image: it's fully black and has a tiny bright red dot somewhere in it. The colors are "pure", meaning the blacks are (0,0,0) and the red dot is at full 255 red intensity. Now I would like to find the coordinates of the red dot. I thought this would be rather simple, since all I need is the position of the first non-black pixel in my image. That would be good enough for my application. But I have searched quite a while and could not find a simple and fast way to do this with OpenCV. PS: I know that OpenCV has blob detection, but that seems waaay too powerful (and expensive) for this simple task.
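
A minimal sketch using cv2.findNonZero, with a synthetic stand-in for the described image: on a single-channel mask it returns the coordinates of every non-zero pixel, so the first entry is the first non-black pixel in row order.

```
import cv2
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)          # stand-in for the black input image
img[200, 300] = (0, 0, 255)                      # a single pure-red dot

mask = img.max(axis=2)                           # non-zero wherever any channel is lit
pts = cv2.findNonZero(mask)                      # Nx1x2 array of (x, y) points, or None
if pts is not None:
    x, y = pts[0][0]
    print('first non-black pixel at', x, y)      # -> 300 200
```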

OpenCV HoughCircles detection with refreshing window

I need some help with this code!

```
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

Mat src, dst, adaptDst;
int block_size, C;

void adaptThreshAndShow()
{
    adaptiveThreshold(src, adaptDst, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, block_size, C);
    imshow("Adaptive Thresholding", adaptDst);
}

void adaptiveThresholding1(int, void*)
{
    static int prev_block_size = block_size;
    if ((block_size % 2) == 0) {
        if (block_size > prev_block_size) {
            block_size++;
        }
        if (block_size < prev_block_size) {
            block_size--;
        }
    }
    if (block_size <= 1) {
        block_size = 3;
    }
    adaptThreshAndShow();
}

void adaptativeThresholding2(int, void*)
{
    adaptThreshAndShow();
}

int main()
{
    src = imread("lena.png", IMREAD_GRAYSCALE);
    dst = src.clone();

    namedWindow("Source", WINDOW_FREERATIO);
    namedWindow("Adaptive Thresholding", WINDOW_FREERATIO);

    block_size = 11;
    C = 2;
    createTrackbar("block_size", "Adaptive Thresholding", &block_size, 25, adaptiveThresholding1);
    createTrackbar("C", "Adaptive Thresholding", &C, 255, adaptativeThresholding2);
    adaptiveThresholding1(block_size, nullptr);
    adaptativeThresholding2(C, nullptr);

    vector<Vec3f> circles;
    HoughCircles(adaptDst, circles, HOUGH_GRADIENT, 1,
                 adaptDst.rows / 16,  // change this value to detect circles with different distances to each other
                 1, 1, 5, 5);

    for (size_t i = 0; i < circles.size(); i++) {
        Vec3i c = circles[i];
        Point center = Point(c[0], c[1]);
        // circle center
        circle(src, center, 1, Scalar(0, 100, 100), 3, LINE_AA);
        // circle outline
        int radius = c[2];
        circle(src, center, radius, Scalar(255, 0, 255), 3, LINE_AA);
    }

    moveWindow("Source", 0, 0);
    moveWindow("Adaptive Thresholding", 2 * src.cols, 0);
    imshow("Source", src);
    cout << "Press any key to exit...\n";
    waitKey(0);
    return 0;
}
```

Every time I move a trackbar, I need the detection in the source window to refresh. If I change the values of the adaptive threshold, I should get a new detection in the source window, but I have a problem: every time I change the values, the adaptive threshold takes the new source image and changes it. Sorry for my English. Sincerely, SlyDark

