Channel: OpenCV Q&A Forum - RSS feed
Viewing all 41027 articles

The result obtained by cuda::dft is different from cv::dft

I'm trying to speed up cv::dft by using the GPU version, but I find that the result obtained by cv::cuda::dft is different from cv::dft. Here's the code.

CPU version:

    Mat t = imread(...); // read the src image
    Mat f, dst;
    Mat plane_h[] = { Mat_<float>(t), Mat::zeros(t.size(), CV_32F) };
    merge(plane_h, 2, t);
    merge(plane_h, 2, f);
    cv::dft(t, f, DFT_SCALE | DFT_COMPLEX_OUTPUT);
    cv::dft(f, dst, DFT_INVERSE | DFT_REAL_OUTPUT);

GPU version:

    Mat t = imread(...); // read the src image
    cuda::GpuMat t_dev, f_dev, dst_dev;
    Mat dst;
    t_dev.upload(t);
    cuda::GpuMat plane_h[] = { t_dev, cuda::GpuMat(t_dev.size(), CV_32FC1) };
    cuda::merge(plane_h, 2, t_dev);
    cuda::merge(plane_h, 2, f_dev);
    cuda::dft(t_dev, f_dev, t_dev.size(), DFT_SCALE);
    cuda::dft(f_dev, dst_dev, t_dev.size(), DFT_COMPLEX_INPUT | DFT_REAL_OUTPUT);
    dst_dev.download(dst);

In the CPU version, 'dst' is equal to 't'; in the GPU version, 'dst' is totally wrong. I also found that 'f_dev' in the GPU version is equal to 'f' in the CPU version.
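A quick sanity check of the scaling convention involved here: a forward transform with 1/N scaling (what DFT_SCALE does) followed by an unscaled inverse must reproduce the input. NumPy's FFT can illustrate the round trip (as a stand-in only; this does not exercise the OpenCV/CUFFT code paths in question):

```python
import numpy as np

# Forward DFT with 1/N scaling (the analogue of cv::DFT_SCALE),
# then an inverse that compensates: the round trip returns the input.
x = np.arange(16.0).reshape(4, 4)
fwd = np.fft.fft2(x) / x.size          # scaled forward transform
back = np.fft.ifft2(fwd) * x.size      # inverse, undoing the scaling
print(np.allclose(back.real, x))       # True
```

If the GPU result differs only by a constant factor, a mismatch in where the 1/N scaling is applied between the forward and inverse calls is a likely suspect.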

Object Detection and Avoidance:Drone

I'm trying to build a project in which a drone detects the obstacles in its way and tries to avoid them: mostly trees, birds, electric poles, and wires. I have also gone through Andy Barry's push-broom method, but it has some limitations, as it only detects an object or obstacle at a distance of about 10 m. I'm new to this; can anyone help me and guide me through this? Thanks in advance!!

process has died, terminate called after throwing an instance of 'std::runtime_error', what(): Unsupported type of detector SURF

After I run:

    roslaunch tiago_opencv_tutorial keypoint_tutorial.launch

the error output is:

    ... logging to /home/dell/.ros/log/44ec5218-d183-11ea-b335-54353005d2aa/roslaunch-skdr-5443.log
    Checking log directory for disk usage. This may take a while.
    Press Ctrl-C to interrupt
    Done checking log file disk usage. Usage is <1GB.

    started roslaunch server http://skdr:44807/

    SUMMARY
    ========

    PARAMETERS
     * /rosdistro: melodic
     * /rosversion: 1.14.6

    NODES
      /
        gui (tiago_opencv_tutorial/gui_find_keypoints.py)
        vision_code (tiago_opencv_tutorial/find_keypoints)

    ROS_MASTER_URI=http://localhost:11311

    process[gui-1]: started with pid [5474]
    process[vision_code-2]: started with pid [5475]
    terminate called after throwing an instance of 'std::runtime_error'
      what(): Unsupported type of detector SURF
    [vision_code-2] process has died [pid 5475, exit code -6, cmd /home/dell/tiago_dual_public_ws/devel/lib/tiago_opencv_tutorial/find_keypoints __name:=vision_code __log:=/home/dell/.ros/log/44ec5218-d183-11ea-b335-54353005d2aa/vision_code-2.log].
    log file: /home/dell/.ros/log/44ec5218-d183-11ea-b335-54353005d2aa/vision_code-2*.log
    ^C[gui-1] killing on exit
    shutting down processing monitor...
    ... shutting down processing monitor complete
    done

Here is my launch file: >

LNK1104 opencv_gapi440d.lib is missing

Hi everyone, I have the following problem: I want to use the example "Feature Matching with FLANN". Unfortunately I get this error message in Visual Studio 2017: LNK1104: ...\opencv\build\lib\Debug\opencv_gapi440d.lib cannot be opened. The folder structure ...\lib\Debug\... does not exist on my system, and the file opencv_gapi440d.lib does not exist either. I use OpenCV version 4.4.0 together with the "opencv_contrib" repository (29.07.2020).

Disparity Map vs. Triangulating Points

Hello, for my project I am only concerned with measuring 20 key points** in an image. From these 20 points I need their 3D location relative to my camera origin in order to make measurements. For this situation, does it make more sense to simply capture these 20 points and then pass them to triangulatePoints? I have tried obtaining a disparity map of my test space; however, I'm thinking that for my problem it makes more sense to just use triangulatePoints. Is there a benefit to using disparity maps vs. triangulating the points? ** The images that I will be capturing come from 4 cameras pointing at a person walking. The key points are the joint positions in each of the camera frames.
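For context, triangulating a sparse set of matched points is essentially a linear (DLT) triangulation, which is also the idea behind cv::triangulatePoints. A minimal NumPy sketch of that idea follows; the two camera matrices and the 3D point are made up for illustration:

```python
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point from two 3x4 camera
    projection matrices P1, P2 and the matched pixel coordinates."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null-space of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two hypothetical cameras: identity intrinsics, unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
pt1 = X_true[:2] / X_true[2]
pt2 = (X_true[:2] + np.array([-1.0, 0.0])) / X_true[2]
X_est = triangulate_dlt(P1, P2, pt1, pt2)
print(X_est)  # close to [0.5, 0.2, 4.0]
```

With only 20 joint positions per frame, triangulating them directly like this (or with cv::triangulatePoints) avoids computing a dense disparity map you would mostly throw away.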

compare two pictures of color

Hi, what's the best method to compare two pictures of mainly black color (or two pictures of mainly red color)? I'm looking at a specific location in an image and hoping to see a black-colored wire (or a red-colored wire) in that location. If the wire is not in that location I would see either a gray background (or a green background, if looking for a red cable). What I want to ensure is that every time I check the location I see a black shade (which indicates a black wire), and that no gray background is seen. If all-gray background is seen, this indicates the wire is not present; if some gray background is seen, this indicates the wire might be present but is not in exactly the correct location. Bearing in mind that in my picture black can sometimes show up as dark gray: what's the best way to distinguish between the cable and the background, and what's the best method to ensure there is no background in the image? I'm looking at template matching, but from what I see it does not quite compare colors. Can I take the average value of colors in an image and do a comparison against a master image? How could I do this, and would it work? All help greatly appreciated, thanks. [C:\fakepath\good.png](/upfiles/15960329754265884.png) [C:\fakepath\bad.png](/upfiles/15960329957167057.png) The picture above has black wires turning at an almost 90-degree angle; this is bad. The first picture has wires turning gradually; this is good. I'd like to inspect each and ensure the wires turn gradually, and indicate a failure if they do not. I was thinking of first identifying and locating the component (shown in green below) and then selecting 2 ROIs relative to that green area (shown in red), then comparing the color in these red boxes against the colors of the backgrounds (with no wires). Is this possible, and how could I do it? I know how to identify the 2 ROIs but I'm not sure how I can compare the colors.
Also, one red box has a background that is very similar to the cable. [C:\fakepath\test.png](/upfiles/15960339136330892.png)
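The average-color idea from the question can be sketched like this; the ROI coordinates, reference colors, and the tiny synthetic image are all made up for illustration:

```python
import numpy as np

def roi_mean_distance(img, roi, reference_bgr):
    """Mean BGR color of a rectangular ROI and its Euclidean distance
    to a reference color. roi = (x, y, w, h). Names are illustrative."""
    x, y, w, h = roi
    patch = img[y:y + h, x:x + w].reshape(-1, 3).astype(float)
    mean = patch.mean(axis=0)
    return mean, float(np.linalg.norm(mean - reference_bgr))

# Synthetic 8x8 "image": a dark wire-like block on a gray background.
img = np.full((8, 8, 3), 128, dtype=np.uint8)   # gray background
img[2:6, 2:6] = (20, 20, 20)                    # dark "wire" region
black = np.array([0, 0, 0])
wire_mean, d_wire = roi_mean_distance(img, (2, 2, 4, 4), black)
bg_mean, d_bg = roi_mean_distance(img, (0, 0, 2, 2), black)
print(d_wire, d_bg)  # the wire ROI is much closer to black than the background
```

Thresholding the distance (rather than requiring an exact match) would absorb the "black sometimes shows up as dark gray" problem; the threshold itself would have to be tuned on real good/bad images.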

Finding corner points for perspective transform without clear landmarks.

Hi, I posted [this](https://answers.opencv.org/question/232957/apply-getperspectivetransform-and-warpperspective-for-bird-eye-view-python/) question two days ago, which led me to a new problem to solve. I need to define 4 trapezoid corner points on an image (a golf lawn) without any distinct landmarks on it, except the balls and the flag. All visible balls and the flag must be inside the trapezoid. @kbarni pointed me to some possible solutions but I'm not sure what to do now. Also, I'm pretty bad at math and I have a hard time understanding the formulas explaining computer vision concepts. Here are my thoughts about how I could eventually get to the corner points, but I have no idea how to do it or if it's even possible.

---

#Input

This is the image of interest: ![image description](/upfiles/15960116932162805.jpg)

---

#Goal

The goal is to find the corners with the image as is (no extra markers placed in real life). These are approximately the corners I'm looking for; here I've set the corners by hand, which of course isn't 100% accurate. The corners will be used with `getPerspectiveTransform` and `warpPerspective` to get a top-down view. I don't necessarily need the extra padding between the outermost balls and the lines. ![image description](/upfiles/15960137181805473.jpg)

---

#Solution?

I do know some variables from the image. I do some ball detection using YOLO and mark the balls. The model does a decent job at the moment and will get better, so the balls will be marked pretty accurately (I hope). ![image description](/upfiles/15960152402322794.jpg) With the detection I can get/know:

- The balls' width/height in pixels/mm
- The balls' centroids

At the moment I don't detect the flag, but I will soon.
In real life a golf ball has a diameter of 42.67mm and the hole 107.95mm.

---

What I was thinking is that I could get a trapezoid from the nearest (biggest) ball's bottom corners to the furthest (smallest) ball's bottom corners (or something similar) and somehow transform and apply it to the ground and align it correctly to get my goal corner points. Is this in any way possible with the given variables, and what do you think about this idea? Please tell me if you need more information. Help would be greatly appreciated!

#Edit/Results

Here are my results after applying @kbarni's answer below, and they are pretty awesome imo, even if there is still some work pending to get this done perfectly. I changed the provided formula a little bit: `(0, C.y),(W,C.y),(W/2+R*W,F.y),(W/2-R*W,F.y)` where `C.y` is taken from a bottom corner and `F.y` from a top corner to have all balls inside the trapezoid. ![image description](/upfiles/15960353661699382.jpg) And here is the top-down view result at the moment. ![image description](/upfiles/15960355226537811.jpg)
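The corner formula from the edit can be evaluated directly; W, R, C.y and F.y below are placeholder values, not measurements from the images:

```python
def trapezoid_corners(W, R, c_y, f_y):
    """Corners (0, C.y), (W, C.y), (W/2 + R*W, F.y), (W/2 - R*W, F.y)
    from the formula in the edit. W: image width, R: a ratio controlling
    how narrow the top edge is, c_y/f_y: bottom/top y coordinates."""
    return [
        (0, c_y),
        (W, c_y),
        (W / 2 + R * W, f_y),
        (W / 2 - R * W, f_y),
    ]

corners = trapezoid_corners(W=1000, R=0.2, c_y=700, f_y=150)
print(corners)  # approximately [(0, 700), (1000, 700), (700.0, 150), (300.0, 150)]
```

These four points, ordered consistently with the four destination corners, are exactly what `getPerspectiveTransform` expects.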

Compiling Tracker from tutorial leads to error: ‘Tracker’ was not declared in this scope

Hello, I have been using OpenCV in Python without issues. I am now trying to switch to C++, which I am new to. I have recently reinstalled OpenCV and compiled it with [opencv_contrib](https://github.com/opencv/opencv_contrib) following the instructions in the readme. I am using the tracking features of OpenCV for my project. When I go to make my file I get the following error:

    /home/sydney/Desktop/projects/Leaf_Tracking_cpp/tracker.cpp: In function ‘int main(int, char**)’:
    /home/sydney/Desktop/projects/Leaf_Tracking_cpp/tracker.cpp:33:9: error: ‘Tracker’ was not declared in this scope
       33 | Ptr tracker;
          |     ^~~~~~~
    /home/sydney/Desktop/projects/Leaf_Tracking_cpp/tracker.cpp:33:16: error: template argument 1 is invalid
       33 | Ptr tracker;
          |       ^
    /home/sydney/Desktop/projects/Leaf_Tracking_cpp/tracker.cpp:37:19: error: ‘Tracker’ is not a class, namespace, or enumeration
       37 | tracker = Tracker::create(trackerType);
          |           ^~~~~~~
    /home/sydney/Desktop/projects/Leaf_Tracking_cpp/tracker.cpp:76:12: error: ‘selectROI’ was not declared in this scope; did you mean ‘select’?
       76 | bbox = selectROI(frame, false);
          |        ^~~~~~~~~
          |        select
    /home/sydney/Desktop/projects/Leaf_Tracking_cpp/tracker.cpp:82:12: error: base operand of ‘->’ is not a pointer
       82 | tracker->init(frame, bbox);
          |        ^~
    /home/sydney/Desktop/projects/Leaf_Tracking_cpp/tracker.cpp:91:26: error: base operand of ‘->’ is not a pointer
       91 | bool ok = tracker->update(frame, bbox);
          |                  ^~
    make[2]: *** [CMakeFiles/tracker.dir/build.make:63: CMakeFiles/tracker.dir/tracker.cpp.o] Error 1
    make[1]: *** [CMakeFiles/Makefile2:73: CMakeFiles/tracker.dir/all] Error 2
    make: *** [Makefile:84: all] Error 2

I notice that others have had the same error [here](https://answers.opencv.org/question/201817/tried-to-compile-tracker-from-tutorial-and-got-error-tracker-was-not-declared-in-this-scope-ptrtracker-tracker-trackerkcfcreate/) but it's not clear how they were able to get the code to compile.
One thing to note is that when I use the include from the tutorial I get an error that it can't find tracking.hpp, so I have replaced that line in the sample code with a different include. Another thing to note is that step 8 of the opencv_contrib readme says: "to run, linker flags to contrib modules will need to be added to use them in your code/IDE. For example to use the aruco module, `-lopencv_aruco` flag will be added." Here is my CMakeLists file:

    cmake_minimum_required(VERSION 3.1)
    # Enable C++11
    # cmake version 3.13.4
    find_package( OpenCV REQUIRED ) # I added this
    set(CMAKE_CXX_STANDARD 11)
    set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
    #SET(OpenCV_DIR /home/sydney/Desktop/projects/installation/OpenCV-master/lib/cmake/opencv4)
    add_executable( tracker tracker.cpp )
    target_link_libraries( tracker ${OpenCV_LIBS} )
    target_link_libraries( tracker ${-lopencv_tracking} )

I have put the flag into my CMakeLists.txt file like so: `target_link_libraries( tracker ${-lopencv_tracking} )`. Is this the proper way to link it? I think it is possible I have linked this incorrectly. Any suggestions on how to solve this would be greatly appreciated! Thank you!
EDIT: Here is my build info:

    General configuration for OpenCV 4.2.0 =====================================
      Version control:               4.2.0

      Extra modules:
        Location (extra):            /io/opencv_contrib/modules
        Version control (extra):     4.2.0

      Platform:
        Timestamp:                   2020-04-04T14:50:03Z
        Host:                        Linux 4.15.0-1028-gcp x86_64
        CMake:                       3.9.0
        CMake generator:             Unix Makefiles
        CMake build tool:            /usr/bin/gmake
        Configuration:               Release

      CPU/HW features:
        Baseline:                    SSE SSE2 SSE3
          requested:                 SSE3
        Dispatched code generation:  SSE4_1 SSE4_2 FP16 AVX
          requested:                 SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
          SSE4_1 (14 files):         + SSSE3 SSE4_1
          SSE4_2 (1 files):          + SSSE3 SSE4_1 POPCNT SSE4_2
          FP16 (0 files):            + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
          AVX (4 files):             + SSSE3 SSE4_1 POPCNT SSE4_2 AVX

      C/C++:
        Built as dynamic libs?:      NO
        C++ Compiler:                /usr/lib/ccache/compilers/c++ (ver 4.8.2)
        C++ flags (Release):         -Wl,-strip-all -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wsign-promo -Wuninitialized -Winit-self -Wno-delete-non-virtual-dtor -Wno-comment -Wno-missing-field-initializers -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
        C++ flags (Debug):           -Wl,-strip-all -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=s -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wsign-promo -Wuninitialized -Winit-self -Wno-delete-non-virtual-dtor -Wno-comment -Wno-missing-field-initializers -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
        C Compiler:                  /usr/lib/ccache/compilers/cc
        C flags (Release):           -Wl,-strip-all -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wuninitialized -Winit-self -Wno-comment -Wno-missing-field-initializers -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
        C flags (Debug):             -Wl,-strip-all -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wuninitialized -Winit-self -Wno-comment -Wno-missing-field-initializers -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
        Linker flags (Release):      -L/root/ffmpeg_build/lib -Wl,--gc-sections
        Linker flags (Debug):        -L/root/ffmpeg_build/lib -Wl,--gc-sections
        ccache:                      YES
        Precompiled headers:         NO
        Extra dependencies:          ade /opt/Qt4.8.7/lib/libQtGui.so /opt/Qt4.8.7/lib/libQtTest.so /opt/Qt4.8.7/lib/libQtCore.so /lib64/libz.so dl m pthread rt
        3rdparty dependencies:       ittnotify libprotobuf libjpeg-turbo libwebp libpng libtiff libjasper IlmImf quirc

      OpenCV modules:
        To be built:                 aruco bgsegm bioinspired calib3d ccalib core datasets dnn dnn_objdetect dnn_superres dpm face features2d flann fuzzy gapi hfs highgui img_hash imgcodecs imgproc line_descriptor ml objdetect optflow phase_unwrapping photo plot python3 quality reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
        Disabled:                    world
        Disabled by dependency:      -
        Unavailable:                 cnn_3dobj cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev cvv freetype hdf java js matlab ovis python2 sfm ts viz
        Applications:                -
        Documentation:               NO
        Non-free algorithms:         NO

      GUI:
        QT:                          YES (ver 4.8.7 EDITION = OpenSource)
          QT OpenGL support:         NO
        GTK+:                        NO
        VTK support:                 NO

      Media I/O:
        ZLib:                        /lib64/libz.so (ver 1.2.3)
        JPEG:                        libjpeg-turbo (ver 2.0.2-62)
        WEBP:                        build (ver encoder: 0x020e)
        PNG:                         build (ver 1.6.37)
        TIFF:                        build (ver 42 - 4.0.10)
        JPEG 2000:                   build (ver 1.900.1)
        OpenEXR:                     build (ver 2.3.0)
        HDR:                         YES
        SUNRASTER:                   YES
        PXM:                         YES
        PFM:                         YES

      Video I/O:
        DC1394:                      NO
        FFMPEG:                      YES
          avcodec:                   YES (58.65.103)
          avformat:                  YES (58.35.101)
          avutil:                    YES (56.38.100)
          swscale:                   YES (5.6.100)
          avresample:                NO
        GStreamer:                   NO
        v4l/v4l2:                    YES (linux/videodev2.h)

      Parallel framework:            pthreads

      Trace:                         YES (with Intel ITT)

      Other third-party libraries:
        Lapack:                      NO
        Eigen:                       NO
        Custom HAL:                  NO
        Protobuf:                    build (3.5.1)

      OpenCL:                        YES (no extra features)
        Include path:                /io/opencv/3rdparty/include/opencl/1.2
        Link libraries:              Dynamic load

      Python 3:
        Interpreter:                 /opt/python/cp36-cp36m/bin/python (ver 3.6.10)
        Libraries:                   libpython3.6m.a (ver 3.6.10)
        numpy:                       /opt/python/cp36-cp36m/lib/python3.6/site-packages/numpy/core/include (ver 1.11.3)
        install path:                python

      Python (for build):            /opt/python/cp36-cp36m/bin/python

      Java:
        ant:                         NO
        JNI:                         NO
        Java wrappers:               NO
        Java tests:                  NO

      Install to:                    /io/_skbuild/linux-x86_64-3.6/cmake-install
    -----------------------------------------------------------------

Can anyone confirm a link to the recommended install procedure for 4.4 on Windows with Python?

There just seem to be a few procedures, and I'm not sure whether they are current. Pip is not available yet for 4.4, so I'm looking to build manually.

why is the seam_finder gc_color result changing over iterations?

Hi everybody, I'm looking at seam_finder results, GC_COLOR, in the detail stitching sample, OpenCV 4.2.0. I noticed that the result changes over iterations on some fixed pictures (using "--blend no" to be sure to spot the computed path). I got down in the code to gcgraph.hpp (maxFlow) and can't find any random start. Is this a bug, or did I miss something?

.set() is not working for brightness and exposure on Macbook

Hi, I'm using OpenCV 4.1.2 on a MacBook to set camera properties, but I continuously get False in response for brightness and exposure, while height and width are fine.

- MacBook
- OpenCV 4.1.2
- Logitech Brio 4K

Code:

    import cv2
    import time

    cap = cv2.VideoCapture(1, cv2.CAP_AVFOUNDATION)
    r = cap.set(3, 1920)
    print(r)
    r = cap.set(4, 1080)
    print(r)
    time.sleep(2)
    r1 = cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0)
    r = cap.set(cv2.CAP_PROP_EXPOSURE, -7)
    print(r1)
    print(r)
    r = cap.set(cv2.CAP_PROP_BRIGHTNESS, 1.0)
    print(r)
    print(cap.get(cv2.CAP_PROP_BRIGHTNESS))
    ret, img = cap.read()
    img1 = "1.jpg"
    cv2.imwrite(img1, img)
    #cv2.imshow(img)
    cap.release()

Response:

    True
    True
    False
    False
    False
    0.0

how to read a YML vector in python

OpenCV 4.2. I write a vector in C++:

    vector<float> myVector;
    // ... filling vector somewhere ...
    name = "myVector";
    // storing vector
    myFileStorage << name << myVector;

This gives a YML file beginning like this:

    %YAML:1.0
    ---
    myVector: [ 0., 1.00840342e+00, 1.09243703e+02 ]

Afterward I can't retrieve this vector in Python (the FileNode type returns NONE), while getting a Mat is as easy as:

    f = cv.FileStorage(file, cv.FILE_STORAGE_READ)
    data0 = f.getNode("data0").mat()

I've added a subroutine that gets this done manually, but it's a pity not to be able to retrieve it directly from FileStorage.

EDIT @berak: Being a new user I can't reply before 2 days... With a one-line instruction I can get back an OpenCV Mat from FileStorage. For a vector I would love something like:

    myVector = f.getNode("myVector").vect()

I've not found any answer yet.
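Until a direct binding is available, a pure-Python fallback along the lines of the "manual subroutine" mentioned above can parse the one-line sequence directly. The function name and file layout it assumes are illustrative; depending on the build, walking the node with `f.getNode("myVector").at(i).real()` may also work:

```python
import os
import re
import tempfile

def read_yml_seq(path, name):
    """Manual fallback: pull a flat numeric sequence such as
    `myVector: [ 0., 1.00840342e+00, 1.09243703e+02 ]` out of an
    OpenCV-style YML file. Only handles one-line sequences."""
    with open(path) as fh:
        for line in fh:
            m = re.match(r"\s*" + re.escape(name) + r"\s*:\s*\[(.*)\]", line)
            if m:
                return [float(v) for v in m.group(1).split(",")]
    raise KeyError(name)

# Demo on a file shaped like the one in the question.
fd, path = tempfile.mkstemp(suffix=".yml")
with os.fdopen(fd, "w") as fh:
    fh.write("%YAML:1.0\n---\nmyVector: [ 0., 1.00840342e+00, 1.09243703e+02 ]\n")
vals = read_yml_seq(path, "myVector")
os.remove(path)
print(vals)  # [0.0, 1.00840342, 109.243703]
```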

DISP_SHIFT in cv::StereoMatcher

Hi all, does anyone know what DISP_SHIFT is in the class StereoMatcher? Thx
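For context: in the OpenCV headers, StereoMatcher::DISP_SHIFT is the number of fractional bits in the fixed-point disparities returned by compute(). To my knowledge it is 4, so a raw CV_16S disparity must be divided by 2^4 = 16 to get pixel units; verify the constant against your own headers. A sketch of the conversion:

```python
# Fixed-point disparity convention used by cv::StereoMatcher.
# Assumption: DISP_SHIFT == 4, as in the OpenCV headers I am aware of.
DISP_SHIFT = 4
DISP_SCALE = 1 << DISP_SHIFT   # 16

def raw_to_pixels(raw_disparity):
    """Convert a raw fixed-point disparity from compute() to pixels."""
    return raw_disparity / DISP_SCALE

print(raw_to_pixels(1000))  # 62.5
```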

Parallelizing GPU processing of multiple images

For each frame of a video, I apply some transformations and then write the frame out to an image file. I am using OpenCV's CUDA API for this, so it looks something like this, in a loop:

    # read frame from video
    _, frame = video.read()
    # upload frame to GPU
    frame = cv2.cuda_GpuMat(frame)
    # create a CUDA stream
    stream = cv2.cuda_Stream()
    # do things to the frame
    # ...
    # download the frame to CPU memory
    frame = frame.download(stream=stream)
    # wait for the stream to complete (CPU memory available)
    stream.waitForCompletion()
    # save frame out to disk
    # ...

Since I send a single frame to the GPU and then wait for its completion at the end of the loop, I can only process one frame at a time. What I would like to do is send multiple frames (in multiple streams) to the GPU to be processed at the same time, then save them to disk as the work gets finished. What is the best way to do this?
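One common pattern is to keep several frames in flight and synchronize only when reusing a slot. The sketch below uses a CPU thread pool to stand in for CUDA streams (cv2.cuda is not exercised here); with the CUDA API you would instead keep a ring of cv2.cuda_Stream objects and call waitForCompletion() on the oldest before reusing it:

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def process(frame):
    # Placeholder for the per-frame work (upload, filter, download).
    return frame * 2

def pipelined(frames, depth=4):
    """Keep up to `depth` frames in flight; yield results in input order."""
    results = []
    with ThreadPoolExecutor(max_workers=depth) as pool:
        in_flight = deque()
        for frame in frames:
            if len(in_flight) == depth:       # oldest slot must finish first
                results.append(in_flight.popleft().result())
            in_flight.append(pool.submit(process, frame))
        while in_flight:                      # drain the remaining work
            results.append(in_flight.popleft().result())
    return results

print(pipelined([1, 2, 3, 4, 5]))  # [2, 4, 6, 8, 10]
```

The `depth` of the ring bounds memory use; results still come back in frame order, so writing them to disk in sequence stays simple.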

Unable to create PAGE_LOCKED or SHARED host memory using the Python binding to HostMem

- NVIDIA Jetson Xavier NX
- OpenCV 4.4 built with CUDA support
- Python 3.6.9

I'm having trouble with the Python binding to the HostMem class and can't create PAGE_LOCKED or SHARED host memory. I'm not sure I'm using it correctly and haven't been able to find any examples. I've tried two different ways of creating page-locked host memory so that I can call cv2.cuda image processing methods. Here's what I've tried:

    a_mem = cv2.cuda_HostMem(cv2.cuda.HostMem_PAGE_LOCKED)
    a_mem.create(num_rows, num_cols, cv2.CV_8UC1)
    a_host = a_mem.createMatHeader()
    a_dev = cv2.cuda_GpuMat(a_host)

or

    a_mem = cv2.cuda_HostMem(num_rows, num_cols, cv2.CV_8UC1, cv2.cuda.HostMem_PAGE_LOCKED)
    a_host = a_mem.createMatHeader()
    a_dev = cv2.cuda_GpuMat(a_host)

In both cases I get Mat and GpuMat references that I can successfully use to make CUDA calls:

    a_dev.upload(a_host)
    cv2.cuda.add(a_dev, b_dev, c_dev)
    c_dev.download(c_host)

But when I use NVIDIA Visual Profiler to examine the uploads and downloads, it tells me that my host memory is Pageable and not Pinned, as I would expect for page-locked host memory. I have been able to use cv2.cuda.registerPageLocked() to create (much faster) Pinned host memory, so I believe what Visual Profiler is telling me. I've tried this same test with cv2.cuda.HostMem_SHARED and I get the same results. Can someone please tell me if I'm creating the host memory and the Mat and GpuMat references incorrectly? Also, when I do succeed in creating SHARED host memory, how do I get a GpuMat reference to it? I feel like I should be using HostMem's createGpuMatHeader() method for this, but it doesn't have a Python binding. Thanks for any help I can get. I've been stuck on this for three days.

VideoCapture properties not being set in Opencv c++

Hello all, I have been trying to set my camera resolution lower than the camera's default. For that, I have written the following code after initializing the video capture object:

    videoStream.set(cv::CAP_PROP_FRAME_WIDTH, frameWidth);
    videoStream.set(cv::CAP_PROP_FRAME_HEIGHT, frameHeight);

Initially I got an error compiling this piece of code, but after some online searching I figured out the solution. However, even after successful compilation, when I print the pixel resolution of the frame being read from the web camera, it still shows the camera's default resolution. When the same logic is implemented with OpenCV in Python, it works well. Your thoughts and inputs will be appreciated.

How to build viz module with VTK 9.0.1?

Hi all. I would like to build OpenCV with the viz module using VTK 9.0.1.

Environment:

- OS: Windows 10 x64
- IDE: Visual Studio 2015
- SDK: OpenCV 4.4.0, VTK 9.0.1
- CMake: 3.17.0-rc1

I referred to this link: https://answers.opencv.org/question/183404/what-are-all-the-requirement-for-viz-module-installation/

I tried the following procedure:

//VTK
1. Download the VTK-9.0.1.tar.gz file and unzip it to a 'vtk_sources' folder.
2. Create a 'vtk_bin' folder.
3. Run the CMake program as administrator and set the 'vtk_sources' path as 'source code' and the 'vtk_bin' path as 'binaries'.
4. Click the 'Configure' button and select 'Visual Studio 2015', 'x64'.
5. Keep the VTK default options and click the 'Generate' button.
6. Run VTK.sln in the vtk_bin folder as administrator.
7. Build 'ALL_BUILD' and 'INSTALL'.
8. Succeeded in creating a library, and it works well.

//OpenCV
9. Download the opencv-4.4.0-vc14_vc15.exe file and run it.
10. Copy the 'sources' folder to an 'opencv_sources' folder.
11. Create an 'opencv_bin' folder.
12. Run the CMake program as administrator and set the 'opencv_sources' path as 'source code' and the 'opencv_bin' path as 'binaries'.
13. Click the 'Configure' button and select 'Visual Studio 2015', 'x64'.
14. Set VTK_DIR to the 'vtk_bin' folder.
15. Check 'WITH_VTK'.
16. Uncheck 'BUILD_opencv_world'.
17. Click the 'Generate' button.
18. Run OpenCV.sln in the opencv_bin folder as administrator.
19. Build 'ALL_BUILD' and 'INSTALL'.
20. Succeeded in creating a library, and it works well.
21. But 'include/opencv2/' has no viz.hpp file, and I can't find the 'viz' folder.

I have been trying to build OpenCV with viz for a week but couldn't find any solution. Could you please help me? Thank you.

Difference in BGR values between C++ and Python

Hello, I'm using OpenCV in C++ and Python and comparing the results. I noticed that the BGR values slightly differ (+/- 1, sometimes +/- 2) between the two languages after reading the same frame from a webcam. OpenCV in Python should actually be using the C++ code underneath, so I'm not sure why the values differ. How can I get identical BGR values? Thanks in advance.

Warning: field of class type without a DLL interface used in a class with a DLL interface

Issue when building OpenCV 4.4.0 from source. When I build the "ALL_BUILD" and "INSTALL" projects using VS2019 on Windows 10, I get 2000-something warnings (only warnings, no errors at all) saying:

    warning: field of class type without a DLL interface used in a class with a DLL interface

and

    warning: base class dllexport/dllimport specification differs from that of the derived class

Preinstalled: CUDA 10.1 with matching cuDNN. GPU: RTX 2080.

org.opencv.android.FpsMeter is defined multiple times (Android Studio)

I'm working on face recognition on Android Studio using OpenCV and JavaCV. I'm using OpenCV's Haar cascade for face detection and LBPH for face recognition with the help of the JavaCV library. In the Gradle file I add the two libraries together, and my program works well. The problem is that when I try to generate the APK, an error occurs: **Type org.opencv.android.FpsMeter is defined multiple times.** Here is how I add JavaCV and OpenCV together:

    implementation 'org.bytedeco:javacv:1.4.2'
    implementation 'org.bytedeco.javacpp-presets:opencv:3.4.0-1.4.2:android-arm64'
    implementation 'org.bytedeco.javacpp-presets:ffmpeg:3.4.0-1.4.2:android-arm64'
    implementation project(path: ':openCVLibrary340')

