Channel: OpenCV Q&A Forum - RSS feed
Viewing all 41027 articles

Trying to match color template with cv::ximgproc::colorMatchTemplate()

I'm trying to use colorMatchTemplate (from L. Berger) to find a template, which is a color stripe of a resistor (electronic component), inside a picture of the whole resistor. The resistor: ![image description](/upfiles/15849576807202539.png) The template: ![image description](/upfiles/15849577074996515.png) When I tried colorMatchTemplate on a simple example (finding a red-circle template inside a bigger image with a dark background), it worked well: the max quaternion image displayed white pixels where the circle template was located. For some reason, though, it does not work with my resistor stripe template: there is no white pixel at the right location in the max quaternion image. I get these results with the official colorMatchTemplate example included in OpenCV 4.2.0. The template should definitely be found, because I extracted it from the original resistor image, so it is an exact match. Any idea how to make this algorithm work? I read somewhere that the images need an even number of columns, so I made sure of that, but the results are still incorrect (the template is not located correctly).
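A minimal sketch of the even-dimension workaround mentioned above, in Python for brevity. `crop_to_even` is a hypothetical helper name, and the commented-out call assumes the opencv-contrib Python bindings:

```python
import numpy as np

def crop_to_even(img):
    """Drop at most one row/column so both image dimensions are even,
    since colorMatchTemplate reportedly needs even sizes."""
    h, w = img.shape[:2]
    return img[: h - (h % 2), : w - (w % 2)]

# Hypothetical usage (requires opencv-contrib-python):
# import cv2
# result = cv2.ximgproc.colorMatchTemplate(crop_to_even(image), crop_to_even(template))
# _, maxVal, _, maxLoc = cv2.minMaxLoc(result)
```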

How can I improve the quality of this receipt?

I have a problem recognizing data from the selected area shown on the receipt screenshot: ![image description](/upfiles/15849601702717948.png) For recognition I'm using Tesseract, and this red area is being skipped. What kind of techniques can I try to improve the image quality? So far I have tried these simple methods:

    img = cv2.imread(image)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    median = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(median, 127, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

Best way to match spot patterns with OpenCV

I'm trying to write an app for wild leopard classification and conservation in South Asia. The main challenge is to identify individual leopards by the spot pattern on their foreheads. My current approach:

1. Store the known leopard forehead images as a base list
2. Get the user-provided leopard image and crop the leopard's forehead
3. Identify the keypoints using the SIFT algorithm
4. Use the FLANN matcher to get KNN matches
5. Select good matches based on the ratio threshold

Sample code below:

    img1 = cv.bilateralFilter(baseImg, 9, 75, 75)
    img2 = cv.bilateralFilter(userImage, 9, 75, 75)
    detector = cv.xfeatures2d_SIFT.create()
    keypoints1, descriptors1 = detector.detectAndCompute(img1, None)
    keypoints2, descriptors2 = detector.detectAndCompute(img2, None)
    # FLANN parameters
    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)  # or pass an empty dictionary
    matcher = cv.FlannBasedMatcher(index_params, search_params)
    knn_matches = matcher.knnMatch(descriptors1, descriptors2, 2)
    allmatchpointcount = len(knn_matches)
    ratio_thresh = 0.7
    good_matches = []
    for m, n in knn_matches:
        if m.distance < ratio_thresh * n.distance:
            good_matches.append(m)
    goodmatchpointcount = len(good_matches)
    print("Good match count : ", goodmatchpointcount)
    matchsuccesspercentage = goodmatchpointcount / allmatchpointcount * 100
    print("Match percentage : ", matchsuccesspercentage)

**Problems I have with this approach:**

1. The method has a medium-to-low success rate and tends to break on new user images.
2. The user images are sometimes taken from angles where some key patterns are not visible or are warped.
3. The user image quality affects the match result significantly.
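Step 5 above can be isolated into a plain function (distances only, no OpenCV types), which makes the ratio threshold easy to experiment with. After the ratio test, a geometric check such as cv2.findHomography with RANSAC on the surviving matches is a commonly suggested next step for viewpoint changes, though that is beyond this sketch:

```python
def ratio_test(knn_pairs, ratio=0.7):
    """Lowe's ratio test. knn_pairs: (best, second-best) distance pairs.
    Returns the indices of pairs whose best match is clearly better."""
    return [i for i, (d1, d2) in enumerate(knn_pairs) if d1 < ratio * d2]

def match_percentage(knn_pairs, ratio=0.7):
    """Fraction of KNN pairs surviving the ratio test, as a percentage."""
    if not knn_pairs:
        return 0.0
    return 100.0 * len(ratio_test(knn_pairs, ratio)) / len(knn_pairs)

pairs = [(1.0, 10.0), (9.0, 10.0), (2.0, 10.0)]
print(ratio_test(pairs))        # [0, 2]
```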

Is there any way to crop frames from a video and make another video from them using OpenCV?

Hey, I have done person detection using OpenCV and TensorFlow. Now I want to crop out only the detected regions and make a new video from those frames. That is, if a human is detected in a video, only the detected region should be cropped from each frame, and a new video should be made from the cropped frames, with the background subtracted using OpenCV. Please help me! Thank you!

Type issues

Hello, I'm trying to sum images. Here is a simple example that shows my issue:

    Mat src;
    Mat datasrc1;
    src = imread("/home/jetson/Downloads/wd-wallhaven-r2g7rm.jpg", 0);
    int size_reshape = src.cols * src.rows;
    datasrc1 = src.reshape(0, 1);
    unsigned char * A;
    unsigned char * B;
    float * E;
    A = datasrc1.data;
    E = new float[size_reshape];
    for (int i = 0; i < size_reshape; i++)
        ...
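The usual way to avoid raw pointer loops here is to let OpenCV convert the type: in C++, `src.convertTo(dst, CV_32F)` gives a float Mat that can be summed without overflow. The pitfall being worked around is 8-bit wrap-around, shown here with NumPy (the Python side of the same Mat semantics):

```python
import numpy as np

a = np.array([200], dtype=np.uint8)
b = np.array([100], dtype=np.uint8)

wrapped = a + b                      # uint8 arithmetic wraps around: 300 % 256 = 44
summed = a.astype(np.float32) + b    # convert first (convertTo in C++): 300.0

print(wrapped[0], summed[0])
```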

How do I install and use OpenCV on Arch Linux (C++)?

I've installed the opencv package from the Arch repos, but when I try to include opencv.hpp, I can only find it at `opencv4/opencv2/opencv.hpp` (I am using Visual Studio Code with default settings). I wrote a little program to test whether this was even including correctly, and I get this error when compiling with g++:

    ~/.../camera-software/src >>> g++ main.cpp
    In file included from main.cpp:1:
    /usr/include/opencv4/opencv2/opencv.hpp:48:10: fatal error: opencv2/opencv_modules.hpp: No such file or directory
       48 | #include "opencv2/opencv_modules.hpp"
          |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
    compilation terminated.

This is the program I wrote:

    #include "opencv4/opencv2/opencv.hpp"
    #include <cstdio>

    using namespace std;

    int main() {
        printf("Hello World!\n");
        return 0;
    }

I've tried including `opencv4/opencv2/opencv_modules.hpp`, but I get the same error.
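On Arch the headers live under /usr/include/opencv4, so the include path has to be supplied at compile time rather than spelled out in the `#include`. A common fix, assuming the package ships an opencv4.pc file for pkg-config:

```shell
# Let pkg-config add the opencv4 include and link flags:
g++ main.cpp -o main $(pkg-config --cflags --libs opencv4)
```

With that, the source can use the plain `#include <opencv2/opencv.hpp>` path instead of prefixing `opencv4/`, which also lets opencv.hpp's own internal `#include "opencv2/..."` lines resolve.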

Gray code pattern: irregular and off-center?

I have been struggling to get the structured light examples going with OpenCV (Windows 10, C++, Visual Studio). Something I have noticed when trying with multiple computers, projectors, and even the Java version of OpenCV: the Gray code patterns it generates seem to be messed up (or there is something in the Gray code theory I am missing). My expectation is that the Gray code patterns should be something like:

0) horizontal, left 50% of pixels black, right 50% white
0b) horizontal, inverse of the previous
1) horizontal, left 25% black, next 25% white, next 25% black, next 25% white
1b) horizontal, inverse of the previous
... and so on until there are single-pixel-wide stripes across, then repeat for the vertical direction:
0) vertical, top 50% black, bottom 50% white
0b) vertical, inverse of the previous
1) vertical, top 25% black, next 25% white, next 25% black, next 25% white
1b) vertical, inverse of the previous
... and so on until single-pixel-high stripes, and then finally:
- all black pixels
- all white pixels

But in each version of the generated Gray code I try, the pixels are never divided the way I would expect (I am using the examples from the contrib tutorials: https://docs.opencv.org/master/d3/d81/tutorial_structured_light.html). The first image created is pretty much always 70% black on the left and 30% white, and it iterates in a kind of offset way from there. When it gets down to single pixels, there tends to be a double- to quadruple-width stripe present. Attached are some images showing what happens. I can run the full process, by the way, and decode these pictures, but the results don't come out correct at all: usually just a blank screen instead of a depth map.

EDIT: I couldn't get pictures to upload to this post, so here is a link to example pictures of the Gray code bugs, from image 0 and image 22: https://photos.app.goo.gl/3PEwaLshXtZHQqi68
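The expectation above describes plain binary stripes, but these generators emit reflected Gray code, where only the first (most significant) pattern splits the width exactly in half; every later pattern is a shifted version of the corresponding binary one, which looks "offset" by design. A pure-Python sketch of the per-column bit:

```python
def gray_code_bit(x, pattern, n_bits):
    """Bit shown at column x for the given pattern index (0 = coarsest)."""
    g = x ^ (x >> 1)                      # binary -> reflected Gray code
    return (g >> (n_bits - 1 - pattern)) & 1

# For an 8-pixel-wide "projector" (3 patterns):
for k in range(3):
    print([gray_code_bit(x, k, 3) for x in range(8)])
# pattern 0: [0, 0, 0, 0, 1, 1, 1, 1]   <- 50/50, as expected
# pattern 1: [0, 0, 1, 1, 1, 1, 0, 0]   <- shifted, NOT 25/25/25/25
# pattern 2: [0, 1, 1, 0, 0, 1, 1, 0]
```

Also, when the projector width is not a power of two, ceil(log2(width)) bits are needed and the top bit only flips for the last width - 2^(n_bits-1) columns, which would explain a first image that is roughly 70/30 black/white rather than 50/50. This is an interpretation of the observed patterns, not a confirmed diagnosis of the decoding failure.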

Build OpenCV library from source for Android

Hello, I recently joined an OpenCV project and am trying to cross-compile the OpenCV shared library (.so) from source for Android, on Windows. I set up the environment with the various tools. In CMake, I chose the source code location and build directory, plus the following options: ANDROID_ABI=arm64-v8a, ANDROID_NDK, ANDROID_SDK_ROOT, ANDROID_NATIVE_API_LEVEL, BUILD_SHARED_LIB, BUILD_ANDROID, ARM64_V8A, ANDROID_TOOLCHAIN_NAME=aarch64-linux-android-4.9, CMAKE_MAKE_PROGRAM=ndk/16.x.xxxxxxx/prebuilt/windows-x86_64/bin/make.exe. After the Configure and Generate steps, on Windows, in the build directory, I started the build by running mingw32-make.exe on the command line. I got 14 .so library files including libopencv_java4.so, but libopencv_java4.so does not contain all modules; its size is only 1.47 MB. I know there are subject experts on this forum. Can someone tell me why I did not get an OpenCV library that contains all modules? The libraries I got are these:

libopencv_calib3d.so, libopencv_features2d.so, libopencv_imgcodecs.so, libopencv_ml.so, libopencv_stitching.so, libopencv_core.so, libopencv_flann.so, libopencv_imgproc.so, libopencv_objdetect.so, libopencv_video.so, libopencv_dnn.so, libopencv_highgui.so, libopencv_java4.so, libopencv_photo.so, libopencv_videoio.so

The libopencv_java4.so has a size of 2.14 MB. I downloaded the OpenCV-android-sdk from OpenCV.org, and its size is 27.9 MB. Why is my libopencv_java4.so so small? Thanks, Charles

Looking for OpenCV guru to help with Code A Life Challenge

Our team is making a simple, cheap ventilator, using OpenCV on a cell phone to get feedback for the motors, and manometers for pressure readings. https://www.agorize.com/en/challenges/code-life-challenge I have a pipeline set up using an industrial software package and need help porting it to mobile platforms. Time's running out... Thanks for considering.

Displaying UDP Multicast Stream

Hi, I'm trying to display a multicast UDP video stream using OpenCV (4.2.0) with Java. I can grab the webcam and any RTSP video:

    VideoCapture videoDevice = new VideoCapture();
    videoDevice.open(0);

or

    videoDevice.open("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov");

but I couldn't join a UDP stream. For testing, on the server side, I sent the webcam via FFmpeg:

    ffmpeg.exe -f dshow -i video="USB2.0 HD UVC WebCam" -r 30 -f rawvideo udp://@239.0.1.100:4378

and I could display the video with ffplay:

    ffplay.exe -f rawvideo -i udp://@239.0.1.100:4378

but I could not join the stream with:

    videoDevice.open("udp://@239.0.1.100:4378");

I also tried `videoDevice.grab()`, but nothing changed. What is wrong? Are there any other settings? Osman
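A guess worth checking, not a confirmed fix: ffplay only decodes this because it is told `-f rawvideo` explicitly, but VideoCapture has no way to pass that hint, and a headerless raw stream cannot be probed by FFmpeg on its own. A common workaround is to send a self-describing stream instead, e.g. H.264 wrapped in MPEG-TS; the flags below are illustrative:

```shell
# Server: encode and wrap in MPEG-TS instead of headerless rawvideo
ffmpeg.exe -f dshow -i video="USB2.0 HD UVC WebCam" -r 30 -c:v libx264 -f mpegts udp://239.0.1.100:4378
# Client (Java) stays the same:
#   videoDevice.open("udp://239.0.1.100:4378");
```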

Find only internal contours in JavaCV/OpenCV

I am struggling to find only the internal contours of the provided image. Finding an external contour is easy, but I have no clue how to find internal contours with JavaCV, as the documentation is lacking. My code so far:

    for (Mat mat : mats) {
        Mat newMat = mat.clone();
        opencv_imgproc.Canny(mat, newMat, 50.0, 100.0);
        opencv_imgproc.blur(newMat, newMat, new Size(2, 2));
        opencv_core.MatVector contours = new opencv_core.MatVector();
        opencv_core.Mat hierarchy = new opencv_core.Mat();
        //Mat hierarchy = new Mat(new double[]{255, 255, 255, 0});
        opencv_imgproc.findContours(newMat, contours, hierarchy, opencv_imgproc.CV_RETR_TREE,
                opencv_imgproc.CHAIN_APPROX_SIMPLE, new opencv_core.Point(0, 0));
        List rectangles = new ArrayList<>();
        for (Mat matContour : contours.get()) {
            // clueless how to find internal contours here
        }
    }
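With RETR_TREE (or RETR_CCOMP), findContours also fills the hierarchy output: one row per contour of the form [next, previous, first_child, parent], and a contour is internal exactly when its parent index is not -1. The selection logic, shown in Python for brevity (in JavaCV you would read the same four ints per contour out of the hierarchy Mat):

```python
def internal_contour_indices(hierarchy_rows):
    """hierarchy_rows: one [next, prev, first_child, parent] row per contour,
    as produced by findContours with RETR_TREE. Internal = has a parent."""
    return [i for i, (_, _, _, parent) in enumerate(hierarchy_rows) if parent != -1]

# Example: contour 0 is outermost; contour 1 is nested inside it
rows = [[-1, -1, 1, -1], [-1, -1, -1, 0]]
print(internal_contour_indices(rows))   # [1]
```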

Input video capture ffmpeg flags (OpenCV-Python)

Hello, I'm trying to figure out how to pass FFmpeg flags to the VideoCapture function. From what I can see, there are no hooks for this in the Python library. I'm trying to pass an H.264 stream to OpenCV's DNN module on Linux. My issue is that the H.264 streams are only decoded in software, instead of using my NVIDIA card's nvdec/nvcuvid. My CPU is a 3770K; CPU usage is quite high when decoding a large H.264 stream and then resizing it for the DNN.
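One hook that does exist (though it is an environment variable rather than a VideoCapture argument) is OPENCV_FFMPEG_CAPTURE_OPTIONS, a "key;value|key;value" string that OpenCV's FFmpeg backend reads at open time. Whether h264_cuvid then works depends on the FFmpeg build OpenCV links against; the codec name below is an assumption about that build:

```python
import os

# Must be set before the first cv2.VideoCapture(..., cv2.CAP_FFMPEG) call.
# Keys/values are handed straight to FFmpeg; h264_cuvid assumes an
# FFmpeg build with NVIDIA hardware decoding compiled in.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "video_codec;h264_cuvid"

# import cv2
# cap = cv2.VideoCapture("rtsp://camera/stream", cv2.CAP_FFMPEG)
```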

OpenCV DNN with CUDA built from source (for arch bin < 5.3)

Hello, I'm trying to use my old gaming rig (3770K + GTX 780) with OpenCV for home monitoring. I'm trying to feed my camera's H.264 stream into OpenCV, detect objects (people, cars, etc.), determine whether they're within a bounding box (a perimeter), and then feed this off to another program. I actually have this all working well. The problem is that it's extremely slow (750 ms/frame) because it's all being done in software. I determined that OpenCV's VideoCapture function seems to open FFmpeg without the ability to pass the -c:v h264_nvcuvid flag for hardware decoding, but I started another thread for that. Specifically for this one: I realized the prebuilt OpenCV 4.2 library does not have CUDA support, so I built from source, only to find out that my GTX 780's compute capability (3.5) isn't supported. Digging into it, it seems you need a GTX 10xx series graphics card or better, which seems insane to me. Digging further, I found a patch that was merged for 4.3, so I checked out the 4.3 beta and compiled that.
It compiles successfully, but when I attempt to use the DNN at all, regardless of which backend or target setting I try, I get the following error:

    terminate called after throwing an instance of 'cv::Exception'
      what(): OpenCV(4.3.0-pre) /home/jonathan/Projects/opencv/opencv/modules/core/src/cuda_info.cpp:62: error: (-217:Gpu API call) system has unsupported display driver / cuda driver combination in function 'getCudaEnabledDeviceCount'

Output of my getBuildInformation():

    >>> print (cv2.getBuildInformation())
    General configuration for OpenCV 4.3.0-pre =====================================
      Version control:               4.2.0-506-g4cdb4652cf
      Extra modules:
        Location (extra):            /home/jonathan/Projects/opencv/opencv_contrib/modules
        Version control (extra):     4.2.0
      Platform:
        Timestamp:                   2020-03-21T08:28:02Z
        Host:                        Linux 5.3.0-42-generic x86_64
        CMake:                       3.10.2
        CMake generator:             Unix Makefiles
        CMake build tool:            /usr/bin/make
        Configuration:               RELEASE
      CPU/HW features:
        Baseline:                    SSE SSE2 SSE3
          requested:                 SSE3
        Dispatched code generation:  SSE4_1 SSE4_2 FP16 AVX AVX2 AVX512_SKX
          requested:                 SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
          SSE4_1 (16 files):         + SSSE3 SSE4_1
          SSE4_2 (2 files):          + SSSE3 SSE4_1 POPCNT SSE4_2
          FP16 (1 files):            + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
          AVX (5 files):             + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
          AVX2 (30 files):           + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
          AVX512_SKX (6 files):      + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 AVX_512F AVX512_COMMON AVX512_SKX
      C/C++:
        Built as dynamic libs?:      YES
        C++ standard:                11
        C++ Compiler:                /usr/bin/c++ (ver 7.5.0)
        C++ flags (Release):         -fsigned-char -ffast-math -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
        C++ flags (Debug):           -fsigned-char -ffast-math -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
        C Compiler:                  /usr/bin/cc
        C flags (Release):           -fsigned-char -ffast-math -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
        C flags (Debug):             -fsigned-char -ffast-math -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
        Linker flags (Release):      -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -Wl,--gc-sections -Wl,--as-needed
        Linker flags (Debug):        -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -Wl,--gc-sections -Wl,--as-needed
        ccache:                      NO
        Precompiled headers:         NO
        Extra dependencies:          m pthread cudart_static -lpthread dl rt nppc nppial nppicc nppicom nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cudnn cufft -L/usr/local/cuda/lib64 -L/usr/lib/x86_64-linux-gnu
        3rdparty dependencies:
      OpenCV modules:
        To be built:                 aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dnn_superres dpm face features2d flann freetype fuzzy gapi hfs highgui img_hash imgcodecs imgproc line_descriptor ml objdetect optflow phase_unwrapping photo plot python3 quality reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
        Disabled:                    cudacodec world
        Disabled by dependency:      -
        Unavailable:                 cnn_3dobj cvv hdf java js matlab ovis python2 sfm viz
        Applications:                tests perf_tests apps
        Documentation:               NO
        Non-free algorithms:         NO
      GUI:
        GTK+:                        YES (ver 3.22.30)
          GThread :                  YES (ver 2.56.4)
          GtkGlExt:                  NO
        VTK support:                 NO
      Media I/O:
        ZLib:                        /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.11)
        JPEG:                        /usr/lib/x86_64-linux-gnu/libjpeg.so (ver 80)
        WEBP:                        build (ver encoder: 0x020e)
        PNG:                         /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.6.34)
        TIFF:                        /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 / 4.0.9)
        JPEG 2000:                   build (ver 1.900.1)
        OpenEXR:                     build (ver 2.3.0)
        HDR:                         YES
        SUNRASTER:                   YES
        PXM:                         YES
        PFM:                         YES
      Video I/O:
        DC1394:                      YES (2.2.5)
        FFMPEG:                      YES
          avcodec:                   YES (57.107.100)
          avformat:                  YES (57.83.100)
          avutil:                    YES (55.78.100)
          swscale:                   YES (4.8.100)
          avresample:                YES (3.7.0)
        GStreamer:                   YES (1.14.5)
        v4l/v4l2:                    YES (linux/videodev2.h)
      Parallel framework:            TBB (ver 2017.0 interface 9107)
      Trace:                         YES (with Intel ITT)
      Other third-party libraries:
        Intel IPP:                   2020.0.0 Gold [2020.0.0]
          at:                        /home/jonathan/Projects/opencv/opencv/build/3rdparty/ippicv/ippicv_lnx/icv
        Intel IPP IW:                sources (2020.0.0)
          at:                        /home/jonathan/Projects/opencv/opencv/build/3rdparty/ippicv/ippicv_lnx/iw
        Lapack:                      NO
        Eigen:                       NO
        Custom HAL:                  NO
        Protobuf:                    build (3.5.1)
      NVIDIA CUDA:                   YES (ver 10.2, CUFFT CUBLAS FAST_MATH)
        NVIDIA GPU arch:             30 35 37 50 52 60 61 70 75
        NVIDIA PTX archs:
      cuDNN:                         YES (ver 7.6.4)
      OpenCL:                        YES (no extra features)
        Include path:                /home/jonathan/Projects/opencv/opencv/3rdparty/include/opencl/1.2
        Link libraries:              Dynamic load
      Python 3:
        Interpreter:                 /usr/bin/python3 (ver 3.6.9)
        Libraries:                   /usr/lib/x86_64-linux-gnu/libpython3.6m.so (ver 3.6.9)
        numpy:                       /home/jonathan/.local/lib/python3.6/site-packages/numpy/core/include (ver 1.18.2)
        install path:                /home/jonathan/.virtualenvs/cv/lib/python3.6/site-packages/cv2/python-3.6
      Python (for build):            /usr/bin/python3
      Java:
        ant:                         NO
        JNI:                         NO
        Java wrappers:               NO
        Java tests:                  NO
      Install to:                    /usr/local
    -----------------------------------------------------------------

Use a module from opencv_contrib in C++

I'm trying to use the anisotropicDiffusion() function from the ximgproc module in opencv_contrib, but I am having problems getting it to load in my editor (Code::Blocks) or with g++ on Ubuntu 18.04. In general, how do I install a module from opencv_contrib so that I can use it? Also, OpenCV seems to reference headers from /usr/include/opencv2/, but when I ran my original make install, it put most of the .hpp files under /usr/local/include/opencv4/opencv2/. How do I make my machine look for the .hpp files under /usr/local/include/opencv4/opencv2/? https://docs.opencv.org/3.4/df/d2d/group__ximgproc.html#gaffedd976e0a8efb5938107acab185ec2 https://github.com/opencv/opencv_contrib/blob/master/modules/ximgproc/src/anisodiff.cpp
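If the project can use CMake, find_package picks up a default `make install` under /usr/local (including contrib modules built alongside the main tree) without hard-coding /usr/local/include/opencv4. A minimal sketch, with project and file names as placeholders:

```cmake
cmake_minimum_required(VERSION 3.10)
project(aniso_demo)
# A default `make install` leaves the package config under /usr/local/lib/cmake/opencv4
find_package(OpenCV REQUIRED COMPONENTS ximgproc)
add_executable(aniso_demo main.cpp)
target_link_libraries(aniso_demo ${OpenCV_LIBS})
```

With g++ directly, `pkg-config --cflags --libs opencv4` serves the same purpose, provided PKG_CONFIG_PATH includes /usr/local/lib/pkgconfig.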

Compiled from source (Ubuntu/Python): error librsvg-2.so.2: undefined symbol: cairo_tag_end

I've compiled from source with the following CMake settings:

    cmake -D CMAKE_BUILD_TYPE=RELEASE \
          -D INSTALL_PYTHON_EXAMPLES=ON \
          -D INSTALL_C_EXAMPLES=OFF \
          -D PYTHON_EXECUTABLE=/home/lewis/anaconda3/bin/python3 \
          -D BUILD_opencv_python2=OFF \
          -D CMAKE_INSTALL_PREFIX=/home/lewis/anaconda3 \
          -D PYTHON3_EXECUTABLE=/home/lewis/anaconda3/bin/python3 \
          -D PYTHON3_INCLUDE_DIR=/home/lewis/anaconda3/include/python3.7m \
          -D PYTHON3_PACKAGES_PATH=/home/lewis/anaconda3/lib/python3.7/site-packages \
          -D WITH_GSTREAMER=ON ..

When I import cv2 in Python, I get the following error:

    ImportError: /lib/x86_64-linux-gnu/librsvg-2.so.2: undefined symbol: cairo_tag_end

Anyone have any ideas what to do? Google is no help.

How to get the JPEG-compressed YUV/RGB result of a source image

I'm trying to compare the difference between an image before and after compression. Currently, I do this by saving the image as a JPEG file at a certain quality factor and then calling imread() to get the result. However, the whole process contains several unnecessary steps; for example, it should not be necessary to write a .jpg file to disk at all. Are there any approaches to get the compressed image result directly, like:

    compress_image = CompressAtQuality(source_image, quality_factor)

imwrite saves images as black

I am trying to write a program that finds the centers of black objects and crops the area around them. I applied it to two different data sets; on the first it worked as expected, but on the second, when I tried to save the images, it saved only fully black images. Here is my code:

    img = '/Users/khand/OneDrive/Desktop/Thesis/case_db_10/' + el
    print(type(img))
    print(img)
    im = cv2.imread(img).convert('RGBA')
    print(type(im))
    #plt.imshow(im)
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)[1]
    cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    na = 0
    for c in range(0, len(cnts)):
        cnt = cnts[c]
        M = cv2.moments(cnt)
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        imag_jpg = '/Users/khand/OneDrive/Desktop/Thesis/case_db_10/' + nam + '.jpg'
        imag_png = '/Users/khand/OneDrive/Desktop/Thesis/case_db_10/' + im_nam2[0] + '.png'
        image_jpg = cv2.imread(imag_jpg)
        image_png = cv2.imread(imag_png)
        y1 = cx - 100
        y2 = cx + 100
        x1 = cy - 100
        x2 = cy + 100
        if x2 > 2000:
            x2 = 2000
            x1 = 1800
        if y2 > 2000:
            y2 = 2000
            y1 = 1800
        if x1 < 0:
            x1 = 0
            x2 = 200
        if y1 < 0:
            y1 = 0
            y2 = 200
        crop_imag_jpg = image_jpg[x1:x2, y1:y2].copy()
        crop_imag_png = image_png[x1:x2, y1:y2].copy()
        #ima_name_jpg = '/Users/khand/OneDrive/Desktop/Thesis/Case_db/org/cropped_jpg_' + str(nam_num) + '_' + str(im_nam2_num) + '_' + str(na) + '.jpg'
        #ima_name_png = '/Users/khand/OneDrive/Desktop/Thesis/Case_db/png/cropped_png_' + str(nam_num) + '_' + str(im_nam2_num) + '_' + str(na) + '.png'
        ima_name_jpg = '/Users/khand/OneDrive/Desktop/Thesis/Case_db/org10/' + nam + str(nam_num) + '_' + str(im_nam2_num) + '_' + str(na) + '.jpg'
        ima_name_png = '/Users/khand/OneDrive/Desktop/Thesis/Case_db/png10/' + nam + str(nam_num) + '_' + str(im_nam2_num) + '_' + str(na) + '.png'
        cv2.imwrite(ima_name_jpg, crop_imag_jpg)
        cv2.imwrite(ima_name_png, crop_imag_png)

The difference between the image sets is their bit depth: in the working set it is 32; in the one that is not working it is 8. I am not sure whether this is related. If you have any ideas, please feel free to help :) Thanks.
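If the bit-depth difference is indeed the cause, the pixel values in one set may lie far outside the 0-255 range that imwrite renders for 8-bit output, which shows up as all-black images. A small helper to rescale before saving (a generic sketch, not specific to the thesis data):

```python
import numpy as np

def to_uint8(img):
    """Linearly rescale any numeric image into the 0-255 uint8 range
    that cv2.imwrite handles predictably."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: avoid division by zero
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) * (255.0 / (hi - lo))).astype(np.uint8)
```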

OpenCV 4.2.0 C++ - H264 Encoding

Hello, I'm currently using OpenCV 4.2 (C++) to encode and stream output from an Allied Vision Manta camera. I can already grab a frame, compress it with MJPEG, and stream it to another pipeline using OpenCV and GStreamer. However, I need to try H.264 (and later H.265), and it's not working. The pipelines passed to the VideoWriter function are:

1 - "appsrc ! queue ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5015"
2 - "appsrc ! autovideoconvert ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5015"

With the 1st pipeline, I get:

    [ WARN:0] global /opt/opencv/modules/videoio/src/cap_gstreamer.cpp (1759) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module x264enc0 reported: Can not initialize x264 encoder.
    [ WARN:0] global /opt/opencv/modules/videoio/src/cap_gstreamer.cpp (1665) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline

With the 2nd pipeline, I get:

    libva info: VA-API version 0.39.0
    libva info: va_getDriverName() returns 0
    libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/vmwgfx_drv_video.so
    libva info: va_openDriver() returns -1

The other VideoWriter arguments are: cv::CAP_GSTREAMER, 0, 5, Size(1080, 720), true. The input pixel format is YUV444p, converted into a Mat with Mat(Size(1080, 720), CV_8UC3, pBuffer, Mat::AUTO_STEP). The receiver is: "udpsrc port=5015 ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, framerate=15/1 ! rtph264depay ! decodebin ! videoconvert ! appsink". One detail that may or may not be important: I'm doing this in VirtualBox.
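"Can not initialize x264 encoder" often means x264enc was handed caps it rejects. A frequently suggested variant is to force I420 ahead of the encoder and use low-latency encoder settings; the pipeline below is illustrative and untested on this particular setup:

```
appsrc ! queue ! videoconvert ! video/x-raw,format=I420 ! x264enc tune=zerolatency speed-preset=ultrafast ! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=5015
```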

imread working in ipynb but not in py

Hello everyone, I have a problem with the `imread` function. The code I'm running is the following:

    import cv2
    image = cv2.imread("T.jpg")
    cv2.imshow("test", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

Really simple, just displaying an image. The thing is, when I run this in an .ipynb file it works: the image is displayed, and when I press any key it closes. But when I run it from a .py file, it returns this error:

    cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\highgui\src\window.cpp:376: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'

I don't understand: everything is the same except the extension of the Python file. The image is the same, so it really exists and is not corrupted. Both the .ipynb and .py files are in the same directory, the same directory as "T.jpg". Any ideas? I use VS Code; could that be related? Thank you for your answers.

How to get static libs for installed modules?

I am building OpenCV from source on Windows 10. I downloaded 'opencv' and 'opencv_contrib' from GitHub, used the CMake GUI to configure the build, then opened the solution in Visual Studio 2017, built as Release, and built the install package. Ultimately, I am trying to run some code which requires certain static libraries, for example:

- opencv_cudaoptflow430.lib
- opencv_cudaimgproc430.lib
- etc.

These modules seem to have installed, and the headers are there, but I cannot find these libraries anywhere. What I do have are the following (inside opencv/build/bin/Release):

- opencv_perf_cudaoptflow.lib
- opencv_perf_cudaimgproc.lib
- etc.

But I'm not sure what these are, and it seems weird that I would need to modify code to use those libraries. In CMake I set BUILD_SHARED_LIBS to false, which is what I thought was needed to make those libraries build, but it doesn't seem to have happened. What do I need to do to make sure those libraries build? Thanks!

