Channel: OpenCV Q&A Forum - RSS feed

HSV v RGB with inRange

I used a colour picker and worked out equivalent range values in HSV and RGB for my special yellow:

```java
Scalar yellowHsvMin = new Scalar(50, 35, 40);
Scalar yellowHsvMax = new Scalar(55, 91, 71);
Scalar yellowRgbMin = new Scalar(139, 127, 66);
Scalar yellowRgbMax = new Scalar(249, 238, 114);
```

But when I apply a filter using the HSV values:

```java
Mat raw = Imgcodecs.imread("file.jpg");
Mat colours = new Mat(raw.size(), raw.type());
Mat yel = new Mat(raw.size(), raw.type());
Imgproc.cvtColor(raw, colours, Imgproc.COLOR_BGR2HSV);
Core.inRange(colours, yellowHsvMin, yellowHsvMax, yel);
```

I get a hugely different answer than when using the RGB values:

```java
Mat raw = Imgcodecs.imread("file.jpg");
Mat colours = new Mat(raw.size(), raw.type());
Mat yel = new Mat(raw.size(), raw.type());
Imgproc.cvtColor(raw, colours, Imgproc.COLOR_BGR2RGB);
Core.inRange(colours, yellowRgbMin, yellowRgbMax, yel);
```

I would expect the results to differ slightly because of rounding errors etc., but they are completely different. Can anyone help me understand what I'm doing wrong?
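A note that may be relevant here: OpenCV's 8-bit HSV convention is H in [0, 179] and S, V in [0, 255], whereas most colour pickers report H in degrees [0, 360) and S, V as percentages [0, 100]. A minimal Python sketch of the conversion, assuming the picker values above use the degrees/percent convention:

```python
# Convert picker-style HSV (H in 0-360 degrees, S/V in 0-100 percent)
# to OpenCV 8-bit HSV (H in 0-179, S/V in 0-255).
def picker_to_opencv_hsv(h, s, v):
    return (h / 2.0, s * 255.0 / 100.0, v * 255.0 / 100.0)

print(picker_to_opencv_hsv(50, 35, 40))  # -> (25.0, 89.25, 102.0)
print(picker_to_opencv_hsv(55, 91, 71))  # -> (27.5, 232.05, 181.05)
```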

What is wrong in this OpenCV face recognition code?

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier('Desktop\haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('Desktop\haarcascade_eye.xml')
cap = cv2.VideoCapture(0)
while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectmultiScale(gray, 1.3, 5)  # scalefactor = 1.3, minNeighbors = 5?
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)
    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
cap.release()
cv2.destroyAllWindows()
```

This error shows up:

```
AttributeError: 'cv2.CascadeClassifier' object has no attribute 'detectmultiScale'
```
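For reference: Python attribute lookups are case-sensitive, and the traceback names `detectmultiScale` with a lowercase m, while the eye-detection call above already spells it correctly. The corrected line would be:

```python
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
```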

Would I be able to use this software for full body tracking in VR?

Would full-body tracking be possible with OpenCV in different VR titles, such as VRChat and other full-body-compatible titles? If so, how would this work, and is there a tutorial in video format?

Speed up DFT by specifying ROI in frequency domain

*Note: This question was also asked at [Stack Overflow](https://stackoverflow.com/questions/50521283/how-to-accelerate-dft-by-specifying-region-of-interest-in-frequency-domain) a couple of days after this post.*

I'm building an application which makes extensive use of dft, the discrete Fourier transform, and I'm trying to speed it up so it runs in real time. In that application I only use a part of the dft output, specified by a rectangular ROI. My current implementation follows the steps below:

1. Compute the dft of the input image *f* (typically of size 512x512) and get the entire dft result *F*
2. Crop *F* to a pre-specified ROI (typically of size 32x32, located arbitrarily), *R*

This process basically works well, but it involves useless calculation since I only need partial information from *F*. I'm looking for a way to accelerate this calculation by computing only the necessary part of the dft. I found that OpenCV with Intel IPP computes the dft with Intel IPP functions, which is an order of magnitude faster than the naive OpenCV implementation. I'm wondering if I can accelerate this computation even further by computing only the necessary frequency-domain region of the dft. Since I'm new to OpenCV, I've lost my way here, so I hope you could provide a way to do this. Kindly note that I don't mean to apply a dft to an ROI of the image, i.e. *dft(ROI(f))*; I want to compute *ROI(dft(f))*. Thanks in advance.

**EDIT**: I've got a nice answer on Stack Overflow. Refer to the URL pointed to in the main text.
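For reference, a minimal Python sketch of the full-then-crop pipeline described above (the file name and ROI location are placeholders):

```python
import cv2
import numpy as np

f = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)  # typically 512x512
F = cv2.dft(f, flags=cv2.DFT_COMPLEX_OUTPUT)  # the full 512x512x2 spectrum is computed
x, y, w, h = 100, 100, 32, 32                 # hypothetical ROI in the frequency domain
R = F[y:y+h, x:x+w]                           # only this 32x32 region is actually used
```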

Segmentation fault while using createTrackbar in OpenCV-Python

I am using OpenCV 4.1.1. When I try to run the trackbar tutorial code, I get a segmentation fault (core dumped). The crash seems to occur when the trackbar is created in an existing window initialized by `cv2.namedWindow()`, and does not occur when the second argument of `cv2.createTrackbar()` (the window name) names a non-existent window. Is this a bug in the latest release, or is there some fault in my installation?
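For context, a minimal repro along the lines of the tutorial code in question (the callback is a no-op placeholder):

```python
import cv2
import numpy as np

def on_change(value):
    pass  # placeholder trackbar callback

img = np.zeros((300, 512, 3), np.uint8)
cv2.namedWindow('image')  # pre-existing window: the reported crash case
cv2.createTrackbar('R', 'image', 0, 255, on_change)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```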

Two camera feeds in OpenCV4Android

Seems like a straightforward question, but I couldn't find an answer (yet). How can I read 2 camera streams on Android (front and rear) and analyse them individually? I can create 2 listeners and bind them to View elements:

```java
mOpenCvCameraRear = (CameraBridgeViewBase) findViewById(R.id.CameraRear);
mOpenCvCameraFront = (CameraBridgeViewBase) findViewById(R.id.CameraFront);
mOpenCvCameraRear.setCameraIndex(-1);
mOpenCvCameraFront.setCameraIndex(1);
mOpenCvCameraRear.setVisibility(CameraBridgeViewBase.VISIBLE);
mOpenCvCameraFront.setVisibility(CameraBridgeViewBase.VISIBLE);
mOpenCvCameraRear.setCvCameraViewListener(this);
mOpenCvCameraFront.setCvCameraViewListener(this);
```

but when I enableView() both, it just crashes (each one works individually). The main (or next) problem is the onCameraFrame function, which is there only once, and I don't know how it would even decide which camera's frame to handle. Is there a working solution for Android?

How to take high resolution photos using OpenCV

Hi guys, I have a question to ask your opinion on. I would like to capture a high-resolution image from an integrated camera. It's a Surface Book Pro and it offers a much higher still-image resolution (8 MP) than video resolution (2 MP). Since I'm running on Windows, it seems pretty difficult to obtain that high-resolution image (most other video-capture libraries do not seem to be able to handle this; Qt even specifically says that the video module of QtMultimedia is not finished on Windows). Thus I'm stuck grabbing 2 MP frames from the VideoCapture module. Since OpenCV can run with Media Foundation, and there is a 'TakePhoto' function for the current stream, would there be a possibility to extend the VideoCapture module to allow still images at full resolution when using the CAP_MSMF flag? Thanks for your help.
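For what it's worth, a minimal sketch of explicitly selecting the Media Foundation backend and requesting a higher resolution; whether the driver exposes the 8 MP still mode through the streaming path is exactly the open question here (the 3264x2448 dimensions are a hypothetical 8 MP size):

```python
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_MSMF)   # explicitly pick the Media Foundation backend
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3264)   # hypothetical 8 MP width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2448)  # hypothetical 8 MP height
ok, frame = cap.read()
print(ok, frame.shape if ok else None)
cap.release()
```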

C++ ORB Feature Matching with FLANN LSH Error

I am trying to match ORB features using the FLANN matcher with LSH, and it gives me this error:

```
OpenCV: terminate handler is called! The last OpenCV error is:
OpenCV(4.1.0) Error: Unsupported format or combination of formats (type=5) in buildIndex_,
file C:\opencv\source\opencv-4.1.0\modules\flann\src\miniflann.cpp, line 315
```

What is the reason for this? The descriptors are already converted to CV_32F.

```cpp
cv::FlannBasedMatcher matcher = cv::FlannBasedMatcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
std::vector<std::vector<cv::DMatch>> matches;
matcher.knnMatch(descriptors1, descriptors2, matches, 2);
```
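One hedged observation: type 5 in that error is CV_32F, and FLANN's LSH index works on binary descriptors, so ORB descriptors are normally left as CV_8U rather than converted to float. A Python sketch of ORB matching with the LSH index (image paths are placeholders):

```python
import cv2

img1 = cv2.imread('a.png', cv2.IMREAD_GRAYSCALE)  # placeholder inputs
img2 = cv2.imread('b.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)  # descriptors stay uint8 (CV_8U)
kp2, des2 = orb.detectAndCompute(img2, None)

FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH, table_number=12, key_size=20, multi_probe_level=2)
matcher = cv2.FlannBasedMatcher(index_params, dict(checks=50))
matches = matcher.knnMatch(des1, des2, k=2)
```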

OpenCV GPU implementation on YOLO

I am working on an object detection project using YOLOv3 and I want to use my GPU (a GeForce 1050) to accelerate the computation, but I found that at the moment the OpenCV dnn module supports only Intel GPUs. So do we have any other solution?
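For reference, a hedged sketch of the backend/target switches in question; as of OpenCV 4.1 the OpenCL targets are the GPU options the dnn module exposes, and they are primarily tuned for Intel GPUs (file paths are placeholders):

```python
import cv2

net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')  # placeholder paths
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)  # GPU via OpenCL; Intel-oriented at this point
```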

How to calculate Optical Flow magnitude?

I'm trying to see how different two given video frames are. My goal is to calculate a single value showing how fast objects inside those frames are moving. I can calculate the optical flow below, both the HSV and magnitude matrices, but I don't know how to calculate an **average** total movement magnitude. How can I calculate it from those matrices?

```python
def optical_flow(one, two):
    one_g = cv2.cvtColor(one, cv2.COLOR_RGB2GRAY)
    two_g = cv2.cvtColor(two, cv2.COLOR_RGB2GRAY)
    hsv = np.zeros((120, 320, 3))
    # set saturation
    hsv[:,:,1] = cv2.cvtColor(two, cv2.COLOR_RGB2HSV)[:,:,1]
    # obtain dense optical flow parameters
    flow = cv2.calcOpticalFlowFarneback(one_g, two_g, flow=None,
                                        pyr_scale=0.5, levels=1, winsize=15,
                                        iterations=2, poly_n=5, poly_sigma=1.1, flags=0)
    # convert from cartesian to polar
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # hue corresponds to direction
    hsv[:,:,0] = ang * (180 / np.pi / 2)
    # value corresponds to magnitude
    hsv[:,:,2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    # convert HSV to float32
    hsv = np.asarray(hsv, dtype=np.float32)
    rgb_flow = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
    return rgb_flow
```

The `rgb_flow` is a 3D array that looks like this:

```
[[[0 0 0] [0 0 0] [0 0 0] ... [0 0 0] [0 0 0] [0 0 0]]
 [[0 0 0] [0 0 0] [0 0 0] ... [0 0 0] [0 0 0] [0 0 0]]
 ...
 [[0 0 0] [0 0 0] [0 0 0] ... [0 0 0] [0 0 0] [0 0 0]]]
```

And the `mag` matrix is a 2D array like this:

```
[[3.2825139e-03 3.9561605e-03 4.8938910e-03 ... 3.7310597e-02 3.2986153e-02 2.5520157e-02]
 [4.9569397e-03 6.3276174e-03 7.7017904e-03 ... 3.9564677e-02 3.2582227e-02 2.6329078e-02]
 ...
 [6.9548332e-06 8.3683852e-05 6.0906638e-03 ... 8.3484064e-04 6.4721738e-04 2.9505073e-04]]
```
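A minimal way to collapse the flow field to one scalar, assuming the mean of the per-pixel magnitudes from `mag` above is an acceptable summary:

```python
import numpy as np

avg_motion = float(np.mean(mag))       # mag from cv2.cartToPolar in the function above
median_motion = float(np.median(mag))  # a robust alternative if a few fast pixels dominate
```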

Avoid retraining a model when executing a program?

I've started using OpenCV for some image processing projects and I'm wondering if there's a way to save time when it comes to processing test images against a database of faces. **Issue**: 10 pictures of each subject A, B, and C exist in folders on the desktop, and each subject has their own identifier in a list saying who the subject is. The program navigates to the first subject folder, trains on their face and name from the list, then moves to the next subject, rinse and repeat until complete. Once the training process is done, a test image is then given to the program to see who it thinks the subject is (Person A, B, or C). The test image is the only thing that changes each time the script is run. So far it's fairly successful at predicting who each subject is, but the training alone makes up a fair bit of the execution time. **Question**: Is there a way to make it so the model doesn't have to retrain every single time? I figured this is what the cascade files (`haarcascade_frontalface_default.xml`, `lbpcascade_frontalface.xml`, etc.) are for in terms of prediction accuracy, but I haven't been able to find a clear-cut answer for a newbie like myself. Would each subject need their own `.xml` cascade file? I'm fairly new to ML and image processing, so even pointing me to a similar post, forum, or book would be awesome. Thanks!
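One hedged pointer: the cascade XML files are detector models (they find faces in an image), separate from the recognizer that learns identities, and the contrib face recognizers can be serialized after training. A sketch assuming the LBPH recognizer from opencv-contrib (variable names are placeholders):

```python
import cv2

# one-time training run (requires opencv-contrib-python)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(face_images, labels)  # face_images: list of grayscale arrays; labels: numpy int array
recognizer.write('trained_model.yml')  # hypothetical output path

# subsequent runs: load instead of retraining
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trained_model.yml')
label, confidence = recognizer.predict(test_face)  # test_face: grayscale face crop
```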

Adding VideoCapture API

Is there any documentation on how to go about adding a new API (new camera type) to VideoCapture?

OpenCV 4.1.1 dnn readNetFromTensorflow error

I have defined a Keras model (.h5) and converted it to TensorFlow (.pb). And it works in TensorFlow:

```python
pb_path = r'*.pb'
with tf.gfile.FastGFile(pb_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')
with tf.Session() as session:
    input = tf.get_default_graph().get_tensor_by_name("input_1:0")
    output_1 = tf.get_default_graph().get_tensor_by_name("trans/out_13/yolo_head/out_/concat:0")
    output_2 = tf.get_default_graph().get_tensor_by_name("trans/out_26/yolo_head/out_/concat:0")
    output_3 = tf.get_default_graph().get_tensor_by_name("trans/out_52/yolo_head/out_/concat:0")
    img = Image.open('/home/hyg/disk2/quanda_data/crop/1.png')
    boxed_image = letterbox_image(img, (416, 416))
    img = np.array(boxed_image, dtype='float32')
    img *= 1./255.
    img = np.expand_dims(img, axis=0)
    out = session.run([output_1, output_2, output_3], feed_dict={input: img})
```

But when I use `cv.dnn.readNetFromTensorflow("*.pb")`:

```
cv2.error: OpenCV(4.1.0) /io/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1383: error: (-215:Assertion failed) scaleMat.type() == CV_32FC1 in function 'populateNet'
```

So I tried to use the TensorFlow tool (optimize_for_inference.py) with tf.compat.v1.graph_util.remove_training_nodes to optimize the model, which produced warnings like:

```
WARNING:tensorflow:Didn't find expected Conv2D input to 'batch_normalization_2/FusedBatchNorm_1'
WARNING:tensorflow:Didn't find expected Conv2D input to 'batch_normalization_4/FusedBatchNorm_1'
...
(the same warning repeats for batch_normalization_7 through batch_normalization_104)
```

And I have used [this gist](http://gist.github.com/dkurt/ff1b2dd272e544e6873c54d3a571662a) to get the config file, `cv.dnn.readNetFromTensorflow("*.pb", "*.pbtxt")`, but I get:

```
cv2.error: OpenCV(4.1.0) /io/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:497: error: (-2:Unspecified error) Input layer not found: batch_normalization_1/FusedBatchNorm_1 in function 'connect'
```

I have uploaded the converted TensorFlow model file to [MEGA](https://mega.nz/#F!XLw0BYLR!OjiAm89aApKeL_N7XJ-YVQ).

Can OpenVINO be used under Qt?

Hi, guys. Does anyone know whether OpenVINO can be used under Qt? This may be a strange question, but any comments will be appreciated!

Why does my image appear cropped when displayed using cv2.imshow() in Python?

```python
import numpy as np
import cv2

img = cv2.imread("vn.jpeg", 0)
cv2.imshow('image', img)
cv2.waitKey(0)
```
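A hedged guess at what's happening: the default window is autosized to the image, so an image larger than the screen gets clipped. Creating a resizable window first lets the image be scaled to fit:

```python
import cv2

img = cv2.imread("vn.jpeg", 0)
cv2.namedWindow('image', cv2.WINDOW_NORMAL)  # resizable, instead of the autosized default
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```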

Error building OpenCV 4.1.0 with opencv_contrib 4.1.0

Hello guys, I'm running into a problem building OpenCV 4.1.0 with opencv_contrib 4.1.0 and I don't know how to solve it. The error output in the console is the following: ![image description](/upfiles/15659442057390303.png) Have you run into a similar problem, and how did you solve this error? Thank you in advance!

Problem about compilation in OpenCV 4

Hi guys, I am using OpenCV and have a question to ask your opinion on. I am using OpenCV for the Android NDK, and I just use libopencv_java4.so. It seems that it is not compiled with NEON (correct me if I am wrong). However, IMHO, NEON, the SIMD architecture, can dramatically speed up the library. On the other hand, I think if the people at OpenCV decided to compile without NEON, there must be a big reason. Therefore, I am hoping for advice on the following:

1. Shall I compile with NEON?
2. Will NEON boost the speed (I think yes)?
3. What are the disadvantages of compiling with NEON, if any (i.e. why does OpenCV not compile with NEON by default)?

I would truly appreciate any suggestions! Thanks a lot!
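For reference, a hedged sketch of the CMake switches involved when building the Android library with NEON for 32-bit ARM (on arm64-v8a, NEON support is mandatory anyway); exact flag behaviour can vary by OpenCV version:

```
cmake -DANDROID_ABI=armeabi-v7a \
      -DENABLE_NEON=ON \
      -DCPU_BASELINE=NEON \
      ..
```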

How to create "libjpegd.lib"?

I want to compile a VS project, but an error says "cannot find libjpegd.lib", so I built the OpenCV 3.4.5 source code with CMake, but I only get a "libjpeg-turbod.lib" in the 3rdparty folder. How can I get a "libjpegd.lib"? Thank you very much!

OpenCV keeps bundling into my iOS Swift framework

I have created a framework for iOS that uses OpenCV. No matter what I have tried, I cannot get my framework to compile without bundling in all the OpenCV pieces it needs. I want to be able to compile my framework and supply it to someone else, who would then have to go and add OpenCV to their app.

I have tried compiling the dynamic framework and adding that to my project. I have tried adding OpenCV via a pod, and I have tried adding the dynamic OpenCV via a pod (which I just can't get to install; it crashes for 10000 different reasons, perhaps it's unsupported now).

To check I'm not expecting too much, I made a framework, let's call it FrameworkA, and then a second framework, FrameworkB. I added FrameworkA to FrameworkB and made B call a function in A. I then added B to a project and tried to call the function in B that called A, and voilà, it broke, saying I needed to import FrameworkA into my app. This is the exact behaviour I want when making my framework that uses OpenCV.

I am tearing my hair out trying to find a solution to this, so if someone could point me in the right direction that would be great, because I thought I had found the solution after compiling with the dynamic flag turned on, but unfortunately that hasn't worked either and it has just bundled it into my framework again. Thank you in advance.

stereoCalibrate VGA CCD camera pair

Hi, I'm using OpenCV to calibrate a stereo pair, which is mounted on a manufactured base. The cameras are designed to be parallel. stereoCalibrate returns 0.136682, with an average reprojection error of 0.172633, but T = [-82.695348046976179, 0.55585694118728157, -5.8423354231928650]. Why is the Tz component (-5.8423354231928650) of the translation vector between the camera coordinate systems so big? And when I use the stereoCalibrate results to rectify the images, there seems to be an obvious rotation. Any help is greatly appreciated!
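For reference, a minimal Python sketch of the rectification step being described, assuming `K1, D1, K2, D2, R, T` are the stereoCalibrate outputs, `size` is the image size, and `left`/`right` are the input images (all placeholder names); `alpha=0` crops the result to valid pixels:

```python
import cv2

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T, alpha=0)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
```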