Channel: OpenCV Q&A Forum - RSS feed

Joining two images

I am quite new to OpenCV and DIP in general, so I need a bit of help with stitching two images. The background of the problem: two joined pieces of plastic were torn apart, leaving adhesive/glue on both pieces. This is the image of the glue on the base: ![](/upfiles/1558604786830225.jpg) and this is the image of the glue on the other attached face: ![](/upfiles/15586049024872095.jpg) Since the backgrounds of the two images are not the same, I have read that feature-based stitching is not possible (the features differ). These two pieces fit together like jigsaw pieces and need to be rotated, so the problem is not as straightforward as panorama stitching. So my question is: how do I join such images together? It would be really helpful if I could have some guidance. Thanks in advance :) My idea so far: find the white glue contours, keep one image fixed, rotate the other one while storing the rotation angle, and compute the area of the merged contours; the area should become smallest at a perfect match.
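
A minimal sketch of that rotation search, assuming both images have already been thresholded into equally sized binary masks (white glue on black) with hypothetical file names; the union of the two masks is smallest where the overlap is greatest:

    import cv2
    import numpy as np

    fixed = cv2.imread('glue_base.png', cv2.IMREAD_GRAYSCALE)
    moving = cv2.imread('glue_face.png', cv2.IMREAD_GRAYSCALE)
    # If the second face is a mirror image of the first (torn glue usually is),
    # flip it before searching: moving = cv2.flip(moving, 1)

    h, w = moving.shape[:2]
    center = (w / 2, h / 2)

    best_angle, best_area = None, np.inf
    for angle in np.arange(0, 360, 0.5):
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(moving, M, (w, h))
        merged = cv2.bitwise_or(fixed, rotated)   # union of the two masks
        area = cv2.countNonZero(merged)           # smaller union = better overlap
        if area < best_area:
            best_angle, best_area = angle, area

    print('best rotation:', best_angle, 'union area:', best_area)

A translation search (shifting one mask over the other at each angle) would normally be layered on top of this, since the two pieces are unlikely to share a common rotation center.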

Inconsistent behavior in ROI calculation

As we know, with:

    Mat img(7, 8, CV_8UC1, Scalar(0));
    Mat com = (Mat_<uchar>(4, 3) << 255, 0, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255);
    img(Rect(1, 2, 3, 4)) += com;

we get an `img` like this: ![](https://i.stack.imgur.com/HdIsh.png) This behavior is understandable, because we do the calculation in place on the ROI. But why does this code:

    Mat img(7, 8, CV_8UC1, Scalar(0));
    Mat com = (Mat_<uchar>(4, 3) << 255, 0, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255);
    img(Rect(1, 2, 3, 4)) = com;

not give the same result? I'm confused about it. Or have I missed something?

algorithm used in licence plate recognition

Which algorithms are used for licence plate recognition with OpenCV?
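
A common starting point, offered as one approach among several rather than the definitive algorithm: localize the plate with the Haar cascade that ships in OpenCV's data folder, then pass the crop to an OCR engine such as Tesseract. A minimal sketch (the image path is hypothetical; cv2.data.haarcascades is provided by the opencv-python wheel):

    import cv2

    cascade_path = cv2.data.haarcascades + 'haarcascade_russian_plate_number.xml'
    plate_cascade = cv2.CascadeClassifier(cascade_path)

    img = cv2.imread('car.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    plates = plate_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in plates:
        plate = gray[y:y+h, x:x+w]   # crop to feed into the OCR step
        cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

Deep-learning detectors (e.g. a model loaded through cv2.dnn) generally localize plates more reliably than cascades, at the cost of needing a trained network.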

SIFT detection with opencv+python

When I use OpenCV for SIFT detection, the program always fails with: `module 'cv2.cv2' has no attribute 'xfeatures2d'`. I have installed the same version of opencv-python and opencv-contrib-python (3.3.0.10), but it doesn't work. This is part of the code:

    import cv2
    import numpy as np
    import sys

    imgpath = r'D:\Users\Mr.Gao\Desktop\NewFile\computer vision\varese.jpg'
    img = cv2.imread(imgpath)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sift = cv2.xfeatures2d.SIFT_create()

I use Python 3.5 and OpenCV 3.x.
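
One frequent cause, offered as an assumption about this setup rather than a confirmed diagnosis: when both opencv-python and opencv-contrib-python are installed, one package can shadow the other, so the contrib modules (including xfeatures2d) disappear from the cv2 that actually gets imported. A quick check:

    import cv2
    print(cv2.__version__)              # confirm which build is actually imported
    print(hasattr(cv2, 'xfeatures2d'))  # False means the contrib modules are missing

If the second line prints False, the usual remedy is to uninstall both packages and reinstall only opencv-contrib-python, which contains the main modules plus contrib.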

computeECC returning strange values

From version 4.1.x there is the possibility to compute the enhanced correlation coefficient (ECC) between two images. It seems to me that it is not working correctly: my two images are very similar, yet I get ECC = 0.474. Just to test the functionality I also tried to run:

    retval = cv2.computeECC(templateImage=reference, inputImage=reference)

and it returns 0.475, while I was expecting 1.00 for an image compared with itself. Does anybody know how to solve this? Thank you for your help.
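
For reference, a minimal self-comparison test with the input converted to single-channel float32 first; this rests on the assumption that computeECC is sensitive to the depth and channel count of its inputs, which is worth ruling out before suspecting the function itself:

    import cv2
    import numpy as np

    reference = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
    ref32 = reference.astype(np.float32)

    ecc = cv2.computeECC(templateImage=ref32, inputImage=ref32)
    print(ecc)   # an image compared with itself should score 1.0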

I am trying to compare a model posture to my posture

Hello, I used this guide, https://medium.com/@dwayneforde/image-recognition-on-ios-with-swift-and-opencv-b5cf0667b79, to integrate OpenCV and compare a model posture to my posture, but after implementing it, it only identifies identical pictures. Is there anything in OpenCV with which I can compare postures (one is a default picture saved in assets, the other is taken from the camera) and tell whether they match? If there is something like this, paid or unpaid, kindly let me know as soon as possible.
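
One direction to explore, sketched here with hypothetical file names: keypoint matching tolerates small viewpoint and lighting changes far better than exact pixel comparison, although genuinely comparing body postures usually calls for a pose-estimation model rather than plain feature matching:

    import cv2

    model = cv2.imread('model_pose.jpg', cv2.IMREAD_GRAYSCALE)
    photo = cv2.imread('camera_pose.jpg', cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(model, None)
    kp2, des2 = orb.detectAndCompute(photo, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]   # distance threshold chosen arbitrarily
    print('match ratio:', len(good) / max(len(matches), 1))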

How to start with Raspberry Pi and OpenCV?

Please suggest a book or video with which I can start development in Python.

one point (u,v) to actual (x,y) w.r.t camera frame?

Hello, I am detecting the center of a circle and getting its (u,v) coordinates in the image. Now I want to turn this (u,v) into actual world coordinates (x,y) with respect to the camera frame; I do not need the z coordinate. How can I get this? What is the possible way to do this? I could not find anything relevant to this. I have my camera calibration matrix. Thanks.
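
For reference: under the pinhole model a pixel only defines a ray, so some depth Z (for example, a known distance from the camera to the circle's plane) is still required even if only x and y are wanted; the calibration matrix alone is not enough. A minimal sketch with hypothetical intrinsics and measurements:

    import numpy as np

    # Hypothetical values; replace with your calibration results and detection.
    fx, fy = 600.0, 600.0   # focal lengths in pixels (K[0,0], K[1,1])
    cx, cy = 320.0, 240.0   # principal point (K[0,2], K[1,2])
    u, v = 410.0, 275.0     # detected circle center in the image
    Z = 0.50                # known depth of the circle's plane, in meters

    # Pinhole back-projection: depth Z picks one point on the ray through (u, v).
    x = (u - cx) * Z / fx
    y = (v - cy) * Z / fy
    print(x, y, Z)          # point in the camera frame, same units as Z

If the circle lies on a known plane such as a table, a homography from that plane to the image (e.g. from cv2.findHomography) can replace the explicit depth.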

Android DNN tutorial

Hi, I'm having issues with running the example at https://docs.opencv.org/3.4/d0/d6c/tutorial_dnn_android.html and I'm getting this error:

    2019-05-26 19:36:02.333 23779-23862/com.example.dnntutorial E/cv::error(): OpenCV(3.4.6) Error: Unspecified error
        (Blob depth should be CV_32F or CV_8U: 'ddepth == CV_32F || ddepth == CV_8U' where 'ddepth' is 62 (CV_64FC8))
        in void cv::dnn::experimental_dnn_34_v11::blobFromImages(cv::InputArrayOfArrays, cv::OutputArray, double, cv::Size, const Scalar&, bool, bool, int),
        file /build/3_4_pack-android/opencv/modules/dnn/src/dnn.cpp, line 240
    2019-05-26 19:36:02.336 23779-23862/com.example.dnntutorial E/org.opencv.dnn: dnn::blobFromImage_10() caught cv::Exception:
        OpenCV(3.4.6) /build/3_4_pack-android/opencv/modules/dnn/src/dnn.cpp:240: error: (-2:Unspecified error)
        in function 'void cv::dnn::experimental_dnn_34_v11::blobFromImages(cv::InputArrayOfArrays, cv::OutputArray, double, cv::Size, const Scalar&, bool, bool, int)'
        Blob depth should be CV_32F or CV_8U: 'ddepth == CV_32F || ddepth == CV_8U' where 'ddepth' is 62 (CV_64FC8)
    2019-05-26 19:36:02.343 23779-23862/com.example.dnntutorial E/AndroidRuntime: FATAL EXCEPTION: Thread-2
        Process: com.example.dnntutorial, PID: 23779
        CvException [org.opencv.core.CvException: cv::Exception: OpenCV(3.4.6) /build/3_4_pack-android/opencv/modules/dnn/src/dnn.cpp:240:
            error: (-2:Unspecified error) in function 'void cv::dnn::experimental_dnn_34_v11::blobFromImages(cv::InputArrayOfArrays, cv::OutputArray, double, cv::Size, const Scalar&, bool, bool, int)'
            Blob depth should be CV_32F or CV_8U: 'ddepth == CV_32F || ddepth == CV_8U' where 'ddepth' is 62 (CV_64FC8)]
            at org.opencv.dnn.Dnn.blobFromImage_0(Native Method)
            at org.opencv.dnn.Dnn.blobFromImage(Dnn.java:38)
            at com.example.dnntutorial.MainActivity.onCameraFrame(MainActivity.java:76)
            at org.opencv.android.CameraBridgeViewBase.deliverAndDrawFrame(CameraBridgeViewBase.java:392)
            at org.opencv.android.JavaCameraView$CameraWorker.run(JavaCameraView.java:373)
            at java.lang.Thread.run(Thread.java:764)

Is there any function in OpenCV to create a CUDA context, similar to what cuCtxCreate does in the CUDA library?

I am developing an app which requires CUDA driver API functions like cuInit, cuCtxCreate, etc. If I use these functions from the CUDA library I get the error I described in this issue: https://github.com/opencv/opencv/issues/14237. I got this solved by modifying CMakeLists.txt and linking CUDA_CUDA_LIBRARY, which is the path to cuda.lib. My question here is: are there any alternatives available in OpenCV for doing the same? I see setDevice and getDevice, which set and get CUDA devices, but I am not able to find a function in OpenCV for creating a CUDA context similar to what cuCtxCreate does.

Detect vehicle from video stream

Has anyone been able to get OpenCV to detect a vehicle from a live video stream? This is for cars driving at 120 km/h. If so, can you kindly share what you have done? Feel free to PM me.
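
One commonly suggested starting point, sketched under the assumptions of a fixed camera and a hypothetical video source; at 120 km/h the main practical demands are a high enough frame rate and a short exposure to avoid motion blur. Background subtraction plus contour filtering:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture('traffic.mp4')   # hypothetical stream or file
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=32)
    kernel = np.ones((3, 3), np.uint8)

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # suppress small noise
        cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        for c in cnts:
            if cv2.contourArea(c) > 1500:   # area threshold is scene-dependent
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow('vehicles', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

For classifying the moving blobs as vehicles (rather than any moving object), a trained detector such as a Haar cascade or a DNN would be layered on top of this.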

tracking points

![image description](/upfiles/15589996214587384.png) As you can see, I now have six points in this picture. I know that a mouse callback function can select one point at a time. I am wondering: can I use the mouse callback function to track all six points at the same time? Best,
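
A mouse callback fires once per click, so appending each click to a list collects as many points as needed. A minimal sketch with a hypothetical image file:

    import cv2

    points = []

    def on_mouse(event, x, y, flags, param):
        # Record every left-button click.
        if event == cv2.EVENT_LBUTTONDOWN:
            points.append((x, y))

    img = cv2.imread('frame.png')
    cv2.namedWindow('pick')
    cv2.setMouseCallback('pick', on_mouse)

    while len(points) < 6:
        canvas = img.copy()
        for p in points:
            cv2.circle(canvas, p, 4, (0, 0, 255), -1)
        cv2.imshow('pick', canvas)
        if cv2.waitKey(20) & 0xFF == 27:   # Esc aborts early
            break

    print('selected points:', points)
    cv2.destroyAllWindows()

If "track" means following the six points across video frames after selection, the collected list can then be passed as prevPts to cv2.calcOpticalFlowPyrLK.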

Where to start? I'm trying to make a vehicle detection program

I am new to OpenCV. I want to make a program in C++ that can detect vehicles (including bikes) when they cross some point in a real-time video stream. I'm a little bit lost with this: I found some links, but they are confusing to me. I just want to know what I have to look for and how I can do it. I was reading this project but don't know if it is going to help me in any way: https://github.com/andrewssobral/vehicle_detection_haarcascades/blob/master/README.md Thanks!!!

VideoWriter output video is not playable

So I tried running this code. I have a Logitech webcam, and it works fine with imshow; however, when I added the VideoWriter feature to save the video feed, the video file appears but is not playable. Can anyone help me with this? Thank you!

    import cv2
    import sys
    import logging as log
    import datetime as dt
    from time import sleep

    cascPath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(cascPath)
    log.basicConfig(filename='webcam.log', level=log.INFO)

    video_capture = cv2.VideoCapture(0)
    anterior = 0
    video_capture.set(3, 240)
    video_capture.set(4, 240)

    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter('-Vid.avi', fourcc, 10, (240, 240))

    while True:
        if not video_capture.isOpened():
            print('Unable to load camera.')
            sleep(5)
            pass

        ret, frame = video_capture.read()
        if ret:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = faceCascade.detectMultiScale(
                gray,
                scaleFactor=1.1,
                minNeighbors=5,
                minSize=(30, 30)
            )
            for (x, y, w, h) in faces:
                cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            if anterior != len(faces):
                anterior = len(faces)
                log.info("faces: " + str(len(faces)) + " at " + str(dt.datetime.now()))

            cv2.imshow('Video', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

            cv2.imshow('Video', frame)
            out.write(frame)
        else:
            break

    video_capture.release()
    out.release()
    cv2.destroyAllWindows()
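
A frequent cause of an unplayable file, offered as an assumption to check rather than a confirmed diagnosis: VideoWriter silently drops any frame whose size differs from the size passed to its constructor, and many webcams ignore a 240x240 request and deliver e.g. 320x240. A sketch that sizes the writer from an actually captured frame:

    import cv2

    video_capture = cv2.VideoCapture(0)
    ret, frame = video_capture.read()    # grab one frame to learn the real size
    assert ret, 'camera read failed'
    h, w = frame.shape[:2]

    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter('out.avi', fourcc, 10, (w, h))   # width first, then height

    while True:
        ret, frame = video_capture.read()
        if not ret:
            break
        out.write(frame)                 # frame size now matches the writer
        cv2.imshow('Video', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    video_capture.release()
    out.release()
    cv2.destroyAllWindows()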

I want to find the actual width of the book in the image which is under the laser

I want to find the width of the book in the image, which is under a laser (image attached). How do I find only the laser portion, specifically the larger, horizontal segment, and then calculate its width? I need code in Python.
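
A rough sketch of one approach, under the assumptions that the laser is red, that its horizontal stripe is the widest bright-red region, and with a hypothetical file name; note that converting the pixel width to a real-world width still requires a known scale or camera calibration:

    import cv2

    img = cv2.imread('book_laser.jpg')
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Red wraps around hue 0 in HSV, so combine two ranges.
    lower = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)

    cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    # The horizontal laser stripe should be the contour with the largest bounding-box width.
    widest = max(cnts, key=lambda c: cv2.boundingRect(c)[2])
    x, y, w, h = cv2.boundingRect(widest)
    print('laser width in pixels:', w)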

paper edge detection and perspective transform

Before image: https://imgur.com/f190UFk Processed image: https://imgur.com/JkEhWkS You can see the processed image has a highlight, so the transform works badly. Is it possible to make a rectangle that ignores that highlight area?

    import os
    import cv2
    import numpy as np
    from nanoid import generate

    def processImage(imagepath, ext):
        img = cv2.imread(imagepath)
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)

        _, threshed = cv2.threshold(s, 50, 255, cv2.THRESH_BINARY_INV)
        cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

        canvas = img.copy()
        # cv2.drawContours(canvas, cnts, -1, (0, 255, 0), 1)
        cnts = sorted(cnts, key=cv2.contourArea)
        cnt = cnts[-1]
        print(cnt)

        arclen = cv2.arcLength(cnt, True)
        approx = cv2.approxPolyDP(cnt, 0.005 * arclen, True)
        cv2.drawContours(canvas, [cnt], -1, (255, 0, 0), 5, cv2.LINE_AA)
        cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
        print(approx)

        approx = rectify(approx)
        pts2 = np.float32([[0, 0], [2480, 0], [2480, 3508], [0, 3508]])
        M = cv2.getPerspectiveTransform(approx, pts2)
        dst = cv2.warpPerspective(canvas, M, (2480, 3508))

        filename_output = generate() + ext
        cv2.imwrite('./static/' + filename_output, dst)

        topLeft, topRight, bottomRight, bottomLeft = approx
        topLeft = topLeft.tolist()
        topRight = topRight.tolist()
        bottomRight = bottomRight.tolist()
        bottomLeft = bottomLeft.tolist()
        return {
            'filename': './static/' + filename_output,
            'shape': img.shape,
            'approx': {
                'topLeft': topLeft,
                'topRight': topRight,   # was 'topRight': topLeft, an apparent typo
                'bottomRight': bottomRight,
                'bottomLeft': bottomLeft,
            },
        }

    def rectify(h):
        h = h.reshape((-1, 2))   # was hard-coded (13, 2); -1 handles any vertex count
        hnew = np.zeros((4, 2), dtype=np.float32)
        add = h.sum(1)
        hnew[0] = h[np.argmin(add)]
        hnew[2] = h[np.argmax(add)]
        diff = np.diff(h, axis=1)
        hnew[1] = h[np.argmin(diff)]
        hnew[3] = h[np.argmax(diff)]
        return hnew

Additional images of a similar condition: https://imgur.com/wDXtLsd https://imgur.com/KAvOtdG
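
One idea for the highlight, hedged on the assumption that the glare only carves a notch into the detected page contour rather than splitting it in two: take the convex hull of the largest contour before the polygon approximation, so concave defects caused by the highlight are bridged. Continuing from `cnt` in the code above:

    import cv2

    # cnt: the largest contour found in processImage above.
    hull = cv2.convexHull(cnt)
    arclen = cv2.arcLength(hull, True)
    # A coarser epsilon on the hull tends to collapse it to the four page corners.
    approx = cv2.approxPolyDP(hull, 0.02 * arclen, True)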

How to get xpos and ypos values using findContours

Hello, I'm trying to recognize image contours using OpenCV 4 with Android Studio (Kotlin). However, I don't know how to get the xpos and ypos values from the MatOfPoint variables (the MatOfPoint variables are the output of Imgproc.findContours). Do you have any solutions?

    private fun calcPosXY() {
        val bitmap = textureView.getBitmap()
        var imageMat = Mat()
        val contours = ArrayList<MatOfPoint>()
        val hierarchy = Mat()

        Utils.bitmapToMat(bitmap, imageMat)
        Imgproc.cvtColor(imageMat, imageMat, Imgproc.COLOR_RGB2GRAY)
        Imgproc.threshold(imageMat, imageMat, 100.0, 255.0, Imgproc.THRESH_BINARY)
        Imgproc.findContours(imageMat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_TC89_L1)

        for (contour in contours) {
            val approxCurve = MatOfPoint2f()
            val contour2f = MatOfPoint2f()
            contour.convertTo(contour2f, CvType.CV_32FC2)
            val approxDistance = Imgproc.arcLength(contour2f, true) * 0.02
            Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true)

            val points = MatOfPoint()   // this variable
            approxCurve.convertTo(points, CvType.CV_8UC4)
            //
            // how to get xpos and ypos value from each points?
            // e.g. xpos=327 ypos=512
            //
        }
    }

Can not find cv_cpu_config.h file?

I want to build the opencv_traincascade application. I see the sources at opencv/apps/traincascade. I have added all these sources to a Visual Studio project and built it, but I get the error **Cannot open include file: 'cv_cpu_config.h'**. I cannot see any cv_cpu_config.h file in the source tree. Can anyone help me solve this problem? Best regards, Tung

Which algorithm can best get the location of an object?

Hi guys, I am making a project in which a robot arm grasps a medicine box and recognizes it. So I first need to locate the object (x, y, z and the rotation) so that the robot arm can grasp it, but I don't know of any algorithm suitable for this. Do you have any suggestions? Thank you in advance.
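
One standard route, sketched with hypothetical numbers; it assumes a calibrated camera and that four or more known points on the box (for example its corner positions, or the corners of a marker stuck onto it) can be detected in the image. Solving the Perspective-n-Point problem then yields the full 6-DoF pose (rotation and translation) of the box in the camera frame:

    import cv2
    import numpy as np

    # Box corner coordinates in the box's own frame, in meters (hypothetical 10x6 cm face).
    object_points = np.array([[0.00, 0.00, 0.0],
                              [0.10, 0.00, 0.0],
                              [0.10, 0.06, 0.0],
                              [0.00, 0.06, 0.0]], dtype=np.float64)
    # Their detected pixel positions, plus camera intrinsics from calibration (all hypothetical).
    image_points = np.array([[322, 210], [410, 215], [405, 280], [318, 275]], dtype=np.float64)
    K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix; tvec is the box origin in camera coordinates
    print(R, tvec)

Detecting the corner points reliably is the hard part; fiducial markers (cv2.aruco) or a trained detector are common ways to get them, and a camera-to-robot hand-eye calibration is still needed to convert the pose into arm coordinates.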

Chessboard light reflection

Hello, for calibration purposes I use a chessboard with OpenCV, but the chessboard is not always detected due to reflections of light. Could you help? Thank you.
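
Two things that sometimes help, sketched here rather than guaranteed fixes: enable adaptive thresholding and normalization in the classic detector, and fall back to the sector-based detector added in OpenCV 4.x, which tends to be more tolerant of uneven lighting. The frame name and pattern size are hypothetical:

    import cv2

    gray = cv2.imread('calib_frame.png', cv2.IMREAD_GRAYSCALE)
    pattern = (9, 6)   # count of inner corners on the board

    flags = cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE
    found, corners = cv2.findChessboardCorners(gray, pattern, None, flags)

    if not found:
        # Sector-based detector (OpenCV >= 4.x), often more robust to glare.
        found, corners = cv2.findChessboardCornersSB(gray, pattern)
    print(found)

On the physical side, diffuse lighting and a matte (non-laminated) board usually remove the specular highlights at the source.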

