Channel: OpenCV Q&A Forum - RSS feed

Python-openCV: Extracting (x,y) coordinates of point features on an image

Hi all, I am trying to extract the (x,y) coordinates of the four corners of a wooden rectangular plank in an image and apply that to a real-time video feed. What I have in mind is:

1) Read the image and apply Harris Corner Detection (HCD) to mark out the 4 corners as red points.
2) Search for the red points on the image and output an array giving their (x,y) coordinates.

I have no idea how to implement step 2 at the moment, and with regard to step 1, I have no idea how HCD knows which pixel to mark. Is there a way I can extract the coordinate information?

![image description](/upfiles/14641803629337874.jpg) ![image description](/upfiles/1464259944149715.jpg) ![image description](/upfiles/14642599539861588.jpg) ![image description](/upfiles/14642599712193298.jpg)
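For illustration, a minimal sketch of one way to get the coordinates directly, using `cv2.goodFeaturesToTrack` with the Harris detector (it returns the point coordinates themselves, so drawing red points and then searching for them isn't needed; the file name is a placeholder):

```
import cv2

img = cv2.imread("plank.jpg")                      # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Ask for the 4 strongest corners, scored with the Harris response.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=4, qualityLevel=0.01,
                                  minDistance=50, useHarrisDetector=True, k=0.04)

if corners is not None:
    coords = corners.reshape(-1, 2)                # N x 2 array of (x, y) floats
    print(coords)
    for x, y in coords:
        cv2.circle(img, (int(x), int(y)), 5, (0, 0, 255), -1)
```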

Residual error from fundamental matrix

Hi guys, as in my previous topics, I'm still working on self-calibration. I'm generating the data for the evaluation, but I end up with a strange value when computing the residual error as defined [here, slide 31](http://campar.in.tum.de/twiki/pub/Chair/TeachingWs10Cv2/3D_CV2_WS_2010_TwoView-Fmatrix.pdf). Maybe I'm using the wrong function to compute the norm. The resulting residual error is absurd, while the epipolar equation x'Fx=0 gives me a residual of 0.25, so I suppose the estimate is almost perfect. I have point correspondences for images L and R and the fundamental matrix. Currently I'm doing it this way; I don't care that it isn't efficient, since it's just for data generation:

```
for (int i = 0; i < inliers.rows; i++)
{
    if (inliers.at<uchar>(i) == 1)
    {
        inliersL.push_back(featuresL.at(i));
        inliersR.push_back(featuresR.at(i));

        cv::Mat temp_point_1 = cv::Mat(3, 1, CV_64F);
        temp_point_1.at<double>(0,0) = featuresL.at(i).x;
        temp_point_1.at<double>(1,0) = featuresL.at(i).y;
        temp_point_1.at<double>(2,0) = 1;

        cv::Mat temp_point_2 = cv::Mat(3, 1, CV_64F);
        temp_point_2.at<double>(0,0) = featuresR.at(i).x;
        temp_point_2.at<double>(1,0) = featuresR.at(i).y;
        temp_point_2.at<double>(2,0) = 1;

        /*******************************
         * COMPUTING THE F RESIDUALS
         *******************************/
        // Epipolar equation x'Fx = 0
        cv::Mat tempResF = temp_point_2.t() * fundamentalMat * temp_point_1;
        residualF += fabs(tempResF.at<double>(0,0));

        // Residual error
        double resError = cv::norm(temp_point_2 - (fundamentalMat * temp_point_1))
                        + cv::norm(temp_point_1 - (fundamentalMat.t() * temp_point_2));
        residualF_error += resError;
    }
}
```

I would like to compute the residual error; is there a built-in function to do that? I've looked in the documentation but haven't found one.

EDIT: the results I'm getting are the following:

```
Residual of F:        0.250138
Mean residual of F:   0.0039084
F RESIDUAL ERROR:     65237.2
```

where:
- Residual of F is computed using the epipolar equation x'Fx=0
- the mean residual is the previous value divided by the number of inliers used to estimate the fundamental matrix
- the last one (F RESIDUAL ERROR) is the one that is wrong and that I'm asking about
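If the residual on that slide is the symmetric point-to-epipolar-line distance (a common definition), then `norm(x2 - F*x1)` would not compute it, since F*x1 is a line, not a point. A minimal Python sketch under that assumption, for Nx2 point arrays and a fundamental matrix F:

```
import numpy as np
import cv2

def epipolar_residual(ptsL, ptsR, F):
    # Lines in the right image from left points (l' = F x), and vice versa.
    lR = cv2.computeCorrespondEpilines(np.float32(ptsL).reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    lL = cv2.computeCorrespondEpilines(np.float32(ptsR).reshape(-1, 1, 2), 2, F).reshape(-1, 3)

    xR = np.hstack([ptsR, np.ones((len(ptsR), 1))])   # homogeneous right points
    xL = np.hstack([ptsL, np.ones((len(ptsL), 1))])   # homogeneous left points

    # Distance from point to line (a, b, c): |ax + by + c| / sqrt(a^2 + b^2)
    dR = np.abs(np.sum(lR * xR, axis=1)) / np.linalg.norm(lR[:, :2], axis=1)
    dL = np.abs(np.sum(lL * xL, axis=1)) / np.linalg.norm(lL[:, :2], axis=1)
    return np.mean(dR + dL)
```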

Coordinate system for P1, P2 in triangulatePoints

Hey, looking at the mathematics: is it correct that if P1 and P2 (the projection matrices for camera 1 and 2) are both defined with respect to some world coordinate frame, then the output of `triangulatePoints` will be in that world coordinate frame? In my case I have two cameras, both with known intrinsics K1 and K2. I have derived the rotation and translation matrices for each camera such that applying them transforms the camera to the world coordinate frame. I then compute Ki*[Ri|Ti] for each camera, giving me P1 and P2. Is this correct?
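A minimal sketch of that setup (all values are placeholders): with P = K[R|t] mapping world coordinates into each image, the triangulated output is in the same world frame once the homogeneous coordinate is divided out:

```
import numpy as np
import cv2

# Placeholder intrinsics/extrinsics: substitute the calibrated values.
K1 = K2 = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
R1, T1 = np.eye(3), np.zeros((3, 1))
R2, T2 = np.eye(3), np.array([[0.1], [0], [0]])   # second camera shifted 10 cm

# P = K [R | t] maps world coordinates to image coordinates.
P1 = K1 @ np.hstack([R1, T1])
P2 = K2 @ np.hstack([R2, T2])

# Matched pixel coordinates, shape 2xN, one column per point.
ptsL = np.array([[320.], [240.]])
ptsR = np.array([[240.], [240.]])

X_h = cv2.triangulatePoints(P1, P2, ptsL, ptsR)   # 4xN homogeneous output
X = X_h[:3] / X_h[3]                              # 3xN, expressed in the world frame
```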

Loading a .yml file into an SVM

Hi everyone, I trained an SVM and saved the result in a .yml file with `svm->save(filename)`. However, on the second run, when I use `Ptr<SVM> svm = SVM::load(filename);`, the program doesn't give me the same result. What I do on the second run is comment out these two parts:

```
Mat trainingData;
int k = 0;
Mat labels;
vector<String> fna;
glob("C:/Users/albma/Desktop/train_originale/*.png", fna, true);
for (size_t i = 0; i < fna.size(); i++) { // trees: label = 1
    printf("Training Image = %s\n", fna[i]);
    img = imread(fna[i]);
    vector<KeyPoint> keypoint;
    Mat bowDescriptor;
    detector1->detect(img, keypoint);              // detect keypoints
    bowDE.compute(img, keypoint, bowDescriptor);   // compute descriptors
    trainingData.push_back(bowDescriptor);
    if (!bowDescriptor.empty())
        labels.push_back((int)1);
}
trainingData.convertTo(trainingData, CV_32FC1);
```

And this:

```
int dictSize = 1500;
Ptr<SVM> svm = SVM::create();
svm->setType(SVM::ONE_CLASS);
svm->setNu(0.5);
svm->setKernel(SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
printf("Training SVM\n");
Ptr<TrainData> td = TrainData::create(trainingData, ROW_SAMPLE, labels); // start training SVM
svm->train(td);
svm->save("mySVM.yml");
```

substituting it all with the `load()` call mentioned above. What's wrong with the code?
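For reference, a minimal save/load round trip in Python on toy data: a reloaded SVM should reproduce predictions exactly for identical input features, so if results differ between runs, the features themselves (e.g. a BoW vocabulary that is re-clustered on each run) are worth checking too:

```
import numpy as np
import cv2

# Toy training data: 20 samples of 5 features each, one-class labels.
X = np.random.rand(20, 5).astype(np.float32)
y = np.ones((20, 1), dtype=np.int32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_ONE_CLASS)
svm.setNu(0.5)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(X, cv2.ml.ROW_SAMPLE, y)
svm.save("mySVM.yml")

# Reload and verify the predictions are identical on the same features.
svm2 = cv2.ml.SVM_load("mySVM.yml")
same = np.array_equal(svm.predict(X)[1], svm2.predict(X)[1])
print("loaded model matches:", same)
```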

In the opencv.js function cv.findTransformECC, what is inputMask?

How do you represent the `inputMask=None` 6th argument in JavaScript? `null` does not work. The argument is NOT optional, contrary to what the docs outline: [findTransformECC](https://docs.opencv.org/master/dc/d6b/group__video__track.html#ga1aa357007eaec11e9ed03500ecbcbe47) has 2 optional parameters (inputMask and gaussFiltSize), but if you don't include them you get an error. So what should be used for inputMask?

```
function Align_img() {
    let image_baseline = cv.imread(imgElement_Baseline);
    let image = cv.imread('imageChangeup');
    let im1_gray = new cv.Mat();
    let im2_gray = new cv.Mat();
    let im2_aligned = new cv.Mat();

    // Get size of baseline image
    var width1 = image_baseline.cols;
    var height1 = image_baseline.rows;

    // Resize image to the baseline image
    let dim1 = new cv.Size(width1, height1);
    cv.resize(image, image, dim1, cv.INTER_AREA);

    // Convert images to grayscale
    cv.cvtColor(image_baseline, im1_gray, cv.COLOR_BGR2GRAY);
    cv.cvtColor(image, im2_gray, cv.COLOR_BGR2GRAY);

    // Find size of image1
    let dsize = new cv.Size(image_baseline.rows, image_baseline.cols);

    // Define the motion model
    const warp_mode = cv.MOTION_HOMOGRAPHY;

    // Define 3x3 matrix and initialize the matrix to identity
    let warp_matrix = cv.Mat.eye(3, 3, cv.CV_8U);

    // Specify the number of iterations
    const number_of_iterations = 5000;

    // Specify the threshold of the increment in the correlation coefficient between two iterations
    const termination_eps = 0.0000000001; // 1e-10

    // Define termination criteria
    //const criteria = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, number_of_iterations, termination_eps);
    let criteria = new cv.TermCriteria(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, number_of_iterations, termination_eps);

    // Run the ECC algorithm. The results are stored in warp_matrix.
    //let inputMask = new cv.Mat.zeros(im1_gray.size(), cv.CV_8UC3); // uint8
    cv.findTransformECC(im1_gray, im2_gray, warp_matrix, warp_mode, criteria, null, 5);

    // Use warpPerspective for Homography
    cv.warpPerspective(image, im2_aligned, warp_matrix, dsize, cv.INTER_LINEAR + cv.WARP_INVERSE_MAP);
    cv.imshow('imageChangeup', im2_aligned);

    im1_gray.delete();
    im2_gray.delete();
    im2_aligned.delete();
}
```
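For comparison, a minimal Python version of the same call, where `None` is accepted for the 6th argument; whether opencv.js accepts an empty `new cv.Mat()` in that slot instead is an untested assumption, since embind has no direct analogue of None:

```
import numpy as np
import cv2

im1 = cv2.imread("baseline.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
im2 = cv2.imread("changeup.png", cv2.IMREAD_GRAYSCALE)

warp_matrix = np.eye(3, 3, dtype=np.float32)   # ECC needs float32, not 8U
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 5000, 1e-10)

# 6th arg: inputMask=None; 7th arg: gaussFiltSize (required in OpenCV 4.x)
cc, warp_matrix = cv2.findTransformECC(im1, im2, warp_matrix,
                                       cv2.MOTION_HOMOGRAPHY, criteria, None, 5)
```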

Android OpenCV optical mark recognition

I want to use OpenCV for an OMR (bubble) sheet. I don't have a fixed number of questions or columns in my OMR sheet, so I am trying to detect rows and columns (I also need to detect the title of each column), and then I can move further to filled-circle detection. I get a crash on the line `Imgproc.boundingRect(contours[i])`. Also, I checked the intermediate results: I get the row and column images, though not perfect. P.S. I am very new to OpenCV and my approach may be incorrect; I would be thankful for any advice. I have an OMR sheet similar to the one in the image; the number of questions and the number of columns is not fixed. I need to identify the number of columns, the number of questions, the column titles, and the filled circles (i.e. the answers), so I try to detect the horizontal and vertical lines.

![image description](/upfiles/15942289414537243.jpg)

```
fun showAllBorders(paramView: Bitmap?) {
    // paramView = BitmapFactory.decodeFile(filename.getPath());
    localMat1 = Mat()
    var scale = 25.0
    var contourNo: Int = 0
    Utils.bitmapToMat(paramView, localMat1)
    localMat1 = Mat()
    var thresMat = Mat()
    var horiMat = Mat()
    var grayMat = Mat()
    var vertMat = Mat()
    Utils.bitmapToMat(paramView, localMat1)
    val imgSource: Mat = localMat1.clone()
    Imgproc.cvtColor(imgSource, grayMat, Imgproc.COLOR_RGB2GRAY)
    Imgproc.adaptiveThreshold(grayMat, thresMat, 255.0, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, -2.0)
    horiMat = thresMat.clone()
    vertMat = thresMat.clone()

    val horizontalSize1 = horiMat.cols().toDouble() / scale
    val horizontalStructure: Mat = Imgproc.getStructuringElement(MORPH_RECT, Size(horizontalSize1, 1.0))
    Imgproc.erode(horiMat, horiMat, horizontalStructure, Point(-1.0, -1.0), 1)
    Imgproc.dilate(horiMat, horiMat, horizontalStructure, Point(-1.0, -1.0), 1)

    val verticalSize1 = vertMat.rows().toDouble() // scale
    val verticalStructure: Mat = Imgproc.getStructuringElement(MORPH_RECT, Size(1.0, verticalSize1))
    Imgproc.erode(vertMat, vertMat, verticalStructure, Point(-1.0, -1.0), 1)
    Imgproc.dilate(vertMat, vertMat, verticalStructure, Point(-1.0, -1.0), 4)

    var mask: Mat = Mat()
    var resultMat: Mat = Mat()
    Core.add(horiMat, vertMat, resultMat)
    var jointsMat: Mat = Mat()
    Core.bitwise_and(horiMat, vertMat, jointsMat)

    val contours: List<MatOfPoint> = ArrayList()
    val cnts: List<MatOfPoint> = ArrayList()
    val hierarchy = Mat()
    var rect: Rect? = null
    var rois = mutableListOf<Mat>()
    var bmpList = mutableListOf<Bitmap>()
    Imgproc.findContours(resultMat, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE)
    for (i in contours.indices) {
        if (Imgproc.contourArea(contours[i]) < 100) {
            contourNo = i
            val contour2f = MatOfPoint2f(*contours[contourNo].toArray())
            val contours_poly = MatOfPoint2f(*contours[contourNo].toArray())
            Imgproc.approxPolyDP(contour2f, contours_poly, 3.0, true)
            val points = MatOfPoint(*contours_poly.toArray())
            var boundRect = mutableListOf<Rect>()
            boundRect[i] = Imgproc.boundingRect(contours[i]) // CRASH HERE // contours[i] is not null
            val roi = Mat(jointsMat, boundRect[i])
            val joints_contours: List<MatOfPoint> = ArrayList()
            val hierarchy1 = Mat()
            Imgproc.findContours(roi, joints_contours, hierarchy1, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE)
            if (joints_contours.size >= 4) {
                rois.add(Mat(jointsMat, boundRect[i]))
                Imgproc.cvtColor(localMat1, localMat1, Imgproc.COLOR_GRAY2RGBA)
                Imgproc.drawContours(localMat1, contours, i, Scalar(0.0, 0.0, 255.0), 6)
                rectangle(localMat1, boundRect[i].tl(), boundRect[i].br(), Scalar(0.0, 255.0, 0.0), 1, 8, 0)
            }
        }
    }
    for (i in rois) {
        val analyzed = Bitmap.createBitmap(i.cols(), i.rows(), Bitmap.Config.ARGB_8888)
        Utils.matToBitmap(i, analyzed)
        bmpList.add(analyzed)
    }
    val analyzed = Bitmap.createBitmap(jointsMat.cols(), jointsMat.rows(), Bitmap.Config.ARGB_8888)
    Utils.matToBitmap(jointsMat, analyzed)
    // below shows rows and columns
    /* val analyzed = Bitmap.createBitmap(resultMat.cols(), resultMat.rows(), Bitmap.Config.ARGB_8888)
       Utils.matToBitmap(jointsMat, analyzed)
       return analyzed!! */
    // return
}
```

Link error cannot open input file opencv_world440.lib when trying to install extra modules + CUDA

I am following this tutorial to install OpenCV with extra modules and CUDA: https://jamesbowley.co.uk/accelerate-opencv-4-2-0-build-with-cuda-and-python-bindings/

I got to the step where I see the OpenCV.sln solution file in the `PATH_TO_OPENCV_SOURCE/build` directory, where `PATH_TO_OPENCV_SOURCE` is `C:/Users/me/Downloads/opencv-master`. I then opened that in Visual Studio 2017, right-clicked on Install, and clicked Build. I now see a bunch of `error cannot open input file ..\..\lib\Release\opencv_world440.lib` errors.

I noticed that in `C:/Users/me/Downloads/opencv-master/build/lib/Release` I see `opencv_ts440.lib` but not `opencv_world440.lib`. In VS, when I go to Configuration Properties, I don't see `C/C++` or `additional include directories` anywhere. Can anyone help with this?

"no element rtspsrc" gstreamer+opencv in win10

Hello, I'm trying to receive an RTSP source through GStreamer. It shows the video stream correctly when I enter this command line:

```
./gst-launch-1.0.exe -v rtspsrc location=rtsp://192.168.1.2:8554/test latency=0 buffer-mode=auto ! decodebin ! videoconvert ! autovideosink sync=false
```

Here are the steps I did:

1. Install the GStreamer runtime and development files from here: https://gstreamer.freedesktop.org/data/pkg/windows/
2. Configure OpenCV 4.3.0 with the CMake GUI, with GStreamer support on.
3. Build with Visual Studio 2019, getting the world430 .dll and .lib.
4. Import the lib and includes in Qt; no import errors, and the functions work correctly, except when using GStreamer.
5. Works: `const char *gst="rtsp://192.168.1.3:8554/test"; cap.open(gst, CAP_FFMPEG);`
6. Does not work: `const char *gst="filesrc latency=0 buffer-mode=auto location=rtsp://192.168.1.3:8554/test ! decodebin ! videoconvert ! appsink max-buffers=5 drop=true"; cap.open(gst, CAP_GSTREAMER);`

Below are the errors; I hope someone can help me, thank you.

```
[ WARN:0] global C:\Users\goodman-home\Downloads\opencv-4.3.0\modules\videoio\src\cap_gstreamer.cpp (713) cv::GStreamerCapture::open OpenCV | GStreamer warning: Error opening bin: no element "rtspsrc"
[ WARN:0] global C:\Users\goodman-home\Downloads\opencv-4.3.0\modules\videoio\src\cap_gstreamer.cpp (480) cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
```
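For reference, a capture pipeline that mirrors the working gst-launch command would look like the sketch below (shown in Python for brevity). Whether the `rtspsrc` element is visible to the OpenCV build is exactly what the "no element" warning points at, so this assumes the GStreamer plugin directory is on the application's path:

```
import cv2

pipeline = ("rtspsrc location=rtsp://192.168.1.3:8554/test latency=0 buffer-mode=auto "
            "! decodebin ! videoconvert ! appsink max-buffers=5 drop=true")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
print("opened:", cap.isOpened(), "got frame:", ok)
```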

VideoCapture.read blocking Java thread in RUNNING state

I'm using OpenCV's Java bindings with Gstreamer as a backend API for the VideoCapture class. As expected `VideoCapture#read(Mat)` is a blocking call but using the VisualVM profiler I can see that the thread remains in the RUNNING state. If I have more capture threads than there are CPU threads, then the JVM thread scheduler gets confused and other threads in the Java app get starved. I'm sure most of the time in `VideoCapture#read(Mat)` is spent in an IO wait state, but since OpenCV is blocking in native code, Java does not know what the thread state is and can't schedule other threads to run. Perhaps the Java bindings are failing to call `AttachCurrentThread()` in JNI so Java can't track the native thread state?

How to install OpenCV in Android Studio 4.0.0?

Hello. Since there is no Android download link for either of the 4.0.0 versions of OpenCV, which one do I install for my Android Studio? ![image description](/upfiles/15942503084627994.jpg)

What does the "history" parameter of the function "createBackgroundSubtractorMOG2" mean?

I only see the description at this [link](https://docs.opencv.org/master/de/de1/group__video__motion.html#ga2beb2dee7a073809ccec60f145b6b29c); it doesn't have a very detailed explanation, so I'd like to know where I can find a more detailed one. The official web documentation says "Length of the history" - what is the "length of the history"? ![image description](/upfiles/15156588079634633.jpg)

**My code:**

```
import os
import time
import cv2

def main():
    img_src_dirpath = r'C:/Users/Shinelon/Desktop/SRC/'
    dir = r'D:/deal_pics/' + time.strftime('%Y-%m-%d') + '/'
    if not os.path.exists(dir):
        os.makedirs(dir)
    img_dst_dirpath = dir

    history = 60
    varThreshold = 16
    detectShadows = True
    mog2 = cv2.createBackgroundSubtractorMOG2(history, varThreshold, detectShadows)

    for f in os.listdir(img_src_dirpath):
        if f.endswith('.jpg'):
            img = cv2.imread(img_src_dirpath + f)
            mog2.apply(img)
            bg = mog2.getBackgroundImage()
            cv2.imwrite(img_dst_dirpath + f, bg)

    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
```

Appropriate combination of Aruco markers for small sizes

Hello! I am trying to track the pose of an object that is relatively small. It has a flat surface of about 3cm x 3cm on which I can stick an ArUco marker of size 2.5cm x 2.5cm and perform detection and pose estimation. Can anyone suggest a good combination of ArUco dictionary (i.e. whether 4x4 or 5x5, etc.)? My camera will be roughly <1m from the object, typically around 0.75m. There are going to be only 1 or 2 markers stuck on the hand and in the nearby surroundings, so a small dictionary would be enough. Since I am very new to ArUco markers, can anyone suggest good parameters for my configuration, such as border bits, marker size, and which dictionary? While I understand that there is a bit of trial and error involved, I'd like a good starting point. The "Selecting a dictionary" section in the documentation gives tips for the inter-marker distance, but I couldn't find any discussion pertaining to my configuration.
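For illustration, a minimal sketch of where those parameters plug in, using a small 4x4 dictionary as a starting point (the dictionary choice, camera matrix, and file names here are assumptions for the sketch, not tested recommendations):

```
import numpy as np
import cv2

# Small dictionary: few markers means larger inter-marker distance, easier detection.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

# Generate a marker image to print (id 0, 400x400 px).
marker = cv2.aruco.drawMarker(dictionary, 0, 400)
cv2.imwrite("marker0.png", marker)

# Detection + pose estimation (camera_matrix / dist_coeffs from your calibration).
camera_matrix = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)
img = cv2.imread("scene.jpg")   # placeholder file name
corners, ids, rejected = cv2.aruco.detectMarkers(img, dictionary)
if ids is not None:
    # markerLength = 0.025 m for the 2.5 cm marker
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, 0.025,
                                                          camera_matrix, dist_coeffs)
```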

Android dnn in native C++

Hello, I successfully linked OpenCV to my Android C++ native app. Now I've encountered a problem where I cannot read my models (e.g. YOLO). I pasted them here: */storage/emulated/0/DCIM/*, but when running I see that the models cannot be found/read. The tutorial on the OpenCV page is for a Java implementation; I would like to implement something similar but using native C++. What I have tried:
- set permissions on my app to READ/WRITE
- a similar approach to: [question opencv answers](https://answers.opencv.org/question/201703/dnnnet-forward-in-native-c-android-studio/)

Has someone encountered the same issue? Or has some tips? Thanks!

How to use contour and Harris corner functions in solvePnP?

Hi, I have code to extract the 2D coordinates from prior knowledge, using e.g. Harris corners and the contour of the object. I'm using these features because the objects are textureless, so ORB, SIFT, or SURF are not going to work. My goal is to get the 2D correspondences for my 3D CAD model points and use them in `solvePnPRansac` to track the object and get the 6D pose in real time. I created code for Harris corner detection and contour detection as well; they are in two different C++ source files.

Here is the C++ code for Harris corner detection:

```
void CornerDetection::imageCB(const sensor_msgs::ImageConstPtr& msg)
{
  if(blockSize_harris == 0)
    blockSize_harris = 1;
  cv::Mat img, img_gray, myHarris_dst;
  cv_bridge::CvImagePtr cvPtr;
  try
  {
    cvPtr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }
  cvPtr->image.copyTo(img);
  cv::cvtColor(img, img_gray, cv::COLOR_BGR2GRAY);

  myHarris_dst = cv::Mat::zeros(img_gray.size(), CV_32FC(6));
  Mc = cv::Mat::zeros(img_gray.size(), CV_32FC1);
  cv::cornerEigenValsAndVecs(img_gray, myHarris_dst, blockSize_harris, apertureSize, cv::BORDER_DEFAULT);

  for(int j = 0; j < img_gray.rows; j++)
    for(int i = 0; i < img_gray.cols; i++)
    {
      float lambda_1 = myHarris_dst.at<cv::Vec6f>(j, i)[0];
      float lambda_2 = myHarris_dst.at<cv::Vec6f>(j, i)[1];
      Mc.at<float>(j, i) = lambda_1*lambda_2 - 0.04f*pow((lambda_1 + lambda_2), 2);
    }
  cv::minMaxLoc(Mc, &myHarris_minVal, &myHarris_maxVal, 0, 0, cv::Mat());

  this->myHarris_function(img, img_gray);
  cv::waitKey(2);
}

void CornerDetection::myHarris_function(cv::Mat img, cv::Mat img_gray)
{
  myHarris_copy = img.clone();
  if(myHarris_qualityLevel < 1)
    myHarris_qualityLevel = 1;
  for(int j = 0; j < img_gray.rows; j++)
    for(int i = 0; i < img_gray.cols; i++)
      if(Mc.at<float>(j, i) > myHarris_minVal + (myHarris_maxVal - myHarris_minVal)*myHarris_qualityLevel/max_qualityLevel)
        cv::circle(myHarris_copy, cv::Point(i, j), 4, cv::Scalar(rng.uniform(0,255), rng.uniform(0,255), rng.uniform(0,255)), -1, 8, 0);
  cv::imshow(harris_win, myHarris_copy);
}
```

And here is the C++ function for contour detection:

```
void TrackSequential::ContourDetection(cv::Mat thresh_in, cv::Mat &output_)
{
  cv::Mat temp;
  cv::Rect objectBoundingRectangle = cv::Rect(0, 0, 0, 0);
  thresh_in.copyTo(temp);
  std::vector<std::vector<cv::Point>> contours;
  std::vector<cv::Vec4i> hierarchy;
  cv::findContours(temp, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
  if(contours.size() > 0)
  {
    std::vector<std::vector<cv::Point>> largest_contour;
    largest_contour.push_back(contours.at(contours.size()-1));
    objectBoundingRectangle = cv::boundingRect(largest_contour.at(0));
    int x = objectBoundingRectangle.x + objectBoundingRectangle.width/2;
    int y = objectBoundingRectangle.y + objectBoundingRectangle.height/2;
    cv::circle(output_, cv::Point(x, y), 10, cv::Scalar(0, 255, 0), 2);
  }
}
```

Also, I have the 3D CAD model of the object whose 6D pose I'd like to estimate. My question is: how do I use the 2D points detected by the Harris corner and contour functions in `solvePnPRansac` or `solvePnP` to track the object and get the 6D pose in real time? Thanks
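For context on that last question, `solvePnPRansac` just needs the detected 2D points and the corresponding 3D CAD-model points in matching order; the correspondence step (which detected corner maps to which model vertex) is the part the detectors above don't provide. A minimal sketch with made-up correspondences and illustrative camera intrinsics:

```
import numpy as np
import cv2

# 3D model points (e.g. CAD corner vertices in the object frame, metres) - illustrative.
object_points = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.05, 0], [0, 0.05, 0]],
                         dtype=np.float64)
# 2D detections in the same order (e.g. from the Harris/contour code) - illustrative.
image_points = np.array([[320, 240], [420, 242], [418, 300], [322, 298]],
                        dtype=np.float64)

camera_matrix = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points,
                                             camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)   # 6D pose: rotation matrix R plus translation tvec
```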

Error in VideoCapture cap.open(), OpenCV 4.1 Android C++

This is my code:
```
cv::VideoCapture cap;
cap.open(0);
if (!cap.isOpened())
```
The following exception appears, and I tried the approach given on the official website:
```
Traceback (most recent call last):
  File "C:/Program Files/Android/Android Studio/bin/lldb/shared/jobject_printers\jstring_reader.py", line 95, in jstring_summary_provider_23
    return Reader(valobj).decode_string(TargetPlatform(23, True))
  File "C:/Program Files/Android/Android Studio/bin/lldb/shared/jobject_printers\jstring_reader.py", line 88, in decode_string
    data = self._process.ReadMemory(data_address, 2 * length, error)
  File "C:\Program Files\Android\Android Studio\bin\lldb\lib\python\lldb\__init__.py", line 8428, in ReadMemory
    return _lldb.SBProcess_ReadMemory(self, addr, buf, error)
ValueError: Positive integer expected
```
Can anyone help me? Thanks.

Body height and arm length measurement with OpenCV from a single still image capture

Hi, I'm working on my final-year project, which requires measuring a person's height and arm length from a single still image captured with a Raspberry Pi camera, with a known reference distance from the camera to the subject and a known height of the camera above the floor. The problem right now is how to set the pixels-per-metric ratio: should I use an object of known size as a reference? How do I contour a body precisely? How do I measure only the person's height and arm length?
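A minimal sketch of the known-size-reference idea (all numbers here are made-up examples): measure the reference object in pixels, form a pixels-per-cm ratio, and apply it to body measurements taken in the same plane at the known distance:

```
# Pixels-per-metric from a reference object of known real size in the image.
ref_width_pixels = 120.0   # measured width of the reference object, in pixels
ref_width_cm = 21.0        # its known real width, e.g. an A4 sheet held by the subject

pixels_per_cm = ref_width_pixels / ref_width_cm

person_height_pixels = 820.0                        # e.g. bounding-box height of the body
person_height_cm = person_height_pixels / pixels_per_cm
print(f"estimated height: {person_height_cm:.1f} cm")
```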

Why are the coordinates of each crack not steady

I am trying to get the coordinates to hold still, but when I measure the x distance from the reference point (0,0) to the centroid of the crack (or contour), it decreases as the crack moves from left to right. I tried using a while-loop counter but had no luck. The coordinate calculations for the crack are correct, but I can't understand why the distance decreases as it moves from left to right.

Converting ArUco axis-angle to Unity3D Quaternion

I'm interested in comparing the quaternions of an object presented in the real world (with an ArUco marker on top of it) and its simulated version in Unity3D. To do this, I generated different scenes in Unity with the object in different locations. I stored its position and orientation relative to the camera in a csv file, where the quaternion looks something like this (for one example):

```
[-0.492555320262909 -0.00628990028053522 0.00224017538130283 0.870255589485168]
```

In ArUco, after using `estimatePoseSingleMarkers` I got a compact version of angle-axis, and I converted it to a quaternion using the following function (where `rvecs` is the return value of ArUco):

```
import math
import numpy as np

def find_quat(rvecs):
    a = np.array(rvecs[0][0])
    theta = math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
    b = a/theta
    qx = b[0] * math.sin(theta/2)
    qy = -b[1] * math.sin(theta/2)  # left-handed vs right-handed
    qz = b[2] * math.sin(theta/2)
    qw = math.cos(theta/2)
    print(qx, qy, qz, qw)
```

However, after doing this I'm still getting very different results. An example for the same scene:

```
[0.9464098048208864 -0.02661258975275046 -0.009733748408866453 0.321722715311581]   << ArUco result
[-0.492555320262909 -0.00628990028053522 0.00224017538130283 0.870255589485168]     << Unity's result
```

Am I missing something?

OpenCV - OpenGL - OpenCL Interop

Hi guys, I'm writing a very basic program to monitor performance when copying from a cv::ogl::Texture2D to a cv::ogl::Buffer (using the copyTo function), and from there to an OpenCL cv::UMat (using cv::ogl::mapGLBuffer). It seems to me that on paper this should all run on the GPU, but I'm having trouble even running this code:

```
#include "mainwindow.h"

#include <QApplication>

#include <cassert>
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/core/opengl.hpp>

cv::UMat cvUMat;
cv::ogl::Texture2D* cvglTexture;
cv::ogl::Buffer cvglBuffer;

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();

    cv::ocl::setUseOpenCL(true);
    assert(cv::ocl::haveOpenCL());
    assert(cv::ocl::useOpenCL());
    cv::ocl::Context::getDefault().create(cv::ocl::Device::TYPE_GPU);

    cvglTexture->create(640, 480, cv::ogl::Texture2D::Format::RGBA);

    return a.exec();
}
```

This is the error I get:

```
Exception at 0x7ffc886da799, code: 0xe06d7363: C++ exception, flags=0x1 (execution cannot be continued) (first chance) in opencv_world430d!cv::UMat::deallocate
```

And this is my stack:

```
1  RaiseException            KERNELBASE         0x7ffc886da799
2  CxxThrowException         VCRUNTIME140D      0x7ffc67097ec7
3  cv::UMat::deallocate      opencv_world430d   0x7ffc12f21236
4  cv::UMat::deallocate      opencv_world430d   0x7ffc12f21387
5  cv::UMat::deallocate      opencv_world430d   0x7ffc12e7f464
6  main                      main.cpp       30  0x7ff79585297f
7  WinMain                   qtmain_win.cpp 104 0x7ff79585667d
8  invoke_main               exe_common.inl 107 0x7ff795854aad
9  __scrt_common_main_seh    exe_common.inl 288 0x7ff79585499e
10 __scrt_common_main        exe_common.inl 331 0x7ff79585485e
11 WinMainCRTStartup         exe_winmain.cpp 17 0x7ff795854b39
12 BaseThreadInitThunk       KERNEL32           0x7ffc894e7bd4
13 RtlUserThreadStart        ntdll              0x7ffc8ac6ce51
```

I'm using Qt and OpenCV 4.3.0 (debug) built for VC15 (with the Qt 5.12.0 MSVC2017 compiler) on Windows. I have other projects that use cv::UMat and everything runs smoothly there, but it looks like there are some complications with the OpenGL interop. Any thoughts on where to get started and what to check would definitely be helpful! Cheers!

Transform gray image to create a pattern effect on while pixels given a mask

I need to create a pattern on white pixels that basically turns whites into blacks or a given brightness (50 of 255, for example) given a mask. It needs to follow some rules:

1) Have a margin, for example 5 pixels on each side.

Given this (and let's assume only a 2px-wide margin):

```
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
```

turn it into:

```
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1 1
1 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1
1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1 1
1 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1
1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1 1
1 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
```

Example: (sorry for the bad resolution, but it's the best example image I could find)

![image description](https://pbs.twimg.com/media/CbXhC1bW8AELfvG?format=png&name=360x360)

I tried erode with a kernel, but it erodes the bounds only. I guess there's an easy way to do this...
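A minimal sketch of one reading of this (assuming the goal is: keep a margin, then checker the interior): erode the mask by the margin, and stamp a checkerboard onto the white pixels that survive:

```
import numpy as np
import cv2

mask = np.full((10, 20), 255, np.uint8)   # toy all-white mask
margin = 2
dark = 0                                  # or e.g. 50 for a dimmer pattern

# Erosion with a (2*margin+1) square kernel shrinks the white region
# by `margin` pixels on every side, leaving the interior.
kernel = np.ones((2*margin + 1, 2*margin + 1), np.uint8)
inner = cv2.erode(mask, kernel)

# Checkerboard: pixels where (row + col) is even.
ys, xs = np.indices(mask.shape)
checker = ((ys + xs) % 2 == 0)

out = mask.copy()
out[(inner == 255) & checker] = dark
print(out // 255)   # view as a 0/1 grid
```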

