OS => Windows 10 64bit
IDE => Visual Studio 2019
OpenCV Version => 4.4
When I try to compile one of the example codes provided in the OpenCV Visual Studio tutorial, it fails with a linker error.
It says "LNK1104: Cannot open file 'opencv_shape440d.lib'".
I have added the 'install\include' and 'install\x64\vc16\lib' folders in the project properties.
Can anyone please tell me why this error occurs and how to fix it?
↧
Where to find "opencv_shape440d.lib"
↧
Can anyone please tell me what could be the best algorithm for presence/absence detection?
I have one part which is black in color, and I want to do defect detection on it. There is a circular region that I need to check: if the circle is regular it is OK, otherwise NOT OK. But the irregularity in the circle can appear anywhere. Any help is really appreciated.
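A hedged sketch of one classic approach (OpenCV 4.x assumed; the file name and tolerance are placeholders, not from the post): binarise the dark part, take the contour of the circular region, and score its circularity 4*pi*A/P^2, which is 1.0 for a perfect circle and drops as the outline becomes irregular.

import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
# dark part becomes foreground with an inverted Otsu threshold
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = max(contours, key=cv2.contourArea)           # largest blob = region of interest
area = cv2.contourArea(c)
perimeter = cv2.arcLength(c, True)
circularity = 4 * np.pi * area / (perimeter ** 2)
print("OK" if circularity > 0.85 else "NOT OK")  # 0.85 is an assumed tolerance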
↧
Where to find implementation of BFMatcher::match
I want to study the implementation of **BFMatcher** to make some modifications.
I have found this file: https://github.com/opencv/opencv/blob/master/modules/features2d/src/matchers.cpp, which contains the implementation of **ocl_match** used by **BFMatcher**, but I am not able to find any implementation of the ordinary **match**. Where can I find the code that executes when **BFMatcher.match(...)** is called and **HAVE_OPENCL** is not defined?
Thanks!
↧
How to detect the missing part of an image
Hello OpenCV members,
I'm trying to detect the missing part of an image.
The input consists of 2 images (baseImage, checkImage) of different sizes.
How can I detect the missing part of checkImage? I tried with Stitcher but it fails.
base image:

checkImage:

resultImage

thank you!
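A hedged sketch of one approach (every file name and threshold here is an assumption): align checkImage onto baseImage with ORB feature matching and a homography, then diff the aligned images so the missing part shows up as a bright region.

import cv2
import numpy as np

base = cv2.imread("baseImage.png", cv2.IMREAD_GRAYSCALE)
check = cv2.imread("checkImage.png", cv2.IMREAD_GRAYSCALE)

# detect and match ORB features between the two images
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(check, None)
kp2, des2 = orb.detectAndCompute(base, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:100]

# estimate the homography that maps checkImage onto baseImage
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

warped = cv2.warpPerspective(check, H, (base.shape[1], base.shape[0]))
diff = cv2.absdiff(base, warped)                 # bright where the images differ
_, missing = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # assumed threshold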
↧
SyntaxError: Non-UTF-8 code starting with '\x90'
Hi, I wrote this code to compute a 3D projection matrix:
import math
from numpy import array, stack, dot, cross, linalg
import cv2
from matplotlib import pyplot as plt

def projection_matrix(camera_parameters, homography):
    # intrinsic parameters
    camera_parameters = {
        "u0": 320,
        "v0": 240,
        "fu": 800,
        "fv": 800,
    }
    # build the 3x3 intrinsic matrix K from the parameters
    K = array([[camera_parameters["fu"], 0, camera_parameters["u0"]],
               [0, camera_parameters["fv"], camera_parameters["v0"]],
               [0, 0, 1]], dtype=float)
    homography = homography * (-1)
    rot_and_transl = dot(linalg.inv(K), homography)
    col_1 = rot_and_transl[:, 0]
    col_2 = rot_and_transl[:, 1]
    col_3 = rot_and_transl[:, 2]
    # normalise the rotation/translation columns
    l = math.sqrt(linalg.norm(col_1, 2) * linalg.norm(col_2, 2))
    rot_1 = col_1 / l
    rot_2 = col_2 / l
    translation = col_3 / l
    # orthonormalise the rotation basis
    c = rot_1 + rot_2
    p = cross(rot_1, rot_2)
    d = cross(c, p)
    rot_1 = dot(c / linalg.norm(c, 2) + d / linalg.norm(d, 2), 1 / math.sqrt(2))
    rot_2 = dot(c / linalg.norm(c, 2) - d / linalg.norm(d, 2), 1 / math.sqrt(2))
    rot_3 = cross(rot_1, rot_2)
    # 3x4 [R|t] projection, mapped back through K
    projection = stack((rot_1, rot_2, rot_3, translation)).T
    return dot(K, projection)
**But PyCharm gives this error:**
SyntaxError: **Non-UTF-8 code starting with '\x90'** in file C:/Users/Ali Hoseyni/AppData/Local/Programs/Python/Python38/python.exe on line 1, **but no encoding declared**; see http://python.org/dev/peps/pep-0263/ for details
I read the above link, but I didn't understand how to fix this error.
↧
reprojectImageTo3D incorrect results
System: OpenCV 4.4.0
Operating system: Windows 10 Pro, Visual Studio 2019
PCL 1.11.1 point cloud viewer.
I use this example: https://github.com/opencv/opencv/blob/master/samples/cpp/stereo_match.cpp
int alg = STEREO_SGBM;
int SADWindowSize = 1;
int numberOfDisparities = 16;
I get an incorrect point cloud; any suggestions?
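A minimal sketch of the usual reprojection pipeline, with file names and Q as placeholders (not from the post). Note that StereoSGBM returns 16-bit fixed-point disparities scaled by 16, and skipping the division is a frequent cause of distorted point clouds.

import cv2
import numpy as np

left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=16, blockSize=3)
# divide by 16 to convert fixed-point disparity to real disparity values
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

Q = np.eye(4, dtype=np.float32)      # placeholder: use the Q from stereoRectify
points = cv2.reprojectImageTo3D(disparity, Q)
mask = disparity > disparity.min()   # drop pixels with no valid disparity
cloud = points[mask]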

↧
OpenCV(3.4.11) Error: Unspecified error (Number of input channels should be multiple of 3 but got 4)
I am using OpenCV 3.4.11 for Android.
I get this error when I call the forward function of Net:
Mat imageBlob = Dnn.blobFromImage(frame, 0.00392, new Size(416,416),new Scalar(0, 0, 0),/*swapRB*/false, /*crop*/false);
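// Note (suggested fix, not part of the original code): the log below shows a
// 4-channel input [ 1 4 510 510 ] feeding conv_0, whose weights expect 3
// channels, so the RGBA frame would need converting before blobFromImage:
// Imgproc.cvtColor(frame, frame, Imgproc.COLOR_RGBA2BGR);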
tinyV4.setInput(imageBlob);
/* the result */
java.util.List result = new java.util.ArrayList(2);
List outputLayers = new java.util.ArrayList<>();
/* the outputs layers are 30 and 37*/
outputLayers.add(0, "yolo_30");
outputLayers.add(1, "yolo_37");
/*we are looking for layers yolo30 and yolo37, and we want to save them into result */
tinyV4.forward(result, outputLayers);
The full error:
OpenCV(3.4.11) Error: Unspecified error (Number of input channels should be multiple of 3 but got 4) in virtual bool cv::dnn::ConvolutionLayerImpl::getMemoryShapes(const std::vector<MatShape>&, int, std::vector<MatShape>&, std::vector<MatShape>&) const, file /build/3_4_pack-android/opencv/modules/dnn/src/layers/convolution_layer.cpp, line 306
[ERROR:0] OPENCV/DNN: [Convolution]:(conv_0): getMemoryShapes() throws exception. inputs=1 outputs=0/1 blobs=1
[ERROR:0] input[0] = [ 1 4 510 510 ]
[ERROR:0] blobs[0] = CV_32FC1 [ 32 3 3 3 ]
[ERROR:0] Exception message: OpenCV(3.4.11) /build/3_4_pack-android/opencv/modules/dnn/src/layers/convolution_layer.cpp:306: error: (-2:Unspecified error) Number of input channels should be multiple of 3 but got 4 in function 'virtual bool cv::dnn::ConvolutionLayerImpl::getMemoryShapes(const std::vector<MatShape>&, int, std::vector<MatShape>&, std::vector<MatShape>&) const'
E/org.opencv.dnn: dnn::forward_14() caught cv::Exception: OpenCV(3.4.11) /build/3_4_pack-android/opencv/modules/dnn/src/layers/convolution_layer.cpp:306: error: (-2:Unspecified error) Number of input channels should be multiple of 3 but got 4 in function 'virtual bool cv::dnn::ConvolutionLayerImpl::getMemoryShapes(const std::vector<MatShape>&, int, std::vector<MatShape>&, std::vector<MatShape>&) const'
I tried to change the size to 512x512 and 1024x980, but it didn't help.
↧
error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
I am running the code below and getting this error. I have changed the path of the image and given an absolute path as well, but it is still not working.
Kindly help!
import cv2
import numpy as np

image = cv2.imread("C:\\test_image.jpg")
# note: cv2.imread returns None instead of raising when the file cannot be
# read, and cv2.imshow then fails with the size.width>0 assertion
try:
    cv2.imshow('result', image)
    cv2.waitKey(0)
except:
    print("Here")
↧
Help with people detection and tracking
Hi, I was following the example in this video: https://www.youtube.com/watch?v=BCJYorKIlN8&t=78s. To build the code in this example I need OpenCV 3.4.1 tracking.hpp, which I understand can be found at https://github.com/opencv/opencv_contrib/tree/3.4.10. I was using this tutorial to combine the two, adding tracking to the final CMake file: https://www.youtube.com/watch?v=_fqpYLM6SCw&t=622s. This seems to work fine, and building the output of the CMake file goes without a hitch, but when I build the code from the first video I get a bunch of LNK2019 and LNK2001 errors. I am not sure how to address these errors, and any help would be appreciated.
Thanks
↧
Display new window in second monitor
Hello. The system I made displays a full-screen window of continuously changing images. What I need is to show the console on the primary screen and a new full-screen image window on another monitor. I will upload an image to explain better what I am trying to do, so someone kindly help <3
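A hedged sketch of the usual trick (the 1920-pixel offset is an assumption about the primary monitor's width): create a normal window, move it past the primary display, then switch it to full screen.

import cv2
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)  # placeholder image
cv2.namedWindow("display", cv2.WINDOW_NORMAL)
cv2.moveWindow("display", 1920, 0)       # move onto the second monitor
cv2.setWindowProperty("display", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
cv2.imshow("display", img)
cv2.waitKey(0)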
↧
Finding corresponding pixel coordinate of a single point before and after cv2.remap
Let's begin by saying that I have a static image, call it `leftFrame`.
This frame is passed to `cv2.remap(leftFrame, map1, map2, cv2.INTER_LANCZOS4, 0)`, so that every pixel of the original image (`leftFrame`) is relocated to a new position. `map1` and `map2` are the output of `cv2.initUndistortRectifyMap(MLS, dLS, RL, PL, imgShape, cv2.CV_16SC2)`, with MLS, dLS, RL, PL having been computed beforehand.
This works well.
HOWEVER, now I want to get the corresponding pixel coordinate of a single point in the new frame, given the original pixel coordinate of that point in the initial frame (`leftFrame`).
How can I achieve it?
The code is simplified to be like this:
Left_Stereo_Map = cv2.initUndistortRectifyMap(MLS, dLS, RL, PL, imgShape, cv2.CV_16SC2)
leftCap = cv2.VideoCapture(PORT)
while True:
    # Collect images continuously
    ret1, leftFrame = leftCap.read()
    Left_nice = cv2.remap(leftFrame, Left_Stereo_Map[0], Left_Stereo_Map[1],
                          cv2.INTER_LANCZOS4,
                          borderMode=cv2.BORDER_CONSTANT, borderValue=0)
    original_x = PREDEFINED_X
    original_y = PREDEFINED_Y
    x_after = ?
    y_after = ?
    cv2.waitKey(1)
Thanks everyone for spending your time reading this question. Stay safe in these times!
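A hedged sketch of one way to obtain the mapping for a single point, reusing MLS, dLS, RL and PL from the code above: cv2.undistortPoints applies the same undistort-and-rectify transform that built the maps, and returns pixel coordinates when P is given.

import cv2
import numpy as np

# single point in the original (distorted) image, shape (1, 1, 2)
pt = np.array([[[PREDEFINED_X, PREDEFINED_Y]]], dtype=np.float32)
rectified = cv2.undistortPoints(pt, MLS, dLS, R=RL, P=PL)
x_after, y_after = rectified[0, 0]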
↧
Detach blobs with a contact point
Hello. I'm working with blob analysis and I'm having problems detaching 2 blobs which have a contact point. The images I'm working on are of this type:
 
These are the same images binarized:
 
My problem is that morphological operators like erosion, dilation and opening don't work, since the contact points are the same size as, or even larger than, some important features of the image (like the heads of the objects with 2 holes). So when I tune my operator (opening, in this case) to delete the contact points, I end up also deleting parts of the image that I need for other measurements (like the number of holes per object).
I thought of applying a number of opening operators using a rectangle oriented along each of the objects as the structuring element (as suggested here: http://answers.opencv.org/question/56042/detach-two-blob/), but this solution doesn't work well, since it is not guaranteed that the contact points are oriented along one of the objects involved (as happens, for example, for the contact point in the lower part of the second image).
Does anyone have ideas on how I could solve this problem?
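For reference, a hedged sketch of a watershed-based alternative (the file name and the 0.6 seed threshold are assumptions): thin contact points are cut by seeds taken from the distance transform, without eroding the object bodies.

import cv2
import numpy as np

binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, seeds = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
seeds = seeds.astype(np.uint8)
_, markers = cv2.connectedComponents(seeds)      # one seed label per blob core
markers = markers + 1                            # background becomes label 1
markers[(binary > 0) & (seeds == 0)] = 0         # undecided foreground pixels
color = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
cv2.watershed(color, markers)                    # touching blobs get distinct labels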
Edit: Here are the results thanks to [sturkmen](http://answers.opencv.org/question/87583/detach-blobs-with-a-contact-point/?answer=87608#post-id-87608)
 
 
↧
Search URL for documentation
Hi! Would it be possible to configure Doxygen to have a search URL available? Something like https://docs.opencv.org/master/doxysearch.cgi?q=triangulatePoints
I wanted to set up a [!bang command](https://duckduckgo.com/bang) for DuckDuckGo and realized that there was no search URL (or if there is one, I couldn't find it).
I'm not familiar with Doxygen configuration, but, sure enough, Doxygen has some documentation about it:
* [Searching](https://www.doxygen.nl/manual/searching.html)
* [External Indexing and Searching](https://www.doxygen.nl/manual/extsearch.html)
Thanks in advance.
↧
GrabCut: What is PR
Hello. In the GrabCut sample program:
"\tCTRL+right mouse button - set GC_PR_BGD pixels\n"
"\tSHIFT+right mouse button - set GC_PR_FGD pixels\n" << endl;
What is PR?
↧
Wrong frame position when using the OpenCV 4.3.0
Hi, I encountered a problem when using OpenCV 4.3.0.
When I use
` video_capture.set(CAP_PROP_POS_FRAMES, frame_index);`,
if the frame is a B-frame, there is a problem.
It seems that we cannot use a frame_index that lands on a B-frame, but there is no function to detect whether a frame is a B-frame.
Does anybody know a solution to this problem, either seeking to the B-frame or avoiding seeking to B-frames?
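A hedged workaround sketch (the file name and target index are assumptions): decode sequentially up to the target instead of seeking with CAP_PROP_POS_FRAMES, so the decoder resolves B-frames itself.

import cv2

cap = cv2.VideoCapture("video.mp4")
target = 100
frame = None
for _ in range(target + 1):
    ok, frame = cap.read()
    if not ok:
        break
# 'frame' now holds the frame at 'target' (or the last one that decoded)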
↧
cv2.stereoCalibrate: objectPoints-error (-210:Unsupported format or combination of formats)
Hey guys!
I'm trying to get a disparity image from a stereo camera, but I'm getting this error message in cv2.stereoCalibrate:
error: OpenCV(3.4.2) /opt/concourse/worker/volumes/live/9523d527-1b9e-48e0-7ed0-a36adde286f0/volume/opencv-suite_1535558719691/work/modules/calib3d/src/calibration.cpp:3139: error: (-210:Unsupported format or combination of formats) objectPoints should contain vector of vectors of points of type Point3f in function 'collectCalibrationData'
[EDIT] Regarding this error I've seen this answer: https://stackoverflow.com/questions/49100256/python-cv2-calibratecamera-throws-error-objectpoints-should-contain-vector-of-v
but it didn't solve my problem, because object_points and image_points appear to be in the correct format.
Image_points:
[[2433.414 819.80554]
[2436.264 1117.3773 ]
...
[ 956.29877 2928.8433 ]
[ 955.77747 3234.6638 ]]
Object_points:
[[ 0. 0. 0. ]
[ 3.6 0. 0. ]
...
[25.199999 18. 0. ]
[28.8 18. 0. ]]
This is the code I've got so far; I must admit it's mostly from this example: https://stackoverflow.com/questions/23030775/bad-disparity-map-using-stereobm-in-opencv
I changed a few things to make it compatible with OpenCV 3.4.2.
import cv2
import numpy as np
import matplotlib.pyplot as plt

calib_l = cv2.imread("Bilder/Calib1.jpg", cv2.IMREAD_GRAYSCALE)
calib_r = cv2.imread("Bilder/Calib2.jpg", cv2.IMREAD_GRAYSCALE)
imgL = cv2.imread("Bilder/Stereo1.jpg", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("Bilder/Stereo2.jpg", cv2.IMREAD_GRAYSCALE)

image_size = calib_l.shape[:2]
pattern_size = 9, 6
object_points = np.zeros((np.prod(pattern_size), 3), np.float32)
object_points[:, :2] = np.indices(pattern_size).T.reshape(-1, 2)
object_points *= 3.6
image_points = {}

# chessboard
ret, corners_l = cv2.findChessboardCorners(calib_l, pattern_size, True)
cv2.cornerSubPix(calib_l, corners_l, (11, 11), (-1, -1),
                 (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 30, 0.01))
corners_l = np.float32(corners_l)
image_points["left"] = corners_l.reshape(-1, 2)

ret, corners_r = cv2.findChessboardCorners(calib_r, pattern_size, True)
cv2.cornerSubPix(calib_r, corners_r, (11, 11), (-1, -1),
                 (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 30, 0.01))
corners_r = np.float32(corners_r)
image_points["right"] = corners_r.reshape(-1, 2)

# calibrate
(rect_trans, proj_mats, valid_boxes,
 undistortion_maps, rectification_maps) = {}, {}, {}, {}, {}
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 100, 1e-5)
flags = (cv2.CALIB_FIX_ASPECT_RATIO + cv2.CALIB_ZERO_TANGENT_DIST +
         cv2.CALIB_SAME_FOCAL_LENGTH)
cam_mats = {"left": None, "right": None}
dist_coefs = {"left": None, "right": None}
rot_mat = None
trans_vec = None
e_mat = None
f_mat = None

(ret, cam_mats["left"], dist_coefs["left"], cam_mats["right"],
 dist_coefs["right"], rot_mat, trans_vec, e_mat,
 f_mat) = cv2.stereoCalibrate(object_points,
                              image_points["left"], image_points["right"],
                              image_size, cam_mats["left"], dist_coefs["left"],
                              cam_mats["right"], dist_coefs["right"],
                              rot_mat, trans_vec, e_mat, f_mat,
                              criteria=criteria, flags=flags)

(rect_trans["left"], rect_trans["right"],
 proj_mats["left"], proj_mats["right"],
 disp_to_depth_mat, valid_boxes["left"],
 valid_boxes["right"]) = cv2.stereoRectify(cam_mats["left"], dist_coefs["left"],
                                           cam_mats["right"], dist_coefs["right"],
                                           image_size, rot_mat, trans_vec, flags=0)

for side in ("left", "right"):
    (undistortion_maps[side],
     rectification_maps[side]) = cv2.initUndistortRectifyMap(cam_mats[side],
                                                             dist_coefs[side],
                                                             rect_trans[side],
                                                             proj_mats[side],
                                                             image_size,
                                                             cv2.CV_32FC1)

# disparity map
rectified_l = cv2.remap(imgL, undistortion_maps["left"],
                        rectification_maps["left"], cv2.INTER_NEAREST)
rectified_r = cv2.remap(imgR, undistortion_maps["right"],
                        rectification_maps["right"], cv2.INTER_NEAREST)
cv2.imshow("left", rectified_l)
cv2.imshow("right", rectified_r)

stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET, 0, 5)
disparity = stereo.compute(rectified_l, rectified_r, disptype=cv2.CV_32F)

plt.subplot(121).imshow(imgL)
plt.subplot(122).imshow(disparity)
plt.show()
What's the problem here?
Thanks a lot in advance!!
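For what it's worth, a hedged reading of the error (not a verified fix): "vector of vectors" suggests cv2.stereoCalibrate wants a *list* of per-view point arrays, one entry per calibration image, even when there is only one pair.

import numpy as np

object_points_list = [object_points.astype(np.float32)]          # one entry per view
image_points_left = [image_points["left"].astype(np.float32)]
image_points_right = [image_points["right"].astype(np.float32)]
# these lists would then replace the bare arrays in the cv2.stereoCalibrate call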
↧
local pose of an object from a camera pose
Hi,
I need to get the local 3D pose of an object from a camera pose to check the consistency of a vision system.
How would I go about doing this, assuming I have the 3D camera pose of the object? I'm assuming I'll need the intrinsic information of the camera.
Thanks in advance
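A hedged sketch of the usual transform algebra, assuming both poses are 4x4 homogeneous transforms in the same world frame; the intrinsics only enter when projecting the result into pixels.

import numpy as np

T_world_cam = np.eye(4)   # placeholder camera pose in the world frame
T_world_obj = np.eye(4)   # placeholder object pose in the world frame
# object's pose expressed in the camera's local frame
T_cam_obj = np.linalg.inv(T_world_cam) @ T_world_obj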
↧
Detect and count billiard balls in pocket with Raspi and webcam
Hello community,
I am trying to detect and count billiard ball values as the balls fall into one of the six pockets, with an overhead webcam connected to a Raspi 2 Model B.
I've tried and modified some Python/OpenCV scripts from the web and learned a bit about the basics of tracking colors and shapes, but I get lost when it comes to this special task (which might be caused by me not being a real programmer/coder at all; just a slightly enhanced script grandpa, I'd say).
Can someone point me to defining the six pockets from the cam stream as areas for color detection, ideally returning the color values (HSV) of the given ball, and thus its value/number for counting?
Thank you very much
beag
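A hedged starting point (pocket coordinates and radius are assumptions to be measured from the actual stream): treat each pocket as a fixed circular ROI in the overhead view and read out the mean HSV colour inside it.

import cv2
import numpy as np

POCKETS = [(50, 50), (320, 40), (590, 50), (50, 430), (320, 440), (590, 430)]
RADIUS = 25

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    for i, (x, y) in enumerate(POCKETS):
        mask = np.zeros(hsv.shape[:2], np.uint8)
        cv2.circle(mask, (x, y), RADIUS, 255, -1)
        mean_hsv = cv2.mean(hsv, mask=mask)[:3]   # average colour in the pocket
        print(f"pocket {i}: mean HSV = {mean_hsv}")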
↧
Background subtraction MOG2 dramatically decreases frame rate
Hi There,
I have the following code I am using with Unity.
When using background subtraction, the frame rate drops dramatically, from 60 FPS to 30 FPS!
Please let me know if you see any other possible source of error. I am really new to OpenCV and I will be happy to receive your feedback.
I am using MOG2 like this:
Ptr<BackgroundSubtractor> pBackSub;
pBackSub = createBackgroundSubtractorMOG2();

extern "C" int __declspec(dllexport) __stdcall BackGroundSubs(Color32 **rawImage, int width, int height, double Learn)
{
    Mat Out(height, width, CV_8UC4, *rawImage);
    Mat frame(height, width, CV_8UC3);
    _capture >> frame;
    if (frame.empty())
        return 0;
    // Resize Mat to match the array passed to it from C#
    Mat resizedMat(height, width, frame.type());
    resize(frame, resizedMat, resizedMat.size(), 0, 0, cv::INTER_CUBIC);
    // Need to convert to RGBA
    Mat argbImg(height, width, CV_8UC4);
    cvtColor(frame, argbImg, COLOR_BGR2RGBA);
    // flip image on X axis and Y axis
    Mat argbImgflippedX(height, width, CV_8UC4);
    flip(argbImg, argbImgflippedX, 0);
    Mat argbImgflippedY(height, width, CV_8UC4);
    flip(argbImgflippedX, argbImgflippedY, 1);
    // Copy the final image into the pointer received.
    /// --> argbImgflippedY.copyTo(Out);
    // THIS PART SLOWS DOWN THE FRAME RATE
    Mat fgMask(height, width, CV_8UC1);
    pBackSub->apply(argbImgflippedY, fgMask, Learn);
    //argbImgflippedY.copyTo(Out);
    Mat AfterOpening(height, width, CV_8UC1);
    Mat element = getStructuringElement(MORPH_RECT, Size(10, 10), Point(1, 1));
    morphologyEx(fgMask, AfterOpening, MORPH_OPEN, element);
    Mat andFilter(height, width, CV_8UC1);
    Mat GDi(height, width, CV_8UC1);
    if (height == 720) {
        andFilter = Mat::zeros(height, width, CV_8UC1);
        bitwise_and(AfterOpening, GDrawMask720, andFilter);
        GDrawMask720.copyTo(GDi);
    }
    else {
        andFilter = Mat::zeros(height, width, CV_8UC1);
        bitwise_and(AfterOpening, GDrawMask480, andFilter);
        GDrawMask480.copyTo(GDi);
    }
    argbImgflippedY.copyTo(Out, andFilter);
    return height;
}
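A hedged sketch of one common mitigation, in Python syntax although the question's code is C++ (the 0.5 scale factor is an assumption): run MOG2 on a downscaled copy of the frame and upscale the mask, since the cost of apply() grows with the pixel count.

import cv2

backsub = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    small = cv2.resize(frame, None, fx=0.5, fy=0.5)      # quarter of the pixels
    mask_small = backsub.apply(small)
    mask = cv2.resize(mask_small, (frame.shape[1], frame.shape[0]),
                      interpolation=cv2.INTER_NEAREST)   # back to full size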
↧
findHomography error with opencv.js "Uncaught BindingError"
I'm attempting to reproduce Satya Mallick's great work from [here](https://www.learnopencv.com/image-alignment-feature-based-using-opencv-c-python/) but in JavaScript.
here is a screen shot of the debug match: https://imgur.com/8i4Gxlh
however on the findHomography() call I am getting an error:
Uncaught BindingError: Cannot pass "[object Object],[object Object],…" as a Mat
    at throwBindingError (opencv_4_3_0.js:30)
    at RegisteredPointer.nonConstNoSmartPtrRawPointerToWireType (opencv_4_3_0.js:30)
    at Object.findHomography (opencv_4_3_0.js:30)
    at Align_img2 (opencv:1925)
    at HTMLAnchorElement.onclick (opencv#:190)
Here is the code:
function Align_img2() {
    // image_A is the original image we are trying to align to
    // image_B is the image we are trying to line up correctly
    let image_A = cv.imread(imgElement_Baseline);
    let image_B = cv.imread('imageChangeup');
    let image_A_gray = new cv.Mat();
    let image_B_gray = new cv.Mat();

    // get size of baseline image (image A)
    var image_A_width = image_A.cols;
    var image_A_height = image_A.rows;

    // resize image B to the baseline (image A) image
    let image_A_dimensions = new cv.Size(image_A_width, image_A_height);
    cv.resize(image_B, image_B, image_A_dimensions, cv.INTER_AREA);

    // Convert both images to grayscale
    cv.cvtColor(image_A, image_A_gray, cv.COLOR_BGRA2GRAY);
    cv.cvtColor(image_B, image_B_gray, cv.COLOR_BGRA2GRAY);

    // Initiate detector
    var orb = new cv.ORB(1000);
    var kpv_image_A = new cv.KeyPointVector();
    var kpv_image_B = new cv.KeyPointVector();
    var descriptors_image_A = new cv.Mat();
    var descriptors_image_B = new cv.Mat();
    var image_A_keypoints = new cv.Mat();
    var image_B_keypoints = new cv.Mat();
    mask = new cv.Mat();
    orb.detectAndCompute(image_A_gray, new cv.Mat(), kpv_image_A, descriptors_image_A);
    orb.detectAndCompute(image_B_gray, new cv.Mat(), kpv_image_B, descriptors_image_B);

    // Debug to verify key points found
    let color = new cv.Scalar(0, 255, 0, 255);

    // find matches
    let bf = new cv.BFMatcher(cv.NORM_HAMMING, true);
    // Match descriptors
    let matches = new cv.DMatchVector();
    bf.match(descriptors_image_A, descriptors_image_B, matches);
    var good_matches = new cv.DMatchVector();
    for (let i = 0; i < matches.size(); i++) {
        if (matches.get(i).distance < 30) {
            good_matches.push_back(matches.get(i));
        }
    }

    // Debug to verify matches found
    var matches_img = new cv.Mat();
    cv.drawMatches(image_A_gray, kpv_image_A, image_B_gray, kpv_image_B, good_matches, matches_img, color);
    cv.imshow('imageChangeup', matches_img);

    var points_A = [];
    var points_B = [];
    for (let i = 0; i < good_matches.size(); i++) {
        points_A.push(kpv_image_A.get(good_matches.get(i).queryIdx).pt);
        points_B.push(kpv_image_B.get(good_matches.get(i).trainIdx).pt);
    }

    // Calculate Homography
    var h = new cv.Mat();
    h = cv.findHomography(points_A, points_B, cv.FM_RANSAC); //also see interesting example: https://gist.github.com/woojoo666/de66e3d56b9e5b30258448c2e0e00be7
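    // Note (suggested fix, not part of the original post): the BindingError
    // above is triggered here because points_A and points_B are plain JS
    // arrays of {x, y} objects, which opencv.js cannot convert to a cv::Mat.
    // Packing them into Mats first should satisfy the binding:
    //   let mat_A = cv.matFromArray(points_A.length, 1, cv.CV_32FC2,
    //                               points_A.flatMap(p => [p.x, p.y]));
    //   let mat_B = cv.matFromArray(points_B.length, 1, cv.CV_32FC2,
    //                               points_B.flatMap(p => [p.x, p.y]));
    //   h = cv.findHomography(mat_A, mat_B, cv.FM_RANSAC);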
    // Warp image to baseline_image based on homography
    var dst = new cv.Mat();
    cv.warpPerspective(image_B_gray, dst, h, new cv.Size(image_A_gray.cols, image_A_gray.rows));
    cv.imshow('imageChangeup', dst);

    matches_img.delete();
    matches.delete();
    bf.delete();
    orb.delete();
    kpv_image_A.delete();
    kpv_image_B.delete();
    descriptors_image_A.delete();
    descriptors_image_B.delete();
    image_A_keypoints.delete();
    image_B_keypoints.delete();
    image_A_gray.delete();
    image_B_gray.delete();
    h.delete();
    dst.delete();
};
As a reference, here is what points_A looks like:
points_A
(33) [{…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}, {…}]
0: {x: 180, y: 1212.4801025390625}
1: {x: 529.9200439453125, y: 56.160003662109375}
2: {x: 180, y: 1225.4400634765625}
3: {x: 54.720001220703125, y: 482.4000244140625}
4: {x: 378.4320373535156, y: 74.30400848388672}
5: {x: 55.29600524902344, y: 482.112060546875}
6: {x: 89.85601043701172, y: 468.2880554199219}
7: {x: 96.76800537109375, y: 480.384033203125}
8: {x: 525.3120727539062, y: 72.57600402832031}
9: {x: 179.71202087402344, y: 1223.424072265625}
↧