I have two cameras with different lenses and resolutions.
I have images, with chessboards visible in both.
I am trying to calculate very accurate extrinsics between the two cameras, and I need the method to be repeatable: when I change the lens on one camera, the extrinsic values should stay the same across multiple lens calibrations.
I have tried lens calibration and ChArUco boards, but although this gives me acceptable translation values, the rotation values vary by up to a few degrees on each run, depending on the calibration.
I have tried `cv::findEssentialMat` on the chessboard corners, then `cv::recoverPose`, but the results are even worse (I have read that this method does not cope well with planar points).
So, are there any other methods that I can use to find an accurate rotation extrinsic value between two cameras?
Can I use `FindHomography` somehow to get the relative pose between images?
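For what it's worth, here is a rough sketch of the homography route (Python for brevity; pts1/pts2, K1 and K2 are hypothetical names for the matched chessboard corners and the two calibrated camera matrices). Note that the decomposition returns up to four candidate solutions and a translation that is only known up to the plane distance, so the physically valid solution still has to be selected:

import cv2
import numpy as np

# Homography mapping board corners seen by camera 1 onto camera 2 (pixel coordinates).
H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)

# Remove both sets of intrinsics so the homography relates normalised image coordinates.
H_euclid = np.linalg.inv(K2) @ H @ K1

# Decompose into candidate (R, t/d, n) triples; up to four are returned.
num, rotations, translations, normals = cv2.decomposeHomographyMat(H_euclid, np.eye(3))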
Thanks!
↧
Calculate extrinsics between two cameras using FindHomography?
↧
KCF Tracker not updating bounding box in Java
I'm trying to implement the KCF tracking function on Android, but when I call update() it returns pretty much the same rectangle that I initialised it with. I'm feeding it a stream of images from my camera and moving the camera around.
Here are some example rectangles that it logs. Not exactly the same, but pretty close:
RECT: rect{445.0, 153.0, 302.0x312.0}
RECT: rect{445.0, 153.0, 302.0x312.0}
RECT: rect{445.0, 151.0, 302.0x312.0}
RECT: rect{441.0, 151.0, 302.0x312.0}
RECT: rect{443.0, 151.0, 302.0x312.0}
RECT: rect{441.0, 149.0, 302.0x312.0}
RECT: rect{437.0, 151.0, 302.0x312.0}
RECT: rect{433.0, 153.0, 302.0x312.0}
If I use another tracker, the MOSSE for example, the box location seems to update.
Code below.
public TargetTracker() {
    tracker = TrackerKCF.create();
    //tracker = TrackerMOSSE.create();
    this.rect = new Rect2d();
}

public void set(Mat Image, float[] displaySpace) {
    Rect2d rect;
    if (firstTime) {
        Mat mTrackImage = new Mat();
        Imgproc.cvtColor(Image, mTrackImage, Imgproc.COLOR_RGBA2RGB);
        // are all points in bounds?
        if (displaySpace[0] <= 1280 && displaySpace[2] <= 1280 && displaySpace[4] <= 1280 && displaySpace[6] <= 1280
                && displaySpace[1] <= 720 && displaySpace[3] <= 720 && displaySpace[5] <= 720 && displaySpace[7] <= 720) {
        } else {
            Log.i("Target Tracker", "Points out of bounds can't lock");
            return;
        }
        // Rect2d(double x, double y, double width, double height)
        rect = new Rect2d(displaySpace[0], displaySpace[1], Math.abs(displaySpace[0] - displaySpace[2]), Math.abs(displaySpace[1] - displaySpace[5]));
        /*System.out.println("Disp 0,1 " + displaySpace[0] + "," + displaySpace[1]);
        System.out.println("Disp 2,3 " + displaySpace[2] + "," + displaySpace[3]);
        System.out.println("Disp 4,5 " + displaySpace[4] + "," + displaySpace[5]);
        System.out.println("Disp 6,7 " + displaySpace[6] + "," + displaySpace[7]);
        Log.i("RECT", "rectangle" + rect);
        Log.i("IMG", "ah image?" + Image);*/
        tracker.init(mTrackImage, rect);
    }
    firstTime = false;
    return;
}

public boolean update(Mat Image) {
    Mat mTrackImage = new Mat();
    Imgproc.cvtColor(Image, mTrackImage, Imgproc.COLOR_RGBA2RGB);
    /*if (this.firstTime == true) {
        return false;
    }*/
    Boolean res = tracker.update(mTrackImage, rect);
    Log.i("RECT", "rect" + rect);
    return res;
}
↧
↧
I am getting a bug while performing OpenCV-based DNN face detection
This is the error I am getting. I tried converting the image to grayscale, but it didn't work.
Please help, it's very important to me.
I am using this prototxt https://github.com/sghoshcvc/TextBox-Models/blob/master/textbox_deploy.prototxt
error: OpenCV(3.4.2) C:\Miniconda3\conda-bld\opencv-suite_1534379934306\work\modules\dnn\src\layers\convolution_layer.cpp:236: error: (-215:Assertion failed) ngroups > 0 && inpCn % ngroups == 0 && outCn % ngroups == 0 in function 'cv::dnn::ConvolutionLayerImpl::getMemoryShapes'
My code:
import numpy as np
import argparse
import cv2

print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe('pk.prototxt.txt', 'sd.caffemodel')
image = cv2.imread('f7.jpg', cv2.IMREAD_GRAYSCALE)
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
    (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()  # <-- here I am getting the error
for i in range(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the prediction
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")
        text = "{:.2f}%".format(confidence * 100)
        y = startY - 10 if startY - 10 > 10 else startY + 10
        cv2.rectangle(image, (startX, startY), (endX, endY),
            (0, 0, 255), 2)
        cv2.putText(image, text, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2)
cv2.imshow("Output", image)
cv2.waitKey(0)
error - error: OpenCV(3.4.2) C:\Miniconda3\conda-bld\opencv-suite_1534379934306\work\modules\dnn\src\layers\convolution_layer.cpp:236: error: (-215:Assertion failed) ngroups > 0 && inpCn % ngroups == 0 && outCn % ngroups == 0 in function 'cv::dnn::ConvolutionLayerImpl::getMemoryShapes'
↧
Build OpenEXR support for ios and android
Hello everyone,
I am trying to build OpenCV for iOS from the OpenCV 3.4.3 source code. I noticed that in the CMakeLists.txt under OpenCV, the OCV_OPTION(WITH_OPENEXR) is defined as disabled for iOS and WinRT. I removed the iOS condition and it builds the lib file, but I still cannot load EXR images in my mobile test app.
Has anyone tried to enable OpenEXR support on mobile through OpenCV? What is the correct procedure? Is OpenCV 3.4.3 not the ideal version?
Thank you.
↧
Problem copying part of one image to another
How do I resolve this problem?
>>> img=cv2.imread("d:\\OCR Readers\\CPU images\\50190025579077\\AO_Form_1.jpeg")
>>> img.shape
(1000, 800, 3)
>>> part=img[400:440, 1080:1515]
>>> img2=cv2.imread("d:\\OCR Readers\\white1.png")
>>> img2.shape
(2338, 1651, 3)
>>> img2[15:55, 130:565]=part
ValueError: could not broadcast input array from shape (40,0,3) into shape (40,435,3)
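For context, here is a small sketch (hypothetical file names) of a shape-checked copy. It also shows why the copy above fails: img.shape is (1000, 800, 3), so the column slice 1080:1515 lies outside the image and part comes back empty, with shape (40, 0, 3):

import cv2

img = cv2.imread("AO_Form_1.jpeg")    # source image
img2 = cv2.imread("white1.png")       # destination image

# Slicing is rows first, then columns, and both ranges must stay inside img.shape.
part = img[400:440, 1080:1515]
print(img.shape, part.shape)          # (1000, 800, 3) -> (40, 0, 3): empty slice

# An assignment only broadcasts when source and destination shapes match exactly.
h, w = part.shape[:2]
if h > 0 and w > 0:
    img2[15:15 + h, 130:130 + w] = part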
↧
↧
Remove unwanted parts of an image of analog meter dials
Hi, I am a beginner trying to solve an OCR problem. I am trying to detect the reading of an analogue meter. I am currently using the Amazon Rekognition service to extract readings from a meter in a react-native app. The process did not work very well, so as part of trying to fix this I implemented cropping in the app so that we only send the relevant part of the image to the service. Then I ran into another problem: the analogue separators on the meter are interspersed with the digits in such a way that they are read as ones.

(cropped image from the mobile app)

What I have tried:
I created a simple server application to try to remove these lines before we send the image to Rekognition.
- Converted the image to greyscale.
- Applied a Gaussian blur to remove some of the noise.
- Applied the [Canny algorithm](https://en.wikipedia.org/wiki/Canny_edge_detector) to detect the edges.
I am using opencv for Node, with these parameters:
GaussianBlur: new cv.Size(13, 13), 0, 0, cv.BORDER_DEFAULT
Canny: 50, 100

The result looks like this.
The output still has edges that look like ones, as I expected. I have been trying to figure out how to remove the interspersed edges while leaving the clearly defined edges. Is this a good approach? Can anyone help me remove those lines somehow? Thanks.
My understanding of image processing concepts is very vague and I am unsure if this is a good way to fix the problem. I also don't know much about what I am doing :).
Can anyone help or suggest a better approach to removing the lines? Thanks in advance.
↧
Record n minutes from a webcam
Hello,
My goal is to create a tool with OpenCV to record the stream from webcams. I want to record the last n minutes before an event.
First I tried to find a way to do this with ffmpeg and GStreamer, but I did not manage it. Then I tried to do it with OpenCV.
I have two candidate solutions. I can simply record, for example, 2500 images on my computer with imwrite and then overwrite the oldest images with new frames. Or I can create "tiny videos" of 5 to 10 seconds each to record a portion of time and then, as in the image solution, erase the oldest videos and replace them with new ones.
The idea is to be able to get the last n minutes with either solution. For the first solution, in the worst case, if the program crashes the last frame is not stored and I can still see the last (n minutes - 1 frame). With the second solution the last "tiny video" is broken and I can only see the last (n minutes - 1 tiny video), which is not great, even though it is, for me, the most optimised solution...
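To make the first solution concrete, here is a rough sketch of what I have in mind (Python for brevity; the FPS, duration and event_happened() trigger are placeholders): keep the last n minutes of frames in a ring buffer in memory and dump them to a video file when the event fires.

import collections
import cv2

FPS = 25                 # assumed capture rate
MINUTES = 2              # the "n minutes" to keep
buffer = collections.deque(maxlen=FPS * 60 * MINUTES)   # ring buffer, oldest frame dropped automatically

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    if event_happened():                                 # hypothetical event trigger
        h, w = frame.shape[:2]
        out = cv2.VideoWriter("event.avi", cv2.VideoWriter_fourcc(*"MJPG"), FPS, (w, h))
        for f in buffer:
            out.write(f)
        out.release()
        break
cap.release()

Keeping raw frames in memory can use a lot of RAM for long durations, which is exactly the trade-off against the "tiny videos" solution.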
Can someone give me advice or solutions for my problem?
Best regards.
↧
HoughCircles detecting arches.
Greetings!
With the C++ code below I'm detecting arches within an image containing 2 arches.
I'm using opencv-4.1.0 on a windows-10 machine, if this matters.
Using the parameters
int dp = 3; // The inverse ratio of resolution;
int min_dist = 1000; // Minimum distance between detected centers
int upper_canny = 65; // Upper threshold for the internal Canny edge detector
int th_center = 85; // Threshold for center detection
int min_radius = 1000;
int max_radius = 0;
I find 5 circles and they outline the arch more or less accurately.
Playing with dp and th_center I get different results.
What would be an appropriate approach to get unambiguous results?
Thanks for hints
Wolf
#include <cstdio>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <cstring>
#include <vector>

int main(int argc, char** argv) {
    char ImageName[256] = {0};
    int big_h = 1024 * 8;
    int big_w = 1280, h = 550;
    int cx = (1280 / 2) - (big_w / 2);
    int cy = big_h - h;
    cv::Rect roi(cx, 0, big_w, h);
    cv::Mat input, filter, gray, canny;
    cv::Mat big_gray(big_h, big_w, CV_8U);
    strcpy(ImageName, "D:/CircleTestImage03.jpg");
    // Read image
    input = cv::imread(ImageName, cv::IMREAD_COLOR);
    if (!input.data) {
        printf("Image '%s' not found\n", ImageName);
        return(-1);
    }
    // Convert to gray
    cv::cvtColor(input(roi), gray, cv::COLOR_BGR2GRAY);
    cv::imshow("gray", gray);
    // Reduce the noise, avoiding false circle detection
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2, 2);
    cv::imshow("Blur", gray);
    // Place gray at the bottom of big_gray, allowing centre finding of big circles.
    // Otherwise no arch will be found.
    gray.copyTo(big_gray(cv::Rect(0, cy, big_w, gray.rows)));
    int dp = 3;             // The inverse ratio of resolution
    int min_dist = 1000;    // Minimum distance between detected centers
    int upper_canny = 65;   // Upper threshold for the internal Canny edge detector
    int th_center = 85;     // Threshold for center detection
    int min_radius = 1000;
    int max_radius = 0;
    // Compute canny, info only
    cv::Canny(gray, canny, upper_canny, upper_canny / 2);
    cv::imshow("canny", canny);
    std::vector<cv::Vec3f> circles; // x, y, r
    HoughCircles(big_gray, circles, cv::HOUGH_GRADIENT,
        dp, min_dist, upper_canny, th_center, min_radius, max_radius);
    // Draw circles
    for (int i = 0; i < (int)circles.size(); i++) {
        // coordinates of outline within input frame
        circles[i][0] += cx;
        circles[i][1] -= cy;
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        printf("Circle #%d dp=%d x=%d y=%d r=%d\n", i, dp, center.x, center.y, radius);
        cv::circle(input, center, radius, CV_RGB(255, 0, 0), 2, 8, 0);
    }
    cv::imshow("input", input); cv::waitKey(1);
    printf("Push key to finish.\n");
    cv::waitKey(-1);
    return 0;
}
[C:\fakepath\CircleTestImage03.jpg](/upfiles/15657226319241119.jpg)
↧
Same Code works with LinearSVM but not RBF
When I use the **trainAuto** method of SVM, I get the value 2 (RBF) from `getKernelType()`, and when I use `RBF` explicitly in my code, it trains on my data and outputs the XML file.
svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_RBF)
svm.setGamma(0.0025)
svm.setC(0.5)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
svm.save('svm_data.xml')
The above code works for me. But when I move to the prediction part with the code below:
hog = cv2.HOGDescriptor((100,200), (16,16), (8,8), (8,8), 9)
svm = cv2.ml.SVM_load('svm_data.xml')
sv = svm.getSupportVectors()
rho, alpha, svidx = svm.getDecisionFunction(0)
svm_new = np.append(sv, -rho)
hog.setSVMDetector(svm_new)
it shows me the below error:
error: (-215:Assertion failed) checkDetectorSize() in function 'cv::HOGDescriptor::setSVMDetector'
But **when I replace RBF with LINEAR** it works for me in the prediction part.
When I check
print (hog.checkDetectorSize())
print (hog.getDescriptorSize())
It returns `True` for DetectorSize and `26676` for DescriptorSize
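For reference, here is a small sketch (reusing the prediction code above) of the size relationship that, as far as I understand, checkDetectorSize() enforces: setSVMDetector expects a single weight vector of length getDescriptorSize(), optionally plus one bias value. A LINEAR SVM can be compressed into exactly one such vector, while an RBF SVM keeps many support vectors, so the flattened array becomes much longer and the assertion fails:

import numpy as np
import cv2

hog = cv2.HOGDescriptor((100, 200), (16, 16), (8, 8), (8, 8), 9)
svm = cv2.ml.SVM_load('svm_data.xml')

sv = svm.getSupportVectors()                  # shape: (number_of_support_vectors, feature_length)
rho, alpha, svidx = svm.getDecisionFunction(0)

detector = np.append(sv.ravel(), -rho)        # length: n_sv * feature_length + 1
print(sv.shape, detector.size, hog.getDescriptorSize())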
↧
↧
Linker error only with DNN module
I work on a C++ project with OpenCV 4.1.0_2 on the latest macOS Mojave with Xcode 11 beta. I have used a lot of elements from the library without any problem, but when I try to use the DNN module I get a linker error. The linker flags should not be the problem since I added them all, and because every other module I use links fine, I guess the header and library directories should be fine as well. What could be the problem? The error messages:
Ld /Users/hordon/Library/Developer/Xcode/DerivedData/project-fmllzddvpyajkjbwntzsyqvmunjx/Build/Products/Debug/OpenCV normal x86_64 (in target: OpenCV)
cd /Users/hordon/Desktop/GreenFox/OpenCV
/Applications/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -target x86_64-apple-macos10.14 -isysroot /Applications/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk -L/Users/hordon/Library/Developer/Xcode/DerivedData/project-fmllzddvpyajkjbwntzsyqvmunjx/Build/Products/Debug -L/usr/local/Cellar/opencv/4.1.0_2/lib -L/usr/local/Cellar/sqlite/3.28.0/lib -L/usr/local/Cellar/tesseract/4.0.0_1/lib -L/usr/local/Cellar/leptonica/1.78.0/lib -F/Users/hordon/Library/Developer/Xcode/DerivedData/project-fmllzddvpyajkjbwntzsyqvmunjx/Build/Products/Debug -filelist /Users/hordon/Library/Developer/Xcode/DerivedData/project-fmllzddvpyajkjbwntzsyqvmunjx/Build/Intermediates.noindex/project.build/Debug/OpenCV.build/Objects-normal/x86_64/OpenCV.LinkFileList -Xlinker -object_path_lto -Xlinker /Users/hordon/Library/Developer/Xcode/DerivedData/project-fmllzddvpyajkjbwntzsyqvmunjx/Build/Intermediates.noindex/project.build/Debug/OpenCV.build/Objects-normal/x86_64/OpenCV_lto.o -Xlinker -export_dynamic -Xlinker -no_deduplicate -stdlib=libc++ -I/usr/local/Cellar/opencv/4.1.0_2/include/opencv4/opencv -I/usr/local/Cellar/opencv/4.1.0_2/include/opencv4 -L/usr/local/Cellar/opencv/4.1.0_2/lib -lopencv_gapi -lopencv_stitching -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dnn_objdetect -lopencv_dpm -lopencv_face -lopencv_freetype -lopencv_fuzzy -lopencv_hfs -lopencv_img_hash -lopencv_line_descriptor -lopencv_quality -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_sfm -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_superres -lopencv_optflow -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_text -lopencv_dnn -lopencv_plot -lopencv_videostab -lopencv_video -lopencv_xfeatures2d -lopencv_shape -lopencv_ml -lopencv_ximgproc -lopencv_xobjdetect -lopencv_objdetect -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_flann -lopencv_xphoto -lopencv_photo -lopencv_imgproc -lopencv_core -ltesseract -lsqlite3 -Xlinker -dependency_info -Xlinker /Users/hordon/Library/Developer/Xcode/DerivedData/project-fmllzddvpyajkjbwntzsyqvmunjx/Build/Intermediates.noindex/project.build/Debug/OpenCV.build/Objects-normal/x86_64/OpenCV_dependency_info.dat -o /Users/hordon/Library/Developer/Xcode/DerivedData/project-fmllzddvpyajkjbwntzsyqvmunjx/Build/Products/Debug/OpenCV
and
Undefined symbols for architecture x86_64:
"cv::dnn::dnn4_v20180917::blobFromImage(cv::_InputArray const&, cv::_OutputArray const&, double, cv::Size_ const&, cv::Scalar_ const&, bool, bool, int)", referenced from:
detectText(cv::Mat, std::__1::basic_string, std::__1::allocator>) in detectText.o
"cv::dnn::dnn4_v20180917::Net::forward(cv::_OutputArray const&, std::__1::vector, std::__1::allocator>, std::__1::allocator, std::__1::allocator>>> const&)", referenced from:
detectText(cv::Mat, std::__1::basic_string, std::__1::allocator>) in detectText.o
"cv::dnn::dnn4_v20180917::Net::setInput(cv::_InputArray const&, std::__1::basic_string, std::__1::allocator> const&, double, cv::Scalar_ const&)", referenced from:
detectText(cv::Mat, std::__1::basic_string, std::__1::allocator>) in detectText.o
"cv::dnn::dnn4_v20180917::Net::~Net()", referenced from:
detectText(cv::Mat, std::__1::basic_string, std::__1::allocator>) in detectText.o
"cv::dnn::dnn4_v20180917::readNet(std::__1::basic_string, std::__1::allocator> const&, std::__1::basic_string, std::__1::allocator> const&, std::__1::basic_string, std::__1::allocator> const&)", referenced from:
detectText(cv::Mat, std::__1::basic_string, std::__1::allocator>) in detectText.o
"cv::dnn::dnn4_v20180917::NMSBoxes(std::__1::vector> const&, std::__1::vector> const&, float, float, std::__1::vector>&, float, int)", referenced from:
detectText(cv::Mat, std::__1::basic_string, std::__1::allocator>) in detectText.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
and the part of the cpp file which generates this error is:
std::vector<cv::Mat> detectText(cv::Mat image, std::string path)
{
    std::vector<cv::Mat> result;
    if (image.empty()) {
        return result;
    }
    cv::dnn::Net net = cv::dnn::readNet(path);
    float minConfidence = 0.5;
    float NMSThreshold = 0.4;
    int width = 320;
    int height = 320;
    cv::Mat resized;
    cv::resize(image, resized, cv::Size(width, height));
    std::vector<cv::Mat> layers;
    std::vector<std::string> layerNames = {"feature_fusion/Conv_7/Sigmoid", "feature_fusion/concat_3"};
    cv::Mat blob;
    cv::dnn::blobFromImage(resized, blob, 1.0, cv::Size(width, height), cv::Scalar(123.68, 116.78, 103.94), true, false);
    net.setInput(blob);
    net.forward(layers, layerNames);
    cv::Mat scores = layers.at(0);
    cv::Mat geometry = layers.at(1);
    std::vector<cv::RotatedRect> rects;
    std::vector<float> confidenceScores;
    decodeOutput(scores, geometry, minConfidence, rects, confidenceScores);
    std::vector<int> indicies;
    cv::dnn::NMSBoxes(rects, confidenceScores, minConfidence, NMSThreshold, indicies);
    float ratioWidth = image.cols / (float)width;
    float ratioHeight = image.rows / (float)height;
    for (int i = 0; i < indicies.size(); ++i) {
        cv::RotatedRect& rect = rects[indicies[i]];
        rect.size.width *= ratioWidth;
        rect.size.height *= ratioHeight;
        rect.center.x *= ratioWidth;
        rect.center.y *= ratioHeight;
        cv::Mat cropped = rotatedRectToMat(rect, image);
        result.push_back(cropped);
    }
    return result;
}
↧
Solvepnp for fisheye
Hi,
I was following a [page](http://answers.opencv.org/question/67356/is-there-a-solvepnp-function-for-the-fisheye-camera-model/) to use solvePnP with a fisheye model. However, the answer suggests using the same camera matrix as the last input of the undistortPoints function. That last input corresponds to the camera matrix in the new or rectified frame, so how can we consider it to be the same as the original camera matrix? Also, the rvec and tvec we get would then correspond to the undistorted corner points. Will they be the same as the rvec and tvec for the originally distorted image?
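For reference, a minimal sketch of the workflow I understand that answer to describe (Python; img_pts is an Nx1x2 float array of detected corners, obj_pts the matching 3D points, K and D the calibrated fisheye intrinsics; all names are hypothetical):

import cv2
import numpy as np

# Undistort the detected corners with the fisheye model, re-projecting them with P=K
# so they stay in the same pixel frame as the original camera matrix.
undistorted = cv2.fisheye.undistortPoints(img_pts, K, D, P=K)

# solvePnP on the undistorted points with zero distortion. As I understand it, the
# resulting rvec/tvec describe the same physical camera pose as for the distorted
# image, since undistortion only moves the measurements, not the camera.
ok, rvec, tvec = cv2.solvePnP(obj_pts, undistorted, K, None)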
Thanks!
↧
python ret value vastly different from reprojection error
In this question, I am referring to the documentation example given here:
https://docs.opencv.org/4.1.0/dc/dbb/tutorial_py_calibration.html
To give a short summary: it's an example of how to calibrate a camera using a chessboard pattern. In the example the author calibrates the camera like this:
ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
It is stated in the documentation that the ret value is supposed to be the overall RMS of the re-projection error (see: https://docs.opencv.org/4.1.0/d9/d0c/group__calib3d.html#ga687a1ab946686f0d85ae0363b5af1d7b).
However, at the end of the script, the author calculates the reprojection error like this:
mean_error = 0
for i in xrange(len(objpoints)):
    imgpoints2, _ = cv.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv.norm(imgpoints[i], imgpoints2, cv.NORM_L2)/len(imgpoints2)
    mean_error += error
print( "self-calculated error: {}".format(mean_error/len(objpoints)) )
print( "ret-value: {}".format(ret))
So this does - to my understanding - calculate the average normed re-projection error per point, per image. However, this is vastly different from the ret value that calibrateCamera gives back to the user. Running the code and comparing leads to these results:
self-calculated error: 0.02363595176460404
ret-value: 0.15511421684649151
These are an order of magnitude apart, and I think that should not be the case ...right (?!) And the more important question: it is often stated that the most important value for judging "a good calibration" is a re-projection error < 1 and close to zero. Which re-projection error should be used for that?
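For reference, my current understanding (which may well be wrong) is that calibrateCamera reports the square root of the mean squared error over every point of every view, which would be computed roughly like this (same objpoints, imgpoints, rvecs, tvecs, mtx and dist as in the tutorial) rather than with the per-image averaging above:

import numpy as np
import cv2 as cv

total_sq_error = 0.0
total_points = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    # accumulate the squared x/y residuals of every corner
    total_sq_error += np.sum((imgpoints[i] - imgpoints2) ** 2)
    total_points += len(imgpoints2)

rms = np.sqrt(total_sq_error / total_points)
print("RMS computed like calibrateCamera: {}".format(rms))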
I really hope someone can answer this question as it has been bugging me for a week now.
Cheers,
Dennis
↧
Porting to JavaScript: "Cannot register public name 'projectPoints' twice"
I did the following:
1. git clone https://github.com/opencv/opencv.git
2. git clone https://github.com/opencv/opencv_contrib.git
3. I added the following in def get_build_flags(self) of opencv/platforms/js/build_js.py:
flags += "-s USE_PTHREADS=0 "
4. I enabled the build flag in def get_cmake_cmd(self): of opencv/platforms/js/build_js.py:`-DBUILD_opencv_calib3d` set to `ON`
5. I added the following def get_cmake_cmd(self): of opencv/platforms/js/build_js.py:`-DOPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules`
6. I appended `js` inside ocv_define_module at the end of WRAP list of opencv/modules/calib3d/CMakeLists.txt. In opencv/modules/features2d/CMakeLists.txt and opencv_contrib/modules/aruco/CMakeLists.txt also I added "js" parameter in ocv_define_module.
7. I added `solvePnP` and `projectPoints` to the calib3d module in opencv/modules/js/src/embindgen.py:
calib3d = {'': ['findHomography','calibrateCameraExtended', 'drawFrameAxes',
'getDefaultNewCameraMatrix', 'initUndistortRectifyMap', 'solvePnP','projectPoints']}
8. I added the calib3d module to the makeWhiteList in the opencv/modules/js/src/embindgen.py
white_list = makeWhiteList([core, imgproc, objdetect, video, dnn, features2d, photo, aruco, calib3d])
9. I added "using namespace aruco;" in the opencv/modules/js/src/core_bindings.cpp
10. I built OpenCV.js using the following command:
sudo python ./platforms/js/build_js.py build_js --emscripten_dir=${EMSCRIPTEN} --clean_build_dir --build_test
Before adding these wrappers, it compiled perfectly without errors. Now in my tests.html I have the following message:
Downloading...
tests.html:61 Running...
tests.html:61 Exception thrown, see JavaScript console
opencv.js:24 Uncaught
BindingError
message: "Cannot register public name 'projectPoints' twice"
So it seems like the overloaded functions are preventing me from porting them to JavaScript.
Do you have any suggestions for how I can fix this?
Thanks in advance for your help.
↧
↧
Point correspondences for camera calibration using non-standard pattern
So here's my hypothesis:
I plan to calibrate a camera (as a first step) using a non-checkerboard pattern. What I hypothesise is using one marker point per image, for which I know exactly where it is located in 3D coordinates. So basically I have (x, y) in image coordinates and (X, Y, Z) in world coordinates. I then take a certain number (say 30-40) of images of the marker in different locations, thereby generating 30 image points and 30 world points.
Would the calibrateCamera method work in such a case? Any inputs?
Before anyone says "try it out": I do plan to try it out at the weekend when I get time off from my university schedule. This question is just to get a head start before then.
Cheers,
Sanjay
↧
Gpu memory leak when resizing asynchronously
I'm facing a problem with GPU resize using OpenCV. Here is my code:
#define MX 500
#define ASYNC 0

class job {
public:
    cv::cuda::GpuMat gpuImage;
    cv::cuda::Stream stream;
    cv::Mat cpuImage;
    ~job() {
        printf("job deleted\n");
    }
};

void onComplete(int status, void* uData) {
    job* _job = (job*) uData;
    delete _job;
}

void resize(job* _job, vector<uchar> buffer) {
    _job->cpuImage = cv::imdecode(buffer, cv::IMREAD_COLOR);
    if (ASYNC) {
        _job->gpuImage.upload(_job->cpuImage, _job->stream);
        cv::cuda::resize(_job->gpuImage, _job->gpuImage, cv::Size(100, 100), 0, 0, cv::INTER_NEAREST, _job->stream);
        _job->gpuImage.download(_job->cpuImage, _job->stream);
        _job->stream.enqueueHostCallback(onComplete, _job);
        // _job->stream.waitForCompletion();
    } else {
        _job->gpuImage.upload(_job->cpuImage);
        cv::cuda::resize(_job->gpuImage, _job->gpuImage, cv::Size(100, 100), 0, 0, cv::INTER_NEAREST);
        _job->gpuImage.download(_job->cpuImage);
        delete _job;
    }
}

vector<uchar> readFile(string filename) {
    std::ifstream input(filename, std::ios::binary);
    std::vector<uchar> buffer(std::istreambuf_iterator<char>(input), {});
    return buffer;
}

int main() {
    for (int i = 0; i < MX; i++) {
        vector<uchar> buf = readFile("input.jpg");
        job* _job = new job();
        resize(_job, buf);
        printFreeGPUMemory();
    }
    while (true) {
        // wait
    }
    return 0;
}
When I run the resize synchronously (ASYNC = 0), the code works perfectly fine. But when I run it asynchronously (ASYNC = 1), it seems that some GPU memory is lost somewhere, despite the fact that I have deleted all created GpuMats and Streams. The more loops I run, the less free memory I have. Is there a bug, or is part of my code wrong?
↧
cmake build error for 'trancascade' app
Hi,
I followed https://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html (opencv-4.1.1-dev) to successfully install OpenCV on Ubuntu 16.04.
When I was following https://docs.opencv.org/master/dc/d88/tutorial_traincascade.html to try to use traincascade, I couldn't find the 'traincascade' app in my opencv/build/bin, so I checked /opencv/apps/CMakeLists.txt and found that the line "ocv_add_app(traincascade)" was commented out, so I uncommented it.
Back in /opencv/build, I ran "cmake ." without problems, but when I next ran "make", plenty of errors popped out, mainly saying:
old_ml.hpp error CvFileStorage has not been declared
old_ml.hpp error CvFileNode has not been declared
old_ml.hpp error read_split has not been declared
Can someone tell me how to fix this? Thanks in advance!
↧
How to make a custom kernel for extraction?
Hello,
I use morphological opening to extract horizontal lines from my image:
<...uploading an image and binarizing it...>
kernel = np.ones((1, 7), np.uint8)
hor_opening = cv2.morphologyEx(binary_img, cv2.MORPH_OPEN, kernel)
It works perfectly, but what should I do to use a kernel that combines ones and zeros? I want to extract from my image all segments that satisfy the following mask: [0, 1, 1, 1, 1, 1, 1, 1, 0]
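For what it's worth, here is a small sketch of what I have in mind (my understanding, which may be wrong, is that any uint8 array can be passed as the structuring element and that its zero entries are simply excluded from the neighbourhood; binary_img is the binarised image from above):

import numpy as np
import cv2

# Custom 1x9 structuring element: zeros at the ends, ones in the middle.
kernel = np.array([[0, 1, 1, 1, 1, 1, 1, 1, 0]], dtype=np.uint8)
hor_opening = cv2.morphologyEx(binary_img, cv2.MORPH_OPEN, kernel)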
↧
↧
Cross-compile for ARM
Hi,
I'm trying to cross-build OpenCV for ARM using a Docker image, but I'm getting linking errors. Does anyone have an idea of how to resolve these linking errors?
../../lib/libopencv_core.so.4.1.1: undefined reference to `gzeof'
../../lib/libopencv_core.so.4.1.1: undefined reference to `gzrewind'
../../lib/libopencv_core.so.4.1.1: undefined reference to `gzopen'
../../lib/libopencv_core.so.4.1.1: undefined reference to `gzclose'
../../lib/libopencv_core.so.4.1.1: undefined reference to `gzgets'
../../lib/libopencv_core.so.4.1.1: undefined reference to `gzputs'
collect2: error: ld returned 1 exit status
apps/version/CMakeFiles/opencv_version.dir/build.make:84: recipe for target 'bin/opencv_version' failed
Thanks for helping
↧
Setting the frame rate using CAP_PROP_FPS fails (C++)
I am opening an .avi video file and trying to set the FPS to 40.0, but it is not changing.
video.set(CAP_PROP_FPS, 42.0)
↧
change all instances of a colour to a different one in C++
I would like to take all pure white pixels (255, 255, 255) from a CV_8UC3 image, and convert them to something else, say grey (128, 128, 128).
I could iterate through all the pixels and inspect each one to get the colour, e.g. `mat.at<cv::Vec3b>(point)`, but that's not efficient.
What I did was create a new image with the colour I wanted, create a mask of the areas I want from the original image, and copy the masked area on top of the new background colour. It looks a bit like this:
cv::Mat mask = cv::Mat::zeros(fg.size(), CV_8UC1);
mask = (mat == 0); // example: only mask the pure black pixels from the original image
cv::Mat newImage(mat.size(), CV_8UC3, cv::Scalar(128, 128, 128)); // create the new image
mat.copyTo(newImage, mask);
While this works, it is backwards and I suspect not as efficient as masking the white pixels and changing just those. I think the correct way would have been to mask the white area only, and then set that masked area to the new colour.
cv::Mat mask = cv::Mat::zeros(fg.size(), CV_8UC1);
mask = (mat == 255); // create a mask of only the white pixels
// ...but then what?
The problem is I'm not sure how to efficiently change those masked pixels to the new colour.
↧