I am trying to perform chessboard camera calibration based on the [camera calibration](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/camera_calibration/camera_calibration.cpp) sample.
I want my Qt app to save the calibration settings using the Settings class defined in the code above. There is a function for this, Settings::write, but when I call it as below:
    void saveCalibrationSettings(const Settings &currentSettings) {
        FileStorage fs(Ui::calibrationFilename, FileStorage::WRITE);
        currentSettings.write(fs);
        fs.release();
    }
it gives me this error:

    Gtk-Message: 02:56:21.529: Failed to load module "canberra-gtk-module"
    Input does not exist: images/CameraCalibration/VID5/VID5.xml
    terminate called after throwing an instance of 'cv::Exception'
      what(): OpenCV(4.0.1) /home/mike/opencv-4.0.1/modules/core/src/persistence.cpp:1995: error: (-2:Unspecified error) Incorrect element name {; should start with a letter or '_' in function 'operator<<'
I just need my Qt GUI application to save a Settings object to a file in the correct calibration settings format, which can then be fed to the calibration process.
↧
[OpenCV 4] Settings.write not working: Incorrect element name {; should start with a letter or '_' in function 'operator<<'
↧
function "drawKeypoints" does not work in OpenCV 4.0.1
I want to use the drawKeypoints function in OpenCV 4.0.1, but Python raises an error:

    AttributeError: module 'cv2.cv2' has no attribute 'drawKeypoints'
Maybe the latest version of OpenCV uses a different function; tell me if you know. In the worst case I could fall back to version 2.4, where this function is present, but I would like to use the latest version.
↧
better camera stabilization method?
hi there,
I made a toy project that uses a camera to watch my monitor. If anyone uses my computer and the monitor scene changes, the program sends a message.
First I considered using HOGDescriptor. I set the first image from the VideoCapture as a template; if the current frame's HOG feature differs much from the first image, the program treats it as "someone is using my computer". However, this is really sensitive to camera shake.
Now I have added a MOSSE tracker. I set the window center as the ROI and track it, then calculate the HOG difference. If the MOSSE tracker loses tracking and the HOG feature differs strongly, it reports "someone is using my computer".
> To summarize: I use TrackerMOSSE to stabilize the camera and watch the monitor, then use the HOG feature as a double check.
the output looks like:
> shaking camera: (screenshot omitted)
> use computer: (screenshot omitted)
It works really well, but when I google for tutorials it seems camera stabilization is not normally done like this. I found that OpenCV has the videostab module, but it seems not meant for a live camera and not real-time. What could I do to make this toy program more like a "professional" toy? (I only care about the Mat inside the ROI, since I only need the output message, not an output image.)
↧
How to calculate the angle between camera and a person?
I would like to calculate the human body angle with respect to the camera.
I have attached a screenshot for your kind reference.
If the person is looking into the camera, the angle is zero.
If the person is looking up, the angle is greater than zero.
I have the pose coordinates of the person. I would appreciate your thoughts on this.
Thank you.
[C:\fakepath\Screen Shot 2019-01-16 at 12.38.55 PM.png](/upfiles/1547610017963799.png)
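One common approach, assuming the pose coordinates can give a facing direction: take a direction vector derived from the keypoints (for example mid-shoulder to nose; that choice is a hypothetical placeholder here) and measure its angle to the camera's optical axis via the dot product:

```python
import numpy as np

def angle_between(v1, v2):
    """Angle in degrees between two 3D vectors."""
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# hypothetical vectors: camera looks along +Z, image y points down
camera_axis = np.array([0.0, 0.0, 1.0])
facing_camera = np.array([0.0, 0.0, 1.0])   # person looking at the camera
looking_up = np.array([0.0, -0.5, 1.0])     # head tilted upward

print(angle_between(camera_axis, facing_camera))  # 0.0
print(angle_between(camera_axis, looking_up))     # positive angle
```

With 2D-only pose output, the facing vector would first have to be lifted to 3D (e.g. via a head-pose or PnP step), which is a separate problem.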
↧
Extract rotation and translation from Fundamental matrix
Hello,
I am trying to extract rotation and translation from my simulated data (a simulated large-fisheye camera).


So I calculate my fundamental matrix:

    fundamentalMatrix =
    [[ 6.14113278e-13 -3.94878503e-05  4.77387412e-03]
     [ 3.94878489e-05 -4.42888577e-13 -9.78340822e-03]
     [-7.11839447e-03  6.31652818e-03  1.00000000e+00]]
But when I extract the rotation and translation with recoverPose, I get wrong data:

    R = [[ 0.60390422,  0.28204674, -0.74548597],
         [ 0.66319708,  0.34099148,  0.66625405],
         [ 0.44211914, -0.89675774,  0.01887361]]
    T = [[0.66371609],
         [0.74797309],
         [0.00414923]]
Even when I plot the epipolar lines from the fundamental matrix, the lines don't pass through the corresponding points in the next image.
I don't really understand what I am doing wrong.
    fundamentalMatrix, status = cv2.findFundamentalMat(uv_cam1, uv_cam2, cv2.FM_RANSAC, 3, 0.8)
    cameraMatrix = np.eye(3)
    retval, R, t, mask = cv2.recoverPose(fundamentalMatrix, uv_cam1, uv_cam2, cameraMatrix)
↧
What algorithms are best used for recognizing objects in the street from a video stream
What algorithms are best used for recognizing objects in the street from a video stream: haar cascades or neural networks?
↧
Why are the createsamples and traincascade applications not built when building OpenCV 4.0.1?
Why are the createsamples and traincascade applications not built when building OpenCV 4.0.1?
↧
Histogram Equalization
I want to do histogram equalization (for a 16-bit image) manually, without using the OpenCV API. I am using this code:
    Mat image = Mat(h, w, CV_16UC1, image_data);
    Mat new_image = Mat::zeros(image.size(), image.type());
    for (int x = 0; x < image.rows; ++x)
    {
        for (int y = 0; y < image.cols; ++y)
        {
            new_image.at<ushort>(x, y) = saturate_cast<ushort>(((pow(2, bpp) - 1) / (max - min)) * (image.at<ushort>(x, y) - min));
        }
    }
Here,
image: 16-bit image
min: 0
max: 65535
bpp: 16
But I am not getting an equalized image: half of the image is black, and the other half is the same as the input.
Why is this happening? Does the code need to be modified?
Any suggestions are appreciated.
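Note that the formula above is linear min-max scaling (contrast stretching), not histogram equalization; with min = 0 and max = 65535 it is even an identity mapping. Equalization remaps each intensity through the normalized cumulative histogram. A sketch in NumPy for a 16-bit image (assumes the image is not constant), which translates directly to a C++ lookup table:

```python
import numpy as np

def equalize_hist_16u(image):
    """Histogram-equalize a uint16 image via its cumulative distribution."""
    hist = np.bincount(image.ravel(), minlength=65536)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]          # first occupied bin
    # map each gray level through the normalized CDF onto the 16-bit range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 65535)
    return lut.astype(np.uint16)[image]

img = np.array([[0, 1000], [1000, 40000]], dtype=np.uint16)
out = equalize_hist_16u(img)
print(out)
```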
↧
OpenCV JavaScript - meanStdDev
I'm having a hard time trying understand how to extract the Mean and Stddev from the function cv.meanStdDev in the JS version of OpenCV.
I've tried the following JS code, but I'm struggling with how to get the Mean and Stddev from the meanStdDev function:
    let myMean = new cv.Mat(1, 4, cv.CV_64F);
    let myStddev = new cv.Mat(1, 4, cv.CV_64F);
    cv.meanStdDev(img, myMean, myStddev);
I can't seem to find any documentation about this function on the site https://docs.opencv.org/3.4/d5/d10/tutorial_js_root.html
Thanks in advance for any help.
↧
Is it possible to install OpenCV on SLES 11?
hi,
I am trying to use the Sikuli tool on SLES 11 (SUSE Linux Enterprise Server 11), but it needs OpenCV as a dependency.
So can I install OpenCV on SLES 11? If yes, please specify which version of OpenCV supports SLES 11.
Thank you for your help in advance.
↧
Segmentation of blood cells in whole blood
Hi,
I am trying to investigate whether it is possible to segment blood cells in a whole blood sample, which is not diluted.
The images are obtained with microscopes and digital cameras. The only literature I have been able to find does segmentation on smear images, using methods such as watershed, Hough transforms, and shape analysis. I am trying to segment the blood cells and find their size. The problem is finding a robust method. I am not interested in finding all cells in the image, just the clearest cells, and from their segmentation finding the size.
I have attached an example image, which shows the images I will be working on.
Thanks for your help.
↧
Analyse gap between small objects in image
I need to determine the gap between two tiny objects from an image. I am using Python 3 and OpenCV for this. There are different kinds of objects: cylinders and spirals.
The analysis of the cylinders works so far:



The blue line is a substitute for the object's edge. With its help the gap between the objects can be determined. The result looks like this (the red curve is the gap in pixels, left axis):

Now to my problem: I want to do the same for spiral objects (ignore the blue lines):



I am using these functions to improve the images before searching for contours:

    skimage.transform.rescale()
    scipy.ndimage.gaussian_filter()
    scipy.ndimage.gaussian_filter()

I tried playing around with the blur and alpha parameters, but without good results.
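A possible shape-independent way to measure the gap, sketched on a toy binary mask (thresholding and cleanup are assumed to have happened already): for every column, count the background pixels between the two objects. Since it does not rely on a fitted edge line, it should carry over from cylinders to spirals:

```python
import numpy as np

# toy binary image: object in rows 0-1 at the top, rows 6-7 at the bottom
mask = np.zeros((8, 5), dtype=bool)
mask[0:2, :] = True
mask[6:8, :] = True

def column_gaps(mask):
    """Per-column gap in pixels between the two objects in a binary mask."""
    gaps = []
    for col in mask.T:
        ys = np.flatnonzero(col)
        if ys.size < 2:
            gaps.append(None)    # no opposing edges in this column
        else:
            # widest background run between foreground pixels
            gaps.append(int(np.max(np.diff(ys)) - 1))
    return gaps

print(column_gaps(mask))  # [4, 4, 4, 4, 4]
```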
↧
OpenCV 3.4.1 Cascade Classifier HOG
Hey,
I trained a HOG cascade classifier and want to test it with my program, but I get this error message:

    Ex = {msg={cstr_=0x077ba804 "OpenCV(3.4.1) C:\\Users\\XXX\\Downloads\\opencv2\\opencv_new\\source\\modules\\objdetect\\src\\cascadedetect.cpp:1472: error: (-213) HOG cascade is not supported in 3.0 in function cv::CascadeClassifier... ...} ...}

I am using OpenCV 3.4.1 and still get this error. Is it a known bug?
↧
It seems that there is an error in the dnn module in OpenCV 4.0.1
It seems that there is an error in the dnn module (using Caffe) in OpenCV 4.0.1.
I followed the tutorial
https://docs.opencv.org/4.0.0/d5/de7/tutorial_dnn_googlenet.html
It does not work on OpenCV 4.0.1, but it works on OpenCV 3.4.5.
In OpenCV 3.4.5, the dnn module (with the Caffe framework, GoogLeNet) successfully classifies the example image (space shuttle).
In OpenCV 4.0.1, however, the dnn module never classifies the example image correctly.
It even gives a different output on each run.
I just ran the example code provided by the official OpenCV source.
I want to know whether this is an OpenCV 4.0.1 error or my error.
Thanks,
↧
is the omnidir namespace not available in OpenCV 4 in Python 3.6?
I installed OpenCV 4 and checked the version:

    Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18)
    >>> import cv2
    >>> cv2.__version__
    '4.0.0'
I can't find the functions cv2.omnidir.calibrate in OpenCV.
Do I have to compile it myself, or is there a way to use the omnidirectional camera calibration with Python as described and documented here?
https://www.docs.opencv.org/4.0.0/d4/d94/tutorial_camera_calibration.html
https://www.docs.opencv.org/4.0.0/d3/ddc/group__ccalib.html#gaf285e757a4091bfbc2bb742ada3ccba7
↧
Camera pose estimation c++ code
I have come across [this example](https://docs.opencv.org/4.0.1/d7/d53/tutorial_py_pose.html) for camera pose estimation.
I am wondering where I can find the C++ code for this.
↧
What are the paid and free packages for Unity with the dnn module?
What are the paid and free packages for Unity with the dnn module?
↧
Sample tutorial_dnn_android not working without OpenCV Manager
I decided to check the example https://docs.opencv.org/3.4.2/d0/d6c/tutorial_dnn_android.html. The application builds successfully, but it does not work because OpenCV Manager fails to initialize.
Can you help with fixing the example code?
↧
How to get and modify the pixel of Mat in Java?
I want to read and modify some pixels in my matrix. How can I do that in Java? Is there any equivalent for the C++ [Mat::at](http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-at) method?
↧