Camera for Gear Inspection
Hello,
We are planning to set up a camera system for gear inspection and defect detection.
Images of the gears are given below.

The maximum diameter of a gear will be 25 mm, and we need measurement accuracy on the order of 15 micrometres.
The gear will be stationary under the camera for 300 milliseconds. Within this time we have to complete image capture and processing. Processing includes gear measurements and defect detection.
Can we use a rolling-shutter camera for this setup instead of a global-shutter one?
We are planning to use a USB 3 interface, as we need a cable length of at most 2.5 metres. Is there any other advantage of GigE besides the longer cable length?
What kind of camera configuration do you suggest for this project (resolution, fps, field of view, etc.)?
Also, what kind of illumination (backlight, ring light, or something else) and lens would be best for this application?
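For reference, here is the rough resolution estimate we made. It is only a back-of-the-envelope sketch with assumed numbers (field of view, pixels per accuracy step), not a final specification:
# Rough sensor-resolution estimate (illustrative assumptions only)
fov_mm = 30.0           # assumed field of view, slightly larger than the 25 mm gear
accuracy_mm = 0.015     # required measurement accuracy (15 micrometres)
pixels_per_step = 3     # assume ~3 pixels per accuracy step; subpixel algorithms may need fewer

pixel_size_on_object_mm = accuracy_mm / pixels_per_step    # ~0.005 mm per pixel on the object
pixels_per_axis = fov_mm / pixel_size_on_object_mm         # pixels along one axis
print(f"roughly {pixels_per_axis:.0f} x {pixels_per_axis:.0f} pixels needed")   # ~6000 x 6000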
Thanks
Amal
↧
↧
Error on ocl::filter2D
Hi,
I'm trying to test ocl::filter2D with OpenCV 2.4.11, x86, compiled on VS2013:
cv::Mat createKernel(int rowK, int colK, int type, int bn)
{
int arrSize = rowK*colK;
float *arraySrc = new float[arrSize];
if (rowK == 1)
{
for (int i = 0; i < arrSize; i++)
{
// ... (the rest of createKernel and the ocl::filter2D call were cut off when posting)
}
The call fails with the following assertion:
OpenCV Error: Assertion failed (ksize.height > 0 && ksize.width > 0 && ((ksize.height & 1) == 1) && ((ksize.width & 1) == 1)) in `anonymous-namespace'::normalizeROI, file C:\Sviluppo\opencv-2.4.11\modules\ocl\src\filtering.cpp, line 78
The same code works using standard cv::filter2D.
Any idea about which problem is occurring?
Thank you
Marco
↧
↧
robust external contour detection?
Hello,
I am trying to detect the external contour of a circular object. The object looks like this (diameter around 40 mm):
But whenever the background or lighting conditions change, it fails to detect the external contour; sometimes it detects an inner contour instead, and sometimes none at all.
Can anyone guide me on how to detect only the external contour reliably? I am using the external contour points for subsequent object pose estimation.
Here is my trial code; despite using CV_RETR_EXTERNAL, it sometimes detects an inner contour.
cv::Mat src_gray;
cv::cvtColor(cv_ptr->image, src_gray, cv::COLOR_BGR2GRAY);
cv::Canny(src_gray, src_gray, 40, 255, 3);
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::RNG rng(12345);
cv::findContours(src_gray, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cv::Point(0,0));
cv::Mat drawing = cv::Mat::zeros(src_gray.size(), CV_8UC3);
cv::Point2f rect_points[4];
cv::Rect bounding_rect;
std::vector<cv::Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
cv::approxPolyDP(cv::Mat(contours[i]), approx, cv::arcLength(cv::Mat(contours[i]), true)*0.02, true);
cv::Scalar color = cv::Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
if(fabs(cv::contourArea(contours[i])) < 100 || !(cv::isContourConvex(approx)))
continue;
if(approx.size() >=6 && approx.size() <=8)
{
cv::drawContours(drawing, contours, (int)i, color, 2, 8, hierarchy, 0, cv::Point());
bounding_rect = cv::boundingRect(contours[i]);
cv::rectangle(drawing, bounding_rect, color, 1, 8, 0);
cv::RotatedRect bounding_rect = cv::minAreaRect(cv::Mat(contours[i]));
bounding_rect.points(rect_points);
std::cout << "......................................"<< std::endl;
std::cout << rect_points[0] << std::endl;
std::cout << rect_points[1] << std::endl;
std::cout << rect_points[2] << std::endl;
std::cout << rect_points[3] << std::endl;
std::cout << "......................................." << std::endl;
}
}
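For comparison, a minimal sketch (in Python, with a made-up file name) of a commonly used alternative to fixed Canny thresholds: Otsu thresholding, a morphological closing to bridge small gaps, and keeping only the largest external contour. It is only an illustration of the idea, not a drop-in replacement for the code above, and it assumes the OpenCV 4.x findContours signature:
import cv2
import numpy as np

img = cv2.imread("circular_part.png")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu adapts the threshold to the image, so it copes with lighting changes better than
# fixed Canny thresholds (use THRESH_BINARY_INV if the object is darker than the background)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Close small gaps so the outer boundary stays a single connected blob
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    outer = max(contours, key=cv2.contourArea)             # keep only the largest outer contour
    rect = cv2.minAreaRect(outer)                          # rotated bounding box for pose estimation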
thanks.
↧
How to use YOLOv3 and OpenCV with NCS2 support
Hi
We are developing a project based on the Intel NCS2, OpenVINO and OpenCV.
I want to run YOLOv3 models with OpenCV and NCS2 support for object detection.
I've converted yolov3 models to IR models using the following command:
python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights
and
python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3-tiny.weights --tiny
python3 /mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /extensions/front/tf/yolo_v3_tiny.json --input_shape=[1,416,416,3] --data_type FP32 //for CPU
python3 /mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /extensions/front/tf/yolo_v3_tiny.json --input_shape=[1,416,416,3] --data_type FP16 //for MYRIAD
and
python3 /mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type FP32 //for CPU
python3 /mo_tf.py --input_model frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config /extensions/front/tf/yolo_v3.json --input_shape=[1,416,416,3] --data_type FP16 //for MYRIAD
Here is my json file:
for yolov3-tiny:
{
"id": "TFYOLOV3",
"match_kind": "general",
"custom_attributes": {
"classes": 80,
"anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
"coords": 4,
"num": 6,
"mask": [0, 1, 2],
"jitter": 0.3,
"ignore_thresh": 0.7,
"truth_thresh": 1,
"random": 1,
"entry_points": ["detector/yolo-v3-tiny/Reshape", "detector/yolo-v3-tiny/Reshape_4"]
}
}
for yolov3:
{
"id": "TFYOLOV3",
"match_kind": "general",
"custom_attributes": {
"classes": 80,
"coords": 4,
"num": 9,
"mask": [0, 1, 2],
"entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
}
}
The conversion process completed successfully.
I am trying to use the converted model in OpenCV. Here is the code:
...
net = readNet(cfg.modelConfiguration, cfg.modelWeights);
net.setPreferableBackend(cfg.backend);
net.setPreferableTarget(cfg.target);
...
//recognize
blob = blobFromImage(frame, cfg.scale, Size(cfg.inpWidth, cfg.inpHeight), Scalar(), false);
net.setInput(blob);
vector<Mat> outs;
names = getOutputsNames(net);
net.forward(outs, names);
for (size_t i = 0; i < outs.size(); ++i)
{
Mat prob = outs[i]; //4-dimension Mat
//get the coordinates
}
The code runs, but I don't know how to get the coordinates of the bounding boxes.
I tried like this:
Mat prob = outs[i];
Mat detectionMat(prob.size[2], prob.size[3], CV_32F, prob.ptr<float>());
float* data = (float*)detectionMat.data;
for (int j = 0; j < detectionMat.rows; ++j, data += detectionMat.cols)
{
Mat scores = detectionMat.row(j).colRange(5, detectionMat.cols);
Point classIdPoint;
double confidence;
// Get the value and location of the maximum score
minMaxLoc(scores, 0, &confidence, 0, &classIdPoint);
if (confidence > cfg.confThreshold)
{
ObjResult result;
cout << "confidence " << confidence << endl;
cout << "classIdPoint.x " << classIdPoint.x << endl;
// cout << "classes[classId] " << classes[classIdPoint.x] << endl;
result.centerX = (int)(data[0] * frame.cols);
result.centerY = (int)(data[1] * frame.rows);
result.width = (int)(data[2] * frame.cols);
result.height = (int)(data[3] * frame.rows);
result.confidence = confidence;
result.classId = classIdPoint.x;
result.className = classes[classIdPoint.x];
ret.objResults.push_back(result);
}
}
or:
Mat prob = outs[i];
Mat detectionMat(prob.size[2], prob.size[3], CV_32F, prob.ptr<float>());
for (int c = 0; c < detectionMat.rows; c++)
{
int labelnum = static_cast<int>(detectionMat.at<float>(c, 1)) - 1;
labelnum = (labelnum < 0) ? 0 : (labelnum > numlabels) ? numlabels : labelnum;
cout << "labelnum: " << labelnum << endl;
float confidence = detectionMat.at<float>(c, 2);
cout << "confidence: " << confidence << endl;
if(confidence > 0.5)
{
int classId = static_cast<int>(detectionMat.at<float>(c, 1));
int left = static_cast<int>(detectionMat.at<float>(c, 3) * frame.cols);
int top = static_cast<int>(detectionMat.at<float>(c, 4) * frame.rows);
int right = static_cast<int>(detectionMat.at<float>(c, 5) * frame.cols);
int bottom = static_cast<int>(detectionMat.at<float>(c, 6) * frame.rows);
ObjResult result;
result.width = right - left;
result.height = top - bottom;
result.centerX = left + result.width/2;
result.centerY = bottom + result.height/2;
result.confidence = confidence;
result.classId = classId;
result.className = classes[classId];
ret.objResults.push_back(result);
}
}
But this does not seem to be the proper way.
How can I get the coordinates of the bounding boxes? Do you have any sample code?
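For comparison, this is how OpenCV's own object_detection sample parses YOLOv3 outputs when the network is loaded directly from the darknet cfg/weights files: each output row is [center_x, center_y, width, height, objectness, 80 class scores], all normalized to the frame size. The sketch below (Python, with placeholder file names) shows that convention; the outputs of an IR-converted model may be shaped differently, so treat it only as a reference point:
import cv2
import numpy as np

# Placeholder file names; this is the darknet cfg/weights path, not the IR path
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
img = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outs = net.forward(net.getUnconnectedOutLayersNames())   # one output per YOLO head

conf_threshold = 0.5
boxes, confidences, class_ids = [], [], []
h, w = img.shape[:2]
for out in outs:
    for row in out:                      # row = [cx, cy, w, h, objectness, 80 class scores]
        scores = row[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > conf_threshold:
            cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression to drop overlapping boxes
idxs = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, 0.4)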
↧
How do I resolve ‘samples’ has not been declared
Hi, I tried to compile facedetect.cpp from /opencv/samples/cpp as shown below, and I received the error ‘samples’ has not been declared:
~/opencv/samples/cpp$ g++ -ggdb facedetect.cpp -o facedetect `pkg-config --cflags --libs opencv`
facedetect.cpp: In function ‘int main(int, const char**)’:
facedetect.cpp:65:29: error: ‘samples’ has not been declared
if (!nestedCascade.load(samples::findFileOrKeep(nestedCascadeName)))
^
facedetect.cpp:67:23: error: ‘samples’ has not been declared
if (!cascade.load(samples::findFile(cascadeName)))
^
facedetect.cpp:84:24: error: ‘samples’ has not been declared
image = imread(samples::findFileOrKeep(inputName), IMREAD_COLOR);
^
facedetect.cpp:87:31: error: ‘samples’ has not been declared
if (!capture.open(samples::findFileOrKeep(inputName)))
^
facedetect.cpp:96:24: error: ‘samples’ has not been declared
image = imread(samples::findFile("lena.jpg"), IMREAD_COLOR);
^
facedetect.cpp: In function ‘void detectAndDraw(cv::Mat&, cv::CascadeClassifier&, cv::CascadeClassifier&, double, bool)’:
facedetect.cpp:187:45: error: ‘INTER_LINEAR_EXACT’ was not declared in this scope
resize( gray, smallImg, Size(), fx, fx, INTER_LINEAR_EXACT );
↧
↧
My program takes partial filenames
I have some code, and when I run it on all the images in a folder, it produces partial file names.
if __name__ == "__main__":
    data_catalog = "data"
    raw_catalog = "raw_vessels"
    vessel_catalog = "vessels_images"
    files_names = [x for x in os.listdir(data_catalog) if os.path.isfile(os.path.join(data_catalog, x))]
    #files_names.sort()
    for file_name in files_names:
        out_name = file_name.split('.')[0]
        org_image = cv2.imread(data_catalog + '/' + file_name)
        raw_vessel, vessel_image = detect_vessel(org_image)
        cv2.imwrite(raw_catalog + '/' + out_name + ".JPG", raw_vessel)
        cv2.imwrite(vessel_catalog + '/' + out_name + ".JPG", vessel_image)
I am skipping the description of the "detect_vessel" part here. When I run this code, the output files in the raw_vessels and vessels_images folders seem to have partial names compared to the original file names. For example, the original file name is "moderate_1-20100106_152702_172.23.132.69_P09_CLIP_S_30921_36650_DES.mpg-0047.bmp" and the output in the raw_vessels and vessels_images folders is "moderate_1-20100106_152702_172.JPG".
I can't figure out the problem. I have a lot of images that start with the same name, with only the last part varying. Please help me out.
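For illustration, note that the file names contain several dots, so `file_name.split('.')[0]` keeps only the text before the first dot, which matches the truncated output above. A small sketch of the difference, using os.path.splitext (which strips only the final extension):
import os

file_name = "moderate_1-20100106_152702_172.23.132.69_P09_CLIP_S_30921_36650_DES.mpg-0047.bmp"

# split('.')[0] cuts at the FIRST dot, losing most of the name:
print(file_name.split('.')[0])            # moderate_1-20100106_152702_172

# os.path.splitext removes only the final extension:
out_name, ext = os.path.splitext(file_name)
print(out_name)   # moderate_1-20100106_152702_172.23.132.69_P09_CLIP_S_30921_36650_DES.mpg-0047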
↧
Block Matching, setTextureThreshold()
Hello,
I am having trouble understanding the texture filter in the block matching implementation. I read the source code (https://github.com/opencv/opencv/blob/master/modules/calib3d/src/stereobm.cpp) but still cannot work it out. We set a minimum amount of texture with setTextureThreshold(threshold), but what exactly does this threshold value mean? Does it compare the SAD value of the pixel with this threshold, and if (SAD value < threshold) then filter out the pixel? How is this threshold calculated? I cannot understand how the values of the htext[] buffer are calculated.
Could someone explain how it works?
Thank you in advance.
↧
SolvePnp in millimeters instead of pixels
I want to estimate the pose of the camera using the `solvePnP` function. I have both `object_points` and `image_points` in millimetres; is that correct? What do I have to change in `solvePnP` to get it to work with mm units?
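For reference, a minimal sketch (with made-up points and intrinsics) of the unit convention as I understand it: object points are given in a metric unit such as millimetres, image points are always in pixels, and the camera matrix is in pixels; the returned translation vector then comes out in the same unit as the object points:
import cv2
import numpy as np

# Hypothetical example: a 100 mm x 50 mm planar target, object points in millimetres
object_points = np.array([[0, 0, 0],
                          [100, 0, 0],
                          [100, 50, 0],
                          [0, 50, 0]], dtype=np.float32)

# Corresponding detections in the image -- these are always in pixels
image_points = np.array([[320, 240],
                         [520, 245],
                         [515, 350],
                         [318, 345]], dtype=np.float32)

# Camera intrinsics are in pixels (fx, fy, cx, cy from calibration)
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
# tvec is expressed in millimetres here, because object_points were in millimetres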
↧
OpenCV compilation fails with include error
Hi all, I am building OpenCV 3.4 on a Jetson Nano.
$ uname -a
Linux paralaxiom-jetson 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux
I have followed the steps indicated here > https://raw.githubusercontent.com/spmallick/learnopencv/master/InstallScripts/installOpenCV-3-on-Ubuntu-18-04.sh with some very minor changes on the Python side only. I have also installed libeigen3-dev, and the files are all in the right location, as indicated below:
$ pkg-config --cflags opencv eigen3
-I/usr/local/include/opencv -I/usr/local/include -I/usr/include/eigen3
At the time of compilation, there is an error and the compilation terminates.
In file included from /home/p/kshitij/opencv/modules/core/test/test_precomp.hpp:12:0,
from /home/p/kshitij/opencv/build/modules/core/opencv_test_core_pch_dephelp.cxx:1:
/home/p/kshitij/opencv/modules/core/include/opencv2/core/private.hpp:66:12: fatal error: Eigen/Core: No such file or directory
#  include <Eigen/Core>
           ^~~~~~~~~~~~
compilation terminated.
modules/core/CMakeFiles/opencv_test_core_pch_dephelp.dir/build.make:62: recipe for target 'modules/core/CMakeFiles/opencv_test_core_pch_dephelp.dir/opencv_test_core_pch_dephelp.cxx.o' failed
make[2]: *** [modules/core/CMakeFiles/opencv_test_core_pch_dephelp.dir/opencv_test_core_pch_dephelp.cxx.o] Error 1
CMakeFiles/Makefile2:2627: recipe for target 'modules/core/CMakeFiles/opencv_test_core_pch_dephelp.dir/all' failed
make[1]: *** [modules/core/CMakeFiles/opencv_test_core_pch_dephelp.dir/all] Error 2
Requesting help from the members of the forum.
Thanks,
kshitij
↧
↧
How to fix OpenCV error FAILED: fs.is_open()
I am working on a Jetson Nano using PyCharm and the realsense2 library, trying to run code that uses numpy, pyrealsense2, cv2 and matplotlib, and cv2 keeps causing errors.
The IDE says there is no module named cv2 at the import line, but when the code is run no error is thrown there; the error occurs much later in the code:
> net = cv2.dnn.readNetFromCAFFE("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
> AttributeError: module 'cv2.dnn' has no attribute 'readNetFromCAFFE'
What is wrong? Could the way I installed the library be the problem? Is it out of date? Please help.
UPDATE: fixed capitalisation and now this error is thrown:
> OpenCV Error: Unspecified error
> (FAILED: fs.is_open(). Can't open "MobileNetSSD_deploy.prototxt") in ReadProtoFromTextFile, file /home/nvidia/build_opencv/opencv/modules/dnn/src/caffe/caffe_io.cpp, line 1113
> Traceback (most recent call last):
> File "/home/ihub/documents/projects_pycharm/DistanceToObject.py", line 81, in > net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
> cv2.error: /home/nvidia/build_opencv/opencv/modules/dnn/src/caffe/caffe_io.cpp:1113: error: (-2) FAILED: fs.is_open(). Can't open "MobileNetSSD_deploy.prototxt" in function ReadProtoFromTextFile
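The second error only says the prototxt file could not be opened, which usually points to the relative path not resolving from the current working directory. A small sketch (reusing the file names from the traceback, purely as an illustration) that builds absolute paths next to the script and fails early if the files are missing:
import os
import cv2

# Resolve the model files relative to this script instead of the current working directory
base_dir = os.path.dirname(os.path.abspath(__file__))
prototxt = os.path.join(base_dir, "MobileNetSSD_deploy.prototxt")
caffemodel = os.path.join(base_dir, "MobileNetSSD_deploy.caffemodel")

# Fail early with a clear message if the files are not where we expect them
for path in (prototxt, caffemodel):
    if not os.path.isfile(path):
        raise FileNotFoundError(path)

net = cv2.dnn.readNetFromCaffe(prototxt, caffemodel)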
↧
Assisting the compiler into generating better code
I found a set of routines in the imgcodecs portion of OpenCV where a fairly simple code change improves performance by 2x on the Power platform but has little to no effect on x64. My question is: should changes like the one below be made to help the compiler? I realize that eventually the compiler "could" be made to generate better code.
template<typename dataType>
inline void cvtBGR2Gray( const dataType* rgb, dataType* gray,
Size& size, int ncn, int _swap_rb )
{
int i;
#if 0
for( i = 0; i < size.width; i++, rgb += ncn )
{
int t = descale( rgb[_swap_rb]*cB + rgb[1]*cG + rgb[_swap_rb^2]*cR, SCALE );
gray[i] = (dataType)t;
}
#else
if (_swap_rb)
{
for( i = 0; i < size.width; i++, rgb += ncn )
{
int t = descale( rgb[0]*cR + rgb[1]*cG + rgb[2]*cB, SCALE );
gray[i] = (dataType)t;
}
}
else
{
for( i = 0; i < size.width; i++, rgb += ncn )
{
int t = descale( rgb[0]*cB + rgb[1]*cG + rgb[2]*cR, SCALE );
gray[i] = (dataType)t;
}
}
#endif
}
void icvCvt_BGRA2Gray_8u_C4C1R( const uchar* rgba, int rgba_step,
uchar* gray, int gray_step,
Size size, int _swap_rb )
{
_swap_rb = _swap_rb ? 2 : 0;
for( ; size.height--; gray += gray_step )
{
cvtBGR2Gray(rgba, gray, size, 4, _swap_rb);
rgba += rgba_step - size.width*4;
}
}
// Similar changes to icvCvt_BGR2Gray_8u_C3C1R and icvCvt_BGRA2Gray_16u_CnC1R
↧
Object in Area?
Hello,
I would like to ask if OpenCV is the right tool for my project.
A picture of an object should be analysed to determine whether the top of the object is inside a specific area.
The reference could be two sketched lines in the background, or fixed lines based on the camera position.
Thanks for your help!
↧
Segmenting a Google Street View image
I'm trying to segment Google Street View images such that I can automatically detect where a house (or multiple houses) might be in the image. For this part I'm wanting to use something like the region proposal system used in the RCNN detector - OpenCV has an implementation of the proposal system that can be found [here](https://docs.opencv.org/master/dd/d19/classcv_1_1ximgproc_1_1segmentation_1_1GraphSegmentation.html).
This method proposes too many (~500) possible locations for objects for my algorithm; I'd like it to be around 50 or 100 at most. Are there any other methods I could use that would work better with my specific problem? I was thinking I could somehow remove things like pavement, grass, clouds/sky, etc, but maybe this is not a good idea because houses can have many different colors as well that might be very close to that of grass and sky, which vary in themselves too. I was also thinking that going off of some sort of shape based model might also work, but these don't seem to have too great success in the literature I've read. Would there be any good ideas to pursue?
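One option worth trying is selective search from opencv-contrib (`cv2.ximgproc.segmentation`), whose proposals are roughly ranked so you can keep only the first N. A minimal sketch, assuming opencv-contrib-python is installed and using a placeholder file name:
import cv2

img = cv2.imread("street_view.jpg")        # hypothetical input image

ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()           # or switchToSelectiveSearchQuality() for more proposals

rects = ss.process()                       # array of (x, y, w, h) proposals, roughly ranked
top_n = 100
proposals = rects[:top_n]                  # keep only the first ~100 proposals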
Thanks
↧
↧
How to calculate scale factor between two images (by SIFT matching)
Since SIFT is scale-invariant, can we get the scale factor by SIFT matching? And how can we do that?
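One possible approach: match SIFT keypoints between the two images and either take the median ratio of the matched keypoints' characteristic sizes, or fit a similarity transform with RANSAC and read the scale off it. A minimal sketch, assuming an OpenCV build where SIFT is available as cv2.SIFT_create() and using placeholder file names:
import cv2
import numpy as np

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Option 1: median ratio of the matched keypoints' characteristic scales
size_ratios = [kp2[m.trainIdx].size / kp1[m.queryIdx].size for m in good]
scale_from_sizes = float(np.median(size_ratios))

# Option 2: fit a similarity transform (RANSAC) and take its scale, usually more robust
pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
M, inliers = cv2.estimateAffinePartial2D(pts1, pts2)
scale_from_transform = float(np.hypot(M[0, 0], M[1, 0]))   # scale of the similarity transform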
Thank you.
↧
Imported module does not show up in the 'Add Module Dependency' dialog
So I'm trying to add OpenCV as a dependency for my Android Studio project, and I've followed this tutorial: https://github.com/davidmigloz/go-bees/wiki/Setup-OpenCV-3.1.0-in-Android-Studio-2.2 . The last step says to add the OpenCV module to my app module in order to use it, but the OpenCV module isn't visible in my project...
Screenshot: https://i.imgur.com/AtRsuqH.png
↧
Facenet with OpenCV and Neural compute stick 2
Hi,
I am working on a face recognition project with a FaceNet model and an OpenCV-based application. I have converted the FaceNet TensorFlow models to Intermediate Representation, which is suitable for Intel's Neural Compute Stick 2 (NCS2). But I am not sure how to run predictions. Can someone help with this?
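A minimal sketch of what running an IR model through OpenCV's Inference Engine backend on the NCS2 can look like; the file names, input size and preprocessing below are placeholders/assumptions rather than values from the converted model. The forward pass yields an embedding, and "prediction" is then a distance comparison between embeddings:
import cv2
import numpy as np

# Placeholder file names for the converted IR model
net = cv2.dnn.readNet("facenet.xml", "facenet.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)       # run on the NCS2

def embedding(face_bgr):
    # Assumed preprocessing: many FaceNet variants expect a 160x160 RGB crop, roughly normalised
    blob = cv2.dnn.blobFromImage(face_bgr, 1.0 / 128, (160, 160),
                                 (127.5, 127.5, 127.5), swapRB=True, crop=False)
    net.setInput(blob)
    vec = net.forward().flatten()
    return vec / np.linalg.norm(vec)                     # L2-normalise the embedding

# Recognition = nearest stored embedding by Euclidean (or cosine) distance, e.g.:
# dist = np.linalg.norm(embedding(face_a) - embedding(face_b))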
↧
OpenCV compile error
Windows 10 64-bit, CMake 3.7.2, MinGW 7.3, OpenCV 4.0.1; compilation fails with:
cap_dshow.cpp:2298:41: error: 'strcpy_instead_use_StringCbCopyA_or_StringCchCopyA' was not declared in this scope
if( type == MEDIASUBTYPE_RGB24) strcpy(tmpStr, "RGB24");
^
H:\software\opencv\sources\modules\videoio\src\cap_dshow.cpp:2298:56: warning: left operand of comma operator has no effect [-Wunused-value]
if( type == MEDIASUBTYPE_RGB24) strcpy(tmpStr, "RGB24");
^~~~~~~
H:\software\opencv\sources\modules\videoio\src\cap_dshow.cpp:2299:5: error: 'else' without a previous 'if'
else if(type == MEDIASUBTYPE_RGB32) strcpy(tmpStr, "RGB32");
^~~~
In file included from H:/software/programming/qt/Tools/mingw730_32/i686-w64-mingw32/include/DShow.h:33:0,
from H:\software\opencv\sources\modules\videoio\src\cap_dshow.cpp:111:
H:\software\opencv\sources\modules\videoio\src\cap_dshow.cpp:2299:41: error: 'strcpy_instead_use_StringCbCopyA_or_StringCchCopyA' was not declared in this scope
else if(type == MEDIASUBTYPE_RGB32) strcpy(tmpStr, "RGB32");
^
H:\software\opencv\sources\modules\videoio\src\cap_dshow.cpp:2299:56: warning: left operand of comma operator has no effect [-Wunused-value]
else if(type == MEDIASUBTYPE_RGB32) strcpy(tmpStr, "RGB32");
^~~~~~~
H:\software\opencv\sources\modules\videoio\src\cap_dshow.cpp:2300:5: error: 'else' without a previous 'if'
else if(type == MEDIASUBTYPE_RGB555)strcpy(tmpStr, "RGB555");
↧
↧
depth from disparity
Hello everyone,
I have computed a disparity map using OpenCV in Python, but my goal is to get the real depth from this disparity map.
Steps accomplished :
1. Take 40 calibration pictures with 2 side by side cameras
2. Calibrate each camera individually using the cv2.calibrateCamera() function
3. Stereo calibration with the cv2.stereoCalibrate() function
4. Compute rotation matrix with the cv2.stereoRectify() function
5. Undistort both of the cameras with the cv2.initUndistortRectifyMap() function
6. Use Stereo SGBM algorithm to create a disparity map
7. Ajust Stereo SGBM parameters to get the best possible result for the disparity map
8. **Use cv2.reprojectImageTo3D() function to get real 3D coordinates (depth map)**
All my results until point 7) seem to be correct (see images below).
However, the result given by cv2.reprojectImageTo3D() is totally incorrect, and I am unable to understand where the problem is. The calibration was done with units in [cm], so for an object placed at 50 cm the result should be near 50, but the function returns -0.6. Any idea what the problem could be?
My results are the following, for a bottle placed at 50cm of the 2 cameras :
Images after undistortion :

Disparity map

Depth map (-3.64 corresponds to a white pixel manually added to the disparity map for visualization purposes, as you can see in the image above; every pixel near that value should be around 50, but it gives -0.6):

Any help would be very welcome !
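For reference, a minimal sketch of the disparity-to-depth relation being assumed here: StereoSGBM/StereoBM return fixed-point disparities scaled by 16, which should be divided by 16 before computing Z = f·B/d or calling reprojectImageTo3D with the Q matrix from stereoRectify. The focal length f, baseline B, Q matrix and raw disparity are assumed to come from the existing pipeline:
import cv2
import numpy as np

def depth_from_disparity(disparity_raw, f, B, Q):
    # disparity_raw: CV_16S output of StereoSGBM.compute() (fixed point, scaled by 16)
    # f: focal length in pixels, from the rectified camera matrix
    # B: baseline between the cameras, in calibration units (cm here, since calibration used cm)
    # Q: 4x4 reprojection matrix returned by cv2.stereoRectify()
    disparity = disparity_raw.astype(np.float32) / 16.0     # convert to real disparity values

    # Direct formula: Z = f * B / d  (same units as B, so cm here)
    with np.errstate(divide='ignore', invalid='ignore'):
        depth = (f * B) / disparity

    # Equivalent route through OpenCV: reprojectImageTo3D wants float disparities and Q
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    depth_from_q = points_3d[:, :, 2]                       # Z channel
    return depth, depth_from_q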
↧
Converting python code to javascript code.
I'm trying to convert Python code to JavaScript to detect an object in real time with a webcam. I have converted some of the lines, but for a few of them I cannot find a JavaScript equivalent. Please help me.
def find_marker(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(gray, 35, 125)
    cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)  # the following three lines are what I don't know how to convert to JavaScript
    c = max(cnts, key=cv2.contourArea)
    return cv2.minAreaRect(c)
The following is the JavaScript code I have converted so far.
function findMarker(src){
cv.cvtColor(src, dst, cv.COLOR_RGBA2GRAY);
let ksize = new cv.Size(3, 3);
let anchor = new cv.Point(-1, -1);
cv.blur(dst, dstC1, ksize, anchor, cv.BORDER_DEFAULT);
cv.Canny(dstC1, dstC1, 50, 100, 3, false);
let contours = new cv.MatVector();
let hierarchy = new cv.Mat();
cv.findContours(dstC1, contours, hierarchy, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE);
src.delete();
dst.delete();
dstC1.delete();
}
I am trying to track a piece of paper in a real-time webcam video, so now I am trying to find my marker.
↧
Should I use focal length from camera matrix or new camera matrix for depth estimation?
I use initUndistortRectifyMap with the camera matrix and the new camera matrix to get the output maps. The new camera matrix was obtained using estimateNewCameraMatrixForUndistortRectify, passing a non-zero balance value. Compared to the original camera matrix, the new camera matrix has a different focal length.
Now the question is: if I get disparity values by first using remap with the output maps and then running a stereo disparity algorithm, which focal length should I use to get the real depth? The one from the old camera matrix or the one from the new camera matrix?
↧