Hi Everyone,
I have to find the length and the width of a foot placed on an A4 sheet.
First I tried with other objects, like a business card or a flyer, and it worked.
I detect the A4 sheet by finding its 4 corners and using cv2.boundingRect() to get a rectangle; since the A4 size is known, I can then compute the size of the objects inside.
BUT with the foot, it obviously blocks the 4th corner, so I can't get my rectangle, and therefore I can't use the A4 sheet to measure the foot.
Could someone suggest an approach? Thank you so much.
PS: here is the image
[C:\fakepath\foot3.jpg](/upfiles/16006672829749722.jpg)
↧
Foot Measurement [Python]
↧
Brox Optical Flow - Python Binding
Is there any Python binding available for Brox Optical Flow in OpenCV 4? Thanks
↧
↧
findChessboardCornersSB irregular behavior
I am testing **findChessboardCornersSB** as an alternative to **findChessboardCorners**, mainly to be able to use a calibration target that may overlap the image boundaries. While it works as expected in many cases, I keep encountering strange detection misses and spurious detections:

and:

Even if I ignore the missed points in between, I would have no idea how to associate the detected points with their world coordinates.
Am I doing something wrong, or is **findChessboardCornersSB** unstable?
Here's the code:
    import cv2
    import numpy as np
    import glob

    # Dimensions of the checkerboard (inner corners per row/column)
    CHECKERBOARD = (5, 5)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 500, 0.0001)
    # Vectors of 3D world points and 2D image points for each checkerboard image
    objpoints = []
    imgpoints = []
    colortable = [(255, 100, 100), (100, 255, 100), (100, 100, 255)]
    # World coordinates of the checkerboard corners
    objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

    # Process each image stored in the given directory
    images = glob.glob('./square/*.png')
    for fname in images:
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ret, corners, meta = cv2.findChessboardCornersSBWithMeta(
            img, CHECKERBOARD, cv2.CALIB_CB_LARGER)
        if ret:
            objpoints.append(objp)
            imgpoints.append(corners)
            # Darken the image, then draw the detected corners
            cv2.convertScaleAbs(img, img, 0.4, 0.0)
            for corner, m in zip(corners, meta.ravel()):
                color = colortable[m]
                cv2.drawMarker(img, (int(corner[0][0]), int(corner[0][1])),
                               color, cv2.MARKER_CROSS, 30, 3)
        else:
            print("not found")
        cv2.imshow('img', cv2.resize(img, (0, 0), None, 0.5, 0.5))
        if cv2.waitKey(0) == 27:
            break
    cv2.destroyAllWindows()
↧
TypeError: 'module' object is not callable for pywavefront Module
Hi, I used the pywavefront module to import OBJ files:
import pywavefront
from pywavefront import *
fox = pywavefront.wavefront('fox.obj', collect_faces=True)
but PyCharm gave this error:
fox = pywavefront.wavefront('fox.obj', collect_faces=True)
**TypeError: 'module' object is not callable**
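The likely cause (an assumption, since only one line of the traceback is shown): lowercase `pywavefront.wavefront` names a *submodule*, while the class is `Wavefront` with a capital W, and calling a module object raises exactly this TypeError. A generic stdlib demonstration of the pitfall:

```python
import types

# Calling a module object (instead of a class inside it) raises
# TypeError: 'module' object is not callable
mod = types.ModuleType("demo")
try:
    mod()
except TypeError as e:
    print(e)  # 'module' object is not callable
```

With pywavefront, `pywavefront.Wavefront('fox.obj', collect_faces=True)` should avoid the error.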
↧
How to convert stereo camera distances, pixels to mm
Hello.
I calibrated and rectified the stereo camera. I would like to know how to find the coefficient to convert from pixels to mm.
From the Q matrix obtained by stereo rectification, I determined Tx (baseline) and f (focal length).
As you know, the depth z is found by the following formula
z = Bf/(x-x') z:[pixels]
However, the units of z obtained from this formula are pixels.
A factor k[mm/pixel] is needed to convert it to mm.
Then the equation is as follows
z = (Bf/(x-x'))*k z:[mm]
Is there a smart way to find this coefficient k?
Please suggest something other than actually measuring z several times to derive k, or looking up the pixel pitch of the camera's image sensor.
In the case of HALCON, I was able to find it by a function instead of using the method described above.
Couldn't it be calculated from the camera matrix and the Q matrix obtained from calibration and stereo rectification?
Thank you.
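A note on units that may resolve this: in z = Bf/(x−x'), if the baseline B is in mm (the units used during calibration) and the focal length f is in pixels, the pixel units of f and of the disparity cancel, so z comes out in mm directly with no extra factor k. A tiny numpy check under assumed values:

```python
# Assumed calibration values (not from the asker's camera)
f = 700.0   # focal length in pixels, from the camera matrix
B = 60.0    # baseline in mm, the units used during stereo calibration
d = 21.0    # disparity x - x' in pixels

# pixels in f and d cancel, so z is in the baseline's units (mm)
z = f * B / d
print(z)  # 2000.0 mm
```

The Q matrix encodes the same relation (f at Q[2][3], −1/Tx at Q[3][2]), which is why reprojectImageTo3D returns coordinates in whatever units Tx was expressed in.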


↧
↧
No FPS change when using MJPEG format
Hi, I'm using [See3CAM_130]( https://www.e-consystems.com/13mp-autofocus-usb-camera.asp) camera and it's rated for capturing **3840x2160 @15 fps as UYVY format and @30 fps as MJPEG format**.
Initially I was using the default format (UYVY) for capturing frames and I got an average of 15 FPS. Later I tried capturing in MJPEG format because of the higher FPS, so I added the line
capture.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
But the FPS still averages about 15 FPS at 3840x2160. Do I need to add anything else to increase my FPS using MJPEG?
My code:
    #include <iostream>
    #include <ctime>
    #include "opencv2/opencv.hpp"

    int main(int argc, char** argv)
    {
        cv::VideoCapture capture(0 + cv::CAP_DSHOW);
        if (!capture.isOpened())
        {
            std::cout << "Problem connecting to cam " << std::endl;
            return -1;
        }
        else if (argc == 1)
        {
            std::cout << "Successfully connected to camera " << std::endl;
            capture.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
            int ex = (int)capture.get(cv::CAP_PROP_FOURCC);
            char EXT[] = { (char)(ex & 0XFF), (char)((ex & 0XFF00) >> 8),
                           (char)((ex & 0XFF0000) >> 16), (char)((ex & 0XFF000000) >> 24), 0 };
            std::cout << "CAP_PROP_FOURCC: " << EXT << std::endl;
            capture.set(cv::CAP_PROP_FRAME_WIDTH, 3840);
            capture.set(cv::CAP_PROP_FRAME_HEIGHT, 2160);
        }

        int frameCounter = 0;
        int tick = 0;
        int fps = 0;
        std::time_t timeBegin = std::time(0);
        cv::Mat frame;

        while (1)
        {
            capture.read(frame);
            if (frame.empty())
                break;
            frameCounter++;
            std::time_t timeNow = std::time(0) - timeBegin;
            if (timeNow - tick >= 1)
            {
                tick++;
                fps = frameCounter;
                frameCounter = 0;
            }
            cv::putText(frame, cv::format("Average FPS=%d", fps), cv::Point(30, 30),
                        cv::FONT_HERSHEY_SIMPLEX, 0.8, cv::Scalar(0, 0, 255));
            cv::imshow("FPS test", frame);
            cv::waitKey(1);
        }
        return 0;
    }
↧
↧
FileNotFoundError: [Errno 2] No such file or directory: 'low-poly-fox-by-pixelmannen.mtl'
Hi, I used the Pywavefront module to import OBJ files:
import pywavefront
fox = pywavefront.Wavefront('fox.obj', collect_faces=False)
but PyCharm gave this error:
**FileNotFoundError: [Errno 2] No such file or directory: 'low-poly-fox-by-pixelmannen.mtl'**
I want to import only fox.obj into my project, NOT 'low-poly-fox-by-pixelmannen.mtl', which is a 3D model asset and non-free.
↧
How to pre-process images for OCR ?
I am trying to run OCR on multiple images using Tesseract, but I can't figure out a preprocessing approach that works on all images, even across different contrast, brightness, and shading.
As of now, this is my preprocessing pipeline:
- fastNlMeansDenoisingColored
- Dilation
- MedianBlur
- Grayscale
- AdaptiveThresholding
But for each image, I must manually tune the parameters of each step.
Is there anything that can adjust these parameters automatically, or is there any way of applying machine learning?
↧
↧
seamlessClone invoke causes compilation error
Hi
I can't compile my code due to the line shown below:

I've checked the variable types passed to the function and they all seem right.
What am I missing?
Best regards
↧
Add a music on a video
Hello,
How can I add open-source music to a video generated with OpenCV's VideoWriter?
Thank you
Christophe
↧
↧
problem with cv2.imshow() method
Hello,
I tried many scripts, on Linux and Windows, but I still have a bug with cv2.imshow().
The code, with my image path:
import cv2
path = r'C:\Users\me\Pictures\dell latitude\DSCF1513'
image = cv2.imread(path)
window_name = 'image'
cv2.imshow(window_name, image)
cv2.waitKey(0)
cv2.destroyAllWindows()
And now the bug:
================= RESTART: C:\Users\me\Documents\opencvTest.py =================
Traceback (most recent call last):
File "C:\Users\me\Documents\opencvTest.py", line 18, in
cv2.imshow(window_name, image)
cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-k8sx3e60\opencv\modules\highgui\src\window.cpp:376: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
thanks for your help.
↧
draw aruco global coordinate system on the frame
I'm using ArUco markers as a reference for a localization task. I want to draw the global x and y axes from the perspective of the ArUco marker, and I also want to get the centimeter-to-pixel ratio based on this marker. I'm using Python cv2.
Thanks for your help
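For the scale part, a sketch (my own helper, not an OpenCV function): given one marker's corner pixels as returned by cv2.aruco.detectMarkers and the marker's printed side length, average the four side lengths in pixels. Drawing the axes additionally needs camera calibration and a pose estimate, which isn't shown here.

```python
import numpy as np

def cm_per_pixel(corners, marker_side_cm):
    """Estimate the cm-per-pixel scale from one detected marker.

    `corners` is a (4, 2) array of the marker's corner pixels, in the
    order produced by cv2.aruco.detectMarkers for a single marker.
    """
    corners = np.asarray(corners, dtype=np.float64).reshape(4, 2)
    # Lengths of the four sides in pixels (each corner to the next)
    sides = np.linalg.norm(corners - np.roll(corners, -1, axis=0), axis=1)
    return marker_side_cm / sides.mean()

# Hypothetical detection: a 5 cm marker seen as a 100-pixel square
square = np.array([[10, 10], [110, 10], [110, 110], [10, 110]], float)
print(cm_per_pixel(square, 5.0))  # 0.05 cm per pixel
```

Note the ratio is only valid near the marker's depth; a tilted or distant marker needs a full pose estimate instead.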
↧
Stereo match based on feature points.
I'd like to try stereo matching based on feature-point matching, rather than the BM and SGBM matchers provided in the OpenCV samples, but I'm a little confused about the procedure.
Could anyone give me a hint about the baseline?
I have looked on the internet but found nothing of much value, and I'm getting a little desperate...
↧
openCV can't open laptop webcam
Hello there
I installed OpenCV 4.5-pre on openSUSE Leap 15.2 from source (GitHub). The installed OpenCV can't open the built-in webcam on my laptop. I built it with CMake using this command:
https://pastebin.com/DMKXKktq
And this is general configuration :
https://pastebin.com/jCjS0wNh
This is the output from the example code.
Should I downgrade to a lower version, or is there any workaround I should try?
Thank you
↧
↧
Cannot apply tensorflow model
Hello, I have a problem applying a TensorFlow model in OpenCV. The code below loads the model properly, but when the forward method is called, an assertion error is thrown. Do you have any idea where the problem is, or how to debug it?
    cv::dnn::Net net;
    std::string path = "graph.pb";
    net = cv::dnn::readNetFromTensorflow(path);
    if (net.empty())
    {
        std::cerr << "Can't load network by using the given files." << std::endl;
        return;
    }
    Mat image = imread(imagePath);
    Mat inputBlob = cv::dnn::blobFromImage(image, 1.0, Size(512, 512), Scalar(0, 0, 0), true, false);
    int N = inputBlob.size[0], C = inputBlob.size[1], H = inputBlob.size[2], W = inputBlob.size[3]; // [1, 3, 512, 512]
    net.setInput(inputBlob); // set the network input
    Mat output = net.forward(); // <- throws error
**Error:**
Debug Assertion Failed!
Program: C:\Workspace\ImageAnalysisPlus\x64\Debug\opencv_world3410d.dll
File: C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vector
Line: 1789
Expression: back() called on empty vector
For information on how your program can cause an assertion
failure, see the Visual C++ documentation on asserts.
OpenCV version: 4.4.0
Tensorflow model: [https://drive.google.com/file/d/1aE0smAw-CyPLch6UY8blK3RreT5RrZfN/view?usp=sharing](https://drive.google.com/file/d/1aE0smAw-CyPLch6UY8blK3RreT5RrZfN/view?usp=sharing)
Platform: Windows 10, Visual Studio 2017
Thank you in advance for any advice.
↧
Cant load images after adding opencv_face450.dll.a libraries
Hi
I need to use the FacemarkKazemi class, but as soon as I add the opencv_face450.dll.a library to the project, the imread function stops working (Mat img1 = imread(image1, IMREAD_COLOR);), meaning imshow displays some kind of blank picture. I don't think the problem comes from imshow, because some other functionality further down the code stops working too (face detection).
Best regards
↧
how to get a outputarrayofarrays
Hi,
I have a method:

    void func(OutputArrayOfArrays points)
    {
        vector<vector<Point2f>> vec;
        // initialize vec here
        // how to convert vec to points?
    }

My question is: how do I convert a vector of vectors to OutputArrayOfArrays?
Thanks.
YL
↧