Hello guys! Thank you for reading my question.
I use VS Code to write my scripts in Python 3.
My objective is to match two images. One image is isolated (the needle) and the other appears within a bigger image (the haystack). The needle image is not an exact match to the one in the haystack, but they are similar: the one in the haystack may be smaller, a different color, or rotated, but the shape is similar. A good example is the image from one of the tutorials:
https://docs.opencv.org/master/Feature_FlannMatcher_Result_ratio_test.jpg
After trying out the tutorials, I learned that a module called features2d is needed for all of the algorithms (SIFT, SURF, AKAZE, ORB). However, this module seems to be disabled. I have tried to install it for days with no success. I think it has something to do with SIFT being patented.
Can anyone point me in the right direction?
Is it possible to install this module in VS Code?
Are there other alternatives?
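For reference, this is roughly the kind of matching I am trying to run, adapted from the tutorial but using ORB, which I understand is not patented and should be in the main modules (a minimal sketch; the filenames are placeholders):

```
import cv2

# Load the needle (template) and haystack images in grayscale.
needle = cv2.imread('needle.png', cv2.IMREAD_GRAYSCALE)
haystack = cv2.imread('haystack.png', cv2.IMREAD_GRAYSCALE)

# ORB detector/descriptor (free alternative to SIFT/SURF).
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(needle, None)
kp2, des2 = orb.detectAndCompute(haystack, None)

# Brute-force matcher with Hamming distance (ORB uses binary descriptors).
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des1, des2, k=2)

# Lowe's ratio test to keep only reasonably distinctive matches.
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

result = cv2.drawMatches(needle, kp1, haystack, kp2, good, None)
cv2.imshow('matches', result)
cv2.waitKey(0)
```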
Thank you for your time.
↧
Is feature detection still possible in OpenCV?
↧
Please tell us more about the "color_histogram.py" algorithm in the OpenCV library. I just can't find any information about this algorithm, and I can't understand how it works. Please explain who created the algorithm and how it works.
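My current understanding is that it visualizes a 2D hue/saturation histogram of the input; here is my own rough sketch of that idea (not the actual sample code), in case it helps frame the question:

```
import cv2
import numpy as np

frame = cv2.imread('frame.jpg')              # placeholder input image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# 2D histogram over hue (0..180) and saturation (0..256).
hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
hist = np.clip(hist * 0.005, 0, 1)           # crude scaling for display

# Build a color map so each (hue, saturation) bin is drawn in its own color,
# weighted by how often it occurs in the image.
hsv_map = np.zeros((180, 256, 3), np.uint8)
h, s = np.indices(hsv_map.shape[:2])
hsv_map[:, :, 0] = h
hsv_map[:, :, 1] = s
hsv_map[:, :, 2] = 255
hsv_map = cv2.cvtColor(hsv_map, cv2.COLOR_HSV2BGR)

vis = hsv_map * hist[:, :, np.newaxis]
cv2.imshow('hist', vis.astype(np.uint8))
cv2.waitKey(0)
```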
↧
DNN onnx model with variable batch size
Hi,
If I have a caffe model with an input and output batch size of 1 and I pass it a blob containing multiple images (batch_size >1), e.g.
batch_size = 2
blob = cv.dnn.blobFromImages([img_normalized]*batch_size ,size=(224,224))
net.setInput(blob)
net.forward()
then I get a result for both images.
If I use an onnx model with an input and output batch size of 1, exported from pytorch as
model.eval();
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, onnx_name,
do_constant_folding=True,
input_names = ['input'], # the model's input names
output_names = ['output'])
and pass a single image as
blob = cv.dnn.blobFromImage(img_normalized ,size=(224,224))
net.setInput(blob)
net.forward()
then I again get the correct result. If however I pass more than one image as above then I get the following error
> error: OpenCV(4.2.0-dev)
> \modules\dnn\src\layers\reshape_layer.cpp:113:
> error: (-215:Assertion failed)
> total(srcShape, srcRange.start,
> srcRange.end) == maskTotal in function
> 'cv::dnn::computeShapeByReshapeMask'
because I have changed the batch size.
I have tried to export the onnx model with a dynamic batch size
torch.onnx.export(model, dummy_input, onnx_name,
do_constant_folding=True,
input_names = ['input'], # the model's input names
output_names = ['output'],
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
but the model fails to import
net = cv.dnn_ClassificationModel(onnx_name)
> error: OpenCV(4.2.0-dev)
> \modules\dnn\src\layers\reshape_layer.cpp:149:
> error: (-215:Assertion failed)
> dstTotal != 0 in function
> 'cv::dnn::computeShapeByReshapeMask'
What am I doing wrong/how can I use an onnx model with a dynamic batch size?
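In case it helps, the fallback I am using for now is to run the ONNX model one image at a time (a sketch using readNetFromONNX instead of dnn_ClassificationModel; `onnx_name` and `images` refer to the snippets above), but I would much prefer real batching:

```
import cv2 as cv

net = cv.dnn.readNetFromONNX(onnx_name)   # the fixed batch-size-1 export

outputs = []
for img in images:                        # preprocessed frames, one at a time
    blob = cv.dnn.blobFromImage(img, size=(224, 224))
    net.setInput(blob)
    outputs.append(net.forward())
```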
↧
Ideal Pinhole Camera consistency
Hi All
I am attempting to get the ideal pinhole camera for the camera of an iPad Mini 4.
I have done multiple calibration sessions using a chessboard pattern and code from: https://docs.opencv.org/master/dc/dbb/tutorial_py_calibration.html
The issue I have is the consistency. I have used up to 200 images per session.
The checkerboard is detected in all of them.
The resolution of the images is 3024 × 4032.
I find the deviation rather large, for the same camera.
           run 1      run 2      run 3
fov_deg    59.2862    60.6048    59.9288
f_x        2913.33    2907.07    2933.18
f_y        2914.59    2901.48    2930.74
c_x        1238.07    1282.58    1275.43
c_y        1605.38    1568.35    1574.36
Is this expected?
What accuracy is realistic?
Am I missing something?
This data is later used for 3D reconstruction, so it should be as accurate as possible.
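For completeness, each session boils down to the standard flow from the linked tutorial; roughly (a sketch with hypothetical variable names, where objpoints/imgpoints are the chessboard correspondences collected in that session):

```
import cv2
import numpy as np

# objpoints: list of (N, 3) float32 board coordinates, one entry per image
# imgpoints: list of (N, 1, 2) float32 detected corners, one entry per image
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, (3024, 4032), None, None)  # (width, height) as above

f_x, f_y = K[0, 0], K[1, 1]
c_x, c_y = K[0, 2], K[1, 2]
print('rms reprojection error:', ret)
print('f_x', f_x, 'f_y', f_y, 'c_x', c_x, 'c_y', c_y)
```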
Looking forward to your insights
Best regards
Wim
↧
Is there parallel computation or not?
I would like to choose the right CPU for an application using cv2. The most time-consuming part of the code is the call to cv2.findTransformECC. Is there parallel computation in cv2.findTransformECC, or is it a single-threaded process? I am deciding between an AMD Ryzen 3 PRO 3200GE and an Intel Core i5 6200U.
I think the AMD Ryzen 3 PRO 3200GE has the advantage if cv2.findTransformECC is single-threaded, and the Intel Core i5 6200U has the advantage if cv2.findTransformECC is multi-threaded.
Are there any thoughts on this question? Is cv2.findTransformECC a single-threaded calculation, so that the AMD Ryzen 3 PRO 3200GE wins with its higher clock speed but fewer threads than the Intel Core i5 6200U?
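What I plan to do to test this myself, in case it is useful (a sketch; I am assuming cv2.setNumThreads has any effect on findTransformECC at all, which is exactly what I am unsure about):

```
import time
import cv2
import numpy as np

# Placeholder images; both must be single-channel and the same size.
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
moving = cv2.imread('moving.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)

for threads in (1, cv2.getNumberOfCPUs()):
    cv2.setNumThreads(threads)
    warp = np.eye(2, 3, dtype=np.float32)
    t0 = time.time()
    cv2.findTransformECC(template, moving, warp, cv2.MOTION_AFFINE,
                         criteria, None, 5)
    print(threads, 'thread(s):', time.time() - t0, 's')
```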
↧
How to get the duration/length of a .webm video file
Hey, this is Murugan. I am not getting the video duration/length for my .webm file; it gives a negative number. Please help me:
video = cv2.VideoCapture("TEST000008.webm")
duration = video.get(cv2.CAP_PROP_POS_MSEC)
frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
fps = video.get(cv2.CAP_PROP_FPS)
durationn = frame_count / fps
print("duration:",duration)
print("frame_count:",frame_count)
print("fps:",fps)
print("durationn:",durationn)
**output is :**
duration: 0.0
frame_count: -7.148113328562451e+16
fps: 7.75
durationn: -9223372036854776.0
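The workaround I am considering is to count the frames myself (a rough sketch; probably slow for long videos):

```
import cv2

video = cv2.VideoCapture("TEST000008.webm")
fps = video.get(cv2.CAP_PROP_FPS)

# Count frames by reading until the stream ends.
frames = 0
while True:
    ok, _ = video.read()
    if not ok:
        break
    frames += 1

print("frames:", frames)
if fps > 0:
    print("duration (s):", frames / fps)
```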
↧
Files/directories to include in VS to use CUDA?
I want to use CUDA/GPU in OpenCV in Visual Studio. For example, `cuda::GpuMat`. After I successfully build OpenCV with the extra modules with CUDA enabled, am I supposed to go to `Property Pages` in VS, then add additional files/directories in `C/C++->General` and `Linker->General` and `Input`?
↧
what does this warning mean? warning : field of class type without a DLL interface used in a class with a DLL interface
visual studio 2019
cmake latest version
opencv zip from master branch today (7/10/20)
python 3.7
cuda 10.2/cudnn
windows 64bit win10
I built with no errors in CMake, then I opened the project in VS and started building the ALL_BUILD project. I am getting a bunch of these warnings regarding the DLL. My PC fan is also extremely loud. Should I be concerned? Should I try installing with Anaconda instead?
29>C:/opencv/opencv-master/modules/core/include\opencv2/core/cuda.hpp(693): warning : field of class type without a DLL interface used in a class with a DLL interface
↧
windows: ModuleNotFoundError: No module named 'cv2'
I installed OpenCV (latest from the master branch) with the latest CMake. I built it in VS19 and everything went fine. However, when I try to import it from the command prompt, it gives me the following error:
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'cv2'
These are my paths, which I added after building in VS:
C:\Python37\python.exe
C:\Python37\include
C:\Python37\Lib
C:\Python37\Lib\site-packages\numpy\core\include
C:\Python37\Lib\site-packages
what went wrong?
↧
can someone help me out
Can someone help me out with this error?

cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\highgui\src\window.cpp:651: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'
↧
Transform and warp an image as other image
Hi,
I want to transform and warp an image to match another image. As the figures below show, I want to transform and warp image1 into image2. I was looking for a solution based on feature matching, then using a thin plate spline transform to obtain a mesh, and then warping that mesh onto image1. Any help would be nice. Thanks a lot.


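The direction I was thinking of is roughly the following (just a sketch: it assumes the ThinPlateSplineShapeTransformer from the contrib shape module is available, the point lists are placeholders standing in for matched features, and I am not sure about the source/target argument order):

```
import cv2
import numpy as np

# pts1 / pts2: matched landmark coordinates in image1 and image2 (placeholders),
# e.g. obtained from feature matching, shape (N, 2).
pts1 = np.array([[10, 10], [200, 30], [50, 180]], dtype=np.float32)
pts2 = np.array([[12, 15], [190, 40], [60, 170]], dtype=np.float32)

matches = [cv2.DMatch(i, i, 0) for i in range(len(pts1))]

tps = cv2.createThinPlateSplineShapeTransformer()
# Note: I have not verified which argument is source and which is target here.
tps.estimateTransformation(pts2.reshape(1, -1, 2), pts1.reshape(1, -1, 2), matches)

img1 = cv2.imread('image1.png')
warped = tps.warpImage(img1)
```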
↧
Please help me to translate C++ to Java
#include
#include
#include
using namespace std;
using namespace cv;
int main()
{
// Load the image
Mat3b img = imread("path_to_image", IMREAD_COLOR);
// Convert to grayscale
Mat1b gray;
cvtColor(img, gray, COLOR_BGR2GRAY);
// Get binary mask (remove jpeg artifacts)
gray = gray > 200;
// Get all non black points
vector<Point> pts;
findNonZero(gray, pts);
// Define the radius tolerance
int th_distance = 50; // radius tolerance
// Apply partition
// All pixels within the radius tolerance distance will belong to the same class (same label)
vector<int> labels;
// With lambda function (require C++11)
int th2 = th_distance * th_distance;
int n_labels = partition(pts, labels, [th2](const Point& lhs, const Point& rhs) {
return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < th2;
});
// You can save all points in the same class in a vector (one for each class), just like findContours
vector<vector<Point>> contours(n_labels);
for (int i = 0; i < pts.size(); ++i)
{
contours[labels[i]].push_back(pts[i]);
}
// Get bounding boxes
vector<Rect> boxes;
for (int i = 0; i < contours.size(); ++i)
{
Rect box = boundingRect(contours[i]);
boxes.push_back(box);
}
// Get largest bounding box
Rect largest_box = *max_element(boxes.begin(), boxes.end(), [](const Rect& lhs, const Rect& rhs) {
return lhs.area() < rhs.area();
});
// Draw largest bounding box in RED
Mat3b res = img.clone();
rectangle(res, largest_box, Scalar(0, 0, 255));
// Draw enlarged BOX in GREEN
Rect enlarged_box = largest_box + Size(20,20);
enlarged_box -= Point(10,10);
rectangle(res, enlarged_box, Scalar(0, 255, 0));
imshow("Result", res);
waitKey();
return 0;
}
↧
How to rotate putText in Android Studio
I accidentally set the screen horizontally, so the letters appear horizontal. I want to know how to rotate the letters in my Android Studio project.
putText(matInput, "R:2", Point(150, 320), 1, 8, Scalar::all(255),6);
This is my putText code.

The image above shows the screen while it is running.
↧
how to resolve org.opencv.imgproc.Imgproc.blur_2(Native Method)
Hello everyone, I am creating a system for object recognition, more precisely for license plates; it is for a control system for the beach in my city. I'm using Java and the latest version of OpenCV, 4.3.0, and I can open the camera without errors. The problem is in the code for plate recognition: it is giving errors, and I think it is related to the version or something. I will leave the code of the classes and the errors; I hope someone can help me. ENTRY CLASS:
public class entrada {
public static void main(String[] args) {
try {
// loading the library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
System.out.println("começando");
int[] mm = { 10,100 };
int[] nn = { 10,100,300 };
int[] aa = { 1 };
int[] bb = { 5};
int[] cc = { 5,10,15 };
int[] dd = { 3 };
int[] ee = {200};
for (int m = 0; m< mm.length; m++) {
for (int n = 0; n < nn.length; n++) {
for (int a = 0; a < aa.length; a++) {
for (int b = 0; b < bb.length; b++) {
for (int c = 0; c < cc.length; c++) {
for (int d = 0; d < dd.length; d++) {
for(int e = 0; e < ee.length;e++){
// loading the original image, already in grayscale
Mat imageGray = Imgcodecs.imread("C:\\Users\\PC\\Desktop\\placa.jpg");
BufferedImage temp = reconhecimento.reconhecedorPlaca(imageGray,
mm[m], nn[n], aa[a], bb[b], cc[c], dd[d],ee[e]);
File outputfile = new File("final" + aa[a]
+ "x" + bb[b] + "x" + cc[c] + "x" + dd[d] + "x" +ee[e]
+"x"+ ".jpg");
ImageIO.write(temp, "jpg", outputfile);
}}}}}}}
} catch (IOException e) {
System.out.println("Error: " + e.getMessage());
}
}
CLASS 2:
public class reconhecimento {
public static int ver =0;
public static BufferedImage reconhecedorPlaca(Mat matrix,int m,int n,int a,int b,int c,int d,int e ) {
int cols = matrix.cols();
int rows = matrix.rows();
int elemSize = (int)matrix.elemSize();
byte[] data = new byte[cols * rows * elemSize];
int type;
Mat imagemResultanteCanny = new Mat(matrix.rows(),matrix.cols(),CvType.CV_8UC1);
Imgproc.blur(matrix, imagemResultanteCanny,new Size(3,3));
Imgproc.Canny(imagemResultanteCanny, imagemResultanteCanny, m, n, 3, true);
Imgcodecs.imwrite("canny"+m+"x"+n+"x3ver"+ver+".jpg", imagemResultanteCanny);
ver++;
Mat lines = new Mat();
Imgproc.HoughLinesP(imagemResultanteCanny, lines, 2, Math.PI/180, b, c, d);
System.out.println(lines.size());
for (int x = 0; x < lines.cols(); x++)
{
double[] vec = lines.get(0, x);
org.opencv.core.Point start = new org.opencv.core.Point();
start.x = (int) vec[0];
start.y = (int) vec[1];
org.opencv.core.Point end = new org.opencv.core.Point();
end.x = (int) vec[2];
end.y = (int) vec[3];
Imgproc.line(matrix, start, end, new Scalar(255,255,255), 2);
}
Mat imagemResultanteCorner = new Mat();
Imgproc.cornerHarris(imagemResultanteCanny, imagemResultanteCorner, 2, 3, 0.04, 1);
Mat n_norm = new Mat();
normalize(imagemResultanteCorner, n_norm, 0, 255, Core.NORM_MINMAX, CvType.CV_32FC1);
Mat s_norm = new Mat();
//Imgproc.conveScaleAbs(n_norm, s_norm);
Imgcodecs.imwrite("corner"+m+"x"+n+"x"+a+"x"+b+"x"+c+"x"+d+"x"+e+"ver"+ver+".jpg",s_norm);
for(int y = 0; y < imagemResultanteCorner.height(); y++){
for(int x = 0; x < imagemResultanteCorner.width(); x++){
if(s_norm.get(y,x)[0] > e ){
Imgproc.circle(matrix, new org.opencv.core.Point(x,y),10, new Scalar(255), 2,8,0);
}
}
}
switch (matrix.channels()) {
case 1:
type = BufferedImage.TYPE_BYTE_GRAY;
break;
case 3:
type = BufferedImage.TYPE_3BYTE_BGR;
byte bI;
for(int i=0; i
↧
Please give me some suggestions

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

cv2.error: OpenCV(4.3.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

[tcp @ 000001bce696a840] Connection to tcp://25.91.54.11:8080 failed: Error number -138 occurred
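For context, this is roughly how I am reading the stream (a sketch; the URL is the one from the error message, and I have not confirmed that frames actually arrive):

```
import cv2

cap = cv2.VideoCapture("tcp://25.91.54.11:8080")
print("opened:", cap.isOpened())

ret, frame = cap.read()
print("got frame:", ret)

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```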
↧
Finding a complex template on a image
I am a beginner and trying to find a shape inside the red square on the picture:

The white dot can be in any position inside the circle. The circle placement is not entirely fixed in place.
So far I have tried matchTemplate on images after Sobel and Canny. I have also tried parsing the results of HoughCircles, but there were still mistakes.
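Roughly, the matching call I am using looks like this (a sketch; the filenames are placeholders, and I am not sure TM_CCORR_NORMED is the best method for a masked template):

```
import cv2

haystack = cv2.imread('screen.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)  # white = pixels to use

# Only TM_SQDIFF and TM_CCORR_NORMED support a mask in matchTemplate.
res = cv2.matchTemplate(haystack, template, cv2.TM_CCORR_NORMED, mask=mask)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

h, w = template.shape
cv2.rectangle(haystack, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)
```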
As a template with mask I have tried using the following:
 
I think the problem could be with the template but I do not know how to correct it.
What would be the correct way to handle the problem?
↧
How to solve this:
Ok! I will approach my question in a different manner this time:
Using
1. Python 3.8.3 64-bit
2. opencv-python 4.3.0.36
3. VS Code
So I have this image:

How would you match the paint palette on the top (green box) with the one within the image (red box)?
matchTemplate does not work since there is rotation and a different image size.
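One idea I had is to brute-force over rotations and scales with matchTemplate (a sketch; probably slow, and I do not know if this is the right approach):

```
import cv2
import numpy as np

haystack = cv2.imread('screenshot.png', cv2.IMREAD_GRAYSCALE)
palette = cv2.imread('palette.png', cv2.IMREAD_GRAYSCALE)

best = (-1.0, (0, 0), 1.0, 0)  # score, location, scale, angle
for scale in np.linspace(0.5, 1.5, 11):
    resized = cv2.resize(palette, None, fx=scale, fy=scale)
    for angle in range(0, 360, 15):
        center = (resized.shape[1] / 2, resized.shape[0] / 2)
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(resized, rot,
                                 (resized.shape[1], resized.shape[0]))
        # Skip templates that no longer fit inside the haystack.
        if (rotated.shape[0] > haystack.shape[0]
                or rotated.shape[1] > haystack.shape[1]):
            continue
        res = cv2.matchTemplate(haystack, rotated, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best[0]:
            best = (max_val, max_loc, scale, angle)

print('best score %.3f at %s (scale %.2f, angle %d)' % best)
```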
I am a beginner and I am trying to learn through projects I make up myself. I have been trying to figure this out for quite a while with no success, I hope you guys can help me. Thanks!
↧
android opencv optical mark recognition
I want to use OpenCV for an OMR (bubble) sheet. I don't have a fixed number of questions or columns in my OMR sheet, so I am trying to detect rows and columns (I also need to detect the title of each column), and then I can move on to filled-circle detection. I get a crash on the line `Imgproc.boundingRect(contours[i])`. Also, I checked the intermediate result: I do get the row and column image, though it is not perfect.
P.S. I am very new to OpenCV and my approach may be incorrect; I would be thankful for any advice. I have an OMR sheet similar to the one in the image. The number of questions and the number of columns is not fixed; I need to identify the number of columns, the number of questions, the column titles, and the filled circles (i.e. the answers), so I try to detect the lines (horizontal and vertical).

fun showAllBorders(paramView: Bitmap?) { // paramView = BitmapFactory.decodeFile(filename.getPath());
localMat1 = Mat()
var scale = 25.0
var contourNo:Int=0
Utils.bitmapToMat(paramView, localMat1)
localMat1 = Mat()
var thresMat = Mat()
var horiMat = Mat()
var grayMat = Mat()
var vertMat = Mat()
Utils.bitmapToMat(paramView, localMat1)
val imgSource: Mat = localMat1.clone()
Imgproc.cvtColor(imgSource, grayMat, Imgproc.COLOR_RGB2GRAY)
Imgproc.adaptiveThreshold(grayMat, thresMat, 255.0, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, -2.0)
horiMat = thresMat.clone()
vertMat = thresMat.clone()
val horizontalSize1 = horiMat.cols().toDouble() / scale
val horizontalStructure: Mat = Imgproc.getStructuringElement(MORPH_RECT, Size(horizontalSize1, 1.0))
Imgproc.erode(horiMat, horiMat, horizontalStructure, Point(-1.0, -1.0), 1)
Imgproc.dilate(horiMat, horiMat, horizontalStructure, Point(-1.0, -1.0), 1)
val verticalSize1 = vertMat.rows().toDouble() //scale
val verticalStructure: Mat = Imgproc.getStructuringElement(MORPH_RECT, Size(1.0, verticalSize1))
Imgproc.erode(vertMat, vertMat, verticalStructure, Point(-1.0, -1.0), 1)
Imgproc.dilate(vertMat, vertMat, verticalStructure, Point(-1.0, -1.0), 4)
var mask: Mat = Mat()
var resultMat: Mat = Mat()
Core.add(horiMat, vertMat, resultMat)
var jointsMat: Mat = Mat()
Core.bitwise_and(horiMat, vertMat, jointsMat)
val contours: List<MatOfPoint> = ArrayList()
val cnts: List<MatOfPoint> = ArrayList()
val hierarchy = Mat()
var rect: Rect? = null
var rois = mutableListOf<Mat>()
var bmpList = mutableListOf<Bitmap>()
Imgproc.findContours(resultMat, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE)
for (i in contours.indices) {
if (Imgproc.contourArea(contours[i]) < 100) {
contourNo = i
val contour2f = MatOfPoint2f(*contours[contourNo].toArray())
val contours_poly = MatOfPoint2f(*contours[contourNo].toArray())
Imgproc.approxPolyDP(contour2f, contours_poly, 3.0, true)
val points = MatOfPoint(*contours_poly.toArray())
var boundRect = mutableListOf<Rect>()
boundRect[i] = Imgproc.boundingRect(contours[i]);//CRASH HERE//contours[i] is not null
val roi = Mat(jointsMat, boundRect[i])
val joints_contours: List<MatOfPoint> = ArrayList()
val hierarchy1 = Mat()
Imgproc.findContours(roi, joints_contours, hierarchy1, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE)
if (joints_contours.size >= 4) {
rois.add(Mat(jointsMat, boundRect[i]))
Imgproc.cvtColor(localMat1, localMat1, Imgproc.COLOR_GRAY2RGBA);
Imgproc.drawContours(localMat1, contours, i, Scalar(0.0, 0.0, 255.0), 6);
rectangle(localMat1, boundRect[i].tl(), boundRect[i].br(), Scalar(0.0, 255.0, 0.0), 1, 8, 0);
}
}
}
for (i in rois) {
val analyzed = Bitmap.createBitmap(i.cols(), i.rows(), Bitmap.Config.ARGB_8888)
Utils.matToBitmap(i, analyzed)
bmpList.add(analyzed)
}
val analyzed = Bitmap.createBitmap(jointsMat.cols(), jointsMat.rows(), Bitmap.Config.ARGB_8888)
Utils.matToBitmap(jointsMat, analyzed)
//below shows rows and column
/*val analyzed = Bitmap.createBitmap(resultMat.cols(), resultMat.rows(), Bitmap.Config.ARGB_8888)
Utils.matToBitmap(jointsMat, analyzed)
return analyzed!!
*/
//return
↧
cv2 imshow displaying black images sometimes
I am trying out cv2 on Ubuntu 20.04 with Python 3.7. I have run the following script:
```
import cv2
img = cv2.imread('butterfly.jpg')
cv2.imshow('ImageWindow', img)
cv2.waitKey()
```
Sometimes I would get the lovely picture of the original butterfly image,

but sometimes I would get a small black window.

The behavior is a bit random, and I am not sure what is causing this issue. Any help is appreciated, thanks!
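For debugging I also tried a slightly more explicit version (a sketch of what I ran, same image):

```
import cv2

img = cv2.imread('butterfly.jpg')
print('loaded:', img is not None)

if img is not None:
    print('shape:', img.shape, 'dtype:', img.dtype)
    cv2.namedWindow('ImageWindow', cv2.WINDOW_AUTOSIZE)
    cv2.imshow('ImageWindow', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```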
↧
YUV422(16bit) to RGB(24bit) conversion
Hi All,
I am trying to convert a YUV422 (16-bit) image to RGB (or BGR, 24-bit), with little success so far.
My code is as follows:
cv::Mat rgb(input->Height(), input->Width(), CV_8UC2, (char*)input->GetData());
cv::Mat webimg(workingImage->Height(), workingImage->Width(), CV_8UC3, workingImage->GetData());
cv::cvtColor(rgb, webimg, CV_YUV2RGB_UYVY);
The coloring I am getting is way off (mostly green and purple, like one of those old MTV clips).

Any help would be appreciated.
↧