Commands used:
opencv_createsamples -info annotations.txt -bg bg.txt -num 107 -w 80 -h 45 -vec samples.vec
opencv_traincascade -data /home/USER/test/classifier -vec samples.vec -bg bg.txt -numStages 12 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 107 -numNeg 292 -w 80 -h 45 -mode ALL -precalcValBufSize 1024 -precalcIdxBufSize 1024

This is what I have so far.


↧
Haar cascade training recognizing only a few features
↧
Passing arguments to train_HOG.cpp
How do I pass these (http://answers.opencv.org/question/96925/how-to-use-train_hogcpp/) arguments to train_HOG.cpp?
After compiling https://github.com/opencv/opencv/blob/master/samples/cpp/train_HOG.cpp in VS2017, I get this output:
'Project1.exe' (Win32): Loaded 'C:\Users\sephr\source\repos\Project1\x64\Release\Project1.exe'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\ntdll.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\kernel32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\KernelBase.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\ucrtbase.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\msvcp140.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\vcruntime140.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\vcruntime140.dll'. Symbols loaded.
'Project1.exe' (Win32): Unloaded 'C:\Windows\System32\vcruntime140.dll'
'Project1.exe' (Win32): Loaded 'C:\opencv\build\x64\vc14\bin\opencv_world340.dll'. Cannot find or open the PDB file.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\user32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\win32u.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\gdi32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\gdi32full.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\msvcp_win.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\ole32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\combase.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\rpcrt4.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\bcryptprimitives.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\sechost.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\oleaut32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\comdlg32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\msvcrt.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\SHCore.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\shlwapi.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\shell32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\WinSxS\amd64_microsoft.windows.common-controls_6595b64144ccf1df_5.82.16299.192_none_887f70824ab5b0de\comctl32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\cfgmgr32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\advapi32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\windows.storage.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\kernel.appcore.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\powrprof.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\profapi.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\msvfw32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\concrt140.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\avicap32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\winmm.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\winmm.dll'. Symbols loaded.
'Project1.exe' (Win32): Unloaded 'C:\Windows\System32\winmm.dll'
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\winmmbase.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\winmmbase.dll'. Symbols loaded.
'Project1.exe' (Win32): Unloaded 'C:\Windows\System32\winmmbase.dll'
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\avifil32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\msacm32.dll'. Symbols loaded.
'Project1.exe' (Win32): Loaded 'C:\Windows\System32\imm32.dll'. Symbols loaded.
The thread 0x3284 has exited with code 1 (0x1).
The thread 0x2d68 has exited with code 1 (0x1).
The thread 0xfcc has exited with code 1 (0x1).
The program '[2476] Project1.exe' has exited with code 1 (0x1).
↧
↧
What is the OpenCV equivalent of Matlab's estimateGeometricTransform?
Hello,
I am trying to rewrite the following code [Object Detection in a Cluttered Scene Using Point Feature Matching](https://in.mathworks.com/help/vision/examples/object-detection-in-a-cluttered-scene-using-point-feature-matching.html) in OpenCV using python.
It would be great if somebody could explain to me how the estimateGeometricTransform function in the Matlab code works, and whether there is an equivalent OpenCV function. I have seen people say that getAffineTransform is equivalent to estimateGeometricTransform, but I am not sure.
The Python code so far is:
import numpy as np
import cv2
# Read the template
tramTemplate = cv2.imread('template.jpg')
# Show the template
cv2.imshow("Template", tramTemplate)
# Read the input image
inputImage = cv2.imread('Main.jpg')
# Show the input image
cv2.imshow("Main Image", inputImage)
# Create SURF object
surf = cv2.xfeatures2d.SURF_create(20000)
# Find keypoints and descriptors directly
kp1, des1 = surf.detectAndCompute(inputImage, None)
kp2, des2 = surf.detectAndCompute(tramTemplate, None)
print("Key points of an Input Image, Descriptors of an Input Image", len(kp1), len(des1))
print("Key points of Tram Template, Descriptors of Tram Template", len(kp2), len(des2))
# Draw the detected feature points in both images
inputImagePoint = cv2.drawKeypoints(inputImage, kp1, None, (255, 0, 0), 4)
tramTemplatePoint = cv2.drawKeypoints(tramTemplate, kp2, None, (255, 0, 0), 4)
cv2.imshow("Input Image Key Point", inputImagePoint)
cv2.imshow("Tram Template Key Point", tramTemplatePoint)
# Match the features using their descriptors
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
M = np.array(matches)
M1 = M[:, 0]
M2 = M[:, 1]
# Apply the ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])
# Show the matched features
matchedFeatures = cv2.drawMatchesKnn(inputImage, kp1, tramTemplate, kp2, good, None, flags=2)
cv2.imshow("Matched Features", matchedFeatures)
# Part of the code is missing
aff = cv2.getAffineTransform(M1, M2)
cv2.imshow("Affine Transformed Image", aff)
# Get the bounding polygon of the reference image
fromCenter = False
rectangleBox = cv2.selectROI(tramTemplate, fromCenter)
cv2.waitKey()
In the Matlab code, I don't understand what the following lines mean. Can somebody please explain them to me? The comment says "Display putatively matched features.", but I don't get how.
matchedBoxPoints = boxPoints(boxPairs(:, 1), :);
matchedScenePoints = scenePoints(boxPairs(:, 2), :);
I am kind of stuck at this point. I believe that the variable "boxPoints" holds the key features and "boxPairs" the features matched using their descriptors, right?
Also, getAffineTransform gives me an error: "src data type = 17 is not supported".
I really need this for my project.
Thank you very much.
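For reference, MATLAB's estimateGeometricTransform robustly fits a transform to matched point pairs using MSAC; the closest OpenCV equivalents are cv2.estimateAffine2D / cv2.estimateAffinePartial2D or cv2.findHomography with RANSAC, not getAffineTransform (which expects exactly three float32 point pairs, which is one reason the call above fails). A minimal NumPy sketch of the non-robust least-squares core, using made-up point data:

```python
import numpy as np

# Hypothetical matched point sets (N x 2), as would come from SURF matching.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=np.float64)
# Apply a known affine transform to generate the destination points:
A_true = np.array([[1.2, -0.3, 5.0],
                   [0.4,  0.9, -2.0]])
dst = src @ A_true[:, :2].T + A_true[:, 2]

# Least-squares affine fit: solve [x y 1] @ A.T = [x' y'] for the 2x3 matrix A.
X = np.hstack([src, np.ones((len(src), 1))])
A_est, *_ = np.linalg.lstsq(X, dst, rcond=None)
A_est = A_est.T  # 2x3 affine matrix, recovers A_true
```

In OpenCV 3.2+ the robust version would be, e.g., `M, inliers = cv2.estimateAffine2D(src, dst)`, which returns the same kind of 2x3 matrix plus an inlier mask.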
↧
One-to-one face matching probability
Hi everyone,
Recently I came across an interesting project. Given three images from the American Civil War (one containing a group of identified/unidentified soldiers, and two others containing the same identified soldier, once older and once younger), can we somehow make a statistical/probability statement about the chance that one of the unidentified soldiers is indeed the known soldier?
I think a lot of Civil War domain knowledge would be most helpful, but, in terms of face recognition, is there anything I can do? Most of the facial recognition algorithms I am familiar with would be no help here. Do you think this project is doable? Are there any methods anyone can point me to that could be of use in this project?
Here are the photos (I could acquire much higher resolution copies):

↧
Pixel Loop on a color image
Hello,
How do I loop over the pixels of an RGB color image and apply a 3x3 neighbourhood operation?
Thank you,
Christophe
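For reference, a direct way to do this (a Python/NumPy sketch using a small made-up image; with OpenCV you would load a real one via cv2.imread) is a double loop over the interior pixels, taking a 3x3 window at each:

```python
import numpy as np

# Small example BGR image (hypothetical values); in practice: img = cv2.imread(...)
img = np.arange(5 * 5 * 3, dtype=np.uint8).reshape(5, 5, 3)
out = np.zeros_like(img)

h, w = img.shape[:2]
for y in range(1, h - 1):          # skip the 1-pixel border
    for x in range(1, w - 1):
        # 3x3 neighbourhood around (y, x), all three channels
        window = img[y - 1:y + 2, x - 1:x + 2].astype(np.float32)
        out[y, x] = window.mean(axis=(0, 1))  # e.g. a 3x3 box blur
```

In practice cv2.filter2D or other vectorized operations are far faster than an explicit Python loop; the loop above is mainly useful for understanding or for operations filter2D cannot express.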
↧
↧
How to obtain the hash representation from the image_hash module
I'm trying to compute the perceptual hash of an image (using, for example, the pHash algorithm).
I want to compute the hash and save it in a database for future comparisons.
The compute method of the `PHashImpl` class sets a `cv::OutputArray` with the result of the computation (the hash).
Is it possible to convert it into an alphanumeric or numeric representation of the hash?
Like: `image -> phash_algorithm -> 8a0303f6df3ec8cd`
Am I thinking in the right way, or is my limited knowledge of OpenCV leading me down the wrong path?
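For what it's worth, the output of pHash is just a 1x8 CV_8U matrix (a NumPy array in Python), so a hex string like the one above can be built byte by byte. A sketch using a stand-in hash value (the cv2.img_hash calls, which require opencv-contrib, are shown as comments):

```python
import numpy as np

# With opencv-contrib installed the hash would be computed as:
#     hasher = cv2.img_hash.PHash_create()
#     hash_mat = hasher.compute(image)
# Stand-in value here for illustration:
hash_mat = np.array([[0x8a, 0x03, 0x03, 0xf6, 0xdf, 0x3e, 0xc8, 0xcd]],
                    dtype=np.uint8)

# Serialize the 8 bytes to a hex string for storing in a database:
hex_hash = "".join(f"{b:02x}" for b in hash_mat.flatten())

# And back again when comparing later (e.g. with hasher.compare):
restored = np.frombuffer(bytes.fromhex(hex_hash), dtype=np.uint8).reshape(1, -1)
```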
↧
Copy of a transparent image
Hello,
How do I copy a transparent PNG image onto an RGB image?
Thank you,
Christophe
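For reference, one common approach is per-pixel alpha blending, assuming the PNG was loaded with its alpha channel intact (cv2.IMREAD_UNCHANGED) and using small made-up arrays here:

```python
import numpy as np

# Hypothetical 4-channel BGRA foreground (as from cv2.imread(..., cv2.IMREAD_UNCHANGED))
# and a 3-channel BGR background of the same size.
fg = np.zeros((4, 4, 4), dtype=np.uint8)
fg[..., :3] = 200           # foreground colour
fg[1:3, 1:3, 3] = 255       # opaque only in the centre; alpha 0 elsewhere
bg = np.full((4, 4, 3), 50, dtype=np.uint8)

# Per-pixel alpha blend: out = alpha*fg + (1-alpha)*bg
alpha = fg[..., 3:4].astype(np.float32) / 255.0
out = (alpha * fg[..., :3] + (1.0 - alpha) * bg).astype(np.uint8)
```

Where the PNG alpha is 255 the foreground colour shows through; where it is 0 the background is untouched, and intermediate alphas blend smoothly.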
↧
VideoCapture is not working
Hello,
I have a .mov video from Shutterstock that I converted to AVI.
But VideoCapture cap("myfile.avi"); does not work.
I have installed codecs and I use the latest version of opencv_world.
Could you help?
Thx
Cjacquel
↧
Trouble generating Matlab bindings with the latest OpenCV 3.4.0
Hello,
I'm trying to build the new OpenCV source files on Windows along with the latest opencv_contrib using CMake, but I keep running into this error:
> CMake Warning at C:/OpenCV/opencv_contrib-master/modules/matlab/CMakeLists.txt:77 (message):
> Your compiler is 64-bit but your version of Matlab is 32-bit. To build
> Matlab bindings, please switch to a 32-bit compiler.
> Call Stack (most recent call first):
> C:/OpenCV/opencv_contrib-master/modules/matlab/CMakeLists.txt:90 (warn_mixed_precision)
However, I know I am using a 64-bit version of Matlab:

Here are the paths I am using:

I had to manually find the paths for the libmat, libmex, and libmx DLLs, but the rest were found by CMake. Not sure if those are correct.
Here are my setup details:
- CMake 3.10.2 using Visual Studio 15 2017 Win64 (as seen in one of the screenshots above)
- OpenCV 3.4.0 and opencv_contrib 3.4.0 (assumed latest build from GitHub)
- Matlab 2017b, C++ compiler set as Microsoft Visual C++ 2017, Computer Vision System Toolbox OpenCV Interface installed (v 17.2.1.0)
- Python 3.6.4 64 bit
- Windows 10 Pro 64 bit
Did I need to build the opencv_contrib modules first using CMake? I have set the path to the opencv_contrib modules in CMake.
Any help would be greatly appreciated.
↧
↧
Aruco module GPU-accelerated
I have a GeForce 1070 graphics card, and I want to know if the Aruco module is GPU-accelerated.
If it isn't, how can I make it so?
I noticed that we can use the UMat datatype instead of Mat for the images we need to process, to make OpenCV use the GPU whenever possible.
I tried it, and when I run nvidia-smi -lms to see if my process is actually making use of the GPU, it does show up there as one of the processes.
However, GPU utilization is very low, around 5-6%, and only about 91 MB of memory is used.

Note:
The process (my program a.out) is doing marker detection and pose estimation on images coming from a 640x480, 60 fps VideoCapture source.
↧
RTSP stream from IP camera not moving
Hi, I am trying to stream from an IP camera using RTSP. My code works fine when tested with a Big Buck Bunny stream (either of the two commented out), but when I use the camera's address it captures only a single frame, updates that frame only on a key press, and eventually crashes. The camera streams fine in its default browser viewer.
Any help is greatly appreciated, thanks.
#include "opencv2\opencv.hpp"
#include <iostream>
#include <cmath>
#define PI 3.14159265
#define key_up 72
#define key_down 80
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    Mat frame;
    //VideoCapture vid("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov");
    VideoCapture vid("rtsp://192.168.1.150:1024/0");
    //VideoCapture vid("rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov");
    namedWindow("RTSP stream", CV_WINDOW_FREERATIO);
    int fps = (int)vid.get(CV_CAP_PROP_FPS);
    int height = (int)vid.get(CV_CAP_PROP_FRAME_HEIGHT);
    int width = (int)vid.get(CV_CAP_PROP_FRAME_WIDTH);
    FileStorage fs2("Calibration.yml", FileStorage::READ);
    double CutAngle = (double)fs2["TreeAngle"];
    int heightDev = (int)fs2["HorLine"];
    int widthDev = (int)fs2["VertLine"];
    if (!vid.isOpened())
    {
        return -1;
    }
    while (vid.read(frame))
    {
        line(frame, Point(0, heightDev), Point(width, heightDev), Scalar(0, 0, 255), 1, 8, 0);
        line(frame, Point(widthDev, 0), Point(widthDev, height), Scalar(0, 0, 255), 1, 8, 0);
        line(frame, Point(widthDev, heightDev), Point(widthDev + abs((height - heightDev) / tan(CutAngle * (PI / 180.0))), height), Scalar(0, 0, 255), 1, CV_AA, 0);
        line(frame, Point(widthDev, heightDev), Point(widthDev + abs(heightDev / tan((180 - CutAngle) * (PI / 180.0))), 0), Scalar(0, 0, 255), 1, CV_AA, 0);
        imshow("RTSP stream", frame);
        char key = waitKey(1000 / fps) & 0xFF;
        if (key == 'r')
            CutAngle += 1;
        else if (key == 't')
            CutAngle -= 1;
        else if (key == 27)
            break;
        else if (key == 'a')
            widthDev -= 1;
        else if (key == 'd')
            widthDev += 1;
        else if (key == 'w')
            heightDev -= 1;
        else if (key == 's')
            heightDev += 1;
        else if (key == 'z') {
            FileStorage fs("Calibration.yml", FileStorage::WRITE);
            fs << "TreeAngle" << CutAngle;
            fs << "VertLine" << widthDev;
            fs << "HorLine" << heightDev;
        }
        else if (key == 'p') {
            CutAngle = (40 / 2) + 1;
            widthDev = (width / 5) + 1;
        }
        // waitKey();
    }
    vid.release();
    return 0;
    //Mat test = imread("dawson.jpg", CV_LOAD_IMAGE_UNCHANGED);
    //imshow("Test", test);
    //waitKey();
}
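For reference, one common cause of this symptom is the capture buffer filling up faster than frames are consumed, so reads return ever-staler frames. A frequently used workaround is a producer thread that calls vid.read in a loop and keeps only the newest frame. The buffering pattern can be sketched in pure Python (integers stand in for frames; in a real program a background thread would call buf.put(frame) after each successful read):

```python
import queue

class LatestFrame:
    """Keep only the newest item from a producer (buffer size 1)."""
    def __init__(self):
        self.q = queue.Queue(maxsize=1)

    def put(self, frame):
        # Drop the stale frame, if any, so the consumer always gets the newest.
        try:
            self.q.get_nowait()
        except queue.Empty:
            pass
        self.q.put(frame)

    def get(self):
        return self.q.get()

buf = LatestFrame()
for i in range(5):      # producer pushes frames faster than they are read
    buf.put(i)
newest = buf.get()      # only the last frame survives; 0..3 were dropped
```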
↧
Cascade Classifier
I am using CascadeClassifier to detect objects in a video stream. I load the .xml classifier file that I trained earlier, then use detectMultiScale to perform the detection. The problem is that detectMultiScale causes delay and lag in my detection. Is there any other method that uses the .xml file to detect objects with a cascade classifier?
I am using OpenCV 3.3.0 and C++.
Can anyone help me? Thank you.
↧
cv::putText character width?

Hello. I'm developing an ASCII art project using OpenCV 3.4.0.
I use putText to draw strings, but I think the width of each character is different.
How can I draw text where every character has the same width,
or avoid using the putText function?
Thank you.
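For reference, the Hershey fonts used by putText are proportional, so character widths do differ. One workaround is to draw each character separately with a fixed horizontal advance; the placement arithmetic is simple (sketch below, with the cv2.putText call per character left as a comment, and char_width as a hypothetical value you would tune):

```python
# Draw a string with a fixed horizontal advance so every character
# occupies the same width (Hershey fonts are proportional otherwise).
text = "ART!"
x0, y0 = 10, 30      # baseline origin
char_width = 12      # fixed advance in pixels (hypothetical value)

positions = [(x0 + i * char_width, y0) for i, ch in enumerate(text)]
# For each character one would then call, e.g.:
#     cv2.putText(img, ch, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))
```

An alternative is to render with a genuinely monospaced TrueType font via Pillow and copy the result into the OpenCV image.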
↧
↧
CMake cannot detect latest Apache Ant 1.10.2
Hello,
I am trying to install OpenCV 3.4, but CMake is unable to detect the Apache Ant installed on my machine.
I have it installed in:
C:\apache-ant-1.10.2
When I initially click 'Configure' the output window displays:
Java:
ant: NO
JNI: C:/Program Files/Java/jdk1.8.0_161/include C:/Program Files/Java/jdk1.8.0_161/include/win32 C:/Program Files/Java/jdk1.8.0_161/include
Java wrappers: NO
Java tests: NO
However, I have Java 9.0.4 installed. No matter, that can be changed. But then I am given the new option 'ANT_EXECUTABLE', which I set to:
C:/apache-ant-1.10.2/bin/ant.bat
I have seen others set this option similarly, even though the 'OpenCVDetectApacheAnt.cmake' file seems to look for an extensionless 'ant' file if the system is not Windows:
if(CMAKE_HOST_WIN32)
  set(ANT_NAME ant.bat)
else()
  set(ANT_NAME ant)
endif()
This seems like another case of OpenCV/CMake incorrectly detecting my system as 32-bit.
However, if I set ANT_EXECUTABLE to 'C:/apache-ant-1.10.2/bin/ant.bat' and click 'Configure', the output I receive is:
> '"java.exe"' is not recognized as an internal or external command,
operable program or batch file.
If I set ANT_EXECUTABLE to 'C:/apache-ant-1.10.2/bin/ant' and click 'Configure', the change is seemingly ignored, and I am unable to see the ANT_EXECUTABLE option unless I click 'Configure' again. However, I do get "ANT_ERROR_LEVEL=%1 is not a valid Win32 application" in the CMakeVars.txt file.
Not sure what to do next.
Any help is appreciated.
↧
KNN android tutorial
Is there any tutorial on how to apply k-nearest neighbour on the Android platform? I keep searching all over the internet, but I have not found a single source.
↧
net.setInput error in Load Caffe framework models tutorial
Hello everyone,
I've installed OpenCV 3.4 on my computer (Windows 10 latest version, Visual Studio 2015), and it worked without any problems until I came to the "Load Caffe framework models" tutorial.
[screenshot](/upfiles/15182731074581328.jpg)
The error appears at the line with net.setInput, as shown in the figure. I have seen that it can occur when an instance is not properly declared, but the line Net net = dnn::readNetFromCaffe(modelTxt, modelBin) is doing that job, so I don't understand.
Thank you for your help.
↧
↧
Recreating lighting based on sample image
Hi,
I am wondering if there is any current research/implementation to this problem:
Say we are given two images: one with an object lit by (assumed) uniform lighting, and another with the same object illuminated by a fixed, single light source. Is there a relatively simple way, given a third image of the same object from another perspective, to generate similar lighting?
In other words, is there a way to modify the third image so that it looks like how the object would appear under lighting conditions similar to those in the second image, without going through the entire process of building a 3D physics simulation?
Of course, this assumes the objects are purely solid, ignoring things like transparency and the different albedos of different portions of the object, and that we also have the perspective relations between the individual images.
↧
Sharpen image by blurring and then adding both images?
Hey everyone,
I am pretty new to image processing. Somewhere on the internet I came across a method to sharpen an image (and it actually works), but I do not understand why it works. Here is the code:
cv::Mat image = cv::imread(file);
cv::Mat gaussBlur;
GaussianBlur(image, gaussBlur, cv::Size(0,0), 3);
cv::addWeighted(image, 1.5, gaussBlur, -0.5, 0, image);
Why does subtracting 0.5 times the blurred image from 1.5 times the original image lead to a sharpened image?
Thank you for your help. :-)
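For reference, this technique is known as unsharp masking: by linearity, 1.5*image - 0.5*blur = image + 0.5*(image - blur), and since blurring removes high frequencies, (image - blur) contains mostly edges, so adding it back exaggerates them. A small NumPy demonstration on a 1-D step edge (a simple 3-tap blur stands in for the Gaussian):

```python
import numpy as np

# 1-D "image" with a step edge
signal = np.array([10, 10, 10, 10, 90, 90, 90, 90], dtype=np.float64)

# Simple 3-tap blur (stand-in for the Gaussian), repeating edge values at the borders
padded = np.pad(signal, 1, mode="edge")
blurred = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

# The same arithmetic as cv2.addWeighted(image, 1.5, blur, -0.5, 0):
sharpened = 1.5 * signal - 0.5 * blurred
# Identical to adding half the high-frequency residual back:
equivalent = signal + 0.5 * (signal - blurred)
```

Around the step, sharpened overshoots above 90 on one side and undershoots below 10 on the other, which is exactly the exaggerated edge contrast you perceive as sharpening.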
↧
Bicycle detection with opencv library in android
I am very new to OpenCV. My project is to detect bicycles using OpenCV or any other open-source library on Android. I can successfully run the OpenCV sample project and import it into Android Studio. I then tried to achieve my goal by replacing the xml in the raw directory with /latentsvmdetector/bicycle.xml, but it shows an unsupported character in the xml and crashes the application. After that I researched almost every link. Many suggest that it is a complex task and needs Latent SVM, but I have failed to find a way to use this in my project, and I cannot find any suitable example of Latent SVM on Android. I have also tried to build the cascade xml from a dataset of positive images (1600) and negative images (600), but failed with an OpenCV error. I am really confused about the right approach for this task. I think if I get any sample project that detects such a complex object on Android, I will be able to complete my task. Any link or sample code would be really appreciated. Thank you all for your kind attention.
↧