Has anybody tried to compile OpenCV 3.3.1 with CMake and Visual Studio 2008? Does OpenCV 3.3.1 support Visual Studio 2008? Tell me what you know or have tried. Thanks!
↧
Does OpenCV 3.3.1 support Visual Studio 2008?
↧
Getting an error in multiclass SVM classification
Hello all, I'm getting an error in my code. I'm doing multiclass classification using an SVM with 20 classes. The training data for each class is a feature-vector .yml file, named Dictionary1, Dictionary2, ... Dictionary20, such as:
vocabulary: !!opencv-matrix
rows: 100
cols: 1
dt: f
data: [ 4.48574181e+01, 3.61391605e-04, 1.06644783e+02,
1.75997910e+02, 8.53040619e+01, 2.40742096e+02, 1.35315109e+02,
9.74207458e+01, 7.31249542e+01, 5.95427322e+01, 3.08762417e+01............
%YAML:1.0
---
vocabulary: !!opencv-matrix
rows: 100
cols: 1
dt: f
data: [ 6.63631945e-04, 1.00589867e+02, 1.60471497e+02,
4.06750679e+01, 1.32695053e+02, 2.54314590e+02, 8.82780228e+01... etc
My code is
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <cstdio>
#include <iostream>
using namespace cv;
using namespace cv::ml;
using namespace std;
char filename[80];
int main(int argc, char* argv[])
{
    Mat trainData, trainLabels;
    for (int i = 1; i <= 20; i++) // Dictionary1.yml .. Dictionary20.yml
    {
        sprintf_s(filename, "VocabularyHOG/Dictionary%d.yml", i);
        Mat feature;
        FileStorage fs(filename, FileStorage::READ);
        fs["vocabulary"] >> feature;
        feature.convertTo(feature, CV_32F); // make sure we got float data
        trainData.push_back(feature.reshape(1, 1)); // as a flat row
        trainLabels.push_back(i); // the class label
    }
    // Train the SVM
    Ptr<SVM> svm = SVM::create();
    svm->setType(SVM::C_SVC);
    svm->setKernel(SVM::LINEAR);
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
    svm->train(trainData, ROW_SAMPLE, trainLabels);
    // The test sample must have the same shape and type as a training row
    Mat testData;
    FileStorage fs("DictionaryrunningHOG.yml", FileStorage::READ);
    fs["vocabulary"] >> testData;
    testData.convertTo(testData, CV_32F);
    int response = (int)svm->predict(testData.reshape(1, 1));
    if (response == 1)
        cout << "boxing";
    else
        cout << "negative result";
    return 0;
}
I'm getting an error.
Please help me find where I'm going wrong. Thanks.
↧
↧
Creating mobile application with OpenCV
What technologies can be used to create applications for iOS and Android? The OpenCV application code needs to be common to both systems.
↧
LUT for 16bit image
Hi,
I want to apply a lookup table to a 16-bit image.
I have a 16-bit image (CV_16UC1). I normalize the image with given RGB values and store the result in a Mat called lut (type CV_16UC3). After that, I split lut and store the planes in lut_channels.
int planes = 3;
vector<Mat> lut_channels(planes);
vector<Mat> merge_elem;
split(lut, lut_channels);
for (int k = 0; k < planes; ++k)
{
Mat Plane_images = Mat(Mat::zeros(image.size(), image.type()));
LUT(image, lut_channels[k], Plane_images); //crash here
merge_elem.push_back(Plane_images);
}
It crashes in the LUT function.
Can someone help me solve this problem?
Thanks in advance.
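For what it's worth, cv::LUT only accepts an 8-bit source (CV_8U/CV_8S) and a 256-entry table, so calling it on a CV_16UC1 image will fail no matter how the table is split; a 16-bit lookup has to be done manually. A minimal sketch of that idea, shown in Python/NumPy with placeholder data:

import numpy as np

# Hypothetical 16-bit image and a 65536-entry table with one output triplet per level.
img16 = np.random.randint(0, 65536, (480, 640), dtype=np.uint16)
lut16 = np.random.randint(0, 65536, (65536, 3), dtype=np.uint16)  # placeholder table

# Fancy indexing performs the per-pixel lookup that cv2.LUT cannot do for 16-bit input.
out = lut16[img16]  # shape (480, 640, 3), dtype uint16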
↧
Build 4.0.1 with VS2017 + WITH_QT & WITH_OPENGL errors
OPENCV_EXTRA_MODULES_PATH : opencv_contrib-master/opencv_contrib-master/modules;
ERRORS:
Error LNK1104 cannot open file '..\..\lib\Debug\opencv_highgui401d.lib' opencv_perf_imgcodecs ;
Error LNK1104 cannot open file '..\..\lib\Debug\opencv_structured_light401d.lib' opencv_test_structured_light
and 96 other similar errors;
Error C2065 'GL_PERSPECTIVE_CORRECTION_HINT': undeclared identifier opencv_highgui opencv-master\opencv-master\modules\highgui\src opencv-master\opencv-master\modules\highgui\src\window_QT.cpp 3228
↧
↧
finding the brightest area
I have a photo that has areas with high brightness.

Applying different algorithms, I made the brightest points white and the rest black.
Then I need to determine the center of the bright white surface.

I do it the following way:
img = cv2.imread("frame1.jpg") #read
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # convert
h, s, v = cv2.split(hsv) # split to h s v
limit = v.max() # get max brightness in V
hsv_min = np.array((0, 0, limit), np.uint8) # put min and max
hsv_max = np.array((255, 255, limit), np.uint8)
img = cv2.inRange(hsv, hsv_min, hsv_max) # brightness filter
moments = cv2.moments(img, 1) # get moments
x_moment = moments['m01']
y_moment = moments['m10']
area = moments['m00']
x = int(x_moment / area) # x
y = int(y_moment / area) # y
cv2.putText(img, "center_brightness_surface!", (x,y), cv2.FONT_HERSHEY_SIMPLEX, 1, (100,100,100), 2)
cv2.imshow('frame_out', img)
cv2.imwrite("frame_out.jpg" , img)
cv2.waitKey(0)
cv2.destroyAllWindows()
It works, but x and y are not on the surface; the x, y I get are far from the real center of the bright surface.
Please tell me how I can get the center of the bright surface.
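One likely cause, for what it's worth: in image moments the x (column) centroid is m10/m00 and the y (row) centroid is m01/m00, and the snippet above swaps them. Also, inRange with min == max == limit keeps only pixels at exactly the maximum value. A minimal sketch with both points adjusted (the 20-level tolerance is an arbitrary choice):

import cv2
import numpy as np

img = cv2.imread("frame1.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
limit = int(gray.max())
mask = cv2.inRange(gray, max(limit - 20, 0), 255)  # keep a band near the maximum
m = cv2.moments(mask, binaryImage=True)
if m["m00"] > 0:
    x = int(m["m10"] / m["m00"])  # m10/m00 -> x (column)
    y = int(m["m01"] / m["m00"])  # m01/m00 -> y (row)
    cv2.circle(img, (x, y), 5, (0, 0, 255), -1)
cv2.imshow("frame_out", img)
cv2.waitKey(0)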
↧
compare two thermal images in android
Please provide a sample or tutorials on how to compare two thermal images in Android.
↧
opencv-python 4.0.0.21
Unofficial pre-built OpenCV 4.0 packages for Python are [available](https://pypi.org/project/opencv-python/4.0.0.21/).
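A quick way to confirm which build you ended up with after installing (sketch):

import cv2
print(cv2.__version__)  # expect '4.0.0' for the 4.0.0.21 package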
↧
H264 without ffmpeg
In order to get an H.264 stream, I'm using a simple Android library (which is working well enough).
I send the decoded stream to a SurfaceView:
mRtspClient = new RtspClient(Video_H264/*rtspUrl*/);
mRtspClient.setSurfaceView(mSurfaceView);
Is there a way to redirect this to the JavaCameraView (or the native one) after processing with OpenCV?
I tried ImageReader without success.
opencv 4.0.1 android sdk28
↧
↧
OpenCV 4.0 count cameras number
Dear all,
I would like to write code to count the number of active cameras detected by OpenCV 4.0.
To achieve my goal, I used a for loop, because I didn't find another way to do it.
Apparently, it is not part of the API.
Is there another way to detect all the active cameras?
Thank you for your help and support.
Regards,
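For reference, probing indices in a loop does seem to be the usual workaround, since OpenCV exposes no camera enumeration API. A minimal sketch (the cutoff of 10 indices is an arbitrary choice):

import cv2

def count_cameras(max_index=10):
    count = 0
    for i in range(max_index):
        cap = cv2.VideoCapture(i)
        if cap.isOpened():
            count += 1
        cap.release()
    return count

print(count_cameras())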
↧
openCV with openVINO..?
Hello there,
I am using OpenVINO with Raspbian and an NCS and it's working fine, but I am not able to use OpenCV with it.
It shows the error:
cv2.error: OpenCV(4.0.0) /home/pi/opencv/modules/dnn/src/dnn.cpp:2538: error: (-2:Unspecified error) Build OpenCV with Inference Engine to enable loading models from Model Optimizer. in function 'readFromModelOptimizer'
Will you tell me how to build OpenCV with the Inference Engine?
Thank you.
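In the meantime, you can check whether a given cv2 build was compiled with the Inference Engine at all; a minimal sketch (my understanding is that IE-enabled builds are produced by configuring OpenCV's CMake with WITH_INF_ENGINE=ON against a local OpenVINO install, but treat that as an assumption):

import cv2

# An IE-enabled build mentions the Inference Engine in its build information.
info = cv2.getBuildInformation()
print([line for line in info.splitlines() if "Inference Engine" in line])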
↧
roi out of bounds
I have the following error and I don't have any idea how to fix it. Can you please help me fix this error? The full code is below.
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "math.h"
#include <iostream>
#include <fstream>
#include <vector>
#include <string> // headers inferred from usage; the originals were stripped
using namespace cv;
using namespace std;
// Gradient Checking
#define G_CHECKING 0
// Conv2 parameter
#define CONV_FULL 0
#define CONV_SAME 1
#define CONV_VALID 2
// Pooling methods
#define POOL_MAX 0
#define POOL_MEAN 1
#define POOL_STOCHASTIC 2 // values de-duplicated so the three methods are distinct
#define ATD at<double>
#define elif else if
int NumHiddenNeurons = 50; //200
int NumHiddenLayers = 2;
int nclasses = 4; //10
int KernelSize = 3; //13
int KernelAmount = 8;
int PoolingDim = 2; //4
int batch;
int Pooling_Methed = POOL_STOCHASTIC;
typedef struct ConvKernel{
Mat W;
double b;
Mat Wgrad;
double bgrad;
}ConvK;
typedef struct ConvLayer{
vector<ConvK> layer;
int kernelAmount;
}Cvl;
typedef struct Network{
Mat W;
Mat b;
Mat Wgrad;
Mat bgrad;
}Ntw;
typedef struct SoftmaxRegession{
Mat Weight;
Mat Wgrad;
Mat b;
Mat bgrad;
double cost;
}SMR;
Mat
concatenateMat(vector<vector<Mat> > &vec){
int subFeatures = vec[0][0].rows * vec[0][0].cols;
int height = vec[0].size() * subFeatures;
int width = vec.size();
Mat res = Mat::zeros(height, width, CV_64FC1);
for(int i=0; i<vec.size(); i++){
// ...
}
return res;
}
Mat
concatenateMat(vector<Mat> &vec){
int height = vec[0].rows;
int width = vec[0].cols;
Mat res = Mat::zeros(height * width, vec.size(), CV_64FC1);
for(int i=0; i<vec.size(); i++){
// ...
}
return res;
}
void
unconcatenateMat(Mat &M, vector<vector<Mat> > &vec, int vsize){
int sqDim = M.rows / vsize;
int Dim = sqrt ((double) sqDim);
for(int i=0; i<vsize; i++){
vector<Mat> oneColumn;
// ...
vec.push_back(oneColumn);
}
}
int
ReverseInt(int i){
unsigned char ch1, ch2, ch3, ch4;
ch1 = i & 255;
ch2 = (i >> 8) & 255;
ch3 = (i >> 16) & 255;
ch4 = (i >> 24) & 255;
return ((int)ch1 << 24) + ((int)ch2 << 16) + ((int)ch3 << 8) + ch4;
}
void
read_Mnist(string filename, vector&vec){
ifstream file(filename, ios::binary);
if (file.is_open()){
int magic_number = 0;
int number_of_images = 0;
int n_rows = 0;
int n_cols = 0;
file.read((char*) &magic_number, sizeof(magic_number));
magic_number = ReverseInt(magic_number);
file.read((char*) &number_of_images,sizeof(number_of_images));
number_of_images = ReverseInt(number_of_images);
file.read((char*) &n_rows, sizeof(n_rows));
n_rows = ReverseInt(n_rows);
file.read((char*) &n_cols, sizeof(n_cols));
n_cols = ReverseInt(n_cols);
for(int i = 0; i < number_of_images; ++i){
Mat tpmat = Mat::zeros(n_rows, n_cols, CV_8UC1);
for(int r = 0; r < n_rows; ++r){
for(int c = 0; c < n_cols; ++c){
unsigned char temp = 0;
file.read((char*) &temp, sizeof(temp));
tpmat.at(r, c) = (int) temp;
}
}
vec.push_back(tpmat);
}
}
}
void
read_Mnist_Label(string filename, Mat &mat)
{
ifstream file(filename, ios::binary);
if (file.is_open()){
int magic_number = 0;
int number_of_images = 0;
int n_rows = 0;
int n_cols = 0;
file.read((char*) &magic_number, sizeof(magic_number));
magic_number = ReverseInt(magic_number);
file.read((char*) &number_of_images,sizeof(number_of_images));
number_of_images = ReverseInt(number_of_images);
for(int i = 0; i < number_of_images; ++i){
unsigned char temp = 0;
file.read((char*) &temp, sizeof(temp));
mat.ATD(0, i) = (double)temp;
}
}
}
Mat
sigmoid(Mat &M){
Mat temp;
exp(-M, temp);
return 1.0 / (temp + 1.0);
}
Mat
dsigmoid(Mat &a){
Mat res = 1.0 - a;
res = res.mul(a);
return res;
}
Mat
ReLU(Mat& M){
Mat res(M);
for(int i=0; i 0.0) res.ATD(i, j) = 1.0;
}
}
return res;
}
// Mimic rot90() in Matlab/GNU Octave.
Mat
rot90(Mat &M, int k){
Mat res;
if(k == 0) return M;
elif(k == 1){
flip(M.t(), res, 0);
}else{
flip(rot90(M, k - 1).t(), res, 0);
}
return res;
}
// A Matlab/Octave style 2-d convolution function.
// from http://blog.timmlinder.com/2011/07/opencv-equivalent-to-matlabs-conv2-function/
Mat
conv2(Mat &img, Mat &kernel, int convtype) {
Mat dest;
Mat source = img;
if(CONV_FULL == convtype) {
source = Mat();
int additionalRows = kernel.rows-1, additionalCols = kernel.cols-1;
copyMakeBorder(img, source, (additionalRows+1)/2, additionalRows/2, (additionalCols+1)/2, additionalCols/2, BORDER_CONSTANT, Scalar(0));
}
Point anchor(kernel.cols - kernel.cols/2 - 1, kernel.rows - kernel.rows/2 - 1);
int borderMode = BORDER_CONSTANT;
Mat fkernal;
flip(kernel, fkernal, -1);
filter2D(source, dest, img.depth(), fkernal, anchor, 0, borderMode);
if(CONV_VALID == convtype) {
dest = dest.colRange((kernel.cols-1)/2, dest.cols - kernel.cols/2)
.rowRange((kernel.rows-1)/2, dest.rows - kernel.rows/2);
}
return dest;
}
// get KroneckerProduct
// for upsample
// see function kron() in Matlab/Octave
Mat
kron(Mat &a, Mat &b){
Mat res = Mat::zeros(a.rows * b.rows, a.cols * b.cols, CV_64FC1);
for(int i=0; i= M.ATD(i, j) && (val - M.ATD(i, j) < minDiff)){
minDiff = val - M.ATD(i, j);
res.x = j;
res.y = i;
}
}
}
return res;
}
Mat
Pooling(Mat &M, int pVert, int pHori, int poolingMethod, vector&locat, bool isTest){
int remX = M.cols % pHori;
int remY = M.rows % pVert;
Mat newM;
if(remX == 0 && remY == 0) M.copyTo(newM);
else{
Rect roi = Rect(remX, remY, M.cols - remX, M.rows - remY);
M(roi).copyTo(newM);
}
Mat res = Mat::zeros(newM.rows / pVert, newM.cols / pHori, CV_64FC1);
for(int i=0; i&locat){
Mat res;
if(POOL_MEAN == poolingMethod){
Mat one = Mat::ones(pVert, pHori, CV_64FC1);
res = kron(M, one) / (pVert * pHori);
}elif(POOL_MAX == poolingMethod || POOL_STOCHASTIC == poolingMethod){
res = Mat::zeros(M.rows * pVert, M.cols * pHori, CV_64FC1);
for(int i=0; i(i);
for(int j=0; j();
}
}
convk.W = convk.W * (2 * epsilon) - epsilon;
convk.b = 0;
convk.Wgrad = Mat::zeros(width, width, CV_64FC1);
convk.bgrad = 0;
}
void
weightRandomInit(Ntw &ntw, int inputsize, int hiddensize, int nsamples){
double epsilon = sqrt((double)6) / sqrt((double)(hiddensize + inputsize + 1));
double *pData;
ntw.W = Mat::ones(hiddensize, inputsize, CV_64FC1);
for(int i=0; i(i);
for(int j=0; j();
}
}
ntw.W = ntw.W * (2 * epsilon) - epsilon;
ntw.b = Mat::zeros(hiddensize, 1, CV_64FC1);
ntw.Wgrad = Mat::zeros(hiddensize, inputsize, CV_64FC1);
ntw.bgrad = Mat::zeros(hiddensize, 1, CV_64FC1);
}
void
weightRandomInit(SMR &smr, int nclasses, int nfeatures){
double epsilon = 0.01;
smr.Weight = Mat::ones(nclasses, nfeatures, CV_64FC1);
double *pData;
for(int i = 0; i(i);
for(int j=0; j();
}
}
smr.Weight = smr.Weight * (2 * epsilon) - epsilon;
smr.b = Mat::zeros(nclasses, 1, CV_64FC1);
smr.cost = 0.0;
smr.Wgrad = Mat::zeros(nclasses, nfeatures, CV_64FC1);
smr.bgrad = Mat::zeros(nclasses, 1, CV_64FC1);
}
void
ConvNetInitPrarms(Cvl &cvl, vector&HiddenLayers, SMR &smr, int imgDim, int nsamples){
// Init Conv layers
for(int j=0; j&x, Mat &y, Cvl &cvl, vector&hLayers, SMR &smr, double lambda){
int nsamples = x.size();
// Conv & Pooling
vector> Conv1st;
vector> Pool1st;
vector>> PoolLoc;
for(int k=0; k tpConv1st;
vector tpPool1st;
vector> PLperSample;
for(int i=0; i PLperKernel;
Mat temp = rot90(cvl.layer[i].W, 2);
Mat tmpconv = conv2(x[k], temp, CONV_VALID);
tmpconv += cvl.layer[i].b;
//tmpconv = sigmoid(tmpconv);
tmpconv = ReLU(tmpconv);
tpConv1st.push_back(tmpconv);
tmpconv = Pooling(tmpconv, PoolingDim, PoolingDim, Pooling_Methed, PLperKernel, false);
PLperSample.push_back(PLperKernel);
tpPool1st.push_back(tmpconv);
}
PoolLoc.push_back(PLperSample);
Conv1st.push_back(tpConv1st);
Pool1st.push_back(tpPool1st);
}
Mat convolvedX = concatenateMat(Pool1st);
// full connected layers
vector acti;
acti.push_back(convolvedX);
for(int i=1; i<=NumHiddenLayers; i++){
Mat tmpacti = hLayers[i - 1].W * acti[i - 1] + repeat(hLayers[i - 1].b, 1, convolvedX.cols);
acti.push_back(sigmoid(tmpacti));
}
Mat M = smr.Weight * acti[acti.size() - 1] + repeat(smr.b, 1, nsamples);
Mat tmp;
reduce(M, tmp, 0, CV_REDUCE_MAX);
M -= repeat(tmp, M.rows, 1);
Mat p;
exp(M, p);
reduce(p, tmp, 0, CV_REDUCE_SUM);
divide(p, repeat(tmp, p.rows, 1), p);
// softmax regression
Mat groundTruth = Mat::zeros(nclasses, nsamples, CV_64FC1);
for(int i=0; i delta(acti.size());
delta[delta.size() -1] = -smr.Weight.t() * (groundTruth - p);
delta[delta.size() -1] = delta[delta.size() -1].mul(dsigmoid(acti[acti.size() - 1]));
for(int i = delta.size() - 2; i >= 0; i--){
delta[i] = hLayers[i].W.t() * delta[i + 1];
if(i > 0) delta[i] = delta[i].mul(dsigmoid(acti[i]));
}
for(int i=NumHiddenLayers - 1; i >=0; i--){
hLayers[i].Wgrad = delta[i + 1] * acti[i].t();
hLayers[i].Wgrad /= nsamples;
reduce(delta[i + 1], tmp, 1, CV_REDUCE_SUM);
hLayers[i].bgrad = tmp / nsamples;
}
//bp - Conv layer
Mat one = Mat::ones(PoolingDim, PoolingDim, CV_64FC1);
vector> Delta;
vector> convDelta;
unconcatenateMat(delta[0], Delta, cvl.kernelAmount);
for(int k=0; k tmp;
for(int i=0; i&hLayers, SMR &smr, vector&x, Mat &y, double lambda){
//Gradient Checking (remember to disable this part after you're sure the
//cost function and dJ function are correct)
getNetworkCost(x, y, cvl, hLayers, smr, lambda);
Mat grad(cvl.layer[0].Wgrad);
cout<<"test network !!!!"<&x, Mat &y, Cvl &cvl, vector&HiddenLayers, SMR &smr, double lambda, int MaxIter, double lrate){
if (G_CHECKING){
gradientChecking(cvl, HiddenLayers, smr, x, y, lambda);
}else{
int converge = 0;
double lastcost = 0.0;
//double lrate = getLearningRate(x);
cout<<"Network Learning, trained learning rate: "< batchX;
for(int i=0; i&x, Cvl &cvl, vector&hLayers, SMR &smr, double lambda){
int nsamples = x.size();
vector> Conv1st;
vector> Pool1st;
vector PLperKernel;
for(int k=0; k tpConv1st;
vector tpPool1st;
for(int i=0; i acti;
acti.push_back(convolvedX);
for(int i=1; i<=NumHiddenLayers; i++){
Mat tmpacti = hLayers[i - 1].W * acti[i - 1] + repeat(hLayers[i - 1].b, 1, convolvedX.cols);
acti.push_back(sigmoid(tmpacti));
}
Mat M = smr.Weight * acti[acti.size() - 1] + repeat(smr.b, 1, nsamples);
Mat tmp;
reduce(M, tmp, 0, CV_REDUCE_MAX);
M -= repeat(tmp, M.rows, 1);
Mat p;
exp(M, p);
reduce(p, tmp, 0, CV_REDUCE_SUM);
divide(p, repeat(tmp, p.rows, 1), p);
log(p, tmp);
Mat result = Mat::ones(1, tmp.cols, CV_64FC1);
for(int i=0; i maxele){
maxele = tmp.ATD(j, i);
which = j;
}
}
result.ATD(0, i) = which;
}
// deconstruct
for(int i=0; i trainX;
vector testX;
Mat trainY, testY;
printf(" S1 ");
readData(trainX, trainY,imagePath, lablePath, 100);
readData(testX, testY, imagePath, lablePath, 100);
printf(" S2 ");
cout<<"Read trainX successfully, including "< HiddenLayers;
SMR smr;
printf(" S5 ");
ConvNetInitPrarms(cvl, HiddenLayers, smr, imgDim, nsamples);
printf(" S6 ");
// Train network using Back Propogation
batch = nsamples / 100;
Mat tpX = concatenateMat(trainX);
double lrate = getLearningRate(tpX);
cout<<"lrate = "<
↧
Extract rotation and translation from Fundamental matrix
Hello,
I am trying to extract rotation and translation from my simulated data. I am using simulated large-fisheye data.


So I calculate my fundamental matrix :
fundamentalMatrix
[[ 6.14113278e-13 -3.94878503e-05 4.77387412e-03]
[ 3.94878489e-05 -4.42888577e-13 -9.78340822e-03]
[-7.11839447e-03 6.31652818e-03 1.00000000e+00]]
But when I extract the rotation and translation with recoverPose, I get wrong data:
R = [[ 0.60390422, 0.28204674, -0.74548597],
[ 0.66319708, 0.34099148, 0.66625405],
[ 0.44211914, -0.89675774, 0.01887361]]),
T = ([[0.66371609],
[0.74797309],
[0.00414923]])
Even when I plot the epipolar lines with the fundamental matrix, the lines don't fit the corresponding points in the next image.
I don't really understand what I'm doing wrong.
fundamentalMatrix, status = cv2.findFundamentalMat(uv_cam1, uv_cam2,cv2.FM_RANSAC, 3, 0.8)
cameraMatrix = np.eye(3);
i= cv2.recoverPose(fundamentalMatrix, uv_cam1, uv_cam2, cameraMatrix)
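One thing stands out: cv2.recoverPose expects an essential matrix, and passing cameraMatrix = np.eye(3) only makes F and E coincide if uv_cam1/uv_cam2 are already normalized image coordinates, which fisheye pixel data usually isn't. With large fisheye distortion, the points may also need to be undistorted before the pinhole epipolar model applies at all. A minimal sketch of converting F to E first (the intrinsics below are placeholders):

import numpy as np

# The fundamental matrix printed above:
F = np.array([[ 6.14113278e-13, -3.94878503e-05,  4.77387412e-03],
              [ 3.94878489e-05, -4.42888577e-13, -9.78340822e-03],
              [-7.11839447e-03,  6.31652818e-03,  1.00000000e+00]])

# Hypothetical intrinsics; replace with the real calibration of the simulated camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

E = K.T @ F @ K  # recoverPose expects an essential matrix, not F
# then: retval, R, t, mask = cv2.recoverPose(E, uv_cam1, uv_cam2, K)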
↧
↧
DNN/Tensorflow API works in python but not c++
Hi, I'm fairly new to training my own NN, but I have gotten it to work, though only partially. For some reason, I can only detect objects in Python but not in C++. In Python (3.6) this will detect objects as intended:
import cv2 as cv
cvNet = cv.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'ssd_graph.pbtxt')
img = cv.imread('image2.jpg')
rows = img.shape[0]
cols = img.shape[1]
cvNet.setInput(cv.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))
cvOut = cvNet.forward()
for detection in cvOut[0,0,:,:]:
    score = float(detection[2])
    if score > 0.3:
        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows
        cv.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (23, 230, 210), thickness=2)
cv.imshow('img', img)
cv.waitKey()
However, a very similar program in c++ runs without errors but does not return any results:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/dnn.hpp>
#include <iostream> // headers inferred from usage; the originals were stripped
using namespace cv;
using namespace dnn;
using namespace std;
int main()
{
String modelConfiguration = "ssd_graph.pbtxt";
String modelWeights = "frozen_inference_graph.pb";
Mat blob;
Net net = readNetFromTensorflow(modelWeights, modelConfiguration);
Mat img = imread("image2.jpg");
int rows = img.rows;
int cols = img.cols;
// Match the working Python preprocessing: no scaling/mean, 300x300, swapRB=true
blobFromImage(img, blob, 1.0, Size(300, 300), Scalar(), true, false);
//Sets the input to the network
net.setInput(blob);
// Runs the forward pass to get output of the output layers
vector<Mat> outs;
net.forward(outs); //, getOutputsNames(net)
for (size_t i = 0; i < outs.size(); i++) {
    // Each SSD output blob is 1x1xNx7: [batchId, classId, score, left, top, right, bottom].
    // Mat::data is a uchar*, so it must be cast to float* before indexing.
    float* data = (float*)outs[i].data;
    for (size_t j = 0; j < outs[i].total(); j += 7) {
        float score = data[j + 2];
        if (score > 0.3) {
            int left = int(data[j + 3] * cols);
            int top = int(data[j + 4] * rows);
            int right = int(data[j + 5] * cols);
            int bottom = int(data[j + 6] * rows);
            rectangle(img, Point(left, top), Point(right, bottom), Scalar(23, 230, 210), 2);
            cout << data[j + 1] << endl; // data[j + 1] is the class id
        }
    }
}
imshow("img", img);
waitKey();
return 0;
}
I'm using Python 3.6 and OpenCV 4.0.0, and I trained the model with TensorFlow 1.12 starting from the SSD_Inception_V2_coco pre-trained model. Can anyone point me in the right direction? Thanks!
↧
Updating of documentation (regarding reduce)
Hi,
I have just started playing with opencv for Python, trying to do some simple calculations.
I am using opencv-python v. 4.0.0.21 and Python v. 3.6.6
I found the documentation for the reduce function to be a bit lacking, and it cost me some time to get it working.
So my question is whether the documentation could be updated.
Specifically, the documentation could state that the only output types supported by the function are
CV_32S, CV_32F, and CV_64F.
Furthermore, the rtype constants are no longer called CV_REDUCE_* but just REDUCE_*.
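For illustration, a minimal sketch of the call that eventually worked for me (opencv-python 4.0.0.21):

import cv2
import numpy as np

img = np.arange(12, dtype=np.uint8).reshape(3, 4)

# Note REDUCE_SUM (not CV_REDUCE_SUM) and an explicit output type
# from the supported set (CV_32S, CV_32F, CV_64F):
col_sums = cv2.reduce(img, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S)
print(col_sums)  # [[12 15 18 21]]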
I would just like other people to not have to go through the same hassle I had to, to get it working.
Many thanks for the awesome library!
Kind regards,
Jan Vinther Christensen
↧
python: how to compute the color distribution of image in the paper with python and openCV
I am computing the color distribution of image as mentioned in the paper: https://arxiv.org/pdf/1202.2158.pdf.
3.1.2 Color Distribution
To avoid distraction from objects in the background, professional photographers tend to keep the background simple.
In [19], the authors use the color distribution of the background to measure this simplicity. We use a similar approach
to measure the simplicity of color distribution in the image. For a given image, we quantize each RGB channel into
8 values, creating a histogram Hrgb = {h0, h1, · · · , h511} of 512 bins, where hi indicates the number of pixels in
i-th bin. We define feature f4 to indicate the number of dominant colors as f4 = sum_{k=0}^{511} 1(h_k >= c2 * max_i h_i), where
c2 = 0.01 is the threshold parameter. We also calculate the size of the dominant bin relative to the image size as f5 = (max_i h_i) / (number of pixels).
I am not sure about the meaning of "we quantize each RGB channel into 8 values". How do I quantize RGB into 8 values, and how does that give Hrgb with 512 bins? Is Hrgb the sum of the R, G, and B histograms or not?
I have read the image with OpenCV to get the three channels' data with the following code:
image = cv2.imread(img)
B = image[:, :, 0]
G = image[:, :, 1]
R = image[:, :, 2]
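For what it's worth, my reading is that each channel is divided into 8 ranges, so Hrgb is a joint 8x8x8 = 512-bin histogram over (R, G, B) triplets, not a sum of three per-channel histograms. A minimal sketch of that interpretation (the file name is a placeholder):

import cv2
import numpy as np

image = cv2.imread("photo.jpg")  # hypothetical file name

# Quantize each channel into 8 values (0..7), then combine the three
# 3-bit codes into a single bin index in [0, 511].
q = (image // 32).astype(np.int32)  # 256 levels / 8 bins = 32 levels per bin
idx = q[:, :, 2] * 64 + q[:, :, 1] * 8 + q[:, :, 0]  # R*64 + G*8 + B
h = np.bincount(idx.ravel(), minlength=512)  # Hrgb with 512 bins

c2 = 0.01
f4 = int(np.sum(h >= c2 * h.max()))  # number of dominant colors
f5 = h.max() / idx.size  # dominant bin size relative to image size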
↧
dnn assertion failed 'getMemoryShapes()' using Darknet yolov3
Hi,
I'm trying to use the OpenCV dnn with the Darknet pre-trained yolov3 weights and cfg. (on Ubuntu 16.04, OpenCV 4.0.1-dev).
The weights and cfg files are loaded using `readNetFromDarknet()`. When `forward()` is called, an exception is thrown:
OpenCV(4.0.1-dev) /home/ed/src/opencv/modules/dnn/src/dnn.cpp:687: error: (-215:Assertion failed) inputs.size() == requiredOutputs in function 'getMemoryShapes'
It seems there is a mismatch between the size of the dnn layer and the size of the blob.
I'm not sure where to look to match these up. In the yolov3.cfg file it starts with
batch=64
subdivisions=16
width=608
height=608
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
I tried setting the input size to 608x608 (and 416x416 as per the tutorial) but got the same error (the original RGB is 1920x1080):
cv::Size inpSize(608,608);
cv::Mat blob;
cv::dnn::blobFromImage(img, blob, scale, inpSize, cv::Scalar(0, 0, 0), false, false);
If I run the sample dnn code (object_detection.cpp) with the yolov3 weights (setting width and height to 416, reading from the web camera as per the tutorial), it runs OK.
Am I on the right track trying to change the size of the blob or is the problem somewhere else?
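For what it's worth, two things worth ruling out: a cfg/weights pair that don't match each other, and calling forward() without naming YOLOv3's three output layers. A minimal sketch of the calling pattern that works for me (paths are placeholders):

import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # placeholder paths
img = cv2.imread("frame.jpg")

blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (608, 608), swapRB=True, crop=False)
net.setInput(blob)

# YOLOv3 has three unconnected output layers; request them explicitly.
outs = net.forward(net.getUnconnectedOutLayersNames())
for out in outs:
    print(out.shape)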
↧
↧
pointPolygonTest() throws error for some reason
I am using the following piece of code to check if a point is inside the contour or not:
# form the contour from points
cnt = []
for pose in component.part_outline.outline_poses.poses:
    point = (pose.position.x, pose.position.y)
    cnt.append(point)
cnt = np.array(cnt)
# get the point to be tested
point = (screw.pose.position.x, screw.pose.position.y)
# run the test
is_inside = cv2.pointPolygonTest(cnt, point, False)
if is_inside > 0:
    print("bla bla")  # act upon the outcome
and I get the following error for some reason:
"error: OpenCV(3.4.4) /io/opencv/modules/imgproc/src/geometry.cpp:103: error: (-215:Assertion failed) total >= 0 && (depth == CV_32S || depth == CV_32F) in function 'pointPolygonTest'\n\n"
This error message does not tell much, to be honest, and I have absolutely no idea why this is happening. I checked the image: it's there. The contour and the point are also there; nothing is null or anything like that.
P.S. In case it helps, I tried to print the variable `is_inside` and it prints -1.0 four times, then the error is thrown.
Any idea?
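The assertion `depth == CV_32S || depth == CV_32F` is the clue: np.array() of Python floats produces float64 (CV_64F), which pointPolygonTest does not accept. Converting the contour to float32 should be enough; a minimal sketch with a placeholder polygon:

import cv2
import numpy as np

cnt = np.array([(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)])  # placeholder polygon
cnt = cnt.astype(np.float32).reshape(-1, 1, 2)  # CV_32F, contour-shaped

is_inside = cv2.pointPolygonTest(cnt, (10.0, 10.0), False)
print(is_inside)  # 1.0 inside, -1.0 outside, 0.0 on the edge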
↧
patternSize in findChessboardCorners
Hello,
We have to specify the values of patternSize in findChessboardCorners.
But what should we do if we don't know these values exactly in advance? Testing all possible values is expensive.
Thank you,
Christophe
↧
R6010 abort() called
I am working on a simple CNN program (https://github.com/xingdi-eric-yuan/single-layer-convnet/blob/master/ConvNet.cpp), but when I run the code I encounter this exception.
The console looks like this. I am using Visual Studio 2010 and OpenCV 2.4.
[C:\fakepath\Ussntitled.jpg](/upfiles/15472236514503219.jpg)
↧