Hi!
I’m new to Kotlin (and also OpenCV) and can’t get a simple library test to work (on either macOS or Linux).
The test code is:
package org.mytest

import org.opencv.core.Core
import org.opencv.core.CvType
import org.opencv.core.Mat

fun main(args: Array<String>) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME)
    val mat = Mat.eye(5, 5, CvType.CV_8UC1)
    println("mat = ${mat.dump()}")
}
After running:
kotlinc test-opencv.kt -classpath /usr/local/opt/opencv/share/java/opencv4/opencv-410.jar -include-runtime -d test-opencv.jar
I get the jar file.
But then, when I try:
java -Djava.library.path=/usr/local/opt/opencv/share/java/opencv4 -cp /usr/local/opt/opencv/share/java/opencv4/opencv-410.jar:. -jar test-opencv.jar
I get:
Exception in thread "main" java.lang.NoClassDefFoundError: org/opencv/core/Core
at org.mytest.Test_opencvKt.main(test-opencv.kt:9)
Caused by: java.lang.ClassNotFoundException: org.opencv.core.Core
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 1 more
Any ideas?
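One thing worth checking, offered as a sketch rather than a confirmed fix: when `java` is launched with `-jar`, the `-cp` option is silently ignored, so the OpenCV jar never makes it onto the classpath. Putting both jars on the classpath and naming the main class explicitly avoids that (the class name `org.mytest.Test_opencvKt` is taken from the stack trace above):

java -Djava.library.path=/usr/local/opt/opencv/share/java/opencv4 -cp /usr/local/opt/opencv/share/java/opencv4/opencv-410.jar:test-opencv.jar org.mytest.Test_opencvKt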
↧
Kotlin with OpenCV 4.10 on macOS terminal
↧
How can we predict the age of a person from a photo using Python?
I have done it for video, but I am not able to do it for a photo. When I try to use a photo with the following code
(ref: https://www.learnopencv.com/age-gender-classification-using-opencv-deep-learning-c-python/):
age_net.setInput(blob)
age_preds = age_net.forward()  # the error occurs when this line executes
age = age_list[age_preds[0].argmax()]
I get the following error:
error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\dnn\src\layers\convolution_layer.cpp:282: error: (-2:Unspecified error) Number of input channels should be multiple of 3 but got 1 in function 'cv::dnn::ConvolutionLayerImpl::getMemoryShapes'
Can anyone help me solve this issue?
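For reference, the assertion ("got 1") suggests the blob fed to the network has a single channel, e.g. a photo loaded as grayscale. Here is a minimal sketch of the preprocessing with a guard that expands a grayscale input back to 3 channels; the input path is a placeholder, and the input size and mean values are the ones commonly used with these Caffe age/gender models (an assumption, so compare against the linked tutorial):

```python
import cv2

img = cv2.imread("photo.jpg")  # placeholder path
if img.ndim == 2 or img.shape[2] == 1:
    # the age net expects 3 input channels, so expand grayscale to BGR
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
blob = cv2.dnn.blobFromImage(img, 1.0, (227, 227),
                             (78.4263377603, 87.7689143744, 114.895847746),
                             swapRB=False)
age_net.setInput(blob)
age_preds = age_net.forward()
```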
↧
↧
Using a Logitech C920 with VideoCapture takes a long time to open the camera
Hi!
I have a Logitech C920 webcam that I want to use with OpenCV's VideoCapture functionality. However, using the Python example from [here](https://docs.opencv.org/4.1.0/dd/d43/tutorial_py_video_display.html) it takes nearly a minute before the image feed is displayed on the screen.
When I use the Logitech W10 application, it takes 5-6 seconds. What could be the problem?
I am using OpenCV 4.1.0 on a Windows 10 x64 Intel i7 laptop with 16 GB of RAM.
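In case it helps to narrow things down, a common cause on Windows is the default MSMF backend probing the device for a long time; explicitly requesting the DirectShow backend is a frequently suggested workaround (an assumption to test, not a guaranteed fix):

```python
import cv2

# Ask for the DirectShow backend instead of the default (MSMF) on Windows
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
```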
↧
OpenCV + Java face recognition code
Hello friends,
I have done face recognition using Java + OpenCV. My question is: how do I print only the distance variable?
while (runnable) {
    if (webSource.grab()) {
        try {
            webSource.retrieve(frame);
            Graphics g = jPanel1.getGraphics();
            faceDetector.detectMultiScale(frame, faceDetections);
            BufferedImage buff = null;
            Rect rect_Crop = null;
            for (Rect rect : faceDetections.toArray()) {
                Core.rectangle(frame, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height),
                        new Scalar(0, 255, 0));
                rect_Crop = new Rect(rect.x, rect.y, 200, 200);
            }
            Highgui.imencode(".bmp", frame, mem);
            Image im = ImageIO.read(new ByteArrayInputStream(mem.toArray()));
            buff = (BufferedImage) im;
            Mat image_roi = null;
            fileNameWithMaxValueOnly = null;
            if (rect_Crop != null) {
                image_roi = new Mat(frame, rect_Crop);
                //fileNameWithMaxValueOnly = RecgonizeNewData.RecgonizeNewDataBufferedImage(ImpageCropping.MatToBufferedImage(image_roi));
                DetectAndRecognize.this.setFileNameWithMaxValueOnly(RecgonizeNewData.RecgonizeNewDataBufferedImage(ImpageCropping.MatToBufferedImage(image_roi)));
                //System.out.println(RecgonizeNewData.RecgonizeNewDataBufferedImage(ImpageCropping.MatToBufferedImage(image_roi)));
                System.out.println("fileNameWithMaxValueOnly" + fileNameWithMaxValueOnly);
            }
            if (g.drawImage(buff, 0, 0, getWidth(), getHeight() - 150, 0, 0, buff.getWidth(), buff.getHeight(), null)) {
                if (runnable == false) {
                    System.out.println("Paused ..... ");
                    this.wait();
                }
            }
The result is logged like this:
Using cache: C:\eigenfacesCash\eigen.cache
Number of matching eigenfaces must be in the range (1-12); using 9
Matches image in C:\Imgs\1.png; distance = 0.5228
My question is: how do I print only the distance variable?
↧
Can I save H.264/5 data to MP4 without transcoding?
I would like to use OpenCV's VideoCapture/VideoWriter to write video stream data (H.264/5) to an MP4 file without encoding/decoding. I can do this with `ffmpeg` like so:
`$ ffmpeg -i <input> -vcodec copy -y -rtsp_transport tcp <output.mp4>`
How would I do the equivalent using OpenCV? There doesn't seem to be a "FourCC" option for "straight copy without transcoding".
BTW this has also come up for another user on [StackOverflow](https://stackoverflow.com/questions/55924776/can-i-use-the-stream-copy-of-ffmpeg-in-opencv-with-videowriter-class) with almost the same use case as mine.
Thanks in advance for any comments or suggestions, I appreciate your help.
↧
↧
I have a system with three cameras, with R and T matrices between C1 & C2 and between C2 & C3. How do I transform a point from the first camera to the third camera?
I have three cameras (C1, C2, C3). I have calibrated C1 & C2 as one stereo pair (System-1) and C2 & C3 as another stereo pair (System-2). As a result, I have the rotation and translation matrices between C1 & C2 and between C2 & C3. After a successful reconstruction, I have one 3D point, say P(X, Y, Z), in System-1. So my question is: how do I transform point P into the coordinates of all three cameras?
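A minimal sketch of chaining the two transforms, assuming the stereoCalibrate convention that (R12, T12) map camera-1 coordinates into camera-2 and (R23, T23) map camera-2 into camera-3 (the variable names are illustrative):

```python
import numpy as np

def c1_to_c3(P1, R12, T12, R23, T23):
    """Map a 3x1 point from camera-1 coordinates into camera-3 coordinates."""
    P2 = R12 @ P1 + T12   # camera-1 -> camera-2
    P3 = R23 @ P2 + T23   # camera-2 -> camera-3
    return P3

# The combined C1 -> C3 transform follows by substitution:
# R13 = R23 @ R12
# T13 = R23 @ T12 + T23
```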
↧
Trouble opening a model with "cv.dnn.readNetFromTensorflow()" that was created as a Keras model and converted to a TF .pb file
Environment:
Windows 10
Python 3.6
Tensorflow: 1.6.0
Keras: 2.2.4
OpenCV: 3.4
IDE: PyCharm Community edition 2018.1
Description: I have created a model from VGG16 and added my own layers on top for my classification problem. I saved the model and its weights, converted the model to .pb, and could run prediction using "sess.run".
However, I need to open this and run the prediction through OpenCV (as I ultimately need to run on .NET using the Emgu CV wrapper), but I am unable to open the model using "cv.dnn.readNetFromTensorflow()".
It gave the error:
"Process finished with exit code -1073741819 (0xC0000005)"
To debug, I used another pre-trained VGG16 model and tried to open it by following the steps given in [this article faithfully](https://answers.opencv.org/question/183507/opencv-dnn-import-error-for-keras-pretrained-vgg16-model/).
I removed the shape, stack, prod, and strided_slice nodes and connected the Reshape node properly to the pooling layer.
I got an error with that as well.
These were the warning and error messages when running the pre-trained VGG16 model:
[libprotobuf WARNING C:\projects\opencv-python\opencv\3rdparty\protobuf\src\google\protobuf\io\coded_stream.cc:605] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING C:\projects\opencv-python\opencv\3rdparty\protobuf\src\google\protobuf\io\coded_stream.cc:82] The total number of bytes read was 553443385
Process finished with exit code -1073741819 (0xC0000005)
It could not open the model, and it did not throw any exceptions.
Here is the code to generate the .pb file from Keras (borrowed from the [same article](https://answers.opencv.org/question/183507/opencv-dnn-import-error-for-keras-pretrained-vgg16-model/)):
from keras import applications
from keras import backend as K
import cv2 as cv
import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io

model = applications.VGG16(input_shape=(224, 224, 3), weights='imagenet', include_top=True)
print("output=", model.outputs)
print("input=", model.inputs)
K.set_learning_phase(0)
pred_node_names = [None]
pred = [None]
for i in range(1):
    pred_node_names[i] = "output_node" + str(i)
    pred[i] = tf.identity(model.outputs[i], name=pred_node_names[i])
sess = K.get_session()
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(),
                                                           pred_node_names)
graph_io.write_graph(constant_graph, ".", "modelTmp.pb", as_text=False)

# Read the graph.
with tf.gfile.FastGFile('modelTmp.pb', "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    # Restore session
    sess.graph.as_default()
    tf.import_graph_def(graph_def, name='')
    tf.summary.FileWriter('logs1', graph_def)
Ran "optimize_for_inference.py" creating a new output model "opt_modelTmp.pb"
Then created opt_modelTmp.pbtxt file
Then removed the shape, strided_slice, prod, stack nodes and connected the Reshape node properly.
Still got the error while opening the pbtxt and pb file using:
cvNet = cv.dnn.readNetFromTensorflow("model.pb", "model.pbtxt")
What is wrong? Is this due to memory? Should I change SetTotalBytesLimit()? Am I doing something wrong?
Thanks in advance for your help.
↧
Replace image background
Hi all,
I'm new to OpenCV and I was wondering if someone could point me in the right direction for a simple program I'm trying to write. Well, the project may not be that simple but it should be simple to explain!
Given the picture of a blocknote, extract the content of the page and put it on a white background.
For example, it transforms the image on the left into the one on the right:
[example image omitted]
This picture is a super simple scenario; the notebook does not have lines (which I would need to delete), but I don't need OCR or anything else.
In my mind I would write something like:
1. identify the rectangle page
2. delete everything outside the rectangle
3. identify the color of the page (frequency distribution of colors; the x% most common colors feed the next step)
4. replace the color of the page with "#ffffff" (e.g. the 10% most common colors)
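As one possible starting point, here is a rough sketch of those four steps with standard OpenCV calls. The file names are placeholders, and Otsu thresholding plus largest-contour selection are assumptions that only hold for simple scenes like the example:

```python
import cv2
import numpy as np

img = cv2.imread("notebook.jpg")  # placeholder input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. identify the page as the largest bright region (Otsu threshold)
_, page_bin = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(page_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
page = max(contours, key=cv2.contourArea)

# 2. delete everything outside the page
mask = np.zeros(gray.shape, np.uint8)
cv2.drawContours(mask, [page], -1, 255, cv2.FILLED)
result = img.copy()
result[mask == 0] = (255, 255, 255)

# 3-4. push near-paper pixels to pure white, keeping the darker ink
ink = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                            cv2.THRESH_BINARY, 15, 10)
result[(mask == 255) & (ink == 255)] = (255, 255, 255)
cv2.imwrite("output.png", result)
```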
Like I said, if you can point me in the right direction it would be greatly appreciated! I'm also open to collaboration if anyone is interested!
Thanks,
Luca
↧
Autonomous robot, line following and ArUco detection (Android/Java)
Hi guys, firstly let me thank you for this fantastic library, which I'm using in my final mechanical engineering degree project. Basically, I'm using a Lego EV3 Mindstorms robot to simulate a warehouse environment: it follows a black line, calculating the center of the line all the time, while detecting ArUco markers on the floor. I'm working on a small Android app in Android Studio, using my mobile phone camera as the robot's "eyes" and the OpenCV library to process every frame the camera takes. If the camera doesn't detect any ArUco marker, the robot follows the black line; if it detects a marker on the floor, it turns right/left or goes straight ahead until the next marker.
I've implemented all of this Java code inside the onCameraFrame() method, and I'm trying to find out whether there is a way to detect lines and markers at the same time, using parallel threads, AsyncTask, etc. I'm not a programmer, so I need some tips or guidance on making onCameraFrame() detect the center of the line and the ArUco markers at the same time. Right now the robot performs these steps sequentially:
1) Search for an ArUco marker on the floor
1.1) If an ArUco marker is detected: draw the marker, call SystemClock.sleep() (this freezes the camera), approach the marker slowly, and turn left or right depending on the ID of the marker
1.2) If no ArUco marker is detected, the robot follows the center of the black line
The problem I have is that the drawDetectedMarkers() method doesn't do its job, because the ArUco marker is not drawn until mRgba is returned at the end of the camera frame, so it will never be drawn on the frame until onCameraFrame() finishes. The code works like this:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    final Mat mRgba = inputFrame.rgba();
    detectAndDrawMarkers();
    if (corners.size() > 0) { // if the corners array is bigger than 0, an ArUco marker has been detected
        drawDetectedMarker(mRgba);
        turnID(arucoIDs); // inside this method there is a SystemClock.sleep(), freezing the screen and waiting until the robot finishes the movement
    } else {
        followLine(mRgba);
    }
    return mRgba;
}
So I need help with:
1) Some way to detect the black line and the ArUco marker at the same time.
2) Drawing the ArUco marker before the frame freezes.
Here is a screenshot of the frame. There are two ROIs: the green one detects the black line center, and the blue one detects the ArUco marker.
[screenshot omitted]
Thanks so much and best regards.
↧
↧
VideoCapture::open(const String& filename) blocks for 30 s when the camera is not running / On multiple threads, open() behaves like a single thread
Hello.
I am writing a multi-threaded program: a multi-camera viewer.
I have two questions.
I'm using OpenCV v3.4.4.
1.
When a camera is down, open(url) does not return immediately; it returns only after about 30 seconds.
How can I make this open() call non-blocking (i.e., return -1 immediately when the camera is not working)?
2.
I've created multiple threads for multiple viewers in one process,
so the threads try to connect at the same time using the VideoCapture::open(url) function.
But the returns from open() do not seem to be independent across threads:
- the 1st try takes 3 sec,
- the 2nd try takes 6 sec,
- ...
- the n-th try takes 3*n sec.
I expected the times to be similar, but they are not.
It is as if the 2nd connection only starts once the 1st connection has finished, and so on until the last connection.
Please help.
↧
cv2.error: (-215)
Hello, I have a problem with OpenCV. I wrote a program on a Raspberry Pi to detect a line, but I get a cv2.error: (-215).
from picamera.array import PiRGBArray
from picamera import PiCamera
import cv2
import time
import numpy as np
import video

camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# give the camera time to warm up
time.sleep(0.1)

if __name__ == '__main__':
    def nothing(*arg):
        pass

    # grab the picture from the camera
    cv2.namedWindow("result")
    hsv_min = np.array((0, 56, 218), np.uint8)
    hsv_max = np.array((255, 255, 255), np.uint8)

    while True:
        for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
            # take the numpy array that represents the picture, then run
            # the timestamp and occupied/unoccupied text
            image = frame.array
            hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
            lines = cv2.HoughLinesP(image, 1, np.pi/180, 180, 8, 3, 2)
            # clear the stream in preparation for the next frame
            rawCapture.truncate(0)
            thresh = cv2.inRange(hsv, hsv_min, hsv_max)
            cv2.imshow('result', thresh)
            # clear the stream in preparation for the next frame
            ch = cv2.waitKey(1)
The error is:
File "redline.py", line 30, in <module>
lines = cv2.HoughLinesP(image, 1, np.pi/180, 180,8, 3, 2)
cv2.error: OpenCV(4.1.0-dev) /home/pi/Downloads/opencv-master/modules/imgproc/src/hough.cpp:471: error: (-215:Assertion failed) image.type() == CV_8UC1 in function 'HoughLinesProbabilistic'
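A hedged reading of that assertion: `HoughLinesP` requires a single-channel 8-bit image (`CV_8UC1`), but the code passes the 3-channel BGR `image`. Passing a binary mask such as the `inRange()` result (computed before the call) would satisfy the assertion; the parameter values below just mirror the question and are not tuned:

```python
thresh = cv2.inRange(hsv, hsv_min, hsv_max)  # single-channel 8-bit mask
lines = cv2.HoughLinesP(thresh, 1, np.pi / 180, 180,
                        minLineLength=3, maxLineGap=2)
```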
↧
What is a contour?
When I process an image and find contours, it prints like this:
[{123.0, 144.0}, {123.0, 145.0}, {122.0, 146.0}, {121.0, 147.0}, {120.0, 148.0}, {119.0, 149.0}, {118.0, 150.0}, {117.0, 151.0}, {116.0, 152.0}, {115.0, 152.0}, {114.0, 152.0}, {114.0, 153.0}, {115.0, 154.0}, {116.0, 154.0}, {117.0, 153.0}, {118.0, 153.0}, {119.0, 153.0}, {120.0, 153.0}, {121.0, 153.0}, {122.0, 153.0}, {123.0, 153.0}, {124.0, 153.0}, {125.0, 153.0}, {126.0, 153.0}, {127.0, 153.0}, {128.0, 153.0}, {129.0, 153.0}, {130.0, 153.0}, {131.0, 152.0}, {132.0, 152.0}, {133.0, 152.0}, {134.0, 152.0}, {133.0, 152.0}, {132.0, 152.0}, {131.0, 151.0}, {130.0, 150.0}, {129.0, 149.0}, {128.0, 148.0}, {127.0, 147.0}, {127.0, 146.0}, {126.0, 145.0}, {125.0, 145.0}, {124.0, 144.0}]
I don't know what this is. Are these (x, y) coordinates in the image, or something else?
↧
Can I use cv::cuda::add for per-element addition to Mat?
I want to add 1 to all of the pixels in the Mat. In ordinary OpenCV code that is `result = mat + 1;`, and I want to write the same thing in OpenCV CUDA, like the following:
cv::cuda::add(cuda_Mat1, 1, cuda_ResultMat);
Can I use it like this?
Thank you for your time.
↧
↧
How to get stats from image segmented with watershed?
It appears that `connectedComponents` does not separate components that are divided by a single pixel dividing line.
This is an issue when trying to obtain region stats from an image segmented with the `watershed` method (ultimately I want to use `connectedComponentsWithStats`). Since `connectedComponents` does not handle already-labeled images, I binarize the image by setting the background and dividing lines to 0 and the labeled regions to 255. The resulting binary image (`segmented_bin`) has regions separated by a single-pixel-wide line.
[example image omitted]
Running `connectedComponents` on this image returns only 2 regions: the background and an aggregate of the foreground regions.
##### Steps to reproduce
Here is a sample code to reproduce the issue
```python
import cv2
[...] # Prepare markers as in https://docs.opencv.org/3.4/d3/db4/tutorial_py_watershed.html
segmented = cv2.watershed(img,markers)
segmented_bin = segmented.copy()
segmented_bin[segmented < 2] = 0 # -1 is dividing regions, no 0s, 1 is background
segmented_bin[segmented > 1] = 255 # all above 1 are distinct regions
num_labels, label_image = cv2.connectedComponents(segmented_bin.astype('uint8'), 8, cv2.CV_16U, cv2.CCL_GRANA)
```
Executing this code returns `num_labels = 2` instead of 27.
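One possible workaround (a sketch, not a confirmed fix): skip the binarize-and-relabel round trip entirely and compute the stats straight from the watershed label image with NumPy, since `watershed` already returns one integer label per region:

```python
import numpy as np

labels = np.unique(segmented)
for lab in labels[labels > 1]:          # skip -1 (dividing lines) and 1 (background)
    ys, xs = np.nonzero(segmented == lab)
    print(lab, "area:", xs.size,
          "bbox:", (xs.min(), ys.min(), xs.max(), ys.max()))
```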
↧
How can I create a Gaussian filter in CUDA that is larger than 32 taps and of double type?
This code:

    filter3 = cv::cuda::createGaussianFilter(CV_64FC1, CV_64FC1, cv::Size(1501, 1501), 250);

gives the error:

    OpenCV(3.4.1) Error: Assertion failed (rowFilter_ != 0) in `anonymous-namespace'::SeparableLinearFilter::SeparableLinearFilter, file c:\opencv\3.4.1\opencv-3.4.1\modules\cudafilters\src\filtering.cpp, line 413

and this code:

    filter1 = cv::cuda::createGaussianFilter(CV_8UC1, CV_8UC1, cv::Size(33, 33), 15);

gives the error:

    OpenCV(3.4.1) Error: Assertion failed (rowKernel_.cols > 0 && rowKernel_.cols <= 32) in `anonymous-namespace'::SeparableLinearFilter::SeparableLinearFilter, file c:\opencv\3.4.1\opencv-3.4.1\modules\cudafilters\src\filtering.cpp, line 404
↧
OpenCV Error: Image step is wrong (The matrix is not continuous, thus its number of rows can not be changed)
Hello everyone,
I get an error when I run code that reads about 4,500 images; the error occurs when reading the 455th image.
The error is:
> OpenCV Error: Image step is wrong (The matrix is not continuous, thus its number of rows can not be changed) in reshape, file /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/matrix.cpp, line 1102
> terminate called after throwing an instance of 'cv::Exception'
> what(): /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/matrix.cpp:1102: error: (-13) The matrix is not continuous, thus its number of rows can not be changed in function reshape
Could you give me some help? Thanks a lot!
The code is as follows:
#include "vo_features.h"
using namespace cv;
using namespace std;
#define MAX_FRAME 1000
#define MIN_NUM_FEAT 4540
int main( int argc, char** argv ) {
// we work with grayscale images
Mat img_1, img_2;
Mat R_f, t_f; //the final rotation and tranlation vectors containing the
ofstream myfile;
myfile.open ("results1_1.txt");
double scale = 1.00;
char filename1[200];
char filename2[200];
sprintf(filename1, "/home/zxf/vo/00/image_0/%06d.png", 0);
sprintf(filename2, "/home/zxf/vo/00/image_0/%06d.png", 1);
char text[100];
int fontFace = FONT_HERSHEY_PLAIN;
double fontScale = 1;
int thickness = 1;
Point textOrg(10, 50);
// we work with grayscale images
//read the first two frames from the dataset
img_1 = imread(filename1);
img_2 = imread(filename2);
if ( !img_1.data || !img_2.data ) {
std::cout<< " --(!) Error reading images " << std::endl; return -1;
}
// feature detection, tracking
vector points1, points2; //vectors to store the coordinates of the feature points
featureDetection(img_1, points1); //detect features in img_1
vector status;
featureTracking(img_1,img_2,points1,points2, status); //track those features to img_2
double focal = 718.8560;
Point2d pp(607.1928, 185.2157);
//recovering the pose and the essential matrix
Mat E, R, t, mask;
E = findEssentialMat(points2, points1, focal, pp, RANSAC, 0.999, 1.0, mask);
recoverPose(E, points2, points1, R, t, focal, pp, mask);
Mat prevImage = img_2;
Mat currImage;
vector prevFeatures = points2;
vector currFeatures;
char filename[100];
R_f = R.clone();
t_f = t.clone();
clock_t begin = clock();
namedWindow( "Road facing camera", WINDOW_AUTOSIZE );// Create a window for display.
namedWindow( "Trajectory", WINDOW_AUTOSIZE );// Create a window for display.
Mat traj = Mat::zeros(600, 600, CV_8UC3);
for(int numFrame=2; numFrame < MAX_FRAME; numFrame++) {
sprintf(filename, "/home/zxf/vo/00/image_0/%06d.png", numFrame);
cout << numFrame << endl;
Mat currImage_= imread(filename);
Mat patchMatTmp3;
currImage_.copyTo(patchMatTmp3);
currImage= patchMatTmp3.reshape(0,0);
vector status;
featureTracking(prevImage, currImage, prevFeatures, currFeatures, status);
E = findEssentialMat(currFeatures, prevFeatures, focal, pp, RANSAC, 0.999, 1.0, mask);
recoverPose(E, currFeatures, prevFeatures, R, t, focal, pp, mask);
Mat prevPts(2,prevFeatures.size(), CV_64F), currPts(2,currFeatures.size(), CV_64F);
for(int i=0;i(0,i) = prevFeatures.at(i).x;
prevPts.at(1,i) = prevFeatures.at(i).y;
currPts.at(0,i) = currFeatures.at(i).x;
currPts.at(1,i) = currFeatures.at(i).y;
}
scale = 1;//getAbsoluteScale(numFrame, 0, t.at(2));
if ((scale>0.1)&&(t.at(2) > t.at(0)) && (t.at(2) > t.at(1))) {
t_f = t_f + scale*(R_f*t);
R_f = R*R_f;
}
myfile << t_f.at(0) << " " << t_f.at(1) << " " << t_f.at(2) << endl;
if (prevFeatures.size() < MIN_NUM_FEAT) {
featureDetection(prevImage, prevFeatures);
featureTracking(prevImage,currImage,prevFeatures,currFeatures, status);
}
prevImage = currImage.clone();
prevFeatures = currFeatures;
int x = int(t_f.at(0)) + 300;
int y = int(t_f.at(2)) + 100;
circle(traj, Point(x, y) ,1, CV_RGB(255,0,0), 2);
rectangle( traj, Point(10, 30), Point(550, 50), CV_RGB(0,0,0), CV_FILLED);
sprintf(text, "Coordinates: x = %02fm y = %02fm z = %02fm", t_f.at(0), t_f.at(1), t_f.at(2));
putText(traj, text, textOrg, fontFace, fontScale, Scalar::all(255), thickness, 8);
imshow( "Road facing camera", currImage);
imshow( "Trajectory", traj );
waitKey(1);
}
clock_t end = clock();
double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
cout << "Total time taken: " << elapsed_secs << "s" << endl;
return 0;
}
↧
How to maintain two different OpenCV versions in Python under two different namespaces
Hi,
I am interested in maintaining two different OpenCV versions, 2.4.x and 3.0.0, under two different namespaces in Python 2.7.x. Is that even possible? If so, could someone please provide some hints?
↧
↧
Trouble installing to MSYS2 on Windows 10
I'm having problems with OpenCV with MSYS2 on Windows 10 x64 and Python 3.7. I can get OpenCV to install, but when I run a simple test script, it throws an exception.
$ python3 C:\\msys64\\home\\gpraceman\\gtk_test.py
C:/msys64/mingw64/lib/python3.7\importlib\_bootstrap.py:219: Warning: Numpy built with MINGW-W64 on Windows 64 bits is experimental, and only available for
testing. You are advised not to use it for production.
CRASHES ARE TO BE EXPECTED - PLEASE REPORT THEM TO NUMPY DEVELOPERS
return f(*args, **kwds)
Traceback (most recent call last):
File "C:\msys64\home\gpraceman\gtk_test.py", line 1, in
import cv2
File "C:/msys64/mingw64/lib\cv2\__init__.py", line 89, in
bootstrap()
File "C:/msys64/mingw64/lib\cv2\__init__.py", line 79, in bootstrap
import cv2
ImportError: DLL load failed: The specified module could not be found.
Below is how I installed OpenCV, which installs version 4.0.1. I also tried with pip, but it wouldn't install at all ("Could not find a version that satisfies the requirement opencv-python").
pacman -S mingw-w64-x86_64-opencv
I found the cv2 folder and the OpenCV DLLs under C:\msys64\mingw64\lib, and I added that location to my Windows path.
I also tried setting up Anaconda and was able to get OpenCV to work; however, I could not get Gtk3+ to install, and that is a requirement for my project. Gtk3+ works on MSYS2. I even tried copying the cv2 and opencv_python folders from site-packages in Anaconda over to MSYS2, but no joy. Copying the Gtk3+ files over to Anaconda, no joy again. So I cannot get either environment to fully work for my project.
EDIT: I've also tried setting up a Python 3.7 x86 environment and get the same error. Here's the output with debug turned on:
>>> import cv2
OpenCV loader: os.name="nt" platform.system()="Windows"
OpenCV loader: loading config: C:/msys64/mingw32/lib/cv2/config.py
OpenCV loader: loading config: C:/msys64/mingw32/lib/cv2/config-3.7.py
OpenCV loader: PYTHON_EXTENSIONS_PATHS=['C:/msys64/mingw32/lib/cv2/python-3.7']
OpenCV loader: BINARIES_PATHS=['C:/msys64/mingw32/lib/cv2/../../bin']
OpenCV loader: PATH=C:/msys64/mingw32/lib/cv2/../../bin;C:\msys64\mingw32\bin;C:\msys64\usr\local\bin;C:\msys64\usr\bin;C:\msys64\usr\bin;C:\Windows\System32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\msys64\usr\bin\site_perl;C:\msys64\usr\bin\vendor_perl;C:\msys64\usr\bin\core_perl;C:\msys64\mingw32\bin\
OpenCV loader: replacing cv2 module
Traceback (most recent call last):
File "", line 1, in
File "C:/msys64/mingw32/lib\cv2\__init__.py", line 90, in
bootstrap()
File "C:/msys64/mingw32/lib\cv2\__init__.py", line 80, in bootstrap
import cv2
ImportError: DLL load failed: The specified module could not be found.
↧
Why can't I use the resulting GpuMat from cv::cuda::Convolution::convolve()?
cv::cuda::GpuMat d_b1, d_b2, d_b3, d_addb;
cv::Ptr<cv::cuda::Convolution> convolver1 = cv::cuda::createConvolution();
convolver1->convolve(d_bdouble, gk1Mat, d_b1);
convolver1->convolve(d_bdouble, gk2Mat, d_b2);
convolver1->convolve(d_bdouble, gk1Mat, d_b3);
cv::cuda::add(d_b1, d_b2, d_addb); // I get the error on this line
I have no idea why this is happening. Can someone please help me?
↧
How do I fix “undefined reference to `cv::cuda::…” linker errors in Eclipse?
Disclaimer: I'm new to this whole Linux and CMake and GPU thing. But I've been using pre-built OpenCV in Windows for a few years.
I recently installed the latest versions of Ubuntu (18.04), OpenCV (4.1.0), Eclipse IDE (Version: 2019-03 (4.11.0)) and CUDA (10.1) for my new Precision 7730 with Quadro P5200. My Eclipse C++/CUDA toolchain project compiles and runs fine using just OpenCV syntax, first using a CMake build without CUDA and more recently a new CMake where I added CUDA support. I think I've followed all installation instructions properly, at least those I could find which deal with recent software versions including the CUDA 10.1 Toolkit.
But now I am trying to replace some of my cv:: functions with cv::cuda:: functions and am getting linker errors. I guess I am missing some libraries, or Eclipse does not know where they are. CMake appeared to make everything. What do I need to check, and where do I need to look to see what's missing?
All of the solutions I've seen posted are quite old so I can't figure out how to make them apply to my problem.
Here is my Eclipse output when I build the project:
10:42:05 **** Build of configuration Release for project CS3_intfc ****
make all
Building file: ../src/CS3_intfc.cpp
Invoking: NVCC Compiler
/usr/local/cuda-10.1/bin/nvcc -I/usr/local/include/opencv4 -O3 --use_fast_math -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -ccbin g++ -c -o "src/CS3_intfc.o" "../src/CS3_intfc.cpp"
Finished building: ../src/CS3_intfc.cpp

Building target: CS3_intfc
Invoking: NVCC linker
/usr/local/cuda-10.1/bin/nvcc --cudart=static -L/usr/local/lib -ccbin g++ -lGL -lGLU -lglut -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o "CS3_intfc" ./src/CS3_intfc.o -lopencv_core -lopencv_calib3d -lopencv_features2d -lopencv_dnn -lopencv_flann -lopencv_highgui -lopencv_imgcodecs -lopencv_imgproc -lopencv_ml -lopencv_objdetect -lopencv_photo -lopencv_shape -lopencv_stitching -lopencv_superres -lopencv_video -lopencv_videoio -lopencv_videostab -lpthread
./src/CS3_intfc.o: In function `main':
CS3_intfc.cpp:(.text.startup+0x561e): undefined reference to `cv::cuda::subtract(cv::_InputArray const&, cv::_InputArray const&, cv::_OutputArray const&, cv::_InputArray const&, int, cv::cuda::Stream&)'
CS3_intfc.cpp:(.text.startup+0x56c1): undefined reference to `cv::cuda::compare(cv::_InputArray const&, cv::_InputArray const&, cv::_OutputArray const&, int, cv::cuda::Stream&)'
CS3_intfc.cpp:(.text.startup+0x5768): undefined reference to `cv::cuda::compare(cv::_InputArray const&, cv::_InputArray const&, cv::_OutputArray const&, int, cv::cuda::Stream&)'
CS3_intfc.cpp:(.text.startup+0x5805): undefined reference to `cv::cuda::add(cv::_InputArray const&, cv::_InputArray const&, cv::_OutputArray const&, cv::_InputArray const&, int, cv::cuda::Stream&)'
CS3_intfc.cpp:(.text.startup+0x5843): undefined reference to `cv::cuda::sum(cv::_InputArray const&, cv::_InputArray const&)'
collect2: error: ld returned 1 exit status
makefile:24: recipe for target 'CS3_intfc' failed
make: *** [CS3_intfc] Error 1
"make all" terminated with exit code 2. Build might be incomplete.

10:42:09 Build Failed. 2 errors, 0 warnings. (took 4s.17ms)
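One observation on the link line above, offered as a guess rather than a confirmed fix: `cv::cuda::add`, `cv::cuda::subtract`, `cv::cuda::compare`, and `cv::cuda::sum` live in the `opencv_cudaarithm` module, and `-lopencv_cudaarithm` is not among the `-lopencv_*` flags passed to the NVCC linker, so the undefined references may simply mean that library (or `-lopencv_world`, since the CMake output below shows a world build) is missing from the Eclipse library list.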
Here is what I get with "echo $PATH" in a terminal:
/usr/local/cuda-10.1/bin:/usr/local/cuda-10.1/NsightCompute-2019.1:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
Here is the output of my CMake:
Looking for ccache - not found
Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found suitable version "1.2.11", minimum required is "1.2.3")
Could NOT find Jasper (missing: JASPER_LIBRARIES JASPER_INCLUDE_DIR)
Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.11")
Found OpenEXR: /usr/lib/x86_64-linux-gnu/libIlmImf.so
Checking for module 'gtk+-3.0'
No package 'gtk+-3.0' found
Checking for module 'gtkglext-1.0'
No package 'gtkglext-1.0' found
Found TBB (env): /usr/local/lib/libtbb.so
found Intel IPP (ICV version): 2019.0.0 [2019.0.0 Gold]
at: /home/jp/build/opencv/opencv/release/3rdparty/ippicv/ippicv_lnx/icv
found Intel IPP Integration Wrappers sources: 2019.0.0
at: /home/jp/build/opencv/opencv/release/3rdparty/ippicv/ippicv_lnx/iw
CUDA detected: 10.1
CUDA NVCC target flags: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-D_FORCE_INLINES;-gencode;arch=compute_75,code=compute_75
Could not find OpenBLAS include. Turning OpenBLAS_FOUND off
Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off
Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY)
A library with BLAS API not found. Please specify library location.
LAPACK requires BLAS
A library with LAPACK API not found. Please specify library location.
Could NOT find JNI (missing: JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH)
Could NOT find Pylint (missing: PYLINT_EXECUTABLE)
Could NOT find Flake8 (missing: FLAKE8_EXECUTABLE)
VTK is not found. Please set -DVTK_DIR in CMake to VTK build directory, or to VTK install subdirectory with VTKConfig.cmake file
OpenCV Python: during development append to PYTHONPATH: /home/jp/build/opencv/opencv/release/python_loader
Caffe: NO
Protobuf: NO
Glog: NO
freetype2: YES (ver 21.0.15)
harfbuzz: YES (ver 1.7.2)
Could NOT find HDF5 (missing: HDF5_LIBRARIES HDF5_INCLUDE_DIRS) (found version "")
Module opencv_ovis disabled because OGRE3D was not found
No preference for use of exported gflags CMake configuration set, and no hints for include/library directories provided. Defaulting to preferring an installed/exported gflags CMake configuration if available.
Failed to find installed gflags CMake configuration, searching for gflags build directories exported with CMake.
Failed to find gflags - Failed to find an installed/exported CMake configuration for gflags, will perform search for installed gflags components.
Failed to find gflags - Could not find gflags include directory, set GFLAGS_INCLUDE_DIR to directory containing gflags/gflags.h
Failed to find glog - Could not find glog include directory, set GLOG_INCLUDE_DIR to directory containing glog/logging.h
Module opencv_sfm disabled because the following dependencies are not found: Glog/Gflags
Processing WORLD modules...
module opencv_cudev...
module opencv_core...
module opencv_cudaarithm...
module opencv_flann...
module opencv_imgproc...
module opencv_ml...
module opencv_phase_unwrapping...
module opencv_plot...
module opencv_quality...
module opencv_reg...
module opencv_surface_matching...
module opencv_cudafilters...
module opencv_cudaimgproc...
module opencv_cudawarping...
module opencv_dnn...
module opencv_features2d...
module opencv_freetype...
module opencv_fuzzy...
module opencv_hfs...
module opencv_imgcodecs...
module opencv_line_descriptor...
module opencv_photo...
module opencv_saliency...
module opencv_videoio...
module opencv_xphoto...
module opencv_calib3d...
module opencv_cudacodec...
module opencv_cudafeatures2d...
module opencv_cudastereo...
module opencv_highgui...
module opencv_objdetect...
module opencv_rgbd...
module opencv_shape...
module opencv_structured_light...
module opencv_text...
Checking for module 'tesseract'
No package 'tesseract' found
Tesseract: NO
module opencv_video...
module opencv_xfeatures2d...
module opencv_ximgproc...
module opencv_xobjdetect...
module opencv_aruco...
module opencv_bgsegm...
module opencv_bioinspired...
module opencv_ccalib...
module opencv_cudabgsegm...
module opencv_cudalegacy...
module opencv_cudaobjdetect...
module opencv_datasets...
module opencv_dnn_objdetect...
module opencv_dpm...
module opencv_face...
module opencv_optflow...
module opencv_stitching...
module opencv_tracking...
module opencv_cudaoptflow...
module opencv_stereo...
module opencv_superres...
module opencv_videostab...
Processing WORLD modules... DONE
Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
OpenCL samples are skipped: OpenCL SDK is required
General configuration for OpenCV 4.1.0-dev =====================================
Version control: 4.1.0-153-gb2abd8ca4
Extra modules:
Location (extra): /home/jp/build/opencv/opencv_contrib/modules
Version control (extra): 4.1.0-26-g24cd5e21
Platform:
Timestamp: 2019-05-14T23:27:39Z
Host: Linux 4.18.0-18-generic x86_64
CMake: 3.10.2
CMake generator: Unix Makefiles
CMake build tool: /usr/bin/make
Configuration: RELEASE
CPU/HW features:
Baseline: SSE SSE2 SSE3
requested: SSE3
Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX AVX2 AVX512_SKX
requested: SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
SSE4_1 (14 files): + SSSE3 SSE4_1
SSE4_2 (2 files): + SSSE3 SSE4_1 POPCNT SSE4_2
FP16 (1 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
AVX (5 files): + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
AVX2 (28 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
AVX512_SKX (2 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 AVX_512F AVX512_COMMON AVX512_SKX
C/C++:
Built as dynamic libs?: YES
C++ Compiler: /usr/bin/c++ (ver 7.4.0)
C++ flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C Compiler: /usr/bin/cc
C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Linker flags (Release): -Wl,--gc-sections
Linker flags (Debug): -Wl,--gc-sections
ccache: NO
Precompiled headers: NO
Extra dependencies: m pthread cudart_static -lpthread dl rt nppc nppial nppicc nppicom nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cufft -L/usr/local/cuda/lib64 -L/usr/lib/x86_64-linux-gnu
3rdparty dependencies:
OpenCV modules:
To be built: aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dpm face features2d flann freetype fuzzy hfs highgui img_hash imgcodecs imgproc line_descriptor ml objdetect optflow phase_unwrapping photo plot quality reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab world xfeatures2d ximgproc xobjdetect xphoto
Disabled: gapi
Disabled by dependency: -
Unavailable: cnn_3dobj cvv hdf java js matlab ovis python2 python3 sfm viz
Applications: tests perf_tests examples apps
Documentation: NO
Non-free algorithms: YES
GUI:
GTK+: YES (ver 2.24.32)
GThread : YES (ver 2.56.4)
GtkGlExt: NO
OpenGL support: NO
VTK support: NO
Media I/O:
ZLib: /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.11)
JPEG: /usr/lib/x86_64-linux-gnu/libjpeg.so (ver 80)
WEBP: build (ver encoder: 0x020e)
PNG: /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.6.34)
TIFF: /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 / 4.0.9)
JPEG 2000: build (ver 1.900.1)
OpenEXR: /usr/lib/x86_64-linux-gnu/libImath.so /usr/lib/x86_64-linux-gnu/libIlmImf.so /usr/lib/x86_64-linux-gnu/libIex.so /usr/lib/x86_64-linux-gnu/libHalf.so /usr/lib/x86_64-linux-gnu/libIlmThread.so (ver 2.2.0)
HDR: YES
SUNRASTER: YES
PXM: YES
PFM: YES
Video I/O:
DC1394: YES (2.2.5)
FFMPEG: YES
avcodec: YES (57.107.100)
avformat: YES (57.83.100)
avutil: YES (55.78.100)
swscale: YES (4.8.100)
avresample: YES (3.7.0)
GStreamer: YES (1.14.1)
v4l/v4l2: YES (linux/videodev2.h)
Parallel framework: TBB (ver 2017.0 interface 9107)
Trace: YES (with Intel ITT)
Other third-party libraries:
Intel IPP: 2019.0.0 Gold [2019.0.0]
at: /home/jp/build/opencv/opencv/release/3rdparty/ippicv/ippicv_lnx/icv
Intel IPP IW: sources (2019.0.0)
at: /home/jp/build/opencv/opencv/release/3rdparty/ippicv/ippicv_lnx/iw
Lapack: NO
Eigen: YES (ver 3.3.4)
Custom HAL: NO
Protobuf: build (3.5.1)
NVIDIA CUDA: YES (ver 10.1, CUFFT CUBLAS FAST_MATH)
NVIDIA GPU arch: 30 35 37 50 52 60 61 70 75
NVIDIA PTX archs: 75
OpenCL: YES (no extra features)
Include path: /home/jp/build/opencv/opencv/3rdparty/include/opencl/1.2
Link libraries: Dynamic load
Python (for build): /usr/bin/python2.7
Java:
ant: NO
JNI: NO
Java wrappers: NO
Java tests: NO
Install to: /usr/local
-----------------------------------------------------------------
Configuring done
Generating done
Note: I had to use "-D WITH_NVCUVID=OFF" to get around a CMake error I encountered; this is a solution I found. Is this my problem?
My hunch is that something still needs to be done in Eclipse. Can someone please help? Thanks.
↧