Channel: OpenCV Q&A Forum - RSS feed

Part of a segment lying inside a contour

I have a line segment (defined by the coordinates of start and end points), and a closed convex contour. How can I find the part of the segment which is inside the contour? The most direct approach I see would be to check intersections of my segment with each of the contour's segments - I'm wondering if there's any better solution.
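Since the contour is convex, a general segment-vs-segment intersection sweep isn't necessary: a Cyrus–Beck style parametric clip against each edge's half-plane does it in one O(n) pass. A minimal sketch (assumptions: the contour is ordered counter-clockwise and stored as float points; untested):

    #include <opencv2/core.hpp>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Clips segment [a,b] to a convex CCW polygon. Returns false if no part
    // of the segment lies inside; otherwise writes the clipped endpoints.
    bool clipSegmentToConvexPoly(const std::vector<cv::Point2f>& poly,
                                 cv::Point2f a, cv::Point2f b,
                                 cv::Point2f& clippedA, cv::Point2f& clippedB)
    {
        const cv::Point2f d = b - a;          // parametrize P(t) = a + t*d, t in [0,1]
        double tMin = 0.0, tMax = 1.0;
        for (size_t i = 0; i < poly.size(); ++i)
        {
            const cv::Point2f e = poly[(i + 1) % poly.size()] - poly[i];
            const cv::Point2f n(-e.y, e.x);   // inward normal for a CCW polygon
            const double denom = n.dot(d);
            const double num   = n.dot(poly[i] - a);
            if (std::abs(denom) < 1e-12) {
                if (num > 0) return false;    // parallel to edge and fully outside it
            } else {
                const double t = num / denom;
                if (denom > 0) tMin = std::max(tMin, t);  // entering the half-plane
                else           tMax = std::min(tMax, t);  // leaving the half-plane
            }
        }
        if (tMin > tMax) return false;        // segment misses the polygon entirely
        clippedA = a + static_cast<float>(tMin) * d;
        clippedB = a + static_cast<float>(tMax) * d;
        return true;
    }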

Adding a HOG feature to the KCF tracker

Hello, I want to add HOG features to my KCF tracker. How can I do this? I searched a bit and found "[setFeatureExtractor()](https://docs.opencv.org/3.4/d2/dff/classcv_1_1TrackerKCF.html#addc69a1f46fb1b037438802c90bf640a)". How can I implement it?
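For what it's worth, a minimal sketch of the setFeatureExtractor() route, modelled on the tracking module's sample: the extractor below only produces Sobel gradient channels, and swapping in real per-pixel HOG orientation channels is the part you would still have to write. The exact Params wiring is an assumption to verify against the sample; KCF expects feat to have the same size as roi.

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/tracking.hpp>

    using namespace cv;

    // Custom feature extractor: KCF calls this with the full frame plus a
    // (possibly padded) roi and expects a roi-sized multi-channel float Mat.
    // Border handling is omitted for brevity; the official sample pads with
    // copyMakeBorder when roi reaches outside the frame.
    static void gradientExtractor(const Mat img, const Rect roi, Mat& feat)
    {
        Mat gray, patch, sobel[2];
        cvtColor(img, gray, COLOR_BGR2GRAY);
        gray(roi).copyTo(patch);                  // assumes roi lies inside the frame
        Sobel(patch, sobel[0], CV_32F, 1, 0, 1);
        Sobel(patch, sobel[1], CV_32F, 0, 1, 1);
        merge(sobel, 2, feat);
        feat = feat / 255.0 - 0.5;                // normalize roughly to [-0.5, 0.5]
    }

    int main()
    {
        TrackerKCF::Params params;
        params.desc_npca = TrackerKCF::CUSTOM;    // plug the custom descriptor in
        params.desc_pca  = TrackerKCF::GRAY | TrackerKCF::CN;
        Ptr<TrackerKCF> tracker = TrackerKCF::create(params);
        tracker->setFeatureExtractor(gradientExtractor);
        // tracker->init(frame, bbox); tracker->update(frame, bbox); ...
        return 0;
    }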

Simple logo Haar cascade I created causes tons of false positives.

So I took facedetect.py and made a custom Haar cascade to detect a simple logo. For the sake of simplicity, it's something like the Gap logo on the left here: http://adweek.blogs.com/.a/6a00d8341c51c053ef0133f4e33f2a970b-pi I cropped an image to the exact bounds of the logo (the blue box on the left) and generated samples from it:

> opencv_createsamples -img gaplogo.png -num 1000 -vec hb.vec

(it actually wouldn't let me do more than 1000). Then I downloaded 2600 random images, just everything under the sun: buildings, people, desktop screenshots (I tried those because it kept triggering on the desktop), landscapes, plants. You name it, I downloaded it. From what I read, the negatives are just supposed to be images that do not contain the thing. Then I built my cascade:

> opencv_traincascade -data data -vec hb.vec -bg neg.txt -numPos 999 -numNeg 2500 -numStages 10

It only makes it through about 3 stages before declaring it's done and writing the cascade file. Instead of a webcam, I feed in a stream of my desktop, and it tries to draw boxes on nothing. Ideally, when I bring up the logo, and only the logo, it should draw a box around it; instead it fires on random stuff. I thought something as simple as this logo would be a slam dunk for easy identification.
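A couple of hedged knobs that often matter here (assumptions to verify, not a guaranteed fix): give both tools an explicit window size, keep -numPos safely below the vec count, and tighten the per-stage false alarm rate. Training that stops after ~3 stages usually means the per-stage rates were already met, which with a single synthetic positive source tends to signal overfitting rather than success. Roughly:

> opencv_createsamples -img gaplogo.png -num 1000 -vec hb.vec -w 24 -h 24
> opencv_traincascade -data data -vec hb.vec -bg neg.txt -numPos 900 -numNeg 2500 -numStages 15 -w 24 -h 24 -maxFalseAlarmRate 0.4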

Number plate recognition using Tesseract

I am planning to perform OCR on Indian number plates. I used the Tesseract 4.0 beta, which uses an LSTM engine for OCR, but the recognized characters are not coming out correct. I used cv2.Laplacian() to pick out images without blur and performed noise reduction with cv2.fastNlMeansDenoisingColored() on the image. Still, the results are not very accurate. 1) Can you suggest what other preprocessing techniques I should apply to enhance the image? 2) Can we constrain Tesseract so that no special characters are detected? Test images look like these. ![image description](/upfiles/15380339035442715.jpg) ![image description](/upfiles/153803342378378.jpg) Detected characters: *DLBCAUS368 “HR 10¥5803 Thanks in advance
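On question 2: as far as I know, Tesseract 4.0's LSTM engine ignores tessedit_char_whitelist (the whitelist works with the legacy engine, --oem 0, and LSTM support for it returned in 4.1), so suppressing special characters may require the legacy engine or a newer Tesseract. On question 1, a hedged preprocessing sketch; the exact numbers are assumptions to tune, not a proven recipe:

    // Sketch: upscale so glyphs are ~30 px tall, denoise while keeping edges,
    // then binarize. Tesseract generally does better on clean binary input.
    #include <opencv2/imgproc.hpp>

    cv::Mat preprocessPlate(const cv::Mat& bgr)
    {
        cv::Mat gray, resized, filtered, binary;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
        cv::resize(gray, resized, cv::Size(), 2.0, 2.0, cv::INTER_CUBIC);
        cv::bilateralFilter(resized, filtered, 9, 75, 75);   // edge-preserving denoise
        cv::threshold(filtered, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
        return binary;
    }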

Remove holes in an image without affecting the borders

Following is an example image. I would like to fill all the black holes in the image with white. I could use a median filter to do it, but that would correspondingly also affect the borders. Any idea how to do this? ![image description](/upfiles/15380475471920248.png)
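One standard trick that fills dark holes without a blur's side effects is morphological closing: the structuring element size bounds how large a hole gets filled, while larger structures (the borders) survive. A sketch, assuming a grayscale or binary input; the kernel size is an assumption to tune to the largest hole:

    #include <opencv2/imgproc.hpp>

    cv::Mat fillSmallHoles(const cv::Mat& img)
    {
        // kernel slightly larger than the holes to be removed
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7));
        cv::Mat closed;
        cv::morphologyEx(img, closed, cv::MORPH_CLOSE, kernel);  // dilate then erode
        return closed;
    }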

Static compilation of OpenCV with an extra module

Hi all, I would like to build OpenCV with a single extra module, in order to build a specific application for Linux and Windows. How can I achieve this? I am on Debian Linux with OpenCV 3.4, and I would like to build my application against that extra module to get a self-contained executable for each of Linux and Windows. Thanks for your help and support. Cheers,
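A hedged sketch of the usual CMake invocation (the paths are assumptions for a layout with opencv and opencv_contrib checked out side by side, and BUILD_LIST, which trims the build to the named modules, needs a recent 3.4.x). One note: a single executable cannot run on both Linux and Windows; you build the same configuration once per platform and link statically on each:

> cmake -DBUILD_SHARED_LIBS=OFF -DOPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules -DBUILD_LIST=core,imgproc,highgui,tracking ../opencv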

Having trouble using a Mat image in a pictureBox in a Visual C++ CLR project

    Mat imgs = imread("d:/s.jpg");
    cvtColor(imgs, imgs, CV_BGR2RGB, 1);
    this->pictureBox1->Image = gcnew System::Drawing::Bitmap(
        imgs.cols, imgs.rows, (int)imgs.step.p[0],
        System::Drawing::Imaging::PixelFormat::Format24bppRgb,
        (System::IntPtr)imgs.data);

It is not working; it only shows some white and gray lines. Please help me, thanks.
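One plausible cause of the striping (an assumption, not a certain diagnosis): that Bitmap constructor requires the stride to be a multiple of 4 bytes, while a 3-channel Mat row is only that when cols * 3 is. A sketch that pads the Mat so the rows align; note the Mat must also outlive the Bitmap, because the Bitmap borrows its buffer:

    cv::Mat rgb = cv::imread("d:/s.jpg");
    cv::cvtColor(rgb, rgb, CV_BGR2RGB);
    // Padding by (cols * 3) % 4 pixels adds 3 * that many bytes per row,
    // which always lands the row length back on a multiple of 4.
    int padPixels = (rgb.cols * 3) % 4;
    if (padPixels != 0)
        cv::copyMakeBorder(rgb, rgb, 0, 0, 0, padPixels, cv::BORDER_REPLICATE);
    // 'rgb' must stay alive for as long as the Bitmap is displayed
    this->pictureBox1->Image = gcnew System::Drawing::Bitmap(
        rgb.cols, rgb.rows, (int)rgb.step.p[0],
        System::Drawing::Imaging::PixelFormat::Format24bppRgb,
        (System::IntPtr)rgb.data);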

How to make faint text bolder

The text in my image is faint and broken in some places. How can I make it bolder, and how can I join the broken regions of the letters? (red circles in the second image) ![Image](/upfiles/15380615638920665.png) ![I want to join these broken regions](/upfiles/15380616978963072.png)
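Morphological dilation thickens thin strokes and bridges small gaps, and closing (dilate then erode) joins nearby fragments with less overall thickening. A sketch, assuming the image has been binarized with the text as white foreground (invert first if the text is dark on light):

    #include <opencv2/imgproc.hpp>

    cv::Mat boldenText(const cv::Mat& binaryTextWhiteOnBlack)
    {
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
        cv::Mat out;
        cv::dilate(binaryTextWhiteOnBlack, out, kernel);   // thicken strokes
        cv::morphologyEx(out, out, cv::MORPH_CLOSE, kernel); // bridge remaining gaps
        return out;
    }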

Why does OpenCV not support the OpenCL 1.1 Embedded Profile?

I would like to have a technical and complete answer to this question. Thanks

Can cv::dft() be sped up with the right compiler flags?

For some time I have been using cv::dft() on a large image, and it always took about 4-5 seconds. It now takes about 30 seconds for the same image, and I wonder why. I recently recompiled OpenCV without having saved the original compiler flags, so maybe I am now missing a flag that speeds up the dft function? This is the current build configuration:

    General configuration for OpenCV 3.2.0 =====================================
      Version control:               unknown

      Extra modules:
        Location (extra):            /home/uname/opencv_contrib-3.2.0/modules
        Version control (extra):     unknown

      Platform:
        Timestamp:                   2018-09-17T15:22:43Z
        Host:                        Linux 4.4.0-135-generic x86_64
        CMake:                       3.5.1
        CMake generator:             Unix Makefiles
        CMake build tool:            /usr/bin/make
        Configuration:               RELEASE

      C/C++:
        Built as dynamic libs?:      YES
        C++ Compiler:                /usr/bin/c++ (ver 5.4.0)
        C++ flags (Release):         -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
        C++ flags (Debug):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
        C Compiler:                  /usr/bin/cc
        C flags (Release):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
        C flags (Debug):             -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffast-math -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
        Linker flags (Release):
        Linker flags (Debug):
        ccache:                      NO
        Precompiled headers:         YES
        Extra dependencies:          /home/uname/anaconda3/lib/libpng.so /home/uname/anaconda3/lib/libtiff.so /usr/lib/x86_64-linux-gnu/libjasper.so /home/uname/anaconda3/lib/libjpeg.so gtk-3 gdk-3 pangocairo-1.0 pango-1.0 atk-1.0 cairo-gobject cairo gdk_pixbuf-2.0 gio-2.0 gobject-2.0 glib-2.0 gthread-2.0 avcodec-ffmpeg avformat-ffmpeg avutil-ffmpeg swscale-ffmpeg /home/uname/anaconda3/lib/libhdf5_hl.so /home/uname/anaconda3/lib/libhdf5.so /usr/lib/x86_64-linux-gnu/librt.so /usr/lib/x86_64-linux-gnu/libpthread.so /home/uname/anaconda3/lib/libz.so /usr/lib/x86_64-linux-gnu/libdl.so /usr/lib/x86_64-linux-gnu/libm.so dl m pthread rt cudart nppc nppial nppicc nppicom nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cufft -L/usr/local/cuda/lib64
        3rdparty dependencies:       libwebp IlmImf libprotobuf

      OpenCV modules:
        To be built:                 cudev core cudaarithm flann hdf imgproc ml reg surface_matching video cudabgsegm cudafilters cudaimgproc cudawarping dnn freetype fuzzy imgcodecs photo shape videoio cudacodec highgui objdetect plot ts xobjdetect xphoto bgsegm bioinspired dpm face features2d line_descriptor saliency text calib3d ccalib cudafeatures2d cudalegacy cudaobjdetect cudaoptflow cudastereo datasets rgbd stereo superres tracking videostab xfeatures2d ximgproc aruco optflow phase_unwrapping stitching structured_light python2 python3
        Disabled:                    world contrib_world
        Disabled by dependency:      -
        Unavailable:                 java viz cnn_3dobj cvv matlab sfm

      GUI:
        QT:                          NO
        GTK+ 3.x:                    YES (ver 3.18.9)
        GThread:                     YES (ver 2.48.2)
        GtkGlExt:                    NO
        OpenGL support:              NO
        VTK support:                 NO

      Media I/O:
        ZLib:                        /home/uname/anaconda3/lib/libz.so (ver 1.2.8)
        JPEG:                        /home/uname/anaconda3/lib/libjpeg.so (ver 80)
        WEBP:                        build (ver 0.3.1)
        PNG:                         /home/uname/anaconda3/lib/libpng.so (ver 1.6.28)
        TIFF:                        /home/uname/anaconda3/lib/libtiff.so (ver 42 - 4.0.6)
        JPEG 2000:                   /usr/lib/x86_64-linux-gnu/libjasper.so (ver 1.900.1)
        OpenEXR:                     build (ver 1.7.1)
        GDAL:                        NO
        GDCM:                        NO

      Video I/O:
        DC1394 1.x:                  NO
        DC1394 2.x:                  NO
        FFMPEG:                      YES
          avcodec:                   YES (ver 56.60.100)
          avformat:                  YES (ver 56.40.101)
          avutil:                    YES (ver 54.31.100)
          swscale:                   YES (ver 3.1.101)
          avresample:                NO
        GStreamer:                   NO
        OpenNI:                      NO
        OpenNI PrimeSensor Modules:  NO
        OpenNI2:                     NO
        PvAPI:                       NO
        GigEVisionSDK:               NO
        Aravis SDK:                  NO
        UniCap:                      NO
        UniCap ucil:                 NO
        V4L/V4L2:                    NO/YES
        XIMEA:                       NO
        Xine:                        NO
        gPhoto2:                     NO

      Parallel framework:            pthreads

      Other third-party libraries:
        Use IPP:                     NO
        Use IPP Async:               NO
        Use VA:                      NO
        Use Intel VA-API/OpenCL:     NO
        Use Lapack:                  NO
        Use Eigen:                   NO
        Use Cuda:                    YES (ver 9.1)
        Use OpenCL:                  YES
        Use OpenVX:                  NO
        Use custom HAL:              NO

      NVIDIA CUDA
        Use CUFFT:                   YES
        Use CUBLAS:                  YES
        USE NVCUVID:                 NO
        NVIDIA GPU arch:             20 30 35 37 50 52 60 61
        NVIDIA PTX archs:
        Use fast math:               YES

      OpenCL:
        Include path:                /home/uname/opencv-3.2.0/3rdparty/include/opencl/1.2
        Use AMDFFT:                  NO
        Use AMDBLAS:                 NO

      Python 2:
        Interpreter:                 /usr/bin/python2.7 (ver 2.7.12)
        Libraries:                   /usr/lib/x86_64-linux-gnu/libpython2.7.so (ver 2.7.12)
        numpy:                       /usr/local/lib/python2.7/dist-packages/numpy/core/include (ver 1.13.0)
        packages path:               lib/python2.7/dist-packages

      Python 3:
        Interpreter:                 /home/uname/anaconda3/bin/python3 (ver 3.5.2)
        Libraries:                   /usr/lib/x86_64-linux-gnu/libpython3.5m.so (ver 3.5.2)
        numpy:                       /home/uname/anaconda3/lib/python3.5/site-packages/numpy/core/include (ver 1.12.1)
        packages path:               lib/python3.5/site-packages

      Python (for build):            /usr/bin/python2.7

      Java:
        ant:                         NO
        JNI:                         /usr/lib/jvm/default-java/include /usr/lib/jvm/default-java/include/linux /usr/lib/jvm/default-java/include
        Java wrappers:               NO
        Java tests:                  NO

      Matlab:                        Matlab not found or implicitly disabled

      Documentation:
        Doxygen:                     /usr/bin/doxygen (ver 1.8.11)

      Tests and samples:
        Tests:                       YES
        Performance tests:           YES
        C/C++ Examples:              YES
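Two entries in that dump stand out as hedged suspects: "Use IPP: NO" and "Use Eigen: NO". IPP in particular provides an accelerated DFT path, so a build configured without it is a plausible cause of a slower cv::dft(). For comparing builds directly from code, the runtime equivalents of this dump are:

    // Print the same configuration report from a running program, and make
    // sure the optimized code paths have not been switched off.
    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        std::cout << cv::getBuildInformation() << std::endl;
        std::cout << "optimized: " << cv::useOptimized() << std::endl;
        cv::setUseOptimized(true);   // re-enable SSE/IPP dispatch if it was off
        return 0;
    }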

Why does video.set(CAP_PROP_POS_FRAMES, ...) not align with the frame number?

I have a video that is 2:12 long according to QuickTime on macOS (10.14 Mojave). I have the following code:

    import cv2

    vid = cv2.VideoCapture("video.mov")
    length = int(vid.get(cv2.CAP_PROP_FRAME_COUNT))  # = 3953
    fps = int(vid.get(cv2.CAP_PROP_FPS))             # = 29

    def frame_set(index):
        success = vid.set(cv2.CAP_PROP_POS_FRAMES, index)
        success, img = vid.read()
        return img

    def frame_walk(index):
        success = vid.set(cv2.CAP_PROP_POS_FRAMES, 0)
        for i in range(index):
            vid.read()
        success, img = vid.read()
        return img

    sum(abs(frame_set(0) - frame_walk(0)))    # = 0
    sum(abs(frame_set(29) - frame_walk(29)))  # = 0
    sum(abs(frame_set(30) - frame_walk(30)))  # = <---- PROBLEM, mismatch

    frame_set(3953 - 128)   # =
    frame_set(3953 - 127)   # = None <---- PROBLEM, should be a valid image
    frame_set(3952)         # = None <---- PROBLEM, should be a valid image
    frame_walk(3953 - 127)  # = <---- correct answer
    frame_walk(3952)        # = <---- correct answer

Clearly a misalignment in the "frame index" method starts as soon as one second has elapsed in the video. The OpenCV .set method is not actually seeking to the correct frame, while the more cumbersome "walk" method works just fine. Am I doing something wrong here? This looks like a bug in the OpenCV codebase, because the frame count divided by the fps gives a 2 minute 16 second video, when QuickTime correctly reports a 2 minute 12 second video. That difference accounts for the last 127 frames being dropped by the .set method.

How to avoid detecting small and blurry faces with a Haar cascade?

Hi, I'm making a facial recognition program that uses an IP cam to recognize people in my office. I put the camera near the entrance door. The program uses OpenCV and a Haar cascade to detect faces, and once a face is detected it is sent to AWS Rekognition to be recognized. However, the program keeps capturing blurry and small faces. The small, blurry images come from people entering the camera frame and moving towards the camera; by the time the recognition process finishes, the person has already left the camera view. So my question is: is there any way in OpenCV or the Haar cascade to set a minimum width for the faces it detects?
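Yes: detectMultiScale() takes a minSize parameter, so detections below a chosen pixel size are never returned. A short sketch (the 80x80 is an assumption to tune to your doorway distance); pairing it with a simple blur gate on the crop, such as the variance of its Laplacian, is a common companion check before calling Rekognition:

    #include <opencv2/objdetect.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    std::vector<cv::Rect> detectLargeFaces(cv::CascadeClassifier& cascade,
                                           const cv::Mat& frame)
    {
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);
        std::vector<cv::Rect> faces;
        cascade.detectMultiScale(gray, faces, 1.1, 3, 0,
                                 cv::Size(80, 80));  // minSize: skip faces smaller than 80x80 px
        return faces;
    }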

[Asking for solution! No code!] Column detection with/without stereo vision

Hello guys, I am sorry to ask without code. I tried to calibrate my stereo fisheyes and failed; I tried for a week to do it, and the rectification and stereo calibration come out bad somehow. Anyway, my plan is for drones. A drone will have one or two cameras and will fly over a vineyard. As you know, a vineyard is basically laid out in columns. So when flying over a column, I need to know that I am centered over it; otherwise it means I need to go left or right. Example: [Columns](https://drive.google.com/open?id=1gVbgVoizSDdkb8Elv7I0cFKTr-yGquia) If you can give me some hints I'd be so grateful. I think that if I had a depth image I could take gradients to see whether my rotation and position are correct. Or I could do some segmentation, split the images, and compare gradients. It is so confusing. I don't want code, I just want some ideas. Thanks to all, have a great day.

How to handle OpenCV warnings?

I've tried to make an error handler for when a user provides an invalid URL for an IP cam:

    try:
        cam = cv2.VideoCapture(user_config.CAMERA_IP)
    except Exception, e:
        print "Unable to access camera"

However, OpenCV outputs a warning instead of raising an error:

> warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:856)
> warning: http://admin:admin123@192.168.0.20:80/video.cgi (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:857)

so it never gets caught. Is there a way to handle this warning?
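VideoCapture does not throw on a bad URL; it just constructs in a failed state while the FFmpeg warning goes to stderr. The supported check is isOpened(), shown here as a C++ sketch (the Python form is the same: `if not cam.isOpened(): ...`):

    #include <opencv2/videoio.hpp>
    #include <iostream>

    int main()
    {
        // failure is reported through the object's state, not an exception
        cv::VideoCapture cam("http://admin:admin123@192.168.0.20:80/video.cgi");
        if (!cam.isOpened()) {
            std::cerr << "Unable to access camera" << std::endl;
            return 1;
        }
        return 0;
    }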

How deterministic is grabCut?

Hello everyone,

When running grabCut on the same input in a loop, I get different output on every iteration; but if I restart the program, the outputs, although they still differ within the loop, match the results of the corresponding iterations from the previous run. I know that grabCut uses some singletons that live for the whole lifetime of the process, but still: why do separate runs of the algorithm within one process interfere with each other, and what can I do to get around that? I need some level of determinism from the algorithm. Should I restart the program after every single run? That does not make much sense...

I checked it with a unit test. On one run I save the results to disk like this:

    TEST(PreprocessingTests, CutAndFilterSave)
    {
        auto original_path = std::experimental::filesystem::current_path();
        std::experimental::filesystem::current_path("../tests/data/");
        std::string output_directory = "cutAndFilterPreviousRun";
        std::experimental::filesystem::create_directory(output_directory);

        cv::Mat hidden_mouth_image = cv::imread("hidden_mouth.png", cv::IMREAD_GRAYSCALE);
        EXPECT_TRUE(hidden_mouth_image.data);
        cv::Mat whole_face_image = cv::imread("whole_face.png", cv::IMREAD_GRAYSCALE);
        EXPECT_TRUE(whole_face_image.data);

        std::vector<cv::Mat> images_to_save;
        images_to_save.emplace_back(hidden_mouth_image);
        images_to_save.emplace_back(whole_face_image);

        processImagesForComparison(images_to_save);

        cv::imwrite(output_directory + "/hidden_mouth_filtered.png", images_to_save.at(0));
        cv::imwrite(output_directory + "/whole_face_filtered.png", images_to_save.at(1));

        std::experimental::filesystem::current_path(original_path);
    }

And on a second run I run this function:

    TEST(PreprocessingTests, CutAndFilterRead)
    {
        auto original_path = std::experimental::filesystem::current_path();
        std::experimental::filesystem::current_path("../tests/data/");
        std::string output_directory = "cutAndFilterPreviousRun";
        if (std::experimental::filesystem::exists(output_directory))
        {
            cv::Mat hidden_mouth_image_filtered = cv::imread(output_directory + "/hidden_mouth_filtered.png", cv::IMREAD_GRAYSCALE);
            EXPECT_TRUE(hidden_mouth_image_filtered.data);
            cv::Mat whole_face_image_filtered = cv::imread(output_directory + "/whole_face_filtered.png", cv::IMREAD_GRAYSCALE);
            EXPECT_TRUE(whole_face_image_filtered.data);

            std::vector<cv::Mat> previous_run_filtered_images;
            previous_run_filtered_images.emplace_back(hidden_mouth_image_filtered);
            previous_run_filtered_images.emplace_back(whole_face_image_filtered);

            cv::Mat hidden_mouth_image = cv::imread("hidden_mouth.png", cv::IMREAD_GRAYSCALE);
            EXPECT_TRUE(hidden_mouth_image.data);
            cv::Mat whole_face_image = cv::imread("whole_face.png", cv::IMREAD_GRAYSCALE);
            EXPECT_TRUE(whole_face_image.data);

            std::vector<cv::Mat> test_images;
            test_images.emplace_back(hidden_mouth_image);
            test_images.emplace_back(whole_face_image);

            processImagesForComparison(test_images);

            EXPECT_TRUE(areTheSame(test_images, previous_run_filtered_images));
        }
        std::experimental::filesystem::current_path(original_path);
    }

The areTheSame check returns true. But if I run it in a loop:

    TEST(PreprocessingTests, CutAndFilterTest)
    {
        auto original_path = std::experimental::filesystem::current_path();
        std::experimental::filesystem::current_path("../tests/data/");

        cv::Mat hidden_mouth_image = cv::imread("hidden_mouth.png", cv::IMREAD_GRAYSCALE);
        EXPECT_TRUE(hidden_mouth_image.data);
        cv::Mat whole_face_image = cv::imread("whole_face.png", cv::IMREAD_GRAYSCALE);
        EXPECT_TRUE(whole_face_image.data);

        std::vector<cv::Mat> original_images;
        for (int i = 0; i < 1; i++)
        {
            original_images.emplace_back(hidden_mouth_image);
            original_images.emplace_back(whole_face_image);
        }
        auto copied_original_images = cloneMatVector(original_images);

        std::vector<cv::Mat> previous_filtered_images;
        for (int run = 0; run < 2; run++)
        {
            std::vector<cv::Mat> run_images = cloneMatVector(original_images);
            processImagesForComparison(run_images);
            if (!previous_filtered_images.empty())
                EXPECT_TRUE(areTheSame(run_images, previous_filtered_images));
            previous_filtered_images = cloneMatVector(run_images);
        }

        EXPECT_TRUE(areTheSame(original_images, copied_original_images));
        std::experimental::filesystem::current_path(original_path);
    }

then areTheSame fails... In processImagesForComparison, only my grabCut implementation runs on each element:

    cv::Mat grabCutSegmentation(const cv::Mat& input)
    {
        cv::Mat bgModel, fgModel;
        cv::Mat mask(input.rows, input.cols, CV_8U);
        // let's set all of them to possible background first
        mask.setTo(cv::GC_PR_BGD);

        // cut out a small area in the middle of the image
        int m_rows = 0.75 * input.rows;
        int m_cols = 0.6 * input.cols;
        // the region of interest
        cv::Mat fg_seed = mask(cv::Range(input.rows/2 - m_rows/2, input.rows/2 + m_rows/2),
                               cv::Range(input.cols/2 - m_cols/2, input.cols/2 + m_cols/2));
        // mark it as foreground
        fg_seed.setTo(cv::GC_FGD);

        // select the last two rows of the image as background
        cv::Mat1b bg_seed = mask(cv::Range(mask.rows - 2, mask.rows), cv::Range::all());
        bg_seed.setTo(cv::GC_BGD);

        cv::Mat colour_input;
        cv::cvtColor(input, colour_input, CV_GRAY2RGB);
        cv::grabCut(colour_input, mask, cv::Rect(), bgModel, fgModel, 1, cv::GC_INIT_WITH_MASK);

        // let's get all foreground and possible foreground pixels
        cv::Mat mask_fgpf = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
        // and copy all the foreground pixels to a temporary image
        cv::Mat output = cv::Mat::zeros(input.rows, input.cols, CV_8U);
        input.copyTo(output, mask_fgpf);
        return output;
    }

What could be done regarding this? At which level may this problem emerge?
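One hedged explanation that matches these symptoms exactly: grabCut seeds its GMMs with kmeans, which draws from OpenCV's process-global RNG (cv::theRNG()). That RNG starts from the same state in every new process but keeps advancing across calls within one process, which would make restarted runs reproduce each other while loop iterations differ. If that is the cause, fixing the RNG state before each call makes every call deterministic; a sketch:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    void deterministicGrabCut(const cv::Mat& img, cv::Mat& mask,
                              cv::Mat& bgModel, cv::Mat& fgModel)
    {
        cv::theRNG().state = 4242;   // fixed seed: every call now sees the same RNG sequence
        cv::grabCut(img, mask, cv::Rect(), bgModel, fgModel, 1, cv::GC_INIT_WITH_MASK);
    }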

Difference in inference time for YOLOv3

Hi, I run dnn/object_detection.cpp with the params -c=yolov3.cfg -m=yolov3.weights -classes=coco.names --scale=0.00392 --width=416 --height=416 and it works, but it takes about 4 seconds per image. The call net.forward(outs, getOutputsNames(net)) takes most of the time, although the overlay on the image (https://drive.google.com/open?id=1kZIUS5Dct3sPRZrMKsixFCWLgizbKnzd) says about 745 ms. I build in Release and use OpenCV 4. Also, in this pull request (https://github.com/opencv/opencv/pull/11322) the developers write that the median time per 416x416 image is 216.11 ms. What may be the reason for this time difference? I run Ubuntu: https://drive.google.com/open?id=1MvbLah4gNFB-P2JvW5P5MZ7znhlX5u8Y
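A few hedged things to rule out before suspecting a bug: the PR's 216 ms median was measured on specific desktop hardware with all CPU optimizations enabled, so some gap is expected on other machines; and the backend, target, and thread count each change timings a lot. A sketch of the relevant knobs:

    #include <opencv2/dnn.hpp>
    #include <opencv2/core.hpp>

    void configure(cv::dnn::Net& net)
    {
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);  // default CPU backend
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);       // or DNN_TARGET_OPENCL on a GPU
        cv::setNumThreads(cv::getNumberOfCPUs());               // ensure threading is not capped
    }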

Compiling OpenCV 3.4.3 (Windows) with CUDA Toolkit 10.0: can't find CUDA_SDK_ROOT_DIR

Hello, I have tried different solutions without any success. I have:

- Windows 10
- Visual Studio 2017 v15.3.5
- OpenCV 3.4.3 source
- CMake GUI v3.10
- CUDA Toolkit 10.0
- CUDA hardware (laptop, Nvidia GTX 960M 2GB)

When I try to compile **WITH CUDA** (using the Visual Studio 15 generator), I always get the same error:

> CMake Warning at cmake/OpenCVFindLibsPerf.cmake:42 (message): OpenCV is not able to find/configure CUDA SDK (required by WITH_CUDA). CUDA support will be disabled in OpenCV build. To eliminate this warning remove WITH_CUDA=ON CMake configuration option. Call Stack (most recent call first): CMakeLists.txt:629 (include)

I tested with the Nvidia Toolkit deviceQuery.exe and everything seems fine on the hardware side. P.S.: Without CUDA it compiles without errors. Any suggestions? Thanks in advance, Adriano

Using the ZED for object detection and distance measurement

Hello there: I want to use the ZED stereo camera to detect objects and at the same time measure the distance between the object and the camera. I know there is an API and an example for depth sensing, but I don't know how to combine them with object detection. Thanks
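A generic sketch of the combination (assumptions: you already have a depth map in meters registered to the left image, e.g. retrieved from the ZED SDK, and a bounding box from whatever OpenCV detector you use): take a robust statistic of the depth values inside the box as the object's distance, since the box usually also contains background pixels.

    #include <opencv2/core.hpp>
    #include <algorithm>
    #include <cmath>
    #include <vector>

    double distanceToObject(const cv::Mat& depthMeters /* CV_32F */, const cv::Rect& box)
    {
        cv::Rect r = box & cv::Rect(0, 0, depthMeters.cols, depthMeters.rows);
        std::vector<float> vals;
        for (int y = r.y; y < r.y + r.height; ++y)
            for (int x = r.x; x < r.x + r.width; ++x)
            {
                float d = depthMeters.at<float>(y, x);
                if (std::isfinite(d) && d > 0) vals.push_back(d);  // skip invalid pixels
            }
        if (vals.empty()) return -1.0;
        std::nth_element(vals.begin(), vals.begin() + vals.size() / 2, vals.end());
        return vals[vals.size() / 2];  // median is robust to background inside the box
    }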

Android high-speed capture

Hello, I was wondering whether there has been any attempt to use [CameraConstrainedHighSpeedCaptureSession](https://developer.android.com/reference/android/hardware/camera2/CameraConstrainedHighSpeedCaptureSession) from the camera2 API to capture at a higher frame rate on Android (even if a low resolution is required, this could have many benefits). I tried implementing it myself, but I do not have much experience with Android (or OpenCV) and have not succeeded so far.

Cross-compiling OpenCV for the Banana Pi

Hi, I want to cross-compile OpenCV 3.2.0 for a Banana Pi. I tried building on the Banana Pi itself, but it complains that there is not enough space, so I am trying from a virtual machine running Ubuntu. I follow this page: https://docs.opencv.org/2.4/doc/tutorials/introduction/crosscompilation/arm_crosscompile_with_cmake.html I have the source folder from SourceForge and made a build folder inside it, so my path looks like this: ~/Documents/opencv-3.2.0/build Then I enter the following command:

> cmake -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/arm-gnueabi.toolchain.cmake ../

And the error is:

> CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool.

What can I do to solve this problem? I need the .so files so that I can transfer them to the Banana Pi. Thanks
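That particular CMake error usually just means no make program is installed in the VM; installing the build tools, plus the ARM cross compilers the toolchain file expects, normally clears it. The package names below are the usual Ubuntu ones and an assumption to verify against the linked tutorial's prerequisites:

> sudo apt-get install build-essential cmake
> sudo apt-get install gcc-arm-linux-gnueabi g++-arm-linux-gnueabi

(The arm-gnueabi.toolchain.cmake file drives the arm-linux-gnueabi-* compilers; use the gnueabihf package variants with the hard-float toolchain file instead.)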