Channel: OpenCV Q&A Forum - RSS feed

OpenCV + Darknet - Error when initializing Darknet

Hi! I'm a university student and for my thesis I have to perform object detection using YOLO. I read the related paper and completed all of the command-line examples at https://pjreddie.com/darknet/yolo/. Now I have to do the same using OpenCV. I'm using Xubuntu 16.04 LTS x64, OpenCV 3.3.1 and Qt Creator 4.5.0. For the moment I'm stuck at these few lines, because I can't manage to initialize the network.

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace cv::dnn;
using namespace std;

int main()
{
    // The path to the .cfg file with the text description of the network architecture.
    String modelConfiguration = "/home/lorenzo/Scrivania/yolo-9000/darknet/cfg/yolo-9000.cfg";
    // The path to the .weights file with the learned network.
    String modelBinary = "/home/lorenzo/Scrivania/yolo-9000/yolo9000-weights/yolo-9000.weights";

    //! [Initialize network]
    // Reads a network model stored in Darknet model files.
    dnn::Net net = readNetFromDarknet(modelConfiguration, modelBinary);
    //! [Initialize network]

    if (net.empty())
    {
        cerr << "Can't load network by using the following files:" << endl;
        cerr << "cfg-file:     " << modelConfiguration << endl;
        cerr << "weights-file: " << modelBinary << endl;
        exit(-1);
    }
    return 0;
}
```

It returns the following error:

```
OpenCV Error: Parsing error (Failed to parse NetParameter file: /home/lorenzo/Scrivania/yolo-9000/darknet/cfg/yolo-9000.cfg) in ReadNetParamsFromCfgFileOrDie, file /home/lorenzo/Scrivania/opencv-3.3.1/modules/dnn/src/darknet/darknet_io.cpp, line 612
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/lorenzo/Scrivania/opencv-3.3.1/modules/dnn/src/darknet/darknet_io.cpp:612: error: (-212) Failed to parse NetParameter file: /home/lorenzo/Scrivania/yolo-9000/darknet/cfg/yolo-9000.cfg in function ReadNetParamsFromCfgFileOrDie
```

I opened the file at "/home/lorenzo/Scrivania/opencv-3.3.1/modules/dnn/src/darknet/darknet_io.cpp" and found the following code at lines 609:614:

```cpp
void ReadNetParamsFromCfgFileOrDie(const char *cfgFile, darknet::NetParameter *net)
{
    if (!darknet::ReadDarknetFromCfgFile(cfgFile, net))
    {
        CV_Error(cv::Error::StsParseError, "Failed to parse NetParameter file: " + std::string(cfgFile));
    }
}
```

The output of cv::getBuildInformation() is the following.
```
General configuration for OpenCV 3.3.1 =====================================
  Version control:               unknown

  Platform:
    Timestamp:                   2017-12-13T21:08:24Z
    Host:                        Linux 4.10.0-42-generic x86_64
    CMake:                       3.5.1
    CMake generator:             Unix Makefiles
    CMake build tool:            /usr/bin/make
    Configuration:               Release

  CPU/HW features:
    Baseline:                    SSE SSE2 SSE3
      requested:                 SSE3
    Dispatched code generation:  SSE4_1 SSE4_2 FP16 AVX AVX2
      requested:                 SSE4_1 SSE4_2 AVX FP16 AVX2
      SSE4_1 (3 files):          + SSSE3 SSE4_1
      SSE4_2 (1 files):          + SSSE3 SSE4_1 POPCNT SSE4_2
      FP16 (2 files):            + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
      AVX (5 files):             + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
      AVX2 (8 files):            + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2

  C/C++:
    Built as dynamic libs?:      YES
    C++ Compiler:                /usr/bin/c++ (ver 5.4.0)
    C++ flags (Release):         -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
    C++ flags (Debug):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
    C Compiler:                  /usr/bin/cc
    C flags (Release):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-narrowing -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
    C flags (Debug):             -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-narrowing -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -msse -msse2 -msse3 -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
    Linker flags (Release):
    Linker flags (Debug):
    ccache:                      NO
    Precompiled headers:         YES
    Extra dependencies:          dl m pthread rt
    3rdparty dependencies:

  OpenCV modules:
    To be built:                 core flann imgproc ml objdetect photo video dnn imgcodecs shape videoio highgui superres ts features2d calib3d stitching videostab
    Disabled:                    js world
    Disabled by dependency:      -
    Unavailable:                 cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev java python2 python3 viz

  GUI:
    QT:                          NO
    GTK+ 2.x:                    YES (ver 2.24.30)
    GThread :                    YES (ver 2.48.2)
    GtkGlExt:                    NO
    OpenGL support:              NO
    VTK support:                 NO

  Media I/O:
    ZLib:                        /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.8)
    JPEG:                        libjpeg (ver 90)
    WEBP:                        build (ver encoder: 0x020e)
    PNG:                         /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.2.54)
    TIFF:                        build (ver 42 - 4.0.2)
    JPEG 2000:                   build (ver 1.900.1)
    OpenEXR:                     build (ver 1.7.1)
    GDAL:                        NO
    GDCM:                        NO

  Video I/O:
    DC1394 1.x:                  NO
    DC1394 2.x:                  NO
    FFMPEG:                      YES
      avcodec:                   YES (ver 56.60.100)
      avformat:                  YES (ver 56.40.101)
      avutil:                    YES (ver 54.31.100)
      swscale:                   YES (ver 3.1.101)
      avresample:                NO
    GStreamer:                   NO
    OpenNI:                      NO
    OpenNI PrimeSensor Modules:  NO
    OpenNI2:                     NO
    PvAPI:                       NO
    GigEVisionSDK:               NO
    Aravis SDK:                  NO
    UniCap:                      NO
    UniCap ucil:                 NO
    V4L/V4L2:                    NO/YES
    XIMEA:                       NO
    Xine:                        NO
    Intel Media SDK:             NO
    gPhoto2:                     NO

  Parallel framework:            pthreads

  Trace:                         YES (with Intel ITT)

  Other third-party libraries:
    Use Intel IPP:               2017.0.3 [2017.0.3]
               at:               /home/lorenzo/Scrivania/opencv-3.3.1-build/3rdparty/ippicv/ippicv_lnx
    Use Intel IPP IW:            sources (2017.0.3)
              at:                /home/lorenzo/Scrivania/opencv-3.3.1-build/3rdparty/ippicv/ippiw_lnx
    Use VA:                      NO
    Use Intel VA-API/OpenCL:     NO
    Use Lapack:                  NO
    Use Eigen:                   NO
    Use Cuda:                    NO
    Use OpenCL:                  YES
    Use OpenVX:                  NO
    Use custom HAL:              NO

  OpenCL:
    Include path:                /home/lorenzo/Scrivania/opencv-3.3.1/3rdparty/include/opencl/1.2
    Use AMDFFT:                  NO
    Use AMDBLAS:                 NO

  Python 2:
    Interpreter:                 /usr/bin/python2.7 (ver 2.7.12)

  Python 3:
    Interpreter:                 /usr/bin/python3 (ver 3.5.2)

  Python (for build):            /usr/bin/python2.7

  Java:
    ant:                         NO
    JNI:                         NO
    Java wrappers:               NO
    Java tests:                  NO

  Matlab:                        Matlab not found or implicitly disabled

  Documentation:
    Doxygen:                     NO

  Tests and samples:
    Tests:                       YES
    Performance tests:           YES
    C/C++ Examples:              NO

  Install path:                  /usr/local

  cvconfig.h is in:              /home/lorenzo/Scrivania/opencv-3.3.1-build
-----------------------------------------------------------------
```

Hope you can help me to solve this. Thanks in advance!
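For what it's worth, the 3.3.x Darknet importer only understands the layer types it was written against, and a parse failure in darknet_io.cpp usually means the cfg uses something it does not support; whether that is the case for yolo-9000 specifically is an assumption here. A minimal load-and-forward sketch (in Python for brevity) with a YOLOv2-style model such as tiny-yolo, the kind of cfg that importer targets; all file names below are placeholders:

```python
import cv2

# Placeholder paths: a YOLOv2-style cfg/weights pair from pjreddie.com/darknet;
# whether a given cfg parses depends on the layers it uses.
net = cv2.dnn.readNetFromDarknet("tiny-yolo.cfg", "tiny-yolo.weights")
assert not net.empty(), "network failed to load"

img = cv2.imread("dog.jpg")  # placeholder test image
# YOLOv2 expects a square RGB blob normalized to 0..1 (416x416 here).
blob = cv2.dnn.blobFromImage(img, 1.0 / 255, (416, 416), (0, 0, 0), True)
net.setInput(blob)
detections = net.forward()  # rows: [x, y, w, h, objectness, class scores...]
print(detections.shape)
```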

uEye USB 1540LE monochrome live streaming with OpenCV 3

Hi there, I have been trying to start a uEye USB 1540LE monochrome camera using OpenCV on Ubuntu 14.04. The program runs, the camera is initialised and memory is allocated, but my window shows blank images so far. Any clues? Here is my code; I got it from [link text](https://github.com/StevenPuttemans/opencv_tryout_code/tree/master/camera_interfacing):

```cpp
// start camera
void initializeCameraInterface(HIDS* hCam_internal)
{
    // Open cam and see if it was succesfull
    INT nRet = is_InitCamera (hCam_internal, NULL);
    if (nRet == IS_SUCCESS){
        cout << "Camera initialized!" << endl;
    }

    // Setting the pixel clock to retrieve data
    UINT nPixelClockDefault = 24;
    nRet = is_PixelClock(*hCam_internal, IS_PIXELCLOCK_CMD_SET,
                         (void*)&nPixelClockDefault, sizeof(nPixelClockDefault));
    if (nRet == IS_SUCCESS){
        cout << "Camera pixel clock succesfully set!" << endl;
    }else if(nRet == IS_NOT_SUPPORTED){
        cout << "Camera pixel clock setting is not supported!" << endl;
    }

    // Set the color mode of the camera
    INT colorMode = IS_CM_MONO8;
    nRet = is_SetColorMode(*hCam_internal, colorMode);
    if (nRet == IS_SUCCESS){
        cout << "Camera color mode succesfully set!" << endl;
    }

    // Store image in camera memory --> option to chose data capture method
    // Then access that memory to retrieve the data
    INT displayMode = IS_SET_DM_DIB;
    nRet = is_SetDisplayMode (*hCam_internal, displayMode);
    if (nRet == IS_SUCCESS){
        cout << "display mode succesfully set!" << endl;
    }
}

// Capture a frame and push it in a OpenCV mat element
Mat getFrame(HIDS* hCam, int width, int height, cv::Mat& mat) {
    // Allocate memory for image
    char* pMem = NULL;
    int memID = 0;
    if( is_AllocImageMem(*hCam, width, height, 8, &pMem, &memID) == IS_SUCCESS) {
        //cout << "allocation successful"
        // ... [the rest of getFrame and main() up to its capture loop were
        //      eaten by the forum's HTML escaping; only the fragment
        //      "<= 0 ){ break; } }" survived] ...
    }
    // Release the camera again
    is_ExitCamera(hCam);
    return 0;
}
```

Training cascade for detecting arrow signs

I'm working on a simple navigation problem, for which I need to detect arrow signs and follow the arrow right or left. I can define the arrow shape, and am using this black arrow on a white background.

![left_arrow](/upfiles/15132911373186686.png)

Using a cascade with detectMultiScale seems to be the most promising approach:

- I need something that works at a range of scales (to detect arrows close up and further away).
- It doesn't need to be very fast: the speed of detectMultiScale is not a problem.

I've trained a cascade that does detect some arrows, but there are lots of false positives and I miss many arrows. I am also trying to train cascades to recognize just left or just right arrows, and have not achieved a reliable result. It seems like this should be a relatively easy object recognition problem, so I'm puzzled by my poor results.

For the details: I am using the LabelMe dataset (from http://www.ais.uni-bonn.de/download/datasets.html) for negative samples. I'm creating positive samples from this dataset with the command below and a 26x20-pixel version of the arrow:

```
opencv_createsamples -img arrow_26x20R.jpg -bg bg_0-20.txt -num 20000 -info ./pos_5R/annotations.lst -pngoutput ./pos_5R -bgcolor 1 -bgthresh 0 -maxxangle 0.2 -maxyangle 0.5 -maxzangle 0.2 -w 26 -h 20
```

Then the vector file:

```
opencv_createsamples -num 20000 -info ./pos_5L/annotations.lst -vec pos_5L.vec -w 26 -h 20
```

Then training the cascade with:

```
opencv_traincascade -data cascade_arrow_5L -vec pos_5L.vec -bg bg_5R_0-20.txt -numPos 17000 -numNeg 40000 -numStages 10 -w 26 -h 20
```

I've tried it with 2,000 positive / 4,000 negative images, and with 20,000 positive / 40,000 negative images, but with no improvement in effectiveness. The training finishes with 'Required leaf false alarm rate achieved. Branch training terminated.' after just 2 stages (stage 0 & 1). I've read a lot of helpful advice and tutorials, including this forum, but can't figure out any mistake. I'd be very grateful for any suggestions.
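Independent of the training issue, a quick way to inspect what a cascade actually learned after those two stages is to run it over a test scene. A minimal sketch (Python for brevity; file names are placeholders):

```python
import cv2

cascade = cv2.CascadeClassifier("cascade_arrow_5L/cascade.xml")  # placeholder path
assert not cascade.empty()

img = cv2.imread("test_scene.jpg")  # placeholder test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor near 1 scans more scales (slower, better recall);
# raising minNeighbors suppresses false positives at the cost of misses.
arrows = cascade.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=5,
                                  minSize=(26, 20))
for (x, y, w, h) in arrows:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("detections", img)
cv2.waitKey(0)
```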

Skeletonization of the image

Hey, I'm trying to convert the image below into a skeleton using OpenCV.

![Photo 1](/upfiles/15133133844829775.jpg)

I'm using:

```python
size = np.size(crop)
skel = np.zeros(crop.shape, np.uint8)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
done = False

while not done:
    eroded = cv2.erode(crop, element)
    temp = cv2.dilate(eroded, element)
    temp = cv2.subtract(crop, temp)
    skel = cv2.bitwise_or(skel, temp)
    crop = eroded.copy()
    zeros = size - cv2.countNonZero(crop)
    if zeros == size:
        done = True
```

And I got this:

![image description](/upfiles/15133136534821451.jpg)

There is some noise and the lines are not connected properly. Any help please? Thanks!
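One direction worth trying (untested on this image, and it assumes an opencv_contrib build, since cv2.ximgproc is not in the main modules): close the small gaps with a morphological close before skeletonizing, and let ximgproc's thinning produce the one-pixel skeleton instead of the erode/dilate loop:

```python
import cv2

img = cv2.imread("crop.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Bridge small breaks first so the skeleton comes out connected.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)

# Zhang-Suen thinning from the contrib ximgproc module.
skel = cv2.ximgproc.thinning(bw, thinningType=cv2.ximgproc.THINNING_ZHANGSUEN)
cv2.imwrite("skeleton.png", skel)
```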

Sample code in OpenCV Java for finding contour properties

Hi, I am doing a project using OpenCV Java. I want to make use of the methods for finding contour properties. As I am not an expert in OpenCV or Java, it would be great if someone could provide sample code in Java for using methods like findContours, contourArea, convexHull, etc.
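The Java bindings in org.opencv.imgproc.Imgproc mirror the C++/Python names one-to-one (findContours, contourArea, convexHull, boundingRect), so a compact Python sketch of the call sequence can serve as a map; the image name is a placeholder:

```python
import cv2

img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Java: Imgproc.findContours(mat, contours, hierarchy,
#                            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE)
_, contours, hierarchy = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                          cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)         # Java: Imgproc.contourArea(contour)
    hull = cv2.convexHull(c)          # Java: Imgproc.convexHull(points, hullIndices)
    x, y, w, h = cv2.boundingRect(c)  # Java: Imgproc.boundingRect(contour)
    print(area, len(hull), (x, y, w, h))
```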

How should a TensorFlow model be saved so that it can be loaded in OpenCV 3.3.1?

Hi, I am using TensorFlow to train a neural network (the network doesn't contain any variables). This is my neural network graph in TensorFlow:

```python
X = tf.placeholder(tf.float32, [None, training_set.shape[1]], name='X')
Y = tf.placeholder(tf.float32, [None, training_labels.shape[1]], name='Y')

A1 = tf.contrib.layers.fully_connected(X, num_outputs=50, activation_fn=tf.nn.relu)
A1 = tf.nn.dropout(A1, 0.8)
A2 = tf.contrib.layers.fully_connected(A1, num_outputs=2, activation_fn=None)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=A2, labels=Y))
global_step = tf.Variable(0, trainable=False)
start_learning_rate = 0.001
learning_rate = tf.train.exponential_decay(start_learning_rate, global_step, 100, 0.1, True)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
```

I wanted to know how this graph should be saved in TensorFlow so that it can be loaded using readNetFromTensorflow.
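A common route for the importer of that era (a sketch, assuming TF 1.x; the output node name 'output' is a placeholder, so name your last op explicitly when building the graph) is to fold the graph into constants and write a binary .pb; training-only ops such as dropout may also need to be stripped before OpenCV will accept the graph:

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training runs here ...

    # Replace 'output' with the real name of the final op (e.g. give the
    # last fully_connected layer an explicit name when building the graph).
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['output'])
    tf.train.write_graph(frozen, '.', 'model.pb', as_text=False)
```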

Performance of findCirclesGrid on larger grids

I have found that findCirclesGrid takes a long time (on the order of minutes) on reasonably large grids such as 23x18, contained in 4 MP images. I'm using default parameters, and the images are simple, i.e. they contain only circles and nothing else. The function findChessboardCorners suffers no such degradation. Versions tested are 2.3 and 3.2. Can the execution time be improved with different parameters? Furthermore, I have noticed that the flag CALIB_CB_ASYMMETRIC_GRID does not work on 23x18, and CALIB_CB_SYMMETRIC_GRID must be passed to ensure detection. Is this expected?
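Most of findCirclesGrid's time goes into blob detection plus a combinatorial search over the candidate centers, so one knob worth trying (a sketch, not a verified fix) is handing it a SimpleBlobDetector tuned to the actual dot size, so far fewer spurious candidates enter the grid search; the file name is a placeholder:

```python
import cv2

img = cv2.imread("grid.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 50      # tune both bounds to the real dot size in pixels
params.maxArea = 2000
detector = cv2.SimpleBlobDetector_create(params)

found, centers = cv2.findCirclesGrid(img, (23, 18),
                                     flags=cv2.CALIB_CB_SYMMETRIC_GRID,
                                     blobDetector=detector)
print(found)
```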

How to improve back projection?

I tried to extract the background of an image using [Histogram Backprojection](https://docs.opencv.org/3.3.1/dc/df6/tutorial_py_histogram_backprojection.html), but with the approach in that example I am not able to extract the complete background with its details. Any suggestions on how best to improve this? I did try to increase the kernel size in `disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))` from `5` to `9` or `11`, but in doing so we also pick up unwanted background.
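One sketch worth trying (file names are placeholders; the HSV histogram setup follows that tutorial): normalize the ROI histogram before back projecting, then clean the thresholded mask with a morphological close/open instead of enlarging the disc, which is what tends to drag in unwanted background:

```python
import cv2

roi = cv2.imread("background_patch.png")  # placeholder: a sample of the background
target = cv2.imread("scene.png")          # placeholder: the full image
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hsv_tgt = cv2.cvtColor(target, cv2.COLOR_BGR2HSV)

hist = cv2.calcHist([hsv_roi], [0, 1], None, [180, 256], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
dst = cv2.calcBackProject([hsv_tgt], [0, 1], hist, [0, 180, 0, 256], 1)

disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cv2.filter2D(dst, -1, disc, dst)
_, mask = cv2.threshold(dst, 50, 255, cv2.THRESH_BINARY)

# Fill holes in the background region, then drop small speckles,
# rather than growing the disc and bleeding into the foreground.
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, k)
result = cv2.bitwise_and(target, target, mask=mask)
```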

Dynamic Thresholding

Hi, I have a question about dynamic thresholding. I know that the Halcon software uses dynamic thresholding for image processing. I was wondering if there's an equivalent in OpenCV. This is what I've found in a Halcon manual:

> **Signature**
>
> dyn_threshold(OrigImage, ThresholdImage : RegionDynThresh : Offset, LightDark : )
>
> **Description**
>
> dyn_threshold selects from the input image those regions in which the pixels fulfill a threshold condition.
>
> Typically, the threshold images are smoothed versions of the original image (e.g., by applying mean_image, binomial_filter, gauss_filter, etc.). Then the effect of dyn_threshold is similar to applying threshold to a highpass-filtered version of the original image (see highpass_image).
>
> With dyn_threshold, contours of an object can be extracted, where the objects' size (diameter) is determined by the mask size of the lowpass filter and the amplitude of the objects' edges:
>
> The larger the mask size is chosen, the larger the found regions become. As a rule of thumb, the mask size should be about twice the diameter of the objects to be extracted. It is important not to set the parameter Offset to zero because in this case too many small regions will be found (noise). Values between 5 and 40 are a useful choice. The larger Offset is chosen, the smaller the extracted regions become.

Or should I manually create a function that does this? (I have no idea how.)

Thanks, Andries
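The operation described above is easy to reproduce by hand: compare each pixel against a locally smoothed copy of the image, offset up or down. A minimal sketch (the mask size and offset are illustrative, following the rule of thumb quoted from the manual); note that cv2.adaptiveThreshold implements essentially the same idea with a mean or Gaussian window, if a built-in is preferred:

```python
import cv2
import numpy as np

def dyn_threshold(img, mask_size=15, offset=10, light_dark="light"):
    """Halcon-style dyn_threshold: compare pixels against their local mean."""
    mean = cv2.boxFilter(img.astype(np.float32), cv2.CV_32F,
                         (mask_size, mask_size))
    if light_dark == "light":   # regions brighter than their neighbourhood
        region = img.astype(np.float32) >= mean + offset
    else:                       # regions darker than their neighbourhood
        region = img.astype(np.float32) <= mean - offset
    return (region * 255).astype(np.uint8)

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
# Rule of thumb from the manual: mask about twice the object diameter.
out = dyn_threshold(img, mask_size=23, offset=10, light_dark="dark")
```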

Difficulties building OpenCV.js on macOS

I followed the steps in the tutorial on the OpenCV website and I get the following error:

```
bash-3.2# python platforms/js/build_js.py build_js
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python: can't open file 'platforms/js/build_js.py': [Errno 2] No such file or directory
bash-3.2# cd opencv/
bash-3.2# cd opencv-3.3.1/
bash-3.2# python platforms/js/build_js.py build_js
Args: Namespace(build_dir='build_js', build_doc=False, build_test=False, build_wasm=False, clean_build_dir=False, config_only=False, emscripten_dir='/Users/ruimfjesus/Applications/emsdk-portable/emscripten/1.37.22', enable_exception=False, opencv_dir='/Users/ruimfjesus/Applications/opencv/opencv-3.3.1', skip_config=False)
Check dir /Users/ruimfjesus/Applications/opencv/opencv-3.3.1/build_js (create: True, clean: False)
Check dir /Users/ruimfjesus/Applications/opencv/opencv-3.3.1 (create: False, clean: False)
Check dir /Users/ruimfjesus/Applications/emsdk-portable/emscripten/1.37.22 (create: False, clean: False)
Detected OpenCV version: 3.3.1
Detected emcc version: 1.37.22
=====
===== Config OpenCV.js build for asm.js
=====
Executing: ['cmake', '-DCMAKE_BUILD_TYPE=Release', "-DCMAKE_TOOLCHAIN_FILE='/Users/ruimfjesus/Applications/emsdk-portable/emscripten/1.37.22/cmake/Modules/Platform/Emscripten.cmake'", "-DCPU_BASELINE=''", "-DCPU_DISPATCH=''", '-DCV_TRACE=OFF', '-DBUILD_SHARED_LIBS=OFF', '-DWITH_1394=OFF', '-DWITH_VTK=OFF', '-DWITH_CUDA=OFF', '-DWITH_CUFFT=OFF', '-DWITH_CUBLAS=OFF', '-DWITH_NVCUVID=OFF', '-DWITH_EIGEN=OFF', '-DWITH_FFMPEG=OFF', '-DWITH_GSTREAMER=OFF', '-DWITH_GTK=OFF', '-DWITH_GTK_2_X=OFF', '-DWITH_IPP=OFF', '-DWITH_JASPER=OFF', '-DWITH_JPEG=OFF', '-DWITH_WEBP=OFF', '-DWITH_OPENEXR=OFF', '-DWITH_OPENGL=OFF', '-DWITH_OPENVX=OFF', '-DWITH_OPENNI=OFF', '-DWITH_OPENNI2=OFF', '-DWITH_PNG=OFF', '-DWITH_TBB=OFF', '-DWITH_PTHREADS_PF=OFF', '-DWITH_TIFF=OFF', '-DWITH_V4L=OFF', '-DWITH_OPENCL=OFF', '-DWITH_OPENCL_SVM=OFF', '-DWITH_OPENCLAMDFFT=OFF', '-DWITH_OPENCLAMDBLAS=OFF', '-DWITH_MATLAB=OFF', '-DWITH_GPHOTO2=OFF', '-DWITH_LAPACK=OFF', '-DWITH_ITT=OFF', '-DBUILD_ZLIB=ON', '-DBUILD_opencv_apps=OFF', '-DBUILD_opencv_calib3d=OFF', '-DBUILD_opencv_dnn=OFF', '-DBUILD_opencv_features2d=OFF', '-DBUILD_opencv_flann=OFF', '-DBUILD_opencv_ml=OFF', '-DBUILD_opencv_photo=OFF', '-DBUILD_opencv_imgcodecs=OFF', '-DBUILD_opencv_shape=OFF', '-DBUILD_opencv_videoio=OFF', '-DBUILD_opencv_videostab=OFF', '-DBUILD_opencv_highgui=OFF', '-DBUILD_opencv_superres=OFF', '-DBUILD_opencv_stitching=OFF', '-DBUILD_opencv_java=OFF', '-DBUILD_opencv_js=ON', '-DBUILD_opencv_python2=OFF', '-DBUILD_opencv_python3=OFF', '-DBUILD_EXAMPLES=OFF', '-DBUILD_PACKAGE=OFF', '-DBUILD_TESTS=OFF', '-DBUILD_PERF_TESTS=OFF', '-DBUILD_DOCS=OFF', '/Users/ruimfjesus/Applications/opencv/opencv-3.3.1']
Traceback (most recent call last):
  File "platforms/js/build_js.py", line 239, in <module>
    builder.config()
  File "platforms/js/build_js.py", line 179, in config
    execute(cmd)
  File "platforms/js/build_js.py", line 21, in execute
    raise Fail("Execution failed: %d / %s" % (e.errno, e.strerror))
__main__.Fail: Execution failed: 13 / Permission denied
```

(DNN) different results between version 3.3.0 and 3.3.1

System information (version):

- OpenCV => 3.3.0 / 3.3.1
- Operating System / Platform => Windows 10 64 Bit
- Compiler => Visual Studio 2015

Detailed description

I have a network that works fine in OpenCV 3.3.0, but after updating my OpenCV to version 3.3.1 I'm getting wrong results with the same code. What I have already tried:

* Compile on Linux -> I got the same wrong results
* Compile on Windows with MinGW -> I got the same wrong results
* Compile on Windows with Visual Studio 14 x32 -> I got the same wrong results
* Compile the master branch of OpenCV on Windows with Visual Studio 14 x32 -> I got the same wrong results

Complementary tests: I used the "tensorflow_inception_graph.pb" network; with this network I got the same results in versions 3.3.0 and 3.3.1, though I do not know if those are correct predictions. Using the Caffe model network from the OpenCV examples also worked, with correct predictions in both versions. Maybe my problem is my network, but why does my network work in OpenCV 3.3.0 and not in 3.3.1?

Steps to reproduce

**Network input: 1x1x28x92 (grayscale image)**
**Normalization: 0..1**
**The same code is used in OpenCV 3.3.0 and 3.3.1**
**My network you can find [here](https://github.com/opencv/opencv/issues/10292)**

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/core.hpp>

//using namespace cvtest;
using namespace cv;
using namespace cv::dnn;

#include <fstream>
#include <iostream>
#include <cstdlib>
using namespace std;

static void getMaxClass(const Mat &probBlob, int *classId, double *classProb)
{
    Mat probMat = probBlob.reshape(1, 1); //reshape the blob to 1x1000 matrix
    Point classNumber;
    minMaxLoc(probMat, NULL, classProb, NULL, &classNumber);
    *classId = classNumber.x;
}

int main()
{
    //CV_TRACE_FUNCTION();
    String modelBin = "model_final.pb";
    String imageFile = "airplane.jpg";

    Net net = dnn::readNetFromTensorflow(modelBin);
    if (net.empty())
    {
        std::cerr << "Can't load network by using the following files: " << std::endl;
        std::cerr << "Tensorflow model: " << modelBin << std::endl;
        exit(-1);
    }

    Mat img = imread(imageFile, 0);
    if (img.empty())
    {
        std::cerr << "Can't read image from the file: " << imageFile << std::endl;
        exit(-1);
    }

    Mat resized;
    resize(img, resized, Size(92, 28));
    float escala = 1.0 / 255.0;
    Mat inputBlob = blobFromImage(resized, escala, Size(92, 28), Scalar(0, 0, 0), false); //Convert Mat to batch of images
    std::cout << inputBlob.size << std::endl;

    Mat prob;
    cv::TickMeter t;
    for (int i = 0; i < 100; i++)
    {
        //CV_TRACE_REGION("forward");
        t.start();
        net.setInput(inputBlob, "conv2d_1_input"); //set the network input
        prob = net.forward("activation_4/Softmax"); //compute output
        //std::cout << prob << std::endl;
        t.stop();
    }

    int classId;
    double classProb;
    getMaxClass(prob, &classId, &classProb); //find the best class
    std::cout << prob << std::endl;
    std::cout << "Best class: #" << classId << std::endl;
    std::cout << "Probability: " << classProb * 100 << "%" << std::endl;
    std::cout << "Time: " << (double)t.getTimeMilli() / t.getCounter()
              << " ms (average from " << t.getCounter() << " iterations)" << std::endl;

    namedWindow("DEBUG", WINDOW_AUTOSIZE); // Create a window for display.
    imshow("DEBUG", img);                  // Show our image inside it.
    waitKey(0);
    return 0;
} //main
```

Could using a .pbtxt file as an argument when loading the network help me? What is the correct way to generate this file?
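On the closing question: the .pbtxt is just the text serialization of the same GraphDef, and TensorFlow can write it directly (a sketch, assuming TF 1.x). Note that readNetFromTensorflow taking a separate text-graph config argument is a later addition, so check whether your OpenCV version supports it before relying on this:

```python
import tensorflow as tf

with tf.Session() as sess:
    # ... build or restore the trained graph here ...
    # Binary .pb and its text twin .pbtxt of the same GraphDef:
    tf.train.write_graph(sess.graph_def, '.', 'model_final.pb', as_text=False)
    tf.train.write_graph(sess.graph_def, '.', 'model_final.pbtxt', as_text=True)
```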

Building calib3d in OpenCV.js

Hello, I would like to use solvePnP in JavaScript, but currently I can't make it work :( I'm using 3.3.1 and following the instructions to build OpenCV.js from [the official page](https://docs.opencv.org/3.3.1/d4/da1/tutorial_js_setup.html). I set calib3d to ON in [build_js.py](https://github.com/opencv/opencv/blob/master/platforms/js/build_js.py#L136), and the same for features2d, which seems to be a dependency of calib3d. The build config reports them as 'To be built':

![image description](/upfiles/15133585886406113.png)

But calib3d is never compiled. No matter which modules I switch ON/OFF, it only builds core/imgproc/objdetect/video ... nothing more.

![image description](/upfiles/15133588559444263.png)

Any hint on how to get solvePnP in JavaScript? I'm super new at OpenCV.
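Two places likely matter here (a sketch based on how the 3.3.x build scripts are laid out; treat the exact locations as assumptions to verify). First, build_js.py hard-codes `-DBUILD_opencv_calib3d=OFF` in the cmake argument list it assembles, which overrides a toggle set elsewhere; second, even a compiled module exports nothing to JavaScript unless its functions are added to the binding white list in the js module's generator script:

```python
# In platforms/js/build_js.py, inside the hard-coded cmake args list, flip:
#   "-DBUILD_opencv_calib3d=OFF"    -> "-DBUILD_opencv_calib3d=ON"
#   "-DBUILD_opencv_features2d=OFF" -> "-DBUILD_opencv_features2d=ON"  # dependency

# In modules/js/src/embindgen.py (location is an assumption), extend the
# white list so the functions are actually bound, e.g.:
calib3d = {'': ['solvePnP', 'solvePnPRansac', 'Rodrigues']}
white_list = makeWhiteList([core, imgproc, objdetect, video, calib3d])
```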

Linking error with -lopencv_core

Hello, I recently downloaded code from a GitHub repo for doing non-rigid 3D object reconstruction from videos. The code has many dependencies on other libraries, among which is the OpenCV library. Unfortunately, when I build the code, I get the following error message:

```
CXX/LD -o build/PangaeaTracking/bin/PangaeaTracking_console
/usr/bin/ld: /usr/local/lib/libopencv_core.a(persistence.cpp.o): undefined reference to symbol 'gzclose'
//lib/x86_64-linux-gnu/libz.so.1: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
Makefile:127: recipe for target 'build/PangaeaTracking/bin/PangaeaTracking_console' failed
make: *** [build/PangaeaTracking/bin/PangaeaTracking_console] Error 1
```

I am building it on Ubuntu 16.04 and have installed OpenCV 2.4.13 correctly; I made sure this is the case as follows:

```
$ pkg-config --cflags opencv
-I/usr/local/include/opencv -I/usr/local/include
```

The library dependencies in my makefile are:

```make
# Library dependencies
GL_LIB := -lGL -lGLU -lX11 -lGLEW
WX_LIB := `wx-config --libs --gl-libs`
BOOST_LIB := -lboost_filesystem -lboost_system -lboost_thread
OPENCV_LIB := -lopencv_core -lopencv_highgui -lopencv_imgproc
CERES_LIB := -lceres -lglog -ltbb -ltbbmalloc -lcholmod -lccolamd \
    -lcamd -lcolamd -lamd -lsuitesparseconfig -llapack -lf77blas -latlas
LMDB_LIB := -llmdb
HDF5_LIB := -lhdf5_hl -lhdf5

LIBRARY_DIRS += $(LIB_BUILD_DIR)
LDFLAGS := $(WX_LIB) $(BOOST_LIB) $(OPENCV_LIB) $(CERES_LIB) $(GL_LIB) $(LMDB_LIB) $(HDF5_LIB)
LDFLAGS += $(foreach library_dir, $(LIBRARY_DIRS), -L$(library_dir))
```

I would be grateful if anybody has an idea about the possible cause of this error.

Energy computation of the DCT of an image

I am interested in the energy of the low- and high-frequency components of the DCT (discrete cosine transform) of an image. I found [energy computation of a signal](https://en.wikipedia.org/wiki/Energy_(signal_processing)), but I cannot find how to compute the energy of the low- and high-frequency components of the DCT of an image, so I used the signal energy formula shown below:

> Energy = Sum (Amplitude * Amplitude)

Below are the original image, its DCT, and the energy computed from the above formula.

**Original image:**

![image description](/upfiles/15134095227562472.jpg)

**DCT of the above image:**

![image description](/upfiles/15134095789980381.jpg)

I am using the upper quadrant of the image as the low-frequency component and the bottom corner quadrant as the high-frequency component. **Their energies are: 21.391 and 11.2572.**

My question is: how do I correctly find the energy of frequency bands from the DFT or DCT? Is the same energy formula used in image processing as in signal processing? Kindly help me to understand. Thanks!
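For a 2-D transform the same Parseval-style sum of squared coefficients is the standard energy measure, applied per region of the coefficient plane (for the complex DFT output, sum the squared magnitudes instead). A minimal sketch with the quadrant split used above; the file name is a placeholder:

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image
img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]  # cv2.dct needs even sizes
coeffs = cv2.dct(np.float32(img) / 255.0)

h, w = coeffs.shape
low = coeffs[:h // 2, :w // 2]   # top-left quadrant: low frequencies
high = coeffs[h // 2:, w // 2:]  # bottom-right quadrant: high frequencies

energy_low = float(np.sum(low ** 2))    # Energy = sum(amplitude^2)
energy_high = float(np.sum(high ** 2))
print(energy_low, energy_high)
```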

Build master branch on Debian 9 with CUDA 9

Hello, I've tried to build the OpenCV 3 master branch with gcc 6 and CUDA 9, but I always get an error related to CUDA; compilation without CUDA works fine. I also get warnings like:

```
cc1plus: warning: home/opencv/build/modules/cudaarithm/test_precomp.hpp.gch/opencv_test_cudaarithm_Release.gch: not used because `OPENCV_TRAITS_ENABLE_DEPRECATED' is defined [-Winvalid-pch]
```

Has anyone experienced this problem before? Is it possible to build OpenCV 3 with cuda-9 and gcc-6?

Cheers, Kris

Can anybody give me a link to tutorials on using OpenCV with Android Studio?

I want to use OpenCV with Android Studio.

Problem loading an image

```python
import numpy as np
import cv2

img = cv2.imread("mud.jpeg", cv2.IMREAD_COLOR)
cv2.imshow("bhatnagar", img)
cv2.waitKey(5)
```

This is the error I am getting (I have even tried giving the full path to the image, which should also work if the file is not in the working directory):

```
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in cv::imshow, file C:\projects\opencv-python\opencv\modules\highgui\src\window.cpp, line 325
    cv2.imshow("bhatnagar",img)
cv2.error: C:\projects\opencv-python\opencv\modules\highgui\src\window.cpp:325: error: (-215) size.width>0 && size.height>0 in function cv::imshow
```
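That assertion is the standard symptom of imread silently returning None (it does not raise on a missing or unreadable file), so imshow receives an empty image. A minimal guard (sketch):

```python
import os
import cv2

path = "mud.jpeg"
print(os.path.abspath(path), os.path.exists(path))  # what imread will actually see

img = cv2.imread(path, cv2.IMREAD_COLOR)
if img is None:
    raise IOError("imread failed - check path, extension and working directory: " + path)
cv2.imshow("bhatnagar", img)
cv2.waitKey(0)
```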

Using StereoBM in Java on Android -> some results

Hello, could someone give me an example showing how to use org.opencv.calib3d.StereoBM in Java (on Android in my case)? I have only found examples in C++ and Python on the web.

EDIT2: I tried to calibrate one camera before trying to calibrate both. So I wrote the following Java code.

Initialising the variables:

```java
imageCorners = new MatOfPoint2f();
savedImage = new Mat();
imagePoints = new ArrayList<>();
objectPoints = new ArrayList<>();
intrinsic = new Mat(3, 3, CvType.CV_32FC1);
distCoeffs = new Mat();
boardSize = new Size(numCornersHor, numCornersVer);
obj = new MatOfPoint3f();

for (int j = 0; j < numSquares; j++)
    // NB: this mixes the two axes (it divides by numCornersHor but takes the
    // modulo by numCornersVer, producing duplicate points); a proper grid
    // would be (j % numCornersHor, j / numCornersHor, 0.0f).
    obj.push_back(new MatOfPoint3f(new Point3(j / numCornersHor, j % numCornersVer, 0.0f)));
```

In the main loop:

```java
if (!isCalibrated) {
    if (touch_screen_down) {
        Mat grayImage = new Mat(rgba.size(), CvType.CV_8UC1);
        Imgproc.cvtColor(rgba, grayImage, Imgproc.COLOR_BGR2GRAY);
        boolean found = Calib3d.findChessboardCorners(
                grayImage, boardSize, imageCorners,
                Calib3d.CALIB_CB_ADAPTIVE_THRESH + Calib3d.CALIB_CB_NORMALIZE_IMAGE
                        + Calib3d.CALIB_CB_FAST_CHECK);
        if (found && successes <= boardsNumber) {
            // optimization
            TermCriteria term = new TermCriteria(TermCriteria.EPS | TermCriteria.MAX_ITER, 30, 0.001);
            Imgproc.cornerSubPix(grayImage, imageCorners, new Size(11, 11), new Size(-1, -1), term);
            // save the current frame for further elaborations
            grayImage.copyTo(savedImage);
            // show the chessboard inner corners on screen
            Calib3d.drawChessboardCorners(rgba, boardSize, imageCorners, found);
            imagePoints.add(imageCorners);
            imageCorners = new MatOfPoint2f();
            objectPoints.add(obj);
            successes++;
        }
        if (successes == boardsNumber) {
            List<Mat> rvecs = new ArrayList<>();
            List<Mat> tvecs = new ArrayList<>();
            intrinsic.put(0, 0, 1);
            intrinsic.put(1, 1, 1);
            calib_error = Calib3d.calibrateCamera(objectPoints, imagePoints,
                    savedImage.size(), intrinsic, distCoeffs, rvecs, tvecs);
            isCalibrated = true;
        }
    }
} else {
    // is already calibrated so undistort the picture
    Mat undistorted = new Mat();
    Imgproc.undistort(rgba, undistorted, intrinsic, distCoeffs);
    undistorted.copyTo(rgba);
}
```

BUT the calibration error (calib_error) is bad when I run this code with 20 pictures of a chessboard: values are between 150 and 200. I read that the calibration error should be less than 0.5!! And once calibration is done, new pictures from my camera are very distorted. What is the problem?

EDIT1: I wrote the following Java code:

```java
// load left and right images
Mat rgba_left = Utils.loadResource(MainActivity.this,
        R.drawable.trampoline3d_gauche, Imgcodecs.CV_LOAD_IMAGE_COLOR);
Mat rgba_right = Utils.loadResource(MainActivity.this,
        R.drawable.trampoline3d_droite, Imgcodecs.CV_LOAD_IMAGE_COLOR);

// create matrices with 1 channel
Mat mleft = new Mat(rgba_left.size(), CvType.CV_8UC1);
Mat mright = new Mat(rgba_right.size(), CvType.CV_8UC1);
Mat mdisparity = new Mat(rgba_left.size(), CvType.CV_8UC1);

// convert images to gray scale
Imgproc.cvtColor(rgba_left, mleft, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(rgba_right, mright, Imgproc.COLOR_BGR2GRAY);

StereoBM stereo = StereoBM.create(16 * 5, 21);
stereo.setPreFilterSize(5);
stereo.setPreFilterCap(61);
stereo.setMinDisparity(-10);
stereo.setNumDisparities(16 * 5);
stereo.setTextureThreshold(300);
stereo.setUniquenessRatio(5);
stereo.setSpeckleWindowSize(0);
stereo.setSpeckleRange(8);
stereo.compute(mleft, mright, mdisparity);

// normalize from float to bytes
Core.normalize(mdisparity, rgba_right, 0, 255, Core.NORM_MINMAX, CvType.CV_8UC1);

// display the result
Imgproc.cvtColor(rgba_right, rgba_left, Imgproc.COLOR_GRAY2BGRA, 4);
Imgproc.resize(rgba_left, rgba, rgba.size());
```

but I obtain a poor result...

left image: ![image description](/upfiles/15126434255491411.jpg)

right image: ![image description](/upfiles/15126434454773979.jpg)

result: ![image description](/upfiles/15126433822848951.jpg)

OpenCV SVM performance poor compared to MATLAB ensemble

Hello, I have been training an SVM classifier for a 2-class forgery detection problem with a feature size of 18,157 and 6,000 samples. The SVM type is C_SVC with an RBF kernel. The C and gamma parameters were varied to improve accuracy, and the trainAuto method was also tried with a reduced sample size, but the maximum accuracy obtained was around 60%. The same features were given to a MATLAB ensemble [http://dde.binghamton.edu/download/ensemble/] with default parameters, and the accuracy obtained was more than 80%. To ensure the features were identical, they were dumped from OpenCV to a file, read into MATLAB, and then fed to the ensemble trainer; the OpenCV and MATLAB features were compared inside MATLAB and found to be the same, and the accuracy was still above 80%. To rule out the problem of a low sample count relative to the feature size, the SVM was also trained on 34,000 samples; the accuracy is still only around 60%. Why is there an accuracy difference of 20% between the SVM and the MATLAB ensemble?

Regards, Amal
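One factor worth ruling out (an assumption, since the post doesn't mention it): an RBF-kernel SVM is very sensitive to unscaled features, whereas tree ensembles like the Binghamton one are scale-invariant, so a gap of this size often disappears after per-feature standardization. A sketch (file names and the 0/1 label layout are placeholders):

```python
import cv2
import numpy as np

samples = np.load("features.npy").astype(np.float32)  # placeholder: 6000 x 18157
labels = np.load("labels.npy").astype(np.int32)       # placeholder: one 0/1 per row

# Standardize each feature before the RBF kernel sees it.
mean, std = samples.mean(axis=0), samples.std(axis=0) + 1e-9
samples = (samples - mean) / std

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_RBF)
svm.setC(1.0)                         # then grid-search C and gamma on the scaled data
svm.setGamma(1.0 / samples.shape[1])
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
```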

MATLAB imwarp vs C++ warpAffine

I am trying to convert the following MATLAB code into C++ code with the OpenCV library.

**MATLAB code:**

```matlab
scale_x = 0.99951;
scale_y = 0.99951;
RsrcImg = imref2d(size(im1));
T = [scale_x 0 0; 0 scale_y 0; 0 0 1];
tformEstimate = affine2d(T);
destImg = imwarp(im1, tformEstimate, 'Interp', 'cubic', 'OutputView', RsrcImg);
```

**C++ code:**

```cpp
double scale_x = 0.99951;
double scale_y = 0.99951;
Mat tformEstimate = Mat::zeros(2, 3, CV_64FC1);
tformEstimate.at<double>(0, 0) = scale_x;
tformEstimate.at<double>(1, 1) = scale_y;
warpAffine(im1, destImg, tformEstimate, im1.size(), INTER_CUBIC);
```

But I get completely different PSNR values! Is there any way to make both codes produce the same result?

**Input:**

```
im1 = [19656, 20452, 21048, 21020, 20564;
       20740, 21332, 21964, 20932, 20220;
       21688, 21832, 20236, 21520, 21948;
       19780, 20572, 21056, 20748, 20088;
       20560, 21188, 20608, 22136, 20736]
```

**MATLAB output:**

```
[19656.4613056001, 20452.8984647534, 21048.6416903208, 21019.5009522438, 0;
 20741.1425789163, 21333.2772155087, 21963.2997618553, 20930.5373907996, 0;
 21687.3205560205, 21830.7204384573, 20235.1172001277, 21521.5360776160, 0;
 19779.1003189859, 20572.0033036149, 21056.4880106032, 20747.6616956745, 0;
 0, 0, 0, 0, 0]
```

**C++ output:**

```
[19656.00000 20452.00000 21048.00000 21020.00000 20564.00000
 20740.00000 21332.00000 21964.00000 20932.00000 20220.00000
 21688.00000 21832.00000 20236.00000 21520.00000 21948.00000
 19780.00000 20572.00000 21056.00000 20748.00000 20088.00000
 20560.00000 21188.00000 20608.00000 22136.00000 20736.00000]
```

I have seen [MATLAB vs C++ vs OpenCV - imresize](https://stackoverflow.com/questions/26812289/matlab-vs-c-vs-opencv-imresize) without success. Thank you for your help.
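One likely source of the mismatch (my assumption about the cause, not a verified fix): MATLAB's spatial referencing puts pixel centers at 1..N while OpenCV uses 0..N-1, so a pure scaling about MATLAB's origin corresponds in OpenCV coordinates to u' = s*(u+1) - 1 = s*u + (s-1), i.e. the same scaling plus a translation of s-1 per axis. MATLAB's default fill value of 0 matches warpAffine's default BORDER_CONSTANT. A sketch (Python for brevity):

```python
import cv2
import numpy as np

sx = sy = 0.99951
im1 = np.array([[19656, 20452, 21048, 21020, 20564],
                [20740, 21332, 21964, 20932, 20220],
                [21688, 21832, 20236, 21520, 21948],
                [19780, 20572, 21056, 20748, 20088],
                [20560, 21188, 20608, 22136, 20736]], dtype=np.float32)

# The MATLAB transform re-expressed in OpenCV's 0-based pixel-center frame:
# same scaling plus the (s - 1) translation derived above.
M = np.array([[sx, 0.0, sx - 1.0],
              [0.0, sy, sy - 1.0]], dtype=np.float32)
dst = cv2.warpAffine(im1, M, im1.shape[::-1], flags=cv2.INTER_CUBIC,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
print(dst)
```

Even with the geometry aligned, do not expect bit-exact agreement: the two bicubic kernels differ slightly, and at the right/bottom edge imwarp fills any out-of-range sample entirely with the fill value while warpAffine blends the constant border into the cubic kernel.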

