Channel: OpenCV Q&A Forum - RSS feed
Viewing all 41027 articles

Solved! Documentation bug report page doesn't work

I found a bug in the OpenCV documentation. I then clicked the bug-report link that the documentation itself recommends (http://code.opencv.org/), but I get this error: ![image description](/upfiles/1518283711422669.png) How can I report a bug in the documentation?

Using SVM with HOGDescriptor

I have a .yml file that was created with the C++ train_HOG example (https://docs.opencv.org/3.3.1/d5/d77/train_HOG_8cpp-example.html) using positive and negative images. How do I use that .yml file from my Python code? I noticed that the default person detector ships as opencv-3.3.0/data/hogcascades/hogcascade_pedestrian.xml. This is the Python code where I'm trying to plug in the trained SVM:

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    cap = cv2.VideoCapture(0)

This is what the top of my .yml looks like:

    %YAML:1.0
    my_detector: !!opencv-object-detector-hog
       winSize: [ 64, 128 ]
       blockSize: [ 16, 16 ]
       blockStride: [ 8, 8 ]
       cellSize: [ 8, 8 ]
       nbins: 9
       derivAperture: 1
       winSigma: 4.
       histogramNormType: 0
       L2HysThreshold: 2.0000000000000001e-01
       gammaCorrection: 1
       nlevels: 64
       signedGradient: 0
       SVMDetector: [ -6.58533711e-04, -7.21909618e-03, -1.13428337e-03,

How to import cv2 in python3 in ubuntu 17.10 after installing opencv 3.4.0?

**ujjwal@ujjwal-HP-245-G5-Notebook-PC**:~$ python
Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'cv2'
>>>

**ujjwal@ujjwal-HP-245-G5-Notebook-PC**:~$ python2
Python 2.7.14 (default, Sep 23 2017, 22:06:14)
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> **cv2.__version__**
'3.1.0'
>>>

**ujjwal@ujjwal-HP-245-G5-Notebook-PC**:~$ pkg-config --modversion opencv
3.4.0
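The transcript shows three separate installs in play: Python 2 sees a 3.1.0 cv2, the Anaconda Python 3 sees none, and pkg-config reports the 3.4.0 C++ install (which is independent of the Python bindings). A small diagnostic snippet to run in each interpreter, nothing Anaconda-specific assumed:

```python
import sys

print(sys.executable)          # the interpreter actually running
try:
    import cv2
    print(cv2.__version__)     # version of the bindings this interpreter sees
    print(cv2.__file__)        # where those bindings live on disk
except ImportError:
    # The cv2 .so was built/installed for a different interpreter;
    # it is simply not on this interpreter's sys.path.
    print("cv2 not found for Python", sys.version.split()[0])
```

Comparing `cv2.__file__` across interpreters usually makes it obvious which site-packages directory the 3.4.0 build landed in, and which one Anaconda is ignoring.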

OpenCV and Neural Network / AI Accelerators for Mobile Devices

Hello all, I'm trying to understand whether we can expect any performance improvement (in speed of inference/processing) for projects using OpenCV on mobile platforms that have, or will have, the existing and upcoming NN/AI accelerators, such as the iPhone X's AI chip, Qualcomm's Neural Processing Units on the Snapdragon 835/845, the PowerVR 2NX, etc. After days of searching online, it is not clear to me whether OpenCL, and hence OpenCV, is accelerated on such platforms out of the box. The prospect of performing complex CV operations, such as using OpenCV's DNN module, with whatever acceleration these chips offer is exciting. Does anyone have any knowledge on the matter? Can OpenCV benefit from these chips via OpenCL acceleration? I'm more than happy to read more on the topic; I may have missed some material online, so if you are aware of any resources I would be grateful if you could point me to them.

StereoRectify ROI Results

Hi, I don't always get a valid ROI from stereoRectify. The results I get are below.

Valid ROIs:

    0 31 1269 912
    0 0 0 0

Errors: left camera 0.234922, right camera 0.246169
Stereo reprojection error: 0.274664

How can I always get valid ROIs from stereoRectify?

Save and load MLP

Hi, I'm using OpenCV 3.3. How can I save and load an MLP?

Human body extraction with haar cascades from static background (opencv.js)

I have been using a "moving foreground from static background" colour method (opencv.js) to extract human bodies from a video stream. The algorithm works well when the body is moving, but it fails as soon as the body stops moving. I decided to use Haar cascades to solve this problem. How can I do foreground (human body) extraction with a human-body Haar cascade?

Why can't Eclipse find my library files

Eclipse is looking for the library files in the system MinGW64 installation rather than in the MinGW build described above.

Path expected: `C:\opencv_src\opencv-3.4.0\Mingw_build\install`

Paths searched:

    C:/minGW/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lopencv_calib3d340
    C:/minGW/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lopencv_core340
    C:/minGW/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lopencv_features2d340

If someone could point me in the right direction, a thousand thanks.

Unable to load custom Tensorflow model based on rfcn_resnet101_coco

I used `rfcn_resnet101_coco` as a base to fine-tune my model in TensorFlow. However, when I load it into OpenCV I get the following error:

    OpenCV Error: Unspecified error (Unknown layer type Cast in op ToFloat)

[This issue](https://github.com/opencv/opencv/issues/9830) suggests passing the `.pbtxt` file to `readNetFromTensorflow`. A simple Google search turns up `ssd_mobilenet_v1_coco.pbtxt`, but I am unable to find such a file for the specific model I'm using. Please guide me on how to load my fine-tuned model into OpenCV, or on how to create the `.pbtxt` file.

How to approach to parallelization of algorithms?

I want to parallelize an algorithm (an object tracker). How should I approach this topic? Is it only a matter of changing the namespace and syntax to the GPU equivalents, or is it a more demanding undertaking?

Should OpenCVConfig.cmake be part of dev debian package?

Hi, I am building my own OpenCV Debian packages, and when I create the dev one I see that the OpenCVConfig.cmake file is added to it. I would think that the dev package should only contain header files, not a file generated during compilation that records which libraries were used. This happens when those cmake files are added to the dev COMPONENT for CPack in OpenCVGenConfig.cmake:

    if(UNIX) # ANDROID configuration is created here also
      # http://www.vtk.org/Wiki/CMake/Tutorials/Packaging reference
      # For a command "find_package(<name> [major[.minor]] [EXACT] [REQUIRED|QUIET])"
      # cmake will look in the following dir on unix:
      # <prefix>/(share|lib)/cmake/<name>*/ (U)
      # <prefix>/(share|lib)/<name>*/ (U)
      # <prefix>/(share|lib)/<name>*/(cmake|CMake)/ (U)
      if(INSTALL_TO_MANGLED_PATHS)
        install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/ COMPONENT dev)
        install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig-version.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/ COMPONENT dev)
        install(EXPORT OpenCVModules DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}-${OPENCV_VERSION}/ FILE OpenCVModules${modules_file_suffix}.cmake COMPONENT dev)
      else()
        install(FILES "${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig.cmake" DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/ COMPONENT dev)
        install(FILES ${CMAKE_BINARY_DIR}/unix-install/OpenCVConfig-version.cmake DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/ COMPONENT dev)
        install(EXPORT OpenCVModules DESTINATION ${OPENCV_CONFIG_INSTALL_PATH}/ FILE OpenCVModules${modules_file_suffix}.cmake COMPONENT dev)
      endif()
    endif()

This is annoying because I have another application that can be built with or without CUDA and that downloads my OpenCV Debian packages. Its find_package(OpenCV) call looks for OpenCVConfig.cmake, which means I need to build two dev packages, a cuda-dev and a plain dev. Shouldn't OpenCVConfig.cmake be moved to the libs component? Thanks

How to do Unit Test in OpenCv?

I'm new to OpenCV. I have introduced a new API in the *VideoCapture* class and want to do **unit testing** for it. Can you explain how to write test cases for an API inside OpenCV, and what the steps are for adding them to the library? I don't know how the OpenCV developers have written the test cases for APIs like open(), get(), set()... Could someone help me with this? Thanks

Hi, I want to build OpenCV 3.4 with the extra modules for a Java application. What do I need to change in my configuration? The following error occurs every time:

    mingw32-make[2]: *** [modules\java\jar\CMakeFiles\opencv_java_jar.dir\build.make:61 CmakeFiles/dephelper/opencv_java_jar] Error 1
    mingw32-make[1]: *** [CMakeFiles\Makefile2:8181: modules/java/jar/CmakeFiles/opencv_java_jar.dir/all] Error 2
    mingw32-make: *** [MakeFile: 162: all] Error 2

opencv3.3.0 with TBB, libtbb-dev installed, but build fails

After my fifth attempt, and unable to progress any further, I now call on you all. I'm on Ubuntu 16.04, trying to build OpenCV 3.3.0 with its corresponding contrib modules. `libtbb-dev` is installed (`apt-get install` confirms it's already the latest version), and the .so libraries are here:

    /usr/lib/x86_64-linux-gnu/libtbb.so
    /usr/lib/x86_64-linux-gnu/libtbbmalloc_proxy.so
    /usr/lib/x86_64-linux-gnu/libtbb.so.2
    /usr/lib/x86_64-linux-gnu/libtbbmalloc_proxy.so.2
    /usr/lib/x86_64-linux-gnu/libtbbmalloc.so.2
    /usr/lib/x86_64-linux-gnu/libtbbmalloc.so

In my cmake invocation I did specify `-DWITH_TBB=ON`, but got the following error at the [83%] mark:

    No rule to make target '/usr/lib/libtbb.so', needed by 'lib/libopencv_sfm.so.3.3.0'.

I then specified a path to the TBB libraries with `-DTBB_LIB_DIR=/usr/lib/x86_64-linux-gnu`, but still to no avail: I get the same error. I have also checked the interactive cmake (ccmake), and it does show `TBB_ENV_LIB /usr/lib/x86_64-linux-gnu/libtbb.so`. So why, when libtbb is present, when I give its path, and when cmake confirms it sees it, does the build still look for a misplaced libtbb.so and fail?

How do you build OpenCV with LAPACK on Windows 10 via CMake?

Hello, I've been having trouble getting CMake to cooperate when telling it to build with LAPACK on **Windows 10**. I tried downloading the prebuilt files from [LAPACK for Windows](http://icl.cs.utk.edu/lapack-for-windows/lapack/index.html#lapacke): the files corresponding to version 3.7.0 that are supposedly for the Intel compilers, which I have. However, I am unsure which files I am supposed to point CMake to: the DLLs or the LIBs? I also followed advice from [here](https://stackoverflow.com/questions/40134261/building-opencv-3-1-on-windows-where-do-i-specify-the-lapack-library-location) that says to install OpenBLAS, which I did via [vcpkg](https://blogs.msdn.microsoft.com/vcblog/2016/09/19/vcpkg-a-tool-to-acquire-and-build-c-open-source-libraries-on-windows/). I pointed CMake to the appropriate file and location, but I receive this complaint from CMake:

> can't build lapack check code. this lapack version is not supported.

This happens with the prebuilt libraries as well as with libraries built from scratch, which I did using the Intel C and Fortran compilers and the instructions from the LAPACK for Windows website mentioned above (version 3.8.0). Maybe the new OpenBLAS doesn't work with this version? What do I need: BLAS or OpenBLAS? And what do I point to? Here are my missing fields so far: ![image description](/upfiles/15184287458369882.png) I have since stopped pointing CMake at the OpenBLAS file and location, but it is still installed. Any help would be appreciated.

Differences between Gray to YUV conversion (CvtColor VS Create a Mat with Y component)

AFAIK, a YUV image made from a Gray8 image is just the Gray8 data copied into the Y component, meaning the chrominances hold the "zero" value. Based on https://en.wikipedia.org/wiki/YUV#/media/File:Yuv420.svg, if I have a Gray8 Mat (height x width), my YUV420 result is a Mat (1.5*height x width). So I created a Mat whose first height*width bytes are the Y data, with the rest of the bytes set to **0**. But when I tried the cvtColor route, first Gray->BGR and then BGR->YUV420 (cvtColor has no direct conversion), the result is also a Mat (1.5*height x width), but the last values of each frame (the chrominances) are **128**, not 0. I don't understand this difference, but the second one (from cvtColor) seems to be the correct one: my own Mat created from the Gray8 data results in a green image... Any hints?

How to import Tensorflow's MobileNet into Opencv dnn?

Hi, I'm retraining MobileNet using TensorFlow's retrain.py script with the following command:

    python tensorflow/examples/image_retraining/retrain.py \
        --image_dir ~/trainingData/ \
        --learning_rate=0.001 \
        --testing_percentage=20 \
        --validation_percentage=20 \
        --train_batch_size=32 \
        --validation_batch_size=-1 \
        --eval_step_interval=100 \
        --how_many_training_steps=400 \
        --architecture mobilenet_1.0_224

This returns an `output_graph.pb`, which I'm trying to import into OpenCV. I'm following the steps from a previous answer (http://answers.opencv.org/question/183507/opencv-dnn-import-error-for-keras-pretrained-vgg16-model/?answer=183526#post-id-183526), but it isn't working yet:

1. Run the `optimize_for_inference.py` script. From the graph I know the input_names and output_names arguments are `input` and `final_result`, respectively.
2. Generate a `text_graph.pbtxt`.
3. Remove layers unimplemented in OpenCV, e.g. Flatten etc. (MobileNet doesn't have any Flatten layers, though.)

After doing this, I'm still not able to import the model into OpenCV, and I receive the following error message:

    cv2.error: C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:571: error: (-2) More than one input is Const op in function cv::dnn::experimental_dnn_v3::`anonymous-namespace'::TFImporter::getConstBlob

Any suggestions? I saw there was an SSD_MobileNet example, but I'm not interested in detections and I'm not sure how similar the graphs are.

EXC_BAD_ACCESS (heap buffer overflow) when using .at function

Hi all, I am using OpenCV 3.4.0 with C++ on macOS. I am trying to access the elements of a Mat instance like the following:

    cv::Mat overlay2 = cv::imread(getAssetsPath() + "overlay.png");
    for (int i = 0; i < overlay2.rows; ++i) {
        for (int j = 0; j < overlay2.cols; ++j) {
            std::cout << i << "x" << j << std::endl;
            auto vec = overlay2.at(i, j);
            std::cout << vec << std::endl;
        }
    }

This is causing a heap-buffer-overflow error:

    READ of size 4 at 0x00010bed0800 thread T0
        #0 0x10003a203 in cv::Matx::Matx(float const*) matx.hpp:665
        #1 0x10003a08b in cv::Vec::Vec(cv::Vec const&) matx.hpp:1030
        #2 0x10002aa22 in cv::Vec::Vec(cv::Vec const&) matx.hpp:1030
        #3 0x100028c6a in OpenCVImage::appendOverlay(OpenCVImage) Image.cpp:32
        #4 0x100067434 in testOverlay() main.cpp:45
        #5 0x10007762a in main main.cpp:136
        #6 0x7fff6b15a114 in start (libdyld.dylib:x86_64+0x1114)

    0x00010bed0800 is located 0 bytes to the right of 7077888-byte region [0x00010b810800,0x00010bed0800)
    allocated by thread T0 here:
        #0 0x1001da830 in wrap_posix_memalign (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x59830)
        #1 0x101388d20 in cv::fastMalloc(unsigned long) (libopencv_core.3.4.dylib:x86_64+0x2d20)
        #2 0x1014bbc56 in cv::StdMatAllocator::allocate(int, int const*, int, void*, unsigned long*, int, cv::UMatUsageFlags) const (libopencv_core.3.4.dylib:x86_64+0x135c56)
        #3 0x10148fece in cv::Mat::create(int, int const*, int) (libopencv_core.3.4.dylib:x86_64+0x109ece)
        #4 0x102135c5c in cv::imread_(cv::String const&, int, int, cv::Mat*) (libopencv_imgcodecs.3.4.dylib:x86_64+0x4c5c)
        #5 0x10213593f in cv::imread(cv::String const&, int) (libopencv_imgcodecs.3.4.dylib:x86_64+0x493f)
        #6 0x100028592 in OpenCVImage::appendOverlay(OpenCVImage) Image.cpp:27
        #7 0x100067434 in testOverlay() main.cpp:45
        #8 0x10007762a in main main.cpp:136
        #9 0x7fff6b15a114 in start (libdyld.dylib:x86_64+0x1114)

    SUMMARY: AddressSanitizer: heap-buffer-overflow matx.hpp:665 in cv::Matx::Matx(float const*)
    Shadow bytes around the buggy address:
    0x1000217da0b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x1000217da0c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x1000217da0d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x1000217da0e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x1000217da0f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    =>0x1000217da100:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
    0x1000217da110: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
    0x1000217da120: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
    0x1000217da130: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
    0x1000217da140: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
    0x1000217da150: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
    Shadow byte legend (one shadow byte represents 8 application bytes):
      Addressable:           00
      Partially addressable: 01 02 03 04 05 06 07
      Heap left redzone:     fa
      Freed heap region:     fd
      Stack left redzone:    f1
      Stack mid redzone:     f2
      Stack right redzone:   f3
      Stack after return:    f5
      Stack use after scope: f8
      Global redzone:        f9
      Global init order:     f6
      Poisoned by user:      f7
      Container overflow:    fc
      Array cookie:          ac
      Intra object redzone:  bb
      ASan internal:         fe
      Left alloca redzone:   ca
      Right alloca redzone:  cb
    ==79998==ABORTING

AddressSanitizer report breakpoint hit. Use 'thread info -s' to get extended information about the report.

How can I fix this problem?

K-means clustering (C++): how do I save each cluster separately in matrix form?

I want to save each cluster separately and display each cluster. My (C++) code already finds the clusters and labels. How can I get each cluster as its own matrix?

K-Nearest for handwritten letters

I'm working with this data set, which is slightly unbalanced across the letter classes: [link text](https://github.com/tuptaker/uppercase_letters) I've tried to adapt the examples described in this tutorial: [link text](https://docs.opencv.org/trunk/d8/d4b/tutorial_py_knn_opencv.html) Specifically, I'm focused on adapting the first example, for digits, since I don't have a *.data file for handwritten letters but rather a sample of ~12K handwritten letters represented as *.png files. As I load the samples, I bucket them roughly evenly into training and testing data (and labels), after which I do the requisite resizing, reshaping, etc. Unfortunately, I'm getting 0 matches during the testing step and am having a tough time finding where I went wrong. Can someone have a look at my code below?

    import cv2
    import numpy as np
    import matplotlib.pyplot as plt
    import time
    import glob
    from random import shuffle
    import os

    class LetterRecognizer:
        __debugging_data_path = "./debugging_data"
        __letter_images_dir = "./letters_upper"
        __letters_for_training = []
        __letters_for_testing = []

        def __init__(self):
            self.knn_model = cv2.ml.KNearest_create()
            print("HWR-ICR-ENG: LetterRecognizer: initialized.")

        def create_trained_knn_model(self):
            #letter_arr = self.__load_uppercase_data()
            training_labels_raw, \
            training_data_raw, \
            testing_labels_raw, \
            testing_data_raw = self.__load_uppercase_data_and_labels()
            gray_training_data = [cv2.cvtColor(letter, cv2.COLOR_BGR2GRAY) for letter in training_data_raw]
            gray_testing_data = [cv2.cvtColor(letter, cv2.COLOR_BGR2GRAY) for letter in testing_data_raw]
            gray_training_data_20_by_20 = [cv2.resize(letter, (20, 20)) for letter in gray_training_data]
            gray_testing_data_20_by_20 = [cv2.resize(letter, (20, 20)) for letter in gray_testing_data]

            # shape is (6245, 20, 20)
            training_data_np = np.array(gray_training_data_20_by_20)
            training_data = training_data_np[:,:6245].reshape(-1,400).astype(np.float32)

            # shape is (6234, 20, 20)
            testing_data_np = np.array(gray_testing_data_20_by_20)
            testing_data = training_data_np[:,:6234].reshape(-1,400).astype(np.float32)

            training_label_np = np.array(training_labels_raw)
            testing_label_np = np.array(testing_labels_raw)
            training_labels = np.repeat(training_labels_raw, 1)[:, np.newaxis]
            testing_labels = np.repeat(testing_labels_raw, 1)[:, np.newaxis]

            # Initiate kNN, train the data, then test it with test data for k=1
            self.knn_model = cv2.ml.KNearest_create()
            training_start_time = time.clock()
            self.knn_model.train(training_data, cv2.ml.ROW_SAMPLE, training_labels)
            print("HWR-ICR-ENG: Letter Recognizer: Training duration: ", time.clock() - training_start_time)
            testing_start_time = time.clock()
            ret,result,neighbours,dist = self.knn_model.findNearest(testing_data, k = 5)
            print("HWR-ICR-ENG: Letter Recognizer: Testing duration: ", time.clock() - testing_start_time)

            # Now we check the accuracy of classification
            # For that, compare the result with test_labels and check which are wrong
            matches = result == testing_labels
            correct = np.count_nonzero(matches)
            accuracy = correct*100.0/result.size
            return self.knn_model, accuracy

        def __load_uppercase_data_and_labels(self):
            training_labels = []
            training_data = []
            testing_labels = []
            testing_data = []
            for letter_label, (subdir, dirs, files) in enumerate(os.walk(self.__letter_images_dir)):
                for letter_index, (file) in enumerate(files):
                    if file.endswith('.png'):
                        if letter_index % 2 is 0:
                            training_labels.append(letter_label)
                            training_data.append(cv2.imread(os.path.join(subdir, file)))
                        else:
                            testing_labels.append(letter_label)
                            testing_data.append(cv2.imread(os.path.join(subdir, file)))
            return training_labels, training_data, testing_labels, testing_data
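Independent of the kNN parameters, it is worth sanity-checking the reshape bookkeeping in isolation: each 20x20 sample should become one 400-element row, labels should be one column of floats, and the testing matrix should be built strictly from the testing samples. A generic numpy sketch of that invariant, with fake data:

```python
import numpy as np

# 10 fake 20x20 "letters" and one integer label per sample.
samples = np.random.randint(0, 256, (10, 20, 20)).astype(np.float32)
labels = np.arange(10, dtype=np.float32)

train = samples[0::2].reshape(-1, 400)   # rows = samples, cols = 400 pixels
test = samples[1::2].reshape(-1, 400)    # built from its own slice, not from train
train_labels = labels[0::2][:, np.newaxis]
test_labels = labels[1::2][:, np.newaxis]

print(train.shape, test.shape)  # (5, 400) (5, 400)
```

If the sample counts differ between the label vectors and the reshaped data (as 6245 vs 6234 would here), `findNearest`'s results can no longer line up row-for-row with the test labels, and the accuracy comparison degenerates.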

