I have taken the binary file from the link "https://github.com/CansenJIANG/SCBU" and I'm attaching the error file too. Can anyone check this binary file, please? My configuration is Windows 7 with Visual Studio 2010. Thanks.
↧
Hello all, currently I'm trying to run the binary file of the paper "Scene Conditional Background Update for Moving Object Detection in a Moving Camera", but I'm getting error 0xc000007b.
↧
Windows Runtime Component (C++ based) in UWP - Load and display image
This is the first time I'm using OpenCV Answers to post questions.
I want to use OpenCV to load an image and display it in a window, but through a Windows Runtime component that a UWP app instantiates and calls a function from. I'm stuck on this error, though:
Exception thrown at 0x00007FFAAC1E3E1F (opencv_highgui310d.dll) in OpenCVImageViewerTest.exe: 0xC0000005: Access violation reading location 0x0000000000000000.
This is from this line:
imshow("Test image", image);
The component is built to run on x64, Debug. The libraries I specified in Linker > Input > Additional Dependencies all have the d suffix before the .lib extension.
This is the code for my Windows Runtime Component:
[This is my first time writing a Windows Runtime component; please don't mind if I have a few inconsistencies such as having too many includes]
*OpenCVImageOpener.h:*
#pragma once
// OpenCV Dependent Libraries
#include
#include
#include
// WinRT Dependent Headers
#include
#include
#include
#include
// OpenCV Namespace
using namespace cv;
namespace OpenCVImage
{
// Have this class be compiled as a WinRT Component for a UWP app
public ref class OpenCVImageOpener sealed
{
public:
void loadImage(Platform::String^ filename); // Loads image and displays it on the screen
};
};
*OpenCVImageOpener.cpp:*
// Microsoft Visual Studio friendly includes
#include "pch.h"
// C++ Standard Libraries
#include
#include
#include
#include
#include
#include
#include
// Windows Runtime Component libraries
#include
#include
// Custom Header Files
#include "OpenCVImageOpener.h"
// Namespaces
using namespace OpenCVImage;
using namespace cv;
using namespace std;
using namespace concurrency;
using namespace Platform::Collections;
using namespace Windows::Foundation::Collections;
using namespace Windows::Foundation;
using namespace Windows::UI::Core;
void OpenCVImageOpener::loadImage(Platform::String^ filename) {
// Debugging statements omitted for brevity
std::wstring filenameBuffer(filename->Begin());
string rawFilenameString(filenameBuffer.begin(), filenameBuffer.end());
Mat image = imread(rawFilenameString.c_str());
imshow("Test image", image); // Error occurs right here
}
The UWP app that calls the function loadImage is simply a window with one button with an event listener that runs the function in a thread separate from the UI Thread.
I'm using Visual Studio 2017 with five NuGet packages:
- OpenCV.Win.Core.310.6.1
- OpenCV.Win.HighGUI.310.6.1
- OpenCV.Win.ImgCodecs.310.6.1
- OpenCV.Win.ImgProc.310.6.1
- OpenCV.Win.VideoIO.310.6.1
I looked around online to see if other users encountered the same issue. [This question posted here](http://answers.opencv.org/question/181074/c-access-violation-exception-in-the-function-findcontours/) caught my attention, but I don't know whether it is related to the version of Visual Studio I'm using.
Otherwise, why is this error happening?
↧
↧
capture an image android
Is there a tutorial anywhere that clearly describes the steps necessary to capture an image
using OpenCV4Android? I have looked at this S/O post
https://stackoverflow.com/questions/42900906/take-picture-using-camerabridgeviewbase-on-opencv
which references this tutorial
https://docs.nvidia.com/gameworks/content/technologies/mobile/opencv_tutorial_camera_preview.htm
as an example.
In this example it states:
On receiving a new frame, the activity class does not process it in any way, and simply returns it, displaying as:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
return inputFrame.rgba();
}
I placed a breakpoint on this which never gets hit; however, I imagine that inputFrame.rgba()
is what I'm looking for, i.e. I think that is the image, and I need to do processing on a single image, not the feed.
I do not wish to write the image to disk; I will do some processing on the image in memory to extract data,
that data will be written to a database and the image discarded.
At present I have an application which displays the camera feed. To do this I simply implement
CameraBridgeViewBase.CvCameraViewListener2
↧
Static linking help needed when including the viz module (VTK)
Hey all!
I'm having an issue I've been fighting with all weekend. I'm trying to build a fairly simple example which uses the viz module in C++. I've built OpenCV 3.4 and OpenCV-contrib along with VTK 8.1.0 as static libs, and all seems to be well there. But after adding all the libs and appropriate includes I get 19 build errors I can't seem to resolve, and they all seem to be related to the OpenGL libraries within VTK.
opencv_viz340d.lib(opencv_viz_pch.obj) : error LNK2019: unresolved external symbol "void __cdecl vtkRenderingOpenGL2_AutoInit_Construct(void)" (?vtkRenderingOpenGL2_AutoInit_Construct@@YAXXZ) referenced in function "public: __thiscall vtkRenderingCore_AutoInit::vtkRenderingCore_AutoInit(void)" (??0vtkRenderingCore_AutoInit@@QAE@XZ)
opencv_viz340d.lib(opencv_viz_pch.obj) : error LNK2019: unresolved external symbol "void __cdecl vtkRenderingOpenGL2_AutoInit_Destruct(void)" (?vtkRenderingOpenGL2_AutoInit_Destruct@@YAXXZ) referenced in function "public: __thiscall vtkRenderingCore_AutoInit::~vtkRenderingCore_AutoInit(void)" (??1vtkRenderingCore_AutoInit@@QAE@XZ)
vtkRenderingGL2PSOpenGL2-8.1.lib(vtkOpenGLGL2PSHelperImpl.obj) : error LNK2019: unresolved external symbol "public: virtual void __thiscall vtkOpenGLGL2PSHelper::PrintSelf(class std::basic_ostream>&,class vtkIndent)" (?PrintSelf@vtkOpenGLGL2PSHelper@@UAEXAAV?$basic_ostream@DU?$char_traits@D@std@@@std@@VvtkIndent@@@Z) referenced in function "public: virtual void __thiscall vtkOpenGLGL2PSHelperImpl::PrintSelf(class std::basic_ostream>&,class vtkIndent)" (?PrintSelf@vtkOpenGLGL2PSHelperImpl@@UAEXAAV?$basic_ostream@DU?$char_traits@D@std@@@std@@VvtkIndent@@@Z)
vtkRenderingGL2PSOpenGL2-8.1.lib(vtkOpenGLGL2PSHelperImpl.obj) : error LNK2019: unresolved external symbol "protected: __thiscall vtkOpenGLGL2PSHelper::vtkOpenGLGL2PSHelper(void)" (??0vtkOpenGLGL2PSHelper@@IAE@XZ) referenced in function "protected: __thiscall vtkOpenGLGL2PSHelperImpl::vtkOpenGLGL2PSHelperImpl(void)" (??0vtkOpenGLGL2PSHelperImpl@@IAE@XZ)
vtkRenderingGL2PSOpenGL2-8.1.lib(vtkOpenGLGL2PSHelperImpl.obj) : error LNK2019: unresolved external symbol "protected: virtual __thiscall vtkOpenGLGL2PSHelper::~vtkOpenGLGL2PSHelper(void)" (??vtkOpenGLGL2PSHelper@@MAE@XZ) referenced in function "protected: virtual __thiscall vtkOpenGLGL2PSHelperImpl::~vtkOpenGLGL2PSHelperImpl(void)" (??1vtkOpenGLGL2PSHelperImpl@@MAE@XZ)
vtkRenderingGL2PSOpenGL2-8.1.lib(vtkOpenGLGL2PSHelperImpl.obj) : error LNK2019: unresolved external symbol "public: unsigned int __thiscall vtkTransformFeedback::GetBytesPerVertex(void)const " (?GetBytesPerVertex@vtkTransformFeedback@@QBEIXZ) referenced in function "public: virtual void __thiscall vtkOpenGLGL2PSHelperImpl::ProcessTransformFeedback(class vtkTransformFeedback *,class vtkRenderer *,float * const)" (?ProcessTransformFeedback@vtkOpenGLGL2PSHelperImpl@@UAEXPAVvtkTransformFeedback@@PAVvtkRenderer@@QAM@Z)
vtkRenderingGL2PSOpenGL2-8.1.lib(vtkOpenGLGL2PSHelperImpl.obj) : error LNK2019: unresolved external symbol "public: unsigned int __thiscall vtkTransformFeedback::GetBufferSize(void)const " (?GetBufferSize@vtkTransformFeedback@@QBEIXZ) referenced in function "public: virtual void __thiscall vtkOpenGLGL2PSHelperImpl::ProcessTransformFeedback(class vtkTransformFeedback *,class vtkRenderer *,float * const)" (?ProcessTransformFeedback@vtkOpenGLGL2PSHelperImpl@@UAEXPAVvtkTransformFeedback@@PAVvtkRenderer@@QAM@Z)
vtkRenderingGL2PSOpenGL2-8.1.lib(vtkOpenGLGL2PSHelperImpl.obj) : error LNK2019: unresolved external symbol __imp__glGetDoublev@8 referenced in function "protected: static void __cdecl vtkOpenGLGL2PSHelperImpl::GetTransformParameters(class vtkRenderer *,class vtkMatrix4x4 *,class vtkMatrix4x4 *,double * const,double * const,double * const)" (?GetTransformParameters@vtkOpenGLGL2PSHelperImpl@@KAXPAVvtkRenderer@@PAVvtkMatrix4x4@@1QAN22@Z)
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glBegin@4 referenced in function _gl2psDrawImageMap
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glEnd@0 referenced in function _gl2psDrawImageMap
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glFeedbackBuffer@12 referenced in function _gl2psBeginPage
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glGetBooleanv@8 referenced in function _gl2psDrawPixels
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glGetFloatv@8 referenced in function _gl2psBeginPage
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glGetIntegerv@8 referenced in function _gl2psBeginPage
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glIsEnabled@4 referenced in function _gl2psBeginPage
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glPassThrough@4 referenced in function _gl2psDrawPixels
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glRenderMode@4 referenced in function _gl2psBeginPage
vtkgl2ps-8.1.lib(gl2ps.obj) : error LNK2019: unresolved external symbol __imp__glVertex3f@12 referenced in function _gl2psDrawImageMap
I don't quite understand the error. I've looked at the VTK code and clearly there is a constructor defined as well as other methods mentioned, and the vtkRenderingOpenGL2-8.1.lib is referenced in the application (I referenced all libs). Any help would be appreciated!
↧
How to remove this error in javacv....
OpenCV Error: Assertion failed (s >= 0) in unknown function, file ..\..\..\src\opencv\modules\core\src\matrix.cpp, line 116
Exception in thread "AWT-EventQueue-0" java.lang.RuntimeException: ..\..\..\src\opencv\modules\core\src\matrix.cpp:116: error: (-215) s >= 0
The code is:
if (imageFiles.size() == 0)
{
    empty = true;
}
if (!empty)
{
    images = new MatVector(imageFiles.size());
    System.out.println(imageFiles.size());
    labels = new int[imageFiles.size()];
    for (int x=0;x
↧
↧
Open new window while program is running with Trackbar
Hey,
I want to use a trackbar to open a new window with more trackbars if the first trackbar gets set to 1.
I tried getting the trackbar position with getTrackbarPos(), which is working fine, but if I set it to 1 no window gets created.
Am I missing something, or is this just not possible?
createTrackbar("On off", // off / on
"menu", &cannyval,
1, NULL);
int cannyonoff = getTrackbarPos("On off", "menu");
if ( cannyonoff == 1)
{
namedWindow("cannyedge", CV_WINDOW_AUTOSIZE);
createTrackbar("lowThresh", // lower Threshold
"cannyedge", &lowThreshold,
100, NULL);
createTrackbar("ratio", // ratio
"cannyedge", &ratio,
50, NULL);
createTrackbar("Kernel Size", // Kernel Size
"cannyedge", &kernel_size,
21, NULL);
}
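If the code above runs only once (e.g. during setup) rather than inside the main processing loop, getTrackbarPos is read before the slider has ever been moved, so the new window is never created. The usual fix is to either poll the position on every loop iteration or react inside the trackbar callback instead of passing NULL. A minimal sketch of the callback approach, written in Python for brevity (the C++ createTrackbar callback works the same way); the window and trackbar names are only placeholders:
import cv2

def on_onoff(value):
    # called by HighGUI every time the "On off" slider moves
    if value == 1:
        cv2.namedWindow("cannyedge", cv2.WINDOW_AUTOSIZE)
        cv2.createTrackbar("lowThresh", "cannyedge", 0, 100, lambda v: None)
        cv2.createTrackbar("ratio", "cannyedge", 0, 50, lambda v: None)
        cv2.createTrackbar("Kernel Size", "cannyedge", 0, 21, lambda v: None)
    else:
        cv2.destroyWindow("cannyedge")

cv2.namedWindow("menu")
cv2.createTrackbar("On off", "menu", 0, 1, on_onoff)
while True:
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
        break
cv2.destroyAllWindows()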
↧
Real Time Image Processing in Android Studio
I am trying to call the OpenCV library to do image processing in real time for a Java camera application. Below is the error log. Could this be a problem with my CMakeLists.txt file? Note that the program was working before I added 2 functions to the native-lib.cpp file.
C:/Users/Akira/AndroidStudioProjects/OpencvCamera2/app/src/main/jniLibs/armeabi-v7a/libopencv_java3.so -llog -latomic -lm "C:/Android/android-sdk/ndk-bundle/sources/cxx-stl/gnu-libstdc++/4.9/libs/x86_64/libgnustl_static.a" && cd ."
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:/Android/android-sdk/ndk-bundle/toolchains/x86_64-4.9/prebuilt/windows-x86_64/lib/gcc/x86_64-linux-android/4.9.x/../../../../x86_64-linux-android/bin\ld: error: C:/Users/Akira/AndroidStudioProjects/OpencvCamera2/app/src/main/jniLibs/armeabi-v7a/libopencv_java3.so: incompatible target
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:\Users\Akira\Documents\Academic\Spring 2018 Courses\CS 309\OpenCVTest\OpencvCamera2\app\src\main\cpp/native-lib.cpp:27: error: undefined reference to 'cv::FastFeatureDetector::create(int, bool, int)'
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:\Users\Akira\Documents\Academic\Spring 2018 Courses\CS 309\OpenCVTest\OpencvCamera2\app\src\main\cpp/native-lib.cpp:28: error: undefined reference to 'cv::noArray()'
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:\Users\Akira\Documents\Academic\Spring 2018 Courses\CS 309\OpenCVTest\OpencvCamera2\app\src\main\cpp/native-lib.cpp:31: error: undefined reference to 'cv::circle(cv::_InputOutputArray const&, cv::Point_, int, cv::Scalar_ const&, int, int, int)'
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:\Users\Akira\Documents\Academic\Spring 2018 Courses\CS 309\OpenCVTest\OpencvCamera2\app\src\main\cpp/native-lib.cpp:43: error: undefined reference to 'cv::cvtColor(cv::_InputArray const&, cv::_OutputArray const&, int, int)'
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:\Users\Akira\Documents\Academic\Spring 2018 Courses\CS 309\OpenCVTest\OpencvCamera2\app\src\main\cpp/native-lib.cpp:46: error: undefined reference to 'cv::equalizeHist(cv::_InputArray const&, cv::_OutputArray const&)'
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:\Users\Akira\Documents\Academic\Spring 2018 Courses\CS 309\OpenCVTest\OpencvCamera2\app\src\main\cpp/native-lib.cpp:51: error: undefined reference to 'cv::Mat::convertTo(cv::_OutputArray const&, int, double, double) const'
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:/opencv-3.2.0-android-sdk/OpenCV-android-sdk/sdk/native/jni/include\opencv2/core/mat.inl.hpp:592: error: undefined reference to 'cv::fastFree(void*)'
15:19:00.659 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] C:/opencv-3.2.0-android-sdk/OpenCV-android-sdk/sdk/native/jni/include\opencv2/core/mat.inl.hpp:704: error: undefined reference to 'cv::Mat::deallocate()'
Tetragramm Edit: Snipped the giant wall of log and added some spaces. I think I kept the important bits.
↧
OpenCVjs with Features2D
Hello,
I am trying to build the OpenCV.js bindings; however, I think this may in fact be a CMake question.
Using the steps from here I am able to build the js bindings.
https://docs.opencv.org/3.3.1/d4/da1/tutorial_js_setup.html
I would like to add Features2D, which has also been done in https://github.com/ucisysarch/opencvjs (which won't build as reliably as OpenCV.js, although I believe it is the same project).
Here is what I have tried:
1) Change -DBUILD_opencv_features2d=ON
2) Add #include "opencv2/features2d.hpp" to the bindings.cpp
This produces a CMake output with:
-- OpenCV modules:
-- To be built: core **features2d** imgproc java_bindings_generator js python_bindings_generator video
-- Disabled: etc.etc.etc..
However, when the make process runs it reaches 100% but never tries to build anything from features2d. Each of the other steps was done, i.e.:
**The CMAKE output**
[0%]
.....
[86%] Building CXX object modules/imgproc/CMakeFiles/openv_imgproc.dir/src/tables.cpp.o
.....
[99%]
Generating bindings.cpp
Does anyone have any advice here on why the make process would not be building features2d? Is it dependent on another module that must be included?
Regards,
Daniel
LINKS
GSOC https://gist.github.com/pancx/2778b72090782b6e5b47af13a32b0d7d
↧
I'm working with OpenCV on Android and iOS. I have an image with a receipt. Can I crop only the receipt in that image? Please give some solutions, thanks. See the input and output images below.
[input image]
[output image]
↧
↧
How to make use of OpenCV source codes instead of its shared libraries
I have a project at hand in which I want to use one of the OpenCV modules (specifically dnn).
Instead of building the dnn module, I want to use the source code of this module in my project. By doing so, I can change the source code and see the results right away.
I have a very simple scenario with only one source file:
main.cpp
#include "iostream"
#include
int main(int argc, char *argv[])
{
std::string ConfigFile = "tsproto.pbtxt";
std::string ModelFile = "tsmodel.pb";
cv::dnn::Net net = cv::dnn::readNetFromTensorflow(ModelFile,ConfigFile);
return 0;
}
Now, this function cv::dnn::readNetFromTensorflow is in the dnn module. I tried many different methods to embed the dnn source code inside my project, but all of them failed!
For example, the first time I tried to include every cpp and hpp file from the modules/dnn/ folder of OpenCV in my project, but I ended up with errors like
/home/user/projects/Tensor/tf_importer.cpp:28: error: 'CV__DNN_EXPERIMENTAL_NS_BEGIN' does not name a type
#include "../precomp.hpp" no such file or directory
HAVE_PROTOBUF is undefined
and ....
I tried to solve these errors, but more errors just appeared: more undefined macros and more missing hpp files!
#include "../layers_common.simd.hpp" no such file or directory
and many, many more errors!
It seems that I'm stuck in a while(true) loop of errors! Is it really that hard to use the source code of an OpenCV module?
P.S.
For those who are asking why I want to use the OpenCV source code instead of the shared libraries: I want to import a customized TensorFlow model which the OpenCV read function doesn't support, and I want to know where exactly it crashes so I can fix it.
By the way, I am only using C++11 functions and gcc as the compiler on Ubuntu 16.04.
↧
Foreground extraction with colors, not threshold (OpenCV.js)
Hello guys. I've used the tutorial below to do background subtraction.
https://docs.opencv.org/3.3.1/de/df4/tutorial_js_bg_subtraction.html
Basically I want to subtract the background and get the foreground objects in color, not as a threshold mask. I did some research on Google and found a similar question on Stack Overflow:
https://stackoverflow.com/questions/32323190/opencv-background-subtraction-get-color-objects
But I can't find the frame.copyTo function in OpenCV.js.
How can I do foreground extraction with colors?
How can I do that with OpenCV.js?
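For reference, the usual approach (sketched here in Python, not OpenCV.js) is to take the foreground mask produced by the background subtractor and mask the original frame with bitwise_and, which keeps the foreground in its original colors. The video file name is a placeholder; bitwise_and should also be available in OpenCV.js builds, although that is worth confirming against your build's whitelist.
import cv2

cap = cv2.VideoCapture("video.mp4")  # placeholder input
fgbg = cv2.createBackgroundSubtractorMOG2()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = fgbg.apply(frame)  # 0 = background, 127 = shadow, 255 = foreground
    _, fgmask = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    foreground = cv2.bitwise_and(frame, frame, mask=fgmask)  # keep original colors
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()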
↧
Why can I get contours from a 'thresh' image but not from a 'mask' image when using VideoCapture?
import cv2
import numpy as np
cap = cv2.VideoCapture(0)
while (1):
    _, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    ret, thresh = cv2.threshold(gray, 50, 255, 0)
    # Convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # define range of blue color in HSV
    lower_blue = np.array([110, 175, 50])
    upper_blue = np.array([130, 255, 255])
    # Threshold the HSV image to get only blue colors
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    _, contours, hierarchy = cv2.findContours(thresh, 1, 2)
    cnt = contours[0]
    #_, contours, hierarchy = cv2.findContours(mask, 1, 2)
    #cnt = contours[0]
    ## IndexError: list index out of range
    cv2.imshow('frame', frame)
    cv2.imshow('mask', mask)
    cv2.imshow('thresh', thresh)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
I've been able to determine that both 'thresh' and 'mask' are matrices each of 480 rows and 640 columns and each cell contains either a '0' or '255'. Also, 'thresh' and 'mask' are of type 'numpy.ndarray'.
So why is this valid:
_, contours, hierarchy = cv2.findContours(thresh, 1, 2)
cnt = contours[0]
But this is not:
_, contours, hierarchy = cv2.findContours(mask, 1, 2)
cnt = contours[0]
# cnt = contours[0]
# IndexError: list index out of range
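The most likely explanation is the content of the mask, not its type: thresh (a gray image thresholded at 50) almost always contains some white region, so its contour list is non-empty, while inRange returns an all-zero mask whenever nothing in the frame falls inside the blue HSV range; findContours then returns an empty list and contours[0] raises IndexError. A sketch of a guard, meant as a drop-in for the commented-out lines inside the loop and keeping the 3-value findContours signature used in the question (OpenCV 3.x):
_, contours, hierarchy = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
if len(contours) > 0:
    cnt = max(contours, key=cv2.contourArea)  # largest blue region
    cv2.drawContours(frame, [cnt], -1, (0, 255, 0), 2)
else:
    print("no blue pixels in this frame, mask is all zeros")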
↧
convert from bitmap to mat
I am using Android Intent to take a photo.
startActivityForResult(intent, 0);
This opens the native camera and I can take a picture. In the Activity where the picture is returned to, in the onActivityResult method, I can get hold of the returned Bitmap like so:
if (requestCode == 0 && resultCode == RESULT_OK) {
Bundle extras = data.getExtras();
Bitmap imageBitmap = (Bitmap) extras.get("data");
I then wish to convert this to a Mat to do some OpenCV image processing. I am attempting to convert it like this:
Mat src = new Mat(imageBitmap.getHeight(), imageBitmap.getWidth(), CvType.CV_8UC4);
Utils.bitmapToMat(imageBitmap, src);
The line above causes a crash with the following stack trace:
E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.i330155.testing123, PID: 16349
java.lang.UnsatisfiedLinkError: No implementation found for long org.opencv.core.Mat.n_Mat(int, int, int)
(tried Java_org_opencv_core_Mat_n_1Mat and Java_org_opencv_core_Mat_n_1Mat__III)
at org.opencv.core.Mat.n_Mat(Native Method)
at org.opencv.core.Mat.<init>(Mat.java:37)
at com.example.testing123.MainActivity.onActivityResult(MainActivity.java:48)
at android.app.Activity.dispatchActivityResult(Activity.java:7022)
at android.app.ActivityThread.deliverResults(ActivityThread.java:4248)
at android.app.ActivityThread.handleSendResult(ActivityThread.java:4295)
at android.app.ActivityThread.-wrap20(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1583)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6290)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:886)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:776)
After searching I found the following post: http://answers.opencv.org/question/52722/what-is-the-correct-way-to-convert-a-mat-to-a-bitmap/
which says that:
mat is a valid input Mat object of the types 'CV_8UC1', 'CV_8UC3' or 'CV_8UC4'.
bmp is a valid Bitmap object of the same size as the Mat and of type 'ARGB_8888' or 'RGB_565'.
As you can see, in the constructor for the Mat I supply these arguments. I have checked the config of the imageBitmap, which confirms that it is indeed 'RGB_565'.
What am I missing here? I don't understand why this does not work. Thanks in advance.
↧
↧
Trying to calculate histogram on Android and find the median. Unsure how to access histogram data from Mat
I'm trying to find the median, which I believe can be done by using the code given here: http://tech.dir.groups.yahoo.com/group/OpenCV/message/23809
What I have so far in Java is the histogram calculation working fine (I assume), and at that point I don't know what to put inside my calcMedianOfHist function. The issue I'm running into is that I don't know how to access the bins of the histogram. The closest I've come to finding out how to do this is looking at this example for C++ code, but that doesn't apply to Java. (http://docs.opencv.org/2.4.3rc/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html)
The method there says to do hist.at<float>(num), but I cannot do that in Java.
I'm also not 100% sure if I should be normalizing the hist like this or not.
Any help is appreciated, thank you.
ArrayList list = new ArrayList();
list.add(mGraySubmat);
MatOfInt one = new MatOfInt(0);
int median = 0;
hist = new Mat();
MatOfInt histSize = new MatOfInt(25);
MatOfFloat range = new MatOfFloat(0f, 256f);
Imgproc.calcHist(Arrays.asList(mGraySubmat), one, new Mat(), hist, histSize, range);
Core.normalize(hist, hist);
median = calcMedianOfHist(hist, mGraySubmat.cols(), mGraySubmat.rows());
System.out.println("Median is: " + median);
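In Java the bin counts can be read back with hist.get(i, 0)[0] (the hist Mat produced by calcHist is histSize rows by 1 column of floats). One caveat: Core.normalize rescales the counts, so either skip it or compare the cumulative sum against half of the normalized total rather than half the pixel count. The median-from-histogram logic itself is just a cumulative sum; here it is sketched in Python/NumPy as an illustration of what calcMedianOfHist could do (with 25 bins the result is a bin index that still has to be mapped back to an intensity):
import numpy as np

def median_from_hist(hist, total_pixels):
    # hist: 1-D array of raw bin counts; total_pixels: rows * cols of the image
    cumulative = np.cumsum(hist)
    # index of the first bin where the cumulative count reaches half the pixels
    return int(np.searchsorted(cumulative, total_pixels / 2.0))

# toy check against NumPy's own median on a synthetic 8-bit image
img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print(median_from_hist(hist, img.size), np.median(img))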
↧
Split a Mat into parts, based on each containing three colors?
I am looking at speeding up an alpha matting algorithm. To do this I need to cut a mat of this image:
[three-color input image]
into the smallest possible sections, where each section contains all three of the possible colors (the alpha matting algorithm requires all three to compute the matte).
It will not always be the same image, so how can I automatically split a Mat into submats based on all three colors being present?
Thank you.
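A simple starting point (a fixed grid rather than the truly smallest partition) is to split the image into tiles and keep only the tiles in which all three values occur; those tiles can then be subdivided further if needed. A Python/NumPy sketch, assuming the three colors map to single-channel values 0, 128 and 255 and that the file name is a placeholder:
import cv2
import numpy as np

VALUES = (0, 128, 255)  # assumed values: background, unknown, foreground

def blocks_with_all_colors(trimap, block):
    # split a single-channel image into block x block tiles and keep only
    # the tiles that contain all three values
    h, w = trimap.shape
    tiles = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = trimap[y:y + block, x:x + block]
            if all((tile == v).any() for v in VALUES):
                tiles.append(((y, x), tile))
    return tiles

trimap = cv2.imread("trimap.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
print(len(blocks_with_all_colors(trimap, 64)))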
↧
Does OpenCV run on platforms with a different int size?
I'm writing a library on top of OpenCV and I have a question about cross-platform portability.
So my question is: does OpenCV run if the `int` size is something other than 32 bits, e.g. 16, 64 or 128? Because if yes, I'd like to support those platforms; otherwise, it would simplify my high-level interfaces.
↧
Extracting feature vector from OpenCV's LBP implementation
I'm developing an AI program that can be used to detect the emotions of people's faces in images, and I've stumbled across OpenCV, which would be perfect for the face detection stage of the program. The only problem is that I need to be able to access the feature vector that the Local Binary Patterns classifier produces, so that I can feed this vector to the AI as input data.
So the process would be:
1. Image presented to LBP classifier
2. LBP classifier produces feature vector for image
3. Feature vector passed to AI for processing
4. AI detects emotion using feature vector
Is there any way to do this? It doesn't matter whether this is done in Java or Python, I just need a way of extracting the feature vector.
Alternatively, is there a different way I can process the pixel content of the images of people's faces so that I have data I can pass to an AI?
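One thing worth noting: OpenCV's LBP-based CascadeClassifier only returns detection rectangles and does not expose the internal features it evaluates. A common workaround is to compute a plain LBP histogram over the detected face region yourself and use that as the feature vector. A minimal NumPy sketch (basic 3x3 LBP, 256-bin histogram; the file name is a placeholder):
import cv2
import numpy as np

def lbp_histogram(gray):
    # gray: 8-bit single-channel face image
    h, w = gray.shape
    center = gray[1:h - 1, 1:w - 1]
    code = np.zeros_like(center, dtype=np.uint8)
    # 8 neighbours in clockwise order, each contributing one bit of the LBP code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    # the normalized 256-bin histogram of LBP codes is the feature vector
    hist = cv2.calcHist([code], [0], None, [256], [0, 256]).flatten()
    return hist / hist.sum()

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
features = lbp_histogram(face)
print(features.shape)  # (256,)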
↧
↧
how to capture multiple images from live webcam streaming through opencv-python
This is my code: [screenshot](/upfiles/15181117452444152.png)
I want to store images like num1, num2, ... and so on until I press the q button.
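A minimal sketch of that loop, assuming the default webcam and a num1.png, num2.png, ... naming scheme (adjust the format and path as needed):
import cv2

cap = cv2.VideoCapture(0)
count = 1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("webcam", frame)
    cv2.imwrite("num%d.png" % count, frame)  # save each frame with an incrementing name
    count += 1
    if cv2.waitKey(1) & 0xFF == ord('q'):  # stop when q is pressed
        break
cap.release()
cv2.destroyAllWindows()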
↧
Detect US Speed limit signs.
Does anyone know of any projects, completed or underway, to detect speed limit signs in dashcam footage?
Any leads appreciated.
↧
[BFMatcher] 'list' object has no attribute 'trainIdx'
Hello,
I am trying to get attributes from the result of a Matcher object.
Even though I can successfully get the `distance` attribute, I can't get the `trainIdx` one.
> print(matches.trainIdx[:10])
AttributeError: 'list' object has no attribute 'trainIdx'

As it should be possible, as described [here](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html):

> The result of the matches = bf.match(des1,des2) line is a list of DMatch objects. This DMatch object has the following attributes:
>
> - DMatch.distance - Distance between descriptors. The lower, the better it is.
> - DMatch.trainIdx - Index of the descriptor in train descriptors
> - DMatch.queryIdx - Index of the descriptor in query descriptors
> - DMatch.imgIdx - Index of the train image.

The code I use is:
imgL = cv2.imread('im0.png')
grayL = cv2.cvtColor(imgL, cv2.COLOR_BGR2GRAY)
imgR = cv2.imread('im1.png')
grayR = cv2.cvtColor(imgR, cv2.COLOR_BGR2GRAY)
# Initiate SIFT DETECTOR
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kpL, desL = sift.detectAndCompute(grayL, None)
kpR, desR = sift.detectAndCompute(grayR, None)
print("# kp: {}, descriptors: {}".format(len(kpL), desL.shape))
# create BFMatcher object
bf = cv2.BFMatcher()
# Match descriptors.
matches = bf.match(desL,desR)
print(matches.trainIdx[:10])
matt
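The error comes from indexing the list itself: bf.match returns a plain Python list whose elements are cv2.DMatch objects, so trainIdx (like distance) has to be read from an element rather than from the list. Continuing the code above, for example:
matches = sorted(matches, key=lambda m: m.distance)  # best matches first
print([m.trainIdx for m in matches[:10]])  # indices into desR (train descriptors)
print([m.queryIdx for m in matches[:10]])  # indices into desL (query descriptors)
print(matches[0].distance)  # attribute of a single DMatch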
↧