
I'm trying to run a face recognition program in Python using OpenCV, but I'm getting the following error, which I'm unable to resolve. Any help is greatly appreciated.

This is the error ![image description](/upfiles/1546871093161642.png) Here is my code ![image description](/upfiles/15468711194044856.png) ![image description](/upfiles/15468711331365635.png) ![image description](/upfiles/15468711483507547.png) ![image description](/upfiles/154687116155488.png) ![image description](/upfiles/15468711953718583.png) ![image description](/upfiles/15468712109921209.png)

How can you use K-Means clustering to posterize an image using opencv javascript?

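A minimal sketch of the idea in Python; opencv.js exposes `cv.kmeans` with the same parameters (provided your opencv.js build includes it), so the steps translate directly, with `cv.matFromImageData` and `cv.imshow` replacing the file I/O. The file names are placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg")                      # assumed input image
Z = img.reshape(-1, 3).astype(np.float32)          # one row per pixel; kmeans wants float32

K = 8                                              # number of posterized colors
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# Replace every pixel with its cluster center: a K-color "posterized" image.
posterized = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("posterized.jpg", posterized)
```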

How to create and play a video at the same time

I need help; thank you for your understanding, as this question goes through a translator.

I'm implementing video color conversion using OpenCV 3.2.0 / Java (Spring Boot). The process:

- After uploading the video to the server, extract the colors of the frames and send the color data to the client.
- Send color data back to the server to change the extracted colors to the colors the user wants.
- Split the uploaded video into frames and convert the colors.
- Reconstruct the separated frames into a video.
- When the video is finished, play it on the client.

The color conversion and video creation take a long time. During this time the user cannot see the progress, and nothing plays on the client until the video creation has finished. So I want the video to play on the client at the same time as I create it, so the user can see the progress. **I want to show the video creation process on the client.** Please share your knowledge or an implementation method.

PS. I tried to play the video file I was creating before `videoWriter.release();` was called, but it did not work.
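A minimal Python sketch of the per-frame progress idea (hedged: the question's stack is Java/Spring Boot, where the same loop would use OpenCV's Java `VideoCapture`/`VideoWriter` and push the percentage to the client over a WebSocket or SSE channel; file names and the color operation here are placeholders):

```python
import cv2

cap = cv2.VideoCapture("input.mp4")                          # assumed input
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
writer = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

done = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)             # placeholder for the
    hsv[:, :, 0] = (hsv[:, :, 0] + 30) % 180                 # real color conversion
    writer.write(cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
    done += 1
    progress = 100.0 * done / max(total, 1)                  # value to push to the client
writer.release()
```

Playing the partially written file is more a container problem than an OpenCV one: MP4 writes its index only at `release()`, which is likely why the unfinished file would not play; a streaming-friendly delivery (e.g. MJPEG over HTTP, or writing the output in segments) is the usual workaround.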

readNetFromTensorflow() errors from Mask_RCNN model

I made my Mask_RCNN model from this github [project](https://github.com/matterport/Mask_RCNN/); it is a project written with TensorFlow and Keras.

Environment: Win7 x64, Visual Studio 2015, OpenCV 4.0.1, TensorFlow 1.12, GPU GTX 1060, CUDA 9.0

Since it saves its weights to an .h5 file, I want to turn it into .pb and .pbtxt so that I can read it with readNetFromTensorflow(). I wrote something like:

```python
def h5_to_pb(h5_model, output_dir, model_name, out_prefix="output_", log_tensorboard=True):
    if osp.exists(output_dir) == False:
        os.mkdir(output_dir)
    out_nodes = []
    for i in range(len(h5_model.outputs)):
        out_nodes.append(out_prefix + str(i + 1))
        tf.identity(h5_model.output[i], out_prefix + str(i + 1))
    sess = K.get_session()
    from tensorflow.python.framework import graph_util, graph_io
    init_graph = sess.graph.as_graph_def()
    main_graph = graph_util.convert_variables_to_constants(sess, init_graph, out_nodes)
    graph_io.write_graph(main_graph, output_dir, name=model_name, as_text=False)
    if log_tensorboard:
        from tensorflow.python.tools import import_pb_to_tensorboard
        import_pb_to_tensorboard.import_to_tensorboard(osp.join(output_dir, model_name), output_dir)

output_dir = osp.join(os.getcwd(), "trans_model")
output_dir = "D:/"
ROOT_DIR = os.path.abspath("C:/Mask_RCNN/")
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
model = modellib.MaskRCNN(mode="inference", config=config, model_dir=MODEL_DIR)
# model.keras_model.summary()
# print(model.keras_model.inputs)
# print(model.keras_model.outputs)
model.load_weights(weight_file_path, by_name=True)
h5_model = model.keras_model
print(len(h5_model.outputs))
h5_to_pb(h5_model, output_dir=output_dir, model_name=output_graph_name)
```

Then I turned the .pb into a .pbtxt:

```python
def convert_pb_to_pbtxt(filename):
    with gfile.FastGFile(filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
        tf.train.write_graph(graph_def, './', 'protobuf.pbtxt', as_text=True)
    return
```

Then I got this error:

```
Error parsing text-format opencv_tensorflow.GraphDef: 41208:5: Unknown enumeration value of "DT_RESOURCE" for field "type".
OpenCV(4.0.1) Error: Unspecified error (FAILED: ReadProtoFromTextFile(param_file, param). Failed to parse GraphDef file: D:/model_1.pbtxt) in cv::dnn::ReadTFNetParamsFromTextFileOrDie, file C:\build\master_winpack-build-win64-vc14\opencv\modules\dnn\src\tensorflow\tf_io.cpp, line 54
```

It seems the format of the .pbtxt is not right. Can anyone tell me how to convert the model I trained so that OpenCV can read it and do object detection?
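The `DT_RESOURCE` entries mean the written GraphDef still contains TensorFlow resource-variable ops, which OpenCV's .pbtxt parser does not understand. A hedged sketch of a cleaner freeze follows; the output node name is an assumption. Note also that OpenCV's Mask R-CNN support targets models trained with the TensorFlow Object Detection API (converted via `samples/dnn/tf_text_graph_mask_rcnn.py` in the OpenCV repo); the matterport graph contains custom layers that may not import even after a clean freeze.

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K

# Hedged sketch: setting the learning phase *before* the model is built keeps
# training-only ops (a common source of resource variables) out of the graph.
K.set_learning_phase(0)
# ... build the MaskRCNN model and load the .h5 weights here ...
sess = K.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ["output_1"])   # output node name is assumed
frozen = graph_util.remove_training_nodes(frozen)
tf.train.write_graph(frozen, "D:/", "model_1.pb", as_text=False)
```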

How to determine the axis of an elongated object?

![image description](/upfiles/15469351088073342.jpg) The angle of the line is of primary interest. I know an algorithm for ellipses (I will post it here as an answer later), but I would like to find out what is popular for an arbitrary shape. Fast methods are preferable, since this is for work with video input.
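For arbitrary blob shapes, the standard fast answer is the orientation of the principal axis from second-order central moments (equivalent to PCA on the object pixels, or `cv2.fitEllipse` on the contour). A minimal sketch, assuming a binary object mask in `blob.png`:

```python
import cv2
import numpy as np

mask = cv2.imread("blob.png", cv2.IMREAD_GRAYSCALE)   # assumed binary object mask

# Orientation of the principal axis from second-order central moments;
# cost is linear in the number of pixels, fine for video rates.
m = cv2.moments(mask, binaryImage=True)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]                 # centroid
angle = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])  # radians
print("centroid (%.1f, %.1f), axis angle %.1f deg" % (cx, cy, np.degrees(angle)))
```

`cv2.fitEllipse` or `cv2.PCACompute2` on the contour points gives the same axis; the moments version avoids the contour extraction step.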

Measurement of actual object size

I am working on a project to measure the actual size of a drink bottle. What method should I use to measure the size? Thanks.
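A single image cannot give absolute size without a scale reference; the usual trick is a reference object of known size in the same plane as the bottle (or a calibrated camera at a known distance). A hedged pixels-per-metric sketch; all values below are placeholders standing in for your own detection step:

```python
import cv2

# Scale from a reference object of known width lying in the bottle's plane.
REF_WIDTH_CM = 8.5          # true width of the reference object (e.g. a card)
ref_px = 240.0              # its measured width in pixels (assumed)
pixels_per_cm = ref_px / REF_WIDTH_CM

# Bounding box (x, y, w, h) of the detected bottle contour, assumed found earlier.
x, y, w, h = 100, 50, 120, 400
print("bottle height: %.1f cm" % (h / pixels_per_cm))
```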

Add method in cvextern.dll in a FaceRecognition project

Hi, I'm implementing a little face recognition program using Emgu as a wrapper for the OpenCV libraries. It seems to work fine, but I need a function that returns all the distances between the image sample and the faces in the database (the implemented [FaceRecognizer.Predict method](http://www.emgu.com/wiki/files/3.2.0/document/html/97e31eb5-5fc5-2061-9f7e-745a8ebe14f3.htm) only returns the smallest distance and its label). So I built Emgu from Git, in order to adapt functions in the unmanaged code (cvextern.dll) to my needs. Here's the original in face_c.cpp:

```cpp
void cveFaceRecognizerPredict(cv::face::FaceRecognizer* recognizer, cv::_InputArray* image, int* label, double* dist)
{
    int l = -1;
    double d = -1;
    recognizer->predict(*image, l, d);
    *label = l;
    *dist = d;
}
```

which stores the minimum distance and the corresponding label in `l` and `d`, thanks to predict. This is the method I wrote, following the [summary](https://github.com/opencv/opencv_contrib/blob/master/modules/face/include/opencv2/face.hpp#L307) in opencv face.hpp:

```cpp
void cveFaceRecognizerPredictCollector(cv::face::FaceRecognizer* recognizer, cv::_InputArray* image, std::vector<int>* labels, std::vector<double>* distances)
{
    std::map<int, double> result_map = std::map<int, double>();
    cv::Ptr<cv::face::StandardCollector> collector = cv::face::StandardCollector::create();
    recognizer->predict(*image, collector);
    result_map = collector->getResultsMap();
    for (std::map<int, double>::iterator it = result_map.begin(); it != result_map.end(); ++it)
    {
        distances->push_back(it->second);
        labels->push_back(it->first);
    }
}
```

And the caller in C#:

```csharp
using (Emgu.CV.Util.VectorOfInt labels = new Emgu.CV.Util.VectorOfInt())
using (Emgu.CV.Util.VectorOfDouble distances = new Emgu.CV.Util.VectorOfDouble())
using (InputArray iaImage = image.GetInputArray())
{
    FaceInvoke.cveFaceRecognizerPredictCollector(_ptr, iaImage, labels, distances);
}

[DllImport(CvInvoke.ExternLibrary, CallingConvention = CvInvoke.CvCallingConvention)]
internal extern static void cveFaceRecognizerPredictCollector(IntPtr recognizer, IntPtr image, IntPtr labels, IntPtr distances);
```

The application works in real time, so the C# function is called continuously. I have only two faces and one label (same person) stored in my database, so the first call correctly returns the only possible label and stores it in `labels`. But as the application keeps running, the returned labels and the size of the `labels` vector keep growing, filled with unregistered labels that I don't know where it takes from. It seems to me that the collector in C++ is not well referenced, so that every time the function is called it keeps storing data without releasing the previous ones, overwriting them. But it's only my guess; I'm not very good with C++. In addition, it turns out OpenCV already provides a sort of smart pointer class ([cv::Ptr](https://docs.opencv.org/3.3.1/d0/de7/structcv_1_1Ptr.html)). So, as I understand it, my collector should be automatically cleaned up once its scope block has ended, right? I also tried to use Ptr with result_map, but it keeps returning random labels. What else could possibly be wrong? I know this is not really an OpenCV problem, but I believe the issue is in the unmanaged code or in the autogenerated code I get while building, so I hope you can help.

Using OpenCV in a simple Qt project: running shows the error 'The process was ended forcefully.'

Hi: I installed OpenCV from the 'opencv-4.0.0-alpha-vc14_vc15' package following 'https://wiki.qt.io/How_to_setup_Qt_and_openCV_on_Windows', then created a new Qt project to use it. Here is the .pro:

```
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets opengl

TARGET = task3
TEMPLATE = app

SOURCES += \
    main.cpp

INCLUDEPATH += C:\opencv\build\install\include
LIBS += C:\opencv\build\install\x64\mingw\bin\libopencv_*.dll
```

and the main.cpp:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

using namespace cv;
using namespace std;

int main()
{
    Mat img;
    int k;
    string ImgName = "532405845qq.jpg";
    VideoCapture cap(0);
    if (!cap.isOpened())
        return 1;
    while (1)
    {
        cap >> img;
        GaussianBlur(img, img, Size(3, 3), 0);
        imshow("1", img);
        k = waitKey(30);
        if (k == 's')       // press 's' to save the image
        {
            imwrite(ImgName, img);
            ImgName.at(0)++;
            img.release();
        }
        else if (k == 27)   // Esc key
            break;
    }
    return 0;
}
```

When I run the project, the app window never shows and I get the error: 'The process was ended forcefully.' I have already added 'C:\opencv\build\install\x64\mingw\lib' to the environment PATH. I have no idea what to try now; please help me, thanks a lot!

How to find the minimum depth value in every frame of a streaming video

Hi, I am a newbie with OpenCV. While a video is streaming, I need to find online the minimum distance between the camera and any object in front of it. Could you help me with a function or code that returns the minimum value of the depth data and its location in the frame? I am using Python 3.6 on Windows and an Intel RealSense D435. Thanks.
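A minimal sketch of the per-frame minimum, hedged: the random array stands in for the uint16 depth image you would get per frame, e.g. `np.asanyarray(frames.get_depth_frame().get_data())` with pyrealsense2 (units are millimeters on the D435 by default):

```python
import cv2
import numpy as np

depth = np.random.randint(0, 4000, (480, 640), dtype=np.uint16)  # stand-in frame

valid = (depth > 0).astype(np.uint8)      # depth 0 means "no reading", exclude it
min_val, _, min_loc, _ = cv2.minMaxLoc(depth.astype(np.float32), mask=valid)
print("nearest point: %.0f mm at pixel %s" % (min_val, str(min_loc)))
```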

Error findContours with mode RETR_TREE on CV_32SC1 images

Hi guys, I was wondering why I can't use `findContours` with mode `RETR_TREE` on `CV_32SC1` images. I would like to do the following:

```cpp
auto A = cv::imread("C:\\devel\\test_image.png", -1);
cv::Mat labels;
int no_labels = cv::connectedComponents(A, labels, 4, CV_32S);

std::vector<std::vector<cv::Point>> contours0;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(labels, contours0, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE);
```

Or is there a solution for converting a hierarchy from RETR_CCOMP to RETR_TREE? Thanks for your time.
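Per the documentation, `findContours` accepts a `CV_32SC1` image only with `RETR_CCOMP` or `RETR_FLOODFILL`; `RETR_TREE` needs an 8-bit single-channel input. A common workaround is to run `RETR_TREE` on a mask per label; a minimal Python sketch (the C++ version is a direct translation), with the file name assumed:

```python
import cv2
import numpy as np

binary = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)   # assumed 8-bit input
num, labels = cv2.connectedComponents(binary, connectivity=4, ltype=cv2.CV_32S)

contours_all, hierarchies = [], []
for lbl in range(1, num):                        # label 0 is the background
    mask = (labels == lbl).astype(np.uint8) * 255
    cnts, hier = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]
    contours_all.extend(cnts)
    hierarchies.append(hier)                     # tree is valid within one label
```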

Code from the tutorial doesn't work

```python
import numpy as np
import cv2

im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
```

When executing this code, I get an error:

```
ValueError: too many values to unpack (expected 2)
```

Is there a mistake in the tutorial or am I doing something wrong?
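This is almost certainly the `findContours` return-signature change: OpenCV 3.x returns `(image, contours, hierarchy)` while 2.4 and 4.x return `(contours, hierarchy)`, so a tutorial written for one major version breaks on another. A version-tolerant idiom, using `thresh` from the snippet above:

```python
# Works on OpenCV 2.4, 3.x and 4.x: take the last two return values.
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
```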

cmake cross compile for arm

I built the makefiles, but they pick up headers and libraries from the host's /usr/include and /usr/local. How do I control CMake's search paths for includes and libraries when cross-compiling for ARM? Thanks, Terry

resize.cpp cross-compiling for ARM gives an error

Cross-compiling OpenCV 3.4.0 for ARM, I got this error:

```
opencv-3.4.0/modules/imgproc/src/resize.cpp:568:27: error: conversion from '::ufixedpoint16' to 'uint8_t' is ambiguous
```

So what should the proper result of line 568 of resize.cpp be? The `vlineSet` source code is below:

```cpp
template <>
void vlineSet<uint8_t, ufixedpoint16>(ufixedpoint16* src, uint8_t* dst, int dst_width)
{
    static const v_uint16x8 v_fixedRound = v_setall_u16((uint16_t)((1U << 8) >> 1));
    int i = 0;
    for (; i < dst_width - 15; i += 16, src += 16, dst += 16)
    {
        v_uint16x8 v_src0 = v_load((uint16_t*)src);
        v_uint16x8 v_src1 = v_load((uint16_t*)src + 8);

        v_uint16x8 v_res0 = (v_src0 + v_fixedRound) >> 8;
        v_uint16x8 v_res1 = (v_src1 + v_fixedRound) >> 8;

        v_store(dst, v_pack(v_res0, v_res1));
    }
    for (; i < dst_width; i++)
        *(dst++) = *(src++);   // line 568: the line that errors
}
```

Suggestions to correct this error? Thanks, Terry

How to control a robot using feedback from the camera?

How to control a robot using feedback from the camera? Do you have any papers on this subject? Thanks guys 🙂

Face feature extraction

Hello, how can I do more precise face feature extraction, with the precise boundaries of the nose, eyes, mouth, etc.? And how can I get a spline-based representation of the features? Thank you, Christophe
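One hedged direction: the contrib Facemark API gives 68 landmark points with exact eye/nose/mouth positions, and a spline can then be fitted through each feature's point group. A minimal sketch; it needs opencv-contrib and a trained LBF model, and `lbfmodel.yaml` and `face.jpg` are assumed local files:

```python
import cv2

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces first; the facemark fitter needs face rectangles.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, 1.1, 5)

facemark = cv2.face.createFacemarkLBF()
facemark.loadModel("lbfmodel.yaml")            # trained model, assumed downloaded
ok, landmarks = facemark.fit(img, faces)

# landmarks[0] has shape (1, 68, 2): points 27-35 are the nose, 36-47 the
# eyes, 48-67 the mouth; fit a spline (e.g. scipy.interpolate.splprep)
# through each group to get a smooth boundary curve per feature.
```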

Detect paper on paper?

Hello, I am new to OpenCV and I already have a very specific problem. I need to detect whether there is a paper lying on another paper (see the screenshot example). I have two questions: 1. Is this possible to detect with OpenCV? (This is the OpenCV-related question.) 2. If yes, in what direction should I think? Just give me some idea to continue the investigation; of course I don't expect a solution :) just some idea, since I don't know where to start. (This is a question for someone with good will.) Thank you ![image description](/upfiles/15469614661446824.png)
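It should be feasible. One hedged direction is to look for nested rectangles: the top sheet's edges form a quadrilateral whose parent contour in the `RETR_TREE` hierarchy is the bottom sheet. A sketch of that idea; the thresholds and file name are assumptions to tune against your images:

```python
import cv2

img = cv2.imread("scan.png")                     # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
quads = set()
for i, c in enumerate(contours):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > 1000:
        quads.add(i)

# hierarchy[0][i][3] is the parent contour index: a quadrilateral nested
# inside another quadrilateral is a candidate "paper on paper".
nested = [i for i in quads if hierarchy[0][i][3] in quads]
```

If the sheets are the same color the boundary edge is weak, so controlled side lighting that casts a small shadow along the top sheet's edge can make this much more reliable.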

error handling with imread()

Hello all, I am having a problem with error handling around the `imread()` API when I pass a bogus file name on the command line, like "foobar" created by `touch foobar` in bash. In my code I wrap `imread()` in a try-catch block as follows:

```cpp
cv::Mat src;
try {
    src = imread(argv[1]);
}
catch (cv::Exception& e) {
    //const char* err_msg = e.what();
    //std::cout << "exception caught: " << err_msg << std::endl;
    cout << "wrong file format, please input the name of an IMAGE file" << endl;
}
```

but the program still terminates, with a message ending in:

```
... >= this->size() (which is 0)
Aborted (core dumped)
```

I know this message comes from the C++ standard library and I can't change it; is there still any way to get rid of the error message? The program aborts now; what I am expecting is that it goes into the catch block and prints something specified by the programmer, like "wrong file format, please input the name of an IMAGE file". Besides, I'm using Xubuntu 18.04 and OpenCV 3.2.0. Thanks in advance!
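Note that `imread` never throws for an unreadable file, so there is nothing for the catch block to catch: it simply returns an empty image (check `src.empty()` in C++, `img is None` in Python). The quoted tail looks like the `std::out_of_range` that `std::basic_string::at` throws, which would mean the abort comes from later string handling on an empty string, not from `imread` itself; that part is an inference from the message format. A minimal Python illustration of the check:

```python
import cv2

# imread signals failure through its return value, not an exception.
img = cv2.imread("foobar")
if img is None:
    print("wrong file format, please input the name of an IMAGE file")
```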

keras to tensorflow to opencv dnn

##### System information (version)
- OpenCV => :grey_question:
- Operating System / Platform => :grey_question:
- Compiler => :grey_question:

##### Detailed description
I have a LeNet Keras model that was trained for car color recognition. I converted it to a TensorFlow .pb file using keras_to_tensorflow from https://github.com/amir-abdi/keras_to_tensorflow and created the .pbtxt with keras_to_tensorflow as well. I tried loading it (cvInference.py):

```python
cvNet = cv.dnn.readNetFromTensorflow('d:\\tfs\\LPR\\IP\\MAIN\\SRC\\PythonProjects\\Keras\\lenetcarColor.pb',
                                     'd:\\tfs\\LPR\\IP\\MAIN\\SRC\\PythonProjects\\Keras\\lenetcarColor.pbtxt')
```

I got the error:

```
cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:614: error: (-215:Assertion failed) const_layers.insert(std::make_pair(name, li)).second in function 'cv::dnn::experimental_dnn_34_v7::`anonymous-namespace'::addConstNodes'
```

##### Steps to reproduce
All my files are at https://www.dropbox.com/sh/yi4b529v01p1lz0/AAAtMZO8DfPfSuDPdwGe5dkPa?dl=0 including the train images in a zip file.
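This assertion in `addConstNodes` often fires when the frozen graph still carries training-time leftovers (e.g. Switch/Merge nodes from dropout or batch normalization) from a Keras export. A hedged sketch of cleaning the frozen graph before loading it into cv.dnn; the input/output node names are assumptions, so take them from your own graph:

```python
import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

# Load the frozen graph written by keras_to_tensorflow.
with tf.gfile.FastGFile("lenetcarColor.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Strip training-only nodes, then fold the graph for inference.
graph_def = tf.graph_util.remove_training_nodes(graph_def)
graph_def = optimize_for_inference_lib.optimize_for_inference(
    graph_def, ["input_1"], ["output_1"],          # node names are assumed
    tf.float32.as_datatype_enum)

with tf.gfile.FastGFile("lenetcarColor_opt.pb", "wb") as f:
    f.write(graph_def.SerializeToString())
```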

Charuco corner detection confidence values

Sometimes, due to noise in the camera, the ChArUco corner detection is off by a huge amount. I was thinking of designing some kind of outlier rejection, but then started wondering whether there is some way to get a confidence value for ChArUco corner detection from OpenCV itself.

How to add Gaussian noise using Java

```java
Mat grayMat = new Mat();
Mat noiseMat = new Mat();

BitmapFactory.Options o = new BitmapFactory.Options();
o.inDither = false;
o.inSampleSize = 1;
int width = grayBitmap.getWidth();
int height = grayBitmap.getHeight();
noiseBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);

// bitmap to Mat
Utils.bitmapToMat(grayBitmap, grayMat);
GaussianNoise = grayMat.clone();
```

I don't know what to do next: how do I generate a Mat that is the same size as grayMat, fill it with Gaussian noise, and add the two together?
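A hedged sketch of the missing step, in Python for brevity; the OpenCV Java binding has the same calls (`Core.randn` to fill a Mat with Gaussian noise, `Core.add` for the saturating sum), and the mean/sigma values here are arbitrary:

```python
import cv2
import numpy as np

# Fill a same-sized array with Gaussian noise, then add it to the image.
gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)   # assumed input path
noise = np.zeros(gray.shape, np.int16)                 # int16 keeps negative noise values
cv2.randn(noise, 0.0, 25.0)                            # mean 0, standard deviation 25
noisy = cv2.add(gray.astype(np.int16), noise)
noisy = np.clip(noisy, 0, 255).astype(np.uint8)        # back to displayable 8-bit range
cv2.imwrite("noisy.jpg", noisy)
```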