
Is it possible to convert a Mat with two channels into a vector?

If I have a Mat like

    Mat mat = (Mat_<int>(1, 8) << 5, 6, 0, 4, 0, 1, 9, 9);

I can of course convert `mat` into a vector `vec` with

    vector<int> vec(mat.begin<int>(), mat.end<int>());

But when `mat` has 2 or more channels, how do I convert it into a `vector<vector<int>>`? I mean, if I have a `Mat` such as

    int vec[4][2] = { {5, 6}, {0, 4}, {0, 1}, {9, 9} };
    Mat mat(4, 1, CV_32SC2, vec);

how do I get a `vector<vector<int>> vec2{ {5, 6}, {0, 4}, {0, 1}, {9, 9} }`?
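
A minimal sketch of one way to do this: for a CV_32SC2 Mat the natural element type is cv::Vec2i, so the same begin()/end() idiom still works, and a nested vector can be built from it if that exact type is required.

    #include <opencv2/core.hpp>
    #include <vector>

    // 'mat' is the 4x1 CV_32SC2 Mat from the question
    std::vector<cv::Vec2i> pairs(mat.begin<cv::Vec2i>(), mat.end<cv::Vec2i>());

    // Convert element-wise if a vector<vector<int>> is really needed
    std::vector<std::vector<int>> vec2;
    for (const cv::Vec2i& v : pairs)
        vec2.push_back({v[0], v[1]});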

Web Hosting with OpenCV installed

Hello, I am searching for Linux web hosting with OpenCV already installed on the web server. Does such a thing exist? Thank you, Christophe

camera pose using multiple aruco markers

I am trying to find the pose of the camera with ArUco markers fixed on the wall. In any given frame there will be 2-3 markers, and I need to find the pose of the camera. I see that [detect_markers.cpp](https://github.com/opencv/opencv_contrib/blob/master/modules/aruco/samples/detect_markers.cpp) can calculate the pose of the markers. Is it possible to repurpose this to get the camera pose using multiple markers? Thanks in advance.
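
A hedged sketch of the core idea: the rvec/tvec returned per marker map marker coordinates into camera coordinates, so inverting that transform gives the camera pose in the marker's (wall) frame. Names like corners, markerLength, camMatrix and distCoeffs below stand for the values the detect_markers sample already computes.

    #include <opencv2/aruco.hpp>
    #include <opencv2/calib3d.hpp>

    std::vector<cv::Vec3d> rvecs, tvecs;
    cv::aruco::estimatePoseSingleMarkers(corners, markerLength,
                                         camMatrix, distCoeffs, rvecs, tvecs);

    // Invert the marker->camera transform of marker 0 to get the camera pose
    cv::Mat R;
    cv::Rodrigues(rvecs[0], R);
    cv::Mat camPos = -R.t() * cv::Mat(tvecs[0]);  // camera position in the marker frame

With several wall-fixed markers whose layout is known, one could average these per-marker estimates, or collect all marker corners and their known 3D positions into a single solvePnP call for a more stable result.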

translate text from an image

Hi, I have this image: [C:\fakepath\slack.PNG](/upfiles/15342017337931671.png) Is it possible to translate this text from English to Portuguese? In the end the purpose is to replace the English text with the Portuguese text. The programming language that I'm using is Python. Thanks in advance, Rita

Farneback optical flow in box/ROI

I'm detecting optical flow with the Farneback method, but I need to limit the area of the video in which it is detected. I researched ROI, but did not succeed. Any suggestions on how I can create a box with the mouse and have the algorithm run only in that region?

    #include "opencv2/video/tracking.hpp"
    #include "opencv2/imgproc.hpp"
    #include "opencv2/videoio.hpp"
    #include "opencv2/highgui.hpp"
    #include <iostream>

    using namespace cv;
    using namespace std;

    static void help()
    {
        cout << "\nThis program demonstrates dense optical flow algorithm by Gunnar Farneback\n"
                "Mainly the function: calcOpticalFlowFarneback()\n"
                "Call:\n"
                "./fback\n"
                "This reads from video camera 0\n" << endl;
    }

    static void drawOptFlowMap(const Mat& flow, Mat& cflowmap, int step,
                               double, const Scalar& color)
    {
        for (int y = 0; y < cflowmap.rows; y += step)
            for (int x = 0; x < cflowmap.cols; x += step)
            {
                const Point2f fxy = flow.at<Point2f>(y, x) * 5;
                line(cflowmap, Point(x, y),
                     Point(cvRound(x + fxy.x), cvRound(y + fxy.y)), color);
                circle(cflowmap, Point(x, y), 2, color, -1);
            }
    }

    int main(int argc, char** argv)
    {
        CommandLineParser parser(argc, argv, "{help h||}");
        if (parser.has("help"))
        {
            help();
            return 0;
        }
        VideoCapture cap("completo.MPG");
        help();
        if (!cap.isOpened())
            return -1;

        Mat flow, cflow, frame;
        UMat gray, prevgray, uflow;
        namedWindow("flow", 1);

        for (;;)
        {
            cap >> frame;
            cvtColor(frame, gray, COLOR_BGR2GRAY);
            if (!prevgray.empty())
            {
                calcOpticalFlowFarneback(prevgray, gray, uflow, 0.5, 3, 15, 3, 5, 1.2, 0);
                cvtColor(prevgray, cflow, COLOR_GRAY2BGR);
                uflow.copyTo(flow);
                drawOptFlowMap(flow, cflow, 30, 1.5, Scalar(0, 255, 0));
                imshow("flow", cflow);
            }
            if (waitKey(30) >= 0)
                break;
            std::swap(prevgray, gray);
        }
        return 0;
    }
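
A minimal sketch of the missing piece, assuming OpenCV >= 3.2 where cv::selectROI is available in highgui: let the user drag a box once, then run the flow computation on the corresponding sub-matrices only.

    // After the first frame has been captured:
    cv::Rect roi = cv::selectROI("flow", frame);  // drag a box with the mouse, press ENTER

    // Inside the loop, restrict Farneback to the selected region:
    cv::UMat prevRoi = prevgray(roi), curRoi = gray(roi);
    cv::UMat uflowRoi;
    cv::calcOpticalFlowFarneback(prevRoi, curRoi, uflowRoi, 0.5, 3, 15, 3, 5, 1.2, 0);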

How to populate a custom binary feature descriptor in opencv?

I have a binary vector generated for each keypoint as its description. I would like to create my own binary descriptor matrix. I understand that descriptors are just matrices; a binary descriptor is a matrix with depth CV_8U. Below is my code to create the matrix, but is there an easy way to populate it?

    // I have 4096 bits of data per descriptor, so 4096/8 = 512 will be my bytes
    cv::Mat descriptors;
    int bytes_ = 512;
    descriptors.create((int)keypoints.size(), bytes_, CV_8U);

I have the binary data in the form of four 32x32 binary images. How do I encode this data into the descriptor matrix? I would like to use OpenCV's DescriptorMatcher to match the descriptors later.
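
A sketch under the stated layout, with binaryData as a hypothetical container holding the 4096 flattened 0/1 values per keypoint: pack eight bits into each byte of the descriptor row (the bit order just has to be consistent across all descriptors).

    cv::Mat descriptors((int)keypoints.size(), 512, CV_8U, cv::Scalar(0));
    for (int k = 0; k < (int)keypoints.size(); ++k)
    {
        const std::vector<uchar>& bits = binaryData[k];  // hypothetical: 4096 zeros/ones
        uchar* row = descriptors.ptr<uchar>(k);
        for (int b = 0; b < 4096; ++b)
            if (bits[b])
                row[b / 8] |= (uchar)(1 << (7 - (b % 8)));
    }

Descriptors packed this way can then be matched with a Hamming-distance matcher, e.g. cv::BFMatcher(cv::NORM_HAMMING).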

Getting arbitrary face ID values in face verification from webcam

Hello, I am facing a problem while recognizing unknown faces. The code works fine on known faces, but when I give it an unknown face it returns a positive label when it should not, and thereafter the known face ID changes every time. Everything works fine on my face; the problem occurs when I test it on faces that were not trained. Below is the code I am using for comparing the image:

    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(im, 1.3, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(gray, (x, y), (x + w, y + h), (255, 0, 0), 2)
            sampleNum = sampleNum + 1
            Id, conf = recognizer.predict(gray[y:y + h, x:x + w])
            if conf < 50:
                Statusvalue = 1
                print(Id)
                break

I saved the image under ID 1; it gives the correct result before trying an unknown face, but thereafter it gives arbitrary ID values, say 40, 50, 10, etc. Please assist me in getting the correct ID.

How to solve a "timebase too large" problem?

Hi great OpenCV team, is there any way to handle frames coming in with a timebase of 1/180000 from an RTSP live stream? For now the way I solve it is by setting a default fps = 30.0 for the video writer; if I feed the live stream's reported rate directly into the video writer, it shows the error below:

> timebase 1/180000 not supported by MPEG-4 standard, the maximum admitted value for the timebase denominator is 65535
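
A minimal sketch of that workaround, assuming C++ and a hypothetical output name out.avi: clamp the FPS reported by the capture to a sane value before opening the writer, since RTSP streams often report bogus rates.

    #include <opencv2/videoio.hpp>

    cv::VideoCapture cap("rtsp://...");  // stream URL elided
    double fps = cap.get(cv::CAP_PROP_FPS);
    if (fps <= 0 || fps > 120)           // reported rate is clearly bogus
        fps = 30.0;
    cv::Size frameSize((int)cap.get(cv::CAP_PROP_FRAME_WIDTH),
                       (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT));
    cv::VideoWriter writer("out.avi",
                           cv::VideoWriter::fourcc('M', 'P', '4', 'V'),
                           fps, frameSize);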

Standard Velocity Unit

Hello everyone! I'm currently using this code to extract clusters from a point cloud and track them via a Kalman filter with constant velocity: [GitHub Source](https://github.com/praveen-palanisamy/multiple-object-tracking-lidar/blob/master/src/main.cpp) Essentially it segments a point cloud into clusters of interest based on Euclidean distance, and for each cluster it retrieves the cluster center (using PCL). To associate the clusters between frames consistently, OpenCV is used to apply the Hungarian algorithm, which compares the Euclidean distances of the cluster centers and pairs them with minimum distance cost. A Kalman filter from the OpenCV library is then applied to track the clusters along a path. Right now I'm trying to publish the position and velocity values above the segmented clusters. I'm doing it by using:

    KF0.statePost.at<float>(0)
    KF0.statePost.at<float>(1)
    KF0.statePost.at<float>(2)
    KF0.statePost.at<float>(3)

(for all the Kalman filters KFi I've initialized, 6 in total). Indices 0 and 1 give me the correct X and Y position, which I have verified, so 2 and 3 should be the corresponding velocity; however, I do not know what the units are. It can't be m/s because it's of the order 10^(-6), and I can't think of any other reasonable unit. Maybe it's an angle? I've read online about pixels/second, but that would be a) even more unlikely, because it should be higher than m/s, and b) I'm only working with point clouds and no image, so I'm not sure where I could find a pixel-to-meter relation. ![Screenshot](https://i.imgur.com/gdrhnSO.png) Thanks in advance and best regards
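
A hedged note with a sketch: in a constant-velocity Kalman filter the velocity states have no units of their own; they are position units per time step, where the time step is whatever dt is baked into the transition matrix. If dt is left at 1 (or at a tiny placeholder value), the "velocity" is meters per frame rather than m/s.

    // A minimal constant-velocity model for one tracked cluster.
    // With positions in meters and dt in seconds, statePost.at<float>(2)
    // and statePost.at<float>(3) come out in m/s.
    float dt = 0.1f;  // hypothetical frame period; use the real inter-frame time
    cv::KalmanFilter KF(4, 2, 0);
    KF.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, dt, 0,
        0, 1, 0, dt,
        0, 0, 1,  0,
        0, 0, 0,  1);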

Mat::forEach operation is resulting in error

[By going through this link](http://answers.opencv.org/question/110353/how-to-use-matforeach-position-parameter/) I got some information and tested it. I am working with OpenCV v3.4.2 on Ubuntu 18.04. If I use **cv::Point_<uchar> Pixel**, it gives me a wrong [output](/upfiles/15342346195530455.png), but if **uchar Pixel** is used, an error is given at compilation time:

> In member function 'void Operator::operator()(Pixel&, const int*) const': for_each.cpp:23:78: error: request for member 'x' in 'pixel', which is of non-class type 'Pixel {aka unsigned char}' format("[%d,%d]= %d \n",position[0],position[1],(uchar)pixel.x);

**code**

    #include <iostream>
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"

    using namespace std;
    using namespace cv;
    using std::cout;

    typedef cv::Point_<uchar> Pixel;  // wrong
    //typedef uchar Pixel;            // correct
    //typedef Point3_<uchar> Pixel;

    struct Operator
    {
        void operator()(Pixel& pixel, const int* position) const
        {
            cout << format("[%d,%d]= %d \n", position[0], position[1], (uchar)pixel.x);
        }
    };

    int main(int argc, char* argv[])
    {
        Mat mTest(Size(3, 2), CV_8UC1, Scalar(0));
        randn(mTest, Scalar(125), Scalar(125));
        cout << "I am using ----> cv::Point_<uchar> Pixel \n";
        //cout << "I am using ----> uchar Pixel \n";
        cout << format(" Size : %d , %d \n\n", mTest.rows, mTest.cols);

        for (int Rows = 0; Rows < mTest.rows; Rows++)
        {
            for (int Cols = 0; Cols < mTest.cols; Cols++)
            {
                cout << format("[%d,%d]= %d \t", Rows, Cols, mTest.at<uchar>(Rows, Cols));
            }
            cout << "\n";
        }
        cout << "\n\n";

        mTest.forEach<Pixel>(Operator());
        waitKey();
        return 0;
    }

Is the problem with the new OpenCV version, and can it be resolved?
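
A hedged sketch of the matching combinations, since the functor's pixel type must agree with the Mat type: for a CV_8UC1 image the pixel is a plain uchar (so there is no .x member to print), while Point3_<uchar> fits a CV_8UC3 image. This is behavior by design, not a regression in 3.4.2.

    #include <cstdio>
    #include <opencv2/core.hpp>

    // CV_8UC1: the element type is uchar, so print the value directly.
    cv::Mat gray(2, 3, CV_8UC1, cv::Scalar(0));
    gray.forEach<uchar>([](uchar& p, const int* pos) {
        std::printf("[%d,%d]= %d\n", pos[0], pos[1], (int)p);
    });

    // CV_8UC3: Point3_<uchar> matches; .x/.y/.z are the B/G/R channels.
    cv::Mat bgr(2, 3, CV_8UC3, cv::Scalar::all(0));
    bgr.forEach<cv::Point3_<uchar>>([](cv::Point3_<uchar>& p, const int*) {
        p.x = 255;  // set the blue channel
    });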

Unable to use rotate_bound image from imutils in matchTemplate

Hello everyone, I am trying to compare several input images, which can come in at many angles, with a target image for quality control. I am trying to rotate all input images with imutils.rotate_bound and then compare them with cv2.matchTemplate, but it seems I cannot do it that way; I get:

    Traceback (most recent call last):
      File "shape_compare.py", line 105, in <module>
        res = cv2.matchTemplate(img_rotated,img_template,cv2.TM_CCORR_NORMED)
    cv2.error: OpenCV(3.4.2) C:\projects\opencv-python\opencv\modules\imgproc\src\templmatch.cpp:1102: error: (-215:Assertion failed) (depth == 0 || depth == 5) && type == _templ.type() && _img.dims() <= 2 in function 'cv::matchTemplate'

Do you know a way to fix this or do it another way? This is the input image: ![image description](/upfiles/15342412312564891.jpg) This is the template image: ![image description](/upfiles/1534241262255138.jpg) This is the full code:

    import numpy as np
    import cv2
    import imutils
    from matplotlib import pyplot as plt

    ######EDIT HERE######
    imagem = '20180801_173116_Film2'
    template = 'sample_camflash_edited'
    #####################

    original = cv2.imread(imagem+'.jpg')
    original_shape_compare = cv2.imread(imagem+'.jpg')
    img_shape_compare = cv2.imread(imagem+'.jpg',0) # 0(zero) for grey
    img = cv2.imread(imagem+'.jpg')
    img_template = cv2.imread(template+'.jpg',0) # 0(zero) for grey

    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_blur = cv2.GaussianBlur(img_gray,(5,5),0)
    img_bgr = cv2.cvtColor(img_blur,cv2.COLOR_GRAY2BGR)
    img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    #### Target Object Tracking
    # define range of color in HSV
    lowHue = 0
    lowSat = 0
    lowVal = 135
    highHue = 255
    highSat = 55
    highVal = 255

    # creation of mask
    colorLow = np.array([lowHue,lowSat,lowVal])
    colorHigh = np.array([highHue,highSat,highVal])
    img_mask = cv2.inRange(img_hsv, colorLow, colorHigh)

    # find and draw contours
    #find
    im2, contours, hierarchy = cv2.findContours(~img_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    #draw
    cv2.drawContours(img, contours, -1, (0,255,0), 3) # green dots

    #find largest contour
    try:
        contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
        biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    except:
        print('No object found!')
        x,y,w,h = 0,0,0,0
        cv2.imshow('original',original)
        cv2.imshow('img_template',img_template)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
        exit(0)

    # bounding rectangle
    x,y,w,h = cv2.boundingRect(biggest_contour)
    #print ('rectangle size: x=%s y=%s w=%s h=%s' %(x,y,w,h))
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,100,0),2) #clear blue box

    # bounding rotated rectangle
    rect = cv2.minAreaRect(biggest_contour)
    box = cv2.boxPoints(rect)
    box = np.int0(box)
    cv2.drawContours(img,[box],0,(255,200,0),2) #strong blue box

    #### Object Handler
    print ('rect angle')
    print (rect[2])

    # ROTATE IMAGE TO STANDARD!
    img_rotated = imutils.rotate_bound(img, abs(rect[2])) #angle must be positive
    #img_rotated = imutils.rotate(img, rect[2])

    #### Object Shape Compare
    w, h = img_template.shape[::-1]
    res = cv2.matchTemplate(img_shape_compare,img_template,cv2.TM_CCORR_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    cv2.rectangle(original_shape_compare,top_left, bottom_right,(0,255,0), 2) #green box

    #### Output
    cv2.imshow('img_rotated',img_rotated)
    plt.subplot(221),plt.imshow(original)
    plt.title('original'), plt.xticks([]), plt.yticks([])
    plt.subplot(222),plt.imshow(img_template,cmap = 'gray')
    plt.title('img_template'), plt.xticks([]), plt.yticks([])
    plt.subplot(223),plt.imshow(img)
    plt.title('img'), plt.xticks([]), plt.yticks([])
    plt.subplot(224),plt.imshow(original_shape_compare)
    plt.title('original_shape_compare'), plt.xticks([]), plt.yticks([])
    plt.suptitle('All')
    plt.show()

How to draw a rectangle around moving objects in a video using cv2 (Python)?

I want to detect objects in a video.

I want to classify an object as positive or negative using an SVM. I wrote the following code — is it right, or are there any suggestions?

I have 37 images in my database for each of the positive and negative classes for training. I have extracted HOG features from each image and labelled them as 'positive' and 'negative'; they are saved into the 'Hogfeat' matrix. Does this code need to be improved, or are there any errors? Now I need to load these features into an SVM (I'm just learning SVM). I have gone through many links, but they were not useful for my code. Please help me solve this.

    int main()
    {
        HOGDescriptor hog;
        vector<Point> locs;
        for (size_t i = 1; i <= 37; ++i)
        {
            ostringstream os;
            os << "C:/Users/Sam/Desktop/Images/" << "Content\\" << setw(2) << setfill('0') << i << ".JPG";
            cout << os.str();
            Mat img = imread(os.str(), IMREAD_GRAYSCALE);
            if (!img.data)
            {
                break;
            }
            else
            {
                // obtain feature vector:
                vector<float> featureVector;
                hog.compute(img, featureVector, Size(32, 32), Size(0, 0));
                // HOG features computed for img are stored in featureVector; copy them into a matrix
                Mat Hogfeat(featureVector.size(), 1, CV_32FC1);  // label 1
                for (int j = 0; j < featureVector.size(); j++)
                {
                    Hogfeat.at<float>(j, 0) = featureVector.at(j);
                }
                // HOG features are stored in the Hogfeat matrix
                cout << Hogfeat;
                cout << "Positive Images";
                system("PAUSE");
            }
        }

        for (size_t i = 1; i <= 37; ++i)
        {
            ostringstream os;
            os << "C:/Users/Sam/Desktop/Images/" << "No humans\\" << setw(2) << setfill('0') << i << ".JPG";
            cout << os.str();
            Mat img = imread(os.str(), IMREAD_GRAYSCALE);
            if (!img.data)
            {
                break;
            }
            else
            {
                // obtain feature vector:
                vector<float> featureVector;
                hog.compute(img, featureVector, Size(32, 32), Size(0, 0));
                // HOG features computed for img are stored in featureVector; copy them into a matrix
                Mat Hogfeat(featureVector.size(), -1, CV_32FC1);  // label -1
                for (int j = 0; j < featureVector.size(); j++)
                {
                    Hogfeat.at<float>(j, 0) = featureVector.at(j);
                }
                // HOG features are stored in the Hogfeat matrix
                cout << Hogfeat;
                cout << "Negative Images";
            }
        }
        return 0;
    }
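
One issue with the code above: the second argument of the Mat constructor is the number of columns, not a label, so `Mat Hogfeat(featureVector.size(), -1, CV_32FC1)` is invalid; labels belong in a separate matrix. A hedged sketch of the training step, where allFeatures is a hypothetical collection of all 74 HOG vectors and featureDim their common length:

    #include <opencv2/ml.hpp>

    cv::Mat trainData((int)allFeatures.size(), featureDim, CV_32F);
    cv::Mat labels((int)allFeatures.size(), 1, CV_32SC1);  // +1 positive, -1 negative
    for (int r = 0; r < (int)allFeatures.size(); ++r)
    {
        for (int c = 0; c < featureDim; ++c)
            trainData.at<float>(r, c) = allFeatures[r][c];
        labels.at<int>(r, 0) = (r < 37) ? 1 : -1;  // first 37 samples are positive
    }

    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::LINEAR);
    svm->train(trainData, cv::ml::ROW_SAMPLE, labels);
    svm->save("hog_svm.yml");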

How to get the length of each of the curves present in the retina image?

I have an image of a retina and I need to calculate the length of each curve. Currently I have done it with a curve-length method, but in MATLAB; I don't know if there is an easier way to obtain it. The image I've been experimenting with is this: ![image description](/upfiles/15083878864014835.jpg)
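
A minimal sketch of one OpenCV route, assuming the vessels have already been thresholded/skeletonised into a one-pixel-wide binary image called binaryVessels: trace each curve with findContours and approximate its length with arcLength. A contour traced along an open, thin curve runs down one side and back up the other, so halving the open arc length is a common rough correction.

    #include <opencv2/imgproc.hpp>
    #include <vector>

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binaryVessels, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
    for (const auto& c : contours)
    {
        double len = cv::arcLength(c, false) / 2.0;  // approximate curve length in pixels
    }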

Is it possible to create a BGR image from 3 predefined 1-channel images?

I am trying to create one CV_8UC3 image from 3 different CV_8UC1 images that I already have, i.e. I am trying to pack the single-channel images that I already have into a single multi-channel image:

    Mat image3c(width, height, CV_8UC3, **data**)

Is it possible to either:

1. give the **data** directly during the CV_8UC3 creation, or
2. use `std::vector<Mat> channels{ channelB, channelG, channelR };` (I already have the individual BGR channels in CV_8UC1 format) followed by `merge(channels, image3c);`?
3. Splitting the image3c Mat, copying the predefined CV_8UC1 images into the channels, and then merging would work, but I am trying my best to avoid that.
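
A minimal sketch of option 2, which is the idiomatic route. Option 1 only works if the external buffer is already interleaved BGRBGR..., which separate per-channel planes are not, so merge is the natural fit here:

    #include <opencv2/core.hpp>
    #include <vector>

    std::vector<cv::Mat> channels{ channelB, channelG, channelR };  // existing CV_8UC1 Mats
    cv::Mat image3c;
    cv::merge(channels, image3c);  // allocates a CV_8UC3 of the same size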

Send/Receive vector<Mat> over Socket C/C++

Hello, I'm trying to send a vector<Mat> full of images from one computer to another over a TCP/IP socket connection in C/C++. Right now I can take the vector of Mat images, pass it to another function to save them, and then send the .png files over the connection. However, I want to be able to send just the vector over the socket and save the images (with imwrite) on the other system. I can make the connection and send data between the machines, because I've already sent an array of type double over and all the data was correct. When I tried the vector, I got a segmentation fault (core dumped), which makes me think it's something to do with the size of the vector being passed incorrectly. I'm pretty new to socket programming, so I'm probably just making simple mistakes. Here is my code so far.

Sender ('images' is the vector<Mat>):

    int length = sizeof(vector<Mat>) + (sizeof(Mat) * images.size());
    // *Code that makes the connection*
    send(socket, &images, length, 0);

Receiver:

    // *Code that makes the connection*
    vector<Mat> images;
    recv(socket, &images, images.size(), 0);

Any help would be appreciated. Thank you.
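
A hedged sketch of the underlying issue and one fix: a cv::Mat (and a std::vector) stores its data behind a pointer, so sending the object's bytes only sends pointers that are meaningless on the other machine. Serialise the header fields plus the raw pixel buffer instead. This is a minimal sketch; a robust version would loop until all bytes are actually sent/received.

    #include <opencv2/core.hpp>
    #include <sys/socket.h>

    void sendMat(int sock, const cv::Mat& m)
    {
        cv::Mat cont = m.isContinuous() ? m : m.clone();  // ensure one flat buffer
        int header[3] = { cont.rows, cont.cols, cont.type() };
        send(sock, header, sizeof(header), 0);
        send(sock, cont.data, cont.total() * cont.elemSize(), 0);
    }

    // Receiver side: recv the three ints, construct cv::Mat(rows, cols, type),
    // then recv into mat.data until rows*cols*elemSize() bytes have arrived.
    // For a vector<Mat>, first send the element count, then each Mat in turn.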

Using FLANN to match ORB descriptors

I am investigating methods to speed up an object-tracking algorithm that uses local feature matching in each frame of the sequence. It would be attractive to use the FLANN kNN search, since a database of descriptors is initialized in the first frame (i.e. the database is not updated), and in every frame the newly detected keypoints and descriptors are matched against the database to find the kNN of each descriptor. It would be attractive to build a tree during initialization so that the kNN matching/searching in each frame is significantly faster, as we do not traverse the entire database for a given query descriptor's best match; we have prior knowledge of the search space. However, since ORB features are ideal when computing resources are limited, how does one use the FLANN matcher? I have read [here](http://example.com/ "title") that we should convert the descriptors to CV_32F, but also [here](http://answers.opencv.org/question/11209/unsupported-format-or-combination-of-formats-in-buildindex-using-flann-algorithm/) that converting binary descriptors to float does not make sense. What would be the ideal way to use the FLANN matcher with ORB descriptors?

    #include "opencv2/highgui.hpp"
    #include "opencv2/features2d.hpp"
    #include "opencv2/core.hpp"
    #include <vector>

    using namespace cv;

    int main(int argc, char** argv)
    {
        Ptr<FeatureDetector> detector;
        Ptr<DescriptorExtractor> descriptor;
        detector = GFTTDetector::create();
        descriptor = ORB::create();
        FlannBasedMatcher fmatcher = FlannBasedMatcher(makePtr<flann::LshIndexParams>(12, 20, 2));

        Mat img_1 = imread("/home/user/testA.jpg", IMREAD_GRAYSCALE);
        std::vector<KeyPoint> keypointDatabase;
        Mat database;
        detector->detect(img_1, keypointDatabase);
        descriptor->compute(img_1, keypointDatabase, database);

        // Descriptors must not be converted to CV_32F
        //Mat database32F;
        //database.convertTo(database32F, CV_32F);
        //fmatcher.add(database32F);
        fmatcher.add(database);
        fmatcher.train();

        Mat img_2 = imread("/home/user/testB.jpg", IMREAD_GRAYSCALE);
        std::vector<KeyPoint> keypoints;
        Mat descriptors;
        detector->detect(img_2, keypoints);
        descriptor->compute(img_2, keypoints, descriptors);

        std::vector<std::vector<DMatch>> matches;
        fmatcher.knnMatch(descriptors, matches, 2);

        return 0;
    }

However, the error

> Unsupported format or combination of formats (type=5) in buildIndex_, file /home/user/opencv-3.4.0/modules/flann/src/miniflann.cpp, line 315

is produced when I call fmatcher.train(). What is the correct manner to build the tree in an initialization step?

Segmentation of Signature

I need to remove the background (some texts and numbers) from a signature to get just the signature on a white background. My source image is like this: ![image description](/upfiles/15342694007219833.png) Could you suggest any image-processing algorithms, or any computer vision / deep learning / machine learning based solutions?

Image criteria for barrel distortion calibration

Hello all, I am attempting to find a more concrete procedure for using OpenCV for barrel-distortion correction. My camera will be mounted in such a way that it does not move during operation. Can I use only one image to correct barrel distortion? If not, what positions, and how many images, are needed to correct it? My image has definite keystone distortion as well. I know people say to use at least 10 images, but I am looking for a more professional procedure for properly calibrating with OpenCV. Thank you!

Image gets rescaled after normalize using OpenCV and scikit-image

Hello, I'm trying to combine OpenCV and scikit-image for image processing. I need to do some histogram equalizations and apply some filters too, but first I need to normalize the video to keep the values between 0 and 1. The problem is that after I normalize the image in grayscale and try to do a histogram equalization, the image gets rescaled to 0 to 255. I don't know why this happens. Any ideas? Greetings

    import numpy as np
    import cv2
    import matplotlib.pyplot as plt
    import matplotlib.image as img
    from skimage.io import imread
    from skimage.transform import radon, iradon, resize
    import skimage.filters as fil
    from skimage import exposure
    from skimage.morphology import disk, erosion, dilation, opening, closing, white_tophat
    from skimage.color import rgb2gray

    camera = cv2.VideoCapture(0)
    cv2.namedWindow('Ventana1')

    while cv2.waitKey(1) == -1:
        retval, img = camera.read()
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        img2 = cv2.normalize(img, None, alpha=0, beta=1,
                             norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
        EQU = fil.rank.equalize(img2, disk(120))
        EQU2 = fil.rank.equalize(EQU, disk(25))
        plt.imshow(EQU2, cmap="gray")
        plt.show()

    # close the window and release the camera
    cv2.destroyAllWindows()
    camera.release()