
↧
How can I remove this circle from the image?
↧
Saving as video in a for loop
Hello, I have this for loop and I want to save each iteration's result as a video frame.
for f, image_file in enumerate(image_files):
    im = cv2.imread(image_file)
    tic = cv2.getTickCount()
    state = SiamRPN_track(state, im)  # track
    toc += cv2.getTickCount() - tic
    res = cxy_wh_2_rect(state['target_pos'], state['target_sz'])
    res = [int(l) for l in res]
    cv2.rectangle(im, (res[0], res[1]), (res[0] + res[2], res[1] + res[3]), (0, 255, 255), 3)
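One way to do this is with cv2.VideoWriter. A minimal sketch, assuming all frames share the same size; the output file name, codec, and FPS are placeholders:

import cv2

# Create the writer once, before the loop; the frame size must match the images.
first = cv2.imread(image_files[0])
h, w = first.shape[:2]
fourcc = cv2.VideoWriter_fourcc(*'XVID')   # assumed codec; use one your build supports
out = cv2.VideoWriter('tracking_result.avi', fourcc, 30.0, (w, h))

for f, image_file in enumerate(image_files):
    im = cv2.imread(image_file)
    # ... tracking and cv2.rectangle drawing as in the loop above ...
    out.write(im)   # append the annotated frame

out.release()   # finalize the file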
↧
Android Studio: undefined reference to cv::Stitcher
I am using the opencv library.
I want to do:
#include
#include
#include "opencv2/opencv.hpp"
#include "opencv2/stitching.hpp"
using namespace std;
using namespace cv;
...
Ptr<Stitcher> stitcher = Stitcher::create();
but I get this error:
undefined reference to `cv::Stitcher::create(cv::Stitcher::Mode)'
↧
How to create an effect on video images?
I am hoping to produce an effect on images over the frames. The work is to add or subtract certain RGB values for each pixel in the screen image.
The image (AVI) size is 1200 x 900.
x = X/600, y = Y/600, where (X, Y) are the pixel coordinates.
Each formula shown below, which covers the entire screen, stands for either a hyperbola or a parabola.
        Δr                 Δb                 Δg
F1   +4(ax^2 + by^2)   -4(ax^2 + by^2)   -4(ax^2 - by^2)
F2   -4(ax^2 + by^2)   +4(ax^2 + by^2)   +4(ax^2 - by^2)
F3   +4(ax^2 + by^2)   -4(ax^2 + by^2)   -4(ax^2 - by^2)
F4   -4(ax^2 + by^2)   +4(ax^2 + by^2)   +4(ax^2 - by^2)
Δr, Δb, and Δg are swing values that are added to or subtracted from the original RGB data of each pixel in the images.
My questions are:
1. Could I do the work with Python and OpenCV?
2. How do I work out the above routine?
I once studied FORTRAN, maybe 38 years ago, and I started to learn Python yesterday.
Is there anybody who can tell me the procedures needed to implement my plan? Your guidelines don't need to be very specific or precise.
Please help.
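To question 1: yes, this is a good fit for Python with OpenCV and NumPy, since the per-pixel arithmetic can be vectorized over whole frames. A minimal sketch applying formula F1, assuming a = b = 1 and an input file named 'input.avi' (both are placeholders):

import cv2
import numpy as np

cap = cv2.VideoCapture('input.avi')
a, b = 1.0, 1.0

ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    Y, X = np.mgrid[0:h, 0:w]        # pixel coordinate grids
    x, y = X / 600.0, Y / 600.0      # normalized as x = X/600, y = Y/600

while ok:
    f = frame.astype(np.float32)
    # Formula F1 swing values (OpenCV orders channels B, G, R)
    f[..., 2] += 4 * (a * x**2 + b * y**2)   # Δr
    f[..., 0] -= 4 * (a * x**2 + b * y**2)   # Δb
    f[..., 1] -= 4 * (a * x**2 - b * y**2)   # Δg
    out = np.clip(f, 0, 255).astype(np.uint8)
    cv2.imshow('effect', out)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    ok, frame = cap.read()

cap.release()
cv2.destroyAllWindows()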
↧
Still image capture at high resolution
I would like to capture a high-resolution image from an integrated camera. It's a Surface Book Pro, which offers a much higher still-image resolution (8 MP) than video resolution (2 MP).
Since I'm running on Windows, it seems pretty difficult to obtain that high-resolution image (most other video-capture libraries do not seem able to handle this; Qt even specifically says that the video module of QtMultimedia is not finished on Windows). Thus I'm stuck grabbing 2 MP frames from the VideoCapture module.
Since OpenCV has the ability to run with Media Foundation, and there is a 'TakePhoto' function for the current stream, would there be a possibility to extend the VideoCapture module to allow still images at full resolution when using the CAP_MSMF flag?
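For what it's worth, a minimal sketch of at least requesting a larger frame size from the MSMF backend through VideoCapture properties; the backend may silently fall back to the nearest supported video mode, and the 8 MP dimensions below are an assumption:

import cv2

cap = cv2.VideoCapture(0, cv2.CAP_MSMF)   # open the integrated camera via Media Foundation
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3264)   # assumed 8 MP still-image dimensions
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2448)
ok, frame = cap.read()
print(frame.shape if ok else 'capture failed')
cap.release()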
↧
Python OpenCV homography giving unexpected results. Please suggest what changes the code needs!
Hi! I am doing homography in Python and am not getting the expected output; I am getting a different matrix.
Here is my code's Google Colab link:
https://colab.research.google.com/drive/1OMWP4faY8dbNRAfryvBs7S7CEuM8oeDz
Here is a basic homography link (just for your reference):
https://www.learnopencv.com/image-alignment-feature-based-using-opencv-c-python/
In case you are not able to open Colab, here is the source code:
import cv2
import numpy as np

if __name__ == '__main__':
    pts_src = np.array([[141.0, 131.0], [480.0, 159.0], [493.0, 630.0], [64.0, 601.0]])
    pts_src = np.asarray(pts_src)          # not needed actually
    pts_src = pts_src.astype('float32')    # not needed actually
    pts_dst = np.array([[318.0, 256.0], [534.0, 372.0], [316.0, 670.0], [73.0, 473.0]])
    pts_dst = np.asarray(pts_dst)          # not needed actually
    pts_dst = pts_dst.astype('float32')    # not needed actually
    h, status = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 5.0)
    # Warp source image to destination based on homography
    print(h)
    print(len(h))
    print("------------Printing H------")
    print(status)
    print("------Printing Status-------------")
    print(status.ravel().tolist())
    print("--------------------")
    print("Source is multiplied first")
    # pts_src2 = np.array([[141.0, 131.0, 1.0], [480.0, 159.0, 1.0], [493.0, 630.0, 1.0]])
    pts_dst2 = np.array([[318.0, 256.0, 1.0], [534.0, 372.0, 1.0], [316.0, 670.0, 1.0]]).transpose()
    print("----see---")
    print(len(pts_dst2))
    pts_dst2 = np.asarray(pts_dst2)        # not needed actually
    pts_dst2 = pts_dst2.astype('float32')  # not needed actually
    pts_dst2 = np.asmatrix(pts_dst2)
    h = np.asmatrix(h)
    pts = np.dot(h, pts_dst2)
    print(pts)
    print("--------END-----------")
Output:
[ 1.46491654e-01 4.41418278e-01 1.61369294e+02]
[-3.62463336e-04 -9.14274844e-05 1.00000000e+00]]
3
------------Printing H------
[[1]
[1]
[1]
[1]]
------Printing Status-------------
[1, 1, 1, 1]
--------------------
Source is multiplied first
----see---
3
[[322.31218638 367.38950309 147.72051431]
[320.95671889 403.80343648 503.41090271]
[ 0.86133122 0.77243355 0.82420517]]
--------END-----------
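For reference, findHomography(pts_src, pts_dst, ...) returns a matrix that maps *source* points to destination points, and the product h·p is homogeneous, so each column must be divided by its third component before comparing against pixel coordinates. A minimal sketch of the projection, reusing h and the first three source points from the code above:

# h maps src -> dst, so multiply by the source points in homogeneous form.
pts_src2 = np.array([[141.0, 131.0, 1.0],
                     [480.0, 159.0, 1.0],
                     [493.0, 630.0, 1.0]]).transpose()
pts = np.asarray(np.dot(h, pts_src2))   # 3 x N homogeneous result
pts = pts[:2] / pts[2]                  # divide by the w row
print(pts.transpose())                  # should be close to the pts_dst rows

# Equivalent built-in: cv2.perspectiveTransform expects shape (N, 1, 2).
src = pts_src[:3].reshape(-1, 1, 2).astype('float32')
print(cv2.perspectiveTransform(src, np.asarray(h)))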
↧
I'm facing a "[Errno 13] Permission denied" error; can anyone help?
The full error message is:
PermissionError: [Errno 13] Permission denied: 'F:/PetImages/train/Dog'
and the code is:

import os
import random
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

train_dir = 'F:/PetImages/train'
test_dir = 'F:/PetImages/test'
train_dogs = ['F:/PetImages/train/{}'.format(i) for i in os.listdir(train_dir) if 'Dog' in i]
train_cats = ['F:/PetImages/train/{}'.format(i) for i in os.listdir(train_dir) if 'Cat' in i]
test_img = ['F:/PetImages/test/{}'.format(i) for i in os.listdir(test_dir)]
train_imgs = train_dogs[:200] + train_cats[:2000]
random.shuffle(train_imgs)

for ima in train_imgs[0:3]:
    img = mpimg.imread(ima)
    imgplot = plt.imshow(img)
    plt.show()
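For reference, os.listdir(train_dir) returns the subfolder names ('Dog', 'Cat'), so the paths above point at the folders themselves, and imread on a folder raises exactly this Errno 13 on Windows. A sketch that lists the image files inside each class subfolder instead, assuming the usual PetImages layout:

import os

dog_dir = os.path.join(train_dir, 'Dog')
cat_dir = os.path.join(train_dir, 'Cat')
train_dogs = [os.path.join(dog_dir, f) for f in os.listdir(dog_dir)]   # actual image files
train_cats = [os.path.join(cat_dir, f) for f in os.listdir(cat_dir)]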
↧
I have got this (-215:Assertion failed) error and couldn't find any solution for it; please help
The error message is: "OpenCV(3.4.2) c:\miniconda3\conda-bld\opencv-suite_1534379934306\work\modules\imgproc\src\color.hpp:253: error: (-215:Assertion failed) VScn::contains(scn) && VDcn::contains(dcn) && VDepth::contains(depth) in function 'cv::CvtHelper<struct cv::Set<3,4,-1>,struct cv::Set<3,4,-1>,struct cv::Set<0,2,5>,2>::CvtHelper'"
and the code is:

path = "F:/New folder"
hand_picturs = []
for i in range(6):
    hands_pic = cv2.cvtColor(cv2.imread(path + str(i) + ".jpg"), cv2.COLOR_BGR2RGB)
    hand_picturs.append(hands_pic)
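This assertion usually means cv2.imread returned None; note that path + str(i) produces e.g. 'F:/New folder0.jpg', with no path separator before the file name. A sketch with the path joined explicitly and an explicit check:

import os
import cv2

path = "F:/New folder"
hand_picturs = []
for i in range(6):
    img_path = os.path.join(path, str(i) + ".jpg")   # 'F:/New folder/0.jpg'
    img = cv2.imread(img_path)
    if img is None:                                  # imread fails silently
        raise FileNotFoundError(img_path)
    hand_picturs.append(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))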
↧
find4QuadCornerSubpix vs cornerSubPix
Hi all,
I'm not able to find any information about the find4QuadCornerSubpix() function.
I'm trying to understand the difference between find4QuadCornerSubpix() and cornerSubPix().
Can someone help me?
Thanks
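For reference, both functions refine coarse corner locations to sub-pixel accuracy. A minimal sketch calling each on chessboard corners; the image file name and pattern size are placeholder assumptions:

import cv2

gray = cv2.imread('chessboard.jpg', cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(gray, (9, 6))
if found:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # cornerSubPix: iterative, gradient-based refinement inside a search window
    c1 = cv2.cornerSubPix(gray, corners.copy(), (11, 11), (-1, -1), criteria)
    # find4QuadCornerSubpix: refinement tailored to corners formed by chessboard quads
    ok, c2 = cv2.find4QuadCornerSubpix(gray, corners.copy(), (11, 11))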
↧
How to import an LSTM net in OpenCV?
##### System information (version)
- OpenCV => 4.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
#### Detailed description:
I want to use an LSTM network to classify sequence data in OpenCV, but all the tutorials I have seen cover image classification. The general way is to read an image with imread, convert it to a blob with blobFromImage/blobFromImages, and then pass the blob into the network's forward(). My problem is that the LSTM network input is not an image; it is a sequence of data, a 1024 x nImages variable-size input. How do I pass it into the network for forward()?
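Not a definitive answer, but the general pattern is that net.setInput accepts any N-dimensional blob, not only the output of blobFromImage, so a sequence can be passed as a raw NumPy array. A sketch under that assumption; the model file name and the (batch, timesteps, features) shape are placeholders:

import cv2
import numpy as np

net = cv2.dnn.readNet('lstm_model.onnx')               # hypothetical model file
seq = np.random.rand(1, 10, 1024).astype(np.float32)   # assumed (batch, timesteps, features)
net.setInput(seq)
out = net.forward()
print(out.shape)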
↧
Cannot locate opencl_kernels_tracking.hpp
Hi.
I am new to OpenCV. I have a task to understand how the KCF tracker works. My current C++ code links the shared library, but I don't want to link the entire library; I only want to extract the relevant code that the KCF tracker uses.
I found that opencv_contrib\trackerKCF.cpp includes a header file, "opencl_kernels_tracking.hpp", which I have not been able to locate. Can anyone please guide me about its location and its use? I could not find any article or thread regarding this header file either.
Thanks.
↧
findCirclesGrid throws Exception
Here is a small code snippet that tries to search for a circles grid on a black image. I expect it to find nothing, return false and finish.
const cv::Size patternsize(6, 5);
std::vector<cv::Point2f> centers;
cv::Mat debug(100, 100, CV_8U, cv::Scalar(0));
bool detectedCircles = cv::findCirclesGrid(debug, patternsize, centers, cv::CALIB_CB_SYMMETRIC_GRID);
Instead, it triggers a `cv::Exception`.
Tracing it back we find that it is thrown from a function `filterOutliersByDensity`:
void CirclesGridFinder::filterOutliersByDensity(const std::vector<cv::Point2f>& samples, std::vector<cv::Point2f>& filteredSamples)
{
  if (samples.empty())
    CV_Error( 0, "samples is empty" );
which is to be expected, as `filterOutliersByDensity` is called like so, with an empty, freshly created `vectors` vector as its first parameter:
bool CirclesGridFinder::findHoles()
{
  switch (parameters.gridType)
  {
    case CirclesGridFinderParameters::SYMMETRIC_GRID:
    {
      std::vector<cv::Point2f> vectors, filteredVectors, basis;
      Graph rng(0);
      computeRNG(rng, vectors);
      filterOutliersByDensity(vectors, filteredVectors);
I'm running OpenCV 4.1.0.
All related OpenCV functions were last changed more than 5 years ago.
This problem arose when I tried to run some code that worked 2 weeks ago and that I have not touched since.
Actual questions:
1. How did it work before without triggering exceptions?
2. Why does it not work now?
3. How should I detect circles in an image then?
P.S.: it's my first post here and I'm very, very confused, but I hope my question is still clear. I'm happy to provide more/less detail.
↧
cannot find -lopencv_gapi
I'm struggling to compile [marker_mapper](http://www.uco.es/investiga/grupos/ava/node/57), which uses the aruco lib. These are my settings:
GCC 5.5.0, Ubuntu 16.04
OpenCV 3.3.1 that comes with ROS (I tried other versions of V3)
Aruco 3.0.12
I don't really know what's missing. I've worked with this setup many times, but I can't figure out why marker_mapper doesn't compile:
[ 18%] Building CXX object src/CMakeFiles/marker_mapper.dir/mapper_types.cpp.o
[ 18%] Building CXX object src/CMakeFiles/marker_mapper.dir/debug.cpp.o
[ 36%] Building CXX object src/CMakeFiles/marker_mapper.dir/markermapper.cpp.o
[ 36%] Building CXX object src/CMakeFiles/marker_mapper.dir/optimizers/ippe.cpp.o
[ 45%] Building CXX object src/CMakeFiles/marker_mapper.dir/optimizers/fullsceneoptimizer.cpp.o
[ 54%] Building CXX object src/CMakeFiles/marker_mapper.dir/mappers/globalgraph_markermapper.cpp.o
[ 63%] Building CXX object src/CMakeFiles/marker_mapper.dir/mappers/posegraphoptimizer.cpp.o
[ 72%] Building CXX object src/CMakeFiles/marker_mapper.dir/utils/utils3d.cpp.o
[ 81%] Linking CXX shared library libmarker_mapper.so
/usr/bin/ld: cannot find -lopencv_gapi
collect2: error: ld returned 1 exit status
src/CMakeFiles/marker_mapper.dir/build.make:321: recipe for target 'src/libmarker_mapper.so.1.0.12' failed
make[2]: *** [src/libmarker_mapper.so.1.0.12] Error 1
CMakeFiles/Makefile2:117: recipe for target 'src/CMakeFiles/marker_mapper.dir/all' failed
make[1]: *** [src/CMakeFiles/marker_mapper.dir/all] Error 2
Makefile:151: recipe for target 'all' failed
make: *** [all] Error 2
↧
How to automatically find pattern size for Opencv Camera Calibration?
I would like to know if there is a way to automatically find the pattern size for the findCirclesGrid() function in OpenCV. I am currently using a circle-grid pattern with a zoom-lens camera and would like to find the distortion at all zoom levels.
While zooming the camera, the grid circles are only half visible at the edges, as shown in the figure, and the findCirclesGrid() function doesn't work irrespective of the pattern size.
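One workaround sketch: probe a range of candidate pattern sizes and keep the largest grid that is actually detected. The image file name and the size ranges are placeholder assumptions:

import cv2

img = cv2.imread('grid.png', cv2.IMREAD_GRAYSCALE)
best = None
for rows in range(3, 12):
    for cols in range(3, 12):
        found, centers = cv2.findCirclesGrid(img, (cols, rows),
                                             flags=cv2.CALIB_CB_SYMMETRIC_GRID)
        if found and (best is None or rows * cols > best[0]):
            best = (rows * cols, (cols, rows), centers)
if best:
    print('detected pattern size:', best[1])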
↧
Image Registration Between 2 Cameras of Fixed Distance
So I am trying to calculate the NDVI, which requires me to have aligned images from both my infrared camera and my normal color camera. Both cameras are actually part of the Intel RealSense camera that I'm using, so they are a fixed distance apart. I am trying to automatically perform image registration between every frame captured by the infrared camera to the frames captured by the color camera.
To do this, I have used ORB features to find common points and the RANSAC algorithm to exclude outliers and generate a homography matrix, which I then applied to the infrared capture with warpPerspective so it would be aligned with the color capture. I found that doing this caused the image to shake and sway unacceptably between frames. Since the difference between the two cameras is a fixed physical constant, I simply averaged the homography matrices over many iterations until it converged into a stable matrix. This yielded great results! But it's not perfect...
I notice that relative features of the images are not aligned. For instance, there is a pillar in the background. When I position my finger in the foreground so that it just barely "touches" the pillar in the color capture, my finger has a small, but non-negligible separation (of about 10 pixels) from the pillar in the infrared capture. To perfectly align the images would require a tiny (right-handed) rotation along the y-axis (and perhaps other things). After making small increases and decreases to each entry of the homography matrix, it seems to me that there is no possible homography matrix that could solve this issue!
So is there a more complete solution I can use to perfectly align infrared images coming from a camera a fixed distance apart from the color image's camera? I list some extra information for an idea that I have, but if you already know a solution that would work, don't be tainted by the discussion below!
**More Information** The RealSense cameras have an API that allow me to align the original depth images with the color images (and in their API, there is no way to align two non-depth images to each other, as I am trying to do). However, according to a post I read when I was looking for more information, the original depth image (unaligned to the color image) is by default aligned to the left infrared camera! So I was suggested to find a map that captures the relationship between the unaligned depth image to the aligned depth image so that I could apply that map to the infrared image.
In general, I am looking for a matrix that captures the relationship between the color capture and infrared capture, such that all I need to do is apply that matrix to the infrared capture to align it with the color capture. I am using Python 2.
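For reference, a minimal sketch of the ORB + RANSAC step with a running average of the homography, as described above; frame_pairs is an assumed iterable of (infrared, color) frames:

import cv2
import numpy as np

def estimate_homography(ir, color):
    # One ORB + RANSAC homography estimate mapping ir -> color.
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ir, None)
    k2, d2 = orb.detectAndCompute(color, None)
    if d1 is None or d2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]
    if len(matches) < 4:
        return None
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

H_avg, n = None, 0
for ir, color in frame_pairs:   # assumed iterable of image pairs
    H = estimate_homography(ir, color)
    if H is None:
        continue
    # Running average over frames to damp the frame-to-frame shake.
    H_avg = H if H_avg is None else (H_avg * n + H) / (n + 1)
    n += 1
    aligned = cv2.warpPerspective(ir, H_avg, (color.shape[1], color.shape[0]))

That said, the residual misalignment described above is expected: with a nonzero baseline, a single homography is only exact for a planar scene (or a pure rotation between views), so no 3x3 matrix can align foreground and background simultaneously. A depth-dependent warp, such as one built from the depth stream as suggested, is the more complete solution.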
↧
How does stereo SGBM algorithm really work inside ?
Hello everyone,
I have been playing with stereo BM and stereo SGBM for a little while now, and even though I understand in detail how the BM algorithm works, I am still struggling a lot to understand the latter. I have tried to read all the relevant papers: *Stereo Processing by Semi-Global Matching and Mutual Information* by Heiko Hirschmüller, *Learning OpenCV 3* by Adrian Kaehler and Gary Bradski, and *Depth Discontinuities by Pixel-to-Pixel Stereo* by Stan Birchfield and Carlo Tomasi, and also looked at many other theses on the subject, yet I haven't been able to understand how the SGBM algorithm (which is a combination of BM with a variation of SGM) really works inside.
What I mainly don't understand is the following:
1) What is the Birchfield-Tomasi metric used in the algorithm? Every paper refers to a "Birchfield-Tomasi metric", but none explains what it is, and reading the Birchfield-Tomasi paper didn't help me understand it.
2) How is a window used in the algorithm (what operation is performed with that window)?
3) Finally, what are the different directions that can be used (3, 5, or 8)? Aren't we supposed to compute a matching cost only along the epipolar lines?
It is really frustrating; I have spent the last 2 entire days trying to figure this out and haven't been able to, and I really need to be able to explain it in my thesis.
Any help would be more than welcome!
↧
Display raw 12 bpp grayscale image
I'm reading images from a 12-bits-per-pixel grayscale frame grabber. It returns these images in an array of uint16_t, where the 4 MSBs are 0. I'm packing this array into an array of uint8_t (2 uint16_t give 3 uint8_t) and saving it to disk raw, so the resulting file is a sequence of 12-bit chunks, one after the other, in big endian.
Later on, I'd like to read such a file and get its histogram using `calcHist`. I'm reading the file into an `std::vector<uint8_t>` and unpacking it into an `std::vector<uint16_t>` (3 uint8_t give 2 uint16_t, with the 4 MSBs zeroed). So far, everything works.
I'm trying to display the image with OpenCV using the following:
cv::Mat img(1024, 1280, CV_16UC1, &data[0]); // data is my std::vector<uint16_t>
cv::imshow("Image", img);
cv::waitKey(0);
I'm expecting an image that looks like [this](https://imgur.com/P3uLVv5), but what I'm getting is [this](https://imgur.com/mnuY70h). Any help would be appreciated!
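Two things worth checking, sketched in Python for brevity (the raw file name and dimensions are assumptions): the 3-bytes-to-2-samples unpacking, and the fact that imshow maps 16-bit pixels to the display by dividing by 256, so 12-bit values sitting in the low bits render almost black unless shifted up by 4 bits:

import cv2
import numpy as np

raw = np.fromfile('frame.raw', dtype=np.uint8)   # hypothetical packed file
# Every 3 bytes hold two big-endian 12-bit samples: AAAAAAAA AAAABBBB BBBBBBBB
b0 = raw[0::3].astype(np.uint16)
b1 = raw[1::3].astype(np.uint16)
b2 = raw[2::3].astype(np.uint16)
even = (b0 << 4) | (b1 >> 4)
odd = ((b1 & 0x0F) << 8) | b2
img = np.empty(even.size + odd.size, dtype=np.uint16)
img[0::2], img[1::2] = even, odd
img = img.reshape(1024, 1280)
# Shift the 12-bit data into the high bits so imshow's /256 mapping shows detail.
cv2.imshow('Image', img << 4)
cv2.waitKey(0)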
↧
Draw difference in contours using reference images
Hi, I am trying to draw contours from the difference I got by comparing 2 reference images, but somehow I am not able to draw that difference using drawContours.
Any suggestion will be very helpful.
Code:
import cv2
import numpy as np
import imutils

f = cv2.imread("2.jpg")
s = cv2.imread("1.jpg")
difference = cv2.absdiff(s, f)
grey = cv2.cvtColor(difference, cv2.COLOR_BGR2GRAY)
c, h = cv2.findContours(grey, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
for contour in c:
    area = cv2.contourArea(contour)
    if area > 10:
        print(area)
        c = cv2.drawContours(difference, contour, -1, (0, 255, 0), 3)
cv2.imwrite("Frame.jpg", c)
#cv2.imshow("difference", difference)
cv2.waitKey(1)
cv2.destroyAllWindows()
Using the reference images below:
1st image: [attachment]
2nd image: [attachment]
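A sketch of one common fix: threshold the grayscale difference so findContours receives a binary image, and draw all the large contours in a single call onto a copy of the frame; the threshold value of 25 is an arbitrary assumption:

import cv2

f = cv2.imread("2.jpg")
s = cv2.imread("1.jpg")
grey = cv2.cvtColor(cv2.absdiff(s, f), cv2.COLOR_BGR2GRAY)
# findContours expects a binary image; threshold the raw difference first.
_, mask = cv2.threshold(grey, 25, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
out = f.copy()
big = [c for c in contours if cv2.contourArea(c) > 10]
cv2.drawContours(out, big, -1, (0, 255, 0), 3)   # draws in place on out
cv2.imwrite("Frame.jpg", out)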
↧
Can I use SIFT/SURF features in Python for my project? If yes, how?
I tried opencv-contrib, but it still doesn't work. What do I need to do to get access to the SURF and SIFT features?
I get an error like this when I try to use the SURF and SIFT features:
surf = cv2.SURF(400)
AttributeError: module 'cv2.cv2' has no attribute 'SURF'
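For reference, in builds where the contrib xfeatures2d module is available, the factory functions look like this. SURF is patented and absent from many prebuilt wheels, so this is a sketch, not guaranteed to run on every install; the image file name is a placeholder:

import cv2

img = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)
sift = cv2.xfeatures2d.SIFT_create()      # plain cv2.SIFT_create() in OpenCV >= 4.4
surf = cv2.xfeatures2d.SURF_create(400)   # requires an OPENCV_ENABLE_NONFREE build
kp_sift, des_sift = sift.detectAndCompute(img, None)
kp_surf, des_surf = surf.detectAndCompute(img, None)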
↧
How to hash an OpenCV matrix / LBPH histogram?
Hi, I want to try to create a hash code from a ***.yml file. For example, I have an existing yml file with an OpenCV matrix and an LBPH histogram:
%YAML:1.0
opencv_lbphfaces:
   threshold: 1.7976931348623157e+308
   radius: 1
   neighbors: 8
   grid_x: 8
   grid_y: 8
   histograms:
      - !!opencv-matrix
        rows: 1
        cols: 16384
        dt: f
        data: [ 2.49739867e-02,
and so on....
Please give me some suggestions, methods, or existing source code for how to convert it into a hash code or another useful form. And if I understood this yml file correctly, the main face-identity features are stored in histograms:data?
I want to take this hash code and put it into another system/request...
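One possible approach, assuming the opencv-contrib face module is available: load the model, pull the histogram matrices (which do hold the per-sample identity features, as you suspected), and hash their raw bytes. SHA-256 and the file name are arbitrary choices:

import cv2
import hashlib

model = cv2.face.LBPHFaceRecognizer_create()
model.read('model.yml')                # the yml file shown above
digest = hashlib.sha256()
for hist in model.getHistograms():     # one 1 x 16384 float32 matrix per sample
    digest.update(hist.tobytes())
print(digest.hexdigest())

Note that hashing raw floats is brittle: retraining, even on the same data, can perturb the values slightly and change the hash completely, so this works as an integrity fingerprint rather than a similarity measure.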
↧