I am using Qt C++, OpenCV C++ and a Basler Pylon camera, doing camera calibration for a Duck Hunt game simulator.
I have got as far as undistorting the images. Now how can I get the AOI and offset values of the undistorted image, so that I can use them in my game simulator?
↧
After calibration, how do I take the AOI of the undistorted checkerboard?
↧
Error in changing the background color from white to black
Hello. I am trying to change the background color from white to black. The background is white, so I go through all the pixels, check whether each one is white, and if so set it to 0, but something went wrong. Here is my code:
Mat img = imread("t.PNG");
for (int x = 0; x < img.rows; x++)
{
    for (int y = 0; y < img.cols; y++)
    {
        // Point is (column, row), so with x iterating rows it must be Point(y, x)
        Vec3b &pix = img.at<Vec3b>(Point(y, x));
        if (pix[0] >= 245 && pix[1] >= 245 && pix[2] >= 245)
        {
            pix = Vec3b(0, 0, 0);
        }
    }
}
imwrite("img.png", img);
imshow(" ", img);
waitKey(0);

↧
dnn::readNetFromTensorflow() fail on loading pre-trained network on age-gender detection
Dear opencv dnn developers,
[Environment]
TensorFlow 1.5
Python 3.5
Windows 7
OpenCV 3.4
I am very new to TensorFlow. Recently I found a pretrained DNN for age/gender detection: [https://github.com/BoyuanJiang/Age-Gender-Estimate-TF](https://github.com/BoyuanJiang/Age-Gender-Estimate-TF). The model is defined in "inception_resnet_v1.py" and the latest model checkpoint can be found at [https://mega.nz/#!kaZkWDjb!xQvWi9B--FgyIPtIYfjzLDoJeh2PUBEZPotmzO9N6_M](https://mega.nz/#!kaZkWDjb!xQvWi9B--FgyIPtIYfjzLDoJeh2PUBEZPotmzO9N6_M)
I loaded the model with the latest checkpoint into TensorFlow and detection runs fine, and I managed to freeze the graph using freeze_graph.py. But when I load this frozen graph with cv::dnn::readNetFromTensorflow("XXX.pb"), I get the error "Unknown layer type shape in op map/Shape". If I instead try cv::dnn::readNetFromTensorflow("XXX.pb", "XXX.pbtxt"), I get the error "unknown enumeration values of DT_RESOURCE for field type".
I have searched the internet for a solution; someone suggested that "optimize_for_inference" or "graph_transform" might help. As I have limited TensorFlow knowledge, I do not understand how these two processes can solve the problem. Also, cv::dnn::readNetFromTensorflow() takes a second argument: in which situations should I, or should I not, provide a .pbtxt for it?
Please help. If there is any further information needed to clarify the question, just let me know and I will supply.
Thanks in advance
↧
Assign value to cv::Mat using Mat::at()
I'm trying to rearrange Mat imageWork, which is 800 x 600, type 8UC3, into kmeansData, which is 480000 x 1, type 32FC3. The Vec3f pixel fetches its value correctly, but at the last step nothing is copied to kmeansData. In the debugger it shows kmeansData.data = uchar* = '\0'.
kmeansData = Mat(imageWork.rows * imageWork.cols, 1, CV_32FC3);
for (int x = 0; x < imageWork.rows; x++) {
    for (int y = 0; y < imageWork.cols; y++) {
        Vec3f pixel = (Vec3f)imageWork.at<Vec3b>(x, y);
        kmeansData.at<Vec3f>(y + x * imageWork.cols, 0) = pixel;
    }
}
Can anybody explain to me why?
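For comparison, in Python/NumPy the same rearrangement for kmeans is a single reshape plus a cast (a sketch, with a blank array standing in for imageWork):

```python
import numpy as np

# Stand-in for the 800 x 600 8UC3 image (rows x cols x channels)
image_work = np.zeros((600, 800, 3), np.uint8)

# 480000 x 1 three-channel float32 layout, same row-major ordering
# as the nested loops above
kmeans_data = image_work.reshape(-1, 1, 3).astype(np.float32)
```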
↧
Bundle Adjustment
Hello,
Regarding the link below, I have some queries:
https://www2.informatik.hu-berlin.de/cv/vorlesungen/WS1415/material/WS_14_05_Bundle.pdf
1) In the Gauss-Newton iteration method, what is the initial approximate parameter vector p0?
2) How do I calculate the damping factor λ?
3) Does the Levenberg-Marquardt algorithm perform well in real time?
4) Is there any implementation of bundle adjustment in OpenCV?
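On questions 1 and 2, as a reading aid for the slides: p0 is typically obtained from a direct initialization (e.g. decomposed projection matrices or a linear triangulation), and λ is not computed in closed form; it is adapted per iteration, decreased when a step reduces the reprojection error and increased otherwise. The damping factor enters the Gauss-Newton normal equations as follows, with J the Jacobian of the residual vector r at the current parameters p_k:

```latex
\bigl(J^\top J + \lambda\,\operatorname{diag}(J^\top J)\bigr)\,\delta = -J^\top r,
\qquad p_{k+1} = p_k + \delta
```

For λ → 0 this reduces to a pure Gauss-Newton step; for large λ it approaches a small gradient-descent step.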
↧
cv2.imencode / cv2.imdecode output issue
Can someone advise why there is a difference between the input and output arrays after encoding/decoding?
Below is my code snippet. When I use both arrays to display the image, there is no visible difference between the two images. What bothers me is that I don't know where the difference comes from.
Thanks.
import cv2
import socket
import pickle
import numpy as np
import time
import sys
# Capture picture
cap = cv2.VideoCapture('sample.jpg')
ret, frame = cap.read()
# print frame as is
print(frame)
# encode data ready for sending
pack_data = cv2.imencode('.jpeg', frame)[1].tobytes()
# just a simple output separator
print('--------------------------------------------')
# decode encoded data back into an array
i = cv2.imdecode(np.frombuffer(pack_data, dtype=np.uint8), cv2.IMREAD_COLOR)
# print array after it is decoded
print(i)
----------
BEFORE ENCODING
[[[237 244 243]
[249 255 255]
[255 246 250]
...,
[242 242 242]
[242 242 242]
[242 242 242]]
[[249 255 255]
[249 255 255]
[255 244 248]
...,
[242 242 242]
[242 242 242]
[242 242 242]]
AFTER ENCODING
[[[223 245 243]
[255 253 255]
[255 243 248]
...,
[242 242 242]
[242 242 242]
[242 242 242]]
[[235 255 255]
[255 252 255]
[255 242 247]
...,
[242 242 242]
[242 242 242]
[242 242 242]]
↧
Extract a poly from an image and make the poly part a new image
Hi CVers, I ran into a problem while doing an OCR task.
Suppose this is the ocr region:

I want to extract the content of the white region of the raw image as a new image for text recognition, but **the region is often not a rectangle**, so I can't simply crop the polygonal region out as a new image.
Anyone has an idea to solve this problem? Thanks here in advance:)
↧
Distances to surroundings in image
Using a rotating robot and Kinect depth data, I am able to create a black-and-white image of the surroundings of my robot (black is free space, white are obstacles).
The robot is looking for a tag and, if it is not found, should move to another location and repeat the search.
I am a bit unsure where the robot should move next; probably best in a direction with no or far-away obstacles, and not too close to an already proven unsuccessful scan position.
I know I could walk through every pixel in an expanding circle and eliminate non-promising directions. However, I am in a Python environment, and stepping through all the pixels in a loop will be slow and use lots of CPU cycles.
Are there any functions in OpenCV to rotate a beam around a fixed location (the position of my robot) and get distances (e.g. for each degree) to the nearest obstacle (in my case a white pixel) in reasonable time?
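There is no single OpenCV call for a radial beam scan (`cv2.linearPolar` gets close by remapping the image to polar coordinates), but the scan vectorizes well in NumPy without any per-pixel Python loop. A sketch, with a toy obstacle map standing in for the Kinect-derived image:

```python
import numpy as np

def ray_distances(obstacles, cx, cy, n_angles=360, max_r=200):
    """Distance (in pixels) from (cx, cy) to the first obstacle per angle."""
    ang = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r = np.arange(1, max_r)
    # Sample every ray at every radius in one vectorized shot
    xs = np.clip((cx + np.outer(np.cos(ang), r)).astype(int),
                 0, obstacles.shape[1] - 1)
    ys = np.clip((cy + np.outer(np.sin(ang), r)).astype(int),
                 0, obstacles.shape[0] - 1)
    hit = obstacles[ys, xs]
    # First radius where the ray meets an obstacle; max_r if it never does
    return np.where(hit.any(axis=1), hit.argmax(axis=1) + 1, max_r)

# Toy map: a vertical wall at x = 80, robot at (50, 50)
occ = np.zeros((100, 100), bool)
occ[:, 80] = True
d = ray_distances(occ, 50, 50)
```

Directions with large `d` values are the promising ones to drive toward.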
↧
Detecting laser dots - possible?
I am new to OpenCV and wonder how hard it would be to detect one or more laser dots projected onto a distant object. In order for the camera to distinguish the laser dots from other reflections, the lasers would be modulated at a 10-15 Hz rate, so some frames would see the dots and others would not. The OpenCV program would need to compare successive frames to see the flashing dots. I was thinking of using a 60 frame/sec camera so that a laser flash would always last for 3-4 frames, then "disappear" for 3-4 frames, and continuously repeat.
The initial task would be simply to recognize that one beam (dot) was seen; later I would like to know the angular displacement between two or more such modulated dots.
Both the source of the laser beams and the camera would be moving.
↧
Update opencv
I recently saw a version of OpenCV in which the window containing the image has certain extra features: it shows the coordinates as well as the RGB values at the pointer position. Can anyone tell me how to get that feature? Is it a matter of upgrading OpenCV, or something else?
↧
How to add gloss to the lips using Opencv Python ?
Hi,
I have identified the facial landmarks using dlib and have all the lip points. Now I want to add gloss to the lips. How can I achieve this using OpenCV Python? I have been stuck for a long time; any help is highly appreciated.
Thanks
↧
opencv fails compile on: system.cpp:832:13 - close(fd) not declared in this scope
System is: windows 10, cygwin
[ 33%] Building CXX object modules/core/CMakeFiles/opencv_core.dir/opencl_kernels_core.cpp.o
[ 33%] Building CXX object modules/core/CMakeFiles/opencv_core.dir/src/convert.sse4_1.cpp.o
/c/_commonShare/opencv-3.4.1/modules/core/src/system.cpp: In function ‘cv::String cv::tempfile(const char*)’:
/c/_commonShare/opencv-3.4.1/modules/core/src/system.cpp:832:13: error: ‘close’ was not declared in this scope
close(fd);
^
make[2]: *** [modules/core/CMakeFiles/opencv_core.dir/build.make:1770: modules/core/CMakeFiles/opencv_core.dir/src/system.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:1457: modules/core/CMakeFiles/opencv_core.dir/all] Error 2
make: *** [Makefile:161: all] Error 2
Anyone faced such a problem before?
↧
Watershed Error OpenCV Error: Assertion failed (src.type() == CV_8UC3 && dst.type() == CV_32SC1)
I am trying to use watershed on a segmented image. I prepared the watershed data and the watershed marker the way the function requires, but I keep getting this error. Which part has gone wrong?
OpenCV Error: Assertion failed (src.type() == CV_8UC3 && dst.type() == CV_32SC1) in watershed, file /home/WXH/Desktop/opencv-3.2.0/modules/imgproc/src/segmentation.cpp, line 161
the code is here
Canny(imageCluster, edges, 100, 150, 3, true);
// convertTo() keeps the channel count, so the Canny output stayed 1-channel;
// watershed() asserts an 8UC3 input, hence the color conversion here
cvtColor(edges, watershedData, COLOR_GRAY2BGR);
watershedMarker = Mat(imageWork.rows, imageWork.cols, CV_32SC1, Scalar::all(0));
watershedMarker.row(0) = 255;
watershedMarker.row(imageWork.rows - 1) = 255;
watershedMarker.col(0) = 255;
watershedMarker.col(imageWork.cols - 1) = 255;
printf("watershedData type is %d\n", watershedData.type());
printf("watershedMarker type is %d\n", watershedMarker.type());
watershed(watershedData, watershedMarker);
↧
Windows binaries for 3.4.1 - where are Core, Highgui, etc.?
Greetings,
I am coming from an Ubuntu environment where I have been using OpenCV, so I am a newbie under Windows. I downloaded the Windows binaries .exe for OpenCV, and in the various \bin dirs I only found DLLs for ffmpeg, world341.dll, and a few EXEs. However, I do not see Core, Highgui, etc. So I wanted to ask whether the binaries I downloaded from opencv.org are corrupted (at worst), or whether there are other things to build that are referenced elsewhere?
↧
Selecting various part of an image as forground using grabcut
I want to be able to select various parts of an image as my foreground (probably with a rectangle) and color the background with any color of my choice. How do I go about this?
↧
About python version of ridge detection
After installing OpenCV, I tried to use the ridge detection function https://docs.opencv.org/3.4.1/d4/d36/classcv_1_1ximgproc_1_1RidgeDetectionFilter.html#details
It seems there is no attribute ximgproc in the Python module cv2.
I tried other functions in the tutorial and there is no such problem.
Can you help me with that?
↧
Equivalent OpenCV Java Code to this C++ Code
**Can anybody tell me the correct Java code for this C++ snippet?**
output.at(x, y) = target.at(dx, dy);
I have tried this Java code; it displaces the pixels but does not show the image clearly:
output.put(x, y, target.get(dx, dy));
↧
Create a new frame that contains common images from two other frames?

For example, I have two frames of a calendar image for January 2018. Frame 1 has all the days except the SUN column. Frame 2 is missing the month header and the Saturday column.
Is there a method or other way of determining the common content of both Frame 1 and Frame 2, so that a new frame, Frame 3, can be created? Frame 3 would show the columns MON through FRI, days 1 to 26, as shared by both Frame 1 and Frame 2.
I intend to use what I learn to match the video frames from dual/stereo cameras that I have set up with two Raspberry Pi.
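As a toy sketch of the idea (NumPy only, and assuming the two frames are already pixel-aligned, which real stereo frames are not until you register them, e.g. with feature matching or `cv2.matchTemplate`): the shared content is simply where the two frames agree:

```python
import numpy as np

# Two synthetic grayscale "frames", each missing a different vertical band
f1 = np.zeros((40, 40), np.uint8)
f1[:, 5:] = 200   # frame 1 is missing the leftmost columns
f2 = np.zeros((40, 40), np.uint8)
f2[:, :35] = 200  # frame 2 is missing the rightmost columns

# Frame 3: keep pixels only where both frames agree, zero elsewhere
common = np.where(f1 == f2, f1, 0)
```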
↧
medianBlur from imgproc not working with python bindings?
Hi there,
I don't really know if this is the right place to ask, since I am using a Windows 64-bit beta version of the OpenCV Python bindings, but here we go:
When using a numpy array as source I get a type error:
----> 1 dest = cv2.medianBlur(src,3)
TypeError: src is not a numpy array
When using a matrix mat created via cv.CreateMat, I get the following weird error:
----> 1 dest = cv2.medianBlur(src,3)
error: ..\..\..\OpenCV-2.4.3\modules\imgproc\src\smooth.cpp:1679: error: (-210)
For testing purposes I tried cv2.Laplacian, which works just fine with two matrices created via cv.CreateMat as source and destination.
If anyone could point me in a direction where to search for the error, I would be glad!
Environment data:
Windows 7 64bit,
Python 2.7,
OpenCV 2.4.3
↧
Finding the max and min x,y location of a mask
After performing the watershed algorithm, it returned a marker like this (not the rectangle).
Now I want to find the max/min coordinates of the object boundary (not the rectangle).
Is there any method capable of doing this without slowing down the system too much?
The objective is to resize the rectangle dynamically to improve tracking.
This is real-time object tracking plus object segmentation, so performance matters.
Any help is greatly appreciated!

↧