Good day,
With the help of Google and knowledgeable people on this forum, I have managed to slap together a crude laser pointer detection script (thank you!). The code I used can be found [here](https://github.com/bradmontgomery/python-laser-tracker).
However, I currently have a 3000 mW red laser whose dot the script picks up easily out to a maximum of about 2 meters (6 feet). Beyond that I can still see the dot in the image (its diameter is smaller), but the script does not pick it up. I used an image manipulation program to inspect the pixels; at distances greater than 2 meters the pixels are a brownish colour (#5d4c3f and #4a4031).
Either I have to hack the code, or alternatively I'll need to get a laser bright enough that its dot can be picked up at up to at least 10 meters. However, the only laser pointers I can get my hands on are 3000 mW.
Any suggestions?
The first pic is one meter away.

You can see the laser dot against the wall near the center of the pic. That is about 4 meters away.

Here is a pic of the green laser pointer:

↧
running my cv2 dnn model with cv2.dnn.DNN_TARGET_MYRIAD on MYRIAD cores
##### System information (version)
- OpenCV => 4.3.0-openvino
- Operating System / Platform => ubuntu 18.04
- Compiler => python 3.6.9
##### Detailed description
I'm getting the following error:
```
Traceback (most recent call last):
  File "main.py", line 59, in <module>
    safe_people_count, close_people_count = AnalysisObj.SDAProcess(img, threshold=0.3, nms_threshold=0.8, crop=args.crop)
  File "/home/nvr/microsoft/social_distancing_detection.py", line 53, in SDAProcess
    layerOutputs = self.net.forward(self.ln)
cv2.error: OpenCV(4.3.0-openvino) /opt/intel/openvino_2020.3.194/opencv/modules/dnn/src/dnn.cpp:1138: error: (-213:The function/feature is not implemented) Unknown backend identifier in function 'wrapMat'
```
I'm using DNN_TARGET_MYRIAD with DNN_BACKEND_INFERENCE_ENGINE. The code works using cv2.dnn.DNN_TARGET_CPU but not with MYRIAD.
Any idea what the issue could be?
Worth mentioning: it worked a couple of days ago, and I'm not sure how this problem suddenly appeared.
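For context, a minimal sketch of the backend/target setup in question (the model paths are placeholders, not the actual network):
```
import cv2

# Placeholder OpenVINO IR model files; the poster's actual network differs.
net = cv2.dnn.readNet("model.xml", "model.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
```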
↧
Contribute to documentation tutorial that belongs to a book
Hello everyone,
I would like to contribute to the tutorials included in the documentation. To be precise, I would like to improve/simplify the Python code for the [erosion and dilation](https://docs.opencv.org/master/db/df6/tutorial_erosion_dilatation.html) tutorial and add separate explanations for Java and Python. However, at the top of the tutorial there is a note mentioning that the explanation belongs to the book "Learning OpenCV".
Is it possible to contribute to the documentation and change the note to something like "The explanation is a modified / extended version of the book **Learning OpenCV** by Bradski and Kaehler"? Or are there any claims that prevent me from doing so?
↧
why invert pose
I have a pose output from solvePnP(); that part is not a problem. Many sources suggest taking the inverse of the pose. I have a contextual blind spot here: can someone tell me why I should use the inverted pose? In common-sense language, how is the inverted coordinate space more accurate or more appropriate?
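For reference, a minimal sketch of the inversion in question: solvePnP() returns the transform that maps world points into the camera frame, so inverting it yields the camera's pose in the world frame, which is what you want when asking "where is the camera?" rather than "where is the object?" (rvec/tvec below are dummy stand-ins):
```
import cv2
import numpy as np

# Dummy values standing in for a solvePnP result; solvePnP's rvec/tvec map
# world points into the camera frame: x_cam = R @ x_world + t.
rvec = np.array([[0.1], [0.2], [0.3]])
tvec = np.array([[0.5], [0.0], [2.0]])

R, _ = cv2.Rodrigues(rvec)
R_inv = R.T                # a rotation matrix's inverse is its transpose
t_inv = -R_inv @ tvec      # the camera's position in world coordinates
```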
↧
Unable to access a logitech 270 webcam through openCV
I'm running Ubuntu 18.04, OpenCV 2.
Upon trying to use the regular VideoCapture(0), an error pops up as follows:
ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Upon running `v4l2-ctl -d /dev/video0 --all`, I got the following:
```
Driver Info (not using libv4l2):
    Driver name      : uvcvideo
    Card type        : UVC Camera (046d:0825)
    Bus info         : usb-0000:00:0c.0-2
    Driver version   : 5.3.18
    Capabilities     : 0x84A00001
        Video Capture
        Metadata Capture
        Streaming
        Extended Pix Format
        Device Capabilities
    Device Caps      : 0x04200001
        Video Capture
        Streaming
        Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
    Width/Height      : 800/600
    Pixel Format      : 'YUYV'
    Field             : None
    Bytes per Line    : 1600
    Size Image        : 960000
    Colorspace        : sRGB
    Transfer Function : Default (maps to sRGB)
    YCbCr/HSV Encoding: Default (maps to ITU-R 601)
    Quantization      : Default (maps to Limited Range)
    Flags             :
Crop Capability Video Capture:
    Bounds      : Left 0, Top 0, Width 800, Height 600
    Default     : Left 0, Top 0, Width 800, Height 600
    Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 800, Height 600
Selection: crop_bounds, Left 0, Top 0, Width 800, Height 600
Streaming Parameters Video Capture:
    Capabilities     : timeperframe
    Frames per second: 20.000 (20/1)
    Read buffers     : 0
```
I believe the camera input is in the YUYV format. Is there any way I can access the input in this format, or will I have to convert it to another format to do so? Any suggestions at all would be very helpful.
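For future readers, a minimal sketch of grabbing the raw YUYV buffer and converting it manually; this assumes a reasonably recent OpenCV (the two-argument VideoCapture constructor needs 3.4+), and the raw frame may need a reshape depending on the version:
```
import cv2

# Force the V4L2 backend explicitly.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)

# Ask the driver for YUYV and turn off OpenCV's automatic BGR conversion,
# so we receive the raw buffer and convert it ourselves.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)

ret, raw = cap.read()
if ret:
    # Depending on the version, `raw` may need reshaping to (h, w, 2) first.
    bgr = cv2.cvtColor(raw, cv2.COLOR_YUV2BGR_YUY2)
```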
↧
Comparing old photos and finding the same people
I have boxes of old photos (1890s through 1940s), and I want to be able to do 2 things:
1. Compare a single photo with known people in it, and find all of the photos in the larger collection with the same people in them.
2. Second, and more specifically, I have 2 photos of my Dad's bomber crew from 1944. I know the names of the people in 1 photo, but can only identify half the people in the second (the two photos show the same crew). I want to compare the 2 photos, have the software match the 5 men I cannot identify, and let me identify the 1 crew member who is not in the second photo. The photos are scanned at 600 dpi from smaller originals.
I should note I am looking for either an installed PC program or a web service, not an API for my own code.
Thanks
↧
OpenCV Headless
To start, I have a Jetson Nano running Ubuntu 18.04.
I am trying to install OpenCV for albumentations. To install albumentations and imgaug, you need opencv-python-headless and opencv-contrib-python.
I followed the steps in this link: https://forums.developer.nvidia.com/t/imgaug-on-jetson-nano/79415. It returns errors.
I think the problem is somewhere along the lines of "Note that the wheel (especially manylinux) format does not currently support properly ARM architecture so there are no packages for ARM based platforms in PyPI. However, opencv-python packages for Raspberry Pi can be found from https://www.piwheels.org/."
How should I install opencv-python-headless? How should I install albumentations? By the way, I installed OpenCV and it works; however, I cannot install albumentations, because the OpenCV that comes with the Jetson Nano does not include opencv-python-headless.
↧
OpenCV Classification Neural Network + Image Flattening Question
There's no simple way to explain this, so I will get to the point.
For a neural network, would it be better for me to flatten a color image down to a value between 0 and 1 of the form 0.RRRGGGBBBAAA (so pixel 0 being .255000000255 would mean a red pixel at position 0 with full alpha), or to convert to HSV, threshold it, then greyscale it to a grey value between 0 and 1?
Some background: I build what I call PxlDbls ("Pixel Double" spelled out), and it's exactly what it sounds like: a pixel's colour packed into a single double between 0 and a maximum of .255255255255. Why a double? A float doesn't have the precision I need for this. I currently have a custom neural network with many activation functions available per topological layer; the one I am using for images is the well-known sigmoid, which takes a number between 0 and 1. My end goal, and the question, is how best to flatten an image to feed into this network. Should I take the image size and multiply it by 4, so there is a neuron per colour channel per pixel? I could map each 0-255 value to 0-1, which would effectively give me what I'm looking for, but a 600x400 image x 4 channels is about 960,000 input neurons, which in my case alone is truly unattainable no matter how well I program the network. With the PxlDbl encoding, the neuron count shrinks to just the image size, 600x400 = 240,000 neurons, which is much more manageable in terms of weights and such. My only concern is that with inputs this precise, the outputs (since they're classifications) might be much harder to train; e.g., for a value like .255128076255, training might effectively drop the trailing 128076255 because the network can't work with numbers that precise.
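For illustration, a hypothetical sketch of the packing described above (three decimal digits per 8-bit channel, R/G/B/A in order); this is my reading of the scheme, not the poster's actual implementation:
```
# Hypothetical "PxlDbl" packing: each 8-bit channel occupies three decimal
# digits after the point, in R, G, B, A order.
def pack_pxldbl(r, g, b, a):
    return r * 1e-3 + g * 1e-6 + b * 1e-9 + a * 1e-12

def unpack_pxldbl(v):
    digits = f"{v:.12f}".split(".")[1]           # twelve decimal digits
    return tuple(int(digits[i:i + 3]) for i in range(0, 12, 3))

print(pack_pxldbl(255, 0, 0, 255))    # 0.255000000255: red pixel, full alpha
print(unpack_pxldbl(0.255000000255))  # (255, 0, 0, 255)
```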
Help on this would be greatly appreciated...
↧
extract text from image using opencv with Java
I'm trying to use OpenCV with Java. I am not able to extract text from the image; I'm getting invalid text. Could you please help me with this?
image: [C:\fakepath\PASSPORT-crop.jpg](/upfiles/15946212129920768.jpg)
image 1: [C:\fakepath\ppassport1-crop.png](/upfiles/15946212264261477.png)
Code:
System.load("C:/DIGITAL_HOME/ocr/opencv_java420.dll");
BufferedImage bufferedImage1 = ImageIO.read(new File("D:/OCR Images/black/PP11.jpg"));
try {
    tesseract.setLanguage("eng");
    // Tess4J: getWords(image, pageIteratorLevel); level 2 = RIL_TEXTLINE.
    // The raw List wouldn't compile against Word below, so the generic type is added.
    List<Word> words = tesseract.getWords(bufferedImage1, 2);
    for (int i = 0; i < words.size(); i++) {
        Word word = words.get(i);
        if (null != word.getText()) {
            docText.put("lineno:" + i, word.getText());
            System.out.println("lineno:" + i + "----" + word.getText());
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
Output:
lineno:0----P
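For future readers, a minimal preprocessing sketch of the kind that usually helps Tesseract, in Python for brevity (equivalent calls exist in the Java bindings); the filename and parameter values are assumptions to tune:
```
import cv2

# Upscale and binarize before OCR: Tesseract tends to do much better on
# large, high-contrast black-on-white text than on raw photo crops.
img = cv2.imread("PASSPORT-crop.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("preprocessed.jpg", img)
```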
↧
OpenCV simple showimage command takes 70ms on RPI4
Dear OpenCV-Fellows,
I am using OpenCV on an RPi4 (4 GB) for realtime person detection, so frame rate is quite crucial. My whole algorithm takes 70 ms, and then on top of that the showimage call takes another 70 ms, which basically halves the framerate.
Can you think of a way to speed up the imshow? Currently I am using a 1920x1080 PNG of around 1 MB, and I only update a few numbers on the image each cycle.
def showimage(img):
    to_show = img.copy()
    cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    cv2.putText(to_show, str(allowed), (180, 660), cv2.FONT_HERSHEY_SIMPLEX, 10, (255, 255, 255), 15)
    cv2.putText(to_show, str(people_inside), (180, 1020), cv2.FONT_HERSHEY_SIMPLEX, 10, (255, 255, 255), 15)
    cv2.imshow("window", to_show)
    cv2.waitKey(1)
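For future readers, a minimal sketch of one common speed-up: create the fullscreen window once, outside the per-frame path, so each frame only pays for the text overlay and the blit (this assumes `allowed` and `people_inside` are passed in rather than global):
```
import cv2

# Window creation and the fullscreen property are set once; recreating them
# every frame adds avoidable HighGUI work on a Pi.
cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

def showimage(img, allowed, people_inside):
    to_show = img.copy()
    cv2.putText(to_show, str(allowed), (180, 660),
                cv2.FONT_HERSHEY_SIMPLEX, 10, (255, 255, 255), 15)
    cv2.putText(to_show, str(people_inside), (180, 1020),
                cv2.FONT_HERSHEY_SIMPLEX, 10, (255, 255, 255), 15)
    cv2.imshow("window", to_show)
    cv2.waitKey(1)
```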
↧
compareHist & CUDA
Hello.
Can the compareHist function be parallelized with CUDA somehow, or do I have to implement the comparison algorithm myself? I'm not asking for a full solution, just for someone to point me in the right direction, since I have absolutely no experience with OpenCV yet. Thanks!
↧
android opencv
I want to use OpenCV for an OMR (bubble) sheet. I don't have a fixed number of questions or columns in my OMR sheet, so I am trying to detect rows and columns (I also need to detect the title of each column), and then I can move on to filled-circle detection. I get a crash on the line Imgproc.boundingRect(contours[i]). Also, I checked the intermediate results: I do get the row and column images, though they're not perfect.
P.S. I am very new to OpenCV and my approach may be incorrect; I would be thankful for any advice. A cleaned-up version of the code follows (the formerly crashing line is marked):
fun showAllBorders(paramView: Bitmap?): Bitmap {
    val localMat1 = Mat()
    val scale = 25.0
    Utils.bitmapToMat(paramView, localMat1)
    val grayMat = Mat()
    val thresMat = Mat()
    val imgSource: Mat = localMat1.clone()
    Imgproc.cvtColor(imgSource, grayMat, Imgproc.COLOR_RGB2GRAY)
    Imgproc.adaptiveThreshold(grayMat, thresMat, 255.0, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, -2.0)
    // Extract horizontal lines with a wide, flat structuring element...
    val horiMat = thresMat.clone()
    val horizontalSize = horiMat.cols().toDouble() / scale
    val horizontalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, Size(horizontalSize, 1.0))
    Imgproc.erode(horiMat, horiMat, horizontalStructure, Point(-1.0, -1.0), 1)
    Imgproc.dilate(horiMat, horiMat, horizontalStructure, Point(-1.0, -1.0), 1)
    // ...and vertical lines with a tall, thin one (restoring the "/ scale"
    // that appeared commented out in the original).
    val vertMat = thresMat.clone()
    val verticalSize = vertMat.rows().toDouble() / scale
    val verticalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, Size(1.0, verticalSize))
    Imgproc.erode(vertMat, vertMat, verticalStructure, Point(-1.0, -1.0), 1)
    Imgproc.dilate(vertMat, vertMat, verticalStructure, Point(-1.0, -1.0), 4)
    val resultMat = Mat()
    Core.add(horiMat, vertMat, resultMat)          // grid = rows + columns
    val jointsMat = Mat()
    Core.bitwise_and(horiMat, vertMat, jointsMat)  // joints = row/column crossings
    val contours: MutableList<MatOfPoint> = ArrayList()
    val hierarchy = Mat()
    val rois = mutableListOf<Mat>()
    val bmpList = mutableListOf<Bitmap>()
    Imgproc.findContours(resultMat, contours, hierarchy, Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE)
    for (i in contours.indices) {
        if (Imgproc.contourArea(contours[i]) < 100) {
            // The crash: boundRect was an empty mutableListOf(), so writing to
            // boundRect[i] threw IndexOutOfBoundsException. A plain Rect works:
            val boundRect: Rect = Imgproc.boundingRect(contours[i])
            val roi = Mat(jointsMat, boundRect)
            val jointsContours: MutableList<MatOfPoint> = ArrayList()
            Imgproc.findContours(roi, jointsContours, Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE)
            if (jointsContours.size >= 4) {
                rois.add(Mat(jointsMat, boundRect))
                // localMat1 is already RGBA from bitmapToMat, so the original
                // COLOR_GRAY2RGBA conversion here was unnecessary and removed.
                Imgproc.drawContours(localMat1, contours, i, Scalar(0.0, 0.0, 255.0), 6)
                Imgproc.rectangle(localMat1, boundRect.tl(), boundRect.br(), Scalar(0.0, 255.0, 0.0), 1, 8, 0)
            }
        }
    }
    for (roiMat in rois) {
        val roiBmp = Bitmap.createBitmap(roiMat.cols(), roiMat.rows(), Bitmap.Config.ARGB_8888)
        Utils.matToBitmap(roiMat, roiBmp)
        bmpList.add(roiBmp)
    }
    // To inspect the detected rows/columns instead, build the bitmap from resultMat.
    val analyzed = Bitmap.createBitmap(jointsMat.cols(), jointsMat.rows(), Bitmap.Config.ARGB_8888)
    Utils.matToBitmap(jointsMat, analyzed)
    return analyzed
}
↧
android opencv 4.3.0 Assertion failed (buf.checkVector(1, CV_8U) > 0)
I am taking a picture with the Android phone's camera and want to do some processing on it.
override fun onPictureTaken(p0: ByteArray?, p1: Camera?) {
    Log.i(TAG, "on picture taken")
    Observable.just(p0)
        .subscribeOn(proxySchedule)
        .subscribe {
            val pictureSize = p1?.parameters?.pictureSize
            Log.i(TAG, "picture size: " + pictureSize.toString())
            Log.i(TAG, "on picture taken")
            Log.i(TAG, "picture size: w" + pictureSize?.width.toString())
            Log.i(TAG, "picture size: h" + pictureSize?.height.toString())
            val mat = Mat(Size(pictureSize?.width?.toDouble() ?: 1920.0,
                    pictureSize?.height?.toDouble() ?: 1080.0), CvType.CV_8U)
            Log.i(TAG, "on picture taken mat 1" + mat.toString())
            Log.i(TAG, "on picture taken p0" + p0?.size.toString()) // not null
            mat.put(0, 0, p0)
            Log.i(TAG, "on picture taken mat 2" + mat.toString()) // not null
            val pic = Imgcodecs.imdecode(mat, Imgcodecs.IMREAD_UNCHANGED) // error here
            Core.rotate(pic, pic, Core.ROTATE_90_CLOCKWISE)
            mat.release()
            SourceManager.corners = processPicture(pic)
            Imgproc.cvtColor(pic, pic, Imgproc.COLOR_RGB2GRAY)
            SourceManager.pic = pic
            context.startActivity(Intent(context, CropActivity::class.java))
            busy = false
        }
}
Below is the error
E/cv::error(): OpenCV(4.3.0) Error: Assertion failed (buf.checkVector(1, CV_8U) > 0) in imdecode_, file /build/master_pack-android/opencv/modules/imgcodecs/src/loadsave.cpp, line 755
E/org.opencv.imgcodecs: imgcodecs::imdecode_10() caught cv::Exception: OpenCV(4.3.0) /build/master_pack-android/opencv/modules/imgcodecs/src/loadsave.cpp:755: error: (-215:Assertion failed) buf.checkVector(1, CV_8U) > 0 in function 'imdecode_'
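For future readers: the assertion says imdecode expects the encoded file bytes as a flat 1xN CV_8U buffer, not a width-by-height Mat. A minimal sketch of the expected shape, in Python for brevity (the filename is a placeholder):
```
import cv2
import numpy as np

# imdecode takes the *encoded* bytes (e.g. a JPEG) as a flat uint8 buffer;
# in the Java/Kotlin bindings, MatOfByte plays the same role.
with open("photo.jpg", "rb") as f:  # hypothetical camera JPEG
    buf = np.frombuffer(f.read(), dtype=np.uint8)
img = cv2.imdecode(buf, cv2.IMREAD_UNCHANGED)
```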
↧
CUDA acceleration on opencv is too slow
I'm using CUDA acceleration with OpenCV 4.3. After enabling it, I spend 4 seconds processing each frame, and then the following message appears:
cv::dnn::dnn4_v20200310::Net::Impl::initCUDABackend CUDA backend will fallback to the CPU implementation for the layer "_input" of type __NetInputLayer__
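For context, the warning says the named layer (here the net input) falls back to the CPU implementation. A minimal sketch of the CUDA backend selection, assuming OpenCV was built with OPENCV_DNN_CUDA=ON and with placeholder model files:
```
import cv2

net = cv2.dnn.readNet("model.weights", "model.cfg")  # placeholder files
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)   # needs a CUDA-enabled build
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
```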
↧
smooth edges in binary images

I tried Gaussian blur and the erode/dilate approach, but they are not suitable for the type of image shown in the sample.
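For future readers, one minimal alternative sketch: blur the mask and re-threshold, which rounds off staircase edges without shrinking the shape the way erode/dilate can (kernel size and threshold are assumptions to tune):
```
import cv2

mask = cv2.imread("binary_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Soften the hard edges, then snap back to a clean binary image.
blurred = cv2.GaussianBlur(mask, (9, 9), 0)
_, smoothed = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
```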
↧
import cv2 error on windows: missing configuration files
OpenCV => master branch (7/10/20)
Operating System / Platform => Windows 64 Bit Win10
Compiler => Visual Studio 2019
**problem:**
I successfully configured OpenCV with the latest CMake and then built it in VS. My PC recognizes Python but does not recognize cv2. I installed both OpenCV and Python on the C:\ drive. I tried adding these paths to my user environment variables, with no luck. All my Python paths in CMake are also correct. What could be the problem here?
**error:**
C:\>python
Python 3.7.7 (tags/v3.7.7:d7c567b08f, Mar 10 2020, 10:41:24) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python37\cv2\__init__.py", line 96, in <module>
    bootstrap()
  File "C:\Python37\cv2\__init__.py", line 62, in bootstrap
    ], True)
  File "C:\Python37\cv2\__init__.py", line 56, in load_first_config
    raise ImportError('OpenCV loader: missing configuration file: {}. Check OpenCV installation.'.format(fnames))
ImportError: OpenCV loader: missing configuration file: ['config-3.7.py', 'config-3.py']. Check OpenCV installation.
My paths:
PYTHONPATH:
C:\Python37\python.exe
C:\Python37\include
C:\Python37\libs
C:\Python37\lib
C:\Python37\site-packages
C:\Python37\Lib\site-packages\numpy\core\include
C:\Python37\Lib\site-packages\cv2\python-3.7
C:\Python37\Lib\site-packages\cv2

↧
samples.findFile for loading photos
Hello,
I've downloaded OpenCV for Python, am using it in Microsoft Visual Studio, and am trying to follow the tutorials. Often they fetch a picture:
src2 = cv.imread(cv.samples.findFile('LinuxLogo.jpg'))
The program runs, but gives an error:
Message=OpenCV(4.1.1) C:\projects\opencv-python\opencv\modules\core\src\utils\samples.cpp:62: error: (-2:Unspecified error) OpenCV samples: Can't find required data file: LinuxLogo.jpg in function 'cv::samples::findFile'
Source=C:\Users\tiern\source\repos\OpenCV-Dilation\OpenCV_Dilation.py
StackTrace:
  File "C:\Users\tiern\source\repos\OpenCV-Dilation\OpenCV_Dilation.py", line 48, in <module>
    src = cv.imread(cv.samples.findFile(args.input))
If you could explain how I could load in pictures, I would greatly appreciate it.
Thank you!
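For future readers, a minimal sketch of one workaround: point the samples loader at a local copy of opencv/samples/data (the path below is a placeholder), or simply pass a full path to cv.imread:
```
import cv2 as cv

# Hypothetical local path to the folder containing the tutorial images,
# e.g. from a clone of the opencv repo's samples/data directory.
cv.samples.addSamplesDataSearchPath(r"C:\path\to\opencv\samples\data")
src2 = cv.imread(cv.samples.findFile('LinuxLogo.jpg'))
```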
↧
Issue Building OpenCV 4.3.0 in Docker
Hi, I am trying to build OpenCV in Docker (ibmcom/tensorflow-ppc64le:latest-gpu-py3-jupyter), but I ran into this problem:
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_nppc_LIBRARY (ADVANCED)
The CUDA version used is:
root@70d2e4e71041:/tf/root# nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:52_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
root@70d2e4e71041:/tf/root# apt-cache policy libcudnn7
libcudnn7:
  Installed: 7.6.4.38-1+cuda10.1
  Candidate: 7.6.5.32-1+cuda10.2
  Version table:
     7.6.5.32-1+cuda10.2 500
        500 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/ppc64el Packages
     7.6.5.32-1+cuda10.1 500
        500 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/ppc64el Packages
     7.6.5.32-1+cuda10.0 500
        500 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/ppc64el Packages
 *** 7.6.4.38-1+cuda10.1 500
        500 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/ppc64el Packages
        100 /var/lib/dpkg/status
The cmake command used:
RUN cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D WITH_CUDA=ON \
-D WITH_CUDNN=ON \
-D CUDA_cublas_LIBRARY=/usr/lib/powerpc64le-linux-gnu \
-D CUDA_cufft_LIBRARY=/usr/lib/powerpc64le-linux-gnu \
-D OPENCV_DNN_CUDA=ON \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D CUDA_ARCH_BIN=7.0 \
-D WITH_CUBLAS=1 \
-D OPENCV_EXTRA_MODULES_PATH=/opencv_gpu_files/opencv_contrib/modules \
-D HAVE_opencv_python3=ON \
-D PYTHON_EXECUTABLE=/usr/bin/python3 \
-D BUILD_EXAMPLES=ON ..
Does anyone know what I did wrong and what I should do to fix this? Thank you.
↧
Issue Reading Local Video File
Hi,
I have OpenCV installed, but when I do the following:
import cv2
vs = cv2.VideoCapture('./video1.mp4')
ret, frame = vs.read()
vs.isOpened() returns True, but ret always returns False and frame is None.
What could be the possible reason for this?
Thank you.
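For future readers, a minimal sketch of one thing to try: when isOpened() is True but read() fails, the file is often being opened by a backend that lacks the needed codec, so explicitly forcing FFmpeg (the two-argument constructor needs OpenCV 3.4+) is worth a test:
```
import cv2

vs = cv2.VideoCapture('./video1.mp4', cv2.CAP_FFMPEG)
# A frame count of 0 here usually points at a decode/codec problem rather
# than a missing file.
print(vs.get(cv2.CAP_PROP_FRAME_COUNT))
```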
↧
edge detection in garden
How can I filter this image with cv2 so that I have just the grass in my garden? Basically, I need to separate it where the red line is.
[garden image](https://i.stack.imgur.com/WDmE8.jpg)
I've tried code that I found in a driving-lane detection system, but unfortunately it doesn't work well for grass.
```
gray = cv2.cvtColor(gras_image, cv2.COLOR_RGB2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
canny = cv2.Canny(blur, 50, 200)
```
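For future readers, a minimal sketch of a colour-based alternative: grass tends to separate better by hue than by edges. The HSV bounds are assumptions to tune, and the filename is a placeholder:
```
import cv2
import numpy as np

img = cv2.imread("garden.jpg")  # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only greenish hues; roughly H 35-85 on OpenCV's 0-179 hue scale.
lower = np.array([35, 40, 40])
upper = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
grass_only = cv2.bitwise_and(img, img, mask=mask)
```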
↧