Capture Video from Camera using cv2.VideoCapture(0) not working
Hi,
I am trying to execute the following code, which captures video from the webcam on my laptop. I am using Python 3 with OpenCV 4 on Windows 7.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
While executing the code the camera opens, but the display window shows a still, blurred, repeated gray image instead of the live video. I am using the IDLE Python IDE to run the code, and it reports a VideoCodec_RGB24 error. To cross-check whether I had installed Python and OpenCV properly, I used code that reads an mp4 file via
cap = cv2.VideoCapture('video1.mp4').
The program is able to read from the mp4 file and it displays the video too. Please advise how to fix the problem.
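One thing worth trying, as a sketch rather than a definitive fix: on Windows, passing an explicit capture backend to VideoCapture (DirectShow here, via cv2.CAP_DSHOW) often avoids decoder problems like this, and guarding against failed reads prevents cvtColor from being fed an empty frame.

import cv2

# Sketch: force the DirectShow backend (available in OpenCV 3.4+/4.x);
# cv2.CAP_MSMF is another backend option on Windows.
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)

while True:
    ret, frame = cap.read()
    if not ret:
        break  # stop on decode failure instead of passing an empty frame on
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()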
↧
↧
Will pay for libopencv_java.so v2.4.9 - arm64-v8a
I am in urgent need of libopencv_java.so and, if possible, libopencv_info.so for arm64-v8a, v2.4.9 or 2.4.13.
I am willing to pay $30 if you have these files.
Thanks
↧
↧
Is there a better approach to stitch images if they are linearly spaced
My goal is to 'scan' an object. To achieve this I hacked into my 3D printer and added some automation so that the printhead moves to grid positions that are linearly spaced along the x and y axes. The z position is a fixed height. My smartphone is attached to the printhead and takes a picture at each position.
All of this is done through Python, and after capturing the images I would like to stitch them into one picture in order to counteract distortion along the x and y axes.
I defined a 5x5 grid. So there were 25 coordinates where my printer would move to and take a picture.
first row of images: (images not included)
final result: (image not included)
At the moment I am using OpenCV's Stitcher like so:
stitcher = cv2.Stitcher_create(mode = 1)
(status, stitched) = stitcher.stitch(images)
But the result is not very good.
I was wondering if there is a better, perhaps simpler, way to stitch these images. The OpenCV Stitcher class tries to find features in order to locate overlaps between the pictures, but maybe this isn't necessary in my case because of the extra information I have: I know that the pictures are spaced linearly, and I know the amount of spacing.
Any suggestions?
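Since the camera motion is pure translation at a fixed height, one simple alternative is to skip feature matching entirely and paste each image at its known pixel offset. A minimal sketch, assuming the pixels-per-step scale has been calibrated; the step sizes and file names below are hypothetical:

import cv2
import numpy as np

# Hypothetical calibration: how many pixels one printhead step corresponds to.
step_x_px, step_y_px = 800, 600
grid_w, grid_h = 5, 5

images = [cv2.imread('grid_%d.png' % i) for i in range(grid_w * grid_h)]
img_h, img_w = images[0].shape[:2]

canvas = np.zeros((step_y_px * (grid_h - 1) + img_h,
                   step_x_px * (grid_w - 1) + img_w, 3), np.uint8)

for idx, img in enumerate(images):
    gx, gy = idx % grid_w, idx // grid_w
    x, y = gx * step_x_px, gy * step_y_px
    # Simple overwrite; overlapping regions could instead be blended
    # (e.g. feathered) to hide exposure differences between shots.
    canvas[y:y + img_h, x:x + img_w] = img

cv2.imwrite('composite.png', canvas)

The pixel offsets only need to be measured once (for example from one matched image pair), after which the compositing is deterministic and immune to feature-matching failures.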
↧
How to Change the videocapture property in opencv c++?
I have been trying to set the camera capture resolution to something different from the camera's default. However, I am not able to set it using
videoStream.set(cv::CAP_PROP_FRAME_WIDTH, frameWidth);
videoStream.set(cv::CAP_PROP_FRAME_HEIGHT, frameHeight);
Can someone help me out with the reasons for this?
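A first diagnostic step, sketched in Python for brevity (the C++ VideoCapture::set/get calls behave the same): check the return value of set() and read the properties back, because many drivers only support a fixed list of resolutions and silently fall back to the nearest supported mode.

import cv2

cap = cv2.VideoCapture(0)

# set() returns False if the backend rejects the property; drivers that only
# support fixed modes may also accept the call but snap to the nearest mode,
# so read the values back to see what is actually in effect.
ok_w = cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
ok_h = cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
print('set accepted:', ok_w, ok_h)
print('actual size:',
      cap.get(cv2.CAP_PROP_FRAME_WIDTH),
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()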
↧
ORB keypoints distribution over an image
Hello,
I'm working on stitching aerial images taken with a UAV. My approach works fine for some nadir datasets but fails for others. I believe one of the reasons is that, for some images, most of the keypoints found by ORB are concentrated in certain parts of the image rather than spread over the entire image. How can I achieve a more uniform distribution of keypoints using ORB?
Now I use the following parameters:
Ptr<ORB> orb = ORB::create(3000, 1.0, 1, 31, 0, 2, ORB::HARRIS_SCORE, 31, 20);
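One option, sketched in Python (the same tiling works with the C++ API): split the image into a grid of tiles, give each tile its own feature budget, and offset the detected keypoints back into full-image coordinates. This is essentially what the old GridAdaptedFeatureDetector did. The grid size, per-cell budget and file name below are assumptions to illustrate the idea.

import cv2

def detect_orb_gridded(gray, grid=(4, 4), per_cell=200):
    """Detect ORB keypoints per tile so no single region dominates."""
    h, w = gray.shape[:2]
    orb = cv2.ORB_create(nfeatures=per_cell)
    keypoints = []
    for gy in range(grid[1]):
        for gx in range(grid[0]):
            x0, y0 = gx * w // grid[0], gy * h // grid[1]
            x1, y1 = (gx + 1) * w // grid[0], (gy + 1) * h // grid[1]
            for kp in orb.detect(gray[y0:y1, x0:x1], None):
                kp.pt = (kp.pt[0] + x0, kp.pt[1] + y0)  # tile -> image coords
                keypoints.append(kp)
    # Descriptors are computed once on the full image for all keypoints.
    return cv2.ORB_create().compute(gray, keypoints)

gray = cv2.imread('aerial.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical file
kps, desc = detect_orb_gridded(gray)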
↧
↧
GridAdaptedFeatureDetector missing in OpenCV 3.0??
It seems that the GridAdaptedFeatureDetector and DenseFeatureDetector classes are no longer in OpenCV.
Why were they removed? I can't find any mention of their removal when I Google the issue.
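Both are straightforward to replace with a few lines of your own. A sketch of a DenseFeatureDetector substitute in Python (the C++ version is analogous), assuming a fixed sampling step and keypoint size; GridAdaptedFeatureDetector can likewise be emulated by running a detector per image tile with a per-tile feature budget.

import cv2

def dense_keypoints(image, step=8, size=8):
    """Emulate the removed DenseFeatureDetector: one keypoint per grid node."""
    h, w = image.shape[:2]
    return [cv2.KeyPoint(float(x), float(y), float(size))
            for y in range(step // 2, h, step)
            for x in range(step // 2, w, step)]

gray = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
kps = dense_keypoints(gray)
# Any descriptor extractor can then run on the fixed grid, e.g.:
kps, desc = cv2.ORB_create().compute(gray, kps)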
↧
Block until window closed
I would like to display a window and block program execution until the user decides to continue. This is normally done the following way:
Mat img1,img2;
//...process...
namedWindow("Image1",WINDOW_NORMAL);
imshow("Image1",img1);
waitKey();
destroyAllWindows();
namedWindow("Image2",WINDOW_NORMAL);
imshow("Image2",img2);
The only problem with this approach is if the user instinctively closes the window using the **X** button. Then, as no HighGUI windows are active, `waitKey()` will never return and the program blocks.
Is there any solution so that the program waits for a keypress *OR* for the window to be closed?
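One approach, sketched in Python (cv::getWindowProperty exists in C++ as well): poll `WND_PROP_VISIBLE` inside the wait loop, so that closing the window with the **X** button also ends the wait. The property drops below 1 once the window is gone.

import cv2

def wait_key_or_close(winname, delay=50):
    """Block until a key is pressed or the window is closed via X."""
    while True:
        key = cv2.waitKey(delay)
        if key >= 0:
            return key  # a key was pressed
        # WND_PROP_VISIBLE falls below 1 once the user closes the window.
        if cv2.getWindowProperty(winname, cv2.WND_PROP_VISIBLE) < 1:
            return -1   # window was closed

img1 = cv2.imread('img1.png')  # hypothetical file
cv2.namedWindow('Image1', cv2.WINDOW_NORMAL)
cv2.imshow('Image1', img1)
wait_key_or_close('Image1')
cv2.destroyAllWindows()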
↧
Apply getPerspectiveTransform and warpPerspective for bird-eye view (Python).
Hi, I'm following some tutorials to transform an image of a golf green with balls to a bird's-eye view, in order to measure distances in a later step.
When I apply the transformation to an image of some text on paper it seems to work, but when it is applied to the outdoor image the results are not as expected.
Here is an example with the outdoor image coords and dimensions:
# targeted rectangle on original image which needs to be transformed
tl = [689, 892]
tr = [2518, 892]
br = [2518, 2071]
bl = [689, 2071]
corner_points_array = np.float32([tl,tr,br,bl])
# original image dimensions
width = 4128
height = 2322
# Create an array with the parameters (the dimensions) required to build the matrix
imgTl = [0,0]
imgTr = [width,0]
imgBr = [width,height]
imgBl = [0,height]
img_params = np.float32([imgTl,imgTr,imgBr,imgBl])
# Compute and return the transformation matrix
matrix = cv2.getPerspectiveTransform(corner_points_array,img_params)
img_transformed = cv2.warpPerspective(image,matrix,(width,height))
And here are my results for the golf image:
## input

## output

As you can see I don't get a nice bird-eye view.
-------
This is the result I get with text and paper using the same script just with different coords and dimensions:
## Input

## output

So what am I doing wrong in the first golf example? Any help would be greatly appreciated.
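A likely cause, with a sketch below: getPerspectiveTransform only yields a true top-down view when the four source points are the image projections of a real rectangle lying on the ground plane. On the paper image your axis-aligned rectangle happens to coincide with the sheet, but in the golf image an axis-aligned rectangle in pixel coordinates does not correspond to any rectangle on the grass, so the warp cannot remove the perspective. All coordinates and the rectangle size below are hypothetical.

import cv2
import numpy as np

# Source points must be the image corners of a real rectangle on the grass
# (e.g. four markers placed a known distance apart), clicked in the photo.
src = np.float32([[712, 1050], [2480, 980], [2900, 1900], [390, 2010]])

rect_w_m, rect_h_m = 3.0, 2.0   # hypothetical real-world size of the rectangle
px_per_m = 300                  # chosen output scale
dst_w, dst_h = int(rect_w_m * px_per_m), int(rect_h_m * px_per_m)
dst = np.float32([[0, 0], [dst_w, 0], [dst_w, dst_h], [0, dst_h]])

image = cv2.imread('golf.jpg')  # hypothetical file
matrix = cv2.getPerspectiveTransform(src, dst)
top_down = cv2.warpPerspective(image, matrix, (dst_w, dst_h))
# Distances in top_down can now be read off directly: 1 px == 1/300 m.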
↧
Help updating 10 lines of deprecated OpenCV code in C++
I need help updating 10 deprecated (OpenCV 3.2.0) lines of code found in [https://github.com/yirgagithub/Cat-and-Dog-SVM-classifier](https://github.com/yirgagithub/Cat-and-Dog-SVM-classifier) (their last commit was on Mar 1, 2017) to work with OpenCV 4 on Ubuntu 20.04 LTS, because this is one of the few GitHub projects that uses C++ (NOT Python) to do image classification (in this case distinguishing pictures of dogs and cats). I am taking a [Udacity C++ online nanodegree course](https://www.udacity.com/course/c-plus-plus-nanodegree--nd213), and for their capstone project they suggest either making a computer game or creating an AI. I have decided to create an image-classifier AI starting from https://github.com/yirgagithub/Cat-and-Dog-SVM-classifier because that is the closest to what I might do when I get a job.
I understand that there are OpenCV courses ([https://opencv.org/courses/](https://opencv.org/courses/)) through which, if I spent weeks or months, I might figure this out on my own, but I have already taken several R Studio and Python (Spyder) AI online courses (which used TensorFlow etc., not OpenCV), so I do not want to spend time on more. I looked for C++ TensorFlow GitHub image-classifier projects and that avenue is not promising.
Below is my fork of [https://github.com/yirgagithub/Cat-and-Dog-SVM-classifier](https://github.com/yirgagithub/Cat-and-Dog-SVM-classifier). The fork has the settings.json and launch.json in the .vscode directory, and I updated some deprecated #include directives to their new locations in OpenCV 4. I ran cmake (below) and got stuck on errors from the 12 deprecated elements:
cmake ..
cmake --build . --config Release
My Fork: [https://github.com/ProfHariSeldon/CppND-Capstone-Hello-World](https://github.com/ProfHariSeldon/CppND-Capstone-Hello-World)
You can skip the install instructions and just look at the DEPRECATED section to see the code I need help with. I have confirmed that Visual Studio Code recognizes the OpenCV #include (I added the path "/home/tlroot/installation/OpenCV-master/include/opencv4/" to settings.json) and that cmake finds the OpenCV files too (I added the path "/home/tlroot/installation/OpenCV-master/" to CMakeLists.txt). I also don't really understand my CMakeLists.txt; this is my first time building one from scratch. I got it working, but if you have advice please let me know.
Install instructions
--------------------
How to upgrade to Ubuntu 20.04 LTS
sudo do-release-upgrade -d -f DistUpgradeViewGtk3
How to Install OpenCV 4 on Ubuntu
Instructions here: [https://www.learnopencv.com/install-opencv-4-on-ubuntu-18-04/](https://www.learnopencv.com/install-opencv-4-on-ubuntu-18-04/)
Download: [https://github.com/spmallick/learnopencv/blob/master/InstallScripts/installOpenCV-4-on-Ubuntu-18-04.sh](https://github.com/spmallick/learnopencv/blob/master/InstallScripts/installOpenCV-4-on-Ubuntu-18-04.sh)
Building from source in my home directory; I guess that's OK:
$ cd /home/tlroot
$ sudo chmod +x ./installOpenCV-4-on-Ubuntu-18-04.sh
$ sudo bash installOpenCV-4-on-Ubuntu-18-04.sh
$ cd /home/tlroot/installation
$ sudo ldconfig
$ cd /home/tlroot/C++/Capstone
$ git clone https://github.com/yirgagithub/Cat-and-Dog-SVM-classifier.git
The CMakeLists.txt file to use
------------------------------
# see CMake Lists.txt file in /home/tlroot/Documents/C++/OOP/Project/CppND-System-Monitor
# https://www.learnopencv.com/install-opencv-4-on-ubuntu-18-04/
cmake_minimum_required(VERSION 3.1)
# Enable C++11
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
SET(OpenCV_DIR /home/tlroot/installation/OpenCV-master/include/opencv4/)
# https://stackoverflow.com/questions/53528125/fatal-error-no-such-file-or-directory-when-im-sure-i-have-set-find-package-cor
find_package( OpenCV REQUIRED PATHS "/home/tlroot/installation/OpenCV-master/")
project(classifier)
file(GLOB SOURCES "dictionary/*.cpp")
add_executable(classifier ${SOURCES})
# https://docs.opencv.org/2.4/doc/tutorials/introduction/linux_gcc_cmake/linux_gcc_cmake.html
target_link_libraries( classifier ${OpenCV_LIBS} )
# add_executable( DisplayImage DisplayImage.cpp )
How to get to settings.json
---------------------------
GUI instructions:
Visual Studio Code -> File -> Preferences -> Settings -> User -> Extensions -> C/C++ -> Edit in settings.json
https://stackoverflow.com/questions/37522462/visual-studio-code-includepath
settings.json to use
--------------------
{
    "files.associations": {
        "iostream": "cpp"
    },
    "[cpp]": {
        "editor.defaultFormatter": "xaver.clang-format"
    },
    "C_Cpp.default.includePath": ["/home/tlroot/installation/OpenCV-master/include/opencv4/"]
}
How to build the project
------------------------
$ mkdir build && cd build
$ cmake ..
$ cmake --build . --config Release
How to get to launch.json
-------------------------
GUI instructions:
Visual Studio Code -> Run -> Open Configurations
launch.json to use
------------------
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "g++ build and debug active file",
            "type": "cppdbg",
            "request": "launch",
            "program": "${workspaceFolder}/build/classifier",
            "args": [],
            "stopAtEntry": true,
            "cwd": "${workspaceFolder}/build",
            "environment": [],
            "externalConsole": false,
            "MIMode": "gdb",
            "setupCommands": [
                {
                    "description": "Enable pretty-printing for gdb",
                    "text": "-enable-pretty-printing",
                    "ignoreFailures": true
                }
            ],
            "preLaunchTask": "g++ build active file",
            "miDebuggerPath": "/usr/bin/gdb"
        }
    ]
}
#include changes
---------------------
This change is based on: [https://stackoverflow.com/questions/27418668/nonfree-module-is-missing-in-opencv-3-0](https://stackoverflow.com/questions/27418668/nonfree-module-is-missing-in-opencv-3-0)
"#include < opencv2/nonfree/features2d.hpp >" to "#include < opencv2/xfeatures2d.hpp >"
This change is based on looking in: [https://github.com/opencv/opencv_contrib/tree/master/modules/xfeatures2d/include/opencv2](https://github.com/opencv/opencv_contrib/tree/master/modules/xfeatures2d/include/opencv2)
"#include < opencv2/nonfree/nonfree.hpp >" to "#include < opencv2/xfeatures2d/nonfree.hpp >"
DEPRECATED:
============
main.cpp:
---------
**26: cv::SiftFeatureDetector sift(300);**
300
no instance of constructor "cv::SIFT::SIFT" matches the argument list -- argument types are: (int)
PredictImage.cpp
----------------
**27: cv::Ptr siftFeatureDetector(new cv::SiftFeatureDetector(300));**
cv::SiftFeatureDetector
no instance of constructor "cv::SIFT::SIFT" matches the argument list -- argument types are: (int)
**49: cv::SVM svm;**
SVM
namespace "cv" has no member "SVM"
svm
expected a ';'
**50: svm.load("svmtrained.yml");**
svm.load
identifier "svm" is undefined
TrainSVM.cpp
------------
**33: cv::Ptr siftFeatureDetector(new cv::SiftFeatureDetector(300));**
cv::SiftFeatureDetector
no instance of constructor "cv::SIFT::SIFT" matches the argument list -- argument types are: (int)
**62: cv::SVMParams svmParam;**
SVMParams
namespace "cv" has no member "SVMParams"
**84: svmParam.svm_type=cv::SVM::C_SVC;**
SVM
name followed by '::' must be a class or namespace name
**85: svmParam.kernel_type=cv::SVM::LINEAR;**
SVM
name followed by '::' must be a class or namespace name
**89: cv::SVM svm;**
SVM
namespace "cv" has no member "SVM"
svm
expected a ';'
**90: bool trainSvm=svm.train(samples,labelsMat,cv::Mat(),cv::Mat(),svmParam);**
svm.train
identifier "svm" is undefined
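For reference, here is a sketch of what those constructs map to in the OpenCV 4 API. It is written in Python because the cv2 names directly mirror the C++ ones (cv::SIFT::create, cv::ml::SVM::create, cv::ml::SVM::load); SIFT is in the main module from OpenCV 4.4 on, while earlier 4.x builds need the contrib cv::xfeatures2d::SIFT. The training data below is placeholder, just to show the call shapes.

import cv2
import numpy as np

# cv::SiftFeatureDetector sift(300)  ->  cv::SIFT::create(300)
# (earlier 4.x with contrib: cv2.xfeatures2d.SIFT_create /
#  cv::xfeatures2d::SIFT::create)
sift = cv2.SIFT_create(nfeatures=300)

# cv::SVM / cv::SVMParams  ->  the cv::ml::SVM class with setter methods
svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)      # was: svmParam.svm_type = cv::SVM::C_SVC
svm.setKernel(cv2.ml.SVM_LINEAR)   # was: svmParam.kernel_type = cv::SVM::LINEAR

# Placeholder training data to show the train() call shape.
samples = np.random.rand(10, 5).astype(np.float32)
labels = np.random.randint(0, 2, (10, 1)).astype(np.int32)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)  # was: svm.train(..., svmParam)
svm.save('svmtrained.yml')

# svm.load("svmtrained.yml")  ->  a static factory function in OpenCV 4
svm2 = cv2.ml.SVM_load('svmtrained.yml')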
CMake warnings
-------------------
As I said, I don't really understand CMakeLists.txt; I got it working, but I did get many warnings about a STATIC rather than SHARED library:
CMake Warning (dev) at /home/tlroot/installation/OpenCV-master/lib/cmake/opencv4/OpenCVModules.cmake:435 (add_library):
ADD_LIBRARY called with SHARED option but the target platform does not
support dynamic linking. Building a STATIC library instead. This may lead
to problems.
Call Stack (most recent call first):
/home/tlroot/installation/OpenCV-master/lib/cmake/opencv4/OpenCVConfig.cmake:126 (include)
CMakeLists.txt:11 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /home/tlroot/installation/OpenCV-master/lib/cmake/opencv4/OpenCVModules.cmake:442 (add_library):
ADD_LIBRARY called with SHARED option but the target platform does not
support dynamic linking. Building a STATIC library instead. This may lead
to problems.
Call Stack (most recent call first):
/home/tlroot/installation/OpenCV-master/lib/cmake/opencv4/OpenCVConfig.cmake:126 (include)
CMakeLists.txt:11 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
↧
↧
Join floating white pixels to the nearest island without closing morph?
I am currently using morphological closing to close gaps and merge floating pixels into the nearest island. The problem is with tiny details that must remain untouched: closing alters the image and merges those tiny details as well. The ideal solution would be to detect the pixels that need joining and connect each one to its nearest island with a thin bridge, regardless of distance, or better, with a distance limit in pixels.
Is there any easy solution to this problem?
As an example, the following picture (black and white) shows the problem; the red circles mark what I want to merge. Currently I am able to detect those isolated pixels and I have their ROI rectangles, so can I do some kind of ROI-local join, or something that avoids performing a closing on the whole image?

The resulting image if I perform a closing with 2 iterations (note that the image is hugely zoomed in; the real spacing is very small):
(image not included)
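Since the ROI rectangles of the isolated pixels are already known, one option is to run the closing only inside those rectangles and leave the rest of the image untouched. A sketch, assuming `rois` is the list of (x, y, w, h) rectangles already detected:

import cv2
import numpy as np

def close_inside_rois(binary, rois, ksize=3, iterations=2, pad=10):
    """Apply morphological closing only inside padded ROI rectangles."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    out = binary.copy()
    h, w = binary.shape[:2]
    for (x, y, rw, rh) in rois:
        # Pad the ROI so the kernel can bridge to the neighbouring island.
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        x1, y1 = min(x + rw + pad, w), min(y + rh + pad, h)
        patch = binary[y0:y1, x0:x1]
        out[y0:y1, x0:x1] = cv2.morphologyEx(patch, cv2.MORPH_CLOSE, kernel,
                                             iterations=iterations)
    return out

The tiny details elsewhere stay exactly as they were, because pixels outside the padded rectangles are never processed; the `pad` value acts as the distance limit in pixels.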
↧
c++ dnn text detection cpp sample with https://github.com/MaybeShewill-CV/CRNN_Tensorflow
Hi, I am trying to run dnn/text_detection.cpp.
The detector is working fine, but the CRNN recognition model is not very accurate. I tried to train the recognition model https://github.com/meijieru/crnn.pytorch with my custom image set, but there was a warpctc installation error, so I checked another CRNN model, https://github.com/MaybeShewill-CV/CRNN_Tensorflow. With this one I can run inference in Python and the accuracy is also good, so I am trying to use it with the dnn/text_detection.cpp sample,
but I am getting the following error:
[ERROR:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2020.3\build\windows\opencv\modules\dnn\src\dnn.cpp (3272) cv::dnn::dnn4_v20200310::Net::Impl::getLayerShapesRecursively OPENCV/DNN: []:(_input): getMemoryShapes() throws exception. inputs=1 outputs=0/0 blobs=0
[ERROR:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2020.3\build\windows\opencv\modules\dnn\src\dnn.cpp (3275) cv::dnn::dnn4_v20200310::Net::Impl::getLayerShapesRecursively input[0] = [ 1 1 100 32 ]
[ERROR:0] global C:\jenkins\workspace\OpenCV\OpenVINO\2020.3\build\windows\opencv\modules\dnn\src\dnn.cpp (3285) cv::dnn::dnn4_v20200310::Net::Impl::getLayerShapesRecursively Exception message: OpenCV(4.3.0-openvino-2020.3.0) C:\jenkins\workspace\OpenCV\OpenVINO\2020.3\build\windows\opencv\modules\dnn\src\dnn.cpp:790: error: (-215:Assertion failed) inputs.size() == requiredOutputs in function 'cv::dnn::dnn4_v20200310::DataLayer::getMemoryShapes'
OpenCV: terminate handler is called! The last OpenCV error is:
OpenCV(4.3.0-openvino-2020.3.0) Error: Assertion failed (inputs.size() == requiredOutputs) in cv::dnn::dnn4_v20200310::DataLayer::getMemoryShapes, file C:\jenkins\workspace\OpenCV\OpenVINO\2020.3\build\windows\opencv\modules\dnn\src\dnn.cpp, line 790
Can anyone please have a look?
With the link below I generated a frozen_graph.pb, which I am using in dnn/text_detection.cpp:
https://docs.openvinotoolkit.org/2020.1/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_CRNN_From_Tensorflow.html
While creating the frozen graph I changed the layer name in step 3:
frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['shadow_net/sequence_rnn_module/stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/fw/while/Identity_2'])
Here are the model I created, the checkpoint files and the saved_model:
https://drive.google.com/drive/folders/1wgFcC3a5jMqcRFvKmj4XFAv_b7ATt9xV?usp=sharing
↧
CUDA version of fastNlMeansDenoising not matching CPU version?
I have the following code:
#include <iostream>
#include <string>
#include <opencv2/opencv.hpp>
#include <opencv2/photo/cuda.hpp>
using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    std::string str0 = "lenna";
    cv::Mat BW0mat = imread(str0 + ".bmp", cv::IMREAD_GRAYSCALE);
    cv::Mat NLMmat0;
    cv::fastNlMeansDenoising(BW0mat, NLMmat0, 11, 13, 33);

    cuda::GpuMat imageGPU;
    cuda::GpuMat resultGPU;
    Mat NLMgpu;
    imageGPU.upload(BW0mat);
    cuda::fastNlMeansDenoising(imageGPU, resultGPU, 11, 33, 13);
    resultGPU.download(NLMgpu);

    NLMmat0.convertTo(NLMmat0, CV_64FC1, 1.0 / 255.0);
    NLMgpu.convertTo(NLMgpu, CV_64FC1, 1.0 / 255.0);
    for (int i = 0; i < NLMmat0.rows; i++) {
        for (int j = 0; j < NLMmat0.cols; j++) {
            if (NLMmat0.at<double>(i, j) != NLMgpu.at<double>(i, j))
            {
                std::cout << "i: " << i << ", j: " << j
                          << ", cpu: " << NLMmat0.at<double>(i, j)
                          << ", gpu: " << NLMgpu.at<double>(i, j) << "\n";
            }
        }
    }
    return 0;
}
The last loop outputs differences between the values of the CPU and GPU versions of fastNlMeansDenoising. Why are these values not identical?
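Bit-exact equality is generally not guaranteed between two different floating-point implementations: the CPU and CUDA paths accumulate the weights in different order and precision, so small rounding differences are expected. A tolerance-based comparison is more meaningful; a sketch in Python (assuming a CUDA-enabled build; the C++ equivalent uses the same calls):

import cv2
import numpy as np

img = cv2.imread('lenna.bmp', cv2.IMREAD_GRAYSCALE)

cpu = cv2.fastNlMeansDenoising(img, None, h=11,
                               templateWindowSize=13, searchWindowSize=33)

gpu_in = cv2.cuda_GpuMat()
gpu_in.upload(img)
# Note the CUDA signature orders search_window before block_size.
gpu_out = cv2.cuda.fastNlMeansDenoising(gpu_in, 11,
                                        search_window=33, block_size=13)
gpu = gpu_out.download()

diff = cv2.absdiff(cpu, gpu)
print('max abs difference (gray levels):', int(diff.max()))
print('pixels differing by more than 1:', int(np.count_nonzero(diff > 1)))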
↧
global /io/opencv/modules/videoio/src/cap_v4l.cpp (880) open VIDEOIO(V4L2): can't find camera device
I keep getting this error when I run the code below in the Ubuntu 18.04 bash terminal.
Here is my code; the TensorFlow part works just fine, it's only the OpenCV part that gives me an error.
import cv2
from darkflow.net.build import TFNet
import numpy as np
import time

options = {
    'model': 'cfg/yolo.cfg',
    'load': 'bin/yolo.weights',
    'threshold': 0.2,
    'gpu': 1.0
}

tfnet = TFNet(options)
colors = [tuple(255 * np.random.rand(3)) for _ in range(10)]

capture = cv2.VideoCapture(0)
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    stime = time.time()
    ret, frame = capture.read()
    if ret:
        results = tfnet.return_predict(frame)
        for color, result in zip(colors, results):
            tl = (result['topleft']['x'], result['topleft']['y'])
            br = (result['bottomright']['x'], result['bottomright']['y'])
            label = result['label']
            confidence = result['confidence']
            text = '{}: {:.0f}%'.format(label, confidence * 100)
            frame = cv2.rectangle(frame, tl, br, color, 5)
            frame = cv2.putText(
                frame, text, tl, cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 0), 2)
        cv2.imshow('frame', frame)
        print('FPS {:.1f}'.format(1 / (time.time() - stime)))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

capture.release()
cv2.destroyAllWindows()
python: 3.6.7
ubuntu: 18.04
opencv: 4.3.0.36
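The V4L2 error means OpenCV cannot find a /dev/video0 device node at all. If "Ubuntu bash" means WSL (Windows Subsystem for Linux), that is expected: plain WSL does not pass USB webcams through to Linux. On a native Ubuntu install it usually means a missing device node or a permissions problem (the user is not in the `video` group). A quick diagnostic sketch:

import glob
import cv2

# List the V4L2 device nodes the kernel actually exposes. Under plain WSL
# this list is empty because USB webcams are not passed through; on native
# Linux an empty list or a permissions problem produces the same
# VIDEOIO(V4L2) "can't find camera device" error.
print('video devices:', glob.glob('/dev/video*'))

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise SystemExit('camera 0 could not be opened')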
↧
↧
opencv_traincascade keeps core dumping
I'm trying to create custom cascades. I'm able to generate my .vec with opencv_createsamples, and I previewed it, so it is definitely correct.
But when I run opencv_traincascade, it starts the first iteration and core dumps.
This is exactly what it says:
===== TRAINING 0-stage =====
BEGIN
OpenCV Error: Assertion failed (_img.rows * _img.cols == vecSize) in get, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/apps/traincascade/imagestorage.cpp, line 153
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/apps/traincascade/imagestorage.cpp:153: error: (-215) _img.rows * _img.cols == vecSize in function get
Aborted (core dumped)
I saw that it could be because my positives and negatives are different sizes, so I made sure they're both 25x25, and it still core dumps.
Here is exactly what I'm entering into the terminal:
opencv_traincascade -data ./ -vec ./hand.vec -bg ./bg.dat -numPos 1000 -numNeg 500 -numStages 2 -w 25 h 25
Any help is greatly appreciated!
↧
What options exist in OpenCV to improve the thresholding algorithm used for contour detection of fish arches on sonar images?
I am working on a fish-detection algorithm for sonar images as a pet open-source project using OpenCV, and I am looking for advice from someone with experience in computer vision on how to improve its accuracy, most likely by improving the thresholding/segmentation algorithm in use.
Sonar images look a bit like below and the basic artifacts I want to find in them are:
- Upside down horizontal arches that are likely fish
- Cloud/blob/balls shape artifacts that are likely schools of bait fish
I would really like to extract contours of these cloud and fish-arch artifacts.
The example code below uses threshold() and findContours(). The results are reasonable in this case, as it has been manually tuned for this image, but it does not work on other sonar images that may require different thresholds or a different thresholding algorithm.
I have tried Otsu's method and it doesn't work very well for this use case. I think I need a thresholding/segmentation algorithm that somehow uses the contrast of localized blobs. Does such an algorithm exist in OpenCV, or is there some other technique I should look into?
Thanks,
Brendon.
----------
Original image searching for artifacts:

----------
Example output:

----------
import numpy
import random
import cv2
import math

MIN_AREA = 10
MIN_THRESHOLD = 90

def IsContourUseful(contour):
    # I have a much more complex version of this in my real code.
    # This is good enough for a demo of the concept and easier to understand.
    # Filter on area for all items
    area = cv2.contourArea(contour)
    if area < MIN_AREA:
        return False
    # Remove any contours close to the top
    for i in range(0, contour.shape[0]):
        if contour[i][0][1] <= 10:
            return False
    return True

def FindFishContoursInImageWithoutBottom(image, file_name_base):
    ret, thresh = cv2.threshold(image, MIN_THRESHOLD, 255, cv2.THRESH_BINARY)
    cv2.imwrite(file_name_base + 'thresholded.png', thresh)
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if IsContourUseful(c)]
    print('Found %s interesting contours' % (len(contours)))
    # Let's draw each contour with a different colour so we can see them as separate items
    im_colour = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
    um2 = cv2.UMat(im_colour)
    for contour in contours:
        colour = (random.randint(100, 255), random.randint(100, 255), random.randint(100, 255))
        um2 = cv2.drawContours(um2, [contour], -1, colour, 1)
    cv2.imwrite(file_name_base + 'contours.png', um2)
    return contours

# Load png and make greyscale as that is what original sonar data looks like
file_name_base = 'fish_image_cropped_erased_bottom'
image = cv2.imread(file_name_base + '.png')
image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
FindFishContoursInImageWithoutBottom(image, file_name_base)
----------
Some examples where thresholding failed to identify a large school of bait fish because it had lower intensity:
Overall, as a human I can see features in these examples with the bait schools that the thresholding doesn't pick up, as they are slightly lower intensity.


----------
An example where it picked up a bunch of things when there was nothing there, because the image had slightly higher global intensity:
There are a number of false positives in this example. I am not too worried about these as long as there are not too many of them, as I can probably do some post-processing to exclude them.

----------
Another two issues I see occasionally are:
1) The segmentation "joins" very tenuously connected blobs.
I was thinking some of the steps in the watershed example (dilation/erosion) might help here. One example is in the original image: the contour with a yellow outline at the bottom, roughly in the middle, is really a few separate objects, one blob attached to the bottom and another a bit higher off the bottom, joined by a very thin line.
2) The segmentation "separates" some blobs that I think should really be joined.
I see this often on thin fish arches: the arch continues but has slightly lower intensity in the middle and gets split in half. In this case the contour is roughly banana-shaped and continues, so knowing this I can probably post-process and merge contours. But I wonder if some kind of adaptive thresholding might help with this (see the sketch below), joining blobs that have close surrounding blobs.
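One locally-contrast-driven option to experiment with (a sketch, not tuned for real sonar data): cv2.adaptiveThreshold computes a per-pixel threshold from the mean of a local window, so dim-but-locally-bright bait schools can pass while a brighter global background does not; a CLAHE pass beforehand can further normalize local contrast. The block size and offset below are assumptions that would need tuning per sonar range setting.

import cv2

def LocalContrastThreshold(image, block_size=51, c=-10):
    # Normalize local contrast first so dim bait schools become comparable
    # to bright fish arches (clip limit / tile grid size need tuning).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(image)
    # Per-pixel threshold = local Gaussian-weighted mean minus c; block_size
    # must be odd. With a negative c, only pixels that stand out above their
    # own neighbourhood by |c| gray levels survive, regardless of the
    # global intensity level of the image.
    return cv2.adaptiveThreshold(equalized, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, block_size, c)

Because each blob is judged against its own surroundings, this may also help with issue 2: a slightly dimmer mid-section of an arch still exceeds its local mean, so the arch is less likely to be split in half.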
↧
Crosscompile OpenCV 3.2 CMAKE_MAKE_PROGRAM is not set
I am trying to cross-compile OpenCV 3.2 for `aarch64`, but **cmake** gives me the error below
CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool.
-- Configuring incomplete, errors occurred!
See also "/home/teshan/xcompile/ROS_ARM_CROSSCOMPILE/build/opencv-3.2.0/build/CMakeFiles/CMakeOutput.log".
I am running **cmake** from a `build/` directory created inside the `opencv/` directory
cmake -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/aarch64-gnu.toolchain.cmake -DCMAKE_INSTALL_PREFIX= ..
What am I missing?
---
**TL;DR**
The full output of the cmake command
CMake Deprecation Warning at CMakeLists.txt:72 (cmake_policy):
The OLD behavior for policy CMP0020 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
CMake Deprecation Warning at CMakeLists.txt:76 (cmake_policy):
The OLD behavior for policy CMP0022 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
CMake Deprecation Warning at CMakeLists.txt:81 (cmake_policy):
The OLD behavior for policy CMP0026 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool.
-- Configuring incomplete, errors occurred!
See also "/home/teshan/xcompile/ROS_ARM_CROSSCOMPILE/build/opencv-3.2.0/build/CMakeFiles/CMakeOutput.log".
↧
TypeError: Expected Ptr<cv::UMat> for argument 'src'
This is my code :
import os
from PIL import Image
import numpy as np
import cv2
import pickle

Base_Dir = os.path.dirname(os.path.abspath("__file__"))
image_dir = os.path.join(Base_Dir, "Images")
face_cascade = cv2.CascadeClassifier('Cascades/haarcascade_frontalface_default.xml')
recoganiser = cv2.face.LBPHFaceRecognizer_create()

current_ID = 0
label_IDS = {}
y_lebels = []
x_train = []

for root, dirs, files in os.walk(image_dir):
    for file in files:
        if(file.endswith("png") or file.endswith("jpg") or file.endswith("JPG")):
            path = os.path.join(root, file)
            label = os.path.basename(os.path.dirname(path)).replace(" ", "_").lower()
            # print(label)
            # print(path)
            if not label in label_IDS:
                label_IDS[label] = current_ID
                current_ID += 1
            id_ = label_IDS[label]
            x_train.append(path)  # Verify the image and convert into gray and numpy array
            y_lebels.append(label)  # some number for our labels
            pil_image = Image.open(path).convert('L')
            size = (600, 600)
            final_image = pil_image.resize(size, Image.ANTIALIAS)
            image_array = np.array(pil_image, 'uint8')
            faces = face_cascade.detectMultiScale(image_array, 2, 7)
            for (x, y, w, h) in faces:
                roi = image_array[y:y+h, x:x+w]
                x_train.append(roi)
                y_lebels.append(id_)

with open("lebels.pickle", "wb") as f:
    pickle.dump(label_IDS, f)

recoganiser.train(x_train, np.array(y_lebels))
recoganiser.save("trainer.yml")
And this is my error. What should I do?
TypeError Traceback (most recent call last)
in
52 pickle.dump(label_IDS, f)
53
---> 54 recoganiser.train(x_train, np.array(y_lebels))
55 recoganiser.save("trainer.yml")
TypeError: Expected Ptr<cv::UMat> for argument 'src'
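The likely cause: the loop appends the file `path` (a string) and the string `label` to the training lists on every file, in addition to the face ROIs, so train() receives strings where it expects image arrays. It also resizes the image into `final_image` but then builds `image_array` from the original `pil_image`. A sketch of the corrected collection loop:

import os
import pickle
import numpy as np
import cv2
from PIL import Image

# Only face ROIs (uint8 arrays) and integer IDs go into the training lists;
# the stray x_train.append(path) / y_lebels.append(label) lines are what put
# strings into the data and trigger the "Expected Ptr<cv::UMat>" TypeError.
face_cascade = cv2.CascadeClassifier('Cascades/haarcascade_frontalface_default.xml')
recoganiser = cv2.face.LBPHFaceRecognizer_create()
image_dir = os.path.join(os.path.dirname(os.path.abspath("__file__")), "Images")

label_IDS, x_train, y_lebels = {}, [], []
for root, dirs, files in os.walk(image_dir):
    for file in files:
        if not file.lower().endswith(("png", "jpg")):
            continue
        path = os.path.join(root, file)
        label = os.path.basename(root).replace(" ", "_").lower()
        id_ = label_IDS.setdefault(label, len(label_IDS))
        pil_image = Image.open(path).convert('L').resize((600, 600))
        image_array = np.array(pil_image, 'uint8')  # use the resized image
        for (x, y, w, h) in face_cascade.detectMultiScale(image_array, 2, 7):
            x_train.append(image_array[y:y+h, x:x+w])  # image data only
            y_lebels.append(id_)                       # integer labels only

with open("lebels.pickle", "wb") as f:
    pickle.dump(label_IDS, f)
recoganiser.train(x_train, np.array(y_lebels, dtype=np.int32))
recoganiser.save("trainer.yml")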
↧
↧
Connection between pose estimation, epipolar geometry and depth map
Hi, I am an undergraduate student working on a graduation project, and a beginner in computer vision.
I went through the tutorial "Camera Calibration and 3D Reconstruction" provided by OpenCV (link):
https://docs.opencv.org/master/d9/db7/tutorial_py_table_of_contents_calib3d.html
but I fail to see the connection between the second part and the final part. What I understand is:
- The intrinsic and extrinsic parameters of a camera are required to estimate the position of the camera and of the captured object
- To reconstruct a 3D model, multiple point clouds are needed, and to generate a point cloud a disparity map is required.
What I do not understand is:
- The importance of estimating the position of the camera or the object for computing the epilines or epipoles in either image plane.
- The importance of epipolar geometry, and of finding the epilines and epipoles, for computing the disparity map.
As far as I am aware, the code below generates a disparity map:
stereo = cv2.createStereoBM(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL,imgR)
and the inputs include a pair of stereo images, minDisparities, numDisparities and blockSize, but neither the position of the camera nor the epilines/epipoles.
Any help would be greatly appreciated.
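The missing link is rectification: block matchers like StereoBM assume the two images are already rectified, i.e. every epipolar line is a horizontal scanline aligned between the two images, so matches can be searched along a single row. That alignment is exactly what the camera pose (R, T between the cameras) and the epipolar geometry provide. A sketch of the full chain, with placeholder calibration values standing in for real cv2.stereoCalibrate output:

import cv2
import numpy as np

# Placeholder calibration; in practice K (intrinsics), D (distortion) and
# the relative pose R, T come from cv2.stereoCalibrate.
K1 = K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
D1 = D2 = np.zeros(5)
R, T = np.eye(3), np.array([0.06, 0.0, 0.0])  # 6 cm baseline, no rotation
size = (640, 480)

# stereoRectify uses the pose/epipolar geometry to rotate both image planes
# so that every epipolar line becomes a horizontal scanline; only after this
# can StereoBM find disparities by searching along rows.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
map1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

imgL = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)   # hypothetical files
imgR = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)
rectL = cv2.remap(imgL, map1[0], map1[1], cv2.INTER_LINEAR)
rectR = cv2.remap(imgR, map2[0], map2[1], cv2.INTER_LINEAR)

stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(rectL, rectR)  # fixed-point, scaled by 16

# Q from stereoRectify turns disparity into metric 3D points.
points_3d = cv2.reprojectImageTo3D(disparity.astype(np.float32) / 16.0, Q)

The tutorial images are pre-rectified, which is why the StereoBM snippet works there without any pose input; on your own cameras the calibration and rectification steps supply it.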
↧
Need explanation for the following! thanks in advance.
I have the following block of code in the main function in C++. I am using OpenCV 4.1.1.
int a[ ] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Mat matrix(1, 9, CV_8SC1, &a);
cout << matrix << endl;
I expected the output to be:
>[1, 2, 3, 4, 5, 6, 7, 8, 9]
But the actual output is:
>[ 1, 0, 0, 0, 2, 0, 0, 0, 3]
I am able to get the desired output by using the following statement:
>Mat matrix(1, 9, CV_32SC1, &a);
Can someone please explain why CV_8SC1 does not give the desired output, and provide references if possible? Thanks in advance.
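What happens here is pure memory reinterpretation: `a` holds 32-bit ints, but CV_8SC1 tells Mat to treat the same buffer as 8-bit signed values, so printing walks the first nine *bytes* of the array. On a little-endian machine the bytes of 1, 2, 3, ... are 01 00 00 00 02 00 00 00 03 ..., which is exactly the observed output. A sketch of the same effect in numpy (assuming a little-endian machine):

import numpy as np

# Mat never converts the data; it reinterprets the raw buffer.
# Nine little-endian int32 values, viewed as int8, reproduce the output.
a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=np.int32)
print(a.view(np.int8)[:9])  # -> [1 0 0 0 2 0 0 0 3], what Mat printed

CV_32SC1 matches the actual element type of `a`, which is why that version prints the expected values.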
↧
OpenCV-contrib: Reference to unresolved external symbol
Hi, I would like to use a function from OpenCV-contrib. I added the header files to the include folder of my OpenCV installation, but I didn't link them; standard OpenCV functions work fine. That's also why I didn't use CMake to build the library (like this: https://cv-tricks.com/how-to/installation-of-opencv-4-1-0-in-windows-10-from-source/).
Now I get the error: LNK2019 – reference to unresolved external symbol.
(The #include itself works without any error.)
Is there a way to link the manually added headers to the library?
I am using Microsoft Visual Studio and C++.
↧