Hi,
I have embedded Python in my C++ code. All other modules work fine, except for the `cv2` module: any call to a `cv2` function hangs, for instance `cv2.imread`.
I'm using Python 2.7.12.
Here is the code:
int main()
{
    // initialize python
    Py_Initialize();
    // import our test module
    PyObject * opencv_test_module = PyImport_ImportModule("opencv_test");
    // retrieve read_image() from our module
    PyObject * read_image = PyObject_GetAttrString(opencv_test_module, "read_image");
    // call read_image()
    PyObject * return_value = PyObject_CallObject(read_image, NULL); // program hangs here (!)
    // ... further processing ...
    return 0;
}
and here is my module `opencv_test.py`:
import cv2

def read_image():
    print "[read_image]" # program reaches here
    i = cv2.imread("/path/to/file", 0) # program hangs here -- also hangs with other cv2 calls
    print "[read_image] end"
↧
python cv2 calls from c++
↧
Error loading OpenCV4 libraries after cross compiling: No such file or directory
I'm having problems loading shared libraries after cross-compiling my C++ code with Docker Buildx, targeting a Raspberry Pi Zero W.
After I perform the build, I copy the generated binary to a running Raspberry Pi Zero that already has OpenCV4 installed.
When I run the executable, the following error message is shown:
pi@raspberrypi:/mnt/system/ $ ./software.run
./software.run: error while loading shared libraries: libopencv_freetype.so.4.2: cannot open shared object file: No such file or directory
Despite OpenCV4 being installed, this particular lib wasn't in /usr/lib. So I copied it there and ran sudo ldconfig, but even after this procedure my software still cannot find the lib.
I even added /usr/lib to the system's library path, but it didn't work.
pi@raspberrypi:/usr/lib $ sudo ldconfig -v | grep libopencv_free
ldconfig: Can't stat /usr/local/lib/arm-linux-gnueabihf: No such file or directory
ldconfig: Path `/lib/arm-linux-gnueabihf' given more than once
ldconfig: Path `/usr/lib/arm-linux-gnueabihf' given more than once
ldconfig: /lib/arm-linux-gnueabihf/ld-2.28.so is the dynamic linker, ignoring
ldconfig: /lib/ld-linux.so.3 is the dynamic linker, ignoring
libopencv_freetype.so.4.2 -> libopencv_freetype.so.4.2.0
pi@raspberrypi:/mnt/system/ $ file software.run
software.run: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=409a24c10761e2b9fd7a310cddfb09c86fb3a207, not stripped
pi@raspberrypi:/mnt/system $ echo $LD_LIBRARY_PATH
/usr/lib
Makefile:
CC = g++
STD = --std=c++14

SOFTWARE_SRC = $(wildcard src/software/*.cpp)
SOFTWARE_BIN = software.run

CV_LIBS = $(shell pkg-config --cflags --libs opencv4)
SOFTWARE_INC = -Iinclude -I/usr/include -I/usr/local/include
SOFTWARE_LDFLAGS = -lraspicam_cv -L/opt/vc/lib -lmmal -lmmal_core -lmmal_util -lwiringPi

all: software-out

software-out:
	$(CC) $(STD) $(SOFTWARE_SRC) -o $(SOFTWARE_BIN) $(CV_LIBS) $(SOFTWARE_INC) $(SOFTWARE_LDFLAGS)
Other software that also uses OpenCV works properly.
I also built a "Hello World" program just to validate my cross-compiling environment, and it works.
Thank you all in advance
↧
Distortion correction
I have some unknown distortion, like in the picture below. Is there any solution to correct this distortion based on lines or squares, for example?
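If it is ordinary lens distortion, one common approach is to calibrate the camera once against a known pattern and then undistort every image. A minimal sketch, assuming a set of checkerboard photos taken with the same camera (the file names and the 9x6 pattern size are placeholders):

import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["board1.jpg", "board2.jpg", "board3.jpg"]:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Estimate intrinsics + distortion coefficients, then undistort a target image.
ret, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
fixed = cv2.undistort(cv2.imread("distorted.jpg"), K, dist)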

↧
compare two pictures of color
Hi,
What's the best method to compare two pictures of mainly black color (or two pictures of mainly red color)?
I'm looking at a specific location in an image and hoping to see a black wire (or a red wire) in that location.
If the wire is not in that location, I would see either a gray background (or a green background, if looking for a red cable).
What I want to ensure is that every time I check the location I see a black shade (which indicates a black wire), with no gray background visible.
If all gray background is seen, this indicates the wire is not present;
if some gray background is seen, this indicates the wire might be present but is not in exactly the correct location.
Bearing in mind that in my picture black can sometimes show up as dark gray, what is the best way to distinguish between the cable and the background, and what is the best method to ensure there is no background in the image?
I'm looking at template matching, but from what I see it does not quite compare colors.
Can I get the average value of the colors in an image and compare it against a master image?
How could I do this? Would it work?
All help greatly appreciated,
thanks


The picture above has black wires turning at an almost 90 degree angle; this is bad. The first picture has wires turning gradually; this is good.
I'd like to inspect each and ensure the wires turn gradually, and indicate a failure if they do not.
I was thinking of first identifying and locating the component (shown in green below) and then selecting 2 ROIs relative to that green area, shown in red.
Then I would compare the color in these red boxes against the colors of the backgrounds (with no wires). Is this possible, and how could I do it?
I know how to identify the 2 ROIs but am not sure how I can compare the colors.
Also, one red box has a background that is very similar to the cable.
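One way to try the average-color idea (a rough sketch; the ROI coordinates, reference background color, and threshold are placeholders to measure from your own images):

import cv2
import numpy as np

img = cv2.imread("board.jpg")
x, y, w, h = 100, 50, 40, 40            # hypothetical ROI found relative to the component
roi = img[y:y + h, x:x + w]

mean_bgr = np.array(cv2.mean(roi)[:3])      # average B, G, R inside the ROI
background_bgr = np.array([128, 128, 128])  # measured once from a wire-free master image
dist = np.linalg.norm(mean_bgr - background_bgr)

if dist < 40:   # threshold is an assumption; tune it on known-good/bad samples
    print("ROI looks like background -> wire missing or out of place")
else:
    print("ROI differs from background -> wire likely present")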

↧
c++ imread filepath in windows
Greetings,
I've tried to Google this to no avail.
Please help me understand what is wrong with this file path:
Mat lena = imread("C:\\Users\dasboomer\Desktop\Building-Computer-Vision-Projects-with-OpenCV4-and-CPlusPlus-master\Chapter03/lena.jpg");
I've tried changing:
- backslash to double backslash
- backslash to forward slash
- backslash to double forward slash
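For reference, the rule at play is string-literal escaping: a single `\` begins an escape sequence, so every backslash in the path must be doubled, or forward slashes used throughout (C++ string literals follow the same doubling rule). A hedged Python sketch with a shortened, hypothetical path:

import cv2

# Three equivalent, consistent spellings of the same hypothetical path:
img = cv2.imread("C:\\Users\\dasboomer\\Desktop\\lena.jpg")  # every backslash doubled
img = cv2.imread(r"C:\Users\dasboomer\Desktop\lena.jpg")     # Python raw string
img = cv2.imread("C:/Users/dasboomer/Desktop/lena.jpg")      # forward slashes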
Thank you for your assistance
↧
Capture Video from Camera using cv2.VideoCapture(0) not working
Hi,
I am trying to execute the following code, which captures video from the webcam on my laptop. I am using Python 3 with OpenCV4 on Windows 7.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
While executing the code, the camera opens, but the display window shows a still, blurred, multiplied gray image instead of the live video. I am using the IDLE Python IDE to run the code, and it reports a `VideoCodec_RGB24` error. To cross-check whether I installed Python and OpenCV properly, I used code that reads an mp4 file via
`cap = cv2.VideoCapture('video1.mp4')`.
That program reads the mp4 file and displays the video correctly. Please advise how to fix the problem.
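One thing worth trying (a hedged suggestion, not a confirmed fix): on Windows, explicitly selecting the DirectShow backend sometimes avoids codec problems with the default backend, and checking `ret` guards against processing empty frames:

import cv2

cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)  # force the DirectShow backend
ret, frame = cap.read()
if not ret:
    print("frame grab failed; try another device index or backend")
cap.release()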
↧
Is it possible to use tee in gstreamer pipeline inside VideoCapture() API?
I am trying to use the same camera source both for reading frames with the OpenCV `VideoCapture()` API and for RTMP streaming. For this I am using `tee` in the GStreamer pipeline inside `VideoCapture()`, as shown below:
cv::VideoCapture cap("nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=10/1 ! tee name=t t. ! queue ! omxh264enc profile=8 bitrate=1000000 ! h264parse ! flvmux ! rtmpsink location= t. ! queue ! nvvidconv flip-method=0 ! video/x-raw, width=640, height=480, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink ",cv::CAP_GSTREAMER)
But I am not getting the desired output: I get only two frames, and the RTMP streaming is not working.
Is it possible to do it like this? I am new to GStreamer pipelines, so kindly help me.
Thanks in advance
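A hedged way to narrow this down is to reduce the pipeline to the capture-only branch (below, derived from the pipeline above with the tee and RTMP branch removed); if this opens, the RTMP branch can be added back one element at a time:

import cv2

pipeline = ("nvarguscamerasrc sensor-id=0 ! "
            "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=10/1 ! "
            "nvvidconv flip-method=0 ! video/x-raw, width=640, height=480, format=BGRx ! "
            "videoconvert ! video/x-raw, format=BGR ! appsink")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print("capture branch opened:", cap.isOpened())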
↧
createVideoReader > 4096 px
Hello,
I am currently developing a program for distributing video (and other content) to giant LED screens. Some videos exceed 4096 px, and I can't open them via the `cv::cudacodec::createVideoReader` function.
Do you have an idea how to solve my problem?
Thank you in advance.
ps : Here is the error :
Assertion failed (videoFormat.width >= decodeCaps.nMinWidth && videoFormat.height >= decodeCaps.nMinHeight && videoFormat.width <= decodeCaps.nMaxWidth && videoFormat.height <= decodeCaps.nMaxHeight) in cv::cudacodec::detail::VideoDecoder::create
↧
global /io/opencv/modules/videoio/src/cap_v4l.cpp (880) open VIDEOIO(V4L2): can't find camera device
I keep getting this error when I run this code in the Ubuntu 18.04 bash terminal.
Here is my code; the TensorFlow part works just fine, it's only the OpenCV part that gives me an error.
import cv2
from darkflow.net.build import TFNet
import numpy as np
import time

options = {
    'model': 'cfg/yolo.cfg',
    'load': 'bin/yolo.weights',
    'threshold': 0.2,
    'gpu': 1.0
}

tfnet = TFNet(options)
colors = [tuple(255 * np.random.rand(3)) for _ in range(10)]

capture = cv2.VideoCapture(0)
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    stime = time.time()
    ret, frame = capture.read()
    if ret:
        results = tfnet.return_predict(frame)
        for color, result in zip(colors, results):
            tl = (result['topleft']['x'], result['topleft']['y'])
            br = (result['bottomright']['x'], result['bottomright']['y'])
            label = result['label']
            confidence = result['confidence']
            text = '{}: {:.0f}%'.format(label, confidence * 100)
            frame = cv2.rectangle(frame, tl, br, color, 5)
            frame = cv2.putText(
                frame, text, tl, cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 0), 2)
        cv2.imshow('frame', frame)
        print('FPS {:.1f}'.format(1 / (time.time() - stime)))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
capture.release()
cv2.destroyAllWindows()
python: 3.6.7
ubuntu: 18.04
opencv: 4.3.0.36
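A quick probe (a suggested diagnostic) shows whether any V4L2 device is visible to OpenCV at all. Note that if this "Ubuntu bash terminal" is WSL on Windows rather than native Ubuntu, /dev/video* devices are generally not available, which would produce exactly this error:

import cv2

# Try the first few device indices with the V4L2 backend.
for idx in range(4):
    cap = cv2.VideoCapture(idx, cv2.CAP_V4L2)
    print('device', idx, 'opened:', cap.isOpened())
    cap.release()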
↧
Why is GaussianBlur not multi-core on a 32-bit OS?
Hi. It's really strange to see that GaussianBlur runs single-threaded on a 32-bit OS while it runs multi-threaded on a 64-bit OS. I have built OpenCV 3.4.5 with TBB on both operating systems. The CPU is a Cortex-A53. Why does this happen?
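A quick check (a suggestion) to confirm what each build actually uses at runtime:

import cv2

print(cv2.getNumThreads())  # threads OpenCV's parallel_for will use
# In the full build information, look for the "Parallel framework" line to
# confirm TBB was really picked up by the 32-bit build:
print(cv2.getBuildInformation())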
↧
WINDOW_NORMAL works weirdly after unchecking WITH_QT
hi,
I'm not sure whether Qt has something to do with this. I tried OpenCV's Qt window by building OpenCV from source with the WITH_QT flag and found that the Qt window blocked my right-mouse-button events, so I reinstalled OpenCV without Qt.
Now something strange happens: the default imshow window with the WINDOW_NORMAL flag can no longer be made smaller. It pops up at the original size of the image (not resized), and the user can only increase the window size, not decrease it. Even after reinstalling OpenCV, the problem is still there.
It seems to be the same issue as these from the past:
https://answers.opencv.org/question/60288/opencv30-windows_normal-flag-not-working/
and
https://github.com/opencv/opencv/issues/13995
but there is still no solution.
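A possible workaround to test (hedged; it may or may not bypass the behavior described in the linked issues) is to request an initial window size explicitly, right after creating the window:

import cv2

img = cv2.imread("image.jpg")           # hypothetical image path
cv2.namedWindow("preview", cv2.WINDOW_NORMAL)
cv2.resizeWindow("preview", 640, 480)   # ask for a size smaller than the image
cv2.imshow("preview", img)
cv2.waitKey(0)
cv2.destroyAllWindows()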
↧
Track Edges and get Camera Transformation
I am looking for advice on the best approach to achieving my goal. This is my first project using OpenCV, so sorry for the noob questions; I am learning.
The goal: I want to use a webcam to detect edges, track the edges as the camera moves, and calculate the camera's position based on the edges.
1. For edge detection, I understand that I can use cv.cornerHarris().
2. Track edges: how do I identify the edges and track them across frames as the camera moves?
3. Camera transformation: I know that I will need a known starting point; for the first frame I plan to measure the distance (x and y) from the camera to the known edges that I want to track. I believe that once I have a known starting transformation I should be able to calculate the movement; should I use cv.solvePnP()? (A rough sketch of this pipeline follows below.)
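A minimal sketch of the detect-track-pose loop (assumptions: the camera matrix `K`, distortion coefficients `dist_coeffs`, and the measured 3D coordinates `object_points` of the tracked features come from the manual setup described in step 3):

import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# 1. Detect corners (Shi-Tomasi here; cv2.cornerHarris is an alternative).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.01, minDistance=10)

# 2. Track the same corners into the next frame with sparse optical flow.
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)

# 3. Given measured 3D positions of those corners, recover the camera pose:
# ok, rvec, tvec = cv2.solvePnP(object_points, new_pts[status.ravel() == 1], K, dist_coeffs)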
Suggestions/examples/tutorials would be greatly appreciated, especially in python :)
TIA
↧
Extract secondary video streams
Hello,
Some video formats (e.g. mkv) can contain multiple video streams.
How do I extract the number of video streams in a video and the secondary video streams themselves, with OpenCV?
Thanks!
↧
How to develop a model to detect whether a point crossed a line?
# How to develop a system which detects if an object crosses a line
I am developing a system that, basically, tells whether an object crossed a line. Do I need the camera calibration matrix? Do I need to perform camera calibration, or can I use annotated line points (from CVAT) to calculate the math? If calibration is mandatory, I will go down that path. Please advise.
### Is a camera calibration matrix required (mandatory) for the use case below?
[![enter image description here][1]][1]
### I guess no camera calibration matrix is required, since the camera is looking from a top view
[![enter image description here][2]][2]
## My approach
1. Draw lines using CVAT.
2. Write a program using OpenCV that uses the object bounding box center and trip-line intersection to detect the crossing.
[![I draw lines using CVAT ][3]][3]
### My code
When do I apply camera calibration? How do I apply it? Is it required at all?
def where_it_is(line, cX, cY):
    A, B = line
    aX = A[0]
    aY = A[1]
    bX = B[0]
    bY = B[1]
    val = ((bX - aX) * (cY - aY) - (bY - aY) * (cX - aX))
    thresh = 1e-9
    if val >= thresh:
        return -1
    elif val <= -thresh:
        return 1
    else:
        return 0
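Hypothetical usage of the function above: `line` holds two annotated CVAT points and (cX, cY) is the bounding-box center of the tracked object. A sign change between consecutive frames means the center crossed the line, and this test alone needs no camera calibration:

line = ((100, 400), (500, 400))           # two points from the CVAT annotation
prev_side = where_it_is(line, 320, 390)   # center in the previous frame
curr_side = where_it_is(line, 320, 410)   # center in the current frame
if prev_side != curr_side and 0 not in (prev_side, curr_side):
    print("object crossed the line")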
[1]: https://i.stack.imgur.com/ghRJH.jpg
[2]: https://i.stack.imgur.com/S3stH.jpg
[3]: https://i.stack.imgur.com/CV7I9.jpg
Please advise which APIs and methods I need to use to calibrate the camera using OpenCV.
↧
Optimize CSRT tracker?
I'm new to OpenCV and was hoping that someone more experienced might be able to point me in the right direction. I'm doing motion tracking on black-and-white videos of insects in flight. Running CSRT does a decent job of tracking the insects much of the time, but it sometimes loses the target when the insect crosses a changing background at a distance. Since my target appearance is very specific, I was thinking that I might be able to train my own classifier and use it to improve the accuracy of the CSRT tracker. Is there a way to add a custom Haar cascade (or other classifier method) to the CSRT source code, or would it be standard practice to add a classifier on top of the CSRT (i.e. write a function to reacquire the target every n frames using the custom classifier), or is there some other way this is typically done? I appreciate any insight. Thanks!
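One concrete form of the "reacquire every n frames" idea (a hedged sketch: the cascade file, video path, and n are hypothetical, and the cascade would have to be trained on your insect imagery first):

import cv2

cascade = cv2.CascadeClassifier("insect_cascade.xml")
tracker = cv2.TrackerCSRT_create()   # cv2.legacy.TrackerCSRT_create in newer builds
cap = cv2.VideoCapture("insects.mp4")

ok, frame = cap.read()
bbox = cv2.selectROI("init", frame)  # pick the insect by hand in the first frame
tracker.init(frame, bbox)

n, i = 30, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    i += 1
    if i % n == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hits = cascade.detectMultiScale(gray)
        if len(hits) > 0:
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, tuple(int(v) for v in hits[0]))  # re-seed from detection
    ok, bbox = tracker.update(frame)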
↧
Importing Bidirectional LSTM model via onnx shows an error
Hi,
I'm trying to export a model from [easyOCR](https://github.com/JaidedAI/EasyOCR) to ONNX and import it into OpenCV. I successfully exported it to ONNX and checked that it runs well, but failed to import the model into OpenCV using readNetFromONNX.
Below are my model export code and the error:
batch_size = 1
x = torch.rand(batch_size, 1, 64, 256).float().cpu()
torch.onnx.export(model, (x, ''), "ocr0807_0.onnx")
net = cv2.dnn.readNetFromONNX('ocr0807_0.onnx')  # <- where the error occurs

error: (-215:Assertion failed) (int)_numAxes == inputs[0].size() in function 'getMemoryShapes'
The error occurs at the Bidirectional LSTM layer of the model, even though I'm using OpenCV 4.4.0 and Python 3.7.
Below is the part of the model code from easyOCR which includes the Bidirectional LSTM:
""" Sequence modeling"""
self.SequenceModeling = nn.Sequential(
BidirectionalLSTM(self.FeatureExtraction_output, hidden_size, hidden_size),
BidirectionalLSTM(hidden_size, hidden_size, hidden_size))
self.SequenceModeling_output = hidden_size
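One knob that may be worth experimenting with (a hedged suggestion; `model` and the input shape come from the export code above, and opset 11 is an assumption to test, not a known fix): OpenCV's ONNX importer is sensitive to how the graph is exported, and re-exporting with an explicit opset sometimes changes which layers it can parse:

import torch

batch_size = 1
x = torch.rand(batch_size, 1, 64, 256).float().cpu()
torch.onnx.export(model, (x, ''), "ocr0807_op11.onnx", opset_version=11)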
Would you please help me with this problem? Thank you.
↧
Disable file system cache during imwrite
I am trying to build an application with OpenCV and C++ in a Linux environment to capture and save 10,000 images. But after saving about 800 images, the system buff/cache grows and the application slows down (the system hangs). Is there any way to disable the file system cache, clear the cache after imwrite, or save the image without using imwrite? (I want to store in .bmp format.)
Any suggestion will be helpful, thank you.
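One approach to test (a hedged sketch, shown in Python for brevity; the equivalent C calls are fsync(2) and posix_fadvise(2) with POSIX_FADV_DONTNEED): keep imwrite, but after each write tell the kernel to drop that file's pages from the page cache:

import os
import cv2
import numpy as np

image = np.zeros((480, 640, 3), np.uint8)   # stand-in for a captured frame
path = "frame_0001.bmp"
cv2.imwrite(path, image)

fd = os.open(path, os.O_RDONLY)
os.fsync(fd)                                         # ensure the data reached disk first
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)   # drop cached pages (Linux-only)
os.close(fd)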
↧
How do you identify if the image is cut at the edges using OpenCV?

Using OpenCV, I would like to identify that the above image is incomplete because the right-hand edges are cut off. Below is the code I tried, with no success.
import cv2
import numpy as np

# Load image, create mask, grayscale, blur, and adaptive threshold
image = cv2.imread('test.jpg')
image1 = cv2.imread('test.jpg')
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 3)

# Find horizontal sections and draw on mask
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(mask, [c], -1, (255,255,255), -1)

# Find vertical sections and draw on mask
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,80))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(mask, [c], -1, (0,0,255), -1)

# Fill text document body
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9))
close = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, close_kernel, iterations=3)
cnts, hierarchy = cv2.findContours(close, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)[-2:]  # works on OpenCV 3 and 4
hierarchy = hierarchy[0]
for i, c in enumerate(cnts):
    cv2.drawContours(mask, [c], -1, 255, -1)
    #print('cnt num', i)
    #print('cnt num', cnts[i])
    print('heir', hierarchy[i])
    if hierarchy[i][2] < 0 and hierarchy[i][3] < 0:
        cv2.drawContours(image, cnts, i, (0,0,255), 3)
    else:
        cv2.drawContours(image, cnts, i, (0,255,0), 3)

opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, close_kernel, iterations=5)
cnts, hierarchy = cv2.findContours(opening, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)[-2:]
hierarchy = hierarchy[0]
for i, c in enumerate(cnts):
    cv2.drawContours(mask, [c], -1, 255, -1)
    if hierarchy[i][2] < 0 and hierarchy[i][3] < 0:
        cv2.drawContours(image1, cnts, i, (0,0,255), 3)
    else:
        cv2.drawContours(image1, cnts, i, (0,255,0), 3)

cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None
print('contour:', cnts)
for c in cnts:
    # Perform contour approximation
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        displayCnt = approx
        break
if displayCnt is None:
    print('The image is incomplete')
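A simpler check may be worth trying first (a hedged sketch; the count threshold is an assumption to tune): if foreground pixels touch an image border, the document is probably cut off at that edge:

import cv2
import numpy as np

img = cv2.imread('test.jpg', cv2.IMREAD_GRAYSCALE)
fg = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY_INV, 11, 3)

# Count foreground pixels along each border; a cut edge leaves content touching it.
edges = {'top': fg[0, :], 'bottom': fg[-1, :], 'left': fg[:, 0], 'right': fg[:, -1]}
for name, strip in edges.items():
    if np.count_nonzero(strip) > 20:
        print('content touches the %s edge -> image likely cut there' % name)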
↧
error importing cv2 in python3.6 but it works for python2.7
Error I get when using Python 3.6:
>>> import cv2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/python3.6/dist-packages/cv2.so: undefined symbol: PyCObject_Type
However, when I use Python 2.7, I get no errors:
>>> import cv2
>>> cv2.__version__
'4.0.0'
I installed OpenCV 4.0.0 from source and ensured proper bindings to Python 2.7 and Python 3.6, and ensured that the dist-packages of both Python versions had a cv2.so file. See the contents of dist-packages below.
$ls /usr/local/lib/python2.7/dist-packages
cv2.so
$ ls /usr/local/lib/python3.6/dist-packages
clonevirtualenv.py pip-20.2.1.dist-info
cv2.so py
decorator-4.4.0.dist-info py-1.8.0.dist-info
decorator.py __pycache__
future pykalman
future-0.17.1.dist-info pykalman-0.9.5.dist-info
importlib_metadata stevedore
importlib_metadata-1.7.0.dist-info stevedore-3.2.0.dist-info
libfuturize virtualenv-16.6.0.dist-info
libpasteurize virtualenv_clone-0.5.4.dist-info
networkx virtualenv.py
networkx-2.3.dist-info virtualenv_support
numpy virtualenvwrapper
numpy-1.19.1.dist-info virtualenvwrapper-4.8.4.dist-info
numpy.libs virtualenvwrapper-4.8.4-nspkg.pth
past zipp-3.1.0.dist-info
pip zipp.py
So, I see that the cv2.so file is there in both dist-packages, and I am not sure why I am seeing the error for Python 3.6.
PS: The cv2.so file was initially missing from the dist-packages of python3.6 when I checked after successful completion of cmake and make. So I manually copied the cv2.so file from the dist-packages of python2.7 and pasted it into the dist-packages of python3.6 (I am not sure if this was the right thing to do!). So the cv2.so file is essentially the same for both versions of Python.
Below is how I built from source:
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D PYTHON3_EXECUTABLE=$(which python3.6) \
-D PYTHON2_EXECUTABLE=$(which python2.7) \
-D PYTHON2_INCLUDE_DIR=$(python2.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-D PYTHON2_INCLUDE_DIR2=/usr/include/aarch64-linux-gnu/python2.7 \
-D PYTHON3_INCLUDE_DIR=$(python3.6 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-D PYTHON3_INCLUDE_DIR2=/usr/include/aarch64-linux-gnu/python3.6m \
-D PYTHON2_LIBRARY=/usr/lib/python2.7/config-aarch64-linux-gnu/libpython2.7.so \
-D PYTHON3_LIBRARY=/usr/lib/python3.6/config-3.6m-aarch64-linux-gnu/libpython3.6.so \
-D PYTHON2_NUMPY_INCLUDE_DIR=$(python2.7 -c "import numpy; print(numpy.get_include())") \
-D PYTHON3_NUMPY_INCLUDE_DIR=$(python3 -c "import numpy; print(numpy.get_include())") \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_opencv_python3=ON \
-D BUILD_opencv_python2=ON \
-D HAVE_opencv_python2=ON \
-D HAVE_opencv_python3=ON \
-D PYTHON_DEFAULT_EXECUTABLE=$(which python2.7) \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D BUILD_PYTHON_SUPPORT=ON \
-D OPENCV_PYTHON3_INSTALL_PATH=/usr/src/app \
-D BUILD_TESTS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D PYTHON3_CVPY_SUFFIX=.cpython-36m-aarch64-linux-gnu.so \
-D WITH_CUDA=OFF \
-D WITH_TBB=ON \
-D OPENCV_SKIP_PYTHON_LOADER=ON \
-D BUILD_opencv_hdf=OFF \
-D BUILD_EXAMPLES=ON ..
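For what it's worth, `PyCObject_Type` only exists in Python 2 (PyCObject was removed in Python 3.2), so an undefined-symbol error for it under Python 3.6 indicates the loaded cv2.so was built against Python 2; copying the Python 2 module into the Python 3 dist-packages cannot work. A quick check from the interpreter where the import succeeds:

import cv2

print(cv2.__file__)                      # which cv2.so was actually loaded
for line in cv2.getBuildInformation().splitlines():
    if 'Interpreter' in line or 'Libraries' in line:
        print(line)                      # shows the Python each build targeted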
↧
which is better for opencv, C++ or Java, for doing image processing in Android Studio?
I want to make an Android app that uses image processing with OpenCV. Now I am confused about what I should select for this project: the Android NDK or the SDK? Which is better for OpenCV, C++ or Java? What will be easier for Android?
↧