You can find my full question and code here: https://stackoverflow.com/questions/56284630/how-to-read-dji-fpv-feed-as-opencv-object
I hope this is ok, since I was having issues formatting my code from the stack question.
The DJI FPV feed returns H.264 data as a byte[] in real time. I want to convert this byte array into an OpenCV Mat.
I. Is this the correct way to read the byte array as a Mat, assuming the format weren't an issue?
Assuming it is RGBA, does that mean rows = 4, columns = byte[].length, and CvType.CV_8UC4?
II. Does OpenCV handle MP4 in Android like this?
III. Does the int size have something to do with it? Why is the int size occasionally 6, and other times 2400 to 6000?
IV. Can I fix this using Imgcodecs? I added the below code, but I am getting an empty array:
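For reference on point I (this is not the missing code from point IV), here is a minimal Python sketch of how an already-decoded RGBA frame buffer maps onto a Mat: rows correspond to the image height and columns to the image width, not to the buffer length. The frame size below is only a placeholder, and raw H.264 bytes would still have to go through a decoder first.

```python
import numpy as np
import cv2

# Placeholder frame size; a real app would take these from the decoder output.
width, height = 1280, 720

# Stand-in for a decoded RGBA frame buffer (height * width * 4 bytes).
rgba_bytes = bytes(height * width * 4)

# Wrap the buffer: rows = height, cols = width, 4 channels (the CV_8UC4 layout).
frame = np.frombuffer(rgba_bytes, dtype=np.uint8).reshape((height, width, 4))

# Convert to BGR for the rest of an OpenCV pipeline.
bgr = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)
print(bgr.shape)  # (720, 1280, 3)
```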
↧
How to Read DJI FPV Feed as OpenCV Object?
↧
Could I use the ArUco module of OpenCV for commercial purposes?
Hello! I'm a newbie to OpenCV.
I have some questions about ArUco.
1. I heard that the ArUco module of OpenCV is under the BSD license.
Does that mean I can use ArUco for commercial purposes?
2. Could I use printed ArUco markers for commercial purposes?
If anyone knows, please help me out.
Thank you.
↧
↧
pipe video frame and frame info from FFMPEG
Hi All,
I'm trying to pipe a video from FFMPEG into OpenCV. I would like to pass both the image and the info for each frame, in an attempt to get the most accurate frame timestamps while processing the images.
I was able to get the image alone and run through the whole video when using only **stdout**. But when I try to read the frame info from **stderr** as well, it displays the first frame, prints the first info dump, then freezes and crashes. (A sketch of a non-blocking approach follows after the code.)
Could anybody point me in the right direction?
import cv2
import subprocess as sp
import numpy
import time

FFMPEG_BIN = "ffmpeg"
command = [FFMPEG_BIN,
           '-i', r'C:\Users\Hayden\PycharmProjects\apexvod\Apex.mp4',
           '-an', '-sn',
           '-pix_fmt', 'bgr24',
           '-vcodec', 'rawvideo',
           '-vf', 'showinfo',
           '-f', 'image2pipe', 'pipe:1']

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=1920 * 1080 * 3 + 3357)

while pipe.poll() is None:
    if cv2.waitKey(0) & 0xFF == ord('q'):
        break
    raw_image = pipe.stdout.read(1920 * 1080 * 3)
    info = pipe.stderr.read(3357)
    image1 = numpy.frombuffer(raw_image, dtype='uint8')
    image2 = image1.reshape((1080, 1920, 3))
    cv2.imshow('Video', image2)
    print(info)
    pipe.stdout.flush()
    pipe.stderr.flush()

pipe.terminate()
cv2.destroyAllWindows()
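One common cause of this kind of freeze is that ffmpeg writes the showinfo output to stderr, and once the stderr pipe buffer fills while the script is blocked on a fixed-size read, both processes stall. Below is a rough sketch of one workaround, draining stderr on a background thread; the frame size and file path are the ones assumed in the question, and '-f rawvideo' is used instead of image2pipe since the output is raw BGR.

```python
import subprocess as sp
import threading
import numpy as np
import cv2

FFMPEG_BIN = "ffmpeg"
WIDTH, HEIGHT = 1920, 1080
FRAME_BYTES = WIDTH * HEIGHT * 3

command = [FFMPEG_BIN,
           '-i', r'C:\Users\Hayden\PycharmProjects\apexvod\Apex.mp4',
           '-an', '-sn',
           '-vf', 'showinfo',
           '-pix_fmt', 'bgr24',
           '-vcodec', 'rawvideo',
           '-f', 'rawvideo', 'pipe:1']

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10**8)

def drain_stderr(stream):
    # showinfo lines arrive on stderr; read them as they come so the pipe never fills up.
    for line in iter(stream.readline, b''):
        if b'showinfo' in line:
            print(line.decode(errors='replace').rstrip())

threading.Thread(target=drain_stderr, args=(pipe.stderr,), daemon=True).start()

while True:
    raw = pipe.stdout.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:
        break
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

pipe.terminate()
cv2.destroyAllWindows()
```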
↧
Use a pre-trained CNN to classify 6-channel Mats
Hey there,
I have a pre-trained CNN that was trained on 6-channel inputs (created from two BGR images of the same size).

vector<Mat> inputs;
vector<Mat> temp;
Mat blob;

cv::split(image_1, inputs);
cv::split(image_2, temp);
inputs.insert(inputs.end(), temp.begin(), temp.end());

dnn::blobFromImages(inputs, blob, 1.0/255.0, m_patchSize, 0, true, false);
net.setInput(blob);
Mat out = net.forward();

The program crashes at the blobFromImages call.
What am I doing wrong?
Is it possible to classify Mats with more than 3 channels?
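A possible explanation, offered as a guess: blobFromImages stacks its input Mats along the batch axis (N) rather than the channel axis, so six single-channel planes become a 6x1xHxW blob, and swapRB=true on single-channel inputs can also misbehave. One workaround is to assemble the 6-channel NCHW blob by hand and pass it to setInput. The sketch below is in Python, and the 64x64 patch size only stands in for m_patchSize.

```python
import numpy as np
import cv2

# Hypothetical inputs: two BGR patches of the same, fixed size.
image_1 = np.zeros((64, 64, 3), np.uint8)
image_2 = np.zeros((64, 64, 3), np.uint8)

# Stack the two images into a single H x W x 6 array and rescale like blobFromImages would.
six_ch = np.concatenate([image_1, image_2], axis=2).astype(np.float32) / 255.0

# Reorder to NCHW (1 x 6 x H x W), which is the layout dnn layers expect.
blob = six_ch.transpose(2, 0, 1)[np.newaxis, ...]

# net = cv2.dnn.readNet(...)   # hypothetical model path
# net.setInput(blob)
# out = net.forward()
print(blob.shape)  # (1, 6, 64, 64)
```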
↧
paper edge detection and perspective transform
before image
https://imgur.com/f190UFk
processed image
https://imgur.com/JkEhWkS
You can see the "processed image" has a highlight, so the transform works badly.
Is it possible to make a rectangle that ignores that highlight area?
import os
import cv2
import numpy as np
from nanoid import generate

def processImage(imagepath, ext):
    img = cv2.imread(imagepath)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    _, threshed = cv2.threshold(s, 50, 255, cv2.THRESH_BINARY_INV)
    cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

    canvas = img.copy()
    # cv2.drawContours(canvas, cnts, -1, (0, 255, 0), 1)
    cnts = sorted(cnts, key=cv2.contourArea)
    cnt = cnts[-1]
    print(cnt)

    arclen = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.005 * arclen, True)

    cv2.drawContours(canvas, [cnt], -1, (255, 0, 0), 5, cv2.LINE_AA)
    cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
    print(approx)

    approx = rectify(approx)
    pts2 = np.float32([[0, 0], [2480, 0], [2480, 3508], [0, 3508]])
    M = cv2.getPerspectiveTransform(approx, pts2)
    dst = cv2.warpPerspective(canvas, M, (2480, 3508))

    filename_output = generate() + ext
    cv2.imwrite('./static/' + filename_output, dst)

    topLeft, topRight, bottomRight, bottomLeft = approx
    topLeft = topLeft.tolist()
    topRight = topRight.tolist()
    bottomRight = bottomRight.tolist()
    bottomLeft = bottomLeft.tolist()

    return {
        'filename': './static/' + filename_output,
        'shape': img.shape,
        'approx': {
            'topLeft': topLeft,
            'topRight': topRight,
            'bottomRight': bottomRight,
            'bottomLeft': bottomLeft,
        },
    }

def rectify(h):
    h = h.reshape((13, 2))
    hnew = np.zeros((4, 2), dtype=np.float32)

    add = h.sum(1)
    hnew[0] = h[np.argmin(add)]
    hnew[2] = h[np.argmax(add)]

    diff = np.diff(h, axis=1)
    hnew[1] = h[np.argmin(diff)]
    hnew[3] = h[np.argmax(diff)]

    return hnew
Added images with a similar condition (a sketch of a more robust corner extraction follows below):
https://imgur.com/wDXtLsd
https://imgur.com/KAvOtdG
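One common trick for this kind of document outline is to loosen approxPolyDP until exactly four corners remain, falling back to minAreaRect if that never happens, so a small notch caused by the highlight cannot add extra vertices. A rough Python sketch, assuming cnt is the largest contour found by the code above:

```python
import cv2
import numpy as np

def four_corners(cnt):
    """Approximate a contour down to 4 points, ignoring small defects such as highlights."""
    arclen = cv2.arcLength(cnt, True)
    # Increase epsilon until the polygon collapses to a quadrilateral.
    for eps in np.linspace(0.01, 0.1, 10):
        approx = cv2.approxPolyDP(cnt, eps * arclen, True)
        if len(approx) == 4:
            return approx.reshape(4, 2).astype(np.float32)
    # Fallback: the minimum-area rotated rectangle always has 4 corners.
    return cv2.boxPoints(cv2.minAreaRect(cnt)).astype(np.float32)

# Example with a synthetic contour (a square with a notch cut into one edge):
pts = np.array([[10, 10], [200, 10], [200, 90], [180, 90],
                [180, 100], [200, 100], [200, 200], [10, 200]], dtype=np.int32)
print(four_corners(pts.reshape(-1, 1, 2)))
```

The four corners can then be ordered with the same rectify() logic before getPerspectiveTransform.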
↧
↧
OpenCV.js unable to track coloured circle and draw circle on target.
Hi, I have code that tracks an arbitrary input target and draws a rectangle on top of it, but I need to track a green-coloured circle and draw a circle on my target. I have searched for this in OpenCV.js but couldn't find it; any link that can help would be appreciated. My rectangle code is below (a colour-based sketch follows after it).
let video = document.getElementById('videoInput');
let cap = new cv.VideoCapture(video);

// take first frame of the video
let frame = new cv.Mat(video.height, video.width, cv.CV_8UC4);
cap.read(frame);

// hardcode the initial location of window
let trackWindow = new cv.Rect(150, 60, 63, 125);

// set up the ROI for tracking
let roi = frame.roi(trackWindow);
let hsvRoi = new cv.Mat();
cv.cvtColor(roi, hsvRoi, cv.COLOR_RGBA2RGB);
cv.cvtColor(hsvRoi, hsvRoi, cv.COLOR_RGB2HSV);
let mask = new cv.Mat();
let lowScalar = new cv.Scalar(30, 30, 0);
let highScalar = new cv.Scalar(180, 180, 180);
let low = new cv.Mat(hsvRoi.rows, hsvRoi.cols, hsvRoi.type(), lowScalar);
let high = new cv.Mat(hsvRoi.rows, hsvRoi.cols, hsvRoi.type(), highScalar);
cv.inRange(hsvRoi, low, high, mask);
let roiHist = new cv.Mat();
let hsvRoiVec = new cv.MatVector();
hsvRoiVec.push_back(hsvRoi);
cv.calcHist(hsvRoiVec, [0], mask, roiHist, [180], [0, 180]);
cv.normalize(roiHist, roiHist, 0, 255, cv.NORM_MINMAX);

// delete useless mats.
roi.delete(); hsvRoi.delete(); mask.delete(); low.delete(); high.delete(); hsvRoiVec.delete();

// Setup the termination criteria, either 10 iteration or move by atleast 1 pt
let termCrit = new cv.TermCriteria(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 1);

let hsv = new cv.Mat(video.height, video.width, cv.CV_8UC3);
let hsvVec = new cv.MatVector();
hsvVec.push_back(hsv);
let dst = new cv.Mat();
let trackBox = null;
const FPS = 30;

function processVideo() {
    try {
        if (!streaming) {
            // clean and stop.
            frame.delete(); dst.delete(); hsvVec.delete(); roiHist.delete(); hsv.delete();
            return;
        }
        let begin = Date.now();

        // start processing.
        cap.read(frame);
        cv.cvtColor(frame, hsv, cv.COLOR_RGBA2RGB);
        cv.cvtColor(hsv, hsv, cv.COLOR_RGB2HSV);
        cv.calcBackProject(hsvVec, [0], roiHist, dst, [0, 180], 1);

        // apply camshift to get the new location
        [trackBox, trackWindow] = cv.CamShift(dst, trackWindow, termCrit);

        // Draw it on image
        let pts = cv.rotatedRectPoints(trackBox);
        cv.line(frame, pts[0], pts[1], [255, 0, 0, 255], 3);
        cv.line(frame, pts[1], pts[2], [255, 0, 0, 255], 3);
        cv.line(frame, pts[2], pts[3], [255, 0, 0, 255], 3);
        cv.line(frame, pts[3], pts[0], [255, 0, 0, 255], 3);
        cv.imshow('canvasOutput', frame);

        // schedule the next one.
        let delay = 1000 / FPS - (Date.now() - begin);
        setTimeout(processVideo, delay);
    } catch (err) {
        utils.printError(err);
    }
};

// schedule the first one.
setTimeout(processVideo, 0);
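OpenCV.js does not ship a dedicated colour tracker, but the usual recipe is inRange on HSV followed by minEnclosingCircle on the largest contour. The sketch below is written in Python for brevity; cv.cvtColor, cv.inRange, cv.findContours and cv.minEnclosingCircle should also be available under the same names in opencv.js. The green HSV range is only a guess and needs tuning for the actual target.

```python
import cv2
import numpy as np

def track_green_circle(frame_bgr):
    """Return (center, radius) of the largest green blob, or None if nothing is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough HSV range for green; tune for the actual target.
    mask = cv2.inRange(hsv, (40, 70, 70), (80, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y)), int(radius)

# Usage on a single frame:
frame = np.zeros((480, 640, 3), np.uint8)
cv2.circle(frame, (320, 240), 40, (0, 200, 0), -1)   # synthetic green circle
result = track_green_circle(frame)
if result is not None:
    center, radius = result
    cv2.circle(frame, center, radius, (255, 0, 0), 3)
    print(center, radius)
```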
↧
How to get an undistorted point
Hello,
I use these lines to undistort fisheye images:

cv::fisheye::estimateNewCameraMatrixForUndistortRectify(
    camera_matrix, distortion_coefficients, image.size(),
    cv::Mat::eye(3, 3, CV_32F), new_camera_matrix,
    static_cast<double>(balance), image.size(),
    static_cast<double>(distance));

cv::fisheye::initUndistortRectifyMap(
    camera_matrix, distortion_coefficients, new_camera_matrix,
    R, image.size(), CV_32F, map1, map2);

So I have my distorted image and my undistorted image. For one coordinate in the distorted image I want to get the corresponding coordinate in the undistorted image.
How can I get that coordinate? Is there a function in opencv to do that?
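cv::fisheye::undistortPoints should do exactly this: given distorted pixel coordinates, the camera matrix, the distortion coefficients, and the new camera matrix passed as P, it returns the matching pixel in the undistorted image. A minimal Python sketch with made-up calibration values:

```python
import cv2
import numpy as np

# Hypothetical fisheye calibration (use your own camera_matrix / distortion_coefficients).
K = np.array([[400.0, 0, 320.0],
              [0, 400.0, 240.0],
              [0, 0, 1.0]])
D = np.array([0.05, -0.01, 0.001, 0.0])
new_K = K.copy()  # normally the output of estimateNewCameraMatrixForUndistortRectify
R = np.eye(3)

# One distorted pixel coordinate, shaped (N, 1, 2) as undistortPoints expects.
distorted = np.array([[[100.0, 150.0]]])

# With P=new_K the result is expressed in pixels of the undistorted image;
# without P it would be in normalized camera coordinates instead.
undistorted = cv2.fisheye.undistortPoints(distorted, K, D, R=R, P=new_K)
print(undistorted)
```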
↧
imwrite not working when Spawned by Crontab or Systemd??
Hi all,
I've written a Raspberry Pi application (Ubuntu Mate) that's essentially a time-lapse camera. Every 30 seconds, it snaps an image and writes it to disk.
When I start the application manually from the terminal, it works as expected.
When I configure the script to execute automatically at startup (using crontab or systemd; it needs to be executed as root), the script runs without any error, but no images are written to disk.
At first I thought it was a permission issue, but the script is running as root... Any idea of what may be going on? (One likely cause is sketched after the script below.)
My script:
import time
import neopixel
import board
import cv2
from pypylon import pylon  # needed for pylon.InstantCamera below

class Camera01:
    GiGe = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    Light = neopixel.NeoPixel(board.D18, 16, bpp=4, auto_write=True)

count = 0
while True:
    # Open camera
    Camera01.GiGe.Open()
    # Turn on light
    Camera01.Light.fill((255, 255, 255, 255))
    # Snap image
    image = Camera01.GiGe.GrabOne(1000)
    # Convert image to an OpenCV array
    image = cv2.cvtColor(image.Array, cv2.COLOR_BAYER_BG2BGR)
    # Save image to disk
    cv2.imwrite('saved_images/image' + str(count) + '.png', image)
    # Turn off light
    Camera01.Light.fill((0, 0, 0, 0))
    # Close camera
    Camera01.GiGe.Close()
    # Increment counter
    count = count + 1
    time.sleep(30)
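One likely cause, offered as a guess: when launched by cron or systemd, the working directory is usually not the project folder, so the relative path 'saved_images/...' points somewhere unexpected and imwrite fails silently by returning False rather than raising. A small sketch of writing to an absolute path and checking the return value:

```python
import os
import cv2
import numpy as np

# Resolve the output folder relative to the script itself, not the process CWD.
OUT_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'saved_images')
os.makedirs(OUT_DIR, exist_ok=True)

image = np.zeros((480, 640, 3), np.uint8)  # stand-in for the grabbed frame
path = os.path.join(OUT_DIR, 'image0.png')

ok = cv2.imwrite(path, image)
if not ok:
    # imwrite does not raise on a bad path; it just returns False.
    print('imwrite failed for', path)
```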
↧
How to translate this Python source code to Java?
Follow this link: [link text](https://www.pyimagesearch.com/2016/10/03/bubble-sheet-multiple-choice-scanner-and-test-grader-using-omr-python-and-opencv/). Please help me.
↧
↧
java opencv Helloworld - how?
I really am missing something.
I can build OpenCV on my Raspberry Pi and end up with opencv-XXX.jar and libopencv_javaXXX.so in /usr/local/share/OpenCV/java/
Great! I believe the .so file is statically linked because I set BUILD_SHARED_LIBS=OFF in my cmake command; however, I did note that the cmake output says "Link libraries: Dynamic load".
My Helloworld.java program looks like this:
import org.opencv.core.*;

class HelloWorld {
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    public static void main(String[] args) {
        System.out.println("hello.");
    }
}
I can compile this when I put the jar file on the classpath using "javac -cp /usr.../opencv-XXX.jar HelloWorld.java",
but when I run it after (successfully) putting the .so file on the LD_LIBRARY_PATH, I get:
Segmentation fault
It happens with 4.1.0, 4.0.0, and 3.4.1, so it's obviously something I am not doing right.
Help!
↧
opencv-contrib-python 3.4.4.19 did not install data folder on Raspberry Pi
For some reason my cv2.data.haarcascades variable points to '/usr/local/lib/python3.5/dist-packages/cv2/data/', but that location only has __init__.py and __pycache__ folder (which only has __init__.cpython-35.pyc in it).
This is how I installed OpenCV on my Raspberry Pi running Raspbian (for Robots) Stretch:
sudo apt-get install libhdf5-dev libhdf5-serial-dev libhdf5-100
sudo apt-get install libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5
sudo apt-get install libatlas-base-dev
sudo apt-get install libjasper-dev
wget https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py
sudo pip install opencv-contrib-python
Q: I managed to get the data, but **I would like to know what I could have done to have it installed where it thinks it is**.
P.S. This is what I did to get the data folder:
cd ~/Carl/Examples/OpenCV/
wget https://github.com/opencv/opencv/archive/3.4.4.zip
unzip *.zip
Data is in ~/Carl/Examples/OpenCV/opencv-3.4.4/data
↧
I want to output coordinates.
Hi all,
[C:\fakepath\캡처.PNG](/upfiles/15587715913520238.png)
I want to output the blue circle's and the red circle's coordinates (just (x, y)),
but I don't know how. :(
How can I get the coordinates?
I've already got objects being tracked by color, but I want to output coordinates for the objects I'm tracking.
This is my code (a short centroid-printing sketch follows after it):
import cv2 as cv
import numpy as np

color1 = 0
color2 = 0
ranges = 20
set_color = False
step = 0

def nothing(x):
    global color1, color2
    global lower_blueA1, lower_blueA2, lower_blueA3
    global upper_blueA1, upper_blueA2, upper_blueA3
    global lower_blueB1, lower_blueB2, lower_blueB3
    global upper_blueB1, upper_blueB2, upper_blueB3

    saturation_th1 = cv.getTrackbarPos('saturation_th1', 'img_result')
    value_th1 = cv.getTrackbarPos('value_th1', 'img_result')
    saturation_th2 = cv.getTrackbarPos('saturation_th2', 'img_result')
    value_th2 = cv.getTrackbarPos('value_th2', 'img_result')

    color1 = int(color1)
    color2 = int(color2)

    # In HSV space, set a range of pixel values similar to the pixel value picked with the mouse click.
    if color1 < ranges:
        lower_blueA1 = np.array([color1 - ranges + 180, saturation_th1, value_th1])
        upper_blueA1 = np.array([180, 255, 255])
        lower_blueA2 = np.array([0, saturation_th1, value_th1])
        upper_blueA2 = np.array([color1, 255, 255])
        lower_blueA3 = np.array([color1, saturation_th1, value_th1])
        upper_blueA3 = np.array([color1 + ranges, 255, 255])
        # print(i-range+180, 180, 0, i)
        # print(i, i+range)
    elif color1 > 180 - ranges:
        lower_blueA1 = np.array([color1, saturation_th1, value_th1])
        upper_blueA1 = np.array([180, 255, 255])
        lower_blueA2 = np.array([0, saturation_th1, value_th1])
        upper_blueA2 = np.array([color1 + ranges - 180, 255, 255])
        lower_blueA3 = np.array([color1 - ranges, saturation_th1, value_th1])
        upper_blueA3 = np.array([color1, 255, 255])
        # print(i, 180, 0, i+range-180)
        # print(i-range, i)
    else:
        lower_blueA1 = np.array([color1, saturation_th1, value_th1])
        upper_blueA1 = np.array([color1 + ranges, 255, 255])
        lower_blueA2 = np.array([color1 - ranges, saturation_th1, value_th1])
        upper_blueA2 = np.array([color1, 255, 255])
        lower_blueA3 = np.array([color1 - ranges, saturation_th1, value_th1])
        upper_blueA3 = np.array([color1, 255, 255])
        # print(i, i+range)
        # print(i-range, i)

    if color2 < ranges:
        lower_blueB1 = np.array([color2 - ranges + 180, saturation_th2, value_th2])
        upper_blueB1 = np.array([180, 255, 255])
        lower_blueB2 = np.array([0, saturation_th2, value_th2])
        upper_blueB2 = np.array([color2, 255, 255])
        lower_blueB3 = np.array([color2, saturation_th2, value_th2])
        upper_blueB3 = np.array([color2 + ranges, 255, 255])
        # print(i-range+180, 180, 0, i)
        # print(i, i+range)
    elif color2 > 180 - ranges:
        lower_blueB1 = np.array([color2, saturation_th2, value_th2])
        upper_blueB1 = np.array([180, 255, 255])
        lower_blueB2 = np.array([0, saturation_th2, value_th2])
        upper_blueB2 = np.array([color2 + ranges - 180, 255, 255])
        lower_blueB3 = np.array([color2 - ranges, saturation_th2, value_th2])
        upper_blueB3 = np.array([color2, 255, 255])
        # print(i, 180, 0, i+range-180)
        # print(i-range, i)
    else:
        lower_blueB1 = np.array([color2, saturation_th2, value_th2])
        upper_blueB1 = np.array([color2 + ranges, 255, 255])
        lower_blueB2 = np.array([color2 - ranges, saturation_th2, value_th2])
        upper_blueB2 = np.array([color2, 255, 255])
        lower_blueB3 = np.array([color2 - ranges, saturation_th2, value_th2])
        upper_blueB3 = np.array([color2, 255, 255])
        # print(i, i+range)
        # print(i-range, i)

cv.namedWindow('img_color')
cv.namedWindow('img_result')

cv.createTrackbar('saturation_th1', 'img_result', 0, 255, nothing)
cv.setTrackbarPos('saturation_th1', 'img_result', 30)
cv.createTrackbar('value_th1', 'img_result', 0, 255, nothing)
cv.setTrackbarPos('value_th1', 'img_result', 30)
cv.createTrackbar('saturation_th2', 'img_result', 0, 255, nothing)
cv.setTrackbarPos('saturation_th2', 'img_result', 30)
cv.createTrackbar('value_th2', 'img_result', 0, 255, nothing)
cv.setTrackbarPos('value_th2', 'img_result', 30)

cap = cv.VideoCapture(1)

while True:
    ret, img_color = cap.read()
    if ret == False:
        continue
    img_color = cv.flip(img_color, 1)

    img_color2 = img_color.copy()
    img_hsv = cv.cvtColor(img_color2, cv.COLOR_BGR2HSV)

    height, width = img_color.shape[:2]
    cx = int(width / 2)
    cy = int(height / 2)

    if set_color == False:
        rectangle_color = (0, 255, 0)
        if step == 1:
            rectangle_color = (0, 0, 255)
        cv.rectangle(img_color, (cx - 20, cy - 20), (cx + 20, cy + 20), rectangle_color, 5)
    else:
        # Create masks from the HSV image using the range values.
        img_maskA1 = cv.inRange(img_hsv, lower_blueA1, upper_blueA1)
        img_maskA2 = cv.inRange(img_hsv, lower_blueA2, upper_blueA2)
        img_maskA3 = cv.inRange(img_hsv, lower_blueA3, upper_blueA3)
        temp = cv.bitwise_or(img_maskA1, img_maskA2)
        img_maskA = cv.bitwise_or(img_maskA3, temp)

        img_maskB1 = cv.inRange(img_hsv, lower_blueB1, upper_blueB1)
        img_maskB2 = cv.inRange(img_hsv, lower_blueB2, upper_blueB2)
        img_maskB3 = cv.inRange(img_hsv, lower_blueB3, upper_blueB3)
        temp = cv.bitwise_or(img_maskB1, img_maskB2)
        img_maskB = cv.bitwise_or(temp, img_maskB3)

        # Morphological operations
        kernel = np.ones((11, 11), np.uint8)
        img_maskA = cv.morphologyEx(img_maskA, cv.MORPH_OPEN, kernel)
        img_maskA = cv.morphologyEx(img_maskA, cv.MORPH_CLOSE, kernel)

        kernel = np.ones((11, 11), np.uint8)
        img_maskB = cv.morphologyEx(img_maskB, cv.MORPH_OPEN, kernel)
        img_maskB = cv.morphologyEx(img_maskB, cv.MORPH_CLOSE, kernel)

        # Use the mask images to extract the parts of the original image that fall within the ranges.
        img_maskC = cv.bitwise_or(img_maskA, img_maskB)
        img_result = cv.bitwise_and(img_color, img_color, mask=img_maskC)

        # Labeling
        numOfLabelsA, img_labelA, statsA, centroidsA = cv.connectedComponentsWithStats(img_maskA)
        for idx, centroid in enumerate(centroidsA):
            if statsA[idx][0] == 0 and statsA[idx][1] == 0:
                continue
            if np.any(np.isnan(centroid)):
                continue
            x, y, width, height, area = statsA[idx]
            centerX1, centerY1 = int(centroid[0]), int(centroid[1])
            if area > 1500:
                cv.circle(img_color, (centerX1, centerY1), 10, (0, 0, 255), 10)
                cv.rectangle(img_color, (x, y), (x + width, y + height), (0, 0, 255))

        numOfLabelsB, img_labelB, statsB, centroidsB = cv.connectedComponentsWithStats(img_maskB)
        for idx, centroid in enumerate(centroidsB):
            if statsB[idx][0] == 0 and statsB[idx][1] == 0:
                continue
            if np.any(np.isnan(centroid)):
                continue
            x, y, width, height, area = statsB[idx]
            centerX2, centerY2 = int(centroid[0]), int(centroid[1])
            if area > 1500:
                cv.circle(img_color, (centerX2, centerY2), 10, (255, 0, 0), 10)
                cv.rectangle(img_color, (x, y), (x + width, y + height), (255, 0, 0))

        cv.imshow('img_result', img_result)

    cv.imshow('img_color', img_color)

    key = cv.waitKey(1) & 0xFF
    if key == 27:  # esc
        break
    elif key == 32:  # space
        if step == 0:
            roi = img_color2[cy - 20:cy + 20, cx - 20:cx + 20]
            roi = cv.medianBlur(roi, 3)
            cv.imshow("roi1", roi)
            hsv = cv.cvtColor(roi, cv.COLOR_BGR2HSV)
            h, s, v = cv.split(hsv)
            color1 = h.mean()
            print(color1)
            step += 1
        elif step == 1:
            roi = img_color2[cy - 20:cy + 20, cx - 20:cx + 20]
            roi = cv.medianBlur(roi, 3)
            cv.imshow("roi2", roi)
            hsv = cv.cvtColor(roi, cv.COLOR_BGR2HSV)
            h, s, v = cv.split(hsv)
            color2 = h.mean()
            set_color = True
            nothing(0)
            print(color2)
            step += 1

cap.release()
cv.destroyAllWindows()
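The centroids returned by connectedComponentsWithStats in the code above are already the (x, y) coordinates being asked about; centerX1/centerY1 and centerX2/centerY2 just need to be printed or stored. A short standalone sketch of extracting them from one mask:

```python
import cv2
import numpy as np

# 'mask' stands in for img_maskA or img_maskB from the loop above.
mask = np.zeros((480, 640), np.uint8)
cv2.circle(mask, (200, 300), 40, 255, -1)

num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for idx in range(1, num_labels):                    # label 0 is the background
    if stats[idx, cv2.CC_STAT_AREA] > 1500:
        x, y = centroids[idx]
        print('object %d center: (%d, %d)' % (idx, int(x), int(y)))
```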
↧
SINGLE POINT PERSPECTIVE TRANSFORM
Hi, I am new to this forum. I am using Python on a Raspberry Pi 3 to capture images (source images) from video frames, and my camera is not perpendicular to the planar object of interest. Since I know the coordinate correspondence between 4 points in the source image and the same 4 points in the final orthogonalized image, I use getPerspectiveTransform to obtain the transformation matrix H and then warpPerspective to obtain the orthogonalized image. It works perfectly when I work with the complete source and final images!
The problem is that, due to the Raspberry Pi's limited processing capacity, I just want to use a single point (x1, y1) from the source image and obtain its corresponding transformed single point (x2, y2) in the orthogonalized image. I calculate (x2, y2) = H x (x1, y1) and the result is not correct.
Is there any function to obtain a single point transformation once you know the transformation matrix H between the source and the final orthogonalized image?
Thanks in advance
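cv2.perspectiveTransform applies the homography to individual points and performs the division by the homogeneous coordinate that a plain H x (x1, y1) multiplication leaves out, which is the usual reason a hand-rolled multiplication looks wrong. A minimal sketch with made-up corner correspondences:

```python
import cv2
import numpy as np

# Hypothetical correspondences between the source image and the orthogonalized one.
src = np.float32([[100, 120], [520, 100], [540, 400], [90, 380]])
dst = np.float32([[0, 0], [500, 0], [500, 300], [0, 300]])
H = cv2.getPerspectiveTransform(src, dst)

# Transform a single source point; the input must be shaped (N, 1, 2).
p1 = np.float32([[[250.0, 260.0]]])
p2 = cv2.perspectiveTransform(p1, H)
print(p2)  # the same point in the orthogonalized image

# Equivalent by hand: multiply by H in homogeneous coordinates, then divide by w.
x, y, w = H @ np.array([250.0, 260.0, 1.0])
print(x / w, y / w)
```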
↧
↧
Strange error in getMemoryShapes function
Hello everyone!
Currently, I'm trying to solve an object detection task. The model used is MobileNetV1 + SSD from https://github.com/qfgaohao/pytorch-ssd/blob/master/vision/ssd/mobilenetv1_ssd.py; the code was written in the PyTorch framework.
In order to use this model with the OpenCV library I converted it to an ONNX representation with the standard torch.onnx module. But when I try to read this .onnx file I get the following error: `cv2.error: OpenCV(4.0.1-dev) /home/user/opencv/modules/dnn/src/layers/slice_layer.cpp:129: error: (-215:Assertion failed) inputs.size() == 1 in function 'getMemoryShapes'`
I'd like to note that when I convert only MobileNetV1 to ONNX and read it through dnn.readNetFromONNX(net), I don't get the above error. Everything works well.
What am I doing wrong?
↧
Python 3.6.5 + Opencv 3.4 imshow function
I configured opencv for python on the Windows 10 platform.
TEST CODE:
import cv2
import numpy as np
img = cv2.imread("C:\\Users\\Desktop\\xin.jpg")
cv2.namedWindow("image")
cv2.imshow("image",img)
cv2.waitKey(0)
But the imshow function throws an error:
Traceback (most recent call last):
File "F:\Python 3.6\test.py", line 5, in
cv2.imshow("image",img)
cv2.error: OpenCV(3.4.1) D:\Build\OpenCV\opencv-3.4.1\modules\highgui\src\window.cpp:364: error: (-215) size.width>0 && size.height>0 in function cv::imshow
Why, and how can I solve it? Thanks!
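That assertion usually means imshow received an empty image, most often because imread could not open the file and returned None (for example a wrong path or missing read permission). A small sketch that checks for this before showing:

```python
import cv2

path = r"C:\Users\Desktop\xin.jpg"  # path from the question
img = cv2.imread(path)

if img is None:
    # imread does not raise on a missing/unreadable file; it returns None.
    print("Could not read", path)
else:
    cv2.imshow("image", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```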
↧
In gapi, is there a way to create submatrix header?
I am building a new app and trying the new G-API.
I want to modify a subrange of a matrix; is it possible to do this in G-API? Or is there a way to make a submatrix header in G-API?
↧
Python - Multiple persistent modifiable rectangle selector
Hi,
I am currently trying to display an image with several highlighted and modifiable regions of interest. I want to solve it by using, for example, the RectangleSelector from matplotlib.widgets.
The issue is that I am not able to display multiple persistent, draggable, resizable bounding boxes at once.
The example is [here](https://matplotlib.org/3.1.0/gallery/widgets/rectangle_selector.html).
If you have an alternative approach please let me know.
I would like to have like in the aforementioned example multiple rectangle selector objects. I hope you can help me.
[Context: I show a photo of a traffic scene and let the program create several bounding boxes after a 2D object detection. I would like to give the user the possibility to modify their position and size. The issue is that I can only make one interactive persistent rectangle; I do not know how to add more, so that there are several concurrent interactive rectangles.]
↧
↧
How can I use CNN for algae cell counting?
Hi! I'm planning to use image processing with a CNN to count algae cells. The cells form a straight line and look like a hair strand. Can you help me with how to use a CNN to count cells that look like hair strands? TIA! (Attached is an actual image of the cells seen under the microscope.)
[image description](/upfiles/15588702011200559.jpg)
↧
How can I define the following ROI?
Hello, I have a small problem with my code. I have written the full code to identify a region of interest, but the big problem is this: I have a small static camera that captures empty parking spaces. I need to define a static region of interest, i.e. define the places of interest when the camera captures the first image. The problem with my code is that I call the function "regionOfInterest()" inside a while loop. (A sketch of computing the ROI only once is shown after the code.)
void regionOfInterest(Mat& frame) {
    Mat hsvImage;
    cvtColor(frame, hsvImage, COLOR_RGB2HSV);

    vector<Mat> HSV_CHANNELS;
    split(hsvImage, HSV_CHANNELS);

    Mat hueImage = HSV_CHANNELS[0];
    Mat hueMask;
    inRange(hueImage, hueValue - hueRange, hueValue + hueRange, hueMask);

    if (hueValue - hueRange < 0 || hueValue + hueRange > 180) {
        Mat hueMaskUpper;
        int upperHueValue = hueValue + 180;
        inRange(hueImage, upperHueValue - hueRange, upperHueValue + hueRange, hueMaskUpper);
        hueMask = hueMask | hueMaskUpper;
    }

    Mat saturationMask = HSV_CHANNELS[1] > minSaturation;
    Mat valueMask = HSV_CHANNELS[2] > minValue;
    hueMask = (hueMask & saturationMask) & valueMask;

    vector<Vec4i> lines;
    HoughLinesP(hueMask, lines, 1, CV_PI / 360, 50, 50, 10);

    for (unsigned int i = 0; i < lines.size(); ++i) {
        Point(lines[i][0], lines[i][1]);
        Point(lines[i][2], lines[i][3]);
    }

    vector<Point> pts;
    for (unsigned int i = 0; i < lines.size(); i++) {
        pts.push_back(Point(lines[i][0], lines[i][1]));
        pts.push_back(Point(lines[i][2], lines[i][3]));
    }

    /* GET THE PREVIOUS POINTS DETECTED IN THE IMAGE */
    Rect box = boundingRect(pts);

    /* DRAW RECTANGLE REGION OF INTEREST */
    rectangle(frame, box.tl(), box.br(), Scalar(0, 255, 0), 2);
}
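Since the camera is static, the ROI only needs to be computed once from the first good frame and then reused inside the loop. Below is a rough Python sketch of that structure; region_of_interest here is only a simplified stand-in for the C++ regionOfInterest() above (findNonZero plus boundingRect instead of HoughLinesP), and the hue band is an assumption.

```python
import cv2

def region_of_interest(frame):
    """Simplified stand-in: return the bounding box (x, y, w, h) of the in-range pixels."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 60, 60), (40, 255, 255))   # hue band is an assumption
    pts = cv2.findNonZero(mask)
    return cv2.boundingRect(pts) if pts is not None else None

cap = cv2.VideoCapture(0)
roi = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if roi is None:
        # Computed once, on the first good frame, then reused for every later frame.
        roi = region_of_interest(frame)
    if roi is not None:
        x, y, w, h = roi
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()
```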
↧
Contours generating invalid points
I am generating contours the usual way:
equalizeHist(inputImage, inputImage);
Laplacian(inputImage, inputImage, CV_8U, 7);
cv::threshold(inputImage, inputImage, 15, 255, cv::THRESH_BINARY);
findContours(inputImage, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cv::Point(0,0));
This works, but I noticed that a huge number of points are invalid. For example, the longest contour detected has 34,000 points, but only about 8,000 of the points are valid. The invalid points are huge positive or negative numbers - well outside the image size.
I am getting similar results with both OpenCV 3.4.5 and 3.1.0. Has anyone seen this before? Is there a way to ensure all the points are in a valid range (other than checking each individual point)?
My input image is 4,096 pixels. Here is a sample of some of the invalid points:
957148881, 8610
3, 67108864
16854824, 1
-1628683196, -1017131137
16854568, 1
-1628679931, -1017131137
16854856, 1
801215544, 2
957148881, 8610
3, 67108864
16854824, 1
-1628678156, -1017131137
16854568, 1
-1628678267, -1017131137
16854856, 1
801215544, 2
957148881, 8610
↧