I have an input picture where some transparent pixels are not white in RGB value; e.g. some pixels have BGRA value [100, 100, 100, 0]. I want to change all pixels whose alpha value is 0 to [255, 255, 255, 255], but I cannot figure out a way to do this other than a for loop. What I am doing now is this:
byte[] imageData = Files.readAllBytes(Paths.get(file));
MatOfByte matOfByte = new MatOfByte(imageData);
Mat source = Imgcodecs.imdecode(matOfByte, Imgcodecs.IMREAD_UNCHANGED);
for (int i = 0; i < source.rows(); i++) {
    for (int j = 0; j < source.cols(); j++) {
        double[] d = source.get(i, j);
        if (Double.compare(d[3], 0) == 0) {
            d[0] = 255;
            d[1] = 255;
            d[2] = 255;
            d[3] = 255;
            source.put(i, j, d);
        }
    }
}
Can we do anything better than this?
Thanks a lot!
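For reference, a loop-free sketch of the same operation, written in Python for brevity; in Java the equivalent idea is Core.split (or Core.extractChannel) to get the alpha channel, Core.inRange to build a mask of fully transparent pixels, and Mat.setTo(new Scalar(255, 255, 255, 255), mask). The file path below is a placeholder.
import cv2

file = 'input.png'                                    # placeholder path
source = cv2.imread(file, cv2.IMREAD_UNCHANGED)       # BGRA, like imdecode with IMREAD_UNCHANGED
source[source[:, :, 3] == 0] = (255, 255, 255, 255)   # opaque white wherever alpha == 0
cv2.imwrite('output.png', source)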
↧
Smarter way to set all transparent pixel to white using Java openCV
↧
No idea why the script gets this error code
Hello everybody,
I've written a script for edge detection, but the problem is that my script gets an error, which I have added below. Whenever I run the script, the shell appears for 1-2 seconds and then closes itself.
I have no idea what it could be; I hope someone has an idea that would help me.
The script is here, and the error message follows as well:
***Script :***
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while(1):
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_red = np.array([30,150,50])
    upper_red = np.array([255,255,180])
    mask = cv2.inRange(hsv, lower_red, upper_red)
    res = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow('Original', frame)
    edges = cv2.Canny(frame, 100, 200)
    cv2.imshow('Edges', edges)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()
The Error Code:
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
cv2.error: C:\build\2_4_winpack-bindings-win64-vc14-static\opencv\modules\imgproc\src\color.cpp:3961: error: (-215) (scn == 3 || scn == 4) && (depth == CV_8U || depth == CV_32F) in function cv::cvtColor
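For what it's worth, that (-215) assertion in cvtColor usually means the input frame is empty, i.e. cap.read() did not deliver an image (camera not opened, wrong index, or a dropped frame). A minimal sketch of a guarded read loop, under that assumption:
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise IOError("Camera 0 could not be opened")
while True:
    ret, frame = cap.read()
    if not ret or frame is None:          # stop instead of passing an empty frame to cvtColor
        print("No frame received from the camera")
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    cv2.imshow('Original', frame)
    if cv2.waitKey(5) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()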
↧
Find the corners of an object
I want to find the corners of the object in the attached image.
I tried finding the Canny edges to make it easier, since that left me with only the plank edges, but I'm not sure this is the best approach (maybe drawing straight lines through the faint edges and then finding the corners, but I don't know how I could do that).
I am looking to find the 2 edges of this wooden plank.

Code is:
import numpy as np
from matplotlib import pyplot as plt
from scipy import ndimage
from skimage import filters  # note: scikit-image renamed 'filter' to 'filters'; not used below
import cv2
img = cv2.imread('/.../plank5.jpg',0)
edges11 = cv2.Canny(img,100,200)
edges22 = cv2.Canny(img,380,460)
plt.subplot(131)
plt.imshow(img,cmap='gray')
plt.axis('off')
plt.title('Gray Image', fontsize=20)
plt.subplot(132)
plt.imshow(edges11,cmap = 'gray')
plt.axis('off')
plt.title('Lower MinVal', fontsize=20)
plt.subplot(133)
plt.imshow(edges22,cmap = 'gray')
plt.axis('off')
plt.title('Higher MinVal', fontsize=20)
plt.subplots_adjust(wspace=0.02, hspace=0.02, top=0.9,
bottom=0.02, left=0.02, right=0.98)
plt.show()
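One possible next step (a sketch only, with thresholds that would need tuning and assuming OpenCV 4.x return values): close small gaps in the Canny edges, take the largest contour, and approximate it with a polygon whose vertices are corner candidates.
import cv2
import numpy as np

img = cv2.imread('plank5.jpg', 0)                     # placeholder path
edges = cv2.Canny(img, 100, 200)
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # link nearby edge fragments
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(largest, 0.02 * cv2.arcLength(largest, True), True)
    print("corner candidates:", approx.reshape(-1, 2))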
↧
find the edge of image with the help of axis(x,y) in opencv in android
> I have to color an object when it is touched. The basic idea: I get the coordinates of the point where the object was touched; after that I have to find the edges of the touched object, and then fill that object with color. Here is how I get the coordinates of the touched point:
int cols = mRgba.cols();
int rows = mRgba.rows();
int xOffset = (mOpenCvCameraView.getWidth() - cols) / 2;
int yOffset = (mOpenCvCameraView.getHeight() - rows) / 2;
int x = (int)event.getX() - xOffset;
int y = (int)event.getY() - yOffset;
> x, y are the coordinates of the touched point, but the problem is finding the corners of the object. For the corners I use OpenCV's **Canny()** method, but I have no idea how to implement this in OpenCV on Android.
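One way to sketch the fill step, shown here in Python (the Android bindings expose the same call as Imgproc.floodFill): flood-fill from the touch point with a colour tolerance so the fill stops at the object's edges. The image path, seed point, and tolerances are placeholders.
import cv2
import numpy as np

img = cv2.imread('scene.jpg')                              # placeholder image
seed = (img.shape[1] // 2, img.shape[0] // 2)              # stand-in for the touched (x, y)
mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
cv2.floodFill(img, mask, seed, (0, 0, 255),
              loDiff=(20, 20, 20), upDiff=(20, 20, 20))    # tolerance controls where the fill stops
cv2.imwrite('filled.jpg', img)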
↧
do we have any functions to convert lidar images to point cloud data
I need to know more about this conversion.
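A minimal sketch, assuming the "lidar image" is a depth map in metres and that pinhole intrinsics (fx, fy, cx, cy) are known; the values below are placeholders. Each valid depth pixel is back-projected to a 3D point in the camera frame.
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project every valid depth pixel to a 3D point (camera frame)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                          # drop pixels with no depth reading

depth = np.random.rand(480, 640).astype(np.float32)   # placeholder depth map
cloud = depth_to_pointcloud(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(cloud.shape)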
↧
How to use blobFromImages in c++
##### System information (version)
- OpenCV => 3.4.5
- Operating System / Platform => Windows 7 / Windows 10
- Compiler => Microsoft VS2019
- C++
##### Detailed description
I am using the dnn module very successfully with YOLOv3 and SSD MobileNet, processing a single image at a time with blobFromImage.
I want to process a few images in parallel using blobFromImages.
I wrote (YOLOv3 net):
frame1 = imread("img1.jpg");
frame2 = imread("img2.jpg");
std::vector<cv::Mat> inputs;
inputs.push_back(frame1);
inputs.push_back(frame2);
blobFromImages(inputs, blob, 1 / 255.F, inpSize, mean, true, false);
std::vector<cv::Mat> outs;
net.forward(outs, getOutputsNames(net));
postprocess(frame, outs, net);
When I load only one image it works fine.
When I load 2 images the outs matrices are empty.
What am I doing wrong?
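For comparison, a sketch of the batched call in Python (the C++ calls mirror it; the model files here are placeholders). The main thing to check is the shape of the output: with blobFromImages it should carry one entry per input image, and detection-style layers may lay this out differently from a single-image run, so the postprocessing usually has to change as well.
import cv2

net = cv2.dnn.readNet('model.weights', 'model.cfg')    # placeholder model files
imgs = [cv2.imread('img1.jpg'), cv2.imread('img2.jpg')]
blob = cv2.dnn.blobFromImages(imgs, 1 / 255.0, (416, 416), (0, 0, 0),
                              swapRB=True, crop=False)
net.setInput(blob)
out = net.forward()
print(out.shape)                                       # check how the batch dimension is laid out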
##### Steps to reproduce
↧
Need help in recognising the image for face recognition app using OpenCV
I am using the LBPHFaceRecognizer algorithm with the following code: `mJavaDetector.detectMultiScale(mGray, faces, 1.3, 2, Objdetect.CASCADE_FIND_BIGGEST_OBJECT, new Size(mAbsoluteFaceSize, mAbsoluteFaceSize), new Size());` What would be the best approach for recognizing the images in OpenCV 4.1.0?
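For context, a minimal sketch of the usual detect-then-recognize pipeline in Python (assumptions: opencv-contrib is installed, a trained LBPH model file exists, and the standard Haar cascade is used; all paths are placeholders):
import cv2

detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('lbph_model.yml')                   # placeholder trained model

gray = cv2.cvtColor(cv2.imread('test.jpg'), cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
    print(label, confidence)                        # for LBPH, lower confidence means a closer match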
↧
Video capture not working properly
I have an application in Python in which I use VideoCapture to display a webcam frame on the screen.
It used to work perfectly until suddenly something went really wrong.
I'm using the same code, but the image now looks very noisy.
I tried working with different cameras and the problem is the same, so the problem is not with the camera.
I tried the same code on a different computer, and there the application works normally.
I wrote a very simple script, just to display the webcam frame on the screen, and I still have the same issue.
It seems to be a conflict between OpenCV and my computer configuration,
but I'm not able to figure out what is going on.
I hope you can help me with this issue.
My simple code is:
# -*- coding: utf-8 -*-
"""
Created on Thu Oct 10 10:42:38 2019
@author: vitoro
"""
import cv2

cap = cv2.VideoCapture(1)

# Check if the webcam is opened correctly
if not cap.isOpened():
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise IOError("Cannot open webcam")

while True:
    ret, frame = cap.read()
    cv2.imshow('Original video', frame)
    if cv2.waitKey(2) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
And the image I get is:

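Only a guess, since the same code works on another machine: noise like this is sometimes a capture-backend or pixel-format issue rather than a code bug. A small sketch to test that hypothesis by forcing a specific backend and format (the choices here are assumptions to experiment with):
import cv2

cap = cv2.VideoCapture(1, cv2.CAP_DSHOW)                       # backend depends on the OS: CAP_DSHOW/CAP_MSMF on Windows, CAP_V4L2 on Linux
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))  # request MJPG instead of the default format
ret, frame = cap.read()
print(ret, None if frame is None else frame.shape)
cap.release()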
↧
Applying filter with a Mask
I have an image and a mask, and I want to do filter/blur operations only in the region where the mask is white (value 255), and not where the mask is 0.
I tried these operations:
erode(maskImage,maskImage,Mat()); // values near irrelevant pixels should not be changed
blur(sourceImage,bluredImage,Size(3,3));
bluredImage = sourceImage + ((bluredImage-sourceImage) & maskImage);
But the & operation cannot be performed on images with different numbers of channels.
And when I tried the accepted answer from http://answers.opencv.org/question/3031/smoothing-with-a-mask/ it gave me a completely black image.
So how can I do operations with a mask as a parameter?
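A minimal sketch of one common approach (paths are placeholders): blur the whole image, then copy the blurred pixels back only where the single-channel mask is set, which avoids mixing channel counts in an & operation.
import cv2

img = cv2.imread('source.jpg')                          # placeholder input
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)     # single-channel mask, 255 = filter here
blurred = cv2.blur(img, (3, 3))
result = img.copy()
result[mask == 255] = blurred[mask == 255]              # masked copy via boolean indexing
cv2.imwrite('result.jpg', result)
In C++ the same idea is blurred.copyTo(result, mask) after result = sourceImage.clone().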
↧
OpenCV Single Channel Pixel Value Number Colour
Hey guys, do you know how OpenCV decides on the colour of the pixel value numbers, as seen in the attached image? This is a maximally zoomed-in portion of the red channel of an RGB image shown with OpenCV imshow. I'm interested in how it decides to colour the numbers white/grey/black.

For example, in the image, the numbers on pixels with values 128 and above are black, whereas below 128 they are closer to white.
I've searched for any answers to this but found none.
↧
Using SuperResolution with a Lepton 3.5 PureThermal 2
Greetings all, I have been having difficulty with the OpenCV superres namespace, mostly with setInput when the frames come from a UVC video camera. In the OpenCV examples there is a small app which maps a video FrameSource to the SuperResolution class through the call setInput; calling nextFrame then yields a cv::Mat which gets shown through imshow. I have no issues with this part. My issue is that the FrameSource itself is not talking to the hardware correctly. As a sanity check I used Cheese and can see through the camera just fine. The included pictures may make more sense.
As seen by cheese: https://ibb.co/ByB9wgH
As seen by OpenCV: https://ibb.co/mzcZ4Q0
OpenCV has no issue with the Lepton 2.5, so is there a way to specify the pixel format in the parameters to createFrameSource_Camera or createFrameSource_Video? If not, can I use my own 'frame' for processing by SuperResolution, i.e. pass my own cv::Mat or pixel* to setInput, coming from a ROS image transport topic? It would be useful to run SuperResolution on a networked stream, e.g. with ThermalImageCallback(const sensor_msgs::ImageConstPtr& msg) as the input into SuperResolution.
v4l2-ctl --list-formats
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'UYVY'
Name : UYVY 4:2:2
Index : 1
Type : Video Capture
Pixel Format: 'Y16 '
Name : 16-bit Greyscale
Index : 2
Type : Video Capture
Pixel Format: 'GREY'
Name : 8-bit Greyscale
Index : 3
Type : Video Capture
Pixel Format: 'RGBP'
Name : 16-bit RGB 5-6-5
Index : 4
Type : Video Capture
Pixel Format: 'BGR3'
Name : 24-bit BGR 8-8-8
Has anyone else had luck getting a FLIR Lepton working with OpenCV and fed into SuperResolution?
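Not a confirmed answer, but worth noting: createFrameSource_Camera is, to my understanding, a thin wrapper around cv::VideoCapture, so one thing to try is opening the camera yourself, requesting a pixel format through the FOURCC property, and checking what actually comes back before wiring it into SuperResolution. A sketch in Python of that check (device index and format are assumptions):
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'UYVY'))   # one of the formats listed by v4l2-ctl
cap.set(cv2.CAP_PROP_CONVERT_RGB, 1)                            # ask the backend to convert to BGR
ret, frame = cap.read()
print(ret, None if frame is None else (frame.shape, frame.dtype))
cap.release()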
↧
estimatePoseBoard rVec 3d model coordinates don't match
Hello, I'm using an aruco board to track the position and rotation of an object. My idea is to track it with the camera and render a scene showing the movement of the object. I am using panda3d to display the scene.
With panda3d I can create a Quat from a rotation axis and an angle, and then apply that quat to my model:
quat = Quat()
quat.setFromAxisAngleRad(angle, tuple(R))
self.aruco_marker.setQuat(quat)
I am getting the axis and angle from rvec as follows:
retval, rvec, tvec = aruco.estimatePoseBoard(corners, ids, board, mtx, dist)
angle = np.linalg.norm(rvec)
R = [rvec[0]/angle, rvec[1]/angle, rvec[2]/angle]
This is how I've defined the board:
board_corners = [np.array([[15/30,-12.5/30,6/30],[15/30,12.5/30,6/30],
[15/30,12.5/30,-19/30],[15/30,-12.5/30,-19/30]],dtype=np.float32),
np.array([[12.5/30, 15/30, 6/30],[-12.5/30, 15/30, 6/30],
[-12.5/30, 15/30, -19/30],[12.5/30, 15/30, -19/30]],dtype=np.float32),
np.array([[-15/30,12.5/30,6/30],[-15/30,-12.5/30,6/30],[-15/30,-12.5/30,-19/30],
[-15/30,12.5/30,-19/30]],dtype=np.float32),
np.array([[-12.5/30, -15/30, 6/30],[12.5/30, -15/30, 6/30],
[12.5/30, -15/30, -19/30],[-12.5/30, -15/30, -19/30]],dtype=np.float32)]
board_ids = np.array( [[12],[8],[4],[7]], dtype=np.int32)
aruco_dict = aruco.getPredefinedDictionary(aruco.DICT_6X6_250)
board = aruco.Board_create( board_corners, aruco_dict, board_ids)
This is how the 3D model looks in Blender. The visible face corresponds to marker id 12 and to the first entry in board_corners.
[C:\fakepath\model.png](/upfiles/15707270251953126.png)
This is the camera view and rendered scene:
[C:\fakepath\screen.png](/upfiles/15707271204792733.png)
The 'axis' 3D model is included to compare how the model should be oriented, in case Blender's forward, right, and up axes don't match panda3d's.
Does what I'm doing make sense? I can't seem to find what is wrong.
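One thing that often causes exactly this kind of mismatch is the coordinate convention: rvec/tvec are expressed in OpenCV's camera frame (x right, y down, z forward), while panda3d's default frame is x right, y forward, z up. A sketch of re-expressing the rotation before building the quaternion; the change-of-basis matrix here is an assumption to verify against your scene.
import cv2
import numpy as np

rvec = np.array([0.1, 0.2, 0.3])                 # placeholder; use the rvec from estimatePoseBoard
R_cv, _ = cv2.Rodrigues(rvec)                    # 3x3 rotation in OpenCV's camera frame
cv_to_panda = np.array([[1, 0, 0],               # maps (x right, y down, z forward)
                        [0, 0, 1],               # to   (x right, y forward, z up)
                        [0, -1, 0]], dtype=float)
R_panda = cv_to_panda @ R_cv @ cv_to_panda.T     # same rotation expressed in panda3d axes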
↧
solvePnP object pose for Omnidirectional model
I'm interested in pose estimation using slightly different sensors called omnidirectional cameras, which are based on the `Unified Omnidirectional Model` to project 3D points onto the image plane.
`Opencv` has a function called `solvePnP` that finds the pose of an object from a set of 2D-3D point correspondences for standard cameras.
My question is whether there is a similar function for these particular cameras, in `opencv` or any other library.
↧
Antialiased polygon fill doesn't respect area borders
I want to draw a quadrilateral whose vertices don't coincide with pixel boundaries.
The simplest case is a square with all 4 vertices located exactly in the middle of pixels. Here is my code doing it:
cv::Point2f A(5.5,2.5), B(6.5, 2.5), C(6.5,3.5), D(5.5, 3.5);
cv::Point points[4] = {A, B, C, D};
const cv::Point *contours[1] = {points};
int lengths[1] = {4};
cv::Mat p(10, 10, CV_8U, cv::Scalar(0));
cv::fillPoly(p, contours, lengths, 1, cv::Scalar(100), CV_AA, 0);
std::cout << p << std::endl;
In the output image I expected 4 pixels, each with 25% of the value, i.e. a total value of 100 (because the area of the square is exactly 1). But the real output looks like this:
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 19, 100, 22, 0, 0;
0, 0, 0, 0, 0, 36, 100, 40, 0, 0;
0, 0, 0, 0, 0, 20, 83, 22, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Is this a bug in the library, or am I using it the wrong way? And how do I draw a simple 1x1 antialiased square?
UPDATE: After removing the CV_AA flag, the output of
cv::fillPoly(p, contours, lengths, 1, cv::Scalar(100));
looks like this:
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 100, 0, 0, 0;
0, 0, 0, 0, 0, 0, 100, 0, 0, 0;
0, 0, 0, 0, 0, 0, 100, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
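For what it's worth, a sketch of the usual sub-pixel workaround: pass the vertices as fixed-point integers and let fillPoly's shift parameter handle the fractional coordinates. This may still not give an exact area-weighted 25% per pixel, but it keeps the fill inside the intended square.
import cv2
import numpy as np

shift = 8                                                      # 8 fractional bits
pts = np.array([[5.5, 2.5], [6.5, 2.5], [6.5, 3.5], [5.5, 3.5]])
pts_fixed = np.round(pts * (1 << shift)).astype(np.int32)
img = np.zeros((10, 10), np.uint8)
cv2.fillPoly(img, [pts_fixed], 100, lineType=cv2.LINE_AA, shift=shift)
print(img)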
↧
Hole filling and edge linking in a binarized cell image
Hi all,
I need to segment and classify blood cells in several images.
I have binarized the image using Otsu thresholding but some of the cell edges are weak.
Can someone suggest a method for linking the edges in the binarized cell image and then filling the detected holes?
Original image and binarized image are shown.


The final goal is to use watershed segmentation to classify the cells.
Thanks
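A minimal sketch of one standard recipe (parameters are assumptions to tune, and the path is a placeholder): morphological closing to link weak or broken edges, then a flood fill from the image border, inverted and OR-ed back in, to fill the enclosed holes.
import cv2
import numpy as np

binary = cv2.imread('cells_binary.png', cv2.IMREAD_GRAYSCALE)    # placeholder binarized image
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)       # link nearby edge fragments

fill = closed.copy()
mask = np.zeros((fill.shape[0] + 2, fill.shape[1] + 2), np.uint8)
cv2.floodFill(fill, mask, (0, 0), 255)                           # assumes (0, 0) is background
filled = closed | cv2.bitwise_not(fill)                          # enclosed holes become foreground
cv2.imwrite('cells_filled.png', filled)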
↧
gfluidimgproc_func.simd.hpp not found
I am running OpenCV with Visual Studio on Windows and got the above error message. The code can be compiled; I cut it down to a minimum of only an empty main().
↧
Finding required layer in dnn module
I am having a problem with the dnn module when trying to implement a caffe project (https://github.com/xialeiliu/RankIQA).
import cv2
import os
import argparse
from matplotlib import pyplot as plt
import numpy as np
net = cv2.dnn.readNetFromCaffe("deploy_vgg.prototxt", "FT_tid2013.caffemodel")
Num_Patch = 30
image_to_test = 'i04_18_2.bmp'
image = cv2.imread(image_to_test)
resized = cv2.resize(image, (224, 224))
blob = cv2.dnn.blobFromImage(resized, 1, (224, 224), (104, 117, 123))
print(blob.shape)
net.setInput(blob)
preds = net.forward()
print(preds)
I get the following error when naively feeding the blob into the net.
error: OpenCV(4.1.1) C:\projects\opencv-python\opencv\modules\dnn\src\layers\fully_connected_layer.cpp:73: error: (-215:Assertion failed) 1 <= blobs.size() && blobs.size() <= 2 in function 'cv::dnn::FullyConnectedLayerImpl::FullyConnectedLayerImpl'
I am aware that this probably means that I have to declare a layer, but I cannot tell which one from the error itself.
Is there a way to find which layer I need to implement?
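Not a full answer, but one way to narrow it down: the assertion fires when a fully connected layer ends up with no weight blobs, and one common cause is a mismatch between the prototxt and the caffemodel rather than a missing custom layer. Listing the imported layer names at least shows which layers the importer registered:
import cv2

net = cv2.dnn.readNetFromCaffe("deploy_vgg.prototxt", "FT_tid2013.caffemodel")
for i, name in enumerate(net.getLayerNames(), start=1):
    print(i, name)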
↧
DFT Orientation Angle OpenCV
Good afternoon!
I'm using an [example for a DFT](https://docs.opencv.org/master/d8/d01/tutorial_discrete_fourier_transform.html). I need help: how can I calculate the angle by which the amplitude spectrum is rotated relative to the axes? I want to build a roll-rotation sensor that processes the video stream.
Thank you all for any answer!
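One possible approach (a sketch only, not taken from the tutorial): compute the centred log-magnitude spectrum and estimate its dominant orientation from second-order image moments; a roll of the camera rotates the spectrum by the same angle, so the estimated angle can serve as the roll measurement.
import cv2
import numpy as np

gray = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)   # placeholder frame
dft = cv2.dft(gray, flags=cv2.DFT_COMPLEX_OUTPUT)
mag = cv2.magnitude(dft[:, :, 0], dft[:, :, 1])
mag = np.log1p(np.fft.fftshift(mag))                # centred log-magnitude spectrum

m = cv2.moments(mag)
angle = 0.5 * np.degrees(np.arctan2(2 * m['mu11'], m['mu20'] - m['mu02']))
print('estimated spectrum orientation (degrees):', angle)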
↧
How to Save Each ROI Separately
Hello,
I have the code below:
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

n_rows = 3
n_images_per_row = 3

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    height, width, ch = frame.shape
    roi_height = int(height / n_rows)
    roi_width = int(width / n_images_per_row)

    images = []
    for x in range(0, n_rows):
        for y in range(0, n_images_per_row):
            tmp_image = frame[x*roi_height:(x+1)*roi_height, y*roi_width:(y+1)*roi_width]
            images.append(tmp_image)

    # Display the resulting sub-frames
    for x in range(0, n_rows):
        for y in range(0, n_images_per_row):
            cv2.imshow(str(1+y+x*n_images_per_row), images[x*n_images_per_row+y])
            cv2.moveWindow(str(x * n_images_per_row + y + 1), 100 + (y * roi_width), 50 + (x * roi_height))

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I would like to save each of these ROIs separately.
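A minimal sketch of the saving step (the filename pattern is just an example): grab one frame, cut it into the same 3x3 grid, and write each ROI to its own numbered file with cv2.imwrite.
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()
if ret:
    h, w = frame.shape[:2]
    n_rows, n_cols = 3, 3
    rh, rw = h // n_rows, w // n_cols
    for x in range(n_rows):
        for y in range(n_cols):
            roi = frame[x * rh:(x + 1) * rh, y * rw:(y + 1) * rw]
            cv2.imwrite(f"roi_{x * n_cols + y + 1}.png", roi)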
↧
conversion of image shape
I have ten images in an array of shape (10, 16384). When I feed one image, its shape is (16384,), but the neural network accepts only input of shape (?, 16384), so it raises the following error. Please help me resolve this issue.
ValueError: Cannot feed value of shape (16384,) for Tensor 'MNIST_CNN/X:0', which has shape '(?, 16384)'
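A minimal sketch of the usual fix: the placeholder expects a 2-D batch, so a single image of shape (16384,) just needs a leading batch axis before being fed.
import numpy as np

images = np.zeros((10, 16384), dtype=np.float32)   # stand-in for the real data
single = images[0]                                 # shape (16384,)
batch_of_one = single.reshape(1, -1)               # shape (1, 16384), matches (?, 16384)
print(batch_of_one.shape)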
↧