Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


python - How to remove a patterned background from an image and detect the objects?

I have an image that is a frame of a video. As you can see in the original image, the background has a pattern that makes it challenging to detect the Lego objects. With my current code, the object edges are detected wrongly and get mixed up with the shapes of the background, as shown in this result image. The result with bounding rectangles is shown here. My code:

import cv2
import numpy as np

main_image = cv2.imread('image.jpg', 1)
gray = cv2.cvtColor(main_image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (3, 3), 2)

# cv2.threshold returns (threshold_value, thresholded_image); only the value
# is useful here, as the Canny hysteresis bounds must be scalars.
otsu_thresh, _ = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
canny_result = cv2.Canny(blurred, otsu_thresh / 2, otsu_thresh)

# Dilate to close small gaps in the detected edges.
kernel = np.ones((2, 2), np.uint8)
dilated = cv2.dilate(canny_result, kernel, iterations=3)

# findContours returns 2 or 3 values depending on the OpenCV version.
contours_found = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours_found = contours_found[0] if len(contours_found) == 2 else contours_found[1]

for cont in contours_found:
    x, y, w, h = cv2.boundingRect(cont)
    cv2.rectangle(main_image, (x, y), (x + w, y + h), (0, 0, 255), 3)

cv2.imshow('canny_result', canny_result)
cv2.imshow('main_image', main_image)
cv2.waitKey(0)

What should I do to detect the objects correctly?

question from:https://stackoverflow.com/questions/65866455/how-to-remove-a-patterned-background-from-an-image-and-detect-the-objects


1 Answer


Given sufficient constraints on the pattern (a bounded range for the number of repeats in the image? a constant symmetry group? repetitive at all?), you may be able to find the pattern's unit cell by correlation.
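One way to sketch the correlation step is via the autocorrelation of the grayscale image: the first off-origin peak along each axis gives the repeat period in pixels. The function name `estimate_period` and the `min_lag` parameter are my own; this assumes a roughly axis-aligned, purely translational repeat.

```python
import numpy as np

def estimate_period(gray, min_lag=8):
    """Estimate the vertical/horizontal repeat period (in pixels)
    of a tiled pattern via circular autocorrelation."""
    g = gray.astype(np.float32)
    g -= g.mean()
    # Wiener-Khinchin: autocorrelation = IFFT(|FFT|^2)
    spec = np.fft.rfft2(g)
    ac = np.fft.irfft2(spec * np.conj(spec), s=g.shape)
    # The first off-origin peak along each axis ~ the period.
    # Skip lags near zero so the trivial peak at lag 0 is ignored.
    py = min_lag + int(np.argmax(ac[min_lag:g.shape[0] // 2, 0]))
    px = min_lag + int(np.argmax(ac[0, min_lag:g.shape[1] // 2]))
    return py, px
```

For a real frame you would pass the grayscale image from the question's code; `min_lag` should be set below the smallest plausible unit-cell size.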

Then think of stacking the tiles along a new axis. Compute the median for each color channel along that axis. Take the resulting 2D image of median values as a template and tile it back over the original image.

Calculate the differences. Large differences indicate Lego.

Possible refinement: remove the outliers (the Lego pixels), then estimate and finally remove any trend in the differences caused by variation in lighting/vignetting.
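This refinement could be sketched as fitting a smooth surface to the difference map while ignoring the strongest differences. The function name `detrend`, the planar (first-order) trend model, and the quantile cutoff are my own assumptions; a higher-order surface may fit real vignetting better.

```python
import numpy as np

def detrend(diff_map, outlier_quantile=0.9):
    """Remove a smooth (planar) illumination trend from the difference map,
    ignoring the strongest differences (likely the objects themselves)."""
    d = diff_map.astype(np.float32)
    ys, xs = np.mgrid[0:d.shape[0], 0:d.shape[1]]
    # Drop probable object pixels before fitting the trend
    keep = d <= np.quantile(d, outlier_quantile)
    # Least-squares plane a*y + b*x + c through the inlier differences
    A = np.column_stack([ys[keep], xs[keep], np.ones(keep.sum())])
    coeff, *_ = np.linalg.lstsq(A, d[keep], rcond=None)
    trend = coeff[0] * ys + coeff[1] * xs + coeff[2]
    return d - trend
```

After detrending, a single global threshold on the residual differences should separate the objects more cleanly than one applied to the raw map.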

EDIT: It works quite well even with only two tiles: I can view two tiles simultaneously by controlling the convergence angle of my eyes (without losing focus), so that my visual cortex does the correlation and a kind of error detection (non-matching parts). The Lego pieces appear to pop out.

EDIT 2: I tried the same with your second image (the edge map). The correlation works well (locking in to the right convergence angle) and clusters of differences are somewhat marked, but without the color and low-frequency information, no objects pop out.

So edge detection should not be the first step, except perhaps to increase precision when coping with perspective and lens distortion. Concurrent estimation of the pattern period and the distortion field is a problem solved in image stitching. Solving them concurrently may not be necessary for your problem (fixed camera position, fixed focus and zoom).


