
matlab - Normalization of an image

I applied some operations to a grayscale image, and the resulting intensity values now fall below 0, within [0, 255], and above 255. The values inside [0, 255] are fine, but intensity values < 0 and > 255 are a problem, because such values cannot occur in a grayscale image.

Therefore, I need to normalize the values so that everything, whether negative, greater than 255, or anything else, ends up in the range 0 to 255 and the image can be displayed.

For that I know two methods:

Method #1

newImg = ((255-0)/(max(img(:))-min(img(:))))*(img-min(img(:)))

where min(img(:)) and max(img(:)) are the minimum and maximum values obtained after doing some operations on the input image img. The min can be less than 0 and the max can be greater than 255.
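Written out in full, Method #1 would look something like this, I think (assuming img has already been converted to double; as far as I know, mat2gray from the Image Processing Toolbox performs the same min-max scaling, just to [0, 1] instead of [0, 255]):

imgD   = double(img);                       % make sure the arithmetic is done in floating point
minVal = min(imgD(:));
maxVal = max(imgD(:));
newImg = ((255 - 0) / (maxVal - minVal)) * (imgD - minVal);
imshow(uint8(newImg));                      % cast back to uint8 for display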

Method #2

I just set all the values less than 0 to 0 and all the values greater than 255 to 255:

img(img < 0) = 0;       % clamp negative values to black
img(img > 255) = 255;   % clamp values above 255 to white
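As far as I understand, casting to uint8 does the same clipping in a single step, since MATLAB saturates out-of-range values during the conversion (just an observation, not a third method):

imgClipped = uint8(img);                    % values < 0 become 0, values > 255 become 255 (and the rest are rounded)
imshow(imgClipped);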

I tried both methods and I get good results with the second one but not with the first. Can anyone tell me what the problem is?


1 Answer


That totally depends on the image content itself. Both methods are valid ways of ensuring that the values fall within [0, 255]. However, before you decide which method to use, you need to ask yourself the following questions:

Question #1 - What is my image?

The first question to ask is: what does your image represent? If it is the output of an edge detector, for example, the method you choose will depend on the dynamic range of the values in the result (more on this in Question #2). The second method is preferable if there is a good distribution of pixels and a low variance; however, if the dynamic range is a bit smaller, you'll want the first method to push up the contrast of the result.

If the output is an image subtraction, then it's preferable to use the first method because you want to visualize the exact differences between pixels. Truncating the result will not give you a good visualization of the differences.
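For example, a rough sketch for a difference image could look like this (imgA and imgB are placeholder names for two same-sized grayscale images):

difference = double(imgA) - double(imgB);                 % the result can be negative
minD = min(difference(:));
maxD = max(difference(:));
scaled = (difference - minD) * 255 / (maxD - minD);       % Method #1: every relative difference stays visible
imshow(uint8(scaled));                                    % truncating instead would throw away all negative differences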

Question #2 - What's the dynamic range of the values?

Another thing to take note of is how wide the dynamic range between the minimum and maximum values is. For example, if the minimum and maximum are not that far off from the limits of [0, 255], you can use either method and you won't notice much of a difference. However, if your values sit within a small range inside [0, 255], the first method will increase the contrast whereas the second method won't do anything. If your goal is also to increase the contrast of your image and the intensities are within the valid [0, 255] range, then you should use the first method.

However, if the minimum and maximum are quite far from the [0, 255] range, say min = -50 and max = 350, the first method won't serve you well, especially if the grayscale intensities have huge variance. By huge variance I mean that you have some values in the high range, some in the low range, and nothing else. Rescaling with the first method pushes the minimum to 0, shrinks the maximum to 255, and scales every other intensity in between, so the values that used to sit in the valid range end up compressed and visualized as gray.
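To put rough numbers on that min = -50 / max = 350 scenario (a toy sketch with made-up data, not your actual image):

img = 255 * rand(256);                      % pretend this is mostly a valid [0, 255] image
img(1, 1)     = -50;                        % plus a strong negative value from the processing
img(end, end) = 350;                        % and a strong positive one
scaled = (img - min(img(:))) * 255 / (max(img(:)) - min(img(:)));
% an original white pixel of 255 only reaches about (255 + 50) * 255 / 400 ≈ 194,
% and an original black pixel of 0 is lifted to about 32, so everything turns gray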

Question #3 - Do I have a clean or noisy image?

This is something that not many people think about. Is your image very clean, or are there a couple of spurious noisy spots? The first method handles noisy pixels very badly. If only a couple of pixels have very large values while the rest sit within [0, 255], those outliers force all of the other pixels to be rescaled accordingly and thus decrease the contrast of your image. You probably want to ignore the contribution of those pixels, so the second method is preferable.
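Here is a small made-up illustration of that effect (again just a sketch, not your data):

img = double(randi([0 255], 256));          % clean grayscale values
img(100, 100) = 10000;                      % a single spurious hot pixel
rescaled = (img - min(img(:))) * 255 / (max(img(:)) - min(img(:)));
% every legitimate pixel is now squeezed below roughly 7, so the result looks almost black
clipped = min(max(img, 0), 255);            % Method #2 leaves all of the good pixels untouched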

Conclusion

Therefore, there is nothing wrong with either of the methods you describe. You need to be aware of what the image represents, the dynamic range of the values you see in the output, and whether the image is clean or noisy, and then make a smart choice with those factors in mind. In your case, the first method probably didn't work well because you have very large negative and positive values, and perhaps only a few of them at that. Truncating is probably the better choice for your application.

