This title appears in the Scientific Report 2020.
Please use the identifier http://dx.doi.org/10.1109/TIP.2020.2982260 or http://hdl.handle.net/2128/27837 in citations.
Practically Lossless Affine Image Transformation
Personal Name(s): Pflugfelder, Daniel (Corresponding author); Scharr, Hanno
Contributing Institute: Pflanzenwissenschaften; IBG-2
Published in: IEEE Transactions on Image Processing 29 (2020), pp. 5367–5373
Imprint: New York, NY: IEEE, 2020
DOI: 10.1109/TIP.2020.2982260
Document Type: Journal Article
Research Program: Deutsches Pflanzen Phänotypisierungsnetzwerk; Plant Science
Link: Restricted OpenAccess
Publikationsportal JuSER
Abstract: In this contribution we introduce an almost lossless affine 2D image transformation method. To this end we extend the theory of the well-known Chirp-z transform to allow fully affine transformation of general n-dimensional images. In addition, we give a practical spatial and spectral zero-padding approach that dramatically reduces the losses of our transform, whereas usual transforms introduce blurring artifacts due to sub-optimal interpolation. The proposed method improves the mean squared error by approximately a factor of 1800 compared to the commonly used linear interpolation, and by a factor of 250 compared to the best competitor. We derive the transform from basic principles with special attention to implementation details and supplement this paper with Python code for 2D images. In demonstration experiments we show the superior image quality of our method compared to usual approaches. However, runtimes are considerably larger than those of toolbox algorithms.
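The interpolation loss that the abstract refers to can be made visible with a small experiment (a minimal sketch, not the paper's method): rotating an image forward and back with linear interpolation should ideally return the original exactly, but a non-zero round-trip mean squared error remains. The test image, rotation angle, and interior crop used below are illustrative choices.

```python
# Illustrative sketch: quantify the loss that linear interpolation
# introduces in an affine (here: rotation) transform. An exactly
# lossless transform would give a round-trip MSE of 0.
import numpy as np
from scipy.ndimage import rotate

# Smooth synthetic test image (Gaussian blob) to limit boundary/aliasing effects.
y, x = np.mgrid[-64:64, -64:64]
img = np.exp(-(x**2 + y**2) / (2 * 20.0**2))

angle = 17.0  # arbitrary non-trivial rotation angle in degrees
fwd = rotate(img, angle, reshape=False, order=1)    # order=1: linear interpolation
back = rotate(fwd, -angle, reshape=False, order=1)  # rotate back

# Evaluate only an interior region to exclude border artifacts.
c = slice(32, 96)
mse = np.mean((img[c, c] - back[c, c]) ** 2)
print(f"round-trip MSE with linear interpolation: {mse:.2e}")
```

The residual error measured here is exactly the kind of loss the proposed chirp-z-based transform is designed to avoid, at the cost of longer runtimes.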