Non-Linear Hyperspectral Subspace Mapping using Stacked Auto-Encoder

Niclas Niclas
Swedish Defence Research Agency (FOI), Sweden

David Gustafsson
Swedish Defence Research Agency (FOI), Sweden


Published in: The 29th Annual Workshop of the Swedish Artificial Intelligence Society (SAIS), 2–3 June 2016, Malmö, Sweden

Linköping Electronic Conference Proceedings 129:1, p. 10


Published: 2016-06-20

ISBN: 978-91-7685-720-5

ISSN: 1650-3686 (print), 1650-3740 (online)


A Stacked Auto-Encoder (SAE) is a relatively new machine learning approach that uses unlabelled training data to learn a deep, hierarchical feature representation. SAEs can learn a representation that preserves the key information in the data while having a lower dimensionality than the original feature space; the learnt representation is a non-linear transformation that maps the original features into a lower-dimensional space. Hyperspectral data are high-dimensional, yet the information they convey about the scene can be represented in a space of considerably lower dimensionality. In many applications it is therefore crucial to transform hyperspectral data into a lower-dimensional representation that preserves the most important information. We show how unlabelled hyperspectral signatures can be used to train an SAE. The analysis focuses on what type of spectral information is preserved in the hierarchical SAE representation. Results from hyperspectral images of natural scenes with man-made objects placed in them are presented, as well as an example of how SAEs can be used for anomaly detection, i.e. the detection of anomalous spectral signatures.
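The idea in the abstract can be illustrated with a minimal, hypothetical sketch: a single auto-encoder layer (the building block that an SAE stacks) trained on unlabelled synthetic "spectral signatures" that lie near a low-dimensional manifold, with the reconstruction error used to flag anomalous spectra. All data, dimensions, and hyper-parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, epochs=500, lr=0.5):
    """One auto-encoder layer with tied weights, trained by batch gradient
    descent on the squared reconstruction error (an SAE stacks such layers)."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(d)          # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode: non-linear map to low-dim space
        R = H @ W.T + c               # decode: linear reconstruction
        E = R - X                     # reconstruction residual
        dH = (E @ W) * H * (1.0 - H)  # backprop through the encoder
        W -= lr / n * (X.T @ dH + E.T @ H)
        b -= lr / n * dH.sum(axis=0)
        c -= lr / n * E.sum(axis=0)
    return W, b, c

def reconstruction_error(X, W, b, c):
    """Per-signature reconstruction error; large values suggest anomalies."""
    H = sigmoid(X @ W + b)
    return np.linalg.norm(H @ W.T + c - X, axis=1)

# Synthetic background: mixtures of two smooth "endmember" spectra over 20
# bands, i.e. high-dimensional data near a low-dimensional manifold, plus noise.
d, n = 20, 300
t = np.linspace(0.0, 1.0, d)
endmembers = np.stack([0.2 + 0.6 * t, 0.8 - 0.6 * t])
X = rng.dirichlet((1.0, 1.0), size=n) @ endmembers + 0.01 * rng.normal(size=(n, d))

W, b, c = train_autoencoder(X, n_hidden=4)

# An off-manifold "anomalous" signature: a sharp spike in a single band.
anomaly = np.zeros((1, d))
anomaly[0, d // 2] = 1.0

err_bg = reconstruction_error(X, W, b, c)
err_an = reconstruction_error(anomaly, W, b, c)[0]
print(f"background error (mean): {err_bg.mean():.3f}, anomaly error: {err_an:.3f}")
```

A full SAE would train several such layers greedily, each on the previous layer's hidden activations, and optionally fine-tune the whole stack; this single layer is only meant to show the dimensionality-reduction and reconstruction-error mechanisms.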


Keywords: artificial intelligence


