Sound-scapes are useful for understanding the surrounding environment in applications such as security, source tracking and human-computer interaction. However, extracting accurate position or localisation information from sound-scape samples is difficult because the samples comprise many channels of high-dimensional acoustic data. In this paper we demonstrate how to obtain a visual representation of sound-scapes by applying dimensionality reduction techniques to a range of artificially generated sound-scape datasets. Linear and non-linear dimensionality reduction techniques were compared, including principal component analysis (PCA), multi-dimensional scaling (MDS), locally linear embedding (LLE) and isometric feature mapping (ISOMAP). Applying these techniques produced visual representations of the affine positions of the sound source on its sound-scape manifold, clearly displaying the ordering of the angles and intensities of the generated sound-scape samples. Finally, in a simple classification task on the artificial sound data, the successful combination of dimensionality reduction and classifier methods is demonstrated.
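The approach described above can be sketched in code. The following is a minimal illustration, not the paper's actual experimental setup: it assumes a hypothetical artificial sound-scape in which eight microphones on a circle record the intensity of a source swept through a range of angles, and then compares a linear embedding (PCA) with a non-linear one (Isomap) using scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Hypothetical artificial sound-scape: 8 microphones evenly spaced on a
# circle, and a source swept through 100 angles around them. Each sample
# is the vector of intensities received at the microphones.
rng = np.random.default_rng(0)
n_mics, n_samples = 8, 100
mic_angles = np.linspace(0, 2 * np.pi, n_mics, endpoint=False)
src_angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)

# Intensity at each microphone falls off smoothly with the angular
# distance to the source; small noise makes the data slightly realistic.
diff = src_angles[:, None] - mic_angles[None, :]
X = (1 + np.cos(diff)) / 2 + 0.01 * rng.standard_normal((n_samples, n_mics))

# Embed the high-dimensional samples into 2-D with a linear technique
# (PCA) and a non-linear manifold technique (Isomap).
pca_emb = PCA(n_components=2).fit_transform(X)
iso_emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# A circular source trajectory should map to a closed loop in 2-D, so
# the ordering of source angles is visible in the embedding.
```

Plotting `pca_emb` or `iso_emb` coloured by `src_angles` would show the source positions arranged in order along a loop, which is the kind of visual representation of the sound-scape manifold described above.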