Content-based retrieval from remote-sensing data requires linking features that describe signal properties with terms that express image content in a user-friendly language. On the one hand, we need general and powerful features that describe the data properties precisely. These must be signal-oriented and free of any interpretation, so that the system can respond in an unbiased manner to a wide range of queries. On the other hand, we have to provide the user of the retrieval system with a suitable query tool in which needs can be expressed in familiar terms. To this end, we use search terms that originate in the user's application domain, together with a graphical tool in which the user can mark training regions in remote-sensing data. In contrast to existing systems, we explicitly use scale as a query element: the user can indicate the typical size of the objects and structures of interest. Even for novice remote-sensing users, scale is easy to specify for query terms such as "crop field". In graphical queries based on training regions, the scale is either given by the user or determined automatically from the training regions. Automatic scale selection is performed using a multi-scale stochastic model.
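The automatic scale selection described here relies on a multi-scale stochastic model, whose details are not given in this passage. As an illustrative stand-in only, the sketch below estimates a characteristic scale for a training region with a different, well-known technique: maximizing the scale-normalized Laplacian-of-Gaussian response over a grid of scales (Lindeberg-style scale selection). The function name, the sigma grid, and the energy criterion are our own assumptions, not the system's actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def select_scale(region, sigmas=2.0 ** np.arange(0, 5, 0.5)):
    """Estimate a characteristic scale (in pixels) for a training region.

    For each candidate sigma, the region is filtered with a
    scale-normalized Laplacian of Gaussian; the sigma whose mean
    squared response is largest is returned. This is an illustrative
    substitute for the multi-scale stochastic model in the text.
    """
    region = np.asarray(region, dtype=float)
    energies = []
    for s in sigmas:
        # The sigma**2 factor normalizes LoG responses so that
        # they are comparable across scales.
        response = (s ** 2) * gaussian_laplace(region, sigma=s)
        energies.append(np.mean(response ** 2))
    return float(sigmas[int(np.argmax(energies))])
```

Applied to a training region dominated by structures of a certain size (e.g. a crop-field texture), the returned sigma serves as the scale query element when the user does not specify one explicitly.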