Existing depth-based approaches to predicting anthropometric measurements, such as body height, arm span, and hip circumference, either compute the measurements directly on 3D point clouds, and thus are sensitive to noise, or fit a model to the observed depth values, which is typically time-consuming. In this paper, we rely on the intuition that, to predict a specific anthropometric measurement, one does not need detailed information about the entire body shape. We therefore introduce an approach to anthropometry based on a random regression forest trained from local depth cues. The local predictions are then accumulated into a single global, image-level anthropometric measurement prediction. We further introduce a forest refinement scheme whose objective function directly relies on both the image-level prediction and the reliability of the local predictions. The resulting approach is both computationally efficient and accurate.
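The accumulation step described above can be illustrated with a minimal sketch. Note that this is a generic reliability-weighted mean over per-patch forest outputs, assumed for illustration; the paper's actual refinement objective and weighting scheme are not specified here, and the function name and toy values are hypothetical.

```python
import numpy as np

def aggregate_local_predictions(local_preds, reliabilities):
    """Combine per-patch regression outputs into one image-level
    measurement via a reliability-weighted average (illustrative only)."""
    p = np.asarray(local_preds, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    # each local prediction contributes in proportion to its reliability
    return float(np.sum(w * p) / np.sum(w))

# toy example: noisy local body-height estimates (cm), each with a
# hypothetical reliability weight produced alongside the prediction
preds = [172.0, 168.5, 175.2, 171.3]
weights = [0.9, 0.4, 0.2, 0.8]
height = aggregate_local_predictions(preds, weights)
```

In such a scheme, unreliable local cues (e.g., patches on occluded or noisy depth regions) are down-weighted rather than discarded, so the global estimate degrades gracefully with depth noise.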