Conventional speech recognition systems are based on hidden Markov models with Gaussian mixture emission distributions (GHMMs). Discriminative log-linear models are an alternative modeling approach and have been investigated recently in speech recognition. GHMMs are directed models with constraints, e.g. positivity of variances and normalization of conditional probabilities, while log-linear models do not use such constraints. This paper compares the posterior form of typical generative models related to speech recognition with their log-linear model counterparts. The key result is the derivation of the equivalence of these two different approaches under weak assumptions. In particular, we study Gaussian mixture models, part-of-speech bigram tagging models, and finally GHMMs. This result unifies two important but fundamentally different modeling paradigms in speech recognition on the functional level. Furthermore, this paper presents comparative experimental results for various speech tasks of different complexity, including digit string and large vocabulary continuous speech recognition tasks.
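The flavor of the claimed equivalence can be illustrated in the simplest setting: for Gaussian class-conditional models with a shared covariance, the class posterior obtained via Bayes' rule coincides exactly with a log-linear (softmax) model whose parameters are derived from the Gaussian means, covariance, and priors. The following sketch checks this numerically; the specific means, covariance, and priors are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the Gaussian <-> log-linear posterior equivalence
# for two classes with a shared covariance (parameters chosen arbitrarily).
np.random.seed(0)
d = 3
mu = [np.random.randn(d), np.random.randn(d)]  # class means
Sigma = 0.5 * np.eye(d)                        # shared covariance
prior = np.array([0.3, 0.7])                   # class priors
Sinv = np.linalg.inv(Sigma)

def gaussian_posterior(x):
    # Bayes' rule with Gaussian class-conditional likelihoods;
    # class-independent normalization constants cancel in the posterior.
    logs = np.array([
        np.log(prior[c]) - 0.5 * (x - mu[c]) @ Sinv @ (x - mu[c])
        for c in range(2)
    ])
    e = np.exp(logs - logs.max())
    return e / e.sum()

def loglinear_posterior(x):
    # Softmax over linear scores with parameters induced by the Gaussians:
    # lambda_c = Sigma^{-1} mu_c,
    # alpha_c  = -1/2 mu_c^T Sigma^{-1} mu_c + log p(c).
    # The quadratic term -1/2 x^T Sigma^{-1} x is class-independent
    # (shared covariance) and cancels, leaving a purely log-linear form.
    scores = np.array([
        (Sinv @ mu[c]) @ x - 0.5 * mu[c] @ Sinv @ mu[c] + np.log(prior[c])
        for c in range(2)
    ])
    e = np.exp(scores - scores.max())
    return e / e.sum()

x = np.random.randn(d)
assert np.allclose(gaussian_posterior(x), loglinear_posterior(x))
```

With mixture models or HMMs the same cancellation argument requires more care (hence the paper's "weak assumptions"), but the shared-covariance Gaussian case already shows why the unconstrained log-linear parameterization loses no expressive power at the posterior level.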