A generative algorithm models how the data was generated: it estimates the distribution of the underlying process that produced the data. If x is the input and y the output, it tries to learn the joint probability distribution p(x, y). For example, in natural language processing, probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA) are generative models. They assume that each document contains a mixture of topics and that each word can be attributed to one or more of those topics.
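
A minimal sketch of the generative side, assuming scikit-learn and a made-up toy corpus (the documents and the choice of 2 topics are illustrative): LDA learns per-document topic mixtures and per-topic word distributions, i.e. the generative story for the observed word counts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus (illustrative only)
docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stocks fell as markets closed",
    "investors bought shares in the market",
]

# Bag-of-words counts: LDA models how these counts were generated
counts = CountVectorizer().fit_transform(docs)

# Assume 2 latent topics; LDA fits topic-word and document-topic distributions
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each row is a document's estimated mixture over the 2 topics
print(lda.transform(counts))
```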

A discriminative classifier builds its model by looking only at the observed data. It makes fewer assumptions and limits itself to learning from the observed features to predict the output. If x is the input and y the output, it tries to learn the conditional probability distribution p(y|x), the probability of y given x. Logistic regression is an example.
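
A minimal sketch of the discriminative side, again with scikit-learn and made-up toy data: logistic regression models p(y|x) directly, without modeling how x itself was generated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy 1-D inputs x and binary labels y (illustrative only)
X = np.array([[0.2], [0.9], [1.8], [2.5], [3.1], [3.9]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)

# predict_proba returns the estimated conditional distribution p(y | x)
print(clf.predict_proba([[2.0]]))
```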

Discriminative models generally outperform generative models on classification tasks when enough labeled training data is available, since they model the decision boundary directly instead of the full data distribution.
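
A rough way to see the contrast on synthetic data (the dataset and scores are illustrative, not a benchmark claim): Gaussian naive Bayes is a generative classifier that models p(x, y), while logistic regression is discriminative and models p(y|x).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Synthetic classification data (illustrative only)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit a generative and a discriminative classifier on the same data
for model in (GaussianNB(), LogisticRegression(max_iter=1000)):
    score = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, score)
```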

References to some discriminative topic models in NLP: