Paper Title
Deriving discriminative classifiers from generative models
Paper Authors
Paper Abstract
We deal with Bayesian generative and discriminative classifiers. Given a model distribution $p(x, y)$, with observation $y$ and target $x$, one computes generative classifiers by first considering $p(x, y)$ and then using Bayes' rule to calculate $p(x | y)$. A discriminative model is given directly by $p(x | y)$, which is used to compute discriminative classifiers. However, recent works have shown that the Bayesian Maximum Posterior classifier defined from Naive Bayes (NB) or Hidden Markov Chain (HMC) models, both generative, can also match the definition of a discriminative classifier. Thus, there are situations in which dividing classifiers into "generative" and "discriminative" is somewhat misleading. Indeed, the distinction relates to the way classifiers are computed, not to the classifiers themselves. We present a general theoretical result specifying how a generative classifier induced from a generative model can also be computed in a discriminative way from the same model. The NB and HMC examples are recovered as particular cases, and we apply the general result to two original extensions of NB and two extensions of HMC, one of which is original. Finally, we briefly illustrate the interest of the new discriminative way of computing classifiers in the Natural Language Processing (NLP) framework.
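As a brief illustrative note (not part of the original abstract), the Bayes rule step relating the two viewpoints in the abstract's notation can be written as

$$
p(x | y) = \frac{p(x, y)}{\sum_{x'} p(x', y)} = \frac{p(x)\, p(y | x)}{\sum_{x'} p(x')\, p(y | x')},
$$

so a generative classifier first models the joint distribution $p(x, y)$ (for instance $p(x)\prod_{i} p(y_i | x)$ under the Naive Bayes factorization, assuming $y = (y_1, \dots, y_n)$) and obtains $p(x | y)$ by normalization over $x$, whereas a discriminative model parameterizes $p(x | y)$ directly.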