From here to infinity: sparse finite versus Dirichlet process mixtures in model-based clustering.

Sylvia Frühwirth-Schnatter, Gertraud Malsiner-Walli
Author Information
  1. Sylvia Frühwirth-Schnatter: Institute for Statistics and Mathematics, Vienna University of Economics and Business (WU), Welthandelsplatz 1, 1020 Vienna, Austria.
  2. Gertraud Malsiner-Walli: Institute for Statistics and Mathematics, Vienna University of Economics and Business (WU), Welthandelsplatz 1, 1020 Vienna, Austria.

Abstract

In model-based clustering, mixture models are used to group data points into clusters. A useful concept, introduced for Gaussian mixtures by Malsiner Walli et al. (Stat Comput 26:303-324, 2016), is that of sparse finite mixtures, where the prior distribution on the weight distribution of a mixture with K components is chosen in such a way that, a priori, the number of clusters in the data is random and is allowed to be smaller than K with high probability. The number of clusters is then inferred a posteriori from the data. The present paper makes the following contributions in the context of sparse finite mixture modelling. First, it is illustrated that the concept of sparse finite mixtures is very generic and easily extended to cluster various types of non-Gaussian data, in particular discrete data and continuous multivariate data arising from non-Gaussian clusters. Second, sparse finite mixtures are compared to Dirichlet process mixtures with respect to their ability to identify the number of clusters. For both model classes, a random hyper prior is considered for the parameters determining the weight distribution. By suitably matching these priors, it is shown that the choice of this hyper prior is far more influential on the cluster solution than whether a sparse finite mixture or a Dirichlet process mixture is taken into consideration.
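To make the weight-prior idea concrete, the following is a minimal simulation sketch, not taken from the paper: the function names and the settings K = 10, alpha = 1, n = 100 are illustrative assumptions, and the hyper prior on the weight parameters is left out (they are fixed here for simplicity). It draws the prior number of non-empty clusters among n observations, once for a finite mixture with a symmetric Dirichlet(e0) prior on the weights and once for a Dirichlet process mixture via the Chinese restaurant process. Setting e0 = alpha/K matches the two constructions, since a finite mixture with Dirichlet(alpha/K) weights approaches a DP(alpha) mixture as K grows.

    import numpy as np

    rng = np.random.default_rng(0)

    def prior_clusters_finite(K, e0, n, draws=2000):
        """Prior number of non-empty components in a finite mixture
        with symmetric Dirichlet(e0) weights, by simulation."""
        counts = np.empty(draws, dtype=int)
        for d in range(draws):
            w = rng.dirichlet(np.full(K, e0))   # mixture weights
            z = rng.choice(K, size=n, p=w)      # component labels for n points
            counts[d] = np.unique(z).size       # occupied components = clusters
        return counts

    def prior_clusters_dp(alpha, n, draws=2000):
        """Prior number of clusters under a DP(alpha) mixture,
        simulated via the Chinese restaurant process."""
        counts = np.empty(draws, dtype=int)
        for d in range(draws):
            sizes = []                          # current cluster sizes
            for _ in range(n):
                p = np.array(sizes + [alpha], dtype=float)
                j = rng.choice(p.size, p=p / p.sum())
                if j == len(sizes):
                    sizes.append(1)             # open a new cluster
                else:
                    sizes[j] += 1
            counts[d] = len(sizes)
        return counts

    K, alpha, n = 10, 1.0, 100                  # illustrative settings
    finite = prior_clusters_finite(K, e0=alpha / K, n=n)
    dp = prior_clusters_dp(alpha, n=n)
    print("E[#clusters], Dir(alpha/K) finite mixture:", finite.mean())
    print("E[#clusters], DP(alpha) mixture (CRP):    ", dp.mean())

With a small value such as e0 = alpha/K = 0.1, most prior mass falls on far fewer than K occupied components, which is the sparse behaviour described in the abstract, and the matched finite and DP simulations yield similar prior cluster counts.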

Keywords

References

  1. Bioinformatics. 2004 May 22;20(8):1222-32 [PMID: 14871871]
  2. Neuroepidemiology. 2005;25(4):163-75 [PMID: 16103727]
  3. Biostatistics. 2010 Apr;11(2):317-36 [PMID: 20110247]
  4. Bayesian Anal. 2013;8(2) [PMID: 24368932]
  5. PLoS One. 2015 Jul 15;10(7):e0131739 [PMID: 26177375]
  6. Stat Comput. 2016;26:303-324 [PMID: 26900266]
  7. J Comput Graph Stat. 2017 Apr 3;26(2):285-295 [PMID: 28626349]
  8. J Am Stat Assoc. 2018;113(521):340-356 [PMID: 29983475]
