Schedule - Westin: Emerald A
07:30-08:10 A Brief Overview of Nonparametric Bayesian Models
Zoubin Ghahramani
08:10-08:30 Conjugate Projective Limits
Peter Orbanz
08:30-09:00 Convergence of posterior distributions in infinite dimension - a decade of success stories
Subhashis Ghosal
09:00-09:40 Posters & Coffee
09:40-10:00 Approximation of conditional densities by smooth mixtures of regressions
Andriy Norets
10:00-10:30 Discussion
15:30-16:00 Nonparametric Bayesian Models of Human Cognition
Tom Griffiths
16:00-16:30 Practical Aspects of Bayesian Nonparametrics
Alejandro Jara
16:30-17:00 On the role of sequential Monte Carlo algorithms for complex nonparametric mixture models
Abel Rodriguez
17:00-17:20 Posters & Coffee
17:20-17:50 Modeling Dependent Distributions with Gaussian Processes
Surya Tokdar
17:50-18:30 Discussion and wrap-up

Talks

A Brief Overview of Nonparametric Bayesian Models
Zoubin Ghahramani, University of Cambridge
The flexibility of nonparametric Bayesian (NPB) methods for data modelling has generated an explosion of interest over the last decade in both the Statistics and Machine Learning communities. I will give an overview of some of the main NPB models and focus on the relationships between them. I plan to give a whirlwind tour of the Gaussian process, Dirichlet process (DP) and Beta process, the associated Chinese restaurant and Indian buffet processes, time series models such as the infinite HMM (sometimes called the HDP-HMM), hierarchical models such as Kingman’s coalescent and the Dirichlet diffusion tree, dependent models such as the dependent Dirichlet process, and, time permitting, other topics such as completely random measures and stick-breaking constructions.
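
As a concrete illustration of the last of these constructions, here is a minimal Python sketch of a truncated stick-breaking draw of Dirichlet process weights; the concentration parameter and truncation level are arbitrary choices for illustration, not anything prescribed by the talk.

    import numpy as np

    def stick_breaking(alpha, num_weights, rng):
        # Truncated draw of DP mixing weights w_k = beta_k * prod_{j<k} (1 - beta_j)
        betas = rng.beta(1.0, alpha, size=num_weights)  # Beta(1, alpha) stick fractions
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
        return betas * remaining

    rng = np.random.default_rng(0)
    weights = stick_breaking(alpha=2.0, num_weights=50, rng=rng)
    print(weights[:5], weights.sum())  # total is just under 1 at this truncation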

Conjugate Projective Limits
Peter Orbanz, University of Cambridge
Bayesian nonparametric models can be regarded as Bayesian models on infinite-dimensional spaces. These infinite-dimensional distributions can be constructed from finite-dimensional ones using the tools of stochastic process theory; an example is the Gaussian process, constructed from finite-dimensional Gaussian distributions. My talk will address the question of which finite-dimensional distributions are suitable for constructing nonparametric Bayesian models with useful statistical properties. By a proper choice of the finite-dimensional models used in the construction, the nonparametric Bayesian model can be guaranteed to be conjugate and to have a sufficient statistic. I will briefly discuss for which models these constructions follow a generic recipe, and in which cases we have to expect mathematical complications.
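
To make the construction concrete, here is a minimal Python sketch (my illustration of the general idea, not the speaker's construction) showing how consistent finite-dimensional Gaussian distributions, generated here by an arbitrary squared-exponential covariance, yield draws from a Gaussian process at any finite set of index points.

    import numpy as np

    def se_cov(x1, x2, lengthscale=1.0, variance=1.0):
        # Squared-exponential covariance k(x, x')
        d = x1[:, None] - x2[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    # Any finite index set receives a consistent multivariate normal marginal;
    # the projective limit of these marginals is the Gaussian process itself.
    x = np.linspace(0.0, 5.0, 200)
    K = se_cov(x, x) + 1e-10 * np.eye(len(x))  # jitter for numerical stability
    rng = np.random.default_rng(1)
    f = rng.multivariate_normal(np.zeros(len(x)), K)  # one GP draw, evaluated on x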

Convergence of posterior distributions in infinite dimension - a decade of success stories
Subhashis Ghosal, North Carolina State University
It was long realized that for parametric inference problems, posterior distributions based on a large class of reasonable prior distributions possess very desirable large sample convergence properties, even if viewed from purely frequentist angles. For nonparametric or semiparametric problems, the story gets complicated, but still good frequentist convergence properties are enjoyed by Bayesian methods if a prior distribution is carefully constructed. The last ten years have witnessed the most significant progress in the study of consistency, convergence rates and finer frequentist properties. It is now well understood that the properties are controlled by the concentration of prior mass near the true value, as well as the effective size of the model, measured in terms of the metric entropy. Results have poured in for independent and identically distributed data, independent and non-identically distributed data and dependent data, as well as for a wide spectrum of inference problems such as density estimation, nonparametric regression, classification, and so on. Nonparametric mixtures, random series and Gaussian processes play particularly significant roles in the construction of the “right” priors. In this talk, we try to outline the most significant developments that took place in the last decade. In particular, we emphasize the ability of the posterior distribution to effortlessly choose the right model and adapt to the unknown level of smoothness.
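
In schematic form (standard notation from this literature, not reproduced from the talk), a posterior contraction rate \varepsilon_n is obtained when the prior \Pi charges a Kullback-Leibler neighborhood of the true density p_0 and the model carries a sieve \Theta_n of controlled metric entropy:

    \Pi\big( B_{KL}(p_0, \varepsilon_n) \big) \ge e^{-c\, n \varepsilon_n^2},
    \qquad
    \log N(\varepsilon_n, \Theta_n, d) \le n \varepsilon_n^2,

where N(\varepsilon, \Theta_n, d) is the \varepsilon-covering number of \Theta_n under a suitable metric d. The first condition formalizes "prior mass near the true value" and the second the "effective size of the model" described above.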

Approximation of conditional densities by smooth mixtures of regressions
Andriy Norets, Princeton University
This paper shows that large nonparametric classes of conditional multivariate densities can be approximated in the Kullback-Leibler distance by different specifications of finite mixtures of normal regressions, in which the normal means, variances, and mixing probabilities can depend on variables in the conditioning set (covariates). These models are a special case of what the statistics and computer science literatures call mixtures of experts. Flexible specifications include models in which only the mixing probabilities, modeled by a multinomial logit, depend on the covariates and, in the univariate case, models in which only the means of the mixed normals depend flexibly on the covariates. Modeling the variances of the mixed normals by flexible functions of the covariates can weaken restrictions on the class of approximable densities. The results can be generalized to mixtures of general location-scale densities. Rates of convergence and easy-to-interpret bounds are also obtained for different model specifications. These approximation results are useful for proving the consistency of Bayesian and maximum likelihood density estimators based on these models, and they also have interesting implications for applied researchers.
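
For concreteness, a minimal univariate instance of such a specification (my notation, not necessarily the paper's) with covariate-dependent logit mixing weights is

    p(y \mid x) = \sum_{j=1}^{m} \pi_j(x)\, \mathcal{N}\big( y ;\ \mu_j(x),\ \sigma_j^2(x) \big),
    \qquad
    \pi_j(x) = \frac{\exp(x^\top \gamma_j)}{\sum_{k=1}^{m} \exp(x^\top \gamma_k)},

and the approximation results concern how well families of this form can track a true conditional density as the number of components m grows.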

Nonparametric Bayesian Models of Human Cognition
Tom Griffiths, University of California, Berkeley
Human learners are capable of adapting their representations of the properties of objects in response to statistical information. For example, we can form clusters based on visual information, and decide which features of objects are important based on the other objects to which we compare them. Nonparametric Bayesian models offer a rational account of such representational flexibility, indicating how an ideal learner would interpret the relevant statistical information. In particular, by allowing hypothesis spaces of unbounded complexity, nonparametric Bayesian models potentially provide a more satisfying account of the rich representations entertained by human learners. I will summarize the results of recent studies examining how ideas from nonparametric Bayesian statistics lead to models of human cognition, and discuss some of the challenges that thinking about human learning poses for this approach.

Practical Aspects of Bayesian Nonparametrics
Alejandro Jara, Universidad de Concepción, Chile
In this talk I will discuss practical aspects of the implementation of Bayesian semi- and nonparametric models. The emphasis will be on three different aspects: (A) the most popular Bayesian nonparametric models, (B) the role of parameter identification in Bayesian semiparametric model building, and (C) computational issues associated with Bayesian nonparametric inference. In (A), the most popular Bayesian methods for function estimation are reviewed. In (B), I’ll discuss the limitations of statistical inference in Bayesian semiparametric models. Specifically, I’ll discuss the role of parameter identifiability in the model specification and show that, although a lack of identification presents no difficulties for a Bayesian analysis, in the sense that a prior is transformed into a posterior using the sampling model and the probability calculus, such formal assurances have little practical value when interest focuses on an unidentified parameter. From a computational point of view, identification problems imply ridges in the posterior distribution, and MCMC methods can be difficult to implement in these situations. Finally, since the main obstacle to the practical use of Bayesian nonparametric methods has been the lack of easy-to-use estimation tools, I will introduce a simple yet comprehensive set of programs for the implementation of Bayesian non- and semiparametric models in R, DPpackage. I will discuss the general syntax and design philosophy of DPpackage, describe the currently available functions, and illustrate its main features and usage with simulated and real data analyses.

On the role of sequential Monte Carlo algorithms for complex nonparametric mixture models
Abel Rodriguez, University of California, Santa Cruz
This talk will explore the role that sequential Monte Carlo (SMC) algorithms can play in learning complex Bayesian nonparametric mixture models. In particular, the talk revolves around four themes: 1) models that are sequential in nature (e.g., the infinite hidden Markov model), 2) models that are not sequential in nature but where more standard Monte Carlo algorithms can be difficult to implement (e.g., the nested Dirichlet process and some of its extensions), 3) problems where model comparison is a key inference issue, and 4) problems with large sample sizes where parallelization (and particularly graphics processing units, GPUs) can provide dramatic speed-ups.
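
To give a flavor of such algorithms, the following is a minimal, self-contained Python sketch of sequential importance resampling for a Dirichlet process mixture of Gaussians with known observation variance; it is a generic textbook-style construction for illustration, not the specific algorithms of the talk, and all parameter values are arbitrary.

    import numpy as np

    def smc_dp_mixture(y, alpha=1.0, mu0=0.0, tau2=10.0, sigma2=1.0,
                       num_particles=100, seed=0):
        # Each particle carries per-cluster counts n_k and sums s_k of observations.
        rng = np.random.default_rng(seed)
        particles = [{"n": [], "s": []} for _ in range(num_particles)]
        logw = np.zeros(num_particles)
        for t, yt in enumerate(y):
            for i, p in enumerate(particles):
                n, s = np.array(p["n"]), np.array(p["s"])
                crp = np.append(n, alpha) / (t + alpha)        # CRP prior: old tables + new
                post_var = 1.0 / (1.0 / tau2 + n / sigma2)     # conjugate normal updates
                post_mean = post_var * (mu0 / tau2 + s / sigma2)
                means = np.append(post_mean, mu0)
                variances = np.append(post_var, tau2) + sigma2 # posterior predictive variances
                like = np.exp(-0.5 * (yt - means) ** 2 / variances) \
                       / np.sqrt(2 * np.pi * variances)
                probs = crp * like
                logw[i] += np.log(probs.sum())                 # weight by p(y_t | particle)
                k = rng.choice(len(probs), p=probs / probs.sum())
                if k == len(p["n"]):                           # seat y_t at a new table
                    p["n"].append(0); p["s"].append(0.0)
                p["n"][k] += 1; p["s"][k] += yt
            w = np.exp(logw - logw.max()); w /= w.sum()
            if 1.0 / np.sum(w ** 2) < num_particles / 2:       # resample on low ESS
                idx = rng.choice(num_particles, size=num_particles, p=w)
                particles = [{"n": list(particles[j]["n"]),
                              "s": list(particles[j]["s"])} for j in idx]
                logw[:] = 0.0
        return particles, logw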

Modeling Dependent Distributions with Gaussian Processes
Surya Tokdar, Duke University
I will talk about the use of Gaussian processes (GPs) to model a family of dependent distributions in a nonparametric, non-Gaussian setting. Examples include density regression, spatial GLMs, multi-site discrete-valued time series, etc. All these models can be induced by a Gaussian process on the product space of the variable of interest (or a latent version of it) and the variable that indexes family membership (covariates, site locations, etc.). Dependence among the distributions is easily encoded through the covariance function of this Gaussian process. I’ll briefly highlight nice theoretical properties of such processes and then discuss in detail issues with model fitting, particularly with MCMC exploration of the resulting posterior. I’ll start with the well-known big-N problem of Gaussian processes and the Predictive Process (PP) approach to it. Then I will focus on the special needs of the product-space construction and how to adapt the PP to handle them. Next I’ll stress the often-neglected issue of mixing of the GP covariance parameters, which is notoriously poor when the underlying GP function cannot be integrated out (as it can be in regression or spatial models with Gaussian errors), and elaborate on a useful strategy to overcome this. I’ll end with some further thoughts on GPs and PPs for complex, high-dimensional models and functional data analysis.
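
For readers unfamiliar with the Predictive Process, the core low-rank idea can be sketched in a few lines of Python (a generic illustration with an arbitrary squared-exponential kernel and knot choice, not the talk's adaptation of it): the GP is replaced by its conditional expectation given its values at m knots, reducing the O(N^3) cost of working with an N-point covariance to O(N m^2).

    import numpy as np

    def se_cov(a, b, lengthscale=1.0, variance=1.0):
        # Squared-exponential covariance k(x, x')
        d = a[:, None] - b[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    def predictive_process_cov(x, knots, jitter=1e-8):
        # Low-rank covariance K_nm K_mm^{-1} K_mn of the predictive process
        # f~(x) = E[ f(x) | f(knots) ].
        Kmm = se_cov(knots, knots) + jitter * np.eye(len(knots))
        Knm = se_cov(x, knots)
        return Knm @ np.linalg.solve(Kmm, Knm.T)

    x = np.linspace(0.0, 10.0, 2000)      # "big N" index set
    knots = np.linspace(0.0, 10.0, 30)    # m << N knots
    C = predictive_process_cov(x, knots)  # rank-m surrogate for the N x N covariance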

Posters

Building Graph Structures from the Beta Process
Noel Welsh, Jeremy Wyatt

Collapsed Variational Inference for Time-varying Dirichlet Process Mixture Models
Amr Ahmed and Eric Xing

Conditional Simultaneous Draws from Hierarchical Chinese Restaurant Processes
Takaki Makino, Shunsuke Takei, Daichi Mochihashi, Issei Sato, Toshihisa Takagi

Cross-categorization: A Method for Discovering Multiple Overlapping Clusterings
Vikash Mansinghka, Eric Jonas, Cap Petschulat, Beau Cronin, Patrick Shafto, Joshua Tenenbaum

Fast Search for Infinite Latent Feature Models
Piyush Rai and Hal Daume III

Metric Entropy and Gaussian Bandits
Steffen Grünewälder, Jean-Yves Audibert, Manfred Opper, John Shawe-Taylor

Modeling Associations among Multivariate Longitudinal Categorical Variables in Survey Data: a Semiparametric Bayesian Approach
Sylvie Tchumtchoua and Dipak K. Dey

Nonparametric Bayesian Co-Clustering Ensembles
Pu Wang, Carlotta Domeniconi, Kathryn B. Laskey

Nonparametric Bayesian Local Partition Model for Multi-task Reinforcement Learning in POMDPs
Chenghui Cai, Xuejun Liao, and Lawrence Carin

Power-Law Unbounded Markov Prediction
Jan Gasthaus, Frank Wood, Yee Whye Teh

Predictive computable iff posterior computable
Cameron E. Freer and Daniel M. Roy

System Identification of Gaussian Process Dynamic Systems
Ryan Turner, Marc Peter Deisenroth, Carl Edward Rasmussen

Transfer Learning in Human Categorization
Kevin R. Canini and Thomas L. Griffiths

Tree-Structured Stick Breaking Processes for Hierarchical Modeling
Ryan Prescott Adams, Zoubin Ghahramani and Michael I. Jordan
