Matthew Galdo and Giwon Bahg have developed a new algorithm for Bayesian inference that uses Differential Evolution (DE) to drive the optimization underlying Variational Bayesian posterior estimation. Unlike typical Automatic Differentiation approaches that rely on stochastic gradient descent, DE approximates the gradient through finite differences among particles in the system, giving the new DEVI algorithm an advantage on the non-standard optimization problems often found in psychology.
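To give a flavor of the idea, here is a minimal sketch (not the authors' implementation, and all names and settings below are illustrative assumptions): a population of particles, each holding the parameters of a Gaussian variational approximation, is evolved with a standard DE mutation rule to maximize a Monte Carlo estimate of the ELBO, so no gradients of the model are ever computed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: unnormalized log posterior, here a N(2, 1) density.
def log_post(theta):
    return -0.5 * (theta - 2.0) ** 2

# Fixed base samples (common random numbers) so the ELBO estimate is
# deterministic in the variational parameters, which keeps DE comparisons fair.
eps = rng.standard_normal(400)

# Monte Carlo ELBO for q = N(mu, sigma^2), parameterized as (mu, log_sigma).
# ELBO = E_q[log p(theta)] + entropy(q), with entropy(q) = 0.5*log(2*pi*e) + log_sigma.
def elbo(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    theta = mu + sigma * eps
    return log_post(theta).mean() + 0.5 * np.log(2 * np.pi * np.e) + log_sigma

# DE over a population of variational-parameter particles.
pop = rng.normal(0.0, 1.0, size=(20, 2))
gamma = 0.5  # DE mutation scale
for _ in range(300):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), size=3, replace=False)]
        candidate = a + gamma * (b - c)          # DE/rand/1 mutation
        if elbo(candidate) > elbo(pop[i]):       # greedy selection
            pop[i] = candidate

best = pop[np.argmax([elbo(p) for p in pop])]
# best[0] should approach the posterior mean (2.0); exp(best[1]) the sd (1.0).
```

Because the objective is evaluated by sampling and compared across particles, the same loop applies even when the model's likelihood is non-differentiable, which is the kind of setting where gradient-based variational methods struggle.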

Check it out!

Galdo, M., Bahg, G., &amp; Turner, B. M. (in press). Variational Bayesian methods for cognitive science. Psychological Methods.
