Tasks & Models
---

- Rescorla-Wagner (Delta) Model
- Rescorla-Wagner (Gamma) Model
- Rescorla-Wagner (Delta) Model
- Kalman Filter
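The Rescorla-Wagner (delta) models above share one core update: nudge the chosen option's value toward the received outcome by a learning rate, then choose via a softmax. A minimal sketch, assuming a two-option task with learning rate `A` and inverse temperature `tau` (illustrative names, not the package's parameterization):

```python
import math

def softmax_2(q, tau):
    # Probability of choosing option 0, given values q and inverse temperature tau.
    e0 = math.exp(tau * q[0])
    e1 = math.exp(tau * q[1])
    return e0 / (e0 + e1)

def rw_delta_update(q, choice, reward, A):
    # Delta rule: move the chosen option's value toward the outcome by learning rate A.
    pe = reward - q[choice]   # prediction error
    q[choice] += A * pe
    return q

q = rw_delta_update([0.0, 0.0], choice=0, reward=1.0, A=0.5)
# q is now [0.5, 0.0]: only the chosen option's value moved.
```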
- 3-Parameter Model, without C (choice perseveration), R (reward sensitivity), or P (punishment sensitivity), but with xi (noise)
- 4-Parameter Model, without C (choice perseveration)
- 5-Parameter Model, without C (choice perseveration) but with xi (noise)
- 5-Parameter Model, without C (choice perseveration) but with xi (noise), and with an added decay rate (Niv et al., 2015, J. Neurosci.)
- 4-Parameter Model, without C (choice perseveration) but with xi (noise), using a single learning rate for both reward and punishment
- 3-Parameter Model, without C (choice perseveration), R (reward sensitivity), or P (punishment sensitivity), but with xi (noise)
- 4-Parameter Model, without C (choice perseveration)
- Rescorla-Wagner (Delta) Model
- Kalman Filter
- 5-Parameter Model, without C (choice perseveration) but with xi (noise)
- 5-Parameter Model, without C (choice perseveration) but with xi (noise), and with an added decay rate (Niv et al., 2015, J. Neurosci.)
- 4-Parameter Model, without C (choice perseveration) but with xi (noise), using a single learning rate for both reward and punishment
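Unlike the delta-rule models, the Kalman filter bandit models track a posterior mean *and* variance for each arm. A rough sketch of the update, following Daw et al. (2006); the parameter names (`obs_var`, `lam`, `theta`, `diff_var`) are assumptions for illustration, not the package's:

```python
def kalman_bandit_update(mu, var, choice, reward, obs_var, lam, theta, diff_var):
    # Kalman gain: how much the chosen arm's mean moves toward the observation.
    k = var[choice] / (var[choice] + obs_var)
    mu[choice] += k * (reward - mu[choice])
    var[choice] *= (1.0 - k)
    # All arms then decay toward theta with rate lam, and uncertainty grows
    # by the diffusion variance.
    for i in range(len(mu)):
        mu[i] = lam * mu[i] + (1.0 - lam) * theta
        var[i] = lam ** 2 * var[i] + diff_var
    return mu, var
```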
- Exponential-Weight Mean-Variance Model
- Re-parameterized version of the BART model, with 4 parameters
- Drift Diffusion Model
- Drift Diffusion Model
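The drift diffusion models treat a two-choice decision as noisy evidence accumulating toward one of two boundaries; the reaction time is the crossing time plus a non-decision time. A simple Euler-Maruyama simulation sketch (parameter names are illustrative):

```python
import random

def simulate_ddm(drift, boundary, start, ndt, dt=0.001, noise=1.0, seed=0):
    # Evidence starts at start * boundary and drifts until it crosses
    # 0 (response 0) or boundary (response 1).
    rng = random.Random(seed)
    x = start * boundary
    t = 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    response = 1 if x >= boundary else 0
    return response, ndt + t   # reaction time includes non-decision time
```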
- Cumulative Model
- Exponential Subjective Value Model
- Linear Subjective Value Model
- Probability Weight Function
- Constant-Sensitivity (CS) Model
- Constant-Sensitivity (CS) Model
- Exponential Model
- Hyperbolic Model
- Hyperbolic Model
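The delay discounting models above differ only in the shape of the discount function applied to a delayed amount. A sketch of the three shapes, assuming the constant-sensitivity form of Ebert & Prelec (2007), which recovers the exponential model when its sensitivity parameter is 1:

```python
import math

def exponential_value(amount, delay, r):
    # Exponential model: value decays as exp(-r * delay).
    return amount * math.exp(-r * delay)

def hyperbolic_value(amount, delay, k):
    # Hyperbolic model: value decays as 1 / (1 + k * delay).
    return amount / (1.0 + k * delay)

def cs_value(amount, delay, r, s):
    # Constant-sensitivity model: exp(-(r * delay) ** s); s = 1 is exponential.
    return amount * math.exp(-((r * delay) ** s))
```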
- RW + noise
- RW + noise + bias
- RW + noise + bias + pi
- RW (rew/pun) + noise + bias + pi
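In these Go/No-Go models, the listed components stack onto a Rescorla-Wagner core: a go bias `b`, a Pavlovian term `pi` scaling the state value, and an irreducible noise `xi`. A sketch of how the go probability might be assembled, after Guitart-Masip et al. (2012) — a simplified reading, not the package's exact code:

```python
import math

def p_go(q_go, q_nogo, b, pi_, v_state, xi):
    # The "go" action weight adds a go bias b and a Pavlovian term pi * V(s).
    w_go = q_go + b + pi_ * v_state
    w_nogo = q_nogo
    p = 1.0 / (1.0 + math.exp(-(w_go - w_nogo)))
    # Noise xi mixes the value-based choice with uniform responding.
    return (1.0 - xi) * p + xi / 2.0
```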
- Outcome-Representation Learning Model
- Prospect Valence Learning (PVL) Decay-RI
- Prospect Valence Learning (PVL) Delta
- Value-Plus-Perseverance
- Other-Conferred Utility (OCU) Model
- Experience-Weighted Attraction Model
- Fictitious Update Model
- Fictitious Update Model
- Fictitious Update Model, with separate learning rates for positive and negative prediction errors (PEs)
- Fictitious Update Model, with separate learning rates for positive and negative prediction errors (PEs), without alpha (indecision point)
- Fictitious Update Model, without alpha (indecision point)
- Reward-Punishment Model
- Reward-Punishment Model
- Q-Learning Model
- Gain-Loss Q-Learning Model
- Drift Diffusion Model
- Reinforcement Learning Drift Diffusion Model 1
- Reinforcement Learning Drift Diffusion Model 6
- Prospect Theory, without the loss aversion (LA) parameter
- Prospect Theory, without the risk aversion (RA) parameter
- Prospect Theory
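The prospect theory models evaluate a mixed gamble with a risk attitude (curvature) parameter and a loss aversion parameter; the two reduced variants each fix one of them. A sketch of the utility of a 50/50 gain/loss gamble, after Sokol-Hessner et al. (2009) — an illustrative parameterization, not necessarily the package's:

```python
def gamble_utility(gain, loss, rho, lam):
    # rho curves the outcomes (risk attitude); lam weights the loss side
    # more heavily (loss aversion). Each outcome occurs with probability 0.5.
    return 0.5 * gain ** rho - 0.5 * lam * abs(loss) ** rho
```

With `rho = 1` and `lam = 1` a symmetric gamble is worth zero; raising `lam` above 1 makes the same gamble unattractive.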
- Happiness Computational Model
- Signal Detection Theory Model
- Hybrid Model, with 4 parameters
- Hybrid Model, with 6 parameters
- Hybrid Model, with 7 parameters (original model)
- Ideal Observer Model
- Rescorla-Wagner (Delta) Model
- Sequential Learning Model
Diagnostics
---

- Function to estimate the mode of MCMC samples
- Extract model comparison estimates
- Compute the highest-density interval (HDI)
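The highest-density interval is the narrowest interval containing a given mass of the posterior samples; for unimodal posteriors it can be found by sliding a fixed-width window over the sorted draws, as in Kruschke's code. A sketch:

```python
import math

def hdi_of_samples(samples, cred_mass=0.95):
    # Narrowest interval, with sample endpoints, containing cred_mass of the draws.
    s = sorted(samples)
    n = len(s)
    m = int(math.ceil(cred_mass * n))   # number of draws inside the interval
    widths = [s[i + m - 1] - s[i] for i in range(n - m + 1)]
    i_min = widths.index(min(widths))
    return s[i_min], s[i_min + m - 1]
```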
- Function to plot multiple figures
- Plots a histogram of MCMC samples
- Plots the highest-density interval (HDI) from (MCMC) samples and prints the HDI in the R console; the HDI is indicated by a red line. Based on John Kruschke's code.
- Plots individual posterior distributions, using the stan_plot function of the rstan package
- Prints model fits (mean LOOIC or WAIC values, along with Akaike weights) of hBayesDM models
- Function for extracting Rhat values from an hBayesDM object
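Rhat, the Gelman-Rubin potential scale reduction factor, compares between-chain to within-chain variance; values near 1 indicate the chains have mixed. A basic sketch for equal-length chains (without the split-chain refinement used by modern Stan):

```python
import statistics

def rhat(chains):
    # chains: list of equal-length lists of draws, one per MCMC chain.
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    variances = [statistics.variance(c) for c in chains]  # within-chain variances
    w = statistics.fmean(variances)                       # mean within-chain variance
    b = n * statistics.variance(means)                    # between-chain variance
    var_plus = (n - 1) / n * w + b / n                    # pooled variance estimate
    return (var_plus / w) ** 0.5
```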