# Probability Learning by Perceptrons and People

Table of contents posted in advance of publication.

Preface

Chapter 1 Uncertainty and Adaptation

1.1 Sources of Uncertainty

1.2 Seeking Rewards in an Uncertain World

1.3 Probability Learning

1.4 An Example of Probability Matching

1.5 The Modern Perceptron

1.6 Modern Perceptrons and Probability Estimation

1.7 Why Study Perceptrons?

1.8 Summary and Implications

Chapter 2 Information, Probability, and Negative Feedback

2.1 From Association to Information

2.2 Twenty Questions

2.3 Measuring Information

2.4 Physical and Subjective Probability

2.5 Information, Learning, and Feedback

2.6 Adapting to Surprise

2.7 Summary and Implications

Chapter 3 Bayes’ Theorem, Perceptrons, and Odds Ratios

3.1 From Algorithm to Computation

3.2 Contingency and Uncertainty

3.3 Bayes’ Theorem and Cognitive Science

3.4 A Bayesian Mechanism

3.5 Inside a Bayesian Mechanism

3.6 Odds, Odds Ratios, and Contingency

Chapter 4 Perceptrons Are Naïve Bayesians

4.1 The Limits of Perceptrons

4.2 Learning Probable Boolean Operations

4.3 Conditional Independence, Naïve Bayes, and Perceptrons

4.4 Signals from Three or More Cues

4.5 Summary and Implications

Chapter 5 Estimating Reward Probability from Three Cues

5.1 Probability Estimation from Three Independent Cues

5.2 Probability Estimation with Interactions: High Reward AND

5.3 Probability Estimation with Interactions: High Reward XOR

5.4 Manipulating Conditional Dependence: Low Reward AND

5.5 Manipulating Conditional Dependence: Low Reward XOR

5.6 Linear Separability versus Conditional Dependence

5.7 Summary and Implications

Chapter 6 Choice and Positive Feedback

6.1 Choice and Positive Feedback

6.2 Operant Learning: Three Independent Cues

6.3 Operant Learning: High Reward AND

6.4 Operant Learning: High Reward XOR

6.5 Operant Learning: Low Reward AND

6.6 Operant Learning: Low Reward XOR

6.7 Operant Learning and Conditional Dependence

6.8 Summary and Implications

Chapter 7 Human Performance on the Card-Choice Task

7.1 From Perceptrons to People

7.2 Methodology for the Card-Choice Task

7.3 Human Choice Behavior

7.4 Cue Interactions and Relative Complexity Evidence

7.5 Intermediate State Evidence: Independent Cues

7.6 Training Perceptrons with an Alternative Input Code

7.7 Fitting Logistic Equations to Human Choices

7.8 Exploring Human Choice Strategy

7.9 Error Evidence: Analysis of Human Card Preferences

7.10 Summary and Implications

Chapter 8 Synthetic Psychology and Probability Learning

8.1 Adapting to Uncertain Worlds

8.2 Computational Level Results

8.3 Computational Level Implications and Future Directions

8.4 Algorithmic Level Results

8.5 Algorithmic Level Implications and Future Directions

8.6 Implementational Level Considerations

8.7 The Synthetic Approach and Associative Learning

References