Chapter 9 Summary
- Early thinking about motivation and behavior linked motivation to biological need. Hull emphasized Drive, a form of motivation that was caused by need. Drive was supposed to energize consummatory behavior, random (“general”) activity, and instrumental action.
- The Drive concept ran into trouble. Drive does not energize activity in a random or general way; instead, hunger and thirst seem to select or potentiate behavior systems that are designed (by evolution) to deal with the motivational state. Drive does not blindly energize instrumental action either. Motivational states influence instrumental behavior only if the animal has had a chance to learn the reinforcer’s value in the presence of the motivational state. The latter process is called “incentive learning.”
- Eating and drinking seem to anticipate—rather than be a response to—need. For example, animals drink or forage for food in ways that seem to prevent them from becoming depleted. This process usually involves learning, and what we eat and when we eat it (for example) are strongly influenced by learning processes.
- Instrumental behavior is motivated by the anticipation of reward. Upshifts and downshifts in the size of reward cause positive and negative “contrast effects.” These involve emotion, and they suggest that the motivating effects of a reward depend on what we have learned to expect. The anticipation of reward causes “incentive motivation,” which Hull added to his theory (a simplified version of his formula is sketched after this list). The bottom line was that a classically conditioned anticipatory goal response, rG, was thought to energize instrumental action.
- There are other conditioned motivators besides rG. Avoidance learning is motivated by fear, or rE. When rewards are smaller than expected, frustration, rF, becomes important. Frustration is especially useful in explaining many “paradoxical reward effects” in which reinforcers are less positive than our intuitions suggest they should be.
- Extrinsic rewards (like prizes or money) can sometimes hurt human performance that is said to be “intrinsically” motivated. The effect is restricted to certain situations. Like other paradoxical reward effects, it is consistent with the idea that the effects of reinforcers can depend on expectations and psychological context.
- The partial reinforcement extinction effect (PREE) is an especially important paradoxical reward effect. Behaviors that are reinforced only some of the time are more resistant to extinction than those that are always reinforced. Behavior may be more persistent after partial reinforcement because we have learned to respond in the presence of frustration. Alternatively, partial reinforcement may make it more difficult to discriminate extinction from acquisition. The latter idea is refined in “sequential theory.”
- It was difficult to confirm a role for peripheral responses like rG and rE in motivating instrumental behavior. Pavlovian-instrumental transfer (PIT) experiments, however, demonstrate that presenting a Pavlovian CS while an organism is performing an instrumental action can influence instrumental performance. The motivating effects of rewards and punishers are thought to be mediated by classically conditioned expectancies or motivational states, not by peripheral responses. In any instrumental learning situation, cues in the background can become associated with the outcome (O) and thereby motivate instrumental action.
- PIT can take two forms. In general PIT, a CS that is associated with a reinforcer can excite or invigorate an instrumental response that has been reinforced by any outcome within the same motivational system (e.g., different types of foods). In outcome-specific PIT, a CS excites or invigorates an instrumental response that is specifically associated with the same outcome. Pavlovian CSs can thus influence choice between different instrumental behaviors, and outcome-specific PIT can instigate organisms to make an instrumental response even when they are satiated. These effects of Pavlovian cues are another reason that cues associated with reinforcers can cause people to work for food or take drugs, even when they do not “need” them.
- The motivational effects of rewards and punishers can further change as a function of experience with them. Exposure to an emotional stimulus can cause an opposite after-reaction when the stimulus is withdrawn. With repeated exposure to the emotional stimulus, the after-reaction may also get stronger while the original emotional effect habituates. According to opponent-process theory, this change occurs because an opponent process elicited by the stimulus grows with repeated use. Ultimately, the change can cause a reversal of the motivation behind instrumental action. For example, although a positive stimulus is a positive reinforcer at first, we may eventually seek it so as to escape the strong aversive after-reaction. This may be a hallmark of addiction. (A minimal simulation of these dynamics appears after this list.)
- Opponent-process theory explains the emotional dynamics of imprinting. The growth of opponent processes may depend more on learning than the theory originally supposed, however. Conditioned compensatory responses, which are essentially conditioned opponent processes, may play a role in tolerance and habituation, although a growth of the opponent process like the one envisioned by opponent-process theory may still occur as a consequence of massed exposures to a significant outcome.
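For reference, the role that incentive motivation plays in Hull's later theory is often summarized with a multiplicative formula. The rendering below is a simplified textbook version, not the full system; it omits additional terms Hull included (such as stimulus-intensity dynamism).

```latex
% Simplified form of Hull's reaction-potential formula after
% incentive motivation (K) was added to the theory.
%   sEr : reaction potential (strength of the instrumental response)
%   sHr : habit strength (built up by prior reinforcement)
%   D   : drive (need-based motivation)
%   K   : incentive motivation (anticipation of reward, carried by rG)
\[
  {}_{s}E_{r} \;=\; {}_{s}H_{r} \times D \times K
\]
% Because the terms multiply, reaction potential falls to zero when any
% one of habit, drive, or anticipated reward is absent.
```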
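The opponent-process dynamics summarized in the last two points can be illustrated with a small simulation. The sketch below is not a model presented in the chapter; it simply assumes a fast primary (a) process that tracks the stimulus and a slower opponent (b) process whose strength (the b_gain parameter) is taken to grow with repeated exposures. The function name, time constants, and parameter values are all illustrative.

```python
import numpy as np

def simulate_exposure(stim, b_gain, dt=1.0, tau_a=5.0, tau_b=40.0):
    """Net affective response to one exposure to an emotional stimulus.

    stim   : array of 0/1 values marking when the stimulus is present
    b_gain : strength of the opponent (b) process; assumed to grow
             with repeated exposures to the stimulus
    Returns a(t) - b(t), the primary process minus the opponent process.
    """
    a = np.zeros(len(stim))
    b = np.zeros(len(stim))
    for t in range(1, len(stim)):
        # a-process: fast, tracks the stimulus directly
        a[t] = a[t - 1] + (dt / tau_a) * (stim[t] - a[t - 1])
        # b-process: slower opponent driven by the a-process, scaled by b_gain
        b[t] = b[t - 1] + (dt / tau_b) * (b_gain * a[t] - b[t - 1])
    return a - b

# 20 time steps of baseline, 100 with the stimulus on, 200 after its offset
stim = np.concatenate([np.zeros(20), np.ones(100), np.zeros(200)])

early = simulate_exposure(stim, b_gain=0.3)  # early exposure: weak opponent process
late = simulate_exposure(stim, b_gain=1.0)   # after many exposures: strong opponent process

print("peak reaction during the stimulus (early vs. late):",
      round(early.max(), 2), round(late.max(), 2))
print("after-reaction following offset (early vs. late):",
      round(early.min(), 2), round(late.min(), 2))
```

With the larger assumed b_gain, the peak response during the stimulus shrinks while the trough after stimulus offset deepens: the primary emotional effect habituates as the after-reaction grows, which is the qualitative pattern opponent-process theory uses to describe tolerance and the strengthening after-reaction.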