Reward system

Addiction and dependence glossary[1][2][3]
addiction – a state characterized by compulsive engagement in rewarding stimuli despite adverse consequences
addictive behavior – a behavior that is both rewarding and reinforcing
addictive drug – a drug that is both rewarding and reinforcing
dependence – an adaptive state associated with a withdrawal syndrome upon cessation of repeated exposure to a stimulus (e.g., drug intake)
drug sensitization or reverse tolerance – the escalating effect of a drug resulting from repeated administration at a given dose
drug withdrawal – symptoms that occur upon cessation of repeated drug use
physical dependence – dependence that involves persistent physical–somatic withdrawal symptoms (e.g., fatigue and delirium tremens)
psychological dependence – dependence that involves emotional–motivational withdrawal symptoms (e.g., dysphoria and anhedonia)
reinforcing stimuli – stimuli that increase the probability of repeating behaviors paired with them
rewarding stimuli – stimuli that the brain interprets as intrinsically positive or as something to be approached
sensitization – an amplified response to a stimulus resulting from repeated exposure to it
tolerance – the diminishing effect of a drug resulting from repeated administration at a given dose

The reward system is a group of neural structures responsible for incentive salience (i.e., "wanting" or desire), pleasure (i.e., "liking" or hedonic value), and positive reinforcement. Reward is the attractive and motivational property of a stimulus that induces appetitive behavior – also known as approach behavior – and consummatory behavior.[4] In its description of a rewarding stimulus (i.e., "a reward"), a review on reward neuroscience noted, "any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward."[4] In operant conditioning, rewarding stimuli function as positive reinforcers;[4] the converse statement is also true: positive reinforcers are rewarding.[4]
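
The operational definition above ("anything that makes an individual come back for more is a positive reinforcer") can be made concrete with a small simulation. The Python sketch below is only an illustrative toy model under assumed parameters (the function name, initial response probability, and update rule are not taken from the cited sources): each time a simulated "lever press" is followed by a rewarding stimulus, the probability of pressing again increases.

    import random

    def simulate_positive_reinforcement(trials=200, reward_prob=1.0,
                                        learning_rate=0.1, seed=0):
        """Toy model of positive reinforcement (illustrative assumptions only):
        a response that is followed by a rewarding stimulus becomes more
        probable on later trials, the defining property of a reinforcer."""
        rng = random.Random(seed)
        p_press = 0.05  # initial probability of emitting the response ("lever press")
        trace = []
        for _ in range(trials):
            pressed = rng.random() < p_press
            rewarded = pressed and rng.random() < reward_prob
            if rewarded:
                # Positive reinforcement: nudge the response probability upward.
                p_press = min(1.0, p_press + learning_rate * (1.0 - p_press))
            trace.append(p_press)
        return trace

    if __name__ == "__main__":
        trace = simulate_positive_reinforcement()
        print(f"response probability: start {trace[0]:.2f} -> end {trace[-1]:.2f}")

In this toy setting, a rewarding stimulus is simply any event that raises the response probability; the sketch illustrates the reinforcement definition rather than any real neural mechanism.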

Primary rewards are those necessary for the survival of a species, and include homeostatic (e.g., palatable food) and reproductive (e.g., sexual contact) rewards.[4][5] Intrinsic rewards are unconditioned rewards that are attractive and motivate behavior because they are inherently pleasurable.[4] Extrinsic rewards, such as money, are conditioned rewards that are attractive and motivate behavior, but are not pleasurable.[4] Extrinsic rewards derive their motivational value as a result of a learned association (i.e., conditioning) with intrinsic rewards.[4] Rewards are generally considered more effective than punishment in modifying behavior.[6]

Definition

In neuroscience, the reward system is a collection of brain structures that are responsible for reward-related cognition, including positive reinforcement and both "wanting" (i.e., desire) and "liking" (i.e., pleasure) as defined in the incentive salience model.[4]

Anatomy of the reward system

Brain structures that compose the reward system are primarily within the cortico–basal ganglia–thalamic loop;[7] the basal ganglia portion of the loop drives activity within the reward system.[7] Most of the pathways that connect structures within the reward system are glutamatergic interneurons, GABAergic medium spiny neurons, and dopaminergic projection neurons,[7][8] although other types of projection neurons contribute (e.g., orexinergic projection neurons). The reward system includes the ventral tegmental area, ventral striatum (primarily the nucleus accumbens, but also the olfactory tubercle), dorsal striatum (i.e., caudate nucleus and putamen), substantia nigra (i.e., the pars compacta and pars reticulata), prefrontal cortex, anterior cingulate cortex, insular cortex, hippocampus, hypothalamus (particularly, the orexinergic nucleus in the lateral hypothalamus), thalamus (multiple nuclei), subthalamic nucleus, globus pallidus (both external and internal), ventral pallidum, parabrachial nucleus, amygdala, and the remainder of the extended amygdala.[7][9][10][11][12]

Among the pathways that connect the structures in the cortico–basal ganglia–thalamic loop, the group of neurons known as the mesolimbic dopamine pathway, which connects the ventral tegmental area (VTA) to the nucleus accumbens (NAcc), along with the associated GABAergic D1-type medium spiny neurons in the nucleus accumbens shell, is a critical component of the reward system that is directly involved in the immediate perception of the motivational component of a reward (i.e., "wanting").[2][13] Most of the dopamine pathways (i.e., neurons that use the neurotransmitter dopamine to transmit a signal to other structures) that originate in the VTA are part of the reward system;[7] in these pathways, dopamine acts on D1-like receptors or D2-like receptors to either stimulate (D1-like) or inhibit (D2-like) the production of cAMP.[14] The GABAergic medium spiny neurons of the striatum are all components of the reward system as well.[7] The glutamatergic projection nuclei in the subthalamic nucleus, prefrontal cortex, hippocampus, thalamus, and amygdala connect to other parts of the reward system via glutamate pathways.[7] The medial forebrain bundle, which is composed of monoamine neurons that project from several distinct nuclei, is also part of the reward system.
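
The opposing D1-like and D2-like effects on cAMP production can likewise be sketched as a minimal rate model. The Python snippet below is a hypothetical illustration only; the function, gain constants, and multiplicative form are assumptions rather than values from the cited sources.

    def camp_production_rate(d1_occupancy, d2_occupancy,
                             basal_rate=1.0, d1_gain=2.0, d2_gain=2.0):
        """Toy model of dopamine receptor modulation of cAMP synthesis.
        D1-like receptors (Gs-coupled) stimulate adenylyl cyclase and raise
        cAMP production, whereas D2-like receptors (Gi-coupled) inhibit it.
        Occupancies are fractions in [0, 1]; all constants are illustrative."""
        stimulation = 1.0 + d1_gain * d1_occupancy           # D1-like: stimulatory
        inhibition = 1.0 / (1.0 + d2_gain * d2_occupancy)    # D2-like: inhibitory
        return basal_rate * stimulation * inhibition

    # Dopamine acting mainly on D1-like receptors raises cAMP production above
    # baseline, whereas acting mainly on D2-like receptors lowers it.
    print(camp_production_rate(d1_occupancy=0.8, d2_occupancy=0.0))  # 2.6 (> 1.0)
    print(camp_production_rate(d1_occupancy=0.0, d2_occupancy=0.8))  # ~0.38 (< 1.0)

The multiplicative form was chosen so that simultaneous D1-like and D2-like activation partially cancel, consistent with the stimulate/inhibit description above; this is a modeling convenience, not a claim about the underlying biochemistry.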

Pleasure is a component of reward, but not all rewards are pleasurable (e.g., money does not elicit pleasure).[4] Stimuli that are naturally pleasurable, and therefore attractive, are known as intrinsic rewards, whereas stimuli that are attractive and motivate approach behavior but are not inherently pleasurable are known as extrinsic rewards.[4] Extrinsic rewards are rewarding as a result of a learned association with an intrinsic reward.[4] In other words, extrinsic rewards function as motivational magnets that elicit "wanting", but not "liking", reactions once they have been acquired.[4] The hedonic hotspots or pleasure centers – brain structures that mediate pleasure or "liking" reactions from intrinsic rewards – that had been identified within the reward system as of May 2015 are located in subcompartments within the nucleus accumbens shell, ventral pallidum, and parabrachial nucleus of the pons;[11][12] the insular cortex and orbitofrontal cortex likely contain hedonic hotspots as well.[11]

Animals vs. humans

Based on data from Kent Berridge, liking and disliking reactions to taste show similarities among human newborns, orangutans, and rats. Most neuroscience studies had reported that altering dopamine changes how much a reward is "liked", called its hedonic impact, which is typically measured by how hard the animal will work for the reward. Berridge modified the testing procedure by instead recording the facial expressions of liking and disliking. He discovered that blocking dopamine systems did not change the positive reaction to something sweet; in other words, the hedonic impact remained the same. Dopamine had been regarded as the brain's main pleasure neurotransmitter, but these results suggested that this is not the case, and even more intense dopamine alterations left the liking reactions unchanged. Berridge then proposed the incentive salience hypothesis to explain why dopamine had appeared to control pleasure when in fact it may not do so at all; the hypothesis holds instead that dopamine mediates the "wanting" aspect of rewards. This work helps explain why such strong urges arise in drug addiction: some addicts respond strongly to certain stimuli because of drug-induced neural changes, a sensitization of the dopamine-related "wanting" processes. Human and animal brains and behaviors undergo similar reward-related changes, since the reward system is similarly prominent in both.[15]
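
Berridge's dissociation can be summarized in a toy computational form: a hypothetical "dopamine gain" parameter scales the "wanting" (incentive salience) attributed to a reward, while the "liking" (hedonic) reaction to the reward itself is left untouched. The Python sketch below is purely illustrative; the function and numbers are assumptions, not a model used in the cited work.

    def reward_reactions(sweetness, dopamine_gain):
        """Illustrative separation of 'liking' and 'wanting' (assumed model).
        'Liking' (the hedonic reaction, e.g. facial expressions to sweetness)
        does not depend on dopamine here, whereas 'wanting' (incentive
        salience attributed to the reward) is scaled by dopamine."""
        liking = sweetness                    # unchanged by dopamine manipulations
        wanting = dopamine_gain * sweetness   # amplified or blunted by dopamine
        return {"liking": liking, "wanting": wanting}

    # Blocking dopamine (gain -> 0) abolishes 'wanting' but leaves 'liking' intact,
    # while drug-induced sensitization (gain >> 1) produces strong 'wanting'
    # ("urges") without a corresponding increase in 'liking'.
    print(reward_reactions(sweetness=1.0, dopamine_gain=0.0))   # liking 1.0, wanting 0.0
    print(reward_reactions(sweetness=1.0, dopamine_gain=3.0))   # liking 1.0, wanting 3.0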

History

Skinner box

In a fundamental discovery made in 1954, researchers James Olds and Peter Milner found that low-voltage electrical stimulation of certain regions of the brain of the rat acted as a reward in teaching the animals to run mazes and solve problems.[16][17] It seemed that stimulation of those parts of the brain gave the animals pleasure,[16] and in later work humans reported pleasurable sensations from such stimulation. When rats were tested in Skinner boxes where they could stimulate the reward system by pressing a lever, the rats pressed for hours.[17] Research in the next two decades established that dopamine is one of the main chemicals aiding neural signaling in these regions, and dopamine was suggested to be the brain's “pleasure chemical”.[18]

Ivan Pavlov was a physiologist who used the reward system to study classical conditioning. Pavlov rewarded dogs with food after they had heard a bell or another stimulus, so that the dogs came to associate the food (the reward) with the bell (the stimulus).[19] Edward L. Thorndike used the reward system to study operant conditioning. He began by placing cats in a puzzle box with food outside of it, so that the cats were motivated to escape. The cats worked to get out of the puzzle box to reach the food. Although the cats ate the food after they escaped, Thorndike found that they also attempted to escape the box when no food was offered. In this way Thorndike used the rewards of food and freedom to stimulate the cats' reward system and to study how the cats learned to escape the box.[20]

Addiction

Main article: Addiction

ΔFosB (delta FosB), a gene transcription factor, is the common factor among virtually all forms of addiction (behavioral addictions and drug addictions) that, when overexpressed in D1-type medium spiny neurons in the nucleus accumbens, induces addiction-related behavioral and neural plasticity; in particular, ΔFosB promotes self-administration, reward sensitization, and reward cross-sensitization effects among specific addictive drugs and behaviors.

Addictive drugs and addictive behaviors are rewarding and reinforcing (i.e., are addictive) due to their effects on the dopamine reward pathway.[10][21]

References

  1. Malenka RC, Nestler EJ, Hyman SE (2009). "Chapter 15: Reinforcement and Addictive Disorders". In Sydor A, Brown RY. Molecular Neuropharmacology: A Foundation for Clinical Neuroscience (2nd ed.). New York: McGraw-Hill Medical. pp. 364–375. ISBN 9780071481274.
  2. 1 2 Nestler EJ (December 2013). "Cellular basis of memory for addiction". Dialogues Clin. Neurosci. 15 (4): 431–443. PMC 3898681. PMID 24459410.
  3. "Glossary of Terms". Mount Sinai School of Medicine. Department of Neuroscience. Retrieved 9 February 2015.
  4. 1 2 3 4 5 6 7 8 9 10 11 12 13 Schultz W (2015). "Neuronal reward and decision signals: from theories to data" (PDF). Physiological Reviews 95 (3): 853–951. doi:10.1152/physrev.00023.2014. Archived from the original (PDF) on 6 September 2015. Retrieved 24 September 2015. Rewards in operant conditioning are positive reinforcers. ... Operant behavior gives a good definition for rewards. Anything that makes an individual come back for more is a positive reinforcer and therefore a reward. Although it provides a good definition, positive reinforcement is only one of several reward functions. ... Rewards are attractive. They are motivating and make us exert an effort. ... Rewards induce approach behavior, also called appetitive or preparatory behavior, and consummatory behavior. ... Thus any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward. ... Rewarding stimuli, objects, events, situations, and activities consist of several major components. First, rewards have basic sensory components (visual, auditory, somatosensory, gustatory, and olfactory) ... Second, rewards are salient and thus elicit attention, which are manifested as orienting responses (FIGURE 1, middle). The salience of rewards derives from three principal factors, namely, their physical intensity and impact (physical salience), their novelty and surprise (novelty/surprise salience), and their general motivational impact shared with punishers (motivational salience). A separate form not included in this scheme, incentive salience, primarily addresses dopamine function in addiction and refers only to approach behavior (as opposed to learning) ... Third, rewards have a value component that determines the positively motivating effects of rewards and is not contained in, nor explained by, the sensory and attentional components (FIGURE 1, right). This component reflects behavioral preferences and thus is subjective and only partially determined by physical parameters. Only this component constitutes what we understand as a reward. It mediates the specific behavioral reinforcing, approach generating, and emotional effects of rewards that are crucial for the organism’s survival and reproduction, whereas all other components are only supportive of these functions. ... Rewards can also be intrinsic to behavior (31, 546, 547). They contrast with extrinsic rewards that provide motivation for behavior and constitute the essence of operant behavior in laboratory tests. Intrinsic rewards are activities that are pleasurable on their own and are undertaken for their own sake, without being the means for getting extrinsic rewards. ... Intrinsic rewards are genuine rewards in their own right, as they induce learning, approach, and pleasure, like perfectioning, playing, and enjoying the piano. Although they can serve to condition higher order rewards, they are not conditioned, higher order rewards, as attaining their reward properties does not require pairing with an unconditioned reward. ... These emotions are also called liking (for pleasure) and wanting (for desire) in addiction research (471) and strongly support the learning and approach generating functions of reward.
  5. "Dopamine Involved In Aggression". Medical News Today. 15 January 2008. Retrieved 14 November 2010.
  6. "Smacking children 'does not work'". BBC News. 11 January 1999. Retrieved 22 May 2010.
  7. 1 2 3 4 5 6 7 Yager LM, Garcia AF, Wunsch AM, Ferguson SM (August 2015). "The ins and outs of the striatum: Role in drug addiction". Neuroscience 301: 529–541. doi:10.1016/j.neuroscience.2015.06.033. PMID 26116518. [The striatum] receives dopaminergic inputs from the ventral tegmental area (VTA) and the substantia nigra (SNr) and glutamatergic inputs from several areas, including the cortex, hippocampus, amygdala, and thalamus (Swanson, 1982; Phillipson and Griffiths, 1985; Finch, 1996; Groenewegen et al., 1999; Britt et al., 2012). These glutamatergic inputs make contact on the heads of dendritic spines of the striatal GABAergic medium spiny projection neurons (MSNs) whereas dopaminergic inputs synapse onto the spine neck, allowing for an important and complex interaction between these two inputs in modulation of MSN activity ... It should also be noted that there is a small population of neurons in the NAc that coexpress both D1 and D2 receptors, though this is largely restricted to the NAc shell (Bertran- Gonzalez et al., 2008). ... Neurons in the NAc core and NAc shell subdivisions also differ functionally. The NAc core is involved in the processing of conditioned stimuli whereas the NAc shell is more important in the processing of unconditioned stimuli; Classically, these two striatal MSN populations are thought to have opposing effects on basal ganglia output. Activation of the dMSNs causes a net excitation of the thalamus resulting in a positive cortical feedback loop; thereby acting as a ‘go’ signal to initiate behavior. Activation of the iMSNs, however, causes a net inhibition of thalamic activity resulting in a negative cortical feedback loop and therefore serves as a ‘brake’ to inhibit behavior ... there is also mounting evidence that iMSNs play a role in motivation and addiction (Lobo and Nestler, 2011; Grueter et al., 2013). For example, optogenetic activation of NAc core and shell iMSNs suppressed the development of a cocaine CPP whereas selective ablation of NAc core and shell iMSNs ... enhanced the development and the persistence of an amphetamine CPP (Durieux et al., 2009; Lobo et al., 2010). These findings suggest that iMSNs can bidirectionally modulate drug reward. ... Together these data suggest that iMSNs normally act to restrain drug-taking behavior and recruitment of these neurons may in fact be protective against the development of compulsive drug use.
  8. Taylor SB, Lewis CR, Olive MF (2013). "The neurocircuitry of illicit psychostimulant addiction: acute and chronic effects in humans". Subst Abuse Rehabil 4: 29–43. doi:10.2147/SAR.S39684. PMC 3931688. PMID 24648786. Regions of the basal ganglia, which include the dorsal and ventral striatum, internal and external segments of the globus pallidus, subthalamic nucleus, and dopaminergic cell bodies in the substantia nigra, are highly implicated not only in fine motor control but also in PFC function.43 Of these regions, the NAc (described above) and the DS (described below) are most frequently examined with respect to addiction. Thus, only a brief description of the modulatory role of the basal ganglia in addiction-relevant circuits will be mentioned here. The overall output of the basal ganglia is predominantly via the thalamus, which then projects back to the PFC to form cortico-striatal-thalamo-cortical (CSTC) loops. Three CSTC loops are proposed to modulate executive function, action selection, and behavioral inhibition. In the dorsolateral prefrontal circuit, the basal ganglia primarily modulate the identification and selection of goals, including rewards.44 The OFC circuit modulates decision-making and impulsivity, and the anterior cingulate circuit modulates the assessment of consequences.44 These circuits are modulated by dopaminergic inputs from the VTA to ultimately guide behaviors relevant to addiction, including the persistence and narrowing of the behavioral repertoire toward drug seeking, and continued drug use despite negative consequences.43–45
  9. Grall-Bronnec M, Sauvaget A (2014). "The use of repetitive transcranial magnetic stimulation for modulating craving and addictive behaviours: a critical literature review of efficacy, technical and methodological considerations". Neurosci. Biobehav. Rev. 47: 592–613. doi:10.1016/j.neubiorev.2014.10.013. PMID 25454360. Studies have shown that cravings are underpinned by activation of the reward and motivation circuits (McBride et al., 2006, Wang et al., 2007, Wing et al., 2012, Goldman et al., 2013, Jansen et al., 2013 and Volkow et al., 2013). According to these authors, the main neural structures involved are: the nucleus accumbens, dorsal striatum, orbitofrontal cortex, anterior cingulate cortex, dorsolateral prefrontal cortex (DLPFC), amygdala, hippocampus and insula.
  10. 1 2 Malenka RC, Nestler EJ, Hyman SE (2009). "Chapter 15: Reinforcement and Addictive Disorders". In Sydor A, Brown RY. Molecular Neuropharmacology: A Foundation for Clinical Neuroscience (2nd ed.). New York: McGraw-Hill Medical. pp. 365–366, 376. ISBN 9780071481274. The neural substrates that underlie the perception of reward and the phenomenon of positive reinforcement are a set of interconnected forebrain structures called brain reward pathways; these include the nucleus accumbens (NAc; the major component of the ventral striatum), the basal forebrain (components of which have been termed the extended amygdala, as discussed later in this chapter), hippocampus, hypothalamus, and frontal regions of cerebral cortex. These structures receive rich dopaminergic innervation from the ventral tegmental area (VTA) of the midbrain. Addictive drugs are rewarding and reinforcing because they act in brain reward pathways to enhance either dopamine release or the effects of dopamine in the NAc or related structures, or because they produce effects similar to dopamine. ... A macrostructure postulated to integrate many of the functions of this circuit is described by some investigators as the extended amygdala. The extended amygdala is said to comprise several basal forebrain structures that share similar morphology, immunocytochemical features, and connectivity and that are well suited to mediating aspects of reward function; these include the bed nucleus of the stria terminalis, the central medial amygdala, the shell of the NAc, and the sublenticular substantia innominata.
  11. 1 2 3 Berridge KC, Kringelbach ML (May 2015). "Pleasure systems in the brain". Neuron 86 (3): 646–664. doi:10.1016/j.neuron.2015.02.018. PMID 25950633. In the prefrontal cortex, recent evidence indicates that the OFC and insula cortex may each contain their own additional hot spots (D.C. Castro et al., Soc. Neurosci., abstract). In specific subregions of each area, either opioid-stimulating or orexin-stimulating microinjections appear to enhance the number of ‘‘liking’’ reactions elicited by sweetness, similar to the NAc and VP hot spots. Successful confirmation of hedonic hot spots in the OFC or insula would be important and possibly relevant to the orbitofrontal mid-anterior site mentioned earlier that especially tracks the subjective pleasure of foods in humans (Georgiadis et al., 2012; Kringelbach, 2005; Kringelbach et al., 2003; Small et al., 2001; Veldhuizen et al., 2010). Finally, in the brainstem, a hindbrain site near the parabrachial nucleus of dorsal pons also appears able to contribute to hedonic gains of function (Söderpalm and Berridge, 2000). A brainstem mechanism for pleasure may seem more surprising than forebrain hot spots to anyone who views the brainstem as merely reflexive, but the pontine parabrachial nucleus contributes to taste, pain, and many visceral sensations from the body and has also been suggested to play an important role in motivation (Wu et al., 2012) and in human emotion (especially related to the somatic marker hypothesis) (Damasio, 2010).
  12. 1 2 Richard JM, Castro DC, Difeliceantonio AG, Robinson MJ, Berridge KC (November 2013). "Mapping brain circuits of reward and motivation: in the footsteps of Ann Kelley". Neurosci. Biobehav. Rev. 37 (9 Pt A): 1919–1931. doi:10.1016/j.neubiorev.2012.12.008. PMC 3706488.
    Figure 3: Neural circuits underlying motivated 'wanting' and hedonic 'liking'.
  13. Dumitriu D, Laplant Q, Grossman YS, Dias C, Janssen WG, Russo SJ, Morrison JH, Nestler EJ (2012). "Subregional, dendritic compartment, and spine subtype specificity in cocaine regulation of dendritic spines in the nucleus accumbens". J. Neurosci. 32 (20): 6957–66. doi:10.1523/JNEUROSCI.5718-11.2012. PMC 3360066. PMID 22593064. The enduring spine density change in core but not shell fits well with the established idea that the shell is preferentially involved in the development of addiction, while the core mediates the long-term execution of learned addiction-related behaviors (Ito et al., 2004; Di Chiara, 2002; Meredith et al., 2008). Consistent with the idea of NAc core being the locus of long-lasting drug-induced neuroplasticity, several studies have shown that electrophysiological changes in core persist longer than their shell counterparts. ... Furthermore, data presented here support the idea that NAc shell is preferentially involved in immediate drug reward, while the core might play a more explicit role in longer-term aspects of addiction.
  14. Trantham-Davidson H., Neely L. C., Lavin A., Seamans J. K. (2004). "Mechanisms underlying differential D1 versus D2 dopamine receptor regulation of inhibition in prefrontal cortex". The Journal of Neuroscience 24 (47): 10652–10659. doi:10.1523/jneurosci.3179-04.2004.
  15. Berridge, Kent. "Affective neuroscience of pleasure: reward in humans and animals" (PDF). Retrieved 20 October 2012.
  16. 1 2 "human nervous system".
  17. 1 2 "Positive Reinforcement Produced by Electrical Stimulation of Septal Area and Other Regions of Rat Brain".
  18. "The Functional Neuroanatomy of Pleasure and Happiness".
  19. https://books.google.com/books?hl=en&lr=&id=cknrYDqAClkC&oi=fnd&pg=PA1&dq=pavlov&ots=KApln9W8Kb&sig=brINTzKpYOHv_jftPXT1IZO2-ks#v=onepage&q=pavlov&f=false
  20. Fridlund A, Kalat J (2014). Mind and Brain, the Science of Psychology. California: Cengage Learning.
  21. Rang, H. P. (2003). Pharmacology. Edinburgh: Churchill Livingstone. p. 596. ISBN 0-443-07145-4.
