What is the term used to describe when information that has previously been remembered interferes with memory for new information?

    Figure 7. Forgetting can often be obnoxious or even embarrassing. But as we explore this module, you’ll learn that forgetting is important and necessary for everyday functionality. [Image: jazbeck, https://goo.gl/nkRrJy, CC BY 2.0, goo.gl/BRvSA7]

Figure 8. At times, we will completely blank on something we’re certain we’ve learned (people we went to school with years ago, for example). However, once we get the right retrieval cue (a name, perhaps), the memory (faces or experiences) rushes back to us as if it had been there all along. [Image: sbhsclass84, https://goo.gl/sHZyQI, CC BY-SA 2.0, goo.gl/rxiUsF]
    Figure 9. The 5 Impediments to Remembering

Figure 10. Could you imagine being unable to forget any path you have ever taken while hiking? On each new trip, you would be walking around the forest for days, incapable of distinguishing today’s path from prior ones. [Image: Dan Trew, https://goo.gl/8fJWWE, CC BY-SA 2.0, goo.gl/rxiUsF]

Figure 11. The Thinker by Auguste Rodin. Our memories are not infallible: over time, without use, memories decay and we lose the ability to retrieve them.

Figure 12. Memory over time: Over time, a memory becomes harder to remember. A memory is most easily recalled when it is brand new and, without rehearsal, begins to be forgotten.

            Figure 13. Memory interference: Both old and new memories can impact how well we are able to recall a memory. This is known as proactive and retroactive interference.

                Figure 14. Amnesia: There are two main forms of amnesia: retrograde and anterograde. Retrograde prevents recall of information encoded before a brain injury, and anterograde prevents recall of information encountered after a brain injury.

Interference-Based Forgetting

Michael Craig, in Encyclopedia of Behavioral Neuroscience, 2nd edition (Volume 2), 2022

                The interference theory of forgetting posits that the time-related decay of memories cannot explain all forgetting. Instead, forgetting is thought to be predominantly due to other information in long-term memory interfering with our ability to retrieve a memory. The concept that interference causes forgetting has a long history (see McGaugh, 2000). However, it is only in recent decades, following experimental work such as that by Baddeley and Hitch (1977), that this theory has become the dominant explanation for everyday forgetting.

                Interference-based forgetting can be categorized as being either retroactive or proactive. Retroactive interference is a phenomenon that occurs when new information impairs the ability to retrieve previously acquired memory traces (e.g., Baddeley and Dale, 1966). This is the opposite of proactive interference, where previously acquired memory traces interfere with the ability to retrieve new information (e.g., Wickens et al., 1963). Following influential arguments by Underwood (1957), proactive interference was thought to be the dominant cause of forgetting for many years, and retroactive interference was discounted. Nevertheless, many of these arguments were flawed (for discussion, see Wixted, 2004). It is now generally accepted that retroactive interference is the primary cause of incidental forgetting in everyday life (Wixted, 2004).

One explanation of interference-based forgetting is that interfering information increases competition for retrieval between similar memory traces, which encourages forgetting. Specifically, when a retrieval cue is associated solely with a stored memory (the target), there is a high likelihood that the target memory can be accessed and retrieved with relative efficiency. However, retrieval competition occurs if a retrieval cue is associated with multiple stored memories (a target and competitors). In this case, the greater the number of competitors, the higher the likelihood of forgetting because of increased competition-based interference. Retrieval competition can result in protracted retrieval, the recall of an undesired competitor memory trace, and forgetting due to “cue-overload” (Watkins and Watkins, 1975).
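To make the cue-overload idea concrete, a simple ratio rule (in the spirit of Watkins and Watkins, 1975, though not their exact formulation) says the probability of retrieving the target is its associative strength divided by the summed strength of everything attached to the cue. The sketch below is our illustrative Python, with equal strengths assumed for all traces:

```python
def p_retrieve_target(n_competitors, strength=1.0):
    """Ratio-rule illustration of cue overload: one cue is linked to a target
    and to n competitors of equal strength; retrieval probability falls as
    competitors accumulate."""
    return strength / (strength * (n_competitors + 1))

for n in (0, 1, 3, 9):
    print(n, p_retrieve_target(n))  # 1.0, 0.5, 0.25, 0.1 -> more competitors, more forgetting
```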

                Although experimental research provides robust and reliable evidence that retrieval competition causes forgetting, there are weaknesses to this theory as the primary cause of everyday forgetting. Firstly, behavioral studies that provide evidence of retrieval competition offer little insight into underlying neurobiological mechanisms. Furthermore, retrieval competition (between similar memories) cannot account for all forgetting. Specifically, it fails to explain interference-based forgetting when interpolated task material is highly dissimilar to encoded memories and thus unlikely to share a retrieval cue.

Powerful evidence for a non-material-specific effect of interference dates back over a century (Müller and Pilzecker, 1900). Dewar et al. (2007) demonstrated retroactive interference effects when wordlist learning was followed shortly by a variety of tasks: completing a visual perceptual task, completing math puzzles, watching a video, listening to the radio, or completing a tone detection task. Critically, all these materials were very dissimilar to the “to-be-retained” wordlist materials and unlikely to share a common retrieval cue. Despite this, in all cases, participants demonstrated a retroactive interference effect (increased forgetting) relative to a control group who experienced an unfilled retention interval.

In their study, Dewar et al. (2007) proposed that the unrelated tasks and materials caused forgetting not through retrieval competition but by interfering with the consolidation of wordlist memories. Consolidation theory proposes that new memories are fragile when first formed and are strengthened and stabilized over time (Wixted, 2004). Consolidation is thought to be an opportunistic process that occurs especially during behavioral states of reduced interference, for example, quiet rest and sleep (Hasselmo, 1999; Mednick et al., 2011).

Robust behavioral findings demonstrate that consolidation is impaired in the presence of interference from sensory input and cognitive engagement in the minutes to hours after learning. This was demonstrated by Dewar et al. (2007), noted above, and by other recent investigations (e.g., Craig and Dewar, 2018), though the evidence is long-standing (Müller and Pilzecker, 1900). Furthermore, a temporal gradient of consolidation interference is observed: greater forgetting is seen when interference occurs shortly after encoding than when it follows a delay (Müller and Pilzecker, 1900). A consolidation account can explain this gradient: memories are most fragile and vulnerable to interference immediately after their formation and, through consolidation, become resilient to interference over time.
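The temporal gradient can be illustrated with a toy model in which a trace consolidates exponentially and interference destroys only the still-unconsolidated fraction. This is a hedged sketch of the account just described, not a model fitted in the cited studies; the time constant and damage fraction are arbitrary choices:

```python
import math

def retained(delay_min, tau=60.0, damage=0.8):
    """Toy consolidation-interference model (illustrative only): the fraction
    1 - exp(-delay/tau) of the trace has consolidated by the time interference
    arrives, and interference destroys a fraction `damage` of the remainder."""
    consolidated = 1.0 - math.exp(-delay_min / tau)
    return consolidated + (1.0 - damage) * (1.0 - consolidated)

for d in (0, 15, 60, 240):  # minutes between encoding and interfering task
    print(f"delay={d:>3} min -> retained {retained(d):.2f}")
```

Later interference leaves more of the trace intact, reproducing the gradient reported by Müller and Pilzecker (1900).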

Theories of consolidation and consolidation interference are supported by neurobiological evidence in humans and rodents. In the minutes to hours after learning, new memories are consolidated through their “replay” in the hippocampus and associated networks, including the medial temporal lobe (MTL) and visual cortices. The magnitude of replay predicts memory performance (Carr et al., 2011; Deuker et al., 2013). Consolidation is also associated with cellular mechanisms: changes in acetylcholine levels are proposed to modulate the flow of information between the hippocampal formation and cortices to facilitate or impede consolidation (Hasselmo, 1999). This includes the regulation of long-term potentiation (LTP), the long-lasting increase in signal transmission between two neurons that acts to strengthen connections. The disruption of these neurobiological and cellular mechanisms of consolidation, for example, through electrical stimulation, pharmaceuticals, or induction of further LTP through encoding, causes forgetting (e.g., Ego-Stengel and Wilson, 2010).

Although theories of consolidation and consolidation interference are over a century old, they remain comparatively underexplored, owing to the dominance of other forgetting theories through much of the 20th century. Nevertheless, accumulating behavioral and neurobiological findings offer a compelling case for consolidation interference as a dominant cause of everyday forgetting.


                URL: https://www.sciencedirect.com/science/article/pii/B9780128196410001249

                Retrieval-Induced Forgetting and Inhibition

                Michael F. Verde, in Psychology of Learning and Motivation, 2012

                Abstract

                The influence of classic interference theories on contemporary thinking about recall is embodied in the principle of competitor interference, which suggests that forgetting is a direct result of competition among memories associated with a retrieval cue. The inhibition theory of forgetting (Anderson, 2003; Anderson & Bjork, 1994) represents a major departure from the interference tradition in suggesting that an active inhibition mechanism, rather than competition among memories, causes forgetting. This review offers a critical evaluation of the empirical support and the theoretical underpinnings of the case for inhibition and against competitor interference.


                URL: https://www.sciencedirect.com/science/article/pii/B9780123943934000029

                Odor Memory and Perception

                Jacob A. Berry, Ronald L. Davis, in Progress in Brain Research, 2014

                Abstract

Failure to remember, or forgetting, is a phenomenon familiar to everyone, and despite more than a century of scientific inquiry, why we forget what we once knew remains unclear. If the brain marshals significant resources to form and store memories, why is it that these memories become lost? In the last century, psychological studies have divided forgetting into decay theory, in which memory simply dissipates with time, and interference theory, in which additional learning or mental activity hinders memory by reducing its stability or retrieval (for review, see Dewar et al., 2007; Wixted, 2004). Importantly, these psychological models of forgetting posit that forgetting is a passive property of the brain and thus a failure of the brain to retain memories. However, recent neuroscience research on olfactory memory in Drosophila has offered evidence for an alternative conclusion: that forgetting is an “active” process, with specific, biologically regulated mechanisms that remove existing memories (Berry et al., 2012; Shuai et al., 2010). Just as cell number is regulated bidirectionally by mitosis and apoptosis, protein concentration by translation and lysosomal or proteasomal degradation, and protein phosphorylation by kinases and phosphatases, biologically regulated memory formation and removal would be yet another example in biological systems where distinct and separate pathways regulate the creation and destruction of biological substrates.


                URL: https://www.sciencedirect.com/science/article/pii/B9780444633507000024

                The Mammary Gland in Mucosal and Regional Immunity

                J.E. Butler, ... Imre Kacskovics, in Mucosal Immunology (Fourth Edition), 2015

                Mechanisms of Immunosuppression by Passive Maternal Immunoglobulins

The mechanisms of suppression are theoretical: (1) maternal antibodies capture pathogens and foreign antigens, preventing them from stimulating the neonatal immune system; (2) natural antibodies in colostrum interfere with colonization by the normal gut flora needed to stimulate neonatal immune competence; and (3) antigen bound by passive maternal IgG engages FcγRIIB receptors and crosslinks them to the BCR, suppressing B cell responses.

The antibody interference theory has been popular (Van Maanen et al., 1992). Sows routinely immunized with various vaccines produce IgG, IgM, and IgA antibodies that are subsequently transferred to the suckling piglet via colostrum. Effective interference by these passive antibodies appears to depend on the ratio of ingested antibodies to the invading pathogen (Siegrist, 2003). However, blocking of antibody responses in piglets by maternally acquired immunity does not mean the absence of an immune response by the piglet. Rather, protective cellular immune responses may develop while B cells are primed for subsequent exposure to the same antigen (reviewed by Salmon et al., 2009). Another example of antibody interference involves SIgA in human milk/colostrum directed against dietary antigens (Rumbo et al., 1998). This IgA may have a role in the control of allergen absorption and contribute to protection of the neonate against the development of allergies of dietary or environmental origin (Welsh & May, 1979).

The second theory concerns the effect of passive immunity on gut colonization. Colonization by gut flora is necessary to drive development of the immune system, as evidenced by the more robust response to T-dependent antigens in conventional compared with germ-free animals (Butler et al., 2002, 2005; Dobber et al., 1992; Ohwaki et al., 1976; Woolverton et al., 1992). In rabbits and swine, diversification of the antibody repertoire also depends on microbial colonization (Butler et al., 2011; Knight and Winstead, 1997). Natural antibodies that recognize bacterial polysaccharides, such as polyreactive natural IgA antibodies against commensal flora, can limit the penetration and adhesion of commensal intestinal bacteria to the neonatal intestinal epithelium (Macpherson et al., 2000; Harris et al., 2006). Human IgA deficiency is often correlated with high concentrations of serum antibodies directed against antigens of the alimentary bolus or of enterotropic bacteria, suggesting that IgA antibodies in the lumen protect against this phenomenon. Although Harris et al. (2006) detected no influence of milk IgA in rodents on the level of gut microbiota, the gradual decrease in the supply of maternal IgA antibodies during the suckling period might explain the parallel increase in bacterial colonization (Inoue and Ushida, 2003; Inoue et al., 2005a,b). More importantly, these natural SIgA antibodies in milk (Brandtzaeg, 2003) were shown to decrease the spread of microbial pathogens through the population by reducing the pathogen load in the feces.

The third hypothesis revolves around Fcγ receptor-mediated suppression of B cell responsiveness. Swine lymphocytes can be inhibited in vitro by membrane-bound antibodies (Setcavage and Kim, 1978), perhaps indicating suppression through crosslinking of FcγRIIB and the B cell antigen receptor (D’Ambrosio et al., 1995; Phillips and Parker, 1983). Apart from this single example, all other positions are only theoretical.


                URL: https://www.sciencedirect.com/science/article/pii/B9780124158474001166

                The Mathematical Brain Across the Lifespan

                A. De Visscher, M.-P. Noël, in Progress in Brain Research, 2016

                1.2 Similarity Interference Through Development

While interference effects in retrieval are well documented in the literature, the interference effect during development has received less attention (see Campbell and Graham, 1985, who reported operand-related errors in fifth graders).

For decades, similarity between items has been shown to negatively affect the ability to recall and/or process those items. First, it has been shown that similarity-based interference takes place between items to remember and items to process. In other words, when participants have to recall items after a processing task, their performance is affected by the similarity between the material to remember and the material to process. For instance, in the experiment of Wickelgren (1965), participants had to remember four consonants. After the stimulus presentation, they had to copy eight consonants before recalling the four consonants. Results showed that similarity between the stored and processed information had a detrimental effect on performance. Second, similarity-based interference can also take place among the items to remember. For instance, Hall (1971) showed that the learning of nine nonwords is facilitated when they are dissimilar. In this study, students had to memorize nine associations between a two-digit number and a nonword (made of a consonant–vowel–consonant sequence). In one condition, the same letters were used several times across the different nonwords, while in another condition, the reuse of letters was minimized. High formal similarity between items resulted in lower performance in free recall and in a matching task. This experiment shows the detrimental effect of formal similarity between the items to be remembered.

To understand how similarity between items affects their memory traces, we turn to the enlightening feature model of Nairne (1990). In both short-term and long-term memory, memory traces are conceptualized as vectors, or lists, of features. These features can vary in qualitative type, quantitative value, and number. The similarity among items in a list, defined as the number of overlapping features across their respective trace vectors, substantially determines performance in a serial recall task. Interference occurs on a feature-by-feature basis: when a feature b matches a feature a, one or the other is lost through an overwriting mechanism.
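The overwriting principle is concrete enough to sketch. Below is a minimal Python illustration, not Nairne’s (1990) actual model: traces are feature lists, and a matching feature of an interfering item overwrites the stored feature with some probability (the representation and probability are our assumptions):

```python
import random

def overwrite(trace, interloper, p=0.5):
    """Feature-by-feature interference: where the interfering item matches a
    stored feature, that feature may be lost (None marks an overwritten slot)."""
    return [None if (a == b and random.random() < p) else a
            for a, b in zip(trace, interloper)]

stored = list("BKRT")                   # a stored trace as a list of letter features
print(overwrite(stored, list("BKLM")))  # similar item -> shared features at risk
print(overwrite(stored, list("XQZW")))  # dissimilar item -> trace survives intact
```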

Extending this feature model, Oberauer and Lange (2008) refined the explanation of the detrimental effect of similarity between items. Across three experiments, they showed that feature overlap, both between stored and processed items and among the items to remember, accounted for forgetting in a recall task. In contrast to a “pure” similarity-based interference theory, the position of the overlapping features is irrelevant in the feature overwriting model. In one of their experiments, for instance, four words and four letters were serially presented to participants, who were instructed to read them aloud and then recall them in the correct order. The overlap between the letters and the words was manipulated so that three of the four letters were present in one of the four words (eg, beer, fond, vote, silk, N, D, P, F). Results showed a higher probability of forgetting the word that shared features with the letters (fond) than the control words (see also the Serial-Order in a Box model in Lewandowsky et al., 2008).

In summary, studies on memory show that similarity between items to remember provokes interference during learning and deteriorates the capacity to remember them. Since arithmetic facts are built from combinations of the same 10 digits, one can consider that they share many of the same features. In the context of learning several arithmetic problems by rote, this similarity should provoke interference and make the task difficult. De Visscher and Noël (2013) therefore hypothesized that the learning of arithmetic facts is partly determined by similarity-based interference among the arithmetic problems. Accordingly, individual sensitivity-to-interference would influence the success of this learning, so that the more sensitive to interference someone is, the more difficult it will be to learn the arithmetic facts.

The high similarity among arithmetic facts has already been highlighted by connectionist models (Campbell, 1995; Verguts and Fias, 2005), which pointed out the similarity interference of neighboring problems (problems that share an operand). However, from a developmental perspective, no measure of how much a problem interferes with previously learned ones had ever been created. By developing a measure of the interference weight of each multiplication problem, the impact of similarity interference among arithmetic facts could be assessed in the typical and atypical development of the multiplication facts network.

In this perspective, De Visscher and Noël (2014b) aimed at testing the feature overlap theory (Nairne, 1990) in arithmetic facts learning, following the intuition that arithmetic facts interfere with one another particularly strongly since they combine the same 10 digits in different ways. To that end, each digit in a problem is considered a feature, and the similarity or overlap between two problems is approached by measuring the number of digits they have in common. Furthermore, the authors considered that proactive interference operates throughout the learning of arithmetic facts, depending on the order of learning, following the primary intuition and elements of Campbell and Graham (1985). As multiplications are specifically trained during primary school and constitute a specific network, the measure of proactive interference targeted the multiplication tables according to the common order of learning, that is, from table 2 to table 9. To measure the quantity of proactive interference that each multiplication problem receives, De Visscher and Noël (2014b) calculated the number of common digit associations between one problem and previously learned ones. To do so, the problem with its answer is considered as a whole and the order of appearance of the digits is not considered (eg, 3 × 9 = 27 includes the digits 2379). The total proactive interference score of a problem corresponds to the number of occurrences of common two-digit associations with previously learned problems. For instance, when learning 3 × 9 = 27, the combination 2–3 has been encountered in four previously learned problems (3 × 2 = 6, 3 × 7 = 21, 4 × 3 = 12, 3 × 8 = 24), the combination 2–7 in two problems (2 × 7 = 14, 3 × 7 = 21), and similarly for the combinations 2–9 (2 × 9 = 18), 3–7 (3 × 7 = 21), and 3–9 (3 × 3 = 9), but not for 7–9. Accordingly, the problem 3 × 9 = 27 receives an interference score of 9 (see De Visscher and Noël, 2014b for more details). The measure of the proactive interference that each problem receives has been called the “interference parameter”; it is a quantitative measure of how similar a problem is to previously learned problems (in terms of digits), and therefore of how much interference it receives (see Fig. 1). This parameter has been calculated for each of the 36 different multiplication problems, noting that commutative pairs are not distinguished (3 × 9 = 27 and 9 × 3 = 27 are considered the same).
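The counting procedure above is easy to state in code. The sketch below implements the digit-pair counting exactly as described and reproduces the worked score of 9 for 3 × 9 = 27 when given the previously learned problems cited in the text; computing the full parameter would additionally require the complete table-2-to-table-9 learning order from De Visscher and Noël (2014b), which we do not reproduce here:

```python
from itertools import combinations

def digit_set(a, b):
    """Digits occurring in a problem taken together with its answer, order and
    repetition ignored (eg, 3 x 9 = 27 -> {'2', '3', '7', '9'})."""
    return set(f"{a}{b}{a * b}")

def interference_parameter(problem, learned_before):
    """Occurrences of two-digit associations shared with previously learned
    problems, following the counting rule described in the text."""
    score = 0
    for pair in combinations(sorted(digit_set(*problem)), 2):
        score += sum(1 for p in learned_before if set(pair) <= digit_set(*p))
    return score

# Previously learned problems cited in the text for 3 x 9 = 27:
earlier = [(3, 2), (3, 7), (4, 3), (3, 8), (2, 7), (2, 9), (3, 3)]
print(interference_parameter((3, 9), earlier))  # -> 9, as in the worked example
```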


                Fig. 1. All 36 multiplications ordered according to the interference parameter, from the least interfering (at 12 o’clock) to the most interfering, following the clockwise direction. The intersection of the blue (gray in the print version) surface with the radian of each problem corresponds to the feature overlap with the previously learned multiplications, following the learning order from table 2 to table 9.

The first objective of De Visscher and Noël (2014b) was to determine whether the interference parameter could uniquely predict performance across multiplications, beyond the problem size (usually reflected by the products). A first analysis was run on the multiplication production data of normal adults published by Campbell (1997). This analysis revealed that the interference parameter was a significant predictor of performance across multiplication problems, in terms of reaction time and accuracy, beyond the problem size (which was also a significant factor). The more interfering a problem was, the lower the performance. These findings were replicated in three additional samples: 38 third-grade children, 42 fifth-grade children, and 46 undergraduates, who all undertook a multiplication production task. As with Campbell’s data, the interference parameter was a strong predictor of performance across multiplications (mainly speed in these samples), beyond the problem size factor. The similarity interference among arithmetic facts therefore impacts performance during learning, but also shows a long-lasting effect, since it still impacts performance in adulthood. In addition, the interference parameter accounted for the tie and five effects: when the interference parameter was entered in the model of reaction time, the tie and five effects were no longer significant. The explanation is that the digit combination of each tie or five problem is rare compared to other problems. For instance, tie problems include the same digit several times (eg, 7 × 7 = 49 includes two 7s, which is rare among the other problems). The five problems also have infrequent digit combinations, since they are almost the only problems using the digit 5 (only two non-five problems include a 5: 7 × 8 = 56 and 6 × 9 = 54).

Regarding individual differences, the sensitivity-to-interference in memory hypothesis holds that the more sensitive to the interference parameter someone is, the lower his or her multiplication performance should be. To test this assumption, De Visscher and Noël (2014b) calculated for each individual a multiple regression with the interference parameter and the problem size as independent factors and the reaction time across multiplication problems as the dependent variable. The slope of each predictor was used to represent the individual sensitivity to that factor, namely, the interference slope (sensitivity to the interference parameter) and the problem size slope (sensitivity to the problem size). Subsequently, for each group (grade 3, grade 5, and undergraduates), a multiple regression was run to test whether the interference slope and/or the problem size slope could predict global performance in multiplication (in terms of median reaction time in multiplication production). In all three groups, sensitivity to the interference parameter was a strong predictor of multiplication performance (partial correlations of 0.42 in Grade 3, 0.58 in Grade 5, and 0.58 in adults), beyond sensitivity to the problem size. These findings therefore support the hypothesis that sensitivity-to-interference in memory influences arithmetic fact retrieval performance.
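As a sketch, the per-participant step of this analysis fits an ordinary least-squares regression of reaction times on the two predictors and keeps the slopes; the variable names below are ours, with one array entry per multiplication problem:

```python
import numpy as np

def sensitivity_slopes(rt, interference, size):
    """Fit RT ~ intercept + interference parameter + problem size for one
    participant; the two slopes index sensitivity-to-interference and
    sensitivity-to-problem-size."""
    X = np.column_stack([np.ones(len(rt)), interference, size])
    (_, interference_slope, size_slope), *_ = np.linalg.lstsq(X, rt, rcond=None)
    return interference_slope, size_slope
```

The group-level step then regresses each participant’s median multiplication reaction time on these slopes.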


                URL: https://www.sciencedirect.com/science/article/pii/S0079612316300346

                Decay happens: the role of active forgetting in memory

                Oliver Hardt, ... Lynn Nadel, in Trends in Cognitive Sciences, 2013

                Current thinking on forgetting

                Forgetting of established long-term memory (see Glossary) may indicate that memory is either physically unavailable (that is, memory is lost) or that it is (temporarily) inaccessible. With some exceptions, theories proposed within the domains of experimental and cognitive psychology often emphasize one type of forgetting over the other [1]. Two explanations for actual, non-pathological memory loss have been proposed, one involving decay of aspects of the memory trace, the other involving interference with it.

                Current consensus favors the latter of these two explanations for actual memory loss (see Supplementary Material for an abbreviated history of decay theory). It is supposed that interference processes are responsible for much of everyday forgetting and the decay hypothesis has been generally rejected as an explanation for forgetting of long-term memories [1,2]. Interference manifests in two principal ways. First, shortly after initial learning, task-related or task-unrelated mental activity can impair memory, probably by disrupting cellular consolidation processes [3,4]. Second, the expression of established, fully consolidated long-term memory can suffer from interference at the retrieval stage [5]. For example, during retrieval, competing memories may interfere with the recall process. Although it was thought that this type of reproductive or output interference mainly determined whether or not a memory was retrieved [6], recent research on post-retrieval memory plasticity suggests that it could also affect the content of memory [7]. Because retrieval of consolidated memories induces plasticity in the relevant traces, subsequent exposure to new material can then affect the restabilization, or reconsolidation, of the reactivated memory, akin to what can happen after initial learning [8]. This can lead to the incidental incorporation of new material into the reactivated memory [9] or can in some circumstances decrease memory retention [10].

                Able to explain many experimental results, interference theories have pushed aside alternative accounts of forgetting. It was once widely assumed that, unless periodically recalled, long-term memory may ‘simply’ vanish and fade away over time due to some unspecified biological process [11]. It has long been known that the converse is true: regular use supports long-term memory maintenance. The recently well-documented beneficial effects of testing on retention [12] show that the act of recall promotes long-term memory preservation. It should be noted that frequent recall can also distort and impair memories, with the timing of recall after learning determining whether memory distortions or improvements will occur [13]. Memory in animals also benefits from repeated use [14]. When tested two days after learning to fear a certain spatial context, rats will express fear only towards the training context, but not towards other contexts. Three weeks after training, however, rats fear familiar and novel contexts alike. This change in memory for what place to fear can be prevented by reactivating the fear memory, that is, by re-exposing rats briefly to the training context several times during the three weeks between training and memory test. Rats reminded in this way will fear the spatial context in which training was carried out more so than they fear other contexts, whereas rats that have not been regularly re-exposed to the training context will fear the trained as well as other contexts equally [14]. However, these demonstrations of the effects of use provide only indirect support for the original notion of forgetting by decay and can be interpreted as supporting interference theory. It has not been easy to provide direct evidence of memory loss through disuse.

Notwithstanding the success of interference-based theories in describing the factors that promote forgetting, the truth is that we do not know why or how the brain actually forgets [15]. Our goal in this article is to discuss this age-old debate in the context of recent findings in the study of memory at both the cellular and systems levels, and to put forward a neurobiologically based framework for memory and forgetting. Recent advances in the study of the cellular/molecular underpinnings of long-term memory persistence, discussed in detail below, suggest memory decay as a major forgetting process. They allow us to assign organized memory removal a central role in the everyday forgetting of consolidated memories and in memory organization.

                It is generally assumed that forgetting is more a vice (i.e., dysfunction) than a virtue (i.e., constitutive process); however, the idea that forgetting might be beneficial for memory has been frequently expressed [3,4,16–19] and Jorge Luis Borges illustrated its essential role for the human experience in his short story about Funes [20]. As Funes could not forget anything, he could not live a normal life because a sea of unimportant details swamped every moment of awareness. We agree that, without constitutive forgetting, efficient memory would not be possible in the first place.

                In our view, decay-driven forgetting is a direct consequence of a memory system that engages in promiscuous encoding. The benefit of such promiscuity is access to a lot of information, so that ‘choices’ about what to keep and what to delete can be made off-line, mostly during certain sleep phases. The cost is the need for a dedicated forgetting mechanism that removes unwanted information. We propose decay as an active, well-regulated process, in contrast to the standard notion of decay as a passive process akin to radioactive decay. In our view, a well-regulated, dedicated process that systematically removes memories not only is more efficient, but can also be better controlled (up- or down-regulated), depending on specific demands and metaplastic constraints, which allows for greater flexibility and adaptability of the memory system.

                The circuit architecture of a given brain system, in particular the nature of its pattern separation capacities (i.e., the degree to which neural representations overlap, with orthogonal patterns being maximally separated) will determine whether interference or decay presents as the predominant forgetting mechanism. In systems with efficient pattern separation, such as the hippocampus, interference will be low or even absent. In systems with little pattern separation, encoding of new traces will necessarily cause interference. We propose that during certain sleep phases, such as slow-wave sleep or rapid eye movement (REM) sleep, when interference by new learning is not a factor, decay happens in all brain systems. In brain areas that exhibit low interference at all times, such as the hippocampus, decay will be the primary mechanism to prevent extensive interference, that is, a state of system failure induced by pattern overload.
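As a toy illustration of that architectural argument (our own, not from the article): two random binary patterns over the same units overlap in proportion to their density, so sparse, hippocampus-like codes collide far less than dense ones:

```python
import random

def sparse_pattern(n_units, k_active):
    """A pattern as the set of indices of its active units."""
    return set(random.sample(range(n_units), k_active))

def overlap(p1, p2):
    """Fraction of the first pattern's active units shared with the second."""
    return len(p1 & p2) / max(len(p1), 1)

dense = [sparse_pattern(1000, 300) for _ in range(2)]   # little separation
sparse = [sparse_pattern(1000, 20) for _ in range(2)]   # strong separation
print(overlap(*dense), overlap(*sparse))  # expected approx. 0.30 vs 0.02
```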


                URL: https://www.sciencedirect.com/science/article/pii/S1364661313000132

                Learning offline: memory replay in biological and artificial reinforcement learning

                Emma L. Roscow, ... Nathan Lepora, in Trends in Neurosciences, 2021

                Uniform experience replay and its origins in neuroscience

                Reinforcement learning in AI was developed in the mid-20th century, taking inspiration from earlier animal behaviour research [1]. In training reinforcement learning algorithms, an artificial agent collects data samples through continuous interaction with a real or simulated world, learning policies for selecting actions given the state of the environment in a way that maximises a reward function. Given limited online experience, learning can be accelerated by storing past experiences and subsequently sampling from them repeatedly, in effect to increase the training set.

                Experience replay first appeared in the AI literature in the early 1990s as a means to achieve such an increase [2] (Box 1) and grew in popularity with the advent of deep reinforcement learning and its applications to Atari games and Go in the early 2010s [14,15]. Independently, a series of neurophysiology studies beginning in the 1980s and 1990s found a similar phenomenon of reusing past experience in the mammalian brain (see Figure 1 and Box 2, and recent reviews for more detail [3,16–19]). These neuroscientific replay studies unveiled, among other insights, potential mechanisms of sleep-dependent memory consolidation, with replay providing a cellular basis for the long-standing observation that sleep supports memory [20].

                Box 1

                Experience replay in artificial intelligence

In discrete time sequences, incoming data samples are usually represented in the form of an experience tuple, consisting of the state $s_t$ at time step $t$, the action $a_t$ performed at that step, the reward $r_t$ obtained, and the next state $s_{t+1}$ at time step $t+1$. This experience tuple is first stored in a buffer and, during the learning phase, mini-batches of samples are drawn uniformly at random from this buffer.

In deep Q networks, these mini-batch samples are then used to learn the agent’s Q-value function, the expected future reward associated with each pair of state and action, using off-policy Q-learning. The Q-value function is policy-dependent, as it relies on data collected as a result of the agent’s actions, which are derived from its policy (behaviour). In the tabular setting, the Q-value function can be represented by a table of size $|S| \times |A|$, where $|S|$ is the number of states and $|A|$ is the number of actions in the environment. It is defined as:

$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\!\left[\, r_1 + \gamma r_2 + \gamma^{2} r_3 + \cdots \mid s_0 = s,\; a_0 = a \,\right] \tag{I}$$

                where γ is the discount factor that controls how much the agent prioritises immediate rewards against long-term rewards. The off-policy update rule for the Q-value function is:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left[\, r_{t+1} + \gamma \max_{a'} Q(s', a') - Q(s, a) \,\right] \tag{II}$$

                where α is the learning rate. The Q-value function is said to have been completely learnt when its values have converged.

As the agent continuously explores the environment and collects data samples, sooner or later the buffer will become full and the oldest samples will be replaced by newer ones. This strikes a balance between learning from the most recent samples and allowing older samples to ‘live’ longer than they would in the classical online learning setting. Experience replay has been shown to improve the learning efficiency of artificial agents [105].
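A minimal sketch of the buffer and the tabular update of Eq. II might look as follows; the capacity, batch size, dictionary representation of Q, and the `legal_actions` helper are our illustrative assumptions rather than details of any published implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (s, a, r, s') tuples: once full, the oldest
    samples are silently replaced, and mini-batches are drawn uniformly."""
    def __init__(self, capacity):
        self.data = deque(maxlen=capacity)

    def add(self, s, a, r, s_next):
        self.data.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(self.data, batch_size)

def q_update(Q, batch, legal_actions, alpha=0.1, gamma=0.99):
    """Apply the off-policy update of Eq. II to each sampled transition.
    Q maps (state, action) pairs to values (the tabular setting); legal_actions
    is a hypothetical, environment-specific helper."""
    for s, a, r, s_next in batch:
        best_next = max(Q.get((s_next, a2), 0.0) for a2 in legal_actions(s_next))
        q_sa = Q.get((s, a), 0.0)
        Q[(s, a)] = q_sa + alpha * (r + gamma * best_next - q_sa)
```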


                Figure 1. Replay of hippocampal place cells.

                Replay of hippocampal place cells during a single lap of a linear track. (A) Spiking of place cells during a sharp-wave ripple, representing a forward replay. (B) Sequential spiking of place cells that encode successive locations on the track; colours represent the location on the track to which the cell is tuned. Black trace at the top shows concurrent local field potential. Red and blue boxes outline bursts of spiking, which are magnified in A and C, respectively. (C) Spiking of place cells during a sharp-wave ripple, representing a reverse replay. (D) Animal’s running velocity, including periods of immobility before and after the run. Figure adapted from [119].

                Box 2

                Hippocampal replay

Pyramidal neurons in the hippocampus exhibit spatial receptive fields: their firing rate increases by as much as tenfold when the animal is in a particular location [106]. Taken together, such ‘place cells’ have been proposed to form a cognitive map of an environment, from which an animal may be able to plan routes, find shortcuts, and make other inferences. As the animal traverses a room or a habitat, the sequence of increased firing rates of one place cell after another can provide a read-out of the animal’s trajectory through the environment [29]. Following earlier predictions [107,108], a series of studies in the 1990s showed that pairs of place cells that were coactive during behaviour (i.e., encoding overlapping or adjacent locations on a maze) became coactive again when the animal was taken away from the maze and left to rest or sleep in one place [31–34]. This reactivation of place cell pairs exceeded both the level of chance and the level of their coactivation during rest before exploring the environment; that is to say, the hippocampal trace of previous behaviour was being replayed during rest, when the animal was not running or exploring and the hippocampus was otherwise unengaged with the task of navigation (Figure 1).

                Further research has shown that such replay extends outside the hippocampus to cortical [35–37] and limbic [38–40] brain areas, which are involved in processing nonspatial information, suggesting a brain-wide phenomenon in which many facets of an experience, including sensory and reward-related properties, can be reactivated together.

In humans, the noninvasive, nonsurgical experimental methods usually required for recording neural activity offer lower spatial and temporal resolution, making replay detection more difficult. Nevertheless, classifiers trained on human neural activity during a task show hippocampal reactivation of task representations during subsequent rest, with a bias towards replaying items that are highly rewarded and subsequently better remembered [109,110]. Replay has also been shown to selectively strengthen weaker memories [111] and to re-evaluate state-action values for reinforcement learning [13], with tentative evidence of hippocampal-to-cortical transfer of task-relevant information [112].

Evidence for the causal role of replay in memory consolidation has come from studies in which sharp-wave ripples in the hippocampus are either disrupted or prolonged. Hippocampal replay relies on synchronous excitation of large neuronal populations, which occurs during sharp-wave ripples [31]: distinctive, transient bursts of high-frequency oscillatory activity [113] that promote the firing of a subset of pyramidal cells, resulting in replay sequences [26,27]. Disrupting the ripples, which also disrupts coincident replay events, results in slower spatial learning on timescales of minutes [4,5,10] and days [41,42]. Disrupting the replay event but not the ripple itself (technically a much harder feat) has also been shown to slow down learning [43]. Extending the duration of ripples, conversely, appears to increase replay and improve spatial memory [5]. Finally, studies that invoke or bias replay of some experiences over others can selectively improve memory for those items [6–8,44]. The definitive evidence for a link between replay and memory consolidation would be performance improvement following the induction of a replay event from scratch. Such a test, however, is technically challenging and to our knowledge has not been achieved experimentally so far.

                How does replay improve memory? One of the leading hypotheses is that replay induces Hebbian plasticity between the cells being replayed [21–25], thereby strengthening their synaptic connections. Replay events, particularly during non-rapid eye movement (non-REM) sleep, typically reiterate neural patterns on a faster timescale than during the original experience [26–28], which might further encourage spike-timing-dependent plasticity [21]; one could call this the ‘offloading plasticity until later’ theory of how replay supports memory consolidation. However, while the importance of replay for spatial (hippocampus-dependent) memory has been established, questions remain about which aspects of the experience are represented in replayed activity, how replay patterns propagate through the brain, and the roles of replay in wider cognitive processes.

                Computational studies have suggested functions for replay that extend beyond memory consolidation (Box 3). The complementary learning systems theory has used the so-called ‘penguin problem’ to illustrate the necessity of maintaining a network that is stable enough for acquired knowledge to persist, but plastic enough to incorporate new knowledge [29]. In this illustration, a network was trained to classify living things from their characteristics, before being presented with an anomalous semi-aquatic bird (the penguin) that has feathers and wings like a bird, but does not fly and swims like a fish. Updating the connection weights to incorporate this new information disrupted and worsened performance for other birds [29]. The proposed solution was replay, interleaving training of the new item (penguin) with older, similar items (other birds); this proved sufficient to maintain both representations without interference. One could call this the ‘preventing catastrophic interference’ theory of how replay improves memory consolidation, an idea that dates back to connectionist models early in the history of artificial neural networks [30].
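A hedged sketch of the interleaving remedy (the names and the one-to-k mixing ratio are our illustration, not the simulation from [29]): each training batch pairs the anomalous new item with replayed older examples, so that weight updates do not overwrite prior knowledge:

```python
import random

def interleaved_batches(new_item, old_items, k_old=8, n_steps=100):
    """Yield batches mixing the new item (the 'penguin') with replayed older
    items (the other birds), the remedy for catastrophic interference."""
    for _ in range(n_steps):
        yield [new_item] + random.sample(old_items, k_old)
```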

                Box 3

                Proposed computational functions of replay

                Replay has been proposed to serve a variety of cognitive and network functions for learning, some of which depend biologically on brain area and sleep–wake state (during behaviour, extended rest, rapid eye movement (REM) sleep, or non-REM sleep). The theories and perspectives outlined briefly in the following bullet points are not mutually exclusive and some of them are not unique to reinforcement learning, but they suggest how replay can support learning from individual rewarded episodes.

Consolidation of new memories rapidly encoded by a fast learner (the hippocampus) into long-term storage in a slow learner (the cortex) [57]. Memories can be encoded rapidly in the hippocampus but take longer to be integrated into cortical representations (but see [114,115]). Replay serves as additional training to supplement online learning, in order to strengthen cortical representations [116].

Generalising across episodes. Representations formed quickly and sparsely, as in the hippocampus, enable pattern separation, which is beneficial for one-shot learning or retention of individual episodes of experience, but poorly suited to generalising across episodes. By contrast, a slower-learning cortex can integrate multiple episodes and encode statistical regularities between them, but takes many examples to achieve this (but see [117]). Replay serves as additional training for the cortex from individual episodes initiated by the hippocampus.

                Preventing catastrophic interference. Gradual interleaving of online and offline information regularises synaptic changes or weight changes to ensure that network representations of older information are not supplanted by those of newer information [30,57].

                Stabilising learning. Experiences close in time tend to be correlated, which can result in large, fluctuating weight changes. Interleaving uncorrelated samples constrains weight changes for more stable learning [57,118].

                Models of replay are often concerned with its role in reinforcement learning. Although biological replay is discussed commonly in terms of episodic memory consolidation and the integration of new memory traces into long-term storage [3,16–19,29,31–37], evidence from animal studies usually comes from spatial navigation tasks in which food rewards or electrical brain stimulation are used to reinforce exploration and navigation of an environment [4,5,8,10–12,26,27,29,31–48]. The additional plasticity that replay incurs may itself reinforce habitual behaviours that are driven by the replayed activity patterns [39]. Replay of activity in the hippocampus alone is necessary for stabilising newly formed representations of the environment, ensuring that learned state transitions are maintained for subsequent visits [49,50]. In addition, the recruitment of other brain areas that are involved in evaluating likely outcomes and rewards during replay may promote further updating of stored action values or state values in neural reinforcement learning circuits [13,40,48,51].

                These two functional roles of replay (preventing catastrophic interference and facilitating reinforcement learning) are particularly relevant for deep reinforcement learning. The fact that the use of experience replay was crucial to the first notable success of reinforcement learning with deep neural networks showed that these proposed functions of replay have application beyond the mammalian brain and extend to artificially intelligent systems [14]. In this work, an artificial neural network composed of several convolutional and fully connected layers received visual input (images from Atari 2600 computer games) while producing joystick movements to play the game. Its learning algorithm builds on classical Q-learning, which maps states (visual input) to actions (joystick output). The error generated by the Q-learning loss function is then used to train the deep neural network using the backpropagation algorithm. This process gradually optimises the neuron parameters (e.g., synaptic weights) towards an optimal mapping between the state space (Atari images) and the possible actions (Figure 2).


                Figure 2. Deep Q network (DQN) with memory buffer.

DQN with memory buffer. Top: a DQN trained to play the Atari game Boxing. At every time-step t, the DQN outputs an action corresponding to a joystick movement (1), which causes the game to produce a new reward (game score) and a new observation (pixel values; 2). The observation is transformed into a series of four visual frames that make up the state, and the tuple of state, reward, action, and subsequent state is stored in the replay buffer (3). These tuples are then sampled from the buffer and replayed to the DQN so that it optimises a function mapping state inputs to action outputs in order to maximise reward (4). Bottom: at each update, the Q-value for a given pair of state s and action a from the sample is updated according to the difference between the observed value and the expected value, where the observed value is the reward r from the sample added to the expected future reward max Q(s′, a′) discounted by a factor γ, and the expected value is the previous Q-value of the state-action pair. Over repeated updates, the Q-values converge on an approximation of how a state-action pair (top node) maps onto possible subsequent states (white nodes) and actions (bottom nodes).

A necessary element of the deep Q network (DQN) is that past trials are stored in a memory buffer and regularly played back to the network. Because the incoming training data depend on the agent’s previous actions, the distribution of the training data is prone to shift as the agent’s action policy shifts, leading to nonstationary data distributions. Such temporal correlations between successive online learning trials can cause a phenomenon known as catastrophic forgetting, where weight parameters undergo changes that optimise for the most recent gameplay at the cost of older gameplay; behaviour learned from a previous task is rapidly overwritten by the agent’s new behaviour. Experience replay proved a crucial intervention to break these temporal correlations, which stabilises learning and ultimately leads to much improved performance.

Experience replay itself was proposed in machine learning long before the deep reinforcement learning breakthrough, but it was only when experience replay and deep reinforcement learning were combined that closer parallels with replay in the brain emerged [14]. It can be argued that DQNs exemplify how artificial neural networks can be used as models of biological learning to test theories of how replay can support learning. Manipulating biological replay has largely been limited to broad disruption of replay patterns [4,5,10,41,42], typically during a relatively brief period immediately following learning, which is found to diminish learning. Whereas in DQNs the consequences of manipulations such as including, excluding, or tweaking replay have been examined comprehensively [14,52], the effects of such manipulations in biological settings have been studied far less.

However, the differences between how experience replay is implemented algorithmically and how biological replay occurs physiologically merit further consideration. In the early experience replay method [14], mini-batches of past samples were drawn at random from the memory buffer. The buffer was set with an arbitrary capacity of the most recent one million trials; when full, the older samples in memory were replaced with new ones to maintain relatively recent training data. Several of these features of artificial experience replay (specifically, the storage of exact copies of previous trials [53,54], the prioritisation of recent experience [55], uniform sampling from the memory buffer [56], and its fixed capacity [53]) have been developed further in recent years, and all of them can be said to be unrepresentative of biological replay to varying degrees. In the following sections, we highlight how advances in experience replay algorithms have come closer to replicating biological replay and the implications for replay as a cognitive mechanism.
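One of those later developments, prioritised sampling, can be sketched by drawing samples with probability proportional to a priority raised to an exponent; this follows the general recipe of prioritised experience replay rather than the exact algorithm of any reference above:

```python
import random

def prioritized_sample(samples, priorities, batch_size, alpha=0.6):
    """Draw a mini-batch with probability proportional to priority**alpha;
    alpha = 0 recovers the uniform sampling of the early method."""
    weights = [p ** alpha for p in priorities]
    return random.choices(samples, weights=weights, k=batch_size)
```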


                URL: https://www.sciencedirect.com/science/article/pii/S0166223621001442

                Anosognosia, denial of illness and the right hemisphere dominance for emotions: Some historical and clinical notes

                Guido Gainotti, in Consciousness and Cognition, 2018

                4 Some notes on the history of the concept of denial of illness

                Some years after his first presentation of patients with left-sided hemiplegia who ignored, or seemed to ignore, their striking motor defect, Babinski (1923) made the paradoxical observation that some of these patients had for many years been very afraid of the very condition they now apparently ignored. This dramatic change from anxious expectancy to lack of concern suggested that motivational factors might lead some patients to deny a condition that they are unable to accept. This interpretation was formally advanced after the advent of Freud and the development of psychoanalytical theory. Some authors suggested that lack of awareness of a disability could be a form of organic suppression (Schilder, 1932) or an avoidance reaction (Goldstein, 1939), through which the patient excluded from consciousness the unacceptable facts of hemiplegia or blindness. These explanations drew specifically on two constructs of psychoanalytical theory, namely the ‘unconscious’ and ‘defence mechanisms’. Mancia (2006) noted that in Freud’s theory it is possible to distinguish a ‘dynamic’ unconscious (resulting from an active mechanism of motivated suppression of conscious information and based on defence mechanisms) from a ‘non-removed’ unconscious (Freud, 1922), which refers to events (usually experienced in the earliest periods of life) for which an active process of removal cannot be hypothesized.

                Some years after Schilder’s (1932) and Goldstein’s (1939) work, Weinstein and Kahn (1955) introduced the term ‘denial of illness’ in a book that represented the culmination of several years of intensive investigation of behavioural changes in patients with severe brain disease. With this term they defined a form of social behaviour in which the patient adapts to the stress of his disability and makes symbolic references to it through an altered mode of interaction with the environment. These authors indicated that ‘denial of illness’ usually involves more than one aspect of illness, that patients who denied physical disabilities commonly also denied the existence of other personal problems, and that patients with complete explicit denial showed no anxiety and were bland and affable when interviewed. More recently, it has been generally acknowledged that denial serves a useful purpose in helping people cope with sudden intolerable changes, and that it is harmless as long as it does not last too long, as is usually the case.

                Several observations suggest that in the acute stage of left-sided hemiplegia lack of awareness and denial of hemiplegia may coexist. Marcel, Tegnér, and Nimmo-Smith (2004) noted that patients who show no awareness of their deficits in response to a direct verbal question often demonstrate reluctance or verbal circumlocution when asked to perform an online task: they may find excuses not to perform a bimanual task, even though they do not admit that this is because of their paralysed arm. This dissociation between implicit and explicit forms of anosognosia has been discussed by several authors (e.g., Cocchini, Beschin, Fotopoulou, & Della Sala, 2010; D’Imperio, Bulgarelli, Bertagnoli, Avesani, & Moro, 2017; Fotopoulou, Pernigo, Maeda, Rudd, & Kopelman, 2010; Mograbi & Morris, 2013; Moro, 2013; Nardone, Ward, Fotopoulou, & Turnbull, 2007).

                For instance, Nardone et al. (2007) tested the hypothesis that some implicit knowledge of the deficit may persist when lack of awareness is driven by the emotionally aversive consequences of bringing deficit-related thoughts to consciousness. They investigated this issue by presenting anosognosic and non-anosognosic patients with words associated with the hemiplegic deficit: non-anosognosic patients displayed reduced latencies (i.e., facilitation) for emotionally threatening words, whereas anosognosic patients showed increased latencies (i.e., interference), supporting the claim of implicit awareness. Cocchini et al. (2010) likewise showed that explicit and implicit awareness of motor deficits can be dissociated and may be differently affected by feedback. Moro (2013) proposed that ‘emergent awareness’ (i.e., the emergence of a verbal acknowledgement of deficits as a consequence of attempting to act) could represent a link between the implicit and explicit components of awareness. Mograbi and Morris (2013) reviewed clinical observations and experimental evidence suggesting the occurrence of implicit awareness in dementia and hemiplegia, and presented a theoretical framework hypothesizing parallel routes for processing similar information. Finally, D’Imperio et al. (2017) demonstrated that emergent awareness can be observed in anosognosic patients who attempt actions that are impossible for hemiplegic patients as well as actions that are potentially dangerous. Taken together, these results lead to the conclusion that explicit and implicit awareness of motor deficits can be dissociated, suggesting that different underlying mechanisms may account for the multi-factorial phenomenon of anosognosia.
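
                To make the logic of this latency measure concrete, the following minimal sketch shows how a facilitation or interference effect could be computed from raw reaction times. All numbers, group labels, and the neutral-word baseline are hypothetical illustrations, not data or analysis code from Nardone et al. (2007).

```python
from statistics import mean

# Hypothetical reaction times (ms) to deficit-related vs. neutral words.
rts = {
    ("anosognosic", "threat"):  [812, 845, 790, 880],
    ("anosognosic", "neutral"): [700, 720, 690, 710],
    ("control", "threat"):      [640, 655, 630, 660],
    ("control", "neutral"):     [700, 695, 710, 705],
}

for group in ("anosognosic", "control"):
    # Effect = mean change in latency for threat words relative to neutral words.
    effect = mean(rts[(group, "threat")]) - mean(rts[(group, "neutral")])
    # Positive = slowing (interference); negative = speeding (facilitation).
    label = "interference" if effect > 0 else "facilitation"
    print(f"{group}: {effect:+.1f} ms -> {label}")
```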

                On the other hand, the coexistence of cognitive and motivational factors more or less clearly affecting disease awareness in patients with acute forms of left-sided hemiplegia is suggested by the observation that, although most of these patients verbally admit their motor defect, their emotional reaction is inappropriate given the severity of their disability. This fact had already been noted by Babinski (1914), who wrote: “I have also observed some hemiplegics who, without being unaware of the existence of their paralysis, seemed not to attach any importance to it, as if it were a matter of an insignificant discomfort. Such a state could be called anosodiaphoria (ἀδιαφορία, indifference, unconcern).” Some years later, Critchley developed and expanded these observations of Babinski. He reported instances of ‘anosodiaphoria’, i.e., minimization of or apparent indifference to the existence of the handicaps (Critchley, 1957); of ‘personification of paralysed limbs in hemiplegics’ (Critchley, 1955); and of ‘misoplegia’, i.e., a morbid dislike or hatred of paralysed limbs in patients with hemiplegia (Critchley, 1974). All of these phenomena can develop before, during, and after unawareness of hemiplegia (Mograbi & Morris, 2013).

                A further observation supporting the coexistence of cognitive and motivational factors in patients with apparent unawareness of their left-sided hemiplegia comes from following their behaviour from the acute to the sub-acute stages of the disease. Gainotti (1972) reported that patients who showed an explicit denial of their disability in the first days after the stroke began to acknowledge the motor defect some days later, but tended to minimize it and to attribute their disability to trivial factors (e.g., weariness, injections, arthrosis). In this stage of the disease they also typically showed the phenomena of anosodiaphoria, misoplegia, and personification of the paralysed limbs that had attracted Critchley’s (1955, 1957, 1974) attention. Only at a later stage, when the patient explicitly admitted the pathological nature and severity of his disability, did anosodiaphoria progressively disappear, leaving room for anxiety and depression.

                These findings might give the impression that anosognosia can protect stroke patients from depression. Indeed, according to Gainotti et al. (1997), depression is a psychological reaction to awareness of the personal aftereffects of stroke, and in line with this idea, the work of Weinstein and Kahn (1955) and later revisions of the emotion-regulation interpretation (e.g., Turnbull & Solms, 2007) suggested that lack of awareness might ‘protect’ against depression. If this position were correct, we should expect a negative correlation between severity of depression and level of disease unawareness; however, no correlation between these two variables was found by Starkstein, Fedoroff, Price, Leiguarda, and Robinson (1992) or by Cocchini, Crosta, Allen, Zaro, and Beschin (2013). We might also expect depression to be more frequent in patients who are less likely to show anosognosia (i.e., in left, rather than right, brain-damaged patients), but this prediction was not confirmed by Carson et al. (2000).

                Read full article

                URL: https://www.sciencedirect.com/science/article/pii/S1053810017303653

                Sleep and hippocampal neurogenesis: Implications for Alzheimer’s disease

                Brianne A. Kent, Ralph E. Mistlberger, in Frontiers in Neuroendocrinology, 2017

                3 Impaired pattern separation in Alzheimer’s disease

                The hallmark symptom of AD is impaired episodic memory, defined as memory for autobiographical episodes or personal events that include temporal-spatial components (Tulving, 1972). Importantly, failure of episodic memory does not always reflect forgetting an event over time; it can also result from confusing distinct events. For example, some evidence suggests that even though patients diagnosed with AD exhibit profound memory deficits, they do not necessarily show accelerated rates of forgetting, which have traditionally been considered a hallmark of the disease (Christensen et al., 1998; Money et al., 1992). Instead, the specific memory impairment exhibited by patients with Mild Cognitive Impairment (MCI) or AD may reflect false recognition (Hart et al., 1985; Budson et al., 2001; Gold et al., 2007; Hildebrandt et al., 2009; Plancher et al., 2009; Abe et al., 2011; Yeung et al., 2013). This is partly because memories of our everyday lives often involve similar routines and environments, which makes episodic memory particularly vulnerable to interference (Tulving, 1972).

                Overcoming interference is essential for accurate memory and may also contribute to the cognitive deficits caused by hippocampal damage and AD. The interference theory of the amnestic syndrome was proposed by Warrington and Weiskrantz (1970, 1978) to explain the surprising finding that amnestic patients could demonstrate good verbal retention under certain conditions. The researchers found that the method used to evaluate memory in amnestic patients was a crucial determinant of the degree of mnemonic deficit exhibited. This followed from the paradoxical finding that providing partial information, such as fragmented letters or the initial letters of a word, was a more effective retrieval strategy than showing the patient a whole word and asking them to respond “yes” or “no” as to whether it was a target word seen in the previous list (Warrington and Weiskrantz, 1970). The experiments demonstrated that retrieval by partial information reduced false-positive responses. The authors concluded that long-term memory could be demonstrated in amnestic patients when the method of retrieval minimized interference.

                There is some evidence that increased vulnerability to interference is an early manifestation of disease in patients with dementia and worsens with disease progression (Loewenstein et al., 2004, 2007). Loewenstein et al. (2004) showed that, after controlling for overall memory impairment, AD patients and patients diagnosed with MCI were more affected by proactive and retroactive semantic interference than age-matched controls when asked to recall common household objects that had previously been presented. Interference was manipulated by presenting new but semantically related objects at different time points. This increased vulnerability to interference has been linked to false recognition memory in a mouse model of AD (Romberg et al., 2012) and complements the higher rates of false memories exhibited by MCI and AD patients (Budson et al., 2001; Yeung et al., 2013).
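
                As a concrete illustration of this design logic, the sketch below computes proactive and retroactive interference scores from a two-list recall layout. The lists, recall data, and baseline values are invented for illustration and are not the published scoring procedure of Loewenstein et al. (2004).

```python
def proportion_recalled(recalled, studied):
    """Proportion of studied items produced at recall."""
    return len(recalled & studied) / len(studied)

list_a = {"cup", "spoon", "kettle", "plate"}  # first household-object list
list_b = {"mug", "fork", "pot", "bowl"}       # semantically related second list

recall_b = {"mug", "pot"}          # recall of list B, learned after list A
recall_a_delayed = {"cup"}         # delayed recall of list A, after learning B

baseline_b = 0.75  # hypothetical recall of list B with no prior list
baseline_a = 0.75  # hypothetical recall of list A with no second list

# Proactive interference: the old list depresses recall of the new list.
pi = baseline_b - proportion_recalled(recall_b, list_b)
# Retroactive interference: the new list depresses recall of the old list.
ri = baseline_a - proportion_recalled(recall_a_delayed, list_a)
print(f"proactive interference:   {pi:.2f}")  # 0.75 - 0.50 = 0.25
print(f"retroactive interference: {ri:.2f}")  # 0.75 - 0.25 = 0.50
```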

                Underlying the heightened susceptibility to interference, contributing to higher rates of false memories associated with AD, could be a specific deficit in pattern separation (Budson et al., 2001; Kent et al., 2016). Evidence from functional Magnetic Resonance Imaging (fMRI) combined with cognitive tasks designed to evaluate pattern separation, suggests that humans exhibit age-related deficits of pattern separation that are more pronounced in patients diagnosed with MCI or AD (Yassa et al., 2010; Ally et al., 2013).

                To study pattern separation, Kirwan and Stark (2007) developed a continuous recognition paradigm that presents subjects with a series of photographs and asks them to identify which pictures have been presented before, differentiating repeated pictures from similar (i.e., lure) and dissimilar pictures. Lures were hypothesized to place increased demands on pattern separation because their overlapping object features cause interference. Using this paradigm, it was shown that older adults were more likely to commit false-positive errors and wrongly identify the lures as familiar (Toner et al., 2009; Yassa et al., 2011), suggesting an age-related deficit in pattern separation. This age-related impairment was replicated using a delayed match-to-sample task that varied the distance between dots presented on a screen (Holden et al., 2012) and an object-location task that varied the spatial displacement of images of everyday objects on the screen (Reagh et al., 2014). In all of these memory tasks, older subjects performed worse than younger subjects when the similarity between the stimuli was high.
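
                The following sketch illustrates one plausible way to summarize responses in such a continuous recognition task. The trial coding and the simple difference score are assumptions chosen for clarity, not the exact analysis reported by Kirwan and Stark (2007).

```python
from collections import defaultdict

# Hypothetical (item_type, responded_old) pairs from a continuous stream.
trials = [
    ("repeat", True), ("lure", True), ("novel", False),
    ("repeat", True), ("lure", False), ("novel", False),
    ("lure", True), ("repeat", True), ("novel", False),
]

counts = defaultdict(lambda: [0, 0])  # item_type -> ["old" responses, total]
for item_type, said_old in trials:
    counts[item_type][0] += said_old
    counts[item_type][1] += 1

rate = {t: old / total for t, (old, total) in counts.items()}
# Lure false alarms far above novel-item false alarms indicate that similar
# items are mistaken for repeats, i.e., failed pattern separation.
print(f"hit rate (repeats): {rate['repeat']:.2f}")  # 3/3 = 1.00
print(f"lure false alarms:  {rate['lure']:.2f}")    # 2/3 = 0.67
print(f"novel false alarms: {rate['novel']:.2f}")   # 0/3 = 0.00
print(f"interference cost:  {rate['lure'] - rate['novel']:+.2f}")
```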

                To evaluate whether this age-related impairment was more pronounced in elderly participants with early signs of cognitive impairment, Yassa et al. (2010) used a similar continuous pattern discrimination task to compare healthy older adults with patients diagnosed with amnestic MCI (aMCI). As before, subjects were asked whether an image had been presented previously and had to discriminate between repeated photographs, perceptually similar lure items, and novel items. The aMCI patients were unable to effectively discriminate between repeated and lure items. These findings have been replicated (Stark et al., 2013) and extended to show that patients diagnosed with mild AD perform even worse than patients diagnosed with aMCI (Ally et al., 2013). Furthermore, the cerebrospinal fluid concentration of Aβ was found to correlate with performance in this paradigm, specifically with the ability to make difficult, but not easier, discriminations (Wesnes et al., 2014).

                A visual discrimination task for evaluating pattern separation in humans was developed by Barense and colleagues (2012). In their task, the stimuli were abstract, blob-like objects consisting of three distinct features: an inner shape, an outer shape, and a fill pattern. On each trial, two objects were presented simultaneously but rotated to prevent a simple matching strategy, and participants were asked whether the objects were identical. The task had two conditions: (1) high interference, consisting of consecutive trials of high-ambiguity object discriminations, and (2) low interference, consisting mostly of photographs of easily discriminable everyday objects intermixed with high-ambiguity blob-like images. Using this paradigm, it was shown that patients at risk for MCI and patients diagnosed with aMCI were impaired in the high interference condition compared to healthy older adults (Newsome et al., 2012). Notably, performance improved when the number of similar features viewed across trials was reduced to minimize perceptual interference.
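
                To clarify how the two conditions differ, the sketch below assembles hypothetical trial lists for each. The stimulus names and counts are illustrative assumptions rather than the published stimulus set of Barense et al. (2012).

```python
import random

# Hypothetical stimulus labels, not the published stimulus set.
blob_trials = [f"blob_pair_{i}" for i in range(20)]   # high-ambiguity discriminations
easy_trials = [f"photo_pair_{i}" for i in range(60)]  # easily discriminable objects

# High interference: consecutive high-ambiguity discriminations, so
# overlapping features accumulate across successive trials.
high_interference_block = list(blob_trials)

# Low interference: the same high-ambiguity trials diluted among easy ones,
# so far fewer similar features are viewed in succession.
low_interference_block = blob_trials + easy_trials
random.shuffle(low_interference_block)
```

                The critical manipulation is that the ambiguous discriminations themselves are identical across conditions; only the featural overlap accumulated across consecutive trials differs, so a selective deficit in the high interference condition can be attributed to interference rather than to discrimination difficulty per se.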

                Yeung et al. (2013) followed this up by designing another task to evaluate how this increased susceptibility to interference affects recognition memory in older adults at risk for MCI. Their study used an eye-tracking-based methodology and presented subjects with photographs of everyday objects belonging to 12 semantic categories (e.g., coffee mugs, diamond rings, and socks). Participants were shown images from one semantic category during each testing block. Within each block, half of the images were shown during the study phase and the other half during the test phase. Images shown during the test phase were categorized as high interference foils if they were perceptually similar objects with high feature overlap within the same semantic category, or as low interference foils if they were not perceptually similar but belonged to the same semantic category. Interestingly, because the images were presented in a continuous stream, participants were unaware when the study phase ended and the test phase began. Eye movements associated with novelty detection were used as an indirect measure of memory. The results revealed that patients at risk for MCI falsely recognized the high interference novel objects as previously viewed, compared with healthy older and young adults.
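
                The logic of this indirect measure can be sketched as follows: genuinely novel items normally attract longer viewing than repeated ones, so a reduced novelty advantage for high interference foils is a signature of false recognition. The field names and viewing times below are hypothetical, not data from Yeung et al. (2013).

```python
from statistics import mean

# Hypothetical mean viewing times (ms) per test-item type.
viewing_ms = {
    "repeated":               [900, 950, 880],
    "low_interference_foil":  [1400, 1350, 1450],
    "high_interference_foil": [1000, 980, 1020],
}

base = mean(viewing_ms["repeated"])
for foil in ("low_interference_foil", "high_interference_foil"):
    # Novelty advantage: extra viewing a foil attracts relative to repeats.
    advantage = mean(viewing_ms[foil]) - base
    print(f"{foil}: novelty advantage = {advantage:+.0f} ms")
# A small advantage for high-interference foils suggests they were falsely
# treated as familiar, i.e., false recognition driven by interference.
```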

                Although these studies by Barense and colleagues (Barense et al., 2012; Newsome et al., 2012; Yeung et al., 2013) did not explicitly evaluate pattern separation processes, the increased susceptibility to interference may reflect an underlying deficit in pattern separation (Kent et al., 2016). These tasks were inspired by experiments using the tgCRND8 mouse model, which overexpresses Aβ and exhibits memory impairments resulting from enhanced encoding of interfering information that leads to false memories (Romberg et al., 2012). It was hypothesized that a selective deficit in pattern separation increased the mouse model’s susceptibility to interference, paralleling the false memories exhibited by aMCI and AD patients (Balota et al., 1999; Budson et al., 2001, 2006; Newsome et al., 2012; Dewar et al., 2012; Yeung et al., 2013).

                However, it should be noted that the recognition tasks used by Romberg et al. (2012) and the tasks developed by Barense and colleagues discussed above were designed to rely on the perirhinal cortex, which is not a region of adult neurogenesis. The mechanisms underlying pattern separation in the perirhinal cortex are not known, but it is possible that similar plasticity-related mechanisms, such as brain-derived neurotrophic factor (BDNF), are necessary in both the hippocampus and the perirhinal cortex. One hypothesis is that the complex, information-rich representations in the hippocampus that underlie episodic memory and contextual learning require ultra-responsive, highly plastic new neurons when encoding highly similar inputs, whereas the relatively simpler object-based representations mediated by the perirhinal cortex may not require neurogenesis for pattern separation.

                Read full article

                URL: https://www.sciencedirect.com/science/article/pii/S0091302217300109

                What is it called when previously remembered information interferes with memory for new information?

                Proactive interference occurs when previously learned information hinders the retrieval of new information. In other words, old memories interfere with the retrieval of new memories.

                What is it called when information that has previously been remembered interferes with memory for new information?

                Proactive interference is when previously learned information interferes with new information, and retroactive interference is when new information interferes with previously learned information.

                What type of memory is associated with remembering recent information?

                Episodic memory. Episodic memory refers to memory for particular events situated in space and time, as well as the underlying cognitive processes and neural mechanisms involved in remembering those events.

                What is it called when we retain information over time?

                Storage is the retaining of information over time. Retrieval is the ability to get encoded material back into awareness.