Cognitive Load Theory is a concept that has become increasingly popular in education, with many teachers starting to use its principles in their classrooms. If you are not already familiar with this theory, here is a quick rundown of what it is, its benefits, and how CENTURY has been designed with these key principles in mind.

Memory has several components: sensory memory, working memory and long-term memory. Before we get into the details of Cognitive Load Theory, we first need to take a look at how these different types of memory work. 

Sensory memory is the storage of information received from our senses, for example seeing a dog or smelling roses. It is transient, lasting only a few hundred milliseconds.

Information from sensory memory may be selected for temporary storage and processing in working memory, which holds new information in place for a little longer, up to roughly 20 seconds. This gives the brain enough time to connect it with other information.

Cognitive load refers to the amount of information that can be held by working memory at once. 

Cognitive Load Theory was developed by educational psychologist John Sweller in the late 1980s. It claims that our working memory can only hold a very limited amount of information at any given time, approximately three or four items’ worth, and that instructional materials must therefore be designed in a way that reflects how much we are able to remember.

The theory suggests there are three types of cognitive load placed on the working memory during any learning experience.

Intrinsic load refers to the innate difficulty or complexity of a task for a particular learner, determined by a number of factors, including the inherent difficulty of a task, the level of interactivity of the task, and the prior knowledge of the learner. For example, adding and subtracting single-digit numbers has a lower intrinsic load (for most learners) than adding and subtracting fractions.

If intrinsic load is not properly managed, learners can become overwhelmed. While a topic’s inherent level of difficulty cannot be changed, intrinsic load can be reduced by breaking down the subject content and sequencing the delivery so that sub-tasks are taught individually before being brought together as a whole.

Germane load refers to the mental processing capacity for converting information into our long-term memory by linking new knowledge to existing knowledge. 

Extraneous load refers to any cognitive effort that does not help the learner to achieve the desired learning outcome. The additional load can be imposed upon the learner as a result of poorly designed instructional materials that include visuals and text that don’t directly contribute to the learning goal, for example irrelevant animations or unnecessarily complicated vocabulary. 

By reducing extraneous cognitive load, capacity is freed up in the learner’s working memory that can instead be taken up by intrinsic and germane cognitive load. 

So, how have CENTURY’s resources been designed to manage cognitive load?

When designing learning materials, we focus on minimising extraneous load, managing intrinsic load and maximising germane load, ensuring that as much relevant content as possible can be converted into the learner’s long-term memory.

The design of CENTURY’s learning materials has been influenced by cognitive load theory, for example:

  • To reduce extraneous cognitive load, we avoid including details that do not add value to the resource. For example, we do not use images or text that serve no informational purpose, and our videos never include background music.
  • To manage intrinsic load, the AI-powered pathways on the platform ensure that each learner is working on content at an appropriate level for them, rather than content that is inherently too difficult. Information is segmented into manageable chunks – for example, our nuggets (micro-lessons) take an average of 15 minutes to complete and home in on one particular concept or piece of information.
  • Signalling is often used in our videos to highlight important information, which aims to reduce extraneous load and enhance germane load.

Book a demo to find out more about how CENTURY can help to enhance the teaching and learning at your school or college.

Psychology

Sean B. Eom, in Encyclopedia of Information Systems, 2003

III Review of Important Psychology Theories/Concepts

Of the numerous fields of psychology discussed earlier, cognitive psychology and social psychology have most significantly influenced the formation of information systems subspecialties. Cognitive psychology, the study of the human mind, is centrally concerned with adults' normal, typical cognitive activities of knowing, perceiving and learning. Cognitive scientists view the human mind as an information processing system that receives, stores, retrieves, transforms, and transmits information (the computational or information processing view). The central hypothesis of cognitive psychology is that “thinking can best be understood in terms of representational structures in mind and computational procedures that operate on those structures.” Cognitive science originated in the mid-1950s, when researchers in several fields (philosophy, psychology, AI, neuroscience, linguistics, and anthropology) began to develop theories of the workings of the mind—particularly knowledge, its acquisition, storage, and use in intelligent activity.

A major concern of cognitive psychology is the cognitive architecture (Fig. 2), referring to the information processing structure and its capacity. The cognitive architecture comprises three subsystems: sensory input systems (vision, hearing, taste/smell, touch), central processing systems, and output systems (motor outputs, decisions and actions). The central processing systems perform a variety of activities to process information gathered from the sensory input systems. These include memory and learning, such as processing, storing and retrieving visual (mental imagery) and auditory (echoic) information, and representing that information in memory. Other areas of central processing activity include problem solving (any goal-directed sequence of cognitive operations), reasoning, and creativity. Specifically, they include cognitive skills in problem solving; reasoning, including reasoning about probability; judgment and choice; recognizing patterns, speech sounds, words, and shapes; representing declarative and procedural knowledge; and the structure and meaning of languages, including morphology and phonology.

Sensory memory permits the temporary storage of information we receive from our senses.

Figure 2. A global view of the cognitive architecture [adapted from Stillings, et al. (1995). Cognitive Science: An Introduction, (Second Edition). Cambridge, MA: MIT Press].

The central processing system can be viewed from the information processing perspective as a three-stage process:

1. Sensory memory can be characterized by large capacity, very short duration of storage, and direct association with sensory processes. The initial attention process selects information for further processing.

2. Short-term memory (also known as working memory) has limited capacity; here, information selected from sensory memory is linked with information retrieved from long-term memory (past experiences and world knowledge), forming the basis for our actions and behavior.

3. Long-term memory is characterized by large capacity and long duration. Our episodic (everyday experiences) and semantic (world knowledge) information is stored in long-term memory. Much of the research dealing with connectionist models, such as neural nets, focuses on the structure of this long-term memory, although some researchers also expand the neural net perspective to include sensory and short-term memory. Topics such as language, concept formation, problem solving, reasoning, creativity, and decision making are usually associated with the structure and processes of this long-term memory. Also, the issue of “experts” versus “novices” is almost always linked with the structure of secondary memory.

Behavioral decision theory is perhaps the most influential theory developed by cognitive scientists to have contributed to the development of DSS research subspecialties. Behavioral decision theory is concerned with normative and descriptive decision theories. Normative decision theory aims at “prescribing courses of action that conform most closely to the decision maker's beliefs and values.” Behavioral decision theorists proposed decision theories solely on the basis of behavioral evidence, without presenting neurological internal constructs on which to base these mechanisms. Descriptive decision theory aims at “describing these beliefs and values and the manner in which individuals incorporate them into their decisions.” Descriptive decision theories have focused on three main areas of study: judgment, inference, and choice. The study of judgment, inference, and choice has been one of the most important areas of cognitive psychology research, and the one referenced most frequently by DSS researchers.

III.A Judgment and Inference

The fundamental factor distinguishing DSS from any other CBIS is the use of judgment in every stage of the decision-making process: intelligence, design, choice, and implementation. A crucial part of cognitive psychology is the study of internal mental processes such as knowing, learning, and decision making, of mental limitations, and of the impact of those limitations on these processes.

Many DSS can help decision makers generate numerous decision alternatives. The decision makers use intuitive judgments of the probability of future events, such as the annual market growth rate or the annual rate of inflation. Tversky and Kahneman uncovered three heuristics employed when making judgments under uncertainty (representativeness, availability of instances, and adjustment from an anchor), which are usually effective but often lead to systematic and predictable errors. Because of these cognitive limitations, the reliance on judgmental heuristics often leads to cognitive biases that may eventually cause ineffective decisions. The representativeness heuristic is applied when people are asked to judge the probability that object/event A belongs to class/process B. According to Tversky and Kahneman, the judgment of probability can be biased by many factors related to human cognitive limitations, such as (1) insensitivity to prior probability of outcomes, (2) insensitivity to sample size, (3) the misconception of chance, (4) insensitivity to predictability, (5) the illusion of validity, and (6) the misconception of regression. The availability heuristic is used when people are asked to estimate the plausibility of an event. The employment of the availability heuristic may lead to predictable biases due to (1) the retrievability of instances, (2) the effectiveness of a search set, (3) imaginability, and (4) illusory correlation. The anchoring and adjustment effects can also bias the estimation of quantities stated as percentages or as probability distributions, owing to insufficient adjustment and/or biases in the evaluation of conjunctive and disjunctive events.
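Insensitivity to sample size is easy to demonstrate. A standard illustration (not from this chapter) is Tversky and Kahneman's hospital problem: days on which more than 60% of births are boys occur far more often at a small hospital than at a large one, even though both run on the same 50/50 process. A quick simulation sketch, with invented hospital sizes:

```python
import random

def share_of_extreme_days(births_per_day, n_days=1000, threshold=0.6):
    """Fraction of simulated days on which more than `threshold`
    of births are boys; each birth is an independent fair coin flip."""
    extreme = 0
    for _ in range(n_days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > threshold:
            extreme += 1
    return extreme / n_days

random.seed(0)
small = share_of_extreme_days(15)   # small hospital: 15 births/day
large = share_of_extreme_days(45)   # large hospital: 45 births/day
# The small hospital logs noticeably more "extreme" days, although
# both share the identical underlying 50/50 process.
print(small, large)
```

People who judge both hospitals equally likely to record such days are applying representativeness where sampling variability actually dominates.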

Hogarth is another cognitive scientist whose profound insights into human judgment have made a substantial contribution to the DSS area. A substantial part of his research is devoted to compiling a catalogue of human judgmental fallibilities and information-processing biases in judgment and choice. He presented a conceptual model of judgment in which judgmental accuracy is described as a function of both individual characteristics and the structure of the task environment within which the person makes judgments. Human judgments are based on information that has been processed and transformed by the limited human information processing capacity. He further decomposed the information processing activities of decision makers into (1) acquisition of information, (2) processing of information, (3) output, (4) action, and (5) outcome. He emphasized that judgmental biases can occur at every stage of information processing and that judgments result from the interaction between the structure of tasks and the nature of human information processing systems. Decision aids are necessary for structuring the problem and assessing consequences, since intuitive judgment is inevitably deficient.

Einhorn and Hogarth reviewed behavioral decision theory to place it within a broad psychological context. In doing so, they emphasized the importance of attention, memory, cognitive representation, conflict, learning, and feedback to elucidate the basic psychological processes underlying judgment and choice. They concluded that decision makers use different decision processes for different tasks. The decision processes are sensitive to seemingly minor changes in the task-related factors.

III.B Choice

Cognitive psychologists have long been interested in the area of problem solving. Payne and his colleagues attempted to understand the psychological/cognitive process that led to a particular choice or judgment using two process tracing methods—verbal protocol analysis and information acquisition behavior analysis. The verbal protocol is a record of the subject's ongoing behavior, taken from continuous verbal reports from the subject while performing the decision task (rather than through later questionnaires or interviews).

III.B.1 Four Information Processing Strategies

Payne and his colleagues focused on the identification of the information processing strategies/models and the task characteristics of the decision situation when choosing among multidimensional/multicriteria alternatives. Four of the most important decision models discussed in the cognitive psychology literature are the (1) additive/linear model of choice, (2) conjunctive model, (3) additive difference model, and (4) elimination-by-aspects (EBA) model.

The additive model allows the decision maker to choose the best candidate, the one with the greatest aggregated overall value. The conjunctive model assumes that an alternative must possess a certain minimum value on all dimensions in order to be chosen. The additive difference model directly compares just two alternatives at a time and retains only the better one for later comparison, in order to reduce memory load. The two alternatives are compared directly on each attribute/dimension to determine a difference, and the differences are added together to reach a decision. The selected alternative becomes the new standard against which each of the remaining alternatives is compared. The EBA model, like the additive difference model, is an intradimensional strategy. The decision maker selects the most important dimension/attribute, then eliminates all alternatives whose values on that attribute fall below a cut-off value. This procedure is repeated, attribute by attribute, until all but one of the alternatives have been eliminated.
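These four models are procedural enough to sketch in code. The following is a minimal illustration; the alternatives, attribute scores, weights, and cut-off values are all invented for the example, not taken from the chapter:

```python
# Three hypothetical alternatives scored 0-10 on three attributes.
alternatives = {
    "A": {"price": 7, "quality": 9, "service": 4},
    "B": {"price": 6, "quality": 6, "service": 6},
    "C": {"price": 9, "quality": 3, "service": 7},
}
weights = {"price": 0.5, "quality": 0.3, "service": 0.2}

def additive(alts, w):
    """Additive/linear model: choose the greatest weighted overall value."""
    return max(alts, key=lambda a: sum(w[d] * alts[a][d] for d in w))

def conjunctive(alts, cutoffs):
    """Conjunctive model: keep alternatives meeting a minimum on ALL dimensions."""
    return [a for a in alts
            if all(alts[a][d] >= c for d, c in cutoffs.items())]

def additive_difference(alts, w):
    """Compare two alternatives at a time; the winner becomes the new standard."""
    names = list(alts)
    best = names[0]
    for challenger in names[1:]:
        # Sum the weighted per-dimension differences between the pair.
        diff = sum(w[d] * (alts[challenger][d] - alts[best][d]) for d in w)
        if diff > 0:
            best = challenger
    return best

def eba(alts, dims_by_importance, cutoff):
    """Elimination by aspects: screen one dimension at a time, most important first."""
    remaining = dict(alts)
    for d in dims_by_importance:
        if len(remaining) <= 1:
            break
        remaining = {a: v for a, v in remaining.items() if v[d] >= cutoff}
    return list(remaining)

print(additive(alternatives, weights))                                    # compensatory
print(conjunctive(alternatives, {"price": 6, "quality": 5, "service": 5}))
print(eba(alternatives, ["price", "quality", "service"], cutoff=6))       # noncompensatory
```

With these numbers the compensatory models (additive, additive difference) select A, whose high quality offsets its weak service, while the conjunctive and EBA screens eliminate A precisely because of that weak service: a concrete instance of different strategies reaching different choices on the same data.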

III.B.2 Choice of Specific Decision Strategy

Another important focal point of behavioral decision theorists' research has been the selection of a specific information processing strategy and the study of the factors that could change the choice of the specific processing strategy. These factors include information availability, time pressure, incomplete data, incommensurable data dimension, information overload, and the decision maker's intention to save time or increase accuracy.

Payne argued that the choice of specific decision strategy is contingent upon two task determinants: number of alternatives available and number of dimensions of information available per alternative. Analysis of the decision maker's information search pattern and verbal protocols suggested that task complexity is the major factor that determines a specific information processing strategy. When dealing with a two-alternative choice task, either additive or additive difference models are used, requiring the same amount of information on each alternative. On the other hand, both the conjunctive and elimination-by-aspects models were proposed to deal with a complex decision task consisting of more than six alternatives. These conjunctive and elimination-by-aspects models are good ways to eliminate some available alternatives quickly so that the amount of information being processed can be reduced in complex decision making.

III.B.3 Cost-benefit (effort-accuracy) Framework

Payne examined the cost-benefit framework, investigating the effects of task and context variables on decision behavior. According to the cost-benefit framework, a possible reason for choosing a specific decision model in a specific task environment is to maximize the expected benefits of a correct decision against the cost of using the process. This theory states that “decision makers trade off the effort required to make a decision vis-a-vis the accuracy of the outcome.” The strategies used vary widely based on small changes in the task or its environment. The decision maker frequently trades off small losses in accuracy for large savings in effort.

Payne and others further investigated effort and accuracy considerations in choosing information processing strategies, especially under time constraints. They found that under time pressure, several attribute-based heuristics, such as the EBA and lexicographic choice models, were more accurate than a normative procedure such as expected value maximization. Under severe time pressure, people focused on a subset of the information and changed their information processing strategies.

URL: https://www.sciencedirect.com/science/article/pii/B0122272404001428

Brain Networks Involved in Learning and Teaching

Yi-Yuan Tang, in Brain-Based Learning and Education, 2017

2.2 Memory Networks

We could not survive without memory. When we receive information from the environment, our sensory system stores it naturally (sensory memory) and then forms short-term memory, the temporary storage of small amounts of information. Subsequently, long-term memory (explicit and implicit memory) is created, reflecting our capacity to store information over long periods of time and to communicate with others and the outside world. In general, the memory process involves three stages—encoding, storage, and retrieval—and each recruits different brain networks. For example, when information comes into our sensory memory, three systems are often needed to encode it: visual (picture), acoustic (sound), and semantic (meaning). Moreover, simply receiving information is not sufficient to encode it; we must also attend to and process it. As a result, encoding requires both automatic and effortful processing. Clearly, the encoding process requires interaction between attention and memory networks. In the same vein, storage and retrieval processes also work with the encoding process to form memory. For instance, maintaining information in the face of distraction and retrieving information that could not be maintained both involve memory processes, but each requires different attentional control depending on task demands. It should be noted that there are individual differences in memory and attention capacities associated with higher-order cognitive processes such as intelligence and decision-making (Cowan, 2016; Ekman, Fiebach, Melzer, Tittgemeyer, & Derrfuss, 2016). Previous studies have suggested that the memory processes of encoding, storage, and retrieval mainly involve the medial temporal lobe (MTL)—hippocampus, parietal cortex, amygdala, and other brain areas (Takeuchi et al., 2010).
A series of studies has shown that mindfulness training significantly improves memory performance, along with attention and self-control, and produces more efficient networks of attention, memory, and self-control, all of which could improve learning and teaching outcomes in the education system (Tang et al., 2015).

URL: https://www.sciencedirect.com/science/article/pii/B978012810508500002X

Reconstructive Memory, Psychology of

Henry L. Roediger III, Kurt A. DeSoto, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Sensory and Short-Term Memory

The greatest challenge to the claim in the previous paragraph, that all remembering is reconstructive, may come from studies of sensory memory, defined as the borderline between perceiving and remembering. Many theorists have argued that perceiving is itself a constructive activity, with the fascinating phenomena of visual and auditory illusions used as evidence for this claim (see Hoffman, 1998; among many others). Sensory memory refers to the temporary persistence of information that has struck the senses, which lingers briefly as it is being comprehended. Visual persistence is called iconic memory and auditory persistence is labeled echoic memory. It would seem that iconic memory – essentially a fleeting afterimage of the scene from the outside world – would surely be a form of reproductive memory. Yet even in these situations errors arise, showing that the retrieval processes from this type of memory involve reconstruction. For example, in Sperling's (1967) studies of iconic memory, in which people had to report letters that they had briefly seen on a screen, a common error when people missed a letter was to report another letter that either looked like or sounded like the original letter. This type of error indicates that people may code even such simple items as single letters into visual patterns and associated sounds. When people miss a b in these experiments, they may substitute a v (which sounds like b) or a p or d, which share both similar appearance (a long line and a curve) and similar sound (the letters rhyme with b). In sum, even reports from iconic memory may show reconstructive tendencies.

Short-term and working memories last longer than sensory memories do, but people are still ordinarily accurate in retrieving information from short-term stores if no interference occurs. Does this accuracy reflect a rote, reproductive process? The answer seems to be no, because when the short-term memory system is challenged by having people operate under, for example, fast rates of presentation, errors occur. Errors are often (but not always) phonological in nature. That is, if someone tries to recall letter strings and misses a letter, similar sounding letters are confused (Conrad, 1964). In the case of words, those that share visual and phonemic (sound-based) features are confused (Crowder, 1976; Chapter 4). Therefore, even though short-term memory processes are often considered quite accurate (and they can be), recall in these situations typically occurs under conditions that make for accurate reconstructions (e.g., with short unfilled delays between study and test). Stress these systems by presenting material quickly, or by creating interference, and the characteristic error pattern indicative of a reconstructive process appears.

URL: https://www.sciencedirect.com/science/article/pii/B9780080970868510162

Neuroscience and Young Drivers

A. Ian Glendon, in Handbook of Traffic Psychology, 2011

2.2.3 Corpus Callosum

The corpus callosum (CC) is the most prominent white matter structure, comprising approximately 200 million axons connecting equivalent regions of the two cerebral hemispheres. The CC integrates sensory, memory storage and retrieval, attention and arousal, language, and auditory functions (Giedd, 2008; Lenroot & Giedd, 2006). The CC is among the last of the brain's structures to complete maturation, undergoing rapid growth before and during puberty and lasting through adolescence until the mid-20s (Barnea-Goraly et al., 2005; Giedd et al., 1999). CC signal intensity decreases between ages 7 and 32 years, with the most rapid changes during childhood, stabilizing in early adulthood as cerebral functioning becomes more lateralized (Keshavan et al., 2002). The number of connections increases during adolescence, and CC fibers are important in connecting motor and sensory cortices so that increased white matter in this location may be associated with improved motor skills during development (Barnea-Goraly et al., 2005) and in adulthood (Johansen-Berg, Della-Maggiore, Behrens, Smith, & Paus, 2007), such as are required for skilled driving performance.

The CC influences handedness—whether an individual has a strong preference (usually for right-handedness) or is “mixed-handed.” Wolman (2005) reported that rather than left-handers being more prone to have vehicle crashes, as was once thought, it is mixed-handers who are more at risk. Consistent with an interhemispheric model would be the enhanced risk of someone talking on a cell phone (a predominantly left-hemisphere task involving language) while driving with the left hand (a predominantly right-hemisphere task for the motor performance component). Further neuroscience evidence is required on potential compromises to the driving task of “cross-talk” between the brain hemispheres that could result from engaging in various secondary tasks during driving.

URL: https://www.sciencedirect.com/science/article/pii/B9780123819840100098

On commensurability

Liam Magee, in Towards a Semantic Web, 2011

Spatialising concepts

In Conceptual Spaces, Gardenfors (2000) develops a theory of conceptual representation in the cognitive science tradition developed by Rosch, Lakoff and others (Lakoff and Johnson 1980; Medin 1989; Rosch 1975), surveyed earlier in Chapter 3, ‘The meaning of meaning’. Gardenfors develops a ‘conceptual framework’, a constellation of concepts in which ‘concept’ itself figures prominently. In the first part of the book, Gardenfors presents a framework comprising:

Conceptual spaces—A high level collection of concepts and relations, used for organising and comparing sensory, memory or imaginative experiences.

Domains—A clustering of related concepts. Gardenfors (2000) suggests ‘spatial’, ‘colors’, ‘kinship’ and ‘sounds’ are possible concept domains.

Quality dimensions—Generalised distinctions which determine the kinds of domains concepts belong to, such as ‘temperature’, ‘weight’, ‘height’, ‘width’ and ‘depth’. Gardenfors states: ‘The primary function of the quality dimensions is to represent various “qualities” of objects’, and, more specifically, that they can be ‘used to assign properties to objects and to specify relations among them’ (Gardenfors 2000, p. 6). Dimensions can be either phenomenal (relating to direct experience) or scientific (relating to theorisations of experience); innate or culturally acquired; sensory or abstract.

Representations—Gardenfors discriminates between three layers of representation: the symbolic (or linguistic), the sub-conceptual (or connectionist) and the conceptual, which Gardenfors claims mediates between the other two layers. Each layer—from sub-conceptual through to symbolic—exhibits increasing degrees of granularity and abstraction of representation. Gardenfors also notes that the conceptual mediates between the parallel processing of sub-conceptual neural networks and serial processing involved in the production and interpretation of symbolic language.

Properties—These are means ‘for “reifying” the invariances in our perceptions that correspond to assigning properties to the perceived objects’ (Gardenfors 2000, p. 59). They are specialised kinds of concepts which occupy a ‘region’ within a single domain, delineated within the broader conceptual space by quality dimensions. A feature of properties defined in this way is that they accord with both strict and vague or fuzzy borders between properties—objects can be permitted ‘degree[s] of membership’, depending on their proximity to the centre of the property region. Both classical and prototypical theories of classification can be accommodated.

Concepts—General (non-propertied) concepts differ from properties in that they can belong to multiple domains, and different conceptual features can gain greater salience in different contexts. Concepts are in a constant process of being added, edited and deleted within new domain arrangements; consequently, concept meaning is transient. Conceptual similarity arises on the basis of shared or overlapping domains.

The resulting framework is pragmatic and ‘instrumentalist’; the ‘ontological status’ of conceptual spaces is less relevant than that ‘we can do things with them’ (Gardenfors 2000, p. 31). Specifically, the framework ought to have ‘testable empirical consequences’ and, further, to provide a useful knowledge representation model for ‘constructing artificial systems’ (Gardenfors 2000, p. 31). One advantage of the use of geometric metaphors to describe conceptual arrangements is that it is possible to calculate approximate quantifications of semantic distance between individual concepts and concept clusters. However, the mathematisation of conceptual structures is to be taken as a heuristic rather than deterministic model—for Gardenfors, ‘we constantly learn new concepts and adjust old ones in the light of new experiences’ (Gardenfors 2000, p. 102). In light of this ever-changing configuration of concepts, any calculation of semantic proximity or distance is likely to be at best accurate at a point in time, although statistically—across time and users of conceptual clusters and relations—there may well be computable aggregate tendencies.
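As a minimal illustration of such a quantification (the concepts, dimensions, and coordinates below are invented for the example, not Gardenfors' own data), semantic distance can be computed as a weighted Euclidean distance in a quality-dimension space, with the weights standing in for attentional salience:

```python
import math

# Invented concept points in a two-dimensional quality space
# (dimension 1: hue, dimension 2: body size), scaled to [0, 1].
concepts = {
    "robin":   (0.9, 0.2),
    "sparrow": (0.8, 0.2),
    "ostrich": (0.7, 0.9),
}

def semantic_distance(a, b, salience=(1.0, 1.0)):
    """Weighted Euclidean distance between two concept points;
    re-weighting `salience` models attentional shifts between dimensions."""
    return math.sqrt(sum(w * (x - y) ** 2
                         for w, x, y in zip(salience, a, b)))

d1 = semantic_distance(concepts["robin"], concepts["sparrow"])
d2 = semantic_distance(concepts["robin"], concepts["ostrich"])
print(d1 < d2)  # robin sits nearer to sparrow than to ostrich
```

Changing the `salience` weights changes which concepts count as similar, which is the geometric counterpart of Gardenfors' point that shifts of attention to other dimensions alter overall similarity judgments.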

The arrangement of concepts and properties within conceptual spaces and domains depends on a coordinating principle of similarity:

First, a property is something that objects can have in common. If two objects both have a particular property, they are similar in some respect… Second, for many properties, there are empirical tests to decide whether it is present in an object or not. In particular, we can often perceive that an object has a specific property or not (Gardenfors 2000, pp. 60–61).

Dimensions form the basis against which similarity is assessed—a single dimension for properties, multiple dimensions for concepts. Conceptual similarity for Gardenfors is intrinsically a cognitive and theoretical notion, however, which can consequently be varied as different dimensional properties are found to be more or less salient:

For example, folk botany may classify plants according to the color or shape of the flowers and leaves, but after Linnaeus the number of pistils and stamens became the most important dimensions for botanical categorizations. And these dimensions are perceptually much less salient than the color or shape domains. Shifts of attention to other domains thus also involve a shift in overall similarity judgments (Gardenfors 2000, p. 108).

In the latter part of the book, Gardenfors then shows how his framework can be applied to traditional problems of semantics, induction and computational knowledge representation and reasoning (Gardenfors 2000). In particular he emphasises the relationship of conceptual structures to broader spheres of human action and practice. In what is an avowedly ‘pragmatist account’, meaning is put to the service of use within these spheres—though it is not equivalent to it. Unlike conventional semantics, the kind of ‘conceptual semantics’ Gardenfors espouses works down from social practice to fine-grained linguistic utterances: ‘actions are seen as the most basic entities; pragmatics consists of the rules for linguistic actions; semantics is conventionalized pragmatics… and finally syntax adds markers to help disambiguate when the context does not suffice’ (Gardenfors 2000, p. 185).

The pragmatist elements of this account fit well with the analysis of language Brandom undertakes, while the social orientation begins to bring concepts out of mind and language and into the intersubjective domain theorised by Habermas—points of accord succinctly encapsulated in the following quote: ‘In brief, I claim that there is no linguistic meaning that cannot be described by cognitive structures together with sociolinguistic power structures’ (Gardenfors 2000, p. 201). Applied to knowledge systems, Gardenfors supplies a convenient ‘first tier’ description of the kind of entity which includes the explicit conceptualisation of the system itself, and the tacit commitments which stand behind it. ‘Conceptual spaces’, standing here for Quine’s ‘conceptual schemes’, are mentalist metaphors for describing at least part of what it is that a knowledge system represents. The remaining sections add further descriptive tiers on which the framework of the study can be mounted.

URL: https://www.sciencedirect.com/science/article/pii/B9781843346012500118

Exploring cultural heritage in the context of museums, archives and libraries

Kim Baker, in Information Literacy and Cultural Heritage, 2013

The role of memory and contested history in cultural heritage

As was found with cultural heritage, the notions of memory and contested history have not been discussed much in the field of library science, whereas the literature in museum studies and archival science is filled with discourse on these aspects. While for decades museums and archives have been grappling with the impact that memory and contested history have in shaping cultural heritage, libraries have generally overlooked these conceptual aspects in their pursuit of digitizing cultural heritage. And yet, it is impossible to consider what constitutes cultural heritage without taking these factors into account.

This section gives a very brief overview of the concepts of memory and contested history, before a more in-depth exploration is undertaken from the differing perspectives and approaches of museums, archives and libraries.

Cultural heritage, in its broad sense (in other words, not only addressing the aspect of documentary cultural heritage as defined by UNESCO), carries with it the implicit, and problematic, notion of memory. It is people’s memories, both individual and shared, that shape the formation of cultural heritage. It could be argued that scientific scholarship should be excluded from this discussion on memory. However, in terms of indigenous knowledge systems, scientific knowledge is passed down through the generations orally, and thus is also affected by the element of memory.

Therefore, it would be useful to be cognisant of some of the features of memory which are applicable within this context. In her study of how memory functions and how it contributes to the shaping of heritage, using the specific case of Chief Albert Luthuli, Menhert outlined some core factors to be considered. She noted that memory comprises several parts, and that it can be rigid and resistant to change, or fluid and changeable under influence. She identified three types of memory: sensory memory (memory that can be evoked by a cue from one of the senses, such as a smell, a sight or a sound), short-term memory (which lasts for approximately 20 seconds and, unless the information is integrated, can be lost), and long-term memory (the aspect of memory that is relevant to heritage) (Menhert, 2011: 1–2).

Menhert described the three components of long-term memory. The procedural component relates to processes we learn in order to perform tasks, such as how to drive a car; once integrated, these can be used automatically. Declarative memory could be considered memory by rote, where, for example, names, dates and multiplication tables are integrated into the mind and can be reproduced by rote. The third component, termed “episodic memory,” is the one that concerns archival memory: episodic memory records events and how they affect us personally (ibid.: 2).

Menhert noted that along with considering memory, it is also important to understand the role of forgetting, and how it occurs. Forgetting can occur when there is no retrieval cue to trigger the memory. Most critically, she noted that when conducting interviews to record oral history, great care should be taken not to inadvertently plant memories by means of suggestion, thus altering the memories of the individuals (ibid.: 3). She also observed that people can trigger memories in each other when they collectively experience a shared event.

Menhert concluded that the memories people hold are as much an intrinsic part of history and cultural heritage knowledge as documents, books and photographs. Primary source documents can only reveal a certain amount of information; their context can be amplified and supplemented by relating people’s memories to them. Conflicts and differences in memory enrich the narrative, and should be explored further in dialogs. In the museum context, where, for example, exhibitions display objects to tell a story, she posited that explaining how an exhibition was mounted, what was chosen and why, together with the inclusion of people’s memories, gives the public an awareness of how important and complex memories are, and adds an essential dimension enabling deeper research and understanding. Memory formed under trauma, which is especially prevalent in South Africa with its recent history of apartheid, is worthy of deeper and focused exploration, in order also to bring to the surface what may have been forgotten (ibid.: 9–11).

Menhert’s findings from the perspective of museums are reinforced by perspectives from the field of archives. Harris, in considering the case of the archive of South Africa’s Truth and Reconciliation Commission, noted that the domain of social memory was the foremost location of struggle, and that this struggle was defined by the struggle of remembering against forgetting. He outlined that forgetting was an essential element in the struggle against apartheid, as some memories were too painful to remember. He further noted that memory is not a true reflection of reality and process, and that it is shaped by imagination. In South Africa’s social memory, it is a battle of narrative against narrative. Harris described how the tools of forgetting were a crucial element in the arsenal of apartheid South Africa’s state power, and that the state destroyed public records and removed voices they did not wish to hear by means of harassment, censorship, banning, detention without trial and assassination. He observed that even in the transition to democracy, the apartheid state sanitized and destroyed memory it did not wish to transfer to the future democratic government (Harris, 2007: 289–90). This example illustrates how the already challenging notion of the accuracy of memory is compounded exponentially in a context like South Africa.

Taking a different approach from Menhert, Jimerson identified four categories of memory. He described them as personal, collective, historical and archival (Jimerson, 2003: 89). Expanding further, he observed that collective memory as social memory is seldom subject to examination for reliability, authenticity and validity. He also observed that personal memory as eyewitness testimony is subject to the fact that memory can change over time, and that archival memory contains collections of surrogates of captured memory frozen in time. Jimerson considered that historical memory functions best as evidence-based examinations of artifacts, documents and personal testimony (ibid.: 89–90).

With this background on the role of memory in shaping perceptions and interpretations of what happened in history, it follows that history is often contested. Dubin referred to the “culture wars” which encompassed deeply felt confrontations between different groups within a society over interpretations of race and ethnicity, the body, sexuality, identity politics, religion, national identity and patriotism (Dubin, 2006: 477). In the context of history, he posited that these contests were shaped by social and political changes both within a nation and globally (ibid.: 478).

The factor of contested history when considering cultural heritage, and especially when deciding how to collect, describe, preserve, showcase and present documentary cultural heritage, is a fundamental element to be recognized. If a program of information literacy intends to present questions and exercises that will guide users to cultural heritage resources, it is essential that the program is cognisant of this, and of the fact that collections may be biased in favor of, for example, a former colonial power’s viewpoint reflecting a distorted version of a particular cultural group, or that collections may exclude the views of minorities living in developed countries.

Following this brief overview of the critical role that memory and contested history play in the shaping of cultural heritage, it is now necessary to explore the broader approaches and perspectives of museums, archives and libraries.

URL: https://www.sciencedirect.com/science/article/pii/B9781843347200500015

Cognitive Psychology: History

Edward E. Smith, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

The Growth of Cognitive Psychology

The 1960s brought progress in many of the above-mentioned topic areas, some of which are highlighted below.

Pattern Recognition

One of the first areas to benefit from the cognitive revolution was pattern recognition, the study of how people perceive and recognize objects. The cognitive approach provided a general two-stage view of object recognition: (1) describing the input object in terms of relatively primitive features (e.g., ‘it has two diagonal lines and one horizontal line connecting them’); and (2) matching this object description to stored object descriptions in visual memory, and selecting the best match as the identity of the input object (‘this description best matches the letter A’). This two-stage view was not entirely new to psychology, but expressing it in information-processing terms allowed one to connect empirical studies of object perception to computer models of the process. The psychologist Ulrich Neisser (1964) used a computer model of pattern recognition (Selfridge, 1959) to direct his empirical studies and provided dramatic evidence that an object could be matched to multiple visual memories in parallel.
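The two-stage view described above can be sketched in a few lines of code. This is a minimal illustration, not a model from the literature; the feature names and letter inventories are hypothetical:

```python
# Stage 1: the input object is described as a set of primitive features.
# Stage 2: that description is matched against stored descriptions in
# "visual memory," and the best match is selected as the object's identity.

STORED_LETTERS = {
    # Hypothetical feature inventories for a few letters.
    "A": {"diagonal_left", "diagonal_right", "horizontal_bar"},
    "H": {"vertical_left", "vertical_right", "horizontal_bar"},
    "V": {"diagonal_left", "diagonal_right"},
}

def recognize(input_features):
    """Return the stored letter whose feature set best matches the input."""
    def match_score(letter):
        stored = STORED_LETTERS[letter]
        # Score = shared features minus features present in only one of the two sets.
        return len(stored & input_features) - len(stored ^ input_features)
    return max(STORED_LETTERS, key=match_score)

# "Two diagonal lines and one horizontal line connecting them" -> A
print(recognize({"diagonal_left", "diagonal_right", "horizontal_bar"}))  # A
```

In Selfridge-style "pandemonium" models the stored descriptions are evaluated in parallel rather than in a loop, which is the property Neisser's experiments provided evidence for.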

Other research indicated that the processing underlying object perception could persist after the stimulus was removed. For this to happen, there had to be a visual memory of the stimulus. Evidence for such an ‘iconic’ memory was supplied by Sperling in classic experiments in 1960 (Sperling, 1960). Evidence for a comparable brief auditory memory was soon provided as well (e.g., Crowder and Morton, 1969). Much of the work on object recognition and sensory memories was integrated in Neisser's influential 1967 book Cognitive Psychology (Neisser, 1967). The book served as the first comprehensive statement of existing research in cognitive psychology, and it gave the new field its name.

Memory Models and Findings

Broadbent's model of attention and memory stimulated the formulation of rival models in the 1960s. These models assumed that short-term memory (STM) and long-term memory (LTM) were qualitatively different structures, with information first entering STM and then being transferred to LTM (e.g., Waugh and Norman, 1965). The Atkinson and Shiffrin (1968) model proved particularly influential. With its emphases on information flowing between memory stores, control processes regulating that flow, and mathematical descriptions of these processes, the model was a quintessential example of the information-processing approach. The model was related to various findings about memory. For example, when people have to recall a long list of words they do best on the first words presented, a ‘primacy’ effect, and on the last few words presented, a ‘recency’ effect. Various experiments indicated that the recency effect reflected retrieval from STM, whereas the primacy effect reflected enhanced retrieval from LTM due to greater rehearsal for the first items presented (e.g., Murdock, 1962; Glanzer and Cunitz, 1966). At the time these results were seen as very supportive of dual-memory models (although alternative interpretations would soon be proposed – particularly by Craik and Lockhart, 1972).

Progress during this period also involved empirically determining the characteristics of encoding, storage, and retrieval processes in STM and LTM. The results indicated that verbal material was encoded and stored in a phonologic code for STM, but a more meaning-based code for LTM (Conrad, 1964; Kintsch and Buschke, 1969). Other classic studies demonstrated that forgetting in STM reflected a loss of information from storage due to either decay or interference (e.g., Wickelgren, 1965), whereas some apparent losses of information in LTM often reflected a temporary failure in retrieval (Tulving and Pearlstone, 1966). To a large extent, these findings have held up during over 30 years of research, although many of the findings would now be seen as more limited in scope (e.g., the findings about STM are now seen as reflecting only one component of working memory, e.g., Baddeley (1986), and the findings about LTM are seen as characterizing only one of several LTM systems, e.g., Schacter (1987)).

One of the most important innovations of 1960s research was the emphasis on reaction time as a dependent measure. Because the focus was on the flow of information, it made sense to characterize various processes by their temporal extent. In a seminal paper in 1966, Saul Sternberg reported (Sternberg, 1966) that the time to retrieve an item from STM increased linearly with the number of items in store, suggesting that retrieval was based on a rapid scan of STM. Sternberg (1969) gave latency measures another boost when he developed the ‘additive factors’ method, which, given assumptions about serial processing, allowed one to attribute changes in reaction times to specific processing stages involved in the task (e.g., a decrease in the perceptibility of information affected the encoding of information into STM but not its storage and retrieval). These advances in ‘mental chronometry’ quickly spread to areas other than memory (e.g., Fitts and Posner, 1967; see also Schneider and Shiffrin, 1977).
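Sternberg's linear result can be captured in a one-line model: reaction time equals a fixed intercept (encoding and response stages) plus a constant per-item scan time. The sketch below is illustrative; the slope value is roughly the figure commonly cited from Sternberg's data, and the intercept is an assumed placeholder:

```python
# Toy version of Sternberg's serial-scanning account:
# RT = intercept + slope * set_size, i.e. retrieval time grows
# linearly with the number of items held in short-term memory.

INTERCEPT_MS = 400   # assumed: time for encoding + decision + response stages
SLOPE_MS = 38        # approximate per-item scan time reported by Sternberg

def predicted_rt(set_size):
    """Predicted reaction time (ms) for probing a memory set of `set_size` items."""
    return INTERCEPT_MS + SLOPE_MS * set_size

for n in (1, 2, 4, 6):
    print(f"set size {n}: {predicted_rt(n)} ms")
```

The additive-factors logic then amounts to asking which of these parameters an experimental manipulation changes: a less perceptible probe should raise the intercept (encoding) while leaving the slope (scanning) untouched.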

The New Psycholinguistics

Beginning in the early 1960s there was great interest in determining the psychological reality of Chomsky's theories of language (these theories had been formulated with ideal listeners and speakers in mind). Some of these linguistically inspired experiments presented sentences in perception and memory paradigms, and showed that sentences deemed more syntactically complex by transformational grammar were harder to perceive or store (Miller, 1962). Subtler experiments tried to show that syntactic units, like phrases, functioned as units in perception, STM, and LTM (Fodor et al. (1974) is the classic review). While many of these results are no longer seen as critical, this research effort created a new subfield of cognitive psychology, a psycholinguistics that demanded sophistication in modern linguistic theory.

Not all psycholinguistic studies focused on syntax. Some dealt with semantics, particularly the representation of the meanings of words, and a few of these studies made use of the newly developed mental chronometry. One experiment that proved seminal was reported by Collins and Quillian (1969). Participants were asked simple questions about the meaning of a word, such as ‘Is a robin a bird?’ and ‘Is a robin an animal?’; the greater the categorical difference between the two terms in a question, the longer it took to answer. These results were taken to support a model of semantic knowledge in which meanings were organized in a hierarchical network, e.g., the concept ‘robin’ is directly connected to the concept ‘bird,’ which in turn is directly connected to the concept ‘animal,’ and information can flow from ‘robin’ to ‘animal’ only by going through ‘bird’ (see the top of Figure 1). Models like this were to proliferate in the next stage of cognitive psychology.
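The hierarchical network can be sketched as a simple data structure: each concept stores a link to its superset, and verifying "Is an X a Y?" means counting the links traversed from X up to Y. More links predicts a longer response time, which is the Collins and Quillian result. The network fragment below is a minimal assumed example:

```python
# Each concept points to its immediate superset, as in the strict
# hierarchy of Figure 1(a): robin -> bird -> animal.
SUPERSET = {"robin": "bird", "bird": "animal"}

def links_to_verify(instance, category):
    """Number of upward links from `instance` to `category`, or None if unreachable."""
    steps, node = 0, instance
    while node != category:
        if node not in SUPERSET:
            return None          # category not reachable in this hierarchy
        node = SUPERSET[node]
        steps += 1
    return steps

print(links_to_verify("robin", "bird"))    # 1 link: faster to verify
print(links_to_verify("robin", "animal"))  # 2 links: slower to verify
```

Storing properties only at the highest applicable node (e.g., 'can fly' at 'bird', not at 'robin') gives the same distance-based prediction for property questions.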

Figure 1. (a) Part of a Collins and Quillian (1969) semantic network. Circles designate concepts and lines (arrows) between circles designate relations between concepts. There are two kinds of relations: subset–superset (‘A robin is a bird’) and property (e.g., ‘Robins can fly’). The network is strictly hierarchical, as properties are stored only at the highest level at which they apply. (b) Part of an Anderson and Bower (1973) propositional network. Circles represent concepts, and the labeled lines between them represent relations. All propositions have a subject–predicate structure, and the network is not strictly hierarchical. (c) Part of a simplified connectionist network. Circles represent concepts, or parts of concepts; lines with arrowheads depict excitatory connections, and lines with filled circles designate inhibitory connections; typically, numbers on the lines indicate the strength of the connections. The network is not strictly hierarchical, and is more interconnected than the preceding networks.

URL: https://www.sciencedirect.com/science/article/pii/B9780080970868030282

Why visual attention and awareness are different

Victor A.F. Lamme, in Trends in Cognitive Sciences, 2003

Combining the core concepts of sensory processing and memory might be sufficient to explain visual attention [21]. Attention is a selection process where some inputs are processed faster, better or deeper than others, so that they have a better chance of producing or influencing a behavioral response or of being memorized [2,21,34]. Attention induces increased [21] and synchronous [35] neuronal activity of those neurons processing the attended stimuli, and increased activity in parietal and frontal regions of the brain [36]. This explains why the associated stimuli are processed faster, better and deeper. But what brings the enhanced activity about?

URL: https://www.sciencedirect.com/science/article/pii/S136466130200013X

Multistable phenomena: changing views in perception

David A. Leopold, Nikos K. Logothetis, in Trends in Cognitive Sciences, 1999

It is important to re-emphasize that in the current framework such reorganizations are not initiated by areas involved primarily in sensory processing or memory, but rather in those that ultimately use and act upon the perceptual representations. Such areas are likely to be central cortical structures, such as the fronto–parietal areas that are neither purely sensory nor purely motor in nature, but which integrate sensory information to coordinate a variety of cognitive and non-cognitive behaviors. By continually issuing reorganizations of perception, such central areas could maintain a particular ‘tone’ (similar in a sense to muscle tone) that would ensure that the perceptual representation is both accurate and robust. The same areas might also be responsible for dispatching commands to motor structures that could aid perception, such as a saccade to a visual target. Such sensorimotor coordination is likely to be critical for perceptual awareness of the environment.

URL: https://www.sciencedirect.com/science/article/pii/S1364661399013327

Visual working memory depends on attentional filtering

Nelson Cowan, Candice C. Morey, in Trends in Cognitive Sciences, 2006

Second, it is unknown why low-capacity individuals fail to filter out the irrelevant items. Perhaps participants face a strategic choice. Performance depends on the transfer of information from sensory memory to a more consolidated, abstract form [15], and it might take extra effort to transfer it selectively. That extra effort should pay off, allowing array comparisons to consider relevant items only. Low-capacity individuals might forego this extra processing because, for them, it is uncomfortably effortful or self-defeating (as the extra effort might drain too many resources from the consolidation process). To explore this, the procedure could be altered to make it worthwhile for low-capacity individuals to filter, by including changes in irrelevant items between the standard and comparison arrays. If only an irrelevant item had changed, the correct answer would still be ‘no change’. Then it might be impossible to accomplish the task by comparing the first and second arrays en masse; it might be necessary to filter out irrelevant items. Alternatively, it might be possible to detect any change first, and only afterwards judge its task-relevance. A reaction-time measure might detect that strategy.

URL: https://www.sciencedirect.com/science/article/pii/S1364661306000295

Which term best describes the process by which people select, organize and interpret sensory information?

Perception is the process of selecting, organizing, and interpreting information from our senses. Selection: Focusing attention on certain sights, sounds, tastes, touches, or smells in your environment. Something that seems especially noticeable and significant is considered salient.

Which theory listed below assumes that learning takes place as the result of responses to external events?

Behavioral learning theories assume that learning takes place as the result of responses to external events; they do not focus on internal thought processes.

Which of the following terms refers to a process in which experience may result in a permanent change in behavior?

Learning refers to the relatively permanent change in knowledge or behaviour that is the result of experience.

Which theory stresses the importance of internal mental processes?

In contrast to behavioural theories of learning, cognitive learning theory approaches stress the importance of internal mental processes. This perspective views people as problem solvers who actively use information from the world around them to master their environments.