1 AVL unit: Most psychologists combine the auditory, verbal, and linguistic codes and call them AVL units.
2 Stroop effect: When people respond to a particular attribute of a stimulus, responding is difficult because they cannot block out the influence of irrelevant features of the stimulus situation.
3 Encoding: the transformation of information from one stimulus form to another, or from one symbol system to another.
Category size effect: it takes more time to verify a sentence when the category named by the predicate is larger.
Elaborative rehearsal: organizing the material to be rehearsed, linking it with other information, and processing it more deeply. This kind of rehearsal, also called integrative rehearsal, can transfer information to long-term memory.
Availability heuristic: people tend to evaluate the relative frequency of an object or event according to its availability in perception or memory; events that are easily perceived or recalled are judged to be more frequent.
9 Controlled processing: one of the two processing modes proposed by Schneider and Shiffrin. It requires attention, has limited capacity, and can be applied flexibly in changing circumstances. Because it is under a person's conscious control, it is called controlled processing.
10 Dual-memory theory: Memory is not a single entity. There are two different kinds of memory, short-term memory and long-term memory, which are independent yet interrelated, forming a unified memory system. The typical representative of dual-memory theory is the two-store memory model first proposed by Waugh and Norman.
13 Pattern recognition: A pattern is a stimulus structure formed by several elements or components in a certain relationship; one can also say that a pattern is a combination of stimuli. Pattern recognition occurs when a person can confirm that a perceived pattern is distinct from other patterns. Human pattern recognition typically involves placing the perceived pattern into the corresponding category in memory and naming it, that is, giving the stimulus a name.
11 Implicit memory: its fundamental feature is that subjects are not consciously aware that they possess this memory; it manifests itself only in the performance of a specific task and does not depend on conscious recollection of prior experience.
12 Priming effect: the facilitating influence of prior processing activity on subsequent processing activity.
13 Heuristic: a method of solving problems by experience, also called a rule of thumb. It does not guarantee a solution, but it often solves problems effectively. Common heuristics include means-ends analysis, working backward, and planning.
14 Episodic memory: according to the type of information stored, long-term memory can be divided into episodic memory and semantic memory. Episodic memory is an individual's memory of events occurring at a particular time and place.
15 Dual-task paradigm: a common research paradigm in which subjects are required to perform two tasks at the same time. The researcher's interest usually lies in the degree of mutual interference between the two tasks.
16 Word superiority effect: first established experimentally by Reicher. The accuracy of recognizing a letter within a word (such as k in work) is higher than that of recognizing it within a nonword letter string (such as k in orwk).
17 Automatic processing: the other of the two processing modes proposed by Schneider and Shiffrin. Automatic processing is not under conscious control, requires no attention, has no fixed capacity limitation, and once formed is difficult to change.
18 Top-down processing: processing that begins with general knowledge about the perceived object, from which expectations or hypotheses about the object are formed. These expectations or hypotheses constrain all stages or levels of processing, from tuning feature detectors to guiding attention to details. Also called conceptually driven processing.
19 Chunking: the process of combining several smaller units (such as letters) into a familiar larger unit (such as a word); it also refers to the units so formed (chunks). The concept was first proposed by Miller of Harvard University in 1956. Miller argued that information in short-term memory is held not in bits, as defined in information theory, but in chunks.
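Chunking can be sketched in code. The following is a toy illustration only: the dictionary of "familiar patterns" and the greedy grouping rule are invented for the example, not part of Miller's account.

```python
# Toy illustration of chunking: recoding a digit sequence into familiar
# larger units. The "familiar patterns" dictionary is an invented example.

FAMILIAR = {"1945", "1956", "2001"}  # digit groups assumed familiar to the rememberer

def chunk(digits, size=4):
    """Greedily group digits into familiar 4-digit chunks; leftovers stay single."""
    chunks, i = [], 0
    while i < len(digits):
        candidate = digits[i:i + size]
        if candidate in FAMILIAR:
            chunks.append(candidate)   # one familiar chunk = one memory unit
            i += size
        else:
            chunks.append(digits[i])   # otherwise each digit is its own unit
            i += 1
    return chunks

print(chunk("194519562001"))  # 12 digits -> 3 chunks: ['1945', '1956', '2001']
```

The point of the sketch is the recoding step: the same 12 digits occupy 12 short-term-memory units when unchunked but only 3 when grouped into familiar patterns.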
Typicality effect: typical members of a category or concept are verified faster than atypical members.
VI. Short-answer questions
1. Briefly describe the set-theoretic model. Answer: The model was proposed by Meyer. It is a feature model of semantic memory. In this model the basic semantic unit is the concept, and each concept is represented by a set of information or elements. These sets can be divided into exemplar sets and attribute (or feature) sets. An exemplar set contains exemplars of a concept; an attribute or feature set contains the attributes or features of a concept, called semantic features. Semantic memory is thus composed of countless such sets, but there are no ready-made connections between the sets or concepts. When we retrieve information from semantic memory to verify a sentence, we search the two attribute sets separately, then compare them and decide according to their degree of overlap. The more attributes two sets share, the higher the overlap. When the overlap is high, a positive judgment is made; otherwise a negative judgment is made.
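The comparison-and-decision stage described above can be sketched as a set-intersection computation. The feature sets and the decision threshold below are illustrative inventions, not values specified by the model.

```python
# Toy sketch of the set-theoretic model's attribute-comparison stage.
# Feature sets and the 0.75 threshold are hypothetical examples.

FEATURES = {
    "robin": {"animate", "has-feathers", "has-wings", "lays-eggs", "red-breast"},
    "bird":  {"animate", "has-feathers", "has-wings", "lays-eggs"},
    "stone": {"inanimate", "hard"},
}

def verify(subject, predicate, threshold=0.75):
    """Positive judgment if the attribute sets overlap enough."""
    s, p = FEATURES[subject], FEATURES[predicate]
    overlap = len(s & p) / len(p)   # proportion of the predicate's features shared
    return overlap >= threshold

print(verify("robin", "bird"))   # high overlap -> positive judgment: True
print(verify("stone", "bird"))   # low overlap  -> negative judgment: False
```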
2. Briefly describe the three-store model of memory information processing.
Key points: This is the memory system model proposed by Atkinson and Shiffrin. They hold that memory has three stores: the sensory register, the short-term store, and the long-term store.
In this model, external information first enters the sensory register, which can be divided by sensory channel into visual and auditory registers, that is, iconic and echoic memory. Information in the sensory register is rich but decays very quickly. Some of it enters the short-term store. Information in the short-term store can take a form different from the original sensation; that is, it must be transformed or encoded, the coding unit being the AVL unit (auditory, verbal, linguistic). This information also disappears quickly, though more slowly than in the sensory register. Without rehearsal, information can be retained in the short-term store for about 15-30 s. The short-term store is a processing system with two functions: it serves as a buffer between the sensory register and long-term memory, and as a processor for information entering long-term memory. Information enters long-term memory through various channels with the help of rehearsal. Long-term memory is the true information store; its information has auditory, verbal, linguistic, and visual coding forms. Information in long-term memory is relatively permanent, but may become unretrievable owing to decay, interference, or reduced strength. When information in long-term memory is retrieved, it is transferred back to the short-term store.
The model also holds that the transfer of information from one store to another is largely under a person's control. People scan the information briefly held in the sensory register; the selected information is recognized and enters the short-term store. In addition, some information can pass directly from the sensory register into the long-term store without the mediation of the short-term store. The rehearsal that moves information from the short-term store into the long-term store likewise embodies human control. In short, control over the flow of memory information is a prominent feature of this model.
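The flow of control between the three stores can be sketched as a small pipeline. All capacities, selection rules, and item names below are invented placeholders, not parameters from Atkinson and Shiffrin's model.

```python
# Minimal sketch of the Atkinson-Shiffrin flow of control described above.
# The selection rule and example items are hypothetical.

class MemorySystem:
    def __init__(self):
        self.sensory_register = []   # rich but fleeting
        self.short_term = []         # limited-capacity buffer/processor
        self.long_term = set()       # relatively permanent store

    def perceive(self, items):
        self.sensory_register = list(items)   # everything registers briefly

    def attend(self, selector):
        # Scanning/selection: only attended items are recognized into the STS.
        self.short_term = [x for x in self.sensory_register if selector(x)]
        self.sensory_register = []            # unattended information decays

    def rehearse(self):
        # Rehearsal transfers short-term contents into the long-term store.
        self.long_term.update(self.short_term)

mem = MemorySystem()
mem.perceive(["phone number", "street noise", "a face"])
mem.attend(lambda x: x != "street noise")     # controlled selection
mem.rehearse()
print(sorted(mem.long_term))                  # ['a face', 'phone number']
```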
3. Briefly describe the levels-of-processing theory.
Key points: Craik and Lockhart viewed the memory system from the standpoint of information-processing operations and first proposed the levels-of-processing theory. According to this theory, the memory trace is a by-product of information processing, and the persistence of the trace is a direct function of the depth of processing. Information that is deeply analyzed and enters into elaborate association and representation produces a strong memory trace that lasts a long time; information analyzed only superficially produces a faint trace that lasts a short time. Craik and his colleagues supported the theory with a series of experiments, typically in the form of so-called incidental learning: subjects are required to complete a task without being asked to memorize anything, and after the task they are unexpectedly tested for recognition or recall. The key to these experiments is that the assigned tasks must reflect different levels of processing. Some experimental results support the levels-of-processing theory.
4. Briefly describe the attenuation model. Key points: Treisman improved the filter model. She argued that the filter does not work in an all-or-nothing way: it allows information not only from the attended channel (the attended ear) but also from the unattended ear to pass. The unattended information is merely attenuated in strength, yet some of it can still receive higher-level processing. Information from the unattended ear must reach a certain threshold before it can be further analyzed; a common example is the "cocktail party effect". According to Treisman, selection depends not only on the properties of the stimulus but also on the state of the higher-level analyzers.
5. Briefly describe the general principles of information processing. Key points: 1. Information processing is the manipulation of symbols. 2. General structure of an information-processing system: (1) objects of processing: symbol structures; (2) components: receptors, effectors, memory, and a processor; (3) functions of the processor: A, a set of elementary information processes; B, short-term memory, which holds symbol structures during the input and output of elementary information; C, an interpreter, which combines A and B to determine the sequence of elementary information processes. Regarding the program: the interpretation of elementary information processes by rules is the mechanism behind the behavior of the information-processing system. 3. Functions of an information-processing system: input, output, storage, and copying of information, building symbol structures, and conditional branching.
11 Briefly describe the direct theory of perception. Key points: the stimulus theory of perception holds that perception is wholly direct and denies the role of existing knowledge and experience. Its famous representative Gibson argued that stimulation in the natural environment is complete and provides very rich information; people can use this information directly to generate the perceptual experience corresponding to the stimuli acting on the senses, without forming hypotheses on the basis of past experience and testing them.
12 Briefly describe the central capacity theory. Key points: the central capacity theory regards attention as the limited capacity or resources a person can apply to performing tasks, and explains attention in terms of the allocation of this capacity. Kahneman's capacity-allocation model best embodies the theory. He holds that the resources available to a person are linked to arousal, and that their quantity varies with factors such as emotion and drugs. The key to attention is the allocation policy, which is itself constrained by several factors: the available capacity determined by arousal, momentary intentions, the individual's enduring dispositions, and the evaluation of the capacity demanded by the task. Under the influence of these factors, the realized allocation policy reflects a deliberate choice. The central capacity theory explains well the various complex situations that arise when two tasks are performed simultaneously, and to some extent it overcomes the opposition between the perceptual selection and response selection models.
14 What physiological evidence supports feature-analysis theory?
Key points: Feature analysis means that in pattern recognition the features of the stimulus are analyzed first; the extracted features are then combined and compared with the stimulus features stored in long-term memory, and once a match occurs the external stimulus is recognized. (1) Pritchard used the stabilized-image technique to keep the position of an object's image on the retina unchanged despite eye movements, producing a stabilized retinal image. He found that under this condition the percept of the object does not disappear all at once but fades gradually, part by part: what disappears are whole features, and what remains is still a recognizable pattern. (2) David Hubel and Torsten Wiesel inserted microelectrodes into single neurons in the visual cortex of anesthetized animals and found that each neuron responds strongly only to a grating of a specific orientation, acting as a feature detector.
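The extract-then-compare step of feature analysis can be sketched as follows. The letter features and the matching score are simplified inventions for illustration, not a claim about how features are actually represented.

```python
# Hedged sketch of feature-analysis pattern recognition: extract the
# stimulus's features, then compare them with stored feature lists.
# The feature vocabulary below is a simplified invention.

STORED = {
    "A": {"left-diagonal", "right-diagonal", "horizontal-bar"},
    "H": {"vertical-left", "vertical-right", "horizontal-bar"},
    "V": {"left-diagonal", "right-diagonal"},
}

def recognize(extracted):
    """Return the stored pattern whose feature set best matches the input."""
    def score(letter):
        stored = STORED[letter]
        # reward shared features, penalize mismatches on either side
        return len(extracted & stored) - len(extracted ^ stored)
    return max(STORED, key=score)

stimulus = {"left-diagonal", "right-diagonal", "horizontal-bar"}
print(recognize(stimulus))   # best match among the stored letters: A
```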
Try to comment on the similarities and differences between the perceptual choice model and the response choice model of attention.
Key points: The main difference between the perceptual selection model and the response selection model lies in where they place the attentional mechanism in the information-processing system. According to the perceptual selection model, the filter lies between perception and recognition, which means that not all information can receive higher-level analysis and be recognized; this model is also called the early selection model. According to the response selection model, the attentional mechanism lies between recognition and response: the information from several input channels can all be recognized, but only some of it elicits a response. This model is also called the late selection model.
Both models also hold that information from several channels can be attended to simultaneously; that is, both acknowledge the division of attention.
Judging from the experimental research to date, two points need improvement: 1. In the experiments carried out so far, advocates of perceptual selection have mostly used dichotic listening with shadowing and studied focused attention, whereas advocates of response selection have mostly used dichotic listening without shadowing and studied divided attention. This difference between the experiments is reflected in the results and affects the theoretical analysis. For further research it would undoubtedly be beneficial to move beyond this emphasis on a single method and apply a variety of methods. 2. Previous studies were conducted almost entirely in the auditory channel and rarely involved other sensory channels. Using other sensory channels, and especially different sensory channels at the same time, should yield meaningful results and help reveal the essence and mechanism of attention.
On the direct and indirect theories of perception.
Key points: The direct theory of perception is represented by the stimulus theory, which holds that perception is wholly direct and denies the role of existing knowledge and experience. Gibson, its famous representative, put forward the view of "direct perception". Gibson believed that observers can directly extract information about depth, size, and motion from the rich information in the environment. He emphasized that observer and environment are inseparable: perception is an active, interactive, continuous, and dynamic process between the observer and the environment. In his view, distance is something we perceive directly. The results of Gibson's famous texture-gradient experiments support this view.
The indirect theory of perception holds that in perceiving, people accept sensory input and, on the basis of existing experience, form a hypothesis about what the current stimulus is; perception proceeds under the guidance of these hypotheses and expectations. According to Bruner's and Gregory's hypothesis-testing view, perception is a constructive process that includes hypothesis testing: people receive information, form and test hypotheses, then receive or search for further information and test again, until a hypothesis is confirmed and the sensory stimulation is correctly interpreted. Usually people are unaware of the hypotheses involved, but under special conditions, such as viewing things in dim light, the process of hypothesis testing can sometimes be experienced.
The indirect theory of perception attends to the observer's internal processing, while the direct theory attends to the informational content of the external environment. However, the direct theory ignores the processes of visual analysis and representation and so cannot fully explain the complexity of human vision, while the indirect theory lacks discussion of the physiological basis of the various perceptual phenomena.
On the hierarchical network model and the spreading activation model.
Key points: The hierarchical network model was proposed by Collins and Quillian and is the first model of semantic memory in cognitive psychology. In this model the basic unit of semantic memory is the concept, and each concept has certain features. These features are themselves concepts, but they serve to describe other concepts. Related concepts are organized by logical subordinate-superordinate relations into a hierarchical network. The model stores the features of concepts at corresponding levels: at each concept's level only the features unique to that concept are stored, while features common to concepts at the same level are stored at the level of the next higher superordinate concept. Because concepts form a network according to hierarchical relations, each concept and feature occupies a specific position in the network, and the meaning or connotation of a concept is determined by its relations to other concepts and features. Collins and Quillian conducted a series of experiments to verify the model, and the results support the view that semantic memory has a hierarchical network structure.
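The storage scheme described above can be sketched as inherited property lookup. The miniature network below is an illustrative fragment, not Collins and Quillian's actual stimulus set.

```python
# Sketch of property lookup in a Collins-Quillian-style hierarchical network:
# shared properties are stored once at the highest applicable level and
# inherited downward. The network fragment is illustrative.

ISA = {"canary": "bird", "bird": "animal"}        # superordinate links
PROPS = {
    "canary": {"is yellow", "can sing"},          # unique to canary
    "bird":   {"has wings", "can fly"},           # shared by birds
    "animal": {"has skin", "can move"},           # shared by animals
}

def has_property(concept, prop):
    """Climb the hierarchy until the property is found (or the top is passed)."""
    levels = 0
    while concept is not None:
        if prop in PROPS.get(concept, set()):
            return True, levels    # more levels climbed -> longer verification
        concept = ISA.get(concept)
        levels += 1
    return False, levels

print(has_property("canary", "can sing"))  # (True, 0): stored at canary's level
print(has_property("canary", "has skin"))  # (True, 2): inherited from 'animal'
```

The `levels` count makes visible the retrieval-time prediction tested in the sentence-verification experiments: properties stored higher in the hierarchy take longer to verify.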
The core of the hierarchical network model is that concepts form a network by logical subordinate-superordinate relations. This makes the model simple, but it also has shortcomings. First, the model involves very few kinds of connections between concepts, even though concepts have both vertical and horizontal connections; the main relations in the model are "is a", "has", and "can", and other relations are not involved. Second, although the model saves storage space, it increases the time needed to retrieve information. In addition, the model cannot explain the typicality effect.
The spreading activation model was proposed by Collins and Loftus and is also a network model. In this model, concepts are organized by semantic relatedness or semantic similarity. Nodes represent concepts, the links between them represent their connections, and the length of a link indicates the closeness of the connection: the shorter the link, the closer the connection and the more similar the two concepts. In the spreading activation model the meaning of a concept is likewise determined by the concepts associated with it, especially closely related ones, but features need not be stored at different levels. The model assumes that when a concept is processed, activation arises at its node and then spreads simultaneously along all its links, first to directly connected nodes and then onward to other nodes. The priming effect supports the model.
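The spread of activation along weighted links can be sketched as a shortest-path computation. The concept network, link lengths, and the decay-with-distance rule below are illustrative simplifications, not parameters from Collins and Loftus.

```python
# Toy sketch of spreading activation: activation starts at a primed node and
# spreads along links, arriving later (more weakly) at more distant nodes.
# The network fragment and link lengths are invented for illustration.

import heapq

LINKS = {  # shorter length = closer semantic relation
    "red":         {"fire": 1, "apple": 1, "roses": 2},
    "fire":        {"red": 1, "fire engine": 1},
    "apple":       {"red": 1, "pear": 1, "fruit": 1},
    "fruit":       {"apple": 1, "pear": 1},
    "pear":        {"apple": 1, "fruit": 1},
    "roses":       {"red": 2, "flowers": 1},
    "flowers":     {"roses": 1},
    "fire engine": {"fire": 1},
}

def activate(source):
    """Dijkstra-style spread: total path length = how late a node is activated."""
    dist, heap = {source: 0}, [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue
        for nbr, length in LINKS.get(node, {}).items():
            if d + length < dist.get(nbr, float("inf")):
                dist[nbr] = d + length
                heapq.heappush(heap, (d + length, nbr))
    return dist

spread = activate("red")
print(spread["apple"], spread["flowers"])  # 1 3: closer concepts activate sooner
```

This also mirrors the priming result: after "red" is processed, a closely linked concept like "apple" is reached by activation sooner than a distant one like "flowers".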
The spreading activation model is a revision of the hierarchical network model, replacing the logical hierarchy with semantic relatedness, so it is more comprehensive and flexible than the hierarchical network model. It fits human performance better and can accommodate more uncertainty and fuzziness.
5 What experiments show that attentional selection occurs at a relatively late stage?
Key points: 1. Development of attentional selection models. The psychological mechanism of attention was one of the earliest experimental topics in modern cognitive psychology; the main aim has been to identify the selection mechanism of attention, and the main experimental method is dichotic listening. Since Broadbent first proposed the early selection model, cognitive psychologists have put forward different views, such as the intermediate (attenuation) selection model, the late response selection model, and resource-limitation theory.
2. The response selection model of attention. Deutsch and Deutsch (1963) proposed the response selection model; Norman (1968, 1976) later supported it and made some revisions. Basic assumption: all information entering the sensory channels reaches the higher analysis levels, receives perceptual processing, and is recognized. Attentional selection occurs between perception and working memory; that is, the filter does not select among perceptual stimuli but among the responses to them. The criterion of selection is the importance of the stimulus to the person.
The difference between the late (response) selection model and the early (perceptual) selection model of attention is the location of the bottleneck, whose position is determined by the nature of the task. In the early (perceptual) selection model, selection occurs before recognition; in the response selection model, selection occurs after semantic analysis.
3. Experiments supporting response selection. To find out where the bottleneck lies, that is, where parallel processing stops and serial processing begins, psychologists have carried out many experiments.
Experiment 1: Hardwick (1969) designed a dichotic target-word detection experiment. Stimuli, including some target words, were presented to both ears simultaneously. The target words appeared equally often in the right and left ears, but in random order. Subjects were asked to make different responses when they heard a target word in the left or the right ear. Results: the detection rate for target words reached 59%-68% in each ear, and the response speeds of the two ears were very close. This supports the response selection model.
Experiment 2: Shiffrin et al. (1974) conducted a similar experiment.
They asked subjects to identify a specific consonant against a background of white noise. There were two conditions: (1) dichotic presentation, with uncertainty about which ear the consonant would arrive in; (2) monaural presentation, with certainty about which ear the consonant would arrive in. The results showed no significant difference in the recognition rate of the consonant between the two conditions, supporting the response selection model of attentional selection.
Experiment 3: Lewis's (1970) shadowing experiment. Subjects were asked to shadow the words heard in one ear and to ignore any information in the other. The unattended-ear items were also words; sometimes they had no semantic relation to the shadowed words and sometimes they were synonyms of them. Subjects responded to the shadowed words while performing the listening task, and the reaction time from the appearance of a shadowed word to the subject's response was measured. The results showed that when the unattended word was a synonym of the shadowed word, reaction times were prolonged, but when the unattended word was unrelated this effect was not observed.
6. HAM, Anderson and Bower's model of human associative memory, is also a network model, but its basic representational unit is the proposition linking concepts, not the individual concept. Anderson and Bower distinguish four main associations: (1) context-fact, (2) location-time, (3) subject-predicate, (4) relation-object.