
Introduction to experimental psychology. Chapter I. A brief introduction to experimental psychology and the main tasks of a laboratory workshop at a pedagogical institute. Variables in experiments

Experimental psychology

Lecture course

Introduction to Experimental Psychology

Methods and results of multidimensional studies of individual psychological characteristics of the individual

Data collection methods

Multivariate data analysis methods

Psychological testing

General issues of test reliability.

Approaches to studying the validity of tests.

Methods for solving psychodiagnostic problems

Statistics and test processing

Main experiment

Correlation analysis

Conclusion

Literature

Introduction to Experimental Psychology

In practical life, personality theories do not play a significant role. The human psyche is an extremely complex phenomenon and presents significant difficulties for study.

Approaches to the systematization of psychological knowledge about personality can be divided into clinical-psychological and experimental. The first arose from verbal theories and observations, out of the desire to treat and correct deviant forms of behavior. There are many outstanding psychologists in this area of psychology (Adler, Bekhterev, Freud and many others). Although scientific in their goals, these theories achieved popularity without having a strict experimental basis. Measurement here is replaced by observation, data collection by the selection of representative cases, statistical processing by meaningful interpretation. However, this poverty of experimental procedure is offset by the ability to work with a large number of explanatory variables. It is important that supporters of the clinical method try to combine into a single system all the variables necessary for forming concepts about personality, without which it is impossible to establish real patterns.

Experimental psychology arose as a reaction to the verbal nature of the clinical-psychological research method. Quantitative experimental research is divided into two-dimensional and multidimensional. Both approaches study relationships between variables, but in different ways.

The two-dimensional experiment is a transfer of the research method adopted in the physical sciences. It involves identifying dependent and independent variables using experimental control. In a multidimensional experiment, all measured factors taken in their entirety are statistically taken into account simultaneously.

Proponents of the two-dimensional experimental method believe that isolating two variables is necessary in order to study a mental phenomenon in its pure form. In their opinion, this approach eliminates secondary factors. But a mental process never occurs in isolation: behavior is complex and determined by many internal and external factors. For this reason, attempts to form two groups of individuals identical in all respects except one, and to place them in identical conditions, fail even in a laboratory experiment.

A multivariate experiment requires the measurement of many related features, the independence of which is not known in advance. Analysis of the relationships between the studied characteristics allows us to identify a small number of hidden structural factors on which the observed variations in the measured variables depend. This approach is based on the assumption that initial signs are only superficial indicators that indirectly reflect personality traits hidden from direct observation, knowledge of which will allow a simple and clear description of individual behavior. Thus, the multidimensional approach is applied in areas where human behavior is considered in natural settings. What cannot be achieved by direct manipulation of dependent and independent variables can be achieved by more sophisticated statistical analysis of the entire set of relevant variables. The main advantage of the multidimensional approach is its effectiveness in studying real situations without the risk of their distortion by side effects that arise when creating artificial experimental conditions.
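One concrete way to carry out the kind of analysis described above is factor analysis. The sketch below is only an illustration, not part of the original text: it assumes simulated data in which many observed scores are driven by a small number of hidden factors, and the variable names and number of factors are invented for the example (Python with NumPy and scikit-learn).

# Illustrative sketch: recovering a small number of latent factors
# from many correlated observed variables (simulated data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_subjects, n_factors, n_items = 300, 2, 10

latent = rng.normal(size=(n_subjects, n_factors))      # hidden "traits"
loadings = rng.normal(size=(n_factors, n_items))       # how each trait shows up in each item
observed = latent @ loadings + 0.5 * rng.normal(size=(n_subjects, n_items))

fa = FactorAnalysis(n_components=n_factors)
scores = fa.fit_transform(observed)     # estimated factor scores for each subject
print(fa.components_.shape)             # (2, 10): estimated loadings of items on factors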

VORONEZH 2010

Psychology

Lecture course

OPD.F.03 “Experimental Psychology”

Senior Lecturer V.N. Sazonova


Topic 1. Introduction to experimental psychology. Theoretical and empirical knowledge in psychology

Topic 2. History of the development of experimental psychology

Topic 3. Methodology of experimental psychological research

Topic 4. Types and stages of psychological research

Topic 5. General idea of the system of methods in psychology

Topic 6. Non-empirical methods in experimental psychology

Topic 7. Empirical methods in experimental psychology

Topic 8. Psychological features of the experiment

Topic 9. Theory of psychological experiment

Topic 10. Experimental and non-experimental designs

Topic 11. Correlation studies

Topic 12. Psychological measurement. Elements of psychological measurement theory

Topic 13. Analysis and presentation of results

Topic 14. Empirical methods of particular psychological significance

Literature


Experimental psychology is an independent scientific discipline that develops the theory and practice of psychological research and has as its main subject of study a system of psychological methods, among which the main attention is paid to empirical methods.

Today, psychologists no longer need to convince specialists from other fields of knowledge that there is such a science as psychology. Although, after reading some psychological books or articles, natural scientists or mathematicians may have reasonable doubts about this. If psychology is a science, then all the requirements for the scientific method apply to the psychological method.

Science is a sphere of human activity whose result is new knowledge about reality that meets the criterion of truth. The practicality, usefulness, and effectiveness of scientific knowledge are considered derivative of its truth. A scientist, or rather a scientific worker, is a professional who builds his activity guided by the criterion of "truth - falsity".

In addition, the term “science” refers to the entire body of knowledge obtained to date by the scientific method.

The result of scientific activity can be a description of reality, an explanation and prediction of processes and phenomena, expressed in the form of a text, a structural diagram, a graphical relationship, a formula, etc. The ideal of scientific research is the discovery of laws - a theoretical explanation of reality.

However, scientific knowledge is not limited to theories. All types of scientific results can be conditionally ordered on the “empirical - theoretical knowledge” scale: a single fact, an empirical generalization, a model, a pattern, a law, a theory.



Science as a system of knowledge and as a result of human activity is characterized by completeness, reliability, and systematicity. Science as a human activity is characterized above all by method. The scientific method of research is rational. A person applying for membership in the scientific community must not only share the values of this sphere of human activity, but also apply the scientific method as the only acceptable one. The definition of the concept "method" most often found in the literature is: a set of techniques and operations for the practical and theoretical mastery of reality. It should only be added that this system of techniques and operations must be recognized by the scientific community as a mandatory norm regulating the conduct of research.

T. Kuhn distinguishes two different states of science: the revolutionary phase and the phase of "normal science". "'Normal science' means research firmly based on one or more past scientific achievements. Nowadays such achievements are presented, though rarely in their original form, in textbooks, elementary or advanced." The concept of a "paradigm" is associated with the concept of "normal science". A paradigm is a generally accepted standard, an example of scientific research, including a law, a theory, their practical application, a method, equipment, etc. It comprises the rules and standards of scientific activity accepted in the scientific community today, until the next scientific revolution breaks the old paradigm and replaces it with a new one.

The existence of a paradigm is a sign of the maturity of a science or of a separate scientific discipline. In scientific psychology, the problem of the formation of a paradigm is reflected in the work of W. Wundt and his scientific school. Taking the natural-science experiment as a model, psychologists of the late 19th and early 20th centuries transferred the basic requirements of the experimental method onto the soil of psychology. And to this day, whatever objections critics raise against the legitimacy of using the laboratory experiment in psychological research, scientists continue to orient themselves toward the principles of organizing natural-science research. Dissertation research is carried out, and scientific reports, articles, and monographs are written, on the basis of these principles.

A huge contribution to the development of scientific methodology in the middle and late 20th century was made by K. Popper, I. Lakatos, P. Feyerabend, G. Holton, and a number of other outstanding philosophers and scientists. They proceeded from an analysis of the development of scientific knowledge and of the actual activity of researchers. Their views were particularly influenced by the revolution in natural science, which affected mathematics, physics, chemistry, biology, psychology, and other fundamental sciences. The very approach to science and to life in science has changed. In the 19th century a scientist who had discovered a fact or a pattern, or created a theory, could defend his views against critical attacks throughout his life and preach them ex cathedra (science then differed little from philosophy), hoping for the truth and irrefutability of his convictions. Hence the principle of verifiability, the actual confirmability of a theory, put forward by O. Comte. In the 20th century, over the course of a single generation, scientific views of reality sometimes underwent dramatic changes. Old theories were refuted by observation and experiment. During his active scientific life a scientist could, in order to explain the experimental data obtained by his colleagues, put forward one after another a number of theories that refuted one another. The scientist stopped identifying himself with his idea; the "paranoid" attitude turned out to be ineffective and was rejected. A theory was no longer considered supremely valuable and turned into a temporary tool that, like a cutter or a milling head, can be sharpened, but in the end must be replaced.

So, any theory is a temporary structure and can be destroyed. Hence the criterion for the scientific nature of knowledge: knowledge that can be refuted (recognized as false) in the process of empirical verification is recognized as scientific. Knowledge for which it is impossible to come up with an appropriate procedure cannot be scientific.

In logic, the consequence of a true statement can only be true, and among the consequences of a false statement there are both true and false. Every theory is just a guess and can be disproved by experiment. K. Popper formulated the rule: “We don’t know - we can only guess.”

From the standpoint of critical rationalism (this is how Popper and his followers characterized their worldview), experiment is a method of refuting plausible hypotheses. The modern theory of statistical hypothesis testing and experimental planning come from the logic of critical rationalism.

Popper called this requirement, that a scientific theory be potentially refutable, the principle of falsifiability.

The normative process of scientific research is structured as follows:

1. Proposing a hypothesis (hypotheses).

2. Study planning.

3. Conducting research.

4. Data interpretation.

5. Refutation or non-refutation of the hypothesis (hypotheses).

6. In case of refutation of the old one, the formulation of a new hypothesis (hypotheses).

This scheme primarily suggests that in the structure of scientific research, the content of scientific knowledge is a variable value, and the method is a constant.

New knowledge is born in the form of a scientific assumption - a hypothesis through the prism of which data is interpreted. And putting forward a hypothesis, building a model of reality and theory are intuitive and creative processes. They are beyond the scope of consideration of the theory of scientific experiment.

An experiment, considered from these positions, is only a method of selection, control, and “culling” unreliable assumptions. New knowledge is obtained in other ways: empirical - by observation, and theoretical - through rational processing of intuitive guesses.

In addition to the method, the design of scientific research contains another indispensable component, namely the problem, the “frame” into which the hypothesis, interpretation, and the method itself are inscribed.

Popper repeatedly noted that as science develops, both hypotheses and theories change. With a change in paradigm, the method is revised, new problems appear, but old ones remain, deepening and differentiating with each cycle of research.

Many scientists prefer to classify not "sciences" (since few can say exactly what a science is) but problems.

Critical rationalism says nothing about where new knowledge comes from, but shows how old knowledge dies. In some ways it is similar to the synthetic theory of evolution, which still cannot explain the emergence of new species, but well predicts the process of their stabilization and extinction.

So, the paradigm of modern natural science has become the basis of the psychological method.

Introduction to experimental psychology.

How to start psychological research.

Literature - - Ch. 2: 54-65, ch. 10, - Ch. 1.6, - Ch.4

Experiment

In any experiment there is an object of study (a behavior, phenomenon, property, etc.). In addition, in an experiment typically:

· something is changed

· potential sources of influence are held constant

· some behavior is measured

In psychology, a variable is understood as any quantity, property, or parameter that interests us. This can be either a quantitatively measurable value (such as height, weight, reaction time, sensation thresholds, etc.) or a value that admits only qualitative description (for example, gender, race, mood, character, etc.).


[Diagram: the independent variable acts, through the object of research, on the dependent variable; control variables are held constant.]

Independent variable - a variable changed by the experimenter; it includes two or more states (conditions) or levels.

Dependent variable - a variable that changes under the action of the independent variable, taking on different values that are measured.

Control variable - a variable that is held constant.

The researcher changes the independent variable so that the effects of its different values or levels can be determined from changes in the dependent variable.

At the same time, the main difficulty is ensuring that the control variables remain invariant. If, during the experiment, some other variable that can also affect the dependent variable changes along with the independent variable we have identified, we speak of a confounding effect.



Confounding arises because the action of the independent variable is accompanied by a number of other variables that may differ systematically across the conditions of the independent variable, and can therefore strengthen (or weaken) its apparent effect.

Confounding occurs when, in designing the experiment, we failed to control a variable, or failed to check whether it was actually held among the control variables, and thereby in effect turned it into an additional independent variable.
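A minimal sketch of confounding (my own hypothetical illustration; the numbers and the "time of day" variable are invented): the uncontrolled variable co-varies with the independent variable and masquerades as its effect.

# Hypothetical confound: the experimental group also happens to be tested
# at a different time of day, and time of day affects the scores.
import numpy as np

rng = np.random.default_rng(1)
n = 50
true_effect_of_iv = 0.0          # the independent variable actually does nothing
effect_of_time_of_day = 5.0      # the confounding variable does have an effect

control = rng.normal(50, 10, n)                                                    # afternoon testing
experimental = rng.normal(50 + true_effect_of_iv, 10, n) + effect_of_time_of_day   # morning testing

# The observed group difference reflects the confound, not the independent variable.
print(experimental.mean() - control.mean())   # roughly 5, not 0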


Research project

It includes the following steps:

· Search for an idea. Sources of ideas: observations, experts, journals, books, textbooks, etc.

· Formulation of the hypothesis being tested. A testable hypothesis is a statement about the supposed or theoretically expected relationship between two or more variables. The hypothesis being tested either states explicitly or implies that the variables are measurable.

· Analysis of the relevant literature. The purpose of a literature review is to avoid reinventing the wheel, that is, to determine what is already known about your hypothesis. A literature review also helps in developing a sound research design and in selecting appropriate materials and stimuli.

· Development of the experimental design. Preliminary (pilot) tests use a small number of subjects. This is done to check: whether there are errors in the design and procedure of the experiment; whether the subjects understand the instructions; how long the experiment will take; whether the tasks are too difficult or too easy. At the same time, we practice observing and measuring the behavior that interests us.

· Data collection.

· Statistical data analysis. Typically, the logic of hypothesis testing is as follows: the experimenter selects conditions (experimental and control) to test his hypothesis, assuming that the experimental condition will produce some effect relative to the control condition. This hypothesis is tested against the null hypothesis. The null hypothesis is the statement that there is no relationship between the selected variables. An experiment is considered successful when it is possible to reject the null hypothesis, i.e. to show that it is false, so that the initial hypothesis about the presence of a relationship is supported. (A minimal illustration of this logic is sketched after this list.)

· Data interpretation.

It is not enough to obtain data - you still need to interpret it. Data simply has no value in itself; it must be related to a theory that explains behavior.

· Report.
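A minimal sketch of the hypothesis-testing logic described above (my own illustration, assuming Python with NumPy and SciPy; the data are simulated and the effect size is invented):

# Illustrative null-hypothesis test: experimental vs. control condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(loc=100, scale=15, size=30)        # no treatment
experimental = rng.normal(loc=110, scale=15, size=30)   # assumed treatment effect

t, p = stats.ttest_ind(experimental, control)
alpha = 0.05
if p < alpha:
    print(f"p = {p:.3f}: reject the null hypothesis of no relationship")
else:
    print(f"p = {p:.3f}: the null hypothesis cannot be rejected (null result)")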

Measurements in psychology

Literature: - Ch. 6; see almost any dictionary of psychology; Sidorenko E.V. Methods of mathematical processing in psychology. St. Petersburg, 1996.

MEASURING SCALES

A strict definition of a scale is rather difficult. It is easier to say that a scale is a rule by which we put names (numbers) in correspondence with objects or with properties of objects.

Types of measuring scales

Usually 4 types of measurement scales are distinguished (Druzhinin, 1997; Elmes et al., 1992; Stevens, 1951):

· scale of names (nominal scale, nominal scale)

· order scale (ordinal scale)

· interval scale (interval scale)

· scale of equal relations (scale of relations, ratio scale)

The types of scales are defined by the properties they possess. They are listed above in order of increasing information content: each subsequent scale has all the properties of the previous one plus additional ones. This means, in particular, that the statistical procedures that can be used with the naming scale are suitable for all the others, whereas the statistics for the ratio scale will not work for the three less informative scales.

Naming scale. Reflects only the difference (equality or inequality) of objects in the value of some property.

Order scale. The scale values are put into correspondence with the values of a certain property so that their order reflects the order in which that property changes across the selected objects. Such a scale shows the order of objects on the chosen indicator without giving any information about the real values of that indicator. Sometimes such scales have a zero that coincides with the “zero” of the selected property. The order scale assumes a monotonic relationship between the scale divisions and the property being measured. Examples

Interval scale. Has the properties of difference, magnitude, and equal intervals. In this scale not only the scale values but also the intervals between them are meaningful: the value of the difference between scale values reflects the difference in possession of the selected property. The interval scale assumes a linear relationship between the scale divisions and the property being measured. Examples

Ratio scale

Has all the properties of the previous scales and, in addition, has a real zero - that is, the zero of the scale corresponds to the “zero” of the selected property. A scale value then corresponds to the degree to which the property is present relative to this “zero”. This is the most powerful scale. In such scales not only the difference but also the ratio of values is meaningful (for example, an n-times larger scale value corresponds to an n-times larger value of the indicator).

Examples

The scale type:

· determines which statistical procedures we will use (see table; a minimal sketch follows below)

· helps to critically evaluate the research of others

· influences the interpretation of the data, since different scales reflect different properties.
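A minimal sketch of the point that admissible statistics depend on the scale type (my own illustration; the variables and values are hypothetical):

# Hypothetical data measured on the four scale types,
# each with a statistic that is admissible for it.
import statistics as st

gender      = ["f", "m", "f", "f", "m"]        # naming (nominal) scale
rank        = [1, 2, 3, 4, 5]                  # order (ordinal) scale
temperature = [36.5, 36.8, 37.1, 36.6, 38.0]   # interval scale (Celsius: arbitrary zero)
reaction_ms = [220, 250, 240, 300, 260]        # ratio scale (true zero)

print(st.mode(gender))                       # nominal: only frequencies / the mode are meaningful
print(st.median(rank))                       # ordinal: median and percentiles are meaningful
print(st.mean(temperature))                  # interval: differences and means are meaningful
print(max(reaction_ms) / min(reaction_ms))   # ratio: ratios of values are meaningful too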

Descriptive observations record what behavior occurs, with what frequency, in what sequence, and in what quantity.

There are three types of descriptive observations: naturalistic observations, case studies (individual cases), and surveys.

Advantages of descriptive observations:

· useful in the initial stages of research

· are useful when it is not possible to use other methods

Disadvantages:

· do not provide the opportunity to draw conclusions about the relationships between variables

· the impossibility of repetition makes them extremely subjective

· anthropomorphism (attributing human characteristics to animals and even inanimate objects)

· internal invalidity, since such methods allow one a) to select cases from a great many available cases, and likewise to select questions, answers, and facts; and b) to relate these cases and answers to a previously developed theory, and thus to “prove” any theory. Example: Freud's theory. Whatever Freud's genius, his theory does not stand up to criticism in terms of the facts and evidence on which it is based.


OBSERVATIONS OF RELATIONSHIPS (RELATIONAL OBSERVATIONS)

These are observations of relationships, dependencies between various phenomena and properties. To study such a relationship, we can use the correlation technique. The use of correlation techniques allows us to determine the degree of relationship between two variables of interest. Typically, we hope that from one variable we can predict another. Such conclusions are made “ex post facto”, that is, after what happened. First, observations about the behavior of interest are collected and then a correlation coefficient is calculated, which expresses the degree of relationship between two variables or measurements.
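A minimal sketch of this correlational technique (my own illustration, assuming SciPy; the two variables and the strength of their relationship are invented):

# Illustrative ex-post-facto correlation between two measured variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
hours_of_sleep = rng.normal(7, 1, 40)
test_score = 60 + 4 * hours_of_sleep + rng.normal(0, 5, 40)   # assumed relationship

r, p = stats.pearsonr(hours_of_sleep, test_score)
print(f"r = {r:.2f}, p = {p:.3f}")   # r expresses the degree of relationship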

Variables in experiments

Independent variables. The experimenter selects them on the basis that they can cause changes in behavior. When changes in the level (value) of an independent variable lead to a change in behavior, we say that the behavior is controlled by the independent variable. If the independent variable does not control behavior, the outcome is called a null result.

A null result can have several interpretations. 1. The experimenter was mistaken in thinking that the independent variable influences behavior; in that case the null result is true.

Dependent variables. They depend on the behavior of the subjects, which in turn depends on the independent variables. A good dependent variable should be reliable: when we repeat the experiment (same subjects, same levels of the independent variables, and so on), the dependent variable should take roughly the same values. A dependent variable is unreliable if there are problems with the way it is measured. Another problem with a dependent variable that can lead to a null result is that the dependent variable gets stuck at the highest or lowest point of the scale; this is called a ceiling (or, at the bottom of the scale, a floor) effect. It prevents the influence of the independent variable from showing up in the dependent variable. Finally, a null result may appear because of the statistical processing of the data: a statistical test may fail to show that the null hypothesis is false even when it actually is false.
Control variables. In any experiment there are more variables than can actually be controlled; there are no perfect experiments. The experimenter tries to control as many relevant variables as possible and hopes that the remaining uncontrolled variables will produce only a small effect compared with the effect of the independent variable. The smaller the effect of the independent variable, the more careful the control must be. Null results can also be obtained when the various factors are not controlled well enough. This is especially true outside laboratory conditions. You probably remember that we call the influence of these uncontrolled factors confounding.
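A minimal sketch of the ceiling effect mentioned above (my own hypothetical illustration): when the dependent variable is pinned at the top of its scale, a real effect of the independent variable cannot show up, and a spurious null result appears.

# Hypothetical ceiling effect: the test is too easy, so both groups
# score near the maximum and the real difference largely disappears.
import numpy as np

rng = np.random.default_rng(4)
max_score = 100
control = rng.normal(95, 10, 40)
experimental = rng.normal(105, 10, 40)    # truly about 10 points better

control_obs = np.clip(control, 0, max_score)
experimental_obs = np.clip(experimental, 0, max_score)

print(experimental.mean() - control.mean())            # ~10: the true effect
print(experimental_obs.mean() - control_obs.mean())    # much smaller: the ceiling hides it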

EXPERIMENTAL SCHEMES

Literature - - Ch. 3, 4, 6, - Ch. 2, 7, 8, - Ch. 5

There are two main possibilities:

· assign several subjects to each level of the independent variable

· distribute all subjects across all levels

The first possibility is called a between-groups experimental design: each condition (level) of the independent variable is presented to a different group of subjects. The second possibility is called a within-subject (intra-individual) experimental design: all the conditions under study are presented to one and the same subject (or subjects). Such a design is sometimes also called an individual-experiment design or a within-group (intragroup) design. (A small sketch of both assignment schemes follows.)
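A minimal sketch of the two assignment schemes (my own illustration; the subject labels and conditions are invented):

# Illustrative assignment of 8 subjects to two conditions (A, B).
import random
from itertools import permutations

subjects = [f"s{i}" for i in range(1, 9)]
conditions = ["A", "B"]

# Between-groups design: each subject gets only one condition.
random.shuffle(subjects)
between = {s: conditions[i % 2] for i, s in enumerate(subjects)}

# Within-subject design: each subject gets all conditions,
# with the order counterbalanced across subjects.
orders = list(permutations(conditions))            # ("A", "B") and ("B", "A")
within = {s: orders[i % len(orders)] for i, s in enumerate(subjects)}

print(between)
print(within)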

Types of interaction

Main effects are statistically independent of interaction effects. Consider an experiment with two independent variables, 1 and 2. Independent variable 1 has two levels, A and B. Independent variable 2 also has two levels, 1 and 2. In all three cases shown below, the main effects of these variables are the same (the difference in the dependent variable between the two levels of independent variable 1 is 20 units, and the difference between the two levels of independent variable 2 is 60 units).


3) In this case there is a crossover (intersecting) interaction:

[Table and graphs not reproduced: cell means of the dependent variable for levels A and B of independent variable 1 crossed with levels 1 and 2 of independent variable 2, together with the B - A and 2 - 1 differences and the row and column averages.]
This is a crossover interaction. It is the most reliable kind, because it cannot be explained by measurement and scaling problems of the dependent variable.


The main effects in the tables are the same, but the graphs are all different.

Moral: interactions must be considered before drawing conclusions in an experiment with more than one independent variable.
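A minimal sketch of how the main effects and the interaction are read off a 2 x 2 table of condition means (the cell values below are my own hypothetical reconstruction, chosen only to be consistent with the main effects of 20 and 60 units stated above and to show a crossover pattern):

# Hypothetical cell means for a 2 x 2 design:
# rows = levels 1 and 2 of independent variable 2,
# columns = levels A and B of independent variable 1.
import numpy as np

means = np.array([[ 40.0, 100.0],
                  [140.0, 120.0]])

main_effect_var1 = means[:, 1].mean() - means[:, 0].mean()                # B minus A -> 20.0
main_effect_var2 = means[1, :].mean() - means[0, :].mean()                # level 2 minus level 1 -> 60.0
interaction = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])   # -80.0

print(main_effect_var1, main_effect_var2, interaction)
# The sign of the B - A difference reverses across the levels of variable 2;
# that reversal is what makes the interaction a crossover.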

MIXED SCHEME

This is a design that uses one or more between-subjects variables and one or more within-individual variables.