Biomedical animal experimental research papers

1 Experimental design

Whether researchers apply statistical knowledge correctly in biomedical research directly affects its quality. The task of statistical design is to plan the study systematically, from deployment and implementation through to the interpretation of the results, so as to obtain reliable conclusions and information with the least expenditure of manpower and material resources. Its purpose is to determine whether a given treatment produces an effect. Experiments should follow the principle of unique difference: when two groups are compared, the only difference between them should be the treatment factor, while non-treatment factors related to the other experimental conditions should be kept equal. However, a difference in response between the treatment group and the control group does not necessarily mean that it is the result of treatment; two other possibilities can produce the difference, namely bias and chance. Bias refers to systematic differences between groups other than the treatment itself. The goal of statistical design and analysis in biomedical experiments is to eliminate potential bias and to reduce the influence of chance [2].

1.1 Bias in the experiment and its control

Bias consists of artificial, systematic, non-random errors that can enter at every step from design through implementation to the analysis of results. It is not caused by sampling; it is a deviation that shifts the experimental results away from their true values. Bias can arise at any stage, from the choice of the biomedical question, through the formulation and implementation of the research programme, the conduct of the experiment, and the analysis and interpretation of the data, to the publication of the results [2]. It usually appears as systematic error, and its magnitude depends on the research method and the specific experimental conditions. Common types include selection bias, observation bias and confounding bias. Researchers must recognise the bias that can arise during the experimental process and control it throughout, from the experimental design to the end of the study. Correct experimental design can control selection bias, while advance planning and appropriate measures can avoid or reduce observation bias. For confounding bias, important confounding factors can be handled by stratified randomization at the design stage, so that their distribution is balanced among the groups; at the analysis stage, confounding factors can be treated as stratification factors, or covariate analysis can be used to remove their influence. Only by effectively controlling or eliminating bias can false positive or false negative results be reduced.

1.2 Reducing the potential influence of chance

The influence of chance can be reduced but never completely eliminated, because even in a well-conducted study, animals receiving the same treatment will not respond in exactly the same way. Appropriate statistical analysis allows the experimenter to evaluate the probability of a false positive, that is, the probability of observing a difference when there is no treatment effect at all. The smaller this probability, the more confidently the experimenter can identify a real effect. To detect real effects more reliably, the experimental design must reduce the influence of chance and ensure that the true "signal" can be recognised above the "noise".

1.3 Elements of experimental design

In order to eliminate potential bias and reduce chance in biomedical experiments, the three elements of experimental design, namely the experimental subjects, the treatment factors and the experimental effect, should be carefully designed and controlled according to the four principles of control, replication, randomization and balance [3]. 1.3.1 Experimental subjects. The objects acted on by the treatment factors in an experiment are called the experimental subjects. Studies with different aims require different kinds of experimental subjects, and the total number of subjects required for a complete experimental design is called the sample size. The following points deserve attention when choosing animal subjects in biomedical experiments. ① Selection of species and strain: when selecting the species and strain of the experimental animals, particular attention should be paid to the level of their background response. Maximizing the response "signal" usually means avoiding species or strains with an extremely low background response level, but species or strains that over-respond also cause problems. Other issues in species selection, whether practical (life span, body size, availability, how well the animal's biology is understood) or theoretical (similarity to humans in biochemistry, physiology or anatomy), need to be weighed carefully from a professional standpoint. ② Number of animals: although the number of animals (sample size) needed for an experiment can be derived from the statistical design, the calculated value is often very large. The estimation of sample size is therefore the premise for ensuring the reliability (precision and power) of the conclusions, but the final number should combine the statistical calculation and previous biomedical research experience with the practical feasibility of the experiment and economic considerations. ③ Weight and age of the animals: to ensure the homogeneity of the experimental subjects, the weight and age of the animals should be as close as possible; the standard deviation of body weight should not exceed 10% of the mean; the age difference between rodents and other small animals should not exceed 1 week, and between large animals should not exceed 1 month. ④ Stratification of animals: in order to detect accurately the difference caused by a treatment factor, the treatment groups should be as homogeneous as possible with respect to other non-treatment factors that may affect the results. When there are differences between animal strains, there are two ways to obtain more accurate conclusions. The first is to treat strain as a "stratification variable" at the analysis stage, analysing the results for each strain separately and then combining them to reach an overall conclusion about the treatment effect. The second is to treat strain as a "blocking factor" in the experimental design, in which case the number of animals from each strain can be made equal in the control and treatment groups. Besides strain, other non-treatment factors such as sex, litter and weight band can also serve as stratification variables for local control, with stratified random allocation carried out accordingly (a minimal sketch of such an allocation is given below).
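The stratified random allocation described above can be sketched in a few lines of code. The following Python fragment is a minimal illustration only, using invented sex and body-weight values for a hypothetical animal list; the group names and the `stratified_assign` helper are assumptions made for the example and are not from the original text.

```python
import random

# Hypothetical animal records: id, sex, body weight (grams).
animals = [
    {"id": i, "sex": "M" if i % 2 == 0 else "F", "weight": 18 + (i % 5)}
    for i in range(24)
]
groups = ["control", "low_dose", "high_dose"]

def stratified_assign(animals, groups, seed=42):
    """Randomly assign animals to groups within each sex/weight stratum,
    so that these non-treatment factors stay balanced across groups."""
    rng = random.Random(seed)
    # Stratify by sex and by weight band (light vs heavy around the median).
    weights = sorted(a["weight"] for a in animals)
    median_w = weights[len(weights) // 2]
    strata = {}
    for a in animals:
        key = (a["sex"], "light" if a["weight"] < median_w else "heavy")
        strata.setdefault(key, []).append(a)
    assignment = {}
    for key, members in strata.items():
        rng.shuffle(members)                       # random order within the stratum
        for idx, a in enumerate(members):
            assignment[a["id"]] = groups[idx % len(groups)]  # near-equal group sizes per stratum
    return assignment

print(stratified_assign(animals, groups))
```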
1.3.2 Treatment factors. When designing an experimental study, it is necessary to distinguish the treatment factors under study from the non-treatment factors that also affect the results. The factors whose effects the researcher plans to examine scientifically through the study design are called treatment factors or experimental factors; important non-treatment (non-experimental) factors, such as the litter and body weight of the animals, are often overlooked even though they interfere with the evaluation of the experimental factors. The combined influence of the many other uncontrollable factors is collectively called experimental error. The experimental result reflects the joint action of treatment and non-treatment factors, so controlling and eliminating the interference of non-treatment factors and displaying the treatment effect correctly is the basic task of experimental design. 1.3.3 Experimental effect. The experimental effect is the response and outcome produced when the treatment factors act on the subjects; it indicates the strength of the experimental factors and is reflected through the observation indicators (in statistics, indicators are usually called variables). If the indicators are chosen improperly and fail to reflect the action of the treatment factors accurately, the research results will not be scientific, so the choice of observation indicators is a key link that determines the success or failure of the whole study. The observation of indicators should avoid bias; drawing on professional knowledge and within the limits of the available instruments and reagents, researchers should select objective indicators that are as specific, sensitive, accurate and reliable as possible. For semi-objective indicators (such as urine pH test-strip readings) or subjective indicators (behavioural measurements, pathological observations), strict reading criteria must be laid down in advance. Only in this way can the experimental results be analysed accurately and their credibility improved.

1.4 Principles of experimental design

In order to prevent biased results and to ensure that the experimental effects are expressed accurately and fully, biomedical experiments must follow the four basic principles of statistical design: control, replication, randomization and balance. The control group in a biomedical experiment must satisfy three conditions. ① The principle of reciprocity, that is, the principle of unique difference: apart from the treatment factor, the control group must have the same non-treatment factors as the experimental group, for example the same source of experimental units (animal species, weight, etc.) and the same experimental conditions, operating procedures and housing environment. ② The principle of synchronization: once the control and experimental groups are established, they remain in the same time and space throughout the study. ③ The principle of specific design: every control group is set up specifically for its corresponding experimental group; records from the literature, previous results or data from other studies must not be used as the comparison for the present study.

1.5 Common experimental design types in biomedicine

If several different effects must be evaluated in the same experiment, the experimenter should choose a design that can distinguish their respective contributions. The following designs are commonly used in biomedicine. 1.5.1 Completely randomized design: this is the most commonly used design in biomedical animal experiments. It is a single-factor design with k levels (k ≥ 2), that is, a scheme with one control group and one or more dose groups. It ensures that, regardless of the experimenter's subjective preferences, every experimental animal has the same chance of receiving any of the treatments. Because it applies the principles of replication and randomization, it keeps the influence of non-treatment factors essentially balanced across groups and so reflects the treatment effect truthfully. 1.5.2 Randomized block design: the randomized complete block design, usually called the randomized block design or matched-group design, is an extension of the paired design. Subjects with similar conditions are grouped into the same block (matched group), and the subjects within each block are then randomly allocated to the experimental groups. The advantage of this design is that the k experimental units in each block are highly homogeneous, making it easier to detect differences between treatments than with a completely randomized design. Note that the number of experimental units in each block must equal the number of treatments, and that missing values in the results cause some loss of information in the statistical analysis. 1.5.3 Latin square design: the Latin square design exercises local control in both the row and the column direction, so that rows and columns both act as blocks; it therefore has one more blocking factor than the randomized block design. In a Latin square, each row and each column forms a complete block, and each treatment appears exactly once in every row and every column; that is, the number of treatments = the number of row blocks = the number of column blocks = the number of replicates per treatment. 1.5.4 Factorial design: the factorial (all-factor) design is a multi-factor, multi-level design. It can test not only the differences in effect between the levels of each factor but also the interactions between factors. An interaction means that the difference in effect between the levels of one factor depends on the level of another factor; interactions may be synergistic or antagonistic, and factorial experiments are mainly used to analyse them. When there are many factors and levels, the numbers of subjects, treatment groups and experimental runs increase sharply, so simple factorial experiments are generally kept to a few factors, and multi-factor, multi-level experiments usually adopt an orthogonal design instead [5] (simple allocation sketches for these designs are given below).
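To make the allocation logic of these designs concrete, the following Python sketch shows, with illustrative treatment and block names, how a completely randomized allocation, a randomized block allocation (one animal per treatment in each block) and a simple Latin square layout might be generated. The helper functions (`completely_randomized`, `randomized_block`, `latin_square`) are hypothetical examples, not a prescribed implementation.

```python
import random

def completely_randomized(n_animals, treatments, seed=1):
    """Every animal has the same chance of receiving any treatment."""
    rng = random.Random(seed)
    labels = [treatments[i % len(treatments)] for i in range(n_animals)]
    rng.shuffle(labels)
    return labels

def randomized_block(blocks, treatments, seed=1):
    """Each block (e.g. a litter) contains exactly one animal per treatment,
    with the treatment order randomized within the block."""
    rng = random.Random(seed)
    plan = {}
    for block in blocks:
        order = treatments[:]
        rng.shuffle(order)
        plan[block] = order
    return plan

def latin_square(treatments, seed=1):
    """A simple Latin square: each treatment appears once in every row and column."""
    rng = random.Random(seed)
    k = len(treatments)
    base = [[treatments[(i + j) % k] for j in range(k)] for i in range(k)]
    rng.shuffle(base)   # permuting whole rows keeps the Latin-square property
    return base

print(completely_randomized(12, ["control", "A", "B"]))
print(randomized_block(["litter1", "litter2", "litter3"], ["control", "A", "B"]))
print(latin_square(["T1", "T2", "T3", "T4"]))
```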

2 Descriptive statistics of biomedical animal experiments

2.1 Types of data in biomedical experiments

In biomedical experiments, the indicators measured on the experimental subjects (animals) after an intervention usually fall into the following categories. ① Continuous data: the measurement result is a number with magnitude and unit, statistically called quantitative data, for example physiological and biochemical indices, body weight and organ weight. ② Categorical data: the measurement result assigns each observation to a qualitative category according to some attribute, statistically called qualitative data; it can be divided into binary data, multi-valued nominal data and multi-valued ordinal data, for example whether a response appears or not, dead or alive, with or without malformation, and the severity of pathological damage (none, mild, moderate, severe).
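As an illustration of how the two data categories can be kept distinct during data entry, the following sketch assumes the pandas library and uses invented column names and values; the ordered categorical type preserves the ranking of the severity grades mentioned above.

```python
import pandas as pd

df = pd.DataFrame({
    "body_weight_g": [21.3, 19.8, 22.5, 20.1],           # continuous (quantitative)
    "dead": [0, 1, 0, 0],                                  # binary (qualitative)
    "strain": ["A", "B", "A", "B"],                        # multi-valued nominal
    "lesion_severity": ["none", "mild", "severe", "mild"]  # multi-valued ordinal
})
# An ordered categorical dtype keeps the severity ranking for later analysis.
df["lesion_severity"] = pd.Categorical(
    df["lesion_severity"],
    categories=["none", "mild", "moderate", "severe"],
    ordered=True,
)
print(df.dtypes)
```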

2.2 Descriptive statistical indicators

Descriptive (or summary) statistics is the quantitative study of the frequency distribution of sample observations or measurements. Its purposes are: ① to summarize the measured or observed values and present them as statistics, charts or tables; ② to estimate the parameters of the population distribution. 2.2.1 Data organisation and exploration: for a given measurement indicator, the distribution type should in general be known from the literature. If there is no theoretical basis for judging the probability distribution, repeated measurements on a large sample are needed (theoretically a sample size greater than 100); the frequency distribution of the sample is then plotted and its distribution fitted and checked by a statistical test. 2.2.2 Description of the data. ① Frequency distribution of continuous data: by compiling a frequency table or drawing a stem-and-leaf plot of the sample data, the type of distribution and the central and dispersion tendencies of the frequency distribution can be judged, population parameters can be estimated, and outliers are easier to spot. ② Descriptive statistics of location: to describe the central tendency of the distribution, the commonly used indicators are the arithmetic mean, median, mode and geometric mean. ③ Descriptive statistics of dispersion: to describe the spread of the distribution, the commonly used indicators are the standard deviation and variance, the range and interquartile range, and the coefficient of variation and coefficient of dispersion. ④ Statistical charts and tables: statistical charts include the histogram and stem-and-leaf plot for the distribution of continuous data, the point-and-bar chart (showing mean and standard deviation) and box-and-whisker plot (showing median, range and interquartile range), the percentage bar chart and pie chart for describing proportions, the line chart for trends over time, and the probability-probability (P-P) plot for examining the distribution type. A statistical table should be simple and clear, easy to understand and easy to compare; it should be focused and well organised, avoiding excessive levels or a confused structure. In general a statistical table should be a three-line table, with horizontal lines only and no vertical or diagonal lines, and its title should be clear and not overly complicated.
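The location and dispersion indicators listed above can be computed directly. The sketch below assumes numpy and scipy and uses invented sample values; it is only meant to show which statistic corresponds to which indicator.

```python
import numpy as np
from scipy import stats

x = np.array([98.0, 102.5, 95.3, 110.2, 101.7, 99.4, 104.8, 97.6])

mean = x.mean()
median = np.median(x)
geo_mean = stats.gmean(x)          # geometric mean (positive data only)
sd = x.std(ddof=1)                 # sample standard deviation
value_range = x.max() - x.min()
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1                      # interquartile range
cv = sd / mean * 100               # coefficient of variation, %

print(f"mean={mean:.2f}, median={median:.2f}, geometric mean={geo_mean:.2f}")
print(f"SD={sd:.2f}, range={value_range:.2f}, IQR={iqr:.2f}, CV={cv:.1f}%")
```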

3 Hypothesis testing in biomedical animal experiments

The most common situation in biomedical animal experiments is that different treatments are given to groups of subjects and the groups are compared, with the effect of the treatment assessed by a statistical hypothesis test. The following issues deserve attention when carrying out hypothesis tests.

3.1 Basis for choosing the test method

3.1.1 Data type and number of variables: different types of data (quantitative and qualitative) should be compared with different statistical test methods, and univariate and multivariate data likewise require different methods. 3.1.2 Type of experimental design: the statistical test should match the specific type of experimental design, so that valid conclusions about the treatment effect can be drawn. 3.1.3 Prerequisites of the test method: before choosing a hypothesis test it is necessary to know whether the data satisfy the preconditions of the corresponding method. For example, parametric methods such as the t-test and analysis of variance require the data to be normally distributed with homogeneous variances, while the chi-square test requires the sample size to be greater than 40 and the expected frequencies to be greater than 5.
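As an example of checking the prerequisites of a test before using it, the sketch below applies the chi-square conditions just mentioned (total sample size greater than 40, expected frequencies greater than 5) to an invented 2×2 table, falling back to Fisher's exact test when they are not met; scipy is assumed, and the fallback is one reasonable choice rather than a rule from the text.

```python
import numpy as np
from scipy import stats

table = np.array([[18, 12],   # treatment: responders / non-responders (invented counts)
                  [ 9, 21]])  # control:   responders / non-responders

chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
n_total = table.sum()

if n_total > 40 and (expected > 5).all():
    print(f"chi-square test: chi2={chi2:.2f}, p={p:.3f}")
else:
    # Prerequisites not met: use Fisher's exact test for a 2x2 table instead.
    odds, p_exact = stats.fisher_exact(table)
    print(f"Fisher's exact test: p={p_exact:.3f}")
```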

3.2 Normality test and goodness of fit test

Statistical hypothesis testing requires determining whether the frequency distribution of the sample conforms to a particular theoretical distribution; if it does, the data can be analysed according to that distribution. A normality test can be used for the normal distribution, and a goodness-of-fit test for other distributions. Usually the theoretical distribution that an experimental parameter follows can be learned from the literature.
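A normality check of this kind can be run, for example, with the Shapiro-Wilk test. The sketch below assumes scipy and uses simulated data purely for illustration.

```python
import numpy as np
from scipy import stats

# Simulated measurements standing in for a real sample.
x = np.random.default_rng(0).normal(loc=100, scale=10, size=30)

w, p = stats.shapiro(x)            # Shapiro-Wilk normality test
print(f"Shapiro-Wilk: W={w:.3f}, p={p:.3f}")
if p > 0.05:
    print("No evidence against normality; parametric methods may be appropriate.")
else:
    print("Normality doubtful; consider a transformation or a nonparametric test.")
```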

3.3 Test for homogeneity of variance

The second reason why continuous data may fail the preconditions of parametric analysis is unequal variance. In general, the larger the values, the greater their inherent variability. For example, if the mean response of one group of animals is 100, its values may range from 80 to 120; if the mean response of another group is 300, its range may expand to 240 to 360. The remedy for unequal variance is data transformation. If the standard deviation of the data is proportional to the mean, the data should be converted to logarithms before statistical analysis; this not only makes the variability independent of the mean but also tends to bring the data closer to a normal distribution. If the variability increases with the mean but less markedly, a square-root transformation is more likely to make the variability independent of the mean. Some data may still show unequal variance after logarithmic or square-root transformation, and a nonparametric test should then be used.
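The sketch below illustrates this workflow with invented data: Levene's test is used to check the equality of variances, and a logarithmic transformation is tried when they are unequal. scipy is assumed, and the choice of Levene's test is an example rather than a requirement of the text.

```python
import numpy as np
from scipy import stats

group_a = np.array([ 82.,  95., 101., 118.,  99., 105.])
group_b = np.array([245., 312., 288., 351., 267., 330.])

stat, p = stats.levene(group_a, group_b)     # test for equal variances
print(f"Levene: p={p:.3f}")

if p < 0.05:
    # SD roughly proportional to the mean: try a log transformation first.
    log_a, log_b = np.log(group_a), np.log(group_b)
    stat_log, p_log = stats.levene(log_a, log_b)
    print(f"Levene after log transform: p={p_log:.3f}")
    # If variances remain unequal after log or square-root transforms,
    # a nonparametric test (e.g. Mann-Whitney U) is the fallback.
```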

3.4 One-sided and two-sided tests

Whether to use a one-sided or a two-sided test should be decided in advance on professional grounds. In general, if the purpose of the study is only to know whether the groups differ, the experimenter cannot predict the direction of the difference, and both positive and negative results are of interest, a two-sided test should be used. If the direction of the difference can be predicted in advance and the experimenter is interested in only one direction, a one-sided test should be used. In addition, a pilot study for dose design should use a two-sided test, and the formal experiment may use a one-sided test once the relevant information is known.
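The distinction can be expressed directly in the analysis code. The sketch below assumes scipy 1.6 or later (for the `alternative` argument of `ttest_ind`) and uses invented blood-pressure values; it runs the same comparison as a two-sided test and as a pre-specified one-sided test.

```python
import numpy as np
from scipy import stats

control = np.array([118., 122., 125., 119., 121., 124.])
treated = np.array([110., 113., 116., 111., 115., 112.])

# Two-sided: the direction of the difference was not specified in advance.
t2, p2 = stats.ttest_ind(treated, control)
# One-sided: it was specified in advance that treatment can only lower the value.
t1, p1 = stats.ttest_ind(treated, control, alternative="less")

print(f"two-sided p={p2:.4f}, one-sided p={p1:.4f}")
```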

3.5 Multiple comparisons and multiplicity

Biomedical experiments often compare many variables between the treatment group and the control group. Even when there is no real treatment effect, one or more variables may show a significant difference at the 5% test level purely by chance. Besides the increased probability of type I error produced by multiple comparisons of means, other multiplicity problems include multiple interim analyses, attention to multiple outcomes, and multiple comparisons between subgroups. The principles for handling multiplicity include: ① planning the multiple comparisons in advance; ② limiting the number of comparisons; ③ using stricter significance thresholds for multiple comparisons; ④ ensuring that the comparisons have a biological basis.
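One common way to apply stricter thresholds to multiple comparisons is to adjust the p-values. The sketch below assumes the statsmodels library and invented raw p-values, and shows Bonferroni and Holm adjustments as examples; they are illustrations, not the only acceptable methods.

```python
from statsmodels.stats.multitest import multipletests

# Raw p-values from several endpoint comparisons (invented for illustration).
raw_p = [0.012, 0.049, 0.21, 0.003, 0.08]

for method in ("bonferroni", "holm"):
    reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method=method)
    print(method, [f"{p:.3f}" for p in p_adj], list(reject))
```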

3.6 Independence of observations or experimental subjects

Many statistical tests, such as the binomial proportion test, the t-test and analysis of variance, require the observations or experimental subjects to be independent of one another. In some biomedical experiments, however, the observation units are not independent. For example, reproductive and developmental studies show a litter effect: because littermates share genetic factors, the intrauterine environment and the maternal drug-metabolising environment, their responses to toxic effects tend to be more alike than those of animals from different litters; data within a litter are therefore clustered, correlated data. Ignoring the within-litter correlation in the statistical analysis carries a real risk: because the observations on k littermates are correlated, they provide less information than observations on k animals from different mothers, and the stronger the within-litter correlation, the less information they contain. A standard error computed as if clustered observations were independent is smaller than the one appropriate for the data. Consequently, if a statistical method that assumes independent observations is applied, the probability of a type I error increases, that is, the risk of false positives rises and the validity of the experiment falls.
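The loss of information caused by within-litter correlation can be seen by comparing a standard error computed as if every pup were independent with one computed from litter means (treating the litter as the unit of analysis). The sketch below uses invented litter data and assumes numpy.

```python
import numpy as np

# Four litters from one group, four pups per litter (values are invented).
litters = {
    "litter1": [3.1, 3.2, 3.0, 3.3],
    "litter2": [2.4, 2.5, 2.6, 2.4],
    "litter3": [3.6, 3.5, 3.7, 3.6],
    "litter4": [2.9, 3.0, 2.8, 2.9],
}

all_pups = np.array([v for pups in litters.values() for v in pups])
naive_se = all_pups.std(ddof=1) / np.sqrt(len(all_pups))          # pretends 16 independent pups

litter_means = np.array([np.mean(pups) for pups in litters.values()])
litter_se = litter_means.std(ddof=1) / np.sqrt(len(litter_means))  # litter as unit of analysis

print(f"naive SE (pup as unit)    = {naive_se:.3f}")
print(f"litter-mean SE (litter)   = {litter_se:.3f}")  # typically larger: fewer independent units
```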

3.7 Application of Historical Control Data

In some cases, especially when the incidence is low, a single study may suggest that treatment affects tumour incidence without allowing a clear conclusion. One possible analysis is to compare the data of the treatment group with control-animal data from other studies. Although historical control data can be very useful, it must be stressed that, for a variety of reasons, the variation between studies is greater than the variation within a study: the source of the animals, the feed and housing conditions, the study duration, the mortality during the study, and the pathologists reading the slides may all affect the final tumour incidence. Ignoring these differences and comparing the tumour incidence of the treatment group with a pooled control group may therefore produce seriously misleading results and clearly exaggerate the statistical significance. Tarone [4] reviewed the analysis of proportion data with historical control groups.

3.8 Limitations of hypothesis testing

First, the P value from a hypothesis test gives no direct information about the magnitude of the treatment-induced effect. A treatment may induce a certain increase in response, but whether that increase is statistically significant depends on the size of the study and the variability of the data. A small study may miss large and important effects, especially when the endpoint is measured imprecisely, whereas in a large study even small, unimportant effects become statistically significant. For example, drug D lowered blood pressure by nearly 30 mmHg compared with drug C, but because there were only 10 cases the hypothesis test found no significant difference (P = 0.31); conversely, drug B lowered blood pressure by only 0.2 mmHg compared with drug A, yet because there were 500 cases the difference was significant (P < 0.05).
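The dependence of the P value on sample size can be demonstrated with a small calculation. The sketch below assumes scipy and uses invented summary statistics (not the drug data above): the same mean difference is tested once with 8 and once with 500 cases per group.

```python
from scipy import stats

# The same true effect (difference of 5 units, SD 10) analysed at two sample sizes.
small = stats.ttest_ind_from_stats(mean1=105, std1=10, nobs1=8,
                                   mean2=100, std2=10, nobs2=8)
large = stats.ttest_ind_from_stats(mean1=105, std1=10, nobs1=500,
                                   mean2=100, std2=10, nobs2=500)

print(f"n=8 per group:   p = {small.pvalue:.3f}")   # same effect, not significant
print(f"n=500 per group: p = {large.pvalue:.2e}")   # same effect, highly significant
```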
