1. Internal consistency: this refers to consistency among the internal parts of a measuring tool. For example, if a test contains multiple questions, those questions should measure the same concept or attribute and should be highly correlated with one another. Internal consistency can be evaluated with statistical methods such as Cronbach's alpha coefficient (see the first sketch after this list).
2. Test-retest reliability: this refers to the consistency of results obtained when the same object is measured at different points in time. If a measuring tool produces similar results across two or more administrations, it has high test-retest reliability. Test-retest reliability is usually evaluated by computing the correlation coefficient between the two sets of test results (see the second sketch after this list).
3. Inter-rater reliability: this refers to the consistency of results when different raters score the same object. In measurements that require subjective judgment, such as essay grading or interview scoring, inter-rater reliability is particularly important. It can be evaluated by computing the correlation between raters or by using statistical methods such as the Kappa coefficient (see the third sketch after this list).
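As a rough illustration of internal consistency, here is a minimal Python sketch of Cronbach's alpha computed from item variances and total-score variance; the data and the function name are hypothetical, not part of the original text.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 4 questions on a 1-5 scale
scores = [[4, 5, 4, 4],
          [2, 3, 2, 3],
          [5, 5, 4, 5],
          [3, 3, 3, 2],
          [4, 4, 5, 4]]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```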
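Likewise, a minimal sketch of test-retest reliability as the Pearson correlation between two administrations of the same test; the scores and the two-week interval below are hypothetical.

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same six people tested twice, two weeks apart
time1 = [12, 15, 9, 20, 17, 11]
time2 = [13, 14, 10, 19, 18, 12]

r, p_value = pearsonr(time1, time2)
print(f"Test-retest reliability r = {r:.2f} (p = {p_value:.3f})")
```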
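Finally, a minimal sketch of inter-rater reliability using Cohen's kappa from scikit-learn; the two raters' pass/fail judgments are hypothetical examples.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical pass/fail judgments from two raters on the same eight essays
rater_a = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "pass", "fail", "fail", "pass", "fail", "pass"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")
```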