Basic concepts
The nonparametric test is an important part of the methodology of statistical analysis; together with the parametric test, it forms the foundation of statistical inference. A parametric test infers parameters of the population distribution, such as the mean and variance, under the premise that the form of the population distribution is known. A nonparametric test, by contrast, uses sample data to infer the form of the population distribution when that distribution is unknown or only partially known. Because the inference procedure does not involve the parameters of the population distribution, it is called a "nonparametric" test.
Nonparametric tests for a single sample
SPSS provides nonparametric tests for a single sample, which use sample data to infer the distribution pattern of a single population. They include the chi-square test, the binomial test, the K-S test, and the runs test of the randomness of variable values.
I. Chi-square test of the population distribution
The chi-square test uses sample data to infer whether there is a significant difference between the population distribution and an expected or theoretical distribution. It is a goodness-of-fit test, usually suited to populations that take multiple categorical values. Its null hypothesis is that the distribution of the population from which the sample comes does not differ significantly from the expected or theoretical distribution.
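In SPSS this test is run from the menus; as an illustration only, the same goodness-of-fit idea can be sketched in Python with `scipy.stats.chisquare`, using hypothetical die-roll counts:

```python
from scipy import stats

# Hypothetical data: 60 rolls of a die, observed count for each face.
observed = [8, 12, 9, 11, 6, 14]
# Null hypothesis: the die is fair, so each face is expected 10 times.
expected = [10, 10, 10, 10, 10, 10]

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

A large p-value (here well above 0.05) means the observed counts are consistent with the expected distribution, so the null hypothesis is retained.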
II. Binomial test
Much of the data encountered in practice takes only two values: a population can be divided into male and female, and a coin toss comes up either heads or tails. The two values are usually coded as 1 and 0. If the same experiment is repeated n times, the number of occurrences of the two outcomes (1 or 0) can be described by a discrete random variable X. If the probability that X takes the value 1 is p, then the probability that it takes the value 0 is 1 − p, and the counts follow a binomial distribution.
The binomial test in SPSS uses sample data to determine whether the population conforms to a binomial distribution with a specified probability. Its null hypothesis is that the population from which the sample comes does not differ significantly from the specified binomial distribution.
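As an illustration of the same idea outside SPSS, a binomial test can be sketched with `scipy.stats.binomtest` on hypothetical coin-toss data:

```python
from scipy import stats

# Hypothetical data: 100 coin tosses produced 58 heads.
# Null hypothesis: the coin is fair, P(heads) = 0.5.
result = stats.binomtest(k=58, n=100, p=0.5)
print(f"p = {result.pvalue:.3f}")
```

Since the p-value exceeds 0.05, 58 heads in 100 tosses is not strong enough evidence against a fair coin.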
III. Single-sample K-S test
The K-S (Kolmogorov-Smirnov) test uses sample data to infer whether the population conforms to a specified theoretical distribution. It is a goodness-of-fit test suited to exploring the distribution of continuous random variables. The null hypothesis of the single-sample K-S test is that the population from which the sample comes does not differ significantly from the specified theoretical distribution; in SPSS the available theoretical distributions are the normal, uniform, exponential, and Poisson distributions.
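For illustration, a single-sample K-S test against the uniform distribution can be sketched with `scipy.stats.kstest`, using a hypothetical sample chosen so the statistic can be verified by hand:

```python
from scipy import stats

# Hypothetical sample: an evenly spaced grid of points on (0, 1).
sample = [i / 100 for i in range(1, 100)]

# Null hypothesis: the sample comes from the uniform distribution on [0, 1].
stat, p = stats.kstest(sample, "uniform")
print(f"D = {stat:.3f}, p = {p:.3f}")
```

The statistic D is the largest gap between the sample's empirical distribution function and the theoretical one; here it is tiny (0.01), so the null hypothesis is retained.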
IV. Runs test of the randomness of variable values
The runs test analyzes sample data to test whether the values of a population variable occur randomly. For example, when tossing a coin, record heads as 1 and tails as 0; after a number of tosses we obtain a sequence of 1s and 0s. We may then ask whether heads and tails occur randomly. The runs test is an effective way to answer such questions. Its null hypothesis is that the values of the population variable occur randomly.
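SPSS performs this test from its menus; since scipy has no one-sample runs test, the standard large-sample version can be sketched by hand (the coin-toss sequence below is hypothetical):

```python
import math

def runs_test(seq):
    """One-sample runs test for the randomness of a 0/1 sequence.
    Returns the number of runs and a large-sample z statistic."""
    n1 = seq.count(1)
    n0 = seq.count(0)
    # A run is a maximal block of identical consecutive values.
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    n = n1 + n0
    mean = 2 * n1 * n0 / n + 1
    var = 2 * n1 * n0 * (2 * n1 * n0 - n) / (n ** 2 * (n - 1))
    return runs, (runs - mean) / math.sqrt(var)

# Hypothetical coin-toss record: 1 = heads, 0 = tails.
seq = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
runs, z = runs_test(seq)
print(f"runs = {runs}, z = {z:.3f}")
```

Too few runs suggests clustering and too many suggests alternation; a |z| below about 1.96 means the sequence is consistent with randomness at the 5% level.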
Nonparametric tests for two independent samples
Nonparametric tests for two independent samples infer, from two groups of independent samples and with little knowledge of the population distributions, whether there is a significant difference between the two populations' distributions. Independent samples are samples for which a random sample drawn from one population has no influence on a random sample drawn from the other.
SPSS provides several nonparametric tests for two independent samples, including the Mann-Whitney U test, the K-S test, the Wald-Wolfowitz runs test, and the Moses extreme reactions test.
I. Mann-Whitney U test
The Mann-Whitney U test for two independent samples can be used to judge whether two population distributions are the same. Its null hypothesis is that there is no significant difference between the distributions of the two populations from which the independent samples come. The test makes this judgment from the average ranks of the two samples. Simply put, ranking arranges the variable values in ascending order; each value then occupies a position in the ordered sequence, and that position is the rank of the value.
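For illustration, the same test can be sketched with `scipy.stats.mannwhitneyu` on hypothetical scores from two independent groups:

```python
from scipy import stats

# Hypothetical scores from two independent groups.
group_a = [12, 15, 11, 18, 14, 16]
group_b = [22, 19, 25, 17, 21, 24]

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```

The U statistic counts how often a value in the first group exceeds a value in the second; here only one such pair exists, and the small p-value indicates the two population distributions differ significantly.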
II. K-S test
The K-S test can be used not only to test whether a single population follows a theoretical distribution, but also to test whether there is a significant difference between two population distributions. Its null hypothesis is that there is no significant difference between the distributions of the two populations from which the independent samples come.
Here the object of analysis is the rank of the variable values rather than the variable values themselves.
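As an illustration, the two-sample version can be sketched with `scipy.stats.ks_2samp`, using hypothetical samples constructed so the statistic is easy to verify:

```python
from scipy import stats

# Hypothetical samples: the second sample is the first shifted upward by 10.
x = list(range(20))        # 0, 1, ..., 19
y = [v + 10 for v in x]    # 10, 11, ..., 29

stat, p = stats.ks_2samp(x, y)
print(f"D = {stat:.3f}, p = {p:.4f}")
```

D is the largest gap between the two samples' empirical distribution functions (0.5 here, since half of each sample lies outside the other's range), and the small p-value leads to rejecting the null hypothesis of equal distributions.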
III. Wald-Wolfowitz runs test
The single-sample runs test examines whether a variable's values occur randomly, whereas the runs test for two independent samples examines whether there is a significant difference between the distributions of the two populations from which the samples come. Its null hypothesis is that there is no significant difference between the two population distributions.
The runs test for two independent samples is essentially the same as the single-sample version; the two differ only in how the number of runs is computed. In the two-sample test, the number of runs is determined from the ranks of the variable values in the pooled, sorted data.
IV. Moses extreme reactions test
The Moses extreme reactions test examines from another angle whether there is a significant difference between the distributions of the two populations behind two independent samples. Its null hypothesis is that there is no significant difference between the distributions of the two populations.
Its basic idea is to treat one sample as a control group and the other as an experimental group, using the control group as the baseline against which extreme reactions in the experimental group are tested. If the experimental sample shows no extreme reaction relative to the control sample, the two population distributions are considered not significantly different; otherwise, they are considered significantly different.
Nonparametric tests for multiple independent samples
Nonparametric tests for multiple independent samples analyze several groups of independent sample data to infer whether the medians or distributions of the populations from which they come differ significantly. Multiple groups of independent samples are samples obtained by sampling each population independently. Specifically:
I. Median test
The median test analyzes multiple groups of independent samples to examine whether the medians of the populations from which they come differ significantly. Its null hypothesis is that there is no significant difference among the medians of the multiple populations.
The underlying idea is that if the medians of the populations do not differ significantly, that is, if the populations share a common median, then this common median should lie near the middle of each group of samples. Consequently, within each group the numbers of values above and below that median should be roughly equal.
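For illustration, the counting idea above can be sketched with `scipy.stats.median_test` on hypothetical data from three groups; the function finds the grand median, tabulates counts above and below it per group, and applies a chi-square test to the table:

```python
from scipy import stats

# Hypothetical measurements from three independent groups.
g1 = [10, 12, 14, 11, 13]
g2 = [15, 18, 17, 16, 19]
g3 = [9, 10, 12, 11, 8]

stat, p, grand_median, table = stats.median_test(g1, g2, g3)
print(f"chi2 = {stat:.3f}, p = {p:.4f}, grand median = {grand_median}")
```

Here all of g2 lies above the grand median and none of g3 does, so the counts are far from balanced and the null hypothesis of equal medians is rejected.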
II. Kruskal-Wallis test
The Kruskal-Wallis test is in essence an extension of the Mann-Whitney U test for two independent samples, and it likewise tests whether the distributions of the populations behind the samples differ significantly. Its null hypothesis is that there is no significant difference among the distributions of the multiple populations.
The basic idea is, first, to pool the data from all the groups, sort them in ascending order, and assign each value its rank; then to examine whether the mean ranks of the groups differ significantly. Intuitively, if the mean ranks show no significant difference, the data of the groups are thoroughly intermixed and their ranks differ little, so the population distributions can be considered not significantly different. Conversely, if the mean ranks differ significantly, the groups' data are not well mixed: the values of some groups are generally large and those of others generally small, so the population distributions are considered significantly different.
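For illustration, the test can be sketched with `scipy.stats.kruskal` on hypothetical groups chosen so the H statistic can be verified by hand:

```python
from scipy import stats

# Hypothetical scores from three clearly separated independent groups.
g1 = [1, 2, 3]
g2 = [4, 5, 6]
g3 = [7, 8, 9]

# With no ties, the pooled ranks are simply 1..9, and
# H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1) = 7.2 here.
stat, p = stats.kruskal(g1, g2, g3)
print(f"H = {stat:.1f}, p = {p:.4f}")
```

The groups' rank sums (6, 15, 24) are as unequal as possible, so H is large and the null hypothesis of equal distributions is rejected.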
III. Jonckheere-Terpstra test
The Jonckheere-Terpstra test is also a nonparametric test of whether the distributions of the populations behind multiple independent samples differ significantly. Its null hypothesis is that there is no significant difference among the distributions of the multiple populations.
Its basic idea is similar to that of the Mann-Whitney U test for two independent samples: the statistic is built by counting, for each group of samples, the number of observations that are smaller than the observations of the other groups.
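SPSS offers this test directly; scipy has no built-in Jonckheere-Terpstra function, so the counting idea can be sketched by hand, with hypothetical groups listed in the hypothesised increasing order:

```python
from itertools import combinations

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra statistic: for every ordered pair of groups
    (i < j), count the pairs (x, y), with x from group i and y from
    group j, where x < y; ties count 0.5.
    Large values support an increasing trend across the groups."""
    j_stat = 0.0
    for gi, gj in combinations(groups, 2):
        for x in gi:
            for y in gj:
                if x < y:
                    j_stat += 1.0
                elif x == y:
                    j_stat += 0.5
    return j_stat

# Hypothetical groups, listed in the hypothesised increasing order.
groups = [[10, 12, 14], [13, 15, 18], [17, 19, 21]]
j = jonckheere_terpstra(groups)
print(f"J = {j}")
```

This only computes the statistic; in practice J is compared against its null distribution (or a normal approximation) to obtain a p-value.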
Nonparametric tests for two paired samples
Nonparametric tests for two paired samples infer, when little is known about the population distributions, whether there is a significant difference between the distributions of the two populations behind two groups of paired samples. They include the following:
McNemar test: a test of the significance of change, it examines "before vs. after" variation with each subject serving as its own control. Its null hypothesis is that there is no significant difference between the distributions of the two populations from which the paired samples come.
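SPSS computes the McNemar test from the paired cross-table; as an illustration, an exact version can be sketched in Python by applying a binomial test to the discordant pairs (the table below is hypothetical):

```python
from scipy import stats

# Hypothetical paired before/after outcomes, as a 2x2 cross-table:
#                    after: success   after: failure
# before: success          30               8
# before: failure          20              42
b, c = 8, 20  # the two discordant cells (subjects who changed)

# Exact McNemar test: under H0, the discordant pairs split 50/50.
result = stats.binomtest(k=b, n=b + c, p=0.5)
print(f"p = {result.pvalue:.4f}")
```

Only the subjects who changed carry information about a before/after shift; here the 8-versus-20 split yields a p-value below 0.05, so a significant change is detected.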
Sign test: also a nonparametric method for testing whether the distributions of the two populations behind two paired samples differ significantly. Its null hypothesis is that there is no significant difference between the distributions of the two populations.
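For illustration, the sign test reduces to a binomial test on the signs of the paired differences, sketched here with hypothetical before/after scores:

```python
from scipy import stats

# Hypothetical paired scores (before and after) for 10 subjects.
before = [72, 65, 80, 74, 68, 77, 70, 75, 69, 73]
after = [75, 70, 78, 80, 74, 79, 76, 74, 75, 78]

diffs = [a - b for a, b in zip(after, before)]
pos = sum(d > 0 for d in diffs)  # number of positive differences
n = sum(d != 0 for d in diffs)   # zero differences are discarded

# Sign test: under H0, positive and negative signs are equally likely.
result = stats.binomtest(k=pos, n=n, p=0.5)
print(f"positives = {pos}/{n}, p = {result.pvalue:.4f}")
```

With 8 positive differences out of 10, the p-value is about 0.109, so this small sample does not yet show a significant difference; note the test uses only the signs, not the sizes, of the differences.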
Wilcoxon signed-rank test: also judges, by analyzing two paired samples, whether there is a difference between the distributions of the two populations from which the samples come. Its null hypothesis is that there is no significant difference between the distributions of the two populations.
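For illustration, the same paired data as in the sign-test sketch can be analyzed with `scipy.stats.wilcoxon`, which uses the magnitudes of the differences through their ranks rather than only their signs:

```python
from scipy import stats

# Hypothetical paired scores (before and after) for 10 subjects.
before = [72, 65, 80, 74, 68, 77, 70, 75, 69, 73]
after = [75, 70, 78, 80, 74, 79, 76, 74, 75, 78]

stat, p = stats.wilcoxon(before, after)
print(f"W = {stat}, p = {p:.4f}")
```

Because it also weighs how large each difference is, the signed-rank test reaches significance (p < 0.05) on data where the plain sign test did not.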
Nonparametric tests for multiple paired samples
Nonparametric tests for multiple paired samples analyze several groups of paired sample data to infer whether the medians or distributions of the populations from which they come differ significantly.
Friedman test: a rank-based nonparametric test of whether multiple population distributions differ significantly. Its null hypothesis is that there is no significant difference among the distributions of the populations behind the multiple paired samples.
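For illustration, the Friedman test can be sketched with `scipy.stats.friedmanchisquare` on hypothetical ratings; values are ranked within each subject, and the rank sums of the treatments are compared:

```python
from scipy import stats

# Hypothetical ratings of three treatments by five subjects
# (each list holds one treatment's ratings across the subjects).
t1 = [4, 3, 5, 4, 4]
t2 = [2, 1, 3, 2, 1]
t3 = [5, 4, 4, 5, 5]

stat, p = stats.friedmanchisquare(t1, t2, t3)
print(f"chi2 = {stat:.1f}, p = {p:.4f}")
```

Treatment 2 is ranked lowest by every subject, so the rank sums diverge sharply and the null hypothesis of equal distributions is rejected.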
Cochran Q test: analyzes multiple groups of paired binary samples to infer whether the distributions of the populations from which they come differ significantly. Its null hypothesis is that there is no significant difference among the distributions of the multiple populations.
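Since Cochran's Q applies to dichotomous (0/1) outcomes and scipy has no built-in version, its standard formula can be sketched by hand on hypothetical pass/fail data:

```python
def cochran_q(data):
    """Cochran's Q statistic for k related binary (0/1) samples.
    data: one list per subject, each holding k binary outcomes."""
    k = len(data[0])
    col_totals = [sum(row[j] for row in data) for j in range(k)]
    row_totals = [sum(row) for row in data]
    grand = sum(row_totals)
    num = (k - 1) * (k * sum(c * c for c in col_totals) - grand ** 2)
    den = k * grand - sum(r * r for r in row_totals)
    # Q is referred to a chi-square distribution with k - 1 df.
    return num / den

# Hypothetical pass (1) / fail (0) results for 6 subjects under 3 conditions.
data = [
    [1, 0, 1],
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
]
q = cochran_q(data)
print(f"Q = {q:.1f}")
```

Here condition 2 produces far fewer passes (1) than conditions 1 and 3 (5 each), which drives Q upward; Q is then compared against a chi-square distribution with k − 1 degrees of freedom for the p-value.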
Kendall's coefficient of concordance (Kendall's W) test: a nonparametric test for multiple paired samples that is often used to analyze whether a panel of judges rate consistently. Its null hypothesis is that the judges' ratings are inconsistent.
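Kendall's W has a simple closed form when the rankings contain no ties, so it can be sketched by hand with hypothetical judges' rankings:

```python
def kendall_w(ranks):
    """Kendall's coefficient of concordance W for m judges ranking
    k objects (no ties). ranks: one rank list per judge.
    W ranges from 0 (no agreement) to 1 (perfect agreement)."""
    m = len(ranks)
    k = len(ranks[0])
    # Total rank received by each object across all judges.
    totals = [sum(r[j] for r in ranks) for j in range(k)]
    mean_total = m * (k + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    # W = 12S / (m^2 (k^3 - k)).
    return 12 * s / (m ** 2 * (k ** 3 - k))

# Hypothetical example: 3 judges each rank 4 contestants (1 = best).
ranks = [
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
]
w = kendall_w(ranks)
print(f"W = {w:.3f}")
```

The three judges largely agree (they all rank contestant 4 last), so W comes out high, close to 1.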
This concludes the basic overview of nonparametric tests.




