LOCAL= locally restandardize fit statistics = No


LOCAL=N accords with large-sample statistical theory.

 

Standardized fit statistics report on the hypothesis test: "Do these data fit the model (perfectly)?" With large sample sizes, and consequently high statistical power, this hypothesis can never be accepted, because all empirical data exhibit some degree of misfit to the model. This can make t standardized statistics meaninglessly large. t standardized statistics are reported as unit normal deviates. Thus ZSTD=2.0 is as unlikely to be observed as a value of 2.0 or greater is for a random selection from a normal distribution with mean 0.0 and standard deviation 1.0. ZSTD (standardized as a z-score) is used for a t-test result when either the t-test value has effectively infinite degrees of freedom (i.e., approximates a unit normal value) or the Student's t-statistic distribution value has been adjusted to a unit normal value.
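
As an illustration of that interpretation (a sketch, not part of Winsteps), the unit-normal tail probability corresponding to a ZSTD value can be computed with a few lines of Python:

import math

def upper_tail(z):
    # P(Z >= z) for a standard normal variable
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(upper_tail(2.0))  # about 0.023, i.e. roughly 1 chance in 44 if the data fit the model perfectly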

 

LOCAL=N ZSTD = t standardized fit statistics are computed in their standard form. Even the slightest item misfit in tests taken by many persons will be reported as very significant misfit of the data to the model. Columns reported with this option are headed "ZSTD" for model-exact standardization. This is a "significance test" report on "How unexpected are these data if the data fit the model perfectly?"

 

LOCAL=L LOG = Instead of t standardized statistics, the natural logarithm of the mean-square fit statistic is reported. This is a linearized form of the ratio-scale mean-square. Columns reporting this option are headed "LOG", for mean-square logarithm.
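
For illustration (a sketch, not Winsteps output), the LOG transformation maps mean-squares symmetrically about zero, so that equivalent amounts of overfit and underfit receive equal and opposite values:

import math

for mnsq in (0.5, 1.0, 2.0):
    print(mnsq, round(math.log(mnsq), 2))
# 0.5 -> -0.69 (overfit), 1.0 -> 0.0 (as modeled), 2.0 -> 0.69 (underfit)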

 

LOCAL=Y ZEMP = t standardized fit statistics are transformed to reflect their level of unexpectedness in the context of the amount of disturbance in the data being analyzed. The model-exact t standardized fit statistics are divided by their local sample standard deviation, so that their transformed sample standard deviation becomes 1.0. Columns reported with this option are headed "ZEMP" for empirically restandardized. The effect of the local rescaling is to make the fit statistics more useful for interpretation. The ZEMP statistics are an "acceptance test" report on "How unlikely is this amount of misfit in the context of the overall pattern of misfit in these data?"

 

Ronald A. Fisher ("Statistical Methods and Scientific Inference", New York: Hafner Press, 1973, p. 81) differentiates between "tests of significance" and "tests of acceptance". "Tests of significance" answer hypothetical questions: "how unexpected are the data in the light of a theoretical model for their construction?" "Tests of acceptance" are concerned with whether what is observed meets empirical requirements. Instead of a theoretical distribution, local experience provides the empirical distribution. The "test" question is not "how unlikely are these data in the light of a theory?", but "how acceptable are they in the light of their location in the empirical distribution?"

 

This also parallels the work of Shewhart and W. E. Deming in quality-control statistics. They construct the control lines on their quality-control plots based on the empirical "common-cause" variance of the data, not on a theoretical distribution or specified tolerance limits.

 


 

You are using the ZSTD "standardized" fit statistics to test a null hypothesis.

The usual null hypothesis is

"These data fit the Rasch model exactly after allowing for the randomness predicted by the model"

 

Empirical data never do fit the Rasch model exactly, so the more data we have, the more certain we are that the null hypothesis must be rejected. This is what your fit statistics are telling you.

 

But often we don't want to know "Do these data fit the model?"

Instead, we want to know, "Is this item behaving much like the others, or is it very different?"

 

So, in Winsteps, you can specify LOCAL=Yes to test a different null hypothesis. This is not "cheating" as long as you inform the reader what hypothesis you are testing. The revised null hypothesis is:

"These data fit the Rasch model exactly after allowing for a random normal distribution of standardized fit statistics equivalent to that observed for these data."

 

So the revised standardized fit statistics (ZEMP) report how unlikely each original standardized fit statistic (ZSTD) is to be observed, if those original ZSTD values were to conform to a random normal distribution with the same variance as that observed for them in these data.

 

To avoid the ZEMP values contradicting the mean-square values, positive and negative ZSTD values are rescaled separately. Accordingly, Winsteps does two half adjustments:

for the k items where ZSTD(i) > 0,
ZEMP(i) = ZSTD(i)/S, where S = sqrt[ (1/k) Sum( ZSTD(i)² ) ], summed over those k items;

for the k items where ZSTD(i) < 0,
ZEMP(i) = ZSTD(i)/S, where S = sqrt[ (1/k) Sum( ZSTD(i)² ) ], summed over those k items.
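
A minimal sketch of this two-half rescaling in Python (hypothetical variable names, not the Winsteps source code):

import math

def zemp(zstd):
    # Rescale positive and negative ZSTD values separately, so that each
    # half is divided by its own root-mean-square and the sign of the
    # misfit (and so agreement with the mean-squares) is preserved.
    pos = [z for z in zstd if z > 0]
    neg = [z for z in zstd if z < 0]
    s_pos = math.sqrt(sum(z * z for z in pos) / len(pos)) if pos else 1.0
    s_neg = math.sqrt(sum(z * z for z in neg) / len(neg)) if neg else 1.0
    out = []
    for z in zstd:
        if z > 0:
            out.append(z / s_pos)
        elif z < 0:
            out.append(z / s_neg)
        else:
            out.append(0.0)
    return out

# Example: a mix of underfitting (positive) and overfitting (negative) items
print(zemp([3.2, 1.1, 0.8, -0.4, -2.5]))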