SAFILE= item structure anchor file
The SFILE= (not ISFILE=) of one analysis may be used unedited as the SAFILE= of another.
The rating-scale structure parameter values (taus, Rasch-Andrich thresholds, steps) can be anchored (fixed) using SAFILE=. The anchoring option facilitates test form equating. The structure in the rating (or partial credit) scales of two test forms, or in the item bank and in the current form, can be anchored at their other form or bank values. Then the common rating (or partial credit) scale calibrations are maintained. Other measures are estimated in the frame of reference defined by the anchor values.
SAFILE= file name    file containing the anchor details
SAFILE=*             in-line list in the control file
SAFILE=?             opens a Browser window to find the file
In order to anchor category structures, an anchor file must be created of the following form:
1. Use one line per category Rasch-Andrich threshold to be anchored.
2. If all items use the same rating scale (i.e., ISGROUPS=" ", the standard, or all items are assigned to the same grouping, e.g., ISGROUPS=222222..), then type the category number, a blank, and the "structure measure" value (in logits or your user-rescaled units) at which to anchor the Rasch-Andrich threshold corresponding to that category (see Table 3.2). Arithmetical expressions are allowed. If you wish to force category 0 to stay in an analysis, anchor its calibration at 0. Specify SAITEM=Yes to use the multiple-grouping ISGROUPS= format.
or
3. If items use different rating (or partial credit) scales (i.e., ISGROUPS=0, or items are assigned to different groupings, e.g., ISGROUPS=122113..), then type the sequence number of any item belonging to the grouping, a blank, the category number, a blank, and the "structure measure" value (in logits if USCALE=1, otherwise your user-rescaled units) at which to anchor the Rasch-Andrich threshold up to that category for that grouping. If you wish to force category 0 to stay in an analysis, anchor its calibration at 0.
This information may be entered directly in the control file using SAFILE=*
Anything after ";" is treated as a comment.
Example 1: Dichotomous: A score of, say, 438 is to mean that a person has a 62% chance (rather than the Winsteps/Ministep default of 50%) of answering a dichotomous item of difficulty 438 correctly. How can I change this threshold from 50% to 62%?
In your control file, include:
UASCALE=1 ; anchoring is in logits
SAFILE=* ; anchors the response structure
0 0
1 -0.489548225 ; ln((100%-62%)/62%)
*
When you look at Table 1, you should see that the person abilities are now lower relative to the item difficulties.
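The anchor value is simply the logit corresponding to the chosen success probability. Here is a quick arithmetic check, as a sketch in plain Python (not part of Winsteps; 0.62 is the probability from this example):

import math

p = 0.62                                 # target probability of success at the item difficulty
print(round(math.log((1 - p) / p), 6))   # -0.489548, the value anchored above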
Example 2: Polytomous: A score of, say, 438 is to mean that a person has an expected score of 62% of the maximum on a polytomous item (0-1-2-3) of difficulty 438. How can I set the thresholds to correspond to 62%?
The default item difficulty for a polytomy is the point where the lowest and highest categories are equally probable. We need to make a logit adjustment to all the category thresholds equivalent to a change of difficulty corresponding to a rating of .62*3 = 1.86.
This is intricate:
1. We need the current set of Rasch-Andrich thresholds (step calibrations) = F1, F2, F3.
2. We need to compute the measure (M) corresponding to a score of 1.86 on the rating scale
3. Then we need to anchor the rating scale at:
SAFILE=*
0 0
1 F1 - M
2 F2 - M
3 F3 - M
*
An easy way to obtain M is to produce GRFILE= from the Winsteps "Output Files" menu, and then look up the Measure corresponding to the Score you want.
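M can also be computed directly from the Rasch-Andrich thresholds. The following sketch (plain Python, not a Winsteps facility) evaluates the rating-scale expected-score curve and uses bisection to find the measure giving an expected score of 0.62*3 = 1.86. The threshold values F are made-up placeholders; substitute your own Table 3.2 values:

import math

def expected_score(theta, thresholds):
    # expected score on a 0..m rating scale at measure theta (relative to the item),
    # given the Rasch-Andrich thresholds F1..Fm
    cumulative = [0.0]
    for f in thresholds:
        cumulative.append(cumulative[-1] + (theta - f))
    expv = [math.exp(c) for c in cumulative]
    return sum(k * v for k, v in enumerate(expv)) / sum(expv)

def measure_for_score(target, thresholds, lo=-20.0, hi=20.0):
    # bisection: the expected-score curve is monotonic in theta
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if expected_score(mid, thresholds) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

F = [-1.0, 0.0, 1.0]                 # hypothetical F1, F2, F3
M = measure_for_score(0.62 * 3, F)   # about 0.47 for these illustrative thresholds
print("0 0")                         # placeholder line for the bottom category
for k, f in enumerate(F, start=1):
    print(k, round(f - M, 2))        # category number and anchor value for SAFILE=*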
Example 3: A rating scale, common to all items, of three categories numbered 2, 4, and 6, is to be anchored at pre-set calibrations. The calibration of the Rasch-Andrich threshold from category 2 to category 4 is -1.5, and of the Rasch-Andrich threshold to category 6 is +1.5.
1. Create a file named, say, "STANC.FIL"
2. Enter the lines
2 0 placeholder for bottom category of this rating scale
4 -1.5 Rasch-Andrich threshold from category 2 to category 4, anchor at -1.5 logits
6 1.5 Rasch-Andrich threshold from category 4 to category 6, anchor at +1.5 logits
Note: categories are calibrated pair-wise, so the Rasch-Andrich threshold values do not have to advance.
3. Specify, in the control file,
ISGROUPS=" " (the standard)
SAFILE=STANC.FIL structure anchor file
or, enter directly in the control file,
SAFILE=*
4 -1.5
6 1.5
*
If you wish to use the multiple-grouping format, specify an example item from the grouping, e.g., item 13:
SAITEM=YES
SAFILE=*
13 4 -1.5
13 6 1.5
*
To check this, look for "A" after the anchored Andrich threshold in Table 3.2:
+------------------------------------------------------------------
|CATEGORY OBSERVED|OBSVD SAMPLE|INFIT OUTFIT|| ANDRICH |CATEGORY|
|LABEL SCORE COUNT %|AVRGE EXPECT| MNSQ MNSQ||THRESHOLD| MEASURE|
|-------------------+------------+------------++---------+--------+
| 4 4 620 34| .14 .36| .87 .72|| -1.50A| .00 |
Example 4: A partial credit analysis (ISGROUPS=0) has a different rating scale for each item. Item 15 has four categories, 0,1,2,3 and this particular response structure is to be anchored at pre-set calibrations.
1. Create a file named, say, "PC.15"
2. Enter the lines
15 0 0 Bottom categories are always at logit 0
15 1 -2.0 item 15, Rasch-Andrich threshold to category 1, anchor at -2 logits
15 2 0.5
15 3 1.5
3. Specify, in the control file,
ISGROUPS=0
SAFILE=PC.15
Example 5: A grouped rating scale analysis (ISGROUPS=21134..) has a different rating scale for each grouping of items. Item 26 belongs to grouping 5 for which the response structure is three categories, 1,2,3 and this structure is to be anchored at pre-set calibrations.
1. Create a file named, say, "GROUPING.ANC"
2. Enter the lines
26 2 -3.3 for item 26, representing grouping 5, Rasch-Andrich threshold to category 2, anchored at -3.3
26 3 3.3
3. Specify, in the control file,
ISGROUPS=21134..
SAFILE=GROUPING.ANC
Example 6: A partial-credit scale has a category that was unobserved in the run which generated the anchor values, but we want to use those anchor values where possible.
We have two choices.
a) Treat the unobserved category as a structural zero, i.e., unobservable. If so...
Rescore the item using IVALUE=, removing the unobserved category from the category hierarchy, and use a matching SAFILE=.
In the run generating the anchor values, which had STKEEP=NO,
+------------------------------------------------------------------
|CATEGORY OBSERVED|OBSVD SAMPLE|INFIT OUTFIT|| ANDRICH |CATEGORY|
|LABEL SCORE COUNT %|AVRGE EXPECT| MNSQ MNSQ||THRESHOLD| MEASURE|
|-------------------+------------+------------++---------+--------+
| 1 1 33 0| -.23 -.15| .91 .93|| NONE |( -.85)| 1
| 2 2 23 0| .15 .05| .88 .78|| -1.12 | 1.44 | 2
| 4 3 2 0| .29 .17| .95 .89|| 1.12 |( 3.73)| 4
|-------------------+------------+------------++---------+--------+
In the anchored run:
IREFER=A...... ; item 1 is an "A" type item
CODES=1234 ; valid categories
IVALUEA=12*3 ; rescore "A" items from 1,2,4 to 1,2,3
SAFILE=*
1 1 .00
1 2 -1.12
1 3 1.12
*
If the structural zeroes in the original and anchored runs are the same, then the same measures would result from:
STKEEP=NO
SAFILE=*
1 1 .00
1 2 -1.12
1 4 1.12
*
b) Treat the unobserved category as an incidental zero, i.e., very unlikely to be observed.
Here is Table 3.2 from the original run which produced the anchor values. The NULL indicates an incidental or sampling zero.
+------------------------------------------------------------------
|CATEGORY OBSERVED|OBSVD SAMPLE|INFIT OUTFIT|| ANDRICH |CATEGORY|
|LABEL SCORE COUNT %|AVRGE EXPECT| MNSQ MNSQ||THRESHOLD| MEASURE|
|-------------------+------------+------------++---------+--------+
| 1 1 33 0| -.27 -.20| .91 .95|| NONE |( -.88)| 1
| 2 2 23 0| .08 -.02| .84 .68|| -.69 | .72 | 2
| 3 3 0 0| | .00 .00|| NULL | 1.52 | 3
| 4 4 2 0| .22 .16| .98 .87|| .69 |( 2.36)| 4
|-------------------+------------+------------++---------+--------+
Here is the matching SAFILE=
SAFILE=*
1 1 .00
1 2 -.69
1 3 46.71 ; flag category 3 with a large positive value, i.e., unlikely to be observed.
1 4 -46.02 ; maintain sum of structure measures (step calibrations) at zero.
*
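The -46.02 need not be guessed: it is whatever value keeps the anchored structure measures summing to zero. A one-line check in plain Python (46.71 is just the arbitrarily large flag value from the anchor file above):

anchors = [-0.69, 46.71]            # anchored thresholds for categories 2 and 3
print(round(-sum(anchors), 2))      # -46.02, the balancing anchor for category 4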
Example 7: Score-to-measure Table 20 is to be produced from known item and rating scale structure difficulties.
Specify:
IAFILE= ; the item anchor file
SAFILE= ; the structure/step anchor file (if not dichotomies)
CONVERGE=L ; only logit change is used for convergence
LCONV=0.005 ; logit change too small to appear on any report.
STBIAS=NO ; anchor values do not need estimation bias correction.
The data file comprises two dummy data records, so that every item has a non-extreme score, e.g.,
For dichotomies:
Record 1: 10101010101
Record 2: 01010101010
For a rating scale from 1 to 5:
Record 1: 15151515151
Record 2: 51515151515
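For a long test the two dummy records can be generated rather than typed. A sketch in plain Python (NI= and the category codes are the only inputs; a person label still needs to precede each record, positioned according to NAME1= and ITEM1=):

ni = 11                   # number of items (NI=)
bottom, top = "1", "5"    # lowest and highest category codes; use "0", "1" for dichotomies
record1 = "".join(bottom if i % 2 == 0 else top for i in range(ni))
record2 = "".join(top if i % 2 == 0 else bottom for i in range(ni))
print(record1)            # 15151515151
print(record2)            # 51515151515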
Redefining the Item Difficulty of Rating Scale items:
We want to define the difficulty of an item as 65% success on the item, instead of the usual approximately 50% success.
1. Suppose we have these Rasch-Andrich thresholds (step calibrations) from a standard rating-scale analysis:

Category   Rasch-Andrich Threshold
   1              (0.00)
   2               -.98
   3               -.25
   4               1.22
2. The item score range is 1-4, so we need the relative measure corresponding to an expected score of 65% of the way up the range = 1 + (4-1)*0.65 = 2.95
3. We look at the GRFILE= and see that the measure corresponding to an expected score of 2.95 is about 0.58 (we can verify this by looking at the Graphs window, Expected score ICC)
ITEM   MEAS   SCOR   INFO     0     1     2     3
   1    .48   2.89    .67   .05   .23   .48   .23
   1    .56   2.94    .65   .05   .22   .49   .25
   1    .64   2.99    .63   .04   .20   .49   .27
4. We want the item difficulty to correspond to 65% success instead of its current approximately 50% correct. So we have raised the bar for the item. The item is to be reported as about 0.57 logits more difficult.
5. To force the item to be reported as 0.57 logits more difficult, we need the step calibrations to be 0.57 logits easier = -0.57 logits.
Category   Rasch-Andrich Threshold
   1              (0.00)
   2        -.98 + -.57 = -1.55
   3        -.25 + -.57 = -.82
   4        1.22 + -.57 = .65
6. Now, since the item mean remains 0, all the person measures will be reduced by 0.57 logits relative to their original values.
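As a cross-check, the rating-scale expected-score formula (the same one sketched in Example 2) confirms that, with the shifted thresholds, the re-defined item difficulty sits at an expected score of about 2.95, i.e., 65% of the way up the 1-4 range. Plain Python, using the step-5 values:

import math

shifted = [-1.55, -0.82, 0.65]       # Rasch-Andrich thresholds after subtracting 0.57
theta = 0.0                          # evaluate at the re-defined item difficulty
cumulative = [0.0]
for f in shifted:
    cumulative.append(cumulative[-1] + (theta - f))
expv = [math.exp(c) for c in cumulative]
expected = sum((k + 1) * v for k, v in enumerate(expv)) / sum(expv)   # categories scored 1-4
print(round(expected, 2))            # 2.95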
Pivots are the locations in the dichotomy, rating (or partial credit) scale at which the categories would be dichotomized, i.e., the place that indicates the transition from "bad" to "good", "unhealthy" to "healthy". Ordinarily the pivot is placed at the point where the highest and lowest categories of the response structure are equally probable. Pivot anchoring redefines the item measures. The effect of pivot anchoring is to move the reported difficulty of an item relative to its rating scale structure. It makes no change to the fit of the data to the model or to the expected observation corresponding to each actual observation.
PIVOT= was an earlier, unsuccessful attempt to automate this procedure.
Dichotomies (MCQ, etc.):
Example 8: To set mastery levels at 75% on dichotomous items (so that maps line up at 75%, rather than 50%), we need to adjust the item difficulties by ln(75/(100-75)) = 1.1 logits.
SAFILE=*
0 0
1 -1.1 ; anchor the Rasch-Andrich threshold 1.1 logits down, so that person ability matches item difficulty at 75% success.
; If you are using USCALE=, then the value is -1.1 * USCALE=
*
Similarly for 66.67% success or 66.67% mastery level: ln(66.67/(100-66.67)) = 0.693 logits.
SAFILE=*
0 0
1 -0.6931 ; notice that this is negative
*
Similarly for 65% success or 65% mastery level: ln(65/(100-65)) = 0.619 logits.
SAFILE=*
0 0
1 -0.619 ; notice that this is negative
*
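The same arithmetic as in Example 1 gives the anchor value for any mastery level. A quick check of the three levels above, as a sketch in plain Python:

import math

for pct in (75, 66.67, 65):
    p = pct / 100.0
    offset = -math.log(p / (1 - p))   # anchor value for the Rasch-Andrich threshold
    print(pct, round(offset, 3))      # 75 -1.099, 66.67 -0.693, 65 -0.619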
Polytomies (rating scales, partial credit, etc.):
When a variety of rating (or partial credit) scales are used in an instrument, their different formats perturb the item hierarchy. This can be remedied by choosing a point along each rating (or partial credit) scale that dichotomizes its meaning (not its scoring) in an equivalent manner. This is the pivot point. The effect of pivoting is to move the structure calibrations such that the item measure is defined at the pivot point on the rating (or partial credit) scale, rather than the standard point (at which the highest and lowest categories are equally probable).
Here is a general procedure.
Use ISGROUPS=
Do an unanchored run, make sure it all makes sense.
Write out an SFILE=structure.txt of the rating scale (partial credit) structures.
Calculate, for each item, the amount that you want the item difficulty to move. Looking at the Graphs menu or Table 2 may help you decide.
Make this amount of adjustment to every value for the item in the SFILE= file (a scripted version is sketched after this procedure).
So, suppose you want item 3 to be shown as 1 logit more difficult on the item reports.
The SFILE=structure.txt is
3 0 0.0
3 1 -2.5
3 2 -1.0
...
*
Change this to (subtract 1 from each Rasch-Andrich threshold value: lowering the thresholds makes the item report as 1 logit more difficult)
3 0 0.0 ; placeholder for the bottom category, unchanged
3 1 -3.5
3 2 -2.0
...
*
This becomes the SAFILE=structure.txt of the pivoted analysis.
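The adjustment can be scripted. Here is a sketch in plain Python; the file names structure.txt and pivoted.txt and the shifts dictionary are illustrative, and the sign convention follows the example above (subtract the shift to report the item as more difficult):

shifts = {3: 1.0}          # hypothetical: report item 3 as 1 logit more difficult

with open("structure.txt") as sfile, open("pivoted.txt", "w") as safile:
    for line in sfile:
        parts = line.split()
        if len(parts) < 3 or line.lstrip().startswith(";"):
            safile.write(line)                 # pass comments, "*" and blank lines through
            continue
        item, category, value = int(parts[0]), parts[1], float(parts[2])
        value -= shifts.get(item, 0.0)         # subtract: the item reports as more difficult
        safile.write(f"{item} {category} {value:.2f}\n")

The bottom-category line is only a placeholder, so shifting it along with the thresholds does no harm.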
Example 9: Pivoting with ISGROUPS=. Positive (P) items pivot at an expected score of 2.5. Negative (N) items at an expected score of 2.0
ISGROUPS=PPPPPNNNNN
SAFILE=*
1 2 0.7 ; item 1 is in grouping P: put in the value necessary to move the center to the desired spot,
6 2 0.5 ; item 6 is in grouping N: i.e., the "structure calibration" - "score-to-measure of the pivot point"
*
Example 10: To set a rating (or partial credit) scale turning point: In the Liking for Science, with 0=Dislike, 1=Neutral, 2=Like, anything less than an expected score of 1.5 indicates some degree of lack of liking:
SAFILE=*
1 -2.22 ; put in the step calibration necessary to move expected rating of 1.5 to the desired spot
*
RATING SCALE PIVOTED AT 1.50
+------------------------------------------------------------------
|CATEGORY OBSERVED|OBSVD SAMPLE|INFIT OUTFIT|| ANDRICH |CATEGORY|
|LABEL SCORE COUNT %|AVRGE EXPECT| MNSQ MNSQ||THRESHOLD| MEASURE|
|-------------------+------------+------------++---------+--------+
| 0 0 197 22| -2.29 -2.42| 1.05 .99|| NONE |( -3.42)| dislike
| 1 1 322 36| -1.17 -.99| .90 .79|| -2.22 | -1.25 | neutral
| 2 2 368 41| .89 .80| .98 1.29|| -.28 |( .92)| like
|-------------------+------------+------------++---------+--------+
|MISSING 1 0| .04 | || | |
+------------------------------------------------------------------
AVERAGE MEASURE is mean of measures in category.
+-------------------------------------------------------------------+
|CATEGORY STRUCTURE | SCORE-TO-MEASURE |CUMULATIV| COHERENCE|
| LABEL MEASURE S.E. | AT CAT. ----ZONE----|PROBABLTY| M->C C->M|
|------------------------+---------------------+---------+----------|
| 0 NONE |( -3.42) -INF -2.50| | 63% 44%| dislike
| 1 -2.22 .10 | -1.25 -2.50 .00| -2.34 | 55% 72%| neutral
| 2 -.28 .09 |( .92) .00 +INF | -.16 | 84% 76%| like
+-------------------------------------------------------------------+
The .00 values at a score of 1.5 show the effect of pivot anchoring on the rating (or partial credit) scale. The structure calibrations are offset.
TABLE 21.2 LIKING FOR SCIENCE (Wright & Masters p.18) sf.out Aug 1 21:31 2000
EXPECTED SCORE OGIVE: MEANS
++------+------+------+------+------+------+------+------++
2 + 2222222222+
| 22222222 |
| 2222 |
| 222 |
E | 22 |
X 1.5 + 12 +
P | 11| |
E | 11 | |
C | 11 | |
T | 1 | |
E 1 + 11 | +
D | 11* | |
| 1 * | |
S | 11 * | |
C | 11 * | |
O .5 + 01 * | +
R | 00| * | |
E | 000 | * | |
|00000 | * | |
| | * | |
0 + | * | +
++------+------+------+------+------+------+------+------++
-4 -3 -2 -1 0 1 2 3 4
PUPIL [MINUS] ACT MEASURE
Example 11: A questionnaire includes several rating (or partial credit) scales, each with a pivotal transition-structure between two categories. The item measures are to be centered on those pivots.
1. Use ISGROUPS= to identify the item response-structure groupings.
2. Look at the response structures and identify the pivot point:
e.g., here are categories for "grouping A" items, after rescoring, etc.
Strongly Disagree 1
Disagree 2
Neutral 3
Agree 4
Strongly Agree 5
If agreement is wanted, pivot between 3 and 4, identified as transition 4.
If no disagreement is wanted, pivot between 2 and 3, identified as transition 3.
3. Anchor the transition corresponding to the pivot point at 0, e.g., for agreement:
ISGROUPS=AAAAAAABBBBAACCC
SAFILE=*
6 4 0 6 is an item in grouping A, pivoted at agreement (Rasch-Andrich threshold from category 3 into category 4)
8 2 0 8 is an item in grouping B, pivoted at Rasch-Andrich threshold from category 2 into category 3
; no pivoting for grouping C, as these are dichotomous items
*
Example 12: Anchor files for dichotomous and partial credit items. Use the IAFILE= for anchoring the item difficulties, and SAFILE= to anchor partial credit structures. Winsteps decomposes the Dij of partial credit items into Di + Fij.
The Di for the partial credit and dichotomous items are in the IAFILE=
The Fij for the partial credit items are in the SAFILE=
Suppose the data codes are A,B,C,D, and there are two partial-credit items, scored 0,1,2, and two right-wrong items, scored 0,1. Then:
CODES=ABCD
KEY1=BCBC ; SCORE OF 1 ON THE 4 ITEMS
KEY2=DA** ; SCORE OF 2 ON THE PARTIAL CREDIT ITEMS
ISGROUPS=0
If the right-wrong MCQ items are to be scored 0,2, then
CODES=ABCD
KEY1=BC** ; SCORE OF 1 ON THE PARTIAL CREDIT ITEMS
KEY2=DABC ; SCORE OF 2 ON ALL 4 ITEMS
ISGROUPS=0
but better psychometrically is:
CODES=ABCD
KEY1=BCBC ; SCORE OF 1 ON THE 4 ITEMS
KEY2=DA** ; SCORE OF 2 ON THE PARTIAL CREDIT ITEMS
IWEIGHT=*
3-4 2 ; items 3 and 4 have a weight of 2.
*
ISGROUPS=0
Then write out the item and partial credit structures
IFILE= items.txt
SFILE=pc.txt
In the anchored run:
CODES= ... etc.
IAFILE=items.txt
SAFILE=pc.txt
CONVERGE=L ; only logit change is used for convergence
LCONV=0.005 ; logit change too small to appear on any report.
Anchored values are marked by "A" in the Item Tables, and also Table 3.2
Anchoring with Partial-Credit Delta δij (Dij) values
Example:
Title = "Partial credit with anchored Dij structures"
;---------------------------
; STRUCTURE MEASURE
; --------------------
;Item i delta_i1 delta_i2
;---------------------------
;Item 1 -3.0 -2.0
;Item 2 -2.0 1.0
;Item 3 0.0 2.0
;Item 4 1.0 3.0
;Item 5 2.0 3.0
;---------------------------
Item1 = 11 ; observations start in column 11
NI=5 ; 5 items
Name1 = 1 ; person label in column 1
CODES = 012 ; valid data values
ISGROUPS = 0 ; partial-credit model
IAFILE=*
1-5 0 ; item difficulties for all items set at 0
*
SAFILE=*
1 0 0 ; this is a placeholder for data code 0 for item 1
1 1 -3.0
1 2 -2.0
2 0 0
2 1 -2.0
2 2 1.0
3 0 0
3 1 0.0
3 2 2.0
4 0 0
4 1 1.0
4 2 3.0
5 0 0
5 1 2.0
5 2 3.0
*
&END
END LABELS
Person 1 22111
Person 2 21010
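If the known values are the Dij themselves rather than an already-decomposed Di + Fij, the decomposition can be computed before anchoring. A sketch in plain Python, using the delta values from the comment block above and defining Di as the mean of that item's Dij so that each item's Fij sum to zero:

deltas = {                      # D_ij values from the comment block above
    1: [-3.0, -2.0],
    2: [-2.0, 1.0],
    3: [0.0, 2.0],
    4: [1.0, 3.0],
    5: [2.0, 3.0],
}

print("IAFILE=*")
for item, dij in deltas.items():
    di = sum(dij) / len(dij)               # D_i = mean of the item's D_ij
    print(f"{item} {di:.2f}")
print("*")

print("SAFILE=*")
for item, dij in deltas.items():
    di = sum(dij) / len(dij)
    print(f"{item} 0 0")                   # placeholder line for the bottom category
    for cat, d in enumerate(dij, start=1):
        print(f"{item} {cat} {d - di:.2f}")   # F_ij = D_ij - D_i
print("*")

The control file above makes the simpler, equivalent choice: every Di is anchored at 0 in IAFILE= and the Dij are entered directly as the Fij, since Dij = Di + Fij.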