Interobserver Agreement Equation


The idea that practicing behavior analysts should collect and report reliability or interobserver agreement (IOA) data in behavioral assessments is reflected in the Behavior Analyst Certification Board's (BACB) assertion that behavior analysts are responsible for the use of "different methods of evaluating the results of measurement methods, such as inter-observer agreement, accuracy, and reliability" (BACB, 2005). In addition, Vollmer, Sloman, and St. Peter Pipkin (2008) argue that omitting these data severely limits any interpretation of the effectiveness of a behavior-change procedure. The credibility of a behavioral assessment study should therefore be conditional on the inclusion of agreement data (Friman, 2009). In light of these considerations, it is not surprising that a recent review of articles published in the Journal of Applied Behavior Analysis (JABA) from 1995 to 2005 (Mudford, Taylor, & Martin, 2009) found that 100% of articles reporting continuously recorded dependent variables included IOA calculations. These data, together with earlier reports on reliability procedures in JABA (Kelly, 1977), suggest that the inclusion of IOA is a hallmark, if not a standard, of behavioral assessment.

Interval-based IOA algorithms assess the agreement between two observers' interval recordings (including time sampling). These measures consist of (a) the interval-by-interval, (b) the scored-interval, and (c) the unscored-interval IOA algorithms. Table 2 summarizes the strengths of the three interval-based algorithms. As a running example of interval-based IOA, consider the hypothetical data stream depicted in Figure 2, in which two independent observers recorded the occurrence and nonoccurrence of a target response across seven consecutive intervals.
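To make the three interval-based calculations concrete, here is a minimal Python sketch, assuming each observer's record is a list of 0/1 interval scores (1 = response scored in that interval). The function names and the specific 0/1 values are illustrative assumptions; the example pair of records is merely one plausible rendering of the Figure 2 pattern described in the next paragraph, not the figure's actual data.

```python
def interval_by_interval_ioa(obs1, obs2):
    """Percentage of all intervals in which both observers scored the same value."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

def scored_interval_ioa(obs1, obs2):
    """Agreement restricted to intervals in which at least one observer scored a response."""
    scored = [(a, b) for a, b in zip(obs1, obs2) if a == 1 or b == 1]
    if not scored:
        return float("nan")
    return 100 * sum(a == b for a, b in scored) / len(scored)

def unscored_interval_ioa(obs1, obs2):
    """Agreement restricted to intervals in which at least one observer scored no response."""
    unscored = [(a, b) for a, b in zip(obs1, obs2) if a == 0 or b == 0]
    if not unscored:
        return float("nan")
    return 100 * sum(a == b for a, b in unscored) / len(unscored)

# Hypothetical records: disagreement in intervals 1 and 7, agreed
# nonoccurrence in intervals 2-4, agreed occurrence in intervals 5-6.
observer_1 = [1, 0, 0, 0, 1, 1, 0]
observer_2 = [0, 0, 0, 0, 1, 1, 1]

print(interval_by_interval_ioa(observer_1, observer_2))  # 5/7  -> ~71.4%
print(scored_interval_ioa(observer_1, observer_2))       # 2/4  -> 50.0%
print(unscored_interval_ioa(observer_1, observer_2))     # 3/5  -> 60.0%
```

Note how the three coefficients diverge on the same record pair: restricting the calculation to scored or unscored intervals penalizes disagreements about rare or frequent responses, respectively, more heavily than the overall interval-by-interval figure does.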

In the first and seventh intervals, the observers disagree on whether a response occurred. However, both observers agree that no response occurred in the second, third, and fourth intervals. Finally, both observers agree that at least one response occurred in the fifth and sixth intervals.

Duration-based IOA algorithms evaluate the agreement between two observers' timing data. These measures consist of (a) total duration and (b) mean duration-per-occurrence IOA. Table 3 summarizes the strengths of the two algorithms. As an example of duration-based IOA, consider the hypothetical data stream depicted in Figure 3, in which two independent observers recorded the duration of a target response across four occurrences.
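A similar Python sketch for the two duration-based coefficients follows, assuming each observer's record is a list of per-occurrence durations in seconds. The four duration values are invented placeholders rather than the Figure 3 data, and the formulas used are the conventional ones: total duration IOA divides the shorter total duration by the longer, while mean duration-per-occurrence IOA averages the shorter/longer ratio computed separately for each occurrence.

```python
def total_duration_ioa(obs1, obs2):
    """Shorter total duration divided by longer total duration, as a percentage."""
    t1, t2 = sum(obs1), sum(obs2)
    return 100 * min(t1, t2) / max(t1, t2)

def mean_duration_per_occurrence_ioa(obs1, obs2):
    """Mean of the per-occurrence (shorter / longer) duration ratios, as a percentage."""
    ratios = [min(a, b) / max(a, b) for a, b in zip(obs1, obs2)]
    return 100 * sum(ratios) / len(ratios)

# Hypothetical durations (seconds) for the same four occurrences.
observer_1 = [12.0, 8.0, 15.0, 5.0]
observer_2 = [10.0, 8.0, 12.0, 7.0]

print(total_duration_ioa(observer_1, observer_2))                # 37/40 -> 92.5%
print(mean_duration_per_occurrence_ioa(observer_1, observer_2))  # ~83.7%
```

Because total duration IOA compares only the two totals, offsetting errors across occurrences can inflate it; the per-occurrence average is the more conservative of the two figures, as the example outputs show.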

