Working-dog organizations often use behavioral ratings by experts to evaluate a dog's likelihood of success. However, these experts are frequently under severe time constraints. One way to alleviate the pressure on limited organizational resources would be to use non-experts to assess dog behavior. Here, in populations of military working dogs (Study 1) and explosive-detection dogs (Study 2), we evaluated the reliability and validity of behavioral ratings made by minimally trained non-experts from videotapes. Analyses yielded evidence for generally good levels of inter-observer reliability and criterion validity (indexed by convergence between the non-expert ratings and ratings made previously by experts). We found some variation across items in Study 2: reliability and validity were significantly lower for three of the 18 items, and one item had reliability and validity estimates that were heavily affected by the behavioral test environment. There were no differences in reliability or validity based on the age of the dog. Overall, the results suggest that for most items, ratings made by minimally trained non-experts can serve as a viable alternative to expert ratings, freeing the limited resources of highly trained staff.
Publication Title: Behavioural Processes
Author Address: Department of Psychology, The University of Texas at Austin, 108 E. Dean Keeton Stop A8000, Austin, TX 78712, USA.