Bias as a Distinct Factor in Human Ratings of Machine Labeling

This project argues that human assessments of machine labeling can reveal bias as a distinct measure, separate from other perceptions of label quality. Human subjects were asked to assess the quality of automatically generated labels for a trained topic model. Quality assessments were gathered using 15 distinct self-report questions. Exploratory factor analysis identified a distinct "bias" factor. This finding is likely relevant to a wide variety of machine labeling tasks.
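
To make the analysis step concrete, the sketch below shows how an exploratory factor analysis over ratings on 15 questions can surface a separate factor. It is a minimal illustration, not the study's actual pipeline: the data are simulated, the three-factor solution and the question names are assumptions, and scikit-learn's FactorAnalysis stands in for whatever EFA tooling the project used.

```python
# Illustrative sketch (not the authors' analysis): exploratory factor analysis
# over per-label ratings on 15 self-report questions, checking whether some
# questions load together on a factor distinct from general label quality.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical data: rows = individual rating events, columns = the 15
# questions; real data would come from the study's survey responses.
n_ratings, n_questions = 300, 15
ratings = pd.DataFrame(
    rng.integers(1, 8, size=(n_ratings, n_questions)),  # e.g. 1-7 Likert items
    columns=[f"q{i + 1}" for i in range(n_questions)],
)

# Fit a small number of factors; the paper's factor count is not stated here,
# so three factors is purely an assumption for illustration.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(ratings)

# Loadings: one row per factor, one column per question. Questions that load
# highly on the same factor are interpreted as measuring a shared construct
# (e.g., a "bias" factor separate from other quality perceptions).
loadings = pd.DataFrame(fa.components_, columns=ratings.columns)
print(loadings.round(2))
```

In a real analysis, one would inspect which of the 15 questions load on which factor; a cluster of bias-related questions loading on their own factor is what would support treating bias as a distinct measure.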
