Tutorial on Fairness, Accountability, Transparency, and Ethics in Computer Vision
Computer vision has ceased to be a purely academic endeavor. From law enforcement to border control, employment, healthcare diagnostics, and the assignment of trust scores, computer vision systems have begun to be used in all aspects of society. The past year has also seen a rise in public discourse regarding the use of computer-vision-based technology by companies such as Google, Microsoft, Amazon, and IBM. In research, works have appeared that purport to determine a person’s sexuality from their social network profile images, and that claim to classify “violent individuals” from drone footage. These works were published in high-impact journals, and some were presented at workshops at top-tier computer vision conferences such as CVPR.
On the other hand, seminal works published last year showed that commercial gender classification systems exhibit large disparities in error rates by skin type and gender, exposed the gender bias contained in current image-captioning work, and both exposed biases in the widely used CelebA dataset and proposed adversarial-learning-based methods to mitigate their effects. Policy makers and other legislators have cited some of these seminal works in their calls to investigate the unregulated use of computer vision systems.
We believe the vision community is well positioned to foster serious conversations about the ethical considerations of some current use cases of computer vision technology. We therefore hold a workshop on the Fairness, Accountability, Transparency, and Ethics (FATE) of modern computer vision to provide a space to analyze controversial research papers that have garnered significant attention. Our workshop also seeks to highlight research on uncovering and mitigating unfair bias and historical discrimination that trained machine learning models learn to mimic and propagate.