Assessing biases through visual contexts
Cite as:
hdl:2117/394659
Document type: Article
Defense date: 2023-07
Rights access: Open Access
Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution 4.0 International
Project: SoBigData-PlusPlus - SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics (EC-H2020-871042)
SoBigData RI PPP - SoBigData RI Preparatory Phase Project (EC-HE-101079043)
Abstract
Bias detection in the computer vision field is a necessary task for achieving fair models. These biases are usually due to undesirable correlations present in the data and learned by the model. Although explainability can be a way to gain insight into model behavior, reviewing explanations is not straightforward. This work proposes a methodology for analyzing model biases without using explainability. By doing so, we reduce the potential noise arising from explainability methods and minimize human noise during the analysis of explanations. The proposed methodology combines images from the original distribution with images of potential context biases and analyzes the effect produced on the model's output. For this work, we first presented and released three new datasets generated by diffusion models. Next, we used the proposed methodology to analyze the impact of context on the model's predictions. Finally, we verified the reliability of the proposed methodology and the consistency of its results. We hope this tool will help practitioners detect and mitigate potential biases, allowing them to obtain more reliable models.
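The core idea described in the abstract, comparing a model's predictions on original images against the same images combined with a potential context bias, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stand-in classifier, the blending function, and the impact metric are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def predict(images):
    # Stand-in for a real vision model: a fixed random linear classifier
    # over flattened pixels with a softmax, just to make the sketch runnable.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(images[0].size, 3))  # 3 hypothetical classes
    logits = np.stack([img.ravel() @ w for img in images])
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def add_context(img, context, alpha=0.5):
    # Blend a context image (e.g., a background suspected of carrying
    # a bias) into the original image. Alpha-blending is one simple
    # choice; the paper may combine images differently.
    return (1 - alpha) * img + alpha * context

def context_impact(images, context):
    # Measure how much the injected context shifts the model's output:
    # mean absolute change in predicted class probabilities.
    base = predict(images)
    biased = predict([add_context(img, context) for img in images])
    return float(np.abs(base - biased).mean())

# Toy data standing in for the original distribution and a context bias.
rng = np.random.default_rng(1)
imgs = [rng.random((8, 8)) for _ in range(4)]
ctx = rng.random((8, 8))
print(f"context impact: {context_impact(imgs, ctx):.4f}")
```

A large impact score for a given context would flag that context as a candidate bias worth mitigating; a score near zero suggests the model's predictions do not depend on it.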
Citation: Arias, A. [et al.]. Assessing biases through visual contexts. "Electronics (Switzerland)", July 2023, vol. 12, no. 14, article 3066.
ISSN: 2079-9292
Publisher version: https://www.mdpi.com/2079-9292/12/14/3066
| File | Size | Format |
|---|---|---|
| electronics-12-03066.pdf | 16.53 MB | PDF |