Hello guys! I'm an AI Content Moderator and I experiment with models for fun. Basically, I sent my picture to Gemini and asked it to make me look Indian. I'm Indian btw. The model changed my outfit, features, and background completely to the extent that I was... astounded by the blatant stereotyping.
Gemini adopted a tourist-like gaze, which undercuts the progress it's been trying to make toward feeling native to people from all linguistic backgrounds.
I understand why this is happening. These biases are implicitly present in the training data being used to update these models.
What I've been wondering about is... how does one fix it? What can be done to avoid this?
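For context, one mitigation I've read about is reweighting (or resampling) the training data so underrepresented groups aren't drowned out by the majority. Here's a minimal sketch of inverse-frequency reweighting; the function name and the toy labels are my own, purely illustrative, and real pipelines would need demographic annotations, which come with their own problems:

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Compute per-sample weights that upweight underrepresented groups.

    group_labels: one demographic/group tag per training sample.
    Returns weights (same order) such that every group contributes
    the same total weight, so no group dominates the loss.
    """
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    # Each sample gets total / (n_groups * count_of_its_group),
    # so the summed weight of every group is total / n_groups.
    return [total / (n_groups * counts[g]) for g in group_labels]

# Toy example: 3 samples tagged "A", 1 tagged "B".
weights = inverse_frequency_weights(["A", "A", "A", "B"])
# Each "A" sample: 4 / (2 * 3) ≈ 0.667; the "B" sample: 4 / (2 * 1) = 2.0
```

Reweighting only treats representation imbalance, though; it doesn't touch stereotyped *depictions* within a group, which is closer to what I ran into.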