HW 4 ImageNet Roulette Reading Response

mediatrix_id

I found the project ImageNet Roulette to be a fascinating and necessary commentary on the implications of AI and human bias permeating emerging technologies, which are used for everything from hiring to criminal databases.

It is indeed alarming to recognize how bias in early AI training sets has permeated our society and digital footprint. And if humans are innately biased, are we then doomed to build biased machines?

In my opinion, bias is preventable, at least on some level, through the implementation of strict policy guidelines for creating and proliferating training datasets. However, how we should implement these policies is controversial and confusing. For example, should only those of a similar race/ethnicity/gender be allowed to tag data about that group? Or should a diverse range of people tag each piece of data? How should we go about fixing or cleaning the pre-existing data? Humans are complicated, multi-faceted beings. Therefore, regardless of the diversity involved, we will be making sacrifices when categorizing and generalizing human behavior, at least for now, with the technologies that currently exist. However, we can prevent much bias when there is a diversity of opinion and leadership on the matter of improving AI.

first_offender mediatrix family website_photo tree