Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran

This paper [1] examines the naivety of current AI algorithms, which largely reflect Western assumptions. Sambasivan et al. lay the groundwork for de-westernising the design of algorithms, their methods, and their values. Fairness research in the West focuses mainly on gender and race; the authors instead compile a long list of sub-groups along which Indian society discriminates and which ultimately affect the fairness of AI systems, including names, place of residence, skin colour, and language spoken, summarised well in Table 1. Sambasivan et al. point out where data about significant sections of the population can be missed or recorded incorrectly, and present findings on how privileged communities are over-represented in the data that is collected and analysed, also touching on the repercussions of the reservation system. They further discuss cultural insensitivity that leads to de-humanisation. An interesting observation is that even when Indians are involved in designing algorithms, prejudice persists, since they typically hail from privileged sections of society and are not representative of most of the population. The authors propose Recontextualising, Empowering and Enabling, three contingent pathways that can contribute to making AI practices fair in the country: pathways intended to prevent partial practices, make data-driven work representative of its context, and make the process transparent. They also describe the involvement of different stakeholders (AI scientists, the government and NGOs) who will have a role in bringing about this change in practices. Throughout the paper, they cite examples from different cases to illustrate and reinforce their points.

The research method used in this study is a synthesis of expert interviews and discourse analysis. The sampling method was purposeful sampling through personal contacts, and informant quotes add value to the findings section. This primary research is supported by a secondary review of algorithmic and social policy documents, news publications and other media. Although the study's focus is India, the experts recruited came from various domains and were situated in India, Southeast Asia, Europe and the USA, which may have introduced a Western bias into the results. Relying on news and mass media publications is also questionable, since such sources are easily shaped to show what the public wants to see or what the government wants the public to believe, so the resulting claims may not be factually accurate. There is no comment on the selection criteria or the publication years of the sources chosen. Dr B.R. Ambedkar's insights on Indian societal structures are cited, but these may not reflect the situation today, decades after his death.

The authors state that they adopted three lenses from HCI, namely feminist, decolonial and anti-caste, to analyse the data collected as part of their research. This choice is justified, as together these lenses offer a holistic view of the many ways in which Indian society is divided. However, the lenses are not mentioned again in any later section of the paper, which makes it difficult to understand how they were actually applied and hard to comment on, given possible differences in perspective.

The last line of the conclusion states that these considerations can be scaled beyond India to a global level. This is inconsistent with the authors' own claims: they stress repeatedly the need to contextualise AI practices, and a global, one-size-fits-all approach would defeat that purpose. The scope of the study should therefore be limited to the country in which it was conducted; similar research methods can, however, be replicated to observe situations elsewhere.

This paper is novel in its thorough approach to the domain, and its scope should be extended not just within academia but into the government sector as well. Other papers have established the need for laws in this area, argued that the government sector should take action, and recommended that people engaged in HCAI research and implementation use politically correct terminology when dealing with users. There should therefore be an extension that takes academic reviews like this one, together with the proposed recommendations, and carries them out into the world beyond academia. Until the recommendations are tried and tested, their validity cannot be confirmed. Such an extension would require close work with the government, since all of the recommendations rest on assumptions about the law. Most people in law-making and related sectors come from other fields and have no HCAI knowledge; involving them would introduce them to these emerging areas, which is necessary for any measurable change.

References

  1. Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. 2021. Re-imagining Algorithmic Fairness in India and Beyond. arXiv preprint arXiv:2101.09995.