The integration of artificial intelligence (AI) into our daily lives has been revolutionary and far more rapid than anyone could have predicted, especially in recent months with the launch of ChatGPT and similar applications. AI has streamlined, automated, and begun to reshape numerous sectors of society. But this rapid technological advancement has many experts sounding the alarm about the unknown and unforeseeable consequences of AI’s broad adoption and its potential impact on marginalized communities, including the LGBTQ+ community.
Technological advancements such as AI have played a key role in enhancing visibility, building community and promoting inclusivity for the LGBTQ+ community by expanding accessibility and connectivity. For example, social platforms powered by AI algorithms have enabled LGBTQ+ individuals to connect, share experiences and establish supportive networks on a global scale. Further, AI can harness machine learning techniques to analyze massive datasets, potentially addressing challenges the LGBTQ+ community has faced for decades. On a more human level, AI-powered personal assistants can reduce feelings of alienation simply by recognizing and respecting all genders. But alongside everything AI makes possible, we must also recognize the potential harms it could inflict on the LGBTQ+ community.
AI systems are only as unbiased as the data they are trained on. If the data used to train AI algorithms is biased or discriminatory, the resulting systems can perpetuate and amplify existing prejudices against the LGBTQ+ community. For example, facial recognition technology has shown higher error rates for gender-nonconforming individuals and people of color, leading to potential misidentification and discrimination. Similarly, AI algorithms can inadvertently exclude LGBTQ+ individuals if they are not adequately represented in the training data. This lack of representation can result in limited access to resources, services and opportunities. For instance, AI-powered job recruiting platforms may use biased algorithms that discriminate against LGBTQ+ applicants, perpetuating inequality in employment. AI-powered recommendation algorithms on social platforms can also create online echo chambers: LGBTQ+ users may find themselves predominantly exposed to content from similar perspectives, which can limit their understanding of broader societal contexts. This isolation can exacerbate marginalization rather than ameliorate it.
Conversely, and even more alarmingly, the same is true for those outside the LGBTQ+ community, who might find themselves in an anti-LGBTQ+ echo chamber that perpetuates the worst myths and misconceptions about LGBTQ+ individuals without context or balance. And even when we can train AI systems on vast amounts of personal data about the LGBTQ+ community, doing so raises concerns about privacy and security, especially for LGBTQ+ individuals who may face threats or discrimination. If this sensitive data falls into the wrong hands, is misused or, worse, weaponized against the community, it could harm LGBTQ+ individuals or entire segments of the LGBTQ+ community.
Undoubtedly, these issues around AI and marginalized communities like the LGBTQ+ community are serious and deserve urgent attention. Solutions are being proposed and implemented to mitigate these risks. AI ethics researchers and engineers are working to increase transparency and accountability in AI systems, using techniques such as explainable AI (XAI) and independent auditing. Activists are pushing for clearer regulations on how AI is used, especially around privacy and data use.
Most importantly, inclusivity must be a priority in the continued development of AI. This means not only considering LGBTQ+ identities when designing AI systems but also increasing diversity among the engineers and developers who build them. The involvement of diverse voices can provide more holistic perspectives, leading to more equitable and fair AI systems.
Ultimately, the impact of AI on the LGBTQ+ community is profound and multifaceted. On the one hand, AI has the potential to promote connection, inclusivity and understanding. On the other, it can perpetuate biases, invade privacy, contribute to marginalization and cause harm if not appropriately managed.
AI’s responsible and thoughtful use can contribute to a more inclusive society where the LGBTQ+ community is recognized, respected and empowered. By addressing the potential pitfalls and maximizing the benefits of AI now while there is still time, we can strive for a future where technology catalyzes positive change and social progress.
Chris Wood is the executive director of LGBT Tech.
This story is made possible with support from Comcast Corporation.