Digital dehumanization: How technology can weaponize language against immigrant communities
By Arabelle Park, Asian Americans Advancing Justice — AAJC Technology, Telecommunications and Media Policy Intern (Summer 2025)
Following Zohran Mamdani’s June 2025 primary win, Islamophobic and anti-immigrant attacks spread rapidly on social media. Similar rhetoric has followed other political figures, with vicious anti-Indian attacks against Kamala Harris, Usha Vance, and Vivek Ramaswamy rejecting their South Asian identities and stories. Phrases like “go back to your country” litter the comment sections of posts, hammering in the idea that immigrants will never be seen as American.
Although language has long been a tool of harm, technology has reshaped and intensified its impact, particularly in online spaces. Social media platforms that reward clicks and polarization over truth contribute to dehumanizing communities, reinforcing stereotypes, and normalizing violence. To confront this, we must recognize language as ammunition and technology as the weapon that fires it.
Harmful Historical Language Patterns
Language has historically been used to dehumanize and exclude immigrant communities. Asian Americans have been particularly targeted by this kind of vitriol. In the late 19th century, anti-Chinese and xenophobic sentiment, called “yellow peril,” fueled the Chinese Exclusion Act, the first-ever legislation to restrict immigration solely based on race. American colonial rule in the Philippines at the time brought similar racism, as U.S. officials described Filipinos as “unclean,” “uncivilized,” and a threat to American civilization.
Responses to the COVID-19 pandemic resurfaced “yellow peril” sentiment. Anti-Chinese and anti-Asian hate — fueled by the internet, politicians, and the media — spread rapidly. Those who blamed and scapegoated Asian Americans for the pandemic launched online, verbal, and physical attacks, comparing them to animals and telling them to “go back to where they came from.” The return of these old tropes underscores technology’s ability to intensify and amplify historical stereotypes.
Conversely, language has also been weaponized to minimize harm. The World War II incarceration of Japanese Americans is often referred to under the euphemism of “internment,” which inaccurately implies that action was taken solely against non-citizens. The government also used the term “non-alien” to describe Japanese American citizens, erasing their citizenship and dehumanizing their identity. This framing is still echoed in modern references, continuing to minimize and overlook harms.
Contemporary terms like “chain migration,” “anchor baby,” and “illegal alien” renew these patterns, building on the historical dehumanization of immigrant communities. “Chain migration” simplifies and misrepresents family reunification, implying abuse of the most common form of immigration. “Anchor baby” — first used to describe Vietnamese American refugees — insinuates that immigrant parents use their children as pawns to achieve citizenship. Many of these terms first circulated in online white supremacist spaces, where white nationalists seek to preserve America as a white-majority nation. Headlines and politicians have since popularized and mainstreamed them, and they gain traction through repetition online.
The word “alien” is central to the U.S. immigration legal framework and to laws like the Alien Enemies Act of 1798 — recently invoked by the Trump administration. The term appears in the Chinese Exclusion Act and in Executive Order 9066, which authorized the incarceration of Japanese Americans. Courts and officials have described immigration as a “silent invasion of illegal aliens,” echoing harmful historical narratives baked into modern institutions.
This language doesn’t just reflect bias; it creates it. Studies show that “illegal alien” evokes greater prejudice against immigrants by heightening “perceptions of threat.” Words that otherize people, and words that erase harm, enable the repetition of past wrongs. In a digital age where words spread faster than we can understand them, hateful language often spreads unchecked. What results is fear, disinformation, and ultimately violence.
Technological Amplification
Algorithms can amplify harm
Social media algorithms can enable inflammatory content, including dehumanizing language and historical stereotypes, to spread rapidly. Algorithms designed to reward user engagement tend to amplify polarizing and headline-grabbing content, creating echo chambers that affirm users’ viewpoints. A U.N. expert warned that social media dynamics are “fueling hate speech in warzones” and endangering civilians.
Racialized misinformation is of particular concern, with studies showing that disinformation campaigns are used to promote ideologies like white supremacy. During the COVID-19 pandemic, disinformation spread by the media and politicians on social media resulted not only in violence but also in public health risks. Disinformation about immigrants in Springfield, Ohio, spread rapidly, incited panic, and triggered bomb threats and severe harassment directed at the town’s Haitian residents. These episodes demonstrate how algorithms can reinforce individual biases and compound them into systemic harm.
Weakened content moderation enables harm
Recent rollbacks of content moderation and hate speech policies have enabled worsening online hostility. In January, Meta loosened key rules on topics like immigration and identity, removing certain restrictions on speech that calls for exclusion or uses insulting language. Notably, it removed a ban on targeting people or groups “with claims that they have or spread the novel coronavirus.” According to a June 2025 Ultraviolet report, 75% of LGBTQ respondents, 76% of women, and 78% of people of color say they feel less protected from harmful content since these changes.
Studies have found that in the months following Elon Musk’s takeover of X and the reduction of its trust and safety team, hate speech rose by about 50%. YouTube followed its counterparts, raising the share of potentially prohibited content a video may contain before it warrants removal from one-fourth to one-half. These changes have contributed to a social world where hate can spread faster, louder, and with fewer consequences.
Online rhetoric links to real-world violence
The role of extremist internet content in creating real-world hatred and violence cannot be overlooked. The digital world provides a space to foster extremist ideologies and intensify hate, often without consequence. Centuries of dehumanizing and degrading Asian Americans and other communities of color have normalized online venom with the potential to translate into real-world harm.
Studies show high rates of online harassment by race, with 30% of Hispanic, 27% of African American, and 20% of Asian American respondents targeted because of membership in a protected class. In 2020, President Trump’s use of the term “China virus” in a social media post was followed by a spike in anti-Asian tweets and attacks. The gunman in a 2019 shooting in El Paso, Texas, cited online rhetoric like “invasion” and the “great replacement” and took inspiration from another mass shooter’s manifesto. These tragedies make it clear that hateful language originating on digital platforms can manifest in devastating ways offline.
Unlike in past eras, harmful language and ideas can now spread online like a wildfire we are unable to contain. Without robust content moderation and meaningful investment in understanding how hate speech disseminates in online spaces, this cycle of hate and dehumanization will continue to endanger impacted communities.
Strategies and solutions
Technology Accountability
Online platforms’ content moderation policies should recognize the right to free expression while also acknowledging that unchecked violent rhetoric can cause real-world harm. By examining how their algorithms amplify such content, platforms can take responsibility for their role. Instead of rolling back standards, tech companies should reprioritize community safety and social cohesion. This means not only reacting to harms but also learning from historical patterns and proactively uplifting dialogue that examines and addresses them.
Language Justice Movements
Language justice movements are helping address harmful rhetoric. Responsible language campaigns like “Words Matter” and “Drop the I-word” have pushed media outlets to discontinue dehumanizing phrases like “illegal immigrant.” These efforts have also reached the government. States like California, Colorado, and New York have removed “alien” from their labor codes, and in 2021, the Biden Administration sought to replace “alien” with “non-citizen” in legislation to correct harmful language embedded in legal statutes. Groups like the Japanese American Citizens League and Densho have led efforts to correct the language of historical harms, insisting on “incarceration” over “internment” and “Japanese American” instead of just “Japanese,” to address past injustices.
Individual and Media Responsibility
Individuals and the media alike have a responsibility to reject hate speech and recognize the power of language in shaping public perceptions. We must examine our own language practices, unpacking the weight and harmful potential of our words. This means not only rejecting hate in every form but rejecting its normalization everywhere — in conversations with family and friends, and online. By speaking up, we can reframe narratives and uplift voices that resist perpetual-foreigner myths and xenophobia. Supporting platforms that take serious and responsible approaches to community safety, and avoiding those that don’t, is another step toward a safer public discourse for all.
Conclusion
The story of America is one of resilience, but also of repetition. Harmful words from the past reemerge carrying the same menace they always did. Hate now flows through the algorithms, conversations, and trends of a digital world. If we do not interrogate the language we use and the technology that amplifies it, we risk allowing the darkest parts of history to repeat themselves on an amplified, accelerated scale.
Part of the American story tells us that change is possible. As individuals speak up, communities organize, and platforms stand for safety, we can rewrite the narrative. We have the opportunity to reclaim language for justice instead of ceding it to weaponization.
