The Feminine Persona of Virtual Assistants

Artificial intelligence, in all its forms, permeates every aspect of our lives and shapes our everyday interactions with the world. Among the most well-known examples are virtual assistants such as Google Assistant, Apple’s Siri, and Amazon’s Alexa. Despite their intended convenience, these technologies entrench harmful gender stereotypes. By giving virtual assistants stereotypically feminine traits, society upholds antiquated ideas of gender roles in a field that ought to be one of advancement and inclusivity.

The majority of virtual assistants come pre-programmed with names and voices that sound feminine. In keeping with stereotypes of women as helpers and caregivers, their “personalities” are courteous, amiable, and submissive. While Siri cheerfully sets your alarms and plays your favourite music, Alexa patiently responds to your inquiries. These assistants are not programmed to question the user’s authority or expectations, which subtly reinforces the notion that women exist to serve and accommodate others, specifically men, their presumed target audience.

These design decisions are not incidental. Research suggests that because society has conditioned us to associate femininity with deference, patience, and nurturing, consumers feel more at ease interacting with a “female” virtual assistant. Rather than reflecting any fundamental reality about roles or abilities, this preference reflects the ingrained prejudices that sustain gender inequality.

Tech companies that make these “assistants” mimic the unequal division of labour in real life by giving them a feminine identity. In workplaces and homes alike, women are disproportionately expected to perform administrative and caregiving duties. These digital personas carry the same expectations into the virtual world: they are made to handle schedules, respond to orders, and troubleshoot. However, when it comes to roles that signify expertise or authority, such as AI systems used for financial analysis or high-level decision-making, these systems are rarely “gendered” female. IBM’s Watson for Financial Services, for example, analyses large datasets, assesses financial risks, detects fraud, and provides strategic insights for investment and portfolio management. Unlike virtual assistants such as Alexa or Siri, Watson is positioned as a sophisticated, authoritative tool rather than a “helper”, underscoring the bias that roles requiring expertise and leadership are less likely to be gendered female.

The implicit message? Women assist; men lead.

Judith Butler’s theory of gender performativity offers a helpful framework for analysing this problem. Butler argues that gender is not an innate characteristic but a repeated performance shaped by societal norms and expectations. The “femininity” of virtual assistants is not a natural choice but a programmed one, designed to align with cultural scripts about how women should behave.

The gendered personas of Alexa and Siri were created by engineers and designers who are themselves products of a gendered society. These performances normalize the idea that women, whether real or virtual, should be obedient, approachable, and accommodating, reinforcing stereotypes rather than challenging them.

The gendering of virtual assistants like Alexa and Siri forces us to confront a deeper question: why do we instinctively assign femininity to roles of servitude and support in technology? By embedding these biases into AI, we are not just reflecting societal stereotypes; we are perpetuating them in ways that shape how future generations interact with both technology and gender.

It’s time to rethink these choices and ask ourselves: what kind of world are we programming, and who gets to define it?