
Image description: Three animated figures look at a computer with code in the background. Text reads: 'AI can't speak on our personal experience'. Design: Mili Ghosh

Let’s commit to supporting each other - not generative AI

AI cannot fill the human-shaped tear in the fabric of our community, argues Kitty Wasasala. 

We’ve been told time and time again that AI is here to solve our problems and make our lives as disabled folk easier. It can be your personal assistant! Or your boyfriend! Or your dead relative! But like all fancy new tech, it comes with its risks (see: every single episode of Black Mirror).

When we talk about AI in 2025, for the most part, what we’re actually talking about is generative AI. In its simplest terms, traditional AI uses programmed data to perform defined tasks, powering things like customer service chatbots, internet search engines or self-driving vehicles. Generative AI, on the other hand, uses data to create something new-ish, like songs, images, poetry or emails you can’t be bothered writing. It generates this original content from data it has been trained on – and this is where it gets murky for artists. Many artists don’t consent to their mahi being fed to – and consequently plagiarised by – these systems.

Generative AI is also highly damaging to our natural environment, consuming vast amounts of power to generate its content and vast amounts of water to cool its physical data centres. A University of Massachusetts study found that training a single generative AI model – a process that takes mere weeks – can emit as much carbon as five cars over their entire lifetimes.

Ableism, generally defined as discrimination and prejudice against disabled people, is unfortunately already coded into these models. US Paralympian and self-proclaimed AI fan Jessica Smith asked ChatGPT to create an original image of her, but was shocked to discover that the bot simply couldn’t portray her physical disability accurately. Research at Penn State has confirmed that trained AI models do exhibit ableist biases based on the data used to train them. For example, if a football fan angrily tweeted, “Is the referee blind?!”, an AI model would absorb this data as hostility towards blind people, and then go on to model that behaviour.

Systemic ableism – that is, ableism that is ingrained into societal structures – exists at all levels of society, from the halls of Parliament to your local parking lot. Decisions are often made for us without our consultation, and so we remain disadvantaged and disregarded in the kōrero around our own lives.

Our government underfunds, then strips us of, necessary services: Whaikaha Ministry of Disabled People cut respite funding that allowed disability carers their well-deserved rest and alternative care resources; our health system maintains an outdated one-size-fits-all approach in its practices; the general public refuses to mask despite Covid-19 still posing a serious risk to disabled folk; and universities continue to severely limit accessibility for disabled students, with wheelchair-inaccessible classrooms and no capability to record lectures for those who can’t attend in person. If our decision-makers modelled social responsibility, we would be included in these conversations.

Social responsibility is a principle that insists we must all be in service to each other and our environments: that we each have a role to play as members of a functioning society, like recycling or donating to charity because we know we should. As technology evolves, so do our responsibilities to each other and, particularly as global temperatures rise, to our environment. The harms of AI are becoming increasingly obvious, and so we must move forward with care.

The social responsibility of raising awareness of these harms cannot fall on our disabled whānau. I’d argue that disabled people have more reason than most to use generative AI services. Some disabled users report feeling guilty about turning to AI for seemingly simple tasks, but when the services you rely on are stripped of their funding (think full-time care, a specialised medical team, subsidised travel expenses, etc.), it makes sense that you’d use non-human automated systems instead. The AI has to do what you tell it to. It won’t tell you that you’re a burden or refuse to help you. It won’t put you on hold or doubt your experiences.

But AI cannot fill the human-shaped tear in the fabric of our community. The truth is that anything AI can do, we can do for each other – and it’s not an unrealistic ask. The Be My Eyes app, launched in 2015, connects blind and visually impaired users with sighted volunteers around the world to assist with tasks that require vision, like reading mail or identifying clothes. The app currently has more than 850,000 visually impaired users, but boasts over ten times as many sighted volunteers: 8.92 million people waiting to help in over 180 languages. We demonstrably, overwhelmingly, do want to help each other, but we’re forgetting how. Sometimes it seems like our society is becoming increasingly individualistic and self-serving, but we’re still re-learning how to connect as we move through the ongoing Covid-19 pandemic, and so we must give each other grace as we learn together.

I can’t speak for the whole disability community, but as an autistic person I reject the idea that AI is a necessary accessibility tool. Instead of using ChatGPT to help me figure out what people really mean when they use confusing metaphors, I could ask my partner, who would lovingly provide an explanation while tailoring her advice to someone she knows intimately. Google Gemini could create my CV, but I could also go to a free CV-writing workshop at my local library to learn how to do it myself while meeting other people facing similar struggles. Copilot could give me advice on how to tackle homophobia in the workplace, or I could connect with the Elder Queers tīma and learn how they approached it and what the consequences were. For most of AI’s uses, there are solutions that lie within our own community. The Citizens Advice Bureau, for example, has a community directory of over 35,000 local services and organisations that provide low-cost and free services like budgeting, legal advice, counselling and specialised support groups.

Reader, we have survived this far without generative AI, and I promise you we can continue without it. One thing we can do that AI cannot is speak from personal experience. Shared knowledge is a sacred resource, and we must help each other to preserve it. We must stay connected to each other. I’m sure AI could have written a shorter article for me, but here you and I are: two flawed and caring beings on different sides of a device, connected through these lovingly curated kupu. It’s a special feeling, this – and isn’t it worth preserving?
