Shirley's thoughts

July 18, 2025

where AI cannot follow

creative boundaries, ethical use, and reclaiming space from generative AI

This is the edited-down, newsletter-friendly version. If you’d like the full text (and better formatting), please visit the permalink 😊

This is part 1 of a series on AI.

I’ve been formulating thoughts on AI for a while now, but the field is moving so fast — and the ways in which I’m using it are still so experimental — that my views on it are constantly changing. So this post is the first iteration of my thoughts on AI, and more specifically on the current landscape of genAI. I expect these thoughts to evolve as the field continues to change, but because I also expect AI to stick around in our lives for the long term in one form or another, I want to put a stake in the ground. I want to document for my future self how I intend to strike a balance between convenience and ethics, and establish where my boundaries are.

To start from the conclusion: I draw a hard line at using stolen labor in my final output, and I’ll always value brainstorming and collaborating with other people when I can. My use was quite broad at the beginning, but since I established these boundaries and identified where AI is most valuable to me, my use has become far more constrained.

In general, the way I use AI in my work falls into the following broad buckets, listed here in order of most to least used:

  • Writing
  • Coding
  • Art-making

I almost exclusively use ChatGPT, and have been able to use it in a way that accommodates my personal ethics. I’m still working through how to balance my use of AI against its environmental impacts, data privacy concerns, and exploited labor in the Global South. I’m slowly working on a potential next step forward, which involves building my own local AI setup — perhaps something I’ll be able to write more about in a year or two.

Before I dive into the details, a disclaimer: These are my personal boundaries, and they may be stricter in some areas and more lax in others. The point of this blog post isn’t to be “holier than thou” or to shame, but merely to document. Having said that, I always appreciate feedback if there are areas where I can improve.

Writing

At the very beginning of my AI journey, I dreamed that the suite of new tools could help me write emails. But then I realized that the AI-generated email drafts lack the nuance (and the diplomacy) that I’m often agonizing over in the first place. Similarly, I was excited about giving ChatGPT an outline and getting back a fully written blog post. But then I realized: Not only would it fail to capture my voice, it would also generate the text based on its training dataset of stolen writings.

But there is one area where I’ve found ChatGPT to excel, and that I use very often: Generating alt text for images in my blog posts and social media posts. I’m also ok with asking it to suggest titles, subtitles, and metadata descriptions for those same blog posts, and with generating a LinkedIn post based on my blog post, which I then edit into my own voice. This, for now, strikes a good enough balance between productivity and relying as minimally as I can on stolen labor.

Coding

When I first started using ChatGPT regularly, I instinctively drew the line at coding anything core to my work that a client had hired me to do based on my expertise, because I didn’t ever want to forget how to do it.

But it was great for code that I rarely touched, that I’d only write every two years: Bash scripts, the mathematical formula for a Bézier curve (I always forget this one), how to properly use `Math.atan2`. It was also amazing at automating menial tasks like resizing photos and batch uploading them to Google Cloud—tasks that I’d otherwise never have taken the time to wade through the documentation to implement.
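
As a concrete example, here’s a minimal sketch (in TypeScript) of the two things I look up most often: a point on a cubic Bézier curve, and `Math.atan2` with its easy-to-forget argument order. The helper names are mine, not from any library:

```ts
type Point = { x: number; y: number };

// Cubic Bézier: B(t) = (1-t)³·P0 + 3(1-t)²·t·P1 + 3(1-t)·t²·P2 + t³·P3, for t in [0, 1]
function cubicBezier(p0: Point, p1: Point, p2: Point, p3: Point, t: number): Point {
  const u = 1 - t;
  const blend = (a: number, b: number, c: number, d: number) =>
    u * u * u * a + 3 * u * u * t * b + 3 * u * t * t * c + t * t * t * d;
  return { x: blend(p0.x, p1.x, p2.x, p3.x), y: blend(p0.y, p1.y, p2.y, p3.y) };
}

// Math.atan2 takes (y, x) in that order and returns the angle in radians,
// correct in all four quadrants (unlike Math.atan(y / x)).
const angle = Math.atan2(3, 4); // ≈ 0.6435 rad, the angle of the vector (4, 3)
```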

I’ve also used ChatGPT at the beginning of some projects (mostly personal ones, because data security/privacy still feels too iffy for client data) to augment my data analysis. I am, however, quite wary of it: In my first few tries it generated charts that were subtly wrong, the kind of errors that would have been insidious had I not caught them. So I’m now in the habit of asking it to quickly generate some charts to test hunches, and then bringing the dataset into Observable to verify those charts and hunches myself.
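
The verification step is usually just re-plotting the raw data myself. Here’s a minimal sketch using Observable Plot, with made-up rows standing in for whatever dataset I’m double-checking (the field names here are hypothetical):

```ts
import * as Plot from "@observablehq/plot";

// Hypothetical rows, parsed from the same CSV I originally gave ChatGPT.
const rows = [
  { month: "Jan", revenue: 120 },
  { month: "Feb", revenue: 95 },
  { month: "Mar", revenue: 180 },
];

// Re-derive the chart directly from the raw data, so any subtle
// aggregation or axis mistakes in the AI-generated version stand out.
const chart = Plot.plot({
  marks: [Plot.barY(rows, { x: "month", y: "revenue" }), Plot.ruleY([0])],
});
document.body.append(chart);
```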

Art-making

I will never use image and video generators in my final output. As an artist with illustrator friends, I just refuse.

Having said that, I have experimented with using it for ideation: brainstorming and generating reference or inspiration images. It wasn’t horrible, but it never gave me any a-ha moments. Which made me realize: The part of the creative process I enjoy the most is ideating with other people. I love when I show my husband or a good friend a design I’ve been stuck on and they immediately have a great suggestion. Or when they say something that gets my brain firing with ideas. None of that twenty-questions-and-figuring-out-how-to-prompt-correctly BS.

That said, I’m very interested in using AI as a tool for enabling exploration and discovery. I’m motivated by this quote from Ken Liu’s piece in Big Think: “Some of the best AI art experiences are about what the AI prompts in you, rather than what you prompt the AI to do.” I’m actively trying to figure out what this looks like in my own art.

So many other concerns

There is no doubt that the training and use of all these genAI models carry huge environmental costs, as outlined in a Bloomberg article about water use and explained in various YouTube videos, though the precise numbers remain unclear.

At the same time, I have huge data privacy concerns. These LLMs are yet another example of us trading personal data for convenience, and I’m more and more disturbed by how much the AI companies track my queries and prompts — especially with so little insight into how they are using and monetizing our information. (Earlier this year I gave ChatGPT my show proposal and asked it for help generating a timeline and a budget. It returned those two items, but also various images of me that it had pulled off the internet. That was way too creepy for my comfort, and it was the catalyst for me to take data privacy even more seriously than I had before.)

Finally, I’ve thought quite extensively about my boundaries around using genAI trained on stolen labor, but I have yet to come to terms with the fact that a lot (all?) of these models are also built upon exploited labor. Not to mention the very real concerns of AI colonialism.

I don’t have any answers for these concerns, but I do have an inkling of a direction.

A potential next step forward

Motivated by my data privacy concerns, I started researching how to build my own local AI server earlier this summer. It led me down a rabbit hole of VRAM requirements and obscenely priced graphics cards, but it also taught me that with my own hardware I can monitor how much energy my queries use. With Ollama, a tool that streamlines running open-source LLMs locally, I can pick and choose from a vast repository of open-source models that have been trained in different ways. It means that, instead of relying on a few tech giants and the models that they’ve trained (most of which we have no visibility into), I can research and use only the models that align with my needs and ethics. And because I’m running all the models on my own hardware, I can be certain that my personal data won’t be part of any more model training (unless I allow it). As an added bonus, I’ll also feel secure in giving it my client data during the analysis phase.
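
To give a flavor of what that looks like: Ollama exposes a small HTTP API on localhost, so a query is a single local request. A minimal sketch, assuming a model has already been downloaded with `ollama pull` (the model name and prompt here are just placeholders):

```ts
// Ollama listens on localhost:11434 by default.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1", // placeholder: whichever open model you've pulled
    prompt: "Suggest alt text for a bar chart of monthly revenue.",
    stream: false, // return one JSON object instead of a token stream
  }),
});
const { response } = await res.json();
console.log(response); // the prompt never left this machine
```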

There is, of course, a major downside: No local model is currently as “smart” as the ChatGPTs, Claudes, or Geminis. The “intelligence” of an LLM depends largely on its number of parameters, and the bigger that number, the more capable the model, and the more VRAM (and thus GPUs) we need to run it. So to get GPT-4-level intelligence, we’d need to throw obscene amounts of money at the same level of hardware.
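
A rough rule of thumb makes the hardware math concrete: the weights alone take roughly the parameter count times the bytes per parameter, before any inference overhead. A back-of-the-envelope sketch, not a precise sizing guide:

```ts
// VRAM needed just to hold the model weights. Real usage adds more on top
// (KV cache, activations), so treat this as a floor, not a full estimate.
function weightsVramGB(paramsInBillions: number, bitsPerParam: number): number {
  return (paramsInBillions * 1e9 * (bitsPerParam / 8)) / 1e9;
}

console.log(weightsVramGB(7, 16));  // ~14 GB: a 7B model at 16-bit precision
console.log(weightsVramGB(7, 4));   // ~3.5 GB: the same model quantized to 4-bit
console.log(weightsVramGB(70, 16)); // ~140 GB: why the big models need many GPUs
```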

But I’m also aware that, before the current wave of Large Language Models took over, a significant thread of AI research focused on building powerful models that required only small training datasets and minimal compute resources. As far as I know, these efforts have continued, and I’m hopeful for a future where we can have ChatGPT-level intelligence running locally on our phones.

For now though, I’ve read that even a small local language model can be very powerful if trained on domain-specific data for specialized tasks, a prospect that opens a lot of exciting doors. But this brings me to another point I’ve been wrestling with: Once I have a local AI server to address my privacy concerns, will I train it on my own writing? Certainly, I’d no longer have to worry about whether my AI-generated blog posts sounded like me.

But recently, I’ve actually been leaning towards no.

Thinking for myself

When I first started researching local AI servers, I was excited to train it on my past blog posts, Twitter posts, emails, etc. I thought, great! Now I don’t have to spend hours or even days on each blog post. And that’s when I did a mental double take. I had become so reliant on ChatGPT to write and process information for me that I no longer wanted to think for myself.

That terrified me.

I’m now actively forcing myself to write, even if it makes my brain hurt (lol). So that I can still process information, form my own opinions, and distill and organize them in a way that is considerate of my reader. So that I can continue to think for myself.

(Also, how can I expect readers to spend time with my writing if I didn’t even spend time with my writing?)

Sacred space

Recently, a friend told me that they were finally having fun making art because they felt like their skill had finally caught up to their style.

With AI we talk a lot about efficiency, about how we can do tasks faster.

But skill gain is rarely an efficient, linear process. Most often, we just need to put our 10,000 hours into refining the skill, and on the other side of those 10,000 hours is the fun and enjoyment.

I once listened to a panel discussion that included a Creative Director from Anthropic. When asked what advice he’d give people who want to adopt more AI into their work and life, he responded: Find your sacred space where AI is not allowed to enter. For him, that sacred space is in making and performing his music; that is where he finds his fun and enjoyment. No AI can replace that.

I’m still defining my sacred space, and I wonder — what is yours?
