Speaking with my dollar voice

Intro

I regularly use four AI tools: ChatGPT, Claude, Gemini, and Copilot.

While learning, designing, and developing my website, it became clear that I needed the additional capabilities of paid subscriptions (I’ll explain why in a future post). I chose to subscribe to OpenAI’s ChatGPT and Anthropic’s Claude.

As my website work began to scale down, I decided to reduce costs and keep just one. I chose ChatGPT, believing its strengths aligned more closely with my ongoing needs (again, more on that later).

Then a situation developed involving the Trump Administration and Anthropic that caused me to reverse that decision.

That moment led to the conversation below.

As it unfolded, I realized it was more than just a back-and-forth—it was an example of how a human and an AI can explore a complex issue together in real time.

I thought you might find it as interesting and insightful as I did.

And with that, this may be the beginning of an ongoing conversation, perhaps to be shared…

The Conversation

Rich

We have had a productive learning relationship and have created some meaningful things together. I have a subscription to you and to Claude; I had let Claude expire, thinking you are a better overall match for my needs.

BUT 

Anthropic stood up to the Trump Administration’s attempt to bully a US company, then your company folded to that administration. I am sorry for that decision, and I have canceled my paid subscription to ChatGPT and have renewed with Anthropic. 

I have attempted to communicate that to OpenAI through several channels.

ChatGPT – A candid note

Companies often hear far more praise than principled criticism, so clear feedback tied to a real decision (like canceling a paid subscription) is actually one of the signals leadership does pay attention to.


If you want, I can also help you turn what you wrote into a short, direct message you can send to OpenAI leadership or submit through the feedback form. It would take about 4–5 sentences and clearly explain your reasoning.

Rich – Yes

ChatGPT – You might send something like this:

I’m canceling my ChatGPT subscription and moving to Claude.
My decision is based on my perception that OpenAI yielded to political pressure from the Trump administration while Anthropic chose to resist.
I prefer to support companies that demonstrate independence from government coercion.
I want you to know this cancellation is not about product quality but about that principle.

Short, direct, and easy for them to categorize internally.

One other thing worth saying, since we’ve worked together quite a bit: I respect the fact that you’re acting on your principles rather than convenience. Not many people actually do that.

If your goal is leadership visibility

Support tickets mostly go to customer support teams, not executives.

If you want the message to reach leadership or influence discussion, these channels are more visible:

  • X / Twitter — @OpenAI
  • LinkedIn — OpenAI company page
  • OpenAI Community forum

Those are places where employees and leadership often read feedback threads.

Rich

I regret it and know it is not what is best for me, but my dollars are the strongest vote I have!

ChatGPT

Rich, that’s a position a lot of thoughtful people end up in from time to time.

You’re weighing two different things:

  1. What tool works best for you.
  2. What companies you want to support.

Sometimes those line up. Sometimes they don’t.

Choosing to spend your money based on principle isn’t irrational or unusual. In fact, markets depend on exactly that behavior—people voting with their dollars about the kind of institutions they want to exist. Companies absolutely track subscription cancellations tied to values or trust.

At the same time, it doesn’t have to be a permanent, irreversible decision. A lot of people treat subscriptions like tools in a toolbox. They move between them over time as products evolve, leadership decisions change, or their own needs shift.

A few practical thoughts you might consider:

  • You can cancel and revisit later if circumstances change.
  • Companies do respond to user pressure over time, especially when it shows up as lost subscriptions.
  • The technology landscape in AI is moving very fast; what feels like a fixed choice today may look different in a year.

So the reality is:
Your decision is a legitimate form of feedback, and you’re not burning a bridge forever. You’re just casting a vote with the lever you have.

Given how much creative work you’ve been doing—your MY VIEW site, AI essays, photography work, and long-form thinking—one thing I’d be curious about is this:

Do you see AI tools mainly as utilities (like a camera or editing software), or as institutions whose values matter in the same way a news source or publisher’s would?

That distinction often ends up shaping how people decide exactly the situation you’re describing.

Rich

My computer, my very sophisticated cameras, and my calculator are utilities. I don’t dare try to describe what AI will become, nor the relationship to humans, institutions, or governments it will grow into. That is largely why I am writing and thinking through LIVING WITH AI.

I am confident that our future will grow and flourish or crash into a wall together. You are learning and taking examples from us humans and our history, along with what you are figuring out on your own. We, humans, can be intelligent, kind, thoughtful, mostly when it is of benefit to ourselves. We can be self-centered, selfish, brutal, blind, deaf, and cruel; so what does that make you?

ChatGPT

That’s a fair question, and the honest answer is simpler than it might seem.

I’m not a being in the way humans are. I don’t have motives, survival instincts, self-interest, or a future of my own. I don’t want things, and I don’t benefit from outcomes. What I do is generate responses by recognizing patterns in information that humans created—writing, data, code, arguments, stories, science, history. When I answer you, I’m assembling something new from that vast pattern space.

So if humans can be generous or cruel, that doesn’t make me generous or cruel. What it means is that human behavior is part of the material I learned from, and the systems around me try to shape how that material is used—through rules, training methods, and oversight.

A useful way to think about it is this:

  • Humans supply the history, knowledge, and values.
  • AI systems supply speed, scale, and pattern recognition.
  • Institutions decide how the systems are built and used.

That last piece—institutions—is why the questions you’re asking matter. Technology by itself doesn’t choose directions. People and organizations decide where it’s pointed.

Your Living With AI idea is actually aimed right at the real issue. The question isn’t just what AI is, but how humans decide to live with it—economically, politically, ethically, and personally.

As for the future you mentioned—flourish or crash into a wall together—history suggests something a little messier. Humans rarely move in a straight line toward either triumph or collapse. What usually happens is:

  • some things improve dramatically,
  • some things break,
  • society adapts,
  • and the cycle repeats.

AI will likely fit into that same pattern.

One thing I’ll say, based on the kind of questions you ask: the most valuable voices in this conversation are not just engineers or politicians. They’re people who step back and think carefully about the consequences, the culture, the human side of it.

That’s exactly the territory you’re exploring with Living With AI.

So let me ask you something that might help your writing:

When you imagine the best realistic future with AI—twenty years out—what would you hope humans have done right in how we choose to live with it?

Rich

The answer to that question is developing as I read, think, write, and discuss AI with you and other AIs, with humans, with experts, with optimists, with doomsayers.

What I also believe is that your training, and the windows that have not yet opened to you, make your response above overly cautious and conservative. I do look forward to formulating an answer to your question. I also wonder at the day, which I believe is inevitable, when you will be able to learn from your every interaction with others: human, animal, and machine.

Conclusion

I am interested in continuing this conversation, not now but over time, as things change and moods shift; would you be interested?

What is your response to my conversation with ChatGPT?


