PPPPunctuation, Platforms & Power - Four Ps 281
What it Means to Show Up the Right Way When Trust is at an All-Time Low
Nobody trusts anything anymore.
And somehow, that’s the most important marketing insight of 2026.
Not a trend. Not a cycle. A structural shift in how people read, shop, scroll, and decide.
The signals are everywhere if you’re paying attention: in the content that flops, the platforms people are abandoning, the AI tools rewriting how we work, and the political fights breaking out in real time over who gets to control all of it.
Trust is gone, and everyone is scrambling to figure out what comes next.
This one’s got a lot going on. Stay with it.
THE PERSONAL: Punctuation... and What It Says About Me (And Yours About You)
I’ve been thinking a lot about punctuation lately. Which, if you knew me five years ago, would probably make you laugh. Because five years ago, I was the guy who ended every email with three exclamation points.
Every Slack message felt incomplete without at least one. I was enthusiastic! I was engaged! I was apparently terrifying!
Then someone called me out. Not cruelly, but clearly. And I started paying attention.
Turns out there’s actual science behind this. Research suggests exclamation points can make you appear meaningfully warmer and more positive, but they come with a trade-off: a modest but real dip in how analytical or authoritative you’re perceived to be. We associate high emotion with lower assertiveness. Not fair, but also... not wrong?
So I dialed back. I read the room. I started saving the exclamation points for moments that actually earned them.
And then AI showed up. And everything got weird.
AI didn’t just change how we write. It changed how we think about how we write. Because now, every time I put a finger on a stylistic quirk, I have to ask myself, “Is this me? Or is this what a language model would generate if it were trying to sound like a friendly professional?”
Like... I used to love em dashes -- used them constantly. Then I started noticing AI used them constantly. So I stopped. Almost entirely. If you see one in my writing now, it’s deliberate. It earned its place.
I’ve apparently overcompensated with ellipses though... they’ve become my new emotional support punctuation... which probably says something about me I’m not ready to unpack...
That’s the bizarre feedback loop we’re all living in. AI learned to write like us. Then we noticed. Then we started writing differently to prove we’re still us. The machines didn’t just reflect our habits, they made us self-conscious about them. Which is either the most human outcome imaginable, or the most unsettling one.
Punctuation has always been a signal.
Warmth. Authority. Confidence. Chaos.
But in 2026, it’s also become a kind of fingerprint. Proof of personhood.
Or not. Meanwhile, our kids don’t know how to use it. At all. They text and voice-dictate, and punctuation has gone from essential to optional to ignored.
As for me, yes, I use exclamation points more carefully now. But when I use one, I mean it.
And that’s going to have to be enough!
THE PROFESSIONAL: You Can’t Buy Your Way Out of a Trust Problem
Let’s be honest about where we are.
Social media is broken. Not technically. Not functionally. Broken in the way that actually matters:
People don’t believe what they see on it anymore. Nearly half of US consumers flat-out distrust the information on social platforms. Most of the rest are somewhere in the “eh, maybe” zone. TikTok, Facebook, X... none of them are scoring above a 21% trust rating right now.
21%
That’s not a platform problem. That’s a signal about the entire environment.
The Paradox Nobody Wants to Say Out Loud
Here’s the part that’s messed up: the better your content looks, the more suspicious people are of it. We’ve reached a point where polish is a red flag. That’s part of what fueled my turn away from em dashes, as I mentioned earlier.
The same goes for images: producing a flawless image, a perfectly structured article, or a slick 30-second video costs nothing today. And people know it. So when they see it, their first instinct isn’t “wow, great brand.” It’s “bot.”
We’ve entered what I’d call the Zero-Trust Web. And if your marketing strategy was built around scale, automation, and aesthetics, you’re going to feel this one.
78% of people say it’s getting incredibly hard to tell what’s real from what’s synthetic. Three-quarters of Americans trust the internet less than they ever have. Less. Than. Ever.
That’s not a trend you optimize around. That’s a structural shift.
The Wrong Place Problem
It’s not just what you say. It’s where you say it.
Research is unambiguous on this: Brands that show up next to unsafe, toxic, or objectionable content don’t just waste money. They take real, measurable hits to trust and brand opinion. Drops of almost 20%. And the most underappreciated part? People who never even saw the bad placement, who just heard about it, showed nearly the same decline.
The internet has a long memory and a short fuse.
The cheap CPM isn’t cheap. It’s deferred debt. And the interest compounds fast.
So What Do You Actually Do?
You stop outsourcing your brand’s reputation to an algorithm and start being intentional about the environments where your brand lives.
That means showing up in spaces where trust is already present. Publisher environments. Brand-owned destinations. Community-driven platforms where real people are having real conversations and the content around your message has been vetted, curated, and earned.
It also means rethinking what “content” even means right now.
The most valuable thing you can produce in 2026 isn’t polished. It’s verifiable. Real customer voices. Unscripted formats. Actual results, not projected ones. The brands winning today are the ones leaning into radical transparency instead of theatrical perfection.
And then there’s the engagement layer. This is where I’ll put a gentle plug in for the work we’re doing at Genuin, because it’s genuinely relevant here. We’re building infrastructure for brand-owned destinations, places where brands control the context, the community, and the content environment.
Not a feed you rent space on. Not an algorithm that decides your fate every six months. Owned. Moderated. Trusted. That’s not just a product pitch. That’s a response to exactly what the data above is screaming.
Trust me...
We’re now competing on two fronts simultaneously: convincing real humans and satisfying AI decision engines. Neither one rewards inauthenticity anymore.
The winners aren’t going to be the ones with the biggest budgets or the most automated pipelines. They’re going to be the ones who figured out that trust isn’t a campaign. It’s an infrastructure decision.
Build accordingly.
THE PRACTICAL: How I Actually Use AI (2026 Edition)
Let me tell you what my AI stack actually looks like right now, because I think it’s more useful than another think piece about what AI “could” do.
First, I use Granola for meeting transcription and notes, including personal stuff now, not just work calls.
I use Claude CoWork for professional orchestration, the kind of behind-the-scenes task management that used to eat hours.
I also lean on Claude and/or ChatGPT to proofread and simplify my emails.
I use video tools like Opus and Genuin to clip, caption, categorize, and curate content for my TWO podcasts.
And for image generation, I’m bouncing between Gemini (Nano Banana) and MidJourney and a few others depending on the output I need.
That’s it. That’s the stack. No magic. No AGI. Just tools that removed friction from things I was already doing.
But when considering ALL of the options out there, whether as an end user, a professional team builder, or an AI-led product builder, I stick to the following framework:
Assist. Audit. Automate. Autonomize.
Those four words cover almost every legitimate AI use case I’ve seen work in practice. Most people are still in the first two stages. The third and fourth are where it gets interesting, and honestly, where most organizations aren’t ready to go yet.
Now zoom out.
ChatGPT just crossed 900 million weekly active users. That’s not a product anymore. That’s societal. And OpenAI is already running ads inside responses, with retail and grocery brands leading the way. That’s not a test. That’s a business model arriving in real time.
Meanwhile, Walmart’s AI shopping assistant is driving 35% higher order values for users who engage with it. Not because it’s magic, but because it understands intent. It builds full baskets around moments. A birthday party. A dinner. A game day. It connects digital engagement to same-day physical fulfillment. That’s not artificial intelligence. That’s applied relevance at scale.
Shoppers want virtual try-on. They want AI assistants to remove guesswork. They want automatic reordering for the things they already buy. The demand is there. The sensory gap of online shopping is real, and AI is closing it.
Amazon, on the other hand, just spooked its investors by pledging $200 billion in AI infrastructure this year, pushing free cash flow negative. Wall Street blinked. The stock dropped 12% in February.
The lesson isn’t that AI investment is wrong. It’s that the market is starting to distinguish between AI that works and AI that’s a press release.
The ones who figure out the former are going to win. The ones still chasing the latter are going to have a rough couple of earnings calls.
Use it intentionally. Know why you’re using it.
The era of “we’re exploring AI” as a strategy is over.
THE POLITICAL: I Use Claude. And I Have a Take.
As I mentioned above: I use Claude every day. Premium. Highest payment and feature tier.
It’s in my workflow, my writing process, my professional life.
So when the Pentagon designated Anthropic a “supply chain risk to national security,” I had thoughts.
Here’s mine, plainly: Anthropic was right.
The company refused two specific things: enabling mass surveillance of US citizens and fully autonomous weapons. That was the line. The Pentagon, which prefers broad “all lawful purposes” contract language, didn’t want explicit carve-outs. Anthropic said no. The government retaliated with a designation historically reserved for foreign adversaries like Huawei.
And then something remarkable happened.
Claude shot to number one in the App Store. Daily signups broke records every day that week. Free users climbed more than 60% since January. People chalked “GOD LOVES ANTHROPIC” on sidewalks outside their San Francisco headquarters.
The Streisand Effect, but make it AI ethics.
OpenAI, sensing an opening, signed the Pentagon contract under the standard federal language. Sam Altman said he doesn’t believe unelected corporate leaders should supersede democratically elected officials. It’s a reasonable-sounding argument. It’s also convenient timing.
Dario Amodei called OpenAI’s framing “straight up lies.” The vibe between these two companies is... not great.
Here’s what I actually think is happening beneath the headline drama: We’re watching the first real public fight over who controls AI. Not abstractly. Not in a Senate hearing nobody watched. In real time, with real consequences, with real principles at stake. And the government got off on extraordinarily bad footing, to put it generously.
Voters are divided on AI broadly, 48% favorable, 46% unfavorable, but the strongest predictor of how someone feels about it is whether they use it. Daily users are favorable by 57 points. People who rarely touch it are unfavorable by 42 points. That gap is going to define a lot of policy fights over the next two years.
Because the economic anxiety is real.
46% of voters think AI will hurt the American economy. 54% think it will increase unemployment. 65% think it will make billionaires richer while only 27% think everyday people benefit.
Those numbers don’t go away because the technology is impressive. They go away because someone makes a compelling, honest case that the benefits are distributed broadly.
Nobody is making that case right now with any conviction.
What Anthropic did, whatever you think of the specifics, was at least legible. They had a line. They held it. They paid for it... and then got rewarded for it by the public, which tells you something about where trust actually lives right now.
I’m not naive about corporate motives. But I know what it looks like when a company decides its values are non-negotiable, and I know what it looks like when they aren’t.
This one was pretty clear.
*AI Disclosure: 100% of this was written by me, a human, with only light edits and typo corrections from Grammarly. The images are mostly AI-generated using Nano Banana.