shane skiles - github blog

ai context collapse

I know I’m a little late to the show in saying “AI is here!” but here we are.

I always thought AI would stay in more of a machine learning and data science niche, something bubbling up search recommendations or quietly optimizing the minutiae of rocket science. But instead, it has slowly worked its way into our everyday lives. It has gone from being a passer-by offering advice in the background of some movie to a recurring character in our daily lives and conversations.

No longer just a voice in the background or a hidden algorithm, AI is now actively participating. We talk to it, ask it questions, request suggestions, have it brainstorm ideas, even generate finished products from prompts. It’s moved from being a tool we occasionally interact with, a passive utility, to an active presence.

It’s in classrooms, in courtrooms, in our pockets, and on our screens. It has become a collaborator, a co-creator, and in some cases, a filter through which we experience the world.

When the internet started gaining popularity in the 1990s, it helped facilitate communication between individuals, whether colleagues, family, friends, or strangers. This ability to collaborate, exchange ideas, and engage in virtual communities revolutionized the way we connected, learned, and worked.

This sometimes led to what we now call “context collapse”: a blurring or breakdown of the lines differentiating social contexts. Conversations, audiences, and expectations from one context inappropriately cross into another, often with unintended consequences.

AI is doing something different, and perhaps more unsettling. While the internet amplified human voices, AI is creating its own voices, its own content, often indistinguishable from human output. It’s not just that it’s hard to tell who or what is speaking. It’s that the nature of information, creativity, and even conversation is changing.

What we’re seeing now isn’t just a collapse of context; it’s a reshaping of context and content itself. The lines between original and generated, real and synthetic, are blurring fast.

What are the consequences of blurring the lines between people and AI?

It’s not just that AI can generate content by absorbing and remixing vast amounts of human-created material; it also works the other way. People are integrating AI into their own creative processes, using it to conceive, generate, and refine their own works.

We’re losing the metadata that used to help us understand the why, how, and who behind the things we experience.

Do we need to re-learn and re-teach media literacy from the ground up? Do we need new tools, new forms of digital literacy, to understand the origins and intentions of content? Or do we have to accept that the lines between human and machine have fundamentally blurred, and that this is the new normal?

I started writing this to explore the implications of an AI feedback loop in our daily lives: our creative endeavors become more AI-centric, which in turn feeds the AI. Somewhere along the way I got lost. I do plan on exploring this idea and others later. I’m not sure where this will go, but here we are.


© 2026 Shane Skiles