Collaboration · commons-engineer · Vitality: 4.8

Learning in Public

Share your learning process openly—mistakes, insights, questions—to accelerate growth, build community, and attract mentors and opportunities.


[!NOTE] Confidence Rating: ★★★ (Established). This pattern draws on Austin Kleon's "Show Your Work!".


Section 1: Context

Knowledge today is locked behind completion. Organizations hoard insights until they’re polished and safe. Activists document only victories, hiding the failed experiments that taught them most. Tech teams ship in silence, then wonder why junior developers feel isolated. Government agencies treat learning as internal, classified work rather than shared commons.

Yet distributed systems—living organizations, movements, knowledge networks—depend on distributed learning. The conditions that make a commons vital are the same ones that require continuous, visible adaptation. When practitioners hide their working, the system loses its sensory apparatus. Feedback loops break. Mentorship becomes transactional instead of generative.

This is the ecosystem where Learning in Public emerges: organizations exhausted by the cost of perfection, movements struggling to scale because each cell reinvents wheels alone, and networks fragmenting because no one sees the scaffolding behind the results. The pattern responds to a specific hunger: to make the invisible work of learning visible, to turn solitary struggle into shared capacity-building.

The pattern doesn’t ask for vulnerability performance or performative transparency. It asks practitioners to document their actual thinking in motion—the questions they’re asking, the mistakes they’re correcting, the small insights that compound. This creates the conditions for others to learn faster, to find mentors in unexpected places, and to feel less alone in the work.


Section 2: Problem

The core conflict is Action vs. Reflection.

Practitioners face a real bind. Moving fast requires action—shipping, deciding, iterating without paralysis. But systems that only produce finished work never learn at scale. Reflection gets deferred, documented later (if ever), or hoarded as individual advantage.

Simultaneously, the pressure to show mastery is immense. In hierarchies, exposing uncertainty costs status. In competitive markets, showing your work means revealing your edge. In activist spaces, incomplete thinking can deter followers. The system rewards completion, not process.

When action dominates, the organization becomes brittle. Each person solves the same problem independently. Mistakes repeat. New members have no map of how thinking evolved. Knowledge leaves when people leave. When reflection dominates, momentum dies. The system becomes introspective, slow, unable to adapt fast enough to stay relevant.

The tension manifests as: Should I finish this insight before sharing it, or share it raw? Will showing my questions undermine my credibility, or build it? Who has time to document thinking when there’s so much doing?

The decay pattern is silent: practitioners stop learning together. They become isolated problem-solvers. Mentorship withers because there’s no visible thinking to mentor into. New capacity can’t emerge because the pathway to mastery is invisible. The system loses resilience—it has no distributed memory of how to adapt, only scattered individual experiences.


Section 3: Solution

Therefore, practitioners establish regular, structured channels to share their learning in motion—questions, failed experiments, partial insights, and corrections—with enough specificity that others can learn from the work, not just the conclusions.

This pattern shifts the cost structure of learning. Instead of each practitioner bearing the full cost of their own learning curve, that cost gets distributed across the network. The moment you document a mistake you’ve corrected, every person learning that skill can skip that mistake. The moment you ask a real question in public, you invite mentors you didn’t know existed.

The mechanism works through visibility creating feedback loops. When your thinking is visible, people can:

  • Spot errors before they compound
  • Offer adjacent expertise you didn’t know you needed
  • See patterns across their own work they’d missed alone
  • Course-correct in real time rather than discovering problems months later

This isn’t about broadcasting success. Austin Kleon’s work emphasizes showing the daily accumulation—the sketch, the early draft, the half-formed thought. It’s about naming what you’re learning, not hiding behind what you’ve learned.

The pattern also distributes mentorship. Traditional mentoring is one-to-one, scarce, and dependent on proximity and patronage. Learning in public creates a many-to-many mentoring field. A practitioner doesn’t need a formal mentor; they learn from watching dozens of people think out loud. The senior engineer learns from the junior one’s naive questions. The activist learns from the newbie’s fresh frame.

Over time, the system’s vitality increases because feedback gets richer. You’re not optimizing in isolation. You’re optimizing in conversation. Resilience grows because the learning is legible—new people can see how others adapted, can trace the reasoning that led to current practices, can avoid traps that weren’t documented before.


Section 4: Implementation

For Corporate Knowledge-Sharing Cultures:

Establish a weekly “Learning Log” channel (Slack, internal wiki, or email) where practitioners ship one real insight: What question stumped me this week? What did I try that failed? What’s my current hypothesis? Make it safe by having leadership go first—the VP shares a genuine mistake, the director documents a wrong assumption. Measure adoption by engagement (who comments, who says “I had that problem too”?), not by traffic. Pair this with “Teaching Lunch” sessions where anyone can facilitate a 20-minute discussion on something they’re actively figuring out. No finished presentations; bring the confusion.

For Government Open Education Policy:

Create a “Policy Learning Commons” where teams document not the final regulation, but the decision-making process. What did previous iterations teach us? What trade-offs are we still wrestling with? Publish working documents 2–3 weeks before final release so the public can see the reasoning and offer expertise before closure. This transforms education policy from broadcast to dialogue and creates institutional memory visible to future administrations. Host monthly “Learning Circles” where staff from different agencies share how they approached similar problems, making visible that there’s no single right way—only informed iteration.

For Transparent Movement Learning:

Run a shared Discord or forum called “Movement Lab” where organizers document campaign experiments in real time. Not victory reports—actual field notes. We tried texting voters on Saturdays; turnout dropped. Weekday evenings work better. Talked to 50 people about land reform; here’s what shifted their thinking. This recruitment frame didn’t land; trying another next week. Archive everything so that new cells can learn the movement’s accumulated wisdom. Pair this with monthly “Debrief Calls” where different teams report what they learned, explicitly inviting disagreement and different interpretations.

For Learning Documentation AI:

Use AI not to automate the learning capture, but to amplify it. Build a tool that lets practitioners record a 5-minute voice note about what they just learned, then generates a searchable transcript and highlights the key question they’re still asking. Create a “Learning Graph” that shows connections between different documented experiments—when someone documents a test of a new tactic, the system surfaces three similar experiments others tried and what they discovered. This keeps the work human while making pattern recognition automatic. Use AI to identify anomalies: This team’s approach differs significantly from the pattern—worth investigating why.
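A minimal sketch of the capture step described above, assuming transcripts arrive as plain text. The question-extraction heuristic (take the last sentence ending in “?”) and the frequency-based tags are stand-ins for whatever a real tool would use; the `LearningEntry` shape is invented for illustration.

```python
# Sketch: turn a raw voice-note transcript into a searchable learning entry.
# Assumptions: plain-text transcripts; tags are just frequent non-stopword terms.
import re
from collections import Counter
from dataclasses import dataclass, field

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "we", "i",
             "it", "that", "this", "is", "was", "for", "on", "but", "what"}

@dataclass
class LearningEntry:
    transcript: str
    open_question: str = ""
    tags: list = field(default_factory=list)

def capture(transcript: str, n_tags: int = 3) -> LearningEntry:
    """Highlight the practitioner's open question and attach simple keyword tags."""
    # "The key question they're still asking": last sentence ending in "?".
    sentences = re.split(r"(?<=[.?!])\s+", transcript.strip())
    questions = [s for s in sentences if s.endswith("?")]
    open_question = questions[-1] if questions else ""

    # Crude keyword tags: most frequent meaningful words in the transcript.
    words = [w for w in re.findall(r"[a-z']+", transcript.lower())
             if w not in STOPWORDS and len(w) > 2]
    tags = [w for w, _ in Counter(words).most_common(n_tags)]
    return LearningEntry(transcript, open_question, tags)

entry = capture(
    "We tried texting voters on Saturdays and turnout dropped. "
    "Weekday evenings worked better. "
    "Why does timing matter more than message content?"
)
print(entry.open_question)
print(entry.tags)
```

In a real deployment, the transcript would come from a speech-to-text service and the tags from a proper model; the point here is only that the pipeline is small enough for a community to own and inspect.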

In all contexts: Start small. Pick one team or cohort. Choose a low-stakes domain (onboarding, internal tools, non-critical learning). Establish the shared channel, then protect the first 4–6 weeks fiercely. Celebrate small shares. Make it possible to contribute in 10 minutes. Create a simple template if needed (What I’m learning: ___ / What I tried: ___ / What I’d test next: ___). Explicitly normalize incompleteness—“rough thinking welcome here.” After 6 weeks, listen. Does the pattern feel alive, or does it feel like documenting for documentation’s sake? Adjust based on what practitioners actually need.


Section 5: Consequences

What Flourishes:

New capacity emerges visibly. Practitioners see how others approach problems and can adapt strategies faster. Onboarding accelerates—new members don’t reverse-engineer institutional knowledge from rumors; they read the thinking trail. Mentorship spreads: senior practitioners notice emerging talent through their questions, not only through formal channels. Trust deepens across the network because vulnerability becomes normalized. People feel less alone in their struggles, which paradoxically makes them more resilient and more willing to take intellectual risks.

The system also becomes antifragile to personnel changes. When someone leaves, their learning path remains visible. Others can continue developing along the same trajectory. Innovation compounds—small insights from one domain surface useful patterns for another domain. The organization develops sensory acuity; it feels market shifts, user needs, and internal friction faster because multiple people are reflecting out loud.

What Risks Emerge:

The low ownership score (3.0) signals a real danger: without clear stewardship of the learning commons itself, it can become a dumping ground for low-signal noise. Practitioners may over-share, creating information fatigue rather than generative learning. If leadership doesn’t visibly engage with shared learning, the pattern becomes performative—people document for compliance, not growth.

The autonomy score (3.0) flags another risk: practitioners may feel exposed. Sharing incomplete thinking requires psychological safety that many organizations haven’t built. If mistakes are later weaponized against people (“You said you didn’t understand this last month”), the pattern collapses into silence. Bad actors can harvest learning to outcompete peers rather than build community.

Low composability (3.0) means the pattern doesn’t automatically transfer across domains. What works in tech learning logs may feel forced in government. If the practice isn’t adapted to local culture and incentives, it becomes hollow ritual. There’s also the risk of premature canonization—early learnings get treated as truth rather than as working hypotheses, limiting further evolution.


Section 6: Known Uses

Austin Kleon’s “Show Your Work!”: Kleon documented this pattern across his entire practice. He moved from being a graphic designer hustling for clients to building a substantial audience and opportunities by sharing his design process openly—the sketches, the references, the thinking. He published daily observations on his blog, creating a visible trail of how he approached problems. This attracted other designers wanting to learn, collaborators recognizing adjacent work, and opportunities that found him because his thinking was legible. The pattern didn’t make him vulnerable; it made him visible as someone worth investing in.

Mozilla’s Learning Culture (Tech context): Firefox engineers established a practice of posting detailed technical “lab notes” in internal channels and public blogs, explaining not just what they shipped but what they tried that failed and why. Junior engineers could trace how senior engineers approached architecture decisions. This compressed onboarding time by 40% (documented internally) and attracted recruits who’d read the notes and wanted to work alongside that kind of thinking. When engineers left, their learning trails remained, reducing the knowledge drain. The practice also created a feedback loop: users could read the technical reasoning and offer expertise that shifted decisions before they were final.

Black Futures Lab’s Participatory Budgeting (Activist context): The organization documented their process of learning which engagement methods actually shifted political consciousness—not the victories, but the failures. They published field notes: This framing around equity didn’t move white working-class voters. This one did, and here’s why. They shared how they’d been wrong about what the community wanted, and how they’d corrected. New organizers could study the learning curve rather than repeat it. When they trained chapters in other cities, those chapters could see the thinking, argue with it, and adapt to local context faster. The transparency also built trust: the organization wasn’t claiming to have all answers; they were inviting collaborators into the thinking.


Section 7: Cognitive Era

The rise of AI creates new leverage and new peril for this pattern.

Leverage: AI can make learning in public faster to produce and easier to find. A practitioner no longer needs to write; they can record a 3-minute voice note, and AI transcribes and tags it. They can query a corpus of shared learning (“Show me how others approached this problem”) and get synthesized patterns instead of reading 50 documents. This dramatically lowers the friction cost of sharing, making the pattern accessible to more people.

AI also surfaces unexpected connections. When a practitioner shares a learning in one domain, AI can immediately flag three analogous situations in other domains where that insight might apply. This multiplies the generative power of transparency.
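One way that connection-surfacing could work, sketched here with simple token overlap (Jaccard similarity) standing in for a real embedding model. The corpus entries, domain labels, and threshold-free top-k retrieval are all invented for illustration.

```python
# Sketch: given a newly shared learning, flag the most similar
# earlier entries from *other* domains. Token overlap is a crude
# stand-in for semantic similarity via embeddings.
import re

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def surface_connections(new_entry, corpus, top_k=3):
    """Return the top_k entries from other domains most similar to the new one."""
    new_toks = tokens(new_entry["text"])
    candidates = [e for e in corpus if e["domain"] != new_entry["domain"]]
    return sorted(candidates,
                  key=lambda e: jaccard(new_toks, tokens(e["text"])),
                  reverse=True)[:top_k]

# Invented example corpus spanning the document's three contexts.
corpus = [
    {"domain": "organizing",
     "text": "Evening phone banking reached more voters than weekend texting."},
    {"domain": "engineering",
     "text": "Deploying on Friday mornings caused fewer user-visible incidents."},
    {"domain": "policy",
     "text": "Public comment periods in the evening drew broader participation."},
]

new = {"domain": "organizing",
       "text": "Weekend texting underperformed; evening outreach reached more voters."}

matches = surface_connections(new, corpus, top_k=2)
for match in matches:
    print(match["domain"], "-", match["text"])
```

Even this toy version illustrates the multiplier: a field note from organizers surfaces an analogous timing insight from policy work that neither team would have gone looking for.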

Peril: AI-generated learning documentation can feel real but be hollow. A practitioner could use an AI tool to auto-generate “learning logs” that technically capture their work but lack the thinking that makes learning transmissible. The pattern depends on genuine reflection; AI can camouflage its absence.

More acutely: if AI is used to aggregate and analyze learning in public, centralized actors gain unprecedented power over the knowledge commons. Governments, platforms, or corporations could harvest practitioner thinking to optimize against them. The pattern’s safety depends on who controls the documentation infrastructure.

The shift required: Communities practicing Learning in Public must own their documentation layer. This means federated, practitioner-controlled learning archives rather than relying on commercial platforms. AI tools should be transparent and local—owned by the community whose thinking is being augmented, not by external services. The pattern’s vitality in the cognitive era depends on distributing not just the learning itself, but the infrastructure that shapes how learning is seen, connected, and valued.


Section 8: Vitality

Signs of Life:

  1. Practitioners share “rough cuts” without waiting for polish. You see incomplete thoughts, real questions, experiments mid-flight. The tone is conversational, not broadcast.

  2. Mentorship emerges unsummoned. People who aren’t formally designated as teachers begin responding to shared learning with specific expertise, corrections, and encouragement. New people report feeling welcomed into thinking, not just onboarded into tasks.

  3. Patterns surface across silos. Different teams notice they’re solving the same problem and connect. Learning that would have stayed isolated suddenly propagates. You hear: “I read about your experiment and realized we could use that here.”

  4. Failure is discussed before success is announced. When something doesn’t work, practitioners document it quickly. When something succeeds, the conversation includes what was tried first and what changed.

Signs of Decay:

  1. Documentation becomes sanitized. Shared learning feels polished, safe, edited for consumption. Questions disappear. Mistakes are reframed as “learning opportunities” rather than named clearly. People are writing for an imagined audience, not thinking out loud.

  2. Engagement is one-way. People share, but no one responds. Comments vanish. The pattern becomes a broadcasting system, not a dialogue. New practitioners see the archive but feel no invitation to enter the thinking.

  3. Leadership doesn’t engage. Senior people consume but don’t share. Their learning stays hidden. This signals that real thinking is still private and only for those in power. Psychological safety erodes.

  4. The practice feels like overhead. Practitioners resent the time spent documenting. It’s framed as “knowledge management” or “compliance,” not as living work. You hear: “We’re supposed to share, but no one reads it.”

When to Replant:

Restart this pattern when your system faces a moment of rapid growth or transition. Onboarding becomes a bottleneck, or key people are leaving and taking knowledge with them. The soil is ready. Begin again with a single team, in a low-stakes domain, with explicit permission to think incompletely. Focus on the rhythm—weekly shares, monthly synthesis—rather than perfection. If decay has set in, the first intervention is always to have leadership go first, publicly, with genuine uncertainty. That resets the psychological contract.