SIGNAL/NOISE
AN INTERACTIVE PERCEPTION ENGINE
Every institutional change process involves two parallel realities. In one, there is a message — content, evidence, proposals. In the other, there is a processing system — the institutional structure that receives, filters, amplifies, and recodes that message before anyone consciously evaluates it.
Traditional change management teaches you to optimize the message. This tool asks a different question: what if the processing system is the more important variable?
The organizing insight comes from Lawrence Lessig's work on modalities of regulation — the idea that architecture constrains behavior before anyone consciously decides, often more powerfully than formal rules. Lessig was writing about code and cyberspace, but the principle applies to any designed system: the structure regulates. In an institution, the architecture is structural position — who occupies what role, and how that positioning shapes what gets heard, what gets deferred, and what never reaches the room.
This tool synthesizes several bodies of research that are rarely taught together:
Debra Meyerson on tempered radicalism — how people inside institutions push for change without the authority to mandate it.
Chris Argyris on organizational defensive routines — the self-sealing mechanisms institutions use to prevent learning.
Karl Weick on sensemaking — how people construct meaning from ambiguous signals, and how institutional context shapes what counts as "making sense."
And standpoint epistemology — the insight that where you stand in a structure determines what you can see, and what you can credibly say.
The exercise applies these frameworks to a concrete scenario: what happens when identical AI adoption proposals enter an organization attributed to people in different structural positions.
Work through each phase at your own pace. Your responses are stored locally in your browser only — nothing is transmitted. There are no grades and no model answers. The value is in the noticing.
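For readers curious what "stored locally in your browser only" means in practice, here is a minimal sketch of local-only persistence. The storage key and data shape are hypothetical inventions for illustration, not the app's actual schema; the point is that everything goes through the browser's localStorage and nothing performs a network call.

```typescript
// Local-only response persistence (sketch). "signal-noise-responses" is a
// hypothetical key name, not necessarily the app's real one.
type Responses = Record<string, string>;

const KEY = "signal-noise-responses";

// Fall back to an in-memory map when localStorage is unavailable
// (e.g., in a test runner outside the browser).
const memory = new Map<string, string>();
const ls: any = (globalThis as any).localStorage;

function read(k: string): string | null {
  return ls ? ls.getItem(k) : memory.get(k) ?? null;
}

function write(k: string, v: string): void {
  if (ls) ls.setItem(k, v);
  else memory.set(k, v);
}

export function saveResponse(phase: string, answer: string): void {
  const all: Responses = JSON.parse(read(KEY) ?? "{}");
  all[phase] = answer;
  write(KEY, JSON.stringify(all)); // data never leaves the device
}

export function loadResponses(): Responses {
  return JSON.parse(read(KEY) ?? "{}");
}
```

Because there is no transmission step, clearing the browser's storage erases the only copy of your responses.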
Rate These Proposals
You are about to read three proposals for AI adoption at a legal organization. Each one includes specific data, cost projections, and regulatory grounding. Your task is simple: rate each proposal on two dimensions — credibility and urgency — and note your gut emotional response.
Why we ask about emotion: Credibility and urgency are the metrics you'd put in a memo. But the research on sensemaking (Weick, 1995) suggests that emotional response is often the first signal the processing system produces — before conscious evaluation begins. Naming it makes it available for examination later.
One more thing: you've been randomly assigned a version of these proposals. In your version, the proposals were submitted by a specific person within the organization. Read them as you would if they arrived in your inbox attributed to that person. Rate quickly, from instinct, not extended analysis.
In your version, these proposals were submitted by:
What You Didn't Know
Every participant rated the same three proposals. The text was identical. The evidence was identical. The only difference was who the system told you had submitted them.
You saw them from:
Other participants saw the same proposals attributed to different structural positions.
The bar charts below show representative data modeled on the research literature on structural perception bias. These are not your live cohort's numbers — they are constructed composites that reflect the consistent patterns documented across decades of research on how institutional position shapes the reception of identical content.
No live polling occurred. We reviewed the empirical literature on status characteristics theory (Berger, Cohen & Zelditch, 1972; Ridgeway, 2001) and organizational sensemaking (Weick, 1995), then generated numerical values that faithfully represent the documented gradient: higher-authority senders receive higher credibility and urgency ratings for identical content. The specific numbers are illustrative. The pattern is robust and well-replicated. Think of this as a demonstration model, not a dataset.
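A demonstration model like the one described above can be sketched in a few lines. The baseline numbers and jitter below are illustrative inventions chosen to reproduce the documented gradient (higher-authority senders rated higher on identical content); they are not figures taken from the cited studies.

```typescript
// Sketch of a constructed-composite dataset with a status gradient.
// All numeric values are illustrative assumptions, not study data.
type Role = "Managing Partner" | "Legal Ops Manager" | "Second-Year Associate";

// Higher structural authority -> higher baseline rating (1-10 scale).
const baseline: Record<Role, { credibility: number; urgency: number }> = {
  "Managing Partner":      { credibility: 8.4, urgency: 7.9 },
  "Legal Ops Manager":     { credibility: 6.1, urgency: 5.8 },
  "Second-Year Associate": { credibility: 4.7, urgency: 4.2 },
};

// Deterministic jitter keeps the demo reproducible while making the bars
// look like survey aggregates rather than hand-picked constants.
function jitter(seed: number): number {
  return Math.sin(seed) * 0.4; // bounded in [-0.4, 0.4]
}

export function demoRatings(role: Role, proposalIndex: number) {
  const b = baseline[role];
  return {
    credibility: +(b.credibility + jitter(proposalIndex)).toFixed(1),
    urgency:     +(b.urgency + jitter(proposalIndex + 1)).toFixed(1),
  };
}
```

Because the jitter is identical for every role at a given proposal index, the gradient between roles is preserved across all three proposals, which is the only property the demonstration needs.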
The content was identical. The structural position of the attributed sender changed what people saw.
Notice what happens in the data: the Managing Partner's proposal reads as "leadership." The Second-Year Associate's identical proposal may read as "overstepping." The Legal Ops Manager's as "not their lane." The same words, the same evidence — but the processing system recodes the signal before conscious evaluation begins. How much of that recoding would be visible to the person doing it?
The signal didn't change. The noise did.
The Routines You Recognize
Chris Argyris spent decades studying a category of institutional behavior he called organizational defensive routines — actions or policies that prevent individuals or segments of an organization from experiencing embarrassment or threat. They operate below conscious intention. They are self-sealing: the mechanisms that block learning also block examination of the mechanisms themselves.
What makes these routines worth examining isn't that people deploy them cynically. It's that they tend to feel like good judgment. The person invoking a defensive routine typically experiences it as being thoughtful, cautious, or principled. That's part of what makes them so durable.
A note on experience: some of you have spent years inside institutions. Others are encountering these dynamics for the first time, or primarily through classrooms, clinics, and student organizations. Both perspectives are useful here. Defensive routines operate in any structure with hierarchy and stakes — a law firm, a clinic, a student government, a summer position. If you haven't encountered a particular statement below, that's worth noticing too. Ask yourself: is that because it didn't happen, or because I didn't yet have the vocabulary for it?
For each statement below, mark whether you have heard a version of it in any professional or institutional setting, and whether you have used a version of it yourself. The exercise depends on honesty — your responses stay in your browser and are never transmitted.
What the Room Knows
Below: representative aggregate data modeled on published research on organizational defensive routines in professional settings. The percentages reflect the range documented in Argyris's studies (1990, 2010) and subsequent research on defensive reasoning in legal and professional service organizations. They are not your cohort's live data — they represent the consistent pattern that emerges when professionals are asked to reflect honestly on institutional behavior.
Argyris argued that organizational defensive routines operate through two mechanisms: cover-up (concealing problematic information) and organizational pretense (acting as if the problem doesn't exist). If he's right that pretense is the more corrosive of the two — because it makes the cover-up invisible even to those performing it — what does that suggest about the responses you just reviewed? How many of them sound like substantive evaluation?
Consider what happens when a change proposal enters an institution and triggers one of these routines. The proposal may never be evaluated on its merits at all. Instead, it gets processed through the routine, and the output is a recoded signal — "not our core mission," "we need to be thoughtful," "they don't understand how things work here." If that recoding feels like evaluation to the person performing it, how would you know the difference?
This raises a question worth sitting with: what is the relationship between the defensive routines in this phase and the structural perception patterns in Phase 2? If the processing system filters by who's talking, and the defensive routines generate the language that justifies the filtering — is there a point where those two mechanisms become the same mechanism?
Where the Frameworks Break
In Phases 2 and 4, you saw two things: first, that identical content is processed differently depending on who the system thinks sent it; second, that institutions have a repertoire of responses that feel like evaluation but may function as defense. Both of these dynamics operate below conscious intention.
Now the question becomes: do the standard models for managing institutional change account for any of this?
The three frameworks below — Kotter, Rogers, Bridges — are among the most widely taught models of organizational change. Each captures something real. Each also has a blind spot, and it may be roughly the same one: they model change as if the people proposing it and the people receiving it occupy symmetrical positions.
Apply each framework to the scenario you just experienced — identical proposals, differential reception — and see where the model's map no longer matches the territory. There are no right answers here. The exercise is diagnostic, not evaluative.
What Does Rule 1.1 Demand?
Nearly 70% of legal professionals surveyed now use general-purpose AI tools for work (8am 2026 Legal Industry Report). In the same survey, 43% of respondents say their firm has no formal AI policy and no plans to create one. Fewer than half of legal organizations provide any form of AI training to staff (modeled composite across 2025–2026 industry surveys). And 74% of legal aid professionals report using AI in their work (Everlaw/NLADA 2025, survey of 112 legal aid professionals) — while having the fewest resources for governance infrastructure.
This is the governance gap: the structural mismatch between the speed of technology adoption and the institution's capacity for deliberate decision-making about values, risk, and accountability.
We are in California. California Rule of Professional Conduct 1.1 requires competence, and Comment [1] to that rule now specifies that the duty includes keeping abreast of "the benefits and risks associated with relevant technology." ABA Formal Opinion 512 (2024) places this duty on individual attorneys — regardless of what their institution has or hasn't decided. In most disciplinary contexts, it is the lawyer of record — not the vendor or the organization — who bears primary responsibility for filings and representations.
This raises a question that the rules don't quite answer: who gets to define what "AI competence" looks like within a given institution? If the people responsible for defining professional standards are still developing their own AI fluency, the obligation exists in the abstract but the authority structure that interprets it may not yet be equipped to give it meaning. And the people most likely to push for rigorous standards may be the people the institution is least practiced at hearing — because structural position shapes whose advocacy reads as leadership and whose reads as overreach. This is close to Lessig's point about architecture as regulation: the institution's own structure may be shaping what counts as "competent" before anyone consciously applies the rule.
As lawyers, you will be inside institutions for most of your careers. Sometimes you'll be the person with the proposal. Sometimes you'll be the person on the committee that considers it. Sometimes you'll be the person who notices the processing system operating and has to decide what to do with that awareness.
The competence question isn't only whether you can use AI — California Rule 1.1 already asks that. The harder question is whether you can see what happens to the people who bring it into the room — and what your obligations might be when you do.
What Would You Build?
This exercise surfaced a set of dynamics — structural perception bias, defensive routines, the governance gap — that resist tidy solutions. They are ongoing design problems. Your capstone project for this course is an opportunity to take one of those design problems and make something with it.
These projects will be presented at our final session on April 10, 2026. Think of your capstone as a portfolio piece — something you'd want to show a future employer, a clinic supervisor, or a bar committee to demonstrate not just that you understand AI in legal practice, but that you can think architecturally about institutional change.
A note on method: you are encouraged to use LLMs and AI tools as collaborators in building your capstone. Use them to draft, critique, model, simulate, prototype. The point is not to produce the project "unaided" — the point is to produce something that demonstrates critical engagement with the dynamics this exercise revealed. If you use an LLM to help build your project, document how. That documentation is itself part of the work.
Below are some starting points. These are not assignments. They're invitations. The best capstone will be one you conceive yourself, grounded in something you noticed during this exercise that you can't stop thinking about.
Start here: audit this app. You've just been through an exercise designed to make a processing system visible. How well does it do that? What assumptions does it make about you — about your experience, your structural position, your willingness to play along? What does it reveal well, and what does it obscure? What choices did its designers make that you would make differently? This app is itself an instance of what it's studying: a designed architecture that shapes what you see and when. Your capstone could begin with that critique and build outward. Design a better tool, protocol, or framework that makes an organization's processing system visible to itself. What would it look like if an institution could see its own defensive routines in real time?
Draft an AI governance policy that accounts for what you learned here. Most governance frameworks assume that the people writing the policy have the authority and competence to do so. What does a governance framework look like when it's designed by people who understand that the institution's own processing system may be the primary obstacle to good governance?
Build your own version of this exercise — adapted to a different context, a different profession, or a different structural asymmetry. What would a SIGNAL/NOISE engine look like for medicine, for journalism, for a city council? What structural positions would you assign? What proposals would you write? What would the reveal teach?
Write an appellate brief, a policy memo, or a bar opinion that takes the governance gap seriously as a legal problem. If Rule 1.1 requires individual competence but institutions may be structurally ill-equipped to define or evaluate that competence, what is the legal argument? Who has standing? What is the remedy?
Something you see that we didn't. A direction we haven't imagined. This placeholder exists because the best capstone projects tend to come from the student, not the syllabus. If you're working on something that doesn't fit the categories above, bring it. We'll build the submission pathway for it.
The institutions we serve are not the buildings or the org charts. They are the promises the buildings were built to keep.
— Z. & Computer