Working Draft · Strategic Framework · May 2026

AI Governance Framework: A Strategic Architecture for Boston Globe Media Partners

A first-pass framework anchored to the SLT's three priorities, the survey baseline, and the operational gaps surfaced in the working session of May 1.

Prepared for
Shira Center, VP, Innovation & Strategic Initiatives
Prepared by
Keith Anderson, keithanderson.io
Document type
Strategic skeleton with proposed diagnostic engagement
Companion exhibit
Exhibit A — Workflow Proposal Triage Rubric

Executive Summary

This document presents the structural architecture for a Boston Globe Media Partners AI governance framework. It is intended as input to the May 4 legal working session and as a structural reference for the finished framework to be drafted internally thereafter.

Five pillars are proposed, each mapped to a specific organizational risk identified in the May 1 session: ownership and accountability of AI-assisted output, approved-tool architecture, data and IP exposure, editorial sovereignty across the publication portfolio, and the bottom-up problem intake mechanism. Three workstreams are deliberately marked as out of scope and routed to separate engagements. A companion rubric (Exhibit A) addresses the operational gap between bottom-up problem surfacing and top-down solution delivery.

Sections 01 through 03 are intended as input to the May 4 legal working session. They do not propose work Boston Globe Media Partners is already doing; the framework architecture is yours to use independently of any further engagement. Section 04 names the one piece of work that does not appear to be in motion internally and proposes a single, scoped engagement to address it.

01 · Context and Premise

The governance challenge has three audiences, not one.

Most AI governance documents are written primarily for legal review. The result is policy that satisfies counsel and is unread by the workforce it is meant to govern. The May 1 employee survey reflects this pattern: 15 percent of respondents could locate a relevant AI policy, and 60 percent reported confidence in distinguishing acceptable from unacceptable use. The gap between the two figures is the operational risk this framework is built to address.

A governance document with three audiences requires three layers. The legal layer establishes defensible language and ownership. The leadership layer establishes accountability mechanisms and escalation. The operational layer establishes what an editor or analyst should do at the point of work. The framework that follows is structured to serve all three, with the operational layer carrying the most weight by word count.

What follows is the architecture, not the finished policy. It identifies the five pillars, the open questions to resolve in the Monday session, a companion rubric that addresses the operational gap surfaced in the working session, and one piece of work proposed for external engagement.

02 · The Five Pillars

Five pillars, each mapped to a risk identified in the working session.

The senior leadership team identified three priorities: accountability, prevention of tool sprawl, and management of legal exposure. Two additional pillars are required to support these three. Without an editorial sovereignty principle, the framework cannot accommodate the publication portfolio. Without a bottom-up problem intake mechanism, the prohibition on hacked-together workflows is unenforceable in practice.

2.1

Accountability and Ownership of Output

Principle.

Every output produced with AI assistance is owned by the human who released it. The tool is not an author. The vendor is not a source. The accountable party is the employee whose name appears on the work.

Operational implication.

Editors and managers are accountable for the work their teams produce regardless of how it was produced. Employees are responsible for verifying every fact, source, and claim derived from an AI tool before that material leaves their hands. AI is a starting point. It is not a source of truth.

Specification required.

The framework will specify disclosure requirements for AI-assisted internal and external work, the escalation path when an AI-assisted output is challenged, and the handling protocol for AI-assisted content found to contain errors after release.

Open Question — Monday Session

At what threshold of AI involvement does an output require disclosure, internal or external? Does this threshold differ by publication, or is a uniform floor established at the corporate level with publications free to set higher standards?

2.2

Approved-Tool Architecture

Principle.

One tool per purpose, selected and supported centrally. Approved tools are listed on the internal AI hub. Tools outside that list are not approved for use, including company-paid versions of consumer products that individual employees may have provisioned independently.

Operational implication.

The framework establishes a defined list of approved tools by function (drafting, transcription, research, image generation, code, slide creation), a request path for new tools that routes through the AI committee, and a sunset process for tools that no longer meet the standard.

Tradeoff to be acknowledged.

Standardization and experimentation exist in tension. The senior leadership team has prioritized standardization. The framework should state this tradeoff explicitly so that employees who would otherwise build their own workflows understand the institutional rationale.

Open Question — Monday Session

What is the disposition of data already entered into unapproved tools by employees prior to the policy date? Is a remediation requirement established, or is the policy applied prospectively from the effective date?

2.3

Data, Output Ownership, and Legal Exposure

Principle.

Boston Globe Media Partners' data does not train external models. Boston Globe Media Partners' intellectual property is not a training corpus for third parties. Material entered into a prompt is governed by the same standard as material transmitted in any other external communication.

Operational implication.

Approved tools are evaluated for data-handling posture prior to approval. Sensitive content categories — sources, unpublished reporting, HR records, financial data, subscriber data — carry explicit input restrictions. Output ownership clauses in vendor agreements are read with attention to standard language permitting vendor reuse of customer outputs, which is not acceptable for a media organization.

International exposure.

Boston Globe Media Partners has readers across multiple jurisdictions. GDPR exposure exists even in the absence of a physical European presence. The framework will need to address how reader data and reader-adjacent content are handled when AI tools are involved in their processing. Specification pending.

Open Question — Monday Session

Has the data-handling posture of every AI tool currently in use been audited, including consumer accounts provisioned individually by employees? What is the institutional position on tools that train on input data even when the relevant settings are disabled?

2.4

Editorial Sovereignty Across the Portfolio

Principle.

The Boston Globe and STAT newsrooms retain sovereignty over editorial AI policy. The corporate framework establishes a floor, not a ceiling. Newsrooms may impose additional restrictions and are expected to do so.

Operational implication.

Globe AI guidelines, currently being revised on the STAT model, supersede this framework within the newsroom. Boston.com inherits the Globe's policy through existing affiliation. Boston Magazine and BSci have the option to adopt this framework, the Globe's policy, or a third path, provided their chosen approach meets or exceeds the floor established here.

Strategic rationale.

The institutional separation between editorial judgment and corporate operations is a load-bearing element of the company's reader trust. The framework is designed to preserve that separation rather than override it.

Open Question — Monday Session

In the event that Boston Magazine or BSci does not adopt a formal AI policy by a defined date, what is the default position? Is this framework the fallback, or are publications required to publish their own?

2.5

The Bottom-Up Problem Intake Mechanism

Principle.

Problems surface from the bottom of the organization. Solutions are designed and shipped from the top. This is the senior leadership team's stated direction and requires a formal intake mechanism that does not currently exist.

Operational implication.

Employees who identify a workflow that AI could improve have a defined, low-friction path to surface the proposal. The AI committee evaluates proposals against a published rubric (see Exhibit A). Decisions are returned with documented rationale: build, redirect, park, or decline. The intent is to prevent the formation of personal shadow workflows by ensuring that the legitimate channel functions reliably.

Why this pillar supports the others.

In the absence of a functioning intake process, employees default to one of two responses: they build undisclosed workflows, or they disengage. Undisclosed workflows compromise pillars 2.1, 2.2, and 2.3. Disengagement reverses the upward trend in the May 1 satisfaction data.

Open Question — Monday Session

Who owns the intake process operationally? Where does it live in the organization? What service-level expectation governs response time, and where does accountability sit when a proposal remains unresolved beyond that window?

03 · Out of Scope

Three workstreams are deliberately out of scope and routed to separate engagements.

A governance document that attempts to address operational, behavioral, and procurement matters in addition to policy will fail at all four. The following workstreams are flagged for separate treatment, with proposed routing.

Out of Scope · Rationale and Routing

Manager enablement.
The May 1 survey indicates that 42 percent of employees direct AI questions to their manager or editor. Managers across Boston Globe Media Partners have not received formal preparation for this role. This is a tooling and confidence problem rather than a policy problem; embedding manager-readiness language in a governance document does not equip a single manager. Recommended routing: a separate productized program targeting the approximately 350 people-managers across the organization, structured around a decision toolkit rather than training.

Adoption and behavior change.
The AI satisfaction score has moved from 3.3 to 3.8 over the past year. Approximately 10 percent of employees are described by leadership as substantive adopters. The gap between policy compliance and actual usage is the locus of the AI program's success or failure, and governance language does not move that gap. Recommended routing: a diagnostic engagement to establish a current-state baseline across the 850-person workforce and a phased rollout plan derived from it.

Vendor selection and tooling matrix.
"One tool per purpose" is a directive. The selection of which drafting tool, which transcription tool, which image tool is a procurement and security exercise that should not be embedded in a governance document, where revisions would require policy-level review. Recommended routing: a quarterly tools matrix maintained by the AI committee, referenced from the governance document but versioned independently.
Exhibit A · Companion Artifact

Workflow Proposal Triage Rubric

In support of Pillar 2.5, this rubric provides a structured method for evaluating workflow proposals that originate from the bottom of the organization. It is intended to enable consistent decision-making across the AI committee and to provide documented rationale to proposers.

Five criteria are assessed, each scored on a three-point scale, for a total range of five to fifteen. Score bands map to triage actions. The rubric is presented here as a first pass; calibration against approximately ten historical or anticipated proposals is recommended before activation.

Criterion · What It Measures · Scoring

Risk exposure
Data sensitivity, brand impact, and regulatory surface area in the event the workflow misfires.
1 — high (sources, unpublished work, subscriber data) · 2 — mixed · 3 — low (internal administration, public-facing only after review)

Scope and reach
The number of people affected and the cross-functional complexity of implementation.
1 — company-wide or cross-publication · 2 — single department · 3 — single team or individual

Tool overlap
Whether an approved tool already addresses the need, or could with minor configuration.
1 — approved tool already covers this · 2 — approved tool covers most · 3 — no overlap; genuinely new capability

Strategic alignment
How directly the proposal serves the senior leadership team's stated priorities for the next twelve months.
1 — unclear or off-priority · 2 — adjacent · 3 — directly serves a stated priority

Effort to value
Build effort relative to anticipated adoption and time saved across the affected population.
1 — high effort, narrow value · 2 — balanced · 3 — low effort, broad value

Triage Bands

Park
5–8
Returned to the proposer with documented rationale and the conditions under which the proposal would be reconsidered.
Redirect
9–11
Routed to an existing approved tool that addresses the need, or escalated to the proposer's manager for a workflow-level conversation.
Build
12–15
Added to the AI committee's build queue with an assigned owner and target response date. Proposer kept informed throughout.

The intent of the rubric is institutional — to provide consistent, documented decision-making that protects both the AI committee and individual proposers from ad-hoc judgment. Score bands are presented as a starting calibration and should be refined against actual proposal volume in the first quarter of operation.
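
Illustrative scoring sketch.

To make the scoring mechanics concrete, the short Python sketch below totals the five criterion scores and maps the result to the Park, Redirect, and Build bands above. It is a minimal illustration of the rubric as described in this exhibit; the class name, field names, and function are hypothetical and do not refer to any existing Boston Globe Media Partners system or tool.

  from dataclasses import dataclass

  @dataclass
  class ProposalScores:
      # Hypothetical field names mirroring the five rubric criteria; each score is 1, 2, or 3.
      risk_exposure: int        # 1 = high exposure, 3 = low exposure
      scope_and_reach: int      # 1 = company-wide, 3 = single team or individual
      tool_overlap: int         # 1 = approved tool already covers this, 3 = genuinely new capability
      strategic_alignment: int  # 1 = unclear or off-priority, 3 = directly serves a stated priority
      effort_to_value: int      # 1 = high effort / narrow value, 3 = low effort / broad value

      def total(self) -> int:
          scores = (self.risk_exposure, self.scope_and_reach, self.tool_overlap,
                    self.strategic_alignment, self.effort_to_value)
          if any(s not in (1, 2, 3) for s in scores):
              raise ValueError("Each criterion must be scored 1, 2, or 3.")
          return sum(scores)

  def triage_band(total: int) -> str:
      # Bands as defined in Exhibit A: Park 5-8, Redirect 9-11, Build 12-15.
      if total <= 8:
          return "Park"
      if total <= 11:
          return "Redirect"
      return "Build"

  # Example: a low-risk, single-team proposal with no existing tool overlap.
  proposal = ProposalScores(3, 3, 3, 2, 2)
  print(proposal.total(), triage_band(proposal.total()))  # 13 Build

In practice the band thresholds, like the score bands themselves, would be recalibrated against actual proposal volume in the first quarter of operation.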

04 · Recommended Next Phase

The one piece of work that does not appear to be in motion internally.

Boston Globe Media Partners has the components of an effective AI program. A clear-eyed VP leading the function. A CEO whose deliberate posture provides runway. A satisfaction trend moving in the right direction. An AI committee with executive sponsorship. A governance framework being shaped through the working session this document accompanies. Most of the visible work is already in motion or being driven by capable people inside the organization.

One piece does not appear to be in motion. It is the piece that determines whether the framework above operates in practice or sits on the intranet unread.

Adoption across the 850-person workforce is not currently measured at the granularity required to plan the next phase. The May 1 survey provides a useful surface read; it does not segment the workforce by role, tenure, function, or readiness. The 10 percent of employees described as substantive adopters are not characterized in detail. The 60 percent who report confidence in distinguishing acceptable from unacceptable use are not validated against observed behavior. Decisions about where to invest next are being made without a current-state baseline that holds up to scrutiny from the senior leadership team or chief counsel.

This is the one workstream proposed for engagement: an external diagnostic to establish that baseline, conducted by someone with no incentive to confirm the existing internal hypothesis.

A.

Adoption Diagnostic and Phased Rollout Plan

Proposed for Engagement

A current-state baseline of AI adoption across the 850-person workforce, segmented at the granularity required to plan the next twelve to eighteen months. The diagnostic is positioned to answer the question Shira named in the working session: how to move from the current state to materially higher adoption without relying on assumptions about where people actually are.

Scope
  • Workforce segmentation across role, function, tenure, and publication
  • Twenty to thirty structured interviews across segments, weighted toward the populations least visible to the AI committee today
  • Extension and re-analysis of the May 1 survey instrument with additional behavioral and friction-point items
  • Identification of the specific blockers and enablers in each segment
  • Phased rollout plan with sequencing recommendations and target metrics
  • Read-out session with Shira and the AI committee; optional read-out for the SLT
Outcomes
  • An evidence-based answer to the question of where to invest next, grounded in the actual state of the workforce rather than executive-level assumptions
  • External validation Shira can use to inform SLT decisions on pace and direction
  • A defensible plan for moving the satisfaction and adoption trends materially over the next four quarters
  • Clear identification of the populations where governance language alone will not be sufficient and where active intervention is required
Timeline
Four weeks from kickoff
Investment
$30,000 · fixed scope, fixed fee
Working cadence
Kickoff session; weekly check-ins with Shira; interview window of two to three weeks; final read-out session
Why this work, and not the rest

The governance framework, the manager translation, the rubric calibration, and the rollout sequencing are work Boston Globe Media Partners can credibly do internally. They are not proposed here. The diagnostic is the one engagement where being inside the organization is a structural disadvantage rather than an advantage; the value comes from a perspective that has no internal allegiance to confirm.

05 · Methodology and How to Engage

How this becomes a yes.

The architecture in Sections 01 through 03 is intended to anchor the May 4 legal working session. It is yours to use regardless of how the proposed engagement is received; its value to the Monday session is independent of any further work.

For the legal session itself, three points of guidance:

  1. Use the five pillars to organize the discussion. The senior leadership team's three stated priorities map primarily to Pillars 2.1 and 2.3; Pillars 2.4 and 2.5 are likely to emerge from the legal team's own concerns.
  2. Use the open questions in each pillar to test the legal team's positions. Several are intentionally uncomfortable. The framework benefits from the answers given under that pressure.
  3. The Workflow Proposal Triage Rubric in Exhibit A is held in reserve. Introduce it in the legal session only if the bottom-up problem intake mechanism is raised. If raised, it serves as the structural answer.

If the diagnostic in Section 04 is the right next step, a thirty-minute conversation in June following Shira's travel period is sufficient to align on kickoff timing. The proposed terms are valid through July 31, 2026, and are not contingent on a near-term decision.